Tags: LLM Research, Algorithms, Heuristics, Bias Analysis

LLM Bias in Algorithm Evolution

Researching why LLMs drift toward conservative local optimization inside heuristic-search and algorithm-evolution frameworks.

Overview

This research explores how large language models behave when they are used inside frameworks that iteratively improve heuristic algorithms. The central question is why these systems often make only small, low-risk edits instead of the bold changes that could unlock better solutions.

The work combines empirical experiments with framework analysis to understand where conservatism enters the loop and how evaluation design influences exploration.
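The loop under study can be sketched in miniature. Everything below is illustrative rather than taken from the project: `evaluate` is a toy objective, and `propose_edit` is a hypothetical stand-in for the LLM that models exactly the behavior the research examines, small low-risk parameter tweaks accepted only when they improve the score.

```python
import random

def evaluate(params):
    # Toy objective: score peaks at 0.0 when both parameters reach 1.0.
    return -abs(params["greediness"] - 1.0) - abs(params["restart_rate"] - 1.0)

def propose_edit(params, step=0.05):
    # Stand-in for an LLM proposal: a single small, conservative tweak.
    key = random.choice(list(params))
    new = dict(params)
    new[key] = round(new[key] + random.choice([-step, step]), 3)
    return new

def evolve(params, generations=200, seed=0):
    random.seed(seed)
    best, best_score = params, evaluate(params)
    for _ in range(generations):
        candidate = propose_edit(best)
        score = evaluate(candidate)
        if score > best_score:  # greedy acceptance: keep only improvements
            best, best_score = candidate, score
    return best, best_score

best, score = evolve({"greediness": 0.5, "restart_rate": 0.5})
```

Because acceptance is strictly greedy and each proposal is a tiny step, this sketch converges on whatever local optimum is nearest, which is the conservative dynamic the research aims to explain.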

Key Features

Bias Analysis

Examines the conservative behaviors LLMs exhibit when asked to evolve heuristic algorithms.

Local Optima Problem

Studies why models settle into low-impact iteration loops instead of bold structural changes.

Evolutionary Frameworks

Uses ShinkaEvolve and OpenEvolve as test beds for iterative algorithm generation.

Hypothesis Validation

Designs experiments to validate where bias enters the evolution loop and how to mitigate it.
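One simple way to make such experiments concrete is to log every proposed edit and compare acceptance rates by edit size. The log entries below are hypothetical, invented purely to illustrate the measurement; the field names (`lines_changed`, `accepted`) are assumptions, not the project's actual schema.

```python
# Hypothetical proposal log from one evolution run: each entry records how
# many lines the proposed edit changed and whether it improved fitness.
proposals = [
    {"lines_changed": 2, "accepted": True},
    {"lines_changed": 3, "accepted": True},
    {"lines_changed": 1, "accepted": True},
    {"lines_changed": 40, "accepted": False},
    {"lines_changed": 35, "accepted": False},
    {"lines_changed": 4, "accepted": True},
]

def rate(flags):
    # Fraction of proposals that were accepted.
    return sum(flags) / len(flags)

SMALL = 10  # threshold (in changed lines) separating small from large edits
small = [p["accepted"] for p in proposals if p["lines_changed"] <= SMALL]
large = [p["accepted"] for p in proposals if p["lines_changed"] > SMALL]
print(rate(small), rate(large))
```

A large gap between the two rates would be evidence that the evaluation loop itself, not only the model's prompting, penalizes bold structural edits.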

Technologies Used

Python, ShinkaEvolve, OpenEvolve, OpenAI API, Anthropic API, NumPy, Genetic Algorithms

Challenges Overcome

  • LLMs often avoid high-impact algorithm modifications.
  • Models become trapped in local optimization patterns.
  • Bias in the evaluation loop makes exploration harder to sustain.
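One generic way to keep exploration alive despite these pressures, offered here as a standard technique rather than the project's own mitigation, is a Metropolis-style acceptance rule from simulated annealing: worsening edits are sometimes accepted, with a probability that shrinks as the temperature decays.

```python
import math
import random

def metropolis_accept(delta, temperature, rng):
    # Always accept improvements; accept a worsening move (delta < 0) with
    # probability exp(delta / temperature), so exploration is common at high
    # temperature and fades as the temperature is lowered.
    if delta >= 0:
        return True
    return rng.random() < math.exp(delta / temperature)

rng = random.Random(0)
```

Under this rule, the greedy `if score > best_score` check becomes a tunable dial: high temperature tolerates bold, temporarily worse edits, while low temperature recovers conservative hill-climbing.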

Outcomes & Impact

  • Identified recurring bias patterns in algorithm evolution.
  • Developed a working hypothesis for conservative behavior in iterative search.
  • Continuing experiments to test mitigation strategies.