LLM4EA 2025

Hybrid Intelligence Challenge — Evolving with Language

Join the global competition to shape the future of optimization by combining the strengths of Large Language Models and Evolutionary Algorithms.

Register Now

Overview and Theme

The LLM4EA 2025 Challenge is a pioneering competition that seeks to explore and define the future of hybrid algorithm design by uniting two powerful paradigms: Large Language Models (LLMs) and Evolutionary Algorithms (EAs). Both fields have evolved rapidly: EAs have demonstrated robust problem-solving across a wide range of domains, while LLMs have revolutionized reasoning, synthesis, and knowledge representation. This challenge aims to harness their synergy to unlock new frontiers in intelligent optimization.

Traditional EA design relies heavily on expert-driven heuristics, trial-and-error parameter tuning, and handcrafted operators. However, these design processes can be abstract, time-consuming, and difficult to generalize across problems. LLMs offer a transformative advantage by introducing natural language reasoning, code generation, prompt engineering, and adaptive decision-making into the optimization workflow. When effectively integrated, LLMs can act as a co-pilot, designer, or even a dynamic component of an EA system.

This challenge embraces the human-in-the-loop and AI-in-the-loop paradigms, inviting researchers to rethink how optimization tools are developed. It opens the door to LLM-guided metaheuristics that can self-adapt, generate operators, respond to problem features, or even evolve their own strategies over time. Participants are encouraged to innovate not only through performance but through explainable and generalizable designs that highlight the potential of intelligent hybrid systems.

The competition features two complementary tracks. Track A emphasizes the creative and conceptual side: how LLMs can drive or support the design of novel EAs, with a focus on methodology and explainability over performance. Track B tests the practical capabilities of these hybrid systems by applying them to established benchmark functions from the CEC 2017 suite, evaluating their efficiency, robustness, and competitiveness.

In essence, LLM4EA 2025 is more than a competition: it is a collaborative experiment in building AI systems that can design better AI systems, setting the stage for a new era of optimization research powered by machine reasoning, language understanding, and evolutionary intelligence.

Objectives

Scope & Tracks

    Track A: LLM-Centric Hybrid Algorithm Design (Poster Presentation Track)

    • Participants submit an algorithm or conceptual workflow where an LLM contributes significantly to design, strategy generation, reasoning, or adaptive components.
    • Evaluation through poster presentations.
    • Participants explain the novelty, the design process, and how LLMs are used in designing the EA.

    The aim is to encourage creativity and advancement in the design of intelligent EAs where LLMs play an integral, explanatory, or generative role in the algorithmic workflow. This track invites participants to explore how LLMs can be used to rethink or co-create EAs in novel and intelligent ways. Submissions should showcase how LLMs assist in the design, adaptation, or generation of EA components and explain the rationale behind their design choices.

    This track emphasizes conceptual novelty, technical creativity, and explainability over raw performance. It's intended for researchers who wish to demonstrate original thinking, frameworks, or architectures, even if they are in early-stage development or lack competitive benchmarking results.
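
    As a rough illustration of the kind of workflow this track invites (a sketch only, not an official template; the query_llm helper, its prompt, and the canned operator below are hypothetical), an LLM can be asked to emit a variation operator as code, which is then compiled and dropped into an otherwise ordinary EA loop:

```python
import numpy as np

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call -- replace with your own API client.
    Returns a canned response here so the sketch runs offline."""
    return (
        "def mutate(x, lower, upper, rng):\n"
        "    # Gaussian perturbation scaled to 10% of the search range\n"
        "    y = x + rng.normal(0.0, 0.1 * (upper - lower), size=x.shape)\n"
        "    return np.clip(y, lower, upper)\n"
    )

# Ask the LLM to design a variation operator, then turn its code into a callable.
prompt = ("Write a Python function mutate(x, lower, upper, rng) that perturbs a "
          "real-valued vector x within [lower, upper] for a (mu+lambda) EA.")
namespace = {"np": np}
exec(query_llm(prompt), namespace)   # validating/sandboxing LLM-generated code is up to you
mutate = namespace["mutate"]

# Minimal (mu+lambda) EA on a toy sphere function, using the LLM-generated operator.
rng = np.random.default_rng(0)
lower, upper, dim, mu, lam = -100.0, 100.0, 10, 10, 20
sphere = lambda x: float(np.sum(x ** 2))
pop = rng.uniform(lower, upper, size=(mu, dim))
fit = np.array([sphere(p) for p in pop])
for gen in range(200):
    parents = pop[rng.integers(0, mu, size=lam)]
    offspring = np.array([mutate(p, lower, upper, rng) for p in parents])
    off_fit = np.array([sphere(o) for o in offspring])
    merged, merged_fit = np.vstack([pop, offspring]), np.concatenate([fit, off_fit])
    keep = np.argsort(merged_fit)[:mu]          # (mu+lambda) truncation selection
    pop, fit = merged[keep], merged_fit[keep]
print("best fitness:", fit.min())
```

    In a Track A submission, the interesting part is not the toy loop itself but the reasoning the LLM supplies: the prompts, the intermediate design decisions, and how the LLM's output is interpreted should all be documented.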

    Track B: LLM-Proposed or LLM-Refined Algorithm on CEC 2017 (Benchmark Performance Track)

    • Participants use LLMs to propose or refine EAs and test them on CEC 2017 bound-constrained optimization problems.
    • Emphasis is on empirical performance while ensuring a clear role of the LLM in algorithm design or adaptation.
    • Evaluation through benchmark performance and reproducibility.
    • LLM involvement must be documented and justified.

    The aim is to evaluate the practical performance and competitive advantage of LLM-assisted EAs by applying them to standardized, real-parameter optimization benchmarks. This track focuses on advancing algorithmic effectiveness through intelligent, LLM-integrated design and decision-making processes.

    Participants must demonstrate that the LLM meaningfully contributes to the EA's development (e.g., through parameter control, variation operator tuning, restart strategies, or diversity mechanisms) and quantify its impact on optimization performance.
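
    For example (a non-authoritative sketch; the ask_llm_for_parameters helper, the state summary, and the control policy below are assumptions, not competition requirements), an LLM could periodically review run statistics, suggest new DE control parameters, and have every suggestion logged so that its impact on performance can be quantified:

```python
import json
import numpy as np

def ask_llm_for_parameters(state: dict) -> dict:
    """Hypothetical LLM call. A real entry would send `state` as a prompt and
    parse the model's reply; this stand-in mimics one plausible answer."""
    # e.g. the LLM might raise F when the search stagnates and lower it otherwise
    return {"F": 0.9 if state["stagnation"] > 5 else 0.5, "CR": 0.9}

rng = np.random.default_rng(1)
dim, pop_size, lower, upper = 10, 30, -100.0, 100.0
sphere = lambda x: float(np.sum(x ** 2))        # stand-in objective
pop = rng.uniform(lower, upper, size=(pop_size, dim))
fit = np.array([sphere(p) for p in pop])
F, CR, best_prev, stagnation, log = 0.5, 0.9, float(fit.min()), 0, []

for gen in range(300):
    if gen % 20 == 0:                           # every 20 generations, consult the LLM
        state = {"gen": gen, "best": float(fit.min()),
                 "diversity": float(pop.std()), "stagnation": stagnation}
        params = ask_llm_for_parameters(state)
        F, CR = params["F"], params["CR"]
        log.append({"state": state, "params": params})   # audit trail of LLM decisions
    for i in range(pop_size):                   # standard DE/rand/1/bin step
        idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = np.clip(a + F * (b - c), lower, upper)
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True
        trial = np.where(cross, mutant, pop[i])
        tf = sphere(trial)
        if tf <= fit[i]:
            pop[i], fit[i] = trial, tf
    stagnation = stagnation + 1 if fit.min() >= best_prev else 0
    best_prev = min(best_prev, float(fit.min()))

print("best:", fit.min())
print(json.dumps(log[-1], indent=2))            # document the LLM's role and decisions
```

    Quantifying the LLM's impact can then be as simple as re-running the same loop with the LLM calls disabled (fixed F and CR) and comparing the two result logs.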

    Submitted algorithms will be tested by the participants themselves on the 30 test problems of the CEC 2017 Bound-Constrained Optimization Suite, a rigorous benchmark set comprising multimodal, hybrid, and composition functions, in 30 dimensions (30D). Performance should be evaluated according to the evaluation criteria given in the CEC 2017 technical report, and the results should be compared with four top-performing EAs: L-SRDE, EBOwithCMR, RDE, and mLSHADE-RL. Submissions must include a transparent report documenting the role and contributions of the LLM.
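
    A minimal sketch of one possible testing and logging workflow is given below. It assumes the commonly used CEC 2017 protocol of 51 independent runs per function with best/worst/median/mean/std statistics; the exact evaluation budget, criteria, and official function implementations must be taken from the CEC 2017 technical report, and both cec2017_stub and my_llm_assisted_ea here are placeholders only:

```python
import csv
import numpy as np

def cec2017_stub(func_id: int, x: np.ndarray) -> float:
    """Placeholder objective -- NOT the real benchmark. Real entries should call the
    official CEC 2017 implementations referenced in the technical report."""
    return float(np.sum(x ** 2)) + 100.0 * func_id

def my_llm_assisted_ea(objective, dim, max_fes, seed):
    """Stand-in for a participant's algorithm: plain random search in [-100, 100]^dim."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(max_fes):
        best = min(best, objective(rng.uniform(-100.0, 100.0, dim)))
    return best

dim, runs = 30, 51            # 30D track; 51 runs per function is the usual CEC protocol
max_fes = 1_000               # demo budget only -- the CEC 2017 report prescribes 10000*D
with open("results_30D.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["func", "best", "worst", "median", "mean", "std"])
    for func_id in range(1, 31):                       # the 30 CEC 2017 test problems
        errs = np.array([my_llm_assisted_ea(lambda x: cec2017_stub(func_id, x),
                                            dim, max_fes, seed) for seed in range(runs)])
        writer.writerow([func_id, errs.min(), errs.max(),
                         np.median(errs), errs.mean(), errs.std()])
```

    Result files produced this way (one row of statistics per function) can also double as the benchmark result files required with the submission.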

Rules & Guidelines

  • Each team may submit one entry per track.
  • Submissions must include code, a short report (max 4 pages), and performance logs.
  • LLMs must play an active role in the solution pipeline (e.g., operator selection, prompt generation, or evaluation).
  • Plagiarism or rule violations may lead to disqualification.

Evaluation Criteria

Submission Requirements

    For Track A

  • A 4-page technical report detailing:
    • The proposed EA and details on LLM integration
    • Conceptual architecture and motivation
    • Preliminary results on the CEC 2017 benchmark
    • Explainability features or how the LLM output is interpreted
  • Reproducible code
  • Prompts or training data (if custom or fine-tuned LLMs are used)

    For Track B

  • A 4-page technical report that includes:
    • The proposed EA, system overview, and details on LLM integration
    • Conceptual architecture and motivation
    • Benchmark setup and test protocol
    • Results tables and comparisons with L-SRDE, RDE, and mLSHADE-RL (the 1st-, 2nd-, and 4th-place winners of the CEC 2024 competition), and with EBOwithCMR (the winner of the CEC 2017 competition)
  • Complete source code with README
  • Benchmark result files (CSV format)

Timeline

  • Launch: May 19, 2025
  • Registration Deadline: June 5, 2025
  • Submission Deadline: June 10, 2025
  • Winners Announced: June 20, 2025

Prizes

  • The top three teams will receive certificates.

Registration for Competition

Competition Registration Portal

Leaderboard (Coming Soon)

The top-ranked teams and their scores will be displayed here after the evaluation phase.

Submission

Send submissions to evoml@nitj.ac.in

Organizer & Contact Info


Department of Mathematics and Computing, Dr. B. R. Ambedkar National Institute of Technology, Jalandhar, Punjab, India.

For inquiries, contact: evoml@nitj.ac.in

© 2025 LLM4EA Challenge. All rights reserved.