Join the global competition to shape the future of optimization by combining the strengths of Large Language Models and Evolutionary Algorithms.
The LLM4EA 2025 Challenge is a pioneering competition that seeks to explore and define the future of hybrid algorithm design by uniting two powerful paradigms: Large Language Models (LLMs) and Evolutionary Algorithms (EAs). Both fields have evolved rapidly: EAs have demonstrated robust problem-solving across a wide range of domains, while LLMs have transformed reasoning, synthesis, and knowledge representation. This challenge aims to harness their synergy to unlock new frontiers in intelligent optimization.
Traditional EA design relies heavily on expert-driven heuristics, trial-and-error parameter tuning, and handcrafted operators. However, these design processes can be abstract, time-consuming, and difficult to generalize across problems. LLMs offer a transformative advantage by introducing natural language reasoning, code generation, prompt engineering, and adaptive decision-making into the optimization workflow. When effectively integrated, LLMs can act as a co-pilot, designer, or even a dynamic component of an EA system.
This challenge embraces the human-in-the-loop and AI-in-the-loop paradigms, inviting researchers to rethink how optimization tools are developed. It opens the door to LLM-guided metaheuristics that can self-adapt, generate operators, respond to problem features, or even evolve their own strategies over time. Participants are encouraged to innovate not only through performance but through explainable and generalizable designs that highlight the potential of intelligent hybrid systems.
The competition features two complementary tracks. Track A emphasizes the creative and conceptual side: how LLMs can drive or support the design of novel EAs, with a focus on methodology and explainability over performance. Track B tests the practical capabilities of such hybrid systems by applying them to established benchmark functions from the CEC 2017 suite, evaluating their efficiency, robustness, and competitiveness.
In essence, LLM4EA 2025 is more than a competition: it is a collaborative experiment in building AI systems that can design better AI systems, setting the stage for a new era of optimization research powered by machine reasoning, language understanding, and evolutionary intelligence.
Track A aims to encourage creativity and advancement in the design of intelligent EAs in which LLMs play an integral, explanatory, or generative role in the algorithmic workflow. This track invites participants to explore how LLMs can be used to rethink or co-create EAs in novel and intelligent ways. Submissions should showcase how LLMs assist in the design, adaptation, or generation of EA components and explain the rationale behind their design choices.
This track emphasizes conceptual novelty, technical creativity, and explainability over raw performance. It is intended for researchers who wish to demonstrate original thinking, frameworks, or architectures, even if these are at an early stage of development or lack competitive benchmarking results.
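To make this concrete, here is a minimal, runnable sketch of one such workflow: an LLM is asked, in natural language, to design a mutation operator, and the generated code is integrated into the EA. The `ask_llm` helper is a hypothetical placeholder that returns a canned response; a real entry would call an actual LLM API and validate its output before use.

```python
# A minimal sketch of an LLM-assisted operator-design loop (Track A style).
# `ask_llm` is a hypothetical stub: in a real entry it would query an LLM;
# here it returns a canned response so the sketch runs offline.
import random


def ask_llm(prompt: str) -> str:
    """Hypothetical LLM stub: returns Python source for a mutation operator."""
    return (
        "def mutate(x, sigma=0.1):\n"
        "    # Gaussian perturbation of each coordinate\n"
        "    return [xi + random.gauss(0.0, sigma) for xi in x]\n"
    )


# Ask the (stubbed) LLM to design an operator from a natural-language spec.
source = ask_llm("Write a Python mutation operator for a real-valued EA.")

# Integrate the generated operator into the EA workflow.
namespace = {"random": random}
exec(source, namespace)  # in practice, sandbox and validate generated code
mutate = namespace["mutate"]

parent = [0.5, -1.2, 3.0]
child = mutate(parent)
print("parent:", parent)
print("child: ", child)
```

The same pattern extends to crossover operators, selection schemes, or entire algorithm templates, with the design rationale captured from the LLM's own explanation.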
Track B aims to evaluate the practical performance and competitive advantage of LLM-assisted EAs by applying them to standardized real-parameter optimization benchmarks. This track focuses on advancing algorithmic effectiveness through intelligent, LLM-integrated design and decision-making.
Participants must demonstrate that the LLM meaningfully contributes to the EA's development, such as through parameter control, variation operator tuning, restart strategies, or diversity mechanisms, and quantify its impact on optimization performance.
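As an illustration of the kind of contribution expected, the sketch below embeds a stubbed LLM call for parameter control inside a basic differential evolution loop. The `suggest_parameters` function is a hypothetical stand-in for a real LLM query, and the sphere objective is deliberately minimal; neither is part of the official challenge setup.

```python
# A minimal sketch of LLM-driven parameter control in differential evolution.
# `suggest_parameters` stands in for an LLM call; the heuristic reply it
# returns here is a placeholder, not a claim about any specific model or API.
import random


def sphere(x):
    """Toy objective: minimize the sum of squares."""
    return sum(xi * xi for xi in x)


def suggest_parameters(progress: float, F: float, CR: float):
    """Hypothetical LLM stub: nudge F and CR based on recent improvement.

    A real entry would send a natural-language summary of the search state
    to an LLM and parse its suggested settings; the rule below is a stand-in.
    """
    if progress < 1e-3:          # stagnating: push exploration
        return min(F + 0.1, 0.9), max(CR - 0.1, 0.1)
    return max(F - 0.05, 0.3), min(CR + 0.05, 0.9)  # improving: exploit


def de(dim=10, pop_size=20, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [sphere(ind) for ind in pop]
    F, CR = 0.5, 0.9
    best_prev = min(fit)
    for gen in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee one mutated coordinate
            # DE/rand/1 mutation with binomial crossover
            trial = [
                pop[a][j] + F * (pop[b][j] - pop[c][j])
                if (rng.random() < CR or j == jrand) else pop[i][j]
                for j in range(dim)
            ]
            f_trial = sphere(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
        # Every 20 generations, let the (stubbed) LLM retune F and CR.
        if gen % 20 == 19:
            best = min(fit)
            F, CR = suggest_parameters(best_prev - best, F, CR)
            best_prev = best
    return min(fit)


print("best fitness:", de())
```

Logging the parameter trajectory against a fixed-parameter baseline is one straightforward way to quantify the LLM's impact, as required above.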
Submitted algorithms will be tested by the participants themselves on the 30 test problems of the CEC 2017 Bound-Constrained Optimization Suite, a rigorous benchmark set comprising multimodal, hybrid, and composition functions, in 30 dimensions (30D). Performance should be evaluated according to the criteria given in the CEC 2017 test report, and the results should be compared with four EAs, including the top-ranked winners L-SRDE, EBOwithCMR, RDE, and mLSHADE-RL. Submissions must include a transparent report documenting the role and contributions of the LLM.
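For reference, here is a minimal sketch of the expected reporting workflow, assuming the usual CEC 2017 protocol of multiple independent runs per function with error measured as f(x_best) - f(x*). The `run_algorithm` stub, the run count, and the assumed optima are placeholders; a real evaluation would use the official benchmark implementations and the exact criteria from the test report.

```python
# A minimal sketch of CEC 2017-style result reporting, assuming 51
# independent runs per function and errors of the form f(x_best) - f(x*).
# `run_algorithm` and `F_OPT` are placeholders, not the official suite.
import random
import statistics


def run_algorithm(func_id: int, dim: int, seed: int) -> float:
    """Placeholder for one run of the submitted LLM-assisted EA.

    Returns the best objective value found; faked here so the sketch
    runs without the benchmark suite installed.
    """
    rng = random.Random(func_id * 1000 + seed)
    return 100.0 * func_id + rng.uniform(0.0, 1e-2)


# Assumed optima for illustration only; take the true values from the suite.
F_OPT = {fid: 100.0 * fid for fid in range(1, 31)}

RUNS, DIM = 51, 30
for fid in range(1, 31):
    errors = sorted(
        run_algorithm(fid, DIM, seed) - F_OPT[fid] for seed in range(RUNS)
    )
    print(
        f"F{fid:02d}: best={errors[0]:.3e} worst={errors[-1]:.3e} "
        f"median={errors[RUNS // 2]:.3e} mean={statistics.mean(errors):.3e} "
        f"std={statistics.stdev(errors):.3e}"
    )
```

The same per-function statistics can then be tabulated side by side with the four comparison EAs named above.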
The top-ranked teams and their scores will be displayed here after the evaluation phase.
Send submissions to evoml@nitj.ac.in
Department of Mathematics and Computing, Dr. B. R. Ambedkar National Institute of Technology, Jalandhar, Punjab, India.
For inquiries, contact: evoml@nitj.ac.in