ICLR 2026 Workshop on AI with Recursive Self-Improvement
Abstract
Recursive self-improvement (RSI) is moving from thought experiments into deployed AI systems. LLM agents now rewrite their own codebases and prompts, scientific-discovery pipelines schedule continual fine-tuning, and robotics stacks patch controllers from streaming telemetry, in some cases improving production-level code. The ICLR 2026 Workshop on AI with Recursive Self-Improvement brings together researchers around a simple question with large consequences: how do we build the algorithmic foundations for powerful and reliable self-improving AI systems? As loops that update weights, rewrite prompts, or adapt controllers move from labs into production, we aim to surface the methods that work: how to design, evaluate, and govern these loops rigorously. The workshop examines algorithms for self-improvement across experience learning, synthetic data pipelines, multimodal agentic systems, weak-to-strong generalization, and inference-time scaling. In short, we care about loops that actually get better, and can show it. To give the workshop a clear spine, we organize contributions around five lenses: the change targets inside the system, the temporal regime of adaptation, the mechanisms and drivers of improvement, the operating contexts, and the evidence of improvement. This framing synthesizes recent perspectives on self-evolving agents while grounding them in practical, auditable deployment settings. We are paradigm-agnostic: we welcome work on foundation models, agent frameworks, robots, learning algorithms and optimizers, control and program synthesis, and the data and infrastructure systems and evaluation tooling that enable recursive self-improvement.