Latent & Implicit Thinking – Going Beyond CoT Reasoning
Abstract
Recent advances in AI have shown that explicit Chain-of-Thought (CoT) reasoning, in which models verbalize intermediate reasoning steps, is powerful but is neither the only nor necessarily the most efficient form of reasoning. The emerging paradigm of latent and implicit thinking explores how models can reason within their hidden representations or parameter space, using continuous latent states, recurrent or looped architectures, and non-autoregressive formulations such as diffusion- or search-based models. This workshop, Latent & Implicit Thinking: Going Beyond CoT Reasoning (LIT), aims to unify these growing research efforts across different areas. It will feature discussions on latent-space reasoning tokens, looped and recurrent architectures, latent generative paradigms, and theoretical insights into the depth and efficiency of latent reasoning. By bringing together experts from academia and industry, LIT will provide a forum for deep technical exchange and cross-disciplinary collaboration, fostering a shared framework for understanding and enhancing reasoning in the latent space of neural networks.