

Timezone: America/Sao_Paulo
Workshop

Latent & Implicit Thinking – Going Beyond CoT Reasoning

Xinyi Wang · Nikunj Saunshi · Rui-Jie Zhu · Liu Yang · Yuntian Deng · Nishanth Dikkala · JIAHENG LIU · Zhiyuan Li
9:00 AM - 5:00 PM

Recent advances in AI have revealed that explicit Chain-of-Thought (CoT) reasoning—where models verbalize intermediate reasoning steps—while powerful, is neither the only nor the most efficient form of reasoning. The emerging paradigm of latent and implicit thinking explores how models can reason within their hidden representations or parameter space, using continuous latent states, recurrent or looped architectures, and non-autoregressive formulations such as diffusion or search-based models. This workshop, Latent & Implicit Thinking: Going Beyond CoT Reasoning (LIT), aims to unify these growing research efforts across different areas. It will feature discussions on latent-space reasoning tokens, looped and recurrent architectures, latent generative paradigms, and theoretical insights into the nature of latent reasoning depth and efficiency. By bringing together experts from academia and industry, LIT will provide a forum for deep technical exchange and cross-disciplinary collaboration, fostering a new shared framework for understanding and enhancing reasoning in the latent space of neural networks.

Workshop

Agentic AI in the Wild: From Hallucinations to Reliable Autonomy

Grigorios Chrysos · Yixuan Li · Etsuko Ishii · Xuefeng Du · Katia Sycara
9:00 AM - 5:00 PM

When we delegate tasks to AI agents—can we count on them to get it right? Agentic AI systems are increasingly stepping beyond static generation tasks into autonomous decision-making: scheduling meetings, booking travel, managing workflows, and assisting in scientific research. In these contexts, reliability is not just important—it is essential. Yet today’s foundation models remain prone to a critical failure mode: hallucination, where outputs are factually incorrect, semantically implausible, or detached from reality. While hallucinations are concerning in any generative system, these challenges are amplified in agentic settings, where models execute sequences of decisions without continuous human oversight.

Workshop

2nd Workshop on World Models: Understanding, Modelling and Scaling

Mengyue Yang · Xidong Feng · Nick Hansen · Francesco Faccio · Dima Damen
9:00 AM - 5:00 PM

The second ICLR Workshop on World Models explores scalable frameworks that unify generative modeling, sequential decision-making, multimodal learning, and causal reasoning. As world models mature from conceptual prototypes into system-level infrastructures for intelligence, this edition focuses on three core themes: (i) understanding and knowledge extraction of the world, (ii) large-scale training and rigorous evaluation, and (iii) cross-modal and control-centric scaling across language, vision, and action. Building on the success of the 2025 inaugural workshop with over 1,500 participants, the 2026 edition introduces systems-level discussions, robotics case studies, and failure-mode post-mortems emphasizing reproducibility, safety, and robustness. The workshop will culminate in a synthesis article summarizing insights from both editions—tracing the evolution of world model research, consolidating key lessons, and outlining future directions toward scalable, grounded, and causally coherent intelligence.

Workshop

ICLR 2026 Workshop on Memory for LLM-Based Agentic Systems (MemAgents)

Zhenguang Cai · Wenyue Hua · Keshuang Li · Yunpu Ma · Ercong Nie · Hinrich Schuetze · Karolina Stanczak · Matthew E Taylor
9:00 AM - 5:00 PM

Agentic systems are already being deployed in high-stakes settings such as robotics, autonomous web interaction, and software maintenance, and their capabilities ultimately hinge on memory. While LLM memorization typically refers to static, in-weights retention of training data or recent context, agent memory is online, interaction-driven, and under the agent’s control. Agentic systems must operate over extended horizons, learn from interaction, and adapt as goals and contexts shift. The limiting factor is increasingly not raw model capability but memory: how agents encode, retain, retrieve, and consolidate experience into useful knowledge for future decisions. Consistent with this view, recent commentary has argued that reinforcement learning can finally generalize when supplied with strong priors and explicit reasoning; however, current evaluations often underplay sequential accumulation of experience, where memory becomes decisive. In this context, we propose a workshop devoted to the memory layer for LLM-based agentic systems. Our premise is that long-lived, safe, and useful agents require a principled memory substrate that supports single-shot learning of instances, context-aware retrieval, and consolidation into generalizable knowledge. This workshop aims to advance the design of the memory layer for agentic systems and to convene interdisciplinary researchers across reinforcement learning, memory research, large language models, agentic systems, and neuroscience, with an organizing team that spans these communities.

Workshop

Learning Meaningful Representations of Life (LMRL) Workshop @ ICLR 2026

Kristina Ulicna · Rebecca Boiarsky · Till Richter · Soo-Jeong Kim · Lazar Atanackovic · Jason Hartford · Romain Lopez · Thouis Jones
9:00 AM - 5:00 PM

The Learning Meaningful Representations of Life (LMRL) Workshop 2026 aims to identify the key bottlenecks in the development of virtual cells. Virtual cells are in silico representations of a cell’s behaviour and dynamics in both health and disease, with immense implications for research, diagnostics and therapeutic development. Building towards such a system begins with learning meaningful representations within individual modalities, which form the foundations for combining complex, heterogeneous biological signals into integrative models that capture biology’s complexity. LMRL 2026 highlights emerging directions for overcoming these challenges by focusing on four core ingredients - causality in biological systems, generative modelling, interpretable representations, and leveraging virtual cells for real-world impact. This workshop aims to catalyse advances in how we learn meaningful representations by bringing together the AIxBio community around a shared scientific roadmap.

Workshop

The 2nd Workshop on Foundation Models for Science: Real-World Impact and Science-First Design

Wuyang Chen · Yongji Wang · N. Benjamin Erichson · Laurence Perreault-Levasseur · Bo Li · Damian Borth · Swarat Chaudhuri
9:00 AM - 5:00 PM

Scientific foundation models should be built for science, not for generic AI tastes or leaderboard prestige. This workshop centers problem-driven design: models that measurably advance real scientific inquiries, e.g., forecasting extreme climate events, accelerating materials discovery, understanding biological mechanisms, co-developed with domain experts and validated against field data, experiments, and downstream impact. We argue that foundation models for science must be built differently from language and vision. Scientific data are physical, causal, spatiotemporal, and often scarce or biased; objectives must reflect mechanistic fidelity, not just predictive accuracy. This calls for scientific priors and constraints, robust uncertainty quantification (UQ), and architectures that natively handle multi-modality (e.g., grids, meshes, spectra, time series, point clouds, text, images, code). It also demands tight integration with classical scientific tools (simulators, PDE solvers, optimization and inference engines, and HPC workflows) to yield hybrid systems that are faster, more accurate, and more trustworthy. We will highlight opportunities and hard problems unique to science: enforcing conservation laws and symmetries; learning across vast spatial and temporal scales; representing extreme events and tipping points; calibrating and validating UQ; and developing evaluation protocols that reward mechanistic insight and actionable reliability. The goal is a roadmap for building, training, and deploying scientific foundation models that accelerate discovery while respecting the structure of the natural world.

Workshop

Representational Alignment (Re⁴-Align)

Badr AlKhamissi · Brian Cheung · Dota Tianai Dong · Stephanie Fu · Erin Grant · Kushin Mukherjee · Ilia Sucholutsky · SIDDHARTH SURESH
9:00 AM - 5:00 PM

Representational alignment among artificial and biological neural systems continues to be a rapidly growing research area across machine learning, neuroscience, and cognitive science communities; we counted 688 papers submitted to ICLR 2026 on this set of interdisciplinary topics, up from 443 papers submitted to ICLR 2025, and 303 to ICLR 2024, representing an average 51% yearly increase. The Re-Align Workshop at ICLR 2026 facilitates interdisciplinary discussion among these communities, highlights unexpected findings from last year’s hackathon, and pushes beyond the foundational questions of alignment addressed in the previous workshops to focus on two novel and critical interdisciplinary applications of representational alignment: enabling neural control via representational alignment and evaluating the downstream behaviors enabled by representational alignment.

Workshop

Workshop on Multi-Agent Learning and Its Opportunities in the Era of Generative AI

Jianhong Wang · Caroline Wang · Feng Chen · Arrasy Rahman · Felipe Leno da Silva · Rupali Bhati · Bo Liu · Mustafa Mert Çelikok
9:00 AM - 5:00 PM

The rapid emergence of generative AI has revitalized interest in multi-agent learning as a foundation for building systems that can reason, coordinate, and adapt across diverse environments. This workshop seeks to explore the growing convergence between multi-agent learning and generative AI, emphasizing their mutual potential to advance both theoretical understanding and practical capability. We focus on three interrelated fronts where this integration is most visible: (1) LLM-based multi-agent systems, where large language models interact, cooperate, or compete in structured settings; (2) real-world distributed system control, where multi-agent learning offers scalable and data-driven coordination strategies for complex real-world systems such as smart cities; and (3) human-AI interaction, where generative AI enables richer modelling of human preferences, values, and behaviours, supporting more human-aligned multi-agent systems. By bringing together researchers from machine learning, game theory, cognitive science, and human-computer interaction, this workshop aims to bridge methodological insights and emerging applications, fostering a shared agenda for the age of multi-agent generative AI systems.

Workshop

The 3rd Workshop on Test-Time Updates (TTU)

Evan Shelhamer · francesco croce · Teresa Yeo · Shuaicheng Niu · Behzad Bozorgtabar · Xiaoxiao Li
9:00 AM - 5:00 PM

The common paradigm of deep learning distinguishes the training stage, where model parameters are learnt on massive datasets, from deployment, during which the frozen models are tested on unseen data. If the test-time data distribution changes, or the model needs to satisfy new requirements, a new training round is needed. Test-time updates (TTU), including test-time adaptation (TTA), post-training editing, in-context learning, and online continual learning, offer a complementary path to re-training: adapt when and where data shift occurs. Test-time updates are relevant across model sizes: they can be used to edit the knowledge in large foundation models for which re-training has prohibitive costs, as well as to adapt models on edge devices. Moreover, test-time adaptation finds applications in a variety of tasks, from vision to natural language to time series analysis, each presenting its specific challenges and methods. Finally, the goals of test-time approaches are manifold, spanning robustness, customization, and computational efficiency. In this workshop we want to bring together these different facets of test-time updates, connecting researchers working on topics typically treated as independent problems. We believe that this will offer a unique opportunity for cross-area collaborations, and that sharing domain-specific challenges and solutions will bridge diverse communities through beneficial cross-pollination. We will welcome works on methods, theory, systems, and evaluations for TTU/TTA across modalities (vision, language, audio, etc.), scales (from edge to cloud), and openness (open/closed models, black-/white-box scenarios). We will highlight principled objectives, safe/robust updates, practical parameterizations (inputs, features, adapters, heads), and cost-aware/green practices that respect latency, energy, and monetary budgets.

Workshop

Generative AI in Genomics (Gen²): Barriers and Frontiers

Pinar Demetci · Maria Skoularidou · Dongshunyi Li · Valentin De Bortoli · Tamara Broderick · Max Welling · Arnaud Doucet · Renzo Soatto
9:00 AM - 5:00 PM

Generative AI (GenAI) is transforming biology, with breakthrough applications like directed evolution in protein science. The parallel ambition to engineer cellular and tissue states in genomics is now a major frontier, yet progress is hampered by domain-specific roadblocks. Our workshop is designed to bridge this gap between GenAI's promise and its practical applications towards this goal. Recent large-scale data initiatives launched to support GenAI models are creating an inflection point for the field, making the timing ideal. Through a field-grounding keynote by a genomics expert, invited talks by GenAI practitioners, contributed presentations, and a moderated debate, we will bring together experts and early-career scientists from machine learning and experimental genomics to collaboratively define a roadmap for progress. Our program will target core, interconnected challenges across the development pipeline: from data generation priorities and model design for genomic hierarchies to biologically-grounded evaluation frameworks and interpretability. By defining promising research directions and critical evaluations, our ultimate goal is to catalyze a new generation of models for tangible biological impact.

Workshop

Principled Design for Trustworthy AI: Interpretability, Robustness, and Safety Across Modalities

Tsui-Wei (Lily) Weng · Nghia Hoang · Tengfei Ma · Jake Snell · francesco croce · Chandan Singh · Subarna Tripathi · Lam Nguyen
9:00 AM - 5:00 PM

Modern AI systems, particularly large language models, vision-language models, and deep vision networks, are increasingly deployed in high-stakes settings such as healthcare, autonomous driving, and legal decisions. Yet, their lack of transparency, fragility to distributional shifts between train/test environments, and representation misalignment in emerging tasks and data/feature modalities raise serious concerns about their trustworthiness. This workshop focuses on developing trustworthy AI systems by principled design: models that are interpretable, robust, and aligned across the full lifecycle – from training and evaluation to inference-time behavior and deployment. We aim to unify efforts across modalities (language, vision, audio, and time series) and across technical areas spanning interpretability, robustness, uncertainty, safety, and policy. Our goal is to create a workshop platform for cross-disciplinary discussion and idea exchange across key dimensions of trustworthiness in modern AI systems. These include interpretability & mechanistic transparency, uncertainty quantification & risk assessment for safe operation, adversarial & distributional robustness, and representation & safety alignment across diverse tasks & modalities. By bringing together these efforts under a cohesive design paradigm, the workshop seeks to advance forward-looking solutions and foster community building around shared technical & societal challenges in building trustworthy AI systems. This workshop differs from recent prior workshop efforts (e.g., ICML’24 TiFA, NeurIPS’24 Interpretable AI, IJCAI’24 Trustworthy AI) in its unique focus on building Trustworthy AI systems by design and its broad coverage of the full machine learning lifecycle across both single- and multi-modal settings.
Topics of interest include six pillars:
(1) Interpretable and Intervenable Models: concept bottlenecks and modular architectures; neuron tracing and causal influence methods; mechanistic interpretability and concept-based reasoning; interpretability for control and real-time intervention.
(2) Inference-Time Safety and Monitoring: reasoning trace auditing in LLMs and VLMs; inference-time safeguards and safety mechanisms; chain-of-thought consistency and hallucination detection; real-time monitoring and failure intervention mechanisms.
(3) Multimodal Trust Challenges: grounding failures and cross-modal misalignment; safety in vision-language and deep vision systems; cross-modal alignment and robust multimodal reasoning; trust and uncertainty in video, audio, and time-series models.
(4) Robustness and Threat Models: adversarial attacks and defenses; robustness to distributional, conceptual, and cascading shifts; formal verification methods and safety guarantees; robustness under streaming, online, or low-resource conditions.
(5) Trust Evaluation and Responsible Deployment: human-AI trust calibration, confidence estimation, and uncertainty quantification; metrics for interpretability, alignment, and robustness; transparent, reproducible, and accountable deployment pipelines; safety alignment in fine-tuning, instruction-tuning, and retrieval-augmented systems.
(6) Safety and Trustworthiness in LLM Agents: autonomous tool use and agentic behavior in LLMs; safety and failures in planning and action execution; emergent behaviors in multi-agent interactions; intervention and control in agent loops; alignment of long-horizon goals with user intent; auditing and debugging LLM agents in real-world deployment.

Workshop

The First Workshop on Efficient Spatial Reasoning

Haozheng Luo · Yijiang Li · Zhenyu Pan · Ruiyang Qin · Weiyang Liu · Zhijian Liu · Manling Li · Nuno Vasconcelos
9:00 AM - 5:00 PM

Spatial reasoning—the ability to understand, represent, and manipulate spatial relationships among objects, agents, and environments—has been profoundly advanced by large foundation models, enabling breakthroughs in 3D reconstruction, scene understanding, and vision–language reasoning. However, current models often rely on massive parameter scales or test-time extensions, introducing significant inefficiencies during training and inference. They also struggle with multi-step reasoning and the nuanced comprehension of complex spatial relations, where unreliable reasoning paths undermine both efficiency and accuracy. To address these challenges, we propose a workshop that unites researchers and practitioners from academia and industry to advance efficient spatial reasoning—approaches that improve generalization and robustness while remaining computationally practical. Topics include symbolic–neural integration, geometric deep learning, scalable reasoning architectures, and evaluation frameworks. Through invited talks and discussions, the workshop will examine efficiency–accuracy trade-offs, cross-modal reasoning, and real-world robustness, fostering collaboration across AI, cognitive science, and applied domains.

Workshop

I Can't Believe It's Not Better: Where Large Language Models need to improve

Arno Blaas · Priya DCosta · Fan Feng · Zhaoying Pan · Nikolai Rozanov · Jennifer Williams · Yubin Xie · Rui Yang
9:00 AM - 5:00 PM

Large language models (LLMs) have advanced rapidly, yet these advances have also highlighted gaps, such as hallucination, brittle reasoning, alignment failures, and hard efficiency/scaling constraints, especially in safety-critical settings. Ideally, evidence of such limitations would immediately lead to improvements to address these gaps, but compute constraints and unfruitful approaches often stall iteration; meanwhile, publication norms still prioritize positive results over informative null or negative findings. This workshop creates a venue for negative results on LLMs, including: (i) rigorous studies that demonstrate and analyze limitations (e.g., leak-resistant reasoning probes, alignment stress tests, failure audits in critical applications), and (ii) attempts at well-established ideas that did not deliver expected gains, with analyses that identify failure modes, boundary conditions, and lessons learned. We welcome diagnostics, replications, counterfactual evaluations, and ablations that separate genuine capability from shortcut learning and clarify when methods break, why they break, and how to fix them. By aggregating evidence of negative results and actionable takeaways, the workshop aims to convert setbacks into robust principles and practices for building more reliable LLMs.

Workshop

Machine Learning for Genomics Explorations (MLGenX)

Ehsan Hajiramezanali · Wei Qiu · Arman Hasanzadeh · Tommaso Biancalani · Mihaela van der Schaar · Fabian Theis · Aviv Regev
9:00 AM - 5:00 PM

Despite rapid advances in data-driven biology, our limited understanding of the biological mechanisms underlying diseases continues to hinder therapeutic innovation. While genomics and multi-omics platforms have generated vast datasets, translating these into actionable biological insights remains an open challenge. At the same time, the emergence of foundation models and AI agents capable of reasoning, planning, and hypothesis generation offers a unique opportunity to reimagine how we approach discovery in biology. The 3rd MLGenX workshop aims to bring together the machine learning, genomics, and biology communities to explore this new frontier. This year’s theme, “From Reasoning to Experimentation: Closing the Loop Between AI Agents and the Biological Lab,” focuses on adaptive, interpretable, and experiment-aware AI systems that learn from feedback and drive biological insight. By fostering interdisciplinary collaboration, benchmark sharing, and open discussion, MLGenX 2026 aims to chart the path toward lab-in-the-loop science and accelerate innovation in biology and drug discovery.

Workshop

Workshop on Scaling Post-training for LLMs (SPOT)

Devvrit Khatri · Rishabh Tiwari · Lovish Madaan · Sewon Min · Gagan Jain · Nan Rosemary Ke · Kurt Keutzer · Prateek Jain
9:00 AM - 5:00 PM

Post-training, encompassing techniques like Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL), is no longer a mere final step for task-specific adaptation. It is evolving into a compute-intensive phase in its own right, crucial for unlocking the full potential of foundational models and optimizing for critical downstream behaviors. Yet, the science of post-training, at scale, remains in its infancy. This workshop is motivated by the urgent need to establish rigorous and scalable methodologies, design choices, and approaches for post-training. While today's design choices in pre-training are made with a core focus on their ability to scale, a similar scaling laws mindset for post-training is largely absent. Our goal is to catalyze a systematic understanding of how post-training scales—across algorithms, data regimes, infrastructure, and objectives—and to identify the open questions that must be addressed to turn post-training into a science of its own. This workshop aims to bring together diverse perspectives from academic and industrial researchers and practitioners, to share practical experiences, and to outline a clear research direction toward building a principled science of post-training at scale.

Workshop

The 2nd Workshop on Advances in Financial AI: Towards Agentic and Responsible Systems

Nazanin Mehrasa · Ioana Boier · CHANYEOL CHOI · Yongjae Lee · Salwa Alamir · Simon Lucey
9:00 AM - 5:00 PM

The financial domain is undergoing rapid transformation driven by advances in artificial intelligence. Building on last year’s "Advances in Financial AI: Opportunities, Innovations, and Responsible AI" workshop, this second edition will focus particularly on the emergence of agentic systems in finance (autonomous or semi-autonomous agents, decision-making systems, multi-agent interactions), and the imperative of responsibility (ethics, fairness, accountability, transparency, robustness, regulation). This workshop aims to bring together researchers, practitioners, and policymakers to explore both the opportunities and risks of agentic financial AI systems, to share recent innovations, and to work towards foundations and best practices that ensure such systems are safe, trustworthy, and socially aligned.

Workshop

4th ICLR Workshop on Machine Learning for Remote Sensing

Esther Rolf · Bianca Zadrozny · Hannah Kerner · Marc Rußwurm · Evan Shelhamer · Gabriel Tseng · Ronny Hänsch · Hamed Alemohammad
9:00 AM - 5:00 PM

Machine Learning for Remote Sensing (ML4RS) has rapidly evolved into a vibrant research area. Remote sensing provides the ML community with an unparalleled source of multimodal, spatiotemporal data—challenging algorithms to learn from vast, heterogeneous, and dynamically changing observations of our planet. Building on the success of ML4RS workshops at ICLR 2023-2025, the 4th ICLR Workshop on Machine Learning for Remote Sensing will focus on bridging the persistent gap between publication and practice. Our theme, “ML4RS: From Publication to Practice,” aims to connect research innovations with their real-world deployment. This year’s workshop introduces two new elements: an interactive tutorials track and an opportunity for research track papers to be published in journal proceedings. Alongside invited provocations and debates on “Foundation Models in ML4RS: Are We There Yet?”, our program highlights contributions across key challenges in the field—including data efficiency, interpretability, benchmarking, and global versus local model design. Building on ML4RS’s tradition of highlighting speakers and challenges related to the ICLR host location, ML4RS 2026 emphasizes local engagement with Brazil’s dynamic remote sensing and ML communities while continuing to cultivate a diverse, international ecosystem of researchers, practitioners, and end-users. By bridging methodological innovation and practical application, ML4RS 2026 aims to advance the scientific and societal impact of machine learning for Earth observation.

Workshop

Deep Generative Models in Machine Learning: Theory, Principle and Efficacy (2nd Workshop)

Andi Han · Valentin De Bortoli · Mingyuan Bai · Sara Fridovich-Keil · Wei Huang · Taiji Suzuki · Qing Qu · Kenji Fukumizu
9:00 AM - 5:00 PM

The 2nd Deep Generative Models in Machine Learning: Theories, Principles, and Efficacy (DeLTa 2026) workshop aims to bridge the gap between theory and practice in modern generative modeling. Deep Generative Models (DGMs)—including VAEs, GANs, flows, autoregressive, and diffusion models—have transformed AI research, yet fundamental theoretical and algorithmic challenges persist. DeLTa 2026 will bring together experts across statistics, optimization, and deep learning to address two central questions: (1) How can we develop unified theoretical frameworks to understand and design advanced generative models? and (2) How can we improve their efficiency, reliability, and transferability in real-world applications? This year’s workshop expands its scope to include emerging frontiers such as flow matching, stochastic control, discrete and low-dimensional diffusion models, post-training theory, and large language diffusion models. By fostering dialogue between theoretical and applied communities, DeLTa 2026 seeks to establish principled foundations that guide scalable, interpretable, and safe generative modeling. The workshop will feature invited talks, contributed papers, and a dedicated short-paper track to encourage participation from early-career and underrepresented researchers. Building on the success of DeLTa 2025, we anticipate over 400 participants and vibrant interdisciplinary engagement at ICLR 2026.

Workshop

ReALM-GEN: Real-World Constrained and Preference-Aligned Flow- and Diffusion-based Generative Models

Paris Giampouras · Morteza Mardani · Yingzhen Li · Giannis Daras · Johann Wenckstern · Charlotte Bunne
9:00 AM - 5:00 PM

Diffusion and flow-based generative models power today’s breakthroughs in Generative AI, showing impressive results in generating various types of data, ranging from images and video to protein molecules and text. However, making them respect real-world constraints and align with users' preferences, whether at the post-training phase or at inference time, is still an unsolved challenge. ReALM-GEN at ICLR 2026 will bring together a diverse community of researchers spanning theoretical foundations of ML and generative models, vision, language, robotics, and scientific applications of AI, to explore bold ideas and practical tools for adapting and/or steering pretrained flow- and diffusion-based models toward real-world constraint satisfaction and alignment with user preferences.

Workshop

Integrating Generative and Experimental Platforms for Biomolecular Design

Chenghao Liu · Jarrid Rector-Brooks · Soojung Yang · Sidney Lisanza · Jacob Gershon · Lauren Hong · Pranam Chatterjee · Yoshua Bengio
9:00 AM - 5:00 PM

Biomolecular design, through artificial engineering of proteins, ligands, nucleic acids, and cells, holds immense promise in addressing pressing medical, industrial, and environmental challenges. While generative machine learning has shown significant potential in this area, a disconnect exists with experimental biology: many ML research efforts prioritize static benchmark performance, potentially sidelining impactful biological applications. This workshop seeks to bridge this gap by bringing computationalists and experimentalists together, catalyzing a deeper interdisciplinary discourse. Together, we will explore the strengths and challenges of generative ML in biology, experimental integration of generative ML, and biological problems ready for ML. To attract high-quality and diverse research, we partnered with Nature Biotechnology for a special collection, and we created dedicated tracks for in-silico ML research and hybrid ML-experimental biology research. Our lineup features emerging leaders as speakers and renowned scientists as panelists, encapsulating a spectrum from high-throughput experimentation and computational biology to generative ML. To catalyze new collaborations, we will host a seed-grant competition for pairs of experimentalists and computationalists proposing fresh joint projects. To connect dry and wet lab practice, a wet-lab challenge sponsored by Adaptyv Bio will empirically evaluate protein design models. With a diverse organizing team and backed by industry sponsors, we dedicate the workshop to pushing the boundaries of ML's role in biology. This will be the third edition of this workshop following the previous versions of it we organized at ICLR 2024 and 2025.
