

Timezone: Singapore
Workshop

World Models: Understanding, Modelling and Scaling

Mengyue Yang · Haoxuan Li · Firas Laakom · Xidong Feng · Jiaxin Shi · Zhu Li · Guohao Li · Francesco Faccio · Jürgen Schmidhuber
8:30 AM - 6:00 PM

Our workshop covers a wide range of topics related to World Models, spanning understanding, modelling, and scaling, closely aligned with cutting-edge generative AI and broader applications such as robotics and embodied AI. We are glad to announce that nine top-tier researchers, including the founder of world models, have confirmed to attend in person as speakers and panelists. The workshop targets AI researchers, industry professionals, and students interested in World Models, generative AI, reinforcement learning, and related applications. Participants should have a basic understanding of generative models and reinforcement learning concepts; familiarity with recent advancements in both fields is beneficial but not mandatory. We also welcome submissions from researchers in the natural sciences (e.g., physics, chemistry, biology) and social sciences (e.g., pedagogy, sociology) to offer attendees a more comprehensive perspective. In summary, our topics of interest include, but are not limited to:
- Understanding World Rules;
- World model training and evaluation;
- Scaling World Models across language, vision, and control;
- World Models in general domains.
For the contributed paper sessions, given the recent surge in publications in related areas and the success of similar workshops, we project over 250 paper submissions and over 1,500 participants.

Workshop

Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions

Hesam Asadollahzadeh · Mahdi Ghaznavi · Polina Kirichenko · Parsa Hosseini · Arash Marioriyad · Nahal Mirzaie · Aahlad Manas Puli · Mohammad Hossein Rohban · Mahdieh Baghshah · Shikai Qiu
8:30 AM - 6:00 PM

Despite the remarkable advancements towards generalizability and autonomy in AI systems, persistent challenges such as spurious correlations and shortcut learning continue to hinder the robustness, reliability, and ethical deployment of machine learning systems. These challenges arise from the statistical nature of machine learning algorithms and their implicit or inductive biases at all stages, including data preprocessing, architectures, and optimization. As a result, models rely on spurious patterns rather than understanding underlying causal relationships, making them vulnerable to failure in real-world scenarios where data distributions involve under-represented groups or minority populations. The foundational nature and widespread occurrence of reliance on spurious correlations and shortcut learning make it an important research topic and a gateway to understanding how deep models learn patterns and the underlying mechanisms responsible for their effectiveness and generalization. This workshop aims to foster a collaborative community to address these critical issues by bringing together experts from diverse fields and pushing the boundaries of current research. We will focus on promoting three key avenues: (i) the development of comprehensive evaluation benchmarks and the exploration of under-examined facets of the problem, (ii) the creation of novel solutions for building robust models that effectively tackle spurious correlations in real-world applications, and (iii) shedding light on lesser-explored aspects to deepen our understanding of the nature of these phenomena.

Workshop

2nd Workshop on Navigating and Addressing Data Problems for Foundation Models (DATA-FM)

Jiachen (Tianhao) Wang · Ruoxi Jia · Pang Wei Koh · Dawn Song · Jerone Andrews · Hoang Anh Just · Feiyang Kang
8:30 AM - 6:00 PM

Foundation models (FMs) have become central to modern machine learning, with data playing a crucial role in their development and sparking increased attention to data-related challenges such as curation and attribution. Adapting traditional data-centric methods to FMs is challenging due to the scale of both data and model architectures, necessitating interdisciplinary collaboration and community efforts. Building on the success of the first Data Problems in Foundation Models workshop at ICLR 2024, the second workshop will address persistent and emerging data-related challenges in FM deployment. While longstanding issues in data collection, curation, and synthesis remain relevant, new challenges have arisen as FMs are integrated into a growing number of applications and become increasingly multi-modal. Concurrently, the societal impact of AI has intensified, highlighting concerns such as data copyright. These evolving challenges emphasize the need for continued, focused discussions on data-related issues in FM development. Our goals include fostering a comprehensive understanding of these challenges across the entire FM pipeline and creating a platform for interdisciplinary researchers to connect, collaborate, and drive progress. We hope this workshop will serve as a catalyst for innovative solutions to critical data challenges, shaping the future of FMs and their wide-ranging applications.

Workshop

The 3rd DL4C Workshop: Emergent Possibilities and Challenges in Deep Learning for Code

Zijian Wang · Ying Sheng · Giovanni Zappella · Qian Liu · Devjeet Roy · Gabriel Orlanski · Zora Zhiruo Wang · Wen-Ding Li
8:30 AM - 5:20 PM
Workshop

Workshop on Reasoning and Planning for Large Language Models

Zhiyuan Hu · Yilun Zhao · Xidong Feng · Min-Yen Kan · Nouha Dziri · Yali Du · Pang Wei Koh · Bryan Hooi · Arman Cohan
8:30 AM - 5:40 PM

This workshop explores the growing capabilities of large language models (LLMs), such as OpenAI's o1 model, in reasoning, planning, and decision-making, highlighting recent advances and challenges. We aim to examine how reinforcement learning methods, post-training optimization, and efficient inference techniques can further enhance LLMs' reasoning capabilities. Topics include training approaches for enhancing reasoning and planning abilities, scaling inference for complex tasks, developing robust benchmarks, and extending LLMs to multi-modal and embodied environments. We will also discuss broader themes such as causal reasoning, collaborative multi-agent systems, uncertainty, and explainability to offer insights and guidance for the further development of reasoning and planning in LLMs.

Workshop

I Can't Believe It's Not Better: Challenges in Applied Deep Learning

Arno Blaas · Priya DCosta · Fan Feng · Andreas Kriegler · Zhaoying Pan · Tobias Uelwer · Jennifer Williams · Yubin Xie · Rui Yang
8:30 AM - 5:15 PM

The goal of the I Can’t Believe It’s Not Better (ICBINB) workshop series is to promote slow science and build a community to discuss surprising and negative results, thereby encouraging a culture of transparency and shared learning. In recent years, we have witnessed a remarkable rise of Deep Learning (DL), whose impressive performance on benchmark tasks has led to increasing ambitions to deploy DL in real-world applications across all fields and disciplines. However, despite its potential, DL still faces many challenges during deployment in dynamic, real-world conditions, thus exposing practical limitations that are often overlooked in controlled benchmarks. Therefore, in this year’s ICBINB workshop, we aim to explore the challenges, unexpected outcomes, and common principles underlying similar issues and failure modes encountered across various fields and disciplines when deploying DL models in real-world scenarios. We will invite contributions and discussions from diverse fields including, but not limited to, healthcare, scientific discovery, robotics, education, equality & fairness, and social sciences. The failure modes may include suboptimal performance, concerns with the safety and reliability of applying DL models in unpredictable real-world applications, as well as ethical and societal challenges. More importantly, we aim to discuss common reasons or patterns in challenges and failure modes across disciplines. By creating a platform for researchers from different domains to interact and share insights, we hope to accelerate research by translating findings from one field to another, and also deepen DL researchers’ understanding of the universal fundamental issues that should be addressed within the current theoretical and empirical research paradigms. Embracing negative results as valuable learning opportunities will, therefore, help the community learn from past mistakes, and drive the development of more robust, reliable, and applicable AI models.

Workshop

SCOPE: Scalable Optimization for Efficient and Adaptive Foundation Models

Souvik Kundu · Tianlong Chen · Shiwei Liu · Haizhong Zheng · Amir Yazdanbakhsh · Beidi Chen · Yingyan Celine Lin
8:30 AM - 6:00 PM

In the rapidly evolving landscape of AI, scalable optimization methods that yield efficient and adaptive foundation models are in significant demand for inference serving. Specifically, making models efficient while keeping them adaptable to various new downstream tasks poses multiple challenges. Firstly, a model's ability to quickly learn adaptive and efficient sub-model selection for different tasks requires the capability to perform continual weight updates, compute- and memory-efficient fine-tuning, and personalized adaptation. Secondly, with the increased demand for long-context understanding and reasoning, the model needs to achieve such efficient adaptation while fetching only the tokens that are informative for a given query. For instance, imagine a model that continually learns from current news events, adapting to the ever-changing global landscape by integrating up-to-date knowledge. Such models may not only need efficient fine-tuning on newly incoming data streams, but also efficient handling of a KV cache that may keep growing with the requirement to handle longer contextual information. Additionally, the integration of retrieval-augmented generation (RAG) into foundation models can ensure that generated content is not only relevant but also reflects the most current knowledge, at the cost of a larger prefill. Thirdly, with this growing demand for contextual adaptation, mixture-of-experts (MoE) models have gained significant traction, as they can perform test-time adaptation via a learned routing policy. In addition, the emergence of sub-quadratic models with constant-size KV states, as opposed to the KV caching of transformers, has opened up a new avenue for model adaptation through information retention in compressive KV states. These capabilities rely on techniques for adapting foundation models, including fine-tuning, conversion, distillation, and in-context/few-shot learning. This workshop aims to capture advances in scalable, adaptive fine-tuning, calibration, and conversion that yield inference-efficient quadratic and sub-quadratic foundation models, focusing on methodologies across vision, language, and multi-modal domains.

Workshop

AI4MAT-ICLR-2025: AI for Accelerated Materials Design

Santiago Miret · Marta Skreta · N. M. Anoop Krishnan · Rocío Mercado · Mohamad Moosavi · Stefano Martiniani
8:30 AM - 5:00 PM

We propose a full-day, medium-sized workshop at ICLR 2025 titled “AI for Accelerated Materials Design” (AI4Mat-ICLR-2025). This workshop will serve as a venue for researchers at the intersection of AI and materials science to address pressing scientific challenges using AI-driven techniques. AI is starting to revolutionize materials science and engineering, driving major global research initiatives at academic and government institutions and corporate research labs, alongside the rise of several startups for AI-driven materials discovery. AI4Mat's holistic approach to materials design and machine learning ensures comprehensive discussions and fosters novel directions across the materials landscape. AI4Mat-ICLR-2025 centers on understanding crucial and timely technical challenges that are unique to AI for materials design:
1. How Do We Build a Foundation Model for Materials Science? The success of foundation models in various machine learning domains has led to growing relevance and interest in materials foundation models. As such, we propose a discussion centered on understanding the complex, interdisciplinary nature of foundation models for materials and how the community can contribute towards building them.
2. What Are Next-Generation Representations of Materials Data? Materials representation learning continues to be a rapidly evolving technical challenge with unique considerations informed by real-world materials challenges.
AI4Mat-ICLR-2025 also aims to grow and empower a notable community that leverages AI for impactful materials applications. Concretely, we plan to build upon past AI4Mat programs:
1. Travel Grant Program: Building upon the success of past AI4Mat programs, we plan to continue a travel grant program funded by AI4Mat corporate sponsors to enable researcher participation, with a focus on underrepresented communities.
2. Tiny Papers Track: This track extends our efforts in inclusive research participation based on previous ICLR innovations.
3. Themed Submission Track: We plan to conduct a themed submission track on multi-modal data collection, structured data sharing, and multi-modal representation learning, in order to encourage the community to tackle a common problem of interest.
4. Journal Track: Similar to previous AI4Mat workshops, we aim to provide AI4Mat researchers an opportunity to submit their interdisciplinary work to a prestigious venue.

Workshop

Building Trust in LLMs and LLM Applications: From Guardrails to Explainability to Regulation

Micah Goldblum · Ramasuri Narayanam · Bang An · Soumyabrata Pal · Martin Pawelczyk · Hima Lakkaraju · Shiv Saini
8:50 AM - 6:00 PM

As Large Language Models (LLMs) are rapidly adopted across diverse industries, concerns around their trustworthiness, safety, and ethical implications increasingly motivate academic research, industrial development, and legal innovation. LLMs are increasingly integrated into complex applications, where they must navigate challenges related to data privacy, regulatory compliance, and dynamic user interactions. These complex applications amplify the potential of LLMs to violate the trust of humans. Ensuring the trustworthiness of LLMs is paramount as they transition from standalone tools to integral components of real-world applications used by millions. This workshop addresses the unique challenges posed by the deployment of LLMs, ranging from guardrails to explainability to regulation and beyond. The workshop will bring together researchers and practitioners from academia and industry to explore cutting-edge solutions for improving the trustworthiness of LLMs and LLM-driven applications. It will feature invited talks, a panel discussion, interactive breakout discussion sessions, and poster presentations, fostering rich dialogue and knowledge exchange. We aim to bridge the gap between foundational research and the practical challenges of deploying LLMs in trustworthy, user-centric systems.

Workshop

ICLR 2025 Workshop on Tackling Climate Change with Machine Learning: Data-Centric Approaches in ML for Climate Action

Konstantin Klemmer · Melissa Chapman · Lily Xu · Poon Ho · Mélisande Teng · Patrick Emami · Yoshua Bengio · Binyu Lei
8:50 AM - 5:10 PM

Climate change is one of the greatest problems society has ever faced, with increasingly severe consequences for humanity as natural disasters multiply, sea levels rise, and ecosystems falter. While no silver bullet, machine learning can be an invaluable tool in fighting climate change via a wide array of applications and techniques, from designing smart electric grids to tracking greenhouse gas emissions through satellite imagery. These applications require algorithmic innovations in machine learning and close collaboration with diverse fields and practitioners. This workshop is intended as a forum for those in the global machine learning community who wish to help tackle climate change, and further aims to foster cross-pollination between researchers in machine learning and experts in complementary climate-relevant fields. Building on our past workshops on this topic, this workshop particularly aims to explore data-centric ML approaches for climate action. Data-centric ML is not only a timely topic within the ICLR community, as analyzing and engineering (pre)training datasets becomes increasingly important, but also holds specific challenges and opportunities in climate-related areas. We also want to take the opportunity of ICLR being hosted in Singapore to engage with local communities and shine a light on work that deploys, analyzes, or critiques ML methods and their use for climate change adaptation and mitigation on the Asian continent.

Workshop

Generative Models for Robot Learning

Ziwei Wang · Congyue Deng · Changliu Liu · Zhenyu Jiang · Haoran Geng · Huazhe Xu · Yansong Tang · Philip Torr · Ziwei Liu · Angelique Taylor · Yuke Zhu
9:00 AM - 6:00 PM

The next generation of robots should combine ideas from fields such as computer vision, natural language processing, machine learning, and many others, because closed-loop systems are required to deal with complex tasks based on multimodal input in complicated real-world environments. This workshop focuses on generative models for robot learning, an important and fundamental topic at the intersection of AI and robotics. Learning-based methods in robotics have achieved high success rates and generalization ability in a wide variety of tasks such as manipulation, navigation, SLAM, scene reconstruction, proprioception, and physics modeling. However, robot learning faces several challenges, including the expensive cost of data collection and weak transferability across different tasks and scenarios. Inspired by the significant progress in computer vision and natural language processing, efforts have been made to combine generative models with robot learning to address these challenges, for example by synthesizing high-quality data and by incorporating generation frameworks into representation and policy learning. Besides, pre-trained large language models (LLMs), vision-language models (VLMs), and vision-language-action (VLA) models are adapted to various downstream tasks to fully leverage their rich commonsense knowledge. This progressive development enables robot learning frameworks to be applied to complex and diverse real-world tasks. This workshop aims to enable interdisciplinary communication among researchers in the broader community, so that more attention can be drawn to this field. In this workshop, state-of-the-art progress and promising future directions will be discussed, inspiring new ideas and applications in related fields.

Workshop

Learning Meaningful Representations of Life (LMRL) Workshop @ ICLR 2025

Kristina Ulicna · Rebecca Boiarsky · Eeshaan Jain · Till Richter · Giovanni Palla · Jason Hartford · Oren Kraus · Aleksandrina Goeva · Charlotte Bunne · Fabian Theis
9:00 AM - 5:40 PM

Learning Meaningful Representations of Life 2025 (LMRL 2025) aims to address the growing interest in large-scale representation learning for biological data, driven by the availability of large biological datasets, such as DNA and RNA sequences, protein structures, and cell imaging. There have been many recent papers proposing “foundation models” for biological data, but the performance of these models varies dramatically across domains: in some settings, large-scale pre-training has significantly expanded the range of solvable tasks, while in others, foundation models are often outperformed by simple baselines. This workshop will encourage work that explains this gap by focusing on two key issues: first, identifying the data, models, and algorithms necessary to extract meaningful representations that generalize well to downstream tasks, and second, establishing appropriate methods to evaluate the quality and utility of these learned representations. By bringing together researchers from AI and biology, the workshop aims to foster collaboration, promote standardization of datasets and evaluation metrics, and explore real-world applications that can benefit from improved strategies in representation learning.

Workshop

Second Workshop on Representational Alignment (Re²-Align)

Brian Cheung · Dota Tianai Dong · Erin Grant · Ilia Sucholutsky · Lukas Muttenthaler · SIDDHARTH SURESH
9:00 AM - 5:30 PM
Both natural and artificial intelligences form representations of the world that they use to reason, make decisions, and communicate. Despite extensive research across machine learning, neuroscience, and cognitive science, it remains unclear what the most appropriate ways are to compare and align the representations of intelligent systems. In the second edition of the Workshop on Representational Alignment (Re²-Align), we bring together researchers from diverse fields who study representational alignment to make concrete progress on this set of open interdisciplinary problems. We invite researchers across the machine learning, neuroscience, and cognitive science communities to participate and contribute to the workshop in two main ways: (1) via contributed papers and participation in structured discussions during the workshop; and (2) by participating in the workshop hackathon.
Workshop

Advances in Financial AI: Opportunities, Innovations, and Responsible AI

Jiawei He · Yongjae Lee · Bo An · Yixuan Li · Alberto Pozanco
9:00 AM - 5:10 PM

The financial industry is experiencing a paradigm shift propelled by rapid advancements in artificial intelligence. From algorithmic trading and fraud detection to personalized banking and investment strategies, AI technologies are redefining financial services. Our workshop aims to convene researchers, industry professionals, and policymakers to explore the latest developments, discuss challenges, and chart a course for responsible AI integration in finance. Topics of interest include, but are not limited to, generative AI with applications to finance, time-series modeling, financial datasets, multi-agent systems, and practical financial applications such as forecasting, fraud detection, risk management, and quantitative finance. By bringing together diverse perspectives from academia and industry, we seek to foster collaboration and drive forward advancements in the responsible use of AI in finance.

Workshop

AI for Nucleic Acids (AI4NA)

Ivona Martinović · Lovro Vrček · Chaitanya Joshi · Tin Vlašić · Agata Kilar · Bruno Trentini · Max Ward · Maria Brbic · Bryan Hooi · Fran Supek · Pietro Lio · Elena Rivas · Mile Sikic
9:00 AM - 6:00 PM

In recent years, the AI community has made significant strides in protein research, particularly since the breakthrough of AlphaFold2, which has led to advancements in structural biology and drug discovery. The success achieved on proteins gives hope for comparable success on nucleic acids: RNA and DNA. The proposed workshop aims to highlight the unique challenges and possibilities of applying AI to nucleic acids. While advances in RNA structure prediction and nucleic acid language models show promise, the field lags behind proteins in the scale and quality of data and in predictive accuracy. Addressing these challenges will drive critical applications in diagnostics, therapeutics, and biotechnology, such as mRNA therapeutics design, RNA-targeting small molecules, and improved genetic variant calling. Furthermore, there is room for advancement in reconstructing complex genomes, such as cancer or plant genomes, and in detecting and understanding epigenetic and epitranscriptomic modifications. By bringing together AI researchers and domain experts in nucleic acids at this ICLR workshop, we aim to foster collaborations that advance the role of AI in nucleic acid research, ultimately pushing the boundaries of what AI can achieve in understanding and manipulating life’s fundamental molecules.

Workshop

Open Science for Foundation Models

Jiaheng Liu · Riza Batista-Navarro · Qian Liu · Niklas Muennighoff · Ge Zhang
9:00 AM - 6:00 PM

Foundation models (FMs) have revolutionized artificial intelligence (AI) research across many domains, enabling rapid adaptation to diverse downstream tasks. These FMs, trained on massive and high-quality datasets, have demonstrated remarkable performance in natural language processing (e.g., BERT [4], GPT [12], Gemini [14]), computer vision (e.g., ViT [5], VQGAN [6]), speech recognition (e.g., Whisper [13]), and multi-modal understanding (e.g., GPT-4o, LLaVA [10], QwenVL [2]). Despite these advancements, the scientific transparency and reproducibility of FMs have not kept pace. Proprietary interfaces conceal crucial details, such as training data, architectural design, and development processes, limiting scientific understanding of these models’ biases and risks. To bridge this gap, there is a growing need for truly open foundation models that the research community can access and study. In response, a surge of open science works has emerged to address the issue, encouraging transparency of FMs within the research community. Notable examples include open-access large language models (LLMs) such as Llama [15], Mistral [8], and Qwen [1], as well as extensive pre-training datasets like RedPajama [3] and The Stack [9]. These efforts have democratized access to high-performance models and sparked further innovation. Moreover, several initiatives like OLMo [7] and StarCoder [11] now offer fully transparent models, providing detailed insights into training protocols, intermediate checkpoints, and data processing pipelines. Such transparency is critical to fostering reproducibility and accelerating research across the field. Therefore, the first Open Science for Foundation Models (OS-FMs) workshop aims to bring together a community of researchers committed to open science, reproducible research, and the open-source movement within AI. This workshop seeks contributions that explore key aspects of FMs, such as dataset curation, evaluation methodologies, high-performing models, and efficient implementations. While models have become increasingly large in this era, the workshop promotes the open sharing of both small (e.g., 1B) and large models, as long as their conclusions are based on rigorous scientific experiments. By emphasizing scientific discovery and open sharing, the workshop seeks to address the growing inaccessibility of foundation models, ensuring that the benefits of AI advancements are disseminated across the global research community.

Workshop

Workshop on Embodied Intelligence with Large Language Models In Open City Environment

Chen Gao · Yitao Liang · Xin Wang · Yu Zheng · Tong Xia · Fengli Xu · Yong Li
9:00 AM - 6:00 PM

This workshop is motivated by the fact that human beings have strong embodied intelligence in open environments, while this remains challenging for large language models and LLM agents. Despite some progress on embodied AI in static and indoor environments, LLM agents still struggle with tasks in large-scale outdoor environments, such as navigation, search, spatial reasoning, and task planning. We therefore propose this workshop to discuss recent advances in this research area and to look ahead to future developments. Specifically, it delves into topics of outdoor embodied intelligence, such as spatial intelligence and embodied perception, reasoning and planning, decision-making and action, multi-agent and human-agent collaboration, and the development of simulators, testbeds, datasets, and benchmarks. This comprehensive exploration of embodied LLM agents in open city environments holds the potential to advance the field of artificial intelligence and open up new applications in various domains.

Invited speakers (listed in alphabetical order):
- Jieneng Chen (Johns Hopkins University, remote)
- Xihui Liu (The University of Hong Kong)
- Dhruv Shah (DeepMind)
- Qi Wu (University of Adelaide)
- Saining Xie (New York University)
- Hengshuang Zhao (The University of Hong Kong)

Workshop

ICLR 2025 Workshop on Bidirectional Human-AI Alignment

Hua Shen · Ziqiao Ma · Reshmi Ghosh · Tiffany Knearem · Michael Xieyang Liu · Sherry Wu · Andrés Monroy-Hernández · Diyi Yang · Antoine Bosselut · Furong Huang · Tanu Mitra · Joyce Chai · Marti Hearst · Dawn Song · Yang Li
9:00 AM - 6:00 PM

As AI systems grow more integrated into real-world applications, the traditional one-way approach to AI alignment is proving insufficient. Bidirectional Human-AI Alignment proposes a new, dynamic framework where alignment is viewed as an ongoing, reciprocal process, with both humans and AI systems adapting over time. This paradigm acknowledges the complexity of human-AI interactions and emphasizes the need for continuous adaptation to evolving human values, societal contexts, and feedback loops. Our workshop at ICLR 2025 focuses on machine learning techniques that can drive this bidirectional alignment, including reinforcement learning, interactive learning, and multi-task learning, enabling AI systems to evolve in response to real-world changes. We also explore value specification, human-in-the-loop frameworks, and scalable post-training alignment methods. Additionally, the workshop will address evaluation techniques for real-time alignment adjustments and the societal implications of maintaining alignment across diverse human populations. By fostering collaboration between AI, HCI, and social science researchers, the workshop aims to create scalable, adaptive alignment frameworks that reflect ethical and societal goals. This event offers a novel approach to alignment research, emphasizing mutual human-AI adaptation and interdisciplinary cooperation to ensure AI systems remain aligned with human values.

Workshop

Frontiers in Probabilistic Inference: learning meets Sampling

Tara Akhound-Sadegh · Marta Skreta · Yuanqi Du · Sarthak Mittal · Joey Bose · Alexander Tong · Kirill Neklyudov · Max Welling · Michael Bronstein · Arnaud Doucet · Aapo Hyvarinen
9:00 AM - 6:00 PM

Probabilistic inference, particularly through the use of sampling-based methods, is a cornerstone for modeling across diverse fields, from machine learning and statistics to natural sciences such as physics, biology, and chemistry. However, many challenges exist, including scaling, which has resulted in the development of new machine learning methods. In response to these rapid developments, we propose a workshop, Frontiers in Probabilistic Inference: learning meets Sampling (FIP), to foster collaboration between communities working on sampling and learning-based inference. The workshop aims to center community discussions on (i) key challenges in sampling, (ii) new sampling methods, and (iii) their applications to natural sciences and uncertainty estimation. We have assembled an exciting speaker list with diverse perspectives; our goal is that attendees leave with a deeper understanding of the latest advances in sampling methods, practical insights into their applications, and new connections to collaborate on future research endeavors.

Workshop

Deep Generative Model in Machine Learning: Theory, Principle and Efficacy

Wei Huang · Mingyuan Bai · Andi Han · Taiji Suzuki · Qibin Zhao · Bamdev Mishra · Denny Wu · Ye Yuan · Maud Lemercier · Ernest Ryu
9:00 AM - 5:50 PM

Deep Generative Models (DGMs) have significantly advanced artificial intelligence (AI) through innovations like variational autoencoders, flow-based models, generative adversarial networks, and diffusion models. Despite their success, substantial theoretical and practical challenges remain, including the lack of rigorous theoretical frameworks, training instability, scalability issues, and challenges in adapting to structured domains. This workshop aims to bridge the gap between theory and practice by addressing two key questions: (1) How to develop comprehensive theoretical frameworks for DGMs? (2) How to develop principled strategies to improve the practical efficiency, reliability and transferability of DGMs in real-world applications? By bringing together experts from diverse backgrounds, the workshop will foster interdisciplinary collaboration to develop principled solutions, ultimately advancing the theoretical foundations and practical efficacy of DGMs.
