

Invited Talk
in
Workshop: I Can't Believe It's Not Better: Challenges in Applied Deep Learning

Context is King: Unpacking the Generalizability Fallacy in Deep Learning

John Kalantari


Abstract:

For over twenty years, the deep learning community has celebrated each new model that surpasses its predecessors on standard benchmarks, often with the implicit assumption that these performance gains guarantee success in novel applications. Yet this optimism repeatedly unravels when models are deployed in the real world, exposing a critical blind spot: context is not just a detail, it is the foundation of robustness and performance. In this talk, we will discuss the persistent fallacy that a model's benchmark superiority ensures generalizability. We will explore how contextual factors, ranging from shifting data distributions to domain-specific constraints, consistently challenge the universality of even the most advanced architectures. Drawing on real-world examples and empirical insights from one of the world's preeminent healthcare institutions, this talk will highlight why ignoring context undermines applied deep learning and propose strategies to rethink the ML lifecycle for truly adaptive, resilient systems. Context isn't a footnote; it's the key to unlocking deep learning's potential beyond the lab.

Bio: John Kalantari is the Chief Technology Officer of YRIKKA and an Assistant Professor at the University of Minnesota. He previously served as Director of AI at the Mayo Clinic, holding appointments in the Department of Surgery, the Department of Quantitative Health Sciences, and the Center for Individualized Medicine. He is also the founder of the Biomedical Artificial General Intelligence Lab (BAGIL) at Mayo Clinic, an interdisciplinary group focused on developing digital health tools and predictive models to improve patient care and expand healthcare access through causal machine learning and reinforcement learning. At YRIKKA, he leads pioneering advancements in multi-modal generative AI, emphasizing the quantification of model uncertainty and robustness in high-stakes applications such as national defense and healthcare. His work bridges the gap between AI research and critical real-world implementations, pushing the boundaries of generative models to handle diverse data modalities and enhance decision-making in complex environments.
