


Invited Talks
Pushmeet Kohli

Scientific advances over the last several centuries have not only expanded our understanding of the world, but have also raised the standard of living for many people across the globe. However, massive challenges still face humanity, as evidenced by climate change and the COVID-19 pandemic. One of the difficulties of modern science is making sense of the vast amount of information we have gathered about the world, from the Large Hadron Collider to massive genome projects: it is impossible for any individual to comprehend it all.

In this talk, I will discuss how AI, and techniques like machine learning, can contribute to progress on challenging and important problems in a wide range of scientific disciplines, from genomics and structural biology to quantum chemistry and even pure mathematics.

Been Kim

AI has arrived in our lives and is making important decisions that affect us. How should we work with this new class of co-workers? The goal of interpretability is to engineer our relationships with AI, in part by building tools that produce explanations from AI models. But I argue that we also need to study AI machines as scientific objects, both in isolation and together with humans. Doing so not only provides principles for the tools we build, but is also necessary to take our working relationship with AI to the next level. Our ultimate goal is a language that will enable us to learn from and be inspired by AI. This language will not be perfect (no language is), but it will be useful. Just as human language is known to shape our thinking, this language will also shape us and future AI.

Jenny Davis

Affordances are how the features of a technology shape, but do not determine, the uses and effects of that technology. In this address, I will demonstrate the value of an affordance framework for the analysis and design of ML systems. Specifically, I will delineate and apply the mechanisms and conditions framework of affordance, which models the way technologies request, demand, encourage, discourage, refuse, and allow technical and social outcomes. Illustrated through a case example that traverses critical analysis of an ML system and its imagined (re)making, the mechanisms and conditions framework lays bare not just that technical choices are profoundly social, but also how and for whom. This approach displaces vagaries and general claims with the particularities of systems in context, empowering critically minded practitioners while holding power, and the systems that power relations produce, to account.

Cordelia Schmid

In this talk, we present recent progress on large-scale learning of multimodal video representations. We start by presenting VideoBERT, a joint model for video and language that repurposes the BERT model for multimodal data. This model achieves state-of-the-art results on zero-shot prediction and video captioning. Next, we present an approach to video question answering that relies on training from instruction videos and cross-modal supervision with a textual question-answering module. We show state-of-the-art results for video question answering without any supervision (zero-shot VQA) and demonstrate that our approach obtains competitive results when pre-trained and then fine-tuned on video question answering datasets. We conclude the talk by presenting the recent VideoCC dataset, which transfers image captions to video and enables state-of-the-art performance on zero-shot video and audio retrieval and video captioning.
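
To make the joint video-and-language modelling concrete, here is a minimal sketch of the kind of input construction used in VideoBERT-style models: pooled clip features are quantized against a codebook into discrete "visual words" and concatenated with text tokens into a single BERT-style sequence. The codebook, token ids, and helper functions below are illustrative assumptions written for this sketch, not the actual VideoBERT implementation.

```python
# Sketch of joint text/video sequence construction (illustrative, not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

# Toy "codebook" of visual-word centroids; real systems cluster learned clip features.
CODEBOOK = rng.normal(size=(1024, 128))          # 1024 visual words, 128-d features

def quantize_clip_features(clip_features: np.ndarray) -> list[int]:
    """Map each clip feature vector to the id of its nearest codebook centroid."""
    dists = np.linalg.norm(clip_features[:, None, :] - CODEBOOK[None, :, :], axis=-1)
    return dists.argmin(axis=1).tolist()

def build_joint_sequence(text_token_ids: list[int], visual_token_ids: list[int],
                         text_vocab_size: int = 30000) -> list[int]:
    """Concatenate text and visual tokens into one BERT-style sequence:
    [CLS] text ... [SEP] video ... [SEP]; visual ids are offset past the text vocab."""
    CLS, SEP = 101, 102                           # conventional BERT special-token ids
    visual_offset = text_vocab_size
    return ([CLS] + text_token_ids + [SEP]
            + [visual_offset + v for v in visual_token_ids] + [SEP])

# Example: 8 video clips' worth of pooled features plus a short (made-up) caption.
clip_features = rng.normal(size=(8, 128))
sequence = build_joint_sequence([2023, 3899, 2003], quantize_clip_features(clip_features))
print(sequence)
```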

Kunle Olukotun

As the benefits from Moore’s Law diminish, future computing performance improvements must rely on specialized accelerators for applications in artificial intelligence and data processing. In the future, these applications will be characterized by terabyte-sized models, data sparsity, and irregular control flow that will challenge the capabilities of conventional CPUs and GPUs.

In this talk, I will explain how Reconfigurable Dataflow Accelerators (RDAs) can be used to boost the performance of a broad set of data-intensive applications with these characteristics. SambaNova Systems is using RDA technology, contained in its Reconfigurable Dataflow Units (RDUs), to achieve record-setting performance on challenging machine learning tasks.

From Reinforcement Learning to AI
Doina Precup

Reinforcement learning has achieved great success in domains ranging from games to complex control tasks. But reinforcement learning can go beyond specific tasks and provide the foundation for building AI agents that continually learn from interaction in order to build knowledge and achieve goals.

In this talk, I will discuss the importance of rewards as a way to specify goals, and the ways in which reinforcement learning can be used to learn general procedural and predictive knowledge. I will outline recent progress in this area, as well as important open questions.
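
As a small illustration of how a reward alone can specify a goal, the sketch below runs tabular Q-learning on a toy chain environment; the environment, hyperparameters, and training budget are illustrative choices for this example, not material from the talk. The agent is told nothing about the task except that reaching the rightmost state yields reward 1, yet the learned greedy policy moves toward that state.

```python
# Tabular Q-learning on a toy chain (illustrative example only).
import numpy as np

N_STATES, GOAL = 5, 4            # states 0..4; reaching state 4 ends the episode
ACTIONS = (-1, +1)               # move left or right

def step(state: int, action_idx: int):
    next_state = min(max(state + ACTIONS[action_idx], 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else 0.0   # the reward alone specifies the goal
    return next_state, reward, next_state == GOAL

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    s, done = 0, False
    while not done:
        if rng.random() < epsilon:
            a = int(rng.integers(len(ACTIONS)))            # explore
        else:
            best = np.flatnonzero(Q[s] == Q[s].max())      # greedy, random tie-break
            a = int(rng.choice(best))
        s_next, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward the bootstrapped return.
        Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s_next].max()) - Q[s, a])
        s = s_next

# Expected result: greedy actions for states 0..3 are all index 1 (right), toward the goal.
print(Q.argmax(axis=1)[:GOAL])
```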

H. Sebastian Seung

A connectome represents brain connectivity as a directed graph in which nodes are neurons and edges are synapses. The connectome of C. elegans was reconstructed from electron microscopic images in the 1970s and 80s, but the manual labor of image analysis was prohibitive. Convolutional nets were applied to automate image analysis starting in the 2000s, and are now the basis of computational systems engineered to handle petascale datasets.
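
As a concrete illustration of this graph representation, here is a minimal sketch assuming Python and the networkx library; the synapse counts, and the use of networkx itself, are illustrative choices rather than data or tooling from an actual reconstruction.

```python
# A connectome as a directed graph: neurons are nodes, synapses are edges.
import networkx as nx

connectome = nx.DiGraph()
# Each edge runs from a presynaptic to a postsynaptic neuron; a weight can record
# the number of synaptic contacts observed in the images (counts here are made up).
connectome.add_edge("AVAL", "VA08", weight=12)
connectome.add_edge("AVAL", "DA03", weight=7)
connectome.add_edge("PVCL", "VB06", weight=5)

# A typical connectomic query: which neurons does AVAL project to, and how strongly?
for pre, post, data in connectome.out_edges("AVAL", data=True):
    print(f"{pre} -> {post}: {data['weight']} synapses")
```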

The connectome of the fruit fly Drosophila is expected in 2023. Cubic millimeter volumes of cerebral cortex have also been reconstructed. The explosion of connectomic information is revealing innate structures of nervous systems, and is expected to constrain theories of how brains learn. An exascale project to reconstruct an entire mouse brain connectome is now being planned, and depends on improving the accuracy of automated image analysis by confronting a long tail of failure modes, including diverse kinds of image artifacts.