Affinity Posters
Blog Track Session 2
David Dobre · Leo Schwinn · Claire Vernade · Charlie Gauthier · Fabian Pedregosa · Gauthier Gidel
Halle B
Schedule
Tue 7:30 a.m. - 9:30 a.m. | It's Time to Move On: Primacy Bias and Why It Helps to Forget (Poster #3)
Poster Location: Halle B #3
'The Primacy Bias in Deep Reinforcement Learning' (Nikishin et al., 2022) demonstrates how a deep learning model's first experiences can cause catastrophic memorization, and how this can be prevented. In this post we describe primacy bias, summarize the authors' key findings, and present a simple environment for experimenting with primacy bias.
Matthew Kielo · Vladimir Lukin
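The remedy studied in Nikishin et al. (2022) is to periodically re-initialize network weights while keeping the replay buffer, so that early (possibly overfit) experience does not dominate training forever. A minimal sketch of that loop; the class and parameter names are illustrative, not from the post:

```python
import random

class ResettingAgent:
    """Toy agent illustrating the reset remedy for primacy bias:
    periodically re-initialize the learned weights while keeping
    the replay buffer intact."""

    def __init__(self, reset_every=1000, seed=0):
        self.rng = random.Random(seed)
        self.reset_every = reset_every
        self.steps = 0
        self.replay_buffer = []              # survives resets
        self.weights = self._init_weights()  # forgotten on reset

    def _init_weights(self):
        # Stand-in for re-initializing a neural network.
        return [self.rng.uniform(-1, 1) for _ in range(4)]

    def observe(self, transition):
        self.replay_buffer.append(transition)

    def train_step(self):
        self.steps += 1
        if self.steps % self.reset_every == 0:
            # Reset: forget the network, keep the data.
            self.weights = self._init_weights()

agent = ResettingAgent(reset_every=3)
for t in range(6):
    agent.observe(("state", t))
    agent.train_step()

assert len(agent.replay_buffer) == 6  # data kept across two resets
```

The key design point is the asymmetry: the data (buffer) persists, only the function approximator is forgotten.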
Tue 7:30 a.m. - 9:30 a.m. | Building Diffusion Model's theory from ground up (Poster #2)
Poster Location: Halle B #2
Diffusion models, a new family of generative models, have taken the world by storm since the seminal paper by Ho et al. [2020]. While diffusion models are often described as probabilistic Markov chains, their fundamental principle lies in the decades-old theory of stochastic differential equations (SDEs), as later shown by Song et al. [2021]. In this article, we go back and revisit the 'fundamental ingredients' behind the SDE formulation, and show how the idea can be 'shaped' to arrive at the modern form of score-based diffusion models. We start from the very definition of the 'score', how it was used in the context of generative modeling, how the necessary theoretical guarantees are achieved, and how the design choices were made, finally arriving at the more 'principled' framework of score-based diffusion. Throughout the article, we provide several intuitive illustrations for ease of understanding.
Ayan Das
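As a concrete anchor for the 'score' the abstract starts from: the score of a density p is the gradient of its log-density, ∇_x log p(x), which for a Gaussian N(0, σ²) reduces to -x/σ². A small numeric check of that identity (illustrative, not taken from the post):

```python
import math

def log_pdf(x, sigma=2.0):
    # Log-density of a 1-D Gaussian N(0, sigma^2).
    return -0.5 * (x / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def score_numeric(x, sigma=2.0, eps=1e-5):
    # Score = d/dx log p(x), here via central finite differences.
    return (log_pdf(x + eps, sigma) - log_pdf(x - eps, sigma)) / (2 * eps)

def score_exact(x, sigma=2.0):
    # Closed form for the Gaussian: -x / sigma^2.
    return -x / sigma ** 2

assert abs(score_numeric(1.5) - score_exact(1.5)) < 1e-6
```

Score-based diffusion replaces this closed form with a neural network trained to approximate the score of the (unknown) data distribution at each noise level.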
Tue 7:30 a.m. - 9:30 a.m. | Understanding gradient inversion attacks from the prior knowledge perspective (Poster #1)
Poster Location: Halle B #1
In this blog post, we survey multiple works on gradient inversion attacks (GIAs), point out the challenges that remain to be solved, and provide a prior-knowledge perspective for understanding the logic behind recent papers.
Yanbo Wang · Jian Liang · Ran He
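One simple instance of the kind of attack the abstract discusses: for a fully connected layer with a bias term, the gradients shared in federated learning reveal the layer's input in closed form, since dL/dW[i][j] = dL/dy[i] · x[j] and dL/db[i] = dL/dy[i]. A toy sketch with made-up numbers (no real training framework involved):

```python
# Private input and upstream gradient at the layer output y = W x + b.
x = [0.5, -1.0, 2.0]          # input the attacker should not know
dL_dy = [0.7, -0.2]           # gradient of the loss w.r.t. y

# Gradients a client would share: chain rule for the linear layer.
grad_W = [[g * xj for xj in x] for g in dL_dy]   # dL/dW[i][j] = dL/dy[i] * x[j]
grad_b = dL_dy[:]                                # dL/db[i]    = dL/dy[i]

# Attacker side: any row i with grad_b[i] != 0 reveals x exactly,
# because grad_W[i][j] / grad_b[i] = x[j].
i = 0
x_rec = [grad_W[i][j] / grad_b[i] for j in range(len(x))]

assert all(abs(a - b) < 1e-9 for a, b in zip(x, x_rec))
```

Deeper networks break this closed form, which is where the optimization-based attacks and the prior knowledge discussed in the post come in.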