

Poster
in
Workshop: Neurosymbolic Generative Models (NeSy-GeMs)

[Remote poster] Grounded physical language understanding with probabilistic programs and simulated worlds

Cedegao Zhang · Catherine Wong · Gabriel Grand · Joshua B Tenenbaum


Abstract:

Human language richly invokes our intuitive physical knowledge. We talk about physical objects, scenes, properties, and events, and we can make predictions and draw inferences about physical worlds described entirely in language. Understanding this everyday language requires inherently probabilistic reasoning: over the possible physical worlds a description invokes, and over the uncertainty inherent to those worlds. In this paper, we propose PiLoT, a neurosymbolic generative model that translates language into probabilistic programs grounded in a physics engine. Our model integrates a large language-code model to robustly parse language into program expressions and uses a probabilistic physics engine to support inferences over scenes described in language. We construct a linguistic reasoning benchmark, based on prior psychophysics experiments, that requires reasoning about physical outcomes from linguistic scene descriptions. We show that PiLoT closely predicts human judgments and outperforms LLM baselines.
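The pipeline the abstract describes (parse language into a probabilistic program, then estimate outcome probabilities by simulation) can be sketched in miniature. This is an illustrative toy, not the authors' implementation: the parser, the prior probabilities, and the direct outcome sampling (in place of a real physics engine) are all assumptions made for the example.

```python
# Toy sketch of a language -> probabilistic program -> inference pipeline.
# All names and numeric priors here are illustrative assumptions; PiLoT's
# actual parser is an LLM and its simulator is a probabilistic physics engine.
import random

def parse_to_program(description):
    """Stand-in for the language-to-code parser: maps a scene description
    to a probabilistic program (here, a sampler over a single outcome)."""
    if "tall stack" in description:
        fall_prob = 0.7   # assumed prior: tall stacks usually topple
    else:
        fall_prob = 0.2   # assumed prior: short stacks are mostly stable
    def program():
        # A real system would sample a concrete scene and run physics;
        # here we draw the outcome directly from the assumed prior.
        return random.random() < fall_prob
    return program

def infer(description, n_samples=10_000):
    """Monte Carlo estimate of P(stack falls | description)."""
    program = parse_to_program(description)
    return sum(program() for _ in range(n_samples)) / n_samples

random.seed(0)
print(infer("a tall stack of blocks on a table"))
print(infer("a short stack of blocks on a table"))
```

The point of the sketch is the division of labor: the (hypothetical) parser turns free-form language into an executable generative program, and inference is then just repeated simulation of that program.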
