Keynote Talk: Yuandong Tian (Meta): Reason by Search or by Representation? A Path Towards Unifying Neural and Symbolic Decision Making
In: Workshop on Reasoning and Planning for Large Language Models
Abstract
By simply learning to predict the next token, Large Language Models (LLMs) have achieved impressive results on certain nontrivial reasoning and planning tasks that are often handled by symbolic solvers, while still failing on others. To demystify why this is the case, in this talk we study two LLM reasoning patterns that may contribute to the power of LLMs: reasoning by search and reasoning by representation. For reasoning by search, we introduce our work on leveraging search traces from symbolic solvers as chains of thought, and on further compressing them for better efficiency. For reasoning by representation, we explore alternative architectures and simple yet representative tasks to demonstrate how neural representations can be creative in solving such tasks, and how such representations can be explained by symbolic frameworks, leading to a possible unification of both paradigms in the future.