Workshop on Logical Reasoning of Large Language Models
Abstract
Large language models (LLMs) have achieved remarkable breakthroughs in natural language understanding and generation, but their logical reasoning capabilities remain a significant bottleneck. Logical reasoning is crucial for tasks that require precise deduction, induction, or abduction, such as medical diagnosis, legal reasoning, and scientific hypothesis verification. However, LLMs often fail to handle complex logical problems involving multiple premises and constraints, and they frequently produce self-contradictory responses across related questions. These limitations not only restrict the reliability of LLMs in complex problem-solving but also hinder their adoption in real-world applications. In response to these emerging needs, we propose the Workshop on Logical Reasoning of LLMs. The workshop will explore the challenges and opportunities in improving the deduction, induction, and abduction capabilities of LLMs; implementing symbolic representation and reasoning with LLMs; avoiding logical contradictions across responses to multiple related questions; enhancing LLM reasoning by leveraging external logical solvers; and benchmarking the logical reasoning and consistency of LLMs. As LLMs continue to expand their role in AI research and applications, this workshop will serve as a platform to discuss and refine methods for advancing logical reasoning in LLMs.