

Poster in Workshop: VerifAI: AI Verification in the Wild

ReFoRCE: A Text-to-SQL Agent with Self-Refinement, Format Restriction, and Column Exploration

Minghang Deng · Ashwin Ramachandran · Canwen Xu · Lanxiang Hu · Zhewei Yao · Anupam Datta · Hao Zhang


Abstract: Text-to-SQL systems have unlocked easier access to critical data insights by enabling natural language queries over structured databases. However, deploying such systems in enterprise environments remains challenging due to factors such as large, complex schemas ($> 3000$ columns), diverse SQL dialects (e.g., BigQuery, Snowflake), and sophisticated query requirements (e.g., transformation, analytics). Current state-of-the-art performance on the Spider 2.0 dataset, a benchmark built to mimic such complex environments, remains limited at 20\%. Key limitations include inadequate instruction-following, poor long-context comprehension, weak self-refinement, and insufficient dialect-specific knowledge. To address these gaps, we propose $\textbf{ReFoRCE}$ (Self-$\textbf{Re}$finement Agent with $\textbf{Fo}$rmat $\textbf{R}$estriction and $\textbf{C}$olumn $\textbf{E}$xploration), which introduces (1) table compression to mitigate long-context limitations, (2) format restriction to ensure accurate answer formats, and (3) iterative column exploration for enhanced schema understanding. Additionally, it employs a self-refinement pipeline consisting of (1) parallelized workflows with voting mechanisms and (2) a Common Table Expression (CTE) based refinement approach to handle unresolved cases. ReFoRCE achieves state-of-the-art results, scoring 26.69 on the Spider 2.0-Snow task and 24.50 on the Spider 2.0-Lite task.
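
To make the parallelized-workflow voting step concrete, the following is a minimal Python sketch, not the authors' implementation: several independently generated SQL candidates are executed, candidates are grouped by their execution result, and the majority result selects the winning query; if no candidate executes, the case is left unresolved (in ReFoRCE, such cases would be handed to the CTE-based refinement stage). The generate_sql and execute callables are hypothetical placeholders standing in for an LLM call and a database connection.

    from collections import Counter
    from typing import Any, Callable, Optional

    def vote_on_candidates(
        question: str,
        schema: str,
        generate_sql: Callable[[str, str], str],  # hypothetical: one LLM call returning a SQL string
        execute: Callable[[str], Any],            # hypothetical: runs SQL against the target warehouse
        n_workers: int = 5,
    ) -> Optional[str]:
        """Majority vote over execution results of independently generated SQL candidates."""
        # Each parallel workflow proposes its own candidate query.
        candidates = [generate_sql(question, schema) for _ in range(n_workers)]

        # Execute every candidate; candidates that raise an error get no vote.
        results = {}
        for sql in candidates:
            try:
                results[sql] = execute(sql)
            except Exception:
                continue

        if not results:
            return None  # unresolved case: defer to a later refinement stage

        # Group candidates by (serialized) execution result and pick the majority result.
        tally = Counter(repr(r) for r in results.values())
        winning_result, _ = tally.most_common(1)[0]

        # Return one candidate whose execution result matches the majority.
        return next(sql for sql, r in results.items() if repr(r) == winning_result)
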
