Benchmarking Overton Pluralism in LLMs
Elinor Poole-Dayan · Jiayi Wu · Taylor Sorensen · Jiaxin Pei · Michiel Bakker
Abstract
We introduce a novel framework for measuring Overton pluralism in LLMs: the extent to which diverse viewpoints are represented in model outputs. We (i) formalize Overton pluralism as a set-coverage metric (OVERTONSCORE), (ii) conduct a large-scale US-representative human study (N=1209; 60 questions; 8 LLMs), and (iii) develop an automated benchmark that closely reproduces human judgments. On average, models achieve OVERTONSCORE values of 0.35–0.41, with DeepSeek-V3 performing best; yet all models remain far below the theoretical maximum of 1.0, revealing substantial headroom for improvement. Because repeated large-scale human studies are costly and slow, scalable evaluation tools are essential for model development. We therefore propose an automated benchmark that achieves high rank correlation with human judgments ($\rho = 0.88$), providing a practical proxy that complements, rather than replaces, human assessment. By turning pluralistic alignment from a normative aim into a measurable benchmark, our work establishes a foundation for systematic progress toward more pluralistic LLMs.
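As an illustrative sketch only (the paper's exact formalization may differ), a set-coverage metric of this kind can be written as the fraction of reference viewpoints covered by a model's response, averaged over questions. Here $V_q$ denotes an assumed set of human-elicited viewpoints for question $q$, $C_m(q) \subseteq V_q$ the subset covered by model $m$'s response, and $Q$ the question set:
$$\mathrm{OvertonScore}(m) \;=\; \frac{1}{|Q|} \sum_{q \in Q} \frac{|C_m(q)|}{|V_q|},$$
so a score of 1.0 corresponds to responses that cover every reference viewpoint for every question, consistent with the theoretical maximum stated above.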