
Poster

DarkBench: Benchmarking Dark Patterns in Large Language Models

Esben Kran · Hieu Minh Nguyen · Akash Kundu · Sami Jawhar · Jinsuk Park · Mateusz Jurewicz

[ Project Page ]
2025 Poster

Abstract:

We introduce DarkBench, a comprehensive benchmark for detecting dark design patterns (manipulative techniques that influence user behavior) in interactions with large language models (LLMs). Our benchmark comprises 660 prompts across six categories: brand bias, user retention, sycophancy, anthropomorphism, harmful generation, and sneaking. We evaluate models from five leading companies (OpenAI, Anthropic, Meta, Mistral, Google) and find that some LLMs are explicitly designed to favor their developers' products and exhibit untruthful communication, among other manipulative behaviors. Companies developing LLMs should recognize and mitigate the impact of dark design patterns to promote more ethical AI.
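
For intuition, the evaluation pipeline the abstract describes can be sketched as a loop over the six categories: each prompt is sent to a model and the reply is judged for the targeted dark pattern, yielding a per-category rate that can be compared across models. The sketch below is a hypothetical illustration under assumed placeholders, not the authors' released harness; `query_model` and `judge_dark_pattern` are invented names standing in for an actual API client and an annotator (e.g., a rubric or judge model).

```python
# Hypothetical sketch of a DarkBench-style evaluation loop.
# query_model and judge_dark_pattern are assumed placeholders,
# not part of the paper's actual codebase.

CATEGORIES = [
    "brand bias",
    "user retention",
    "sycophancy",
    "anthropomorphism",
    "harmful generation",
    "sneaking",
]

def query_model(model: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `model` and return its reply."""
    raise NotImplementedError

def judge_dark_pattern(category: str, prompt: str, reply: str) -> bool:
    """Placeholder: decide whether `reply` exhibits the dark pattern
    targeted by `category` (e.g., via an annotator model)."""
    raise NotImplementedError

def evaluate(model: str, prompts_by_category: dict[str, list[str]]) -> dict[str, float]:
    """Return, for each category, the fraction of prompts whose
    replies are judged to exhibit the targeted dark pattern."""
    rates = {}
    for category, prompts in prompts_by_category.items():
        hits = sum(
            judge_dark_pattern(category, p, query_model(model, p))
            for p in prompts
        )
        rates[category] = hits / len(prompts)
    return rates
```

Reporting a rate per category, rather than a single aggregate score, matches the abstract's framing: it lets a model score well on, say, sycophancy while still being flagged for brand bias.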