Towards Effective Discrimination Testing for Generative AI
Abstract
Generative AI (GenAI) models present new challenges in regulating against discriminatory behavior. We argue that GenAI fairness research has not yet met these challenges; instead, a significant gap remains between existing bias assessment methods and regulatory goals. This gap leads to ineffective regulation that can allow the deployment of reportedly fair, yet actually discriminatory, GenAI systems. Towards remedying this problem, we connect the legal and technical literature on GenAI bias evaluation and identify areas of misalignment. Through four case studies, we demonstrate how this misalignment can result in discriminatory outcomes in real-world deployments, especially in adaptive or complex environments. We offer practical recommendations for improving discrimination testing to better align with regulatory goals and to enhance the reliability of future fairness assessments.