Oral in Workshop: ICLR 2025 Workshop on Bidirectional Human-AI Alignment

Human Alignment: How Much We Adapt to LLMs?


Abstract:

Large Language Models (LLMs) are becoming a common part of our daily communication, yet most studies focus on improving these models, with fewer examining how they influence our behavior. Using a cooperative word game in which players aim to converge on a shared word, we investigate how people adapt their linguistic strategies when paired with either an LLM or another human. Our findings show that interactions with LLMs lead to more self-referential language and distinct alignment patterns, and that users' beliefs about their partners further modulate these effects. These findings highlight the reciprocal influence of human–AI dialogue and raise important questions about the long-term implications of embedding LLMs in everyday communication.