

Oral in Workshop: ICLR 2025 Workshop on Bidirectional Human-AI Alignment

Cooperative Agency-Centered LLMs


Abstract:

AI models are not readily accepted or adopted in highly consequential fields such as education, which serve societal ideals, because it is hard to inject human values such as agency, an essential ingredient for learning. Drawing on insights from cooperative AI and the Stable Alignment method of Liu et al. (2023), we propose (without evaluation) a method for aligning large language models (LLMs) with the human agency of two different groups: teachers and students. This could help ensure that effective learning still occurs when LLMs are used.
