

Poster in Workshop: Neural Network Weights as a New Data Modality

Uncovering Latent Chain of Thought Vectors in Large Language Models

Jason Zhang · Scott Viteri

Keywords: [ Interpretability ] [ Activation Engineering ] [ Chain of Thought Reasoning ] [ Steering Vectors ]


Abstract:

In this work, we examine how targeted perturbations in the activation space of Language Models (LMs) can encode complex reasoning patterns. We inject steering vectors, derived from LM activations, into LMs at inference time and study whether these vectors can induce Chain-of-Thought (CoT) reasoning without the need for natural language prompting. We demonstrate this approach on Llama3 8B Instruct and Mistral 7B v0.2 Instruct and show that activation-space interventions achieve competitive, if not superior, performance compared to traditional CoT prompting across multiple reasoning benchmarks, including GSM8k, MMLU, AGI Eval, and AI2 ARC. These findings suggest that neural network activations can encode reasoning patterns, offering a new application of activation space manipulation as a tool for tuning model behavior.
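The sketch below illustrates one common way such an activation-space intervention can be implemented, assuming the steering vector is computed as the mean difference between residual-stream activations on CoT-cued and plain prompts and added back at a single layer during generation. The layer index, scaling factor, and prompts are illustrative assumptions, not the authors' reported settings or exact derivation procedure.

```python
# Hypothetical sketch of CoT activation steering; not the authors' exact method.
# Assumptions: mean-difference steering vector, single injection layer, fixed scale.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Meta-Llama-3-8B-Instruct"  # one of the models from the abstract
LAYER = 16   # hidden-state index used for extraction/injection (hypothetical choice)
ALPHA = 4.0  # steering strength (hypothetical choice)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

def mean_activation(prompts, layer):
    """Average the residual-stream activation at `layer` over the last token of each prompt."""
    acts = []
    for p in prompts:
        ids = tokenizer(p, return_tensors="pt").to(model.device)
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        # hidden_states[0] is the embedding output, so index `layer`
        # is the output of decoder block `layer - 1`.
        acts.append(out.hidden_states[layer][0, -1, :])
    return torch.stack(acts).mean(dim=0)

# Paired prompts: the same question with and without an explicit CoT cue (illustrative).
cot_prompts   = ["Q: If a train travels 60 miles in 1.5 hours, what is its speed?\nLet's think step by step.\nA:"]
plain_prompts = ["Q: If a train travels 60 miles in 1.5 hours, what is its speed?\nA:"]

steering_vector = mean_activation(cot_prompts, LAYER) - mean_activation(plain_prompts, LAYER)

def add_steering(module, inputs, output):
    # Decoder layers return a tuple; the first element is the hidden states.
    hidden = output[0]
    hidden = hidden + ALPHA * steering_vector.to(device=hidden.device, dtype=hidden.dtype)
    return (hidden,) + output[1:]

# Hook the decoder block whose output corresponds to hidden_states[LAYER].
handle = model.model.layers[LAYER - 1].register_forward_hook(add_steering)
try:
    ids = tokenizer(
        "Q: A shop sells pens at 3 for $2. How much do 12 pens cost?\nA:",
        return_tensors="pt",
    ).to(model.device)
    with torch.no_grad():
        generated = model.generate(**ids, max_new_tokens=128, do_sample=False)
    print(tokenizer.decode(generated[0], skip_special_tokens=True))
finally:
    handle.remove()
```

In this kind of setup, adding the vector at every token position via a forward hook means no CoT text ever appears in the prompt; whether the model then produces step-by-step reasoning is exactly what the benchmarks in the abstract evaluate.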
