

Poster in Workshop: Building Trust in LLMs and LLM Applications: From Guardrails to Explainability to Regulation

Fast Proxies for LLM Robustness Evaluation

Tim Beyer · Jan Schuchardt · Leo Schwinn · Stephan Günnemann


Abstract: Evaluating the robustness of LLMs to adversarial attacks is crucial for safe deployment, yet current red-teaming methods are often prohibitively expensive. We compare the ability of fast proxy metrics to predict the real-world robustness of an LLM against a simulated attacker ensemble. This allows us to estimate a model's robustness to computationally expensive attacks without running the attacks themselves. Specifically, we consider gradient-descent-based embedding-space attacks, prefilling attacks, and direct attacks. Although direct attacks in particular do not achieve high attack success rates (ASR), we find that they and embedding-space attacks can predict ASR well, achieving $r_p=0.86$ (linear) and $r_s=0.97$ (Spearman rank) correlations with the full attack ensemble while reducing computational cost by three orders of magnitude.
