Poster in Workshop: Quantify Uncertainty and Hallucination in Foundation Models: The Next Frontier in Reliable AI
Predictive Inference Is Really Free with In-Context Learning
Sohom Mukherjee · Ivane Antonov · Kai Günder · Magnus Maichle
Keywords: [ Transformers ] [ In-context Learning ] [ Conformal Prediction ] [ Regression ] [ Predictive Inference ]
In this work, we consider the problem of constructing prediction intervals (PIs) for point predictions obtained using transformers. We propose a novel method for constructing PIs, called in-context Jackknife+ (ICJ+), which uses a meta-learned transformer trained via in-context learning (ICL) to perform training-free leave-one-out (LOO) predictions, i.e., the transformer is only prompted with the LOO datasets and never retrained. We provide distribution-free coverage guarantees for the proposed ICJ+ algorithm under mild assumptions by leveraging the stability of in-context-trained transformers. We evaluate the coverage and width of the intervals obtained with ICJ+ on synthetic i.i.d. data for five classes of functions, and observe that their performance is comparable to or better than that of the benchmark Jackknife+ (J+) and the true confidence intervals.
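To make the construction concrete, below is a minimal sketch of how an ICJ+-style interval could be assembled: the standard Jackknife+ recipe (Barber et al., 2021) with each leave-one-out model replaced by a single forward pass of an in-context learner prompted with the LOO dataset. The function `predict_in_context` is a hypothetical wrapper around the meta-trained transformer (it places a context set in the prompt and returns a point prediction at a query input); it is not part of the paper's released code, and the details of the authors' actual implementation may differ.

```python
import numpy as np

def icj_plus_interval(predict_in_context, X, y, x_test, alpha=0.1):
    """Jackknife+ interval where each leave-one-out "model" is realized by
    prompting an in-context learner with the LOO dataset (no retraining).

    predict_in_context(context_X, context_y, query_x) -> float
        Hypothetical wrapper: puts (context_X, context_y) in the prompt of a
        meta-trained transformer and returns its prediction at query_x.
    """
    n = len(X)
    loo_resid = np.empty(n)
    loo_test_pred = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i                 # leave point i out of the context
        ctx_X, ctx_y = X[mask], y[mask]
        # LOO prediction at the held-out training point gives the residual R_i
        loo_resid[i] = abs(y[i] - predict_in_context(ctx_X, ctx_y, X[i]))
        # LOO prediction at the test point
        loo_test_pred[i] = predict_in_context(ctx_X, ctx_y, x_test)

    lower_scores = np.sort(loo_test_pred - loo_resid)
    upper_scores = np.sort(loo_test_pred + loo_resid)
    # Jackknife+ quantiles: floor(alpha*(n+1))-th smallest for the lower end,
    # ceil((1-alpha)*(n+1))-th smallest for the upper end.
    k_lo = int(np.floor(alpha * (n + 1)))
    k_hi = int(np.ceil((1 - alpha) * (n + 1)))
    lower = -np.inf if k_lo == 0 else lower_scores[k_lo - 1]
    upper = np.inf if k_hi > n else upper_scores[k_hi - 1]
    return lower, upper
```

Because the LOO "refits" are just n prompted forward passes, the interval comes at essentially the cost of inference, which is the sense in which predictive inference is "free" with ICL in this setting.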