Uncertainty Estimation via Hyperspherical Confidence Mapping
Abstract
Quantifying uncertainty in neural network predictions is essential for deploying models in high-stakes domains such as autonomous driving, healthcare, and manufacturing. While conventional approaches often depend on costly sampling or parametric distributional assumptions, we propose Hyperspherical Confidence Mapping (HCM), a simple yet principled framework for uncertainty estimation that is both sampling-free and distribution-free. HCM decomposes model outputs into a magnitude and a normalized direction vector constrained to lie on a unit hypersphere, enabling a novel interpretation of uncertainty as the degree to which this geometric constraint is violated. Grounded in this formulation, our method provides deterministic and interpretable uncertainty estimates applicable to both regression and classification. We validate the effectiveness of HCM across diverse benchmarks and real-world industrial tasks, demonstrating performance competitive with, or superior to, ensemble and evidential approaches, while significantly reducing inference cost and maintaining strong confidence–error alignment. Our results highlight the value of geometric structure in uncertainty estimation and position HCM as a versatile alternative to conventional techniques.
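The magnitude/direction decomposition described above can be sketched in a few lines. Note that this is a minimal illustration, not the paper's implementation: the function names and the choice of constraint-violation score (here, the absolute deviation of the raw output's norm from the unit sphere) are assumptions for exposition; the actual HCM score may be defined differently.

```python
import math

def hyperspherical_decompose(v):
    # Split a raw output vector into its magnitude and a unit-norm direction.
    m = math.sqrt(sum(x * x for x in v))
    u = [x / m for x in v]  # direction lies on the unit hypersphere by construction
    return m, u

def constraint_violation(v):
    # Hypothetical uncertainty proxy: how far the raw output's magnitude
    # strays from the unit-sphere constraint. Zero means the constraint
    # is satisfied exactly; larger values indicate higher uncertainty.
    m, _ = hyperspherical_decompose(v)
    return abs(m - 1.0)

# Example: a raw output of [3, 4] has magnitude 5 and direction [0.6, 0.8].
m, u = hyperspherical_decompose([3.0, 4.0])
```

Because the decomposition is a deterministic function of a single forward pass, the score requires no sampling, which is the source of the inference-cost savings claimed above.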