Unbalanced Soft-Matching Distance for Neural Representational Comparison with Partial Unit Correspondence
Abstract
Representational similarity metrics typically force all units to be matched, making them susceptible to the noise and outliers common in neural representations. We extend the soft-matching distance to a partial optimal transport setting that allows some neurons to remain unmatched, yielding correspondences that are rotation-sensitive yet robust. This unbalanced soft-matching distance offers theoretical advantages, relaxing strict mass conservation while preserving interpretable transport costs, and practical benefits: neurons can be ranked efficiently by cross-network alignment quality without costly iterative recomputation. In simulations, it preserves correct matches in the presence of outliers and reliably selects the correct model in noise-corrupted identification tasks. On fMRI data, it automatically excludes low-reliability voxels and produces voxel rankings by alignment quality that closely match those of computationally expensive brute-force approaches. It also achieves higher alignment precision across homologous brain areas than standard soft-matching, which must match every unit regardless of quality. In deep networks, highly matched units exhibit similar maximally exciting images, while unmatched units show divergent patterns. This ability to partition units by match quality enables focused analyses, \emph{e.g.,} testing whether networks have privileged axes even within their most aligned subpopulations. Overall, unbalanced soft-matching provides a principled and practical method for representational comparison under partial correspondence.
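For concreteness, the relaxation the abstract refers to can be written as a standard partial optimal transport program; the notation below ($C$, $a$, $b$, $P$, $m$) is illustrative and not drawn from the paper body. Given a cost matrix $C \in \mathbb{R}^{n \times p}$ whose entry $C_{ij}$ measures the dissimilarity between the activation profiles of unit $i$ in one network and unit $j$ in the other, and unit masses $a \in \mathbb{R}^{n}_{\ge 0}$, $b \in \mathbb{R}^{p}_{\ge 0}$, the balanced matching constraints $P\mathbf{1}_p = a$ and $P^{\top}\mathbf{1}_n = b$ are relaxed to inequalities together with a transported-mass budget $m$:
% A minimal sketch of the partial optimal transport program underlying an
% unbalanced soft-matching distance; the symbols are illustrative notation,
% not taken from the paper body.
\begin{equation}
  \mathrm{USMD}(X, Y) \;=\;
  \min_{P \in \mathbb{R}_{\ge 0}^{n \times p}}
  \langle P, C \rangle
  \quad \text{s.t.} \quad
  P \mathbf{1}_p \le a, \qquad
  P^{\top} \mathbf{1}_n \le b, \qquad
  \mathbf{1}_n^{\top} P \mathbf{1}_p = m .
\end{equation}
Setting $m = \min(\mathbf{1}_n^{\top} a,\, \mathbf{1}_p^{\top} b)$ recovers the fully matched (balanced) case, while choosing $m$ below this bound allows low-quality units to absorb no transported mass and thus remain unmatched; the per-unit transported mass $P\mathbf{1}_p$ then provides the alignment-quality ranking described above.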