Virtual presentation / poster accept

Conditional Positional Encodings for Vision Transformers

Xiangxiang Chu · Zhi Tian · Bo Zhang · Xinlong Wang · Chunhua Shen

Keywords: [ Deep Learning and representational learning ] [ vision transformer ]


Abstract:

We propose a conditional positional encoding (CPE) scheme for vision Transformers. Unlike previous fixed or learnable positional encodings, which are predefined and independent of the input tokens, CPE is dynamically generated and conditioned on the local neighborhood of the input tokens. As a result, CPE generalizes easily to input sequences longer than any seen during training. Moreover, CPE preserves the translation equivalence desired in vision tasks, which improves performance. We implement CPE with a simple Position Encoding Generator (PEG) that can be seamlessly incorporated into the current Transformer framework. Built on PEG, we present the Conditional Position encoding Vision Transformer (CPVT). We demonstrate that CPVT produces attention maps visually similar to those of models with learned positional encodings, while delivering superior results.
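To make the idea concrete, below is a minimal PyTorch sketch of a PEG module in the spirit of the abstract: the tokens are reshaped back onto their 2D grid and passed through a depthwise convolution whose output serves as the conditional positional encoding. This is an illustrative sketch under assumed hyperparameters (kernel size 3, zero padding, a residual add), not necessarily the authors' exact implementation; the class and argument names are hypothetical.

```python
import torch
import torch.nn as nn


class PEG(nn.Module):
    """Sketch of a Position Encoding Generator (PEG).

    Assumes the depthwise-convolution instantiation suggested by the
    abstract; kernel size, padding, and the residual connection are
    illustrative choices.
    """

    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        # Depthwise conv over the 2D token grid: the local window keeps
        # translation equivalence, and because it is convolutional it
        # works for any grid size, so longer input sequences are handled
        # without retraining.
        self.proj = nn.Conv2d(dim, dim, kernel_size,
                              padding=kernel_size // 2, groups=dim)

    def forward(self, tokens: torch.Tensor, height: int, width: int) -> torch.Tensor:
        # tokens: (batch, N, dim) with N == height * width
        # (a class token, if any, would be handled separately).
        b, n, c = tokens.shape
        feat = tokens.transpose(1, 2).reshape(b, c, height, width)
        feat = self.proj(feat) + feat              # conditional encoding + residual
        return feat.flatten(2).transpose(1, 2)     # back to (batch, N, dim)


# Usage: the same module serves a 14x14 grid and a larger 24x24 grid.
peg = PEG(dim=192)
out_small = peg(torch.randn(2, 14 * 14, 192), 14, 14)
out_large = peg(torch.randn(2, 24 * 24, 192), 24, 24)
```

Because the encoding is generated from each token's neighborhood rather than looked up from a fixed-length table, the same weights apply to arbitrary grid sizes, which is what lets CPE generalize to longer sequences than those seen during training.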