Poster

Xformer: Hybrid X-Shaped Transformer for Image Denoising

Jiale Zhang · Yulun Zhang · Jinjin Gu · Jiahua Dong · Linghe Kong · Xiaokang Yang

Halle B #29
Tue 7 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

In this paper, we present a hybrid X-shaped vision Transformer, named Xformer, which performs notably well on image denoising tasks. We explore strengthening the global representation of tokens from different scopes. In detail, we adopt two types of Transformer blocks. The spatial-wise Transformer block performs fine-grained local patch interactions across tokens defined along the spatial dimension. The channel-wise Transformer block performs direct global context interactions across tokens defined along the channel dimension. Building on this concurrent network structure, we design two branches to carry out these two interaction schemes. Within each branch, we employ an encoder-decoder architecture to capture multi-scale features. In addition, we propose the Bidirectional Connection Unit (BCU) to couple the learned representations from the two branches while providing enhanced information fusion. These joint designs enable Xformer to perform powerful global information modeling in both the spatial and channel dimensions. Extensive experiments show that Xformer, under comparable model complexity, achieves state-of-the-art performance on synthetic and real-world image denoising tasks. We also provide code and models at https://github.com/gladzhang/Xformer.
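The abstract describes the architecture only at a high level. The minimal PyTorch sketch below illustrates the two token definitions (spatial-wise and channel-wise attention) and a bidirectional coupling between the two branches; the module names, window size, head counts, and fusion rule are illustrative assumptions, not the authors' implementation, which is available at the GitHub link above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialWiseAttention(nn.Module):
    # Tokens are spatial positions inside non-overlapping local windows,
    # giving fine-grained local patch interactions.
    def __init__(self, dim, window_size=8, num_heads=4):
        super().__init__()
        self.win = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                       # x: (B, C, H, W)
        B, C, H, W = x.shape
        w = self.win
        # Partition into w x w windows -> (B * num_windows, w*w, C) token sequences.
        t = x.view(B, C, H // w, w, W // w, w).permute(0, 2, 4, 3, 5, 1)
        t = t.reshape(-1, w * w, C)
        t, _ = self.attn(t, t, t)               # attention across spatial tokens
        t = t.view(B, H // w, W // w, w, w, C).permute(0, 5, 1, 3, 2, 4)
        return t.reshape(B, C, H, W)

class ChannelWiseAttention(nn.Module):
    # Tokens are feature channels, so each attention step aggregates
    # direct global (image-wide) context.
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.heads = num_heads
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):                       # x: (B, C, H, W)
        B, C, H, W = x.shape
        q, k, v = (self.qkv(x)
                   .reshape(B, 3, self.heads, C // self.heads, H * W)
                   .unbind(dim=1))
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)).softmax(dim=-1)   # channel-to-channel map
        out = (attn @ v).reshape(B, C, H, W)
        return self.proj(out)

class BidirectionalConnectionUnit(nn.Module):
    # Illustrative fusion: each branch receives a projected copy of the other.
    def __init__(self, dim):
        super().__init__()
        self.s2c = nn.Conv2d(dim, dim, kernel_size=1)
        self.c2s = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, spatial_feat, channel_feat):
        return (spatial_feat + self.c2s(channel_feat),
                channel_feat + self.s2c(spatial_feat))

if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)
    s = SpatialWiseAttention(32)(x)
    c = ChannelWiseAttention(32)(x)
    s_fused, c_fused = BidirectionalConnectionUnit(32)(s, c)
    print(s_fused.shape, c_fused.shape)         # both (1, 32, 64, 64)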
