


Scaling Transformers for Low-Bitrate High-Quality Speech Coding

Julian Parker · Anton Smirnov · Jordi Pons · CJ Carr · Zack Zukowski · Zach Evans · Xubo Liu

2025 Poster

Abstract: The tokenization of audio with neural audio codec models is a vital part of modern AI pipelines for the generation or understanding of speech, alone or in a multimodal context. Traditionally, such tokenization models have concentrated on low-parameter-count architectures using only components with strong inductive biases. In this work we show that by applying a transformer architecture with a large parameter count to this problem, together with a flexible Finite Scalar Quantization (FSQ) based bottleneck, it is possible to reach state-of-the-art speech quality at extremely low bitrates of $400$ or $700$ bits per second. The trained models strongly outperform existing baselines in both objective and subjective tests.
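The FSQ bottleneck named in the abstract quantizes each latent channel independently to a small, fixed set of scalar levels, so the codebook is implicit and its size is the product of the per-channel level counts. Below is a minimal sketch of this idea, assuming PyTorch; the channel count, level counts, and tensor shapes are illustrative assumptions, not the paper's configuration, and odd level counts are used to sidestep the half-step offset that even counts require in the original FSQ formulation.

```python
# Minimal sketch of a Finite Scalar Quantization (FSQ) bottleneck.
# Levels, shapes, and channel count are illustrative assumptions.
import torch


def fsq(z: torch.Tensor, levels: list[int]) -> torch.Tensor:
    """Quantize each latent channel to a fixed number of scalar levels.

    z:      (..., len(levels)) real-valued latents from the encoder
    levels: odd number of quantization levels per channel, e.g. [5, 5, 5, 5]
    """
    L = torch.tensor(levels, dtype=z.dtype, device=z.device)
    half = (L - 1) / 2
    # Bound each channel to (-1, 1), then scale onto the level grid.
    bounded = torch.tanh(z) * half
    # Round to the nearest level; the straight-through estimator passes
    # gradients through the non-differentiable rounding step.
    quantized = bounded + (bounded.round() - bounded).detach()
    return quantized / half  # normalize back to roughly [-1, 1]


# Example: 5*5*5*5 = 625 implicit codes, i.e. log2(625) ≈ 9.3 bits per
# frame; the codec's bitrate is then bits-per-frame times frame rate.
z = torch.randn(2, 100, 4)          # (batch, frames, channels)
codes = fsq(z, levels=[5, 5, 5, 5])
```

One appeal of FSQ over a learned vector-quantization codebook is that there are no codebook parameters to train and no codebook-collapse failure mode; the bitrate is set directly by the choice of level counts and frame rate.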
