

Poster

Discovering Non-monotonic Autoregressive Orderings with Variational Inference

Xuanlin Li · Brandon Trabucco · Dong Huk Park · Michael Luo · Sheng Shen · Trevor Darrell · Yang Gao

Keywords: [ computer vision ] [ variational inference ] [ natural language processing ] [ reinforcement learning ] [ optimization ] [ unsupervised learning ]


Abstract:

The predominant approach for language modeling is to encode a sequence of tokens from left to right, but this eliminates a source of information: the order in which the sequence was naturally generated. One strategy to recover this information is to decode both the content and the ordering of tokens. Some prior work supervises content and ordering with hand-designed loss functions that encourage specific orders, or bootstraps from a predefined ordering; these approaches require domain-specific insight. Other prior work searches over the valid insertion operations that lead to ground-truth sequences during training, which has high time complexity and cannot be efficiently parallelized. We address these limitations with an unsupervised learner that can be trained in a fully parallelizable manner to discover high-quality autoregressive orders in a data-driven way, without a domain-specific prior. The learner is a neural network that performs variational inference with the autoregressive ordering as a latent variable. Since the corresponding variational lower bound is not differentiable, we develop a practical algorithm for end-to-end optimization using policy gradients. Strong empirical results on sequence modeling tasks suggest that our algorithm discovers autoregressive orders that vary across sequences and are competitive with, or even better than, fixed orders.
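
For concreteness, the variational lower bound the abstract refers to has the standard form below, written here with an inference network $q_\phi$ over orderings and a decoder $p_\theta$ that generates the tokens of $x$ in the sampled order $z$. This is a generic sketch of the setup, since the page itself defines none of these symbols:

$$\log p_\theta(x) \;\ge\; \mathbb{E}_{z \sim q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; \mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big)$$

Because $z$ is a discrete permutation of the token positions, the expectation cannot be differentiated through the sampling step, which is why the abstract turns to policy gradients. The PyTorch sketch below shows a score-function (REINFORCE) estimator for such a bound; the per-step logits parameterization, the function names, and the uniform prior over orderings (whose constant KL contribution folds into the baseline) are illustrative assumptions, not the paper's implementation:

```python
import torch


def sample_ordering(logits):
    """Sample a generation order one position at a time.

    ``logits`` is a (T, T) tensor whose row t scores which of the T
    sequence positions to generate at step t. This parameterization is
    a hypothetical stand-in for the paper's encoder, used only to make
    the sketch self-contained.
    """
    T = logits.size(0)
    remaining = torch.ones(T, dtype=torch.bool)
    order, log_q = [], torch.zeros(())
    for t in range(T):
        # Mask positions already generated, then sample the next one.
        step = logits[t].masked_fill(~remaining, float("-inf"))
        dist = torch.distributions.Categorical(logits=step)
        pos = dist.sample()
        log_q = log_q + dist.log_prob(pos)  # accumulate log q_phi(z | x)
        remaining[pos] = False              # each position is used once
        order.append(pos)
    return torch.stack(order), log_q


def surrogate_loss(log_p_x_given_z, log_q_z, baseline=0.0):
    """Score-function (REINFORCE) surrogate for the lower bound.

    The reward log p_theta(x | z) - log q_phi(z | x) is detached and
    weights the score function, which carries the gradient to the
    encoder; the decoder term is differentiated directly. A uniform
    prior over orderings is assumed, so its constant contribution is
    absorbed by the baseline.
    """
    reward = (log_p_x_given_z - log_q_z).detach()
    return -(reward - baseline) * log_q_z - log_p_x_given_z


# Hypothetical usage: logits from an encoder, log-likelihood from a decoder.
logits = torch.randn(5, 5, requires_grad=True)
order, log_q = sample_ordering(logits)
log_p = torch.tensor(-12.3)  # stand-in for the decoder's log p(x | z)
loss = surrogate_loss(log_p, log_q)
loss.backward()              # gradient reaches the encoder via log_q
```

The estimator is unbiased because the expected score is zero, so subtracting a baseline only reduces variance; this is the standard way to optimize a bound end to end when the latent variable, here the ordering, is discrete.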
