

Poster in Workshop: Time Series Representation Learning for Health

Time Series as Images: Vision Transformer for Irregularly Sampled Time Series

Zekun Li · Shiyang Li · Xifeng Yan


Abstract:

Irregularly sampled time series are becoming increasingly prevalent in various domains, especially in medical applications. Although many highly customized methods have been proposed to tackle irregularity, effectively modeling the complicated dynamics and high sparsity of such data remains an open problem. This paper studies the problem from a whole new perspective: transforming irregularly sampled time series into line-graph images and adapting powerful vision transformers to perform time series classification in the same way as image classification. Our approach greatly simplifies algorithm design without assuming prior knowledge and can potentially be extended into a general-purpose framework. Despite its simplicity, we show that it substantially outperforms state-of-the-art specialized algorithms on several popular healthcare and human activity datasets. Our code and data are anonymously available at https://anonymous.4open.science/r/ViTST-TSRL4H-ICLR2023.
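
The core idea can be illustrated with a short sketch: render each irregularly sampled series as a line-graph image, then classify that image with an off-the-shelf vision transformer. This is a minimal illustration under stated assumptions, not the authors' released ViTST code; the synthetic series, figure size, marker style, and two-class head are illustrative choices.

```python
# Minimal sketch (not the authors' released code): plot an irregularly sampled
# series as a line-graph image and classify the image with a standard ViT.
import numpy as np
import matplotlib.pyplot as plt
import torch
import torchvision


def series_to_image(timestamps, values, size=224):
    """Render one irregularly sampled series as a (3, size, size) image tensor."""
    fig, ax = plt.subplots(figsize=(size / 100, size / 100), dpi=100)
    ax.plot(timestamps, values, marker="*", linewidth=1)
    ax.axis("off")
    fig.canvas.draw()
    img = np.asarray(fig.canvas.buffer_rgba())[..., :3]  # drop alpha channel
    plt.close(fig)
    return torch.from_numpy(img.copy()).permute(2, 0, 1).float() / 255.0


# Hypothetical irregular series: 30 observations at non-uniform time points.
t = np.sort(np.random.uniform(0, 48, size=30))
x = np.sin(t / 5) + 0.1 * np.random.randn(30)
image = series_to_image(t, x).unsqueeze(0)  # shape (1, 3, 224, 224)

# Standard ViT backbone; swap the head for an assumed 2-class task
# (e.g., binary outcome prediction on a healthcare dataset).
vit = torchvision.models.vit_b_16(weights=None)
vit.heads = torch.nn.Linear(vit.hidden_dim, 2)
logits = vit(image)
print(logits.shape)  # torch.Size([1, 2])
```

In practice, a multivariate series would be rendered with one sub-plot per variable arranged in a grid, and the backbone would be fine-tuned on the resulting images; the sketch above only shows the single-series rendering and forward pass.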
