
Poster in Workshop: Time Series Representation Learning for Health

DyNeMoC: A semi-supervised architecture for classifying time series brain data

Abu Mohammad Shabbir Khan · Chetan Gohil · Pascal Notin · Joost van Amersfoort · Mark Woolrich · Yarin Gal


Abstract:

Understanding how different regional networks of the brain are activated, and how those activations change over time, can help identify the onset of various neurodegenerative diseases, assess the efficacy of different treatment regimens for those illnesses, and develop brain-computer interfaces for patients with different types of disabilities. To explain dynamic brain networks, an RNN-VAE model named DyNeMo has recently been proposed. This model takes into account the whole recorded history of brain states while modeling their dynamics and captures the complexities of larger datasets better than previous approaches. In this paper, we show that the latent representations learned by DyNeMo through unsupervised training are not sufficient for downstream classification tasks, and we propose a new semi-supervised model named DyNeMoC that overcomes this shortcoming. The downstream task we study is the classification of visual stimuli from MEG recordings. We show that both proposed variants of DyNeMoC, DyNeMoC-RNN and DyNeMoC-Transformer, yield latent representations that are more useful for stimulus classification, with the transformer variant outperforming the RNN one. Learning representations that are directly linked to a downstream task in this manner could ultimately help improve the monitoring and treatment of certain neurodegenerative diseases and build better brain-computer interfaces.
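The abstract gives no implementation details, but a minimal sketch of the kind of semi-supervised setup it describes, an RNN-VAE over the time series with a classification head trained jointly on the latents, might look as follows. This is a hedged illustration assuming a PyTorch-style implementation; the class names (SemiSupervisedRNNVAE, semi_supervised_loss), layer sizes, MSE reconstruction term, and the loss weight alpha are all assumptions for illustration, not the authors' architecture or code.

```python
import torch
import torch.nn as nn


class SemiSupervisedRNNVAE(nn.Module):
    """Illustrative RNN-VAE with a classification head over the latent sequence."""

    def __init__(self, n_channels=306, latent_dim=10, hidden_dim=64, n_classes=2):
        super().__init__()
        # Inference RNN: observed time series -> latent Gaussian parameters.
        self.encoder = nn.LSTM(n_channels, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # Generative RNN: sampled latents -> reconstructed data.
        self.decoder = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.to_obs = nn.Linear(hidden_dim, n_channels)
        # Classification head over the latents (RNN variant shown here;
        # a Transformer encoder could be swapped in for the other variant).
        self.classifier = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.to_class = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):
        h, _ = self.encoder(x)                                    # (batch, time, hidden)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterisation
        d, _ = self.decoder(z)
        x_hat = self.to_obs(d)                                    # reconstruction
        c, _ = self.classifier(z)
        logits = self.to_class(c[:, -1])                          # classify from final state
        return x_hat, mu, logvar, logits


def semi_supervised_loss(x, x_hat, mu, logvar, logits, labels, alpha=1.0):
    # Unsupervised ELBO-style term plus a supervised cross-entropy term.
    recon = nn.functional.mse_loss(x_hat, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    ce = nn.functional.cross_entropy(logits, labels)
    return recon + kl + alpha * ce


# Toy usage: 8 labelled segments, 100 time points, 306 MEG channels.
x = torch.randn(8, 100, 306)
labels = torch.randint(0, 2, (8,))
model = SemiSupervisedRNNVAE()
x_hat, mu, logvar, logits = model(x)
loss = semi_supervised_loss(x, x_hat, mu, logvar, logits, labels)
loss.backward()
```

The point of jointly minimising the reconstruction and classification terms, as opposed to the purely unsupervised training used for DyNeMo, is that the supervised signal shapes the latent space so that it remains useful for the downstream stimulus-classification task.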
