

Spotlight Presentation in Workshop: Time Series Representation Learning for Health

Sound-Based Sleep Staging By Exploiting Real-World Unlabeled Data

JongMok Kim · Daewoo Kim · Eunsung Cho · Hai Tran · Joonki Hong · Dongheon Lee · JungKyung Hong · In-Young Yoon · Jeong-Whun Kim · Hyeryung Jang · Nojun Kwak


Abstract:

With growing interest in sleep monitoring at home, sound-based sleep staging with deep learning has emerged as a potential solution. However, collecting labeled data in home environments is restrictive due to the inconvenience of installing medical equipment at home. To address this, we propose novel training approaches that use readily accessible real-world sleep sound data. Our key contributions are a new semi-supervised learning technique, a sequential consistency loss, which accounts for the time-series nature of sleep sound, and a semi-supervised contrastive learning method that handles out-of-distribution data in unlabeled home recordings. Our model was evaluated on various datasets, including a labeled home sleep sound dataset and the public PSG-Audio dataset, demonstrating its robustness and generalizability across real-world scenarios.
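The abstract names a sequential consistency loss but does not give its formulation. The sketch below is a minimal, hypothetical illustration of one way such a loss could be implemented, assuming a pseudo-labeling consistency objective applied per epoch across a sequence of sleep-sound segments; the names `model`, `weak_aug`, `strong_aug`, and `threshold` are placeholders, not taken from the paper.

```python
# Illustrative sketch only (not the authors' released code): a sequential
# consistency loss applied over a sequence of sleep-sound epochs, so that
# predictions on strongly augmented audio match confident pseudo-labels
# obtained from weakly augmented audio, epoch by epoch.
import torch
import torch.nn.functional as F


def sequential_consistency_loss(model, unlabeled_seq, weak_aug, strong_aug,
                                threshold=0.95):
    """unlabeled_seq: (batch, seq_len, ...) sound features for a sequence
    of sleep epochs; model returns per-epoch stage logits (B, T, C)."""
    with torch.no_grad():
        # Pseudo-labels from the weakly augmented sequence.
        weak_logits = model(weak_aug(unlabeled_seq))        # (B, T, C)
        conf, pseudo = weak_logits.softmax(dim=-1).max(dim=-1)  # (B, T)
        mask = (conf >= threshold).float()                  # keep confident epochs

    # Predictions on the strongly augmented sequence must agree with the
    # pseudo-labels at every time step, preserving the sequence structure.
    strong_logits = model(strong_aug(unlabeled_seq))        # (B, T, C)
    per_epoch_ce = F.cross_entropy(
        strong_logits.reshape(-1, strong_logits.size(-1)),
        pseudo.reshape(-1),
        reduction="none",
    ).reshape_as(mask)

    return (mask * per_epoch_ce).sum() / mask.sum().clamp(min=1.0)
```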
