Sound-based transportation mode recognition with smartphones

Wang, Lin and Roggen, Daniel (2019) Sound-based transportation mode recognition with smartphones. IEEE ICASSP 2019: Spatial Audio Recording and Detection and Classification of Acoustic Scenes and Events, Brighton, U.K., 12-17 May 2019. Published in: ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 930-934. IEEE. ISSN 1520-6149



Smartphone-based identification of the user's mode of transportation is important for context-aware services. We investigate the feasibility of recognizing the 8 most common modes of locomotion and transportation from the sound recorded by a smartphone carried by the user. We propose a convolutional neural network based recognition pipeline, which operates on the short-time Fourier transform (STFT) spectrogram of the sound in the log domain. Experiments with the Sussex-Huawei locomotion-transportation (SHL) dataset on 366 hours of data show promising results: the proposed pipeline can recognize the activities Still, Walk, Run, Bike, Car, Bus, Train and Subway with a global accuracy of 86.6%, which is 23% higher than classical machine learning pipelines. It is shown that sound is particularly useful for distinguishing between various vehicle activities (e.g. Car vs Bus, Train vs Subway). This discriminability is complementary to the widely used motion sensors, which are poor at distinguishing between rail and road transport.
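The pipeline described above operates on the log-domain STFT spectrogram of the recorded sound. A minimal NumPy sketch of such a front-end is given below; the frame length, hop size and window are hypothetical choices for illustration, as the abstract does not specify the paper's exact parameters:

```python
import numpy as np

def log_stft_spectrogram(x, n_fft=1024, hop=512, eps=1e-10):
    """Log-magnitude STFT spectrogram of a 1-D audio signal.

    n_fft and hop are illustrative values, not the paper's settings.
    Returns an array of shape (n_frames, n_fft // 2 + 1).
    """
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    # Slice the signal into overlapping windowed frames
    frames = np.stack([x[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # Magnitude spectrum of each frame, then compress to the log domain
    spec = np.abs(np.fft.rfft(frames, axis=1))
    return np.log(spec + eps)

# Example: 1 second of synthetic noise at a 16 kHz sampling rate
x = np.random.randn(16000)
S = log_stft_spectrogram(x)
print(S.shape)  # (30, 513): 30 frames x 513 frequency bins
```

A time-frequency image like `S` is what a convolutional network would consume as input, treating the spectrogram as a single-channel image.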

Item Type: Conference Proceedings
Keywords: Computational auditory scene analysis, context-awareness, convolutional neural network, sound event classification, transportation mode recognition
Schools and Departments: School of Engineering and Informatics > Engineering and Design
Research Centres and Groups: Sensor Technology Research Centre
Depositing User: Lucy Arnold
Date Deposited: 06 Mar 2019 14:16
Last Modified: 22 Jul 2022 13:39

