Exploring human activity annotation using a privacy preserving 3D Model

Ciliberto, Mathias, Ordonez Morales, Francisco Javier and Roggen, Daniel (2016) Exploring human activity annotation using a privacy preserving 3D Model. In: HASCA Workshop at Ubicomp, 12-16 September 2016, Heidelberg, Germany.



Annotating activity recognition datasets is a very time-consuming process. Using lay annotators (e.g. through crowdsourcing) has been suggested as a way to speed it up. However, this requires preserving the privacy of the users and may preclude relying on video for annotation. We investigate to what extent a 3D human model, animated from the data of inertial sensors placed on the limbs, allows for the annotation of human activities. The animated model was shown to 6 people in a suite of tests in order to assess the accuracy of the labelling. We present the model and the dataset, followed by 3 experiments in which we investigate the use of the 3D model for i) activity segmentation, ii) "open-ended" annotation, where users freely describe the activity they see on screen, and iii) traditional annotation, where users pick one activity from a pre-defined list. In the latter case, results show that users recognise activities with 56% accuracy when picking from 11 possible activities.

Item Type: Conference or Workshop Item (Paper)
Keywords: Activity recognition, Annotation, Wearable technologies, 3D human model
Schools and Departments: School of Engineering and Informatics > Engineering and Design
Subjects: T Technology > T Technology (General)
Depositing User: Daniel Roggen
Date Deposited: 11 Jul 2016 08:03
Last Modified: 17 Nov 2016 12:13
URI: http://sro.sussex.ac.uk/id/eprint/61942


Project Name: Lifelearn: Unbounded activity and context awareness
Sussex Project Number: G1786
Funder: EPSRC - Engineering & Physical Sciences Research Council
Funder Ref: EP/N007816/1