AROMA: Activity Recognition using Deep Multi-task Learning (recent work)
In activity recognition, simple and complex activity recognition are a pair of related tasks. Simple activities are usually characterized by repetitive, low-level actions, e.g., running, sitting, and walking. Complex activities are higher-level and more complicated, such as working, having dinner, or commuting, and a complex activity may comprise several low-level simple activities. For instance, having dinner, a complex activity, can include the simple activities of standing and walking to select food and sitting at the table to eat. In other words, simple activities can be regarded as the components of a complex activity. In our model, called AROMA, we use multi-task learning to recognize simple and complex activities simultaneously: the two tasks share a common representation, which improves the generalization and accuracy of the model (our paper “AROMA: A deep multi-task learning based simple and complex activity recognition method using wearable sensors” has been submitted to AAAI 2017). Here is the code.
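To illustrate the shared-representation idea, here is a minimal sketch of a multi-task network with one shared encoder and two task-specific heads (simple vs. complex activities), trained with a joint loss. The layer layout, class counts, window size, and names (e.g., `MultiTaskActivityNet`) are illustrative assumptions, not the exact AROMA architecture; see the repository code for the actual model.

```python
import torch
import torch.nn as nn

class MultiTaskActivityNet(nn.Module):
    """Shared encoder with two heads: one for simple activities
    (e.g., walking, sitting) and one for complex activities
    (e.g., working, having dinner). Hyperparameters are placeholders."""

    def __init__(self, n_channels=6, n_simple=6, n_complex=4, hidden=64):
        super().__init__()
        # Shared representation: 1-D convolutions over the sensor window,
        # followed by an LSTM that summarizes the sequence.
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        # Task-specific heads on top of the shared encoder.
        self.simple_head = nn.Linear(hidden, n_simple)
        self.complex_head = nn.Linear(hidden, n_complex)

    def forward(self, x):
        # x: (batch, time, channels) raw sensor window
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, time, 64)
        _, (h_n, _) = self.lstm(h)
        shared = h_n[-1]                                   # (batch, hidden)
        return self.simple_head(shared), self.complex_head(shared)


# Joint training step: summing the two cross-entropy losses lets
# gradients from both tasks update the shared encoder.
model = MultiTaskActivityNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 128, 6)             # dummy batch: 8 windows, 128 steps, 6 channels
y_simple = torch.randint(0, 6, (8,))   # dummy simple-activity labels
y_complex = torch.randint(0, 4, (8,))  # dummy complex-activity labels

optimizer.zero_grad()
logits_simple, logits_complex = model(x)
loss = criterion(logits_simple, y_simple) + criterion(logits_complex, y_complex)
loss.backward()
optimizer.step()
```

Because both heads backpropagate into the same encoder, the representation is pushed to capture features useful for both levels of activity, which is the intended source of the generalization gain.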