Rule-Based Real-Time ADL Recognition in a Smart Home Environment



1 Department of Informatics, University of Huddersfield, Huddersfield, UK
{g.bargiannis,g.antoniou}@hud.ac.uk
2 Faculty of Engineering, University of Bristol, Bristol, UK
[email protected]

Abstract. This paper presents a rule-based approach for both offline and real-time recognition of Activities of Daily Living (ADL), leveraging events produced by a non-intrusive multi-modal sensor infrastructure deployed in a residential environment. Novel aspects of the approach include: the ability to recognise arbitrary scenarios of complex activities using bottom-up multi-level reasoning, starting from sensor events at the lowest level; an effective heuristics-based method for distinguishing between actual and ghost images in video data; and a highly accurate indoor localisation approach that fuses different sources of location information. The proposed approach is implemented as a rule-based system using Jess and is evaluated using data collected in a smart home environment. Experimental results show high levels of accuracy and performance, demonstrating the effectiveness of the approach in real-world setups.

Keywords: Event driven architectures · Activity recognition · ADL · Indoor localisation · Smart home · Multi-modal sensing

1 Introduction

In the last two decades, sensors have become cheaper, smaller and widely available, residing at the edge of the Internet. A single sensor provides only partial information on the actual physical condition measured, e.g. an acoustic sensor only records audio signals. A single measurement may be useful for simple applications, such as temperature monitoring in a smart home, and may be sufficient to discover very simple events, such as fire detection. However, it is often insufficient for an automated Activity Recognition (AR) system to infer all simple and complex events taking place in the area of interest. Therefore, a fusion of multiple sensor-related, low-level events is necessary. The Internet of Things (IoT) paradigm offers an effective way of acquiring and delivering low-level sensor events. The strength of IoT lies in the foundations of the Internet, i.e. distribution of resources, support for common naming schemas and ontologies, common access strategies and availability of computational resources, to mention a few. The challenge is to locate and fuse the right pieces of (sensor) information in order to realise AR at the best quality of information possible.

© Springer International Publishing Switzerland 2016. J.J. Alferes et al. (Eds.): RuleML 2016, LNCS 9718, pp. 325–340, 2016. DOI: 10.1007/978-3-319-42019-6_21

There are multiple ways of approaching sensor-based AR. Chen and Khalil [3] propose a broad categorisation into data-driven approaches, which exploit machine learning techniques, and knowledge-driven approaches, which leverage logical modelling and reasoning. Both directions have their strengths and weaknesses. Machine learning techniques are criticised for not handling data conflicts well and for requiring large, annotated training datasets, while logic-b
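The fusion of low-level sensor events into a higher-level event, as motivated above with the fire-detection example, can be sketched as a minimal rule-style recogniser. The following is a hypothetical Python illustration, not the paper's Jess implementation; all names, thresholds and the time window are assumptions made for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Event:
    sensor: str       # e.g. "temperature" or "smoke"
    value: float      # sensor reading
    timestamp: float  # seconds since some epoch

def detect_fire(events, window=30.0, temp_threshold=60.0):
    """Fuse low-level events into the complex event 'fire': recognised only
    when a high-temperature reading and a smoke reading co-occur within
    `window` seconds. A single sensor modality alone is not sufficient."""
    temps = [e for e in events
             if e.sensor == "temperature" and e.value >= temp_threshold]
    smokes = [e for e in events if e.sensor == "smoke" and e.value > 0]
    return any(abs(t.timestamp - s.timestamp) <= window
               for t in temps for s in smokes)

# A high temperature reading followed by smoke within the window
# triggers recognition; either event in isolation does not.
readings = [Event("temperature", 72.0, 10.0), Event("smoke", 1.0, 25.0)]
print(detect_fire(readings))  # True
```

In a rule-based system such as Jess, the same logic would be expressed declaratively as a rule whose conditions match both event facts; the procedural sketch above only illustrates the fusion principle.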