Tools for Protecting the Privacy of Specific Individuals in Video



Research Article

Datong Chen, Yi Chang, Rong Yan, and Jie Yang
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA

Received 25 July 2006; Revised 28 September 2006; Accepted 31 October 2006

Recommended by Ying Wu

This paper presents a system for protecting the privacy of specific individuals in video recordings. We address two problems: automatic people identification with limited labeled data, and human body obscuring that preserves structure and motion information. To address the first problem, we propose a new discriminative learning algorithm that improves people identification accuracy using limited training data labeled from the original video and imperfect pairwise constraints labeled from face-obscured video data. We employ a robust face detection and tracking algorithm to obscure human faces in the video. Our experiments in a nursing home environment show that the system achieves high people identification accuracy using limited labeled data and noisy pairwise constraints. The study also indicates that human subjects can label pairwise constraints reasonably well from face-masked data. For the second problem, we propose a novel body obscuring method that removes people's appearance information while preserving rich structure and motion information. The proposed approach minimizes the risk of exposing the identities of the protected people while maximizing the use of the captured data for activity/behavior analysis.

Copyright © 2007 Datong Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
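
As a rough illustration of the face-obscuring step described above, the sketch below detects faces frame by frame with OpenCV's off-the-shelf Haar-cascade detector and replaces each detected region with a heavy Gaussian blur. This is only a hypothetical stand-in, not the authors' robust face detection and tracking algorithm; the file names input.avi and obscured.avi are assumptions for the example.

```python
import cv2

# Off-the-shelf frontal-face detector shipped with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("input.avi")          # hypothetical input recording
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("obscured.avi",     # hypothetical output file
                         cv2.VideoWriter_fourcc(*"XVID"),
                         fps, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Obscure each detected face region with a strong Gaussian blur.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0)
    writer.write(frame)

cap.release()
writer.release()
```

A per-frame detector like this misses faces under occlusion or unusual poses, which is why the paper couples detection with tracking; the sketch only conveys the overall idea of masking face regions before the video is shown to annotators.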

1. INTRODUCTION

In the last few years, more and more video cameras have been deployed in a variety of locations for different purposes, such as video surveillance and human activity/behavior analysis for medical applications. These systems have raised significant privacy concerns. There are many challenges for privacy protection in video. First, we have to deal with a huge amount of video data. A surveillance camera recording at 30 fps captures 2,592,000 image frames per day and more than 79 million frames per month. Medical studies usually require long-term recording (e.g., a month or a few months) with dozens of cameras, and thus produce a huge amount of video data. Second, labeling data is a labor-intensive task, yet many automatic video analysis algorithms and systems rely on a large amount of training data to achieve reasonable performance. This problem becomes even worse when privacy protection is taken into account, because only limited personnel can access the original data. Third, we have to deal with the real-time issue, because many video analysis tasks require real-time processing.
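
As a quick check on the data-volume figures above, the following back-of-the-envelope calculation (a sketch, assuming a 30 fps camera and a 31-day month) reproduces the frame counts quoted in the text:

```python
# Back-of-the-envelope video data volume for a single 30 fps surveillance camera.
FPS = 30
frames_per_day = FPS * 60 * 60 * 24       # 2,592,000 frames per day
frames_per_month = frames_per_day * 31    # about 80 million frames in a 31-day month
print(f"{frames_per_day:,} frames/day, {frames_per_month:,} frames/month")
```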