Pixel-Wise Crowd Understanding via Synthetic Data



Qi Wang1 · Junyu Gao1 · Wei Lin1 · Yuan Yuan1

Received: 17 January 2020 / Accepted: 30 July 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract Crowd analysis via computer vision techniques is an important topic in the field of video surveillance, with widespread applications including crowd monitoring, public safety, and space design. Pixel-wise crowd understanding is the most fundamental task in crowd analysis because it yields finer-grained results for video sequences or still images than other analysis tasks. Unfortunately, pixel-level understanding requires a large amount of labeled training data, and annotating such data is expensive, so current crowd datasets remain small. As a result, most algorithms suffer from over-fitting to varying degrees. In this paper, taking crowd counting and segmentation as examples of pixel-wise crowd understanding, we attempt to remedy these problems from two aspects, namely data and methodology. First, we develop a free data collector and labeler to generate synthetic, labeled crowd scenes in a computer game, Grand Theft Auto V, and use it to construct a large-scale, diverse synthetic crowd dataset, named the "GCC Dataset". Second, we propose two simple methods to improve the performance of crowd understanding by exploiting the synthetic data. Specifically, (1) supervised crowd understanding: pre-train a crowd analysis model on the synthetic data, then fine-tune it using the real data and labels, which improves its performance on real-world scenes; (2) crowd understanding via domain adaptation: translate the synthetic data into photo-realistic images, then train the model on the translated data and labels, so that the trained model works well in real crowd scenes. Extensive experiments verify that the supervised algorithm outperforms state-of-the-art methods on four real datasets: UCF_CC_50, UCF-QNRF, and ShanghaiTech Part A/B. These results demonstrate the effectiveness and value of the synthetic GCC Dataset for pixel-wise crowd understanding. The data collection/labeling tools, the proposed synthetic dataset, and the source code for the counting models are available at https://gjy3035.github.io/GCC-CL/.

Keywords Crowd analysis · Pixel-wise understanding · Crowd counting · Crowd segmentation · Synthetic data generation
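The supervised strategy (1) above boils down to two training passes with the same network. The sketch below is not the authors' released code (see the project page linked above); the toy density regressor, random stand-in tensors, and learning rates are illustrative assumptions, shown only to make the pre-train-then-fine-tune workflow concrete.

```python
# Minimal sketch of strategy (1): pre-train a counting model on synthetic
# (GCC-style) image/density pairs, then fine-tune it on real data.
# The network, data, and hyper-parameters are placeholders, not the paper's.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class TinyCounter(nn.Module):
    """Toy density-map regressor standing in for a real counting backbone."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(16, 1, 1)  # per-pixel density prediction

    def forward(self, x):
        return self.head(self.features(x))

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # common pixel-wise density-map loss
    for _ in range(epochs):
        for img, density in loader:
            opt.zero_grad()
            loss = loss_fn(model(img), density)
            loss.backward()
            opt.step()

# Random tensors stand in for (synthetic, real) image/density-map pairs.
synthetic = DataLoader(TensorDataset(torch.rand(8, 3, 64, 64),
                                     torch.rand(8, 1, 64, 64)), batch_size=4)
real = DataLoader(TensorDataset(torch.rand(8, 3, 64, 64),
                                torch.rand(8, 1, 64, 64)), batch_size=4)

model = TinyCounter()
train(model, synthetic, epochs=2, lr=1e-4)  # step 1: pre-train on synthetic data
train(model, real, epochs=2, lr=1e-5)       # step 2: fine-tune on real data (smaller lr)
```

Strategy (2) differs only in the data: the synthetic images would first be translated into photo-realistic ones by an image-to-image translation model, and the counter would then be trained on the translated pairs without real labels.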

Communicated by Jifeng Dai.

This work was supported by the National Key R&D Program of China under Grant 2017YFB1002202 and by the National Natural Science Foundation of China under Grants U1864204, 61773316, 61632018, and 61825603.

Electronic supplementary material The online version of this article (https://doi.org/10.1007/s11263-020-01365-4) contains supplementary material, which is available to authorized users.

Yuan Yuan (corresponding author) [email protected]
Qi Wang [email protected]
Junyu Gao [email protected]
Wei Lin [email protected]

1 Introduction

Recently, crowd analysis has been a hot topic in the field of computer vision. It has great potential (in
