A fast fusion method for multi-videos with three-dimensional GIS scenes


Chengming Li¹, Zhendong Liu¹, Zhanjie Zhao¹, Zhaoxin Dai¹

Received: 20 September 2019 / Revised: 24 June 2020 / Accepted: 26 August 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020

* Corresponding author: Zhaoxin Dai, [email protected]
¹ Chinese Academy of Surveying and Mapping, Beijing 100830, China

Abstract

Techniques for the fusion of real-world videos with virtual scenes are key to the augmentation of three-dimensional (3-D) virtual geographic scenes and greatly enhance the immersive visual experience. When a 3-D scene is updated dynamically, the existing video projection-based method for real-virtual fusion is generally slow and inefficient, as all rendered objects in the new scene must be traversed to identify those to be fused within the user's new field of view (FOV). To address this issue, a fast, topology-accounting method for fusing multiple videos with 3-D geographic information system (GIS) scenes is proposed. First, topological models for video objects and rendered objects are constructed. Second, using these models, a method that considers topological relationships is proposed to rapidly identify the rendered objects to be fused during dynamic updates of the 3-D scene. Finally, real video and 3-D scene data from Tengzhou City were used to validate the proposed method. The experiments demonstrate that the method fuses videos with 3-D GIS scenes quickly and efficiently, and that its computational cost is significantly lower than that of the existing method. The proposed method is highly viable and robust, facilitating the fusion of videos with virtual environments.

Keywords: Surveillance videos · Topological information · Fusion · 3-D GIS scene · Dynamic updating
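To make the idea in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' data structures; all class and function names are assumptions): each video object keeps references to the rendered objects its projection touches, so that when the field of view changes only this candidate set is tested, rather than every rendered object in the scene.

```python
# Minimal, hypothetical sketch (not the authors' data structures; all names
# are assumptions). Each video object keeps references to the rendered
# objects its projection touches, so an FOV update only tests that candidate
# set instead of traversing every rendered object in the scene.
from dataclasses import dataclass, field


@dataclass
class RenderedObject:
    obj_id: str
    bounds: tuple  # axis-aligned bounding box (xmin, ymin, zmin, xmax, ymax, zmax)


@dataclass
class VideoObject:
    video_id: str
    linked_ids: set = field(default_factory=set)  # rendered objects this video projects onto


class TopologyIndex:
    """Hypothetical topological model linking video objects to rendered objects."""

    def __init__(self):
        self.videos = {}
        self.objects = {}

    def link(self, video, obj):
        # Record the topological relationship "video projects onto obj".
        self.videos[video.video_id] = video
        self.objects[obj.obj_id] = obj
        video.linked_ids.add(obj.obj_id)

    def candidates_in_fov(self, video_id, fov_test):
        # Test only the objects already linked to this video against the new
        # field of view, rather than every rendered object in the scene.
        video = self.videos[video_id]
        return [self.objects[i] for i in video.linked_ids
                if fov_test(self.objects[i].bounds)]
```

Under such a scheme, the cost of an FOV update scales with the number of objects linked to the active videos rather than with the total number of rendered objects, which is the kind of speed-up the abstract describes.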

1 Introduction

Techniques for the fusion of real videos/images with virtual environments are key to the augmentation of virtual three-dimensional (3-D) geographic scenes [16, 17, 20]. Video augmentation can reduce the visual disparity between virtual geographic information system (GIS) scenes and real videos/images, realizing seamless integration between the virtual and the real, and plays an important role in delivering an immersive visual experience [5, 12]. Techniques for fusing videos with virtual 3-D scenes generally include methods based on video projection [4, 11, 19, 21], video image deformation [10] and video image reconstruction [30]. In particular, the video projection-based approach has become one of the most common approaches for fusing 3-D scenes with real images [3, 18], because it requires neither manual intervention nor offline fusion [25], nor does it require the vertices and textures of the projection targets to be determined in advance. Moreover, this approach offers a high degree of fidelity [9, 15]. Stephen et al. [23] of the Sarnoff Corporation (USA) proposed a method that maps live videos onto 3-D models as textures using
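As an illustration of the geometry underlying video projection-based fusion, the following is a minimal sketch (not code from the paper; the function name, parameters, and calibration values are illustrative assumptions): a world point is transformed into the surveillance camera's frame using its extrinsics, projected through its intrinsic matrix, and the normalized result can serve as a texture coordinate when mapping a video frame onto scene geometry.

```python
# Minimal sketch of the pinhole projection behind video projection-based
# fusion (illustrative only; not code from the paper). A world point is
# transformed into the camera frame and projected through the intrinsic
# matrix; the normalized result can serve as a texture coordinate for
# mapping the video frame onto scene geometry.
import numpy as np


def project_to_video(point_world, K, R, t, width, height):
    """Project a 3-D world point into video image (u, v) coordinates.

    K: 3x3 intrinsic matrix, R: 3x3 rotation, t: translation (world -> camera).
    Returns None if the point lies behind the camera or outside the frame.
    """
    p_cam = R @ np.asarray(point_world, dtype=float) + np.asarray(t, dtype=float).ravel()
    if p_cam[2] <= 0:                      # behind the camera plane
        return None
    uvw = K @ p_cam
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    if not (0 <= u < width and 0 <= v < height):
        return None
    return u / width, v / height           # normalized texture coordinate


# Example with illustrative calibration values
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)
print(project_to_video([2.0, 1.0, 10.0], K, R, t, 1280, 720))
```

In practice, texture coordinates of this kind are usually computed on the GPU via projective texture mapping, but the per-point arithmetic is the same.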