ORIGINAL PAPER
Geodatabase updating by new measurements; a Bayesian intermezzo

Maria Antonia Brovelli & Fernando Sansò
Received: 12 October 2008 / Accepted: 23 March 2009 / Published online: 6 May 2009
© Società Italiana di Fotogrammetria e Topografia (SIFET) 2009

M. A. Brovelli (*) · F. Sansò
Politecnico di Milano—DIIAR, Polo Regionale di Como, via Valleggio 11, 22100 Como, Italy
e-mail: [email protected]
F. Sansò
e-mail: [email protected]
Abstract The paper deals with the question of how to update existing geodatabases, taking into account both their accuracies and those of the new measurements acquired for the update. Traditionally, maintaining geodatabases (or map bases) has been highly time-consuming, costly, and sometimes difficult work, especially in urban and high-density areas. The most common procedure is to regenerate geodatabases globally every few years by photogrammetric techniques. In contrast, the possibility of dynamically updating the landscape information from a maintained core spatial database can be considered an appealing alternative to traditional map revision techniques. A kriging solution is proposed, based on the hypothesis that the vector field of the position error on a geodatabase is a homogeneous, isotropic, intrinsic random field with constant mean and a variogram depending only on the squared distance, known a priori from the relative accuracy of the map. The method is a first approach to the problem, since at the moment it does not consider the constraints to which points on the geodatabase must adapt. That is the reason why it is presented as an intermezzo.

Keywords Geodatabase · Updating · Bayesian · Kriging
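To make the hypothesis concrete, the displays below sketch the model the abstract describes, in illustrative notation (the symbols δ, γ, and λi are this sketch's choices, not necessarily the paper's own):

\[
E\big[\boldsymbol{\delta}(P)-\boldsymbol{\delta}(Q)\big]=\mathbf{0},\qquad
\tfrac{1}{2}\,E\big[\lVert \boldsymbol{\delta}(P)-\boldsymbol{\delta}(Q)\rVert^{2}\big]=\gamma\!\left(d_{PQ}^{2}\right),
\]
where \(\boldsymbol{\delta}(P)\) is the planar error vector at point \(P\) and \(d_{PQ}\) the distance between \(P\) and \(Q\). The error at any geodatabase point \(P_0\) is then predicted from the errors observed at the remeasured points \(P_1,\dots,P_M\) by ordinary kriging,
\[
\hat{\boldsymbol{\delta}}(P_0)=\sum_{i=1}^{M}\lambda_i\,\boldsymbol{\delta}(P_i),\qquad \sum_{i=1}^{M}\lambda_i=1,
\]
with the weights \(\lambda_i\) solving the standard kriging system built from \(\gamma\).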
Position of the problem

Spatial databases can nowadays be easily managed by means of geographic information system tools (Rigaux et al. 2002). The hot point in the first era of the digital revolution was mainly the collection of information. In contrast, the great challenge now is to provide geographic data of high quality (and therefore to estimate that quality) and to make available methods that exploit the already available information as much as possible, correctly modeling the uncertainty intrinsic to spatial data, instead of remaking maps from scratch every time. Thus data sharing, data conflation (Brovelli and Zamboni 2004, 2006), and updating (Arnold and Wright 2005) must be supported by rigorous approaches with a sound statistical basis. Efforts have been made in this regard by some researchers (Leung et al. 2004). The paper is set in this frame, trying to solve the problem at least for some elementary cases. We consider a geodatabase basically as a collection of points {Pi; i = 1, 2, …, N} of known planar coordinates {(xi, yi)}, together with topological information, allowing the identification of linear or areal features, and other numerical or thematic information concerning several attributes related to these points and objects. Here, we will be concerned only with the first aspect, namely the set of points {Pi} an
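As a concrete illustration of the kriging update described in the abstract, here is a minimal, self-contained Python example. It is a sketch under stated assumptions, not the paper's implementation: the Gaussian-type variogram γ(d²) = c(1 − exp(−d²/r²)) and its parameters c, r (standing in for the a priori map accuracy), as well as the names kriging_update and variogram, are all illustrative.

import numpy as np

# Hedged sketch (illustrative, not the paper's code): ordinary kriging of the
# planar error field delta(P) = P_measured - P_geodatabase, under the paper's
# hypothesis of a homogeneous, isotropic, intrinsic random field whose
# variogram depends only on the squared distance.

def variogram(d2, c=0.25, r=20.0):
    # Gaussian-type model gamma(d^2) = c * (1 - exp(-d^2 / r^2)); c and r are
    # assumed known a priori from the relative accuracy of the map.
    return c * (1.0 - np.exp(-d2 / r**2))

def kriging_update(old_pts, ctrl_idx, new_meas):
    # old_pts : (N, 2) geodatabase coordinates
    # ctrl_idx: indices of the M remeasured (control) points
    # new_meas: (M, 2) new, more accurate coordinates of those points
    ctrl = old_pts[ctrl_idx]
    delta = new_meas - ctrl                        # observed error vectors
    M = len(ctrl_idx)

    # Ordinary kriging system  [Gamma 1; 1' 0] [lambda; mu] = [gamma_0; 1]
    d2 = ((ctrl[:, None, :] - ctrl[None, :, :]) ** 2).sum(-1)
    A = np.zeros((M + 1, M + 1))
    A[:M, :M] = variogram(d2)
    A[:M, M] = A[M, :M] = 1.0

    updated = old_pts.astype(float).copy()
    for k, p in enumerate(old_pts):
        b = np.ones(M + 1)
        b[:M] = variogram(((ctrl - p) ** 2).sum(-1))
        lam = np.linalg.solve(A, b)[:M]            # kriging weights
        updated[k] = p + lam @ delta               # same weights for x and y
    return updated

# Example: five map points, three of them remeasured; the control points are
# reproduced exactly, the other two receive interpolated corrections.
old = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
new = np.array([[0.2, -0.1], [10.1, 0.3], [-0.2, 9.9]])
print(kriging_update(old, [0, 1, 2], new))

Note that, as the abstract stresses, this simple scheme does not handle constraints between geodatabase points (e.g., shapes that must be preserved), which is precisely why the paper presents the method as an intermezzo.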