CamLocalizationMaps (Grupo de Tratamiento de Imágenes, Universidad Politécnica de Madrid)

Description:

This page provides a complete description of the test data used to evaluate the camera positioning system proposed in [*]; owing to page length limitations, the manuscript itself can only include graphical descriptions of part of the evaluated settings.

In greater detail, [*] proposes a new Bayesian framework for automatically determining the position (location and orientation) of an uncalibrated camera from observations of moving objects and a schematic map of the passable areas of the environment. The approach exploits both static and dynamic information about the scene structure through prior probability distributions over object dynamics, restricting the plausible positions of the sensor while accounting for the inherent ambiguity of the given setting. The framework samples from the posterior probability distribution of the camera position via data-driven MCMC, guided by an initial geometric analysis that restricts the search space. A Kullback-Leibler divergence analysis then yields the final camera position estimate while explicitly isolating ambiguous settings. The evaluation, performed on both synthetic and real settings, shows satisfactory performance in ambiguous and unambiguous settings alike.
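The exact likelihood model and the data-driven proposal design of [*] are not reproduced on this page. Purely as an illustration of the sampling idea, the following minimal Python sketch runs plain Metropolis-Hastings over a camera pose (x, y, theta) on a toy passable-area grid; the MAP grid, the track_cam observation, and the log_posterior score are hypothetical stand-ins, not the model of [*].

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy passable-area map: 1 = passable, 0 = blocked (a horizontal "road").
    MAP = np.zeros((50, 50))
    MAP[22:28, :] = 1.0

    # Toy observed object track, expressed in the camera's local ground frame.
    track_cam = np.stack([np.linspace(0.0, 10.0, 20), np.zeros(20)], axis=1)

    def log_posterior(pose):
        """Score a pose (x, y, theta): fraction of the observed track that,
        transformed into map coordinates, lands on passable cells (toy
        likelihood; uniform prior over the map)."""
        x, y, th = pose
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        pts = track_cam @ R.T + np.array([x, y])
        ij = np.round(pts).astype(int)
        ok = (ij[:, 0] >= 0) & (ij[:, 0] < 50) & (ij[:, 1] >= 0) & (ij[:, 1] < 50)
        hits = MAP[ij[ok, 1], ij[ok, 0]].sum()
        return np.log(hits / len(pts) + 1e-9)

    def mh_samples(n_iter=5000):
        """Plain Metropolis-Hastings over the camera pose; a stand-in for the
        data-driven MCMC of [*], whose proposals are additionally guided by a
        geometric pre-analysis that restricts the search space."""
        pose = np.array([25.0, 25.0, 0.0])
        lp = log_posterior(pose)
        samples = []
        for _ in range(n_iter):
            cand = pose + rng.normal(0.0, [1.0, 1.0, 0.1])
            lp_cand = log_posterior(cand)
            if np.log(rng.random()) < lp_cand - lp:   # accept/reject step
                pose, lp = cand, lp_cand
            samples.append(pose.copy())
        return np.array(samples)

    samples = mh_samples()
    print("posterior mean pose (second half):",
          samples[len(samples) // 2:].mean(axis=0))

The data-driven proposals of the paper and the subsequent Kullback-Leibler divergence analysis, which selects the final estimate and flags ambiguous (multimodal) posteriors, are not sketched here.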

For any questions about the article [*] or the described test data, please contact Raúl Mohedano at rmp@gti.ssr.upm.es.

Citation:

[*] R. Mohedano, A. Cavallaro, N. García, “Camera localization using trajectories and maps”, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. xx, no. x, pp. xxx-xxx, xxx 2013. (under review).

Experimental setup description:

Synthetic database (cameras C1 to C10):

Figures: complete environmental map with the true camera FoVs and synthetic routes superimposed, shown both as a real aerial view (Map data ©2012 Google) and as the schematic passable regions.

Figures: close-ups of the true camera FoVs and observed tracks (red) for each camera: C1, C2, C3, C4, C5, C6 and C7 (single panel), C8, C9, and C10.


Real vehicle database (cameras MIT-1 to MIT-4):

Figures: complete environmental map with the true camera FoVs superimposed, shown both as a real aerial view (Map data ©2012 Google) and as the schematic passable regions.

Figures: original camera view and metrically rectified view for each of MIT-1 to MIT-4, with the FoV, the retained tracks (red), and part of the original tracks (black) superimposed. (A rectification sketch follows below.)
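For context on the rectified views above: metric rectification of a ground plane is commonly carried out with a plane-to-plane homography. The sketch below (Python with OpenCV) is a generic illustration under assumed values, not the exact procedure used in [*]; the four point correspondences, the 25 px/m output scale, and the dummy frame are all hypothetical.

    import cv2
    import numpy as np

    # Stand-in for an original camera frame (a real frame would be loaded here).
    img = np.zeros((480, 640, 3), dtype=np.uint8)

    # Four ground-plane correspondences (hypothetical values): pixel positions
    # of a known rectangle in the image vs. its metric coordinates in metres,
    # scaled by 25 px/m for the output image.
    src = np.float32([[112, 310], [508, 295], [590, 420], [40, 445]])
    dst = np.float32([[0, 0], [20, 0], [20, 12], [0, 12]]) * 25.0

    H = cv2.getPerspectiveTransform(src, dst)          # image-to-ground homography
    rectified = cv2.warpPerspective(img, H, (500, 300))

    # The same homography maps track points into the metric view.
    track_px = np.float32([[[300, 350]], [[320, 360]]])  # (N, 1, 2) as cv2 expects
    track_m = cv2.perspectiveTransform(track_px, H).reshape(-1, 2) / 25.0
    print("track points on the ground plane (metres):", track_m)

Once tracks are expressed in metric ground coordinates, they can be compared directly against the schematic passable-region map.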


