Vehicle Image Database


The Image Processing Group is currently researching vision-based vehicle classification. To evaluate our methods, we have created a new database of images extracted from our video sequences, acquired with a forward-looking camera mounted on a vehicle. The database comprises 3425 images of vehicle rears taken from different points of view, and 3900 images extracted from road sequences not containing vehicles. The images are selected to maximize the representativeness of the vehicle class, which naturally exhibits high variability.

In our opinion, one important feature affecting the appearance of the vehicle rear is the position of the vehicle relative to the camera. The database therefore separates the images into four regions according to pose: middle/close range in front of the camera, middle/close range to the left, close/middle range to the right, and far range. In addition, the images are extracted so that they do not perfectly fit the contour of the vehicle, in order to make the classifier more robust to offsets in the hypothesis generation stage. Some images contain the vehicle loosely (some background is also included in the image), while others contain the vehicle only partially. Several instances of the same vehicle are included with different bounding hypotheses. The images are 64x64 pixels and are cropped from sequences of 360x256 pixels recorded on highways in Madrid, Brussels and Turin.
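As a sketch of the loose-bounding strategy described above (the actual extraction code is not part of this page; the helper below and its parameters are illustrative assumptions), a crop window can be widened by a background margin, shifted to simulate a hypothesis-generation offset, and clamped to the 360x256 frame:

```python
def loose_crop(bbox, frame_w=360, frame_h=256, dx=0, dy=0, margin=4):
    """Return a crop window that loosely bounds `bbox` (hypothetical helper).

    bbox   : (x0, y0, x1, y1) tight vehicle bounding box in frame pixels.
    dx, dy : offsets simulating error in the hypothesis generation stage.
    margin : extra background pixels kept around the vehicle.
    The window is clamped to the frame, so a large offset can leave
    part of the vehicle outside the crop (a partial crop).
    """
    x0, y0, x1, y1 = bbox
    cx0 = max(0, x0 - margin + dx)
    cy0 = max(0, y0 - margin + dy)
    cx1 = min(frame_w, x1 + margin + dx)
    cy1 = min(frame_h, y1 + margin + dy)
    return (cx0, cy0, cx1, cy1)

# Loose crop: margin adds background around the vehicle.
print(loose_crop((100, 80, 160, 130)))
# Offset hypothesis: the same vehicle, shifted crop window.
print(loose_crop((100, 80, 160, 130), dx=20))
```

The clamping step is what produces the partial crops mentioned above: a box near the frame border simply loses the pixels that would fall outside the 360x256 frame.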

The database is open for use by other researchers and can be downloaded here.

Apart from the images of our own collection, we also use a small set of images from other databases in order to round the totals to 4000 vehicle images and 4000 non-vehicle images. In particular, we use images extracted from the Caltech Database [1][2] and the TU Graz-02 Database [3][4]. The subset of images from these databases used in our experiments is linked here.

The complete set of images is selected so that it covers many different driving conditions, especially with regard to weather. Of the 2000 images devoted to each image region (1000 vehicle instances and 1000 non-vehicle images), 20% are taken in sunny weather, 20% on cloudy days, 20% in medium conditions (neither very sunny nor cloudy), 20% with poor illumination (dawn/dusk), 10% with light rain, 5% with low-resolution cameras, and 2.5% in tunnels (with artificial light).
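The per-region shares above translate into absolute image counts as follows (a simple arithmetic illustration, not part of the database tooling; note that the listed shares sum to 97.5% of the 2000 images per region):

```python
# Image counts per pose region implied by the stated shares
# (2000 images per region: 1000 vehicle + 1000 non-vehicle).
REGION_TOTAL = 2000
shares = {
    "sunny": 0.20,
    "cloudy": 0.20,
    "medium": 0.20,
    "poor illumination (dawn/dusk)": 0.20,
    "light rain": 0.10,
    "low-resolution camera": 0.05,
    "tunnel": 0.025,
}
counts = {cond: round(REGION_TOTAL * s) for cond, s in shares.items()}
print(counts)                # 400 images per 20% condition, down to 50 in tunnels
print(sum(counts.values()))  # 1950, i.e. 97.5% of the 2000 per-region images
```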

[1] The Caltech Database, Computational Vision at California Institute of Technology, Pasadena. Accessed 14 May 2011.
[2] R. Fergus, P. Perona, A. Zisserman, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, Wisconsin, 16-22 June 2003.
[3] The TU Graz-02 Database, Graz University of Technology. Accessed 14 May 2011.
[4] A. Opelt, A. Pinz, in Proceedings of the 14th Scandinavian Conference on Image Analysis, Joensuu, Finland, 19-22 June 2005.

For questions about these test images, please contact Jon Arróspide.


J. Arróspide, L. Salgado, M. Nieto, “Video analysis based vehicle detection and tracking using an MCMC sampling framework”, EURASIP Journal on Advances in Signal Processing, vol. 2012, Article ID 2012:2, Jan. 2012 (doi: 10.1186/1687-6180-2012-2)