AVSS 2009 - Results

Results for the tracking algorithm presented in the paper "Object Tracking from Unstabilized Platforms by Particle Filtering with Embedded Camera Ego Motion".

Dataset and results


Video            #L1  #L2  #L3  Object tracking (jpg sequence)  Posterior distribution (jpg sequence)
redcar1.avi        0    0    0  redcar1_track.rar               redcar1_post.rar
redcar2.avi        1    4    9  redcar2_track.rar               redcar2_post.rar
person1.avi        0    3    7  person1_track.rar               person1_post.rar
motorbike1.avi     4   13   24  motorbike1_track.rar            motorbike1_post.rar
motorbike2.avi     1    3    6  motorbike2_track.rar            motorbike2_post.rar
motorbike3.avi     2    8   19  motorbike3_track.rar            motorbike3_post.rar
motorbike4.avi     5   16   26  motorbike4_track.rar            motorbike4_post.rar
motorbike5.avi     8   23   37  motorbike5_track.rar            motorbike6_post.rar
bluecar1.avi       0    1    3  bluecar1_track.rar              bluecar1_post.rar
bluecar2.avi       0    2    5  bluecar2_track.rar              bluecar2_post.rar
bluecar3.avi       1    3    6  bluecar3_track.rar              bluecar3_post.rar

(#L) Number of times the tracked object was lost for each algorithm: #L1 = the proposed algorithm, #L2 = the algorithm presented in [1], #L3 = the algorithm presented in [2].

The performance of our proposed tracking algorithm has been compared with two tracking techniques described in [1] and [2]. The main differences with respect to the presented approach are that in [1] the camera ego-motion is not compensated, while in [2] it is compensated, but using only one affine transformation instead of several (which is precisely the main novelty of our algorithm). All three strategies use a Particle Filter framework to perform the tracking. In order to compare the three tracking algorithms fairly, the same set of parameters has been used to tune the Particle Filters. The dataset used for the comparison is composed of 11 videos with challenging situations: strong ego-motion, changes in illumination, variations in object appearance and size, and occlusions. The whole dataset can be downloaded from the table above. The comparison shown in the same table is based on the following criterion: the number of times the tracked object was lost (#L) by each algorithm, namely our proposed algorithm (#L1), the algorithm presented in [1] (#L2), and the algorithm presented in [2] (#L3). Each time the tracked object is lost, the corresponding tracking algorithm is re-initialized with a correct object detection at the same frame where the object was lost.
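To make the algorithmic difference concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation) of particle propagation with embedded camera ego-motion: each particle is warped by one of several estimated affine transformations before the usual random-walk dynamics are applied. The function name `propagate_particles` and the arguments `affine_maps` and `assign` are illustrative assumptions.

import numpy as np

def propagate_particles(particles, affine_maps, assign, noise_std=5.0):
    """Propagate (x, y) location hypotheses, compensating camera ego-motion.

    particles   : (N, 2) array of object-location hypotheses.
    affine_maps : list of 2x3 affine matrices, one per estimated motion
                  region (an approach like [2] would use a single matrix
                  for the whole frame).
    assign      : (N,) index of the affine map that applies to each particle.
    """
    moved = np.empty_like(particles, dtype=float)
    for i, (x, y) in enumerate(particles):
        A = affine_maps[assign[i]]
        # Ego-motion compensation: p' = A @ [x, y, 1]^T
        moved[i] = A @ np.array([x, y, 1.0])
    # Usual random-walk dynamics on top of the compensation.
    return moved + np.random.normal(0.0, noise_std, size=moved.shape)

# Example: 100 particles and two motion layers (e.g. background + foreground).
parts = np.random.rand(100, 2) * 100
maps = [np.array([[1.0, 0.0, 2.0], [0.0, 1.0, -1.0]]), np.eye(2, 3)]
new_parts = propagate_particles(parts, maps, np.random.randint(0, 2, 100))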

In addition, several video results for the proposed tracking algorithm can be downloaded from the table, showing the tracked object surrounded by an ellipse or circle, and the posterior distribution of the state vector (in fact, the marginalized posterior distribution of the object location coordinates), which is very useful for visually analyzing the set of hypotheses and their relative weights.
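As an illustration of what the posterior sequences contain, the sketch below (an assumption for clarity, not the code used to generate the downloads) renders a set of weighted particles as a heat map over the image plane, which is essentially the marginalized posterior of the location coordinates.

import numpy as np
import matplotlib.pyplot as plt

def plot_posterior(particles, weights, frame_shape, sigma=3.0):
    """Render the marginalized posterior over (x, y) as an image.

    Each particle contributes a Gaussian blob scaled by its weight, so
    the map shows the hypotheses and their relative weights at a glance.
    """
    h, w = frame_shape
    ys, xs = np.mgrid[0:h, 0:w]
    post = np.zeros((h, w))
    for (x, y), wgt in zip(particles, weights):
        post += wgt * np.exp(-((xs - x)**2 + (ys - y)**2) / (2 * sigma**2))
    plt.imshow(post, cmap="hot")
    plt.title("Marginalized posterior of object location")
    plt.show()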

It can be appreciated that the proposed algorithm is clearly superior, since the other approaches cannot satisfactorily handle the ego-motion. On the other hand, it can be observed that, independently of the algorithm, the results obtained on the videos "motorbike1-5.avi" are worse than the rest. The reason is that the tracked object is very small, fewer than one hundred pixels, and therefore cannot be robustly represented by a spatiogram.
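For reference, a spatiogram augments each histogram bin with the spatial mean and covariance of the pixels falling into it. The minimal sketch below, written under that standard second-order formulation (an assumption; the paper's exact variant is not reproduced here), shows why a patch of fewer than one hundred pixels leaves most bins with too few samples for stable statistics.

import numpy as np

def spatiogram(patch, n_bins=8):
    """Second-order spatiogram of a grayscale patch: per-bin count,
    spatial mean, and spatial covariance.

    With fewer than ~100 pixels, most bins receive only a handful of
    samples, so the per-bin means and covariances are unreliable --
    the effect blamed for the weaker motorbike results above.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    bins = np.minimum((patch.astype(float) / 256.0 * n_bins).astype(int),
                      n_bins - 1)
    counts = np.zeros(n_bins)
    means = np.zeros((n_bins, 2))
    covs = np.zeros((n_bins, 2, 2))
    for b in range(n_bins):
        mask = bins == b
        counts[b] = mask.sum()
        if counts[b] > 1:
            pts = np.stack([xs[mask], ys[mask]], axis=1).astype(float)
            means[b] = pts.mean(axis=0)
            covs[b] = np.cov(pts.T)
    return counts / counts.sum(), means, covs

# A 10x10 patch (~100 pixels): with 8 bins, each bin gets roughly a dozen
# pixels on average, so the per-bin covariances are poorly estimated.
hist, mu, cov = spatiogram(np.random.randint(0, 256, (10, 10)))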

[1] K. Nummiaro, E. Koller-Meier, and L. Van Gool. An adaptive color-based particle filter. Image and Vision Computing, 21(1), pp. 99-110, 2003.
[2] V. Venkataraman, G. Fan, and X. Fan. Target tracking with online feature selection in FLIR imagery. Proc. IEEE CVPR, pp. 1-8, 2007.