
GTI Data   

 

Open databases and software created by the GTI, together with supplementary material for published papers.

 

Databases  


SportCLIP (2025): Multi-sport dataset for text-guided video summarization.
Ficosa (2024): The FNTVD dataset, generated using Ficosa's recording car.
MATDAT (2023): More than 90K labeled images of martial arts tricking.
SEAW-DATASET (2022): 3 stereoscopic contents in 4K resolution at 30 fps.
UPM-GTI-Face dataset (2022): 11 different subjects captured in 4K, under 2 scenarios and 2 face mask conditions.
LaSoDa (2022): 60 annotated images from soccer matches in five stadiums with different characteristics and lighting conditions.
PIROPO Database (2021): People in Indoor ROoms with Perspective and Omnidirectional cameras.
EVENT-CLASS (2021): High-quality 360-degree videos in the context of tele-education.
Parking Lot Occupancy Database (2020)
Nighttime Vehicle Detection database (NVD) (2019)
Hand gesture dataset (2019): Multi-modal Leap Motion dataset for hand gesture recognition.
ViCoCoS-3D (2016): VideoConference Common Scenes in 3D.
LASIESTA database (2016): More than 20 sequences to test moving object detection and tracking algorithms.
Hand gesture database (2015): Hand-gesture database composed of high-resolution color images acquired with the Senz3D sensor.
HRRFaceD database (2014): Face database composed of high-resolution images acquired with Microsoft Kinect 2 (second generation).
Lab database (2012): Set of 6 sequences to test moving object detection strategies.
Vehicle image database (2012): More than 7000 images of vehicles and roads.

 

Software  


Empowering Computer Vision in Higher Education (2024): A Novel Tool for Enhancing Video Coding Comprehension.
Engaging students in audiovisual coding through interactive MATLAB GUIs (2024)
TOP-Former: A Multi-Agent Transformer Approach for the Team Orienteering Problem (2023)
Solving Routing Problems for Multiple Cooperative Unmanned Aerial Vehicles using Transformer Networks (2023)
Vision Transformers and Traditional Convolutional Neural Networks for Face Recognition Tasks (2023)
Faster GSAC-DNN (2023): A Deep Learning Approach to Nighttime Vehicle Detection Using a Fast Grid of Spatial Aware Classifiers.
SETForSeQ (2020): Subjective Evaluation Tool for Foreground Segmentation Quality.
SMV Player for Oculus Rift (2016)
Bag-D3P (2016): Face recognition using depth information.
TSLAB (2015): Tool for Semiautomatic LABeling.
 

   

Supplementary material  


Soccer line mark segmentation and classification with stochastic watershed transform (2022)
A fully automatic method for segmentation of soccer playing fields (2022)
Grass band detection in soccer images for improved image registration (2022)
Evaluating the Influence of the HMD, Usability, and Fatigue in 360VR Video Quality Assessments (2020)
Automatic soccer field of play registration (2020)   
Augmented reality tool for the situational awareness improvement of UAV operators (2017)
Detection of static moving objects using multiple nonparametric background-foreground models on a Finite State Machine (2015)
Real-time nonparametric background subtraction with tracking-based foreground update (2015)  
Camera localization using trajectories and maps (2014)

VQEG (Video Quality Experts Group) - AAVP

At the 2018 autumn VQEG meeting in Mountain View (California, United States), hosted by Google, Narciso García presented the preliminary results of a novel application of the well-known Video Multimethod Assessment Fusion (VMAF) metric to 360VR content, the outcome of a project with Nokia Bell Labs. Since its inception, the full-reference VMAF metric has provided consistently good results on different types of non-immersive content and viewing conditions. However, VMAF had only been applied to conventional (i.e., planar or 2D) video sources. Therefore, our research team decided to analyze the application of VMAF to omnidirectional video content without making any specific adjustments.

The research was based on the underlying hypothesis of a monotonic relationship between 2D-VMAF (existing) and 360VR-VMAF (non-existing). Should this hypothesis hold, we could avoid generating a large and rich 360VR-specific video dataset, carrying out numerous subjective quality assessments, and performing the corresponding training and testing stages.
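A monotonic relationship of this kind is commonly checked with a rank-correlation test such as Spearman's rho, where a value of 1.0 indicates a perfectly monotonic relation. The sketch below illustrates the idea; the score values are hypothetical placeholders, not measurements from this study:

```python
def ranks(values):
    """Return the 1-based rank of each value; assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman rank correlation between two paired score lists."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical per-sequence scores: 2D VMAF vs. subjective 360VR quality (MOS).
vmaf_2d = [35.2, 48.7, 61.0, 72.4, 88.9]
mos_360 = [2.1, 2.9, 3.5, 4.0, 4.6]

print(spearman_rho(vmaf_2d, mos_360))  # 1.0 here: the rankings agree exactly
```

If the correlation stays close to 1.0 across test sequences, the 2D metric preserves the quality ordering of the 360VR material, which is exactly what the hypothesis requires.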

The validation of VMAF on 360VR content was carried out in two steps. First, VMAF was applied to omnidirectional sequences encoded with constant QP over the whole range of possible values, to obtain the variation of the score with the encoding parameter. Then, the VMAF scores were validated through a subjective assessment, together with an adjustment of the VMAF-vs-QP curve using a finite number of key operating points.
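Anchoring a score-vs-QP curve on a few subjectively rated operating points can be done with a simple piecewise-linear mapping between anchors. The sketch below shows the idea; the anchor QPs and scores are illustrative values, not data from the study:

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation of x over ascending anchors (xs, ys)."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Hypothetical key operating points: QP values rated in the subjective test,
# with the adjusted VMAF score at each anchor.
qp_anchors   = [17, 22, 27, 32, 37, 42]
vmaf_anchors = [96.0, 90.0, 80.0, 65.0, 48.0, 30.0]

print(interp(29.5, qp_anchors, vmaf_anchors))  # 72.5, midway between QP 27 and 32
```

Because only a handful of operating points need subjective ratings, the full curve is recovered by interpolation rather than by rating every QP value.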

After an exhaustive study of the feasibility of VMAF on 360VR content, the results show that VMAF works sufficiently well with omnidirectional content without any particular adjustments. Therefore, the creation of a dedicated 360VR dataset can be avoided, saving computing and time resources.

This activity of GTI is supported by the Spanish Projects IVME (Immersive Visual Media Environments) and AAVP (Advanced Adaptive Video Personalization).
