2022 Fiscal Year Research-status Report
Perceptual and Computational Task-Based Quality Assessment of Driving Video
Project/Area Number | 22K12085
Research Institution | Ritsumeikan University
Principal Investigator | Chandler Damon, Ritsumeikan University, College of Information Science and Engineering, Professor (70765495)
Co-Investigator(Kenkyū-buntansha) | Inazumi Yasuhiro (稲積 泰宏), Yamanashi Eiwa College, Faculty of Human and Cultural Studies, Associate Professor (30367255)
Project Period (FY) | 2022-04-01 – 2025-03-31
Keywords | quality assessment / driving video / visual perception / detection / AI
Outline of Annual Research Achievements
In this research, we ask: Is it possible to train a computer to automatically assess the quality of driving video? Specifically, we aim to better understand what signal and perceptual features the human visual system uses when judging driving video quality, and how these aspects can be modeled computationally.
Two main objectives were proposed for 2022: (1) create a driving video database, and (2) measure visibility thresholds. Both objectives have largely been accomplished; however, follow-up research and additional data collection are necessary. For (1), we have collected driving videos in a wide variety of environments, including nighttime videos and scenes containing glare; we have also captured videos of outdoor sidewalk scenes. For (2), we have measured visibility thresholds for JPEG artifacts added to the periphery of each video while subjects were required to fixate on the center of the video (the free-viewing measurements are still ongoing).
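For illustration, peripherally distorted stimuli of this kind can be generated by JPEG-compressing each frame and then restoring the original pixels around the fixation point. The following is a minimal Python/OpenCV sketch under that assumption; the circular foveal region, its radius, the JPEG quality, and the file name are placeholders and not the parameters used in our experiments.

```python
import cv2
import numpy as np

def peripheral_jpeg(frame, fovea_radius=100, jpeg_quality=20):
    """Return a copy of 'frame' with JPEG artifacts in the periphery and an
    undistorted circular (foveal) region centered on the frame center.
    The radius and quality values are placeholders, not experimental settings."""
    h, w = frame.shape[:2]

    # JPEG-compress and decompress the whole frame in memory.
    _, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    degraded = cv2.imdecode(buf, cv2.IMREAD_COLOR)

    # Circular mask around the fixation point (the frame center).
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.circle(mask, (w // 2, h // 2), fovea_radius, 255, thickness=-1)

    # Original pixels inside the fovea, degraded pixels outside.
    fovea = cv2.merge([mask, mask, mask]) > 0
    return np.where(fovea, frame, degraded)

# Example: process a clip frame by frame (hypothetical file name).
cap = cv2.VideoCapture("driving_clip.mp4")
while True:
    ret, frame = cap.read()
    if not ret:
        break
    stimulus = peripheral_jpeg(frame)
    # ... display 'stimulus' or write it to an output video ...
cap.release()
```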
Several student thesis projects related to this research were completed or continued in 2022. One MS student worked on glare detection in the driving videos, and another MS student worked on de-nighting of nighttime driving videos; both projects led to conference publications. BS thesis projects addressed roadway detection, detection of words painted on the road surface, and detection of roadside electric and light poles.
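For context, one very simple glare-detection baseline is to flag large, very bright, nearly unsaturated regions; the sketch below implements this idea with OpenCV. It is shown only to illustrate the task and is not the method developed in the student's thesis; all thresholds are assumed values.

```python
import cv2
import numpy as np

def detect_glare(frame_bgr, value_thresh=230, sat_thresh=40, min_area=50):
    """Return a binary mask of candidate glare regions: very bright,
    nearly unsaturated pixels grouped into sufficiently large blobs.
    All thresholds are illustrative, not tuned values."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    sat, val = hsv[:, :, 1], hsv[:, :, 2]
    mask = ((val >= value_thresh) & (sat <= sat_thresh)).astype(np.uint8) * 255

    # Drop small speckles: keep only connected blobs of at least min_area pixels.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    cleaned = np.zeros_like(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == i] = 255
    return cleaned
```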
Current Status of Research Progress
3: Progress in research has been slightly delayed.
Reason
We faced an initial delay in collecting the videos due to issues with mounting the camera on the vehicle. We first placed the camera inside the vehicle, but this approach failed because of occasional reflections that could not be eliminated. The camera was therefore mounted outside the vehicle, but this introduced camera shake caused by vibration. We have since overcome these issues, but they delayed the research.
Furthermore, the videos in the database have yet to be fully edited. The main reason for this delay is that fully manual annotation proved too laborious; we have therefore begun using automatic segmentation followed by manual correction.
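As an illustration of such a pipeline, the sketch below runs a generic pretrained semantic segmentation model (torchvision's DeepLabV3, trained on COCO/VOC classes) on each frame and saves the predicted label maps as PNG files that can later be opened and corrected by hand. The specific model, label set, and file names are assumptions for the sketch and are not necessarily those used in this project.

```python
import cv2
import torch
from torchvision.models.segmentation import (
    DeepLabV3_ResNet50_Weights, deeplabv3_resnet50)

# Generic pretrained model; the project may use a different model/label set.
weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

cap = cv2.VideoCapture("driving_clip.mp4")  # hypothetical file name
idx = 0
with torch.no_grad():
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        h, w = frame.shape[:2]
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        batch = preprocess(torch.from_numpy(rgb).permute(2, 0, 1)).unsqueeze(0)
        labels = model(batch)["out"].argmax(1)[0].byte().numpy()
        # Resize the label map back to the frame size and save it as a
        # grayscale PNG so it can be manually corrected in an image editor.
        labels = cv2.resize(labels, (w, h), interpolation=cv2.INTER_NEAREST)
        cv2.imwrite(f"mask_{idx:05d}.png", labels)
        idx += 1
cap.release()
```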
We are also not fully satisfied with our limited control over the video content; to address this, we are experimenting with the use of computer-generated (CG) videos.
Strategy for Future Research Activity
The plans for FY2023 will largely proceed as initially proposed, with one modification as noted below.
(1) We will finish the video database (ideally, both real videos and CG videos) and finish measuring visibility thresholds for all of the videos. One new MS student will be tasked with researching and developing a model to predict the visibility thresholds. (2) As proposed, we will collect driving-safety ratings for the videos and for distorted versions of the videos. (3) As a modification of (2), we will also run pilot experiments on task-based measurement of quality using a driving simulator (with CG videos).
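For reference, visibility thresholds of the kind mentioned in (1) are commonly estimated by fitting a psychometric function to per-level detection data. The sketch below fits a Weibull function with SciPy and reads off the distortion level detected on 75% of trials; the data values, the 2AFC guess rate of 0.5, the lapse rate, and the 75% criterion are illustrative assumptions rather than the procedure actually used in this project.

```python
import numpy as np
from scipy.optimize import curve_fit

GUESS, LAPSE = 0.5, 0.02  # assumed 2AFC guess rate and lapse rate

def weibull(x, alpha, beta):
    """Weibull psychometric function with fixed guess and lapse rates."""
    return GUESS + (1 - GUESS - LAPSE) * (1 - np.exp(-(x / alpha) ** beta))

# Hypothetical data: distortion levels and the proportion of trials on which
# the peripheral JPEG artifacts were detected at each level.
levels = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
p_detect = np.array([0.50, 0.55, 0.62, 0.80, 0.95, 1.00])

# Fit the scale (alpha) and slope (beta) parameters.
(alpha, beta), _ = curve_fit(weibull, levels, p_detect, p0=[8.0, 2.0])

# Visibility threshold: distortion level detected on 75% of trials.
p_target = 0.75
inner = (1 - LAPSE - p_target) / (1 - GUESS - LAPSE)
threshold = alpha * (-np.log(inner)) ** (1 / beta)
print(f"threshold ~ {threshold:.2f}, slope ~ {beta:.2f}")
```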
Causes of Carryover
Some of the camera equipment needed for FY2022 could be purchased with laboratory renovation funds provided by Ritsumeikan University. In addition, GPS trackers were deemed unnecessary and were not purchased. We will use the remaining funds to build a driving simulator for the task-based quality experiment.