Perceptual and Computational Task-Based Quality Assessment of Driving Video
Project/Area Number | 22K12085
Research Category | Grant-in-Aid for Scientific Research (C)
Allocation Type | Multi-year Fund
Section | General
Review Section | Basic Section 61010: Perceptual information processing-related
Research Institution | Ritsumeikan University
Principal Investigator | Damon Chandler (Ritsumeikan University, College of Information Science and Engineering, Professor) (70765495)
Co-Investigator (Kenkyū-buntansha) | Yasuhiro Inazumi (Yamanashi Eiwa College, Faculty of Human Culture, Associate Professor) (30367255)
Project Period (FY) | 2022-04-01 – 2025-03-31
Project Status | Granted (Fiscal Year 2023)
Budget Amount | ¥4,160,000 (Direct Cost: ¥3,200,000; Indirect Cost: ¥960,000)
Fiscal Year 2024: ¥1,170,000 (Direct Cost: ¥900,000; Indirect Cost: ¥270,000)
Fiscal Year 2023: ¥2,730,000 (Direct Cost: ¥2,100,000; Indirect Cost: ¥630,000)
Fiscal Year 2022: ¥260,000 (Direct Cost: ¥200,000; Indirect Cost: ¥60,000)
Keywords | quality assessment / driving video / driving simulator / visual perception / AI / detection
Outline of Research at the Start
We seek to identify the features used by the human visual system when judging driving video quality. To address this question, we will perform the following steps:
First, we will create a large database of annotated driving videos covering a wide variety of driving scenarios. Second, we will experimentally measure visibility thresholds for distortions added to the videos. Third, we will experimentally measure quality ratings for heavily distorted versions of the videos. Finally, based on the experimental scores and findings, we will research and develop machine-learning-based quality assessment (QA) models.
Outline of Annual Research Achievements
(1) Using the CARLA driving simulator, we created a large database of computer-graphics-based driving videos (a car driving on the road and a miniature car driving on the sidewalk). The database spans various weather conditions, locations, road types, and levels of crowdedness. The road-driving videos have been rated in terms of subjective driving-safety quality by 12 human subjects; quality ratings of the sidewalk-navigation videos are still ongoing. We have analyzed relationships between the quality scores and various computer-vision-based features of the videos, as well as relationships between the quality scores and various objective no-reference video quality assessment (NR-VQA) scores. We intend to report these findings at a conference in 2024.
(2) As part of the quality assessment of the sidewalk-navigation videos, we developed an algorithm for detecting and analyzing braille blocks (tactile paving). The algorithm is based on the DeepLab semantic segmentation framework, supplemented by low-level color and texture features.
(3) We have built the hardware for two miniature robotic vehicles for sidewalk driving. Our goal is to compare actual driving of these vehicles with simulated driving in the driving simulator, which will let us measure driving-safety quality much more objectively. In 2023, the robots were designed and the hardware assembled; we are now developing the software to drive the robots remotely.
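As a sketch of the score-correlation analysis mentioned in (1): the standard way to compare subjective ratings against objective NR-VQA scores is via Pearson (PLCC) and Spearman rank-order (SROCC) correlation. The data below are illustrative placeholders, not the project's actual ratings.

```python
import numpy as np

# Hypothetical data: mean opinion scores (MOS) from the 12 subjects for a
# handful of road-driving videos, and scores from some NR-VQA model.
# Values are illustrative only, not the project's data.
mos = np.array([4.2, 3.1, 2.5, 4.8, 3.7, 1.9])
nr_vqa = np.array([0.81, 0.62, 0.55, 0.90, 0.70, 0.40])

def pearson(x, y):
    """Pearson linear correlation coefficient (PLCC)."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

def spearman(x, y):
    """Spearman rank-order correlation (SROCC): Pearson on the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return pearson(rx, ry)

print(f"PLCC:  {pearson(mos, nr_vqa):.3f}")
print(f"SROCC: {spearman(mos, nr_vqa):.3f}")
```

A nonlinear (e.g., logistic) mapping is often fitted before computing PLCC in VQA studies; the raw coefficient above is the simplest variant.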
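The braille-block detector in (2) combines DeepLab segmentation with low-level color and texture cues. Below is a minimal sketch of the color cue alone, under the assumption that the blocks are the typical bright yellow and that fusion with the segmentation output is a simple weighted blend; the thresholds and weight are illustrative, not the project's tuned values.

```python
import numpy as np

def yellow_mask(rgb):
    """Boolean mask of roughly-yellow pixels in an HxWx3 uint8 RGB image.
    Thresholds are assumptions for illustration."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 150) & (g > 120) & (b < 100) & (abs(r - g) < 80)

def fuse(seg_prob, rgb, alpha=0.7):
    """Blend a per-pixel segmentation probability map (e.g., from DeepLab)
    with the low-level color cue."""
    return alpha * seg_prob + (1 - alpha) * yellow_mask(rgb).astype(float)

# Tiny synthetic example: one yellow pixel, one gray pixel.
img = np.array([[[230, 200, 40], [90, 90, 90]]], dtype=np.uint8)
prob = np.array([[0.6, 0.6]])
print(fuse(prob, img))
```

In practice the texture cue (the blocks' regular dot/bar pattern) would be fused the same way; it is omitted here for brevity.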
Current Status of Research Progress
3: Progress in research has been slightly delayed.
Reason
Our initial plan was to use real driving videos. However, because of the lack of control over environmental and traffic conditions, and because of unforeseen practical issues that affect quality (glass reflections, vibrations, etc.), we decided to switch to simulated videos. Generating these simulated videos delayed our research. However, the real videos that we collected and rated in 2022 remain useful for final verification.
The simulated videos offer two critical advantages: (1) we have full control over the environment (weather, terrain, and traffic); and (2) we can let subjects actually drive the simulated car to obtain an objective measure of quality. We intend to pursue (2) in FY2024.
Strategy for Future Research Activity
(1) We will create an algorithm to predict the subjective quality ratings of our simulated road-driving videos. We expect to use environmental features (measured via algorithms or reported by CARLA) to assist the quality assessment process.
(2) We will repeat the quality rating experiment, except this time the subjects will be asked to drive the car in the simulated settings. Quality will be measured in terms of task performance (driving time, driving safety).
(3) We will test the algorithm from (1) on real driving videos, and we will compare the results of (2) to the subjective ratings. Our goal is to lay the groundwork for a quality assessment algorithm that works on real videos and assesses task-based quality.
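Step (1) above could, for example, be sketched as an ordinary-least-squares fit from environmental features to quality scores. The feature names and synthetic data below are assumptions for illustration only; CARLA can report such environmental parameters directly, and the project may well use a more sophisticated learned model.

```python
import numpy as np

# Synthetic stand-in data: three hypothetical environmental features
# ([rain, traffic, darkness]) and MOS-like quality targets that decrease
# as each factor increases. None of these numbers come from the project.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 3))
w_true = np.array([-2.0, -1.0, -1.5])
y = 4.5 + X @ w_true + rng.normal(0, 0.1, 50)

# Fit intercept + weights with ordinary least squares.
A = np.hstack([np.ones((50, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print("weights:", np.round(coef, 2), "RMSE:", round(rmse, 3))
```

The negative fitted weights recover the assumed effect direction (quality drops with rain, traffic, and darkness), which is the kind of interpretable relationship the environmental-feature analysis targets.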
Report | (2 results)
Research Products | (3 results)