Research Project/Area Number | 22K12085 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Multi-year Fund |
Application Category | General |
Review Section | Basic Section 61010: Perceptual information processing-related |
Research Institution | Ritsumeikan University |
Principal Investigator | Chandler Damon, Ritsumeikan University, College of Information Science and Engineering, Professor (70765495) |
Co-Investigator | 稲積 泰宏 (Inazumi Yasuhiro), Yamanashi Eiwa College, Faculty of Human and Cultural Studies, Associate Professor (30367255) |
Project Period (FY) | 2022-04-01 – 2025-03-31 |
Project Status | Granted (FY2023) |
Budget Amount *Note | 4,160 thousand yen (Direct Cost: 3,200 thousand yen, Indirect Cost: 960 thousand yen) |
FY2024: 1,170 thousand yen (Direct Cost: 900 thousand yen, Indirect Cost: 270 thousand yen)
FY2023: 2,730 thousand yen (Direct Cost: 2,100 thousand yen, Indirect Cost: 630 thousand yen)
FY2022: 260 thousand yen (Direct Cost: 200 thousand yen, Indirect Cost: 60 thousand yen)
Keywords | quality assessment / driving video / driving simulator / visual perception / AI / detection |
Outline of Research at the Start |
We seek to identify the features used by the human visual system when judging driving video quality. To this end, we will perform the following steps:
First, we will create a large database of annotated driving videos containing a wide variety of driving scenarios. Second, we will experimentally measure visibility thresholds for distortions added to the videos. Third, we will experimentally measure quality ratings for heavily distorted versions of the videos. Finally, based on the experimental scores and findings, we will research and develop machine-learning-based quality assessment (QA) models.
|
Outline of Annual Research Achievements |
(1) We created a large database of computer-graphics-based driving videos (a car driving on the road, a mini car driving on the sidewalk) using the CARLA driving simulator. The database consists of videos spanning various weather conditions, locations, road types, and levels of crowdedness. The road-driving videos have been rated in terms of subjective driving safety quality by 12 human subjects. (Rating of the sidewalk-navigation videos is still ongoing.) We have analyzed relationships between the quality scores and various computer-vision-based features of the videos, and we have also investigated relationships between the quality scores and various objective no-reference video quality assessment scores. We intend to report these findings at a conference in 2024.
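A rank correlation is the standard way to summarize how well an objective no-reference quality metric tracks subjective ratings like those above. Below is a minimal, self-contained Spearman-correlation sketch; the scores are illustrative numbers, not data from this study:

```python
# Illustrative only: correlating subjective quality scores with an
# objective no-reference VQA metric via Spearman rank correlation.

def ranks(values):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                       # extend over a run of tied values
        avg = (i + j) / 2 + 1            # average of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

mos = [4.2, 3.1, 2.5, 4.8, 1.9]          # mean opinion scores (illustrative)
nr_vqa = [0.81, 0.62, 0.55, 0.90, 0.40]  # objective metric scores (illustrative)
print(round(spearman(mos, nr_vqa), 3))   # rank agreement between the two
```

In practice a library routine (e.g., `scipy.stats.spearmanr`) would be used; the point is only to show the kind of score-versus-feature analysis involved.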
(2) As part of the video quality assessment of sidewalk navigation videos, we developed an algorithm for detecting and analyzing braille blocks (tactile paving). The algorithm is based on the DeepLab semantic segmentation framework, supplemented by low-level color and texture features.
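As a rough sketch of how a segmentation result can be supplemented by low-level color features, the snippet below gates per-pixel segmentation confidence with a simple yellowness test (braille blocks in Japan are typically yellow). All thresholds and the fusion rule are illustrative assumptions, not the project's actual algorithm:

```python
# Hypothetical sketch: fuse a semantic-segmentation probability map with a
# low-level color cue to veto false positives on gray pavement.

def is_yellowish(rgb, min_rg=120, max_b_ratio=0.7):
    """Crude yellow test: strong red and green, relatively weak blue.
    Thresholds are illustrative placeholders."""
    r, g, b = rgb
    bright = min(r, g) >= min_rg
    low_blue = b <= max_b_ratio * min(r, g)
    return bright and low_blue

def refine_mask(seg_prob, image, threshold=0.5):
    """Keep a pixel only if the segmentation net is confident AND the
    pixel color is plausibly that of a yellow braille block."""
    h, w = len(seg_prob), len(seg_prob[0])
    return [[seg_prob[y][x] >= threshold and is_yellowish(image[y][x])
             for x in range(w)] for y in range(h)]

# Toy 1x3 "image": yellow block pixel, gray road pixel, yellow pixel the net missed.
image    = [[(230, 200, 40), (128, 128, 128), (235, 210, 50)]]
seg_prob = [[0.9,            0.8,             0.2]]
print(refine_mask(seg_prob, image))  # -> [[True, False, False]]
```

A real pipeline would operate on a DeepLab output tensor and likely add texture features (e.g., edge periodicity of the raised dots); this only illustrates the segmentation-plus-color-cue idea.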
(3) We have built the hardware for two miniature robotic vehicles to be used for driving on the sidewalk. Our hope is to compare actual driving of these vehicles with simulated driving in the driving simulator, which will let us measure driving safety quality much more objectively. In 2023, the robots were designed and the hardware was assembled. We are now developing the software to drive the robots remotely.
|
Current Status of Research Progress (Category) |
3: Progress in research has been slightly delayed.
Reason
Our initial plan was to use real driving videos. However, due to the lack of control over environmental and traffic conditions, and due to unforeseen practical issues that affect quality (glass reflections, vibrations, etc.), we decided to switch to simulated videos. Generating these simulated videos caused a delay in our research. However, the real videos that we collected and rated in 2022 remain very useful for final verification.
The simulated videos offer two critical advantages: (1) we have full control over the environment (weather, terrain, and traffic); and (2) we can actually let subjects drive the simulated car to get an objective measure of quality. We intend to pursue (2) in FY2024.
|
Strategy for Future Research Activity |
(1) We will create an algorithm to predict the subjective quality ratings for our simulated road driving videos. We expect to use environmental features (measured via algorithms or reported by CARLA) to assist the quality assessment process.
(2) We will repeat the quality rating experiment, except this time the subjects will be asked to drive the car in the simulated settings. Quality will be measured in terms of task performance (driving time, driving safety).
(3) We will test the algorithm in (1) on real driving videos, and we will compare the results of (2) to the subjective ratings. Our hope is to build the groundwork for a quality assessment algorithm that can work on real videos and assess task-based quality.
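A task-performance quality score of the kind described in (2) might combine completion time and safety infractions into a single number. The following is a hypothetical sketch; the weights, penalty, and normalization are placeholders, not values from this project:

```python
# Hypothetical task-performance quality score in [0, 1]:
# combines how close the run was to an ideal completion time with a
# per-infraction safety penalty. All constants are illustrative.

def task_quality(drive_time_s, ideal_time_s, infractions,
                 time_weight=0.5, infraction_penalty=0.2):
    """1.0 for an ideal, infraction-free run; lower for slower runs
    and for each collision or lane violation."""
    time_score = min(1.0, ideal_time_s / drive_time_s)           # 1.0 at/under ideal time
    safety_score = max(0.0, 1.0 - infraction_penalty * infractions)
    return time_weight * time_score + (1 - time_weight) * safety_score

print(task_quality(120, 120, 0))  # ideal run -> 1.0
print(task_quality(240, 120, 2))  # twice the ideal time, 2 infractions
```

In the planned experiments, driving time and infraction counts could be read directly from the simulator logs, so the same score could be computed for simulated and, eventually, real runs.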
|