Project/Area Number |
22K12299
|
Research Category |
Grant-in-Aid for Scientific Research (C)
|
Allocation Type | Multi-year Fund |
Section | General |
Review Section |
Basic Section 62030:Learning support system-related
|
Research Institution | The University of Aizu |
Principal Investigator |
Truong CongThang, The University of Aizu, School of Computer Science and Engineering, Senior Associate Professor (40622957)
|
Project Period (FY) |
2022-04-01 – 2025-03-31
|
Project Status |
Granted (Fiscal Year 2023)
|
Budget Amount |
¥3,120,000 (Direct Cost: ¥2,400,000, Indirect Cost: ¥720,000)
Fiscal Year 2024: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2023: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2022: ¥1,300,000 (Direct Cost: ¥1,000,000, Indirect Cost: ¥300,000)
|
Keywords | Quality of experience / Quality evaluation / Adaptive streaming / Multimodal learning / AI-generated content / Quality model / Online learning / Media analysis / Multi-feature learning |
Outline of Research at the Start |
We will first investigate and evaluate existing quality models and transmission methods for adaptive streaming. These models and methods will then be improved and adapted for online learning, taking into account the presence of media objects. Finally, solutions for monitoring and managing quality will be developed.
|
Outline of Annual Research Achievements |
We focused on evaluating and managing content quality for end users, where the content may be generated either by humans or by neural networks. For quality evaluation, existing studies deal only with perceptual features; we propose that both perceptual and semantic features are important. We showed that combining such features from traditional quality models and recent Large Multimodal Models (LMMs) is very effective. A new content type for e-learning and VR, based on Neural Radiance Fields (NeRF), was evaluated through both subjective and objective experiments. For quality management, we proposed a new adaptive streaming method that copes with sudden drops in connection bandwidth, employing scalable video coding and the HTTP/2 protocol to improve users' Quality of Experience.
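The feature-combination idea above can be sketched as a weighted fusion of a perceptual score (e.g., from a traditional quality metric) and a semantic score (e.g., from an LMM). This is a minimal illustrative sketch only: the function name, the linear weighting scheme, and the weight value are assumptions, not the study's actual model.

```python
# Hypothetical sketch of perceptual/semantic score fusion.
# Both input scores are assumed to be normalized to [0, 1].

def fuse_quality(perceptual: float, semantic: float, w_p: float = 0.6) -> float:
    """Linearly combine a perceptual score and a semantic score.

    w_p is the weight given to the perceptual score; the remainder
    (1 - w_p) goes to the semantic score. The default 0.6 is an
    arbitrary illustrative choice.
    """
    if not (0.0 <= w_p <= 1.0):
        raise ValueError("w_p must be in [0, 1]")
    return w_p * perceptual + (1.0 - w_p) * semantic

# Example: high perceptual quality but weaker semantic relevance.
print(round(fuse_quality(0.8, 0.5), 3))  # 0.68
```

In practice the weight could be tuned on a subjective-quality dataset, or the linear combination could be replaced by a learned regressor over both feature sets.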
|
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
In the past year, we made good progress in both quality evaluation and quality management of visual content. New quality models and new streaming approaches were investigated, and the research has proceeded largely as planned. So far, our work has been based on public image and video datasets; we are now creating our own datasets to support new models for new content types.
|
Strategy for Future Research Activity |
In the past year, we found that new neural networks, such as Large Multimodal Models (LMMs) and Neural Radiance Fields (NeRF), will be important for generating and representing content in the near future. Our future focus is therefore on 1) creating new content datasets using such neural networks and 2) developing new methods to analyze, evaluate, and improve quality for end users. We will also develop a testbed that integrates content generation/delivery techniques with quality management for multiple users.
|