
Towards quality monitoring and managing for online learning

Research Project

Project/Area Number 22K12299
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type Multi-year Fund
Section General
Review Section Basic Section 62030: Learning support system-related
Research Institution The University of Aizu

Principal Investigator

Truong Cong Thang  University of Aizu, School of Computer Science and Engineering, Senior Associate Professor (40622957)

Project Period (FY) 2022-04-01 – 2025-03-31
Project Status Granted (Fiscal Year 2023)
Budget Amount
¥3,120,000 (Direct Cost: ¥2,400,000, Indirect Cost: ¥720,000)
Fiscal Year 2024: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2023: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2022: ¥1,300,000 (Direct Cost: ¥1,000,000, Indirect Cost: ¥300,000)
Keywords Quality of experience / Quality evaluation / Adaptive streaming / Multimodal learning / AI-generated content / Quality model / Online learning / Media analysis / Multi-feature learning
Outline of Research at the Start

We will first investigate and evaluate existing quality models and transmission methods for adaptive streaming. These models and methods will then be improved and adapted for online learning, taking into account the presence of media objects. Finally, solutions for monitoring and managing quality will be developed.
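
As a simple illustration of the kind of QoE models for adaptive streaming to be investigated, the sketch below implements a generic segment-based linear model: average segment quality minus penalties for quality switches and stalling. The function and the weights `w_switch` and `w_stall` are illustrative assumptions, not the project's actual model.

```python
# Minimal sketch (assumed, not the project's model) of a segment-based
# linear QoE model commonly studied for adaptive streaming.

from typing import List

def linear_qoe(segment_quality: List[float],
               stall_durations: List[float],
               w_switch: float = 1.0,
               w_stall: float = 4.0) -> float:
    """Estimate session QoE from per-segment quality scores and stalls."""
    avg_quality = sum(segment_quality) / len(segment_quality)
    # Penalize quality fluctuations between consecutive segments.
    switch_penalty = sum(abs(a - b) for a, b in
                         zip(segment_quality[1:], segment_quality[:-1]))
    switch_penalty /= max(len(segment_quality) - 1, 1)
    # Penalize total stalling (rebuffering) time in seconds.
    stall_penalty = sum(stall_durations)
    return avg_quality - w_switch * switch_penalty - w_stall * stall_penalty

# Example: five segments rated on a 1-5 scale, with one 0.5-second stall.
print(linear_qoe([4.2, 4.0, 2.5, 3.8, 4.1], [0.5]))
```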

Outline of Annual Research Achievements

We focused on evaluating and managing content quality for end users, where content may be generated by humans or by neural networks. For quality evaluation, whereas existing studies deal only with perceptual features, we propose that both perceptual and semantic features are important. We showed that combining such features from traditional quality models and recent Large Multimodal Models (LMMs) is highly effective. A new content type for e-learning and VR, based on neural radiance fields, was evaluated through both subjective and objective experiments. For quality management, we proposed a new adaptive streaming method that handles sudden drops in connection bandwidth; it employs scalable video coding and the HTTP/2 protocol to improve the Quality of Experience for users.
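
The idea of combining perceptual and semantic features can be illustrated with a minimal sketch: hand-crafted perceptual statistics are concatenated with an embedding from a CLIP-style encoder and fed to a simple regressor that predicts mean opinion scores (MOS). The feature extractors below are hypothetical placeholders; the published method's actual features, encoder, and regressor may differ.

```python
# Sketch only: fuse perceptual statistics with a semantic embedding and
# regress MOS. extract_perceptual_features / extract_semantic_features are
# hypothetical placeholders, not the project's feature extractors.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def extract_perceptual_features(image: np.ndarray) -> np.ndarray:
    # Placeholder perceptual statistics: contrast, brightness, local variation.
    return np.array([image.std(), image.mean(),
                     np.abs(np.diff(image, axis=0)).mean()])

def extract_semantic_features(image: np.ndarray) -> np.ndarray:
    # Placeholder for a CLIP-style embedding; a fixed random projection here.
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((image.size, 8))
    return image.reshape(-1) @ proj

# Toy dataset: random "images" with synthetic MOS labels.
rng = np.random.default_rng(42)
images = [rng.random((16, 16)) for _ in range(64)]
mos = rng.uniform(1.0, 5.0, size=64)

X = np.stack([np.concatenate([extract_perceptual_features(im),
                              extract_semantic_features(im)]) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, mos, test_size=0.25, random_state=0)

model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("Predicted MOS:", model.predict(X_te)[:3])
```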

Current Status of Research Progress

2: Research has progressed on the whole more than it was originally planned.

Reason

In the past year, we made good progress in both quality evaluation and quality management of visual content. New quality models and new streaming approaches were investigated, and the research has progressed smoothly as planned. So far, our work has relied on public image and video datasets; we are currently building our own datasets to support new models for new content types.

Strategy for Future Research Activity

In the past year, we found that new neural networks, such as Large Multimodal Models (LMMs) and Neural Radiance Fields (NeRF), will be important for generating and representing content in the near future. Our future focus is therefore on 1) creating new content datasets using such neural networks and 2) developing new methods to analyze, evaluate, and improve quality for end users. We will also develop a testbed that integrates content generation/delivery and quality management techniques for multiple users.
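
As one illustration of the kind of quality analysis planned for NeRF-generated content, the sketch below measures quality stability as the mean and spread of a per-view metric (PSNR) over rendered test views. This is an assumed, simplified protocol, not the project's evaluation method.

```python
# Illustrative sketch: quantify quality stability of a NeRF-rendered object
# as the mean and standard deviation of per-view PSNR against references.

import numpy as np

def psnr(reference: np.ndarray, rendered: np.ndarray, peak: float = 1.0) -> float:
    mse = np.mean((reference - rendered) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def view_quality_stats(references, renders):
    """Return mean and standard deviation of per-view PSNR."""
    scores = np.array([psnr(r, v) for r, v in zip(references, renders)])
    return scores.mean(), scores.std()

# Toy example with random "views"; real use would load rendered test views.
rng = np.random.default_rng(0)
refs = [rng.random((32, 32, 3)) for _ in range(10)]
rends = [r + rng.normal(0, 0.02, r.shape) for r in refs]
mean_psnr, std_psnr = view_quality_stats(refs, rends)
print(f"mean PSNR = {mean_psnr:.2f} dB, std = {std_psnr:.2f} dB")
```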

Report (2 results)

  • 2023 Research-status Report
  • 2022 Research-status Report

Research Products (13 results)

Int'l Joint Research (2 results), Journal Article (4 results; of which Int'l Joint Research: 4, Peer Reviewed: 4, Open Access: 3), Presentation (7 results; of which Int'l Joint Research: 7, Invited: 1)

  • [Int'l Joint Research] Hanoi University of Science & Technology (Vietnam)

    • Related Report
      2023 Research-status Report
  • [Int'l Joint Research] Hanoi University of Science & Tech./VinUniversity (Vietnam)

    • Related Report
      2022 Research-status Report
  • [Journal Article] Scalable and resilient 360-degree-video adaptive streaming over HTTP/2 against sudden network drops2024

    • Author(s)
      V. H. Nguyen, D. T. Bui, T. L. Tran, T. C. Thang, T. H. Truong
    • Journal Title

      Computer Communications

      Volume: 216 Pages: 1-15

    • DOI

      10.1016/j.comcom.2024.01.001

    • Related Report
      2023 Research-status Report
    • Peer Reviewed / Int'l Joint Research
  • [Journal Article] Perceptual Image Quality Prediction: Are Contrastive Language-Image Pretraining (CLIP) Visual Features Effective?2024

    • Author(s)
      C. Onuoha, J. Flaherty, and T. C. Thang
    • Journal Title

      Electronics

      Volume: 13 Issue: 4 Pages: 803-803

    • DOI

      10.3390/electronics13040803

    • Related Report
      2023 Research-status Report
    • Peer Reviewed / Open Access / Int'l Joint Research
  • [Journal Article] Scalable Multicast for Live 360-Degree Video Streaming Over Mobile Networks2022

    • Author(s)
      Duc Nguyen, Nguyen Viet Hung, Nguyen Tien Phong, Truong Thu Huong, Truong Cong Thang
    • Journal Title

      IEEE Access

      Volume: 10 Pages: 38802-38812

    • DOI

      10.1109/access.2022.3165657

    • Related Report
      2022 Research-status Report
    • Peer Reviewed / Open Access / Int'l Joint Research
  • [Journal Article] QoE models for adaptive streaming: A comprehensive evaluation2022

    • Author(s)
      Duc Nguyen, Nam Pham Ngoc, Truong Cong Thang
    • Journal Title

      Future Internet

      Volume: 14 Issue: 5 Pages: 151-151

    • DOI

      10.3390/fi14050151

    • Related Report
      2022 Research-status Report
    • Peer Reviewed / Open Access / Int'l Joint Research
  • [Presentation] An Evaluation of Quality Stability in NeRF Object Modeling2024

    • Author(s)
      S. Luo, J. Flaherty, H. Yang, C. Onuoha, and T.C. Thang
    • Organizer
      IEEE Int’l Conf. on Artificial Intelligence and Mechatronics Systems (AIMS)
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] AI to Judge AI-Generated Images: Both Semantics and Perception Matter2023

    • Author(s)
      J. Flaherty, C. Onuoha, I. Paik, T.C. Thang
    • Organizer
      IEEE 13th International Conference on Consumer Electronics-Berlin (ICCE-Berlin)
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] Blind Image Quality Assessment With Multimodal Prompt Learning2023

    • Author(s)
      N. T. Luu, C. Onuoha, and T.C. Thang
    • Organizer
      IEEE 15th Int'l Conf on Computational Intelligence and Comm. Networks (CICN)
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] An Evaluation of Training Strategies in QuGAN2023

    • Author(s)
      T. A. Ngo, N.T. Luu, T. C. Thang
    • Organizer
      IEEE Int’l Conf. Quantum Computing and Engineering (QCE2023)
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] An Evaluation of Quality Metrics for Neural Radiance Field2023

    • Author(s)
      C. Onuoha, J. Flaherty, S. Luo, T.T. Huong, and T.C. Thang
    • Organizer
      IEEE 15th Int’l Conf. Computational Intelligence and Communication Networks
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] Towards QoE Management for Post-Pandemic Online Learning2022

    • Author(s)
      Truong Cong Thang, Yutaka Watanobe, Rage Uday Kiran, Incheon Paik
    • Organizer
      IEEE 14th International Conf. on Knowledge and Systems Engineering (KSE)
    • Related Report
      2022 Research-status Report
    • Int'l Joint Research / Invited
  • [Presentation] Multi-feature Machine Learning with Quantum Superposition2022

    • Author(s)
      Tuyen Nguyen, Incheon Paik, Truong Cong Thang
    • Organizer
      IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia)
    • Related Report
      2022 Research-status Report
    • Int'l Joint Research

Published: 2022-04-19   Modified: 2024-12-25  
