Project/Area Number |
23H03439
|
Allocation Type | Single-year Grants |
Research Institution | Kyushu University |
Principal Investigator |
THOMAS DIEGO  Kyushu University, Faculty of Information Science and Electrical Engineering, Associate Professor (10804651)
|
Co-Investigator(s) |
Kaji Shizuo  Kyushu University, Institute of Mathematics for Industry, Professor (00509656)
Koga Yasuko  Kyushu University, Faculty of Human-Environment Studies, Associate Professor (60225399)
Kawasaki Hiroshi  Kyushu University, Faculty of Information Science and Electrical Engineering, Professor (80361393)
Ochiai Hiroyuki  Kyushu University, Institute of Mathematics for Industry, Professor (90214163)
|
Project Period (FY) |
2023-04-01 – 2026-03-31
|
Keywords | Animatable Avatar / 3D reconstruction / Deep learning / Weak supervision / Digital humans |
Outline of Annual Research Achievements |
We proposed a new method for human body animation that generates pose-dependent detailed deformations in real time within a standard animation pipeline. Our method can animate an avatar up to 30 times faster than baselines, with a better level of detail. The results of this research were published in the proceedings of the international conference Computer Graphics International (CGI) 2023. We also proposed a novel AI-based approach to the 3D reconstruction of clothed humans using weak supervision via 2D normal maps. Our results reinforce the notion that less training data is required to train networks that infer normal maps than to train networks that infer 3D geometry. These results were published on arXiv and submitted to the European Conference on Computer Vision (ECCV) 2024.
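To illustrate the general idea of weak supervision via 2D normal maps, the sketch below (PyTorch) shows one possible loss that compares normals derived from a predicted depth map with a supervising 2D normal map. The function names, the depth-to-normal approximation, and the camera intrinsics are illustrative assumptions only; this is not the implementation developed in this project.

# Minimal sketch, assuming depth of shape (B, 1, H, W) and target normals (B, 3, H, W).
import torch
import torch.nn.functional as F

def normals_from_depth(depth, fx=500.0, fy=500.0):
    # Central finite differences of the depth map along x and y.
    dz_dx = (depth[..., :, 2:] - depth[..., :, :-2]) / 2.0
    dz_dy = (depth[..., 2:, :] - depth[..., :-2, :]) / 2.0
    dz_dx = F.pad(dz_dx, (1, 1, 0, 0), mode="replicate")
    dz_dy = F.pad(dz_dy, (0, 0, 1, 1), mode="replicate")
    # Approximate camera-space normal (-fx*dz/dx, -fy*dz/dy, 1), then normalize.
    n = torch.cat([-fx * dz_dx, -fy * dz_dy, torch.ones_like(depth)], dim=1)
    return F.normalize(n, dim=1)

def normal_map_loss(pred_depth, target_normals, fx=500.0, fy=500.0):
    # Cosine-distance loss between derived normals and the supervising normal map.
    pred_normals = normals_from_depth(pred_depth, fx, fy)
    return (1.0 - (pred_normals * target_normals).sum(dim=1)).mean()

Because the supervision acts entirely in 2D image space, such a loss only requires 2D normal maps at training time rather than full 3D ground-truth geometry.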
|
Current Status of Research Progress (Category) |
2: Research has progressed rather smoothly
Reason
In FY 2023 we had three main objectives: 1) Design an efficient differentiable 3D renderer from implicit 3D surface representations for the 3D reconstruction of clothed humans; 2) Propose a new method to create real-time animatable avatars from RGB-D data; 3) Capture multi-view RGB-D human data at the university.
We achieved our objectives as planned: 1) To address the objective of learning detailed 3D clothed human shapes from 2.5D datasets, we proposed a novel AI-based approach using weak supervision via 2D normal maps; 2) We proposed a new method for animatable avatars that allows the deformation of the avatar's body and clothes, such as wrinkles, to be controlled in real time; 3) We set up a 3D capture system in the lab with calibrated RGB-D cameras and captured some real data with it.
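As an illustration of objective 2 (pose-dependent detail on top of a standard skinning pipeline), the sketch below combines linear blend skinning with a small MLP that predicts pose-dependent per-vertex offsets, one common way to obtain wrinkle-like detail. The architecture, tensor shapes, and names are illustrative assumptions and do not reproduce the method developed in this project.

# Minimal sketch (PyTorch): LBS plus a learned pose-dependent corrective offset.
import torch
import torch.nn as nn

class PoseCorrectiveMLP(nn.Module):
    # Predicts a per-vertex 3D offset from the current joint rotation matrices.
    def __init__(self, n_joints, n_verts, hidden=256):
        super().__init__()
        self.n_verts = n_verts
        self.net = nn.Sequential(
            nn.Linear(n_joints * 9, hidden), nn.ReLU(),
            nn.Linear(hidden, n_verts * 3),
        )

    def forward(self, joint_rotmats):            # (B, J, 3, 3)
        return self.net(joint_rotmats.flatten(1)).view(-1, self.n_verts, 3)

def skin_vertices(rest_verts, skin_weights, joint_transforms, offsets):
    # rest_verts: (V, 3), skin_weights: (V, J),
    # joint_transforms: (B, J, 4, 4) rest-to-posed transforms, offsets: (B, V, 3).
    v = rest_verts.unsqueeze(0) + offsets                      # add detail in rest pose
    ones = torch.ones(v.shape[0], v.shape[1], 1)
    v_h = torch.cat([v, ones], dim=-1)                         # homogeneous coordinates
    T = torch.einsum("vj,bjmn->bvmn", skin_weights, joint_transforms)  # blended transforms
    return torch.einsum("bvmn,bvn->bvm", T, v_h)[..., :3]

At animation time only the MLP forward pass and the skinning step run per frame, which is what keeps this kind of pose-dependent detail compatible with real-time use.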
|
Strategy for Future Research Activity |
Our goal is to propose new methods for creating digital human twins supported by generative AI.
Our future research plan is:
1. Weakly supervised 3D reconstruction. [FY 2024] Employ adversarial learning to learn from both RGB-D and large-scale RGB datasets. [FY 2025] Propose an adaptive tessellation of the 3D space to reduce the computational cost while maintaining the level of detail.
2. Real-time photorealistic animatable avatars. [FY 2024] Add detailed animation of the hands and face to the animatable avatar. [FY 2025] Capture texture and material properties of skin and clothes.
3. Semantic dynamic bodies. [FY 2024] Design action-dependent animated 3D human scenes. [FY 2025] Populate 3D scenes with animated 3D human bodies that interact with the scene in a semantically correct manner.
|