
FY 2023 Annual Research Report

A new data-driven approach to bring humanity into virtual worlds with computer vision

Research Project

Project/Area Number: 23H03439
Allocation Type: Grant-in-Aid
Research Institution: Kyushu University

Principal Investigator

THOMAS Diego  Kyushu University, Faculty of Information Science and Electrical Engineering, Associate Professor (10804651)

Co-Investigators KAJI Shizuo  Kyushu University, Institute of Mathematics for Industry, Professor (00509656)
KOGA Yasuko  Kyushu University, Faculty of Human-Environment Studies, Associate Professor (60225399)
KAWASAKI Hiroshi  Kyushu University, Faculty of Information Science and Electrical Engineering, Professor (80361393)
OCHIAI Hiroyuki  Kyushu University, Institute of Mathematics for Industry, Professor (90214163)
Project Period (FY): 2023-04-01 – 2026-03-31
Keywords: Animatable Avatar / 3D reconstruction / Deep learning / Weak supervision / Digital humans
Outline of Annual Research Achievements

We proposed a new method for human body animation that generates pose-dependent detailed deformations in real time in a standard animation pipeline. Our method can animate an avatar up to 30 times faster than the baselines, with a better level of detail. The results of this research were published in the proceedings of the international conference Computer Graphics International (CGI) 2023.
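The idea of pose-dependent detail on top of a standard animation pipeline can be sketched as follows. This is our own minimal illustration, not the published CGI 2023 method: `skin_with_correctives` and its conventions are hypothetical. It augments ordinary linear blend skinning with a learned, pose-dependent corrective displacement (e.g. wrinkles) applied in the rest pose before skinning.

```python
import numpy as np

def skin_with_correctives(verts, weights, bone_T, corrective):
    """Linear blend skinning plus a pose-dependent corrective offset.

    verts: (N, 3) rest-pose vertices; weights: (N, B) skinning weights.
    bone_T: (B, 4, 4) bone transforms; corrective: (N, 3) pose-dependent
    displacement (e.g. predicted wrinkle detail), added in rest pose.
    """
    v = verts + corrective                                   # apply detail deformation
    vh = np.concatenate([v, np.ones((len(v), 1))], axis=1)   # homogeneous coords
    # Blend per-bone transforms by skinning weights, then apply per vertex.
    T = np.einsum('nb,bij->nij', weights, bone_T)            # (N, 4, 4)
    out = np.einsum('nij,nj->ni', T, vh)
    return out[:, :3]
```

Because the corrective is a single per-vertex offset, it adds almost no cost to the skinning step, which is what makes real-time evaluation plausible.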
We proposed a novel AI-based approach to the 3D reconstruction of clothed humans using weak supervision via 2D normal maps. Our results reinforce the notion that less training data is required to train networks that infer normal maps than to train networks that infer 3D geometry. The results were published on arXiv and submitted to the European Conference on Computer Vision (ECCV) 2024.
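The weak-supervision signal can be illustrated with a minimal sketch (our own illustration, not the paper's implementation; `normal_map_loss` is a hypothetical name): instead of comparing predicted 3D geometry against 3D ground truth, the loss compares a rendered/predicted normal map against a ground-truth 2D normal map inside the person's silhouette.

```python
import numpy as np

def normal_map_loss(pred, target, mask):
    """L1 loss between predicted and ground-truth normal maps.

    pred, target: (H, W, 3) arrays of unit surface normals.
    mask: (H, W) boolean foreground mask (person silhouette).
    Supervising 2D normals avoids needing full 3D ground truth.
    """
    diff = np.abs(pred - target).sum(axis=-1)  # per-pixel L1
    return float(diff[mask].mean())

# Toy example: a perfect prediction has zero loss.
H, W = 4, 4
target = np.zeros((H, W, 3)); target[..., 2] = 1.0  # normals facing the camera
mask = np.ones((H, W), dtype=bool)
assert normal_map_loss(target.copy(), target, mask) == 0.0
```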

Current Status of Research Progress (Category)

2: Progressing generally as planned

Reason

In FY 2023 we had three main objectives: 1) design an efficient differentiable 3D renderer from implicit 3D surface representations for the 3D reconstruction of clothed humans; 2) propose a new method to create real-time animatable avatars from RGB-D data; 3) capture multi-view RGB-D human data at the university.

We achieved our objectives as planned: 1) To address the objective of learning detailed 3D clothed human shapes from 2.5D datasets, we proposed a novel AI-based approach using weak supervision via 2D normal maps; 2) we proposed a new method for animatable avatars that allows controlling the deformation of the avatar's body and clothes, such as wrinkles, in real time; 3) we set up a 3D capture system in the lab with calibrated RGB-D cameras and captured real data with it.
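The core operation behind a calibrated multi-camera RGB-D setup is lifting each depth map into a common world frame so views can be fused. The following is an illustrative outline only; the function name and matrix conventions are assumptions, not the lab's actual code.

```python
import numpy as np

def backproject_depth(depth, K, T_world_cam):
    """Lift a depth map to a world-space point cloud.

    depth: (H, W) metric depth in meters (0 = invalid pixel).
    K: (3, 3) camera intrinsics from calibration.
    T_world_cam: (4, 4) camera-to-world extrinsic from calibration.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.ravel()
    valid = z > 0
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])  # homogeneous pixels
    rays = np.linalg.inv(K) @ pix             # camera-space viewing rays
    pts_cam = rays * z                        # (3, N) camera-space points
    pts_h = np.vstack([pts_cam, np.ones(H * W)])
    pts_world = (T_world_cam @ pts_h)[:3].T   # (N, 3) world-space points
    return pts_world[valid]
```

Running this per camera and concatenating the results gives a merged point cloud in one coordinate system, provided the extrinsics were jointly calibrated.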

Strategy for Future Research Activity

Our goal is to propose new methods for creating digital human twins supported by generative AI.

Our future research plan is as follows:
1. Weakly supervised 3D reconstruction. [FY 2024] Employ adversarial learning to learn from both RGB-D and large-scale RGB datasets. [FY 2025] Propose an adaptive tessellation of 3D space to reduce computational cost while maintaining the level of detail.
2. Real-time photorealistic animatable avatars. [FY 2024] Add detailed animation of hands and face to the animatable avatar. [FY 2025] Capture texture and material properties of skin and clothes.
3. Semantic dynamic bodies. [FY 2024] Design action-dependent animated 3D human scenes. [FY 2025] Populate 3D scenes with animated 3D human bodies that interact with the scene in a semantically correct manner.
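The adaptive-tessellation item in plan 1 can be sketched as octree-style refinement: only cells that may contain the surface (where the signed distance is smaller than the cell's half-diagonal) are subdivided, so empty space stays coarse. This is a generic sketch under our own assumptions, not the project's planned method.

```python
import numpy as np

def refine_cells(cells, size, sdf, max_depth=3):
    """Adaptively subdivide only cells that may contain the surface.

    cells: list of (3,) cell-center coordinates; size: cell edge length.
    sdf: callable mapping a center to a signed distance to the surface.
    A cell is split when |sdf| < half its diagonal; otherwise it is
    kept at its current (coarser) resolution.
    """
    out = []
    for _ in range(max_depth):
        next_cells, half = [], size / 2.0
        for c in cells:
            if abs(sdf(c)) < (np.sqrt(3) / 2.0) * size:  # surface may cross cell
                for dx in (-half / 2, half / 2):
                    for dy in (-half / 2, half / 2):
                        for dz in (-half / 2, half / 2):
                            next_cells.append(c + np.array([dx, dy, dz]))
            else:
                out.append((c, size))  # far from surface: keep coarse
        cells, size = next_cells, half
    out.extend((c, size) for c in cells)
    return out
```

For a sphere of radius 0.5 inside a unit-scale domain, this yields far fewer leaf cells than a uniform grid at the finest resolution, which is the intended computational saving.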

  • Research Products

    (13 results)


All: Int'l Joint Research (3 results), Journal Article (5 results) (of which Int'l Joint Research: 5, Peer Reviewed: 4), Presentation (3 results) (of which Int'l Conference: 3), Remarks (2 results)

  • [Int'l Joint Research] Universidade Federal Rural de Pernambuco (Brazil)

    • Country
      Brazil
    • Counterpart Institution
      Universidade Federal Rural de Pernambuco
  • [Int'l Joint Research] INRIA/ESIEE, University Gustave Eiffel/University Savoie Mont Blanc (France)

    • Country
      France
    • Counterpart Institution
      INRIA/ESIEE, University Gustave Eiffel/University Savoie Mont Blanc
  • [Int'l Joint Research] Stanford University/University of California, Berkeley (USA)

    • Country
      USA
    • Counterpart Institution
      Stanford University/University of California, Berkeley
  • [Journal Article] ActiveNeuS: Neural Signed Distance Fields for Active Stereo (2024)

    • Author(s)
      Kazuto Ichimaru, Takaki Ikeda, Diego Thomas, Takafumi Iwaguchi, Hiroshi Kawasaki
    • Journal

      International Conference on 3D Vision

      Volume: 1, Pages: 1-9

    • Peer Reviewed / Int'l Joint Research
  • [Journal Article] ACT2G: Attention-based Contrastive Learning for Text-to-Gesture Generation (2023)

    • Author(s)
      Teshima Hitoshi, Wake Naoki, Thomas Diego, Nakashima Yuta, Kawasaki Hiroshi, Ikeuchi Katsushi
    • Journal

      Proceedings of the ACM on Computer Graphics and Interactive Techniques

      Volume: 6, Pages: 1-17

    • DOI

      10.1145/3606940

    • Peer Reviewed / Int'l Joint Research
  • [Journal Article] A Two-Step Approach for Interactive Animatable Avatars (2023)

    • Author(s)
      Kitamura Takumi, Iwamoto Naoya, Kawasaki Hiroshi, Thomas Diego
    • Journal

      Computer Graphics International Conference

      Volume: 1, Pages: 491-509

    • DOI

      10.1007/978-3-031-50072-5_39

    • Peer Reviewed / Int'l Joint Research
  • [Journal Article] Toward Unlabeled Multi-View 3D Pedestrian Detection by Generalizable AI: Techniques and Performance Analysis (2023)

    • Author(s)
      Lima Joao Paulo, Thomas Diego, Uchiyama Hideaki, Teichrieb Veronica
    • Journal

      2023 36th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI)

      Volume: 1, Pages: 1-6

    • DOI

      10.1109/SIBGRAPI59091.2023.10347151

    • Peer Reviewed / Int'l Joint Research
  • [Journal Article] Weakly-Supervised 3D Reconstruction of Clothed Humans via Normal Maps (2023)

    • Author(s)
      Jane Wu, Diego Thomas, Ronald Fedkiw
    • Journal

      arXiv preprint arXiv:2311.16042

      Volume: 1, Pages: 1-15

    • Int'l Joint Research
  • [Presentation] ActiveNeuS: Neural Signed Distance Fields for Active Stereo (2024)

    • Author(s)
      Kazuto Ichimaru, Takaki Ikeda, Diego Thomas, Takafumi Iwaguchi, Hiroshi Kawasaki
    • Event
      International Conference on 3D Vision
    • Int'l Conference
  • [Presentation] A Two-Step Approach for Interactive Animatable Avatars (2023)

    • Author(s)
      Kitamura Takumi, Iwamoto Naoya, Kawasaki Hiroshi, Thomas Diego
    • Event
      Computer Graphics International Conference
    • Int'l Conference
  • [Presentation] ACT2G: Attention-based Contrastive Learning for Text-to-Gesture Generation (2023)

    • Author(s)
      Teshima Hitoshi, Wake Naoki, Thomas Diego, Nakashima Yuta, Kawasaki Hiroshi, Ikeuchi Katsushi
    • Event
      Proceedings of the ACM on Computer Graphics and Interactive Techniques
    • Int'l Conference
  • [Remarks] Interactive Animatable Avatar

    • URL

      https://github.com/diegothomas/Interactive-Animatable-Avatar

  • [Remarks] Digital Humans Lab

    • URL

      https://diegothomas.github.io/DigitalHumans-lab/


Published: 2024-12-25

