A smart chemistry experimental environment for affirmative safety training and safe experiment
Project/Area Number |
15K00265
|
Research Category |
Grant-in-Aid for Scientific Research (C)
|
Allocation Type | Multi-year Fund |
Section | General |
Research Field |
Human interface and interaction
|
Research Institution | Tokyo University of Agriculture and Technology |
Principal Investigator |
Fujinami Kaori Tokyo University of Agriculture and Technology, Graduate School of Engineering, Associate Professor (10409633)
|
Co-Investigator (Renkei-kenkyūsha) |
LENGGORO Wuled Tokyo University of Agriculture and Technology, Institute of Engineering, Associate Professor (10304403)
|
Research Collaborator |
KINOSHITA Keiko
ICHIHASHI Keita
TANIDA Yuki
MURAKOSHI Shuntaro
TOKIWA Mai
SATO Koji
|
Project Period (FY) |
2015-04-01 – 2018-03-31
|
Project Status |
Completed (Fiscal Year 2017)
|
Budget Amount |
¥4,680,000 (Direct Cost: ¥3,600,000, Indirect Cost: ¥1,080,000)
Fiscal Year 2017: ¥1,170,000 (Direct Cost: ¥900,000, Indirect Cost: ¥270,000)
Fiscal Year 2016: ¥1,820,000 (Direct Cost: ¥1,400,000, Indirect Cost: ¥420,000)
Fiscal Year 2015: ¥1,690,000 (Direct Cost: ¥1,300,000, Indirect Cost: ¥390,000)
|
Keywords | safety learning / spatial (projection-based) augmented reality / machine learning / wearable device / projector-camera system / chemistry experiment / machine learning application / augmented reality / wearable system / safety education / video summarization / wearable terminal / projector / smart space / information notification / memory aid / visibility |
Outline of Final Research Achievements |
Four core technologies were investigated to facilitate on-site safety training in university chemistry experiments: (1) a pose-aware whole-wrist-circumference display, in which the most visible position on the wrist is determined by a machine learning (ML)-based algorithm regardless of forearm posture; (2) a view management method for projection-based display, in which the visibility of a given tabletop condition is estimated by an ML-based estimator; (3) a still-image extraction method for producing learning content, in which situations of "having interest" and "grooving" were identified by image processing of first-person-view camera videos; and (4) a notification device selection method, in which the most appropriate device (e.g., a smartwatch) is chosen from features of the positional relationship between the candidate devices and the user, i.e., distance, touching, gazing, and face angle.
|
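The description of achievement (4) suggests scoring candidate notification devices from positional features (distance, touching, gazing, face angle). As an illustration only, since the abstract does not give the actual model or weights, a minimal heuristic sketch in Python with hypothetical feature encodings:

```python
from dataclasses import dataclass
from math import cos, radians


@dataclass
class DeviceObservation:
    """Hypothetical per-device features, mirroring those named in the abstract."""
    name: str
    distance_m: float      # distance between device and user
    touching: bool         # whether the user is touching the device
    gazed: bool            # whether the user's gaze falls on the device
    face_angle_deg: float  # angle between the user's face direction and the device


def notification_score(obs: DeviceObservation) -> float:
    """Heuristic score: nearer, touched, gazed-at devices with a small
    face angle are assumed more likely to be noticed (illustrative weights)."""
    score = max(0.0, 1.0 - obs.distance_m / 2.0)      # proximity within ~2 m
    score += 1.0 if obs.touching else 0.0
    score += 1.0 if obs.gazed else 0.0
    score += max(0.0, cos(radians(obs.face_angle_deg)))  # reward facing the device
    return score


def select_device(observations: list[DeviceObservation]) -> str:
    """Pick the device with the highest score."""
    return max(observations, key=notification_score).name
```

In the reported work this decision is presumably learned from data rather than hand-weighted; the sketch only shows how the named features could drive a selection, e.g. a gazed-at tabletop projection can outscore a touched but unattended smartwatch.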
Report
(4 results)
Research Products
(18 results)