Project/Area Number | 14380186 |
Research Category | Grant-in-Aid for Scientific Research (B) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Social systems engineering |
Research Institution | University of Tsukuba |
Principal Investigator |
INAGAKI Toshiyuki, Univ. of Tsukuba, Graduate School of Systems and Information Engineering, Professor (60134219)
|
Co-Investigator (Kenkyū-buntansha) |
FURUKAWA Hiroshi, Univ. of Tsukuba, Graduate School of Systems and Information Engineering, Associate Professor (90311597)
ITOH Makoto, Univ. of Tsukuba, Graduate School of Systems and Information Engineering, Assistant Professor (00282343)
AKAMATSU Motoyuki, AIST, Behavioral Modeling Group, Research Group Leader (60356342)
米澤 直記, Univ. of Tsukuba, Institute of Information Sciences and Electronics, Research Associate (70312832)
|
Project Period (FY) | 2002 – 2004 |
Project Status | Completed (Fiscal Year 2004) |
Budget Amount |
¥12,900,000 (Direct Cost: ¥12,900,000)
Fiscal Year 2004: ¥1,400,000 (Direct Cost: ¥1,400,000)
Fiscal Year 2003: ¥5,600,000 (Direct Cost: ¥5,600,000)
Fiscal Year 2002: ¥5,900,000 (Direct Cost: ¥5,900,000)
|
Keywords | Risk perception / Situation awareness / Safety control / Function allocation / Human interface / Discrete-event simulation model / Trading of authority / Levels of automation / Mental model / Supervisory control / Gaze tracking |
Research Abstract |
This research project has investigated the design of human interfaces to support human-machine collaboration in dynamically changing environments in which the time allotted for situation recognition, decision making, and action selection and implementation may be limited. Function allocation needs to be dynamic and situation-adaptive to support humans appropriately, and machines have therefore been given various types of intelligence: intelligent machines can now sense and analyze situations, decide what must be done, and implement control actions. It is true, however, that humans working with such smart machines often suffer negative consequences of automation, such as the out-of-the-loop performance problem, loss of situation awareness, complacency or over-trust, and automation-induced surprises. Taking the adaptive cruise control (ACC) system as a real-world example of an adaptive system, this project has developed the concept of a multi-layered human interface in which information is provided to human drivers so that they can share situation recognition and intentions with the automated system, and in which authority may be traded dynamically in an emergency to assure system safety. The efficacy of the human interface has been analyzed and evaluated through a series of cognitive experiments with various types of scenarios. Furthermore, computer simulation methods have been developed to investigate the degree of safety degradation caused by drivers' over-trust in automation in situations where a conventional cognitive-experiment approach may not be applicable or feasible. It has been proven that an effective strategy for function allocation between humans and automation that assures system safety can lie within a category that does not satisfy the conditions assumed for conventional human-centered automation.
|
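As a rough illustration of the simulation-based evaluation mentioned in the abstract, the fixed-step car-following sketch below shows how a driver's over-trust in an ACC system can delay the trading of authority and erode the safety margin. This is not the project's actual discrete-event model: the scenario, parameter values, and the mapping from an over-trust level to an intervention delay are illustrative assumptions.

```python
# Illustrative sketch only -- not the model developed in this project.
# A lead vehicle brakes hard; the ACC brakes within a limited deceleration
# authority, and the driver takes over (trades authority back) some time
# after a forward-collision warning. Higher over-trust -> later takeover.
from dataclasses import dataclass

DT = 0.1                 # simulation step [s]
ACC_MAX_DECEL = 3.0      # assumed ACC deceleration authority [m/s^2]
DRIVER_MAX_DECEL = 8.0   # assumed braking once the driver intervenes [m/s^2]


@dataclass
class Vehicle:
    position: float      # [m]
    speed: float         # [m/s]


def reaction_delay(over_trust: float) -> float:
    """Map an over-trust level in [0, 1] to an intervention delay [s].
    Purely illustrative: stronger reliance on the ACC -> later takeover."""
    return 0.6 + 1.2 * over_trust


def simulate(over_trust: float, sim_time: float = 15.0) -> float:
    """Run one scenario and return the minimum gap reached [m].
    A non-positive value means the vehicles collided."""
    lead = Vehicle(position=40.0, speed=25.0)
    own = Vehicle(position=0.0, speed=25.0)
    delay = reaction_delay(over_trust)
    alarm_time = None
    min_gap = lead.position - own.position
    t = 0.0

    while t < sim_time and own.speed > 0.0:
        # Lead vehicle brakes hard from t = 1 s (the critical event).
        lead_decel = 6.0 if t >= 1.0 else 0.0
        lead.speed = max(0.0, lead.speed - lead_decel * DT)
        lead.position += lead.speed * DT

        gap = lead.position - own.position
        closing = own.speed - lead.speed
        ttc = gap / closing if closing > 0.0 else float("inf")

        # Forward-collision warning: the interface asks the driver to take over.
        if alarm_time is None and ttc < 4.0:
            alarm_time = t
        driver_active = alarm_time is not None and t >= alarm_time + delay

        # Crude stand-in for ACC braking within its authority vs. driver braking.
        if driver_active:
            decel = DRIVER_MAX_DECEL
        elif ttc < 6.0:
            decel = ACC_MAX_DECEL
        else:
            decel = 0.0

        own.speed = max(0.0, own.speed - decel * DT)
        own.position += own.speed * DT
        min_gap = min(min_gap, lead.position - own.position)
        t += DT

    return min_gap


if __name__ == "__main__":
    for trust in (0.0, 0.5, 1.0):
        gap = simulate(trust)
        outcome = "collision" if gap <= 0.0 else f"min gap {gap:.1f} m"
        print(f"over-trust = {trust:.1f}: {outcome}")
```

Running the script prints the minimum gap reached for a few assumed over-trust levels (a non-positive gap means a collision), which is the kind of safety-degradation measure such simulations are meant to produce.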