Research Project/Area Number |
23K28145
|
Grant-in-Aid Project Number |
23H03455 (2023)
|
Research Category |
Grant-in-Aid for Scientific Research (B)
|
Allocation Type | Multi-year Fund (2024) / Single-year Grant (2023) |
Application Category | General |
Review Section |
Basic Section 61030: Intelligent informatics
Basic Section 60030: Statistical science
Sections subject to joint review: Basic Section 60030: Statistical science / Basic Section 61030: Intelligent informatics
|
Research Institution | Kyoto University |
Principal Investigator |
Rafik Hadfi, Kyoto University, Graduate School of Informatics, Program-Specific Associate Professor (30867495)
|
Project Period (FY) |
2023-04-01 – 2026-03-31
|
Project Status |
Granted (FY2024)
|
Budget Amount *Note |
Total: 14,170 thousand yen (Direct Cost: 10,900 thousand yen; Indirect Cost: 3,270 thousand yen)
FY2025: 7,020 thousand yen (Direct Cost: 5,400 thousand yen; Indirect Cost: 1,620 thousand yen)
FY2024: 4,680 thousand yen (Direct Cost: 3,600 thousand yen; Indirect Cost: 1,080 thousand yen)
FY2023: 2,470 thousand yen (Direct Cost: 1,900 thousand yen; Indirect Cost: 570 thousand yen)
|
Keywords | Autonomy / Agency / Artificial Intelligence / Trustworthy AI / Game Theory / Trust |
Outline of Research at the Start |
The respect for human autonomy is a crucial principle in developing trustworthy AI. This project investigates autonomy, explores the factors influencing it, and tests it with human and AI agents.
|
Outline of Annual Research Achievements |
The first achievement is a survey of the project's two central concepts, agency and autonomy, together with the development of algorithms that use information-theoretic metrics, specifically mutual information, to define them. The resulting algorithms were tested on automata that simulate systems of interacting players (in the game-theoretic sense), and the findings from these tests are being prepared for journal submission. The second achievement is the presentation of the project's theoretical model and its implications for the study of agency and autonomy from an enactive perspective. This model results from surveys of the two concepts in the cognitive sciences, particularly in 4E cognition, and was presented at the CHAIN Philosophy Workshop "Frontiers of Enactivism in East Asia" at Hokkaido University.
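To make the approach concrete, the following is a minimal illustrative sketch (not the project's actual code) of the kind of metric described above: the empirical mutual information between the action streams of two interacting players, estimated from their joint symbol distribution. The function name and the toy sequences are assumptions for illustration.

```python
# Sketch: empirical mutual information I(X; Y) in bits between two
# players' action sequences, from their empirical joint distribution.
# Illustrative only; names and data are hypothetical.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two equal-length sequences."""
    n = len(xs)
    joint = Counter(zip(xs, ys))   # joint counts of (x, y) pairs
    px = Counter(xs)               # marginal counts of x
    py = Counter(ys)               # marginal counts of y
    mi = 0.0
    for (x, y), c in joint.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), rewritten with raw counts
        mi += (c / n) * log2(c * n / (px[x] * py[y]))
    return mi

# Two perfectly coupled players share 1 bit per move; a constant player shares 0.
a = [0, 1, 0, 1, 1, 0, 1, 0]
print(mutual_information(a, a))        # → 1.0
print(mutual_information(a, [0] * 8))  # → 0.0
```

In a fully coupled pair the mutual information equals the entropy of a single player's action stream, which is the intuition behind using it as a coupling (and, by extension, autonomy) measure.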
|
Current Status of Research Progress |
Current Status of Research Progress |
2: Research is progressing rather smoothly
Reason
The initial phase of the project is primarily theoretical. It involves surveying various accounts of agency and autonomy and developing mathematical and algorithmic formulations of these concepts.
|
Strategy for Future Research Activity |
The plan consists of three main phases, starting with the theoretical characterization of autonomy. The first phase involves formulating autonomy using information-theoretic metrics, specifically conditional mutual information, and translating these metrics into algorithms capable of quantifying autonomy in two-agent systems. These algorithms will then be tested on automata that simulate real systems of interacting game-theoretic players. This initial implementation will be carried out with the assistance of a research assistant (RA), primarily on a laptop. The ongoing second phase focuses on constructing a game-theoretic framework for autonomy that incorporates the previously developed metrics into various games, such as the Iterated Prisoner's Dilemma (IPD). This phase involves algorithmic implementations, which will be coded in Python with the support of an RA.
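A minimal sketch of how the second phase could look, under stated assumptions: an Iterated Prisoner's Dilemma between a tit-for-tat player and a random player, with the conditional mutual information I(B_t; A_{t+1} | A_t) used as one candidate coupling metric. The strategy pairing, the lag-one conditioning, and all names are illustrative assumptions, not the project's actual design.

```python
# Sketch: conditional mutual information as a directed coupling metric
# in an Iterated Prisoner's Dilemma. Hypothetical setup for illustration.
import random
from collections import Counter
from math import log2

C, D = 0, 1  # cooperate / defect

def conditional_mi(xs, ys, zs):
    """Empirical I(X; Y | Z) in bits from three equal-length sequences."""
    n = len(xs)
    pxyz = Counter(zip(xs, ys, zs))
    pxz = Counter(zip(xs, zs))
    pyz = Counter(zip(ys, zs))
    pz = Counter(zs)
    cmi = 0.0
    for (x, y, z), c in pxyz.items():
        # p(x,y,z) * log2( p(z) p(x,y,z) / (p(x,z) p(y,z)) ), via raw counts
        cmi += (c / n) * log2(c * pz[z] / (pxz[(x, z)] * pyz[(y, z)]))
    return cmi

def play_ipd(rounds, seed=0):
    """Tit-for-tat (A) against a memoryless random player (B)."""
    rng = random.Random(seed)
    a_moves, b_moves = [C], [rng.choice((C, D))]
    for _ in range(rounds - 1):
        a_moves.append(b_moves[-1])          # tit-for-tat copies the opponent
        b_moves.append(rng.choice((C, D)))   # random player ignores history
    return a_moves, b_moves

a, b = play_ipd(2000)
# A's next move is fully determined by B's current move, so this is high (~1 bit)...
print(conditional_mi(b[:-1], a[1:], a[:-1]))
# ...while B's next move is independent of A, so this is near zero.
print(conditional_mi(a[:-1], b[1:], b[:-1]))
```

The asymmetry of the two estimates is what makes conditional mutual information attractive here: it distinguishes which player's behavior is driven by the other, which is one plausible operationalization of (lack of) autonomy.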
|