Research Project/Area Number | 22K12091
Research Category | Grant-in-Aid for Scientific Research (C)
Allocation Type | Multi-year Fund
Section | General
Review Section | Basic Section 61010: Perceptual information processing
Research Institution | Osaka University
Principal Investigator | GARCIA DOCAMPO Noa, Osaka University, Institute for Datability Science, Specially Appointed Assistant Professor (full-time) (80870005)
Project Period (FY) | 2022-04-01 – 2026-03-31
Project Status | Granted (Fiscal Year 2022)
Budget Amount *Note | ¥4,290,000 (Direct Cost: ¥3,300,000, Indirect Cost: ¥990,000)
Fiscal Year 2025: ¥650,000 (Direct Cost: ¥500,000, Indirect Cost: ¥150,000)
Fiscal Year 2024: ¥520,000 (Direct Cost: ¥400,000, Indirect Cost: ¥120,000)
Fiscal Year 2023: ¥780,000 (Direct Cost: ¥600,000, Indirect Cost: ¥180,000)
Fiscal Year 2022: ¥2,340,000 (Direct Cost: ¥1,800,000, Indirect Cost: ¥540,000)
Keywords | bias in computer vision / computer vision / image captioning / vision and language / ethical AI / bias in machine learning / fairness
Outline of Research at the Start |
Artificial intelligence models are being used in the decision-making processes of many daily-life applications, with a direct impact on people's lives. It is generally assumed that AI-based decisions are fairer than human-based decisions; however, recent studies have shown the contrary: AI applications not only reproduce the inequalities of society but also amplify them. This project aims to analyze and address bias in visual-linguistic models, contributing toward making AI fairer.
Outline of Annual Research Achievements |
In this project, we have investigated the problem of societal bias in image captioning and in multimodal vision-and-language models. As outcomes of the first year of research:
* We have shown that captioning models encode gender and racial bias.
* We have proposed a new metric to measure societal bias.
* We have annotated a dataset to study and mitigate societal bias.
* We have designed a bias mitigation method for image captioning.
Current Status of Research Progress |
1: Research has progressed more than originally planned.
Reason
In the first year, we achieved more milestones than originally planned, and we published our work at top conferences in the field.
Strategy for Future Research Activity |
Next, we will explore societal bias in text-to-image generation models.