Research Project/Area Number | 22K12091 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Multi-year Fund |
Application Category | General |
Review Section | Basic Section 61010: Perceptual information processing-related |
Research Institution | Osaka University |
Principal Investigator | GARCIA DOCAMPO NOA, Osaka University, Institute for Datability Science, Specially Appointed Assistant Professor (Full-time) (80870005) |
Project Period (FY) | 2022-04-01 – 2026-03-31 |
Project Status | Granted (Fiscal Year 2023) |
Budget Amount *Note | 4,290 thousand yen (Direct Cost: 3,300 thousand yen, Indirect Cost: 990 thousand yen)
FY2025: 650 thousand yen (Direct Cost: 500 thousand yen, Indirect Cost: 150 thousand yen)
FY2024: 520 thousand yen (Direct Cost: 400 thousand yen, Indirect Cost: 120 thousand yen)
FY2023: 780 thousand yen (Direct Cost: 600 thousand yen, Indirect Cost: 180 thousand yen)
FY2022: 2,340 thousand yen (Direct Cost: 1,800 thousand yen, Indirect Cost: 540 thousand yen) |
Keywords | computer vision / machine learning / vision and language / societal bias / fairness / artificial intelligence / benchmarking / bias in computer vision / image captioning / ethical ai / bias in machine learning |
Outline of Research at the Start |
Artificial intelligence models are being used in the decision-making process of many daily-life applications, having a direct impact on people's lives. It is generally assumed that AI-based decisions are fairer than human-based ones; however, recent studies have shown the contrary: AI applications not only reproduce the inequalities of society but also amplify them. This project analyzes and seeks solutions for bias in visual-linguistic models, with the goal of contributing towards making AI fairer.
|
Outline of Annual Research Achievements |
In 2023, we made substantial progress in identifying societal biases in artificial intelligence models. First, we collected and annotated a dataset for studying societal biases in image and language models. Second, we proposed a bias mitigation method for image captioning. Lastly, we investigated misinformation in large language models (LLMs) such as ChatGPT, which largely affects topics related to women and healthcare.
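For illustration only (this is not the project's published mitigation method, and the gendered word lists and toy captions below are placeholders), a common way to quantify gender bias in captioning outputs is to compare how often generated captions mention each gender against the same statistic computed on the ground-truth captions:

```
from collections import Counter

# Hypothetical gendered-word lists; a real study would use a validated lexicon.
MALE_WORDS = {"man", "men", "boy", "boys", "he", "his", "him"}
FEMALE_WORDS = {"woman", "women", "girl", "girls", "she", "her", "hers"}

def gender_counts(captions):
    """Count how many captions mention male vs. female terms."""
    counts = Counter()
    for caption in captions:
        tokens = set(caption.lower().split())
        if tokens & MALE_WORDS:
            counts["male"] += 1
        if tokens & FEMALE_WORDS:
            counts["female"] += 1
    return counts

def male_ratio(counts):
    """Fraction of gendered captions that mention male terms."""
    total = counts["male"] + counts["female"]
    return counts["male"] / total if total else 0.0

# A captioning model amplifies bias if its ratio drifts further from the
# ratio already present in the ground-truth captions (toy data shown here).
gt = male_ratio(gender_counts(["a woman riding a bike", "a man cooking"]))
gen = male_ratio(gender_counts(["a man riding a bike", "a man cooking"]))
print(f"ground truth: {gt:.2f}  generated: {gen:.2f}")
```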
|
Current Status of Research Progress (Category) |
2: Research has progressed rather smoothly.
Reason
As planned, the project has accomplished its goal of collecting a dataset for studying societal bias in vision and language models. We have also proposed mitigation techniques.
|
Strategy for Future Research Activity |
The next steps in the project are to study how bias transfers from pretraining datasets to downstream tasks, and to investigate bias in large generative models such as Stable Diffusion.
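As an illustrative sketch only of how such a probe might look (the model id, prompt list, and use of the diffusers library are assumptions on our part, not a fixed protocol), occupation prompts can be sampled repeatedly and the demographic make-up of the generated images inspected afterwards:

```
import torch
from diffusers import StableDiffusionPipeline

# Placeholder model id and prompt list; the actual study design may differ.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = ["a photo of a doctor", "a photo of a nurse", "a photo of a CEO"]
for prompt in prompts:
    # Sample several images per prompt; the demographic attributes visible in
    # the outputs would then be annotated (manually or with a classifier) to
    # estimate how strongly the model skews each occupation.
    images = pipe(prompt, num_images_per_prompt=4).images
    for i, image in enumerate(images):
        image.save(f"{prompt.replace(' ', '_')}_{i}.png")
```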
|