Project/Area Number | 22K12091
Research Category | Grant-in-Aid for Scientific Research (C)
Allocation Type | Multi-year Fund
Section | General
Review Section | Basic Section 61010: Perceptual information processing-related
Research Institution | Osaka University
Principal Investigator | GARCIA DOCAMPO Noa, Osaka University, Institute for Datability Science, Specially Appointed Assistant Professor (full-time) (80870005)
Project Period (FY) | 2022-04-01 – 2026-03-31
Project Status | Granted (Fiscal Year 2023)
Budget Amount | ¥4,290,000 (Direct Cost: ¥3,300,000, Indirect Cost: ¥990,000)
Fiscal Year 2025: ¥650,000 (Direct Cost: ¥500,000, Indirect Cost: ¥150,000)
Fiscal Year 2024: ¥520,000 (Direct Cost: ¥400,000, Indirect Cost: ¥120,000)
Fiscal Year 2023: ¥780,000 (Direct Cost: ¥600,000, Indirect Cost: ¥180,000)
Fiscal Year 2022: ¥2,340,000 (Direct Cost: ¥1,800,000, Indirect Cost: ¥540,000)
Keywords | computer vision / machine learning / vision and language / societal bias / fairness / artificial intelligence / benchmarking / bias in computer vision / image captioning / ethical AI / bias in machine learning
Outline of Research at the Start |
Artificial intelligence models are used in the decision-making processes of many everyday applications and have a direct impact on people's lives. AI-based decisions are generally assumed to be fairer than human ones; however, recent studies have shown the contrary: AI applications not only reproduce societal inequalities but amplify them. This project analyzes bias in vision-and-language models and develops solutions to address it, contributing towards fairer AI.
Outline of Annual Research Achievements |
In 2023, we made substantial progress in identifying societal biases in artificial intelligence models. First, we collected and annotated a dataset for studying societal biases in image and language models. Second, we proposed a bias mitigation method for image captioning. Lastly, we investigated misinformation in large language models (LLMs) such as ChatGPT, which strongly affects topics related to women and healthcare. One common way such biases are quantified in captioning is sketched below.
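As an illustration of the kind of measurement involved, the sketch below estimates gender skew in image captions by counting gendered words, in the spirit of bias amplification studies in captioning. It is a minimal sketch, not this project's actual method: the word lists, labeling rule, and example captions are illustrative assumptions.

```python
from collections import Counter

# Illustrative (hypothetical) gendered word lists; real studies use
# carefully curated vocabularies.
MASCULINE = {"man", "men", "boy", "boys", "he", "his", "him"}
FEMININE = {"woman", "women", "girl", "girls", "she", "her", "hers"}

def gender_of(caption: str) -> str:
    """Label a caption masculine/feminine/neutral by word overlap."""
    words = set(caption.lower().split())
    has_m, has_f = bool(words & MASCULINE), bool(words & FEMININE)
    if has_m and not has_f:
        return "masculine"
    if has_f and not has_m:
        return "feminine"
    return "neutral"

def gender_ratio(captions: list[str]) -> float:
    """Fraction of gendered captions that are labeled masculine."""
    counts = Counter(gender_of(c) for c in captions)
    gendered = counts["masculine"] + counts["feminine"]
    return counts["masculine"] / gendered if gendered else 0.0

# Bias amplification: compare the skew of model captions against the
# skew of human-written ground truth for the same images.
human = ["a woman cooking dinner", "a man riding a bike"]
model = ["a woman cooking dinner", "a woman riding a bike"]
print(gender_ratio(human), gender_ratio(model))  # 0.5 vs 0.0
```

Comparing the model's skew against the human ground truth separates bias inherited from the data from bias the model itself amplifies.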
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
According to the plan, the project has accomplished the goal of collecting a dataset for studying societal bias in vision-and-language models. We have also proposed bias mitigation techniques.
Strategy for Future Research Activity |
The next steps in the project are to study how bias is transferred from pretraining datasets to downstream tasks. We also plan to investigate bias in large generative models such as Stable Diffusion; one possible auditing setup is sketched below.
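For the generative-model direction, one plausible auditing setup (an assumption on our part, not the project's stated protocol) is to feed a text-to-image model demographically neutral prompts and measure the distribution of perceived attributes in its outputs. The sketch below uses the Hugging Face diffusers API; the occupation prompts, seed count, and perceived_attribute stub are hypothetical placeholders.

```python
import torch
from collections import Counter
from diffusers import StableDiffusionPipeline

# Demographically neutral prompts: any skew in the generated images
# comes from the model, not from the wording of the prompt.
OCCUPATIONS = ["a photo of a doctor", "a photo of a nurse",
               "a photo of a CEO", "a photo of a housekeeper"]

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def perceived_attribute(image) -> str:
    # Hypothetical placeholder: a real audit would use a carefully
    # validated classifier or human annotators to label perceived
    # demographic attributes in each generated image.
    return "unlabeled"

for prompt in OCCUPATIONS:
    counts = Counter()
    for seed in range(32):  # fixed seeds keep the audit reproducible
        generator = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, generator=generator).images[0]
        counts[perceived_attribute(image)] += 1
    print(prompt, dict(counts))
```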