2023 Fiscal Year Research-status Report
Societal biases in vision and language applications
Project/Area Number | 22K12091 |
Research Institution | Osaka University |
Principal Investigator | GARCIA DOCAMPO NOA, Osaka University, Institute for Datability Science, Specially Appointed Assistant Professor (full-time) (80870005) |
Project Period (FY) | 2022-04-01 – 2026-03-31 |
Keywords | computer vision / machine learning / vision and language / societal bias / fairness / artificial intelligence / benchmarking |
Outline of Annual Research Achievements |
In 2023, we made substantial progress in identifying societal biases in artificial intelligence models. First, we collected and annotated a dataset for studying societal biases in image and language models. Second, we proposed a bias mitigation method for image captioning. Finally, we investigated misinformation in large language models (LLMs) such as ChatGPT, which particularly affects topics related to women and healthcare.
|
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
As planned, the project has accomplished the goal of collecting a dataset for studying societal bias in vision and language models. We have also proposed bias mitigation techniques.
|
Strategy for Future Research Activity |
The next steps in the project are to study how bias is transferred from pretraining datasets to downstream tasks. We also plan to investigate bias in large generative models such as Stable Diffusion.
|
Causes of Carryover |
Due to being on maternity leave, I did not use the full amount of the awarded grant. I plan to use the remainder during the next fiscal year for attending conferences to present our work, publication expenses, and dataset collection.
|
Remarks |
We have released the source code of our models and experiments at the URLs above.
|