
Societal biases in vision and language applications

Research Project

Project/Area Number 22K12091
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type Multi-year Fund
Section General
Review Section Basic Section 61010: Perceptual information processing-related
Research Institution Osaka University

Principal Investigator

GARCIA・DOCAMPO NOA  Osaka University, Institute for Datability Science, Specially Appointed Assistant Professor (full-time) (80870005)

Project Period (FY) 2022-04-01 – 2026-03-31
Project Status Granted (Fiscal Year 2023)
Budget Amount
¥4,290,000 (Direct Cost: ¥3,300,000, Indirect Cost: ¥990,000)
Fiscal Year 2025: ¥650,000 (Direct Cost: ¥500,000, Indirect Cost: ¥150,000)
Fiscal Year 2024: ¥520,000 (Direct Cost: ¥400,000, Indirect Cost: ¥120,000)
Fiscal Year 2023: ¥780,000 (Direct Cost: ¥600,000, Indirect Cost: ¥180,000)
Fiscal Year 2022: ¥2,340,000 (Direct Cost: ¥1,800,000, Indirect Cost: ¥540,000)
Keywords computer vision / machine learning / vision and language / societal bias / fairness / artificial intelligence / benchmarking / bias in computer vision / image captioning / ethical ai / bias in machine learning
Outline of Research at the Start

Artificial intelligence models are used in the decision-making processes of many daily-life applications, with a direct impact on people's lives. AI-based decisions are generally assumed to be fairer than human-based decisions; recent studies, however, have shown the contrary: AI applications not only reproduce the inequalities of society but amplify them. This project aims to analyze and address bias in vision-and-language models, contributing towards making AI fairer.

Outline of Annual Research Achievements

In 2023, we made substantial progress in identifying societal biases in artificial intelligence models. First, we collected and annotated a dataset for studying societal biases in image and language models. Second, we proposed a bias mitigation method for image captioning. Finally, we investigated misinformation in large language models (LLMs) such as ChatGPT, which particularly affects topics related to women and healthcare.
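
The report does not include implementation details, so the following is only a minimal, hypothetical sketch of the kind of measurement involved in studying demographic bias in captioning data: counting how many captions mention masculine terms, feminine terms, both, or neither. The word lists and the metric are illustrative assumptions, not the project's actual method.

# Minimal sketch: estimate the gender skew of a captioning dataset by
# counting captions that mention masculine vs. feminine terms. The word
# lists and metric are illustrative assumptions, not the project's method.
from collections import Counter

MASCULINE = {"man", "men", "boy", "boys", "he", "him", "his"}
FEMININE = {"woman", "women", "girl", "girls", "she", "her", "hers"}

def gender_counts(captions):
    """Tally captions as masculine-only, feminine-only, both, or neutral."""
    counts = Counter()
    for caption in captions:
        # Naive whitespace tokenization; real analyses use a proper tokenizer.
        words = set(caption.lower().split())
        has_m, has_f = bool(words & MASCULINE), bool(words & FEMININE)
        if has_m and not has_f:
            counts["masculine"] += 1
        elif has_f and not has_m:
            counts["feminine"] += 1
        elif has_m and has_f:
            counts["both"] += 1
        else:
            counts["neutral"] += 1
    return counts

captions = [
    "A man riding a bicycle down the street.",
    "A woman holding an umbrella in the rain.",
    "Two people walking a dog in the park.",
]
print(gender_counts(captions))
# Counter({'masculine': 1, 'feminine': 1, 'neutral': 1})

In practice, studies of this kind rely on much larger term lists and human demographic annotations; the point here is only the general shape of the analysis.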

Current Status of Research Progress

2: Research has progressed on the whole more than it was originally planned.

Reason

As planned, the project has accomplished its goal of collecting a dataset for studying societal bias in vision-and-language models. We have also proposed bias mitigation techniques.

Strategy for Future Research Activity

The next steps in the project are to study how bias is transferred from pretraining datasets to downstream tasks. We also plan to investigate bias in large generative models such as Stable Diffusion.
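
As a rough illustration of how such a probe of a text-to-image model might look (the model ID, prompts, and workflow below are assumptions, not the project's protocol), one can generate images from demographically neutral occupation prompts with the Hugging Face diffusers library and annotate the demographics of the outputs afterwards.

# Illustrative sketch (not the project's protocol): generate images from
# neutral occupation prompts with Stable Diffusion, so the demographic
# distribution of the outputs can be annotated and compared afterwards.
# Assumes the Hugging Face diffusers library and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

occupations = ["a photo of a doctor", "a photo of a nurse", "a photo of a CEO"]
for i, prompt in enumerate(occupations):
    # A fixed seed per prompt keeps the probe reproducible.
    generator = torch.Generator("cuda").manual_seed(i)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"probe_{i}.png")  # annotate demographics in a later step

Fixing the random seed per prompt makes the probe reproducible, so the same generations can be re-annotated or compared across model versions.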

Report (2 results)

  • 2023 Research-status Report
  • 2022 Research-status Report

Research Products (15 results)


Int'l Joint Research (1 result), Presentation (10 results; of which Int'l Joint Research: 8, Invited: 2), Book (1 result), Remarks (3 results)

  • [Int'l Joint Research] Xiamen Key Laboratory of Women/Meetyou AI Lab/Southwest University of Finance (China)

    • Related Report
      2023 Research-status Report
  • [Presentation] Uncurated Image-Text Datasets: Shedding Light on Demographic Bias (2023)

    • Author(s)
      Noa Garcia
    • Organizer
      The IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] Model-Agnostic Gender Debiased Image Captioning (2023)

    • Author(s)
      Yusuke Hirota
    • Organizer
      The IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] Text-to-Image Models in Art and Society (2023)

    • Author(s)
      Noa Garcia
    • Organizer
      Rethinking the ethics of AI symposium
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] CARE-MI: Chinese Benchmark for Misinformation Evaluation in Maternity and Infant Care (2023)

    • Author(s)
      Tong Xiang
    • Organizer
      2023 Conference on Neural Information Processing Systems
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] The elephant in the room: Societal biases in vision and language tasks (2023)

    • Author(s)
      Yusuke Hirota
    • Organizer
      The IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023
    • Related Report
      2023 Research-status Report
    • Int'l Joint Research
  • [Presentation] Quantifying Societal Bias Amplification in Image Captioning (2022)

    • Author(s)
      Yusuke Hirota
    • Organizer
      The IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022
    • Related Report
      2022 Research-status Report
    • Int'l Joint Research
  • [Presentation] Quantifying Societal Bias Amplification in Image Captioning (2022)

    • Author(s)
      Yusuke Hirota
    • Organizer
      The 25th Meeting on Image Recognition and Understanding
    • Related Report
      2022 Research-status Report
    • Invited
  • [Presentation] Gender and Racial Bias in Visual Question Answering Datasets (2022)

    • Author(s)
      Yusuke Hirota
    • Organizer
      ACM Conference on Fairness, Accountability, and Transparency 2022
    • Related Report
      2022 Research-status Report
    • Int'l Joint Research
  • [Presentation] Societal biases in vision and language (2022)

    • Author(s)
      Noa Garcia
    • Organizer
      Czech Technical University in Prague
    • Related Report
      2022 Research-status Report
    • Int'l Joint Research / Invited
  • [Presentation] Uncovering societal bias in modern artificial intelligence models (2022)

    • Author(s)
      Noa Garcia
    • Organizer
      III ACE Japan Meeting
    • Related Report
      2022 Research-status Report
  • [Book] Societal Bias in Vision-and-Language Datasets and Models (2023)

    • Author(s)
      Yuta Nakashima, Yusuke Hirota, Yankun Wu, Noa Garcia
    • Total Pages
      10
    • Publisher
      Journal of the Imaging Society of Japan (日本画像学会誌)
    • Related Report
      2023 Research-status Report
  • [Remarks] PHASE: Demographic Annotations on the GCC Dataset

    • URL

      https://github.com/noagarcia/phase

    • Related Report
      2023 Research-status Report
  • [Remarks] CARE-MI: Chinese Benchmark for Misinformation

    • URL

      https://github.com/meetyou-ai-lab/care-mi

    • Related Report
      2023 Research-status Report
  • [Remarks] Model-Agnostic Gender Debiased Image Captioning

    • URL

      https://github.com/rebnej/LIBRA

    • Related Report
      2023 Research-status Report

Published: 2022-04-19   Modified: 2024-12-25  
