
A novel study on visible ingredient identification in food images for food computing

Research Project

Project/Area Number 22K12095
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type Multi-year Fund
Section General
Review Section Basic Section 61010: Perceptual information processing-related
Research Institution Iwate Prefectural University

Principal Investigator

戴 瑩 (Ying Dai), Associate Professor, Faculty of Software and Information Science, Iwate Prefectural University (60305290)

Project Period (FY) 2022-04-01 – 2025-03-31
Project Status Granted (Fiscal Year 2023)
Budget Amount
¥4,160,000 (Direct Cost: ¥3,200,000, Indirect Cost: ¥960,000)
Fiscal Year 2024: ¥1,170,000 (Direct Cost: ¥900,000, Indirect Cost: ¥270,000)
Fiscal Year 2023: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2022: ¥1,560,000 (Direct Cost: ¥1,200,000, Indirect Cost: ¥360,000)
Keywords ingredient recognition / ingredient segmentation / food image / decision-making / food recognition / deep learning / food computing
Outline of Research at the Start

In this research, we focus on recognizing the visible ingredients in food images. For this purpose, a new hierarchical structure for recognizing ingredients is proposed, based on the Quality Labeling Standards for Fresh Foods of the Ministry of Agriculture, Forestry and Fisheries (MAFF). On the basis of this structure, a novel method of segmenting ingredients from food images is explored. Then, a method of extracting and representing the spotlight regions of the ingredients is investigated. Furthermore, an approach to classifying each ingredient is explored. The effectiveness of the proposed methods is evaluated on a prototype system.

Outline of Annual Research Achievements

Despite remarkable advances in computer vision and machine learning, food image recognition remains very challenging. Machines find it difficult to identify visible ingredients in food images due to significant variability in the shapes of the same ingredients, which often appear visually similar to those from other ingredient categories. In this research, we aim to address these challenges to achieve the recognition of visible ingredients in food images. We also aim to validate the effectiveness and efficiency of the proposed methods, contributing to the development of applications and services in the fields of health, medicine, cooking, nutrition, and related areas.
In 2023, we constructed a single-ingredient image dataset based on the MAFF Quality Labeling Standards for Fresh Foods. This dataset was used to train a single-ingredient classification model for recognizing multiple ingredients in food images. Additionally, we developed a multi-ingredient image dataset to rigorously evaluate the performance of multiple-ingredient recognition. We then developed a new approach for segmenting multiple ingredients in food images using k-means clustering on feature maps extracted from the single-ingredient classification model. These segments were then recognized using a newly introduced decision-making scheme. Experimental results validated the effectiveness and efficiency of our method.
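The segmentation step described above can be sketched as follows. This is a minimal illustration of k-means clustering over the per-location vectors of a CNN feature map, not the project's actual implementation; the feature extractor, number of clusters, and initialization scheme are all assumptions.

```python
import numpy as np

def kmeans_segment(feature_map, k=2, iters=20, seed=0):
    """Cluster the per-location channel vectors of a CNN feature map
    (shape H x W x C) into k segments; returns an H x W label map."""
    h, w, c = feature_map.shape
    x = feature_map.reshape(-1, c).astype(np.float64)
    rng = np.random.default_rng(seed)
    # Farthest-point initialization: spread the initial centers apart
    # so distinct regions are unlikely to share a center.
    centers = [x[rng.integers(len(x))]]
    for _ in range(1, k):
        d = ((x[:, None, :] - np.asarray(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(x[d.argmax()])
    centers = np.asarray(centers)
    for _ in range(iters):
        # Assign each spatial location to its nearest cluster center.
        labels = ((x[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):  # keep old center if a cluster empties
                centers[j] = x[labels == j].mean(0)
    return labels.reshape(h, w)
```

In the project's setting, `feature_map` would come from an intermediate layer of the trained single-ingredient classifier, and each resulting cluster is a candidate ingredient region handed to the decision-making scheme for classification.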

Current Status of Research Progress

2: Research has progressed on the whole more than it was originally planned.

Reason

We constructed and improved the single-ingredient image dataset, comprising 9,982 images across 110 diverse categories, emphasizing variety in ingredient shapes and cooking methods. The multiple-ingredient image dataset contains a total of 2,121 images, each depicting multiple ingredients under various cooking conditions.
We proposed a new framework for ingredient segmentation that utilizes feature maps of the CNN-based single-ingredient classification model, trained on the single-ingredient dataset with only image-level annotations. This avoids the laborious and time-consuming pixel-level annotation otherwise required for semantic segmentation.
To tackle the challenge of processing speed in multi-ingredient recognition, we introduced a novel model pruning method to enhance the efficiency of the classification model.
The experiments particularly highlighted the framework's competitive capability in recognizing multiple ingredients compared to state-of-the-art (SOTA) methods. Furthermore, it was found that the pruned CNN model improves the ingredient segmentation accuracy of food images, marking a significant advancement in the field of food image analysis.
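The model pruning mentioned above can be illustrated with a common magnitude-based criterion. The report does not specify the pruning method used in the project, so the L1-norm ranking and the `keep_ratio` parameter below are illustrative assumptions only.

```python
import numpy as np

def prune_filters(weights, keep_ratio=0.5):
    """Rank convolution filters (shape: out_ch x in_ch x kh x kw) by their
    L1 norm and keep only the strongest fraction, shrinking the layer."""
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(weights.shape[0] * keep_ratio)))
    # Indices of the filters with the largest L1 norms, in original order.
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return weights[keep], keep
```

After pruning a layer this way, the corresponding input channels of the following layer must also be removed, and the network is typically fine-tuned to recover any lost accuracy.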

Strategy for Future Research Activity

In previous studies, we focused on addressing the issues of high intra-class variance and class imbalance in ingredient classification. This year, our aim is to solve the problem of high inter-class similarity in multiple-ingredient recognition in food images. We propose a novel framework that improves recognition performance by identifying ingredients prone to being misclassified into similar categories and introducing dedicated models for those ingredients.
Furthermore, to validate the effectiveness and efficiency of the proposed methods, we plan to build a prototype system for multiple ingredient recognition in food images in the MATLAB environment.

Report

(2 results)
  • 2023 Research-status Report
  • 2022 Research-status Report
  • Research Products

    (8 results)

All 2024 2023 2022

All Journal Article (4 results) (of which Peer Reviewed: 4 results,  Open Access: 2 results) Presentation (4 results)

  • [Journal Article] A New CNN-Based Single-Ingredient Classification Model and its Application in Food Image Segmentation (2023)

    • Author(s)
      Zhu Ziyi, Ying Dai
    • Journal Title

      Journal of Imaging

      Volume: 9 Issue: 10 Pages: 205-205

    • DOI

      10.3390/jimaging9100205

    • Related Report
      2023 Research-status Report
    • Peer Reviewed / Open Access
  • [Journal Article] CNN-based visible ingredients recognition in a food image using decision making schemes (2023)

    • Author(s)
      Kun Fu, Ying Dai, et al.
    • Journal Title

      Proceedings of IEEE SMC 2023

      Volume: 1 Pages: 2427-2432

    • DOI

      10.1109/smc53992.2023.10394513

    • Related Report
      2023 Research-status Report
    • Peer Reviewed
  • [Journal Article] Building CNN-Based Models for Image Aesthetic Score Prediction Using an Ensemble (2023)

    • Author(s)
      Ying Dai
    • Journal Title

      Journal of Imaging

      Volume: 9 Issue: 2 Pages: 30-30

    • DOI

      10.3390/jimaging9020030

    • Related Report
      2023 Research-status Report 2022 Research-status Report
    • Peer Reviewed / Open Access
  • [Journal Article] CNN-based visible ingredient segmentation in food images for food ingredient recognition (2022)

    • Author(s)
      Zhu Ziyi、Dai Ying
    • Journal Title

      Proc. of IIAI AAI 2022

      Volume: 1 Pages: 348-253

    • DOI

      10.1109/iiaiaai55812.2022.00077

    • Related Report
      2022 Research-status Report
    • Peer Reviewed
  • [Presentation] Construction of a Road Traffic Sign Detection and Classification Model (2024)

    • Author(s)
      Masato Asanuma, Ying Dai
    • Organizer
      Proceedings of the 86th National Convention of IPSJ, 1S-01
    • Related Report
      2023 Research-status Report
  • [Presentation] A Study on Character Art-Style Transfer Using Stable Diffusion (2024)

    • Author(s)
      縣 憲世, 戴 瑩
    • Organizer
      Proceedings of the 86th National Convention of IPSJ, 7S-01
    • Related Report
      2023 Research-status Report
  • [Presentation] A Study on a Beginner-Support System for Fighting Games (2024)

    • Author(s)
      須賀 智稀, 戴 瑩
    • Organizer
      Proceedings of the 86th National Convention of IPSJ, 4S-05
    • Related Report
      2023 Research-status Report
  • [Presentation] CNN-Based Visible Ingredients Recognition in a Food Image Using Decision Making Schemes (2023)

    • Author(s)
      Kun Fu, Ying Dai, Ziyi Zhu
    • Organizer
      Proceedings of the 85th National Convention of IPSJ, 4Q-06
    • Related Report
      2022 Research-status Report


Published: 2022-04-19   Modified: 2024-12-25  
