2022 Fiscal Year Research-status Report
A novel study on visible ingredient identification in food images for food computing
Project/Area Number | 22K12095 |
Research Institution | Iwate Prefectural University |
Principal Investigator | 戴 瑩, Iwate Prefectural University, Faculty of Software and Information Science, Associate Professor (60305290) |
Project Period (FY) | 2022-04-01 – 2025-03-31 |
Keywords | ingredient recognition / food image / ingredient segmentation / decision-making |
Outline of Annual Research Achievements |
Despite remarkable advances in computer vision and machine learning, food image recognition remains very challenging. It is difficult for machines to identify the visible ingredients in food images, because the shape of a given ingredient can vary considerably while it often appears visually similar to ingredients of other categories. In this research, we address these issues to realize the recognition of visible ingredients in food images and validate the effectiveness and efficiency of the proposed methods, so as to contribute to applications and services in health, medicine, cooking, nutrition, and related fields. Firstly, we proposed a new hierarchical ingredient structure for classification, based on the Quality Labeling Standard for Fresh Foods of the Ministry of Agriculture, Forestry and Fisheries (MAFF), which was used to build a benchmark dataset of food ingredients. Secondly, we developed a novel approach for segmenting visible ingredients in food images by utilizing a single-ingredient classification model. Thirdly, the resulting segments were recognized by introducing a decision-making scheme. The experimental results show that combining the locating and sliding-window methods significantly improves the average F1 score of ingredient recognition.
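To make the hierarchical ingredient structure concrete, a minimal sketch is given below; the category names and nesting shown are hypothetical illustrations only, not the actual taxonomy derived from the MAFF standard.

```python
# Minimal sketch of a hierarchical ingredient taxonomy (category names here are
# hypothetical examples; the actual hierarchy follows the MAFF labeling standard).
from typing import Dict, List

# level-1 group -> level-2 group -> level-3 group -> leaf ingredient labels
INGREDIENT_HIERARCHY: Dict[str, Dict[str, Dict[str, List[str]]]] = {
    "vegetables": {
        "leafy": {"brassica": ["cabbage", "komatsuna"]},
        "root":  {"taproot": ["carrot", "daikon"]},
    },
    "seafood": {
        "fish": {"white_fish": ["cod", "sea_bream"]},
    },
}

def ancestors(leaf: str) -> List[str]:
    """Return the hierarchy path (levels 1-3) of a leaf ingredient label."""
    for l1, g1 in INGREDIENT_HIERARCHY.items():
        for l2, g2 in g1.items():
            for l3, leaves in g2.items():
                if leaf in leaves:
                    return [l1, l2, l3]
    return []

print(ancestors("carrot"))  # -> ['vegetables', 'root', 'taproot']
```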
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
We built a single-ingredient dataset with a four-level hierarchical structure covering 110 kinds of ingredients. The dataset collected approximately 20,000 single-ingredient images, which did not need to be labeled with bounding boxes. Ingredients were separated by pixel-wise clustering, where each pixel was represented by a feature vector obtained from the activations of the single-ingredient classification model applied to the image. The candidate regions of the segments were then located by the locating and sliding-window methods, and these regions were assigned to ingredient classes by the CNN-based ingredient classifier. Finally, the ingredients were determined from these candidate results using the decision-making scheme.
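The following is a minimal sketch of how such a segmentation-and-recognition pipeline could be wired together, under stated assumptions: the function names are hypothetical placeholders, raw RGB values stand in for the activations of the single-ingredient classification model, plain k-means stands in for the pixel-wise clustering, and a stub classifier returning one answer per segment stands in for the CNN-based classifier and the decision-making scheme.

```python
# Hedged sketch of the segmentation-and-recognition pipeline; placeholders only.
import numpy as np

def extract_pixel_features(image):
    """Per-pixel feature vectors; raw RGB stands in for the activations of the
    single-ingredient classification model (hypothetical placeholder)."""
    h, w, c = image.shape
    return image.reshape(h * w, c).astype(np.float32)

def cluster_pixels(features, k=3, iters=10, seed=0):
    """Plain k-means as a stand-in for the pixel-wise clustering step."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

def recognize_segments(image, labels, classify=lambda region: ("unknown", 0.5)):
    """Assign each clustered segment to an ingredient class; 'classify' stands in
    for the CNN-based classifier applied to located / sliding-window regions, and
    taking its single answer per segment stands in for the decision-making scheme."""
    h, w, _ = image.shape
    label_map = labels.reshape(h, w)
    results = {}
    for seg_id in np.unique(label_map):
        mask = label_map == seg_id
        region = image * mask[..., None]          # masked candidate region for this segment
        results[int(seg_id)] = classify(region)   # -> (ingredient label, confidence)
    return results

if __name__ == "__main__":
    img = (np.random.default_rng(1).random((64, 64, 3)) * 255).astype(np.uint8)  # dummy image
    labels = cluster_pixels(extract_pixel_features(img), k=3)
    print(recognize_segments(img, labels))
```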
Strategy for Future Research Activity |
We plan to introduce a multichannel attention graph convolutional network to represent the spotlight regions of ingredients, which can reveal their discriminative features. Because ingredient recognition suffers from high inter-class similarity and class imbalance, we aim to propose a novel framework that recognizes each ingredient at every level of the ingredient hierarchy, and to explore a recognition method that incorporates ultra-fine-grained classification and imbalanced classification at a high identification rate. Furthermore, to validate the effectiveness and efficiency of the proposed methods, we plan to build a prototype system for visible ingredient identification in the MATLAB environment.
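As a rough illustration of the graph-convolution component we plan to explore, the sketch below implements a single symmetric-normalized graph-convolution step followed by a simple per-channel weighting; the toy graph over candidate regions and all function names are assumptions for illustration only, not the planned multichannel attention GCN architecture.

```python
# Minimal sketch of one graph-convolution step with a simple channel weighting
# (illustrative assumption only; not the planned multichannel attention GCN).
import numpy as np

def gcn_layer(A: np.ndarray, X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Symmetric-normalized graph convolution: ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)

def channel_attention(H: np.ndarray) -> np.ndarray:
    """Weight each feature channel by a softmax over its mean activation."""
    scores = H.mean(axis=0)                            # one score per channel
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return H * weights[None, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((5, 5)) > 0.5).astype(float)       # toy graph over 5 candidate regions
    A = np.maximum(A, A.T)                             # make it undirected
    X = rng.standard_normal((5, 8))                    # region features (e.g. CNN embeddings)
    W = rng.standard_normal((8, 4))
    print(channel_attention(gcn_layer(A, X, W)).shape) # (5, 4)
```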
Causes of Carryover |
Because this year's international conference was held in Japan, travel expenses were lower than planned; the remaining amount is expected to be used next fiscal year for travel to the international conference SMC2023, to be held in Hawaii, USA.
Research Products
(3 results)