Recipe dictation system from verbal illustration of how to cook while cooking
Project/Area Number | 23700144
Research Category | Grant-in-Aid for Young Scientists (B)
Allocation Type | Multi-year Fund
Research Field | Media informatics/Database
Research Institution | Kyoto University
Principal Investigator | YAMAKATA Yoko, Kyoto University, Graduate School of Informatics, Program-Specific Associate Professor (60423018)
Project Period (FY) | 2011 – 2014
Project Status | Completed (Fiscal Year 2014)
Budget Amount | ¥4,420,000 (Direct Cost: ¥3,400,000, Indirect Cost: ¥1,020,000)
Fiscal Year 2014: ¥650,000 (Direct Cost: ¥500,000, Indirect Cost: ¥150,000)
Fiscal Year 2013: ¥1,170,000 (Direct Cost: ¥900,000, Indirect Cost: ¥270,000)
Fiscal Year 2012: ¥1,170,000 (Direct Cost: ¥900,000, Indirect Cost: ¥270,000)
Fiscal Year 2011: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Keywords | Media Informatics / Human Interface / Spoken Dialogue System / Video Recognition / Natural Language Processing / Media Information Processing
Outline of Final Research Achievements |
This research developed a method for automatically generating procedural recipe text by recognizing a cook's verbal explanation of how to cook. In a Japanese recipe, an intermediate food produced in a step is usually designated by that step's number, as in "mix (1) with (2)." In speech, however, people never refer to an intermediate food by a step number; they instead say "the vegetables I cut" or "the ingredients of NIKUJAGA." This research therefore identified the wording rules people use to designate intermediate foods, by analyzing cooking videos and interviewing many cooks, and constructed a method that automatically translates a cook's spoken designation into suitable wording in the generated recipe text.
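The record does not include implementation details, but the core idea can be illustrated with a minimal sketch: resolving a spoken designation such as "the vegetables I cut" to the step that produced that intermediate food, so the generated text can use a step-number designation like "(1)". The data structures, matching rule, and names below are hypothetical simplifications, not the project's actual method.

```python
# Hypothetical sketch: map a spoken reference to an intermediate food
# ("the <object> I <action>") onto a recipe step number such as "(1)".
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    number: int          # step number in the generated recipe text
    action: str          # main cooking verb recognized for this step, e.g. "cut"
    objects: list[str]   # food items manipulated in this step

def resolve_reference(spoken_action: str, spoken_object: str,
                      steps: list[Step]) -> Optional[str]:
    """Return a step-number designation like "(1)" when the spoken
    action and object match an earlier step, scanning most recent first."""
    for step in reversed(steps):
        if step.action == spoken_action and spoken_object in step.objects:
            return f"({step.number})"
    return None  # no matching intermediate food; keep the spoken wording as-is

if __name__ == "__main__":
    steps = [
        Step(1, "cut", ["carrot", "onion"]),
        Step(2, "fry", ["meat"]),
    ]
    # "Mix the vegetables I cut with (2)" -> "Mix (1) with (2)"
    print(resolve_reference("cut", "carrot", steps))  # prints "(1)"
```

In practice the matching would have to handle synonyms, recognition errors, and dish-level references such as "the ingredients of NIKUJAGA", which is where the wording rules discovered from the video and interview analysis would come in.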
Report (4 results)
Research Products (25 results)