2019 Fiscal Year Final Research Report
Participatory Sensing and Felicitous Recommending of Venues
Project/Area Number | 16K16058 |
Research Category | Grant-in-Aid for Young Scientists (B) |
Allocation Type | Multi-year Fund |
Research Field | Multimedia database |
Research Institution | National Institute of Informatics |
Principal Investigator | Yu Yi, National Institute of Informatics, Digital Content and Media Sciences Research Division, Project Assistant Professor (00754681) |
Project Period (FY) | 2016-04-01 – 2020-03-31 |
Keywords | Venue discovery / Cross-modal retrieval / Multimodal learning / Deep Learning / CCA |
Outline of Final Research Achievements | Visual context-aware applications are very promising because they can provide services adapted to the user's context. We consider two kinds of scenarios: 1) a user is interested in a venue photograph obtained from a social sharing platform on the Internet, but does not know exactly where the photograph was taken; 2) a user visits a venue for the first time, does not know exactly where he is, and the GPS module of his mobile device fails to compute a position because he is in an urban canyon, inside a building, or underground. We study 1) exact venue search (finding the venue where the photograph was taken) and 2) group venue search (finding relevant venues that belong to the same category as the photograph) in a joint framework for fine-grained venue discovery based on multimodal content association and analysis. Moreover, we also developed a venue discovery demo system based on the proposed methods. |
Free Research Field | Multimodal content analysis; artificial intelligence and deep learning |
Academic Significance and Societal Importance of the Research Achievements | Fine-grained venue discovery relies on correlation analysis between images and the text descriptions of venues. Our research focuses on developing methods to discover knowledge and relations from the complicated and challenging venue-centric heterogeneous multimodal data generated by users. |
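
The keywords above list CCA (canonical correlation analysis) and cross-modal retrieval. As a rough illustration of how image-text correlation analysis can drive venue search, the following is a minimal sketch, not the project's actual implementation: it uses scikit-learn's CCA to project hypothetical precomputed image and text features into a shared latent space and ranks venue descriptions against a query photograph by cosine similarity. All feature names, dimensions, and data in the sketch are illustrative assumptions.

    # Minimal illustrative sketch (not the project's implementation): CCA-based
    # cross-modal venue retrieval. Features, dimensions, and data are assumptions.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)

    # Hypothetical precomputed features for N venues:
    #   image_feats: N x Di visual descriptors of venue photographs
    #   text_feats : N x Dt embeddings of the venues' text descriptions
    N, Di, Dt = 300, 128, 64
    image_feats = rng.standard_normal((N, Di))
    text_feats = rng.standard_normal((N, Dt))

    # Learn projections that maximize the correlation between the two modalities,
    # yielding a shared latent space for images and text.
    cca = CCA(n_components=32, max_iter=1000)
    cca.fit(image_feats, text_feats)
    img_lat, txt_lat = cca.transform(image_feats, text_feats)

    # L2-normalize so that dot products equal cosine similarities.
    img_lat /= np.linalg.norm(img_lat, axis=1, keepdims=True)
    txt_lat /= np.linalg.norm(txt_lat, axis=1, keepdims=True)

    # Exact venue search: given a query photograph, rank venue text descriptions
    # by similarity in the shared space; the top hit is the candidate venue.
    query = img_lat[0]                 # latent code of the query image
    scores = txt_lat @ query           # cosine similarity to every venue
    ranking = np.argsort(-scores)      # most similar venues first
    print("top-5 candidate venues:", ranking[:5].tolist())

Group venue search (category-level retrieval) could reuse the same shared space, for example by returning all venues that share the category of the top-ranked hit; the deep and multimodal learning methods named in the keywords would replace these linear CCA projections with learned nonlinear embeddings.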