
Participatory Sensing and Felicitous Recommending of Venues

Research Project

Project/Area Number 16K16058
Research Category

Grant-in-Aid for Young Scientists (B)

Allocation Type Multi-year Fund
Research Field Multimedia database
Research Institution National Institute of Informatics

Principal Investigator

Yu Yi  National Institute of Informatics, Digital Content and Media Sciences Research Division, Project Assistant Professor (00754681)

Project Period (FY) 2016-04-01 – 2020-03-31
Project Status Completed (Fiscal Year 2019)
Budget Amount
¥3,770,000 (Direct Cost: ¥2,900,000, Indirect Cost: ¥870,000)
Fiscal Year 2018: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2017: ¥1,300,000 (Direct Cost: ¥1,000,000, Indirect Cost: ¥300,000)
Fiscal Year 2016: ¥1,560,000 (Direct Cost: ¥1,200,000, Indirect Cost: ¥360,000)
Keywords Venue discovery / Cross-modal retrieval / Multimodal learning / Deep learning / CCA / Category-Based Deep CCA / Multimodal deep learning / Deep hashing / Cross-modal retrieval between documents and images / Multimodal analysis / Information fusion / Event detection / Personalized recommendation / Information retrieval
Outline of Final Research Achievements

Visual context-aware applications are very promising because they can provide services adapted to the user's context. We consider two scenarios. 1) A user is interested in a venue photograph obtained from a social sharing platform on the Internet, but does not know exactly where the photograph was taken. 2) A user visits a venue for the first time; he does not know exactly where he is, and the GPS module of his mobile device fails to compute a position because he is in an urban canyon, inside a building, or underground.
We study 1) exact venue search (finding the venue where the photograph was taken) and 2) group venue search (finding relevant venues of the same category as the photograph) in a joint framework for fine-grained venue discovery based on multimodal content association and analysis. We also developed a venue discovery demo system based on the proposed methods.
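To make the retrieval idea concrete, here is a minimal sketch, not the project's implementation: classical CCA (via scikit-learn) stands in for the proposed Category-Based Deep CCA, and the image and text features below are random placeholders for real CNN image features and venue text embeddings.

    # Minimal sketch of CCA-based cross-modal venue retrieval.
    # NOT the project's method: classical CCA stands in for
    # Category-Based Deep CCA, and all features are random placeholders.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    n_venues = 200
    img_feats = rng.normal(size=(n_venues, 128))  # stand-in image features
    txt_feats = rng.normal(size=(n_venues, 64))   # stand-in text features

    # Learn linear projections that maximize correlation between modalities.
    cca = CCA(n_components=16)
    cca.fit(img_feats, txt_feats)
    img_proj, txt_proj = cca.transform(img_feats, txt_feats)

    def cosine_rank(query, gallery):
        """Rank gallery rows by cosine similarity to the query vector."""
        q = query / np.linalg.norm(query)
        g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        return np.argsort(-(g @ q))

    # Exact venue search: a query photograph retrieves venue text
    # descriptions in the shared space; group venue search would instead
    # aggregate the ranked results by venue category.
    ranking = cosine_rank(img_proj[0], txt_proj)
    print("Top-5 candidate venue indices:", ranking[:5])

Roughly speaking, the published Category-Based Deep CCA replaces these linear projections with deep networks and conditions the correlation objective on venue categories, so that both exact matches and same-category venues rank highly.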

Academic Significance and Societal Importance of the Research Achievements

Fine-grained venue discovery relies on correlation analysis between images and text descriptions of venues. Our research focuses on developing methods that discover knowledge and relations from the complicated and challenging venue-centered heterogeneous multimodal data generated by users.
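For background, the classical CCA objective on which such correlation analysis builds can be written as follows (standard textbook formulation, not the project's own notation):

    \max_{w_x,\, w_y} \; \rho
      = \frac{w_x^\top \Sigma_{xy} w_y}
             {\sqrt{w_x^\top \Sigma_{xx} w_x}\;\sqrt{w_y^\top \Sigma_{yy} w_y}}

where x and y are image and text feature vectors, \Sigma_{xx} and \Sigma_{yy} are their covariance matrices, and \Sigma_{xy} is the cross-covariance; deep variants such as Deep CCA replace the linear maps w_x and w_y with neural networks.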

Report

(5 results)
  • 2019 Annual Research Report
  • Final Research Report (PDF)
  • 2018 Research-status Report
  • 2017 Research-status Report
  • 2016 Research-status Report

Research Products

(10 results)


Journal Article: 3 results (Int'l Joint Research: 1, Peer Reviewed: 3, Open Access: 1, Acknowledgement Compliant: 1) / Presentation: 5 results (Int'l Joint Research: 5, Invited: 2) / Remarks: 2 results

  • [Journal Article] Ensemble super-resolution with a reference dataset (2019)

    • Author(s)
      Junjun Jiang, Yi Yu, Zheng Wang, Suhua Tang, and Ruimin Hu
    • Journal Title

      IEEE Transactions on Cybernetics

      Volume: - Pages: 1-15

    • Related Report
      2019 Annual Research Report
    • Peer Reviewed
  • [Journal Article] Category-Based Deep CCA for Fine-Grained Venue Discovery from Multimodal Data (2019)

    • Author(s)
      Yi Yu, Suhua Tang, Kiyoharu Aizawa, and Akiko Aizawa
    • Journal Title

      IEEE Transactions on Neural Networks and Learning Systems

      Volume: 30 Issue: 4 Pages: 1250-1258

    • DOI

      10.1109/tnnls.2018.2856253

    • Related Report
      2018 Research-status Report
    • Peer Reviewed
  • [Journal Article] Leveraging Multimodal Information for Event Summarization and Concept-level Sentiment Analysis (2016)

    • Author(s)
      Rajiv Ratn Shah, Yi Yu, Akshay Verma, Suhua Tang, Anwar Dilawar Shaikhe, Roger Zimmermann
    • Journal Title

      Knowledge-Based Systems

      Volume: 108 Pages: 102-109

    • DOI

      10.1016/j.knosys.2016.05.022

    • Related Report
      2016 Research-status Report
    • Peer Reviewed / Open Access / Int'l Joint Research / Acknowledgement Compliant
  • [Presentation] Audio-Visual Embedding for Cross-Modal Music Video Retrieval through Supervised Deep CCA (2018)

    • Author(s)
      Donghuo Zeng, Yi Yu, Keizo Oyama
    • Organizer
      IEEE International Symposium on Multimedia
    • Related Report
      2018 Research-status Report
    • Int'l Joint Research
  • [Presentation] VenueNet: Fine-Grained Venue Discovery by Deep Correlation Learning (2017)

    • Author(s)
      Yi Yu, Suhua Tang, Kiyoharu Aizawa, Akiko Aizawa
    • Organizer
      The 19th IEEE International Symposium on Multimedia (ISM 2017)
    • Related Report
      2017 Research-status Report
    • Int'l Joint Research / Invited
  • [Presentation] Deep Multi-label Hashing for Large-Scale Visual Search Based on Semantic Graph (2017)

    • Author(s)
      Chunlin Zhong, Yi Yu, Suhua Tang, Shin'ichi Satoh, Kai Xing
    • Organizer
      APWeb/WAIM 2017
    • Related Report
      2017 Research-status Report
    • Int'l Joint Research / Invited
  • [Presentation] PROMPT: Personalized User Tag Recommendation for Social Media Photos Leveraging Personal and Social Contexts (2016)

    • Author(s)
      Rajiv Ratn Shah, Anupam Samanta, Deepak Gupta, Yi Yu, Suhua Tang, Roger Zimmermann
    • Organizer
      IEEE International Symposium on Multimedia
    • Place of Presentation
      San Jose, California, USA
    • Year and Date
      2016-12-11
    • Related Report
      2016 Research-status Report
    • Int'l Joint Research
  • [Presentation] Concept-Level Multimodal Ranking of Flickr Photo Tags via Recall Based Weighting (2016)

    • Author(s)
      Rajiv Ratn Shah, Yi Yu, Suhua Tang, Shin'ichi Satoh, Akshay Verma, Roger Zimmermann
    • Organizer
      Multimedia COMMONS Workshop at ACM Multimedia 2016
    • Place of Presentation
      Amsterdam, The Netherlands
    • Year and Date
      2016-10-15
    • Related Report
      2016 Research-status Report
    • Int'l Joint Research
  • [Remarks]

    • URL

      http://research.nii.ac.jp/~yiyu/

    • Related Report
      2018 Research-status Report
  • [Remarks]

    • URL

      https://www.nii.ac.jp/faculty/digital_content/yu_yi/

    • Related Report
      2018 Research-status Report


Published: 2016-04-21   Modified: 2021-02-19  
