
2007 Fiscal Year Final Research Report Summary

Development of Linguistic Understanding System of Human Motion for Nursery Support

Research Project

Project/Area Number 17500132
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type Single-year Grants
Section General
Research Field Perception information processing/Intelligent robotics
Research Institution Fukuoka Institute of Technology

Principal Investigator

YOKOTA Masao  Fukuoka Institute of Technology, Department of Information Engineering, Professor (50112313)

Project Period (FY) 2005 – 2007
Keywords Nursery Support / Human Motion / Linguistic Understanding / Artificial Intelligence / Multimedia Understanding
Research Abstract

The rapid aging of mature societies will inevitably bring a large number of people handicapped by age and, with it, a serious shortage of care workers such as nurses, one of whose routine tasks is to watch patients' actions on TV monitors in order to prevent accidents. However, it is both infeasible and privacy-invasive for human workers to keep watching surveillance monitors day and night. Intelligent systems can therefore be very helpful if they can extract significant events, such as abnormal motions, from video data, report them in natural language in real time and, ideally, produce linguistic summaries of the immense amount of recorded footage.
We have proposed a methodology for the systematic linguistic interpretation of human motion data based on our original semantic theory, Mental Image Directed Semantic Theory (MIDST), implemented it in the intelligent system IMAGES-M, and confirmed its validity for about 40 verb concepts such as 'raise' and 'nod'. The most notable advance of our work over others lies in the transparency of the descriptions of word meanings and of the processing algorithms, owing to the formal language Lmd. In turn, this transparency yields a more modular program and enables higher-order processing of human motion, for example, inference based on knowledge formalized in Lmd.
Following this successful result, we have started to develop a robotic system with strong natural language understanding capabilities in order to provide gentle nursing-care support for elderly people.
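As an illustration only, the minimal Python sketch below shows the general idea of mapping a short motion clip to a verb concept and a real-time natural-language report. It is a hypothetical toy example, not the authors' IMAGES-M implementation, and it does not use Lmd: the Frame structure, thresholds, and verb rules are invented stand-ins for the systematic event definitions that MIDST and Lmd would provide.

    # Hypothetical illustration only: a toy motion-to-language pipeline in the
    # spirit of the report. It is NOT the authors' IMAGES-M system and does not
    # use the formal language Lmd; the data structure, thresholds, and verb
    # rules below are invented for demonstration.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Frame:
        """One sampled posture: time stamp and vertical positions in metres."""
        t: float        # time in seconds
        hand_y: float   # height of the right hand
        head_y: float   # height of the head

    def interpret(frames: List[Frame]) -> str:
        """Map a short motion clip to a verb concept and an English report."""
        hand_rise = frames[-1].hand_y - frames[0].hand_y
        head_span = max(f.head_y for f in frames) - min(f.head_y for f in frames)
        if hand_rise > 0.30:                              # hand moved up > 30 cm
            concept, report = "raise", "The person raises a hand."
        elif head_span > 0.05 and abs(hand_rise) < 0.05:  # head bobs, hand still
            concept, report = "nod", "The person nods."
        else:
            concept, report = "unknown", "Unrecognized motion; flagged for review."
        return f"[{frames[-1].t:.1f}s] concept={concept}: {report}"

    if __name__ == "__main__":
        clip = [Frame(0.0, 0.90, 1.60), Frame(0.5, 1.10, 1.58),
                Frame(1.0, 1.35, 1.62), Frame(1.5, 1.40, 1.60)]
        print(interpret(clip))  # -> [1.5s] concept=raise: The person raises a hand.

In the project itself, such rules would instead be derived from word-meaning descriptions written transparently in Lmd, which is what makes higher-order processing such as inference possible.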

  • Research Products

    (10 results)


Journal Article (6 results) (of which Peer Reviewed: 3 results), Presentation (2 results), Book (1 result), Remarks (1 result)

  • [Journal Article] Towards integrated multimedia understanding for intuitive human-system interaction (2008)

    • Author(s)
      Yokota, M., Shiraishi, M., Sugita, K., Oka, T.
    • Journal Title

      Artificial Life and Robotics 12

      Pages: 188-193

    • Description
      From the Final Research Report Summary (Japanese)
    • Peer Reviewed
  • [Journal Article] A live video streaming system for intuitive human-system interaction (2008)

    • Author(s)
      Sugita, K., Nakamura, N., Oka, T., Yokota, M.
    • Journal Title

      Artificial Life and Robotics 12

      Pages: 194-198

    • Description
      From the Final Research Report Summary (Japanese)
    • Peer Reviewed
  • [Journal Article] Towards integrated multimedia understanding for intuitive human-system interaction (2008)

    • Author(s)
      Yokota, M., Shiraishi, M., Sugita, K., Oka, T.
    • Journal Title

      Artificial Life and Robotics 12

      Pages: 188-193

    • Description
      From the Final Research Report Summary (English)
  • [Journal Article] A live video streaming system for intuitive human-system interaction (2008)

    • Author(s)
      Sugita, K., Nakamura, N., Oka, T., Yokota, M.
    • Journal Title

      Artificial Life and Robotics 12

      Pages: 194-198

    • Description
      From the Final Research Report Summary (English)
  • [Journal Article] Human-robot communication based on a mind model (2006)

    • Author(s)
      Shiraishi, M., Capi, G., Yokota, M.
    • Journal Title

      Artificial Life and Robotics 10-2

      Pages: 136-140

    • Description
      From the Final Research Report Summary (Japanese)
    • Peer Reviewed
  • [Journal Article] Human-robot communication based on a mind model (2006)

    • Author(s)
      Shiraishi, M., Capi, G., Yokota, M.
    • Journal Title

      Artificial Life and Robotics 10-2

      Pages: 136-140

    • Description
      From the Final Research Report Summary (English)
  • [Presentation] Cross-media translation of human motion into text and text into animation based on Mental Image Description Language Lmd (2008)

    • Author(s)
      He, H., Fan, L., Sugita, K., Yokota, M.
    • Organizer
      International Symposium on Artificial Life and Robotics (AROB)
    • Place of Presentation
      Beppu, Japan
    • Year and Date
      2008-01-31
    • Description
      From the Final Research Report Summary (English)
  • [Presentation] Cross-media translation of human motion into text and text into animation based on Mental Image Description Language Lmd (2008)

    • Author(s)
      He, H., Fan, L., Sugita, K., Yokota, M.
    • Organizer
      International Symposium on Artificial Life and Robotics (AROB)
    • Place of Presentation
      Beppu, Japan
    • Year and Date
      2008-01-31
    • Description
      From the Final Research Report Summary (Japanese)
  • [Book] Handbook on Mobile and Ubiquitous Computing: Innovations and Perspectives (eds. Syukur, E., Yang, L. T. & Loke, S. W.) (2009)

    • Author(s)
      Masao Yokota
    • Total Pages
      200
    • Publisher
      American Scientific Publishers (in press)
    • Description
      From the Final Research Report Summary (Japanese)
  • [Remarks] From the Final Research Report Summary (Japanese)

    • URL

      http://www.fit.ac.jp/~yokota/home.html

Published: 2010-02-04  
