
Development of Linguistic Understanding System of Human Motion for Nursery Support

Research Project

Project/Area Number 17500132
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type Single-year Grants
Section General
Research Field Perception information processing/Intelligent robotics
Research Institution Fukuoka Institute of Technology

Principal Investigator

YOKOTA Masao  Fukuoka Institute of Technology, Department of Information Engineering, Professor (50112313)

Co-Investigator (Kenkyū-buntansha) 小田 誠雄  Fukuoka Institute of Technology, Junior College Division, Professor (10185598)
Project Period (FY) 2005 – 2007
Project Status Completed (Fiscal Year 2007)
Budget Amount
¥3,770,000 (Direct Cost: ¥3,500,000, Indirect Cost: ¥270,000)
Fiscal Year 2007: ¥1,170,000 (Direct Cost: ¥900,000, Indirect Cost: ¥270,000)
Fiscal Year 2006: ¥900,000 (Direct Cost: ¥900,000)
Fiscal Year 2005: ¥1,700,000 (Direct Cost: ¥1,700,000)
Keywords Nursery Support / Human Motion / Linguistic Understanding / Artificial Intelligence / Multimedia Understanding
Research Abstract

The rapid growth of aged societies will inevitably bring a large number of people handicapped by aging, and with it a serious shortage of care workers such as nurses, one of whose routine duties is to watch patients' actions through TV monitors in order to prevent accidents. However, it is both infeasible and privacy-invasive for human workers to watch surveillance monitors all day and night. For such a purpose, intelligent systems can be very helpful if they can extract significant events, such as abnormal motions, from video data and report them in natural language in real time, and desirably also produce linguistic summaries of immense amounts of recorded video.
We have proposed a methodology for the systematic linguistic interpretation of human motion data based on our original semantic theory, the Mental Image Directed Semantic Theory (MIDST), implemented it in the intelligent system IMAGES-M, and confirmed its validity for about 40 verb concepts such as 'raise' and 'nod'. The most remarkable advance of our work over others lies in the transparency of the descriptions of word meanings and of the processing algorithms, owing to the formal language L_md. In turn, this feature yields higher modularity of the program and enables higher-order processing of human motion, for example, inference based on knowledge formalized in L_md.
Following this successful result, we have started to develop a robotic system with a good capability in natural language understanding in order to provide gentle nursing support for aged people.
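As a rough illustration of the kind of motion-to-language mapping described above, the following minimal Python sketch classifies a joint trajectory into verb concepts such as 'raise' and 'nod'. It is a hypothetical toy rule set (the joint names, thresholds, and logic are assumptions for illustration only), not the actual L_md formalism or the IMAGES-M implementation:

```python
# Toy sketch: map a sequence of vertical positions for one body joint
# to a verb concept. All rules and thresholds here are illustrative
# assumptions, not the MIDST/L_md method itself.

def classify_motion(joint, positions):
    """Return a verb label for a joint's vertical trajectory (in metres)."""
    dy = positions[-1] - positions[0]  # net vertical displacement
    # Count direction reversals along the trajectory (up-down-up, etc.).
    reversals = sum(
        1 for a, b, c in zip(positions, positions[1:], positions[2:])
        if (b - a) * (c - b) < 0
    )
    if joint == "hand" and dy > 0.2:
        return "raise"            # hand moved clearly upward overall
    if joint == "head" and reversals >= 2 and abs(dy) < 0.05:
        return "nod"              # head oscillated, ending near its start
    return "unknown"

print(classify_motion("hand", [0.0, 0.1, 0.3]))               # 'raise'
print(classify_motion("head", [0.0, -0.05, 0.0, -0.05, 0.0])) # 'nod'
```

A system in the spirit of MIDST would instead describe such events declaratively in a formal language (L_md in the project's case), so that the same definitions serve recognition, text generation, and inference rather than being hard-coded thresholds.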

Report

(4 results)
  • 2007 Annual Research Report
  • 2007 Final Research Report Summary
  • 2006 Annual Research Report
  • 2005 Annual Research Report
  • Research Products (15 results)


Journal Article (10 results, of which peer reviewed: 5), Presentation (2 results), Book (1 result), Remarks (2 results)

  • [Journal Article] Towards integrated multimedia understanding for intuitive human-system interaction (2008)

    • Author(s)
      Yokota, M., Shiraishi, M., Sugita, K., Oka, T.
    • Journal Title

      Artificial Life and Robotics 12

      Pages: 188-193

    • Description
From the Final Research Report Summary (Japanese version)
    • Related Report
      2007 Final Research Report Summary
    • Peer Reviewed
  • [Journal Article] A live video streaming system for intuitive human-system interaction (2008)

    • Author(s)
Sugita, K., Nakamura, N., Oka, T., Yokota, M.
    • Journal Title

      Artificial Life and Robotics 12

      Pages: 194-198

    • Description
From the Final Research Report Summary (Japanese version)
    • Related Report
      2007 Final Research Report Summary
    • Peer Reviewed
  • [Journal Article] Towards integrated multimedia understanding for intuitive human-system interaction (2008)

    • Author(s)
Yokota, M., Shiraishi, M., Sugita, K., Oka, T.
    • Journal Title

      Artificial Life and Robotics 12

      Pages: 188-193

    • Description
From the Final Research Report Summary (English version)
    • Related Report
      2007 Final Research Report Summary
  • [Journal Article] A live video streaming system for intuitive human-system interaction (2008)

    • Author(s)
Sugita, K., Nakamura, N., Oka, T., Yokota, M.
    • Journal Title

Artificial Life and Robotics 12

      Pages: 194-198

    • Description
From the Final Research Report Summary (English version)
    • Related Report
      2007 Final Research Report Summary
  • [Journal Article] Towards integrated multimedia understanding for intuitive human-system interaction (2008)

    • Author(s)
Yokota, M., Shiraishi, M., Sugita, K., Oka, T.
    • Journal Title

      Artificial Life and Robotics 12

      Pages: 188-193

    • Related Report
      2007 Annual Research Report
    • Peer Reviewed
  • [Journal Article] A live video streaming system for intuitive human-system interaction (2008)

    • Author(s)
Sugita, K., Nakamura, N., Oka, T., Yokota, M.
    • Journal Title

      Artificial Life and Robotics 12

      Pages: 194-198

    • Related Report
      2007 Annual Research Report
    • Peer Reviewed
  • [Journal Article] Human-robot communication based on a mind model (2006)

    • Author(s)
Shiraishi, M., Capi, G., Yokota, M.
    • Journal Title

      Artificial Life and Robotics 10-2

      Pages: 136-140

    • Description
From the Final Research Report Summary (Japanese version)
    • Related Report
      2007 Final Research Report Summary
    • Peer Reviewed
  • [Journal Article] Human-robot communication based on a mind model (2006)

    • Author(s)
Shiraishi, M., Capi, G., Yokota, M.
    • Journal Title

      Artificial Life and Robotics 10-2

      Pages: 136-140

    • Description
From the Final Research Report Summary (English version)
    • Related Report
      2007 Final Research Report Summary
  • [Journal Article] Human-robot communication based on a mind model (2006)

    • Author(s)
      Shiraishi, M., Capi, G., Yokota, M.
    • Journal Title

      Journal of Artificial Life and Robotics, Springer-Verlag Tokyo 10-2

      Pages: 136-140

    • Related Report
      2006 Annual Research Report
  • [Journal Article] Cross-media Operations between Text and Picture Based on Mental Image Directed Semantic Theory (2005)

    • Author(s)
      Yokota, M., Capi, G.
    • Journal Title

WSEAS Trans. on Information Science and Applications, Issue 10, 2

      Pages: 1541-1550

    • Related Report
      2005 Annual Research Report
  • [Presentation] Cross-media translation of human motion into text and text into animation based on Mental Image Description Language Lmd (2008)

    • Author(s)
He, H., Fan, L., Sugita, K., Yokota, M.
    • Organizer
International Symposium on Artificial Life and Robotics (AROB)
    • Place of Presentation
Beppu, Japan
    • Year and Date
      2008-01-31
    • Description
From the Final Research Report Summary (Japanese version)
    • Related Report
      2007 Final Research Report Summary
  • [Presentation] Cross-media translation of human motion into text and text into animation based on Mental Image Description Language Lmd (2008)

    • Author(s)
He, H., Fan, L., Sugita, K., Yokota, M.
    • Organizer
International Symposium on Artificial Life and Robotics (AROB)
    • Place of Presentation
      Beppu, Japan
    • Description
From the Final Research Report Summary (English version)
    • Related Report
      2007 Final Research Report Summary
  • [Book] Handbook on Mobile and Ubiquitous Computing: Innovations and Perspectives (eds. Syukur, E., Yang, L.T. & Loke, S.W.) (2009)

    • Author(s)
      Masao Yokota
    • Total Pages
      200
    • Publisher
American Scientific Publishers (in press)
    • Description
From the Final Research Report Summary (Japanese version)
    • Related Report
      2007 Final Research Report Summary
  • [Remarks] From the Final Research Report Summary (Japanese version)

    • URL

      http://www.fit.ac.jp/~yokota/home.html

    • Related Report
      2007 Final Research Report Summary
  • [Remarks]

    • URL

      http://www.fit.ac.jp/~yokota/home.html

    • Related Report
      2007 Annual Research Report


Published: 2005-04-01   Modified: 2016-04-21  
