2014 Fiscal Year Annual Research Report
Project/Area Number | 14J04733 |
Research Institution | Osaka University |
Principal Investigator | BASOEKI Fransiska, Osaka University, Graduate School of Engineering Science, JSPS Research Fellow (DC2) |
Project Period (FY) | 2014-04-25 – 2016-03-31 |
Keywords | Robot / Humanoid / Touch / Learning |
Outline of Annual Research Achievements |
The main purpose of this research is to enable teaching a robot in the same way that we teach a human child. The key consideration is that a child does not follow a parent's instructions literally; instead, the child adapts the instruction to fit the situation he or she is in. This work therefore focuses on enabling non-expert users to intuitively teach movements to humanoid robots through physical interaction, in particular through touch. To realize this way of teaching, the first step is to identify how people think the robot should respond when it is touched. The types of responses were gathered through a set of experiments, and the findings are summarized in a journal paper that is currently under review. To extract the response types, each response is first described by categorical variables and then grouped into clusters using an unsupervised learning method: Expectation Maximization applied to a Naive Bayes model. Clustering makes it possible to see how similar responses are elicited by touch at different locations, and how people share expectations about certain types of responses. The clustering and the description of each cluster were validated through a further set of experiments. The analysis also showed that the more difficult a response is for the algorithm to classify, the more unnatural it is perceived to be by people. A different clustering approach, based on Multiple Correspondence Analysis, was also performed, and the result is summarized in a conference paper.
|
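The clustering step described above can be sketched as follows. This is a minimal illustration only, not the study's actual implementation: the data, the descriptor encoding, and the cluster count are invented for the example. EM is run on a Naive Bayes (latent-class) model in which each cluster generates every categorical variable independently.

```python
import numpy as np

def nb_em_cluster(X, n_clusters, n_values, n_iter=100, seed=0):
    """Cluster categorical vectors with EM on a Naive Bayes
    (latent-class) model: P(x, z) = P(z) * prod_j P(x_j | z)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Soft random initialization of cluster responsibilities.
    resp = rng.dirichlet(np.ones(n_clusters), size=n)
    for _ in range(n_iter):
        # M-step: mixing weights and per-feature categorical parameters.
        pi = resp.mean(axis=0)                       # (K,)
        theta = np.zeros((n_clusters, d, n_values))  # (K, d, V)
        for k in range(n_clusters):
            for j in range(d):
                for v in range(n_values):
                    theta[k, j, v] = resp[X[:, j] == v, k].sum()
        theta += 1e-3                                # Laplace smoothing
        theta /= theta.sum(axis=2, keepdims=True)
        # E-step: posterior responsibilities under current parameters.
        log_p = np.zeros((n, n_clusters)) + np.log(pi)
        for j in range(d):
            log_p += np.log(theta[:, j, X[:, j]]).T  # (n, K)
        log_p -= log_p.max(axis=1, keepdims=True)    # numerical stability
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
    return resp.argmax(axis=1)

# Toy data: two clear response types described by 3 categorical variables.
X = np.array([[0, 0, 1], [0, 0, 1], [0, 1, 1],
              [2, 2, 0], [2, 2, 0], [2, 1, 0]])
labels = nb_em_cluster(X, n_clusters=2, n_values=3)
```

On clearly separable data such as this toy set, the EM iterations sharpen the soft assignments until the two response types fall into different clusters (the cluster indices themselves are arbitrary).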
Current Status of Research Progress |
1: Research has progressed more than originally planned.
Reason
In addition to the originally planned analysis of responses to touch, the applicant also investigated and developed distance metrics for measuring the similarity between two postures. As written in the proposal, this year's plan was to realize automatic adjustment of the robot's posture. When a parent teaches a child, the child may not follow the parent's instructions exactly; he or she may change the posture slightly to keep balance or to assume a more comfortable one. In the same way, the robot may not follow the user's instruction directly and may change its posture to a better one, for example to maintain balance or to minimize motor torque. In that case, to avoid confusing the user, the robot needs to assume a posture as similar as possible to the instructed one. However, no concrete way to measure the distance between postures was found in the literature. Hints were obtained from the results of the previous experiments: describing postures by categorical variables provides a better distance metric than the commonly used representations, joint angles or Cartesian positions. To further validate this distance metric, an experiment was conducted in which participants were asked to rank the similarity of pairs of postures. This data is being used to develop a distance function that quantitatively evaluates the similarity between two postures. The analysis is currently being performed, and the findings will be summarized in a journal paper.
|
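As a minimal sketch of the idea, assuming a hypothetical set of categorical posture descriptors (the actual descriptors used in the experiments are not given here), a categorical distance can be contrasted with the commonly used joint-angle baseline:

```python
import numpy as np

def categorical_distance(desc_a, desc_b):
    """Hamming-style distance: fraction of categorical descriptors that differ."""
    return sum(desc_a[k] != desc_b[k] for k in desc_a) / len(desc_a)

def joint_angle_distance(q_a, q_b):
    """Baseline from the literature: Euclidean distance in joint-angle space (rad)."""
    return float(np.linalg.norm(np.asarray(q_a) - np.asarray(q_b)))

# Hypothetical descriptors, invented for illustration.
pose_wave = {"left_arm": "raised", "right_arm": "lowered", "torso": "upright"}
pose_wave_shifted = {"left_arm": "raised", "right_arm": "lowered", "torso": "leaning"}
pose_crouch = {"left_arm": "lowered", "right_arm": "lowered", "torso": "bent"}
```

Under this encoding, a slightly shifted wave (distance 1/3 from the original) stays closer to it than a crouch does (distance 2/3), matching the intuition that posture similarity is about qualitative configuration rather than raw joint values.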
Strategy for Future Research Activity |
The plan for the next fiscal year is, as stated in the initial research plan, to develop a system that integrates automatic posture adjustment with multimodal teaching. In detail, in addition to touch, voice commands will be used as a way to give instructions. Commercially available voice recognition software will be used, together with a dictionary specialized for the system containing phrases commonly used in motion development, such as "move faster" or "keep the left leg straight". Compared with the current touch-only system, such a multimodal system is expected to provide even more efficient and natural teaching of motions to humanoid robots, and to fill the gaps that touch alone cannot cover, for example giving instructions while the robot is moving. Furthermore, to make communication between the teacher and the robot clearer, a feedback system will also be included. Often, the user does not understand why the robot fails to follow his or her command; when this happens repeatedly, it creates frustration and may stop the user from using the system. By having the robot convey what it recognizes and what it does not, the user gains a better view of how to change the way instructions are given, leading to a more usable motion development system.
|
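The planned command dictionary and feedback behavior could be sketched as below. All phrases, operation names, and parameters here are hypothetical placeholders, since the actual dictionary has not yet been designed:

```python
# Hypothetical dictionary mapping recognized phrases to motion-edit operations.
COMMANDS = {
    "move faster": ("scale_speed", 1.2),
    "move slower": ("scale_speed", 0.8),
    "keep the left leg straight": ("lock_joint", "left_knee"),
}

def interpret(utterance):
    """Return the motion-edit operation for a recognized phrase.

    Unknown phrases produce an explicit feedback message instead of
    silently failing, so the user learns what the robot did not
    understand and can rephrase the instruction.
    """
    op = COMMANDS.get(utterance.strip().lower())
    if op is None:
        return ("feedback", f"I did not understand: '{utterance}'")
    return op
```

The design choice illustrated here is the feedback path: rather than ignoring an unrecognized command, the robot reports it back, which is the mechanism intended to reduce user frustration.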
Research Products
(1 result)