Budget Amount
¥5,330,000 (Direct Cost: ¥4,100,000, Indirect Cost: ¥1,230,000)
Fiscal Year 2014: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2013: ¥1,950,000 (Direct Cost: ¥1,500,000, Indirect Cost: ¥450,000)
Fiscal Year 2012: ¥2,470,000 (Direct Cost: ¥1,900,000, Indirect Cost: ¥570,000)
Outline of Final Research Achievements
This research proposed a framework for learning the relationship between 'fingering' (the hand's grasping configuration) and object appearance from samples. The framework infers the functional class of an object and recalls a suitable fingering and grasping style for a given object appearance. Its effectiveness was shown experimentally through four methods: 1) estimating wrist position by Hough Forest voting on joint hand-image features; 2) segmenting the grasped object based on the grasping pattern bound to a given object function class; 3) constructing a 'grasping pattern feature space' that describes hand-object interaction states during grasping, using a Sparse Stacked Convolutional Autoencoder; 4) recalling the hand's grasping approach from the stimulus of an object's appearance, using a Convolutional Neural Network.
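The wrist-estimation step (method 1) rests on generalized Hough voting: image patches cast weighted votes for the wrist position via predicted offsets, and the vote accumulator's peak is taken as the estimate. A minimal sketch of that voting stage, with hypothetical patch centers, offsets, and weights standing in for the leaf-node predictions a trained Hough Forest would supply:

```python
import numpy as np

def hough_vote_wrist(patch_centers, offsets, weights, img_shape):
    """Accumulate weighted votes for the wrist position.

    Each patch votes at patch_center + predicted offset; in a real
    Hough Forest the offsets and weights come from the forest's
    leaf nodes reached by the patch's features.
    """
    acc = np.zeros(img_shape)
    for (cy, cx), (dy, dx), w in zip(patch_centers, offsets, weights):
        y, x = cy + dy, cx + dx
        if 0 <= y < img_shape[0] and 0 <= x < img_shape[1]:
            acc[y, x] += w
    # Peak of the accumulator = estimated wrist position.
    return np.unravel_index(np.argmax(acc), acc.shape)

# Illustrative data: three patches whose votes agree on (30, 40).
centers = [(10, 10), (20, 20), (50, 60)]
offsets = [(20, 30), (10, 20), (-20, -20)]
weights = [1.0, 1.0, 1.0]
wrist = hough_vote_wrist(centers, offsets, weights, (64, 64))
```

The accumulator peak is robust to a minority of stray votes, which is why Hough-style voting suits cluttered hand images where individual patch predictions are noisy.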