2000 Fiscal Year Final Research Report Summary
Research on Digital Archives for Human Motion Data and Video Data
Project/Area Number | 11680415 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Information systems (including information library science) |
Research Institution | KOBE UNIVERSITY |
Principal Investigator | UEHARA Kuniaki, Kobe University, Research Center for Urban Safety and Security, Professor (60160206) |
Project Period (FY) | 1999 – 2000 |
Keywords | Motion capture / digital library / video databases / scene segmentation / motion data / video data |
Research Abstract |
In research on video databases, indexing video data is necessary for quick and accurate retrieval. The most important step in indexing video data is to segment the video into coherent units called scenes, which are larger than the shot-level fragments we call cuts. Manual segmentation is accurate, but it requires a great deal of time and background knowledge about the content of the video. We have developed a method that segments video automatically by integrating color, textual, and temporal information extracted from the video. Furthermore, we have focused on tagging human motion data. Motion data has the following property: movements of some body parts influence those of other body parts. We call this dependency a motion association rule, so the task of tagging motion data is equivalent to expressing motion in terms of motion association rules. Because human motion data has this dependency, we treat it as a multistream. To find motion association rules while reducing the cost of the task, we convert high-dimensional motion data into multiple sequences of lower-dimensional symbols, where each symbol uniquely represents a primitive motion. These symbol sequences are obtained by content-based segmentation followed by clustering of similar segments.
|
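
The two techniques summarized in the abstract can be sketched in Python. The first sketch shows histogram-based cut detection, one ingredient of the color and temporal integration used for scene segmentation; the bin count and threshold are illustrative assumptions, not values from the report.

import numpy as np

def color_histogram(frame, bins=8):
    # Normalized per-channel color histogram of an RGB frame (H x W x 3, uint8).
    hist = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

def detect_cuts(frames, threshold=0.4):
    # Declare a cut wherever consecutive frame histograms differ strongly (L1 distance).
    cuts = []
    prev = color_histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = color_histogram(frame)
        if np.abs(cur - prev).sum() > threshold:
            cuts.append(i)
        prev = cur
    return cuts

The second sketch illustrates the symbolization of motion data: each body-part stream is split into segments, similar segments are clustered, and each segment is replaced by its cluster label, which plays the role of a primitive-motion symbol. Fixed-length windows, the cluster count, and the use of scikit-learn's KMeans are assumptions made for illustration; the report's own segmentation is content-based rather than fixed-length.

import numpy as np
from sklearn.cluster import KMeans

def symbolize_stream(stream, window=30, n_symbols=8):
    # Split one 1-D motion stream (e.g. a joint angle over time) into fixed windows
    # and replace each window by the label of its nearest cluster centre.
    n_windows = len(stream) // window
    segments = np.asarray(stream[:n_windows * window]).reshape(n_windows, window)
    return KMeans(n_clusters=n_symbols, n_init=10, random_state=0).fit_predict(segments)

def symbolize_multistream(streams, window=30, n_symbols=8):
    # Symbolize each body-part stream independently, yielding one symbol sequence per
    # stream; motion association rules would then be mined across these sequences.
    return {name: symbolize_stream(s, window, n_symbols) for name, s in streams.items()}

# Usage with synthetic data standing in for captured joint-angle streams.
rng = np.random.default_rng(0)
streams = {"right_elbow": rng.standard_normal(1200), "left_knee": rng.standard_normal(1200)}
symbols = symbolize_multistream(streams)
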
Research Products
(22 results)