Video retrieval using a video ontology
Project/Area Number | 19300028
Research Category | Grant-in-Aid for Scientific Research (B)
Allocation Type | Single-year Grants
Section | General
Research Field | Media informatics/Database
Research Institution | Kobe University
Principal Investigator | UEHARA Kuniaki, Kobe University, Graduate School of Engineering, Professor (60160206)
Project Period (FY) | 2007 – 2009
Project Status | Completed (Fiscal Year 2009)
Budget Amount | ¥18,850,000 (Direct Cost: ¥14,500,000, Indirect Cost: ¥4,350,000)
Fiscal Year 2009: ¥5,460,000 (Direct Cost: ¥4,200,000, Indirect Cost: ¥1,260,000)
Fiscal Year 2008: ¥6,370,000 (Direct Cost: ¥4,900,000, Indirect Cost: ¥1,470,000)
Fiscal Year 2007: ¥7,020,000 (Direct Cost: ¥5,400,000, Indirect Cost: ¥1,620,000)
Keywords | Multimedia information processing / Video ontology / Data mining / Multimedia / Pattern extraction / Rough set theory
Research Abstract |
In videos, the same event can be captured with different camera techniques and in different situations, so shots of the same event may exhibit significantly different features. To retrieve such diverse sets of shots for a given event (query), we propose a method that defines an event based on rough set theory. First, given subsets of shots for an event as positive examples, we represent the event as the union of those subsets. We then adopt a partially supervised learning approach to obtain negative examples from a large amount of unlabeled data: specifically, we identify "likely" negative examples in the unlabeled data based on their dissimilarity to the given positive examples. In computing these dissimilarities, we take advantage of subspace clustering to find clusters in different subspaces of the high-dimensional feature space.
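The likely-negative selection step described above can be sketched in code. This is a minimal illustration, not the project's actual implementation: it assumes shot features are NumPy vectors, treats each positive subset as one cluster with its own low-variance feature subspace (a stand-in for the subspace clustering the abstract mentions), and the function name, `subspace_dim`, and `ratio` parameters are all hypothetical.

```python
import numpy as np

def likely_negatives(positives, unlabeled, subspace_dim=2, ratio=0.5):
    """Pick "likely" negative examples from unlabeled shots.

    positives    : list of (n_i, d) arrays, one per positive shot subset
    unlabeled    : (m, d) array of unlabeled shot features
    subspace_dim : features kept per cluster subspace (assumed parameter)
    ratio        : fraction of most-dissimilar unlabeled shots returned
    """
    scores = np.full(len(unlabeled), np.inf)
    for cluster in positives:
        centroid = cluster.mean(axis=0)
        # Cluster subspace: the features with the smallest within-cluster
        # variance, i.e. the dimensions along which its shots agree most.
        dims = np.argsort(cluster.var(axis=0))[:subspace_dim]
        # Dissimilarity measured only along that cluster's subspace.
        d = np.linalg.norm(unlabeled[:, dims] - centroid[dims], axis=1)
        scores = np.minimum(scores, d)  # distance to the nearest cluster
    k = max(1, int(ratio * len(unlabeled)))
    # Shots far from every positive cluster become likely negatives.
    return np.argsort(scores)[-k:]
```

An unlabeled shot is kept as a likely negative only if it is dissimilar to *every* positive cluster (hence the per-cluster minimum), matching the intuition that a shot close to any positive subset may still depict the event.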
Report (4 results)
Research Products (31 results)