Developing a segmentation method based on automatic background estimation from video images
Project/Area Number | 23700202
Research Category | Grant-in-Aid for Young Scientists (B)
Allocation Type | Multi-year Fund
Research Field | Perception information processing / Intelligent robotics
Research Institution | Toyohashi University of Technology
Principal Investigator | SUGAYA Yasuyuki, Toyohashi University of Technology, Graduate School of Engineering, Associate Professor (00335580)
Project Period (FY) | 2011 – 2013
Project Status | Completed (Fiscal Year 2013)
Budget Amount | ¥4,420,000 (Direct Cost: ¥3,400,000, Indirect Cost: ¥1,020,000)
Fiscal Year 2013: ¥650,000 (Direct Cost: ¥500,000, Indirect Cost: ¥150,000)
Fiscal Year 2012: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2011: ¥2,860,000 (Direct Cost: ¥2,200,000, Indirect Cost: ¥660,000)
Keywords | Region segmentation / Foreground extraction / Foreground extraction considering depth information / International information exchange
Research Abstract | In order to generate a background panoramic image from a video stream, we proposed a new method for detecting mistracked points among the tracked feature points in the video stream. We had already proposed a mistracking detection method in which we robustly fit an affine space to the feature point trajectories and detect mistracked points by computing their distances from the fitted affine space. We extended this method so that it can detect mistracked points even when multiple moving objects exist in the video stream. Moreover, in order to estimate background colors during background panoramic image generation, we proposed a clustering method based on an EM algorithm. We also proposed a method for generating a trimap from the extracted initial foreground regions.
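The mistracking detection step described in the abstract (robustly fitting an affine space to feature point trajectories, then flagging points far from that space) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the affine-space dimension, the median-based robust threshold, the iteration scheme, and all function names are assumptions introduced here for clarity.

```python
import numpy as np

def fit_affine_space(trajs, dim=3):
    """Fit a dim-dimensional affine space to trajectory vectors.

    trajs: (N, 2F) array; each row stacks the x, y coordinates of one
    feature point tracked over F frames.
    """
    mean = trajs.mean(axis=0)
    centered = trajs - mean
    # Principal directions of the centered trajectories via SVD.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:dim]  # orthonormal rows spanning the fitted space
    return mean, basis

def residuals(trajs, mean, basis):
    """Distance of each trajectory from the fitted affine space."""
    centered = trajs - mean
    proj = centered @ basis.T @ basis  # projection onto the space
    return np.linalg.norm(centered - proj, axis=1)

def detect_mistracked(trajs, dim=3, n_iter=5, k=3.0):
    """Iteratively refit on inliers; flag large-residual trajectories.

    Returns a boolean array: True where a trajectory is flagged as
    mistracked. The k * median rule is one simple robust threshold;
    the original work may use a different criterion.
    """
    inliers = np.ones(len(trajs), dtype=bool)
    for _ in range(n_iter):
        mean, basis = fit_affine_space(trajs[inliers], dim)
        r = residuals(trajs, mean, basis)
        thresh = k * np.median(r[inliers]) + 1e-12
        new_inliers = r < thresh
        if np.array_equal(new_inliers, inliers):
            break  # converged
        inliers = new_inliers
    return ~inliers
```

Under an affine camera model, background trajectories lie near a low-dimensional affine space, so points on independently moving objects or tracking failures show large residuals; iterating the fit on the current inlier set keeps gross outliers from corrupting the estimate.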
Report (4 results)
Research Products (25 results)