Project/Area Number | 13450150 |
Research Category | Grant-in-Aid for Scientific Research (B) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Information and communication engineering |
Research Institution | The University of Tokyo |
Principal Investigator | AIZAWA Kiyoharu, The University of Tokyo, Graduate School of Frontier Sciences, Professor (20192453) |
Co-Investigator (Kenkyū-buntansha) |
KODAMA Kazuya, National Institute of Informatics, Infrastructure Systems Research Division, Research Associate (80321579)
KOHIYAMA Kenji, Keio University, Graduate School of Media and Governance, Professor (00306888)
|
Project Period (FY) | 2001 – 2002 |
Project Status | Completed (Fiscal Year 2002) |
Budget Amount |
¥10,200,000 (Direct Cost: ¥10,200,000)
Fiscal Year 2002: ¥2,800,000 (Direct Cost: ¥2,800,000)
Fiscal Year 2001: ¥7,400,000 (Direct Cost: ¥7,400,000)
|
Keywords | Focus / Blur / Visual effect / Image Based Rendering / Motion blur / Image fusion / Microscopic images / Insects / Virtual viewpoint image |
Research Abstract |
In this project, we investigated an innovative approach to image/video content manipulation for generating various visual effects. In the conventional approach, images and video must first be segmented into objects, and visual effects are then applied to each object separately. Our approach is entirely different: we use multiple differently focused images and generate the target image directly, without segmentation. With our new method, object-based visual effects can be generated by linear processing alone. The specific achievements of this project are as follows.
1) Development of a theoretical foundation for multi-focus image processing. Various visual effects, such as blur, motion blur, enhancement, and phase shift, could be controlled object by object using only linear processing of multiple differently focused images.
2) Automation of preprocessing. Registration (position and scale) is required as preprocessing; this step was automated.
3) Virtual view generation. Foreground and background objects were shifted by our method to generate virtual views. The method was further applied to so-called light field rendering: each camera in a camera array captured two differently focused images, and new views were generated from them. Our technique achieved smooth interpolation with a small number of cameras.
4) Processing of microscopic images of insects. Microscopic images of insects were processed to generate all-in-focus images. A large number of microscopic images were used to evaluate the method. Depth images of the insects were also generated.
|
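The abstract does not spell out how an all-in-focus image is assembled from a focus stack. As an illustration only, the sketch below shows the common select-the-sharpest-pixel baseline for multi-focus fusion: a discrete Laplacian serves as the focus measure, and each output pixel is taken from the input image that is sharpest there (the argmax index map doubles as a coarse depth estimate, as in achievement 4). The function names and the Laplacian focus measure are assumptions for this sketch; note that this per-pixel selection is a generic baseline, not the segmentation-free linear-filtering method the project itself developed.

```python
import numpy as np

def laplacian_sharpness(img):
    """Absolute discrete Laplacian as a simple per-pixel focus measure."""
    lap = np.zeros_like(img)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]
                       - 4.0 * img[1:-1, 1:-1])
    return np.abs(lap)

def fuse_all_in_focus(images):
    """Fuse differently focused images of the same scene.

    For every pixel, pick the value from the image whose focus measure is
    highest there. Returns the fused image and the index map (a coarse
    depth-layer estimate: which image was in focus at each pixel).
    """
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    sharpness = np.stack([laplacian_sharpness(im) for im in stack])
    best = np.argmax(sharpness, axis=0)
    fused = np.take_along_axis(stack, best[None], axis=0)[0]
    return fused, best
```

A registration step (achievement 2) would normally precede this fusion, since the inputs must be aligned in position and scale before per-pixel comparison makes sense.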