Project/Area Number | 13680443 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Intelligent informatics |
Research Institution | Nagoya Institute of Technology |
Principal Investigator | SATO Jun, Nagoya Institute of Technology, Graduate School of Engineering, Department of Comp. Sci. Eng., Professor (20303688) |
Co-Investigator (Kenkyū-buntansha) | SATO Yukio, Nagoya Institute of Technology, Graduate School of Engineering, Department of Comp. Sci. Eng., Professor (80134790) |
Project Period (FY) | 2001 – 2003 |
Project Status | Completed (Fiscal Year 2003) |
Budget Amount | ¥4,100,000 (Direct Cost: ¥4,100,000)
Fiscal Year 2003: ¥800,000 (Direct Cost: ¥800,000)
Fiscal Year 2002: ¥1,600,000 (Direct Cost: ¥1,600,000)
Fiscal Year 2001: ¥1,700,000 (Direct Cost: ¥1,700,000) |
Keywords | Sonic Visual Interface / Computer Vision / Sound Control / Mixed Reality / Mutual Projection / Visually Impaired Persons |
Research Abstract |
In this research, we investigated methods for ensuring consistency between 3D visual information and 3D sound information, and for representing visual information by means of sound. We first analyzed the relationship between 3D image space, 3D sound space, and 3D auditory perception space. If the cameras are uncalibrated, the 3D space reconstructed from images has a 3D projective ambiguity. Likewise, if the 3D sound generators are uncalibrated, the 3D sound space has a 3D projective ambiguity, and human auditory perception has a 3D affine ambiguity. Thus, the relationships among 3D image space, 3D sound space, and 3D auditory space can be described by 3D projective transformations. Using these properties, we proposed a method for representing 3D visual shapes and positions by means of 3D sound. We next applied the method to ensure the geometric consistency of visual and auditory information in mixed reality applications. In particular, we showed that by using the mutual projection of multiple cameras, 3D visual information can be computed much more accurately than with existing methods. We also applied the proposed method to mixed reality systems with multiple users, and showed that by recovering 3D information in each user's camera coordinates, 3D sound information can be generated for each person. Finally, we applied the proposed method to generating virtual images and virtual sounds in virtual fighting systems and virtual musical instruments.
|
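
The abstract turns on two computational ideas. First, because an uncalibrated camera reconstruction and an uncalibrated sound-generator space each carry a 3D projective ambiguity, the two spaces can be related by a single 4x4 homography acting on homogeneous 3D points. The following Python sketch (not the project's code; the function and variable names are illustrative assumptions) shows how such a transformation could be estimated linearly from point correspondences and then used to map visual coordinates into sound-space coordinates:

import numpy as np

def estimate_3d_homography(X_src, X_dst):
    """Estimate H (4x4, up to scale) such that X_dst ~ H @ X_src.

    X_src, X_dst: (N, 4) homogeneous 3D points, N >= 5 in general
    position (a 3D projective transformation has 15 degrees of freedom).
    """
    rows = []
    for x, xp in zip(X_src, X_dst):
        # xp parallel to H @ x gives xp_j*(Hx)_i - xp_i*(Hx)_j = 0 for
        # every index pair (i, j): 6 equations (rank 3) per point.
        for i in range(4):
            for j in range(i + 1, 4):
                r = np.zeros(16)           # flattened, row-major H
                r[4 * i: 4 * i + 4] = xp[j] * x
                r[4 * j: 4 * j + 4] = -xp[i] * x
                rows.append(r)
    A = np.asarray(rows)
    # The smallest right singular vector spans the null space of A
    # and gives H up to an overall scale.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(4, 4)

def to_sound_space(H, X_euclidean):
    """Map Euclidean 3D points through H into sound-space coordinates."""
    X_h = np.hstack([X_euclidean, np.ones((len(X_euclidean), 1))])
    Y = (H @ X_h.T).T
    return Y[:, :3] / Y[:, 3:4]            # dehomogenize

Second, for the multi-user mixed reality systems mentioned at the end of the abstract, per-user 3D sound follows from expressing a sound source in each user's own camera coordinates. A minimal sketch of that step, assuming each user's world-to-camera pose (R, t) is available (axis conventions and names are assumptions, not the project's specification):

import numpy as np

def sound_params_for_user(R, t, source_world):
    """Return (azimuth, elevation, distance) of a sound source
    expressed in one user's camera frame.

    R (3x3) and t (3,) give the world-to-camera pose:
    X_cam = R @ X_world + t, with +z along the optical axis.
    """
    p = R @ source_world + t               # source in this user's frame
    dist = np.linalg.norm(p)
    azimuth = np.arctan2(p[0], p[2])       # left/right of gaze direction
    elevation = np.arcsin(p[1] / dist)     # above/below the gaze plane
    return azimuth, elevation, dist

Feeding these per-user parameters to a spatial audio renderer would keep the generated sound geometrically consistent with each user's view, which is the consistency the project targets.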