2016 Fiscal Year Research-status Report
Sound: Simulation of auditory near-field distance
Project/Area Number |
16K00277
|
Research Institution | The University of Aizu |
Principal Investigator |
Julian Villegas, University of Aizu, School of Computer Science and Engineering, Associate Professor (50706281)
|
Co-Investigator (Kenkyū-buntansha) |
黄 捷, University of Aizu, School of Computer Science and Engineering, Senior Associate Professor (10261166)
|
Project Period (FY) |
2016-04-01 – 2019-03-31
|
Keywords | Spatial sound / Virtual reality / Human interfaces / Head-mounted display / Auditory distance |
Outline of Annual Research Achievements |
During FY2016 we focused on work packages (WP) one and two. For WP1, we defined experimental protocols for comparing different spatialization methods, developed a web-based software tool for collecting and storing subjective data, and wrote Matlab and R scripts for stimulus creation and data analysis. For WP2, two real-time spatialization techniques were programmed in the Pure-data language, and an additional technique was programmed in Matlab for off-line spatialization. We also progressed on WP5 and WP6 with the creation of the project's website, successful project-management meetings, and the publication of several articles in peer-reviewed international venues.
|
Current Status of Research Progress |
1: Research has progressed more than originally planned.
Reason
The continuous effort of the two groups (the Human Interface Laboratory (HIL) and the Computer Arts Laboratory (CAL)) and their periodic communication have been reflected in the smooth progress of the project, which has reached its second year according to the initial plan. Besides the development and evaluation of several sound spatializers, the joint effort of the two laboratories is also reflected in the publication of four articles in international peer-reviewed conference proceedings.
|
Strategy for Future Research Activity |
For FY2017 we will work in parallel on 1) creating a VR application displayed on HMDs that allows us to compare different sound spatialization techniques in conjunction with visual content, and 2) improving the efficiency of the already programmed spatializers to ease their scalability. Additionally, we will collect a new set of Head-Related Impulse Responses (HRIRs) with equipment purchased for this project. This new set will focus on capturing the directivity pattern of the human voice, to be used in virtual conferences.
|
Remarks |
This is the project's public website, with information about recent updates, publications, project objectives, etc.
|
Research Products
(5 results)