Meeting abstract.
Effective rendering techniques for auditory information on an augmented reality (AR) device have been investigated. Researchers and industry are exploring new applications of AR and virtual reality (VR) technologies, which offer an enhanced user experience of interactivity and intimacy. To make the best use of these benefits, homogeneous integration of visual and auditory information is important. To date, binaural technology based on the head-related transfer function (HRTF) has been used to create immersive, three-dimensional audio objects for VR and AR devices. In this study, we compared the HRTF method with a stereophonic panning method that controls only the inter-aural level difference (ILD) on an AR device (smart see-through glasses), and investigated the precision of auditory information required for a coherent representation with a target visual image at locations of 0, -5, and -10 degrees in the counterclockwise direction. Auditory stimuli were rendered with target locations in the horizontal plane from +45 to -45 degrees at 5-degree intervals. In the subjective evaluation, each participant reported whether a randomly presented auditory location matched the target location of the visual image. The results showed that the two audio rendering methods did not differ significantly in creating an integrated percept on the AR device.
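The ILD-only panning condition described above can be illustrated with a standard constant-power stereo pan law, which sets the level difference between the two channels while keeping total power constant. The abstract does not specify the panning implementation; the function names, the sign convention (negative azimuth = left), and the mapping of the ±45-degree range below are assumptions for illustration only:

```python
import math

def ild_pan_gains(azimuth_deg, max_az=45.0):
    """Constant-power stereo panning that controls only the
    inter-aural level difference (ILD).

    azimuth_deg: target direction in degrees; the convention here
    (an assumption, not from the paper) is negative = left.
    Returns (left_gain, right_gain) with left^2 + right^2 = 1.
    """
    # Clamp to the rendered range, then map [-max_az, +max_az]
    # onto a pan angle in [0, pi/2].
    az = max(-max_az, min(max_az, azimuth_deg))
    p = (az + max_az) / (2.0 * max_az) * (math.pi / 2.0)
    return math.cos(p), math.sin(p)

def ild_db(left_gain, right_gain):
    # Resulting ILD in dB (right relative to left).
    # Undefined at the extremes where one gain is zero.
    return 20.0 * math.log10(right_gain / left_gain)

# Target locations as in the abstract: +45 to -45 degrees,
# 5-degree interval (19 locations in all).
targets = list(range(45, -50, -5))
```

At the center (0 degrees) both gains are equal (about 0.707 each), giving an ILD of 0 dB; moving the target toward either side raises one channel's gain and lowers the other's while total power stays constant.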
© 2016 Acoustical Society of America.