Abstract: A filtering model for efficient rendering of the spatial image of an occluded virtual sound source

William L. Martens, Jens Herder, and Yoshiki Shiba. A filtering model for efficient rendering of the spatial image of an occluded virtual sound source. 137th Meeting of the Acoustical Society of America and the 2nd Convention of the European Acoustics Association, Berlin, March 1999.
Rendering realistic spatial sound imagery for complex virtual environments must take into account the effects of obstructions such as reflectors and occluders. It is relatively well understood how to calculate the acoustical consequence that would be observed at a given observation point when an acoustically opaque object occludes a sound source. However, computing the interference patterns generated by occluders of various geometries and orientations relative to the virtual source and receiver is computationally intensive if accurate results are required. In many applications it is sufficient to create a spatial image that a human listener recognizes as the sound of an occluded source. In the interest of improving audio rendering efficiency, a simplified filtering model was developed and its audio output submitted to psychophysical evaluation. Two perceptually salient components of occluder acoustics were identified that could be directly related to the geometry and orientation of a simple occluder. Actual occluder impulse responses measured in an anechoic chamber resembled the responses of a model incorporating only a variable-duration delay line and a low-pass filter with variable cutoff frequency.
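
The simplified model described above lends itself to a compact digital sketch. The code below is a minimal illustration, not the authors' implementation: it treats the occluded path as a variable-duration delay line followed by a one-pole low-pass filter, with the delay time and cutoff frequency supplied as free parameters (how those parameters map to occluder geometry and orientation is assumed here, not taken from the paper).

```python
# Minimal sketch of the simplified occluder model: a pure delay followed by
# a one-pole low-pass filter. Parameter names and mappings are illustrative
# assumptions, not the authors' code.

import numpy as np


def occlusion_filter(x, fs, delay_s, cutoff_hz):
    """Apply a pure delay and a one-pole low-pass to signal x (sampled at fs Hz)."""
    # Variable-duration delay line (rounded to whole samples for simplicity).
    delay_samples = int(round(delay_s * fs))
    delayed = np.concatenate([np.zeros(delay_samples), x])

    # One-pole low-pass: y[n] = a * x[n] + (1 - a) * y[n-1],
    # with the smoothing coefficient derived from the cutoff frequency.
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)
    y = np.empty_like(delayed)
    prev = 0.0
    for n, sample in enumerate(delayed):
        prev = a * sample + (1.0 - a) * prev
        y[n] = prev
    return y


if __name__ == "__main__":
    fs = 44100
    t = np.arange(fs) / fs
    source = np.sin(2 * np.pi * 440 * t)  # 440 Hz test tone
    occluded = occlusion_filter(source, fs, delay_s=0.002, cutoff_hz=1500.0)
```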
Keywords: audio rendering, occluder, first-order reflection, human perception
Full paper: [Postscript gzip]
[Video]