Abstract: Sound Spatialization Resource Management in Virtual Reality Environments
Jens Herder and Michael Cohen. Sound Spatialization Resource
Management in Virtual Reality Environments. In ASVA'97 - International
Symposium on Simulation, Visualization and Auralization for Acoustic
Research and Education, Tokyo, April 1997.
In a virtual reality environment, users are immersed in a scene with
objects which might produce sound. The responsibility of a VR
environment is to present these objects, but a practical system has only
limited resources, including spatialization channels (mixels), MIDI/audio channels, and processing power. A sound spatialization
resource manager controls sound resources and optimizes fidelity
(presence) under given conditions. To that end, a priority scheme
based on psychoacoustics is needed. Parameters for spatialization
priorities include intensity (calculated from volume and distance),
orientation in the case of non-uniform radiation patterns, occluding
objects, the frequency spectrum (low frequencies are harder to
localize), expected activity, and others. Objects that are spatially
close together (depending on distance and direction) can be mixed,
and sources that cannot be spatialized separately can be mixed as
ambient sources. Important for resource management is the resource
assignment, i.e., minimizing swap operations, which makes it
desirable to look ahead and predict upcoming events in a scene.
Prediction is achieved by monitoring objects' positions, speeds, and
past evaluation values (e.g., priorities and probabilities); one
possible priority heuristic of this kind is sketched below.
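As an illustration only (not the paper's actual implementation), a
priority heuristic combining the parameters above with a simple
look-ahead might be sketched in C++ as follows; all names, fields,
and weighting constants are hypothetical:

    #include <algorithm>

    // Hypothetical per-source state; all fields are illustrative only.
    struct SoundSource {
        float volume;          // emitted amplitude [0..1]
        float distance;        // current distance to the listener, in meters
        float directionGain;   // gain toward the listener for non-uniform radiation [0..1]
        float occlusion;       // attenuation by occluding objects [0..1], 1 = unoccluded
        float localizability;  // spectral weight; low-frequency sources score lower [0..1]
        float activity;        // expected activity, e.g. likelihood of sounding soon [0..1]
        float pastPriority;    // smoothed priority from earlier evaluations
        float radialSpeed;     // speed toward (+) or away from (-) the listener, m/s
    };

    // One possible spatialization priority: predicted perceived intensity
    // scaled by psychoacoustic weights and blended with recent history.
    float spatializationPriority(const SoundSource& s, float lookAheadSeconds)
    {
        // Look ahead: predict the distance a short time from now
        // using the monitored position (distance) and speed.
        float predictedDistance =
            std::max(0.1f, s.distance - s.radialSpeed * lookAheadSeconds);

        // Inverse-square distance attenuation of the radiated volume.
        float intensity = s.volume * s.directionGain * s.occlusion
                          / (predictedDistance * predictedDistance);

        float current = intensity * s.localizability * s.activity;

        // Blend with past evaluation values to damp rapid re-ranking,
        // which in turn reduces swap operations downstream.
        const float history = 0.3f;  // illustrative smoothing factor
        return (1.0f - history) * current + history * s.pastPriority;
    }

In practice the weights and the smoothing factor would be derived
from psychoacoustic data rather than the ad hoc constants used here.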
Fidelity is contrasted for different kinds of resource restrictions
and optimal resource assignment; a sketch of such an assignment step
follows.
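To make the assignment step concrete, here is a minimal C++ sketch
(under the same assumptions and hypothetical names as above) that
hands the limited mixels to the highest-priority sources, uses a
hysteresis margin to avoid needless swap operations, and routes the
remaining sources to a common ambient mix:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Hypothetical per-frame candidate record.
    struct Candidate {
        int   sourceId;   // application-level identifier of the sound source
        float priority;   // e.g., computed by spatializationPriority() above
        bool  hasMixel;   // source currently owns a spatialization channel
    };

    // Assign at most mixelCount spatialization channels; everything else
    // is mixed as an ambient (non-spatialized) source.
    void assignMixels(std::vector<Candidate> sources, std::size_t mixelCount,
                      std::vector<int>& spatialized, std::vector<int>& ambient)
    {
        const float swapMargin = 0.1f;  // illustrative hysteresis against channel thrashing

        // Favor sources that already hold a mixel by a small margin, so a
        // competitor must be clearly better before a swap operation happens.
        std::sort(sources.begin(), sources.end(),
                  [swapMargin](const Candidate& a, const Candidate& b) {
                      float pa = a.priority + (a.hasMixel ? swapMargin : 0.0f);
                      float pb = b.priority + (b.hasMixel ? swapMargin : 0.0f);
                      return pa > pb;
                  });

        spatialized.clear();
        ambient.clear();
        for (std::size_t i = 0; i < sources.size(); ++i) {
            if (i < mixelCount)
                spatialized.push_back(sources[i].sourceId);  // gets its own mixel
            else
                ambient.push_back(sources[i].sourceId);      // mixed as ambient source
        }
    }

Clustering of spatially close sources, as mentioned above, could be
added by merging such candidates into a single entry before sorting.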
To give standard and comparable results, the VRML 2.0 specification
is used as an application programming interface. Applicability is
demonstrated with a helical keyboard, a polyphonic MIDI-stream-driven
animation with user interaction (a user may move around and play
along with the programmed notes). The developed sound spatialization
resource manager gives improved spatialization fidelity under runtime
constraints. Application programmers and virtual reality scene
designers are freed from the burden of assigning mixels and
predicting the locations of sound sources.