Spatial Media Group: Sound for Virtual Reality Environments
Jens Herder, Michael Cohen, and William L. Martens, "Sound for Virtual Reality
Environments," University of Aizu Forum for University-Industry Cooperation, November 1997.
Our basic theme is realtime synthesis of the sight and sound of a virtual
reality (VR) environment, including multimodal display and interaction
involving multiparameter, time-dependent data streams. Graphics alone
cannot create a sense of "place" for the user, and do not usually make
users feel as if they are present in the virtual environment. Sound is
critically important in achieving such results with VR systems. It can
be used to enhance immersion, to help orient the user in the virtual
environment, and to allow communication between the user and the system
and/or other users. We are engaged in creating a portable, scalable
module for supporting virtual reality systems with realistic sound
synthesis and spatial sound choreography. For instance, a MIDI data
stream, arriving from a sequencer or generated by realtime keyboard
performance, can be interpreted in several ways besides musical
synthesis: HRTF-based processing to spatialize the synthesized musical
notes according to listener position; animation of a graphical
representation (such as a helical keyboard) driven by the MIDI data
stream; and interpretation using multiple sinks, which are multiple
instantiations of the user at various positions within the space.
After a short introduction to spatial sound, a model
of the Pioneer Sound Field Control (PSFC) System in the Multimedia Center
is shown with the virtual helical keyboard performing a piece of music.
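The multiple-sink interpretation lends itself to a small illustration. The following is a minimal Python sketch under assumed simplifications: sources and sinks on a 2-D plane, Woodworth's interaural time difference (ITD) approximation, and an equal-power pan law standing in for HRTF filtering. The names `spatialize` and `mix_sinks` are hypothetical, not part of the actual module.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m; average head radius (assumed)

def spatialize(source, sink, yaw):
    """Panning cues for one sound source as heard by one sink.

    source, sink: (x, y) positions in metres; yaw: the sink's facing
    direction in radians (0 = +x axis, counter-clockwise positive).
    Returns (left_gain, right_gain, itd), where itd > 0 means the far
    (right) ear lags.  A simplified ITD/ILD model, not a measured HRTF.
    """
    dx, dy = source[0] - sink[0], source[1] - sink[1]
    distance = max(math.hypot(dx, dy), 0.1)      # clamp to avoid blow-up
    azimuth = math.atan2(dy, dx) - yaw           # angle relative to facing

    # Woodworth's approximation for the interaural time difference
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(azimuth) + azimuth)

    # Crude interaural level difference via an equal-power pan law;
    # counter-clockwise (positive) azimuth maps to the left channel
    pan = -math.sin(azimuth)                     # -1 = hard left, +1 = hard right
    angle = (pan + 1.0) * math.pi / 4.0
    left, right = math.cos(angle), math.sin(angle)
    rolloff = 1.0 / distance                     # inverse-distance attenuation
    return left * rolloff, right * rolloff, itd

def mix_sinks(source, sinks):
    """Average stereo cues over several sinks (listener instantiations)."""
    cues = [spatialize(source, pos, yaw) for pos, yaw in sinks]
    left = sum(l for l, _, _ in cues) / len(cues)
    right = sum(r for _, r, _ in cues) / len(cues)
    return left, right
```

A source directly ahead of a sink yields equal left/right gains and zero ITD; a source off to one side yields an asymmetric gain pair and a nonzero lag, which is the basic cue pair that HRTF-based processing refines.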
3D Audio Rendering Technology
3D graphics rendering technology has been recognized by Japanese
industry as a core technology for many years. 3D audio rendering
technology has come to the attention of Japanese industry more
recently, and may be regarded as less mature. The next generation of
3D audio rendering technology is currently under development at
the University of Aizu. Newly developed audio signal processing
algorithms promise to revolutionize both the audible quality and
computational efficiency of the sonic component of virtual reality
and multimedia systems. The improved technology has widespread
application in communications in general, and should be
commercially marketable within the next few years. Concrete
examples of these applications that can be demonstrated by
researchers at the University of Aizu include binaural
teleconferencing, 3D music and sound effects processing, and
coordination of 3D audio with 3D graphics for video presentations.
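The delay-and-gain structure underlying such binaural processing can be sketched briefly. This is a toy stand-in for HRTF filtering, assuming whole-sample delays; `binaural_pan` is a hypothetical helper, not the algorithm under development.

```python
import math

def binaural_pan(mono, sample_rate, itd, left_gain, right_gain):
    """Render a mono signal to stereo from a time and level difference.

    The interaural time difference (itd, seconds; here itd > 0 means
    the left ear lags) is applied as a whole-sample delay, and the
    interaural level difference as a pair of channel gains.
    """
    lag = int(round(abs(itd) * sample_rate))   # delay in samples
    delayed = [0.0] * lag + mono               # lagging ear
    padded = mono + [0.0] * lag                # leading ear, same length
    if itd >= 0:
        left = [s * left_gain for s in delayed]
        right = [s * right_gain for s in padded]
    else:
        left = [s * left_gain for s in padded]
        right = [s * right_gain for s in delayed]
    return left, right

# 10 ms of a 1 kHz tone at 44.1 kHz, lateralized toward the right ear
sr = 44100
tone = [math.sin(2 * math.pi * 1000.0 * n / sr) for n in range(441)]
left, right = binaural_pan(tone, sr, 0.0006, 0.3, 0.9)
```

A measured HRTF replaces the single delay and gain per ear with a full filter per ear, capturing pinna and head shadowing; the sketch only shows where those filters plug into the signal path.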