Workshop on Mixed Reality and Virtual Environments:
Media Technology and Developments in Japan and Germany

Welcome to the workshop on Mixed Reality and Virtual Environments: Media Technology and Developments in Japan and Germany. The workshop language is English, and attendance is open to the public. The workshop is a forum addressing all aspects of research related to mixed reality. This international forum is part of the Japanese Week at the FH Duesseldorf 2011. Scientists, researchers, and lecturers will present their latest achievements, hold open discussions, and give a retrospective of the developments in Japan and Germany. You are invited to attend the workshop on Wednesday, 25th May 2011, at 9:00. Hands-on sessions and demonstrations will follow the presentations at 14:00.


TITLE:
Virtual and Augmented Reality for Product Development

ABSTRACT:
Virtual Prototyping is an inherent part of today's product development process. Virtual Prototyping means building a computer model of a future product and analysing it like a real prototype. It reduces time and costs in product development and increases product quality. Key technologies for Virtual Prototyping are Virtual Reality (VR), Augmented Reality (AR), and Simulation. VR and AR are necessary for the visual presentation of the Virtual Prototype; Simulation gives it realistic behaviour. The presentation gives an overview of the research activities of the Heinz Nixdorf Institute in the fields of VR and AR and highlights the strong influence of Japanese scientists and their research.

SPEAKER:
Michael Grafe (Heinz Nixdorf Institute, University of Paderborn)

Dipl.-Ing. Michael Grafe is Chief Engineer and Managing Director at the Heinz Nixdorf Institute, University of Paderborn. Since 1995, he has been coordinating the Institute's research activities in the areas of Virtual and Augmented Reality. Since 2011, he has also been Managing Director of the Fraunhofer Project Group "Mechatronic Systems Design" in Paderborn.


TITLE:
Prototypical Mixed Reality Applications

ABSTRACT:
For more than a decade our department has worked on prototyping cooperative Mixed and Augmented Reality applications. My talk provides an overview of past and present work and takes a brief look at future projects.

SPEAKER:
Leif Oppermann (Collaborative Virtual and Augmented Environments, Fraunhofer-Institut für Angewandte Informationstechnik FIT)

Dr. Leif Oppermann is deputy head of FIT's Collaborative Virtual and Augmented Environments department. He studied Media Informatics at the Hochschule Harz and earned his doctoral degree at the Mixed Reality Lab at the University of Nottingham. He is generally interested in building all kinds of Mixed Reality solutions. His current research interest mostly concerns building location-based experiences, including the underlying infrastructures and workflows. Leif Oppermann has been active in the field of Mixed and Augmented Reality for 10 years and publishes his work at national and international conferences such as Ubicomp, ACE, and Multimedia, and at workshops such as ARToolkit, PerGames, and Mobile Gaming. He acts as a reviewer for conferences such as Pervasive, ISMAR, EICS, and CHI. He has been working at Fraunhofer FIT since 2009.


TITLE:
Health 2.0: IT enforced Medical

ABSTRACT:
What is the next wave in medicine driven by IT? How about using Motion Capture to visualize human motions, or robotics? How about using Computer Vision for gesture interfaces? How about Augmented Reality for medical simulation? How about 3DCG or Stereo 3D for medical visualization? How about smartphones or tablet devices for the future of medicine? How about using the Web 2.0 concept to mash up medical data? The presentation gives an overview of Health 2.0: IT enforced Medical.

SPEAKER:
Jun Yamadera (Chief Chaos Officer, Eyes, JAPAN Co. Ltd.)

Jun Yamadera was born in 1968 in Aizuwakamatsu, Fukushima. He started Eyes, JAPAN Co. Ltd. in 1995 with some students of the University of Aizu. The company's vision is "Any sufficiently advanced technology is indistinguishable from magic," a quotation from Arthur C. Clarke. He is working on various cutting-edge projects with staff, students, interns, and researchers from all over the world. The company has accepted 11 German internship students since 2003.


TITLE:
History of Virtual Studio and HDTV Research Demonstrates Japanese Commitment to New Media Technology

ABSTRACT:
The idea of combining virtual 3D graphics with professional camera shots in real time became reality around 1995, when the first mainframe computers were able to render 50 or 60 images per second at digital SD TV resolution (PAL/NTSC). Camera tracking systems developed at that time also solved the problem of locking the perspective of the real shots to the virtual scene. Since then, Virtual Studio technology has been brought to a professional level, and much research has been done to improve the interaction between humans and the graphics computer in the studio room, to find new ways for interactive TV shows, and to develop new visualization methods for metadata to be presented during the show (e.g. in weather forecasts). This presentation summarizes the research projects of the last 16 years from a German point of view and reflects on Japanese activities in this area.
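
To make the "locked perspective" idea concrete, the core loop can be sketched in a few lines: the tracked pan/tilt and zoom parameters drive a virtual pinhole camera so the graphics are rendered from the real camera's viewpoint, and the chroma-keyed studio shot is then composited over that rendering. The Python sketch below is purely illustrative; the function names, the simple pinhole model, and the simplifications (camera fixed at the origin, no lens distortion) are assumptions for this example, not the actual IMK/IAIS systems.

import numpy as np

def virtual_camera_matrix(pan_deg, tilt_deg, focal_px, cx, cy):
    """Build a 3x4 projection matrix from tracked pan/tilt and zoom (focal length in
    pixels), so the virtual set is rendered from the same perspective as the real camera."""
    pan, tilt = np.radians(pan_deg), np.radians(tilt_deg)
    # Rotation of the camera head: pan about the vertical axis, tilt about the horizontal axis.
    R_pan = np.array([[ np.cos(pan), 0.0, np.sin(pan)],
                      [ 0.0,         1.0, 0.0        ],
                      [-np.sin(pan), 0.0, np.cos(pan)]])
    R_tilt = np.array([[1.0, 0.0,           0.0          ],
                       [0.0, np.cos(tilt), -np.sin(tilt)],
                       [0.0, np.sin(tilt),  np.cos(tilt)]])
    K = np.array([[focal_px, 0.0,      cx ],
                  [0.0,      focal_px, cy ],
                  [0.0,      0.0,      1.0]])
    R = R_tilt @ R_pan
    return K @ np.hstack([R, np.zeros((3, 1))])    # camera kept at the origin for simplicity

def composite(real_rgb, virtual_rgb, matte):
    """Chroma-key composite: matte is 1 where the presenter (foreground) is,
    0 where the blue/green screen was, so the virtual set shows through."""
    matte = matte[..., None]                       # broadcast over the colour channels
    return matte * real_rgb + (1.0 - matte) * virtual_rgb

In a real studio this runs once per frame (50/60 Hz), with the projection matrix handed to the renderer and the matte delivered by the keyer.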

SPEAKER:
Wolfgang Vonolfen (Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS)

Wolfgang Vonolfen received his computer science degree in 1993 and first worked in industry as a freelancer and project manager for German versions of Microsoft and Borland products. In 1997, he took over the research and development activities for the digital TV, cinema, and Virtual Studio production facilities of the Institute for Media Communication (IMK, now part of Fraunhofer IAIS). He was responsible for scientific work, e.g. as part of EC-funded projects, as well as for professional Virtual Studio productions with broadcasters, resulting in various publications and lectures as well as spin-offs and licenses for industrial exploitation. Since 2006, he has been responsible for the scientific and industrial activities of the Media Production department of IAIS, comprising audio/video as well as Internet technology.


TITLE:
Listen to what I say: environment-aware speech production

ABSTRACT:
Major challenges in adapting all forms of speech output to a given auditory context (e.g., noisy or highly reverberant environments, second-language or hearing-impaired listeners) based on human speaker strategies are discussed. Ongoing research aimed at increasing speech intelligibility in real time without compromising speech quality (or fatiguing the listener) is described, and software applications used in this research are presented. This talk will also present auditory demonstrations of natural and artificial speech modifications.
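
As a deliberately simple illustration of an "artificial speech modification": boosting high frequencies (flattening the spectral tilt) is one well-known, Lombard-like strategy for raising intelligibility in noise, usually under the constraint that the overall energy stays unchanged. The Python sketch below only illustrates that general idea; it is an assumption for this example and not the method of the LISTA project.

import numpy as np

def flatten_spectral_tilt(speech, coefficient=0.95):
    """First-order pre-emphasis, y[n] = x[n] - a*x[n-1]: attenuates low frequencies
    relative to the high ones, where many consonant cues live."""
    out = np.empty_like(speech)
    out[0] = speech[0]
    out[1:] = speech[1:] - coefficient * speech[:-1]
    # Rescale to the original RMS energy, so any intelligibility gain does not
    # simply come from playing the signal louder.
    rms_in = np.sqrt(np.mean(speech ** 2))
    rms_out = np.sqrt(np.mean(out ** 2)) + 1e-12
    return out * (rms_in / rms_out)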

SPEAKER:
Julian Villegas (University of the Basque Country)

Dr. Julián Villegas studied Music and Electronic Engineering at Valle University (Cali, Colombia). He obtained an M.Sc. and a Ph.D. in Computer Science from the University of Aizu in Japan. After his Ph.D. graduation (2010), Julián joined the EU-funded LISTA project (the listening talker) as a researcher at the Language and Speech Laboratory of the University of the Basque Country. He currently lives and works in Vitoria, Spain. Julián was invited to be a guest researcher at CIRMMT, McGill University, in 2008, and received the University of Aizu President's Award (2006). His interests and professional activities include interdisciplinary research on music, speech analysis, sound, psychoacoustics, experimental psychology, real-time programming, visual and aural illusions, virtual reality, and 3D audio, among others.


TITLE:
Multiperspectives in Art, Cinema, & Mixed Reality Systems

ABSTRACT:
The idea of multiple simultaneous perspectives is not new, but it retains its ability to contextualize advanced user interfaces. This seminar traces some threads of multiperspectives, including panoramic imagery (which allows omnidirectional browsing), stereolithographic composition of multiple figures, fractal designs (which have similar features across a range of scales), Shepard tones (an auditory pitch illusion manifesting apparent tonal motion without changes in register), analytic cubism, hybrid images (which superimpose images at different spatial scales), and modern computer games, which encourage a fluid point of view. "Cyberpunk" literature and cinema are also considered: their themes of alienation, man-machine interfaces, telepresence, and the cloning, forking, and replication of consciousness inform the consideration of fragmented perspective. A multipresence-enabled, avatar-populated online chat system for spatial sound displays is demonstrated. Spectroscopy, as developed by Joseph Fraunhofer, has been extended into dispersive, prism-like binocular systems. Chromastereoptic eyewear presented to each audience member allows depth-rich apprehension of appropriately colored images through diffraction-grating-induced binocular disparity.
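
The Shepard-tone illusion mentioned above is easy to reproduce: in its continuous (Risset-glissando) form, octave-spaced partials climb in log-frequency and wrap around, while a bell-shaped loudness envelope hides the wrap, so the pitch seems to rise forever without ever changing register. The Python sketch below is only an illustration of that construction; the parameter values are arbitrary assumptions, not material from the talk.

import numpy as np

def shepard_glissando(duration=10.0, sr=44100, base=27.5, octaves=9, cycle=2.0):
    """Return a mono signal (floats in [-1, 1]) with an endlessly 'rising' pitch."""
    n = int(duration * sr)
    t = np.arange(n) / sr
    signal = np.zeros(n)
    for k in range(octaves):
        # Each partial climbs one octave per `cycle` seconds and wraps at the top of the range.
        octave_pos = (k + t / cycle) % octaves
        freq = base * 2.0 ** octave_pos              # instantaneous frequency in Hz
        phase = 2 * np.pi * np.cumsum(freq) / sr     # integrate frequency to get a smooth phase
        # Bell-shaped loudness envelope over log-frequency: partials fade in at the bottom
        # and out at the top, so the ear hears motion without a change of register.
        amp = np.exp(-0.5 * ((octave_pos - octaves / 2) / (octaves / 6)) ** 2)
        signal += amp * np.sin(phase)
    return signal / np.max(np.abs(signal))

Writing the result to a 16-bit WAV file (e.g. with Python's standard wave module) and looping it makes the illusion audible.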

SPEAKER:
Michael Cohen (University of Aizu)

Michael Cohen is Professor at the University of Aizu in Japan, where he heads the Spatial Media Group, comprising about 30 members, teaches undergraduate courses in information theory and in human interfaces & virtual reality, and gives graduate lectures in sound and audio, computer music, and spatial sound. His research primarily concerns interactive multimedia, including virtual & mixed reality, spatial audio & stereotelephony, stereography, ubicomp (ubiquitous computing), and mobile computing. He received an Sc.B. in EE from Brown University (Providence, Rhode Island), an M.S. in CS from the University of Washington (Seattle), and a Ph.D. in EECS from Northwestern University (Evanston, Illinois). He held post-doctoral appointments at the University of Washington and at NTT. He has worked at the Air Force Geophysics Lab (Hanscom Field, Massachusetts), the Weizmann Institute (Rehovot, Israel), Teradyne (Boston, Massachusetts), BBN (Cambridge, Massachusetts, and Stuttgart, Germany), Bellcore (Morristown and Red Bank, New Jersey), the Human Interface Technology Lab (Seattle, Washington), and the Audio Media Research Group at the NTT Human Interface Lab (Musashino and Yokosuka, Japan). He is the co-developer of the Sonic (sonic.u-aizu.ac.jp) online audio courseware, the author or coauthor of over one hundred publications, seven book chapters, and two patents (plus one pending), and the inventor or co-inventor of multipresence (virtual cloning algorithm), the Schaire ("Share Chair" rotary motion platform), nearphones (headrest-mounted stereo speakers), SQTVR (stereographic QuickTime Virtual Reality), and Zebrackets (dynamic articulated parentheses).

He has been an invited or keynote speaker at numerous international conferences, including the Japan-China Joint Workshop on Frontier of Computer Science and Technology (2006), the Int. Symp. on Universal Communication (2007), MobileHCI (2007), the InterLink Workshop on Ambient Computing and Communication Environments (2007), the Int. Symp. on Electronic Arts (2008), the Int. Conf. on Applied and Creative Arts (2008), the Special Interest Group on Distributed Processing Systems (2008), the Int. Conf. on the Use of Symbols to Represent Music and Multimedia Objects (2008), the Immersive Education Initiative Boston Summit (2010), the FutureCampus Forum (2010), and the Int. Conf. on Virtual-Reality Continuum and Its Applications in Industry (2010).

He is on the Scientific Committee of the Journal of Virtual Reality and Broadcasting, a member of the ACM, AES (Audio Engineering Society), including the AES Technical Committee on Spatial Sound, the IEEE Computer Society, IEICE (Institute of Electronics, Information and Communication Engineers), JMUG (Japan Mathematica Users Group), 3D-Forum, TUG (TeX Users Group), and VRSJ (Virtual Reality Society of Japan). He is currently Vice-Chair of the Dept. of Computer and Information Systems, Graduate School of Computer Science and Engineering of the U. of Aizu.