Moderator:
Scott Selfon, Audio Experiences Lead, Facebook Reality Labs (Oculus Research) - Redmond, WA, USA
Presenting sound to a person in a dynamic virtual reality experience is, by definition, a just-in-time activity. How do we draw on more than a century of mixing and monitoring practice built around linear content, and more than 20 years of interactive game mixing, to create a coherent, believable, and emotionally satisfying soundscape for these new realities? This talk surveys the current state of the art in mixing and monitoring techniques: the process itself, the ever-evolving standards, and robust handling of the wide variety of authored and implemented content, playback environments, and scenarios.