Google’s Resonance Audio: High-Fidelity Sound Across Mobile & Desktop
By Michael Berg
Google’s new Resonance Audio SDK for Unity lets you render hundreds of simultaneous 3D sound sources at high fidelity for your XR, 3D, and 360 video projects on Android, iOS, Windows, macOS, and Linux.
Resonance Audio solves some of the biggest audio challenges for immersive and interactive experiences. It enables you to deliver realistic, impactful sound on multiple platforms – including mobile – without compromising audio quality or running out of CPU resources. The Resonance Audio SDK for Unity also lets you author ambisonic clips, combine ambisonic and spatialized clips, generate realistic reverb based on scene geometry and acoustic surface materials, and deliver a host of other impressive audio effects.
Hear the power of Resonance Audio in action
Audio Factory is a VR experience that showcases the features and capabilities of the Resonance Audio SDK. Experience spatial audio in this exhilarating clip from Audio Factory.
Resonance Audio in Google’s Audio Factory VR app. App available for free on Daydream and SteamVR.
The technology behind Resonance Audio already powers the top Made with Unity VR apps on Daydream, as well as all YouTube 360 videos with spatial audio.
It’s time to bring high-fidelity, spatial audio that scales to your Unity projects. Learn more about Resonance Audio’s features and check out the resources below to help you get up and running.
Getting Started with Resonance Audio
Here’s how to get started with Resonance Audio in your project:
1. Make sure you have Unity 2017.1 or later installed. If not, install Unity.
2. Download the Resonance Audio SDK for Unity.
3. Learn more by visiting Google’s Resonance Audio and Developer Guides for Unity.
4. Join the discussion in our Unity Forum.
Resonance Audio: Overview and Features
How it works
Audio spatializers are typically CPU-intensive, and the cost grows with each additional audio source. In contrast, Resonance Audio’s spatializer is efficient and scales well as audio sources are added to a scene. Resonance Audio accomplishes this through a unique design in which each audio source is first encoded into ambisonic format.
This format preserves enough spatial information to spatialize the source effectively later. All encoded sources are then mixed together, and the “expensive” spatialization step is applied just once to the combined mix. This allows hundreds of simultaneous high-fidelity sources to be handled per CPU core, even on mobile devices.
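The encode–mix–decode pipeline described above can be sketched in plain Python. This is not the SDK’s actual code; the function names, the first-order (W, X, Y, Z) encoding, and the toy stereo decode are illustrative only:

```python
import math

def encode_first_order(sample, azimuth, elevation):
    """Encode one mono sample into first-order ambisonics (W, X, Y, Z).
    This is the cheap per-source step; real SDKs also handle normalization
    conventions, distance attenuation, and higher ambisonic orders."""
    w = sample                                            # omnidirectional
    x = sample * math.cos(elevation) * math.cos(azimuth)  # front/back
    y = sample * math.cos(elevation) * math.sin(azimuth)  # left/right
    z = sample * math.sin(elevation)                      # up/down
    return (w, x, y, z)

def mix(encoded_sources):
    """Mixing encoded sources is plain per-channel summation, so adding
    sources barely increases the cost of the final spatialization."""
    return tuple(sum(channel) for channel in zip(*encoded_sources))

def decode_stereo(w, x, y, z):
    """Stand-in for the single 'expensive' step (an HRTF-based binaural
    decode in practice), applied once to the mixed bus, not per source."""
    return (w + 0.5 * y, w - 0.5 * y)
```

Whether one source or hundreds are playing, the decode runs once per output buffer; only the cheap encoding step scales with source count.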
Ambisonic Decoder: Combining ambisonic clips with spatialized clips
Resonance Audio includes an Ambisonic Decoder plugin that lets developers create rich audio experiences using both ambisonic clips and more traditional audio clips. First-order ambisonic clips are mixed directly into the global ambisonic representation that Resonance Audio already generates for all spatialized audio sources, so the spatialization step is still applied just once to the combined mix.
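The idea can be sketched as follows: a pre-recorded first-order ambisonic frame already has the bus’s (W, X, Y, Z) layout, so it sums straight onto the internal mix, while a traditional mono clip is encoded first. The helper names here are illustrative, not the SDK’s API:

```python
import math

def encode_mono(sample, azimuth):
    """Encode a traditional mono clip sample into a horizontal-only
    first-order (W, X, Y, Z) frame. Illustrative helper."""
    return [sample, sample * math.cos(azimuth), sample * math.sin(azimuth), 0.0]

def mix_onto_bus(bus, frame):
    """Both kinds of content end up as (W, X, Y, Z) frames, so mixing is
    the same channel-for-channel addition either way."""
    return [b + f for b, f in zip(bus, frame)]

bus = [0.0, 0.0, 0.0, 0.0]
bus = mix_onto_bus(bus, encode_mono(0.5, math.pi))  # spatialized mono clip
bus = mix_onto_bus(bus, [0.2, 0.1, 0.0, 0.0])       # ambisonic clip frame, as-is
```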
Ambisonic Soundfield Recording: Authoring clips in the Unity Editor
Ambisonics are an exciting advance for XR (AR/VR) audio because they project sounds above and below the listener as well as on the horizontal plane. Think of them as the audio equivalent of 360 video: XR ambiences rotate correctly as you turn your head and can be manipulated in other creative ways during an XR experience.
One typical issue with ambisonics, however, has been that these clips are difficult to record and author. Now, with the Unity-exclusive Ambisonic Soundfield recording tool in the Resonance Audio SDK, sound designers can use Unity to author ambisonic clips. This feature allows you to place many ambient audio sources in a scene, and then bake out one ambisonic clip based on the mix of the original clips.
The newly created ambisonic clip is much “cheaper” to play back than several audio sources. It also retains enough relative positional information to realistically simulate where each sound originated and have those sounds rotate correctly as you turn your head in an XR experience.
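One reason a baked clip can still rotate correctly is that first-order ambisonics encodes direction in its channels, so tracking head yaw reduces to a small channel rotation at playback. A minimal sketch (sign conventions differ between ambisonic formats, and real implementations rotate whole buffers):

```python
import math

def rotate_yaw(frame, yaw):
    """Rotate a first-order ambisonic frame (W, X, Y, Z) about the vertical
    axis. W (omnidirectional) and Z (height) are unaffected by yaw; only
    the horizontal X/Y pair rotates. Illustrative only."""
    w, x, y, z = frame
    return (w,
            x * math.cos(yaw) - y * math.sin(yaw),
            x * math.sin(yaw) + y * math.cos(yaw),
            z)
```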
Geometric Reverb Baking: Calculating audio reflections and reverb
Also exclusive to Unity, this feature lets developers generate realistic reverb based on the geometry and associated acoustic surface materials in a scene. Resonance Audio also supports direct sound propagation, occlusion, near-field effects, sound source spread, and directivity-shaping for sound sources and listeners.
[caption] Resonance Audio’s sound directivity customization
[caption] Resonance Audio’s Geometric Reverb Baking in Unity
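A common way to model source directivity is to blend an omnidirectional pattern with a figure-of-eight and raise the result to a sharpness exponent. The parameter names and defaults below are assumptions for illustration, not Resonance Audio’s exact API:

```python
import math

def directivity_gain(theta, pattern=0.5, sharpness=1.0):
    """Gain applied to sound leaving a source at angle theta (radians) off
    its forward axis. pattern=0.0 gives an omnidirectional source, 0.5 a
    cardioid, 1.0 a figure-of-eight; higher sharpness narrows the lobe.
    Parameter names are illustrative assumptions."""
    return abs((1.0 - pattern) + pattern * math.cos(theta)) ** sharpness
```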
Environmental audio, or modeling how the environment affects authored sounds, has been another ongoing challenge. Early environmental modeling was often simplified to a “shoebox” model, which assumes a rectangular room around the listener and audio sources. Now, with the Resonance Audio SDK, you can use actual scene geometry to model these environmental effects more realistically.
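For a flavor of what material-driven reverb estimation involves, Sabine’s classic formula relates reverberation time to room volume and the absorption of each surface. This is a textbook simplification, not Resonance Audio’s actual baking algorithm:

```python
def rt60_sabine(volume_m3, surfaces):
    """Sabine's approximation of RT60, the time for sound to decay by 60 dB.
    `surfaces` is a list of (area_m2, absorption_coefficient) pairs, one per
    acoustic material; more absorptive materials shorten the reverb tail."""
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A 5 m x 4 m x 3 m "shoebox" room with uniformly absorptive surfaces
# (floor, ceiling, and the four walls grouped as one material entry):
room_rt60 = rt60_sabine(60.0, [(20.0, 0.1), (20.0, 0.1), (54.0, 0.1)])
```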
Cross-platform support: Build once, deploy everywhere
The Resonance Audio SDK for Unity supports development for Android, iOS, Windows, macOS, and Linux.
Keep us in the audio loop!
It’s time to bring high-fidelity Resonance Audio to your Unity projects to wow your users with truly immersive and realistic sound and effects. So get started now by installing the Resonance Audio SDK for Unity and joining the discussion in the Unity Forum. We can’t wait to hear from you – and hear your results!