The Turn Of The Screw Trailer

An Immersive Spatial Audio Experience for Opera North

In October 2019, Opera North, together with the Audiolab of York University and sound artist James Bulley, approached Lusion to collaborate on an innovative cultural project. Their idea was to record the opera The Turn of the Screw in a spatialized way, then place these recordings in an explorable 3D environment. The main goal of the project was to create an audiovisual walk through a binaural landscape.

Project Beginning, Brief, Intentions

Site visit at the Yorkshire Sculpture Park before production.

The experience is divided into the opera’s five chapters: Opening, Dawn, Day, Dusk, Night. Each one has its own music and a visual ambience that follows the time of day. Although we had a lot of musical material from the opera, we received very few guidelines for the look, so we had to explore and improvise to visualise the story of the opera.

Early prototyping

We received these references to start prototyping the look:

Seeing the references, we quickly decided to use the Yorkshire Sculpture Park as a starting point. Based on photographs taken by the team, we aligned the placement of each key location with Google satellite imagery and composed a workable terrain-based map in Photoshop. Then we artistically painted the land, lake, trees and scene locations in Houdini FX to procedurally generate the overall structure of the terrain.

Finally, we export the data we want from Houdini as .ply files and use a little tool we created to convert them into lightweight binary buffers, ready to use in our WebGL scene.
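Our converter isn’t public, but the core of such a step can be sketched in a few lines of plain JavaScript. Everything here (the function name, ASCII-PLY-only parsing, positions-only output) is an assumption for illustration:

```javascript
// Hypothetical sketch: pull vertex positions out of an ASCII .ply file and
// pack them into a Float32Array, ready to upload as a WebGL buffer.
function plyToBuffer(plyText) {
  const lines = plyText.split('\n').map((l) => l.trim());
  const headerEnd = lines.indexOf('end_header');
  const countLine = lines.find((l) => l.startsWith('element vertex'));
  const vertexCount = parseInt(countLine.split(/\s+/)[2], 10);

  // Read x/y/z from the body; extra per-vertex properties are ignored here.
  const positions = new Float32Array(vertexCount * 3);
  for (let i = 0; i < vertexCount; i++) {
    const [x, y, z] = lines[headerEnd + 1 + i].split(/\s+/).map(Number);
    positions.set([x, y, z], i * 3);
  }
  return positions; // positions.buffer can go straight into gl.bufferData()
}
```

The point of the binary step is simply that the browser can memcpy the buffer into GPU memory without any parsing at load time.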

This workflow (Concept → Houdini FX → WebGL) allowed us to shape the visual concept quickly. Only one week after the brief, we were able to send screenshots of the scene and concrete prototypes of the look and feel.

Early screenshot of the scene scattered with points

Setting up the spatialized music

One of the pillars of this project is the binaural audio. We really wanted to build an immersive walkthrough where we hear the sound as the audio team recorded it, in a spatialized way. This means that for each instrument and each singer, there would be a PositionalAudio speaker at a specific position that outputs only its own channel.

To facilitate this, the audio team assembled multiple channels together and drew up spatial maps indicating the position of each PositionalAudio. We would then place them in Houdini FX and export their position/rotation to three.js. Finally, we used PositionalAudioHelper to expose the values of each PositionalAudio (cone, ref distance, roll-off factor...), and collaborated closely with the audio team to get these right.
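For reference, three.js PositionalAudio is backed by a Web Audio PannerNode, whose default "inverse" distance model shows how refDistance and the roll-off factor interact. A small sketch of that formula:

```javascript
// The Web Audio "inverse" distance model (the default for a PannerNode,
// and hence for three.js PositionalAudio): gain is 1 inside refDistance,
// then falls off at a rate controlled by rolloffFactor.
function inverseDistanceGain(distance, refDistance, rolloffFactor) {
  const d = Math.max(distance, refDistance);
  return refDistance / (refDistance + rolloffFactor * (d - refDistance));
}
```

Tuning these two numbers per speaker is essentially what the back-and-forth with the audio team was about: how quickly each instrument should fade as you walk away from it.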

Spatial layout of the Dawn scene on the left, transferred to Houdini in the middle, and built in WebGL on the right (with helpers enabled).

In order to add some interactivity, we collaborated with the audio team to add an “audio focus” on the singers: when the mouse hovers over a singer, their voice rises above everything else so you can hear it up close, without reverb.
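The exact mixing logic belongs to the audio team, but a hypothetical sketch of such a focus mix (the channel shape, boost and duck amounts are all made up for illustration) could look like this:

```javascript
// Hypothetical "audio focus" mix: as focus ramps from 0 to 1 for the hovered
// singer, their dry level rises while their reverb send drops, and every
// other channel is ducked slightly.
function focusMix(channels, focusedId, focus) {
  return channels.map((ch) => ({
    id: ch.id,
    dry: ch.id === focusedId ? 1 + focus : 1 - 0.5 * focus,
    reverb: ch.id === focusedId ? 1 - focus : 1,
  }));
}
```

In practice the focus value would be eased over a few hundred milliseconds so the voice glides forward rather than snapping.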

The character lights up when the mouse hovers over it, as their voice rises above the music.


In the experience, navigation follows predefined camera splines. We matched the physical camera settings in Houdini FX with the WebGL output to make sure that what we see is what we get. With this set-up, we decided to optimise the 3D scene by removing any polygon that is never visible during the experience.
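three.js evaluates such camera paths (e.g. with CatmullRomCurve3) segment by segment; a uniform Catmull-Rom segment can be sketched as a small polynomial blend of four control points:

```javascript
// Evaluate one uniform Catmull-Rom segment between p1 and p2 (p0 and p3 are
// the neighbouring control points), the basis behind spline camera paths
// such as three.js CatmullRomCurve3. Points are plain [x, y, z] arrays.
function catmullRom(p0, p1, p2, p3, t) {
  const t2 = t * t, t3 = t2 * t;
  return p1.map((_, i) =>
    0.5 * (2 * p1[i] +
      (-p0[i] + p2[i]) * t +
      (2 * p0[i] - 5 * p1[i] + 4 * p2[i] - p3[i]) * t2 +
      (-p0[i] + 3 * p1[i] - 3 * p2[i] + p3[i]) * t3));
}
```

The curve passes exactly through p1 (at t = 0) and p2 (at t = 1), which is what makes Catmull-Rom convenient for hand-placed camera keyframes.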

The challenging part is that, since the user can rotate and offset the camera, we had to write a custom raycasting system in Houdini FX to perform the visibility test from each polygon to the camera spline tube. It is roughly an O(n³) operation, and it took around 20 minutes to optimise the opening scene. But the result was fantastic: it speeds up the render time as well as the download time.
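The real system raycasts against the full scene geometry inside Houdini FX, but the shape of the test can be sketched with simplified sphere occluders (the function names and the sphere occluder model are illustrative assumptions, not the production code):

```javascript
// Does the segment a -> b pass through a sphere (center c, radius r)?
// Used below as a stand-in for a real occlusion raycast.
function segmentHitsSphere(a, b, c, r) {
  const ab = b.map((v, i) => v - a[i]);
  const ac = c.map((v, i) => v - a[i]);
  const len2 = ab.reduce((s, v) => s + v * v, 0);
  const t = Math.max(0, Math.min(1, ab.reduce((s, v, i) => s + v * ac[i], 0) / len2));
  const closest = a.map((v, i) => v + ab[i] * t);
  const d2 = closest.reduce((s, v, i) => s + (v - c[i]) ** 2, 0);
  return d2 <= r * r;
}

// Keep a polygon (represented by its centroid) only if an unoccluded ray
// reaches it from at least one sample point on the camera spline tube.
// Polygons x spline samples x occluders is what makes the cost cubic-ish.
function cullInvisible(centroids, splineSamples, occluders) {
  return centroids.filter((p) =>
    splineSamples.some((cam) =>
      !occluders.some((o) => segmentHitsSphere(cam, p, o.center, o.radius))));
}
```

Because the user can only look around from the tube, any polygon that fails this test for every sample can be deleted from the export entirely.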

Dynamic Lightmap

Similar to the fixed-step voxel cone tracing technique we used in Lost Your Way In Nature (an experience we never managed to finish -_-)

We used a simplified 2D version of this technique to render almost 50k emissive spheres in real time. In a nutshell, we rendered the spheres as points onto a 2048 × 1024 px framebuffer, with the point size based on the distance of each sphere to the landscape. We then manually mipmapped the framebuffer and did a fixed-step 2D raymarch. We only trace one “ray” per fragment for the diffuse light, and we also reuse the accumulated weight as shadow/AO by playing with the smoothstep domains.
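As a rough illustration of the accumulation step (reduced here to one dimension, with a made-up inverse-square attenuation; the real shader samples coarser mips as the ray advances):

```javascript
// Fixed-step march across a 1D slice of the emissive light buffer,
// accumulating light with distance attenuation. The accumulated weight can
// then be remapped through smoothstep to act as a cheap shadow/AO term.
function marchLight(buffer, start, dir, steps) {
  let light = 0, weight = 0;
  for (let s = 1; s <= steps; s++) {
    const i = Math.round(start + dir * s);
    if (i < 0 || i >= buffer.length) break;
    const falloff = 1 / (s * s); // illustrative per-step attenuation
    light += buffer[i] * falloff;
    weight += falloff;
  }
  return { light, weight };
}

function smoothstep(a, b, x) {
  const t = Math.min(1, Math.max(0, (x - a) / (b - a)));
  return t * t * (3 - 2 * t);
}
```

Doing one ray per fragment and reading from mip levels keeps the per-pixel cost constant regardless of how many spheres are emitting.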

In order to make this lighting system work, we had to snap the spheres onto a 2048 × 1024 grid in Houdini FX and remove any sphere sitting right next to another. It is a very good example of having the coders work within the 3D modelling pipeline: with the traditional pipeline, where a 3D modeller hands the assets down to the coders who build the experience, this kind of visual effect would never happen. It is an area we always love to explore at Lusion.

Better Fog

Other than the generic distance fog provided by three.js, we also applied the height fog technique by Inigo Quilez, which makes the scene look far more realistic at a low rendering cost.
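Quilez’s height fog integrates an exponentially decaying density analytically along the view ray. A sketch of that formula (with a small guard against near-horizontal rays added here; a controls overall fog amount, b the height falloff):

```javascript
// Analytic height fog after Inigo Quilez's "better fog" articles: fog density
// a * exp(-y * b) integrated along a ray of length dist starting at height
// rayOriginY with vertical direction component rayDirY.
function heightFog(dist, rayOriginY, rayDirY, a, b) {
  // Guard: the closed form divides by rayDirY, so clamp it away from zero.
  const denom = Math.abs(rayDirY) < 1e-6 ? 1e-6 * Math.sign(rayDirY || 1) : rayDirY;
  const fog = (a / b) * Math.exp(-rayOriginY * b) *
              (1 - Math.exp(-dist * denom * b)) / denom;
  return Math.min(1, Math.max(0, fog)); // use as the mix factor to fog colour
}
```

Because the integral has a closed form, this costs only a couple of exponentials per fragment, far cheaper than marching the fog.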

Opera Drapes

For the menu action, we added a little cloth effect to match our opera theme, which made our client at Opera North happy :)

Cloth Simulation - The same technique we used for the cloth simulation on our own website. We cross-blend the looping state and the opening state through a vertex animation texture baked from a simple Houdini cloth simulation. Since the experience requires WebAudio support and IE11 doesn’t support it, we can safely ignore the premultiplied alpha bug in IE11 and use the alpha channel to store information as well, so the position and normal of one vertex fit in no more than 2 pixels of data.
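Ignoring the texture encoding details, the cross-blend itself is a per-vertex lerp between the two baked states. A minimal sketch, with hypothetical flat position arrays standing in for the texture reads:

```javascript
// Hypothetical cross-blend of two vertex-animation-texture states: each
// vertex position is lerped from the "looping" state toward the "opening"
// state by the menu's open amount (0 = looping, 1 = fully open).
function blendVat(loopPositions, openPositions, openAmount) {
  return loopPositions.map((p, i) =>
    p + (openPositions[i] - p) * openAmount);
}
```

In the real shader this lerp happens per vertex in the vertex stage, with both states fetched from the baked texture.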

Cloth Rendering - We used a similar mipmapping technique: we manually mipmap the fullscreen output and offset the UV sampling based on the view normal to obtain that blurry cloth look and feel.


It was an incredible experience for us to collaborate with such talented partners as sound artist James Bulley, Opera North and the Audiolab of York University to deliver this immersive experience. And none of it would have happened without the support of XR Stories.

Hope you all enjoyed this case study. Stay tuned for our next production!

Opera North
Visual Design and Development, Web Design and Development, 3D Asset Compositing
FWA Site of the Day
