Dark Asset: A Unity case study
For VFX artist Setareh Samandari, being able to express ideas quickly is essential to applying her creativity. She wants to show directors their visions and let them iterate and make changes in real time – either on set or remotely – to come up with the richest possible shots and sequences.
Producing top-quality VFX and post-processing quickly with a small staff
Los Angeles, USA
Transforming production timelines
When technologies evolve quickly, it’s easy to forget just how much work went into “how it used to be.” Lighting environments, rendering full scenes, and compositing complex shots took considerable time and could involve hundreds of artists.
Today, VFX artists can use real-time tools to achieve far more impressive effects with far less work. “Unity gives me real-time planning for previs as well as lighting and tracking,” says Samandari. “And I can render final pixel shots for whole environments unbelievably quickly.”
Previewing with the Virtual Camera
Unity Virtual Camera, part of the Live Capture package, pairs a companion iOS app with the Unity Editor so that an iPhone or iPad can be tracked as a handheld camera to perform and record shots. For example, the app can use an iPhone’s LiDAR sensor to precisely map out a greenscreen set and the actors on it. This adds extraordinary depth and speed to previs work.
“In Dark Asset, we first used the Virtual Camera for previs of virtual sets, then for tracking the physical cameras on the real sets. We could preview the virtual environment while walking around the greenscreen set,” Samandari explains. The team then used the recorded camera movement of the scene to match a virtual environment to the shot and render final pixels directly from Unity.
In one example, a character sits down in a government office while the camera slowly moves along the desks and swivels around the corner to find her. Only the actress, her desk, and her chair were real; the rest was all virtual, rendered in Unity in real time on set, then composited live to confirm the camera was moving along the right path. Later, the environment was rendered in Unity and comped with the greenscreen footage in NUKE.
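At the heart of that comp step – whether done live on set or later in NUKE – is the standard “over” operation, which layers a CG render with an alpha channel onto the keyed greenscreen plate. A minimal sketch of the math (illustrative only; the names here are not from any specific compositing package):

```python
# Minimal sketch of the "over" compositing operation used when a
# CG render (with alpha) is layered onto keyed greenscreen footage.
# Pixel values are linear-light floats.

def comp_over(fg_rgb, fg_alpha, bg_rgb):
    """Composite a foreground pixel over a background pixel.

    fg_rgb is assumed premultiplied by fg_alpha, the usual
    convention for CG renders written to EXR.
    """
    return tuple(f + b * (1.0 - fg_alpha) for f, b in zip(fg_rgb, bg_rgb))

# A fully opaque foreground completely replaces the background:
print(comp_over((1.0, 0.5, 0.25), 1.0, (0.0, 1.0, 0.0)))  # (1.0, 0.5, 0.25)
```

Running this per pixel across the whole frame, in a GPU shader rather than Python, is what lets the comp happen at camera frame rates on set.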
Scouting the set
In practice, this meant mounting an iPhone 12 Pro on their ARRI Alexa to track it. The iPhone connected over Wi-Fi to a desktop instance of the Unity Editor running the Live Capture package. The team scouted the virtual location for good camera positions on the set, taking snapshots to save as thumbnails.
After picking the shot angles they wanted, they recorded takes by moving an iPad with the same Unity Virtual Camera app within the virtual scene. “We reviewed the takes and selected the best ones for the shot and then the DP would perform that camera move with the ARRI on the physical set. I think what was really amazing was the degree of interaction, how much we could all get involved in making the best choices,” says Samandari.
Adding to the shoot
They were also able to add new content. One greenscreen setup called for glass breaking over the actors – but the practical glass simply wouldn’t break on the day of the shoot. Using the Virtual Camera tracking data, they placed virtual glass in precisely the right location to superimpose it over the actors.
They smashed the glass this way, without having to worry about flying shards. After lighting and rendering the additional content, they ended up with a seamless shot.
Fine-tuning lighting with Probe Volumes
Baking lighting data into automatically placed Probe Volumes is one of Unity’s newer features for finessing the look of detailed surfaces. Samandari says, “I added Probe Volume lighting as a lighting node to speed up baking.”
For general lighting, it interpolates lighting per pixel instead of per object between the baked probes. “In Unity post-processing, I also combined it with the HDRP real-time ray-traced global illumination, depth of field, area lights, and ray-traced reflections. I think the shots turned out just awesome.”
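The per-pixel part is what makes the result smooth: each shaded point blends the baked probes around it, instead of every point on an object sharing one blended value. A simplified 1D sketch of that idea (illustrative only – not Unity’s actual implementation, which interpolates in 3D with spherical harmonics):

```python
# Illustrative sketch of per-pixel interpolation between baked probes.
# Real engines interpolate trilinearly in 3D; a 1D line of probes
# shows the principle.

def lerp(a, b, t):
    """Linear interpolation between a and b by fraction t."""
    return a + (b - a) * t

def sample_probes_1d(probes, x):
    """Interpolate baked irradiance between probes on a 1D grid.

    `probes` maps integer grid positions to a scalar irradiance;
    `x` is the shaded point's position along the line.
    """
    i = int(x)      # probe cell containing x
    t = x - i       # fractional position inside the cell
    return lerp(probes[i], probes[i + 1], t)

# Two probes, one dim and one bright; a point halfway between them
# receives a smoothly blended value:
probes = {0: 0.25, 1: 1.0}
print(sample_probes_1d(probes, 0.5))  # 0.625
```

Evaluating this per pixel means lighting varies continuously across a large surface, where a per-object lookup would give the whole mesh a single flat result.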
Technically, a Probe Volume is a group of Light Probes that Unity places automatically based on the geometry density in a scene. This enables baked indirect lighting without the time spent manually placing and configuring Light Probes. Instead of using lightmaps, you bake the lighting into Probe Volumes: give a volume the area you want it to cover, and it takes care of the global illumination bounces and probe density. She added, “We could bake in four seconds, and there’s no distinction between static and dynamic objects.”
Post-processing fine points with HDRP
Unity’s High Definition Render Pipeline (HDRP) let Samandari not only get realistic lighting in real time for enhanced previs, but also render final images for compositing into the movie. She often used HDRI images in a linear color space with alpha channels to prepare her EXRs.
“While I work in post-processing, I use the ACES color space for tone mapping,” she explains. “But when you’re rendering in linear space, you have to turn it off when exporting EXRs.”
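The distinction matters because tone mapping compresses a scene’s full dynamic range into a 0–1 display range, which would destroy the HDR data a compositor needs. A sketch of the difference, using the widely cited Narkowicz ACES filmic fit as a stand-in for the renderer’s tonemapper (an approximation, not Unity’s exact curve):

```python
# Sketch of the export distinction: display images get tone mapping,
# while EXRs keep the raw linear values. Uses the Narkowicz ACES
# filmic approximation as a stand-in tonemapper.

def aces_filmic(x):
    """Map a linear-light value to a 0-1 display value."""
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    y = (x * (a * x + b)) / (x * (c * x + d) + e)
    return min(max(y, 0.0), 1.0)

linear_pixel = 4.0  # a bright HDR value straight from the render

display_value = aces_filmic(linear_pixel)  # tone-mapped for viewing
exr_value = linear_pixel                   # written to EXR untouched

print(display_value < 1.0)  # True - highlight compressed for display
print(exr_value)            # 4.0 - full dynamic range preserved
```

Keeping the EXR linear leaves exposure, grading, and tone mapping as decisions for the comp, rather than baking them into the render.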
Building apps faster than greenscreening
If a shot includes someone working on a tablet or reading a smartphone screen, the screen itself is typically greenscreened so content can be dropped in during post. However, that can entail placing corner tracking markers, tracking hundreds of shots, replacing each screen, and rotoscoping any elements that pass in front of it.
In Dark Asset, the VFX artists simply used Unity to actually build an appropriate app. According to Samandari, “It worked on-camera and they could update it on the fly. I think it really saved them a lot of time.”
Lighting holographic characters
One Dark Asset scene required developing a holographic display of the characters. Samandari says, “I took the greenscreen footage into After Effects to get the first part of the look, then I used Unity HDRP volumetric fog in post-processing to light the beams for the holograms.”
To place the footage in the scene, she assigned it to a polygon and used the render texture to locate the characters there. “This way, they fit the lighting of the scene and also interacted with the set lighting by casting shadows onto the virtual set and volumetric fog.” She was then able to create a new camera move using the iPad with the Unity Virtual Camera in this virtual set.
Going real-time sooner
Traditionally, compositing and rendering these shots in post were long, protracted processes that required entire teams and huge CPU/GPU resources. Going real-time also dramatically reduced the workforce required: on this project, a small team of three achieved high-quality VFX that would usually take as many as 25 people on a similarly sized production.
“The industry is going real-time, sooner than many think, and there’s no reason to wait,” Samandari adds. “Unity is simply amazing – I love it.”