Updated April 2020. 5 min. read

Best practices for VR and mobile AR graphics in Unity

What you will get from this page: Unity tools and workflows for creating Virtual Reality (VR) and Augmented Reality (AR) graphics evolve continuously. Dan Miller, an XR evangelist, provides you with updated best practices on how to craft performant graphics for high-end VR and mobile AR.

Three modes of Stereo Rendering in Unity

Multi-pass goes through the scene graph twice, rendering once for each eye. Use multi-pass stereo rendering when you are creating photorealistic visuals targeting high-end devices where resources are plentiful. This method is the most expensive of the three, but it's usually the one we support first because it's the most straightforward to get working on new displays and technology.

Single-pass packs the two eye textures into one large texture (referred to as a double-wide texture). It goes through the scene graph only once, so it's faster on the CPU; however, it demands more of the GPU because of the many extra state changes.

Single-pass instancing is the optimal method. It uses a single texture array containing two textures that store the rendering for each eye. It involves the same amount of GPU work as single-pass, but fewer draw calls and less work for the CPU, so it’s significantly more performant. 

If you have rendering issues in VR on certain platforms, and/or with one of the scriptable render pipelines, then toggle between these three settings to see if it helps.
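If you want to automate that toggle, the legacy built-in XR integration exposes the setting through an Editor-only API. The sketch below assumes that integration and a script placed in an Editor folder; with the newer XR Plug-in Management packages, the equivalent option lives in each provider's settings asset instead.

    // Editor-only sketch for the built-in XR integration: place this in an Editor folder.
    using UnityEditor;
    using UnityEngine;

    public static class StereoRenderingModeMenu
    {
        [MenuItem("XR/Stereo Rendering/Multi-pass")]
        static void UseMultiPass() => PlayerSettings.stereoRenderingPath = StereoRenderingPath.MultiPass;

        [MenuItem("XR/Stereo Rendering/Single-pass")]
        static void UseSinglePass() => PlayerSettings.stereoRenderingPath = StereoRenderingPath.SinglePass;

        [MenuItem("XR/Stereo Rendering/Single-pass instancing")]
        static void UseInstancing() => PlayerSettings.stereoRenderingPath = StereoRenderingPath.Instancing;
    }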

Changes to vertex shader code to use stereo rendering

These changes enable any custom shader that you write to work correctly with single-pass (double-wide) and single-pass instanced stereo rendering.

The code example below shows a typical vertex shader. The first step is to add the instance ID macro at the top, in the appdata struct. Next, add the stereo output macro to the output struct. Finally, in the vertex function, set up the instance ID that allows it to link in, and initialize the stereo output. You can find this example and others in the Unity shader docs.
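A sketch of those changes, following the template names used in Unity's shader documentation (appdata, v2f, vert — adjust to your own shader's names):

    struct appdata
    {
        float4 vertex : POSITION;
        float2 uv : TEXCOORD0;
        UNITY_VERTEX_INPUT_INSTANCE_ID      // step 1: instance ID in the app data
    };

    struct v2f
    {
        float2 uv : TEXCOORD0;
        float4 vertex : SV_POSITION;
        UNITY_VERTEX_OUTPUT_STEREO          // step 2: stereo output in the output struct
    };

    v2f vert (appdata v)
    {
        v2f o;
        UNITY_SETUP_INSTANCE_ID(v);                 // step 3: link in the instance ID...
        UNITY_INITIALIZE_OUTPUT(v2f, o);
        UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);   // ...and initialize the stereo output
        o.vertex = UnityObjectToClipPos(v.vertex);
        o.uv = v.uv;
        return o;
    }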

EyeTextureResolutionScale vs RenderViewportScale

EyeTextureResolutionScale (formerly RenderScale) controls the actual size of the eye textures as a multiplier of the device's default resolution. A value of 1.0 uses the default eye texture resolution specified by your target XR device, a value below 1.0 uses lower-resolution eye textures, and a value greater than 1.0 yields higher resolutions.

EyeTextureResolutionScale allows you to either increase or decrease the resolution of your entire VR experience. You might use this if you want to port a high-end VR experience to Oculus Quest or a mobile platform, or in the reverse case, port a mobile VR project to a high-end device. 

RenderViewportScale is dynamic and adjusted at runtime. It controls how much of the allocated eye texture should be used for rendering. The value ranges from 0.0 to 1.0. You can use RenderViewportScale to decrease the resolution at runtime, for example, if you have a high number of special effects in a scene but want to maintain an acceptable framerate. 

To sum up, EyeTextureResolutionScale is a value you set once for the entire VR experience, while RenderViewportScale can be adjusted at runtime at specific moments during your experience.
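In script, both values live on UnityEngine.XR.XRSettings. The component below is only an illustration of the pattern; the scale values and the "heavy effects" hooks are assumptions you would wire up to your own performance monitoring.

    using UnityEngine;
    using UnityEngine.XR;

    public class XRResolutionController : MonoBehaviour
    {
        void Start()
        {
            // Set once for the whole experience: render the eye textures at 80%
            // of the device's default resolution (values above 1.0 supersample).
            XRSettings.eyeTextureResolutionScale = 0.8f;
        }

        // Illustrative hooks you might call when an effect-heavy moment starts and ends.
        public void OnHeavyEffectsStarted()
        {
            // Only part of the already-allocated eye texture is rendered to,
            // so this is cheap to change at runtime (valid range 0.0-1.0).
            XRSettings.renderViewportScale = 0.7f;
        }

        public void OnHeavyEffectsEnded()
        {
            XRSettings.renderViewportScale = 1.0f;
        }
    }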


Scriptable Render Pipelines for XR

The High-Definition Render Pipeline for VR

In 2019.3 with package version 7.2.0, the High-Definition Render Pipeline (HDRP) is now verified and can be used in VR. Please read this blog post to see which features are supported when using the HDRP for VR projects.

Universal Render Pipeline for VR

Universal Render Pipeline is supported on all VR platforms. However, there are some limitations on the mobile side. Always check for updates in the URP forum, the docs, and this recent blog post that outlines the capabilities as of 2019.3. 

Post-processing

Some post-processing effects work well in XR, while others aren't optimal for it. Temporal effects, for example, can cause blurriness, and depth of field, which simulates a property of a physical camera, can cause nausea. Get Dan's tips on how to apply post-processing effects correctly to your VR content.

If you apply post-processing effects in mobile AR, keep in mind that the effects are applied to the entire camera feed. For example, a bloom effect could backfire if the user goes outside and looks up at the sun, or looks directly at a light indoors.


AR Foundation

AR Foundation is Unity’s cross-platform framework for building experiences for ARKit, ARCore, Magic Leap and HoloLens. You build your project once in Unity and AR Foundation uses the SDK for your chosen platform. You can find more information here:

AR Foundation docs

Getting started with AR development in Unity

XR Plugin Framework
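As a rough sketch of what working with AR Foundation looks like in practice, the script below subscribes to plane detection events from an ARPlaneManager; the logging is only an illustration of where your own logic would go.

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    // Assumes an ARPlaneManager on the same GameObject as part of your AR session setup.
    [RequireComponent(typeof(ARPlaneManager))]
    public class PlaneLogger : MonoBehaviour
    {
        ARPlaneManager planeManager;

        void OnEnable()
        {
            planeManager = GetComponent<ARPlaneManager>();
            planeManager.planesChanged += OnPlanesChanged;
        }

        void OnDisable()
        {
            planeManager.planesChanged -= OnPlanesChanged;
        }

        void OnPlanesChanged(ARPlanesChangedEventArgs args)
        {
            foreach (var plane in args.added)
                Debug.Log($"Detected plane {plane.trackableId}, size {plane.size}");
        }
    }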

Here are three general points to keep in mind while you develop AR projects: 

  1. In mobile AR apps, rendering happens twice: the camera feed gets rendered and then the digital content is rendered on top of it. 
  2. In the physical world, almost everything either casts a shadow or has a shadow on it. Shadows are important in AR content because they “ground” the objects and help the user understand where they sit in depth. But occlusion and shadows are not enabled by default and require special shaders. 
  3. When you develop an AR project, keep in mind that people will use it in a variety of environments and locations. 

The next level in AR authoring: MARS

AR Foundation is the preferred method for bringing real-world understanding into the new AR authoring solution, Unity MARS. Unity MARS brings environment and sensor data into the creative workflow, so you can build intelligent AR apps that are context-aware and responsive to physical space, and can work in any location with any type of data. 

Unity MARS provides you with a UI specifically designed for AR, enabling you to:

  • Author complex, data-oriented apps visually 
  • Test experiences without leaving the Unity Editor
  • Deliver apps with runtime logic that adapt responsively to the real world and work across platforms

Learn more about Unity MARS from this blog post. You can also sign up for a free 45-day trial.  

Unity AR graphics on additive displays

“Reverse” lighting for head-mounted AR

Current head-mounted AR platforms use additive displays, meaning that light is added to the display and any black or other dark colors are rendered as transparent.

In the image above, the sphere on the far left uses the normal shading model you would expect with a light shining directly down on it: bright on top, increasingly shaded toward the bottom. Because physically based rendering and self-shadowing create dark areas, the bottom half of that sphere appears transparent on an additive display. You can solve this with unique platform features such as eye tracking, or with the shading techniques available for Magic Leap, which is what you see in the middle and right spheres: instead of letting the object go from light to dark, keep the same shading across the whole sphere and then brighten the top.

Reverse shadows – Andreas Jakl

Reverse and colored shadows

Dan presented this tip, which comes from Andreas Jakl, a lecturer for Digital Healthcare and Smart Engineering at St. Pölten University of Applied Sciences in Austria. The technique involves making an outline of where the shadow would be and then feathering that outline from solid white out to full transparency. You can see what it looks like in Unity in the image above, which is by Andreas Jakl. You apply a reverse shadow because black renders as transparency, and it grounds the content in a convincing way on an additive display.

Another technique to try is colored shadows: use colors other than black or gray, since those just render as transparent. This works in certain types of content, for example, purple shadows on objects in an alien-themed game or experience.

Occlusion in AR

You can find a special mobile occlusion shader here. It can be applied to planes or to certain meshes found in your environment. You can also use it with feature points for quicker occlusion. A feature point is a specific point in a point cloud: AR devices use a camera and image analysis to track distinctive points in the world, which are used to build a map of the environment. These are usually high-frequency details, such as a knot in a wood-grain surface. You can attach small objects to those feature points to act as occluders for your content.
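If you'd rather write your own, a depth-only occluder can be as simple as the hedged sketch below (not necessarily the shader linked above): it renders just before opaque geometry and writes depth but no color, so the camera feed stays visible while virtual content behind the mesh is hidden.

    Shader "Custom/AROcclusion"   // hypothetical name
    {
        SubShader
        {
            // Render before regular opaque geometry so occluded pixels fail the depth test.
            Tags { "Queue" = "Geometry-1" }
            ColorMask 0   // write no color: the camera feed stays visible
            ZWrite On     // but do write depth, hiding virtual content behind the mesh

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                float4 vert (float4 vertex : POSITION) : SV_POSITION
                {
                    return UnityObjectToClipPos(vertex);
                }

                fixed4 frag () : SV_Target
                {
                    return fixed4(0, 0, 0, 0);
                }
                ENDCG
            }
        }
    }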

Another technique available is the AR Foundation Human Body Subsystem available for ARKit 3.0. This subsystem provides apps with human stencil and depth segmentation images. The stencil segmentation image identifies, for each pixel, whether the pixel contains a person. The depth segmentation image consists of an estimated distance from the device for each pixel that correlates to a recognized human. Using these segmentation images together allows for rendered 3D content to be realistically occluded by real-world humans.
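In AR Foundation these images are exposed by the AROcclusionManager. The sketch below simply forwards them to a material; the property names _HumanStencil and _HumanDepth are hypothetical and belong to whatever occlusion shader you write.

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    public class HumanOcclusionFeeder : MonoBehaviour
    {
        [SerializeField] AROcclusionManager occlusionManager; // on the AR camera
        [SerializeField] Material occlusionMaterial;          // your own occlusion shader

        void Update()
        {
            // Stencil: per pixel, does this pixel contain a person?
            var stencil = occlusionManager.humanStencilTexture;
            if (stencil != null)
                occlusionMaterial.SetTexture("_HumanStencil", stencil);

            // Depth: estimated distance from the device for each recognized human pixel.
            var depth = occlusionManager.humanDepthTexture;
            if (depth != null)
                occlusionMaterial.SetTexture("_HumanDepth", depth);
        }
    }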

Shadows

If you're an experienced Unity developer, you know there are a variety of ways to configure shadows, such as on the objects themselves or on the light source, with plenty of quality settings to adjust. Just keep in mind that you might have to tweak these settings if you develop on a desktop but deploy to mobile devices.

For static objects, use blob shadows: you can author a shape that matches the object or just use a generic soft circle beneath it to ground it in the experience.
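A minimal blob-shadow sketch: keep a soft, dark quad directly under the object. The quad, its material, and the ground layer mask are assumptions you would set up in your own project.

    using UnityEngine;

    public class BlobShadow : MonoBehaviour
    {
        [SerializeField] Transform shadowQuad;        // a quad with a soft radial-gradient texture
        [SerializeField] LayerMask groundLayers;      // e.g., your AR plane colliders
        [SerializeField] float surfaceOffset = 0.01f; // lift slightly to avoid z-fighting

        void LateUpdate()
        {
            // Drop the blob onto whatever surface is directly below the object.
            if (Physics.Raycast(transform.position, Vector3.down, out RaycastHit hit, 10f, groundLayers))
            {
                shadowQuad.position = hit.point + hit.normal * surfaceOffset;
                // Unity's built-in quad faces down its -Z axis, so point +Z into the surface.
                shadowQuad.rotation = Quaternion.LookRotation(-hit.normal);
            }
        }
    }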

XR Environment Probe Subsystem

Environmental Probes for ARKit

These probes generate environment textures from camera imagery during an AR session, picking up color data associated with planes and feature points during the scan. From there, a cubemap is built up and applied to a reflection probe, so the physical world is mirrored on your digital content. This feature works best in an environment with clearly defined planes and feature points, such as an indoor space with furniture.
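In Unity this maps to AR Foundation's environment probe support: adding an AREnvironmentProbeManager to your AR session setup, in the Inspector or, as sketched below, from script, creates reflection probes driven by the scanned surroundings.

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    public class EnableEnvironmentProbes : MonoBehaviour
    {
        void Start()
        {
            // Generated probes appear as regular ReflectionProbes fed by AR data;
            // automatic vs. manual placement is configured on the manager itself.
            if (GetComponent<AREnvironmentProbeManager>() == null)
                gameObject.AddComponent<AREnvironmentProbeManager>();
        }
    }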

Rocket outdoors

Environmental HDR on ARCore

This feature estimates the rotation of the real-world main light, so a real-time directional light in your scene can match the direction of the shadows you're seeing in the real world. This works well for making digital content feel like it's part of the real world. The feature also provides Ambient Spherical Harmonics, which help model ambient illumination from all directions, matching your digital content more closely to the real world. Finally, an HDR cubemap provides specular highlights and reflections.
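Through AR Foundation, these estimates arrive with each camera frame. The sketch below applies them to a directional light and to the ambient probe; it assumes an ARCameraManager on the AR camera with HDR light estimation enabled.

    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.XR.ARFoundation;

    public class EnvironmentalLightSync : MonoBehaviour
    {
        [SerializeField] ARCameraManager cameraManager; // on the AR camera
        [SerializeField] Light directionalLight;

        void OnEnable()  { cameraManager.frameReceived += OnFrameReceived; }
        void OnDisable() { cameraManager.frameReceived -= OnFrameReceived; }

        void OnFrameReceived(ARCameraFrameEventArgs args)
        {
            var estimate = args.lightEstimation;

            // Rotate the light to match the real main light so shadow directions line up.
            if (estimate.mainLightDirection.HasValue)
                directionalLight.transform.rotation = Quaternion.LookRotation(estimate.mainLightDirection.Value);

            if (estimate.mainLightColor.HasValue)
                directionalLight.color = estimate.mainLightColor.Value;

            // Ambient Spherical Harmonics: ambient illumination from all directions.
            if (estimate.ambientSphericalHarmonics.HasValue)
            {
                RenderSettings.ambientMode = AmbientMode.Skybox;
                RenderSettings.ambientProbe = estimate.ambientSphericalHarmonics.Value;
            }
        }
    }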

To see these features in action, check out Dan’s demo about 25 minutes into his Unite Copenhagen talk.
