This is the second in a series of articles that explains how developers and technical artists can set up and use the High Definition Render Pipeline (HDRP) in Unity to achieve high-end graphical realism. HDRP represents a technological leap in Unity’s real-time rendering so you can work with light just as it behaves in the real world.
Be sure to read the other articles in our how-to series for high-end lighting:
The rendering path in Lit Shader Mode impacts how you can use anti-aliasing to remove jagged edges from your renders. HDRP offers several anti-aliasing techniques, depending on your production needs.
Multisample Anti-aliasing (MSAA) is a popular anti-aliasing method among PC gamers. This is a high-quality hardware method that smooths the edges of individual polygons, and only works with Forward rendering in Unity. Most modern GPUs support 2x, 4x, and 8x MSAA samples.
In your active Pipeline Asset, set the Lit Shader Mode to Forward Only. Next, select MSAA 2x, MSAA 4x, or MSAA 8x for the Multisample Anti-aliasing Quality. Higher values produce better anti-aliasing but are slower. The difference is most apparent when you zoom into the Camera view.
Limitations with MSAA
There are a couple of limitations with MSAA worth noting:
- MSAA is incompatible with Deferred shading, whose G-buffers store the scene’s geometry and material data in textures. Thus, Deferred shading requires one of the Post-processing Anti-aliasing techniques.
- Because MSAA only deals with polygon edge aliasing, it cannot prevent aliasing found on certain textures and materials hit by sharp specular lighting. You might need to combine MSAA with another Post-processing Anti-aliasing technique if that is an issue.
To apply anti-aliasing as a post-processing technique, use the Post Anti-aliasing Settings:
- Temporal Anti-aliasing (TAA) combines information from past and current frames to remove jaggies from the current frame. You must enable motion vectors for it to work. TAA generally produces great results, but it can create ghosting artifacts in some situations (e.g., a GameObject moving quickly in front of a contrasting surface). HDRP 10 introduced improvements to reduce typical TAA artifacts. Unity’s implementation reduces ghosting, improves sharpness, and prevents the flickering found in other solutions.
- Fast Approximate Anti-aliasing (FXAA) is a screen space anti-aliasing algorithm that blends pixels between regions of high contrast. It is a relatively fast technique that does not require extensive computing power, but can reduce the overall sharpness of the image.
- Subpixel Morphological Anti-aliasing (SMAA) detects borders in the image, then looks for specific patterns to blend. This produces sharper results than FXAA, and works well with flat, cartoon-like, or clean art styles.
Note: When combining Post-processing and Multisample Anti-aliasing, pay attention to the rendering cost. As always, optimize your project to balance visual quality with performance.
HDRP uses a Volume framework. This system allows you to split up your scene and enable certain settings or features based on Camera position. For example, the HDRP template level contains three distinct parts, each with its own lighting setup. As such, there are different Volumes encompassing each room.
A Volume is just a placeholder object with a Volume component. You can create one through the GameObject > Volume menu by selecting a preset. Otherwise, simply make a GameObject with the correct components manually.
Because Volume components can be added to any GameObject, it can be difficult to find them in the Hierarchy. The Light Explorer (Window > Rendering > Light Explorer > Volumes) can help you locate Volumes in loaded scenes. Use this interface to make quick adjustments.
Set the Volume component’s Mode to either Global or Local, depending on the relevant context.
Global Volume works as a “catchall” without any boundaries, and so affects all cameras in the scene. In the HDRP Sample Scene, the VolumeGlobal defines an overall baseline of HDRP Settings for the entire level.
Local Volume defines a limited space where its settings take effect. It uses a Collider component to determine its boundaries. Enable Is Trigger if you don’t want the Collider to impede the movement of any physics bodies like your FPS player controller.
In the Sample Scene (see image under Volumes section), each room has a Local Volume with a BoxCollider that overrides the Global Settings.
Room 2 has a small, spherical Volume for the bright center next to the glass case, and Room 3 has smaller Volumes at its entrance corridor and in the seated area below the pendant lights.
In the template, the Local Volumes override White Balance, Exposure, and/or Fog. Anything not explicitly overridden falls back to the Global Settings.
As your Camera moves around the scene, the Global Settings take effect until your player controller enters a Local Volume, whose settings then take over.
Performance tip: Don’t use a large number of Volumes. Evaluating each Volume (blending, spatialization, override computation, and so on) comes with some CPU cost.
A Volume component contains no actual data. Instead, it references a Volume Profile – a ScriptableObject Asset on disk that contains settings to render the scene.
Use the Profile field to create a new Volume Profile with the New or Clone buttons. You can also switch to another Profile you already have saved.
Using the Volume Profile as a file makes it easier to reuse previous settings and share Profiles between your Volumes.
Note that changes made to Volume Profiles in Play Mode are not lost when you exit Play Mode.
Each Volume Profile begins with a set of default properties. To edit their values, go to Volume Overrides and customize the individual settings. For example, use Volume Overrides to modify the Volume’s Fog, Post-processing, or Exposure.
Once you have your Volume Profile set, click Add Override to customize the Profile Settings. See the image for an example of what a Fog Override could look like.
Each of the Volume Override’s properties has a checkbox on the left; enable it to edit that property. Leaving the box disabled means HDRP uses the Volume’s default value. Volume objects can have several Overrides; edit as many properties as needed for each one. You can quickly check or uncheck all of them with the All or None shortcuts at the top-left.
Adding Overrides is a key workflow in HDRP. If you understand the concept of inheritance from programming, Volume Overrides will seem familiar to you.
Settings from higher-level Volumes act as defaults for lower-level Volumes. Here, the HDRP Default Settings pass down to the Global Volume, which in turn serves as the “base” for Local Volumes.
The Global Volume overrides the HDRP Default Settings and the Local Volume overrides the Global Volume. Use Priority, Weight, and Blend Distance (outlined in the following section) to resolve any conflicts caused by overlapping Volumes.
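If it helps to see the idea outside the Editor, this override chain can be modeled as a simple dictionary merge. This is a conceptual sketch only, not HDRP source or API; the property names are made up for illustration.

```python
# Conceptual model of Volume Override inheritance (not HDRP API):
# each layer overrides only the properties it explicitly enables.

def resolve(default, *volumes):
    """Apply each Volume's enabled overrides on top of the defaults.
    Later (higher-priority) volumes win for the properties they set."""
    result = dict(default)
    for overrides in volumes:
        result.update(overrides)  # enabled overrides replace inherited values
    return result

# Hypothetical property names, for illustration only
hdrp_default = {"exposure": 10.0, "fog_attenuation": 400.0}
global_volume = {"exposure": 12.0}        # overrides exposure only
local_volume = {"fog_attenuation": 50.0}  # overrides fog only

print(resolve(hdrp_default, global_volume, local_volume))
# exposure comes from the Global Volume, fog from the Local Volume
```

Anything a Volume leaves unchecked simply falls through to the layer above it, which is exactly the behavior described for the template scene.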
To debug the current values of a given Volume component, go to the Volume tab in the Rendering Debugger.
Find a complete Volume Overrides list in the HDRP documentation.
Blending and priority
Because you often need more than one Volume per level, HDRP allows you to blend Volumes. This makes transitions between them less abrupt.
At runtime, HDRP uses the Camera position to determine which Volumes affect the HDRP Settings.
Blend Distance determines how far outside the Volume’s Collider the Volume’s influence begins to fade in or out. A Blend Distance of 0 means an instant transition, whereas a positive value means the Volume Overrides blend in gradually once the Camera enters the specified range.
The Volume framework is flexible and allows you to mix and match Volumes and Overrides as you see fit. If more than one Volume overlaps the same space, HDRP relies on Priority to decide which Volume takes precedence. Higher values mean higher priority.
In general, set your Priority values explicitly to eliminate any guesswork. Otherwise, the system will use creation order as the Priority “tiebreaker,” which can lead to unexpected results.
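To make the Blend Distance behavior concrete, here is a minimal sketch of how a camera’s distance outside a Local Volume’s Collider could map to a blend weight. This is a conceptual model based on the description above, not HDRP’s actual implementation.

```python
# Conceptual sketch (not HDRP source): Blend Distance as a weight in [0, 1].

def blend_weight(distance_outside, blend_distance):
    """distance_outside: camera distance from the Collider surface
    (<= 0 means the camera is inside the Volume)."""
    if distance_outside <= 0.0:
        return 1.0                # fully inside: overrides apply at full strength
    if blend_distance == 0.0:
        return 0.0                # Blend Distance of 0: instant transition
    # fade linearly from full effect at the surface to none at blend_distance
    return max(0.0, 1.0 - distance_outside / blend_distance)

print(blend_weight(2.5, 5.0))     # halfway through the blend range
```

A weight of 1 means the Local Volume fully overrides the Global settings; 0 means the Global settings apply unchanged, with Priority breaking ties between overlapping Volumes.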
HDRP uses real-world lighting models to render each scene. As such, many properties are analogous to their counterparts in traditional photography.
Understanding Exposure Value
Exposure Value (EV) is a numeric value that represents a combination of a camera’s shutter speed and f-number (which determines the size of the lens opening, or aperture). You need to properly set exposure to reach ideal brightness and capture high levels of detail in both the shadows and highlights. Otherwise, overexposing or underexposing the image leads to less than desirable results.
The exposure range in HDRP typically falls somewhere along the spectrum above.
Greater Exposure Values let less light into the Camera, which is appropriate for more brightly lit situations. Here, an EV value between 13 and 16 is suitable for a sunny daytime exterior. In contrast, a dark, moonless, night sky might use an EV between -3 and 0.
You can vary a number of factors in an actual camera’s settings to modify your Exposure Value:
- The shutter speed: The amount of time that the image sensor is exposed to light
- The f-number: The size of the aperture or lens opening
- The ISO: The sensitivity of the film/sensor to light
Photographers call this the Exposure Triangle. In Unity, as with a real camera, you can arrive at the same Exposure Value using different combinations of these numbers. HDRP expresses Exposure Value in EV100, which fixes the sensitivity to that of ISO 100 film (ISO refers to the International Organization for Standardization).
Exposure Value formula
For ISO 100, Exposure Value is calculated as EV₁₀₀ = log₂(N² / t), where N is the f-number and t is the shutter speed in seconds.
Note that it’s on a logarithmic base-2 scale: each time the Exposure Value increases by one unit, the amount of light entering the lens is halved.
HDRP allows you to match the exposure of a real image. Simply shoot a digital photo with a camera or smartphone. Grab the metadata from the image to identify the f-number, shutter speed, and ISO.
Use the formula to calculate the Exposure Value. If you use the same value in the Exposure Override (see following section), the rendered image should fall in line with the real-world exposure.
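As a quick sketch, the calculation from a photo’s Exif values looks like this. The formula is the standard EV100 definition; the function name is our own.

```python
import math

# EV100 from Exif data: f-number N, shutter speed t (seconds), and ISO S.
# EV at the stated ISO is log2(N^2 / t); subtracting log2(S / 100)
# normalizes it to ISO 100, as HDRP expects.
def ev100(f_number, shutter_seconds, iso=100):
    return math.log2(f_number ** 2 / shutter_seconds) - math.log2(iso / 100)

# "Sunny 16" rule: f/16 at 1/100 s on ISO 100 film
print(round(ev100(16, 1 / 100, 100), 1))  # ≈ 14.6, a sunny exterior
```

Note how a faster film compensates: the same aperture and shutter speed at ISO 400 yields an EV100 two stops lower, matching the base-2 behavior described above.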
You can use digital photos as references when lighting your level. While the goal isn’t necessarily to recreate an image perfectly, matching an actual photograph can take the guesswork out of your lighting setups.
In HDRP, Exposure is a Volume Override. Add it to a Local or Global Volume to see the available properties.
In the Mode drop-down, select one of the following: Fixed, Automatic, Automatic Histogram, Curve Mapping, or Physical Camera.
Compensation allows you to shift or adjust the Exposure. You can use this to apply minor adjustments and “stop” the rendered image up and down slightly.
Fixed Mode lets you set the Exposure Value manually. Follow the graduation marks on the Fixed Exposure slider for hints. While the icon to the right has a drop-down of presets (e.g., 13 for Sunlit Scene down to -2.5 for Moonless Scene), you can set the field directly to any value.
Keep in mind that Fixed Mode is simple but not very flexible. It only tends to work if you have a Volume or scene with relatively uniform lighting, where one Exposure Value can work throughout.
Automatic Mode dynamically sets the exposure depending on the range of brightness levels onscreen. This functions much like the human eye adapting to varying levels of darkness, redefining what is perceived as black.
While Automatic Mode works under many lighting conditions, it can unintentionally overexpose or underexpose the image when the Camera points at a very dark or very bright part of the scene. Use Limit Min and Limit Max to keep the exposure level within a desirable range. Playtest to verify that the exposure stays within your expected range throughout the level. Then use Metering Mode, combined with the Mask options, to indicate which parts of the frame autoexposure should be based on.
Autoexposure changes as the Camera transitions between darkness and light, with options to adjust the Speed. Just like with the eye, moving the Camera from a very dark to a very light area, or vice versa, can be briefly disorienting.
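The adaptation described above can be modeled as the current exposure easing toward the scene’s target EV at a configurable speed, clamped to Limit Min and Limit Max. This is a rough conceptual sketch, not HDRP’s actual adaptation code.

```python
# Conceptual sketch (not HDRP's implementation) of eye-like exposure adaptation.

def adapt(current_ev, target_ev, speed, dt, limit_min, limit_max):
    """Ease current_ev toward target_ev over one frame of duration dt.
    Larger speed values adapt faster; limits clamp the target range."""
    target_ev = min(max(target_ev, limit_min), limit_max)
    return current_ev + (target_ev - current_ev) * min(1.0, speed * dt)

# Stepping from a dark interior (EV 3) toward bright daylight (EV 14)
ev = 3.0
for _ in range(3):
    ev = adapt(ev, 14.0, 2.0, 1 / 60, -3.0, 16.0)  # 60 fps frames
```

The brief disorientation when moving between dark and bright areas corresponds to the frames it takes this easing to converge; raising Speed shortens that transition.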
Metering Mode options
Automatic, Automatic Histogram, and Curve Mapping use Metering Mode to determine which part of the frame to use when calculating exposure. You can set the Metering Mode to:
- Average: The Camera uses the entire frame to measure exposure.
- Spot: The Camera only uses the center of the screen to measure exposure.
- Center Weighted: The Camera favors pixels in the center of the image and feathers out toward the edges of the frame.
- Mask Weighted: A supplied image (Weight Texture Mask) indicates which pixels are most important for controlling exposure.
- Procedural Mask: The Camera evaluates exposure based on a procedurally generated texture. You can change options for the center, radius, and softness.
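As a sketch of how a procedural mask’s center, radius, and softness parameters could combine, the weight of each pixel might fall off like this. This is a conceptual model, not HDRP’s shader code.

```python
# Conceptual sketch (not HDRP source): a procedural metering mask where pixels
# inside the radius get full weight, fading to zero across a soft band.

def metering_weight(px, py, center, radius, softness):
    dx, dy = px - center[0], py - center[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= radius:
        return 1.0                               # fully inside the mask
    if dist >= radius + softness:
        return 0.0                               # outside the soft edge
    return 1.0 - (dist - radius) / softness      # linear falloff in between

print(metering_weight(0.5, 0.5, (0.5, 0.5), 0.2, 0.1))  # center pixel: 1.0
```

Center Weighted metering behaves similarly in spirit, favoring central pixels and feathering toward the frame edges.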
Automatic Histogram Mode takes Automatic Mode a step further: it computes a histogram of the image and ignores the darkest and lightest pixels when setting exposure.
By rejecting very dark or very bright pixels from the exposure calculation, exposure remains more stable when extreme pixel values appear in the frame. This way, intense emissive surfaces or black materials won’t underexpose or overexpose your rendered output as severely.
Use the Histogram Percentage Settings under Automatic Histogram Mode to discard anything in the histogram outside the given range of percentages (imagine clipping the brightest and darkest pixels from the histogram’s leftmost and rightmost parts). Then use Curve Remapping to remap the exposure curve (see more in the following section).
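The percentile clipping idea can be sketched in a few lines: discard the tails of the brightness distribution, then average what remains. This is a simplified conceptual model, not HDRP’s histogram implementation.

```python
# Conceptual sketch (not HDRP source): percentile-clipped average luminance.
# Extreme emissives or pure blacks fall outside the kept range, so they
# barely influence the exposure estimate.

def clipped_mean(luminances, low_pct, high_pct):
    values = sorted(luminances)
    n = len(values)
    lo = int(n * low_pct / 100)   # drop the darkest low_pct percent
    hi = int(n * high_pct / 100)  # drop everything above high_pct percent
    kept = values[lo:hi]
    return sum(kept) / len(kept)

# One extremely bright pixel barely moves the clipped mean
print(clipped_mean([0.2] * 98 + [0.21, 500.0], 10, 90))
```

A plain average of the same pixels would be dominated by the single 500.0 outlier, which is exactly the instability the histogram mode avoids.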
Curve Mapping is another variant of Automatic Mode.
In Curve Mapping Mode, the x-axis of the curve represents the current exposure, and the y-axis represents the target exposure. Remapping the exposure curve can refine precision.
Those familiar with photography might find Physical Camera Mode helpful for setting Camera parameters.
Switch the Exposure Override Mode to Physical Camera, then locate the Main Camera. From there, you can enable Physical Camera. See the image for properties shown in the Inspector.
Important to exposure are the ISO (sensitivity), Aperture (or f-number), and Shutter Speed, located under Physical Camera. If you’re matching reference photos, copy the correct settings from the image’s Exif data. Otherwise, this table can help you guesstimate Exposure Value based on f-number and shutter speed.
Additional Physical Camera parameters
Though not related to exposure, other Physical Camera properties can help you match the attributes of real-world cameras.
For example, we normally use the field of view in Unity (and many other 3D applications) to determine how much of the world a camera can see at once. In real cameras, however, the field of view depends on the size of the sensor and focal length of the lens.
Rather than setting the field of view directly, the Physical Camera Settings allow you to fill in the Sensor Type, Sensor Size, and Focal Length from the actual camera data. Unity will then automatically calculate the corresponding field of view value.
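The relationship Unity computes here is the standard pinhole-camera one: the vertical field of view follows from the sensor height and focal length. A quick sketch (the function name is ours):

```python
import math

# Vertical field of view (degrees) from sensor height and focal length,
# both in millimeters: fov = 2 * atan(sensor / (2 * focal_length)).
def field_of_view(sensor_size_mm, focal_length_mm):
    return math.degrees(2 * math.atan(sensor_size_mm / (2 * focal_length_mm)))

# Full-frame sensor (24 mm tall) with a 50 mm lens
print(round(field_of_view(24, 50), 1))  # ≈ 27.0 degrees
```

This is why a longer lens or a smaller sensor both narrow the field of view: the ratio inside the arctangent shrinks either way.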
Rely on the camera metadata included with the image files when trying to match a real photo reference. Both Windows and macOS can read the Exif data from digital images. You can then copy the corresponding fields to your virtual camera.
Note that you might need to search for the exact sensor dimensions on the manufacturer’s website once you derive the camera make and model from the metadata. This article includes an estimate of common image sensor formats. Several of the bottom parameters influence the Depth of Field Volume.
In Unity 2021 LTS, you can control the Focus Distance from the Camera’s Inspector. In the Depth of Field Volume component, set the Focus Mode and the Focus Distance Mode to Physical Camera.
Use Blade Count, Curvature, and Barrel Clipping to change the Camera aperture’s shape. This influences the look of the bokeh that results from the Depth of Field Volume component.