Unity Publications

At Unity, we carry out research in graphics, AI, performance and more. We share this research with you and the community through talks, conferences and journals; our most recent publications are listed below.

Sampling the GGX Distribution of Visible Normals

Eric Heitz - JCGT 2018

Importance sampling microfacet BSDFs using their Distribution of Visible Normals (VNDF) yields significant variance reduction in Monte Carlo rendering. In this article, we describe an efficient and exact sampling routine for the VNDF of the GGX microfacet distribution. This routine leverages the property that GGX is the distribution of normals of a truncated ellipsoid and sampling the GGX VNDF is equivalent to sampling the 2D projection of this truncated ellipsoid. To do that, we simplify the problem by using the linear transformation that maps the truncated ellipsoid to a hemisphere. Since linear transformations preserve the uniformity of projected areas, sampling in the hemisphere configuration and transforming the samples back to the ellipsoid configuration yields valid samples from the GGX VNDF.
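
The routine is compact enough to transcribe directly. Below is a minimal Python/NumPy sketch of the steps summarized above (hemisphere transformation, uniform sampling of the projected area, reprojection, and back-transformation); the function name and conventions (tangent-space view direction Ve with Ve.z > 0, roughnesses alpha_x and alpha_y, uniform random numbers U1 and U2) are ours, and the paper's official sample code remains the authoritative reference.

```python
import numpy as np

def sample_ggx_vndf(Ve, alpha_x, alpha_y, U1, U2):
    """Sample a visible normal of the GGX distribution (tangent space, Ve.z > 0)."""
    # Transform the view direction to the hemisphere configuration.
    Vh = np.array([alpha_x * Ve[0], alpha_y * Ve[1], Ve[2]])
    Vh /= np.linalg.norm(Vh)
    # Build an orthonormal basis around Vh.
    lensq = Vh[0] * Vh[0] + Vh[1] * Vh[1]
    T1 = np.array([-Vh[1], Vh[0], 0.0]) / np.sqrt(lensq) if lensq > 0.0 else np.array([1.0, 0.0, 0.0])
    T2 = np.cross(Vh, T1)
    # Sample the projection of the visible hemisphere (a disk, half of it stretched).
    r = np.sqrt(U1)
    phi = 2.0 * np.pi * U2
    t1 = r * np.cos(phi)
    t2 = r * np.sin(phi)
    s = 0.5 * (1.0 + Vh[2])
    t2 = (1.0 - s) * np.sqrt(1.0 - t1 * t1) + s * t2
    # Reproject onto the hemisphere.
    Nh = t1 * T1 + t2 * T2 + np.sqrt(max(0.0, 1.0 - t1 * t1 - t2 * t2)) * Vh
    # Transform the normal back to the ellipsoid (GGX) configuration.
    Ne = np.array([alpha_x * Nh[0], alpha_y * Nh[1], max(0.0, Nh[2])])
    return Ne / np.linalg.norm(Ne)
```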

Paper
Bib
Sample Code

Analytical Calculation of the Solid Angle Subtended by an Arbitrarily Positioned Ellipsoid to a Point Source

Eric Heitz -  Nuclear Instruments and Methods in Physics Research 2018

We present a geometric method for computing an ellipse that subtends the same solid-angle domain as an arbitrarily positioned ellipsoid. With this method we can extend existing analytical solid-angle calculations of ellipses to ellipsoids. Our idea consists of applying a linear transformation on the ellipsoid such that it is transformed into a sphere from which a disk that covers the same solid-angle domain can be computed. We demonstrate that by applying the inverse linear transformation on this disk we obtain an ellipse that subtends the same solid-angle domain as the ellipsoid. We provide a MATLAB implementation of our algorithm and we validate it numerically.
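
As a quick companion to the numerical validation mentioned above, the solid angle of an arbitrarily positioned ellipsoid can be estimated by brute-force Monte Carlo ray casting; the sketch below is such a reference estimator (not the analytic method of the paper), with the ellipsoid described, as an assumption of this sketch, by the matrix A that maps it to a unit sphere and its center c relative to the point source at the origin.

```python
import numpy as np

def solid_angle_mc(A, c, n_samples=1_000_000, seed=0):
    """Monte Carlo estimate of the solid angle subtended at the origin by
    the ellipsoid {x : |A (x - c)| <= 1}."""
    rng = np.random.default_rng(seed)
    d = rng.normal(size=(n_samples, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)     # uniform directions
    # Transform the rays into the unit-sphere configuration and intersect.
    o = -A @ c                                        # transformed ray origin
    dp = d @ A.T                                      # transformed ray directions
    a = np.einsum('ij,ij->i', dp, dp)
    b = 2.0 * dp @ o
    k = o @ o - 1.0
    disc = b * b - 4.0 * a * k
    hit = disc >= 0.0
    t = np.where(hit, (-b - np.sqrt(np.maximum(disc, 0.0))) / (2.0 * a), -1.0)
    hit &= t >= 0.0
    return 4.0 * np.pi * hit.mean()

# Sanity check: a unit sphere centered 3 units away subtends
# 2*pi*(1 - sqrt(1 - (1/3)**2)) ~ 0.359 sr.
print(solid_angle_mc(np.eye(3), np.array([0.0, 0.0, 3.0])))
```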

Publisher’s Version
Author's free print

A note on track-length sampling with non-exponential distributions

Eric Heitz, Laurent Belcour - Tech Report 2018

Track-length sampling is the process of sampling random intervals according to a distance distribution. It means that, instead of sampling a punctual distance from the distance distribution, track-length sampling generates an interval of possible distances. The track-length sampling process is correct if the expectation of the intervals is the target distance distribution. In other words, averaging all the sampled intervals should converge towards the distance distribution as their number increases. In this note, we emphasize that the distance distribution that is used for sampling punctual distances and the track-length distribution that is used for sampling intervals are not the same in general. This difference can be surprising because, to our knowledge, track-length sampling has been mostly studied in the context of transport theory, where the distance distribution is typically exponential: in this special case, the distance distribution and the track-length distribution happen to be the same exponential distribution. However, they are not the same in general when the distance distribution is non-exponential.
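
To make the difference concrete, here is a small numerical illustration with a distribution chosen purely for this purpose (it does not come from the report): for the decreasing distance distribution p(x) = 1/(1+x)^2, the interval endpoints must be drawn from the distinct track-length density q(x) = 2/(1+x)^3, chosen so that the probability of an interval covering x equals p(x); for a unit-rate exponential the two densities coincide.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Distance distribution p(x) = 1/(1+x)^2 and track-length distribution
# q(x) = 2/(1+x)^3, so that P_q[t >= x] = p(x).
u = rng.random(n)
t = 1.0 / np.sqrt(1.0 - u) - 1.0          # samples of q via its inverse CDF

xs = np.linspace(0.0, 5.0, 6)
interval_average = (t[:, None] >= xs[None, :]).mean(axis=0)
print(interval_average)                    # averaged interval indicators
print(1.0 / (1.0 + xs) ** 2)               # target distance distribution p(xs)
```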

Paper

Combining Analytic Direct Illumination and Stochastic Shadows

Eric Heitz, Stephen Hill (Lucasfilm), Morgan McGuire (NVIDIA)  - I3D 2018 (short paper) (Best Paper Presentation Award)

In this paper, we propose a ratio estimator of the direct-illumination equation that allows us to combine analytic illumination techniques with stochastic raytraced shadows while maintaining correctness. Our main contribution is to show that the shadowed illumination can be split into the product of the unshadowed illumination and the illumination-weighted shadow. These terms can be computed separately — possibly using different techniques — without affecting the exactness of the final result given by their product. This formulation broadens the utility of analytic illumination techniques to raytracing applications, where they were hitherto avoided because they did not incorporate shadows. We use such methods to obtain sharp and noise-free shading in the unshadowed-illumination image and we compute the weighted-shadow image with stochastic raytracing. The advantage of restricting stochastic evaluation to the weighted-shadow image is that the final result exhibits noise only in the shadows. Furthermore, we denoise shadows separately from illumination so that even aggressive denoising only overblurs shadows, while high-frequency shading details (textures, normal maps, etc.) are preserved.
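
Written out with a generic direct-lighting integrand (our notation, not the paper's: L is the incident radiance, f the cosine-weighted BRDF, V the visibility), the split is the identity

```latex
\int_\Omega L(\omega)\, f(\omega)\, V(\omega)\, \mathrm{d}\omega
\;=\;
\underbrace{\int_\Omega L(\omega)\, f(\omega)\, \mathrm{d}\omega}_{\text{unshadowed illumination (analytic)}}
\times
\underbrace{\frac{\int_\Omega L(\omega)\, f(\omega)\, V(\omega)\, \mathrm{d}\omega}{\int_\Omega L(\omega)\, f(\omega)\, \mathrm{d}\omega}}_{\text{illumination-weighted shadow (stochastic)}}
```

where the first factor is evaluated analytically (for instance with area-light techniques such as linearly transformed cosines) and the ratio is estimated with ray-traced samples and then denoised.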

Paper
Supplemental
Slides
Code + demo

Non-periodic Tiling of Procedural Noise Functions

Aleksandr Kirillov - HPG 2018

Procedural noise functions have many applications in computer graphics, ranging from texture synthesis to the simulation of atmospheric effects and the specification of landscape geometry. Noise can either be precomputed and stored in a texture or evaluated directly at application runtime. This choice offers a trade-off between image variation, memory consumption and performance.

Advanced tiling algorithms can be used to reduce visual repetition. Wang tiles allow a plane to be tiled in a non-periodic fashion using a small set of textures. The tiles can be arranged in a single texture map so that the GPU's hardware filtering capabilities can be used.

This paper presents modifications to several popular procedural noise functions that directly produce texture maps containing the minimal complete Wang tile set. The results presented in this paper enable non-periodic tiling of these noise functions and of textures based on them, both at runtime and as a preprocessing step. They also make it possible to reduce the repetition of noise-based effects in computer-generated images at a low performance cost, while maintaining or even reducing memory consumption.
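
As a hedged illustration of the lookup side of such a scheme (the standard edge-hashing selection of Wang tiles, not necessarily the indexing used in the paper), each lattice cell can pick the tile whose four edge colors are derived from hashes of the edges it shares with its neighbors, so adjacent cells always agree along their common edge:

```python
def hash2(i, j, salt):
    """Small integer hash of a lattice coordinate (illustrative only)."""
    h = (i * 374761393 + j * 668265263 + salt * 2246822519) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return h ^ (h >> 16)

def wang_tile_index(i, j, n_colors=2):
    """Pick the Wang tile for lattice cell (i, j).

    The cell to the east reuses this cell's east edge hash as its own west
    edge hash, and similarly for the vertical neighbors, which guarantees
    matching edge colors across the whole (aperiodic) tiling.
    """
    west  = hash2(i,     j,     salt=0) % n_colors   # vertical edge at x = i
    east  = hash2(i + 1, j,     salt=0) % n_colors   # vertical edge at x = i + 1
    south = hash2(i,     j,     salt=1) % n_colors   # horizontal edge at y = j
    north = hash2(i,     j + 1, salt=1) % n_colors   # horizontal edge at y = j + 1
    # Index into a complete set of n_colors^4 tiles packed in a texture map.
    return ((north * n_colors + east) * n_colors + south) * n_colors + west
```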

Paper
Supplemental

High-Performance By-Example Noise using a Histogram-Preserving Blending Operator

Eric Heitz, Fabrice Neyret (Inria) - HPG 2018 (Best Paper Award)

We propose a new by-example noise algorithm that takes as input a small example of a stochastic texture and synthesizes an infinite output with the same appearance. It works on any kind of random-phase inputs as well as on many non-random-phase inputs that are stochastic and non-periodic, typically natural textures such as moss, granite, sand, bark, etc. Our algorithm achieves high-quality results comparable to state-of-the-art procedural-noise techniques but is more than 20 times faster.
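
The central ingredient is the blending operator itself. Below is a minimal sketch of the variance-preserving blend applied to values fetched from the histogram-transformed (Gaussianized) input, leaving out the Gaussianization and inverse-histogram lookup steps that complete the algorithm; the names and the [0, 1] encoding with mean 0.5 are assumptions of this sketch.

```python
import numpy as np

def histogram_preserving_blend(g1, g2, g3, w1, w2, w3, mean=0.5):
    """Blend three Gaussianized texture fetches without flattening their histogram.

    g1, g2, g3 : values fetched from the Gaussianized input texture
    w1, w2, w3 : barycentric blending weights (w1 + w2 + w3 = 1)
    mean       : mean of the Gaussianized texture (0.5 for a [0, 1] encoding)
    """
    linear = w1 * g1 + w2 * g2 + w3 * g3
    # Dividing by the L2 norm of the weights restores the variance that a
    # plain linear blend would otherwise shrink.
    return (linear - mean) / np.sqrt(w1 * w1 + w2 * w2 + w3 * w3) + mean
```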

Paper
Supplemental
Slides
Video

Unsupervised Deep Single-Image Intrinsic Decomposition using Illumination-Varying Image Sequences

Louis Lettry (ETH Zürich), Kenneth Vanhoey, Luc Van Gool (ETH Zürich) - Pacific Graphics 2018 / Computer Graphics Forum

Intrinsic decomposition separates a photographed scene into albedo and shading. Removing the shading "de-lights" images, which can then be reused in virtually relit scenes. We propose an unsupervised learning method to solve this problem.

Recent techniques use supervised learning: this requires a large set of known decompositions, which are hard to obtain. Instead, we train on unannotated images, using time-lapse imagery gathered from static webcams. We exploit the assumption that albedo is static by definition, while shading varies with lighting. We transcribe this into a siamese training scheme for deep learning.
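
A hedged sketch of the kind of siamese objective this implies (illustrative notation, not the paper's exact loss): the two frames of a static scene should each be reconstructed as albedo times shading, while sharing a single albedo.

```python
import numpy as np

def siamese_intrinsic_loss(I1, I2, A1, S1, A2, S2, lam=1.0):
    """Illustrative loss for a siamese pair of time-lapse frames I1, I2.

    (A1, S1) and (A2, S2) are the predicted albedo/shading for each frame;
    the albedo is assumed static across the pair, the shading is not.
    """
    reconstruction = np.mean((A1 * S1 - I1) ** 2) + np.mean((A2 * S2 - I2) ** 2)
    albedo_consistency = np.mean((A1 - A2) ** 2)
    return reconstruction + lam * albedo_consistency
```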

Paper
Supplemental
Supplemental code

Bib
Slides
 

Efficient Rendering of Layered Materials using an Atomic Decomposition with Statistical Operators

Laurent Belcour - ACM SIGGRAPH 2018

We derive a novel framework for the efficient analysis and computation of light transport within layered materials. Our derivation consists of two steps. First, we decompose light transport into a set of atomic operators that act on its directional statistics. Specifically, our operators consist of reflection, refraction, scattering, and absorption, whose combinations are sufficient to describe the statistics of light scattering multiple times within layered structures. We show that the first three directional moments (energy, mean and variance) already provide an accurate summary. Second, we extend the adding-doubling method to support arbitrary combinations of such operators efficiently. During shading, we map the directional moments to BSDF lobes. We validate that the resulting BSDF closely matches the ground truth in a lightweight and efficient form. Unlike previous methods, we support an arbitrary number of textured layers, and demonstrate a practical and accurate rendering of layered materials with both an offline and real-time implementation that are free from per-material precomputation.
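
For the energy (zeroth) moment alone, the adding step reduces to the classical adding equation for two interfaces separated by a layer; the sketch below shows this scalar special case (the paper additionally propagates the mean and variance of the directional lobes through the same recursion, which this sketch omits).

```python
def add_layers_energy(R12, T12, R21, T21, R23, T23):
    """Energy-only adding of interface 1|2 on top of interface 2|3.

    R12/T12 are the top interface's reflectance/transmittance from above,
    R21/T21 from below, and R23/T23 describe the bottom interface.
    """
    multiple = 1.0 / (1.0 - R21 * R23)          # geometric series of inter-reflections
    R13 = R12 + T12 * R23 * T21 * multiple      # total reflectance seen from above
    T13 = T12 * T23 * multiple                  # total transmittance
    return R13, T13
```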

Paper
Supplemental
Supplemental code
Bib
Video
Slides

An Adaptive Parameterization for Material Acquisition and Rendering

Jonathan Dupuy and Wenzel Jakob (EPFL) - ACM SIGGRAPH Asia 2018

One of the key ingredients of any physically based rendering system is a detailed specification characterizing the interaction of light and matter for all materials present in a scene, typically via the Bidirectional Reflectance Distribution Function (BRDF). Despite their utility, access to real-world BRDF datasets remains limited: measurements involve scanning a four-dimensional domain at sufficient resolution, a tedious and time-consuming process that is often infeasible. We propose a new parameterization that automatically adapts to the behavior of a material, warping the underlying 4D domain so that most of the volume maps to regions where the BRDF takes on non-negligible values, while irrelevant regions are strongly compressed. This adaptation only requires a brief 1D or 2D measurement of the material’s retro-reflective properties. Our parameterization is unified in the sense that it combines several steps that previously required intermediate data conversions: the same mapping can simultaneously be used for BRDF acquisition and storage, and it supports efficient Monte Carlo sample generation.

Paper
Video
Isotropic BRDF Dataset
Anisotropic BRDF Dataset
MERL Database Validation
C++ & Python code
Material Database

Stochastic Shadows

Eric Heitz, Stephen Hill (Lucasfilm), Morgan McGuire (NVIDIA) 

In this paper, we propose a ratio estimator of the direct-illumination equation that allows us to combine analytic illumination techniques with stochastic raytraced shadows while maintaining correctness. Our main contribution is to show that the shadowed illumination can be split into the product of the unshadowed illumination and the illumination-weighted shadow. These terms can be computed separately — possibly using different techniques — without affecting the exactness of the final result given by their product.

This formulation broadens the utility of analytic illumination techniques to raytracing applications, where they were hitherto avoided because they did not incorporate shadows. We use such methods to obtain sharp and noise-free shading in the unshadowed-illumination image and we compute the weighted-shadow image with stochastic raytracing. The advantage of restricting stochastic evaluation to the weighted-shadow image is that the final result exhibits noise only in the shadows. Furthermore, we denoise shadows separately from illumination so that even aggressive denoising only overblurs shadows, while high-frequency shading details (textures, normal maps, etc.) are preserved.

Paper

Adaptive GPU Tessellation with Compute Shaders

Jad Khoury, Jonathan Dupuy, and Christophe Riccio - GPU Zen 2 (to appear)

GPU rasterizers are most efficient when primitives project into more than a few pixels. Below this limit, the Z-buffer starts aliasing and the shading rate decreases dramatically [Riccio 12]; this makes the rendering of geometrically complex scenes challenging, as any moderately distant polygon will project to sub-pixel size. In order to minimize such sub-pixel projections, a simple solution is to procedurally refine coarse meshes as they get closer to the camera. In this chapter, we are interested in deriving such a procedural refinement technique for arbitrary polygon meshes.
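
A hedged sketch of the driving heuristic (illustrative, not the chapter's exact scheme): choose a subdivision level so that, after halving the edge length once per level, the projected edge length lands near a target size in pixels.

```python
import math

def subdivision_level(edge_len, distance, fov_y, screen_h, target_px=8.0, max_level=10):
    """How many times to subdivide an edge so that its projection is
    roughly target_px pixels (each level halves the edge length)."""
    # Approximate projected length of the edge, in pixels.
    projected_px = edge_len * screen_h / (2.0 * distance * math.tan(0.5 * fov_y))
    if projected_px <= target_px:
        return 0
    return min(math.ceil(math.log2(projected_px / target_px)), max_level)
```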

Paper
Code

Real-Time Line- and Disk-Light Shading with Linearly Transformed Cosines

Eric Heitz (Unity Technologies) and Stephen Hill (Lucasfilm) - ACM SIGGRAPH Courses 2017

We recently introduced a new real-time area-light shading technique dedicated to lights with polygonal shapes. In this talk, we extend this area-lighting framework to support lights shaped as lines, spheres and disks in addition to polygons.

Slides
Demo code
WebGL demo for quad, line and disk lights

Microfacet-based Normal Mapping for Robust Monte Carlo Path Tracing

Vincent Schüssler (KIT), Eric Heitz (Unity Technologies), Johannes Hanika (KIT) and Carsten Dachsbacher (KIT) - ACM SIGGRAPH ASIA 2017

Normal mapping imitates visual details on surfaces by using fake shading normals. However, the resulting surface model is geometrically impossible and normal mapping is thus often considered a fundamentally flawed approach with unavoidable problems for Monte Carlo path tracing: it breaks either the appearance (black fringes, energy loss) or the integrator (different forward and backward light transport). In this paper, we present microfacet-based normal mapping, an alternative way of faking geometric details without corrupting the robustness of Monte Carlo path tracing such that these problems do not arise.

Paper

A Spherical Cap Preserving Parameterization for Spherical Distributions

Jonathan Dupuy, Eric Heitz and Laurent Belcour - ACM SIGGRAPH 2017

We introduce a novel parameterization for spherical distributions that is based on a point located inside the sphere, which we call a pivot. The pivot serves as the center of a straight-line projection that maps solid angles onto the opposite side of the sphere. By transforming spherical distributions in this way, we derive novel parametric spherical distributions that can be evaluated and importance-sampled from the original distributions using simple, closed-form expressions. Moreover, we prove that if the original distribution can be sampled and/or integrated over a spherical cap, then so can the transformed distribution. We exploit the properties of our parameterization to derive efficient spherical lighting techniques for both real-time and offline rendering. Our techniques are robust, fast, easy to implement, and achieve quality that is superior to previous work.
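
The projection is simple to state numerically. The sketch below is our direct construction from the description above (not the paper's closed-form expressions): a direction, seen as a point on the unit sphere, is mapped to the second intersection of the line through that point and the pivot.

```python
import numpy as np

def pivot_transform(omega, pivot):
    """Map a unit direction to the opposite intersection of the line through
    (omega, pivot) with the unit sphere. Requires |pivot| < 1."""
    u = omega - pivot
    u /= np.linalg.norm(u)
    b = np.dot(pivot, u)
    # Roots of |pivot + t u|^2 = 1: omega is the positive root, its image on
    # the opposite side of the sphere is the negative one.
    t = -b - np.sqrt(b * b - np.dot(pivot, pivot) + 1.0)
    return pivot + t * u

# The mapping is an involution: applying it twice returns the input direction.
w = np.array([0.0, 0.0, 1.0])
p = np.array([0.3, 0.0, 0.2])
print(pivot_transform(pivot_transform(w, p), p))   # ~ [0, 0, 1]
```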

Paper
Video

A Practical Extension to Microfacet Theory for the Modeling of Varying Iridescence

Laurent Belcour (Unity), Pascal Barla (Inria) - ACM SIGGRAPH 2017

Thin-film iridescence makes it possible to reproduce the appearance of leather. However, this theory requires spectral rendering engines (such as Maxwell Render) to correctly integrate the change of appearance with respect to viewpoint (known as goniochromatism). This is due to aliasing in the spectral domain, as real-time renderers only work with three components (RGB) for the entire range of visible light. In this work, we show how to anti-alias a thin-film model, how to incorporate it into microfacet theory, and how to integrate it in a real-time rendering engine. This widens the range of appearances that can be reproduced with microfacet models.

Paper
Supplemental
Bib
Video
Code
Slides

Linear-Light Shading with Linearly Transformed Cosines

Eric Heitz, Stephen Hill (Lucasfilm) - GPU Zen (book)

In this book chapter, we extend our area-light framework based on Linearly Transformed Cosines to support linear (or line) lights. Linear lights are a good approximation for cylindrical lights with a small but non-zero radius. We describe how to approximate these lights with linear lights that have similar power and shading, and discuss the validity of this approximation.

Paper

A Practical Introduction to Frequency Analysis of Light Transport

Laurent Belcour - ACM SIGGRAPH Courses 2016

Frequency Analysis of Light Transport expresses Physically Based Rendering (PBR) using signal-processing tools. It is thus tailored to predict sampling rates and to perform denoising, anti-aliasing, etc. Many methods have been proposed to deal with specific cases of light transport (motion, lenses, etc.). This course aims to introduce the concepts and present practical application scenarios of frequency analysis of light transport in a unified context. To ease the understanding of the theoretical elements, frequency analysis is introduced alongside an implementation.

Course

Real-Time Polygonal-Light Shading with Linearly Transformed Cosines

Eric Heitz, Jonathan Dupuy, Stephen Hill (Ubisoft), David Neubelt (Ready at Dawn Studios) - ACM SIGGRAPH 2016

Shading with area lights makes CG renders more realistic. However, it requires solving spherical equations, which makes real-time rendering challenging. In this project, we develop a new spherical distribution that allows us to shade physically based materials with polygonal lights in real time.
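
Once the BRDF is approximated by a linearly transformed cosine, shading reduces to integrating a clamped cosine over the polygon transformed by the inverse LTC matrix, for which the classic edge formula applies. The sketch below shows only that final integration step (names and conventions are ours; the polygon is assumed to be expressed in tangent space, already transformed and clipped to the upper hemisphere).

```python
import numpy as np

def clamped_cosine_polygon_integral(verts):
    """Integrate a clamped cosine over a spherical polygon via the edge formula.

    verts: (N, 3) polygon vertices in tangent space (normal = +z), already
    multiplied by the inverse LTC matrix, clipped to the upper hemisphere
    and consistently wound.
    """
    v = verts / np.linalg.norm(verts, axis=1, keepdims=True)   # project onto the sphere
    total = 0.0
    for i in range(len(v)):
        a, b = v[i], v[(i + 1) % len(v)]
        cross = np.cross(a, b)
        cos_theta = np.clip(np.dot(a, b), -1.0, 1.0)
        # Each edge contributes its arc length times the z component of its plane normal.
        total += np.arccos(cos_theta) * cross[2] / np.linalg.norm(cross)
    return abs(total) / (2.0 * np.pi)
```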

Paper
Slides
MATLAB
Fitting and validation
Comparison with Technicolor's technique
Demo
WebGL demo
BRDF fitting code
Video

Additional Progress Towards the Unification of Microfacet and Microflake Theories

Jonathan Dupuy and Eric Heitz - EGSR 2016 (E&I)

We study the links between microfacet and microflake theories from the perspective of linear transport theory. In doing so, we gain additional insights, find several simplifications and touch upon important open questions as well as possible paths forward in extending the unification of surface and volume scattering models. First, we introduce a semi-infinite homogeneous exponential-free-path medium that (a) produces exactly the same light transport as the Smith microsurface scattering model and the inhomogeneous Smith medium that was recently introduced by Heitz et al, and (b) allows us to rederive all the Smith masking and shadowing functions in a simple way. Second, we investigate in detail what new aspects of linear transport theory enable a volume to act like a rough surface. We show that this is mostly due to the use of non-symmetric distributions of normals and explore how the violation of this symmetry impacts light transport within the microflake volume without breaking global reciprocity. Finally, we argue that the surface profiles that would be consistent with very rough Smith microsurfaces have geometrically implausible shapes. To overcome this, we discuss an extension of Smith theory in the volume setting that includes NDFs on the entire sphere in order to produce a single unified reflectance model capable of describing everything from a smooth flat mirror all the way to a semi-infinite isotropically scattering medium with both low and high roughness regimes in between.
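
For reference, the Smith masking term that this free-path formulation recovers has, for an isotropic GGX distribution, the standard closed form below (a textbook result quoted for context, not a transcription of the paper's derivation).

```python
import math

def smith_g1_ggx(cos_theta, alpha):
    """Smith masking term G1 = 1 / (1 + Lambda) for isotropic GGX roughness alpha."""
    cos_theta = min(max(cos_theta, 1e-6), 1.0)
    tan2 = (1.0 - cos_theta * cos_theta) / (cos_theta * cos_theta)
    lam = 0.5 * (-1.0 + math.sqrt(1.0 + alpha * alpha * tan2))
    return 1.0 / (1.0 + lam)
```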

Paper
Slides
