Precomputed Radiance Transfer (PRT), proposed by Sloan et al. [1], enables interactive image-based lighting that accounts for soft shadows and indirect illumination under low-frequency dynamic lighting environments. PRT precomputes light transport functions capturing the way an object shadows, scatters, and reflects light, and encodes them with low-order spherical harmonic (SH) basis functions, which allows them to be represented in compact vector form. Rendering then reduces to a simple inner product between a light vector, which is sampled at run time and likewise represented in the SH basis, and the precomputed transport vector. For glossy BRDFs, a transport matrix instead of a vector is precomputed at every vertex to allow for view-dependent shading.
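
This vector formulation can be sketched in a few lines. The transport and light coefficients below are random stand-ins for real precomputed and projected data, so the sketch only illustrates the structure of the computation, not Sloan et al.'s actual pipeline:

```python
import numpy as np

# Minimal sketch of diffuse PRT shading with hypothetical data
# (order-2 SH, i.e. (2+1)^2 = 9 coefficients, as in Sloan et al. [1]).
n_coeffs = 9
rng = np.random.default_rng(0)

# Stand-ins for precomputed per-vertex transport vectors (3 vertices).
transport = rng.normal(size=(3, n_coeffs))

# Stand-in for the SH-projected environment light, sampled at run time.
light = rng.normal(size=n_coeffs)

# Rendering = inner product of transport vector and light vector, per vertex.
shade = transport @ light          # shape (3,)

# Glossy BRDFs replace the vector with a per-vertex transport MATRIX:
# it maps the incident SH light vector to an SH vector of transferred
# radiance, which is then evaluated in the view direction.
transport_matrix = rng.normal(size=(n_coeffs, n_coeffs))
transferred = transport_matrix @ light   # still a small linear operation
```

Because shading is linear in the light coefficients, the environment can change every frame at the cost of one re-projection plus one dot product (or matrix-vector product) per vertex.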

Like a Fourier series, low-order SH can only approximate low-frequency signals well, and is not an ideal candidate for efficiently encoding high-frequency signals such as sharp lights and hard shadows.
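
A 1D analogy makes this concrete: truncating a step function (the profile of a hard shadow edge) to a handful of low-frequency Fourier coefficients blurs the edge and introduces ringing. The sketch below uses NumPy's FFT as a 1D stand-in for SH:

```python
import numpy as np

# A hard shadow edge is essentially a step function. Keeping only a few
# low-order Fourier coefficients (the 1D analogue of low-order SH on the
# sphere) cannot reproduce the discontinuity.
n = 256
signal = np.zeros(n)
signal[n // 2:] = 1.0              # step: "hard shadow boundary"

spectrum = np.fft.rfft(signal)

def truncate(spec, keep):
    out = np.zeros_like(spec)
    out[:keep] = spec[:keep]       # keep only the lowest 'keep' frequencies
    return out

approx_lo = np.fft.irfft(truncate(spectrum, 5), n)    # low order: blurry edge
approx_hi = np.fft.irfft(truncate(spectrum, 64), n)   # higher order: sharper

# L2 reconstruction error shrinks only as more coefficients are kept.
err_lo = np.linalg.norm(signal - approx_lo)
err_hi = np.linalg.norm(signal - approx_hi)
```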

Ng et al. [2] broke the low-frequency limitation by using non-linear wavelet approximation to represent the lighting and transport vectors, achieving all-frequency illumination and shadows at interactive rates. Wavelets require far fewer coefficients than SH to approximate high-frequency data. This technique was developed primarily for image relighting; under a changing viewpoint, it was limited to diffuse objects, because the viewing direction would also have to be sampled, and performing this additional sampling at every vertex is too costly.
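
The key idea of non-linear approximation is to keep the largest-magnitude wavelet coefficients rather than the first N. The following sketch (a hand-rolled orthonormal 1D Haar transform on illustrative data, not Ng et al.'s implementation) compares the two choices at the same coefficient budget:

```python
import numpy as np

def haar_forward(x):
    # Orthonormal 1D Haar transform (length must be a power of two).
    x = x.astype(float).copy()
    n = len(x)
    coeffs = np.empty(n)
    while n > 1:
        half = n // 2
        avg  = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)
        diff = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)
        coeffs[half:n] = diff      # detail coefficients at this level
        x[:half] = avg
        n = half
    coeffs[0] = x[0]               # overall scaling coefficient
    return coeffs

def haar_inverse(c):
    c = c.copy()
    n = 1
    while n < len(c):
        avg, diff = c[:n].copy(), c[n:2 * n].copy()
        c[0:2 * n:2] = (avg + diff) / np.sqrt(2.0)
        c[1:2 * n:2] = (avg - diff) / np.sqrt(2.0)
        n *= 2
    return c

# A "lighting" signal with a sharp bright feature (e.g. a small light source).
rng = np.random.default_rng(1)
signal = 0.05 * rng.random(256)
signal[100:108] = 1.0

c = haar_forward(signal)
keep = 20

# Linear approximation: keep the FIRST 'keep' (coarsest) coefficients.
linear = np.zeros_like(c)
linear[:keep] = c[:keep]

# Non-linear approximation: keep the LARGEST 'keep' coefficients, as in [2].
nonlin = np.zeros_like(c)
idx = np.argsort(np.abs(c))[-keep:]
nonlin[idx] = c[idx]

err_linear = np.linalg.norm(signal - haar_inverse(linear))
err_nonlin = np.linalg.norm(signal - haar_inverse(nonlin))
# Same coefficient budget, but choosing the largest coefficients captures
# the sharp feature far better.
```

By Parseval's theorem, keeping the largest-magnitude coefficients of an orthonormal basis minimizes the L2 error for a given budget, which is why this non-linear selection is so effective for sharp lighting.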

The following image compares SH (left) and Haar wavelet (right) PRT from Ng et al. [2] (SH and Haar both use 100 coefficients):

Ng et al. [3] factor light, visibility, and BRDF into three independent components, and thus achieve all-frequency dynamic lighting as well as a changing view.

The following is a brief explanation of their work.

Considering the rendering equation without indirect illumination, the equation

$$B(x, \omega_o) = \int_{\Omega(x)} L(x, \omega)\, V(x, \omega)\, \rho(\omega, \omega_o)\, (n \cdot \omega)\, d\omega$$

describes the outgoing radiance at surface location $x$ in viewing direction $\omega_o$, where $L$ is the light source radiance, $\omega$ is the incident light direction, $\rho$ is the BRDF, $n$ is the surface normal, and $V$ is visibility. The integral is over the visible or upper hemisphere $\Omega(x)$. Here we make several simplifications. First, we incorporate the cosine term $(n \cdot \omega)$ into the BRDF definition, writing $\tilde{\rho}(\omega, \omega_o) = \rho(\omega, \omega_o)\,(n \cdot \omega)$. Second, we assume distant illumination, making $L$ independent of surface location. Third, we consider only spatially uniform BRDFs. Finally, we define all functions in a global coordinate system. BRDFs, however, are typically defined in local coordinates with respect to the surface normal, so applying the BRDF in the global coordinate system to surfaces of different orientations requires rotation operations. With the above simplifications, the rendering equation can be expressed as

$$B(x, \omega_o) = \int L(\omega)\, V(x, \omega)\, \tilde{\rho}(\omega, \omega_o)\, d\omega,$$

where the integrand is a product of three functions. If we consider only a single vertex and a fixed viewpoint, the equation can be further expressed as

$$B = \int L(\omega)\, V(\omega)\, \tilde{\rho}(\omega)\, d\omega.$$

The functions $L$, $V$, and $\tilde{\rho}$ can then be expanded in an appropriate set of orthonormal basis functions $\{\Psi_i\}$:

$$L(\omega) = \sum_i L_i\, \Psi_i(\omega),$$

$$V(\omega) = \sum_j V_j\, \Psi_j(\omega),$$

$$\tilde{\rho}(\omega) = \sum_k \rho_k\, \Psi_k(\omega),$$

where $L_i$, $V_j$, and $\rho_k$ are the coefficients of the corresponding basis functions. With this basis expansion, we can write the simplified rendering equation as:

$$B = \sum_i \sum_j \sum_k C_{ijk}\, L_i\, V_j\, \rho_k,$$

where the triple product tensor, $C_{ijk}$, is defined by

$$C_{ijk} = \int \Psi_i(\omega)\, \Psi_j(\omega)\, \Psi_k(\omega)\, d\omega.$$

Ng et al. [3] call $C_{ijk}$ a tripling coefficient. Because of the tripling coefficients, evaluating the equation above is expensive in general: all combinations of $(i, j, k)$ must be considered, giving $O(N^3)$ complexity for $N$ basis functions. Ng et al. [3] proved that a tripling coefficient of the 2D Haar basis is nonzero if and only if certain criteria on the scales and supports of the three basis functions are met [3]. As a result, only $O(N \log N)$ of the tripling coefficients among the first $N$ basis functions are nonzero, so the equation can be evaluated in $O(N \log N)$ rather than $O(N^3)$ time. Furthermore, the evaluation can be reduced to linear time complexity by dynamic programming.
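
The triple product structure and the sparsity of Haar tripling coefficients can be checked numerically in a 1D discrete analogue (the paper's theorem is stated for the 2D Haar basis; this sketch only illustrates the idea):

```python
import numpy as np

# Discrete 1D analogue of the triple product formulation: build an
# orthonormal Haar basis on N samples, compute all tripling coefficients
#   C_ijk = sum_x Psi_i(x) Psi_j(x) Psi_k(x),
# and check that
#   sum_x L(x) V(x) rho(x) == sum_ijk C_ijk L_i V_j rho_k.
N = 16

def haar_basis(n):
    # Rows are orthonormal Haar basis vectors, coarsest scale first.
    basis = [np.full(n, 1.0 / np.sqrt(n))]        # scaling function
    size = n
    while size > 1:
        half = size // 2
        for pos in range(0, n, size):
            v = np.zeros(n)
            v[pos:pos + half] = 1.0
            v[pos + half:pos + size] = -1.0
            basis.append(v / np.linalg.norm(v))
        size = half
    return np.array(basis)

Psi = haar_basis(N)
assert np.allclose(Psi @ Psi.T, np.eye(N))        # orthonormality

# All N^3 tripling coefficients, by brute force. The point of [3] is that
# for the Haar basis only O(N log N) of these are nonzero.
C = np.einsum('ix,jx,kx->ijk', Psi, Psi, Psi)
nonzero = int(np.sum(np.abs(C) > 1e-12))

# Random "light", "visibility", "BRDF" signals and their coefficients.
rng = np.random.default_rng(2)
L, V, rho = rng.random(N), rng.random(N), rng.random(N)
Lc, Vc, rc = Psi @ L, Psi @ V, Psi @ rho

direct = np.sum(L * V * rho)                      # pointwise product integral
via_triples = np.einsum('ijk,i,j,k->', C, Lc, Vc, rc)
```

Counting `nonzero` against `N**3` shows the sparsity that makes the sub-cubic evaluation possible.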

In the work of Ng et al. [3], the light function is sampled at run time from an environment map; the visibility function is precomputed beforehand by sampling an occlusion cubemap at each vertex position; and the BRDF function is precomputed for a predefined set of orientations. When rendering, the BRDF for a specific orientation is obtained by interpolating within the precomputed set of BRDFs.
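
The run-time BRDF lookup can be sketched as interpolation over the orientation axis. The table contents and the one-dimensional angular parameterization below are hypothetical stand-ins, not the actual data layout of [3]:

```python
import numpy as np

# Hypothetical sketch: BRDF coefficient vectors are precomputed for a fixed
# set of orientations (here a single angle), and the vector for an arbitrary
# orientation is obtained by interpolating the two nearest precomputed ones.
n_coeffs = 64
angles = np.linspace(0.0, np.pi, 9)               # predefined orientations
rng = np.random.default_rng(3)
brdf_table = rng.random((len(angles), n_coeffs))  # stand-in precomputed data

def brdf_for(theta):
    # Piecewise-linear interpolation along the orientation axis,
    # applied independently to every wavelet coefficient.
    i = np.searchsorted(angles, theta) - 1
    i = int(np.clip(i, 0, len(angles) - 2))
    t = (theta - angles[i]) / (angles[i + 1] - angles[i])
    return (1 - t) * brdf_table[i] + t * brdf_table[i + 1]

# Halfway between two precomputed orientations -> average of their vectors.
mid = brdf_for(0.5 * (angles[3] + angles[4]))
```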

Here is a sample of these three functions, from [3], shown in cubemap form:

And the following is a sample rendered result from the work of Ng et al.[3]:

Ma et al. [5] use spherical wavelets and compute the triple product integrals in local coordinates, which eliminates the need for BRDF interpolation and greatly reduces the data size.

Although higher fidelity with run-time interactivity can be achieved by triple product integrals in the Haar wavelet basis, the high precomputation cost and large data size have kept the technique from being adopted by the graphics industry, which today widely uses SH-based rendering because of its simplicity and low cost.

References:

- [1] Sloan, P.-P., Kautz, J., and Snyder, J., Precomputed radiance transfer for real-time rendering in dynamic, low-frequency lighting environments, in Proceedings of the 29th annual conference on Computer graphics and interactive techniques. 2002, ACM Press: San Antonio, Texas.
- [2] Ng, R., Ramamoorthi, R., and Hanrahan, P., All-frequency shadows using non-linear wavelet lighting approximation, in ACM SIGGRAPH 2003 Papers. 2003, ACM Press: San Diego, California.
- [3] Ng, R., Ramamoorthi, R., and Hanrahan, P., Triple product wavelet integrals for all-frequency relighting, in ACM SIGGRAPH 2004 Papers. 2004, ACM Press: Los Angeles, California.
- [4] Ramamoorthi, R., Precomputation-Based Rendering, Foundations and Trends in Computer Graphics and Vision Vol. 3, No. 4 (2007) 281–369.