To accurately compute shadows in a scene with omni-directional light sources, each shaded point has to shoot many shadow rays toward the surrounding environment to test light visibility (whether a shadow ray can reach the light source without hitting any blocking geometry). These occlusion tests require access to the whole scene geometry, which is not trivial to obtain.
In an effort to combine voxel-based GI with IBL, I experimented with casting shadows against scene geometry represented by sparse 3D textures:
Representing this “buddha on a plane” scene with an ordinary 512x512x512x8bit 3D texture requires 128MB of memory, whereas a sparse 3D texture consumes only 12.25MB. In this scene, only 196 texel pages (each page holds 64x32x32 texels for an 8-bit 3D texture) are resident (occupying memory). By using sparse textures, we can use the maximal 3D texture resolution without consuming too much memory. Furthermore, 1 bit per voxel is enough to store the occlusion information, so an 8-bit texel can store 2x2x2 occlusion voxels, which further reduces memory usage.
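The memory figures above, and the 2x2x2 bit-packing scheme, can be sketched as follows (the 64x32x32 page size is the one reported above for 8-bit 3D textures; the `occlusion_texel_and_bit` helper is my own illustration, not code from the original implementation):

```python
TEXEL_BYTES = 1                      # 8-bit texels

# Dense allocation: a full 512^3 8-bit 3D texture.
dense_bytes = 512 ** 3 * TEXEL_BYTES
print(dense_bytes / 2 ** 20)         # 128.0 MB

# Sparse allocation: only the 196 resident pages consume memory.
page_texels = 64 * 32 * 32
sparse_bytes = 196 * page_texels * TEXEL_BYTES
print(sparse_bytes / 2 ** 20)        # 12.25 MB

# Packing 1-bit occlusion: one 8-bit texel holds a 2x2x2 block of
# voxels, doubling the effective voxel resolution along each axis.
def occlusion_texel_and_bit(x, y, z):
    """Map a voxel coordinate to its containing texel and bit index."""
    texel = (x // 2, y // 2, z // 2)
    bit = (x & 1) | ((y & 1) << 1) | ((z & 1) << 2)
    return texel, bit

print(occlusion_texel_and_bit(5, 2, 7))  # ((2, 1, 3), 5)
```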
With this scene representation, I was able to cast shadow rays from each surface point. The following screenshot shows the result of casting 18 shadow rays per surface point:
Clearly, 18 shadow rays are not enough for accurate shadowing.
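In the fragment program this amounts to stepping each shadow ray through the occlusion texture until it either hits a filled voxel or leaves the scene. A minimal CPU-side sketch of that loop (plain Python, with the occupied voxels in a set rather than fetched from the sparse 3D texture, and fixed-size steps rather than an exact 3D DDA) might look like:

```python
import math

def shadow_ray_occluded(origin, direction, filled, grid_size, step=0.5):
    """March a shadow ray through a voxel grid until it leaves the scene.

    `filled` is a set of (x, y, z) voxel coordinates marked as occupied;
    a real implementation would fetch these bits from the sparse 3D texture.
    """
    length = math.sqrt(sum(d * d for d in direction))
    d = [c / length for c in direction]
    p = list(origin)
    while all(0 <= c < grid_size for c in p):
        if (int(p[0]), int(p[1]), int(p[2])) in filled:
            return True          # ray is blocked: the point is in shadow
        p = [p[i] + d[i] * step for i in range(3)]
    return False                 # ray escaped the scene: light is visible

# Example: a single occluder voxel above the shaded point.
filled = {(4, 4, 4)}
print(shadow_ray_occluded((4.5, 1.0, 4.5), (0, 1, 0), filled, 8))   # True
print(shadow_ray_occluded((4.5, 1.0, 4.5), (0, -1, 0), filled, 8))  # False
```

Note the ray must run all the way to the grid boundary before visibility can be declared, which is exactly the traversal cost discussed below.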
Right now this naive approach has several obvious drawbacks:
1. Unlike ambient occlusion, which only tests occlusion within a short distance, shadow rays for environment map lighting have to traverse all the way to the scene boundary, which makes the traversal costly.
2. A very large number of texture fetches is required in the fragment program, which is also costly.
3. This approach can only be used on diffuse materials. To accurately compute lighting integrals on glossy materials, the visibility function has to be incorporated into the integral itself.
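To make the last point concrete, the integral in question is the standard environment-lighting form (notation mine, not from the original post):

    L_o(x, ω_o) = ∫_Ω f_r(x, ω_i, ω_o) L_env(ω_i) V(x, ω_i) (n · ω_i) dω_i

For a diffuse BRDF, f_r is a constant and can be pulled out of the integral, so the visibility term V can be averaged on its own, much like ambient occlusion. For a glossy BRDF, f_r varies with direction, so V has to be sampled jointly with the BRDF lobe instead of being factored out.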