Topic : Radiosity in English
Author : Paul Nettle


with the most energy to contribute to the scene. Each pass would trigger another render of the scene, letting the user evaluate the result as it progressed. If a problem showed up along the way (an illuminating surface in the wrong place, or the wrong color), they could stop the process and make the needed adjustments.

During this process, the user would see a completely dark scene progress to a fully lit scene. To soften this sharp visual contrast between beginning and end, the progressive refinement technique added something called the "ambient term".

Before I continue, I want to point something out that is pretty important in radiosity. There is no such thing as ambient light in real life. Ambient light is something that was invented to approximate what appears to be a "global light" in the real world. In reality, light is always being reflected from surface to surface, which is how it finds its way into all the nooks and crannies of real-world detail. Before the advent of radiosity, ambient light was the best thing available to the typical rendering architectures. It is safe to think of radiosity as a more accurate solution to ambient (global) light. This is why radiosity is considered a technique for "global illumination."

The ambient term starts off as a "differential area sum" of the radiative energy for the entire scene. What this means is that it's a number that represents the average amount of light that each surface will receive throughout the processing of the entire radiosity solution. We can calculate that average without doing all the work simply because it's an average amount of energy, not a specific amount of energy for a single surface.

As each progressive pass emits the radiative energy for a surface, the ambient term is slowly decreased. As the total radiative energy of the scene approaches zero, so does the ambient term (though at different rates, of course). A nice advantage here is that you can use the ambient term to detect when the remaining undistributed energy would make only a negligible difference. At this point, you can stop processing.
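The loop described above can be sketched roughly as follows. This is a minimal illustration, not the article's actual implementation: the patch fields (`unsent`, `illum`, `reflect`, `area`) and the `form_factor` callback are assumptions, and real code would compute form factors with something like the hemicube discussed later.

```python
# A minimal sketch of progressive refinement with an ambient term.
# Patch fields and the form-factor callback are simplified stand-ins.

def ambient_term(patches):
    # "Differential area sum": the area-weighted average unsent energy,
    # scaled by the average interreflection factor 1 / (1 - avg_reflectance).
    total_area = sum(p["area"] for p in patches)
    avg_reflect = sum(p["reflect"] * p["area"] for p in patches) / total_area
    avg_unsent = sum(p["unsent"] * p["area"] for p in patches) / total_area
    return avg_unsent / (1.0 - avg_reflect)

def progressive_refinement(patches, form_factor, epsilon=0.01):
    # Keep shooting from the patch with the most unsent energy until
    # the remaining ambient contribution is negligible.
    while ambient_term(patches) > epsilon:
        shooter = max(patches, key=lambda p: p["unsent"] * p["area"])
        for p in patches:
            if p is shooter:
                continue
            delta = shooter["unsent"] * form_factor(shooter, p) * p["reflect"]
            p["illum"] += delta    # what the viewer sees
            p["unsent"] += delta   # energy this patch will later re-shoot
        shooter["unsent"] = 0.0    # this patch has emitted everything
    return patches
```

Note how the stopping condition is exactly the "negligible difference" test described above: once the ambient term falls below the threshold, further passes would not visibly change the image.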

So, the progressive approach solves the massive memory requirement of the radiosity matrix by simply never storing it, and it partially solves the processing-time issue by converging more quickly and by letting users preview their work in progress.




A Note on Patches

Before I continue, I want to cover the topic of patch subdivision just a little. I only touched on it lightly so as not to confuse the reader. It's time we dive just a little bit deeper into these ever-useful things.

First, let's be perfectly clear on something. If you use subdivision in your radiosity code, then you will not be using "surfaces" since the patches are a higher resolution representation of the original surface geometry. It will be the patches that shoot and gather energy amongst themselves, not the surfaces. If you use patch subdivision, you can probably discard your original surfaces since they have been replaced by a higher resolution representation, their patches.

Patches are how we simulate area light sources. Rather than actually treating the surface like an area light source, we simply split it up into lots of smaller light sources across the entire area of the original surface. If the surface is subdivided enough, then the results can be quite pleasing.

Patch subdivision can be done blindly or intelligently. An example of blind subdivision might be to subdivide every surface into a set of patches that are one square foot each. This can be quite a waste, since we only really need the subdivision in high-contrast areas (i.e. areas of a surface where the energy changes dramatically across a relatively small distance, like a shadow boundary).
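Blind subdivision of a rectangular surface might look something like this sketch; the axis-aligned layout and the patch dictionary fields are assumptions made purely for illustration.

```python
import math

# A sketch of "blind" subdivision: split an axis-aligned rectangular
# surface into a regular grid of patches no larger than patch_size on
# a side, regardless of where the lighting detail actually is.

def blind_subdivide(width, height, patch_size):
    cols = max(1, math.ceil(width / patch_size))
    rows = max(1, math.ceil(height / patch_size))
    pw, ph = width / cols, height / rows
    patches = []
    for r in range(rows):
        for c in range(cols):
            patches.append({
                "x": c * pw, "y": r * ph,   # lower-left corner
                "width": pw, "height": ph,
            })
    return patches
```

Every patch gets the same size whether it straddles a shadow boundary or sits in flat, even light, which is exactly the waste described above.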

There is a multitude of intelligent subdivision techniques. One of the most common is to subdivide progressively by adding another step to the process. Once a surface has fully emitted its energy, each patch in the existing data set is visited and compared with its adjoining patches. If two adjoining patches differ too much in their illumination values, there will be a sharp contrast between them, so you should subdivide each of them. You can pick whatever threshold you wish to keep subdivision to a minimum, and you can also set a maximum subdivision level to prevent subdividing too much.
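As a rough sketch of that contrast test (the `illum` and `level` fields, the neighbour-pair list, and the `MAX_LEVEL` cap are all assumptions for illustration, not the article's code):

```python
# After each shooting pass, compare adjoining patches and mark both
# for subdivision when their illumination values differ by more than
# a threshold.

MAX_LEVEL = 4  # cap so we never subdivide forever

def mark_for_subdivision(patches, neighbours, threshold):
    to_split = set()
    for i, j in neighbours:  # pairs of adjoining patch indices
        contrast = abs(patches[i]["illum"] - patches[j]["illum"])
        if contrast > threshold:
            # A sharp boundary (e.g. a shadow edge) lies between these
            # two patches: subdivide both sides, up to the maximum level.
            for k in (i, j):
                if patches[k]["level"] < MAX_LEVEL:
                    to_split.add(k)
    return sorted(to_split)
```

The threshold is the knob mentioned above: raise it and fewer patches split; lower it and you capture softer contrast at the cost of more geometry.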

Patches, however, are just the first step to subdivision. Patches themselves can be subdivided into "elements". Elemental subdivision is useful for performance reasons as well as aesthetic ones. Patch subdivision can be pre-set to a specific resolution; in this case, the entire scene is subdivided evenly into patches of a specific size. This sounds like a waste, but let's not get hasty: the subdivision resolution can be quite low in this case. As the radiosity solution progresses, the patches are intelligently subdivided into elements based on high-contrast areas (or whatever intelligent subdivision technique you decide to use).

You can think of elements as a higher resolution representation of their "parent" patches. But unlike patch subdivision, where the surfaces are discarded and replaced by patches, element subdivision does not discard the patches. The advantage here is that the patches are maintained for shooting, while the elements are used for gathering.

Let's look at that a little more closely. A patch is subdivided into a grid of 8x8 elements. During the distribution process, the patch with the highest amount of radiative energy is chosen for energy distribution. Energy is distributed from that patch to all of the ELEMENTS in the scene. The elements retain their illumination value (for beauty's sake) and the radiative energy that would be reflected from all the elements is then sent up to their parent patch. Later, the patch will do the shooting, rather than each individual element. This allows us to have a high resolution of surface geometry with a lower resolution distribution. This can save quite a lot of processing time, especially if the average patch is subdivided into 8x8 elements.
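A hedged sketch of that shoot-to-elements, gather-to-patch step might look like this; the data layout and the `form_factor` callback are assumptions for illustration, not the article's actual code:

```python
# Elements gather (keeping their own illumination for display), then
# push the energy they would re-reflect up to their parent patch,
# which does the shooting later.

def gather_to_elements(shooter, patches, form_factor):
    # Distribute the shooter patch's unsent energy to every ELEMENT.
    for patch in patches:
        if patch is shooter:
            continue
        reflected = 0.0
        for elem in patch["elements"]:
            delta = shooter["unsent"] * form_factor(shooter, elem) * patch["reflect"]
            elem["illum"] += delta            # kept per element, for beauty's sake
            reflected += delta * elem["area"] # energy this element would re-emit
        # The energy to be re-shot accumulates on the parent patch (per
        # unit area), so later shooting happens per patch, not per element.
        patch["unsent"] += reflected / patch["area"]
    shooter["unsent"] = 0.0
```

This is the high-resolution-gather, low-resolution-shoot split described above: with 8x8 elements per patch, you pay the fine-grained cost only on the receiving side.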

For the sake of this example, I'll just assume we're not at the elemental subdivision stage yet, and work from patches.




The Hemicube

Did somebody say shadows? I didn't. Not yet, at least. :-)

To obtain shadows, we need to have some visibility information, so we'll know how much of a patch is visible from another patch. One of the most common ways of doing this in today's world is to use a z-buffer. And radiosity is no different. To do this, however, we'll need a way to generate a z-buffer from a patch. This is where the hemicube comes in handy.

A hemicube is exactly what it sounds like. It's exactly one half of a cube, split orthogonally along one axis. This gives us one whole face, and four half-faces.

What's it for? Try to picture this: place a pin-hole camera at the base of the hemicube (i.e. the center of the cube prior to cutting it in half) and point the camera at the center of the top face. Now set your camera to a 90-degree frustum.

You can now consider the top face of the hemicube to be the rendering surface of the camera. This surface has a pixel resolution (which I'll discuss shortly). If you render the scene from this perspective, you'll "see" what the patch "sees".

Remember when I said that we need to take the relative distance and relative orientation of two patches into account to calculate their form factors? Well, in this case, we no longer need to do that. The hemicube takes care of that for us. As patches are rendered onto the surface of the hemicube, they'll occupy "hemicube pixels". The farther away a patch is, the fewer pixels it will occupy. This is also true for patches at greater angles of relative orientation: the greater the angle, the fewer pixels it will occupy. Using a z-buffer, we can let some patches partially (or fully) occlude other patches, causing them to occupy even fewer pixels (or none at all), which gives us shadows.
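One way to see how the hemicube "takes care" of distance and orientation is through its delta form factors: each hemicube pixel carries a precomputed weight that folds in both the distance and cosine terms. The formulas below are the standard ones for a unit hemicube with its top face at height 1; treat this as an illustrative sketch rather than the article's own code.

```python
import math

# Delta form factors for a unit hemicube centred on the patch, top
# face at z = 1. Summed over every pixel on all five faces, these
# weights add up to 1 (the whole hemisphere of directions).

def top_face_delta(x, y, cell_area):
    # Pixel on the top face at (x, y), with x, y in [-1, 1].
    return cell_area / (math.pi * (x * x + y * y + 1.0) ** 2)

def side_face_delta(y, z, cell_area):
    # Pixel on a side face at height z in (0, 1], lateral offset y in [-1, 1].
    return z * cell_area / (math.pi * (y * y + z * z + 1.0) ** 2)
```

Because these weights fall off with distance from the face centre and with the implied angle, a patch that renders far away or edge-on simply covers pixels worth less total form factor, which is exactly the behaviour described above.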

For this to work, we need to translate these renders into energy transmission. Let's talk about that for a bit.

A standard z-buffer renderer will render color values to a frame buffer and store depth information into a z-buffer. A hemicube implementation is very similar. It keeps the z-buffer just like normal. But rather than storing color values into a frame buffer, it stores patch IDs into a frame buffer. When the render is complete, you have partial form factor information for how much energy gets transmitted from one patch to another. I say "partial form factor information" because we're missing one piece.
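Turning a finished "item buffer" of patch IDs into that partial form factor information is then just a summation. This sketch assumes the item buffer is a flat list of patch IDs (with `None` for background pixels) and that `delta_ff` holds precomputed per-pixel delta form factors; both are illustrative assumptions.

```python
# Sum each pixel's precomputed delta form factor into the entry for
# whichever patch ID survived the z-buffer test at that pixel.

def accumulate_form_factors(item_buffer, delta_ff):
    form_factors = {}
    for pixel, patch_id in enumerate(item_buffer):
        if patch_id is None:       # background: no patch visible here
            continue
        form_factors[patch_id] = form_factors.get(patch_id, 0.0) + delta_ff[pixel]
    return form_factors
```

A patch that was fully occluded never appears in the item buffer, so it accumulates nothing at all, and that absence of transmitted energy is the shadow.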

This information is lacking some of the relative angle information between two patches. The relative angles
