Each rendering method has its strengths and weaknesses. Because the shortcomings of one approach tend to be strengths of the other, some renderers, suitably named "hybrid renderers", use both methods in an attempt to have few or no weaknesses.

Raytracers are good at:
  • Photorealistic features such as reflections, transparency, multiple lights, shadows, area lights, etc. With only a little work, these features pretty much "fall out" of the algorithm, because rays are a good analogy for light paths and therefore naturally model the real-world behavior of light.
  • Rendering images with very large amounts of scene geometry. By using a hierarchical bounding box tree data structure, the cost of locating any given object to intersection-test grows only logarithmically with the number of primitives, similar to guessing a number in a sorted list. Because only world-aligned boxes need to be intersection-tested while searching the tree, searches remain fast relative to scene complexity.
  • Using different cameras. By simply altering how eye rays are projected into the scene, one can easily imitate the optical properties of many different lenses, scene projections, and special lens distortions (see the sketch after this list).
  • CSG. Constructive Solid Geometry modeling is easy to support: a ray's entry and exit points along each solid define intervals along the ray, and those intervals can be combined with boolean set operations (union, intersection, difference) before the nearest visible hit is chosen.
  • Motion blur. By distributing rays over time -- each ray carries a sample time and intersects the scene in the state it has at that instant -- motion blur falls out of the same sampling machinery.
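
To make the camera point above concrete, here is a minimal sketch (not from the original post; the types and names are hypothetical) of how swapping only the eye-ray projection changes the camera model while the rest of the raytracer stays untouched. It shows a pinhole projection next to an equidistant fisheye.

#include <cmath>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };

// Pinhole projection: rays fan out from the eye through a flat image plane.
Ray pinholeRay(float px, float py, float width, float height, float focal)
{
    float u = px / width  - 0.5f;
    float v = py / height - 0.5f;
    Vec3 d = { u, v, focal };
    float len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    return { {0.0f, 0.0f, 0.0f}, { d.x/len, d.y/len, d.z/len } };
}

// Equidistant fisheye: a pixel's distance from the image centre maps directly
// to the ray's angle away from the view axis, giving the familiar distortion.
Ray fisheyeRay(float px, float py, float width, float height, float fovRadians)
{
    float u = 2.0f * (px / width  - 0.5f);
    float v = 2.0f * (py / height - 0.5f);
    float r = std::sqrt(u*u + v*v);          // distance from the image centre
    float theta = r * fovRadians * 0.5f;     // angle off the view axis
    float phi   = std::atan2(v, u);          // direction around the axis
    Vec3 d = { std::sin(theta) * std::cos(phi),
               std::sin(theta) * std::sin(phi),
               std::cos(theta) };
    return { {0.0f, 0.0f, 0.0f}, d };
}

Either function can be dropped in wherever the renderer generates primary rays; nothing downstream needs to know which lens produced them.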

Scanline renderers are good at:

  • Drawing quickly if the final number of polygons is under some threshold determined by the visibility determination algorithm being used (BSP, octree, etc.). Because they do not need to search the scene geometry for each pixel, they can just "hop to it" and start drawing.
  • Supporting displacement shaders. After a primitive is split into polygons or patches, each polygon or patch can easily be subdivided further to produce more geometry (see the subdivision sketch after this list).
  • Maintaining CPU/GPU code and data cache coherency, because textures and primitives are switched less frequently.
  • Arbitrary generation of primitives/patches/polygons, because they can be unloaded after being drawn. This is useful when implementing, for example, shaders that work by inserting additional geometry on the fly.
  • Realtime rendering even without hardware support, and realtime rendering of considerable model complexity with hardware support.
  • Wireframe, pointcloud, and other diagnostic-style rendering.
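
As an illustration of the displacement point above, here is a minimal sketch (hypothetical types and names, not taken from the post) of the kind of on-the-fly refinement a scanliner can do: each quad is split into four smaller quads, and a displacement shader would then push the new vertices along their normals before the pieces are drawn and discarded.

#include <vector>

struct Vec3   { float x, y, z; };
struct Vertex { Vec3 position, normal; };
struct Quad   { Vertex v[4]; };

// Average two vertices; the averaged normal would be renormalised before use.
Vertex midpoint(const Vertex& a, const Vertex& b)
{
    return { { (a.position.x + b.position.x) * 0.5f,
               (a.position.y + b.position.y) * 0.5f,
               (a.position.z + b.position.z) * 0.5f },
             { (a.normal.x + b.normal.x) * 0.5f,
               (a.normal.y + b.normal.y) * 0.5f,
               (a.normal.z + b.normal.z) * 0.5f } };
}

// Split one quad into four; repeat until the pieces are small enough (roughly
// a pixel across) to carry the displacement detail, then displace, draw, and
// discard them -- the refined geometry never has to persist in memory.
std::vector<Quad> subdivide(const Quad& q)
{
    Vertex e01 = midpoint(q.v[0], q.v[1]);
    Vertex e12 = midpoint(q.v[1], q.v[2]);
    Vertex e23 = midpoint(q.v[2], q.v[3]);
    Vertex e30 = midpoint(q.v[3], q.v[0]);
    Vertex c   = midpoint(e01, e23);         // centre of the quad
    return { Quad{{ q.v[0], e01, c, e30 }},
             Quad{{ e01, q.v[1], e12, c }},
             Quad{{ c, e12, q.v[2], e23 }},
             Quad{{ e30, c, e23, q.v[3] }} };
}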

What impedes raytracing performance:

  • Although each screen pixel need only be computed once, that computation is expensive, and part of it is paid even for pixels whose eye rays hit no geometry. First, the projected eye ray is determined; this costs at least 10 multiplies and 7 adds. Next, the bounding slabs hierarchy is traversed, which requires an optimized search with intersection testing of world-aligned bounding boxes; each box test costs two multiply-adds plus some comparison logic. When the nearest bbox leaf node is found, the primitive inside is tested. This costs at least 18 multiplies and 15 adds, because the eye ray must first be transformed into the primitive's local coordinate system. Then the actual hit test is done; for a sphere, that is another 10 multiplies and 15 adds. All the conditional logic inside these routines (and it can be complex) impedes the CPU's branch prediction, and there is also the overhead of the slab machinery, which must maintain state flags in rays, etc. If the bounding slabs traversal takes five bbox tests, that is 10 multiply-adds, so the total for a sphere intersection at one screen pixel comes to 48 multiplies and 47 additions. So far we have not cast any secondary rays -- this load is incurred for every primary ray. Even without global illumination effects, a raytracer would have to trace width x height pixels in 1/24 second to perform in realtime, which for a 640 x 480 display works out to about 14,745,600 multiplies and 14,438,400 additions per frame. (A sketch of the box and sphere tests being counted here appears after this list.)

    There's just so much computing going on. Considering that current chip fabrication processes are hitting a wall, the necessary speed might not be available for some time. Clearly, if raytracing is to perform in realtime before Moore's Law can be unstalled, a hardware assist (or massive parallelism) is necessary.
  • In scanlining, no computation at all is required for pixels that do not intersect geometry. The expensive operations are projecting a primitive into eye space, tessellating it into polygons, projecting each polygon into screen space, and computing per-polygon edge lists. The larger a polygon is, the more pixels there are to spread the per-polygon cost across. Since the ratio of a polygon's perimeter to its area decreases as the polygon gets larger, the per-pixel cost can become very small, the ultimate minimum being 4 multiplies and 5 adds (an optimized interpolation to compute the pixel's 3D location and depth buffer value, plus a single addition to increment the pixel's X coordinate); a sketch of this kind of incremental span interpolation also appears after this list. This benefit shows up particularly in preview rendering and in rendering flat surfaces such as planes, boxes, triangles, etc. Throw in the greater cache coherency and the less disruptive effect on branch prediction, and it's apparent that a scanliner can afford to suffer several pixel overwrites before a raytracer becomes competitive. With efficient visibility determination, scanlining is an order of magnitude ahead. For micropolygons, edge lists and their per-pixel interpolations become unnecessary, so a different set of computation costs applies (todo: investigate this).
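
To show where the multiply/add counts in the raytracing estimate above come from, here is a minimal sketch (not the post's actual code; the names are hypothetical and exact operation counts depend on the implementation) of the two tests being counted: the slab test against a world-aligned bounding box used while walking the hierarchy, and a ray/sphere test performed in the sphere's local space.

#include <algorithm>
#include <cmath>
#include <utility>

// Positions and directions as plain float[3] triples to keep the sketch small.
struct Ray  { float origin[3], dir[3], invDir[3]; };  // invDir[i] = 1/dir[i], precomputed once per ray
struct AABB { float lo[3], hi[3]; };

// Slab test against a world-aligned box: per axis, two multiply-adds produce
// the entry/exit distances; the rest is the comparison logic that disturbs
// branch prediction.
bool hitAABB(const Ray& r, const AABB& box, float tMax)
{
    float tMin = 0.0f;
    for (int axis = 0; axis < 3; ++axis) {
        float t0 = (box.lo[axis] - r.origin[axis]) * r.invDir[axis];
        float t1 = (box.hi[axis] - r.origin[axis]) * r.invDir[axis];
        if (t0 > t1) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMin > tMax) return false;   // the ray misses this box
    }
    return true;
}

// Ray/unit-sphere test in the sphere's local space; the ray is assumed to have
// already been transformed into that space, which is where the transform cost
// in the estimate above comes from.
bool hitUnitSphere(const Ray& r, float& tHit)
{
    float a = r.dir[0]*r.dir[0] + r.dir[1]*r.dir[1] + r.dir[2]*r.dir[2];
    float b = 2.0f * (r.origin[0]*r.dir[0] + r.origin[1]*r.dir[1] + r.origin[2]*r.dir[2]);
    float c = r.origin[0]*r.origin[0] + r.origin[1]*r.origin[1] + r.origin[2]*r.origin[2] - 1.0f;
    float disc = b*b - 4.0f*a*c;
    if (disc < 0.0f) return false;       // no real roots: the ray misses the sphere
    tHit = (-b - std::sqrt(disc)) / (2.0f * a);
    return tHit > 0.0f;
}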
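
And for the scanline side, here is a minimal sketch (the framebuffer layout and names are hypothetical) of why the per-pixel cost can shrink to a handful of operations: the expensive setup is done once per span, and each pixel then only steps the interpolated values along.

#include <vector>

struct Framebuffer {
    int width;
    std::vector<float> depth;      // one z value per pixel
    std::vector<unsigned> color;   // one packed colour per pixel
};

// Draw one horizontal span of a polygon on scanline y, from xStart to xEnd,
// with depth interpolated linearly from zStart to zEnd.
void drawSpan(Framebuffer& fb, int y, int xStart, int xEnd,
              float zStart, float zEnd, unsigned color)
{
    if (xEnd <= xStart) return;
    // Per-span setup: one divide to get the depth step across the span.
    float dz = (zEnd - zStart) / float(xEnd - xStart);
    float z  = zStart;
    int index = y * fb.width + xStart;
    for (int x = xStart; x < xEnd; ++x) {
        // Per-pixel work: a depth compare, a possible store, and two adds
        // (stepping z and stepping the pixel index).
        if (z < fb.depth[index]) {
            fb.depth[index] = z;
            fb.color[index] = color;
        }
        z += dz;
        ++index;
    }
}

The larger the span, the more pixels that one-time divide and setup are amortized over, which is the perimeter-to-area argument made above.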
