Raytracing is the dominant method for rendering photorealistic scenes; POV-Ray and Rayshade are two well-known raytracers. Hardware implementations of raytracers exist, but they are rare.

The idea behind raytracing is to iterate through the output buffer (screen) pixels and figure out what part of the scene each of them shows. As a result, if the scene geometry remains constant, render time grows roughly linearly with the number of output pixels.
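
As a rough illustration of that loop (not from the original post; the pinhole camera at the origin, the 640x480 resolution, and the Vec3, Ray, and trace names are all assumptions made for the sketch), the outer structure of a raytracer might look like this in C++:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
struct Ray  { Vec3 origin, dir; };

// Stub standing in for the real shading routine sketched further below.
Vec3 trace(const Ray&) { return {0.0, 0.0, 0.0}; }

int main() {
    const int width = 640, height = 480;
    const double aspect = double(width) / height;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Map the pixel centre onto an image plane one unit in front of
            // a camera at the origin looking down -z (90-degree vertical FOV).
            double u = (2.0 * (x + 0.5) / width - 1.0) * aspect;
            double v = 1.0 - 2.0 * (y + 0.5) / height;
            double inv = 1.0 / std::sqrt(u * u + v * v + 1.0);
            Ray primary{{0.0, 0.0, 0.0}, {u * inv, v * inv, -inv}};
            Vec3 color = trace(primary);  // one primary ray per pixel
            (void)color;                  // ...write into the output buffer here...
        }
    }
    return 0;
}
```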

For each screen pixel, an imaginary ray is cast from the camera into the scene. Intersections between the ray and scene objects are compared and the closest one to the camera is used to color the pixel, making hidden surface removal implicit.
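
A minimal C++ sketch of that closest-hit rule, assuming a scene made only of spheres; the Sphere type, the convention that intersect() returns a negative value on a miss, and the closestHit() helper are illustrative choices, not anything from the post:

```cpp
#include <cmath>
#include <vector>

struct Vec3   { double x, y, z; };
struct Ray    { Vec3 origin, dir; };          // dir is assumed normalized
struct Sphere { Vec3 center; double radius; };

static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(const Vec3& a, const Vec3& b)   { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

// Ray-sphere test: returns the distance t along the ray to the nearest
// intersection in front of the origin, or a negative value on a miss.
double intersect(const Sphere& s, const Ray& r) {
    Vec3 oc = sub(r.origin, s.center);
    double b = dot(oc, r.dir);
    double c = dot(oc, oc) - s.radius * s.radius;
    double disc = b * b - c;
    if (disc < 0.0) return -1.0;              // the ray misses the sphere
    double t = -b - std::sqrt(disc);          // nearer of the two roots
    if (t < 0.0) t = -b + std::sqrt(disc);    // ray origin is inside the sphere
    return t;
}

// Hidden surface removal falls out of simply keeping the closest hit.
int closestHit(const std::vector<Sphere>& scene, const Ray& r, double& tNearest) {
    int hit = -1;
    tNearest = 1e30;
    for (int i = 0; i < (int)scene.size(); ++i) {
        double t = intersect(scene[i], r);
        if (t > 0.0 && t < tNearest) { tNearest = t; hit = i; }
    }
    return hit;                               // index of the nearest object, or -1
}
```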

If the object is reflective or transparent/refractive, a secondary ray is cast (bounced off or passed through the object) to find out what the object is reflecting or letting show through. Rays can also be cast toward light sources to determine shadows. Every primitive must provide some way to test itself for intersection with a ray.
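
Here is one possible sketch of those secondary rays in C++, covering only mirror reflection and a shadow ray toward a point light (refraction is left out for brevity); reflect(), secondaryRays(), and the epsilon offset are assumed names and conventions, not from the post:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
struct Ray  { Vec3 origin, dir; };

static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(const Vec3& a, const Vec3& b)   { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3 add(const Vec3& a, const Vec3& b)   { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
static Vec3 scale(const Vec3& a, double s)      { return {a.x*s, a.y*s, a.z*s}; }

// Mirror-reflect the incoming direction d about the surface normal n.
Vec3 reflect(const Vec3& d, const Vec3& n) {
    return sub(d, scale(n, 2.0 * dot(d, n)));
}

// Build the secondary rays leaving a hit point. The small epsilon offset
// keeps the new rays from immediately re-hitting the surface they left.
void secondaryRays(const Vec3& hit, const Vec3& normal, const Vec3& inDir,
                   const Vec3& lightPos, Ray& reflected, Ray& shadow) {
    const double eps = 1e-4;
    Vec3 base = add(hit, scale(normal, eps));
    reflected = {base, reflect(inDir, normal)};
    Vec3 toLight = sub(lightPos, base);
    double len = std::sqrt(dot(toLight, toLight));
    shadow = {base, scale(toLight, 1.0 / len)};
    // If any object intersects `shadow` at a distance less than `len`, the
    // point is in shadow; otherwise the light contributes to its color.
}
```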

A Z (depth) buffer may also be maintained to provide quick redrawing effects after a rendering is completed, although some raytracers forgo such a feature because the main rendering task does not explicitly require one.
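
If such a buffer is kept, it can be as simple as recording the nearest hit distance per pixel during the main loop; the array name and sentinel value in this sketch are assumptions:

```cpp
#include <vector>

int main() {
    const int width = 640, height = 480;
    // One depth value per pixel, initialised to "infinitely far away".
    std::vector<double> depth(width * height, 1e30);
    // Inside the per-pixel loop, after the closest hit at distance tNearest
    // has been found for pixel (x, y):
    //     depth[y * width + x] = tNearest;
    // Fog, depth-of-field, or quick composites can then be applied as a
    // post-process by reading this buffer instead of retracing the scene.
    return 0;
}
```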
