Scanline rendering is the preferred method for generating most computer graphics in motion pictures. One particular implementation, REYES, is so popular that it has become almost standard in that industry. Scanline rendering is also the method used by video games and most scientific/engineering visualization software (usually via OpenGL). Scanline algorithms have also been widely and cheaply implemented in hardware.

In scanline rendering, drawing is accomplished by iterating through the component parts of the scene's geometric primitives. If the number of output pixels remains constant, render time tends to increase in linear proportion to the number of primitives. OpenGL and PhotoRealistic RenderMan are two well-known examples of scanline rendering systems.
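
In outline, the main loop just visits each primitive once, which is where the linear scaling comes from. A minimal sketch in C (the Primitive type and draw_primitive are hypothetical placeholders, not a real API):

    typedef struct Primitive Primitive;      /* opaque placeholder type */
    void draw_primitive(const Primitive *p); /* hypothetical drawing routine */

    /* Each primitive is drawn once, so for a fixed output resolution,
       total work grows roughly linearly with the primitive count. */
    void render(const Primitive *primitives, size_t num_primitives) {
        for (size_t i = 0; i < num_primitives; i++)
            draw_primitive(&primitives[i]);
    }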

Before drawing, a Z (depth) buffer containing as many pixels as the output buffer is allocated and initialized to the farthest representable depth. The Z buffer acts like a heightfield facing the camera: it keeps track of which piece of scene geometry is closest to the camera at each pixel, making hidden surface removal easy. The Z buffer may store additional per-pixel attributes, or other buffers can be allocated for them (more on this below). Unless primitives are prearranged in back-to-front painting order and present no pathological depth issues, a Z buffer is mandatory.
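
A minimal sketch of that setup in C, assuming a float depth buffer where larger values are farther from the camera (alloc_zbuffer is an illustrative name, not from any particular API):

    #include <stdlib.h>
    #include <float.h>

    /* Illustrative Z-buffer setup: one depth value per output pixel,
       initialized to the farthest representable depth so that the
       first surface drawn at a pixel always passes the depth test. */
    float *alloc_zbuffer(int width, int height) {
        float *zbuf = malloc((size_t)width * height * sizeof *zbuf);
        if (!zbuf) return NULL;
        for (int i = 0; i < width * height; i++)
            zbuf[i] = FLT_MAX;   /* "infinitely" far from the camera */
        return zbuf;
    }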

Each primitive either is already an easily drawable part (usually a triangle) or can be divided (tessellated) into such parts. Triangles or polygons small enough to fit within a single screen pixel are called micropolygons, and they represent the smallest size a polygon ever needs to be for drawing. It is sometimes desirable (but not absolutely necessary) for polygons to be micropolygons; what matters is how simply (and therefore quickly) a polygon can be drawn.
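
As a concrete example of the simplest case, a quadrilateral can be tessellated into two triangles by splitting along one diagonal; finer subdivision, down to micropolygon size, applies the same idea repeatedly. A sketch using made-up Vec3/Triangle types:

    typedef struct { float x, y, z; } Vec3;
    typedef struct { Vec3 a, b, c; } Triangle;

    /* Illustrative tessellation: split a quad (v0..v3 in winding order)
       along the v0-v2 diagonal into two triangles. */
    void tessellate_quad(const Vec3 v[4], Triangle out[2]) {
        out[0] = (Triangle){ v[0], v[1], v[2] };
        out[1] = (Triangle){ v[0], v[2], v[3] };
    }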

Assigning color to output pixels using these polygons is called rasterization. After determining which screen pixel locations the corners of a polygon occupy, the polygon is scan-converted into a series of horizontal or vertical strips (usually horizontal). As each scanline is stepped through pixel by pixel (from one edge of the polygon to the other), various attributes of the polygon are interpolated so that each pixel can be colored properly. These include the surface normal, scene location, Z-buffer depth, and the polygon's s,t texture coordinates. If the depth of a polygon pixel is nearer to the camera than the value stored for the corresponding screen pixel in the Z buffer, the Z buffer is updated and the pixel is colored; otherwise, the polygon pixel is ignored and the next one is tried.
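
A minimal sketch of that inner loop in C, assuming a float Z buffer, a 32-bit framebuffer, and a stand-in shade() function (all names here are illustrative); a real rasterizer would also interpolate normals and s,t coordinates across the span:

    /* Hypothetical shading stub: returns flat grey. A real renderer
       would use interpolated normals, s,t coordinates, lights, etc. */
    static unsigned shade(int x, int y, float z) {
        (void)x; (void)y; (void)z;
        return 0xFF808080u;   /* opaque grey, 0xAARRGGBB */
    }

    /* Rasterize one horizontal span of a polygon on scanline y, from
       x0 to x1, interpolating depth linearly and applying the Z-buffer
       test per pixel. */
    void draw_span(int y, int x0, int x1, float z0, float z1,
                   float *zbuf, unsigned *framebuffer, int width) {
        float dz = (x1 > x0) ? (z1 - z0) / (float)(x1 - x0) : 0.0f;
        float z = z0;
        for (int x = x0; x <= x1; x++, z += dz) {
            int idx = y * width + x;
            if (z < zbuf[idx]) {                   /* nearer than stored depth? */
                zbuf[idx] = z;                     /* update the Z buffer */
                framebuffer[idx] = shade(x, y, z); /* color the pixel */
            }                                      /* else: pixel is ignored */
        }
    }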
