• A computer Virus attaches itself to a program or file so it can spread from one computer to another, leaving infections as it travels. Some viruses cause only mildly annoying effects, while others can damage your hardware, software, or files. Almost all viruses are attached to an executable file, which means the virus may exist on your computer but cannot infect it unless you run or open the malicious program. It is important to note that a virus cannot spread without a human action (such as running an infected program) to keep it going. People continue the spread of a computer virus, mostly unknowingly, by sharing infected files or sending e-mails with viruses attached.
  • A Worm is similar to a virus by design, and is considered to be a sub-class of a virus. Worms spread from computer to computer, but unlike a virus, a worm has the ability to travel without any help from a person. A worm takes advantage of file or information transport features on your system, which allows it to travel unaided. The biggest danger with a worm is its ability to replicate itself on your system, so rather than your computer sending out a single worm, it could send out hundreds or thousands of copies of itself, creating a hugely devastating effect. One example would be for a worm to send a copy of itself to everyone listed in your e-mail address book. Then the worm replicates and sends itself out to everyone listed in each of the receivers' address books, and the process continues on down the line. Due to the copying nature of a worm and its ability to travel across networks, the end result in most cases is that the worm consumes too much system memory (or network bandwidth), causing Web servers, network servers, and individual computers to stop responding. In more recent worm attacks, such as the much talked about Blaster Worm, the worm has been designed to tunnel into your system and allow malicious users to control your computer remotely.
  • A Trojan Horse is full of as much trickery as the mythological Trojan Horse it was named after. At first glance the Trojan Horse will appear to be useful software, but it will actually do damage once installed or run on your computer. Those on the receiving end of a Trojan Horse are usually tricked into opening it because they appear to be receiving legitimate software or files from a legitimate source. The Trojan horse itself would typically be a Windows executable program file, and thus must have an executable filename extension such as .exe, .com, .scr, .bat, or .pif. Since Windows is sometimes configured by default to hide filename extensions from the user, the Trojan horse's true extension might be "masked" by giving the file a name such as 'Readme.txt.exe'. With file extensions hidden, the user would only see 'Readme.txt' and could mistake it for a harmless text file. When the recipient double-clicks on the attachment, the Trojan horse might superficially do what the user expects it to do (open a text file, for example), so as to keep the victim unaware of its real, concealed objectives. Meanwhile, it might discreetly modify or delete files, change the configuration of the computer, or even use the computer as a base from which to attack local or other networks, possibly joining many other similarly infected computers as part of a distributed denial-of-service attack. When a Trojan is activated on your computer, the results can vary. Some Trojans are designed to be more annoying than malicious (like changing your desktop or adding silly active desktop icons), while others can cause serious damage by deleting files and destroying information on your system. Trojans are also known to create a backdoor on your computer that gives malicious users access to your system, possibly allowing confidential or personal information to be compromised. Unlike viruses and worms, Trojans do not reproduce by infecting other files, nor do they self-replicate.

Visual effects may be divided into several broad categories, which are described in the sections below.

Visual effects (or 'VFX' for short) is the term given to the processes by which images or film frames are created and manipulated for film and video. Visual effects usually involve the integration of live-action footage with computer generated imagery or other elements (such as pyrotechnics or model work) in order to create environments or scenarios which look realistic, but would be dangerous, costly, or simply impossible to capture on film. They have become increasingly common in big-budget films, and have also recently become accessible to the amateur filmmaker with the introduction of affordable animation and compositing software.

The illusions used in the film, television, and entertainment industries to simulate the imagined events in a story are traditionally called special effects (a.k.a. SFX or SPFX). In modern films, special effects are usually used to alter previously-filmed elements by adding, removing or enhancing objects within the scene. The use of special effects is more common in big-budget films, but affordable animation and compositing software enables even amateur filmmakers to create professional-looking effects.
Special effects are traditionally divided into the categories of optical effects and mechanical effects. In recent years, a greater distinction between special effects and visual effects has been recognized, with "visual effects" referring to post-production and optical effects, and "special effects" referring to on-set mechanical effects.

Optical effects (also called visual or photographic effects) are techniques in which images or film frames are created and manipulated for film and video. Optical effects are produced photographically, either "in-camera" using multiple exposure, mattes, or the Schüfftan process, or in post-production using an optical printer or video editing software. An optical effect might be used to place actors or sets against a different background, or make an animal appear to talk.

Mechanical effects (also called practical or physical effects) are usually accomplished during the live-action shooting. This includes the use of mechanized props, scenery and scale models, and pyrotechnics. Making a car appear to drive by itself, or blowing up a building, are examples of mechanical effects. Mechanical effects are often incorporated into set design and makeup. For example, a set may be built with break-away doors or walls, or makeup can be used to make an actor look like a monster.

Since the 1990s, computer generated imagery (CGI) has come to the forefront of special effects technologies. CGI gives film-makers greater control, and allows many effects to be accomplished more safely and convincingly. As a result, many optical and mechanical effects techniques have been superseded by CGI.

Compositing is a technique by which one shot is super-imposed on another, resulting in a composite shot. A common example is our everyday weather forecast on TV. The weather map is a separate computer generated shot onto which the announcer is super-imposed, making it look as if he/she is standing in front of a giant TV screen flashing different weather images.


By separating the foreground and the background into distinct layers, we can manage each layer with much more control. As you will see, this technique alone has given rise to enormous possibilities in the special effects realm. Let us study this technique with the help of an example. The following shot involves flying a plane through a congested city, between tall skyscrapers. Obviously, this is a very risky shot, and would not be permitted by any sane mayor of any large city. The only alternative is to resort to special effects.
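
To make the layering idea concrete, here is a minimal sketch (in Python with NumPy; the plane layer, city plate and matte are hypothetical stand-ins, not footage or code from any real production) of how a foreground layer is placed over a separately created background using a matte:

import numpy as np

def composite_over(foreground, background, matte):
    # foreground, background: float arrays of shape (H, W, 3), values 0..1
    # matte: float array of shape (H, W); 1.0 where the foreground (the plane)
    # is opaque, 0.0 where the background (the city plate) should show through.
    alpha = matte[..., np.newaxis]                 # broadcast the matte over RGB
    return alpha * foreground + (1.0 - alpha) * background

# Hypothetical usage: a small grey "plane" layer over a flat blue "city" plate.
h, w = 480, 640
plane_layer = np.zeros((h, w, 3)); plane_layer[200:280, 100:300] = [0.8, 0.8, 0.9]
city_plate = np.full((h, w, 3), [0.2, 0.3, 0.5])
plane_matte = np.zeros((h, w)); plane_matte[200:280, 100:300] = 1.0
final_shot = composite_over(plane_layer, city_plate, plane_matte)

Because the plane and the city live on separate layers, either one can be re-shot, re-rendered or re-timed without touching the other, which is exactly the control described above.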

Sources : Internet

Look at a bright light for a few seconds and then abruptly close your eyes. The image of the light seems to stay in your eyes a little longer even though your eyes are closed.

This phenomenon is termed Persistence of Vision because the vision seems to persist for a brief moment of time.

When the retina of the eye is excited by light, it sends impulses to the brain, which are then interpreted as an image by the visual cortex. The cells in the retina continue to send impulses even after the incident light is removed. This continues for a few fractions of a second until the retinal cells return to normal. Until that time, the brain continues to receive impulses from the retina, and hence seems to perceive an image of the source of light, giving rise to the phenomenon called Persistence of Vision.

The principle of the motion picture is based entirely on the phenomenon of Persistence of Vision. Without it, motion pictures as we know them simply wouldn't exist. Our eyes can retain a picture for a fraction of a second after seeing one. Before this time frame expires, if another similar picture is shown in its place, the eyes see it as a continuation of the first picture, and don't perceive the gap between the two.

If a series of still pictures depicting progressively incrementing action is flashed before the eyes in rapid succession, the eyes see it as a scene depicting smooth, flowing action. All visual media (Movies, TV, Electronic Displays, Laser Light Shows, etc) exploit this phenomenon.

Thanks to Persistence of Vision, our entertainment industry could make a transition from perpetual live shows like dance and dramas, to recordable entertainment like movies.

Motion of an object is the continuous displacement of the object in space with reference to another object. In the absence of a reference object, motion ceases to be apparent. What this means is that motion is always measured in relation to another object, which is used as a reference point.

When we drive, the road & the surroundings move past us. Thus we get the sensation of motion. So the road & surroundings are our reference points. When we fly, the earth beneath us is our reference point. But as you can see, the closer the reference point, the more acute the sense of motion. That's why astronauts in orbit seldom sense speed (though they are moving at thousands of miles an hour), because earth, their only reference point, is quite far away.

OK, but what has this got to do with Special Effects?! An SFX technique called Compositing relies entirely on the way our mind perceives motion. Compositing is one of the most useful tools in an SFX technician's bag of tricks. Keep these two in mind: the object, and its reference point(s); both of these are necessary to perceive motion in a scene.




Sources : CGtantra

Red, green and blue channels have all been used, but blue has been favored for several reasons. Blue is the complementary color to flesh tone--since the most common color in most scenes is flesh tone, the opposite color is the logical choice to avoid conflicts. Historically, cameras and film have been most sensitive to blue light, although this is less true today.

Green has its own advantages, beyond the obvious one of greater flexibility in matting with blue foreground objects. Green paint has greater reflectance than blue paint, which can make matting easier. Also, video cameras are usually most sensitive in the green channel, and often have the best resolution and detail in that channel. A disadvantage is that green spill is almost always objectionable and obvious even in small amounts, whereas blue can sometimes slip by unnoticed.

Sometimes (usually) the background color reflects onto the foreground talent, creating a slight blue tinge around the edges. This is known as blue spill. It doesn't look nearly as bad as the green spill one would get from a green screen.

Usually only one camera is used as the Chroma Key camera. This creates a problem on three camera sets; the other cameras can see the blue screen. The screen must be integrated into the set design, and it is easier to design around a bright sky blue than an intense green or red.

Sources : Internet

The Chroma Key process is based on the Luminance key. In a luminance key, everything in the image over (or under) a set brightness level is "keyed" out and replaced by either another image, or a color from a color generator. (Think of a keyhole or a cookie-cutter.) Primarily this is used in the creation of titles. A title card with white on black titles is prepared and placed in front of a camera. The camera signal is fed into the keyer's foreground input. The background video is fed into the keyer. The level control knob on the keyer is adjusted to cause all the black on the title card to be replaced by the background video. The white letters now appear over the background image.
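
As a rough digital illustration of that description (a sketch only, assuming 8-bit RGB frames in memory rather than a hardware keyer and title card), a luminance key is essentially a per-pixel brightness threshold:

import numpy as np

def luminance_key(title_card, background, threshold=128):
    # title_card, background: uint8 arrays of shape (H, W, 3).
    # Approximate luminance of the foreground card (Rec. 601 weights).
    luma = (0.299 * title_card[..., 0] +
            0.587 * title_card[..., 1] +
            0.114 * title_card[..., 2])
    letters = luma >= threshold                 # bright pixels: the white titles
    out = background.copy()                     # everything below the threshold
    out[letters] = title_card[letters]          # shows the background video
    return out

Adjusting the threshold plays the role of the level control knob on the keyer.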

Luminance keying works great with titles, but not so great for making live action composites. When we want to key people over a background image, problems arise because people and their clothing have a wide range of tones. Hair, shoes and shadow areas may be very dark, while eyes, skin highlights and shirt collars can approach 100% white. Those areas might key through along with the background.

Chroma Key creates keys on just one color channel. Broadcast cameras use three independent sensors, one for each color, Red, Green and Blue. Most cameras can output these RGB signals separately from the Composite video signal. So the original chroma key was probably created by feeding the blue channel of a camera into a keyer. This works, sort of, but soon manufacturers created dedicated chromakeyers that could accept all 3 colors, plus the background composite signal and the foreground composite signal. This made it possible to select any color for the key and fine tune the selection of the color.

As keyers became more sophisticated, with finer control of the transition between background and foreground, the effect became less obvious and jarring. Today's high-end keyers can make a soft key that is basically invisible.

Creating a blue screen composite image starts with a subject that has been photographed in front of an evenly lit, bright, pure blue background. The compositing process, whether photographic or electronic, replaces all the blue in the picture with another image, known as the background plate.
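
A very crude digital equivalent of that replacement step might look like the sketch below (assumed uint8 frames; real keyers use far more sophisticated color-difference mattes, spill suppression and edge softening):

import numpy as np

def blue_screen_composite(foreground, background_plate, blue_dominance=40):
    # foreground, background_plate: uint8 arrays of shape (H, W, 3).
    fg = foreground.astype(np.int16)
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    # A pixel counts as "screen" when blue exceeds both red and green by a margin.
    is_screen = (b - np.maximum(r, g)) > blue_dominance
    out = foreground.copy()
    out[is_screen] = background_plate[is_screen]    # swap in the background plate
    return out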

Blue screen composites can be made optically for still photos or movies, electronically for live video, and digitally for computer images. Until very recently, all blue screen compositing for films was done optically and all television composites were done using analog real-time circuits.

In addition to blue, other colors can be used; green is the most common, although red has sometimes been used for special purposes.


Sources : Internet

Difference between a Non-Linear Editing System (NLE) & After Effects :-

  • The main difference between After Effects and NLEs is that After Effects is layer-oriented, while NLEs are generally track-oriented.
  • In After Effects, each individual media object (video clip, audio clip, still image, etc.) occupies its own track. NLEs, however, use a system where individual media objects can occupy the same track as long as they do not overlap in time.
  • A track-oriented system is better suited to editing and can keep project files much more concise. The layer-oriented system that After Effects adopts is suited to extensive effects work and keyframing.

After Effects uses a system of layers organized on a timeline to create composites from still images and motion footage, such as video files. Properties such as position and opacity can be controlled independently for each layer, and each layer can have effects applied. After Effects is often described as the "Photoshop of video", because its flexibility allows compositors to alter video in any way they see fit, as Photoshop does for images.

Although After Effects can create images of its own, it is generally used to composite material from other sources to make moving graphics (also known as motion graphics). For example, with a picture of a space ship and a picture of a star background, After Effects could be used to place the ship in front of the background and animate it to move across the stars.

The main interface consists of several panels (windows in versions prior to After Effects 7.0). Three of the most commonly used panels are the Project panel, the Composition panel, and the Timeline panel. The Project panel acts as a bin to import stills, video, and audio footage items. Footage items in the Project panel are used in the Timeline panel, where layer order and timing can be adjusted. The items visible at the current time marker are displayed in the Composition panel.

Channels are grayscale images that store different types of information:
  • Color information channels are created automatically when you open a new image.
  • The image’s color mode determines the number of color channels created. For example, an RGB image has a channel for each color (red, green, and blue) plus a composite channel used for editing the image.
  • Alpha channels store selections as grayscale images. You can add alpha channels to create and store masks, which let you manipulate or protect parts of an image.
  • Spot color channels specify additional plates for printing with spot color inks.
  • An image can have up to 56 channels. All new channels have the same dimensions and number of pixels as the original image.
  • As long as you save a file in a format supporting the image’s color mode, the color channels are preserved.
  • Alpha channels are preserved only when you save a file in Photoshop, PDF, PICT, Pixar, TIFF, PSB, or raw formats. DCS 2.0 format preserves only spot channels. Saving in other formats may cause channel information to be discarded.

Some points to remember :-
  • When you select part of an image, the area that is not selected is “masked” or protected from editing. So, when you create a mask, you isolate and protect areas of an image as you apply color changes, filters, or other effects to the rest of the image. You can also use masks for complex image editing such as gradually applying color or filter effects to an image.
  • Masks are stored in alpha channels. Masks and channels are grayscale images, so you can edit them like any other image with painting tools, editing tools and filters. Areas painted black on a mask are protected, and areas painted white are editable.
  • To save a selection more permanently, you can store it as an alpha channel. The alpha channel stores the selection as an editable grayscale mask in the Channels palette. Once stored as an alpha channel, you can reload the selection at any time or even load it into another image.
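
Outside Photoshop, the same idea can be sketched with the Pillow imaging library in Python (a self-contained toy example: the "photo" and the mask are generated in code rather than loaded from real files, so the names and values are only placeholders):

from PIL import Image, ImageDraw, ImageFilter

# Stand-ins for a real photo and a saved selection: a simple test image, and a
# grayscale mask whose white circle marks the editable area -- black areas stay protected.
image = Image.new("RGB", (400, 300), (128, 140, 150))
ImageDraw.Draw(image).line((0, 0, 400, 300), fill=(255, 255, 255), width=5)
mask = Image.new("L", (400, 300), 0)
ImageDraw.Draw(mask).ellipse((150, 100, 300, 220), fill=255)

# Apply an effect to the whole image, then let the mask decide where it shows.
blurred = image.filter(ImageFilter.GaussianBlur(radius=4))
result = Image.composite(blurred, image, mask)   # white areas take the blurred pixels
result.save("photo_selective_blur.png")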

There is hardly a movie that is made in Hollywood these days that does not have extensive special effects (SFX) work. Bollywood has also started embracing the SFX bandwagon with some gusto. A large part of SFX is animation. But SFX goes way beyond animation. In this issue, we go behind the scenes to see how some of the most spectacular and some of the most realistic SFX seen in recent times have been achieved. How various effects are executed is explained towards the end of this section. You may want to read that first.

Creating Spiderman
If you have seen the recent blockbuster Spiderman, you would recall the last scene of Tobey Maguire swinging away amidst the high rises of Manhattan. Surely you would have wondered how a human being, even an especially adept and agile stuntman, could have pulled it off. If you have wondered and would like to know, welcome to the world of computer-generated SFX.

Let us take the Spiderman scene described above as an example. The bulk of the scene was computer-generated imagery (CGI). There was no real Manhattan, and no real Maguire, most of the time. Almost everything was generated by software.

This software-generated imagery was interspersed with live shots of stunt doubles for street-level shots, and the occasional shot of Maguire himself, to create the footage you and I saw.

What software was used for this? Like most other SFX projects of this scale, Spiderman used standard effects packages like Maya, extensions specifically written for the Spiderman project, and completely new packages written just to create Spiderman-specific effects like spider webs.

Spiderman and the Green Goblin were animated for their many stunts in Maya. So was the genetically-altered spider that bit Peter Parker, turning him into the superhero. Spider webs, web-slinging effects and the Manhattan buildings were digitally created in Houdini, software from Side Effects. Renderman from Pixar was also used. Animation created in Houdini and render paths for Renderman were coordinated using PERL scripts.

Houdini is available for NT, Linux, IRIX and Solaris. Maya from Alias WaveFront is available for NT, IRIX, Linux and MacOS X. Rendering all this is a demanding task in itself, and SFX and animation studios build render farms for the purpose. Initially, most of the work was done on heavy-duty RISC machines from SGI and Sun. Recently there has been a shift towards Linux and render farms built of commodity Intel-based machines. A render farm is a collection of machines, networked together by a high bandwidth connection, and dedicated to running the rendering. Specially written tools divide the rendering work amongst the machines in the render farm and also keep track of what is going on.

Announcing New Autodesk Visual Effects And Editing

Realize your ideas in the most creative way possible using the most advanced creative toolset on the market today. Autodesk Visual Effects and Editing solutions provide the high performance and interactivity needed to truly experiment and test new ideas. The combination of great talent with Autodesk tools is unbeatable. Use the best tools in the business.

From tracking to keying, color correction to motion estimation, and advanced timeline editing to our unique interactive 3D compositing environment, Autodesk Effects and Editing solutions offer you the broadest and richest toolset in the industry. No matter what the project demands, you can deal with it more creatively and efficiently, even exceeding the expectations of your clients.

Visual Effects Systems

Autodesk® Flame 2007
Industry-leading real-time visual effects design and compositing system.

Autodesk® Inferno 2007
The ultimate interactive design system for high-resolution visual effects.

Autodesk® Flint 2007
Advanced visual effects system for post-production and broadcast graphics.


Editing Systems

Autodesk® Smoke 2007


Integrated editing and finishing system for SD, HD, 2K film, and above.

Discreet® Fire 2007
The ultimate real-time, non-compressed, high-resolution, non-linear editing, and finishing system.

Andrew Daffy
(Animator, Maya Guru)


Born in the UK in 1976, Andrew Daffy specializes in CGI Supervision for the post production industry. He started working at Framestore CFC as a Junior Animator in 1996.

After earning the position of Head of 3D Commercials some years later, he worked on award-winning projects such as Levi's Odyssey, Walking With Dinosaurs and two James Bond title sequences.

Andrew's final project within the company was the CGI supervision of a bat sequence for the film Harry Potter and the Prisoner of Azkaban. He's now looking at branching out. As well as editing promos and pitching for directing work, he's currently researching the idea of setting up a London based school focusing solely on training in photorealistic animation.

As well as freelancing for the UK's major post-production houses and animation studios, Daffy has now set up his own company, "THE HOUSE OF CURVES".

Photographs, magazines and objects in nature, such as an apple, create color by subtracting or absorbing certain wavelengths of light while reflecting other wavelengths back to the viewer. This phenomenon is called subtractive color.

A red apple is a good example of subtractive color. The apple itself really has no color; it has no light energy of its own. It merely reflects the wavelengths of white light that cause us to see red, and absorbs most of the other wavelengths. The viewer (or detector) can be the human eye, film in a camera or a light-sensing instrument.

The subtractive color system involves colorants and reflected light. Subtractive color starts with an object (often a substrate such as paper or canvas) that reflects light and uses colorants (such as pigments or dyes) to subtract portions of the white light illuminating an object to produce other colors. If an object reflects all the white light back to the viewer, it appears white. If an object absorbs (subtracts) all the light illuminating it, no light is reflected back to the viewer and it appears black. It is the subtractive process that allows everyday objects around us to show color.

Color paintings, color photography and all color printing processes use the subtractive process to reproduce color. In these cases, the reflective substrate is canvas (paintings) or paper (photographs, prints), which is usually white.

Printing presses use color inks that act as filters and subtract portions of the white light striking the image on paper to produce other colors. Printing inks are transparent, which allows light to pass through to and reflect off of the paper base. It is the paper that reflects any unabsorbed light back to the viewer. The offset printing process uses cyan, magenta and yellow (CMY) process color inks and a fourth ink, black. The black printing ink is designated K to avoid confusion with B for blue. Overprinting one transparent printing ink with another produces the subtractive secondary colors, red, green, blue.
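
The relationship between the additive RGB values and the CMY(K) process inks can be sketched with the standard idealized conversion (real presses rely on ICC profiles and more elaborate black generation, so treat this only as the textbook formula):

def rgb_to_cmyk(r, g, b):
    # r, g, b in 0..255; returns c, m, y, k in 0.0..1.0.
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0              # pure black: black ink only
    c = 1.0 - r / 255.0                        # cyan subtracts red
    m = 1.0 - g / 255.0                        # magenta subtracts green
    y = 1.0 - b / 255.0                        # yellow subtracts blue
    k = min(c, m, y)                           # pull the common component into black ink
    c, m, y = ((x - k) / (1.0 - k) for x in (c, m, y))
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))   # red  -> (0.0, 1.0, 1.0, 0.0): magenta overprinted with yellow
print(rgb_to_cmyk(0, 0, 255))   # blue -> (1.0, 1.0, 0.0, 0.0): cyan overprinted with magenta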

The additive color system involves light emitted directly from a source, before an object reflects the light. The additive reproduction process mixes various amounts of red, green and blue light to produce other colors. Combining one of these additive primary colors with another produces the additive secondary colors cyan, magenta, yellow. Combining all three primary colors produces white. Television and computer monitors create color using the primary colors of light. Each pixel on a monitor screen starts out as black. When the red, green and blue phosphors of a pixel are illuminated simultaneously, that pixel becomes white. This phenomenon is called additive color.

To illustrate additive color, imagine three spotlights, one red, one green and one blue focused from the back of an ice arena on skaters in an ice show. Where the blue and green spotlights overlap, the color cyan is produced; where the blue and red spotlights overlap, the color magenta is produced; where the red and green spotlights overlap the color yellow is produced. When added together, red, green and blue lights produce what we perceive as white light.

As mentioned before, television screens and computer monitors are examples of systems that use additive color. Thousands of red, green and blue phosphor dots make up the images on video monitors. The phosphor dots emit light when activated electronically, and it is the combination of different intensities of red, green and blue phosphor dots that produces all the colors on a video monitor. Because the dots are so small and close together, we do not see them individually, but see the colors formed by the mixture of light. Colors often vary from one monitor to another. This is not new information to anyone who has visited an electronics store with various brands of televisions on display. Also, colors on monitors change over time. Currently, there are no color standards for the phosphors used in manufacturing monitors for the graphics arts industry. All image capture devices utilize the additive color system to gather the information needed to reproduce a color image. These devices include digital cameras, flatbed scanners, drum scanners, and video cameras. To summarize: Additive color involves the use of colored lights. It starts with darkness and mixes red, green and blue light together to produce other colors. When combined, the additive primary colors produce the appearance of white.
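
To a first approximation, mixing colored lights is just adding the channel values and clipping at full intensity, as in this small sketch of the spotlight example:

import numpy as np

def add_lights(*lights):
    # Each light is an (R, G, B) triple in 0..255; intensities add and clip at 255.
    total = np.sum(np.array(lights, dtype=np.int32), axis=0)
    return np.clip(total, 0, 255).astype(np.uint8)

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(add_lights(red, green))         # [255 255   0] -> yellow
print(add_lights(blue, green))        # [  0 255 255] -> cyan
print(add_lights(red, green, blue))   # [255 255 255] -> white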

What is Color?

Color is all around us. It is a sensation that adds excitement and emotion to our lives. Everything from the clothes we wear to the pictures we paint revolves around color. Without color, the world (especially RGB World) would be a much less beautiful place. Color can also be used to describe emotions; we can be red hot, feeling blue, or green with envy.

In order to understand color we need a brief overview of light. Without light, there would be no color, and hence no RGB World. Thank God for light!

Light is made up of energy waves which are grouped together in what is called a spectrum. Light that appears white to us, such as light from the sun, is actually composed of many colors. The wavelengths of light are not colored, but produce the sensation of color.

Raster Images

  • A Raster image is a collection of dots called pixels.
  • Each pixel is a tiny colored square.
  • When an image is scanned, the image is converted to a collection of pixels called a raster image.
  • Scanned graphics and web graphics (JPEG and GIF files) are the most common forms of raster images.
  • The quality of an imprint produced from a raster image is dependent upon the resolution (dpi) of the raster image, the capabilities of the printing technology and whether or not the image has been scaled up.

Vector Images

  • A vector image is a collection of connected lines and curves that produce objects.
  • When creating a vector image in a vector illustration program, nodes or drawing points are inserted, and lines and curves connect the nodes together.
  • Each node, line and curve is defined in the drawing by the graphics software by a mathematical description.
  • Text objects are created by connecting nodes, lines and curves.
  • In a vector object, colors are like clothes over the top of a skeleton.
  • Vector images can be scaled up or down without any loss of quality.
  • Since vector images are composed of objects not pixels, you can change the color of individual objects without worrying about individual pixels.

Bit depth, sometimes called "brightness resolution", defines the number of possible tones or colours every pixel can have. The greater the bit depth, the greater the depth of colour, and the larger the colour (or greyscale) palette (number of colours). For example, 8-bit colour has a range of 256 colours (or shades of grey) and 24-bit (or higher) colour provides 16.7 million colours, but 30-bit colour has many more millions of colours, which offers higher definition and thus better results in reproducing details such as the shadowy parts of an image.
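
The palette sizes quoted above follow directly from the bit depth, since n bits give 2^n possible values per pixel; a quick check of the arithmetic:

def palette_size(bits_per_pixel):
    # Number of distinct tones or colours a given bit depth can represent.
    return 2 ** bits_per_pixel

for bits in (1, 8, 24, 30):
    print(bits, "bits ->", palette_size(bits))
# 1 bits -> 2                    (black & white)
# 8 bits -> 256                  (256 colours or grey levels)
# 24 bits -> 16777216            (the "16.7 million colours" figure)
# 30 bits -> 1073741824          (over a thousand million colours)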


  • 2-bit: black & white
  • 8-bit: 256-level greyscale
  • 8-bit: 256 colors
  • 24-bit: true color

When digital technology is used to capture, store, modify and view photographic images, the images must first be converted to a set of numbers in a process called digitisation. Computers are very good at storing and manipulating numbers and can therefore handle digitised images with remarkable speed. Once digitised, photographs can be examined, altered, displayed, transmitted, printed or archived in an incredible variety of ways. As you explore digital imaging, it helps to be familiar with a few basic terms.

Digital images consist of a grid of small squares, known as picture elements, or pixels. These basic building blocks are the smallest elements used by computer monitors or printers to represent text, graphics, or images.

Resolution describes the clarity or level of detail of a digital image. Technically the term "resolution" refers to spatial resolution and brightness resolution; commonly, however, the word is used to refer to spatial resolution alone. The higher the resolution, the greater the detail in the image (and the larger the file). For computers and digital cameras, resolution is measured in pixels; for scanners, resolution is measured in pixels per inch (ppi) or dots per inch (dpi); for printers, resolution is measured in dots per inch (dpi).





Scanline rendering is the preferred method for generating most computer graphics in motion pictures. One particular implementation, REYES, is so popular that it has become almost standard in that industry. Scanline rendering is also the method used by video games and most scientific/engineering visualization software (usually via OpenGL). Scanline algorithms have also been widely and cheaply implemented in hardware.

In scanline rendering, drawing is accomplished by iterating through component parts of scene geometry primitives. If the number of output pixels remains constant, render time tends to increase in linear proportion to the number of primitives. OpenGL and Photorealistic Renderman are two examples of scanline rendering.

Before drawing, a Z or depth buffer containing as many pixels as the output buffer is allocated and initialized. The Z buffer is like a heightfield facing the camera, and it keeps track of which scene geometry part is closest to the camera, making hidden surface removal easy. The Z buffer may store additional per-pixel attributes, or other buffers can be allocated to do this (more on this below). Unless primitives are prearranged in back-to-front painting order and do not present pathological depth issues, a Z buffer is mandatory.

Each primitive either is composed of easily drawable parts (usually triangles) or can be divided up (tessellated) into such parts. Triangles or polygons that fit within screen pixels are called micropolygons, and represent the smallest size a polygon needs to be for drawing. It is sometimes desirable (but not absolutely necessary) for polygons to be micropolygons -- what matters is how simply (and therefore quickly) a polygon can be drawn.

Assigning color to output pixels using these polygons is called rasterization. After figuring out which screen pixel locations the corners of a polygon occupy, the polygon is scan-converted into a series of horizontal or vertical strips (usually horizontal). As each scanline is stepped through pixel by pixel (from one edge of the polygon to the other), various attributes of the polygon are computed so that each pixel can be colored properly. These include surface normal, scene location, z-buffer depth, and polygon s,t coordinates. If the depth of a polygon pixel is nearer to the camera than the value for the respective screen pixel in the Z buffer, the Z buffer is updated and the pixel is colored. Otherwise, the polygon pixel is ignored and the next one is tried.
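
Stripped of scan conversion, interpolation and shading, the Z-buffer test at the heart of that loop can be sketched in a few lines (a toy example with two flat, axis-aligned "polygons"; nothing here is taken from any real renderer):

import numpy as np

WIDTH, HEIGHT = 640, 480
frame_buffer = np.zeros((HEIGHT, WIDTH, 3), dtype=np.float32)  # output colors
z_buffer = np.full((HEIGHT, WIDTH), np.inf, dtype=np.float32)  # nearest depth seen per pixel

def write_fragment(x, y, depth, color):
    # Color the pixel only if this fragment is nearer than what is already stored;
    # this is exactly the hidden-surface test described above.
    if depth < z_buffer[y, x]:
        z_buffer[y, x] = depth
        frame_buffer[y, x] = color

# Two overlapping rectangles standing in for scan-converted polygons.
for y in range(100, 200):
    for x in range(100, 300):
        write_fragment(x, y, depth=5.0, color=(1.0, 0.0, 0.0))   # far red quad
for y in range(150, 250):
    for x in range(200, 400):
        write_fragment(x, y, depth=3.0, color=(0.0, 1.0, 0.0))   # nearer green quad
# Where the quads overlap, the green quad wins because its depth value is smaller.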

Raytracing is the dominant method for rendering photorealistic scenes. POV-Ray and Rayshade are examples of raytracers. Hardware implementations of raytracers exist but tend to be rare.

The idea behind raytracing is to iterate through the output buffer (screen) pixels and figure out what part of the scene each of them shows. As a result, if scene geometry remains constant, render time increases in linear proportion to the number of output pixels.

For each screen pixel, an imaginary ray is cast from the camera into the scene. Intersections between the ray and scene objects are compared and the closest one to the camera is used to color the pixel, making hidden surface removal implicit.

If the object is reflective or transparent/refractive, a second ray is cast (or bounced off the object) to find out what the object is reflecting or letting show through. Rays can also be cast towards light sources to determine shadows. Every primitive must provide some way to test itself with the intersection of a ray.
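
That per-primitive ray test is easiest to see for a sphere; the sketch below solves the usual quadratic for the nearest hit (a standalone illustration, not any particular raytracer's code):

import math

def intersect_sphere(ray_origin, ray_dir, center, radius):
    # Returns the distance t along the ray to the nearest hit, or None on a miss.
    # ray_dir is assumed to be a unit vector, so the quadratic's 'a' term is 1.
    ox, oy, oz = (ray_origin[i] - center[i] for i in range(3))
    dx, dy, dz = ray_dir
    b = 2.0 * (dx * ox + dy * oy + dz * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                              # the ray misses the sphere
    sqrt_disc = math.sqrt(disc)
    for t in ((-b - sqrt_disc) / 2.0, (-b + sqrt_disc) / 2.0):
        if t > 1e-6:                             # nearest intersection in front of the origin
            return t
    return None

# An eye ray cast straight down the -Z axis at a sphere 5 units away:
print(intersect_sphere((0, 0, 0), (0, 0, -1), center=(0, 0, -5), radius=1.0))   # 4.0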

A Z or depth buffer may also be used to provide quick redrawing effects after a rendering is completed, although some raytracers forego such a feature because the main rendering task does not explicitly require a Z buffer.

Each rendering method has its strengths and weaknesses. Because the shortcomings of one approach tend to be strengths in the other, some renderers, suitably named "hybrid renderers", use both methods in an attempt to have few or no weaknesses.

Raytracers are good at:
  • Photorealistic features such as reflections, transparency, multiple lights, shadows, area lights, etc. With only a little work, these features pretty much "fall out" of the algorithm, because rays are a good analogy for light paths, thereby modeling the real-world properties of light.
  • Rendering images with very large amounts of scene geometry. By using a hierarchical bounding box tree data structure, locating any given object to intersection-test is some inverse power (log) of the number of primitives, similar to guessing a number in a sorted list of numbers. Because only world-aligned boxes need to be intersection-tested when searching the tree, searches are relatively fast compared to scene complexity.
  • Using different cameras. By simply altering how eye rays are projected into the scene, one can easily imitate the optical properties of many different lenses, scene projections, and special lens distortions.
  • CSG. Constructive Solid Geometry modeling is easy to support.
  • Motion blur.

Scanline renderers are good at:

  • Drawing quickly if the final number of polygons is under some threshold relative to the visibility determination algorithm being used (BSP, octree, etc.). By not searching for scene geometry for each pixel, they just "hop to it" and start drawing.
  • Supporting displacement shaders. After splitting a primitive into polygons or patches, the polygon or patch can easily be subdivided further to produce more geometry.
  • Maintaining CPU/GPU code and data cache coherency, because the switching of textures and primitives occurs less frequently.
  • Arbitrary generation of primitives/patches/polygons, because they can be unloaded after being drawn. This is useful when implementing, for example, shaders that work by inserting additional geometry on the fly.
  • Realtime rendering even without hardware support, and realtime rendering of considerable model complexity with hardware support.
  • Wireframe, pointcloud, and other diagnostic-style rendering.

What impedes raytracing performance:

  • Although each screen pixel need only be computed once, that computation is expensive. This can happen even for pixels that are not intersected by any geometry. First, the projected eye ray is determined. This costs at least 10 multiplies and 7 adds. Next, the bounding slabs hierarchy is traversed. This requires an optimized search with intersection testing of world-aligned bounding boxes. Each box test costs two multiply-adds plus some comparison logic. Then, when the nearest bbox leaf node is found, the primitive inside is tested. This costs at least 18 multiplies and 15 adds, because the eye ray must be transformed into the primitive's local coordinate system. Then the actual hit test is done; for a sphere, we're looking at an extra 10 multiplies and 15 adds. All the conditional logic inside these routines (and it can be complex) impedes the CPU's branch prediction. There is also the overhead of the slab machinery, which must maintain state flags in rays, etc. If it takes five bbox tests during the bounding slabs traversal, we've used 10 multiply-adds, so the total for a sphere intersection for one screen pixel would be 48 multiplies and 47 additions. So far, we have not cast any secondary rays -- this computation load occurs for each primary ray. Even without global illumination effects, a raytracer would have to be able to trace width x height pixels in 1/24 second to perform in realtime, but that would mean about 14,745,600 multiplies and 14,438,400 additions for a 640 x 480 display.

    There's just so much computing going on. Considering that current chip fabrication processes are hitting a wall, the necessary speed might not be available for some time. Clearly, if raytracing is to perform in realtime before Moore's Law can be unstalled, a hardware assist (or massive parallelism) is necessary.
  • In scanlining, there are no computations required for pixels that do not intersect geometry. The expensive operations are projecting the primitive into eye space, tessellating a primitive into polygons, projecting each polygon into screen space, and computing per-polygon edge lists. The larger a polygon is, the more pixels it has over which to spread the cost of the per-polygon computation. Since the ratio of the perimeter length of a polygon to its surface area decreases as the polygon gets larger, the per-pixel cost can become very small, the ultimate minimum being 4 multiplies and 5 adds (an optimized interpolation to compute the pixel's 3D location and depth buffer value plus a single addition to increment the pixel's X coordinate). This benefit accrues particularly in preview rendering and rendering of flat surfaces such as planes, boxes, triangles, etc. Throw in the greater cache coherency and less disruptive effects upon branch prediction, and it's apparent that a scanliner can afford to suffer pixel overwrites several times before a raytracer becomes competitive. With efficient visibility determination, scanlining is an order of magnitude ahead. For micropolygons, edge lists and their per-pixel interpolations become unnecessary, so a different set of computation costs occur.

  • Ambient color appears where the surface is lit by ambient light alone (where the surface is in shadow).
  • Diffuse color appears where light falls directly on the surface. It is called "diffuse" because light striking it is reflected in various directions. Highlights, on the other hand, are reflections of light sources.
  • Specular highlights appear where the viewing angle is equal to the angle of incidence. Glancing highlights appear where the angle of incidence is high, relative to the observer or camera (that is, the light ray is nearly parallel to the surface). Shiny surfaces usually have specular highlights. Glancing highlights are characteristic of metallic surfaces.

Materials work in combination with lights.
  • Light Intensity
    A light's original intensity at its point of origin.
  • Angle of Incidence
    As the angle of incidence increases, the intensity of the face illumination decreases.
  • Distance
    Light Diminishes over distance. This effect is known as Attenuation.

The components of a Standard Material include its color components, highlight controls, self-illumination, and opacity:
  • Ambient Color is the color of the object in shadow.
  • Diffuse is the color of the object in direct, "good" lighting.
  • Specular is the color of shiny highlights.
  • Some shaders generate the specular color, rather than letting you choose it.
  • Filter is the color transmitted by light shining through the object.
  • The Filter color component isn't visible unless the material's opacity is less than 100%.
  • Self-Illumination simulates an object lit from within.
  • Opacity is the opposite of transparency. As you reduce the Opacity value, the object becomes more transparent.
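
These controls map onto the classic ambient + diffuse + specular split; a generic Blinn-Phong-style sketch of how the terms combine (illustrative only, not 3ds Max's actual shader code) is:

import numpy as np

def shade(normal, light_dir, view_dir, ambient, diffuse, specular,
          light_color, shininess=32.0, attenuation=1.0):
    # normal, light_dir, view_dir: normalized 3-vectors; the color arguments are RGB triples in 0..1.
    n, l, v = (np.asarray(x, dtype=float) for x in (normal, light_dir, view_dir))
    # Diffuse term: the angle of incidence (N.L); illumination falls off as the light grazes the surface.
    n_dot_l = max(float(np.dot(n, l)), 0.0)
    # Specular highlight: strongest where the half-vector lines up with the surface normal.
    half_vec = (l + v) / np.linalg.norm(l + v)
    spec = max(float(np.dot(n, half_vec)), 0.0) ** shininess if n_dot_l > 0.0 else 0.0
    ambient_term = np.asarray(ambient)                                   # lit by ambient light alone
    diffuse_term = attenuation * n_dot_l * np.asarray(diffuse) * np.asarray(light_color)
    specular_term = attenuation * spec * np.asarray(specular) * np.asarray(light_color)
    return np.clip(ambient_term + diffuse_term + specular_term, 0.0, 1.0)

The attenuation factor stands in for the distance falloff listed earlier, and the shininess exponent controls how tight the specular highlight is.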

Shading types are handled by a "shader", which offers you variations on how the surface responds to light.
  • Anisotropic
    Creates surfaces with noncircular, "anisotropic" highlights; good for modelling hair, glass, or metal.

  • Blinn
    Creates smooth surfaces with some shininess; a general-purpose shader.

  • Metal
    Creates a lustrous metallic effect

  • Multi-Layer
    Creates more complex highlights than Anisotropic by layering two anisotropic highlights.

  • Oren-Nayar-Blinn
    Creates good matte surfaces such as fabric or terra-cotta; similar to Blinn.

  • Phong
    Creates smooth surfaces with some shininess; similar to Blinn, but doesn't handle highlights (especially glancing highlights) as well.

  • Strauss
    Creates both nonmetallic and metallic surfaces; has a simple set of controls.

Material

  • A Material is data that you assign to the surface or faces of an object so that it appears a certain way when rendered.
  • Materials affect the color of objects, their shininess, their opacity, and so on.
  • A standard material consists of ambient, diffuse, and specular components. You can assign maps to the various components of a standard material.

Maps

  • The images you assign to materials are called maps.
  • They include standard bitmaps (such as .bmp, .jpg, or .tga files), procedural maps such as Checker or Marble, and image-processing systems such as compositors and masking systems.
  • Materials that contain one or more images are called mapped materials.
  • The Renderer needs instructions telling it where the map should appear on the geometry. These instructions are called mapping coordinates.
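
Mapping coordinates boil down to a (u, v) lookup into the bitmap for every rendered point; a nearest-neighbour version is sketched below (real renderers filter, tile and mip-map far more carefully, and the 2x2 checker array merely stands in for a real bitmap):

import numpy as np

def sample_texture(texture, u, v):
    # texture: array of shape (H, W, 3); u and v are mapping coordinates in 0..1,
    # with (0, 0) taken here as the lower-left corner of the map.
    h, w = texture.shape[:2]
    x = min(int(u * (w - 1)), w - 1)
    y = min(int((1.0 - v) * (h - 1)), h - 1)     # flip v so v = 0 is the bottom row
    return texture[y, x]

checker = np.array([[[1, 1, 1], [0, 0, 0]],
                    [[0, 0, 0], [1, 1, 1]]], dtype=float)
print(sample_texture(checker, 0.1, 0.9))   # near the upper-left corner -> white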


Hi friends... so it was the 22nd of September when MAAC Preet Vihar organised "MAAC MANTHAN" to showcase the anmol ratan (precious gems) of the institute, not only in the sphere of academics but also in the co-curricular activities that are a part of our study. VFX, documentaries, short films, 3D animation... and beyond that, dancers, actors and singers... whatever talent you want is here at MAAC Preet Vihar!

It started with our faculty members Mr. Rahul Pandey (aka Raj) and Mr. Vijay, the hosts for the day, who entertained everyone with their great mimicry and acting skills.

The guest for the day was Mr. Naveen Gupta, COO, MAAC India.

Manthan started with a Ganesh vandana, followed by a dance set to a western vs. classical mix and then a Bollywood-style dance (it was just rocking!). The screenings continued in between these rocking, mind-blowing performances by our students. Two of our friends rocked the day with a rap and music generated entirely by mouth, and then all the Akshay Kumars of MAAC took the stage with "Mast Kalander".


Then the lions and lionesses of Punjab came on with only one thing to say: chak de fatte! Just heart-winning bhangra and giddha.

Mr. Paresh Parekh, the name always responsible for the enthusiasm and the best part of any party, brought all of his actors from MAAC Preet Vihar on stage to perform a play. So funny and cool, as you all know!

Some great animation and compositing was also shown by Voodoo. The team members of Voodoo (Amit, Ashok, Natasha, Nida, Nupur, Atif, Satender, Anshika) did a great job and really good work. Hats off to all of them!


To close the function, our students were back on stage with a Maharashtrian song, and our Mr. Rahul also showed off some dance moves. Next up were the gals: all the babes of MAAC Preet Vihar dancing to "Heyy Babby".


Now it was time for the best students to be awarded for their commitment and hard work: the second runners-up were the Crushers, the first runners-up were Anime Pazzi, and the first prize was shared by Steps and Gravity.

And last of all, our faculty and our sweet coordinators got to groove to "Jhoom Barabar Jhoom".

Everything was awesome and rocking, but that was not the end of it; it's MAAC Preet Vihar, after all, and there was still more to come. Now it was time to get up on the terrace with the guest for the night, Shankar Sahni, and everyone danced the night away to his voice and rocked the party.

Q. Choose your Interest

Options were :

Modelling
Animation
Lighting
Texturing
Rigging

Now the result is....

Total Vote : 51

Duration : One Month

Modelling 13 (25%)

Animation 11 (21%)

Lighting 17 (33%)

Texturing 5 (9%)

Rigging 5 (9%)



Let's assume that our printer is capable of 300 dpi, that I am going to print an 8 x 10 image, and that my rendered pixel size is 3000 x 2400.

You have Pixel Size and DPI and want image size:
This is useful when you want to know how big an image will print on your page if you do not allow the printer to scale the image at print time (i.e., scale to fit).

Image Width = Pixel Width / DPI
Image Width = 3000 pixels / 300DPI
Image Width = 10 Inches


Image Height = Pixel Height / DPI
Image Height = 2400 pixel / 300 DPI
Image Height = 8 Inches


You have Pixel Size and Inches and want DPI
This one is not all that useful and I can't think of any reason to use it practically, but here it is anyway.

Horizontal DPI = Pixel Size (Width) / Inches (Width)

Horz. DPI = 3000 pixels /10 inches

Horz. DPI = 300DPI

You can use the same equation for the Vertical DPI as well. Some printers are the same resolution in both axes.


You have Inches and DPI and want pixels
This one is useful when you know that your printer can print 300 dpi, you would like to print an image that is 8 x 10, and you need to know how big to render your final image. Of course you could always render it bigger, but rendering time is precious, so you never want to render more pixels than you have to.

Pixel Width = DPI x inches (Width)

Pixel Width = 300 DPI x 10 inches

Pixel Width = 3000 pixel wide

Again the same formula can be used for pixel Height as well.
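
The three relationships above are simple enough to wrap into a few helper functions (a sketch that just restates the formulas, assuming square pixels and the same DPI on both axes):

def image_size_inches(pixel_size, dpi):
    # Pixel size and DPI -> printed size in inches.
    return pixel_size / dpi

def dpi_from_size(pixel_size, inches):
    # Pixel size and print size in inches -> DPI.
    return pixel_size / inches

def pixels_needed(dpi, inches):
    # DPI and print size in inches -> pixels to render.
    return dpi * inches

# The worked example from above: a 3000 x 2400 pixel render at 300 DPI.
print(image_size_inches(3000, 300))   # 10.0 inches wide
print(image_size_inches(2400, 300))   # 8.0 inches tall
print(dpi_from_size(3000, 10))        # 300.0 DPI
print(pixels_needed(300, 10))         # 3000 pixels wide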


Sources : Internet & Forum


It is difficult to understand the relation between pixels, DPI and print size. Here we are trying to clear up the confusion around pixels and their relation to the other variables.

Basically when you are printing/rendering an image you have three variables to consider:
  • DPI/PPI (Dots per Inch / Pixels per Inch) - both are known as resolution.
  • Pixel size - measured in pixels (this is normally what you render to, and is how monitors are measured).
  • Print size - measured in inches (it can be anything, but I'll use inches for convenience).
a) DPI/PPI
First you need to know what resolution your printer can handle. Some will say that they go to 1200 dpi or 600 dpi, but in practice you should never need to go bigger than 300 dpi, even if you are printing a 60 ft wide billboard. Just like digital cameras' non-optical zoom abilities, printers use DPI to claim superiority in the industry.

DPI is the number of dots of ink that are put down by your printer onto a page over a one inch line. The resolution that a printer can achieve is based upon how close together the print head elements or jets are positioned. Of course it is a bit different with a 4 color press, but for now I'll just leave it at that. You may also notice that your inkjet will say that the vertical resolution is different than the horizontal resolution. This is because in one axis the printer is relying on the proximity of the jets to each other, whereas the other direction is related to the sensitivity of the rollers that are feeding the paper across the jets.


b) Pixel Size
I'm assuming that everyone is pretty familiar with pixel size, as we deal with it on a daily basis. However, don't assume that your print house will be, because in many cases they are not. You will usually need to give them an image size in inches based upon the resolution that their printer is capable of. Don't try to explain it to them because they just won't get it.

Pixels are something that your monitor uses to describe the very small square dots (pixels) of light that are lit up by the guns of your CRT. (LCDs are different.) Typically we say that a monitor is 72 dpi. A pixel translated to a page is usually represented by many dots of ink, depending upon the print resolution.

c) Print Size
This one is pretty self-explanatory: it is the size of the image on the page that your printer printed.



Sources : Internet & Forum

ZBrush 3.1 gives you access to unparalleled power and control previously unknown in digital art creation software. Controls enable sculptors to create with a stylus and a tablet as intuitively as if they were using their hands on a block of clay. ZBrush further extends the creation experience, harnessing technology and providing artists with a multitude of creation-enhancing tools.

New Features
  • Transpose
    Posing your model is as simple as moving an action line. Create a mask to isolate an area, click and drag. It’s no more complicated than posing a clay model with your fingers
  • MATCAP
    Matcap lets you apply real world texturing and lighting to your model. Sample a few points from your chosen photograph or texture; apply to your model, and seconds later you’ve got a model complete with texture and lighting
  • Perspective Camera
    The perspective camera gives you the ability to apply perspective to your model and to adjust your focal length at will
  • Speed
    Multithreaded support for up to 256 processors lets ZBrush compute at the speed of your imagination
  • Higher Poly Count
    Up to a billion polygons allow you to create objects with almost infinite detail.
  • HD Geometry
    HD Geometry allows you to divide your model to 1 billion polygons, and your system will only process the polygons visible onscreen.
  • Topology
    ZSpheres makes simple and fast work of creating a new topology. And the projection feature lets you shrink-wrap your topology to an existing model.
  • Scripted Interface
    ZBrush’s integrated scripting lets you create an interface that suits your workflow and your needs. Move existing interface items as you like, or add entirely new buttons and palettes to your interface.
  • User Defined Alpha and texture Start up
    Customize your environment with the alphas, textures, materials and plug-ins you use most.
  • New Movie Palette
    Create, view or export ZBrush tutorials, movies, turntables, models, and even time-lapse videos of your sculpting process.
  • 64 bit support
    ZBrush takes full advantage of your 64-bit system.




Sources : http://www.pixologic.com/zbrush/corefeatures/


Accelerate your creative workflow and increase your pipeline efficiency. Autodesk® 3ds Max® 2008 3d modeling, animation, and rendering software helps design visualization professionals, game developers, and visual effects artists maximize productivity by streamlining the process of working with complex scenes.

Features of 2008

  • Accelerated Performance
    The integration of new technology into the software’s Adaptive Degradation System improves
    interactive performance by automatically simplifying scene display to meet a user-defined target frame rate. You control how 3ds Max adjusts scene display—for example, whether the smallest objects are hidden or distant objects have less detail—and 3ds Max calculates how best to achieve it. When combined with the new Direct3D® mesh caching that groups objects by materials, the result is that tens of thousands of objects can be just as interactive as ten objects. In addition, loading, arrays, FBX® and OBJ export, and other areas of the software
    perform significantly faster.


  • Scene Explorer Scene Management
    3ds Max 2008 delivers Scene Explorer, a robust new tool that provides a hierarchical view of scene data and fast scene analysis, as well as editing tools that facilitate working with even the most complex, object-heavy scenes. Scene Explorer gives you the ability to sort, filter, and search a scene by any object type or property (including metadata)—with stackable filtering, sorting, and searching criteria. This new tool also enables you to save and store multiple Explorer instances and to link, unlink, rename, hide, freeze, and delete objects, regardless of what objects are currently selected in the scene. You can also configure columns to display and edit any object property, and because this feature is scriptable and SDK extendable, you can use callbacks to add custom column definitions.


  • Review Rendering
    This powerful new toolset gives you immediate feedback on various render settings, enabling you to iterate rapidly. This means you can now quickly hone in on your desired look without waiting for a software render—perfect for over-the-shoulder client/boss feedback sessions, and other iterative workflows. Based on the latest game engine technology, Review delivers interactive viewport previews of shadows (including self-shadowing and up to 64 lights simultaneously), the 3ds Max sun/sky system, and mental ray Architectural and Design material settings.


  • MAXScript ProEditor
    3ds Max 2008 marks the debut of the new MAXScript ProEditor. This intuitive new interface for working with MAXScript includes multilevel undo functionality; fast, high-quality code colorization; rapid opening of large documents; line number display; regular expressions in search/replace; folding of sections of the script; support for user-customization; and many
    other features.


  • Enhanced DWG Import
    3ds Max 2008 delivers faster, more accurate importing of DWG files. Significantly improved memory management enables you to import large, complex scenes with multiple objects in considerably less time. Improved support for material assignment and naming, solid object import, and normals management facilitate working with software products based on the Revit 2008 platform. Plus, a new Select Similar feature identifies all objects in an imported DWG scene that contain characteristics similar to those of a selected object. This capability lets you select and edit multiple imported objects simultaneously— dramatically streamlining DWG-based workflows.


  • Artist-Friendly Modeling Options
    3ds Max 2008 gives you a more streamlined, artist-friendly modeling workflow through a collection of hands-on modeling options that let you focus on the creative process. These options include selection previewing and the ability to have existing modeling hotkeys and pivots become temporary overrides.


  • Biped Enhancements
    This latest release provides new levels of flexibility with regard to your Biped rigs. A new Xtras tool lets you create and animate extraneous Biped features anywhere on your rig (for example, wings or additional facial bones) and save them as BIP files. These files are supported in Mixer and Motion Flow, as well as in layers where new layering functionality enables you to save BIP files as offsets from each layer to isolate character motion. As a result, you can save each layer as its own asset for export into a game.


  • Expanded Platform Support
    3ds Max 2008 is the first full release of the software officially compatible with Microsoft® Windows Vista™ 32-bit and 64-bit operating systems, and the DirectX 10 platform.




Sources : www.autodesk.com/3dsmax



Scott Farrar
(Visual Effects Supervisor)

Scott Farrar is a Visual Effects Supervisor at Industrial Light & Magic (ILM), a position he held when he worked on the effects for Star Trek VI: The Undiscovered Country. His first contribution to the Star Trek franchise, however, was as photographic effects cameraman on Star Trek: The Motion Picture, before he joined ILM. During his early years with ILM, he worked as special effects cameraman on Star Trek II: The Wrath of Khan and Star Trek III: The Search for Spock.

Joining ILM as a visual effects cameraman, Farrar's early work includes Star Wars: Episode VI - Return of the Jedi, Willow, and Who Framed Roger Rabbit. His camera work on the film Cocoon helped ILM win an Academy Award for Best Visual Effects in 1986. After becoming an ILM Visual Effects Supervisor in 1988, Farrar helped the company earn three more Academy Award nominations for their visual effects work in the films Backdraft, A.I. Artificial Intelligence, and The Chronicles of Narnia: The Lion, the Witch and the Wardrobe. His other credits as an ILM Visual Effects Supervisor include Cocoon: The Return, Back to the Future Part II, Back to the Future Part III, Wolf, Amistad, Deep Impact, The Mummy, Star Wars: Episode I - The Phantom Menace, Minority Report, and, most recently, 2007's Transformers (written by Roberto Orci & Alex Kurtzman). He was also an additional effects plate photographer on Jurassic Park and supervised the visual effects of the end sequence in Men in Black.


FILMOGRAPHY
  • Foes (1977)
    Special Effects
  • Star Trek III: The Search for Spock (1984)
    Camera Operator
  • Back to the Future Part III (1990)
    Special Effects
  • Star Trek VI: The Undiscovered Country (1991)
    Special Effects
  • Alive (1993)
    Special Effects
  • Congo (1995)
    Supervisor/Manager
  • Daylight (1996)
    Special Effects Supervisor
  • Amistad (1997)
    Special Effects Supervisor
  • Deep Impact (1998)
    Special Effects Supervisor
  • Space Cowboys (2000)
    Visual Effects Supervisor
  • Artificial Intelligence (2001)
    Visual Effects Supervisor
  • Minority Report (2002)
    Visual Effects Supervisor
  • Peter Pan (2003)
    Visual Effects Supervisor
  • Rent (2005)
    Visual Effects Supervisor
  • The Chronicles of Narnia: The Lion, the Witch, and the Wardrobe (2005)
    Visual Effects Supervisor
  • Transformers (2007)
    Visual Effects Supervisor


Copyright 2010 Lets Do Blogging