Sources : CGtantra

Red, green and blue channels have all been used, but blue has been favored for several reasons. Blue is the complementary color to flesh tone: since the most common color in most scenes is flesh tone, the opposite color is the logical choice to avoid conflicts. Historically, cameras and film have been most sensitive to blue light, although this is less true today.

Green has its own advantages, beyond the obvious one of greater flexibility in matting with blue foreground objects. Green paint has greater reflectance than blue paint, which can make matting easier. Also, video cameras are usually most sensitive in the green channel, and often have the best resolution and detail in that channel. A disadvantage is that green spill is almost always objectionable and obvious even in small amounts, whereas blue can sometimes slip by unnoticed.

Usually the background color reflects onto the foreground talent, creating a slight blue tinge around the edges. This is known as blue spill. It doesn't look nearly as bad as the green spill one would get from a green screen.

Usually only one camera is used as the Chroma Key camera. This creates a problem on three-camera sets: the other cameras can see the blue screen. The screen must be integrated into the set design, and it is easier to design around a bright sky blue than an intense green or red.

Sources : Internet

The Chroma Key process is based on the luminance key. In a luminance key, everything in the image over (or under) a set brightness level is "keyed" out and replaced by either another image or a color from a color generator. (Think of a keyhole or a cookie-cutter.) Primarily this is used in the creation of titles. A title card with white-on-black titles is prepared and placed in front of a camera. The camera signal is fed into the keyer's foreground input, and the background video is fed into the keyer's background input. The level control knob on the keyer is adjusted to cause all the black on the title card to be replaced by the background video. The white letters now appear over the background image.
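To make the mechanism concrete, here is a minimal sketch of a luminance key in Python with NumPy. The Rec. 601 luminance weights are standard, but the threshold value and array names are assumptions for illustration, not the behaviour of any particular keyer.

```python
import numpy as np

def luminance_key(foreground, background, threshold=0.5):
    """Replace dark areas of `foreground` with `background`.

    foreground, background: float RGB arrays of shape (H, W, 3), values 0..1.
    threshold: the brightness level below which the background shows through
               (an assumed default; a real keyer exposes this as the level knob).
    """
    # Approximate luminance with the Rec. 601 weights.
    luma = (0.299 * foreground[..., 0]
            + 0.587 * foreground[..., 1]
            + 0.114 * foreground[..., 2])
    # Pixels at or above the threshold keep the foreground (the white titles);
    # darker pixels are "keyed" out and replaced by the background video.
    keep = (luma >= threshold)[..., np.newaxis]
    return np.where(keep, foreground, background)
```

With a white-on-black title card as the foreground, everything below the threshold keys out and the white letters appear over the background image.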

Luminance keying works great with titles, but not so great for making live action composites. When we want to key people over a background image, problems arise because people and their clothing have a wide range of tones. Hair, shoes and shadow areas may be very dark, while eyes, skin highlights and shirt collars can approach 100% white. Those areas might key through along with the background.

Chroma Key creates keys on just one color channel. Broadcast cameras use three independent sensors, one for each color: red, green and blue. Most cameras can output these RGB signals separately from the composite video signal, so the original chroma key was probably created by feeding the blue channel of a camera into a keyer. This works, sort of, but soon manufacturers created dedicated chroma keyers that could accept all three colors, plus the background composite signal and the foreground composite signal. This made it possible to select any color for the key and fine-tune the selection of the color.

As keyers became more sophisticated, with finer control of the transition between background and foreground, the effect became less obvious and jarring. Today's high-end keyers can make a soft key that is basically invisible.

Creating a blue screen composite image starts with a subject that has been photographed in front of an evenly lit, bright, pure blue background. The compositing process, whether photographic or electronic, replaces all the blue in the picture with another image, known as the background plate.
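A very rough software analogue of that replacement step might look like the sketch below. The "blueness" measure (how far blue exceeds the other channels) and the tolerance value are assumptions chosen for illustration; real keyers offer far finer control over the matte edge and spill.

```python
import numpy as np

def blue_screen_composite(foreground, background_plate, tolerance=0.2):
    """Very simplified blue-screen composite.

    foreground: subject shot against a bright, evenly lit blue screen,
                float RGB array of shape (H, W, 3), values 0..1.
    background_plate: image shown wherever the blue screen is visible.
    tolerance: how strongly blue must dominate red and green before a
               pixel is treated as pure background (assumed value).
    """
    r, g, b = foreground[..., 0], foreground[..., 1], foreground[..., 2]
    # A pixel belongs to the screen where blue clearly exceeds both other channels.
    blueness = b - np.maximum(r, g)
    # Soft matte: 0 keeps the foreground, 1 shows the background plate.
    matte = np.clip(blueness / tolerance, 0.0, 1.0)[..., np.newaxis]
    return (1.0 - matte) * foreground + matte * background_plate
```

Because the matte is soft rather than a hard on/off switch, edges blend between the two images, which is the same property that makes today's high-end keys look invisible.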

Blue screen composites can be made optically for still photos or movies, electronically for live video, and digitally for computer images. Until very recently all blue screen compositing for films was done optically, and all television composites were done using analog real-time circuits.

In addition to blue, other colors can be used; green is the most common, although red has sometimes been used for special purposes.


Sources : Internet

Difference between Non-Linear Editing Systems (NLEs) & After Effects :-

  • The main difference between After Effects and NLEs is that After Effects is layer-oriented, while NLEs are generally track-oriented.
  • In After Effects, each individual media object (video clip, audio clip, still image, etc.) occupies its own layer. NLEs, by contrast, use a system where individual media objects can occupy the same track as long as they do not overlap in time.
  • The track-oriented system is more suited for editing and can keep project files much more concise. The layer-oriented system that After Effects adopts is better suited for extensive effects work and keyframing.

After Effects uses a system of layers organized on a timeline to create composites from still images and motion footage, such as video files. Properties such as position and opacity can be controlled independently for each layer, and each layer can have effects applied. After Effects is often described as the "Photoshop of video", because its flexibility allows compositors to alter video in any way they see fit, as Photoshop does for images.

Although After Effects can create images of its own, it is generally used to composite material from other sources to make moving graphics (also known as motion graphics). For example, with a picture of a space ship and a picture of a star background, After Effects could be used to place the ship in front of the background and animate it to move across the stars.

The main interface consists of several panels (windows in versions prior to After Effects 7.0). Three of the most commonly used panels are the Project panel, the Composition panel, and the Timeline panel. The Project panel acts as a bin to import stills, video, and audio footage items. Footage items in the Project panel are used in the Timeline panel, where layer order and timing can be adjusted. The items visible at the current time marker are displayed in the Composition panel.

Channels are grayscale images that store different types of information:
  • Color information channels are created automatically when you open a new image.
  • The image’s color mode determines the number of color channels created. For example, an RGB image has a channel for each color (red, green, and blue) plus a composite channel used for editing the image (see the sketch after this list).
  • Alpha channels store selections as grayscale images. You can add alpha channels to create and store masks, which let you manipulate or protect parts of an image.
  • Spot color channels specify additional plates for printing with spot color inks.
  • An image can have up to 56 channels. All new channels have the same dimensions and number of pixels as the original image.
  • As long as you save a file in a format supporting the image’s color mode, the color channels are preserved.
  • Alpha channels are preserved only when you save a file in Photoshop, PDF, PICT, Pixar, TIFF, PSB, or raw formats. DCS 2.0 format preserves only spot channels. Saving in other formats may cause channel information to be discarded.
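As a small illustration of those points, the following Pillow sketch splits an RGB image into its three grayscale colour channels and attaches an alpha channel. The file names are placeholders, not part of any real workflow.

```python
from PIL import Image

# The file names here are placeholders.
img = Image.open("photo.jpg").convert("RGB")

# Each colour channel is itself a grayscale image.
red, green, blue = img.split()
red.save("red_channel.png")

# Attach an alpha channel: white (255) = fully visible, black (0) = transparent.
alpha = Image.new("L", img.size, 255)
rgba = img.copy()
rgba.putalpha(alpha)
rgba.save("with_alpha.png")  # PNG is one of the formats that keeps the alpha channel
```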

Some points to remember :-
  • When you select part of an image, the area that is not selected is “masked” or protected from editing. So, when you create a mask, you isolate and protect areas of an image as you apply color changes, filters, or other effects to the rest of the image. You can also use masks for complex image editing such as gradually applying color or filter effects to an image.
  • Masks are stored in alpha channels. Masks and channels are grayscale images, so you can edit them like any other image with painting tools, editing tools and filters. Areas painted black on a mask are protected, and areas painted white are editable (as the sketch after this list illustrates).
  • To save a selection more permanently, you can store it as an alpha channel. The alpha channel stores the selection as an editable grayscale mask in the Channels palette. Once stored as an alpha channel, you can reload the selection at any time or even load it into another image.
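A minimal NumPy sketch of that black-protected/white-editable convention follows. The warm-tint effect and the array names are hypothetical, used purely for illustration.

```python
import numpy as np

def apply_through_mask(image, mask, effect):
    """Apply `effect` only where the grayscale mask is white.

    image: float RGB array of shape (H, W, 3), values 0..1.
    mask:  float array of shape (H, W); 0.0 = protected (black),
           1.0 = editable (white); greys blend the effect in gradually.
    effect: function taking and returning an image of the same shape.
    """
    m = mask[..., np.newaxis]
    return (1.0 - m) * image + m * effect(image)

# warm_tint is a hypothetical effect used only for this demo.
warm_tint = lambda img: np.clip(img * np.array([1.1, 1.0, 0.9]), 0.0, 1.0)

# Tiny demo: a 2x2 grey image with the left column protected by the mask.
image = np.full((2, 2, 3), 0.5)
mask = np.array([[0.0, 1.0],
                 [0.0, 1.0]])
print(apply_through_mask(image, mask, warm_tint))
```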

There is hardly a movie that is made in Hollywood these days that does not have extensive special effects (SFX) work. Bollywood has also jumped on the SFX bandwagon with some gusto. A large part of SFX is animation. But SFX goes way beyond animation. In this issue, we go behind the scenes to see how some of the most spectacular and some of the most realistic SFX seen in recent times have been achieved. How various effects are executed is explained towards the end of this section. You may want to read that first.

Creating Spiderman
If you have seen the recent blockbuster Spiderman, you would recall the last scene of Tobey Maguire swinging away, amidst the high rises of Manhattan. Surely, you would have wondered how a human being, even an especially adept and agile stuntman, could have pulled it off. If you have wondered and would like to know, welcome to the world of computer-generated SFX.

Let us take the Spiderman scene described above as an example. The bulk of the scene was computer-generated imagery (CGI). There was no real Manhattan, and no real Maguire, most of the time. Almost everything was generated by software.

This software-generated imagery was interspersed with live shots of stunt doubles for street-level shots, and the occasional shot of Maguire himself, to create the footage you and I saw.

What software was used for this? Like most other SFX projects of this scale, Spiderman used standard effects packages like Maya, extensions specifically written for the Spiderman project, and completely new packages written just to create Spiderman-specific effects like spider webs.

Spiderman and the Green Goblin were animated for their many stunts in Maya. So was the genetically-altered spider that bit Peter Parker, turning him into the superhero. Spider webs, web-slinging effects and the Manhattan buildings were digitally created in Houdini, software from Side Effects. RenderMan from Pixar was also used. Animation created in Houdini and render paths for RenderMan were coordinated using Perl scripts.

Houdini is available for NT, Linux, IRIX and Solaris. Maya from Alias WaveFront is available for NT, IRIX, Linux and MacOS X.

Rendering all this is a demanding task in itself, and SFX and animation studios build render farms for the purpose. Initially, most of the work was done on heavy-duty RISC machines from SGI and Sun. Recently there’s been a shift towards Linux and render farms built of commodity Intel-based machines. A render farm is a collection of machines, networked together by a high-bandwidth connection, and dedicated to running the rendering. Specially written tools divide the rendering work amongst the machines in the render farm and also keep track of what is going on.

Announcing New Autodesk Visual Effects And Editing

Realize your ideas in the most creative way possible using the most advanced creative toolset on the market today. Autodesk Visual Effects and Editing solutions provide the high performance and interactivity needed to truly experiment and test new ideas. The combination of great talent with Autodesk tools is unbeatable. Use the best tools in the business.

From tracking to keying, color correction to motion estimation, and advanced timeline editing to our unique interactive 3D compositing environment, Autodesk Effects and Editing solutions offer you the broadest and richest toolset in the industry. No matter what the project demands, you can deal with it more creatively and efficiently, even exceeding the expectations of your clients.

Visual Effects Systems

Autodesk® Flame 2007
Industry-leading real-time visual effects design and compositing system.

Autodesk® Inferno 2007
The ultimate interactive design system for high-resolution visual effects.

Autodesk® Flint 2007
Advanced visual effects system for post-production and broadcast graphics.


Editing Systems

Autodesk® Smoke 2007
Integrated editing and finishing system for SD, HD, 2K film, and above.

Discreet® Fire 2007
The ultimate real-time, non-compressed, high-resolution, non-linear editing, and finishing system.

Andrew Daffy
(Animator, Maya Guru)


Born in the UK in 1976, Andrew Daffy specializes in CGI Supervision for the post production industry. He started working at Framestore CFC as a Junior Animator in 1996.

After earning the position of Head of 3D Commercials some years later, he worked on award-winning projects such as Levis Odyssey, Walking With Dinosaurs and two James Bond title sequences.

Andrew's final project within the company was the CGI supervision of a bat sequence for the film Harry Potter and the Prisoner of Azkaban. He's now looking at branching out. As well as editing promos and pitching for directing work, he's currently researching the idea of setting up a London based school focusing solely on training in photorealistic animation.

As well as freelancing for the UK's major post-production houses and animation studios, Daffy has now set up his own company - "THE HOUSE OF CURVES".

Photographs, magazines and natural objects such as an apple create color by subtracting or absorbing certain wavelengths of light while reflecting other wavelengths back to the viewer. This phenomenon is called subtractive color.

A red apple is a good example of subtractive color: the apple emits no light energy of its own; it merely reflects the wavelengths of white light that evoke the sensation of red and absorbs most of the other wavelengths. The viewer (or detector) can be the human eye, film in a camera or a light-sensing instrument.

The subtractive color system involves colorants and reflected light. Subtractive color starts with an object (often a substrate such as paper or canvas) that reflects light and uses colorants (such as pigments or dyes) to subtract portions of the white light illuminating an object to produce other colors. If an object reflects all the white light back to the viewer, it appears white. If an object absorbs (subtracts) all the light illuminating it, no light is reflected back to the viewer and it appears black. It is the subtractive process that allows everyday objects around us to show color.

Color paintings, color photography and all color printing processes use the subtractive process to reproduce color. In these cases, the reflective substrate is canvas (paintings) or paper (photographs, prints), which is usually white.

Printing presses use color inks that act as filters and subtract portions of the white light striking the image on paper to produce other colors. Printing inks are transparent, which allows light to pass through to and reflect off the paper base. It is the paper that reflects any unabsorbed light back to the viewer. The offset printing process uses cyan, magenta and yellow (CMY) process color inks and a fourth ink, black. The black printing ink is designated K to avoid confusion with B for blue. Overprinting one transparent printing ink with another produces the subtractive secondary colors: red, green and blue.
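The relationship between the additive primaries and the process inks can be written down directly. The sketch below uses the simplest textbook RGB-to-CMYK conversion; real prepress work relies on ICC colour profiles and is considerably more involved.

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0..1) to CMYK (0..1) conversion.

    Cyan, magenta and yellow are the complements of red, green and blue;
    the black (K) plate is pulled out as the component common to all three.
    """
    c, m, y = 1.0 - r, 1.0 - g, 1.0 - b
    k = min(c, m, y)
    if k == 1.0:                        # pure black
        return 0.0, 0.0, 0.0, 1.0
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

print(rgb_to_cmyk(1.0, 0.0, 0.0))       # pure red -> (0.0, 1.0, 1.0, 0.0)
```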

The additive color system involves light emitted directly from a source, before an object reflects the light. The additive reproduction process mixes various amounts of red, green and blue light to produce other colors. Combining one of these additive primary colors with another produces the additive secondary colors cyan, magenta, yellow. Combining all three primary colors produces white. Television and computer monitors create color using the primary colors of light. Each pixel on a monitor screen starts out as black. When the red, green and blue phosphors of a pixel are illuminated simultaneously, that pixel becomes white. This phenomenon is called additive color.

To illustrate additive color, imagine three spotlights, one red, one green and one blue focused from the back of an ice arena on skaters in an ice show. Where the blue and green spotlights overlap, the color cyan is produced; where the blue and red spotlights overlap, the color magenta is produced; where the red and green spotlights overlap the color yellow is produced. When added together, red, green and blue lights produce what we perceive as white light.
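The spotlight picture translates directly into arithmetic on RGB triples, as this small sketch shows; the values are the obvious full-intensity ones, chosen only for illustration.

```python
import numpy as np

red   = np.array([1.0, 0.0, 0.0])
green = np.array([0.0, 1.0, 0.0])
blue  = np.array([0.0, 0.0, 1.0])

# Overlapping spotlights simply add their light.
print(np.clip(blue + green, 0, 1))         # [0. 1. 1.] -> cyan
print(np.clip(blue + red, 0, 1))           # [1. 0. 1.] -> magenta
print(np.clip(red + green, 0, 1))          # [1. 1. 0.] -> yellow
print(np.clip(red + green + blue, 0, 1))   # [1. 1. 1.] -> white
```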

As mentioned before, television screens and computer monitors are examples of systems that use additive color. Thousands of red, green and blue phosphor dots make up the images on video monitors. The phosphor dots emit light when activated electronically, and it is the combination of different intensities of red, green and blue phosphor dots that produces all the colors on a video monitor. Because the dots are so small and close together, we do not see them individually, but see the colors formed by the mixture of light.

Colors often vary from one monitor to another. This is not new information to anyone who has visited an electronics store with various brands of televisions on display. Also, colors on monitors change over time. Currently, there are no color standards for the phosphors used in manufacturing monitors for the graphic arts industry. All image capture devices, including digital cameras, flatbed scanners, drum scanners, and video cameras, utilize the additive color system to gather the information needed to reproduce a color image.

To summarize: additive color involves the use of colored lights. It starts with darkness and mixes red, green and blue light together to produce other colors. When combined, the additive primary colors produce the appearance of white.

What is Color?

Color is all around us. It is a sensation that adds excitement and emotion to our lives. Everything from the clothes we wear to the pictures we paint revolves around color. Without color, the world (especially RGB World) would be a much less beautiful place. Color can also be used to describe emotions; we can be red hot, feel blue, or be green with envy.

In order to understand color we need a brief overview of light. Without light, there would be no color, and hence no RGB World. Thank God for light!

Light is made up of energy waves which are grouped together in what is called a spectrum. Light that appears white to us, such as light from the sun, is actually composed of many colors. The wavelengths of light are not colored, but produce the sensation of color.

Raster Images

  • A Raster image is a collection of dots called pixels.
  • Each pixel is a tiny colored square.
  • When an image is scanned, the image is converted to a collection of pixels called a raster image.
  • Scanned graphics and web graphics (JPEG and GIF files) are the most common forms of raster images.
  • The quality of an imprint produced from a raster image is dependent upon the resolution (dpi) of the raster image, the capabilities of the printing technology and whether or not the image has been scaled up (a short worked example follows this list).
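As a worked example of that last point, the effective resolution of a raster image falls as it is scaled up. The pixel count and print sizes below are invented purely for illustration.

```python
def effective_dpi(pixel_width, print_width_inches):
    """Resolution actually delivered at a given print size."""
    return pixel_width / print_width_inches

# A 1200-pixel-wide scan printed 4 inches wide delivers 300 dpi;
# scale the same image up to 8 inches and only 150 dpi remain.
print(effective_dpi(1200, 4))   # 300.0
print(effective_dpi(1200, 8))   # 150.0
```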

Vector Images

  • A vector image is a collection of connected lines and curves that produce objects.
  • When creating a vector image in a vector illustration program, nodes (drawing points) are inserted, and lines and curves connect the nodes together.
  • Each node, line and curve is defined in the drawing by the graphics software by a mathematical description.
  • Text objects are created by connecting nodes, lines and curves.
  • In a vector object, colors are like clothes over the top of a skeleton.
  • Vector images can be scaled up or down without any loss of quality (see the sketch after this list).
  • Since vector images are composed of objects not pixels, you can change the color of individual objects without worrying about individual pixels.
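A toy sketch of why vector images scale cleanly: the object is stored as node coordinates, and scaling simply multiplies those coordinates, so there are no pixels to degrade. The triangle and scale factor are arbitrary.

```python
# A vector object is just a list of nodes (points) plus the instructions
# for how lines or curves connect them.
triangle = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]

def scale(points, factor):
    """Scaling multiplies every coordinate; there are no pixels to degrade."""
    return [(x * factor, y * factor) for x, y in points]

print(scale(triangle, 10))   # [(0.0, 0.0), (40.0, 0.0), (20.0, 30.0)]
```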

Bit depth, sometimes called "brightness resolution", defines the number of possible tones or colours every pixel can have. The greater the bit depth, the greater the depth of colour, and the larger the colour (or greyscale) palette (number of colours). For example, 8-bit colour has a range of 256 colours (or shades of grey) and 24-bit (or higher) colour provides 16.7 million colours, while 30-bit colour provides over a billion, which offers higher definition and thus better results in reproducing details such as the shadowy parts of an image.
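The palette sizes quoted above follow directly from the bit depth: the number of possible tones is 2 raised to the number of bits. A one-liner makes the arithmetic explicit.

```python
for bits in (8, 24, 30):
    print(f"{bits}-bit: {2 ** bits:,} possible tones or colours")
# 8-bit: 256    24-bit: 16,777,216    30-bit: 1,073,741,824
```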


Illustrations: 2-bit black & white; 8-bit 256-level greyscale; 8-bit 256 colour; 24-bit true colour.

When digital technology is used to capture, store, modify and view photographic images, the images must first be converted to a set of numbers in a process called digitisation. Computers are very good at storing and manipulating numbers and can therefore handle digitised images with remarkable speed. Once digitised, photographs can be examined, altered, displayed, transmitted, printed or archived in an incredible variety of ways. As you explore digital imaging, it helps to be familiar with a few basic terms.

Digital images consist of a grid of small squares, known as picture elements, or pixels. These basic building blocks are the smallest elements used by computer monitors or printers to represent text, graphics, or images.

Resolution describes the clarity or level of detail of a digital image. Technically the term "resolution" refers to spatial resolution and brightness resolution; commonly, however, the word is used to refer to spatial resolution alone. The higher the resolution, the greater the detail in the image (and the larger the file). For computers and digital cameras, resolution is measured in pixels; for scanners, resolution is measured in pixels per inch (ppi) or dots per inch (dpi); for printers, resolution is measured in dots per inch (dpi).
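A worked example of those measurements, assuming an uncompressed 24-bit scan; the print size and scanner resolution below are invented for illustration.

```python
width_in, height_in = 6, 4     # print size in inches (assumed)
ppi = 300                      # scan resolution in pixels per inch (assumed)
bytes_per_pixel = 3            # 24-bit colour = 3 bytes per pixel

width_px = width_in * ppi      # 1800 pixels
height_px = height_in * ppi    # 1200 pixels
size_mb = width_px * height_px * bytes_per_pixel / (1024 ** 2)

print(width_px, height_px)                    # 1800 1200
print(round(size_mb, 1), "MB uncompressed")   # 6.2 MB uncompressed
```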





Copyright 2010 Lets Do Blogging