Anti-Aliasing

Aliasing is a term from signal analysis referring to the phenomenon that occurs when a signal is sampled at too low a rate: the result is a signal of lower frequency, i.e. some of the detail has been lost. This lower-frequency result is called the alias.

This problem also occurs in the rasterization process, because we are trying to represent continuously varying shapes and fine textures on a coarse grid of pixels. Aliasing manifests itself as jagged diagonal edges (jaggies), incorrectly rendered textures, and small or thin objects that may not be rendered at all (spatial aliasing) (figure below).

Aliasing is particularly noticeable in animation, where textures seem to change from one frame to the next and small objects can appear and disappear (scintillation); these phenomena are known as temporal aliasing.

Anti-aliasing techniques usually involve blurring the sharp transition at the edge of objects. At this stage we will look at three traditional anti-aliasing methods, more advanced techniques will be discussed in connection with ray-tracing.

[Figure: left, an image with no antialiasing; right, the same image antialiased using a supersampling method]

Prefiltering (object based anti-aliasing)

Prefiltering techniques compute an object's coverage of each pixel and assign the pixel an alpha value based on the percentage coverage. This can be a particularly expensive technique for objects other than polygons, for which efficient algorithms have been devised.
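As a sketch of the coverage idea, the fraction of a pixel covered by an object can be estimated by testing a fine grid of points inside the pixel and using the hit ratio as the alpha value. The `inside` predicate and the grid resolution here are illustrative assumptions, not a specific published algorithm:

```python
def coverage(px, py, inside, n=16):
    """Estimate the fraction of pixel (px, py) covered by an object.

    `inside(x, y)` returns True when the point lies in the object;
    the unit pixel is subdivided into an n-by-n grid of sample points.
    """
    hits = 0
    for i in range(n):
        for j in range(n):
            x = px + (i + 0.5) / n   # sample point within the pixel
            y = py + (j + 0.5) / n
            if inside(x, y):
                hits += 1
    return hits / (n * n)

# Example: a half-plane edge y <= x crossing the pixel at (0, 0);
# roughly half the pixel is covered, so alpha is near 0.5
alpha = coverage(0, 0, lambda x, y: y <= x)
print(alpha)
```

The returned fraction would then be used as the pixel's alpha value when compositing the object's colour against the background.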

Pitteway and Watkinson developed a technique based on Bresenham's line algorithm. This method exploits the fact that the pixel coverage of a straight edge changes predictably from one scan line to the next.

Pre-filtered finite line

Postfiltering

Postfiltering uses the colours of neighbouring pixels to determine whether a pixel is an edge pixel (silhouette edges are the source of most aliasing). In this case the value of a pixel is a weighted average, with the centre sample contributing more than the edge samples.

To calculate the average, a mask or window function is laid over the set of samples; each sample is multiplied by the corresponding entry in the mask, and the products are summed to give the pixel value (see figure below). The weights in the mask must sum to unity.

Sometimes larger masks are used, such as $5\times 5$ or $9\times 9$. These "look further" into the area surrounding the centre sample and can give better smoothing results.

$$\frac{1}{16}\begin{bmatrix}1 & 2 & 1\\ 2 & 4 & 2\\ 1 & 2 & 1\end{bmatrix}\qquad
\frac{1}{8}\begin{bmatrix}0 & 1 & 0\\ 1 & 4 & 1\\ 0 & 1 & 0\end{bmatrix}\qquad
\frac{1}{81}\begin{bmatrix}1 & 2 & 3 & 2 & 1\\ 2 & 4 & 6 & 4 & 2\\ 3 & 6 & 9 & 6 & 3\\ 2 & 4 & 6 & 4 & 2\\ 1 & 2 & 3 & 2 & 1\end{bmatrix}$$

Typical Window Functions
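As a minimal sketch, applying a $3\times 3$ mask to one sample of a greyscale image takes only a few lines. The mask below uses weights 1-2-1 / 2-4-2 / 1-2-1, which sum to 16, so it is normalised by $\frac{1}{16}$ to satisfy the sum-to-unity requirement; the sample values are illustrative:

```python
# 3x3 weighted mask; the weights sum to 16, so scale by 1/16
# so that the filtered values stay in the same range as the input.
MASK = [[1, 2, 1],
        [2, 4, 2],
        [1, 2, 1]]
SCALE = 1 / 16

def filter_pixel(samples, r, c):
    """Weighted average of the sample at (r, c) and its 8 neighbours."""
    total = 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            total += MASK[dr + 1][dc + 1] * samples[r + dr][c + dc]
    return SCALE * total

# A hard vertical edge: 0 on the left, 1 on the right
samples = [[0, 0, 1, 1]] * 3
print(filter_pixel(samples, 1, 1))   # edge pixel is blurred towards its dark side
```

Because the centre weight (4) dominates, flat regions are left essentially unchanged while the sharp transition at the edge is spread over neighbouring pixels.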

Supersampling

Supersampling solves the problem of too few samples by taking many samples at different positions for each pixel. The pixel's value (colour intensity) is the average of these samples.

A typical implementation involves double-sampling: taking a sample at every half pixel. This process requires roughly four times as many samples to be taken, but gives nine samples for each pixel (a $3\times 3$ grid). The nine samples are averaged to obtain the pixel value.
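The double-sampling scheme above can be sketched as follows, where `shade` stands in for the renderer's sampling function (an assumption, since the text does not name one):

```python
def pixel_value(shade, px, py):
    """Average the 3x3 grid of half-pixel samples belonging to pixel (px, py)."""
    total = 0.0
    for i in range(3):
        for j in range(3):
            # samples at offsets 0, 0.5 and 1.0 within the pixel
            total += shade(px + i * 0.5, py + j * 0.5)
    return total / 9

# Example: a vertical edge at x = 0.6 (1 to the right, 0 to the left);
# only the rightmost column of samples lands in the bright region
print(pixel_value(lambda x, y: 1.0 if x >= 0.6 else 0.0, 0, 0))
```

Note that samples on shared pixel boundaries could be computed once and reused by adjacent pixels, which is why the cost is roughly four times, not nine times, the number of pixels.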

http://en.wikipedia.org/wiki/Supersampling

Adaptive supersampling involves taking only one sample per pixel where there is no significant change from one pixel to the next. If the renderer detects a change of colour between neighbouring pixels, extra samples are taken there.
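A sketch of that decision logic is shown below; the difference threshold and the four-extra-sample refinement pattern are assumptions for illustration, not a prescribed scheme:

```python
def adaptive_value(shade, img, x, y, threshold=0.1):
    """Refine pixel (x, y) of `img` only if it differs from its neighbours.

    `img` holds the initial one-sample-per-pixel values; `shade(x, y)`
    is the renderer's sampling function (an assumed stand-in).
    """
    centre = img[y][x]
    neighbours = [img[y][x - 1], img[y][x + 1], img[y - 1][x], img[y + 1][x]]
    if all(abs(centre - n) <= threshold for n in neighbours):
        return centre   # smooth region: the single sample suffices
    # Colour change detected: take four extra samples inside the pixel
    extras = [shade(x + dx, y + dy)
              for dx in (0.25, 0.75) for dy in (0.25, 0.75)]
    return (centre + sum(extras)) / 5

# Example: a single bright pixel surrounded by dark ones triggers refinement
img = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
print(adaptive_value(lambda x, y: 1.0 if x >= 1.5 else 0.0, img, 1, 1))
```

In smooth regions no extra `shade` calls are made at all, which is where the savings over full supersampling come from.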

A form of anti-aliasing can be achieved by sampling at a corner of each pixel. The value of a pixel is then the average of the values at its four corners. Since adjacent pixels share corners, this form of supersampling is available at little extra cost.
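As a sketch, an image of $w\times h$ pixels needs only $(w+1)\times(h+1)$ corner samples under this scheme, each shared by up to four pixels; the step-edge sample values are illustrative:

```python
def corner_filter(corners):
    """Turn an (h+1) x (w+1) grid of corner samples into h x w pixel values."""
    h, w = len(corners) - 1, len(corners[0]) - 1
    return [[(corners[y][x] + corners[y][x + 1] +
              corners[y + 1][x] + corners[y + 1][x + 1]) / 4
             for x in range(w)]
            for y in range(h)]

# A 2x2 image whose corner samples form a vertical step edge
corners = [[0, 0, 1],
           [0, 0, 1],
           [0, 0, 1]]
print(corner_filter(corners))   # the edge pixels are softened to 0.5
```

The sharing of corner samples is what keeps the cost low: the sample count grows by only one row and one column over plain one-sample-per-pixel rendering.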