Last Updated March 12, 2013
We have looked at removing backfaces; this stage will usually eliminate about half of the surfaces in a scene from further processing. Once the polygons to be displayed have been clipped, we need to draw them onto the display device (Rasterization).
We also need to remove parts of those surfaces that are partially obscured by other surfaces. These are called hidden surfaces. If hidden surfaces are not dealt with correctly, then we may get distorted images like the right of the image below. Rasterization and HSR are usually combined into a single phase of the rendering pipeline.
This is a very simple but naive algorithm. Find the face with the deepest vertex (the vertex with the greatest $z$ co-ordinate) and draw that face. Then draw the next deepest face; if two faces overlap, the most recently drawn (nearer) face will obscure the earlier one. This method does not work when faces intersect, and it is very expensive when there is a lot of overdraw in the scene.
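The back-to-front ordering above can be sketched as follows. This is a minimal illustration, not a full rasterizer: the face structure and the `draw_face` stand-in are hypothetical, and sorting is by each face's deepest vertex as described.

```python
# Painter's algorithm sketch: sort faces by their deepest vertex
# (greatest z) and draw far-to-near, so nearer faces overwrite farther ones.

def painters_algorithm(faces, draw_face):
    # Deepest face first; ties and intersecting faces are NOT handled,
    # which is exactly the weakness noted in the text.
    ordered = sorted(faces, key=lambda f: max(v[2] for v in f["verts"]),
                     reverse=True)
    for face in ordered:
        draw_face(face)
    return ordered  # returned only so the draw order can be inspected

faces = [
    {"name": "near", "verts": [(0, 0, 0.2), (1, 0, 0.3), (0, 1, 0.25)]},
    {"name": "far",  "verts": [(0, 0, 0.9), (1, 0, 0.8), (0, 1, 0.85)]},
]
order = painters_algorithm(faces, draw_face=lambda f: None)
print([f["name"] for f in order])  # the far face is drawn first
```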
The occlusion buffer is a refined variation of a reversed (front-to-back) painter's technique. The method sorts and draws the polygons in a near-to-far order. As pixels are drawn, each pixel is marked as "filled" in an occlusion buffer. The occlusion buffer is a boolean array of the same size as the refresh buffer, initialised as "unfilled".
Before rendering a pixel of a polygon, the occlusion buffer is checked to see if the corresponding screen pixel is already filled. If that screen pixel has already been filled, the polygon pixel can be eliminated; otherwise it is drawn and the occlusion buffer updated.
This technique is very cost effective where there is a lot of overdraw and lighting or texturing calculations are expensive. Time consuming pixel shaders are not executed for pixels which are never seen.
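The near-to-far drawing with an occlusion test can be sketched as below. The polygon representation, the `depth` sort key, and the shader-call counter are illustrative assumptions; the counter simply stands in for an expensive pixel shader that is skipped for occluded pixels.

```python
# Occlusion-buffer sketch: draw polygons near-to-far, skipping any pixel
# whose screen position is already marked "filled".

WIDTH, HEIGHT = 4, 4

def render(polygons):
    occluded = [[False] * WIDTH for _ in range(HEIGHT)]  # all "unfilled"
    frame = [[None] * WIDTH for _ in range(HEIGHT)]      # refresh buffer
    shader_calls = 0
    # Sort near-to-far: smallest depth first.
    for poly in sorted(polygons, key=lambda p: p["depth"]):
        for (x, y) in poly["pixels"]:
            if occluded[y][x]:
                continue            # costly shading never runs here
            shader_calls += 1       # stands in for a pixel-shader invocation
            frame[y][x] = poly["colour"]
            occluded[y][x] = True
    return frame, shader_calls

polys = [
    {"depth": 1.0, "colour": "red",  "pixels": [(0, 0), (1, 0)]},
    {"depth": 2.0, "colour": "blue", "pixels": [(1, 0), (2, 0)]},
]
frame, calls = render(polys)
print(frame[0][:3], calls)
```

Note that the blue polygon's pixel at (1, 0) is rejected without shading, so only three shader calls are made for four candidate pixels.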
The demerits of this technique are that the polygons must still be depth-sorted before drawing, and, like the painter's algorithm, it cannot correctly resolve intersecting faces.
This is the technique used in most modern rendering systems.
This algorithm tests the visibility of a surface one pixel at a time. For each pixel position (x,y) on the viewplane, the surface with the smallest $z$ co-ordinate (pseudo depth) at that position is the visible surface. So the pixel is drawn with the colour of the closest surface.
Before processing, the overall projection matrix $M_{tot}$ has been applied to all the vertices in the model. Two buffers are required for this algorithm: a depth buffer, used to store the $z$-value for each $(x,y)$ pixel position of the scene, and a refresh buffer, used to store the colour (intensity) of each pixel position.
Initially all positions in the depth buffer are set to 1 (Maximum depth) and the refresh buffer is filled with background colour.
Each polygon in the model is processed one scan line at a time, and the depth ($z$ co-ordinate) is calculated at each pixel position $(x,y)$. The calculated z-value is compared to the value previously stored in the depth buffer at that position. If the z-value is less than the value in the depth buffer, the current polygon is the closest polygon so far observed at that pixel position, and the refresh buffer at that position is filled with the colour of that polygon. Once all the polygons have been processed, the refresh buffer should contain a realistic view of the scene, with closer faces correctly overlapping faces that are further away.
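The per-pixel test above can be sketched as a small depth-buffer, assuming depths are already normalised to $[0,1]$. The `plot` helper and the fragment values are illustrative, standing in for the per-pixel output of scan-line processing.

```python
# Depth-buffer sketch: a fragment wins a pixel only if its z is smaller
# than everything previously drawn there.

WIDTH, HEIGHT = 4, 4
BACKGROUND = "black"

depth = [[1.0] * WIDTH for _ in range(HEIGHT)]           # maximum depth
refresh = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]  # background colour

def plot(x, y, z, colour):
    # Keep the fragment only if it is closer than anything drawn so far.
    if z < depth[y][x]:
        depth[y][x] = z
        refresh[y][x] = colour

# Three overlapping fragments at (1, 1): the nearest (smallest z) one
# survives, regardless of the order in which they arrive.
plot(1, 1, 0.8, "far-green")
plot(1, 1, 0.3, "near-red")
plot(1, 1, 0.6, "mid-blue")   # rejected: 0.6 is not less than 0.3
print(refresh[1][1], depth[1][1])
```

Because the comparison is per pixel, no sorting of polygons is needed, which is the key practical advantage noted below.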
The key to this algorithm is being able to accurately and quickly calculate the depth of a polygon at various points on its surface. We already know the relative depth of the vertices (these have been normalised), but if faces are at an angle to the view plane, we will need some method to calculate the depth of points on the face. To calculate depth we will use the point normal form of a plane. This can be specified by giving a point $S(s_{x},s_{y},s_{z})$ on the plane and a normal, $\vec{N}$, to the plane. For another arbitrary point $\vec{R}$ on the plane, we know that $\vec{N}$ is perpendicular to the vector $(\vec{R}-\vec{S})$, which lies on the plane. This gives us an equation for a plane called the point normal form.
\[\begin{eqnarray*} \vec{N}\cdot(\vec{R}-\vec{S}) &=& 0 \\ \vec{N}\cdot\vec{R}-\vec{N}\cdot\vec{S} &=& 0 \\ \vec{N}\cdot\vec{R} &=& \vec{N}\cdot\vec{S} \end{eqnarray*}\]
i.e. all points on a plane have the same dot product with the normal to the plane. If we write $\vec{R}$ (an arbitrary point) as $(x,y,z)$ and $\vec{N}\cdot\vec{S}$ as a constant $D$ we get:
\[ \vec{N}\cdot(x,y,z)=D \]
Expanding the dot product:
\[ n_{x}x+n_{y}y+n_{z}z=D \]
which can be re-written in the usual form of a plane:
\[ ax+by+cz=D \]
Re-arranging (assuming $c\neq 0$, i.e. the face is not edge-on to the view plane) we get an expression for the depth at some point $(x,y)$:
\[ z=\frac{D-ax-by}{c} \]
As we process a scan line, the $y$ co-ordinate stays the same, and the $x$ co-ordinate changes by one. So if the depth at position $(x,y)$ has been determined to be $z$, the depth $z'$ at the subsequent pixel $(x+1,y)$ is:
\[ z'=\frac{D-a(x+1)-by}{c}=z-\frac{a}{c} \]
So we can calculate the depth of the next pixel by adding a constant ($-a/c$) to the depth of the current pixel. We can calculate the normal for each face by using Newell's method, or by finding two non-collinear vectors on the surface of the face and calculating their cross product.
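The incremental depth calculation can be checked numerically as follows. The plane coefficients here are illustrative; the sketch verifies that one subtraction per pixel agrees with a full evaluation of $z=(D-ax-by)/c$.

```python
# Incremental scan-line depth: stepping x by 1 changes z by exactly -a/c,
# so the plane equation need only be evaluated once per scan line.

a, b, c, D = 1.0, 2.0, 4.0, 8.0   # illustrative plane ax + by + cz = D

def depth_at(x, y):
    # Direct evaluation: z = (D - a*x - b*y) / c  (requires c != 0)
    return (D - a * x - b * y) / c

y = 1.0
z = depth_at(0.0, y)               # full evaluation at the start of the line
for x in range(1, 4):
    z = z - a / c                  # one subtraction per subsequent pixel
    assert abs(z - depth_at(float(x), y)) < 1e-9  # matches full evaluation
print(round(z, 3))
```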
The depth buffer is easy to implement, quick, and requires no sorting, but it requires large amounts of memory for the buffers, and depth calculations need to be made at every pixel of every face.
In XNA z-buffer HSR is controlled by the DepthStencilState member of the graphics device.
© Ken Power 1996-2016