Viewing transformation

The orthographic projection is a simple way to convert from 3D to 2D: simply remove or ignore the z-coordinate, leaving the x and y coordinates of a vertex on the x-y plane. However, if we want to view and project the scene from various positions and orientations, we need to move and rotate the scene so that the correct 'aspect' of it is projected onto the x-y plane. This causes an awkward mix of modeling transformations (describing the objects to be viewed) and viewing transformations (rendering a picture of the objects).

Another way to achieve the same effect, but one that is much more flexible, is to create a synthetic camera. This involves setting up a new coordinate system (a set of 3 axes) called the viewing coordinate system (VCS). The VCS can be placed anywhere in the scene and oriented in any direction. The 3D-to-2D projection can then be performed in the VCS.

The graphics renderer uses the following steps:

  1. Set up the VCS
  2. Convert all the vertex coordinates from world coordinates (WCS) to VCS coordinates (viewing transformation)
  3. Perform the projection in the VCS

The conversion from WCS to VCS will be carried out by multiplying each vertex by a Viewing Transformation Matrix (or just View Matrix).

The synthetic camera separates viewing from modeling, which will simplify things later.

Synthetic Camera

A synthetic camera is a way to describe a camera (or eye) positioned and oriented in 3D space. The system has three principal ingredients:

  1. A viewplane in which a window is defined.
  2. A coordinate system called the viewing coordinate system (VCS), sometimes called the $UVN$ system.
  3. An eye defined in VCS.

In order to project easily onto the viewplane, all points in our scene must be transformed from world coordinates to viewing coordinates.

The position and direction of the camera are defined by a point called the View Reference Point (VRP) and a normal to the viewplane called the View Plane Normal (VPN). The viewplane is the plane perpendicular to the VPN. Both are defined in the world coordinate system. The viewing coordinate system is defined as follows:

In order for a rendering application to achieve the required view, the user needs to specify the following parameters.

Calculating $\vec{n}$

The $\vec{n}$ axis is the direction the camera is pointing. To choose a VPN ($\vec{n}$), the user simply selects a point in the area of interest in the scene. The vector $\vec{n}$ is a unit vector, which can be calculated as follows:

The user selects some point in the scene which they would like to appear as the centre of the rendered view; call this point $\vec{scene}$. The vector $\vec{norm}$, lying along $\vec{n}$, can then be calculated:

\[\vec{norm}=\vec{scene}-\vec{VRP}\]

$\vec{n}$ must be a unit vector along $\vec{norm}$:

\[\vec{n}=\frac{\vec{norm}}{|\vec{norm}|}\]
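As a sketch of this step (in Python, with illustrative values for $\vec{VRP}$ and $\vec{scene}$ that are not from the text), the unit VPN can be computed like so:

```python
import math

def subtract(a, b):
    # Component-wise vector difference a - b.
    return [ai - bi for ai, bi in zip(a, b)]

def normalize(w):
    # Scale w to unit length: w / |w|.
    length = math.sqrt(sum(c * c for c in w))
    return [c / length for c in w]

# Hypothetical example: camera at (0, 0, 5), user-selected
# scene point at the world origin.
vrp = [0.0, 0.0, 5.0]
scene = [0.0, 0.0, 0.0]

norm = subtract(scene, vrp)   # norm = scene - VRP
n = normalize(norm)           # unit VPN
print(n)                      # [0.0, 0.0, -1.0]
```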

Calculating $\vec{v}$

The upward vector $\vec{v}$ must be a unit vector perpendicular to $\vec{n}$. Let the user enter a vector $\vec{up}$; projecting this vector onto the plane perpendicular to $\vec{n}$ (i.e. the viewplane) and normalizing gives an appropriate unit vector $\vec{v}$.

\[\begin{eqnarray*} \vec{up'}&=&\vec{up}-k\vec{n}\\ k&=&\vec{up}.\vec{n}\\ \vec{up'}&=&\vec{up}-(\vec{up}.\vec{n})\vec{n}\\ \vec{v}&=&\frac{\vec{up'}}{|\vec{up'}|}\\ \end{eqnarray*}\]
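The projection above can be sketched in Python (the values of $\vec{n}$ and $\vec{up}$ are illustrative assumptions, with $\vec{up}$ deliberately not perpendicular to $\vec{n}$):

```python
import math

def dot(a, b):
    # Scalar (dot) product of two vectors.
    return sum(ai * bi for ai, bi in zip(a, b))

def normalize(w):
    length = math.sqrt(dot(w, w))
    return [c / length for c in w]

n = [0.0, 0.0, -1.0]           # unit view-plane normal
up = [0.0, 1.0, 0.5]           # user's rough "up" hint

k = dot(up, n)                 # component of up along n
up_prime = [c - k * ni for c, ni in zip(up, n)]  # up - (up.n)n
v = normalize(up_prime)        # unit vector lying in the viewplane
print(v)                       # [0.0, 1.0, 0.0]
```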

Calculating $\vec{u}$

The vector $\vec{u}$ needs to be perpendicular to both $\vec{n}$ and $\vec{v}$, so it is calculated as $\vec{u}=\vec{n}\times \vec{v}$.
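A minimal cross-product sketch in Python, using the illustrative axes from the earlier steps:

```python
def cross(a, b):
    # Cross product of two 3-vectors.
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

# Illustrative unit axes (assumed values, not from the text).
n = [0.0, 0.0, -1.0]
v = [0.0, 1.0, 0.0]
u = cross(n, v)               # u = n x v
print(u)                      # [1.0, 0.0, 0.0]
```

Because $\vec{n}$ and $\vec{v}$ are already unit vectors and perpendicular, $\vec{u}$ comes out unit length with no further normalization.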

Changing the view

The components of the synthetic camera can be changed to provide different views and animation effects.

Describing Objects in Viewing Coordinates

We have developed a method for specifying the location and orientation of the synthetic camera. In order to draw projections of models in this system, we need to be able to represent world coordinates in terms of $\vec{u}\vec{v}\vec{n}$ (viewing coordinates).

Assume there is a point $\vec{p_{wc}}$ in world coordinates which needs to be converted to viewing coordinates, $\vec{p_{vc}}$. The origin of the viewing coordinate system is at $\vec{r}$ and the axes are $\vec{u}\vec{v}\vec{n}$.

The dot product can be used to calculate how a point projects onto a unit-length coordinate axis such as $\vec{u}$:

\[\begin{eqnarray*} p_{vc_u}&=&\vec{u}.\vec{p_{wc}}\\ p_{vc_v}&=&\vec{v}.\vec{p_{wc}}\\ p_{vc_n}&=&\vec{n}.\vec{p_{wc}}\\ \end{eqnarray*}\]

These three equations can be written in matrix form:

\[\begin{eqnarray*} \left(\begin{array}{c} p_{vc_u}\\ p_{vc_v}\\ p_{vc_n}\end{array} \right)&=&\left(\begin{array}{ccc} u_{x}&u_{y}&u_{z}\\ v_{x}&v_{y}&v_{z}\\ n_{x}&n_{y}&n_{z}\end{array} \right)\vec{p_{wc}}\\ \vec{p_{vc}}&=&\mathbf{M}\vec{p_{wc}}\\ \end{eqnarray*}\]

This works only if both coordinate systems have the same origin. To take account of the shift in origins, subtract $\vec{r}$ from $\vec{p_{wc}}$:

\[\begin{eqnarray*} \vec{p_{vc}}&=&\mathbf{M}(\vec{p_{wc}}-\vec{r})\\ \vec{p_{vc}}&=&\mathbf{M}\vec{p_{wc}}-\mathbf{M}\vec{r}\\ \vec{p_{vc}}&=&\mathbf{M}\vec{p_{wc}}+\left(\begin{array}{c} -\vec{u}.\vec{r}\\ -\vec{v}.\vec{r}\\ -\vec{n}.\vec{r}\end{array} \right)\\ \end{eqnarray*}\]

A matrix multiplication followed by a translation can be combined into a single matrix using homogeneous coordinates:

\[\begin{eqnarray*} \vec{p_{vc}}&=& \left(\begin{array}{cccc} u_{x}&u_{y}&u_{z}&-\vec{u}.\vec{r}\\ v_{x}&v_{y}&v_{z}&-\vec{v}.\vec{r}\\ n_{x}&n_{y}&n_{z}&-\vec{n}.\vec{r}\\ 0&0&0&1\end{array} \right)\vec{p_{wc}}\\ \end{eqnarray*}\]

This matrix will be called $\mathbf{\hat{M}}_{wv}$ (or the View Matrix):

\[\begin{eqnarray*} \vec{p_{vc}}&=&\mathbf{\hat{M}}_{wv}\vec{p_{wc}} \end{eqnarray*}\]
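The construction of $\mathbf{\hat{M}}_{wv}$ and its application to a homogeneous point can be sketched in Python (the camera placement is an illustrative assumption; a useful sanity check is that the VRP itself maps to the viewing-coordinate origin):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def view_matrix(u, v, n, r):
    # 4x4 homogeneous view matrix M_wv: rows are u, v, n
    # with translation terms -u.r, -v.r, -n.r.
    return [u + [-dot(u, r)],
            v + [-dot(v, r)],
            n + [-dot(n, r)],
            [0.0, 0.0, 0.0, 1.0]]

def transform(m, p):
    # Multiply a homogeneous point [x, y, z, 1] by the matrix.
    return [dot(row, p) for row in m]

# Illustrative camera: VRP at (0, 0, 5) looking down the -z axis.
u = [1.0, 0.0, 0.0]
v = [0.0, 1.0, 0.0]
n = [0.0, 0.0, -1.0]
r = [0.0, 0.0, 5.0]

M = view_matrix(u, v, n, r)
# The VRP maps to the viewing-coordinate origin.
print(transform(M, [0.0, 0.0, 5.0, 1.0]))  # [0.0, 0.0, 0.0, 1.0]
```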

We now have a method for converting world coordinates to the viewing coordinates of the synthetic camera. Transforming all objects from world coordinates to viewing coordinates simplifies the later operations of clipping, projection, etc. We should keep a separate data structure to hold the viewing coordinates of an object; the model itself remains uncorrupted, and we can have many different views by setting up different synthetic cameras.

Viewing Coordinates in OpenGL & XNA

In OpenGL, the transformation into viewing coordinates can be performed using the gluLookAt function.

In XNA, use the Matrix.CreateLookAt method:

Matrix.CreateLookAt(position, target, up);
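Both library calls take a camera position, a target point, and an up hint. The pipeline they perform can be sketched in Python using the UVN construction developed above (the function name `look_at` is hypothetical, and the axis conventions here follow this document's derivation, which may differ in sign from the matrices the libraries actually produce):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(w):
    length = math.sqrt(dot(w, w))
    return [c / length for c in w]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def look_at(position, target, up):
    # Combine the earlier steps: n toward the target,
    # v from the up hint, u = n x v, then build M_wv.
    n = normalize([t - p for t, p in zip(target, position)])
    k = dot(up, n)
    v = normalize([c - k * ni for c, ni in zip(up, n)])
    u = cross(n, v)
    r = position
    return [u + [-dot(u, r)],
            v + [-dot(v, r)],
            n + [-dot(n, r)],
            [0.0, 0.0, 0.0, 1.0]]

def transform(m, p):
    return [dot(row, p) for row in m]

M = look_at([0.0, 0.0, 5.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0])
# The target lands on the +n axis, 5 units in front of the camera.
print(transform(M, [0.0, 0.0, 0.0, 1.0]))  # [0.0, 0.0, 5.0, 1.0]
```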