An HLSL file is a code file containing High Level Shading Language (HLSL) code. HLSL is a Microsoft language similar to GLSL (the shading language for OpenGL).
In the early days of shader programming, vertex and pixel shaders were programmed at a very low level, using only the assembly language of the graphics processing unit. Although assembly language gave the programmer complete control and flexibility, it was fairly hard to use. A portable, higher-level language for programming the GPU was needed, so HLSL was created to overcome these problems and make shader development easier.
An HLSL file contains a number of independent programs called shaders; some of these are vertex shaders and some are pixel shaders.
These programs are organised using techniques. A technique groups and manages shaders which work together. An HLSL file can contain a number of techniques.
In XNA, the HLSL file is abstracted by the Effect class. The Effect class is used in a similar way to the BasicEffect class.
HLSL has a syntax similar to C on the surface, but is a very different language.
A shader in an HLSL file looks a bit like a function in C: there is a shader name (like a function name), input parameters, a return type and a return value.
For a vertex shader the input parameters usually contain the raw vertex properties (position, colour, etc). The shader's job is to transform the vertex based on the projection, view and world matrices (at a minimum). The shader can also perform some lighting calculations. When finished, the shader returns a new vertex, with modified position and colour (and other properties).
HLSL supports the usual scalar data types: bool, int, float.
Vectors (up to 4 components) are also supported: bool2, int3, float4, etc.
Matrices (up to 16 components) are supported: int3x3, float4x4, etc.
There are other data types such as textures, samplers and buffers.
The programmer communicates with the GPU about which variable should contain which vertex or pixel property using semantics. For example, to force the vertex position to be written to or read from a variable myPosition, use the following code:
float4 myPosition:POSITION;
Here, the semantic POSITION makes the variable myPosition a reference to the vertex position register.
List of Standard Semantics
Uniform variables are variables whose value is the same for every pixel or vertex processed by an invocation of a shader, e.g. light position and colour, view or projection matrix, world matrix, texture, etc.
They are declared like global variables (at file scope).
Ambient light can be modelled primitively by giving every vertex the same colour value:
The shader code is stored in a .fx file in the content directory.
//uniform variables
float4x4 world_view_proj_matrix;   //world, view & projection matrices combined
float4 Light_Ambient;              //colour of ambient light

struct VS_OUTPUT    //the vertex shader (VS) will return this struct
{
    float4 Pos: POSITION;
    float3 Color: COLOR0;
};

//vertex shader
VS_OUTPUT vs_main(float4 inPos: POSITION)
{
    VS_OUTPUT Out = (VS_OUTPUT)0;
    // Compute the projected position
    Out.Pos = mul(inPos, world_view_proj_matrix);
    // Start with the ambient color
    float4 Color = Light_Ambient;
    // Output final color
    Out.Color = Color;
    return Out;
}

//pixel shader
float4 ps_main(float4 inColor: COLOR0) : COLOR
{
    return inColor;
}

technique Ambient
{
    pass P0
    {
        vertexShader = compile vs_2_0 vs_main();
        pixelShader = compile ps_2_0 ps_main();
    }
}
Before drawing, the effect has to be loaded:
Effect effect;

protected override void LoadContent()
{
    effect = Content.Load<Effect>("MyShader");
}
In the Draw method, the uniform variables are set and the shader is applied while drawing the primitives.
protected override void Draw(GameTime gameTime)
{
    graphics.GraphicsDevice.Clear(Color.CornflowerBlue);

    RasterizerState rs = new RasterizerState();
    rs.CullMode = CullMode.CullCounterClockwiseFace;
    graphics.GraphicsDevice.RasterizerState = rs;

    effect.Parameters["world_view_proj_matrix"].SetValue(world * view * proj);
    effect.Parameters["Light_Ambient"].SetValue(new Vector4(0.4f, 0.4f, 0.4f, 1));
    effect.CurrentTechnique = effect.Techniques["Ambient"];

    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        pass.Apply();
        graphics.GraphicsDevice.DrawUserPrimitives(
            PrimitiveType.TriangleList, toruspoints, 0, toruspoints.Length / 3);
    }
    base.Draw(gameTime);
}
Use the Lambert model to render diffuse reflection. The diffuse intensity is the dot product of the normal and light vectors, so the position of the light and the normal of the vertex need to be known.
The light vector is calculated as:
$$\vec{light} = light_{pos} - vertex_{pos}$$
Of course, the light position is in world coordinates while the normal vector and vertex position are in model coordinates. We need all points to be in the same coordinate space in order to have a meaningful result.
The obvious way to achieve this is to transform the vertex position and vertex normal into world space, by multiplying them both by the world transformation (two matrix multiplications).
However, it is more economical to move the light position into model space by applying the inverse world transformation (one matrix multiplication).
float4x4 world_view_proj_matrix;
float4 Light_Ambient;
float4 Light1_Position;
float4x4 inv_world_matrix;

struct VS_OUTPUT
{
    float4 Pos: POSITION;
    float3 Color: COLOR0;
};

VS_OUTPUT vs_main(float4 inPos: POSITION, float3 inNormal: NORMAL)
{
    VS_OUTPUT Out = (VS_OUTPUT)0;
    // Compute the projected position
    Out.Pos = mul(inPos, world_view_proj_matrix);
    // the normals might not be normalised (paranoid)
    inNormal = normalize(inNormal);
    // Start with the ambient colour
    float4 Color = Light_Ambient;
    // Determine the light vector:
    // first get the light position in object space
    vector obj_light = mul(Light1_Position, inv_world_matrix);
    vector LightDir = normalize(obj_light - inPos);
    // Diffuse using Lambert
    float DiffuseAttn = max(0, dot(inNormal, LightDir));
    // Compute final lighting, assuming white light
    vector light = {0.8, 0.8, 0.8, 1};
    Color += light * DiffuseAttn;
    // Output final colour
    Out.Color = Color;
    return Out;
}

float4 ps_main(float4 inColor: COLOR0) : COLOR
{
    return inColor;
}

technique Diffuse
{
    pass P0
    {
        vertexShader = compile vs_2_0 vs_main();
        pixelShader = compile ps_2_0 ps_main();
    }
}
protected override void Draw(GameTime gameTime)
{
    //.. prepare to draw
    effect.Parameters["world_view_proj_matrix"].SetValue(world * view * proj);
    effect.Parameters["Light1_Position"].SetValue(new Vector4(0, 30, 0, 1));
    effect.Parameters["Light_Ambient"].SetValue(new Vector4(0.1f, 0.1f, 0.1f, 1));
    effect.Parameters["inv_world_matrix"].SetValue(Matrix.Invert(world));
    effect.CurrentTechnique = effect.Techniques["Diffuse"];
    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        pass.Apply();
        graphics.GraphicsDevice.DrawUserPrimitives(
            PrimitiveType.TriangleList, toruspoints, 0, toruspoints.Length / 3);
    }
    base.Draw(gameTime);
}
To model specular reflection, we will use a variation of Phong specular reflection called Blinn-Phong reflection. Traditional Phong reflection calculates the angle (phi) between the reflection vector (R) and the view vector (V), but R is expensive to calculate.
We can calculate the average of the view vector (V) and the light vector (L); this is called the half vector (H). Blinn observed that $\phi$ is (approximately) twice the angle between the normal (N) and the half vector. As the half vector is much easier to calculate, this angle is often used instead of $\phi$.
Incorporating Blinn-Phong is easy; the only extra information we need is the viewer position:
VS_OUTPUT vs_main(float4 inPos: POSITION, float3 inNormal: NORMAL)
{
    VS_OUTPUT Out = (VS_OUTPUT)0;
    // Compute the projected position
    Out.Pos = mul(inPos, view_proj_matrix);
    // the normals might not be normalised
    inNormal = normalize(inNormal);
    // Start with the ambient colour
    float4 Color = Light_Ambient;
    // Determine the light vector:
    // first get the light position in object space
    vector obj_light = mul(Light1_Position, inv_world_matrix);
    vector LightDir = normalize(obj_light - inPos);
    // Determine the eye vector:
    // first get the eye position in object space
    vector obj_eye = mul(view_position, inv_world_matrix);
    vector EyeDir = normalize(obj_eye - inPos);
    // Compute half vector
    vector HalfVect = normalize((LightDir + EyeDir) / 2);
    // Specular, using Blinn-Phong and a shininess value of 64
    float SpecularAttn = max(0, pow(dot(inNormal, HalfVect), 64));
    // Diffuse using Lambert
    float DiffuseAttn = max(0, dot(inNormal, LightDir));
    // Compute final lighting, assuming white light
    vector light = {1, 1, 1, 1};
    Color += light * SpecularAttn + light * DiffuseAttn;
    // Output final colour
    Out.Color = Color;
    return Out;
}
protected override void Draw(GameTime gameTime)
{
    graphics.GraphicsDevice.Clear(Color.CornflowerBlue);

    RasterizerState rs = new RasterizerState();
    rs.CullMode = CullMode.CullCounterClockwiseFace;
    graphics.GraphicsDevice.RasterizerState = rs;

    effect.Parameters["view_proj_matrix"].SetValue(world * view * proj);
    effect.Parameters["Light1_Position"].SetValue(new Vector4(0, 30, 0, 1));
    effect.Parameters["view_position"].SetValue(eye);
    effect.Parameters["Light_Ambient"].SetValue(new Vector4(0.1f, 0.1f, 0.1f, 1));
    effect.Parameters["inv_world_matrix"].SetValue(Matrix.Invert(world));
    effect.CurrentTechnique = effect.Techniques["Specular"];
    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        pass.Apply();
        graphics.GraphicsDevice.DrawUserPrimitives(
            PrimitiveType.TriangleList, toruspoints, 0, toruspoints.Length / 3);
    }
    base.Draw(gameTime);
}
Standard fixed-function pipelines use Gouraud shading: lighting is calculated at each vertex and the resulting colour is interpolated across the pixels. A more accurate way to do lighting is Phong shading: the normal, half and light vectors are interpolated across the pixels and the lighting is calculated from scratch for each pixel.
In shader programming, this means moving the lighting calculation to the pixel shader.
The lighting calculation needs the normal, half and light vectors; these are calculated for each vertex in the vertex shader.
The only input semantics for a pixel shader are colour and texture coordinates; there are no semantics for other vectors. We will pass the normal, half and light vectors via the texture coordinate semantics.
//uniform variables
float4x4 view_proj_matrix;
float4 Light_Ambient;
float4 Light1_Position;
float4 Light1_Color;
float3 view_position;
float4x4 inv_world_matrix;

struct VS_OUTPUT_PER_PIXEL   // VS will output all this
{
    float4 Pos: POSITION;
    float3 normal: TEXCOORD1;
    float3 light: TEXCOORD2;
    float3 halfvect: TEXCOORD3;
    float3 Color: COLOR0;
};

//refactor the lighting model into its own function
vector lighting(vector color, float3 normal, float3 light_dir, float3 half_vect)
{
    // Specular
    float SpecularAttn = max(0, pow(dot(normal, half_vect), 32));
    // Diffuse
    float DiffuseAttn = max(0, dot(normal, light_dir));
    // Compute final lighting
    color *= (SpecularAttn + DiffuseAttn);
    return color;
}

// the vertex shader computes all the vectors required for lighting
// but does not actually calculate the amount of lighting
VS_OUTPUT_PER_PIXEL vs_main_per_pixel(float4 inPos: POSITION, float3 inNormal: NORMAL)
{
    VS_OUTPUT_PER_PIXEL Out = (VS_OUTPUT_PER_PIXEL)0;
    // Compute the projected position
    Out.Pos = mul(inPos, view_proj_matrix);
    inNormal = normalize(inNormal);
    // Output the ambient colour
    float4 Color = Light_Ambient;
    // Determine the eye vector
    vector obj_eye = mul(view_position, inv_world_matrix);
    vector EyeDir = normalize(obj_eye - inPos);
    // Determine the light vector
    vector obj_light = mul(Light1_Position, inv_world_matrix);
    vector LightDir = normalize(obj_light - inPos);
    // Compute half vector
    vector HalfVect = normalize((LightDir + EyeDir) / 2);
    // Output the colour and the vectors for the pixel shader
    Out.Color = Color;
    Out.normal = inNormal;
    Out.light = LightDir;
    Out.halfvect = HalfVect;
    return Out;
}

float4 ps_main_per_pixel(float4 inColor: COLOR0,
                         float3 inNormal: TEXCOORD1,
                         float3 LightDir: TEXCOORD2,
                         float3 HalfVect: TEXCOORD3) : COLOR
{
    return lighting(inColor, inNormal, LightDir, HalfVect);
}

technique PerPixel
{
    pass P0
    {
        vertexShader = compile vs_2_0 vs_main_per_pixel();
        pixelShader = compile ps_2_0 ps_main_per_pixel();
    }
}
© Ken Power 1996-2016