Archive for June, 2010

Marschner Shader Part II

In my last post I mentioned two functions that are needed to represent the hair model as depicted in Marschner’s paper.

S = S_R + S_TT + S_TRT,

S_p = M_p(θ_i, θ_r) × N_p(θ_d, φ_d), for p = R, TT, TRT.

M component

This is actually just a probability density function, and the best choice here is a Gaussian (normal) distribution.

And the M components are as follows:

  • M_R(θ_h) = g(β_R, θ_h − α_R).
  • M_TT(θ_h) = g(β_TT, θ_h − α_TT).
  • M_TRT(θ_h) = g(β_TRT, θ_h − α_TRT).
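The M terms above can be sketched in a few lines of C++. The α/β values used here are illustrative picks from the ranges in Table 1 of the paper (converted to radians), not tuned constants:

```cpp
#include <cassert>
#include <cmath>

// Gaussian pdf g(beta, x): beta is the standard deviation (the lobe
// width), x is the offset from the mean.
double g(double beta, double x)
{
  const double PI = 3.14159265358979323846;
  return std::exp(-x * x / (2.0 * beta * beta)) / (beta * std::sqrt(2.0 * PI));
}

// Longitudinal terms M_p(theta_h) = g(beta_p, theta_h - alpha_p).
// alpha_R ~ -5 deg, beta_R ~ 5 deg; the TT/TRT values follow the
// paper's suggested relations (alpha_TT = -alpha_R/2, beta_TT =
// beta_R/2, alpha_TRT = -3*alpha_R/2, beta_TRT = 2*beta_R).
double M_R(double thetaH)   { return g(0.0873, thetaH + 0.0873); }
double M_TT(double thetaH)  { return g(0.0436, thetaH - 0.0436); }
double M_TRT(double thetaH) { return g(0.1745, thetaH - 0.1309); }
```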

N component

The N component is actually a bit tricky to compute. Here are all the main steps:

The first step is to replace the index of refraction with an effective one, so that the optics of a 3D cylindrical fiber may be reduced to the 2D analysis of the optics of its cross-section.

After looking into Snell’s law, we define the effective indices of refraction as:
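A sketch of those effective (Bravais) indices, assuming the standard form derived from Snell's law, with η ≈ 1.55 for hair:

```cpp
#include <cassert>
#include <cmath>

// Effective ("Bravais") indices of refraction for the 2D cross-section
// analysis of a fiber tilted by theta:
//   eta'(theta)  = sqrt(eta^2 - sin^2 theta) / cos theta
//   eta''(theta) = eta^2 * cos theta / sqrt(eta^2 - sin^2 theta)
// eta is the fiber's real index of refraction (about 1.55 for hair).
void bravais(double eta, double theta, double& etaPerp, double& etaPar)
{
  double s = std::sin(theta);
  double root = std::sqrt(eta * eta - s * s);
  etaPerp = root / std::cos(theta);
  etaPar  = eta * eta * std::cos(theta) / root;
}
```

At θ = 0 both indices reduce to η itself, which is a handy sanity check.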

Remember this picture:

We need to find out what the incident angles are, and we can approximate the solution of this equation as:
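As I read the paper, the exit azimuth φ(p, γ_i) is approximated by a cubic in γ_i, with c = asin(1/η′). A sketch of that approximation, solved for the incident angle with a small Newton iteration (the iteration count is my own choice):

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

// Cubic approximation of the exit azimuth phi(p, gamma_i), with
// c = asin(1/etaPerp):
//   phiHat(p, gamma) = (6pc/pi - 2)*gamma - (8pc/pi^3)*gamma^3 + p*pi
double phiHat(int p, double c, double gamma)
{
  return (6.0 * p * c / PI - 2.0) * gamma
       - (8.0 * p * c / (PI * PI * PI)) * gamma * gamma * gamma
       + p * PI;
}

// Solve phiHat(p, gamma_i) = phi for gamma_i with a few Newton steps;
// the derivative of the cubic is closed-form, so each step is cheap.
double solveGamma(int p, double c, double phi)
{
  double gamma = 0.0;
  for (int i = 0; i < 32; ++i)
  {
    double f = phiHat(p, c, gamma) - phi;
    double df = (6.0 * p * c / PI - 2.0)
              - 3.0 * (8.0 * p * c / (PI * PI * PI)) * gamma * gamma;
    gamma -= f / df;
  }
  return gamma;
}
```

For p = 0 the cubic degenerates to φ = −2γ_i, so the incident angle is simply −φ/2.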

The Fresnel equations are used to simulate reflection within the attenuation factor.
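The paper evaluates the Fresnel factor with the effective indices (η′ for the perpendicular polarization and η″ for the parallel one). As a sketch, here is the ordinary unpolarized dielectric Fresnel reflectance; in the actual model the s and p terms would use η′ and η″ respectively:

```cpp
#include <cassert>
#include <cmath>

// Unpolarized dielectric Fresnel reflectance between media with indices
// n1, n2, for an incidence angle thetaI (radians). Returns 1 beyond the
// critical angle (total internal reflection).
double fresnel(double n1, double n2, double thetaI)
{
  double sinT = n1 / n2 * std::sin(thetaI);
  if (sinT >= 1.0) return 1.0;              // total internal reflection
  double cosI = std::cos(thetaI);
  double cosT = std::sqrt(1.0 - sinT * sinT);
  double rs = (n1 * cosI - n2 * cosT) / (n1 * cosI + n2 * cosT);
  double rp = (n1 * cosT - n2 * cosI) / (n1 * cosT + n2 * cosI);
  return 0.5 * (rs * rs + rp * rp);
}
```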

  • Find out the absorption factor

This is actually quite straightforward, just:
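A sketch of the absorption factor as I understand the geometry: in the unit-radius cross-section, each internal segment of the refracted ray is a chord of length 2·cos γ_t, and path p crosses p such chords. (The full model also accounts for the longitudinal inclination of the path, which I omit here.)

```cpp
#include <cassert>
#include <cmath>

// Absorption inside the fiber: sigmaA is the volume absorption
// coefficient, gammaT the refracted angle, p the number of internal
// segments (0 for R, 1 for TT, 2 for TRT). Each segment is a chord of
// length 2*cos(gammaT) through the unit-radius cross-section.
double absorption(double sigmaA, double gammaT, int p)
{
  return std::exp(-sigmaA * p * 2.0 * std::cos(gammaT));
}
```

For the R path (p = 0) nothing enters the fiber, so the factor is 1, as expected.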

  • The attenuation factor

This is obtained by combining the reflection and absorption factors, hence the “Attenuation by absorption and reflection” model from Marschner’s paper.

where the first derivative is

  • The N component (finally)

and the N components are:

  • N_R(θ_d, φ_d) = N_p(0, θ_d, φ_d).
  • N_TT(θ_d, φ_d) = N_p(1, θ_d, φ_d).
  • N_TRT(θ_d, φ_d) = N_p(2, θ_d, φ_d).
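For the R lobe the steps above collapse to a closed form, which makes a nice sanity check. A sketch, with the Fresnel factor passed in precomputed to keep it small:

```cpp
#include <cassert>
#include <cmath>

// For the R lobe (p = 0) the root of phi(0, gamma_i) = phi is
// closed-form: gamma_i = -phi/2, with h = sin(gamma_i). Since
// dphi/dh = -2/sqrt(1 - h^2), the |2 dphi/dh|^{-1} factor reduces to
// cos(phi/2)/4, leaving
//   N_R(phi) = (1/4) * cos(phi/2) * F
// where F is the Fresnel reflectance at that incidence.
double N_R(double phi, double F)
{
  return 0.25 * std::cos(phi * 0.5) * F;
}
```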

For the last component, Marschner proposes a more complex model in order to avoid singularities, but in my implementation I couldn’t tell any improvement, so I stuck with the simpler version of N_TRT.

The whole model

To sum up, this is the whole Marschner hair model in a single equation:
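The combination step can be sketched as follows, given precomputed lobe values; note that the paper's full equation also divides by cos²θ_d, which I include here:

```cpp
#include <cassert>
#include <cmath>

// S = S_R + S_TT + S_TRT with S_p = M_p * N_p, given precomputed M and
// N values for the three lobes. The division by cos^2(theta_d) comes
// from the paper's full scattering equation.
struct Lobes { double r, tt, trt; };

double S(const Lobes& m, const Lobes& n, double thetaD)
{
  double c = std::cos(thetaD);
  return (m.r * n.r + m.tt * n.tt + m.trt * n.trt) / (c * c);
}
```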

I hope I managed to keep everything both simple and explicit.


Marschner Shader Part I

I decided to write a trilogy (3 posts) explaining, as best I can, what is discussed in Marschner’s paper “Light Scattering from Human Hair Fibers“.

First of all, I have to warn you that understanding this paper requires some physics and math background rather than deep knowledge of shaders: things such as Snell’s law or probability density functions are mentioned quite often.

The main advantage of the model proposed by Marschner is that it is based on the actual physical phenomenon that occurs when light passes through hair fibers. So, by studying electron micrographs of hair fibers such as this one:

a model has been proposed, where each individual hair fiber is treated as a translucent cylinder with the following components:

The components that contribute a distinct and visually significant aspect of hair reflectance are R, TT and TRT:

  • R – light that bounces off of the surface of the hair fiber toward the viewer.
  • TT – light that refracts into the hair and refracts out again toward the viewer.
  • TRT – light that refracts into the hair fiber, reflects off of the inside surface, and refracts out again toward the viewer.

The notation used throughout the paper is in tangent space, with the light and viewer positions expressed relative to the current hair fiber.

These are all the variable inputs that are needed for Marschner hair shading model:

  • u – tangent to the hair, pointing in the direction from the root toward the tip.
  • w – normal to the hair, pointing toward the viewer (the geometry faces the camera).
  • v – binormal to the hair, chosen so that u, v and w form a right-handed orthonormal basis; the vw-plane is the normal plane.
  • ω_i – direction of illumination (light).
  • ω_r – direction of the camera (viewer).
  • θ_i, θ_r – inclinations with respect to the normal plane (measured so that 0 is perpendicular to the hair, π/2 is u, and −π/2 is −u).
  • φ_i, φ_r – azimuths around the hair (measured so that v is 0 and w is +π/2).

Several derived angles are used, as well:

  • θ_d = (θ_r − θ_i)/2 – the difference angle.
  • φ_d = φ_r − φ_i – the relative azimuth.
  • θ_h = (θ_i + θ_r)/2 – the longitudinal half angle.
  • φ_h = (φ_i + φ_r)/2 – the azimuthal half angle.
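The angle definitions above can be sketched as follows; the frame vectors u, v, w and both directions are assumed normalized (Vec3 stands in for whatever vector type you use):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Inclination/azimuth of a direction d with respect to the hair frame
// (u = tangent, v = binormal, w = normal): theta is measured from the
// normal plane, phi around u, starting at v.
void sphericalInFrame(const Vec3& d, const Vec3& u, const Vec3& v,
                      const Vec3& w, double& theta, double& phi)
{
  theta = std::asin(dot(d, u));
  phi = std::atan2(dot(d, w), dot(d, v));
}

// Derived angles used by the model.
void derivedAngles(double thetaI, double phiI, double thetaR, double phiR,
                   double& thetaD, double& phiD, double& thetaH, double& phiH)
{
  thetaD = (thetaR - thetaI) * 0.5; // difference angle
  phiD   = phiR - phiI;             // relative azimuth
  thetaH = (thetaI + thetaR) * 0.5; // longitudinal half angle
  phiH   = (phiI + phiR) * 0.5;     // azimuthal half angle
}
```

With this convention a direction lying along w indeed comes out at azimuth +π/2, matching the definition above.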

Also, there are some constant parameters for hair fibers, surface and glints, which you can find in Table 1 (page 8) of Marschner’s paper.

Having all of this in mind we can approximate the hair model as:

S = S_R + S_TT + S_TRT,

S_p = M_p(θ_i, θ_r) × N_p(θ_d, φ_d), for p = R, TT, TRT.

So it turns out the only thing left is to find out what M and N are. My next post will do just that.

Marschner in Nalu Demo

Before starting to implement anything, I downloaded and installed the Nalu demo from Nvidia, which uses the Marschner shader for hair rendering, just to have a closer look at this implementation.

Sadly enough, this application doesn’t provide any source code (except for the HLSL shaders), so I had to do some reverse engineering in order to use this shader as a test shader for hair rendering in CS. I set most of the parameters based on the application and configuration files, used the lookup textures, and computed some angles based on Chapter 23 from GPU Gems 2.

In order to keep things simple, I chose to set all the application variables in the shaders, but if you’d like to pass, for instance, the light position to the vertex shader, you can just comment out the line where I set its value in the VS (float4 LightPos = float4 (9000, 0, 0, 0);).

These are the connectors I used:

struct a2vConnector {
  float4 objCoord : POSITION;
  float3 objNormal : NORMAL;
  float3 Tangent : TEXCOORD0;
};

struct v2fConnector {
  float4 projCoord : POSITION;
  float3 angles : TEXCOORD1;
  half4 diffuseColor : COLOR0;
  half3 ambientColor : COLOR1;
};

I only modified the a2vConnector in order to have the Tangent buffer too.

The vertex shader looks like this:

v2fConnector main(a2vConnector a2v,
                  uniform float4x4 modelViewProj : state.matrix.mvp,

                  // Light and eye directions in object space
                  uniform float3 objLightDir,
                  uniform float3 objEyePos,

                  // Reflectance model parameters
                  uniform float DiffuseCol,
                  uniform float AmbientCol,

                  uniform float3 worldPointLight0Pos,
                  uniform float3 PointLightColor,
                  uniform float4x4 ModelViewIT : state.matrix.modelview.invtrans)
{
  v2fConnector v2f;

  float4 LightPos = float4 (9000, 0, 0, 0);

  objLightDir = normalize(-LightPos.xyz);
  objEyePos = ModelViewIT[3].xyz;

  AmbientCol = 0;
  DiffuseCol = 0.75;
  PointLightColor = 1;
  worldPointLight0Pos = LightPos.xyz;
  float4 objColor = float4(1,0,0,1);

  /* Use the per-vertex tangent from the Tangent buffer */
  float3 objTangent = normalize(a2v.Tangent);

  /* Project */
  float4 objCoord = a2v.objCoord;
  float4 projCoord = mul(modelViewProj, objCoord);
  v2f.projCoord = projCoord;

  float3 objEyeDir = normalize(objEyePos - objCoord.xyz);

  /* Compute longitudinal angles */
  float2 uv1;
  uv1.x = dot(objLightDir, objTangent);
  uv1.y = dot(objEyeDir, objTangent);
  v2f.angles.xy = 0.5 + 0.5*uv1;

  /* Compute the azimuthal angle */
  float3 lightPerp = objLightDir - uv1.x * objTangent;
  float3 eyePerp = objEyeDir - uv1.y * objTangent;
  float cosPhi = dot(eyePerp, lightPerp) * rsqrt(dot(eyePerp, eyePerp) * dot(lightPerp, lightPerp));
  v2f.angles.z = 0.5*cosPhi + 0.5;

  /* Compute diffuse lighting with phi-dependent component */
  float diffuse = sqrt(max(0, 1 - uv1.x*uv1.x));

  /* Pass colors */
  v2f.diffuseColor.rgb = diffuse*objColor.rgb * DiffuseCol;
  v2f.diffuseColor.a = objColor.a;
  v2f.ambientColor = objColor.rgb * AmbientCol;

  // compute point light lighting  
  float3 Delta = worldPointLight0Pos - a2v.objCoord.xyz;
  float3 pointLightDir = normalize(Delta);
  float NDL = dot(objTangent, pointLightDir);

  float pointLightDist = sqrt(dot(Delta,Delta)) * (1.0/400.0);
  float att = min(1,max(0,pointLightDist));

  v2f.ambientColor = (1.0-att) * PointLightColor;

  return v2f;
}

The only thing I modified in the vertex shader is using Tangent instead of objNormal to determine the objTangent vector. I also added code to get the objEyePos and objLightDir. Here it is important to take into account that the hair geometry being recreated every frame doesn’t have any World (or Model) matrix (it is the Identity matrix).

Moving on to the pixel/fragment shader:

float4 main(v2fConnector v2f,
            // Parameters for the hair model
            uniform half Rcol,
            uniform half TTcol,

            // Lookup tables for the hair model (fixed point)
            uniform sampler2D lookup1fixed,
            uniform sampler2D lookup2fixed

            ) : COLOR
{
  Rcol = 1.87639;
  TTcol = 3.70201;

  /* Compute the longitudinal reflectance component */
  half2 uv1 = v2f.angles.xy;
  half4 m = h4tex2D(lookup1fixed, uv1);

  /* Compute the azimuthal reflectance component */
  half2 uv2;
  uv2.x = cos( (asin (2 * v2f.angles.x - 1) - asin (2 * v2f.angles.y - 1) ) / 2 ) * 0.5 + 0.5; //m.w;
  uv2.y = v2f.angles.z;
  half4 ntt = h4tex2D(lookup2fixed, uv2);

  /* Combine longitudinal and azimuthal reflectance */
  half3 lighting;
  lighting = (m.r * ntt.a * Rcol.r).xxx;      // Primary highlight  
  lighting += m.b * ntt.rgb * TTcol.r;        // Transmittance (using MTRT instead of MTT)
  lighting += v2f.diffuseColor.rgb;            // Diffuse lighting

  float4 COL;
  COL.rgb = lighting + v2f.diffuseColor.rgb*0.2 + v2f.ambientColor; 

  //COL.rgb = v2f.ambientColor;
  COL.a = v2f.diffuseColor.a;

  return COL;
}

For the pixel shader I had to compute the first cosine angle (see GPU Gems 2, Chapter 23), because the lookup texture I got doesn’t have an alpha channel. So uv2.x = cos( (asin (2 * v2f.angles.x - 1) - asin (2 * v2f.angles.y - 1) ) / 2 ) * 0.5 + 0.5; //not m.w;.

LE: There is still a bug regarding the usage of ntt.a (which is invalid) for the primary highlight.

And that’s pretty much it regarding the Nvidia Nalu Marschner shader.

Almost forgot, here are the lookup textures:

Next you can see a comparison of Krystal rendered with Phong, Kajiya-Kay, and Marschner. If you have any questions about the Phong shading model (or per-pixel lighting), you can look over these PDFs: Basics of GPU-Based Programming and MathematicsOfPerPixelLighting.

Physics reloaded

A couple of days ago I managed to find some Bullet rope parameters that make ropes act more like hair strands:

body->m_cfg.kDP = 0.08f; // no elasticity
body->m_cfg.piterations = 16; // no white zone
body->m_cfg.timescale = 2;

This is what each one of them does:

  • kDP is the damping coefficient [0, 1]: zero means no damping, one means full damping. This is a damped spring:

  • piterations is the number of iterations for the position solvers (if any). It goes from 1 to infinity.
  • timescale is a factor of the time step that can be used to speed up or slow down the simulation; default = 1.

There are a lot more settings that can be altered for Bullet’s soft bodies. More in Bullet’s online documentation.

Next you can see how these settings look on Krystal, with 367 control points and 3670 hair strands. I guess I will have to tweak them further to get rid of some elasticity, but I find the overall simulation quite plausible:

Generating geometry

In this post I will present how I implemented the GenerateGeometry function from the iFurMaterial interface. This function can be split into two big parts: guide hair generation and hair strand generation.

  • Generating guide hairs

These guide hairs will be used only for physics simulation and as a reference for the hair strands. So they will not be rendered, except maybe for debugging purposes.

In order to generate the guide hairs, the base mesh (the mesh on which fur will grow) will be used. If there are enough vertices, a guide hair will be attached to every vertex. If not, the mesh will either be tessellated using a CS function, or the same technique used to generate hair strands from guide hairs will be applied. The only important thing here is to also have a vertex buffer (a triangle buffer, actually) for these guide hairs, and not just an index buffer (a vector to store them).

Unless a physics model is specified (ropes or the like), guide hairs will grow in the direction of the vertex normal, with a length specified via a heightmap.

  • Generating hair strands

This is where the guide hairs’ triangle buffer will be used. For each such triangle, hair strands will be generated based on a density map. In order to get any number of points inside a triangle, distributed as randomly as possible, barycentric coordinates will be used.

If we look closely at the above picture, we see that we can generate a point Y in a triangle ABC just by Y = bA·A + bB·B + bC·C, where bA + bB + bC = 1. And randomly choosing the barycentric coordinates is not tough at all: bA = random(0,1), bB = random(0,1)·(1 − bA), bC = 1 − bA − bB.

The interesting thing here is that these barycentric coordinates can also be used for setting the whole hair strand (not just the base point), and even the UV coordinates for the density map.
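A sketch of that sampling scheme (Vec3 is a stand-in for whatever vector type you use; the random source is just rand() for brevity):

```cpp
#include <cassert>
#include <cstdlib>

struct Vec3 { double x, y, z; };

// Random point Y = bA*A + bB*B + bC*C inside triangle ABC, with the
// barycentric coordinates chosen exactly as described above.
Vec3 randomPointInTriangle(const Vec3& A, const Vec3& B, const Vec3& C)
{
  double bA = std::rand() / (double)RAND_MAX;
  double bB = (std::rand() / (double)RAND_MAX) * (1.0 - bA);
  double bC = 1.0 - bA - bB;
  Vec3 y;
  y.x = bA * A.x + bB * B.x + bC * C.x;
  y.y = bA * A.y + bB * B.y + bC * C.y;
  y.z = bA * A.z + bB * B.z + bC * C.z;
  return y;
}
```

Worth noting: drawing bA first this way slightly biases samples toward vertex A; a fully uniform variant draws r1, r2 and sets bA = 1 − sqrt(r1), bB = sqrt(r1)·(1 − r2).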

  • Updating Geometry

All hair strands need to be updated even if there is no physics model involved.

The iFurPhysicsControl interface will update the guide hairs, after which the hair strands will be regenerated. Even if no physics interface is specified, the hair strands still need to be regenerated every frame, because they are represented as triangle strips that must always face the camera. This can be done by taking into account that the vertex tangent has to be perpendicular to the eye direction.

csVector3 firstPoint = furMaterial->hairStrands.Get(x).controlPoints[y];
csVector3 secondPoint = furMaterial->hairStrands.Get(x).controlPoints[y + 1];
csVector3 tangent;

tangent, firstPoint, secondPoint, tc.GetOrigin());
strip = furMaterial->strandWidth *

The reason why I used solid geometry instead of lines is that vertices support both textures and shaders.

  • The result

Here is a picture of some generated hair strands, with no physics model specified (hair grows on vertex normal direction).

The plugin

Given that this project has two main parts, the physics simulation and the rendering, it would probably be a good design decision for the plugin implementing it to also have two CS interfaces. This is what these interfaces should do:


  • Attach fur to the base mesh

For this, a mesh (of any type) has to be specified, along with some control points for the guide hairs.

  • Generate geometry

Using the guide hairs and a density map (and maybe a heightmap too), the rest of the hair strands will be generated using interpolation.

  • Update position

This is done by synchronizing the position with iFurPhysicsControl. This interface can implement any type of physics, not just ropes, and it can even be null, specifying that a particular instance of the iFurMaterial interface doesn’t have any physics simulation (this might be used for static objects and such).

  • Implementing a shader

Or specify a shader/material to be used with this interface. This is especially important because, although Marschner is a good model for hair rendering, it might be too complex for fur in general. Here the Kajiya-Kay shading model could be used instead, because it gives pretty good results too and is faster.


  • Initialize strand

Given a guide hair, this function will create a physics object (a rope in my case) and link it to the strand by a unique id.

  • Animate strand

This function will update a strand’s position using the physics object created specifically (via the initialize method) for this strand.

  • Remove strand

Removing physics objects might be a good idea for a LOD scheme, because animating physics objects is quite computationally expensive.

Categories: Crystal Space

Hair simulation types

There are various ways in which hair can be simulated using a physics engine.

Next I am going to present three of them. For these simulations I used CS for rendering and, for the physics, the Bullet plugin, which recently gained soft body support thanks to my mentor, Christian Van Brussel.

  • Solid geometry

Perhaps the easiest way to simulate hair is with standard collision objects such as spheres or cylinders. Although this representation has the best performance, it only covers some particular types of hair, like the one below:

  • Soft Body Dynamics

Another approach is to use soft body dynamics and represent the hair as a… cloth. A larger number of hair styles can be simulated using this method, and it also looks more convincing. In the next video you can see both the hair and the skirt of Krystal (that’s the model’s name, BTW) represented as soft bodies (drawn in green):

  • Ropes

This is probably the best and somehow the most intuitive way to simulate hair: as ropes. But, as you already know, there are far too many (i.e. millions of) hair strands to simulate individually as physics objects. So the trick here is to choose (either randomly or, better yet, using a density map) let’s say a hundred hair strands to be guide hairs, represented as ropes, and to interpolate the rest. You can see these guide hairs (hopefully) in black: