UV Maps

In this post I will explain how I managed to set some properties like length and density for the hair mesh.

The challenge here was to adapt a well-known technique, UV mapping, and use it for these particular features. The technique was applied to Krystal's skull mesh (pictured on the left), whose UV map looks like this (on the right):

The length map

This is actually very similar to your usual heightmap, typically used for terrains and such, except that it is UV mapped.

The data from this image is interpreted much like that of a heightmap, but it sets the length of the hair strands in that vicinity rather than the height of the mesh, hence the name length map instead of height map.
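
For illustration, interpreting the length map could look something like this minimal sketch (SampleLengthMap is a hypothetical helper that reads the UV map, and maxLength a global maximum strand length; neither is the plugin's actual API):

// Hypothetical helper that samples the length UV map, returning a value in [0, 1].
float SampleLengthMap(float u, float v);

// The strand rooted at UV coordinates (u, v) gets a fraction of the maximum length.
float StrandLength(float u, float v, float maxLength)
{
  return maxLength * SampleLengthMap(u, v);
}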

An example for a bangs hairstyle is the following UV map (left), which gives the result on the right when applied to Krystal:

The density map

Although the density map looks similar to the other UV map, the length map, getting data from it is done quite differently.

This is because the data from this image is used to generate new data (geometry), as opposed to just refining existing geometry, which is the case with the length map.

Generating new geometry based on a UV map is also done in adaptive tessellation, but there the map used is a displacement map, which also carries information about the direction of the newly created geometry.

For this algorithm to work the mesh has to be composed of triangles and a UV density map has to be specified. The steps of the algorithm are as follows:

  1. foreach triangle T in the mesh
  2.   find the area A and density D of T
  3.   if D * A * factor > 1 then
  4.     choose a point Y inside T using barycentric coordinates
  5.     delete T and create 3 other triangles based on Y and T
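
To make the subdivision step concrete, here is a minimal C++ sketch of one pass. The Triangle/Vertex types and the SampleDensity and Area helpers are hypothetical stand-ins, not the fur plugin's actual API, and the density of a triangle is taken as the average of its three vertices (the first method described below):

#include <cstdlib>
#include <vector>

// Hypothetical minimal types; not the fur plugin's actual interfaces.
struct Vertex { float pos[3]; float uv[2]; };
struct Triangle { Vertex v[3]; };

float SampleDensity(float u, float v); // reads the density UV map (assumed)
float Area(const Triangle& t);         // standard cross-product triangle area (assumed)

// One pass of density-driven subdivision: every triangle whose density * area
// exceeds the threshold is replaced by three triangles around an interior point Y.
void Subdivide(std::vector<Triangle>& mesh, float factor)
{
  std::vector<Triangle> out;
  for (const Triangle& t : mesh)
  {
    float d = (SampleDensity(t.v[0].uv[0], t.v[0].uv[1]) +
               SampleDensity(t.v[1].uv[0], t.v[1].uv[1]) +
               SampleDensity(t.v[2].uv[0], t.v[2].uv[1])) / 3.0f;

    if (d * Area(t) * factor > 1.0f)
    {
      // Random barycentric coordinates (ba, bb, bc) for a point Y inside T.
      float ba = rand() / (float)RAND_MAX;
      float bb = (1.0f - ba) * (rand() / (float)RAND_MAX);
      float bc = 1.0f - ba - bb;

      Vertex y;
      for (int i = 0; i < 3; ++i)
        y.pos[i] = ba * t.v[0].pos[i] + bb * t.v[1].pos[i] + bc * t.v[2].pos[i];
      for (int i = 0; i < 2; ++i)
        y.uv[i] = ba * t.v[0].uv[i] + bb * t.v[1].uv[i] + bc * t.v[2].uv[i];

      // Delete T and create the three triangles (A,B,Y), (B,C,Y), (C,A,Y).
      for (int i = 0; i < 3; ++i)
        out.push_back(Triangle{ { t.v[i], t.v[(i + 1) % 3], y } });
    }
    else
      out.push_back(t);
  }
  mesh.swap(out);
}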

The only uncommon steps are choosing a point using barycentric coordinates and finding the density of a triangle based on a UV map. Regarding the barycentric coordinates, you can check out one of my previous posts, where I also explained this technique when used to generate hair strands. Finding the density of a triangle from the density map is not hard either; I tried three ways of doing this, all based on the fact that the UV coordinates are known for A, B and C, the points of triangle T.

  • The average of the densities at A, B and C

Although this approach evaluates just three points, it gives good enough results when there are plenty of triangles to begin with and the UV map is at a lower resolution. Applying a Gaussian filter to the image at the beginning of the algorithm also helps.

I have to admit this is not my idea; I heard it from a colleague who used it for a real-time adaptive tessellation application. The main advantage of this approach is that it represents a compromise between speed and the amount of information analyzed. This way of getting the density of a triangle can also be improved using convolution matrices, in order to take information from the vicinity of the currently analyzed point into account as well.

  • The sum of the densities of all points in the triangle T

Even though this might be the most obvious way to get the density of a triangle, generating all the points inside a triangle is not that easy. To do this I used the oft-mentioned barycentric coordinates, but this time they weren't generated randomly at all. Bearing in mind that the area of a triangle, which covers the whole surface of this polygon, is the base multiplied by the height and divided by two, generating the first two barycentric coordinates along these lines seemed a good solution. The only problem is that the points further away from the base are analyzed more times (no division by two means passing through points in this area more than once), so doing this operation three times (once for each base) and then taking the average gives a very close approximation of the triangle's density. Because I do this operation only at the beginning, I used this last method in the fur plugin implementation, it being the best choice regarding the amount of information analyzed.
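
A sketch of this sampling scheme, reusing the hypothetical types and SampleDensity helper from the earlier snippet: stepping the first two barycentric coordinates on a regular grid maps the unit square onto the triangle, but the rows collapse toward the apex, so points far from the base are visited more than once; averaging over the three choices of base compensates for that:

// Sum the density over a regular barycentric grid, using edge AB as the base
// and C as the apex (points near C get oversampled, as described above).
float TriangleDensityOneBase(const Vertex& a, const Vertex& b, const Vertex& c,
                             int steps)
{
  float sum = 0.0f;
  for (int i = 0; i <= steps; ++i)       // along the base AB
    for (int j = 0; j <= steps; ++j)     // toward the apex C
    {
      float s = i / (float)steps, t = j / (float)steps;
      float ba = (1.0f - t) * (1.0f - s);
      float bb = (1.0f - t) * s;
      float bc = t;
      sum += SampleDensity(ba * a.uv[0] + bb * b.uv[0] + bc * c.uv[0],
                           ba * a.uv[1] + bb * b.uv[1] + bc * c.uv[1]);
    }
  return sum / ((steps + 1) * (steps + 1));
}

// Do the operation once per base and take the average.
float TriangleDensity(const Triangle& t, int steps)
{
  return (TriangleDensityOneBase(t.v[0], t.v[1], t.v[2], steps) +
          TriangleDensityOneBase(t.v[1], t.v[2], t.v[0], steps) +
          TriangleDensityOneBase(t.v[2], t.v[0], t.v[1], steps)) / 3.0f;
}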

Next you can see Krystal with just a few hair strands on top of her skull:

Other UV maps

UV maps can also be used to specify various other information about a mesh, such as the contour of the mesh, which vertices are more important, or different materials/colors for different hair strands.

I have already used UV maps to determine the contour of the geometry and to mark some pivot vertices (used as guide ropes). Those maps look like this, with the contour on the left:

I haven't used UV maps to generate various colors for different hair strands yet, but I have two approaches in mind, and after implementing them I will write another post. However, I think my next post will be about the LOD system for the fur plugin, which is currently under development.


Halfway there

The July 16th midterm deadline has just passed and I haven't posted in a while, so I am going to give a short overview of what I have done so far and what is still to be done for this GSoC project.

Done:

  • Generated geometry – iFurMaterial
  • Animated geometry – iFurPhysicsControl
  • Written specific shaders – iFurStrandGenerator

TODOs:

  • LOD – working on it
  • Shadows – at least receiving shadows from other objects
  • Blender integration – if there is any time left

Recently I finished adding support for density and height maps, and I will soon write about how I did this. I think the way in which hair strands are generated based on the density map is quite general and could be used even for an adaptive tessellation project, so I will try to write my next post about this as soon as possible.

Until then I leave you with this video, showing more or less what I have implemented so far (YouTube HD):


Marschner Shader Part III

This is the last part of the three posts regarding the Marschner shader. I will explain how to implement the shader for this model efficiently and how to add ambient and diffuse lighting; at the end of the post I will also give the source code for generating Marschner lookup textures and a video showing the results I got in CS.

Lookup Textures

Because the M and N functions involve too many computations to be done in the pixel shader, the best optimization is to use lookup textures, which need to be updated as rarely as possible.

We can easily observe that, apart from the constants defined in Table 1 (page 8) of Marschner's paper, the M function only depends on θ_i and θ_r, and N on θ_d and φ_d. Although this might seem a good optimization at first, taking into account that all these angles would have to be computed with inverse trigonometric functions, such as acos and asin, which aren't fast at all, indexing the lookup textures directly by cos and sin sounds like a better idea.

The way in which the sine and cosine values can be computed for all these angles can be found in GPU Gems 2, Chapter 23:

  • sin θ_i = (light · tangent),
  • sin θ_r = (eye · tangent),
  • lightPerp = light − (light · tangent) · tangent,
  • eyePerp = eye − (eye · tangent) · tangent,
  • cos φ_d = (eyePerp · lightPerp) · ((eyePerp · eyePerp) · (lightPerp · lightPerp))^(−1/2).

As for cos θ_d: if we observe that θ_d depends only on θ_i and θ_r, we figure out that we can store it in a channel of the lookup texture indexed by the sines of these two angles.

The easiest way to build these two textures is to make one lookup texture for M, holding M_R, M_TT, M_TRT and cos θ_d, and one lookup texture for N. In the original paper N_TT and N_TRT each have three channels, but they can be reduced to only one channel if we consider the absorption to have one channel as well.
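
As a rough illustration, here is a minimal C++ sketch of building the first lookup texture, indexed by sin θ_i and sin θ_r and holding M_R, M_TT, M_TRT and cos θ_d. The Gaussian g and the α/β values are assumptions along the lines of Table 1, not the project's actual code:

#include <cmath>
#include <vector>

// Normalized Gaussian g(beta, x), as used for the M components.
static float Gaussian(float beta, float x)
{
  return std::exp(-x * x / (2.0f * beta * beta)) /
         (beta * std::sqrt(2.0f * 3.14159265f));
}

// size x size RGBA texels: R = M_R, G = M_TT, B = M_TRT, A = cos(theta_d),
// with both axes mapping [0, 1] texture coordinates to sines in [-1, 1].
std::vector<float> BuildMLookup(int size)
{
  // Longitudinal shifts/widths in radians (assumed values, ~5 degrees).
  const float alphaR = -0.0875f, betaR = 0.0875f;
  const float alphaTT = -alphaR / 2.0f, betaTT = betaR / 2.0f;
  const float alphaTRT = -3.0f * alphaR / 2.0f, betaTRT = 2.0f * betaR;

  std::vector<float> texels(size * size * 4);
  for (int y = 0; y < size; ++y)
    for (int x = 0; x < size; ++x)
    {
      float sinThI = 2.0f * (x + 0.5f) / size - 1.0f;
      float sinThR = 2.0f * (y + 0.5f) / size - 1.0f;
      float thI = std::asin(sinThI), thR = std::asin(sinThR);
      float thH = (thI + thR) / 2.0f; // half angle
      float thD = (thR - thI) / 2.0f; // difference angle

      float* t = &texels[4 * (y * size + x)];
      t[0] = Gaussian(betaR, thH - alphaR);     // M_R
      t[1] = Gaussian(betaTT, thH - alphaTT);   // M_TT
      t[2] = Gaussian(betaTRT, thH - alphaTRT); // M_TRT
      t[3] = std::cos(thD) * 0.5f + 0.5f;       // cos(theta_d), packed into [0, 1]
    }
  return texels;
}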

These are the lookup textures obtained with my first implementation of the Marschner project:

Ambient and diffuse lighting

The Marschner model only specifies the specular component for lighting, so in order to obtain nice visual effects, both ambient and diffuse lighting were added to this model.

I used the lighting from the Nalu Demo, presented in detail in one of my previous posts:

/* Compute diffuse lighting with phi-dependent component */
float diffuse = sqrt(max(0.0001, 1 - uv1.x * uv1.x));

/* Pass colors */
float4 diffuseColor;
diffuseColor.rgb = diffuse * objColor.rgb * DiffuseCol;
diffuseColor.a = objColor.a;
float3 ambientColor;
ambientColor = objColor.rgb * AmbientCol;

float3 lighting = (( M.r * N.r + M.g * N.g + M.b * N.b ) / (cos_qd * cos_qd));
lighting += diffuseColor.rgb;

OUT.xyz = lighting + diffuseColor.rgb * 0.2 + IN.AmbientColor;
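
Here M and N hold the values fetched from the two lookup textures and cos_qd is cos θ_d; the lighting line is exactly the specular term S = Σ_P M_P · N_P / cos²θ_d of the model.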

Source code

Here you can find the first version of my Marschner C# Project, which generates the lookup textures needed for a shader similar to the one presented in the Nalu Demo post.

There are still some things that can be improved, but I plan to release another version for that, as soon as I get a chance. Until then feel free to improve the project yourself.

These are the two adjustments made to the original model, as described in Marschner's paper:

  1. The absorption is specified by only one channel.
  2. Instead of the standard N_TRT component, the simplified version was used.

You can find more information in the README, INSTALL and LICENSE files from the archive.

Demo

Next you can see the effects this shader has on Krystal's hair. If you want to play with the application yourself, check out the hair branch from the CS main repository.

Marschner Shader Part II

In my last post I mentioned the two functions that are needed to represent the hair model as depicted in Marschner's paper:

S = S_R + S_TT + S_TRT,

S_P = M_P(θ_i, θ_r) · N_P(θ_d, φ_d) / cos²θ_d, for P = R, TT, TRT.

M component

This is actually just a probability density function, and the best choice here is a Gaussian (normal) distribution:

g(β, x) = exp(−x² / (2β²)) / (β · sqrt(2π)).

And the M components are as follows:

  • M_R(θ_h) = g(β_R, θ_h − α_R),
  • M_TT(θ_h) = g(β_TT, θ_h − α_TT),
  • M_TRT(θ_h) = g(β_TRT, θ_h − α_TRT).

N component

The N component is actually a bit tricky to compute. Here are all the main steps:

  • Reduce the problem to the 2D cross-section

This is done in order to change the index of refraction to its 2D equivalent, so that the optics of a 3D cylindrical fiber may be reduced to the 2D analysis of the optics of its cross-section. After looking into Snell's law (in its Bravais form), we define the effective indices of refraction as:

η′(θ_d) = sqrt(η² − sin²θ_d) / cos θ_d,
η″(θ_d) = η² · cos θ_d / sqrt(η² − sin²θ_d).

  • Find out the incident angles

Remember this picture:

We need to find out which incident angles contribute to a given exit direction, i.e. solve φ(p, γ_i) = φ for γ_i. The solution of this equation can be approximated by replacing φ with a cubic polynomial:

φ̂(p, γ_i) = (6pc/π − 2)·γ_i − (8pc/π³)·γ_i³ + p·π, with c = asin(1/η′).

  • The reflection factor

The Fresnel equation F(η′, η″, γ_i) is used in order to simulate the reflection off the fiber's surface from within the attenuation factor.

  • Find out the absorption factor

This is actually quite straightforward, just Beer–Lambert absorption along the ray's path inside the fiber: each internal chord of the unit-radius cross-section has length 2·cos γ_t, giving

T(σ_a, h) = exp(−2 · σ_a · cos γ_t) per internal segment,

where σ_a is the absorption coefficient.

  • The attenuation factor

This is obtained by combining both the reflection and the absorption factors, hence the "Attenuation by absorption and reflection" model from Marschner's paper:

A(0, h) = F(η′, η″, γ_i),
A(p, h) = (1 − F(η′, η″, γ_i))² · F(1/η′, 1/η″, γ_t)^(p−1) · T(σ_a, h)^p, for p ≥ 1,

where the first derivative of the cubic approximation, needed below to evaluate N, is

dφ̂/dγ_i = 6pc/π − 2 − (24pc/π³)·γ_i².

  • The N component (finally)

N_P(p, θ_d, φ_d) = Σ A(p, h) · |2 · dφ/dh|^(−1), summed over the roots h = sin γ_i of φ(p, γ_i) = φ_d,

and the N components are:

  • N_R(θ_d, φ_d) = N_P(0, θ_d, φ_d).
  • N_TT(θ_d, φ_d) = N_P(1, θ_d, φ_d).
  • N_TRT(θ_d, φ_d) = N_P(2, θ_d, φ_d).

For the last component Marschner proposes a more complex model in order to avoid singularities, but in my implementation I couldn't tell any improvement, so I stuck with the simpler version of N_TRT.

The whole model

To sum up, this is the whole Marschner hair model in just one equation:

S(θ_i, θ_r, φ_d) = Σ_P M_P(θ_i, θ_r) · N_P(θ_d, φ_d) / cos²θ_d, for P = R, TT, TRT.

I hope I managed to keep everything both simple and explicit.

Marschner Shader Part I

I decided to write a trilogy (3 posts) explaining, as best as I can, what is discussed in Marschner's paper "Light Scattering from Human Hair Fibers".

First of all, I have to warn you that in order to understand this paper you need some physics and math background rather than a lot of shader knowledge; things such as Snell's law or probability density functions are mentioned quite often.

The main advantage of the model proposed by Marschner is that it is based on the actual physical phenomenon that occurs when light passes through hair fibers. By studying electron micrographs of hair fibers such as this one:

a model was proposed in which each individual hair fiber is treated as a translucent cylinder, with the following components:

The components that contribute a distinct and visually significant aspect of hair reflectance are R, TT and TRT:

  • R – light that bounces off of the surface of the hair fiber toward the viewer.
  • TT – light that refracts into the hair and refracts out again toward the viewer.
  • TRT – light that refracts into the hair fiber, reflects off of the inside surface, and refracts out again toward the viewer.

The notation used throughout the paper is in tangent space, with the light and viewer positions expressed relative to the current hair fiber.

These are all the variable inputs needed for the Marschner hair shading model:

  • u – tangent to the hair, pointing in the direction from the root toward the tip.
  • w – normal to the hair, pointing toward the viewer (the geometry faces the camera).
  • v – binormal to the hair, pointing such that v and w complete a right-handed orthonormal basis with u; vw is the normal plane.
  • ω_i – direction of illumination (light).
  • ω_r – direction of the camera (viewer).
  • θ_i, θ_r – inclinations with respect to the normal plane (measured so that 0 is perpendicular to the hair, π/2 is u, and −π/2 is −u).
  • φ_i, φ_r – azimuths around the hair (measured so that v is 0 and w is +π/2).

Several derived angles are used, as well:

  • θ_d = (θ_r − θ_i)/2 – the difference angle.
  • φ = φ_r − φ_i – the relative azimuth.
  • θ_h = (θ_i + θ_r)/2 – the longitudinal half angle.
  • φ_h = (φ_i + φ_r)/2 – the azimuthal half angle.

Also, there are some constant parameters for the hair fibers, surface and glints, which you can find in Table 1 (page 8) of Marschner's paper.

Having all of this in mind, we can approximate the hair model as:

S = S_R + S_TT + S_TRT,

S_P = M_P(θ_i, θ_r) · N_P(θ_d, φ_d) / cos²θ_d, for P = R, TT, TRT.

So it turns out the only thing we need is to find out what M and N are. My next post will do just that.

Marschner in Nalu Demo

Before starting to implement anything I downloaded and installed the Nalu demo from Nvidia, which uses a Marschner shader for hair rendering, just to have a closer look at an actual implementation.

Sadly, this application doesn't provide any source code (except for the HLSL shaders), so I had to do some reverse engineering in order to use this shader as a test shader for hair rendering in CS. I set most of the parameters based on the application and its configuration files, and I used the lookup textures and computed some angles based on Chapter 23 from GPU Gems 2.

In order to keep things simple I chose to set all the application variables inside the shaders, but if you'd like to pass, for instance, the light position to the vertex shader, you can just comment out the line where I set its value in the vs (float4 LightPos = float4 (9000, 0, 0, 0);).

These are the connectors I used:

struct a2vConnector {
  float4 objCoord : POSITION;
  float3 objNormal : NORMAL;
  float3 Tangent: TEXCOORD0;
};

struct v2fConnector {
  float4 projCoord : POSITION;
  float3 angles : TEXCOORD1;
  half4 diffuseColor : COLOR0;
  half3 ambientColor : COLOR1;
};

I only modified the a2vConnector in order to have the Tangent buffer too.

The vertex shader looks like this:

v2fConnector main(a2vConnector a2v,
                  uniform float4x4 modelViewProj : state.matrix.mvp,

                  // Light and eye directions in object space
                  uniform float3 objLightDir,
                  uniform float3 objEyePos,

                  // Reflectance model parameters
                  uniform float DiffuseCol,
                  uniform float AmbientCol,

                  uniform float3 worldPointLight0Pos,
                  uniform float3 PointLightColor,
                  uniform float4x4 ModelViewIT : state.matrix.modelview.invtrans)
{
  v2fConnector v2f;

  float4 LightPos = float4 (9000, 0, 0, 0);

  objLightDir = normalize(LightPos.xyz -  a2v.objCoord.xyz);
  objEyePos = ModelViewIT[3].xyz;

  AmbientCol = 0;
  DiffuseCol = 0.75;
  PointLightColor = 1;
  worldPointLight0Pos = LightPos.xyz;
  float4 objColor = float4(1,0,0,1);

  /* Compute the tangent from adjacent vertices */
  float3 objTangent = normalize(a2v.Tangent - a2v.objCoord.xyz );

  /* Project */
  float4 objCoord = a2v.objCoord;
  float4 projCoord = mul(modelViewProj, objCoord);
  v2f.projCoord = projCoord;

  float3 objEyeDir = normalize(objEyePos - objCoord.xyz);  

  /* Compute longitudinal angles */
  float2 uv1;
  uv1.x = dot(objLightDir, objTangent);
  uv1.y = dot(objEyeDir, objTangent);
  v2f.angles.xy = 0.5 + 0.5*uv1;

  /* Compute the azimuthal angle */
  float3 lightPerp = objLightDir - uv1.x * objTangent;
  float3 eyePerp = objEyeDir - uv1.y * objTangent;
  float cosPhi = dot(eyePerp, lightPerp) * rsqrt(dot(eyePerp, eyePerp) * dot(lightPerp, lightPerp));
  v2f.angles.z = 0.5*cosPhi + 0.5;

  /* Compute diffuse lighting with phi-dependent component */
  float diffuse = sqrt(max(0, 1 - uv1.x*uv1.x));

  /* Pass colors */
  v2f.diffuseColor.rgb = diffuse*objColor.rgb * DiffuseCol;
  v2f.diffuseColor.a = objColor.a;
  v2f.ambientColor = objColor.rgb * AmbientCol;

  // compute point light lighting  
  float3 Delta = worldPointLight0Pos - a2v.objCoord.xyz;
  float3 pointLightDir = normalize(Delta);
  float NDL = dot(objTangent, pointLightDir);
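  // Note: NDL is computed here but not used further below.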

  float pointLightDist = sqrt(dot(Delta,Delta)) * (1.0/400.0);
  float att = min(1,max(0,pointLightDist));

  v2f.ambientColor = (1.0-att) * PointLightColor;

  return v2f;
}

The only thing I modified in the vertex shader is using Tangent instead of objNormal to determine the objTangent vector. I also added code to compute objEyePos and objLightDir. Here it is important to take into account that, since the hair geometry is recreated every frame, it doesn't have any World (or Model) matrix (it is the identity matrix).

Moving on to the pixel/fragment shader:

float4 main(v2fConnector v2f,
            // Parameters for the hair model
            uniform half Rcol,
            uniform half TTcol,

            // Lookup tables for the hair model (fixed point)
            uniform sampler2D lookup1fixed,
            uniform sampler2D lookup2fixed

            ) : COLOR
{
  Rcol = 1.87639;
  TTcol = 3.70201;

  /* Compute the longitudinal reflectance component */
  half2 uv1 = v2f.angles.xy;
  half4 m = h4tex2D(lookup1fixed, uv1);

  /* Compute the azimuthal reflectance component */
  half2 uv2;
  uv2.x = cos( (asin (2 * v2f.angles.x - 1) - asin (2 * v2f.angles.y - 1) ) / 2 ) * 0.5 + 0.5; //m.w;
  uv2.y = v2f.angles.z;
  half4 ntt = h4tex2D(lookup2fixed, uv2);

  /* Combine longitudinal and azimuthal reflectance */
  half3 lighting;
  lighting = (m.r * ntt.a * Rcol.r).xxx;      // Primary highlight  
  lighting = m.b * ntt.rgb * TTcol.r;      // Transmittance (using MTRT instead of MTT)
  lighting += v2f.diffuseColor.rgb;            // Diffuse lighting

  float4 COL;
  COL.rgb = lighting + v2f.diffuseColor.rgb*0.2 + v2f.ambientColor; 

  //COL.rgb = v2f.ambientColor;
  COL.a = v2f.diffuseColor.a;

  return COL;
}

For the pixel shader I had to compute the first cosine angle, cos θ_d (see GPU Gems 2, Chapter 23), because the lookup texture I got doesn't have an alpha channel. So uv2.x = cos( (asin (2 * v2f.angles.x - 1) - asin (2 * v2f.angles.y - 1) ) / 2 ) * 0.5 + 0.5; instead of m.w.

Later edit: there is still a bug regarding the usage of ntt.a (which is invalid) for the primary highlight.

And that’s pretty much it regarding the Nvidia Nalu Marschner shader.

Almost forgot, here are the lookup textures:

Next you can see a comparison between Krystal rendered with Phong, Kajiya–Kay and Marschner. If you have any questions about the Phong shading model (or per-pixel lighting) you can look over these PDFs: Basics of GPU-Based Programming and MathematicsOfPerPixelLighting.

Physics reloaded

A couple of days ago I managed to find some Bullet rope parameters that make the ropes act more like hair strands:

body->m_cfg.kDP = 0.08f; // no elasticity
body->m_cfg.piterations = 16; // no white zone
body->m_cfg.timescale = 2;

This is what each one of them does:

  • kDP is the damping coefficient, in [0, 1]: zero means no damping and one means full damping. This is a damped spring:

  • piterations is the number of iterations for the position solver (if any); it goes from 1 to infinity.
  • timescale is a factor applied to the time step that can be used to speed up or slow down the simulation; the default is 1.

There are a lot more soft-body settings that can be altered; see Bullet's online documentation for details.
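
For reference, a minimal sketch of how these settings might be applied when creating one rope per guide hair with Bullet's soft-body helpers (the endpoints, resolution and mass are made-up values, not the plugin's actual ones):

#include <BulletSoftBody/btSoftBodyHelpers.h>
#include <BulletSoftBody/btSoftRigidDynamicsWorld.h>

// Create a rope between the strand's root and tip and apply the parameters above.
btSoftBody* CreateHairRope(btSoftRigidDynamicsWorld* world,
                           btSoftBodyWorldInfo& worldInfo,
                           const btVector3& root, const btVector3& tip)
{
  const int segments = 8; // control points per guide rope (assumed)
  btSoftBody* body = btSoftBodyHelpers::CreateRope(worldInfo, root, tip,
                                                   segments, 1 /* fix the root */);
  body->m_cfg.kDP = 0.08f;      // no elasticity
  body->m_cfg.piterations = 16; // no white zone
  body->m_cfg.timescale = 2;
  body->setTotalMass(0.1f);     // light strand (assumed)
  world->addSoftBody(body);
  return body;
}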

Next you can see how these settings look on Krystal, with 367 control points and 3670 hair strands. I guess I will have to tweak them further to get rid of some of the remaining elasticity, but I find the overall simulation quite plausible: