## Relief maps

Often we want to render a surface that isn't precisely flat, but modelling its actual surface structure in the mesh would be prohibitively expensive. Think for instance of a desert - there are small ripples of wind-blown sand, and if the sand isn't very fine, you can actually recognize individual grains or pebbles - but the idea of modelling every pebble by adding vertices is bound to fail quickly.

The solution is, as with other surface properties, to use a map to add apparent structure to a triangle between the mesh points via a texture (which can be a real texture or just procedural noise, of course). What we want to do, then, is to create a surface that appears to have structure. To accomplish that, we first need to understand the main visual cues for the perception of such structure.
- If the surface is lit with directional light, the fact that the light is strong where it falls directly onto the surface, attenuated where it arrives at a shallow angle and absent where the surface normal faces away from the light gives a strong visual indication of a relief.
- If a surface is textured with a fairly regular pattern, we see the original pattern when viewing the surface at a 90 degree angle but strong distortions when we look at a shallow angle. The way the pattern (for instance individual pebbles) appears compressed and stretched to the eye due to the variation of the view angle across a relief provides a second depth cue (which is, however, absent for monochromatic surfaces).
- Finally, we can recognize a pronounced relief easily by the fact that parts of the relief closer to the viewer can obscure the surface further back, i.e. block the line of sight.
## Vector maps

While the (rgba) channels of a texture are quite sufficient to store a 3-vector and then some, unlike for scalar maps, for vector maps we need to commit to a coordinate system in which the vector is expressed. Like scalar maps, vector maps are the domain of the fragment shader.
## Bumpiness effect

Let's start with the simplest case - assume we have a terrain surface and we just want it to look a bit rough, without being picky about the detailed appearance. In principle, we'd need to know the normal as distorted by the bumps everywhere. But, terrain being terrain, we know the normal n usually points upward in suitable coordinates (say we have the model coordinates of the mesh arranged that way). The distortions are then going to be small wiggles around this upward direction. For small distortions, and if we don't care about details, we can in fact change NdotL = dot(n, lightDir) rather than the normal itself.

But we have to determine the magnitude of the distortion. Say we have a noise function Noise2D(Pos.xy, scale) which takes a 2d position on the mesh and a length scale at which the noise is generated as arguments and returns a value between 0 and 1 as output. We can use that to model the displacement height of the terrain we want to render. Whether a surface is then lit or shaded depends on the steepness of that function along the light direction - mathematically, the gradient of the heightfield. This requires a numerical derivative, which we can take as a finite difference, f'(x) = (f(x + Dx) - f(x))/Dx. In numerical mathematics that's usually a bad idea, but since we're not interested in the exact result anyway, this will do.

The algorithm then determines NdotL for a given position, computes the gradient of the heightfield by evaluating the noise function twice, displaced along the light direction (all in model coordinates), and uses the result, multiplied by an overall noise_magnitude, to modify NdotL. The relevant part of the fragment shader might then look like:
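A minimal sketch of that shader fragment, assuming Noise2D() is defined as described above, relPos holds the fragment position in model coordinates and noise_magnitude and the finite-difference step are tuning parameters of this sketch:

```glsl
// sketch only - Noise2D() as described in the text, relPos is the
// fragment position in model coordinates, n the upward surface normal

float NdotL = dot(n, lightDir);

// evaluate the heightfield twice, displaced along the light
// direction projected into the (xy) plane
float delta = 0.1;                      // finite-difference step
vec2 light_xy = normalize(lightDir.xy);
float h1 = Noise2D(relPos.xy, 0.5);
float h2 = Noise2D(relPos.xy + delta * light_xy, 0.5);

// finite-difference gradient of the heightfield along the light
float gradient = (h2 - h1) / delta;

// slopes facing the light get brighter, slopes facing away darker
NdotL = clamp(NdotL + noise_magnitude * gradient, 0.0, 1.0);
```

The sign and magnitude of the perturbation are heuristic - as the text notes, we only aim for a plausible impression of roughness, not an exact lighting solution.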
(Note that the assumption that the terrain is close to flat has been used both in the fact that we characterize it by a 2d coordinate position and by the fact that we project the light direction into the (xy) plane via swizzling - the effect does not work too well for vertical rock faces.) The result is a gentle pattern of light and shadow drawn on the terrain, giving the impression of a shallow relief.
This is a technique which delivers a reasonable impression of roughness very cheaply, but offers little detailed control. If control over the apparent surface structure is needed (for instance when rendering a pattern of rivets on a wing), a normal map is the better choice.

## Normal mapping

For a normal map, we directly encode the surface normal in a texture rather than the displacement height over the terrain (in the example above, the distortion of the normal has been implicitly created by taking the derivative - there's no mathematically clean relation between the heightfield and the actual distortion, it just looks plausible).

We're going to pack the surface normal into the (rgb) channels of a texture now - but what coordinate system should that vector be in? We can't encode a normal in eye space because this is not a fixed system, it rotates with the eye movement. We could encode normals in model space, such that directly vec3 normal = (texture2D(normalMapTex, gl_TexCoord[0].st)).rgb;. This works, but is a bit unwieldy. The normal map texture has to be different for every face of a cube even if the surface structure is supposed to be the same - because the faces are oriented differently. Moreover, every time we'd like to rotate or otherwise change a model in the 3d modeling application, the normal map texture has to be re-computed. It's more usual to view a normal map as a property of the local surface (defined by the normal of the underlying triangle). Since that surface may be curved, we need a local coordinate system that follows the surface.
Such a coordinate system is provided by the tangent and binormal vectors, which together with the normal span a local frame on the surface and are passed to the shader as per-vertex attributes.
We can thus encode just the variation of the surface structure in the normal map, defining the direction of the unperturbed normal as the z-coordinate while tangent and binormal stand for x and y. The vertex shader then picks up the attributes, transforms them into eye space and declares them as varying data types for interpolation across the triangle:
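A vertex shader sketch of this step - the attribute names tangent and binormal are assumptions and depend on how the model exporter provides them:

```glsl
// vertex shader - 'tangent' and 'binormal' are per-vertex attributes;
// the names are assumptions and depend on the model exporter
attribute vec3 tangent;
attribute vec3 binormal;

varying vec3 VNormal;
varying vec3 VTangent;
varying vec3 VBinormal;

void main()
{
    // transform the local coordinate axes into eye space
    VNormal   = normalize(gl_NormalMatrix * gl_Normal);
    VTangent  = normalize(gl_NormalMatrix * tangent);
    VBinormal = normalize(gl_NormalMatrix * binormal);

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position    = ftransform();
}
```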
The fragment shader then picks up the interpolated values and uses the normal map to construct a normal by going along the direction of the coordinate axes given by the vector triplet:
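A fragment shader sketch of the normal construction, assuming the varying names from the vertex stage and a sampler called normalMapTex:

```glsl
// fragment shader - construct the perturbed normal in eye space
varying vec3 VNormal;
varying vec3 VTangent;
varying vec3 VBinormal;

uniform sampler2D normalMapTex;

void main()
{
    vec4 normal_texel = texture2D(normalMapTex, gl_TexCoord[0].st);

    // decode the [0,1] texture range into [-1,1] vector components
    vec3 N = normal_texel.rgb * 2.0 - 1.0;

    // walk along the interpolated coordinate axes
    N = normalize(N.x * VTangent + N.y * VBinormal + N.z * VNormal);

    // ... N is now available for the lighting computation
}
```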
Note that since a texture can't encode negative color values but a normal component can be negative, the texture (rgb) value encodes negative numbers in the range from 0 to 0.5 and positive values in the range from 0.5 to 1; this is decoded in N = normal_texel.rgb * 2.0 - 1.0;. The normal N obtained at the end can then be used further down for lighting purposes.
## Parallax mapping

Parallax mapping is a cheap way to implement the effect that a regular texture structure appears compressed and stretched dependent on view angle when there is surface bumpiness. It is in essence a prescription to look up a texture at a position different from the nominal reference point.

For this purpose, the bumpiness is characterized by a heightfield over the nominal triangle (in the following, we discuss it for a surface that is stretched in the (xy) plane in model space for simplicity, but it can be generalized to tangent space easily using the techniques described above). The idea is as follows: we'd like to look up the texture where the view ray intersects the heightfield (and you can see how the stretching arises from this procedure).
The problem is, we don't readily know where this happens. So instead we use the heightfield at the default lookup position to estimate what the heightfield looks like, go back by an offset along the original view ray and look up the texture at that position. This doesn't give the exact result, but if the heightfield is not strongly varying, it is close enough to produce compelling visuals.
To avoid artifacts at shallow viewing angles, we may want to limit the offset at which we retrieve the texture to some fixed value. If the view vector in model coordinates is view_vec and we have a heightfield function hfield(in vec2 xy) (which may be noise or a texture) that returns the displacement over the default surface, the texture lookup offset is determined by the local height and the in-plane component of the view direction.
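A sketch of that offset computation, assuming hfield() and view_vec as above; parallax_scale and offset_max are hypothetical tuning parameters of this sketch:

```glsl
// sketch only - hfield() and view_vec as in the text; parallax_scale
// and offset_max are hypothetical tuning parameters
uniform float parallax_scale;
uniform float offset_max;

vec2 parallax_coord(in vec2 xy, in vec3 view_vec)
{
    // height of the bump at the default lookup position
    float height = parallax_scale * hfield(xy);

    // shift along the in-plane component of the view direction; the
    // division by view_vec.z makes the shift grow at shallow angles
    vec2 offset = height * view_vec.xy / view_vec.z;

    // limit the offset to avoid artifacts at very shallow view angles
    offset = clamp(offset, vec2(-offset_max), vec2(offset_max));

    return xy + offset;
}
```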
And, surprisingly, that's already all that's needed - look up the texture at the shifted coordinates and it will show a pattern distortion consistent with the heightfield.
## Height mapping

If we want to do better, we need to determine the actual intersection between view ray and heightfield. There are many different techniques to do that (all involve repeated calls to the heightfield function) - one can, for instance, do adaptive subdivision and zero in on the intersection, or just do a straightforward outward-in sampling. Which technique is best depends on what accuracy is needed and, most importantly, on what the heightfield looks like. A sharp heightfield in which strongly defined cube-like clusters reach upward requires different techniques than a gently rolling hillscape.

The following algorithm assumes a heightmap texture which we want to map to a horizontal size given by relief_hresolution and whose vertical size is mapped to relief_vscale. The fact that the relief can only reach a certain height above the ground is used to find a starting point for a search along the view ray (given by the coordinate difference relPos). The heightfield is evaluated at trial positions progressively inward until an intersection is found; that value is then returned as the shift in the texture coordinate. Functions implementing this in the fragment shader might look like:
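A possible sketch of such a function, following the description above - the exact interface is an assumption, and a fixed number of linear search steps stands in for whatever sampling strategy the heightfield demands:

```glsl
// sketch only - outward-in sampling along the view ray
uniform sampler2D heightMapTex;     // heightfield in the alpha channel
uniform float relief_hresolution;   // horizontal mapping scale
uniform float relief_vscale;        // vertical relief scale

// returns the shift of the texture coordinate; view_vec points from
// the surface towards the eye in model coordinates
vec2 heightmap_shift(in vec2 base_coord, in vec3 view_vec)
{
    const int n_steps = 16;

    // the relief reaches at most relief_vscale above the base surface,
    // so the search can start where the view ray enters that layer
    float vz = max(view_vec.z, 0.05);   // guard against shallow angles
    vec2 entry_offset = relief_vscale * view_vec.xy / vz;

    float layer_step = relief_vscale / float(n_steps);
    vec2  coord_step = entry_offset / float(n_steps);

    float ray_height = relief_vscale;
    vec2  coord = base_coord + entry_offset;

    // march the ray progressively inward until it drops below the relief
    for (int i = 0; i < n_steps; i++)
    {
        float h = relief_vscale
                  * texture2D(heightMapTex, coord / relief_hresolution).a;
        if (ray_height <= h) break;     // intersection found
        ray_height -= layer_step;
        coord -= coord_step;
    }

    return coord - base_coord;
}
```

More search steps give a more accurate intersection at the cost of additional texture lookups - which is precisely the performance trade-off discussed below.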
Since only the alpha channel of the texture is needed to provide the heightfield, the (rgb) channels can carry the associated normal map for lighting purposes (it can also be computed from the heightfield inside the shader, but that's more expensive than using the existing texture lookup). The results of a heightmap are quite compelling if the view angle is not too shallow - the texture pattern shows the right distortions, the shadows are convincing and bumps can obscure what is behind them.
In fact, all visual cues are there - however, note that the heightmap does just that: it does not alter the mesh, so from the point of view of e.g. collision detection the terrain is unchanged - one can't 'walk' on a heightmap, only on the original mesh. Also, texture lookup calls are modestly expensive, and the need for perhaps 10-20 calls to sample the intersection point with good accuracy makes height mapping a performance-costly technique.
Created by Thorsten Renk 2016.