Download (870 KB)


This example shows how dynamic shadow mapping can be implemented using WebRender. Due to the performance costs, this example may run slowly on the device you are viewing it on. By clicking the "Switch Shader" button, you can toggle between the 3D view of the scene and the view of the light that creates the shadow map.

This implementation of shadow mapping involves creating a shadow map each frame, then checking each pixel that is drawn against the map to see if it is in shadow. The shadow map is created with an orthographic view from above, which stores the z (depth) value for each pixel. The depth is spread across the 4 colour channels, RGBA, so viewing the depth map directly does look a little strange. This map is then given to the shaders for the 3D view. In the vertex shader, the XY position within the shadow map is calculated for each vertex (alongside the normal 3D position) so that it can be interpolated for the pixel shader. The pixel shader then uses that interpolated XY position to look up the depth in the shadow map.
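To make the RGBA depth encoding concrete, here is a minimal sketch of packing a normalized depth value into four 8-bit channels and unpacking it again, mirroring what the shaders do. The function names are illustrative, not part of WebRender's API, and the exact packing scheme in the example's shaders may differ.

```typescript
// Pack a depth value in [0, 1) into four bytes (R, G, B, A),
// most significant byte first, using 8 bits of precision per channel.
function packDepth(depth: number): [number, number, number, number] {
  const scaled = Math.min(Math.max(depth, 0), 1 - 1e-9) * 0xffffffff;
  const r = Math.floor(scaled / 0x1000000) & 0xff;
  const g = Math.floor(scaled / 0x10000) & 0xff;
  const b = Math.floor(scaled / 0x100) & 0xff;
  const a = Math.floor(scaled) & 0xff;
  return [r, g, b, a];
}

// Reverse the packing to recover an approximate depth value.
function unpackDepth([r, g, b, a]: [number, number, number, number]): number {
  return (r * 0x1000000 + g * 0x10000 + b * 0x100 + a) / 0xffffffff;
}
```

Because each channel only holds 8 bits, all four channels together give roughly 32 bits of depth precision, which is why a single channel on its own looks like noise when the map is viewed as an image.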

This example does all the rendering with a single instance of WebRender on one canvas. As a result, the shadow map is quite high resolution and has a correspondingly larger impact on performance. A better implementation might draw the shadow map to a smaller hidden canvas, and could even move that work to a separate thread that simply keeps the shadow map up to date. There are many ways the performance and quality of this could be improved; this example is only a guide.
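A rough sketch of why a smaller hidden canvas helps: with one byte per RGBA channel, each shadow-map texel costs 4 bytes, so the per-frame work scales with the texel count. The sizes below are illustrative, not taken from the example.

```typescript
// Each shadow-map texel stores depth across 4 colour channels, 1 byte each.
function shadowMapBytes(width: number, height: number): number {
  return width * height * 4;
}

// Shadow map rendered at the main canvas size (illustrative size).
const fullSize = shadowMapBytes(1024, 1024);
// Shadow map rendered to a hypothetical smaller hidden canvas.
const smallSize = shadowMapBytes(256, 256);

// A quarter-resolution map touches 1/16 of the texels each frame.
console.log(fullSize / smallSize);
```

The trade-off is shadow quality: fewer texels mean blockier shadow edges, so the right size depends on the scene.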

There are a few things to take note of. WebRender passes the data back to the callback function as 8-bit numbers. Since we only care about the combined 32-bit value, we use typed arrays to create different views (8-bit and 32-bit) of the same underlying buffer. This is also a very basic implementation of dynamic shadows and does not include other lighting effects. The techniques used are fairly common and should be familiar if you have experience with other graphics libraries such as OpenGL; the biggest differences are that we have to store the shadow map ourselves and that the depth has to be stored in the colour channels (since we cannot access the depth buffer).
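The typed-array trick can be sketched as follows: one `ArrayBuffer` is shared between a `Uint8Array` view (the per-channel bytes the callback receives) and a `Uint32Array` view (the combined values we actually want). How the callback delivers the bytes is assumed here, not taken from WebRender's API.

```typescript
// One shared buffer with room for four RGBA pixels.
const pixelBuffer = new ArrayBuffer(4 * 4);
const channels = new Uint8Array(pixelBuffer);  // 8-bit per-channel view
const combined = new Uint32Array(pixelBuffer); // 32-bit whole-pixel view

// Writing the four channels of pixel 0 through the 8-bit view...
channels[0] = 0x12; // R
channels[1] = 0x34; // G
channels[2] = 0x56; // B
channels[3] = 0x78; // A

// ...makes the combined 32-bit value readable without any manual shifting.
// Note the byte order depends on platform endianness; on little-endian
// hardware this prints "78563412".
console.log(combined[0].toString(16));
```

Both views index into the same memory, so no copying or bit arithmetic is needed to reassemble the depth values.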