WebRender

WebRender is a complex library underneath, so this page will attempt to outline the major elements and how they fit together. The library is made up of four main parts: the interface that your application interacts with; WR_Core, where the rendering pipeline and most major functionality live; the shader generation code; and the rendering pipeline code. Although these parts often share functionality and exist within the same scope, this is a rough split of the library.

First is the interface layer that sits between your application and WR_Core. It is used to check inputs, hide the underlying functionality, and allow for both multi-threaded and non-threaded rendering modes. Every function call passes through this interface and, depending on the rendering mode, is either sent to a thread or passed straight to WR_Core. For example, the initialisation functions, constructor and initialise, set up a WebRender instance with only two calls in your application, whereas accessing WR_Core directly requires more information and more function calls. There is nothing stopping you from accessing WR_Core directly in non-threaded mode, but doing so is neither recommended nor documented.
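
As a rough sketch, setting up through the interface might look like the following; the parameter lists here are illustrative assumptions, not the documented signatures (see the constructor and initialise API pages for those):

    // Hypothetical typings for the two set-up calls; the parameter lists
    // below are assumptions for illustration, not the documented API.
    declare class WebRender {
      constructor(target: HTMLCanvasElement);
      initialise(width: number, height: number, threaded?: boolean): void;
    }

    const canvas = document.querySelector("canvas") as HTMLCanvasElement;
    const wr = new WebRender(canvas);    // first call: construct the instance
    wr.initialise(640, 480, true);       // second call: set it up
    // Every later call (loadVBO, draw, ...) then goes through the
    // interface, which either queues it onto a render thread or passes
    // it straight to WR_Core.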

WR_Core is the main scope of everything within the library: it holds most of the rendering pipeline and all the helper functions, and it handles all buffers and data. After your calls have gone through the interface, they are often passed verbatim into WR_Core, where they are executed. The biggest part of WR_Core is the rendering pipeline and its associated functions, but that is best explained in its own section below. Just know that anything loaded into, or executed by, WebRender lives within WR_Core. For more information on vertex buffers, texture buffers and multi-threading, see their related pages.

The shader architecture is separated from the rest of WebRender because it contains a vast amount of generated code and has to be an object in its own right. This page will not go into depth about how the shader programs work; see the shaders page for more information. What we do need to know is that the shaders are connected to the rendering pipeline at the start, for vertex manipulation in the vertex shader, and at the end, for colour generation in the pixel/fragment shader. The shaders also contain much of the rasterisation code, as it is all generated based on the shader inputs.
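
Conceptually, the two programmable stages have shapes like the sketch below; this is illustrative TypeScript only, since the real shaders are generated code and these signatures are assumptions:

    type Vec4 = [number, number, number, number];

    // Runs once per vertex at the start of the pipeline; typically
    // applies a transform (e.g. model-view-projection) to the position.
    function vertexShader(position: Vec4, uv: [number, number]) {
      return { position, uv };
    }

    // Runs once per visible pixel at the end of the pipeline; turns the
    // interpolated attributes into a colour.
    function pixelShader(uv: [number, number]): Vec4 {
      return [uv[0], uv[1], 0, 1];
    }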

Rendering Pipeline

The rendering pipeline is by far the most important part of WebRender, so some time must be spent explaining it. When a draw command comes in, the given vertex buffer id is used to grab the buffer that is to be drawn, which is copied into an array and passed to the vertex shader. The code for the loop that walks the buffer as defined (starting index and length) is generated by the shader. Every three vertices are treated as a triangle, and once three vertices have gone through the vertex shader they are passed together into the pipeline.
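
A rough sketch of this assembly step, with hypothetical names (the real loop is generated by the shader):

    function assembleTriangles(
      vertices: Float32Array,              // the copied vertex buffer
      start: number, length: number,       // as defined by the draw call
      stride: number,                      // floats per vertex
      runVertexShader: (v: Float32Array) => Float32Array,
      emitTriangle: (tri: Float32Array[]) => void
    ): void {
      let tri: Float32Array[] = [];
      for (let i = start; i < start + length; i++) {
        tri.push(runVertexShader(vertices.subarray(i * stride, (i + 1) * stride)));
        if (tri.length === 3) {            // every three vertices form a triangle
          emitTriangle(tri);               // pass them into the pipeline together
          tri = [];
        }
      }
    }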

WebRender expects the first four positions to contain the XYZW values of the vertex, so there is a simple step at the start to reorder the vertex attributes if this is not the case. The triangle is then checked against the 3D viewing frustum: it will either be completely in, completely out (in which case it is discarded), or partially in. Next, if the vertex data has UV coordinates, they are moved into the positive range for more efficient sampling later. Following this, the triangle is clipped against the near plane based on the results of the earlier check.
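
As an illustration, the near-plane check and clip in clip space might look like this sketch; it assumes a z >= -w inside convention, which may differ from WebRender's:

    type ClipVertex = { x: number; y: number; z: number; w: number };

    // Classify a triangle against the near plane (z = -w).
    function classifyNearPlane(tri: ClipVertex[]): "in" | "out" | "partial" {
      const inside = tri.filter(v => v.z >= -v.w).length;
      if (inside === 3) return "in";       // keep unchanged
      if (inside === 0) return "out";      // discard entirely
      return "partial";                    // clip the crossing edges
    }

    // Intersection of an edge with the near plane; every other vertex
    // attribute is interpolated by the same factor t along the edge.
    function intersectNear(a: ClipVertex, b: ClipVertex): ClipVertex {
      const da = a.z + a.w, db = b.z + b.w;
      const t = da / (da - db);
      return {
        x: a.x + t * (b.x - a.x),
        y: a.y + t * (b.y - a.y),
        z: a.z + t * (b.z - a.z),
        w: a.w + t * (b.w - a.w),
      };
    }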

The next step is to apply the perspective division, dividing through by the w component; the vertex attributes (such as UV coordinates) are divided as well, which prevents them from being warped by perspective when interpolated later. Following this, backface culling is applied: any triangle whose vertices are in clockwise order is discarded at this point. The XY coordinates are then converted to screen coordinates and clipped against the edges of the screen.
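
A sketch of these two steps follows; the winding test assumes a y-up convention, and the sign flips for y-down screen coordinates:

    type Clip = { x: number; y: number; z: number; w: number };
    type Vec2 = { x: number; y: number };

    // Perspective division: divide through by w. Keeping 1/w around
    // allows attribute interpolation to stay perspective-correct
    // later in the pipeline.
    function perspectiveDivide(v: Clip) {
      const invW = 1 / v.w;
      return { x: v.x * invW, y: v.y * invW, z: v.z * invW, invW };
    }

    // Signed double-area of the triangle in screen space; a negative
    // value means clockwise under a y-up convention.
    function isClockwise(a: Vec2, b: Vec2, c: Vec2): boolean {
      return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x) < 0;
    }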

At this point the triangle is passed into the rasterisation code, where values are interpolated along the edges and each scanline of the triangle is drawn in turn. For each pixel, the depth is checked against the depth buffer, and if the pixel is visible the interpolated vertex attributes are passed into the pixel/fragment shader to determine a colour value. The colour and depth buffers are then updated from the result.
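
The per-pixel step amounts to something like this sketch, with illustrative names and an assumed smaller-is-nearer depth convention:

    function shadePixel(
      x: number, y: number, depth: number, width: number,
      depthBuffer: Float32Array, colourBuffer: Uint32Array,
      fragmentShader: () => number    // returns a 32-bit colour value
    ): void {
      const i = y * width + x;
      if (depth < depthBuffer[i]) {          // visible: nearer than what is stored
        depthBuffer[i] = depth;              // update the depth buffer
        colourBuffer[i] = fragmentShader();  // update the colour buffer
      }
    }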


As discussed, WebRender uses two pixel buffers for rendering: the colour buffer and the depth buffer. Only the colour buffer can be accessed by your application, using getBuffer, so if you want to create and store a depth map (as in the shadow example) you must save the depth value (the z value) as a 32-bit colour in the colour buffer. The depth buffer can be turned off using setProperty; however, this will tend to slow down rendering, as occluded pixels are no longer rejected and more pixel draw steps happen.
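
One possible packing scheme is sketched below; it is an assumption for illustration, not WebRender's documented format:

    // Pack a normalised depth value (0..1) into an unsigned 32-bit
    // integer so it can be written into the colour buffer, and back.
    function packDepth(z: number): number {
      return (Math.min(Math.max(z, 0), 1) * 0xFFFFFFFF) >>> 0;
    }

    function unpackDepth(packed: number): number {
      return (packed >>> 0) / 0xFFFFFFFF;
    }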

The best way to fully understand all of this and how to use WebRender is to read the tutorials and look at how the example applications work. More detailed information on how each function works can be found on its respective API page.

Related Functions

constructor

addSBOVertexAttribute

assignSBOPixelShader

assignSBOVertexShader

bindSBO

bindTBO

clearBuffer

clearColor

clearSBO

clearTBO

clearVBO

destroyThreads

draw

getBuffer

getSBOId

getTBOId

getVBOId

initialise

loadSBO

loadTBO

loadVBO

resetScreen

setProperty

setShaderVariable