In scientific visualization, volume rendering is widely used to visualize 3D scalar fields. These scalar fields are often uniform grids of values, representing, for example, charge density around a molecule, an MRI or CT scan, air flow around an airplane, etc. Volume rendering is a conceptually straightforward method for turning such data into an image: by sampling the data along rays from the eye and assigning a color and transparency to each sample, we can produce useful and beautiful images of such scalar fields (see Figure 1). In a GPU renderer, these 3D scalar fields are stored as 3D textures; however, in WebGL1 3D textures were not supported, requiring additional hacks to emulate them for volume rendering. Recently, WebGL2 added support for 3D textures, allowing for an elegant and fast volume renderer to be implemented entirely in the browser. In this post we’ll discuss the mathematical background for volume rendering, and how it can be implemented in WebGL2 to create an interactive volume renderer entirely in the browser! Before we start, you can try out the volume renderer described in this post online.
To produce a physically realistic image from volumetric data, we need to model how light rays are absorbed, emitted, and scattered by the medium (Figure 2). While modelling light transport through a medium at this level produces beautiful, physically correct images, it is too expensive for interactive rendering, which is our goal in visualization software. In scientific visualization, our end goal is to allow scientists to interactively explore their data, enabling them to ask and answer questions about their research problem. As a complete physically based model with scattering is too expensive for interactive rendering, visualization applications employ a simplified emission-absorption model, either ignoring expensive scattering effects, or approximating them in some way. Here we will just focus on the emission-absorption model.
In the emission-absorption model, we only compute lighting effects occurring directly along the black ray in Figure 2, and ignore those from the dashed gray rays. Rays passing through the volume and reaching the eye accumulate color emitted by the volume, and are attenuated as they traverse it due to absorption by the volume. If we trace rays from the eye through the volume, we can compute the light received at the eye by integrating the ray through the volume, to accumulate the emission and absorption along the ray. Given a ray from the eye which enters the volume at $s = 0$ and exits it at $s = L$, we can compute the light $C(r)$ which is received at the eye using the following integral:

$$C(r) = \int_0^L C(s) \mu(s) e^{-\int_0^s \mu(t)\,dt}\,ds$$
As the ray passes through the volume, we integrate the emitted color $C(s)$ and absorption $\mu(s)$ at each point along the ray. The emitted color at each point is attenuated as it returns to the eye by the volume's absorption up to that point, which is computed with the $e^{-\int_0^s \mu(t)\,dt}$ term.
In general, this integral cannot be computed analytically, and we must use a numeric approximation. We approximate the integral by taking a set of $N$ samples along the ray on the interval $s = [0, L]$, each a distance $\Delta s$ apart (Figure 3), and summing these samples together:

$$C(r) = \sum_{i=0}^{N} C(i\Delta s)\,\mu(i\Delta s)\,\Delta s \prod_{j=0}^{i-1} e^{-\mu(j\Delta s)\Delta s}$$

The attenuation term at each sample point becomes a product series, accumulating the absorption at previous samples.
To simplify this sum further, we approximate the attenuation term for the previous samples ($e^{-\mu(j\Delta s)\Delta s}$) by its Taylor series, $e^{-\mu(j\Delta s)\Delta s} \approx 1 - \mu(j\Delta s)\Delta s$. We also introduce alpha, $\alpha(i\Delta s) = \mu(i\Delta s)\Delta s$, for convenience. This yields the front-to-back alpha compositing equation:

$$C(r) = \sum_{i=0}^{N} C(i\Delta s)\,\alpha(i\Delta s) \prod_{j=0}^{i-1} \left(1 - \alpha(j\Delta s)\right)$$
The above equation amounts to a for loop, where we step the ray through the volume, and accumulate the color and opacity iteratively as we go. This loop continues until the ray either leaves the volume, or the accumulated color has become opaque ($\alpha = 1$). The iterative computation of the above sum is done using the familiar front-to-back compositing equations:

$$\hat{C}_i = \hat{C}_{i-1} + (1 - \alpha_{i-1})\,\hat{C}(i\Delta s)$$
$$\alpha_i = \alpha_{i-1} + (1 - \alpha_{i-1})\,\alpha(i\Delta s)$$
In these final equations we use pre-multiplied opacity for correct blending, $\hat{C}(i\Delta s) = C(i\Delta s)\,\alpha(i\Delta s)$.
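The iteration described by these equations can be sketched as a CPU-side reference in JavaScript (a minimal sketch; the sample colors and opacities here are placeholders, not values from a real volume):

```javascript
// Front-to-back compositing of a list of samples along a ray.
// Each sample has a color [r, g, b] and an opacity alpha in [0, 1].
function compositeFrontToBack(samples) {
  const color = [0, 0, 0];
  let alpha = 0;
  for (const s of samples) {
    // Pre-multiply the sample's color by its opacity for correct blending,
    // weighted by the transparency accumulated so far
    const weight = (1 - alpha) * s.alpha;
    color[0] += weight * s.color[0];
    color[1] += weight * s.color[1];
    color[2] += weight * s.color[2];
    alpha += weight;
    // Early exit once the accumulated color is nearly opaque
    if (alpha >= 0.99) break;
  }
  return { color, alpha };
}
```

Note that an opaque sample near the front of the ray fully hides everything behind it, which is exactly what the early exit exploits.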
To render an image of the volume we just need to trace a ray from the eye through each pixel, and perform the above iteration for each ray intersecting the volume. Each ray (or pixel) we process is independent, so if we want to render the image quickly all we need is a way to process a large number of pixels in parallel. This is where the GPU comes in. By implementing the raymarching process in a fragment shader we can leverage the parallel computing power of the GPU to implement a very fast volume renderer!
To run our raymarching work in the fragment shader, we need to get the GPU to run the fragment shader for the pixels we want to trace rays through. However, the OpenGL pipeline works on geometric primitives (Figure 5), and does not have a direct method to run the fragment shader on some region of the screen. To work around this, we can render some proxy geometry to execute the fragment shader on the pixels we want to render. Our approach to rendering the volume will be similar to that of ShaderToy and demoscene renderers, which draw two full-screen triangles to execute the fragment shader, which then does the real rendering work.
While rendering two full-screen triangles as in ShaderToy will work, it would run an unnecessary amount of fragment processing in the case that the volume does not cover the entire screen. This case is actually quite common, as users zoom out to get an overview of the dataset or study large-scale features. To restrict the fragment processing work to just those pixels touched by the volume, we can rasterize the bounding box of the volume grid, and then run the raymarching step in the fragment shader. Finally, we don’t want to render both the front and back faces of the box, as this could run our fragment shader twice, depending on the order the triangles are rendered in. Furthermore, if we render just the front faces we’ll run into issues when the user zooms in to the volume, as the front faces will project behind the camera and be clipped, causing those pixels to not be rendered. To allow users to zoom fully into the volume, we’ll render just the back faces of the box. Our resulting rendering pipeline is shown in Figure 6.
In this pipeline, the bulk of real rendering work is done in the fragment shader; however, we can still use the vertex shader and the fixed function interpolation hardware for some useful computation. Our vertex shader will transform the volume based on the user’s camera position, and compute the ray direction and eye position in the volume space, and pass them to the fragment shader. The ray direction computed at each vertex is then interpolated across the triangle for us by the fixed function interpolation hardware on the GPU, letting us compute the ray directions for each fragment a bit cheaper. However, these directions may not be normalized when we get them in the fragment shader, so we’ll still need to normalize them.
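To see why the interpolated directions need re-normalizing, note that a linear blend of two unit vectors is in general shorter than unit length. A small sketch (with made-up directions) illustrates this:

```javascript
// Linearly interpolate two vectors, as the rasterizer does for
// per-vertex ray directions across a triangle.
function lerp(a, b, t) {
  return a.map((v, i) => v * (1 - t) + b[i] * t);
}
function length(v) {
  return Math.sqrt(v.reduce((s, x) => s + x * x, 0));
}

const d0 = [1, 0, 0]; // unit-length ray direction at one vertex
const d1 = [0, 1, 0]; // unit-length ray direction at another vertex
const mid = lerp(d0, d1, 0.5); // [0.5, 0.5, 0]
// length(mid) is about 0.707, not 1, so the fragment shader must
// normalize the interpolated direction before raymarching.
```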
We’ll render the bounding box as a unit cube, and scale it by the volume axes to support non-uniform sized volumes. The eye position is transformed into the unit cube, and the ray direction is computed in this space. Raymarching in the unit cube space will allow us to simplify our texture sampling operations during the raymarching in the fragment shader, since we’ll already be in the texture coordinate space of the 3D volume.
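As a sketch of this setup (the camera position, volume scale, and helper names below are made up for illustration, and the cube is assumed to be scaled about the origin with no translation), transforming the eye into the unit cube's space amounts to undoing the cube's scaling:

```javascript
// The volume is rendered as a unit cube [0, 1]^3 scaled by `volumeScale`
// to support non-uniformly sized volumes. To raymarch in the cube's
// local (texture coordinate) space, apply the inverse scale to the eye.
function eyeToUnitCube(eyeWorld, volumeScale) {
  // Inverse of a pure scale is a component-wise division
  return eyeWorld.map((e, i) => e / volumeScale[i]);
}

// Per-vertex ray direction in the cube's space; it is interpolated by
// the rasterizer and normalized later in the fragment shader.
function rayDir(vertexPos, eyeCube) {
  return vertexPos.map((p, i) => p - eyeCube[i]);
}

const eyeWorld = [2, 4, 8];    // hypothetical camera position
const volumeScale = [1, 2, 4]; // hypothetical non-uniform volume size
const eyeCube = eyeToUnitCube(eyeWorld, volumeScale); // [2, 2, 2]
```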
The vertex shader we’ll use is shown below, the rasterized back faces colored by the view ray direction are shown in Figure 7.
Now that we have our fragment shader running on the pixels that we need to render the volume for, we can raymarch the volume and compute a color for each pixel. In addition to the ray direction and eye position we computed in the vertex shader, we'll need a few more inputs to the fragment shader to render the volume. Of course, the first thing we'll need is a 3D texture sampler to sample the volume. However, the volume is just a block of scalar values; if we used these scalars directly as the color ($C(i\Delta s)$) and opacity ($\alpha(i\Delta s)$) values, the rendered grayscale image may not be very useful to the user. For example, it would not be possible to highlight regions of interest with different colors, or to make noise and background regions transparent to hide them.
To give the user control over the color and opacity assigned to each sample value, scientific visualization renderers use an additional colormap, called a Transfer Function. The transfer function specifies what color and opacity value should be assigned for a given value sampled from the volume. Although more complex transfer functions exist, a typical one acts as a simple color lookup table, and can be represented as a 1D color and opacity texture (RGBA). To apply the transfer function when raymarching the volume, we can sample the transfer function's texture using the scalar value sampled from the volume texture. The returned color and opacity values are then used as $C(i\Delta s)$ and $\alpha(i\Delta s)$ for the sample.
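On the CPU, this lookup amounts to indexing a small RGBA table by the sampled scalar value. A sketch with a hypothetical two-entry colormap (in the shader this would be a texture() fetch on the 1D RGBA texture, with the hardware interpolating between entries rather than the nearest-neighbor lookup used here):

```javascript
// A transfer function as a tiny RGBA lookup table. Each entry is
// [r, g, b, a]; a scalar value in [0, 1] selects an entry.
const transferFn = [
  [0, 0, 0, 0],   // low values: fully transparent (hide background/noise)
  [1, 0, 0, 0.8], // high values: mostly opaque red (highlight a feature)
];

function lookupTransferFn(value) {
  const i = Math.min(
    transferFn.length - 1,
    Math.floor(value * transferFn.length)
  );
  return transferFn[i];
}
```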
The final additional input we need for our fragment shader is the volume dimensions, which we'll use to compute the ray step size ($dt$) so that we sample each voxel along the ray at least once. As the conventional equation for a ray is $r(t) = o + t\,d$, we will switch the terminology in the code to match, and refer to $s$ as $t$. Similarly, the range along the ray overlapped by the volume, $[0, L]$, will be referred to as $[t_{\text{enter}}, t_{\text{exit}}]$.
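A CPU sketch of the two computations described here, assuming the unit cube $[0, 1]^3$ and hypothetical helper names: the slab-method ray-box intersection returning $[t_{\text{enter}}, t_{\text{exit}}]$, and a step size $dt$ small enough that no step crosses more than one voxel along any axis:

```javascript
// Intersect a ray r(t) = orig + t * dir with the unit cube [0, 1]^3
// using the slab method. Returns [tEnter, tExit]; the ray misses the
// box if tEnter > tExit.
function intersectUnitBox(orig, dir) {
  let tEnter = -Infinity;
  let tExit = Infinity;
  for (let i = 0; i < 3; ++i) {
    const inv = 1 / dir[i];
    let t0 = (0 - orig[i]) * inv;
    let t1 = (1 - orig[i]) * inv;
    if (t0 > t1) [t0, t1] = [t1, t0];
    tEnter = Math.max(tEnter, t0);
    tExit = Math.min(tExit, t1);
  }
  return [tEnter, tExit];
}

// In the unit cube a voxel is 1/dims[i] wide along axis i, so per unit
// of t the (normalized) ray crosses |dir[i]| * dims[i] voxels on that
// axis. Choosing dt as the reciprocal of the largest such rate ensures
// each voxel along the ray is sampled at least once.
function computeDt(dir, dims) {
  const voxelsPerT = dir.map((d, i) => Math.abs(d) * dims[i]);
  return 1 / Math.max(...voxelsPerT);
}
```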
To raymarch the volume in our fragment shader, we will do the following:
- Normalize the view ray direction received as input from the vertex shader;
- Intersect the view ray against the volume bounds to determine the interval to raymarch over to render the volume;
- Compute the step size such that each voxel is sampled at least once;
- Starting from the entry point at $t = t_{\text{enter}}$, step the ray through the volume until the exit point at $t = t_{\text{exit}}$;
- At each point, sample the volume and use the transfer function to assign a color and opacity;
- Accumulate the color and opacity along the ray using the front-to-back compositing equation.
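Putting the steps above together, a CPU reference of the raymarching loop might look like the following (a sketch: `sampleVolume` and `transferFn` are hypothetical stand-ins for the 3D volume texture fetch and the 1D transfer function lookup, and the intersection interval and step size are assumed precomputed):

```javascript
// CPU sketch of the fragment shader's raymarching loop.
function raymarch(eye, rayDir, tEnter, tExit, dt, sampleVolume, transferFn) {
  const color = [0, 0, 0];
  let alpha = 0;
  for (let t = tEnter; t < tExit; t += dt) {
    // Current sample position along the ray, in [0, 1]^3 texture coords
    const p = eye.map((e, i) => e + t * rayDir[i]);
    const v = sampleVolume(p);          // scalar value at the sample
    const [r, g, b, a] = transferFn(v); // color/opacity for this value
    // Front-to-back compositing with pre-multiplied opacity
    const w = (1 - alpha) * a;
    color[0] += w * r;
    color[1] += w * g;
    color[2] += w * b;
    alpha += w;
    if (alpha >= 0.99) break; // early exit: nearly opaque
  }
  return { color, alpha };
}
```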
As an additional optimization, we can include an early exit condition in our raymarching loop to break out when the accumulated color is nearly opaque. Once the color is nearly opaque, any further samples contribute little to nothing to the pixel, as their color is fully absorbed by the medium before reaching the eye.
The full fragment shader for our volume renderer is shown below, with additional comments marking each step in the process.
Published: 13 January 2019