Physically-Based Rendering with Image-Based Lighting for PowerVR Summary

This document has hopefully illuminated some of the key details of implementing physically-based rendering (PBR) with image-based lighting (IBL). With these in mind, it should be clearer why certain choices were made in the ImageBasedLighting SDK demo.

Here's a brief review of the main points covered:

Processing the model

The main model used in the demo was a glTF file with mesh data stored in a separate binary file. In addition, several external texture files for the albedo, normal, emissive, ambient occlusion, roughness, and metallic maps were used.

Transcoding the textures

These textures were transcoded into compressed PVRTC files using the PVRTexTool GUI. At the same time, any single- or double-channel textures were packed together using the PVRTexTool CLI, so that all of the available colour channels were used. Compressing and packing textures helps to reduce memory bandwidth usage.
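As an illustration of the packing idea (the demo does this offline with the PVRTexTool CLI, whose invocation is not reproduced here), two greyscale maps can be interleaved into a single two-channel texture so no colour channels go to waste. The roughness/metallic pairing below is just one plausible combination from the demo's texture set:

```python
# Hypothetical sketch: interleave a single-channel roughness map and a
# single-channel metallic map into one two-channel texture. Each map is
# given as rows of floats in [0, 1].
def pack_two_channels(roughness, metallic):
    """Combine two greyscale maps into one map of (roughness, metallic) pairs."""
    assert len(roughness) == len(metallic)
    return [list(zip(r_row, m_row)) for r_row, m_row in zip(roughness, metallic)]

rough = [[0.25, 0.50], [0.75, 1.00]]
metal = [[1.00, 1.00], [0.00, 0.00]]
packed = pack_two_channels(rough, metal)  # each texel now holds two values
```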

Pre-computing the BRDF

After processing the textures, the demo made use of a simplification called the 'split-sum approximation', which allows a significant portion of the bidirectional reflectance distribution function (BRDF) to be pre-calculated and stored in a 2D lookup texture. This is one of the most important stages in setting up the demo, as the BRDF determines how light is reflected off an object based on the object's material properties. Moving a substantial amount of the BRDF computation offline reduces the work the fragment shader has to do at runtime, making real-time physically-based shading viable.
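To make the pre-computation concrete, here is a sketch of how one texel of the split-sum BRDF lookup texture can be integrated, following the widely used GGX importance-sampling approach. This is written in Python for readability rather than taken from the demo's tooling, and the function names are illustrative:

```python
import math
import random

def importance_sample_ggx(u1, u2, roughness):
    """Sample a half-vector about the +Z normal with the GGX distribution."""
    a = roughness * roughness
    phi = 2.0 * math.pi * u1
    cos_theta = math.sqrt((1.0 - u2) / (1.0 + (a * a - 1.0) * u2))
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)

def g_smith_ibl(n_dot_v, n_dot_l, roughness):
    """Smith geometry term with the Schlick-GGX remapping used for IBL."""
    k = (roughness * roughness) / 2.0
    g_v = n_dot_v / (n_dot_v * (1.0 - k) + k)
    g_l = n_dot_l / (n_dot_l * (1.0 - k) + k)
    return g_v * g_l

def integrate_brdf(n_dot_v, roughness, samples=1024):
    """Return the (scale, bias) pair applied to F0 for one LUT texel."""
    view = (math.sqrt(1.0 - n_dot_v * n_dot_v), 0.0, n_dot_v)
    scale = bias = 0.0
    for _ in range(samples):
        half = importance_sample_ggx(random.random(), random.random(), roughness)
        v_dot_h = sum(v * h for v, h in zip(view, half))
        # Reflect the view vector about the sampled half-vector.
        light = tuple(2.0 * v_dot_h * h - v for v, h in zip(view, half))
        n_dot_l = light[2]
        if n_dot_l <= 0.0:
            continue
        n_dot_h = max(half[2], 0.0)
        v_dot_h = max(v_dot_h, 0.0)
        g_vis = (g_smith_ibl(n_dot_v, n_dot_l, roughness) * v_dot_h
                 / (n_dot_h * n_dot_v + 1e-6))
        fresnel = (1.0 - v_dot_h) ** 5   # Schlick Fresnel factor
        scale += (1.0 - fresnel) * g_vis
        bias += fresnel * g_vis
    return scale / samples, bias / samples
```

At runtime the fragment shader only has to look up this (scale, bias) pair by (N·V, roughness) and apply it to the material's F0, instead of integrating over hundreds of samples per pixel.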

Calculating the environment map

Next, the environment map was created. The basis of the environment map was an HDR image from HDRI Haven, which was scaled and blurred using a custom script to ensure its colour values fit within the 16-bit floating point range. This processing also conserved the total energy encoded in the image, so the realism of the environmental lighting was not affected. The resulting image was converted from an equirectangular image to a cubemap texture and encoded into the small shared-exponent float format, R9G9B9E5. This format was chosen because it has a large enough range to store the HDR values and better precision than the other options.
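A sketch of how a shared-exponent texel is packed may help show why the format suits HDR data: three nine-bit mantissas share a single five-bit exponent, following the rules in the EXT_texture_shared_exponent specification. This is an illustrative re-implementation, not the demo's code:

```python
import math

MANT_BITS = 9          # bits per mantissa (R, G and B each)
EXP_BIAS = 15          # exponent bias
MAX_EXP = 31           # largest biased exponent
MAX_VALUE = (511.0 / 512.0) * 2.0 ** (MAX_EXP - EXP_BIAS)  # ~65408

def encode_rgb9e5(r, g, b):
    """Pack three non-negative floats into one 32-bit shared-exponent word."""
    clamp = lambda c: max(0.0, min(MAX_VALUE, c))
    r, g, b = clamp(r), clamp(g), clamp(b)
    max_c = max(r, g, b)
    # The shared exponent is chosen from the largest component.
    floor_log = math.floor(math.log2(max_c)) if max_c > 0.0 else -EXP_BIAS - 1
    exp = max(-EXP_BIAS - 1, floor_log) + 1 + EXP_BIAS
    scale = 2.0 ** (exp - EXP_BIAS - MANT_BITS)
    if int(max_c / scale + 0.5) == 1 << MANT_BITS:
        exp += 1          # rounding overflowed the mantissa; bump the exponent
        scale *= 2.0
    rm, gm, bm = (int(c / scale + 0.5) for c in (r, g, b))
    return rm | (gm << 9) | (bm << 18) | (exp << 27)

def decode_rgb9e5(word):
    """Unpack a 32-bit shared-exponent word back into three floats."""
    exp = (word >> 27) & 0x1F
    scale = 2.0 ** (exp - EXP_BIAS - MANT_BITS)
    return tuple(((word >> shift) & 0x1FF) * scale for shift in (0, 9, 18))
```

Because all three channels share one exponent, the format spends its 32 bits on mantissa precision rather than per-channel exponents, which is why it resolves HDR environment colours better than similarly sized alternatives.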

Creating the global illumination maps

The final part of processing the assets was creating two global illumination maps, an irradiance map and a prefiltered reflection map to model the light coming from the environment map. The irradiance map approximates the diffuse light while the prefiltered reflection map approximates the specularity of the light reflected off a surface based on the BRDF.

The irradiance map was calculated by summing, for each possible surface normal, all of the incoming environmental light over the hemisphere around that normal. The irradiance map could then be sampled by the fragment shader to determine the diffuse contribution to a particular pixel.
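The idea can be sketched as a Monte Carlo estimate: for a given normal, uniformly sample the hemisphere, weight each environment sample by the cosine of its angle to the normal, and scale by the solid angle. This simplified illustration fixes the normal to +Z and does not reproduce the demo's actual tooling:

```python
import math
import random

def sample_hemisphere():
    """Uniform direction on the hemisphere around the +Z normal."""
    z = random.random()                      # cos(theta), uniform in [0, 1)
    phi = 2.0 * math.pi * random.random()
    s = math.sqrt(max(0.0, 1.0 - z * z))
    return (s * math.cos(phi), s * math.sin(phi), z)

def irradiance(env_radiance, samples=20000):
    """Monte Carlo estimate of E(n) = integral of L(w) * cos(theta) dw
    over the hemisphere; uniform sampling has pdf 1 / (2 * pi)."""
    total = 0.0
    for _ in range(samples):
        d = sample_hemisphere()
        total += env_radiance(d) * d[2]      # d[2] == cos(theta) for n = +Z
    return total * 2.0 * math.pi / samples

# Sanity check: a constant environment of radiance 1 gives E = pi.
random.seed(7)
estimate = irradiance(lambda direction: 1.0)
```

Repeating this estimate for every texel direction of a cubemap produces the irradiance map, so the shader's per-pixel diffuse lookup is a single texture fetch.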

The prefiltered reflection map was composed of several reflection mipmaps which had been blurred based on the roughness of the object. The roughness of the object at any particular pixel could then be used to sample this reflection map and obtain the specular contribution to that pixel's colour. Much like with the BRDF, computing these maps in advance removes a significant burden from the application at runtime.
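At runtime, sampling the prefiltered map reduces to choosing a mip level from the roughness. A common convention, assumed here rather than taken verbatim from the demo's shaders, is a linear mapping:

```python
def reflection_lod(roughness, mip_count):
    """Map perceptual roughness in [0, 1] to a mip level of the prefiltered
    reflection map: 0 selects the sharpest mip, 1 the blurriest."""
    return roughness * (mip_count - 1)
```

In a shader this value would be passed to textureLod, and the hardware's trilinear filtering blends between the two nearest pre-blurred mips for fractional levels.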


While developing this demo, care was taken with the design of the asset pipeline and the shaders to ensure that the complex calculations required to implement PBR did not cripple performance. This included loading the pre-computed part of the BRDF and the environmental lighting maps into textures, choosing a tonemapping operator with a small performance impact, and using mediump maths in the shaders.
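The document does not name the operator chosen, but the simple Reinhard operator is one example of a low-cost option: it compresses unbounded HDR radiance into displayable range with a single addition and division per channel:

```python
def reinhard(c):
    """Reinhard tonemap: compresses HDR radiance c >= 0 into [0, 1)."""
    return c / (1.0 + c)
```

Cheap operators like this matter on mobile GPUs because the tonemap runs for every fragment of every frame.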