Optimal Depth Buffer Usage for Large-scale Games
With large-scale scene rendering, developers often choose a 32-bit floating-point depth buffer format rather than the usual 24-bit integer format. Contrary to expectations, this does not noticeably improve precision with a standard perspective projection: the 32-bit floating-point format has only 23 bits of mantissa, so values near 1.0, where the standard projection places distant geometry, are resolved no more finely than with a 24-bit integer.
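As a rough sanity check of that claim, the spacing between adjacent 32-bit floats just below 1.0 can be compared with the step size of a 24-bit normalized integer depth value. The small program below is only an illustration of that arithmetic:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Spacing between adjacent 32-bit floats just below 1.0 (one ULP). */
    float ulp_near_one = 1.0f - nextafterf(1.0f, 0.0f);

    /* Step size of a 24-bit unsigned normalized integer depth value. */
    float unorm24_step = 1.0f / 16777215.0f; /* 2^24 - 1 */

    printf("float32 step near 1.0: %.3g\n", (double)ulp_near_one); /* ~6.0e-8 */
    printf("24-bit unorm step:     %.3g\n", (double)unorm24_step); /* ~6.0e-8 */
    return 0;
}
```

Both steps come out at roughly 6e-8, which is why simply switching the format does not help where it is needed most.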
One solution is to change the depth buffer mapping so that depth values are distributed more evenly. This requires changing OpenGL's default clip-space Z projection, which maps values to the [-1...1] range, to [0...1] using the GL_EXT_clip_control extension. The depth buffer must then be cleared to zero and the depth comparison changed to GL_GREATER. Finally, the projection matrix needs to be changed so that far values are projected to 0 and near values are projected to 1.
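In OpenGL terms, the setup described above might look like the following sketch. It assumes a context where glClipControl is available (core in GL 4.5, or glClipControlEXT with GL_ZERO_TO_ONE_EXT from GL_EXT_clip_control on OpenGL ES); the projection helper is illustrative and uses the common infinite-far-plane reversed-Z form:

```c
#include <math.h>
#include <string.h>
/* Include your GL loader / GLES header of choice here. */

/* Builds a reversed perspective projection for a [0, 1] clip-space Z range:
 * the near plane maps to 1 and depth tends to 0 at infinity.
 * Column-major, right-handed view space (camera looks down -Z).
 * This helper is a sketch, not an API from the text. */
static void reversed_z_infinite_projection(float out[16], float fovy_rad,
                                           float aspect, float z_near)
{
    const float f = 1.0f / tanf(fovy_rad * 0.5f);
    memset(out, 0, 16 * sizeof(float));
    out[0]  = f / aspect;  /* x scale */
    out[5]  = f;           /* y scale */
    out[11] = -1.0f;       /* w_clip = -z_view */
    out[14] = z_near;      /* z_clip = z_near, so z_ndc = z_near / -z_view */
}

static void setup_reversed_z_state(void)
{
    /* 1. Remap clip-space Z from [-1, 1] to [0, 1].
     *    On OpenGL ES this is glClipControlEXT(GL_LOWER_LEFT_EXT,
     *    GL_ZERO_TO_ONE_EXT) from GL_EXT_clip_control. */
    glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);

    /* 2. Clear depth to zero and reverse the depth comparison. */
    glClearDepthf(0.0f);
    glDepthFunc(GL_GREATER);
}
```

With this mapping the floating-point format's dense precision near 0 lines up with distant geometry, which is exactly where the standard projection loses resolution.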
With this method, the usual D24S8 format may be enough for most games. For even more precision, use a D32F format with a separate stencil buffer, although this will use more memory.
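For illustration, allocating the depth attachment in either of the formats mentioned above could look like the sketch below; the function name and parameters are placeholders, and the calls assume an FBO already bound to GL_FRAMEBUFFER:

```c
/* Allocates a depth(-stencil) renderbuffer and attaches it to the currently
 * bound framebuffer. GL_DEPTH24_STENCIL8 corresponds to the D24S8 case,
 * GL_DEPTH_COMPONENT32F to the D32F case (any stencil then needs its own
 * attachment, at extra memory cost). Illustrative sketch only. */
static GLuint create_depth_attachment(int width, int height, int want_d32f)
{
    GLuint rb;
    glGenRenderbuffers(1, &rb);
    glBindRenderbuffer(GL_RENDERBUFFER, rb);
    glRenderbufferStorage(GL_RENDERBUFFER,
                          want_d32f ? GL_DEPTH_COMPONENT32F
                                    : GL_DEPTH24_STENCIL8,
                          width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER,
                              want_d32f ? GL_DEPTH_ATTACHMENT
                                        : GL_DEPTH_STENCIL_ATTACHMENT,
                              GL_RENDERBUFFER, rb);
    return rb;
}
```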