
Overview

Lumen is UE5’s global illumination (GI) system. Unlike traditional real-time GI, which usually only covers the contribution of indirect diffuse, Lumen provides both indirect diffuse and indirect specular, forming a complete set of indirect lighting. Lumen supports both hardware ray tracing (RTX) and software tracing. This article analyzes the process, algorithms, and data structures of the indirect diffuse part of Lumen GI based on software tracing, in order to understand the basic principles and operating mechanism of Lumen from a macro perspective.

 

The core of Lumen includes the following parts:

  • A simplified expression of the scene. The scene that provides GI in Lumen is not composed of the original models but of their simplified proxies, the MeshCards.
  • Intersection acceleration based on distance fields. The core acceleration structures used by software tracing in Lumen are the GlobalDistanceField and the MeshDistanceField.
  • A lighting expression separated from the scene structure. Lumen has two core components for buffering lighting information: one is an AtlasTexture that stores the highest-precision lighting, where each SubTexture corresponds to a MeshCard and thus to a small slice of a scene model; the other is Voxel lighting, generated at runtime, which is the main part of Lumen’s lighting expression.
  • A GI solution based on Screen Space Probes, into which SSGI, the MeshCard AtlasTexture lighting solved via MeshDistanceField tracing, the Voxel lighting solved via GlobalDistanceField tracing, and other GI algorithms are integrated.

Data Structure

1. Intersection Acceleration Structure

There are two main tracing intersection acceleration structures in Lumen: the Distance Field in 3D space and the HZB in screen space. Distance Fields are divided into the Mesh Distance Field (hereinafter MDF) and the Global Distance Field (hereinafter GDF). MDF/GDF is used to accelerate intersections with models (MeshCard/Voxel) in 3D space, while the Hierarchical Z-Buffer (hereinafter HZB) is used to intersect SSGI’s ray casts in screen space.

2. MDF/GDF

Without a BVH, ray tracing can only take uniform steps of a fixed length along the ray direction and test for intersection after each step. To pass thin slices and small models without errors, the step size would need to be infinitesimally small, while for efficiency the step should be as long as possible; the two requirements are completely at odds. The diagram of uniform stepping is as follows:

With MDF/GDF this situation changes fundamentally: since the distance field stores the distance to the closest model surface, the ray-march step length can in most cases be greatly increased without the ray erroneously penetrating a model. As shown in the figure below, a ray is emitted from in front of the camera; each blue circle is the Distance Field value (the shortest distance to any object) stored at the current point. Because no object can exist within that distance (circle), each step can directly advance by the Distance Field value:
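
As a concrete illustration, the following is a minimal sphere-tracing sketch in C++. The types and the toy SampleDistanceField (a single sphere) are illustrative stand-ins, not Lumen’s actual API:

#include <cmath>

struct Vec3 { float X, Y, Z; };

// Toy stand-in for an MDF/GDF lookup: a single sphere of radius 1 at the origin.
float SampleDistanceField(Vec3 P)
{
    return std::sqrt(P.X * P.X + P.Y * P.Y + P.Z * P.Z) - 1.0f;
}

// March along the ray; each step advances by the sampled distance, which is
// safe because no surface can lie closer than that distance.
bool SphereTrace(Vec3 Origin, Vec3 Dir, float MaxDist, Vec3& OutHit)
{
    const float HitEpsilon = 0.01f;  // "close enough to the surface" threshold
    float T = 0.0f;
    while (T < MaxDist)
    {
        Vec3 P = { Origin.X + Dir.X * T, Origin.Y + Dir.Y * T, Origin.Z + Dir.Z * T };
        float D = SampleDistanceField(P);
        if (D < HitEpsilon) { OutHit = P; return true; }  // hit a surface
        T += D;  // safe long step: nothing can be closer than D
    }
    return false;  // ray left the field without hitting anything
}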

3. HZB

RayMarch in screen space has the same stepping problem as ray tracing in 3D space. Screen space lacks complete 3D scene information, so the hit test of a ray march is done by comparing the Z value of the ray at the current position with the depth Z stored for the current pixel. If the ray’s depth is greater than or equal to the depth in the ZBuffer (the following discussion assumes a conventional ZBuffer; with reversed Z the comparison flips), the ray is considered to have intersected; otherwise it is unblocked and can continue to step forward. The basic idea of the HZB is similar to a BVH: first generate a Mipmap chain for the current ZBuffer, where each texel of a higher Mip takes the minimum Z of the corresponding 4 texels in the Mip below it, then start the ray march at some chosen Mip level. If there is no intersection at the current Mip, increase the Mip level and continue; otherwise, decrease the Mip and test at a finer granularity. The intent is to first test whether the ray intersects a larger pixel block; if intersections keep being found in ever smaller blocks, the march drops down to the finest required level to check whether the ray intersects the current pixel. Conversely, when there is no intersection it tries ever larger blocks, quickly skipping regions that do not require pixel-by-pixel stepping. A ray march on a two-level HZB is shown in the following figure:
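
The following CPU-side C++ sketch captures only the level-up/level-down idea under simplifying assumptions (a real implementation steps the ray against cell boundaries with a DDA; DepthPyramid here is an illustrative structure, not UE’s, and the pyramid is assumed to have at least 3 levels):

#include <vector>
#include <algorithm>

struct DepthPyramid
{
    // Mip 0 is full resolution; each higher mip stores the min depth of the
    // 2x2 texels below it (conventional Z: smaller = closer to the camera).
    std::vector<std::vector<float>> Mips;  // Mips[Level][Y * Width(Level) + X]
    int Width0 = 0, Height0 = 0;

    float DepthAt(int Level, float U, float V) const
    {
        int W = std::max(1, Width0 >> Level);
        int H = std::max(1, Height0 >> Level);
        int X = std::min(W - 1, (int)(U * W));
        int Y = std::min(H - 1, (int)(V * H));
        return Mips[Level][Y * W + X];
    }
};

// March a screen-space ray given in UV space (DU, DV, DZ are the per-unit-step
// deltas). Returns true when an intersection is confirmed at mip 0.
bool TraceHZB(const DepthPyramid& HZB, float U, float V, float Z,
              float DU, float DV, float DZ, int MaxSteps)
{
    int Level = 2;  // start at a moderately coarse mip
    for (int Step = 0; Step < MaxSteps && U >= 0 && U < 1 && V >= 0 && V < 1; ++Step)
    {
        float Scale = (float)(1 << Level);      // coarser mip -> longer step
        float SceneZ = HZB.DepthAt(Level, U, V);
        if (Z + DZ * Scale >= SceneZ)           // this block may occlude the ray
        {
            if (Level == 0) return true;        // finest level: report a hit
            --Level;                            // refine and retry
            continue;
        }
        U += DU * Scale; V += DV * Scale; Z += DZ * Scale;       // safe to step
        Level = std::min(Level + 1, (int)HZB.Mips.size() - 1);   // try coarser
    }
    return false;
}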

4. MeshCard and LumenScene

LumenScene is the scene Lumen uses during its calculations. It is an incomplete, simplified version of the real scene. For trace-based algorithms, choosing a simplified scene expression means faster tracing, but it also brings a loss of trace accuracy. MeshCard is the basic building block of LumenScene; in terms of scene structure, it is LumenScene’s basic element. For LumenScene, there are no models in the scene – MeshCards are their substitutes (proxies).

 

The more common model simplifications in real-time GI are voxels (Voxel) and surface elements (Surfel). A MeshCard in Lumen is a hexahedron of unequal width and height that faces along exactly one coordinate axis, Orientation ∈ {-x, +x, -y, +y, -z, +z}. In shape, it is a block-like building brick:

 

LumenScene is built piece by piece from such building blocks, but it does not need to be built manually; it is generated automatically. The main function of a MeshCard in Lumen is to provide the position and direction of light sampling, and it is the basic component used to drastically shrink the lighting information buffer. The 6 orientations of MeshCards correspond to the 6 basis functions of the Ambient Cube[1]; a model may even have as many as 6 MeshCards at the same location, each expressing a different direction. This expression has higher fidelity for diffuse lighting and is widely used in all kinds of real-time and baked probe-based GI solutions; at the same time, it needs fewer instructions than SH when decoding the data.
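
For reference, evaluating an Ambient Cube is just a squared-normal-weighted blend of 6 colors, as described in the Source engine paper [1]; a minimal C++ sketch:

struct Color { float R, G, B; };

// Evaluate an Ambient Cube: 6 colors, one per axis direction; the normal's
// squared components are the blend weights (they sum to 1 for a unit normal).
Color EvalAmbientCube(const Color Cube[6], float Nx, float Ny, float Nz)
{
    float W[3] = { Nx * Nx, Ny * Ny, Nz * Nz };
    int   I[3] = { Nx >= 0 ? 0 : 1,     // +x : -x
                   Ny >= 0 ? 2 : 3,     // +y : -y
                   Nz >= 0 ? 4 : 5 };   // +z : -z
    Color Out = {0, 0, 0};
    for (int A = 0; A < 3; ++A)
    {
        Out.R += Cube[I[A]].R * W[A];
        Out.G += Cube[I[A]].G * W[A];
        Out.B += Cube[I[A]].B * W[A];
    }
    return Out;
}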

5. Screen Space Probe

Lumen chooses to place probes at the world positions corresponding to the screen pixels of the current frame, and names them Screen Space Probes.

 

A Probe’s radiance has two expressions. One is octahedral map (OctahedralMap)[6] storage, with a default size of 8*8; the other is a spherical harmonic expansion, which is generated after the Probe calculation is completed. The octahedral mapping uses a 2D RenderTarget as an Atlas Texture to store all the Probes, and it does not have hard problems such as the boundary seams that elliptical mappings need to handle. The mapping process of octahedral mapping from a 2D texture to the sphere and hemisphere is shown in the following figure; for a more detailed description of the algorithm, refer to reference [6].
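
The standard octahedral encode/decode pair looks like the following C++ sketch (formulas per reference [6]; the function names are illustrative):

#include <cmath>

struct Float2 { float X, Y; };
struct Float3 { float X, Y, Z; };

// Direction -> octahedral coordinates in [-1,1]^2.
Float2 OctEncode(Float3 N)
{
    float L1 = std::fabs(N.X) + std::fabs(N.Y) + std::fabs(N.Z);
    float U = N.X / L1, V = N.Y / L1;
    if (N.Z < 0)  // fold the lower hemisphere over the outer triangles
    {
        float OldU = U;
        U = (1 - std::fabs(V))    * (OldU >= 0 ? 1.f : -1.f);
        V = (1 - std::fabs(OldU)) * (V    >= 0 ? 1.f : -1.f);
    }
    return {U, V};
}

// Octahedral coordinates -> unit direction (inverse of the above).
Float3 OctDecode(Float2 E)
{
    Float3 N = { E.X, E.Y, 1 - std::fabs(E.X) - std::fabs(E.Y) };
    if (N.Z < 0)  // unfold the lower hemisphere
    {
        float OldX = N.X;
        N.X = (1 - std::fabs(N.Y))  * (OldX >= 0 ? 1.f : -1.f);
        N.Y = (1 - std::fabs(OldX)) * (N.Y  >= 0 ? 1.f : -1.f);
    }
    float Len = std::sqrt(N.X * N.X + N.Y * N.Y + N.Z * N.Z);
    return { N.X / Len, N.Y / Len, N.Z / Len };
}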

 

After having the MeshCard Atlas/Voxel data, Lumen additionally chose to add the ScreenSpaceProbe as intermediate light storage, which has many advantages:

  • Using a very low-resolution ScreenSpaceProbe set can greatly speed up the final GI solving process.
  • It smoothly combines lighting information of different signal frequencies (MeshCard Lighting / VoxelLighting / SSGI). Octahedral maps and low-order SH are both low-frequency IrradianceMap expressions, so projecting these signals of different frequencies onto ScreenSpaceProbes is equivalent to applying a low-pass filter to these light sources.
  • Probes can be reused via a cache. The ingenious thing about the Screen Space Probe is that, although it is generated in screen space, it retains complete radiance semantics – each Probe actually exists at a specific world-space position, so when the camera moves smoothly, only a small part of the probes visible on screen changes each frame, which makes a probe cache possible.
  • There is no need for a heavy filter, as with RTX ray tracing, to reduce variance.

Lumen Basic Process

Lumen’s complete pipeline is divided into two large modules: offline data generation, and runtime data update plus lighting calculation. The main tasks of the two modules are as follows:

 

The Offline Stage

The offline-generated data includes the MeshCard, MDF, and GDF data, which are generated automatically and asynchronously after a static model is imported or modified; no explicit Build step is needed. The reason there are two kinds of distance field data, MDF and GDF, is that the accuracy of the MDF is higher than that of the GDF. At runtime, MeshCards serve both as scene components and as the lighting buffer structure.

 

Runtime Stage
Lumen has four main tasks at runtime to complete the final GI calculation:

[1] Update LumenScene
[2] Inject direct and indirect diffuse lighting into the lighting buffers (including MeshCards and the 3D-Clipmap-based Voxels, etc.)
[3] Automatically place Probes based on the current screen space and trace to obtain each Probe's lighting information, then re-encode the Probe lighting and generate the Irradiance Cache
[4] Calculate the final Indirect Diffuse and Indirect Reflection using the Probes' lighting information, the re-encoded lighting information, and the Irradiance Cache, and blend in the history lighting as the final GI output

The following is an introduction and analysis of the main points of work in these stages.

1) Update LumenScene

  1. Updating data based on Primitive

The data in LumenScene is divided into two parts: one part comes directly from the static models of FScene (including MeshCards and the two kinds of DF data), and the other part is the lighting buffer data generated at runtime from the MeshCards and DF data (including the various encoded Atlas textures). Accordingly, LumenScene's updates are also driven by two things: data changes in the scene, and the specific lighting buffer data that must be refreshed each frame for subsequent lighting calculations. These data updates are performed across multiple threads.

The Primitive-based data update operations of LumenScene include:

  • A new Primitive is added to the scene (multi-instance primitives such as ISM merge their LumenCards) –> new MeshCards and DF data are added to LumenScene.
  • A Primitive is removed from the scene –> its MeshCards are marked dirty and its DF data is removed from LumenScene.
  • A Primitive is updated –> its MeshCards, DF Transform/LOD, and other data are updated.
  • Allocate and update the position and size of MeshCards inside the material and lighting AtlasTextures.
  • Add nearby MeshCards whose material parameters have not yet been captured to the capture rendering list (by default, "nearby" means all MeshCards within 200 meters of the camera whose bounding box is larger than a given threshold; the number of MeshCards rendered per frame is also capped, and the frame-spreading happens here).
  • Cache, for the nearby MeshCards, the draw commands (DrawCommands) of the static-mesh and Nanite rendering lists used to capture material attributes (MaterialAttributes). Static meshes use the coarsest LOD level to keep material capture cheap.

LumenScene divides the scene into two parts, distant and near. Therefore, in addition to Primitives, camera movement in the scene will also trigger a LumenScene update:

  • Generate the MeshCards used for the distant scene (by default there is only one Cascade level, i.e., only one MeshCard expresses the distant scene).

In addition, for the Nanite model:

  • Nanite multi-Instances are also automatically merged when joining LumenScene to reduce the total number of MeshCards.

 

Notes on MeshCards and AtlasTexture

  • The AtlasTexture is organized with a bin-packing algorithm: each new MeshCard tries to allocate a region in the AtlasTexture. The existing TextureLayout component in UE4 is reused directly for sub-texture allocation.
  • The SubTexture size of a MeshCard in the AtlasTexture is determined by the size of the MeshCard itself and its distance to the current camera, so camera movement will also trigger reorganization of the MeshCards in the AtlasTexture.
  • The Sub Texture area corresponding to a MeshCard is generally larger than 1 texel.
  • Because only a fixed number of MeshCards are updated per frame, prioritization ensures that new MeshCards are created first, and update requests for existing MeshCards are processed afterwards.

 

  2. Capturing the material attributes of MeshCards (MaterialAttributes)

As can be seen from the storage structure of MeshCards, they do not store their own material attributes and depth information, mainly to save disk space and to keep their precision flexible and controllable. The data missing from offline baking needs to be filled in at runtime and stored in the corresponding AtlasTextures for later use. So the next step is to prepare the material and shape-related data that MeshCards need for the GI calculation. These attributes mainly include:

  • Material Albedo, used to support diffuse lighting calculations
  • Material Emissive, used to let self-illuminating surfaces light the scene
  • Material Opacity, used to handle light passing through semi-transparent objects
  • Normal, used by the lighting model to calculate the various types of lighting
  • DepthStencil, used in various later stages of Lumen, such as computing Probe placement and lighting interpolation weights

The mesh rendering list prepared in step 1 is processed here. It first needs to be organized into instances, so as to minimize draw calls and save performance. Because Nanite's Cull-Draw pipeline and the traditional static-mesh pipeline are two completely different paths, this process also needs to run twice. Note also that since MeshCards are used to compute GI, they should be submitted for rendering regardless of whether they are inside the view frustum, so culling is not applied here.

After the material attribute capture operation is completed, the following operations will also be performed:

  • Calculate the moments of depth and generate a Mipmap, used in subsequent traces to estimate occlusion with Chebyshev's inequality (a sketch follows this list). This technique comes from Variance Shadow Maps[3] and is also used in DDGI[4] to handle the visibility of the Irradiance Volume.
  • Preprocess Opacity and generate its Mipmap chain.
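
A minimal C++ sketch of the Chebyshev visibility estimate from the two depth moments (formula per [3][4]; the 1e-5 variance floor is an illustrative choice):

#include <algorithm>

// From the first and second depth moments, Chebyshev's inequality bounds the
// fraction of unoccluded rays for a receiver at ReceiverDepth.
float ChebyshevVisibility(float Mean, float MeanSq, float ReceiverDepth)
{
    if (ReceiverDepth <= Mean)
        return 1.0f;                       // closer than the average occluder
    float Variance = std::max(MeanSq - Mean * Mean, 1e-5f);
    float D = ReceiverDepth - Mean;
    return Variance / (Variance + D * D);  // upper bound on P(depth >= receiver)
}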

2) Inject Lighting To Cache

After completing the MeshCards data update and material attribute capture, the next step is to inject the current scene's lighting into the lighting buffers. Lumen uses three main data structures to cache the scene's lighting: MeshCards with their corresponding AtlasTexture (highest precision), a Voxel 3D Clipmap[2] (medium precision), and the GI Volume (used for volumetric rendering). Together, these three cover static objects, dynamic objects, and volumetrically rendered objects, forming a complete GI lighting source.

The overall process of injecting light is shown in the following figure:

 

  1. VoxelLighting

For VoxelLighting, Lumen is similar to VXGI in using a 3D-Clipmap-based method to save storage. However, each 3D texel in Lumen's VoxelLighting is not the same as a voxel in VXGI — each of Lumen's 3D texels stores the lighting projected onto one direction of the Ambient Cube[1], so the number of 3D texels required is 6 times the voxel count. As you can see, VoxelLighting chooses almost the same structure (hexahedron) and basis functions (Ambient Cube) as MeshCards. A detailed introduction to Clipmaps can be found in references [2] and [5].

 

The following figure compares the number of texels that must be loaded for the full mip chain of a 3D texture of size 4096 with those of a 3D Clipmap of clip size 64:

 

VoxelLighting uses a 4-level 3D Clipmap by default and stores indirect lighting within a 200-meter range (configurable) around the camera's world position. All MipLevels and directions are tiled directly in one 3D texture:

  • The texture size is [64, 64 * MipLevels, 64 * Directions(6)]: the X axis equals the clip size, all MipLevels are laid out along the Y axis, and the Ambient Cube's 6 directions are laid out along the Z axis (see the addressing sketch after this list).
  • The distance covered by each Mip level grows exponentially. Level 0 covers [0, 25 meters] and level 3 covers [100 meters, 200 meters]. Working the precision backwards, level 0's voxel size is 25 meters * 2 / 64, i.e., each voxel covers about 0.78 meters of scene space; similarly, level 3's precision is 3.125 meters.
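
A small sketch of the tiled addressing just described (an assumption-level illustration of the [64, 64*Mips, 64*6] layout, not necessarily UE's exact offsets):

struct Int3 { int X, Y, Z; };

constexpr int ClipSize = 64;

// Map a voxel coordinate (0..63 per axis) within a given mip level and
// Ambient Cube direction (0..5) to a texel of the tiled 3D texture.
Int3 VoxelToTexel(Int3 Voxel, int MipLevel, int Direction)
{
    return { Voxel.X,                           // X axis: the clip size
             Voxel.Y + MipLevel  * ClipSize,    // mips tiled along Y
             Voxel.Z + Direction * ClipSize };  // directions tiled along Z
}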

 

  2. Injecting light into MeshCards

From the above flowchart, we can see that light injection into MeshCards is divided into two parts, direct light and indirect light, and both calculate only the diffuse contribution, not the specular contribution.

 

The first step is to calculate the indirect light of the MeshCards. The indirect light calculation can be split along two dimensions, the light source data and the trace method; Lumen supports 4 different indirect lighting modes, as shown in the table below:

 

 

By default, Lumen uses VoxelLighting as the indirect light source and uses block-based trace reuse to calculate the MeshCards' indirect lighting. In addition, two points about the light source and sampling method of the indirect light deserve attention:

  • The two indirect light sources, VoxelLighting and the Irradiance Cache, are generated later in the Lumen pipeline, so the data sampled here comes from the previous frame or even earlier history.
  • Indirect lighting produced by texel-by-texel sampling has higher accuracy but requires more sample points and therefore performs worse. The block-based trace reuses the sampling results of adjacent texels, so in most cases a smaller sample count achieves smoother results.

 

A brief description of the Voxel Trace part of the indirect light (a cone-trace sketch follows this list):

  • The Global Distance Field is used to speed up intersection.
  • Voxel ConeTrace is used to sample VoxelLighting; 8 cones are traced per texel by default, and the Mip level is chosen according to the hit distance.
  • Each sampled cone also samples the skylight (if skylight is enabled) and adds it to the lighting result.
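
A minimal cone-trace loop in C++, with toy stand-ins for the voxel lookup (SampleVoxelLighting and VoxelSizeAtMip are illustrative placeholders, not Lumen functions):

#include <cmath>
#include <algorithm>

struct Vec3f { float X, Y, Z; };
struct Radiance { float R, G, B, Alpha; };

// Toy stand-ins so the sketch is self-contained.
float VoxelSizeAtMip(float Mip) { return 0.78f * std::exp2(Mip); }
Radiance SampleVoxelLighting(Vec3f, float) { return {0.1f, 0.1f, 0.1f, 0.05f}; }

// Step along the cone axis; the sample footprint (and therefore the mip)
// grows with distance so each step covers the cone's current width.
Radiance ConeTrace(Vec3f Origin, Vec3f Dir, float ConeHalfAngleTan, float MaxDist)
{
    Radiance Accum = {0, 0, 0, 0};
    float T = VoxelSizeAtMip(0.0f);  // start one voxel out to avoid self-sampling
    while (T < MaxDist && Accum.Alpha < 1.0f)
    {
        float Radius = std::max(T * ConeHalfAngleTan, VoxelSizeAtMip(0.0f));
        float Mip = std::log2(Radius / VoxelSizeAtMip(0.0f));
        Vec3f P = { Origin.X + Dir.X * T, Origin.Y + Dir.Y * T, Origin.Z + Dir.Z * T };
        Radiance S = SampleVoxelLighting(P, Mip);
        float W = (1.0f - Accum.Alpha) * S.Alpha;  // front-to-back compositing
        Accum.R += S.R * W; Accum.G += S.G * W; Accum.B += S.B * W;
        Accum.Alpha += W;
        T += Radius;  // step size proportional to the cone width
    }
    return Accum;
}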

 

The second step is to run a bilinear filter over the indirect light result and filter out negative outliers (values < 0).

 

The third step is to calculate the direct light contribution. The direct light types supported by Lumen are PointLight, SpotLight, RectLight, and DirectionalLight. Except for DirectionalLight, which is calculated light by light, the other three types are processed in batches – because they have a limited influence range, each batch only needs to find the MeshCards it affects, and each batch of direct-light rendering works only on the MeshCards within its range of influence.

 

Another issue in direct light calculation is the visibility of the light from the current MeshCard. Lumen supports using ShadowMaps or hardware ray tracing to determine the light's occlusion ratio (ShadowFactor) for the current MeshCard.

 

The last point about direct light calculation: only the diffuse contribution is computed.

 

The last two steps of lighting injection are sampling Albedo and Emissive into the DiffuseAtlas and EmissiveAtlas corresponding to the MeshCards, and generating Mipmaps for the MeshCards' final lighting (Indirect + Direct). Mipmap generation works the same as for normal Mipmaps: bilinear sampling filters the lower level to produce the higher-level texels.

 

  3. Update VoxelLighting Mips (Voxel lighting injection)

From the introduction of VoxelLighting above, you can see that it is a Voxel Clipmap centered on the camera's world position, so when the camera moves, the Clipmap also needs to be updated. To reduce the number of Voxels that must be updated at a time, the following optimizations are applied.

 

(1) At most one Mip level is updated per frame, and the four Clipmap levels are scheduled as follows (see the sketch below):
a. Frames 0, 2, 4, 6, 8, … update the level-0 mip
b. Frames 1, 5, 9, 13, 17, … update the level-1 mip
c. Frames 3, 11, 19, 27, … update the level-2 mip
d. Frames 7, 15, 23, 31, … update the level-3 mip
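
This schedule reduces to a simple modulo pattern; a small C++ sketch that reproduces the frame numbers listed above (illustrative, not necessarily UE's code):

int ClipLevelToUpdate(unsigned Frame)
{
    if (Frame % 2 == 0) return 0;  // 0, 2, 4, 6, 8, ...
    if (Frame % 4 == 1) return 1;  // 1, 5, 9, 13, ...
    if (Frame % 8 == 3) return 2;  // 3, 11, 19, 27, ...
    return 3;                      // 7, 15, 23, 31, ...
}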

 

(2) When the camera does not move drastically, in theory only the Voxels in the region swept along the movement direction need to be updated, as shown in the figure below:

 

In addition to camera updates, Primitive additions, deletions, and modifications in the scene also affect the surrounding Voxels. In Lumen's implementation, this strategy therefore intersects the more general PrimitiveUpdateBounds with the Clipmap Tile Bounds to determine how much really needs updating – and what is updated is not the Voxel data itself, but the VisibilityBuffer introduced next.

 

The VoxelLighting solving process uses the MeshDistanceField to accelerate intersections. Lumen first uses the bounding box of the current Voxel Clipmap to cull all objects (and their MeshDistanceFields) outside this range, then uses the bounding boxes of the surviving MeshDistanceFields to compute which Voxels each one covers, writing its own index into the trace parameters of every Voxel it covers. In the subsequent Voxel Trace Pass, each Voxel only needs to process the MeshDistanceFields collected for it in the previous step. The output of the Voxel Trace Pass is the VisibilityBuffer, which packs HitDistance and HitObjectIndex together. The VisibilityData structure is as follows:

struct FVisibilityData
{
    uint32_t NormalizedHitDistance : 8;  // intersection distance, normalized
    uint32_t HitObjectIndex : 24;        // object ID
};
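
Equivalently, the 8:24 split can be packed and unpacked manually; a small sketch (normalizing the distance against a max trace distance is an illustrative assumption):

#include <cstdint>

uint32_t PackVisibility(float HitDistance, float MaxDistance, uint32_t ObjectIndex)
{
    // Quantize the distance to 8 bits; store the object index in the upper 24 bits.
    uint32_t Dist = (uint32_t)(HitDistance / MaxDistance * 255.0f + 0.5f);
    return (Dist & 0xFFu) | ((ObjectIndex & 0xFFFFFFu) << 8);
}

void UnpackVisibility(uint32_t Packed, float MaxDistance,
                      float& HitDistance, uint32_t& ObjectIndex)
{
    HitDistance = (float)(Packed & 0xFFu) / 255.0f * MaxDistance;
    ObjectIndex = Packed >> 8;
}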

 

The final Voxel Shading Pass fetches the best three MeshCards from the compressed VisibilityBuffer to compute the Voxel's lighting. The lighting weights are calculated not only from the Ambient Cube coefficients, but also taking into account the object's opacity and the probability of MeshCard visibility (VSM-like, estimated with Chebyshev's inequality).

 

  4. Generate and calculate the GIVolume

The GIVolume is a traditional Irradiance Volume, which by default covers the scene within 80 meters of the camera (along the view z-axis). The main points are as follows:

  • The light source comes from VoxelLighting, and ConeTrace is used to solve the lighting; the default is 16 samples per Volume cell.
  • Each final result is encoded with two-band spherical harmonics (SH2).
  • In addition to VoxelLighting, every ConeTrace also samples the skylight.
  • Historical lighting data is blended in to produce smoother changes.

3) ScreenSpace Probe and Indirect Lighting solution

Following the usual RTGI recipe, with MeshCards and their corresponding LightingAtlas + MaterialAtlas, plus the VoxelLighting and GI Volume information, there is already enough information to solve GI for a typical game. For example, it could be calculated like this:

  1. Light source: divide the scene into near and distant parts. Use VoxelLighting for the near field and the DistantMeshCard (equivalent to a huge AmbientCube) for the distant field.
  2. Lighting calculation: use PixelWorldPosition and PixelWorldNormal to fetch the three Voxel directions that best match the normal and compute the current GI.
  3. Efficiency: a half-resolution or smaller GI RenderTarget can be used.
  4. Quality: use spatial and temporal filters to smooth the lighting, and use some rough measures against light leaking (normal offset, limiting wall thickness, stencil-marking indoors/outdoors, using the SDF to push sample points out of objects, etc.).

 

VoxelLighting could also be replaced by the GI Volume; the only difference is how the Volume is fetched and how the weights are computed during the lighting calculation.

  1. Light source: divide the scene into near and distant parts. Use the GI Volume for the near field and the DistantMeshCard (equivalent to a huge AmbientCube) for the distant field.
  2. Lighting calculation: use PixelWorldPosition and PixelWorldNormal to fetch the nearest GI Volume cells.
  3. Efficiency: a half-resolution or smaller GI RenderTarget can be used.
  4. Quality: use spatial and temporal filters to smooth the lighting, and use some rough measures against light leaking (normal offset, limiting wall thickness, stencil-marking indoors/outdoors, using the SDF to push sample points out of objects, etc.).

 

For better quality, each pixel can also generate sampling directions by importance-sampling a PDF and trace against the MeshCards. Note that the MeshCards' cached data contains depth-related information, so the VisibilityWeight calculation of VoxelLighting can be imitated directly to estimate visibility, achieving an occlusion effect similar to DDGI.

 

Lumen uses four kinds of traces at the same time to solve the final lighting: SSGI, Detail MeshCard Trace, Voxel Lighting Trace, and Distant MeshCard Trace. Their working distances and priorities are arranged as follows:

  • SSGI covers the whole scene: once a hit can be traced, SSGI's bounce information is used. For an introduction to SSGI, see "UE4.24 SSGI Implementation Analysis".
  • Detail MeshCard Trace starts within 2 meters of the shaded point and traces no farther than 40 meters from the camera. In the same way as the VoxelLighting calculation, it uses a VSM-like probabilistic method to estimate occlusion when sampling the MeshCard lighting information. Probe directions resolved by SSGI or MeshCard Trace do not use VoxelLighting.
  • VoxelLighting Trace only covers pixels within 200 meters of the camera, and its occlusion estimate is cone-based.
  • 200 to 1000 meters is the effective range of the Distant MeshCard Trace.
  • Semi-transparent objects are handled by generating the GI Volume.

 

It is easy to see that in outdoor scenes the dominant lighting comes from VoxelLighting (including the DistantMeshCard), as shown in the following figure:

 

With Probes, Lumen’s Indirect Diffuse solving process becomes the following:

  1. Downsample the screen by 1/16 (plus a 1/32 adaptive portion): the uniform [1/16, 1/16] set is always generated as Probes, and the remaining adaptive probes are generated based on the surrounding probes' spatial information. The uniform placement pass places probes every 16 pixels plus jitter; the adaptive placement part runs two more passes, iterating at 8-pixel and then 4-pixel spacing, to find positions where probes still need to be placed. The Adaptive Placement algorithm can be described as follows (a code sketch follows this list):
     Sample the 4 surrounding probes: sample_probe(uv + offset), offset ∈ {(0,0), (1,0), (0,1), (1,1)}
     Compute the bilinear weights
     Compute the depth difference between those probes and the current position, clip it, and fold in the depth weight & corner weight
     The bilinear weight becomes the finalWeight
     If every finalWeight < 0, there is no valid Probe nearby to sample, so a new probe is placed
  2. Use ConeTrace to sample MeshCards, VoxelLighting, and the radiance of the probes produced by SSGI; run a spatial filter over the probes and repair the sample points on the borders; then convert each probe to SH-based data without discarding the original probe data. So a probe actually keeps two radiance representations – dual storage as octahedral map + SH.
  3. Upsample the Probes to screen size and temporally blend the Indirect Diffuse. The upsampling has two main cases:
  • For non-hair pixels, use the Probe SH created in step 2, blending the SH with the same weights as Adaptive Placement to compute the final lighting.
  • For hair, the BRDF importance-sample weights are used, and the Probe octahedral map is sampled and blended to compute the final lighting.
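
A C++ sketch of the adaptive-placement test from step 1 (SampleProbe and the depth-weight formula are illustrative placeholders, not Lumen's shader code):

#include <cmath>
#include <algorithm>

struct ProbeSample { float Depth; bool Valid; };

// Toy stand-in: fetch the coarse probe at an integer probe-grid coordinate.
ProbeSample SampleProbe(int /*U*/, int /*V*/) { return {100.0f, true}; }

// Returns true if the 4 surrounding coarse probes can reconstruct this pixel;
// false means a new probe must be placed at this position.
bool CanInterpolate(int U, int V, float FracU, float FracV, float PixelDepth)
{
    const int   Offs[4][2] = { {0,0}, {1,0}, {0,1}, {1,1} };
    const float Bilinear[4] = {
        (1-FracU)*(1-FracV), FracU*(1-FracV), (1-FracU)*FracV, FracU*FracV };
    for (int I = 0; I < 4; ++I)
    {
        ProbeSample S = SampleProbe(U + Offs[I][0], V + Offs[I][1]);
        if (!S.Valid) continue;
        // Depth weight: down-weight probes whose depth differs from the pixel's.
        float Diff = std::fabs(S.Depth - PixelDepth);
        float DepthWeight = std::max(0.0f, 1.0f - Diff / (0.1f * PixelDepth));
        if (Bilinear[I] * DepthWeight > 0.0f)
            return true;  // at least one neighbour probe is usable
    }
    return false;  // all final weights are zero: place a new probe here
}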

 

If RTX reflections are not used, the next step is the indirect specular calculation; finally, the indirect diffuse is blended with the previous frame's indirect diffuse as the final indirect diffuse output, which completes the whole Lumen GI calculation. As can be seen, throughout the entire process Lumen does not rely on a heavy temporal/spatial filter to compress per-pixel variance the way RTX ray tracing does.

 

How is Lumen’s infinite bounce achieved?
The VoxelLighting sampled by the MeshCards is the previous frame's data, so the MeshCards' bounce data accumulates from the second frame onward, gaining one more bounce every frame.
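
This feedback loop can be illustrated with a toy scalar model in C++ (the numbers are purely illustrative): each frame injects direct light plus the previous frame's cached lighting, so frame N carries up to N bounces and converges to the full multi-bounce sum.

#include <cstdio>

int main()
{
    float Direct = 1.0f;   // direct lighting injected every frame
    float Albedo = 0.5f;   // fraction of energy kept per bounce
    float Cache  = 0.0f;   // last frame's cached lighting
    for (int Frame = 0; Frame < 8; ++Frame)
    {
        Cache = Direct + Albedo * Cache;  // inject direct + previous frame's GI
        std::printf("frame %d: %f\n", Frame, Cache);
    }
    // The series converges to Direct / (1 - Albedo) = 2.0, the infinite-bounce sum.
    return 0;
}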

 

Where is the SurfaceCache in the official document?
MeshCard Lighting + Voxel Lighting together roughly correspond to the SurfaceCache in the official documentation.

 

World Space Probe Cache
Its caching method, scope, and update strategy are similar to Voxel Lighting (a 3D Clipmap), except that the stored data is Radiance/Irradiance, and its trace method and data format are the same as the Screen Space Probe's.


Reference

[1] Jason Mitchell, Gary McTaggart, and Chris Green, "Shading in Valve's Source Engine", Advanced Real-Time Rendering in 3D Graphics and Games Course, SIGGRAPH 2006

[2] Cyril Crassin, Fabrice Neyret, Miguel Sainz, Simon Green, and Elmar Eisemann, "Interactive Indirect Illumination Using Voxel Cone Tracing"

[3] William Donnelly and Andrew Lauritzen, "Variance Shadow Maps"

[4] Zander Majercik, Jean-Philippe Guertin, Derek Nowrouzezahrai, and Morgan McGuire, "Dynamic Diffuse Global Illumination with Ray-Traced Irradiance Fields"

[5] Christopher C. Tanner, Christopher J. Migdal, and Michael T. Jones, "The Clipmap: A Virtual Mipmap"

[6] Thomas Engelhardt and Carsten Dachsbacher, "Octahedron Environment Maps"

 

Thanks to the author Jiff for the contribution and to UWA for the translation. Feel free to forward and share this article, but please do not reprint it without the author's authorization. If you have any unique insights or discoveries, please contact us and let's discuss them together.
