
In this article, we will introduce the family tree of ray tracing categories and their common parts, and then we will dissect each of them in detail. This article will be divided into two parts for a better reading experience.

 

Practitioners in the graphics field imported these ideas from the fields of neutron transport, heat transfer, and illumination engineering. Because the same concepts are studied in many fields, the terminology around ray tracing has evolved, and sometimes diverged, between and even within individual disciplines, which can lead to confusion. Therefore, before introducing the concept of ray tracing, we need to clarify these terms. Almost all modern ray tracers use recursion and Monte Carlo methods, yet few people today call them "recursive Monte Carlo" methods.


1. The History of Ray Tracing

  • Ray Casting, 1968. Arthur Appel.

 

  • Whitted Ray Tracing, 1980. Whitted, Kay, and Greenberg proposed the use of recursive ray tracing to describe accurate refraction and reflection.

 

  • Distributed Ray Tracing, 1984. Cook et al. proposed distributed or distribution ray tracing (DRT). However, this approach is often referred to as stochastic ray tracing to avoid confusion with distributed processing. Both it and path tracing use the Monte Carlo method.

 

  • Path Tracing, 1986. Two important ideas were introduced. Kajiya named the integral transport equation the rendering equation; it has become the mathematical basis of almost all global illumination algorithms to date. In the same paper, he also proposed the earliest path-tracing algorithm.

 

  • Bidirectional Path Tracing, 1993. Proposed by Lafortune and Willems; Veach later gave a detailed description of bidirectional path tracing.

 

  • Metropolis Light Transport, 1997. Veach first introduced the Metropolis sampling method, originally used in computational physics, into graphics.

 

  • Energy Redistribution Path Tracing, 2005

 

  • Manifold Exploration, 2012


2. Ray Casting

The first ray-casting algorithm for rendering was introduced by Arthur Appel in 1968. Ray casting renders the scene by firing a ray from the viewpoint through each pixel and finding the closest object that blocks the ray's path in the scene. Ray casting has only two types of rays. The first is the eye ray, emitted from the eye to find the intersection point in the scene; the other is the shadow ray, sent from the intersection point to the light to see whether the point is in shadow. A significant advantage of ray casting over traditional scanline rendering algorithms is its ability to handle uneven surfaces and solids. Much of the animation for the 1982 film Tron was rendered using ray-casting techniques.

 

When rasterization technology was not yet widespread in the early years, ray casting was also used in games; one of the most famous ray-casting games is Wolfenstein 3D. Next, we will introduce the parts that ray tracing and ray casting algorithms have in common, to avoid overly complicated content later.

 

2.1 The Basic Algorithm of Ray Casting

Render()
  for each pixel x,y
    color(pixel) = Trace(ray_through_pixel(x,y))

 

The first step is to create a ray for each pixel, obtaining a ray data structure per point, and then start tracing. The function name Trace is used here for consistency with later sections; it means tracing this ray through the scene.

Trace(ray)
  object_point = Closest_intersection(ray)
  if object_point return Shade(object_point, ray)
  else return Background_Color

 

Then, we look for the intersection of the ray with objects in the scene and return the information of the closest object. We shade it once it is found; if we don't find one, we return the preset background color.

Closest_intersection(ray)
  for each surface in scene
    calc_intersection(ray, surface)
  return the closest point of intersection to viewer 

 

In the Closest_intersection of ray casting, we don’t need to return other complex information.

Shade(point, ray) 
  to calculate contributions of each light source

 

You can implement the shading of ray casting however you like, since the original implementation only distinguished shadowed from lit points. Additional scene information is generally used to improve performance in real-time games like Wolfenstein 3D. In short, the Shade step here is not the important part; the main thing is to understand the conceptual framework of ray casting, within which all subsequent algorithms operate.

Of course, we can also perform simple shading calculations here, which makes ray casting equivalent to ray tracing without recursion; we will compare the two later.

 

2.2 Representation and Intersection of Rays

No matter what kind of ray-based rendering algorithm is adopted, the first step is to cast rays from the camera, and the goal is to find the ray direction corresponding to each pixel on the screen.

 

Here, the conversion relationship between the local, world, and camera spaces is basically similar to the space conversion in rasterized rendering, so I won’t go into details here.

 

In most cases, we represent a ray as p(t) = e + t(s − e), where e is the camera observation point we set and s is the point on the canvas; we usually preset the canvas width and height before rendering, and the parameter t represents the distance along the ray.

  1. When t is 0, the point is at the origin
  2. When t is positive, the point lies in front of the origin, along the ray direction
  3. When t is negative, the point is behind the origin
  4. If t1 < t2, then t1 is closer to us
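As a concrete illustration, here is a minimal C++ sketch of this parametric representation; the Vec3 and Ray types and the at helper are our own names for this article, not from any particular renderer:

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& v) const { return {x + v.x, y + v.y, z + v.z}; }
    Vec3 operator-(const Vec3& v) const { return {x - v.x, y - v.y, z - v.z}; }
    Vec3 operator*(double k)      const { return {x * k, y * k, z * k}; }
};

// A ray p(t) = e + t(s - e): e is the eye point, s the point on the canvas.
struct Ray {
    Vec3 e, d;                                      // origin e and direction d = s - e
    Vec3 at(double t) const { return e + d * t; }   // t = 0 is the origin; larger t is farther along
};

With this representation, the closest intersection is simply the hit with the smallest positive t.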

3. Whitted Ray Tracing

Ray casting evolved into ray tracing in 1980, when Turner Whitted extended the ray-casting process with reflections, refractions, and shadows to form Whitted ray tracing.

 

At the time, it took him 74 minutes to render a 512×512 image that would take only seconds today.

 

In general, Whitted ray tracing divides rays into four types:

  1. Eye ray. The same as before.
  2. Reflected ray. It continues from the surface in the direction of specular reflection. The reflected color is determined by the intersection of the reflected ray with an object in the scene.
  3. Refracted ray. Created similarly to a reflected ray, except that it is directed into the object and can eventually exit it.
  4. Shadow ray. Created by casting rays from the intersection point to all lights. If a shadow ray intersects an object before reaching the light, the point is shadowed with respect to that particular light.

 

Whitted ray tracing mainly addresses the absence of indirect light in the scene, but it does not do so very well, because all the indirect light it generates comes from perfect specular reflection or refraction. Such materials are uncommon in the real world, so most indirect light is essentially impossible to simulate. Another limitation is that it emits only one reflected ray and one refracted ray at each eye-ray intersection. Distributed ray tracing solves this problem better.

 

3.1 Types of Ray Tracing

3.1.1 Forward Ray Tracing

Forward ray tracing follows photons from the light sources to the objects. While forward ray tracing can most accurately determine the color of each object, it is very inefficient: many rays from the light source never pass through the image plane and reach the eye, so tracing every ray from the light source wastes work on rays that are never seen.

 

3.1.2 Backward Ray Tracing

 

To make ray tracing more efficient, backward ray tracing was introduced. In backward ray tracing, an eye ray is created at the eye; it passes through the image plane and into the world. The first object the ray hits is the one visible at that point of the view plane.

 

The downside of backward ray tracing is that it assumes only rays that pass through the image plane and reach the eye contribute to the final image. In some cases, this assumption is flawed. For example, if a lens is fixed at some distance above a table and illuminated by a light source directly above, there will be a focal point with high light density below the lens. If backward ray tracing tried to recreate this image, it would get it wrong: backward rays only confirm that rays pass through the lens; they do not account for forward rays bending through the lens. Therefore, with backward ray tracing alone, there will only be a uniform spot of light under the lens, as if the lens were an ordinary piece of glass.

 

As mentioned above, one of the easiest and most effective performance optimizations is to fire rays backward from the eye rather than from the light source. This way, no computing power is wasted on rays that never reach the camera.

 

3.1.3 Hybrid Ray Tracing

Since both forward and backward ray tracing have drawbacks, more recent research attempts hybrid solutions that trade off speed and accuracy. In these hybrid solutions, only a limited number of forward-ray passes are performed; the algorithm records the data and then performs backward ray tracing. The final shading of the scene takes both backward-ray and forward-ray calculations into account.

 

Veach (1995) described a hybrid of backward ray tracing and forward ray tracing in which the two sets of paths are connected (bidirectional path tracing); this will be explained later.

 

In summary, Whitted ray tracing is generally performed backward, emitting rays from the eye.

 

3.2 Basic Algorithms of Ray Tracing

The Shade function we discussed in the second part becomes useful here. We know that recursion is the most basic feature of ray tracing, so let's first look at the case without recursion. In fact, without recursion, the technique is almost identical to ray casting.

 

The framework is the same, so only the differences are listed here.

Shade(point, ray)
  calculate surface normal vector
  use Phong illumination formula to calculate contributions of each light source

 

First of all, as the chronology above shows, when Whitted ray tracing appeared, the rendering equation for global illumination had not yet been proposed, so an empirical lighting model was used, like the Phong model here, which can be replaced.

 

Let’s look at the recursive form:

Shade(point, ray)
  radiance = black  /* initialization */
  for each light source
    shadow_ray = calc_shadow_ray(point, light)
    if !in_shadow(shadow_ray, light)
      radiance += phong_illumination(point, ray, light)
  if material is specularly reflective
    radiance += spec_reflectance * Trace(reflected_ray(point, ray))
  if material is specularly transmissive
    radiance += spec_transmittance * Trace(refracted_ray(point, ray))
  return radiance

 

First, initialize the radiance of this point to black. For each light source that does not leave the point in shadow, add the Phong model's contribution. If the material is specularly reflective, the color contribution of the reflected ray must be added to the final color; if the material is transmissive, the contribution of refraction is added as well.

 

Let's now compare the rendered results with ray casting:

Two recursive calls

 

Three recursive calls

 

Another important question: when does this recursion end? There are two situations. In the first, the ray does not hit any object. In the second, each reflection or refraction scales the contribution down, so we set a threshold in advance and stop once the contribution falls below it. Rendering a picture with only the method above is feasible, but several problems remain to be solved.
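A minimal C++ sketch of the recursive loop with both termination criteria might look as follows. It reuses the Vec3/Ray types from the earlier sketch; Color, Hit, the declared helper functions, and the kMinContribution cutoff are hypothetical stand-ins for what a real tracer would provide, not any library's API:

struct Color {
    double r = 0, g = 0, b = 0;
    Color& operator+=(const Color& c) { r += c.r; g += c.g; b += c.b; return *this; }
    Color  operator*(double k) const { return {r * k, g * k, b * k}; }
};
struct Hit { double reflective = 0, transmissive = 0; /* position, normal, material ... */ };

bool  closest_intersection(const Ray& ray, Hit* hit);       // hypothetical helpers a real
Color phong_illumination(const Hit& hit, const Ray& ray);   // tracer would provide
Ray   reflected_ray(const Hit& hit, const Ray& ray);
Ray   refracted_ray(const Hit& hit, const Ray& ray);

const double kMinContribution = 0.01;   // assumed cutoff; tune per scene

Color trace(const Ray& ray, double contribution) {
    Hit hit;
    if (!closest_intersection(ray, &hit))    // termination case 1: the ray hits nothing
        return Color{};                      // background color
    Color radiance = phong_illumination(hit, ray);
    if (contribution < kMinContribution)     // termination case 2: contribution below threshold
        return radiance;
    if (hit.reflective > 0)
        radiance += trace(reflected_ray(hit, ray),
                          contribution * hit.reflective) * hit.reflective;
    if (hit.transmissive > 0)
        radiance += trace(refracted_ray(hit, ray),
                          contribution * hit.transmissive) * hit.transmissive;
    return radiance;
}

Many implementations additionally cap the recursion depth, which bounds the worst case even for highly reflective scenes.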

 

3.3 Antialiasing

 

The first question to ask here is why aliasing occurs. The previous ray-tracing algorithm creates only one ray per pixel, so it samples only one point, and one color, in the scene; but a pixel may cover many different points, especially at object edges, and those points do not necessarily share the same color. Regular sampling therefore produces aliasing.

 

3.3.1 Supersampling

Supersampling increases the number of rays per pixel; it does not eliminate aliasing, but it reduces its effect on the final image. In the following example, nine rays are emitted through the pixel, six of which hit blue and three green; the final pixel color will be two-thirds blue and one-third green.

 

Other combinations are also possible, and the number, positions, and weights of the samples can be chosen freely.
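For instance, here is a hedged sketch of 3×3 supersampling in the same C++ style; ray_through_point is a hypothetical helper mapping subpixel coordinates to an eye ray, and trace is the function sketched earlier:

Ray ray_through_point(double sx, double sy);   // hypothetical: subpixel coords -> eye ray

Color supersample_pixel(int x, int y) {
    Color sum{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            double sx = x + (i + 0.5) / 3.0;   // centers of a regular 3x3 subpixel grid
            double sy = y + (j + 0.5) / 3.0;
            sum += trace(ray_through_point(sx, sy), 1.0);
        }
    return sum * (1.0 / 9.0);                  // average the nine samples
}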

 

3.3.2 Adaptive Supersampling

Adaptive supersampling is an attempt to supersample in a smarter way. It first emits a fixed number of rays and compares their colors. If the colors are similar, the program assumes the rays hit the same object, and their average is used as the pixel's color. If the colors differ by more than some threshold, the pixel is considered special and needs further inspection: it is subdivided into smaller regions, each treated as a complete pixel, and the process starts again, with the same fixed ray pattern shot into each new region.

 

Unfortunately, adaptive supersampling still subdivides pixels into regular patterns and suffers from the aliasing that regular subdivision can produce, for example when an object edge is almost aligned with the sampling grid. In short, regular methods are not good enough.

 

3.3.3 Random Sampling

Stochastic (random) sampling sends a fixed number of rays into each pixel but ensures that they are randomly distributed (while still covering the area more or less evenly). Stochastic rays also help with following incoming light on uneven surfaces. This is a core concept of distributed ray tracing, which will be discussed later.

 

3.4 Acceleration Structure

An acceleration structure is a technique that helps decide, as quickly as possible, which objects in the scene a particular ray might intersect, rejecting large groups of objects that we know for certain the ray will never hit.

 

3.4.1 Bounding Volumes

The bounding volume hierarchy is the most representative subdivision method based on hierarchical relationships. Its basic idea is to enclose the patches in the scene in simple bounding volumes (spheres, boxes, and so on); adjacent bounding volumes are contained in a larger bounding volume, expanding step by step into a hierarchy. Before the ray-object intersection test, the ray is tested against the bounding volume first; only if it intersects the bounding volume do we test the patches it contains. Organizing the bounding volumes into an effective hierarchy reduces the number of intersection tests and thus the complexity, further improving efficiency. When choosing a bounding volume, there is a trade-off: on the one hand, a simple shape keeps the ray-volume intersection test cheap; on the other hand, a volume that tightly fits the object surface makes the test more effective.

 

In practice, the most commonly used volume is the axis-aligned bounding box (AABB). AABBs are simple, easy to store, cheap to compute, and robust; they are a good compromise between a cheap intersection test and a tight fit, and they are efficient. Compared with spatial-subdivision methods, hierarchy-based methods have advantages in dynamic scenes. With spatial subdivision, the organizational structure must change as the scene changes and be rebuilt every frame, which is a huge overhead and causes a surge in computation. With a hierarchy-based structure, only the relevant information of the affected bounding volumes (such as position and size) needs to be updated. That is to say, a BVH only needs to refresh its data structure rather than rebuild it, so efficiency improves significantly.

  1. Fast rejection: first check rays against bounding volumes
  2. Faster rejection: check the bounding volume of frustum objects

 

One organizational structure groups bounding volumes into containing volumes to create a bottom-up tree hierarchy; when a bounding volume is hit, its child nodes are checked recursively.
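For the ray-volume test itself, the classic "slab" method for an AABB looks roughly like the following generic sketch (not any particular engine's implementation; it relies on IEEE infinities when a direction component is zero, with the usual NaN caveat when the origin lies exactly on a slab plane):

#include <algorithm>
#include <utility>

struct AABB { double min[3], max[3]; };

// True if origin + t*dir crosses the box for some t in [t_min, t_max].
bool hit_aabb(const AABB& box, const double origin[3], const double dir[3],
              double t_min, double t_max) {
    for (int a = 0; a < 3; ++a) {
        double inv = 1.0 / dir[a];                    // +-inf when dir[a] == 0 (IEEE)
        double t0 = (box.min[a] - origin[a]) * inv;   // entry/exit distances for this slab
        double t1 = (box.max[a] - origin[a]) * inv;
        if (inv < 0.0) std::swap(t0, t1);
        t_min = std::max(t_min, t0);
        t_max = std::min(t_max, t1);
        if (t_max <= t_min) return false;             // slab intervals stopped overlapping
    }
    return true;                                       // intervals overlap on all three axes
}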

 

3.4.2 Uniform Grid

A spatial grid divides three-dimensional space along the three axes at a specific width, producing a grid of a certain resolution; the patches in the scene are assigned to the corresponding cells. Each cell can contain different patches and holds references to the scene patches it contains. The simplest version is a uniform grid, which divides the scene evenly.

 

Generate a uniform grid and create a structure linking each grid cell to the objects inside it. For each cell the ray passes through, check whether any object in it intersects the ray.

 

The advantage of this method is that it is simple and convenient: it is easy to create, and patches can be assigned to their cells quickly. However, in real scenes the distribution of patches is uneven; for example, most patches may be concentrated in a few cells. A ray traversal then has to test many patches, few irrelevant patches can be rejected, and the traversal becomes laborious. Therefore, if the patch distribution in the scene is not uniform, the traversal efficiency of this subdivision method is very low.

 

3.4.3 KD-Tree

The BSP tree is a space-partitioning technique that has been applied in many fields; it was introduced into various areas of computer graphics in the 1990s. A kd-tree can be regarded as a special case of a BSP tree, a tree structure extended to multidimensional space. The principle is to treat the entire scene as a tree: a splitting plane divides the current space into two subspaces, giving two subtrees, and each subtree is divided by its own splitting plane into ever smaller subtrees, until the depth of the tree reaches a predetermined threshold or the number of scene patches in a node falls below a predetermined threshold. Each node of the tree represents a subspace and contains all the patches inside that space; the root node represents the entire scene.
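A hedged sketch of the construction just described, with both stopping thresholds as parameters; the Patch type and the midpoint split are our simplifications (production tracers usually choose the plane with a surface-area heuristic):

#include <algorithm>
#include <cstddef>
#include <memory>
#include <vector>

struct Patch { double centroid[3]; /* triangle data ... */ };

struct KdNode {
    int axis = -1;                       // -1 marks a leaf
    double split = 0.0;                  // position of the splitting plane
    std::unique_ptr<KdNode> left, right;
    std::vector<Patch> patches;          // only filled in leaves
};

std::unique_ptr<KdNode> build(std::vector<Patch> patches, int depth,
                              int max_depth, std::size_t max_leaf_size) {
    auto node = std::make_unique<KdNode>();
    if (depth >= max_depth || patches.size() <= max_leaf_size) {
        node->patches = std::move(patches);          // stop: store patches in the leaf
        return node;
    }
    node->axis = depth % 3;                          // cycle through x, y, z
    double lo = 1e30, hi = -1e30;                    // midpoint of centroids on this axis
    for (const Patch& p : patches) {
        lo = std::min(lo, p.centroid[node->axis]);
        hi = std::max(hi, p.centroid[node->axis]);
    }
    node->split = 0.5 * (lo + hi);
    std::vector<Patch> l, r;
    for (Patch& p : patches)
        (p.centroid[node->axis] < node->split ? l : r).push_back(p);
    node->left  = build(std::move(l), depth + 1, max_depth, max_leaf_size);
    node->right = build(std::move(r), depth + 1, max_depth, max_leaf_size);
    return node;
}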


4. Distributed Ray Tracing (DRT)

We are finally about to touch the modern approach to ray tracing using Monte Carlo methods: distributed ray tracing. Distributed ray tracing is not ray tracing on distributed systems; it is ray tracing based on randomly distributed sampling, used to reduce artifacts in rendered images. It was first called distributed ray tracing and later distribution ray tracing, in order to distinguish it from parallel computing; it is also called stochastic ray tracing after its defining characteristic. You will see why after reading the following sections.

 

Looking at the picture above, anyone can tell that it is a computer-rendered image rather than a photograph: it has perfect reflections and perfectly shaded surfaces, harsh shadows, ugly jaggies, and no depth of field. We call this phenomenon distortion. Even before Kajiya (1986) formalized the rendering equation, Cook et al. recognized that rendering is simply the process of solving a set of nested integrals. These integrals have no analytical solutions computable in finite time, so Monte Carlo techniques are used to solve them. From Cook's distributed ray tracing (1984), we can summarize its characteristics:

  1. Use non-uniform (jittered) sampling
  2. Replace image distortion (aliasing) with noise
  3. Provide special effects such as glossy reflections, soft shadows, motion blur (distribution in time), etc.

 

The main idea of distributed ray tracing is to supersample each pixel and anti-alias by averaging; multiple rays can be sent not only at the start, from the eye, but also at each bounce after emission, which achieves more effects than Whitted ray tracing with its single bounced ray.

 

It is worth noting that, since the rendering equation was not formulated until 1986, Cook's 1984 paper lacked a mathematical framework to actually reason about these effects, and distributed ray tracing lacked any full description of global illumination.

 

The basic sampling of direct and indirect light is the same as in path tracing, which will be introduced in the next part.

 

4.1 Sampling for Pixels

As with the anti-aliasing methods discussed for Whitted ray tracing, without randomness there will always be special cases that cause errors. Therefore, distributed ray tracing uses stochastic sampling, which avoids the regularity of grid sampling.

 

4.1.1 Poisson Disk Distribution

An example of unevenly distributed sampling locations is the eye. The eye has a finite number of photoreceptors, so, like any other sampling process, it should have a Nyquist limit; yet the eye does not normally exhibit aliasing. In the eye's fovea, cells are packed tightly in a hexagonal pattern, and the lens acts as a low-pass filter, which prevents aliasing. Outside the fovea the arrangement of cells is very sparse, so the sampling rate is very low, yet there is still no aliasing, because the uneven distribution of the cells prevents it.

 

There are studies of the distribution of cone cells in the eye. Similar to the human eye, the distribution of photoreceptors outside the fovea of monkey eyes is shown in the figure. Such a distribution is called a Poisson disk distribution: sampling points are randomly distributed, but the distance between any two sampling points is not less than a certain minimum. This minimum distance limits the amount of noise. By contrast, film grain is randomly distributed, as shown in the figure below, but with no minimum-distance constraint like the Poisson disk distribution; it is purely random. As a result, some sample points cluster in some areas and leave large blanks elsewhere, so film shows no aliasing, but it does show noise.

 

A simple method to realize the Poisson disk distribution:

(1) Randomly generate a sampling position; if it is less than a given distance from any already selected position, discard it. Repeat until the sampling area is full. Using this method, a lookup table can be created;

(2) It is also necessary to compute the filter values that describe the relationship between each sampling point and the surrounding pixels. The positions and filter values are stored in a lookup table. This simple approach does produce good images, but it requires a very large lookup table, so another technique is used instead: jittering.
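Step (1) is the classic "dart throwing" loop. A minimal C++ sketch follows; the sample counts, the fixed seed, and the unit-square domain are arbitrary choices for illustration:

#include <random>
#include <vector>

struct Point2 { double x, y; };

// Generate candidates at random, rejecting any that fall within min_dist of
// an already accepted sample ("dart throwing").
std::vector<Point2> poisson_disk(int target, double min_dist, int max_tries) {
    std::mt19937 rng(42);                             // fixed seed for reproducibility
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::vector<Point2> samples;
    for (int tries = 0; (int)samples.size() < target && tries < max_tries; ++tries) {
        Point2 c{uni(rng), uni(rng)};                 // random candidate in the unit square
        bool ok = true;
        for (const Point2& s : samples) {
            double dx = c.x - s.x, dy = c.y - s.y;
            if (dx * dx + dy * dy < min_dist * min_dist) { ok = false; break; }
        }
        if (ok) samples.push_back(c);                 // keep only if far enough from all others
    }
    return samples;                                   // can be precomputed into a lookup table
}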

 

4.1.2 Jitter Sampling

Jitter sampling is actually a type of stratified sampling, and also a form of random sampling; it is a technique that approximates the Poisson disk distribution.

 

There are many jitter techniques; here we mainly introduce grid jittering, which produces good experimental results and is very suitable for image rendering algorithms. The principle is to subdivide each pixel into blocks and add a random offset to the center of each block, ensuring that the offset stays within its block. This sampling method can also be used for sampling area lights.
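A minimal sketch of grid jittering, reusing the Point2 type and includes from the dart-throwing sketch above; n is the subpixel grid resolution per side:

// One random offset inside each of the n x n subpixel cells, so every
// sample stays within its own stratum.
std::vector<Point2> jittered_grid(int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::vector<Point2> samples;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            samples.push_back({(i + uni(rng)) / n,    // cell corner plus random offset,
                               (j + uni(rng)) / n});  // still inside cell (i, j)
    return samples;
}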

 

Jitter attenuates high-frequency signals, but the energy removed from them reappears as noise rather than disappearing, so the basic spectral composition is not changed. This technique may cause more noise than a true Poisson disk distribution and may leave some aliasing.

 

An interesting side effect of distributed ray tracing's random sampling patterns is that they inject noise into the solution (a slightly rougher image). This noise is more acceptable than a distorted, aliased image.

 

To give an example of jitter, consider time jitter: the nth sample is jittered by an amount ζn, so it is taken at position nT + ζn, where T is the sampling period; the figure below shows the effect of time jitter. Different models can be used for the jitter amount ζn, for example a Gaussian distribution with variance σ², and the resulting gain can be expressed as a function of the frequency μ.

 

Consider a pixel as a grid composed of multiple subpixel cells; this gives two-dimensional jitter. Random offsets are added independently in the X and Y directions, which is equivalent to two one-dimensional jitters. Each sampling point must fall at a random position within its own subpixel cell. Once it is known which sampling points are visible, their values are processed through a reconstruction filter.

 

The implementation of the reconstruction filter is an open problem. The simplest reconstruction filter is a box filter: take the average of the sampling points. Weighted reconstruction filters can also be used; in that case, the filter assigns each sample a weight relative to the surrounding pixels, and each pixel is the sum of the values of nearby sampling points multiplied by their weights. These filters can be computed in advance and stored in a lookup table.

 

Due to the length of this article, an analysis of jitter sampling performance and variance will not be discussed for now; we will explore it when time permits.

 

4.2 Area Lights and Soft Shadows

Shadows in Whitted ray tracing are discrete. When shading a point, each light is checked for visibility: if the source is visible, it contributes to the point's shading; otherwise, it does not. The light source itself is represented by a single point.

 

This is fairly accurate for distant lights but performs poorly for large or nearby light sources. The result of this discrete decision is that shadow edges are very sharp: there is an abrupt transition from shadowed points to lit points, while shadows in the real world are much softer, with a gradual transition from full shadow through partial shadow. This is due to the finite area of real light sources and the scattering of light from other surfaces.

 

Distributed ray tracing approximates soft shadows by representing a light source as a sphere with points randomly distributed over its surface (an area light). To decide how much a point is in shadow, it casts a set of rays toward the light. The amount of light reaching the point can be approximated by the ratio of the number of rays that reach the light source to the number of rays cast.

 

An area light produces a soft shadow: a penumbra surrounding the umbra. The soft shadow can be computed with shadow rays cast to random points on the surface of the area light.
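A hedged sketch of this visibility estimate, reusing the Vec3 type and <random> includes from the earlier sketches; AreaLight, random_point_on_light, and occluded are hypothetical stand-ins for a real tracer's light and shadow-ray machinery:

struct AreaLight;                                           // hypothetical light description
Vec3 random_point_on_light(const AreaLight& light, std::mt19937& rng);
bool occluded(const Vec3& from, const Vec3& to);            // shadow-ray test

// Estimate the fraction of the area light visible from point p.
double soft_shadow(const Vec3& p, const AreaLight& light, int n, std::mt19937& rng) {
    int unblocked = 0;
    for (int i = 0; i < n; ++i) {
        Vec3 q = random_point_on_light(light, rng);         // sample the light's surface
        if (!occluded(p, q)) ++unblocked;                   // shadow ray from p to q
    }
    return double(unblocked) / n;                           // ~ fraction of light that is visible
}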

 

4.3 Specular Reflection

Whitted ray tracing is good at representing perfectly reflective surfaces but struggles with glossy or partially reflective ones. Only when a surface is a perfect mirror does its reflection look exactly like reality. More often the surface is glossy and reflects a blurred image of the scene, due to the light-scattering properties of the surface, whereas reflections in traditional Whitted tracing are always sharp.

The main reason is that the reflection contribution in traditional ray tracing comes only from a single reflection vector.

 

For its reflection contribution, distributed ray tracing, instead of casting a single ray along the reflection direction, emits a bundle of rays around it, randomly sampling multiple directions within a cone about the mirror direction. The actual reflection is found by taking the statistical average of the values returned by each ray. This method can also generate specular highlights from area lights: rays reflected from the surface that hit the light source add to the specular component of the surface's lighting, which can replace the specular term of the Phong model. Also, since reflection intensity falls off rapidly away from the mirror direction, the sampling probability in those directions should decrease accordingly.

 

4.4 Transparency

Whitted ray tracing is good at representing perfectly transparent surfaces but poor at translucent ones. Real translucent surfaces typically transmit a blurred image of the scene behind them. Distributed ray tracing achieves this kind of translucency by casting randomly distributed rays around the direction of the traditional refracted ray; the values computed from these rays are then averaged to form the translucency component.

 

The basic Monte Carlo ray tracing method is not efficient, mainly because paths may undergo many reflections and refractions without ever reaching the light source, especially when the light source is small. We can fix this by sampling the light source directly: after a ray hits an object, take a point on the area light and test whether the shadow ray connecting the intersection point to the light point is blocked; if not, compute the direct illumination between the light source and the intersection point. In this process, the pdf of the reflection is combined using Multiple Importance Sampling [4].

 

4.5 Motion Blur

Shutter speed is how long the camera shutter stays open for each frame. If objects in the scene move relative to the camera, they appear blurred on film, because the final pixel is the average over the exposure time. This is common sense: when shooting a running athlete, you need a fast shutter speed, otherwise the long exposure makes the picture unclear. Conversely, if the effect you want is the light trails of flowing night traffic, you need a relatively long shutter speed, plus a tripod or other support, because shaking hands will blur the surrounding scenery.

 

Distributed ray tracing can simulate this blur by distributing rays in time as well as in space. Before each ray is cast, the objects are translated or rotated to their correct positions for that instant; the rays are then averaged to get the final value. Objects with the most motion are blurred the most in the rendered image. This is simply a sampled average over time, as the sketch below shows.
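A minimal sketch of this time sampling, reusing the Color/Ray types and <random> includes from earlier; trace_at_time is a hypothetical helper that poses the scene at shutter time t in [0, 1) before tracing:

Color trace_at_time(const Ray& ray, double t);   // hypothetical: pose scene at time t, then trace

Color motion_blur_pixel(const Ray& ray, int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    Color sum{};
    for (int i = 0; i < n; ++i)
        sum += trace_at_time(ray, uni(rng));     // random instant while the shutter is open
    return sum * (1.0 / n);                      // average over the exposure
}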

 

4.6 Depth of Field

Both the human eye and the camera have a finite lens aperture and thus a finite depth of field: objects too far or too near appear out of focus and blurry. But almost all computer graphics rendering uses a pinhole camera model, in which all objects are in perfect focus regardless of distance. This is advantageous in many ways, since defocus blur is often undesirable in images. However, simulating depth of field can produce more realistic images, because it models real optical systems more accurately.

 

 

Distributed ray tracing creates depth of field by placing an artificial lens in front of the image plane, again using randomly distributed rays to simulate the blur. The first ray cast is not modified by the camera, and the focal point of the lens is assumed to lie at a fixed distance along it. The remaining rays for the same pixel are scattered across the surface of the lens and bent so that they pass through the focal point. Points in the scene close to the focal point of the lens will be in sharp focus, while nearer or farther points will be blurred.
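A hedged sketch of this thin-lens sampling, reusing the Vec3/Ray types and <random> includes from earlier; it assumes the lens lies in a plane of constant z around the original ray origin, which is an arbitrary simplification:

// Jitter the ray origin across the aperture and aim at the focal point of
// the unmodified primary ray.
Ray dof_ray(const Ray& center_ray, double focal_dist, double aperture,
            std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(-1.0, 1.0);
    double lx, ly;
    do { lx = uni(rng); ly = uni(rng); } while (lx * lx + ly * ly > 1.0);  // point on unit disk
    Vec3 focal  = center_ray.at(focal_dist);                 // this depth stays in sharp focus
    Vec3 origin = center_ray.e + Vec3{lx, ly, 0} * (aperture * 0.5);  // offset on the lens
    return Ray{origin, focal - origin};                      // bend through the focal point
}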

 

Objects at the focal distance are sharp; the rest will be blurred.

 

4.7 Disadvantages of Distributed Ray Tracing

Its main problem is the exponential growth of rays in distributed ray tracing. The primary rays you start with usually carry the most important lighting contribution, but each secondary bounce multiplies the ray count: 1 ray becomes 10, every 10 become 100, every 100 become 1000, and so on. By the third bounce you are casting more than 1000 times as many rays, even though that bounce might contribute 1/100th or less of the final lighting.

 

As shown above, if both pixels and reflections are anti-aliased this way, the cost becomes terrible. The approach is inherently inefficient because you spend the most time computing the least important parts of the image. It is also difficult to render advanced lighting effects such as caustics with distributed ray tracing.

Continue reading: Introduction of the Raytracing Technology Part 2 

Author: papalqi, https://www.zhihu.com/people/papalqi

Translation: UWA Team

Thanks to the author for your contribution. Welcome to repost and share, but please do not reprint without the authorization of the author. 

