
Basic Knowledge

Depth of Field is one of the most commonly used screen post-processing effects in games. The term comes from photography: it is the range of distances in front of and behind the subject within which a camera lens (or other imager) produces an acceptably sharp image. Objects within the depth of field appear sharp, while objects outside it appear blurred.

 

 

For a pinhole camera, only one ray from each point of the recorded object can pass through the hole and be recorded. The advantage is that the image is always sharp, but a single ray carries too little light, so long exposures are needed to accumulate enough light for a usable image. During a long exposure, motion blur occurs whenever the subject moves.

 

For faster photography, the exposure time is reduced and a lens is used for imaging. A lens lets many rays be recorded at once, but the image of a point that is not at the focus distance is no longer a point but a circle, called the Circle of Confusion (CoC). Because of the limited resolving power of the photosensitive element and the human eye, the image looks sharp when the circle of confusion is smaller than a certain size, and blurry when it is larger.

 

(The focus position of the two photos is different, and the sharpness of each object is different)

 

(Depth of Field effect in Crysis)

 

Human vision does not exhibit depth of field in the same way; we simply focus on whatever we want to see. But images rendered with depth of field draw our attention to the sharp parts of the frame, making it a useful tool for directing attention and highlighting key points.

 

Next, we want to simulate this effect and determine the part we want to focus on so that objects within the depth of field are sharp, and objects outside the depth of field are blurred.


Implementation

 

Calculate CoC (Circle of Confusion) Value

One option is to perform the calculation physically:

 

Variables to know include:

Ape – the aperture diameter

f – the focal length

F – the distance from the camera to the point that is perfectly focused on the focal plane, called the focus distance

P – the distance from the camera to the currently observed point

Combined with the relationship between the focal length and the distance from the lens to the image plane, the exact calculation is fairly involved.
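As a reference point, the textbook thin-lens approximation of the CoC diameter can be sketched in Python (variable names are ours, and this is an approximation, not the exact optics):

```python
def physical_coc(aperture, focal_length, focus_dist, subject_dist):
    """Thin-lens circle-of-confusion diameter (textbook approximation).

    aperture     -- aperture (entrance pupil) diameter, Ape
    focal_length -- lens focal length, f
    focus_dist   -- distance to the perfectly focused plane, F
    subject_dist -- distance to the observed point, P
    """
    return (aperture
            * abs(subject_dist - focus_dist) / subject_dist
            * focal_length / (focus_dist - focal_length))
```

At P = F the diameter is 0, and as P grows toward infinity it approaches Ape * f / (F - f), which is exactly the maximum background CoC used below.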

 

In practice, considering how the circle of confusion varies with the distance from the observed point to the camera, we can assume that when F = P the circle of confusion has size 0, that its diameter grows as the difference between F and P increases, and that it eventually saturates at a maximum diameter. Based on this behavior, a relatively simple mathematical model can approximate the effect.

 

For example, in the SIGGRAPH 2018 talk “A Life of a Bokeh”, the Epic Games team discussed their depth of field implementation. Their simulation of the circle of confusion size in Unreal Engine is shown in the figure:

 

(SIGGRAPH2018: A Life of a Bokeh)

 

The meanings of the relevant variables and parameters are:

P represents the distance from the focused object to the camera,

Z represents the distance from the currently drawn object to the camera,

MaxBgdCoc represents the maximum circle of confusion size for the background. It is calculated as: (aperture diameter * focal length) / (focus distance - focal length)

 

The mathematical model constructed this way captures the trend of the circle of confusion: the farther an object is from the focal plane, the larger its circle of confusion, gradually approaching a maximum value.
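As a quick sanity check (a sketch, not engine code): the simplified model CoC = clamp(1 - F/P, -1, 1) * MaxBgdCoc agrees with the thin-lens value for background points, since Ape * f / (F - f) * (P - F) / P = MaxBgdCoc * (1 - F/P):

```python
def simple_coc(focus_dist, subject_dist, max_bgd_coc):
    # Simplified model: 0 at the focus distance, approaching
    # +max_bgd_coc far behind the focal plane, negative in front of it.
    coc = 1.0 - focus_dist / subject_dist
    return max(-1.0, min(1.0, coc)) * max_bgd_coc

# MaxBgdCoc from the physical parameters (example numbers, not from the article)
aperture, f, F = 1.4, 0.05, 10.0
max_bgd_coc = aperture * f / (F - f)
```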

 

Define the relevant variables in the C# script:

#region FocalLength
[SerializeField]
float _focalLength = 1.4f;
public float focalLength
{
    get { return _focalLength; }
    set { _focalLength = value; }
}
#endregion

#region Aperture
[SerializeField]
float _Aperture = 1.4f;
public float Aperture
{
    get { return _Aperture; }
    set { _Aperture = value; }
}
#endregion

#region FocusDistance
[SerializeField]
Transform _pointOfFocus;
public Transform pointOfFocus
{
    get { return _pointOfFocus; }
    set { _pointOfFocus = value; }
}

[SerializeField]
float _focusDistance;
public float focusDistance
{
    get { return _focusDistance; }
    set { _focusDistance = value; }
}

// If a focus target is assigned, derive the focus distance from its
// position projected onto the camera's forward axis.
float CalculateFocusDistance()
{
    if (_pointOfFocus == null) return focusDistance;
    var cam = TargetCamera.transform;
    return Vector3.Dot(_pointOfFocus.position - cam.position, cam.forward);
}
#endregion

Calculate the CoC value of each pixel in the shader according to this mathematical model:

 

sampler2D _MainTex;
sampler2D _CameraDepthTexture;
float _FocusDistance;
float _MaxBgdCoc;

half frag_coc(v2f_img i) : SV_Target
{
    // Linear eye-space depth of the current pixel.
    half depth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv));
    // Signed CoC: 0 at the focus distance, negative in front, positive behind.
    half CoC = (1 - _FocusDistance / depth);
    CoC = clamp(CoC, -1, 1) * _MaxBgdCoc;
    return CoC;
}

Use a temporary RT to store the CoC value:

private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    SetUpShaderParameters(source);
    // Use GetTemporary here: the RT is released below with ReleaseTemporary,
    // which only pairs with GetTemporary.
    RenderTexture CocRT = RenderTexture.GetTemporary(source.width, source.height, 0, RenderTextureFormat.RHalf, RenderTextureReadWrite.Linear);
    Graphics.Blit(source, CocRT, _material, 0);
    Graphics.Blit(source, destination);
    RenderTexture.ReleaseTemporary(CocRT);
}

 

Construct a simple scene; the depth relationships are shown in the figure:

 

The schematic diagram of the CoC value is as follows:

 

Bokeh Filter

The next step is to produce the Bokeh Filter.

 

 

Because the aperture is roughly circular, bright areas outside the depth of field are imaged as roughly circular spots. Following the idea from the earlier blur chapters, to create this effect the sampling pattern of the convolution operator should also tend toward a circle. The following figure shows the shape of the circular operator at different sampling densities:

 

(SIGGRAPH2018: A Life of a Bokeh)

 

A sampling pattern is given in Unity's Post-processing Stack v2 for reference:

 

#if defined(KERNEL_SMALL)
// rings = 2
// points per ring = 5
static const int kSampleCount = 16;
static const float2 kDiskKernel[kSampleCount] = {
    float2(0,0),
    float2(0.54545456,0),
    float2(0.16855472,0.5187581),
    float2(-0.44128203,0.3206101),
    float2(-0.44128197,-0.3206102),
    float2(0.1685548,-0.5187581),
    float2(1,0),
    float2(0.809017,0.58778524),
    float2(0.30901697,0.95105654),
    float2(-0.30901703,0.9510565),
    float2(-0.80901706,0.5877852),
    float2(-1,0),
    float2(-0.80901694,-0.58778536),
    float2(-0.30901664,-0.9510566),
    float2(0.30901712,-0.9510565),
    float2(0.80901694,-0.5877853),
};
#endif
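The tabulated values appear to follow a simple construction: a center sample, then ring r (r = 1..rings) holding 5·r samples at radius (r + 0.2) / (rings + 0.2), each ring starting at angle 0. This is an observction fitted to the table, not Unity's documented derivation; a Python sketch that reproduces it:

```python
import math

def disk_kernel(rings=2, points_per_ring=5):
    """Generate a disk kernel matching the listing above.

    Center sample first; ring r holds points_per_ring * r samples
    evenly spaced on a circle of radius (r + 0.2) / (rings + 0.2).
    """
    kernel = [(0.0, 0.0)]
    for r in range(1, rings + 1):
        n = points_per_ring * r
        radius = (r + 0.2) / (rings + 0.2)
        for i in range(n):
            a = 2.0 * math.pi * i / n
            kernel.append((radius * math.cos(a), radius * math.sin(a)))
    return kernel
```

Generating denser kernels (more rings) for KERNEL_MEDIUM-style variants is then just a parameter change.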

 

The circular operator is used for convolution, and the effect is as shown in the figure:

 

Note that when calculating the offsets, the operator samples a unit circle, so each offset is multiplied by the maximum circle-of-confusion diameter to obtain the actual size.

 

half4 frag_bokeh(v2f_img i) : SV_Target
{
    half3 color = half3(0, 0, 0);
    UNITY_LOOP for (int index = 0; index < kSampleCount; index++)
    {
        // Kernel offsets lie on a unit disk; scale by the maximum CoC diameter.
        float2 offset = kDiskKernel[index] * _MaxBgdCoc;
        color += tex2D(_MainTex, i.uv + offset * _MainTex_TexelSize.xy).rgb;
    }
    color = color / kSampleCount;
    return half4(color, 1);
}

 

Obtain the effect as follows:

 

It can be seen that the brighter parts show round bright spots after processing, which meets the requirements. However, the edges of the bright spots are very sharp and lack transition, so we add another blur to make the transition more natural. We can reuse the downsampling operation from Dual Blur:

 

half4 frag_blur(v2f_img i) : SV_Target
{
    // 4-tap box filter at half-texel offsets (the Dual Blur downsample).
    float4 offset = _MainTex_TexelSize.xyxy * float4(-1.0, -1.0, 1.0, 1.0);
    half4 color = tex2D(_MainTex, i.uv + offset.xy) * 0.25;
    color += tex2D(_MainTex, i.uv + offset.zy) * 0.25;
    color += tex2D(_MainTex, i.uv + offset.xw) * 0.25;
    color += tex2D(_MainTex, i.uv + offset.zw) * 0.25;
    return color;
}
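On the CPU, the same idea reduces to a 2x2 box average. A simplified illustrative sketch (not the shader itself, whose half-texel bilinear taps cover a slightly wider footprint):

```python
def downsample_2x2(img):
    """Average each 2x2 block of a grayscale image (list of rows),
    halving both dimensions -- the CPU analogue of the 4-tap
    downsample used in Dual Blur."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1]
              + img[y + 1][x] + img[y + 1][x + 1]) * 0.25
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]
```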

 

The effect is as shown in the figure:

 

It can be seen that the transition is more natural. However, it is not necessary for the image to be all blurred, only objects outside the depth of field are expected to be blurred, while objects within the depth of field are clear.

Apply CoC

We need to apply the previously calculated CoC value as a filter. The blur in the previous step downsamples the image once, so the RT storing the CoC value is downsampled once as well:

 

SetUpShaderParameters(source);
RenderTexture CocRT = RenderTexture.GetTemporary(source.width, source.height, 0, RenderTextureFormat.RHalf, RenderTextureReadWrite.Linear);
Graphics.Blit(source, CocRT, _material, 0);
_material.SetTexture("_CocTex", CocRT);
int width = source.width / 2;
int height = source.height / 2;
RenderTextureFormat format = source.format;
RenderTexture tmpRT0 = RenderTexture.GetTemporary(width, height, 0, format);
RenderTexture tmpRT1 = RenderTexture.GetTemporary(width, height, 0, format);
Graphics.Blit(source, tmpRT0, _material, 1);
Graphics.Blit(tmpRT0, tmpRT1, _material, 2);
Graphics.Blit(tmpRT1, tmpRT0, _material, 3);
Graphics.Blit(tmpRT0, destination);
RenderTexture.ReleaseTemporary(CocRT);
RenderTexture.ReleaseTemporary(tmpRT0);
RenderTexture.ReleaseTemporary(tmpRT1);

 

Here we assign the mean of the four corresponding pixels in the CoC RT to the downsampled RT:

 

half4 frag_filterCoc(v2f_img i) : SV_Target
{
    float4 offset = _MainTex_TexelSize.xyxy * float4(0.5, 0.5, -0.5, -0.5);
    half coc0 = tex2D(_CocTex, i.uv + offset.xy).r;
    half coc1 = tex2D(_CocTex, i.uv + offset.zy).r;
    half coc2 = tex2D(_CocTex, i.uv + offset.xw).r;
    half coc3 = tex2D(_CocTex, i.uv + offset.zw).r;
    // Pack the averaged CoC into the alpha channel alongside the color.
    half coc = (coc0 + coc1 + coc2 + coc3) * 0.25;
    return half4(tex2D(_MainTex, i.uv).rgb, coc);
}

 

As a result, when the bokeh blur samples this texture, each pixel's CoC value is available in the alpha channel.

 

In the earlier Bokeh Filter, every sampling point affected the center point. In fact, the CoC value represents a pixel's range of influence: if a sample's CoC value is smaller than its distance to the center point, that sample has no influence on the center point and should be skipped.

 

For example, in the blue part of the figure above, the radius of the circle of confusion is smaller than the distance from the sampling point to the center point, so it does not affect the center pixel's color.
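The rejection rule can be stated compactly in a Python sketch (hypothetical names): a sample contributes only when its own CoC is large enough to reach the center pixel.

```python
def gather_bokeh(samples):
    """samples: list of (color, coc, offset_radius) at kernel offsets.

    A sample influences the center pixel only if |coc| >= offset_radius,
    i.e. its circle of confusion reaches the center. Returns the
    average color over the contributing samples."""
    total, weight = 0.0, 0
    for color, coc, radius in samples:
        if abs(coc) >= radius:
            total += color
            weight += 1
    return total / weight if weight else 0.0
```

The center sample sits at radius 0, so it always contributes and the average is never empty in the shader version.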

 

Then modify the bokeh shader code accordingly:

 

half4 frag_bokeh(v2f_img i) : SV_Target
{
    half3 color = half3(0, 0, 0);
    half weight = 0;
    UNITY_LOOP for (int index = 0; index < kSampleCount; index++)
    {
        float2 offset = kDiskKernel[index] * _MaxBgdCoc;
        half radius = length(offset);
        half4 tmpColor = tex2D(_MainTex, i.uv + offset * _MainTex_TexelSize.xy);
        // Only samples whose own CoC reaches the center pixel contribute.
        if (abs(tmpColor.a) >= radius)
        {
            color += tmpColor.rgb;
            weight += 1;
        }
    }
    // The center sample (offset 0) always passes, so weight >= 1.
    color = color / weight;
    return half4(color, 1);
}

 

First, comment out the Blur link, adjust the aperture diameter, and check the effect:

 

Focusing on the green cube, you can see that the bokeh is more pronounced for the two particle systems that are farther away:

 

And the focus part has little bokeh:

Overlay

In practice, when the diameter of the circle of confusion is smaller than the pixel size of the sensor element, the object is effectively in focus. Set a variable to represent the sensor's pixel size:

 

#region PixelSize
[SerializeField]
float _pixelSize = 0.2f;
public float PixelSize
{
    get { return _pixelSize; }
    set { _pixelSize = value; }
}
#endregion

……

private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    SetUpShaderParameters(source);
    ……
    _material.SetTexture("_BokehTexture", tmpRT0);
    Graphics.Blit(source, destination, _material, 4);
    ……
}

 

When the CoC value is less than or equal to this threshold, the sharp source image is displayed; when it is greater, a blend of the bokeh image and the source image is used. To ensure a smooth transition, the colors are blended with lerp driven by smoothstep:

 

half4 frag_combine(v2f_img i) : SV_Target
{
    half4 rawColor = tex2D(_MainTex, i.uv);
    half3 bokehColor = tex2D(_BokehTexture, i.uv).rgb;
    half CoC = tex2D(_CocTex, i.uv).r;
    // Below _PixelSize the pixel counts as in focus; blend toward bokeh above it.
    half strength = smoothstep(_PixelSize, 1, abs(CoC));
    half3 color = lerp(rawColor.rgb, bokehColor, strength);
    return half4(color, rawColor.a);
}
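The blend behavior can be checked on the CPU with a small Python re-implementation of HLSL's smoothstep and lerp (illustrative only; _PixelSize assumed 0.2 as in the script above):

```python
def smoothstep(edge0, edge1, x):
    # HLSL smoothstep: clamp to [0,1], then cubic Hermite interpolation.
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def lerp(a, b, t):
    return a + (b - a) * t

def combine(raw, bokeh, coc, pixel_size=0.2):
    # CoC at or below the sensor pixel size -> sharp source image;
    # larger CoC -> smooth transition toward the bokeh image.
    strength = smoothstep(pixel_size, 1.0, abs(coc))
    return lerp(raw, bokeh, strength)
```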

 

Adjust the scene to get the effect:

 

Separate Foreground and Background

 

With this algorithm, the following problem appears:

 

When we focus on the green cube, the white spherical particles in front no longer have bokeh. This is because the CoC value is calculated from each pixel's own depth. The correct result would be for the white particles in front to be rendered with bokeh and partially cover the green cube behind them.

 

Therefore, we need to separate the foreground and background and compute them independently. The idea is simple: use the sign of the CoC to decide whether a sample belongs to the foreground or the background, and accumulate each into its own variable for the bokeh calculation.

 

half4 frag_bokeh(v2f_img i) : SV_Target
{
    ……
    half4 bgcolor = half4(0, 0, 0, 0);
    half4 fgcolor = half4(0, 0, 0, 0);
    UNITY_LOOP for (int index = 0; index < kSampleCount; index++)
    {
        float2 offset = kDiskKernel[index] * _MaxBgdCoc;
        half radius = length(offset);
        half4 tmpColor = tex2D(_MainTex, i.uv + offset * _MainTex_TexelSize.xy);
        // Positive CoC (behind the focal plane) feeds the background sum,
        // negative CoC (in front of it) feeds the foreground sum.
        half bgWeight = saturate(max(tmpColor.a - radius, 0));
        half fgWeight = saturate(-tmpColor.a - radius);
        bgcolor += half4(tmpColor.rgb, 1) * bgWeight;
        fgcolor += half4(tmpColor.rgb, 1) * fgWeight;
    }
    bgcolor.rgb /= bgcolor.a + (bgcolor.a == 0); // zero-div guard
    fgcolor.rgb /= fgcolor.a + (fgcolor.a == 0);
    ……
}

 

The effect we want to achieve is that if there is foreground in front of a focused object, the foreground is rendered with bokeh and covers the focused object. So during interpolation, a foreground weight greater than 0 indicates the presence of foreground, and the foreground's bokeh color is used. When later blending with the source image, the foreground must also win where it is present, so the blend weight is saved in the alpha channel.

 

half4 frag_bokeh(v2f_img i) : SV_Target
{
    ……
    // Any accumulated foreground weight means foreground bokeh covers this pixel.
    half bgfg = min(1, fgcolor.a);
    half3 color = lerp(bgcolor.rgb, fgcolor.rgb, bgfg);
    return half4(color, bgfg);
}

 

When blending with the source image, we first interpolate according to the background CoC, so that the farther back an object is, the stronger its bokeh. Then we interpolate according to the foreground blend weight, so that pixels covered by foreground bokeh show the foreground's bokeh color according to that weight:

 

half4 frag_combine(v2f_img i) : SV_Target
{
    ……
    half strength = smoothstep(_PixelSize, 1.2, CoC);
    half3 color = lerp(rawColor.rgb, bokehColor.rgb, strength);
    // Foreground bokeh overrides the result according to its blend weight.
    color = lerp(color, bokehColor.rgb, bokehColor.a);
    return half4(color, rawColor.a);
}

 

The effect is as shown in the figure:


Summary

This chapter covered the basic implementation of depth of field. Making the effect truly polished requires many additional smooth transitions and careful parameter adjustments.

 

Depth of field effect achieved by CatlikeCoding: https://catlikecoding.com/unity/tutorials/advanced-rendering/depth-of-field/

 

And the depth of field effect achieved by keijiro:

https://lab.uwa4d.com/lab/5b661495d7f10a201ff9e800

 

(The depth of field effect achieved by the kinoBokeh project)

 

For the performance issues of applying such an effect on mobile, and how to address them, see the UWA course:

https://edu.uwa4d.com/course-intro/0/141?purchased=true

 

Reference: CatlikeCoding, Depth of Field: https://catlikecoding.com/unity/tutorials/advanced-rendering/depth-of-field/


Screen Post Processing Effects Series

Screen Post-processing Effects: Streak effect in the Lens Flare effect and Its Implementation

Screen Post-processing Effects: Real-time Glow and Bloom and Its Implementation

Screen Post Processing Effects Chapter 6: Dual Blur and Its Implementation

Screen Post Processing Effects Chapter 5: Kawase Blur and Its Implementation

Screen Post Processing Effects Chapter 4: Box Blur and Its Implementation

Screen Post Processing Effects Chapter 3: Algorithm of Gaussian Blur and Implementation Using Linear Sampling

Screen Post Processing Effects Chapter 2: Two-Step One-Dimensional Operation Algorithm of Gaussian Blur and Its Implementation

Screen Post Processing Effects Chapter 1 – Basic Algorithm of Gaussian Blur and Its Implementation

 

That’s all for today’s sharing. Life is finite, but knowledge is infinite; over a long development cycle, the problems you see here may be just the tip of the iceberg. We have prepared more technical topics on the UWA Q&A website, waiting for you to explore and share together. Everyone who loves progress is welcome to join us: perhaps your method can solve someone else's urgent problem, and stones from other hills may serve to polish your own jade.

YOU MAY ALSO LIKE!!!

UWA Website: https://en.uwa4d.com

UWA Blogs: https://blog.en.uwa4d.com

UWA Product: https://en.uwa4d.com/feature/got 
