
Introduction

As mobile games keep raising their visual quality, performance optimization on mobile devices is working just as hard, eager to squeeze performance out of every transistor on the chip. However, when you are handed a phone that benchmarks high but performs poorly, do you ever feel a sense of powerlessness? In such cases, a good and fast way to maintain a high frame rate while preserving picture quality is to reduce the resolution.

 

When it comes to adjusting the resolution of the device, you may be familiar with the Screen.SetResolution method, but this adjustment is global and works at the hardware level, so it cannot scale the 3D and UI render targets separately. Of course, with the introduction of the SRP pipelines, we can now adjust the resolution of the 3D camera and the UI camera separately.
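For reference, the global, hardware-level adjustment looks like this; the target resolution below is only an example:

// Drops the entire back buffer (3D and UI alike) to 1280x720 in fullscreen mode.
Screen.SetResolution(1280, 720, true);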

 

Today’s article will discuss the dynamic resolution solutions provided by both Unity and Unreal, which can dynamically scale individual render targets to reduce the workload on the GPU. When it comes to rendering 3D and UI separately, some of you may have already thought of a solution: render the 3D scene to an RT, and finally blit that RT to the final RT. So how does this approach differ from the dynamic resolution solution provided by Unity?

 

In this article, we will explore the principle of dynamic resolution (based on Unity) and its application scenarios.


Traditional 3D and UI Separation Scheme

 

 

As shown in the figure above, the basic principle is to adjust the size of the viewport when rendering the scene, constraining the rendering to a part of an off-screen render target, and then blit the scene render target's content to the final RT. For example, the render target might have a size of (1920, 1080), while the viewport has an origin of (0, 0) and a size of (1280, 720).

 

This method may have the following problems:

1. The performance cost of the Blit. This operation cannot be done in real time; generally, the scale is set once after the game is initialized or before entering a certain scene. It is a low-frequency operation and cannot achieve real-time adjustment.

 

2. It may be limited by the rendering pipeline:

  • If it is the default (built-in) rendering pipeline, the timing of the final Blit must be chosen carefully: there is usually a post-processing stage in the game, and we should make good use of that stage to do the Blit along the way. This makes it possible to use a CommandBuffer to insert the viewport modification and post-processing operations into the camera's different rendering stages (a sketch follows this list).
  • If it is an SRP pipeline (Unity 2018 and later), we have our own hook for performing the Blit. Of course, this still cannot be a high-frequency operation.
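Below is a minimal sketch of this scheme for the built-in pipeline, assuming a separate full-resolution UI camera drawn on top; the class and field names (ViewportScaledBlit, sceneCamera, scale) are illustrative and not part of Unity's API. The 3D camera renders into a sub-viewport of a full-size off-screen RT, and a CommandBuffer attached to the UI camera stretches that sub-region onto the screen before the UI is drawn:

using UnityEngine;
using UnityEngine.Rendering;

// Attach to the UI camera (higher depth, clear flags "Depth only" or "Don't Clear").
[RequireComponent(typeof(Camera))]
public class ViewportScaledBlit : MonoBehaviour
{
    public Camera sceneCamera;                      // the 3D camera
    [Range(0.25f, 1f)] public float scale = 0.667f;

    Camera uiCamera;
    RenderTexture sceneRT;
    CommandBuffer blitCmd;

    void OnEnable()
    {
        uiCamera = GetComponent<Camera>();

        // Full-size off-screen target, e.g. (1920, 1080); the 3D camera only fills
        // its lower-left scale*scale region, e.g. (1280, 720).
        sceneRT = new RenderTexture(Screen.width, Screen.height, 24);
        sceneCamera.targetTexture = sceneRT;
        sceneCamera.rect = new Rect(0f, 0f, scale, scale);

        // Before the UI camera renders, stretch the rendered sub-region to the screen.
        blitCmd = new CommandBuffer { name = "Scaled Scene Blit" };
        blitCmd.Blit(sceneRT, BuiltinRenderTextureType.CameraTarget,
                     new Vector2(scale, scale), Vector2.zero);
        uiCamera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, blitCmd);
    }

    void OnDisable()
    {
        uiCamera.RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, blitCmd);
        sceneCamera.targetTexture = null;
        sceneCamera.rect = new Rect(0f, 0f, 1f, 1f);
        if (sceneRT != null) { sceneRT.Release(); sceneRT = null; }
    }
}

Note that the Blit costs a full-screen pass every frame, and changing scale requires rebuilding the command buffer, which is why this remains a low-frequency adjustment in practice.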

Manual

Referring to the official documentation of Unity, let’s first look at the process of using dynamic resolution.

 

First of all, we need to confirm the prerequisite for enabling dynamic resolution: being GPU bound. So, by obtaining the GPU's running time for each frame in real time, we determine:

 

  • whether excessive GPU pressure is what is causing the game to drop frames;
  • the scale factor to apply to the render target.

 

The render target is then dynamically scaled according to this factor. In the process, it is necessary to ensure that GPU memory is not reallocated when the render target resolution is modified; otherwise it would be no better than Screen.SetResolution (which causes the screen to flicker).

 

1. Tick “Allow Dynamic Resolution” on the camera that needs to be dynamically scaled, as shown in the figure:

 

2. Check “Enable Frame Timing Stats” in PlayerSettings:

 

3. Obtain the CPU time and GPU time through the two interfaces FrameTimingManager.CaptureFrameTimings() and FrameTimingManager.GetLatestTimings(), and determine the scaling factor from them automatically.
4. Finally, call ScalableBufferManager.ResizeBuffers(m_widthScale, m_heightScale) to apply the scaling; a condensed sketch of these two steps follows below.
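Below is a condensed sketch of steps 3 and 4 (see the official DynamicResolutionSample in reference [7] for a complete version); the frame-time budget, thresholds, and scale step are illustrative values rather than recommendations:

using UnityEngine;

public class DynamicResolutionController : MonoBehaviour
{
    const float TargetFrameTimeMs = 33.3f;   // e.g. a 30 FPS budget (assumption)
    const float ScaleStep = 0.05f;

    FrameTiming[] frameTimings = new FrameTiming[1];
    float widthScale = 1.0f;
    float heightScale = 1.0f;

    void Update()
    {
        // Requires "Enable Frame Timing Stats" in Player Settings and a camera
        // with "Allow Dynamic Resolution" ticked.
        FrameTimingManager.CaptureFrameTimings();
        if (FrameTimingManager.GetLatestTimings(1, frameTimings) == 0)
            return;                          // timing data not available yet

        double gpuTimeMs = frameTimings[0].gpuFrameTime;

        if (gpuTimeMs > TargetFrameTimeMs)
        {
            // GPU over budget: scale the dynamically scalable render targets down.
            widthScale  = Mathf.Max(0.5f, widthScale  - ScaleStep);
            heightScale = Mathf.Max(0.5f, heightScale - ScaleStep);
            ScalableBufferManager.ResizeBuffers(widthScale, heightScale);
        }
        else if (gpuTimeMs < TargetFrameTimeMs * 0.8f && widthScale < 1.0f)
        {
            // Comfortably under budget: scale back up toward native resolution.
            widthScale  = Mathf.Min(1.0f, widthScale  + ScaleStep);
            heightScale = Mathf.Min(1.0f, heightScale + ScaleStep);
            ScalableBufferManager.ResizeBuffers(widthScale, heightScale);
        }
    }
}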

Platform Support

The Unity official documentation states the following:

 

Let’s then explore the implementation principle of dynamic resolution.


Principle Exploration

Following the manual in the official documentation, let’s dig into the Unity source code and see why OpenGL ES gets such unfavorable treatment. Since this involves the source code, we will jump straight to the conclusions.

 

  • Scalable RTs are platform-dependent; OpenGL ES cannot create a scalable RT (we will discuss the reason later)
  • The principle behind dynamic resolution is Vulkan’s memory aliasing feature

 

Memory Aliasing

Reference [1] is the Vulkan specification’s description of memory aliasing.

 

Modern graphics APIs such as DirectX 12 and Vulkan allow users to control memory placement, putting allocated GPU resources into manually created heaps. This makes it possible to create textures and buffers whose memory may partially or even completely overlap. This is why OpenGL ES does not support dynamic resolution: it does not expose a lower-level API that would let us manage memory more efficiently.

 

Take a typical frame in a game: rasterize some geometry, perform shading, then run a series of post-processing effects. The output of each stage is written to a texture or buffer that later stages in the frame will consume. However, a resource produced by one stage may only be used by a few other stages; in post-processing, for example, the output of Bloom is read only by the Tone mapping stage that follows and nowhere else in the frame. So a resource may have a short effective lifetime, yet it is typically pre-allocated and occupies its memory for the entire frame.

 

The object pool exists to solve the problem of frequently allocating and releasing memory; Unity’s RenderTexture.GetTemporary maintains a RenderTexture pool internally. However, this approach is mainly applicable to the post-processing stage, because resources of different formats and sizes cannot be reused, whereas post-processing is usually a full-screen pass in which the textures read and written share the same attributes. Some simple post-processing effects can be implemented just by reusing two RTs alternately (I will come back to this in the URP chapter later), as sketched below.
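As a side note, here is a minimal sketch of such a two-RT ping-pong pass built on Unity's temporary RT pool; the blur material and the iteration count are illustrative assumptions:

using UnityEngine;

public class PingPongBlur : MonoBehaviour
{
    public Material blurMaterial;   // assumed to be a simple single-pass blur shader
    public int iterations = 4;

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // Both temporaries come from the internal RenderTexture pool, so repeated
        // requests with the same size/format reuse memory instead of reallocating it.
        RenderTexture rtA = RenderTexture.GetTemporary(src.width, src.height, 0, src.format);
        RenderTexture rtB = RenderTexture.GetTemporary(src.width, src.height, 0, src.format);

        Graphics.Blit(src, rtA, blurMaterial);
        for (int i = 1; i < iterations; i++)
        {
            Graphics.Blit(rtA, rtB, blurMaterial);          // read A, write B
            RenderTexture tmp = rtA; rtA = rtB; rtB = tmp;  // swap roles for the next pass
        }
        Graphics.Blit(rtA, dst);

        RenderTexture.ReleaseTemporary(rtA);
        RenderTexture.ReleaseTemporary(rtB);
    }
}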

 

The object pool is essentially a higher-level form of memory aliasing in which developers do not need to pay attention to memory management, but modern graphics APIs (DX12 and Vulkan) provide memory management interfaces that implement memory aliasing at the lowest level. Memory aliasing means that different variables point to the same address, that is, multiple resources occupy the same memory region at the same time. If there are many large resources whose lifetimes do not overlap, they can all be allocated in the same memory. Compared with the object pool, memory aliasing can reduce memory usage even further, because at the bottom layer everything is just a range of bytes, so the type, format, and size of the resources no longer matter. The idea is illustrated in the figure below:

 

Summary

From the analysis above, we now roughly understand how dynamic resolution is implemented in Unity: it uses the memory management interface provided by Vulkan to reuse memory efficiently at the lowest level. In this way, we can adjust the resolution efficiently and in real time during the game, with essentially no loss in performance.


URP Implementation

Considering that LWRP, the predecessor of URP, is still used by some game teams, let’s take a brief look at LWRP first.

 

LWRP

Simply put, it is achieved by recreating the camera’s render target. During setup, the function RequiresIntermediateColorTexture is entered first to determine whether a new RT needs to be created; inside it there is a variable isScaledRender. If scaling is required, the pass that creates the RT is enqueued:

 

m_CreateLightweightRenderTexturesPass

public void Setup(ScriptableRenderer renderer, ref RenderingData renderingData)
{
    ...

    bool requiresRenderToTexture = ScriptableRenderer.RequiresIntermediateColorTexture(ref renderingData.cameraData, baseDescriptor);

    RenderTargetHandle colorHandle = RenderTargetHandle.CameraTarget;
    RenderTargetHandle depthHandle = RenderTargetHandle.CameraTarget;

    if (requiresRenderToTexture)
    {
          colorHandle = ColorAttachment;
          depthHandle = DepthAttachment;

          var sampleCount = (SampleCount)renderingData.cameraData.msaaSamples;
          m_CreateLightweightRenderTexturesPass.Setup(baseDescriptor, colorHandle, depthHandle, sampleCount);
          renderer.EnqueuePass(m_CreateLightweightRenderTexturesPass);
    }

    ...
}

public static bool RequiresIntermediateColorTexture(ref CameraData cameraData, RenderTextureDescriptor baseDescriptor)
{
     if (cameraData.isOffscreenRender)
          return false;

     bool isScaledRender = !Mathf.Approximately(cameraData.renderScale, 1.0f);
     bool isTargetTexture2DArray = baseDescriptor.dimension == TextureDimension.Tex2DArray;
     bool noAutoResolveMsaa = cameraData.msaaSamples > 1 && !SystemInfo.supportsMultisampleAutoResolve;
     return noAutoResolveMsaa || cameraData.isSceneViewCamera || isScaledRender || cameraData.isHdrEnabled ||
           cameraData.postProcessEnabled || cameraData.requiresOpaqueTexture || isTargetTexture2DArray || !cameraData.isDefaultViewport;
}

 

URP

Starting from Unity 2019.3.0a, LWRP was officially upgraded to URP. URP has two main folders: core, the basic core library that is extracted separately and shared with HDRP, and universal, which is URP itself.

 

After looking through each version of the URP code, it turns out that it was not until Core RP library version 10.2 (corresponding to Unity 2020.2.0b) that Unity began to pay attention to (and provide) render target management.

 

From the “Principle Exploration” chapter above, we know that render target management is an important part of any rendering pipeline; we also know that a RenderTexture can only reuse memory when the new render texture has exactly the same properties and resolution.

 

To solve these render texture memory allocation problems, Unity’s SRP (URP & HDRP) introduces the RTHandle system. This system is an abstraction layer on top of RenderTexture that manages render textures better. For details, please refer to reference [8]; here I will only introduce it briefly.

 

 

As shown in the screenshot above, SRP implements two kinds of dynamic resolution, “hardware” and “software”. Hardware dynamic resolution is implemented with memory aliasing, while software dynamic resolution scales the RT in software to fit the current viewport. When hardware dynamic resolution is not supported on the current platform, the RTHandle system automatically falls back to software dynamic resolution. On top of that, the latest URP versions also implement double buffering based on RTHandle; if you are interested, look at RenderTargetBufferSystem in the URP source code.
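To make this a bit more concrete, here is a rough sketch of allocating and resizing an RTHandle, loosely following the Core RP 10.x documentation (reference [8]); exact signatures differ slightly between package versions, so treat it as an illustration rather than drop-in code:

using UnityEngine;
using UnityEngine.Experimental.Rendering;   // GraphicsFormat
using UnityEngine.Rendering;                // RTHandles, RTHandle, MSAASamples

public static class RTHandleUsageSketch
{
    static RTHandle s_HalfResColor;

    public static void Initialize()
    {
        // One-time setup, typically when the render pipeline is created
        // (width, height, scaled-RT MSAA support, scaled-RT MSAA samples).
        RTHandles.Initialize(Screen.width, Screen.height, false, MSAASamples.None);

        // A color target at half the reference resolution; useDynamicScale lets
        // hardware dynamic resolution (where supported) scale it without reallocation.
        s_HalfResColor = RTHandles.Alloc(new Vector2(0.5f, 0.5f),
                                         colorFormat: GraphicsFormat.R8G8B8A8_UNorm,
                                         useDynamicScale: true,
                                         name: "_HalfResColor");
    }

    public static void SetCameraSize(int cameraWidth, int cameraHeight)
    {
        // Declared once per camera per frame; the system only ever grows the
        // underlying RenderTextures, so rendering at a smaller size never reallocates.
        RTHandles.SetReferenceSize(cameraWidth, cameraHeight, MSAASamples.None);
    }

    public static void Cleanup()
    {
        RTHandles.Release(s_HalfResColor);
    }
}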


Implementation

From the introduction above, we should now have a deeper understanding of “dynamic resolution”. When we say “dynamic resolution”, we mean real dynamic resolution implemented at the hardware level, one that makes full use of the memory aliasing offered by modern graphics APIs: to keep the FPS at a certain level, when GPU-induced frame drops occur, it can dynamically adjust the render target resolution without reallocating GPU memory.

 

However, considering device compatibility, the platform most of our games support is OpenGL ES rather than Vulkan, so, unfortunately, dynamic resolution cannot be used. As a fallback, here is a brief description of the solutions we can adopt for the different rendering pipelines:

  • Default rendering pipeline (Unity 2017 and earlier): use a CommandBuffer to dynamically adjust the viewport at the right time, but this cannot be done frequently.
  • LWRP (Unity 2018 ~ Unity 2019.3.0a) / URP (Unity 2019.3.0a12+ ~ Unity 2020.2.0b8+): as the predecessor of URP, LWRP still has many functions being perfected. It can already render 3D and UI separately, which is more flexible than the default rendering pipeline, but it still does not provide better RT management, so teams need to refer to URP and customize an efficient RT management system themselves.
  • URP (Unity 2020.2.0b12+)

 

As mentioned above, Unity did not provide a relatively complete RT management system until version 10.2 of SRP’s Core RP library, and you can refer to it as appropriate.


Reference

[1] https://www.khronos.org/registry/vulkan/specs/1.0/html/chap12.html#resources-memory-aliasing

[2] An implementation of memory aliasing (内存混叠的一种实现)

[3] https://developer.nvidia.com/vulkan-memory-management

[4] https://docs.unrealengine.com/4.26/zh-CN/RenderingAndGraphics/DynamicResolution/

[5] https://www.intel.com/content/www/us/en/developer/articles/technical/dynamic-resolution-rendering-article.html

[6] https://docs.unity3d.com/Manual/DynamicResolution.html

[7] https://github.com/Unity-Technologies/DynamicResolutionSample

[8] https://docs.unity3d.com/Packages/com.unity.render-pipelines.core@10.1/manual/index.html

[9] How to lower the resolution of only the 3D camera without lowering the UI camera (如何只降3D相机不降UI相机的分辨率)


Author: Qiang, Lv

Translation: UWA Team

Thanks to the author for your contribution. Welcome to repost and share, but please do not reprint without the authorization of the author.

UWA Website: https://en.uwa4d.com

UWA Blogs: https://blog.en.uwa4d.com

UWA Product: https://en.uwa4d.com/feature/got 
