
This time we will share several technical topics related to program development. Estimated reading time: 15 minutes. If you have any unique insights or discoveries, please feel free to contact us or join the discussion.



Q1: During pre-research on post-processing schemes, I found that the CommandBuffer approach used in PostProcessV2 seems to require more GPU overhead than the Graphics interface used in the V1 scheme.


The test project is here: CBTest.unitypackage


Test results on real devices:


On Redmi 2A (a low-end device), post-processing with Graphics.Blit has almost no effect on the frame rate, but when the CommandBuffer is enabled, the frame rate immediately drops from 60 to 40.


According to the Profiler, the cost of CommandBuffer.Blit is lower than that of Graphics.Blit (the former is 0.0x ms, the latter 1.x ms), but Device.Present increases significantly after the CommandBuffer is inserted.


After switching to a high-end device and running the post-processing 100 times, the conclusion is the same. The two schemes use the same Shader, which performs only a simple color calculation, so the influence of the Shader can be excluded.


I want to ask: does CommandBuffer consume more GPU performance? If so, where does this overhead come from? And if the overhead is high, what advantages does CommandBuffer offer besides flexibility?
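For reference, a minimal sketch of the two schemes being compared. This is an assumed setup, not the questioner's actual project; the material name BriMaterial is borrowed from the code shown later in this question.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: V1-style Graphics.Blit vs. V2-style CommandBuffer post-processing.
[RequireComponent(typeof(Camera))]
public class BlitComparison : MonoBehaviour
{
    public Material BriMaterial;   // simple color-calculation shader
    public bool useCommandBuffer;  // toggle between the two schemes

    CommandBuffer cmdBuffer;
    int mainTexId;

    void OnEnable()
    {
        mainTexId = Shader.PropertyToID("_MainTex");
        if (!useCommandBuffer) return;

        cmdBuffer = new CommandBuffer { name = "PostFX" };
        // V2-style: copy the current target to a temporary RT,
        // then blit back through the material.
        cmdBuffer.GetTemporaryRT(mainTexId, -1, -1);
        cmdBuffer.Blit(BuiltinRenderTextureType.CurrentActive, mainTexId);
        cmdBuffer.Blit(mainTexId, BuiltinRenderTextureType.CameraTarget, BriMaterial);
        cmdBuffer.ReleaseTemporaryRT(mainTexId);
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeImageEffects, cmdBuffer);
    }

    void OnDisable()
    {
        if (cmdBuffer != null)
            GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.BeforeImageEffects, cmdBuffer);
    }

    // V1-style: Graphics.Blit in OnRenderImage.
    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        if (!useCommandBuffer)
            Graphics.Blit(src, dst, BriMaterial);
        else
            Graphics.Blit(src, dst);  // pass-through when the CommandBuffer path is active
    }
}
```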

I think this is because you use cmdBuffer.Blit(BuiltinRenderTextureType.CurrentActive, mainTexId); to copy the screen back, while the official version references it directly: cmdBuffer.SetGlobalTexture(mainTexId, BuiltinRenderTextureType.CurrentActive);
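The difference the answer points at, shown side by side (an illustrative fragment, using the identifiers from the question):

```csharp
// Scheme A (the question's version): an extra full-screen copy of the
// active target, which costs bandwidth on tile-based mobile GPUs.
cmdBuffer.Blit(BuiltinRenderTextureType.CurrentActive, mainTexId);

// Scheme B (the official version's approach): bind the active target
// directly as a global shader texture -- no copy is performed.
cmdBuffer.SetGlobalTexture(mainTexId, BuiltinRenderTextureType.CurrentActive);
```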

Another question here: I previously tried using SetGlobalTexture:

cmdBuffer.SetGlobalTexture(mainTexId, BuiltinRenderTextureType.CameraTarget); 
cmdBuffer.SetRenderTarget(BuiltinRenderTextureType.CameraTarget, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store); 
cmdBuffer.DrawMesh(FullscreenTriangle, Matrix4x4.identity, BriMaterial);

From the Frame Debugger, it can be seen that what is passed to the Shader is the image of the triangle drawn by DrawMesh itself. It is speculated that there may be an ordering problem between fetching the image from the BackBuffer and writing to it. PostProcessV2 also uses DrawMesh, but the source it passes in is a generated RenderTexture, not BuiltinRenderTextureType.CurrentActive.

I didn’t test the specific code, but I can offer two ideas. One is to use Camera.renderTarget.colorBuffer; it may not work with a custom RenderTexture, but the default camera (Unity 2019) can get it. The other is to borrow the idea from TAA and keep a copy of the previous frame’s data. You can try both.
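A minimal sketch of the second idea, keeping a copy of the previous frame TAA-style. All names here (the component, the material, the `_PrevFrameTex` property) are assumptions for illustration, not code from the original question.

```csharp
using UnityEngine;

// Sketch: hold the previous frame's image and feed it to the effect shader,
// instead of reading the current BackBuffer mid-frame.
[RequireComponent(typeof(Camera))]
public class PreviousFrameHolder : MonoBehaviour
{
    public Material effectMaterial;   // hypothetical post-processing material
    RenderTexture previousFrame;

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // (Re)allocate the history buffer if the resolution changed.
        if (previousFrame == null ||
            previousFrame.width != src.width || previousFrame.height != src.height)
        {
            if (previousFrame != null) previousFrame.Release();
            previousFrame = new RenderTexture(src.width, src.height, 0, src.format);
        }

        // Feed last frame's image to the shader, then refresh the copy.
        effectMaterial.SetTexture("_PrevFrameTex", previousFrame);
        Graphics.Blit(src, dst, effectMaterial);
        Graphics.Blit(src, previousFrame);
    }

    void OnDisable()
    {
        if (previousFrame != null) previousFrame.Release();
    }
}
```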

Thanks to Zhang Yanfeng for providing the answer above.


Q2: The core function of the Google Play OBB Downloader plug-in on the Asset Store, recommended in Unity’s official tutorial, is WWW.LoadFromCacheOrDownload.


After Unity is upgraded to the 2017 version, this function fails when loading the OBB package and prompts that decompression has failed.


The plug-in has not been updated for four years, and I have found that other people have met this problem as well. How can it be solved?

There is no problem when Unity directly packages APK + OBB. If it is a Unity-exported project + OBB, some projects may not be able to use the OBB because the OBB check fails. There is a meta-data entry in the AndroidManifest.xml of the Unity-exported project.

Its value should correspond to the verification file generated after the OBB is decompressed.

Thanks to Fan Shiqing for providing the answer above.

On some models, errors will occur if the OBB is used without read/write permission for the SD card. The game starts normally after restarting the phone.

Thanks to deviljz for providing the answer above.


Q3: A question about the optimization priority of LOD. For example, is this priority order reasonable: scenes, characters, monsters, NPCs?

From the perspective of rendered triangles, the scene usually accounts for a relatively high triangle count, so it should get LOD first; the characters can be considered next. From the perspective of DrawCall and Overdraw, it is also recommended to bring particle systems into the LOD scheme: grade effects from high to low quality by controlling the number of particle systems, the number of emitted particles, and the size of the particles activated in each effect.
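The particle-grading idea above could be sketched as follows. The tier thresholds, the scaling factors, and the component name are all assumptions for illustration.

```csharp
using UnityEngine;

// Sketch: grade particle effects by quality level, scaling down the
// particle budget and size on lower tiers, as the answer suggests.
public class ParticleLOD : MonoBehaviour
{
    void Start()
    {
        // Example mapping: 0 = low, 1 = medium, 2 = high.
        int tier = Mathf.Clamp(QualitySettings.GetQualityLevel(), 0, 2);
        float scale = new float[] { 0.25f, 0.5f, 1f }[tier];

        foreach (var ps in GetComponentsInChildren<ParticleSystem>())
        {
            var main = ps.main;
            // Reduce the emitted-particle budget on lower tiers.
            main.maxParticles = Mathf.CeilToInt(main.maxParticles * scale);
            // Slightly shrink particle size as quality drops.
            main.startSizeMultiplier *= Mathf.Lerp(0.8f, 1f, scale);

            // On the lowest tier, disable secondary particle systems entirely.
            if (tier == 0 && ps.transform != transform)
                ps.gameObject.SetActive(false);
        }
    }
}
```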

This answer is provided by UWA.


Q4: How can scene static batching coexist with LOD? Should I disable static batching and use Mesh Baker to manually merge the meshes and set up LOD?

Unity’s own LODGroup and Static Batching can be used at the same time, but there may be some problems to deal with after enabling Lightmaps. You can refer to the content discussed here:

This answer is provided by UWA.


Q5: After Mipmaps are enabled on a Texture, it becomes blurred. How can the Texture’s Mipmap level be controlled for a single Renderer?

It is possible to control the Mipmap selection for a single texture. When the GPU samples the texture, the Mipmap level selection can be adjusted via Mipmap Bias. The specific meaning of this parameter and the restrictions on its use can be seen here:


But note that modifying this value may increase GPU performance overhead.
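A minimal sketch of applying the bias from script via Texture.mipMapBias. The component name and bias value are assumptions; note that the bias is a property of the texture itself, so it affects every material that samples that texture, not strictly one Renderer.

```csharp
using UnityEngine;

// Sketch: shift which Mipmap level the GPU samples for a texture.
// A negative bias selects sharper (higher-resolution) levels.
public class MipBiasExample : MonoBehaviour
{
    [Range(-2f, 2f)] public float bias = -0.5f;

    void Start()
    {
        var tex = GetComponent<Renderer>().material.mainTexture;
        if (tex != null)
            tex.mipMapBias = bias;  // sharper, but may raise GPU sampling cost
    }
}
```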

This answer is provided by UWA.

This is the 62nd UWA Technology Sharing for Unity Development. As we all know, life has a limit but knowledge has none. These problems are only the tip of the iceberg; there are many more technical problems during program development that deserve discussion. Welcome to join the UWA Q&A community, and let’s explore and share knowledge together!



