
This time we will share several technical topics related to program development. The estimated reading time is 15 minutes. If you have any unique insights or discoveries, please feel free to contact us or join the discussion.



Q1: Our packaging method for Shaders is generally to pack all the Shaders in the project into one AssetBundle, which avoids Shader redundancy and also allows Shaders to be hot-updated. In Unity 5.5.4 there is no problem: all Shaders render normally, work with Instancing, and can be hot-updated.


However, in Unity 2018.3, if a material and the Shader it uses are not in the same AssetBundle, the Shader still renders normally, but GPU Instancing with that material does not take effect. In Unity 2018.4 the situation is the same.


The test project can be obtained here, and the issue can be reproduced on the PC platform with Unity. Can you confirm whether this is a Unity bug, or whether there is something wrong with the way the Shader is packaged and used?

Since Unity added the Shader Stripping feature, there have been many problems with missing Keywords.

It can be seen that the Keyword INSTANCING_ON is present in a Shader that successfully performs GPU Instancing, but when the Shader is packaged and then loaded at runtime, this Keyword is lost.

So, as one of the solutions, you can disable the Stripping feature in Project Settings by choosing Keep All:

Solution two: include a scene that uses the GPU Instancing feature in the package, so that the keywords will not be stripped. Of course, the second solution is not very practical; it is recommended to keep such a Material and package it together with the Shader.
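As a quick diagnostic, you can log a material's instancing state and shader keywords after loading it from an AssetBundle, to confirm whether the instancing-related variants survived stripping. This is a minimal sketch; the bundle path and asset name are placeholder assumptions:

```csharp
using UnityEngine;

public static class InstancingCheck
{
    // Logs whether a material loaded from an AssetBundle still has
    // GPU Instancing enabled and which keywords it carries.
    public static void Check(string bundlePath, string materialName)
    {
        AssetBundle bundle = AssetBundle.LoadFromFile(bundlePath);
        Material mat = bundle.LoadAsset<Material>(materialName);

        // enableInstancing must be ticked on the material itself.
        Debug.Log("enableInstancing: " + mat.enableInstancing);

        // If the instancing variant was stripped from the build, draws
        // fall back to non-instanced rendering even with this flag set.
        Debug.Log("keywords: " + string.Join(" ", mat.shaderKeywords));
    }
}
```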

This answer is provided by UWA


Q2: Shader loading and parsing in our project use the ShaderVariantCollection mechanism, but the following problems were found during loading.


When the Shader Bundle is loaded, Shader.Parse is executed for all the Shaders.

Then they are parsed again when WarmUp is executed.

Previously, I thought that Shader loading and parsing were separated in versions after Unity 5.X. Now I am confused, and I hope you can clarify. (Unity version: 2018.2.19f1)

Here is my personal understanding:

1. Pack the Shader and the ShaderVariantCollection in the same Bundle. The ShaderVariantCollection holds a dependency on every Shader edited into it. When the game starts, load the Bundle where the Shader is located, then load the ShaderVariantCollection from it; this automatically triggers the Parse of the Shaders it depends on, and the corresponding variants are generated and submitted to the GPU, but the number of variants at this point is not complete;


2. When ShaderVariantCollection.WarmUp is executed, it triggers the generation of the remaining variants and submits them to the GPU;


3. To summarize the two steps: the first step, Shader.Parse, generates some of the variant Shaders; in the second step, WarmUp generates the remaining variants. Here is a screenshot of my test:

4. Comparing the number of calls, it can be seen that the first Shader.Parse did not generate all the variants. I also tested loading the bundle containing the Shaders and then loading all its assets; this triggers the Parse of all the Shaders, regardless of whether they are associated with the ShaderVariantCollection:

It can be seen that even after all the Shaders are compiled, WarmUp still performs the generation of variant Shaders. So the first loading step does not complete the generation of all the variant Shaders.
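The two-step flow described above can be sketched as follows; the bundle path and collection name are placeholder assumptions:

```csharp
using UnityEngine;

public static class ShaderWarmup
{
    // Step 1: loading the bundle and its ShaderVariantCollection
    // triggers Shader.Parse for the Shaders it depends on.
    // Step 2: WarmUp generates and submits the remaining variants.
    public static void LoadAndWarmUp(string bundlePath, string collectionName)
    {
        AssetBundle bundle = AssetBundle.LoadFromFile(bundlePath);
        var svc = bundle.LoadAsset<ShaderVariantCollection>(collectionName);

        if (!svc.isWarmedUp)
        {
            // Synchronously creates the GPU programs for every
            // variant recorded in the collection.
            svc.WarmUp();
        }
    }
}
```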

Thanks to zblade for providing the answer above.

Performance Testing

Q3: At present, when doing performance testing, the project divides test devices according to their SoC (System on a Chip) model, and at the same time locks a fixed resolution (for example, 1024*768). I would like to ask whether this testing method is accurate.


For example, suppose two phones have the same configuration, but one has a 4K screen and the other a 1080P screen, and the resolution is locked to 1024*768. Can their performance be considered the same?

This method of shrinking the screen has been used in many projects. With consistent rendering content, whether the device has a 4K screen or a 1K screen, the GPU computation pressure is the same; the only difference is the size of the final upscaled RenderTexture, and the overhead of that part is very low. However, because GPU capabilities differ between devices (generally, devices with 4K screens also have very powerful GPUs), the time consumed on the GPU chip will certainly differ.
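For reference, locking the rendering resolution in a test build is usually done with Screen.SetResolution; this is a minimal sketch:

```csharp
using UnityEngine;

public class LockResolution : MonoBehaviour
{
    void Awake()
    {
        // Render at a fixed 1024x768 regardless of the native screen
        // size; the output is then scaled up to fill the screen.
        Screen.SetResolution(1024, 768, true);
    }
}
```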


For Overdraw, the result also differs depending on the screen resolution. In the UWA performance report, you will see that the real Overdraw shown covers just a portion of the screen.

This answer is provided by UWA


Q4: An error occurs when building a dynamic atlas:

Unsupported texture format-needs to be ARGB32, RGBA32, RGB24, Alpha8 or one of float formats.


I originally wanted to use a dynamic atlas to optimize DrawCalls and memory, but it seems that a Texture2D can only be created in the formats above. If I use other formats with Texture2D.SetPixels, the error above is reported. But the formats mentioned above use too much memory. Is there a way to create textures in other formats?



// Creating a compressed (DXT5) texture and writing pixels to it:
// SetPixels only supports uncompressed formats, hence the error above.
Texture2D texture = new Texture2D(textureSize, textureSize, TextureFormat.DXT5, false);
Color[] fillColor = texture.GetPixels();
for (int i = 0; i < fillColor.Length; ++i)
    fillColor[i] = Color.clear;
texture.SetPixels(fillColor); // fails on compressed formats such as DXT5

First, we need to distinguish one question: is the atlas generated offline or at Runtime? There are many ways to generate it offline, so here we assume it is generated at Runtime. For SetPixels, the Unity documentation clearly states: "This function works only on RGBA32, ARGB32, RGB24 and Alpha8 texture formats. For other formats SetPixels is ignored. The texture also has to have the Is Readable flag set in the import settings."


Besides, the new Texture2D you mentioned uses the DXT5 format, which is generally only used on PC.


One way is to generate with RGBA32, then compress and import it. But as far as I know, Unity does not support compressing images at runtime. Therefore, I personally recommend another method: use a RenderTexture and copy pixels on the GPU. This is much faster than SetPixels on the CPU. In terms of format, RenderTexture offers a variety of formats to choose from, including RGB565, ARGB4444, etc., and their memory footprint is smaller than that of RGBA32.
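A minimal sketch of the RenderTexture approach, assuming the atlas size and the source texture are supplied by the caller; the names here are illustrative, not the project's actual API:

```csharp
using UnityEngine;

public static class DynamicAtlas
{
    // Creates the atlas target. ARGB4444 halves the memory of
    // ARGB32 at the cost of color precision.
    public static RenderTexture CreateAtlas(int size)
    {
        return new RenderTexture(size, size, 0, RenderTextureFormat.ARGB4444);
    }

    // Copies a source texture into a sub-rectangle of the atlas on
    // the GPU, avoiding the CPU-side SetPixels path entirely.
    public static void BlitInto(Texture src, RenderTexture atlas, Rect region)
    {
        RenderTexture prev = RenderTexture.active;
        RenderTexture.active = atlas;

        GL.PushMatrix();
        // Set up pixel coordinates with the origin at the top left.
        GL.LoadPixelMatrix(0, atlas.width, atlas.height, 0);
        Graphics.DrawTexture(region, src);
        GL.PopMatrix();

        RenderTexture.active = prev;
    }
}
```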

Thanks to Wang Xiaoyu for providing the answer above.


Q5: When dealing with the DrawCalls of nodes on the map, I find that two objects using the same atlas and the same Shader still cannot be batched; "Objects have different materials" is prompted.


Then, through a memory review, I found that there are many references, and each entry in the reference list is the specific referenced resource.

At present, the reason has been found: built-in shaders are used in different prefabs, so a separate copy of the shader information exists in each of them, which leads to different materials. To solve this problem, these prefabs should all refer to the same resource. UWA has a solution; you can refer to this article: Unity 5.x AssetBundle Zero Redundancy Solution


But there is still a question. Since the resource refers to built-in Shaders and Textures, is it necessary to export the built-in files, reset the references, and pack the exported files into the AssetBundle? Why should they be placed in the Editor directory; don't they need to be packed into an AssetBundle? And since the references have already been set, why do we have to restore them at the end?

Thanks to halm for providing the answer above.

The restoration method was the strategy of the original version and has been abandoned. It is recommended to put the extracted resources directly in the project; newly created objects will then directly use the extracted Shaders and materials. Keep the name of each extracted resource the same as the built-in resource, and newly created resources will give priority to the one under Assets.
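One way to apply this in the Editor is to remap materials that still reference built-in shaders to the same-named copies under Assets, so all prefabs share one shader asset. This is a hedged sketch, assuming the extracted shaders already exist in the project with matching names:

```csharp
using UnityEditor;
using UnityEngine;

public static class BuiltinShaderRemap
{
    // Reassigns every material's shader via Shader.Find, which gives
    // priority to a same-named shader under Assets over the built-in
    // copy, so duplicated built-in shader data is no longer referenced.
    [MenuItem("Tools/Remap Built-in Shaders")]
    public static void Remap()
    {
        foreach (string guid in AssetDatabase.FindAssets("t:Material"))
        {
            string path = AssetDatabase.GUIDToAssetPath(guid);
            Material mat = AssetDatabase.LoadAssetAtPath<Material>(path);
            if (mat == null || mat.shader == null) continue;

            Shader projectShader = Shader.Find(mat.shader.name);
            if (projectShader != null && projectShader != mat.shader)
            {
                mat.shader = projectShader;
                EditorUtility.SetDirty(mat);
            }
        }
        AssetDatabase.SaveAssets();
    }
}
```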

Thanks to Zhang Di for providing the answer above.

This is the 72nd UWA Technology Sharing for Unity Development. As we all know, life has a limit but knowledge has none. These problems are only the tip of the iceberg, and there are many more technical problems during program development that deserve discussion. Welcome to join the UWA Q&A community; let's explore and share knowledge together!



