
1) Doubts about _CameraDepthTexture

2) The effect of the Alpha channel on image size

3) How URP achieves the GrabPass effect

4) How to obtain the Instance ID of an Asset that AssetDatabase fails to load

5) How to determine when Bundle files are loaded into memory


Q: A question about _CameraDepthTexture.

When _CameraDepthTexture is enabled, the camera renders an extra pass, using the ShadowCaster pass of every visible object in the scene, to generate the depth texture.

But the depth has already been written into the Depth Buffer wherever ZWrite is enabled on scene objects. Wouldn't reading that Depth Buffer directly be more efficient than the approach above, which nearly doubles the Draw Call count? Or does Unity have other considerations here?

Also, a more practical question: our project needs a depth-based effect for the lake in the middle of the scene. All opaque scene objects share the same Shader, which contains a ShadowCaster pass, but only the few objects standing in the water need to render it. Is there a way to prevent objects not in the water from rendering the ShadowCaster pass without adding a Shader? FYI, we are using the Built-in render pipeline.

A: For the first question, you can refer to the reply from Unity staff in this thread:

Two reasons are mentioned there. The first is that in non-full-screen rendering, the depth of just the camera's rendered region is needed, but the Depth Buffer covers the whole screen. The second is that many platforms do not support reading the Depth Buffer directly.


In addition, while looking into FrameBufferFetch-related questions, I saw an answer in another Unity forum post saying that Unity supports FrameBufferFetch, but does not support reading the Depth Buffer through it.


As for the second question, I can't think of a better solution if no Shader can be added. If one can, make a copy of the original Shader, add a "NeedDepth" tag to its ShadowCaster pass, and replace the material of the underwater objects with this copy. Then make another Shader containing only a ShadowCaster pass with the same "NeedDepth" tag, and use it for the shader replacement.

Add an extra Camera that follows the main camera (or is a child node of it); create an RT and have this Camera render into it with shader replacement in Update. Only the ShadowCaster passes carrying that tag will be rendered, so the RT records the depth of the underwater objects, which you can then sample. The whole process does not seem to add much extra work, so I think it is worth a try (I haven't tested it, but it is theoretically feasible).
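The steps above could be sketched roughly as follows in the Built-in pipeline. This is a minimal, untested sketch: the shader field, the "NeedDepth" replacement tag, and the global texture name `_UnderwaterDepthTex` are all assumptions to be adapted to the project.

```csharp
// Sketch: render only objects whose shader carries a "NeedDepth" tag
// into an offscreen RT via shader replacement (Built-in pipeline).
using UnityEngine;

public class UnderwaterDepthCapture : MonoBehaviour
{
    // A ShadowCaster-only shader tagged "NeedDepth" (assumed to exist).
    public Shader depthReplacementShader;

    Camera depthCam;
    RenderTexture depthRT;

    void Start()
    {
        depthRT = new RenderTexture(Screen.width, Screen.height, 24,
                                    RenderTextureFormat.RFloat);

        var go = new GameObject("UnderwaterDepthCamera");
        go.transform.SetParent(Camera.main.transform, false); // follow main camera
        depthCam = go.AddComponent<Camera>();
        depthCam.CopyFrom(Camera.main);
        depthCam.enabled = false;                  // rendered manually below
        depthCam.clearFlags = CameraClearFlags.SolidColor;
        depthCam.backgroundColor = Color.black;
        depthCam.targetTexture = depthRT;
    }

    void Update()
    {
        // Only materials whose shader has a matching "NeedDepth" tag
        // are drawn by the replacement render.
        depthCam.RenderWithShader(depthReplacementShader, "NeedDepth");
        Shader.SetGlobalTexture("_UnderwaterDepthTex", depthRT);
    }
}
```

The water shader can then sample `_UnderwaterDepthTex` to compute its depth effect.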


Q: UWA's article "Texture Optimization: More Than Just a Picture" describes how the Alpha channel of an image affects memory.

Using the following test resource configuration:

  • Tga_Alpha – with Alpha channel
  • Tga_NoAlpha – without Alpha channel
  • Png_Trans – with transparency
  • Format left as-is in Unity, with all of them set via code to TextureImporterFormat.ASTC_6x6

Test Results:

  • All three images show the same format in Unity, as shown below:
  • All three images show the same memory size
  • Setting the Texture Importer's Alpha Source to None has no effect on the test results

Question: For images with the same format, does containing an Alpha channel affect the memory size?

A1: The optimization suggestion in the original article is to remove the meaningless Alpha channel (defined there as an Alpha channel whose values are all 1). This is indeed helpful for memory optimization.

In your test case, whether the source is png or tga, the engine converts it on import to an internal format (RGBA, ETC, ASTC, etc.). Take ETC2 as an example: if there is no Alpha channel and the compression quality is acceptable, RGB_Compressed_ETC2_4bits can be used. If a meaningless Alpha channel is present, batch import settings will automatically select RGBA_Compressed_ETC2_8bits instead, doubling the memory.

Regarding the ASTC in the question, the Texture import format definition in the Unity editor source code carries this comment: // ASTC uses 128bit block of varying sizes (we use only square blocks). It does not distinguish RGB from RGBA; that is, the size has nothing to do with whether an Alpha channel is included. For an introduction to the ASTC format, I recommend reading the ASTC Format Overview on GitHub:

Regarding the issue of the new version of Unity mentioned in your previous reply, you can look at this issue on the Unity forum:

Some people may think that if all textures in the project use ASTC, removing the Alpha channel is not very meaningful. In fact, it cannot be generalized: although the size is the same, tests show that the presence of an Alpha channel affects compression quality to a certain extent. It therefore still needs to be handled sensibly in the project.
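The "same size regardless of Alpha" behavior follows directly from ASTC's fixed 128-bit block: memory depends only on resolution and block dimensions, never on channel count. A small sketch of the arithmetic (the class and method names are just for illustration):

```csharp
// ASTC stores a fixed 128-bit (16-byte) block per NxN texel tile,
// regardless of whether the content is RGB or RGBA.
static class AstcSize
{
    public static long Bytes(int width, int height, int blockDim)
    {
        long blocksX = (width + blockDim - 1) / blockDim;  // ceil(width / blockDim)
        long blocksY = (height + blockDim - 1) / blockDim; // ceil(height / blockDim)
        return blocksX * blocksY * 16;                     // 16 bytes per block
    }
}
```

For a 1024x1024 texture at ASTC 6x6 this gives 171 x 171 blocks x 16 bytes = 467,856 bytes (about 0.45 MB), with or without Alpha. Compare ETC2, where adding Alpha switches from a 4-bpp to an 8-bpp format and doubles the footprint.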


A2: At runtime, whether a texture has an Alpha channel depends on the imported result, not on the source file. UWA's Static Check likewise detects the Alpha channel of a Texture as the post-import result.

The image source file's format is not used directly by the graphics hardware or by Unity. On import, the picture is processed according to its Import Settings and converted into a hardware-supported format (the engine's internal format); the resource used at runtime is the imported one.

In your case Alpha Source is set to None, so the source file's Alpha channel is not imported; but the compression format is ASTC_6x6, which always contains an Alpha channel. After import, all three resources therefore get a default Alpha channel filled with 1, and the memory occupied is naturally the same.


Q: How does URP achieve the effect of GrabPass? The original GrabPass has been removed from the URP pipeline, and we now need a distortion effect, similar to heat haze. URP can use the camera opaque texture directly for the distortion, but that does not work when translucent objects (such as water or other effects) also need to be distorted.

A: Fully reproducing GrabPass is probably not realistic, but there are alternatives:

GrabPass means drawing object a first, then letting object b's distortion affect a when b is drawn, and letting object c's distortion affect both a and b. Crucially, b's disturbance cannot affect c, because b is drawn before c.

But if you can accept the distortions of b and c affecting a, b, and c all at once, then it is simple: first render the distortion information (a UV offset) into an RT, then apply the offset to the screen in the Uber Shader. The distortion will then affect all objects, opaque and translucent alike.
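The RT half of that idea could look roughly like the pass below, assuming a URP version around 10–12. The pass name, the "DistortionOffset" LightMode tag, and the `_DistortionOffsetTex` global are all assumptions; distortion materials would write their UV offsets in a pass with that tag, and the Uber/post shader would sample the global texture.

```csharp
// Sketch: collect UV-offset "distortion" renderers into an offscreen RT,
// to be sampled later by a full-screen pass. Untested, URP API varies by version.
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

class DistortionOffsetPass : ScriptableRenderPass
{
    static readonly ShaderTagId k_Tag = new ShaderTagId("DistortionOffset");
    static readonly int k_RT = Shader.PropertyToID("_DistortionOffsetTex");

    public DistortionOffsetPass()
    {
        renderPassEvent = RenderPassEvent.BeforeRenderingPostProcessing;
    }

    public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
    {
        var desc = renderingData.cameraData.cameraTargetDescriptor;
        desc.colorFormat = RenderTextureFormat.RGHalf; // signed UV offset per pixel
        desc.depthBufferBits = 0;
        cmd.GetTemporaryRT(k_RT, desc);
        ConfigureTarget(k_RT);
        ConfigureClear(ClearFlag.Color, Color.clear);  // zero offset by default
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        var cmd = CommandBufferPool.Get("DistortionOffset");
        cmd.SetGlobalTexture(k_RT, k_RT);              // expose RT to the Uber Shader
        context.ExecuteCommandBuffer(cmd);
        cmd.Clear();

        var draw = CreateDrawingSettings(k_Tag, ref renderingData,
                                         SortingCriteria.CommonTransparent);
        var filter = new FilteringSettings(RenderQueueRange.transparent);
        context.DrawRenderers(renderingData.cullResults, ref draw, ref filter);
        CommandBufferPool.Release(cmd);
    }

    public override void OnCameraCleanup(CommandBuffer cmd)
    {
        cmd.ReleaseTemporaryRT(k_RT);
    }
}
```

The pass would be enqueued from a ScriptableRendererFeature; the final full-screen pass simply adds the sampled offset to its screen UV before fetching the camera color.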

I don't know how URP implements its version, but this is the approach we used when customizing an SRP.


Q: There is an Asset inheriting from ScriptableObject whose referenced script's GUID has, for whatever reason, become wrong. We can find the Asset's path through an editor script, but how do we select it in the Project view?

From testing, it can be selected via Selection.instanceID, but an Instance ID usually has to be obtained through an Object, and because the script reference is broken, AssetDatabase cannot load this Object. Is there any way or API to get the Instance ID?

A: You can find a workable method in the following open source code:

// assetPath: the path of the Asset whose referenced script cannot be found
HierarchyProperty property = new HierarchyProperty(assetPath);
Selection.activeInstanceID = property.GetInstanceIDIfImported();
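Wrapped up as an editor utility, it might look like this. The menu path and the hard-coded asset path are placeholders for illustration only; the extra PingObject call just highlights the selected entry in the Project view.

```csharp
// Editor-only sketch: select (and ping) an Asset whose script reference
// is broken, so AssetDatabase cannot load it as an Object.
using UnityEditor;

static class SelectBrokenAsset
{
    // Placeholder path; point this at the broken Asset in your project.
    const string assetPath = "Assets/Broken.asset";

    [MenuItem("Tools/Select Broken Asset")]
    static void Select()
    {
        var property = new HierarchyProperty(assetPath);
        Selection.activeInstanceID = property.GetInstanceIDIfImported();
        EditorGUIUtility.PingObject(Selection.activeInstanceID);
    }
}
```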


Q: After downloading all the resources, I call the LoadAssetAsync function in a test script, as shown in the figure below:

Here is the first doubt in AssetBundleProvider:

Since Addressables uses a caching mechanism to store AssetBundles, isn't File.Exists always false here?

The second doubt lies in:

How do I pin down when the Bundle is loaded into memory after the Addressables download completes in the WebRequest? I may have it wrong, but I did not find a call similar to LoadFromFile in the stack trace below.

A: The File.Exists here is not checking the cache but checking whether the file is in StreamingAssets. All downloaded resources are obtained through the cache; if a resource is not in the cache, it is downloaded. The loading part is encapsulated inside UnityWebRequestAssetBundle.
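That encapsulation is why no LoadFromFile shows up in the stack: when a hash is supplied, UnityWebRequestAssetBundle serves the bundle from Unity's cache (or downloads and caches it), and the bundle object comes straight out of the download handler. A minimal sketch of that pattern, with placeholder URL and hash:

```csharp
// Sketch: cached AssetBundle load via UnityWebRequestAssetBundle.
// Passing a Hash128 lets Unity return the cached bundle when present;
// there is no explicit LoadFromFile call on the user side.
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class CachedBundleLoad : MonoBehaviour
{
    IEnumerator Start()
    {
        string url = "https://example.com/bundles/scene.bundle";            // placeholder
        Hash128 hash = Hash128.Parse("0123456789abcdef0123456789abcdef");   // placeholder

        using (var req = UnityWebRequestAssetBundle.GetAssetBundle(url, hash, 0))
        {
            yield return req.SendWebRequest();
            if (req.result == UnityWebRequest.Result.Success)
            {
                // The bundle is already in memory at this point.
                AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(req);
                Debug.Log(bundle != null ? bundle.name : "bundle load failed");
            }
        }
    }
}
```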

That's all for today's sharing. Of course, life has its limits but knowledge knows no bounds. Over a long development cycle, the problems you see here may be just the tip of the iceberg. We have already prepared many technical topics on the UWA blog, waiting for you to explore and share together. Everyone who loves progress is welcome to join in: perhaps your method can solve someone else's urgent need, and the stones from other hills may serve to polish your jade.


UWA Website:

UWA Blogs:



