
In this issue we share several technical topics related to program development. The recommended reading time is 15 minutes. If you have any unique insights or discoveries, please feel free to contact us or join the discussion.

UWA Q&A Community:answer.uwa4d.com


UI

Q1: There may be hundreds or thousands of prop icons. If I put them all in a single atlas, it grows to 2048×2048, and much of that memory is wasted during actual gameplay.

 

Splitting them by type is not a good approach either: a new player already has one prop of each type, so every type's atlas ends up referenced once, and compared with packing everything into one atlas there is no big difference.

 

Grouping by level range might be better, but icons shown in the shop or on certain specific UIs are likely to span multiple level ranges, which again causes several atlases to be referenced at once. If I split the atlases by how frequently the icons are used, then all content planning has to follow that process, which is also error-prone.

 

Alternatively, if I don't pack these large numbers of icons at all, they remain scattered individual images. That avoids most of the memory waste, but rendering efficiency is much lower.

 

Is there a better solution to deal with this problem?

1. If the number of icons far exceeds what can be displayed on a single screen, the more practical approach is to implement a dynamic atlas yourself: keep the icons as scattered images and assemble them into a large texture at runtime, on demand. This can be done with Render to Texture.
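As a rough illustration of that idea, here is a minimal sketch of a runtime atlas. It assumes square icons of a fixed cell size and does no slot recycling; it also uses Graphics.CopyTexture to pack icons on the GPU instead of a literal render-to-texture pass, which achieves the same effect for simple cases. The class and field names are made up for the example.

using UnityEngine;

// Sketch of a runtime "dynamic atlas": scattered icons are copied into one large
// texture on demand. Assumes square icons of a fixed cell size and no slot recycling.
public class DynamicIconAtlas
{
    public Texture2D Atlas { get; private set; }

    private readonly int atlasSize;
    private readonly int cellSize;
    private readonly int cellsPerRow;
    private int nextCell;

    public DynamicIconAtlas(int atlasSize = 1024, int cellSize = 128)
    {
        this.atlasSize = atlasSize;
        this.cellSize = cellSize;
        cellsPerRow = atlasSize / cellSize;
        Atlas = new Texture2D(atlasSize, atlasSize, TextureFormat.RGBA32, false);
    }

    // Copies one icon into the next free cell and returns its normalized UV rect,
    // which a RawImage (or a custom UI mesh) can use to sample the shared atlas.
    public Rect Insert(Texture2D icon)
    {
        int x = (nextCell % cellsPerRow) * cellSize;
        int y = (nextCell / cellsPerRow) * cellSize;
        nextCell++;

        // CopyTexture runs on the GPU, so the icon does not need to be CPU-readable
        // as long as its format and size match the cell.
        Graphics.CopyTexture(icon, 0, 0, 0, 0, cellSize, cellSize, Atlas, 0, 0, x, y);

        float s = atlasSize;
        return new Rect(x / s, y / s, cellSize / s, cellSize / s);
    }
}

Because all visible icons sample the same runtime atlas, they can be batched into very few Draw Calls, while icons that are never shown never occupy atlas memory.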

 

2. For most games, if the interface is not permanently on screen, the extra Draw Call cost of scattered icons is affordable on current hardware.

Thanks to Zhao Wenyong for providing the answer above.


Script

Q2: How can inter-process communication (IPC) be implemented in Unity without using a native DLL? I have run into problems implementing IPC in Unity. The main issue is that .NET 3.5 does not have the System.IO.MemoryMappedFiles classes, and I need file mapping to implement IPC for an Editor plugin. Since the plugin must work across different projects, it has to be compatible with .NET 3.5. That seems to leave no good option other than importing a DLL written in C++, but the plugin is Editor-only, and I don't want a game-unrelated DLL to end up in the build because of it. Is there a way to implement shared-memory IPC under .NET 3.5?

1. C# can use sockets to implement IPC, which is very general and cross-platform.
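As a minimal sketch of the socket route (which works under .NET 3.5), the code below passes one line of text between two processes over a loopback TCP connection. The port number and the one-line-per-message framing are arbitrary choices for illustration, and error handling is omitted.

using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;

public static class SimpleIpc
{
    private const int Port = 56789; // arbitrary free local port

    // Editor-plugin side: wait for one message from the other process.
    public static string ReceiveOnce()
    {
        var listener = new TcpListener(IPAddress.Loopback, Port);
        listener.Start();
        try
        {
            using (TcpClient client = listener.AcceptTcpClient())
            using (var reader = new StreamReader(client.GetStream(), Encoding.UTF8))
            {
                return reader.ReadLine();
            }
        }
        finally
        {
            listener.Stop();
        }
    }

    // Other process: connect to the listener and send one message.
    public static void Send(string message)
    {
        using (var client = new TcpClient())
        {
            client.Connect(IPAddress.Loopback, Port);
            using (var writer = new StreamWriter(client.GetStream(), Encoding.UTF8))
            {
                writer.WriteLine(message);
            }
        }
    }
}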

 

2. Even if you use a native DLL, you can choose whether or not to pack it into the build, at least in current Unity versions.

Thanks to Zhao Wenyong for providing the answer above.

For functionality that is relatively close to the low level, you can write a native plugin yourself; the problems you mentioned can be avoided through the DLL's Import Settings and macro definitions.
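For example, an Editor-only native call can be fenced off with a macro so it never appears in player code, while the DLL's Import Settings are restricted to the Editor platform. The library name MyEditorIpc and the function below are purely hypothetical.

#if UNITY_EDITOR
using System.Runtime.InteropServices;

// Compiled only in the Editor; combined with Editor-only Import Settings on the DLL,
// nothing here ends up in the player build.
public static class NativeIpc
{
    [DllImport("MyEditorIpc")] // hypothetical native library
    public static extern int OpenSharedMemory(string name, int size);
}
#endif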

 

PS: Unity's wrapping of some non-rendering features, such as the camera and Microphone, is inefficient and hard to control.

Thanks to 志鹏Zipper for providing the answer above.


Script

Q3: Why can't I add a component to an object through the ?? operator?

public class PlayerMoveTest : MonoBehaviour
{
    CharacterController controller;

    private void Awake()
    {
        controller = transform.GetComponent<CharacterController>() ?? gameObject.AddComponent<CharacterController>();
    }
}

 

public class PlayerMoveTest : MonoBehaviour
{
    CharacterController controller;

    private void Awake()
    {
        controller = transform.GetComponent<CharacterController>();
        if (controller == null)
        {
            controller = gameObject.AddComponent<CharacterController>();
        }
    }
}

 

As shown in the code above, the first piece of code does not work and controller shows as null, but the second piece of code does add the component. Why?

 

For the ?? operator, isn't it the case that if the operand before ?? is null, the expression after ?? is evaluated?

I have tried:

public class PlayerMoveTest : MonoBehaviour
{
    CharacterController controller;

    private void Awake()
    {
        controller = transform.GetComponent<CharacterController>();
        if (controller == null)
        {
            controller = gameObject.AddComponent<CharacterController>();
        }
    }
}

Decompile the code above:

if (GUI.Button(new Rect(100f, 100f, 200f, 80f), "Test") && (Object)base.transform.GetComponent<CharacterController>() == (Object)null)
{
    base.gameObject.AddComponent<CharacterController>();
}
if (GUI.Button(new Rect(200f, 200f, 200f, 80f), "Test2") && (object)base.transform.GetComponent<CharacterController>() == null)
{
    base.gameObject.AddComponent<CharacterController>();
}

It can be seen that these two pieces of code are different: the first comparison is against UnityEngine.Object, while the second is against System.Object.

 

Write another piece of code to test:

var c1 = (object)base.transform.GetComponent<CharacterController>();
if (c1 == null)
    Debug.Log(1);
var c2 = (Object)base.transform.GetComponent<CharacterController>();
if (c2 == (Object)null)
    Debug.Log(2);
if (c2 == null)
    Debug.Log(3);

The output is 2 and 3.

 

Breakpoint debugging these values:

The reason is that UnityEngine.Object overloads the == operator, so this "fake null" object and null are considered equal; but when the comparison is done on System.Object (a plain reference comparison, which is what ?? uses), they are considered unequal.
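A small illustration of this, assuming the script runs in the Editor on a GameObject without a CharacterController, where GetComponent returns a placeholder "fake null" object rather than a true null reference:

using UnityEngine;

public class FakeNullDemo : MonoBehaviour
{
    private CharacterController controller;

    private void Awake()
    {
        CharacterController c = GetComponent<CharacterController>();

        Debug.Log(c == null);                // True  - UnityEngine.Object's overloaded ==
        Debug.Log(ReferenceEquals(c, null)); // False - plain reference check, which is what ?? uses

        // An explicit comparison goes through the overloaded operator, so this works:
        controller = c != null ? c : gameObject.AddComponent<CharacterController>();
    }
}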

Thanks to deviljz for providing the answer above.

I found that this "null" is not actually null:

When this line executes, a value has already been assigned: controller = transform.GetComponent<CharacterController>(); However, the value assigned is a "fake null". Looking at the printed result shows the difference between the two nulls.

To add one more thing: when I print the type of controller, the first line reports "object reference not set to an instance of an object", while the second line prints the type. This means that only when the left side of the ?? operator is a truly unassigned variable will the right side be executed. The example is in code b: the final print is v, and c is an unassigned string that I defined.

Thanks to Marco for providing the answer above.


Script

Q4: The camera follows the character, and when the character is blocked by a building I want to make that building semi-transparent. This requires ray detection, but for performance reasons scene objects are not allowed to have Collider components. When I tried to do it purely in a Shader, the effect was never quite right. Does anyone have another solution?

One way is to build approximate Box Colliders for scene objects when creating the scene and use physics collision detection. If Collider components are not allowed at all, then this method is ruled out as well.
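If approximate colliders are acceptable, the approach could look roughly like the sketch below: cast a ray from the camera to the character each frame and fade whatever building it hits. The layer mask, the 0.3 alpha value, and the assumption that the building's material supports transparency are all illustrative choices.

using UnityEngine;

public class CameraOcclusionFade : MonoBehaviour
{
    public Transform target;        // the character being followed
    public LayerMask occluderMask;  // layer containing the buildings' approximate BoxColliders

    private Renderer faded;

    private void LateUpdate()
    {
        if (faded != null) SetAlpha(faded, 1f);  // restore last frame's building
        faded = null;

        Vector3 dir = target.position - transform.position;
        RaycastHit hit;
        // Cast from the camera towards the character; anything hit on the occluder
        // layer sits between them and should be faded out.
        if (Physics.Raycast(transform.position, dir.normalized, out hit, dir.magnitude, occluderMask))
        {
            faded = hit.collider.GetComponent<Renderer>();
            if (faded != null) SetAlpha(faded, 0.3f);
        }
    }

    private static void SetAlpha(Renderer r, float a)
    {
        Color c = r.material.color;
        c.a = a;
        r.material.color = c;
    }
}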

 

I'm not sure what you mean by writing it purely in a Shader; I don't think this requirement can be handled in a simple Shader alone.

 

I'm not sure whether there is a better solution. For a general pipeline without special tricks, I feel the physics approach is the more direct and efficient one.

 

PS: If it is a 2.5D view and the camera never looks up or down, you could pre-compute 2D data for lookup at Runtime, but that has limitations and problems, and I haven't tried this method myself.

Thanks to Jia Weihao for providing the answer above.

The most direct way is to build approximate Box Colliders for scene objects and use physics detection. There is no more direct way in Unity than this.

 

Jia Weihao mentioned that data can be pre-computed for Runtime detection. In the Quake series, the scene rendering and physical collision data are pre-baked into a BSP tree, and Quake's runtime collision detection is based on that BSP.

Thanks to Zhang Rui for providing the answer above.


Timeline

Q5: When using the new Timeline, I find there is no option or hotkey to manually add keyframes, nor any related API. How should I add keyframes manually?

In the Editor, you can add keyframes at the head or tail of a Clip by dragging the Clip's edges; the Editor shows how many frames are added or removed (+/-) during the operation.

 

If you want to add keyframes to the built-in tracks at Runtime, Unity does not currently provide related interfaces.

 

You can implement a customized Track/Clip through a script, and in that script control the Timeline's Playhead so that it stays at a fixed time or jumps to a specified time.

 

You need to implement TrackAsset, PlayableAsset and PlayableBehaviour yourself; see the sketch after the list below.

 

Among them:

  • Implement TrackAsset so that an external Mono script can operate the Track.
  • Implement PlayableAsset to provide a customized Clip and supply the related data to the Track.
  • Implement PlayableBehaviour to hold the control logic that drives the Timeline.
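A minimal sketch of those three pieces, assuming the custom clip only needs to park the playhead at a given time; the class names and the holdTime field are invented for the example.

using UnityEngine;
using UnityEngine.Playables;
using UnityEngine.Timeline;

[TrackClipType(typeof(HoldClip))]
public class HoldTrack : TrackAsset { }   // the Track that external Mono scripts (and the editor) operate on

public class HoldClip : PlayableAsset     // the customized Clip, carrying data for the Track
{
    public double holdTime;

    public override Playable CreatePlayable(PlayableGraph graph, GameObject owner)
    {
        var playable = ScriptPlayable<HoldBehaviour>.Create(graph);
        playable.GetBehaviour().holdTime = holdTime;
        return playable;
    }
}

public class HoldBehaviour : PlayableBehaviour  // the control logic driving the Timeline
{
    public double holdTime;

    public override void ProcessFrame(Playable playable, FrameData info, object playerData)
    {
        // Setting the root playable's time every frame keeps the Timeline's playhead
        // parked at holdTime; jumping to another time works the same way.
        playable.GetGraph().GetRootPlayable(0).SetTime(holdTime);
    }
}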

Thanks to Zhang Rui for providing the answer above.


This is the 60th UWA Technology Sharing for Unity development. As we all know, our life has a limit but knowledge has none. These problems are only the tip of the iceberg, and there are many more technical problems during program development that deserve discussion. Welcome to join the UWA Q&A community and explore and share knowledge together!

UWA website:en.uwa4d.com

UWA Blog:blog.en.uwa4d.com

UWA Q&A community:answer.uwa4d.com
