
This time we will share several technical topics related to program development. The recommended reading time is 15 minutes. If you have any unique insights or discoveries, please feel free to contact us for discussion.


Frame synchronization

Q1: When I used to make demos on my own, I used CharacterController and rigid bodies to prevent characters from moving through walls. The code was simple and the results were decent. But when I needed to make a frame-synchronized (lockstep) shooting game, problems appeared. After all, rigid bodies belong to the presentation layer: each machine has different performance and a different update frequency, so the calculated results differ between clients and they fall out of sync, which makes rigid bodies unusable.


My current approach uses Unity's built-in Physics.BoxCast: when the player inputs a move, the character's collision box is swept in the input direction, it is tested against obstacles, and the farthest reachable position is calculated.
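A minimal sketch of this sweep-then-clamp movement, assuming a box-shaped character; the class and field names are illustrative, not the asker's actual code:

```csharp
using UnityEngine;

// Sweep the character's box along the input direction and clamp
// movement to just short of the first obstacle hit.
public class SweptMover : MonoBehaviour
{
    public Vector3 halfExtents = new Vector3(0.4f, 0.9f, 0.4f);
    public LayerMask obstacleMask;
    const float Skin = 0.01f; // small gap so we never end up touching the wall

    // Returns the farthest position reachable along 'direction' (normalized).
    public Vector3 SweepMove(Vector3 direction, float distance)
    {
        if (Physics.BoxCast(transform.position, halfExtents, direction,
                            out RaycastHit hit, transform.rotation,
                            distance, obstacleMask))
        {
            // Stop just short of the obstacle.
            distance = Mathf.Max(0f, hit.distance - Skin);
        }
        return transform.position + direction * distance;
    }
}
```

Note that, as the answers below point out, Unity's built-in physics is floating-point and therefore not deterministic across platforms, so for lockstep this logic would have to run on a fixed-point collision library instead.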


For simple terrain this works reasonably well, but complex terrain keeps surfacing new problems. Are there any related algorithms, articles, papers, or books I could refer to?


We have already separated presentation from logic. The project currently uses a lot of floating-point math, but we have begun gradually migrating to fixed-point numbers and will also adopt a fixed-point physics library. There are many articles on frame synchronization itself, but I still have some open problems:

1. Obstacle detection and character positioning for horizontal movement and free 360-degree movement in the air;

2. Position, orientation, and movement while climbing walls.


I have not found relevant information on these.


Our project has not yet introduced a mature physics engine; I write these algorithms myself, which is why I run into these problems. On reflection, there is really no need to reinvent the wheel. So here is the question: can you recommend some fixed-point physics engines that work with Unity?

Personally, I would make two points:

1. Ideally, if you have a complete physics engine and pathfinding system in the logic layer, everything you listed follows naturally. The pathfinding system is the more tractable part; a complete physics engine is generally hard to re-implement, so most teams implement only the specific features their requirements demand.

2. Regarding the handling of climbing animations: in theory, the logic layer cares only about state, not animation; animation belongs to the presentation layer.


Judging by the points you raised, your games are mostly action games. For a good feel of control in a real action game, frame synchronization may not be the best approach: you need to add a lot of client-side prediction and error-correcting rollback to achieve a good experience.

Thanks to Jia Weihao for providing the answer above.

For frame synchronization solutions, you can refer to LockstepFramework on GitHub. Tencent also seems to have a frame-synchronization solution; you can contact them to see whether it fits. And as Jia Weihao said, if your games are action games, client-side prediction and rollback are key to a good experience, and you should plan for them when designing the framework.


1. First of all, frame synchronization requires a fixed-point math library, because floating-point results differ across hardware platforms.

2. Presentation and logic must be separated.

3. It is recommended that you implement the physics library yourself.
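To illustrate point 1, here is a minimal sketch of the kind of deterministic fixed-point type such a library is built around (a Q32.32 format is assumed; real projects should use a maintained library rather than this fragment):

```csharp
using System;

// Minimal Q32.32 fixed-point number: 32 integer bits, 32 fractional
// bits, stored in a long. All arithmetic is integer arithmetic, so
// results are bit-identical on every platform.
public readonly struct Fix64
{
    public const int FractionalBits = 32;
    public readonly long Raw; // value * 2^32

    private Fix64(long raw) { Raw = raw; }

    public static Fix64 FromInt(int v) => new Fix64((long)v << FractionalBits);
    public static Fix64 FromRaw(long raw) => new Fix64(raw);

    public static Fix64 operator +(Fix64 a, Fix64 b) => new Fix64(a.Raw + b.Raw);
    public static Fix64 operator -(Fix64 a, Fix64 b) => new Fix64(a.Raw - b.Raw);

    // Multiply via split parts to avoid overflowing the 64-bit intermediate.
    public static Fix64 operator *(Fix64 a, Fix64 b)
    {
        long ai = a.Raw >> FractionalBits;        // integer part (floor)
        ulong af = (ulong)(a.Raw & 0xFFFFFFFF);   // fractional part
        long bi = b.Raw >> FractionalBits;
        ulong bf = (ulong)(b.Raw & 0xFFFFFFFF);
        long result = ((ai * bi) << FractionalBits)
                    + ai * (long)bf + bi * (long)af
                    + (long)((af * bf) >> FractionalBits);
        return new Fix64(result);
    }

    // For presentation-layer use only; never feed doubles back into logic.
    public double ToDouble() => Raw / (double)(1L << FractionalBits);
}
```

The presentation layer converts to float only for display; the logic layer never touches floating point, which is what keeps all clients in lockstep.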

Thanks to Ye Siyuan for providing the answer above.

On the client, each player only needs to handle their own character's collisions with the scene; there is no need to collide with other characters. Based on the synchronized scene information (position, orientation, motion), make appropriate predictions for remote characters. The synchronized information (position, orientation, motion, and the frame it belongs to) must allow the timeline to be re-entered, which is actually quite easy to do. In short, other objects need no collision handling; each client just computes its own.
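A hypothetical dead-reckoning sketch of the "appropriate prediction" idea: given the last synchronized state of a remote character (frame, position, motion), extrapolate where to draw it now. All names and the clamping policy are illustrative assumptions, not part of the answer above:

```csharp
using System;

// Last synchronized state of a remote character. The frame number is
// what lets the timeline be "re-entered": any later state can be
// reconstructed from (frame, position, velocity).
public struct RemoteState
{
    public int Frame;     // logic frame the state was sampled at
    public float X, Z;    // last known position
    public float VX, VZ;  // last known velocity, in units per logic frame

    // Extrapolate forward; clamp how far we predict so a stalled
    // peer does not drift away indefinitely.
    public (float x, float z) Predict(int currentFrame, int maxExtrapolateFrames = 10)
    {
        int dt = Math.Min(currentFrame - Frame, maxExtrapolateFrames);
        if (dt < 0) dt = 0;
        return (X + VX * dt, Z + VZ * dt);
    }
}
```

When the next authoritative state arrives, the presentation layer typically blends from the predicted position to the corrected one rather than snapping.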


There is also a solution where all characters, including your own, are driven by server broadcast. You create a hidden character locally for pre-collision, upload its state to the server, and the server broadcasts the state back for display. This way, all information can be replayed from any point in time: you only need to record the message packets, replay them, and iteratively adjust your strategy.

Thanks to kk for providing the answer above.

Version Control

Q2: This is a problem that has troubled me for a long time.


Whenever the project needs to be tested, we must open a branch. After the package is built, if the branch has problems or new requirements, we have to synchronize resources and code from the trunk. As the project grows, with more features and a bigger team, synchronization becomes very troublesome.


Our current development and synchronization process is very primitive: everyone develops on the trunk, and when we need to test and release a version, we open a branch. Requirements and bug fixes for the branch are developed on the trunk; when the branch needs updating, we synchronize the code and resources over and repack them.


Code synchronization is funneled through one person and resource synchronization through another. Everyone involved in feature development sits in a "synchronization group" and posts the content to be synchronized in a fixed format. When synchronization is actually needed after development is done, the two people responsible go through the posted items and synchronize them one by one.


The problem is obvious:

1. Synchronization always requires those two people: one on the client side and one product manager managing the process, which is wasteful. And since the code is identical, every synchronization takes half a day; there is so much code to synchronize that it is genuinely painful.


2. Easy to make mistakes:

a. For the person doing the synchronization: this person is not the one who developed the change, so mistakes are easy to make.

b. For the person who did the development: some content was finished long ago, and by the time it needs to be synchronized you can no longer remember what should be included.


Have you encountered the same problem? Is there a better process? I hope you can give me some suggestions, or simply describe your own project's process. Thank you!

Our Git process:


1. We no longer use master, because it is basically not needed during the development process;


2. The main development branch is dev; all tested new features are merged into dev;


3. Each person's work should be divided into features that are as fine-grained as possible, according to function and responsibility, and developed on a local feature branch. One person can work on several features at the same time, but not too many, otherwise management becomes difficult. Try not to push local development branches to the server, to avoid cluttering it with intermediate branches, unless the feature has a long time span or requires collaboration among several people;


4. While developing a feature, regularly (daily, or at key milestones) merge dev into your own feature branch, to keep your branch up to date and minimize conflicts;


5. When a feature is complete and ready for release, follow the merge order dev -> feature -> dev to merge the feature into dev;


6. Keep merged feature branches around for a while in case later modifications are needed. After a period of time (usually once the feature is live), delete the local feature branch so branches do not accumulate;


7. When releasing, merge everyone's features into dev one by one as in step 5, then start testing. After the internal test passes, open a new branch from dev named after the scheduled release date, and continue external testing and bug fixing on that branch;


8. Bugs found and fixed on the release branch are merged back to dev by the developer responsible for that problem; we often use cherry-pick for this. Everyone else continues developing on dev and their feature branches;


9. Regarding resources: in general we organize around feature development and do not assign a dedicated resource manager. The feature developer must fully own all resources and code related to the feature, and is responsible for keeping resource and code uploads synchronized.


10. Split permissions and processes as finely as possible, so that every developer on the project can use them proficiently. Allow mistakes and learn from them. When everyone can run the whole process, work is not interrupted by personnel changes.

Thanks to Huang Cheng for providing the answer above.


Q3: In our current project, we need to capture the camera 30 times per second, but calling ReadPixels 30 times a second is very expensive. Is there a replacement, or any other way to take camera screenshots with less overhead?

In newer versions, Unity provides AsyncGPUReadback, which can be used for screenshots and is faster than ReadPixels, though 30 captures per second still carries significant overhead.
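A minimal sketch of asynchronous readback (Unity 2018.2+; it is not supported on every GPU, so check SystemInfo.supportsAsyncGPUReadback first). The class name and the RenderTexture field are assumptions for illustration:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Request the GPU to copy the render target back to the CPU
// asynchronously; the callback fires a few frames later without
// stalling the render thread the way ReadPixels does.
public class AsyncScreenshot : MonoBehaviour
{
    public RenderTexture source; // the camera renders into this

    public void Capture()
    {
        AsyncGPUReadback.Request(source, 0, TextureFormat.RGBA32, OnComplete);
    }

    void OnComplete(AsyncGPUReadbackRequest request)
    {
        if (request.hasError) { Debug.LogWarning("GPU readback failed"); return; }
        var data = request.GetData<byte>(); // raw RGBA bytes
        // ... encode / write to disk, ideally on a worker thread ...
    }
}
```

Because the result arrives with a few frames of latency, this suits recording pipelines better than cases that need the pixels in the same frame.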


You can refer to this example in the lab:


If you want to export and store the capture after taking a screenshot, on the Windows platform there is a faster way to capture the full screen: the Desktop Duplication screenshot technique. It is very fast, close to video capture speed.


You can refer to this example:

This answer is provided by UWA


Q4: I want to get the mipmap level currently used by a texture. Unity 2017.4 does not have a corresponding API. Is there any other way to get it?

The mipmap level used in rendering differs pixel by pixel, not per texture, so it should be obtainable in the fragment stage of a shader. There are tools on the Asset Store for visualizing mipmap usage. I don't know exactly how the built-in Scene View mode calculates it; if you can find the source code, refer to that, and some searching should turn up the method. I haven't done it myself, but the calculation should be based on ddx and ddy.
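To make the ddx/ddy idea concrete, here is a CPU-side sketch of the math the GPU roughly performs per pixel: measure how far the texture coordinate moves to the neighboring pixel, scale into texel space, and take log2 of the larger footprint. The function name and parameters are illustrative; a real visualizer would evaluate this in the fragment shader and output the level as a color:

```csharp
using System;

public static class MipMath
{
    // ddxU/ddxV: change in UV to the next pixel horizontally;
    // ddyU/ddyV: the same vertically. texSize: texture size in texels.
    public static float MipLevel(float ddxU, float ddxV,
                                 float ddyU, float ddyV, float texSize)
    {
        // Squared footprint of one screen pixel, measured in texels.
        float dx2 = (ddxU * ddxU + ddxV * ddxV) * texSize * texSize;
        float dy2 = (ddyU * ddyU + ddyV * ddyV) * texSize * texSize;
        float d = Math.Max(dx2, dy2);
        // 0.5 * log2(d) == log2(sqrt(d)); clamp at 0 for magnification.
        return Math.Max(0f, 0.5f * (float)Math.Log(d, 2.0));
    }
}
```

One texel per pixel gives level 0; two texels per pixel gives level 1, and so on. Actual hardware may add anisotropy and bias terms on top of this.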

Thanks to Jia Weihao for providing the answer above.


Q5: In this special effect there are 4 nodes; each node uses one shader and one texture, so there are 4 draw calls.

Because they share the same shader, I considered merging them, but the textures may also be used by other effects. That is to say, if these textures are merged into an atlas, there will be redundancy and higher memory use; if they are not merged, the draw call count stays higher. Do you have any good suggestions for this situation, especially for special effects?

In general, how large are your effect textures? If they are small, you can consider merging several effects into one atlas, for example combining effects that share common textures, preloading them, and then packing them together.


In addition, it helps to gather some data before deciding: for example, the draw call and memory footprint of the effects in battle, and the total draw call and memory figures overall. Evaluate whether draw calls or memory is currently the tighter constraint.

This answer is provided by UWA

This is the 70th UWA Technology Sharing for Unity development. As the saying goes, life is limited but knowledge is not. These problems are only the tip of the iceberg; there are many more technical problems worth discussing during program development. Welcome to join the UWA Q&A community, and let's explore and share knowledge together!



