Fake it til you make it - faking extended draw distance in mobile games
Vladislav Sobolev | November 1, 2024

Optimization is a cornerstone of mobile game development. With thousands of phone models in circulation, many of them running outdated chipsets, every game needs to target a reasonable lowest common denominator, and one of the most consistent ways to optimize performance in 3D games is to manage draw distance.

Draw distance must be kept as short as possible to achieve stable FPS. But what about open worlds, where players need to see the entire map from any point? This is the challenge we faced at Cubic Games while developing Block City Wars, and below we will explore the solution we settled on and the strengths of this particular approach.

The problem:

In a game like Block City Wars, every player needs to see the entire map from any position or be at a disadvantage, and simply increasing the far clip plane won't work. Increasing the draw distance raises the number of triangles that pass through all culling stages: more objects undergo bounding box checks on the CPU, and more fragments are drawn on the GPU.

Using another camera for the background with a different draw distance complicates camera management and adds unnecessary overhead. Lastly, experiments with HLOD (Hierarchical Level of Detail) also proved unsuitable for this problem. While some of these solutions might work for other games, they failed to address our needs. When all else fails, shader magic saves the day.

The essence of the solution:

The solution we settled on was a mixture of shader trickery and our existing simple fog effect to provide useful but largely faked detail. Using a shader, we can create the illusion that an object is far away while it is actually close to the player. This lets us choose which objects will always be visible, regardless of distance.

It makes sense to use only sufficiently tall objects so players can orient themselves on the map, which also lets us fully remove visual clutter from the final render. To ensure a seamless transition between fake objects and real ones, we render the silhouettes in the fog color, which also allows us to significantly reduce detail. It looks like this:

[Before / After comparison screenshots]

Deceiving CPU Culling:

To achieve this effect, we can leverage the tools Unity provides. For a mesh to be sent for rendering, its bounds must fall within the camera frustum, so we override those bounds manually, for example with the MonoBehaviour below. We do this in Start() because Unity recalculates bounds when the mesh is initialized. For our purposes, we need to set the size so that the player's camera is always inside the bounds; the mesh will then always be sent to the GPU for rendering, lightening the load on older CPU models.

[SerializeField] private MeshFilter selectedMeshFilter;
[SerializeField] private Vector3 newCenter;
[SerializeField] private Vector3 newSize;

void Start()
{
    // Unity recalculates bounds when the mesh is initialized, so override them here.
    Mesh mesh = selectedMeshFilter.sharedMesh;
    Bounds bounds = mesh.bounds;
    bounds.center = newCenter;
    bounds.size = newSize; // large enough that the player's camera always stays inside
    mesh.bounds = bounds;
}

Deceiving GPU Culling:

Once the mesh is on the GPU, there is one more stage of frustum culling between the vertex and fragment stages. To bypass it, we need to transform the vertex coordinates so that all vertices fall within the camera's view while still preserving perspective.

v2f vert (appdata v)
{
    v2f o;
    // Project each vertex onto a sphere of radius _ScaleDownFactor around the camera:
    // the direction from the camera is unchanged, so the silhouette looks identical.
    float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    float3 directionToOriginal = normalize(worldPos - _WorldSpaceCameraPos);
    float3 scaledPos = _WorldSpaceCameraPos + directionToOriginal * _ScaleDownFactor;
    float3 objectPos = mul(unity_WorldToObject, float4(scaledPos, 1)).xyz;
    o.vertex = UnityObjectToClipPos(objectPos);
    return o;
}

_ScaleDownFactor is the distance from the camera at which all vertices will be placed. It needs to be tuned against the fog distance to hide the transition.

All the fragment shader needs to do is output the fog color, which masks the geometry cutoff.

fixed4 frag (v2f i) : SV_Target
{
    // A flat fog color blends the silhouette into the fog where real geometry ends.
    return unity_FogColor;
}
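Since the right value for _ScaleDownFactor depends on the scene's fog settings, it can be convenient to drive it from those settings at runtime. Below is a minimal sketch of how that might look, assuming linear fog configured through RenderSettings and a dedicated silhouette material; the class and field names are illustrative and not from the actual project.

using UnityEngine;

// Illustrative helper: parks the silhouette vertices just inside the fog so the
// geometry cutoff stays hidden. Assumes linear fog set up in RenderSettings.
public class SilhouetteFogSync : MonoBehaviour
{
    [SerializeField] private Material silhouetteMaterial; // material using the silhouette shader above
    [SerializeField, Range(0.5f, 1f)] private float fogFraction = 0.9f; // how deep into the fog to place the vertices

    private static readonly int ScaleDownFactorId = Shader.PropertyToID("_ScaleDownFactor");

    void Update()
    {
        // Keep the collapsed vertices slightly before the fog fully saturates,
        // so real geometry fades out before the fake silhouette takes over.
        silhouetteMaterial.SetFloat(ScaleDownFactorId, RenderSettings.fogEndDistance * fogFraction);
    }
}

If the fog settings never change at runtime, the same assignment can simply be done once in Start() instead of every frame.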
Example with an Island Mesh:

This effect can be clearly seen in Blender. If you position the camera at the origin and point it at a cube, then duplicate the cube and scale it relative to the origin, there will be no difference between the two cubes from the camera's perspective. Obviously, this is a trick that won't work quite right in VR, but we're developing for mobile here, so depth perception isn't something we have to work around.

In our case, an additional step is added: the mesh is squashed to sit right at the edge of the camera's draw distance. This is done to avoid z-buffer conflicts with other objects that should be closer to the player. When dealing with impostor detail objects like this, one little rendering glitch is all it takes to shatter the illusion and draw attention to background objects that should normally be seamless.

We must also keep in mind cases where the camera ends up inside the silhouette mesh. Vertices of a single triangle can then land on different sides of the camera, causing the triangle to stretch across the entire screen. This should be taken into account when creating the silhouette mesh: either ensure the camera can never enter it, or disable the mesh when the camera approaches (a simple distance check, sketched at the end of this post, is enough for the latter).

Conclusion

While this approach won't be applicable to every game, it fits Block City Wars and its existing fog effects perfectly. It allows the effective draw distance to be extended quickly with faked silhouette detail under serious performance constraints, leveraging the existing fog to hide the smoke and mirrors. It is easy to reproduce in any render pipeline and engine, and it does not require modifying existing code.

Even with much of the fine detail faked and obscured behind fog, the distant silhouettes still give players useful gameplay information at minimal performance cost. A net win for players across all platforms, especially older hardware.
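Regarding the caveat above about the camera entering a silhouette mesh, the proximity check mentioned there can be as simple as the sketch below. It is illustrative only, not code from the actual project, and assumes a component placed on each silhouette object with the player camera assigned in the Inspector.

using UnityEngine;

// Illustrative only: hides the silhouette when the player camera gets close enough
// that its triangles could straddle the camera position.
public class SilhouetteProximityToggle : MonoBehaviour
{
    [SerializeField] private Transform playerCamera;       // assign the player camera transform
    [SerializeField] private Renderer silhouetteRenderer;  // the silhouette's renderer
    [SerializeField] private float disableDistance = 30f;  // hypothetical threshold, tune per mesh

    void Update()
    {
        float sqrDistance = (playerCamera.position - transform.position).sqrMagnitude;
        silhouetteRenderer.enabled = sqrDistance > disableDistance * disableDistance;
    }
}

Comparing squared distances avoids a square root every frame; with many silhouette objects, the checks could also be batched in a single manager script instead of one component per mesh.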