Using Shader Forge for Mobile

From Shader Forge Wiki

If you, like me, work primarily in mobile game development (who doesn't nowadays, right?), you have to be wary of the features you implement and how you implement them, because, as a rough guesstimate, the devices your game will run on are somewhere around 10-30 times slower than the environment you developed the game on. Graphical "features" beyond displaying textured meshes are rare, and something seemingly as trivial as transparency can be an expensive effect on older devices!

There are already a lot of resources covering what to watch out for when making graphics for mobile games, so I'm going to keep that kind of general information to a minimum, mention a few things quickly, and focus on what is relevant to Shader Forge.


Basics

First off, there's nothing inherently different about the shaders produced by Shader Forge compared to what you would write by hand; in short, they pretty much work right off the bat! That is, if it weren't for the fact that Shader Forge forces compilation for shader model 3.0. Before the 0.26 beta of Shader Forge, I had to open the shader and manually remove the line(s) which said:


#pragma target 3.0


And things would work swimmingly! Removing the line means Unity attempts to compile for shader model 2.0, which is available on all mobile devices supported by Unity nowadays. Shader model 3.0 is still very new; only the highest-end devices support it at the time of writing. In any case, since 0.26 there is a setting for this in Shader Forge instead. In the left-side panel, you can find the Experimental tab, under which there is a checkbox for "Force Shader Model 2.0":


Much-Experimental-Very-Useful.png


This will change the #pragma target line from 3.0 to 2.0, and there will be no need to open the shader and edit things!

Hah! You wish. Well, in theory this is true: your shaders will now run on the platform, but they're not optimized for it, so they're often not useful in practice until you've scrutinized them with your super-experienced mobile-shader-developer eyes :P. You could say that for mobile, Shader Forge is best used for prototyping, but in the end you have to optimize things properly in code if you want your game to run on older hardware. It's still a very powerful tool even for mobile developers, since it gives you visual feedback while you are experimenting with an idea for an effect.

In general you want to keep your instruction count as low as possible, meaning you want to do as few things as you can in your shader. Then there are things which are faster or slower than others. Texture lookups, for example, are generally cheap, so you might want to use a low-res texture for a certain effect rather than calculating it with expensive functions like Sin or Frac. "Random" noise is a typical example of something you can achieve more easily and cheaply with a texture than with the Noise node. Then there are things which can be hard for Shader Forge to do anything about, or convoluted to solve in the node editor, which make more sense to solve in code. Let's grab a practical example:
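As a sketch of that texture-for-math trade (the _NoiseTex property name and the hash constants here are my own illustration, not part of any Shader Forge output), a procedural pseudo-random value in the fragment shader might look like this:


// Several ALU instructions per pixel: dot, sin, mul, frac
float noiseVal = frac(sin(dot(i.uv0.rg, float2(12.9898, 78.233))) * 43758.5453);


Replacing it with a single lookup into a small pre-baked noise texture trades all of that math for one fetch, which is usually a win on older mobile GPUs:


uniform sampler2D _NoiseTex; // e.g. a small tiling greyscale noise texture
// ...
// One texture fetch per pixel instead of the math above
float noiseVal = tex2D(_NoiseTex, i.uv0.rg).r;


The exact break-even point depends on the GPU and on texture cache behaviour, so it's worth profiling on your actual target devices.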


Move stuff to Vertex shader

Very-Unlit.png


"This shader is rendering a texture, with no lighting. What on earth could be optimized here O_o" you cry in confusion. Let's take a look at the source produced by Shader Forge (I stripped out the metadata and comments):


Shader "Shader Forge/NewShader" {
    Properties {
        _MainTexture ("MainTexture", 2D) = "white" {}
    }
    SubShader {
        Tags {
            "RenderType"="Opaque"
        }
        Pass {
            Name "ForwardBase"
            Tags {
                "LightMode"="ForwardBase"
            }
 
 
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #define UNITY_PASS_FORWARDBASE
            #include "UnityCG.cginc"
            #pragma multi_compile_fwdbase_fullshadows
            #pragma exclude_renderers xbox360 ps3 flash d3d11_9x 
            #pragma target 2.0
            uniform sampler2D _MainTexture; uniform float4 _MainTexture_ST;
            struct VertexInput {
                float4 vertex : POSITION;
                float4 uv0 : TEXCOORD0;
            };
            struct VertexOutput {
                float4 pos : SV_POSITION;
                float4 uv0 : TEXCOORD0;
            };
            VertexOutput vert (VertexInput v) {
                VertexOutput o;
                o.uv0 = v.uv0;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                return o;
            }
            fixed4 frag(VertexOutput i) : COLOR {
                float2 node_21 = i.uv0;
                float3 emissive = tex2D(_MainTexture,TRANSFORM_TEX(node_21.rg, _MainTexture)).rgb;
                float3 finalColor = emissive;
                return fixed4(finalColor,1);
            }
            ENDCG
        }
    }
    FallBack "Diffuse"
    CustomEditor "ShaderForgeMaterialInspector"
}


See that line near the bottom which says:


float3 emissive = tex2D(_MainTexture,TRANSFORM_TEX(node_21.rg, _MainTexture)).rgb;


This line belongs to the fragment shader portion, which means it runs for every pixel. Shader Forge currently has a tendency to do most of its calculations in the fragment shader, keeping only computations regarding the vertices themselves in the vertex shader. This doesn't have to be the case! Considering that rendering an object usually involves drawing more pixels than the model has vertices, you can save a lot of calculations by moving them over to the vertex shader. "But of course you need to do texture lookups for every pixel," you might think to yourself (or out loud). This is true, but we certainly don't need to run the TRANSFORM_TEX macro for every pixel; it handles the Tiling & Offset inputs in the Material inspector. Best of all would be to get rid of it entirely, so if you're not using those inputs (which is often the case) you can just change that line to:


float3 emissive = tex2D(_MainTexture,i.uv0.rg).rgb;


That way you are just using the UV mapping you got from the vertex without scaling or translating it, and you've saved two operations per pixel! But what if you still want that functionality, just optimized? You might have noticed that earlier in this paragraph I said "UV mapping you got from the vertex". Why not do the scaling and translation per vertex instead? It gives exactly the same results, and in many cases saves a lot of performance. The only case where it would reduce performance is if your mesh is very high-poly and small on the screen, meaning you do more calculations per vertex than you would per pixel. This is quite an uncommon case, so it's fairly safe to say this is an optimization for most! So what you want to do is change the line in the vertex shader which currently looks like this:


o.uv0 = v.uv0;


Into this:


o.uv0.rg = TRANSFORM_TEX(v.uv0.rg, _MainTexture);


Hooray! Note that this also requires that you made the earlier change in the fragment shader, where we removed the calculation entirely. Now you're doing the Tiling & Offset calculation per vertex, so if your vertex count is within reason and/or your model takes up a good portion of the screen, you should notice a performance increase. Best of all is of course the former approach, where the calculation is skipped entirely, but sometimes you need it.
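For reference, here are the vertex and fragment functions from the shader above with both of these changes applied together:


VertexOutput vert (VertexInput v) {
    VertexOutput o;
    // Tiling & Offset now applied once per vertex instead of once per pixel
    o.uv0.rg = TRANSFORM_TEX(v.uv0.rg, _MainTexture);
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    return o;
}
fixed4 frag(VertexOutput i) : COLOR {
    // Plain lookup; the UVs arrive already transformed from the vertex shader
    float3 emissive = tex2D(_MainTexture, i.uv0.rg).rgb;
    float3 finalColor = emissive;
    return fixed4(finalColor,1);
}


One small caveat: since uv0 is declared as a float4 and we only write .rg here, the .ba components are left uninitialized, which is fine as long as nothing reads them, but some compilers may warn about it.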


Variable precision

Another topic which Shader Forge currently isn't very bothered with is the precision of variables. You have three levels of precision available: float, half and fixed, with float being the highest. Some details can be found on Unity's own shader performance tips page, and there's obviously a bunch more info online. To make sure there's no quality loss happening anywhere, Shader Forge uses float for everything. In practice this isn't always necessary, but it can be hard for Shader Forge to detect when it is and isn't. Therefore, this is another thing you should change in the shader before you can call it good to go for mobile. For the code above, an example would be changing this line:


float3 emissive = tex2D(_MainTexture,i.uv0.rg).rgb;


Into this instead:


fixed3 emissive = tex2D(_MainTexture,i.uv0.rg).rgb;


And then also feed that variable directly into the return line instead of going through the extra "finalColor" variable currently there:


fixed3 emissive = tex2D(_MainTexture,i.uv0.rg).rgb;
return fixed4(emissive,1);


This is both because colors only need 256 unique values per channel, and because otherwise we'd get a precision conversion when the value is returned for rendering. Some of these things might be fixed by Unity's shader optimizer, but some you need to do yourself. I generally think better safe than sorry, and fix as many things as I can by hand when it comes to mobile.
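Putting the whole walkthrough together, the fragment shader from the example ends up looking like this once the per-pixel TRANSFORM_TEX is gone and the precision is lowered (this is just the two edits from this article combined, nothing new):


fixed4 frag(VertexOutput i) : COLOR {
    // fixed precision is plenty for an 8-bit-per-channel color
    fixed3 emissive = tex2D(_MainTexture, i.uv0.rg).rgb;
    // Return directly; no intermediate finalColor, no conversion back to float
    return fixed4(emissive,1);
}


If you kept the Tiling & Offset support, remember that this version relies on the vertex shader doing the TRANSFORM_TEX work instead.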