Unity shader glitches depending on Dark Mode on iOS 15 (iPhone)

I'm using a shader (code below) that lets me turn an image into grayscale (with transparency if needed).
Everything was perfect until I updated my device to iOS 15. Since that update, my shaders glitch as soon as the scene is rendered.
After days of searching for a solution, I noticed that this is related to the iPhone's Dark Mode.
Here is a "concept" example to show what currently happens:
The grayscale shader is applied to a red cube.
Cube A runs on an iPhone with Dark Mode enabled (the result I also get in Unity, i.e. the correct one).
Cube B represents the same object with Dark Mode disabled.
The problem is that I use these shaders on a lot of items inside my application, so the UI ends up inconsistent and ugly depending on the user's Dark Mode preference.
Note: I don't think the problem is the shader itself, because it works fine on iOS versions before 15. I suspect it is something about how iOS 15 handles shaders with transparency effects, but that's just a guess, because I still don't know much about shaders (I'm a student).
This is the shader I'm using:
Shader "Unlit/NewUnlitShader"
{
Properties
{
_MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}
_EffectAmount ("Effect Amount", Range (0, 1)) = 1.0
}
SubShader
{
Tags
{
"Queue"="Transparent"
"IgnoreProjector"="True"
"RenderType"="Transparent"
}
LOD 200
Blend SrcAlpha OneMinusSrcAlpha
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata_t
{
float4 vertex : POSITION;
float2 texcoord : TEXCOORD0;
};
struct v2f
{
float4 vertex : SV_POSITION;
half2 texcoord : TEXCOORD0;
};
sampler2D _MainTex;
float4 _MainTex_ST;
uniform float _EffectAmount;
v2f vert (appdata_t v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.texcoord = TRANSFORM_TEX(v.texcoord, _MainTex);
return o;
}
fixed4 frag (v2f i) : SV_Target
{
fixed4 c = tex2D(_MainTex, i.texcoord);
c.rgb = lerp(c.rgb, dot(c.rgb, float3(0.3, 0.59, 0.11)), _EffectAmount);
return c;
}
ENDCG
}
}
Fallback "Standard"
}
Is this a bug or am I missing something?
UPDATE - SOLVED
It's a bug; Unity devs have been notified about it.

I experienced something similar, where materials (and presumably shaders...) looked different in an iOS build with Dark Mode off compared to the editor or an iOS build with Dark Mode on.
Until this bug is fixed, a dirty hack is to add this key to the Info.plist:
UIUserInterfaceStyle = Dark
This basically forces the app to use Dark Mode. It works the same way with
UIUserInterfaceStyle = Light
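If you don't want to edit the generated Xcode project by hand after every build, you can also write the key from a post-build script using Unity's PlistDocument API. This is only a sketch (the class name and the Editor-folder placement are my own choices, not from the original answer):

#if UNITY_IOS
// Place this file in an Editor folder.
using System.IO;
using UnityEditor;
using UnityEditor.Callbacks;
using UnityEditor.iOS.Xcode;

public static class ForceInterfaceStyle
{
    [PostProcessBuild]
    public static void OnPostprocessBuild(BuildTarget target, string pathToBuiltProject)
    {
        if (target != BuildTarget.iOS) return;

        // Load the generated Info.plist and pin the interface style.
        string plistPath = Path.Combine(pathToBuiltProject, "Info.plist");
        var plist = new PlistDocument();
        plist.ReadFromFile(plistPath);
        plist.root.SetString("UIUserInterfaceStyle", "Dark"); // or "Light"
        plist.WriteToFile(plistPath);
    }
}
#endif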

"shader works < iOS 15" doesn't mean the shader itself is always correct.
Some simple shader variable types like float, half, and fixed can give you total different result in different devices and OS.
"half" and "fixed" variables for better performance, while "float" for less bug.
It happens mostly on mobile, due to different CPU/GPU specs and also your Graphic APIs option.
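As a hedged illustration (not from the original post; exact widths vary per GPU): float is 32-bit everywhere, while half is often 16-bit and fixed roughly 11-bit on mobile, so the same expression can produce visibly different values. A hypothetical fragment function against the question's shader structs:

fixed4 frag (v2f i) : SV_Target
{
    fixed4 c = tex2D(_MainTex, i.texcoord);
    float lumF = dot(c.rgb, float3(0.3, 0.59, 0.11));        // 32-bit: consistent on all devices
    half  lumH = dot((half3)c.rgb, half3(0.3, 0.59, 0.11));  // often 16-bit on mobile: may drift
    return fixed4(lumH.xxx, c.a); // using the low-precision value can band or shift
}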
Another keyword is "Color Space": check the Linear/Gamma option in your Unity Player settings.
A broken shader renders pink. If it's not pink, the wrong result must come from a math issue in your shader.
The shader code itself is very straightforward. If the rendering result changes after a while, it's likely that some variables are changed at runtime too. And the plugin you are using obviously involves a lot of math passed back and forth between C# and the shader.
You can imagine this: the C# code reads a variable from the shader, but the shader returns a wrong value for the calculation in C#; C# then assigns the wrong result back to the shader.
This becomes a feedback loop of wrong results.
Unity UI effects (with shaders) are not reliable:
sometimes they are simply not updated, and you have to force an update via script. The commands below may help sometimes, but not always:
Canvas.ForceUpdateCanvases();
ScrollRect.GraphicUpdateComplete();
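As a sketch of how those calls might be driven from a component (the Graphic.SetAllDirty() call is my addition, not from the original answer; it marks a uGUI element for rebuild):

using UnityEngine;
using UnityEngine.UI;

public class ForceUIRefresh : MonoBehaviour
{
    void LateUpdate()
    {
        // Mark this Graphic (Image, RawImage, Text, ...) dirty so its
        // vertices and material get rebuilt, then flush all canvases.
        var graphic = GetComponent<Graphic>();
        if (graphic != null)
            graphic.SetAllDirty();

        Canvas.ForceUpdateCanvases();
    }
}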
Thus, you should contact the developer who maintains this plugin instead, as they know best how their plugin works.
Otherwise, you should begin writing your own shaders instead.
A grayscale shader is extremely easy to write.
Edit 2021-12-07:
From your shader, I can't see any relationship between the greyscale and the alpha channel.
c.rgb = lerp(c.rgb, dot(c.rgb, float3(0.3, 0.59, 0.11)), _EffectAmount);
I think this would be a proper way to achieve what you need (note that dot() returns a scalar, so declaring it as fixed3 and reading .a from it would not compile):
fixed grey = dot(c.rgb, float3(0.3, 0.59, 0.11));
c.rgb = lerp(c.rgb, grey, _EffectAmount);
c.a = grey;
Meanwhile, removing the "Fallback..." line should help debugging, as sometimes the fallback shader will override your current shader:
Fallback "Standard" // remove it <---
There is also a mismatched variable type in your original code; it should be float2 instead of half2.
struct v2f
{
    float4 vertex : SV_POSITION;
    float2 texcoord : TEXCOORD0;
};

Related

Unity Camera Rendering to Screen at Low Quality When Trying to Use Post-Processing (URP)

I've been having trouble getting custom post-processing shaders to work with the 2D URP renderer. After a lot of searching I found a solution that lets me use post-processing effects in 2D with URP by using camera stacking and render features: a base camera renders most of the scene along with the 2D lights (the main reason I'm using URP), and a second overlay camera renders the post-processing effect. The issue is that for some reason the quality drops a lot when the camera applying the post-processing effect is enabled. Here are a couple of examples:
With post-processing camera enabled
With post-processing camera disabled
The shader shouldn't be doing anything at the moment, but if I do make it do something like inverting the colors, the effect does get applied when the camera is enabled. The UI has its own camera, so it's unaffected by both the low quality and the shader. I've found that disabling the render feature brings the quality back as well, but it doesn't seem to be the shader that's doing this, because I can detach the shader from the feature without disabling the feature and the low quality stays. I'm still pretty new to shaders though, so in case there is something wrong with my shader that's causing this, here's the code:
Shader "PixelationShader"
{
SubShader
{
Tags { "RenderType" = "Opaque" "RenderPipeline" = "UniversalPipeline"}
LOD 100
ZWrite Off Cull Off
Pass
{
Name "PixelationShader"
HLSLPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
struct Attributes
{
float4 positionHCS : POSITION;
float2 uv : TEXCOORD0;
UNITY_VERTEX_INPUT_INSTANCE_ID
};
struct Varyings
{
float4 positionCS : SV_POSITION;
float2 uv : TEXCOORD0;
UNITY_VERTEX_OUTPUT_STEREO
};
Varyings vert(Attributes input)
{
Varyings output;
// Note: The pass is setup with a mesh already in clip
// space, that's why, it's enough to just output vertex
// positions
output.positionCS = float4(input.positionHCS.xyz, 1.0);
#if UNITY_UV_STARTS_AT_TOP
output.positionCS.y *= -1;
#endif
output.uv = input.uv;
return output;
}
TEXTURE2D_X(_CameraOpaqueTexture);
SAMPLER(sampler_CameraOpaqueTexture);
half4 frag(Varyings input) : SV_Target
{
UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);
float4 color = SAMPLE_TEXTURE2D_X(_CameraOpaqueTexture, sampler_CameraOpaqueTexture, input.uv);
//color.rgb = 1 - color.rgb;
return color;
}
ENDHLSL
}
}
}
Please let me know if you have any ideas, thanks! Also, the editor light icons you can see in the images just started appearing in-game as well; if anyone knows how to remove those, or how to fix the white lines at the edges of the screen, that would be handy to know too.
Edit: The quality difference isn't very visible in the images I posted, but it's much more noticeable when actually playing the game.
_CameraOpaqueTexture uses bilinear downsampling by default. You can change that in the Universal Render Pipeline asset that you use:
the dropdown under Rendering > Opaque Downsampling needs to be set to None.
After trying a bunch of different things, I decided to just remove URP from my project and use 3D lights on 2D sprites instead.

Unity: only render objects from a layer inside a 2D mesh

I'm trying to make a 2D top down game with a field of view.
My field of view is shown by a 2D mesh of the FOV, which cannot pass through walls.
I need to be able to put some objects such as enemies in a layer that's only rendered when it's inside the view cone.
I was following this tutorial but couldn't find the overwrite setting shown at 18:16 (I believe this is because the LWRP no longer exists in Unity). Are there any alternatives or other solutions?
The concept of using layer-wise render passes and the stencil buffer is basically still the same in URP (the replacement for LWRP).
The asset to configure this should by default be in
Assets/UniversalRenderPipelineAsset_Renderer
Also see See Through Objects with Stencil Buffers using Unity URP, which explains it pretty well.
Or, if you use the built-in render pipeline, you need shaders that directly implement this via a Stencil pass or subshader, as in the sketch below.
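As a rough sketch of that built-in-pipeline approach (the shader names and Ref value are my own, untested): the FOV mesh writes a bit into the stencil buffer without drawing any color, and the enemy shader only passes where that bit is set.

Shader "Custom/FOVStencilWrite"
{
    SubShader
    {
        // Draw before the enemies and mark every covered pixel.
        Tags { "Queue" = "Geometry-1" }
        ColorMask 0
        ZWrite Off
        Pass
        {
            Stencil
            {
                Ref 1
                Comp Always
                Pass Replace
            }
        }
    }
}

Shader "Custom/EnemyStencilRead"
{
    Properties { _Color ("Color", Color) = (1,1,1,1) }
    SubShader
    {
        Pass
        {
            // Only draw where the FOV mesh wrote 1 into the stencil.
            Stencil
            {
                Ref 1
                Comp Equal
            }
            Color [_Color]
        }
    }
}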
As an alternative you can also use the depth texture. For some examples using that and the Shader Graph, check out Creating Overlap Shader Effects, Such as Underwater and Night Vision, in Unity's Shader Graph.
If you want to implement it from scratch and WITH THE BUILT-IN RENDER PIPELINE, you can read along:
Let's say this is the scene; the white plane is the FOV and the cubes are enemies.
The FOV can have a simple shader, a very simple one, for example:
Shader "Unlit/simple_shader"
{
Properties
{
}
SubShader
{
Tags { "RenderType"="Opaque" }
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
};
sampler2D _MainTex;
float4 _MainTex_ST;
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
return o;
}
fixed4 frag (v2f i) : SV_Target
{
return 1;
}
ENDCG
}
}
}
Then you'll need a separate camera that renders ONLY the FOV mesh and saves it as an image, which we know as a RenderTexture. It will look like a black-and-white mask.
Then make another camera layer that only renders enemies.
Then add the generated mask to it.
Then put it on top of the main camera; a C# setup sketch follows.
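A minimal C# sketch of that camera setup (the layer name "FOV", the material reference, and the "_MaskTex" property name are assumptions for illustration, not from the original answer):

using UnityEngine;

public class FovMaskCamera : MonoBehaviour
{
    public Camera fovCamera;      // second camera that renders only the FOV mesh
    public Material maskMaterial; // material that combines the mask with the enemy layer

    RenderTexture mask;

    void Start()
    {
        // Black background + white FOV mesh = black-and-white mask.
        mask = new RenderTexture(Screen.width, Screen.height, 16);
        fovCamera.cullingMask = LayerMask.GetMask("FOV");
        fovCamera.clearFlags = CameraClearFlags.SolidColor;
        fovCamera.backgroundColor = Color.black;
        fovCamera.targetTexture = mask;

        // Hand the mask to whatever shader multiplies the enemy image by it.
        maskMaterial.SetTexture("_MaskTex", mask);
    }
}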
Here is the sample asset I exported along with the scene:
https://usaupload.com/shared/hkcayffn-9psb5e_loknmu3086nnh2--tbk0fm5s8xqj8ilnyok2zklav8-4mk7i8tf1tsjfbstgzrpw3upwi_w2e96k9nqm05-knhk3m_1237myy8nvco910c67afmo
The solution used in your tutorial is to have a shader that uses the stencil buffer to only show the parts of the enemies that stand inside your FOV mesh.
This solution is quite possible in any render pipeline.
Hope that helped ;)
If you are using URP on 2019.3 or later, you need to change the FieldOfView shader type to Universal Render Pipeline/Simple Lit. It has the same Render Face options as the shader from the video. https://docs.unity3d.com/Packages/com.unity.render-pipelines.universal@7.1/manual/simple-lit-shader.html

Coding Unity shaders with Visual Studio [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 5 years ago.
Recently I've been coding shaders for Unity in Visual Studio, and I've noticed that since Unity shaders are written in a combination of Unity's ShaderLab language and CG, Visual Studio 2015 doesn't seem to recognize the languages. Because of this, Visual Studio is not recognizing keywords, has no IntelliSense, and, worst of all, tabs incorrectly whenever I press Enter to go to a new line. I've tried this Visual Studio extension https://visualstudiogallery.msdn.microsoft.com/ed812631-a7d3-4ca3-9f84-7efb240c7bb5 and it doesn't seem to fully work. I was wondering if anyone on here has had experience working with shaders and knows an extension to fix this problem.
This plugin should resolve your problem:
ShaderlabVS
It supports Visual Studio 2013 and 2015; 2017 support is in the testing stage.
I basically used this for shaders:
http://wiki.unity3d.com/index.php/Silhouette-Outlined_Diffuse
specifically the part that says "Outline only",
with the description: "The thing that does the trick here is "Blend Zero One" which is to completely forego rendering our object and use only the destination color (i.e. whatever is behind the object). In effect, the object itself is invisible, but we still let the outline render itself. So that's what we're left with: only the outline."
You first need to make a shader script and place it wherever suits you; I always put these in a "Shaders" folder.
The code is basically on the site, but to make it easier for you I will paste it here. Be sure to read the code, because you can edit this pretty easily from the code or from the Unity Inspector.
Here is the code for the shader script:
Shader "Outlined/Silhouette Only" {
Properties {
_OutlineColor ("Outline Color", Color) = (0,0,0,1)
_Outline ("Outline width", Range (0.0, 0.03)) = .005
}
CGINCLUDE
#include "UnityCG.cginc"
struct appdata {
float4 vertex : POSITION;
float3 normal : NORMAL;
};
struct v2f {
float4 pos : POSITION;
float4 color : COLOR;
};
uniform float _Outline;
uniform float4 _OutlineColor;
v2f vert(appdata v) {
// just make a copy of incoming vertex data but scaled according to normal direction
v2f o;
o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
float3 norm = mul ((float3x3)UNITY_MATRIX_IT_MV, v.normal);
float2 offset = TransformViewToProjection(norm.xy);
o.pos.xy += offset * o.pos.z * _Outline;
o.color = _OutlineColor;
return o;
}
ENDCG
SubShader {
Tags { "Queue" = "Transparent" }
Pass {
Name "BASE"
Cull Back
Blend Zero One
// uncomment this to hide inner details:
//Offset -8, -8
SetTexture [_OutlineColor] {
ConstantColor (0,0,0,0)
Combine constant
}
}
// note that a vertex shader is specified here but its using the one above
Pass {
Name "OUTLINE"
Tags { "LightMode" = "Always" }
Cull Front
// you can choose what kind of blending mode you want for the outline
//Blend SrcAlpha OneMinusSrcAlpha // Normal
//Blend One One // Additive
Blend One OneMinusDstColor // Soft Additive
//Blend DstColor Zero // Multiplicative
//Blend DstColor SrcColor // 2x Multiplicative
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
half4 frag(v2f i) :COLOR {
return i.color;
}
ENDCG
}
}
Fallback "Diffuse"
}
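To switch a renderer over to this shader from script, something like this sketch should work (the component placement and the example values are my own, not part of the original answer):

using UnityEngine;

public class ApplyOutline : MonoBehaviour
{
    void Start()
    {
        // Build a material from the outline shader defined above.
        var outline = new Material(Shader.Find("Outlined/Silhouette Only"));
        outline.SetColor("_OutlineColor", Color.yellow);
        outline.SetFloat("_Outline", 0.01f);
        GetComponent<Renderer>().material = outline;
    }
}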
P.S. Can you tell me what kind of shaders you're aiming to make?

Unity3D visible seams on borders when tiling texture

For my game I have written a shader that allows my texture to tile nicely over multiple objects. I do that by choosing the uv not based on the relative position of the vertex, but on the absolute world position. The custom shader is as follows. Basically it just tiles the texture in a grid of 1x1 world units.
Shader "MyGame/Tile"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 200
CGPROGRAM
#pragma surface surf Lambert
sampler2D _MainTex;
struct Input
{
float2 uv_MainTex;
float3 worldPos;
};
void surf (Input IN, inout SurfaceOutput o)
{
//adjust UV for tiling
float2 cell = floor(IN.worldPos.xz);
float2 offset = IN.worldPos.xz - cell;
float2 uv = offset;
float4 mainTex = tex2D(_MainTex, uv);
o.Albedo = mainTex.rgb;
}
ENDCG
}
FallBack "Diffuse"
}
I have done this approach in Cg and in HLSL shaders on XNA before and it always worked like a charm. With the Unity shader, however, I get a very visible seam on the edges of the texture. I tried a Unity surface shader as well as a vertex/fragment shader, both with the same results.
The texture itself looks as follows. In my game it is actually a .tga, not a .png, but that doesn't cause the problem. The problem occurs on all texture filter settings and on repeat or clamp mode equally.
Now I've seen someone have a similar problem here: Seams between planes when lightmapping.
There was, however, no definitive answer on how to solve such a problem. Also, my problem doesn't relate to a lightmap or lighting at all. In the fragment shader I tested, there was no lighting enabled and the issue was still present.
The same question was also posted on the Unity answers site, but I received no answers and not a lot of views, so I am trying it here as well: Visible seams on borders when tiling texture
This describes the reason for your problem:
http://hacksoflife.blogspot.com/2011/01/derivatives-i-discontinuities-and.html
This is a great visual example, like yours:
http://aras-p.info/blog/2010/01/07/screenspace-vs-mip-mapping/
Unless you're going to disable mipmaps, I don't think this is solvable with Unity, because as far as I know, it won't let you use functions that let you specify what mip level to use in the fragment shader (at least on OS X / OpenGL ES; this might not be a problem if you're only targeting Windows).
That said, I have no idea why you're doing the fragment-level uv calculations that you are; just passing data from the vertex shader works just fine, with a tileable texture:
struct v2f {
    float4 position_clip : SV_POSITION;
    float2 position_world_xz : TEXCOORD;
};

#pragma vertex vert
v2f vert(float4 vertex : POSITION) {
    v2f o;
    o.position_clip = mul(UNITY_MATRIX_MVP, vertex);
    o.position_world_xz = mul(_Object2World, vertex).xz;
    return o;
}

#pragma fragment frag
uniform sampler2D _MainTex;
fixed4 frag(v2f i) : COLOR {
    return tex2D(_MainTex, i.position_world_xz);
}
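Alternatively, if your target platform does support gradient sampling, you can keep the fragment-level tiling from the question and feed the sampler derivatives taken from the continuous coordinate, so the mip selection never sees the frac() discontinuity. A sketch against the question's surface shader (this is my own suggestion, not from the original answer, and tex2Dgrad availability depends on the platform):

void surf (Input IN, inout SurfaceOutput o)
{
    float2 uvc = IN.worldPos.xz; // continuous, no jump at cell borders
    // Sample the wrapped UV, but derive the mip level from the continuous value.
    float4 mainTex = tex2Dgrad(_MainTex, frac(uvc), ddx(uvc), ddy(uvc));
    o.Albedo = mainTex.rgb;
}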

CG: Specify a variable not to be interpolated between vertex and fragment shader

I'm using CG for writing shaders inside Unity3D.
I'm using vertex color attributes for passing some parameters to the shader. They won't be used for defining colors, and should be forwarded from the vertex shader to the pixel shader without modifying them.
This is the structure I'm taking as input from Unity3D to the vertex shader:
struct appdata_full {
    float4 vertex : POSITION;
    float4 tangent : TANGENT;
    float3 normal : NORMAL;
    float4 texcoord : TEXCOORD0;
    float4 texcoord1 : TEXCOORD1;
    fixed4 color : COLOR;
#if defined(SHADER_API_XBOX360)
    half4 texcoord2 : TEXCOORD2;
    half4 texcoord3 : TEXCOORD3;
    half4 texcoord4 : TEXCOORD4;
    half4 texcoord5 : TEXCOORD5;
#endif
};
This is the structure returned by the vertex shader as input to the fragment shader:
struct v2f {
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0;
    fixed4 col : COLOR;
};
If I simply forward the parameter to the fragment shader, of course it will be interpolated:
v2f vert (appdata_full v)
{
    v2f output;
    //....
    output.col = v.color;
}
I'd like to pass the v.color parameter to the fragment shader without it being interpolated.
Is this possible? If yes, how?
EDIT
As Tim pointed out, this is the expected behavior, because the shader can't do anything other than interpolate colors that are passed from the vertex shader to the fragment shader.
I'll try to explain better what I'm trying to achieve. I'm using per-vertex colors to store information other than colors. Without going into all the details of what I'm doing with it, let's say you can consider each vertex color as an ID (each vertex of the same triangle will have the same color; actually, each vertex of the same mesh will).
So I used the color trick to mask some parameters, because I have no other way to do this. Now this piece of information must be available to the fragment shader in some way.
If I pass it as an out parameter of the vertex shader, this information encoded into a color arrives at the fragment shader interpolated, so the fragment shader can no longer use it.
I'm looking for a way to propagate this information unchanged all the way to the fragment shader (maybe it is possible to use a global variable or something like that? If yes, how?).
I'm not sure this counts as an answer, but it's a little much for a comment. As Bjorke points out, the fragment shader will always receive an interpolated value. If/when Unity supports OpenGL 4.0, you might have access to interpolation qualifiers, namely 'flat', which disables interpolation and derives all values from a provoking vertex.
That said, the problem with trying to assign the same "color" value to all vertices of a triangle is that the vertex shader iterates over the vertices once, not per triangle. There will always be a "boundary" region where some vertex shares multiple edges with other vertices of a different "color" or "id"; see my dumb example below. When applied to a box at (0,0,0), the top will be red, the bottom green, and the middle blue.
Shader "custom/colorbyheight" {
Properties {
_Unique_ID ("Unique Identifier", float) = 1.0
}
SubShader {
Pass {
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct v2f {
float4 pos : SV_POSITION;
fixed4 color : COLOR;
};
uniform float _Unique_ID;
v2f vert (appdata_base v)
{
v2f o;
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
float3 worldpos = mul(_Object2World, v.vertex).xyz;
if(worldpos[1] >= 0.0)
o.color.xyz = 0.35; //unique_id = 0.35
else
o.color.xyz = 0.1; //unique_id = 0.1
o.color.w = 1.0;
return o;
}
fixed4 frag (v2f i) : COLOR0 {
// local unique_id's set by the vertex shader and stored in the color
if(i.color.x >= 0.349 && i.color.x <=0.351)
return float4(1.0,0.0,0.0,1.0); //red
else if(i.color.x >= 0.099 && i.color.x <=0.11)
return float4(0.0,1.0,0.0,1.0); //green
// global unique_id set by a Unity script
if(_Unique_ID == 42.0)
return float4(1.0,1.0,1.0,1.0); //white
// Fallback color = blue
return float4(0.0,0.0,1.0,1.0);
}
ENDCG
}
}
}
In your addendum note you say "Actually each vertex of the same mesh." If that's the case, why not use a modifiable property, like the one I have included above? Each mesh then just needs a script to change the unique_id.
public class ModifyShader : MonoBehaviour {
    public float unique_id = 1;

    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {
        renderer.material.SetFloat( "_Unique_ID", unique_id );
    }
}
I know this is an old thread, but it's worth answering anyway, since this is one of the top Google results.
You can now use the nointerpolation qualifier on your variables in regular CG shaders, i.e.
nointerpolation fixed3 diff : COLOR0;
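Applied to the v2f struct from the question, that looks like this (a sketch):

struct v2f {
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0;
    nointerpolation fixed4 col : COLOR; // taken from the provoking vertex, not blended
};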
This is a pretty old thread, but I recently had a similar issue and I found a super simple answer. OS X Mavericks now supports OpenGL 4.1, so soon it won't be an issue at all, but it may still take a while before Unity3D picks it up.
Anyway, there is a neat way to enable flat shading in Unity even on earlier OS X versions (e.g. Mountain Lion)!
The shader below will do the job (the crucial part is the line with #extension; otherwise you'd get a compilation error for using the keyword "flat").
Shader "GLSL flat shader" {
SubShader {
Pass {
GLSLPROGRAM
#extension GL_EXT_gpu_shader4 : require
flat varying vec4 color;
#ifdef VERTEX
void main()
{
color = gl_Color;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
#endif
#ifdef FRAGMENT
void main()
{
gl_FragColor = color; // set the output fragment color
}
#endif
ENDGLSL
}
}
}
Got to it by combining things from:
http://poniesandlight.co.uk/notes/flat_shading_on_osx/
http://en.wikibooks.org/wiki/GLSL_Programming/Unity/Debugging_of_Shaders
The GPU will always interpolate between values. If you want a constant value for a triangle, you need to set the same value for all vertices of that triangle. This can at times be inefficient, but it's how OpenGL (and DirectX) works. There is no inherent notion of a "face" value.
You might do this: glShadeModel(GL_FLAT). This turns off interpolation for all fragment shader inputs, and is available in older OpenGL also (pre 4.0).
If you have some inputs you want to interpolate and some you don't, render once with GL_FLAT to a texture of the same resolution as your output, and then render again with GL_SMOOTH and sample the texture to read the flat values for each pixel (while also getting interpolated values in the usual way).
If you can use DirectX instead, you can use the nointerpolation modifier on individual fragment shader inputs (shader model 4 or later).
The following works for me.
Unfortunately, DX uses vertex 0 as the provoking vertex, while GL by default uses vertex 2.
You can change this in GL, but glProvokingVertex does not seem to be exposed.
We are doing flat shading, and this reduces our vertex count significantly.
We have to reorder the triangles and compute normals in a special way (if anyone is interested I can post example source).
The problem is that we have to have different meshes on GL vs. DX, as the triangle indices need to be rotated in order for the triangles to use the appropriate provoking vertex.
Maybe there is some way to execute a GL command via a plugin.