I previously wrote a few custom shaders for my game, but now I want to upgrade to the Universal Render Pipeline (URP) in Unity, and I'm struggling to recreate one of the old shaders with the new Shader Graph. How would I do this given the code below?
The shader is applied to a tilemap and assigns the appropriate color to each tile (is it grass, water, or sand?). For each tile type (grass, sand, water) a script supplies a start color, an end color, and the number of steps between them; the colors arrive as the _StartCols and _EndCols arrays below, and the shader blends between them based on the tile's UVs.
I have tried using the 'Tiling and Offset' node, to no avail; whatever I connect, I can't get the colors I want. My current version of Unity is 2019.3, and I'm using Shader Graph to try to recreate this old shader.
Here is the code for the old shader:
Shader "Custom/Terrain"
{
Properties
{
_MainTex ("Albedo (RGB)", 2D) = "white" {}
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 200
CGPROGRAM
// Physically based Standard lighting model, and enable shadows on all light types
#pragma surface surf Standard fullforwardshadows
// Use shader model 3.0 target, to get nicer looking lighting
#pragma target 3.0
sampler2D _MainTex;
struct Input
{
float2 uv_MainTex;
};
float4 _StartCols[3];
float4 _EndCols[3];
void surf (Input IN, inout SurfaceOutputStandard o)
{
int colIndex = (int)IN.uv_MainTex.x;
float t = IN.uv_MainTex.y;
o.Albedo = lerp(_StartCols[colIndex],_EndCols[colIndex],t);
}
ENDCG
}
FallBack "Diffuse"
}
I haven't received any error messages so far, but the colors are way off with the Shader Graph I have configured. I know the graph isn't complete yet; this is the point where I'm stuck.
Here is the result I previously had before changing render pipelines:
http://prntscr.com/p0xcwl
Here is what I have now:
http://prntscr.com/p0xdvf
This is what my Shader Graph currently looks like:
http://prntscr.com/p0xfhh
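In case it helps, here is the direction I'm currently exploring (just a sketch, not something I have working): since Shader Graph in 2019.3 has no array properties, the array lookup could go into a Custom Function node backed by an HLSL file, with the arrays declared as globals so a script can still fill them with Material.SetColorArray. The file and function names are my own placeholders:

// TerrainColor.hlsl - placeholder file for a Custom Function node.
// The arrays are globals rather than graph properties, since Shader
// Graph has no array properties; a script can still set them.
float4 _StartCols[3];
float4 _EndCols[3];

void TerrainColor_float(float2 UV, out float3 Color)
{
    int colIndex = (int)UV.x; // uv.x selects the tile type
    float t = UV.y;           // uv.y is the blend factor
    Color = lerp(_StartCols[colIndex], _EndCols[colIndex], t).rgb;
}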
If you need any more information, please don't hesitate to ask!
I've been having trouble getting custom post-processing shaders to work with the 2D URP renderer. After a lot of searching, I found a setup that lets me use post-processing effects in 2D with URP by using camera stacking and renderer features: a base camera renders most of the scene, including the 2D lights (the main reason I'm using URP), and a second overlay camera renders the post-processing effect. The issue is that for some reason the quality drops a lot when the camera that applies the post-processing effect is enabled. Here are a couple of examples:
With post-processing camera enabled
With post-processing camera disabled
The shader shouldn't be doing anything at the moment, but if I make it do something, like inverting the colors, the effect does get applied while the camera is enabled. The UI has its own camera, so it's unaffected by both the low quality and the shader. I've found that disabling the renderer feature brings the quality back as well, but the shader doesn't seem to be the cause: I can detach the shader from the feature without disabling the feature, and the low quality stays. I'm still pretty new to shaders, though, so in case something in my shader is causing this, here's the code:
Shader "PixelationShader"
{
SubShader
{
Tags { "RenderType" = "Opaque" "RenderPipeline" = "UniversalPipeline"}
LOD 100
ZWrite Off Cull Off
Pass
{
Name "PixelationShader"
HLSLPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
struct Attributes
{
float4 positionHCS : POSITION;
float2 uv : TEXCOORD0;
UNITY_VERTEX_INPUT_INSTANCE_ID
};
struct Varyings
{
float4 positionCS : SV_POSITION;
float2 uv : TEXCOORD0;
UNITY_VERTEX_OUTPUT_STEREO
};
Varyings vert(Attributes input)
{
Varyings output;
// Note: The pass is setup with a mesh already in clip
// space, that's why, it's enough to just output vertex
// positions
output.positionCS = float4(input.positionHCS.xyz, 1.0);
#if UNITY_UV_STARTS_AT_TOP
output.positionCS.y *= -1;
#endif
output.uv = input.uv;
return output;
}
TEXTURE2D_X(_CameraOpaqueTexture);
SAMPLER(sampler_CameraOpaqueTexture);
half4 frag(Varyings input) : SV_Target
{
UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);
float4 color = SAMPLE_TEXTURE2D_X(_CameraOpaqueTexture, sampler_CameraOpaqueTexture, input.uv);
//color.rgb = 1 - color.rgb;
return color;
}
ENDHLSL
}
}
}
Please let me know if you have any ideas, thanks! Also, the editor light icons you can see in the images have started appearing in game as well; if anyone knows how to remove those, or how to fix the white lines at the edges of the screen, that would be handy to know too.
Edit: I've noticed the quality difference isn't very visible in the images I posted, but it's much more noticeable when actually playing the game.
_CameraOpaqueTexture uses bilinear downsampling by default. You can change that in the Universal Render Pipeline asset you are using: set the "Opaque Downsampling" dropdown under "Rendering" to "None".
After trying a bunch of different things, I decided to just remove URP from my project and use 3D lights on 2D sprites instead
I'm using a shader (code below) that turns an image into grayscale (with transparency if needed).
Everything was perfect until I updated my device to iOS 15. Since that update, my shaders glitch as soon as the scene is rendered.
After many days of searching for a solution, I noticed that this is related to the iPhone's Dark Mode.
Here is a "concept" example to show what currently happens:
The Grayscale shader is applied onto a red cube.
The cube A runs on an iPhone with Dark Mode activated (which is the result I get in Unity, the correct one).
The cube B represents the same object with Dark Mode disabled.
The problem is that I've been using these shaders on a lot of items inside my application, so the UI ends up inconsistent and ugly depending on the user's Dark Mode preference.
Note: I don't think the problem is the shader itself, because on iOS versions before 15 it works fine. I think it's something about how iOS 15 handles shaders with transparency effects, but that's just a supposition, because I still don't know how to work with shaders (I'm a student).
This is the shader I'm using:
Shader "Unlit/NewUnlitShader"
{
Properties
{
_MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}
_EffectAmount ("Effect Amount", Range (0, 1)) = 1.0
}
SubShader
{
Tags
{
"Queue"="Transparent"
"IgnoreProjector"="True"
"RenderType"="Transparent"
}
LOD 200
Blend SrcAlpha OneMinusSrcAlpha
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata_t
{
float4 vertex : POSITION;
float2 texcoord : TEXCOORD0;
};
struct v2f
{
float4 vertex : SV_POSITION;
half2 texcoord : TEXCOORD0;
};
sampler2D _MainTex;
float4 _MainTex_ST;
uniform float _EffectAmount;
v2f vert (appdata_t v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.texcoord = TRANSFORM_TEX(v.texcoord, _MainTex);
return o;
}
fixed4 frag (v2f i) : SV_Target
{
fixed4 c = tex2D(_MainTex, i.texcoord);
c.rgb = lerp(c.rgb, dot(c.rgb, float3(0.3, 0.59, 0.11)), _EffectAmount);
return c;
}
ENDCG
}
}
Fallback "Standard"
}
Is this a bug or am I missing something?
UPDATE - SOLVED
It's a bug, Unity devs have been notified about this.
I experienced something similar, where materials (and presumably shaders) looked different in an iOS build with Dark Mode off compared to the editor or an iOS build with Dark Mode on.
Until this bug is solved, a dirty hack is to add this key to the Info.plist:
UIUserInterfaceStyle = Dark
Info.plist
This basically forces the app to use Dark Mode. It works the same for
UIUserInterfaceStyle = Light
"shader works < iOS 15" doesn't mean the shader itself is always correct.
Some simple shader variable types like float, half, and fixed can give you total different result in different devices and OS.
"half" and "fixed" variables for better performance, while "float" for less bug.
It happens mostly on mobile, due to different CPU/GPU specs and also your Graphic APIs option.
Another key word will be "Color Space", please check Linear & Gamma option in your Unity Build Player settings.
A broken shader will return pink color. If it's not pink, it must be some math issue in your shader for wrong result.
The shader code works very straight forwarding. If the rendering result changes after a while, it seems likely some variables are changed in runtime too. And obviously the plugin which you are using, is also included lots of math calculation between C# and Shader.
You can imagine this:
When the C# code is trying to get variable from Shader, but shader return a wrong variable for your calculation in C#. Then, C# assign the wrong calculation result again to the Shader.
This will become infinite loop for wrong result.
Unity UI effects (with shaders) are not reliable: sometimes they are simply not updated, and you have to force them to update via script. The commands below may help sometimes, but not always:
Canvas.ForceUpdateCanvases();
ScrollRect.GraphicUpdateComplete();
Thus, you should contact the developer who maintains this plugin, as they know best how their plugin works.
Otherwise, you should start writing your own shaders instead. A grayscale shader is extremely easy to write.
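For instance, a minimal grayscale fragment function could look like this (my own sketch, not code from the plugin; the luminance weights match the ones in the question's shader):

fixed4 frag (v2f i) : SV_Target
{
    fixed4 c = tex2D(_MainTex, i.texcoord);
    // a weighted average of R, G and B approximates perceived luminance
    fixed grey = dot(c.rgb, fixed3(0.3, 0.59, 0.11));
    return fixed4(grey, grey, grey, c.a); // keep the original alpha
}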
Edit 2021-12-07:
From your shader, I can't see any relationship between the grayscale and the alpha channel:
c.rgb = lerp(c.rgb, dot(c.rgb, float3(0.3, 0.59, 0.11)), _EffectAmount);
I think this would be a proper way to achieve what you need (note that greyColor has to be a scalar here, otherwise it cannot be assigned to the alpha channel):
fixed greyColor = dot(c.rgb, float3(0.3, 0.59, 0.11));
c.rgb = lerp(c.rgb, greyColor, _EffectAmount);
c.a = greyColor;
Meanwhile, removing the "Fallback" line should help with debugging, as the fallback shader can sometimes override your own shader:
Fallback "Standard" // <--- remove this line
There is also a mismatched variable type in your original code; it should be float2 instead of half2:
struct v2f
{
    float4 vertex : SV_POSITION;
    float2 texcoord : TEXCOORD0;
};
I'm trying to make a decal shader to use with a projector in Unity. Here's what I've put together:
Shader "Custom/color_projector"
{
Properties {
_Color ("Tint Color", Color) = (1,1,1,1)
_MainTex ("Cookie", 2D) = "gray" {}
}
Subshader {
Tags {"Queue"="Transparent"}
Pass {
ZTest Less
ColorMask RGB
Blend One OneMinusSrcAlpha
Offset -1, -1
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct v2f {
float4 uvShadow : TEXCOORD0;
float4 pos : SV_POSITION;
};
float4x4 unity_Projector;
float4x4 unity_ProjectorClip;
v2f vert (float4 vertex : POSITION)
{
v2f o;
o.pos = UnityObjectToClipPos (vertex);
o.uvShadow = mul (unity_Projector, vertex);
return o;
}
sampler2D _MainTex;
fixed4 _Color;
fixed4 frag (v2f i) : SV_Target
{
fixed4 tex = tex2Dproj (_MainTex, UNITY_PROJ_COORD(i.uvShadow));
return _Color * tex.a;
}
ENDCG
}
}
}
This works well in most situations:
However, whenever it projects onto a transparent surface (or multiple surfaces), it seems to render an extra time for each surface. Here, I've broken up the divide between the grass and the paving using grass textures with transparent areas:
I've tried numerous blending options and all of the ZTest options. This is the best I can get it to look.
From reading around, I gather this might be because a transparent shader does not write to the depth buffer. I tried adding ZWrite On, and I tried doing a pass before the main pass:
Pass {
    ZWrite On
    ColorMask 0
}
But neither had any effect at all.
How can this shader be modified so that it only projects the texture once on the nearest geometries?
Desired result (photoshopped):
The problem is due to how projectors work. Basically, they render all meshes within their field of view a second time, except with a different shader. In your case, this means that both the ground and the plane with the grass are rendered twice and layered on top of each other. I think it should be possible to fix this in two steps:
First, add the following to the tags of the transparent (grass) shader:
"IgnoreProjector"="True"
Then, change the render queue of your projector from "Transparent" to "Transparent+1". This means that the ground will render first, then the grass edges, and finally the projector will project onto the ground (except appearing on top, since it is rendered last).
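Concretely, the two tag lines would look like this (a sketch, assuming the grass shader already renders in the transparent queue):
// in the grass (transparent) shader:
Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" }
// in the projector shader:
Tags { "Queue"="Transparent+1" }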
As for the blending, I think you want regular alpha blending:
Blend SrcAlpha OneMinusSrcAlpha
Another option if you are using deferred rendering is to use deferred decals. These are both cheaper and usually easier to use than projectors.
For my game I have written a shader that allows my texture to tile nicely over multiple objects. I do that by choosing the UV based not on the relative position of the vertex, but on its absolute world position. The custom shader is as follows; basically, it just tiles the texture in a grid of 1x1 world units.
Shader "MyGame/Tile"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 200
CGPROGRAM
#pragma surface surf Lambert
sampler2D _MainTex;
struct Input
{
float2 uv_MainTex;
float3 worldPos;
};
void surf (Input IN, inout SurfaceOutput o)
{
//adjust UV for tiling
float2 cell = floor(IN.worldPos.xz);
float2 offset = IN.worldPos.xz - cell;
float2 uv = offset;
float4 mainTex = tex2D(_MainTex, uv);
o.Albedo = mainTex.rgb;
}
ENDCG
}
FallBack "Diffuse"
}
I have used this approach in Cg and HLSL shaders on XNA before, and it always worked like a charm. With the Unity shader, however, I get a very visible seam on the edges of the texture. I tried a Unity surface shader as well as a vertex/fragment shader, both with the same result.
The texture itself looks as follows. In my game it is actually a .tga, not a .png, but that doesn't cause the problem. The problem occurs with all texture filter settings, and equally in repeat and clamp wrap modes.
Now I've seen someone have a similar problem here: Seams between planes when lightmapping.
There was, however, no definitive answer on how to solve such a problem. Also, my problem doesn't relate to a lightmap or lighting at all: in the fragment shader I tested, no lighting was enabled and the issue was still present.
The same question was also posted on the Unity Answers site, but it received no answers and not many views, so I am trying here as well: Visible seams on borders when tiling texture
This describes the reason for your problem:
http://hacksoflife.blogspot.com/2011/01/derivatives-i-discontinuities-and.html
This is a great visual example, like yours:
http://aras-p.info/blog/2010/01/07/screenspace-vs-mip-mapping/
Unless you're going to disable mipmaps, I don't think this is solvable in Unity, because as far as I know it won't let you use functions that specify which mip level to sample in the fragment shader (at least on OS X / OpenGL ES; this might not be a problem if you're only targeting Windows).
That said, I have no idea why you're doing those UV calculations at the fragment level; just passing the data on from the vertex shader works just fine with a tileable texture:
struct v2f {
    float4 position_clip : SV_POSITION;
    float2 position_world_xz : TEXCOORD;
};

#pragma vertex vert
v2f vert(float4 vertex : POSITION) {
    v2f o;
    // UNITY_MATRIX_MVP and _Object2World are the old built-in names;
    // newer Unity versions use UnityObjectToClipPos and unity_ObjectToWorld
    o.position_clip = mul(UNITY_MATRIX_MVP, vertex);
    o.position_world_xz = mul(_Object2World, vertex).xz;
    return o;
}

#pragma fragment frag
uniform sampler2D _MainTex;
fixed4 frag(v2f i) : COLOR {
    return tex2D(_MainTex, i.position_world_xz);
}
I have a scene where I really need depth of field.
Apparently, Unity's depth of field doesn't work with any of the shaders, neither built-in nor custom, that process the alpha channel.
So this happens, for example, with the Transparent/Diffuse shader. Transparent/Cutout works instead.
Here's the simplest custom shader I made that triggers this behaviour:
Shader "Custom/SimpleAlpha" {
Properties {
_MainTex ("Base (RGBA)", 2D) = "white" {}
}
SubShader {
Tags { "RenderType"="Transparent" "Queue"="Transparent" }
//Tags { "RenderType"="Opaque" }
LOD 300
ZWrite Off
CGPROGRAM
#pragma surface surf Lambert alpha
#include "UnityCG.cginc"
sampler2D _MainTex;
struct Input {
float2 uv_MainTex;
};
void surf (Input IN, inout SurfaceOutput o) {
half4 c = tex2D (_MainTex, IN.uv_MainTex);
o.Albedo = c.rgb;
o.Alpha = c.a;
}
ENDCG
}
FallBack "Diffuse"
}
If you try the code in a project, you'll notice that EVERY object using the shader is blurred by the very same amount, instead of being blurred based on its Z value.
Any help is much appreciated.
Thanks in advance.
I posted the same question on Unity Answers: http://answers.unity3d.com/questions/438556/my-shader-brakes-depth-of-field.html
Since depth of field is a post-processing effect that uses the values stored in the Z-buffer, the following line is the culprit:
ZWrite Off
For transparent objects, Z-buffer writes are usually disabled because the Transparent render queue doesn't need the Z-buffer.
So if you remove that line, you should see depth of field correctly applied to transparent objects. But objects lying behind fully transparent areas will now be blurred using the wrong Z value. As a quick fix, you could try to use an alpha test like AlphaTest Greater 0.1.
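A minimal sketch of how the SubShader from the question might change, following that suggestion literally (the rest of the shader stays as posted):

SubShader {
    Tags { "RenderType"="Transparent" "Queue"="Transparent" }
    LOD 300
    // ZWrite Off removed: depth is written again, so depth of field
    // can read correct Z values for these objects
    ZWrite On
    AlphaTest Greater 0.1 // quick fix: discard nearly transparent pixels
                          // so they don't write misleading depth values

    CGPROGRAM
    #pragma surface surf Lambert alpha
    sampler2D _MainTex;
    struct Input {
        float2 uv_MainTex;
    };
    void surf (Input IN, inout SurfaceOutput o) {
        half4 c = tex2D (_MainTex, IN.uv_MainTex);
        o.Albedo = c.rgb;
        o.Alpha = c.a;
    }
    ENDCG
}

Note that AlphaTest is the legacy fixed-function command; in a surface shader, the equivalent modern route would be the alphatest:_Cutoff option on the #pragma surface line.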