I am trying to learn about shaders by following this tutorial. I finished the first part, where I was able to draw the outlines using depth. But even though I copied everything over during the normal map phase, I only get a black screen. Is there something wrong with the code below?
My outline shader looks like this; at the moment I am only rendering the normals, but the screen is all black. I've modified the other files as well according to the tutorial. Maybe there's a problem with the new version of Unity? I don't know. Help would be much appreciated.
Shader "Hidden/Roystan/Outline Post Process"
{
SubShader
{
Cull Off ZWrite Off ZTest Always
Pass
{
// Custom post processing effects are written in HLSL blocks,
// with lots of macros to aid with platform differences.
// https://github.com/Unity-Technologies/PostProcessing/wiki/Writing-Custom-Effects#shader
HLSLPROGRAM
#pragma vertex VertDefault
#pragma fragment Frag
#include "Packages/com.unity.postprocessing/PostProcessing/Shaders/StdLib.hlsl"
TEXTURE2D_SAMPLER2D(_MainTex, sampler_MainTex);
// _CameraNormalsTexture contains the view space normals transformed
// to be in the 0...1 range.
TEXTURE2D_SAMPLER2D(_CameraNormalsTexture, sampler_CameraNormalsTexture);
TEXTURE2D_SAMPLER2D(_CameraDepthTexture, sampler_CameraDepthTexture);
// Data pertaining to _MainTex's dimensions.
// https://docs.unity3d.com/Manual/SL-PropertiesInPrograms.html
float4 _MainTex_TexelSize;
float _Scale;
float _DepthThreshold;
float _NormalThreshold;
// Combines the top and bottom colors using normal blending.
// https://en.wikipedia.org/wiki/Blend_modes#Normal_blend_mode
// This performs the same operation as Blend SrcAlpha OneMinusSrcAlpha.
float4 alphaBlend(float4 top, float4 bottom)
{
float3 color = (top.rgb * top.a) + (bottom.rgb * (1 - top.a));
float alpha = top.a + bottom.a * (1 - top.a);
return float4(color, alpha);
}
float4 Frag(VaryingsDefault i) : SV_Target
{
float halfScaleFloor = floor(_Scale * 0.5);
float halfScaleCeil = ceil(_Scale * 0.5);
float2 bottomLeftUV = i.texcoord - float2(_MainTex_TexelSize.x, _MainTex_TexelSize.y) * halfScaleFloor;
float2 topRightUV = i.texcoord + float2(_MainTex_TexelSize.x, _MainTex_TexelSize.y) * halfScaleCeil;
float2 bottomRightUV = i.texcoord + float2(_MainTex_TexelSize.x * halfScaleCeil, -_MainTex_TexelSize.y * halfScaleFloor);
float2 topLeftUV = i.texcoord + float2(-_MainTex_TexelSize.x * halfScaleFloor, _MainTex_TexelSize.y * halfScaleCeil);
float depth0 = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, bottomLeftUV).r;
float depth1 = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, topRightUV).r;
float depth2 = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, bottomRightUV).r;
float depth3 = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, topLeftUV).r;
float3 normal0 = SAMPLE_TEXTURE2D(_CameraNormalsTexture, sampler_CameraNormalsTexture, bottomLeftUV).rgb;
float3 normal1 = SAMPLE_TEXTURE2D(_CameraNormalsTexture, sampler_CameraNormalsTexture, topRightUV).rgb;
float3 normal2 = SAMPLE_TEXTURE2D(_CameraNormalsTexture, sampler_CameraNormalsTexture, bottomRightUV).rgb;
float3 normal3 = SAMPLE_TEXTURE2D(_CameraNormalsTexture, sampler_CameraNormalsTexture, topLeftUV).rgb;
float3 normalFiniteDifference0 = normal1 - normal0;
float3 normalFiniteDifference1 = normal3 - normal2;
float edgeNormal = sqrt(dot(normalFiniteDifference0, normalFiniteDifference0) + dot(normalFiniteDifference1, normalFiniteDifference1));
edgeNormal = edgeNormal > _NormalThreshold ? 1 : 0;
return edgeNormal;
float depthFiniteDifference0 = depth1 - depth0;
float depthFiniteDifference1 = depth3 - depth2;
float edgeDepth = sqrt(pow(depthFiniteDifference0, 2) + pow(depthFiniteDifference1, 2)) * 100;
edgeDepth = edgeDepth > _DepthThreshold ? 1 : 0;
float edge = max(edgeDepth, edgeNormal);
//return edge;
}
ENDHLSL
}
}
}
Just after HLSLPROGRAM you have two #pragma directives, one for the vertex method and one for the fragment method, but later in the code you only have the fragment function (Frag); the vertex one (VertDefault) is missing. As far as I know, you must have both methods implemented for a shader to work properly. Hope I helped.
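For reference, if your version of StdLib.hlsl doesn't already define VertDefault (recent versions of the post-processing package do, so check the include first), a hand-written equivalent could look roughly like this. This is only a sketch following the StdLib.hlsl conventions (AttributesDefault, VaryingsDefault and TransformTriangleVertexToUV come from that include), not guaranteed verbatim:

VaryingsDefault VertDefault(AttributesDefault v)
{
    VaryingsDefault o;
    // The post-processing stack renders a fullscreen triangle,
    // so the position can pass through without an MVP transform.
    o.vertex = float4(v.vertex.xy, 0.0, 1.0);
    o.texcoord = TransformTriangleVertexToUV(v.vertex.xy);
#if UNITY_UV_STARTS_AT_TOP
    o.texcoord = o.texcoord * float2(1.0, -1.0) + float2(0.0, 1.0);
#endif
    return o;
}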
I already posted this question on Unity Answers yesterday, but maybe someone here can help? I've been trying to do some stuff that involves getting an image from a native plugin (in the form of a .dll file). I load the image data into a native buffer and then push that to the GPU in the form of a structured compute buffer. From there, I display the image using a shader (basically just doing something like uint idx = x + y * width to get the correct index). And this works great on my laptop (ignore the low resolution, I lowered it to be able to inspect the values for each pixel; this is exactly how it's supposed to look).
But when I try it on my desktop, all I get is this mess:
It's clearly displaying something; I'm almost able to make out the contours of the text (so it doesn't seem like I'm just getting random noise). But I can't work out what's wrong here.
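For context, the C# side of the setup described above looks roughly like this (a simplified sketch with illustrative names, not my actual code):

// Each packed element on the GPU side is 3 uints = 12 bytes = 4 RGB pixels.
int packedCount = width * height * imageCount / 4;
ComputeBuffer buffer = new ComputeBuffer(packedCount, sizeof(uint) * 3);
buffer.SetData(packedPixelData); // uint[] filled from the native plugin
material.SetBuffer("InputBuffer", buffer);
material.SetVector("Resolution", new Vector2(width, height));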
So far I've tried:
syncing the code across the two devices (it's exactly the same)
changing the Unity version (tried 2020.3.26f1 and 2021.2.12f on both machines)
updating the graphics drivers
checking the DirectX version (DirectX 12 on both)
changing the editor game window resolution
comparing the contents of the buffer (the ComputeBuffer.GetData method is getting the same completely valid values on both machines)
building the project on both machines (both builds are working on my laptop and broken on my desktop)
Especially the last point really confused me. I'm running the same executable on both machines, and it's working on my laptop with integrated graphics (not sure whether that could be relevant) but not on my desktop with a more modern dedicated GPU? The only idea I have left is that there might be some kind of optimization going on with my desktop's AMD GPU that's not happening on my laptop's Intel GPU. Any ideas on what I could try in the Radeon software? Maybe it could even be some sort of bug (with Unity or with my graphics driver)?
I'd be more than happy about any ideas on what could be the problem here (because I have no clue at this point). And sorry if my grammar is a bit off at times; I'm not a native speaker.
EDIT: Here's the shader I use to display the image.
Shader "Hidden/ReadUnpacked"
{
Properties
{
_MainTex("Texture", 2D) = "white" {}
}
SubShader
{
// No culling or depth
Cull Off ZWrite Off ZTest Always
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
static const uint PACKED_SIZE = 3;
static const uint PIXELS_PER_PACK = 4;
static const uint BYTES_PER_PIXEL = 8;
static const uint PERCISION = 0xFF; // 0xFF = 2^8
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
};
struct packed4
{
uint p[PACKED_SIZE];
};
struct unpacked4
{
fixed4 p[PIXELS_PER_PACK];
};
StructuredBuffer<packed4> InputBuffer;
uint ImgIdx;
float2 Resolution;
float2 TexelOffset;
fixed unpackSingle(packed4 val, uint idx)
{
uint pid = idx / PIXELS_PER_PACK; // pixel index
uint sid = idx % PIXELS_PER_PACK * BYTES_PER_PIXEL; // shift index
return ((val.p[pid] >> sid) & PERCISION) / (half)PERCISION;
}
unpacked4 unpack(packed4 packed)
{
unpacked4 unpacked;
half r, g, b;
uint idx = 0;
[unroll(PIXELS_PER_PACK)] for (uint i = 0; i < PIXELS_PER_PACK; i++)
{
fixed4 upx = fixed4(0, 0, 0, 1);
[unroll(PACKED_SIZE)] for (uint j = 0; j < PACKED_SIZE; j++)
{
upx[j] = unpackSingle(packed, idx++);
}
unpacked.p[i] = upx;
}
return unpacked;
}
fixed4 samplePackedBuffer(float2 uv)
{
int2 tc = float2(uv.x, 1 - uv.y) * Resolution;
uint idx = tc.x + tc.y * Resolution.x; // image pixel index
idx += Resolution.x * Resolution.y * ImgIdx;
uint gid = floor(idx / PIXELS_PER_PACK); // packed global index
uint lid = idx % PIXELS_PER_PACK; // packed local index
packed4 ppx = InputBuffer[gid];
unpacked4 upx = unpack(ppx);
return upx.p[lid];
}
v2f vert(appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = v.uv;
return o;
}
fixed4 frag(v2f i) : SV_Target
{
fixed4 col = samplePackedBuffer(i.uv);
return col;
}
ENDCG
}
}
}
You should try all the other graphics APIs (D3D11, Vulkan, OpenGL, ...) and see whether the problem persists; the same shader can behave differently across them.
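Before switching, it may be worth confirming what each machine is actually running (a small sketch using Unity's SystemInfo API):

using UnityEngine;

public class LogGraphicsApi : MonoBehaviour
{
    void Start()
    {
        // Prints e.g. "Direct3D11" or "Vulkan"; compare the output on both machines.
        Debug.Log(SystemInfo.graphicsDeviceType);
        Debug.Log(SystemInfo.graphicsDeviceName);
    }
}

You can then force a specific API by disabling "Auto Graphics API" in the Player Settings and reordering the list.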
I made a grid shader which is working fine. However, it does not get impacted at all by any light. For context, regarding the plane that has the shader:
Its dimensions are 1000x1x1000 (so wide enough)
It displays shadows with any other material, and Cast Shadows is on
Using Unity 2019.3.0f3
Universal Render Pipeline
The plane using the custom grid shader (not receiving light)
The plane using a basic shader (receiving light)
Custom grid shader code
I did try a few solutions, including adding FallBack "Diffuse" at the end, and the #include plus TRANSFER_SHADOW approach, but these didn't work either.
You need to tell your shader what to do with the light information if you want it to be lit. Here is an example applying diffuse light directly to the albedo of your grid shader:
Shader "Custom/Grid"
{
Properties
{
_GridThickness("Grid Thickness", Float) = 0.01
_GridSpacing("Grid Spacing", Float) = 10.0
_GridColour("Grid Colour", Color) = (0.5, 0.5, 0.5, 0.5)
_BaseColour("Base Colour", Color) = (0.0, 0.0, 0.0, 0.0)
}
SubShader{
Tags { "Queue" = "Transparent" }
Pass {
ZWrite Off
Blend SrcAlpha OneMinusSrcAlpha
Tags {
"LightMode" = "ForwardBase"
} // gets us access to main directional light
CGPROGRAM
// Define the vertex and fragment shader functions
#pragma vertex vert
#pragma fragment frag
#include "UnityStandardBRDF.cginc" // for shader lighting info and some utils
#include "UnityStandardUtils.cginc" // for energy conservation
// Access Shaderlab properties
uniform float _GridThickness;
uniform float _GridSpacing;
uniform float4 _GridColour;
uniform float4 _BaseColour;
// Input into the vertex shader
struct vertexInput
{
float4 vertex : POSITION;
float3 normal : NORMAL; // include normal info
};
// Output from vertex shader into fragment shader
struct vertexOutput
{
float4 pos : SV_POSITION;
float4 worldPos : TEXCOORD0;
float3 normal : TEXCOORD1; // pass normals along
};
// VERTEX SHADER
vertexOutput vert(vertexInput input)
{
vertexOutput output;
output.pos = UnityObjectToClipPos(input.vertex);
// Calculate the world position coordinates to pass to the fragment shader
output.worldPos = mul(unity_ObjectToWorld, input.vertex);
output.normal = input.normal; //get normal for frag shader from vert info
return output;
}
// FRAGMENT SHADER
float4 frag(vertexOutput input) : COLOR
{
float3 lightDir = _WorldSpaceLightPos0.xyz;
float3 viewDir = normalize(_WorldSpaceCameraPos - input.worldPos);
float3 lightColor = _LightColor0.rgb;
float3 col;
if (frac(input.worldPos.x / _GridSpacing) < _GridThickness || frac(input.worldPos.z / _GridSpacing) < _GridThickness)
col = _GridColour;
else
col = _BaseColour;
col *= lightColor * DotClamped(lightDir, input.normal); // apply diffuse light by angle of incidence
return float4(col, 1);
}
ENDCG
}
}
}
You should check out these tutorials to learn more about other ways to light your objects; the same applies if you want them to receive shadows.
Setting FallBack "Diffuse" won't do anything here, since the shader is not "falling back": it's running exactly the way you programmed it to, with no lighting or shadows.
Hello, I have followed a YouTube video series by Sebastian Lague on procedural generation. I have followed the whole series, but on my end there are black spots in the mesh, only on water regions. I'm using global mode, for those wondering, and Unity 2019.4.6f1. I want to get rid of the black spots; I have also tried building and running the project, and the black spots were still there.
The link to his series: https://www.youtube.com/watch?v=wbpMiKiSKm8&list=PLFt_AvWsXl0eBW2EiBtl_sxmDtSgZBxB3
I have downloaded his project from GitHub and he doesn't seem to have this problem. Here is his GitHub page: https://github.com/SebLague/Procedural-Landmass-Generation
Also here is a picture -> here
I'm creating my own custom shader for the terrain; here it is:
Shader "Custom/terrain"
{
// this properties will be added to our meshMaterial
Properties {
testTexture("Texture", 2D) = "white"{}
testScale("Scale", Float) = 1
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 200
CGPROGRAM
// Physically based Standard lighting model, and enable shadows on all light types
#pragma surface surf Standard fullforwardshadows
// Use shader model 3.0 target, to get nicer looking lighting
#pragma target 3.0
const static int maxLayerCount = 8;
const static float epsilon = 1E-4;
int layerCount;
// float3 because of RGB
float3 baseColors[maxLayerCount];
float baseStartHeights[maxLayerCount];
float baseBlends[maxLayerCount];
float baseColorStrength[maxLayerCount];
float baseTextureScales[maxLayerCount];
float minHeight;
float maxHeight;
sampler2D testTexture;
float testScale;
UNITY_DECLARE_TEX2DARRAY(baseTextures);
struct Input {
float3 worldPos;
float worldNormal;
};
// float a is min value, float b is max value and value is current value
float inverseLerp(float a, float b, float value) {
// saturate means clamp the value between 0 and 1
return saturate((value - a)/(b - a));
}
// Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
// See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
// #pragma instancing_options assumeuniformscaling
UNITY_INSTANCING_BUFFER_START(Props)
// put more per-instance properties here
UNITY_INSTANCING_BUFFER_END(Props)
float3 triplanar(float3 worldPos, float scale, float3 blendAxis, int textureIndex) {
float3 scaledWorldPos = worldPos / scale;
// tripleaner mapping
float3 xProjection = UNITY_SAMPLE_TEX2DARRAY(baseTextures,
float3(scaledWorldPos.y, scaledWorldPos.z, textureIndex)) * blendAxis.x;
float3 yProjection = UNITY_SAMPLE_TEX2DARRAY(baseTextures,
float3(scaledWorldPos.x, scaledWorldPos.z, textureIndex)) * blendAxis.y;
float3 zProjection = UNITY_SAMPLE_TEX2DARRAY(baseTextures,
float3(scaledWorldPos.x, scaledWorldPos.y, textureIndex)) * blendAxis.z;
return xProjection + yProjection + zProjection;
}
// this function will be called for every pixel that our mesh is visible
// we want to set the color at that surface
void surf (Input IN, inout SurfaceOutputStandard o) {
float heightPercent = inverseLerp(minHeight, maxHeight, IN.worldPos.y);
float3 blendAxis = abs(IN.worldNormal);
blendAxis /= blendAxis.x + blendAxis.y + blendAxis.z;
for (int i = 0; i < layerCount; i++) {
float drawStrength = inverseLerp(-baseBlends[i]/2 - epsilon, baseBlends[i]/2, (heightPercent - baseStartHeights[i]));
float3 baseColor = baseColors[i] * baseColorStrength[i];
float3 textureColor = triplanar(IN.worldPos, baseTextureScales[i], blendAxis, i) * (1-baseColorStrength[i]);
// if drawStrength is 0 then we would set color to black
// but what we want is that if drawstength is 0
// then we want to use the same color, albedo * 1 + 0 will be same (what we want)
o.Albedo = o.Albedo * (1-drawStrength) + (baseColor + textureColor) * drawStrength;
}
}
ENDCG
}
FallBack "Diffuse"
}
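For reference, the per-layer arrays in the shader above would be fed from C# along these lines (a sketch with illustrative names, not the actual script from the series):

using UnityEngine;

public class TerrainShaderSetup : MonoBehaviour
{
    public Material material;
    public Color[] baseColors;
    public float[] baseStartHeights;
    public Texture2DArray baseTextures; // one slice per layer
    public float minHeight, maxHeight;

    void Start()
    {
        material.SetInt("layerCount", baseColors.Length);
        material.SetColorArray("baseColors", baseColors);
        material.SetFloatArray("baseStartHeights", baseStartHeights);
        material.SetTexture("baseTextures", baseTextures);
        material.SetFloat("minHeight", minHeight);
        material.SetFloat("maxHeight", maxHeight);
    }
}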
So I thought the problem was in the code, but I compared my code against Sebastian Lague's code (which is, by the way, available on GitHub) and there was nothing wrong there. The problem turned out to be the animation curve we used to assign the base heights. Just make sure that it starts a bit below zero; that was the solution in my case.
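If you'd rather enforce that from code than in the curve editor, something like this should work (a sketch; it assumes the curve's first key is the one at time 0):

// Nudge the height curve's first key slightly below zero so the lowest
// terrain height never evaluates to exactly zero.
Keyframe first = heightCurve.keys[0];
heightCurve.MoveKey(0, new Keyframe(first.time, -0.0001f));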
GitHub link:
https://github.com/SebLague/Procedural-Landmass-Generation/tree/master/Proc%20Gen%20E21
I'm trying to fluctuate between two values inside a shader to achieve a glowing effect.
I need it to be done inside the shader itself and not via C# scripting.
I've tried using the _Time value that Unity gives us for shader animation, but it isn't working:
Shader "Test shader" {
Properties {
_ColorTint ("Color", Color) = (1,1,1,1)
_MainTex ("Base (RGB)", 2D) = "white" {}
_GlowColor("Glow Color", Color) = (1,0,0,1)
_GlowPower("Glow Power", Float) = 3.0
_UpDown("Shine Emitter Don't Change", Float) = 0
}
SubShader {
Tags {
"RenderType"="Opaque"
}
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float4 color : Color;
float2 uv_MainTex;
float3 viewDir;
float4 _Time;
};
float4 _ColorTint;
sampler2D _MainTex;
float4 _GlowColor;
float _GlowPower;
float _UpDown;
void surf(Input IN, inout SurfaceOutput o) {
if (_UpDown == 0) {
_GlowPower += _Time.y;
}
if (_UpDown == 1) {
_GlowPower -= _Time.y;
}
if (_GlowPower <= 1) {
_UpDown = 0;
}
if (_GlowPower >= 3) {
_UpDown = 1;
}
IN.color = _ColorTint;
o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb * IN.color;
half rim = 1.0 - saturate(dot(normalize(IN.viewDir), o.Normal));
o.Emission = _GlowColor.rgb * pow(rim, _GlowPower);
}
ENDCG
}
FallBack "Diffuse"
}
This makes the glow grow infinitely.
What am I doing wrong?
Extending my comment slightly:
You can't use _Time.y in this case, as it is the elapsed time since the game started, so it only ever increases.
You can use _SinTime instead, which holds the sine of the time (its components run at different rates), so it oscillates between -1 and 1. You can assign this (maybe a scaled version of _SinTime) to your variable: _GlowPower = C * _SinTime.y
More on built-in shader variables: http://docs.unity3d.com/Manual/SL-UnityShaderVariables.html
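For the 1-to-3 range used in the question, the whole _UpDown state machine can be replaced by a remap along these lines (a sketch to drop into the surf function):

// _SinTime.w is sin(t), oscillating between -1 and 1;
// remap it to the [1, 3] range the question targets.
float glowPower = 2.0 + _SinTime.w;
o.Emission = _GlowColor.rgb * pow(rim, glowPower);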
For doing a pulsing glow, I'd have a script outside the shader and send in a parameter (_GlowPower), calculated in a C# script like this:
float glowPow = Mathf.Sin(Time.time);
Then you only need to calculate it once per frame. If you put it in the vertex shader, it runs once per vertex, and in a surface shader, once per pixel, which is a waste of performance.
You can send variables to your shader like this (very handy):
material.SetFloat(propertyName, valueToSend);
So you could send time, strength, glow or whatever you want.
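Putting those two pieces together, a minimal driver script could look like this (a sketch; _GlowPower matches the property in the question's shader):

using UnityEngine;

public class GlowPulse : MonoBehaviour
{
    public Material material; // material using the glow shader

    void Update()
    {
        // Oscillates between 1 and 3, matching the bounds in the question.
        float glowPow = 2f + Mathf.Sin(Time.time);
        material.SetFloat("_GlowPower", glowPow);
    }
}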
If you really need to do the glow calculation per vertex or per pixel, then use
float glowPow = sin(_Time.y);
inside the shader.
I've written a shader and it works fine when I add it to a plane located in front of the camera (in this case the camera does not have the shader). But when I add this shader to the camera, it does not show anything on the screen. Here is my code; could you let me know how I can change it to be compatible with the Camera.RenderWithShader method?
Shader "Custom/she1" {
Properties {
top("Top", Range(0,2)) = 1
bottom("Bottom", Range(0,2)) = 1
}
SubShader {
// Draw ourselves after all opaque geometry
Tags { "Queue" = "Transparent" }
// Grab the screen behind the object into _GrabTexture
GrabPass { }
// Render the object with the texture generated above
Pass {
CGPROGRAM
#pragma debug
#pragma vertex vert
#pragma fragment frag
#pragma target 3.0
sampler2D _GrabTexture : register(s0);
float top;
float bottom;
struct data {
float4 vertex : POSITION;
float3 normal : NORMAL;
};
struct v2f {
float4 position : POSITION;
float4 screenPos : TEXCOORD0;
};
v2f vert(data i){
v2f o;
o.position = mul(UNITY_MATRIX_MVP, i.vertex);
o.screenPos = o.position;
return o;
}
half4 frag( v2f i ) : COLOR
{
float2 screenPos = i.screenPos.xy / i.screenPos.w;
float _half = (top + bottom) * 0.5;
float _diff = (bottom - top) * 0.5;
screenPos.x = screenPos.x * (_half + _diff * screenPos.y);
screenPos.x = (screenPos.x + 1) * 0.5;
screenPos.y = 1-(screenPos.y + 1) * 0.5 ;
half4 sum = half4(0.0h,0.0h,0.0h,0.0h);
sum = tex2D( _GrabTexture, screenPos);
return sum;
}
ENDCG
}
}
Fallback Off
}
I think what you're asking for is a replacement shader that shades everything in the camera with your shader.
Am I correct?
If so, this should work:
Camera.main.SetReplacementShader(Shader.Find("Your Shader"), "RenderType");
Here is some more info:
http://docs.unity3d.com/Documentation/Components/SL-ShaderReplacement.html
Edit: Are you expecting the entire camera view to warp like a lens effect? Because you're not going to get that using a shader like this by itself; as it stands, it will only apply to objects like your plane, not the full camera view. That requires a post image effect, and for that you first need Unity Pro. If you have it, import the Image Effects package and look at the fisheye script, and see if you can duplicate the fisheye script with your own shader. When I attached the fisheye shader without its corresponding script, I was getting the same exact results as you are with your current shader code. If you don't have access to the Image Effects package, let me know and I'll send you the fisheye scripts and shaders.
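The post image effect route boils down to something like this (a sketch; it assumes the distortion shader reads _MainTex rather than a GrabPass texture):

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class TrapeziumEffect : MonoBehaviour
{
    public Material material; // material using the distortion shader

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // Runs the shader over the full camera image.
        Graphics.Blit(src, dst, material);
    }
}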
I have tried several ways so far. The shader itself works very well when I add it to a plane located in front of the main camera. But when I add it to the main camera with the code below, nothing is visible on the screen (just a blank screen), without any error message. I assign the above shader to the repl variable.
using UnityEngine;
using System.Collections;

public class test2 : MonoBehaviour {

    public Shader repl = null;

    // Use this for initialization
    void Start () {
        Camera.main.SetReplacementShader(repl, "Opaque");
    }

    // Update is called once per frame
    void Update () {
    }
}
Just for your information, the above shader distorts the scene into a trapezium shape.