I'm currently stuck on a shader that I'm writing.
I'm trying to create a rain shader.
I have set up three particle systems that simulate rain, and a
camera to look at this simulation. The camera view is what I
use as a texture. In my shader I am now trying to make a normal map
from that texture, but I don't know how to do it.
Shader "Unlit/Rain"
{
Properties
{
_MainTex("Albedo (RGB)", 2D) = "white" {}
_Color("Color", Color) = (1,1,1,1)
_NormalIntensity("NormalIntensity",Float) = 1
}
SubShader
{
Tags { "RenderType" = "Opaque" }
LOD 100
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
#include "Lighting.cginc"
#include "AutoLight.cginc"
struct VertexInput {
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
float4 normal : NORMAL;
float3 tangent : TANGENT;
};
struct VertexOutput {
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
float2 uv1 : TEXCOORD1;
float4 normals : NORMAL;
float3 tangentSpaceLight: TANGENT;
};
sampler2D _MainTex;
float4 _MainTex_ST;
half4 _Color;
float _NormalIntensity;
VertexOutput vert(VertexInput v) {
VertexOutput o;
o.normals = v.normal;
o.uv1 = v.uv;
o.vertex = UnityObjectToClipPos( v.vertex );
// o.uv = TRANSFORM_TEX( v.uv, _MainTex ); // used for texture
return o;
}
float4 frag(VertexOutput i) : COLOR{
float4 col2 = tex2D(_MainTex, i.uv1);
return col2 * i.normals * 5;
}
ENDCG
}
}
}
This is what the camera sees. I set the TargetTexture of this camera to a texture I created.
In my shader I then use that texture as the albedo property.
What I want to do now is derive normals from that texture to create a bump map.
It looks like your "TargetTexture" is giving you back a height map. Here is a post I found about how to turn a height map into a normal map. I've merged the code you had originally with the core of that forum post, and I output the normals as color so you can test it and see how it works:
Shader "Unlit/HeightToNormal"
{
Properties
{
_MainTex("Albedo (RGB)", 2D) = "white" {}
_Color("Color", Color) = (1,1,1,1)
_NormalIntensity("NormalIntensity",Float) = 1
_HeightMapSizeX("HeightMapSizeX",Float) = 1024
_HeightMapSizeY("HeightMapSizeY",Float) = 1024
}
SubShader
{
Tags { "RenderType" = "Opaque" }
LOD 100
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
#include "Lighting.cginc"
#include "AutoLight.cginc"
struct VertexInput {
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
float4 normal : NORMAL;
float3 tangent : TANGENT;
};
struct VertexOutput {
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
float2 uv1 : TEXCOORD1;
float4 normals : NORMAL;
//float3 tangentSpaceLight: TANGENT;
};
sampler2D _MainTex;
float4 _MainTex_ST;
half4 _Color;
float _NormalIntensity;
float _HeightMapSizeX;
float _HeightMapSizeY;
VertexOutput vert(VertexInput v) {
VertexOutput o;
o.uv = TRANSFORM_TEX( v.uv, _MainTex ); // used for texture
o.uv1 = v.uv;
o.normals = v.normal;
o.vertex = UnityObjectToClipPos(v.vertex);
return o;
}
float4 frag(VertexOutput i) : COLOR
{
float me = tex2D(_MainTex,i.uv1).x;
float n = tex2D(_MainTex,float2(i.uv1.x, i.uv1.y + 1.0 / _HeightMapSizeY)).x;
float s = tex2D(_MainTex,float2(i.uv1.x, i.uv1.y - 1.0 / _HeightMapSizeY)).x;
float e = tex2D(_MainTex,float2(i.uv1.x + 1.0 / _HeightMapSizeX,i.uv1.y)).x;
float w = tex2D(_MainTex,float2(i.uv1.x - 1.0 / _HeightMapSizeX,i.uv1.y)).x;
// defining starting normal as color has some interesting effects, generally makes this more flexible
float3 norm = _Color;
float3 temp = norm; //a temporary vector that is not parallel to norm
if (norm.x == 1)
temp.y += 0.5;
else
temp.x += 0.5;
//form a basis with norm being one of the axes:
float3 perp1 = normalize(cross(i.normals,temp));
float3 perp2 = normalize(cross(i.normals,perp1));
//use the basis to move the normal i its own space by the offset
float3 normalOffset = -_NormalIntensity * (((n - me) - (s - me)) * perp1 + ((e - me) - (w - me)) * perp2);
norm += normalOffset;
norm = normalize(norm);
// it's also interesting to output temp, perp1, and perp1, or combinations of the float samples.
return float4(norm, 1);
}
ENDCG
}
}
}
To generate a normal map from a height map, you use the oriented rate of change of the height values to come up with a normal vector, which can be represented as 3 float values (or color channels, if it's an image). You sample the point of the image you are on, then take small steps away from that point in the four cardinal directions. Using the cross product, which guarantees orthogonality, you define a basis around the surface normal. Scaling the two basis vectors by the oriented height differences and adding them together gives a "normal offset": a 3D approximation of the oriented change in value of your height map. Added to the base normal, that is your per-pixel normal.
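To make the idea easier to see in isolation, here is a minimal sketch of the same technique in the common special case where the surface normal is tangent-space (0, 0, 1), so no basis construction is needed. The function name and the texelSize parameter (1/width, 1/height of the height map) are mine, not from the forum post:

float3 HeightToNormal(sampler2D heightMap, float2 uv, float2 texelSize, float intensity)
{
    // central differences of the height field in u and v
    float e = tex2D(heightMap, uv + float2(texelSize.x, 0)).x;
    float w = tex2D(heightMap, uv - float2(texelSize.x, 0)).x;
    float n = tex2D(heightMap, uv + float2(0, texelSize.y)).x;
    float s = tex2D(heightMap, uv - float2(0, texelSize.y)).x;
    // the gradient (dh/du, dh/dv) tilts the up vector (0, 0, 1)
    return normalize(float3(-(e - w) * intensity, -(n - s) * intensity, 1));
}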
You can see the effects of playing with the normal intensity and the "normal color" in the screenshots. When this looks right for your use case, you can try using the normals as actual normals instead of colored output.
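For instance, a minimal, untested variation of the last lines of frag that lights the albedo with the generated normal instead of displaying it, assuming a single directional light and, for brevity, skipping the object-to-world transform:

// inside frag, replacing "return float4(norm, 1);"
float4 albedo = tex2D(_MainTex, i.uv1);
// Lambert term against the main directional light
// (_WorldSpaceLightPos0 and _LightColor0 come from the included lighting files;
// a correct version would first transform norm into world space)
float ndl = saturate(dot(norm, _WorldSpaceLightPos0.xyz));
return float4(albedo.rgb * _LightColor0.rgb * ndl, 1);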
Some tweaking of values will probably still be required. Good luck!
Related
I'm new to shaders, and I'm trying to make an outline of a tk2d spine object (a 2D character).
I know there are other ways to obtain the outline effect, but this approach got me curious.
(Please don't suggest another way to draw outlines, respectfully, because that is not my question.)
Basically, the way I get the outline effect is by taking the vertices of my object,
applying a color, and drawing the object in 8 directions (top, down, left, right, and the 4 diagonals) by offsetting it in each direction.
I got this working, and it looks fine, but my shader code doesn't: I have a total of 9 passes,
8 of which do exactly the same thing (draw the same texture), the only difference being the offset direction, with the last pass drawing the tk2d spine character itself.
I don't like having 8 nearly identical copies of the same code; it seems wasteful because the same calculation is done 8 times, and I suppose performance suffers as well.
So, my question is: can I compress the 8 passes into one?
I.e., save the vertex position for each direction and pass that information to the fragment shader.
I tried scaling up the vertices, but this didn't give me the effect I wanted, because the characters do not have smooth edges, so the "outline" came out jagged.
This is my current code:
Shader "Custom/Shader_Spine_Battle_Red"
{
Properties
{
_Color ("Color", color) = (1, 0, 0, 1)
_MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}
_OutlineWidth ("OutlineWidth", float) = 1
}
SubShader
{
Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
ZWrite Off Lighting Off Cull Off Fog { Mode Off } Blend SrcAlpha OneMinusSrcAlpha
LOD 110
Stencil{
Comp Equal
}
Pass
{
CGPROGRAM
#pragma vertex vert_vct
#pragma fragment frag_mult
#pragma fragmentoption ARB_precision_hint_fastest
#include "UnityCG.cginc"
sampler2D _MainTex;
fixed4 _Color;
float _OutlineWidth;
struct vin_vct
{
float4 vertex : POSITION;
float4 color : COLOR;
float2 texcoord : TEXCOORD0;
};
struct v2f_vct
{
float4 vertex : SV_POSITION;
fixed4 color : COLOR;
float2 texcoord : TEXCOORD0;
};
v2f_vct vert_vct(vin_vct v)
{
v2f_vct o;
// the only difference in the 8 passes is the two lines below
v.vertex.x += _OutlineWidth * 0.1;
v.vertex.y += _OutlineWidth * 0.1;
o.vertex = UnityObjectToClipPos(v.vertex);
o.color = _Color;
o.texcoord = v.texcoord;
return o;
}
fixed4 frag_mult(v2f_vct i) : SV_Target
{
fixed4 col = tex2D(_MainTex, i.texcoord);
_Color.a *= ceil(col.a);
col.rgb = _Color;
col.a *= _Color.a;
return col;
}
ENDCG
}
// (x 8 with only the rows mentioned above different
Pass
{
CGPROGRAM
#pragma vertex vert_vct
#pragma fragment frag_mult
#pragma fragmentoption ARB_precision_hint_fastest
#include "UnityCG.cginc"
sampler2D _MainTex;
fixed4 _Color;
float _OutlineWidth;
struct vin_vct
{
float4 vertex : POSITION;
float4 color : COLOR;
float2 texcoord : TEXCOORD0;
};
struct v2f_vct
{
float4 vertex : SV_POSITION;
fixed4 color : COLOR;
float2 texcoord : TEXCOORD0;
};
v2f_vct vert_vct(vin_vct v)
{
v2f_vct o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.color = v.color;
o.texcoord = v.texcoord;
return o;
}
fixed4 frag_mult(v2f_vct i) : SV_Target
{
fixed4 col = tex2D(_MainTex, i.texcoord) * i.color;
return col;
}
ENDCG
}
}
}
If you want to approach this by storing multiple vertex data, it could be done either CPU-side, by constructing the mesh with different UV offsets, or GPU-side, using what is called a geometry shader. However, you could run into sorting issues with the base color, and personally I would not recommend that solution. If you really need to do it this way for some reason, the option exists.
What I can recommend is performing this operation on a per-fragment basis, rather than in multiple passes, by sampling the texture multiple times as your title question suggests. The limitation is that the effect is confined to the mesh of your sprite, so keep in mind that you might need to alter the sprite to fit, either by adding some whitespace around it or by changing the import settings (Mesh Type, Extrude Edges). If it's a wide outline, a modification of the UV mapping in the fragment shader might be in order to ensure the outline always fits.
static const float2 _dirs[8] = {
    float2( 1,  0),
    float2(-1,  0),
    float2( 0, -1),
    float2( 0,  1),
    float2(-1,  1),
    float2(-1, -1),
    float2( 1,  1),
    float2( 1, -1) // was duplicated as (1,1); the eighth diagonal is (1,-1)
};

fixed4 frag(v2f IN) : SV_Target
{
    fixed4 c = tex2D(_MainTex, IN.texcoord) * IN.color; // original color
    fixed a = c.a; // original alpha
    for (int i = 0; i < 8; i++)
    {
        fixed4 o = tex2D(_MainTex, IN.texcoord + _dirs[i] * _OutlineWidth); // sample in this direction
        c.a = max(c.a, o.a); // keep the greater alpha
        c.rgb = (a * c.rgb) + ((1 - a) * _OutlineColor.rgb); // mix original and outline colors based on the original alpha
    }
    c.rgb *= c.a;
    return c;
}
The attached code is the fragment shader, with the sampled directions stored in a constant array. The loop iterates over the directions, keeps the highest alpha found, and mixes the original and outline colors based on the original alpha value.
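Note that _OutlineColor and _OutlineWidth are assumed to already be declared in the shader, and that the width here is in UV units, unlike the clip-space offset in your original passes. A sketch of the assumed declarations (names and defaults are illustrative):

// in the Properties block:
_OutlineColor ("Outline Color", Color) = (1, 0, 0, 1)
_OutlineWidth ("Outline Width (UV units)", Range(0, 0.05)) = 0.01

// alongside the other variables in the CGPROGRAM block:
fixed4 _OutlineColor;
float _OutlineWidth;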
I'm currently working on large terrains in Unity.
I need to use custom trees that basically hold a 3D model and a sprite; with a LOD component, the model is disabled and the sprite enabled with distance.
I found a billboard shader on GitHub that matches what I was searching for almost perfectly,
except that it doesn't use lighting, so the sprite always has the same color with and without light in the scene.
I'm currently trying to mix it with a diffuse shader, but I don't have any knowledge of shader coding.
Here is the shader in question:
Shader "Custom/BillboardSprite" {
Properties {
_MainTex ("Main Tex", 2D) = "white" {}
_Color ("Color Tint", Color) = (1, 1, 1, 1)
_VerticalBillboarding ("Vertical Restraints", Range(0, 1)) = 1
}
SubShader {
// Need to disable batching because of the vertex animation
Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" "DisableBatching"="True"}
Pass {
Tags { "LightMode"="ForwardBase" }
ZWrite Off
Blend SrcAlpha OneMinusSrcAlpha
Cull Off
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "Lighting.cginc"
sampler2D _MainTex;
float4 _MainTex_ST;
fixed4 _Color;
fixed _VerticalBillboarding;
struct a2v {
float4 vertex : POSITION;
float4 texcoord : TEXCOORD0;
};
struct v2f {
float4 pos : SV_POSITION;
float2 uv : TEXCOORD0;
};
v2f vert (a2v v) {
v2f o;
// Suppose the center in object space is fixed
float3 center = float3(0, 0, 0);
float3 viewer = mul(unity_WorldToObject,float4(_WorldSpaceCameraPos, 1));
float3 normalDir = viewer - center;
// If _VerticalBillboarding equals 1, we use the desired view dir as the normal dir
// Which means the normal dir is fixed
// Or if _VerticalBillboarding equals 0, the y of normal is 0
// Which means the up dir is fixed
normalDir.y =normalDir.y * _VerticalBillboarding;
normalDir = normalize(normalDir);
// Get the approximate up dir
// If normal dir is already towards up, then the up dir is towards front
float3 upDir = abs(normalDir.y) > 0.999 ? float3(0, 0, 1) : float3(0, 1, 0);
float3 rightDir = normalize(cross(upDir, normalDir));
upDir = normalize(cross(normalDir, rightDir));
// Use the three vectors to rotate the quad
float3 centerOffs = v.vertex.xyz - center;
float3 localPos = center + rightDir * centerOffs.x + upDir * centerOffs.y + normalDir * centerOffs.z;
o.pos = UnityObjectToClipPos(float4(localPos, 1));
o.uv = TRANSFORM_TEX(v.texcoord,_MainTex);
return o;
}
fixed4 frag (v2f i) : SV_Target {
fixed4 c = tex2D (_MainTex, i.uv);
c.rgb *= _Color.rgb;
return c;
}
ENDCG
}
}
FallBack "Transparent/VertexLit"
}
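The direction I'm imagining is something like the following untested sketch (my own guess, not from the GitHub shader): output the billboard's facing direction as a world-space normal and apply a simple Lambert term in the fragment shader, assuming the single directional light of the ForwardBase pass:

// inside the same Pass, with "LightMode"="ForwardBase" as above
struct v2f {
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0;
    float3 worldNormal : TEXCOORD1;
};

// at the end of vert, after normalDir is computed:
//   o.worldNormal = UnityObjectToWorldNormal(normalDir);

fixed4 frag (v2f i) : SV_Target {
    fixed4 c = tex2D(_MainTex, i.uv);
    c.rgb *= _Color.rgb;
    // ambient plus a Lambert term for the main directional light
    fixed ndl = saturate(dot(normalize(i.worldNormal), _WorldSpaceLightPos0.xyz));
    c.rgb *= UNITY_LIGHTMODEL_AMBIENT.rgb + _LightColor0.rgb * ndl;
    return c;
}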
I'm developing an AR game that uses spotlights; to emphasize the effect, the shader must also darken everything else in the camera view. Being in AR, I'm assuming I need an image effect shader that decreases the color output on screen, to simulate lower brightness in the area around the user. However, adding a spotlight effect on top of this is becoming quite a struggle, and I could really use some direction and/or tips on achieving it. Here are some examples of the end result I'm looking for:
Straight-forward and angled examples
This is where I'm currently at with my shader; it currently handles only one object position, and I'll worry about multiple positions later.
Shader "Unlit/Spotlight"
{
Properties
{
_MainTex("Texture", 2D) = "white" {}
_PointOfInterest("POI Pos", vector) = (0,0,0,0)
_CircleRadius("Spotlight size", Range(0,20)) = 3
_RingSize("Ring Size", Range(0,5)) = 1
_ColorTint("Room tint color", Color) = (0,0,0,0.5)
}
SubShader
{
Tags
{
"Queue" = "Transparent"
"RenderType"="Transparent"
}
LOD 100
// Should be on for Post Process Effects
ZWrite Off
ZTest Always
Cull Off
// -----
Blend SrcAlpha OneMinusSrcAlpha
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma alpha : blend
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
// Have to bound the value somewhere on the GPU, texcoord can go up to 7
float3 worldPos : TEXCOORD1;
};
sampler2D _MainTex;
float4 _MainTex_ST;
float4 _PointOfInterest;
float _CircleRadius;
float _RingSize;
float4 _ColorTint;
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
o.worldPos = mul(unity_WorldToObject, v.vertex).xyz;
return o;
}
fixed4 frag (v2f i) : SV_Target
{
fixed4 darkShade = tex2D(_MainTex, i.uv) - _ColorTint;
fixed4 col = tex2D(_MainTex, i.uv) - _ColorTint;
float dist = distance(i.worldPos, _PointOfInterest.xyz);
//*
// Spotlight
if(dist < _CircleRadius)
{
// inside the circle
col = tex2D(_MainTex, i.uv);
}
// Feathering
else if (dist > _CircleRadius && dist < _CircleRadius + _RingSize)
{
float blendStrength = dist - _CircleRadius;
col = lerp(tex2D(_MainTex, i.uv), darkShade, blendStrength / _RingSize);
}
//*/
return col;
}
ENDCG
}
}
}
I'm not looking for direct solutions, just some direction, so that I don't spend too much time chasing a destination I may actually be running away from.
I am making a 360 viewer in Unity. To view a 360 photo I used to have a cubemap attached to a skybox, and it worked great, but the size of the cubemaps forced me to switch to textures.
All of the 360 viewer tutorials say to just put a sphere with a shader on it around the camera. When I do this, it doesn't work very well: when I look at the top or bottom, I see the image warped like so (the chairs are supposed to look normal).
This did not happen when I used a skybox.
Does anyone know why this is happening?
Thank you very much!
The shader you chose does not handle equirectangular distortion very well. At the poles (top and bottom) of the sphere, a lot of image information has to be mapped onto a very small space, which leads to the artifacts you are seeing.
You can write a specialized shader to improve the coordinate mapping from your equirectangular image onto the sphere. A suitable shader has been posted on the Unity forums:
Shader "Custom/Equirectangular" {
Properties {
_Color ("Main Color", Color) = (1,1,1,1)
_MainTex ("Diffuse (RGB) Alpha (A)", 2D) = "gray" {}
}
SubShader{
Pass {
Tags {"LightMode" = "Always"}
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma fragmentoption ARB_precision_hint_fastest
#pragma glsl
#pragma target 3.0
#include "UnityCG.cginc"
struct appdata {
float4 vertex : POSITION;
float3 normal : NORMAL;
};
struct v2f
{
float4 pos : SV_POSITION;
float3 normal : TEXCOORD0;
};
v2f vert (appdata v)
{
v2f o;
o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
o.normal = v.normal;
return o;
}
sampler2D _MainTex;
#define PI 3.141592653589793
inline float2 RadialCoords(float3 a_coords)
{
float3 a_coords_n = normalize(a_coords);
float lon = atan2(a_coords_n.z, a_coords_n.x);
float lat = acos(a_coords_n.y);
float2 sphereCoords = float2(lon, lat) * (1.0 / PI);
return float2(sphereCoords.x * 0.5 + 0.5, 1 - sphereCoords.y);
}
float4 frag(v2f IN) : COLOR
{
float2 equiUV = RadialCoords(IN.normal);
return tex2D(_MainTex, equiUV);
}
ENDCG
}
}
FallBack "VertexLit"
}
Again, it's not my own code, but I tested it on Android devices and as a standalone PC build, and it results in very smooth poles.
Please note: this shader does not flip the normals of your sphere. If you want your camera to sit inside the sphere, you have to invert its normals, either in a 3D program or in the shader. Try adding Cull Front right after the Tags line inside the Pass above, and the shader will apply its texture to the "wrong" (inner) side of the model.
I'm a beginner and I had to do a lot just to understand this thread. This is what worked for me: I combined the answers into one shader. I'm pretty sure I will forget this in a few weeks, so I'm putting it here for posterity.
Shader "Custom/Equirectangular" {
Properties {
_Color ("Main Color", Color) = (1,1,1,1)
_MainTex ("Diffuse (RGB) Alpha (A)", 2D) = "gray" {}
}
SubShader{
Pass {
Tags {"LightMode" = "Always"}
Cull Front
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma fragmentoption ARB_precision_hint_fastest
#pragma glsl
#pragma target 3.0
#include "UnityCG.cginc"
struct appdata {
float4 vertex : POSITION;
float3 normal : NORMAL;
};
struct v2f
{
float4 pos : SV_POSITION;
float3 normal : TEXCOORD0;
};
v2f vert (appdata v)
{
v2f o;
o.pos = UnityObjectToClipPos(v.vertex);
o.normal = v.normal;
return o;
}
sampler2D _MainTex;
#define PI 3.141592653589793
inline float2 RadialCoords(float3 a_coords)
{
float3 a_coords_n = normalize(a_coords);
float lon = atan2(a_coords_n.z, a_coords_n.x);
float lat = acos(a_coords_n.y);
float2 sphereCoords = float2(lon, lat) * (1.0 / PI);
return float2(1 - (sphereCoords.x * 0.5 + 0.5), 1 - sphereCoords.y);
}
float4 frag(v2f IN) : COLOR
{
float2 equiUV = RadialCoords(IN.normal);
return tex2D(_MainTex, equiUV);
}
ENDCG
}
}
FallBack "VertexLit"
}
And here is another shader, which flips the sphere's normals:
'Shader "Flip Normals" {
Properties {
_MainTex ("Base (RGB)", 2D) = "white" {}
}
SubShader {
Tags { "RenderType" = "Opaque" }
Cull Front
CGPROGRAM
#pragma surface surf Lambert vertex:vert
sampler2D _MainTex;
struct Input {
float2 uv_MainTex;
float4 color : COLOR;
};
void vert(inout appdata_full v)
{
v.normal.xyz = v.normal * -1;
}
void surf (Input IN, inout SurfaceOutput o) {
fixed3 result = tex2D(_MainTex, IN.uv_MainTex);
o.Albedo = result.rgb;
o.Alpha = 1;
}
ENDCG
}
Fallback "Diffuse"
}`
I'm new to writing shaders and I'm working on a practice geometry shader. The goal is for the "normal" pass to produce transparent pixels, so the object is invisible, while the "geometry" pass takes each triangle and redraws it in the same place as the original, but colored black. I therefore expect the output to be the original object, but black. However, my geometry pass doesn't seem to produce any output I can see.
Here is the code I currently have for the shader:
Shader "Outlined/Silhouette2" {
Properties
{
_Color("Color", Color) = (0,0,0,1)
_MainColor("Main Color", Color) = (1,1,1,1)
_Thickness("Thickness", float) = 4
_MainTex("Main Texture", 2D) = "white" {}
}
SubShader
{
Tags{ "Queue" = "Geometry" "IgnoreProjector" = "True" "RenderType" = "Transparent" }
Blend SrcAlpha OneMinusSrcAlpha
Cull Back
ZTest always
Pass
{
Stencil{
Ref 1
Comp always
Pass replace
}
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma multi_compile_fog
#include "UnityCG.cginc"
struct v2g
{
float4 pos : SV_POSITION;
float2 uv : TEXCOORD0;
float3 viewT : TANGENT;
float3 normals : NORMAL;
};
struct g2f
{
float4 pos : SV_POSITION;
float2 uv : TEXCOORD0;
float3 viewT : TANGENT;
float3 normals : NORMAL;
};
float4 _LightColor0;
sampler2D _MainTex;
float4 _MainColor;
v2g vert(appdata_base v)
{
v2g OUT;
OUT.pos = mul(UNITY_MATRIX_MVP, v.vertex);
OUT.uv = v.texcoord;
OUT.normals = v.normal;
OUT.viewT = ObjSpaceViewDir(v.vertex);
return OUT;
}
half4 frag(g2f IN) : COLOR
{
//this renders nothing, if you want the base mesh and color
//fill this in with a standard fragment shader calculation
float4 texColor = tex2D(_MainTex, IN.uv);
float3 normal = mul(float4(IN.normals, 0.0), _Object2World).xyz;
float3 normalDirection = normalize(normal);
float3 lightDirection = normalize(_WorldSpaceLightPos0.xyz * -1);
float3 diffuse = _LightColor0.rgb * _MainColor.rgb * max(0.0, dot(normalDirection, lightDirection));
texColor = float4(diffuse,1) * texColor;
//
//return texColor;
return float4(0, 0, 0, 0);
}
ENDCG
}
Pass
{
Stencil{
Ref 0
Comp equal
}
CGPROGRAM
#include "UnityCG.cginc"
#pragma target 4.0
#pragma vertex vert
#pragma geometry geom
#pragma fragment frag
half4 _Color;
float _Thickness;
struct v2g
{
float4 pos : SV_POSITION;
float2 uv : TEXCOORD0;
float4 local_pos: TEXCOORD1;
float3 viewT : TANGENT;
float3 normals : NORMAL;
};
struct g2f
{
float4 pos : SV_POSITION;
float2 uv : TEXCOORD0;
float3 viewT : TANGENT;
float3 normals : NORMAL;
};
v2g vert(appdata_base v)
{
v2g OUT;
OUT.pos = mul(UNITY_MATRIX_MVP, v.vertex);
OUT.local_pos = v.vertex;
OUT.uv = v.texcoord;
OUT.normals = v.normal;
OUT.viewT = ObjSpaceViewDir(v.vertex);
return OUT;
}
[maxvertexcount(12)]
void geom(triangle v2g IN[3], inout TriangleStream<g2f> triStream)
{
g2f OUT;
OUT.pos = IN[0].pos;
OUT.uv = IN[0].uv;
OUT.viewT = IN[0].viewT;
OUT.normals = IN[0].normals;
triStream.Append(OUT);
OUT.pos = IN[1].pos;
OUT.uv = IN[1].uv;
OUT.viewT = IN[1].viewT;
OUT.normals = IN[1].normals;
triStream.Append(OUT);
OUT.pos = IN[2].pos;
OUT.uv = IN[2].uv;
OUT.viewT = IN[2].viewT;
OUT.normals = IN[2].normals;
triStream.Append(OUT);
}
half4 frag(g2f IN) : COLOR
{
_Color.a = 1;
return _Color;
}
ENDCG
}
}
FallBack "Diffuse"
}
Since all I'm doing is taking the same triangles I've been given and appending them to the triangle stream, I'm not sure what I could be doing wrong that causes nothing to appear. Does anyone know why this is happening?
I notice that you don't call triStream.RestartStrip() after feeding the 3 vertices of a triangle into your geometry shader.
This informs the stream that a particular triangle strip has ended and a new triangle strip will begin. If you don't do this, each (single) vertex passed to the stream adds on to the existing triangle strip, following the triangle strip pattern: https://en.wikipedia.org/wiki/Triangle_strip
I'm fairly new to geometry shaders myself, so I'm not sure if this is your issue or not. I don't THINK the RestartStrip function is called automatically at the end of each geometry shader invocation, but I have not tested this; rather, I think it only gets called automatically when you reach the maxvertexcount. For a single triangle, I would set the maxvertexcount to 3, not the 12 you have now.
(I also know it can be tough to get ANY shader answers, so I figured I'd try to help.)
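For illustration, here is a minimal sketch of the geometry function from the code above with those two changes applied, i.e. the strip explicitly restarted after the triangle and maxvertexcount reduced to 3 (untested, per the caveats above):

[maxvertexcount(3)]
void geom(triangle v2g IN[3], inout TriangleStream<g2f> triStream)
{
    g2f OUT;
    // emit the triangle's three vertices unchanged
    for (int i = 0; i < 3; i++)
    {
        OUT.pos = IN[i].pos;
        OUT.uv = IN[i].uv;
        OUT.viewT = IN[i].viewT;
        OUT.normals = IN[i].normals;
        triStream.Append(OUT);
    }
    // close this triangle strip before the next primitive begins
    triStream.RestartStrip();
}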