Problem with surface shader for clipping with a plane in Augmented Reality - unity3d

I want to clip a 3D model with a plane in Unity. I found this great tutorial for it and got it working easily in a regular Unity scene. The tutorial uses a surface shader to clip away all parts above a plane and only show the parts underneath it, so that you get the impression of cutting the 3D model open (see the GIFs in the linked tutorial). The code of the surface shader is:
Shader "Clippingplane" {
Properties {
_Color ("Tint", Color) = (0, 0, 0, 1)
_MainTex ("Albedo (RGB)", 2D) = "white" {}
_Smoothness ("Smoothness", Range(0, 1)) = 0
_Metallic ("Metalness", Range(0, 1)) = 0
[HDR] _Emission ("Emission", color) = (0,0,0)
[HDR]_CutoffColor("Cutoff Color", Color) = (1,0,0,0)
}
SubShader {
Tags{ "RenderType"="Opaque" "Queue"="Geometry"}
// render faces regardless if they point towards the camera or away from it
Cull Off
CGPROGRAM
#pragma surface surf Standard fullforwardshadows
#pragma target 3.0
sampler2D _MainTex;
fixed4 _Color;
half _Smoothness;
half _Metallic;
half3 _Emission;
float4 _Plane;
float4 _CutoffColor;
struct Input {
float2 uv_MainTex;
float3 worldPos;
float facing : VFACE;
};
void surf (Input i, inout SurfaceOutputStandard o) {
//calculate signed distance to plane
float distance = dot(i.worldPos, _Plane.xyz);
distance = distance + _Plane.w;
//discard surface above plane
clip(-distance);
float facing = i.facing * 0.5 + 0.5; //convert facing from -1/1 to 0/1 for linear interpolation
//normal color stuff
fixed4 col = tex2D(_MainTex, i.uv_MainTex);
col *= _Color;
o.Albedo = col.rgb * facing;
o.Metallic = _Metallic * facing;
o.Smoothness = _Smoothness * facing;
o.Emission = lerp(_CutoffColor, _Emission, facing); // lerp = linear interpolation
}
ENDCG
}
FallBack "Standard"
}
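For context: the _Plane vector is not listed under Properties, so it has to be set from a script, which is how the tutorial moves the plane through the model. Below is a minimal sketch of such a driver; the component name and the choice of transform.up as the cut normal are my own assumptions, not code from the tutorial.
using UnityEngine;

// Hypothetical helper: pushes this transform's plane into the clipping material.
// Assumes the material uses the shader above and that transform.up is the cut normal.
public class ClippingPlaneDriver : MonoBehaviour
{
    public Material clippingMaterial;

    void Update()
    {
        // Unity's Plane.distance is -dot(normal, point), so the shader's
        // dot(worldPos, _Plane.xyz) + _Plane.w yields the signed distance to the plane.
        Plane plane = new Plane(transform.up, transform.position);
        Vector4 planeVector = new Vector4(plane.normal.x, plane.normal.y,
                                          plane.normal.z, plane.distance);
        clippingMaterial.SetVector("_Plane", planeVector);
    }
}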
Now I want to bring this whole interaction of moving a clipping plane through a 3D model into Augmented Reality; I am using AR Foundation with ARCore for that.
For some reason, the shader no longer works as expected in AR. The "inside color" (red) covers the whole model, not only the part where the model is cut open. It seems the shader can no longer differentiate between outside and inside. The clipping itself still works, however.
[Screenshot: the whole 3D model rendered in red]
I played around a bit, but only managed to get the correct colors with the clipping no longer working. The part with the facing variable and its conversion in particular seems to be what skews the result. I don't know much about shaders, so could anyone point me in the right direction as to what is happening with the normals here?
The surface shader from the tutorial works fine in AR when I leave out the "Show the inside" part.
It's super weird that the shader works in the regular Unity environment but not in AR. Any help is appreciated!

Playing around a bit more led me to my solution; I probably hadn't tested enough before. It seems that in AR, VFACE already arrives in the range 0 to 1, so converting it again broke the result.
I simply removed the conversion part which left me with only:
float facing = i.facing;
That seems to do the job! Hope this helps anyone trying to clip stuff in AR with a surface shader.
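For reference, here is a variant that should behave the same under either VFACE convention (this is my own hedge, not from the tutorial); it only assumes that front faces give a positive value and back faces a non-positive one:
// Map VFACE to 0 (back) / 1 (front) regardless of whether the platform
// reports it as -1/+1 or 0/1: front faces are positive in both conventions.
float facing = i.facing > 0 ? 1 : 0;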

Related

Unity/HLSL: Odd results when trying to use different parts of texture atlas depending on surface normal

I'm new to 3D code and writing shaders so I'm probably doing something stupid here. I'm working on a simple Minecraft clone just to learn Unity, and I thought the easiest way to texture the ground would be to use a texture atlas with a grass + dirt texture and use a shader to choose which part of the atlas to get the texture from depending on the surface normal. i.e. for the face of a cube that points upwards, the grass texture will be used, and for all other faces the dirt texture will be used. The texture looks like this:
In my shader, if I do the following, I get the dirt texture on every face as expected:
float y = (IN.uv_MainTex.y / 2.0f);
float2 uv = {IN.uv_MainTex.x, y};
fixed4 c = tex2D (_MainTex, uv) * _Color;
o.Albedo = c.rgb;
If I use the same code but always add 0.5f to y to get the top half, i.e. float y = (IN.uv_MainTex.y / 2.0f) + 0.5f; I get the grass texture everywhere as expected:
But when I try to set y based on the normal, i.e. float y = (IN.uv_MainTex.y / 2.0f) + floor(IN.worldNormal.y) * 0.5f; I get this weird result where the top face is mostly grass but with parts of the dirt texture showing in diagonal lines:
Is IN.worldNormal the right normal to be using, or do I need to transform it into some other space? Or is the problem with how I'm using floor(), maybe? Any advice is appreciated.
Full shader code:
Shader "Custom/GroundShader"
{
Properties
{
_Color ("Color", Color) = (1,1,1,1)
_MainTex ("Albedo (RGB)", 2D) = "white" {}
_Glossiness ("Smoothness", Range(0,1)) = 0.5
_Metallic ("Metallic", Range(0,1)) = 0.0
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 200
CGPROGRAM
// Physically based Standard lighting model, and enable shadows on all light types
#pragma surface surf Standard fullforwardshadows
// Use shader model 3.0 target, to get nicer looking lighting
#pragma target 3.0
sampler2D _MainTex;
struct Input
{
float2 uv_MainTex;
float3 worldNormal;
};
half _Glossiness;
half _Metallic;
fixed4 _Color;
// Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
// See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
// #pragma instancing_options assumeuniformscaling
UNITY_INSTANCING_BUFFER_START(Props)
// put more per-instance properties here
UNITY_INSTANCING_BUFFER_END(Props)
void surf (Input IN, inout SurfaceOutputStandard o)
{
float y = (IN.uv_MainTex.y / 2.0f) + floor(IN.worldNormal.y) * 0.5f;
float2 uv = {IN.uv_MainTex.x, y};
// Albedo comes from a texture tinted by color
fixed4 c = tex2D (_MainTex, uv) * _Color;
o.Albedo = c.rgb;
// Metallic and smoothness come from slider variables
o.Metallic = _Metallic;
o.Smoothness = _Glossiness;
o.Alpha = c.a;
}
ENDCG
}
FallBack "Diffuse"
}
Edit: the original answer is below, but I was approaching this problem the wrong way. The correct way to do this is just to set the UVs of the mesh using the Mesh.uv property. There is no need to use a shader in this case.
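Roughly, that means assigning atlas UVs per face when the mesh is built instead of deciding in the shader. A rough sketch of that idea follows; the helper name, the quad layout and the vertex order are assumptions, while the 0.5 split matches the grass/dirt atlas described above.
using UnityEngine;

// Hypothetical helper: assigns atlas UVs for one quad face (4 vertices).
// Assumes the atlas is split horizontally: bottom half dirt, top half grass.
public static class AtlasUv
{
    public static void SetQuadUv(Vector2[] uvs, int firstVertex, bool isTopFace)
    {
        float yOffset = isTopFace ? 0.5f : 0f;   // pick the grass or dirt row
        uvs[firstVertex + 0] = new Vector2(0f, yOffset);
        uvs[firstVertex + 1] = new Vector2(1f, yOffset);
        uvs[firstVertex + 2] = new Vector2(1f, yOffset + 0.5f);
        uvs[firstVertex + 3] = new Vector2(0f, yOffset + 0.5f);
    }
}
// Usage: fill a Vector2[] this way for every face, then assign it with mesh.uv = uvs;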
Copying an answer I got from Namey5 on the Unity forum:
This looks to be a precision issue (potentially from vertex normal interpolation). There are probably smarter ways to do this, but as a simple fix you could just add a small bias to the normal, i.e.:
float y = (IN.uv_MainTex.y / 2.0f) + floor (IN.worldNormal.y + 0.01) * 0.5f;
I modified this slightly to float y = (IN.uv_MainTex.y / 2.0f) + floor (IN.worldNormal.y * 1.1f) * 0.5f; This ensures the bottom face also shows the dirt texture because the normal will get floored down to -2 instead of -1.
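For reference, another way to sidestep the precision issue is to avoid floor() on the interpolated normal entirely and threshold it instead (a variant of mine, not from the forum answer):
// 1 for upward-facing fragments, 0 otherwise; 0.5 is a safe threshold because
// interpolated cube normals sit close to +1, 0 or -1.
float isTop = step(0.5, IN.worldNormal.y);
float y = IN.uv_MainTex.y * 0.5f + isTop * 0.5f;
With this, the top face samples the grass half and every other face, including the bottom, samples dirt.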

triplanar texturing with Unity HDRP

I wrote a surface shader in Unity with the default render pipeline to do triplanar texturing on a mesh with no UVs. This worked fine, with the following code:
Shader "Custom/TerrainShader"
{
// These properties can be modified from the material inspector.
Properties{
_MainTex("Ground Texture", 2D) = "white" {}
_WallTex("Wall Texture", 2D) = "white" {}
_TexScale("Texture Scale", Float) = 1
}
// You can have multiple subshaders with different levels of complexity. Unity will pick the first one
// that works on whatever machine is running the game.
SubShader{
Tags { "RenderType" = "Opaque" } // None of our terrain is going to be transparent so Opaque it is.
LOD 200 // We only need diffuse for now so 200 is fine. (higher includes bumped, specular, etc)
CGPROGRAM
#pragma surface surf Standard fullforwardshadows // Use Unity's standard lighting model
#pragma target 3.0 // Lower target = fewer features but more compatibility.
// Declare our variables (above properties must be declared here)
sampler2D _MainTex;
sampler2D _WallTex;
float _TexScale;
// Say what information we want from our geometry.
struct Input {
float3 worldPos;
float3 worldNormal;
};
// This function runs once for every visible fragment (pixel) of the mesh.
void surf(Input IN, inout SurfaceOutputStandard o) {
float3 scaledWorldPos = IN.worldPos / _TexScale; // Get the world position scaled by _TexScale.
float3 pWeight = abs(IN.worldNormal); // Take the absolute value of the normal so negative axes count too.
pWeight /= pWeight.x + pWeight.y + pWeight.z; // Normalize so the three weights sum to 1.
// Sample the texture projection along each axis and weight it by the corresponding pWeight component.
float3 xP = tex2D(_WallTex, scaledWorldPos.yz) * pWeight.x;
float3 yP = tex2D(_MainTex, scaledWorldPos.xz) * pWeight.y;
float3 zP = tex2D(_WallTex, scaledWorldPos.xy) * pWeight.z;
// Return the sum of all of the projections.
o.Albedo = xP + yP + zP;
}
ENDCG
}
FallBack "Diffuse"
}
However, when switching to the new render pipelines (HD or LW), the material using it turns pink. I know it's because the new pipelines no longer support surface shaders, so my question is: how do you achieve triplanar texturing with the new RP?
There is support for triplanar texturing through the shader graph. Just hit space inside the graph editor and search for "triplanar" and it will show up.
HDRP uses deferred rendering, so its shaders look fundamentally different. If you want to learn, I suggest you create a basic shader in Shader Graph, then right-click the master node and select "copy shader". You can then paste the shader code into a text editor and try to reverse engineer it. The SRP GitHub is also a good reference:
https://github.com/Unity-Technologies/Graphics/tree/master/com.unity.render-pipelines.high-definition/Runtime/ShaderLibrary
https://github.com/Unity-Technologies/Graphics/tree/master/com.unity.render-pipelines.high-definition/Runtime/RenderPipeline/ShaderPass
For LWRP, there is this template shader which I've found quite useful:
https://gist.github.com/phi-lira/225cd7c5e8545be602dca4eb5ed111ba

Unity - How to recolor individual faces of a textured mesh?

I've got a textured mesh with a specific list of triangles that correspond to faces on the mesh that get recolored at runtime using code like this:
_mesh = GetComponent<MeshFilter>().mesh;
...
var colors = _mesh.colors;
foreach (int n in trianglesIndexList)
{
colors[n] = Color.blue;
}
_mesh.colors = colors;
It works great but I had originally intended these faces to be a bright blue. Unfortunately the coloration seems to be blending with the material's texture that I've assigned to the mesh in Unity. Here you can see that the green grass on the texture is mixed with the rather green looking "blue":
I've tried playing with the lighting but that does not seem to be the issue. The mesh coloration that I've got requires use of specific shaders, so I've assigned the Particles / Standard Surface shader to the mesh.
Is there any way to recolor subsections of the texture at runtime and make these faces look bright blue?
OK, never mind, I found my own solution by spending about 3 hours learning how shader code works and writing a custom shader. Simply use Unity's default Standard shader as a starting point and add worldNormal and worldPos inputs like this:
struct Input
{
float2 uv_MainTex;
float3 worldNormal;
float3 worldPos;
};
Then, inside the surf function, you can filter down to just the upward-facing normals (which is what I want in this case) and filter those fragments once again by their world-space coordinates.
void surf(Input IN, inout SurfaceOutputStandard o)
{
float2 UV;
fixed4 c;
if (abs(IN.worldNormal.x) > 0.5)
{
// normals facing the left and right side
UV = IN.worldPos.yz;
c = tex2D(_MainTex, IN.uv_MainTex);
}
else if (abs(IN.worldNormal.z) > 0.5)
{
// normals facing forward and backward
UV = IN.worldPos.xy;
c = tex2D(_MainTex, IN.uv_MainTex);
}
else
{
// normals facing up and down
if (abs(IN.worldPos.x) > 0.5)
{
UV = IN.worldPos.xz;
// COLOR IT BLUE
c = tex2D(_MainTex, IN.uv_MainTex) + (_Color * _SinTime.y);
}
else
{
UV = IN.worldPos.xz;
c = tex2D(_MainTex, IN.uv_MainTex);
}
}
o.Albedo = c.rgb; // assumed completion: write the blended color out, as in a standard surface shader
}
I can obviously simplify this quite a bit, but this is all I wanted to know how to do. I even figured out how to recolor over time using the built-in _SinTime variable. Super easy.

Decal wrap around mesh

I'm working on a tattoo simulator program. I need to know if there's a way for the decal (tattoo) to wrap around the target mesh, for example a tattoo that goes from one side of, let's say, a leg to the other, or even behind it.
Not at runtime, using a projected decal, no.
What you need here instead is a procedural tattoo map. Think of it as another texture, like a lightmap. You may need a custom shader, but it could possibly be done with the secondary albedo channel of the standard shader.
The tricky part is writing to that texture. I'll outline the basic algorithm, but leave it up to you to implement:
The first thing you need to be able to do is unwrap the mesh's triangles in code. You need to identify which edges are contiguous on the UV map, and which are separate. Next, you need a way to identify the tattoo and the initial transform. First, you'll want to define an origin on the tattoo source texture that it will rotate around. Then you'll want to define a structure that references the source texture, and the UV position (Vector2) / rotation (float) / scale (float) to apply it to in the destination texture.
Once you have the tattoos stored in that format, you can start building the tattoo mask texture for the skin. If your skin UVs have a consistent pixel density, this is a lot easier because you can work primarily in UV space, but if not, you'll need to re-project to get the scale for each triangle. Basically, you start with the body triangle that contains the origin and draw onto that triangle normally. From there, you know where each vertex and edge of that triangle lies on the tattoo source texture, so loop through each neighboring triangle (I recommend a breadth-first recursive method) and continue it from the edge you already know. If all three verts fall outside the source texture's rect, you can stop there; otherwise, continue with that triangle's neighbors. Make sure you're using the 3D mesh when calculating neighbors so you don't get stuck at UV seams.
That algorithm is going to have an edge case you'll need to deal with for when the tattoo wraps all the way around and overlaps itself, but there are a couple different ways you can deal with that.
Once you've written all tattoos to the tattoo texture, just apply it to the skin material and voila! Not only will this move all the calculations out of real-time rendering, but it will let you fully control how your tattoos can be applied.
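To make the traversal part concrete, here is a very rough sketch of the breadth-first walk over triangle neighbors described above. The names are hypothetical, the stop condition and the actual drawing into the mask are left to the caller, and it assumes the triangle indices have already been welded by position so UV seams don't break adjacency.
using System.Collections.Generic;

// Hypothetical sketch: visits a mesh's triangles breadth-first, starting from the
// triangle that contains the tattoo origin, following shared edges.
public static class TriangleWalker
{
    public static IEnumerable<int> BreadthFirst(int[] triangles, int startTriangle)
    {
        // Map each undirected edge to the triangles that use it.
        var edgeToTris = new Dictionary<long, List<int>>();
        int triCount = triangles.Length / 3;
        for (int t = 0; t < triCount; t++)
            for (int e = 0; e < 3; e++)
            {
                long key = EdgeKey(triangles[t * 3 + e], triangles[t * 3 + (e + 1) % 3]);
                if (!edgeToTris.TryGetValue(key, out var list))
                    edgeToTris[key] = list = new List<int>();
                list.Add(t);
            }

        // Standard breadth-first traversal over the edge-adjacency graph.
        var visited = new HashSet<int> { startTriangle };
        var queue = new Queue<int>();
        queue.Enqueue(startTriangle);
        while (queue.Count > 0)
        {
            int t = queue.Dequeue();
            yield return t; // caller projects/draws this triangle into the tattoo mask here
            for (int e = 0; e < 3; e++)
            {
                long key = EdgeKey(triangles[t * 3 + e], triangles[t * 3 + (e + 1) % 3]);
                foreach (int neighbor in edgeToTris[key])
                    if (visited.Add(neighbor))
                        queue.Enqueue(neighbor);
            }
        }
    }

    // Packs an undirected edge (pair of vertex indices) into one dictionary key.
    static long EdgeKey(int a, int b) => a < b ? ((long)a << 32) | (uint)b
                                               : ((long)b << 32) | (uint)a;
}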
You can use a decal projector from Unity's official preview package, Render Pipelines - High Definition.
Here's how I used it to project a "tattoo" onto a bucket. You can of course apply it to your own model.
(Child the decal projector to the model so that the tattoo follows it.)
The best way to get the Render Pipelines - High Definition package is to use Unity Hub to create a new project and choose it as a template. For an existing project, this official blog post might help you.
Once you have successfully set up the package, follow this tutorial and you'll be able to project tattoos onto your models anywhere you want.
I've done something similar with a custom shader; I think it would do what you want. Mine dynamically renders flags based on the rank and type of a unit for an iPad game prototype. Exactly how you'll do it depends a bit on how things are set up in your project, but here's what mine looks like: the first image is the wireframe showing the mesh, and the second has the shaders turned on, adding the colors and emblem based on rank and unit. I've only included the shader for the top flag, since that has the unit emblem added in a way similar to how you want your tattoo applied:
Note that you can attach multiple shaders to a particular mesh.
And the emblem is just an image with transparency that is added to the shader and referenced as a texture within the shader:
You can see we also have a picture that has some shadow texture that's used as the background for the banner.
This is my first shader and was written a while ago, so I'm sure it's sub-optimal in all kinds of ways, but it should hopefully be enough to get you started (and it still works in Unity 2018.3.x, though I had to hack in some changes to get it to compile):
Shader "Custom/TroopFlagEmblemShader" {
Properties {
_BackColor ("Background Color", Color) = (0.78, 0.2, 0.2) // scarlet
_MainTex ("Background (RGBA)", 2D) = "" {}
_EmblemTex("Emblem (RGBA)", 2D) = "" {}
_Rank ( "Rank (1-9)", Float ) = 3.0
}
SubShader {
Pass {
CGPROGRAM
#pragma exclude_renderers xbox360 ps3 flash
#pragma target 3.0
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata {
float4 vertex: POSITION;
float4 texcoord: TEXCOORD0;
};
struct v2f {
float4 pos: SV_POSITION;
float2 uv: TEXCOORD0;
};
uniform sampler2D _MainTex;
uniform sampler2D _EmblemTex;
uniform float3 _BackColor;
uniform float _Rank;
v2f vert( appdata v )
{
v2f o;
o.pos = UnityObjectToClipPos( v.vertex );
o.uv = v.texcoord.xy;
return o;
}
float4 frag( v2f IN ) : COLOR
{
float4 outColor;
float4 backTextureColor = tex2D( _MainTex, IN.uv.xy );
float4 emblemTextureColor = tex2D( _EmblemTex, IN.uv.xy );
// not drawing the square at all above rank 5
if ( _Rank >= 6.0 )
discard;
if ( _Rank < 5 ) // 4 and below
{
outColor = float4( (emblemTextureColor.rgb * emblemTextureColor.a) +
(((1.0 - emblemTextureColor.a) * backTextureColor.rgb) * _BackColor.rgb) , 1 );
// float4(_BackColor.rgb, 1 ));
}
else if ( _Rank >= 5.0 ) // but excluded from 6 above
{
// 5 is just solid backcolor combined with background texture
outColor = float4( backTextureColor.rgb * _BackColor.rgb, 1 );
}
return outColor;
}
ENDCG
}}
}
Shaders are a bit maddening to learn how to do, but pretty fun once you get them working - like most programming :)
In my case the overlay texture was the same size/shape as the flag which makes it a bit easier. I'm thinking you'll need to add some parameters to the shader that indicate where you want the overlay to be drawn relative to the mesh and do nothing for vertexes/fragments outside your tattoo bounds, just as a first thought.
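As a sketch of that idea (the parameter name and helper are made up, not something from my flag shader): pass the shader a UV rectangle for the tattoo placement and contribute nothing outside it.
// Hypothetical additions to the shader above: a UV rectangle parameter and a
// helper that samples the overlay only inside that rectangle.
uniform float4 _OverlayRect; // (minU, minV, maxU, maxV) of the tattoo placement

float4 sampleOverlay( sampler2D overlayTex, float2 uv )
{
    // Remap mesh UVs into the overlay's own 0-1 space.
    float2 overlayUV = (uv - _OverlayRect.xy) / (_OverlayRect.zw - _OverlayRect.xy);
    if (any(overlayUV < 0.0) || any(overlayUV > 1.0))
        return float4(0, 0, 0, 0); // outside the bounds: contribute nothing
    return tex2D(overlayTex, overlayUV);
}
// In frag(), blend sampleOverlay(_EmblemTex, IN.uv) over the background by its alpha.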

Shader programming in Unity

I am trying to write a shader that reads the whole frame, pixel by pixel, and after some calculations re-writes the pixels. I have looked through some code samples, but most of them were not relevant. Could you give me some hints on how I can read and write pixels in Unity shader programming?
If you have the Pro version of Unity, you can achieve this with image (postprocessing) effects. All you have to do is to implement the OnRenderImage callback on a component of a camera. Then you call Graphics.Blit with a material which has a shader. The shader receives the screen contents as main texture.
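A minimal sketch of that setup (the component and material names are placeholders): the component sits on the camera and runs the material's shader over every rendered frame.
using UnityEngine;

// Hypothetical post-processing component: attach to a Camera.
[RequireComponent(typeof(Camera))]
public class FramePostEffect : MonoBehaviour
{
    // Material whose shader does the per-pixel work; its _MainTex receives the frame.
    public Material effectMaterial;

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Runs the material's shader over the whole screen.
        Graphics.Blit(source, destination, effectMaterial);
    }
}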
You need texture buffers the size of your frame.
Then you want to render your frame into one of the buffers.
Now you need to write a fragment shader that reads from one buffer and writes to the other.
Then finally you draw the fragment shader output as a flat object that covers the screen.
In shader programming, you do not work pixel by pixel; you define a function that will be run for a single pixel at a position given as floats from 0 to 1 on each of 3 axes (although you will only use 2 of them). That fragment shader is then run for lots of pixels in parallel; that's how it does everything so quickly.
I hope that brief explanation is enough to get you started. Unity fragment shaders are written in Cg. Cg is a language that sits halfway between OpenGL's language GLSL and DirectX's language HLSL; since all the high-level shading languages compile into native instructions on the graphics card, they are all fairly similar. So there are plenty of Cg samples about, and once you can write Cg, you will have no problem reading HLSL and GLSL.
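To make the "one function per pixel at a 0 to 1 position" idea concrete, here is a minimal Cg image-effect shader that simply inverts the frame it receives (a sketch to pair with the OnRenderImage approach above, not production code):
Shader "Hidden/InvertEffect" {
Properties {
    _MainTex ("Frame", 2D) = "white" {}
}
SubShader {
    Pass {
        CGPROGRAM
        #pragma vertex vert_img
        #pragma fragment frag
        #include "UnityCG.cginc"
        sampler2D _MainTex;
        fixed4 frag (v2f_img i) : SV_Target {
            // i.uv is the 0-1 screen position of this pixel
            fixed4 col = tex2D(_MainTex, i.uv);
            return fixed4(1 - col.rgb, col.a);
        }
        ENDCG
    }
}
}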
Thank you for your advice; it was really helpful. I finally ended up with the code below for my shader. And now a new problem has come up.
My solution:
To solve my keystone problem, I adapted the "wearing glasses" idea: I placed a plane in front of the camera, attached the shader below to it, and then parented the plane to the camera. The problem right now is that the shader works very well, but in my VR setup it does not, because I have several cameras and the scene is distorted in one of them (as I want) while the other cameras show a normal scene. Everything is fine until these two scenes intersect; at that point I get a disjointed scene (please forgive me if that is not the correct word). By the way, I thought that instead of using this shader on a plane in front of the camera I should apply it to the camera itself, but the shader does not work when I add it to the camera, although it works perfectly on the plane object. Could you let me know how I can modify this code to be compatible with the camera? I am more than happy to hear your suggestions and ideas beyond my solution.
Shader "Custom/she1" {
Properties {
top("Top", Range(0,2)) = 1
bottom("Bottom", Range(0,2)) = 1
}
SubShader {
// Draw ourselves after all opaque geometry
Tags { "Queue" = "Transparent" }
// Grab the screen behind the object into _GrabTexture
GrabPass { }
// Render the object with the texture generated above
Pass {
CGPROGRAM
#pragma debug
#pragma vertex vert
#pragma fragment frag
#pragma target 3.0
sampler2D _GrabTexture : register(s0);
float top;
float bottom;
struct data {
float4 vertex : POSITION;
float3 normal : NORMAL;
};
struct v2f {
float4 position : POSITION;
float4 screenPos : TEXCOORD0;
};
v2f vert(data i){
v2f o;
o.position = mul(UNITY_MATRIX_MVP, i.vertex);
o.screenPos = o.position;
return o;
}
half4 frag( v2f i ) : COLOR
{
float2 screenPos = i.screenPos.xy / i.screenPos.w;
float _half = (top + bottom) * 0.5;
float _diff = (bottom - top) * 0.5;
screenPos.x = screenPos.x * (_half + _diff * screenPos.y);
screenPos.x = (screenPos.x + 1) * 0.5;
screenPos.y = 1-(screenPos.y + 1) * 0.5 ;
half4 sum = half4(0.0h,0.0h,0.0h,0.0h);
sum = tex2D( _GrabTexture, screenPos);
return sum;
}
ENDCG
}
}
Fallback Off
}