I've been trying to project a 360 video onto the inside of a sphere with flipped normals for Google Cardboard VR. The video plays fine, except that it is mirrored horizontally, which only becomes noticeable when there is text on screen. I've included a screenshot of the video with a UI.Text element in front of it for comparison.
I've tried inverting the camera's view through its projectionMatrix, but then the camera just ends up looking at blank space. Screenshot:
I can't figure out a way to make the video project the right way. Please help!
Here is a shader that displays the content correctly without mirroring it. I have tested it with Unity 2018.1.1, which is the version I am currently using in my project:
Shader "InsideVisible" {
Properties {
_MainTex ("Base (RGB)", 2D) = "white" {}
}
SubShader {
Tags { "RenderType"="Opaque" }
Cull front // ADDED BY BERNIE, TO FLIP THE SURFACES
LOD 100
Pass {
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata_t {
float4 vertex : POSITION;
float2 texcoord : TEXCOORD0;
};
struct v2f {
float4 vertex : SV_POSITION;
half2 texcoord : TEXCOORD0;
};
sampler2D _MainTex;
float4 _MainTex_ST;
v2f vert (appdata_t v) {
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
// ADDED BY BERNIE:
v.texcoord.x = 1 - v.texcoord.x;
o.texcoord = TRANSFORM_TEX(v.texcoord, _MainTex);
return o;
}
fixed4 frag (v2f i) : SV_Target {
fixed4 col = tex2D(_MainTex, i.texcoord);
return col;
}
ENDCG
}
}
}
If you need more information about the shader, you can view this tutorial.
Flipping the normals on a sphere is insufficient; you also need to reverse the U component of the UV coordinates (that is, replace every value U with 1-U). A sphere's UVs are set up so that a texture reads correctly when viewed from the outside. When you flip the normals, "right" is still on the right as seen from the outside, meaning it ends up on the left when viewed from the inside.
You will either need to edit the UV coordinates manually or get a premade inverted sphere off the Asset Store (IIRC there are two available for free).
In Unity, I want to create an effect where an arbitrary shape (a quad or a cube) acts as a "portal" that reveals an image. No matter which way the object rotates, or what the camera perspective is, the image "inside the portal" always faces the same direction.
In this image, I have a 3D plane that reveals a checkerboard texture, like a cut-out in the scene. Whichever way the plane is rotated or the camera is positioned, the image inside the portal remains completely fixed. The inner image doesn't move or distort.
I want to be able to do this with multiple objects in the scene. So a sphere could be a portal to a fixed picture of a dog, or a cube could be a portal into a tiled pattern. Even knowing the name of this effect would be helpful. Do I have to write a shader to do this?
This is called a screen-space shader. Where most shaders calculate UV coordinates based on a pixel's location on the mesh, these shaders use its location on the screen. Here's a great article about them.
Hot tip: this is commonly used with a second camera rendering to a RenderTexture in order to create portals to 3D spaces.
You may need to play with the tiling to get the aspect ratio of your texture correct; this shader assumes it matches your screen, e.g. 16:9.
Shader "Ahoy/Screen Space Texture"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
}
SubShader
{
Tags { "Queue"="Transparent" "RenderType"="Transparent"}
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float4 vertex : SV_POSITION;
float2 uv : TEXCOORD0;
float4 screenPos:TEXCOORD1;
};
sampler2D _MainTex;
float4 _MainTex_ST;
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
o.screenPos = ComputeScreenPos(o.vertex);
return o;
}
fixed4 frag (v2f i) : SV_Target
{
float2 uvScreen = i.screenPos.xy / i.screenPos.w;
uvScreen = TRANSFORM_TEX(uvScreen,_MainTex);
return tex2D(_MainTex, uvScreen);
}
ENDCG
}
}
}
I don't know much about shaders, so I am struggling to add transparency to a shader I already use.
Basically, I used the shader below to display 360 videos on a sphere. It flips the normals so the video is displayed on the inside.
However, I would like to add an alpha value to it so I can make the sphere (and therefore the video) as transparent as I need it to be. What should I change?
Shader "Custom/Equirectangular" {
Properties {
_Color ("Main Color", Color) = (1,1,1,1)
_MainTex ("Diffuse (RGB) Alpha (A)", 2D) = "gray" {}
}
SubShader{
Pass {
Tags {"LightMode" = "Always"}
Cull Front
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma fragmentoption ARB_precision_hint_fastest
#pragma glsl
#pragma target 3.0
#include "UnityCG.cginc"
struct appdata {
float4 vertex : POSITION;
float3 normal : NORMAL;
};
struct v2f
{
float4 pos : SV_POSITION;
float3 normal : TEXCOORD0;
};
v2f vert (appdata v)
{
v2f o;
o.pos = UnityObjectToClipPos(v.vertex);
o.normal = v.normal;
return o;
}
sampler2D _MainTex;
#define PI 3.141592653589793
inline float2 RadialCoords(float3 a_coords)
{
float3 a_coords_n = normalize(a_coords);
float lon = atan2(a_coords_n.z, a_coords_n.x);
float lat = acos(a_coords_n.y);
float2 sphereCoords = float2(lon, lat) * (1.0 / PI);
return float2(1 - (sphereCoords.x * 0.5 + 0.5), 1 - sphereCoords.y);
}
float4 frag(v2f IN) : COLOR
{
float2 equiUV = RadialCoords(IN.normal);
return tex2D(_MainTex, equiUV);
}
ENDCG
}
}
FallBack "VertexLit"
}
EDIT
I have also noticed that texture tiling and offset do not work with this shader. Any ideas on how to make that work?
Short story: this is going to be really difficult.
Not the shader; the shader's easy. All you have to do is modify this line:
return tex2D(_MainTex, equiUV);
The Long story:
Or: what to modify this line to.
Video formats, due to their very nature, do not natively contain an alpha channel. You'll be hard pressed to find one that does (I looked into this briefly back in 2015 when "interviewing" for a "job" where they needed something similar).
Once you figure out how you're going to encode the alpha, then you can modify the shader to look for that data and convert it to an alpha value, and bam you're done.
I think the place that I was "interviewing" at did it by splitting the video into upper and lower sections: the upper half was just the alpha channel (black/white) and the lower half was the color data. The player would split the video horizontally and treat the two halves differently. I didn't have to mess with it, they'd already done it, so I'm not sure how it was done programmatically; I can only speculate.
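For illustration, here is a sketch of what the fragment function of the equirectangular shader above could look like with that kind of top/bottom encoding. This is my speculation, not their actual code; it assumes the alpha mask sits in the top half of each frame and the color data in the bottom half:

float4 frag(v2f IN) : COLOR
{
    float2 equiUV = RadialCoords(IN.normal);
    // Bottom half of the video carries the color data...
    float4 col = tex2D(_MainTex, float2(equiUV.x, equiUV.y * 0.5));
    // ...top half carries the alpha mask, stored as grayscale.
    col.a = tex2D(_MainTex, float2(equiUV.x, equiUV.y * 0.5 + 0.5)).r;
    return col;
}

The alpha only becomes visible once blending is enabled, which is what the next answer covers.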
You forgot blending for transparency. It is better to provide the corresponding shader tags as well:
Tags { "LightMode"="Always" "Queue"="Transparent" "RenderType"="Transparent" }
Blend SrcAlpha OneMinusSrcAlpha
Cull Front
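With those states in place, the fragment function still has to output an alpha for the blend to use. A minimal sketch, assuming you declare fixed4 _Color; next to the sampler (the property already exists in the shader) and, to address the tiling/offset EDIT, float4 _MainTex_ST; as well:

fixed4 _Color;
float4 _MainTex_ST; // xy = tiling, zw = offset, filled in from the material inspector

float4 frag(v2f IN) : COLOR
{
    float2 equiUV = RadialCoords(IN.normal);
    // Tiling and offset must be applied manually here, because the UVs are
    // derived from the normal rather than from the mesh texcoords.
    equiUV = equiUV * _MainTex_ST.xy + _MainTex_ST.zw;
    float4 col = tex2D(_MainTex, equiUV);
    col.a *= _Color.a; // fade the whole sphere via the Main Color alpha
    return col;
}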
I want to draw a horizontal line on an object with shader code (HLSL).
The clipping shader simply takes the distance to a given Y coordinate in the surface shader and checks whether it is greater than a given value. If so, it discards the fragment. The result is a shader that clips away all pixels that are not on the line.
void surf (Input IN, inout SurfaceOutputStandard o) {
    // Albedo comes from a texture tinted by color
    fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
    float d = abs(_YClip - IN.worldPos.y); // _YClip is in the Properties block and can be changed
    if (d > _LineThickness) {
        discard;
    }
}
Can I somehow combine this shader with the standard Unity shader without changing its code?
I plan to have a gizmo shader that renders lines and all kinds of stuff. It would be very practical if I could just tell Unity to render this gizmo shader on top.
I believe you might be able to use or adapt this shader to your purpose.
Image showing the object before the cutoff Y value is reached.
Image showing the transition, where one half is above the cutoff Y value and the other half is below. Note that the pattern it dissolves in depends on a texture you supply yourself, so it should be possible to have a strict cutoff instead of an uneven pattern.
After the object has fully passed the cutoff Y value. What I did in this case was hide a slightly smaller object inside the first one. If you don't have anything inside, the object will simply be invisible, or clipped.
Shader "Dissolve/Dissolve"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
_DissolveTexture("Dissolve Texture", 2D) = "white" {}
_DissolveY("Current Y of the dissolve effect", Float) = 0
_DissolveSize("Size of the effect", Float) = 2
_StartingY("Starting point of the effect", Float) = -1 //the number is supposedly in meters. Is compared to the Y coordinate in world space I believe.
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 100
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
// make fog work
//#pragma multi_compile_fog
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
//UNITY_FOG_COORDS(1)
float4 vertex : SV_POSITION;
float3 worldPos : TEXCOORD1;
};
sampler2D _MainTex;
float4 _MainTex_ST;
sampler2D _DissolveTexture;
float _DissolveY;
float _DissolveSize;
float _StartingY;
v2f vert (appdata v) //"The vertex shader"
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
//UNITY_TRANSFER_FOG(o,o.vertex);
return o;
}
fixed4 frag (v2f i) : SV_Target //"For drawing the pixel on top"
{
float transition = _DissolveY - i.worldPos.y; //Cutoff value where world position is taken into account.
clip(_StartingY + (transition + (tex2D(_DissolveTexture, i.uv)) * _DissolveSize)); //Clip = cutoff if above 0.
//My understanding: If StartingY for dissolve effect + transition value and uv mapping of the texture is taken into account, clip off using the _DissolveSize.
//This happens to each individual pixel.
// sample the texture
fixed4 col = tex2D(_MainTex, i.uv);
// apply fog
//UNITY_APPLY_FOG(i.fogCoord, col);
//clip(1 - i.vertex.x % 10); //"A pixel is NOT rendered if clip is below 0."
return col;
}
ENDCG
}
}
}
Here you can see the available inspector fields.
I have a similar shader for the X axis.
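The X-axis variant only changes which world-space coordinate feeds the transition. A sketch, where _DissolveX and _StartingX are hypothetical Float properties mirroring the Y ones:

// Same fragment shader, but dissolving along world-space X instead of Y:
float transition = _DissolveX - i.worldPos.x;
clip(_StartingX + (transition + tex2D(_DissolveTexture, i.uv).r * _DissolveSize));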
Searching for this issue turns up a number of solutions, but for some reason they don't work in my Unity3D 5.4. For example:
camera inside a sphere
I do not see a cull and/or sides setting on the material in the Unity editor.
In C#
rend = GetComponent<Renderer>();
mater = rend.material;
rend.setFaceCulling( "front", "ccw" );
mater.side = THREE.DoubleSide;
but there is no such setFaceCulling method or side property.
How to make material double sided?
You need a custom shader to enable a double-sided material, using Cull Off.
The easiest/fastest way to test this is to create a new Standard Surface Shader in the editor and open it. Add the line Cull Off below LOD 200.
One thing to consider is that lighting will not render correctly for the back faces. If you want that, I would recommend making models with two-sided geometry.
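For reference, this is roughly what the default Standard Surface Shader template looks like after that edit (the name and properties are just the template defaults):

Shader "Custom/DoubleSided" {
    Properties {
        _Color ("Color", Color) = (1,1,1,1)
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
        _Glossiness ("Smoothness", Range(0,1)) = 0.5
        _Metallic ("Metallic", Range(0,1)) = 0.0
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200
        Cull Off // render both front and back faces

        CGPROGRAM
        #pragma surface surf Standard fullforwardshadows
        #pragma target 3.0

        sampler2D _MainTex;
        half _Glossiness;
        half _Metallic;
        fixed4 _Color;

        struct Input {
            float2 uv_MainTex;
        };

        void surf (Input IN, inout SurfaceOutputStandard o) {
            fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
            o.Albedo = c.rgb;
            o.Metallic = _Metallic;
            o.Smoothness = _Glossiness;
            o.Alpha = c.a;
        }
        ENDCG
    }
    FallBack "Diffuse"
}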
Use or create a shader with
Cull off
Seen here in this simple two-sided shader:
Shader "Custom/NewSurfaceShader" {
Properties {
}
SubShader {
Cull off
Pass {
ColorMaterial AmbientAndDiffuse
}
}
}
Maybe my answer doesn't work for your Unity version, but here is a solution for newer versions with HDRP, shown in the image below.
Just create an Unlit shader and edit it: write Cull Off below LOD 100.
Then drag the shader onto a new material, set a picture on it for testing, and drag the material onto an object.
Lighting will render correctly! (My Unity is 2019.4.)
Shader "Unlit/unlit"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 100
Cull off
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma multi_compile_fog
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
UNITY_FOG_COORDS(1)
float4 vertex : SV_POSITION;
};
sampler2D _MainTex;
float4 _MainTex_ST;
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
UNITY_TRANSFER_FOG(o,o.vertex);
return o;
}
fixed4 frag (v2f i) : SV_Target
{
// sample the texture
fixed4 col = tex2D(_MainTex, i.uv);
// apply fog
UNITY_APPLY_FOG(i.fogCoord, col);
return col;
}
ENDCG
} }}
Also, in Unity 2020.3 the URP/Lit shader has a Render Face option.
You may use a custom surface shader with Cull Off, but the back faces will not be lit properly, because the normals are only valid for the front faces; on the back faces the normals point away from the surface. If you want the back faces to be treated like front faces, and don't want to build a double-sided mesh that consumes double the memory, you can draw in two passes: one pass for the front faces and one for the back faces, where you invert the normal of every vertex in the vertex shader. Use Cull Back for the first pass and Cull Front for the second, as sketched below.
SubShader
{
    // First pass starts here. If you are using a surface shader, the passes
    // are generated automatically; otherwise you should specify Pass { }.
    Tags { ... }
    LOD 200
    Cull Back
    ...
    struct Input
    {
        ...
    };
    ...
    // or vert & frag shaders
    void surf(Input IN, inout SurfaceOutputStandard p)
    {
        // processing the front faces, culling the back faces
        ...
    }
    ...
    // Second pass starts here, again generated automatically by Unity.
    Tags { ... }
    LOD 200
    Cull Front
    #pragma surface ... vertex:vert
    ...
    struct Input
    {
        ...
    };
    ...
    void vert(inout appdata_full v)
    {
        v.normal = -v.normal; // flip the normal
    }
    void surf(Input IN, inout SurfaceOutputStandard p)
    {
        // processing the back faces, culling the front faces
        ...
    }
}
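Put together, a minimal self-contained version of that structure could look like the following sketch (built-in render pipeline assumed; each surface CGPROGRAM block generates its own set of passes, so the SubShader ends up with front-face and back-face variants):

Shader "Custom/TwoSidedLit" {
    Properties {
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200

        // Front faces: normals are already correct.
        Cull Back
        CGPROGRAM
        #pragma surface surf Standard
        #pragma target 3.0
        sampler2D _MainTex;
        struct Input { float2 uv_MainTex; };
        void surf (Input IN, inout SurfaceOutputStandard o) {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG

        // Back faces: flip the normal in a vertex modifier so they are lit
        // as if they were front faces.
        Cull Front
        CGPROGRAM
        #pragma surface surf Standard vertex:vert
        #pragma target 3.0
        sampler2D _MainTex;
        struct Input { float2 uv_MainTex; };
        void vert (inout appdata_full v) {
            v.normal = -v.normal;
        }
        void surf (Input IN, inout SurfaceOutputStandard o) {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}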
In a fragment shader like the below:
Shader "ColorReplacement" {
Properties {
_MainTex ("Greyscale (R) Alpha (A)", 2D) = "white" {}
}
SubShader {
ZTest LEqual
ZWrite On
Pass {
Name "ColorReplacement"
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma fragmentoption ARB_precision_hint_fastest
#pragma target 3.0
#include "UnityCG.cginc"
struct v2f
{
float4 pos : SV_POSITION;
float4 uv : TEXCOORD0;
};
v2f vert (appdata_tan v)
{
v2f o;
o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
o.uv = v.texcoord.xy;
return o;
}
sampler2D _MainTex;
float4 frag(v2f i) : COLOR
{
}
ENDCG
}
}
Fallback off
}
Is there a way to know the screen coordinates of the fragment at i.uv?
I'm totally new to shaders. The shader is applied to an object drawn somewhere on the screen, so the first pixel passed to frag does not necessarily correspond to the first pixel of the screen (the viewport). Is there a way to know the position of this pixel in screen coordinates?
EDIT
Yes, I want to obtain the fragment location on the screen.
Unity accepts vertex and fragment programs written in both Cg and HLSL. But I don't know how to convert this shader to HLSL.
The equivalent of gl_FragCoord in Cg is WPOS. I can run the following shader:
Shader "Custom/WindowCoordinates/Base" {
SubShader {
Pass {
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma target 3.0
#include "UnityCG.cginc"
float4 vert(appdata_base v) : POSITION {
return mul (UNITY_MATRIX_MVP, v.vertex);
}
fixed4 frag(float4 sp:WPOS) : COLOR {
return fixed4(sp.xy/_ScreenParams.xy,0.0,1.0);
}
ENDCG
}
}
}
That uses the screen position the way I want, but I'm such a noob that I can't even combine the two shaders to get what I want: in my shader I'm trying to access v2f.pos, which is calculated the same way as sp in the shader above, but I get the error:
Program 'frag', variable/member "pos" has semantic "POSITION" which is not visible in this profile
If I change pos to be WPOS instead of SV_POSITION I get a similar error:
Program 'vert', variable/member "pos" has semantic "WPOS" which is not visible in this profile at line 35
Which is strange, since I'm using the same target 3.0 as the shader above.
In the GLSL fragment stage there's a built-in variable, gl_FragCoord, which carries the fragment's pixel position within the viewport. If the viewport covers the whole screen, this is all you need. If the viewport covers only a subwindow of the screen, you'll have to pass the viewport's xy offset and add it to gl_FragCoord.xy to get the screen position. Now, your shader code is not written in GLSL but apparently in Cg (with Unity extensions, it seems); still, it should have some correspondence to this available.
Though I would suggest you read some books or manuals on shaders, here is a simple solution:
vec2 texelSize = 1.0 / vec2(textureSize(yourTexSampler, 0));
vec2 screenCoords = gl_FragCoord.xy * texelSize;
I don't remember what gl_FragCoord is called in Cg, so search the docs. For textureSize(), substitute the width/height of the input texture.
Here is a pretty similar question I asked some time ago.
Also take a look at this.
According to the docs, there is a helper function: ComputeScreenPos.
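A minimal sketch of that approach (the shader name is arbitrary): interpolate the homogeneous screen position from the vertex shader and divide by w in the fragment shader, which sidesteps the WPOS/SV_POSITION profile errors entirely:

Shader "Custom/ScreenCoordsViaHelper" {
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f {
                float4 pos : SV_POSITION;
                float4 screenPos : TEXCOORD0;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.screenPos = ComputeScreenPos(o.pos); // homogeneous; divide by w per fragment
                return o;
            }

            fixed4 frag (v2f i) : SV_Target {
                float2 screenUV = i.screenPos.xy / i.screenPos.w; // 0..1 across the viewport
                return fixed4(screenUV, 0.0, 1.0);
            }
            ENDCG
        }
    }
}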