I want to render the camera's view into a texture and show that texture on a canvas while the camera isn't moving. When I run it on Android I get a black texture, but in the Web Player it works fine!
public RawImage rawImage;
public Camera camera;          // the camera to capture
private RenderTexture texture; // created in Start

private void Start()
{
    texture = new RenderTexture(camera.pixelWidth, camera.pixelHeight, 32, RenderTextureFormat.ARGB32);
    texture.antiAliasing = 8;
    texture.Create();
}

public void ShowTexture()
{
    camera.targetTexture = texture;
    RenderTexture.active = texture;
    if (!RenderTexture.active.IsCreated())
        RenderTexture.active.Create();
    camera.Render();
    var texture2d = new Texture2D(camera.targetTexture.width, camera.targetTexture.height, TextureFormat.RGB24, true, true);
    texture2d.ReadPixels(new Rect(0, 0, camera.pixelWidth, camera.pixelHeight), 0, 0);
    texture2d.Apply(false);
    RenderTexture.active = null;
    camera.targetTexture = null;
    rawImage.texture = texture2d;
}
Figured this might help someone...
I replicated the same issue by accident; it was working on both iOS and Android.
Then I disabled camera clear, and hey presto: it worked on iOS but was black on Android.
I changed camera clear back to Z-depth only, and both iOS and Android worked again.
So try changing your camera's clear settings.
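If you prefer to set this from code, a minimal sketch (assuming `cam` references the camera in question) is:

```csharp
using UnityEngine;

public class CameraClearFix : MonoBehaviour
{
    public Camera cam;

    void Start()
    {
        // Clear depth only, the setting that worked on both iOS and Android above.
        cam.clearFlags = CameraClearFlags.Depth;
    }
}
```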
I am currently working with Unity 2019.2.0f1.
For unknown reasons, the initial camera I had imported from an earlier version was showing a black render texture while a new one worked perfectly.
I seem to have found a solution; something on the camera itself seems to be the issue. Delete the camera, re-create it, and set the render texture's Color Format to R8G8B8A8_UNORM with the depth buffer at 24.
Now both show up properly on Android.
Hope this helps
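Doing the equivalent from code, a sketch of creating the render texture with that format would be (the 1024x1024 size is a placeholder; RenderTextureFormat.ARGB32 corresponds to R8G8B8A8_UNORM):

```csharp
// Explicit color format plus a 24-bit depth buffer
var rt = new RenderTexture(1024, 1024, 24, RenderTextureFormat.ARGB32);
rt.Create();
```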
This is a known issue with Unity. You can find more details at:
https://forum.unity3d.com/threads/rendertexture-not-working-on-ios-and-android-unity-4-2-0f4.192561/
https://forum.unity3d.com/threads/render-texture-not-working-on-device-unity-5-2-1f1.358483/
https://forum.unity3d.com/threads/render-texture-works-in-editor-but-not-on-devices-after-upgrade-to-unity-5.362397/
and a few others where the moderators and staff claim it is fixed in a future release or (with a touch of unnecessary arrogance) that the issue is the user and a bug never existed at all.
BUT
This is going to sound silly, but add an ImageEffect to the main camera. I have made a dummy effect that is attached to my main camera and without any logical explanation, it fixes RenderTexture on mobile.
DummyEffect.cs:
using UnityEngine;
[ExecuteInEditMode]
[AddComponentMenu("Image Effects/Dummy Effect")]
public class DummyEffect : ImageEffectBase {
// Called by camera to apply image effect
void OnRenderImage (RenderTexture source, RenderTexture destination) {
Graphics.Blit (source, destination, material);
}
}
DummyEffect.shader:
Shader "Hidden/Dummy Effect" {
Properties {
_MainTex ("Base (RGB)", RECT) = "white" {}
}
SubShader {
Pass {
ZTest Always Cull Off ZWrite Off
Fog { Mode off }
CGPROGRAM
#pragma vertex vert_img
#pragma fragment frag
#pragma fragmentoption ARB_precision_hint_fastest
#include "UnityCG.cginc"
uniform sampler2D _MainTex;
float4 frag (v2f_img i) : COLOR {
return tex2D(_MainTex, i.uv);
}
ENDCG
}
}
Fallback off
}
Related
I've been having trouble getting custom post-processing shaders to work with the 2D URP renderer. After a lot of searching, I found a way to use post-processing effects in 2D with URP by using camera stacking and render features: a base camera renders most of the scene, including the 2D lights (the main reason I'm using URP), and a second overlay camera renders the post-processing effect. The issue is that, for some reason, the quality drops a lot when the camera applying the post-processing effect is enabled. Here are a couple of examples:
With post-processing camera enabled
With post-processing camera disabled
The shader shouldn't be doing anything at the moment, but if I do make it do something, like inverting the colors, the effect does get applied when the camera is enabled. The UI has its own camera, so it's unaffected by both the low quality and the shader. I've found that disabling the render feature brings the quality back as well, but it doesn't seem to be the shader that's doing this, because I can detach the shader from the feature without disabling the feature and the low quality stays. I'm still pretty new to shaders, though, so in case there is something wrong with my shader that's causing this, here's the code:
Shader "PixelationShader"
{
SubShader
{
Tags { "RenderType" = "Opaque" "RenderPipeline" = "UniversalPipeline"}
LOD 100
ZWrite Off Cull Off
Pass
{
Name "PixelationShader"
HLSLPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
struct Attributes
{
float4 positionHCS : POSITION;
float2 uv : TEXCOORD0;
UNITY_VERTEX_INPUT_INSTANCE_ID
};
struct Varyings
{
float4 positionCS : SV_POSITION;
float2 uv : TEXCOORD0;
UNITY_VERTEX_OUTPUT_STEREO
};
Varyings vert(Attributes input)
{
Varyings output;
// Note: The pass is setup with a mesh already in clip
// space, that's why, it's enough to just output vertex
// positions
output.positionCS = float4(input.positionHCS.xyz, 1.0);
#if UNITY_UV_STARTS_AT_TOP
output.positionCS.y *= -1;
#endif
output.uv = input.uv;
return output;
}
TEXTURE2D_X(_CameraOpaqueTexture);
SAMPLER(sampler_CameraOpaqueTexture);
half4 frag(Varyings input) : SV_Target
{
UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);
float4 color = SAMPLE_TEXTURE2D_X(_CameraOpaqueTexture, sampler_CameraOpaqueTexture, input.uv);
//color.rgb = 1 - color.rgb;
return color;
}
ENDHLSL
}
}
}
Please let me know if you have any ideas, thanks! Also, the editor light icons you can see in the images just started appearing in-game as well; if anyone knows how to remove those, or how to fix the white lines at the edges of the screen, that would be handy to know too.
Edit: I've noticed that the quality difference in the images I posted isn't very noticeable, but it's much more noticeable when actually playing the game.
_CameraOpaqueTexture uses bilinear downsampling by default. You can change that in the Universal Render Pipeline asset that you use:
the dropdown under Rendering → Opaque Downsampling needs to be set to None.
After trying a bunch of different things, I decided to just remove URP from my project and use 3D lights on 2D sprites instead
I am trying to get the world position of a pixel inside a fragment shader.
Let me explain. I have followed a tutorial for a fragment shader that lets me paint on objects. Right now it works through texture coordinates, but I want it to work through the pixel's world position, so that when I click on a 3D model I can compare the Vector3 position where the click happened to the pixel's Vector3 position and, if the distance is small enough, lerp the color.
This is the setup I have. I created a new 3D project just for making the shader, with the intent to export it later into my main project. In the scene I have the default main camera, a directional light, an object with a script that shows me the FPS, and a default 3D cube with a mesh collider. I created a new material and a new Standard Surface Shader and added them to the cube. After that I assigned the C# script below to the cube, with references to the shader and a camera.
Update: The problem right now is that the blit doesn't work as expected. If you change the shader as Kalle said, remove the blit from the C# script, and change the 3D model's material to use the Draw shader, it works as expected, but without any lighting. For my purposes I had to change distance(_Mouse.xyz, i.worldPos.xyz); to distance(_Mouse.xz, i.worldPos.xz); so it paints all the way through to the other side. For debugging I created a RenderTexture, and every frame I use Blit to update it and see what is going on. The render texture does not hold the right position as the object is colored. The 3D model I have has a lot of geometry, and as the paint goes through to the other side it should be all over the place on the render texture... but right now it is just one line from the top to the bottom of the texture. Also, when I try to paint on the bottom half of the object the render texture doesn't show anything; only when I paint on the top half can I see red lines (the default painting color).
If you want you can download the sample project here.
This is the code I am using.
Draw.shader
Shader "Unlit/Draw"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
_Coordinate("Coordinate",Vector)=(0,0,0,0)
_Color("Paint Color",Color)=(1,1,1,1)
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 100
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
};
sampler2D _MainTex;
float4 _MainTex_ST;
fixed4 _Coordinate,_Color;
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
return o;
}
fixed4 frag (v2f i) : SV_Target
{
// sample the texture
fixed4 col = tex2D(_MainTex, i.uv);
float draw =pow(saturate(1-distance(i.uv,_Coordinate.xy)),100);
fixed4 drawcol = _Color * (draw * 1);
return saturate(col + drawcol);
}
ENDCG
}
}
}
Draw.cs
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Draw : MonoBehaviour
{
public Camera cam;
public Shader paintShader;
RenderTexture splatMap;
Material snowMaterial,drawMaterial;
RaycastHit hit;
private void Awake()
{
Application.targetFrameRate = 200;
}
void Start()
{
drawMaterial = new Material(paintShader);
drawMaterial.SetVector("_Color", Color.red);
snowMaterial = GetComponent<MeshRenderer>().material;
splatMap = new RenderTexture(1024, 1024, 0, RenderTextureFormat.ARGBFloat);
snowMaterial.mainTexture = splatMap;
}
void Update()
{
if (Input.GetMouseButton(0))
{
if(Physics.Raycast(cam.ScreenPointToRay(Input.mousePosition),out hit))
{
drawMaterial.SetVector("_Coordinate", new Vector4(hit.textureCoord.x, hit.textureCoord.y, 0, 0));
RenderTexture temp = RenderTexture.GetTemporary(splatMap.width, splatMap.height, 0, RenderTextureFormat.ARGBFloat);
Graphics.Blit(splatMap, temp);
Graphics.Blit(temp, splatMap, drawMaterial);
RenderTexture.ReleaseTemporary(temp);
}
}
}
}
As for what I have tried to solve the problem: I searched on Google and tried to implement what I found in my project. I also found some projects that have the feature I need, like Mesh Texture Painting. That one works exactly how I need it, but it doesn't work on iOS; the 3D object turns black. You can check out a previous post I made about it; I also talked with the creator on Twitter, but he couldn't help me. I have also tried an asset that works OK, but in my main project it runs at very low FPS, it's hard for me to customize for my needs, and it doesn't paint on the edges of my 3D model.
The shader above is the one that works well and is simple enough that I can change it to get the desired effect.
Thank you!
There are two approaches to this problem - either you pass in the texture coordinate and try to convert it to world space inside the shader, or you pass in a world position and compare it to the fragment world position. The latter is no doubt the easiest.
So, let's say that you pass in the world position into the shader like so:
drawMaterial.SetVector("_Coordinate", new Vector4(hit.point.x, hit.point.y, hit.point.z, 0));
Calculating a world position per fragment is expensive, so we do it inside the vertex shader and let the hardware interpolate the value per fragment. Let's add a world position to our v2f struct:
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
float3 worldPos : TEXCOORD1;
};
To calculate the world position inside the vertex shader, we can use the built-in matrix unity_ObjectToWorld:
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
return o;
}
Finally, we can access the value in the fragment shader like so:
float draw =pow(saturate(1-distance(i.worldPos,_Coordinate.xyz)),100);
EDIT: I just realized - when you do a blit pass, you are not rendering with your mesh, you are rendering to a quad which covers the whole screen. Because of this, when you calculate the distance to the vertex, you get the distance to the screen corners, which is not right. There is a way to solve this though - you can change the render target to your render texture and draw the mesh using a shader which projects the mesh UVs across the screen.
It's a bit hard to explain, but basically, the way vertex shaders work is that you take in a vertex which is in local object space and transform it to be relative to the screen in the space -1 to 1 on both axes, where 0 is in the center. This is called Normalized Device Coordinate Space, or NDC space. We can leverage this to make it so that instead of using the model and camera matrices to transform our vertices, we use the UV coordinates, converted from [0,1] space to [-1,1]. At the same time, we can calculate our world position and pass it onto the fragment separately. Here is how the shader would look:
v2f vert (appdata v)
{
v2f o;
float2 uv = v.texcoord.xy;
// https://docs.unity3d.com/Manual/SL-PlatformDifferences.html
if (_ProjectionParams.x < 0) {
uv.y = 1 - uv.y;
}
// Convert from 0,1 to -1,1, for the blit
o.vertex = float4(2 * (uv - 0.5), 0, 1);
// We still need UVs to draw the base texture
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
// Let's do the calculations in local space instead!
o.localPos = v.vertex.xyz;
return o;
}
Also remember to pass in the _Coordinate variable in local space, using transform.InverseTransformPoint.
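On the C# side, that conversion might look like this (a sketch based on the Draw.cs above):

```csharp
// Convert the world-space hit point into the mesh's local space
Vector3 localHit = transform.InverseTransformPoint(hit.point);
drawMaterial.SetVector("_Coordinate", new Vector4(localHit.x, localHit.y, localHit.z, 0));
```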
Now, we need to use a different approach to render this into the render texture. Basically, we need to render the actual mesh as if we were rendering from a camera - except that this mesh will be drawn as a splayed out UV sheet across the screen. First, we set the active render texture to the texture we want to draw into:
// Cache the old target so that we can reset it later
RenderTexture previousRT = RenderTexture.active;
RenderTexture.active = temp;
(You can read about how render targets work here)
Next, we need to bind our material and draw the mesh.
Material mat = drawMaterial;
Mesh mesh = yourAwesomeMesh;
mat.SetTexture("_MainTex", splatMap);
mat.SetPass(0); // This tells the renderer to use pass 0 from this material
Graphics.DrawMeshNow(mesh, Vector3.zero, Quaternion.identity);
Finally, blit the texture back to the original:
// Remember to reset the render target
RenderTexture.active = previousRT;
Graphics.Blit(temp, splatMap);
I haven't tested or verified this, but I have used a similar technique to draw a mesh into UVs before. You can read more about DrawMeshNow here.
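Putting those steps together, the paint call in Draw.cs could be reworked roughly like this (an untested sketch; `yourAwesomeMesh` stands for the painted object's mesh):

```csharp
void PaintAt(RaycastHit hit, Mesh yourAwesomeMesh)
{
    // Pass the hit point in local space, matching the shader's localPos
    Vector3 localHit = transform.InverseTransformPoint(hit.point);
    drawMaterial.SetVector("_Coordinate", new Vector4(localHit.x, localHit.y, localHit.z, 0));
    drawMaterial.SetTexture("_MainTex", splatMap);

    RenderTexture temp = RenderTexture.GetTemporary(splatMap.width, splatMap.height, 0, RenderTextureFormat.ARGBFloat);

    // Draw the mesh, splayed out as its UV sheet, into the temporary target
    RenderTexture previousRT = RenderTexture.active;
    RenderTexture.active = temp;
    drawMaterial.SetPass(0);  // use pass 0 of the draw material
    Graphics.DrawMeshNow(yourAwesomeMesh, Vector3.zero, Quaternion.identity);
    RenderTexture.active = previousRT;

    Graphics.Blit(temp, splatMap);  // copy the result back into the splat map
    RenderTexture.ReleaseTemporary(temp);
}
```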
I have a .fbx 3D mesh that I've imported as a GameObject in Unity. It has vertex colors. I'm trying to get certain parts of the mesh to render as transparent, ideally once the user selects an option to do so.
For reference, here's a screenshot of the mesh I'm using.
https://imgur.com/a/FY8Z38r
I've written a shader in Unity that is attached to this GameObject's material, which allows the mesh's vertex colors to be displayed.
Shader "Custom/VertexColor" {
// Where it will appear inside of the Shader Dropdown Menu of the Material / Name of the shader
SubShader{
Tags { "RenderType" = "Opaque" }
LOD 200
CGPROGRAM
#pragma surface surf Lambert vertex:vert
#pragma target 3.0
struct Input {
float4 vertColor;
};
float _CutoutThresh;
void vert(inout appdata_full v, out Input o) {
UNITY_INITIALIZE_OUTPUT(Input, o);
o.vertColor = v.color;
}
void surf(Input IN, inout SurfaceOutput o) {
o.Albedo = IN.vertColor.rgb;
clip(IN.vertColor.r = 0.5); // Discards any pixel whose interpolated vertex color in the red channel is less than 0.5
}
ENDCG
}
FallBack "Diffuse"
}
Specifically, this line here:
clip(IN.vertColor.g = 0.5); // Discards any pixel whose interpolated vertex color in the green channel is less than 0.5
I expected this line to discard any non-green pixels, but my GameObject still looks the same.
HLSL's clip function discards a pixel if the value is less than zero.
What you are looking for would be something like:
clip( IN.vertColor.g < 0.5f ? -1:1 );
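Equivalently, since `clip` discards when its argument is negative, you can subtract the threshold directly:

```hlsl
clip(IN.vertColor.g - 0.5); // discards any pixel whose green channel is below 0.5
```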
Using Unity 2017.3
I have a RawImage into which I display a sequence of images loaded as Texture2D's. Works perfectly and seamlessly.
I then show a video into the same RawImage, using the sample VideoPlayer code, and assigning
rawImage.texture = videoPlayer.texture;
The video plays perfectly well, but as part of switching from the still images to the video, there is a noticeable flicker in the RawImage, as if there's a frame or two of black displayed. The first frame of the video matches the last static image I displayed, so I had expected the transition to be pretty seamless.
Note that the video has been "Prepared" prior to all this - my code yields until videoPlayer.isPrepared returns true, and only then tell the video to play and set the texture.
I thought maybe there was an issue with the texture not being quite ready, but I tried yielding once or twice after calling Play and before setting the texture, but that didn't have any effect on the flicker.
I saw this item: https://answers.unity.com/questions/1294011/raw-image-flickering-when-texture-changed.html which suggests that this is something to do with material instances being set up. I don't fully understand the solution presented in that answer, nor do I understand how I could adapt it to my own case, but maybe it means something to those more skilled in Unity than I.
Any suggestions on how to get rid of that flickery frame?
EDIT: Here's the code
public class VideoAnimation : MonoBehaviour, IAnimation {
private VideoPlayer _videoPlayer;
private UnityAction _completeAction;
private bool _prepareStarted;
public void Configure(VideoClip clip, bool looping)
{
_videoPlayer = gameObject.AddComponent<VideoPlayer>();
_videoPlayer.playOnAwake = false;
_videoPlayer.isLooping = looping;
_videoPlayer.source = VideoSource.VideoClip;
_videoPlayer.clip = clip;
_videoPlayer.skipOnDrop = true;
}
public void Prepare()
{
_prepareStarted = true;
_videoPlayer.Prepare();
}
public void Begin(RawImage destImage, UnityAction completeAction)
{
_completeAction = completeAction;
_videoPlayer.loopPointReached += OnLoopPointReached;
StartCoroutine(GetVideoPlaying(destImage));
}
private IEnumerator GetVideoPlaying(RawImage destImage)
{
if (!_prepareStarted)
{
_videoPlayer.Prepare();
}
while(!_videoPlayer.isPrepared)
{
yield return null;
}
_videoPlayer.Play();
destImage.texture = _videoPlayer.texture;
}
public void OnLoopPointReached(VideoPlayer source)
{
if (_completeAction != null)
{
_completeAction();
}
}
public void End()
{
_videoPlayer.Stop();
_videoPlayer.loopPointReached -= OnLoopPointReached;
}
public class Factory : Factory<VideoAnimation>
{
}
}
In the specific case I'm dealing with, Configure and Prepare are called ahead of time, while the RawImage is showing the last static image before the video. Then when it's time to show the video, Begin is called. Thus, _prepareStarted is already true when Begin is called. Inserting log messages shows that isPrepared is returning true by the time I get around to calling Begin, so I don't loop there either.
I've tried altering the order of the two lines
_videoPlayer.Play();
destImage.texture = _videoPlayer.texture;
but it doesn't seem to change anything. I also thought that maybe the VideoPlayer was somehow outputting a black frame ahead of the normal video, but inserting a yield or three after Play and before the texture set made no difference.
None of the samples I've seen have a Texture in the RawImage before the VideoPlayer's texture is inserted. So in those, the RawImage is starting out black, which means that an extra black frame isn't going to be noticeable.
EDIT #2:
Well, I came up with a solution and, I think, somewhat of an explanation.
First, VideoPlayer.frame is documented as "The frame index currently being displayed by the VideoPlayer." This is not strictly true. Or, maybe it is somewhere in the VideoPlayer's pipeline, but it's not the frame that's observable by code using the VideoPlayer's texture.
When you Prepare the VideoPlayer, at least in the mode I'm using it, the VideoPlayer creates an internal RenderTexture. You would think that, once the player has been prepared, that texture would contain the first frame of the video. It doesn't. There is a very noticeable delay before there's anything there. Thus, when my code set the RawImage texture to the player's texture, it was arranging for a texture that was, at least at that moment, empty to be displayed. This perfectly explains the black flicker, since that's the color of the background Canvas.
So my first attempt at a solution was to insert the loop here:
_videoPlayer.Play();
while(_videoPlayer.frame < 1)
{
yield return null;
}
destImage.texture = _videoPlayer.texture;
between Play and the texture set.
I figured that, despite the documentation, maybe frame was the frame about to be displayed. If so, this should result in the first (0th) frame already being in the buffer, and would get rid of the flicker. Nope. Still flickered. But when I changed to
_videoPlayer.Play();
while(_videoPlayer.frame < 2)
{
yield return null;
}
destImage.texture = _videoPlayer.texture;
then the transition was seamless. So my initial attempt where I inserted yields between the two was the right approach - I just didn't insert quite enough. One short, as a matter of fact. I inserted a counter in the loop, and it showed that I yielded 4 times in the above loop, which is what I would expect, since the video is 30fps, and I'm running at 60fps on my computer. (Sync lock is on.)
A final experiment showed that:
_videoPlayer.Play();
while(_videoPlayer.frame < 1)
{
yield return null;
}
yield return null;
destImage.texture = _videoPlayer.texture;
also did not result in a flicker. (Or, at least, not one that I could see.) So once the VideoPlayer was reporting that it was displaying the second frame (the numbers are 0-based according to the docs), it took one additional game frame before the transition was seamless. (Unless there was a 60-th of a second flicker that my eyes can't see.) That game frame might have something to do with Unity's graphic pipeline or VideoPlayer pipeline - I don't know.
So, the bottom line is that there is a noticeable delay from the time you call Play until there is actually anything in the VideoPlayer's texture that will make it to the screen, and unless you wait for that, you'll be displaying "nothing" (which, in my case, resulted in black background flickering through.)
It occurs to me that since the VideoPlayer is producing a RenderTexture, it might also be possible to blit the previous static texture to the VideoPlayer's texture (so that there would be something there right away) and then do the switch immediately. Another experiment to run...
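That last idea could be sketched like this (untested; it assumes the player's texture can be cast to a RenderTexture, and `lastStillTexture` is a hypothetical reference to the static image currently on screen):

```csharp
var videoRT = _videoPlayer.texture as RenderTexture;
if (videoRT != null)
{
    // Pre-fill the video texture with the matching still frame,
    // so the RawImage never shows an empty (black) texture.
    Graphics.Blit(lastStillTexture, videoRT);
}
destImage.texture = _videoPlayer.texture;
_videoPlayer.Play();
```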
Hmm, let's try using shaders; maybe this helps you.
First we must create a custom shader that works like the standard UI shader.
You can download all the built-in shaders at this link.
Take UI-Default.shader and modify it. I've modified it for you!
Just create a shader in Unity and paste this code:
Shader "Custom/CustomShaderForUI"
{
Properties
{
//[PerRendererData] _MainTex ("Sprite Texture", 2D) = "white" {}
_CustomTexture ("Texture", 2D) = "white" {} // <--------------- new property
_Color ("Tint", Color) = (1,1,1,1)
_StencilComp ("Stencil Comparison", Float) = 8
_Stencil ("Stencil ID", Float) = 0
_StencilOp ("Stencil Operation", Float) = 0
_StencilWriteMask ("Stencil Write Mask", Float) = 255
_StencilReadMask ("Stencil Read Mask", Float) = 255
_ColorMask ("Color Mask", Float) = 15
[Toggle(UNITY_UI_ALPHACLIP)] _UseUIAlphaClip ("Use Alpha Clip", Float) = 0
}
SubShader
{
Tags
{
"Queue"="Transparent"
"IgnoreProjector"="True"
"RenderType"="Transparent"
"PreviewType"="Plane"
"CanUseSpriteAtlas"="True"
}
Stencil
{
Ref [_Stencil]
Comp [_StencilComp]
Pass [_StencilOp]
ReadMask [_StencilReadMask]
WriteMask [_StencilWriteMask]
}
Cull Off
Lighting Off
ZWrite Off
ZTest [unity_GUIZTestMode]
Blend SrcAlpha OneMinusSrcAlpha
ColorMask [_ColorMask]
Pass
{
Name "Default"
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma target 2.0
#include "UnityCG.cginc"
#include "UnityUI.cginc"
#pragma multi_compile __ UNITY_UI_CLIP_RECT
#pragma multi_compile __ UNITY_UI_ALPHACLIP
struct appdata_t
{
float4 vertex : POSITION;
float4 color : COLOR;
float2 texcoord : TEXCOORD0;
UNITY_VERTEX_INPUT_INSTANCE_ID
};
struct v2f
{
float4 vertex : SV_POSITION;
fixed4 color : COLOR;
float2 texcoord : TEXCOORD0;
float4 worldPosition : TEXCOORD1;
UNITY_VERTEX_OUTPUT_STEREO
};
fixed4 _Color;
fixed4 _TextureSampleAdd;
float4 _ClipRect;
v2f vert(appdata_t v)
{
v2f OUT;
UNITY_SETUP_INSTANCE_ID(v);
UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(OUT);
OUT.worldPosition = v.vertex;
OUT.vertex = UnityObjectToClipPos(OUT.worldPosition);
OUT.texcoord = v.texcoord;
OUT.color = v.color * _Color;
return OUT;
}
//sampler2D _MainTex;
sampler2D _CustomTexture; // <---------------------- new property
fixed4 frag(v2f IN) : SV_Target
{
//half4 color = (tex2D(_MainTex, IN.texcoord) + _TextureSampleAdd) * IN.color;
half4 color = (tex2D(_CustomTexture, IN.texcoord) + _TextureSampleAdd) * IN.color; // <- using new property
#ifdef UNITY_UI_CLIP_RECT
color.a *= UnityGet2DClipping(IN.worldPosition.xy, _ClipRect);
#endif
#ifdef UNITY_UI_ALPHACLIP
clip (color.a - 0.001);
#endif
return color;
}
ENDCG
}
}
}
Next, create a material with this shader and assign your RenderTexture to the shader's texture field (not to the RawImage component).
Hope this helps!
Try hiding the RawImage gameObject while the video is loading. This should fix any flickering caused by the VideoPlayer not being fully loaded.
public VideoPlayer _videoPlayer;
public RawImage _videoImage;
private void PlayClip(VideoClip videoClip)
{
StartCoroutine(PlayClipCoroutine(videoClip));
}
private IEnumerator PlayClipCoroutine(VideoClip clip)
{
_videoImage.gameObject.SetActive(false);
_videoPlayer.clip = clip;
_videoPlayer.Prepare();
while (!_videoPlayer.isPrepared)
{
yield return null;
}
_videoPlayer.Play();
_videoImage.texture = _videoPlayer.texture;
_videoImage.gameObject.SetActive(true);
}
I have a scene where I really need depth of field.
Apparently, Unity's depth of field doesn't work with any shader, built-in or custom, that processes alpha.
This happens, for example, with the Transparent/Diffuse shader; Transparent/Cutout works instead.
Here's the simplest custom shader I made that triggers this behaviour:
Shader "Custom/SimpleAlpha" {
Properties {
_MainTex ("Base (RGBA)", 2D) = "white" {}
}
SubShader {
Tags { "RenderType"="Transparent" "Queue"="Transparent" }
//Tags { "RenderType"="Opaque" }
LOD 300
ZWrite Off
CGPROGRAM
#pragma surface surf Lambert alpha
#include "UnityCG.cginc"
sampler2D _MainTex;
struct Input {
float2 uv_MainTex;
};
void surf (Input IN, inout SurfaceOutput o) {
half4 c = tex2D (_MainTex, IN.uv_MainTex);
o.Albedo = c.rgb;
o.Alpha = c.a;
}
ENDCG
}
FallBack "Diffuse"
}
If you try the code in a project, you'll notice that EVERY object using the shader is blurred by the very same amount, instead of being blurred based on Z.
Any help is much appreciated.
Thanks in advance.
I posted the same question on Unity Answers: http://answers.unity3d.com/questions/438556/my-shader-brakes-depth-of-field.html
Since depth of field is a post processing effect that uses the values stored in the Z-buffer, the following line is the culprit:
ZWrite Off
For transparent objects, Z-buffer writes are usually disabled because the Transparent render queue doesn't need the Z-buffer.
So if you remove that line, you should see depth of field correctly applied to transparent objects. But objects lying behind fully transparent areas will now be blurred using the wrong Z value. As a quick fix, you could try an alpha test like AlphaTest Greater 0.1.
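Applied to the SubShader from the question, the two changes would look roughly like this (a sketch; the CGPROGRAM block stays exactly as it was, and the 0.1 cutoff is a value to tune):

```shaderlab
SubShader {
    Tags { "RenderType"="Transparent" "Queue"="Transparent" }
    LOD 300
    // ZWrite Off removed, so transparent pixels now write depth for the DOF pass
    AlphaTest Greater 0.1  // skip nearly invisible pixels before the depth write

    CGPROGRAM
    // ... unchanged surface shader from the question ...
    ENDCG
}
```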