Using a shader for camera rendering in Unity

I've written a shader and it works fine when I add it to a plane located in front of the camera (in that case the camera itself has no shader). But when I add this shader to the camera, nothing shows up on the screen. Here is my code; could you let me know how I can change it to be compatible with the Camera.RenderWithShader method?
Shader "Custom/she1" {
Properties {
top("Top", Range(0,2)) = 1
bottom("Bottom", Range(0,2)) = 1
}
SubShader {
// Draw ourselves after all opaque geometry
Tags { "Queue" = "Transparent" }
// Grab the screen behind the object into _GrabTexture
GrabPass { }
// Render the object with the texture generated above
Pass {
CGPROGRAM
#pragma debug
#pragma vertex vert
#pragma fragment frag
#pragma target 3.0
sampler2D _GrabTexture : register(s0);
float top;
float bottom;
struct data {
float4 vertex : POSITION;
float3 normal : NORMAL;
};
struct v2f {
float4 position : POSITION;
float4 screenPos : TEXCOORD0;
};
v2f vert(data i){
v2f o;
o.position = mul(UNITY_MATRIX_MVP, i.vertex);
o.screenPos = o.position;
return o;
}
half4 frag( v2f i ) : COLOR
{
float2 screenPos = i.screenPos.xy / i.screenPos.w;
float _half = (top + bottom) * 0.5;
float _diff = (bottom - top) * 0.5;
screenPos.x = screenPos.x * (_half + _diff * screenPos.y);
screenPos.x = (screenPos.x + 1) * 0.5;
screenPos.y = 1-(screenPos.y + 1) * 0.5 ;
half4 sum = half4(0.0h,0.0h,0.0h,0.0h);
sum = tex2D( _GrabTexture, screenPos);
return sum;
}
ENDCG
}
}
Fallback Off
}

I think what you're asking for is a replacement shader that shades everything the camera renders with your shader.
Am I correct?
If so, this should work:
Camera.main.SetReplacementShader(Shader.Find("Your Shader"), "RenderType");
Here is some more info:
http://docs.unity3d.com/Documentation/Components/SL-ShaderReplacement.html
Edit: Are you expecting the entire camera view to warp like a lens effect? Because you're not going to get that from a shader like this by itself. As it stands it will only apply to objects like your plane, not to the full camera view; that requires a post-processing image effect. First, you need Unity Pro. If you have it, import the Image Effects package and look at the fisheye script, and see if you can duplicate the fisheye script with your own shader. When I attached the fisheye shader without its corresponding script, I got exactly the same results you are getting with your current shader code. If you don't have access to the Image Effects package, let me know and I'll send you the fisheye scripts and shaders.
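For reference, here is a minimal sketch of the kind of image-effect script the answer refers to (not the actual fisheye script; the class and field names are illustrative, and the trapezium shader would need to be rewritten as a post effect that samples _MainTex instead of a GrabPass texture):

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class TrapeziumEffect : MonoBehaviour
{
    // Assign a post-effect version of the shader (one that samples _MainTex) in the inspector
    public Shader effectShader;
    private Material effectMaterial;

    void Start()
    {
        effectMaterial = new Material(effectShader);
    }

    // Unity calls this after the camera finishes rendering; src holds the rendered frame
    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        if (effectMaterial != null)
            Graphics.Blit(src, dst, effectMaterial); // run the shader over the whole frame
        else
            Graphics.Blit(src, dst);                 // pass-through if no material is set
    }
}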

I have tried several ways so far. The shader itself works very well when I add it to a plane located in front of the main camera. But when I add it to the main camera with the code below, nothing is visible on the screen (just a blank screen), and there is no error message. I assign the above shader to the repl variable.
using UnityEngine;
using System.Collections;
public class test2 : MonoBehaviour {
// Use this for initialization
public Shader repl = null;
void Start () {
Camera.main.SetReplacementShader(repl,"Opaque");
}
// Update is called once per frame
void Update () {
}
}
Just for your information, the above shader distorts the scene into a trapezium shape.

Related

Draw onto an object's texture based on a raycast hit position in world space

I am trying to get the world position of a pixel inside a fragment shader.
Let me explain. I have followed a tutorial for a fragment shader that lets me paint on objects. Right now it works through texture coordinates, but I want it to work through the pixel's world position, so that when I click on a 3D model I can compare the Vector3 position of the click to the Vector3 position of the pixel and, if the distance is small enough, lerp the color.
This is the setup I have. I created a new 3D project just for making the shader, with the intent to export it later into my main project. In the scene I have the default main camera, a directional light, an object with a script that shows me the FPS, and a default 3D cube with a mesh collider. I created a new material and a new Standard Surface Shader and added them to the cube. After that I assigned the C# script below to the cube, with the shader and a camera reference.
Update: The problem right now is that the blit doesn't work as expected. If you change the shader script as Kalle said, remove the blit from the C# script, and change the 3D model's material to use the Draw shader, it works as expected, but without any lighting. For my purposes I had to change distance(_Mouse.xyz, i.worldPos.xyz); to distance(_Mouse.xz, i.worldPos.xz); so that it paints all the way through to the other side. For debugging I created a RenderTexture, and every frame I use Blit to update the texture and see what is going on. The render texture does not hold the right positions as the object is colored. The 3D model I have has a lot of geometry, and since the paint goes through to the other side it should be all over the place on the render texture, but right now it is just one line from the top to the bottom of the texture. Also, when I try to paint on the bottom half of the object the render texture doesn't show anything; only when I paint on the top half can I see red lines (the default painting color).
If you want you can download the sample project here.
This is the code I am using.
Draw.shader
Shader "Unlit/Draw"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
_Coordinate("Coordinate",Vector)=(0,0,0,0)
_Color("Paint Color",Color)=(1,1,1,1)
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 100
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
};
sampler2D _MainTex;
float4 _MainTex_ST;
fixed4 _Coordinate,_Color;
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
return o;
}
fixed4 frag (v2f i) : SV_Target
{
// sample the texture
fixed4 col = tex2D(_MainTex, i.uv);
float draw =pow(saturate(1-distance(i.uv,_Coordinate.xy)),100);
fixed4 drawcol = _Color * (draw * 1);
return saturate(col + drawcol);
}
ENDCG
}
}
}
Draw.cs
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Draw : MonoBehaviour
{
public Camera cam;
public Shader paintShader;
RenderTexture splatMap;
Material snowMaterial,drawMaterial;
RaycastHit hit;
private void Awake()
{
Application.targetFrameRate = 200;
}
void Start()
{
drawMaterial = new Material(paintShader);
drawMaterial.SetVector("_Color", Color.red);
snowMaterial = GetComponent<MeshRenderer>().material;
splatMap = new RenderTexture(1024, 1024, 0, RenderTextureFormat.ARGBFloat);
snowMaterial.mainTexture = splatMap;
}
void Update()
{
if (Input.GetMouseButton(0))
{
if(Physics.Raycast(cam.ScreenPointToRay(Input.mousePosition),out hit))
{
drawMaterial.SetVector("_Coordinate", new Vector4(hit.textureCoord.x, hit.textureCoord.y, 0, 0));
RenderTexture temp = RenderTexture.GetTemporary(splatMap.width, splatMap.height, 0, RenderTextureFormat.ARGBFloat);
Graphics.Blit(splatMap, temp);
Graphics.Blit(temp, splatMap, drawMaterial);
RenderTexture.ReleaseTemporary(temp);
}
}
}
}
As for what I have tried to solve the problem: I searched on Google for this issue and tried to implement what I found in my project. I have also found some projects that have the feature I need, like this one: Mesh Texture Painting. It works exactly how I need it to, but it doesn't work on iOS; the 3D object turns black. You can check out a previous post I made about it, and I also talked with the creator on Twitter, but he can't help me. I have also tried this asset, which works OK, but it runs at a very low FPS in my main project, it's hard for me to customize it to my needs, and it doesn't paint on the edges of my 3D model.
The shader above is the one that works well and is simple enough for me to change to get the desired effect.
Thank you!
There are two approaches to this problem - either you pass in the texture coordinate and try to convert it to world space inside the shader, or you pass in a world position and compare it to the fragment world position. The latter is no doubt the easiest.
So, let's say that you pass in the world position into the shader like so:
drawMaterial.SetVector("_Coordinate", new Vector4(hit.point.x, hit.point.y, hit.point.z, 0));
Calculating a world position per fragment is expensive, so we do it inside the vertex shader and let the hardware interpolate the value per fragment. Let's add a world position to our v2f struct:
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
float3 worldPos : TEXCOORD1;
};
To calculate the world position inside the vertex shader, we can use the built-in matrix unity_ObjectToWorld:
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
return o;
}
Finally, we can access the value in the fragment shader like so:
float draw =pow(saturate(1-distance(i.worldPos,_Coordinate.xyz)),100);
EDIT: I just realized - when you do a blit pass, you are not rendering with your mesh, you are rendering to a quad which covers the whole screen. Because of this, when you calculate the distance to the vertex, you get the distance to the screen corners, which is not right. There is a way to solve this though - you can change the render target to your render texture and draw the mesh using a shader which projects the mesh UVs across the screen.
It's a bit hard to explain, but basically, the way vertex shaders work is that you take in a vertex which is in local object space and transform it to be relative to the screen in the space -1 to 1 on both axes, where 0 is in the center. This is called Normalized Device Coordinate Space, or NDC space. We can leverage this to make it so that instead of using the model and camera matrices to transform our vertices, we use the UV coordinates, converted from [0,1] space to [-1,1]. At the same time, we can calculate our world position and pass it onto the fragment separately. Here is how the shader would look:
v2f vert (appdata v)
{
v2f o;
float2 uv = v.uv; // "uv" in the appdata struct defined earlier (use v.texcoord.xy if you use appdata_full)
// https://docs.unity3d.com/Manual/SL-PlatformDifferences.html
if (_ProjectionParams.x < 0) {
uv.y = 1 - uv.y;
}
// Convert from 0,1 to -1,1, for the blit
o.vertex = float4(2 * (uv - 0.5), 0, 1);
// We still need UVs to draw the base texture
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
// Let's do the calculations in local space instead!
o.localPos = v.vertex.xyz; // add "float3 localPos : TEXCOORD1;" to the v2f struct (in place of worldPos) for this
return o;
}
Also remember to pass in the _Coordinate variable in local space, using transform.InverseTransformPoint.
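For example, in Draw.cs that call might look like this (a small sketch assuming the script still sits on the painted object and hit is the raycast hit from the script above):

// Convert the world-space hit point into the mesh's local space before sending it to the shader
Vector3 localHit = transform.InverseTransformPoint(hit.point);
drawMaterial.SetVector("_Coordinate", new Vector4(localHit.x, localHit.y, localHit.z, 0));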
Now, we need to use a different approach to render this into the render texture. Basically, we need to render the actual mesh as if we were rendering from a camera - except that this mesh will be drawn as a splayed out UV sheet across the screen. First, we set the active render texture to the texture we want to draw into:
// Cache the old target so that we can reset it later
RenderTexture previousRT = RenderTexture.active;
RenderTexture.active = temp;
(You can read about how render targets work here)
Next, we need to bind our material and draw the mesh.
Material mat = drawMaterial;
Mesh mesh = yourAwesomeMesh;
mat.SetTexture("_MainTex", splatMap);
mat.SetPass(0); // This tells the renderer to use pass 0 from this material
Graphics.DrawMeshNow(mesh, Vector3.zero, Quaternion.identity);
Finally, blit the texture back to the original:
// Remember to reset the render target
RenderTexture.active = previousRT;
Graphics.Blit(temp, splatMap);
I haven't tested or verified this, but I have used a similar technique to draw a mesh into UVs before. You can read more about DrawMeshNow here.
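Putting the pieces together, the Update method in Draw.cs might end up looking roughly like this (an untested sketch that reuses the fields from the script above and assumes it is attached to the painted mesh):

void Update()
{
    if (Input.GetMouseButton(0) && Physics.Raycast(cam.ScreenPointToRay(Input.mousePosition), out hit))
    {
        // Hit point in the mesh's local space, to match the localPos used in the shader
        Vector3 localHit = transform.InverseTransformPoint(hit.point);
        drawMaterial.SetVector("_Coordinate", new Vector4(localHit.x, localHit.y, localHit.z, 0));

        RenderTexture temp = RenderTexture.GetTemporary(splatMap.width, splatMap.height, 0, RenderTextureFormat.ARGBFloat);
        RenderTexture previousRT = RenderTexture.active;

        // Start from the current splat map so texels outside the UV islands keep their values
        Graphics.Blit(splatMap, temp);

        // Render the mesh, splayed out over its UVs, into the temporary target
        RenderTexture.active = temp;
        drawMaterial.SetTexture("_MainTex", splatMap);
        drawMaterial.SetPass(0);
        Graphics.DrawMeshNow(GetComponent<MeshFilter>().sharedMesh, Vector3.zero, Quaternion.identity);

        // Copy the result back into the splat map, then restore the previous render target
        Graphics.Blit(temp, splatMap);
        RenderTexture.ReleaseTemporary(temp);
        RenderTexture.active = previousRT;
    }
}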

Custom shader does not receive light

I made a grid shader which is working fine. However, it is not affected at all by any light. For context, regarding the plane that has the shader:
Its dimensions are 1000x1x1000 (so wide enough)
It displays shadows with any other material, and Cast Shadows is on
Using Unity 2019.3.0f3
Universal Render Pipeline
The plane using custom grid shader (not receiving light)
The plane using basic shader (receiving light)
Custom grid shader code
I tried a few solutions, including adding FallBack "Diffuse" at the end, and adding an #include along with the TRANSFER_SHADOW macros. However, these don't work either.
You need to tell your shader what to do with the light information if you want it to be lit. Here is an example applying diffuse light directly to the albedo of your grid shader:
Shader "Custom/Grid"
{
Properties
{
_GridThickness("Grid Thickness", Float) = 0.01
_GridSpacing("Grid Spacing", Float) = 10.0
_GridColour("Grid Colour", Color) = (0.5, 0.5, 0.5, 0.5)
_BaseColour("Base Colour", Color) = (0.0, 0.0, 0.0, 0.0)
}
SubShader{
Tags { "Queue" = "Transparent" }
Pass {
ZWrite Off
Blend SrcAlpha OneMinusSrcAlpha
Tags {
"LightMode" = "ForwardBase"
} // gets us access to main directional light
CGPROGRAM
// Define the vertex and fragment shader functions
#pragma vertex vert
#pragma fragment frag
#include "UnityStandardBRDF.cginc" // for shader lighting info and some utils
#include "UnityStandardUtils.cginc" // for energy conservation
// Access Shaderlab properties
uniform float _GridThickness;
uniform float _GridSpacing;
uniform float4 _GridColour;
uniform float4 _BaseColour;
// Input into the vertex shader
struct vertexInput
{
float4 vertex : POSITION;
float3 normal : NORMAL; // include normal info
};
// Output from vertex shader into fragment shader
struct vertexOutput
{
float4 pos : SV_POSITION;
float4 worldPos : TEXCOORD0;
float3 normal : TEXCOORD1; // pass normals along
};
// VERTEX SHADER
vertexOutput vert(vertexInput input)
{
vertexOutput output;
output.pos = UnityObjectToClipPos(input.vertex);
// Calculate the world position coordinates to pass to the fragment shader
output.worldPos = mul(unity_ObjectToWorld, input.vertex);
output.normal = input.normal; // object-space normal for the frag shader; use UnityObjectToWorldNormal(input.normal) if the plane can be rotated or scaled
return output;
}
// FRAGMENT SHADER
float4 frag(vertexOutput input) : COLOR
{
float3 lightDir = _WorldSpaceLightPos0.xyz;
float3 viewDir = normalize(_WorldSpaceCameraPos - input.worldPos);
float3 lightColor = _LightColor0.rgb;
float3 col;
if (frac(input.worldPos.x / _GridSpacing) < _GridThickness || frac(input.worldPos.z / _GridSpacing) < _GridThickness)
col = _GridColour;
else
col = _BaseColour;
col *= lightColor * DotClamped(lightDir, input.normal); // apply diffuse light by angle of incidence
return float4(col, 1);
}
ENDCG
}
}
}
You should check out these tutorials to learn more about other ways to light your objects. Same applies if you want them to accept shadows.
Setting FallBack "Diffuse" won't do anything here since the shader is not "falling back", it's running exactly the way you programmed it to, with no lighting or shadows.

Wrong transformations in Unity standalone player build

I am developing a native (C++) plugin (Windows only for now) for Unity (2018.1.0f2).
The plugin downloads textures and meshes and provides them to Unity.
There is a LOT of boilerplate code that I would like to spare you.
Anyway, the rendering is done like this:
void RegenerateCommandBuffer(CommandBuffer buffer, List<DrawTask> tasks)
{
buffer.Clear();
buffer.SetProjectionMatrix(cam.projectionMatrix); // protected Camera cam; cam = GetComponent<Camera>();
foreach (DrawTask t in tasks)
{
if (t.mesh == null)
continue;
MaterialPropertyBlock mat = new MaterialPropertyBlock();
bool monochromatic = false;
if (t.texColor != null)
{
var tt = t.texColor as VtsTexture;
mat.SetTexture(shaderPropertyMainTex, tt.Get());
monochromatic = tt.monochromatic;
}
if (t.texMask != null)
{
var tt = t.texMask as VtsTexture;
mat.SetTexture(shaderPropertyMaskTex, tt.Get());
}
mat.SetMatrix(shaderPropertyUvMat, VtsUtil.V2U33(t.data.uvm));
mat.SetVector(shaderPropertyUvClip, VtsUtil.V2U4(t.data.uvClip));
mat.SetVector(shaderPropertyColor, VtsUtil.V2U4(t.data.color));
// flags: mask, monochromatic, flat shading, uv source
mat.SetVector(shaderPropertyFlags, new Vector4(t.texMask == null ? 0 : 1, monochromatic ? 1 : 0, 0, t.data.externalUv ? 1 : 0));
buffer.DrawMesh((t.mesh as VtsMesh).Get(), VtsUtil.V2U44(t.data.mv), material, 0, -1, mat);
}
}
There are two control modes: either the Unity camera is controlled by the camera in the plugin, or the plugin camera is controlled by the Unity camera. In my current scenario, the plugin camera is controlled by the Unity camera. There is no special magic behind the scenes, but some of the transformations need to be done in double precision to work without meshes 'jumping' around.
void CamOverrideView(ref double[] values)
{
Matrix4x4 Mu = mapTrans.localToWorldMatrix * VtsUtil.UnityToVtsMatrix;
// view matrix
if (controlTransformation == VtsDataControl.Vts)
cam.worldToCameraMatrix = VtsUtil.V2U44(Math.Mul44x44(values, Math.Inverse44(VtsUtil.U2V44(Mu))));
else
values = Math.Mul44x44(VtsUtil.U2V44(cam.worldToCameraMatrix), VtsUtil.U2V44(Mu));
}
void CamOverrideParameters(ref double fov, ref double aspect, ref double near, ref double far)
{
// fov
if (controlFov == VtsDataControl.Vts)
cam.fieldOfView = (float)fov;
else
fov = cam.fieldOfView;
// near & far
if (controlNearFar == VtsDataControl.Vts)
{
cam.nearClipPlane = (float)near;
cam.farClipPlane = (float)far;
}
else
{
near = cam.nearClipPlane;
far = cam.farClipPlane;
}
}
And a shader:
Shader "Vts/UnlitShader"
{
SubShader
{
Tags { "RenderType" = "Opaque" }
LOD 100
Pass
{
Blend SrcAlpha OneMinusSrcAlpha
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct vIn
{
float4 vertex : POSITION;
float2 uvInternal : TEXCOORD0;
float2 uvExternal : TEXCOORD1;
};
struct v2f
{
float4 vertex : SV_POSITION;
float2 uvTex : TEXCOORD0;
float2 uvClip : TEXCOORD1;
};
struct fOut
{
float4 color : SV_Target;
};
sampler2D _MainTex;
sampler2D _MaskTex;
float4x4 _UvMat;
float4 _UvClip;
float4 _Color;
float4 _Flags; // mask, monochromatic, flat shading, uv source
v2f vert (vIn i)
{
v2f o;
o.vertex = UnityObjectToClipPos(i.vertex);
o.uvTex = mul((float3x3)_UvMat, float3(_Flags.w > 0 ? i.uvExternal : i.uvInternal, 1.0)).xy;
o.uvClip = i.uvExternal;
return o;
}
fOut frag (v2f i)
{
fOut o;
// texture color
o.color = tex2D(_MainTex, i.uvTex);
if (_Flags.y > 0)
o.color = o.color.rrra; // monochromatic texture
// uv clipping
if ( i.uvClip.x < _UvClip.x
|| i.uvClip.y < _UvClip.y
|| i.uvClip.x > _UvClip.z
|| i.uvClip.y > _UvClip.w)
discard;
// mask
if (_Flags.x > 0)
{
if (tex2D(_MaskTex, i.uvTex).r < 0.5)
discard;
}
// uniform tint
o.color *= _Color;
return o;
}
ENDCG
}
}
}
It all works perfectly in the editor. It also works well in a standalone DEVELOPMENT build. But the transformations go wrong in 'deploy' builds: the rendered parts look as if they were rotated around the wrong axes or with inverted polarity.
Can you spot some obvious mistakes?
My first suspect was OpenGL vs DirectX differences, but the 'deploy' and 'development' builds should use the same API, should they not? Moreover, I have tried changing the player settings to force one or the other, but it made no difference.
Edit:
Good image: https://drive.google.com/open?id=1RTlVZBSAj7LIml1sBCX7nYTvMNaN0xK-
Bad image: https://drive.google.com/open?id=176ahft7En6MqT-aS2RdKXOVW68NmvK2L
Note how the terrain is correctly aligned with the atmosphere.
Steps to reproduce
1) Create a new project in unity
2) Download the assets https://drive.google.com/open?id=18uKuiya5XycjGWEcsF-xjy0fn7sf-D82 and extract them into the newly created project
3) Try it in editor -> should work ok (it will start downloading meshes and textures from us, so be patient; the downloaded resources are cached in eg. C://users//.cache/vts-browser)
The plane is controlled by mouse with LMB pressed.
4) Build in development build and run -> should work ok too
5) Build NOT in development build and run -> the terrain transformations behave incorrectly.
Furthermore, I have published the repository. Here is the unity-specific code: https://github.com/Melown/vts-browser-unity-plugin
Unfortunately, I did not intend to publish it this soon, so the repository is missing some formal things like readme and build instructions. Most information can, however, be found in the submodules.
CommandBuffer.SetProjectionMatrix apparently needs a matrix that has been adjusted by GL.GetGPUProjectionMatrix.
buffer.SetProjectionMatrix(GL.GetGPUProjectionMatrix(cam.projectionMatrix, false));
Unfortunately, I still do not understand why this would cause different behavior between deploy and development builds. I would have expected it to only make a difference across different platforms.
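For what it's worth, if the camera may also render into a RenderTexture, the second parameter of GL.GetGPUProjectionMatrix should presumably follow that; a small variation of the fix above (using the same cam reference):

// true when the camera renders into a texture, false when it renders to the screen
bool renderIntoTexture = cam.targetTexture != null;
buffer.SetProjectionMatrix(GL.GetGPUProjectionMatrix(cam.projectionMatrix, renderIntoTexture));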

How to make Unity glass shader only refract objects behind it?

I am looking for a glass shader for Unity that only refracts the objects behind it, or ideas for how to modify an existing glass shader to do that.
This screenshot shows what happens when I use FX/Glass/Stained BumpDistort on a curved plane mesh.
As you can see, the glass shader refracts both the sphere in front of the mesh and the ground behind it. I am looking for a shader that will only refract the objects behind it.
Here is the code for that shader, for reference:
// Per pixel bumped refraction.
// Uses a normal map to distort the image behind, and
// an additional texture to tint the color.
Shader "FX/Glass/Stained BumpDistort" {
Properties {
_BumpAmt ("Distortion", range (0,128)) = 10
_MainTex ("Tint Color (RGB)", 2D) = "white" {}
_BumpMap ("Normalmap", 2D) = "bump" {}
}
Category {
// We must be transparent, so other objects are drawn before this one.
Tags { "Queue"="Transparent" "RenderType"="Opaque" }
SubShader {
// This pass grabs the screen behind the object into a texture.
// We can access the result in the next pass as _GrabTexture
GrabPass {
Name "BASE"
Tags { "LightMode" = "Always" }
}
// Main pass: Take the texture grabbed above and use the bumpmap to perturb it
// on to the screen
Pass {
Name "BASE"
Tags { "LightMode" = "Always" }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma multi_compile_fog
#include "UnityCG.cginc"
struct appdata_t {
float4 vertex : POSITION;
float2 texcoord: TEXCOORD0;
};
struct v2f {
float4 vertex : SV_POSITION;
float4 uvgrab : TEXCOORD0;
float2 uvbump : TEXCOORD1;
float2 uvmain : TEXCOORD2;
UNITY_FOG_COORDS(3)
};
float _BumpAmt;
float4 _BumpMap_ST;
float4 _MainTex_ST;
v2f vert (appdata_t v)
{
v2f o;
o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
#if UNITY_UV_STARTS_AT_TOP
float scale = -1.0;
#else
float scale = 1.0;
#endif
o.uvgrab.xy = (float2(o.vertex.x, o.vertex.y*scale) + o.vertex.w) * 0.5;
o.uvgrab.zw = o.vertex.zw;
o.uvbump = TRANSFORM_TEX( v.texcoord, _BumpMap );
o.uvmain = TRANSFORM_TEX( v.texcoord, _MainTex );
UNITY_TRANSFER_FOG(o,o.vertex);
return o;
}
sampler2D _GrabTexture;
float4 _GrabTexture_TexelSize;
sampler2D _BumpMap;
sampler2D _MainTex;
half4 frag (v2f i) : SV_Target
{
// calculate perturbed coordinates
half2 bump = UnpackNormal(tex2D( _BumpMap, i.uvbump )).rg; // we could optimize this by just reading the x & y without reconstructing the Z
float2 offset = bump * _BumpAmt * _GrabTexture_TexelSize.xy;
i.uvgrab.xy = offset * i.uvgrab.z + i.uvgrab.xy;
half4 col = tex2Dproj( _GrabTexture, UNITY_PROJ_COORD(i.uvgrab));
half4 tint = tex2D(_MainTex, i.uvmain);
col *= tint;
UNITY_APPLY_FOG(i.fogCoord, col);
return col;
}
ENDCG
}
}
// ------------------------------------------------------------------
// Fallback for older cards and Unity non-Pro
SubShader {
Blend DstColor Zero
Pass {
Name "BASE"
SetTexture [_MainTex] { combine texture }
}
}
}
}
My intuition is that it has to do with the way that _GrabTexture is captured, but I'm not entirely sure. I'd appreciate any advice. Thanks!
No simple answer for this.
You cannot think about refraction without thinking about the context in some way, so let's see:
Basically, it's not easy to define when an object is "behind" another one. There are different ways to even measure a point's distance to the camera, let alone account for the whole geometry. There are many strange situations where geometry intersects, and the centers and bounds could be anywhere.
Refraction is usually easy to think about in raytracing algorithms (you just march a ray and calculate how it bounces/refracts to get the colors). But here, in raster graphics (used for 99% of real-time graphics), objects are rendered as a whole, and in turns.
What is going on with that image is that the background and ball are rendered first, and the glass later. The glass doesn't "refract" anything, it just draws itself as a distortion of whatever was written in the render buffer before.
"Before" is key here. You don't get "behinds" in raster graphics, everything is done by being conscious of rendering order. Let's see how some refractions are created:
Manually set render queue tags for the shaders, so you know at what point in the pipeline they are drawn
Manually set each material's render queue
Create a script that constantly marshals the scene and every frame calculates what should be drawn before or after the glass, according to position or any method you want, and sets up the render queues in the materials (see the sketch after this list)
Create a script that render the scene filtering out (through various methods) the objects that shouldn't be refracted, and use that as the texture to refract (depending on the complexity of the scene, this is sometimes necessary)
These are just some options off the top of my head, everything depends on your scene
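As a rough illustration of the third option, a script along these lines could adjust the render queues every frame (an untested sketch; the glass and target fields are illustrative, the "behind" test is just one possibility, and it assumes the glass material itself sits at a queue in between, e.g. 2100 as in the advice below):

using UnityEngine;

public class RefractionQueueManager : MonoBehaviour
{
    public Transform glass;   // the refracting surface
    public Renderer target;   // e.g. the ball

    void Update()
    {
        Transform camT = Camera.main.transform;

        // Depth along the camera's view direction as a simple "behind" test
        float glassDepth = Vector3.Dot(glass.position - camT.position, camT.forward);
        float targetDepth = Vector3.Dot(target.transform.position - camT.position, camT.forward);

        // Behind the glass: draw before it, so the GrabPass picks it up and it gets refracted.
        // In front of the glass: draw after it, so it is left untouched.
        target.material.renderQueue = targetDepth > glassDepth ? 2000 : 2200;
    }
}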
My advice:
Select the ball's material
Right-click on the Inspector window --> Tick on "Debug" mode
Set the Custom Render Queue to 2200 (after the regular geometry is drawn)
Select the glass' material
Set the Custom Render Queue to 2100 (after most geometry, but before the ball)
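The same render queues can also be set from a script instead of through the Debug inspector (ballMaterial and glassMaterial are just illustrative references to the two materials):

// Equivalent to the inspector steps above
ballMaterial.renderQueue = 2200;   // drawn after regular geometry, and after the glass has grabbed the screen
glassMaterial.renderQueue = 2100;  // grabs and distorts the screen before the ball is drawn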

Cg: omit depth write

I am outputting depth in Cg in a branch, like so:
ZWrite On
..
void frag(v2f IN, out float4 color : COLOR, out float depth : DEPTH) {
if (statement) {
color = float4(1);
} else {
color = float4(0);
depth = 0;
}
}
However, as you see, I omit writing the depth in the first branch. This results in undefined behaviour, but I believe the equivalent is common practice in GLSL (omitting the write to gl_FragDepth results in the original depth being kept).
What should I do to get the original depth in the first branch in Cg, given that the fragment shader declares a depth output?
YMMV with this script. The code, as I recall, needed to be targeted to old implementations of OpenGL or else you'd get an error like "shader registers cannot be masked", related to this D3D issue.
But I believe you can pull the depth from the camera's depth texture and write it back out. You do need to calculate a projected position first, using ComputeScreenPos in the vertex shader. Documentation is non-existent, AFAIK, for the functions Linear01Depth and LinearEyeDepth, so I can't tell you what the performance hit might be.
Shader "Depth Shader" { // defines the name of the shader
SubShader { // Unity chooses the subshader that fits the GPU best
Pass { // some shaders require multiple passes
ZWrite On
CGPROGRAM // here begins the part in Unity's Cg
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct v2f
{
float4 position : POSITION;
float4 projPos : TEXCOORD1;
};
v2f vert(float4 vertexPos : POSITION)
{
v2f OUT;
OUT.position = mul(UNITY_MATRIX_MVP, vertexPos);
OUT.projPos = ComputeScreenPos(OUT.position);
return OUT;
}
//camera depth texture here
uniform sampler2D _CameraDepthTexture; //Depth Texture
void frag(v2f IN, out float4 color:COLOR, out float depth:DEPTH) // fragment shader
{
color = float4(0, 0, 0, 0);
// use eye depth for actual z...
depth = LinearEyeDepth (tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.projPos)).r);
//or this for depth in between [0,1]
//depth = Linear01Depth (tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.projPos)).r);
}
ENDCG // here ends the part in Cg
}
}
}
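One extra note: _CameraDepthTexture only has valid data if the camera actually generates a depth texture, which in forward rendering you typically have to request from a script, for example:

// Request a depth texture from the camera so _CameraDepthTexture is populated
Camera.main.depthTextureMode |= DepthTextureMode.Depth;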