How do I pass a GameObject's transform's position to my CG shader? - unity3d

So I'm making a vertex shader to make a GameObject look like it's shrinking/expanding (pulsating?) continuously.
I am using a normal scale matrix to multiply the position of every vertex, but I want to keep the object appearing centered in the same position. If I could get the transform.position of the gameObject that is being rendered, I know I would be able to keep the center position the same.
So how would I access the GameObject's position in my CG shader?
Or am I approaching this problem incorrectly?
vertexOut vert(vertexIn v)
{
    vertexOut o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.pos2 = mul(UNITY_MATRIX_MVP, v.vertex);
    float scaleVal = sin(_Time.y * 10) / 8 + 1.0;
    float4x4 scaleMat = float4x4(
        scaleVal, 0, 0, 0,
        0, scaleVal, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1);
    o.pos = mul(scaleMat, o.pos);
    return o;
}

Simply define a shader property of type Vector. Then you can update this property on every frame by calling SetVector on the material.
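For example, a minimal sketch of that plumbing, using a hypothetical _ObjectPos property name; the C# side would call material.SetVector("_ObjectPos", transform.position) every frame, e.g. from Update():
Properties {
    // Hypothetical property name for this sketch; receives the object's world position from script
    _ObjectPos ("Object World Position", Vector) = (0, 0, 0, 0)
}
// Inside the CGPROGRAM block, declare the matching uniform:
float4 _ObjectPos;
// From a C# script on the same GameObject, update it each frame:
//     GetComponent<Renderer>().material.SetVector("_ObjectPos", transform.position);
// The vertex shader can then read _ObjectPos.xyz as the object's world-space center.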

Sounds like you just want to multiply v.vertex by your scaleMat. So something like:
vertexOut vert(vertexIn v)
{
    vertexOut o;
    o.pos2 = mul(UNITY_MATRIX_MVP, v.vertex);
    float scaleVal = sin(_Time.y * 10) / 8 + 1.0;
    float4x4 scaleMat = float4x4(
        scaleVal, 0, 0, 0,
        0, scaleVal, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1);
    o.pos = mul(UNITY_MATRIX_MVP, mul(scaleMat, v.vertex));
    return o;
}
Of course this might behave differently from what you want depending on how you want your mesh to behave under rotation.
To answer the actual posted question though, you can figure out what the transform's position is directly in the vertex shader by converting the origin to world space:
mul(unity_ObjectToWorld, float4(0,0,0,1)).xyz
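For completeness, here is a rough, untested sketch of using that world-space pivot to scale about the object's center (it assumes a Unity version that provides unity_ObjectToWorld and UNITY_MATRIX_VP, and reuses the vertexIn/vertexOut structs from the question):
vertexOut vert(vertexIn v)
{
    vertexOut o;
    float scaleVal = sin(_Time.y * 10) / 8 + 1.0;
    // Object-space origin transformed to world space = the transform's position
    float3 center = mul(unity_ObjectToWorld, float4(0, 0, 0, 1)).xyz;
    // Scale the vertex about that center in world space
    float4 worldPos = mul(unity_ObjectToWorld, v.vertex);
    worldPos.xyz = center + (worldPos.xyz - center) * scaleVal;
    // World space -> clip space
    o.pos = mul(UNITY_MATRIX_VP, worldPos);
    o.pos2 = o.pos; // keep the extra output from the original struct
    return o;
}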

Related

When rotating an object to face a certain direction, the object suddenly rotates a lot in certain directions (when almost vertical)

float3 zaxis = normalize(forward);
float3 xaxis = normalize(cross(up, zaxis));
float3 yaxis = cross(zaxis, xaxis);
return float4x4(
    xaxis.x, xaxis.y, xaxis.z, 0,
    yaxis.x, yaxis.y, yaxis.z, 0,
    zaxis.x, zaxis.y, zaxis.z, 0,
    0, 0, 0, 1
);
My code is above.

Fragment shader fmod, why is this not repeating

I created the following fragment shader that creates a tile grid of size _Size using the frac function and draws a small separator line between each tile. I save the ID of the tile in its uv.z value so I can later address the tile based on its ID (uv.z).
_Size and _CurrentID can be adjusted through the inspector.
Shader "Unlit/Fractals"
{
Properties
{
[HideInInspector] _MainTex ("Texture", 2D) = "white" {}
_Size ("Size", float) = 5
_CurrentID ("ID", float) = 0
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 100
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
};
sampler2D _MainTex;
float4 _MainTex_ST;
float _Size;
float _CurrentID;
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
return o;
}
fixed4 frag(v2f i) : SV_Target
{
_CurrentID = floor(_CurrentID);
//Create a tile grid that is of _Size * _Size (5 in example), and create an ID for it in the .z value based on its grid position
float3 uv = float3(frac(i.uv * _Size), (floor(i.uv.y * _Size) * _Size) + (floor(i.uv.x * _Size)));
//Create lines to separate the tiles
float4 col = float4(1, 1, 1, 1);
if ((uv.x > 0.98 && uv.x < 1) || (uv.y > .98 && uv.y < 1))
{
col *= float4(uv.x, uv.y, 0, 1);
}
else
{
col = float4(0, 0, 0, 1);
}
//Loop through all the tiles based on the ID
if (uv.z == fmod(_CurrentID, ((_Size) * (_Size))))
{
col = float4(0, 1, 1, 1);
}
//This correctly goes through every grid tile once, confirming that uv grid ID 5 corresponds to grid position (0,1)
/*if (uv.z == _CurrentID)
{
col = float4(0, 1, 1, 1);
}*/
return col;
}
ENDCG
}
}
}
(note that the grid starts at (0,0) bottom left to (5,5) top right)
To ascertain that my IDs are set up correctly, I looped through each uv.z value with the floor of the _CurrentID set from the inspector, which lights up every tile once when going from 0 to 24 (inclusive), as expected.
if (uv.z == _CurrentID)
{
col = float4(0, 1, 1, 1);
}
example of _CurrentID = 7 lighting up the 8th tile as expected
Now just using the _CurrentID would mean I can only go through every tile once. To make this repeatable regardless of how big _CurrentID is, I should be able to use fmod (modulo) on the _CurrentID (the same happens using the % modulo operator) so it loops back to 0 when _CurrentID = 25. Which I (try to) do using the following piece of code:
if (uv.z == fmod(_CurrentID, ((_Size) * (_Size))))
{
col = float4(0, 1, 1, 1);
}
This goes well for the first row (when _CurrentID >= 0 && < 5). However, once I hit _CurrentID = 5 things start to break: no tile lights up, despite my previously confirming that _CurrentID = 5 lights up the tile at grid (0, 1). When I set _CurrentID = 6 the proper tile lights up again (grid position (1, 1)), and the pattern continues, with grid (0, n) never lighting up for any n greater than 0.
Example of _CurrentID = 5 using fmod.
Things break even more once my _CurrentID goes higher than 25, where it doesn't seem to loop around at all, as seen in this Gyazo gif. It just seems to light up random tiles.
Starting to doubt myself, I double-checked the modulo maths on WolframAlpha, which seems correct.
I can "solve" the issue where it skips the first tile of every row by doing fmod(_CurrentID, ((_Size + 1) * (_Size + 1))), which will loop correctly through each tile on the first run (including the (0,n) tiles), but now my modulo starts looping at 36, after which it will still light up a random tile as shown in the gif.
What am I doing wrong here?
(Unity version 2020.1.1f1, same behavior confirmed in 2019.3.13)
It's probably a floating point precision issue since you are comparing floats for equality.
Instead of doing that you could write something like:
float id = _CurrentID % (_Size * _Size);
float epsilon = .0001f;
if (abs(uv.z - id) < epsilon)
{
    col = float4(0, 1, 1, 1);
}
Or use ints for ids.
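For instance, here is a rough sketch of the integer comparison (assuming _Size and _CurrentID are set to whole numbers in the inspector):
// Compare whole-number tile IDs instead of raw floats
int size = (int)_Size;
int tileID = (int)round(uv.z);                    // tile ID computed in the grid code above
int currentID = (int)_CurrentID % (size * size);  // wraps back to 0 after the last tile
if (tileID == currentID)
{
    col = float4(0, 1, 1, 1);
}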

How to render individual pixels for one layer of a 3DTexture in a framebuffer?

I have a 4x4x4 3DTexture which I am initializing and showing correctly to color my 4x4x4 grid of vertices (see attached red grid with one white pixel - 0,0,0).
However, when I render the 4 layers in a framebuffer (all four at one time, using gl.COLOR_ATTACHMENT0 through gl.COLOR_ATTACHMENT3), only four of the sixteen pixels on a layer are successfully rendered by my fragment shader (to be turned green).
When I only do one layer, with gl.COLOR_ATTACHMENT0, the same 4 pixels show up correctly altered for that 1 layer, and the other 3 layers keep their original color unchanged. When I change gl.viewport(0, 0, size, size) (size = 4 in this example) to something else, like the whole screen or sizes other than 4, different pixels are written, but never more than 4. My goal is to individually specify all 16 pixels of each layer precisely. I'm using colors for now, as a learning experience, but the texture is really for position and velocity information for each vertex for a physics simulation. I'm assuming (faulty assumption?) that with 64 points/vertices, I'm running the vertex shader and the fragment shader 64 times each, coloring one pixel per invocation.
I've removed all but the vital code from the shaders. I've left the javascript unaltered. I suspect my problem is initializing and passing the array of vertex positions incorrectly.
//Set x,y position coordinates to be used to extract data from one plane of our data cube
//remember, we handle z as one layer of our cube, which is composed of a stack of x-y planes.
const oneLayerVertices = new Float32Array(size * size * 2);
count = 0;
for (var j = 0; j < (size); j++) {
for (var i = 0; i < (size); i++) {
oneLayerVertices[count] = i;
count++;
oneLayerVertices[count] = j;
count++;
//oneLayerVertices[count] = 0;
//count++;
//oneLayerVertices[count] = 0;
//count++;
}
}
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
position: {
numComponents: 2,
data: oneLayerVertices,
},
});
And then I'm using the bufferInfo as follows:
gl.useProgram(computeProgramInfo.program);
twgl.setBuffersAndAttributes(gl, computeProgramInfo, bufferInfo);
gl.viewport(0, 0, size, size); //remember size = 4
outFramebuffers.forEach((fb, ndx) => {
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.drawBuffers([
gl.COLOR_ATTACHMENT0,
gl.COLOR_ATTACHMENT1,
gl.COLOR_ATTACHMENT2,
gl.COLOR_ATTACHMENT3
]);
const baseLayerTexCoord = (ndx * numLayersPerFramebuffer);
console.log("My baseLayerTexCoord is "+baseLayerTexCoord);
twgl.setUniforms(computeProgramInfo, {
baseLayerTexCoord,
u_kernel: [
0, 0, 0,
0, 0, 0,
0, 0, 0,
0, 0, 1,
0, 0, 0,
0, 0, 0,
0, 0, 0,
0, 0, 0,
0, 0, 0,
],
u_position: inPos,
u_velocity: inVel,
loopCounter: loopCounter,
numLayersPerFramebuffer: numLayersPerFramebuffer
});
gl.drawArrays(gl.POINTS, 0, (16));
});
VERTEX SHADER:
calc_vertex:
const compute_vs = `#version 300 es
precision highp float;
in vec4 position;
void main() {
gl_Position = position;
}
`;
FRAGMENT SHADER:
calc_fragment:
const compute_fs = `#version 300 es
precision highp float;
out vec4 ourOutput[4];
void main() {
ourOutput[0] = vec4(0,1,0,1);
ourOutput[1] = vec4(0,1,0,1);
ourOutput[2] = vec4(0,1,0,1);
ourOutput[3] = vec4(0,1,0,1);
}
`;
I’m not sure what you’re trying to do and what you think the positions will do.
You have 2 options for GPU simulation in WebGL2
use transform feedback.
In this case you pass in attributes and generate data in buffers. Effectively you have in attributes and out attributes and generally you only run the vertex shader. To put it another way your varyings, the output of your vertex shader, get written to a buffer. So you have at least 2 sets of buffers, currentState, and nextState and your vertex shader reads attributes from currentState and writes them to nextState
There is an example of writing to buffers via transform feedback here though that example only uses transform feedback at the start to fill buffers once.
use textures attached to framebuffers
in this case, similarly, you have 2 textures, currentState and nextState. You set nextState to be your render target and read from currentState to generate the next state.
the difficulty is that you can only render to textures by outputting primitives in the vertex shader. If currentState and nextState are 2D textures that’s trivial. Just output a -1.0 to +1.0 quad from the vertex shader and all pixels in nextState will be rendered to.
If you’re using a 3D texture then same thing except you can only render to 4 layers at a time (well, gl.getParameter(gl.MAX_DRAW_BUFFERS)). so you’d have to do something like
for (let layer = 0; layer < numLayers; layer += 4) {
    // setup framebuffer to use these 4 layers
    gl.drawXXX(...); // draw to 4 layers
}
or better
// at init time
const fbs = [];
for (let layer = 0; layer < numLayers; layer += 4) {
    fbs.push(createFramebufferForThese4Layers(layer));
}
// at draw time
fbs.forEach((fb, ndx) => {
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
    gl.drawXXX(...); // draw to 4 layers
});
I’m guessing multiple draw calls are slower than one draw call, so another solution is to instead treat a 2D texture as a 3D array and calculate texture coordinates appropriately.
I don’t know which is better. If you’re simulating particles and they only need to look at their own currentState then transform feedback is easier. If need each particle to be able to look at the state of other particles, in other words you need random access to all the data, then your only option is to store the data in textures.
As for positions, I don't understand your code. Positions define primitives, either POINTS, LINES, or TRIANGLES, so how does passing integer X, Y values into your vertex shader help you define POINTS, LINES or TRIANGLES?
It looks like you're trying to use POINTS in which case you need to set gl_PointSize to the size of the point you want to draw (1.0) and you need to convert those positions into clip space
gl_Position = vec4((position.xy + 0.5) / resolution * 2.0 - 1.0, 0, 1);
where resolution is the size of the texture.
But doing it this way will be slow. Much better to just draw a full size (-1 to +1) clip space quad. For every pixel in the destination the fragment shader will be called. gl_FragCoord.xy will be the location of the center of the pixel currently being rendered so first pixel in bottom left corner gl_FragCoord.xy will be (0.5, 0.5). The pixel to the right of that will be (1.5, 0.5). The pixel to the right of that will be (2.5, 0.5). You can use that value to calculate how to access currentState. Assuming 1x1 mapping the easiest way would be
int n = numberOfLayerThatsAttachedToCOLOR_ATTACHMENT0;
vec4 currentStateValueForLayerN = texelFetch(
currentStateTexture, ivec3(gl_FragCoord.xy, n + 0), 0);
vec4 currentStateValueForLayerNPlus1 = texelFetch(
currentStateTexture, ivec3(gl_FragCoord.xy, n + 1), 0);
vec4 currentStateValueForLayerNPlus2 = texelFetch(
currentStateTexture, ivec3(gl_FragCoord.xy, n + 2), 0);
...
vec4 nextStateForLayerN = computeNextStateFromCurrentState(currentStateValueForLayerN);
vec4 nextStateForLayerNPlus1 = computeNextStateFromCurrentState(currentStateValueForLayerNPlus1);
vec4 nextStateForLayerNPlus2 = computeNextStateFromCurrentState(currentStateValueForLayerNPlus2);
...
outColor[0] = nextStateForLayerN;
outColor[1] = nextStateForLayerNPlus1;
outColor[2] = nextStateForLayerNPlus2;
...
I don’t know if you needed this but just to test here’s a simple example that renders a different color to every pixel of a 4x4x4 texture and then displays them.
const pointVS = `
#version 300 es
uniform int size;
uniform highp sampler3D tex;
out vec4 v_color;
void main() {
int x = gl_VertexID % size;
int y = (gl_VertexID / size) % size;
int z = gl_VertexID / (size * size);
v_color = texelFetch(tex, ivec3(x, y, z), 0);
gl_PointSize = 8.0;
vec3 normPos = vec3(x, y, z) / float(size);
gl_Position = vec4(
mix(-0.9, 0.6, normPos.x) + mix(0.0, 0.3, normPos.y),
mix(-0.6, 0.9, normPos.z) + mix(0.0, -0.3, normPos.y),
0,
1);
}
`;
const pointFS = `
#version 300 es
precision highp float;
in vec4 v_color;
out vec4 outColor;
void main() {
outColor = v_color;
}
`;
const rtVS = `
#version 300 es
in vec4 position;
void main() {
gl_Position = position;
}
`;
const rtFS = `
#version 300 es
precision highp float;
uniform vec2 resolution;
out vec4 outColor[4];
void main() {
vec2 xy = gl_FragCoord.xy / resolution;
outColor[0] = vec4(1, 0, xy.x, 1);
outColor[1] = vec4(0.5, xy.yx, 1);
outColor[2] = vec4(xy, 0, 1);
outColor[3] = vec4(1, vec2(1) - xy, 1);
}
`;
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
if (!gl) {
return alert('need webgl2');
}
const pointProgramInfo = twgl.createProgramInfo(gl, [pointVS, pointFS]);
const rtProgramInfo = twgl.createProgramInfo(gl, [rtVS, rtFS]);
const size = 4;
const numPoints = size * size * size;
const tex = twgl.createTexture(gl, {
target: gl.TEXTURE_3D,
width: size,
height: size,
depth: size,
});
const clipspaceFullSizeQuadBufferInfo = twgl.createBufferInfoFromArrays(gl, {
position: {
data: [
-1, -1,
1, -1,
-1, 1,
-1, 1,
1, -1,
1, 1,
],
numComponents: 2,
},
});
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
for (let i = 0; i < 4; ++i) {
gl.framebufferTextureLayer(
gl.FRAMEBUFFER,
gl.COLOR_ATTACHMENT0 + i,
tex,
0, // mip level
i, // layer
);
}
gl.drawBuffers([
gl.COLOR_ATTACHMENT0,
gl.COLOR_ATTACHMENT1,
gl.COLOR_ATTACHMENT2,
gl.COLOR_ATTACHMENT3,
]);
gl.viewport(0, 0, size, size);
gl.useProgram(rtProgramInfo.program);
twgl.setBuffersAndAttributes(
gl,
rtProgramInfo,
clipspaceFullSizeQuadBufferInfo);
twgl.setUniforms(rtProgramInfo, {
resolution: [size, size],
});
twgl.drawBufferInfo(gl, clipspaceFullSizeQuadBufferInfo);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
gl.drawBuffers([
gl.BACK,
]);
gl.useProgram(pointProgramInfo.program);
twgl.setUniforms(pointProgramInfo, {
tex,
size,
});
gl.drawArrays(gl.POINTS, 0, numPoints);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>

How do you write z-depth in a shader?

This shader (code at the end) uses raymarching to render procedural geometry:
However, in the image (above) the cube in the background should be partially occluding the pink solid; it isn't because of this:
struct fragmentOutput {
float4 color : SV_Target;
float zvalue : SV_Depth;
};
fragmentOutput frag(fragmentInput i) {
fragmentOutput o;
...
o.zvalue = IF(output[1] > 0, 0, 1);
}
However, I cannot for the life of me figure out how to correctly generate a depth value here that allows raymarched solids to obscure / not obscure the other geometry in the scene.
I know it's possible, because there's a working example here: https://github.com/i-saint/RaymarchingOnUnity5 (associated japanese language blog http://i-saint.hatenablog.com/)
However, it's in japanese, and largely undocumented, as well as being extremely complex.
I'm looking for an extremely simplified version of the same thing, from which to build on.
In the shader I'm currently using the fragment program line:
float2 output = march_raycast(i.worldpos, i.viewdir, _far, _step);
This maps an input point p on the quad near the camera (which this shader is attached to) into an output float2 (density, distance), where distance is the distance from the quad to the 'point' on the procedural surface.
The question is, how do I map that into a depth buffer in any useful way?
The complete shader is here, to use it, create a new scene with a sphere at 0,0,0 with a size of at least 50 and assign the shader to it:
Shader "Shaders/Raymarching/BasicMarch" {
Properties {
_sun ("Sun", Vector) = (0, 0, 0, 0)
_far ("Far Depth Value", Float) = 20
_edgeFuzz ("Edge fuzziness", Range(1, 20)) = 1.0
_lightStep ("Light step", Range(0.1, 5)) = 1.0
_step ("Raycast step", Range(0.1, 5)) = 1.0
_dark ("Dark value", Color) = (0, 0, 0, 0)
_light ("Light Value", Color) = (1, 1, 1, 1)
[Toggle] _debugDepth ("Display depth field", Float) = 0
[Toggle] _debugLight ("Display light field", Float) = 0
}
SubShader {
Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
Blend SrcAlpha OneMinusSrcAlpha
Pass {
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma target 3.0
#include "UnityCG.cginc"
#include "UnityLightingCommon.cginc" // for _LightColor0
#define IF(a, b, c) lerp(b, c, step((fixed) (a), 0));
uniform float _far;
uniform float _lightStep;
uniform float3 _sun;
uniform float4 _light;
uniform float4 _dark;
uniform float _debugDepth;
uniform float _debugLight;
uniform float _edgeFuzz;
uniform float _step;
/**
* Sphere at origin c, size s
* #param center_ The center of the sphere
* #param radius_ The radius of the sphere
* #param point_ The point to check
*/
float geom_soft_sphere(float3 center_, float radius_, float3 point_) {
float rtn = distance(center_, point_);
return IF(rtn < radius_, (radius_ - rtn) / radius_ / _edgeFuzz, 0);
}
/**
* A rectoid centered at center_
* #param center_ The center of the cube
* #param halfsize_ The halfsize of the cube in each direction
*/
float geom_rectoid(float3 center_, float3 halfsize_, float3 point_) {
float rtn = IF((point_[0] < (center_[0] - halfsize_[0])) || (point_[0] > (center_[0] + halfsize_[0])), 0, 1);
rtn = rtn * IF((point_[1] < (center_[1] - halfsize_[1])) || (point_[1] > (center_[1] + halfsize_[1])), 0, 1);
rtn = rtn * IF((point_[2] < (center_[2] - halfsize_[2])) || (point_[2] > (center_[2] + halfsize_[2])), 0, 1);
rtn = rtn * distance(point_, center_);
float radius = length(halfsize_);
return IF(rtn > 0, (radius - rtn) / radius / _edgeFuzz, 0);
}
/**
* Calculate procedural geometry.
* Return (0, 0, 0) for empty space.
* #param point_ A float3; return the density of the solid at p.
* #return The density of the procedural geometry of p.
*/
float march_geometry(float3 point_) {
return
geom_rectoid(float3(0, 0, 0), float3(7, 7, 7), point_) +
geom_soft_sphere(float3(10, 0, 0), 7, point_) +
geom_soft_sphere(float3(-10, 0, 0), 7, point_) +
geom_soft_sphere(float3(0, 0, 10), 7, point_) +
geom_soft_sphere(float3(0, 0, -10), 7, point_);
}
/** Return a randomish value to sample step with */
float rand(float3 seed) {
return frac(sin(dot(seed.xyz ,float3(12.9898,78.233,45.5432))) * 43758.5453);
}
/**
* March the point p along the cast path c, and return a float2
* which is (density, depth); if the density is 0 no match was
* found in the given depth domain.
* #param point_ The origin point
* #param cast_ The cast vector
* #param max_ The maximum depth to step to
* #param step_ The increment to step in
* #return (denity, depth)
*/
float2 march_raycast(float3 point_, float3 cast_, float max_, float step_) {
float origin_ = point_;
float depth_ = 0;
float density_ = 0;
int steps = floor(max_ / step_);
for (int i = 0; (density_ <= 1) && (i < steps); ++i) {
float3 target_ = point_ + cast_ * i * step_ + rand(point_) * cast_ * step_;
density_ += march_geometry(target_);
depth_ = IF((depth_ == 0) && (density_ != 0), distance(point_, target_), depth_);
}
density_ = IF(density_ > 1, 1, density_);
return float2(density_, depth_);
}
/**
* Simple lighting; raycast from depth point to light source, and get density on path
* #param point_ The origin point on the render target
* #param cast_ The original cast (ie. camera view direction)
* #param raycast_ The result of the original raycast
* #param max_ The max distance to cast
* #param step_ The step increment
*/
float2 march_lighting(float3 point_, float3 cast_, float2 raycast_, float max_, float step_) {
float3 target_ = point_ + cast_ * raycast_[1];
float3 lcast_ = normalize(_sun - target_);
return march_raycast(target_, lcast_, max_, _lightStep);
}
struct fragmentInput {
float4 position : SV_POSITION;
float4 worldpos : TEXCOORD0;
float3 viewdir : TEXCOORD1;
};
struct fragmentOutput {
float4 color : SV_Target;
float zvalue : SV_Depth;
};
fragmentInput vert(appdata_base i) {
fragmentInput o;
o.position = mul(UNITY_MATRIX_MVP, i.vertex);
o.worldpos = mul(_Object2World, i.vertex);
o.viewdir = -normalize(WorldSpaceViewDir(i.vertex));
return o;
}
fragmentOutput frag(fragmentInput i) {
fragmentOutput o;
// Raycast
float2 output = march_raycast(i.worldpos, i.viewdir, _far, _step);
float2 light = march_lighting(i.worldpos, i.viewdir, output, _far, _step);
float lvalue = 1.0 - light[0];
float depth = output[1] / _far;
// Generate fragment color
float4 color = lerp(_light, _dark, lvalue);
// Debugging: Depth
float4 debug_depth = float4(depth, depth, depth, 1);
color = IF(_debugDepth, debug_depth, color);
// Debugging: Color
float4 debug_light = float4(lvalue, lvalue, lvalue, 1);
color = IF(_debugLight, debug_light, color);
// Always apply the depth map
color.a = output[0];
o.zvalue = IF(output[1] > 0, 0, 1);
o.color = IF(output[1] <= 0, 0, color);
return o;
}
ENDCG
}
}
}
(Yes, I know it's quite complex, but it's very difficult to reduce this kind of shader into a 'simple test case' to play with)
I'll happily accept any answer which is a modification of the shader above that allows the procedural solid to be obscured by / obscure other geometry in the scene as though it was 'real geometry'.
--
Edit: You can get this 'working' by explicitly setting the depth value on the other geometry in the scene using the same depth function as the raymarcher:
...however, I still cannot get this to work correctly with geometry using the 'standard' shader. Still hunting for a working solution...
Looking at the project you linked to, the most important difference I see is that their raycast march function uses a pass-by-reference parameter to return a fragment position called ray_pos. That position appears to be in object space, so they transform it using the view-projection matrix to get clip space and read a depth value.
The project also has a compute_depth function, but it looks pretty simple.
Your march_raycast function is already calculating a target_ position, so you could refactor a bit, apply the out keyword to return it to the caller, and use it in depth calculations:
//get position using pass-by-ref
float3 ray_pos = i.worldpos;
float2 output = march_raycast(ray_pos, i.viewdir, _far, _step);
...
//convert position to clip space, read depth
float4 clip_pos = mul(UNITY_MATRIX_VP, float4(ray_pos, 1.0));
o.zvalue = clip_pos.z / clip_pos.w;
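If it helps, here is a rough, untested sketch of what that refactor of the question's march_raycast could look like (using a plain if instead of the IF macro for the hit test, and an inout parameter so the caller gets the hit position back):
// point_ is now inout: on return it holds the first position where geometry was hit
float2 march_raycast(inout float3 point_, float3 cast_, float max_, float step_) {
    float3 origin_ = point_;
    float depth_ = 0;
    float density_ = 0;
    int steps = floor(max_ / step_);
    for (int i = 0; (density_ <= 1) && (i < steps); ++i) {
        float3 target_ = origin_ + cast_ * i * step_ + rand(origin_) * cast_ * step_;
        density_ += march_geometry(target_);
        if ((depth_ == 0) && (density_ != 0)) {
            depth_ = distance(origin_, target_);
            point_ = target_; // remember the hit position for the caller
        }
    }
    density_ = IF(density_ > 1, 1, density_);
    return float2(density_, depth_);
}
(Note that march_lighting's call to march_raycast would need updating to pass the extra argument too.)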
There might be a problem with render setup.
To allow your shader to output per-pixel depth, its depth-tests must be disabled. Otherwise, GPU would - for optimization - assume that all your pixels' depths are the interpolated depths from your vertices.
As your shader does not do depth-tests, it must be rendered before the geometry that does, or it will just overwrite whatever the other geometry wrote to depth buffer.
It must however have depth-write enabled, or the depth output of your pixel shader will be ignored and not written to depth-buffer.
Your RenderType is Transparent, which, I assume, should disable depth-write. That would be a problem.
Your Queue is Transparent as well, which should have it render after all solid Geometry, and back to front, which would be a problem as well, as we already concluded we have to render before.
So:
put your shader in an early render queue that will render before solid geometry,
have depth-write enabled,
have depth-test disabled.
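In ShaderLab terms that would look roughly like the following (a sketch only; the exact queue offset is a guess and the CGPROGRAM block stays as in the question):
SubShader {
    // Render before opaque geometry so the SV_Depth output isn't overwritten
    Tags {"Queue"="Geometry-1" "IgnoreProjector"="True"}
    Blend SrcAlpha OneMinusSrcAlpha
    ZWrite On    // let the fragment shader's SV_Depth reach the depth buffer
    ZTest Always // disable depth testing for this pass
    Pass {
        // CGPROGRAM ... ENDCG unchanged from the question
    }
}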

How do you implement glOrtho for opengles 2.0? With or without tx,ty,tz values from the glOrtho spec?

I'm trying to implement my own glOrtho function from the OpenGL ES docs http://www.khronos.org/opengles/documentation/opengles1_0/html/glOrtho.html
to modify a Projection matrix in my vertex shader. It's currently not working properly, as I see my simple triangle's vertices out of place. Can you please check the code below and see if I'm doing things wrong?
I've tried setting tx, ty and tz to 0 and that seems to make it render properly. Any ideas why this would be so?
void ES2Renderer::_applyOrtho(float left, float right, float bottom, float top, float near, float far) const {
    float a = 2.0f / (right - left);
    float b = 2.0f / (top - bottom);
    float c = -2.0f / (far - near);
    float tx = -(right + left) / (right - left);
    float ty = -(top + bottom) / (top - bottom);
    float tz = -(far + near) / (far - near);
    float ortho[16] = {
        a, 0, 0, tx,
        0, b, 0, ty,
        0, 0, c, tz,
        0, 0, 0, 1
    };
    GLint projectionUniform = glGetUniformLocation(_shaderProgram, "Projection");
    glUniformMatrix4fv(projectionUniform, 1, 0, &ortho[0]);
}
void ES2Renderer::_renderScene() const {
    GLfloat vVertices[] = {
        0.0f, 5.0f, 0.0f,
        -5.0f, -5.0f, 0.0f,
        5.0f, -5.0f, 0.0f};
    GLuint positionAttribute = glGetAttribLocation(_shaderProgram, "Position");
    glEnableVertexAttribArray(positionAttribute);
    glVertexAttribPointer(positionAttribute, 3, GL_FLOAT, GL_FALSE, 0, vVertices);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableVertexAttribArray(positionAttribute);
}
Vert shader
attribute vec4 Position;
uniform mat4 Projection;
void main(void){
gl_Position = Projection*Position;
}
Solution
From Nicol's answer below I modified my matrix as follows and it seems to render properly:
float ortho[16] = {
    a, 0, 0, 0,
    0, b, 0, 0,
    0, 0, c, 0,
    tx, ty, tz, 1
};
Important note
You cannot use GL_TRUE for the transpose argument of glUniformMatrix4fv as below; OpenGL ES does not support it. It must be GL_FALSE:
glUniformMatrix4fv(projectionUniform, 1, GL_TRUE, &ortho[0]);
From http://www.khronos.org/opengles/sdk/docs/man/xhtml/glUniform.xml
transpose
Specifies whether to transpose the matrix as the values are loaded into the uniform variable. Must be GL_FALSE.
Matrices in OpenGL are column major. Unless you pass GL_TRUE as the third parameter to glUniformMatrix4fv, the matrix will effectively be transposed relative to what you would intend.