I have a 4x4x4 3D texture which I am initializing and showing correctly to color my 4x4x4 grid of vertices (see attached red grid with one white pixel at 0,0,0).
However, when I render the 4 layers in a framebuffer (all four at one time using gl.COLOR_ATTACHMENT0 --> gl.COLOR_ATTACHMENT3), only four of the sixteen pixels on a layer are successfully rendered by my fragment shader (to be turned green).
When I render only one layer, with gl.COLOR_ATTACHMENT0, the same 4 pixels show up correctly altered on that layer, and the other 3 layers keep their original color unchanged. When I change gl.viewport(0, 0, size, size) (size = 4 in this example) to something else, like the whole screen or sizes other than 4, then different pixels are written, but never more than 4. My goal is to individually specify all 16 pixels of each layer precisely. I'm using colors for now, as a learning experience, but the texture is really for position and velocity information for each vertex for a physics simulation. I'm assuming (faulty assumption?) that with 64 points/vertices, I'm running the vertex shader and the fragment shader 64 times each, coloring one pixel per invocation.
I've removed all but the vital code from the shaders. I've left the JavaScript unaltered. I suspect my problem is initializing and passing the array of vertex positions incorrectly.
//Set x,y position coordinates to be used to extract data from one plane of our data cube
//remember, we handle z as one layer of our cube, which is composed of a stack of x-y planes.
const oneLayerVertices = new Float32Array(size * size * 2);
let count = 0;
for (var j = 0; j < (size); j++) {
for (var i = 0; i < (size); i++) {
oneLayerVertices[count] = i;
count++;
oneLayerVertices[count] = j;
count++;
//oneLayerVertices[count] = 0;
//count++;
//oneLayerVertices[count] = 0;
//count++;
}
}
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
position: {
numComponents: 2,
data: oneLayerVertices,
},
});
And then I'm using the bufferInfo as follows:
gl.useProgram(computeProgramInfo.program);
twgl.setBuffersAndAttributes(gl, computeProgramInfo, bufferInfo);
gl.viewport(0, 0, size, size); //remember size = 4
outFramebuffers.forEach((fb, ndx) => {
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.drawBuffers([
gl.COLOR_ATTACHMENT0,
gl.COLOR_ATTACHMENT1,
gl.COLOR_ATTACHMENT2,
gl.COLOR_ATTACHMENT3
]);
const baseLayerTexCoord = (ndx * numLayersPerFramebuffer);
console.log("My baseLayerTexCoord is "+baseLayerTexCoord);
twgl.setUniforms(computeProgramInfo, {
baseLayerTexCoord,
u_kernel: [
0, 0, 0,
0, 0, 0,
0, 0, 0,
0, 0, 1,
0, 0, 0,
0, 0, 0,
0, 0, 0,
0, 0, 0,
0, 0, 0,
],
u_position: inPos,
u_velocity: inVel,
loopCounter: loopCounter,
numLayersPerFramebuffer: numLayersPerFramebuffer
});
gl.drawArrays(gl.POINTS, 0, (16));
});
VERTEX SHADER:
calc_vertex:
const compute_vs = `#version 300 es
precision highp float;
in vec4 position;
void main() {
gl_Position = position;
}
`;
FRAGMENT SHADER:
calc_fragment:
const compute_fs = `#version 300 es
precision highp float;
out vec4 ourOutput[4];
void main() {
ourOutput[0] = vec4(0,1,0,1);
ourOutput[1] = vec4(0,1,0,1);
ourOutput[2] = vec4(0,1,0,1);
ourOutput[3] = vec4(0,1,0,1);
}
`;
I’m not sure what you’re trying to do and what you think the positions will do.
You have 2 options for GPU simulation in WebGL2
use transform feedback.
In this case you pass in attributes and generate data in buffers. Effectively you have in attributes and out attributes, and generally you only run the vertex shader. To put it another way, your varyings, the output of your vertex shader, get written to a buffer. So you have at least 2 sets of buffers, currentState and nextState; your vertex shader reads attributes from currentState and writes them to nextState.
There is an example of writing to buffers via transform feedback here, though that example only uses transform feedback at the start to fill buffers once.
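As a minimal sketch of that flow (the buffer and varying names here, currentStateBuffer, nextStateBuffer, v_nextState, and numParticles, are made up for illustration):
// tell WebGL which varyings to capture BEFORE linking the program
gl.transformFeedbackVaryings(program, ['v_nextState'], gl.SEPARATE_ATTRIBS);
gl.linkProgram(program);
// each update: read attributes from currentStateBuffer, capture v_nextState into nextStateBuffer
const tf = gl.createTransformFeedback();
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, nextStateBuffer);
gl.enable(gl.RASTERIZER_DISCARD);      // only the vertex shader matters, skip rasterization
gl.beginTransformFeedback(gl.POINTS);
gl.drawArrays(gl.POINTS, 0, numParticles);
gl.endTransformFeedback();
gl.disable(gl.RASTERIZER_DISCARD);
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, null);
// swap so the new state becomes the current state for the next update
[currentStateBuffer, nextStateBuffer] = [nextStateBuffer, currentStateBuffer];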
use textures attached to framebuffers
in this case, similarly, you have 2 textures, currentState and nextState. You set nextState to be your render target and read from currentState to generate the next state.
the difficulty is that you can only render to textures by outputting primitives in the vertex shader. If currentState and nextState are 2D textures, that's trivial. Just output a -1.0 to +1.0 quad from the vertex shader and all pixels in nextState will be rendered to.
If you're using a 3D texture then it's the same thing, except you can only render to 4 layers at a time (well, gl.getParameter(gl.MAX_DRAW_BUFFERS)). So you'd have to do something like
for(let layer = 0; layer < numLayers; layer += 4) {
// setup framebuffer to use these 4 layers
gl.drawXXX(...) // draw to 4 layers
}
or better
// at init time
const fbs = [];
for(let layer = 0; layer < numLayers; layer += 4) {
fbs.push(createFramebufferForThese4Layers(layer));
}
// at draw time
fbs.forEach((fb, ndx) => {
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.drawXXX(...) // draw to 4 layers
});
I’m guessing multiple draw calls are slower than one draw call, so another solution is to instead treat a 2D texture as a 3D array and calculate texture coordinates appropriately.
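For example, a rough sketch of that indexing, assuming the size x size x size data is stored as size slices laid out side by side in a (size*size) x size 2D texture (the names here are just for illustration):
uniform highp sampler2D u_stateTex;  // (size*size) x size, slices tiled horizontally
uniform int u_size;                  // e.g. 4
vec4 fetchState3D(int x, int y, int z) {
  // slice z starts at column z * u_size
  return texelFetch(u_stateTex, ivec2(z * u_size + x, y), 0);
}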
I don’t know which is better. If you’re simulating particles and they only need to look at their own currentState then transform feedback is easier. If you need each particle to be able to look at the state of other particles, in other words you need random access to all the data, then your only option is to store the data in textures.
As for positions, I don't understand your code. Positions define primitives, either POINTS, LINES, or TRIANGLES, so how does passing integer X, Y values into your vertex shader help you define POINTS, LINES or TRIANGLES?
It looks like you're trying to use POINTS, in which case you need to set gl_PointSize to the size of the point you want to draw (1.0) and you need to convert those positions into clip space
gl_Position = vec4(((position.xy + 0.5) / resolution) * 2.0 - 1.0, 0, 1);
where resolution is the size of the texture (the * 2.0 - 1.0 expands the 0 to 1 range into -1 to +1 clip space).
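Putting that together, a sketch of the vertex shader for drawing one point per texel might look like this (resolution would be a uniform you pass in, e.g. [4, 4]):
#version 300 es
in vec4 position;         // integer texel coords, e.g. (0,0) .. (3,3)
uniform vec2 resolution;  // size of the render target, e.g. (4, 4)
void main() {
  // move to the texel center, normalize to 0..1, then expand to -1..+1 clip space
  vec2 clip = ((position.xy + 0.5) / resolution) * 2.0 - 1.0;
  gl_Position = vec4(clip, 0, 1);
  gl_PointSize = 1.0;
}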
But doing it this way will be slow. It's much better to just draw a full-size (-1 to +1) clip space quad. For every pixel in the destination the fragment shader will be called. gl_FragCoord.xy will be the location of the center of the pixel currently being rendered, so for the first pixel in the bottom left corner gl_FragCoord.xy will be (0.5, 0.5). The pixel to the right of that will be (1.5, 0.5). The pixel to the right of that will be (2.5, 0.5). You can use that value to calculate how to access currentState. Assuming a 1:1 mapping, the easiest way would be
int n = numberOfLayerThatsAttachedToCOLOR_ATTACHMENT0;
vec4 currentStateValueForLayerN = texelFetch(
currentStateTexture, ivec3(gl_FragCoord.xy, n + 0), 0);
vec4 currentStateValueForLayerNPlus1 = texelFetch(
currentStateTexture, ivec3(gl_FragCoord.xy, n + 1), 0);
vec4 currentStateValueForLayerNPlus2 = texelFetch(
currentStateTexture, ivec3(gl_FragCoord.xy, n + 2), 0);
...
vec4 nextStateForLayerN = computeNextStateFromCurrentState(currentStateValueForLayerN);
vec4 nextStateForLayerNPlus1 = computeNextStateFromCurrentState(currentStateValueForLayerNPlus1);
vec4 nextStateForLayerNPlus2 = computeNextStateFromCurrentState(currentStateValueForLayerNPlus2);
...
outColor[0] = nextStateForLayerN;
outColor[1] = nextStateForLayerNPlus1;
outColor[2] = nextStateForLayerNPlus2;
...
I don’t know if you needed this, but just to test, here’s a simple example that renders a different color to every pixel of a 4x4x4 texture and then displays them.
const pointVS = `#version 300 es
uniform int size;
uniform highp sampler3D tex;
out vec4 v_color;
void main() {
int x = gl_VertexID % size;
int y = (gl_VertexID / size) % size;
int z = gl_VertexID / (size * size);
v_color = texelFetch(tex, ivec3(x, y, z), 0);
gl_PointSize = 8.0;
vec3 normPos = vec3(x, y, z) / float(size);
gl_Position = vec4(
mix(-0.9, 0.6, normPos.x) + mix(0.0, 0.3, normPos.y),
mix(-0.6, 0.9, normPos.z) + mix(0.0, -0.3, normPos.y),
0,
1);
}
`;
const pointFS = `#version 300 es
precision highp float;
in vec4 v_color;
out vec4 outColor;
void main() {
outColor = v_color;
}
`;
const rtVS = `#version 300 es
in vec4 position;
void main() {
gl_Position = position;
}
`;
const rtFS = `#version 300 es
precision highp float;
uniform vec2 resolution;
out vec4 outColor[4];
void main() {
vec2 xy = gl_FragCoord.xy / resolution;
outColor[0] = vec4(1, 0, xy.x, 1);
outColor[1] = vec4(0.5, xy.yx, 1);
outColor[2] = vec4(xy, 0, 1);
outColor[3] = vec4(1, vec2(1) - xy, 1);
}
`;
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
if (!gl) {
return alert('need webgl2');
}
const pointProgramInfo = twgl.createProgramInfo(gl, [pointVS, pointFS]);
const rtProgramInfo = twgl.createProgramInfo(gl, [rtVS, rtFS]);
const size = 4;
const numPoints = size * size * size;
const tex = twgl.createTexture(gl, {
target: gl.TEXTURE_3D,
width: size,
height: size,
depth: size,
});
const clipspaceFullSizeQuadBufferInfo = twgl.createBufferInfoFromArrays(gl, {
position: {
data: [
-1, -1,
1, -1,
-1, 1,
-1, 1,
1, -1,
1, 1,
],
numComponents: 2,
},
});
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
for (let i = 0; i < 4; ++i) {
gl.framebufferTextureLayer(
gl.FRAMEBUFFER,
gl.COLOR_ATTACHMENT0 + i,
tex,
0, // mip level
i, // layer
);
}
gl.drawBuffers([
gl.COLOR_ATTACHMENT0,
gl.COLOR_ATTACHMENT1,
gl.COLOR_ATTACHMENT2,
gl.COLOR_ATTACHMENT3,
]);
gl.viewport(0, 0, size, size);
gl.useProgram(rtProgramInfo.program);
twgl.setBuffersAndAttributes(
gl,
rtProgramInfo,
clipspaceFullSizeQuadBufferInfo);
twgl.setUniforms(rtProgramInfo, {
resolution: [size, size],
});
twgl.drawBufferInfo(gl, clipspaceFullSizeQuadBufferInfo);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
gl.drawBuffers([
gl.BACK,
]);
gl.useProgram(pointProgramInfo.program);
twgl.setUniforms(pointProgramInfo, {
tex,
size,
});
gl.drawArrays(gl.POINTS, 0, numPoints);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
My game has a drawing tool - a looping line renderer that is used as a marker to manipulate an area of the terrain in the shape of the line. This all happens at runtime as soon as the player stops drawing the line.
So far I have managed to raise terrain vertices that match the coordinates of the line renderer's points, but I have difficulties with raising the points that fall inside the marker's shape. Here is an image describing what I currently have:
I tried using the "Polygon Fill Algorithm" (http://alienryderflex.com/polygon_fill/), but raising the terrain vertices one line at a time is too resource-intensive (even when the algorithm is narrowed to a rectangle that surrounds only the marked area). Also, my marker's outline points have gaps between them, meaning I need to add a radius to the line that raises the terrain, but that might leave the result sloppy.
Maybe I should discard the drawing mechanism and use a mesh with a mesh collider as the marker?
Any ideas are appreciated on how to get the terrain manipulated in the exact shape as the marker.
Current code:
I used this script to create the line - the first and the last line points have the same coordinates.
The code used to manipulate the terrain is currently triggered by clicking a GUI button:
using System;
using System.Collections;
using UnityEngine;
public class changeTerrainHeight_lineMarker : MonoBehaviour
{
public Terrain TerrainMain;
public LineRenderer line;
void OnGUI()
{
//Get the terrain heightmap width and height.
int xRes = TerrainMain.terrainData.heightmapWidth;
int yRes = TerrainMain.terrainData.heightmapHeight;
//GetHeights - gets the heightmap points of the terrain. Store them in an array
float[,] heights = TerrainMain.terrainData.GetHeights(0, 0, xRes, yRes);
if (GUI.Button(new Rect(30, 30, 200, 30), "Line points"))
{
/* Set the positions to array "positions" */
Vector3[] positions = new Vector3[line.positionCount];
line.GetPositions(positions);
/* the height to apply to the affected terrain vertices */
float height = 0.05f;
for (int i = 0; i < line.positionCount; i++)
{
/* Assign height data */
heights[Mathf.RoundToInt(positions[i].z), Mathf.RoundToInt(positions[i].x)] = height;
}
//SetHeights to change the terrain height.
TerrainMain.terrainData.SetHeights(0, 0, heights);
}
}
}
Got to the solution thanks to Siim's personal help, and thanks to the article: How can I determine whether a 2D Point is within a Polygon?.
The end result is visualized here:
First the code, then the explanation:
using System;
using System.Collections;
using UnityEngine;
public class changeTerrainHeight_lineMarker : MonoBehaviour
{
public Terrain TerrainMain;
public LineRenderer line;
void OnGUI()
{
//Get the terrain heightmap width and height.
int xRes = TerrainMain.terrainData.heightmapWidth;
int yRes = TerrainMain.terrainData.heightmapHeight;
//GetHeights - gets the heightmap points of the terrain. Store them in an array
float[,] heights = TerrainMain.terrainData.GetHeights(0, 0, xRes, yRes);
//Trigger line area raiser
if (GUI.Button(new Rect(30, 30, 200, 30), "Line fill"))
{
/* Set the positions to array "positions" */
Vector3[] positions = new Vector3[line.positionCount];
line.GetPositions(positions);
float height = 0.10f; // the height to apply to the affected vertices of the terrain
/* Find the rectangle the shape is in! The sides of the rectangle are based on the top-most, right-most, bottom-most and left-most vertices. */
float ftop = float.NegativeInfinity;
float fright = float.NegativeInfinity;
float fbottom = Mathf.Infinity;
float fleft = Mathf.Infinity;
for (int i = 0; i < line.positionCount; i++)
{
//find the outmost points
if (ftop < positions[i].z)
{
ftop = positions[i].z;
}
if (fright < positions[i].x)
{
fright = positions[i].x;
}
if (fbottom > positions[i].z)
{
fbottom = positions[i].z;
}
if (fleft > positions[i].x)
{
fleft = positions[i].x;
}
}
int top = Mathf.RoundToInt(ftop);
int right = Mathf.RoundToInt(fright);
int bottom = Mathf.RoundToInt(fbottom);
int left = Mathf.RoundToInt(fleft);
int terrainXmax = right - left; // width of the bounding rectangle
int terrainZmax = top - bottom; // height of the bounding rectangle
float[,] shapeHeights = TerrainMain.terrainData.GetHeights(left, bottom, terrainXmax, terrainZmax);
Vector2 point; //Create a point Vector2 point to match the shape
/* Loop through all points in the rectangle surrounding the shape */
for (int i = 0; i < terrainZmax; i++)
{
point.y = i + bottom; //Add offset to the element so it matches the position of the line
for (int j = 0; j < terrainXmax; j++)
{
point.x = j + left; //Add offset to the element so it matches the position of the line
if (InsidePolygon(point, bottom))
{
shapeHeights[i, j] = height; // set the height value to the terrain vertex
}
}
}
//SetHeights to change the terrain height.
TerrainMain.terrainData.SetHeightsDelayLOD(left, bottom, shapeHeights);
TerrainMain.ApplyDelayedHeightmapModification();
}
}
//Checks if the given vertex is inside the shape.
bool InsidePolygon(Vector2 p, int terrainZmax)
{
// Assign the points that define the outline of the shape
Vector3[] positions = new Vector3[line.positionCount];
line.GetPositions(positions);
int count = 0;
Vector2 p1, p2;
int n = positions.Length;
// Find the lines that define the shape
for (int i = 0; i < n; i++)
{
p1.y = positions[i].z;// - p.y;
p1.x = positions[i].x;// - p.x;
if (i != n - 1)
{
p2.y = positions[(i + 1)].z;// - p.y;
p2.x = positions[(i + 1)].x;// - p.x;
}
else
{
p2.y = positions[0].z;// - p.y;
p2.x = positions[0].x;// - p.x;
}
// check if the given point p intersects with the lines that form the outline of the shape.
if (LinesIntersect(p1, p2, p, terrainZmax))
{
count++;
}
}
// the point is inside the shape when the number of line intersections is an odd number
if (count % 2 == 1)
{
return true;
}
else
{
return false;
}
}
// Function that checks if two lines intersect with each other
bool LinesIntersect(Vector2 A, Vector2 B, Vector2 C, int terrainZmax)
{
Vector2 D = new Vector2(C.x, terrainZmax);
Vector2 CmP = new Vector2(C.x - A.x, C.y - A.y);
Vector2 r = new Vector2(B.x - A.x, B.y - A.y);
Vector2 s = new Vector2(D.x - C.x, D.y - C.y);
float CmPxr = CmP.x * r.y - CmP.y * r.x;
float CmPxs = CmP.x * s.y - CmP.y * s.x;
float rxs = r.x * s.y - r.y * s.x;
if (CmPxr == 0f)
{
// Lines are collinear, and so intersect if they have any overlap
return ((C.x - A.x < 0f) != (C.x - B.x < 0f))
|| ((C.y - A.y < 0f) != (C.y - B.y < 0f));
}
if (rxs == 0f)
return false; // Lines are parallel.
float rxsr = 1f / rxs;
float t = CmPxs * rxsr;
float u = CmPxr * rxsr;
return (t >= 0f) && (t <= 1f) && (u >= 0f) && (u <= 1f);
}
}
The method used fills the shape one line at a time - the "ray casting" method. It turns out that this method starts taking more resources only if the given shape has a lot of sides. (A side of the shape is a line that connects two points in the outline of the shape.)
When I posted this question, my Line Renderer had 134 points defining the line. This also means the shape has the same number of sides that need to pass the ray cast check.
When I narrowed down the number of points to 42, the method got fast enough, and the shape lost almost no detail.
Furthermore, I am planning on using some methods to make the contours smoother, so the shape can be defined with even fewer points (see the sketch after the steps below).
In short, you need these steps to get to the result:
Create the outline of the shape;
Find the 4 points that mark the bounding box around the shape;
Start ray casting the box;
Count how many times the ray intersects the sides of the shape. The points with an odd count are located inside the shape;
Assign your attributes to all of the points that were found in the shape.
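On the point-reduction idea mentioned above, here is a minimal sketch of distance-based decimation of the line points before running the fill. It is not part of the solution above, and the 0.5f threshold is just an example value; the method is meant to live in the same MonoBehaviour as the code above:
// Keep only points that are at least minDistance apart along the outline.
Vector3[] ReducePoints(Vector3[] points, float minDistance)
{
    var kept = new System.Collections.Generic.List<Vector3> { points[0] };
    for (int i = 1; i < points.Length; i++)
    {
        if (Vector3.Distance(kept[kept.Count - 1], points[i]) >= minDistance)
        {
            kept.Add(points[i]);
        }
    }
    return kept.ToArray();
}
// Usage (hypothetical): positions = ReducePoints(positions, 0.5f);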
A parallax background with a fixed camera is easy to do, but since I'm making a top-down view 2D space exploration game, I figured that having a single SKSpriteNode filling the screen, being a child of my SKCameraNode, and using an SKShader to draw a parallax starfield would be easier.
I went on Shadertoy and found this simple-looking shader. I adapted it successfully on Shadertoy to accept a vec2() for the velocity of the movement that I want to pass as an SKAttribute so it can follow the movement of my ship.
Here is the original source:
https://www.shadertoy.com/view/XtjSDh
I managed to make the conversion of the original code so it compiles without any error, but nothing shows up on the screen. I tried the individual functions and they do work to generate a fixed image.
Any pointers to make it work?
Thanks!
This isn't really an answer, but it's a lot more info than a comment, and highlights some of the oddness and appropriateness of how SK does particles:
There are a couple of weird things about particles in SceneKit that might apply to SpriteKit.
when you move the particle system, you can have the particles move with it. This is the default behaviour:
From the docs:
When the emitter creates particles, they are rendered as children of
the emitter node. This means that they inherit the characteristics of
the emitter node, just like nodes do. For example, if you rotate the
emitter node, the positions of all of the spawned particles are
rotated also. Depending on what effect you are simulating with the
emitter, this may not be the correct behavior.
For most applications, this is the wrong behaviour, in fact. But for what you're wanting to do, this is ideal. You can position new SKEmitterNodes offscreen where the ship is heading, and fix them to "space" so they rotate in conjunction with the directional changes of the player's ship, and the particles will do exactly as you want/need to create the feeling of moving throughout space.
SpriteKit has a pre-build, or populate, ability in the form of advancing the simulation: https://developer.apple.com/reference/spritekit/skemitternode/1398027-advancesimulationtime
This means you can have stars ready to show wherever the ship is heading, through space, as the emitters come on screen. There's no need for a loading delay to build stars; this does it immediately.
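For example (the emitter variable and .sks file name here are hypothetical):
let starEmitter = SKEmitterNode(fileNamed: "Starfield.sks")!
starEmitter.advanceSimulationTime(10)   // pre-populate roughly 10 seconds' worth of particles
scene.addChild(starEmitter)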
As near as I can figure, you'd need 3 particle emitters to pull this off, each the size of the screen of the device. Burst the particles out, then release each layer you want for parallax to a target node at the right "depth" from the camera, and carry on by moving these targets as per the screen movement.
Bit messy, but probably quicker and easier, and with much more potential for playful effects, than creating your own system.
Maybe... I could be wrong.
EDIT: Code is clean and working now. I've set up a GitHub repo for this.
I guess I didn't explain what I wanted properly. I needed a starfield background that follows the camera, like you could find in Subspace (back in the day).
The result is pretty cool and convincing! I'll probably come back to this later when the node quantity becomes a bottleneck. I'm still convinced that the proper way to do that is with shaders!
Here is a link to my code on GitHub. I hope it can be useful to someone. It's still a work in progress but it works well. Included in the repo is the source from SKTUtils (a library by Ray Wenderlich that is already freely available on GitHub) and from my own extension to Ray's tools that I called nuts-n-bolts. These are just extensions for common types that everyone should find useful. You, of course, have the source for the StarfieldNode and the InteractiveCameraNode along with a small demo project.
https://github.com/sonoblaise/StarfieldDemo
The short answer is, in SpriteKit you use the fragment coordinates directly without needing to scale against the viewport resolution (iResolution in Shadertoy land), so the line:
vec2 samplePosition = (fragCoord.xy / maxResolution) + vec2(0.0, iTime * 0.01);
can be changed to omit the scaling:
vec2 samplePosition = fragCoord.xy + vec2(0.0, iTime * 0.01);
This is likely the root cause (hard to know for sure without seeing your version of the shader code) of why you're only seeing black from the shader.
For a full answer for an implementation of a SpriteKit shader making a star field, let's take the original shader and simplify it so there's only one star field, no "fog" (just to keep things simple), and add a variable to control the velocity vector of the movement of the stars:
(this is still in shadertoy code)
float Hash(in vec2 p)
{
float h = dot(p, vec2(12.9898, 78.233));
return -1.0 + 2.0 * fract(sin(h) * 43758.5453);
}
vec2 Hash2D(in vec2 p)
{
float h = dot(p, vec2(12.9898, 78.233));
float h2 = dot(p, vec2(37.271, 377.632));
return -1.0 + 2.0 * vec2(fract(sin(h) * 43758.5453), fract(sin(h2) * 43758.5453));
}
float Noise(in vec2 p)
{
vec2 n = floor(p);
vec2 f = fract(p);
vec2 u = f * f * (3.0 - 2.0 * f);
return mix(mix(Hash(n), Hash(n + vec2(1.0, 0.0)), u.x),
mix(Hash(n + vec2(0.0, 1.0)), Hash(n + vec2(1.0)), u.x), u.y);
}
vec3 Voronoi(in vec2 p)
{
vec2 n = floor(p);
vec2 f = fract(p);
vec2 mg, mr;
float md = 8.0;
for(int j = -1; j <= 1; ++j)
{
for(int i = -1; i <= 1; ++i)
{
vec2 g = vec2(float(i), float(j));
vec2 o = Hash2D(n + g);
vec2 r = g + o - f;
float d = dot(r, r);
if(d < md)
{
md = d;
mr = r;
mg = g;
}
}
}
return vec3(md, mr);
}
vec3 AddStarField(vec2 samplePosition, float threshold)
{
vec3 starValue = Voronoi(samplePosition);
if(starValue.x < threshold)
{
float power = 1.0 - (starValue.x / threshold);
return vec3(power * power * power);
}
return vec3(0.0);
}
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
float maxResolution = max(iResolution.x, iResolution.y);
vec2 velocity = vec2(0.01, 0.01);
vec2 samplePosition = (fragCoord.xy / maxResolution) + vec2(iTime * velocity.x, iTime * velocity.y);
vec3 finalColor = AddStarField(samplePosition * 16.0, 0.00125);
fragColor = vec4(finalColor, 1.0);
}
If you paste that into a new shadertoy window and run it you should see a monochrome star field moving towards the bottom left.
To adjust it for SpriteKit is fairly simple. We need to remove the "in"s from the function parameters, change the names of some constants (there's a decent blog post about the Shadertoy to SpriteKit changes which are needed), and use an attribute for the velocity vector so we can change the direction of the stars for each SKSpriteNode this is applied to, and over time, as needed.
Here's the full SpriteKit shader source, with a_velocity as a needed attribute defining the star field movement:
float Hash(vec2 p)
{
float h = dot(p, vec2(12.9898, 78.233));
return -1.0 + 2.0 * fract(sin(h) * 43758.5453);
}
vec2 Hash2D(vec2 p)
{
float h = dot(p, vec2(12.9898, 78.233));
float h2 = dot(p, vec2(37.271, 377.632));
return -1.0 + 2.0 * vec2(fract(sin(h) * 43758.5453), fract(sin(h2) * 43758.5453));
}
float Noise(vec2 p)
{
vec2 n = floor(p);
vec2 f = fract(p);
vec2 u = f * f * (3.0 - 2.0 * f);
return mix(mix(Hash(n), Hash(n + vec2(1.0, 0.0)), u.x),
mix(Hash(n + vec2(0.0, 1.0)), Hash(n + vec2(1.0)), u.x), u.y);
}
vec3 Voronoi(vec2 p)
{
vec2 n = floor(p);
vec2 f = fract(p);
vec2 mg, mr;
float md = 8.0;
for(int j = -1; j <= 1; ++j)
{
for(int i = -1; i <= 1; ++i)
{
vec2 g = vec2(float(i), float(j));
vec2 o = Hash2D(n + g);
vec2 r = g + o - f;
float d = dot(r, r);
if(d < md)
{
md = d;
mr = r;
mg = g;
}
}
}
return vec3(md, mr);
}
vec3 AddStarField(vec2 samplePosition, float threshold)
{
vec3 starValue = Voronoi(samplePosition);
if (starValue.x < threshold)
{
float power = 1.0 - (starValue.x / threshold);
return vec3(power * power * power);
}
return vec3(0.0);
}
void main()
{
vec2 samplePosition = v_tex_coord.xy + vec2(u_time * a_velocity.x, u_time * a_velocity.y);
vec3 finalColor = AddStarField(samplePosition * 20.0, 0.00125);
gl_FragColor = vec4(finalColor, 1.0);
}
(worth noting, this is simply a modified version of the original)
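To hook that up on the SpriteKit side, something like the following sketch should work (the variable names and the idea of keeping starfieldShaderSource as a String are assumptions; u_time and v_tex_coord are supplied by SpriteKit automatically):
let starShader = SKShader(source: starfieldShaderSource) // the shader source above, as a String
starShader.attributes = [SKAttribute(name: "a_velocity", type: .vectorFloat2)]
let starfield = SKSpriteNode(color: .black, size: scene.size)
starfield.shader = starShader
starfield.setValue(SKAttributeValue(vectorFloat2: vector_float2(0.01, 0.02)),
                   forAttribute: "a_velocity")
cameraNode.addChild(starfield) // keep the quad glued to the camera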
This shader (code at the end) uses raymarching to render procedural geometry:
However, in the image (above) the cube in the background should be partially occluding the pink solid; it isn't, because of this:
struct fragmentOutput {
float4 color : SV_Target;
float zvalue : SV_Depth;
};
fragmentOutput frag(fragmentInput i) {
fragmentOutput o;
...
o.zvalue = IF(output[1] > 0, 0, 1);
}
However, I cannot for the life of me figure out how to correctly generate a depth value here that allows raymarched solids to obscure / not obscure the other geometry in the scene.
I know it's possible, because there's a working example here: https://github.com/i-saint/RaymarchingOnUnity5 (associated japanese language blog http://i-saint.hatenablog.com/)
However, it's in Japanese and largely undocumented, as well as being extremely complex.
I'm looking for an extremely simplified version of the same thing to build on.
In the shader, I'm currently using the fragment program line:
float2 output = march_raycast(i.worldpos, i.viewdir, _far, _step);
which maps an input point p on the quad near the camera (which has this shader attached to it) into an output float2 (density, distance), where distance is the distance from the quad to the 'point' on the procedural surface.
The question is, how do I map that into a depth buffer in any useful way?
The complete shader is below. To use it, create a new scene with a sphere at 0,0,0 with a size of at least 50 and assign the shader to it:
Shader "Shaders/Raymarching/BasicMarch" {
Properties {
_sun ("Sun", Vector) = (0, 0, 0, 0)
_far ("Far Depth Value", Float) = 20
_edgeFuzz ("Edge fuzziness", Range(1, 20)) = 1.0
_lightStep ("Light step", Range(0.1, 5)) = 1.0
_step ("Raycast step", Range(0.1, 5)) = 1.0
_dark ("Dark value", Color) = (0, 0, 0, 0)
_light ("Light Value", Color) = (1, 1, 1, 1)
[Toggle] _debugDepth ("Display depth field", Float) = 0
[Toggle] _debugLight ("Display light field", Float) = 0
}
SubShader {
Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
Blend SrcAlpha OneMinusSrcAlpha
Pass {
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma target 3.0
#include "UnityCG.cginc"
#include "UnityLightingCommon.cginc" // for _LightColor0
#define IF(a, b, c) lerp(b, c, step((fixed) (a), 0));
uniform float _far;
uniform float _lightStep;
uniform float3 _sun;
uniform float4 _light;
uniform float4 _dark;
uniform float _debugDepth;
uniform float _debugLight;
uniform float _edgeFuzz;
uniform float _step;
/**
* Soft sphere at center center_ with radius radius_
* #param center_ The center of the sphere
* #param radius_ The radius of the sphere
* #param point_ The point to check
*/
float geom_soft_sphere(float3 center_, float radius_, float3 point_) {
float rtn = distance(center_, point_);
return IF(rtn < radius_, (radius_ - rtn) / radius_ / _edgeFuzz, 0);
}
/**
* A rectoid centered at center_
* #param center_ The center of the cube
* #param halfsize_ The halfsize of the cube in each direction
*/
float geom_rectoid(float3 center_, float3 halfsize_, float3 point_) {
float rtn = IF((point_[0] < (center_[0] - halfsize_[0])) || (point_[0] > (center_[0] + halfsize_[0])), 0, 1);
rtn = rtn * IF((point_[1] < (center_[1] - halfsize_[1])) || (point_[1] > (center_[1] + halfsize_[1])), 0, 1);
rtn = rtn * IF((point_[2] < (center_[2] - halfsize_[2])) || (point_[2] > (center_[2] + halfsize_[2])), 0, 1);
rtn = rtn * distance(point_, center_);
float radius = length(halfsize_);
return IF(rtn > 0, (radius - rtn) / radius / _edgeFuzz, 0);
}
/**
* Calculate procedural geometry.
* Return (0, 0, 0) for empty space.
* #param point_ A float3; return the density of the solid at p.
* #return The density of the procedural geometry of p.
*/
float march_geometry(float3 point_) {
return
geom_rectoid(float3(0, 0, 0), float3(7, 7, 7), point_) +
geom_soft_sphere(float3(10, 0, 0), 7, point_) +
geom_soft_sphere(float3(-10, 0, 0), 7, point_) +
geom_soft_sphere(float3(0, 0, 10), 7, point_) +
geom_soft_sphere(float3(0, 0, -10), 7, point_);
}
/** Return a randomish value to sample step with */
float rand(float3 seed) {
return frac(sin(dot(seed.xyz ,float3(12.9898,78.233,45.5432))) * 43758.5453);
}
/**
* March the point p along the cast path c, and return a float2
* which is (density, depth); if the density is 0 no match was
* found in the given depth domain.
* #param point_ The origin point
* #param cast_ The cast vector
* #param max_ The maximum depth to step to
* #param step_ The increment to step in
* #return (density, depth)
*/
float2 march_raycast(float3 point_, float3 cast_, float max_, float step_) {
float3 origin_ = point_;
float depth_ = 0;
float density_ = 0;
int steps = floor(max_ / step_);
for (int i = 0; (density_ <= 1) && (i < steps); ++i) {
float3 target_ = point_ + cast_ * i * step_ + rand(point_) * cast_ * step_;
density_ += march_geometry(target_);
depth_ = IF((depth_ == 0) && (density_ != 0), distance(point_, target_), depth_);
}
density_ = IF(density_ > 1, 1, density_);
return float2(density_, depth_);
}
/**
* Simple lighting; raycast from depth point to light source, and get density on path
* #param point_ The origin point on the render target
* #param cast_ The original cast (ie. camera view direction)
* #param raycast_ The result of the original raycast
* #param max_ The max distance to cast
* #param step_ The step increment
*/
float2 march_lighting(float3 point_, float3 cast_, float2 raycast_, float max_, float step_) {
float3 target_ = point_ + cast_ * raycast_[1];
float3 lcast_ = normalize(_sun - target_);
return march_raycast(target_, lcast_, max_, _lightStep);
}
struct fragmentInput {
float4 position : SV_POSITION;
float4 worldpos : TEXCOORD0;
float3 viewdir : TEXCOORD1;
};
struct fragmentOutput {
float4 color : SV_Target;
float zvalue : SV_Depth;
};
fragmentInput vert(appdata_base i) {
fragmentInput o;
o.position = mul(UNITY_MATRIX_MVP, i.vertex);
o.worldpos = mul(_Object2World, i.vertex);
o.viewdir = -normalize(WorldSpaceViewDir(i.vertex));
return o;
}
fragmentOutput frag(fragmentInput i) {
fragmentOutput o;
// Raycast
float2 output = march_raycast(i.worldpos, i.viewdir, _far, _step);
float2 light = march_lighting(i.worldpos, i.viewdir, output, _far, _step);
float lvalue = 1.0 - light[0];
float depth = output[1] / _far;
// Generate fragment color
float4 color = lerp(_light, _dark, lvalue);
// Debugging: Depth
float4 debug_depth = float4(depth, depth, depth, 1);
color = IF(_debugDepth, debug_depth, color);
// Debugging: Color
float4 debug_light = float4(lvalue, lvalue, lvalue, 1);
color = IF(_debugLight, debug_light, color);
// Always apply the depth map
color.a = output[0];
o.zvalue = IF(output[1] > 0, 0, 1);
o.color = IF(output[1] <= 0, 0, color);
return o;
}
ENDCG
}
}
}
(Yes, I know it's quite complex, but it's very difficult to reduce this kind of shader into a 'simple test case' to play with)
I'll happily accept any answer which is a modification of the shader above that allows the procedural solid to be obscured / obscure other geometry in the scene as though it was 'real geometry'.
--
Edit: You can get this 'working' by explicitly setting the depth value on the other geometry in the scene using the same depth function as the raymarcher:
...however, I still cannot get this to work correctly with geometry using the 'standard' shader. Still hunting for a working solution...
Looking at the project you linked to, the most important difference I see is that their raycast march function uses a pass-by-reference parameter to return a fragment position called ray_pos. That position appears to be in object space, so they transform it using the view-projection matrix to get clip space and read a depth value.
The project also has a compute_depth function, but it looks pretty simple.
Your march_raycast function is already calculating a target_ position, so you could refactor a bit, apply the out keyword to return it to the caller, and use it in depth calculations:
//get position using pass-by-ref
float3 ray_pos = i.worldpos;
float2 output = march_raycast(ray_pos, i.viewdir, _far, _step);
...
//convert position to clip space, read depth
float4 clip_pos = mul(UNITY_MATRIX_VP, float4(ray_pos, 1.0));
o.zvalue = clip_pos.z / clip_pos.w;
There might be a problem with render setup.
To allow your shader to output per-pixel depth, its depth-tests must be disabled. Otherwise, the GPU would - for optimization - assume that all your pixels' depths are the interpolated depths from your vertices.
As your shader does not do depth-tests, it must be rendered before the geometry that does, or it will just overwrite whatever the other geometry wrote to the depth buffer.
It must, however, have depth-write enabled, or the depth output of your pixel shader will be ignored and not written to the depth buffer.
Your RenderType is Transparent, which, I assume, should disable depth-write. That would be a problem.
Your Queue is Transparent as well, which should have it render after all solid geometry, and back to front, which is a problem as well, as we already concluded we have to render before it.
So
put your shader in an early render queue that will render before solid geometry
have depth-write enabled
have depth-test disabled
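In ShaderLab terms, that setup might look roughly like this (a sketch of the changed lines only; the exact queue value is just an example, anything that renders before solid geometry will do):
SubShader {
Tags {"Queue"="Geometry-100" "IgnoreProjector"="True" "RenderType"="Transparent"}
Blend SrcAlpha OneMinusSrcAlpha
Pass {
ZWrite On    // keep the per-pixel depth written by the fragment program
ZTest Always // effectively disables depth-testing for this pass
// ... CGPROGRAM block unchanged ...
}
}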
I am working on a Unity3D project which relies on a 3D texture for the moment.
The problem is, Unity only allows Pro users to make use of Texture3D. Hence I'm looking for an alternative to Texture3D, perhaps a one-dimensional texture (although not natively available in Unity) that is interpreted as 3-dimensional in the shader (which uses the 3D texture).
Is there a way to do this whilst (preferably) keeping subpixel information?
(GLSL and Cg tags added because here lies the core of the problem)
Edit: The problem is addressed here as well: webgl glsl emulate texture3d
However, this is not yet finished and does not work properly.
Edit: For the time being I disregard proper subpixel information. So any help on converting a 2D texture to contain 3D information is appreciated!
Edit: I retracted my own answer as it isn't sufficient as of yet:
float2 uvFromUvw( float3 uvw ) {
float2 uv = float2(uvw.x, uvw.y / _VolumeTextureSize.z);
uv.y += float(round(uvw.z * (_VolumeTextureSize.z - 1))) / _VolumeTextureSize.z;
return uv;
}
With initialization as Texture2D(volumeWidth, volumeHeight * volumeDepth).
Most of the time it works, but sometimes it shows wrong pixels, probably because of subpixel information it is picking up on. How can I fix this? Clamping the input doesn't work.
I'm using this for my 3D clouds if that helps:
float SampleNoiseTexture( float3 _UVW, float _MipLevel )
{
float2 WrappedUW = fmod( 16.0 * (1000.0 + _UVW.xz), 16.0 ); // UW wrapped in [0,16[
float IntW = floor( WrappedUW.y ); // Integer slice number
float dw = WrappedUW.y - IntW; // Remainder for interpolating between slices
_UVW.x = (17.0 * IntW + WrappedUW.x + 0.25) * 0.00367647058823529411764705882353; // divided by 17*16 = 272
float4 Value = tex2D( _TexNoise3D, float4( _UVW.xy, 0.0, 0.0 ) );
return lerp( Value.x, Value.y, dw );
}
The "3D texture" is packed as 16 slices of 17 pixels wide in a 272x16 texture, with the 17th column of each slice being a copy of the 1st column (wrap address mode)...
Of course, no mip-mapping allowed with this technique.
Here's the code I'm using to create the 3D texture if that's what's bothering you:
const int NOISE3D_TEXTURE_POT = 4;
const int NOISE3D_TEXTURE_SIZE = 1 << NOISE3D_TEXTURE_POT;
// <summary>
// Create the "3D noise" texture
// To simulate 3D textures that are not available in Unity, I create a single long 2D slice of (17*16) x 16
// The width is 17*16 so that all 3D slices are packed into a single line, and I use 17 as a single slice width
// because I pad the last pixel with the first column of the same slice so bilinear interpolation is correct.
// The texture contains 2 significant values in Red and Green :
// Red is the noise value in the current W slice
// Green is the noise value in the next W slice
// Then, the actual 3D noise value is an interpolation of red and green based on the W remainder
// </summary>
protected NuajTexture2D Build3DNoise()
{
// Build first noise mip level
float[,,] NoiseValues = new float[NOISE3D_TEXTURE_SIZE,NOISE3D_TEXTURE_SIZE,NOISE3D_TEXTURE_SIZE];
for ( int W=0; W < NOISE3D_TEXTURE_SIZE; W++ )
for ( int V=0; V < NOISE3D_TEXTURE_SIZE; V++ )
for ( int U=0; U < NOISE3D_TEXTURE_SIZE; U++ )
NoiseValues[U,V,W] = (float) SimpleRNG.GetUniform();
// Build actual texture
int MipLevel = 0; // In my original code, I build several textures for several mips...
int MipSize = NOISE3D_TEXTURE_SIZE >> MipLevel;
int Width = MipSize*(MipSize+1); // Pad with an additional column
Color[] Content = new Color[MipSize*Width];
// Build content
for ( int W=0; W < MipSize; W++ )
{
int Offset = W * (MipSize+1); // W Slice offset
for ( int V=0; V < MipSize; V++ )
{
for ( int U=0; U <= MipSize; U++ )
{
Content[Offset+Width*V+U].r = NoiseValues[U & (MipSize-1),V,W];
Content[Offset+Width*V+U].g = NoiseValues[U & (MipSize-1),V,(W+1) & (MipSize-1)];
}
}
}
// Create texture
NuajTexture2D Result = Help.CreateTexture( "Noise3D", Width, MipSize, TextureFormat.ARGB32, false, FilterMode.Bilinear, TextureWrapMode.Repeat );
Result.SetPixels( Content, 0 );
Result.Apply( false, true );
return Result;
}
I followed Patapom's response and came to the following. However, it still isn't working as it should.
float getAlpha(float3 position)
{
float2 WrappedUW = fmod( _Volume.xz * (1000.0 + position.xz), _Volume.xz ); // UW wrapped in [0,16[
float IntW = floor( WrappedUW.y ); // Integer slice number
float dw = WrappedUW.y - IntW; // Remainder for interpolating between slices
position.x = ((_Volume.z + 1.0) * IntW + WrappedUW.x + 0.25) / ((_Volume.z + 1.0) * _Volume.x); // divided by (slices+1)*width, e.g. 17*16 = 272
float4 Value = tex2Dlod( _VolumeTex, float4( position.xy, 0.0, 0.0 ) );
return lerp( Value.x, Value.y, dw );
}
public int GetPixelId(int x, int y, int z) {
return y * (volumeWidth + 1) * volumeDepth + z * (volumeWidth + 1) + x;
}
// Code to set the pixelbuffer one pixel at a time starting from a clean slate
pixelBuffer[GetPixelId(x, y, z)].r = color.r;
if (z > 0)
pixelBuffer[GetPixelId(x, y, z - 1)].g = color.r;
if (z == volumeDepth - 1 || z == 0)
pixelBuffer[GetPixelId(x, y, z)].g = color.r;
if (x == 0) {
pixelBuffer[GetPixelId(volumeWidth, y, z)].r = color.r;
if (z > 0)
pixelBuffer[GetPixelId(volumeWidth, y, z - 1)].g = color.r;
if (z == volumeDepth - 1 || z == 0)
pixelBuffer[GetPixelId(volumeWidth, y, z)].g = color.r;
}