I need to rotate a direction vector towards another with a maximum angle in a compute shader, just like the Vector3.RotateTowards(from, to, maxAngle, 0) function does. This needs to happen inside the compute shader, since transferring the needed values between the CPU and the GPU would be too slow. Any suggestions on how to implement this?
This is adapted from a combination of this post on the Unity forums by vc1001 and this shadertoy entry by demofox. I haven't tested it, and it has been a while since I've done HLSL/Cg coding, so please let me know if there are bugs, especially syntax errors.
float3 slerp(float3 current, float3 target, float maxAngle)
{
    // Dot product - the cosine of the angle between the two unit vectors.
    float cosAngle = dot(current, target);
    // Clamp it to the valid input range of acos();
    // floating-point precision can be a fickle mistress.
    cosAngle = clamp(cosAngle, -1.0f, 1.0f);
    // acos(cosAngle) returns the full angle between current and target.
    float delta = acos(cosAngle);
    // Rotate by the full angle, but never by more than maxAngle (both in radians).
    float theta = min(maxAngle, delta);
    // Orthonormal basis: the component of target perpendicular to current.
    float3 relativeVec = normalize(target - current * cosAngle);
    return (current * cos(theta)) + (relativeVec * sin(theta));
}
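One caveat worth noting: when current and target are nearly parallel, delta approaches zero and normalize(target - current * cosAngle) divides by a near-zero length. Below is a minimal sketch of a guard, assuming you're happy to simply snap to the target in that case (the exactly opposed case, delta near pi, is similarly degenerate and needs an arbitrary perpendicular axis; that's left out here):

float3 slerpSafe(float3 current, float3 target, float maxAngle)
{
    float cosAngle = clamp(dot(current, target), -1.0f, 1.0f);
    float delta = acos(cosAngle);
    // If the remaining angle is tiny, or already within maxAngle,
    // returning the target avoids normalizing a near-zero vector.
    if (delta < 1e-4f || delta <= maxAngle)
        return target;
    float3 relativeVec = normalize(target - current * cosAngle);
    return (current * cos(maxAngle)) + (relativeVec * sin(maxAngle));
}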
I have a function I call to generate some randomness in my hlsl compute shader code
float rand3dTo1d(float3 value, float3 dotDir = float3(12.9898, 78.233, 37.719)){
    //make value smaller to avoid artefacts
    float3 smallValue = sin(value);
    //get scalar value from 3d vector
    float random = dot(smallValue, dotDir);
    //make value more random by making it bigger and then taking the fractional part
    random = frac(sin(random) * 43758.5453);
    return random;
}
If I pass in an incoming vector's location, all is fine, but if I try to pass in the center point of three vectors, computed using this function, into the randomness:
float3 GetTriangleCenter3d(float3 a, float3 b, float3 c) {
    return (a + b + c) / 3.0;
}
then occasionally SOME of my points are not the same from frame to frame (shown by the color I paint the triangles with using this code); I get a flickering of color.
float3 color = lerp(_ColorFrom, _ColorTo, rand1d);
I am at a total loss. I was able to at least get consistent results by using the thread id as the seed for the randomness, but not being able to use the center point of the triangle is really weird to me, and I have no idea what I am doing wrong or what I am missing. Any help would be great.
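For reference, here is a minimal sketch of the thread-id workaround mentioned above; the kernel signature and the output buffer are hypothetical, not the asker's actual code:

float3 _ColorFrom;
float3 _ColorTo;
RWStructuredBuffer<float3> _Colors; // hypothetical output buffer

[numthreads(64, 1, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    // Seeding with the thread id is identical every frame, unlike a
    // centre point recomputed from floating-point sums each frame.
    float rand1d = rand3dTo1d(float3(id));
    _Colors[id.x] = lerp(_ColorFrom, _ColorTo, rand1d);
}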
This is going to be quite a long post, sorry, but I think it's worth it because it's quite complicated and I would imagine quite a lot of other people would really like to be able to achieve this effect. There are a few other questions on here about SPH but none of them relate to a Niagara implementation. I've also posted this question on Unreal Engine Answers.
I've been attempting to replicate the fluid simulation in Niagara as shown by Asher Zhu here: The Art of Illusion - Niagara Simulation Framework Overview. Skip to 20:25 for the effect I'm after.
Seeing as he shows none of the Niagara system at all, apart from some of the bits for rendering it (which I haven't got to yet), I've followed the article here: link.
Now, I have it looking more or less like a fluid. However, it doesn't really look anything like Asher's. It's rather unstable and will tend to sit for a few seconds with a region of higher density before exploding and then settling down. It also never develops any depth. All the particles, unless they're flying about erratically, sit on the floor. The other problem is collision - I can't see how Asher has managed to get such clean collisions with the environment. My signed distance fields are big, round and uneven and the particles never get anywhere near the walls.
The fourth image below shows it exploding just after it got to the third image and the fifth image is what it looks like after it finally settles down (as well as how far away from the walls the particles end up). The last image shows that it's completely flat (this isn't an issue with the volume of the box; I've tested that).
It's difficult to show everything in the Niagara system on here but the crucial bit is the HLSL code:
OutVelocity = Velocity;
OutPosition = Position;
Density = 0;
float Pressure = 0;
float smoothingRadius = 1.0f;
float restDensity = 0.2f;
float viscosity = 0.018f;
float gas = 500.0f;
const float3 gravity = float3(0, 0, -98);
float pi = 3.141593;
int numParticles;
DirectReads.GetNumParticles(numParticles);
const float Poly6_constant = (315 / (64 * pi * pow(smoothingRadius, 9)));
const float Spiky_constant = (-45 / (pi * pow(smoothingRadius, 6)));
float3 forcePressure = float3(0,0,0);
float3 forceViscosity = float3(0,0,0);
#if GPU_SIMULATION
//Calculate the density of this particle based on the proximity of the other particles.
for (int i = 0; i < numParticles; ++i)
{
    bool myBool; //Temporary bool used to catch valid/invalid results for direct reads.
    float OtherMass;
    DirectReads.GetFloatByIndex<Attribute="Mass">(i, myBool, OtherMass);
    float3 OtherPosition;
    DirectReads.GetVectorByIndex<Attribute="Position">(i, myBool, OtherPosition);

    // Calculate the distance between the other particle and this one.
    float distanceBetween = distance(OtherPosition, OutPosition);
    if (distanceBetween < smoothingRadius)
    {
        Density += OtherMass * Poly6_constant * pow(smoothingRadius - distanceBetween, 3);
    }
}

//Avoid negative pressure by clamping density to the reference value.
Density = max(restDensity, Density);

//Calculate pressure.
Pressure = gas * (Density - restDensity);

//Calculate the forces.
for (int i = 0; i < numParticles; ++i)
{
    if (i != InstanceId) //Only calculate the pressure-based force and Laplacian smoothing function if the other particle is not the current particle.
    {
        bool myBool; //Temporary bool used to catch valid/invalid results for direct reads.
        float OtherMass;
        DirectReads.GetFloatByIndex<Attribute="Mass">(i, myBool, OtherMass);
        float OtherDensity;
        DirectReads.GetFloatByIndex<Attribute="Density">(i, myBool, OtherDensity);
        float3 OtherPosition;
        DirectReads.GetVectorByIndex<Attribute="Position">(i, myBool, OtherPosition);
        float3 OtherVelocity;
        DirectReads.GetVectorByIndex<Attribute="Velocity">(i, myBool, OtherVelocity);

        float3 direction = OutPosition - OtherPosition;
        float3 normalisedVector = normalize(direction);
        float distanceBetween = distance(OtherPosition, OutPosition);

        if (distanceBetween > 0 && distanceBetween < smoothingRadius) //distanceBetween must be > 0 to avoid a div0 error.
        {
            float OtherPressure = gas * (OtherDensity - restDensity);

            //Calculate the pressure-based force.
            forcePressure += -1 * Mass * normalisedVector * (Pressure + OtherPressure) / (2 * Density * OtherDensity) * Spiky_constant * pow(smoothingRadius - distanceBetween, 2);

            //Viscosity-based force computation with Laplacian smoothing function (W).
            const float W = -(pow(distanceBetween, 3) / (2 * pow(smoothingRadius, 3))) + (pow(distanceBetween, 2) / pow(smoothingRadius, 2)) + (smoothingRadius / (2 * distanceBetween)) - 1;
            forceViscosity += viscosity * (OtherMass / Mass) * (1 / OtherDensity) * (OtherVelocity - Velocity) * W * normalisedVector;
            //forceViscosity += viscosity * (OtherMass / Mass) * (1 / OtherDensity) * (OtherVelocity - Velocity) * (45 / (pi * pow(smoothingRadius, 6))) * (smoothingRadius - distanceBetween);
        }
    }
}

OutVelocity += DeltaTime * ((forcePressure + forceViscosity) / Density);
OutPosition += DeltaTime * OutVelocity;
#endif
This code does two loops through all the other particles in the system: one to calculate the density and pressure, and one to calculate the forces. Then it outputs the velocity and position, just like the article I linked to above and some other examples I've seen. Yet it simply doesn't behave as shown in those resources.
I haven't applied any grid-based optimisation. To do this I'll just apply the grid optimisation used in the PBD example in UE's Content Examples project, but for now it's an added complication that isn't really needed; it runs fine with thousands of particles even without it.
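(For anyone curious, a typical cell hash for that kind of grid looks something like the sketch below. This is a generic illustration of the technique, not the Content Examples code; the function names are mine and the prime constants are the usual ones from the spatial-hashing literature.)

// Map a position to an integer cell of side smoothingRadius; each
// particle then only needs to test the 27 neighbouring cells.
int3 CellForPosition(float3 p, float smoothingRadius)
{
    return (int3)floor(p / smoothingRadius);
}

uint HashCell(int3 cell, uint tableSize)
{
    // Large primes scatter neighbouring cells across the hash table.
    uint h = (uint)(cell.x * 73856093) ^ (uint)(cell.y * 19349663) ^ (uint)(cell.z * 83492791);
    return h % tableSize;
}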
I've looked at a few resources (articles, videos and academic research papers) and I've spent a fortnight experimenting, including trial and error on the values at the top of the code. I'm obviously missing something crucial. What can it be? I'm so frustrated now that any help would be much appreciated.
For a masking object, I am trying to scale each triangle individually. If I scale the object as a whole, the points further away from the center will get moved too far and I just want the object to have 'more body'. Since I use it as a mask, it doesn't matter if the triangles end up overlapping.
Although looking at this might hurt someone deep inside, this is actually what I'm trying to achieve:
I thought this was best done in a shader and I thought this could be achieved in the geometry shader since I need to know the center of the triangle. I came up with the code below, but things keep acting... strange.
float3 center = (IN[0].vertex.xyz + IN[1].vertex.xyz + IN[2].vertex.xyz) / 3;
for (int i = 0; i < 3; i++)
{
    float3 distance = IN[i].vertex.xyz - center.xyz;
    float3 normal = normalize(distance);
    distance = abs(distance);
    float scale = 1;
    float3 pos = IN[i].vertex.xyz + (distance * normal.xyz * (scale - 1));
    o.pos.xyz = pos.xyz;
    o.pos.w = IN[i].vertex.w;
    tristream.Append(o);
}
My plan was to calculate the center of the triangle and then calculate the distance between the center and each point. I would then take the normal of this distance to know in which direction I would have to move the vertex, and change the position by adding distance * normal(direction) * scale to the original position of the vertex. Yet the triangles seem to change when you rotate the camera, so I doubt this is right. Does anyone know what could be wrong?
(Just some notes:
the mesh is basically 2D, only changing across the x- and z-axis (if this matters).
I did abs(distance) since I thought it would cancel out the normal if both were negative. I'm not sure if this is necessary.
I did scale - 1 since a scale of 1 should result in the mesh staying the same; a scale of 2 should result in all triangles being twice as big.
I have no clue what to do with the w value, but keeping the old value at least doesn't screw up that much. Perhaps the problem lies here? I thought this value should always be 1 for matrix multiplications.
)
OK, so besides using a way too complex formula to calculate the new position of each point (there's a better way at https://math.stackexchange.com/questions/1563249/how-do-i-scale-a-triangle-given-its-cartesian-cooordinates), I found out that it indeed had to do with the w value. Since I always thought this was mainly a helper variable, it would be awesome if someone could explain how that value screwed things over. Anyway, including that value in the equation, it works fine:
float4 center = (IN[0].vertex.xyzw + IN[1].vertex.xyzw + IN[2].vertex.xyzw) / 3;
for (int i = 0; i < 3; i++)
{
    float scale = 2;
    float4 pos = (IN[i].vertex.xyzw * scale) - center.xyzw;
    o.pos.xyzw = pos.xyzw;
    tristream.Append(o);
}
This works just fine :)
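For reference, a minimal sketch of the general formula from the math.stackexchange link above, which works for any scale factor (for scale = 2 it reduces to the snippet above, since center + 2 * (v - center) = 2 * v - center):

float4 center = (IN[0].vertex + IN[1].vertex + IN[2].vertex) / 3;
for (int i = 0; i < 3; i++)
{
    float scale = 2;
    // Push each vertex away from the centre: c + s * (v - c).
    o.pos = center + (IN[i].vertex - center) * scale;
    tristream.Append(o);
}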
I am trying to render a 2D triangle using user touches. So, I will let a user touch three points on the screen and those points will be used as vertices of a triangle.
You're already aware that you need to return clip-space coordinates (technically not normalized device coordinates) from your vertex shader. The question is how and where to go from UIKit coordinates to Metal's clip-space coordinates.
Let's start by defining these different spaces. Note that below, I actually am using NDC coordinates for the sake of simplicity, since in this particular case, we aren't introducing perspective by returning vertex positions with w != 1. (Here I'm referring to the w coordinate of the clip-space position; in the following discussion, w always refers to the view width).
We pass the vertices into our vertex shader in whatever space is convenient (this is often called model space). Since we're working in 2D, we don't need the usual series of transformations to world space, then eye space. Essentially, the coordinates of the UIKit view are our model space, world space, and eye space all in one.
We need some kind of orthographic projection matrix to move from this space into clip space. If we strip out the unnecessary parts related to the z axis and assume that our view bounds' origin is (0, 0), we come up with the following transformation, which maps x from [0, w] to [-1, 1] and y from [0, h] to [1, -1] (the y flip is needed because UIKit's y axis points down while clip space's points up):

clip.x = 2 * x / w - 1
clip.y = 1 - 2 * y / h
We could pass this matrix into our vertex shader, or we could do the transformation prior to sending the vertices to the GPU. Considering how little data is involved, it really doesn't matter at this point. In fact, using a matrix at all is a little wasteful, since we can just transform each coordinate with a couple of multiplies and an add. Here's how that might look in a Metal vertex function:
float2 inverseViewSize(1.0f / width, 1.0f / height); // passed in a buffer
float clipX = (2.0f * in.position.x * inverseViewSize.x) - 1.0f;
float clipY = (2.0f * -in.position.y * inverseViewSize.y) + 1.0f;
float4 clipPosition(clipX, clipY, 0.0f, 1.0f);
Just to verify that we get the correct results from this transformation, let's plug in the upper-left and lower-right points of our view to ensure they wind up at the extremities of clip space (by linearity, if these points transform correctly, so will all others): (0, 0) maps to (-1, 1), the upper-left of clip space, and (w, h) maps to (1, -1), the lower-right.
These points are correct, so we're done. If you're concerned about the apparent distortion introduced by this transformation, note that it is exactly canceled by the viewport transformation that happens prior to rasterization.
Here is a function that will convert UIKit view-based coordinates to Metal's clip-space coordinates (based on warrenm's answer). It can be added directly to a shader file and called from the vertex shader function.
float2 convert_to_metal_coordinates(float2 point, float2 viewSize) {
    float2 inverseViewSize = 1 / viewSize;
    float clipX = (2.0f * point.x * inverseViewSize.x) - 1.0f;
    float clipY = (2.0f * -point.y * inverseViewSize.y) + 1.0f;
    return float2(clipX, clipY);
}
You'll want to pass the viewSize (UIKit's bounds) to Metal somehow, say via a buffer parameter on the vertex function.
I translated Thompsonmachine's code to Swift, using SIMD values, which is what I need to pass to shaders.
func convertToMetalCoordinates(point: CGPoint, viewSize: CGSize) -> simd_float2 {
    let inverseViewSize = CGSize(width: 1.0 / viewSize.width, height: 1.0 / viewSize.height)
    let clipX = Float((2.0 * point.x * inverseViewSize.width) - 1.0)
    let clipY = Float((2.0 * -point.y * inverseViewSize.height) + 1.0)
    return simd_float2(clipX, clipY)
}
In a surface shader, given the world's up axis (and the others too), a world space position and a normal in world space, how can we rotate the worldspace position into the space of the normal?
That is, given an up vector and a non-orthogonal target up vector, how can we transform the position by rotating its up vector?
I need this so I can get the vertex position only affected by the object's rotation matrix, which I don't have access to.
Here's a graphical visualization of what I want to do:
Up is the world up vector
Target is the world space normal
Pos is arbitrary
The diagram is two-dimensional, but I need to solve this for 3D space.
Looks like you're trying to rotate pos by the same rotation that would transform up to new_up.
Using the rotation matrix found here, we can rotate pos using the following code. This will work either in the surface function or a supplementary vertex function, depending on your application:
// Our 3 vectors
float3 pos;
float3 new_up;
float3 up = float3(0,1,0);
// Build the rotation matrix using notation from the link above
float3 v = cross(up, new_up);
float s = length(v); // Sine of the angle
float c = dot(up, new_up); // Cosine of the angle
float3x3 VX = float3x3(
    0, -1 * v.z, v.y,
    v.z, 0, -1 * v.x,
    -1 * v.y, v.x, 0
); // This is the skew-symmetric cross-product matrix of v
float3x3 I = float3x3(
    1, 0, 0,
    0, 1, 0,
    0, 0, 1
); // The identity matrix
float3x3 R = I + VX + mul(VX, VX) * (1 - c) / pow(s, 2); // The rotation matrix! YAY!
// Finally we rotate
float3 new_pos = mul(R, pos);
This is assuming that new_up is normalized.
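One edge case the code above doesn't handle: when up and new_up are parallel, s is zero and (1 - c) / pow(s, 2) divides by zero. A minimal guard, continuing the snippet above, assuming you're fine treating those cases as the identity (aligned) or a fixed flip (opposed):

// s == 0 means up and new_up are parallel: either already aligned
// (c > 0, rotation is the identity) or exactly opposed (c < 0). The
// flip below is a 180-degree rotation about the x axis, which assumes
// up = float3(0, 1, 0) as in the snippet above.
if (s < 1e-6f)
{
    if (c > 0)
        R = I;
    else
        R = float3x3(1, 0, 0,
                     0, -1, 0,
                     0, 0, -1);
}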
If the "target up normal" is a constant, the calculation of R could (and should) only happen once per frame. I'd recommend doing it on the CPU side and passing it into the shader as a variable. Calculating it for every vertex/fragment is costly, consider what it is you actually need.
If your pos is a float4, just do the above with the first three elements; the fourth element can remain unchanged (it doesn't really mean anything in this context anyway).
I'm away from a machine where I can run shader code, so if I made any syntactical mistakes in the above, please forgive me.
Not tested, but you should be able to input a starting point and an axis. Then all you do is change procession, which is a normalized (0-1) float along the circumference, and your point will update accordingly.
using UnityEngine;
using System.Collections;

public class Follower : MonoBehaviour {
    public Vector3 point;
    public Vector3 origin = Vector3.zero;
    public Vector3 axis = Vector3.forward;
    float distance;
    Vector3 direction;
    public float procession = 0f; // < normalized

    void Update() {
        Vector3 offset = point - origin;
        distance = offset.magnitude;
        direction = offset.normalized;
        // Angle in radians = arc length / radius.
        float circumference = 2 * Mathf.PI * distance;
        float arcLength = (procession % 1f) * circumference;
        float angle = arcLength / distance;
        direction = Quaternion.AngleAxis(Mathf.Rad2Deg * angle, axis) * direction;
        Ray ray = new Ray(origin, direction);
        point = ray.GetPoint(distance);
    }
}