GLSL: smooth (interpolated) double-precision input

I'd like to do this:
layout(location = 0) in dvec2 c;
But apparently I can't:
$ glslc mandlebrot.frag
mandlebrot.frag:7: error: 'double' : must be qualified as flat in
However, I require this input be interpolated, not flat. Is there a way to do this?

Nope:
Fragment shader inputs that are, or contain, integral or double-precision floating-point types must be qualified with the interpolation qualifier flat.
So you're going to have to make them floats.

While Nicol's answer appears to be technically correct, there is a work-around. A float value has more than enough precision to target a pixel on a screen; it just may not have enough to target a position within the "world" (in this case, a fractal).
The solution then is to use a uniform buffer to pass world coordinates for some fixed point relative to the screen (in this case, the centre), then compute pixel coordinates from that:
layout(location = 0) in vec2 cf;   // float input replacing the dvec2; smooth interpolation is allowed

layout(set = 0, binding = 1) uniform Locals {
    dvec2 centre;   // world coordinates of the screen centre
    dvec2 scale;    // world units per pixel
};

void main() {
    dvec2 c = centre + dvec2(cf) * scale;
    ...
}
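For completeness, here is a sketch of how the host side might fill that uniform block. The struct mirrors the shader's Locals above, but the actual values (a view centred on (-0.75, 0) with a 3.0-unit span across a 1920-pixel-wide image) are made up for illustration:

// CPU-side mirror of the Locals uniform block. Under std140 a dvec2 is
// 16-byte aligned, which two consecutive pairs of doubles satisfy here.
struct Locals {
    double centre[2];  // world-space point that cf == (0, 0) maps to
    double scale[2];   // world units per pixel along each axis
};

Locals locals = {
    { -0.75, 0.0 },                  // hypothetical view centre in the fractal plane
    { 3.0 / 1920.0, 3.0 / 1920.0 },  // hypothetical zoom: 3.0 units over 1920 pixels
};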

Related

Do two floats in a compute shader being added or subtracted not give the same value 100% of the time?

I have a function that I call to generate some randomness in my HLSL compute shader code:
float rand3dTo1d(float3 value, float3 dotDir = float3(12.9898, 78.233, 37.719)){
    // make value smaller to avoid artefacts
    float3 smallValue = sin(value);
    // get scalar value from 3d vector
    float random = dot(smallValue, dotDir);
    // make value more random by making it bigger and then taking the fractional part
    random = frac(sin(random) * 43758.5453);
    return random;
}
If I pass in an incoming vertex's location, all is fine, but problems appear if I instead pass in the center point of three vertices, computed with this function:
float3 GetTriangleCenter3d(float3 a, float3 b, float3 c) {
    return (a + b + c) / 3.0;
}
Then occasionally SOME of my points are not the same from frame to frame (shown by the color I paint the triangles with using this code), and I get flickering of color.
float3 color = lerp(_ColorFrom, _ColorTo, rand1d);
I am at a total loss. I was able to at least get consistent results by using the thread id as the seed for the randomness, but not being able to use the center point of the triangle is really weird to me, and I have no idea what I am doing wrong or what I am missing. Any help would be great.
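For illustration: floating-point addition is not associative, so if the GPU happens to sum the same three vertex positions in a different order from frame to frame, the last bit of the center (and hence the hash fed into rand3dTo1d) can change. A minimal C++ sketch of the effect, with made-up values:

#include <cstdio>

int main() {
    // The same three values summed in two different orders.
    float a = 1e8f, b = -1e8f, c = 0.5f;
    float left  = (a + b) + c;  // b cancels a first, so c survives: 0.5
    float right = a + (b + c);  // c is lost when rounded into b: 0.0
    printf("%g vs %g\n", left, right);
    return 0;
}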

HLSL: Unity's Vector3.RotateTowards(...)

I need to rotate a direction vector towards another with a maximum angle in a compute shader, just like the Vector3.RotateTowards(from, to, maxAngle, 0) function does. This needs to happen inside the compute shader, since I cannot send the needed values back and forth between the CPU and the GPU for performance reasons. Any suggestions on how to implement this?
This is adapted from a combination of this post on the Unity forums by vc1001 and this shadertoy entry by demofox. I haven't tested this and it has been a while since I've done HLSL/cg coding, so please let me know if there are bugs, especially syntax errors.
float3 slerp(float3 current, float3 target, float maxAngle)
{
    // Dot product - the cosine of the angle between the 2 vectors.
    // (Named cosAngle so it doesn't shadow the dot() intrinsic.)
    float cosAngle = dot(current, target);
    // Clamp it to be in the valid input range of acos().
    // This may be unnecessary, but floating point
    // precision can be a fickle mistress.
    cosAngle = clamp(cosAngle, -1.0f, 1.0f);
    // acos(cosAngle) returns the angle between current and target;
    // advance by at most maxAngle radians of it.
    float delta = acos(cosAngle);
    float theta = min(delta, maxAngle);
    // Orthonormal basis: the unit component of target perpendicular to
    // current. Degenerate if the two vectors are (anti-)parallel.
    float3 relativeVec = normalize(target - current * cosAngle);
    return (current * cos(theta)) + (relativeVec * sin(theta));
}
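As a quick sanity check, here is a CPU-side C++ transliteration of the function above, with minimal stand-ins for the HLSL types and intrinsics (a sketch with made-up test values, not part of the original posts):

#include <cmath>
#include <cstdio>

struct float3 { float x, y, z; };

static float3 operator*(float3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static float3 operator+(float3 a, float3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static float3 operator-(float3 a, float3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot3(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float3 normalize3(float3 v) { return v * (1.0f / std::sqrt(dot3(v, v))); }

// Same logic as the HLSL slerp above.
static float3 rotateTowards(float3 current, float3 target, float maxAngle) {
    float cosAngle = std::fmax(-1.0f, std::fmin(1.0f, dot3(current, target)));
    float delta = std::acos(cosAngle);
    float theta = std::fmin(delta, maxAngle);
    float3 relativeVec = normalize3(target - current * cosAngle);
    return (current * std::cos(theta)) + (relativeVec * std::sin(theta));
}

int main() {
    float3 x = {1, 0, 0}, y = {0, 1, 0};
    // Rotating +X towards +Y with a 90-degree budget should land exactly on +Y.
    float3 r = rotateTowards(x, y, 3.14159265f / 2.0f);
    printf("%f %f %f\n", r.x, r.y, r.z);  // expect roughly 0 1 0
    return 0;
}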

Convert screen coordinates to Metal's Normalized Device Coordinates

I am trying to render a 2D triangle using user touches. So, I will let a user touch three points on the screen and those points will be used as vertices of a triangle.
You're already aware that you need to return clip-space coordinates (technically not normalized device coordinates) from your vertex shader. The question is how and where to go from UIKit coordinates to Metal's clip-space coordinates.
Let's start by defining these different spaces. Note that below, I actually am using NDC coordinates for the sake of simplicity, since in this particular case, we aren't introducing perspective by returning vertex positions with w != 1. (Here I'm referring to the w coordinate of the clip-space position; in the following discussion, w always refers to the view width).
We pass the vertices into our vertex shader in whatever space is convenient (this is often called model space). Since we're working in 2D, we don't need the usual series of transformations to world space, then eye space. Essentially, the coordinates of the UIKit view are our model space, world space, and eye space all in one.
We need some kind of orthographic projection matrix to move from this space into clip space. If we strip out the unnecessary parts related to the z axis and assume that our view bounds' origin is (0, 0), we come up with the following transformation:
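x_c = 2 * x / w - 1
y_c = 1 - 2 * y / h

or equivalently, applied to the homogeneous point (x, y, 1):

| 2/w    0   -1 |
|  0   -2/h   1 |
|  0     0    1 |

where x and y are UIKit view coordinates with the origin at the top-left, and (x_c, y_c) is the resulting clip-space position.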
We could pass this matrix into our vertex shader, or we could do the transformation prior to sending the vertices to the GPU. Considering how little data is involved, it really doesn't matter at this point. In fact, using a matrix at all is a little wasteful, since we can just transform each coordinate with a couple of multiplies and an add. Here's how that might look in a Metal vertex function:
float2 inverseViewSize = float2(1.0f / width, 1.0f / height); // passed in a buffer
float clipX = (2.0f * in.position.x * inverseViewSize.x) - 1.0f;
float clipY = (2.0f * -in.position.y * inverseViewSize.y) + 1.0f;
float4 clipPosition = float4(clipX, clipY, 0.0f, 1.0f);
Just to verify that we get the correct results from this transformation, let's plug in the upper-left and lower-right points of our view to ensure they wind up at the extremities of clip space (by linearity, if these points transform correctly, so will all others):
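(0, 0) -> (2 * 0 / w - 1, 1 - 2 * 0 / h) = (-1, 1), the upper-left corner of clip space
(w, h) -> (2 * w / w - 1, 1 - 2 * h / h) = (1, -1), the lower-right corner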
These points appear correct, so we're done. If you're concerned about the apparent distortion introduced by this transformation, note that it is exactly canceled by the viewport transformation that happens prior to rasterization.
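(Concretely, with a viewport covering the whole drawable, the rasterizer maps NDC back to pixels as x_p = (x_c + 1) * w / 2 and y_p = (1 - y_c) * h / 2, which is exactly the inverse of the mapping above.)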
Here is a function that will convert UIKit view-based coordinates to Metal's clip-space coordinates (based on warrenm's answer). It can be added directly to a shader file and called from the vertex shader function.
float2 convert_to_metal_coordinates(float2 point, float2 viewSize) {
    float2 inverseViewSize = 1.0f / viewSize;
    float clipX = (2.0f * point.x * inverseViewSize.x) - 1.0f;
    float clipY = (2.0f * -point.y * inverseViewSize.y) + 1.0f;
    return float2(clipX, clipY);
}
You'll want to pass the viewSize (UIKit's bounds) to Metal somehow, say via a buffer parameter on the vertex function.
I translated Thompsonmachine's code to Swift, using SIMD values, which is what I need to pass to shaders.
import simd
import UIKit

func convertToMetalCoordinates(point: CGPoint, viewSize: CGSize) -> simd_float2 {
    let inverseViewSize = CGSize(width: 1.0 / viewSize.width, height: 1.0 / viewSize.height)
    let clipX = Float((2.0 * point.x * inverseViewSize.width) - 1.0)
    let clipY = Float((2.0 * -point.y * inverseViewSize.height) + 1.0)
    return simd_float2(clipX, clipY)
}

Calculating coordinates from reference points

I'm working on a game in Unity where you can walk around in a city that also exists in real life.
In the game you should be able to enter real-world coordinates, or use your phone's GPS, and you'll be transported to the in-game position of those coordinates.
For this, I'd need to somehow convert the game coordinates to latitude and longitude coordinates. I have the coordinates of some specific buildings, and I figured I might be able to write a script to determine the game coordinates from those reference points.
I've been searching for a bit on Google, and though I have probably come across the right solutions occasionally, I've been unable to understand them well enough to use them in my code.
If someone has experience with this, or knows how I could do it, I'd appreciate it if you could help me understand it :)
Edit: Forgot to mention that previous programmers have already placed the world at some position and rotation they felt like using, which unfortunately I can't simply change without breaking things.
Tim Falken
This is simple linear math. The main issue you'll come across is that your game coordinate system will probably be reversed along one or more axes; you'll probably need to reverse the direction along the latitude (Y) axis of your app. Aside from that it is just a simple conversion of scales. Since you say this is the map of a real place, you should be able to easily figure out the min/max lon/lat which your map covers. Take the absolute value of the difference between these two values and divide it by the width/height of your map in each direction. This gives the change in longitude/latitude per map unit. Store this value and it should be easy to convert both ways between the two units. Make functions that abstract the details and you should have no problems calculating this either way.
I assume that you have been able to retrieve the GPS coordinates OK.
EDIT:
By simple linear math I mean something like this (C++-style code, completely untested; in a real-world example the constants would all be member variables instead):
#include <cmath>

#define MAP_WIDTH  1000.0f
#define MAP_HEIGHT 1000.0f
#define MIN_LON    25.333f
#define MIN_LAT    20.333f
#define MAX_LON    27.25f
#define MAX_LAT    20.50f

class CoordConversion {
public:
    // Map units per degree along each axis.
    static float XScale() { return MAP_WIDTH / std::fabs(MAX_LON - MIN_LON); }
    static float YScale() { return MAP_HEIGHT / std::fabs(MAX_LAT - MIN_LAT); }
    // +1 if the axis runs the same way as the map, -1 if it is reversed.
    static int LonDir() { return MIN_LON < MAX_LON ? 1 : -1; }
    static int LatDir() { return MIN_LAT < MAX_LAT ? 1 : -1; }

    static float GetXFromLon(float lon) {
        float origin = LonDir() > 0 ? MIN_LON : MAX_LON;
        return (lon - origin) * LonDir() * XScale();
    }
    static float GetYFromLat(float lat) {
        float origin = LatDir() > 0 ? MIN_LAT : MAX_LAT;
        return (lat - origin) * LatDir() * YScale();
    }
    static float GetLonFromX(float x) {
        float origin = LonDir() > 0 ? MIN_LON : MAX_LON;
        return origin + LonDir() * x / XScale();
    }
    static float GetLatFromY(float y) {
        float origin = LatDir() > 0 ? MIN_LAT : MAX_LAT;
        return origin + LatDir() * y / YScale();
    }
};
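A quick round trip sanity-checks the conversion (26.0 is a hypothetical longitude inside the map's range):

float x   = CoordConversion::GetXFromLon(26.0f);  // map units from the map's origin edge
float lon = CoordConversion::GetLonFromX(x);      // back to approximately 26.0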
EDIT2: In the case that the map is rotated you'll want to use the minimum and maximum lon/lat actually shown on the map. You'll also need to rotate each point after the conversion. I'm not even going to attempt to get this right off the top of my head, but I can give you the code you'll need:
#include <cmath>

struct POINT { float x, y; };

POINT rotate_point(float cx, float cy, float angle, POINT p)
{
    float s = std::sin(angle);
    float c = std::cos(angle);
    // translate point back to origin:
    p.x -= cx;
    p.y -= cy;
    // rotate point
    float xnew = p.x * c - p.y * s;
    float ynew = p.x * s + p.y * c;
    // translate point back:
    p.x = xnew + cx;
    p.y = ynew + cy;
    return p;
}
This will need to be done when returning a game point, and it also needs to be done in reverse (rotate by -angle) before using a game point to convert back to a lat/lon point.
EDIT3: More help on getting the coordinates of your map. First find the city or whatever it is on Google Maps. Then you can right-click the northernmost point on your map and read off the maximum latitude. Repeat this for all four cardinal directions and you should be set.

How does CGPoint Variable work behind the scenes?

I'm not sure how the CGPoint variable that I have created knows how to handle the specific if statement.
For example, I have CGPoint myVelocity; then I have an arbitrary number float maximumVelocity = 100;
Then I execute the following code
if (myVelocity.x > maximumVelocity) {
    myVelocity.x = maximumVelocity;
} else if (myVelocity.x < -maximumVelocity) {
    myVelocity.x = -maximumVelocity;
}
From what I understand, if the first condition is met, which is myVelocity.x > maximumVelocity, then myVelocity.x is set to the maximum, which is the number 100. This is so that my variable never exceeds the arbitrary number. The other condition is set up so that it never goes past -100 in the negative direction.
At least that's what I think.
Now here is the important part of this post: I'm confused about how the myVelocity variable knows what that arbitrary number is. For example, is it 10? Is it 25 the next second, or when does it reach 100?
I should also point out that the following code is stored into myVelocity before the if statement executes:
float deceleration = 0.4f;
float sensitivity = 6.0f;
float maximumVelocity = 100;
myVelocity.x = myVelocity.x * deceleration + acceleration.x * sensitivity;
I have recently inquired about code similar to the latter part of my question, but now I'm curious about the former.
A CGPoint is just a struct with "x" and "y" components. You can think of it as an easier way to pass around a pair of floats.
So your code above would be equivalent to:
float x;
// other stuff
if (x > maximumVelocity) {
    x = maximumVelocity;
} else if (x < -maximumVelocity) {
    x = -maximumVelocity;
}
Now pair that with another variable by using a struct:
struct CGPoint {
    float x;
    float y;
};
and to access that "x" variable, to either set or read from it, use ".x", like you did in your code sample.
(P.S. CGPoints actually are a pair of CGFloats for reasons that are irrelevant to this post)
A CGPoint represents a point in a two-dimensional space. Storing a velocity in a CGPoint means that you want a velocity vector represented by 2 dimensions, x and y.
In your case I see that you only use 1 dimension. I didn't quite get what you're trying to achieve, but if your velocity has no direction you can just use a float to store its value.
If you need a 2-dimensional velocity you have to check maximumVelocity against the length of the vector. In your example you're checking only the x dimension, but if the velocity is x=50, y=20000 it is moving pretty fast along the y axis.
ccpLength(v) lets you check the length of a CGPoint, so you can compare it with a float to see whether the actual velocity is faster than your maximum. In that case you need to rescale your vector to actually match your maximumVelocity, which you can do with
ccpMult(v, maximumVelocity / ccpLength(v))
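For readers outside cocos2d, the same clamp is easy to write directly; a minimal C++-style sketch (the Vec2 struct and clampLength name are made up):

#include <cmath>

struct Vec2 { float x, y; };

// Scale v down so its length never exceeds maxLen - the same idea as
// ccpMult(v, maximumVelocity / ccpLength(v)), applied only when needed.
Vec2 clampLength(Vec2 v, float maxLen) {
    float len = std::sqrt(v.x * v.x + v.y * v.y);
    if (len <= maxLen) return v;   // already within the limit
    float s = maxLen / len;        // shrink factor
    return { v.x * s, v.y * s };
}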