I am trying to render a 2D triangle from user touches. I will let the user touch three points on the screen, and those points will be used as the vertices of a triangle.
You're already aware that you need to return clip-space coordinates (technically not normalized device coordinates) from your vertex shader. The question is how and where to go from UIKit coordinates to Metal's clip-space coordinates.
Let's start by defining these different spaces. Note that below, I'm actually using NDC coordinates for the sake of simplicity, since in this particular case we aren't introducing perspective by returning vertex positions with w != 1. (Here I'm referring to the w coordinate of the clip-space position; in the following discussion, w always refers to the view's width.)
We pass the vertices into our vertex shader in whatever space is convenient (this is often called model space). Since we're working in 2D, we don't need the usual series of transformations to world space, then eye space. Essentially, the coordinates of the UIKit view are our model space, world space, and eye space all in one.
We need some kind of orthographic projection matrix to move from this space into clip space. If we strip out the unnecessary parts related to the z axis and assume that our view bounds' origin is (0, 0), we come up with the following transformation:
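With w and h denoting the view's width and height, the matrix works out to:

$$\begin{bmatrix} \frac{2}{w} & 0 & 0 & -1 \\ 0 & -\frac{2}{h} & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$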
We could pass this matrix into our vertex shader, or we could do the transformation prior to sending the vertices to the GPU. Considering how little data is involved, it really doesn't matter at this point. In fact, using a matrix at all is a little wasteful, since we can just transform each coordinate with a couple of multiplies and an add. Here's how that might look in a Metal vertex function:
float2 inverseViewSize = float2(1.0f / width, 1.0f / height); // passed in via a buffer
float clipX = (2.0f * in.position.x * inverseViewSize.x) - 1.0f;
float clipY = (2.0f * -in.position.y * inverseViewSize.y) + 1.0f;
float4 clipPosition = float4(clipX, clipY, 0.0f, 1.0f);
Just to verify that we get the correct results from this transformation, let's plug in the upper-left and lower-right points of our view to ensure they wind up at the extremities of clip space (by linearity, if these points transform correctly, so will all others):
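Working through the arithmetic (clip-space x and y run from -1 to 1, with y pointing up):

$$(0,\,0) \;\mapsto\; \left(\frac{2\cdot 0}{w} - 1,\; -\frac{2\cdot 0}{h} + 1\right) = (-1,\; 1)$$
$$(w,\,h) \;\mapsto\; \left(\frac{2w}{w} - 1,\; -\frac{2h}{h} + 1\right) = (1,\; -1)$$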
These points appear correct, so we're done. If you're concerned about the apparent distortion introduced by this transformation, note that it is exactly canceled by the viewport transformation that happens prior to rasterization.
Here is a function that will convert UIKit view-based coordinates to Metal's clip-space coordinates (based on warrenm's answer). It can be added directly to a shader file and called from the vertex shader function.
float2 convert_to_metal_coordinates(float2 point, float2 viewSize) {
    float2 inverseViewSize = 1.0f / viewSize;
    float clipX = (2.0f * point.x * inverseViewSize.x) - 1.0f;
    float clipY = (2.0f * -point.y * inverseViewSize.y) + 1.0f;
    return float2(clipX, clipY);
}
You'll want to pass the viewSize (UIKit's bounds) to Metal somehow, say via a buffer parameter on the vertex function.
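For instance, from the host side that might look like the following (a minimal sketch; renderEncoder and buffer index 1 are assumptions that must match the vertex function's [[buffer(n)]] binding):

// Pass the view's size (in points) to the vertex function each frame.
var viewSize = simd_float2(Float(view.bounds.width), Float(view.bounds.height))
renderEncoder.setVertexBytes(&viewSize,
                             length: MemoryLayout<simd_float2>.stride,
                             index: 1)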
Here is Thompsonmachine's code translated to Swift, using SIMD values, which is what I need to pass to shaders.
func convertToMetalCoordinates(point: CGPoint, viewSize: CGSize) -> simd_float2 {
    let inverseViewSize = CGSize(width: 1.0 / viewSize.width, height: 1.0 / viewSize.height)
    let clipX = Float((2.0 * point.x * inverseViewSize.width) - 1.0)
    let clipY = Float((2.0 * -point.y * inverseViewSize.height) + 1.0)
    return simd_float2(clipX, clipY)
}
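For example, it could be called from a touch handler like this (a sketch; metalView and the vertices array are assumptions):

// Collect each touch as a clip-space vertex for the triangle.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    let location = touch.location(in: metalView) // UIKit view coordinates, in points
    vertices.append(convertToMetalCoordinates(point: location,
                                              viewSize: metalView.bounds.size))
}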
I am trying to reconstruct the point where the ray from the camera through the current pixel intersects the near plane.
I need the coordinates of the intersection point in the local coordinates of the object being rendered.
This is my current implementation:
float4 nearClipLS = mul(inv_modelViewProjectionMatrix, float4(i.vertex.x / i.vertex.w, i.vertex.y / i.vertex.w, -1.0, 1.0));
nearClipLS /= nearClipLS.w;
There's got to be a more efficient way to do it, but the following should, in theory, work.
Find the offset vector from the camera to the pixel:
float3 cam2pos = v.worldPos - _WorldSpaceCameraPos;
Get the camera's forward vector:
float3 camFwd = UNITY_MATRIX_IT_MV[2].xyz;
Get the dot product of the two to determine how far the point projects in the direction of the camera's forward axis:
float projDist = dot(cam2pos, camFwd);
Then, you should be able to use that data to re-project the point onto the near clip plane:
float nearClipZ = _ProjectionParams.y;
float3 nearPos = _WorldSpaceCameraPos + (cam2pos * (nearClipZ / projDist));
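The last step is just similar triangles: the camera-to-point vector is rescaled so that its component along the camera's forward axis equals the near-plane distance. Writing C for _WorldSpaceCameraPos, P for v.worldPos, f̂ for camFwd, and z_near for nearClipZ:

$$\mathrm{nearPos} = C + (P - C)\,\frac{z_{\mathrm{near}}}{(P - C)\cdot\hat{f}}$$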
This solution doesn't address edge cases (such as when the point is level with or behind the camera, where projDist would be zero or negative), so you may want to check those once you get it working.
This is a question for Unity people or Math geniuses.
I'm making a game where I have a circle object that I can move, but I don't want it to intersect or go into other (static) circles in the world (Physics system isn't good enough in Unity to simply use that, btw).
It's in 3D world, but the circles only ever move on 2 axis.
I was able to get this working perfectly if the circle hits only 1 other circle, but not 2 or more.
FYI: All circles are the same size.
Here's my working formula for 1 circle to move it to the edge of the colliding circle if intersecting:
newPosition = PositionOfStaticCircleThatWasJustIntersected + ((positionCircleWasMovedTo - PositionOfStaticCircleThatWasJustIntersected).normalized * circleSize);
But I can't figure out a formula if the moving circle hits 2 (or more) static circles at the same time.
One of the things that confuses me the most is the direction issue, depending on how all the circles are positioned and what direction the moving circle is coming from.
Here's an example image of what I'm trying to do.
Since we're operating in a 2D space, let's approach this with some geometry. Taking a close look at your desired outcome, a particular shape becomes apparent:
There's a triangle here! And since all circles are the same radius, we know even more: this is an isosceles triangle, where two sides are the same length. Those two equal sides run from the centers of the static circles to the new center of the moved circle; each has length 2a, twice the common radius a, since the circles just touch. The base is the segment of length d between the two static centers, and h is the height of the triangle measured from the midpoint of the base.
We know what d is, since it's the distance between the two circles being collided with. And we know what a is, since it's the radius of all the circles. With that information, we can figure out where to place the moved circle: d/2 along the line between the two static circles (since the point is equidistant between them), and h away from that line.
Calculating the height h is straightforward, since splitting the isosceles triangle down the middle yields a right triangle. According to the Pythagorean theorem:
// Pythagoras: h^2 + (d/2)^2 = (2a)^2, rewritten as:
float h = Mathf.Sqrt(Mathf.Pow(2 * a, 2) - Mathf.Pow(d / 2, 2));
Now we need to turn these scalar quantities into vectors within our game space. For the vector between the two circles, that's easy:
Vector3 betweenVector = circle2Position - circle1Position;
But what about the height vector along the h direction? Well, since all movement is in a 2D plane, find a direction your circles never move along and use it to get the cross product (the perpendicular vector) with betweenVector via Vector3.Cross(). For example, if the circles only move laterally:
Vector3 heightVector = Vector3.Cross(betweenVector, Vector3.up);
Bringing this all together, you might have a method like:
Vector3 GetNewPosition(Vector3 movingCirclePosition, Vector3 circle1Position,
                       Vector3 circle2Position, float radius)
{
    float halfDistance = Vector3.Distance(circle1Position, circle2Position) / 2;
    float height = Mathf.Sqrt(Mathf.Pow(2 * radius, 2) - Mathf.Pow(halfDistance, 2));
    Vector3 betweenVector = circle2Position - circle1Position;
    Vector3 heightVector = Vector3.Cross(betweenVector, Vector3.up);

    // Two possible positions, on either side of betweenVector
    Vector3 candidatePosition1 = circle1Position
        + betweenVector.normalized * halfDistance
        + heightVector.normalized * height;
    Vector3 candidatePosition2 = circle1Position
        + betweenVector.normalized * halfDistance
        - heightVector.normalized * height;

    // Absent any other information, the closer position will be assumed as correct
    float distToCandidate1 = Vector3.Distance(movingCirclePosition, candidatePosition1);
    float distToCandidate2 = Vector3.Distance(movingCirclePosition, candidatePosition2);
    if (distToCandidate1 < distToCandidate2)
    {
        return candidatePosition1;
    }
    else
    {
        return candidatePosition2;
    }
}
I am trying to calculate circular motion (an orbit) around an object. The code I have gives me a nice circular orbit around the object. The problem is that when I rotate the object, the orbit behaves as though the object were not rotated.
I've put a really simple diagram below to try and explain it better. The left is what I get when the cylinder is upright, the middle is what I currently get when the object is rotated, and the image on the right is what I would like to happen.
float Gx = target.transform.position.x - ((Mathf.Cos(currentTvalue)) * (radius));
float Gz = target.transform.position.z - ((Mathf.Sin(currentTvalue)) * (radius));
float Gy = target.transform.position.y;
Gizmos.color = Color.green;
Gizmos.DrawWireSphere(new Vector3(Gx, Gy, Gz), 0.03f);
How can I get the orbit to change with the object's rotation? I have tried multiplying the orbit position "new Vector3(Gx, Gy, Gz)" by the rotation of the object:
Gizmos.DrawWireSphere(target.transform.rotation*new Vector3(Gx, Gy, Gz), 0.03f);
but that didn't seem to do anything?
That is happening because you are calculating the vector (Gx, Gy, Gz) in world-space coordinates, where the target object's rotation is not taken into consideration.
One way to solve this is to calculate the orbit point in the target object's local-space coordinates and then convert it to world-space coordinates. This will correctly make your calculations take the target object's rotation into account.
// Offset from the target's own origin, expressed in its local space
float Gx = -Mathf.Cos(currentTvalue) * radius;
float Gz = -Mathf.Sin(currentTvalue) * radius;
float Gy = 0.0f;
// Convert the local-space point to world space (applies position, rotation, scale)
Vector3 worldSpacePoint = target.transform.TransformPoint(Gx, Gy, Gz);
Gizmos.color = Color.green;
Gizmos.DrawWireSphere(worldSpacePoint, 0.03f);
Notice that instead of offsetting from target.transform.position, which is in world-space coordinates, I express the orbit point as an offset from the object's own origin in local space, where rotation has not yet been applied.
Also, I am calling the TransformPoint() method, which converts the coordinates I have calculated in local space to their corresponding values in world space, applying the target's position, rotation, and scale.
Then you might safely call the Gizmos.DrawWireSphere() method, which requires world space coordinates to work correctly.
Is it possible to generate an object shaped like this dynamically in Unity?
I need the angle of the object to be able to change from a thin sliver to a full donut-like part, so modelling each possible version would be very time-consuming and hard to use.
You can programmatically generate meshes, as the comments say.
However, if the angle is dynamic, i.e. changes often, doing so will waste CPU time and GPU bandwidth better spent on something else. A better option: model (or generate programmatically at startup) a 180° or 270° shape, and write a vertex shader that rolls the shape up or down around its center.
To keep things simple, model your shape so the cylinder axis is Z, the center in XY is {0, 0} (Z doesn't matter), the opening of the shape points towards -X, and the shape is symmetrical about the Y=0 plane. I'm talking about the object's local coordinates; you can then position/rotate the model however you want.
Here’s vertex shader code (untested):
float len = length( float2( position.x, position.y ) );
float newAngle = atan2( position.y, position.x ) * rollFactor;
float3 newPosition = float3( len * cos( newAngle ), len * sin( newAngle ), position.z );
output.position = UnityObjectToClipPos( newPosition );
This way, you only need to update a single shader constant, rollFactor, to transform your shape into any angle. If the initial shape is 270°, set the constant to 1.33334 to roll it up into a full donut, 1.0 to keep it at 270°, 0.33333 to roll it down to 90° like on your screenshot, etc.
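In other words, rollFactor is just the ratio of the angle you want displayed to the angle you modeled:

$$\mathrm{rollFactor} = \frac{\theta_{\mathrm{desired}}}{\theta_{\mathrm{modeled}}}\,, \qquad \text{e.g. } \frac{360^\circ}{270^\circ} \approx 1.3334$$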
Folks,
While coding up a few dials and sliders (e.g. a big volume button one can rotate), I found the standard CGContextAddArc(), used like this:
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGColorSpaceRef rgbColorspace = CGColorSpaceCreateDeviceRGB();
    CGContextSetLineWidth(ctx, radius * (KE-KR)+8);
    CGContextSetStrokeColorWithColor(ctx, self.foregroundColor.CGColor);
    // ... more colour/width/etc. settings
    CGContextAddArc(ctx, dx, dy, radius, 0, 2*M_PI, 0);
to be unbelievably slow.
On an iPad, with a handful of filled/stroked circles, I get fewer than some 10 clean [self setNeedsDisplay] updates per second during a drag. A very quick hack with a hand-drawn circle (shown below) was several orders of magnitude faster. The same applies to the simulator.
Why is this? It seems to be the case for both a normal fill and various gradient fills. What am I doing wrong?
Dw.
// Stupid replacement for CGContextAddArc(), which seems to be very slow.
//
void CGContextAddCircle(CGContextRef ctx, float ox, float oy, float radius)
{
    double len = 2 * M_PI * radius;
    double step = 1.8 / len; // over the top :)

    // translating/scaling would be more efficient, etc.
    float x = ox + radius;
    float y = oy;

    // stupid hack - should just do a quadrant and mirror it twice.
    CGContextMoveToPoint(ctx, x, y);
    for (double a = step; a < 2.0 * M_PI - step; a += step) {
        x = ox + radius * cos(a);
        y = oy + radius * sin(a);
        CGContextAddLineToPoint(ctx, x, y);
    }
    CGContextClosePath(ctx);
}
The vector drawing operations of Quartz 2D can be slow, which is why it is a good idea to redraw only when needed.
In your case, I would suggest drawing your volume button once, then transforming the UIView or CALayer into which you've drawn the button using a rotational transform. By simply moving, rotating, or scaling a view, you do not trigger an expensive redraw. The content is already cached as a texture, and the GPU can quickly manipulate and composite this rasterized content on top of your other views.
You'll find that avoiding redrawing in this manner will yield much improved performance.
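For example, in Swift this might look like the following (a sketch; knobView is an assumed UIView into which the dial has already been drawn once):

// Rotate the cached, already-rasterized view instead of redrawing it.
// The GPU composites the rotated layer, so no expensive redraw is triggered.
func setKnobAngle(_ angle: CGFloat) {
    knobView.transform = CGAffineTransform(rotationAngle: angle)
}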
Issue partly (mostly) resolved.
Extensive benchmarking does show that AddArc is indeed slow compared to drawing a complete circle with a vector/straight-line path, for circles in the 100-200 pixel radius range. For partial circles the effect is much less pronounced; I am wondering if this is tied to the number of Bézier segments.
BUT:
The code above did not compile as one would read it; M_PI was not the 3.14... actually expected, but set to (3.14... * ((EVP_ARM7_ADJUST[(PLTF)]))) by an included fixed-point DSP library (set to x100).
Hence the end angle of the arc, a double, was specified too large by a factor of 256.
And it was the latter which made the issue so noticeable (evidently the underlying implementation just keeps going round and round and round...).
So the issue is now understood (and I will keep an optimized/benchmarked version).
Thanks for the help!