I am trying to reconstruct the point where the camera ray through the current pixel intersects the near plane.
I need the coordinates of the intersection point in the local coordinates of the object being rendered.
This is my current implementation:
float4 nearClipLS = mul(inv_modelViewProjectionMatrix, float4(i.vertex.x / i.vertex.w, i.vertex.y / i.vertex.w, -1.0, 1.0));
nearClipLS /= nearClipLS.w;
There's got to be a more efficient way to do it, but the following should, in theory, work.
Find the offset vector from the camera to the pixel:
float3 cam2pos = v.worldPos - _WorldSpaceCameraPos;
Get the camera's forward vector:
float3 camFwd = UNITY_MATRIX_IT_MV[2].xyz;
Get the dot product of the two to determine how far the point projects in the direction of the camera's forward axis:
float projDist = dot(cam2pos, camFwd);
Then, you should be able to use that data to re-project the point onto the near clip plane:
float nearClipZ = _ProjectionParams.y;
float3 nearPos = _WorldSpaceCameraPos + cam2pos * (nearClipZ / projDist);
This solution doesn't address edge cases (such as when the point is level with or behind the camera, i.e. projDist is zero or negative, which could cause problems), so you may want to check those once you get it working.
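For reference, here is a minimal C# sketch of the same math done on the CPU side, which can be handy for verifying the shader output. The names cam, target and worldPos are placeholders for this example, not part of the original shader:

// Verification sketch only (not the shader): reproject a world-space point onto the
// near plane along the camera ray and return it in the object's local coordinates.
Vector3 NearPlanePointLocal(Camera cam, Transform target, Vector3 worldPos)
{
    Vector3 camToPos = worldPos - cam.transform.position;
    float projDist = Vector3.Dot(camToPos, cam.transform.forward);
    // Scale the offset so its component along the camera forward equals the near clip distance.
    Vector3 nearWorld = cam.transform.position + camToPos * (cam.nearClipPlane / projDist);
    // Convert the world-space result into the rendered object's local space.
    return target.InverseTransformPoint(nearWorld);
}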
I want to obtain a particular angle in a local, rotated axis system. Basically, I want the angle measured in a plane of that rotated axis system. The best way to explain it is graphically.
I can do that by projecting the direction from origin to target onto my plane, and then using Vector3.Angle(origin forward direction, projected direction in plane).
Is there a way to obtain this similarly to Quaternion.FromToRotation(from, to).eulerAngles, but with the Euler angles expressed with respect to a coordinate system that is not the world's one, but the local rotated one (the one represented by the rotated plane in the picture above)?
So that, for the rotation around the local y axis, the desired angle would be Quaternion.FromToRotation(from, to).localEulerAngles.y, as the local Euler angles would be (0, -desiredAngle, 0) based on this approach.
Or is there a more direct way than the one I used?
If I understand you correctly, there are probably many possible ways to go.
I think you could, for example, use Quaternion.ToAngleAxis, which returns an angle and the axis around which the rotation occurs. You can then convert this axis into the local space of your object:
public Vector3 GetLocalEulerAngles(Transform obj, Vector3 vector)
{
    // As you had it already, still in world space
    var rotation = Quaternion.FromToRotation(obj.forward, vector);
    rotation.ToAngleAxis(out var angle, out var axis);

    // Now convert the axis from world space into the local space of the object
    // Afaik localAxis should already be normalized
    var localAxis = obj.InverseTransformDirection(axis);

    // Or make it float and only return the angle if you don't need the rest anyway
    return localAxis * angle;
}
As an alternative, as mentioned, you could also simply convert the other vector into local space first; then the result of Quaternion.FromToRotation is already in local space:
public Vector3 GetLocalEulerAngles(Transform obj, Vector3 vector)
{
    var localVector = obj.InverseTransformDirection(vector);

    // Now this already is a rotation in local space
    var rotation = Quaternion.FromToRotation(Vector3.forward, localVector);
    return rotation.eulerAngles;
}
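For example, a usage sketch (the target field is a placeholder) that reads off the component the question asks about; for a rotation that is mostly about the local y axis, the y component of either helper's result corresponds to the desired angle:

// Hypothetical usage: angle from this object's forward to the target, expressed
// around the object's own axes; read the local y component for the yaw-like angle.
Vector3 toTarget = target.position - transform.position;
Vector3 localAngles = GetLocalEulerAngles(transform, toTarget);
float desiredAngle = localAngles.y;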
I am trying to render a 2D triangle using user touches. So, I will let a user touch three points on the screen and those points will be used as vertices of a triangle.
You're already aware that you need to return clip-space coordinates (technically not normalized device coordinates) from your vertex shader. The question is how and where to go from UIKit coordinates to Metal's clip-space coordinates.
Let's start by defining these different spaces. Note that below, I actually am using NDC coordinates for the sake of simplicity, since in this particular case, we aren't introducing perspective by returning vertex positions with w != 1. (Here I'm referring to the w coordinate of the clip-space position; in the following discussion, w always refers to the view width).
We pass the vertices into our vertex shader in whatever space is convenient (this is often called model space). Since we're working in 2D, we don't need the usual series of transformations to world space, then eye space. Essentially, the coordinates of the UIKit view are our model space, world space, and eye space all in one.
We need some kind of orthographic projection matrix to move from this space into clip space. If we strip out the unnecessary parts related to the z axis and assume that our view bounds' origin is (0, 0), we come up with the following transformation:
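clipX = (2 / w) * x - 1
clipY = 1 - (2 / h) * y

or, written as a matrix acting on column vectors (x, y, z, 1), where w and h are the view's width and height in points and the y axis is flipped because UIKit's origin is at the top-left:

| 2/w    0    0   -1 |
|  0   -2/h   0    1 |
|  0     0    1    0 |
|  0     0    0    1 |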
We could pass this matrix into our vertex shader, or we could do the transformation prior to sending the vertices to the GPU. Considering how little data is involved, it really doesn't matter at this point. In fact, using a matrix at all is a little wasteful, since we can just transform each coordinate with a couple of multiplies and an add. Here's how that might look in a Metal vertex function:
float2 inverseViewSize = float2(1.0f / width, 1.0f / height); // passed in a buffer
float clipX = (2.0f * in.position.x * inverseViewSize.x) - 1.0f;
float clipY = (2.0f * -in.position.y * inverseViewSize.y) + 1.0f;
float4 clipPosition = float4(clipX, clipY, 0.0f, 1.0f);
Just to verify that we get the correct results from this transformation, let's plug in the upper-left and lower-right points of our view to ensure they wind up at the extremities of clip space (by linearity, if these points transform correctly, so will all others):
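Upper-left of the view, (0, 0): clipX = -1, clipY = 1 (the upper-left corner of clip space)
Lower-right of the view, (w, h): clipX = 1, clipY = -1 (the lower-right corner of clip space)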
These points appear correct, so we're done. If you're concerned about the apparent distortion introduced by this transformation, note that it is exactly canceled by the viewport transformation that happens prior to rasterization.
Here is a function that will convert UIKit view-based coordinates to Metal's clip-space coordinates (based on warrenm's answer). It can be added directly to a shader file and called from the vertex shader function.
float2 convert_to_metal_coordinates(float2 point, float2 viewSize) {
    float2 inverseViewSize = 1.0f / viewSize;
    float clipX = (2.0f * point.x * inverseViewSize.x) - 1.0f;
    float clipY = (2.0f * -point.y * inverseViewSize.y) + 1.0f;
    return float2(clipX, clipY);
}
You'll want to pass the viewSize (UIKit's bounds) to Metal somehow, say via a buffer parameter on the vertex function.
Here is Thompsonmachine's code translated to Swift, using SIMD values, which is what I need to pass to shaders.
func convertToMetalCoordinates(point: CGPoint, viewSize: CGSize) -> simd_float2 {
    let inverseViewSize = CGSize(width: 1.0 / viewSize.width, height: 1.0 / viewSize.height)
    let clipX = Float((2.0 * point.x * inverseViewSize.width) - 1.0)
    let clipY = Float((2.0 * -point.y * inverseViewSize.height) + 1.0)
    return simd_float2(clipX, clipY)
}
As per my game requirements, I am applying a manual force when two cars collide with each other, so that they move back.
So I want the correct code to achieve this. Here is an example of the collision response that I want to get:
As per my understanding, I have written this code:
Vector3 reboundDirection = Vector3.Normalize(transform.position - other.transform.position);
reboundDirection.y = 0f;

int i = 0;
while (i < 3)
{
    myRigidbody.AddForce(reboundDirection * 100f, ForceMode.Force);
    appliedSpeed = speed * 0.5f;
    yield return new WaitForFixedUpdate();
    i++;
}
I am moving my cars using this code:
//Move the player forward
appliedSpeed += Time.deltaTime * 7f;
appliedSpeed = Mathf.Min(appliedSpeed, speed);
myRigidbody.velocity = transform.forward * appliedSpeed;
Still, as per my observation, I am not getting the collision response in the proper direction. What is the correct way to get the collision response shown in the image above?
Until you clarify why you have to use manual forces, or how you handle the forces generated by the Unity engine, I would like to stress one problem in your approach. You calculate the direction based on positions, but those positions are the centers of your cars. Therefore, you are not getting a correct direction, as you can see from the image below:
You calculate the direction between the two pivot (center) points, which is why the force is a bit tilted in the left image. Instead, you can use the ContactPoint data and calculate the direction from that.
To explain in more detail: in the image above, you can see the region marked with a blue rectangle. You can get all the contact points for that region using Collision.contacts,
then calculate their center point (centroid) like this:
Vector3 centroid = new Vector3(0, 0, 0);
foreach (ContactPoint contact in col.contacts)
{
    centroid += contact.point;
}
centroid = centroid / col.contacts.Length;
This is the center of the rectangle. To find the direction, you need to find its projection on your car, like this:
Vector3 projection = gameObject.transform.position;
projection.x = centroid.x;
gameObject.GetComponent<Rigidbody>().AddForce((projection - centroid) * 100, ForceMode.Impulse);
Since I do not know your setup, I just took the y and z values from the car's position and the x value from the centroid. That is why you get a straight blue line rather than an arrow tilted to the left as in the first image, even in case two of the second image. I hope I am being clear.
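Putting the pieces together, a minimal sketch of how this could look in a collision callback on the rebounding car (assuming the script sits on the car and follows the same setup described above):

void OnCollisionEnter(Collision col)
{
    // Average all contact points to get the centroid of the contact region.
    Vector3 centroid = Vector3.zero;
    foreach (ContactPoint contact in col.contacts)
    {
        centroid += contact.point;
    }
    centroid = centroid / col.contacts.Length;

    // Project the centroid onto the car: y and z from the car's position, x from the
    // centroid (as in the setup above), then push the car away from the contact region.
    Vector3 projection = transform.position;
    projection.x = centroid.x;
    GetComponent<Rigidbody>().AddForce((projection - centroid) * 100f, ForceMode.Impulse);
}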
I am trying to calculate circular motion (an orbit) around an object. The code I have gives me a nice circular orbit around the object. The problem is that when I rotate the object, the orbit behaves as though the object were not rotated.
I've put a really simple diagram below to try and explain it better. The left is what I get when the cylinder is upright, the middle is what I currently get when the object is rotated. The image on the right is what I would like to happen.
float Gx = target.transform.position.x - ((Mathf.Cos(currentTvalue)) * (radius));
float Gz = target.transform.position.z - ((Mathf.Sin(currentTvalue)) * (radius));
float Gy = target.transform.position.y;
Gizmos.color = Color.green;
Gizmos.DrawWireSphere(new Vector3(Gx, Gy, Gz), 0.03f);
How can I get the orbit to change with the object's rotation? I have tried multiplying the orbit position "new Vector3(Gx, Gy, Gz)" by the rotation of the object:
Gizmos.DrawWireSphere(target.transform.rotation*new Vector3(Gx, Gy, Gz), 0.03f);
but that didn't seem to do anything.
That is happening because you are calculating the vector (Gx, Gy, Gz) in world-space coordinates, where the target object's rotation is not taken into consideration.
One way to get what you need is to do the calculation in the target object's local-space coordinates and then convert the result to world-space coordinates. That way your calculation correctly takes the rotation of the target object into account.
float Gx = target.transform.localPosition.x - ((Mathf.Cos(currentTvalue)) * (radius));
float Gz = target.transform.localPosition.z - ((Mathf.Sin(currentTvalue)) * (radius));
float Gy = target.transform.localPosition.y;
Vector3 worldSpacePoint = target.transform.TransformPoint(Gx, Gy, Gz);
Gizmos.color = Color.green;
Gizmos.DrawWireSphere(worldSpacePoint, 0.03f);
Notice that instead of target.transform.position, which retrieves the world space coordinates of the given transform, I am doing the calculations using the target.transform.localPosition, which retrieves the local space coordinates of the given transform.
Also, I am calling the TransformPoint() method, which converts the coordinates which I have calculated in local space to its corresponding values in world space.
Then you might safely call the Gizmos.DrawWireSphere() method, which requires world space coordinates to work correctly.
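If you are drawing this from a script, a small sketch of how the pieces above could sit inside OnDrawGizmos (assuming target, radius, and currentTvalue are fields on the component):

void OnDrawGizmos()
{
    // Build the orbit point relative to the target's local position...
    float Gx = target.transform.localPosition.x - (Mathf.Cos(currentTvalue) * radius);
    float Gz = target.transform.localPosition.z - (Mathf.Sin(currentTvalue) * radius);
    float Gy = target.transform.localPosition.y;

    // ...then map it to world space so the gizmo follows the target's rotation.
    Vector3 worldSpacePoint = target.transform.TransformPoint(Gx, Gy, Gz);

    Gizmos.color = Color.green;
    Gizmos.DrawWireSphere(worldSpacePoint, 0.03f);
}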
I am using the following code to handle rotating my player model to the position of my mouse.
void Update() {
    // Generate a plane that intersects the transform's position with an upwards normal.
    Plane playerPlane = new Plane(Vector3.up, transform.position);

    // Generate a ray from the cursor position.
    Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);

    // Determine the point where the cursor ray intersects the plane.
    // This will be the point that the object must look towards to be looking at the mouse.
    // Raycasting to a Plane object only gives us a distance, so we'll have to take the distance,
    // then find the point along that ray that meets that distance. This will be the point
    // to look at.
    float hitdist = 0f;

    // If the ray is parallel to the plane, Raycast will return false.
    if (playerPlane.Raycast(ray, out hitdist)) {
        // Get the point along the ray that hits the calculated distance.
        var targetPoint = ray.GetPoint(hitdist);

        // Determine the target rotation. This is the rotation if the transform looks at the target point.
        Quaternion targetRotation = Quaternion.LookRotation(targetPoint - transform.position);

        // Smoothly rotate towards the target point.
        transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, speed * Time.deltaTime); // WITH SPEED
        //transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, 1); // WITHOUT SPEED!!!
    }
}
I would like to be able to determine if the rotation is clockwise or counter-clockwise for animation purposes. What would be the best way of handling this? I'm fairly unfamiliar with quaternions so I'm not really sure how to approach this.
Angles between quaternions are unsigned. You will always get the shortest distance, and there's no way of defining "counter-clockwise" or "clockwise" unless you actively specify an axis (a point of view).
What you CAN do, however, is take the axis that you're interested in (I assume it's the normal of your base plane, perhaps the vertical of your world), rotate a reference direction that lies in that plane by each quaternion, and compute a simple signed 2D angle between the two results.
Quaternion A; // first quaternion - this is your desired rotation
Quaternion B; // second quaternion - this is your current rotation

// The axis we measure around here is world up, so the base plane is the XZ plane.
// Rotate a reference direction that lies IN that plane (forward) with each quaternion;
// rotating the up axis itself would be degenerate for rotations purely around up.
Vector3 reference = Vector3.forward;
Vector3 vecA = A * reference;
Vector3 vecB = B * reference;

// now compute the actual 2D rotation projections on the base plane
float angleA = Mathf.Atan2(vecA.x, vecA.z) * Mathf.Rad2Deg;
float angleB = Mathf.Atan2(vecB.x, vecB.z) * Mathf.Rad2Deg;

// get the signed difference of these angles
float angleDiff = Mathf.DeltaAngle(angleA, angleB);
This should be it. I never had to do it myself and the code above is not tested. Similar to: http://answers.unity3d.com/questions/26783/how-to-get-the-signed-angle-between-two-quaternion.html
This should work even if A or B is not a quaternion to begin with but is given as an Euler-angle rotation instead.
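In the context of the Update() method from the question, a hedged usage sketch could look like this; the helper name is made up for the example, and it assumes the relevant axis is world up (the normal of playerPlane):

// Hypothetical helper wrapping the snippet above: signed yaw (around world up)
// needed to get from `current` to `desired`.
static float SignedYawDelta(Quaternion current, Quaternion desired)
{
    Vector3 reference = Vector3.forward;
    Vector3 vecCurrent = current * reference;
    Vector3 vecDesired = desired * reference;
    float angleCurrent = Mathf.Atan2(vecCurrent.x, vecCurrent.z) * Mathf.Rad2Deg;
    float angleDesired = Mathf.Atan2(vecDesired.x, vecDesired.z) * Mathf.Rad2Deg;
    return Mathf.DeltaAngle(angleCurrent, angleDesired);
}

// Inside Update(), after computing targetRotation:
float delta = SignedYawDelta(transform.rotation, targetRotation);
// With Unity's left-handed, y-up convention, a positive delta means the shortest
// turn toward the target is clockwise when viewed from above.
bool clockwiseSeenFromAbove = delta > 0f;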
Two-dimensional quaternions (complex numbers) have a signed angle. But the more correct way to think about complex numbers is with an unsigned angle that is relative to either the XY-oriented plane or the YX-oriented plane, i.e. a combination of an unsigned angle and an oriented plane of rotation.
In 2D there are only two oriented planes of rotation so the idea of a "signed angle" is really just a trick to get both the unsigned angle and the oriented plane of rotation packed into a single number.
For a quaternion the "signed angle" trick cannot be used because in 3D you have an infinite number of oriented planes you can rotate in, so a single signed angle cannot encode all the rotation information like it can in the 2D case.
The only way for a signed angle to make sense in 3D is with reference to a particular oriented plane, such as the XY oriented plane.
-- UPDATE --
This is pretty easy to solve as a method on a quaternion class. If all you want to know is "is this counter-clockwise?", then, since we know the rotation angle is between 0 and 180 degrees, a positive dot product between the quaternion's axis of rotation and the surface normal indicates that we're rotating counter-clockwise from the perspective of that surface, and a negative dot product indicates the opposite. Ignoring the zero case, this should do the trick with much less work:
public bool IsCounterClockwise(in Vector3 normal) => I * normal.X + J * normal.Y + K * normal.Z >= 0;
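For a rough UnityEngine.Quaternion version of the same test (the vector part is stored in x, y, z; note that whether a positive result maps to counter-clockwise or clockwise depends on the handedness convention of your math library):

// Sketch: sign of the dot product between the quaternion's vector part (its rotation axis
// scaled by sin(angle/2), which is non-negative for angles in 0..180) and the surface normal.
static bool IsCounterClockwise(Quaternion q, Vector3 normal)
{
    return q.x * normal.x + q.y * normal.y + q.z * normal.z >= 0f;
}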