What's the difference between ScreenToWorldPoint and ScreenPointToWorldPointInRectangle?

What's the difference between ScreenToWorldPoint and ScreenPointToWorldPointInRectangle? And when should we use which one?
Scenario:
I'm using the UI system to create a card game similar to Hearthstone. I want to transform my mouse drag positions into world positions. RectTransformUtility.ScreenPointToWorldPointInRectangle(UIObjectBeingDragged.transform.parent as RectTransform, Input.mousePosition, Camera.main, out resultV3) works fine. But I also tried Camera.main.ScreenToWorldPoint(Input.mousePosition), and it gives a different and "wrong" result.

ScreenToWorldPoint
Gives you a world position (the return value) that lies along a ray shot through the near plane of the camera (the Camera whose method is being called) at a given screen point (the x and y components of the position parameter), at a given distance from the camera (the z component of the position parameter).
You should use this when you:
have a specific distance from the camera that you are interested in, and
don't need to know whether the point falls inside some rectangle or not.
You could think of this as roughly a shortcut for Ray.GetPoint: the x and y of position and various camera parameters make the Ray, and the z component of position acts as the distance parameter.
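For example, a minimal sketch of the direct call (the 10-unit depth and the script name are assumptions for illustration):
using UnityEngine;
public class ScreenToWorldExample : MonoBehaviour
{
    void Update()
    {
        // x/y are the mouse position in pixels; z is the desired distance from the camera in world units.
        Vector3 world = Camera.main.ScreenToWorldPoint(new Vector3(Input.mousePosition.x, Input.mousePosition.y, 10f));
        Debug.Log("World point at depth 10: " + world);
    }
}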
ScreenPointToWorldPointInRectangle
Also gives you a world position (worldPoint) along a ray shot through the near plane of a camera (cam) at a given point (screenPoint). Only this time instead of giving you the point a given distance along the ray, it gives you the intersection point between that ray and a given rectangle (rect) if it exists, and tells you if such an intersection exists or not (the return value).
You should use this when you:
have a specific rectangle whose intersection with a camera ray you are interested in,
don't know the distance between the camera (or its near plane) and the intersection point, and
want to know whether that rectangle is hit by the ray or not.
You could think of this as a shortcut for Plane.Raycast that uses cam and screenPoint to make the Ray and rect to make the Plane, and that also gives some extra information about whether the intersection falls outside the boundaries of the rect.
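For example, a sketch close to the call from the question (UIObjectBeingDragged stands in for the asker's dragged card):
using UnityEngine;
public class DragToWorld : MonoBehaviour
{
    public RectTransform UIObjectBeingDragged; // the UI element being dragged
    void Update()
    {
        // The bool return value tells you whether the camera ray actually hits the plane of the parent rectangle.
        if (RectTransformUtility.ScreenPointToWorldPointInRectangle(
                UIObjectBeingDragged.parent as RectTransform,
                Input.mousePosition,
                Camera.main,
                out Vector3 worldPoint))
        {
            UIObjectBeingDragged.position = worldPoint;
        }
    }
}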

Related

How to search for objects in a specific direction in Unity?

I am new to Unity and have created an object, let's say a car. Now I want to know the distance to the next object in a specific direction, for example in front of it or at 45 degrees.
What I want to achieve is comparable to the car sending out light rays in a direction and measuring the distance to the next collider.
What I can think of is checking all objects in the scene, but hopefully there is a better solution.
You're looking for Physics.Raycast.
This casts a ray from point a (origin) towards point b (origin + direction * maxDistance). The documentation has a nice example.
maxDistance ensures only objects within that range are returned.
You can do multiple raycasts, rotating the direction of your rays, to get a wider scan; see the sketch below. Physics.OverlapSphere is also an option: it checks a full sphere around a location for anything that overlaps it. You would then need to check whether the angle between the car and the object is within your range by calculating the angle between the two positions.
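A minimal sketch of such a fan of rays (the 45-degree spread, 15-degree step, and range are assumptions):
using UnityEngine;
public class DistanceScanner : MonoBehaviour
{
    public float maxDistance = 50f;
    void Update()
    {
        // Sweep from 45 degrees left of forward to 45 degrees right, in 15-degree steps.
        for (int angle = -45; angle <= 45; angle += 15)
        {
            Vector3 direction = Quaternion.Euler(0f, angle, 0f) * transform.forward;
            if (Physics.Raycast(transform.position, direction, out RaycastHit hit, maxDistance))
            {
                Debug.Log("Hit " + hit.collider.name + " at distance " + hit.distance + " (angle " + angle + ")");
            }
        }
    }
}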

How to smoothly move a node in an ARKit Scene View based on device motion?

I'm a Swift beginner struggling with moving a scene node in ARKit in response to device motion.
What I want to achieve is: first detect the floor plane, then place a sphere on the floor. From that point onwards, depending on the movement of the device, I want to move the sphere along its x and z axes to move it around the floor of the room. (The sphere, once created, needs to be in the center of the device screen and locked to that view.)
So far I can detect the floor and place a node, no problem. I can use device motion to obtain the device attitude (pitch, roll and yaw), but how do I translate these values into meaningful x, y, z positions to update my node with?
Are there any formulas or methods that are used to calculate such information or is this the wrong approach? I would appreciate a link to some info or an explanation of how to go about this. Also I am unsure how to ensure the node would be always at the center of the device screen.
So, as far as I understand, you want the following workflow:
Step 1. You create a sphere on a plane (which is already done)
Step 2. Move the sphere with respect to the camera's horizontal plane (i.e. along its x and z axis to move it around the floor of the room depending on the movement of the device)
Assuming that Step 1 is done, here is what you can do:
Get the position of the camera and the sphere
This should first be done within the function that is invoked after sphere creation (be it tapGestureRecognizer(), touchesBegan(), etc.).
You can do it by reading the position property of the sphere's SCNNode, and by getting the camera's position and/or orientation from sceneView.session.currentFrame's .camera.transform, which contains all the necessary parameters about the current position of the camera.
Move the sphere as camera moves
Having the sphere's position in the scene and the transformation matrix of the camera, you can find the distance relation between them; here you can find a good explanation of exactly how to do it.
After you have those things, you should implement the proper logic within renderer(_:didUpdate:for:) to keep the ball continuously locked to the camera position.
If you are interested in the math behind it, you can kick off by reading more about transformation matrices, which are a big part of image processing and many other areas.
Hope that this will help!

Why am I getting an incorrect vector when trying to find HingeJoint2D.anchor in world space?

In the scene, I have a long chain of children that are connected via hinge to their parent. For my code, I need the position of the hinge anchors in world space, so I use:
public Vector2 hingeVector => hinge.anchor + (Vector2)gameObject.transform.position;
For the first hinge, that code gives the correct position. But for the second hinge this happens:
The red point is the vector I get, the blue point is the actual position. As you can see, it's a somewhat small but still problematic difference.
Is there any way I can fix this? I couldn't find anything like this online.
You need to account for the object's rotation.
The anchor values are axis-aligned and aren't affected by rotation, so to calculate the anchor point in world space, knowing the transform's position, you need to rotate the anchor values by the object's rotation and then add them to the position:
// Quaternion * Vector rotates the local anchor by the object's current rotation.
Vector2 p = (Vector2)(gameObject.transform.rotation * hinge.anchor)
          + (Vector2)gameObject.transform.position;
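Equivalently, since the anchor is expressed in the joint object's local space, Transform.TransformPoint can do the whole conversion; a small sketch (note it also applies any scale, which the manual version above ignores):
// TransformPoint converts a local-space point to world space, applying position, rotation and scale in one call.
Vector2 p = gameObject.transform.TransformPoint(hinge.anchor);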

Unity3D angle between vectors/directions on specific axis

I have two directions and I am trying to calculate the angle between them on a specific axis. The object from which the directions are derived is not static, so I'm struggling to get my head round the maths to work out what I need to do.
FYI, I have the start and end points that I used to calculate the directions, if they are needed.
Here's a diagram to show what I am looking for:
The above image is from a top-down view in Unity and it shows the angle I want.
The problem, which can be seen in the image, is that the directions are not at the same height, so I can't use the Vector3.Angle function as it won't give me the correct angle value.
In essence, I want to know how much I would have to rotate the red line to the left (top view) so that it would line up with the blue line (top view).
The reason I need this is that I am trying to find a way of getting the side-to-side angles of fingers from my Leap Motion sensor.
This is a generic version of my other question:
Leap Motion - Angle of proximal bone to metacarpal (side to side movement)
It will provide more specific information as to the problem if you need it and it has more specific screenshots.
UPDATE:
After re-reading my question I can see it wasn't particularly clear, so I will hopefully make it clearer here. I am trying to calculate the angle of a finger from the Leap Motion tracking data. Specifically, the angle of the finger relative to the metacarpal bone (the bone in the back of the hand). An easy way to demonstrate what I mean would be for you to move your index finger side to side (i.e. towards your thumb and then away from your thumb).
I have put two diagrams below to hopefully illustrate this.
The blue line follows the metacarpal bone, which your finger would line up with in a resting position. What I want to calculate is the angle between the blue and red lines (marked with a green line). I am unable to use Vector3.Angle as this value also takes into account the bending of the finger. I need some way of 'flattening' the finger direction out, thus essentially ignoring the bending and just looking at the side-to-side angle. The second diagram will hopefully show what I mean.
In this diagram:
The blue line represents the actual direction of the finger (taken from the proximal bone - knuckle to first joint)
The green line represents the metacarpal bone direction (the direction to compare to)
The red line represents what I would like to 'convert' the blue line to, whilst keeping its side-to-side angle (as seen in the first hand diagram).
It is also worth mentioning that I can't just always look at the x and z axes, as the hand will be moving and rotating.
I hope this helps clear things up, and I truly appreciate the help received thus far.
If I understand your problem correctly, you need to project your two vectors onto a plane. The vectors might not currently lie in that plane (your "bent finger" problem), so you need to "project" them onto it (think of a tree casting a shadow onto the ground: the tree is the vector and the shadow is its projection onto the "plane" of the ground).
Luckily, Unity3D provides a method for projection (though the math is not that hard): Vector3.ProjectOnPlane https://docs.unity3d.com/ScriptReference/Vector3.ProjectOnPlane.html
// Project both direction vectors onto a common plane, then measure the angle between the projections.
Vector3 a = ...;           // first direction, e.g. the metacarpal direction
Vector3 b = ...;           // second direction, e.g. the finger direction
Vector3 planeNormal = ...; // normal of the plane to project onto
Vector3 projectionA = Vector3.ProjectOnPlane(a, planeNormal);
Vector3 projectionB = Vector3.ProjectOnPlane(b, planeNormal);
float angle = Vector3.Angle(projectionA, projectionB);
What is unclear in your problem description is which plane you need to project onto. The horizontal plane? If so, planeNormal is simply the vertical. But if it is in reference to some other transform, you will need to define that first.
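As a side note, if you also need to know which way to rotate (the question asks how far the red line must turn to line up with the blue one), Vector3.SignedAngle can be used on the same projections; a small sketch reusing the variables above:
// Signed angle in degrees around planeNormal: the sign tells you the direction of rotation.
float signedAngle = Vector3.SignedAngle(projectionA, projectionB, planeNormal);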

Working with the coordinate system and game screen in Unity 2d?

So I've developed games on other platforms where the x/y coordinate system made sense to me. The top left of the game screen had coordinates of (0,0) and the bottom right was (width,height). Now I'm trying to make the jump to Unity 2D and I can't understand how the game screen works. If I have a background object and a character object on the screen, when I move the character around, its x and y values vary between -3 and 3: very small coordinates that don't match the game resolution I have set up (1024x768). Are there good tutorials for understanding the game grid in Unity? Or can anyone explain how I can accomplish what I'm trying to do?
There are three coordinate systems in Unity: screen coordinates, view coordinates and world coordinates.
World coordinates: Think of the absolute positioning of the objects in your scene, using "points". You can choose to have the units represent any length you want, for example 1 unit = 10 meters. What is actually shown on the screen is determined by where the camera is placed and how it is oriented.
View coordinates: The coordinates in the viewport of a given camera. The viewport is the imaginary rectangle through which the world is viewed. These coordinates are proportional and range from (0,0) to (1,1).
Screen coordinates: The actual pixel coordinates denoting a position on the device's screen.
Note that the world coordinates of any given object will always be the same regardless of which camera is used to view it, whereas the view coordinates depend on the camera being used. The screen coordinates additionally depend on the resolution of the device and the placement of the camera view on the screen.
The "Camera" object provides several methods to convert between these different coordinate systems like "ScreenToViewportPoint" "ScreenToWorldPoint" etc.
Example: Place object on top left of screen
float distanceFromCamera = 10.0f; // desired distance in front of the camera
// (0, pixelHeight) is the top-left corner of the screen in pixel coordinates.
Vector3 pos = Camera.main.ScreenToWorldPoint (new Vector3 (0, Camera.main.pixelHeight, distanceFromCamera));
transform.position = pos;
The ScreenToWorldPoint function takes a Vector3 as an argument, where the x and y denote the pixel position on the screen ( 0,0 is the bottom left) and the z component denotes the desired distance from the camera. An infinite number of 3D locations can map to the same screen position, so you need to provide this value.
Just make sure that the desired position falls within the clipping region of the camera. Also, you might need to pick a proper pivot for your object depending on which part of your object you want centered on the top left.
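As an aside, viewport coordinates can be handy for resolution-independent placement; a small sketch using ViewportToWorldPoint (the 10-unit depth is an assumption):
// (0,1) in viewport space is the top-left corner at any resolution; z is still the distance from the camera.
Vector3 topLeft = Camera.main.ViewportToWorldPoint (new Vector3 (0f, 1f, 10f));
transform.position = topLeft;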
Using:
Camera.main.WorldToScreenPoint (transform.position);
lets me convert my GameObject's transform position to the screen's x and y coordinate system.