How do I convert a RectTransform (UI Panel) to screen coordinates? - unity3d

I want to determine the screen coordinates of a RectTransform in Unity3D. How is this done, taking into consideration that the element may be anchored, or even scaled?
I am trying to use RectTransform.GetWorldCorners and then converting each Vector3 to screen coordinates, but the values are wrong.
public Rect GetScreenCoordinates(RectTransform uiElement)
{
    var worldCorners = new Vector3[4];
    uiElement.GetWorldCorners(worldCorners);
    // convert each corner from world space to screen space
    for (int index = 0; index < 4; index++)
        worldCorners[index] = Camera.main.WorldToScreenPoint(worldCorners[index]);
    return new Rect(
        worldCorners[0].x,
        worldCorners[0].y,
        worldCorners[2].x - worldCorners[0].x,
        worldCorners[2].y - worldCorners[0].y);
}

Although the documentation says that it returns coordinates in world space, they are actually in screen space (at least when the UI is not in World Space mode). So the solution is simple:
public Rect GetScreenCoordinates(RectTransform uiElement)
{
    var worldCorners = new Vector3[4];
    uiElement.GetWorldCorners(worldCorners);
    var result = new Rect(
        worldCorners[0].x,
        worldCorners[0].y,
        worldCorners[2].x - worldCorners[0].x,
        worldCorners[2].y - worldCorners[0].y);
    return result;
}
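If the Canvas renders in Screen Space - Camera or World Space mode, the corners really are in world units, so a camera conversion is still needed. A minimal sketch under that assumption, passing the canvas camera in explicitly rather than relying on Camera.main:

public Rect GetScreenCoordinates(RectTransform uiElement, Camera canvasCamera)
{
    var worldCorners = new Vector3[4];
    uiElement.GetWorldCorners(worldCorners);
    // project the bottom-left and top-right corners into screen pixels
    Vector3 min = canvasCamera.WorldToScreenPoint(worldCorners[0]);
    Vector3 max = canvasCamera.WorldToScreenPoint(worldCorners[2]);
    return new Rect(min.x, min.y, max.x - min.x, max.y - min.y);
}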

Related

Get normalized click position on a RectTransform

I've made a UI touch/click controller using a UI Image with a collider. The UI is rendered with a stacked camera.
I'm using IPointerDownHandler.OnPointerDown to get the click event.
The controller is supposed to give a value from 0-1 depending on how far up you click it.
I'm using a Canvas Scaler on the UI so the controllers resize depending on the device. But that messes up my calculations, since the click position won't be the same. How is this supposed to be handled? Right now the calculation is only correct when I disable the Canvas Scaler or run it on a display with the default dimensions.
public void OnPointerDown(PointerEventData pointerEventData)
{
    SetAccelerationValue(pointerEventData.position.y);
}

private void SetAccelerationValue(float posY)
{
    float percentagePosition;
    var positionOnAccelerator = posY - minY;
    var acceleratorHeight = maxY - minY;
    percentagePosition = positionOnAccelerator / acceleratorHeight;
    Debug.Log(percentagePosition);
}
I would use RectTransformUtility.ScreenPointToLocalPointInRectangle to get a position in the local space of the given RectTransform.
Then combine it with Rect.PointToNormalized:
Returns the normalized coordinates corresponding to the point.
The returned Vector2 is in the range 0 to 1, with values greater than 1 or less than 0 clamped.
to get a normalized position within that RectTransform.rect, (0,0) being the bottom-left corner and (1,1) the top-right corner:
[SerializeField] private RectTransform _rectTransform;

private void Awake()
{
    if (!_rectTransform) _rectTransform = GetComponent<RectTransform>();
}

private bool GetNormalizedPosition(PointerEventData pointerEventData, out Vector2 normalizedPosition)
{
    normalizedPosition = default;

    // Get the pointer position in the local space of the UI element.
    // NOTE: For click events use "pointerEventData.pressEventCamera",
    // for hover events you would rather use "pointerEventData.enterEventCamera".
    if (!RectTransformUtility.ScreenPointToLocalPointInRectangle(_rectTransform, pointerEventData.position, pointerEventData.pressEventCamera, out var localPosition)) return false;

    normalizedPosition = Rect.PointToNormalized(_rectTransform.rect, localPosition);

    // I think this roughly equals doing something like
    //var rect = _rectTransform.rect;
    //var normalizedPosition = new Vector2(
    //    (localPosition.x - rect.x) / rect.width,
    //    (localPosition.y - rect.y) / rect.height);

    Debug.Log(normalizedPosition);
    return true;
}
Since the normalized position returns values like

(0|1) ------- (1|1)
  |             |
  |  (0.5|0.5)  |
  |             |
(0|0) ------- (1|0)

but it sounds like what you want to get is

(-1|1) ------ (1|1)
   |            |
   |   (0|0)    |
   |            |
(-1|-1) ----- (1|-1)
So you can simply shift the returned value, e.g.

// Shift the normalized Rect position from [0,0] (bottom-left), [1,1] (top-right)
// into [-1,-1] (bottom-left), [1,1] (top-right)
private static readonly Vector2 _multiplicator = Vector2.one * 2f;
private static readonly Vector2 _shifter = Vector2.one * 0.5f;

private static Vector2 GetShiftedNormalizedPosition(Vector2 normalizedPosition)
{
    return Vector2.Scale(normalizedPosition - _shifter, _multiplicator);
}
So finally you would use e.g.
public void OnPointerDown(PointerEventData pointerEventData)
{
    if (!GetNormalizedPosition(pointerEventData, out var normalizedPosition)) return;

    var shiftedNormalizedPosition = GetShiftedNormalizedPosition(normalizedPosition);
    SetAccelerationValue(shiftedNormalizedPosition.y);

    // And probably for your other question also
    SetSteeringValue(shiftedNormalizedPosition.x);
}

And of course within SetAccelerationValue you no longer calculate anything; you just store the value ;)
This always uses the current rect, so you don't have to store any min/max values, and it also adapts to any dynamic re-scaling of the rect.
This would then probably also apply to your other, almost duplicate question ;)
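For illustration only, a minimal sketch of what those setters could shrink to (the backing fields are assumptions, not from the original question):

private float _accelerationValue;
private float _steeringValue;

private void SetAccelerationValue(float normalizedY)
{
    // no min/max math needed anymore, the value is already normalized
    _accelerationValue = normalizedY;
    Debug.Log(_accelerationValue);
}

private void SetSteeringValue(float normalizedX)
{
    _steeringValue = normalizedX;
}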

AR camera distance measurement

I have a question about AR (Augmented Reality).
I want to show the distance (in centimeters, for example) between the AR camera and a target object, using a smartphone.
Can I do that in Unity? Should I use AR Foundation, and with ARCore? How would I write the code?
I tried to find some related code (below), but it only seems to print the distance between two objects, nothing about the "AR camera"...
public Transform other;

void Update()
{
    if (other)
    {
        float dist = Vector3.Distance(other.position, transform.position);
        print("Distance to other: " + dist);
    }
}
Thanks again!
Here is how to do it in Unity with AR Foundation 4.1.
This example script prints the depth in meters at the center of the depth texture and works with both ARCore and ARKit:
using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Assertions;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class GetDepthOfCenterPixel : MonoBehaviour {
    // assign this field in the Inspector
    [SerializeField] AROcclusionManager manager = null;

    IEnumerator Start() {
        while (ARSession.state < ARSessionState.SessionInitializing) {
            // manager.descriptor.supportsEnvironmentDepthImage will return a correct value
            // once ARSession.state >= ARSessionState.SessionInitializing
            yield return null;
        }

        if (!manager.descriptor.supportsEnvironmentDepthImage) {
            Debug.LogError("!manager.descriptor.supportsEnvironmentDepthImage");
            yield break;
        }

        while (true) {
            if (manager.TryAcquireEnvironmentDepthCpuImage(out var cpuImage) && cpuImage.valid) {
                using (cpuImage) {
                    Assert.IsTrue(cpuImage.planeCount == 1);
                    var plane = cpuImage.GetPlane(0);
                    var dataLength = plane.data.Length;
                    var pixelStride = plane.pixelStride;
                    var rowStride = plane.rowStride;
                    Assert.AreEqual(0, dataLength % rowStride, "dataLength should be divisible by rowStride without a remainder");
                    Assert.AreEqual(0, rowStride % pixelStride, "rowStride should be divisible by pixelStride without a remainder");

                    var numOfRows = dataLength / rowStride;
                    var centerRowIndex = numOfRows / 2;
                    var centerPixelIndex = rowStride / (pixelStride * 2);
                    var centerPixelData = plane.data.GetSubArray(centerRowIndex * rowStride + centerPixelIndex * pixelStride, pixelStride);
                    var depthInMeters = convertPixelDataToDistanceInMeters(centerPixelData.ToArray(), cpuImage.format);
                    print($"depth texture size: ({cpuImage.width},{cpuImage.height}), pixelStride: {pixelStride}, rowStride: {rowStride}, pixel pos: ({centerPixelIndex}, {centerRowIndex}), depthInMeters of the center pixel: {depthInMeters}");
                }
            }

            yield return null;
        }
    }

    float convertPixelDataToDistanceInMeters(byte[] data, XRCpuImage.Format format) {
        switch (format) {
            case XRCpuImage.Format.DepthUint16:
                return BitConverter.ToUInt16(data, 0) / 1000f;
            case XRCpuImage.Format.DepthFloat32:
                return BitConverter.ToSingle(data, 0);
            default:
                throw new Exception($"Format not supported: {format}");
        }
    }
}
I'm working on AR depth images as well, and the basic idea is:
Acquire an image using the API; normally it's in the Depth16 format.
Read the image as a buffer of shorts, since Depth16 means each pixel is 16 bits.
Get the distance value, which is stored in the lower 13 bits of each short; you can do this with (depthSample & 0x1FFF). That gives you the distance for each pixel, normally in millimeters.
By doing this for all the pixels you can build a depth image and store it as JPG or another format. Here is sample code that uses AR Engine to get the distance:
try (Image depthImage = arFrame.acquireDepthImage()) {
    int imwidth = depthImage.getWidth();
    int imheight = depthImage.getHeight();
    Image.Plane plane = depthImage.getPlanes()[0];
    ShortBuffer shortDepthBuffer = plane.getBuffer().asShortBuffer();
    File sdCardFile = Environment.getExternalStorageDirectory();
    Log.i(TAG, "The storage path is " + sdCardFile);
    File file = new File(sdCardFile, "RawdepthImage.jpg");
    Bitmap disBitmap = Bitmap.createBitmap(imwidth, imheight, Bitmap.Config.RGB_565);
    for (int i = 0; i < imheight; i++) {
        for (int j = 0; j < imwidth; j++) {
            int index = (i * imwidth + j);
            shortDepthBuffer.position(index);
            short depthSample = shortDepthBuffer.get();
            short depthRange = (short) (depthSample & 0x1FFF);
            // If you only want the distance value, here it is
            byte value = (byte) depthRange;
            disBitmap.setPixel(j, i, Color.rgb(value, value, value));
        }
    }
    // I rotate the image for a better view
    Matrix matrix = new Matrix();
    matrix.setRotate(90);
    Bitmap rotatedBitmap = Bitmap.createBitmap(disBitmap, 0, 0, imwidth, imheight, matrix, true);
    try {
        FileOutputStream out = new FileOutputStream(file);
        rotatedBitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
        out.flush();
        out.close();
        MainActivity.num++;
    } catch (Exception e) {
        e.printStackTrace();
    }
} catch (Exception e) {
    e.printStackTrace();
}
While the other answers are great, they may be too complicated and advanced for this question, which is about the distance between the AR camera and another object, not about the depth of individual pixels and their occlusion.
transform.position gives you the position of whatever game object you attach the script to in the hierarchy, so attach the script to the AR camera object. And obviously, other should be the target object.
Alternatively, you can get references to the two game objects using inspector variables or GetComponent, as in the sketch below.
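A minimal sketch of that approach might look like this (attach it to the AR camera and assign the target in the Inspector; the class and field names are made up for the example):

using UnityEngine;

public class DistanceToTarget : MonoBehaviour
{
    // the target object, assigned in the Inspector
    public Transform other;

    void Update()
    {
        if (other)
        {
            // transform.position is the AR camera position because this script sits on the camera
            float distanceInMeters = Vector3.Distance(other.position, transform.position);
            Debug.Log($"Distance to target: {distanceInMeters * 100f} cm");
        }
    }
}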
// raycasting should be in Update()
Ray ray = new Ray(cam.transform.position, cam.transform.forward);
if (Physics.Raycast(ray, out RaycastHit info, 50f, layerMaskAR)) // 50 meter detection range because of the 50f
{
    distanca.text = string.Format("{0}: {1:N2}m", info.collider.name, info.distance);
}
This function does what you need; distanca is a UI Text element, and the layer is assigned to the target object/prefab.
int layerMaskAR = 1 << 6; // 6 because my custom layer "layerMaskAR" is the 6th layer
This raycasts only against objects in that layer; everything else is ignored (if you don't want to ignore anything, remove the layer mask from the raycast and it will print the name of anything with a collider).
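Put together as a complete component, that might look roughly like this (a sketch; cam, distanca, and the layer index are assumptions you would adapt to your own scene):

using UnityEngine;
using UnityEngine.UI;

public class ARDistanceLabel : MonoBehaviour
{
    public Camera cam;          // the AR camera
    public Text distanca;       // UI Text that displays the result
    int layerMaskAR = 1 << 6;   // only hit objects on custom layer 6

    void Update()
    {
        Ray ray = new Ray(cam.transform.position, cam.transform.forward);
        if (Physics.Raycast(ray, out RaycastHit info, 50f, layerMaskAR))
        {
            distanca.text = string.Format("{0}: {1:N2}m", info.collider.name, info.distance);
        }
    }
}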
Totally doable with this line of code:
Vector3.Distance(gameObject.transform.position, Camera.main.transform.position)

I want to get the user's height using Kinect v2 and Unity3D; this code doesn't work well, what should I do?

Attempting to get the user's height in Unity using the Xbox Kinect.
Below is my code, and I cannot get the height.
This uses the Kinect v2 interface.
// get the user's height with Kinect v2 and Unity3D
float GetUserHeightByLeft(long userid)
{
    float uheight = 0.0f;
    int[] joints = new int[9];
    int head = (int)KinectInterop.JointType.Head;
    joints[0] = head;
    joints[1] = (int)KinectInterop.JointType.Neck;
    int shoulderCenter = (int)KinectInterop.JointType.SpineShoulder;
    joints[2] = shoulderCenter;
    joints[3] = (int)KinectInterop.JointType.SpineMid;
    joints[4] = (int)KinectInterop.JointType.SpineBase;
    joints[5] = (int)KinectInterop.JointType.HipLeft;
    joints[6] = (int)KinectInterop.JointType.KneeLeft;
    joints[7] = (int)KinectInterop.JointType.AnkleLeft;
    joints[8] = (int)KinectInterop.JointType.FootLeft;

    int trackedcount = 0;
    for (int i = 0; i < joints.Length; ++i)
    {
        if (KinectManager.Instance.IsJointTracked(userid, joints[i]))
        {
            ++trackedcount;
        }
    }

    // if all the joints I need have been tracked, compute the user's height
    if (trackedcount == joints.Length)
    {
        for (int i = 0; i < joints.Length - 1; ++i)
        {
            if (KinectManager.Instance.IsJointTracked(userid, joints[i]))
            {
                Vector3 start = 100 * KinectManager.Instance.GetJointKinectPosition(userid, joints[i]);
                Vector3 end = 100 * KinectManager.Instance.GetJointKinectPosition(userid, joints[i + 1]);
                uheight += Mathf.Abs(Vector3.Magnitude(end - start));
            }
        }
        // some of the height Kinect v2 can't capture, so I add it
        uheight += 8;
    }
    return uheight;
}
Given your code, I see a few issues.
The summation to get the total height requires all joints to be tracked at that time. Why do you need all joints to be tracked for the height to be computed? You should only need the head and the feet.
You also double-check whether each joint is tracked.
Using only the left joint for every foot/knee/hip will give you math errors, because those joints are offset to one side (our feet aren't in the center of our body). I would track both the right and left foot/knee/hip and then find the center of the two on the x axis.
If you use the magnitude of the vector between two joints and one knee is far in front of the hip, it will give an inflated value. I would use only the y positions of the Kinect joints to calculate the height, as sketched below.
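A minimal sketch of that simpler approach, reusing the KinectManager calls from the question (FootRight is an assumption about the available joint types; adjust to your SDK):

float GetUserHeightSimple(long userId)
{
    var manager = KinectManager.Instance;
    int head = (int)KinectInterop.JointType.Head;
    int footLeft = (int)KinectInterop.JointType.FootLeft;
    int footRight = (int)KinectInterop.JointType.FootRight;

    if (!manager.IsJointTracked(userId, head) ||
        !manager.IsJointTracked(userId, footLeft) ||
        !manager.IsJointTracked(userId, footRight))
    {
        return 0f;
    }

    // only the y axis matters for height: head minus the average foot height, in centimeters
    float headY = manager.GetJointKinectPosition(userId, head).y;
    float feetY = (manager.GetJointKinectPosition(userId, footLeft).y +
                   manager.GetJointKinectPosition(userId, footRight).y) * 0.5f;
    return (headY - feetY) * 100f + 8f; // + 8 cm offset for the top of the head, as in the question
}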

How to center parent game object in the screen?

I am trying to make a Scrabble-style word game using a fixed camera, but I have a simple issue.
I add some boxes as game objects, and the number of boxes is the length of the word, so if the word is "Fish" we add 4 boxes dynamically. I did that successfully, but I can't center those boxes on the screen. I tried adding the game objects as children of another game object and then centering the parent, but with no effect.
This is my code:
void Start () {
    for (int i = 0; i < word.Length; i++) {
        GameObject LetterSpaceObj = Instantiate(Resources.Load("LetterSpace", typeof(GameObject))) as GameObject;
        LetterSpaceObj.transform.parent = gameObject.transform;
        LetterSpaceObj.transform.localPosition = new Vector2(i * 1.5f, 0.0f);
        LetterSpaceObj.name = "LetterSpace-" + count.ToString();
        count++;
    }
    gameObject.transform.position = new Vector2(0.0f, 0.0f);
}
This image shows you the idea:
I believe your code is working, but the problem is that your first letter is located at the parent object's position, and every letter after that is added to the right. That means that when you center the parent object, you are really putting the first letter in the center of the screen.
If you run the game and look at where the parent object sits in the Scene view, you can confirm this. What you can do instead is, rather than placing the parent at the center of the screen, offset it by an amount proportional to the length of the word.
gameObject.transform.position = new Vector2 (-(word.Length / 2.0f) * 1.5f, 0.0f);
You might also want to consider changing some of those constants, such as the 1.5f, into variables with names like LetterSize, or basing them on the actual prefab, so that any future changes work automatically; a sketch of that follows.
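For illustration, the Start method could then look roughly like this (letterSpacing is an assumed field replacing the 1.5f constant; the centering offset is the one from the answer above):

[SerializeField] float letterSpacing = 1.5f;

void Start () {
    for (int i = 0; i < word.Length; i++) {
        GameObject letterSpaceObj = Instantiate(Resources.Load("LetterSpace", typeof(GameObject))) as GameObject;
        letterSpaceObj.transform.parent = gameObject.transform;
        letterSpaceObj.transform.localPosition = new Vector2(i * letterSpacing, 0.0f);
        letterSpaceObj.name = "LetterSpace-" + (i + 1);
    }
    // shift the parent left so the row of letters ends up centered on the screen
    gameObject.transform.position = new Vector2(-(word.Length / 2.0f) * letterSpacing, 0.0f);
}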
This is the final solution after some edits to fix the issue:
GameObject LetterSpaceObjRow;
int count = 1;
string word = "Father";
float ObjectXPos;
float LocalScaleX;

void Start () {
    for (int i = 0; i < word.Length; i++) {
        GameObject LetterSpaceObj = Instantiate(Resources.Load("LetterSpace", typeof(GameObject))) as GameObject;
        LocalScaleX = LetterSpaceObj.transform.localScale.x;
        ObjectXPos = i * (LocalScaleX + (LocalScaleX / 2));
        LetterSpaceObj.transform.parent = gameObject.transform;
        LetterSpaceObj.transform.localPosition = new Vector2(ObjectXPos, 0.0f);
        LetterSpaceObj.name = "LetterSpace-" + count.ToString();
        count++;
    }
    gameObject.transform.position = new Vector2(-(ObjectXPos / 2.0f), 0.0f);
}

Why is comparing float values so difficult?

I am a newbie on the Unity platform. I have a 2D game that contains 10 boxes following each other vertically in a chain. When a box goes off screen, I move it above the box at the top, so the chain repeats infinitely, like a parallax scrolling background.
I check whether a box has gone off screen by comparing its position with a specified float value. My code is below.
void Update () {
    offSet = currentSquareLine.transform.position;
    currentSquareLine.transform.position = new Vector2(0f, -2f) + offSet;
    Vector2 vectorOne = currentSquareLine.transform.position;
    Vector2 vectorTwo = new Vector2(0f, -54f);
    if (vectorOne.y < vectorTwo.y) {
        string name = currentSquareLine.name;
        int squareLineNumber = int.Parse(name.Substring(11));
        if (squareLineNumber < 10) {
            squareLineNumber++;
        } else {
            squareLineNumber = 1;
        }
        GameObject squareLineAbove = GameObject.Find("Square_Line" + squareLineNumber);
        offSet = (Vector2) squareLineAbove.transform.position + new Vector2(0f, 1.1f);
        currentSquareLine.transform.position = offSet;
    }
}
As you can see, when I compare vectorOne.y and vectorTwo.y, things get ugly. Some boxes lengthen and some shorten the distance between each other, even though I give exact vector values in the code above.
I've searched for a solution for a week and tried lots of code like Mathf.Approximately and Mathf.Round, but none of it compares float values the way I expect. If Unity never compares float values the way I expect, I think I need to change my approach.
I am waiting for your godlike advice, thanks!
EDIT
Here is my screen. I have 10 box lines moving downwards vertically.
When Square_Line10 goes off screen, I update its position to be above Square_Line1, but the distance between them increases unexpectedly.
Okay, I found a solution that works like a charm.
I use an array and check it in two for loops: the first one moves the boxes, and the second one checks whether a box went off screen, like below.
public GameObject[] box;
float boundary = -5.5f;
float boxDistance = 1.1f;
float speed = -0.1f;

// Update is called once per frame
void Update () {
    for (int i = 0; i < box.Length; i++) {
        box[i].transform.position = box[i].transform.position + new Vector3(0, speed, 0);
    }
    for (int i = 0; i < box.Length; i++)
    {
        if (box[i].transform.position.y < boundary)
        {
            int topIndex = (i + 1) % box.Length;
            box[i].transform.position = new Vector3(box[i].transform.position.x, box[topIndex].transform.position.y + boxDistance, box[i].transform.position.z);
            break;
        }
    }
}
I attached it to MainCamera.
Try this solution:
bool IsApproximately(float a, float b, float tolerance = 0.01f) {
    return Mathf.Abs(a - b) < tolerance;
}
The reason is that the tolerance used by the built-in comparison isn't useful for this kind of check. Lower the tolerance value in the call if you need more precision.
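For example, a small sketch of why this matters (accumulated rounding error makes an exact comparison fail where the tolerance check succeeds):

void Start () {
    float sum = 0f;
    for (int i = 0; i < 10; i++) sum += 0.1f;   // ten additions of 0.1f accumulate rounding error

    Debug.Log(sum == 1f);                  // likely prints False: sum ends up around 1.0000001
    Debug.Log(IsApproximately(sum, 1f));   // prints True with the default 0.01f tolerance
}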