AR camera distance measurement - unity3d

I have a question about AR (Augmented Reality).
I want to know how to show the distance (in centimeters, for example) between the AR camera and a target object, using a smartphone.
Can I do that in Unity? Should I use AR Foundation, and with ARCore? How do I write the code?
I tried to find some related code (below), but it only seems to print the distance between one object and another object, nothing about the "AR camera"...
public Transform other;

void Update() {
    if (other) {
        float dist = Vector3.Distance(other.position, transform.position);
        print("Distance to other: " + dist);
    }
}
Thanks again!

Here is how to do it in Unity with AR Foundation 4.1.
This example script prints the depth in meters at the center of the depth texture and works with both ARCore and ARKit:
using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Assertions;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class GetDepthOfCenterPixel : MonoBehaviour {
    // assign this field in the Inspector
    [SerializeField] AROcclusionManager manager = null;

    IEnumerator Start() {
        while (ARSession.state < ARSessionState.SessionInitializing) {
            // manager.descriptor.supportsEnvironmentDepthImage will return a correct value once ARSession.state >= ARSessionState.SessionInitializing
            yield return null;
        }
        if (!manager.descriptor.supportsEnvironmentDepthImage) {
            Debug.LogError("!manager.descriptor.supportsEnvironmentDepthImage");
            yield break;
        }
        while (true) {
            if (manager.TryAcquireEnvironmentDepthCpuImage(out var cpuImage) && cpuImage.valid) {
                using (cpuImage) {
                    Assert.IsTrue(cpuImage.planeCount == 1);
                    var plane = cpuImage.GetPlane(0);
                    var dataLength = plane.data.Length;
                    var pixelStride = plane.pixelStride;
                    var rowStride = plane.rowStride;
                    Assert.AreEqual(0, dataLength % rowStride, "dataLength should be divisible by rowStride without a remainder");
                    Assert.AreEqual(0, rowStride % pixelStride, "rowStride should be divisible by pixelStride without a remainder");
                    var numOfRows = dataLength / rowStride;
                    var centerRowIndex = numOfRows / 2;
                    var centerPixelIndex = rowStride / (pixelStride * 2);
                    var centerPixelData = plane.data.GetSubArray(centerRowIndex * rowStride + centerPixelIndex * pixelStride, pixelStride);
                    var depthInMeters = convertPixelDataToDistanceInMeters(centerPixelData.ToArray(), cpuImage.format);
                    print($"depth texture size: ({cpuImage.width},{cpuImage.height}), pixelStride: {pixelStride}, rowStride: {rowStride}, pixel pos: ({centerPixelIndex}, {centerRowIndex}), depthInMeters of the center pixel: {depthInMeters}");
                }
            }
            yield return null;
        }
    }

    float convertPixelDataToDistanceInMeters(byte[] data, XRCpuImage.Format format) {
        switch (format) {
            case XRCpuImage.Format.DepthUint16:
                return BitConverter.ToUInt16(data, 0) / 1000f;
            case XRCpuImage.Format.DepthFloat32:
                return BitConverter.ToSingle(data, 0);
            default:
                throw new Exception($"Format not supported: {format}");
        }
    }
}

I'm working on AR depth images as well, and the basic idea is:
Acquire an image using the API; normally it's in the Depth16 format.
Split the image into short buffers, since Depth16 means each pixel is 16 bits.
Get the distance value, which is stored in the lower 13 bits of each short value; you can extract it with (depthSample & 0x1FFF). That gives you the distance for each pixel, normally in millimeters.
By doing this for all the pixels, you can create a depth image and store it as JPEG or another format. Here's sample code using AR Engine to get the distance:
try (Image depthImage = arFrame.acquireDepthImage()) {
    int imwidth = depthImage.getWidth();
    int imheight = depthImage.getHeight();
    Image.Plane plane = depthImage.getPlanes()[0];
    ShortBuffer shortDepthBuffer = plane.getBuffer().asShortBuffer();
    File sdCardFile = Environment.getExternalStorageDirectory();
    Log.i(TAG, "The storage path is " + sdCardFile);
    File file = new File(sdCardFile, "RawdepthImage.jpg");
    Bitmap disBitmap = Bitmap.createBitmap(imwidth, imheight, Bitmap.Config.RGB_565);
    for (int i = 0; i < imheight; i++) {
        for (int j = 0; j < imwidth; j++) {
            int index = (i * imwidth + j);
            shortDepthBuffer.position(index);
            short depthSample = shortDepthBuffer.get();
            short depthRange = (short) (depthSample & 0x1FFF);
            // If you only want the distance value in millimeters, depthRange is it
            byte value = (byte) depthRange;
            disBitmap.setPixel(j, i, Color.rgb(value, value, value));
        }
    }
    // I rotate the image for a better view
    Matrix matrix = new Matrix();
    matrix.setRotate(90);
    Bitmap rotatedBitmap = Bitmap.createBitmap(disBitmap, 0, 0, imwidth, imheight, matrix, true);
    try {
        FileOutputStream out = new FileOutputStream(file);
        rotatedBitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
        out.flush();
        out.close();
        MainActivity.num++;
    } catch (Exception e) {
        e.printStackTrace();
    }
} catch (Exception e) {
    e.printStackTrace();
}

While the answers are great, they may be too complicated and advanced for this question, which is about the distance between the ARCamera and another object, and not about the depth of pixels and their occlusion.
transform.position gives you the position of whatever game object you attach the script to in the hierarchy. So attach the script to the ARCamera object. And obviously, other should be the target object.
Alternatively, you can get references to the two game objects using Inspector variables or GetComponent.
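For illustration, here is a minimal sketch of that approach; the component name, the targetObject field, and the optional UI Text are assumptions, not part of the original answer:
using UnityEngine;
using UnityEngine.UI;

// Attach this to the AR Camera (or any camera). Assign the target object
// and, optionally, a UI Text in the Inspector. The names are illustrative.
public class DistanceToTarget : MonoBehaviour
{
    [SerializeField] Transform targetObject;   // the object you want to measure to
    [SerializeField] Text distanceLabel;       // optional UI Text to display the result

    void Update()
    {
        if (targetObject == null)
            return;

        // Distance between this camera and the target; AR Foundation uses meters as units.
        float distanceMeters = Vector3.Distance(transform.position, targetObject.position);

        if (distanceLabel != null)
            distanceLabel.text = (distanceMeters * 100f).ToString("F1") + " cm";
        else
            Debug.Log("Distance to target: " + distanceMeters + " m");
    }
}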

// raycasting should be done in Update()
Ray ray = new Ray(cam.transform.position, cam.transform.forward);
if (Physics.Raycast(ray, out RaycastHit info, 50f, layerMaskAR)) // 50f = 50 meter detection range
{
    distanca.text = string.Format("{0}: {1:N2}m", info.collider.name, info.distance);
}
This function does what you need; the result is written to a UI Text element, and the layer is assigned to the object/prefab.
int layerMaskAR = 1 << 6; (6 here because the 6th layer is my custom layer "layerMaskAR")
The raycast only hits objects in that layer; everything else is ignored. If you don't want to ignore anything, remove the layer mask from the raycast and it will print the name of anything that has a collider.
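Here is a rough, self-contained version of the same idea as a sketch; the camera reference, the 50 m range, and layer index 6 are assumptions carried over from the snippet above:
using UnityEngine;
using UnityEngine.UI;

// Casts a ray from the camera's position along its forward direction every frame
// and shows the name and distance of the first collider hit on a UI Text.
public class CameraRaycastDistance : MonoBehaviour
{
    [SerializeField] Camera cam;        // e.g. the AR Camera
    [SerializeField] Text distanceText; // UI Text element for the output

    readonly int layerMaskAR = 1 << 6;  // only hit objects on custom layer 6

    void Update()
    {
        Ray ray = new Ray(cam.transform.position, cam.transform.forward);
        if (Physics.Raycast(ray, out RaycastHit info, 50f, layerMaskAR))
        {
            distanceText.text = string.Format("{0}: {1:N2}m", info.collider.name, info.distance);
        }
    }
}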

Totally doable with this line of code:
Vector3.Distance(gameObject.transform.position, Camera.main.transform.position)

Related

Why does the scale value of the object change?

I'm trying to learn the Unity game engine and I'm working on a mechanic. Objects are created from an object pool at a fixed location. However, when I make them child objects of a different object, finish the process, and deactivate them, the scale values of the reused objects change. Why is this happening?
private IEnumerator Create_Green_Pizzas(List<Transform> pizzas)
{
    int pizzaCount = 0;
    while (pizzaCount < pizzas.Count)
    {
        currentColumn++;
        if (currentColumn >= columnCount)
        {
            currentRow++;
            currentColumn = 0;
        }
        if (currentRow >= rowCount)
        {
            pizzaLevel++;
            currentRow = 0;
            currentColumn = 0;
        }
        for (int i = 0; i < pizzas.Count; i++)
        {
            if (!pizzas[i].gameObject.activeInHierarchy)
            {
                pizzas[i].gameObject.SetActive(true);
                pizzas[i].parent = pizzasParent;
                pizzas[i].rotation = transform.rotation;
                pizzas[i].position = initialSlotPosition + new Vector3(((pizzas[i].lossyScale.x) * currentColumn), (pizzaLevel * pizzas[i].lossyScale.y), ((-pizzas[i].lossyScale.z) * currentRow));
                pizzas[i].GetComponent<MeshRenderer>().material.color = Color.green;
                _player.collectableGreenPizzas.Add(pizzas[i]);
                pizzaCount++;
                yield return new WaitForSeconds(0.5f);
                break;
            }
        }
    }
}
Setting the GameObject's parent directly might be the cause, so instead of:
pizzas[i].parent = pizzasParent;
try:
pizzas[i].SetParent(pizzasParent, true);
You can find more info about this method here.
If a GameObject has a parent, it becomes part of the parent's frame of reference: its transform is now a local transform that depends on the parent's transform.
This means that if a child transform has a scale of (1, 1, 1), it has the same scale as its parent.
Here's a small clip that illustrates this: two spheres that are the same size while they each have a different scale.
However, you can force the GameObject to preserve its world coordinates and scale using this line:
child.SetParent(parent, true);
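As a minimal sketch of that behavior (the object names are made up for illustration): with worldPositionStays set to true the child keeps its world position, rotation, and scale, while false keeps its local values, so its apparent size changes under a scaled parent.
using UnityEngine;

public class ReparentExample : MonoBehaviour
{
    void Start()
    {
        // A parent with a non-uniform scale and a unit-scale child, purely for illustration.
        Transform parent = new GameObject("ScaledParent").transform;
        parent.localScale = new Vector3(2f, 1f, 1f);

        Transform child = GameObject.CreatePrimitive(PrimitiveType.Sphere).transform;

        // Keeps the child's world position/rotation/scale; its localScale is adjusted to compensate.
        child.SetParent(parent, true);
        Debug.Log("world scale kept: " + child.lossyScale);   // (1, 1, 1)

        // Keeps the child's local values instead, so its world scale now follows the parent.
        child.SetParent(null);
        child.SetParent(parent, false);
        Debug.Log("local scale kept: " + child.lossyScale);   // (2, 1, 1)
    }
}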

(Unity) How to bake data (Vector3 and Color32) onto render textures?

With the recent introduction of VFX Graph, attribute maps are being used to 'Set Position/Color from Map'.
In order to get an attribute map, one must bake position and color data into render textures. But I could not find any reference on how to do this, not even in the Unity docs.
Any help on how to do this will be appreciated!
Most of the time you would want to use a Compute Shader to bake a list of points into your textures. I'd suggest you check these repositories for reference:
Bake Skinned Mesh Renderer Data into textures
https://github.com/keijiro/Smrvfx
Bake Kinect data into textures
https://github.com/roelkok/Kinect-VFX-Graph
Bake pointcloud data into texture:
https://github.com/keijiro/Pcx
Personally, I'm using these scripts which work for my purpose though I'm no expert in Compute Shaders:
public class FramePositionBaker
{
    ComputeShader bakerShader;
    RenderTexture VFXpositionMap;
    RenderTexture inputPositionTexture;
    private ComputeBuffer positionBuffer;
    const int texSize = 256;

    public FramePositionBaker(RenderTexture _VFXPositionMap)
    {
        inputPositionTexture = new RenderTexture(texSize, texSize, 0, RenderTextureFormat.ARGBFloat);
        inputPositionTexture.enableRandomWrite = true;
        inputPositionTexture.Create();
        bakerShader = (ComputeShader)Resources.Load("FramePositionBaker");
        if (bakerShader == null)
        {
            Debug.LogError("[FramePositionBaker] baking shader not found in any Resources folder");
        }
        VFXpositionMap = _VFXPositionMap;
    }

    public void BakeFrame(ref Vector3[] vertices)
    {
        int pointCount = vertices.Length;
        positionBuffer = new ComputeBuffer(pointCount, 3 * sizeof(float));
        positionBuffer.SetData(vertices);
        //Debug.Log("Length " + vertices.Length);
        bakerShader.SetInt("dim", texSize);
        bakerShader.SetTexture(0, "PositionTexture", inputPositionTexture);
        bakerShader.SetBuffer(0, "PositionBuffer", positionBuffer);
        bakerShader.Dispatch(0, (texSize / 8) + 1, (texSize / 8) + 1, 1);
        Graphics.CopyTexture(inputPositionTexture, VFXpositionMap);
        positionBuffer.Dispose();
    }
}
The compute shader:
// Each #kernel tells which function to compile; you can have many kernels
#pragma kernel CSMain
// Create a RenderTexture with enableRandomWrite flag and set it
// with cs.SetTexture
RWTexture2D<float4> PositionTexture;
uint dim;
Buffer<float3> PositionBuffer;
[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
// TODO: insert actual code here!
uint index = id.y * dim + id.x;
uint lastIndex = PositionBuffer.Length - 1;
// Trick for generating a pseudo-random number.
// Inspired by a similar trick in Keijiro's PCX repo (BakedPointCloud.cs).
// The points that are in excess because of the square texture, point randomly to a point in the texture.
// e.g. if (index > lastIndex) index = 0 generates excessive particles in the first position, resulting in a visible artifact.
//if (index > lastIndex) index = ( index * 132049U ) % lastIndex;
float3 pos;
if (index > lastIndex && lastIndex != 0) {
//pos = 0;
index = ( index * 132049U ) % lastIndex;
}
pos = PositionBuffer[index];
PositionTexture[id.xy] = float4 (pos.x, pos.y, pos.z, 1);
}
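For context, here is a hedged usage sketch of the baker above. The exposed texture property name "PositionMap", the MeshFilter source, and the component name are assumptions, not part of the original answer; the 256x256 ARGBFloat render texture matches the texSize constant inside FramePositionBaker so that Graphics.CopyTexture works.
using UnityEngine;
using UnityEngine.VFX;

// Bakes a mesh's vertices into a position map every frame and hands it to a VFX Graph.
// "PositionMap" is an assumed exposed Texture property name on the graph.
public class PositionMapDriver : MonoBehaviour
{
    [SerializeField] MeshFilter source;
    [SerializeField] VisualEffect vfx;

    RenderTexture positionMap;
    FramePositionBaker baker;

    void Start()
    {
        positionMap = new RenderTexture(256, 256, 0, RenderTextureFormat.ARGBFloat);
        positionMap.enableRandomWrite = true;
        positionMap.Create();

        baker = new FramePositionBaker(positionMap);
        vfx.SetTexture("PositionMap", positionMap);
    }

    void Update()
    {
        Vector3[] vertices = source.mesh.vertices;
        baker.BakeFrame(ref vertices);
    }
}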

How to correctly get an object from screenshot point coordinates in Unity3D?

I want to do the following:
get a screenshot from the camera
process it with a Python script and find some points
for each point, find the object that is projected to that point
The screenshot-capturing code works well. The Python script output has the following form:
x=X1 y=Y1 ... \n
x=X2 y=Y2 ... \n
where 0 <= X1 < screenshot width and 0 <= Y1 < screenshot height.
For example, in debug mode I can see the following image:
As you can see, some of the points are located on the chair/table projections.
And I need to get those chair/table objects (each object has a BoxCollider attached).
I am trying to get a ray with camera.ScreenPointToRay and Physics.Raycast, but I get the wrong output:
private void ApplyClassTextures()
{
    var screenshot = GetComponent<Screenshoter>().MakeScreenshot();
    Debug.Log(screenshot);
    var map = GetComponent<Scenemap>().MakeMap(screenshot);
    var objectContours = new Dictionary<GameObject, List<Scenemap.MapContour>>();
    var camera = GetComponent<Camera>();
    foreach (var item in map)
    {
        var ray = camera.ScreenPointToRay(new Vector3(item.X, item.Y, 0.0f));
        var rayHit = new RaycastHit();
        if (Physics.Raycast(ray, out rayHit, raycastHitDistance))
        {
            var obj = rayHit.collider.gameObject;
            if (!objectContours.ContainsKey(obj))
            {
                Debug.Log(obj);
                objectContours.Add(obj, new List<Scenemap.MapContour>());
            }
            objectContours[obj].Add(item);
        }
    }
}
And there I only see a few floor/wall planes and one chair, but not the other chairs or the table.

How do I convert a RectTransform (UI Panel) to screen coordinates?

I want to determine the screen coordinates of a RectTransform in Unity3D. How is this done, taking into consideration that the element may be anchored, or even scaled?
I am trying to use RectTransform.GetWorldCorners and then converting each Vector3 to screen coordinates, but the values are wrong.
public Rect GetScreenCoordinates(RectTransform uiElement)
{
    var worldCorners = new Vector3[4];
    uiElement.GetWorldCorners(worldCorners);
    var result = new Rect(
        worldCorners[0].x,
        worldCorners[0].y,
        worldCorners[2].x - worldCorners[0].x,
        worldCorners[2].y - worldCorners[0].y);
    for (int index = 0; index < 4; index++)
        result[index] = Camera.main.WorldToScreenPoint(result[index]);
    return result;
}
Although the documentation says that GetWorldCorners returns coordinates in world space, they are actually already in screen space (at least when the canvas is not in World Space render mode). So the solution is simple:
public Rect GetScreenCoordinates(RectTransform uiElement)
{
    var worldCorners = new Vector3[4];
    uiElement.GetWorldCorners(worldCorners);
    var result = new Rect(
        worldCorners[0].x,
        worldCorners[0].y,
        worldCorners[2].x - worldCorners[0].x,
        worldCorners[2].y - worldCorners[0].y);
    return result;
}
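For example, a quick usage sketch (the component and field names are illustrative, not from the original answer): attach it to any GameObject, assign the RectTransform in the Inspector, and it logs whether the mouse is over the element using the screen-space rect.
using UnityEngine;

public class ScreenRectDemo : MonoBehaviour
{
    [SerializeField] RectTransform panel;

    void Update()
    {
        Rect screenRect = GetScreenCoordinates(panel);
        // Logs the rect and whether the mouse cursor is currently inside it.
        Debug.Log(screenRect + " contains mouse: " + screenRect.Contains(Input.mousePosition));
    }

    // Same helper as in the answer above.
    public Rect GetScreenCoordinates(RectTransform uiElement)
    {
        var worldCorners = new Vector3[4];
        uiElement.GetWorldCorners(worldCorners);
        return new Rect(
            worldCorners[0].x,
            worldCorners[0].y,
            worldCorners[2].x - worldCorners[0].x,
            worldCorners[2].y - worldCorners[0].y);
    }
}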

Why is comparing float values so difficult?

I am a newbie on the Unity platform. I have a 2D game that contains 10 boxes following each other vertically in a chain. When a box goes off screen, I move it above the box at the top, so the chain loops infinitely, like a repeating parallax scrolling background.
I check whether a box has gone off screen by comparing its position with a specified float value. I am sharing my code below.
void Update () {
    offSet = currentSquareLine.transform.position;
    currentSquareLine.transform.position = new Vector2 (0f, -2f) + offSet;
    Vector2 vectorOne = currentSquareLine.transform.position;
    Vector2 vectorTwo = new Vector2 (0f, -54f);
    if (vectorOne.y < vectorTwo.y) {
        string name = currentSquareLine.name;
        int squareLineNumber = int.Parse(name.Substring(11));
        if (squareLineNumber < 10) {
            squareLineNumber++;
        } else {
            squareLineNumber = 1;
        }
        GameObject squareLineAbove = GameObject.Find("Square_Line" + squareLineNumber);
        offSet = (Vector2) squareLineAbove.transform.position + new Vector2(0f, 1.1f);
        currentSquareLine.transform.position = offSet;
    }
}
As you can see, when I compare vectorOne.y and vectorTwo.y, things get ugly. Some boxes lengthen and some boxes shorten the distance between each other, even though I give the exact vector values in the code above.
I've searched for a solution for a week and tried lots of approaches like Mathf.Approximately and Mathf.Round, but none of them managed to compare the float values properly. If Unity never compares float values the way I expect, I think I need to change my approach.
I am waiting for your godlike advice, thanks!
EDIT
Here is my screen. I have 10 box lines that move downwards vertically.
When Square_Line10 goes off screen, I move it above Square_Line1, but the distance between them increases unexpectedly.
Okay, I found a solution that works like a charm.
I need to use an array and check the boxes in two for loops. The first loop moves the boxes and the second one checks whether a box has gone off screen, like below:
public GameObject[] box;
float boundary = -5.5f;
float boxDistance = 1.1f;
float speed = -0.1f;

// Update is called once per frame
void Update () {
    for (int i = 0; i < box.Length; i++) {
        box[i].transform.position = box[i].transform.position + new Vector3(0, speed, 0);
    }
    for (int i = 0; i < box.Length; i++)
    {
        if (box[i].transform.position.y < boundary)
        {
            int topIndex = (i + 1) % box.Length;
            box[i].transform.position = new Vector3(box[i].transform.position.x, box[topIndex].transform.position.y + boxDistance, box[i].transform.position.z);
            break;
        }
    }
}
I attached it to MainCamera.
Try this solution:
bool IsApproximately(float a, float b, float tolerance = 0.01f) {
    return Mathf.Abs(a - b) < tolerance;
}
The reason is that the tolerance used by the built-in comparison (Mathf.Approximately) is too small to be useful here. Lower the tolerance value in the function call if you need more precision.
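For instance, here is a minimal, self-contained illustration (the class name FloatCompareDemo is made up) of why an exact comparison fails after repeated additions and how the helper handles it:
using UnityEngine;

public class FloatCompareDemo : MonoBehaviour
{
    void Start()
    {
        float sum = 0f;
        for (int i = 0; i < 10; i++)
            sum += 0.1f;                      // accumulates rounding error

        Debug.Log(sum == 1f);                 // False: sum is actually ~1.0000001
        Debug.Log(IsApproximately(sum, 1f));  // True with a practical tolerance
    }

    bool IsApproximately(float a, float b, float tolerance = 0.01f) {
        return Mathf.Abs(a - b) < tolerance;
    }
}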