Why does the scale value of the object change?

I'm learning the Unity game engine and trying to develop a mechanic. Objects are spawned from an object pool at a fixed location. However, after I make them child objects of a different object, finish the process, and deactivate them, the scale values of the re-spawned objects have changed. Why is this happening?
private IEnumerator Create_Green_Pizzas(List<Transform> pizzas)
{
    int pizzaCount = 0;
    while (pizzaCount < pizzas.Count)
    {
        currentColumn++;
        if (currentColumn >= columnCount)
        {
            currentRow++;
            currentColumn = 0;
        }
        if (currentRow >= rowCount)
        {
            pizzaLevel++;
            currentRow = 0;
            currentColumn = 0;
        }
        for (int i = 0; i < pizzas.Count; i++)
        {
            if (!pizzas[i].gameObject.activeInHierarchy)
            {
                pizzas[i].gameObject.SetActive(true);
                pizzas[i].parent = pizzasParent;
                pizzas[i].rotation = transform.rotation;
                pizzas[i].position = initialSlotPosition + new Vector3(pizzas[i].lossyScale.x * currentColumn, pizzaLevel * pizzas[i].lossyScale.y, -pizzas[i].lossyScale.z * currentRow);
                pizzas[i].GetComponent<MeshRenderer>().material.color = Color.green;
                _player.collectableGreenPizzas.Add(pizzas[i]);
                pizzaCount++;
                yield return new WaitForSeconds(0.5f);
                break;
            }
        }
    }
}

Setting the GameObject's parent through the parent property might be the cause, so instead of:
pizzas[i].parent = pizzasParent;
try:
pizzas[i].SetParent(pizzasParent, true);
You can find more info about this method in the Unity documentation for Transform.SetParent.

When a GameObject has a parent, it becomes part of the parent's frame of reference: its transform is now a local transform that depends on the parent's transform.
This means that a child transform with a scale of (1, 1, 1) has the same scale as its parent, so two spheres can be rendered the same size on screen even though they have different scale values.
However, you can force the GameObject to preserve its world coordinates and scale by using this line:
child.SetParent(parent, true);
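As a quick illustration of what the worldPositionStays flag does, here is a minimal sketch (component and field names are placeholders, not from the question):

using UnityEngine;

// Sketch: how worldPositionStays affects a child re-parented under a scaled parent.
public class ReparentExample : MonoBehaviour
{
    [SerializeField] Transform scaledParent; // e.g. localScale = (2, 2, 2)
    [SerializeField] Transform pooledChild;  // e.g. localScale = (1, 1, 1)

    void Start()
    {
        // worldPositionStays = true: keeps the child's world size; Unity
        // recomputes localScale (here it becomes 0.5), so nothing changes on screen.
        pooledChild.SetParent(scaledParent, true);

        // worldPositionStays = false: keeps the LOCAL values instead; localScale
        // stays (1, 1, 1), so the child is drawn twice as large in the world.
        // pooledChild.SetParent(scaledParent, false);
    }
}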

AR camera distance measurement

I have a question about AR (Augmented Reality).
I want to know how to show the distance information (in centimeters, for example) between the AR camera and a target object, using a smartphone.
Can I do that in Unity? Should I use AR Foundation, and with ARCore? How would I write the code?
I tried to find some related code (below), but it seems to just print the distance between one object and another, nothing about the "AR camera"...
Transform other;
if (other != null)
{
    float dist = Vector3.Distance(other.position, transform.position);
    print("Distance to other: " + dist);
}
Thanks again!
Here is how to do it in Unity with AR Foundation 4.1.
This example script prints the depth in meters at the depth texture's center and works with both ARCore and ARKit:
using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Assertions;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class GetDepthOfCenterPixel : MonoBehaviour {
    // assign this field in inspector
    [SerializeField] AROcclusionManager manager = null;

    IEnumerator Start() {
        while (ARSession.state < ARSessionState.SessionInitializing) {
            // manager.descriptor.supportsEnvironmentDepthImage will return a correct value
            // if ARSession.state >= ARSessionState.SessionInitializing
            yield return null;
        }
        if (!manager.descriptor.supportsEnvironmentDepthImage) {
            Debug.LogError("!manager.descriptor.supportsEnvironmentDepthImage");
            yield break;
        }
        while (true) {
            if (manager.TryAcquireEnvironmentDepthCpuImage(out var cpuImage) && cpuImage.valid) {
                using (cpuImage) {
                    Assert.IsTrue(cpuImage.planeCount == 1);
                    var plane = cpuImage.GetPlane(0);
                    var dataLength = plane.data.Length;
                    var pixelStride = plane.pixelStride;
                    var rowStride = plane.rowStride;
                    Assert.AreEqual(0, dataLength % rowStride, "dataLength should be divisible by rowStride without a remainder");
                    Assert.AreEqual(0, rowStride % pixelStride, "rowStride should be divisible by pixelStride without a remainder");
                    var numOfRows = dataLength / rowStride;
                    var centerRowIndex = numOfRows / 2;
                    var centerPixelIndex = rowStride / (pixelStride * 2);
                    var centerPixelData = plane.data.GetSubArray(centerRowIndex * rowStride + centerPixelIndex * pixelStride, pixelStride);
                    var depthInMeters = convertPixelDataToDistanceInMeters(centerPixelData.ToArray(), cpuImage.format);
                    print($"depth texture size: ({cpuImage.width},{cpuImage.height}), pixelStride: {pixelStride}, rowStride: {rowStride}, pixel pos: ({centerPixelIndex}, {centerRowIndex}), depthInMeters of the center pixel: {depthInMeters}");
                }
            }
            yield return null;
        }
    }

    float convertPixelDataToDistanceInMeters(byte[] data, XRCpuImage.Format format) {
        switch (format) {
            case XRCpuImage.Format.DepthUint16:
                return BitConverter.ToUInt16(data, 0) / 1000f;
            case XRCpuImage.Format.DepthFloat32:
                return BitConverter.ToSingle(data, 0);
            default:
                throw new Exception($"Format not supported: {format}");
        }
    }
}
I'm working on AR depth images as well, and the basic idea is:
1. Acquire an image using the API; normally it's in the Depth16 format.
2. Split the image into short buffers, since Depth16 means each pixel is 16 bits.
3. Get the distance value, which is stored in the lower 13 bits of each short. You can do this with (shortValue & 0x1FFF); that gives you the distance for each pixel, normally in millimeters.
By doing this over all the pixels, you can create a depth image and store it as JPG or another format. Here's sample code using AR Engine to get the distance:
try (Image depthImage = arFrame.acquireDepthImage()) {
    int imwidth = depthImage.getWidth();
    int imheight = depthImage.getHeight();
    Image.Plane plane = depthImage.getPlanes()[0];
    ShortBuffer shortDepthBuffer = plane.getBuffer().asShortBuffer();
    File sdCardFile = Environment.getExternalStorageDirectory();
    Log.i(TAG, "The storage path is " + sdCardFile);
    File file = new File(sdCardFile, "RawdepthImage.jpg");
    Bitmap disBitmap = Bitmap.createBitmap(imwidth, imheight, Bitmap.Config.RGB_565);
    for (int i = 0; i < imheight; i++) {
        for (int j = 0; j < imwidth; j++) {
            int index = (i * imwidth + j);
            shortDepthBuffer.position(index);
            short depthSample = shortDepthBuffer.get();
            short depthRange = (short) (depthSample & 0x1FFF);
            // If you only want the distance value, here it is
            byte value = (byte) depthRange;
            disBitmap.setPixel(j, i, Color.rgb(value, value, value));
        }
    }
    // I rotate the image for a better view
    Matrix matrix = new Matrix();
    matrix.setRotate(90);
    Bitmap rotatedBitmap = Bitmap.createBitmap(disBitmap, 0, 0, imwidth, imheight, matrix, true);
    try {
        FileOutputStream out = new FileOutputStream(file);
        rotatedBitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
        out.flush();
        out.close();
        MainActivity.num++;
    } catch (Exception e) {
        e.printStackTrace();
    }
} catch (Exception e) {
    e.printStackTrace();
}
While the other answers are great, they may be more complicated and advanced than this question requires, since it is about the distance between the AR camera and another object, not about per-pixel depth and occlusion.
transform.position gives you the position of whatever GameObject you attach the script to in the hierarchy, so attach the script to the AR camera object. And obviously, other should be the target object.
Alternatively, you can get references to the two GameObjects through Inspector fields or GetComponent.
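A minimal component along those lines might look like this (the field name target is a placeholder; attach the script to the AR camera and assign the target in the Inspector):

using UnityEngine;

// Sketch: logs the distance from the AR camera (this GameObject) to a target.
public class DistanceToTarget : MonoBehaviour
{
    [SerializeField] Transform target; // the object to measure against

    void Update()
    {
        if (target != null)
        {
            float dist = Vector3.Distance(target.position, transform.position);
            Debug.Log(string.Format("Distance to target: {0:N2} m", dist));
        }
    }
}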
Here is a function that does what you need. The distance is shown on a UI Text element, and a layer is assigned to the target object/prefab. The raycasting should happen in Update:
Ray ray = new Ray(cam.transform.position, cam.transform.forward);
if (Physics.Raycast(ray, out info, 50f, layerMaskAR)) // 50f gives a 50-meter detection range
{
    distanca.text = string.Format("{0}: {1:N2}m", info.collider.name, info.distance);
}
The layer mask is built like this (6 because my custom layer "layerMaskAR" is the 6th layer):
int layerMaskAR = 1 << 6;
This raycasts only onto objects in that layer; everything else is ignored. If you don't want to ignore anything, remove the layer mask from the Raycast call and it will print the name of anything with a collider.
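Pulled together into one component, that might look like this (the camera reference, the UI Text field, and layer 6 are assumptions carried over from the answer above):

using UnityEngine;
using UnityEngine.UI;

// Sketch: raycasts forward from the camera and shows the hit distance on a UI Text.
public class ARDistanceRaycaster : MonoBehaviour
{
    [SerializeField] Camera cam;          // the AR camera
    [SerializeField] Text distanceLabel;  // "distanca" in the answer above

    int layerMaskAR = 1 << 6; // only hit objects on custom layer 6

    void Update()
    {
        Ray ray = new Ray(cam.transform.position, cam.transform.forward);
        if (Physics.Raycast(ray, out RaycastHit info, 50f, layerMaskAR))
        {
            distanceLabel.text = string.Format("{0}: {1:N2}m", info.collider.name, info.distance);
        }
    }
}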
Totally doable with this line of code:
Vector3.Distance(gameObject.transform.position, Camera.main.transform.position)

(Unity) How to bake data (Vector3 and Color32) onto render textures?

With the recent introduction of VFX Graph, attribute maps are being used to 'Set Position/Color from Map'.
In order to get an attribute map, one must bake position and color data into render textures. But I couldn't find any reference on how to do this, not even in the Unity docs.
Any help on how to do this will be appreciated!
Most of the time you would want to use a Compute Shader to bake a list of points into your textures. I'd suggest you check these repositories for reference:
Bake Skinned Mesh Renderer Data into textures
https://github.com/keijiro/Smrvfx
Bake Kinect data into textures
https://github.com/roelkok/Kinect-VFX-Graph
Bake pointcloud data into texture:
https://github.com/keijiro/Pcx
Personally, I'm using the following scripts, which work for my purpose, though I'm no expert in compute shaders:
using UnityEngine;

public class FramePositionBaker
{
    ComputeShader bakerShader;
    RenderTexture VFXpositionMap;
    RenderTexture inputPositionTexture;
    private ComputeBuffer positionBuffer;
    const int texSize = 256;

    public FramePositionBaker(RenderTexture _VFXPositionMap)
    {
        inputPositionTexture = new RenderTexture(texSize, texSize, 0, RenderTextureFormat.ARGBFloat);
        inputPositionTexture.enableRandomWrite = true;
        inputPositionTexture.Create();
        bakerShader = (ComputeShader)Resources.Load("FramePositionBaker");
        if (bakerShader == null)
        {
            Debug.LogError("[FramePositionBaker] baking shader not found in any Resources folder");
        }
        VFXpositionMap = _VFXPositionMap;
    }

    public void BakeFrame(ref Vector3[] vertices)
    {
        int pointCount = vertices.Length;
        positionBuffer = new ComputeBuffer(pointCount, 3 * sizeof(float));
        positionBuffer.SetData(vertices);
        bakerShader.SetInt("dim", texSize);
        bakerShader.SetTexture(0, "PositionTexture", inputPositionTexture);
        bakerShader.SetBuffer(0, "PositionBuffer", positionBuffer);
        bakerShader.Dispatch(0, (texSize / 8) + 1, (texSize / 8) + 1, 1);
        Graphics.CopyTexture(inputPositionTexture, VFXpositionMap);
        positionBuffer.Dispose();
    }
}
The compute shader:
// Each #kernel tells which function to compile; you can have many kernels
#pragma kernel CSMain

// Create a RenderTexture with the enableRandomWrite flag and set it with cs.SetTexture
RWTexture2D<float4> PositionTexture;
uint dim;
Buffer<float3> PositionBuffer;

[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    uint index = id.y * dim + id.x;
    uint lastIndex = PositionBuffer.Length - 1;

    // Trick for generating a pseudo-random index, inspired by a similar trick
    // in Keijiro's PCX repo (BakedPointCloud.cs). The points that are in excess
    // because of the square texture point randomly to a point in the buffer:
    // e.g. "if (index > lastIndex) index = 0" would pile the excess particles
    // onto the first position, resulting in a visible artifact.
    if (index > lastIndex && lastIndex != 0) {
        index = (index * 132049U) % lastIndex;
    }

    float3 pos = PositionBuffer[index];
    PositionTexture[id.xy] = float4(pos.x, pos.y, pos.z, 1);
}

Unity C#: when adding 2 items to a list, for some reason the following loop doesn't run?

In the following code, the list "PlayerList" is static in another script. I add 2 items to it in the Start method of their own script. This works perfectly fine with one of them, but when I add another, the returned value is always (0,0,0). Any help would be appreciated.
Vector3 GetDesiredPosition()
{
    Position = new Vector3(0, 0, 0);
    for (int i = 0; i == ControllsHolder.PlayerList.Count - 1; i++) // When multiple players in list it sets position to 0 for some reason?
    {
        if (ControllsHolder.PlayerList[i].Alive)
            Position = new Vector3(Position.x + ControllsHolder.PlayerList[i].transform.position.x, Position.y + ControllsHolder.PlayerList[i].transform.position.y, 0);
        else continue;
    }
    Position = new Vector3(Position.x / ControllsHolder.PlayerList.Count, Position.y / ControllsHolder.PlayerList.Count, 0);
    return Position;
}
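A likely culprit, for what it's worth: the loop condition i == ControllsHolder.PlayerList.Count - 1 only holds on the first iteration when the list has exactly one element, which matches the symptom of the loop working with one player and never running with two. A conventional bound would be:

// Sketch: iterate while i is LESS THAN the count, not equal to count - 1.
for (int i = 0; i < ControllsHolder.PlayerList.Count; i++)
{
    if (ControllsHolder.PlayerList[i].Alive)
        Position += new Vector3(ControllsHolder.PlayerList[i].transform.position.x,
                                ControllsHolder.PlayerList[i].transform.position.y, 0);
}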

I want to get the user's height by using Kinect v2 and Unity3D; this code doesn't work well. What should I do?

I am attempting to get the user's height in Unity using the Xbox Kinect.
Below is my code; I cannot get the height.
This uses the Kinect v2 interface.
// get the user's height with Kinect v2 and Unity3D
float GetUserHeightByLeft(long userid)
{
    float uheight = 0.0f;
    int[] joints = new int[9];
    int head = (int)KinectInterop.JointType.Head;
    joints[0] = head;
    joints[1] = (int)KinectInterop.JointType.Neck;
    int shoulderCenter = (int)KinectInterop.JointType.SpineShoulder;
    joints[2] = shoulderCenter;
    joints[3] = (int)KinectInterop.JointType.SpineMid;
    joints[4] = (int)KinectInterop.JointType.SpineBase;
    joints[5] = (int)KinectInterop.JointType.HipLeft;
    joints[6] = (int)KinectInterop.JointType.KneeLeft;
    joints[7] = (int)KinectInterop.JointType.AnkleLeft;
    joints[8] = (int)KinectInterop.JointType.FootLeft;
    int trackedcount = 0;
    for (int i = 0; i < joints.Length; ++i)
    {
        if (KinectManager.Instance.IsJointTracked(userid, joints[i]))
        {
            ++trackedcount;
        }
    }
    // if all joints that I need have been tracked, compute the user's height
    if (trackedcount == joints.Length)
    {
        for (int i = 0; i < joints.Length - 1; ++i)
        {
            if (KinectManager.Instance.IsJointTracked(userid, joints[i]))
            {
                Vector3 start = 100 * KinectManager.Instance.GetJointKinectPosition(userid, joints[i]);
                Vector3 end = 100 * KinectManager.Instance.GetJointKinectPosition(userid, joints[i + 1]);
                uheight += Mathf.Abs(Vector3.Magnitude(end - start));
            }
        }
        // there is some height Kinect v2 can't capture, so I add it
        uheight += 8;
    }
    return uheight;
}
Given your code, I see a few issues:
The summation to get the total height requires all joints to be tracked at that time. Why do you need all joints to be tracked for the height to be computed? You should only need the head and the feet.
You also check twice whether each joint is tracked.
Using the left side for every foot/knee/hip joint is going to give you math errors, because those joints are offset to one side (our feet aren't in the center of our body). I would track both the right and left foot/knee/hip and then find the center of the two on the x axis.
If you're using the magnitude between two vectors and one knee is far in front of your hip, it's going to give an inflated value. I would only use the y positions of your Kinect joints to calculate the height, as in the sketch below.
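A minimal sketch along those lines, reusing the KinectManager/KinectInterop calls from the question (their availability is assumed):

// Sketch: estimate height from y positions only, averaging the two feet.
float GetUserHeightY(long userid)
{
    var km = KinectManager.Instance;
    int head  = (int)KinectInterop.JointType.Head;
    int footL = (int)KinectInterop.JointType.FootLeft;
    int footR = (int)KinectInterop.JointType.FootRight;

    if (!km.IsJointTracked(userid, head) ||
        !km.IsJointTracked(userid, footL) ||
        !km.IsJointTracked(userid, footR))
        return 0f;

    float headY = km.GetJointKinectPosition(userid, head).y;
    float feetY = (km.GetJointKinectPosition(userid, footL).y +
                   km.GetJointKinectPosition(userid, footR).y) * 0.5f;

    // meters -> centimeters, plus the head-top offset the question's code added
    return (headY - feetY) * 100f + 8f;
}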

Why is comparing float values so difficult?

I am a newbie on the Unity platform. I have a 2D game that contains 10 boxes following each other vertically in a chain. When a box goes off screen, I move it above the box at the top, so the chain repeats infinitely, like a parallax scrolling background.
But I check whether a box has gone off screen by comparing its position with a specified float value. I'm sharing my code below.
void Update () {
    offSet = currentSquareLine.transform.position;
    currentSquareLine.transform.position = new Vector2 (0f, -2f) + offSet;
    Vector2 vectorOne = currentSquareLine.transform.position;
    Vector2 vectorTwo = new Vector2 (0f, -54f);
    if (vectorOne.y < vectorTwo.y) {
        string name = currentSquareLine.name;
        int squareLineNumber = int.Parse(name.Substring(11));
        if (squareLineNumber < 10) {
            squareLineNumber++;
        } else {
            squareLineNumber = 1;
        }
        GameObject squareLineAbove = GameObject.Find("Square_Line" + squareLineNumber);
        offSet = (Vector2) squareLineAbove.transform.position + new Vector2(0f, 1.1f);
        currentSquareLine.transform.position = offSet;
    }
}
As you can see, when I compare vectorOne.y and vectorTwo.y, things get ugly. Some boxes lengthen and some shorten the distance between each other, even though I give exact vector values in the code above.
I've searched for a solution for a week and tried lots of approaches, like Mathf.Approximately and Mathf.Round, but none of them managed to compare the float values properly. If Unity never compares float values the way I expect, I think I need to change my approach.
I am waiting for your godlike advice, thanks!
EDIT
Here is my screen: I have 10 box lines that go downwards vertically.
When Square_Line10 goes off screen, I update its position to be above Square_Line1, but the distance between them increases unexpectedly.
Okay, I found a solution that works like a charm.
I need to use an array and check it in two for loops. The first one moves the boxes, and the second one checks whether a box went off screen, like below:
public GameObject[] box;
float boundary = -5.5f;
float boxDistance = 1.1f;
float speed = -0.1f;

// Update is called once per frame
void Update () {
    for (int i = 0; i < box.Length; i++) {
        box[i].transform.position = box[i].transform.position + new Vector3(0, speed, 0);
    }
    for (int i = 0; i < box.Length; i++) {
        if (box[i].transform.position.y < boundary) {
            int topIndex = (i + 1) % box.Length;
            box[i].transform.position = new Vector3(box[i].transform.position.x, box[topIndex].transform.position.y + boxDistance, box[i].transform.position.z);
            break;
        }
    }
}
I attached it to MainCamera.
Try this solution:
bool IsApproximately(float a, float b, float tolerance = 0.01f) {
    return Mathf.Abs(a - b) < tolerance;
}
The reason is that the tolerance used by the built-in comparison (Mathf.Approximately works with Mathf.Epsilon) is too tight to be useful here. Lower the tolerance value in the call if you need more precision.
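For instance, applied to the question's boundary value (the tolerance of 0.05f is an arbitrary choice):

// Usage sketch with the question's -54f boundary:
if (IsApproximately(currentSquareLine.transform.position.y, -54f, 0.05f))
{
    // reposition this square line above the stack
}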