I'm learning the Unity game engine and trying to develop a mechanic. Objects are taken from an object pool and created at a fixed location. However, after I make them children of a different object, finish the process, and deactivate them, the scale values of the reused objects change. Why is this happening?
private IEnumerator Create_Green_Pizzas(List<Transform> pizzas)
{
    int pizzaCount = 0;
    while (pizzaCount < pizzas.Count)
    {
        currentColumn++;
        if (currentColumn >= columnCount)
        {
            currentRow++;
            currentColumn = 0;
        }
        if (currentRow >= rowCount)
        {
            pizzaLevel++;
            currentRow = 0;
            currentColumn = 0;
        }
        for (int i = 0; i < pizzas.Count; i++)
        {
            if (!pizzas[i].gameObject.activeInHierarchy)
            {
                pizzas[i].gameObject.SetActive(true);
                pizzas[i].parent = pizzasParent;
                pizzas[i].rotation = transform.rotation;
                pizzas[i].position = initialSlotPosition + new Vector3(pizzas[i].lossyScale.x * currentColumn, pizzaLevel * pizzas[i].lossyScale.y, -pizzas[i].lossyScale.z * currentRow);
                pizzas[i].GetComponent<MeshRenderer>().material.color = Color.green;
                _player.collectableGreenPizzas.Add(pizzas[i]);
                pizzaCount++;
                yield return new WaitForSeconds(0.5f);
                break;
            }
        }
    }
}
Setting the GameObject's parent might be the cause, so instead of:
pizzas[i].parent = pizzasParent;
try instead:
pizzas[i].SetParent(pizzasParent, true);
You can find more info about this method in the Unity documentation for Transform.SetParent.
If a GameObject has a parent, it becomes part of the parent's frame of reference: its transform is now a local transform that depends on the parent's transform.
This means that if a child transform has a local scale of (1, 1, 1), it has the same scale as its parent.
Here's a small clip that illustrates this: two spheres that are the same size, even though they have different local scale values.
However, you can force the GameObject to preserve its world coordinates and scale using this line:
child.SetParent(parent, true);
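If the reused objects still come back with the wrong scale, another common approach is to cache each pooled object's original local scale and restore it whenever the object is taken from the pool. A minimal sketch, assuming a hypothetical pool helper (PizzaPool, Register, and Reuse are illustrative names, not Unity API):

using System.Collections.Generic;
using UnityEngine;

public class PizzaPool : MonoBehaviour
{
    // Cache each pooled object's original local scale at registration time.
    readonly Dictionary<Transform, Vector3> originalScales = new Dictionary<Transform, Vector3>();

    public void Register(Transform pooled)
    {
        originalScales[pooled] = pooled.localScale;
    }

    public void Reuse(Transform pooled, Transform newParent)
    {
        // Keep world position/rotation/scale while reparenting...
        pooled.SetParent(newParent, true);
        // ...then restore the cached local scale in case the parent's
        // scale distorted it.
        pooled.localScale = originalScales[pooled];
        pooled.gameObject.SetActive(true);
    }
}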
I have a question about AR (Augmented Reality).
I want to know how to show the distance information (in centimeters, for example) between the AR camera and a target object, using a smartphone.
Can I do that in Unity? Should I use AR Foundation? And with ARCore? How do I write the code?
I tried to find some related code (below), but it seems to just print the distance between one object and another, nothing about the "AR camera"...
var other : Transform;
if (other) {
    var dist = Vector3.Distance(other.position, transform.position);
    print("Distance to other: " + dist);
}
Thanks again!
Here is how to do it in Unity with AR Foundation 4.1.
This example script prints the depth in meters at the depth texture's center and works both with ARCore and ARKit:
using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Assertions;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class GetDepthOfCenterPixel : MonoBehaviour {
    // assign this field in inspector
    [SerializeField] AROcclusionManager manager = null;

    IEnumerator Start() {
        while (ARSession.state < ARSessionState.SessionInitializing) {
            // manager.descriptor.supportsEnvironmentDepthImage will return a correct value if ARSession.state >= ARSessionState.SessionInitializing
            yield return null;
        }
        if (!manager.descriptor.supportsEnvironmentDepthImage) {
            Debug.LogError("!manager.descriptor.supportsEnvironmentDepthImage");
            yield break;
        }
        while (true) {
            if (manager.TryAcquireEnvironmentDepthCpuImage(out var cpuImage) && cpuImage.valid) {
                using (cpuImage) {
                    Assert.IsTrue(cpuImage.planeCount == 1);
                    var plane = cpuImage.GetPlane(0);
                    var dataLength = plane.data.Length;
                    var pixelStride = plane.pixelStride;
                    var rowStride = plane.rowStride;
                    Assert.AreEqual(0, dataLength % rowStride, "dataLength should be divisible by rowStride without a remainder");
                    Assert.AreEqual(0, rowStride % pixelStride, "rowStride should be divisible by pixelStride without a remainder");
                    var numOfRows = dataLength / rowStride;
                    var centerRowIndex = numOfRows / 2;
                    var centerPixelIndex = rowStride / (pixelStride * 2);
                    var centerPixelData = plane.data.GetSubArray(centerRowIndex * rowStride + centerPixelIndex * pixelStride, pixelStride);
                    var depthInMeters = convertPixelDataToDistanceInMeters(centerPixelData.ToArray(), cpuImage.format);
                    print($"depth texture size: ({cpuImage.width},{cpuImage.height}), pixelStride: {pixelStride}, rowStride: {rowStride}, pixel pos: ({centerPixelIndex}, {centerRowIndex}), depthInMeters of the center pixel: {depthInMeters}");
                }
            }
            yield return null;
        }
    }

    float convertPixelDataToDistanceInMeters(byte[] data, XRCpuImage.Format format) {
        switch (format) {
            case XRCpuImage.Format.DepthUint16:
                return BitConverter.ToUInt16(data, 0) / 1000f;
            case XRCpuImage.Format.DepthFloat32:
                return BitConverter.ToSingle(data, 0);
            default:
                throw new Exception($"Format not supported: {format}");
        }
    }
}
I'm working on AR depth images as well, and the basic idea is:
1. Acquire an image using the API; normally it's in the Depth16 format.
2. Split the image into short buffers, since Depth16 means each pixel is 16 bits.
3. Get the distance value, which is stored in the lower 13 bits of each short. You can extract it with (shortSample & 0x1FFF); that gives you the distance for each pixel, normally in millimeters.
By doing this for all the pixels, you can create a depth image and store it as JPEG or another format. Here's sample code that uses AR Engine to get the distance:
try (Image depthImage = arFrame.acquireDepthImage()) {
    int imwidth = depthImage.getWidth();
    int imheight = depthImage.getHeight();
    Image.Plane plane = depthImage.getPlanes()[0];
    ShortBuffer shortDepthBuffer = plane.getBuffer().asShortBuffer();
    File sdCardFile = Environment.getExternalStorageDirectory();
    Log.i(TAG, "The storage path is " + sdCardFile);
    File file = new File(sdCardFile, "RawdepthImage.jpg");
    Bitmap disBitmap = Bitmap.createBitmap(imwidth, imheight, Bitmap.Config.RGB_565);
    for (int i = 0; i < imheight; i++) {
        for (int j = 0; j < imwidth; j++) {
            int index = (i * imwidth + j);
            shortDepthBuffer.position(index);
            short depthSample = shortDepthBuffer.get();
            short depthRange = (short) (depthSample & 0x1FFF);
            // If you only want the distance value, here it is
            byte value = (byte) depthRange;
            disBitmap.setPixel(j, i, Color.rgb(value, value, value));
        }
    }
    // I rotate the image for a better view
    Matrix matrix = new Matrix();
    matrix.setRotate(90);
    Bitmap rotatedBitmap = Bitmap.createBitmap(disBitmap, 0, 0, imwidth, imheight, matrix, true);
    try {
        FileOutputStream out = new FileOutputStream(file);
        rotatedBitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
        out.flush();
        out.close();
        MainActivity.num++;
    } catch (Exception e) {
        e.printStackTrace();
    }
} catch (Exception e) {
    e.printStackTrace();
}
While the other answers are great, they may be too complicated and advanced for this question, which is about the distance between the AR camera and another object, not about the depth of pixels and their occlusion.
transform.position gives you the position of whatever GameObject you attach the script to in the hierarchy, so attach the script to the ARCamera object. And obviously, other should be the target object.
Alternatively, you can get references to the two GameObjects using inspector variables or GetComponent.
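For illustration, a minimal sketch of that approach (the distanceLabel field and the UI Text output are assumptions, not part of the original answer):

using UnityEngine;
using UnityEngine.UI;

// Attach this to the AR camera and assign the target in the inspector.
public class DistanceToTarget : MonoBehaviour
{
    [SerializeField] Transform target;       // the object to measure against
    [SerializeField] Text distanceLabel;     // optional UI output (assumed)

    void Update()
    {
        if (target == null) return;
        float dist = Vector3.Distance(target.position, transform.position);
        // Multiply by 100 if you want centimeters instead of meters.
        if (distanceLabel != null)
            distanceLabel.text = dist.ToString("N2") + " m";
    }
}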
// raycasting should be in Update()
RaycastHit info;
Ray ray = new Ray(cam.transform.position, cam.transform.forward);
if (Physics.Raycast(ray, out info, 50f, layerMaskAR)) // 50 meter detection range because of 50f
{
    distanca.text = string.Format("{0}: {1:N2}m", info.collider.name, info.distance);
}
This function does what you need; the result is displayed on a UI Text element, and a layer is assigned to the object/prefab.
int layerMaskAR = 1 << 6; (6 because my custom layer "layerMaskAR" is the 6th layer)
This raycasts only against objects in that layer and ignores everything else (if you don't want to ignore anything, remove the layer mask from the raycast and it will print the name of anything with a collider).
Totally doable with this line of code:
Vector3.Distance(gameObject.transform.position, Camera.main.transform.position)
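For example, you could log it every frame from a script on the target object; a minimal sketch, assuming the AR camera is tagged MainCamera (which Camera.main relies on):

using UnityEngine;

// Attach to the target object; logs its distance to the main camera each frame.
public class LogDistanceToCamera : MonoBehaviour
{
    void Update()
    {
        float dist = Vector3.Distance(gameObject.transform.position, Camera.main.transform.position);
        Debug.Log($"Distance to camera: {dist:N2} m");
    }
}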
When I change the material at runtime with this, it does not work:
expoSceneLarge.GetComponent<Renderer>().materials[0] = availableMaterials[selectedMaterialIndex];
However, using this:
expoSceneLarge.GetComponent<Renderer>().material = availableMaterials[selectedMaterialIndex];
the material is changed at runtime.
What confuses me is that material, according to Unity, returns the first element of the materials array attached to the renderer, so why does materials[0] not work?
Found a solution.
I had to create a new array of materials, change the ones I was interested in, and then assign the whole array to the mesh renderer. Here is the full code for future reference.
var newMaterials = new Material[_expoMaterials.Count()];
for (int i = 0; i < newMaterials.Count(); i++)
{
    // i = 1 for floor
    if (i == 1)
    {
        newMaterials[i] = availableMaterials[selectedMaterialIndex];
    }
    else
    {
        newMaterials[i] = _expoMaterials[i];
    }
}
expoSceneLarge.GetComponent<MeshRenderer>().materials = newMaterials;
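As for why materials[0] alone doesn't work: Renderer.materials returns a copy of the materials array, so writing into an element of that copy never touches the renderer; the modified array has to be assigned back to the property. A minimal sketch of that pattern (MaterialSwapper is an illustrative name, not Unity API):

using UnityEngine;

public class MaterialSwapper : MonoBehaviour
{
    public Material replacement;

    public void SwapFirstMaterial(Renderer renderer)
    {
        // renderer.materials returns a copy; mutating the copy has no effect...
        Material[] mats = renderer.materials;
        mats[0] = replacement;
        // ...until the whole array is assigned back to the renderer.
        renderer.materials = mats;
    }
}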
I understand that I can use Leap Motion in a game with Unity3D.
What I can't find any information on is whether I can use it to actually interact with assets, models, etc. as I build the game, for example, revolving a game object around the x axis, or zooming the view in and out.
Is this possible?
Yes, it is possible, but it requires some scripts that nobody has written yet (AFAIK). Here is a VERY rough example that I worked up today, since I've been curious about this question too.
All it does is move, scale, and rotate a selected game object. It doesn't try to do this in a good way; it is a proof of concept only. To make it work, you would have to do a sensible conversion of Leap coordinates and rotations to Unity values. To try it, put this script in a folder called "Editor", select a game object in the scene view, and hold a key down while moving your hand above your Leap. As I said, none of these movements really work well for editing an object, but you can see that it is possible with some sensible logic.
@CustomEditor(Transform)
class RotationHandleJS extends Editor {
    var controller = new Leap.Controller();
    var position;
    var localScale;
    var localRotation;
    var active = false;

    function OnSceneGUI() {
        e = Event.current;
        switch (e.type) {
            case EventType.KeyDown:
                position = target.transform.position;
                localScale = target.transform.localScale;
                localRotation = target.transform.localRotation;
                active = true;
                Debug.Log("editing");
                break;
            case EventType.KeyUp:
                active = false;
                target.transform.position = position;
                target.transform.localScale = localScale;
                EditorUtility.SetDirty(target);
                break;
        }
        if (active) {
            frame = controller.Frame();
            ten = controller.Frame(10);
            scale = frame.ScaleFactor(ten);
            translate = frame.Translation(ten);
            target.transform.localScale = localScale + new Vector3(scale, scale, scale);
            target.transform.position = position + new Vector3(translate.x, translate.y, translate.z);
            leapRot = frame.RotationMatrix(ten);
            quats = convertRotation(leapRot);
            target.transform.localRotation = quats;
        }
    }

    var LEAP_UP = new Leap.Vector(0, 1, 0);
    var LEAP_FORWARD = new Leap.Vector(0, 0, -1);
    var LEAP_ORIGIN = new Leap.Vector(0, 0, 0);

    function convertRotation(matrix : Leap.Matrix) {
        var up = matrix.TransformDirection(LEAP_UP);
        var forward = matrix.TransformDirection(LEAP_FORWARD);
        return Quaternion.LookRotation(new Vector3(forward.x, forward.y, forward.z), new Vector3(up.x, up.y, up.z));
    }
}
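As a starting point for that "sensible conversion": Leap reports positions in millimeters, while Unity units are conventionally meters, so a first pass might simply scale translations down. A hedged sketch in C# (the helper name and the pass-through axis handling are assumptions you would tune):

using UnityEngine;

public static class LeapToUnity
{
    // Leap positions are in millimeters; Unity units are conventionally meters.
    const float MillimetersToMeters = 0.001f;

    public static Vector3 ConvertTranslation(Leap.Vector v)
    {
        // Illustrative only: scale the magnitude and pass the axes through;
        // a real conversion may also need to remap handedness/axes.
        return new Vector3(v.x, v.y, v.z) * MillimetersToMeters;
    }
}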
I am creating something like in The Sims, where you have a character and you can add different clothes to it.
All the models I use share the same bones and rig. I have put these models in different asset bundles. At runtime I read the asset bundles and combine the models into one SkinnedMeshRenderer, and thus into one object. But every time I add another object, the bone count goes up.
I know why this is happening, but I would like to reduce the number of bones again.
I tried finding the duplicate bones and deleting them, along with the bindpose at the same index as each deleted bone, but this still gives me the error "Number of bindposes doesn't match number of bones", even though they are both 45.
Here is the code that attaches the models:
private void UpdateSkinnedMesh()
{
    float startTime = Time.realtimeSinceStartup;
    // Create lists for all data
    List<CombineInstance> combineInstances = new List<CombineInstance>();
    List<Material> materials = new List<Material>();
    List<Transform> bones = new List<Transform>();
    Transform[] transforms = baseModel.GetComponentsInChildren<Transform>();
    // TEMP
    List<CharacterElement> elements = new List<CharacterElement>();
    elements.Add(body);
    if (_currentActiveProps.ContainsKey(ItemSubCategory.Skin))
        elements.Add(tutu);
    // Go over all active elements
    foreach (CharacterElement element in elements)
    {
        // Get the skinned mesh renderer
        SkinnedMeshRenderer smr = element.GetSkinnedMeshRenderer();
        // Add materials to the entire list
        materials.AddRange(smr.materials);
        // Add all submeshes to a combined mesh
        for (int sub = 0; sub < smr.sharedMesh.subMeshCount; sub++)
        {
            CombineInstance ci = new CombineInstance();
            ci.mesh = smr.sharedMesh;
            ci.subMeshIndex = sub;
            combineInstances.Add(ci);
        }
        // Bones are not saved in the asset bundle; get the names and reference them again
        foreach (string bone in element.GetBoneNames())
        {
            foreach (Transform transform in transforms)
            {
                if (transform.name != bone) continue;
                bones.Add(transform);
                break;
            }
        }
        // Destroy the temp object
        Destroy(smr.gameObject);
    }
    // Get the skinned mesh renderer
    SkinnedMeshRenderer r = baseModel.GetComponentInChildren<SkinnedMeshRenderer>();
    // Create a new combined mesh
    r.sharedMesh = new Mesh();
    r.sharedMesh.CombineMeshes(combineInstances.ToArray(), false, false);
    // Add bones and materials
    r.bones = bones.ToArray();
    r.materials = materials.ToArray();
    Debug.Log("Bindposes: " + r.sharedMesh.bindposes.Length);
    Debug.Log("Generating character took: " + (Time.realtimeSinceStartup - startTime) * 1000 + " ms");
    Debug.Log("Bone count: " + r.bones.Length);
}
I have already asked this question on Unity Answers, but because it takes a few hours before a moderator approves the question, I wanted to try here as well.