Unity: Transforming world coordinates to my Radar's coordinates for a 3D radar

I'm learning Unity3D and having some trouble with my 3D radar. My radar is a child of my Player game object, which rotates around as I fly. The radar itself is rotated 45 degrees so it faces the user. There is a Cube that is supposed to be the radar blip of the enemy plane. The Cube is a child of the Radar, so it should inherit its rotation. There is a script on the Cube that updates it every Update(). Here is the hierarchy:
Enemy Plane
Player
-- Camera
-- Radar
------ Cube (radar representation of Enemy Plane)
The problem is that while the Cube itself is rotated along with the Radar, its motion is not. As I get closer to the enemy plane, the Cube just gets closer to the camera (which is good), but shouldn't its motion follow the 45-degree rotation of the parent Radar object?
public class RadarGlyph : MonoBehaviour
{
    GameObject radarSource;
    GameObject trackedObject;
    Vector3 radarScaler;

    void Start()
    {
        this.radarSource = GameObject.Find("Radar");
        this.trackedObject = GameObject.Find("Enemy Fighter");
        this.radarScaler = new Vector3(0.001f, 0.001f, 0.001f);
    }

    void Update()
    {
        Vector3 vDelta = this.trackedObject.transform.position - this.radarSource.transform.position;
        vDelta.Scale(this.radarScaler);
        this.transform.localPosition = this.transform.InverseTransformDirection(vDelta);
    }
}

For a complete solution, you have to get the position of the target with respect to the ship first, and then recreate it within the context of the blip and the radar.
As a quick fix, you can try changing your last line like this:
this.transform.localPosition = this.transform.parent.localRotation * this.transform.InverseTransformDirection(vDelta);
or (apparently not good as you mentioned)
this.transform.localPosition = Quaternion.Inverse(this.transform.parent.localRotation) * this.transform.InverseTransformDirection(vDelta);
one of these is bound to work. (The first one did)
Edit: here's a third alternative
this.transform.localPosition = this.transform.parent.parent.InverseTransformDirection(vDelta);
This one gets the position in Player's space and applies it in radar's space.
The first and third are trying to do the same thing. Since you were transforming the direction into the blip's own coordinate frame, any rotations its parents have are canceled out. Instead, the correct thing to do is to get the position relative to the Player first, and then apply it to the blip within the Radar. The third line of code here attempts to do exactly that.
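For reference, here is a minimal sketch of how the blip's Update() might look with that third alternative folded in, assuming the same hierarchy and names as the question (blip → Radar → Player, so transform.parent.parent is the Player):

void Update()
{
    // World-space offset from the radar (on the Player) to the tracked enemy.
    Vector3 vDelta = this.trackedObject.transform.position - this.radarSource.transform.position;
    vDelta.Scale(this.radarScaler);

    // Express the offset in the Player's frame (the Radar's parent), then let the
    // blip's localPosition place it inside the rotated Radar.
    this.transform.localPosition = this.transform.parent.parent.InverseTransformDirection(vDelta);
}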

Related

How to make a 3D compass in Unity, similar to the transform compass in the Scene view?

I'm trying to make a 3D compass in Unity, like the one in the Scene view, but it's not looking too bright. (Screenshots of the current state and the end goal omitted.) How do I better emulate the Scene view transform compass and keep the GameObject "3DCompass" always in view, without putting it under the Main Camera?
//Compass3D
public class Compass3D : MonoBehaviour
{
    public Vector3 NorthDir;
    public Transform Player; // Camera
    public GameObject NorthLayer;

    // Update is called once per frame
    void Update()
    {
        ChangeNorthDir();
    }

    public void ChangeNorthDir()
    {
        NorthDir.z = Player.eulerAngles.y; //May need to change
        NorthLayer.transform.eulerAngles = NorthDir;
    }
}
Hm...
Well, firstly, you need to realize that the compass's orientation never changes; that's kind of the point of a compass. It always points the same way, meaning it doesn't rotate.
That means there's no need for you to do anything in Update: you just set where north is supposed to point, and leave it at that. The illusion of it rotating comes from the person rotating, while the compass keeps pointing in the same (global) direction.
The second thing is a big, lazy, awesome secret I'm going to tell you about:
Quaternion.LookRotation
So what you need to do is just rotate the compass correctly in Start, meaning rotate it so that its forward direction is Vector3.forward (the global Z+ axis) and its up direction is Vector3.up (the global Y+ axis), and then never touch the rotation again.
But you want it to stay in view (without being childed to the camera), so what you'll do in Update is this:
public class Compass3D : MonoBehaviour
{
    public Vector3 NorthDir;
    public Transform Player; // Camera
    // Keeps the position relative to the camera. This basically determines where on
    // screen the compass will be positioned: (0,0,0) would be at the same position as
    // the camera, so experiment with other values until the compass shows up on screen
    // where you want it.
    public Vector3 offsetFromPlayer;
    public GameObject NorthLayer;

    void Start()
    {
        // NorthDir can be whatever you want; I'm going to assume you want it to point
        // along Unity's forward axis.
        NorthDir = Vector3.forward;

        // Set the compass to point north, assuming its "N" needle points along the
        // model's forward axis (Z+). If not, change the model, or nest it in a parent
        // GameObject and rotate the model so that the N points along the parent's Z+ axis.
        NorthLayer.transform.rotation = Quaternion.LookRotation(NorthDir, Vector3.up);

        // Nothing else needed; the rotation will be fine forever now. If you want to
        // change it later, just run the line above again with the new NorthDir.
    }

    // Update is called once per frame
    void Update()
    {
        // The only thing you need to do in Update is keep the position relative to the
        // camera. Honestly, the easiest way would be to parent the compass, but you said
        // you don't want that, so...
        transform.position = Player.position + (Player.rotation * offsetFromPlayer);

        // The offset from the camera has to rotate with the camera's direction, otherwise
        // the compass gets out of view when the camera rotates. That's what multiplying by
        // Player.rotation is for: you rotate vectors by multiplying them with quaternions
        // (which is how rotations are expressed).
    }
}
What this code does: first it sets the compass so it points towards north, and then it never touches that again, because once the compass points towards north, if it's not parented to anything (which I assume it's not), it will keep pointing north. If it is parented to something else, just copy that line from Start into Update too, so that when the orientation of the compass's parent changes, the compass snaps back to pointing at global north.
Then, in Update, it only updates its position, to keep it at the same place relative to the camera. The important thing is that the position needs to take the camera's direction into account, so that when the camera rotates, the compass moves as if you had your hand outstretched in front of you and rotated your whole body. Even though the compass is not parented to anything, the motion of the tip of your hand is still just a position change, but it is a position relative to your whole body and its facing, so the position update needs to take that into account.
In this code, if you put the offsetFromPlayer as Vector3(0,0,1), that would mean "one unit along the way the camera is looking". If you put it as Vector3(1,0,1), that would mean "one unit along the way the camera is looking, and one unit to the right". So you'll need to experiment with that value to find one that makes the compass display where you actually want it to be on the screen.

How do I apply rotations to parent child using Rigidbody?

I have a finger object. It is just three cubes representing the finger parts.
The 2nd cube is the child of the 1st one. And the 3rd cube is the child of the 2nd one.
This is the hierarchy: Cube1 -> Cube2 -> Cube3
My goal is to apply a rotation angle to the first cube and let the other cubes do the same locally.
Example: Apply 30 degrees Z rotation to the first cube, 30 degrees Z rotation to the 2nd cube, and also the 3rd one.
This will make the cubes curl like a finger (image omitted; forgive me if it doesn't look like a finger).
In every Update() frame, I will change the angle (it's just one number) and it will rotate every cube for me.
My question is:
How do I make all these cubes collide properly with other objects?
I tried putting a Rigidbody on all of them and setting isKinematic=false because I want to transform them myself. But I still cannot use transform.rotation to update the rotation, because it misses the collision with a ball very easily (especially at the tip of the finger, since it moves faster than the other parts). Continuous collision detection doesn't help.
So I tried using rigidbody.MoveRotation() and rigidbody.MovePosition() instead, which is a pain because they need absolute values. They work, but the animation is jumpy when I change the angle quickly.
I'm guessing that the animation is jumpy because there are many Rigidbodies, or because the physics engine cannot interpolate the position of each box properly when I use MoveRotation() and MovePosition().
I need to use MovePosition() as well, because when I use only child.MoveRotation(transform.parent.rotation * originalChildLocalRotation * Quaternion.Euler(0, 0, angle)), the position doesn't move relative to the parent. So I have to compute child.MovePosition(parent.TransformPoint(originalLocalPositionOfTheChild)) every frame too.
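For illustration, here is a hedged sketch of how that MoveRotation()/MovePosition() loop might be structured, assuming kinematic Rigidbodies driven from FixedUpdate (enabling interpolation on the Rigidbodies may also help with the jumpiness described above). The script placement, the segments array, and the field names are assumptions for illustration, not from the question:

// Sits on Cube1 (the root of the chain); segments holds Cube1, Cube2, Cube3 in order.
// Assumes kinematic Rigidbodies and a uniform scale of 1 throughout the chain.
public class FingerBend : MonoBehaviour
{
    public Rigidbody[] segments;
    public float angle;  // bend angle applied to every joint

    Quaternion[] originalLocalRotation;
    Vector3[] originalLocalPosition;

    void Start()
    {
        originalLocalRotation = new Quaternion[segments.Length];
        originalLocalPosition = new Vector3[segments.Length];
        for (int i = 0; i < segments.Length; i++)
        {
            originalLocalRotation[i] = segments[i].transform.localRotation;
            originalLocalPosition[i] = segments[i].transform.localPosition;
        }
    }

    void FixedUpdate()
    {
        // Walk down the chain, accumulating each parent's target pose so children
        // follow the pose being moved to, not the not-yet-updated Transform.
        Quaternion parentRotation = transform.parent != null ? transform.parent.rotation : Quaternion.identity;
        Vector3 parentPosition = transform.parent != null ? transform.parent.position : Vector3.zero;

        for (int i = 0; i < segments.Length; i++)
        {
            Quaternion targetRotation = parentRotation * originalLocalRotation[i] * Quaternion.Euler(0f, 0f, angle);
            Vector3 targetPosition = parentPosition + parentRotation * originalLocalPosition[i];

            segments[i].MoveRotation(targetRotation);
            segments[i].MovePosition(targetPosition);

            parentRotation = targetRotation;
            parentPosition = targetPosition;
        }
    }
}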

Placing objects right in front of camera

I am trying to figure out how to modify HelloARController.cs from the example ARCore scene to place objects directly in front of the camera. My thinking is that we are raycasting from the camera to a Vector3 on an anchor or tracked plane, so can't we get the Vector3 of the start of that ray and place an object at or near that point?
I have tried a lot and, although I am somewhat of a beginner, came up with an attempt using ScreenToWorldPoint.
From my understanding, ScreenToWorldPoint should output a Vector3 in world space corresponding to a screen position, but it is not working correctly. I have tried other options instead of ScreenToWorldPoint, but nothing has produced the desired effect. Does anyone have any tips?
To place the object right at the middle of the camera's view, you would have to change the target gameObject's transform.position (as AlmightyR has said).
The resulting code would look something like this:
GameObject camera;
GameObject obj;      // note: "object" is a reserved keyword in C#, so use another name
float distance = 1;

obj.transform.position = camera.transform.position + camera.transform.forward * distance;
Since the camera's forward component (its local Z axis) always points in the direction the camera is looking, you take that vector's direction and multiply it by the distance you want your object to be placed at. If you want your object to always stay at that position no matter how the camera moves, you can make it a child of the camera's transform:
obj.transform.SetParent(camera.transform);
obj.transform.localPosition = Vector3.forward * distance;
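Wrapped up as a component, a minimal sketch of the same idea might look like this (the class and field names here are just for illustration, not from the answer above):

public class PlaceInFrontOfCamera : MonoBehaviour
{
    public Camera arCamera;     // e.g. the first-person AR camera
    public Transform target;    // the object to keep in front of the camera
    public float distance = 1f;

    void Update()
    {
        // Keep the target 'distance' units along the camera's forward (local Z) axis.
        target.position = arCamera.transform.position + arCamera.transform.forward * distance;
    }
}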
Arman's suggestion works. Also giving credit to AlmightyR since they got me started in the right direction. Here's what I have now:
// Set a position in front of the camera
float distance = 1;
Vector3 cameraPoint = m_firstPersonCamera.transform.position + m_firstPersonCamera.transform.forward * distance;
// Instantiate an Andy Android object as a child of the anchor; its transform will now
// benefit from the anchor's tracking.
var andyObject = Instantiate(m_andyAndroidPrefab, cameraPoint, Quaternion.identity, anchor.transform);
The only problem with this is that, because of the existing HelloAR example code, an object is only placed when you click on a point in the point cloud (in my case), or on a plane (by default). I would like it to behave so that you can click anywhere on the screen and it places an object anchored to a nearby point in the point cloud, not necessarily the one you clicked. Any thoughts on how to do that?
Side tip for those who don't know: If you want to place something anchored to a point in the cloud, instead of on a plane, change
TrackableHitFlag raycastFilter = TrackableHitFlag.PlaneWithinBounds | TrackableHitFlag.PlaneWithinPolygon;
to
TrackableHitFlag raycastFilter = TrackableHitFlag.PointCloud;

Unity3d - Need to hide a group of objects in the area

I've already tried depth mask shaders and examined some other ideas, but it seems like they don't suit me at all.
I'm making an AR game and I have a scene with a house and trees. All these objects are animated and do something like falling from the sky, not all at once but in sequence: for example, the house first, then the trees, then the fence, etc.
(Please look at my picture for details: http://f2.s.qip.ru/bVqSAgcy.png)
If the user moves the camera too far back, he will see all these objects stuck in the air waiting for their turn to start falling, and that is not good. I want to hide this area from all sides (because in AR the camera can move around freely) and make each part visible only when it starts moving (falling down).
(One more screen) http://f3.s.qip.ru/bVqSAgcz.png
I thought about animation events, but there are too many objects (bricks, for example) and I can't handle all of them manually.
I look forward to your great advice ;)
P.S. Sorry for my bad english.
You can disable the mesh renderers of the objects that are going to fall and re-enable them when they are ready to fall.
See the Unity documentation on MeshRenderer for more details.
Deactivate your object. You can use the camera's viewport coordinates to get a Y position outside the viewport: they start at the bottom left of the screen (0,0) and go to the top right (1,1). Convert them to world-space coordinates with Camera.ViewportToWorldPoint:
Vector3 outsideCamera = Camera.main.ViewportToWorldPoint(new Vector3(0.5f, 1.2f, 10.0f));
Now you can use the intended x and z positions of your object. Activate it when you want to drop it.
myObject.transform.position = new Vector3(myObject.transform.position.x, outsideCamera.y, myObject.transform.position.z);
Another thing you could do is scale the object up from very small to its intended size as it starts falling. This would prevent the object from being visible before falling when the user points the camera upwards.
1- Maybe you can use the Camera far clipping plane property.
Or you could even use two Cameras if needed: one displaying, say, the landscape (not rendering the house, trees, etc.) with a large far clipping plane, and a second one with Depth-only clear flags rendering only the items (this one can have a smaller far clipping plane, from what I understand).
2- The other suggestion I'd give you is adding scale to your animation (see the sketch after this list):
set the scale to 0 on the beginning of animation
wait for the item to be needed to fall down
set the scale to 1 (with a transition if needed)
make the item fall down
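A hedged sketch of those steps as a coroutine, assuming the scale-in duration and the Drop() trigger are up to you (both are illustrative assumptions, not part of the answer above):

using System.Collections;
using UnityEngine;

public class ScaleInThenFall : MonoBehaviour
{
    public float scaleInDuration = 0.25f;
    Vector3 intendedScale;

    void Awake()
    {
        // Step 1: start invisible by scaling to zero.
        intendedScale = transform.localScale;
        transform.localScale = Vector3.zero;
    }

    // Steps 2-4: call this when it is the object's turn to fall.
    public void Drop()
    {
        StartCoroutine(ScaleInAndFall());
    }

    IEnumerator ScaleInAndFall()
    {
        // Step 3: scale up to the intended size with a short transition.
        for (float t = 0f; t < scaleInDuration; t += Time.deltaTime)
        {
            transform.localScale = Vector3.Lerp(Vector3.zero, intendedScale, t / scaleInDuration);
            yield return null;
        }
        transform.localScale = intendedScale;

        // Step 4: let the existing falling animation/physics take over from here.
    }
}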
EDIT: the workaround you found is just fine too! But tracking only the world position should be enough, I think (saving a tiny amount of memory).
Hope this helps,
Finally, the solution I chose. I added this script to each object in the composition. It stores the object's position (in my case both world and local) in Start() and listens for changes in Update(). When the position changes, it stops monitoring and enables the MeshRenderer.
[RequireComponent(typeof(MeshRenderer))]
public class RenderScript : MonoBehaviour
{
    private MeshRenderer mr;
    private bool monitoring = true;
    private Vector3 posLocal;
    private Vector3 posWorld;

    // Use this for initialization
    void Start()
    {
        mr = GetComponent<MeshRenderer>();
        mr.enabled = false;
        posLocal = transform.localPosition;
        posWorld = transform.position;
    }

    // Update is called once per frame
    void Update()
    {
        if (monitoring)
        {
            if (transform.localPosition != posLocal || transform.position != posWorld)
            {
                monitoring = false;
                mr.enabled = true;
            }
        }
    }
}
Even my funny cheap Chinese smartphone is still alive after this, so I guess it's OK.

Collision Detection of objects in certain area of camera in Unity3d

I am creating a third-person action game where the player is a helicopter that can shoot other objects while moving. The problem is that I am trying to find the enemy objects that are inside a circle in the center of the camera view, so I can track and shoot them.
A raycast wouldn't help, as I need a thicker ray, so I tried SphereCast and CapsuleCast.
I have a GUI element that gives the player an idea of where he can shoot. When using SphereCast or CapsuleCast, it works when the enemy is near, but when the enemy is far away I guess the sphere cast becomes too small as it travels along Z and doesn't hit the object most of the time.
if (Physics.SphereCast(startPoint, 1f, transform.forward, out hit))
{
    if (hit.collider.CompareTag("Shootable"))
    {
        Debug.Log(hit.collider.name);
        Destroy(hit.collider.gameObject);
    }
}
I have seen raycasting from the camera, so I was wondering if there is something like a circle cast from the camera that would be appropriate here. If not, how can I proceed?
Any help is really appreciated.
If you want to detect whether enemies lie within a conical area in front of your camera, using a SphereCast or RayCast will not be able to meet your needs.
Instead, you might consider checking the angle between an enemy's relative position and your camera's forward vector, to see if it is below a particular value, and hence within the cone.
For a 60-degree field of view (so a 30-degree half-angle from the camera's forward vector), and assuming you store your enemy Transform components in an array/List, your code might look like:
foreach (Transform enemy in enemies)
{
    if (Vector3.Angle(transform.forward, enemy.position - transform.position) < 30)
    {
        Destroy(enemy.gameObject);
    }
}
Hope this helps! Let me know if you have any questions. (Answer adapted from this Unity question.)