What I want to do is spawn an object in front of the camera, on the side I am looking at. When I change the rotation of the camera (look in a different direction), the object is still spawned at the same position. How can I change this so that the spawn position also follows the camera's angle?
public void Create(Object myPrefab)
{
    Vector3 instantGO = Camera.main.transform.position + new Vector3(0, 0, 7);
    Instantiate(myPrefab, instantGO, Quaternion.identity);
}
Use transform.forward: it gives the direction the transform is facing, so e.g. Camera.main.transform.position + (Camera.main.transform.forward * 5) is a point 5 m in front of the camera.
The forward, right, and up properties return a vector of magnitude one, pointing in the direction the transform is facing, to its right, or upwards. You can use -forward, -right, and -up to point backwards, left, or down. Multiply the vector by a distance and add it to the object's position to get a point that many units away in that direction.
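Applied to the Create method from the question, that could look like this (a sketch; the 5-unit distance and the choice to spawn with the camera's rotation are assumptions):

public void Create(Object myPrefab)
{
    Transform cam = Camera.main.transform;
    // 5 units in front of the camera, whichever way it is facing
    Vector3 spawnPos = cam.position + cam.forward * 5f;
    // use the camera's rotation so the spawned object faces the same way
    Instantiate(myPrefab, spawnPos, cam.rotation);
}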
Use this Instantiate variant:
Instantiate(Object original, Vector3 position, Quaternion rotation, Transform parent);
And pass the camera's transform as the parent parameter. This makes the new object a child of the camera, so it always moves and rotates with the camera.
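For example (a sketch, reusing the Create method from the question; the offset is now expressed relative to the camera):

public void Create(Object myPrefab)
{
    Transform cam = Camera.main.transform;
    // spawn 7 units in front of the camera and parent it to the camera,
    // so it keeps following the camera afterwards
    Instantiate(myPrefab, cam.position + cam.forward * 7f, cam.rotation, cam);
}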
First off: I am very new to Unity, as in VERY new.
I want to do the following: I want to rotate a cube around a stationary point (in my case a camera) with a radius that is adjustable in the inspector. The cube should always have its Z-axis oriented towards the camera's position. While the cube is orbiting around the camera, it should additionally follow a sine function to move up and down with a magnitude of 2.
I have some working code, the only problem is an increase in distance over time. The longer the runtime, the higher the distance between the cube and the camera.
Here is what I currently have:
void Awake()
{
    cameraPosition = GameObject.FindGameObjectWithTag("MainCamera").transform;
    transform.position = new Vector3(x: transform.position.x,
                                     y: transform.position.y,
                                     z: cameraPosition.position.z + radius);
    movement = transform.position;
}
I initialize some variables in the Awake() method and set the cube's position to where it should be (is Awake() the right place to initialize?). I'll use the Vector3 movement later in my code for the "swinging" of the cube.
void Update()
{
    transform.LookAt(cameraPosition);
    transform.RotateAround(cameraPosition.position, cameraPosition.up, 30 * Time.deltaTime * rotationSpeed);
    MoveAndRotate();
}
Here I point the cube's z-axis at the camera and rotate the cube around it. 30 is just a constant I am using for tests.
void MoveAndRotate()
{
    movement += transform.right * Time.deltaTime * movementSpeed;
    transform.position = movement + Vector3.up * Mathf.Sin(Time.time * frequency) * magnitude;
}
To be quite frank, I do not completely understand this bit of code. I do understand that it includes a rotation, as it moves the cube along its x-axis as well as along the world's y-axis. I have yet to get into vectors and matrices, so if you could share your knowledge on that topic as well, I'd be grateful.
It seems like I have found the solution for my problem, and it is an easy one at that.
First of all we need the initial position of our cube because we need to have access to its original y-coordinate to account for offsets.
So in Awake(), instead of
movement = transform.position;
We simply change it to
initialPosition = transform.position;
To have more readable code.
Next, we change our MoveAndRotate()-method to only be a single line long.
void MoveAndRotate()
{
    transform.position = new Vector3(transform.position.x,
                                     Mathf.Sin(Time.time * frequency) * magnitude + initialPosition.y,
                                     transform.position.z);
}
What exactly does that line do, then? It sets the position of our cube to a new Vector3. This vector consists of:
its current x-value
our newly calculated y-value (our height, if you want to say so) + the offset from our original position
its current z value
With this, the cube will only bob up and down without distancing itself from the camera.
I have also found the reason for the increase in distance: My method of movement does not describe a sphere (which would keep the distance the same no matter how you rotate the cube) but rather a plane. Of course, moving the cube along a plane will automatically increase the distance for some points of the movement.
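Putting the pieces together, the whole behaviour could look like this (a sketch; the class name OrbitCube is mine, and radius, rotationSpeed, frequency, and magnitude are the inspector fields from the question):

public class OrbitCube : MonoBehaviour
{
    public float radius = 5f;
    public float rotationSpeed = 1f;
    public float frequency = 1f;
    public float magnitude = 2f;

    Transform cameraPosition;
    Vector3 initialPosition;

    void Awake()
    {
        cameraPosition = GameObject.FindGameObjectWithTag("MainCamera").transform;
        // start on the orbit circle, radius units away from the camera
        transform.position = new Vector3(transform.position.x,
                                         transform.position.y,
                                         cameraPosition.position.z + radius);
        initialPosition = transform.position;
    }

    void Update()
    {
        transform.LookAt(cameraPosition);
        transform.RotateAround(cameraPosition.position, cameraPosition.up,
                               30f * Time.deltaTime * rotationSpeed);
        // bob up and down around the original height; x and z come from RotateAround
        transform.position = new Vector3(transform.position.x,
                                         Mathf.Sin(Time.time * frequency) * magnitude + initialPosition.y,
                                         transform.position.z);
    }
}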
Initializing variables in Awake() is fine, but you could also do it in the Start() method Unity provides if you wanted to.
For the main problem itself, I'm guessing that calling this function every frame is the problem, because you keep adding to the position.
movement += transform.right * Time.deltaTime * movementSpeed;
It would be nice if you could try replacing it with this code and see if it helps.
movement = transform.right * Time.deltaTime * movementSpeed;
I am using Unity AR Foundation image tracking. When a tracked picture appears on the screen, how can I get its position, or its distance relative to the camera?
You can get the distance from the camera to the picture by comparing the camera's position with the transform position of the identified picture's game object. I'm currently using this in one of my apps and it's working nicely.
Vector3.Distance(pictureGameObject.transform.position, Camera.main.transform.position)
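If you still need the tracked image's game object in the first place, AR Foundation hands it to you through the ARTrackedImageManager's trackedImagesChanged event (a sketch; it assumes an AR Foundation version that still exposes this event, and a pictureGameObject obtained from the tracked image):

using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class ImageDistance : MonoBehaviour
{
    [SerializeField] ARTrackedImageManager trackedImageManager;

    void OnEnable()  { trackedImageManager.trackedImagesChanged += OnChanged; }
    void OnDisable() { trackedImageManager.trackedImagesChanged -= OnChanged; }

    void OnChanged(ARTrackedImagesChangedEventArgs args)
    {
        // updated fires every frame the image is tracked
        foreach (var trackedImage in args.updated)
        {
            float distance = Vector3.Distance(trackedImage.transform.position,
                                              Camera.main.transform.position);
            Debug.Log(trackedImage.referenceImage.name + ": " + distance + " m");
        }
    }
}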
Well, you would need the position of that other object to check the distance. This can be done using Vector3.Distance.
First, however, you need the Transform of the other object. You can get it by making a raycast; in this case I'm casting the ray from the middle of the screen. Then I assign the Transform of whatever object was hit to the hitTransform variable. After that I can use Vector3.Distance to compare the two positions and calculate the distance.
Transform hitTransform;
private float distance;

void Update()
{
    Ray ray = cam.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));
    RaycastHit hit;
    if (Physics.Raycast(ray, out hit))
    {
        hitTransform = hit.transform;
    }

    // guard against the case where nothing has been hit yet
    if (hitTransform != null)
    {
        distance = Vector3.Distance(hitTransform.position, transform.position);
    }
}
So in short: if you look at an object that is in the middle of the screen, the distance between that object and the camera will be in the distance variable. You can also cast the ray on a click or button press, but for the sake of simplicity I used the middle of the screen here.
I'm currently working on a basic card game in Unity and I'm having a fair bit of trouble working out how to perform a drag on my cards. My current layout is as follows:
I'm currently trying to use the IDragHandler interface to receive callbacks whenever a drag event is detected over my card object.
My end goal is to be able to slide the cards to the left/right based on the x axis of a touch/mouse slide.
I've tried using the eventData.delta value passed into IDragHandler.OnDrag() to move my card, but this value, from what I can tell, is in pixels, and when converted to world units using Camera.main.ScreenToWorldPoint() it results in a value of 0,0,0. Likewise, keeping track of where the drag started and subtracting the current position results in 0,0,0.
I'm currently at a bit of a loss as to how I can drag a game object using world units so I'd greatly appreciate it if someone can provide me some guidance.
Thanks
EDIT:
^This is why you shouldn't StackOverflow late at night.
So to add some extra context:
Cards are just game objects with a mesh renderer attached and a canvas embedded to add the text (probably not the best way of adding text but it works)
Cards are all under a "hand" object which consists of a box collider2d to receive events and a script that implements IBeginDragHandler, IEndDragHandler and IDragHandler
So far I've tried two different approaches to calculating drag distances:
public void OnDrag(PointerEventData eventData)
{
    Vector2 current = Camera.main.ScreenToWorldPoint(eventData.position);
    Debug.Log("Current world position: " + current);
    Vector3 delta = lastDragPoint - current;
    // Store current value for next call to OnDrag
    lastDragPoint = eventData.position;
    // Now use the values in delta to move the cards
}
Using this approach I always get a value of 0,0,0 after my call to ScreenToWorldPoint, so I can't move the cards.
The second approach I've tried is:
public void OnDrag(PointerEventData eventData)
{
    Vector3 screenCenter = new Vector3(Screen.width * 0.5f, Screen.height * 0.5f, 1f);
    Vector3 screenTouch = screenCenter + new Vector3(eventData.delta.x, eventData.delta.y, 0f);
    Vector3 worldCenterPosition = Camera.main.ScreenToWorldPoint(screenCenter);
    Vector3 worldTouchPosition = Camera.main.ScreenToWorldPoint(screenTouch);
    Vector3 delta = worldTouchPosition - worldCenterPosition;
    // Now use the values in delta to move the cards
}
Using this approach I get a much better result (the cards actually move), but they don't correctly follow the mouse. If I drag a card it moves in the direction of the mouse, but the distance moved is significantly less than the distance the mouse moved (i.e. by the time the mouse has reached the edge of the screen, the card has only moved one card's width).
So after a bit of playing around and inspecting the various values I was receiving from the calls to ScreenToWorldPoint I finally tracked down the issue.
In the code for my second attempted approach, I found that I was using an incorrect value for the z argument of ScreenToWorldPoint. After doing a bit of research, I found that this should be the z distance between the camera and the (I guess) touch surface.
Vector3 screenCenter = new Vector3(Screen.width * 0.5f, Screen.height * 0.5f, -Camera.main.transform.position.z);
Vector3 screenTouch = screenCenter + new Vector3(eventData.delta.x, eventData.delta.y, 0);
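With that fix in place, my second approach ends up looking like this (a sketch; moving the card by transform.position += delta is one possible way to "use the values in delta"):

public void OnDrag(PointerEventData eventData)
{
    // z must be the distance from the camera to the plane the cards live on
    float zDistance = -Camera.main.transform.position.z;
    Vector3 screenCenter = new Vector3(Screen.width * 0.5f, Screen.height * 0.5f, zDistance);
    Vector3 screenTouch = screenCenter + new Vector3(eventData.delta.x, eventData.delta.y, 0f);

    // convert the pixel delta to a world-space delta at that depth
    Vector3 delta = Camera.main.ScreenToWorldPoint(screenTouch)
                  - Camera.main.ScreenToWorldPoint(screenCenter);
    transform.position += delta;
}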
Thanks to everyone who took the time to read through my question.
You could use the OnMouseDown method to detect when the player starts to drag the card, then update its position according to the mouse position. When the drag ends, the OnMouseUp method is called.
https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnMouseDown.html
https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnMouseUp.html
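A minimal sketch of that approach (it assumes the card has a collider, which the OnMouse callbacks require, and that only horizontal sliding is wanted):

public class CardDrag : MonoBehaviour
{
    void OnMouseDrag()
    {
        // convert the mouse position to world space at the card's depth
        float z = Camera.main.WorldToScreenPoint(transform.position).z;
        Vector3 mouse = new Vector3(Input.mousePosition.x, Input.mousePosition.y, z);
        Vector3 world = Camera.main.ScreenToWorldPoint(mouse);

        // only slide the card along the x axis
        transform.position = new Vector3(world.x, transform.position.y, transform.position.z);
    }

    void OnMouseUp()
    {
        // drag ended; snap the card into place, play a sound, etc.
    }
}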
I am creating a game and I need to apply inertia to an object.
Example:
The image shows everything I need.
When I touch the screen, blueObject no longer uses the position of brownObject and the rotation of redObject, and I add a Rigidbody component. The object then just falls straight down. I need it to keep moving along its previous trajectory (inertia).
I tried using AddForce(transform.forward * someFloat), but that did not work.
By setting the position of the transform directly, you bypass Unity's physics engine. Your cube should have a Rigidbody from the beginning of the simulation, and what you need here is a spring joint (https://docs.unity3d.com/Manual/class-SpringJoint.html) or a fixed joint.
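A sketch of the joint idea (blueObject and brownObject are the names from the question's image, and it assumes both objects already carry a Rigidbody):

// while attached: keep blueObject physically connected to brownObject
SpringJoint joint = blueObject.AddComponent<SpringJoint>();
joint.connectedBody = brownObject.GetComponent<Rigidbody>();

// on touch: destroy the joint and let the physics engine take over;
// the rigidbody keeps its current velocity, which gives the inertia
Destroy(joint);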
You need to calculate the current speed, when releasing the object.
Track the positions over the last frame & current frame, and use Time.deltaTime to compensate different frame-rates.
Then set this velocity to your objects rigidbody. (AddForce is just manipulating the velocity, but depending on the ForceMode it respects mass etc.)
public Vector3 lastPosition = Vector3.zero;

void Update()
{
    // maybe do: if (lastPosition != Vector3.zero) to be sure
    // velocity = distance moved this frame, divided by the frame time
    Vector3 obj_velocity = (transform.position - lastPosition) / Time.deltaTime;
    lastPosition = transform.position;

    // if you release the object, do your thing, add the rigidbody, then:
    rb.velocity = obj_velocity;
}
That should create the "inertia". The velocity contains both the direction and the speed.
I have a scene with a body made with MakeHuman, and I need to add a simple prefab (a torus) around the arm of the body when the user touches the arm.
I tried:
Instantiate the prefab at the point where the user touches, but the prefab appears at the border of the arm.
Instantiate the prefab in the center of the arm, with this code:
CapsuleCollider cc = hit.collider as CapsuleCollider; // the arm has a CapsuleCollider
float radio = cc.radius;
Ray r = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
Vector3 origin = r.origin;
float distance = (origin - hit.point).magnitude;
RaycastHit ou;
Vector3 position = hit.point;
// cast a second ray back towards the arm from behind it,
// to find the exit point on the far side of the collider
Ray r2 = new Ray(r.GetPoint(distance + 10f), -r.direction);
if (cc.Raycast(r2, out ou, distance + 10f))
    position = (hit.point + ou.point) / 2;
Instantiate(Prefab, position, Quaternion.identity);
This tries to find the center of the arm and instantiate a torus there.
The second option works in some cases, but my general impression is that it is the wrong way to do it.
How can I add a prefab around a collider? Or how can I modify the mesh to add a visual indicator?
This should work a lot better as well as look a lot cleaner:
Vector3 center = hit.collider.bounds.center;
Instantiate(Prefab, center, Quaternion.identity);
hit.collider is the vital part of this process, and you got that part. collider.bounds is the bounding box that surrounds the collider (http://docs.unity3d.com/ScriptReference/Collider-bounds.html), and bounds.center is the center of that bounding box (http://docs.unity3d.com/ScriptReference/Bounds-center.html). The Vector3 that bounds.center returns is where you want to spawn your prefab.
From there, you should be able to rotate the prefab to the desired angle and perform any number of operations you want.
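For instance, if the torus's up axis should run along the arm, the capsule collider's transform can supply that direction (a sketch; it assumes the capsule's local y-axis follows the length of the arm):

Vector3 center = hit.collider.bounds.center;
// align the torus's up axis with the arm's long axis
Quaternion rotation = Quaternion.FromToRotation(Vector3.up, hit.collider.transform.up);
Instantiate(Prefab, center, rotation);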