Unity 2D - Keep Player Object in Boundary of its Parent Panel - unity3d

In Unity I have a UI Panel which contains a player object (a UI Image).
I move the player object inside the panel with user input (keyboard or touch).
I can't keep the player object inside its parent panel.
Please check the image below; I want to keep the player inside the red panel.
Here is the code I tried:
public Camera MainCamera; //be sure to assign this in the inspector to your main camera
private Vector2 screenBounds;
private float objectWidth;
private float objectHeight;
private RectTransform pnlBackgroundTransform;
private void Start()
{
pnlBackgroundTransform = GameObject.Find("PnlBackground").GetComponent<RectTransform>();
screenBounds = MainCamera.ScreenToWorldPoint(new Vector3(pnlBackgroundTransform.rect.width , pnlBackgroundTransform.rect.height , MainCamera.transform.position.z));
objectWidth = transform.GetComponent<SpriteRenderer>().bounds.extents.x; //extents = size of width / 2
objectHeight = transform.GetComponent<SpriteRenderer>().bounds.extents.y; //extents = size of height / 2
}
void LateUpdate()
{
Vector3 viewPos = transform.position;
viewPos.x = Mathf.Clamp(viewPos.x, screenBounds.x * -1 + objectWidth, screenBounds.x - objectWidth);
viewPos.y = Mathf.Clamp(viewPos.y, screenBounds.y * -1 + objectHeight, screenBounds.y - objectHeight);
Debug.Log(screenBounds);
Debug.Log(viewPos);
transform.position = viewPos;
}

I'd say it's not very usual to have the player implemented as a UI element; instead you should be implementing it outside the UI/Canvas system.
The UI/Canvas system uses a set of placement and scaling rules to deal with responsive design. You have at least 4 values (excluding rotation) to place something on the screen: anchor, pivot, position and scale.
For example: if you want to create a square, you can either set its size in absolute pixel values or in values relative to its parent. If you're using absolute values, the UI Scale Mode, defined on the Canvas object, will affect the visual result.
This means the UI/Canvas is for elements that should adapt to the screen, such as buttons, dialogs, labels, etc., taking advantage of device parameters to improve the UX.
Outside the UI/Canvas system, things are directly based on linear algebra: you have a 3D vector space (a "World") where everything exists with an absolute size and position. Then your Camera stretches and twists the whole world to match your current perspective. That means your object will always have the same size, regardless of screen size.
Now, assuming you have a very specific reason to implement your game in the UI, there are a few ways you can do it. I'll assume you're using absolute values. Please note all the units used here are pixels, so the effect will differ between devices with different resolutions and is sensitive to the UI Scale Mode parameter. Also, please note I've set both anchors min and max to (0,0), the bottom-left corner (the default is the screen center, (0.5,0.5)), in order to avoid negative coordinates.
The following script is attached to the player's UI Image.
public class UIMovementController : MonoBehaviour
{
public float speed = 5.0f;
new private RectTransform transform;
private Rect canvasRect;
private void Start()
{
transform = GetComponent<RectTransform>();
canvasRect = GetComponentInParent<Canvas>().pixelRect;
}
void Update()
{
// Keyboard Input (Arrows)
Vector2 move = new Vector2(0,0);
if (Input.GetKey(KeyCode.UpArrow)) { move.y += speed; }
if (Input.GetKey(KeyCode.DownArrow)) { move.y -= speed; }
if (Input.GetKey(KeyCode.LeftArrow)) { move.x -= speed; }
if (Input.GetKey(KeyCode.RightArrow)) { move.x += speed; }
transform.anchoredPosition += move;
// Position clamping
Vector2 clamped = transform.anchoredPosition;
clamped.x = Mathf.Clamp(clamped.x, transform.rect.width / 2, canvasRect.width - transform.rect.width / 2);
clamped.y = Mathf.Clamp(clamped.y, transform.rect.height / 2, canvasRect.height - transform.rect.height / 2);
transform.anchoredPosition = clamped;
}
}
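In case it is useful, here is a minimal sketch of the anchor setup mentioned above, done from code instead of the inspector (the class name AnchorToBottomLeft is just a placeholder, not part of the original answer):
using UnityEngine;
public class AnchorToBottomLeft : MonoBehaviour
{
    private void Awake()
    {
        // Anchor the element to the bottom-left corner (0,0) so that
        // anchoredPosition is expressed in positive pixel coordinates
        RectTransform rt = GetComponent<RectTransform>();
        rt.anchorMin = Vector2.zero;
        rt.anchorMax = Vector2.zero;
    }
}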

Related

Unity3D - Move camera perpendicular to where it's facing

I'm adding the option for players to move the camera to the sides. I also want to limit how far they can move the camera to the sides.
If the camera was aligned with the axis, I could simply move around X/Z axis and set a limit on each axis as to how far it can go. But my problem is that the camera is rotated, so I'm stuck figuring out how to move it and set a limit. How could I implement this?
using UnityEngine;
[RequireComponent(typeof(Camera))]
public class CameraController : MonoBehaviour
{
Camera cam;
Vector3 dragOrigin;
bool drag = false;
void Awake()
{
cam = GetComponent<Camera>();
}
void LateUpdate()
{
// Camera movement with mouse
Vector3 diff = (cam.ScreenToWorldPoint(Input.mousePosition)) - cam.transform.position;
if (Input.GetMouseButton(0))
{
if (drag == false)
{
drag = true;
dragOrigin = cam.ScreenToWorldPoint(Input.mousePosition);
}
}
else
{
drag = false;
}
if (drag)
{
// Here I want to set a constraint in a rectangular plane perpendicular to camera view
transform.position = dragOrigin - diff;
}
}
}
Transform in Unity comes with a handy Transform.right property, which takes the object's rotation into account. To move your camera sideways you could further utilize Lerp to make the movement smooth.
transform.position += transform.right * factor
moves an object to the right.
Use factor to adjust the desired distance, and by doing so you can also set limits. A negative factor means moving left, by the way :) Hope that helps!
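As a rough sketch of that idea (this is not from the original answer; the speed and sideLimit values are assumptions), you could accumulate the sideways offset and clamp it before applying the movement along transform.right:
using UnityEngine;
public class SidewaysCameraMove : MonoBehaviour
{
    public float speed = 5f;      // assumed movement speed
    public float sideLimit = 10f; // assumed max sideways offset from the start position
    private float sideOffset;     // accumulated offset to the right so far

    void Update()
    {
        // factor is positive when moving right, negative when moving left
        float factor = Input.GetAxis("Horizontal") * speed * Time.deltaTime;
        // Clamp the accumulated offset so the camera stays within the limit
        float newOffset = Mathf.Clamp(sideOffset + factor, -sideLimit, sideLimit);
        transform.position += transform.right * (newOffset - sideOffset);
        sideOffset = newOffset;
    }
}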
It can be tricky to deal with constraints on rotated objects. Doing it by hand involves some vector/rotation math to figure out the correct limits relative to the object's orientation, and to check whether you've exceeded them.
Luckily though, Unity gives you some shortcuts to skip this math: Transform.InverseTransformPoint() and Transform.TransformPoint()! These two methods allow you to transform a point in world space into a point in local space, and vice versa.
That means that no matter how your camera is oriented, you can interpret a position from the orientation of the camera - and with just a couple extra steps, your X/Z constraints are usable because you can calculate X/Z from the camera's point of view.
Let's try to adapt your current script to use this:
using UnityEngine;
[RequireComponent(typeof(Camera))]
public class CameraController : MonoBehaviour
{
// Set the X and Z values in the editor to define the rectangle within
// which your camera can move
public Vector3 maxConstraints;
public Vector3 minConstraints;
Camera cam;
Vector3 dragOrigin;
bool drag = false;
Vector3 cameraStart;
void Awake()
{
cam = GetComponent<Camera>();
// Here, we record the start since we'll need a reference to determine
// how far the camera has moved within the allowed rectangle
cameraStart = transform.position;
}
void LateUpdate()
{
// Camera movement with mouse
Vector3 diff = (cam.ScreenToWorldPoint(Input.mousePosition)) - cam.transform.position;
if (Input.GetMouseButton(0))
{
if (drag == false)
{
drag = true;
dragOrigin = cam.ScreenToWorldPoint(Input.mousePosition);
}
}
else
{
drag = false;
}
if (drag)
{
// Now, rather than setting the position directly, let's make sure it's
// within the valid rectangle first
Vector3 newPosition = dragOrigin - diff;
// First, we get into the local space of the camera and determine the delta
// between the start and possible new position
Vector3 localStart = transform.InverseTransformPoint(cameraStart);
Vector3 localNewPosition = transform.InverseTransformPoint(newPosition);
Vector3 localDelta = localNewPosition - localStart;
// Now, we calculate constrained values for the X and Z coordinates
float clampedDeltaX = Mathf.Clamp(localDelta.x, minConstraints.x, maxConstraints.x);
float clampedDeltaZ = Mathf.Clamp(localDelta.z, minConstraints.z, maxConstraints.z);
// Then, we can use the constrained values to determine the constrained position
// within local space
Vector3 localClampedPosition = new Vector3(clampedDeltaX, localDelta.y, clampedDeltaZ)
+ localStart;
// Finally, we can convert the local position back to world space and use it
transform.position = transform.TransformPoint(localClampedPosition);
}
}
}
Note that I'm somewhat assuming dragOrigin - diff moves your camera correctly in its present state. If it doesn't do what you want, please include details on the unwanted behaviour and we can sort that out too.

Get normalized click position on a RectTransform

I've made a UI touch/click controller using a UI Image with a collider. The UI is rendered with a stacked camera.
I'm using IPointerDownHandler.OnPointerDown to get the click event.
The controller is supposed to give a value from 0 to 1 depending on how far up you click it.
I'm using a Canvas Scaler on the UI to make the controllers resize depending on the device. But that messes up my calculations, since the click position won't be the same. How is this supposed to be handled? Right now the calculation is only correct when I disable the Canvas Scaler or run it on a display with the default dimensions.
public void OnPointerDown(PointerEventData pointerEventData)
{
SetAccelerationValue(pointerEventData.position.y);
}
private void SetAccelerationValue(float posY)
{
float percentagePosition;
var positionOnAccelerator = posY - minY;
var acceleratorHeight = maxY - minY;
percentagePosition = positionOnAccelerator / acceleratorHeight;
Debug.Log(percentagePosition);
}
I would use RectTransformUtility.ScreenPointToLocalPointInRectangle to get a position in the local space of the given RectTransform.
Then combine it with Rect.PointToNormalized
Returns the normalized coordinates corresponding to the point.
The returned Vector2 is in the range 0 to 1, with values greater than 1 or less than zero clamped.
to get a normalized position within that RectTransform.rect, (0,0) being the bottom-left corner and (1,1) the top-right corner:
[SerializeField] private RectTransform _rectTransform;
private void Awake ()
{
if(!_rectTransform) _rectTransform = GetComponent<RectTransform>();
}
private bool GetNormalizedPosition(PointerEventData pointerEventData, out Vector2 normalizedPosition)
{
normalizedPosition = default;
// get the pointer position in the local space of the UI element
// NOTE: For click events use "pointerEventData.pressEventCamera"
// For hover events you would rather use "pointerEventData.enterEventCamera"
if(!RectTransformUtility.ScreenPointToLocalPointInRectangle(_rectTransform, pointerEventData.position, pointerEventData.pressEventCamera, out var localPosition)) return false;
normalizedPosition = Rect.PointToNormalized(_rectTransform.rect, localPosition);
// I think this kind of equals doing something like
//var rect = _rectTransform.rect;
//var normalizedPosition = new Vector2 (
// (localPosition.x - rect.x) / rect.width,
// (localPosition.y - rect.y) / rect.height);
Debug.Log(normalizedPosition);
return true;
}
Since the normalized position returns values like
(0|1)---------(1|1)
  |               |
  |   (0.5|0.5)   |
  |               |
(0|0)---------(1|0)
but it sounds like what you want to get is
(-1|1)--------(1|1)
  |               |
  |     (0|0)     |
  |               |
(-1|-1)------(1|-1)
So you can simply shift the returned value using e.g.
// Shift the normalized Rect position from [0,0] (bottom-left), [1,1] (top-right)
// into [-1, -1] (bottom-left), [1,1] (top-right)
private static readonly Vector2 _multiplicator = Vector2.one * 2f;
private static readonly Vector2 _shifter = Vector2.one * 0.5f;
private static Vector2 GetShiftedNormalizedPosition(Vector2 normalizedPosition)
{
return Vector2.Scale((normalizedPosition - _shifter), _multiplicator);
}
So finally you would use e.g.
public void OnPointerDown(PointerEventData pointerEventData)
{
if(!GetNormalizedPosition(pointerEventData, out var normalizedPosition)) return;
var shiftedNormalizedPosition = GetShiftedNormalizedPosition(normalizedPosition);
SetAccelerationValue(shiftedNormalizedPosition.y);
// And probably for your other question also
SetSteeringValue(shiftedNormalizedPosition.x);
}
And of course within SetAccelerationValue you don't calculate anything but just set the value ;)
This always uses the current rect, so you don't have to store any min/max values, and it also handles any dynamic re-scaling of the rect.
This would then probably also apply to your other almost duplicate question ;)
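As a minimal sketch of that last point (the backing field _accelerationValue is just a placeholder, not from the original question), SetAccelerationValue would then simply store the already-normalized value:
private float _accelerationValue; // assumed backing field

private void SetAccelerationValue(float normalizedY)
{
    // No min/max math needed anymore; the value is already normalized
    _accelerationValue = normalizedY;
    Debug.Log(_accelerationValue);
}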

Player stops moving and the character direction resets [Unity 2D]

My character is a car and I am trying to rotate it in the direction it moves. So far so good, I managed to do that, but once I stop moving the character flips back to the direction it had at the start.
Also, how can I make turns from one side to the opposite side smooth?
Here is my code so far:
[SerializeField] float driveSpeed = 5f;
//state
Rigidbody2D myRigidbody;
// Start is called before the first frame update
void Start()
{
myRigidbody = GetComponent<Rigidbody2D>();
}
// Update is called once per frame
void Update()
{
Move();
}
private void Move()
{
//Control of velocity of the car
float HorizontalcontrolThrow = CrossPlatformInputManager.GetAxis("Horizontal"); // Value between -1 to 1
float VerticalcontrolThrow = CrossPlatformInputManager.GetAxis("Vertical"); // Value between -1 to 1
Vector2 playerVelocity = new Vector2(HorizontalcontrolThrow * driveSpeed, VerticalcontrolThrow * driveSpeed);
myRigidbody.velocity = playerVelocity;
//Direction of the car
Vector2 direction = new Vector2(HorizontalcontrolThrow, VerticalcontrolThrow);
float angle = Mathf.Atan2(direction.y, direction.x) * Mathf.Rad2Deg;
myRigidbody.rotation = angle;
}
I'm not sure about this, but maybe that last line "myRigidbody.rotation = angle" being called every frame is what is making your car reset its rotation.
Maybe change it to "myRigidbody.rotation *= angle" or "myRigidbody.rotation += angle".
It looks like it may be because HorizontalcontrolThrow and VerticalcontrolThrow are going to be reset when you release the controls. If it's resetting to its original orientation, then what's happening is that until you move, those two values are going to be at their default value. You then move and it affects the rotation. But when you release the controls, those values are back to the starting values again, and so is your rotation.
What you therefore need to do is separate HorizontalcontrolThrow and VerticalcontrolThrow from the rest of the code, which should only be activated when at least one of these two variables is not at its default value (I can't remember offhand what the axis functions return when idle).
Edit:
An IF statement should suffice (some rough pseudo code):
if (horizontalAxis != default || verticalAxis != default)
{
Rotate/Move
}
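A concrete version of that check, based on the code in the question, might look like this (the 0.01f deadzone is an assumption; the axis functions return 0 when the controls are released):
float h = CrossPlatformInputManager.GetAxis("Horizontal");
float v = CrossPlatformInputManager.GetAxis("Vertical");
Vector2 input = new Vector2(h, v);
// Only update the rotation while there is actual input,
// otherwise keep the last angle instead of snapping back to the start direction
if (input.sqrMagnitude > 0.01f)
{
    myRigidbody.rotation = Mathf.Atan2(input.y, input.x) * Mathf.Rad2Deg;
}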
I solved the snap rotation using Quaternion rotation; the issue I had with it was converting it from 3D to 2D. Following the guide in this clip: youtube.com/watch?v=mKLp-2iseDc and making my own adjustments, it works just fine!
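For reference, one way to get the smooth turning (not necessarily the approach from the video; turnSpeed is an assumed value, and myRigidbody is the field from the question's script) is to rotate towards the target angle instead of setting it directly, e.g. with Mathf.MoveTowardsAngle:
[SerializeField] float turnSpeed = 360f; // assumed degrees per second

private void RotateTowardsInput(Vector2 input)
{
    if (input.sqrMagnitude < 0.01f) return; // keep the current heading when idle
    float targetAngle = Mathf.Atan2(input.y, input.x) * Mathf.Rad2Deg;
    myRigidbody.rotation = Mathf.MoveTowardsAngle(myRigidbody.rotation, targetAngle, turnSpeed * Time.deltaTime);
}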

Resolution Scaling in Unity

I wanted to ask if there is a resolution scale option in Unity like there is in Unreal Engine. I have looked around the internet but didn't find anything.
If you want to scale objects that are not on the canvas based on the resolution, then offhand I can't think of a built-in option. However, it would be fairly easy to implement something that does this.
Create a Script and attach it to every object that should scale based on the current resolution.
public class ScaleObjectFromRes : MonoBehaviour
{
private Vector2 targetResolution = new Vector2(1920, 1080); //can be changed here or elsewhere
private bool matchWidth = true; //true = match width, false = match height; used to maintain aspect ratio
// Start is called before the first frame update
void Start()
{
float difference = CalculateDifference();
ScaleObj(difference);
}
void ScaleObj(float diff)
{
gameObject.transform.localScale += (gameObject.transform.localScale * (diff/100));
}
private float CalculateDifference()
{
Vector2 actualResolution = new Vector2(Screen.width, Screen.height);
Vector2 change = actualResolution - targetResolution;
Vector2 percentChange = (change / targetResolution) * 100;
//match width/height
if (matchWidth)
{
return percentChange.x;
}
else
{
return percentChange.y;
}
}
}
This scales the object based on the percent difference between the target resolution and the actual resolution. We can choose to match the difference based upon the width or height to guarantee a constant ratio for the object. This assumes the object's scale vector's magnitude is 1. Hope this helps!
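For example, with a target of 1920x1080, running at 2560x1440 with matchWidth enabled gives change.x = 640 and percentChange.x of about 33.3, so localScale grows by roughly a third; at 1280x720 it would shrink by about a third instead.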

Using Input.Gyro to get the amount of "tilt" from an origin rotation

In my scenario, I have a table (plane) that a ball rolls around on using nothing but physics, giving the illusion that the mobile device is the table via Input.gyro.attitude. Taking it one step further, I would like this to be relative to the device's orientation at the time Start() is called, so the device doesn't have to be held in front of a face or lie flat on a table; everything is relative to where it started, and the origin may even be reset when the ball is reset. So the question is: how do I get the difference between the current attitude and the origin attitude, then convert the X and Z(?) difference into a Vector3 to AddForce() to my ball object, whilst capping the max rotation at about 30 degrees? I've looked into a lot of gyro-based input manager scripts and nothing really helps me understand the mystery of Quaternions.
I could use the relative rotation to rotate the table itself, but then I am dealing with the problem of rotating the camera along the same rotation, but also following the ball at a relative height but now with a tilted offset.
AddForce() works well for my purposes with Input.GetAxis in the Editor, just trying to transition it to the device without using a Joystick style UI controller.
Edit: The following code is working, but I don't have the right angles/euler to give the right direction. The game is played in Landscape Left/Right only, so I should only need a pitch and yaw axis (imagine the phone flat on a table), but not roll (rotated around the camera/screen). I may eventually answer my own question through trial and error, which I am sure is what most programmers do.... XD
Started on the right track through this answer:
Answer 434096
private Gyroscope m_Gyro;
private float speedForce = 3.0f;
private Rigidbody rb;
private void Start() {
m_Gyro = Input.gyro;
m_Gyro.enabled = true;
rb = GetComponent<Rigidbody>();
}
private Vector3 GetGyroForces() {
Vector3 resultantForce = Vector3.zero;
//Quaternion deviceRotation = new Quaternion(0.5f, 0.5f, -0.5f, -0.5f) * m_Gyro.attitude * new Quaternion(0,1,0,0);
float xForce = GetAngleByDeviceAxis(Vector3.up);
float zForce = GetAngleByDeviceAxis(Vector3.right);
//float zForce = diffRot.z;
resultantForce = new Vector3(xForce, 0, zForce);
return resultantForce;
}
private float GetAngleByDeviceAxis(Vector3 axis) {
Quaternion currentRotation = m_Gyro.attitude;
Quaternion eliminationOfOthers = Quaternion.Inverse(Quaternion.FromToRotation(axis, currentRotation * axis));
Vector3 filteredEuler = (eliminationOfOthers * currentRotation).eulerAngles;
float result = filteredEuler.z;
if (axis == Vector3.up) {
result = filteredEuler.y;
}
if (axis == Vector3.right) {
result = (filteredEuler.y > 90 && filteredEuler.y < 270) ? 180 - filteredEuler.x : filteredEuler.x;
}
return result;
}
void FixedUpdate() {
#if UNITY_EDITOR
rb.AddForce(new Vector3(Input.GetAxis("Horizontal") * speedForce, 0, Input.GetAxis("Vertical") * speedForce));
#endif
#if UNITY_ANDROID
rb.AddForce(GetGyroForces());
#endif
}