I've made a UI touch/click controller using a UI Image with a collider. The UI is rendered with a stacked camera.
I'm using IPointerDownHandler.OnPointerDown to get the click event.
The controller is supposed to give a value from 0-1 depending on how far up you click it.
I'm using a Canvas Scaler on the UI to make the controllers resize depending on the device. But that messes up my calculations, since the click position won't be the same. How is this supposed to be handled? Right now the calculation is only correct when I disable the Canvas Scaler or run it on a display with the default dimensions.
public void OnPointerDown(PointerEventData pointerEventData)
{
SetAccelerationValue(pointerEventData.position.y);
}
private void SetAccelerationValue(float posY)
{
float percentagePosition;
var positionOnAccelerator = posY - minY;
var acceleratorHeight = maxY - minY;
percentagePosition = positionOnAccelerator / acceleratorHeight;
Debug.Log(percentagePosition);
}
I would use RectTransformUtility.ScreenPointToLocalPointInRectangle to get a position in the local space of the given RectTransform.
Then combine it with Rect.PointToNormalized
Returns the normalized coordinates corresponding to the point.
The returned Vector2 is in the range 0 to 1, with values greater than 1 or less than 0 clamped.
to get a normalized position within that RectTransform.rect, (0,0) being the bottom-left corner and (1,1) being the top-right corner.
[SerializeField] private RectTransform _rectTransform;
private void Awake ()
{
if(!_rectTransform) _rectTransform = GetComponent<RectTransform>();
}
private bool GetNormalizedPosition(PointerEventData pointerEventData, out Vector2 normalizedPosition)
{
normalizedPosition = default;
// get the pointer position in the local space of the UI element
// NOTE: For click events use "pointerEventData.pressEventCamera"
// For hover events you would rather use "pointerEventData.enterEventCamera"
if(!RectTransformUtility.ScreenPointToLocalPointInRectangle(_rectTransform, pointerEventData.position, pointerEventData.pressEventCamera, out var localPosition)) return false;
normalizedPosition = Rect.PointToNormalized(_rectTransform.rect, localPosition);
// I think this is roughly equivalent to doing something like
//var rect = _rectTransform.rect;
//var normalizedPosition = new Vector2 (
// (localPosition.x - rect.x) / rect.width,
// (localPosition.y - rect.y) / rect.height);
Debug.Log(normalizedPosition);
return true;
}
Since the normalized position returns values like
(0|1)-------(1|1)
|               |
|   (0.5|0.5)   |
|               |
(0|0)-------(1|0)
but it sounds like what you want to get is
(-1|1)----(1|1)
| |
| 0|0 |
| |
(-1|-1)----(1|-1)
So you can simply shift the returned value using e.g.
// Shift the normalized Rect position from [0,0] (bottom-left), [1,1] (top-right)
// into [-1, -1] (bottom-left), [1,1] (top-right)
private static readonly Vector2 _multiplicator = Vector2.one * 2f;
private static readonly Vector2 _shifter = Vector2.one * 0.5f;
private static Vector2 GetShiftedNormalizedPosition(Vector2 normalizedPosition)
{
return Vector2.Scale(normalizedPosition - _shifter, _multiplicator);
}
}
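As a quick sanity check of the shift (using the method above): the horizontal center 0.5 maps to 0, and a point three quarters up maps to 0.5.
Debug.Log(GetShiftedNormalizedPosition(new Vector2(0.5f, 0.75f))); // logs approximately (0.0, 0.5)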
So finally you would use e.g.
public void OnPointerDown(PointerEventData pointerEventData)
{
if(!GetNormalizedPosition(pointerEventData, out var normalizedPosition)) return;
var shiftedNormalizedPosition = GetShiftedNormalizedPosition(normalizedPosition);
SetAccelerationValue(shiftedNormalizedPosition.y);
// And probably for your other question also
SetSteeringValue(shiftedNormalizedPosition.x);
}
And of course within SetAccelerationValue you don't calculate anything but just set the value ;)
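For example, a minimal sketch (the _acceleration field is just an assumption for whatever your driving code reads):
private float _acceleration;
private void SetAccelerationValue(float value)
{
    // the value already arrives normalized, so simply store it
    _acceleration = value;
}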
This always uses the current rect, so you don't have to store any min/max values, and it also adapts to any dynamic re-scaling of the rect.
This would then probably also apply to your other almost duplicate question ;)
I'm adding the option for players to move the camera to the sides. I also want to limit how far they can move the camera to the sides.
If the camera was aligned with the axis, I could simply move around X/Z axis and set a limit on each axis as to how far it can go. But my problem is that the camera is rotated, so I'm stuck figuring out how to move it and set a limit. How could I implement this?
using UnityEngine;
[RequireComponent(typeof(Camera))]
public class CameraController : MonoBehaviour
{
Camera cam;
Vector3 dragOrigin;
bool drag = false;
void Awake()
{
cam = GetComponent<Camera>();
}
void LateUpdate()
{
// Camera movement with mouse
Vector3 diff = (cam.ScreenToWorldPoint(Input.mousePosition)) - cam.transform.position;
if (Input.GetMouseButton(0))
{
if (drag == false)
{
drag = true;
dragOrigin = cam.ScreenToWorldPoint(Input.mousePosition);
}
}
else
{
drag = false;
}
if (drag)
{
// Here I want to set a constraint in a rectangular plane perpendicular to camera view
transform.position = dragOrigin - diff;
}
}
}
Transform in Unity comes with a handy Transform.right property, which takes the object's rotation into account. To move your camera sideways you could further utilize Lerp to make the movement smooth.
transform.position += transform.right * factor;
moves an object to the right.
Use factor to adjust the desired distance, and by doing so you can also set limits. A negative factor would mean moving left, by the way :) Hope that helps!
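A minimal sketch of that idea (maxOffset and currentOffset are assumed names for illustration, not part of any Unity API):
[SerializeField] private float maxOffset = 5f; // how far the camera may stray to either side
private float currentOffset; // accumulated sideways movement so far
private void MoveSideways(float factor)
{
    // clamp the accumulated offset so the camera never exceeds the limits
    var clamped = Mathf.Clamp(currentOffset + factor, -maxOffset, maxOffset);
    var delta = clamped - currentOffset;
    currentOffset = clamped;
    transform.position += transform.right * delta;
}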
It can be tricky to deal with constraints on rotated objects. The math behind this includes some vector/rotation math to figure out the correct limits relative to the object's orientation, and whether you've exceeded them.
Luckily though, Unity gives you some shortcuts to skip this math: Transform.InverseTransformPoint() and Transform.TransformPoint()! These two methods allow you to transform a point in world space into a point in local space, and vice versa.
That means that no matter how your camera is oriented, you can interpret a position from the orientation of the camera - and with just a couple extra steps, your X/Z constraints are usable because you can calculate X/Z from the camera's point of view.
Let's try to adapt your current script to use this:
using UnityEngine;
[RequireComponent(typeof(Camera))]
public class CameraController : MonoBehaviour
{
// Set the X and Z values in the editor to define the rectangle within
// which your camera can move
public Vector3 maxConstraints;
public Vector3 minConstraints;
Camera cam;
Vector3 dragOrigin;
bool drag = false;
Vector3 cameraStart;
void Awake()
{
cam = GetComponent<Camera>();
// Here, we record the start since we'll need a reference to determine
// how far the camera has moved within the allowed rectangle
cameraStart = transform.position;
}
void LateUpdate()
{
// Camera movement with mouse
Vector3 diff = (cam.ScreenToWorldPoint(Input.mousePosition)) - cam.transform.position;
if (Input.GetMouseButton(0))
{
if (drag == false)
{
drag = true;
dragOrigin = cam.ScreenToWorldPoint(Input.mousePosition);
}
}
else
{
drag = false;
}
if (drag)
{
// Now, rather than setting the position directly, let's make sure it's
// within the valid rectangle first
Vector3 newPosition = dragOrigin - diff;
// First, we get into the local space of the camera and determine the delta
// between the start and possible new position
Vector3 localStart = transform.InverseTransformPoint(cameraStart);
Vector3 localNewPosition = transform.InverseTransformPoint(newPosition);
Vector3 localDelta = localNewPosition - localStart;
// Now, we calculate constrained values for the X and Z coordinates
float clampedDeltaX = Mathf.Clamp(localDelta.x, minConstraints.x, maxConstraints.x);
float clampedDeltaZ = Mathf.Clamp(localDelta.z, minConstraints.z, maxConstraints.z);
// Then, we can use the constrained values to determine the constrained position
// within local space
Vector3 localClampedPosition = new Vector3(clampedDeltaX, localDelta.y, clampedDeltaZ)
+ localStart;
// Finally, we can convert the local position back to world space and use it
transform.position = transform.TransformPoint(localClampedPosition);
}
}
}
Note that I'm somewhat assuming dragOrigin - diff moves your camera correctly in its present state. If it doesn't do what you want, please include details on the unwanted behaviour and we can sort that out too.
I have been making a script in Unity that measures how far a player has moved in the real world using XRNodes like this for example with the right hand:
InputTracking.GetLocalPosition(XRNode.RightHand)
at the start of the movement and then comparing it to the end position
Now I would like to get the distance moved, even if the player moved around in a circle.
Is there a method to do this with XRNodes? Measuring total distance moved during play?
Yes, well, you could just simply sum it up every frame like
// NOTE: InputTracking lives in the UnityEngine.XR namespace (legacy XR input API)
// Stores the overall moved distance
private float totalMovedDistance;
// flag to start and stop tracking
// Could also use a Coroutine if that fits you better
private bool track;
// Store position of last frame
private Vector3 lastPos;
public void BeginTrack()
{
// reset total value
totalMovedDistance = 0;
// store first position
lastPos = InputTracking.GetLocalPosition(XRNode.RightHand);
// start tracking
track = true;
}
public void EndTrack()
{
// stop tracking
track = false;
// whatever you want to do with the total distance now
Debug.Log($"Total moved distance in local space: {totalMovedDistance}", this);
}
private void Update()
{
// If not tracking do nothing
if(!track) return;
// get current controller position
var currentPos = InputTracking.GetLocalPosition(XRNode.RightHand);
// Get distance moved since last frame
var thisFrameDistance = Vector3.Distance(currentPos, lastPos);
// sum it up to the total value
totalMovedDistance += thisFrameDistance;
// update the last position
lastPos = currentPos;
}
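For quick testing you could drive it from a key, e.g. (just one hypothetical way to call the two methods):
private void LateUpdate()
{
    // hold T to track, release it to stop and log the result
    if (Input.GetKeyDown(KeyCode.T)) BeginTrack();
    if (Input.GetKeyUp(KeyCode.T)) EndTrack();
}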
In Unity I have a UI panel which contains a player object (a UI Image).
I move the player object around inside the panel with user input (keyboard or touch).
I can't keep the player object inside its parent panel.
Please check the image below; I want to keep the player inside the red panel.
Here is the code I tried:
public Camera MainCamera; //be sure to assign this in the inspector to your main camera
private Vector2 screenBounds;
private float objectWidth;
private float objectHeight;
private RectTransform pnlBackgroundTransform;
private void Start()
{
pnlBackgroundTransform = GameObject.Find("PnlBackground").GetComponent<RectTransform>();
screenBounds = MainCamera.ScreenToWorldPoint(new Vector3(pnlBackgroundTransform.rect.width , pnlBackgroundTransform.rect.height , MainCamera.transform.position.z));
objectWidth = transform.GetComponent<SpriteRenderer>().bounds.extents.x; //extents = size of width / 2
objectHeight = transform.GetComponent<SpriteRenderer>().bounds.extents.y; //extents = size of height / 2
}
void LateUpdate()
{
Vector3 viewPos = transform.position;
viewPos.x = Mathf.Clamp(viewPos.x, screenBounds.x * -1 + objectWidth, screenBounds.x - objectWidth);
viewPos.y = Mathf.Clamp(viewPos.y, screenBounds.y * -1 + objectHeight, screenBounds.y - objectHeight);
Debug.Log(screenBounds);
Debug.Log(viewPos);
transform.position = viewPos;
}
I'd say it's not very usual to have the player implemented as a UI element; instead you should implement it outside the UI/Canvas system.
The UI/Canvas system uses a set of placing and scaling rules to deal with responsive design. You have at least 4 values (excluding rotation) to place something on the screen: anchor, pivot, position and scale.
For example: if you want to create a square, you can either set its size in absolute pixel values or in values relative to its parent. If you're using absolute values, your UI Scale Mode, defined on the Canvas object, will affect the visual result.
This means the UI/Canvas is for elements that should adapt to the screen, such as buttons, dialogs, labels, etc., taking advantage of device parameters to improve the UX.
Outside the UI/Canvas system, things are directly based on linear algebra: you have a 3D vector space (a "World") where everything exists with an absolute size and position. Then your Camera stretches and twists the whole world to match your current perspective. That means your object will always have the same size, regardless of screen size.
Now, assuming you have a very specific reason to implement your game in UI, there are a few ways you can do it. I'll assume you're using absolute values. Please note all the units used here are pixels, so the effect will differ between devices with different resolutions and is sensitive to the UI Scale Mode parameter. Also, please note I've set both anchors (min and max) to (0,0), the bottom-left corner (the default is the screen center, (0.5,0.5)), in order to avoid negative coordinates.
The following script is attached to the player's UI Image.
public class UIMovementController : MonoBehaviour
{
public float speed = 5.0f;
new private RectTransform transform;
private Rect canvasRect;
private void Start()
{
transform = GetComponent<RectTransform>();
canvasRect = GetComponentInParent<Canvas>().pixelRect;
}
void Update()
{
// Keyboard Input (Arrows)
Vector2 move = new Vector2(0,0);
if (Input.GetKey(KeyCode.UpArrow)) { move.y += speed; }
if (Input.GetKey(KeyCode.DownArrow)) { move.y -= speed; }
if (Input.GetKey(KeyCode.LeftArrow)) { move.x -= speed; }
if (Input.GetKey(KeyCode.RightArrow)) { move.x += speed; }
transform.anchoredPosition += move;
// Position clamping
Vector2 clamped = transform.anchoredPosition;
clamped.x = Mathf.Clamp(clamped.x, transform.rect.width / 2, canvasRect.width - transform.rect.width / 2);
clamped.y = Mathf.Clamp(clamped.y, transform.rect.height / 2, canvasRect.height - transform.rect.height / 2);
transform.anchoredPosition = clamped;
}
}
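Note that the clamping above assumes the player's pivot sits at the center (0.5, 0.5). A hedged sketch of a pivot-aware variant, using the same fields as the script above (ClampToCanvas is a hypothetical helper):
private Vector2 ClampToCanvas(Vector2 anchoredPosition)
{
    // derive the extents on each side of the pivot instead of assuming width/2 and height/2
    var leftExtent   = transform.rect.width  * transform.pivot.x;
    var rightExtent  = transform.rect.width  * (1f - transform.pivot.x);
    var bottomExtent = transform.rect.height * transform.pivot.y;
    var topExtent    = transform.rect.height * (1f - transform.pivot.y);
    anchoredPosition.x = Mathf.Clamp(anchoredPosition.x, leftExtent, canvasRect.width - rightExtent);
    anchoredPosition.y = Mathf.Clamp(anchoredPosition.y, bottomExtent, canvasRect.height - topExtent);
    return anchoredPosition;
}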
I would like to be able to zoom into an ILNumerics scene viewed by a camera (as in scene.Camera) with the center point of the zoom determined by where the mouse pointer is located when I start spinning the mouse scroll wheel. The default zoom behavior is for the zoom center to be at the scene.Camera.LookAt point. So I guess this would require the mouse to be tracked in (X,Y) continuously and for that point to be used as the new LookAt point? This seems to be like this post on getting the 3D coordinates from a mouse click, but in my case there's no click to indicate the location of the mouse.
Tips would be greatly appreciated!
BTW, this kind of zoom method is standard operating procedure in CAD software to zoom in and out on an assembly of parts. It's super convenient for the user.
One approach is to overload the MouseWheel event handler. The current coordinates of the mouse are available here, too.
1. Use the mouse screen coordinates to acquire (to "pick") the world coordinate corresponding to the primitive under the mouse.
2. Adjust the Camera.Position and Camera.ZoomFactor to 'move' the camera closer to the point under the mouse and to achieve the required 'directional zoom' effect.
Here is a complete example from the ILNumerics website:
using System;
using System.Windows.Forms;
using ILNumerics;
using ILNumerics.Drawing;
using ILNumerics.Drawing.Plotting;
using static ILNumerics.Globals;
using static ILNumerics.ILMath;
namespace ILNumerics.Examples.DirectionalZoom {
public partial class Form1 : Form {
public Form1() {
InitializeComponent();
}
private void panel2_Load(object sender, EventArgs e) {
Array<float> X = 0, Y = 0, Z = CreateData(X, Y);
var surface = new Surface(Z, X, Y, colormap: Colormaps.Winter);
surface.UseLighting = true;
surface.Wireframe.Visible = false;
panel2.Scene.Camera.Add(surface);
// setup mouse handlers
panel2.Scene.Camera.Projection = Projection.Orthographic;
panel2.Scene.Camera.MouseDoubleClick += Camera_MouseDoubleClick;
panel2.Scene.Camera.MouseWheel += Camera_MouseWheel;
// initial zoom all
ShowAll(panel2.Scene.Camera);
}
private void Camera_MouseWheel(object sender, Drawing.MouseEventArgs e) {
// Update: added comments.
// the next conditionals help to sort out some calls not needed. Helpful for performance.
if (!e.DirectionUp) return;
if (!(e.Target is Triangles)) return;
// make sure to start with the SceneSyncRoot - the copy of the scene which receives
// user interaction and is eventually used for rendering. See: https://ilnumerics.net/scene-management.html
var cam = panel2.SceneSyncRoot.First<Camera>();
if (Equals(cam, null)) return; // TODO: error handling. (Should not happen in regular setup, though.)
// in case the user has configured limited interaction
if (!cam.AllowZoom) return;
if (!cam.AllowPan) return; // this kind of directional zoom "comprises" a pan operation, to some extent.
// find mouse coordinates. Works only if mouse is over a Triangles shape (surfaces, but not wireframes):
using (var pick = panel2.PickPrimitiveAt(e.Target as Drawable, e.Location)) {
if (pick.NextVertex.IsEmpty) return;
// acquire the target vertex coordinates (world coordinates) of the mouse
Array<float> vert = pick.VerticesWorld[pick.NextVertex[0], r(0, 2), 0];
// and transform them into a Vector3 for easier computations
var vertVec = new Vector3(vert.GetValue(0), vert.GetValue(1), vert.GetValue(2));
// perform zoom: we move the camera closer to the target
float scale = Math.Sign(e.Delta) * (e.ShiftPressed ? 0.01f : 0.2f); // adjust for faster / slower zoom
var offs = (cam.Position - vertVec) * scale; // direction on the line cam.Position -> target vertex
cam.Position += offs; // move the camera on that line
cam.LookAt += offs; // keep the camera orientation
cam.ZoomFactor *= (1 + scale);
// TODO: consider adding: the lookat point now moved away from the center / the surface due to our zoom.
// In order for better rotations it makes sense to place the lookat point back to the surface,
// by adjusting cam.LookAt appropriately. Otherwise, one could use cam.RotationCenter.
e.Cancel = true; // don't execute common mouse wheel handlers
e.Refresh = true; // immediate redraw at the end of event handling
}
}
private void Camera_MouseDoubleClick(object sender, Drawing.MouseEventArgs e) {
var cam = panel2.Scene.Camera;
ShowAll(cam);
e.Cancel = true;
e.Refresh = true;
}
// Some sample data. Replace this with your own data!
private static RetArray<float> CreateData(OutArray<float> Xout, OutArray<float> Yout) {
using (Scope.Enter()) {
Array<float> x_ = linspace<float>(0, 20, 100);
Array<float> y_ = linspace<float>(0, 18, 80);
Array<float> Y = 1, X = meshgrid(x_, y_, Y);
Array<float> Z = abs(sin(sin(X) + cos(Y))) + .01f * abs(sin(X * Y));
if (!isnull(Xout)) {
Xout.a = X;
}
if (!isnull(Yout)) {
Yout.a = Y;
}
return -Z;
}
}
// See: https://ilnumerics.net/examples.php?exid=7b0b4173d8f0125186aaa19ee8e09d2d
public static double ShowAll(Camera cam) {
// Update: adjusts the camera Position too.
// this example works only with orthographic projection. You will need to take the view frustum
// into account, if you want to make this method work with perspective projection also. however,
// the general functioning would be similar....
if (cam.Projection != Projection.Orthographic) {
throw new NotImplementedException();
}
// get the overall extend of the cameras scene content
var limits = cam.GetLimits();
// take the maximum of width/ height
var maxExt = limits.HeightF > limits.WidthF ? limits.HeightF : limits.WidthF;
// make sure the camera looks at the unrotated bounding box
cam.Reset();
// center the camera view
cam.LookAt = limits.CenterF;
cam.Position = cam.LookAt + Vector3.UnitZ * 10;
// apply the zoom factor: the zoom factor will scale the 'left', 'top', 'bottom', 'right' limits
// of the view. In order to fit exactly, we must take the "radius"
cam.ZoomFactor = maxExt * .50;
return cam.ZoomFactor;
}
}
}
Note that the new handler performs the directional zoom only when the mouse is located over an object held by this Camera! If, instead, the mouse is placed over the background of the scene or over some other Camera / plot cube object, no effect will be visible and the common zoom feature is performed (zooming in/out to the look-at point).
I want to determine the screen coordinates of a RectTransform in Unity3D. How is this done, taking into consideration that the element may be anchored, or even scaled?
I am trying to use RectTransform.GetWorldCorners and then convert each Vector3 to screen coordinates, but the values are wrong.
public Rect GetScreenCoordinates(RectTransform uiElement)
{
var worldCorners = new Vector3[4];
uiElement.GetWorldCorners(worldCorners);
var result = new Rect(
worldCorners[0].x,
worldCorners[0].y,
worldCorners[2].x - worldCorners[0].x,
worldCorners[2].y - worldCorners[0].y);
for (int index = 0; index < 4; index++)
result[index] = Camera.main.WorldToScreenPoint(result[index]);
return result;
}
Although the help says that it returns coordinates in world space, they are actually in screen space when the Canvas is in Screen Space - Overlay mode (i.e. when the UI is not in world space), because world units then coincide with screen pixels. So the solution is simple:
public Rect GetScreenCoordinates(RectTransform uiElement)
{
var worldCorners = new Vector3[4];
uiElement.GetWorldCorners(worldCorners);
var result = new Rect(
worldCorners[0].x,
worldCorners[0].y,
worldCorners[2].x - worldCorners[0].x,
worldCorners[2].y - worldCorners[0].y);
return result;
}
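If the Canvas is instead set to Screen Space - Camera or World Space, the corners really are world coordinates, and you would convert them with the camera that renders the canvas (a sketch; canvasCamera is an assumed reference, e.g. taken from Canvas.worldCamera):
for (var i = 0; i < 4; i++)
    worldCorners[i] = canvasCamera.WorldToScreenPoint(worldCorners[i]);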