Unity mobile joystick problems

I am creating a mobile top-down shooter with Unity, and my problem is that aiming with the joystick feels very stiff.
It's as if the joystick only knows about 100 different directions when I need 1000; I hope that makes it somewhat understandable what I mean. The joystick just doesn't feel smooth.
Is there any way to change this? Something like increasing a sampling rate, so that the smartphone's screen detects even the smallest changes?
For the joystick I use the following asset:
https://assetstore.unity.com/packages/tools/input-management/joystick-pack-107631
My implementation of the rotation, and my attempt to make it smooth, looks like this:
float eulerY = Mathf.Atan2(_JoystickShoot.Direction.x, _JoystickShoot.Direction.y) * Mathf.Rad2Deg;

// Snap on the first sample or on large jumps; otherwise blend halfway toward the new angle.
// Note: Mathf.DeltaAngle returns a signed angle, so the jump test needs Mathf.Abs.
if (_LastJoystickShootEulerY == 0.0f || Mathf.Abs(Mathf.DeltaAngle(eulerY, _LastJoystickShootEulerY)) > 30)
    _Weapon.GetBarrelContainer().transform.rotation = Quaternion.Euler(0, eulerY, 0);
else
    _Weapon.GetBarrelContainer().transform.rotation = Quaternion.Lerp(Quaternion.Euler(0, _LastJoystickShootEulerY, 0), Quaternion.Euler(0, eulerY, 0), 0.5f);

_LastJoystickShootEulerY = eulerY;
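For what it's worth, a common way to make aiming feel smooth and frame-rate independent is to rotate toward the joystick direction at a fixed angular speed with Quaternion.RotateTowards instead of lerping by a constant factor. A minimal sketch, assuming the Joystick Pack's Direction property; the class, field names, and speed value are illustrative:

using UnityEngine;

public class AimSmoothing : MonoBehaviour
{
    [SerializeField] private Joystick _joystickShoot;   // Joystick Pack component
    [SerializeField] private Transform _barrel;         // e.g. the barrel container
    [SerializeField] private float _degreesPerSecond = 720f;

    private void Update()
    {
        Vector2 dir = _joystickShoot.Direction;
        if (dir.sqrMagnitude < 0.01f) return;   // stick released: keep the last aim

        // Convert the stick direction to a yaw angle, then turn toward it
        // at a constant speed so aiming stays smooth at any frame rate.
        float targetY = Mathf.Atan2(dir.x, dir.y) * Mathf.Rad2Deg;
        _barrel.rotation = Quaternion.RotateTowards(
            _barrel.rotation,
            Quaternion.Euler(0f, targetY, 0f),
            _degreesPerSecond * Time.deltaTime);
    }
}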

How to write dynamically to whole stencil buffer in Unity

What do I want to achieve?
I'd like to achieve an effect in Unity3D where I superpose a few cameras on top of each other. Each camera would draw to a specific area of the screen. If possible, I'd like these areas to change dynamically.
I am using Unity (latest version) and URP.
How I see it technically:
For implementation and performance reasons, it seems writing to the stencil buffer is the way to go. That way, I can render only the part of the screen I want for each camera. It is also quite easy once the stencil is made, because the Forward Rendering settings in Unity offer such capabilities out of the box.
What I can't figure out:
The problem is, I don't know how to efficiently write to the whole stencil buffer (each frame). The best way would be to use a compute shader (or maybe a simple script) that directly writes the values after some calculations. Is there a way to do that? If yes, how?
Another alternative may be to use a transparent quad in front of each camera and write to the stencil buffer that way. But 1) there seems to be an SV_StencilRef keyword for the fragment shader, though it is not supported by Unity yet? And 2) I would still lose performance nevertheless.
Thanks for any help / ideas about how to tackle this problem.
Edit (clarification): I'd like to be able to render free shapes, not only rects, which prevents the use of the standard viewport rect.
After some searching, I found Voronoi split screen to be quite similar (from a technical point of view) to what I'd like to achieve (see here).
If I understand correctly, you only need to play with each camera's Viewport Rect (https://docs.unity3d.com/ScriptReference/Camera-rect.html) to determine which camera should render which part of the screen.
Response to comment: no, it's not stretched. Here is an example with four cameras:
Create a scene with four cameras, add this script to an object, and add the cameras to the array on the script. I added _movingObject just to see something moving, but it's not necessary.
using UnityEngine;

public class CameraHandler : MonoBehaviour
{
    [SerializeField] private Transform _movingObject;
    [SerializeField] private float _posMod = 10.0f;
    [SerializeField] private float _cameraPosMod = 0.1f;
    [SerializeField] private Camera[] _cameras;

    private void Update()
    {
        float t = Time.time;
        float x = Mathf.Sin(t);
        float y = Mathf.Cos(t);

        if (_movingObject) _movingObject.position = new(x * _posMod, 1.0f, y * _posMod);

        Vector2 center = new(0.5f + x * _cameraPosMod, 0.5f + y * _cameraPosMod);

        // bottom left camera
        _cameras[0].rect = new(0.0f, 0.0f, center.x, center.y);
        // bottom right camera
        _cameras[1].rect = new(center.x, 0.0f, 1.0f - center.x, center.y);
        // upper left camera
        _cameras[2].rect = new(0.0f, center.y, center.x, 1.0f - center.y);
        // upper right camera
        _cameras[3].rect = new(center.x, center.y, 1.0f - center.x, 1.0f - center.y);
    }
}
Not exactly an answer to your question about the stencil buffer, but I had a (hopefully) similar use case recently.
The main issue in the URP camera stack:
- If your camera is set to Base, it will overdraw the entire screen.
- You cannot adjust the viewport on any Overlay camera.
- You can actually try to set the viewport via code; the result is that your camera renders only the correct part of the scene ... but it gets stretched to the entire screen ^^
What I did in the end was:
- Leave all content and cameras at the origin position.
- Apply appropriate masks to filter the content per camera.
- Make your camera Overlay (as usual).
- Go through a custom Camera.projectionMatrix:
m_Camera.projectionMatrix = Matrix4x4.Translate(projectionOffset) * Matrix4x4.Perspective(m_Camera.fieldOfView, m_Camera.aspect, m_Camera.nearClipPlane, m_Camera.farClipPlane);
where projectionOffset is an offset in viewport space (normalized 0 - 1) from the bottom-left corner.
For example, in my case I wanted a minimap placed 400, 400 pixels from the top-right corner, so I did:
var topRightOffsetPixels = new Vector2(400, 400);
var topRightOffsetViewport = Vector2.one - new Vector2(topRightOffsetPixels.x * 2 / Screen.width, topRightOffsetPixels.y * 2 / Screen.height);
m_Camera.projectionMatrix = Matrix4x4.Translate(topRightOffsetViewport) * Matrix4x4.Perspective(m_Camera.fieldOfView, m_Camera.aspect, m_Camera.nearClipPlane, m_Camera.farClipPlane);
See also Matrix4x4.Perspective
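For completeness, here is a sketch of how the snippet above might be wrapped in a component; recomputing the matrix in LateUpdate (my assumption, not from the original answer) keeps the offset valid when the screen size changes:

using UnityEngine;

// Hypothetical wrapper around the projection-offset trick above.
[RequireComponent(typeof(Camera))]
public class OverlayViewportOffset : MonoBehaviour
{
    [SerializeField] private Vector2 _offsetPixels = new Vector2(400, 400);

    private Camera m_Camera;

    private void Awake() => m_Camera = GetComponent<Camera>();

    private void LateUpdate()
    {
        // Offset in viewport space (normalized 0 - 1) from the bottom-left corner.
        var offsetViewport = Vector2.one - new Vector2(
            _offsetPixels.x * 2 / Screen.width,
            _offsetPixels.y * 2 / Screen.height);

        m_Camera.projectionMatrix =
            Matrix4x4.Translate(offsetViewport) *
            Matrix4x4.Perspective(m_Camera.fieldOfView, m_Camera.aspect,
                m_Camera.nearClipPlane, m_Camera.farClipPlane);
    }
}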

Moving camera to proper position in Zoom function in Unity

Hi, I have a question that I'm hoping someone can help me work through. I've asked elsewhere to no avail, but it seems like a standard problem, so I'm not sure why I haven't been getting answers.
It's basically about setting up a zoom function that mirrors Google Maps zoom, where the camera zooms in/out on where your mouse is. I know this probably gets asked a lot, but I think Unity's new Input System changed things up a bit since the 4-6 year old questions that I've found in my own research.
In any case, I've set up a parent GameObject that holds all the 2D sprites in my scene, plus an orthographic camera. I can set the orthographic size through code to change the zoom, but it's moving the camera to the proper place that I am having trouble with.
This was my 1st attempt:
public void Zoom(float direction, Vector2 mousePosition)
{
    // zoom calcs
    float rate = 1 + direction * Time.deltaTime;
    float targetOrtho = Mathf.MoveTowards(mainCam.orthographicSize, mainCam.orthographicSize / rate, 0.1f);

    // move calcs
    mousePosition = mainCam.ScreenToWorldPoint(mousePosition);
    Vector2 deltaPosition = previousPosition - mousePosition;

    // move and zoom
    transform.position += new Vector3(deltaPosition.x, deltaPosition.y, 0);
    // zoomLevels is a generic struct that holds the max/min values.
    SetZoomLevel(Mathf.Clamp(targetOrtho, zoomLevels.min, zoomLevels.max));

    previousPosition = mousePosition;
}
This function gets called through my input controller, activated through Unity's Input System events. When the mouse wheel scrolls, the Zoom function is given a normalized value as direction (1 or -1) and the current mousePosition. When it's finished its calculation, the mousePosition is stored in previousPosition.
The code actually works, except it is extremely jittery. This, of course, happens because there is no Time.deltaTime applied to the camera movement, nor is this in LateUpdate, both of which help to smooth the movement. Except, in the former case, multiplying Time.deltaTime into new Vector3(deltaPosition.x, deltaPosition.y, 0) seems to make the zoom occur at the camera's centre rather than the mouse position. When I put the zoom into LateUpdate, it creates a cool but unwanted vibration effect when the camera moves.
So, after doing some thinking and reading, I thought it might be best to calculate the difference between the mouse position and the camera's centre point, then multiply it by a scale factor, which is the camera's orthographic size * 2 (maybe...??). Hence my updated code here:
public void Zoom(float direction, Vector2 mousePosition)
{
    // zoom
    float rate = 1 + direction * Time.unscaledDeltaTime * zoomSpeed;
    float orthoTarget = Mathf.MoveTowards(mainCam.orthographicSize, mainCam.orthographicSize * rate, maxZoomDelta);
    SetZoomLevel(Mathf.Clamp(orthoTarget, zoomLevels.min, zoomLevels.max));

    // movement
    if (mainCam.orthographicSize < zoomLevels.max && mainCam.orthographicSize > zoomLevels.min)
    {
        mousePosition = mainCam.ScreenToWorldPoint(mousePosition);
        Vector2 offset = (mousePosition - new Vector2(transform.position.x, transform.position.y)) / (mainCam.orthographicSize * 2);
        // panPositions are the same generic struct holding min/max values
        offset.x = Mathf.Clamp(offset.x, panPositions.min.x, panPositions.max.x);
        offset.y = Mathf.Clamp(offset.y, panPositions.min.y, panPositions.max.y);
        transform.position += new Vector3(offset.x, offset.y, 0) * Time.deltaTime;
    }
}
This seems a little closer to what I'm trying to achieve, but the camera still zooms in near its centre point and zooms out on some other point... I'm a bit lost as to what I am missing here.
Is anyone able to help guide my thinking about what I need to do to create a smooth zoom in/out on the point where the mouse currently is? Much appreciated, and thanks for reading through this.
OK, I figured it out, in case anyone ever comes across the same problem. It is a standard problem that is easily solved once you know the math.
Basically, it's a matter of scaling and translating the camera. You can do one or the other first; it does not matter, the outcome is the same. Imagine your screen looks like this:
The green box is your camera viewport and the arrow is your cursor. When you zoom in, the orthographic size gets smaller and the view shrinks around its anchor point (usually P1(0,0)). This is the scaling aspect of the problem, and the following image explains it well:
So now we want to move the camera to its new position:
So how do we do this? It's just a matter of getting the distance from the old camera position P1(0,0) to the new camera position P2(x,y). Basically, we only want this:
My solution to find the length of the arrow in the picture above was to subtract the length from the cursor position to the new camera position (newLength) from the length from the cursor position to the old camera position (oldLength).
But how do you find newLength? Well, since the length scales together with the camera viewport, newLength will be either oldLength / scaleFactor or oldLength * scaleFactor, depending on whether you zoom in or out, respectively. The scale factor can be whatever you want (zoom in/out by 2, 4, 1.4... whatever).
From there, it's just a matter of subtracting newLength from oldLength and adding that difference to the current camera position. The pseudocode is below:
(Note that I renamed 'oldLength' to 'length' and 'newLength' to 'scaledLength'.)
// make sure you're working in world space
mousePosition = camera.ScreenToWorldPoint(mousePosition);

length = mousePosition - currentCameraPosition;
scaledLength = length / scaleFactor; // to zoom in, otherwise it's length * scaleFactor
deltaLength = length - scaledLength;

// change position (add the difference, so the camera moves toward the cursor)
cameraPosition = currentCameraPosition + deltaLength;

// do zoom
camera.orthographicSize /= scaleFactor; // to zoom in, otherwise orthographicSize *= scaleFactor
Works perfectly for me. Thanks to those who helped me in a Discord coding community!
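For reference, a minimal component version of the pseudocode above, assuming an orthographic camera; the class and field names are illustrative:

using UnityEngine;

public class ZoomToCursor : MonoBehaviour
{
    [SerializeField] private Camera _camera;
    [SerializeField] private float _scaleFactor = 1.1f;   // > 1

    public void Zoom(bool zoomIn, Vector2 mouseScreenPosition)
    {
        // Make sure you're working in world space.
        Vector3 mouseWorld = _camera.ScreenToWorldPoint(mouseScreenPosition);

        Vector3 length = mouseWorld - _camera.transform.position;
        float factor = zoomIn ? 1f / _scaleFactor : _scaleFactor;
        Vector3 scaledLength = length * factor;
        Vector3 deltaLength = length - scaledLength;

        // Move the camera toward the cursor, then scale the view.
        // (If you move a content root instead of the camera, the sign flips.)
        _camera.transform.position += deltaLength;
        _camera.orthographicSize *= factor;
    }
}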

Rotating player is messing up my movement

I wrote a simple player movement script which moves my player like this:
private void MovePlayer()
{
    // Initialize directions for player movement
    movement = transform.right * horizontal * playerSpeed + transform.forward * vertical * playerSpeed;

    // Move player
    rb.AddForce(movement, ForceMode.Acceleration);
}
and I am trying to rotate my player towards the axes. For example, if the player is pressing A, the horizontal axis will be -1, and I want to rotate my player left by horizontal * 90f. But when I try this, my horizontal axis acts like my vertical one: if I press A it will move my player backwards, and if I press D it will do the same thing. This is how I rotate the player:
// buggy code:
private void RotatePlayerTowardsAxis()
{
    // Rotate player horizontally
    transform.rotation = Quaternion.Euler(0f, horizontal * 90f, 0f);
}
Is there a way I can do this?
Edit:
The vertical is still pushing me up and down.
First, it is important to check whether you are looking at the scene in Local or Global view. Check your axes in Local mode:
If your character model faces towards the blue axis, then everything should be okay.
If you want your character to rotate only in 90-degree steps, it can be done somehow like this:
transform.rotation = Quaternion.Euler(0, 90, 0);
You can use the following as well, but this might not fit your needs:
transform.Rotate(transform.up, 90);
This will make your character rotate by 90 degrees AROUND its own up axis. If you want it to always rotate around the world up axis:
transform.Rotate(Vector3.up, 90);
However, transform.rotation = Quaternion.Euler(0, 90, 0); might be better for you, as it always "resets" the rotation when you overwrite it. This will keep the character fixed to a direction. So if you press left, the character will always look to the left at 90 degrees. And if you press right after the left turn, your character will do a 180° turn instead of looking forward. I hope you get what I'm trying to say. This means that your character will be bound to the world axes instead of its own. To modify this (if you want to), write something like this:
transform.rotation *= Quaternion.Euler(0, 90, 0); // quaternions compose by multiplication, not addition
Hope I could help!
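For what it's worth, the symptom in the question (A and D both pushing the player backwards) is consistent with the movement code using transform.right and transform.forward, which rotate together with the player: after the 90-degree turn, the input is re-applied in the rotated frame. If the movement should stay in world space while only the model turns, one option is to build the force from world axes instead. A sketch, assuming the same rb, horizontal, vertical, and playerSpeed members as above:

private void MovePlayer()
{
    // World-space axes: rotating the player no longer changes
    // which way the input pushes it.
    movement = (Vector3.right * horizontal + Vector3.forward * vertical) * playerSpeed;
    rb.AddForce(movement, ForceMode.Acceleration);
}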

Unity 3D Ball Follow the finger

I am trying to create a 3D game like Ketchapp's Ball Race, in which the cube slides along a road and the left-right movement is controlled using touch.
The problem I am facing is that the touch sensitivity seems to behave differently on different devices, because of which I am not able to calculate the left-right displacement consistently across all devices.
This is how I am calculating the left-right displacement of the cube:
Vector2 touchDeltaPosition = Input.GetTouch(0).deltaPosition ;
transform.Translate(touchDeltaPosition.x * .1f * Time.deltaTime, 0, 0);
However, this is not working properly on all devices. Any help will be highly appreciated.
See this answer: https://stackoverflow.com/a/25740565/10063126
Basically, ScreenToWorldPoint was used: the world position is computed, not the screen touch position. But you have to solve for the delta position manually.
Example:
Vector3 currPos = Input.mousePosition;
Vector3 currWorld = Camera.main.ScreenToWorldPoint(currPos);
Vector3 prevWorld = Camera.main.ScreenToWorldPoint(prevPos); // prevPos is a field storing last frame's screen position
Vector3 deltaPos = currWorld - prevWorld; // world-space movement since the last frame
transform.Translate(deltaPos.x * sensitivity * Time.deltaTime, 0, 0);
prevPos = currPos;
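A different way to attack the device-dependence, not from the linked answer: normalize the raw delta by the screen width, so a swipe across a given fraction of the screen moves the cube the same world distance on every device. A sketch with an illustrative sensitivity field:

using UnityEngine;

public class TouchSlide : MonoBehaviour
{
    // World units the cube moves for a swipe across the full screen width.
    [SerializeField] private float _worldUnitsPerScreenWidth = 10f;

    private void Update()
    {
        if (Input.touchCount == 0) return;

        // deltaPosition is already the movement since the last frame,
        // so no extra Time.deltaTime factor is needed here.
        Vector2 delta = Input.GetTouch(0).deltaPosition;
        float fraction = delta.x / Screen.width;
        transform.Translate(fraction * _worldUnitsPerScreenWidth, 0f, 0f);
    }
}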
How about using two buttons on your screen, one for left control and one for right? When the left button is pressed you can apply a value to go left, and the same for the right button. This way your ball's movement will be independent of the touched position's X value.

Unity - Stay Within Screen Bounds

I have been making a top-down 2D game and have come across a small issue: I need the player to stay within the screen bounds at all times. I have seen people with this problem before and have tried their solutions, however none of them have worked with my game. This is because my player character uses physics to move around. This is what I have inside my FixedUpdate function:
minScreenBounds = Camera.main.ScreenToWorldPoint(new Vector3(0, 0, 0));
maxScreenBounds = Camera.main.ScreenToWorldPoint(new Vector3(Screen.width, Screen.height, 0));

// Clamp the player 1 unit inside the visible area.
transform.position = new Vector3(
    Mathf.Clamp(transform.position.x, minScreenBounds.x + 1, maxScreenBounds.x - 1),
    Mathf.Clamp(transform.position.y, minScreenBounds.y + 1, maxScreenBounds.y - 1),
    transform.position.z);
If anyone knows how to fix this I would much appreciate it if you could tell me how.
Many Thanks,
Tommy
Make four kinematic Rigidbody2D colliders at the screen edges, like below (the green colliders), and manage their scale / position to fit your needs.
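A sketch of one way to set those edges up automatically, using a single EdgeCollider2D loop instead of four separate bodies (my variation on the suggestion above); it assumes an orthographic camera that does not move:

using UnityEngine;

public class ScreenEdgeColliders : MonoBehaviour
{
    private void Start()
    {
        Camera cam = Camera.main;
        Vector2 min = cam.ScreenToWorldPoint(Vector3.zero);
        Vector2 max = cam.ScreenToWorldPoint(new Vector3(Screen.width, Screen.height, 0f));

        // Closed loop: bottom-left -> bottom-right -> top-right -> top-left -> back.
        var edge = gameObject.AddComponent<EdgeCollider2D>();
        edge.points = new[]
        {
            new Vector2(min.x, min.y),
            new Vector2(max.x, min.y),
            new Vector2(max.x, max.y),
            new Vector2(min.x, max.y),
            new Vector2(min.x, min.y),
        };
    }
}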