Unity 2D: get joystick direction

I'm making a mobile game where the player has a grappling hook and can fly with it (just like Spider-Man). I use a script so the grappling hook works based on the mouse position. The problem is that I want to convert the game to mobile. I could leave it as is, since a touch registers the same as a mouse click, but I want to use a joystick to get the direction instead. I want it to work like Fanny's hook mechanic in the game Mobile Legends (if you know it).
I use this script for targeting on PC (which works fine):
if (Input.GetKeyDown(KeyCode.Mouse0))
{
    Vector2 worldPos = Camera.main.ScreenToWorldPoint(Input.mousePosition);
    rope.setTarget(worldPos); // hook script will get this value
}
I tried this for mobile (the problem):
if (Input.GetKeyDown(KeyCode.Mouse0))
{
    float inputX = joystick.Horizontal;
    float inputY = joystick.Vertical;
    Vector2 dir = new Vector2(inputX, inputY);
    rope.setTarget(dir); // hook script will get this value
}

If you want to capture a joystick, it can be done through the Input Manager's GetAxis function, which can read a joystick, mouse, or keyboard arrows/WASD, depending on how you configure it.
In code, you would use:
float inputX = Input.GetAxis("Horizontal");
float inputY = Input.GetAxis("Vertical");
If you don't want the mouse to contribute, configure it under Edit > Project Settings > Input > Axes. There are two "Horizontal" and two "Vertical" configurations; keep only the ones configured for the joystick and delete the others.
For more information, check https://docs.unity3d.com/Manual/class-InputManager.html. I hope that helps you get started!
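For your specific case, note that the rope script presumably expects a world-space point (that is what the PC version passes), not a raw direction. A minimal sketch of the conversion, assuming a player Transform and a hypothetical maxRopeDistance aiming range (neither is in your original code):
float inputX = Input.GetAxis("Horizontal");
float inputY = Input.GetAxis("Vertical");
Vector2 dir = new Vector2(inputX, inputY);

if (dir.sqrMagnitude > 0.01f) // ignore a centered stick
{
    // project the direction out from the player to get a world-space target
    Vector2 worldPos = (Vector2)player.position + dir.normalized * maxRopeDistance;
    rope.setTarget(worldPos); // same world-space contract as the mouse version
}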
EDIT: Virtual/screen joysticks on mobile
If you are talking about virtual/on-screen joysticks, you must control everything manually, the hard way. To build this kind of joystick, use the Input Manager to get the mouse/touch position on press (Input.GetMouseButtonDown(int)), release (Input.GetMouseButtonUp(int)), and drag (Input.GetMouseButton(int)). Then track how far the pointer has moved since the user began pressing, and clamp the position so it does not overflow the joystick area.
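As a rough illustration of that approach (a sketch only; the radius value and the way you consume the result are assumptions, not a drop-in implementation):
using UnityEngine;

public class SimpleVirtualJoystick : MonoBehaviour
{
    public float radius = 100f;                    // joystick area in screen pixels (assumed value)
    Vector2 origin;                                // where the press started
    public Vector2 Direction { get; private set; } // normalized output in the -1..1 range

    void Update()
    {
        if (Input.GetMouseButtonDown(0))      // press: anchor the joystick
        {
            origin = Input.mousePosition;
        }
        else if (Input.GetMouseButton(0))     // drag: measure and clamp the offset
        {
            Vector2 offset = (Vector2)Input.mousePosition - origin;
            Direction = Vector2.ClampMagnitude(offset, radius) / radius;
        }
        else if (Input.GetMouseButtonUp(0))   // release: reset
        {
            Direction = Vector2.zero;
        }
    }
}
By default, Unity maps the first touch to mouse button 0, so the same code covers both desktop and mobile input.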
As you stated in the comments, you downloaded an asset from the Asset Store. Each asset works in its own way, and it is up to its developers to explain how to use it. Look for documentation, tutorials, and examples from the developer to see how the asset is intended to be used.

Related

How can I detect a click/tap on a tracked image with Unity's ARCore?

I just got into ARCore and want something to happen when I tap on one of the tracked images (defined in the reference image library of my ARTrackedImageManager).
I know how to detect a click (or a tap on the phone's screen), but I don't know how to detect whether it landed on the image. Where should I look for that?
I'm thinking that maybe I have to find the plane of the image and then detect whether the click happened inside it, but I'm not sure that's the solution. I'm also not sure whether ARCore already does that or whether I have to code it myself.
Any ideas to help me?
I haven't started a script yet; I know how to code everything for my little project except the click detection.
If I understand the question correctly, you need to find the location of the tap on the screen, i.e. its X and Y coordinates? If so, you can use the OnMouseDown() method to track the position of the click on an object that has a collider.
private void OnMouseDown()
{
    Vector2 XYCoordinates = new Vector2(Input.mousePosition.x, Input.mousePosition.y);
}
Attach this script to the object whose collider you want to track.
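Note that OnMouseDown() only fires for objects that have a collider. If you prefer an explicit check (for example, to know which tracked image was hit), here is a sketch of the equivalent raycast, assuming the prefab spawned for the tracked image has a Collider attached:
void Update()
{
    if (Input.GetMouseButtonDown(0)) // also fires for the first touch on mobile
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            Debug.Log("Tapped: " + hit.collider.name); // the object under the tap
        }
    }
}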

How to animate grabbing in Unity VR?

I've been googling this for 10 hours and I'm running out of ideas. I found several video tutorials and written tutorials, but none of them really works, or they're overkill.
I have a VR Unity (2020.3.16f) project designed to run on a Quest 2. I'm not using OpenXR. I already created a hand, created one simple grabbing animation, added the animation to an Animator, created transitions, and set an "IsGrabbed" parameter. Now I'm looking for a simple way to set "IsGrabbed" to true/false whenever I grab/release anything. I'm expecting something like this:
public class grabber : MonoBehaviour
{
    Animator animator;
    ???

    // Start is called before the first frame update
    void Start()
    {
        ???
    }

    // Update is called once per frame
    void Update()
    {
        if (???)
        {
            animator.SetBool("IsGrabbing", true);
        }
        else if (???)
        {
            animator.SetBool("IsGrabbing", false);
        }
    }
}
Please help. We're talking about VR here, so I'm sure a grabbing animation is the very basics of the very basics. It can't be that difficult.
Best regards
First of all, I highly recommend watching this video by Justin P Barnett to get a much better overview of how this all works.
If you don't want to go that route for some reason, there are other options available to you. One such option is the "Player Input" component, which can act as a bridge between your input devices and your code. Most XR packages these days use the new Input System package, and it makes life easier, so I will assume you have that installed.
First, you will need to create an Input Actions asset, which can be done in the project pane: right-click -> Create -> Input Actions. There are many tutorials which explain this asset in detail, but here is a simple setup to get you started. Double click on the new asset to open the editing window, and create a new Action Map. In the "Actions" list, create a new action with action type Value, Control Type Axis, and in the dropdown arrow on your new action set the path to the input source. As an example source path, I will use XR Controller -> XR Controller -> XR Controller (Left Hand) -> Optional Controls -> grip. Make sure to click Save Asset before closing the window.
Create a script similar to this:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.InputSystem;

public class ControllerInputReceiver : MonoBehaviour
{
    public void FloatToScale(InputAction.CallbackContext context)
    {
        // Map the 0..1 grip value to a 0.1..0.2 scale so the change is visible
        float val = 0.1f + 0.1f * context.ReadValue<float>();
        transform.localScale = new Vector3(val, val, val);
    }
}
Create a cube somewhere visible in your scene, add the Input Action Manager component to it, and drag your created Input Actions asset into its list of Action Assets. Then add the ControllerInputReceiver script. Also on this cube, create a Player Input component and drag your Input Actions asset onto its Actions element. Choose your map as the default map and change the behavior to Invoke Unity Events. Under the events dropdown, you should see an element for the action you created earlier. Drop your ControllerInputReceiver component into this action and select the FloatToScale function.
In theory it should work at this point. Build the game to your device and see if pulling the grip causes the cube to resize. If it does, then you can replace your Update function with:
void SetGrabbing(InputAction.CallbackContext context)
{
    // cutoff: a float threshold field above which the grip counts as grabbing
    animator.SetBool("IsGrabbing", context.ReadValue<float>() > cutoff);
}
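As with FloatToScale above, wire SetGrabbing to the grip action through the Player Input component's events; cutoff is assumed to be a float threshold field (for example 0.5f) that you expose on your script.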
If you are still having issues at this point, I really recommend checking out these YouTube channels; I only started VR a couple of months ago and learned everything I know so far from them: JustinPBarnett, VRwithAndrew, ValemVR, LevelUp2020.
Note: the new Input System has button options in addition to value/axis options for VR devices. These may be closer to what you want, but I had no luck getting them to work today.
Also note: depending on how you organize your code, you may or may not need the "Input Action Manager" component somewhere in your scene with your Input Actions asset in its list. It enables your actions for you, so you don't need to do that programmatically.
Another solution would be:
Using the OVR plugin and the Hands prefab (for left and right hands), check whether the rotation of each finger on a specific axis (e.g. the Z-axis) falls within a specific range, meaning the fingers are grasping/flexed.
By referencing, for example, the Transform of b_l_index1, which is the index finger's corresponding bone in the model, you can check its local rotation every frame and trigger the grasp event when all fingers are rotated to a specific angle, subsequently triggering the animation you need.
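A sketch of that per-frame check, assuming indexBone has been wired to the b_l_index1 Transform of the hand model and that a hypothetical graspAngle threshold marks the finger as flexed:
using UnityEngine;

public class FingerGraspCheck : MonoBehaviour
{
    public Transform indexBone;    // e.g. b_l_index1 from the Hands prefab
    public float graspAngle = 40f; // assumed flex threshold in degrees
    public Animator animator;

    void Update()
    {
        // localEulerAngles wraps at 360, so re-center the Z angle around 0
        float z = indexBone.localEulerAngles.z;
        if (z > 180f) z -= 360f;
        animator.SetBool("IsGrabbing", Mathf.Abs(z) > graspAngle);
    }
}
In practice you would check all fingers, not just the index, before triggering the grasp.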

In Unity 3D, should my adventure game use a point-and-click style or just walking with keys?

Right now I'm using the WASD keys to move my ThirdPersonCharacter around.
But since I'm making an adventure game, or more of a quest game, I wonder if I should somehow make it point-and-click style instead?
The game I want to make is like the old-school adventure games where you click on items and select what to do with them: look/take/use. And since I'm doing it in 3D, I wonder how I should structure the game.
Also, I didn't find any tutorials on how to make the point-and-click work, or how to make the items. For example, if I put a cube on the spaceship, how do I make it so that when the user clicks on it, small option icons like look/take/use are displayed?
Another sub-question: at first, when I moved the character to the spaceship, it walked right through it, so I added a Mesh Collider to the spaceship. Is a Mesh Collider the right thing to add? It seems to be working now; I just wonder if it's correct.
Screenshots of the game scene before adding the collider, and scene3 after adding the collider:
To start, this decision is up to you; we cannot answer it for you. However, there are pros and cons to each approach.
For point-and-click, there is basic code for doing this, but there are many parameters to take into account. Although you may want to go from point A to point B, the code must be able to avoid obstacles, keep the character on the ground, rotate the character accordingly, respond to other stimuli in the environment, etc. You can start with something simple:
Vector3 end;
float distance = 100f, scalarLerp = 5f; // example values
Transform player;
LayerMask layer;
RaycastHit touch;

void Update()
{
    if (Input.GetMouseButtonDown(0))
    {
        // Cast a ray from the camera through the click position
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(ray, out touch, distance, layer))
        {
            end = touch.point; // the walk target
        }
    }
    // Move the player toward the clicked point every frame
    player.position = Vector3.MoveTowards(player.position, end, Time.deltaTime * scalarLerp);
}
This code is simple, and you will quickly see the problems with point-and-click.
You could remain with the Third Person Controller but that is left up to you.

Make an object rotate according to mouse movement. [UNITY C#]

I’m starting a learning project. The idea is that you have an archer character that is static, that has a bow attached to it that shoots arrows to targets of varying difficulty.
Turns out that right at the start I’m stuck. How do I make it so that the bow rotates when the player clicks and holds the mouse anywhere on the screen? So I click+hold and move left/right and the bow rotates left/right to aim the shot. I’d like to also eventually make it portable to phones (so you’d tap+hold etc).
Stack Overflow isn't a code-writing service, but I will explain what you must do.
Method 1 (Exact Aiming) — see the sketch after this list:
For every frame the mouse is down:
Make a ray from the screen point (use camera.ScreenPointToRay).
Get a far point along the ray using ray.GetPoint(distance);.
Call bow.transform.LookAt(newPoint, Vector3.up);.
Method 2 (Continuous Movement):
Make a variable oldMousePos to store a Vector2 location.
Record your initial screen click position into that variable on a mouse-down event.
Have a function that runs once every frame the mouse stays down.
For the direction of the bow's rotation, you can use (newMousePos - oldMousePos).normalized;.
For the speed of rotation of your bow, you can use (newMousePos - oldMousePos).sqrMagnitude;.
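A sketch of Method 1, assuming this script sits on the bow itself and the bow aims along its forward (+Z) axis; aimDistance is an arbitrary projection distance, not a value from your project:
using UnityEngine;

public class BowAimer : MonoBehaviour
{
    public float aimDistance = 50f; // how far along the ray to place the aim point

    void Update()
    {
        if (Input.GetMouseButton(0)) // every frame the mouse (or touch) is held
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            Vector3 aimPoint = ray.GetPoint(aimDistance);
            transform.LookAt(aimPoint, Vector3.up);
        }
    }
}
Because Unity maps the first touch to mouse button 0 by default, the tap+hold behaviour on phones comes for free.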

Unity3D Detecting the Edge of the Screen - Object is Flickering when moved

Hi, I am developing a simple Space Shooter styled 2D game, and I am stuck at the point where the object should be restricted from moving beyond the left and right edges of the screen.
I implemented @Waz's solution from one of the answers on Unity Answers, and it works great if the object is not a rigidbody. However, if it is applied to a rigidbody, the object starts to flicker. Below is the code that I used from @Waz:
float speed = 0.1f;
Vector3 viewPos = Camera.main.WorldToViewportPoint(transform.position);
viewPos.x = Mathf.Clamp01(viewPos.x);
viewPos.y = Mathf.Clamp01(viewPos.y);
transform.position = Camera.main.ViewportToWorldPoint(viewPos);
Here is the link where @Waz posted his piece of code:
http://answers.unity3d.com/questions/148790/detecting-the-edge-of-the-screen.html
Here is a link that suggests an alternative solution for rigidbodies, but this code does not work for me:
http://answers.unity3d.com/questions/62189/detect-edge-of-screen.html
I am not sure how to modify the above code so that the object I touch and move does not flicker. Any help would be great.
You are translating from arbitrary float coordinates to the range [0,1] and back again. The issue you are running into is likely due to floating-point inaccuracies when your world position is far away from 0.
There are multiple ways of solving this:
In your above script, only perform the transform if the object is actually touching the edge of the screen (see the sketch after this list).
Handle the OnBecameVisible() and OnBecameInvisible() messages. Don't let the player move off screen if it would cause them to "go invisible".
Use the IsVisibleFrom() callback from this wiki article. Some people prefer this because they claim that OnBecameVisible()/OnBecameInvisible() are broken; I don't know how or why they believe that.
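A sketch of the first option, rewriting the position only once the object has actually crossed an edge (assumed to run in LateUpdate, after movement has been applied):
using UnityEngine;

public class ClampToScreen : MonoBehaviour
{
    void LateUpdate()
    {
        Vector3 viewPos = Camera.main.WorldToViewportPoint(transform.position);

        // Only snap back when actually past an edge; on-screen movement
        // is left untouched, which avoids fighting the physics every frame.
        if (viewPos.x < 0f || viewPos.x > 1f || viewPos.y < 0f || viewPos.y > 1f)
        {
            viewPos.x = Mathf.Clamp01(viewPos.x);
            viewPos.y = Mathf.Clamp01(viewPos.y);
            transform.position = Camera.main.ViewportToWorldPoint(viewPos);
        }
    }
}
For a rigidbody, you may get smoother results by setting rigidbody.position (or calling MovePosition) in FixedUpdate instead of writing transform.position.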
Have you tried using Screen.width and Screen.height to detect the edge of the screen? Maybe that can help prevent the flickering.
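For example (a pixel-space variant of the same clamp, not tested against your setup):
Vector3 screenPos = Camera.main.WorldToScreenPoint(transform.position);
screenPos.x = Mathf.Clamp(screenPos.x, 0f, Screen.width);
screenPos.y = Mathf.Clamp(screenPos.y, 0f, Screen.height);
transform.position = Camera.main.ScreenToWorldPoint(screenPos);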