Horrors of OnPointerDown versus OnBeginDrag in Unity3D

I'm concerned over the difference between OnPointerDown versus OnBeginDrag in single-finger movement code.
(This is in the latest Unity paradigm of using a physics raycaster, so that Unity will, finally, properly ignore touches over the UI layer.)
So from 2015 onwards what you must do is this:
Forget about the traditional Input or Touches system, which is crap for this purpose and doesn't work reliably.
Add an empty game object with (usually) a BoxCollider2D, likely bigger than the screen. Put it on a layer called, say, "Draw". In the physics settings, make "Draw" interact with nothing.
Simply add a 2D or 3D physics raycaster to the camera, and set its event mask to the "Draw" layer.
Do a script like below and put it on.
(Tip: don't forget to add an EventSystem to the scene. Bizarrely, Unity adds one automatically for you in some situations but not in others, so it's annoying if you forget!)
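For what it's worth, here is a rough scripted version of that setup, just to make the steps concrete. It is only a sketch: it assumes a layer named "Draw" already exists in the project's layer list, and normally you would do all of this in the Inspector rather than in code.
using UnityEngine;
using UnityEngine.EventSystems;

// Illustrative only: the same setup as the steps above, done from code.
public class DrawSurfaceSetup : MonoBehaviour
{
    void Awake()
    {
        // a big catch-all collider on the "Draw" layer
        var surface = new GameObject("DrawSurface");
        surface.layer = LayerMask.NameToLayer("Draw");
        var box = surface.AddComponent<BoxCollider2D>();
        box.size = new Vector2(100f, 100f); // comfortably bigger than the screen

        // a 2D physics raycaster on the camera, masked to the "Draw" layer only
        var caster = Camera.main.gameObject.AddComponent<Physics2DRaycaster>();
        caster.eventMask = LayerMask.GetMask("Draw");

        // and the EventSystem, in case the scene doesn't already have one
        if (FindObjectOfType<EventSystem>() == null)
        {
            var es = new GameObject("EventSystem");
            es.AddComponent<EventSystem>();
            es.AddComponent<StandaloneInputModule>();
        }
    }
}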
But here's the problem.
There has got to be some subtle difference between using OnPointerDown versus OnBeginDrag (and the matching end calls). (You can just swap the action in the following code sample.)
Naturally, Unity offers no guidance on this. The following code beautifully rejects stray grabs and also flawlessly ignores your UI layer (thanks Unity! at last!), but I am mystified about the difference between the two approaches (begin drag vs. pointer down), and I cannot in any way find a logical difference between the two in testing.
What's the answer?
/*
general movement of something by a finger.
*/
using UnityEngine;
using System.Collections;
using UnityEngine.EventSystems;
public class FingerMove:MonoBehaviour,
IPointerDownHandler,
IBeginDragHandler,
IDragHandler,
IPointerUpHandler,
IEndDragHandler
{
public Transform moveThis;
private Camera theCam;
private FourLimits thingLimits;
private Vector3 prevPointWorldSpace;
private Vector3 thisPointWorldSpace;
private Vector3 realWorldTravel;
public void Awake()
{
theCam = Camera.main; // or whatever camera you're using
}
public void OnMarkersReady() // (would be EVENT DRIVEN for liveness)
{
thingLimits = Grid.liveMarkers; // i.e., wherever your motion limits come from
}
private int drawFinger;
private bool drawFingerAlreadyDown;
public void OnPointerDown (PointerEventData data)
{
Debug.Log(" P DOWN " +data.pointerId.ToString() );
}
public void OnBeginDrag (PointerEventData data)
{
Debug.Log(" BEGIN DRAG " +data.pointerId.ToString() );
if (drawFingerAlreadyDown == true)
{
Debug.Log(" IGNORE THAT DOWN! " +data.pointerId.ToString() );
return;
}
drawFinger = data.pointerId;
drawFingerAlreadyDown=true;
prevPointWorldSpace = theCam.ScreenToWorldPoint( data.position );
}
public void OnDrag (PointerEventData data)
{
Debug.Log(" ON DRAG " +data.pointerId.ToString() );
if (drawFingerAlreadyDown == false)
{
Debug.Log(" IGNORE THAT PHANTOM! " +data.pointerId.ToString() );
return;
}
if ( drawFinger != data.pointerId )
{
Debug.Log(" IGNORE THAT DRAG! " +data.pointerId.ToString() );
return;
}
thisPointWorldSpace = theCam.ScreenToWorldPoint( data.position );
realWorldTravel = thisPointWorldSpace - prevPointWorldSpace;
_processRealWorldtravel();
prevPointWorldSpace = thisPointWorldSpace;
}
public void OnEndDrag (PointerEventData data)
{
Debug.Log(" END DRAG " +data.pointerId.ToString() );
if ( drawFinger != data.pointerId )
{
Debug.Log(" IGNORE THAT UP! " +data.pointerId.ToString() );
return;
}
drawFingerAlreadyDown = false;
}
public void OnPointerUp (PointerEventData data)
{
Debug.Log(" P UP " +data.pointerId.ToString() );
}
private void _processRealWorldtravel()
{
if ( Grid.paused ) return; // i.e., check whatever your pause concept is
// potential new position...
Vector3 pot = moveThis.position + realWorldTravel;
// almost always, squeeze to a limits box...
// (whether the live screen size, or some other box)
if (pot.x < thingLimits.left) pot.x = thingLimits.left;
if (pot.y > thingLimits.top) pot.y = thingLimits.top;
if (pot.x > thingLimits.right) pot.x = thingLimits.right;
if (pot.y < thingLimits.bottom) pot.y = thingLimits.bottom;
// kinematic ... moveThis.position = pot;
// or
// if pushing around physics bodies ... rigidbody.MovePosition(pot);
}
}
And here's a handy thing. Save typing with the same approach for 3D scenes by using the little-known but exquisite pointerCurrentRaycast. Here's how; notice the excellent data.pointerCurrentRaycast.worldPosition call, courtesy of Unity.
// for 3D scenes:
using UnityEngine;
using UnityEngine.EventSystems;
public class FingerDrag : MonoBehaviour,
IPointerDownHandler,
IDragHandler,
IPointerUpHandler
{
public Transform moveMe;
private Vector3 prevPointWorldSpace;
private Vector3 thisPointWorldSpace;
private Vector3 realWorldTravel;
private int drawFinger;
private bool drawFingerAlreadyDown;
public void OnPointerDown (PointerEventData data)
{
if (drawFingerAlreadyDown == true)
return;
drawFinger = data.pointerId;
drawFingerAlreadyDown=true;
prevPointWorldSpace = data.pointerCurrentRaycast.worldPosition;
// in this example we'll put it under finger control
// (kinematic while dragging, so physics doesn't fight the finger positioning)...
moveMe.GetComponent<Rigidbody>().isKinematic = true;
}
public void OnDrag (PointerEventData data)
{
if (drawFingerAlreadyDown == false)
return;
if ( drawFinger != data.pointerId )
return;
thisPointWorldSpace = data.pointerCurrentRaycast.worldPosition;
realWorldTravel = thisPointWorldSpace - prevPointWorldSpace;
_processRealWorldtravel();
prevPointWorldSpace = thisPointWorldSpace;
}
public void OnPointerUp (PointerEventData data)
{
if ( drawFinger != data.pointerId )
return;
drawFingerAlreadyDown = false;
moveMe.GetComponent<Rigidbody>().isKinematic = false; // hand it back to physics
moveMe = null; // note: this example is one-shot; it drops the reference when done
}
private void _processRealWorldtravel()
{
Vector3 pot = moveMe.position;
pot.x += realWorldTravel.x;
pot.y += realWorldTravel.y;
moveMe.position = pot;
}
}

I want to start by saying that Input and Touches are not crappy. They are still useful and were the best way to check for touch on mobile devices before OnPointerDown and OnBeginDrag came along. OnMouseDown() you can call crappy, because it was not optimized for mobile. For a beginner who has just started to learn Unity, Input and Touches are reasonable options.
As for your question, OnPointerDown and OnBeginDrag are NOT the same. Although they almost do the same thing, they were implemented to perform in different ways. Below I will describe most of these:
OnPointerDown:
Called when there is press/touch on the screen (when there is a click or finger is pressed down on touch screen)
OnPointerUp:
Called when press/touch is released (when click is released or finger is removed from the touch screen)
OnBeginDrag:
Called once before a drag is started(when the finger/mouse is moved for the first time while down)
OnDrag :
Repeatedly called when user is dragging on the screen (when the finger/mouse is moving on the touch screen)
OnEndDrag:
Called when the drag ends (when the finger/mouse that was dragging is released from the touch screen).
OnPointerDown versus OnBeginDrag and OnEndDrag
OnPointerUp will NOT be called if OnPointerDown has not been called. OnEndDrag will NOT be called if OnBeginDrag has not been called. It's like the curly braces in C++/C#: you open one '{' and you close it '}'.
THE DIFFERENCE:
OnPointerDown will be called once, immediately, when the finger/mouse touches the screen. Nothing else will happen until the mouse or finger moves on the screen; then OnBeginDrag will be called once, followed by OnDrag.
These are made for advanced usage, such as custom UI with controls that are not included in Unity.
WHEN TO USE EACH ONE:
1. When you have to implement a simple click button, for example Up, Down, or Shoot buttons on the screen, you only need OnPointerDown to detect the touch. This should work for Sprite Images.
2. When you have to implement a custom toggle switch and you want it to be realistic, so that the player can drag left/right or up/down to toggle it, then you need OnPointerDown, OnBeginDrag, OnDrag, OnEndDrag and OnPointerUp. You need to write your code in this order to get a smooth Sprite/Texture transition on the screen. Some toggle switches are made to be clicked and they toggle; some people prefer to make it look realistic by requiring you to drag it in order to toggle it.
3. Also, when you want to implement a generic re-usable pop-up window that is draggable, you need those same five functions (OnPointerDown, OnBeginDrag, OnDrag, OnEndDrag, OnPointerUp).
First detect when there is a click (OnPointerDown) and check that the Sprite clicked is the right one you want to move. Wait for the player to move (OnBeginDrag) their finger/mouse. Once they start dragging, you can call a coroutine with a while loop that will start moving the Sprite, and inside that coroutine you can smooth the movement of the Sprite with Time.deltaTime or any other preferred method.
Since OnBeginDrag is called once, it is a good place to start the coroutine.
As the player continues to drag the Sprite, OnDrag will be called repeatedly. Use the OnDrag function to get the current location of the finger and write it to a Vector3 that the already-running coroutine uses to update the position of the Sprite. When the drag ends, OnEndDrag is called and you can set a boolean variable to tell the coroutine to stop updating the position of the Sprite. Then, when the player releases their finger (OnPointerUp), you can stop the coroutine with the StopCoroutine function. (A rough sketch of this coroutine pattern follows this list.)
Because of OnBeginDrag we are able to start the coroutine once the drag starts, while waiting for the drag to end. It wouldn't make sense to start that coroutine in OnPointerDown, because then a coroutine would be started every time the player touches the screen.
Without OnBeginDrag, we would have to use a boolean variable to make the coroutine start only once from the OnDrag function, which is called repeatedly; otherwise there would be coroutines running everywhere and unexpected movement of the Sprite would occur.
4. When you want to determine how far the player moved their finger. An example of this is that famous game Fruit Ninja. Let's say you want to determine how far the player swiped on the screen.
First, wait until OnPointerDown is called, wait again until OnBeginDrag is called, then get the current position of the finger inside the OnBeginDrag function, because OnBeginDrag is called just as the finger starts moving. After the finger is released, OnEndDrag is called, and you can get the current position of the finger again. You can then check how far the finger moved by subtracting the two positions.
If you instead decide to use OnPointerDown as the place to get the first position of the finger, you will get a wrong result: if the player swipes right, then waits and swipes left, then waits again and swipes up without releasing their finger after each swipe, the only good result you have is the first swipe (the right swipe). The left and up swipes will have invalid values, because the value you captured when OnPointerDown was called is the value you are still using. The player never removed their finger from the screen, so OnPointerDown is never called again and the old first value is still there.
But when you use OnBeginDrag instead of OnPointerDown, this problem goes away, because when the finger stops moving OnEndDrag is called, and when it starts moving again OnBeginDrag is called once more, causing the first position to be overwritten with the new one.
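Here is a rough sketch of the coroutine-driven drag pattern described in point 3 above. It is only a sketch under assumptions: the names (DraggableWindow, dragTarget, smoothing) are illustrative, and the Camera.main.ScreenToWorldPoint conversion assumes a simple 2D-style setup; adapt it to your own scene.
using System.Collections;
using UnityEngine;
using UnityEngine.EventSystems;

public class DraggableWindow : MonoBehaviour,
    IPointerDownHandler, IBeginDragHandler, IDragHandler, IEndDragHandler, IPointerUpHandler
{
    public Transform dragTarget;   // the sprite / window to move
    public float smoothing = 15f;  // how quickly it follows the finger

    private Vector3 targetPosition;
    private bool dragging;
    private Coroutine follower;

    public void OnPointerDown(PointerEventData data)
    {
        // a good place to verify we pressed the right object
    }

    public void OnBeginDrag(PointerEventData data)
    {
        // called once, so start the follower coroutine here
        dragging = true;
        targetPosition = dragTarget.position;
        follower = StartCoroutine(FollowFinger());
    }

    public void OnDrag(PointerEventData data)
    {
        // called repeatedly: just record where the finger is now
        targetPosition = Camera.main.ScreenToWorldPoint(data.position);
        targetPosition.z = dragTarget.position.z;
    }

    public void OnEndDrag(PointerEventData data)
    {
        // drag finished: tell the coroutine to stop updating
        dragging = false;
    }

    public void OnPointerUp(PointerEventData data)
    {
        if (follower != null) StopCoroutine(follower);
    }

    private IEnumerator FollowFinger()
    {
        while (dragging)
        {
            dragTarget.position = Vector3.Lerp(
                dragTarget.position, targetPosition, smoothing * Time.deltaTime);
            yield return null;
        }
    }
}
The point is simply that OnBeginDrag gives you one reliable place to start the follower, while OnDrag only has to record positions.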

The difference is that OnBeginDrag doesn't get called until the touch/mouse has moved a certain minimum distance, the drag threshold. You can set the drag threshold on the Event System component.
This is necessary for when you have a hierarchy of objects with different ways of handling input, especially scrollviews. Imagine you have a scrollview with a vertical stack of cells, each with a button in it. When the touch first starts on one of the buttons, we don't know whether the user is tapping a button or dragging the scrollview. It isn't until the touch gets dragged for the drag threshold that we know it is a drag and not a tap.
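For reference, the same threshold can also be read or adjusted from code via EventSystem.pixelDragThreshold. A tiny sketch (the DPI-based scaling is just an illustrative choice, not anything Unity prescribes):
using UnityEngine;
using UnityEngine.EventSystems;

public class DragThresholdTweak : MonoBehaviour
{
    void Start()
    {
        // scale the default threshold with screen DPI so quick taps on
        // high-density phones are not misread as drags
        EventSystem.current.pixelDragThreshold =
            Mathf.Max(5, (int)(Screen.dpi / 25f));
    }
}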

Related

Can't get my player to jump / get detached from a rope

I have a player which gets childed to a game object when it walks up to a trigger. Now I want the player's parent to become null again after Space is pressed, because I'm trying to make a rope system and the player needs to be able to detach from the rope.
This is the script that's supposed to attach/detach the player from the rope:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class AttachToRope : MonoBehaviour
{
public GameObject objectToParentTo;
public GameObject objectWithSwingScript;
// Start is called before the first frame update
private void OnTriggerEnter(Collider collider)
{
if (collider.gameObject.tag == "Rope")
{
transform.parent = objectToParentTo.transform;
objectWithSwingScript.GetComponent<playerscript>().enabled = true;
GetComponent<PlayerController>().enabled = false;
GetComponent<CharacterController>().enabled = false;
GetComponent<Swinging>().enabled = false;
}
}
private void OnTriggerStay(Collider collider)
{
if (Input.GetButtonDown("Jump"))
{
transform.parent = null;
objectWithSwingScript.GetComponent<playerscript>().enabled = false;
GetComponent<PlayerController>().enabled = true;
GetComponent<CharacterController>().enabled = true;
GetComponent<Swinging>().enabled = true;
Debug.Log("Deattached");
}
}
}
What happens when the player enters the trigger is that the scripts that make the player move get disabled, and then it gets childed to the last section of the rope. Now, in OnTriggerStay, I want it to check whether Space is pressed and re-enable all the scripts that are required for the player to move, which does not work. Since nothing in there works I tried Debug.Log, but even that does not work, so if anyone knows how to fix this please help me.
From the OnTriggerStay documentation: The function is on the physics timer so it won't necessarily run every frame.
Functions on the physics timer (e.g. FixedUpdate()) don't play nicely with Input.GetButtonDown because they don't run every frame. Instead, they run on a fixed timestep, 0.02 seconds by default.
The solution is to put calls to Input.GetButtonDown into Update(). For instance, in this example you could have a boolean member variable isJumpPushed and set it to true in Update() when the jump button is pushed and the player is attached, and then check that value in OnTriggerStay.
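A minimal sketch of that pattern; the member name isJumpPushed is an assumption, and the rest of the original script is omitted for brevity:
using UnityEngine;

public class AttachToRope : MonoBehaviour
{
    private bool isJumpPushed;

    void Update()
    {
        // poll the input every frame here...
        if (Input.GetButtonDown("Jump"))
            isJumpPushed = true;
    }

    private void OnTriggerStay(Collider collider)
    {
        // ...and consume the flag on the physics timer
        if (isJumpPushed)
        {
            isJumpPushed = false;
            transform.parent = null;
            // re-enable the movement scripts here, as in the original script
            Debug.Log("Deattached");
        }
    }
}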
--
A note about debugging:
I tried to debug.log but even that does not work
If your Debug.Log isn't showing a log in the console, that still tells you something important: it tells you that code path isn't getting called. That's an important clue for figuring out what's really going on. To further narrow down the problem, you could move the Debug.Log statement outside the if statement. This would show whether it's Input.GetButtonDown that isn't returning true when you think it should.
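For example, temporarily replacing OnTriggerStay in the script above with something like this:
private void OnTriggerStay(Collider collider)
{
    // if this line never prints, OnTriggerStay itself is not running
    Debug.Log("OnTriggerStay is running");
    if (Input.GetButtonDown("Jump"))
    {
        // if this line never prints, GetButtonDown is what's failing here
        Debug.Log("Jump was detected inside OnTriggerStay");
    }
}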

Unity 2D drag and drop issue iDropHandler

I have a draggable GameObject (brick) that implements IBeginDragHandler, IDragHandler and IEndDragHandler.
I also have another GameObject (slot) to drop the brick into, which implements IDropHandler.
A quick look at the brick's OnBeginDrag method:
public static GameObject itemBeingDragged;
Vector3 startPosition;
Transform startParent;
public void OnBeginDrag(PointerEventData eventData)
{
itemBeingDragged = gameObject;
startPosition = transform.position;
startParent = transform.parent;
GetComponent<CanvasGroup>().blocksRaycasts = false;
GetComponent<BoxCollider2D>().enabled = false;
}
When I drop the brick into the slot, the brick takes the slot as its parent and also takes the slot's position, like so in the IDropHandler's OnDrop method:
public void OnDrop(PointerEventData eventData)
{
DragHandler.itemBeingDragged.transform.SetParent(transform);
DragHandler.itemBeingDragged.transform.position = transform.position;
}
The problem with this is that when I drag and drop the brick, I want there to be a slight offset to the brick's position (e.g. on a mobile phone, so that while dragging the brick it is not visually hidden by my finger).
So in the brick's OnDrag code, I have something like this to give a visual offset:
public void OnDrag(PointerEventData eventData)
{
Vector3 offset = new Vector3(0, 100, 0);
transform.position = Input.mousePosition + offset;
}
I know the above uses the mouse position, but ultimately I want it to use the touch position.
This looks fine while dragging. However, when dropping it on the slot, it seems that the slot's OnDrop method is only called when the mouse pointer is above the slot, and not when the brick is above the slot. Meaning, when I release the drag while the brick is above the slot, OnDrop doesn't get called. It is only called when I release the brick outside the slot in such a way that the mouse pointer is inside the slot. Make sense?
Is there a way to make OnDrop work with the brick's position rather than the mouse position?
Thanks
Kevin
This is ultimately a hack, and it's exactly the reason why I like to do my own drag/drop logic with the pointer down/up events, but anyway: Make the visual a child of the gameobject with the brick script attached, then move only the visual up when a drag starts. When the drag is complete, move the visual back down into place.
And btw, you're using SetParent(transform) on what looks like a RectTransform. You generally want to use SetParent(transform, false) with rect transforms, because otherwise you'll mess up the layout system and lose the benefits of rect transforms anyway.
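A rough sketch of that "lift the visual, not the brick" idea, under assumptions: the brick's graphic lives on a child RectTransform (here called visual), and the canvas is Screen Space - Overlay so the pointer position can be assigned to the brick directly. None of these names come from the question.
using UnityEngine;
using UnityEngine.EventSystems;

public class BrickVisualLift : MonoBehaviour, IBeginDragHandler, IDragHandler, IEndDragHandler
{
    public RectTransform visual;                      // child that carries the image
    public Vector2 liftOffset = new Vector2(0, 100);  // keeps the graphic clear of the finger

    public void OnBeginDrag(PointerEventData eventData)
    {
        visual.anchoredPosition += liftOffset;        // lift only the visual
    }

    public void OnDrag(PointerEventData eventData)
    {
        // the logical brick itself tracks the pointer, so OnDrop still
        // fires on whatever slot is under the pointer
        transform.position = eventData.position;
    }

    public void OnEndDrag(PointerEventData eventData)
    {
        visual.anchoredPosition -= liftOffset;        // settle the visual back into place
    }
}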

How to prevent mouse clicks from passing through GUI controls and playing animation in Unity3D [duplicate]

This question already has answers here:
Detect mouse clicked on GUI
(3 answers)
Closed 4 years ago.
Description
Hi guys, I need your help. I'm facing a problem with mouse clicks that pass through a UI panel in Unity. I have created a pause menu, and when I click the Resume button the game gets unpaused and the player plays the Attack animation, which is undesirable. What I want is that when I click the Resume button, the Attack animation should not be played. The same problem happens if I just click on the panel, not necessarily a button, and the more I click on the UI panel the more the Attack animation is played after I exit the pause menu. Moreover, I have searched for solutions to this issue and was suggested to use the event system and event triggers, but since my knowledge of Unity is at beginner level I could not properly implement it. Please help, and sorry for my English if it is not clear)) Here is the code that I use:
The code:
using UnityEngine;
using UnityEngine.EventSystems;
public class PauseMenu : MonoBehaviour {
public static bool IsPaused = false;
public GameObject pauseMenuUI;
public GameObject Player;
private bool state;
private void Update() {
//When Escape button is clicked, the game has to freeze and pause menu has to pop up
if (Input.GetKeyDown(KeyCode.Escape)) {
if (IsPaused) {
Resume();
}
else {
Pause();
}
}
}
//Code for Resume button
public void Resume() {
//I was suggested to use event system but no result Attack animation still plays once I exit pause menu
if (EventSystem.current.IsPointerOverGameObject()) {
Player.GetComponent<Animator>().ResetTrigger("Attack");
}
pauseMenuUI.SetActive(false);
Time.timeScale = 1f;
IsPaused = false;
}
//this method is responsible for freezing the game and showing UI panel
private void Pause() {
pauseMenuUI.SetActive(true);
Time.timeScale = 0f;
IsPaused = true;
}
//The code for Quit button
public void QuitGame() {
Application.Quit();
}
}
I'm not sure if I understood your problem, but it sounds like somewhere in your code you start an attack when the player does a left click.
Now your problem is that this code is also executed when the player clicks on a UI element, for example, in this case, the Resume button?
You tried to fix this problem by resetting the attack trigger of the animator. I think it would be a better solution to prevent the attack from starting, instead of trying to reset it later.
EventSystem.current.IsPointerOverGameObject() returns true if the mouse is over a UI element.
So you can use it to modify your code where you start your attack:
... add this check in your code where you want to start the attack
if(EventSystem.current.IsPointerOverGameObject() == false)
{
// add your code to start your attack
}
...
Now your attack will only start if the pointer is not over a UI element.
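A hedged sketch of how that check might sit in the attack code. PlayerAttack and the mouse-button trigger are assumptions, since the attack script isn't shown in the question; only the "Attack" trigger name and PauseMenu.IsPaused come from it.
using UnityEngine;
using UnityEngine.EventSystems;

public class PlayerAttack : MonoBehaviour
{
    private Animator animator;

    void Awake()
    {
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        // ignore clicks while paused or while the pointer is over any UI element
        if (PauseMenu.IsPaused) return;
        if (EventSystem.current.IsPointerOverGameObject()) return;

        if (Input.GetMouseButtonDown(0))
            animator.SetTrigger("Attack");
    }
}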

Unity 5: How to know if finger is on joystick even if it is at horizontal & vertical zero

I'm creating a jetpack (mobile) controller where the left joystick is used to control forward and backward movement and the right joystick is used to control rotation and upward movement. What I want is for the player to go upwards whenever the user is touching the right joystick, even if the horizontal and vertical axes both return zero. So if there is a finger on the right joystick, the player goes up, similar to GetButton or GetKey(some KeyCode).
Hope this helps somebody in the future:
I found out there are OnPointerUp and OnPointerDown methods that can be used to check whether the joystick is pressed or not. The easiest way for me to use those was to change a few things in Standard Assets > Utility > Joystick.cs. This is how those methods look after my modifications:
public void OnPointerUp(PointerEventData data)
{
transform.position = m_StartPos;
UpdateVirtualAxes(m_StartPos);
if (data.rawPointerPress.name == "MobileJoystick_right") {
rightJoystickPressed = false;
}
}
public void OnPointerDown(PointerEventData data) {
if (data.pointerEnter.name == "MobileJoystick_right") {
rightJoystickPressed = true;
}
}
So basically I just added the if statements. Now I can access the rightJoystickPressed boolean from any other script to check whether the joystick is being pressed, even if it is not moved.
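For example, another script could then read that flag each frame. This sketch assumes rightJoystickPressed was exposed as a public static bool on the modified Joystick class; adjust the using directive to wherever your Joystick.cs actually lives, and JetpackLift/liftSpeed are illustrative names.
using UnityEngine;
using UnityStandardAssets.CrossPlatformInput; // or wherever the modified Joystick class lives

public class JetpackLift : MonoBehaviour
{
    public float liftSpeed = 3f;

    void Update()
    {
        // rise while the right stick is touched, even if its axes read (0, 0)
        if (Joystick.rightJoystickPressed)
            transform.Translate(Vector3.up * liftSpeed * Time.deltaTime);
    }
}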

Physics2D.OverlapPoint () return always null

I'm trying to detect mouse clicks on 2D sprites in a 3D scene.
All my sprites have a BoxCollider2D (well placed) and a script on them, but hit is null all the time. I also tried putting the Update() function in a script on a GameEngine GameObject, but I got the same result.
void Update () {
if (Input.GetMouseButtonDown(0)) {
Vector2 mouse_position = Camera.main.ScreenToWorldPoint (Input.mousePosition);
Collider2D hit = Physics2D.OverlapPoint (mouse_position);
if (hit) {
Debug.Log ("Hit" + hit.transform.name);
} else {
Debug.Log (hit);
}
}
}
void OnMouseDown() {
Debug.Log ("Hit " + this.name);
}
No need to do what you're doing. The new Canvas UI system has a sophisticated event system built in.
If you look at your "Image" component, it has a "Raycast Target" option that basically turns the event system handlers on or off for that component.
You can listen for clicks/drags and other events on canvas elements using the UnityEngine.EventSystems namespace.
Here's an example for you:
using UnityEngine;
using UnityEngine.EventSystems;
public class BuildingUI : MonoBehaviour, IPointerDownHandler, IPointerUpHandler {
public void OnPointerDown(PointerEventData eventData)
{
Debug.Log("Pointer Down " + eventData.selectedObject.name);
}
public void OnPointerUp(PointerEventData eventData)
{
Debug.Log("Pointer Up " + eventData.selectedObject.name);
}
}
There are loads of interfaces you can implement; I recommend you check out the Manual:
IBeginDragHandler
ICancelHandler
IDeselectHandler
IDragHandler
IDropHandler
IEndDragHandler
IInitializePotentialDragHandler
IMoveHandler
IPointerClickHandler
IPointerDownHandler
IPointerEnterHandler
IPointerExitHandler
IPointerUpHandler
IScrollHandler
ISelectHandler
ISubmitHandler
IUpdateSelectedHandler
I'm pretty sure that the root cause of your problem is that your "Building" objects are UI objects, being under a canvas. There's a number of things that this could lead to, but what I believe is causing your problem is the issue of world and screen space.
You are converting the mouse location from a screen point to world space, when your "Building" objects are under a canvas that does not appear to be using world space for its location. To confirm this, I suggest that you do a Debug.Log for the original mouse position, the converted mouse position, and the actual position of your "Building" objects. If you find that the unconverted mouse position lines up more realistically with your object positions, I suggest removing the conversion.
You may have to do additional work (i.e. doing some math on the mouse or object positions and/or changing the anchors of your objects) to get it to work perfectly, but this should get your mouse position and your objects working in the same units in terms of position.
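For reference, the logging comparison suggested above might look something like this (ClickDebug and the building field are illustrative names, not from the question):
using UnityEngine;

public class ClickDebug : MonoBehaviour
{
    public Transform building; // one of the "Building" objects

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // compare the raw screen-space mouse position, the world-space
            // conversion, and where the object actually sits
            Vector2 converted = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            Debug.Log("raw mouse: " + (Vector2)Input.mousePosition
                      + "  converted: " + converted
                      + "  building: " + building.position);
        }
    }
}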