I have been trying to implement the standard gestures Leap Motion provides, such as the circle gesture and swipe gesture, but none of them seems to work. I'm having a hard time understanding why most of the methods that exist in the API are not being recognised in Unity.
Below is code I have used to get a circle gesture.
using UnityEngine;
using System.Collections;
using Leap;

public class LeapTest : Leap.Listener {
    public Leap.Controller Controller;

    // Use this for initialization
    public void Start () {
        Controller = new Leap.Controller(this);
        Debug.Log("Leap start");
    }

    public override void OnConnect(Controller controller) {
        Debug.Log("Leap Connected");
        controller.EnableGesture(Gesture.GestureType.TYPECIRCLE, true);
    }

    public override void OnFrame(Controller controller)
    {
        Frame frame = controller.Frame();
        GestureList gestures = frame.Gestures();
        for (int i = 0; i < gestures.Count; i++)
        {
            // Index with i, not 0, so every gesture in the frame is checked.
            Gesture gesture = gestures[i];
            switch (gesture.Type) {
                case Gesture.GestureType.TYPECIRCLE:
                    Debug.Log("Circle");
                    break;
                default:
                    Debug.Log("Bad gesture type");
                    break;
            }
        }
    }
}
However, when I put this code into Unity3D it doesn't recognise the following lines from the code above:
Leap.Controller
.EnableGesture(Gesture.GestureType.TYPECIRCLE, true);
GestureList gestures = frame.Gestures();
I don't understand what I am missing here, or is it deprecated? Please, can someone explain? Thank you.
Gestures were deprecated in Orion (v3 and above), so if you're using one of the Orion versions of the Leap Core Assets you'll get this error. You can still use the v2 assets if you want these Gestures; otherwise you'll need to implement them yourself.
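For reference, here is a minimal do-it-yourself sketch of what that can look like: a swipe detector driven by palm velocity, polled from Update rather than through a Listener. It assumes the Orion C# API's Hand.PalmVelocity property; the 500 mm/s threshold and the horizontal-axis test are arbitrary choices you would tune yourself.

using UnityEngine;
using Leap;

public class SimpleSwipeDetector : MonoBehaviour
{
    Controller controller;
    // Hypothetical threshold in mm/s; tune for your own setup.
    const float SwipeSpeed = 500f;

    void Start()
    {
        controller = new Controller();
    }

    void Update()
    {
        Frame frame = controller.Frame();
        foreach (Hand hand in frame.Hands)
        {
            // A fast, mostly-horizontal palm movement counts as a swipe.
            Vector palmVelocity = hand.PalmVelocity;
            if (Mathf.Abs(palmVelocity.x) > SwipeSpeed &&
                Mathf.Abs(palmVelocity.x) > Mathf.Abs(palmVelocity.y))
            {
                Debug.Log(palmVelocity.x > 0 ? "Swipe right" : "Swipe left");
            }
        }
    }
}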
Even the official documentation has borderline insane recommendations to solve what is probably one of the most common UI/3D interaction issues:
If I click while the cursor is over a UI button, both the button (via the graphics raycaster) and the 3D world (via the physics raycaster) will receive the event.
The official manual (https://docs.unity3d.com/Packages/com.unity.inputsystem#1.2/manual/UISupport.html#handling-ambiguities-for-pointer-type-input) essentially says "how about you design your game so you don't need 3D and UI at the same time?".
I cannot believe this is not a solved problem, but everything I've tried has failed. EventSystem.current.currentSelectedGameObject is sticky, not hover. PointerData is protected and thus not accessible (one person offered a workaround via deriving your own class from StandaloneInputModule to get around that, but that workaround apparently doesn't work anymore). The old IsPointerOverGameObject throws a warning if you query it in a callback and is always true if you query it in Update().
That's all just mental. Please someone tell me there's a simple, obvious solution to this common, trivial problem that I'm just missing. The graphics raycaster certainly stores somewhere if it's over a UI element, right? Please?
I've looked into this a fair bit and in the end, the easiest solution seems to be to do what the manual says and put it in the Update function.
bool pointerOverUI = false;

void Update()
{
    pointerOverUI = EventSystem.current.IsPointerOverGameObject();
}
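If you also need that result inside an input callback (where querying triggers the warning mentioned above), one option is to fold the snippet into a component and read the cached flag from the callback. A sketch under that assumption; OnFire is a hypothetical message sent by a PlayerInput component in SendMessages mode:

using UnityEngine;
using UnityEngine.EventSystems;

public class PointerOverUICache : MonoBehaviour
{
    bool pointerOverUI = false;

    void Update()
    {
        // Cache once per frame, where querying is allowed.
        pointerOverUI = EventSystem.current.IsPointerOverGameObject();
    }

    // Hypothetical "Fire" action callback (PlayerInput, SendMessages mode).
    void OnFire()
    {
        if (pointerOverUI) return; // ignore clicks that started over UI
        Debug.Log("World click");
    }
}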
Your frustration is well founded: I've found NO examples of making UI work with the new Input System. I can share a more robust version of the Raycaster workaround, from YouTube:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.InputSystem;
using UnityEngine.UI;

/* Danndx 2021 (youtube.com/danndx)
   From video: youtu.be/7h1cnGggY2M
   thanks - delete me! :) */

public class SCR_UiInteraction : MonoBehaviour
{
    public GameObject ui_canvas;

    GraphicRaycaster ui_raycaster;
    PointerEventData click_data;
    List<RaycastResult> click_results;

    void Start()
    {
        ui_raycaster = ui_canvas.GetComponent<GraphicRaycaster>();
        click_data = new PointerEventData(EventSystem.current);
        click_results = new List<RaycastResult>();
    }

    void Update()
    {
        // use isPressed if you wish to ray cast every frame:
        //if(Mouse.current.leftButton.isPressed)

        // use wasReleasedThisFrame if you wish to ray cast just once per click:
        if (Mouse.current.leftButton.wasReleasedThisFrame)
        {
            GetUiElementsClicked();
        }
    }

    void GetUiElementsClicked()
    {
        /** Get all the UI elements clicked, using the current mouse position and raycasting. **/
        click_data.position = Mouse.current.position.ReadValue();
        click_results.Clear();
        ui_raycaster.Raycast(click_data, click_results);

        foreach (RaycastResult result in click_results)
        {
            GameObject ui_element = result.gameObject;
            Debug.Log(ui_element.name);
        }
    }
}
So, just drop this into my "Menusscript.cs"?
But as a pattern, this is terrible for separating UI concerns. I'm currently rewiring EVERY separately-concerned PointerEventData click I already had working, and my question is: why? I can't even find how it's supposed to work; to your point, there IS no official guide at all about clicking UI, and it does NOT just drop on top.
Anyway, I haven't found anything yet that makes the new input work easily with UI, and I definitely haven't found how I'm going to sensibly separate menu clicks from activity clicks while keeping the game and UI assemblies separate.
Good luck to us all.
Unity documentation for this issue with regard to Unity.InputSystem can be found at https://docs.unity3d.com/Packages/com.unity.inputsystem#1.3/manual/UISupport.html#handling-ambiguities-for-pointer-type-input.
IsPointerOverGameObject() can always return true if the extent of your canvas covers the camera's entire field of view.
For clarity, here is the solution which I found worked best (accumulated from several other posts across the web).
Attach this script to your UI Canvas object:
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.InputSystem;
using UnityEngine.UI;

public class CanvasHitDetector : MonoBehaviour {

    private GraphicRaycaster _graphicRaycaster;

    private void Start()
    {
        // This instance is needed to compare between UI interactions and
        // game interactions with the mouse.
        _graphicRaycaster = GetComponent<GraphicRaycaster>();
    }

    public bool IsPointerOverUI()
    {
        // Obtain the current mouse position.
        var mousePosition = Mouse.current.position.ReadValue();

        // Create a pointer event data structure with the current mouse position.
        var pointerEventData = new PointerEventData(EventSystem.current);
        pointerEventData.position = mousePosition;

        // Use the GraphicRaycaster instance to determine how many UI items
        // the pointer event hits. If this value is greater than zero, skip
        // further processing.
        var results = new List<RaycastResult>();
        _graphicRaycaster.Raycast(pointerEventData, results);
        return results.Count > 0;
    }
}
In the class containing the method that handles the mouse clicks, obtain a reference to the Canvas UI either using GameObject.Find() or an exposed public variable, and call IsPointerOverUI() to filter out clicks when the pointer is over the UI.
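For example, a hypothetical consumer might look like this (WorldClickHandler and its Inspector-assigned field are illustrative names, not part of the solution above):

using UnityEngine;
using UnityEngine.InputSystem;

public class WorldClickHandler : MonoBehaviour
{
    // Assign in the Inspector, or fetch it with GameObject.Find().
    public CanvasHitDetector canvasHitDetector;

    void Update()
    {
        if (Mouse.current.leftButton.wasPressedThisFrame &&
            !canvasHitDetector.IsPointerOverUI())
        {
            // Safe to treat as a 3D-world click; UI clicks are filtered out.
            Debug.Log("Clicked the world, not the UI");
        }
    }
}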
Reply to @Milad Qasemi's answer
From the docs you attached in your answer, I have tried the following to check whether the user clicked on a UI element or not.
// gets called in the Update method
if (Input.GetMouseButton(0)) {
    int layerMask = 1 << 5;
    // raycast in the UI layer
    RaycastHit2D hit = Physics2D.Raycast(Camera.main.ScreenToWorldPoint(Input.mousePosition), Vector2.zero, Mathf.Infinity, layerMask);
    // if the ray hit any UI element, return
    // and don't handle player movement
    if (hit.collider) { return; }
    Debug.Log("Touched not on UI");
    playerController.HandlePlayerMovement(x);
}
The raycast doesn't seem to detect collisions with UI elements. Below is a picture of the Graphic Raycaster component of the Canvas:
Reply to @Lowelltech
Your solution worked for me, except that I used Touchscreen instead of Mouse:
// Obtain the current touch position.
var pointerPosition = Touchscreen.current.position.ReadValue();
The Input System is Unity's new way of receiving input. You can't use existing scripts written for the old system with it, and you'll run into problems like the original questioner did. Answers with code like "if (Input.GetMouseButton(0))" are invalid because they use the old system.
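For reference, a minimal sketch of the equivalent polling call in the new Input System (assuming the com.unity.inputsystem package is installed and active):

using UnityEngine;
using UnityEngine.InputSystem;

public class NewInputPollingExample : MonoBehaviour
{
    void Update()
    {
        // Old input manager: if (Input.GetMouseButton(0)) { ... }
        // New Input System; Mouse.current is null when no mouse is present.
        if (Mouse.current != null && Mouse.current.leftButton.isPressed)
        {
            Debug.Log("Left mouse button held");
        }
    }
}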
I am developing a VR game in Unity (2020.3.15f2) using the XR Interaction Toolkit package (1.0.0-pre.5) for my Oculus Quest 2. At this stage in my development, I am trying to recognize presses to the trigger and grip buttons on the controllers respectively in order to animate some 3D hand models. Here's the script I've written to accomplish this:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

public class HandPresence : MonoBehaviour {
    public InputDeviceCharacteristics controllerCharacteristics;
    public GameObject handModelPrefab;

    private InputDevice targetDevice;
    private GameObject spawnedHandModel;
    private Animator handAnimator;

    void Start() {
        TryInitialize();
    }

    void TryInitialize() {
        List<InputDevice> devices = new List<InputDevice>();
        InputDevices.GetDevicesWithCharacteristics(controllerCharacteristics, devices);
        if (devices.Count > 0) {
            targetDevice = devices[0];
            spawnedHandModel = Instantiate(handModelPrefab, transform);
            handAnimator = spawnedHandModel.GetComponent<Animator>();
        }
    }

    void UpdateHandAnimation() {
        if (targetDevice.TryGetFeatureValue(CommonUsages.trigger, out float triggerValue)) {
            handAnimator.SetFloat("Trigger", triggerValue);
        } else {
            handAnimator.SetFloat("Trigger", 0);
        }
        if (targetDevice.TryGetFeatureValue(CommonUsages.grip, out float gripValue)) {
            handAnimator.SetFloat("Grip", gripValue);
        } else {
            handAnimator.SetFloat("Grip", 0);
        }
    }

    void Update()
    {
        if (!targetDevice.isValid) {
            TryInitialize();
        } else {
            spawnedHandModel.SetActive(true);
            UpdateHandAnimation();
        }
    }
}
The issue I'm experiencing is that the values of both triggerValue and gripValue are always 0. The value of targetDevice looks fine. I also tried using triggerButton, gripButton, primaryButton, etc. and they are always 0/false as well. The hand models show up just fine and their movement is in sync with the movement of the controllers, but they just don't seem to want to register any button presses.
I've been stuck on this one for hours and would very much appreciate any insight, thank you!
Is your project set up with the (new) Input System? I have no problem detecting trigger and grip values there.
Also make sure the targetDevice actually has trigger and grip features; it may be another device, such as the HMD.
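A quick way to check is to log every connected XR device along with its characteristics, so you can see whether your controllerCharacteristics filter matched a controller or the headset. A small diagnostic sketch using only UnityEngine.XR calls:

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

public class XRDeviceLister : MonoBehaviour
{
    void Start()
    {
        var devices = new List<InputDevice>();
        InputDevices.GetDevices(devices);
        foreach (var device in devices)
        {
            // e.g. "Oculus Touch Controller - Right: Right, HeldInHand, ..."
            Debug.Log(device.name + ": " + device.characteristics);
        }
    }
}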
I want to create a game in Flutter with Flame. For this game I want to detect swipes.
I was able to implement tap recognition with the help of a tutorial, but I could not do the same for swipe detection.
My main function, with tap recognition, looks like this:
void main() async {
  Util flameUtil = Util();
  await flameUtil.fullScreen();
  await flameUtil.setOrientation(DeviceOrientation.portraitUp);

  GameManager game = GameManager();
  runApp(game.widget);

  TapGestureRecognizer tapper = TapGestureRecognizer();
  tapper.onTapDown = game.onTapDown;
  flameUtil.addGestureRecognizer(tapper);
}
In my GameManager class I have:
class GameManager extends Game {
  // a few methods like update, render and the constructor

  void onTapDown(TapDownDetails d) {
    // bgRect is the background rectangle, so the tap works on the whole screen
    if (bgRect.contains(d.globalPosition)) {
      player.onTapDown();
    }
  }
}
And my player class contains:
void onTapDown() {
  rotate();
}
Now I want to change this to rotate in the direction of the swipe instead of onTapDown.
I tried to somehow add
GestureDetector swiper = GestureDetector();
swiper.onPanUpdate = game.onPanUpdate;
to my main and
void onPanUpdate() {
}
to my gameManager class. But I cannot find anything similar to TapDownDetails for panning.
Any suggestions on this?
I saw some advice to wrap the widget in a GestureDetector and use it like this:
GestureDetector(onPanUpdate: (details) {
  if (details.delta.dx > 0) {
    // swiping in right direction
  }
});
But I couldn't make it work on my project.
You can use the HorizontalDragGestureRecognizer (or the PanGestureRecognizer if you need both axes).
Use the following in your main method:
HorizontalDragGestureRecognizer tapper = HorizontalDragGestureRecognizer();
tapper.onUpdate = game.dragUpdate;
and then the following in your GameManager
void dragUpdate(DragUpdateDetails d) {
  // using d.delta you can then track the movement and implement your rotation update here
}
That should do the trick :D
If you need both axes, you can use the PanGestureRecognizer (as @Marco Papula said).
Use this in your main method:
PanGestureRecognizer panGestureRecognizer = PanGestureRecognizer();
panGestureRecognizer.onEnd = game.onPanUpdate;
flameUtil.addGestureRecognizer(panGestureRecognizer);
and your onPanUpdate method:
void onPanUpdate(DragEndDetails d) {
  if (d.velocity.pixelsPerSecond.dx.abs() > d.velocity.pixelsPerSecond.dy.abs()) {
    // X axis
    snake.velocity = d.velocity.pixelsPerSecond.dx < 0 ? LeftSwipe : RightSwipe;
  } else {
    // Y axis
    snake.velocity = d.velocity.pixelsPerSecond.dy < 0 ? UpSwipe : DownSwipe;
  }
}
If you are using a newer (v1) version of Flame, you no longer need to wrap it in your own GestureDetector (though you can). Flame now has built-in wrappers for all events, including panning! You can mix your game with:
class MyGame extends BaseGame with PanDetector {
// ...
}
And implement onPanUpdate to get the desired behaviour, or use any of a myriad of other detectors (check the documentation for more options and details on how to use them).
I have created a VR app in Unity using the Google VR SDK. Currently I can click on the screen to start the video. When I use the VR headset, I need to use the Bluetooth controller to play/stop the video. Can someone help me do this?
You need to identify how Unity maps your controller's buttons. Once you know which button you want to use and how Unity maps it in its Input Manager (Edit -> Project Settings -> Input), you just need to call your function from Update like this:
void Update()
{
    if (Input.GetButtonUp("Fire1"))
    {
        playVideoFunction();
    }
}
Where playVideoFunction() is your own function.
In this example I used "Fire1", but it may be different in your case.
For example, the Xbox Controller configuration is explained in Xbox 360 Controller Input on Unity.
If you can't find anything related to your controller, you can do something like:
void Update()
{
    if (Input.GetButtonUp("Fire1"))
    {
        Debug.Log("Fire 1 pressed");
    }
    if (Input.GetButtonUp("Fire2"))
    {
        Debug.Log("Fire 2 pressed");
    }
    if (Input.GetButtonUp("0"))
    {
        Debug.Log("Button 0 pressed");
    }
    // Add more buttons and logs
}
There may be other ways to identify the input from arbitrary controllers, but I don't know them. I needed the mapping for the Xbox controller and that page was useful.
You can create a script and attach it to the video player so that the commands run when a user clicks on the player. I assume you already have the Unity package for Google VR (GVR). Add the following sample script and modify it to suit your needs.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class VideoPlayer : MonoBehaviour {

    private bool _isPlaying;

    // Use this for initialization
    void Start () {
        _isPlaying = false;
    }

    // Update is called once per frame
    void Update () {
        // Toggle playback on each controller click and track the state.
        if (GvrController.ClickButtonUp && !_isPlaying) {
            PlayVideo();
            _isPlaying = true;
        }
        else if (GvrController.ClickButtonUp && _isPlaying) {
            StopVideo();
            _isPlaying = false;
        }
    }

    public void PlayVideo() {
        // Logic to play the video
    }

    public void StopVideo() {
        // Logic to stop the video
    }
}
Question asked just to share a solution :)
The main idea is to recognize a QR code in Unity without ANY additional action like tapping on the screen or something similar.
(For me it's not a problem that the free version of Vuforia has a watermark, so here is my solution.)
(Also, Vuforia works with the camera much faster, and there's no need to implement autofocus manually.)
Continuous QR-code recognition, using Vuforia as the webcam source and the ZXing library as the QR recognizer:
using UnityEngine;
using Vuforia;
using ZXing;

public class QRCodeReader : MonoBehaviour {

    private bool _isFrameFormatSet;
    IBarcodeReader _barcodeReader = new BarcodeReader();

    void Start () {
        InvokeRepeating("Autofocus", 2f, 2f);
    }

    void Autofocus () {
        CameraDevice.Instance.SetFocusMode(CameraDevice.FocusMode.FOCUS_MODE_TRIGGERAUTO);
        RecognizeQR();
    }

    private Vuforia.Image GetCurrFrame()
    {
        return CameraDevice.Instance.GetCameraImage(Vuforia.Image.PIXEL_FORMAT.GRAYSCALE);
    }

    void RecognizeQR()
    {
        // Register the grayscale frame format once; SetFrameFormat reports success.
        if (!_isFrameFormatSet)
        {
            _isFrameFormatSet = CameraDevice.Instance.SetFrameFormat(Vuforia.Image.PIXEL_FORMAT.GRAYSCALE, true);
        }

        var currFrame = GetCurrFrame();
        if (currFrame == null)
        {
            Debug.Log("Camera image capture failure;");
        }
        else
        {
            var imgSource = new RGBLuminanceSource(currFrame.Pixels, currFrame.BufferWidth, currFrame.BufferHeight, true);
            var result = _barcodeReader.Decode(imgSource);
            if (result != null)
            {
                Debug.Log("RECOGNIZED: " + result.Text);
            }
        }
    }
}
It's also possible to do this without Vuforia, of course. Unity provides the ability to access a camera and show its input on a WebCamTexture. You can find more documentation here.
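A minimal sketch of that Vuforia-free route, assuming the Unity build of ZXing.Net (which offers a Decode overload taking Color32[] pixels plus width and height); the one-second polling interval and the not-ready width check are arbitrary choices:

using UnityEngine;
using ZXing;

public class WebcamQRReader : MonoBehaviour
{
    WebCamTexture camTexture;
    readonly IBarcodeReader _barcodeReader = new BarcodeReader();

    void Start()
    {
        camTexture = new WebCamTexture();
        camTexture.Play();
        InvokeRepeating("Decode", 1f, 1f);
    }

    void Decode()
    {
        // WebCamTexture reports a tiny placeholder size until the feed starts.
        if (camTexture.width <= 16) return;

        var result = _barcodeReader.Decode(camTexture.GetPixels32(),
                                           camTexture.width, camTexture.height);
        if (result != null)
        {
            Debug.Log("RECOGNIZED: " + result.Text);
        }
    }
}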
You can find the ZXing library here, or build it yourself from the source code on GitHub.
Both libraries are cross-platform, so there should be no issues on different devices.