I'm building a car simulator game in Unity. For input I'm using a Logitech G29 steering wheel. Now I need to use a hand controller to accelerate or brake.
This is my hand controller: Hand Controller HC1 (link).
Now how can I interpret its input? The device is recognized by my Windows 10 system, but when I start the game with it I cannot accelerate or brake the car.
I configured this in my InputController in Unity:
And in my IRDSPlayerControls.cs file I wrote these lines of code:
// requires: using System;
if (Input.anyKey)
{
    foreach (KeyCode kcode in Enum.GetValues(typeof(KeyCode)))
    {
        if (Input.GetKey(kcode))
            Debug.Log("Joystick pressed " + kcode);
    }
}
Debug.Log("Input debug acc: " + Input.GetAxis("Vertical3"));
Debug.Log("Input debug frenata: " + Input.GetAxis("Vertical4"));
In the Unity Console, I see this:
Input debug acc: -1
Input debug frenata: -1
You can detect a specific button on a specific joystick (joystick 1 button 0, joystick 1 button 1, joystick 2 button 0, …),
or a specific button on any joystick (joystick button 0, joystick button 1, joystick button 2, …).
Check out the Input Manager.
I could explain this step by step here, but it won't be as good as some tutorials online. I recommend this video as a good tutorial.
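As a rough sketch, polling those KeyCodes in Update looks like this (the button indices below are placeholders; your device may expose different ones):
using UnityEngine;

public class JoystickButtonProbe : MonoBehaviour
{
    void Update()
    {
        // A specific button on a specific joystick.
        if (Input.GetKeyDown(KeyCode.Joystick1Button0))
            Debug.Log("Joystick 1, button 0 pressed");

        // The same button on any connected joystick.
        if (Input.GetKeyDown(KeyCode.JoystickButton0))
            Debug.Log("Button 0 pressed on some joystick");
    }
}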
UPDATE:
I think your hand controller gives analog values: the acceleration/brake controls are not actually buttons but analog axes, so they report a range of values.
To check this, use Input.GetJoystickNames:
using UnityEngine;
public class Example : MonoBehaviour
{
// Prints a joystick name if movement is detected.
void Update()
{
// requires you to set up axes "Joy0X" - "Joy3X" and "Joy0Y" - "Joy3Y" in the Input Manager
for (int i = 0; i < 4; i++)
{
if (Mathf.Abs(Input.GetAxis("Joy" + i + "X")) > 0.2 ||
Mathf.Abs(Input.GetAxis("Joy" + i + "Y")) > 0.2)
{
Debug.Log(Input.GetJoystickNames()[i] + " is moved");
}
}
}
}
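If the pedals do turn out to be analog axes, you can read them with Input.GetAxis and remap the resting value. A minimal sketch, assuming the "Vertical3"/"Vertical4" axis names from your Input Manager setup, and assuming an idle pedal rests at -1 and reaches 1 when fully pressed (which would match the -1 you see in the Console):
using UnityEngine;

public class HandControllerPedals : MonoBehaviour
{
    void Update()
    {
        // Remap from [-1, 1] (idle .. fully pressed) to [0, 1].
        float accelerator = (Input.GetAxis("Vertical3") + 1f) * 0.5f;
        float brake = (Input.GetAxis("Vertical4") + 1f) * 0.5f;

        Debug.Log("acc: " + accelerator + " brake: " + brake);
    }
}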
I suggest first checking whether those inputs are being received, with:
if (Input.anyKey)
{
    foreach (KeyCode kcode in Enum.GetValues(typeof(KeyCode)))
    {
        if (Input.GetKey(kcode))
            Debug.Log(kcode);
    }
}
This way you can tell whether the game recognizes the keycodes of your controller and, if it does, which names are assigned to them.
Once you have this, you only need to check those keycodes as you would a regular keyboard!
Not every joystick, wheel, etc. maps its inputs to the same axes.
There is a Unity forum thread about that topic (and other related problems), and I found some Unity plugins that could probably solve your problem:
https://github.com/speps/XInputDotNet
https://github.com/JISyed/Unity-XboxCtrlrInput
There are programs that list all input axes and show which one you are currently moving. I used one of them but don't remember its name. It might help you see which axes your brake and throttle are mapped to.
Some of these programs also let you remap the axes, if that is what you want.
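You can also do this scan inside Unity itself. A minimal sketch, assuming you first create axes named "Axis1" through "Axis10" in the Input Manager, each bound to the joystick's "1st axis" … "10th axis" (the names are placeholders):
using UnityEngine;

public class AxisScanner : MonoBehaviour
{
    void Update()
    {
        // Log whichever preconfigured axis is currently being moved.
        for (int i = 1; i <= 10; i++)
        {
            float value = Input.GetAxis("Axis" + i);
            if (Mathf.Abs(value) > 0.2f)
                Debug.Log("Axis" + i + " = " + value);
        }
    }
}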
The most probable cause is that Unity3D does not support this device.
Unity3D uses a mix of XInput, GameInput(?), and USB HID processing for its input on Windows.
It is unclear (Unity is closed source) whether GameInput is used on Windows; it is required on modern Xbox consoles.
I cannot provide a definitive answer, since I do not have this controller to test, and the documentation on the controller is sparse.
The best I can do is point you in the right direction.
Does the device exist in Unity3D?
See if the Input System identifies the device when it is plugged in while the game is running (make sure the Game window has focus):
Adapted from https://docs.unity3d.com/Packages/com.unity.inputsystem@1.4/manual/HowDoI.html
// requires: using UnityEngine.InputSystem;
InputSystem.onDeviceChange +=
    (device, change) =>
    {
        switch (change)
        {
            case InputDeviceChange.Added:
                // New device.
                Debug.Log("New device added.");
                break;
            case InputDeviceChange.Disconnected:
                // Device got unplugged.
                break;
            case InputDeviceChange.Reconnected:
                // Plugged back in.
                break;
            case InputDeviceChange.Removed:
                // Removed from the Input System entirely; by default,
                // devices stay in the system once discovered.
                break;
            default:
                // See the InputDeviceChange reference for other event types.
                break;
        }
    };
A lack of log output when the device is plugged in means it was not identified as a potential input device; skip to "All else Fails" below.
Identification at this level does not imply support, as it may flag all HID devices.
Look at all low-level input events while pressing the buttons (also adapted from the documentation linked above):
// requires: using UnityEngine.InputSystem.LowLevel;
var trace = new InputEventTrace(); // Can also give a device ID to only
                                   // trace events for a specific device.
trace.Enable();

//…run stuff

var current = new InputEventPtr();
while (trace.GetNextEvent(ref current))
{
    Debug.Log("Got some event: " + current);
}

// Trace consumes unmanaged resources. Make sure to dispose.
trace.Dispose();
The chances of getting this far with responses (given the output you posted) are slim, but if it happens, explore the output for hints about the device associations and fix your mappings accordingly.
All else Fails
Request device support through the Unity3D.com website. Highly recommended.
You can write your own support for the device, either via raw USB HID (which may be flagged by virus scanners and has limited documentation) or by implementing a custom GameInput interface. The device's inclusion in Windows' Game Controllers panel makes the latter the most probable solution.
Related
I know this question has already been asked here twice, but the answers did not fix my problem. I need to enable spatial mapping at runtime. After scanning my environment I want to disable it, or at least hide the visualization of polygons, so I can save some fps. But after disabling spatial mapping I still want to keep the colliders of my environment.
What I tried:
1. This example from this post did nothing.
if (disable)
{
    // disable
    MixedRealityToolkit.SpatialAwarenessSystem.Disable();
}
else
{
    // enable
    MixedRealityToolkit.SpatialAwarenessSystem.Enable();
}
2. Trying to disable the visualization gives me a NullReferenceException every time. I guess GetObservers() returns null, or maybe meshObserver is null:
foreach(var observer in MixedRealityToolkit.SpatialAwarenessSystem.GetObservers())
{
var meshObserver = observer as IMixedRealitySpatialAwarenessMeshObserver;
if (meshObserver != null)
{
meshObserver.DisplayOption = SpatialAwarenessMeshDisplayOptions.None;
}
}
3. The example given by MRTK in their SpatialAwarenessMeshDemo scene shows how to start and stop the observer. Starting works fine, but after suspending and clearing the observers the whole spatial map disappears, so my cursor no longer aligns to my environment. So this is not what I need.
SpatialAwarenessSystem.ResumeObservers(); //start
SpatialAwarenessSystem.SuspendObservers();//stop
SpatialAwarenessSystem.ClearObservations();
What I have right now:
My Spatial Awareness Profile looks like this:
My code starts spatial mapping with ResumeObservers; the foreach loop gives me a NullReferenceException, and SuspendObservers is commented out because it disables the whole spatial map:
if (_isObserverRunning)
{
foreach (var observer in SpatialAwarenessSystem.GetObservers())
{
var meshObserver = observer as IMixedRealitySpatialAwarenessMeshObserver;
if (meshObserver != null)
{
meshObserver.DisplayOption = SpatialAwarenessMeshDisplayOptions.None;
}
}
//SpatialAwarenessSystem.SuspendObservers();
//SpatialAwarenessSystem.ClearObservations();
_isObserverRunning = false;
}
else
{
SpatialAwarenessSystem.ResumeObservers();
_isObserverRunning = true;
}
Question: How do I start and stop spatial mapping the right way, so that I can save some performance and still have the colliders of the spatial map to interact with?
My specs:
MRTK v2.0.0
Unity 2019.2.0f1
Visual Studio 2017
Edit (including solution):
1. With option #1 I was wrong. It does what it's meant for, but I used it the wrong way. If you disable, for example, SpatialAwarenessSystem while the spatial mapping process is running, it disables the whole process, including the created spatial map. After that you can't interact with the environment.
2. What worked for me: to start, ResumeObservers() combined with setting the display option to Visible; to stop spatial mapping, SuspendObservers() combined with display option None (see the sketch after this list).
3. The NullReferenceException is fixed by rewriting and casting to IMixedRealityDataProviderAccess:
if (CoreServices.SpatialAwarenessSystem is IMixedRealityDataProviderAccess provider)
{
foreach (var observer in provider.GetDataProviders())
{
if (observer is IMixedRealitySpatialAwarenessMeshObserver meshObs)
{
meshObs.DisplayOption = option;
}
}
}
4. Performance: to get your fps back after starting an observer, you really need to disable the system via MixedRealityToolkit.SpatialAwarenessSystem.Disable();, but this of course also disables the spatial map, so you can't interact with it anymore.
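Putting points 2 and 3 together, a minimal toggle sketch for MRTK v2 (the class and method names here are my own):
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.SpatialAwareness;
using UnityEngine;

public class SpatialMappingToggle : MonoBehaviour
{
    private bool _isObserverRunning;

    public void ToggleSpatialMapping()
    {
        if (_isObserverRunning)
        {
            // Stop scanning and hide the mesh; existing colliders stay in the scene.
            CoreServices.SpatialAwarenessSystem.SuspendObservers();
            SetDisplayOption(SpatialAwarenessMeshDisplayOptions.None);
        }
        else
        {
            // Start scanning and show the mesh again.
            CoreServices.SpatialAwarenessSystem.ResumeObservers();
            SetDisplayOption(SpatialAwarenessMeshDisplayOptions.Visible);
        }
        _isObserverRunning = !_isObserverRunning;
    }

    private void SetDisplayOption(SpatialAwarenessMeshDisplayOptions option)
    {
        if (CoreServices.SpatialAwarenessSystem is IMixedRealityDataProviderAccess provider)
        {
            foreach (var observer in provider.GetDataProviders())
            {
                if (observer is IMixedRealitySpatialAwarenessMeshObserver meshObs)
                {
                    meshObs.DisplayOption = option;
                }
            }
        }
    }
}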
@Perazim,
The recommendation is based on your option #3: call ResumeObservers() to start and SuspendObservers() to stop. There is no need to call ClearObservations() unless you wish to have the observations removed from your scene.
The example calls ClearObservations() to illustrate what was, at the time, a new feature added to the Spatial Awareness system.
Please file an issue on GitHub (https://github.com/microsoft/MixedRealityToolkit-Unity/issues) for #1 (failure of Enable() and Disable() to impact the system). Those methods should behave as advertised.
Thank you!
David
So I'm completely new to Unity and VR but for a project I need to detect the positions of the base stations.
I tried googling, but since I don't know all the lingo I don't really know where and what to look for.
All I can find is how to detect the controllers.
Here's one way, all with Unity code:
// requires: using System.Collections.Generic; using System.Linq;
// and using UnityEngine.XR;
var nodeStates = new List<XRNodeState>();
InputTracking.GetNodeStates(nodeStates);

foreach (var trackedNode in nodeStates.Where(n => n.nodeType == XRNode.TrackingReference))
{
    bool hasPos = trackedNode.TryGetPosition(out var position);
    bool hasRot = trackedNode.TryGetRotation(out var rotation);
}
In OpenVR, base stations are "tracked devices", just like the controllers and HMD. The standard SteamVR plugin for Unity already has a way to get the position of any tracked device, see for example how the controllers are implemented in the standard [CameraRig] prefab.
The only problem is that you need to provide the "index" of the device, which may change every time you reconnect your headset. The SteamVR plugin handles this with the SteamVR_ControllerManager component, but as the name suggests, it handles only controllers. You should be able to implement something similar, or just edit the script and find the lines
if (deviceClass == ETrackedDeviceClass.Controller ||
deviceClass == ETrackedDeviceClass.GenericTracker)
and add ETrackedDeviceClass.TrackingReference to this condition. You should then be able to copy the controller objects and attach them to the "additional objects" array in SteamVR_ControllerManager to have the base stations appear in your scene.
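Alternatively, you can query OpenVR directly through the plugin's Valve.VR bindings instead of editing SteamVR_ControllerManager. A rough sketch (the class name and logging are my own):
using UnityEngine;
using Valve.VR;

public static class BaseStationLocator
{
    public static void LogBaseStationPoses()
    {
        var system = OpenVR.System;
        if (system == null)
            return; // OpenVR not initialized yet

        var poses = new TrackedDevicePose_t[OpenVR.k_unMaxTrackedDeviceCount];
        system.GetDeviceToAbsoluteTrackingPose(
            ETrackingUniverseOrigin.TrackingUniverseStanding, 0f, poses);

        for (uint i = 0; i < OpenVR.k_unMaxTrackedDeviceCount; i++)
        {
            if (system.GetTrackedDeviceClass(i) == ETrackedDeviceClass.TrackingReference &&
                poses[i].bPoseIsValid)
            {
                // Translation lives in the last column of the 3x4 matrix;
                // z is negated to convert right-handed OpenVR to left-handed Unity.
                var m = poses[i].mDeviceToAbsoluteTracking;
                var position = new Vector3(m.m3, m.m7, -m.m11);
                Debug.Log("Base station " + i + " at " + position);
            }
        }
    }
}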
I have a problem in my project: I want to know whether a mouse click happened on the GUI or on a game object.
I tried this, but it throws a NullReferenceException:
EventSystem eventSystem = EventSystem.current;
if (eventSystem.IsPointerOverGameObject())
Debug.Log("left click over a gui element");
How can I detect this? Is there an event available, or some other way?
IsPointerOverGameObject() is fairly broken on mobile and some corner cases. We rolled our own for our project and it works like a champ on all platforms we've thrown it at.
// requires: using System.Collections.Generic;
// and using UnityEngine.EventSystems;
private bool IsPointerOverUIObject() {
    PointerEventData eventDataCurrentPosition = new PointerEventData(EventSystem.current);
    eventDataCurrentPosition.position = new Vector2(Input.mousePosition.x, Input.mousePosition.y);
    List<RaycastResult> results = new List<RaycastResult>();
    EventSystem.current.RaycastAll(eventDataCurrentPosition, results);
    return results.Count > 0;
}
Source:
http://forum.unity3d.com/threads/ispointerovereventsystemobject-always-returns-false-on-mobile.265372/
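A typical usage sketch, gating scene clicks in Update (the comment marks where your own click logic would go):
void Update() {
    if (Input.GetMouseButtonDown(0) && !IsPointerOverUIObject()) {
        // The click landed on the scene, not on the UI:
        // handle your game-object interaction here.
    }
}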
There are several approaches you can use to detect whether the mouse is over a legacy GUI element. Here I'll show the one I use in my legacy GUI projects, which usually works fine with touch as well; if it doesn't work for you, search for "mouse over GUI" and you'll find plenty of other ways to do it:
Create an easily accessible behaviour (usually a singleton) to hold your mouseOverGUI status.
If you are using GUILayout.Button you need to check the last drawn rect; if you are using GUI.Button, just use the same rect you passed as the button's parameter, like this:
// GUILayout
if (Event.current.type == EventType.Repaint &&
    GUILayoutUtility.GetLastRect().Contains(Event.current.mousePosition)) {
    mouseOverGUI = true;
} // call this in OnGUI, right after drawing the element you want to control

// GUI
if (Event.current.isMouse &&
    yourRect.Contains(Event.current.mousePosition)) {
    mouseOverGUI = true;
}
After that, you just need to test whether mouseOverGUI is true or false to allow or block the desired click actions before executing them. (A good understanding of Unity's event and update loops will help you test the flag at the correct time, so you don't read a value that has already changed.)
Edit: also remember to reset mouseOverGUI to false when the pointer is not over the GUI ;)
Finally got my answer here:
There are three ways to do this, as demonstrated in this video tutorial. This video saved me :)
1. Use EventSystem.current.IsPointerOverGameObject.
2. Convert your OnMouseXXX and raycasts to an EventSystem trigger, and use a physics raycaster on the camera.
3. Implement the various handler interfaces from the EventSystems namespace, and use a physics raycaster on the camera.
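As a minimal sketch of the third option (assuming a PhysicsRaycaster on the camera and a collider on the object):
using UnityEngine;
using UnityEngine.EventSystems;

public class ClickableObject : MonoBehaviour, IPointerClickHandler
{
    // Called by the EventSystem only when the click reaches this object,
    // so UI elements in front of it automatically swallow the click.
    public void OnPointerClick(PointerEventData eventData)
    {
        Debug.Log(name + " was clicked");
    }
}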
From what I understand, multitouch support was added to GTK+ as of version 3.4. What I'm not clear on is whether this applies just to touch screens like phones/tablets or whether it extends to Apple-style touchpads (the way Ubuntu/Unity and OS X use multitouch gestures on the touchpad).
I've also had a hard time finding examples of how to implement gestures and how to track multitouch events.
Are there any good examples of how to implement multitouch with GTK (or something related like Clutter)?
I also couldn't find examples, so here is my knowledge about it.
Mouse events (introduction):
When using a mouse, GDK propagates the events GDK_BUTTON_PRESS, GDK_BUTTON_RELEASE (and a few others). These get translated into GtkWidget signals like button-press-event, and then into higher-level ones like GtkButton's clicked, if applicable. Connecting a callback to the button-press-event signal gives access to the GdkEventButton structure. Using clicked, however, frees you from keeping track of whether it was a click (press & release) or only a release (during kinetic scrolling, for instance).
Touch events:
Touch works a little differently. There are four touch events:
GDK_TOUCH_BEGIN: a new touch event sequence has just started (added in 3.4).
GDK_TOUCH_UPDATE: a touch event sequence has been updated (added in 3.4).
GDK_TOUCH_END: a touch event sequence has finished (added in 3.4).
GDK_TOUCH_CANCEL: a touch event sequence has been canceled (added in 3.4).
The GdkEventTouch structure uses GdkEventSequence to differentiate between fingers. It seems to me that it is simply a value (I couldn't find the definition in the sources), but I may be mistaken. GtkWidget has a touch-event signal, similar to button-press-event etc., that also gets translated into events like clicked.
Sample code (using gtkmm, but the core aspects are the same):
#include <gtkmm.h>
#include <iostream>
int main()
{
auto app = Gtk::Application::create();
Gtk::Window window;
window.set_default_size(1024, 768);
app->signal_startup().connect([&]
{
app->add_window(window);
});
window.show();
//code works for me without adding events mask but let's be thorough
window.add_events(Gdk::TOUCH_MASK);
window.signal_touch_event().connect([&](GdkEventTouch* event)->bool
{
std::cout<<"TOUCH EVENT: ";
switch(event->type)
{
case GDK_TOUCH_BEGIN:
std::cout<<"begin ";
break;
case GDK_TOUCH_UPDATE:
std::cout<<"update ";
break;
case GDK_TOUCH_END:
std::cout<<"end ";
break;
case GDK_TOUCH_CANCEL:
std::cout<<"cancel ";
break;
default:
std::cout<<"something else ";
}
std::cout<<event->sequence<<" "
<<gdk_event_get_event_sequence((GdkEvent*)event)<<" "
<<std::endl;
return GDK_EVENT_PROPAGATE;
});
window.signal_event().connect([&](GdkEvent* event)->bool
{
std::cout<<"EVENT: "<<event->type<<std::endl;
return GDK_EVENT_PROPAGATE;
});
app->run();
return 0;
}
Touchpad events:
There are also touchpad and pad events and structures, but there seems to be no explicit handling of these at the GTK level. It has to be done in a callback for the generic event signal, checking the GdkEventType and casting to the appropriate structures.
I have a virtual trackpad on my iPhone, and to move the mouse I'm using:
CGDisplayMoveCursorToPoint(kCGDirectMainDisplay, CGPointMake(((float)aD.msg)+location.x, ((float)aD.msg2)+location.y));
It works well, but it isn't a real mouse: when I move the cursor over my hidden Dock, the Dock doesn't show itself. I don't understand why.
Moreover, I tried to simulate a mouse click with:
case MOUSECLICK:
[self postMouseEventWithButton:0 withType:kCGEventLeftMouseDown andPoint:CGEventGetLocation(CGEventCreate(NULL))];
[self postMouseEventWithButton:0 withType:kCGEventLeftMouseUp andPoint:CGEventGetLocation(CGEventCreate(NULL))];
// *********************
-(void)postMouseEventWithButton:(CGMouseButton)b withType:(CGEventType)t andPoint:(CGPoint)p
{
CGEventRef theEvent = CGEventCreateMouseEvent(NULL, t, p, b);
CGEventSetType(theEvent, t);
CGEventPost(kCGHIDEventTap, theEvent);
CFRelease(theEvent);
}
Is this the right method? Thanks for your help!
CGDisplayMoveCursorToPoint() only moves the image of the cursor; it does not generate any events. You should create and post mouse events of type kCGEventMouseMoved to simulate moving the mouse. Your own method will do it:
[self postMouseEventWithButton:0 withType:kCGEventMouseMoved andPoint:point];
For clicks, you are already doing it the right way, I think. One thing you should also do is set the click count properly on both the mouse down and mouse up events, like so:
CGEventSetIntegerValueField(event, kCGMouseEventClickState, 1);
... because some applications need it.
(See also Simulating mouse clicks on Mac OS X does not work for some applications)
If your code doesn't work, I'm not sure why; it looks OK to me. Try posting to kCGSessionEventTap instead of kCGHIDEventTap and see if it helps. Also, you don't need the CGEventSetType() call since the type is already set in the creation call.