I have a couple of questions about Unity3d and game controllers on PC.
In Unity3d is it possible to use a game pad as an input source?
If so, which game pads are supported (e.g. Xbox)? Do I need a plugin or someone else's code? Can I use the vibration?
Can I receive input from multiple game pads at the same time (co-op for up to 4 players on the same machine)?
I have looked in several places, and it seems that using XInput will allow for Xbox controller support in Unity on Windows. I have seen nothing on multiple-controller support (e.g. 2-4 controllers on the same PC). Thank you so much for your time!
You can get input from a specific joystick using this:
Joystick Buttons (from a specific joystick): “joystick 1 button 0”, “joystick 1 button 1”, “joystick 2 button 0”, …
http://docs.unity3d.com/Manual/ConventionalGameInput.html
Near the end of the page
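For example, a minimal polling sketch (the axis name "P1_Horizontal" is a hypothetical entry you would define yourself in the Input Manager):

using UnityEngine;

public class JoystickProbe : MonoBehaviour
{
    void Update()
    {
        // Per-joystick button names from the manual page above.
        if (Input.GetKey("joystick 1 button 0"))
            Debug.Log("Player 1 pressed button 0");

        if (Input.GetKey("joystick 2 button 0"))
            Debug.Log("Player 2 pressed button 0");

        // Axes must be defined in Edit > Project Settings > Input,
        // e.g. an axis named "P1_Horizontal" mapped to joystick 1.
        float h = Input.GetAxis("P1_Horizontal");
        if (Mathf.Abs(h) > 0.1f)
            Debug.Log("Player 1 horizontal: " + h);
    }
}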
Firstly, I'm still fairly new to coding; I saw people making games and fell in love with the idea of making my own. So for the past few months I've been following Unity tutorials and practicing basic games, and I'm now basically done with my first game. Everything is ready and I want to build it and post it on the Play Store, but there's one last issue. When I run it in Unity on my PC it looks perfect, but when I load it onto my phone the UI and some objects are either not showing or look different than on the PC. Example 1, Example 2, Example 3: these are examples of my problem. The image above is the way it should look and the one underneath is how it shows on my phone.
It doesn't look the same due to the different resolutions.
Try using a Canvas Scaler component. Set a reference resolution and the scale mode you want.
If you want your UI elements anchored to the center/top/left etc., you should also set the anchor points. Here is a good tutorial.
A good way to instantly check the result is the "Device Simulator"; it is a Unity package.
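If you'd rather set the scaler up from code, here is a minimal sketch (the reference resolution is just an example value; it assumes the script sits on the Canvas GameObject):

using UnityEngine;
using UnityEngine.UI;

public class ScalerSetup : MonoBehaviour
{
    void Awake()
    {
        var scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(1920, 1080); // example reference resolution
        scaler.screenMatchMode = CanvasScaler.ScreenMatchMode.MatchWidthOrHeight;
        scaler.matchWidthOrHeight = 0.5f; // blend between matching width and height
    }
}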
That's because of your Canvas settings & your UI GameObjects' anchors. But I think the easiest way to solve this for you - because you don't have that much experience with it yet - is to use separate canvases for mobile & PC. This is the code:
using UnityEngine;

public class CanvasSwitcher : MonoBehaviour
{
    [SerializeField] private GameObject pcCanvas;     // Canvas designed for PC
    [SerializeField] private GameObject mobileCanvas; // Canvas designed for mobile

    private void Start()
    {
        if (Application.platform == RuntimePlatform.WindowsPlayer) // Game is running on PC
        {
            pcCanvas.SetActive(true);      // Use the PC-designed canvas
            mobileCanvas.SetActive(false); // Disable the mobile-designed canvas
        }
        else // Game is running on mobile
        {
            pcCanvas.SetActive(false);
            mobileCanvas.SetActive(true);
        }
    }
}
Add this to a GameObject, design 2 canvases & assign them to the script. (This will be a little complicated, but it will work.) See this link for more info.
But if you want to use 1 canvas, you have to adjust its settings & its GameObjects' anchors.
I'm building an OpenVR app for SteamVR to assist with seated play (my room is small, so my tracking area isn't ideal). My app pretty much just adjusts the play-area height while I hold the grip button and "scroll" on the touchpad, so that I can reach objects that are too low/high at variable heights. (I tried "OpenVR Advanced Settings", but its keybinding options are limited to simple button presses, so I decided to make my own version.)
I'd like to prevent touchpad input from being sent to the game while the grip button is being held, so that moving on the touchpad doesn't cause movement in game. Is this possible at all?
I'm assuming it's not possible, but wondering whether anyone has had any experience with this.
After your clarification in the comments, the answer is no: you cannot "eat up" device inputs in an application. I usually work on OpenVR drivers, and there, after you submit a device input and/or any other event, it is available to anything that expects pose update events; event subscribers cannot stop others from receiving those events.
However, there might be a workaround (if it's still an issue). I know of at least one application that can do what you want: OVR Toolkit. When its overlay is active and you try to click something in the overlay, the game running in parallel will not receive the input. However, that only happens if the OVR Toolkit overlay surface receives the input; it may be a built-in OpenVR overlay feature where you don't have to do anything, or it may be defined by the developer. I don't really want to test this right now.
Sadly, OVR Toolkit is not open source, but there is an open-source toolkit for Unity for making overlays, which might be the solution you're looking for; it can be found here.
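For reference, creating a bare overlay through the OpenVR C# bindings (the Valve.VR namespace from the SteamVR plugin) looks roughly like the sketch below; the overlay key/name strings are placeholders, and whether an active overlay actually swallows the game's input is the part you would still need to verify:

using UnityEngine;
using Valve.VR;

public class HeightOverlay : MonoBehaviour
{
    private ulong handle = OpenVR.k_ulOverlayHandleInvalid;

    void Start()
    {
        // Requires an initialized OpenVR runtime (e.g. via the SteamVR plugin).
        EVROverlayError err = OpenVR.Overlay.CreateOverlay(
            "example.height.adjuster", // placeholder overlay key
            "Height Adjuster",         // placeholder overlay name
            ref handle);

        if (err == EVROverlayError.None)
        {
            OpenVR.Overlay.SetOverlayWidthInMeters(handle, 0.3f);
            OpenVR.Overlay.ShowOverlay(handle);
        }
    }

    void OnDestroy()
    {
        if (handle != OpenVR.k_ulOverlayHandleInvalid)
            OpenVR.Overlay.DestroyOverlay(handle);
    }
}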
Good people of SO,
I'm currently working on a game in Unity3D using Xbox 360 controllers (wired or not).
I'm searching for some kind of "best practice" to achieve Windows and macOS support for the game.
STEP 1
My first approach was to create a full InputManager.asset mapped for 4 controllers based on:
http://wiki.unity3d.com/index.php/Xbox360Controller
http://wiki.etc.cmu.edu/unity3d/index.php/Joystick/Controller
(and some others ...)
and use Input.GetAxis().
ISSUE
The main issue I have is when you disconnect and reconnect a controller: the axis IDs change ... and it's veeeeery hard to re-assign the controller to the right player instance in the game.
The ONLY information that Unity provides me is Input.GetJoystickNames() to know at any moment how many controllers are connected ... but that's not enough information to know who is plugged in where ...
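About the best you can do with the built-in API is poll Input.GetJoystickNames() and diff the results to detect plugs and unplugs; a rough sketch:

using UnityEngine;

public class JoystickWatcher : MonoBehaviour
{
    private string[] lastNames = new string[0];

    void Update()
    {
        // An empty string in this array means a disconnected slot.
        string[] names = Input.GetJoystickNames();

        if (names.Length != lastNames.Length)
        {
            Debug.Log("Controller count changed: " + names.Length);
        }
        else
        {
            for (int i = 0; i < names.Length; i++)
                if (names[i] != lastNames[i])
                    Debug.Log("Slot " + (i + 1) + " changed to: '" + names[i] + "'");
        }

        lastNames = names;
    }
}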
STEP 2
Then I heard of the XInput DLL:
http://forum.unity3d.com/threads/37542-XInput-NET-full-support-for-Xbox-360-Controller-(Windows)
which would have solved everything regarding using the controller ...
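For reference, usage of that plugin on Windows looks roughly like this (a sketch assuming the XInputDotNetPure namespace from the linked thread; PlayerIndex.One through Four map to the four XInput slots, so each controller keeps a stable identity across replugs):

using UnityEngine;
using XInputDotNetPure; // from the XInput.NET plugin linked above

public class XInputProbe : MonoBehaviour
{
    void Update()
    {
        GamePadState state = GamePad.GetState(PlayerIndex.One);

        if (state.IsConnected && state.Buttons.A == ButtonState.Pressed)
        {
            // Left/right motor strengths in [0, 1].
            GamePad.SetVibration(PlayerIndex.One, 0.5f, 0.5f);
        }
    }
}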
ISSUE
... but sadly it doesn't work on macOS ...
Any solutions ?
Thanks :)
I need to merge 5 monitors in XNA (something like Eyefinity).
I have two graphics cards (HD 5450), which have DisplayPort connectors, of course,
and 5 flat monitors with a resolution of 1024x768 each.
I need to merge/group these monitors in XNA, because I want to go fullscreen over all 5 monitors
(fullscreen over multiple monitors).
I just need Visual Studio to detect one graphics device with a resolution of 5120x768.
How should I modify GraphicsDeviceManager / GraphicsAdapter to make this work?
I can't use Eyefinity, because I have two graphics cards, so I'm trying to do "my own Eyefinity" in XNA.
In my app, I have 5 models divided across 5 viewports, each offset by 1024px.
Or, how should I make it look like fullscreen? I don't want the border to be visible, and I want it centered on the screen. How do I center it?
Thanks for answers.
To be honest, this is going to be difficult, if not impossible, to do using XNA. And you'd have to get so far outside of what the XNA framework provides that there would be little benefit in the end to even using XNA at that point.
Here's a great thread on the App Hub forums talking about different ways of potentially hacking around the XNA framework to achieve multiple monitor fullscreen using XNA.
http://forums.create.msdn.com/forums/p/5562/571993.aspx
As you can see, no one really had any great suggestions, and by the time you were done you were basically programming at such a low level that you might as well be doing C++ and DirectX. Which is exactly what I would recommend to you.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb206364(v=vs.85).aspx
Using DirectX, you'll get a game/application running fullscreen on a multiple-monitor setup much faster and without having to hack your way into it.
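That said, if you do stay in XNA, a common workaround is to run a borderless window sized to span the desktop instead of true fullscreen; a rough sketch (it assumes Windows can already extend the desktop across all five monitors, and uses WinForms interop since XNA's GameWindow doesn't expose the border style directly):

using System.Windows.Forms;
using Microsoft.Xna.Framework;

public class SpannedGame : Game
{
    private readonly GraphicsDeviceManager graphics;

    public SpannedGame()
    {
        graphics = new GraphicsDeviceManager(this);
        // Windowed mode sized to cover all five 1024x768 monitors side by side.
        graphics.PreferredBackBufferWidth = 5120;
        graphics.PreferredBackBufferHeight = 768;
        graphics.IsFullScreen = false; // true fullscreen is limited to one adapter
    }

    protected override void Initialize()
    {
        base.Initialize();
        // Strip the border and pin the window to the top-left of the desktop.
        var form = (Form)Control.FromHandle(Window.Handle);
        form.FormBorderStyle = FormBorderStyle.None;
        form.Location = new System.Drawing.Point(0, 0);
    }
}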
I was wondering if it was possible to capture from both cameras simultaneously using the AVFoundation framework. Specifically, my question is whether both front and rear AVCaptureDevices can be active at the same time or not.
Currently I know that an AVCaptureSession instance can support only one input (and output). I create two AVCaptureSessions, attach the front camera device to one and the rear to the other, and then point the outputs of the sessions to different SampleBufferDelegate functions. What I see is that one delegate function is active for a few frames, then the other takes over. It seems as if AVFoundation somehow turns off a camera device if another one is being used. Can anyone confirm this or share their experience regarding this subject?
Thanks in advance
Answering my own question:
This is not possible.
Switching between front and rear camera to emulate similar behavior is too slow
(Takes about 500ms per switch according to my tests)
Source: https://devforums.apple.com/message/369748#369748
From iOS 13, it's possible. One can now simultaneously record the output from the front and back cameras into a single movie file by using a multi-camera capture session (AVCaptureMultiCamSession):
https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/avmulticampip_capturing_from_multiple_cameras