I am a beginner in Unity. I want to implement full-body gesture recognition in Unity with Vive controllers and trackers. I want to build a tool where users can also add new gestures themselves. I was looking for a plugin or something similar that helps with full-body recognition. Do you have any recommendations? I already searched the internet, but nothing really seemed to fit the trackers.
If there is nothing like that, I was thinking of calculating the position changes between all the trackers and inferring from those what kind of movement it was. Do you have a smart idea to help me out?
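To make the second idea more concrete, here is a minimal sketch of what I have in mind, assuming the trackers are exposed as Transforms (for example via SteamVR tracked-object components); all class names, fields, and thresholds are placeholders:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Naive template matcher: record tracker positions relative to a root tracker
// (e.g., the hip) every frame, then compare a live window against saved templates.
public class TrackerGestureMatcher : MonoBehaviour
{
    public Transform rootTracker;          // e.g., hip tracker
    public Transform[] trackers;           // hands, feet, etc.
    public int framesPerGesture = 60;      // ~1 second at 60 fps
    public float matchThreshold = 0.15f;   // average metres of deviation allowed

    // A saved gesture is simply a list of frames, each frame holding one
    // root-relative position per tracker. Users could record and save these.
    public List<List<Vector3[]>> savedGestures = new List<List<Vector3[]>>();

    private readonly List<Vector3[]> liveWindow = new List<Vector3[]>();

    void Update()
    {
        // Capture the current frame relative to the root so the gesture is
        // independent of where the player stands in the room.
        var frame = new Vector3[trackers.Length];
        for (int i = 0; i < trackers.Length; i++)
            frame[i] = rootTracker.InverseTransformPoint(trackers[i].position);

        liveWindow.Add(frame);
        if (liveWindow.Count > framesPerGesture)
            liveWindow.RemoveAt(0);

        if (liveWindow.Count == framesPerGesture)
        {
            for (int g = 0; g < savedGestures.Count; g++)
            {
                if (AverageDistance(savedGestures[g], liveWindow) < matchThreshold)
                    Debug.Log($"Recognized gesture {g}");
            }
        }
    }

    // Frame-by-frame average distance; a real implementation would use something
    // like dynamic time warping to tolerate speed differences.
    private float AverageDistance(List<Vector3[]> a, List<Vector3[]> b)
    {
        int frames = Mathf.Min(a.Count, b.Count);
        float sum = 0f;
        int samples = 0;
        for (int f = 0; f < frames; f++)
        {
            for (int t = 0; t < trackers.Length; t++)
            {
                sum += Vector3.Distance(a[f][t], b[f][t]);
                samples++;
            }
        }
        return samples > 0 ? sum / samples : float.MaxValue;
    }
}
```

Recording a new user-defined gesture would then just mean saving one such window of frames.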
Basically, I am working on a mixed reality experience using the HoloLens 2 and Unity, where the player has several physical objects they need to interact with, as well as virtual objects. One of the physical objects is a gun controller that has an IMU to detect acceleration and orientation. My main challenge is this: how do I get the physical object's position in Unity, in order to accurately fire virtual projectiles at a virtual enemy?
My current idea is to have the player position the physical weapon inside a virtual bounding box at the start of the game. I can then track the position of the virtual box through collision with the player's hands when they pick up the physical controller. Does OnCollisionEnter, or a similar method, work with the player's hands? (see attached image)
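For clarity, this is roughly what I have in mind for the bounding box, assuming the tracked hand carries a collider with a "Hand" tag; the names here are only placeholders:

```csharp
using UnityEngine;

// Placeholder sketch: when the hand enters the virtual bounding box, assume the
// physical gun was just picked up and attach the virtual gun to the hand so
// projectiles are fired from roughly the right place.
public class GunPickupZone : MonoBehaviour
{
    public Transform virtualGun;   // virtual stand-in for the physical controller

    private void OnTriggerEnter(Collider other)
    {
        // The "Hand" tag is an assumption: on HoloLens 2 the tracked hand needs
        // a collider (and one side needs a Rigidbody) for trigger events to fire.
        if (other.CompareTag("Hand"))
        {
            virtualGun.SetParent(other.transform, worldPositionStays: false);
            Debug.Log("Gun assumed to be in hand; virtual gun attached.");
        }
    }
}
```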
I am also looking into the use of spatial awareness / image recognition / pose estimation to accomplish this task, as well as researching the use of a tracking base station to determine the object's position (similar to the HTC Vive / Oculus Rift).
Any suggestions, resources, and assistance are greatly appreciated. Thank you!
EDIT UPDATE 11/30/2020:
Hernando commented below suggesting QR codes; assume for this project we are not allowed to use QR codes, and we want orientation data that is as precise as possible. Thanks Hernando!
For locating the object, a QR code would definitely be my recommendation for finding it quickly with the HL2 device. I have seen the QR approach used in multiple venues for VR LBE experiences like the one described here; the QR code just sits on top of the device.
Otherwise, if the controller in question supports Bluetooth, you could possibly pair the device, and if it has location information, it could transmit where it is. Based on all of the above, this would be a custom solution and highly dependent on the controller's capabilities if QR codes are out of the equation. I have seen some controller solutions start the user experience with something like touching the floor to get an initial reference point, or always picking the gun up from a specific spot in the real world, as some location-based experiences do before starting.
Good luck with the project; this is just my advice from working with VR systems.
Are you allowed to attach multiple QR codes to the controller? If so, we recommend using QR code tracking to assist in locating your controller. If you prefer to use image recognition, object detection, or other technologies, you will need an Azure service or a third-party library; for more information, see the Computer Vision documentation.
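If you go the QR route, a minimal sketch of subscribing to detected codes with the Microsoft.MixedReality.QR package could look like this; resolving the code's pose is only indicated in a comment, since that part depends on the OpenXR/WinRT setup of your project:

```csharp
using Microsoft.MixedReality.QR;
using UnityEngine;

// Rough sketch: watch for QR codes attached to the physical controller.
public class ControllerQrLocator : MonoBehaviour
{
    private QRCodeWatcher watcher;

    private async void Start()
    {
        var access = await QRCodeWatcher.RequestAccessAsync();
        if (access != QRCodeWatcherAccessStatus.Allowed)
        {
            Debug.LogWarning("QR code access denied.");
            return;
        }

        watcher = new QRCodeWatcher();
        watcher.Added += (sender, args) =>
        {
            // Note: this callback can fire off Unity's main thread; marshal back
            // to the main thread before touching scene objects.
            Debug.Log($"QR code found: {args.Code.Data}");
            // To position a virtual gun on the physical controller, you would
            // resolve args.Code.SpatialGraphNodeId to a Unity pose (e.g., via the
            // spatial graph node support in the Mixed Reality OpenXR plugin) and
            // place/update the virtual object there each frame.
        };
        watcher.Start();
    }

    private void OnDestroy()
    {
        watcher?.Stop();
    }
}
```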
I'm new to AR development. I want to create an AR demo app, but I'm facing some problems.
Could anyone help me solve the problems below?
1. Is it possible to recognize the floor if I want to place a big 3D object (around 3 m x 1.5 m)?
2. How can I tap the screen to place only one object on the floor? After that, can I disable or enable plane detection (via buttons) while the 3D object I added still appears, so I can interact with it?
3. After adding a 3D object, how can I interact with it, e.g., rotate or scale it?
Could you share tutorials or other links for solving these problems?
Thank you very much.
You're in luck: there is a video that shows how to do almost everything exactly the way you want it. If you also want to read about the components that make this possible, here is the link to the official Vuforia documentation, where they go over each component and how it works.
Video link: https://www.youtube.com/watch?v=0O6VxnNRFyg
Vuforia link: https://library.vuforia.com/content/vuforia-library/en/features/overview.html
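If you end up using AR Foundation rather than Vuforia, the tap-to-place and plane-toggling parts of your question look roughly like this; it assumes an ARRaycastManager and ARPlaneManager on the AR session origin and a prefab you assign yourself, and is only a sketch, not a drop-in solution:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch: place a single prefab on a detected plane, then let a UI button
// toggle plane detection off while the placed object stays in the scene.
public class TapToPlace : MonoBehaviour
{
    public ARRaycastManager raycastManager;
    public ARPlaneManager planeManager;
    public GameObject objectPrefab;          // your 3 m x 1.5 m model

    private GameObject placedObject;
    private static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        if (placedObject != null || Input.touchCount == 0)
            return;

        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began)
            return;

        if (raycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon))
        {
            Pose hitPose = hits[0].pose;
            placedObject = Instantiate(objectPrefab, hitPose.position, hitPose.rotation);
        }
    }

    // Hook this to a UI button: stops detecting new planes and hides the
    // existing plane visuals, but the placed object keeps rendering.
    public void SetPlaneDetection(bool detectionEnabled)
    {
        planeManager.enabled = detectionEnabled;
        foreach (var plane in planeManager.trackables)
            plane.gameObject.SetActive(detectionEnabled);
    }
}
```

Rotation and scale (point 3) can then be layered on top with pinch/twist gestures or a manipulation script on the placed object.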
I'm trying to change the barrel distortion coefficients for the HTC Vive to create a distortion in the HMD. Is OpenVR the best way to do this?
The only thing I can suggest to you at this point is to search for OculusRiftEffect.
This is an old plugin for THREE.js that is now useless in normal use, because it needed to show the deformed view on your screen. In most applications you don't want that, but you might want to show it to the students. The example was hardcoded to the lenses of the Oculus Rift DK2 (or DK1 if you uncomment some stuff inside), but the optics don't differ that much and the effect should be even more visible.
It has been removed from the current version, so check out old THREE.js revisions or some stale demos on the internet and you'll find something. Search around three years back.
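In case it helps with the terminology: the coefficients you are changing usually parameterize the standard radial distortion polynomial, where a point at normalized radius $r$ from the lens center is remapped to

$$ r' = r\,\bigl(1 + k_1 r^2 + k_2 r^4\bigr) $$

and the sign and magnitude of $k_1$ and $k_2$ determine whether you get barrel or pincushion distortion and how strong it is.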
I'm building a game for the HTC Vive in Unity, using the VRTK plugin as the basis for the VR interactions.
I know that VRTK provides a way to send short pulses (vibration) to the controllers, but I'm looking for a way to implement much more precise haptic feedback, similar to what is used in the Bow and Arrow demo of Valve's "The Lab": the one where you can feel the bow stretching while pulling the arrow back.
I think I read somewhere (I could not find it again) that Valve implemented this as an audio signal that was played through the controllers as vibration.
Is there any public implementation of this approach, or should I implement this by myself?
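If I end up implementing it myself, my current plan is a coroutine that keeps firing short pulses and scales their strength and spacing with how far the bow is drawn. A rough sketch, assuming VRTK 3.x's VRTK_ControllerHaptics / VRTK_ControllerReference API (older versions expose haptics through different classes, and the draw amount here is a placeholder you would feed from the bow logic):

```csharp
using System.Collections;
using UnityEngine;
using VRTK;

// Rough sketch: while the bow is drawn, keep sending short pulses whose strength
// and spacing follow the draw amount, approximating a continuous "tension" feel.
public class BowTensionHaptics : MonoBehaviour
{
    public GameObject drawingHand;                // controller pulling the string
    [Range(0f, 1f)] public float drawAmount;      // set this from your bow logic

    private Coroutine pulseRoutine;

    public void StartTensionFeedback()
    {
        if (pulseRoutine == null)
            pulseRoutine = StartCoroutine(PulseLoop());
    }

    public void StopTensionFeedback()
    {
        if (pulseRoutine != null)
        {
            StopCoroutine(pulseRoutine);
            pulseRoutine = null;
        }
    }

    private IEnumerator PulseLoop()
    {
        var reference = VRTK_ControllerReference.GetControllerReference(drawingHand);
        while (true)
        {
            if (drawAmount > 0.01f)
            {
                // Stronger, more frequent pulses as the bow is drawn further.
                VRTK_ControllerHaptics.TriggerHapticPulse(reference, drawAmount);
                yield return new WaitForSeconds(Mathf.Lerp(0.1f, 0.01f, drawAmount));
            }
            else
            {
                yield return null;
            }
        }
    }
}
```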
Thank you
Is there any program that allows recording and exporting custom gestures?
I mean custom gestures for the Leap Motion, of course.
The pre-made gestures are not enough for me to make the app.
I tried this old system:
LeapTrainer
However, I have problems importing and exporting, and the exported data does not seem usable outside of LeapTrainer.
Update 1: I tried to find tools in the Unity Asset Store, but to no avail. Can anyone suggest some tools/SDKs? My main purpose is to use gestures for dynamic slashing (vertically/horizontally/diagonally).
Can anyone help me, please?
I started the same way as you did, but I ended up building my own gestures based on the API outputs.
It's not that hard; you just need to think it through a bit.
For example, when working with fingers, isExtended and the angle between them help a lot.
For the palm, you can use GetPosition and check where it is pointing.
So, if the palm is pointing at my face and the hand is open (based on the fingers), you can mimic the arm HMD menu that you see in the LM samples out there.
Or, if the index finger is extended and the thumb is raised, your hand draws a gun; then, if the thumb angle is < 10, make the gun shoot.
I truly recommend going that way; it will help expand your knowledge of the device API. Third-party tools might work, but you need to learn how to use them anyway, so you are better off spending that time learning the LM API.
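To make that concrete, here is a rough Unity-side sketch of both checks using the Leap C# types (Hand, Finger); the thresholds and the head reference are assumptions you would tune, and depending on your plugin version PalmPosition/PalmNormal/Direction may already be Vector3s, in which case you can drop the ToVector3() calls:

```csharp
using Leap;
using Leap.Unity;
using UnityEngine;

// Sketch of the two example gestures: an open palm facing the head (arm menu)
// and a "finger gun" where closing the thumb fires.
public class SimpleLeapGestures : MonoBehaviour
{
    public LeapProvider provider;   // e.g., the LeapServiceProvider in the scene
    public Transform head;          // e.g., the main camera

    void Update()
    {
        Frame frame = provider.CurrentFrame;
        foreach (Hand hand in frame.Hands)
        {
            // Arm menu: palm normal roughly pointing at the head and all fingers extended.
            Vector3 palmPos = hand.PalmPosition.ToVector3();
            Vector3 palmNormal = hand.PalmNormal.ToVector3();
            bool palmFacesHead = Vector3.Angle(palmNormal, (head.position - palmPos).normalized) < 30f;

            bool handOpen = true;
            Finger index = null, thumb = null;
            bool othersCurled = true;
            foreach (Finger finger in hand.Fingers)
            {
                handOpen &= finger.IsExtended;
                if (finger.Type == Finger.FingerType.TYPE_INDEX) index = finger;
                else if (finger.Type == Finger.FingerType.TYPE_THUMB) thumb = finger;
                else othersCurled &= !finger.IsExtended;
            }

            if (palmFacesHead && handOpen)
                Debug.Log("Show arm menu");

            // Finger gun: index extended, middle/ring/pinky curled; fire when the
            // thumb closes towards the index direction (the 10-degree threshold
            // mirrors the "< 10" rule above).
            if (index != null && thumb != null && index.IsExtended && othersCurled)
            {
                float thumbAngle = Vector3.Angle(thumb.Direction.ToVector3(), index.Direction.ToVector3());
                if (thumbAngle < 10f)
                    Debug.Log("Bang!");
            }
        }
    }
}
```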