How to hold a gun with a Leap Motion hand - unity3d

Newbie here. I'm developing a game where I need to pick up a gun and other objects, and I'm using Leap Motion hands. How can I pick up or connect the gun to the Leap Motion hand so that the gun (or other object) moves with the hand's motion?
P.S. I searched but failed to find any material on this across Stack Overflow.

Due to the physical limitations of the Leap Motion, this is almost certainly going to be impossible.
This comes down to the location of the Leap Motion's IR camera and what it can (and, more importantly, cannot) see. When your hand is in a fist, your fingers block the camera from detecting the position of most of your fingers, which makes any typical gun-holding pose impossible to track. This may change depending on where your sensor bar is mounted (which you didn't include in your question); I have limited experience with other mountings, but I can't think of any that would give the Leap Motion the necessary unobstructed view.
I worked on a project where we tried to use that same kind of pose plus a "trigger pull" motion to activate an effect inside a Unity application. Due to the location of the sensor bar (on the desk) this was virtually impossible, and we had to reconfigure for a horizontal hand position: the hand's location relative to the sensor was movement, a closed fist was "fire", and it would not reset and allow a second shot until the hand returned to an open-palm gesture.
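For what it's worth, here is a minimal sketch of that "closed fist fires, open palm re-arms" approach, assuming the Leap Motion Unity Core Assets are imported; the thresholds and FireWeapon() are placeholders you'd adapt to your own project.

using Leap;
using Leap.Unity;
using UnityEngine;

// Sketch of the "closed fist fires, open palm re-arms" gesture described above.
// Assumes a LeapProvider (e.g. LeapServiceProvider) exists in the scene.
public class FistFireGesture : MonoBehaviour
{
    public LeapProvider provider;       // drag your LeapServiceProvider here
    public float fistThreshold = 0.9f;  // GrabStrength near 1 means a closed fist
    public float openThreshold = 0.2f;  // GrabStrength near 0 means an open palm

    private bool armed = true;          // hand must open again before the next shot

    void Update()
    {
        Frame frame = provider.CurrentFrame;
        if (frame == null || frame.Hands.Count == 0) return;

        Hand hand = frame.Hands[0];

        if (armed && hand.GrabStrength > fistThreshold)
        {
            armed = false;
            FireWeapon();
        }
        else if (!armed && hand.GrabStrength < openThreshold)
        {
            armed = true;               // open palm re-arms the trigger
        }
    }

    void FireWeapon()
    {
        Debug.Log("Fire!");             // placeholder: swap in your own effect
    }
}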

Related

HoloLens/Unity shared experience: How to track a user's "world" position instead of Unity's position?

I have an AR game I'm developing for the HoloLens that involves rendering holograms according to the users' relative positions. It's a multiplayer shared experience where everyone in the same physical room connects to the same instance (a shared Unity scene) hosted via cloud or LAN, and the players who have joined can see holograms rendered at the other players' positions.
For example: Players A and B join an instance and are in the same room together. Player A can see a hologram above Player B tracking Player B's position (a Sims cursor, if you will). Once Player A gets closer to Player B, a couple more holographic panels open up displaying Player B's stats. These panels also track Player B's position and are always rendered with a slight offset relative to Player B's headset position. Player B sees the same on Player A, and vice versa.
That's fundamentally what my AR game does for the time being.
Problem:
The problem I'm trying to solve is tracking each user's position relative to the physical room itself, instead of relying on the coordinates Unity reports for Player A's and Player B's game objects.
My app works beautifully if I mark a physical position on the floor, plus a facing direction, that every player must assume when starting the Unity app. This forces the coordinate systems of all the players' Unity apps to share a matching origin point and initial heading in the real world. Only then am I able to render holograms relative to a user's position and have them correlate 1:1 between Unity space and the real physical space around the headset.
But what if I want Player A to start the app on one side of the room and Player B to start it on the other side? When I do this, the origin point of Player A's Unity world is at a different physical spot than Player B's, and the holograms end up rendering at A's or B's position with a tremendous offset.
I have some screenshots showing what I mean.
In this one, I have three HoloLenses: two on the floor, plus the one I'm wearing to take the screenshots.
There's a blue X on the floor (it's the sheet of paper; I realize you can't see it in the image) where I started the Unity app on all three HoloLenses, so the origin of the Unity world for all three is that specific physical location. As you can see, the blue cursor showing connected players tracks the headsets' locations beautifully. You can even see the headsets' locations relative to the screenshooter on the minimap.
The gimmick that makes the hologram tracking accurate here is that all three started in the same spot.
Now in this one, I introduced a red X. I restarted the Unity app on one of the headsets and used the red X as its starting spot. As you can see in this screenshot, the tracking is still precise, but it comes with a tremendous offset, because my relative origin point in Unity (the blue X) is different from the other headset's relative origin point (the red X).
Problem:
So this is the problem I'm trying to solve. I don't want all my users to have to initialize the app in the same physical spot, one after the other, just to make the holograms appear in the correct positions. The HoloLens does a scan of the whole room, right?
Is there not a way to synchronize these maps across all the connected HoloLenses so that they can share their absolute coordinates? Then I could use those as transform points in the Unity scene instead of having to track multiplayer game objects.
Here's a map from my headset that I used to get the screenshots from the same angle.
This is tricky with inside-out tracking, as everything is relative to the observer (as you've discovered). What you need is to identify a common, unique real-world location that your system will then treat as the 'common origin'. Either a QR code or a unique object that the system can detect and localise should suffice; then keep track of each user's (and each tracked object's) offset from that known origin within the virtual world.
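To make the "offset from a common origin" idea concrete, here's a minimal sketch. It assumes you already have a Transform (sharedOrigin below, a hypothetical name) aligned to the detected QR code or marker in each headset's own Unity world; positions expressed relative to that Transform mean the same thing on every device.

using UnityEngine;

// Sketch: express positions relative to a shared physical landmark
// (e.g. a detected QR code) so they agree across headsets.
public class SharedOriginSpace : MonoBehaviour
{
    public Transform sharedOrigin;   // aligned to the landmark on this device

    // World position on this headset -> landmark-relative position
    // (safe to send over the network).
    public Vector3 ToShared(Vector3 worldPosition)
    {
        return sharedOrigin.InverseTransformPoint(worldPosition);
    }

    // Landmark-relative position received from another headset ->
    // world position on this headset, ready for placing a hologram.
    public Vector3 FromShared(Vector3 sharedPosition)
    {
        return sharedOrigin.TransformPoint(sharedPosition);
    }
}

Rotations can be shared the same way by working with offsets from sharedOrigin.rotation.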
My previous answer was deleted as a link-only answer, so round #2 - here's the link again:
https://learn.microsoft.com/en-us/windows/mixed-reality/develop/unity/tutorials/mr-learning-sharing-05
And to avoid the same situation again, I'll add that whoever wants a synchronized multiplayer experience with HoloLens should read through the whole tutorial series. I'm not going to summarize how to do it here, since that would just mean copying and pasting the docs; just know that you need a spatial anchor that the other devices load into their scenes.

Unity ball friction either too much or not enough

So I am making a simple golf game, and when trying to replicate the ball's movement I have noticed that it behaves very oddly on slopes: it either never stops or stops too quickly, and when travelling down slopes it quickly reaches a very slow terminal velocity (i.e. it stops accelerating downhill). So it either doesn't decelerate enough going up a slope or accelerates too slowly going down one.
I have been messing about with the angular and static friction of both the ball and the surface to see if that changes anything, and I have also been changing the friction combine mode of the surface and the ball to see if that makes any difference; so far I haven't had any luck.
Is this a common issue with Unity? I haven't been able to find any other questions about it on here. If anyone could give me some advice on how to have my ball not roll forever but still accelerate when going down a slope, that would be great.
EDIT: The ball has a Rigidbody with continuous collision detection, and the course uses a mesh collider. Both, however, have physics materials attached.
The ball is currently just a basic sphere from Unity with a sphere collider; I haven't changed much on the Rigidbody yet other than the mass. When the ball is hit I apply AddForce to it. The slopes are an asset I purchased and are perfectly smooth.
Rolling object physics are indeed difficult in game engines.
As I mention in the comment, for questions like this it's necessary to know (a) what sort of collider and (b) what sort of object it is, since there are at least four major approaches to this problem.
But in general you usually have to add the "slowing down" behaviour manually, in a sense representing air resistance.
For a moment, set aside the collider choices and imagine in the abstract that you have a ball rolling along a flat plane. You've somehow started it moving at, say, 2 m/s.
There's really no reason at all that it will stop rolling or slow down. Why would it? There's no air resistance in PhysX, and as far as the engine's physics are concerned, what you "want" it to do is keep rolling.
Thus what you do is add a script that, essentially, "slows it down a little" every frame. In pseudocode, something like
velocity = 0.99 * velocity
Note, however, that alternatively, to "manually slow it down", you can simply add a force to it.
The trick is that you apply it in the direction opposite to the movement:
yourBalls.addForce( v.normalized * -1 * some small force )
(It's easy to think of that as basically "air resistance")
You usually also simply add a top speed; that way it won't get "infinitely fast" on downslopes:
if (v.magnitude > 3.0) v = v.normalized * 3.0
That's basically how you make objects roll around on hilly surfaces.
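Pulling the pieces above together, here's a minimal sketch of that approach, assuming the ball has a Rigidbody and a SphereCollider; all the numbers are placeholders to tune for your course.

using UnityEngine;

// Sketch of the "fake air resistance + top speed" approach described above.
// Attach to the ball; the values are placeholders, not tuned.
[RequireComponent(typeof(Rigidbody))]
public class BallRollDamper : MonoBehaviour
{
    public float rollingDrag = 0.4f;   // small opposing force, i.e. "air resistance"
    public float maxSpeed = 3f;        // clamp so downslopes can't run away
    public float stopSpeed = 0.05f;    // below this, just bring the ball to rest

    private Rigidbody rb;

    void Awake()
    {
        rb = GetComponent<Rigidbody>();
    }

    void FixedUpdate()
    {
        Vector3 v = rb.velocity;

        if (v.magnitude < stopSpeed)
        {
            rb.velocity = Vector3.zero;   // let it actually come to rest
            return;
        }

        // Gentle force opposite to the direction of travel.
        rb.AddForce(-v.normalized * rollingDrag, ForceMode.Force);

        // Cap the top speed so it never gets "infinitely fast" downhill.
        if (v.magnitude > maxSpeed)
        {
            rb.velocity = v.normalized * maxSpeed;
        }
    }
}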
Note that you basically should not fool with the friction settings in any way; they're really not relevant in most cases.
Unfortunately there is a vast amount of detail here, but I feel your question is really asking for the "basic principles" - and there they are!
Tip: this question may be better suited to the GameDev site, where "techniques" of game physics are Q&A'd.

Moving Leap Motion Hands Coordinate System Following Spawned iPhone Player Camera

I'm fixing a legacy project from two years ago. It uses a Windows Unity host with a Leap Motion device to capture hand movements, and an iPhone player (with a Cardboard headset) to control how the viewports move in the "game world".
Now I find that everything is only okay when the Leap Motion device stays still (e.g. pinned to my chest) and only the iPhone player moves with my head. When I wear both the Leap Motion device and the iPhone on my head, the hand model sways as my head moves.
I've concluded that the hand positions captured by the Leap Motion device are being interpreted as positions relative to the world coordinate system, when in fact they should be local positions relative to my headset (i.e. the iPhone player camera, which is spawned as a game object on my Windows host).
I've made a simplified scene to illustrate my situation. The hierarchy when the network is not connected looks like this:
The hierarchy when the Windows program is connected to itself as the host:
When the iPhone end is also connected:
I'm trying to make "Hands" rotate with "Camera(Clone)/Head", but it doesn't work. (In the following picture, "RotateWith" and "CameraFacing" are different attempts to make it move with "Camera(Clone)/Head".)
It sounds like the problem is caused by the camera and the Leap Motion having different latencies and running at different frame rates, which can be solved with temporal warping. This has already been implemented by Leap Motion and is done automatically if you use the LeapXRServiceProvider.
Attach the LeapXRServiceProvider component to your Main Camera and make sure Temporal Warping Mode is set to "Auto". This tells the Leap Motion code to compensate for the differences between the hand-tracking frame and the Unity frame.
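If you prefer to guarantee this from code rather than relying on the scene setup, here's a minimal sketch (assuming the Leap Unity Modules are imported; the Temporal Warping Mode itself is easiest to leave at its "Auto" default and verify in the Inspector, since how it's exposed to scripts varies between SDK versions):

using Leap.Unity;
using UnityEngine;

// Sketch: ensure the Main Camera has a LeapXRServiceProvider at runtime.
public class EnsureLeapXRProvider : MonoBehaviour
{
    void Awake()
    {
        Camera cam = Camera.main;
        if (cam != null && cam.GetComponent<LeapXRServiceProvider>() == null)
        {
            // Temporal Warping Mode is checked/adjusted in the Inspector.
            cam.gameObject.AddComponent<LeapXRServiceProvider>();
        }
    }
}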

Detecting palms on a touch screen

Hi everyone!
I'm doing a research project that involves detecting palms placed simultaneously on a multitouch screen.
I've done quite a bit of googling and found that there are a lot of libraries both for gesture recognition (for AS3, https://github.com/fljot/Gestouch for instance) and for computer vision. I'm working with JSTouchController (https://github.com/sebleedelisle/JSTouchController), but it only tracks 5 fingers at a time. So if I place one palm on the screen and the library finds all five fingers, it won't track a second palm being placed at all. It does work correctly from time to time, though.
So, the question is: are there any libraries that can track ten fingers simultaneously with acceptable quality on modern touch screens?
The number of touch points is usually restricted by the device or the OS. If you want to prototype something quickly, one possibility is to use a Leap Motion sensor: it can track ten fingers in front of it, although not via touch.

Gravity as frame of reference in accelerometer data in iOS

I'm working on an iPhone app for motorcyclists that will detect a crash after it has occurred. Currently we're in the data-acquisition phase, plotting graphs and looking at the data. What I need to log is the forward user acceleration and the tilt angle of the bike relative to the bike standing upright on the road. I can get the user acceleration, i.e. how hard the rider is accelerating in the direction of travel, from the square root of the sum of the squared x, y and z accelerometer values. But for the tilt angle I need a reference that is constant, so I thought: let's use the gravity vector. Now, I realize that the deviceMotion API has gravity and userAcceleration values; where do these values come from and what do they mean? If I take the square root of the squared x, y and z components of gravity, will that always give me my "up" direction? How can I use that to find the tilt angle of the bike relative to an upright bike on the road? Thanks.
Setting aside "why" do this...
You need a very low-pass filter. Once the phone is put wherever it rides on the bike, you'll have various accelerations from maneuvers, plus the acceleration from gravity ever-present in the background. That gives you an ongoing vector for "down", and you can then interpret the accelerometer data in that context... Forward acceleration would tip the bike in the opposite direction from braking, so I think you could sort out the forward direction in real time too.
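The filter itself is only a couple of lines; here's a minimal sketch (C# just for illustration - the idea is language-agnostic, and alpha is a made-up smoothing constant to tune). Note that Core Motion's deviceMotion already performs this split for you, exposing gravity and userAcceleration separately; a manual filter like this is mainly useful if you work from raw accelerometer samples.

// Sketch of the low-pass filter idea: the slowly varying part of the raw
// accelerometer signal is gravity, i.e. an ongoing estimate of "down".
public class GravityLowPassFilter
{
    private double gx, gy, gz;          // running gravity estimate
    private readonly double alpha;      // closer to 1 = smoother, slower to react

    public GravityLowPassFilter(double alpha = 0.95)
    {
        this.alpha = alpha;
    }

    // Feed in each raw accelerometer sample; returns the current "down" vector.
    public (double x, double y, double z) Update(double ax, double ay, double az)
    {
        gx = alpha * gx + (1 - alpha) * ax;
        gy = alpha * gy + (1 - alpha) * ay;
        gz = alpha * gz + (1 - alpha) * az;
        return (gx, gy, gz);
    }
}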
Very interesting idea.
Assuming that it's not a "joke question": you will need a reference attitude to compare against, i.e. the one captured when the user taps "start". Then you can use θ = acos(currentGravity.z / |referenceGravity|), with |referenceGravity| == 1 because Core Motion measures accelerations in g.
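In code that calculation looks roughly like this (C# for illustration; currentGravityZ is the z component of the measured gravity vector, in g, so |referenceGravity| == 1 as above):

using System;

// Tilt relative to the reference ("upright") attitude described above.
static double TiltRadians(double currentGravityZ)
{
    // Clamp so that small sensor noise can't push the value past ±1.
    double c = Math.Max(-1.0, Math.Min(1.0, currentGravityZ));
    return Math.Acos(c);
}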
But to be honest there are a couple of problems, for instance:
The device has to be in a fixed position when taking the reference frame; if you put it in a pocket and it shifts around even a little, your measurement is rubbish.
Hmm, the driver is dead but the device is alive? Chances are good that the iPhone won't survive either.
If the app goes to the background, Core Motion falls asleep and stops delivering values.
It would have to be an in-house app, because forget about getting approval for the App Store.
Or did we misunderstand you and it's just a game?
Since this is not a joke, I would like to address the mounting issue. How you interpret the data depends largely on how the iPhone is positioned, and some issues might not be apparent to those who don't actually ride motorcycles.
This matters particularly when it comes to going around curves and corners. In low-speed turns the motorcycle leans but the rider does not, or leans only slightly; in higher-speed turns both the rider and the motorcycle lean. This could present an issue if not addressed. I won't cover all scenarios, but...
For example, most modern textile motorcycle jackets have a cell-phone pocket just inside on the left. If the user puts their phone in this pocket, you could expect to see only "accelerating" and "braking" (~z) acceleration. In this scenario you would almost never see significant side-to-side (~x) acceleration, because the rider leans proportionally into the g-force of the turn. So while going around a curve you would expect to see an increase in the downward (y) reading from its usual 1 g state. Essentially, the rider's torso is indexed to gravity as far as (x) measurements go.
If the device were mounted to the bike you would have to adjust for what you would expect to see given that mounting point.
As far as the heuristics of the crash-detection algorithm go, that is very hard to define. Some crashes are like what you see on television: the bike flips and rips into a million pieces. That kind of crash should be extremely easy to detect: huh, 3 g measured... crash! But what about a simple drop (the bike lies on its side, oops, the rider gets up, picks up the bike and rides away)? Those might occur without any particularly remarkable g-forces (with the exception of about 1 g left or right on the x axis).
A couple more suggestions:
A sensitivity adjustment, maybe even with some sort of learn mode (the user puts the device in this mode and rides; the device then records/learns that user's average riding).
An "I've stopped" or similar button; maybe the rider didn't crash, maybe he/she just broke down. It does happen, and since you have some sort of ad-hoc network set up, it should be easy to spread the news.