Moving Leap Motion Hands Coordinate System Following Spawned iPhone Player Camera - unity3d

I'm fixing a legacy project from two years ago. It uses a Windows Unity host with a Leap Motion device to capture hand movements, and an iPhone player (with a Cardboard headset) to control how the viewport moves relative to the "game world".
I've found that everything looks correct only when the Leap Motion device stays still (e.g. pinned to my chest) and only the iPhone moves with my head. When I wear both the Leap Motion device and the iPhone on my head, the hand model sways as my head moves.
I've concluded that the hand positions captured by the Leap Motion device are being interpreted as positions in the world coordinate system, when they should really be local positions relative to my headset (i.e. the iPhone player camera, which is spawned as a game object on my Windows host).
I've made a simplified scene to illustrate the situation. The hierarchy when the network is not connected looks like this:
The hierarchy when the Windows program is connected to itself as the host:
When iPhone End is also connected:
I'm trying to make "Hands" rotate with "Camera(Clone)/Head", but it doesn't work. (In the following picture, "RotateWith" and "CameraFacing" are different attempts to make it follow "Camera(Clone)/Head".)
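For illustration, this is the gist of what those scripts try to do (only a sketch; headTransform is a placeholder I assign to the spawned "Camera(Clone)/Head" at runtime, and the names are mine):

```csharp
using UnityEngine;

// Sketch of the "RotateWith" idea: make the Leap hand rig follow the
// spawned player's head every frame, so hand poses are effectively
// local to the headset rather than to the world.
public class RotateWithHead : MonoBehaviour
{
    public Transform headTransform;   // the networked player's head (Camera(Clone)/Head)
    public Vector3 positionOffset;    // where the Leap rig sits relative to the head

    void LateUpdate()
    {
        if (headTransform == null) return;

        // Re-express the Leap rig's pose relative to the head.
        transform.rotation = headTransform.rotation;
        transform.position = headTransform.TransformPoint(positionOffset);
    }
}
```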

It sounds like the problem is caused by the camera and the Leap Motion having different latencies and operating at different frame-rates, which can be solved with temporal warping. This has already been implemented by Leap Motion and is done automatically if you use the Leap XR Service Provider.
Attach the LeapXRServiceProvider component to your Main Camera and ensure the Temporal Warping Mode is set to "Auto". This will tell the Leap Motion code to compensate for the differences between the hand tracking frame and the Unity frame.
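If you'd rather wire this up from script than through the inspector, a minimal sketch could look like the following. The class and enum names follow the Leap Motion Unity Modules, but the exact property names vary a little between module versions, so treat this as an outline and check it against the package you have installed:

```csharp
using UnityEngine;
using Leap.Unity;

// Sketch: attach the XR provider to the headset camera at runtime and
// enable temporal warping. Verify the property/enum names against your
// installed version of the Leap Motion Unity Modules.
public class LeapProviderSetup : MonoBehaviour
{
    void Awake()
    {
        var provider = Camera.main.gameObject.GetComponent<LeapXRServiceProvider>();
        if (provider == null)
        {
            provider = Camera.main.gameObject.AddComponent<LeapXRServiceProvider>();
        }

        // "Auto" lets the provider compensate for the latency difference
        // between the hand-tracking frame and the Unity render frame.
        provider.temporalWarpingMode = LeapXRServiceProvider.TemporalWarpingMode.Auto;
    }
}
```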

Related

HoloLens/Unity shared experience: How to track a user's "world" position instead of Unity's position?

I have here an AR game I'm developing for the HoloLens that involves rendering holograms according to the users' relative positions. It's a multiplayer shared experience where everyone in the same physical room connects to the same instance (a shared Unity scene) hosted via cloud or LAN, and the players who have joined can see holograms rendered at the other players' positions.
For example: Player A, and B join an instance, they're in the same room together. Player A can see a hologram above Player B tracking Player B's position (A Sims cursor if you will). Then once Player A gets closer to Player B, a couple more holographic panels can open up displaying the stats of Player B. These panels are also tracking Player B's position and are always rendered with a slight offset relative to Player B's headset position. Player B also sees the same on Player A and vice versa.
That's fundamentally what my AR game does for the time being.
Problem:
The problem I'm trying to solve is tracking each user's position absolutely, relative to the room itself, instead of using the coordinates Unity reports for Player A's and Player B's game objects.
My app works beautifully if I mark a physical position on the floor and a facing direction that all the players must assume when starting the Unity app. This forces the coordinate systems in all the players' Unity apps to share a matching origin point and initial heading in the real world. Only then am I able to render holograms relative to a user's position and have it correlate 1:1 between Unity space and the real physical space around the headset.
But what if I want Player A to start the app on one side of the room and Player B to start it on the other side? When I do this, the origin point of Player A's Unity world is at a different physical spot than Player B's, and holograms end up rendering A's or B's position at a huge offset.
I have some screenshots showing what I mean.
In this one, I have 3 HoloLenses. The two on the floor, plus the one I'm wearing to take screenshots.
There's a blue X on the floor (it's the sheet of paper; I realized you can't see it in the image) where I started my Unity app on all three HoloLenses. So the origin of the Unity world for all three is that specific physical location. As you can see, the blue cursor showing connected players tracks the headset's location beautifully. You can even see the headsets' locations relative to the screenshot-taker on the minimap.
The gimmick here to make the hologram tracking be accurate is that all three started in the same spot.
Now in this one, I introduced a red X. I restarted the Unity app on one of the headsets and used the red X as its starting spot. As you can see in this screenshot, the tracking is still precise, but it comes with a huge offset, because my relative origin point in Unity (the blue X) is different from that headset's relative origin point (the red X).
Problem:
So this is the problem I'm trying to solve. I don't want all my users to have to initialize the app in the same physical spot, one after the other, just to make the holograms appear in the correct positions. The HoloLens does a scan of the whole room, right?
Is there a way to synchronize these maps across all the connected HoloLenses so they can share their absolute coordinates? Then I could use those as a transform point in the Unity scene instead of having to track multiplayer game objects.
Here's a map from the headset I used to take the screenshots, from the same angle.
This is tricky with inside-out tracking, as everything is relative to the observer (as you've discovered). What you need is to identify a common, unique real-world location that your system then treats as a 'common origin'. Either a QR code or a unique object that the system can detect and localise should suffice; then keep track of your users' (and other tracked objects') offsets from that known origin within the virtual world.
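As a rough sketch of that idea (assuming sharedOrigin is the Transform of the detected QR code or common object on each device), you convert poses into origin-relative values before sending them, and convert them back on the receiving side:

```csharp
using UnityEngine;

// Sketch of expressing a player's pose relative to a shared physical
// origin (e.g. a detected QR code) so every device agrees on the frame.
// "sharedOrigin" is assumed to be the Transform of that detected marker.
public static class SharedOriginUtil
{
    // Convert a world-space pose on this device into origin-relative
    // values that are safe to send over the network.
    public static void ToOriginSpace(Transform sharedOrigin, Transform player,
                                     out Vector3 localPos, out Quaternion localRot)
    {
        localPos = sharedOrigin.InverseTransformPoint(player.position);
        localRot = Quaternion.Inverse(sharedOrigin.rotation) * player.rotation;
    }

    // On the receiving device, rebuild the world-space pose from the
    // origin-relative values using that device's own sharedOrigin.
    public static void FromOriginSpace(Transform sharedOrigin,
                                       Vector3 localPos, Quaternion localRot,
                                       out Vector3 worldPos, out Quaternion worldRot)
    {
        worldPos = sharedOrigin.TransformPoint(localPos);
        worldRot = sharedOrigin.rotation * localRot;
    }
}
```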
My answer was deleted because reasons, so round #2. Something about link-only answers.
So, here's the link again.
https://learn.microsoft.com/en-us/windows/mixed-reality/develop/unity/tutorials/mr-learning-sharing-05
And to avoid the last situation, I'm going to add that whoever wants a synchronized multiplayer experience with HoloLens should read through the whole tutorial series. I'm not going to provide a summary of how to do this without copying and pasting the docs. Just know that you need a spatial anchor that the others load into their scene.

How to hold gun with leap motion hand

Newbie here. I'm developing a game where I need to pick up a gun and other objects, and I'm using Leap Motion hands. How do I pick up or attach the gun to the Leap Motion hand so that the gun (or another object) moves with the hand's motion?
P.S. I searched but failed to find any material about this on Stack Overflow.
Due to the limitations of the Leap Motion imposed by physics, this is almost certainly going to be impossible.
This is due to the location of the Leap Motion's IR camera and what it can (and, more importantly, cannot) see. When your hand is in a fist, your fingers block the camera from detecting the position of most of your fingers, making any typical gun-holding position impossible to detect. Note that this may change based on the location of your sensor bar (which you did not include in your question). I have limited experience with other mountings, but I can't think of any mounting where the Leap Motion would have the necessary unobstructed view.
I worked on a project where I tried to use that same kind of pose with a "trigger pull" motion to activate an effect inside the Unity application. However, due to the location of the sensor bar (on the desk), this was virtually impossible, and we had to reconfigure for a horizontal hand position: position relative to the sensor was movement, a closed fist was "fire", and it would not reset and allow a second shot until the hand returned to an open-palm gesture.
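For what it's worth, the "fist = fire, open palm = reset" part can be approximated with the per-hand grab strength the Leap API exposes. A rough sketch (the thresholds are arbitrary, and the exact API surface depends on which Leap Unity assets you use):

```csharp
using Leap;
using Leap.Unity;
using UnityEngine;

// Sketch of the fist-to-fire / open-palm-to-reset gesture described above.
// Assumes a LeapProvider in the scene; grab-strength thresholds are arbitrary.
public class FistFire : MonoBehaviour
{
    public LeapProvider provider;
    private bool canFire = true;

    void Update()
    {
        Frame frame = provider.CurrentFrame;
        foreach (Hand hand in frame.Hands)
        {
            if (canFire && hand.GrabStrength > 0.9f)        // closed fist fires
            {
                canFire = false;
                Fire();
            }
            else if (!canFire && hand.GrabStrength < 0.1f)  // open palm re-arms
            {
                canFire = true;
            }
        }
    }

    void Fire()
    {
        Debug.Log("Fire!");
    }
}
```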

unity3d - how to control the movement of the main Camera in Unity3d

I am trying to make a mobile application that contains an AR (Augmented Reality) mode using Unity3D. I have connected my mobile device to my Unity3D program, and the camera works fine. But when I move the mobile device, the main camera inside the Unity program does not move along the same orbit that the mobile device moves. Does anyone know how to change or control the orbit of the main camera in Unity3D?
This could be happening for a number of reasons, for example non-centered pivots or mismatched coordinate systems.
Could you please specify which AR system you are using? As a side note, at work we recently had a project involving Unity3D and Metaio, and it was a nightmare to bend the system to do what we needed, especially when we had to do a lot of object positioning based on the local coordinate system.
When you refer to the orbit of the camera, I imagine the pivot of the camera is somehow offset and the camera is rotating around that offset. Or maybe the camera is a child of the actual game object that is controlled by the AR system, in which case that parent node acts as a pivot for the camera (see the sketch after the image link below).
In the picture below you can see that the camera sits away from the center point, and when it rotates it does so around that center point; in other words, the camera always tries to look at that center point, which gives that feeling of "orbiting" when it moves.
Here's the link to the image (I can't post pictures yet on this forum -.- )
http://i.stack.imgur.com/fIcY2.png
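If the camera does turn out to be a child of an AR-driven parent, one quick experiment is to zero out the camera's local offset so it sits exactly on that pivot. A minimal sketch, purely illustrative:

```csharp
using UnityEngine;

// Sketch: if the camera is parented to the object the AR system drives,
// any local offset makes it "orbit" that parent. Zeroing the local pose
// keeps the camera exactly on the tracked pivot.
public class SnapCameraToPivot : MonoBehaviour
{
    void LateUpdate()
    {
        transform.localPosition = Vector3.zero;
        transform.localRotation = Quaternion.identity;
    }
}
```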

No hands detection in Unity

I have downloaded the core assets of Leap Motion from the official website. What I'm trying to do is see my hands in the Oculus Rift. There are some predefined scenes already included in the core assets, for example 500Blocks. However, when I load this scene I just get a scene with blocks, and the hands are not detected. I'm pretty sure that the Oculus Rift and Leap Motion are turned on. You can see what I get in the picture.
What I want is simply to have my hands detected and to be able to interact with the cubes. How can I do this?
I have Leap Motion 2.2.7, Oculus Rift 2, and Unity 5.1.1. I built the scene and launched the version with directToRift.
Is your Leap Motion plugged into the Oculus Rift (won't work) or directly into your machine (more likely to work)?
Is your Leap Motion working at all? For example, try the Visualizer while the Oculus Rift is running.
Do you have "Allow Images" enabled in the Leap Motion Control Panel (accessible through the taskbar icon)? The white background suggests that passthrough is turned off. 500 Blocks uses our Image Hands assets, so passthrough is needed to see your hands.

Is there any dev who has written iPhone WiFi/Bluetooth multiplayer before?

Is there any dev who has written iPhone WiFi/Bluetooth multiplayer before?
Recently I've been trying to add Bluetooth multiplayer to my latest game, Doodle Kart. But I found that there is a huge amount of data that needs to be shared between the two devices:
-your car's position and direction
-your car's status (normal, hit by a bullet, falling into a hole, ...)
-the CPU cars' positions, directions, and status
-item positions and status (pencil, bullet, ...)
I'm thinking about having one device calculate everything, while the other device just waits and receives the data to display on the screen. Does that make sense?
Hey, I should ask the most important question first: do you think it's even possible to make Bluetooth multiplayer work for my game? There's just so much data that needs to be shared between the devices.
Usually, multiplayer games just share "events", like:
Player begins to turn left/right.
Player begins to accelerate.
Player shoots from x/y/z to direction x/y/z.
Item spawns at x/y/z.
Player acquires item.
The other peers then just simulate the rest themselves, as if everything were happening locally.
This reduces the amount of data that needs to be transmitted, but it requires periodic "full updates" that re-sync the game state (e.g. every 10 seconds).
In short:
Transfer actions, not data.
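For illustration, "transfer actions, not data" can be as simple as a small event message type that both devices understand (the names here are made up for the sketch; serialize it however your networking layer expects):

```csharp
using System;

// Sketch of an "action" message: instead of streaming every object's
// position, each device sends small events and simulates the rest locally.
public enum GameEventType : byte
{
    TurnLeft,
    TurnRight,
    Accelerate,
    Shoot,
    ItemSpawn,
    ItemPickup,
    FullStateSync   // periodic correction, e.g. every 10 seconds
}

[Serializable]
public struct GameEvent
{
    public GameEventType Type;
    public int PlayerId;     // who the event is about
    public float X, Y, Z;    // optional position payload (spawn point, shot origin, ...)
    public double Timestamp; // lets receivers order and interpolate events
}
```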