Where is the Center of an HTC Vive Controller? - virtual-reality

OpenVR (aka SteamVR) provides the position of the Vive controller. The question is: where exactly is that point located on the controller itself?
To be more specific, which point on the hardware does the following method call (from OpenVR) refer to?
virtual void GetDeviceToAbsoluteTrackingPose(
    ETrackingUniverseOrigin eOrigin,
    float fPredictedSecondsToPhotonsFromNow,
    VR_ARRAY_COUNT(unTrackedDevicePoseArrayCount) TrackedDevicePose_t *pTrackedDevicePoseArray,
    uint32_t unTrackedDevicePoseArrayCount ) = 0;

Open the 3D model (located at Steam\steamapps\common\SteamVR\resources\rendermodels\vr_controller_vive_1_5\vr_controller_vive_1_5.obj) in a modelling tool such as Blender. The controller is placed right at the model's origin, so you can measure the tracked point exactly. It's closest to the red dot.
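For reference, here is a minimal Unity sketch (assuming the SteamVR plugin's Valve.VR C# bindings are imported) that calls GetDeviceToAbsoluteTrackingPose and logs the translation part of each controller pose; that translation is the tracked point whose physical location the model's origin shows.

using UnityEngine;
using Valve.VR;

// Sketch only: assumes the SteamVR Unity plugin (Valve.VR bindings).
public class DumpControllerPose : MonoBehaviour
{
    readonly TrackedDevicePose_t[] poses =
        new TrackedDevicePose_t[OpenVR.k_unMaxTrackedDeviceCount];

    void Update()
    {
        CVRSystem system = OpenVR.System;
        if (system == null)
            return;

        system.GetDeviceToAbsoluteTrackingPose(
            ETrackingUniverseOrigin.TrackingUniverseStanding, 0f, poses);

        for (uint i = 0; i < OpenVR.k_unMaxTrackedDeviceCount; i++)
        {
            if (system.GetTrackedDeviceClass(i) != ETrackedDeviceClass.Controller
                || !poses[i].bPoseIsValid)
                continue;

            // The last column of the 3x4 device-to-tracking matrix is the
            // controller position in meters, i.e. the point in question.
            HmdMatrix34_t m = poses[i].mDeviceToAbsoluteTracking;
            Debug.Log("Controller " + i + ": (" + m.m3 + ", " + m.m7 + ", " + m.m11 + ")");
        }
    }
}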

Related

How to mimic HoloLens 2 hand tracking with Windows Mixed Reality controllers [MRTK2]?

The HoloLens 2 will feature hand tracking and the ability to reach out and poke UI elements. With Unity and the Mixed Reality Toolkit V2, the input for hand-tracked near interactions (i.e. poking) comes from the PokePointer class, which generates events for GameObjects that have BaseNearInteractionTouchable components.
My question is how can we get the same PokePointer events from virtual reality controllers such as the Windows Mixed Reality controllers? This would make it possible to prototype on the desktop using a VR headset and even directly use the same near interactions of the Mixed Reality Toolkit within VR applications.
Can the PokePointer component be attached to a hand GameObject that is a controller model? Or is there a better way to do this through the MRTK profiles system?
Actually, it's possible to add a poke pointer and a grab pointer to a VR device. In fact, adding basic functionality without visualization can be done without even writing any code!
Getting the existing grab and poke pointers to work with VR
Open your current pointer configuration profile by selecting the MixedRealityToolkit object in the scene view, going to the inspector window, then navigating to Input -> Pointers.
Under Pointer Options, set the controller type for the PokePointer and the Grab Pointer to include your VR controller type (in my case it was Windows Mixed Reality, though you may wish to use OpenVR).
The poke pointer is configured to follow the Index Finger Pose, which does not exist for VR. So you will need to open the PokePointer.prefab file and, in the inspector, under Poke Pointer -> Pose Action, set the value to "Pointer Pose".
Hit play. The grab pointer will be slightly below and to the right of the motion controller gizmo, and the poke pointer will appear right at the origin.
Bonus: Improving the grab, poke pointers by using custom pointer
You can greatly improve the pointers you have by using custom pointers instead of the default pointers. For example, you can:
Have the poke pointer be offset from the gizmo origin by setting the PokePointer's raycastOrigin field to a custom transform
Add visuals to actually show where the pointers are
I've created an example that demonstrates a custom grab and poke pointer which visualizes the grab and poke locations, and also offsets the poke position to be more convenient. You can download a unitypackage of the sample here, or just clone the mrtktips repository and look at the VRGrabPokePointers scene.
Note: to get the visuals to actually show up, use the following script (pointers currently disable all renderers on startup to avoid flickering).
using UnityEngine;

public class EnableRenderers : MonoBehaviour
{
    void Start()
    {
        foreach (var renderer in GetComponentsInChildren<Renderer>())
        {
            renderer.enabled = true;
        }
    }
}
You can see an example of a custom MRTK profile and pointer profile in that example, and also in the VRGrabPokePointersUnity scene.

Align HTC Vive Coordinate with OptiTrack System in Unity

I am working on aligning an HTC Vive controller (for example, the right controller) with a rigid-body marker tracked by OptiTrack. Since the two systems use different coordinate systems, how can I align them? I am trying to make the rigid-body marker move just like the right-hand controller of the HTC Vive.
Scenario:
I have a Unity environment which is viewed through an HTC Vive, and now I want to add a rigid-body marker tracked by OptiTrack that stays properly aligned as I move the marker around the environment.
Any suggestions would be very helpful.
Thank you.
I am currently facing the same problem and have no solution yet, but here are some ideas:
1st idea: match the centers
When you calibrate OptiTrack you set the center of your OptiTrack space.
Maybe you can track the OptiTrack center point with the Vive controller and then shift the Vive's coordinate space accordingly.
I don't know yet how to solve the rotation mismatch, though; maybe you have an idea?
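As a rough illustration of the shifting part (all names here are made up), you could parent every OptiTrack-driven object under a single GameObject and move that parent to wherever the Vive controller was when it rested on the OptiTrack calibration origin:

using UnityEngine;

// Hypothetical sketch for the "match the centers" idea: all OptiTrack-driven
// objects are children of this GameObject, so shifting it shifts the whole
// OptiTrack space into place. The offset is the position the Vive controller
// reported while it was held at the OptiTrack calibration origin.
public class OptiTrackOriginOffset : MonoBehaviour
{
    public Vector3 measuredOriginInViveSpace; // measured once, by hand

    void Start()
    {
        transform.position = measuredOriginInViveSpace;
    }
}

This only handles the translation; the rotation mismatch mentioned above would still have to be solved separately (for example by rotating the same parent).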
2nd idea: track some reference points with both systems
If you have the possibility to track four reference points in both systems, you should be able to define a transformation from one vector space to the other (a sketch of this follows below).
For now, I have not tried the ideas, but I will soon.
Have you found a solution yet?
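Here is a minimal, untested sketch of the second idea, assuming both systems track the same physical points and use the same handedness (OptiTrack data streamed into Unity is usually already converted to Unity's convention); all names are made up for illustration. It builds an orthonormal basis from three non-collinear reference points in each space and derives the rotation and translation between them:

using UnityEngine;

// Hypothetical helper: estimates the rigid transform (rotation + translation)
// that maps points measured in OptiTrack space onto the same physical points
// measured in Vive/Unity space, from three non-collinear reference points.
public static class SpaceAligner
{
    public static void Estimate(Vector3[] optiPts, Vector3[] vivePts,
                                out Quaternion rotation, out Vector3 translation)
    {
        // Build a basis from the first three reference points of each set.
        Quaternion optiBasis = BasisFromPoints(optiPts[0], optiPts[1], optiPts[2]);
        Quaternion viveBasis = BasisFromPoints(vivePts[0], vivePts[1], vivePts[2]);

        // Rotation that carries the OptiTrack basis onto the Vive basis.
        rotation = viveBasis * Quaternion.Inverse(optiBasis);

        // Translation that makes the first reference point line up after rotating.
        translation = vivePts[0] - rotation * optiPts[0];
    }

    static Quaternion BasisFromPoints(Vector3 a, Vector3 b, Vector3 c)
    {
        Vector3 forward = (b - a).normalized;
        Vector3 up = Vector3.Cross(forward, (c - a).normalized).normalized;
        return Quaternion.LookRotation(forward, up);
    }
}

Any further OptiTrack position p can then be mapped into Vive/Unity space as rotation * p + translation; a fourth reference point is useful as a sanity check of the estimated transform.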

Disabling head-tracking on a SteamVR camera in Unity?

I am trying to develop a HUD for my VR players using two cameras. One is a stationary camera that points at my HUD objects with depth = 1; the other is the main camera with depth = 0, which tracks the HMD and has the HUD overlaid onto it.
However, Unity, SteamVR, or OpenVR is forcing my camera to track the player's HMD. After an exhaustive search I was unable to find any way to stop this behavior, so I then tried to parent all of my HUD objects to my HUD camera and hoped for the best.
This results in undesirable jerky motion in all HUD objects:
Project Hierarchy:
[WorldHUD]
|__ Container (HUD)
|__ Camera (eye)
    |__ TextMesh (Status) <- is a child of the camera
[CameraRig]
|__ Camera (Head)
    |__ Camera (eye)
[SteamVR]
I believe I really need control over my camera's transform so I can prevent it from moving, or stop it from tracking the player's HMD.
I have tried updating the camera's position in Update() and LateUpdate(), but that does not have any effect. I also spent many hours modifying/debugging the SteamVR and OVR scripts, with no positive results; the cameras never stopped tracking.
I have tried using this suggested script on my camera's parent:
NegateTracking.cs:
using UnityEngine;
using UnityEngine.VR;

public class NegateTracking : MonoBehaviour
{
    void LateUpdate()
    {
        transform.position = -InputTracking.GetLocalPosition(VRNode.CenterEye);
        transform.rotation = Quaternion.Inverse(InputTracking.GetLocalRotation(VRNode.CenterEye));
    }
}
This kept the camera pointing in one direction, but it was still translating a bit. I also noticed that things seemed even choppier this way, so I feel I'm either using it wrong or it just isn't sufficient.
I'm now back to looking for ways to disable head tracking on certain cameras so that I can position them in my scene.
Does anyone know how SteamVR/OpenVR takes control of these cameras' transforms? Is there any way to disable or override this behavior?
Thank you.
So, I just spent the whole night developing a functional workaround and put together a project that uses a little post-processing magic to essentially "spoof" a stationary camera. My idea was to generate two RenderTextures from a custom stereo camera setup consisting of two separate cameras that are not tracked (which allows control over position, rotation, convergence plane, FOV and so on). I then pass those cameras' (left and right) RenderTextures to the left and right eyes of one of the "tracked" cameras within the OnRenderImage() function. It worked better than I expected, so I decided to put everything up on GitHub and write a short tutorial for anyone else needing to do this.
Stationary Stereo Camera (Download From GitHub)
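The core trick looks roughly like the sketch below. This is a simplified illustration, not the exact code from the repository; it assumes multi-pass stereo rendering (so OnRenderImage runs once per eye) and that the two untracked cameras render into the RenderTextures referenced here.

using UnityEngine;

// Simplified sketch of the "spoofed" stationary camera: attach to the tracked
// eye camera and blit the matching untracked camera's RenderTexture into each eye.
[RequireComponent(typeof(Camera))]
public class StationaryStereoBlit : MonoBehaviour
{
    public RenderTexture leftEyeTexture;   // rendered by the untracked left camera
    public RenderTexture rightEyeTexture;  // rendered by the untracked right camera

    Camera trackedCamera;

    void Awake()
    {
        trackedCamera = GetComponent<Camera>();
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Pick the texture that matches the eye currently being rendered.
        RenderTexture eyeTexture =
            trackedCamera.stereoActiveEye == Camera.MonoOrStereoscopicEye.Right
                ? rightEyeTexture
                : leftEyeTexture;

        Graphics.Blit(eyeTexture, destination);
    }
}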
Or you can change the Target Display and the Target Eye (to "None (Main Display)") on your Camera component. This can also be done in a script, as sketched below.
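In script form, that suggestion would look something like this minimal sketch (the class name is made up; only the Camera properties are real Unity API):

using UnityEngine;

// Sketch: make the HUD camera render to the main display as a plain 2D camera,
// so the VR system no longer drives its transform or stereo rendering.
[RequireComponent(typeof(Camera))]
public class HudCameraToMainDisplay : MonoBehaviour
{
    void Awake()
    {
        Camera cam = GetComponent<Camera>();
        cam.stereoTargetEye = StereoTargetEyeMask.None; // "None (Main Display)" in the inspector
        cam.targetDisplay = 0;                          // Display 1
    }
}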

Using PointCloud Prefab on Unity 3D

I tried to implement the "Measure It" app in Unity 3D. I started with the PointCloud example scene downloaded from Tango's website.
My problem is that when I look in first-person view, the point cloud doesn't fill the screen, and when I look in third-person view I can see points outside the Unity camera's FOV.
I don't see this problem in the Explorer app, but that one appears to be written in Java, so I think it's a Unity compatibility problem.
Does someone have the same problem, or a solution?
Unity 3D 5.1.1
Google Tango urquhart
Sorry for my poor English.
Regards.
EDIT:
It looks like the ExperimentalAugmentedReality scene uses the point cloud to place markers in the real world, and there the point cloud is right in front of the camera. I don't see any script differences between the two scenes, so I don't understand why it works there. Let me know if you have any idea.
I think it makes sense to divide your question into two parts.
Why the points are not filling the screen in the point cloud example.
To make the points fill the screen in the first-person view, the render camera's FOV needs to match the physical depth camera's FOV. In the point cloud example, I believe Tango is just using the default Unity camera FOV; that's why you see the points not filling the screen (the render camera).
In the third-person camera view, the frustum is just a visual representation of the device's movement. It doesn't indicate the FOV or any camera intrinsics of the device. For visualization purposes, Tango Explorer might specifically match the camera frustum size to the actual camera FOV, but that's not guaranteed to be 100% accurate.
Why the AR example works.
In the AR example, we must set the virtual render camera's FOV to match the physical camera's FOV, otherwise the AR view will be off. On the Tango hardware, the color camera and depth camera are the same camera sensor, so they share the same FOV. That's why the AR example works.
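For completeness, matching the render camera's vertical FOV to the physical camera boils down to one formula. The intrinsics values in this sketch are placeholders; on Tango they would come from the device's camera intrinsics rather than being hard-coded.

using UnityEngine;

// Sketch: set the Unity camera's vertical FOV from camera intrinsics
// (focal length in pixels and image height in pixels). Values are placeholders.
[RequireComponent(typeof(Camera))]
public class MatchPhysicalCameraFov : MonoBehaviour
{
    public float focalLengthYPixels = 1042.0f; // placeholder fy
    public float imageHeightPixels = 720.0f;   // placeholder image height

    void Start()
    {
        // Vertical FOV in degrees: 2 * atan(0.5 * height / fy)
        float fovDegrees = 2.0f * Mathf.Atan(0.5f * imageHeightPixels / focalLengthYPixels) * Mathf.Rad2Deg;
        GetComponent<Camera>().fieldOfView = fovDegrees;
    }
}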

Implementing ITangoDepth in Unity. Project Tango

I'm creating a program for Project Tango in Unity and I'm trying to make a class implementing ITangoDepth. Just for testing, I've made this class implement the method OnTangoDepthAvailable so it simply prints a text and I can see the thing working. I can't. -.-' This is what I have:
using UnityEngine;
using Tango;

// Class wrapper and field declarations shown here for completeness.
public class DepthTest : MonoBehaviour, ITangoDepth
{
    private TangoApplication m_tangoApplication;
    private string debug;

    public void Start()
    {
        m_tangoApplication = FindObjectOfType<TangoApplication>();
        m_tangoApplication.Register(this);
    }

    public void OnTangoDepthAvailable(TangoUnityDepth tangoDepth)
    {
        // Just confirm that the depth callback fires.
        debug = "Depth Available";
    }
}
I've enabled Depth in TangoManager too.
I've spent a long time studying the code in the Point Cloud example, but I don't see what else I have to set to enable the depth sensor. Can anyone help me make this work?
Thanks so much :)
EDIT: OK, I think I found the problem, but it created another one: in my app I'm using a material that shows what the camera sees on a plane in front of the camera. When I disable this plane, everything works properly. Is it possible that the camera and the depth sensor can't work at the same time?
You can only access the camera through the Tango API if you are using depth. That being said, the webcam texture in Unity won't work while depth is enabled. The augmented reality example uses both the depth and the color image together; you can take a look at that.