I am trying to get the magnetometer data from the HTC Vive using the OpenVR library. Any help or direction will be appreciated.
It would be ideal if I could access this in Unity.
Unfortunately, as of OpenVR 1.0.2, no such API is available.
Because of the hardware abstraction, all device-specific raw sensor data are handled inside the HMD driver; they neither need to be, nor can be, shared via the OpenVR API.
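For context, here is a minimal sketch of what the OpenVR C# bindings (as shipped with the SteamVR Unity plugin) do expose in Unity: the driver-fused pose and velocities per tracked device, not the raw IMU or magnetometer samples. It assumes the SteamVR plugin has already initialized OpenVR.

```csharp
using UnityEngine;
using Valve.VR; // OpenVR C# bindings, shipped with the SteamVR Unity plugin

// Sketch: read the driver-fused poses that OpenVR exposes. Raw sensor data
// (e.g. magnetometer) stays inside the HMD driver and is not reachable here.
public class FusedPoseReader : MonoBehaviour
{
    private readonly TrackedDevicePose_t[] poses =
        new TrackedDevicePose_t[OpenVR.k_unMaxTrackedDeviceCount];

    void Update()
    {
        var system = OpenVR.System;
        if (system == null) return; // OpenVR not initialized yet

        // 0f = no prediction; fills the array with fused poses for all devices.
        system.GetDeviceToAbsoluteTrackingPose(
            ETrackingUniverseOrigin.TrackingUniverseStanding, 0f, poses);

        var hmd = poses[OpenVR.k_unTrackedDeviceIndex_Hmd];
        if (hmd.bPoseIsValid)
        {
            // Angular velocity is the closest thing to raw motion data exposed here.
            Debug.Log("HMD angular velocity: " +
                hmd.vAngularVelocity.v0 + ", " +
                hmd.vAngularVelocity.v1 + ", " +
                hmd.vAngularVelocity.v2);
        }
    }
}
```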
I'd like the HoloLens to take in the scene through its camera and project an image over tracked images, and I can't seem to find a concrete way to do this online. I'd like to avoid using Vuforia etc. for this.
I'm currently using the AR Foundation Tracked Image Manager (https://docs.unity3d.com/Packages/com.unity.xr.arfoundation@2.1/manual/tracked-image-manager.html) to achieve the same functionality on mobile; however, it doesn't seem to work very well on HoloLens.
Any help would be much appreciated, thanks!
AR Foundation is a Unity tool, and its 2D image tracking feature is not supported on HoloLens platforms for now. You can refer to this link to learn more about feature support per platform: Platform Support
Currently, Microsoft does not provide an official library that supports image tracking for HoloLens. That sort of thing is possible with OpenCV: you can implement it yourself or refer to third-party libraries.
Besides, if you are using HoloLens 2 and using a QR code as the tracking image is an acceptable option for your project, I recommend using Microsoft.MixedReality.QR to detect a QR code in the environment and get the coordinate system for the QR code. For more information, see: QR code tracking
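For reference, a minimal sketch of how Microsoft.MixedReality.QR is typically wired up on HoloLens 2; converting the detected code's spatial graph node into a Unity pose depends on your setup and is omitted here.

```csharp
using Microsoft.MixedReality.QR;
using UnityEngine;

// Sketch: watch for QR codes with Microsoft.MixedReality.QR on HoloLens 2.
// Mapping a detected code's SpatialGraphNodeId to a Unity pose is omitted.
public class QRCodeDetector : MonoBehaviour
{
    private QRCodeWatcher watcher;

    async void Start()
    {
        if (!QRCodeWatcher.IsSupported()) return;

        // Ask for permission to use QR code detection.
        var access = await QRCodeWatcher.RequestAccessAsync();
        if (access != QRCodeWatcherAccessStatus.Allowed) return;

        watcher = new QRCodeWatcher();
        watcher.Added += (sender, args) =>
            Debug.Log("QR code found: " + args.Code.Data);
        watcher.Updated += (sender, args) =>
            Debug.Log("QR code updated: " + args.Code.Data);
        watcher.Start();
    }

    void OnDestroy()
    {
        if (watcher != null) watcher.Stop();
    }
}
```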
I work with ARCore in Unity and would like to know how I can synchronize the coordinate systems between two devices with the help of the Network Manager. Maybe somebody knows whether it is possible or not. Thanks.
Using Cloud Anchors is probably the most reliable way of coordinating multiple viewers in the same AR Scene.
The Cloud Anchors sample uses Unity networking to share information.
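For illustration, here is a rough sketch of the host/resolve flow from the (older) ARCore SDK for Unity that the Cloud Anchors sample is built on; the only thing your networking layer (e.g. the Network Manager) has to share is the cloud anchor ID string. Namespaces follow that SDK; the newer ARCore Extensions package exposes the same idea through ARAnchorManager instead.

```csharp
using GoogleARCore;
using GoogleARCore.CrossPlatform; // Cloud Anchors API of the older ARCore SDK for Unity
using UnityEngine;

// Sketch: device A hosts an anchor and sends the returned cloud ID over the
// network; device B resolves that ID, and both devices then place shared
// content relative to the resolved anchor.
public class CloudAnchorSync : MonoBehaviour
{
    // Device A: host a local anchor and obtain a cloud ID to share.
    public void HostAnchor(Anchor localAnchor)
    {
        XPSession.CreateCloudAnchor(localAnchor).ThenAction(result =>
        {
            if (result.Response == CloudServiceResponse.Success)
            {
                string cloudId = result.Anchor.CloudId;
                Debug.Log("Share this ID with the other device: " + cloudId);
                // Send cloudId through your networking layer here.
            }
        });
    }

    // Device B: resolve the ID received over the network into a local anchor.
    public void ResolveAnchor(string cloudId)
    {
        XPSession.ResolveCloudAnchor(cloudId).ThenAction(result =>
        {
            if (result.Response == CloudServiceResponse.Success)
            {
                // Parent shared content to the resolved anchor so both devices
                // agree on its pose in the real world.
                var shared = GameObject.CreatePrimitive(PrimitiveType.Cube);
                shared.transform.SetParent(result.Anchor.transform, false);
            }
        });
    }
}
```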
I have to build an app like this:
https://www.youtube.com/watch?v=vetDCkbQGM4
It should simply detect the cockpit of a car and show information, for example "this is the air conditioning" or "this is the switch for the radio". The targets will be pre-defined. Basically, the app should detect everything and show the relevant information.
Can I realize this with Vuforia? Which framework is suitable for this task?
I hope you guys can help me.
Cheers!
Since your targets are pre-defined, the simplest solution would be to use ArUco markers to get 3D world positions/rotations from your user's camera feed.
See the AR Marker Detector in the Unity Asset Store for an example. Vuforia uses 'VuMarks' that are more intricate versions of this.
If you can't add computer-readable labels to the real world for your project, then you are talking about real-time object recognition. That is a much harder problem and not yet easily solvable in Unity as far as I know. It would require something like Google's Cloud Vision API. There is a Unity Cloud Vision project on GitHub, but I have no idea how well it works or what its capabilities are.
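If you do go the object-recognition route, here is a rough sketch of calling the Cloud Vision REST API from Unity. The API key, the texture source, and the response parsing are placeholders; the request shape follows the Cloud Vision v1 images:annotate endpoint.

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: send a camera frame to the Cloud Vision REST API for label
// detection. YOUR_API_KEY and the Texture2D source are placeholders.
public class CloudVisionLabeler : MonoBehaviour
{
    private const string Url =
        "https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY";

    public IEnumerator Annotate(Texture2D frame)
    {
        string base64Image = System.Convert.ToBase64String(frame.EncodeToJPG());
        string body =
            "{\"requests\":[{\"image\":{\"content\":\"" + base64Image + "\"}," +
            "\"features\":[{\"type\":\"LABEL_DETECTION\",\"maxResults\":5}]}]}";

        using (var request = new UnityWebRequest(Url, "POST"))
        {
            request.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(body));
            request.downloadHandler = new DownloadHandlerBuffer();
            request.SetRequestHeader("Content-Type", "application/json");
            yield return request.SendWebRequest();

            // The response JSON contains labelAnnotations with description/score.
            Debug.Log(request.downloadHandler.text);
        }
    }
}
```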
Yes, it is possible; a first step is to search around, as there are different SDKs/frameworks and Unity Asset Store packages available.
You can use the free Vuforia AR Starter Kit from the Asset Store to get your logic up and running, or you can use the free AR Toolkit. There are various tutorials available that show how to implement these packages.
I want to load a 3D object from a URL onto a CloudRecoTarget. Does anyone know how to do it?
Can you specify your intention more precisely? I'm not sure I get it.
Where exactly is your problem, and what kind of technology are you using (app, server...)?
I guess you want to upload a 3D object that you are downloading from somewhere on the web to the Vuforia cloud database, in order to create a Vuforia marker that can be used in Unity for an AR application.
Usually you have a server that handles the communication between the app and the Vuforia API. Your server can simply add a target (your 3D object) to the cloud database, and after the target has been analysed you can download the marker from Vuforia. That's the way I've done it.
You can probably do this without a server, using only Unity and C#. Have a look at the Vuforia API to write your own Vuforia client in C#. Maybe there is a code snippet somewhere.
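As a sketch of the Unity-side half only: assuming the 3D object has been published as an AssetBundle at the URL, you can download it at runtime and parent it to the transform of the recognized target. How you obtain that transform from Vuforia's cloud reco callback depends on the SDK version, so it is simply passed in here.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: download a 3D model (published as an AssetBundle) from a URL at
// runtime and attach it to the transform of the recognized cloud target.
public class RuntimeModelLoader : MonoBehaviour
{
    public IEnumerator LoadModelOnto(Transform targetTransform, string bundleUrl, string assetName)
    {
        using (var request = UnityWebRequestAssetBundle.GetAssetBundle(bundleUrl))
        {
            yield return request.SendWebRequest();

            AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(request);
            if (bundle == null) yield break;

            // Instantiate the prefab and parent it to the tracked target.
            var prefab = bundle.LoadAsset<GameObject>(assetName);
            var instance = Instantiate(prefab, targetTransform);
            instance.transform.localPosition = Vector3.zero;

            bundle.Unload(false); // keep the instantiated objects alive
        }
    }
}
```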
Does anyone know anything about using Kinect input in Unity3D with the official SDK? I've been assigned a project to try to integrate the two, but my supervisor doesn't want me to use the open Kinect stuff. The last news from the Unity site was that the Kinect SDK requires .NET 4.0 while Unity3D only supports .NET 3.5.
Workarounds? Please point me toward resources if you know anything about it.
The OpenNI bindings for Unity are probably the best way to go. The NITE skeleton is more stable than the Microsoft Kinect SDK's, but it still requires calibration (PrimeSense mentioned that they'll have a calibration-free skeleton soon).
There are bindings from the Kinect SDK to OpenNI that make the Kinect SDK work like SensorKinect; this module also exposes the Kinect SDK's calibration-free skeleton as an OpenNI module:
https://www.assembla.com/code/kinect-mssdk-openni-bridge/git/nodes/
Because the Kinect SDK also provides ankles and wrists, and OpenNI already supported them (even though NITE didn't), all the OpenNI content, including Unity character rigs that include the ankles and wrists, just works, and without calibration. The Kinect SDK bindings for OpenNI also support using NITE's skeleton and hand trackers, with one caveat: NITE's gesture detection doesn't seem to work with the Kinect SDK yet. The workaround when using the Kinect SDK with NITE's HandGenerator is to use skeleton-free tracking to provide you with a hand point. Unfortunately, you lose the ability to track just the hands when your body isn't visible to the sensor.
Still, NITE's skeleton seems more stable and more responsive than the Kinect SDK's.
How much of the raw Kinect data do you need? For a constrained problem, like just getting limb articulation, have you thought about using an agnostic communication scheme such as a TcpClient? Just create a simple TCP server in .NET 4.0 that links against the Kinect SDK and pumps out packets with the info you need every 30 ms or so, then write a receiving client in Unity (see the sketch below). I had a similar problem with a different SDK; I haven't tried the Kinect, though, so maybe my suggestion is overkill.
If you want real-time depth/color data you might need something a bit faster, perhaps using Pipes?
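A rough sketch of the Unity-side receiver for that TCP bridge. The port and the one-line "x,y,z" packet format are made up for illustration; a separate .NET 4.0 console app linked against the Kinect SDK would write those packets to connected clients, and a real setup would probably read on a background thread rather than in Update.

```csharp
using System.IO;
using System.Net.Sockets;
using UnityEngine;

// Sketch: Unity-side client that reads joint positions streamed by a
// separate .NET 4.0 bridge process hooked up to the Kinect SDK.
// Packet format assumed here: one "x,y,z" line per update on port 9050.
public class KinectTcpReceiver : MonoBehaviour
{
    private TcpClient client;
    private StreamReader reader;

    void Start()
    {
        // Connect to the bridge server running alongside the Kinect SDK.
        client = new TcpClient("127.0.0.1", 9050);
        reader = new StreamReader(client.GetStream());
    }

    void Update()
    {
        // Simplified polling; a robust version would read on a background thread.
        if (client.Available > 0)
        {
            string line = reader.ReadLine();
            string[] parts = line.Split(',');
            var jointPosition = new Vector3(
                float.Parse(parts[0]), float.Parse(parts[1]), float.Parse(parts[2]));
            transform.position = jointPosition;
        }
    }

    void OnDestroy()
    {
        if (reader != null) reader.Dispose();
        if (client != null) client.Close();
    }
}
```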