I'm trying to obtain eye tracking data at a fixed frame rate (30 Hz, as provided by the eye tracker in HoloLens 2), not tied to Unity's Update() function, because the rendering frame rate of my application is not stable. It seems that if I use EyeGazeProvider, some gaze samples are missed, because it outputs gaze data not at a fixed rate but depending on the rendering frame rate, even though I check for new gaze data asynchronously (every 10 ms) using a timer. Using Windows.UI.Input.Spatial.SpatialPointerPose I could get gaze samples at a fixed rate in a deployed HL2 app, but I need to use Holographic Remoting due to the high rendering load of my scene.
However, I'm a bit confused about the use of UWP APIs with Holographic Remoting (in play mode or in a standalone app) for HoloLens 2. Is it possible to use the SpatialPointerPose class to obtain eye tracking data with Remoting? Or is it mandatory to use the MRTK EyeGazeProvider interface for a non-UWP app (editor or standalone), as is the case with Holographic Remoting?
The easiest way to get eye gaze data at a fixed rate from a non-UWP app (editor or standalone) is to use the Mixed Reality OpenXR Plugin. The code would be similar to FollowEyeGaze.cs, but instead of the Update() method we can use a timer to poll the data at a fixed rate, for example:
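A minimal sketch of that approach, assuming the Mixed Reality OpenXR Plugin is set up and eye tracking permission has been granted (the class name and the 33 ms interval are placeholders; Unity's XR input API is not documented as thread-safe, so keep the timer callback lightweight and buffer the samples rather than touching scene objects from it):

```csharp
using System.Collections.Generic;
using System.Threading;
using UnityEngine;
using UnityEngine.XR;

// Polls the eye gaze device at ~30 Hz with a timer instead of Update().
// Untested sketch: it uses the standard UnityEngine.XR input API that
// FollowEyeGaze.cs builds on; adapt it to your own project.
public class FixedRateEyeGaze : MonoBehaviour
{
    private Timer _timer;
    private InputDevice _eyeDevice;

    void Start()
    {
        var devices = new List<InputDevice>();
        InputDevices.GetDevicesWithCharacteristics(
            InputDeviceCharacteristics.EyeTracking, devices);
        if (devices.Count > 0)
            _eyeDevice = devices[0];

        // Fire every ~33 ms (about 30 Hz), independent of the rendering frame rate.
        _timer = new Timer(_ => SampleGaze(), null, 0, 33);
    }

    private void SampleGaze()
    {
        if (!_eyeDevice.isValid)
            return;

        if (_eyeDevice.TryGetFeatureValue(CommonUsages.isTracked, out bool tracked) && tracked &&
            _eyeDevice.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 origin) &&
            _eyeDevice.TryGetFeatureValue(CommonUsages.deviceRotation, out Quaternion rotation))
        {
            Vector3 direction = rotation * Vector3.forward;
            // Buffer or log the sample here; do not modify scene objects from the timer thread.
            Debug.Log($"Gaze origin {origin}, direction {direction}");
        }
    }

    void OnDestroy() => _timer?.Dispose();
}
```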
When using the Unity OpenXR Plugin, we can use the InputDevice.TryGetFeatureUsages method to get a list of every InputFeatureUsage a device provides. For the eye tracking device there is no time-related feature usage, so the OpenXR Plugin also can't provide the eye gaze sample timestamp.
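For reference, a quick way to dump the available usages yourself and confirm that nothing time-related shows up (untested sketch):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

// Lists every InputFeatureUsage reported by eye tracking devices.
public class ListEyeTrackingUsages : MonoBehaviour
{
    void Start()
    {
        var devices = new List<InputDevice>();
        InputDevices.GetDevicesWithCharacteristics(
            InputDeviceCharacteristics.EyeTracking, devices);

        foreach (var device in devices)
        {
            var usages = new List<InputFeatureUsage>();
            if (device.TryGetFeatureUsages(usages))
            {
                foreach (var usage in usages)
                    Debug.Log($"{device.name}: {usage.name} ({usage.type})");
            }
        }
    }
}
```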
We are using Vuforia for image tracking with HoloLens and the Unity engine. Vuforia works fine. We are also using Azure Spatial Anchors to fix the location of objects. However, the anchors do not seem to work with Vuforia. It appears that Vuforia captures camera events and does not pass them on to Azure Spatial Anchors, maybe?
Is there a way to get both technologies working at the same time?
The major issue is that Vuforia occupies the camera pipeline.
You could stop Vuforia, switch to ASA, and then switch back.
Or you could work from captured images plus timestamps with ASA.
Please read this page; it may help you get the camera frame:
https://library.vuforia.com/platform-support/working-camera-unity
You could then transfer the frames to a service hosted on a Linux server and use them with the Azure Spatial Anchors ROS wrapper: https://github.com/microsoft/azure_spatial_anchors_ros
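A rough sketch of the camera-frame access that page describes, based on the Vuforia Engine 10.x Unity API (class and enum names may differ in older SDK versions, and the upload step is only a placeholder):

```csharp
using UnityEngine;
using Vuforia;

// Registers a CPU-readable frame format with Vuforia and reads back the raw
// pixels each frame, which you could then send (with a timestamp) to your own
// service for use with Azure Spatial Anchors. Untested sketch.
public class VuforiaFrameGrabber : MonoBehaviour
{
    void Start()
    {
        VuforiaApplication.Instance.OnVuforiaStarted += OnVuforiaStarted;
    }

    void OnVuforiaStarted()
    {
        // Ask Vuforia to deliver frames in a format we can read on the CPU.
        VuforiaBehaviour.Instance.CameraDevice.SetFrameFormat(PixelFormat.RGB888, true);
    }

    void Update()
    {
        var image = VuforiaBehaviour.Instance.CameraDevice.GetCameraImage(PixelFormat.RGB888);
        if (image != null)
        {
            byte[] pixels = image.Pixels;   // raw frame data
            int width = image.Width;
            int height = image.Height;
            float timestamp = Time.realtimeSinceStartup;
            // TODO (placeholder): upload pixels + timestamp to your server here.
        }
    }
}
```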
I'm a bit lost looking through all the various Agora.io modules (and not sure what it means that only some of them have Unity-specific downloads).
I want to make a Unity app where two remote phones exchange data as follows:
Streaming voice in both directions
Streaming video in one direction (recorded from device camera)
Streaming a small amount of continuously-changing custom data in the other direction (specifically, a position + orientation in a virtual world; probably encoded as 7 floats)
The custom data needs to have low latency but does not need reliability (it's fine if some updates get lost; app only cares about the most recent update). Updates basically every frame.
Ideally I want to support both Android and iOS.
I started looking at Agora video (successfully built a test project) and it seems like it will cover the voice and video, but I'm struggling to find a good way to send the custom data (position + orientation). It's probably theoretically possible to encode it as a custom video feed but that sounds complex and inefficient. Is there some out-of-band signalling mechanism I could use to send some extra data alongside/instead of a video?
Agora real-time messaging sounds like it would probably work for this, but I can't seem to find any info about integrating it with Unity (either on Agora's web site or in a general web search). Can I roll this in somehow?
Agora interactive gaming could maybe also be relevant? The overview doesn't seem real clear about how it's different from regular Agora video. I suspect it's overkill but that might be fine if there isn't a large performance cost.
Could anyone point me in the right direction?
I would also consider alternatives to Agora if there's a better plugin for implementing this feature set in Unity.
Agora's Video SDK for Unity supports exporting projects to Android, iOS, macOS, and Windows (non-UWP).
Regarding your data streaming needs, Agora's RTM SDK is in the process of being ported to Unity. At the moment the best way to send data with the Agora SDK is CreateDataStream, which leverages Agora's ability to open a data stream that is sent along with the frames. Data stream messages are limited to 1 KB per frame and 30 KB/s, so I would be cautious about sending one every frame if you are running at a frame rate above 30 fps.
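A rough sketch of sending the 7-float pose over such a data stream, based on the 3.x agora_gaming_rtc Unity SDK (the app ID and channel name are placeholders, and the SendStreamMessage signature has changed across SDK versions, taking a string in some releases and a byte array in others, so check it against the version you build with):

```csharp
using System;
using UnityEngine;
using agora_gaming_rtc;

// Sends a position + orientation (7 floats = 28 bytes) over Agora's in-call data stream.
// Untested sketch against the 3.x Unity SDK; throttle calls to stay under the
// per-message and per-second limits mentioned above.
public class PoseSender : MonoBehaviour
{
    private IRtcEngine _engine;
    private int _streamId;

    void Start()
    {
        _engine = IRtcEngine.GetEngine("YOUR_APP_ID");   // placeholder app ID
        _engine.JoinChannel("demo-channel", "", 0);      // placeholder channel
    }

    public void SendPose(Vector3 position, Quaternion rotation)
    {
        if (_streamId == 0)
        {
            // Unreliable + unordered is fine for "latest state wins" pose updates.
            _streamId = _engine.CreateDataStream(false, false);
        }

        var floats = new[] { position.x, position.y, position.z,
                             rotation.x, rotation.y, rotation.z, rotation.w };
        var payload = new byte[floats.Length * sizeof(float)];
        Buffer.BlockCopy(floats, 0, payload, 0, payload.Length);

        _engine.SendStreamMessage(_streamId, payload);   // byte[] overload; older SDKs take a string
    }

    void OnDestroy() => IRtcEngine.Destroy();
}
```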
I'm a sound designer working on VR for mobile phone; prototyping on Galaxy S8.
We use Unity and FMOD, and therefore the GVR plugins (now Resonance Audio).
It is known that GVR bypasses the group buses in FMOD to give more precise control over the spatialisation of each source.
Now I have a problem: I'm definitely not a dev, so my coding skills are not that great.
I simply want to automate the volume of certain FMOD events and make them fade out over roughly 10-15 seconds at the end of a scene. So I feel I need automation either on the GVR Source Gain of each track, or on the master volume of each event.
I added a parameter in FMOD, and from code in Unity I want to tell FMOD to go smoothly from one value to another, fading out the volume.
Issue: the parameter appears in the Inspector in Unity, but I can't access or control it.
I can tick the box for that parameter, but it unticks itself as soon as I start the scene, and I don't know what code to write to control the value.
I have some devs on the team who will help, but we're in a bit of a rush, so I'm trying to find solutions myself.
TL;DR: how do I automate parameter values of the GVR plugins, or of an FMOD event's master bus (not the session master bus), from code in Unity, given that GVR / Resonance Audio bypasses the group buses?
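For what it's worth, driving such a parameter from Unity code could look roughly like the sketch below; it assumes the FMOD Studio 2.x API (setParameterByName), a StudioEventEmitter on the same GameObject, and a parameter named "Fade", all of which are placeholders for your own setup:

```csharp
using System.Collections;
using UnityEngine;

// Fades an FMOD event parameter from 1 to 0 over a given duration.
// The parameter in FMOD Studio would be wired to the GVR Source Gain
// (or the event's master volume) via an automation curve.
public class FmodParameterFade : MonoBehaviour
{
    public string parameterName = "Fade";   // placeholder parameter name
    public float fadeDuration = 12f;        // seconds

    private FMODUnity.StudioEventEmitter _emitter;

    void Awake()
    {
        _emitter = GetComponent<FMODUnity.StudioEventEmitter>();
    }

    public void StartFadeOut()
    {
        StartCoroutine(FadeOut());
    }

    private IEnumerator FadeOut()
    {
        float t = 0f;
        while (t < fadeDuration)
        {
            t += Time.deltaTime;
            float value = Mathf.Lerp(1f, 0f, t / fadeDuration);
            // In FMOD 1.10 this call is setParameterValue instead.
            _emitter.EventInstance.setParameterByName(parameterName, value);
            yield return null;
        }
    }
}
```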
If anyone has had the same kind of issue or knows how to sort it out, I'll be more than grateful.
regards,
Guillaume
I'm developing a VR app using the Google Cardboard SDK.
I want to move in the virtual environment when I walk in the real world, like this: https://www.youtube.com/watch?v=sZG5__Z9pzs&feature=youtu.be&t=48
Is it possible to make a VR application like that for Android, maybe using the accelerometer? How can I implement this in Unity?
I tried recording the accelerometer while walking with the smartphone; here are the results: https://www.youtube.com/watch?v=ltPwS7-3nOI (I think the accelerometer values are far too noisy and random).
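For reference, reading the raw accelerometer in Unity is roughly this (minimal sketch; Input.acceleration is reported in units of g):

```csharp
using UnityEngine;

// Logs the raw accelerometer vector each frame; the signal is noisy,
// which is exactly the problem discussed in the answer below.
public class AccelerometerLogger : MonoBehaviour
{
    void Update()
    {
        Vector3 a = Input.acceleration;
        Debug.Log($"{Time.time:F3}s  x={a.x:F3}  y={a.y:F3}  z={a.z:F3}");
    }
}
```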
Actually, it is not possible with a mobile device alone:
You're up against a fundamental limitation of the humble IMU (the primary motion sensor in a smartphone).
I won't go into detail, but basically you need an external reference frame when trying to extract positional data from acceleration data. This is the topic of a lot of research right now, and it's why VR headsets that track position like the Oculus Rift have external tracking cameras.
Unfortunately, what you're trying to do is impossible without using the camera on your phone to track visual features in the scene and use those as the external reference point, which is a hell of a task better suited to a lab full of computer vision experts.
Another possible but difficult approach:
It may be possible if you connect the device to the internet and track its position via satellite (GPS, Google Maps, or something like that), but that is a very hard thing to do.
We're using the HoloLens' locatable camera (in Unity) to perform a number of image recognition tasks. We'd like to utilize the mixed reality capture feature (MRC) available in the HoloLens developer portal so that we can demo our app, but MRC crashes because we're hogging the camera in Photo Mode.
Does anyone have a good workaround for this? We've had some ideas, but none of them are without large downsides.
Solution: Put your locatable camera in Video Mode so that you can share the video camera with MRC.
Downside: Video Mode only allows us to save the video to disk, but we need real-time access to the buffer in memory (the way Photo Mode gives us access) so that we can run our detection in real time.
Solution: Capture the video in a C++ plugin, and pass the frame bytes to Unity. This allows MRC to work as expected.
Downside: We lose the 'locatable' part of the 'locatable camera', since we no longer get access to the cameraSpaceToWorldSpace transformation matrix that Photo Mode provides (see the sketch after this list), which we use in our UI to place recognized objects in world space.
Sub-solution: recreate the locatable camera view's transformation matrix yourself.
Sub-downside: I don't have any insight into how Microsoft creates this transformation matrix. I imagine it involves some hardware complexities, such as accounting for lens distortions. If someone can guide me to how this matrix is created, that might be one solution.
Solution: Turn off object recognition while you create the MRC, then turn it back on when you're done recording
Downside: Our recognition system runs in real time, n times per second. There would be no way to capture the recognitions on video.
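For context, this is roughly what Photo Mode normally provides and what a custom capture plugin has to reconstruct: a minimal, untested sketch using Unity's PhotoCapture API (UnityEngine.Windows.WebCam; older Unity versions expose the same types under UnityEngine.XR.WSA.WebCam):

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using UnityEngine.Windows.WebCam;

// Takes one photo to memory and reads the matrices that make the camera "locatable".
public class LocatablePhotoExample : MonoBehaviour
{
    private PhotoCapture _capture;

    void Start()
    {
        PhotoCapture.CreateAsync(false, captureObject =>
        {
            _capture = captureObject;
            Resolution resolution = PhotoCapture.SupportedResolutions
                .OrderByDescending(r => r.width * r.height).First();
            var parameters = new CameraParameters
            {
                hologramOpacity = 0f,
                cameraResolutionWidth = resolution.width,
                cameraResolutionHeight = resolution.height,
                pixelFormat = CapturePixelFormat.BGRA32
            };
            _capture.StartPhotoModeAsync(parameters, _ => _capture.TakePhotoAsync(OnPhoto));
        });
    }

    private void OnPhoto(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame frame)
    {
        // These two matrices are the "locatable" part that a custom
        // Media Foundation pipeline has to recreate on its own.
        frame.TryGetCameraToWorldMatrix(out Matrix4x4 cameraToWorld);
        frame.TryGetProjectionMatrix(out Matrix4x4 projection);

        var imageData = new List<byte>();
        frame.CopyRawImageDataIntoBuffer(imageData);   // raw pixels for recognition

        _capture.StopPhotoModeAsync(_ => _capture.Dispose());
    }
}
```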
We ended up creating a plugin for Unity that uses Microsoft's Media Foundation to get access to the video camera frames. We open sourced it in case anyone else runs into this problem.
The plugin mimics Unity's VideoCapture class so that developers will be able to easily understand how to implement it.
Hopefully this is helpful to a few.