I want to manually place and scale a hologram in an application for the HoloLens 2. When I restart the application, I want the hologram to be placed and oriented in the same pose relative to the real world as in the previous session.
The documentation (https://learn.microsoft.com/en-us/windows/mixed-reality/develop/unity/spatial-anchors-in-unity?tabs=wlt) now suggests using World Locking Tools instead of World Anchors.
However, I cannot get persistence across sessions working on the same device (HoloLens 2, HL).
I tried the minimal example of World Locking Tools (https://microsoft.github.io/MixedReality-WorldLockingTools-Unity/DocGen/Documentation/HowTos/UsingWLT/JustWorldLock.html), but my hologram always appears at the same position relative to the HL's start pose.
What additional steps need to be performed to have World Locking Tools save and load the manipulated hologram's transform (locally on the device)?
I found the answer to the problem:
I activated the development visualizer of World Locking Tools (WLT), and it showed that no anchors were created on the HoloLens, while in the Unity simulation it seemed to work fine.
Changing the XR Plugin Provider from OpenXR to Windows Mixed Reality solved the problem as WLT is not compatible with OpenXR. See Changing XR Plugin Provider for how to change the plugin in the project settings.
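For completeness, here is a minimal sketch of the runtime check I added while debugging, based on my reading of the WorldLockingManager API (the AutoSave/AutoLoad flags are normally configured on the WorldLockingContext component in the inspector, so treat this as illustrative rather than required):

    using Microsoft.MixedReality.WorldLocking.Core;
    using UnityEngine;

    // Illustrative sketch: verify (and force) WLT persistence settings at startup.
    public class WltPersistenceCheck : MonoBehaviour
    {
        void Start()
        {
            var manager = WorldLockingManager.GetInstance();

            var settings = manager.Settings;
            settings.AutoSave = true;   // periodically save the frozen world state to local storage
            settings.AutoLoad = true;   // restore the saved state when the next session starts
            manager.Settings = settings;

            Debug.Log($"WLT Enabled={settings.Enabled}, AutoSave={settings.AutoSave}, AutoLoad={settings.AutoLoad}");
        }
    }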
Related
I need Unity to run with single-threaded OpenGL rendering on Windows. There are command-line options for both the player and the editor to do this (-force-opengl -force-gfx-direct). However, it appears they are just there for developer debugging, since they are only command-line options (no longer available in Player Settings), and I need this in production.
Question 1: Is there a way to bake single threaded opengl rendering into the windows player so I don't have to create a windows shortcut to launch it?
Question 2: Any idea if this option is going to disappear soon? Given that it is no longer on the Player Settings dialog for Windows, I'm nervous that it is being deprecated...
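For what it's worth, here is the small runtime check I use to confirm whether the flags actually took effect in a given build (the class name is just illustrative):

    using UnityEngine;

    // Logs the active graphics API and whether Unity is running a separate render thread.
    public class RenderThreadCheck : MonoBehaviour
    {
        void Start()
        {
            Debug.Log($"Graphics API: {SystemInfo.graphicsDeviceType}, " +
                      $"multithreaded rendering: {SystemInfo.graphicsMultiThreaded}");
        }
    }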
More background on why:
I'm writing a native plugin to Unity 2019.2 that does OpenGL calls AND calls back into Unity methods that are marked as [MonoPInvokeCallback] to perform things like loading resources.
My problem is that Unity does multithreaded rendering for OpenGL by default, so you need to perform your OpenGL calls on their rendering thread (how to do that is described here).
BUT: the Unity scripting APIs throw exceptions if you access them outside of the main Unity thread, and my code needs to do things like load a resource via a MonoPInvokeCallback AND make an OpenGL call in the same function. Hence the need to turn off multithreaded rendering.
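To make the constraint concrete, this is roughly the shape of my bridge code (the plugin name and entry points below are placeholders for this question, not my real exports):

    using System;
    using System.Runtime.InteropServices;
    using AOT;
    using UnityEngine;

    public class PluginBridge : MonoBehaviour
    {
        private delegate void LoadResourceDelegate(string name);

        // Placeholder plugin name and exports.
        [DllImport("MyNativePlugin")]
        private static extern IntPtr GetRenderEventFunc();

        [DllImport("MyNativePlugin")]
        private static extern void SetLoadResourceCallback(LoadResourceDelegate callback);

        private static LoadResourceDelegate s_keepAlive;

        // Must be a static method marked with MonoPInvokeCallback so AOT/IL2CPP can
        // marshal the native-to-managed call. The Unity API used here (Resources.Load)
        // is only legal on the main thread.
        [MonoPInvokeCallback(typeof(LoadResourceDelegate))]
        private static void LoadResource(string name)
        {
            var asset = Resources.Load(name);
            Debug.Log($"Loaded {name}: {asset != null}");
        }

        void Start()
        {
            s_keepAlive = LoadResource;            // keep the delegate alive for the native side
            SetLoadResourceCallback(s_keepAlive);
        }

        void Update()
        {
            // With multithreaded rendering on, GL work has to be funneled through
            // GL.IssuePluginEvent so it executes on Unity's render thread.
            GL.IssuePluginEvent(GetRenderEventFunc(), 1);
        }
    }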
A simple mobile Unity game project (or a simple project with just a cube and a camera) shows almost uniform CPU time with little variation when profiled.
But after adding the GoogleVR module (I tried both the latest and older versions), which splits the view into left and right eyes (GvrViewer), profiling the VR mobile app shows a lot of spikes or variation, caused by VR.WaitForGPU() and in turn Gfx.ProcessCommands(). Can anyone please tell me the reason behind these performance spikes when the GVR module is added?
I have tried multiple optimizations (e.g. static batching, disabling vsync, optimizing mesh data), but nothing seems to help. Is there any possible way to get rid of this issue?
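For reference, the vsync/frame-pacing part of what I tried looks like this (a minimal sketch):

    using UnityEngine;

    // Sketch of the frame-pacing settings tried; these had no visible effect on the spikes.
    public class VrPerfSettings : MonoBehaviour
    {
        void Awake()
        {
            QualitySettings.vSyncCount = 0;    // disable vsync
            Application.targetFrameRate = 60;  // cap CPU-side frame rate
        }
    }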
I recently finished developing a UWP app based on the SDK example CameraFrames. On the second screen of this example (and also in my app) you can choose to preview the frames captured by the Kinect v2.0, as shown below.
At the top of the image, to the right of "Grupo de fontes" (Source group), I am able to choose between the different cameras connected to my PC. The problem comes when I deploy my app to another PC: I am unable to see "Kinect V2 Video Sensor", which makes my app useless, since it needs to be portable between PCs. I have checked that the "Web Cam" checkbox is ticked under Package.appxmanifest -> Capabilities.
I am out of ideas, as I don't have a clue why my app works flawlessly on my computer but not on the rest. If you need any extra info to help me with my problem, please let me know.
It is worth noting that on the other PCs I've tried, frames can be read from the Kinect via Kinect Studio or MATLAB.
Inside "Camera privacy settings" my app has privileges to use the camera.
Thank you in advance.
Update due to Nico Zhu's comment
My OS is Windows 10 and my target version is "10.0.16299.0". I have downloaded and deployed CameraFrames on the second computer I'm working with, but it doesn't recognize the Kinect as an input source. Even though CameraFrames doesn't read anything, I can properly use the Kinect through Kinect Studio.
It seems that my problem comes down to my second computer not being able to use the Kinect from any deployed UWP app. What should I install to make sure I have everything needed to properly read from the Kinect?
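For what it's worth, a minimal check along these lines (using the same Windows.Media.Capture.Frames API the CameraFrames sample is built on; the class name is just illustrative) lists what the UWP sees on each machine:

    using Windows.Media.Capture.Frames;

    // Lists every frame source group the OS exposes to UWP apps.
    // On my PC "Kinect V2 Video Sensor" appears here; on the second PC it does not.
    public static class FrameSourceDiagnostics
    {
        public static async void ListSourceGroups()
        {
            var groups = await MediaFrameSourceGroup.FindAllAsync();
            foreach (var group in groups)
            {
                System.Diagnostics.Debug.WriteLine($"Source group: {group.DisplayName}");
                foreach (var info in group.SourceInfos)
                {
                    System.Diagnostics.Debug.WriteLine($"  {info.SourceKind} / {info.MediaStreamType}");
                }
            }
        }
    }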
After a lot of headaches, I just needed to upgrade my drivers.
I plan to start building an application for the HoloLens a month from now. So right now I am just doing preliminary design and a feasibility check. (For the record, I have built simple applications for the HoloLens using Unity and have also used the camera for some image recognition.)
My main concern is how to input data to my application. In a normal application you have GUI widgets such as spinners or sliders if you want to enter a numeric value.
How can I input numeric values to a Hololens application?
Since you've made a few applications for HoloLens before, I'm guessing you know about the MixedRealityToolkit that Microsoft offers. If you don't know about it yet and want to use it, here is a quick guide for how to set it up (which can also be found on the MixedRealityToolkit GitHub). This toolkit contains a lot of tools that can help you build the interactions for the HoloLens.
In this toolkit there are also a few examples on how to go about making sliders and other sorts of input.
If you look under Examples/UX you'll see a few scenes/prefabs/scripts that show how you could go about making such GUI widgets for HoloLens.
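For the numeric part specifically, once one of those example sliders is wired up, a small glue script along these lines can map its normalized value onto the range you need (the event hook and field names here are hypothetical; the exact slider API depends on the toolkit version):

    using UnityEngine;

    // Hypothetical glue: converts a 0..1 slider value into a numeric value and displays it.
    public class NumericSliderInput : MonoBehaviour
    {
        public float minValue = 0f;
        public float maxValue = 100f;
        public TextMesh display;   // world-space text placed next to the slider

        public float CurrentValue { get; private set; }

        // Hook this method up to the example slider's value-changed event.
        public void OnSliderUpdated(float normalizedValue)
        {
            CurrentValue = Mathf.Lerp(minValue, maxValue, normalizedValue);
            if (display != null)
            {
                display.text = CurrentValue.ToString("0.##");
            }
        }
    }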
I am writing a game for the Microsoft PixelSense written in Unity, communicating with the table through the SurfaceToTUIO Bridge and the unity3d-tuio Unity Plugin.
I am currently trying to get the game to play nicely with the Microsoft PixelSense launcher. I have been able to get my application to appear in the launcher by mimicking the Surface Bing Application - duplicating a link to an XML file in C:\ProgramData\Microsoft\Surface\v2.0\Programs and creating the corresponding XML in the proper format.
When I go into Surface Mode - either through a dedicated Surface User Account, or through the Surface Shell on the Administrator's Profile, the game appears correctly on the launcher bar with the custom icon I set in the XML. The game launches correctly from the bar, and I can play it without any errors for about two minutes.
At that point, the Launcher closes my game. With a little research, I learned that it's the application's responsibility to dismiss the Launcher.
Being that this is part of the Microsoft PixelSense SDK and not accessible to Unity, I've tried various methods to get around this:
I tried running the game in Single Application Mode. It turns out there is a different timeout that still waits for the SignalApplicationLoadComplete call.
I tried using the CriticalProcessMonitoring Disable and ApplicationProcessMonitoring Disable keys in the Registry.
I tried setting the LoadingScreenTimeout and SingleAppLoadingScreenTimeout Registry Keys to 0 - below their valid threshold.
What I want is to provide my users with the smoothest experience getting into and out of the Unity game. Currently we have to launch our game from the Windows desktop, which can frustrate users because Windows can't differentiate between a finger touching the screen and a palm hovering above it.
Does anyone have a workaround, a good understanding of how I could spoof a SignalApplicationLoadComplete call from Unity, or a suggestion of something else to try?
Thanks!
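For reference, in a native Surface 2.0 app the handshake the Launcher waits on is, as far as I can tell from the SDK, a single static call. The sketch below only shows what would need to be spoofed, since Unity cannot load that assembly:

    using Microsoft.Surface;

    // What a native (WPF/XNA) Surface 2.0 app calls once its content has finished loading.
    public static class LauncherHandshake
    {
        public static void NotifyLoaded()
        {
            ApplicationServices.SignalApplicationLoadComplete();
        }
    }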
Update: I've found some more things that didn't work:
I found the Microsoft.Surface DLL at C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.Surface\v4.0_2.0.0.0__31bf3856ad364e35. I imported it into my Unity project, but received a System.TypeLoadException, which appears to be because the DLL is compiled against .NET 4.0, which Unity does not currently support.
I have been unable to find any COM objects that would allow me to communicate with the launcher without needing the DLL.
I cannot use an XNA wrapper program, as System.Diagnostics.Process.Start doesn't work in Surface Mode according to this post.