Disabling Microsoft PixelSense (aka Surface) Surface Shell Timeouts - unity3d

I am writing a game for the Microsoft PixelSense written in Unity, communicating with the table through the SurfaceToTUIO Bridge and the unity3d-tuio Unity Plugin.
I am currently trying to get the game to play nicely with the Microsoft PixelSense launcher. I have been able to get my application to appear in the launcher by mimicking the Surface Bing Application - duplicating a link to an XML file in C:\ProgramData\Microsoft\Surface\v2.0\Programs and creating the corresponding XML in the proper format.
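For anyone attempting the same registration, the XML ends up along these lines; the element names below are approximated from the Bing application's file, so mirror whatever your copy of that file actually contains rather than trusting these tags verbatim:

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Sketch of a Surface 2.0 launcher registration file. Element names are
         approximated from the Bing application's XML and may differ slightly. -->
    <ss:ApplicationInfo xmlns:ss="http://schemas.microsoft.com/Surface/2007/ApplicationMetadata">
      <Application>
        <Title>My Unity Game</Title>
        <Description>Multi-touch game built in Unity.</Description>
        <ExecutableFile>C:\Games\MyUnityGame\MyUnityGame.exe</ExecutableFile>
        <Arguments></Arguments>
        <IconImageFile>C:\Games\MyUnityGame\Icon.png</IconImageFile>
      </Application>
    </ss:ApplicationInfo>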
When I go into Surface Mode - either through a dedicated Surface User Account, or through the Surface Shell on the Administrator's Profile, the game appears correctly on the launcher bar with the custom icon I set in the XML. The game launches correctly from the bar, and I can play it without any errors for about two minutes.
At that point, the launcher closes my game. With a little research, I learned that it's the application's responsibility to dismiss the launcher.
Being that this is part of the Microsoft PixelSense SDK and not accessible to Unity, I've tried various methods to get around this:
I tried running the game in Single Application Mode. It turns out there is a different timeout that still waits for the SignalApplicationLoadComplete call.
I tried using the CriticalProcessMonitoring Disable and ApplicationProcessMonitoring Disable keys in the Registry.
I tried setting the LoadingScreenTimeout and SingleAppLoadingScreenTimeout registry keys to 0, below their valid threshold (a rough sketch of these registry tweaks follows below).
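For reference, here is roughly how those values can be set programmatically. The key path below is an assumption (check regedit on the unit for where the Surface shell actually keeps them), and the value names simply follow the wording above:

    // Sketch only: writes the Surface shell monitoring/timeout values to the registry.
    // The key path is an assumption; verify the real location in regedit before using.
    // The value names follow the wording above and may need adjusting as well.
    using Microsoft.Win32;

    class SurfaceShellRegistrySketch
    {
        static void Main()
        {
            const string keyPath = @"SOFTWARE\Microsoft\Surface\v2.0\Shell"; // hypothetical path

            using (RegistryKey key = Registry.LocalMachine.CreateSubKey(keyPath))
            {
                key.SetValue("CriticalProcessMonitoringDisable", 1, RegistryValueKind.DWord);
                key.SetValue("ApplicationProcessMonitoringDisable", 1, RegistryValueKind.DWord);
                key.SetValue("LoadingScreenTimeout", 0, RegistryValueKind.DWord);
                key.SetValue("SingleAppLoadingScreenTimeout", 0, RegistryValueKind.DWord);
            }
        }
    }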
What I want is to provide my users with the smoothest experience getting into and out of the Unity game. Currently we have to launch our game from the Windows desktop, which can frustrate users because Windows can't differentiate between a finger touching the screen and a palm hovering above it.
Does anyone have a workaround, a good understanding of how I could spoof a SignalApplicationLoadComplete call from Unity, or a suggestion of something to try?
Thanks!
Update: I've found some more things that didn't work:
I found the Microsoft.Surface DLL at C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.Surface\v4.0_2.0.0.0__31bf3856ad364e35. I imported it into my Unity project, but received a System.TypeLoadException, which appears to be because the DLL is compiled against .NET 4.0, which Unity does not currently support.
I have been unable to find any COM objects that would allow me to communicate with the launcher without needing the DLL.
I cannot use an XNA wrapper program, as System.Diagnostics.Process.Start doesn't work in Surface Mode, according to this post.
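For reference, the signal the shell is waiting for is a single call into the Surface 2.0 SDK. From a plain .NET 4.0 project it would look roughly like this; this is only a sketch of the call itself, since hosting it next to the Unity player is the unsolved part, and I don't know whether the shell accepts the signal from a process other than the one it launched:

    // .NET 4.0 sketch; needs a reference to Microsoft.Surface.dll from the Surface 2.0 SDK.
    // This just shows the call the shell expects; it is not a verified workaround.
    using System;
    using Microsoft.Surface;

    class LoadCompleteSignaler
    {
        static void Main()
        {
            // Tells the Surface shell the application has finished loading, which is
            // what normally stops the launcher's loading-screen timeout.
            ApplicationServices.SignalApplicationLoadComplete();
            Console.WriteLine("SignalApplicationLoadComplete sent.");
        }
    }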

Related

How to force Unity Editor to use OpenGL single-threaded rendering

I need Unity to run with single-threaded OpenGL rendering on Windows. There are command line options for the player and the editor that do this (-force-opengl -force-gfx-direct). However, it appears they are only there for developer debugging, since they are exposed solely as command line options (not available in Player Settings anymore) and I need this in production.
Question 1: Is there a way to bake single-threaded OpenGL rendering into the Windows player so I don't have to create a Windows shortcut to launch it?
Question 2: Any idea if this option is going to disappear soon? Given that it is no longer on the Player Settings dialog for Windows, I'm nervous that it is being deprecated...
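As a side note for anyone testing these flags, a quick way to confirm whether a particular launch actually came up single-threaded on OpenGL is to log the graphics state at startup; this is just a sanity check, not a way to bake the setting in:

    // Sketch: logs which graphics API the player initialised with and whether Unity's
    // multithreaded rendering is active, so you can tell whether -force-opengl and
    // -force-gfx-direct were picked up for a given launch.
    using UnityEngine;

    public class GraphicsModeLogger : MonoBehaviour
    {
        void Start()
        {
            Debug.Log($"Graphics device: {SystemInfo.graphicsDeviceType} ({SystemInfo.graphicsDeviceVersion})");
            Debug.Log($"Multithreaded rendering: {SystemInfo.graphicsMultiThreaded}");
        }
    }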
More background on why:
I'm writing a native plugin to Unity 2019.2 that does OpenGL calls AND calls back into Unity methods that are marked as [MonoPInvokeCallback] to perform things like loading resources.
My problem is that Unity does multithreaded rendering for OpenGL by default, so you need to perform your OpenGL calls on their rendering thread (how to do that is described here).
BUT: The Unity scripting APIs throw exceptions if you access them outside of the "main unity thread" and my code needs to do things like load a resource using a MonoPInvokeCallback AND make an OpenGL call in one function. Thus the need to shut off multi-threaded rendering.
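To make the clash concrete, the render-thread pattern looks roughly like the sketch below. The native exports named here (MyNativePlugin, GetRenderEventFunc, SetLoadTextureCallback) are placeholders for whatever the plugin actually exposes; GL.IssuePluginEvent and [MonoPInvokeCallback] are the Unity-side pieces referred to above:

    // Sketch of the render-thread callback pattern. The DllImport targets are placeholders
    // for the real native plugin's exports; only the Unity-side APIs are taken as given.
    using System;
    using System.Runtime.InteropServices;
    using AOT;
    using UnityEngine;

    public class NativeGlBridge : MonoBehaviour
    {
        // Callback type the native code uses to ask Unity to load a resource.
        delegate void LoadTextureDelegate(string resourcePath);

        // Kept in a static field so the garbage collector never collects the delegate
        // while native code still holds a pointer to it.
        static LoadTextureDelegate s_loadTextureCallback;

        [DllImport("MyNativePlugin")]                       // placeholder plugin name
        static extern IntPtr GetRenderEventFunc();          // placeholder native export

        [DllImport("MyNativePlugin")]
        static extern void SetLoadTextureCallback(LoadTextureDelegate callback); // placeholder

        [MonoPInvokeCallback(typeof(LoadTextureDelegate))]
        static void LoadTexture(string resourcePath)
        {
            // Runs on whatever thread the native side invokes it from. Resources.Load is only
            // legal on the main Unity thread, which is exactly the clash described above when
            // the native call happens on the render thread.
            Texture tex = Resources.Load<Texture>(resourcePath);
            Debug.Log($"Loaded {resourcePath}: {tex != null}");
        }

        void Start()
        {
            s_loadTextureCallback = LoadTexture;
            SetLoadTextureCallback(s_loadTextureCallback);
        }

        void Update()
        {
            // Schedules the native render function. With multithreaded rendering it runs on
            // Unity's render thread; with -force-gfx-direct it runs on the main thread, which
            // is what lets the same function both touch OpenGL and call back into Unity.
            GL.IssuePluginEvent(GetRenderEventFunc(), 1);
        }
    }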

Kinect not being detected on other computers after deployment of UWP

I recently finished developing a UWP app based on the SDK example CameraFrames. On the second screen of this example (and also in my app) you can choose to preview the frames captured by the Kinect v2.0.
At the top of that screen, next to "Grupo de fontes" (source group), I can choose between the different cameras connected to my PC. The problem is that when I deploy my app to another PC, I am unable to see "Kinect V2 Video Sensor", which defeats the purpose, since the app needs to be portable between PCs. I have checked that the Webcam capability is ticked under Package.appxmanifest -> Capabilities.
I am out of ideas, as I don't have a clue why my app works flawlessly on my computer but not on the rest. If you need any extra info to help with my problem, please let me know.
It is worth noting that the other PCs I've tried can read frames from the Kinect via Kinect Studio or MATLAB.
Inside "Camera privacy settings" my app has privileges to use the camera.
Thank you in advance.
Update due to Nico Zhu's comment
My OS is Windows 10 and my target version is "10.0.16299.0". I have downloaded and deployed CameraFrames on the second computer I'm working with, but it doesn't recognize the Kinect as an input source. Even though CameraFrames doesn't read anything, I can properly use the Kinect through Kinect Studio.
It seems the problem is that my second computer can't use the Kinect from any deployed UWP. What should I install to make sure I have everything needed to properly read from the Kinect?
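One way to narrow this down is to enumerate the frame source groups the UWP media APIs can see on the second PC; if no Kinect-branded group shows up in this list, the problem sits below the app (drivers or the Kinect runtime) rather than in the app itself. A rough sketch using the same API family the CameraFrames sample is built on:

    // Diagnostic sketch: list every frame source group the UWP media APIs expose.
    // If "Kinect V2 Video Sensor" never appears here on the second PC, the app is fine
    // and the machine itself is missing something (drivers or runtime).
    using System.Threading.Tasks;
    using Windows.Media.Capture.Frames;

    public static class FrameSourceDiagnostics
    {
        public static async Task ListSourceGroupsAsync()
        {
            var groups = await MediaFrameSourceGroup.FindAllAsync();
            foreach (MediaFrameSourceGroup group in groups)
            {
                System.Diagnostics.Debug.WriteLine($"Source group: {group.DisplayName}");
                foreach (MediaFrameSourceInfo info in group.SourceInfos)
                {
                    System.Diagnostics.Debug.WriteLine($"  {info.SourceKind} / {info.MediaStreamType}");
                }
            }
        }
    }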
After a lot of headaches, it turned out I just needed to upgrade my drivers.

HoloLens Methods of Input and Output

I plan to start building an application for the HoloLens a month from now, so right now I am just doing preliminary design and a feasibility check. (For the record, I have built simple applications for the HoloLens using Unity and have also used the camera for some image recognition.)
My main concern is how to input data into my application. In a normal application you have GUI widgets such as spinners or sliders if you want to enter a numeric value.
How can I input numeric values in a HoloLens application?
Since you've made a few applications for HoloLens before, I'm guessing you know about the MixedRealityToolkit that Microsoft offers. If you don't know about it yet and want to use it, here is a quick guide for how to set it up (which can also be found on the MixedRealityToolkit GitHub). The toolkit contains a lot of tools that can help you build interactions for the HoloLens.
It also includes a few examples of how to go about making sliders and other sorts of input.
If you look under Examples/UX you'll see a few scenes, prefabs and scripts that show how you could go about building such GUI widgets for HoloLens.
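Once you have one of those example sliders wired up, turning its value into the number you need is a small script. The sketch below is deliberately generic, because the slider component and its value-changed event are named differently across toolkit versions; treat the OnSliderValueChanged hookup as an assumption and the mapping as the point:

    // Sketch: map a toolkit slider's normalized value (0..1) onto a numeric range.
    // Wire OnSliderValueChanged to whatever value-changed event your toolkit slider exposes
    // (names vary by MixedRealityToolkit version, so that wiring is an assumption here).
    using UnityEngine;
    using UnityEngine.UI;   // Unity's built-in Text, used only to display the value

    public class NumericSliderInput : MonoBehaviour
    {
        public float minValue = 0f;
        public float maxValue = 100f;
        public Text display;              // optional label showing the current number

        public float CurrentValue { get; private set; }

        public void OnSliderValueChanged(float normalizedValue)
        {
            CurrentValue = Mathf.Lerp(minValue, maxValue, Mathf.Clamp01(normalizedValue));
            if (display != null)
            {
                display.text = CurrentValue.ToString("0.##");
            }
        }
    }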

Couldn't find Crashlytics GameObject in Unity 5.6

We're struggling with adding Crashlytics to our build.
We've downloaded and added the Fabric UnityPackage, upgraded to the latest version, signed in to our Fabric account from within the Unity interface, and dragged the GameObject from the final step of the 'Prepare Fabric' modal into our first scene. Finally, we built the game for Android.
After this, playing the game in the editor prompts "Couldn't find Crashlytics GameObject" warnings, even though FabricInit and CrashlyticsInit are in the scene. The message pops up twice when running the game, presumably because this startup scene leads into two consecutive Unity scenes.
There doesn't seem to be any specific documentation on the website, and the Fabric website leads us to the download page, as opposed to the dashboard.
Mike from Fabric here. The goal of this warning is to let you and other developers know when the CrashlyticsInit game object hasn't been dragged into the game, but it is being flagged in other cases because of how we try to detect this.
We're working on a more graceful way to get this point across, but in the meantime, if you comment out line 65 of Fabric/Editor/CommonBuild/FabricCommonBuild.cs then this will go away.
I removed the [PostProcessScene(0)] attribute in Fabric/Editor/CommonBuild/FabricCommonBuild.cs, because I only initialize and place the Crashlytics GameObject in my first scene.
I don't need it to check every scene to confirm whether the object exists or needs to be initialized.
I'm still not sure whether this will have any side effects, though.
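For anyone unsure what that change looks like in the file, the shape of it is below. The method name and body are placeholders (the real implementation is Fabric's); the only point is that with the [PostProcessScene(0)] attribute commented out, Unity no longer runs the check for every scene during a build:

    // Sketch of the edit to Fabric/Editor/CommonBuild/FabricCommonBuild.cs.
    // Placeholder method; the real file's contents differ. Only the commented-out
    // attribute is the point of this example.
    using UnityEditor.Callbacks;

    public static class FabricCommonBuildSketch
    {
        // [PostProcessScene(0)]   // removed: Unity no longer invokes this per scene at build time
        public static void WarnIfCrashlyticsInitMissing()
        {
            // Fabric's real check searches the processed scene for the CrashlyticsInit object
            // and logs "Couldn't find Crashlytics GameObject" when it is absent.
        }
    }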

InkCanvas with Citrix Receiver Mobility

We currently deploy a CMS-type application through the Citrix environment, and I have added an electronic signature feature which I wrote using a WPF InkCanvas. This part of our application works well when using the pen mouse through a desktop version of the Receiver, but very poorly when accessing it through a tablet, iPad or Android. When you try to scribble your signature you have to hold your finger down to initiate the left-click hold, and this delay is longer on Android than on iPad. Does anyone have any experience with this? I want it to work just like Square's signing feature: just draw by touch.
You have a few options here. The simplest is to tell admins to set the Application Description to the following when publishing the application:
keywords:mobile
On the mobile receivers (iOS and Android) this does a few things; the useful one for you is that it puts them into a different input mode where the receiver does less gesture detection and pushes through events more directly.
You can get finer-grained control of the input mode using the Mobility SDK for Windows Apps. You can probably get away without the added complexity of the SDK and just use the extra keyword in the publishing step. But if you're interested, there are multiple language bindings for the SDK, including .NET. The main SDK link is here:
http://www.citrix.com/mobilitysdk/
The specific class you use to set the input mode with the .NET binding is here (see BeginSetTouchInputMode):
http://www.citrix.com/mobilitysdk/docs/cmp.net/index.html
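If you do go the SDK route, the call ends up looking roughly like the sketch below. Apart from BeginSetTouchInputMode itself, every name here is a placeholder made up for illustration; check the cmp.net documentation linked above for the real session type, mode values and completion pattern:

    // Hypothetical sketch only: the real types live in the cmp.net binding documented above
    // and will not match these names. Only BeginSetTouchInputMode comes from the SDK docs.
    using System;

    // Stand-in for the SDK's session object (placeholder; use the real binding type).
    public interface IMobilitySession
    {
        IAsyncResult BeginSetTouchInputMode(int touchMode, AsyncCallback callback, object state);
    }

    public static class SignatureInputMode
    {
        const int DirectTouch = 1;   // placeholder for the SDK's direct/pass-through touch value

        public static void EnableForSigning(IMobilitySession session)
        {
            // Ask for direct touch while the InkCanvas signature view is showing, so strokes
            // reach WPF without the receiver's tap-and-hold gesture translation in the way.
            session.BeginSetTouchInputMode(DirectTouch, ar => { /* complete per the SDK docs */ }, null);
        }
    }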
Finally, the last option is to get your customers onto the latest XenDesktop 7 running on Windows Server 2012. This is the latest release and it supports touch remoting, so the receiver will not perform any gesture translation that delays user input. Instead it passes all the touch events directly up to the server for processing. The iOS receiver has implemented touch remoting; however, I'm not sure whether it has been added to the Android receiver yet.
So the tl;dr is use "keywords:mobile", and then when your customers eventually upgrade to XenDesktop 7 this should become a non-issue.