So I have a new problem with (potentially) Windows 10's latest patch (1703) and Unity 2017 (any)
Post patch the following behaviour is exhibited:
Starting a new project in Unity:
Splash Screen opens
Select Project (includes and location)
Unity starts the file structure and build process
All progress bars finish but Editor never launches
Unity.exe becomes an Un-Endable process in memory
Two copies of UnityHelper.exe load but do nothing (they can be ended)
Same behaviour for opening an existing project adjusted as follows:
Splash screen opens
Select Project
Unity Project Version Import Dialog will prompt as needed
All progress bars finish but Editor never launches
Unity.exe becomes an Un-Endable process in memory
Two copies of UnityHelper.exe load but do nothing (they can be ended)
In both cases Unity has been allowed to sit for 24hrs with no change in state.
Please read before commenting:
System is an i7 w/32 GB RAM, GTX 1070, Windows 10 Pro 64-bit, and has run Unity for dev for the past 18 months with no problems
No hardware issues reported
1 hardware change (Oculus Rift added)
Unity has been completely uninstalled (including AppData files cleared and reg cleaned) and tested with reinstalls from 2017.1.2p3, 2017.2.0f3, 2017.2.1f1, no change
Roll Back of Windows 10 patch is not an option (compliance).
In digging through the Unity Answers forums it was suggested that we remove our shiny new Oculus from the equation.
Lo and behold, we have a fully functional Unity again. Tested on 2017.1.0p3 and then stepped up to the present release of 2017.3.0f3.
This required physical disconnect of the HMD and Towers from the desktop and a reboot. We did not need to uninstall or stop any of the Oculus services, just a clean removal of the device.
In meetup chats this has been revealed to us as "The Oculus Plague", and it's not entirely uncommon. I'm hoping either Unity or Oculus can get this resolved; discon/recon/reboot cycles are hard on the hardware, and knees.
I want to manually place and scale a hologram in an application for the HoloLens2. When I restart the application, I want the hologram to be placed and oriented in the same pose relative to the real world as in the previous session.
Starting recently, the documentation (https://learn.microsoft.com/en-us/windows/mixed-reality/develop/unity/spatial-anchors-in-unity?tabs=wlt) suggests to use World Locking Tools instead of World Anchors.
However, I cannot get persistence across sessions on the same device (HoloLens2, HL) working.
I tried the minimal example of World Locking Tools (https://microsoft.github.io/MixedReality-WorldLockingTools-Unity/DocGen/Documentation/HowTos/UsingWLT/JustWorldLock.html). But my hologram will always appear at the same position relative to the start pose of the HL.
What additional steps need to be performed to have World Locking Tools save and load the manipulated hologram's transform (locally on the device)?
I found the answer to the problem:
I activated the development visualizer of World Locking Tools (WLT), and it showed that no anchors were created on the HoloLens, while in the Unity simulation it seemed to work fine.
Changing the XR Plugin Provider from OpenXR to Windows Mixed Reality solved the problem as WLT is not compatible with OpenXR. See Changing XR Plugin Provider for how to change the plugin in the project settings.
I am attempting to learn to code via Flutter (I'm a total noob) and I am having some trouble with my emulator when I try to test my code. I am using Visual Studio Code, and when I try to boot my emulator, the phone will appear but the screen is completely blank. The power button (on the emulator) is unresponsive and I get an error "emulator didn't respond in..."
I have literally been fighting with this for hours and I could really use a knowledgeable hand. Can anybody help me troubleshoot, by chance? I have searched here, but nothing matches exactly what I'm going through and I haven't been able to solve it yet.
errors with android emulator
My computer spec:
Processor Intel(R) Core(TM) i5-1035G7 CPU @ 1.20GHz 1.50 GHz
Installed RAM 16.0 GB (15.7 GB usable)
Device ID 45818A1F-CFA5-4E12-AB9C-8192B75D2308
Product ID 00325-96713-52283-AAOEM
System type 64-bit operating system, x64-based processor
Pen and touch No pen or touch input is available for this display
Okay, so this probably seems obvious, but I struggled with this at the beginning since my course has me learning in VSCode. I went into Android Studio instead, deleted the old emulator, and created a new one. It's taking a while to finish loading, but the screen is operating and I think my problems are likely solved.
So I guess no, I never figured out why the original emulator wasn't working, nor did I ever get it to work; but the end result is the same.
I am running a script on BlueStacks for a game. Sometimes, the game crashes. I want to make a program that will detect when the app crashes, and then perform a certain action.
How can I do it?
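One way to approach this is to poll for the game's process over adb and react when it disappears. This is a minimal sketch, assuming the emulator is reachable over adb (e.g. after `adb connect 127.0.0.1:5555`, the port BlueStacks commonly exposes) and using `com.example.game` as a stand-in for your game's actual package name:

```python
import subprocess
import time

def app_is_running(pidof_output: str) -> bool:
    """Interpret the output of `adb shell pidof <package>`:
    one or more numeric PIDs means the process is alive."""
    tokens = pidof_output.split()
    return bool(tokens) and all(t.isdigit() for t in tokens)

def watch(package: str, interval: float = 5.0) -> None:
    """Poll the emulator over adb; relaunch the app when its process disappears."""
    while True:
        result = subprocess.run(
            ["adb", "shell", "pidof", package],
            capture_output=True, text=True,
        )
        if not app_is_running(result.stdout):
            # Process gone (crashed or closed): relaunch it via the stock
            # Android monkey tool, which fires one launcher intent.
            subprocess.run(
                ["adb", "shell", "monkey", "-p", package,
                 "-c", "android.intent.category.LAUNCHER", "1"],
            )
        time.sleep(interval)

# Example (requires adb connected to BlueStacks):
# watch("com.example.game")
```

Note this only detects the process dying; if the game hangs while its process stays alive, you would need a different check (e.g. screenshot comparison).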
Once your app goes into a crashed state, your system may also freeze, possibly because of a low-spec configuration, since BlueStacks consumes a lot of resources and your script is running inside the BlueStacks environment. The script has no control over this situation; it only provides input via mouse and keyboard strokes.
Can you please open your Settings (gear wheel to the right) and match these settings and see if that resolves the issue for you?
Engine Tab:
Graphics Engine: Performance
Graphics Renderer: OpenGL
GPU Settings: Prefer dedicated graphics (if possible)
ASTC: Hardware decoding (if possible)
CPU: 4 Cores
RAM: 4 GB
ABI: Auto
Device Tab:
Device Profile: One Plus 3T
I recently finished developing a UWP app based on the SDK example CameraFrames. On the second screen of this example (and also in my app) you can choose to preview the frames taken by the Kinect v2.0, as shown below.
At the top of the image, to the right of "Grupo de fontes" (Source group), I am able to choose between the different cameras that are connected to my PC. The problem comes when I deploy my app to another PC: I am unable to see "Kinect V2 Video Sensor". This renders my app useless, as it needs to be portable between PCs. I have checked that the "Package.appxmanifest->Capabilities->Web Cam" checkbox is ticked.
I am out of ideas, as I don't have a clue why my app works flawlessly on my computer but not on the rest. If you need any extra info to help me with my problem, please let me know.
It is worth noting that on the other PCs that I've tried my app can read frames via Kinect Studio or MatLab.
Inside "Camera privacy settings" my app has privileges to use the camera.
Thank you in advance.
Update due to Nico Zhu's comment
My OS is Windows 10 and my target version is "10.0.16299.0". I have tried downloading and deploying CameraFrames on the second computer that I'm working with, but it doesn't recognize the Kinect as an input source. Even though CameraFrames doesn't read anything, I can properly make use of the Kinect using Kinect Studio.
It seems that my problem comes down to my second computer not being able to make use of the Kinect in any deployed UWPs. What should I install in order to make sure that I have all that's needed to properly read from the Kinect?
After a lot of headaches, I just needed to upgrade my drivers.
I am writing a game for the Microsoft PixelSense written in Unity, communicating with the table through the SurfaceToTUIO Bridge and the unity3d-tuio Unity Plugin.
I am currently trying to get the game to play nicely with the Microsoft PixelSense launcher. I have been able to get my application to appear in the launcher by mimicking the Surface Bing Application - duplicating a link to an XML file in C:\ProgramData\Microsoft\Surface\v2.0\Programs and creating the corresponding XML in the proper format.
When I go into Surface Mode - either through a dedicated Surface User Account, or through the Surface Shell on the Administrator's Profile, the game appears correctly on the launcher bar with the custom icon I set in the XML. The game launches correctly from the bar, and I can play it without any errors for about two minutes.
At that point, the launcher closes my game. With a little research, I learned that it's the application's responsibility to dismiss the launcher.
Being that this is part of the Microsoft PixelSense SDK and not accessible to Unity, I've tried various methods to get around this:
I tried running the game in Single Application Mode. It turns out there is a different timeout that still waits for the SignalApplicationLoadComplete call.
I tried using the CriticalProcessMonitoring Disable and ApplicationProcessMonitoring Disable keys in the Registry.
I tried setting the LoadingScreenTimeout and SingleAppLoadingScreenTimeout Registry Keys to 0 - below their valid threshold.
What I want is to provide my users with the smoothest experience getting into and out of the Unity game. Currently we have to launch our game from the Windows Desktop, which can frustrate users due to the fact that Windows can't differentiate between a finger touching the screen and a palm hovering above the screen.
Does anyone have a workaround, a good understanding of how I could spoof a SignalApplicationLoadComplete call from Unity, or a suggestion of something to try?
Thanks!
Update: I've found some more things that didn't work:
I found the Microsoft.Surface DLL at C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.Surface\v4.0_2.0.0.0__31bf3856ad364e35. I imported it into my Unity project, but received a System.TypeLoadException, which appears to be because the DLL is compiled with .NET 4.0, which Unity does not currently support.
I have been unable to find any COM objects that would allow me to communicate with the launcher without needing the DLL.
I cannot use a XNA wrapper program, as System.Diagnostics.Process.Start doesn't work in Surface Mode according to this post.