I have planned to start building an application for the HoloLens a month from now, so right now I am just doing preliminary design and feasibility checks. (For the record, I have built simple applications for the HoloLens using Unity and have also used the camera for some image recognition.)
My main concern is the method of inputting data into my application. In a normal application you have GUI widgets such as spinners or sliders for entering a numeric value.
How can I input numeric values in a HoloLens application?
Since you've made a few applications for the HoloLens before, I'm guessing you know about the MixedRealityToolkit that Microsoft offers. If you don't know about it yet and want to use it, there is a quick setup guide on the MixedRealityToolkit GitHub. The toolkit contains a lot of tools that can help you build the interactions for the HoloLens.
It also includes a few examples of how to go about making sliders and other sorts of input.
If you look under Examples/UX, you'll see a few scenes/prefabs/scripts that show how you could build such GUI widgets for the HoloLens.
I'm looking at developing a desktop app that overlays Word on Windows. I have looked into Office plugins, but they are too limited in their functionality. Essentially, I want to achieve something like what is shown in the image below.
The highlighted text segments should move as the user scrolls, but they do not have to update instantly; they could, for example, appear once the user has stayed at one location in the document for more than a second.
Is this possible without the overlay interfering with Word's functionality, and what framework should one go for in the development: Electron? Windows is the main platform where it should work, but an easy port to Mac would be great.
Any resources are much appreciated. I was thinking it could be built kind of like Loom video. Have already looked at this and am trying out this one.
I'm doing research that requires an automated camera, one that coordinates with the rotation of a filter wheel and takes a series of images relatively quickly (4 images in less than 2 seconds). I'd like to do this by writing a Matlab script to control everything and handle the incoming data.
I know there are scientific cameras out there that can do this job and have very good SDKs, but they are also very expensive if they have the sensor size that I need (APS-C or larger). Using a simple Sony mirrorless camera would work perfectly for my needs as long as I can control it.
I'd like to use Matlab or LabVIEW to automate the data acquisition, but I'm not sure what is possible with this API Beta SDK. My understanding is that it is designed to let the user create a stand-alone app, not to integrate camera commands into a programming environment like Matlab. I know there are ways to call an external application from within Matlab, but I've read one person's account of trying this indirect method, and it sounds like triggering the camera that way takes a long time (five seconds or more for a single image). That would be too slow.
Does the SDK allow camera control directly from a program like Matlab?
"My understanding is that it is designed to allow the user to create a stand-alone app, but not to integrate camera commands into a programming environment like Matlab."
Don't trust marketing statements; that's just how they advertise their SDK. If you take a closer look at the documentation, you will realize your camera runs a server that accepts JSON-RPC commands over HTTP. I would take one of the already existing examples for Android (Java) and adapt it to run on your operating system; you can call Java code directly from your Matlab console.
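As a rough illustration of that approach, here is a minimal Java sketch that POSTs a single JSON-RPC command to the camera using only the standard library. The endpoint http://192.168.122.1:8080/sony/camera and the actTakePicture method are the usual defaults in Sony's Camera Remote API documentation, but treat both as assumptions to verify for your model:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class SonyRemoteShot {
    public static void main(String[] args) throws Exception {
        // One JSON-RPC request; "actTakePicture" triggers a single still image.
        String body = "{\"method\":\"actTakePicture\",\"params\":[],\"id\":1,\"version\":\"1.0\"}";

        // Default endpoint when connected to the camera's own Wi-Fi network
        // (an assumption -- check the access point info for your model).
        URL url = new URL("http://192.168.122.1:8080/sony/camera");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // The JSON reply contains the result (e.g. a URL for the captured image).
        try (InputStream in = conn.getInputStream();
             Scanner s = new Scanner(in, StandardCharsets.UTF_8.name()).useDelimiter("\\A")) {
            System.out.println(s.hasNext() ? s.next() : "");
        }
    }
}
```

Since Matlab can call Java classes directly, a class like this can be invoked from the Matlab console once it is on the Java class path.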
I've had great success communicating between Matlab and a Sony QX1 (the 'webwrite' function is your friend!).
That said, you will definitely struggle to implement anything like precise triggering. The call-response times vary greatly (roughly 5 ± 2 seconds).
You might be able to get away with shooting video and then pulling the relevant frames out of the sequence?
I am presently trying to develop an image-processing app for Android phones using Eclipse. My app consists of several buttons, and sub-menu buttons as well. I am trying to make it universal (so that it can run at any resolution) by using a switch case over the different resolutions, and thereby different resources for each resolution. The problem is that I am encountering memory overload problems: it runs fine on an Xperia U, but not on a Galaxy S, and it also crashes in the emulator. I haven't used XML for my app and have designed the entire UI programmatically. Please advise me on how to solve this problem. Any help will be highly appreciated. Thanks in advance!
Well, the question is very general, but here are some points that might help:
Designing everything programmatically means your app creates every view at runtime; by skipping XML you also lose the layout optimizations Android applies to XML-defined UIs, so the app will be slower.
Which context are you using to create your UI objects? If you tie UI objects to the app's context rather than an activity's context, all of those UI components remain in memory until the app is killed, whereas components tied to an activity are released as soon as the activity is destroyed.
You could use the XML inflater to reduce the work done in Java by reusing components defined in XML. This will help you optimize sub-components that you use repeatedly (see the sketch below).
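As a minimal sketch of the last two points, assuming a hypothetical row layout defined once in res/layout/row_item.xml:

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.widget.LinearLayout;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Tie views to the activity context ("this"), not
        // getApplicationContext(): these views are then freed together
        // with the activity instead of living as long as the app process.
        LinearLayout root = new LinearLayout(this);
        root.setOrientation(LinearLayout.VERTICAL);
        setContentView(root);

        // Reuse one XML definition instead of rebuilding each row in Java.
        // R.layout.row_item is a placeholder name for your own layout file.
        LayoutInflater inflater = LayoutInflater.from(this);
        for (int i = 0; i < 5; i++) {
            View row = inflater.inflate(R.layout.row_item, root, false);
            root.addView(row);
        }
    }
}
```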
I am writing a game for the Microsoft PixelSense written in Unity, communicating with the table through the SurfaceToTUIO Bridge and the unity3d-tuio Unity Plugin.
I am currently trying to get the game to play nicely with the Microsoft PixelSense launcher. I have been able to get my application to appear in the launcher by mimicking the Surface Bing Application - duplicating a link to an XML file in C:\ProgramData\Microsoft\Surface\v2.0\Programs and creating the corresponding XML in the proper format.
When I go into Surface Mode - either through a dedicated Surface User Account, or through the Surface Shell on the Administrator's Profile, the game appears correctly on the launcher bar with the custom icon I set in the XML. The game launches correctly from the bar, and I can play it without any errors for about two minutes.
At that point, the launcher closes my game. With a little research, I learned that it's the application's responsibility to dismiss the launcher.
Since this is part of the Microsoft PixelSense SDK and not accessible from Unity, I've tried various methods to get around it:
I tried running the game in Single Application Mode. It turns out there is a different timeout that still waits for the SignalApplicationLoadComplete call.
I tried using the CriticalProcessMonitoring Disable and ApplicationProcessMonitoring Disable keys in the Registry.
I tried setting the LoadingScreenTimeout and SingleAppLoadingScreenTimeout Registry Keys to 0 - below their valid threshold.
What I want is to provide my users with the smoothest experience getting into and out of the Unity game. Currently we have to launch our game from the Windows desktop, which can frustrate users because Windows can't differentiate between a finger touching the screen and a palm hovering above the screen.
Does anyone have a workaround, a good understanding of how I could spoof a SignalApplicationLoadComplete call from Unity, or a suggestion of something else to try?
Thanks!
Update: I've found some more things that didn't work:
I found the Microsoft.Surface DLL at C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.Surface\v4.0_2.0.0.0__31bf3856ad364e35. I imported it into my Unity project, but received a System.TypeLoadException, which appears to be because the DLL is compiled against .NET 4.0, which Unity does not currently support.
I have been unable to find any COM objects that would allow me to communicate with the launcher without needing the DLL.
I cannot use an XNA wrapper program, as System.Diagnostics.Process.Start doesn't work in Surface Mode according to this post.
I'm building an app for iOS with Adobe Flex Builder and compiling it into an .ipa using Adobe's tools.
Through initial testing, I see that the end result isn't as rich as native code, nor is it as fast or smooth.
Without simply saying 'why don't you just use Objective-C': is there any documentation on the overhead of building an app this way?
Specifically, what kind of performance hit can you expect when using Adobe's platform instead?
Make sure you are using the latest AIR 3.0 SDK for iOS packaging; its performance is notably better.
Consider best practices when developing your app:
http://www.adobe.com/devnet/flash/articles/optimize_content_ios.html
http://help.adobe.com/en_US/as3/mobile/flashplatform_optimizing_content.pdf
http://www.mikechambers.com/blog/files/presentations/fitc_amsterdam_2010/flash_iphone_fitc_2010.pdf
Blanket comparison to native Objective-C is a wide topic; Flash's ability to deploy ubiquitously to multiple platforms should also be weighed if you're targeting Android and BlackBerry.
Perhaps citing specific issues of your implementation would help yield insight.
I too have been developing a Flash-based iOS app. My initial prototype was useless on an iPad 1, so I had to look for ways to optimize. My second prototype is performing quite well. Here are some pointers.
1) Don't use timers. I had to write my own "FrameWorker" singleton utility class that manages all my animations, and even delayed actions, by delegating them to a single enterFrame event. This alone will give you a huge speed boost.
2) Don't attach enterFrame events to many different objects. As I said in point one, find a way to use a single enterFrame handler that you can add processes to and remove them from.
3) Avoid vectors as much as possible; use images instead. If you do need to draw objects in the Flash IDE or via ActionScript, set cacheAsBitmap = true on them.
4) Don't use visual objects that are much larger than the screen area. If you need to move large objects across the screen, manage them off the display list and learn blitting techniques to draw to the screen ONLY the rect that will be displayed at that time. Lee Brimlow has a couple of good starter tutorials.
5) Be very disciplined about managing events. Make sure you always remove listeners that are no longer necessary, for instance.
6) Distribute your app's load across different frames. Don't do too many intensive things in a single frame.
If you follow these pointers your app will be as fast as any out there.