Object Recognition with Mixed Reality Capture (MRC) - unity3d

We're using the HoloLens' locatable camera (in Unity) to perform a number of image recognition tasks. We'd like to utilize the mixed reality capture feature (MRC) available in the HoloLens developer portal so that we can demo our app, but MRC crashes because we're hogging the camera in Photo Mode.
Does anyone have a good workaround for this? We've had some ideas, but none of them are without large downsides.
Solution: Put your locatable camera in Video Mode so that you can share the video camera with MRC.
Downside: Video Mode only allows us to save the video to disk, but we need real-time access to the frame buffer in memory (the way Photo Mode gives us access) so that we can run our detection in real time.
Solution: Capture the video in a C++ plugin, and pass the frame bytes to Unity. This allows MRC to work as expected.
Downside: We lose the 'locatable' part of the 'locatable camera' as we no longer get access to the cameraSpaceToWorldSpace transformation matrix, which we are utilizing in our UI to locate our recognized objects in world space.
Sub-solution: Recreate the locatable camera's view transformation matrix yourself.
Sub-downside: I don't have any insight into how Microsoft creates this transformation matrix. I imagine it involves some hardware complexities, such as accounting for lens distortion. If someone can point me to how this matrix is created, that might be one solution (see the sketch just after this list for how the matrix is consumed once you have it).
Solution: Turn off object recognition while you create the MRC, then turn it back on when you're done recording.
Downside: Our recognition system runs in real time, n times per second. There would be no way to capture the recognitions on video.
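
On the matrix sub-downside above: in Photo Mode, Unity exposes the matrices directly through PhotoCaptureFrame.TryGetCameraToWorldMatrix and TryGetProjectionMatrix, and, as far as I can tell, the same spatial data is attached to native capture frames as Media Foundation sample attributes (MFSampleExtension_Spatial_CameraViewTransform and friends), so a plugin can recover the matrix rather than recreate it. Once you have the two matrices, using them is plain matrix math. Here is a minimal sketch that maps a pixel to a world-space ray, assuming HoloLens conventions:

    using UnityEngine;

    public static class LocatableCameraMath
    {
        // pixel: (x, y) in image coordinates, origin at the top-left;
        // resolution: the captured image's width and height in pixels.
        public static Ray PixelToWorldRay(Vector2 pixel, Vector2 resolution,
                                          Matrix4x4 cameraToWorld, Matrix4x4 projection)
        {
            // Pixel -> normalized device coordinates in [-1, 1] (flip y: image
            // coordinates run top-down, NDC runs bottom-up).
            var ndc = new Vector2(
                 (pixel.x / resolution.x) * 2f - 1f,
                -((pixel.y / resolution.y) * 2f - 1f));

            // Unproject to a point on the far plane in camera space.
            Vector3 farPoint = projection.inverse.MultiplyPoint(new Vector3(ndc.x, ndc.y, 1f));

            // The camera sits at the origin of camera space; transform the origin
            // and direction into world space. Depending on the handedness
            // conventions of the matrices you were given, the ray direction may
            // need a z-flip.
            Vector3 origin = cameraToWorld.MultiplyPoint3x4(Vector3.zero);
            Vector3 direction = cameraToWorld.MultiplyVector(farPoint).normalized;
            return new Ray(origin, direction);
        }
    }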

We ended up creating a plugin for Unity that uses Microsoft's Media Foundation to get access to the video camera frames. We open sourced it in case anyone else runs into this problem.
The plugin mimics Unity's VideoCapture class so that developers can easily see how to use it.
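To give a feel for that, the intended usage mirrors Unity's VideoCapture flow, roughly like the fragment below. The class and member names here are illustrative only, not necessarily the plugin's exact API:

    // Illustrative only; the plugin's real class and member names may differ.
    VideoCapture.CreateAsync(capture =>
    {
        // Subscribe before entering video mode so no frames are missed.
        capture.FrameSampleAcquired += sample =>
        {
            byte[] frameBytes = sample.Bytes; // hypothetical member: raw frame buffer
            // Run detection on frameBytes in real time, while MRC remains free
            // to share the camera in video mode.
        };
        capture.StartVideoModeAsync();
    });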
Hopefully this is helpful to a few.

Related

How to offline debug augmented reality in Unity?

I was wondering if there was a way to record the sensor and video data from my iPhone, save it in some way, and then feed it into Unity to test an AR app.
I'd like to see how different algorithms behave on identical input, and that's hard to do when the only way to test is to pick up my phone and wave it around.
What you can do is capture the image buffer. I've done something similar using ARCore. I'm not sure if ARKit has a similar implementation; I found this in a brief search: https://forum.unity.com/threads/how-to-access-arframe-image-in-unity-arkit.496372/
In ARCore, you can take this image buffer and, using ImageConversion.EncodeToPNG, create PNG files named with the timestamp. You can pull your sensor data in parallel and, depending on what you want, write it to a file using a similar approach: https://support.unity3d.com/hc/en-us/articles/115000341143-How-do-I-read-and-write-data-from-a-text-file-
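As a rough sketch of that recording step, assuming you already have the current camera frame as a Texture2D (getting that texture is the ARCore/ARKit-specific part):

    using System.IO;
    using UnityEngine;

    public class FrameRecorder : MonoBehaviour
    {
        public string outputDir = "CaptureSession";

        void Awake()
        {
            Directory.CreateDirectory(outputDir);
            Input.gyro.enabled = true; // gyro must be enabled before reading it
        }

        // Call once per frame with the current camera image.
        public void RecordFrame(Texture2D cameraImage)
        {
            float t = Time.realtimeSinceStartup; // shared timestamp for pairing
            byte[] png = ImageConversion.EncodeToPNG(cameraImage);
            File.WriteAllBytes(Path.Combine(outputDir, $"frame_{t:F4}.png"), png);

            // Append whichever sensor readings you care about, keyed to the same
            // timestamp so playback can line them up with the frames.
            string line = $"{t:F4},{Input.gyro.attitude},{Input.acceleration}\n";
            File.AppendAllText(Path.Combine(outputDir, "sensors.csv"), line);
        }
    }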
After that, you can use FFMPEG to convert these PNGs into a video. If you want to try different algorithms, there's a good chance the PNGs alone will be enough; otherwise you can use a command like the one described here: http://freesoftwaremagazine.com/articles/assembling_video_png_stream_ffmpeg/
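For reference, a typical invocation (assuming the frames were saved with a zero-padded counter rather than a raw timestamp) looks like:

    ffmpeg -framerate 30 -i frame_%06d.png -c:v libx264 -pix_fmt yuv420p capture.mp4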
You should be able to pass these images and the corresponding sensor data to your algorithm to check.

How to capture player movement with a webcam and use that as an input in a game?

I want to make a boxing game that moves the player by capturing the player's movement through image processing. I have tried capturing movement in Python using OpenCV, but how could I use that input in a game environment? And which tool should I use for that?
This is my first question here, so please bear with me.
Thanks
Just buy a Kinect and use Microsoft's SDK. It has skeleton tracking built in.
As for game input, you can build standard serial communication into Unity3D in its background scripts. Either implement the camera directly in Unity, or create a forwarder that runs on the computer, reads the camera data, processes it, and then streams the computed information to Unity. A sketch of the receiving side follows below.
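Here is a minimal sketch of that receiving side, using UDP rather than a serial port purely for illustration, and assuming a made-up "x,y" text protocol from the hypothetical OpenCV forwarder:

    using System.Collections.Concurrent;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading;
    using UnityEngine;

    public class PoseReceiver : MonoBehaviour
    {
        readonly ConcurrentQueue<Vector2> poses = new ConcurrentQueue<Vector2>();
        UdpClient client;

        void Start()
        {
            client = new UdpClient(9050); // port is arbitrary
            new Thread(Listen) { IsBackground = true }.Start();
        }

        void Listen()
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                // Blocking receive on a background thread; the Unity API must not
                // be touched here, so we just parse and queue the result.
                byte[] data = client.Receive(ref remote);
                string[] parts = Encoding.ASCII.GetString(data).Split(',');
                poses.Enqueue(new Vector2(float.Parse(parts[0]), float.Parse(parts[1])));
            }
        }

        void Update()
        {
            // Drain the queue on the main thread and drive the player from it.
            while (poses.TryDequeue(out Vector2 pos))
                transform.position = new Vector3(pos.x, pos.y, transform.position.z);
        }

        void OnDestroy()
        {
            client?.Close();
        }
    }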

Can MATLAB do realtime motion tracking?

I tried to track motion in MATLAB using this tutorial (http://www.mathworks.com/help/vision/examples/motion-based-multiple-object-tracking.html) and it works fine, but it relies on a video file as its source.
I want to know if it's possible to do the same tracking in real time, using a camera as the source.
Everything is possible, just please try to find some stuff by yourself before asking here.
I think you may find the information you need in this link:
http://www.matlabtips.com/realtime-processing/
Alternatively, you could of course just store the camera output as a (very short) video and continuously analyse that instead.
As of release R2014a, MATLAB includes support for USB webcams. If you have an older version, or if you want to use a high-end camera, you would need the Image Acquisition Toolbox.
Once you are able to get frames from the camera, you can reuse almost all of the code in the multiple object tracking example. You would only need to rewrite the readFrame function with code to get a frame from the camera.
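The change is small. A minimal sketch, assuming R2014a+ and the USB webcam support mentioned above:

    % Replace the example's readFrame (which reads from a video file)
    % with frames grabbed live from a camera.
    cam = webcam;                 % first connected webcam
    while true
        frame = snapshot(cam);    % current camera frame, like readFrame's output
        % ... feed 'frame' into the detection and tracking steps
        %     from the multiple object tracking example ...
    end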

ActionScript 3: Any way to get data to and from a local program?

I've recently been constructing a pixel shader to apply shading to the player character in a Flash game I'm currently coding. As it turns out, however, ActionScript 3 is... not handling this gracefully, and the framerate hit is pretty huge.
What I would like to do is write something in C++ that can then use OpenGL buffers to store the pixels I want to tweak, tweak them in a hardware accelerated fashion, and then pass it back to the Flash file.
Is there any way of getting AS3 to pass bitmap data to a local shader plugin .exe, and then accept returned data back, or should I just give up and rewrite the entire damn thing in Unity or something?
Failing this, is there any way of forcing the GPU to do the number crunching in AS3? I know OpenGL can't be used with AS3, but DirectDraw, perhaps?
I'm aware this entire enterprise is a) ridiculous overkill and b) probably doomed, but it's currently all that's preventing me from having to work on my reflective essay for the project. (University coursework)
You can't interact with any local program from Flash. Since Flash Player 10 there has been some silly hardware acceleration support, but mostly for video and not for 3D. The best AS3 (Flash Player 10+) can provide is native support for working with triangles in 3D (the drawTriangles() method); for example, the Flash 3D engine "Away3D Lite" uses this feature.
Have you tried looking at Alchemy?
http://labs.adobe.com/technologies/alchemy/
It is a way of compiling C++ code into the swf bytecode.

How do you make maps in Flash CS4 and then use them in iPhone games?

I was watching a video showing an ngmoco Rolando 2 level designer.
He seemed to be using Flash CS4 to make the maps.
Would anyone know how I would go about doing this?
Just in case you need to know, I am an intermediate programmer, I know both Java and Objective-C pretty well.
I don't know if any of what I'm about to say is true, but hopefully my input will be helpful:
It could simply be that the levels used in Rolando are vector graphic images, and the designer you saw in the video preferred Flash CS4 as his vector editor. Again, I could be wrong here.
It's also possible that the game has some code that decodes Flash files into usable levels somehow, assuming this would be permitted by Apple under their "no interpreters" rule.
My final thought, which in my opinion is the least likely, is that the game may be a Flash game compiled to run on the iPhone using Adobe's beta Flash-to-iPhone SDK. I say this is the least likely because I believe ngmoco haven't used this method in any of their previous games, and I don't see why they would suddenly resort to it.
In my game Hudriks I also used Flash to design levels and even make some animations.
There is no ready-made tool for this, so you need to develop it yourself around the requirements of your game.
First of all, it depends on your game and what exactly you need to design in Flash: just placing images, defining their parameters (bonus values), ground paths, etc.
After that it is important to define the structure of your Flash file: how you store different levels (in symbols or scenes), and what layers each level has (boundaries, objects, obstacles, etc.).
If you need extra information for your objects in Flash, you will most probably need to develop a custom panel in Flash to set up all the parameters. I used setPersistentData to store information on Flash objects.
After that you need to develop a script that goes through all the objects in your symbols and extracts the basic information, such as transformations, along with your custom data. I faced some problems getting correct transformation values, especially for rotation, and had to add extra heuristics.
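A simplified JSFL sketch of that extraction pass (untested, and the persistent data key is hypothetical; real code also needs the rotation heuristics mentioned above):

    // Walk the first frame of every layer on the current timeline and dump
    // each element's placement matrix plus custom persistent data.
    var doc = fl.getDocumentDOM();
    var layers = doc.getTimeline().layers;
    for (var li = 0; li < layers.length; li++) {
        var frame = layers[li].frames[0];
        if (!frame) continue;
        for (var ei = 0; ei < frame.elements.length; ei++) {
            var el = frame.elements[ei];
            var m = el.matrix; // a, b, c, d encode scale/rotation; tx, ty the position
            fl.trace([el.name, m.tx, m.ty, m.a, m.b, m.c, m.d,
                      el.getPersistentData("bonusValue")].join(",")); // hypothetical key
        }
    }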
For animations I just used motion tween data. In my animation framework I did a simple implementation supporting basic parameters (transformation and alpha) and only linear curves. Fortunately, Flash CS4 has a copyMotion function that gives you the animation as XML; you just need to parse it or convert it to your own format.