How to capture player movement with a webcam and use it as input in a game? - unity3d

I want to make a boxing game and move the player by capturing the player's movement with image processing. I have tried capturing movement in Python using OpenCV, but how can I use that input in a game environment?
Which tool should I use for that?
This is my first question here, so please bear with me.
Thanks

Just buy a Kinect and use Microsoft's SDK. It has skeleton tracking built in.
As for game input, you can handle standard serial-style communication in Unity3D in its background scripts. Either integrate the camera directly into Unity, or create a forwarder that runs on the computer, reads the camera data, processes it, and streams the computed information to Unity.
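As a minimal sketch of the second approach, the forwarder side might look like this in Python with OpenCV. The UDP port, the JSON message format, and the crude frame-difference motion measure are all arbitrary choices for illustration, not anything Unity prescribes; a Unity background script would listen on the same port.

```python
# Hedged sketch of a "forwarder": read webcam frames with OpenCV,
# reduce them to a crude motion measure, and stream it to Unity
# over UDP. Port 5005 and the JSON format are assumptions.
import json
import socket

import cv2

UNITY_ADDR = ("127.0.0.1", 5005)  # Unity would listen on this port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cap = cv2.VideoCapture(0)  # default webcam

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Mean absolute difference between consecutive frames: a very
    # rough stand-in for real pose/gesture detection.
    motion = float(cv2.absdiff(gray, prev_gray).mean())
    prev_gray = gray
    # Unity reads these datagrams in a background script.
    sock.sendto(json.dumps({"motion": motion}).encode(), UNITY_ADDR)
```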

Related

Is there a way to stream Unity3D camera view out as a real camera output?

I am thinking of streaming out a Unity3D camera view as if it were a real camera (same output, streams, and options). I would need to do the following:
Encode the frames in one of: MJPEG / MXPEG / MPEG-4 / H.264 / H.265 / H.264+ / H.265+.
Send metadata: string input/output.
I have not seen anything about streaming out Unity camera views, except one question (Streaming camera view to website in unity?).
Does anyone know if this is possible? If so, what would the basic outline be?
Thank you for the feedback.
I would probably start with keijiro's FFmpegOut plugin. I have a strong feeling FFmpeg allows streaming video via the command line, which is exactly what keijiro is driving in his plugin, so it should be relatively easy to modify it to stream instead of recording to disk: https://github.com/keijiro/FFmpegOut
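As a rough, hedged sketch of that modification (not the plugin's actual code): instead of writing a file, pipe raw frames into an ffmpeg child process that encodes and pushes them to a streaming endpoint. The resolution, frame rate, and RTMP URL below are placeholders, and the frame source (e.g. Unity via the plugin) is left abstract.

```python
# Hedged sketch of streaming instead of recording: feed raw RGB
# frames to ffmpeg over stdin; it encodes H.264 and streams to an
# RTMP endpoint. Size, rate, and URL are assumed values.
import subprocess

WIDTH, HEIGHT, FPS = 1280, 720, 30
STREAM_URL = "rtmp://localhost/live/unity"  # assumed endpoint

ffmpeg = subprocess.Popen(
    [
        "ffmpeg",
        "-f", "rawvideo",        # raw frames arrive on stdin
        "-pix_fmt", "rgb24",
        "-s", f"{WIDTH}x{HEIGHT}",
        "-r", str(FPS),
        "-i", "-",               # read from stdin
        "-c:v", "libx264",
        "-preset", "ultrafast",  # favour latency over compression
        "-f", "flv",             # container RTMP expects
        STREAM_URL,
    ],
    stdin=subprocess.PIPE,
)

def push_frame(rgb_bytes: bytes) -> None:
    """Write one WIDTH x HEIGHT RGB frame to the encoder."""
    ffmpeg.stdin.write(rgb_bytes)
```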
You can also do it via ROS, by creating a publisher and publishing the camera stream from Unity to a ROS topic :)

Unity. Move player when mobile moves (android VR)

I'm developing VR using the Google Cardboard SDK.
I want to move in the virtual environment when I walk in the real world, like this: https://www.youtube.com/watch?v=sZG5__Z9pzs&feature=youtu.be&t=48
Is it possible to make a VR application like that for Android? Maybe using the accelerometer sensor? How can I implement this in Unity?
I tried recording the accelerometer sensor while walking with the smartphone; here is the result: https://www.youtube.com/watch?v=ltPwS7-3nOI [I think the accelerometer values are very noisy -___-]
Actually it is not possible with only a mobile device:
You're up against a fundamental limitation of the humble IMU (the primary motion sensor in a smartphone).
I won't go into detail, but basically you need an external reference frame when trying to extract positional data from acceleration data. This is the topic of a lot of research right now, and it's why VR headsets that track position like the Oculus Rift have external tracking cameras.
Unfortunately, what you're trying to do is impossible without using the camera on your phone to track visual features in the scene and use those as the external reference point, which is a hell of a task better suited to a lab full of computer vision experts.
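To see the scale of the problem, here is a toy simulation of double-integrating accelerometer samples from a phone that is actually standing still. The bias and noise figures are made up, but representative of a cheap IMU; even a tiny constant bias grows into tens of metres of position error within a minute.

```python
# Toy demo: why position-from-acceleration drifts. The phone is
# stationary (true acceleration = 0), but a small sensor bias and
# noise are integrated twice. BIAS and NOISE are assumed values.
import random

DT = 0.01      # 100 Hz sampling
BIAS = 0.05    # assumed constant sensor bias, m/s^2
NOISE = 0.1    # assumed noise amplitude, m/s^2

velocity = 0.0
position = 0.0
for _ in range(int(60 / DT)):                     # one minute of samples
    accel = BIAS + random.uniform(-NOISE, NOISE)  # true value is 0
    velocity += accel * DT                        # first integration
    position += velocity * DT                     # second integration

# The bias alone grows as 0.5 * bias * t^2, roughly 90 m here.
print(f"position error after 60 s: {position:.1f} m")
```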
Another possible but difficult way:
This may be possible if you connect the device to the internet and track its position externally, via satellite positioning (GPS, Google Maps, or something like that), but that is a very hard thing to do.

Object Recognition with Mixed Reality Capture (MRC)

We're using the HoloLens' locatable camera (in Unity) to perform a number of image recognition tasks. We'd like to utilize the mixed reality capture feature (MRC) available in the HoloLens developer portal so that we can demo our app, but MRC crashes because we're hogging the camera in Photo Mode.
Does anyone have a good workaround for this? We've had some ideas, but none of them are without large downsides.
Solution: Put your locatable camera in Video Mode so that you can share the video camera with MRC.
Downside: Video Mode only allows us to save the video to disk, but we need real-time access to the buffer in memory (the way Photo Mode gives us access) so that we can do our detection in real time.
Solution: Capture the video in a C++ plugin, and pass the frame bytes to Unity. This allows MRC to work as expected.
Downside: We lose the 'locatable' part of the 'locatable camera' as we no longer get access to the cameraSpaceToWorldSpace transformation matrix, which we are utilizing in our UI to locate our recognized objects in world space.
Sub-solution: recreate the locatable camera view's transformation matrix yourself.
Sub-downside: I don't have any insight into how Microsoft creates this transformation matrix. I imagine it involves some hardware complexities, such as accounting for lens distortions. If someone can guide me on how this matrix is created, that might be one solution (a generic sketch of what such a transform does follows the list of options below).
Solution: Turn off object recognition while you create the MRC, then turn it back on when you're done recording.
Downside: Our recognition system runs in real time, n times per second. There would be no way to capture the recognitions on video.
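For reference, here is a generic sketch of what a cameraSpaceToWorldSpace matrix does, assuming a plain rigid transform. This is not Microsoft's actual construction, which presumably also folds in camera intrinsics and lens undistortion.

```python
# Generic sketch (NOT the HoloLens API): compose a 4x4
# camera-to-world transform from a rotation and a position, then
# apply it to a camera-space point to place it in world space.
import numpy as np

def make_camera_to_world(rotation: np.ndarray, position: np.ndarray) -> np.ndarray:
    """Build a 4x4 transform from a 3x3 camera rotation and a position."""
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = position
    return m

def to_world(point_camera: np.ndarray, cam_to_world: np.ndarray) -> np.ndarray:
    """Map a 3D point from camera space into world space."""
    p = np.append(point_camera, 1.0)  # homogeneous coordinates
    return (cam_to_world @ p)[:3]
```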
We ended up creating a plugin for Unity that uses Microsoft's Media Foundation to get access to the video camera frames. We open sourced it in case anyone else runs into this problem.
The plugin mimics Unity's VideoCapture class so that developers will be able to easily understand how to implement it.
Hopefully this is helpful to a few.

What are good ways to load kinect input into unity?

I'd like to design a simple tower defense game, with the twist that every input is done via the Kinect. I want to give the player the option to build a real maze and project the minions onto it with a projector.
The input from the Kinect would mainly be range (depth) data and color data. I'm at the beginning of the project, and so far I have only found Kinect Fusion, which seems to have the functionality I need.
Can you suggest any other options I should take a look at?

Can MATLAB do realtime motion tracking?

I tried to track motion in MATLAB by using this tutorial (http://www.mathworks.com/help/vision/examples/motion-based-multiple-object-tracking.html) and it works fine, but it uses a video file as its source.
I want to know whether it's possible to track motion with the same tutorial, but in real time, using a camera as the source.
Everything is possible; just please try to find some things yourself before asking here.
I think you may find the information you need in this link:
http://www.matlabtips.com/realtime-processing/
Alternatively, you could of course just store the camera output as a (very short) video and continuously analyse that instead.
As of release R2014a, MATLAB includes support for USB webcams. If you have an older version, or if you want to use a high-end camera, you would need the Image Acquisition Toolbox.
Once you are able to get frames from the camera, you can reuse almost all of the code in the multiple object tracking example. You would only need to rewrite the readFrame function with code to get a frame from the camera.
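For comparison, the equivalent real-time grab-and-process loop looks like this in Python with OpenCV (the library mentioned in the question at the top of this page); detect_and_track is a hypothetical placeholder for the per-frame tracking step from the tutorial.

```python
# Sketch of a real-time capture loop: grab a frame from the camera
# each iteration instead of reading from a video file.
import cv2

cap = cv2.VideoCapture(0)  # camera instead of a video file

while True:
    ok, frame = cap.read()  # plays the role of readFrame
    if not ok:
        break
    # detect_and_track(frame)  # hypothetical tracking step
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```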