What are good ways to load Kinect input into Unity?

I'd like to design a simple tower defense game, with the twist that all input is done via the Kinect. I want to give the player the option to build a real maze and project the minions onto it with a projector.
The input from the Kinect should mainly be range data and color data. I'm at the beginning of the project, and so far I have only found Kinect Fusion, which seems to have the functionality I need.
Can you suggest any other options I should take a look at?

Related

Making a trackable human body - Oculus Rift

I'm very new to this. During the research for my PhD thesis I found a way to solve a problem, and for that I need to move my lab testing into a virtual environment. I have an Oculus Rift and an OPTOTRAK system that (in theory) allows me to motion-capture a full body for VR. My question is: can someone point me in the right direction as to what materials I need to check out to start working on such a project? I have a background in programming, so I just need a nudge in the right direction (or a pointer to a similar project).
https://www.researchgate.net/publication/301721674_Insert_Your_Own_Body_in_the_Oculus_Rift_to_Improve_Proprioception - I want to make something like this :)
Thanks a lot
Nice challenge, too. How accurate and how real-time does the image of your body in the Oculus Rift world need to be? My two (or three) cents:
A selfie-based approach would be the most comfortable for the user: an external camera somewhere in the room, with software that transforms your image to reflect the correct perspective, as you would see your own body through the Oculus at any moment. This is non-trivial and quite expensive vision software. To make it work through 360 degrees, there would have to be more than one camera, watching each individual Oculus user in the room!
An indirect approach could be easier: model your body and only show its dynamics. There are Wii-style electronics in bracelets and on/in special user clothing, involving multiple tilt and acceleration sensors. Together they form a cluster of "body state" sensor information that the modeller in the software can access. No camera is needed, and the software is not that complicated if you use a skeleton model.
Or combine the two: use the camera for the rendering texture and drive the skeleton model with the dynamics from the clothing sensors. Deep learning might also be applied: in conjunction with a large number of tilt sensors in the clothing, a variety of body-movement patterns could be trained and connected to the rendering in the Oculus. This needs the same hardware as the previous solution, but the software could be simpler, your body would be properly textured, and it would move less "mechanistically". Some research would be needed to find the right deep-learning strategy.
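To make the tilt-sensor idea concrete, here is an illustrative sketch (not tied to any particular sensor product) of the basic building block of such a "body state" cluster: recovering a segment's pitch and roll from a single quasi-static 3-axis accelerometer reading, using gravity as the reference vector.

```python
import math

def tilt_from_accel(ax, ay, az):
    """Estimate pitch and roll (in degrees) from a 3-axis accelerometer
    reading, using gravity as the reference. Only valid when the sensor
    is quasi-static (not accelerating), which is exactly why a cluster
    of such sensors plus a skeleton model is needed for full dynamics."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Sensor lying flat: gravity entirely on the z axis, so no tilt.
print(tilt_from_accel(0.0, 0.0, 9.81))   # (0.0, 0.0)
# Sensor pitched straight up: gravity entirely on the (negated) x axis.
print(tilt_from_accel(-9.81, 0.0, 0.0))  # (90.0, 0.0)
```

Each bracelet or clothing sensor would feed readings like these into the corresponding joint of the skeleton model.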

How to offline debug augmented reality in Unity?

I was wondering if there was a way to record the sensor and video data from my iPhone, save it in some way, and then feed it into Unity to test an AR app.
I'd like to see how different algorithms behave on identical input, and that's hard to do when the only way to test is to pick up my phone and wave it around.
What you can do is capture the image buffer. I've done something similar using ARCore. I'm not sure whether ARKit has a similar implementation; I found this in a brief search: https://forum.unity.com/threads/how-to-access-arframe-image-in-unity-arkit.496372/
In ARCore, you can take this image buffer and, using ImageConversion.EncodeToPNG, create PNG files named with the timestamp. You can pull your sensor data in parallel and, depending on what you want, write it to a file using a similar approach: https://support.unity3d.com/hc/en-us/articles/115000341143-How-do-I-read-and-write-data-from-a-text-file-
After that, you can use FFmpeg to convert these PNGs into a video. If you want to try different algorithms, there's a good chance the PNGs alone will be enough. Otherwise you can use a command like the one shown here: http://freesoftwaremagazine.com/articles/assembling_video_png_stream_ffmpeg/
You should be able to pass these images and the corresponding sensor data to your algorithm to check.
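For the offline half of that pipeline, a minimal replay sketch could look like the following. The filename convention (`<timestamp>.png`) and the sensor log layout (CSV rows of `timestamp, ax, ay, az`) are assumptions for illustration; match them to whatever your capture code actually writes.

```python
import bisect
import csv
import os

def load_sensor_log(csv_path):
    """Read a sensor log written on-device as rows of: timestamp, ax, ay, az."""
    rows = []
    with open(csv_path) as f:
        for ts, ax, ay, az in csv.reader(f):
            rows.append((float(ts), (float(ax), float(ay), float(az))))
    rows.sort()
    return rows

def sensor_for_frame(rows, frame_ts):
    """Return the sensor sample closest in time to a frame timestamp."""
    times = [t for t, _ in rows]
    i = bisect.bisect_left(times, frame_ts)
    candidates = rows[max(0, i - 1):i + 1]
    return min(candidates, key=lambda r: abs(r[0] - frame_ts))

def replay(frame_dir, rows, algorithm):
    """Feed each saved frame plus its nearest sensor sample to an algorithm,
    so different algorithms can be compared on identical recorded input."""
    for name in sorted(os.listdir(frame_dir)):
        if not name.endswith(".png"):
            continue
        frame_ts = float(os.path.splitext(name)[0])  # files named "<timestamp>.png"
        algorithm(os.path.join(frame_dir, name), sensor_for_frame(rows, frame_ts))
```

The nearest-sample matching matters because the camera and the IMU typically do not tick at the same rate.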

How to capture player movement with a webcam and use it as input in a game?

I want to make a boxing game and move the player by capturing the player's movement using image processing. I have tried capturing movement in Python using OpenCV, but how can I use that input in a game environment,
and which tool should I use for that?
This is my first question here, so please bear with me.
Thanks
Just buy a Kinect and use Microsoft's SDK; it has skeleton tracking built in.
As for game input, you can handle standard serial communication in Unity3D in its background scripts. Either integrate the camera directly into Unity, or create a forwarder that runs on the computer, reads the camera data, processes it, and then streams the computed information to Unity.
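Since the question already uses Python/OpenCV, here is a minimal sketch of such a forwarder in Python. The UDP port and the JSON message format are arbitrary choices, and the `process` callback is a placeholder for whatever OpenCV tracking you run per frame; a Unity background script would listen on the matching port and parse the JSON.

```python
import json
import socket
import time

def encode_packet(values, ts=None):
    """Serialize computed tracking values as a compact JSON datagram."""
    payload = {"t": time.time() if ts is None else ts}
    payload.update(values)
    return json.dumps(payload, sort_keys=True).encode("utf-8")

def run_forwarder(read_frame, process, host="127.0.0.1", port=9999):
    """Forwarder loop: read a frame (e.g. from cv2.VideoCapture.read),
    process it into a small dict of values (e.g. fist positions from
    your OpenCV tracking), and stream the result to Unity over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        frame = read_frame()
        if frame is None:  # camera closed or end of input
            break
        sock.sendto(encode_packet(process(frame)), (host, port))
```

UDP suits this kind of link: packets are small, and a dropped frame of tracking data is harmless because the next one replaces it.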

Unity: move player when the mobile moves (Android VR)

I'm developing VR using the Google Cardboard SDK.
I want to move in the virtual environment when I walk in the real world, like this: https://www.youtube.com/watch?v=sZG5__Z9pzs&feature=youtu.be&t=48
Is it possible to make a VR application like that for Android, maybe using the accelerometer sensor? How can I implement this in Unity?
I tried recording the accelerometer sensor while walking with my smartphone; here is the result: https://www.youtube.com/watch?v=ltPwS7-3nOI [I think the accelerometer values look really noisy -___-]
Actually, it is not possible with the mobile device alone:
You're up against a fundamental limitation of the humble IMU (the primary motion sensor in a smartphone).
I won't go into detail, but basically you need an external reference frame when trying to extract positional data from acceleration data. This is the topic of a lot of research right now, and it's why VR headsets that track position like the Oculus Rift have external tracking cameras.
Unfortunately, what you're trying to do is impossible without using the camera on your phone to track visual features in the scene and use those as the external reference point, which is a hell of a task better suited to a lab full of computer vision experts.
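A quick numeric sketch shows why dead-reckoning from the IMU alone fails: naively double-integrating acceleration samples turns even a tiny constant sensor bias into metres of position error within seconds (the bias value below is made up, but well within typical phone-IMU error).

```python
def integrate_position(accels, dt):
    """Naively double-integrate 1-D acceleration samples into position,
    starting at rest, as IMU-only dead reckoning would."""
    velocity, position = 0.0, 0.0
    for a in accels:
        velocity += a * dt
        position += velocity * dt
    return position

# The phone is actually stationary, but the accelerometer reports a
# tiny constant bias of 0.05 m/s^2 (hypothetical, realistic magnitude).
dt, seconds = 0.01, 10.0
samples = [0.05] * int(seconds / dt)
print(integrate_position(samples, dt))  # ≈ 2.5 m of drift after 10 s
```

Because the error grows quadratically with time, no amount of filtering fixes it; you need an external reference (cameras, beacons, or visual features) to anchor position.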
Another possible, but difficult, approach:
It might work if you connect the device to the internet and track its position from satellites (GPS, as Google Maps does), but that is a very hard thing to do.

Is there a Unity plug-in that would allow you to generate a 3D model using your webcam?

I've looked into Metaio, which can do facial 3D reconstruction
(video here: https://www.youtube.com/watch?v=Gq_YwW4KSjU),
but I'm not looking to do that. I simply want the user to be able to scan in a small, simple object and have a 3D model created from it. I don't need it textured or anything. As far as I can tell, Metaio cannot do what I'm looking for, or at least I can't find the documentation for it.
Since you are targeting mobile, you would have to take multiple pictures from different angles and use an approach like the one described in this CSAIL paper.
Steps
For finding the keypoints, I would use FAST, or a method using the Laplacian of Gaussian. Other options include SURF and SIFT.
Once you identify the points, use triangulation to find where they lie in 3D.
With all of the points, create a point cloud. In Unity, I would recommend doing something similar to this project, which used particle systems as the points.
You now have a 3D reconstruction of the object!
Now, in implementing each of these steps, you could reinvent the wheel, or use C++ native plugins in Unity. This enables you to use OpenCV, which has many of these operations already implemented (SURF, SIFT, and possibly even some 3D-reconstruction classes/methods that use stereo calibration*).
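For intuition on the triangulation step: in the simplest case of a calibrated, rectified stereo pair it reduces to depth-from-disparity (the general multi-view case needs full projection matrices, e.g. OpenCV's cv2.triangulatePoints). The pixel coordinates, focal length, and baseline below are made-up values for illustration.

```python
def triangulate_rectified(x_left, x_right, focal_px, baseline_m):
    """Depth of a matched keypoint in a rectified stereo pair:
    Z = f * B / d, where d = x_left - x_right is the disparity in
    pixels, f the focal length in pixels, B the baseline in metres."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("zero/negative disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity

# A keypoint seen at x=420 px in the left image and x=400 px in the
# right image, with a 700 px focal length and a 10 cm baseline:
print(triangulate_rectified(420, 400, focal_px=700, baseline_m=0.10))  # ≈ 3.5 m
```

Note how a 1 px matching error changes the depth noticeably at small disparities, which is why keypoint quality (FAST/SIFT/SURF, step 1 above) matters so much for the final point cloud.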
That all being said... the Android Computer Vision Plugin (also apparently called "Starry Night") seems to have these capabilities. However, in version 1.0, only PrimeSense sensors are supported. See the description of the plugin:**
Starry Night is an easy to use Unity plugin that provides high-level 3D computer vision processing functions that allow applications to interact with the real world. Version 1.0 provides SLAM (Simultaneous Localization and Mapping) functions which can be used for 3D reconstruction, augmented reality, robot controls, and many other applications. Starry Night will interface to any type of 3D sensor or stereo camera. However, version 1.0 will interface only to a PrimeSense Carmine sensor.
*Note: that tutorial is in MATLAB, but I think the overview section gives a good understanding of stereo calibration.
**as of May 12th, 2014