Useful examples of Kinect in combination with Unity? - unity3d

My colleague and I are working on a project where we need to use Kinect to control an application made in Unity. We have familiarized ourselves with Kinect and Unity, but we can't find any examples or tutorials about how to use the two together. Does anybody have any useful resources for this? As my project partner says: the more examples, the merrier.

Here is a good site with a lot of Kinect projects people have worked on; many of them use Unity with the Kinect SDK:
http://channel9.msdn.com/search?term=kinect+unity

Kinect with MSSDK - https://www.assetstore.unity3d.com/#/content/7747
Kinect with OpenNI 1.5 - https://www.assetstore.unity3d.com/#/content/7225

Markerless motion capture with a Kinect (or an ASUS Xtion Pro Live or PrimeSense Carmine), plus facial animation in Unity: http://www.faceshift.com/unity

Related

How do I render over a tracked image on HoloLens?

I'd like the HoloLens to take in the camera feed and render an image over tracked images, but I can't find a concrete explanation online of how to do this. I'd like to avoid using Vuforia etc. for this.
I'm currently using the AR Foundation Tracked Image Manager (https://docs.unity3d.com/Packages/com.unity.xr.arfoundation#2.1/manual/tracked-image-manager.html) to achieve the same functionality on mobile; however, it doesn't seem to work very well on HoloLens.
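Roughly, what I have on mobile follows the standard trackedImagesChanged pattern of ARTrackedImageManager; here is a simplified sketch (the manager and overlay prefab references are placeholders, not actual project code):

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    // Minimal sketch of the usual AR Foundation tracked-image pattern.
    // The manager and prefab references are placeholders assigned in the Inspector.
    public class TrackedImageOverlay : MonoBehaviour
    {
        [SerializeField] ARTrackedImageManager imageManager;
        [SerializeField] GameObject overlayPrefab; // hypothetical content to render over the image

        void OnEnable()  { imageManager.trackedImagesChanged += OnTrackedImagesChanged; }
        void OnDisable() { imageManager.trackedImagesChanged -= OnTrackedImagesChanged; }

        void OnTrackedImagesChanged(ARTrackedImagesChangedEventArgs args)
        {
            // Parent the overlay to each newly tracked image so it follows the image's pose.
            foreach (var trackedImage in args.added)
                Instantiate(overlayPrefab, trackedImage.transform);
        }
    }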
Any help would be very appreciated, thanks!
AR Foundation is a Unity tool, and its 2D image tracking feature is not supported on HoloLens for now. You can refer to this link to learn more about feature support per platform: Platform Support
Currently, Microsoft does not provide an official library that supports image tracking on HoloLens. But that sort of thing is possible with OpenCV: you can implement it yourself or use a third-party library.
Besides, if you are using HoloLens 2 and using a QR code as the tracked image is an acceptable option for your project, I recommend using Microsoft.MixedReality.QR to detect a QR code in the environment and get the coordinate system for the QR code. For more information, see: QR code tracking
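If you go the QR code route, a minimal sketch of the Microsoft.MixedReality.QR watcher setup could look like this (the class and field names are illustrative, and locating the code in world space via its SpatialGraphNodeId is omitted):

    using Microsoft.MixedReality.QR;
    using UnityEngine;

    // Minimal sketch: request access, then watch for QR codes in the environment.
    // Turning QRCode.SpatialGraphNodeId into a Unity pose is omitted here.
    public class QRCodeDetector : MonoBehaviour
    {
        QRCodeWatcher watcher;

        async void Start()
        {
            var access = await QRCodeWatcher.RequestAccessAsync();
            if (access != QRCodeWatcherAccessStatus.Allowed || !QRCodeWatcher.IsSupported())
                return;

            watcher = new QRCodeWatcher();
            watcher.Added += (sender, args) =>
                Debug.Log("QR code found: " + args.Code.Data + ", side length " + args.Code.PhysicalSideLength + " m");
            watcher.Start();
        }

        void OnDestroy()
        {
            if (watcher != null) watcher.Stop();
        }
    }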

Can anyone explain the technicalities of the zSpace AR-VR product and how it works?

I just came to know about an AR-VR company that makes educational interactive content. I know about augmented reality apps, which can be developed using the Unity framework, and I know about virtual reality too.
But can anyone explain how they are doing it? Any idea or direction will be helpful.
Can we use an existing Google Cardboard and some tool to interact with the 3D object? Like this: DIY hand tracking VR controller.
Thanks in advance and let me know if you guys have more questions.
After a quick look at the official documentation: it looks like the zSpace system is a 3D display (working like NVIDIA 3D Vision or some 3D television sets) with head tracking (to render the correct perspective) and a 3D-tracked stylus for interaction.
TL;DR: It's a 3D VR-like portal through a laptop screen.
Cardboard controllers won't work with it and would be completely redundant because of the stylus.

ARCore in Unity vs Sceneform: features/use cases?

The way I understand it, there are several environments that support ARCore, and Unity and the Sceneform SDK are two of the options.
I was wondering how they differ from each other, besides one being in Java and the other in C#. Why would someone choose one over the other, aside from language preference?
Thank you
Sceneform empowers Android developers to work with ARCore without learning 3D graphics and OpenGL. It includes a high-level scene graph API, a realistic physically based renderer, an Android Studio plugin for importing, viewing, and building 3D assets, and easy integration with ARCore that makes it straightforward to build AR apps. See this video from Google I/O '18.
Whereas ARCore in Unity uses three key capabilities to integrate virtual content with the real world as seen through your phone's camera (see the sketch after this answer):
Motion tracking allows the phone to understand and track its position relative to the world.
Environmental understanding allows the phone to detect the size and location of all types of surfaces: horizontal, vertical, and angled surfaces like the ground, a coffee table, or walls.
Light estimation allows the phone to estimate the environment's current lighting conditions.
ARCore is Google’s platform for building augmented reality experiences. Using different APIs, ARCore enables your phone to sense its environment, understand the world and interact with information. Some of the APIs are available across Android and iOS to enable shared AR experiences.
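To make the Unity side concrete, here is a minimal sketch of the light estimation capability using AR Foundation (which wraps ARCore on Android); the camera manager and light references are assumptions about the scene setup:

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    // Minimal sketch: read ARCore's light estimate each frame via AR Foundation
    // and apply it to a directional light. References are assigned in the Inspector.
    public class LightEstimator : MonoBehaviour
    {
        [SerializeField] ARCameraManager cameraManager; // on the AR camera
        [SerializeField] Light sceneLight;              // a directional light to drive

        void OnEnable()  { cameraManager.frameReceived += OnFrameReceived; }
        void OnDisable() { cameraManager.frameReceived -= OnFrameReceived; }

        void OnFrameReceived(ARCameraFrameEventArgs args)
        {
            if (args.lightEstimation.averageBrightness.HasValue)
                sceneLight.intensity = args.lightEstimation.averageBrightness.Value;
        }
    }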

Official Kinect SDK and Unity3d

Does anyone know anything about using Kinect input for Unity3D with the official SDK? I've been assigned a project to try to integrate the two, but my supervisor doesn't want me to use the open Kinect stuff. The last news out of the Unity site was that the Kinect SDK requires .NET 4.0 and Unity3D only supports .NET 3.5.
Workarounds? Please point me toward resources if you know anything about it.
The OpenNI bindings for Unity are probably the best way to go. The NITE skeleton is more stable than the Microsoft Kinect SDK's, but it still requires calibration (PrimeSense mentioned that they'll have a calibration-free skeleton soon).
There are bindings from the Kinect SDK to OpenNI that make the Kinect SDK work like SensorKinect; this module also exposes the Kinect SDK's calibration-free skeleton as an OpenNI module:
https://www.assembla.com/code/kinect-mssdk-openni-bridge/git/nodes/
Because the Kinect SDK also provides ankles and wrists, and OpenNI already supported them (even though NITE didn't), all the OpenNI content, including Unity character rigs that include ankles and wrists, just works, and without calibration. The Kinect SDK bindings for OpenNI also support using NITE's skeleton and hand trackers, with one caveat: NITE's gesture detection doesn't seem to work with the Kinect SDK yet. The workaround when using the Kinect SDK with NITE's HandGenerator is to use skeleton-free tracking to provide you with a hand point. Unfortunately, you lose the ability to track just the hands when your body isn't visible to the sensor.
Still, NITE's skeleton seems more stable and more responsive than the Kinect SDK's.
How much of the raw Kinect data do you need? For a constrained problem, like just getting limb articulation, have you thought about using an agnostic communication scheme such as a TcpClient? Just create a simple TCP server, in .NET 4.0, that links to the Kinect SDK and pumps out packets with the info you need every 30 ms or so. Then write a receiving client in Unity (a sketch follows below). I had a similar problem with a different SDK. I haven't tried the Kinect, though, so maybe my suggestion is overkill.
If you want real-time depth/color data you might need something a bit faster, perhaps using pipes?
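A minimal sketch of the Unity-side receiving client could look like this (the port and packet format, 20 joints of 3 floats each, are illustrative assumptions, and the .NET 4.0 server that writes them is not shown):

    using System.IO;
    using System.Net.Sockets;
    using System.Threading;
    using UnityEngine;

    // Minimal sketch of a Unity client that receives joint positions from a
    // separate .NET 4.0 Kinect server over TCP. The port and the packet layout
    // (20 joints * 3 floats, written with BinaryWriter) are assumptions.
    public class KinectTcpClient : MonoBehaviour
    {
        const int Port = 9001;     // assumed server port
        const int JointCount = 20;

        Vector3[] joints = new Vector3[JointCount];
        readonly object jointLock = new object();
        TcpClient client;
        Thread readThread;
        volatile bool running;

        void Start()
        {
            client = new TcpClient("127.0.0.1", Port);
            running = true;
            readThread = new Thread(ReadLoop) { IsBackground = true };
            readThread.Start();
        }

        void ReadLoop()
        {
            try
            {
                using (var reader = new BinaryReader(client.GetStream()))
                {
                    while (running)
                    {
                        // Each packet: JointCount * 3 floats (x, y, z per joint).
                        var packet = new Vector3[JointCount];
                        for (int i = 0; i < JointCount; i++)
                            packet[i] = new Vector3(reader.ReadSingle(), reader.ReadSingle(), reader.ReadSingle());

                        lock (jointLock) { packet.CopyTo(joints, 0); }
                    }
                }
            }
            catch (IOException) { /* socket closed; stop reading */ }
        }

        void Update()
        {
            lock (jointLock)
            {
                // Use joints[] here, e.g. drive a rig or debug-draw the skeleton.
                Debug.DrawLine(joints[0], joints[1], Color.green);
            }
        }

        void OnDestroy()
        {
            running = false;
            if (client != null) client.Close();
        }
    }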

OpenGL game programming

I want to develop the same game on different platforms like Android, webOS, and iPhone (iOS).
I heard we can port native code to these platforms. I am using Windows. Is there any chance for me to develop the game on Windows and port it to the above platforms?
Thanks in advance,
Aswan
You cannot just simply port them: OpenGL ES has only a subset of OpenGL's functionality, and there is no GLUT in ES. Besides, the OpenGL setup code (e.g. preparing the canvas, loading textures) can differ from platform to platform.
You can't just copy and paste a Windows OpenGL game to one of those platforms and have it work flawlessly. There are tools and engines which can help you develop a game once and then have it work on multiple platforms (how well they work I can't say, however; I've never used them). If you check this question here on SO, there is a good list of choices from the OP, some of which have the aforementioned ability.
Be careful: the iPhone uses OpenGL ES, which is a subset of OpenGL.
A good thought would be to look at this before you proceed.
Hope this helps.
Thanks,
Madhup