Rendering Mediapipe Hands in Three.js for a Physics Simulation

I'm looking to use Handsfree.js and Three.js to create a game where you can pick up and move around blocks with your hand. I'm wondering where to start: I have a basic understanding of Three.js but don't know how I should approach this project. I'm trying to render and animate a hand dynamically based on the coordinates that Mediapipe outputs.
I've looked into Three.js's SkinnedMesh, but I'm not sure if it would be feasible to control an entire hand with it. I've also tried rigging a hand in Blender and then importing it into Three.js, but I couldn't find any documentation on how to control imported rigs. How should I go about dynamically animating a hand? Also, how would I add physics to a hand that I import? Is Three.js the right tool for the job?
I've also tried using Unity, but can't find any libraries that output 3D coordinates for the hands.
(For reference, Mediapipe outputs an array of Vector3s with the positions of each joint on the hand.)
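To make the question concrete, here's the kind of per-frame mapping I'm imagining, as a rough Unity-style sketch since that's where I prototyped (the class and method names are just illustrative, and it assumes something else is feeding in the landmark array; a skinned mesh would be optional polish on top of this):

```csharp
using UnityEngine;

// Illustrative sketch: one kinematic sphere per Mediapipe landmark.
// Assumes another component calls UpdateHand() with the 21 world-space
// joint positions each frame; nothing here is Mediapipe-specific.
public class LandmarkHand : MonoBehaviour
{
    const int JointCount = 21; // Mediapipe reports 21 landmarks per hand
    Transform[] joints = new Transform[JointCount];

    void Start()
    {
        for (int i = 0; i < JointCount; i++)
        {
            var sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            sphere.transform.localScale = Vector3.one * 0.02f;

            // Kinematic rigidbody: the joints follow the tracking data but
            // can still push dynamic rigidbodies (e.g. blocks) around.
            var rb = sphere.AddComponent<Rigidbody>();
            rb.isKinematic = true;

            joints[i] = sphere.transform;
        }
    }

    // Call once per frame with the landmark positions from the tracker.
    // (For smoother physics, rb.MovePosition in FixedUpdate would be better.)
    public void UpdateHand(Vector3[] landmarks)
    {
        for (int i = 0; i < JointCount; i++)
            joints[i].position = landmarks[i];
    }
}
```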

Related

AR Foundation face mesh for creating custom assets

I'm looking for either a 3D model or an image file over which I can apply my own custom graphical elements, such as eyeliner or lipstick.
In the ARCore docs, the solution to this issue is very well described. You can get either an FBX file or a PSD template, over which you place your own elements.
From what I can tell, the principles of ARCore and ARKit are very much the same - there's a standard face mesh which gets contorted to the shape of a detected face. However, I'm unable to find any such materials using Google.
Just use the same face model and use slightly larger copies of it for the makeup. No one is going to get close enough to see how thick it's caked on, because all the polys would start disappearing anyway...

How to grab the 2D views/textures from a 3D Object in Unity

I am working on a projection mapping project and I am prototyping in Unity 3D. I have a cube-like object with a 3D terrain and characters in it.
To recreate the 3D perspective and feel, I am using two projectors which will project onto a real-world object that is exactly like the Unity object. In order to do this I need to extract 2D views from the shape in Unity.
Is there an easy way to achieve this?
Interesting project. It sounds like you would need multiple displays, one for each projector, each fed by a separate virtual camera in Unity, as documented in Unity's multi-display manual page.
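A rough sketch of the display-activation side, assuming one camera per projector (the script name is illustrative; each camera's targetDisplay is set to its projector's display index in the Inspector or from code):

```csharp
using UnityEngine;

public class ActivateProjectorDisplays : MonoBehaviour
{
    void Start()
    {
        // Display.displays[0] is the primary display and is always active;
        // every additional connected display (projector) must be activated
        // explicitly before a camera can render to it.
        for (int i = 1; i < Display.displays.Length; i++)
            Display.displays[i].Activate();
    }
}
```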
Not sure if I understood your concept correctly from the description above. If the spectator should be able to walk around the cube onto which the rendered virtual scene is projected, it would also be necessary to track the spectator's head/eyes to achieve a convincing 3D effect: the virtual scene would need to be rendered from the matching point of view in virtual space (this works for only one spectator). Otherwise the perspective would only be "right" from one single point in real space.
The effect would also only be convincing with stereo view, either by using shutter glasses or something similar. Shadows are another problem when projecting onto the cube from outside the scene. With only two projectors, you would also need to correct the perspective distortion when projecting onto multiple sides of the cube at the same time.
As an inspiration: there's also this fantastic experiment by Johnny Chung Lee demonstrating a head-tracking technique using the Wii Remote, which might be useful in a projection mapping project like yours.
(In order to really solve this problem, it might be best to replace the conventional projectors with AR glasses that have the projectors built in, combined with special projection surfaces that allow for multiple spectators at the same time (like CastAR). But I have no idea if these devices are already on the market... However, I see the appeal of a simple projection mapping without special equipment. In that case it might be possible to move away from a realistic 3D scene and use more experimental/abstract graphics projected onto the cube...)

Unity 2018: 2D Object - SpriteMesh

OK, so I have looked around the internet but I cannot find the SpriteMesh. I should be able to right-click my sprite > 2D Object > SpriteMesh.
Problem is that I don't see the option "SpriteMesh" anywhere.
Here's the deal. I created a bunch of 2D pieces for a character: head, body, two arms, two legs, two hands, and two feet. I imported the sprite as a PNG file and changed Sprite Mode to Multiple. I used the Sprite Editor to slice the character into pieces automatically. There's also nothing inside the Sprite Editor that lets me rig bones either.
Now I need to rig the toon with bones and skin. However, I cannot find a way to do this. In the tutorials I've watched, the guy adds a SpriteMesh to each of the parts. However, when I try to do this, the option just doesn't exist. I see SpriteMask but no SpriteMesh.
I'm using Unity 2018.2.18f1.
I have zero experience with animations like this. Normally I create a player/enemy without legs/arms, so they just float, and I use the Animation tab to change size/shape to suggest movement. However, I'd like to take the next step and make the game look better.
How can I rig my toon? What steps do I need to follow?
All help is appreciated!
I guess you want to use Unity's new 2D Animation features to rig your 2D character.
You say you're using Unity 2018.2.18f1, but you need Unity 2018.3 or later to use these tools.
I suggest using Unity Hub, which makes it easy to install multiple versions (including beta versions) side by side.
There is also a really nice video from Brackeys on this subject.
Once you have 2018.3 or later installed, open your project, go to the Window > Package Manager window, and install the 2D Animation package along with its related 2D packages.
I don't think you need 2D Pixel Perfect, but it's always nice to have.

Tango Predefined Objects

I'm somewhat familiar with Tango and Unity. I have worked through the examples and can get them to work correctly. I have seen some people doing AR-type examples where they have their custom objects in an area to interact with; another example would be directions, where you follow a line to a destination.
The one thing I cannot figure out is how to precisely place a 3D object into a scene. How are people getting that data in order to place it within Unity in the correct location? I have an area set up, and the AR demo seems promising, but I'm not placing objects with the click of a finger. What I am looking to do is have my 3D object already be there when someone walks by, so they can interact with it. Any ideas? I feel like I've been searching everywhere with little luck finding an answer to this question.
In my project, I have a specific space the user will always be in - so I place things in the (single room) scene when I compile.
I create an ADF (Area Description File) using the provided apps, and then my app has a mode where it does the 3D reconstruction and saves off the mesh.
I then load the mesh into my Unity scene (I have to rotate it by 180° on the Y axis because of how I am saving the .obj files).
You now have a guide letting you place objects exactly where you want them, and a nice environment to build up your scene.
I disable the mesh before I build. When Tango localises, your Unity objects match up with the Tango world space.
If you want to place objects programmatically, you can do so in scripts using Instantiate.
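A minimal sketch of that (the prefab field and positions array are placeholders you would wire up yourself, e.g. from the recorded file mentioned below):

```csharp
using UnityEngine;

public class PlaceObjects : MonoBehaviour
{
    public GameObject prefab;           // assign in the Inspector
    public Vector3[] recordedPositions; // e.g. loaded from a saved file

    void Start()
    {
        // Positions are in the ADF-aligned world space, so the spawned
        // objects line up with the real room once Tango has localised.
        foreach (var pos in recordedPositions)
            Instantiate(prefab, pos, Quaternion.identity);
    }
}
```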
I also sometimes have my app place markers with a touch, like in the examples, and record the positions to a file, which I then use to place objects precisely... but having a good mesh loaded into your scene is really the nicest way I've found.

Is it possible to combine textures in Maya into some sort of atlas?

I'm starting to use Maya to do some bone animations of a 2D character, which is composed of several different parts (legs, body, head, weapon, etc.). I have each part in a separate PNG image file. Right now I have a polygon for each part, with its own material and texture.
I was wondering if there's a way to automatically combine the textures into an atlas, make all the polygons use the same material with the atlas, and correct their UV maps so they still point to the right part of the atlas. Right now, I can do it manually in reverse: I can make the atlas outside Maya with other tools, then use the atlas on a material and manually correct the UV maps of each polygon. But it's a very long process and if I need to change a texture, I have to do it all over again. So I was wondering if there's a way to automate it.
The reason why I'm trying to do this is to save draw calls in Unity. From what I understand, Unity can batch objects as long as they share the same material. So instead of having a draw call for each polygon in the character, I'd like to have a single draw call for the whole character. I'm pretty new to Maya, so any help would be greatly appreciated!
If you want to do the atlassing in Maya, you can do it by duplicating your mesh and using the Transfer Maps tool to bake the textures of all the different meshes onto the duplicate as a single model. The steps would be:
1) Duplicate the mesh
2) Use UV layout to make sure that the duplicated mesh has no overlapping UVs (or only has them where appropriate, like mirrored pieces).
3) Use the Transfer Maps... tool to project the original mesh onto the new one, using the "Use topology" option to ensure that the projection is clean.
The end result should be that the new model has the same geometry and appearance as the original, but with all of its textures combined onto a single sheet attached to a single material.
The limitation of this method is that some kinds of mesh (particularly meshes that self-intersect) may not project properly, leaving you to manually touch up the atlassed texture. As with any atlassing solution, you will probably see some softening in the textures, since the atlas texture is not a pixel-for-pixel copy but rather a projection, and thus a resampling.
You may find it more flexible to reprocess the character in Unity with a script or AssetPostprocessor. Unity has a native texture atlassing function, Texture2D.PackTextures, documented here. Unity comes with a script for combining static meshes, but for skinned meshes you'd need to implement your own; there's an example on the Unity wiki that probably does what you want: http://wiki.unity3d.com/index.php?title=SkinnedMeshCombiner (caveat: we do something similar to this at work, but I can't share it; I have not used the one in this link). FWIW, Unity's native atlassing works only in rectangles, so it's not as memory-efficient as something you could write yourself.
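As a rough sketch of the native atlassing call (component and field names are mine; remapping each part's mesh UVs into the returned rects is left out):

```csharp
using UnityEngine;

public class AtlasBaker : MonoBehaviour
{
    public Texture2D[] partTextures; // per-part textures; must be readable

    void Start()
    {
        var atlas = new Texture2D(2, 2);
        // PackTextures resizes the atlas and returns one UV rect per input
        // texture; use those rects to remap each part's mesh.uv so every
        // renderer can share one material that samples the atlas.
        Rect[] uvRects = atlas.PackTextures(partTextures, 2, 2048);
    }
}
```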