Getting 3D point coordinates of a person as the person is walking in Unity - unity3d

For a research project, I need to find the coordinates of 3D points on the surface of a person's body as the person walks straight ahead. I know that Unity renders an object using a mesh based on 3D point coordinates.
I know very little about Unity. I wonder if it is possible to use Unity to create a single person character, make him walk, get the 3D points of that person every 50 ms or 1 s, etc., and save them to a file? Then I could read the point coordinates later using either C# or Python and perform my simulation. How easy is that? Is there any sample code, example, or ready-made character which I could use in a relatively short time?
Any suggestion for a tool or software with which I could achieve that would be great.
Thanks

The easiest thing to do, in my opinion, would be to use either Kinect or photogrammetry to create your model as a point cloud, which will have vertices on the surface only. This is one of the reasons why I am suggesting a point cloud: this way you do not have to work out which vertices of a mesh lie on the surface.
Then import it into Unity using Point Cloud Viewer.
Finally, in Unity you can easily log all the global positions of the model over time using transform.TransformPoint(meshVert).
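A minimal sketch of such a logger is below (the sampling interval, output path, and component setup are placeholders; a rigged, animated character would need a SkinnedMeshRenderer and BakeMesh instead of a static MeshFilter):

```csharp
using System.IO;
using System.Text;
using UnityEngine;

// Minimal sketch: samples every vertex of this object's mesh, converts it to
// world space with transform.TransformPoint, and appends the positions to a
// CSV file at a fixed interval. Interval and output path are placeholders.
public class VertexLogger : MonoBehaviour
{
    public float interval = 0.05f;            // 50 ms between samples
    public string outputPath = "vertices.csv";

    Mesh mesh;
    float timer;

    void Start()
    {
        // For an animated character, bake the current pose each sample with
        // SkinnedMeshRenderer.BakeMesh instead of reading a static mesh.
        mesh = GetComponent<MeshFilter>().mesh;
    }

    void Update()
    {
        timer += Time.deltaTime;
        if (timer < interval) return;
        timer = 0f;

        var sb = new StringBuilder();
        foreach (Vector3 v in mesh.vertices)
        {
            Vector3 world = transform.TransformPoint(v);   // local -> global position
            sb.AppendLine($"{Time.time},{world.x},{world.y},{world.z}");
        }
        File.AppendAllText(outputPath, sb.ToString());
    }
}
```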

Related

How to add 3D elements into the Hololens 2 field of view

I'm trying to build a Remote Assistance solution using the HoloLens 2 for university. I have already set up the MRTK WebRTC example with Unity. Now I want to add the ability for the desktop counterpart to add annotations in the field of view of the HoloLens to support the remote guidance, but I have no idea how to achieve that. I was considering Azure Spatial Anchors, but I haven't found a good example of adding 3D elements to the remote field of view of the HoloLens from a 2D desktop environment. Also, I'm not sure if Spatial Anchors is the right framework, as they are mostly for persistent markers in the AR environment, and I'm rather looking for a temporary visual indicator.
Did anyone already work on such a solution and can give me a few frameworks/hints on where to start?
To find the actual world location of a point from a 2D image, you can refer to this answer: https://stackoverflow.com/a/63225342/11502506
In short, the cameraToWorldMatrix and projectionMatrix transforms define, for each pixel, a ray in 3D space representing the path taken by the photons that produced that pixel. But anything along that ray will show up on the same pixel. So to find the actual world location of a point, you'll need to use the Physics.Raycast method to calculate the impact point in world space where the ray hits the SpatialMapping mesh.
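A rough sketch of that raycast step, assuming the 2D pixel has already been converted to a world-space ray as described in the linked answer (the "SpatialMapping" layer name and the maximum distance are assumptions that depend on your project setup):

```csharp
using UnityEngine;

// Rough sketch: once a pixel has been turned into a world-space ray via the
// cameraToWorldMatrix / projectionMatrix of the locatable camera, raycast it
// against the spatial mapping mesh to get the actual 3D impact point.
public static class AnnotationPlacer
{
    public static bool TryGetWorldPoint(Vector3 rayOrigin, Vector3 rayDirection,
                                        out Vector3 worldPoint)
    {
        // Layer name and max distance are assumptions for this sketch.
        int spatialMappingMask = LayerMask.GetMask("SpatialMapping");
        if (Physics.Raycast(rayOrigin, rayDirection, out RaycastHit hit,
                            10f, spatialMappingMask))
        {
            worldPoint = hit.point;   // where the ray hits real-world geometry
            return true;
        }
        worldPoint = Vector3.zero;
        return false;
    }
}
```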

AR Foundation face mesh for creating custom assets

I'm looking for either a 3D model or an image file over which I can apply my own custom graphical elements, such as eyeliner or lipstick.
In the ARCore docs, the solution to this issue is very well described. You can get either an FBX file or a PSD template, over which you place your own elements.
From what I can tell, the principles of ARCore and ARKit are very much the same - there's a standard face mesh which gets contorted to the shape of a detected face. However, I'm unable to find any such materials using Google.
Just use the same face model and use slightly larger copies of it for the makeup. No one is going to get close enough to see how thick it's caked on, because all the polys would start disappearing anyway...
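If it helps, a hedged sketch of that idea (the face mesh object, material, and scale factor are placeholders):

```csharp
using UnityEngine;

// Sketch only: duplicate the tracked face mesh object and scale the copy up
// slightly so the "makeup" material renders just above the skin surface.
public class MakeupLayer : MonoBehaviour
{
    public GameObject faceMeshObject;   // the standard face mesh (placeholder)
    public Material makeupMaterial;     // e.g. eyeliner/lipstick texture (placeholder)

    void Start()
    {
        GameObject layer = Instantiate(faceMeshObject, faceMeshObject.transform.parent);
        layer.transform.localScale = faceMeshObject.transform.localScale * 1.01f; // ~1% larger
        layer.GetComponent<MeshRenderer>().material = makeupMaterial;
    }
}
```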

How to generate surface/plane around a real world Object (Like bottle) using Unity & ARCore?

I built an APK using the HelloAR scene (which is provided with the ARCore package). The app only detects horizontal surfaces like a table and creates its own semi-transparent plane over them. When I moved my phone around a bottle, the app again only created a horizontal plane cutting through the bottle. I expected ARCore to create planes along the bottle as I move my phone around, like polygons in a mesh.
Another scenario: I placed 2 books on the floor, each with a different thickness. But the HelloAR app creates only one semi-transparent horizontal surface over the thicker book, instead of creating two surfaces (one for each book).
What is going wrong here? How can I fix it and make the HelloAR app work more precisely? Please help.
Software: Unity v2018.2, ARCore v1.11.0
ARCore generates an approximate point cloud as you move the device slowly, identifying feature points; these points are detected by contrast between the different shapes. If you run your application in test mode in Unity, you can see how the points are placed in your empty scene.
Once the program has enough points at the "same height" (I don't know the exact precision), it generates the plane that you can see, but it won't detect planes separated by a height difference of 5 cm or even more.
If you want to know the approximate accuracy of the app, test it in Unity and write a script to capture the points that have been used to generate the planes, then check the Y differences to see what the tolerance distance is.
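If you go the scripted route, something along these lines could log the heights of the detected planes so you can compare them (this uses the GoogleARCore API that the HelloAR scene is built on; exact names may differ between SDK versions):

```csharp
using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;

// Sketch: collect all planes ARCore has detected each frame and log the Y of
// their center poses, so the height differences (and thus the tolerance) can
// be inspected. API names are from the ARCore Unity SDK ~1.x.
public class PlaneHeightLogger : MonoBehaviour
{
    readonly List<DetectedPlane> planes = new List<DetectedPlane>();

    void Update()
    {
        Session.GetTrackables<DetectedPlane>(planes, TrackableQueryFilter.All);
        for (int i = 0; i < planes.Count; i++)
        {
            Debug.Log($"Plane {i} height Y = {planes[i].CenterPose.position.y}");
        }
    }
}
```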
Okay, so Vuforia is currently one of the leading SDKs for augmented reality, providing a wide array of detection options (images, ground, point, 3D objects, ...).
So regarding your question about detecting a bottle, I would most certainly use the 3D model detection feature. You can read the official docs here.
You first need to generate an approximation of the object in a 3D modeling program and then use their tool to generate the detection model. Then you put this in Unity and set up the detection (no coding needed).
I have some experience with this kind of detection. I used it to detect a large 2 m x 2 m scale model of an electric vehicle. It works great; you can walk around it and it tracks it through and through. You can see a short official demo here.
Hope this helps to explain it in short!

How to prevent the car from leaving the road in a Unity3D driving simulation that uses Mapbox maps?

I'm trying to make a car simulation that uses a real-world map. I'm currently using Mapbox for getting map features. For the car asset I'm using Unity's Standard Assets.
My question is how I can prevent the car from getting off the road. There are many other features like parks, lakes, etc., and I want to make the driver use only the roads for driving.
Is there anything I can do? I thought about adding colliders for all the other features (parks, gardens, ...), but there is a large number of features to add colliders to. Is there any other solution?
If you can get the road information from Mapbox in terms of coordinates (which I don't know), you could write a script which would automatically create a mesh with a mesh collider on each side of the road.
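A rough sketch of that idea, assuming you can extract a list of points along a road edge from the Mapbox data (the point source, wall height, and mesh layout are placeholders):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch only: given points along one edge of a road, build a thin invisible
// wall mesh and assign it to a MeshCollider so the car cannot cross it.
public class RoadEdgeWallBuilder : MonoBehaviour
{
    public float wallHeight = 2f;   // arbitrary wall height

    public void Build(List<Vector3> edgePoints)
    {
        var vertices = new List<Vector3>();
        var triangles = new List<int>();

        for (int i = 0; i < edgePoints.Count - 1; i++)
        {
            int baseIndex = vertices.Count;
            Vector3 a = edgePoints[i];
            Vector3 b = edgePoints[i + 1];

            // two bottom and two top vertices for this wall segment
            vertices.Add(a);
            vertices.Add(b);
            vertices.Add(a + Vector3.up * wallHeight);
            vertices.Add(b + Vector3.up * wallHeight);

            // two triangles forming the quad between the segment's points
            triangles.AddRange(new[] { baseIndex, baseIndex + 2, baseIndex + 1,
                                       baseIndex + 1, baseIndex + 2, baseIndex + 3 });
        }

        var mesh = new Mesh();
        mesh.SetVertices(vertices);
        mesh.SetTriangles(triangles, 0);
        mesh.RecalculateNormals();

        // collider only, no renderer, so the wall stays invisible
        gameObject.AddComponent<MeshCollider>().sharedMesh = mesh;
    }
}
```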
You can also create a collision mesh in software like Blender, Maya, 3ds Max or others and import it into Unity3D. You could then use this imported model with the mesh collider.
Here you can see one of many tutorials on Creating Custom Collision for your Unity Scenes.

How to set dynamic hotspots for a 360 image with Unity 3D

I am trying to build a visitor tour with Unity 3D. I have panoramic pictures of bedrooms within a hotel, and I would like to add points (hotspots) to my pictures that lead to another picture.
The problem is that I want to add these points dynamically via a backend, and I can't find a way to achieve that in Unity.
I will try to answer this question.
Unity has an XYZ coordinate system that can be mapped to the real world. I would measure the real distances to these points (from the center where you took your picture) in your location/room and send these coordinates via the backend to the Unity3D client.
In Unity you can create Vector3 positions or directions based on the coordinates you sent before. Use these positions/directions to instantiate 'hotspot' object prefabs at the right positions and in the right directions. It might be necessary to adjust the scale/units to get the right result.
Once you have your 'hotspot' objects in place, add a script to them that will load a new scene (on click) with another location/image, and repeat the process.
This is a very brief suggestion on how to do it. The code would be quite simple.
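A brief sketch of what those two steps could look like (the data class, prefab reference, and scene names are placeholders, and fetching the data from your backend is not shown):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.SceneManagement;

// Placeholder data class for what the backend could send per hotspot.
[System.Serializable]
public class HotspotData
{
    public float x, y, z;        // position relative to where the picture was taken
    public string targetScene;   // scene containing the next 360 picture
}

// Instantiates a hotspot prefab at each position received from the backend.
public class HotspotSpawner : MonoBehaviour
{
    public GameObject hotspotPrefab;

    public void Spawn(List<HotspotData> hotspots)
    {
        foreach (HotspotData data in hotspots)
        {
            Vector3 position = new Vector3(data.x, data.y, data.z);
            GameObject hotspot = Instantiate(hotspotPrefab, position, Quaternion.identity);
            hotspot.GetComponent<Hotspot>().targetScene = data.targetScene;
        }
    }
}

// Attached to the hotspot prefab (which needs a Collider for clicks to register);
// loads the next location's scene when clicked.
public class Hotspot : MonoBehaviour
{
    public string targetScene;

    void OnMouseDown()
    {
        SceneManager.LoadScene(targetScene);
    }
}
```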