Hi!
I have a set of points in 3D which all lie on a plane and represent the floor of a room.
Now I want to create walls. My idea is to extrude the floor, or to use translational sweeps.
I tried to find some code on the web, or information on where to start, but I wasn't lucky.
Does anyone know some good tutorials or libs?
THX
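A minimal sketch of the edge-extrusion idea, in C# with System.Numerics (the names here are illustrative, not from any particular library). It assumes the floor boundary vertices are stored in order around the outline (no holes) and that you know the floor plane's unit normal:

using System.Collections.Generic;
using System.Numerics;

static class WallBuilder
{
    // Extrude an ordered floor outline along the plane normal into wall quads.
    // "floor" holds the boundary vertices in order; "up" is the floor plane's
    // unit normal; "height" is the desired wall height.
    public static List<Vector3[]> ExtrudeWalls(IReadOnlyList<Vector3> floor, Vector3 up, float height)
    {
        var walls = new List<Vector3[]>();
        for (int i = 0; i < floor.Count; i++)
        {
            Vector3 a = floor[i];
            Vector3 b = floor[(i + 1) % floor.Count];   // wrap around to close the loop
            // One quad per boundary edge: bottom edge a->b, top edge shifted by the normal.
            walls.Add(new[] { a, b, b + up * height, a + up * height });
        }
        return walls;
    }
}

Each quad (a, b, b', a') splits into the triangles (a, b, b') and (a, b', a'); if the walls end up facing inward, reverse the vertex order. A translational sweep is the same idea with the offset vector not restricted to the plane normal.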
I have a grid of terrains, and I want to translate the player's (or clicked) location into spherical planet coordinates.
Not into Unity world space; the grid is "the sphere", and I'm looking for the current geo coordinates on that sphere, like Earth's latitude and longitude.
I honestly have no idea how to begin this conversion, and would appreciate any answers.
I'm not bad at coding and can work out a solution from a general "here's how" direction, but I'd love for one of the pros out there to give a deeper dive on how to do it right.
Thank you in advance.
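One simple convention to start from, assuming the grid is meant to wrap the sphere like an equirectangular world map (the grid's X axis covers the full 360 degrees of longitude, its Z axis the 180 degrees of latitude); the parameter names are placeholders for your grid's size in world units:

static class GeoMapper
{
    // Map a position on a rectangular terrain grid to geographic coordinates,
    // assuming an equirectangular projection of the whole planet.
    public static (float latitude, float longitude) GridToGeo(
        float x, float z, float gridWidth, float gridHeight)
    {
        float longitude = (x / gridWidth) * 360f - 180f;   // -180..+180 degrees
        float latitude  = (z / gridHeight) * 180f - 90f;   //  -90..+90 degrees
        return (latitude, longitude);
    }
}

If only part of the grid wraps the planet, or you need less distortion near the poles, you'd swap in a different projection, but the shape of the conversion stays the same.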
I want to access Unreal's Mesh Distance Field in order to do some A* or alternative pathfinding.
My environment needs wall and ceiling walkers as well as flying bots. The out-of-the-box NavMesh is 2D and won't work for me.
Has anyone tried this?
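Not the distance-field access itself, but for the search half of the problem: once you can ask "is this cell free?" (e.g. the sampled distance exceeds the agent's radius), the pathfinding can be a plain A* over a 3D voxel grid, which treats walls, ceilings, and open air uniformly. A minimal engine-agnostic sketch in C#; the isFree callback is a hypothetical stand-in for your distance-field query:

using System;
using System.Collections.Generic;

static class AStar3D
{
    // A* over a uniform 3D voxel grid with 6-connected moves.
    public static List<(int x, int y, int z)> FindPath(
        (int x, int y, int z) start,
        (int x, int y, int z) goal,
        Func<(int x, int y, int z), bool> isFree)
    {
        var open = new PriorityQueue<(int x, int y, int z), float>();
        var cameFrom = new Dictionary<(int, int, int), (int, int, int)>();
        var gScore = new Dictionary<(int, int, int), float> { [start] = 0f };
        var closed = new HashSet<(int, int, int)>();
        open.Enqueue(start, Heuristic(start, goal));

        while (open.Count > 0)
        {
            var current = open.Dequeue();
            if (current == goal) return Reconstruct(cameFrom, current);
            if (!closed.Add(current)) continue;            // skip stale duplicate entries

            foreach (var next in Neighbours(current))
            {
                if (!isFree(next)) continue;               // blocked voxel
                float g = gScore[current] + 1f;            // uniform step cost
                if (!gScore.TryGetValue(next, out float old) || g < old)
                {
                    gScore[next] = g;
                    cameFrom[next] = current;
                    open.Enqueue(next, g + Heuristic(next, goal));
                }
            }
        }
        return null;                                       // no path exists
    }

    static IEnumerable<(int x, int y, int z)> Neighbours((int x, int y, int z) c)
    {
        yield return (c.x + 1, c.y, c.z); yield return (c.x - 1, c.y, c.z);
        yield return (c.x, c.y + 1, c.z); yield return (c.x, c.y - 1, c.z);
        yield return (c.x, c.y, c.z + 1); yield return (c.x, c.y, c.z - 1);
    }

    // Manhattan distance: admissible for 6-connected unit-cost moves.
    static float Heuristic((int x, int y, int z) a, (int x, int y, int z) b) =>
        Math.Abs(a.x - b.x) + Math.Abs(a.y - b.y) + Math.Abs(a.z - b.z);

    static List<(int x, int y, int z)> Reconstruct(
        Dictionary<(int, int, int), (int, int, int)> cameFrom,
        (int x, int y, int z) current)
    {
        var path = new List<(int x, int y, int z)> { current };
        while (cameFrom.TryGetValue(current, out var prev))
        {
            current = prev;
            path.Add(current);
        }
        path.Reverse();
        return path;
    }
}

Wall and ceiling walkers would additionally constrain expanded cells to those adjacent to geometry (small sampled distance), while flyers can use the grid as-is.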
For a research project, I need to find the coordinates of 3D points on the surface of a person's body as the person walks straight ahead. I know that Unity renders an object using a mesh based on 3D point coordinates.
I know very little about Unity. Is it possible to use Unity to create a person character, make him walk, sample the 3D points of that person every 50 ms or 1 s, etc., and save them to a file, so that I could later read the point coordinates with either C# or Python and run my simulation? How easy is that? Is there any sample code, example, or ready-made character which I could use in a relatively short time?
Any suggestion for a tool or software with which I could achieve that would be great.
Thanks
The easiest thing to do, in my opinion, would be to use either Kinect or photogrammetry to create your model as a point cloud, which has vertices on the surface only. That is one of the reasons I am suggesting a point cloud: this way you do not have to work out which vertices of a mesh lie on the surface.
Then import it into Unity using Point Cloud Viewer.
Finally, in Unity you can easily log all the global positions of the model over time using transform.TransformPoint(meshVert).
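A minimal sketch of that last step, assuming the imported cloud ends up on a regular MeshFilter (the file name and sampling interval are placeholder choices; for a skinned, animated character you would instead call SkinnedMeshRenderer.BakeMesh each sample to get the deformed vertices):

using System.IO;
using UnityEngine;

public class VertexLogger : MonoBehaviour
{
    public float interval = 0.05f;            // 50 ms between samples
    StreamWriter writer;
    Mesh mesh;

    void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh;
        writer = new StreamWriter("vertices.csv");
        InvokeRepeating(nameof(LogVertices), 0f, interval);
    }

    void LogVertices()
    {
        foreach (Vector3 v in mesh.vertices)
        {
            Vector3 world = transform.TransformPoint(v);   // local -> world space
            writer.WriteLine($"{Time.time},{world.x},{world.y},{world.z}");
        }
        writer.Flush();
    }

    void OnDestroy() => writer?.Close();
}

The CSV can then be read back with C# or Python for the simulation step.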
I want to do what this guy is doing in this video.
Please see this video (interacting a live human with Unity's Interactive Cloth).
My Strategy So Far:
I took a simple 3D plane GameObject in Unity, added an Interactive Cloth component, and attached the cloth to two box colliders driven by the hand joints of the Kinect skeleton.
Then I added sphere colliders to all 24 joints of the skeleton stream so the cloth collides with my body, but the results are unsatisfying.
Problem:
The cloth behaves very strangely: it jumps off weirdly whenever it falls onto my body (joints). I am stuck here. I just want a sample, a jump start, or any kind of help with how to do this.
Please always take into account that the joint positions reported by Kinect may jump or jitter. Render the sphere colliders so you can see their positions and better understand your problem, and apply a smoothing algorithm to the joint positions (a common technique is to store the last 30 positions or so and average them), or simply enable the Kinect SDK's skeleton smoothing options.
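A minimal sketch of such a moving-average smoother (the 30-sample window follows the rule of thumb above; shrink it if the cloth starts lagging behind your hands):

using System.Collections.Generic;
using UnityEngine;

public class JointSmoother
{
    readonly Queue<Vector3> history = new Queue<Vector3>();
    readonly int windowSize;

    public JointSmoother(int windowSize = 30) => this.windowSize = windowSize;

    // Feed in the raw Kinect joint position; get back the windowed average.
    public Vector3 Smooth(Vector3 rawJointPosition)
    {
        history.Enqueue(rawJointPosition);
        if (history.Count > windowSize)
            history.Dequeue();                 // drop the oldest sample

        Vector3 sum = Vector3.zero;
        foreach (Vector3 p in history)
            sum += p;
        return sum / history.Count;            // mean of the window
    }
}

Keep one JointSmoother per joint and route every raw position through it before moving that joint's collider.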
I'm starting to study OpenGL, and I'm trying to make a 3D chess-like game, but I can't figure out how to know where I have clicked on the "table" so I can play the proper animations. Any advice?
This is called "3D picking". You have to translate screen coordinates into world coordinates. From there, do a ray/collision object (bounding box?) intersection test. If they intersect, that's where the user clicked.
You'll have to do a little more than this to solve the depth-order problem: find the time of first intersection for each object, then select the one with the lowest (positive) time.
If you google for "3D picking" you might find what you are looking for.
Here is a tutorial:
http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=32
Note that this is not specific to any shape of bounding object, be it a bounding box, a polygon, a curve, etc. You just have to figure out the math for the intersection test for each type of object you want to support.
Edit:
I didn't read that tutorial before I linked it; I just figured NeHe is where all the cool kids learn OpenGL (admittedly ten years ago...).
Here is something from the OpenGL FAQ about picking:
http://www.opengl.org/resources/faq/technical/selection.htm
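A minimal sketch of the intersection test and depth-order rule described above, using axis-aligned bounding boxes and the standard slab method; it's written in C#, but the math carries over to any language:

using System;
using System.Numerics;

static class Picking3D
{
    // Ray vs. axis-aligned box; returns the parametric hit distance t, or null on a miss.
    public static float? RayAabb(Vector3 origin, Vector3 dir, Vector3 boxMin, Vector3 boxMax)
    {
        float tMin = float.NegativeInfinity, tMax = float.PositiveInfinity;
        for (int axis = 0; axis < 3; axis++)
        {
            float o = Axis(origin, axis), d = Axis(dir, axis);
            float t1 = (Axis(boxMin, axis) - o) / d;   // slab entry/exit;
            float t2 = (Axis(boxMax, axis) - o) / d;   // infinities handle d == 0
            tMin = MathF.Max(tMin, MathF.Min(t1, t2));
            tMax = MathF.Min(tMax, MathF.Max(t1, t2));
        }
        // Hit only if the slab intervals overlap in front of the ray origin.
        return (tMax >= tMin && tMax >= 0f) ? MathF.Max(tMin, 0f) : (float?)null;
    }

    // The depth-order rule: the frontmost object is the smallest positive t.
    public static int Pick(Vector3 origin, Vector3 dir, (Vector3 min, Vector3 max)[] boxes)
    {
        int picked = -1;
        float best = float.PositiveInfinity;
        for (int i = 0; i < boxes.Length; i++)
        {
            float? t = RayAabb(origin, dir, boxes[i].min, boxes[i].max);
            if (t is float hit && hit < best) { best = hit; picked = i; }
        }
        return picked;                          // -1 means nothing was hit
    }

    static float Axis(Vector3 v, int i) => i == 0 ? v.X : i == 1 ? v.Y : v.Z;
}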
waldecir, look for a raypick function. It's the name for sending a ray from the scene's camera center through the pixel you clicked on (actually, through that pixel's translated position on the camera's plane, which represents the "glass surface of the screen" in the 3D world) and returning the frontmost polygon the ray hits, together with some information, usually coordinates within the polygon's surface axes, e.g. UV or texture coordinates. By checking those coordinates, you can determine which square the user clicked on.
Rays can be sent from any position and in any direction, so you'll likely have to get the camera position and its plane center, but the documentation should be able to help you there.
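A minimal sketch of that raypick in Unity terms, since that's the quickest way to show the idea (raw OpenGL would unproject the pixel with gluUnProject and do the intersection manually). The board lying on the y = 0 plane with unit squares at the origin is an assumption; adjust for your actual board transform:

using UnityEngine;

public class BoardPicker : MonoBehaviour
{
    void Update()
    {
        if (!Input.GetMouseButtonDown(0)) return;

        // Ray from the camera through the clicked pixel.
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        Plane board = new Plane(Vector3.up, Vector3.zero);   // the table surface

        if (board.Raycast(ray, out float t))
        {
            Vector3 hit = ray.GetPoint(t);          // world-space click point
            int file = Mathf.FloorToInt(hit.x);     // column index, 0..7
            int rank = Mathf.FloorToInt(hit.z);     // row index, 0..7
            Debug.Log($"Clicked square ({file}, {rank})");
        }
    }
}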