Calculate world coordinates from a grid of terrains in Unity

I have a grid of terrains, and I want to translate the player's (or a clicked) location into spherical planet coordinates.
Not into Unity world space: the grid is "the sphere", and I am looking for the current geographic coordinates on that sphere, like Earth's latitude and longitude.
I honestly have no idea how to begin this conversion, and would appreciate any answers.
I'm not bad at coding and can work out a general 'here's how' solution once pointed in the right direction, but I would love for one of the pros out there to give a deeper dive on how to do it right.
Thank you in advance.
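One possible starting point, offered purely as a hedged sketch: if the terrain grid is meant to cover the whole planet, treat it as an equirectangular "unwrapped" sphere, so a world position maps linearly to longitude and latitude. Everything here (the class name, gridOrigin, gridSize) is an illustrative assumption, not anything Unity provides:

```csharp
using UnityEngine;

public static class PlanetCoords
{
    // Maps a world position on the flat terrain grid to geographic
    // coordinates, assuming the grid is axis-aligned, starts at gridOrigin,
    // and spans gridSize world units (x = east-west, y = north-south).
    public static Vector2 GeoFromWorld(Vector3 worldPos, Vector3 gridOrigin, Vector2 gridSize)
    {
        // Normalize the position to 0..1 across the grid.
        float u = (worldPos.x - gridOrigin.x) / gridSize.x;
        float v = (worldPos.z - gridOrigin.z) / gridSize.y;

        float longitude = Mathf.Lerp(-180f, 180f, u); // degrees east
        float latitude  = Mathf.Lerp(-90f, 90f, v);   // degrees north
        return new Vector2(latitude, longitude);
    }
}
```

Feed it the player's transform.position or a raycast hit point. Note that, like any equirectangular mapping, distances distort towards the poles.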

Related

Voxel water in the Unity engine

I am trying to create a voxel-style RPG like the one shown in Cube World, and I am trying to find an efficient, low-GPU-cost way to create voxel water, like the water shown in these videos:
https://www.youtube.com/watch?v=ZCFIchEZk2s
https://www.youtube.com/watch?v=qJa2w-7edKA
but in Unity instead of Blender. I feel it would be good to use a procedural shader of some sort (for foam, waves, and adapting to players jumping in) to keep it efficient for my use cases (oceans, rivers, lakes, etc.), though I cannot think of a way to create this kind of shader. I have attempted to throw together a shader, but I am not the most experienced on the non-programming side.
Any help would be greatly appreciated. Thanks!
A shader will not be a good fit for water simulation. I would recommend you use cellular automata to give these water voxels some realistic movement.
Cellular automata work by dividing your world into a grid of cells and, on every update, changing each cell's state or position depending on its neighbours (a minimal sketch follows below). There are some good examples of this in Conway's Game of Life and in games like Noita:
Conway's Game of Life Wiki
Noita trailer
But I will guess that you are going for more of a 3D style. There is a voxel game which does water simulation very nicely:
John Lin's voxel engine
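As a hedged illustration only: a single update step of a falling/spreading water automaton on a 2D grid might look like the C# below. The Cell and WaterSim names are invented for this sketch.

```csharp
// Minimal cellular-automaton water step on a 2D grid. Each cell is Empty,
// Solid, or Water; water falls if it can, otherwise it spreads sideways.
public enum Cell { Empty, Solid, Water }

public static class WaterSim
{
    public static void Step(Cell[,] grid)
    {
        int w = grid.GetLength(0), h = grid.GetLength(1);
        // Scan bottom-up (y = 0 is the floor) so a water cell that falls
        // into an already-visited row is not moved twice in one step.
        for (int y = 1; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            if (grid[x, y] != Cell.Water) continue;

            if (grid[x, y - 1] == Cell.Empty)                    // fall straight down
            { grid[x, y - 1] = Cell.Water; grid[x, y] = Cell.Empty; }
            else if (x > 0 && grid[x - 1, y] == Cell.Empty)      // spread left
            { grid[x - 1, y] = Cell.Water; grid[x, y] = Cell.Empty; }
            else if (x < w - 1 && grid[x + 1, y] == Cell.Empty)  // spread right
            { grid[x + 1, y] = Cell.Water; grid[x, y] = Cell.Empty; }
        }
    }
}
```

Extending the same neighbour rules to a 3D voxel grid (and adding per-cell pressure or volume) is where it starts to look like the water in Noita or John Lin's engine.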

How to calibrate the size and speed of 3D objects

Hi everyone,
I am just starting a new project in VR and have some problems.
I want to simulate a simple 3D ball like a real one in the real world.
I am using OptiTrack to track my camera (3D glasses), MiddleVR (free edition) to produce the 3D effect in the room, and Unity for the 3D models and programming in C#.
My problem:
I have a real plastic ball and use it to compare with the 3D ball. If I walk towards the real ball, it visually gets bigger, and if I walk away from it, it gets smaller.
The 3D ball has the same diameter as the real one and stands at the same position. But if I walk towards it with the 3D glasses on, it gets bigger more quickly than the real one, and if I walk away, it gets smaller more quickly than the real one.
Can anyone explain how to solve this problem? What should I do? I need your help.
Thank you and have a nice day.
Carvin.
Get the size right inside Unity: 1 unit in Unity is 1 meter in real life.
Also play with the field of view of your main camera in Unity to get the desired result.
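For instance, as a rough sketch (the 0.2 m diameter and 60° field of view are placeholder values; the correct FOV depends entirely on your glasses/display optics):

```csharp
using UnityEngine;

// Hedged sketch: size the virtual ball in real-world metres and set the
// camera's vertical field of view. Attach to any GameObject in the scene.
public class BallCalibration : MonoBehaviour
{
    public Transform ball;             // the virtual ball (Unity's default sphere is 1 m across)
    public float realDiameter = 0.2f;  // measured diameter of the physical ball, in metres

    void Start()
    {
        // 1 Unity unit = 1 metre, so scaling the unit sphere by the real
        // diameter gives a ball of matching physical size.
        ball.localScale = Vector3.one * realDiameter;

        // Placeholder vertical FOV; tune it until the virtual and real balls
        // change apparent size at the same rate as you move.
        Camera.main.fieldOfView = 60f;
    }
}
```

If the virtual ball still grows or shrinks faster than the real one, the mismatch is usually in the tracked camera's FOV or position offset, not in the object's scale.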

How to move a game object according to the movement of a real-world object in the webcam in Unity?

I want to develop a tangram game in Unity using augmented reality. I want to build tangram figures with real tangram pieces in front of a webcam, matching the tangram figure shown on screen. For that, I want to place each game object to match the real tangram piece in the camera frame, and also update its position and angle accordingly. Please suggest a way to achieve this. Thanks in advance!
With difficulty.
If you want to do this without some sort of custom built hardware controller on the real tan gram, you will need some quite intricate image processing techniques. The following are some vague steps and pointers to achieve what you want. If there is a better option I cannot think of it, but this is very conceptual and by no means guaranteed to work - Just how I would attempt the task if I really had to.
Use a Laplacian operator on the image to calculate the edges
Use this, along with the average colour information in pixels to the left/right and above/below of each "edge" pixel (within a certain tolerance) to detect the individual shapes, corners, and relative positions starting from the centre of the image.
Calculate the relative sizes of each shape and and approximate the rotation using basic trigonometry.
However I can't help but feel like this is an incredibly large amount of work for such a concept, and could be so intensive to calculate this for each pixel to make it truly not worth your time. Furthermore it depends a lot on the quality of the camera used, and parallax errors would probably be nightmarish to resolve. Unless you are truly committed to this idea, I would either search for some pre-existing asset that does this for you or not undertake the project.
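For completeness, a minimal sketch of step 1 (the class name is illustrative; the image is assumed to be a grayscale float array):

```csharp
public static class EdgeDetect
{
    // 4-neighbour Laplacian: large magnitude where intensity changes sharply,
    // i.e. along edges. Border pixels are left at zero for simplicity.
    public static float[,] Laplacian(float[,] img)
    {
        int w = img.GetLength(0), h = img.GetLength(1);
        var result = new float[w, h];
        for (int x = 1; x < w - 1; x++)
            for (int y = 1; y < h - 1; y++)
                result[x, y] = img[x + 1, y] + img[x - 1, y]
                             + img[x, y + 1] + img[x, y - 1]
                             - 4f * img[x, y];
        return result;
    }
}
```

In practice, a library such as OpenCV would give you this (plus contour and corner detection) far more robustly than a hand-rolled version.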

Extrude convex hull

Hi!
I have a set of points in 3D which all lie on a plane and represent the floor of a room.
Now I want to create walls. My idea is to extrude the floor, or to use translational sweeps.
I tried to find some code or a starting point on the web, but I wasn't lucky.
Does anyone know some good tutorials or libraries?
Thanks
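As a hedged sketch of the extrusion idea in Unity C# (assuming the points are already ordered around the floor's outline and lie on the XZ plane; WallExtruder and its parameters are invented names):

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class WallExtruder
{
    // Builds one vertical wall quad per outline edge by copying each edge
    // and offsetting the copy upwards by `height`.
    public static Mesh ExtrudeWalls(IList<Vector3> outline, float height)
    {
        var verts = new List<Vector3>();
        var tris = new List<int>();

        for (int i = 0; i < outline.Count; i++)
        {
            Vector3 a = outline[i];
            Vector3 b = outline[(i + 1) % outline.Count]; // wrap to close the loop
            int v = verts.Count;
            verts.Add(a);                       // bottom-left
            verts.Add(b);                       // bottom-right
            verts.Add(a + Vector3.up * height); // top-left
            verts.Add(b + Vector3.up * height); // top-right
            // Two triangles per quad; flip the winding if the walls face the wrong way.
            tris.AddRange(new[] { v, v + 2, v + 1, v + 1, v + 2, v + 3 });
        }

        var mesh = new Mesh();
        mesh.SetVertices(verts);
        mesh.SetTriangles(tris, 0);
        mesh.RecalculateNormals();
        return mesh;
    }
}
```

If your points are not already ordered, compute the 2D convex hull first (e.g. with a gift-wrapping or Graham-scan implementation) and feed the hull vertices in.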

OpenGL ES tiled object (cube?), with clickable tiles

I'm starting to study OpenGL, and I'm trying to make a 3D chess-like game, but I can't figure out how to know where I have clicked on the "table" to trigger the proper animations. Any advice?
This is called "3D picking". You have to translate screen coordinates into world coordinates. From there, do a ray vs. collision-object (bounding box?) intersection test; if they intersect, that's where the user clicked.
You'll have to do a little more than this to solve the depth-order problem: find the first time of intersection for each object, then select the one with the lowest (positive) time.
If you google for "3D picking" you might find what you are looking for.
Here is a tutorial:
http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=32
Note that this is not specific to any shape of bounding object, be it a bounding box, a polygon, a curve, etc. You just have to figure out the math for the intersection test for each type of object you want to support.
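As a hedged example of one such test, here is the standard "slab" method for a ray against an axis-aligned bounding box in C# (the class and helper names are invented; degenerate NaN cases from a zero direction component lying exactly on a slab are ignored for brevity):

```csharp
using System;
using System.Numerics;

public static class Picking
{
    // Returns the first positive hit time t along the ray, or null on a miss.
    // The hit point is origin + t * dir.
    public static float? RayAabb(Vector3 origin, Vector3 dir, Vector3 min, Vector3 max)
    {
        float tMin = 0f, tMax = float.PositiveInfinity;
        for (int axis = 0; axis < 3; axis++)
        {
            // Entry/exit times for this axis' slab. Division by zero yields
            // +/- infinity, which the Max/Min below handle correctly.
            float lo = (Get(min, axis) - Get(origin, axis)) / Get(dir, axis);
            float hi = (Get(max, axis) - Get(origin, axis)) / Get(dir, axis);
            if (lo > hi) { float tmp = lo; lo = hi; hi = tmp; }
            tMin = Math.Max(tMin, lo);
            tMax = Math.Min(tMax, hi);
            if (tMin > tMax) return null; // the slabs do not overlap: no hit
        }
        return tMin; // first time of intersection; compare across objects
    }

    static float Get(Vector3 v, int i) => i == 0 ? v.X : i == 1 ? v.Y : v.Z;
}
```

Running this for every object and keeping the lowest positive t solves the depth-order problem mentioned above.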
Edit:
I didn't read that tutorial before I linked it; I just figured NeHe is where all the cool kids learn OpenGL (admittedly, ten years ago...).
Here is something from the OpenGL FAQ about picking:
http://www.opengl.org/resources/faq/technical/selection.htm
waldecir, look for a raypick function. That's the name for sending a ray from the scene camera's centre through the pixel you clicked on (actually, through that pixel's translated position on the camera's plane representing the "glass surface of the screen" in the 3D world) and returning the front-most polygon the ray hits, together with some information about the hit, usually coordinates within the polygon's surface axes, e.g. UV or texture coordinates. By checking those coordinates, you can determine which square the user clicked on.
Rays can be sent from any position and in any direction, so you'll likely have to get the camera position and its plane centre, but the documentation should be able to help you there.
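If your framework has no raypick built in, the ray itself can be constructed by unprojecting the clicked pixel. A hedged sketch, assuming you have the camera's combined view-projection matrix and OpenGL-style NDC (System.Numerics types; the class and method names are invented):

```csharp
using System.Numerics;

public static class RayBuilder
{
    // Converts a mouse click to a world-space ray: screen coordinates go to
    // normalized device coordinates, then near/far plane points are unprojected.
    public static (Vector3 origin, Vector3 dir) ScreenPointToRay(
        float mouseX, float mouseY, float screenW, float screenH, Matrix4x4 viewProj)
    {
        // Screen -> NDC: x and y in [-1, 1], y flipped (screen y grows downward).
        float ndcX = 2f * mouseX / screenW - 1f;
        float ndcY = 1f - 2f * mouseY / screenH;

        Matrix4x4.Invert(viewProj, out Matrix4x4 inv);

        // Unproject a point on the near plane (z = -1) and the far plane (z = 1).
        Vector4 near = Vector4.Transform(new Vector4(ndcX, ndcY, -1f, 1f), inv);
        Vector4 far  = Vector4.Transform(new Vector4(ndcX, ndcY,  1f, 1f), inv);
        Vector3 p0 = new Vector3(near.X, near.Y, near.Z) / near.W; // perspective divide
        Vector3 p1 = new Vector3(far.X,  far.Y,  far.Z)  / far.W;

        return (p0, Vector3.Normalize(p1 - p0));
    }
}
```

Intersect that ray with each board square's bounds (e.g. using the slab test sketched earlier) and the closest positive hit is the clicked tile.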