How to place part on top of the terrain in Roblox? - roblox

I have terrain generated in my ROBLOX game (hills, flats, rivers) and I have a model of a tree. The tree is just a group of "bricks", so it looks like a kind of Minecraft tree.
Now I want to clone that tree programmatically at run-time, and I need to place that copy (clone) somewhere else on the map.
All of this is easy. But after I give my clone new coordinates (keeping the Y coordinate the same as my original tree's), it ends up either floating in the air or buried under the surface of my terrain.
How do I place it directly ON the terrain?
Or how do I get the Y coordinate of that particular place on the map?
Thanks!
EDIT
While the answer from @Kylaaa is perfectly complete, I include my solution here as well, since I also needed information about the material of the terrain at that spot (rocks / grass / etc.)
function FindTerrainHeight(pos)
	local VOXEL_SIZE = 4
	local voxelPos = workspace.Terrain:WorldToCell(pos)
	-- my region will start 100 voxels under the desired position and end 100 voxels above it
	local voxelRegion = Region3.new((voxelPos - Vector3.new(0, 100, 0)) * VOXEL_SIZE, (voxelPos + Vector3.new(1, 100, 1)) * VOXEL_SIZE)
	local materialMap, occupancyMap = workspace.Terrain:ReadVoxels(voxelRegion, VOXEL_SIZE)
	local steps = materialMap.Size.Y -- this is actually 200 = the height of the region in voxels
	-- now go from the very top of the region downwards, checking the terrain material
	-- (while it's Air, no terrain has been found yet)
	for i = steps, 1, -1 do
		if materialMap[1][i][1] ~= Enum.Material.Air then
			-- materialMap[1][i][1] is the material found on the surface;
			-- (i - 100) * VOXEL_SIZE is the Y coordinate of the surface in world coordinates
			return (i - 100) * VOXEL_SIZE, materialMap[1][i][1]
		end
	end
	return false
end
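The index-to-world conversion in the loop above is easy to sanity-check outside Roblox. Here is a minimal Python sketch of the same top-down scan; `find_surface` and the plain list of material names are hypothetical stand-ins for `materialMap`, and, like the Lua code, it assumes the query position sits at voxel row 0:

```python
VOXEL_SIZE = 4
REGION_HALF_HEIGHT = 100  # voxels below/above the query position, as in the Lua code

def find_surface(column):
    """column[i] is the material of voxel i+1 (1-based in Lua, 0-based here),
    ordered bottom-to-top. Returns (world_y, material) for the topmost
    non-air voxel, or None if the whole column is air."""
    for i in range(len(column), 0, -1):  # scan from the top down
        material = column[i - 1]
        if material != "Air":
            world_y = (i - REGION_HALF_HEIGHT) * VOXEL_SIZE
            return world_y, material
    return None

# a 200-voxel column: grass up to voxel 50, air above it
column = ["Grass"] * 50 + ["Air"] * 150
print(find_surface(column))  # → (-200, 'Grass')
```

The surface voxel at index 50 maps to world Y = (50 - 100) * 4 = -200, matching the Lua comment's formula.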
EDIT 2
So it turns out that @Kylaaa's answer solves everything. It also includes the terrain material, as the 4th item in the returned tuple. That means my solution is needlessly complicated. Use his solution.

Try raycasting downward from your starting point until you find the terrain. game.Workspace:FindPartOnRayWithWhitelist() is great for this!
-- choose some starting point in world space
local xPos = 15
local yPos = 20
local zPos = 30
-- make a ray and point it downward.
-- NOTE - It's unusual, but you must specify a magnitude for the ray
local origin = Vector3.new(xPos, yPos, zPos)
local direction = Vector3.new(0, -100000, 0)
local ray = Ray.new(origin, direction)
-- search for the collision point with the terrain
local whitelist = { game.Workspace.Terrain }
local ignoreWater = true
local _, intersection, surfaceNormal, surfaceMaterial = game.Workspace:FindPartOnRayWithWhitelist(ray, whitelist, ignoreWater)
-- spawn a part at the intersection.
local p = Instance.new("Part")
p.Anchored = true
p.Size = Vector3.new(1, 1, 1)
p.Position = intersection -- <---- this Vector3 is the intersection point with the terrain
p.Parent = game.Workspace
If your model is moving around in unusual ways when you call SetPrimaryPartCFrame(), be aware that Roblox physics does not like interpenetrating objects, so models will often get pushed upwards until they are no longer interpenetrating. Objects with CanCollide = false will not have this issue, but they will also fall through the world if they are not Anchored or welded to a part that has CanCollide = true.

Related

How do I get the mouse world position, X Y plane only, in Unity

How do I get the mouse world position, X Y plane only, in Unity? ScreenToWorldPoint isn't working. I think I need to cast a ray from the mouse, but I'm not sure.
This is what I am using. It doesn't seem to give the correct coordinates or the right plane. I need this for targeting and raycasting.
private void Get3dMousePoint()
{
var screenPosition = Input.mousePosition;
screenPosition.z = 1;
worldPosition = mainCamera.ScreenToWorldPoint(screenPosition);
worldPosition.z = 0;
}
Just need XY coords.
I tried with ScreenToWorldPoint() and it works.
The key, I think, is in understanding the z coordinate of the position.
Geometrically, in 3D space we need 3 coordinates to define a point. With only 2 coordinates we have a straight line with a variable z parameter. To obtain a point from that line, we must choose at what distance (i.e. set z) the point we seek should be.
Obviously, since the camera is perspective, the coordinates you get at z = 1 are different from those at z = 100, unlike in the 2D plane.
If you can figure out how far away, that is, set the z correctly, you can find the point you want.
Just remember that the z must be greater than the minimum rendering distance of the camera. I set exactly that value in the script.
Also remember that the resulting vector will have a z equal to the camera's z position plus the z value of the vector passed to ScreenToWorldPoint.
void Get3dMousePoint()
{
Vector3 worldPosition = Camera.main.ScreenToWorldPoint(new Vector3(Input.mousePosition.x, Input.mousePosition.y, Camera.main.nearClipPlane));
print(worldPosition);
}
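The point about z above can be checked with plain pinhole-camera math. This is not the Unity API, just a Python sketch of the geometry ScreenToWorldPoint implements (the function name and the camera-at-origin convention are assumptions for illustration): a screen pixel at depth z maps to a world offset that scales with z.

```python
import math

def screen_to_world(screen_x, screen_y, z, screen_w, screen_h, fov_y_deg):
    """Map a screen-space pixel to camera-space coordinates at depth z
    (camera at the origin, looking down +z)."""
    half_h = z * math.tan(math.radians(fov_y_deg) / 2)   # half the visible height at depth z
    half_w = half_h * (screen_w / screen_h)              # aspect-corrected half width
    x = (screen_x / screen_w * 2 - 1) * half_w           # -half_w .. +half_w
    y = (screen_y / screen_h * 2 - 1) * half_h
    return (x, y, z)

# the screen centre always maps to (0, 0, z), whatever z is...
print(screen_to_world(960, 540, 1, 1920, 1080, 60))  # → (0.0, 0.0, 1)
# ...but off-centre pixels drift further out as z grows, which is why
# the choice of z matters for a perspective camera
print(screen_to_world(1920, 540, 1, 1920, 1080, 60)[0] <
      screen_to_world(1920, 540, 100, 1920, 1080, 60)[0])  # → True
```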
If you think my answer helped you, you can mark it as accepted and vote it up. I would very much appreciate it :)

Car Collision Return Force - 3D Car Game

As per my game requirements, I apply a manual force when two cars collide with each other, to move them back.
So I want the correct code to achieve this. Here is an example of the collision response that I want to get:
As per my understanding, I have written this code:
Vector3 reboundDirection = Vector3.Normalize(transform.position - other.transform.position);
reboundDirection.y = 0f;
int i = 0;
while (i < 3)
{
myRigidbody.AddForce(reboundDirection * 100f, ForceMode.Force);
appliedSpeed = speed * 0.5f;
yield return new WaitForFixedUpdate();
i++;
}
I am moving my cars using this code:
//Move the player forward
appliedSpeed += Time.deltaTime * 7f;
appliedSpeed = Mathf.Min(appliedSpeed, speed);
myRigidbody.velocity = transform.forward * appliedSpeed;
Still, as per my observation, I was not getting the collision response in the proper direction. What is the correct way to get the collision response shown in the image above?
Until you clarify why you have to use manual forces, or how you handle the forces generated by the Unity engine, I would like to stress one problem in your approach. You calculate the direction based on positions, but those positions are the centers of your cars. Therefore, you are not getting a correct direction, as you can see from the image below:
You calculate the direction between two pivot or center points; therefore your force is a bit tilted, as in the left image. Instead of this, you can use the ContactPoint and then calculate the direction.
To explain in more detail: in the above image you can see the region marked with a blue rectangle. You can get all the contact points for the corresponding region using Collision.contacts,
then calculate the center point or centroid like this
Vector3 centroid = new Vector3(0, 0, 0);
foreach (ContactPoint contact in col.contacts)
{
centroid += contact.point;
}
centroid = centroid / col.contacts.Length;
This is the center of the rectangle. To find the direction, you need to find its projection onto your car like this:
Vector3 projection = gameObject.transform.position;
projection.x = centroid.x;
gameObject.GetComponent<Rigidbody>().AddForce((projection - centroid )*100, ForceMode.Impulse);
Since I do not know your setup, I took the y and z values from the car's position but the x value from the centroid; therefore you get a straight blue line, not an arrow tilted to the left like in the first image, even in case two of the second image. I hope I am being clear.
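The centroid-plus-projection idea above is plain vector math, so it can be sketched language-neutrally. A hypothetical Python version, with tuples standing in for Vector3 and the same x-from-centroid, y-and-z-from-car choice as the C# fragment:

```python
def rebound_direction(contact_points, car_position):
    """Average the contact points into a centroid, project it onto the car
    (keep the car's y and z, take x from the centroid, as in the answer),
    and use (projection - centroid) as the rebound direction."""
    n = len(contact_points)
    centroid = tuple(sum(p[i] for p in contact_points) / n for i in range(3))
    projection = (centroid[0], car_position[1], car_position[2])
    return tuple(projection[i] - centroid[i] for i in range(3))

# two contact points on the front bumper of a car sitting at the origin
contacts = [(-0.5, 0.2, 2.0), (0.5, 0.2, 2.0)]
print(rebound_direction(contacts, (0.0, 0.0, 0.0)))  # → (0.0, -0.2, -2.0)
```

The resulting vector points straight back toward the car's center, regardless of which side of the bumper took more contact points, which is the "straight blue line" behaviour described above.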

What is the best way to evenly distribute objects to fill a curved space in Unity 3D?

I would like to fill this auditorium seating area with chairs (in the editor) and have them all face the same focal point (the stage). I will then be randomly filling the chairs with different people (during runtime). After each run the chairs should stay the same, but the people should be cleared so that during the next run the crowd looks different.
The seating area does not currently have a collider attached to it, and neither do the chairs or people.
I found this code, which takes care of rotating the chairs so they target the same focal point. But I'm still curious whether there are better methods to do this.
//C# Example (LookAtPoint.cs)
using UnityEngine;
[ExecuteInEditMode]
public class LookAtPoint : MonoBehaviour
{
public Vector3 lookAtPoint = Vector3.zero;
void Update()
{
transform.LookAt(lookAtPoint);
}
}
Additional Screenshots
You can write an editor script to automatically place them evenly. In this script,
I don't handle world and local/model space in the following code. Remember to do it when you need to.
Generate parallel rays that go from +y to -y in a grid. The patch size of this grid depends on how big your chair and the mesh (curved space) are. To get proper values, take the bounding boxes of a chair (A) and of the curved-space mesh (B): dividing them (B/A) gives the grid counts, and the chair's own footprint gives the patch size.
Mesh chairMR; // mesh of the chair
Mesh audiMR;  // mesh of the auditorium
var patchSizeX = chairMR.bounds.size.x;
var patchSizeZ = chairMR.bounds.size.z;
var countX = audiMR.bounds.size.x / chairMR.bounds.size.x;
var countZ = audiMR.bounds.size.z / chairMR.bounds.size.z;
So the number of rays you need to generate is about countX*countZ, and the patch size is (patchSizeX, patchSizeZ).
Then the origin points of the rays can be determined:
// Generate parallel rays that go from +y to -y.
List<Ray> rays = new List<Ray>((int)(countX * countZ));
for (var i = 0; i < countX; ++i)
{
    // add some tolerance so the placed chairs don't intersect each other when rotated towards the stage
    var x = audiMR.bounds.min.x + i * patchSizeX + tolerance;
    for (var j = 0; j < countZ; ++j)
    {
        var z = audiMR.bounds.min.z + j * patchSizeZ + tolerance;
        rays.Add(new Ray(new Vector3(x, 10000, z), Vector3.down));
        // You could also call `Physics.Raycast` right here instead.
    }
}
Get positions to place the chairs:
attach a MeshCollider to your mesh temporarily
for each ray, Physics.Raycast it (you can place obstacles on spots that should not get a chair; set a special layer for those obstacles)
get the hit point, create a chair at the hit point, and rotate it towards the stage
Reuse these hit points to place your people at runtime.
Convert each of them into a model/local-space point, and save them into JSON or an asset via serialization for later use at runtime, when placing people randomly.
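The grid math in the steps above (counts from the two bounding boxes, ray origins marching across the auditorium's bounds) can be sketched without Unity. A hypothetical Python version, with plain numbers standing in for the Bounds values:

```python
def ray_origins(audi_min, audi_size, chair_size, tolerance=0, y=10000):
    """Yield (x, y, z) origins for downward rays, one per chair-sized patch
    of the auditorium's bounds (mirrors the nested loops in the C# sketch)."""
    count_x = int(audi_size[0] / chair_size[0])
    count_z = int(audi_size[1] / chair_size[1])
    for i in range(count_x):
        x = audi_min[0] + i * chair_size[0] + tolerance
        for j in range(count_z):
            z = audi_min[1] + j * chair_size[1] + tolerance
            yield (x, y, z)

# a 10x10 auditorium floor and 2x2 chairs -> a 5x5 grid of rays
origins = list(ray_origins(audi_min=(0, 0), audi_size=(10, 10), chair_size=(2, 2)))
print(len(origins))  # → 25
print(origins[0])    # → (0, 10000, 0)
```

Each origin would then be raycast straight down against the temporary MeshCollider to find the chair's resting point on the curved surface.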

swift: orient y-axis toward another point in 3-d space

Suppose you have two points in 3-D space. Call the first o for origin and the other t for target. The rotation axes of each are aligned with the world/parent coordinate system (and with each other). Place a third point r coincident with the origin, with the same position and rotation.
How, in Swift, can you rotate r such that its y-axis points at t? If pointing the z-axis is easier, I'll take that instead. The resulting orientation of the other two axes is immaterial for my needs.
I've been through many discussions related to this but none satisfy. I have learned, from reading and experience, that Euler angles are probably not the way to go. We didn't cover this in calculus, and that was 50 years ago anyway.
Got it! Incredibly simple when you add a container node. The following seems to work for any positions in any quadrants.
// pointAt_c is a container node located at, and child of, the originNode
// pointAtNode is its child, position coincident with pointAt_c (and originNode)
// get deltas (positions of target relative to origin)
let dx = targetNode.position.x - originNode.position.x
let dy = targetNode.position.y - originNode.position.y
let dz = targetNode.position.z - originNode.position.z
// rotate container node about y-axis (pointAtNode rotated with it)
let y_angle = atan2(dx, dz)
pointAt_c.rotation = SCNVector4(0.0, 1.0, 0.0, y_angle)
// now rotate the pointAtNode about its x-axis
let dz_dx = sqrt((dz * dz) + (dx * dx))
// (due to rotation the adjacent side of this angle is now a hypotenuse)
let x_angle = atan2(dz_dx, dy)
pointAtNode.rotation = SCNVector4(1.0, 0.0, 0.0, x_angle)
I needed this to replace lookAt constraints, which cannot (easily, anyway) be archived with a node tree. I'm pointing the y-axis because that's how SCN cylinders and capsules are oriented.
If anyone knows how to obviate the container node, please do tell. Every time I try to apply sequential rotations to a single node, the last overwrites the previous one. I don't have the knowledge to formulate a rotation expression that does it in one shot.
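The two-angle construction above can be verified numerically outside SceneKit: applying the y-axis rotation and then the x-axis rotation to a unit +y vector should reproduce the normalized direction from origin to target. A Python check of that math (not SceneKit API; the rotation matrices are the standard right-handed ones, which is an assumption about SceneKit's conventions):

```python
import math

def point_y_axis_at(dx, dy, dz):
    """Return the +y axis after Ry(atan2(dx, dz)) then Rx(atan2(hypot(dx, dz), dy)),
    i.e. the container/child rotations from the Swift snippet."""
    y_angle = math.atan2(dx, dz)
    x_angle = math.atan2(math.hypot(dx, dz), dy)
    # Rx(x_angle) applied to (0, 1, 0)
    v = (0.0, math.cos(x_angle), math.sin(x_angle))
    # Ry(y_angle) applied to v
    return (math.sin(y_angle) * v[2],
            v[1],
            math.cos(y_angle) * v[2])

# the rotated y-axis should equal the normalized delta to the target,
# for a target in any quadrant
dx, dy, dz = 3.0, -2.0, 6.0
length = math.sqrt(dx*dx + dy*dy + dz*dz)  # = 7.0
axis = point_y_axis_at(dx, dy, dz)
print(all(abs(a - b) < 1e-12 for a, b in zip(axis, (dx/length, dy/length, dz/length))))  # → True
```

Working the trig through symbolically gives exactly (dx, dy, dz)/length, which is why the snippet works in any quadrant: atan2 picks the correct signs for both angles.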

Detecting touch position on 3D objects in openGL

I have created a 3D object in OpenGL for one of my applications. The object is something like a human body and can be rotated on touch. How can I detect the position of a touch on this 3D object? That is, if the user touches the head, I have to detect that it is the head; if the touch is on the hand, then that has to be identified. It should work even if the object has been rotated to some other direction. I think the coordinates of the touch on the 3D object are required.
This is the method where I am getting the position of touch on the view.
- (void) touchesBegan: (NSSet*) touches withEvent: (UIEvent*) event
{
UITouch* touch = [touches anyObject];
CGPoint location = [touch locationInView: self];
m_applicationEngine->OnFingerDown(ivec2(location.x, location.y));
}
Can anyone help? Thanks in advance!
Forget about ray tracing and other top-notch algorithms. We used a simple trick for one of our applications (Iyan 3D) on the App Store, though this technique needs one extra render pass every time you finish rotating the scene to a new angle. Render the different objects (head, hand, leg, etc.) in different colors (not their actual colors, but unique ones). Read the color in the rendered image corresponding to the screen position; you can then find the object based on its color.
In this method you can change the rendered image's resolution to balance accuracy and performance.
To determine the 3D location of the object I would suggest ray tracing.
Assuming the model is in worldspace coordinates you'll also need to know the worldspace coordinates of the eye location and the worldspace coordinates of the image plane. Using those two points you can calculate a ray which you will use to intersect with the model, which I assume consists of triangles.
Then you can use the ray triangle test to determine the 3D location of the touch, by finding the triangle that has the closest intersection to the image plane. If you want which triangle is touched you will also want to save that information when you do the intersection tests.
This page gives an example of how to do ray triangle intersection tests: http://www.scratchapixel.com/lessons/3d-basic-lessons/lesson-9-ray-triangle-intersection/ray-triangle-intersection-geometric-solution/
Edit:
Updated to have some sample code. It's slightly modified code I took from a C++ ray-tracing project I did a while ago, so you'll need to adapt it a bit to get it working on iOS. Also, the code in its current form isn't directly useful, since it doesn't return the actual intersection point, only whether the ray intersects the triangle.
// d is the direction the ray is heading in
// o is the origin of the ray
// verts is the 3 vertices of the triangle
// faceNorm is the normal of the triangle surface
bool
Triangle::intersect(Vector3 d, Vector3 o, Vector3* verts, Vector3 faceNorm)
{
    // Check for a line parallel to the plane
    float r_dot_n = dot(d, faceNorm);
    // If r_dot_n == 0, the line and plane are parallel, but we need to
    // do a range check due to floating point precision
    if (r_dot_n > -0.001f && r_dot_n < 0.001f)
        return false;
    // Then we calculate the distance of the ray origin to the triangle plane
    float t = dot(faceNorm, (verts[0] - o)) / r_dot_n;
    if (t < 0.0f)
        return false;
    // We can now calculate the barycentric coords of the intersection
    Vector3 ba_ca = cross(verts[1] - verts[0], verts[2] - verts[0]);
    float denom = dot(-d, ba_ca);
    float dist = dot(o - verts[0], ba_ca) / denom; // distance along the ray (unused here)
    float b = dot(-d, cross(o - verts[0], verts[2] - verts[0])) / denom;
    float c = dot(-d, cross(verts[1] - verts[0], o - verts[0])) / denom;
    // Check if the point is inside the triangle, or if b & c have NaN values
    if (b < 0 || c < 0 || b + c > 1 || b != b || c != c)
        return false;
    // Use barycentric coordinates to calculate the intersection point
    Vector3 P = (1.f - b - c) * verts[0] + b * verts[1] + c * verts[2];
    return true;
}
The actual intersection point you would be interested in is P.
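For reference, the same plane-then-inside-test approach is easy to port to other languages. Here is a hedged Python sketch (tuples and small helper functions stand in for the Vector3 class; unlike the C++ version, it returns the intersection point P rather than a bool, and it uses an edge-based inside-outside test rather than explicit barycentric coordinates):

```python
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def intersect(o, d, v0, v1, v2):
    """Ray origin o, direction d, triangle vertices v0 v1 v2 (tuples).
    Returns the intersection point P, or None if there is no hit."""
    n = cross(sub(v1, v0), sub(v2, v0))      # face normal (unnormalized)
    r_dot_n = dot(d, n)
    if abs(r_dot_n) < 1e-6:                  # ray (nearly) parallel to the plane
        return None
    t = dot(n, sub(v0, o)) / r_dot_n         # ray parameter at the plane
    if t < 0:
        return None                          # plane is behind the ray origin
    p = tuple(o[i] + t * d[i] for i in range(3))
    # inside-outside test: p must lie on the triangle side of all three edges
    for a, b in ((v0, v1), (v1, v2), (v2, v0)):
        if dot(n, cross(sub(b, a), sub(p, a))) < 0:
            return None
    return p

# a ray fired straight down at a triangle lying in the z = 0 plane
hit = intersect(o=(0.2, 0.3, 1.0), d=(0.0, 0.0, -1.0),
                v0=(0.0, 0.0, 0.0), v1=(1.0, 0.0, 0.0), v2=(0.0, 1.0, 0.0))
print(hit)  # → (0.2, 0.3, 0.0)
```

For picking, you would run this against every triangle of the model and keep the hit with the smallest t, as the answer above describes.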
Ray tracing is an option and is used in many applications for doing just that (picking). The problem with ray tracing is that it's a lot of work to get a pretty simple basic feature working. Ray tracing can also be slow, but if you have only one ray to trace (the location of your finger, say), it should be okay.
OpenGL's API also provides a technique for picking objects. I suggest you look at, for instance: http://www.lighthouse3d.com/opengl/picking/
Finally, a last option would be to project the vertices of the object into screen space and use simple 2D techniques to find which faces of the object your finger overlaps.