I'm calculating the normals of a mesh I've generated using the marching cubes algorithm, but when I run it the object looks blurry, like in the picture.
Variables:
CurrentTri is a Vector3Int with the indices of the triangle's three vertices
CurrentNorm is a Vector3 holding the current face normal
Verts is a Vector3 array of the positions of the vertices
VertNorm is a Vector3 array of the normals of the vertices
The C# code where I calculate the normals:
// Repeated for each triangle
CurrentNorm = Vector3.Cross(Verts[CurrentTri.y] - Verts[CurrentTri.x], Verts[CurrentTri.z] - Verts[CurrentTri.x]);
VertNorm[CurrentTri.x] += CurrentNorm;
VertNorm[CurrentTri.y] += CurrentNorm;
VertNorm[CurrentTri.z] += CurrentNorm;
Normalising the normals:
for (int i = 0; i < VertNorm.Length; i++)
{
    VertNorm[i] = VertNorm[i].normalized;
}
mesh.normals = VertNorm;
It turns out it's supposed to look like that: averaging the face normals at shared vertices produces smooth shading, so the "blurry" look is correct. I was a complete idiot.
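For what it's worth, the unnormalised cross product's length is proportional to each triangle's area, so the accumulation loop above is effectively an area-weighted average, which is why shared vertices come out smoothly shaded. A quick sanity check, as a hedged sketch, is to compare against Unity's built-in recalculation (tris here is assumed to be the triangle index array built during marching cubes):
// Let Unity compute smooth per-vertex normals for comparison.
// 'tris' (the int[] triangle list) is an assumption, not from the post.
mesh.vertices = Verts;
mesh.triangles = tris;
mesh.RecalculateNormals();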
I would like to convert some Vector3 points into 2D, projecting them onto a plane just like a camera would, with perspective.
I am starting with 3D points, a camera position and look direction, and a FOV.
My attempt:
void Update()
{
    Matrix4x4 proj = Matrix4x4.LookAt(FakeCam.transform.position, Vector3.zero, FakeCam.transform.up) *
                     Matrix4x4.Perspective(fov, aspect, zNear, zFar);
    for (int i = 0; i < 8; i++)
    {
        var point = proj.MultiplyPoint(Points[i]);
        Dots.SetPosition(i, new Vector3(point.x, 0, point.z));
    }
    Dots.Draw(transform.localToWorldMatrix);
}
The LookAt matrix seems to work as expected here: if I use only the LookAt matrix, my object is shown (with an orthographic projection). If I add the perspective matrix, it is twisted in the middle (the back face of the cube appears rotated 180°?).
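Worth noting: that twist is the classic symptom of points flipping through the perspective divide, which happens when the projection is applied before the view transform. The conventional order is projection × view, where the view matrix is the inverse of the camera's pose (plus the z flip Unity's cameras use, since Matrix4x4.Perspective assumes view space looks down -Z). A hedged sketch, keeping the original xz plotting plane:
void Update()
{
    // View = inverse of the camera pose, with Unity's camera-space z flip.
    Matrix4x4 look = Matrix4x4.LookAt(FakeCam.transform.position, Vector3.zero, FakeCam.transform.up);
    Matrix4x4 view = Matrix4x4.Scale(new Vector3(1, 1, -1)) * look.inverse;
    Matrix4x4 proj = Matrix4x4.Perspective(fov, aspect, zNear, zFar) * view;
    for (int i = 0; i < 8; i++)
    {
        // MultiplyPoint performs the perspective divide, yielding NDC.
        Vector3 ndc = proj.MultiplyPoint(Points[i]);
        Dots.SetPosition(i, new Vector3(ndc.x, 0, ndc.y));
    }
    Dots.Draw(transform.localToWorldMatrix);
}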
How do I calculate the distance of a game object (inside a cube collider) from the cube collider's surface? The existing calculations are made from the cube's surface outwards, so I got 0 when I used Collider.ClosestPoint or Collider.ClosestPointOnBounds.
The simplest (but computationally not the cheapest) approach would be to not rely on your current collider for the distance, but to add a set of small colliders around the edge of the object (so 6 colliders, one per face of the cube). Calling Collider.ClosestPoint() on all 6 faces and taking the smallest distance would give you the result you need, as in the sketch below.
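A minimal sketch of that idea, assuming faceColliders holds the six per-face colliders (the name is hypothetical):
// Distance from a point inside the cube to the nearest face collider.
float DistanceToSurface(Vector3 point, Collider[] faceColliders)
{
    float best = float.MaxValue;
    foreach (Collider face in faceColliders)
    {
        float d = Vector3.Distance(point, face.ClosestPoint(point));
        if (d < best) best = d;
    }
    return best;
}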
First, convert the point to local space (this assumes a BoxCollider, since it reads collider.size).
var localPoint = transform.InverseTransformPoint(worldPoint);
var extents = collider.size * 0.5f;
var closestPoint = localPoint;
Compute the distance to each face.
var disx = extents.x - Mathf.Abs(localPoint.x);
var disy = extents.y - Mathf.Abs(localPoint.y);
var disz = extents.z - Mathf.Abs(localPoint.z);
Find the closest face (smallest distance) and move the closest point along this axis.
if (disx < disy)
{
    if (disx < disz)
        closestPoint.x = extents.x * Mathf.Sign(localPoint.x); //disx
    else
        closestPoint.z = extents.z * Mathf.Sign(localPoint.z); //disz
}
else
{
    if (disy < disz)
        closestPoint.y = extents.y * Mathf.Sign(localPoint.y); //disy
    else
        closestPoint.z = extents.z * Mathf.Sign(localPoint.z); //disz
}
Finally, add the collider's offset and convert back to world space.
closestPoint += collider.center;
var worldClosestPoint = transform.TransformPoint(closestPoint);
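Note that when the point is inside the box, the distance to the surface is simply the smallest of the three face distances. A hedged one-liner, valid in local space (multiply by the scale factor if the transform is uniformly scaled):
// Distance from an interior point to the nearest face, in local space.
float interiorDistance = Mathf.Min(disx, Mathf.Min(disy, disz));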
I don't know how efficient this is, but here is how I solved it:
public static Vector3 ClosestPointOnBounds(Vector3 point, Bounds bounds)
{
    // One plane per face of the axis-aligned box.
    Plane top = new Plane(Vector3.up, bounds.max);
    Plane bottom = new Plane(Vector3.down, bounds.min);
    Plane front = new Plane(Vector3.forward, bounds.max);
    Plane back = new Plane(Vector3.back, bounds.min);
    Plane right = new Plane(Vector3.right, bounds.max);
    Plane left = new Plane(Vector3.left, bounds.min);

    Vector3 topclose = top.ClosestPointOnPlane(point);
    Vector3 botclose = bottom.ClosestPointOnPlane(point);
    Vector3 frontclose = front.ClosestPointOnPlane(point);
    Vector3 backclose = back.ClosestPointOnPlane(point);
    Vector3 rightclose = right.ClosestPointOnPlane(point);
    Vector3 leftclose = left.ClosestPointOnPlane(point);

    // Keep whichever projected point is nearest to the query point.
    Vector3 closest = point;
    float bestdist = float.MaxValue;
    foreach (Vector3 p in new Vector3[] {
        topclose, botclose, frontclose, backclose, leftclose, rightclose
    })
    {
        float dist = Vector3.Distance(p, point);
        if (dist < bestdist)
        {
            bestdist = dist;
            closest = p;
        }
    }
    return closest;
}
(Note: this assumes an axis-aligned box, which is all I needed at the time. If you want to rotate it, you will have to do more work to transform the point.)
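Usage is then straightforward (boxCollider and other are hypothetical references to the cube's BoxCollider and the tracked object):
// Distance from an object inside the cube to the cube's surface.
Vector3 surfacePoint = ClosestPointOnBounds(other.position, boxCollider.bounds);
float distanceToSurface = Vector3.Distance(other.position, surfacePoint);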
You can calculate it with Vector3.Distance. An example:
float minDistance = 2f;
float distance = Vector3.Distance(other.position, transform.position);
if (distance < minDistance)
{
    // some code stuffs
}
else
{
    // some code stuffs
}
Useful information about Vector3.Distance and getting the distance from an object:
source: https://docs.unity3d.com/ScriptReference/30_search.html?q=Distance
I am trying to have a GameObject in Unity react with sound when another object is inside it. I want the GameObject to use the entering object's location to find the closest voxel and then play audio based on that voxel's intensity/colour. Does anyone have any ideas? I am working with a dataset that is 512x256x512 voxels, and I want it to work if the object is resized as well. Any help is much appreciated :).
The dataset I'm working with is a 3d .mhd medical scan of a body. Here is how the texture is added to the renderer on start:
for (int k = 0; k < NumberOfFrames; k++) {
    string fname_ = "T" + k.ToString("D2");
    Color[] colors = LoadData(Path.Combine(imageDir, fname_ + ".raw"));
    _volumeBuffer.Add(new Texture3D(dim[0], dim[1], dim[2], TextureFormat.RGBAHalf, mipmap));
    _volumeBuffer[k].SetPixels(colors);
    _volumeBuffer[k].Apply();
}
GetComponent<Renderer>().material.SetTexture("_Data", _volumeBuffer[0]);
The size of the object is defined using the mhd header file's spacing, as well as the voxel dimensions:
transform.localScale = new Vector3(mhdheader.spacing[0] * volScale, mhdheader.spacing[1] * volScale * dim[1] / dim[0], mhdheader.spacing[2] * volScale * dim[2] / dim[0]);
I have tried making my own function to get the index from world space by offsetting the position to the start of the render mesh (not sure if this is right), then scaling by the local scale, then multiplying by the number of voxels in each dimension. However, I am not sure if my logic is right whatsoever... Here is the code I tried:
public Vector3Int GetIndexFromWorld(Vector3 worldPos)
{
    Vector3 startOfTex = gameObject.GetComponent<Renderer>().bounds.min;
    Vector3 localPos = transform.InverseTransformPoint(worldPos);
    Vector3 localScale = gameObject.transform.localScale;
    Vector3 OffsetPos = localPos - startOfTex;
    Vector3 VoxelPosFloat = new Vector3(OffsetPos[0] / localScale[0], OffsetPos[1] / localScale[1], OffsetPos[2] / localScale[2]);
    VoxelPosFloat = Vector3.Scale(VoxelPosFloat, new Vector3(voxelDims[0], voxelDims[1], voxelDims[2]));
    Vector3Int voxelPos = Vector3Int.FloorToInt(VoxelPosFloat);
    return voxelPos;
}
You could try setting up a large number of box colliders with OnTriggerEnter() running on each, but a much better solution is to map the moving object's position straight to an index in your voxel array: floor the position vector to ints and do some maths to map the vector to an index, so that, for example, the vector (0,0,0) maps to voxels[0]. Then just fetch that voxel's properties as you like. For a voxel application this is a much faster calculation than colliders; a sketch of the index mapping follows.
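A minimal sketch of that flattening step (voxels and dims are placeholder names; the array is flat with length dims.x * dims.y * dims.z):
// Flatten an (x, y, z) voxel coordinate into a 1D index, x fastest.
// Bounds checking is left out for brevity.
int IndexOf(Vector3Int v, Vector3Int dims)
{
    return v.x + dims.x * (v.y + dims.y * v.z);
}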
I figured it out, I think. If anyone sees any flaw in my code, please let me know :).
public Vector3Int GetIndexFromWorld(Vector3 worldPos)
{
    Vector3 deltaBounds = rend.bounds.max - rend.bounds.min;
    Vector3 OffsetPos = worldPos - rend.bounds.min;
    Vector3 normPos = new Vector3(OffsetPos[0] / deltaBounds[0], OffsetPos[1] / deltaBounds[1], OffsetPos[2] / deltaBounds[2]);
    Vector3 voxelPositions = new Vector3(normPos[0] * voxelDims[0], normPos[1] * voxelDims[1], normPos[2] * voxelDims[2]);
    Vector3Int voxelPos = Vector3Int.FloorToInt(voxelPositions);
    return voxelPos;
}
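One caveat, hedged: a worldPos lying exactly on rend.bounds.max floors to an index equal to the dimension itself, one past the last voxel, so it is worth clamping the result:
// Clamp into the valid [0, dim - 1] range on each axis.
voxelPos = Vector3Int.Max(voxelPos, Vector3Int.zero);
voxelPos = Vector3Int.Min(voxelPos, new Vector3Int(voxelDims[0] - 1, voxelDims[1] - 1, voxelDims[2] - 1));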
I am trying to get the scaled vertices of a mesh after scaling the gameObject.
The gameObject appears scaled in the editor, but if I print the mesh's vertices, they are not scaled.
gameObject.transform.localScale *= 10;
_mesh = gameObject.GetComponent<MeshFilter>().mesh;
// mesh.Recalculate... here if it has not been done before getting the mesh
for (int i = 0; i < _mesh.vertexCount; i++)
{
    print(_mesh.vertices[i]); // not the right scale
}
I am wondering how this works.
I think this will work if you want the scaled points, local to world position:
var scale = 12f;
gameObject.transform.localScale *= scale;
_mesh = gameObject.GetComponent<MeshFilter>().mesh;
var verts = _mesh.vertices; // cache: each access to .vertices copies the whole array
for (int i = 0; i < verts.Length; i++)
{
    print(transform.TransformPoint(verts[i])); // local -> world, including scale
}
If you want only relatively scaled points, multiply the _mesh.vertices[i] components by the local scale vector's components using Vector3.Scale:
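// A minimal sketch: apply only the object's scale to each vertex
// (no rotation or translation).
var verts = _mesh.vertices;
for (int i = 0; i < verts.Length; i++)
{
    print(Vector3.Scale(verts[i], transform.localScale));
}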
In a surface shader, given the world's up axis (and the others too), a world-space position and a normal in world space, how can we rotate the world-space position into the space of the normal?
That is, given an up vector and a non-orthogonal target up vector, how can we transform the position by rotating its up vector?
I need this so I can get the vertex position only affected by the object's rotation matrix, which I don't have access to.
Here's a graphical visualization of what I want to do:
Up is the world up vector
Target is the world space normal
Pos is arbitrary
The diagram is two-dimensional, but I need to solve this for 3D space.
Looks like you're trying to rotate pos by the same rotation that would transform up to new_up.
Using the rotation matrix found here, we can rotate pos using the following code. This will work either in the surface function or a supplementary vertex function, depending on your application:
// Our 3 vectors
float3 pos;
float3 new_up;
float3 up = float3(0,1,0);
// Build the rotation matrix using notation from the link above
float3 v = cross(up, new_up);
float s = length(v); // Sine of the angle
float c = dot(up, new_up); // Cosine of the angle
float3x3 VX = float3x3(
0, -1 * v.z, v.y,
v.z, 0, -1 * v.x,
-1 * v.y, v.x, 0
); // This is the skew-symmetric cross-product matrix of v
float3x3 I = float3x3(
1, 0, 0,
0, 1, 0,
0, 0, 1
); // The identity matrix
float3x3 R = I + VX + mul(VX, VX) * (1 - c) / pow(s, 2); // The rotation matrix! YAY!
// Finally we rotate
float3 new_pos = mul(R, pos);
This is assuming that new_up is normalized.
If the "target up normal" is a constant, the calculation of R could (and should) only happen once per frame. I'd recommend doing it on the CPU side and passing it into the shader as a variable. Calculating it for every vertex/fragment is costly, consider what it is you actually need.
If your pos is a 4-component vector, just do the above with the first three components; the fourth can remain unchanged (it doesn't really mean anything in this context anyway).
I'm away from a machine where I can run shader code, so if I made any syntactical mistakes in the above, please forgive me.
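Following the CPU-side suggestion above, a hedged C# sketch (targetUp and the _UpRotation property name are placeholders, not from the original):
// Build the up -> targetUp rotation once per frame and upload it.
// Quaternion.FromToRotation also avoids the s == 0 singularity the
// (1 - c)/s^2 term hits when the two vectors are parallel.
Quaternion q = Quaternion.FromToRotation(Vector3.up, targetUp.normalized);
material.SetMatrix("_UpRotation", Matrix4x4.Rotate(q));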
Not tested, but you should be able to input a starting point and an axis. Then all you do is change procession, which is a normalized (0-1) float along the circumference, and your point will update accordingly.
using UnityEngine;

public class Follower : MonoBehaviour {

    public Vector3 point;                  // the point that orbits
    public Vector3 origin = Vector3.zero;  // centre of the circle
    public Vector3 axis = Vector3.forward; // axis to rotate around

    float distance;
    Vector3 direction;

    public float procession = 0f; // < normalized (0-1) fraction of a revolution

    void Update() {
        Vector3 offset = point - origin;
        distance = offset.magnitude;
        direction = offset.normalized;

        // procession is the fraction of the full circumference travelled,
        // so the rotation angle in radians is procession times 2*PI.
        float angle = (procession % 1f) * 2f * Mathf.PI;

        // Quaternions rotate vectors with the quaternion on the left.
        direction = Quaternion.AngleAxis(Mathf.Rad2Deg * angle, axis) * direction;

        Ray ray = new Ray(origin, direction);
        point = ray.GetPoint(distance);
    }
}
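A hedged usage note: attach the component to a GameObject, set point somewhere away from origin, and give procession a small value such as 0.01, so the point advances that fraction of a revolution each Update.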