I am trying to create my whole mesh from 5 submeshes via script in Unity. Each submesh has its own index array and assigned material. Curiously, Unity only renders the first submesh, but if I inspect the mesh assigned to the MeshFilter, it reports more vertices and triangles than are actually rendered.
GameObject go = new GameObject("Island Prototype");

Mesh mesh = new Mesh();
mesh.vertices = this.vertices.ToArray();
mesh.subMeshCount = this.indices.Count;

int c = 0;
foreach (List<int> l in this.indices)
{
    Debug.Log(l.Count);
    mesh.SetTriangles(l.ToArray(), c);
    c++;
}

mesh.RecalculateNormals();

List<Material> materials = new List<Material>();
materials.Add(fieldMaterial);
foreach (TileSettings ts in tiles)
{
    materials.Add(fieldMaterial);
}
Debug.Log("Number of materials: " + materials.Count);

//mesh.RecalculateBounds();
//mesh.RecalculateNormals();

MeshRenderer mr = go.AddComponent<MeshRenderer>();
mr.sharedMaterials = materials.ToArray();

MeshFilter mf = go.AddComponent<MeshFilter>();
mf.mesh = mesh;
In the screenshot you can see that the mesh inspector reports the correct submesh count, and there are 5 materials attached to the renderer.
On the console I've printed the index count of each submesh; submeshes 3-5 don't own any triangles at the moment, but that shouldn't be a problem, should it? At least submesh 2 should be rendered...
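Before suspecting the renderer, the submesh data itself can be sanity-checked with plain arithmetic: each index list handed to SetTriangles must have a length divisible by 3, and every index must point at an existing vertex. A minimal sketch of that check, written in Python for brevity (the function name and sample data are hypothetical):

```python
def validate_submeshes(vertex_count, submesh_indices):
    """Check the invariants SetTriangles expects for each submesh."""
    for sub, tris in enumerate(submesh_indices):
        # Every 3 consecutive indices form one triangle
        assert len(tris) % 3 == 0, f"submesh {sub}: index count not a multiple of 3"
        # Each index must reference an existing vertex
        assert all(0 <= i < vertex_count for i in tris), f"submesh {sub}: index out of range"
    return len(submesh_indices)  # the value subMeshCount should be set to

# Hypothetical data: 4 vertices, two submeshes of one triangle each
print(validate_submeshes(4, [[0, 1, 2], [2, 1, 3]]))  # → 2
```

An empty submesh passes this check: its index list simply has length 0, which is legal and just draws nothing with its material slot.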
Related
I am making a game with Unity; the project is 2D.
What I have to do is a wheel divided into segments, where each segment is an individual object. The wheel turns on itself at a certain speed. Here is an image for better understanding:
I have a "Selector", i.e. something to select a slice. For it I made a temporary sprite (the red triangle) and a script that casts a ray to locate the selected slice. Some images to better understand:
So far so good. My problem lies in the fact that the mesh of each slice is also the mesh of its collider, which is not convex but concave. From what I've read on the internet, Unity does not allow raycasts to hit objects with a concave mesh collider due to calculation problems, so I cannot hit the slices. The only way to hit them is to tick the "Convex" parameter of the Collider component, but that creates a collider with a square shape, so the selection precision is lacking. Here are some pictures to better understand:
So I looked on the internet and found that the solution was to split the collider into several smaller but convex colliders. I tried this: for each pair of triangles I created a collider whose mesh is made from the two triangles in question, and I got this:
But it is still not hit by the raycast unless I tick the "Convex" parameter of the collider component, and even then a collider with a square shape is created.
Finally, here are the relevant parts of the code:
Code for generating the raycast:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Selector : MonoBehaviour
{
    public GameObject objPosRef;

    void Update()
    {
        if (Input.GetMouseButtonUp(1))
        {
            RaycastHit hit;
            Debug.DrawRay(objPosRef.transform.position, new Vector3(0, 0.5f, 0), Color.green, 0.5f);
            if (Physics.Raycast(objPosRef.transform.position, new Vector3(0, 0.5f, 0), out hit))
            {
                Debug.Log("Hit: " + hit.collider.name);
            }
        }
    }
}
Code to update the segment mesh:
...

private void UpdateMesh()
{
    mesh.Clear();

    mesh.vertices = vertex;
    mesh.triangles = triangles;

    createColldiers();

    mesh.RecalculateNormals();
    mesh.RecalculateBounds();

    GetComponent<MeshRenderer>().material = new Material(material);
    GetComponent<MeshRenderer>().material.color = color;
}
Code for creating segment colliders:
private void createColldiers()
{
    int numColliders = (numVertex * 2 - 2) / 2;
    for (int i = 0; i < numColliders; i++)
    {
        MeshCollider collider = gameObject.AddComponent<MeshCollider>();
        Mesh mesh = new Mesh();

        // Copy two triangles (6 indices) per collider from the segment mesh
        int[] tr = new int[6];
        int k = 6 * i;
        for (int j = 0; j < 6; j++)
        {
            tr[j] = triangles[k];
            k++;
        }

        mesh.vertices = vertex;
        mesh.triangles = tr;
        collider.sharedMesh = mesh;
    }
}
In summary: I create a new mesh whose vertices are identical to those of the segment mesh (even though the new mesh doesn't need all of them); its triangles are only 2, taken in pairs from the triangle array of the segment mesh.
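The pairing logic above (two triangles, i.e. six indices, per collider) can be expressed independently of Unity. A small Python sketch of the same slicing, with hypothetical index data:

```python
def collider_triangle_chunks(triangles):
    """Split a flat triangle-index list into chunks of 6 indices
    (two triangles), one chunk per MeshCollider."""
    assert len(triangles) % 6 == 0, "need an even number of triangles"
    return [triangles[k:k + 6] for k in range(0, len(triangles), 6)]

# Hypothetical fan of 4 triangles -> 2 colliders
chunks = collider_triangle_chunks([0, 1, 2, 2, 1, 3, 2, 3, 4, 4, 3, 5])
print(len(chunks))  # → 2
```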
Sorry if my post is full of pictures; I hope I have given you the best possible understanding of my problem. Most likely the problem is something silly. Thank you in advance for your help.
P.S.: a few people say that raycasts can safely hit objects with a concave collider, going against the grain of all the claims to the contrary. Who is right?
I am trying to have a GameObject in Unity react with sound if another object is inside it. I want the GameObject to use the entering object's location to determine the closest voxel, and then play audio based on that voxel's intensity/colour. Does anyone have any ideas? I am working with a dataset of 512x256x512 voxels, and I want this to keep working if the object is resized. Any help is much appreciated :).
The dataset I'm working with is a 3D .mhd medical scan of a body. Here is how the texture is added to the renderer on start:
for (int k = 0; k < NumberOfFrames; k++)
{
    string fname_ = "T" + k.ToString("D2");
    Color[] colors = LoadData(Path.Combine(imageDir, fname_ + ".raw"));
    _volumeBuffer.Add(new Texture3D(dim[0], dim[1], dim[2], TextureFormat.RGBAHalf, mipmap));
    _volumeBuffer[k].SetPixels(colors);
    _volumeBuffer[k].Apply();
}
GetComponent<Renderer>().material.SetTexture("_Data", _volumeBuffer[0]);
The size of the object is defined using the spacing from the .mhd header files as well as the voxel dimensions:
transform.localScale = new Vector3(mhdheader.spacing[0] * volScale, mhdheader.spacing[1] * volScale * dim[1] / dim[0], mhdheader.spacing[2] * volScale * dim[2] / dim[0]);
I have tried writing my own function to get the index from the world position by offsetting it to the start of the rendered mesh (not sure if this is right), then scaling by the local scale, then multiplying by the number of voxels in each dimension. However, I am not sure if my logic is right at all... Here is the code I tried:
public Vector3Int GetIndexFromWorld(Vector3 worldPos)
{
    Vector3 startOfTex = gameObject.GetComponent<Renderer>().bounds.min;
    Vector3 localPos = transform.InverseTransformPoint(worldPos);
    Vector3 localScale = gameObject.transform.localScale;
    Vector3 OffsetPos = localPos - startOfTex;
    Vector3 VoxelPosFloat = new Vector3(OffsetPos[0] / localScale[0], OffsetPos[1] / localScale[1], OffsetPos[2] / localScale[2]);
    VoxelPosFloat = Vector3.Scale(VoxelPosFloat, new Vector3(voxelDims[0], voxelDims[1], voxelDims[2]));
    Vector3Int voxelPos = Vector3Int.FloorToInt(VoxelPosFloat);
    return voxelPos;
}
You could set up a large number of box colliders, each running OnTriggerEnter(). But a much better solution is to lay out your voxels in a flat array and then use simple maths: clamp the moving object's position vector to integers and map the vector to an index in the array. For example, the vector (0,0,0) could map to voxels[0]. Then just fetch that voxel's properties as you like. For a voxel application this is much faster than colliders.
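The position-to-index mapping suggested above boils down to flattening three integer coordinates into one array index. A sketch of that arithmetic in Python, assuming an x-fastest memory layout (the layout of your actual dataset may differ):

```python
def voxel_index(x, y, z, dims=(512, 256, 512)):
    """Flatten integer voxel coordinates into a single array index,
    assuming x varies fastest, then y, then z."""
    dx, dy, dz = dims
    assert 0 <= x < dx and 0 <= y < dy and 0 <= z < dz, "coordinate out of bounds"
    return x + dx * (y + dy * z)

print(voxel_index(0, 0, 0))        # → 0
print(voxel_index(511, 255, 511))  # → 67108863 (last voxel of 512x256x512)
```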
I figured it out I think. If anyone sees any flaw in my coding, please let me know :).
public Vector3Int GetIndexFromWorld(Vector3 worldPos)
{
    Vector3 deltaBounds = rend.bounds.max - rend.bounds.min;
    Vector3 OffsetPos = worldPos - rend.bounds.min;
    Vector3 normPos = new Vector3(OffsetPos[0] / deltaBounds[0], OffsetPos[1] / deltaBounds[1], OffsetPos[2] / deltaBounds[2]);
    Vector3 voxelPositions = new Vector3(normPos[0] * voxelDims[0], normPos[1] * voxelDims[1], normPos[2] * voxelDims[2]);
    Vector3Int voxelPos = Vector3Int.FloorToInt(voxelPositions);
    return voxelPos;
}
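The core of this solution (normalize the world position against the renderer bounds, then scale by the voxel counts) is plain arithmetic and easy to verify outside Unity. A Python sketch of the same mapping, with tuples standing in for Vector3:

```python
import math

def index_from_world(world, bounds_min, bounds_max, voxel_dims):
    """Normalize a world position to [0,1) per axis using the bounds,
    then scale by the voxel counts and floor to integer indices."""
    return tuple(
        math.floor((w - lo) / (hi - lo) * n)
        for w, lo, hi, n in zip(world, bounds_min, bounds_max, voxel_dims)
    )

# A point at the centre of unit-sized bounds lands in the middle voxel:
print(index_from_world((0.5, 0.5, 0.5), (0, 0, 0), (1, 1, 1), (512, 256, 512)))  # → (256, 128, 256)
```

One edge case worth handling: a point exactly on bounds.max floors to the voxel count itself, one past the last valid index, so clamping the result is advisable.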
I have lots of textures for a 2D project that I place on quads (as materials on the quads). They're like "maps": filled on the inside but with transparent edges.
I make a polygon collider for each map and place it on top, so that I can use Physics2D.Raycast() to detect whether the user has placed an object on or off the map. They're like shapes (polygons).
Making the polygon collider by hand is time-consuming and the quality isn't good. Is there a mesh collider that detects transparency and therefore shapes itself to the map? Or is there a way to write a script that shapes the collider to the shape of the map?
It turns out that the Polygon Collider 2D can generate a polygon from such a transparent sprite: just drag and drop the sprite onto the polygon collider component.
Here is the solution: use RaycastAll to retrieve all the objects hit, then return the closest object such that the alpha value of the pixel is not 0.
// Requires "using System.Linq;"; the hit texture must have Read/Write enabled for GetPixel to work.
private static RaycastHit? RaycastWithTransparency(Ray ray)
{
    var res = Physics.RaycastAll(ray, float.MaxValue).ToList().OrderBy(h => h.distance);
    foreach (var h in res)
    {
        var col = h.collider;
        Renderer rend = h.transform.GetComponent<Renderer>();
        Texture2D tex = rend.material.mainTexture as Texture2D;
        var xInTex = (int)(h.textureCoord.x * tex.width);
        var yInTex = (int)(h.textureCoord.y * tex.height);
        var pix = tex.GetPixel(xInTex, yInTex);
        if (pix.a > 0)
        {
            //Debug.Log("You hit: " + col.name + " position " + h.textureCoord.x + " , " + h.textureCoord.y);
            return h;
        }
    }
    return null;
}
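The UV-to-pixel conversion in the middle of this method is worth noting on its own: textureCoord is in [0,1] per axis, so multiplying by the texture size and truncating gives pixel indices. A quick Python sketch of just that step:

```python
def uv_to_pixel(u, v, width, height):
    """Convert a [0,1] texture coordinate to integer pixel indices,
    truncating toward zero as the (int) casts above do."""
    return int(u * width), int(v * height)

print(uv_to_pixel(0.25, 0.5, 256, 128))  # → (64, 64)
```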
I am trying out Unity for a project that I am on.
I am attempting to draw a 3D polygon from a set of coordinates that I have.
What I am doing now is building a row of cubes between pairs of points. I plan to turn these points into either a solid shape or just "walls" to form a room.
However, it doesn't seem to work as expected. Please advise.
drawCube(Vector3(10, 0, 14), Vector3(70, 0, 14));
drawCube(Vector3(90, 0, 14), Vector3(60, 87, 45));

function drawCube(v1, v2) {
    pA = v1;
    pB = v2;
    var plane : GameObject = GameObject.CreatePrimitive(PrimitiveType.Cube);
    var between : Vector3 = pB - pA;
    var distance : float = between.magnitude;
    plane.transform.localScale.x = distance;
    plane.transform.localScale.y = 10;
    plane.transform.position = pA + (between / 2.0);
    plane.transform.LookAt(pB);
}
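The placement maths drawCube relies on (midpoint between the two points as the position, distance as the cube's length) can be checked in isolation. A Python sketch of the same vector arithmetic:

```python
import math

def cube_between(a, b):
    """Midpoint and length of a cube stretched between two 3D points."""
    mid = tuple((x + y) / 2 for x, y in zip(a, b))
    length = math.dist(a, b)
    return mid, length

print(cube_between((10, 0, 14), (70, 0, 14)))  # → ((40.0, 0.0, 14.0), 60.0)
```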
Updated: I have also tried using a mesh, but all I got was the image below. What am I doing wrong?
I am trying to achieve something like this:
You could make primitives and manipulate them, but that would be very limiting if you need to scale or change your requirements in the future. I would recommend using a procedural mesh to create the geometry you need as you need it. The basics aren't too hard: it's just a matter of constructing the Mesh object from its base components given some vertices. Here's an example of constructing a 3D quadrilateral:
using UnityEngine;
using System.Collections.Generic;

public class SquareMaker : MonoBehaviour {

    public List<Vector3> points;

    void Start()
    {
        GameObject threeDSquare = new GameObject("3DSquare");
        threeDSquare.AddComponent<MeshRenderer>();
        threeDSquare.AddComponent<MeshFilter>();
        threeDSquare.GetComponent<MeshFilter>().mesh = CreateMesh(points);
    }

    private Mesh CreateMesh(List<Vector3> points)
    {
        List<int> tris = new List<int>();        // Every 3 ints represents a triangle
        List<Vector2> uvs = new List<Vector2>(); // Vertex position in 0-1 UV space

        /* 4 points in the list for the square made of two triangles:
        0 *--* 1
          | /|
          |/ |
        3 *--* 2
        */
        tris.Add(1);
        tris.Add(2);
        tris.Add(3);

        tris.Add(3);
        tris.Add(0);
        tris.Add(1);

        // uvs determine vert (point) coordinates in UV space
        uvs.Add(new Vector2(0f, 1f));
        uvs.Add(new Vector2(1f, 1f));
        uvs.Add(new Vector2(1f, 0f));
        uvs.Add(new Vector2(0f, 0f));

        Mesh mesh = new Mesh();
        mesh.vertices = points.ToArray();
        mesh.uv = uvs.ToArray();
        mesh.triangles = tris.ToArray();
        mesh.RecalculateNormals();
        return mesh;
    }
}
I want to draw a grid of lines in 3D in Unity.
I found that I can draw lines using MeshTopology.Lines in Unity 4, but I cannot find an example of how to do it.
How do I draw lines in Unity 3D?
Vector3[] verts = new Vector3[] { Vector3.up, Vector3.right, Vector3.down, Vector3.left };
int[] indicesForLineStrip = new int[] { 0, 1, 2, 3, 0 };
//int[] indicesForLines = new int[] { 0, 1, 1, 2, 2, 3, 3, 0 };

Mesh mesh = new Mesh();
mesh.vertices = verts;
mesh.SetIndices(indicesForLineStrip, MeshTopology.LineStrip, 0);
//mesh.SetIndices(indicesForLines, MeshTopology.Lines, 0);
mesh.RecalculateNormals();
mesh.RecalculateBounds();
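For an actual grid, the Lines topology wants one index pair per segment, and generating those pairs is the only real work. Here is a sketch of the index generation in Python (row-major vertex layout assumed, function name hypothetical) that would port directly to a C# loop:

```python
def grid_line_indices(nx, ny):
    """Index pairs for MeshTopology.Lines on an (nx+1) x (ny+1) vertex grid,
    with vertices laid out row-major: index = iy * (nx + 1) + ix."""
    w = nx + 1
    pairs = []
    for iy in range(ny + 1):            # horizontal segments
        for ix in range(nx):
            pairs += [iy * w + ix, iy * w + ix + 1]
    for iy in range(ny):                # vertical segments
        for ix in range(nx + 1):
            pairs += [iy * w + ix, (iy + 1) * w + ix]
    return pairs

# A 1x1-cell grid (4 vertices, 4 edges) needs 8 indices:
print(len(grid_line_indices(1, 1)))  # → 8
```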
Another recommendation:
Initialize a mesh, triangles, vertices, etc. What worked for me was to then do the following (where meshMaterials is an array of materials: material 0 -> submesh 0, material 1 -> submesh 1):
mesh.subMeshCount = 2;
GetComponent<MeshRenderer>().materials = meshMaterials;
mesh.SetIndices(triangles, MeshTopology.Lines, 1);
Useful discussion: How to Render/Draw mesh lines OVER texture at runtime/play mode