Unity adds vertices to the original mesh

I have a simple teapot mesh and a point cache animation that matches that mesh.
Everything is exported from 3DS Max.
When I try to load it into Unity and apply the point cache to the mesh, there is a vertex-count mismatch.
Upon further debugging I saw that Unity indeed adds more vertices than there are in the original mesh, which means I can no longer match the point cache animation to the mesh.
I looked at the RecalculateNormals documentation page, and it says:
Imported Meshes sometimes don't share all vertices. For example, a vertex at a UV seam is split into two vertices, so the RecalculateNormals function creates normals that are not smooth at the UV seam.
So Unity adds more vertices to the original mesh.
What can I do to fix this so my point cache matches the mesh? There is no documentation on how Unity does this, nor is there a way to turn it off.
Note: I tried changing the import settings in Unity (and the export settings in Max), such as:
Mesh compression -> Nothing
Optimize mesh -> Nothing
Keep Quads -> ON
Weld Vertices -> OFF
Smoothness Source -> None
And more...
Every setting was tested separately and together. Nothing seems to lower the vertex count.

Vertex duplication is generally unavoidable.
Turning off Split per-vertex Normals in the FBX exporter settings in Max is supposed to solve this problem, but it will remove all seams from your model.
Personally, I would work around this in MaxScript by exporting the entire mesh as FBX at each frame ($t) instead of writing out the point cache.
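If you do want to keep the point cache workflow, another option is to rebuild the mapping on the Unity side. Below is a minimal sketch (my own, not from this answer) that matches each imported, split vertex back to the nearest original point-cache position once at startup and then reuses that index map every frame; the originalPositions array and the ApplyFrame call are placeholders for however you load and play back your cache.

using UnityEngine;

// Remaps a point cache (one position per ORIGINAL vertex) onto a mesh whose
// vertices Unity has split at UV/normal seams.
[RequireComponent(typeof(MeshFilter))]
public class PointCacheRemap : MonoBehaviour
{
    public Vector3[] originalPositions;  // bind-pose positions in the cache's vertex order (placeholder)
    private int[] importedToOriginal;    // for each imported vertex, the index of its original vertex
    private Mesh mesh;

    void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] imported = mesh.vertices;
        importedToOriginal = new int[imported.Length];

        // Brute-force nearest match; fine as a one-off setup step on small meshes.
        for (int i = 0; i < imported.Length; i++)
        {
            int best = 0;
            float bestDist = float.MaxValue;
            for (int j = 0; j < originalPositions.Length; j++)
            {
                float d = (imported[i] - originalPositions[j]).sqrMagnitude;
                if (d < bestDist) { bestDist = d; best = j; }
            }
            importedToOriginal[i] = best;
        }
    }

    // Call once per frame with that frame's cached positions (in the original vertex order).
    public void ApplyFrame(Vector3[] cachePositions)
    {
        Vector3[] updated = new Vector3[importedToOriginal.Length];
        for (int i = 0; i < updated.Length; i++)
            updated[i] = cachePositions[importedToOriginal[i]];
        mesh.vertices = updated;
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
    }
}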

I found this tool, the Oasis Mesh Editor, that can Split Vertices and Merge Vertices back together, https://assetstore.unity.com/packages/slug/166155
The Merge Verts tool also has a Max Tolerance setting so that only verts that are closer than this distance will be merged.
Hope this helps anyone looking =)
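For what it's worth, the merge direction is easy to sketch yourself if all you need is tolerance-based welding (this is not the asset's code, only the general idea): vertices whose positions fall into the same grid cell of size tolerance get collapsed to a single index.

using System.Collections.Generic;
using UnityEngine;

public static class VertexWelder
{
    // Merges vertices closer than 'tolerance' by snapping positions to a grid.
    // Rough sketch: per-vertex attributes other than position (UVs, normals)
    // are simply taken from the first vertex that lands in each cell.
    public static void Weld(Mesh mesh, float tolerance)
    {
        Vector3[] verts = mesh.vertices;
        int[] tris = mesh.triangles;

        var cellToNewIndex = new Dictionary<Vector3Int, int>();
        var remap = new int[verts.Length];
        var newVerts = new List<Vector3>();

        for (int i = 0; i < verts.Length; i++)
        {
            Vector3Int cell = Vector3Int.FloorToInt(verts[i] / tolerance);
            if (!cellToNewIndex.TryGetValue(cell, out int newIndex))
            {
                newIndex = newVerts.Count;
                newVerts.Add(verts[i]);
                cellToNewIndex[cell] = newIndex;
            }
            remap[i] = newIndex;
        }

        for (int i = 0; i < tris.Length; i++)
            tris[i] = remap[tris[i]];

        mesh.Clear();
        mesh.SetVertices(newVerts);
        mesh.triangles = tris;
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
    }
}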

Related

Unity Point-cloud to mesh with texture/color

I have a point-cloud and an RGB texture from a depth camera that fit together. I procedurally created a mesh from a selected part of the point-cloud by implementing the quickhull 3D algorithm.
Now, somehow I need to apply the texture that I have to that mesh. Note that there can be multiple selected parts of the point-cloud thus making multiple objects that need the texture. The texture is just a basic 720p file that should be applied to the mesh material.
Basically I have to do this: https://www.andreasjakl.com/capturing-3d-point-cloud-intel-realsense-converting-mesh-meshlab/ but inside Unity. (I'm also using a RealSense camera)
I tried a decal shader, but the result is not precise. The UV map is completely twisted by the mesh-creation process, and I'm not sure how to generate a correct one.
UV and the mesh
I only have two ideas, but I don't really know if they'll work or how to do them:
Try to create a correct UV map and then wrap the texture around it somehow.
Somehow bake colors to vertices and then use vertex colors to create the desired effect.
What other things could I try?
I'm working on quite a similar problem, but in my case I just want to create a complete mesh from the point cloud, not just a quickhull, because I don't want to lose any depth information.
I'm nearly done with the meshing algorithm (I just need to do some optimizations). What is quite challenging now is matching the RGB camera's texture to the depth sensor's point cloud, because the two sensors of course have different viewpoints.
Intel RealSense provides an interesting whitepaper about this problem, and as far as I know the SDK corrects these different perspectives with UV mapping and provides a red/green UV map stream for your shader.
Maybe the short report can help you out. Here's the link. I'm also very interested in what you are doing. Please keep us up to date.
Regards
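On the vertex-color idea from the question: once you have (or can compute) one UV per vertex for the color camera, whether from the SDK's UV map stream or your own projection, baking the texture into vertex colors is only a few lines. A rough sketch, assuming the uvs array is supplied by whatever alignment you use and the material renders vertex colors:

using UnityEngine;

public static class VertexColorBaker
{
    // Samples 'texture' at each vertex's UV and stores the result as a vertex color.
    // The texture must be readable (Read/Write enabled in its import settings).
    public static void Bake(Mesh mesh, Texture2D texture, Vector2[] uvs)
    {
        var colors = new Color[mesh.vertexCount];
        for (int i = 0; i < colors.Length; i++)
            colors[i] = texture.GetPixelBilinear(uvs[i].x, uvs[i].y);
        mesh.colors = colors;
    }
}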

I want to create a mesh from a silhouette

I'm working in Unity and thus coding in C#, but any idea or a place to start is welcome.
I don't really know how to describe my problem, or whether there is a 'simple' solution for it, but I'll try.
I have an object (probably going to limit myself to simple shapes) that casts 2 shadows.
I'd like to generate a mesh that is the shape of that shadow.
As you can see on the image below, I drew the desired meshes in green.
desired meshes drawn in green
I've messed around with altering the vertices of my initial mesh, and in some specific cases (objects with no rotation) found a solution, but I haven't found one that works well enough.
Does anyone have an idea that could work?
Thanks in advance,
Bart
I took the time to create a project that does exactly what you said:
Preview it here
The Method
Using raycasts, I calculated the projected vertices of a specific object from a light source. The method may seem inefficient, but as long as the specified mesh has a low vertex count everything should be fine.
Then, by taking the average of the projected vertices, I calculated the position of the projected cube:
// Note: the Average/Min/Max calls below require 'using System.Linq;'.
Vector3 averagePosition = new Vector3(vertices.Average(vector => vector.x),
                                      vertices.Average(vector => vector.y),
                                      vertices.Average(vector => vector.z));
And by taking the range of each of the projected vertex position components (x, y, z), I calculated the scale of the cube:
Vector3 averageScale = new Vector3(vertices.Max(vector => vector.x) - vertices.Min(vector => vector.x),
                                   vertices.Max(vector => vector.y) - vertices.Min(vector => vector.y),
                                   normalScale);
Note: I am not generating a whole new mesh. I am just manipulating the transform of a pre-made cube with a script attached.
The downside is that this method is limited to one axis so far. That can be fixed.
Download the project from Github
GitHub link: https://github.com/MyIsaak/Shadow-Mesh/tree/master
Would be great if you could commit any improvements you make to help the community. You are free to use this project for commercial and non-commercial use.
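For context, the projection step described in this answer can be sketched roughly like this (my own minimal example, not the linked project's code): each vertex of the source mesh is cast along the light's direction onto a ground plane at y = 0, and the hit points become the projected vertices that the average/range code above works on.

using System.Collections.Generic;
using UnityEngine;

public class ShadowProjector : MonoBehaviour
{
    public Light directionalLight;   // the light that casts the shadow
    public MeshFilter source;        // the object whose silhouette we project

    // Projects every vertex of the source mesh along the light direction
    // onto the plane y = 0 and returns the resulting points.
    public List<Vector3> ProjectVertices()
    {
        var projected = new List<Vector3>();
        Vector3 dir = directionalLight.transform.forward;
        Plane ground = new Plane(Vector3.up, Vector3.zero);

        foreach (Vector3 v in source.mesh.vertices)
        {
            Vector3 worldVertex = source.transform.TransformPoint(v);
            Ray ray = new Ray(worldVertex, dir);
            if (ground.Raycast(ray, out float enter))
                projected.Add(ray.GetPoint(enter));
        }
        return projected;
    }
}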

Do I have to use shared vertices in mesh in Unity?

I want to create procedurally generated landscape meshes with a flat shaded look in Unity3D.
I thought it would be best to create three unique vertices per triangle and use one calculated normal for all three. Building the mesh this way leads to redundant vertex position information. (Would that have an impact on render time?)
Anyway... the problem is that I would like to use shading techniques e.g. ambient occlusion on this mesh. I don't want to mess up the mesh topology that Unity3D expects for its shaders.
Is it better to create the mesh with shared vertices, perhaps add a custom vertex attribute such as 'flat_normal', and customize the shaders to use it?
The simple answer is
No,
Unity does not, in the slightest, "look for" shared verts. No 3D pipeline has anything to do with shared verts. Shared verts do not help or hinder the 3D pipeline in any way at all.
(Very often, when for example we are making dynamic mesh, we just "never use shared verts at all" because, as you have probably found, it's often far simpler to not use shared verts.)
The one and only reason to use shared verts is if, for some reason, it happens to make it more convenient for you. In that case the 3D pipeline (Unity or elsewhere) "allows" shared verts, with no downside.
There might be 2 considerations for using shared vertices:
To reduce memory usage. (Only slightly, but if you need fewer UV1s, UV2s, normals, and vertices, it can add up.)
To make it more convenient to share normals. (You only have to alter one normal per vertex if you want to keep the surface smooth.)
Neither is a big reason, but since most of the meshes you encounter from other 3D programs use shared vertices, it's probably best to get used to them.
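To make the non-shared approach from the question concrete, here is a minimal illustration of my own (not from either answer) that rebuilds an indexed mesh so every triangle owns its three vertices; with nothing shared, RecalculateNormals produces one normal per face and therefore the flat-shaded look:

using UnityEngine;

public static class FlatShading
{
    // Rebuilds 'mesh' so that every triangle has its own three vertices.
    // UVs and other attributes are dropped here for brevity; copy them the
    // same way as positions if you need them.
    public static void MakeFlatShaded(Mesh mesh)
    {
        Vector3[] oldVerts = mesh.vertices;
        int[] oldTris = mesh.triangles;

        var newVerts = new Vector3[oldTris.Length];
        var newTris = new int[oldTris.Length];

        for (int i = 0; i < oldTris.Length; i++)
        {
            newVerts[i] = oldVerts[oldTris[i]];
            newTris[i] = i;
        }

        mesh.Clear();
        mesh.vertices = newVerts;
        mesh.triangles = newTris;
        mesh.RecalculateNormals();   // flat, because no vertex is shared
        mesh.RecalculateBounds();
    }
}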

Why does merging geometries improve rendering speed?

In my web application I only need to add static objects to my scene. It ran slowly, so I started searching and found that merging geometries and merging vertices were the solution. When I implemented it, it indeed worked a lot better. All the articles said that the reason for this improvement is the decrease in the number of WebGL calls. As I am not very familiar with things like OpenGL and WebGL (I use Three.js to avoid their complexity), I would like to know why exactly it reduces the WebGL calls.
Because you send one large object instead of many small ones, the overhead is reduced. So I understand that loading one big mesh into the scene is faster than loading many small meshes.
BUT I do not understand why merging geometries also has a positive influence on the rendering calculation. I would also like to know the difference between merging geometries and merging vertices.
Thanks in advance!
three.js is a framework that helps you work with the WebGL API.
What a "mesh" is to three.js, to webgl, it's a series of low level calls that set up state and issue calls to the GPU.
Let's take a sphere for example. With three.js you would create it with a few lines:
var sphereGeometry = new THREE.SphereGeometry(10);
var sphereMaterial = new THREE.MeshBasicMaterial({color:'red'});
var sphereMesh = new THREE.Mesh( sphereGeometry, sphereMaterial);
myScene.add( sphereMesh );
You have your renderer.render() call, and poof, a sphere appears on screen.
A lot of stuff happens under the hood though.
The first line creates the sphere "geometry": the CPU runs a bunch of math and logic describing a sphere with points and triangles. Points are vectors (three floats grouped together); triangles are a structure that groups these points by indices (groups of integers).
Somewhere there is a loop that calculates the vectors using trigonometry (sin, cos), and another that weaves the resulting array of vectors into triangles (take every N, N + M, N + 2M, create a triangle, etc.).
Now these numbers exist in JavaScript land; it's just a bunch of floats and ints, grouped together in a specific way to describe shapes such as cubes, spheres and aliens.
You need a way to draw this construct on a screen - a two-dimensional array of pixels.
WebGL does not actually know much about 3D. It knows how to manage memory on the GPU and how to compute things in parallel (or gives you the tools to do so), and it knows how to do the mathematical operations that are crucial for 3D graphics, but the same math could be used to mine bitcoins without ever drawing anything.
In order for WebGL to draw something on screen, it first needs the data placed into appropriate buffers and it needs the shader programs. It needs to be set up for that specific call (will there be blending - transparency in three.js terms - depth testing, stencil testing, etc.); it needs to know what it is actually drawing (you provide strides, attribute sizes, etc. so it knows where a 'mesh' actually is in memory); how it is drawing it (triangle strips, fans, points...); and what to draw it with - which shaders it will apply to the data you provided.
So, you need a way to 'teach' WebGL to do 3d.
I think the best way to get familiar with this concept is to look at this tutorial, re-reading it if necessary, because it explains what happens to pretty much every 3D object drawn in perspective, ever.
To sum up the tutorial:
A perspective camera is basically two 4x4 matrices: a projection matrix, which puts things into perspective, and a view matrix, which moves the entire world into camera space. Every camera you make consists of these two matrices.
Every object exists in its own object space. A TRS matrix (translation, rotation, scale; the world matrix in three.js terms) is used to transform the object into world space.
So this stuff - a concept such as the "projection matrix" - is what teaches WebGL how to draw perspective.
Three.js abstracts this further and gives you things like "field of view" and "aspect ratio" instead of left right, top bottom.
Three.js also abstracts the transformation matrices (view matrix on the camera, and world matrices on every object) because it allows you to set "position" and "rotation" and computes the matrix based on this under the hood.
Since every mesh has to be processed by the vertex shader and the pixel shader in order to appear on the screen, every mesh needs to have all this information available.
When a draw call is issued for a specific mesh, that mesh has the same projection matrix and view matrix as every other object being rendered with the same camera. Each object has its own world matrix - the numbers that move it around your scene.
This is the transformation alone, happening in the vertex shader. The results are then rasterized and go to the pixel shader for processing.
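Put as a formula (my shorthand, not part of the original answer), the per-vertex work described above is just

$p_{\text{clip}} = P_{\text{projection}} \cdot V_{\text{view}} \cdot M_{\text{world}} \cdot p_{\text{local}}$

where the projection and view matrices are shared by everything the camera renders, and the world matrix is unique to each object.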
Let's consider two materials: black plastic and red plastic. They have the same shader, perhaps one you wrote using THREE.ShaderMaterial, or maybe one from three's library. It's the same shader, but it has one uniform value exposed - color. This lets you have many instances of a plastic material (green, blue, pink), but it means that each of them requires a separate draw call.
WebGL has to issue specific calls to change that uniform from red to black, and only then is it ready to draw stuff using that 'material'.
So now imagine a particle system, displaying a thousand cubes each with a unique color. You have to issue a thousand draw calls to draw them all, if you treat them as separate meshes and change colors via a uniform.
If, on the other hand, you assign vertex colors to each cube, you no longer rely on the uniform but on an attribute. Now, if you merge all the cubes together, you can issue a single draw call that processes all the cubes with the same shader.
You can see why this is more efficient simply by glancing at WebGLRenderer in three.js and all the work it has to do to translate your 3D calls into WebGL calls. Better done once than a thousand times.
Back to those few lines above: sphereMaterial can take a color argument which, if you look at the source, translates to a uniform vec3 in the shader. However, you can achieve the same thing by rendering vertex colors and assigning the color you want beforehand.
sphereMesh wraps the computed geometry into an object that three's WebGLRenderer understands, which in turn sets up WebGL accordingly.

Appropriate settings for importing unlit Maya models into Unity3d?

If I am not using lighting in my game in Unity3D and all models have their lighting baked into their textures, then which of these methods of importing models is best?
(import settings in Unity inspector).
1. Importing models with no normals.
2. Importing with normals.
3. Importing with Calculate Normals and the smoothing angle set to 180.
The shader I'm using does not use normals, so I don't have a problem with importing without normals.
The first method reduces the vertex count the most, but I'm wondering whether having no normals breaks some of the optimisations such as backface culling, etc.
In short, which are the best settings for importing models if no lighting is used?
Importing without normals will work for the specific case you're mentioning: prelit models that don't do any shading at all. However, if you want any angle-based effects at all (fresnel highlights or toon shading, for example) you'll need normal information.
You will get better optimization on the model if you don't have normal information, since none of the verts will have to be split to handle edge creases.
If you have manually edited normals in Maya (particularly if you flipped normals or used 'conform'), you may find that importing the mesh without normals causes triangles to revert to their natural orientation, which may affect your culling.
So: using no normals is ok for the limited case of lightmapped unshaded geo, but not for much else.
The "best" settings are specific to each model resource. If your mesh data represents a hard surface then you'll want to use the Maya smoothing angle so that you can fine-tune where the shading is blended in your output.
For organic shapes I've found that a smoothing angle of 89° is best, but your mileage may vary (it's entirely dependent upon the visual output you're looking for).
Normals have nothing to do with backface culling in Unity3D. If you don't need them to draw your content smoothly, then you can discard them altogether.