I am sampling data from the point cloud and trying to display the selected points using a mesh renderer.
I have the data, but I can't visualize it. I am using the Augmented Reality application as a template.
I am doing the point saving and mesh population in a coroutine. There are no errors but I can't see any resulting mesh.
I am wondering if there is a conflict with an existing mesh component from the point cloud example that I use for creating the cloud.
I pick a point on screen (touch) and use the index to find the coordinates and populate a Vector3[]. The mesh receives the vertices (5,000 points out of 500,000 in the point cloud).
This is where I set the mesh:
if (m_updateSubPointsMesh)
{
    int[] indices = new int[ctr];
    for (int i = 0; i < ctr; ++i)
    {
        indices[i] = i;
    }

    m_submesh.Clear();
    m_submesh.vertices = m_subpoints;
    int vertsInMesh = m_submesh.vertexCount;
    m_submesh.SetIndices(indices, MeshTopology.Points, 0);
}
m_subrenderer.material.SetColor("_SpecColor", Color.yellow);
I am using Unity Pro 5.3.3 and VS 2015 on Windows 10.
Comments and advice are very much appreciated even if they are not themselves a solution.
Jose
I sorted it out. The meshing was right; it turned out to be a bug in a transform (not Tango-defined). The mesh was being rendered at another point, and I had to walk around to find it.
Thanks
You must convert the Tango mesh data to Unity mesh data; it's not structured in the same way. I believe it's the triangles that are different. You also need to set triangles and normals on the mesh.
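For other readers, here is a minimal sketch of that conversion step, assuming verts, normals and triangles are arrays already filled from the Tango data (the names are placeholders, not Tango API fields):

Mesh mesh = new Mesh();
mesh.vertices = verts;
mesh.normals = normals;      // or mesh.RecalculateNormals() if no normals are available
mesh.triangles = triangles;  // Unity expects a flat int[] with three indices per triangle
mesh.RecalculateBounds();
GetComponent<MeshFilter>().mesh = mesh;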
We are developing an application that uses models exported as FBX from 3ds Max in Unity (which seems to work), changes them, and then communicates the changes back to 3ds Max for a clean render.
We rotate the model pivot in Max in such a way that it is shown correctly in Unity after the export.
What we got so far:
Position:
x(max) = x(unity)
y(max) = z(unity)
z(max) = y(unity)
Rotation:
x(max) = x(unity)
y(max) = -y(unity)
z(max) = z(unity)
Simple rotations seem to work, complex ones do not. I suspect we did not properly account for the mirroring when going from left-handed to right-handed coordinates, or for the different rotation multiplication order. How is the mapping done correctly?
There is a related question with no answers:
Unity rotation convertion
The issue was the different rotation order of Unity (XYZ) and Max (ZYX). That explains why single rotations work but complex ones do not. If you do the transformation from the question above and then apply each rotation consecutively in the same order in Unity, it works.
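As an illustration only, here is a rough sketch of applying the mapped rotations one after another, using the axis/sign mapping from the question; whether Space.World or Space.Self is correct depends on whether the Max angles are treated as extrinsic or intrinsic, so treat this as a starting point rather than a verified conversion:

// Hypothetical sketch: reset, then apply each mapped axis rotation consecutively
// in Max's ZYX order, with the sign mapping from the question (x -> x, y -> -y, z -> z).
transform.rotation = Quaternion.identity;
transform.Rotate(0f, 0f, zMax, Space.World);   // Max Z -> Unity Z
transform.Rotate(0f, -yMax, 0f, Space.World);  // Max Y -> Unity -Y
transform.Rotate(xMax, 0f, 0f, Space.World);   // Max X -> Unity X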
We are working on the AI for our game, currently the detection system. How can I read the light probe interpolation data off a mesh? If the player is in shadow, it should take the AI longer, and require a closer distance, to detect the player.
edit: https://docs.unity3d.com/ScriptReference/LightProbes.GetInterpolatedProbe.html
OK, so the best way is to use GetInterpolatedProbe.
You call it like this:
SphericalHarmonicsL2 probe;
LightProbes.GetInterpolatedProbe(Target.position, renderer, out probe);
Make sure the position is not inside the mesh, since realtime shadows will affect the result.
Then you can query the SphericalHarmonicsL2 like this:
Vector3[] directions = {
    new Vector3(0, -1, 0.0f)
};
var colors = new Color[1];
probe.Evaluate(directions, colors);
In the example above you will get the colour at the point for the direction given in directions (here (0, -1, 0)). The example creates garbage, so make sure to reuse the arrays in a real implementation.
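To tie this back to the detection idea in the question, here is a hedged sketch of how the probe sample could scale detection range; the class, the field names (player, maxDetectionRange) and the brightness-based scaling are assumptions for illustration, not part of the Unity API:

using UnityEngine;
using UnityEngine.Rendering;

public class LightAwareDetection : MonoBehaviour
{
    public Transform player;
    public Renderer playerRenderer;
    public float maxDetectionRange = 20f;

    // Reused buffers so repeated queries do not allocate garbage, as noted above.
    readonly Vector3[] m_directions = { Vector3.up };
    readonly Color[] m_colors = new Color[1];

    // Returns a detection range that shrinks as the light at the player's position gets darker.
    public float CurrentDetectionRange()
    {
        SphericalHarmonicsL2 probe;
        LightProbes.GetInterpolatedProbe(player.position, playerRenderer, out probe);
        probe.Evaluate(m_directions, m_colors);

        // Use the perceived brightness of the sampled colour as a rough 0..1 factor.
        float brightness = Mathf.Clamp01(m_colors[0].grayscale);
        return maxDetectionRange * brightness;
    }
}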
I want to make a shapable world (and maybe also procedurally generated), but I don't know how to make it via script.
There are a few examples that I have questions about:
Minecraft
It is easy to make a procedurally generated shapable world from cubes, but I don't know how to make it optimal. Is Unity strong enough to handle a lot of cubes?
Landmark
In this game you can shape the world, and it uses Unity-like terrain. It is similar to Minecraft but not as cubic (so when you dig in the ground, you dig roughly like in real life, not cube by cube as in Minecraft).
Is it possible to shape the terrain at runtime?
Thanks for your help in advance!
It is easy to make a procedurally generated shapable world from cubes
Short answer: no, it is not easy. You would have to use some type of noise to generate a heightmap (like voxel noise; here's a blog tutorial).
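As a minimal sketch of the heightmap idea, assuming Unity's built-in Mathf.PerlinNoise rather than the voxel noise from the linked tutorial (the scale value is arbitrary):

using UnityEngine;

public static class HeightmapGenerator
{
    // Fills a [height, width] array with smooth 0..1 noise values.
    public static float[,] Generate(int width, int height, float scale = 0.05f)
    {
        var heights = new float[height, width];
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                heights[y, x] = Mathf.PerlinNoise(x * scale, y * scale);
        return heights;
    }
}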
Is Unity strong enough to handle a lot of cubes?
No, on its own Unity will not handle the number of cubes needed for a Minecraft clone very well. You will never be able to see all six faces of a cube at once, so rendering all six is wasteful. Each cube would also have its own collider, which quickly adds up, and you do not need to render a cube at all if it is hidden by other cubes. All of this requires optimization code to keep things running efficiently as you modify the terrain and move through the world; a minimal version of the face-culling check is sketched below.
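For illustration, a hedged sketch of that check, assuming a hypothetical bool[,,] occupancy array called blocks:

// Only emit a cube face when the neighbouring cell is empty.
// (dx, dy, dz) is the face direction being tested.
bool ShouldDrawFace(bool[,,] blocks, int x, int y, int z, int dx, int dy, int dz)
{
    int nx = x + dx, ny = y + dy, nz = z + dz;

    // Faces on the edge of the volume are always visible.
    if (nx < 0 || ny < 0 || nz < 0 ||
        nx >= blocks.GetLength(0) || ny >= blocks.GetLength(1) || nz >= blocks.GetLength(2))
        return true;

    // Draw the face only if the neighbouring cell is not solid.
    return !blocks[nx, ny, nz];
}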
Is it possible to shape the terrain at runtime?
Yes, here's some code I stole from this question:
function Start()
{
    var terrain = GetComponent(Terrain);
    var nRows = 50;
    var nCols = 50;
    var heights = new float[nRows, nCols];

    for (var j = 0; j < nRows; j++)
        for (var i = 0; i < nCols; i++)
            heights[j, i] = Random.Range(0.0, 1.0);

    terrain.terrainData.SetHeights(0, 0, heights);
}
and here's the documentation on TerrainData.SetHeights():
https://docs.unity3d.com/ScriptReference/TerrainData.SetHeights.html
You can modify the built-in Unity terrain's heightmap with TerrainData.SetHeights. You will need to define some kind of brush, e.g. to dig a crater, depending on your needs.
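A hedged sketch of such a brush, assuming you have a Terrain reference and a world-space hit point (for example from a raycast); the brush size and depth values are illustrative only:

using UnityEngine;

public class CraterBrush : MonoBehaviour
{
    public Terrain terrain;
    public int brushSize = 10;   // size of the square patch, in heightmap samples
    public float depth = 0.02f;  // fraction of the terrain height removed per dig

    public void DigAt(Vector3 worldPoint)
    {
        TerrainData data = terrain.terrainData;

        // Convert the world position into heightmap sample coordinates.
        Vector3 local = worldPoint - terrain.transform.position;
        int cx = Mathf.RoundToInt(local.x / data.size.x * data.heightmapResolution);
        int cz = Mathf.RoundToInt(local.z / data.size.z * data.heightmapResolution);

        int x0 = Mathf.Clamp(cx - brushSize / 2, 0, data.heightmapResolution - brushSize);
        int z0 = Mathf.Clamp(cz - brushSize / 2, 0, data.heightmapResolution - brushSize);

        // Read a square patch of heights, lower it, and write it back.
        float[,] heights = data.GetHeights(x0, z0, brushSize, brushSize);
        for (int z = 0; z < brushSize; z++)
            for (int x = 0; x < brushSize; x++)
                heights[z, x] = Mathf.Max(0f, heights[z, x] - depth);
        data.SetHeights(x0, z0, heights);
    }
}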
I'm a bit stuck with the Tango device. I want to export a mesh built with the dynamic mesh prefab in Unity. I have seen the Tango3DRExtractWholeMesh function, but it's not working for me.
I have defined the output variables to fill, and then I call the function, saving the status in another variable. Something like this:
Vector3[] verts;
Vector3[] normals;
Color32[] colors;
int[] indices;
int numV, numT;

Tango3DReconstruction.Status status = m_tangoApplication.Tango3DRExtractWholeMesh(
    verts, normals, colors, indices, out numV, out numT);
But Tango does not do anything at this point. I have checked the Tango Manager parameters and activated all the 3D reconstruction options.
Should I do something else?
I know you got it working, but for other people:
Use TangoApplication.Tango3DRExtractWholeMesh()
from:
https://developers.google.com/project-tango/apis/unity/unity-meshing#tango_application_settings
Don't forget to initialize your arrays (vertices, normals, triangles and colors) so they are big enough to contain the data from the mesh, and then it works.
The problem is we don't yet know what size to initialize the arrays with.
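A rough workaround, purely as a sketch: over-allocate the buffers with a guessed capacity and rely on the returned counts (numV, numT) to know how much was actually written. The 500,000-element capacity below is a guess, not a documented limit:

int maxPoints = 500000;  // guessed upper bound, adjust to your scenes
Vector3[] verts = new Vector3[maxPoints];
Vector3[] normals = new Vector3[maxPoints];
Color32[] colors = new Color32[maxPoints];
int[] indices = new int[maxPoints * 3];
int numV, numT;

Tango3DReconstruction.Status status = m_tangoApplication.Tango3DRExtractWholeMesh(
    verts, normals, colors, indices, out numV, out numT);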
Everything is fixed in the Mira release, which has new exporting functions. It is possible to export the mesh as an OBJ model.
I have been trying to develop a 3D game for a long time now. I went through this tutorial and found that I didn't know enough to actually make the game.
I am currently trying to add a texture to the icosahedron (in the "Look at Basic Drawing" section) he used in the tutorial, but I cannot get the texture on more than one side. The other sides are completely invisible for no logical reason (they showed up perfectly until I added the texture).
Here are my main questions:
How do I make the texture show up properly without using a million vertices and colors to mimic the results?
How can I move the object based on a variable that I can set in other functions?
Try to think of your icosahedron as a low-poly sphere. I suppose Lamarche's icosahedron has its center at (0, 0, 0). Look at this tutorial; it is written for DirectX, but it explains the general principle of sphere texture mapping: http://www.mvps.org/directx/articles/spheremap.htm. I used it in my project and it works great. You move the 3D object by applying various transformation matrices. You should have something like this:
glPushMatrix();
glTranslatef(x, y, z);   // move by the variable you set from other functions
// draw the icosahedron here
glPopMatrix();
Here is my code snippet of how I did the texture coordinates for a semisphere shape, based on the tutorial mentioned above:
GLfloat *ellipsoidTexCrds;
Vector3D *ellipsoidNorms;

int numVerts = *numEllipsoidVerticesHandle;
ellipsoidTexCrds = calloc(numVerts * 2, sizeof(GLfloat));
ellipsoidNorms = *ellipsoidNormalsHandle;

for (int i = 0, j = 0; i < numVerts * 2; i += 2, j++)
{
    ellipsoidTexCrds[i]     = asin(ellipsoidNorms[j].x) / M_PI + 0.5;
    ellipsoidTexCrds[i + 1] = asin(ellipsoidNorms[j].y) / M_PI + 0.5;
}
I wrote this about a year and a half ago, but I remember that I calculated my vertex normals as being equal to the normalized vertices. That works because when you have a spherical shape centered at (0, 0, 0), the vertices basically describe rays from the center of the sphere. Normalize them, and you've got yourself vertex normals.
And by the way, if you're planning to use a 3D engine on the iPhone, use Ogre3D; it's really fast.
hope this helps :)