I want to make a shapable (and maybe also procedurally generated) world, but I don't know how to do it via script.
There are a few examples I have questions about:
Minecraft
It is easy to make a procedurally generated shapable world from cubes, but I don't know how to make it efficient. Is Unity powerful enough to handle a lot of cubes?
Landmark
In this game you can shape the world, and it uses something like Unity's terrain. It's similar to Minecraft but not as cubic. (So when you dig in the ground, you dig roughly like in real life, not cube by cube like in Minecraft.)
Is it possible to shape the terrain at runtime?
Thanks for your help in advance!
It is easy to make a procedurally generated shapable world from cubes
Short answer: no, it is not easy. You would have to use some type of noise to generate a heightmap (like voxel noise; here's a blog tutorial).
[Is] Unity powerful enough to handle a lot of cubes?
No, on its own Unity will not handle the number of cubes needed for a Minecraft clone very well. Statistically speaking you will never be able to see all 6 faces of a cube, so rendering all 6 is wasteful. Each cube will also have its own collider, which quickly adds up, and you do not need to render a cube at all if it is completely hidden by other cubes. All of this requires fairly complex optimization code to run efficiently while you modify the terrain and move through the world.
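Just to illustrate the hidden-face idea, here is a made-up sketch (not a full voxel engine; the array layout and names are assumptions):
// Hypothetical sketch of the "skip hidden faces" idea: a cube face only needs
// to be added to the chunk mesh when the neighbouring cell is empty (air).
public static class VoxelFaceCulling
{
    // `blocks` is an imaginary 3D array where 0 means air.
    public static bool IsFaceVisible(int[,,] blocks, int x, int y, int z, int dx, int dy, int dz)
    {
        int nx = x + dx, ny = y + dy, nz = z + dz;

        // Treat faces on the chunk border as visible; a real implementation
        // would look into the neighbouring chunk instead.
        if (nx < 0 || ny < 0 || nz < 0 ||
            nx >= blocks.GetLength(0) || ny >= blocks.GetLength(1) || nz >= blocks.GetLength(2))
            return true;

        return blocks[nx, ny, nz] == 0; // neighbour is air, so this face can be seen
    }
}
When building a chunk you would call something like this once per face and only append vertices for the faces that return true; merging a whole chunk into a single mesh also avoids the one-collider-per-cube problem mentioned above.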
Is it possible to shape the terrain at runtime?
Yes, here's some code I stole from this question:
var terrain : Terrain;

function Start()
{
    terrain = GetComponent(Terrain);

    var nRows = 50;
    var nCols = 50;
    var heights = new float[nRows, nCols];

    // Fill the heightmap patch with random values in [0,1]
    for (var j = 0; j < nRows; j++)
        for (var i = 0; i < nCols; i++)
            heights[j, i] = Random.Range(0.0, 1.0);

    // Write the patch into the terrain, starting at heightmap coordinate (0,0)
    terrain.terrainData.SetHeights(0, 0, heights);
}
and here's the documentation on TerrainData.SetHeights():
https://docs.unity3d.com/ScriptReference/TerrainData.SetHeights.html
You can modify the built-in Unity terrain's heightmap with TerrainData.SetHeights. You will need to define some kind of brush (e.g. "dig a crater"), depending on your needs.
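For illustration, a very rough "dig" brush might look like this (an untested sketch; the component, brush size and strength values are made up):
using UnityEngine;

// Hypothetical sketch of a runtime "dig" brush on the built-in terrain.
// brushSize is in heightmap samples, strength in normalized height units.
public class TerrainDigSketch : MonoBehaviour
{
    public Terrain terrain;
    public int brushSize = 10;
    public float strength = 0.01f;

    public void DigAt(Vector3 worldPos)
    {
        TerrainData data = terrain.terrainData;

        // Convert the world position to heightmap coordinates
        Vector3 local = worldPos - terrain.transform.position;
        int cx = Mathf.RoundToInt(local.x / data.size.x * data.heightmapResolution);
        int cz = Mathf.RoundToInt(local.z / data.size.z * data.heightmapResolution);

        int x = Mathf.Clamp(cx - brushSize / 2, 0, data.heightmapResolution - brushSize);
        int z = Mathf.Clamp(cz - brushSize / 2, 0, data.heightmapResolution - brushSize);

        // GetHeights/SetHeights work on a [rows, columns] patch of normalized heights
        float[,] heights = data.GetHeights(x, z, brushSize, brushSize);
        for (int i = 0; i < brushSize; i++)
            for (int j = 0; j < brushSize; j++)
                heights[i, j] = Mathf.Max(0f, heights[i, j] - strength);

        data.SetHeights(x, z, heights);
    }
}
A smoother crater would fall off with distance from the center instead of lowering the whole square patch equally.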
Related
I work in construction and we are trying to visualize our projects using Unity and Oculus Rift.
Basically all our models are created in Revit and we export them to FBX and bring them into Unity. For each model we have (electrical, mechanical, architectural, facade...) we generate an FBX in Revit and bring it into Unity.
The models have around 3000 to 60000 objects (meshes) and around 3 million to 40 million polygons. When we try to visualize the models in Unity we are getting very low FPS, around 2 to 3, and around 15,000 to 20,000 batched draw calls.
I believe the problem is the complexity of all the models we bring together into Unity. I wonder if there is any way to optimize it; I have already tried decimation, disabling shadows, and occlusion culling, but nothing seems to work. Collapsing the models into a single object is not an option because we have to allow the user to select and inspect individual elements.
I am working on something similar and I can share some experience with tasks like this that involve many vertices or meshes. I am trying to visualize point clouds in Unity, and it is a very challenging task. In my case, though, I create the point clouds myself and I do not triangulate them. That helps, but I still have to apply optimizations.
From my experience, once you have more than about 10 million vertices rendered in a frame you start to have FPS issues. This can vary based on your hardware, of course, and I am sure it will be even worse with triangulated meshes. What I have done to optimize things is the following:
I started by rendering only the objects that are inside the camera frustum. To do this I used a function called IsVisibleFrom, which is an extension to Renderer, like this:
using UnityEngine;
public static class RendererExtensions
{
    // True if the renderer's bounding box intersects the camera's view frustum
    public static bool IsVisibleFrom(this Renderer renderer, Camera camera)
    {
        Plane[] planes = GeometryUtility.CalculateFrustumPlanes(camera);
        return GeometryUtility.TestPlanesAABB(planes, renderer.bounds);
    }
}
Then you can use it like this by traversing all the meshes you have:
Renderer grid;
Camera cam;              // the camera to test visibility against
GameObject PointCloud;   // parent object holding one child per point-cloud chunk

IEnumerator RenderVisibleGameObject()
{
    for (int i = 0; i < PointCloud.transform.childCount; i++)
    {
        grid = PointCloud.transform.GetChild(i).GetComponent<Renderer>();

        // Enable only the chunks whose bounds are inside the camera frustum
        grid.gameObject.SetActive(grid.IsVisibleFrom(cam));

        // Wait one frame once the last chunk has been processed
        if (i == PointCloud.transform.childCount - 1)
            yield return null;
    }

    // Restart so visibility is re-evaluated continuously
    StartCoroutine(RenderVisibleGameObject());
}
A second option, if you are able to create lower-detail versions of your meshes, is Level of Detail (LOD). Basically it renders low-detail meshes for objects that are further away from the camera.
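If you already have low-poly versions, an LODGroup can be set up from script roughly like this (a sketch; the renderer references and transition values are placeholders for your own assets):
using UnityEngine;

// Hypothetical sketch: build a two-level LODGroup from an existing
// high-detail renderer and a pre-made low-detail renderer.
public static class LodSetupSketch
{
    public static void Setup(GameObject root, Renderer highDetail, Renderer lowDetail)
    {
        LODGroup group = root.AddComponent<LODGroup>();

        LOD[] lods = new LOD[2];
        // Use the high-detail mesh while the object covers more than ~10% of screen height...
        lods[0] = new LOD(0.10f, new[] { highDetail });
        // ...the low-detail mesh down to ~1%, below which nothing is drawn.
        lods[1] = new LOD(0.01f, new[] { lowDetail });

        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}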
The last option I can recommend is Occlusion Culling. This is similar to the first option, but it also takes occlusion into account, which was not relevant in my case because I only had points.
You may also find the Forge Unity AR/VR toolkit of interest:
Overview
Introduction
23-minute video
As you probably know, Forge is highly optimised for professional visualisation of large CAD models.
Given a mesh in Unity & C# (that itself was created at runtime by merging simpler base meshes), how could we, during runtime*, turn it into a smooth, almost wrapped-in-cloth version of itself? Not quite a fully convex version, but more rounded, softening sharp edges, bridging deep gaps and so on. Ideally the surface would also look the way imported objects do when the "smoothing angle" normals setting is applied. Thanks!
Before & after sketch
*The mesh setup is made by people and its specifics are unknown beforehand. All its basic shape parts (before we merge them) are known, though. The base parts may also remain unmerged if that helps a solution, and it would be extra terrific if there were a runtime solution that could quickly apply the wrapper mesh even when the base parts change their transforms over time, but a static one-time conversion would be great too.
(Some related keywords may be: marching cube algorithm & metaballs, skin above bones, meshfilter converting, smoothing shader, softening, vertices subdivision.)
There are many ways to get something similar, so you can pick your preferred one:
Marching Cubes
This algorithm is easy to use, but the result always inherits its blocky 'style'. If that's the look you want, then use it. If you need something smoother and/or pixel perfect, then look for other ways.
Ray Marching and Signed Distance Functions
This is quite an interesting technique that may give you a lot of control. You can represent your base parts with simple cube/cylinder/etc. equations and blend them together with simple math.
Here you can see some examples:
http://iquilezles.org/www/articles/distfunctions/distfunctions.htm
The best thing here is that it's very simple to set up: you don't even need to merge your base parts, you just push your data to the renderer. The downside is that it may get computationally heavy on the rendering side.
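To make the "blend them together with simple math" part concrete, here is a small C# sketch of two distance functions and the polynomial smooth minimum from the linked article (in practice you would usually evaluate these per pixel in a shader):
using UnityEngine;

// Sketch of SDF blending as described in the linked article.
public static class SdfSketch
{
    // Distance from point p to a sphere of the given radius at the origin.
    public static float Sphere(Vector3 p, float radius)
    {
        return p.magnitude - radius;
    }

    // Distance from point p to an axis-aligned box with half-extents b.
    public static float Box(Vector3 p, Vector3 b)
    {
        Vector3 q = new Vector3(Mathf.Abs(p.x) - b.x,
                                Mathf.Abs(p.y) - b.y,
                                Mathf.Abs(p.z) - b.z);
        Vector3 outside = Vector3.Max(q, Vector3.zero);
        return outside.magnitude + Mathf.Min(Mathf.Max(q.x, Mathf.Max(q.y, q.z)), 0f);
    }

    // Polynomial smooth minimum: blends two distances so their union gets a
    // rounded seam instead of a sharp crease. k controls the blend radius.
    public static float SmoothMin(float a, float b, float k)
    {
        float h = Mathf.Clamp01(0.5f + 0.5f * (b - a) / k);
        return Mathf.Lerp(b, a, h) - k * h * (1f - h);
    }
}
Combining base parts is then just SmoothMin(Sphere(p, r), Box(p, b), k) evaluated per sample point, which is exactly the "rounded union" effect you are after.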
Old school mesh modifications
Here you have the most options, but it's also the most complicated. You start with your base parts, which don't carry much data by themselves, so you should probably join them into one mesh using a CSG Union operation.
Having this mesh, you can compute neighbor data for your primitives:
for each vertex find triangles containing it.
for each vertex find edges containing it.
for each edge find triangles containing it.
etc.
With such data you may be able to do things like:
Find and cut some sharp vertex.
Find and cut some sharp edge.
Move the vertex to minimize angle between triangles/edges it creates.
and so on...
There are really a lot of details that may or may not work for you; you just need to test some to see which gives the preferred results.
One simple thing I'd start with:
For each vertex find all vertices connected to it by any edge.
Compute average position of all those vertices.
Use an alpha parameter in the [0,1] range to blend between the initial vertex position and the averaged one.
Run multiple iterations of this algorithm and add a parameter for the iteration count.
Experiment with alpha and number of iterations.
This way you also get two distinct phases, computation and rendering, so doing it with animation may become too slow, but just rendering the mesh will be faster than with the Ray Marching approach.
Hope this helps.
EDIT:
Unfortunately I've never had such a need, so I don't have any sample code, but here is some pseudo-code that may help you:
You have your mesh:
Mesh mesh;
An array of vertex neighbors: for any vertex index N, triNeighbors[N] will store the indices of the other vertices connected to it by an edge:
List<HashSet<int>> triNeighbors = new List<HashSet<int>>();
int[] meshTriangles = mesh.triangles;

// Iterate vertex indices per triangle and store neighbors
for( int i = 0; i < meshTriangles.Length; i += 3 ) {
    // Three indices making a triangle
    int v0 = meshTriangles[i];
    int v1 = meshTriangles[i+1];
    int v2 = meshTriangles[i+2];

    // Grow the list so it has an entry for every vertex seen so far
    int maxV = Mathf.Max( Mathf.Max( v0, v1 ), v2 );
    while( triNeighbors.Count <= maxV )
        triNeighbors.Add( new HashSet<int>() );

    // Each vertex of the triangle is a neighbor of the other two
    triNeighbors[v0].Add( v1 );
    triNeighbors[v0].Add( v2 );
    triNeighbors[v1].Add( v0 );
    triNeighbors[v1].Add( v2 );
    triNeighbors[v2].Add( v0 );
    triNeighbors[v2].Add( v1 );
}
Now, for any single vertex with index N, you can compute its new, averaged position like this:
int N = 0;                         // index of the vertex to average
int counter = 0;
Vector3 sum = Vector3.zero;
Vector3[] verts = mesh.vertices;   // cache: the property allocates a copy on every access

if( triNeighbors.Count > N && triNeighbors[N] != null && triNeighbors[N].Count > 0 )
{
    foreach( int V in triNeighbors[N] ) {
        sum += verts[ V ];
        counter++;
    }
    sum /= counter;
}
There may be some bugs in this code since I've just made it up, but you should get the point.
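To connect it with the algorithm described earlier, a single smoothing pass over the whole mesh could look roughly like this (again untested sketch code, reusing the triNeighbors list built above and the alpha blend idea):
// Sketch of one full smoothing pass using the triNeighbors data from above.
// alpha in [0,1]: 0 keeps the original shape, 1 moves fully to the average.
Vector3[] Smooth(Mesh mesh, List<HashSet<int>> triNeighbors, float alpha)
{
    Vector3[] oldVerts = mesh.vertices;   // cache to avoid repeated allocations
    Vector3[] newVerts = new Vector3[oldVerts.Length];

    for (int n = 0; n < oldVerts.Length; n++)
    {
        if (n >= triNeighbors.Count || triNeighbors[n].Count == 0)
        {
            newVerts[n] = oldVerts[n];    // isolated vertex, leave it alone
            continue;
        }

        Vector3 sum = Vector3.zero;
        foreach (int v in triNeighbors[n])
            sum += oldVerts[v];
        Vector3 average = sum / triNeighbors[n].Count;

        newVerts[n] = Vector3.Lerp(oldVerts[n], average, alpha);
    }
    return newVerts;
}
Running several iterations and then assigning the result back with mesh.vertices = newVerts; mesh.RecalculateNormals(); gives the effect described above. Note that vertices which share a position but are split by normals or UVs will drift apart unless you merge them first.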
We are working on the AI for our game, currently on the detection system. How can I read the light probe interpolation data off a mesh? If the player is in shadow, it should take longer and require a closer distance for the AI to detect them.
edit: https://docs.unity3d.com/ScriptReference/LightProbes.GetInterpolatedProbe.html
OK, so the best way is to use GetInterpolatedProbe.
You call it like this:
SphericalHarmonicsL2 probe;
LightProbes.GetInterpolatedProbe(Target.position, renderer, out probe);
Make sure the position is not inside the mesh, since realtime shadows will affect the result.
Then you can query the SphericalHarmonicsL2 like this:
Vector3[] directions = {
new Vector3(0, -1, 0.0f)
};
var colors = new Color[1];
probe.Evaluate(directions, colors);
In the above example you will get the color at the point from the upward direction. The example above creates garbage; make sure to reuse the arrays in a real implementation.
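If it helps, here is a rough sketch (the component and tuning values are made up, not from the question) of how you could turn the sampled color into a detection delay for the AI:
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: estimate how lit a target is from the light probes and
// stretch the AI's detection time when the target is in darkness.
public class LightProbeDetectionSketch : MonoBehaviour
{
    public Transform target;
    public Renderer targetRenderer;
    public float baseDetectionTime = 1f;   // made-up tuning values
    public float maxExtraTime = 4f;

    // Reused arrays so the query does not generate garbage
    readonly Vector3[] directions = { new Vector3(0f, -1f, 0f) };   // same direction as in the example above
    readonly Color[] results = new Color[1];

    public float DetectionTime()
    {
        SphericalHarmonicsL2 probe;
        LightProbes.GetInterpolatedProbe(target.position, targetRenderer, out probe);

        probe.Evaluate(directions, results);
        float brightness = results[0].grayscale;   // 0 = dark, higher = brighter

        // Darker target -> longer time needed before the AI notices it
        return baseDetectionTime + maxExtraTime * (1f - Mathf.Clamp01(brightness));
    }
}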
I am sampling data from the point cloud and trying to display the selected points using a mesh renderer.
I have the data but I can't visualize it. I am using the Augmented Reality application as a template.
I am doing the point saving and mesh population in a coroutine. There are no errors but I can't see any resulting mesh.
I am wondering if there is a conflict with an existing mesh component from the point cloud example that I use for creating the cloud.
I pick a point on screen (touch) and use the index to find the coordinates and populate a Vector3[]. The mesh receives the vertices (5,000 points out of 500,000 in the point cloud).
this is where I set the mesh:
if (m_updateSubPointsMesh)
{
    int[] indices = new int[ctr];
    for (int i = 0; i < ctr; ++i)
    {
        indices[i] = i;
    }

    m_submesh.Clear();
    m_submesh.vertices = m_subpoints;
    int vertsInMesh = m_submesh.vertexCount;
    m_submesh.SetIndices(indices, MeshTopology.Points, 0);
}

m_subrenderer.material.SetColor("_SpecColor", Color.yellow);
I am using Unity Pro 5.3.3 and VS 2015 on Windows 10.
Comments and advice are very much appreciated even if they are not themselves a solution.
Jose
I sorted it out. The meshing was right; it turned out to be a bug in a transform (not Tango-defined). The mesh was rendered at another point, so I had to walk around to find it.
Thanks
You must convert the Tango mesh data to Unity mesh data; it's not structured in the same way. I believe it's the triangles that are different. You also need to set triangles and normals on the mesh.
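As a rough sketch of that last step (the arrays here stand in for your already-converted Tango data, so the names are hypothetical):
using UnityEngine;

// Hypothetical sketch: copy converted point/triangle data into a Unity Mesh.
// `vertices` and `triangles` are assumed to already be in Unity's
// coordinate system and winding order.
public static class TangoMeshSketch
{
    public static void Apply(Mesh mesh, Vector3[] vertices, int[] triangles)
    {
        mesh.Clear();
        mesh.vertices = vertices;
        mesh.triangles = triangles;   // required for a solid (non-point) mesh
        mesh.RecalculateNormals();    // the normals mentioned above
        mesh.RecalculateBounds();
    }
}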
I have been trying to develop a 3D game for a long time now. I went through this tutorial and found that I didn't know enough to actually make the game.
I am currently trying to add a texture to the icosahedron (in the "Look at Basic Drawing" section) he used in the tutorial, but I cannot get the texture on more than one side. The other sides are completely invisible for no logical reason (they showed up perfectly until I added the texture).
Here are my main questions:
How do I make the texture show up properly without using a million vertices and colors to mimic the results?
How can I move the object based on a variable that I can set in other functions?
Try to think of your icosahedron as a low-poly sphere. I suppose Lamarche's icosahedron has its center at (0,0,0). Look at this tutorial; it is written for DirectX, but it explains the general principle of sphere texture mapping: http://www.mvps.org/directx/articles/spheremap.htm. I used it in my project and it works great. You move the 3D object by applying various transformation matrices. You should have something like this:
glPushMatrix();
glTranslatef(x, y, z);   // x, y, z can be variables you set from other functions
drawIcosahedron();       // your existing drawing code
glPopMatrix();
Here is a code snippet showing how I did the texture coordinates for a semisphere shape, based on the tutorial mentioned above:
GLfloat *ellipsoidTexCrds;
Vector3D *ellipsoidNorms;
int numVerts = *numEllipsoidVerticesHandle;

ellipsoidTexCrds = calloc(numVerts * 2, sizeof(GLfloat));
ellipsoidNorms = *ellipsoidNormalsHandle;

for(int i = 0, j = 0; i < numVerts * 2; i += 2, j++)
{
    ellipsoidTexCrds[i]   = asin(ellipsoidNorms[j].x) / M_PI + 0.5;
    ellipsoidTexCrds[i+1] = asin(ellipsoidNorms[j].y) / M_PI + 0.5;
}
I wrote this about a year and a half ago, but I remember that I calculated my vertex normals as being equal to the normalized vertices. That works because when you have a spherical shape centered at (0,0,0), the vertices basically describe rays from the center of the sphere. Normalize them, and you've got yourself vertex normals.
And by the way, if you're planning to use a 3D engine on the iPhone, use Ogre3D; it's really fast.
Hope this helps :)