I have been trying to develop a 3D game for a long time now. I went through this tutorial and found that I didn't know enough to actually make the game.
I am currently trying to add a texture to the icosahedron he used in the tutorial (in the "Look at Basic Drawing" section), but I cannot get the texture onto more than one side. The other sides are completely invisible for no apparent reason (they showed up perfectly until I added the texture).
Here are my main questions:
How do I make the texture show up properly without using a million vertices and colors to mimic the results?
How can I move the object based on a variable that I can set in other functions?
Try to think of your icosahedron as a low-poly sphere. I suppose Lamarche's icosahedron has its center at (0,0,0). Look at this tutorial; it is written for DirectX, but it explains the general principle of sphere texture mapping: http://www.mvps.org/directx/articles/spheremap.htm. I used it in my project and it works great.
You move the 3D object by applying various transformation matrices. You should have something like this:
glPushMatrix();
glTranslatef(xPos, yPos, zPos);   // xPos/yPos/zPos: the position variables you set from other functions
// ... draw the icosahedron here ...
glPopMatrix();
Here is a code snippet showing how I did the texCoords for a semisphere shape, based on the tutorial mentioned above:
GLfloat *ellipsoidTexCrds;
Vector3D *ellipsoidNorms;

int numVerts = *numEllipsoidVerticesHandle;
ellipsoidTexCrds = calloc(numVerts * 2, sizeof(GLfloat));
ellipsoidNorms = *ellipsoidNormalsHandle;

// Sphere mapping: derive (u, v) from the x and y components of each vertex normal.
for(int i = 0, j = 0; i < numVerts * 2; i += 2, j++)
{
    ellipsoidTexCrds[i]   = asin(ellipsoidNorms[j].x) / M_PI + 0.5;
    ellipsoidTexCrds[i+1] = asin(ellipsoidNorms[j].y) / M_PI + 0.5;
}
I wrote this about a year and a half ago, but I remember that I calculated my vertex normals as being equal to the normalized vertices. That works because when you have a spherical shape centered at (0,0,0), the vertices basically describe rays from the center of the sphere. Normalize them, and you've got yourself vertex normals.
And by the way, if you're planning to use a 3D engine on the iPhone, use Ogre3D; it's really fast.
hope this helps :)
I am trying to create a shader through Amplify Shader for a cube to cut through a plane or any mesh where they cross-section. I know that I should be using size, rotation and position for that, but I don't know exactly what to do with them. I am new to Amplify Shader and to shader programming in general, so please don't provide shader code; I need to keep this customizable for the future, so please help me out with Amplify Shader nodes.
Currently I have this effect, but I want to make it specific to the box bounds, not based on plane normals.
I don't want this effect but the box effect shown below. That one was achieved with the ray marching concept, but I want to achieve it with Amplify Shader. Kindly guide me through this.
This is what I have done so far with the Amplify nodes:
Result:
Here is the result of doing the shader using "Amplify Shader":
Solution:
First we'll call the green cube the "intersector" and the red cube the "intersectee".
So, as you've done with the plane, the cutout works because the back face of the intersector is shown when it is inside the intersectee, and the intersectee's front face is shown when it is inside the intersector.
Create a shader (used by both cubes) and put it into two separate materials, then apply the individual materials to each cube. After this we can get into the actual shader node stuff.
First we need to make sure "Cull Mode" is off (Output Node > Cull Mode > Off). This ensures the back face is actually rendered (this could be optimized by deciding when to render the back face depending on where the cube is relative to the intersector).
Next we need to get the surface point in object space:
Most of the variables will be defined in the script. The rotation matrix is used to rotate a point. However, it is inverted: the rotation matrix rotates the cube into world space, so inverting it rotates a world-space point into object space. We also get a "_Cubepos", which is the position of the cube to intersect with (e.g. it would be the intersector if the shader is on the intersectee). The cube position is subtracted from the world position because the rotation matrix rotates around the origin; after rotating, it is added back so the point ends up in the correct place.
This leads to the next section, where "_CubeExtent" is added to and subtracted from "_Cubepos" to find the minimum and maximum extents.
Unfortunately, Amplify Shader has no good way to check whether a vector lies within two other vectors, so we have to break it into components. (I encourage you to learn how to write shaders.) Each Compare With Range node returns 1 if the point in object space is within the extents for that axis; if any of them returns 0, the final multiply node makes sure the overall output is 0.
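For reference, here is the same containment test written as plain C# (a hypothetical helper, just to illustrate the math the nodes implement; the parameter names are not part of the shader):

// Sketch of the test the nodes perform, assuming the cube position, cube extents and
// inverse rotation matrix are supplied exactly as described above.
bool IsInsideIntersector(Vector3 worldPos, Vector3 cubePos, Vector3 cubeExtent, Matrix4x4 inverseRotation)
{
    // Rotate the world-space point into the intersector's object space
    // (subtract the cube position first, rotate, then add it back).
    Vector3 objectPos = inverseRotation.MultiplyPoint3x4(worldPos - cubePos) + cubePos;

    // Minimum and maximum extents of the cube.
    Vector3 min = cubePos - cubeExtent;
    Vector3 max = cubePos + cubeExtent;

    // Component-wise "compare with range"; all three axes must pass.
    return objectPos.x >= min.x && objectPos.x <= max.x
        && objectPos.y >= min.y && objectPos.y <= max.y
        && objectPos.z >= min.z && objectPos.z <= max.z;
}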
Finally, we get to the last part of the shader. "_IsIntersector" is set in the script to 1 or 0 depending on whether the cube we are referring to is used to intersect or is an intersectee. Depending on that, we set the opacity mask here to 1 or 0.
After this we have to define the script to attach to each object. Add a new script and type the following in:
using UnityEngine;

[ExecuteInEditMode]
public class SetVar : MonoBehaviour
{
    // Transform of the opposite cube
    public Transform intersectingCube;
    // Is this an intersector or an intersectee
    public bool isIntersector;
    // Material of this object
    public Material mat;

    void Start()
    {
        // Get the material
        mat = GetComponent<Renderer>().material;
    }

    void OnRenderObject()
    {
        // Calculate the rotation matrix
        Matrix4x4 m = Matrix4x4.TRS(-intersectingCube.position, intersectingCube.rotation, Vector3.one);

        // Set the shader variables
        mat.SetMatrix("RotationMatrix", m);
        mat.SetVector("_Cubepos", intersectingCube.position);
        mat.SetVector("_CubeExtent", intersectingCube.localScale / 2.0f);
        mat.SetFloat("_IsIntersector", (isIntersector) ? 0 : 1);
    }
}
Then we can set the correct inspector values depending on whether the cube is an intersector or an intersectee. Here is an example for the intersector cube:
Make sure IsIntersector is ticked or unticked depending on whether the cube is an intersector or not.
Here is a link to the shader: http://paste.amplify.pt/view/raw/4b248bc3. Also, doing this for any arbitrary mesh is a very complicated operation, too complicated for nodes. Learn about shader code and use a raycasting algorithm to determine whether the point is inside the mesh.
Alternatively, for any convex shape you could calculate each face plane and then, using the method you already used, check whether the world-space point is behind every plane. For a cube there would be 6 planes; however, this is a bit slower than the method above (which is optimized for a cube).
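As a rough illustration of that plane-based test (a hypothetical C# helper, not part of the shader above): a point is inside a convex shape if it lies behind every outward-facing face plane.

// Each plane is given by an outward-facing normal and any point on the plane.
struct FacePlane { public Vector3 normal; public Vector3 point; }

bool IsInsideConvex(Vector3 p, FacePlane[] planes)
{
    foreach (var plane in planes)
    {
        // Being in front of any outward-facing plane means the point is outside the shape.
        if (Vector3.Dot(plane.normal, p - plane.point) > 0f)
            return false;
    }
    return true;
}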
Given a mesh in Unity & C# (that itself was created in realtime by merging simpler base meshes), how could we during runtime* turn it into a smooth, almost like wrapped-in-cloth mesh version of itself? Not quite a fully convex version, but more rounded, softening sharp edges, bridging deep gaps and so on. The surface would also ideally look like when the "smoothing angle" normals setting is applied to imported objects. Thanks!
Before & after sketch
*The mesh setup is made by people and its specifics are unknown beforehand. All its basic shape parts (before we merge them) are known, though. The base parts may also remain unmerged if that helps a solution, and it would be extra terrific if there were a runtime solution that could quickly apply the wrapper mesh even with base parts that change their transform over time, but a static one-time conversion would be great too.
(Some related keywords may be: marching cube algorithm & metaballs, skin above bones, meshfilter converting, smoothing shader, softening, vertices subdivision.)
There are many ways to get something similar so you can pick your preferred one:
Marching Cubes
This algorithm is easy to use, but the result always inherits its blocky 'style'. If that's the look you want, then use it. If you need something smoother and/or pixel perfect, then look at other approaches.
Ray Marching and Signed Distance Functions
This is quite an interesting technique that may give you a lot of control. You can represent your base parts with simple cube/cylinder/etc. distance equations and blend them together with simple math.
Here you can see some examples:
http://iquilezles.org/www/articles/distfunctions/distfunctions.htm
The best thing here is that it's very simple to set up; you don't even need to merge your base parts, you just push your data to the renderer. The downside is that it may get computationally heavy on the rendering side.
Old school mesh modifications
Here you have the most options, but it's also the most complicated. You start with your base parts, which don't carry much data by themselves, so you should probably join them into one mesh using a CSG union operation.
Having this mesh, you can compute neighbor data for your primitives:
for each vertex find triangles containing it.
for each vertex find edges containing it.
for each edge find triangles containing it.
etc.
With such data you may be able to do things like:
Find and cut some sharp vertex.
Find and cut some sharp edge.
Move the vertex to minimize angle between triangles/edges it creates.
and so on...
There are really a lot of details that may or may not work for you; you just need to test some to see which one gives the preferred results.
One simple thing I'd start with:
For each vertex find all vertices connected to it by any edge.
Compute average position of all those vertices.
Use some alpha parameter in [0,1] range to blend between initial vertex position and averaged one.
Implement multiple iterations of this algorithm and add a parameter for the iteration count.
Experiment with alpha and number of iterations.
With this approach you also have two distinct phases, computation and rendering, so doing it with animation may become too slow; but just rendering the mesh will be faster than with the Ray Marching approach.
Hope this helps.
EDIT:
Unfortunately I've never had such a need, so I don't have any sample code, but here is some pseudo-code that may help you:
You have your mesh:
Mesh mesh;
Array of vertex neighbors:
For any vertex index N, triNeighbors[N] will store the indices of the other vertices connected to it by an edge:
List<HashSet<int>> triNeighbors = new List<HashSet<int>>();

int[] meshTriangles = mesh.triangles;

// iterate vert indices per triangle and store neighbors
for( int i = 0; i < meshTriangles.Length; i += 3 ) {
    // three indices making a triangle
    int v0 = meshTriangles[i];
    int v1 = meshTriangles[i+1];
    int v2 = meshTriangles[i+2];

    int maxV = Mathf.Max( Mathf.Max( v0, v1 ), v2 );
    while( triNeighbors.Count <= maxV )
        triNeighbors.Add( new HashSet<int>() );

    triNeighbors[v0].Add( v1 );
    triNeighbors[v0].Add( v2 );
    triNeighbors[v1].Add( v0 );
    triNeighbors[v1].Add( v2 );
    triNeighbors[v2].Add( v0 );
    triNeighbors[v2].Add( v1 );
}
Now, for any single vertex with index N, you can compute its new, averaged position like this:
int N = 0;                            // index of the vertex to smooth
int counter = 0;
Vector3 sum = Vector3.zero;
Vector3[] vertices = mesh.vertices;   // cache this: mesh.vertices copies the array on every access

if( triNeighbors.Count > N && triNeighbors[N] != null )
{
    foreach( int V in triNeighbors[N] ) {
        sum += vertices[ V ];
        counter++;
    }
    if( counter > 0 )
        sum /= counter;               // 'sum' is now the averaged neighbor position
}
There may be some bugs in this code, I've just made it up, but you should get the point.
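To tie it together, here is a rough, untested sketch (a hypothetical helper, not from the original answer) of a full smoothing pass that uses the triNeighbors data from above together with the alpha and iteration parameters mentioned earlier:

// One or more Laplacian-style smoothing passes over the whole mesh.
// 'alpha' blends between the original position (0) and the neighbor average (1).
void SmoothMesh(Mesh mesh, List<HashSet<int>> triNeighbors, float alpha, int iterations)
{
    Vector3[] vertices = mesh.vertices;          // cache: mesh.vertices copies the array

    for (int iter = 0; iter < iterations; iter++)
    {
        Vector3[] smoothed = new Vector3[vertices.Length];
        for (int n = 0; n < vertices.Length; n++)
        {
            if (n < triNeighbors.Count && triNeighbors[n].Count > 0)
            {
                Vector3 sum = Vector3.zero;
                foreach (int v in triNeighbors[n])
                    sum += vertices[v];
                Vector3 average = sum / triNeighbors[n].Count;
                smoothed[n] = Vector3.Lerp(vertices[n], average, alpha);
            }
            else
            {
                smoothed[n] = vertices[n];       // isolated vertex, leave it alone
            }
        }
        vertices = smoothed;
    }

    mesh.vertices = vertices;
    mesh.RecalculateNormals();
    mesh.RecalculateBounds();
}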
We are working on the AI for our game, currently the detection system. How can I read the light probe interpolation data off a mesh? If the player is in shadow, it should take the AI longer (and require a closer distance) to detect the player.
edit: https://docs.unity3d.com/ScriptReference/LightProbes.GetInterpolatedProbe.html
OK, so the best way is to use GetInterpolatedProbe. You call it like this:
SphericalHarmonicsL2 probe;
LightProbes.GetInterpolatedProbe(Target.position, renderer, out probe);
Make sure the position is not inside the mesh, since realtime shadows will affect the result.
Then you can query the SphericalHarmonicsL2 by doing:
Vector3[] directions = {
    new Vector3(0, -1, 0.0f)
};
var colors = new Color[1];
probe.Evaluate(directions, colors);
In the above example you will get the color at the point from the upward direction. The example as written creates garbage; make sure to reuse the arrays in a real implementation.
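As a hypothetical follow-up (not from the original answer), you could collapse the sampled color into a single brightness value and feed it into your detection timing; the min/max variables here are assumed tuning parameters:

// Rough sketch: darker sample => longer detection time and shorter detection range.
float brightness = colors[0].grayscale;                        // 0 = dark, 1 = bright
float detectionTime = Mathf.Lerp(maxDetectTime, minDetectTime, brightness);
float detectionRange = Mathf.Lerp(minDetectRange, maxDetectRange, brightness);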
I want to make a shapable world (and maybe also procedurally generated), but I don't know how to make it via script.
There are a few examples which I have a few questions about:
Minecraft
It is easy to make a procedurally generated shapable world from cubes, but I don't know how to make it optimal. Is Unity strong enough to handle a lot of cubes?
Landmark
In this game you can shape the world, and it uses Unity-like terrain. It is similar to Minecraft but not as cubic. (So when you dig in the ground, you dig roughly like in real life; you don't dig cube by cube like in Minecraft.)
Is it possible to shape the terrain at runtime?
Thanks for your help in advance!
It is easy to make a procedurally generated shapable world from cubes
Short answer: no, it is not easy. You would have to use some type of noise to generate a heightmap (like voxel noise; here's a blog tutorial).
[Is] Unity strong enough to handle a lot of cubes?
No. On its own, Unity will not handle the number of cubes needed for a Minecraft clone very well. Statistically speaking you will never be able to see all 6 faces of a cube, so rendering all 6 is wasteful. Each cube would also have its own collider, which quickly adds up, and you don't need to render a cube at all if it is completely blocked by other cubes. All of this requires fairly complex optimization code to keep it running efficiently as you modify the terrain and move through the world.
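To give a feel for the kind of optimization meant here, a minimal sketch (assuming a hypothetical 3D boolean voxel array, not a complete chunk mesher): only generate a face when the neighboring cell is empty.

// blocks[x, y, z] == true means the cell is solid.
// Only faces that border an empty (or out-of-bounds) cell need to be generated.
bool IsFaceVisible(bool[,,] blocks, int x, int y, int z, Vector3Int dir)
{
    int nx = x + dir.x, ny = y + dir.y, nz = z + dir.z;

    // Out of bounds counts as empty, so chunk borders still get rendered.
    if (nx < 0 || ny < 0 || nz < 0 ||
        nx >= blocks.GetLength(0) || ny >= blocks.GetLength(1) || nz >= blocks.GetLength(2))
        return true;

    return !blocks[nx, ny, nz];
}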
Is it possible to shape the terrain runtime?
Yes. Here's some code I stole from this question:
function Start()
{
    var terrain = GetComponent(Terrain);

    var nRows = 50;
    var nCols = 50;
    var heights = new float[nRows, nCols];

    // fill the heightmap with random values in [0, 1]
    for (var j = 0; j < nRows; j++)
        for (var i = 0; i < nCols; i++)
            heights[j, i] = Random.Range(0.0, 1.0);

    terrain.terrainData.SetHeights(0, 0, heights);
}
and here's the documentation on TerrainData.SetHeights():
https://docs.unity3d.com/ScriptReference/TerrainData.SetHeights.html
You can modify the built-in Unity terrain's heightmap with TerrainData.SetHeights. You will need to define some kind of a brush, like "draw crater", depending on your needs.
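A minimal runtime "crater" brush might look something like this (an untested sketch with hypothetical parameters): read a block of heights, lower them towards the center, and write them back.

// Dig a simple circular crater into the terrain at a world position.
// Heights in the array are normalized to [0, 1].
void DigCrater(Terrain terrain, Vector3 worldPos, int radius, float depth)
{
    TerrainData data = terrain.terrainData;

    // Convert the world position to heightmap coordinates.
    Vector3 local = worldPos - terrain.transform.position;
    int cx = (int)(local.x / data.size.x * data.heightmapResolution);
    int cz = (int)(local.z / data.size.z * data.heightmapResolution);

    int x0 = Mathf.Clamp(cx - radius, 0, data.heightmapResolution - 1);
    int z0 = Mathf.Clamp(cz - radius, 0, data.heightmapResolution - 1);
    int size = Mathf.Min(radius * 2, data.heightmapResolution - Mathf.Max(x0, z0));

    float[,] heights = data.GetHeights(x0, z0, size, size);
    for (int z = 0; z < size; z++)
        for (int x = 0; x < size; x++)
        {
            // Fall-off so the crater is deepest at the center.
            float dist = Vector2.Distance(new Vector2(x, z), new Vector2(cx - x0, cz - z0));
            float falloff = Mathf.Clamp01(1f - dist / radius);
            heights[z, x] = Mathf.Max(0f, heights[z, x] - depth * falloff);
        }

    data.SetHeights(x0, z0, heights);
}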
I am sampling data from the point cloud and trying to display the selected points using a mesh renderer.
I have the data but I can't visualize it. I am using the Augmented Reality application as a template.
I am doing the point saving and mesh population in a coroutine. There are no errors, but I can't see any resulting mesh.
I am wondering if there is a conflict with an existing mesh component from the point cloud example that I use for creating the cloud.
I pick a point on screen (touch) and use the index to find the coordinates and populate a Vector3[]. The mesh receives the vertices (5,000 points out of 500,000 in the point cloud).
This is where I set the mesh:
if (m_updateSubPointsMesh)
{
    int[] indices = new int[ctr];
    for (int i = 0; i < ctr; ++i)
    {
        indices[i] = i;
    }

    m_submesh.Clear();
    m_submesh.vertices = m_subpoints;
    int vertsInMesh = m_submesh.vertexCount;
    m_submesh.SetIndices(indices, MeshTopology.Points, 0);
}
m_subrenderer.material.SetColor("_SpecColor", Color.yellow);
I am using Unity pro 5.3.3 and VS 2015 on windows 10.
Comments and advice are very much appreciated even if they are not themselves a solution.
Jose
I sorted it out. The meshing was right; it turned out to be a bug in a transform (not Tango-defined). The mesh was being rendered at another point; I had to walk around to find it.
Thanks
You must convert the Tango mesh data to mesh data for Unity; it's not structured in the same way. I believe it's the triangles that are different. You also need to set the triangles and normals on the mesh.
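A bare-bones sketch of what that conversion might look like (hypothetical parameter names for the Tango-side buffers, just to illustrate the point):

// Copy the converted buffers into a Unity Mesh; Unity needs vertices, triangles
// and normals set explicitly, plus recalculated bounds, for the mesh to show up.
Mesh BuildUnityMesh(Vector3[] tangoVertices, int[] tangoTriangles)
{
    Mesh mesh = new Mesh();
    mesh.vertices = tangoVertices;
    mesh.triangles = tangoTriangles;   // the winding order may need flipping
    mesh.RecalculateNormals();
    mesh.RecalculateBounds();
    return mesh;
}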