I followed the lovely tutorial by Sebastian Lague (Link to tutorial) and applied it to my own scenario, where I want to generate a landmass. I ended up with a cool result:
As you can see in the image, there is a grid. This is simply a texture that is repeated (tiled) a set number of times and applied to the generated mesh. The code for that looks like this:
Vector2[] uvs = new Vector2[vertices.Count];
for (int i = 0; i < vertices.Count; i++)
{
    float percentX = Mathf.InverseLerp(-map.GetLength(0) / 2 * squareSize, map.GetLength(0) / 2 * squareSize, vertices[i].x) * tileAmount;
    float percentY = Mathf.InverseLerp(-map.GetLength(0) / 2 * squareSize, map.GetLength(0) / 2 * squareSize, vertices[i].z) * tileAmount;
    uvs[i] = new Vector2(percentX, percentY);
}
mesh.uv = uvs;
I am wondering if there is any way to tint each tile a different shade during this process, either in this script or using a shader.
Vertex colors.
They will be interpolated automatically, which gives smooth gradients. If you don't want that, you'll have to build the mesh so that each square has its own vertices, not shared with the neighboring squares.
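A minimal sketch of that idea, assuming the same vertices, map, squareSize and tileAmount as in the question, and a material whose shader actually reads vertex colors; the per-tile shade (Perlin noise on the tile coordinates) is just a placeholder:
Color[] colors = new Color[vertices.Count];
for (int i = 0; i < vertices.Count; i++)
{
    // Reuse the tiling math from the question to find which tile this vertex falls in.
    float percentX = Mathf.InverseLerp(-map.GetLength(0) / 2 * squareSize, map.GetLength(0) / 2 * squareSize, vertices[i].x) * tileAmount;
    float percentY = Mathf.InverseLerp(-map.GetLength(0) / 2 * squareSize, map.GetLength(0) / 2 * squareSize, vertices[i].z) * tileAmount;
    int tileX = Mathf.FloorToInt(percentX);
    int tileY = Mathf.FloorToInt(percentY);

    // Derive a repeatable grey shade per tile (any hash or noise would do here).
    float shade = Mathf.PerlinNoise(tileX * 0.73f, tileY * 0.73f);
    colors[i] = new Color(shade, shade, shade, 1f);
}
mesh.colors = colors;
Because the vertices are shared between squares here, neighbouring tiles will blend into each other at the borders, exactly as described above.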
Related
I am trying to have a GameObject in Unity react with sound when another object is inside it. I want the GameObject to use the entering object's location to find the closest voxel, and then play audio based on that voxel's intensity/colour. Does anyone have any ideas? I am working with a dataset that is 512x256x512 voxels, and I want it to work if the object is resized as well. Any help is much appreciated :).
The dataset I'm working with is a 3D .mhd medical scan of a body. Here is how the texture is added to the renderer on start:
for (int k = 0; k < NumberOfFrames; k++) {
    string fname_ = "T" + k.ToString("D2");
    Color[] colors = LoadData(Path.Combine(imageDir, fname_ + ".raw"));
    _volumeBuffer.Add(new Texture3D(dim[0], dim[1], dim[2], TextureFormat.RGBAHalf, mipmap));
    _volumeBuffer[k].SetPixels(colors);
    _volumeBuffer[k].Apply();
}
GetComponent<Renderer>().material.SetTexture("_Data", _volumeBuffer[0]);
The size of the object is defined using the .mhd header file's spacing, as well as the voxel dimensions:
transform.localScale = new Vector3(mhdheader.spacing[0] * volScale, mhdheader.spacing[1] * volScale * dim[1] / dim[0], mhdheader.spacing[2] * volScale * dim[2] / dim[0]);
I have tried making my own function to get the index from the world position by offsetting it to the start of the render mesh (not sure if this is right), then scaling it by the local scale, and then multiplying by the number of voxels in each dimension. However, I am not sure if my logic is right at all... Here is the code I tried:
public Vector3Int GetIndexFromWorld(Vector3 worldPos)
{
    Vector3 startOfTex = gameObject.GetComponent<Renderer>().bounds.min;
    Vector3 localPos = transform.InverseTransformPoint(worldPos);
    Vector3 localScale = gameObject.transform.localScale;
    Vector3 OffsetPos = localPos - startOfTex;
    Vector3 VoxelPosFloat = new Vector3(OffsetPos[0] / localScale[0], OffsetPos[1] / localScale[1], OffsetPos[2] / localScale[2]);
    VoxelPosFloat = Vector3.Scale(VoxelPosFloat, new Vector3(voxelDims[0], voxelDims[1], voxelDims[2]));
    Vector3Int voxelPos = Vector3Int.FloorToInt(VoxelPosFloat);
    return voxelPos;
}
You can try setting up a large number of box colliders with OnTriggerEnter() running on each. But a much better solution is to keep your voxels in a sorted array and use simple math: clamp the moving object's position vector to ints and map that vector to an index in the array. For example, the vector (0,0,0) could map to voxels[0]; a sketch of that mapping is below. Then just fetch that voxel's properties as you like. For a voxel application this is much faster than colliders.
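A minimal sketch of that index mapping, assuming the voxel colours live in a flat array laid out x-fastest (the layout is an assumption; match it to however your data was loaded):
// Hypothetical helper: clamp an integer voxel coordinate and map it to a flat array index.
int VoxelToIndex(Vector3Int v, int dimX, int dimY, int dimZ)
{
    int x = Mathf.Clamp(v.x, 0, dimX - 1);
    int y = Mathf.Clamp(v.y, 0, dimY - 1);
    int z = Mathf.Clamp(v.z, 0, dimZ - 1);

    // x varies fastest, then y, then z: (0,0,0) maps to voxels[0].
    return x + y * dimX + z * dimX * dimY;
}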
I think I figured it out. If anyone sees any flaw in my code, please let me know :).
public Vector3Int GetIndexFromWorld(Vector3 worldPos)
{
    // Size of the renderer's bounding box in world space.
    Vector3 deltaBounds = rend.bounds.max - rend.bounds.min;
    // Position relative to the minimum corner of the bounds.
    Vector3 OffsetPos = worldPos - rend.bounds.min;
    // Normalize to the 0..1 range inside the volume.
    Vector3 normPos = new Vector3(OffsetPos[0] / deltaBounds[0], OffsetPos[1] / deltaBounds[1], OffsetPos[2] / deltaBounds[2]);
    // Scale up to voxel coordinates.
    Vector3 voxelPositions = new Vector3(normPos[0] * voxelDims[0], normPos[1] * voxelDims[1], normPos[2] * voxelDims[2]);
    Vector3Int voxelPos = Vector3Int.FloorToInt(voxelPositions);
    return voxelPos;
}
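For completeness, a sketch of how this could tie back to the original goal (audio driven by whichever voxel the entering object is over). The trigger setup, the voxelClip field, and the intensity-to-volume mapping are all assumptions on top of the code above:
// Hypothetical usage: assumes this script sits on the volume with a trigger collider,
// an AudioSource, and an assumed AudioClip field called voxelClip.
void OnTriggerEnter(Collider other)
{
    Vector3Int v = GetIndexFromWorld(other.transform.position);

    // The index can land just outside the volume at the edges, so clamp it first.
    v.Clamp(Vector3Int.zero, new Vector3Int((int)voxelDims[0] - 1, (int)voxelDims[1] - 1, (int)voxelDims[2] - 1));

    // Read the voxel colour straight from the 3D texture (readable because it was built with SetPixels).
    Color voxel = _volumeBuffer[0].GetPixel(v.x, v.y, v.z);

    // Treat the red channel as intensity and let it drive the volume of the sound.
    GetComponent<AudioSource>().PlayOneShot(voxelClip, voxel.r);
}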
For a masking object, I am trying to scale each triangle individually. If I scale the object as a whole, the points further away from the center will get moved too far and I just want the object to have 'more body'. Since I use it as a mask, it doesn't matter if the triangles end up overlapping.
Although looking at this might hurt someone deep inside, this is actually what I'm trying to achieve:
I thought this was best done in a shader, and specifically in the geometry shader, since I need to know the center of the triangle. I came up with the code below, but things keep acting... strange.
float3 center = (IN[0].vertex.xyz + IN[1].vertex.xyz + IN[2].vertex.xyz) / 3;
for (int i = 0; i < 3; i++)
{
    float3 distance = IN[i].vertex.xyz - center.xyz;
    float3 normal = normalize(distance);
    distance = abs(distance);
    float scale = 1;
    float3 pos = IN[i].vertex.xyz + (distance * normal.xyz * (scale - 1));
    o.pos.xyz = pos.xyz;
    o.pos.w = IN[i].vertex.w;
    tristream.Append(o);
}
My plan was to calculate the center of the triangle and then calculate the distance between the center and each point. I would then take the normal of this distance to know in which direction I have to move the vertex, and change the position by adding distance * normal (direction) * scale to the original position of the vertex. Yet the triangles seem to change when you rotate the camera, so I doubt this is right. Does anyone know what could be wrong?
(Just some notes:
the mesh is basically 2D, only changing across the x- and z-axes (if this matters).
I did abs(distance) since I thought it would cancel out the normal if both were negative. I'm not sure if this is necessary.
I used scale - 1 since a scale of 1 should result in the mesh staying the same; a scale of 2 should result in all triangles being twice as big.
I have no clue what to do with the w value, but keeping the old value at least doesn't screw things up that much. Perhaps the problem lies here? I thought this value should always be 1 for matrix multiplications.
)
Okay, so besides using a way too 'complex' formula to calculate the new position of each point (a better way is at https://math.stackexchange.com/questions/1563249/how-do-i-scale-a-triangle-given-its-cartesian-cooordinates), I found out that it indeed had to do with the w value. Since I always thought this was mainly a helper variable, it would be awesome if someone could explain how that value screwed things over.
Anyway, including that value in the equation, it works fine:
float4 center = (IN[0].vertex.xyzw + IN[1].vertex.xyzw + IN[2].vertex.xyzw) / 3;
for (int i = 0; i < 3; i++)
{
    float scale = 2;
    float4 pos = (IN[i].vertex.xyzw * scale) - center.xyzw;
    o.pos.xyzw = pos.xyzw;
    tristream.Append(o);
}
This works just fine :)
I have a method that creates a cylinder based on variables that contain the height, radius and number of sides.
The mesh generates fine with any number of sides; however, I am really struggling to understand how it should be UV mapped.
Each side of the cylinder is a quad made up of two triangles.
The triangles share vertices.
I think the placement of the UV code is correct; however, I have no idea what values would be fitting.
Right now the texture is stretched/crooked on all sides of the mesh.
Please help me understand this.
private void _CreateSegmentSides(float height)
{
    if (m_Sides > 2) {
        float angleStep = 360.0f / (float) m_Sides;
        BranchSegment seg = new BranchSegment(m_NextID++);
        Quaternion rotation = Quaternion.Euler(0.0f, angleStep, 0.0f);
        int index_tr = 0, index_tl = 3, index_br = 2, index_bl = 1;
        float u0 = (float) 1 / (float) m_Sides;
        int max = m_Sides - 1;

        // Make first triangles.
        seg.vertexes.Add(rotation * (new Vector3(m_Radius, height, 0f)));
        seg.vertexes.Add(rotation * (new Vector3(m_Radius, 0f, 0f)));
        seg.vertexes.Add(rotation * seg.vertexes[seg.vertexes.Count - 1]);
        seg.vertexes.Add(rotation * seg.vertexes[seg.vertexes.Count - 3]);

        // Add triangle indices.
        seg.triangles.Add(index_tr); // 0
        seg.triangles.Add(index_bl); // 1
        seg.triangles.Add(index_br); // 2
        seg.triangles.Add(index_tr); // 0
        seg.triangles.Add(index_br); // 2
        seg.triangles.Add(index_tl); // 3

        seg.uv.Add(new Vector2(0, 0));
        seg.uv.Add(new Vector2(0, u0));
        seg.uv.Add(new Vector2(u0, u0));
        seg.uv.Add(new Vector2(u0, 0));

        for (int i = 0; i < max; i++)
        {
            seg.vertexes.Add(rotation * seg.vertexes[seg.vertexes.Count - 2]); // new vertex
            seg.triangles.Add(seg.vertexes.Count - 1); // new vertex
            seg.triangles.Add(seg.vertexes.Count - 2); // shared
            seg.triangles.Add(seg.vertexes.Count - 3); // shared

            seg.vertexes.Add(rotation * seg.vertexes[seg.vertexes.Count - 2]); // new vertex
            seg.triangles.Add(seg.vertexes.Count - 3); // shared
            seg.triangles.Add(seg.vertexes.Count - 2); // shared
            seg.triangles.Add(seg.vertexes.Count - 1); // new vertex

            // How should I set up the variables for this part??
            // I know they are not supposed to be zero.
            if (i % 2 == 0) {
                seg.uv.Add(new Vector2(0, 0));
                seg.uv.Add(new Vector2(0, u0));
            } else {
                seg.uv.Add(new Vector2(u0, u0));
                seg.uv.Add(new Vector2(u0, 0));
            }
        }
        m_Segments.Add(seg);
    }
    else
    {
        Debug.LogWarning("Too few sides in the segment.");
    }
}
Edit: Added pictures
This is what the cylinder looks like (onesided triangles):
This is what the same shader should look like (on a flat plane):
Edit 2: Wireframe pics
So your wireframe is okay (you linked only the wireframe; I asked for a shaded wireframe, but this is fine).
The reason your texture looks like this is that it is being stretched along the full height of the mesh, so it might look good on a 1 m tall cylinder but would look stretched on a 1000 m tall one. You actually need to stretch this UV map dynamically.
Example for a 1 m tall cylinder; the texture is fine because it maps to a 1x1 area:
Example for a 2 m tall cylinder; the texture is stretched because the length is doubled (a 2x1 area):
So if you always generate cylinders of the same height, you can just adjust this inside Unity: in the texture properties it's called tiling. Just increase the x or y tiling of your texture, and don't forget to set the texture to repeat.
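If the cylinder height varies at runtime, the same tiling adjustment can also be done from code rather than in the Inspector. A minimal sketch, assuming the generated cylinder's Renderer and its height in metres are at hand (the one-tile-per-metre ratio is just an assumption to illustrate the idea):
// Hypothetical helper: repeat the texture once per metre of height so it never looks smeared.
// The texture's wrap mode must be set to Repeat for this to tile.
void FitTilingToHeight(Renderer cylinderRenderer, float height)
{
    cylinderRenderer.material.mainTextureScale = new Vector2(1f, height);
}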
Also, your cylinder cap should look like this (it is not a must-have thing, but still):
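Coming back to the UV values themselves, a minimal sketch of an alternative layout where u walks around the circumference and v spans the side from bottom to top. It assumes one quad per side with four vertices added in the order bottom-left, bottom-right, top-right, top-left, which is not the exact vertex order used in the question:
// One quad per side: u = fraction of the way around, v = 0 at the bottom and 1 at the top.
for (int i = 0; i < m_Sides; i++)
{
    float uLeft = (float)i / m_Sides;
    float uRight = (float)(i + 1) / m_Sides;

    seg.uv.Add(new Vector2(uLeft, 0f));  // bottom-left
    seg.uv.Add(new Vector2(uRight, 0f)); // bottom-right
    seg.uv.Add(new Vector2(uRight, 1f)); // top-right
    seg.uv.Add(new Vector2(uLeft, 1f));  // top-left
}
If the vertices stay shared between neighbouring sides as in the question, each new pair of vertices just gets (u, 0) and (u, 1) with u = (i + 1) / m_Sides; the only catch is the final seam, where the wrap back to u = 0 needs its own duplicated vertices, otherwise the texture runs backwards across the last face. To counter the stretching described above, v can also be set to the segment height instead of 1, combined with a repeating texture.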
Hello friendly computer people,
I've been studying OpenGL with the book iPhone 3D Programming from O'Reilly. Below I've posted an example from the text which shows how to draw a cone. I'm still trying to wrap my head around it, which is a bit difficult since I'm not super familiar with C++.
Anyway, what I would like to do is draw a cube. Could anyone suggest the best way to replace the following code with one that would draw a simple cube?
const float coneRadius = 0.5f;
const float coneHeight = 1.866f;
const int coneSlices = 40;

{
    // Allocate space for the cone vertices.
    m_cone.resize((coneSlices + 1) * 2);

    // Initialize the vertices of the triangle strip.
    vector<Vertex>::iterator vertex = m_cone.begin();
    const float dtheta = TwoPi / coneSlices;
    for (float theta = 0; vertex != m_cone.end(); theta += dtheta) {
        // Grayscale gradient
        float brightness = abs(sin(theta));
        vec4 color(brightness, brightness, brightness, 1);

        // Apex vertex
        vertex->Position = vec3(0, 1, 0);
        vertex->Color = color;
        vertex++;

        // Rim vertex
        vertex->Position.x = coneRadius * cos(theta);
        vertex->Position.y = 1 - coneHeight;
        vertex->Position.z = coneRadius * sin(theta);
        vertex->Color = color;
        vertex++;
    }
}
Thanks for all the help.
If all you want is an OpenGL ES 1.1 cube, I created such a sample application (one that has a texture and lets you rotate the cube with your finger); you can grab the code for it here. I generated this sample for the OpenGL ES session of my course on iTunes U (I've since fixed the broken texture rendering you see in that class video).
The author is demonstrating how to build a generic 3-D engine in C++ in the book, so his code is a little more involved than mine. In this part of the code, he's looping through an angle from 0 to 2 * pi in a number of steps corresponding to coneSlices. You could replace his loop with a series of manual vertex additions corresponding to the vertices I have in my sample application in order to draw a cube instead of his cone. You'd also need to remove the code he has elsewhere for drawing the circular base of the cone.
In OpenGL ES 1 you would probably draw a cube using glVertexPointer to submit the geometry and glDrawArrays to draw it. See this tutorial series:
http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-table-of.html
OpenGL ES is a C-based library.
There are three main ways I know of to draw a simple circle in OpenGL ES, as provided by the iPhone. They are all based on a simple algorithm (the VBO version is below).
void circleBufferData(GLenum target, float radius, GLsizei count, GLenum usage) {
    const int segments = count - 2;
    const float coefficient = 2.0f * (float) M_PI / segments;
    float *vertices = new float[2 * (segments + 2)];
    vertices[0] = 0;
    vertices[1] = 0;
    for (int i = 0; i <= segments; ++i) {
        float radians = i * coefficient;
        float j = radius * cosf(radians);
        float k = radius * sinf(radians);
        vertices[(i + 1) * 2] = j;
        vertices[(i + 1) * 2 + 1] = k;
    }
    glBufferData(target, sizeof(float) * 2 * (segments + 2), vertices, usage);
    glVertexPointer(2, GL_FLOAT, 0, 0);
    delete[] vertices;
}
The three ways that I know of to draw a simple circle are: using glDrawArrays with an array of vertices held by the application; using glDrawArrays with a vertex buffer object; and drawing to a texture on initialization, then drawing that texture when rendering is requested. The first two methods I know fairly well (though I have not been able to get anti-aliasing to work). What code is involved for the last option (I am very new to OpenGL as a whole, so a detailed explanation would be very helpful)? And which is most efficient?
Antialiasing in the iOS OpenGL ES implementation is severely limited. You won't be able to draw antialiased circles using traditional methods.
However, if the circles you're drawing aren't that large, and are filled, you could take a look at using GL_POINT_SMOOTH. It's what I used for my game, Pizarro, which involves a lot of circles. Here's a detailed writeup of my experience with drawing antialiased circles on iOS:
http://sveinbjorn.org/drawing_antialiased_circles_opengl_iphone