I have recently been working on a voxel game that uses greedy meshing. Faces can vary from 1×1 up to 64×64 units. For the flat areas of the game it makes more sense to combine multiple smaller terrain tiles into bigger ones with a tiled texture, but this poses a problem for my sprite atlas. Each UV refers to a spot on the atlas, but on larger greedy faces the texture gets stretched. I want the UV to tile the correct number of times on a per-face basis, producing the same result as if the larger face were a bunch of smaller ones, only without the extra geometry.
Here is an example of what I want, achieved in OpenGL:
(Image: OpenGL face-based tiling)
See how the larger faces are tiled to give the impression of smaller ones? The texture came from an atlas similar to the following:
(Image: texture atlas)
I only have a basic knowledge of shaders in Unity, but how would I write a shader in Unity to accomplish this?
To do this, you first need to pass the shader a few pieces of information: the x and y dimensions of the face (in tiles) and the UV coordinates of the lower-left and upper-right corners of the tile on the atlas. You can then use the following equations to compute the final UV coordinates, assuming the face's input UVs (inputUV) range from the lower-left corner to the upper-right corner of the sprite:
float2 newUV = (inputUV - llCorner) / (urCorner - llCorner); // remap from the sprite's rect on the atlas to (0.0, 0.0)..(1.0, 1.0)
newUV.x = newUV.x * xDim % 1.0; // repeat xDim times along the x axis
newUV.y = newUV.y * yDim % 1.0; // repeat yDim times along the y axis
newUV = newUV * (urCorner - llCorner) + llCorner; // remap back from (0.0, 0.0)..(1.0, 1.0) into the sprite's rect on the atlas
I haven't actually tested it, but I think this should work. I hope this helps!
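Note that the modulo step has to happen per fragment; if you apply it only at a face's four corner vertices, the interpolated UVs will just stretch again. If you want to sanity-check the math on the CPU first, here is the same remapping as a C# helper (the class and method names are mine, purely for illustration):

using UnityEngine;

// CPU sanity check for the shader math above. Hypothetical helper, not
// part of any Unity API; inputs mirror the shader's inputUV, llCorner,
// urCorner, xDim and yDim.
public static class TiledAtlasUV
{
    public static Vector2 TileUV(Vector2 inputUV, Vector2 llCorner,
                                 Vector2 urCorner, float xDim, float yDim)
    {
        // Remap from the sprite's rect on the atlas to (0,0)..(1,1).
        Vector2 uv = inputUV - llCorner;
        uv.x /= urCorner.x - llCorner.x;
        uv.y /= urCorner.y - llCorner.y;

        // Repeat xDim/yDim times, keeping only the fractional part.
        uv.x = Mathf.Repeat(uv.x * xDim, 1f);
        uv.y = Mathf.Repeat(uv.y * yDim, 1f);

        // Remap back into the sprite's rect on the atlas.
        return new Vector2(uv.x * (urCorner.x - llCorner.x) + llCorner.x,
                           uv.y * (urCorner.y - llCorner.y) + llCorner.y);
    }
}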
I want to position different equilateral triangular models in 3D space in Unity. The problem is that the triangles formed by the 3 known vertices aren't equilateral, and some of them aren't even isosceles, so I need to warp my model to match its corners to the given vertices.
I would like those triangles to look different from each other, which is why I want to use pre-created models.
Currently I do the following calculation to position and scale the triangles onto an isosceles triangle:
Middle-point of the given 3 vertices
Vector3 middlepoint = (points[0]+points[1]+points[2])/3;
Distance from Middle-point
pointdistance[i] = Vector3.Distance(points[i],middlepoint);
The closest point is the one I will rotate the triangle towards, which gives me the triangle's height (y axis). Let's say that corner point is points[0], so float height = Vector3.Distance(points[0], middlepoint);
(I'm certain this step is wrong for a non-isosceles triangle.) I calculate its width by determining the circumscribed circle radius with the help of the remaining points:
float width = (float)(Vector3.Distance(points[1] , points[2])*Math.Sqrt(3)/3);
Apply the scale to the model
Vector3 scale = new Vector3(height, width, 1);
I calculate the normal normalVec of those 3 points to get the x and y orientation right. This works well, so I think I don't need to change it.
Instantiate the triangle
this.Triangle = (GameObject)Instantiate(standardTriangleModel,middlepoint, Quaternion.LookRotation(normalVec,points[0]));
The result looks pretty good until the triangles are no longer isosceles:
(Blue line = middlepoint to closest point, green lines = connections between the given vertices)
So does anyone have a clue how I could position and resize my triangular models to match those points?
No code, as I don't have Unity handy at the moment. This answer is based on how to shear using unity gameobject transforms by trejkaz on the Unity Q&A site.
Start with a gameobject that is a right triangle of height and width 1.
Then, for triangle ABC, set the X scale of the right-triangle gameobject (which we can call mainObject) to the length of AB, and set the Y scale to the shortest distance between C and the line through AB (the height of the triangle measured from the base AB).
Consider the angle CAB = θ.
Then, put mainObject inside of a parent gameobject called Outer1. Scale Outer1 with Y=sqrt(2)/sin(90-θ), X=sqrt(2).
Then, put Outer1 inside of a parent gameobject called Outer2. Rotate Outer2 around mainObject.forward by (θ-90) (which should be a clockwise rotation of 90-θ).
Then, put Outer2 inside of a parent gameobject called Outer3. Scale Outer3 with Y=sin((90-θ)/2), X=cos((90-θ)/2).
At this point, mainObject should be sheared and scaled into the correct shape. You will just need to position and rotate Outer3 so that the (pre-shearing) right-angle corner of mainObject is at A, mainObject.right points from A to B, and mainObject.forward points along the triangle's normal.
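Here is a rough, untested C# transcription of those steps. It assumes the unit right triangle mainObject has its right-angle corner at its local origin with its base along local X, and that nothing in the hierarchy is pre-rotated (so Vector3.forward stands in for mainObject.forward):

using UnityEngine;

// Untested sketch that just transcribes the steps above.
public static class TriangleShear
{
    public static GameObject Build(GameObject mainObject,
                                   Vector3 A, Vector3 B, Vector3 C)
    {
        Vector3 ab = B - A;
        float baseLen = ab.magnitude;
        // Shortest distance from C to the line through A and B.
        float height = Vector3.Cross(ab, C - A).magnitude / baseLen;
        float theta = Vector3.Angle(ab, C - A); // angle CAB, in degrees

        mainObject.transform.localScale = new Vector3(baseLen, height, 1f);

        // Outer1: scale X = sqrt(2), Y = sqrt(2)/sin(90 - theta).
        // (Degenerates when theta = 90, where no shear is needed.)
        float rad = (90f - theta) * Mathf.Deg2Rad;
        var outer1 = new GameObject("Outer1");
        mainObject.transform.SetParent(outer1.transform, false);
        outer1.transform.localScale =
            new Vector3(Mathf.Sqrt(2f), Mathf.Sqrt(2f) / Mathf.Sin(rad), 1f);

        // Outer2: rotate by (theta - 90) around the triangle's normal.
        var outer2 = new GameObject("Outer2");
        outer1.transform.SetParent(outer2.transform, false);
        outer2.transform.localRotation =
            Quaternion.AngleAxis(theta - 90f, Vector3.forward);

        // Outer3: scale X = cos((90 - theta)/2), Y = sin((90 - theta)/2).
        var outer3 = new GameObject("Outer3");
        outer2.transform.SetParent(outer3.transform, false);
        outer3.transform.localScale =
            new Vector3(Mathf.Cos(rad / 2f), Mathf.Sin(rad / 2f), 1f);

        // Finally, position and rotate outer3 so the right-angle corner
        // lands on A, as described above.
        return outer3;
    }
}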
TLDR: I can't figure out the correct Shader Graph setup for using UV and vertex displacement to cheaply animate an (unrigged) mesh.
I am trying to rotate a part of the mesh based on its UV coordinates, e.g. from X 0 to X 0.4 and from Y 0 to Y 0.6. The mesh is created UV-mapped with this in mind.
I have no problem selecting the affected vertices in this area. The problem is that I want to rotate these verts around a customizable axis, e.g. axis (X:1, Y:0, Z:1), using a weight so that the rotation takes place around a pivot point. I want the bottom of the selection to stay connected to the rest of the mesh while the other affected vertices rotate neatly around that point.
The weight can be painted by using split UV channels as seen in the picture:
I multiply the weighted area with a rotation node to rotate it.
And I add that to the position multiplied by the inverted weight (the rest of the verts, excluding the rotated area) to get the final output displacement.
But the rotated mesh is bent. I need it to be stiff, with the whole part rotating at weight = 1 except for the very pivot vertex.
I can get that with a rotation based on weight = 1, but then the pivot point becomes the center of the mesh, not the desired point.
How can I do this correctly?
Been at it for days, please help :')
I started using Unity about a month ago, and this is one of the first issues I faced.
The node you are using will always transform the vertices around the origin.
I think you have two options available:
Translate the vertices by the offset of where you want to rotate the wings. This would require storing the pivot point of the wings in the mesh somehow, which could be done by utilizing a spare UV channel or the vertex color channel (see the sketch after this list).
Use bones and paint the weights in your chosen 3D package. This way you can record the animation and use Unity's skinned mesh renderer to render it.
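For the first option, a sketch of what baking the pivot into a spare UV channel could look like from C# (the choice of UV3 is an assumption; any channel your graph doesn't already use will do):

using System.Collections.Generic;
using UnityEngine;

// Sketch: store one shared pivot per vertex in a spare UV channel so the
// graph can read it back (here UV3; the channel choice is illustrative).
public static class PivotBaker
{
    public static void BakePivot(Mesh mesh, Vector3 pivotObjectSpace)
    {
        var data = new List<Vector3>(mesh.vertexCount);
        for (int i = 0; i < mesh.vertexCount; i++)
            data.Add(pivotObjectSpace); // same pivot for every vertex
        mesh.SetUVs(3, data); // SetUVs accepts 3-component UVs
    }
}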
Hope that helps.
Try this:
I've used the UV ranges from your example, applied to a sphere of unit size. The sphere's original pivot is in the centre, and its adjusted pivot is shifted 0.5 on the Y axis.
The only variable the shader doesn't know, is the adjusted pivot position; so I pass this through the material.
I've not implemented your weight in the graph, as I just wanted to show you the process. You can easily plug that in.
The color output is just being used for debug purposes.
The first image is with the default object pivot.
The second image is with the adjusted pivot.
The final image is the graph. (Note the logic group is driving the vertex rotation based on the UV mask).
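For anyone who can't make out the graph image, the logic group boils down to the standard rotate-around-a-pivot pattern. In plain C# terms (my paraphrase of the graph, with the weight plugged in as a lerp):

using UnityEngine;

// My paraphrase of the graph's logic group: move the pivot to the origin,
// rotate, move back, then blend by the painted weight.
public static class PivotRotation
{
    public static Vector3 Rotate(Vector3 vertex, Vector3 pivot, Vector3 axis,
                                 float angleDegrees, float weight)
    {
        Vector3 rotated =
            Quaternion.AngleAxis(angleDegrees, axis.normalized)
            * (vertex - pivot) + pivot;
        return Vector3.Lerp(vertex, rotated, weight); // weight = your mask
    }
}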
I'd like to create a fixed-size circle that will have a varying number (between 6 and 12) of rectangle sprites positioned on it. I've read about a cocos2d function called drawCircle, which is great for displaying a circle. I'd like to display a circle, but I'd also like to include the rectangle sprites on top of it, spaced evenly depending on the number of sprites.
Is there a function that would layout the rectangle sprites in a circle?
I see a little bit of trigonometry in your future! Perhaps draw the circle using a drawing function, and then compute points for the center of each box?
You'll need to know the radius of your circle, obviously, but from there it should be pretty simple. It looks like you want to place them at 45-degree intervals: the first box would be placed at (radius, 0), the second at (radius*cos(45), radius*sin(45)), the third at (0, radius), etc.
The above math assumes standard counter-clockwise rotation from 0 to 360 degrees. You can also use radians; you would then compute all these points with theta = 0, pi/4, pi/2, 3pi/4, pi, 5pi/4, 3pi/2, and 7pi/4.
Basically, if the circle center is (x0, y0), your calculated points will be (x0 + radius*cos(theta), y0 + radius*sin(theta)).
Should be fairly simple math at play there :)
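The question is about cocos2d, but the math is engine-agnostic, so here is the layout loop sketched in C# (all names are made up):

using System;

// Evenly space 'count' sprites on a circle of radius 'radius'
// centred on (x0, y0).
public static class CircleLayout
{
    public static (float x, float y)[] Positions(float x0, float y0,
                                                 float radius, int count)
    {
        var points = new (float x, float y)[count];
        for (int i = 0; i < count; i++)
        {
            double theta = 2.0 * Math.PI * i / count; // even angular spacing
            points[i] = (x0 + radius * (float)Math.Cos(theta),
                         y0 + radius * (float)Math.Sin(theta));
        }
        return points;
    }
}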
When I render a cube and texture it, I end up with white edges along the cube. I've checked the vertex and texture coordinates and they look fine to me. My texture is a power of 2: it is a texture map containing a 4×4 grid of textures, each 16×16 pixels. Does anyone have any suggestions?
I guess you are experiencing texture bleeding. You can solve it either by using GL_CLAMP on your textures or by slightly insetting your UV coordinates, e.g. to 0.0005 and 0.9995 (for instance) instead of 0 and 1, to compensate for texture sampling artifacts.
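To make that inset concrete for the 64x64 atlas of 16x16 tiles in the question, a common choice is half a texel; a quick sketch of the arithmetic (names are mine):

// Half-texel inset for one tile's UV range inside an atlas (a sketch).
// For the question's 64x64 atlas the inset is 0.5/64 = ~0.0078.
public static class AtlasUV
{
    public static (float min, float max) InsetRange(float tileMin,
                                                    float tileMax,
                                                    int atlasSizePixels)
    {
        float inset = 0.5f / atlasSizePixels; // half a texel in UV space
        return (tileMin + inset, tileMax - inset);
    }
}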
An answer to my question suggests that DOT3 lighting can help with OpenGL ES rendering, but I'm having trouble finding a decent definition of what DOT3 lighting is.
Edit 1
iPhone-related information is greatly appreciated.
DOT3 lighting is often referred to as per-pixel lighting. With vertex lighting, the lighting is calculated at every vertex and the result is interpolated across the triangle. In per-pixel lighting, as the name implies, the objective is to calculate the lighting at every pixel.
The way this is done on fixed-function hardware such as the iPhone is with so-called register combiners. The name DOT3 comes from this render state:
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
Look at this entry on Wolfgang Engel's blog for more info on exactly how to set this up.
When doing per-pixel lighting it's popular to also use a so-called normal map: the normal of every point on an object is stored in a special texture map. This was popularized in DOOM 3 by id Software, where fairly low-polygon models were used with high-resolution normal maps. The reason for using this technique is that the eye is more sensitive to variation in lighting than to variation in shape.
I saw in your other question that the reason this came up was that you wanted to reduce the memory footprint of the vertex data. That is true: instead of storing three components for a normal at every vertex, you only need to store two components for the texture coordinates into the normal map. Enabling per-pixel lighting comes with a performance cost, though, so I'm not sure it will be a net win; as usual, the advice is to try it and see.
Finally, the diffuse lighting intensity at a point is proportional to the cosine of the angle between the surface normal and the direction of the light. For two vectors the dot product is defined as:
a dot b = |a||b| cos(theta)
where |a| and |b| are the lengths of the vectors a and b respectively and theta is the angle between them. If the lengths are equal to one, a and b are called unit vectors and the formula simplifies to:
a dot b = cos(theta)
This means that the diffuse lighting intensity is given by the dot product between the surface normal and the direction of the light, and in that sense all diffuse lighting is a form of DOT3 lighting, even if the name has come to refer to the per-pixel kind.
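In code, the whole derivation collapses to a single dot product per pixel. A minimal sketch, using Unity's math types purely for illustration:

using UnityEngine;

// The derivation above in one line per pixel: with unit vectors,
// intensity = cos(theta) = dot(normal, direction towards the light).
public static class Dot3Diffuse
{
    public static float Intensity(Vector3 normal, Vector3 toLight)
    {
        // Clamp at zero so surfaces facing away receive no diffuse light.
        return Mathf.Max(0f, Vector3.Dot(normal.normalized,
                                         toLight.normalized));
    }
}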
From here:
Bumpmapping is putting a texture on a model where each texel's brightness defines the height of that texel.
The height of each texel is then used to perturb the lighting across the surface.
Normal mapping is putting a texture on a model where each texel's color is really three values that define the direction that location on the surface points.
A color of (255, 0, 0), for example, might mean that the surface at that location points along the positive X axis.
In other words, each texel is a normal.
The Dot3 name comes from what you actually do with these normals.
Let's say you have a vector which points in the direction your light source points. And let's say you have the vector which is the normal at a specific texel on your model that tells you which direction that texel points.
If you do a simple math operation called a "dot product" on these two vectors, like so:
Dot = N1x*N2x + N1y*N2y + N1z*N2z
Then the resulting value is a number which tells you how much those two vectors point in the same direction.
If the value is -1, then they point in opposite directions, which actually means that the texel is pointing at the light source, and the light source is pointing at the texel, so the texel should be lit.
If the value is 1, then they point in the same direction, which means the texel is pointing away from the light source.
And if the value is 0, then one vector points at 90 degrees relative to the other. I.e., if you are standing on the ground looking forward, your view vector is at 90 degrees to the normal of the ground, which points up.
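Mind the sign convention in this answer: the light vector points the way the light shines, so a dot of -1 means fully lit. A small sketch of turning that into an intensity value (names are mine):

using UnityEngine;

// Same dot product with this answer's convention: 'lightDir' points the
// way the light is shining, so -1 means the texel faces the light.
public static class TexelLighting
{
    public static float Lit(Vector3 texelNormal, Vector3 lightDir)
    {
        float d = Vector3.Dot(texelNormal, lightDir); // -1 lit .. +1 unlit
        return Mathf.Clamp01(-d); // flip the sign and clamp: 1 = fully lit
    }
}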