Please explain this effect, Unity Textures - unity3d

The following code was taken from here. How does this particular line work?
texture.SetPixel(x, y, new Color((x + 0.5f) * stepSize % 0.1f, (y + 0.5f) * stepSize % 0.1f, 0f) * 10f);
Multiplying the Color by 10 and taking the modulus with 0.1f is confusing me.

All he's done in that line is have the color pattern repeat itself ten times over.
Taking the modulo with 0.1 makes the value wrap back to 0 every 0.1, so as the coordinate runs from 0 to 1 it produces ten ramps (starting at 0, 0.1, 0.2, ...).
Multiplying by 10 then rescales each 0 - 0.1 ramp back up to the 0 - 1 range, so the color stays visible, and the result is the 10 x 10 grid pattern.
Just take a look at the images the author has put up.
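For concreteness, here is the same arithmetic in plain C. This is only a sketch: the 64-pixel resolution and stepSize = 1/resolution are assumptions, since the full loop isn't quoted here. It walks one row and shows the red channel ramping from 0 to 1 ten times, which is what produces the grid:

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Assumed values: a 64x64 texture with stepSize = 1.0 / resolution. */
    const int resolution = 64;
    const double stepSize = 1.0 / resolution;

    for (int x = 0; x < resolution; x++) {
        /* Same expression as the red channel above; fmod plays the role of C#'s % */
        double red = fmod((x + 0.5) * stepSize, 0.1) * 10.0;
        printf("x = %2d  red = %.3f\n", x, red);
    }
    return 0;
}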

Related

Why does order matter for Sinusoidal paths in Unity

Just a quick question regarding how the Unity2D engine's compilation or runtime works, or maybe something I don't understand at all. The following code works properly:
pos -= Time.deltaTime * moveSpeed * transform.right;
transform.position = magnitude * pos + axis * Mathf.Sin(Time.time * frequency);
However, if I move pos and axis (both are Vector3) around, then the pathing does not do what is expected, and I was just wondering why this is the case. For example, the following code does not work how I want it to:
pos -= Time.deltaTime * moveSpeed * transform.right;
transform.position = magnitude * Mathf.Sin(Time.time * frequency) * pos + axis;
If anyone has any insight I'd like to know.
Thank you.
Unity will resolve math expressions following the PEMDAS order of operations. To clarify, it will handle everything in the order of:
Parentheses
Exponents
Multiplication / Division
Addition / Subtraction
Along with this, operations of equal precedence are read left to right, so whatever appears on the left is handled first; that is how ties between Addition / Subtraction and between Multiplication / Division are broken.
In your example, moving the variables as you have results in a completely different operation. For simplicity, I will substitute whole numbers for the vectors and just write out the multiplication, since vector * vector and vector * scalar are just scaled vectors, so I can equally substitute ints for all of them.
pos = 5
axis = 3
Mathf.Sin(frequency * Time.time) = 2
magnitude = 12
Now substituting these values into your two equations:
12 * 5 + 3 * 2 (12 * 5 is handled first, next 3 * 2 and then 60 + 6 = 66)
12 * 2 * 5 + 3 (12 * 2 is handled first, next 24 * 5 and then 120 + 3 = 123)
Following the PEMDAS rule I explained above, the solutions work out to be:
66
123
If you would like an explanation using vectors I can write one out.
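As a quick sanity check, here is the same arithmetic in plain C, with the scalar stand-ins from above replacing the vectors (a sketch, not the Unity code itself):

#include <stdio.h>

int main(void) {
    /* Scalar stand-ins from the explanation above. */
    double pos = 5, axis = 3, sine = 2, magnitude = 12;

    /* magnitude * pos + axis * Mathf.Sin(...)  ->  (12 * 5) + (3 * 2) */
    double original = magnitude * pos + axis * sine;

    /* magnitude * Mathf.Sin(...) * pos + axis  ->  ((12 * 2) * 5) + 3 */
    double rearranged = magnitude * sine * pos + axis;

    printf("%g %g\n", original, rearranged);  /* prints 66 123 */
    return 0;
}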

Point tangent to circle

Using this as a reference: Find a tangent point on circle?
cx = 0;
cy = 0;
px = -3;
py = -8;
dx = cx - px;
dy = cy - py;
a = asin(5 / ((dx*dx + dy*dy)^0.5));
b = atan2(dy, dx);
t_1 = deg2rad(180) + b - a;
t_2 = deg2rad(180) + b + a;
For the point (7, 6) the angles are 7.9572/73.4434, and for (-3, -8) they are 213.6264/285.2615. So for the first quadrant the angles do not make sense, but for the third quadrant they do. What am I doing wrong?
Your formula for a is wrong. You should use
a = acos(5 / ((dx*dx + dy*dy)^0.5))
instead of
a = asin(5 / ((dx*dx + dy*dy)^0.5))
i.e. use acos(...) instead of asin(...). The reason is shown in the image below. The formula for angle a is a = acos(r/H), where r is the radius of the circle and H is the length of the hypotenuse of the right-angled triangle. So this has nothing to do with asin(...) not knowing which of the two possible quadrants the value passed in lies in: the argument of asin is always positive here, and you always want the answer in the range 0 to 90 degrees.
So the answers for the two angles that you want are b + a and b - a. Using acos instead of asin in your two cases produces 97.7592 & -16.5566 (or equivalently 343.4434) for your first-quadrant example, and -164.7385 & -56.3736 (or equivalently 195.2615 and 303.6264) for your third-quadrant example. (NB: instead of adding 180 degrees in the formulas for t_1 and t_2, you could just switch the signs of dx and dy.)
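As a quick numeric check of the corrected formula, here is the first-quadrant case in plain C; the printed values match the 97.7592 / -16.5566 quoted above:

#include <stdio.h>
#include <math.h>

int main(void) {
    const double PI = 3.141592653589793;
    /* Circle of radius 5 at the origin, external point (7, 6). */
    double cx = 0, cy = 0, px = 7, py = 6, r = 5;
    double dx = cx - px, dy = cy - py;               /* same direction as in the question */
    double a = acos(r / sqrt(dx * dx + dy * dy));    /* acos, not asin */
    double b = atan2(dy, dx);
    printf("t_1 = %f\n", (PI + b - a) * 180.0 / PI); /* about -16.5566 */
    printf("t_2 = %f\n", (PI + b + a) * 180.0 / PI); /* about  97.7592 */
    return 0;
}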
First -- I spent about 10 minutes figuring out what you're trying to do (which, in the end, I got from a comment on one of the answers), while solving your problem took 2 minutes. So, for future reference, please give as clear a description of your problem as you can.
Now, I think you just have your signs messed up. Try the following:
%// difference vector
%// NOTE: these go the other way around for the atan2 to come out right
dx = px - cx;
dy = py - cy;
%// tip angle of the right triangle
a = asin( 5 / sqrt(dx*dx + dy*dy) );
%// angle between the (local) X-axis and the line of interest
b = atan2(dy, dx);
%// the third angle in the right triangle
%// NOTE: minus a here instead of plus b
g = pi/2 - a;
%// Angles of interest
%// NOTE1: signs are flipped; this automatically takes care of overshoots
%// NOTE2: don't forget to mod 360
t_1 = mod( rad2deg(b - g), 360)
t_2 = mod( rad2deg(b + g), 360)
Alternatively, you could skip computing the intermediate angle a by using acos instead of asin:
%// difference vector
dx = px - cx;
dy = py - cy;
%// Directly compute the third angle of the right triangle
%// (that is, the angle "at the origin")
g = acos( 5 / sqrt(dx*dx + dy*dy) );
%// angle between the (local) X-axis and the line of interest
b = atan2(dy, dx);
%// Angles of interest
t_1 = mod( rad2deg(b - g), 360)
t_2 = mod( rad2deg(b + g), 360)
Just another way to re-discover the trigonometric identity acos(x) = pi/2 - asin(x) :)
This MathWorld entry is what you want: http://mathworld.wolfram.com/CircleTangentLine.html.
Alright, it looks like you are not accounting for the fact that asin, atan (any inverse trig function) has no way to know which of the two possible quadrants the value you passed in lies in. To make up for that, an inverse trig function will assume that your point is in the first or fourth quadrant (northeast / southeast). Therefore, if you call the atan function and your original point was in the second or third quadrant, you need to add 180 degrees / pi radians onto whatever value it returns.
See the documentation here stating that asin returns a value in [-pi/2, pi/2]:
http://www.mathworks.com/help/matlab/ref/asin.html
Hope that helps :)
EDIT
I misunderstood the situation originally.
Here is what I think you have calculated:
t_1 and t_2 represent the angles you would travel at if you started on the circle from the tangent point and wanted to travel to your original starting point.
Viewed with this perspective your angles are correct.
For the point (7,6)
If you started on the circle at approx. (0,5) and traveled at 7 degrees, you would hit the point.
If you started on the circle at approx. (5,0) and traveled at 70 degrees, you would hit the point.
Now, what is going to be more useful and less confusing than angles is to know the slope of the line. To get this from the angle, do the following, with the angle in degrees:
angle = (angle + 90 + 360) % 180 - 90 // this gives us the angle as it would be in quad 1 or 4
slope = tan( deg2rad( angle ) )
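If it helps, here is a runnable version of that conversion in plain C (a sketch; fmod stands in for the integer % so it works on floating-point degrees, and angleToSlope is just a name invented for this example):

#include <stdio.h>
#include <math.h>

/* Fold the angle into quadrant 1 or 4, as in the snippet above, then take the tangent. */
static double angleToSlope(double angleDeg) {
    const double PI = 3.141592653589793;
    double folded = fmod(angleDeg + 90.0 + 360.0, 180.0) - 90.0;
    return tan(folded * PI / 180.0);
}

int main(void) {
    /* 73.4434 and 253.4434 describe the same line, so they give the same slope. */
    printf("%f %f\n", angleToSlope(73.4434), angleToSlope(253.4434));
    return 0;
}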

Implementation of gluLookAt and gluPerspective

I've written a small 2D engine in OpenGL in the process of making a game. I'm using OpenGL ES 2 and the code compiles and runs on iOS and Mac OS X.
Now I'm extending it to support 3D, and I'm having a problem setting up the camera.
I've checked the code a hundred times and I can't find where the problem is, so maybe someone with experience on this can give me an idea.
This is the code I have. I'm posting the part of the code where I think the problem might be, but if something else is needed just ask me.
Matrix4 _getFrustumMatrix(float left, float right, float bottom, float top, float near, float far){
    Matrix4 res = Matrix4(2.0 * near / (right - left), 0, 0, 0,
                          0, 2.0 * near / (top - bottom), 0, 0,
                          (right + left) / (right - left), (top + bottom) / (top - bottom), -(far + near) / (far - near), -1.0,
                          0, 0, -2.0 * far * near / (far - near), 0);
    return res;
}
Matrix4 _getPerspectiveMatrix(float near, float far, float angleOfView){
    static float aspectRatio = float(SCREENW)/float(SCREENH);
    float top = near * tan(angleOfView * 3.1415927 / 360.0);
    float bottom = -top;
    float left = bottom * aspectRatio;
    float right = top * aspectRatio;
    return _getFrustumMatrix(left, right, bottom, top, near, far);
}
Matrix4 _getLookAtMatrix(Vector3 eye, Vector3 at, Vector3 up){
    Vector3 forward, side;
    forward = at - eye;
    forward.normalize();
    side = forward ^ up;
    side.normalize();
    up = side ^ forward;
    Matrix4 res = Matrix4(side.x, up.x, -forward.x, 0,
                          side.y, up.y, -forward.y, 0,
                          side.z, up.z, -forward.z, 0,
                          0, 0, 0, 1);
    res.translate(Vector3(0 - eye));
    return res;
}
void Scene3D::_deepRender(){
    cameraEye = Vector3(10,0,40);
    cameraAt = Vector3(0,0,0);
    cameraUp = Vector3(0,1,0);

    MatrixStack::push();
    Matrix4 projection = _getPerspectiveMatrix(1, 100, 45);
    Matrix4 view = _getLookAtMatrix(cameraEye, cameraAt, cameraUp);
    MatrixStack::set(projection * view);
    Space3D::_deepRender();
    MatrixStack::pop();
}
The drawn object is a representation of the axes where x=red, y=green, z=blue, and it's located at (0,0,0).
If I put the eye at (0,0,40) everything looks as expected:
If I put the eye at (10,0,40) then the object is not drawn in the middle of the screen as it should be.
This is the Matrix4::translate method:
void Matrix4::translate(const Vector3& v) {
    a14 += a11 * v.x + a12 * v.y + a13 * v.z;
    a24 += a21 * v.x + a22 * v.y + a23 * v.z;
    a34 += a31 * v.x + a32 * v.y + a33 * v.z;
    a44 += a41 * v.x + a42 * v.y + a43 * v.z;
}
EDIT: To add some information:
Using _getLookAtMatrix() with these parameters:
cameraEye = Vector3(40,40,40);
cameraAt = Vector3(0,0,0);
cameraUp = Vector3(0,1,0);
Should that give me a matrix equivalent to this one?
Matrix4 view;
view.setIdentity();
view.translate(Vector3(0,0,-69.2820323)); // 69.2820323 is the length of Vector3(40,40,40)
view.rotate(45, Vector3(1,0,0));
view.rotate(-45, Vector3(0,1,0));
At least those transformations makes sense to me and the resulting image looks as what I should expect.
But this matrix and the one I get using _getLookAtMatrix() are very different:
view:
0.707106769, -0.49999997, 0.49999997, 0,
0, 0.707106769, 0.707106769, 0,
-0.707106769, -0.49999997, 0.49999997, 0,
0, 0, -69.2820358, 1
_getLookAtMatrix(cameraEye, cameraAt, cameraUp):
0.707106769, 0, -0.707106769, 0,
-0.408248276, 0.816496551, -0.408248276, 0,
0.577350259, 0.577350259, 0.577350259, 0,
-35.0483475, -55.7538719, 21.520195, 1
You seem to have some serious ordering inconsistencies in your matrix class.
For example, I assumed your Matrix4 constructor takes its arguments (the matrix elements) as column-major, otherwise your functions wouldn't match the reference implementations of glFrustum and gluLookAt and you would get completely screwed-up results.
And the code of your translate function also looks correct, since it has to modify the last column of the matrix, which is the elements a14, a24, a34 and a44.
But your printout of the view matrix suggests that translate actually modifies the last row, unless you print the matrix in column-major format and therefore transposed. But in that case the printout of _getLookAtMatrix suggests that the Matrix4 constructor takes its arguments in row-major order, which in turn invalidates other things.
Of course all this also depends on how you send the matrices to OpenGL and how you use them in the vertex shader (I assume ES 2.0, otherwise there would be no need for your own matrix library). If you indeed use ES 1, then you need to send the matrix elements to OpenGL in column-major order, with the translation in the last column and not the last row.
But no matter what convention you use, there is definitely a severe inconsistency inside your matrix code, and without seeing the whole Matrix4 class, the vertex shader and the code where you upload the matrices to OpenGL, it is hard to tell where that inconsistency is.
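To make the column-major / row-major difference concrete, here is a small stand-alone C illustration (not your Matrix4 class, just the two storage conventions); OpenGL's glLoadMatrixf and glUniformMatrix4fv (with transpose set to GL_FALSE) both expect the column-major layout:

#include <stdio.h>

int main(void) {
    float tx = 1.0f, ty = 2.0f, tz = 3.0f;

    /* Column-major storage: each column is contiguous in memory,
       so the translation column occupies elements 12, 13 and 14. */
    float colMajor[16] = {
        1, 0, 0, 0,     /* column 0 */
        0, 1, 0, 0,     /* column 1 */
        0, 0, 1, 0,     /* column 2 */
        tx, ty, tz, 1   /* column 3: the translation */
    };

    /* Row-major storage: each row is contiguous, so the same
       translation sits at elements 3, 7 and 11 instead. */
    float rowMajor[16] = {
        1, 0, 0, tx,
        0, 1, 0, ty,
        0, 0, 1, tz,
        0, 0, 0, 1
    };

    printf("%g %g\n", colMajor[12], rowMajor[3]);  /* both print tx = 1 */
    return 0;
}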

Calculating Coordinates on Screen

I know there are a load of other questions on this topic; I think I've read most of them, along with Wikipedia and a bunch of other articles, but I am missing (I think) some simple arithmetic to complete my coordinate calculations.
I have this code:
#include <stdio.h>
#include <math.h>

typedef struct {
    double startX;
    double startY;
    double x2;
    double y2;
    double length;
    double angle;
    double lastAngle;
} LINE;

void lineCalc(LINE *lp) {
    double radians = lp->angle * 3.141592653589793/180.0;
    lp->x2 = lp->startX + (lp->length * cos(radians));
    lp->y2 = lp->startY + (lp->length * sin(radians));
    fprintf(stderr, "lineCalc:startX:%2.3f, startY:%2.3f, length:%2.3g, angle:%2.3f, cos(%2.3f):%2.3f, x2:%2.3f, y2:%2.3f\n", lp->startX, lp->startY, lp->length, lp->angle, lp->angle, cos(radians), lp->x2, lp->y2);
}

int main() {
    // Initialise to origin of 250, 250. 0, 0 for initial end point. Length 150, first angle 60 (degrees), l.lastAngle currently not used
    LINE l = {250, 250, 0, 0, 150, 60, 0};

    lineCalc(&l);
    //drawLine(&l);
    l.startX = l.x2; l.startY = l.y2; // make last end point, new start point. Angle stays at 60 degrees

    lineCalc(&l);
    //drawLine(&l);
    l.startX = l.x2; l.startY = l.y2;

    lineCalc(&l);
    //drawLine(&l);
    return 0;
}
This calculates the end point of a line given its start point, length and angle. All fine and good, but what I want to be able to do is draw a shape; a triangle would be a start.
At the moment the code will make the calculation, draw the line (in reality it is generating SVG), make the last end point the next origin, recalculate, draw the next line, etc.
The crucial bit that I am missing is how to make the angle relative to the last line drawn. At the moment the moving of the origin works fine, but the angle stays the same, so three lines with angles of 60 degrees will just draw a straight line, because each angle is relative to the start rather than to the last line.
Just in case it is relevant: in SVG, horizontal is zero degrees. Thus a line 50 units long, starting at y100, x100, at an angle of 90 degrees will have an end point of y150, x100.
Could someone point out the obvious thing that I am missing to make the angles correct relative to the last line, please?
If you take the angle at which the first line is drawn as theta:
theta + 180 deg OR theta - 180 deg will face you back down the line you just drew.
Then theta + 180 deg + 60 OR theta - 180 deg + 60 will face you at 60 degrees to the first line.
You need to choose whether to add or subtract the 180 based on the range of degrees that SVG uses (does it go -180 to 180 or 0 to 360) and how big your starting theta is. You also need to choose + or - 60 degrees based on which side of the first line you want to draw the second line on.
Once you've calculated the angle you're drawing the second line at (theta + 180 + 60, for instance), you then take that as your next theta to calculate the angle for the third line.
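A minimal sketch of that accumulation in plain C, assuming degrees in the range [0, 360); the helper name nextAngle is invented for this example:

#include <stdio.h>
#include <math.h>

/* Absolute heading of the next segment: face back down the previous
   line (+180) and then turn by the angle relative to it. */
static double nextAngle(double prevAngle, double relAngle) {
    double a = fmod(prevAngle + 180.0 + relAngle, 360.0);
    return (a < 0) ? a + 360.0 : a;
}

int main(void) {
    /* Turning -60 degrees relative to each previous segment gives
       headings 60, 180, 300, which closes an equilateral triangle. */
    double angle = 60.0;
    for (int i = 0; i < 3; i++) {
        printf("segment %d heading: %g degrees\n", i + 1, angle);
        angle = nextAngle(angle, -60.0);
    }
    return 0;
}

In the code from the question, you would store each segment's absolute heading back into l.angle (and optionally l.lastAngle) before calling lineCalc again.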

How to swap negative rotation values over to positive rotation values?

Example: I have a circle which is split up into two halves. One half goes from 0 to -179.99999999999 while the other goes from 0 to 179.99999999999. Typical example: transform.rotation.z of a CALayer. Instead of reaching from 0 to 360, it is split up like that.
So when I want to develop a gauge, for example (in theory), I want to read values from 0 to 360 rather than getting a -142 and having to think about what that might be on the 0-360 scale.
How do I convert this mathematically correctly? Sine? Cosine? Is there anything useful for this?
Isn't the normalization achieved by something as simple as:
assert(value >= -180.0 && value <= +180.0);
if (value < 0)
    value += 360.0;
I'd probably put even this into a function if I'm going to need it in more than one place. If the code needs to deal with numbers that might already be normalized, then you change the assertion. If it needs to deal with numbers outside the range -180..+360, then you have more work to do (adding or subtracting appropriate multiples of 360).
while (x < 0) {
    x = x + 360;
}
while (x >= 360) {
    x = x - 360;
}
This will work on any value, positive or negative.
((value % 360) + 360) % 360
The first (value % 360) brings it into the range -359 to 359.
The + 360 removes any negative number: the value is now in the range 1 to 719.
The last % 360 brings it into the range 0 to 359.
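For floating-point degree values the same trick works with fmod, since C's % is integer-only; a small sketch:

#include <stdio.h>
#include <math.h>

/* ((value mod 360) + 360) mod 360 for doubles: the result is always in [0, 360). */
static double normalizeDegrees(double value) {
    return fmod(fmod(value, 360.0) + 360.0, 360.0);
}

int main(void) {
    printf("%g %g %g\n",
           normalizeDegrees(-142.0),   /* 218 */
           normalizeDegrees(-179.99),  /* 180.01 */
           normalizeDegrees(725.0));   /* 5 */
    return 0;
}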
Say x is the value with range (-180, 180) and y is the value you want to display:
y = x + 180;
That will shift the reading to the range (0, 360).
If you don't mind spending a few extra CPU cycles on values that are already positive, this should work on any value -360 < x < 360:
x = (x + 360) % 360;
I provide code to return 0 - 360 degree angle values from the layer's transform property in this answer to your previous question.