I have the following method to multiply two 32-bit numbers in fixed-point 19.13 format, but I think there is a problem with it:
1.5f is rounded up to 2.0f, while -1.5f is rounded up to -1.0f.
It seems to me that -1.5f should be rounded down to -2.0f.
First, does the current rounding make sense, and if not, how can I change it
to be more consistent?
static OPJ_INT32 opj_int_fix_mul(OPJ_INT32 a, OPJ_INT32 b) {
    OPJ_INT64 temp = (OPJ_INT64) a * (OPJ_INT64) b;
    temp += 4096;
    assert((temp >> 13) <= (OPJ_INT64)0x7FFFFFFF);
    assert((temp >> 13) >= (-(OPJ_INT64)0x7FFFFFFF - (OPJ_INT64)1));
    return (OPJ_INT32) (temp >> 13);
}
Since you always add 4096, the code rounds to nearest with half-way cases going toward positive infinity. That is a little unusual.
To round toward positive infinity, I'd expect
temp += 4096 + 4095;
To round to nearest (with ties away from zero), add a bias whose sign follows temp. Note that temp >> 13 floors rather than truncates, so the bias on the negative side is one less than half:
temp += (temp < 0) ? 4095 : 4096;
Rounding to nearest with ties-to-even is more work; it is not clear the OP needs that.
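Putting the pieces together, here is a minimal sketch of the round-to-nearest, ties-away-from-zero variant. It is written in Java purely for illustration (the question's code is C; OPJ_INT64 maps to long, OPJ_INT32 to int), and the function name is mine:
// Sketch only: 19.13 fixed-point multiply, rounding to nearest with ties away from zero.
// Because the final >> 13 floors (rounds toward negative infinity), the negative side
// uses a bias of 4095 instead of 4096.
static int fixMulRoundNearest(int a, int b) {
    long temp = (long) a * (long) b;      // 38.26-bit intermediate product
    temp += (temp < 0) ? 4095 : 4096;     // half of 2^13, adjusted for the flooring shift
    return (int) (temp >> 13);            // scale back to 19.13
}
With this bias, 1.5 becomes 2 and -1.5 becomes -2, while a value just above -1.5 still becomes -1.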
if (((float) (Math.Round(gameObject.transform.position.x, 1))) == ((float) (Math.Round(array[i].x, 1)))
    && (((float) (Math.Round(gameObject.transform.position.y, 1))) == (float) (Math.Round(array[i].y, 1))))
Hello! I am using C#, and the array is filled with vector positions. I am trying to detect when an object reaches a certain position, but this condition never triggers. Do you have any tips on how to do this?
You are comparing float values directly ... never do that. It runs into floating-point precision problems: float can only represent a discrete set of values, so most decimal numbers are stored only approximately.
see e.g. here:
The nearest float to 16.67 is 16.6700000762939453125
The nearest float to 100.02 is 100.01999664306640625
or here for a broader explanation.
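As a quick illustration (a small Java snippet; C# float behaves the same way), exact equality on accumulated float values fails while a small tolerance does not:
float sum = 0f;
for (int i = 0; i < 10; i++) {
    sum += 0.1f;                                  // add "0.1" ten times
}
System.out.println(sum);                          // prints 1.0000001, not 1.0
System.out.println(sum == 1.0f);                  // false - exact comparison fails
System.out.println(Math.abs(sum - 1.0f) < 1e-5f); // true - comparing against a tolerance works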
Use Vector3.Distance!
Preferably with a certain threshold distance. There is an example that looks exactly like what you want to do in Coroutines (it is in JavaScript, but the difference to C# in this case is minimal):
public float threshold = 0.1f;
//...
if(Vector3.Distance(gameObject.transform.position, array[i]) <= threshold)
{
....
}
Adjust threshold so it has a value that is bigger than what the object can possibly move between two frames.
Or together with Mathf.Approximately
if(Mathf.Approximately(Vector3.Distance(gameObject.transform.position, array[i]), 0.0f))
{
....
}
If your threshold is smaller than 0.00001, then you could also use
if(gameObject.transform.position == array[i])
{
....
}
since Vector3's == operator already treats positions within about 0.00001 of each other as equal.
But note: in most cases the last two options will also fail, because the odds of a moving GameObject landing on an exact 3D position are almost zero unless you set fixed values somewhere.
Vector3.Distance also works with a Vector2 as a parameter, since an implicit conversion to Vector3 exists.
I have a voxel-based game in development right now, and so far I generate my world using Simplex noise. Now I want to generate some other structures like rivers, cities and other stuff, which can't be generated easily because I split my world (which is practically infinite) into chunks of 64x128x64. I already generate trees (whose leaves can grow into neighbouring chunks) by generating the trees for a chunk plus the trees for the 8 chunks surrounding it, so no leaves are missing. But for larger structures that can get difficult, when calculating one chunk would mean considering chunks within a radius of 16 chunks.
Is there a better way to do this?
Depending on the desired complexity of the generated structure, you may find it useful to first generate it into a separate array, or even a map (a location-to-contents dictionary, useful when the structure is sparse), and then transfer the structure to the world.
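For example, here is a minimal Java sketch of that idea; the Pos record, the method names and the 64x128x64 chunk size are only illustrative, not taken from the question's engine:
import java.util.HashMap;
import java.util.Map;

class StructureBuffer {

    record Pos(int x, int y, int z) {}            // world position used as the map key

    private final Map<Pos, Byte> blocks = new HashMap<>();

    // Build the structure in world coordinates, independent of any chunk.
    void set(int x, int y, int z, byte id) {
        blocks.put(new Pos(x, y, z), id);
    }

    // Transfer to one chunk: copy only the cells that fall inside it.
    // Cells belonging to neighbouring chunks are picked up when those chunks are built.
    void stampInto(byte[][][] chunkTiles, int originX, int originY, int originZ) {
        for (Map.Entry<Pos, Byte> e : blocks.entrySet()) {
            Pos p = e.getKey();
            int lx = p.x() - originX, ly = p.y() - originY, lz = p.z() - originZ;
            if (lx >= 0 && lx < 64 && ly >= 0 && ly < 128 && lz >= 0 && lz < 64) {
                chunkTiles[lx][ly][lz] = e.getValue();
            }
        }
    }
}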
As for natural land features, you may want to google how fractals are used in landscape generation.
I know this thread is old and I suck at explaining, but I'll share my approach.
So take 5x5x5 trees as an example. What you want is for your noise function to return the same value over an area of 5x5 blocks, so that even outside of the chunk you can still check whether a tree should be generated there.
// Here the returned value is different for every block
float value = simplexNoise(x * frequency, z * frequency) * amplitude;
// Here it will return the same value for an area of blocks (use floorDiv instead of plain
// division, or it will get negative coordinates wrong: -3 / 5 should be -1, not 0 as in normal division)
float value = simplexNoise(Math.floorDiv(x, 5) * frequency, Math.floorDiv(z, 5) * frequency) * amplitude;
And now we'll plant a tree. For this we need to check what x, y, z position the current block has relative to the tree's starting position, so we know which part of the tree this block is.
if (value > 0.8) { // A certain threshold (checking whether a tree should be generated in this area)
    int startX = Math.floorDiv(x, 5) * 5; // floor the x value to a multiple of 5 to get the start position
    int startZ = Math.floorDiv(z, 5) * 5; // floor the z value to a multiple of 5 to get the start position
    // Starting height of the trunk (middle of the tree, hence the +2 on startX and startZ),
    // which is one block above the grass surface
    int startY = height(startX + 2, startZ + 2) + 1;
    int relx = x - startX; // block position relative to the starting position
    int relz = z - startZ;
    for (int j = startY; j < startY + 5; j++) {
        int rely = j - startY;
        byte tile = tree[relx][rely][relz]; // the block needed at this part of the tree
        tiles[i][j][k] = tile;
    }
}
The tree 3D array here is almost like a "prefab" of the tree, which you can use to know which block to set at the position relative to the starting point. (God, I don't know how to explain this, and having English as my fifth language doesn't help either; feel free to improve my answer or create a new one.) I've implemented this in my engine, and it's totally working. The structures can be as big as you want, with no chunk pre-loading needed. The one problem with this method is that the trees or structures will be spawned almost on a grid, but this can easily be solved with multiple octaves with different offsets.
So, to recap:
for (int i = 0; i < 64; i++) {
    for (int k = 0; k < 64; k++) {
        int x = chunkPosToWorldPosX(i); // Get world position
        int z = chunkPosToWorldPosZ(k);

        // Here the returned value is different for every block
        // float value = simplexNoise(x * frequency, z * frequency) * amplitude;

        // Here it will return the same value for an area of blocks (use floorDiv instead of plain
        // division, or it will get negative coordinates wrong: -3 / 5 should be -1, not 0 as in normal division)
        float value = simplexNoise(Math.floorDiv(x, 5) * frequency, Math.floorDiv(z, 5) * frequency) * amplitude;

        if (value > 0.8) { // A certain threshold (checking whether a tree should be generated in this area)
            int startX = Math.floorDiv(x, 5) * 5; // floor the x value to a multiple of 5 to get the start position
            int startZ = Math.floorDiv(z, 5) * 5; // floor the z value to a multiple of 5 to get the start position
            // Starting height of the trunk (middle of the tree, hence the +2 on startX and startZ),
            // which is one block above the grass surface
            int startY = height(startX + 2, startZ + 2) + 1;
            int relx = x - startX; // block position relative to the starting position
            int relz = z - startZ;
            for (int j = startY; j < startY + 5; j++) {
                int rely = j - startY;
                byte tile = tree[relx][rely][relz]; // the block needed at this part of the tree
                tiles[i][j][k] = tile;
            }
        }
    }
}
So 'i' and 'k' loop within the chunk, and 'j' loops inside the structure. This is pretty much how it should work.
And about the rivers: I personally haven't done them yet, and I'm not sure why you would need to set blocks around the chunk when generating them (you could just use Perlin worms and that would solve the problem), but it's pretty much the same idea, and the same goes for your cities.
I read something about this in a book, and what they did in these cases was to make a finer division of chunks depending on the application, i.e. if you are going to grow very big objects, it may be useful to have another, separate logical division of, for example, 128x128x128, just for this specific application.
In essence, the data resides in the same place; you just use different logical divisions.
To be honest, I never did any voxel work, so don't take my answer too seriously, I'm just throwing out ideas. By the way, the book is Game Engine Gems 1; they have a gem on voxel engines there.
About rivers: can't you just set a water level and let rivers auto-generate in the valleys between mountain sides? To avoid placing water inside mountain cavities, you could perform a raycast upwards to check that the N blocks above are free.
I would like to have a function where I can input a radius value and have the function spit out the area for a circle of that size. The catch is that I want it to do so for integer-based coordinates only.
I was told elsewhere to look at Gauss's circle problem, which looks to be exactly what I'm interested in, but I don't really understand the math behind it (assuming it actually calculates what I want).
As a side note, I currently use a modified circle drawing algorithm which does indeed produce the results I desire, but it just seems so incredibly inefficient (both the algorithm and the way in which I'm using it to get the area).
So, possible answers for this to me would be actual code or pseudocode for such a function if such a thing exists or something like a thorough explanation of Gauss's circle problem and why it is/isn't what I'm looking for.
The results I would hope the function would produce:
Input: Output
0: 1
1: 5
2: 13
3: 29
4: 49
5: 81
6: 113
7: 149
8: 197
9: 253
I too had to solve this problem recently, and my initial approach was the same as Numeron's: iterate along the x axis from the center outwards and count the points within the upper right quarter, then quadruple them.
I then made the algorithm around 3.4 times faster.
What I do now is just calculate how many points there are within a square inscribed in the circle, and how many lie between that square and the edge of the circle (actually in the opposite order).
This way I only count one-eighth of the points, namely those between the edge of the circle, the x axis and the right edge of the square.
Here's the code:
public static int gaussCircleProblem(int radius) {
    int allPoints = 0; // holds the sum of points
    double y = 0;      // the precise y coordinate on the circle edge for a given x coordinate
    long inscribedSquare = (long) Math.sqrt(radius * radius / 2); // side length of the square inscribed in the upper right quarter of the circle
    int x = (int) inscribedSquare; // x coordinate - starts on the edge of the inscribed square
    while (x <= radius) {
        allPoints += (long) y; // adds the floor of y, which is initially 0
        x++;                   // because we need to start behind the inscribed square and move outwards from there
        y = Math.sqrt(radius * radius - x * x); // Pythagorean equation - how many points lie vertically between the x axis and the circle edge for this x
    }
    allPoints *= 8; // we were counting points in the right half of the upper right quarter, so we had just one-eighth
    allPoints += (4 * inscribedSquare * inscribedSquare); // the points inside the inscribed square
    allPoints += (4 * radius + 1); // the loop and the inscribed square calculation did not touch the points on the axes and in the center
    return allPoints;
}
Here's a picture to illustrate that:
1. Round down the length of the side of a square (pink) inscribed in the upper right quarter of the circle.
2. Go to the next x coordinate beyond the inscribed square and start counting the orange points until you reach the edge.
3. Multiply the orange points by eight. This will give you the yellow ones.
4. Square the pink points. This will give you the dark-blue ones. Then multiply by four; this will get you the green ones.
5. Add the points on the axes and the one in the center. This gives you the light-blue ones and the red one.
This is an old question, but I was recently working on the same thing. What you are trying to do is, as you said, Gauss's circle problem, which is sort of described here.
While I too have difficulty understanding the serious maths behind it all, what it more or less pans out to, when not using weird alien symbols, is this (with every division rounded down):
1 + 4 * sum(i=0, r^2/4, r^2/(4*i+1) - r^2/(4*i+3))
which in Java, at least, is:
int sum = 0;
for (int i = 0; i <= (radius * radius) / 4; i++)
    sum += (radius * radius) / (4 * i + 1) - (radius * radius) / (4 * i + 3);
sum = sum * 4 + 1;
I have no idea why or how this works, and to be honest I'm a bit bummed that I have to use a loop to get this out rather than a single line, as it means the performance is O(r^2/4) rather than O(1).
Since the math wizards can't seem to do better than a loop, I decided to see whether I could get it down to O(r) performance, which I did. So don't use the above, use the below. O(r^2/4) is terrible and will be slower even though mine uses square roots.
int sum = 0;
for (int x = 0; x <= radius; x++)
    sum += (int) Math.sqrt(radius * radius - x * x);
sum = sum * 4 + 1;
What this code does is loop from the centre out to the edge along an orthogonal line, and at each point it adds the distance from that line to the edge in the perpendicular direction. At the end it has the number of points in a quarter, so it quadruples the result and adds one for the central point. I feel like the Wolfram equation does something similar, since it also multiplies by 4 and adds one, but I don't know why it loops to r^2/4.
Honestly these aren't great solutions, but they seem to be the best there is. If you call a function which does this regularly, then as new radii come up, save the results in a look-up table rather than doing the full calculation each time.
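A small sketch of that look-up-table idea (the class and method names are mine), memoizing the count per radius so repeated radii cost a single map lookup; it falls back on the O(r) square-root loop from above:
import java.util.HashMap;
import java.util.Map;

class CircleAreaCache {
    private final Map<Integer, Integer> cache = new HashMap<>();

    int pointsInCircle(int radius) {
        return cache.computeIfAbsent(radius, r -> {
            int sum = 0;
            for (int x = 0; x <= r; x++) {
                sum += (int) Math.sqrt(r * r - x * x);
            }
            return sum * 4 + 1;   // four quarters plus the centre point
        });
    }
}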
It's not part of your question, but it may be relevant to someone, so I'll add it in anyway. I was personally working on finding all the points within a circle with cells defined by:
(centreX - cellX)^2 + (centreY - cellY)^2 <= radius^2 + radius
which puts the whole thing out of whack, because the extra +radius means this is not exactly the Pythagorean theorem. That extra bit makes the circles look a whole lot more visually appealing on a grid though, as they don't have those little pimples on the orthogonal edges. It turns out that, yes, my shape is still a circle, but it uses sqrt(r^2+r) as the radius instead of r, which apparently works, but don't ask me how. Anyway, that means that for me the code is slightly different and looks more like this:
int sum = 0;
int compactR = (radius * radius) + radius; // small performance boost I suppose
for (int j = 0; j <= compactR / 4; j++)
    sum += compactR / (4 * j + 1) - compactR / (4 * j + 3);
sum = sum * 4 + 1;
I'm really scratching my head here in an effort to understand a quote I read somewhere: "the more we zoom into the fractal, the more iterations we will most likely need to perform".
So far I haven't been able to find any mathematical or academic paper that backs up that claim.
I've also found a small piece of code that calculates the Mandelbrot set, taken from here:
http://warp.povusers.org/Mandelbrot/
but I still wasn't able to understand how zooming affects iterations.
double MinRe = -2.0;
double MaxRe = 1.0;
double MinIm = -1.2;
double MaxIm = MinIm+(MaxRe-MinRe)*ImageHeight/ImageWidth;
double Re_factor = (MaxRe-MinRe)/(ImageWidth-1);
double Im_factor = (MaxIm-MinIm)/(ImageHeight-1);
unsigned MaxIterations = 30;
for(unsigned y=0; y<ImageHeight; ++y)
{
    double c_im = MaxIm - y*Im_factor;
    for(unsigned x=0; x<ImageWidth; ++x)
    {
        double c_re = MinRe + x*Re_factor;
        double Z_re = c_re, Z_im = c_im;
        bool isInside = true;
        for(unsigned n=0; n<MaxIterations; ++n)
        {
            double Z_re2 = Z_re*Z_re, Z_im2 = Z_im*Z_im;
            if(Z_re2 + Z_im2 > 4)
            {
                isInside = false;
                break;
            }
            Z_im = 2*Z_re*Z_im + c_im;
            Z_re = Z_re2 - Z_im2 + c_re;
        }
        if(isInside) { putpixel(x, y); }
    }
}
Thanks!
This is not a scientific answer, but one based on common sense. In theory, to decide whether a point belongs to the Mandelbrot set, you should iterate infinitely and check whether the value ever escapes to infinity. That is practically useless, so we make assumptions:
We iterate only 50 times
We check whether the iterated value ever gets larger than 2
When you zoom into the Mandelbrot set, the second assumption remains valid. However, zooming means increasing the number of significant fractional digits in the point coordinates.
Say you start with (0.4, -0.2i).
Iterating this value over and over increases the number of digits used, but doesn't lose significant digits. Now, when your point coordinate looks like (0.00000000045233452235, -0.00000000000943452634626i), checking whether that point is in the set needs many more iterations to see whether the value ever reaches 2, not to mention that if you use some kind of Float type, you will lose significant digits at some zoom level and will have to switch to an arbitrary-precision library.
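To make the precision part concrete, here is a tiny Java illustration (the numbers are arbitrary): once the per-pixel step drops below what the coordinate type can resolve, neighbouring pixels collapse onto the same point, and no amount of extra iteration helps:
float  cf = 0.3007f;      // a coordinate near the set, stored in single precision
double cd = 0.3007;       // the same coordinate in double precision
double step = 1e-9;       // the spacing between adjacent pixels after a deep zoom

System.out.println(cf + (float) step == cf);   // true  - float can no longer tell the pixels apart
System.out.println(cd + step == cd);           // false - double still resolves the step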
Trying it is your best friend :-) Calculate the set with a low iteration count and with a high iteration count, and subtract the second image from the first. You will always see changes at the edges (where black pixels meet colored pixels), but if your zoom level is high (meaning the point coordinates have a lot of fractional digits) you will get a noticeably different image.
You asked how zooming affects iterations, and my typical zoom-to-iterations ratio is that when I zoom in to a 9th of the area, I increase the iteration count by a factor of about 1.7. A 9th of the area of course means that both width and height are divided by 3.
Making this more generic, this is what I actually use in my code:
Complex middle = << calculate from click in image >>
int zoomfactor = 3;
width = width / zoomfactor;
maxiter = (int)(maxiter * Math.Sqrt(zoomfactor));
minimum = new Complex(middle.Real - width, middle.Imaginary - width);
maximum = new Complex(middle.Real + width, middle.Imaginary + width);
I find that this relation between zoom and iterations works out pretty well; the details in the fractal still come out well on deep zooms without the iteration count growing too fast.
How fast you want to zoom is your own preference; I like a zoom factor of 3, but anything goes. The important thing is to keep the relation between the zoom factor and the increase in iterations.
Example: I have a circle which is split up into two halves. One half goes from 0 to -179.99999999999 while the other goes from 0 to 179.99999999999. Typical example: transform.rotation.z of a CALayer. Instead of going from 0 to 360, it is split up like that.
So when I want to develop a gauge for example (in theory), I want to read values from 0 to 360 rather than getting a -142 and thinking about what that might be on that 0-360 scale.
How to convert this mathematically correctly? Sine? Cosine? Is there anything useful for this?
Isn't the normalization achieved by something as simple as:
assert(value >= -180.0 && value <= +180.0);
if (value < 0)
    value += 360.0;
I'd probably put even this into a function if I'm going to need it in more than one place. If the code needs to deal with numbers that might already be normalized, then you change the assertion. If it needs to deal with numbers outside the range -180..+360, then you have more work to do (adding or subtracting appropriate multiples of 360).
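If you do need the fully general version that last sentence describes, here is a small sketch (in Java; the function name is mine) that folds any angle into [0, 360):
static double normalizeDegrees(double angle) {
    double r = angle % 360.0;   // remainder keeps the sign of the input
    if (r < 0)
        r += 360.0;             // lift negative remainders into range
    return r;
}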
while (x < 0) {
x = x + 360;
}
while (x > 360) {
x = x - 360;
}
This will work on any value, positive or negative.
((value % 360) + 360) % 360
The first (value % 360) brings the value into the range -359 to 359.
The + 360 removes any negative values: the value is now in the range 1 to 719.
The last % 360 brings it into the range 0 to 359.
Say x is the value in the range (-180, 180) and y is the value you want to display:
y = x + 180;
That will shift the reading into the range (0, 360).
If you don't mind spending a few extra CPU cycles on values that are already positive, this should work on any value -360 < x < 360:
x = (x + 360) % 360;
I provide code to return 0 - 360 degree angle values from the layer's transform property in this answer to your previous question.