Why fractional iteration step compiles into different JavaScript than integer step - coffeescript

I was wondering about the slightly different JavaScript that range comprehensions in CoffeeScript compile into. Is there any reason for the following differences in the generated JavaScript?
Iterating a range by integer step
numbers = (i for i in [start..end] by 2)
compiles into:
for (i = start; i <= end; i += 2) {
  _results.push(i);
}
But when iterating by fractional step
numbers = (i for i in [start..end] by 1/2)
it generates slightly more complicated JavaScript:
for (i = start, _ref = 1 / 2; start <= end ? i <= end : i >= end; i += _ref) {
  _results.push(i);
}
So why the additional start <= end condition?

You'll get similarly elaborate code if you just do numbers = (i for i in [start..end]). This is because CoffeeScript doesn't know which direction the range goes when the beginning or ending is a variable. The compiler has a special optimization where it will output simpler code if a constant step is provided, but unfortunately 1/2 is counted as an expression rather than a constant.

CoffeeScript can't know in general what the expression 1/2 evaluates to. It could just as well be Math.random() - .5, whose value would depend on the particular run of the script.
Therefore, it's impossible for CoffeeScript to know whether the step is negative or positive, so it keys the loop condition on the relative positions of start and end rather than on the sign of the step.

This is constant vs. expression, rather than integer vs. fraction. When the step is a constant (such as 2), CoffeeScript knows whether step is positive at compile time and outputs the correct code for that. When the step is an expression (such as 1/2), it needs to determine whether it is positive at runtime.
Unfortunately, CoffeeScript appears to treat fractional numbers as expressions regardless of how they are written (0.5 or 1/2), so there's no simple way to avoid this problem.
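The runtime check that CoffeeScript emits can be mimicked with a short sketch (in Python rather than JavaScript, purely for illustration): the loop direction is chosen from the relative position of start and end, not from the sign of the step.

```python
def sampled_range(start, end, step):
    """Mimic the compiled JS: key the loop condition on start vs end."""
    out = []
    i = start
    if start <= end:          # ascending range
        while i <= end:
            out.append(i)
            i += step
    else:                     # descending range
        while i >= end:
            out.append(i)
            i += step
    return out
```

With a positive step and start <= end this behaves like the simple integer loop; with start > end it counts down, which is exactly what the `start <= end ? i <= end : i >= end` condition achieves.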


Problems sampling Double-Ranges in Scala

I want to sample a numeric range (start, end and increment provided). I'm wondering why the last element sometimes exceeds my upper limit?
E.g.
(9.474 to 49.474 by 1.0).last>49.474 // gives true
This is because the last element is 49.474000000000004
I know that I could also use until to make an exclusive range, but then I "lose" one element (the range stops at 48.474000000000004).
Is there a way to sample a range with the start and end set "exactly" to the provided bounds? (Background: the above gets me into trouble when e.g. doing interpolation with Apache Commons Math, as extrapolation is not allowed.)
What I'm currently doing to prevent this is rounding:
def roundToScale(d:Double,scale:Int) = BigDecimal(d).setScale(scale,BigDecimal.RoundingMode.HALF_EVEN).toDouble
(9.474 to 49.474 by 1.0).map(roundToScale(_:Double,5))
That's due to double arithmetic: 9.474 + 1.0 * 40 = 49.474000000000004.
You can follow that expression from the source: https://github.com/scala/scala/blob/v2.12.5/src/library/scala/collection/immutable/NumericRange.scala#L56
If you want exact values, you should use the Int or Long types.
You should go with BigDecimal:
(BigDecimal(9.474) to BigDecimal(49.474) by BigDecimal(1.0))
  .map(_.toDouble)
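The same idea, exact arithmetic for the stepping and a float conversion only at the end, can be sketched in Python with the standard fractions module (the function name exact_range is made up for illustration):

```python
from fractions import Fraction

def exact_range(start, end, step):
    """Sample [start, end] with exact rational arithmetic.

    Converting through str() captures the decimal literal exactly,
    so repeated addition accumulates no rounding error; each sample
    is converted to the nearest double only once, at the end.
    """
    start, end, step = (Fraction(str(x)) for x in (start, end, step))
    out = []
    i = start
    while i <= end:
        out.append(float(i))
        i += step
    return out
```

Because each value is rounded to a double exactly once, the last element is the float literal 49.474 itself rather than 49.474000000000004.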

Why is the result of INT(RND*100)+1 always equal to 0? - QBasic

I'm writing a guessing game in QBasic, which kind of tells you that I'm new to this, and every time I run the code, rndnum is always 0. What should I change?
To get a different random number you must first seed it. Here's the example from the QB 4.5 Help file:
RANDOMIZE TIMER ' This is the best seed. The time is constantly changing
A = INT(RND*100)+1 ' Generate a random number from 1 to 100
PRINT A
If you are saying that the very first returned number is zero every time the program is run, then all you need is to add the RANDOMIZE statement as a one-time called procedure. If you are instead saying that the same code is returning zero on every iteration of a loop, then something else is wrong - most likely QBasic does not recognize RND as a function and therefore assumes it's a variable, which by default would be set to zero. The correct syntax would be something like:
Lowerbound = 1
Upperbound = 100
RANDOMIZE
FOR X = 1 TO 10
PRINT INT((Upperbound - Lowerbound + 1) * RND + Lowerbound)
NEXT X
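For comparison, the same seed-then-scale pattern in Python (rnd_between is a made-up helper name; random.seed() with no argument plays the role of RANDOMIZE TIMER):

```python
import random

def rnd_between(lower, upper):
    # same formula as the QBasic line:
    # INT((Upperbound - Lowerbound + 1) * RND + Lowerbound)
    return int((upper - lower + 1) * random.random() + lower)

random.seed()  # reseed from system time/entropy so each run differs
print(rnd_between(1, 100))
```

random.random() returns a value in [0, 1), so the formula maps it onto the integers lower..upper inclusive, just as RND does in QBasic.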

What are the best practices for floating-point comparisons in Matlab?

Obviously, float comparison is always tricky. I have a lot of assert-checks in my (scientific) code, so very often I have to check that sums equal one, and similar issues.
Is there a quick-and-easy / best-practice way of performing those checks?
The easiest way I can think of is to build a custom function for fixed tolerance float comparison, but that seems quite ugly to me. I'd prefer a built-in solution, or at least something that is extremely clear and straightforward to understand.
I think it's most likely going to have to be a function you write yourself. I use three things pretty constantly for running computational vector tests so to speak:
Maximum absolute error
return max(abs(result(:) - expected(:))) < tolerance
This calculates maximum absolute error point-wise and tells you whether that's less than some tolerance.
Maximum excessive error count
return sum( abs(result(:) - expected(:)) > tolerance )
This returns the number of points that fall outside your tolerance range. It's also easy to modify to return a percentage.
Root mean squared error
return norm(result(:) - expected(:)) < rmsTolerance
Since these and many other criteria exist for comparing arrays of floats, I would suggest writing a function which would accept the calculation result, the expected result, the tolerance and the comparison method. This way you can make your checks very compact, and it's going to be much less ugly than trying to explain what it is that you're doing in comments.
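Such a dispatcher could look like the following Python sketch (the name float_compare and its interface are made up for illustration; the three branches mirror the three Matlab criteria above):

```python
import math

def float_compare(result, expected, tolerance, method="maxabs"):
    """Compare two equal-length sequences of floats by a chosen criterion."""
    diffs = [abs(r - e) for r, e in zip(result, expected)]
    if method == "maxabs":   # maximum absolute error below tolerance?
        return max(diffs) < tolerance
    if method == "count":    # how many points fall outside the tolerance?
        return sum(d > tolerance for d in diffs)
    if method == "rms":      # Euclidean-norm criterion, as in the Matlab snippet
        return math.sqrt(sum(d * d for d in diffs)) < tolerance
    raise ValueError("unknown method: " + method)
```

A single call such as float_compare(result, expected, 1e-9, "rms") then reads much better in an assert than the expanded expression would.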
Any fixed tolerance will fail if you put in very large or very small numbers; the simplest solution is to use eps to get the double-precision spacing at A:
abs(A-B) < eps(A)*4
The 4 is a totally arbitrary number, which is sufficient in most cases.
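The same magnitude-relative check can be written in Python, where math.ulp (Python 3.9+) plays the role of Matlab's eps(A); nearly_equal is a made-up name for illustration:

```python
import math

def nearly_equal(a, b, ulps=4):
    # math.ulp(x) is the spacing between adjacent doubles at magnitude x,
    # analogous to Matlab's eps(x); 4 is the same arbitrary safety factor
    return abs(a - b) < math.ulp(max(abs(a), abs(b))) * ulps
```

Because the tolerance scales with the magnitude of the inputs, the check works for very large and very small numbers alike.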
I don't know of any special built-in solution. Maybe something using the eps function?
For example as you probably know this will give False (i.e. 0) as a result:
>> 0.1 + 0.1 + 0.1 == 0.3
ans =
0
But with eps you could do the following and the result is as expected:
>> (0.1+0.1+0.1) - 0.3 < eps
ans =
1
I have had good experience with xUnit, a unit test framework for Matlab. After installing it, you can use:
assertVectorsAlmostEqual(a,b) (checks for normwise closeness between vectors; configurable absolute/relative tolerance and sane defaults)
assertElementsAlmostEqual(a,b) (same check, but elementwise on every single entry -- so [1 1e-12] and [1 -1e-9] will compare equal with the former but not with the latter).
They are well-tested, fast to use and clear enough to read. The function names are quite long, but with any decent editor (or the Matlab one) you can write them as assertV<tab>.
For those who understand both MATLAB and Python (NumPy), it may be useful to check the code of the following Python functions, which do the job:
numpy.allclose(a, b, rtol=1e-05, atol=1e-08)
numpy.isclose(a, b, rtol=1e-05, atol=1e-08, equal_nan=False)
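Python's standard library also has a scalar analogue, math.isclose, which likewise combines a relative and an absolute tolerance (though its exact formula differs slightly from NumPy's):

```python
import math

# rel_tol scales with the magnitude of the inputs; abs_tol guards values near zero
print(math.isclose(0.1 + 0.1 + 0.1, 0.3, rel_tol=1e-9, abs_tol=0.0))  # prints True
```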

find discretization steps

I have data files F_j, each containing a list of numbers with an unknown number of decimal places. Each file contains discretized measurements of some continuous variable, and I want to find the discretization step d_j for each file F_j.
A solution I could come up with: for each F_j,
find the number (n_j) of decimal places;
multiply each number in F_j with 10^{n_j} to obtain integers;
find the greatest common divisor of the entire list.
I'm looking for an elegant way to find n_j with Matlab.
Also, finding the gcd of a long list of integers seems hard — do you have any better idea?
Finding the gcd of a long list of numbers isn't too hard. You can do it in time linear in the size of the list. If you get lucky, you can do it in time a lot less than linear. Essentially this is because:
gcd(a,b,c) = gcd(gcd(a,b),c)
and if either a=1 or b=1 then gcd(a,b)=1 regardless of the size of the other number.
So if you have a list of numbers xs you can do
g = xs(1);
for i = 2:length(xs)
  g = gcd(xs(i), g);
  if g == 1
    break
  end
end
The variable g will now store the gcd of the list.
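The same fold-with-early-exit can be sketched in Python using the standard library's gcd (list_gcd is a made-up helper name):

```python
from math import gcd

def list_gcd(xs):
    """gcd of a list, with the early exit described above:
    once g reaches 1 it can never change, so stop scanning."""
    g = xs[0]
    for x in xs[1:]:
        g = gcd(g, x)
        if g == 1:
            break
    return g
```

On a list containing coprime values near the front, this finishes after a handful of iterations regardless of list length, which is the "lucky" sub-linear case mentioned above.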
Here is some sample code that I believe will help you get the GCD once you have the numbers you want to look at.
A = [15 30 20];
A_min = min(A);
GCD = 1;
for n = A_min:-1:1
  temp = A / n;
  if (max(mod(temp,1)) == 0)
    % yay, GCD found
    GCD = n;
    break;
  end
end
The basic concept here is that the default GCD will always be 1, since every number is divisible by itself and by 1 of course =). The GCD also can't be greater than the smallest number in the list, so I start with the smallest number and then decrement by 1. This assumes you have already converted the numbers to whole-number form at this point. Decimals will throw this off!
By taking the modulus with 1 you are testing whether each number is a whole number; if it isn't, you will have a decimal remainder left which is greater than 0. If you anticipate having to deal with negatives, you will have to tweak this test!
Other than that, the first time you find a number where the modulus of the list (mod 1) is all zeros, you've found the GCD.
Enjoy!
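Putting the question's three steps together (count decimal places, scale to integers, take the gcd), a Python sketch could look like this; it assumes the values are read from the file as strings, so the number of decimal places is still visible, and discretization_step is a made-up name:

```python
from decimal import Decimal
from functools import reduce
from math import gcd

def discretization_step(values):
    # n_j: the largest number of decimal places among the values
    n = max(-Decimal(v).as_tuple().exponent for v in values)
    # scale every value by 10**n to obtain integers
    ints = [int(Decimal(v).scaleb(n)) for v in values]
    # gcd of the scaled values, mapped back to the original scale
    return reduce(gcd, ints) / 10 ** n
```

Going through Decimal avoids binary floating-point entirely until the final division, so the step comes out as the expected round number.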