Is it safe to use //1 as Math.floor in CoffeeScript? - coffeescript

I thought
Math.random() * (max-min) // 1
would be shorter and more comfortable than
Math.floor Math.random() * (max-min)
But I'm not sure whether the former is safe or not.

I didn't know about the // operator, but if we take a look at the JavaScript output of both versions, we see that they are equivalent.
First:
Math.floor(Math.random() * (max - min) / 1);
Second:
Math.floor(Math.random() * (max - min));
(Dividing a number by 1 in JavaScript has no effect)

That usage is explicitly supported by the specs. So it is quite safe to use // as you intend. To quote the doc:
To simplify math expressions [...] // performs integer division
And later:
CoffeeScript          JavaScript
-------------------------------------
a // b                Math.floor(a / b)
Please note this operator was added in CoffeeScript 1.7.0.
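For illustration, a minimal sketch (assuming CoffeeScript 1.7.0 or later; the variable names are made up) showing that both spellings floor the result:

min = 5
max = 15
a = Math.random() * (max - min) // 1         # compiles to Math.floor(... / 1)
b = Math.floor Math.random() * (max - min)   # compiles to Math.floor(...)
console.log a, b                             # both are integers in [0, max - min)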

Related

Imprecise math in postgres using type field numeric?

Very strange issue I am noticing... This link says the numeric data type should be able to precisely handle up to 16383 digits after the decimal point: https://www.postgresql.org/docs/10/datatype-numeric.html
So can someone please explain to me why this expression returns 9499.99999999999905:
(((60000::numeric / 50500 * 50500) - 50500) * (50500::numeric / 50500))::numeric
The correct answer is 9500.
When I use this function I get the right answer:
(((60000::numeric(20,11) / 50500 * 50500) - 50500) * (50500::numeric(20,11) / 50500))::numeric(20,11)
and this gives wrong answer:
(((60000::numeric(25,16) / 50500 * 50500) - 50500) * (50500::numeric(25,16) / 50500))::numeric(25,16) = 9499.9999999999990500
The odd thing is this same issue is happening on this website: https://web2.0calc.com/
if you paste the formula:
((60000.0000000 / 50500 * 50500) - 50500) * (50500.0000000 / 50500) = 9500.
But if I instead add an extra 0 to each of those:
((60000.00000000 / 50500 * 50500) - 50500) * (50500.00000000 / 50500) = 9499.99999999999999999999999999999999999999999999999999999999999905.
Even weirder, for both postgres and this website, if I break down the formula into two executions like this:
((60000.00000000 / 50500 * 50500) - 50500) = 9500
(50500.00000000 / 50500) = 1
9500 * 1 = 9500.
What the heck is going on here?
That you get the right answer with numeric(20,11) and the wrong one with numeric(25,16) is a coincidence: both will have rounding errors, because they calculate only to a limited precision, but in the first case the rounded result happens to be the correct one.
The same is the case for 60000.0000000 and 60000.00000000: they are interpreted as numeric values with different scale.
SELECT scale(60000.0000000), scale(60000.00000000);
 scale | scale
-------+-------
     7 |     8
(1 row)
The only thing that is not obvious is why you get such a bad scale when you cast to numeric without any scale or precision.
The scale of 60000::numeric is 0, and 50500 is also converted to a numeric with scale 0. Now if you divide two numeric values, the resulting scale is calculated by the function select_div_scale, and a comment there clarifies the matter:
/*
 * The result scale of a division isn't specified in any SQL standard. For
 * PostgreSQL we select a result scale that will give at least
 * NUMERIC_MIN_SIG_DIGITS significant digits, so that numeric gives a
 * result no less accurate than float8; but use a scale not less than
 * either input's display scale.
 */
NUMERIC_MIN_SIG_DIGITS has the value 16. The problem this heuristic solves is that the scale of a division result cannot be determined from the scales of the arguments alone. So PostgreSQL chooses a scale that yields 16 significant digits, unless one of the arguments has a bigger display scale. This avoids ending up with extremely long result values unless someone explicitly asks for them.
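As a quick sanity check, a hedged sketch (scale() is the same helper used above, available since PostgreSQL 9.6): inspect the scale chosen for a division, and round the final expression to the scale you actually need.

-- The integer cast has scale 0; the division result gets a much larger
-- scale, per the select_div_scale heuristic quoted above.
SELECT scale(60000::numeric) AS cast_scale,
       scale(60000::numeric / 50500) AS division_scale;

-- Rounding the question's expression (9499.99999999999905) to 4 decimal
-- places yields the expected 9500.0000.
SELECT round((((60000::numeric / 50500 * 50500) - 50500)
             * (50500::numeric / 50500)), 4);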

What's the difference between M_PI and M_PI_2?

I forked a project from Github, Xcode shows a lot of warnings:
'M_PI' is deprecated: Please use 'Double.pi' or '.pi' to get the value
of correct type and avoid casting.
and
'M_PI_2' is deprecated: Please use 'Double.pi' or '.pi' to get the value
of correct type and avoid casting.
Since both M_PI and M_PI_2 are prompted to be replaced by Double.pi, I assume they are in fact the same value. However, there's this code in the project:
switch angle {
case M_PI_2:
    ...
case M_PI:
    ...
case Double.pi * 3:
    ...
default:
    ...
}
I'm really confused here: are M_PI and M_PI_2 different, or are they just the same?
UPDATE:
It turns out to be my blunder: Xcode actually says 'M_PI_2' is deprecated: Please use 'Double.pi / 2' or '.pi / 2' to get the value of correct type and avoid casting. So it isn't a bug; the difference between the two prompts was just too easy to miss.
Use Double.pi / 2 for M_PI_2 and Double.pi for M_PI.
You can also use Float.pi and CGFloat.pi.
In Swift 3 & 4, pi is defined as a static variable on the floating point number types Double, Float and CGFloat.
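Applied to the switch from the question, a minimal sketch of the non-deprecated spelling (the case bodies are placeholders, not the project's real code):

let angle: Double = .pi / 2

switch angle {
case Double.pi / 2:      // was M_PI_2, i.e. π/2
    print("π/2")
case Double.pi:          // was M_PI, i.e. π
    print("π")
case Double.pi * 3:
    print("3π")
default:
    print("something else")
}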
These constants are related to the implementations of different functions in the math library:
s_cacos.c: __real__ res = (double) M_PI_2 - __real__ y;
s_cacosf.c: __real__ res = (float) M_PI_2 - __real__ y;
s_cacosh.c: ? M_PI - M_PI_4 : M_PI_4)
...
s_clogf.c: __imag__ result = signbit (__real__ x) ? M_PI : 0.0;
M_PI, M_PI_2, and M_PI_4 show up quite often but there's no 2.0 * M_PI. 2π is just not that useful, at least in implementing libm.
As for M_PI_2 and M_PI_4, their existence is well justified. The documentation of the GNU C library suggests that "these constants come from the Unix98 standard and were also available in 4.4BSD". Compilers were not that smart back then: typing M_PI/4 instead of M_PI_4 could cause an unnecessary division at run time. Although modern compilers can optimize that away (GCC has used MPFR since 2008, so even rounding is done correctly), using the numeric constants is still a more portable way to write high-performance code.
M_PI is defined as a macro in math.h and is part of the POSIX standard:
#define M_PI 3.14159265358979323846264338327950288
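To make the naming concrete, a small C sketch (M_PI and friends are POSIX/X/Open extensions, so a strict ISO compile may need _XOPEN_SOURCE defined):

#include <math.h>
#include <stdio.h>

int main(void) {
    /* M_PI_2 and M_PI_4 are pi/2 and pi/4 -- not pi*2 and pi*4. */
    printf("M_PI   = %f\n", M_PI);    /* 3.141593 */
    printf("M_PI_2 = %f\n", M_PI_2);  /* 1.570796 */
    printf("M_PI_4 = %f\n", M_PI_4);  /* 0.785398 */
    return 0;
}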

Avoid rounding to 0 when a result is very little

I've searched for this but I didn't find anything; I hope this is not a duplicate question.
I'm doing a formula in TSQL like this:
@Temp = SQRT((((@Base1 - 1) * (@StDev1 * @StDev1))
    + ((@AvgBase - 1) * (@AvgStDev * @AvgStDev)))
    * ((1 / @Base1) + (1 / @AvgBase))
    / (@Base1 + @AvgBase - 2))
But it always returns 0.
@Base1 and @AvgBase are int, the rest of the parameters are float, but I've also tried decimal(15,15).
I also tried replacing the self-multiplication with the POWER() function, but the one part I can't solve is (1 / @Base1) + (1 / @AvgBase): because @Base1 and @AvgBase are so big, the result of that calculation is 0.0001... and some more digits. How can I force the engine not to round the result to 0? Thanks
EDIT: I solved it by changing @AvgBase and @Base1 to float. I guess that 1/@param with an int @param gives a truncated integer result, so by the time you cast it (or whatever), you are already working with a rounded value.
Have you tried creating @INVBase1 = (1 / @Base1)? Will this also be rounded to 0? What happens when you play around with the data type of this new variable?
Alternatively, have you tried
/ ((@Base1) + (@AvgBase))
instead of
* ((1 / @Base1) + (1 / @AvgBase))
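For what it's worth, a minimal T-SQL sketch of the integer-division pitfall and two common ways around it (the sample values are made up; the variable names follow the question):

DECLARE @Base1 int = 5000, @AvgBase int = 8000;

-- Both terms are integer divisions, so each one truncates to 0.
SELECT (1 / @Base1) + (1 / @AvgBase) AS int_division;          -- 0

-- A decimal literal forces non-integer arithmetic.
SELECT (1.0 / @Base1) + (1.0 / @AvgBase) AS decimal_literal;   -- ~0.000325

-- Or cast one operand to float explicitly.
SELECT (1 / CAST(@Base1 AS float)) + (1 / CAST(@AvgBase AS float)) AS float_cast;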

High-precision random numbers on iOS

I have been trying this for a while but thus far haven't had any luck.
What is the easiest way to retrieve a random number between two very precise numbers on iOS?
For example, I want a random number between 41.37783830549337 and 41.377730629131634, how would I accomplish this?
Thank you so much in advance!
Edit: I tried this:
double min = 41.37783830549337;
double max = 41.377730629131634;
double test = ((double)rand() / RAND_MAX) * (max - min) + min;
NSLog(@"Min:%lf, max:%lf, result:%lf", min, max, test);
But the results weren't quite as precise as I was hoping, and ended up like this:
Min:41.377838, max:41.377731, result:41.377838
You can normalise the output of rand to any range you want:
((double)rand() / RAND_MAX) * (max - min) + min
[Note: This is pure C, I'm assuming it works equivalently in Obj-C.]
[Note 2: Replace double with the data-type of your choice as appropriate.]
[Note 3: Replace rand with the random-number source of your choice as appropriate.]
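One detail worth noting about the edit above: %lf prints only six decimal places by default, so the logged values look less precise than they actually are. A hedged Objective-C sketch with the question's values, printing more digits (arc4random() is just one possible substitute for rand(), per Note 3):

double min = 41.37783830549337;
double max = 41.377730629131634;
// arc4random() returns a uint32_t; dividing by UINT32_MAX normalises it to [0, 1].
double test = ((double)arc4random() / UINT32_MAX) * (max - min) + min;
// %.15f shows roughly the full precision of a double.
NSLog(@"Min:%.15f, max:%.15f, result:%.15f", min, max, test);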

Pythonic way to write a for loop that doesn't use the loop index [duplicate]

This question already has answers here:
Is it possible to implement a Python for range loop without an iterator variable?
(15 answers)
Closed 7 months ago.
This is to do with the following code, which uses a for loop to generate a series of random offsets for use elsewhere in the program.
The index of this for loop is unused, and this is resulting in the 'offending' code being highlighted as a warning by Eclipse / PyDev
def RandomSample(count):
    pattern = []
    for i in range(count):
        pattern.append((random() - 0.5, random() - 0.5))
    return pattern
So I either need a better way to write this loop that doesn't need a loop index, or a way to tell PyDev to ignore this particular instance of an unused variable.
Does anyone have any suggestions?
Just for reference, for ignoring variables in PyDev:
By default PyDev will ignore the following variable names:
['_', 'empty', 'unused', 'dummy']
You can add more by passing suppression parameters:
-E, --unusednames ignore unused locals/arguments if name is one of these values
Ref:
http://eclipse-pydev.sourcearchive.com/documentation/1.0.3/PyCheckerLauncher_8java-source.html
How about itertools.repeat:
import itertools
from random import random

count = 5

def make_pat():
    return (random() - 0.5, random() - 0.5)

list(x() for x in itertools.repeat(make_pat, count))
Sample output:
[(-0.056940506273799985, 0.27886450895662607),
(-0.48772848046066863, 0.24359038079935535),
(0.1523758626306998, 0.34423337290256517),
(-0.018504578280469697, 0.33002406492294756),
(0.052096928160727196, -0.49089780124549254)]
randomSample = [(random() - 0.5, random() - 0.5) for _ in range(count)]
Sample output, for count=10 and assuming that you mean the Standard Library random() function:
[(-0.07, -0.40), (0.39, 0.18), (0.13, 0.29), (-0.11, -0.15),\
(-0.49, 0.42), (-0.20, 0.21), (-0.44, 0.36), (0.22, -0.08),\
(0.21, 0.31), (0.33, 0.02)]
If you really need to make it a function, then you can abbreviate by using a lambda:
f = lambda count: [(random() - 0.5, random() - 0.5) for _ in range(count)]
This way you can call it like:
>>> f(1)
[(0.03, -0.09)]
>>> f(2)
[(-0.13, 0.38), (0.10, -0.04)]
>>> f(5)
[(-0.38, -0.14), (0.31, -0.16), (-0.34, -0.46), (-0.45, 0.28), (-0.01, -0.18)]
>>> f(10)
[(0.01, -0.24), (0.39, -0.11), (-0.06, 0.09), (0.42, -0.26), (0.24, -0.44), (-0.29, -0.30), (-0.27, 0.45), (0.10, -0.41), (0.36, -0.07), (0.00, -0.42)]
>>>
you get the idea...
Late to the party, but here's a potential idea:
def RandomSample(count):
    f = lambda: random() - 0.5
    r = range if count < 100 else xrange  # or some other number
    return [(f(), f()) for _ in r(count)]
Strictly speaking, this is more or less the same as the other answers, but it does two things that look kind of nice to me.
First, it removes that duplicate code you have from writing random() - 0.5 twice by putting that into a lambda.
Second, for a certain size range, it chooses to use xrange() instead of range() so as not to unnecessarily generate a giant list of numbers you're going to throw away. You may want to adjust the exact cutoff; I haven't played with it at all, I just thought it might be a potential efficiency concern.
There should be a way to suppress code analysis errors in PyDev, like this:
http://pydev.org/manual_adv_assistants.html
Also, PyDev will ignore unused variables that begin with an underscore, as shown here:
http://pydev.org/manual_adv_code_analysis.html
Try this:
while count > 0:
    pattern.append((random() - 0.5, random() - 0.5))
    count -= 1
import itertools, random

def RandomSample2D(npoints, get_random=lambda: random.uniform(-.5, .5)):
    return ((r(), r()) for r in itertools.repeat(get_random, npoints))

- uses random.uniform() explicitly
- returns an iterator instead of a list