How do I discover the lowest possible negative CGFloat?

I know of these values:
CGFloat.infinity, which is greater than
CGFloat.greatestFiniteMagnitude, which is greater than
CGFloat.leastNormalMagnitude, which is greater than
CGFloat.leastNonzeroMagnitude, which is greater than
0
but it stops there... as far as I can tell. Where are the negative values? I imagine maybe it's as easy as placing a - before them, but then I worry that there might be strange exceptions.
How do I find the key negative numbers in CGFloat? Critically, the lowest negative non-infinite number.

Floating-point numbers have a dedicated sign bit, and negating a value simply flips that bit (which is also why a negative zero exists).
Just put a minus sign in front of the constants; the negation is well defined, and there are no strange exceptions.
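For example, a minimal Swift sketch (assuming CoreGraphics is available for CGFloat) showing that the negative extremes are just the negated constants:

import CoreGraphics

// The negative extremes are simply the negated positive constants.
let lowestFinite = -CGFloat.greatestFiniteMagnitude    // most negative finite value
let negativeInfinity = -CGFloat.infinity               // below every finite value
let closestToZeroFromBelow = -CGFloat.leastNonzeroMagnitude

print(negativeInfinity < lowestFinite)        // true
print(lowestFinite < closestToZeroFromBelow)  // true
print(closestToZeroFromBelow < 0)             // true
print(-CGFloat(0) == 0)                       // true: -0.0 compares equal to 0.0

So the lowest negative non-infinite number you are asking about is -CGFloat.greatestFiniteMagnitude.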

Related

Why does the multiplicative inverse of 0 result in infinity?

I am writing a program in Swift that takes the multiplicative inverse of random bytes. Sometimes, the byte is 0, and when the multiplicative inverse is taken, it results in inf.
The multiplicative inverse is being determined using
powf(Float(byte), -1.0)
byte is of type UInt8. If byte is equal to 0, the result is inf as mentioned earlier. How would the multiplicative inverse of 0 be infinity? Wouldn't the multiplicative inverse also be 0 since 0/0's multiplicative inverse is 0/0?
Short answer: By definition. In Swift (and many other languages), floating point numbers are backed by IEEE-754 definition of floats, which is directly implemented by the underlying hardware in most cases and thus quite fast. And according to that standard, division by 0 for floats is defined to be Infinity, and Swift is merely returning that result back to you. (To be precise, 0/0 is defined to be NaN, any positive number divided by 0 is defined to be Infinity, and any negative number divided by 0 is defined to be -Infinity.)
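A small Swift sketch of those three cases (illustrative only; powf comes from the C math library via Foundation):

import Foundation

let byte: UInt8 = 0
let inverse = powf(Float(byte), -1.0)    // 0 raised to the power -1
print(inverse)                           // inf
print(inverse.isInfinite)                // true

print(1.0 / Float(0))                    // inf:  positive / 0
print(-1.0 / Float(0))                   // -inf: negative / 0
print((Float(0) / Float(0)).isNaN)       // true: 0 / 0 is NaN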
An interesting question to ask might be "why?" Why does IEEE-754 define division by 0 to be Infinity for floats, where one can reasonably also expect the machine to throw an error, or maybe define it as NaN (not-a-number), or perhaps maybe even 0? For an analysis of this, you should really read Kahan's (the designer of the semantics behind IEEE-754) own notes regarding this matter. Starting on page 10 of the linked document, he discusses why the choice of Infinity is preferable for division-by-zero, which essentially boils down to efficient implementation of numerical algorithms since this convention allows skipping of expensive tests in iterative numerical analysis. Start reading on page 10, and go through the examples he discusses, which ends on top of page 14.
To sum up: Floating point division by 0 is defined to be Infinity by the IEEE-754 standard, and there are good reasons for making this choice. Of course, one can imagine different systems adopting a different answer as well, depending on their particular need or application area; but then they wouldn't be IEEE-754 compliant.
Plugging in 0 just means it is 0 divided by some positive number. Then, the multiplicative inverse will be dividing by 0. As you probably know, this is undefined in mathematics, but Swift still tries to calculate it. Essentially, it keeps subtracting 0 from the number, but never gets a result, so it outputs infinity.
Edit: As Alias pointed out, Swift is not actually going through that process of continually subtracting 0. It simply returns infinity any time it is asked to divide by 0, as the IEEE-754 standard requires.

What is a positive float, and does it include 0?

I have a question.
Maybe it is simple, but I wondered: does "positive float" mean k >= 0? In other words, does a positive float include zero?
I am asking because of ridge regression hyperparameter tuning.
Thank you
Depends on the context. If we are talking about numbers having a type of float, then the float number is positive if its sign bit is 0:
Sign bit s (bit 31). The most significant bit represents the sign of
the number (1 for negative, 0 for positive).
https://introcs.cs.princeton.edu/java/91float/
As a matter of fact, positive zero and negative zero both exist. Conceptually, they can be thought of as the result of a value converging toward zero from above or from below, but both compare as equal to 0. They have all bits equal to 0 except possibly the sign bit: a positive zero's sign bit is 0, while a negative zero's sign bit is 1.
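A short Swift sketch illustrating the sign bit of the two zeros (illustrative; any IEEE-754 language behaves the same way):

let positiveZero: Float = 0.0
let negativeZero: Float = -0.0

print(positiveZero.sign)                          // plus  (sign bit 0)
print(negativeZero.sign)                          // minus (sign bit 1)
print(positiveZero == negativeZero)               // true: the two zeros compare as equal
print(String(negativeZero.bitPattern, radix: 2))  // only bit 31, the sign bit, is set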
However, if we talk about positive float and negative float in general, these are also scheduling concepts. In that context, a positive float means that there is excess time for an activity, while a negative float means that there is no excess time, so the activity has to start as soon as possible.

Numerical convergence and minimum number size

I have a program which calculates probability values (p-values), but it is entering a very large negative number into the exp function, exp(-626294.830), which evaluates to zero instead of the very small positive number that it should be.
How can I get this to evaluate as a very small floating point number?
I have tried Math::BigFloat, bignum, and bigrat, but all have failed.
Wolfram Alpha says that exp(-626294.830) is 4.08589×10^-271997... zero is a pretty close approximation to that ;-) Although you've edited and removed the context from your question, do you really need to work with such tiny numbers, or perhaps there is some way you could optimize your algorithm or scale your numbers?
Anyway, you are correct that code like Math::BigFloat->new("-626294.830")->bexp seems to take quite some time, even with the support of use Math::BigFloat lib => 'GMP';.
The only alternative I can offer at the moment is Math::Prime::Util::GMP's expreal, although you need to specify a precision to it.
use Math::Prime::Util::GMP qw/expreal/;
use Math::BigFloat;
my $e = Math::BigFloat->new(expreal(-626294.830,272000));
print $e->bnstr,"\n";
__END__
4.086e-271997
But on my machine, even that still takes ~20s to run, which brings us back to the question of potential optimization in other places.
Floating point numbers do not have infinite precision. Assuming the number is represented as an IEEE 754 double, we have 52 bits for the fraction, 11 bits for the exponent, and one bit for the sign. Due to the way exponents are encoded, the smallest positive normal number that can be represented is 2^-1022 (subnormals extend this down to about 2^-1074, which still does not help here).
If we look at your number e^-626294.830, we can do a change of base and see that it equals 2^(log_2 e · -626294.830) = 2^-903552.445, which is significantly smaller than 2^-1022. Approximating your number as zero is therefore correct.
Instead of calculating this value using arbitrary-precision numerics, you are likely better off solving the necessary equations by hand, then coding this in a way that does not require extreme precision. For example, it is unlikely that you need the exact value of e^-626294.830, but perhaps just the magnitude. Then, you can calculate the logarithm instead of using exp().
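To illustrate that last suggestion, here is a minimal sketch (written in Swift rather than Perl, purely for illustration) of working with the logarithm instead of the underflowing exp() value; the exponent -626294.830 is taken from the question:

import Foundation

let logP = -626294.830          // natural log of the p-value

// Calling exp() directly underflows to zero in double precision:
print(exp(logP))                // 0.0

// The magnitude is easy to recover from the logarithm itself:
let log10P = logP / log(10.0)                // convert to a base-10 logarithm
let exponent = floor(log10P)
let mantissa = pow(10.0, log10P - exponent)
print("\(mantissa) x 10^\(Int(exponent))")   // roughly 4.09 x 10^-271997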

comparing float and double and printing them

I have a quick question. Say I have a really big number, up to about 15 digits, and I assign the input to two variables, one float and one double. If I were to compare the two numbers, how would I do it? I think a double has precision up to about 15 digits and a float about 8. Do I compare them while the float only contains 8 digits and pad the rest, or do I have the float print out all 15 digits and then make the comparison? Also, if I were asked to print out the float, is the standard way to print it up to 8 digits, which is its maximum precision?
thanks
Most languages will do some form of type promotion to let you compare types that are not identical, but reasonably similar. For details, you would have to indicate what language you are referring to.
Of course, the real problem with comparing floating point numbers is that the results might be unexpected due to rounding errors. Most mathematical equivalences don't hold for floating point arithmetic, so two sequences of operations which SHOULD yield the same value might actually yield slightly different values (or even very different values if you aren't careful).
EDIT: as for printing, the "standard way" is based on what you need. If, for some reason, you are doing monetary computations in floating point, chances are that you'll only want to print 2 decimal digits.
Thinking in terms of digits may be a problem here. Floats can represent values from negative infinity to positive infinity. In C#, for example, the finite range is about ±1.5 × 10^−45 to ±3.4 × 10^38 in magnitude, with a precision of about 7 decimal digits.
Also, IEEE 754 defines floats and doubles.
Here is a link that might help http://en.wikipedia.org/wiki/IEEE_floating_point
Your question is the right one. You want to consider your approach, though.
Whether at 32 or 64 bits, the floating-point representation is not meant for comparing numbers for equality. For example, the assertion 2.0/7.0 == 60.0/210.0 may or may not be true in the CPU's view. Conceptually, floating-point arithmetic is inherently imprecise.
If you wish to compare numbers for equality, use integers. Consider again the ratios of the last paragraph. The assertion that 2*210 == 7*60 is always true -- noting that those are the integral versions of the same four numbers as before, only related using multiplication rather than division. One suspects that what you are really looking for is something like this.
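A small Swift sketch of both points (the digits lost by the float, and a tolerance-based comparison); the input value and tolerance are illustrative:

let input = 123456789.123456             // about 15 significant digits
let asFloat = Float(input)               // keeps roughly 7 significant digits
let asDouble = Double(input)             // keeps roughly 15-16 significant digits

print(asFloat)                           // stored value is 123456792: trailing digits already lost
print(asDouble)                          // 123456789.123456

// Promoting the float back to double compares the rounded value, not the original:
print(Double(asFloat) == asDouble)       // false

// A relative tolerance is usually what you want instead:
let tolerance = 1e-6 * abs(asDouble)
print(abs(Double(asFloat) - asDouble) < tolerance)   // true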

Matlab precision: simple subtraction is not zero

I compute this simple expression in Matlab:
2*0.04-0.5*0.4^2 = -1.387778780781446e-017
but the result is not zero. What can I do?
Aabaz and Jim Clay have good explanations of what's going on.
It's often the case that, rather than exactly calculating the value of 2*0.04 - 0.5*0.4^2, what you really want is to check whether 2*0.04 and 0.5*0.4^2 differ by an amount that is small enough to be within the relevant numerical precision. If that's the case, then rather than checking whether 2*0.04 - 0.5*0.4^2 == 0, you can check whether abs(2*0.04 - 0.5*0.4^2) < thresh. Here thresh can either be some arbitrary smallish number, or an expression involving eps, which gives the precision of the numerical type you're working with.
EDIT:
Thanks to Jim and Tal for suggested improvement. Altered to compare the absolute value of the difference to a threshold, rather than the difference.
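The same idea, sketched in Swift for illustration (Double.ulpOfOne plays the role of Matlab's eps; the factor of 10 is an arbitrary choice):

let a = 2 * 0.04
let b = 0.5 * (0.4 * 0.4)

print(a - b)                        // about -1.39e-17, not exactly 0
print(a == b)                       // false

let thresh = 10 * Double.ulpOfOne   // illustrative threshold built from eps
print(abs(a - b) < thresh)          // true: equal to within numerical precision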
Matlab uses double-precision floating-point numbers to store real numbers. These are numbers of the form m*2^e where m is an integer between 2^52 and 2^53 (the mantissa) and e is the exponent. Let's call a number a floating-point number if it is of this form.
All numbers used in calculations must be floating-point numbers. Often, this can be done exactly, as with 2 and 0.5 in your expression. But for other numbers, most notably most numbers with digits after the decimal point, this is not possible, and an approximation has to be used. What happens in this case is that the number is rounded to the nearest floating-point number.
So, whenever you write something like 0.04 in Matlab, you're really saying "Get me the floating-point number that is closest to 0.04." In your expression, there are two numbers that need to be approximated: 0.04 and 0.4.
In addition, the exact result of operations like addition and multiplication on floating-point numbers may not be a floating-point number. Although it is always of the form m*2^e the mantissa may be too large. So you get an additional error from rounding the results of operations.
At the end of the day, a simple expression like yours will be off by about 2^-52 times the size of the operands, or about 10^-17.
In summary: the reason your expression does not evaluate to zero is two-fold:
Some of the numbers you start out with are different (approximations) to the exact numbers you provided.
The intermediate results may also be approximations of the exact results.
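A quick way to see both effects (sketched in Swift; Matlab's format long or fprintf('%.20f', ...) shows the same thing):

import Foundation

// The literals are stored as the nearest representable doubles:
print(String(format: "%.20f", 0.04))            // 0.04000000000000000083
print(String(format: "%.20f", 0.4))             // 0.40000000000000002220

// The intermediate results are rounded again, so the two sides of the
// identity 2*0.04 == 0.5*0.4^2 end up as slightly different doubles:
print(String(format: "%.20f", 2 * 0.04))        // 0.08000000000000000167
print(String(format: "%.20f", 0.5 * 0.4 * 0.4)) // 0.08000000000000001554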
What you are seeing is quantization error. Matlab uses doubles to represent numbers, and while they are capable of a lot of precision, they still cannot represent all real numbers because there are an infinite number of real numbers. I'm not sure about Aabaz's trick, but in general I would say there isn't anything you can do, other than perhaps massaging your inputs to be double-friendly numbers.
I do not know if it is applicable to your problem but often the simplest solution is to scale your data.
For example:
a = 0.04;
b = 0.2;
a - 0.2*b                  % displays ans = -6.9389e-018
c = a/min(abs([a b]));
d = b/min(abs([a b]));
c - 0.2*d                  % displays ans = 0
EDIT: of course I did not mean to give a universal solution to this kind of problem, but it is still a good practice that can help you avoid a few issues in numerical computation (curve fitting, etc.). See Jim Clay's answer for the reason why you are experiencing these problems.
I'm pretty sure this is a case of ye olde floating point accuracy issues.
Do you need 1e-17 accuracy? Is this merely a case of wanting 'pretty' output?
In that case, you can just use a formatted sprintf to display the number of significant digits you want.
Realize that this is not a Matlab problem, but a fundamental limitation of how numbers are represented in binary.
For fun, work out what .1 is in binary...
Some references:
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
http://www.mathworks.com/support/tech-notes/1100/1108.html