Idris: convert from a Double to a Nat by dropping the decimal part (floor)

In Idris, how do you convert from a Double to a Nat by flooring, i.e. dropping the decimal part?
I tried cast:
cast {to=Nat} num
However, it did not work:
When checking an application of function Main.takeLeftOfHalfLength:
Can't cast from Double to Nat
Well, that's to be expected, as it's not obvious how the cast should work given the loss of information.
However I still wish to cast from Double to Nat, how can it be done?
I discovered the divNat function, which lets me divide a Nat, but I'll leave the question here.

We can use floor, cast to Integer and then use integerToNat:
Main> integerToNat $ cast {to = Integer} $ floor 3.9
3
Similarly, ceiling can be used to get the rounded-up Nat:
Main> integerToNat $ cast {to = Integer} $ ceiling 3.9
4
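If this is needed in more than one place, the whole pipeline can be wrapped in a small helper; a minimal sketch, where the name doubleToNat is just illustrative:
-- Floor a Double, then convert the result to Nat (negative inputs map to 0)
doubleToNat : Double -> Nat
doubleToNat x = integerToNat (cast {to = Integer} (floor x))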

Related

What does the doubleValue function in Scala's BigDecimal class actually do?

The BigDecimal class in Scala contains a doubleValue function. A Double is 64 bits in size, but a BigDecimal may contain any number of digits, and any number of digits after the decimal point.
I tried it in the Scala REPL to see what it returns.
It is useful when writing a program to find the square root of a BigDecimal, as it provides an initial guess for the square root. My doubt is how a Double can store a BigDecimal. Can anybody clarify this?
scala> BigDecimal("192837489983938382617887338478272884822843716738884788278828947828784888.1993883727818837818811881818818")
res6: scala.math.BigDecimal = 192837489983938382617887338478272884822843716738884788278828947828784888.1993883727818837818811881818818
scala> res6.doubleValue()
res7: Double = 1.928374899839384E71
It is equivalent to Java's method with the same name which is documented as:
Converts this BigDecimal to a double. This conversion is similar to the narrowing primitive conversion from double to float as defined in The Java™ Language Specification: if this BigDecimal has too great a magnitude to represent as a double, it will be converted to Double.NEGATIVE_INFINITY or Double.POSITIVE_INFINITY as appropriate. Note that even when the return value is finite, this conversion can lose information about the precision of the BigDecimal value.
The documentation doesn't actually spell it out, but doubleValue should return the Double closest to the BigDecimal. Note that for sufficiently large numbers there are large gaps between adjacent Doubles.
My doubt is how a double can store a BigDecimal.
It can't, in most cases. If you convert a BigDecimal to Double and back: BigDecimal(aBigDecimal.doubleValue), the result usually won't be equal to aBigDecimal. There's even an isExactDouble method to test it.
But for this specific use (an initial guess for a square root) that doesn't matter (OTOH, a possible infinity does, but you can just test for it).
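Continuing the REPL session above, the round trip and the isExactDouble check look like this (the res numbering is illustrative):
scala> BigDecimal(res6.doubleValue()) == res6
res8: Boolean = false
scala> res6.isExactDouble
res9: Boolean = false
scala> BigDecimal(0.5).isExactDouble
res10: Boolean = true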
The conversion is lossy. If the value cannot be represented as a double, then the result might be one of:
java.lang.Double.POSITIVE_INFINITY
java.lang.Double.NEGATIVE_INFINITY
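For example, a value far outside Double's range overflows to infinity in the REPL:
scala> BigDecimal("1e400").doubleValue()
res11: Double = Infinity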

Division not working properly in Swift

Here is my code:
println(Double(2/5))
When I run this, it prints out
0.0
How can I fix this? I want it to come out to 0.4. Is there some issue with the rounding?
The problem is that you're not converting to a Double until after you've done integer division between two integers. Let's take a look at the order of operations, starting at the inside and moving outward.
Perform integer division between the integer 2 and the integer 5, which results in the integer 0.
Create a double from the integer 0, which creates the double 0.0.
Call description on the double 0.0, which returns the string "0.0".
Call println on the string "0.0".
We can fix this by calling the Double constructor on each side of the division before we divide them.
println((Double(2)/Double(5)))
Now the order of operations is:
Convert the integer 2 to the floating point 2.0
Convert the integer 5 to the floating point 5.0
Perform floating point division between these floating point numbers, resulting in 0.4
Call description on the floating point number 0.4, which returns the string "0.4".
Call println on the string "0.4".
Note that it's not strictly necessary to convert both sides of the division to Double.
And as long as we're dealing with literals, we can just write println(2.0/5.0).
We could also get away with writing println((2 * 1.0)/5), which now interprets all of our literals as floating point (since we've multiplied by a floating-point literal).
As long as either side of a math operation is a floating-point type, the integer literal will be interpreted as a floating-point type by Swift. But in my opinion it's far better to explicitly convert our types so that we're excruciatingly clear about exactly what we want to happen. So let's get all of our numbers into the same type and be explicit about what we actually want.
If we're dealing with literals, we can add .0 to them to force them as floating point numbers:
println(2.0/5.0)
If we're dealing with variables, we can use a constructor:
let myTwoInt: Int = 2
let myFiveInt: Int = 5
println(Double(myTwoInt)/Double(myFiveInt))
I think your issue is that you are dividing two integers, which will normally return an integer.
I had a similar issue in Java; adding a .0 to one of the integers, or converting either to a double, should fix it.
It's a feature of typed languages that division produces a result of the same type as the values being divided.
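A quick illustration of that rule in Swift:
// Integer operands give an integer result; Double operands give a Double.
let i = 2 / 5        // i is an Int and equals 0
let d = 2.0 / 5.0    // d is a Double and equals 0.4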
Digits is correct about the cause; instead of the approach you're taking, try this:
print(2.0 / 5.0)

Making a calculation in Objective-C

I need a variable a = 6700000^2 * (a - b) * (2 + sinf(a) + sinf(b)), where a and b are floats between -7 and 7. I need all the precision that floats can give me.
Which data type should a be? Is sinf the proper function to get the best precision out of a and b? And should a and b be in radians or degrees?
Well, I made a mistake when I posted the expression; the correct expression is c = 67000000^2 * (a - b) * (2 + sinf(a) + sinf(b)), and my problem is with c. a and b are floats and are passed to me as floats; they really are coordinates (latitude and longitude), so that's not my concern. My concern is: when using sinf on them, do I lose any precision? And which type should c be so that I don't lose precision? I'm using a long double variable d to store a sum of multiple different c values, and d comes back to me as zero when it shouldn't (it should be about 1 or 2), so I was guessing I was losing some precision when calculating the c terms. I was using c as a double. Can it be that I am losing some precision when calculating c?
Thank you very much for your help.
I can't tell you whether float is good enough for your application. If you need more precision, use double, and then use sin() instead of sinf().
The standard trig functions take angles in radians, as you'll discover if you read the relevant documentation.
Instead of using float, you should use a double if you're not worried about memory. Remember to then change sinf() to sin() and use radians.
If you want the best precision without rolling your own types, you should use double rather than float. In that case, you can just use sin(3). According to the man page, you should pass the argument in radians.
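Putting the advice together, a minimal sketch of the corrected expression in double precision (the constant and formula are taken from the edit above; the function name is illustrative):
#include <math.h>

/* c = 67000000^2 * (a - b) * (2 + sin(a) + sin(b)),
   with the float inputs promoted to double before any arithmetic. */
double computeC(float a, float b)
{
    double da = (double)a;   /* promote once, up front */
    double db = (double)b;
    double k  = 67000000.0;
    return k * k * (da - db) * (2.0 + sin(da) + sin(db));
}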

Getting double precision in fortran 90 using intel 11.1 compiler

I have a very large code that sets up and iteratively solves a system of non-linear partial differential equations, written in Fortran. I need all variables to be double precision. In the additional module that I have written for the code, I declare all variables as double precision, but my module still uses variables from the old source code that are declared as type real. So my question is, what happens when a single-precision variable is multiplied by a double-precision variable in Fortran? Is the result double precision if the variable used to store the value is declared as double precision? And what if a double-precision value is multiplied by a constant without the "D0" at the end? Can I just set a compiler option in Intel 11.1 to make all reals and real constants double precision?
So my question is, what happens when a single-precision variable is multiplied by a double-precision variable in Fortran? The single-precision value is promoted to double precision, and the operation is done in double precision.
Is the result double precision if the variable used to store the value is declared as double precision? Not necessarily. The right-hand side is an expression that doesn't "know" about the precision of the variable on the left-hand side, into which it will be stored. If you have Double = SingleA * SingleB (using names to indicate the types), the calculation will be performed in single precision, then converted to double for storage. This will NOT gain extra precision for the calculation!
And what if a double-precision value is multiplied by a constant without the "D0" at the end? As in the first question, the constant will be promoted to double precision and the calculation done in double precision. However, the constant itself is still single precision, and even if you write down as many digits as a double-precision constant would hold, the internal storage is single precision and cannot represent that accuracy. For example, DoubleVar * 3.14159265359 will be calculated in double precision, but the result will be approximately DoubleVar * 3.14159 done in double precision.
If you want the compiler to retain many digits in a constant, you must specify the precision of the constant. The Fortran 90 way to do this is to define your own real kind with whatever precision you need, e.g., to require at least 14 decimal digits:
integer, parameter :: DoubleReal_K = selected_real_kind (14)
real (DoubleReal_K) :: A
A = 5.0_DoubleReal_K
A = A * 3.14159265359_DoubleReal_K
The Fortran standard is very specific about this; other languages behave the same way, and it's really what you'd expect. If an expression contains an operation on two floating-point variables of different precisions, then the expression is of the type of the higher-precision operand, e.g.,
(real variable) + (double variable) -> (double)
(double variable)*(real variable) -> (double)
(double variable)*(real constant) -> (double)
etc.
Now, if you store the result in a lower-precision floating-point variable, it'll get down-converted again. But if you store it in a variable of the higher precision, it'll maintain its precision.
If there's any case where you're concerned that a single-precision floating-point variable is causing a problem, you can force it to be converted to double precision using the DBLE() intrinsic:
DBLE(real variable) -> double
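A minimal sketch of where DBLE() actually matters, with values chosen (as an illustration) so that the intermediate product overflows single precision:
program promote
  implicit none
  real :: sa, sb
  double precision :: d
  sa = 1.0e20
  sb = 1.0e20
  d = sa * sb              ! computed in single precision: overflows to Infinity
  print *, d
  d = dble(sa) * dble(sb)  ! computed in double precision: about 1.0e40
  print *, d
end program promote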
If you write numbers in the form 0.1D0, the compiler treats them as double-precision numbers; if you write 0.1, precision is lost in the conversion.
Here is an example:
program main
  implicit none
  real(8) :: a, b, c
  a = 0.2D0
  b = 0.2
  c = 0.1*a
  print *, a, b, c
end program
When compiled with
ifort main.f90
I get results:
0.200000000000000 0.200000002980232 2.000000029802322E-002
When compiled with
ifort -r8 main.f90
I get results:
0.200000000000000 0.200000000000000 2.000000000000000E-002
If you use the IBM XLF compiler, the equivalence is
xlf -qautodbl=dbl4 main.f90
Jonathan Dursi's answer is correct - the other part of your question was if there was a way to make all real variables double precision.
You can accomplish this with the ifort compiler by using the -i8 (for integers) and -r8 (for reals) options. I'm not sure if there is a way to force the compiler to interpret literals as double-precision without specifying them as such (e.g. by changing 3.14159265359 to 3.14159265359D0) - we ran into this issue a while back.

iPhone/Obj-C: Why does converting float to int with (int)(float * 100) not work?

In my code, I am using float to do currency calculations, but the rounding has yielded undesired results, so I am trying to convert it all to int. With as little change to the infrastructure as possible, in my init functions, I did this:
-(id)initWithPrice:(float)p
{
    [self setPrice:(int)(p*100)];
}
I multiply by 100 because in the main section the values are given as .xx, to 2 decimals. The abnormality I notice is that for the float 1.18, the int comes out as 117. Does anyone know why it does that? The float displays as 1.18; I expect the int equivalent in cents to be 118.
Thanks.
Floating point is always a little imprecise. With IEEE floating-point encoding, powers of two can be represented exactly (like 4, 2, 1, 0.5, 0.25, 0.125, 0.0625, ...), but numbers like 0.1 are always an approximation (just try representing 0.1 as a sum of powers of 2).
Your (int) cast will truncate whatever comes in, so if p*100 resolves to 117.9999995 due to this imprecision, it will become 117 instead of 118.
A better solution is to use something like roundf on p*100. Even better would be to go upstream and fully convert the entire program to fixed-point math using integers.
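Applied to the init method from the question, a minimal sketch of the roundf fix (the [super init] boilerplate is filled in as an assumption; roundf comes from math.h):
#include <math.h>

-(id)initWithPrice:(float)p
{
    if ((self = [super init])) {
        // Round to the nearest cent instead of truncating toward zero.
        [self setPrice:(int)roundf(p * 100.0f)];
    }
    return self;
}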