In Scala my BigDecimal is 3.721443204405954385563870541379242E+54
I would like the result to be 3721443204405954385563870541379246659709506697378694300
My result is:
3.114968783111033211375362915188093E+41
I would like the result to be:
311496878311103321137536291518809134027240
I do not know the scale, and the result should show only the integer part.
val multimes: (Int, Int) => BigDecimal = (c: Int, begin: Int) => {
  if (c == 1)
    BigDecimal.apply(begin)
  else
    multimes(c - 1, begin) * (c + begin - 1)
}

def mulCount(c: Int): BigDecimal = {
  val upper = multimes(c, c + 1) // (c+1) * (c+2) * ... * (2c)
  val down = multimes(c, 2)      // 2 * 3 * ... * (c+1) = (c+1)!
  upper / down
}
The number above is the result of the function mulCount.
The BigDecimal class has a number of nonintuitive behaviors in Scala 2.10. This will get better in 2.11, but I can't quite tell from your example whether it will fix what you want. Probably not; Scala has a default MathContext which keeps about 128 bits of information (~34 decimal digits), and I think that's what you're running into here.
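You can see that default at work directly (a quick check; this assumes the 2.10-era default of java.math.MathContext.DECIMAL128, i.e. 34 significant digits):

BigDecimal(1) / 3
// => 0.3333333333333333333333333333333333 (cut off after 34 digits)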
If you don't have a decimal problem--and here you don't--then the easiest thing to do is just use BigInt instead. It will scale to however many digits you actually need.
If you must express this as a decimal problem, you should explicitly supply a MathContext that has enough digits:
if (c==1) BigDecimal.apply(begin, new java.math.MathContext(60))
and that MathContext will, if always used on the left-hand side of operations, propagate through to your result.
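For comparison, here is a minimal sketch of the same computation on BigInt (the names mirror the question's code; the division is exact because the quotient is an integer, in fact the c-th Catalan number):

def multimes(c: Int, begin: Int): BigInt =
  if (c == 1) BigInt(begin)
  else multimes(c - 1, begin) * (c + begin - 1)

def mulCount(c: Int): BigInt =
  multimes(c, c + 1) / multimes(c, 2) // exact integer division, no digits lost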
It sounds as though you're mostly concerned about the appearance of the number, and don't want to see it in scientific notation with the exponent.
This is the default behaviour of BigDecimal's toString and can't be overridden. But you can explicitly convert it to a String before printing.
This should do the trick:
val bd: BigDecimal = ...
println(bd.bigDecimal.toPlainString)
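For example, with the value from the question (note the trailing digits are already gone, rounded away by the default MathContext, which is why BigInt is suggested below):

val bd = BigDecimal("3.114968783111033211375362915188093E+41")
println(bd.bigDecimal.toPlainString)
// prints 311496878311103321137536291518809300000000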
That said... Why not just use BigInt?
I know I can parse a Long from a String like the following:
"60".toLong
or convert a Double to a Long like the following:
60.0.toLong
or convert a String containing a Double to a Long like the following:
"60.0".toDouble.toLong
However, I can't do the following:
"60.0".toLong
So my question is whether using .toDouble.toLong is best practice, or whether I should use something like try ... catch ...?
Meanwhile, there is another question: when I try to convert a very large number via Double, there may be some precision loss. I want to know how to fix that.
"9223372036854775800.31415926535827932".toDouble.toLong
You should wrap the operation in a Try anyway, in case the string is not valid.
What you do inside the Try depends on whether "60.0" is a valid value in your application.
If it is valid, use the two-step conversion.
Try("60.0".toDouble.toLong) // => Success(60)
If it is not valid, use the one-step version.
Try("60.0".toLong) // => Failure(java.lang.NumberFormatException...)
Answer to updated question:
9223372036854775800.31415926535827932 has more significant digits than a Double can hold (the conversion rounds it), so you need BigDecimal for that.
Try(BigDecimal("9223372036854775800.31415926535827932").toLong)
However, you are very close to the maximum value for a Long, so if the numbers really are that large I suggest avoiding Long and using BigDecimal and BigInt.
Try(BigDecimal("9223372036854775800.31415926535827932").toBigInt)
Note that toLong will not fail if the BigDecimal is too large; it just gives the wrong value.
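A quick illustration of both pitfalls, using the value from the question (requires import scala.util.Try):

Try("9223372036854775800.31415926535827932".toDouble.toLong)
// => Success(9223372036854775807): the Double rounds up to 2^63, and toLong clamps to Long.MaxValue
Try(BigDecimal("9223372036854775800.31415926535827932").toLong)
// => Success(9223372036854775800): exact, since BigDecimal keeps every digit
Try(BigDecimal("99999999999999999999").toLong)
// => still a Success, but holding a truncated low-64-bits value, not a Failure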
How can I make Scala save Double to a binary file and read it back with no loss of precision?
Scala already has a good framework for saving Double to a text file, encoded as a String representation in base 10. However, doing so introduces roundoff error. The conversion of IEEE 754 64-bit floating point to decimal is imperfect and introduces slight roundoff error. A simulation that is "backed up" to disk every two hours and then reloaded from disk and resumed would not be deterministic. The simulation that was left running would differ from the one that was paused and resumed from a file. Your thoughts?
To solve this problem, I think you should use BigDecimal.
Precautions
The BigDecimal(String) constructor should always be preferred over BigDecimal(Double), because using BigDecimal(double) is unpredictable due to the inability of a double to represent 0.1 exactly.
If a double must be used to initialize a BigDecimal, use BigDecimal.valueOf(double), which converts the double to a String via the Double.toString(double) method.
A rounding mode should be provided when setting the scale.
stripTrailingZeros chops off all trailing zeros.
toString() may use scientific notation, but toPlainString() will never use an exponent in its result.
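A quick demonstration of these precautions (using java.math.BigDecimal directly):

new java.math.BigDecimal(0.1)     // 0.1000000000000000055511151231257827021181583404541015625
java.math.BigDecimal.valueOf(0.1) // 0.1, because it goes through Double.toString
new java.math.BigDecimal("0.1")   // 0.1, the preferred form
BigDecimal("1.2300").bigDecimal.stripTrailingZeros.toPlainString // "1.23"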
Refer to this link:
https://dzone.com/articles/never-use-float-and-double-for-monetary-calculatio
You can save doubles in binary format as-is, byte for byte:
import java.io.{DataOutputStream, OutputStream}

val fos: OutputStream = ???
val out = new DataOutputStream(fos)
out.writeDouble(1.0)
Following simpadjo's suggestion:
import java.io.{DataOutputStream, FileOutputStream, OutputStream}

val sn: List[Double] = List(
  -4.2745321334280, -0.8827640054242,
  0.5781790299989, -2.5173973937094,
  4.3955017758756)
val outputstm: OutputStream = new FileOutputStream("numbers.bin")
val dos: DataOutputStream = new DataOutputStream(outputstm)
for (k <- sn) dos.writeDouble(k)
dos.close()
outputstm.close()
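To read the values back without loss (a sketch, assuming the same file name and element count):

import java.io.{DataInputStream, FileInputStream}

val dis = new DataInputStream(new FileInputStream("numbers.bin"))
val restored: List[Double] = List.fill(sn.length)(dis.readDouble())
dis.close()
// restored == sn, bit for bit: writeDouble/readDouble round-trips IEEE 754 exactly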
If long.isValidInt, then long.toInt obviously evaluates to the corresponding Int value.
But what if it's not? Is it equivalent to simply dropping the leading bits?
Is it equivalent to simply dropping the leading bits?
Yes. To verify this you can either just try it or refer to the following section of the Scala specification:
Conversion methods toByte, toShort, toChar, toInt, toLong, toFloat, toDouble which convert the receiver object to the target type, using the rules of Java's numeric type cast operation. The conversion might truncate the numeric value (as when going from Long to Int or from Int to Byte) or it might lose precision (as when going from Double to Float or when converting between Long and Float).
And the corresponding section of the Java specification:
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
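Trying it out, as suggested, confirms the low-order-bits rule (example values chosen to show both truncation and a sign change):

4294967297L.toInt // => 1: the value is 2^32 + 1, and the high 32 bits are discarded
2147483648L.toInt // => -2147483648: one past Int.MaxValue, so the sign flips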
Why this isn't just described in the ScalaDocs for the toInt method, I don't know.
I am a former Java developer and I have recently watched the insightful and entertaining introduction to Scala for Java developers by professor Venkat Subramaniam (https://www.youtube.com/watch?v=LH75sJAR0hc).
A major point introduced is the elimination of declared types in favor of "type inference". Presumably, this means the compiler recognizes the type I intend to use from the context.
Being an application security expert by trade, the first thing I tried to do is break this type inference... Example:
// declare a function that returns the square of an input Int. The return type is to be inferred.
scala> val square = (x:Int) => x*x
square: Int => Int = <function1>
// I can see the compiler inferred an Int for the output value, which I do not agree with.
scala> square(2147483647)
res1: Int = 1
// integer overflow
My question is why did the compiler not see that "*" is an operator with a threat of overflow, and wrap the inputs in something a little more protective like a BigInteger?
According to the professor, I am supposed to forget about the internal implementation and just get on with my business logic. But after my quick demonstration I'm not so sure that Scala is safe for a programmer who doesn't understand what the compiler is doing with my methods.
I think #rightføld somewhat overstates how often overflows do or don't happen (particularly when considering an attacker who is actively trying to overflow you). But I agree with his basic point. Converting all math to BigInteger would almost certainly have created a massive performance impact over Java. For developers to choose such a language, they'd have to get something visible for that cost.
String objects have a much smaller performance overhead over C strings for many operations. They also provide very visible benefits to the developer, which is why people use them, not security per se. There are many common things that string objects make easy to do over C strings. BigInteger provides none of that. It requires exactly the same code at a fraction of the speed, but just won't overflow (a bug few developers see day to day, even if security guys see it more often).
The equivalent would have been a C string library (with strcmp, strcpy, strcat, etc.) that ran at a fraction of the speed, but just didn't require a null terminator. I don't think many people would have jumped to use that, either, no matter how much that would help security over null-terminated strings. And if the language required it, I don't see a lot of people anxious to use the language.
And as #rightføld suggests in the comments, interoperability with Java would be trashed, since most if not all numbers would wind up being BigInteger. You'd constantly be converting, which raises the same dangers of overflows while adding a lot of code complexity (and more performance impacts).
A from-scratch language might get away with ubiquitous BigInteger (like python) if the language had a lot of other compelling features, but it's a very hard thing to retrofit into a language that wants to be a natural transition from (and with) Java.
In addition to the above answers, I think this question misunderstands the purpose of type inference in a statically typed language. Type inference does not make the choices that you are referring to, such as promoting an Int to a BigInt. It is restricted to simply "inferring" the type of an expression based on the known types of subexpressions at compile time.
The * function in Int returns an Int when supplied with an Int input parameter
def *(x: Int): Int
In this case, since x is declared to be an Int, then x*x must be an Int based on the signature of *.
If we really wanted this behavior, we could define a function that promotes Int to BigInt when multiplying.
implicit class SafeInt(x: Int) {
  def safeMult(a: Int): scala.math.BigInt = scala.math.BigInt(x) * a
}
Then we can define a square with the desired property:
scala> val square = (x: Int) => x safeMult x
square: Int => scala.math.BigInt = <function1>
The compiler infers based on the methods available. Int has a method *(Int): Int that is, as far as the compiler knows, perfectly well defined; 2147483647*2147483647 is a perfectly good method call with the result 1, it doesn't throw ClassCastException or anything like that.
Why is the Int type written this way? Largely for Java/JVM compatibility; many parts of Scala have design compromises for the sake of Java compatibility. If you don't need that functionality, you might prefer to use Haskell or a similar language. (I suspect that even without the requirement for JVM compatibility, Scala would have wanted to expose the machine-native integer types so that users could make that performance/correctness tradeoff where desired. They might not have been the default though)
If you're doing numeric computation in Scala you probably want to use the Spire library, which makes it easy to abstract over numeric types, and provides several high-performance numeric types with particular properties. In particular it has a SafeLong type that handles arbitrary-precision integers but with much better performance than BigInt for values which fall within the Long range, similar to Python's integer type.
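For instance (a sketch; assumes Spire is on the classpath and that its spire.math.SafeLong type behaves as described above):

import spire.math.SafeLong

val n = SafeLong(Long.MaxValue) + 1L
// stays a cheap Long-backed value until it must grow; here it silently promotes to a BigInteger-backed one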
Because overflow almost never occurs in practice, and BigInteger is slow as a dog compared to Int. It is also most inconvenient to have all * operations on Ints return BigIntegers.
"Recognizes the type I intend to use" is not an accurate description of what scala tries to do. It infers the most generic type possible given the constraints imposed by the context. Hence if you write List(Nil, "1"), you'll get List[Serializable], because Serializable is an interface that List and String share - disregarding that Serializable was probably not on your mind at all.
The question you're asking could be asked more precisely as "why is Int the type of numeric literals instead of BigInteger?" - inference doesn't have much to do with it.
And we can opine all we want on that topic, but there's one most accurate answer describing why Scala is what it is: "because Java".
If you want the type of safety that you seem to want, then one approach is to define a function which guards against numeric overflow and returns an Option[Int], or even perhaps an Either[Int, BigInteger].
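For example, a minimal sketch of the Option[Int] variant (safeSquare is a made-up name; java.lang.Math.multiplyExact is available since Java 8 and throws ArithmeticException on overflow):

def safeSquare(x: Int): Option[Int] =
  try Some(Math.multiplyExact(x, x))
  catch { case _: ArithmeticException => None }

safeSquare(3)          // => Some(9)
safeSquare(2147483647) // => None, instead of a silently wrong 1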
The type inference for your square function is correct: given that it's inferred from the input types you've specified and the type of the * function, it's not really broken, in my opinion.
Suppose I wish to have 2 functions, one that generates a random integer within a given range, and one that generates a random double within a given range.
int GetRandomNumber( int min, int max );
double GetRandomNumber( double min, double max );
Notice that the method names are the same. I'm trying to decide whether to name the functions that or...
int GetRandomInteger( int min, int max );
double GetRandomDouble( double min, double max );
The first option has the benefit of the user not having to worry about which one they are calling. They can just call GetRandomNumber with integers or doubles and get a result.
The second option is more explicit in the names, but it reveals unneeded information to the caller.
I know this is petty, but I care about petty things.
Edit: How would C++ behave regarding implicit conversion?
Example:
GetRandomNumber( 1, 1 );
The arguments could be implicitly converted so that the double version of GetRandomNumber is called. Obviously I don't want this to occur. Will C++ use the int version before the double version?
I prefer your second example, it is explicit and leaves no ambiguity in interpretation. It is better to err on the side of being explicit in method names to clearly illuminate the purpose and function of that member.
The only downside to your approach is that you have coupled the name of the method to the return type, which is not ideal in the event that you want to change the return type of one of these methods. However, in that case it would be better to add a new method rather than break compatibility in your API anyway.
I prefer the second version. I like overloading a function when ultimately the two functions do the same thing; in this case they return different types so they're not quite the same thing. Another possibility if your language supports it is to define it as a generic/template method, like:
T GetRandom<T>(T min, T max);
A function name should tell what the function does; I do not see a point in cramming the types into the names. So definitely go for overloading - that's what it is for.
I prefer the overloaded method. Look at the Math.Min() method in .NET. It has 11 overloads: int, double, byte, etc.
I usually prefer the first example because it doesn't pollute the namespace. For example when calling it, if I pass ints, I'm expecting to get back an int. If I pass in doubles, I probably expect to get back a double. The compiler will give you an error if you write:
//this causes an error
double d = GetRandomNumber(1,10);
so it's not really a big issue. And you can always cast the arguments if you need an int but have doubles for input...
In some languages you cannot vary the return type of overloaded functions; this would require the second example.
Assuming C++, the second also avoids problems with ambiguity. If you said:
GetRandomNumber( 1, 5.0 );
which one should be used? In fact, it is a compilation error.
I think the ideal solution would be
Int32.GetRandom(int min, int max)
Double.GetRandom(double min, double max)
but, alas, static extension methods are not possible (yet?).
The .NET Framework seems to favor the first option (System.Math class):
public static decimal Abs(decimal value)
public static int Abs(int value)
Like Andrew, I would personally favor the second option to avoid ambiguity, although I do think this is a matter of taste.