What is the semantics of Long.toInt in Scala?

If long.isValidInt, then obviously, it evaluates to the corresponding Int value.
But what if it's not? Is it equivalent to simply dropping the leading bits?

Is it equivalent to simply dropping the leading bits?
Yes. To verify this you can either just try it or refer to the following section of the Scala specification:
Conversion methods toByte, toShort, toChar, toInt, toLong, toFloat, toDouble which convert the receiver object to the target type, using the rules of Java's numeric type cast operation. The conversion might truncate the numeric value (as when going from Long to Int or from Int to Byte) or it might lose precision (as when going from Double to Float or when converting between Long and Float).
And the corresponding section of the Java specification:
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
Why this isn't just described in the ScalaDocs for the toInt method, I don't know.
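A quick way to see the truncation in action (the values below are just examples):
val big = 4294967297L          // 2^32 + 1, does not fit in an Int
println(big.isValidInt)        // false
println(big.toInt)             // 1, only the low 32 bits are kept
println(4294967296L.toInt)     // 0, since 2^32 truncates to zero
println(2147483648L.toInt)     // -2147483648, the sign can flip, as the spec warns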

Related

Precondition failed: Negative count not allowed

Error:
Precondition failed: Negative count not allowed: file /BuildRoot/Library/Caches/com.apple.xbs/Sources/swiftlang/swiftlang-900.0.74.1/src/swift/stdlib/public/core/StringLegacy.swift, line 49
Code:
String(repeating: "a", count: -1)
Thinking:
Well, it doesn't make sense to repeat a string a negative number of times. Since we have types in Swift, why not use a UInt?
Here is some documentation about it.
Use UInt only when you specifically need an unsigned integer type with
the same size as the platform’s native word size. If this isn’t the
case, Int is preferred, even when the values to be stored are known to
be nonnegative. A consistent use of Int for integer values aids code
interoperability, avoids the need to convert between different number
types, and matches integer type inference, as described in Type Safety
and Type Inference.
Apple Docs
OK, so Int is preferred and the API is just following the rules, but why is the String API designed like that? Why is this initializer not private, with a public one taking a UInt or something like that? Is there a "real" reason? Is this some "undefined behavior" kind of thing?
Also: https://forums.developer.apple.com/thread/98594
This isn't undefined behavior; in fact, a precondition indicates the exact opposite: an explicit check was made to ensure that the given count is nonnegative.
As to why the parameter is an Int and not a UInt — this is a consequence of two decisions made early in the design of Swift:
1. Unlike C and Objective-C, Swift does not allow implicit (or even explicit) casting between integer types. You cannot pass an Int to a function which takes a UInt, and vice versa, nor will the following cast succeed: myInt as? UInt. Swift's preferred method of converting is using initializers: UInt(myInt)
2. Since Ints are more generally applicable than UInts, they are the preferred integer type.
As such, since converting between Ints and UInts can be cumbersome and verbose, the easiest way to interoperate between the largest number of APIs is to write them all in terms of the common integer currency type: Int. As the docs you quote mention, this "aids code interoperability, avoids the need to convert between different number types, and matches integer type inference"; trapping at runtime on invalid input is a tradeoff of this decision.
In fact, Int is so strongly ingrained in Swift that when Apple framework interfaces are imported into Swift from Objective-C, NSUInteger parameters and return types are converted to Int and not UInt, for significantly easier interoperability.
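If trapping on a negative count is a concern in practice, one option (a minimal sketch, not part of the standard library) is to clamp or validate the count yourself before calling the initializer:
// Hypothetical wrapper: clamp a possibly negative count to zero so the
// precondition in String(repeating:count:) can never trap.
func repeating(_ text: String, count: Int) -> String {
    return String(repeating: text, count: max(0, count))
}

print(repeating("a", count: 3))   // "aaa"
print(repeating("a", count: -1))  // "" instead of a runtime crash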

Scala implicit type casting

I'm just beginning Scala, coming from Java.
So I know that in Scala, all things are objects, and Scala matches the longest token (source: http://www.scala-lang.org/docu/files/ScalaTutorial.pdf), so if I understand correctly:
var b = 1.+(2)
then b is a Double plus an Int, which in Java would be a Double.
But when I check its type via println(b.isInstanceOf[Int]) I see that it is an Int. Why is it not a Double like in Java?
According to the specification:
1. is not a valid floating point literal because the mandatory digit after the . is missing.
I believe it's done like that, exactly because expressions like 1.+(2) should be parsed as an integer 1, method call ., method name + and method argument (2).
The compiler treats 1 and 2 as Ints by default. You could force either one of them to be a Double using 1.toDouble, and the result (b) would then be a Double.
Btw, did you mean to write 1.0+2, in which case b would be a Double?
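To make the parsing concrete, a small sketch (the values and printed output are just illustrations):
val a = 1.+(2)                 // parsed as the Int 1, method +, argument 2
println(a.isInstanceOf[Int])   // true -- the result is the Int 3
val b = 1.0 + 2                // a Double literal on the left, so the result is a Double
println(b)                     // 3.0
val c = 1.toDouble + 2         // explicitly widen the Int before adding
println(c)                     // 3.0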

What is the type of 123_456_789?

I see in Swift examples values like 123_456_789, numbers with underscores. What type do these values have by default?
Does it depend on the type of the variable I assign them to? They look quite funny and new to me, so I wonder how they are treated if they are written just as they are, without defining a type.
From the documentation (The Swift Programming Language -> Language Guide -> The Basics -> Numeric Literals):
Numeric literals can contain extra formatting to make them easier to
read. Both integers and floats can be padded with extra zeros and can
contain underscores to help with readability. Neither type of
formatting affects the underlying value of the literal:
let paddedDouble = 000123.456
let oneMillion = 1_000_000
let justOverOneMillion = 1_000_000.000_000_1
So your 123_456_789 is an integer literal, identical to 123456789.
You can insert the underscores wherever you want, not only as a
"thousands separator", such as 1_2_3_4_5_6_7_8_9 or 1_23_4567_89, if you like to write obfuscated code.
123_456_789 is an "integer literal" just like 123456789. "integer literal" is a type separate from Int or Int32 or Int8 or whatever. An "integer literal" can be assigned to any integer type (unlike for example an Int value which can only be assigned to an Int).
If you ask "can I treat them as integers", that doesn't make sense. It's a different type. For every type there are rules how it can be used. The rules for Int and "integer literal" are different.

[scala] How do I make a BigDecimal show its exact integer part?

In Scala my BigDecimal is 3.721443204405954385563870541379242E+54, but I would like the result to be 3721443204405954385563870541379246659709506697378694300.
Similarly, my result is:
3.114968783111033211375362915188093E+41
and I would like the result to be:
311496878311103321137536291518809134027240
I do not know the scale, and the result should show only the integer part.
val multimes: (Int, Int) => BigDecimal = (c: Int, begin: Int) => {
  if (c == 1)
    BigDecimal.apply(begin)
  else
    multimes(c - 1, begin) * (c + begin - 1)
}

def mulCount(c: Int): BigDecimal = {
  val upper = multimes(c, c + 1)
  val down = multimes(c, 2)
  upper / down
}
The number is the result of the function mulCount.
The BigDecimal class has a number of nonintuitive behaviors in Scala 2.10. This will get better in 2.11, but I can't quite tell from your example whether it will fix what you want. Probably not; Scala has a default MathContext which keeps about 128 bits of information (~34 decimal digits), and I think that's what you're running into here.
If you don't have a decimal problem--and here you don't--then the easiest thing to do is just use BigInt instead. It will scale to however many digits you actually need.
If you must express this as a decimal problem, you should explicitly supply a MathContext that has enough digits:
if (c==1) BigDecimal.apply(begin, new java.math.MathContext(60))
and that MathContext will, if always used on the left-hand side of operations, propagate through to your result.
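As a small illustration of that propagation (the 60-digit context and the factorial product are arbitrary choices for this sketch):
import java.math.MathContext

// With the default ~34-digit context, a large product is rounded and
// printed in scientific notation.
val rounded = (2 to 40).foldLeft(BigDecimal(1))((acc, n) => acc * BigDecimal(n))
println(rounded)               // something like 8.15915283247897...E+47

// Starting the fold from a BigDecimal carrying a wider MathContext, the
// same product keeps every digit, because each multiplication uses the
// left operand's context.
val mc = new MathContext(60)
val exact = (2 to 40).foldLeft(BigDecimal(1, mc))((acc, n) => acc * BigDecimal(n))
println(exact)                 // 815915283247897734345611269596115894272000000000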
It sounds as though you're mostly concerned about the appearance of the number, and don't want to see it in scientific notation with the exponent.
This is the default behaviour when simply printing a BigDecimal and it can't be overridden. But you can explicitly convert the value to a String before printing.
This should do the trick:
val bd: BigDecimal = ...
println(bd.bigDecimal.toPlainString)
That said... Why not just use BigInt?
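For reference, a BigInt version of the computation from the question might look like the sketch below; the names mulTimesInt and mulCountInt are just made up here, and the final division happens to be exact for these inputs (the quotient is a Catalan number), so nothing is lost.
// Same recursion as multimes/mulCount above, but with BigInt, so no
// significant digits are ever dropped.
def mulTimesInt(c: Int, begin: Int): BigInt =
  if (c == 1) BigInt(begin)
  else mulTimesInt(c - 1, begin) * (c + begin - 1)

def mulCountInt(c: Int): BigInt =
  mulTimesInt(c, c + 1) / mulTimesInt(c, 2)

println(mulCountInt(5))   // 42, printed with every digit and no exponent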

atoi() is not converting properly

I was trying to call atoi on the strings 509951644 and 4099516441. The first one got converted without any problem. The second one is giving me the decimal value 2,147,483,647 (0x7FFFFFFF). Why is this happening?
Your second integer is creating an overflow. The maximum 32-bit signed integer is 2147483647.
It's generally not recommended to use atoi anyway; use strtol instead, which actually tells you if your value is out of range. (The behavior of atoi is undefined when the input is out of range; yours seems to be simply spitting out the maximum int value.)
You could also check whether your compiler has something like an atoi64 function, which would let you work with 64-bit values.
2147483647 is the maximum value of a signed int in C (on platforms where int is 32 bits). It is giving the largest value it can; the original number is too large to convert to a signed int. I suggest looking up how to convert to an unsigned int instead.
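For completeness, a sketch of the strtol approach mentioned above (the error handling shown is one common pattern, not the only one):
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *input = "4099516441";
    char *end;

    errno = 0;
    long value = strtol(input, &end, 10);  /* parses into a long, reporting range errors */

    if (end == input) {
        printf("no digits were parsed\n");
    } else if (errno == ERANGE || value > INT_MAX || value < INT_MIN) {
        printf("%s is out of range for int\n", input);
    } else {
        printf("parsed %d\n", (int)value);
    }
    return 0;
}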