Scala newbie here
Trying things out:
(1).+(2) returns an Int value of 3, so far so good
but
1.+(2) returns a Double value of 3.0.
But if you do
1 . +(2) it returns an Int value of 3.
Note: The only difference between this and the above is the space after the "1"
Do spaces matter in Scala? I'm more curious as to how 1.+(2) returned a Double, as it looks like it parsed 1. as a Double and then added 2 to it.
1.+(2) is calling the + method on the Double "1.". This is a carry-over from Java syntax, where "1." is equivalent to 1.0.
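A quick REPL check on an older Scala (2.10.x, where that parse is still accepted; a sketch, your exact output may differ) makes the difference visible:

scala> 1.+(2)   // "1." is read as a Double literal, then +(2) is applied
res0: Double = 3.0

scala> 1 .+(2)  // the space forces 1 to be read as an Int
res1: Int = 3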
Related
I have the following Scala snippet:
someLong.formatted("%016x")
As a result I receive the hex String.
However, I had to upgrade Scala version and now this line throws the following error:
method formatted in class StringFormat is deprecated (since 2.12.16): Use `formatString.format(value)` instead of `value.formatted(formatString)`, or use the `f""` string interpolator. In Java 15 and later, `formatted` resolves to the new method in String which has reversed parameters.
When I swap formatString with value, as the hint suggests, I get a type mismatch.
How can I make it valid, either with the swap or the f interpolator?
The f interpolator is a good replacement and allows other text to be added easily:
f"$someLong%016x"
f"The result is 0x$someLong%016x"
I am trying to initialize a Float or Double with the result of an integer bit shifting operation. The passed parameter is an integer literal, shifted by an unsigned byte. As far as I understand Swift's type inference, that parameter should be of type Int. However, the resulting floating point value is 0.0. Oddly, the issue is gone as soon as I put the parameter expression in brackets.
let someByte = UInt8(16)
print(Double(1 << someByte)) //Prints "0.0" ?!
print(Double((1 << someByte))) //Prints "65536.0"
This looks like a bug in the compiler. As @Hamish said, the latest master has this problem fixed; I can confirm that, as I have the toolchains for both Swift 4.2 and Swift 5.0 installed:
with the Swift 4.2 toolchain the behaviour is as you described: the first print outputs 0.0, while the second one outputs 65536.0
while with the latest Swift 5.0 toolchain, both calls print 65536.0
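If you are stuck on the older toolchain, one workaround (just a sketch, not verified against your exact compiler version) is to pin the shift result to Int explicitly so the literal isn't mis-inferred:

let someByte = UInt8(16)
let shifted: Int = 1 << someByte  // force the literal 1 to be an Int
print(Double(shifted))            // 65536.0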
I'm just beginning Scala, coming from Java.
So I know that in Scala, all things are objects, and Scala matches the longest token (source: http://www.scala-lang.org/docu/files/ScalaTutorial.pdf), so if I understand correctly:
var b = 1.+(2)
then b is a Double plus an Int, which in Java would be a Double.
But when I check its type via println(b.isInstanceOf[Int]) I see that it is an Int. Why is it not a Double like in Java?
According to the specification:
1. is not a valid floating point literal because the mandatory digit after the . is missing.
I believe it was done like that exactly so that expressions like 1.+(2) are parsed as the integer 1, a method call (.), the method name +, and the method argument (2).
The compiler would treat 1 and 2 as Ints by default. You could force either one of these to be a Double using 1.toDouble, and the result (b) would be a Double.
By the way, did you mean to write 1.0 + 2, in which case b would be a Double?
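For example, in the REPL (a quick sketch):

val a = 1 + 2           // Int = 3
val b = 1.toDouble + 2  // Double = 3.0
val c = 1.0 + 2         // Double = 3.0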
I'm pretty new in Swift and I was wondering what is the difference between this (that compiles successfully, and returns "A"):
var label = "Apoel"
label[label.startIndex]
and the following, for which the compiler is complaining:
label[0]
I know that label is not an array of chars like in C, but using the first approach means that string manipulation in Swift is similar to that of C.
Also, I understand that the word finishes with something like C's "\0", because
label[label.endIndex]
gives an empty character while
label[label.endIndex.predecessor()]
returns "l" which is the last letter of the String.
startIndex is of type Index, which is a struct and not a simple integer.
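To subscript by position you first convert an integer offset into an Index. A sketch in current Swift (where predecessor() has since been replaced by index(before:)):

let label = "Apoel"
let first = label[label.startIndex]                            // "A"
let second = label[label.index(label.startIndex, offsetBy: 1)] // "p"
let last = label[label.index(before: label.endIndex)]          // "l"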
Hello, I have a problem with my Swift code.
In my application some SKLabelNodes are supposed to have their y coordinate set to
CGRectGetMaxY(self.frame) - (nodes[i].frame.size.height / 2 + 30) * (i + 4)
Where
var i:Int = 0
is a counter.
It works perfectly fine if instead of (i + 4) I just give it a literal value, e.g. 5, or even (i == 0 ? 4 : 5) (just to check on two consecutive integers whether the formula itself is correct).
But when I go with any variable or constant or expression containing one, it displays the error "CGFloat is not convertible to Int". It seems completely illogical, because 4 is an integer and so are i and even (i + 4), in which case changing 4 to i shouldn't change the whole expression's type.
Can anyone explain why I get this error and how I can solve it?
Thanks in advance.
You have already explained and solved the matter perfectly. It's simply a matter of you accepting your own excellent explanation:
It works perfectly fine if ... I just give it a literal value ... But when I go with any variable or constant or expression containing one, it displays an error "CGFloat is not convertible to Int".
Correct. Numeric literals can be automatically coerced to another type, such as CGFloat. But variables cannot; you must coerce them, explicitly. To do so, initialize a new number, of the correct type, based on the old number.
So, this works automagically:
let f1 : CGFloat = 1.0
let f2 = f1 + 1
But here, you have to coerce:
let f1 : CGFloat = 1.0
let f2 = 1
let f3 = f1 + CGFloat(f2)
It's a pain, but it keeps you honest, I guess. Personally I agree with your frustration, as my discussion here will show: Swift numerics and CGFloat (CGPoint, CGRect, etc.). It is hard to believe that a modern language is expected to work with such clumsy numeric handling, especially when we are forced against our will to bang into the Core Graphics CGFloat type so frequently in real-life Cocoa programming. But that, it seems, is the deal. I keep hoping that Swift will change in this regard, but it hasn't happened so far. And there is a legitimate counterargument that implicit coercion, which is what we were doing in Objective-C, is dangerous (and indeed, things like the coming of 64-bit have already exposed the danger).
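Applied to the expression from the question, that means wrapping the integer subexpression in a CGFloat initializer (a sketch using the question's own variables):

CGRectGetMaxY(self.frame) - (nodes[i].frame.size.height / 2 + 30) * CGFloat(i + 4)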