Type inference issue with Swift's Double initializer for shifted integer parameters - swift

I am trying to initialize a Float or Double with the result of an integer bit shifting operation. The passed parameter is an integer literal, shifted by an unsigned byte. As far as I understand Swift's type inference, that parameter should be of type Int. However, the resulting floating point value is 0.0. Oddly, the issue is gone as soon as I put the parameter expression in brackets.
let someByte = UInt8(16)
print(Double(1 << someByte)) //Prints "0.0" ?!
print(Double((1 << someByte))) //Prints "65536.0"

This looks like a compiler bug. As @Hamish said, the problem is fixed on the latest master; I can confirm that, since I have both the Swift 4.2 and Swift 5.0 toolchains installed:
- with the Swift 4.2 toolchain the behaviour is as you described: the first print outputs 0.0, while the second one outputs 65536.0
- with the latest Swift 5.0 toolchain, both calls print 65536.0
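If you are stuck on the older toolchain, a possible workaround (a sketch of my own; the explicit-type approach is an assumption, not from the original answer) is to give the shift expression an explicit Int type before converting, in addition to the parentheses trick from the question:

let someByte = UInt8(16)

// Bind the shift result to an explicitly typed constant first...
let shifted: Int = 1 << someByte
print(Double(shifted))            // 65536.0

// ...or spell out the literal's type inside the call.
print(Double(Int(1) << someByte)) // 65536.0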

Related

Swift String.Index variable doesn't have enough bits

I'm trying to work with some Swift strings. I know that Swift won't take text[0], since 0 is not a String.Index.
However, Xcode is throwing me an error on this code:
let text = "Word"
let i: String.Index = text.startIndex
Xcode says:
i does not have enough bits to represent the passed value
...when it should work, since i is declared with the correct type.
What is going on?
Thanks in advance!
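For context (not from the original thread): a minimal sketch of how subscripting a String by position typically looks in Swift 3 and later; the error message above suggests an older toolchain, so details may differ:

let text = "Word"

// String is subscripted with String.Index values, not Ints.
let first = text[text.startIndex]                          // "W"
let third = text[text.index(text.startIndex, offsetBy: 2)] // "r"
print(first, third)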

Scala implicit type casting

I'm just beginning Scala, coming from Java.
So I know that in Scala, all things are objects, and Scala matches the longest token (source: http://www.scala-lang.org/docu/files/ScalaTutorial.pdf), so if I understand correctly:
var b = 1.+(2)
then b is a Double plus an Int, which in Java would be a Double.
But when I check its type via println(b.isInstanceOf[Int]) I see that it is an Int. Why is it not a Double like in Java?
According to the specification:
1. is not a valid floating point literal because the mandatory digit after the . is missing.
I believe it's done like that, exactly because expressions like 1.+(2) should be parsed as an integer 1, method call ., method name + and method argument (2).
The compiler would treat 1 and 2 as Ints by default. You could force either one of them to be a Double using 1.toDouble, and the result (b) would then be a Double.
Btw, did you mean to write 1.0 + 2? In that case b would be a Double.

UnsafeMutablePointer to expected argument type UnsafeMutablePointer<_> in Swift 3

In the main.swift file, we have a call to our receipt checking system (generated by Receigen). In Swift 2, main.swift read:
startup(Process.argc, UnsafeMutablePointer<UnsafePointer<Int8>>(Process.unsafeArgv))
After upgrading to Swift 3, I've got as far as:
startup(CommandLine.argc, UnsafeMutablePointer<UnsafePointer<Int8>>(CommandLine.unsafeArgv))
which shows the error:
Cannot convert value of type UnsafeMutablePointer<UnsafeMutablePointer<Int8>?> (aka UnsafeMutablePointer<Optional<UnsafeMutablePointer<Int8>>>) to expected argument type UnsafeMutablePointer<_>
Update: Following the linked question, so that the call now reads:
startup(CommandLine.argc, UnsafeMutableRawPointer(CommandLine.unsafeArgv)
    .bindMemory(
        to: UnsafeMutablePointer<Int8>.self,
        capacity: Int(CommandLine.argc)))
Produces:
Cannot convert value of type UnsafeMutablePointer<Int8>.Type to expected argument type UnsafePointer<Int8>?.Type (aka Optional<UnsafePointer<Int8>>.Type)
Here the compiler is referring to the to: UnsafeMutablePointer<Int8>.self argument.
The header for startup looks like:
int startup(int argc, const char * argv[]);
How can I successfully pass the variables to startup in main.swift?
Basically, this is a variant on the problem discussed here:
Xcode 8 beta 6: main.swift won't compile
The problem is that you have an impedance mismatch between the type of CommandLine.unsafeArgv and the type expected by your C function. And you can no longer cast away this mismatch merely by coercing from one mutable pointer type to another. Instead, you have to pivot (as it were) from one type to another by calling bindMemory. And the error message, demanding a Optional<UnsafePointer<Int8>>.Type, tells you what type to pivot to:
startup(
    CommandLine.argc,
    UnsafeMutableRawPointer(CommandLine.unsafeArgv)
        .bindMemory(
            to: Optional<UnsafePointer<Int8>>.self,
            capacity: Int(CommandLine.argc))
)
That should allow you to compile. Testing on my machine with a stub of startup, it actually runs. But whether it will run on your machine, and whether it is safe, is anybody's guess! This stuff is undeniably maddening...
EDIT: The problem with CommandLine.unsafeArgv is fixed in iOS 12 / Xcode 10, so it may be that this problem is fixed too.
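For context, a C declaration like int startup(int argc, const char * argv[]); is typically imported into Swift 3 roughly as follows (a sketch based on general Clang importing rules, not taken from Receigen's actual generated header), which is why the pivot target has to be an optional pointer type:

// Rough shape of the imported declaration; the real one comes from the bridging header.
func startup(_ argc: Int32,
             _ argv: UnsafeMutablePointer<UnsafePointer<Int8>?>!) -> Int32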

Int extension not applied to raw negative values

My extensions to the Int type do not work for raw, negative values. I can work around it but the failure seems to be a type inference problem. Why is this not working as expected?
I first encountered this in my app project, but I have recreated a simple form of it here in a Playground. I am using the latest version of Xcode, Version 6.2 (6C107a).
That's because - is interpreted as the minus operator applied to the integer 2, and not as part of the -2 numeric literal.
To prove that, just try this:
-(1.foo())
which generates the same error
Could not find member 'foo'
The message is probably misleading, because the error is about trying to apply the minus operator to the return value of the foo method.
I don't know if that is an intentional behavior or not, but it's how it works :)
This is likely a compiler bug (report on radar if you like). Use brackets:
println((-2).foo())
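Since the question does not show the extension itself, here is a minimal reconstruction of the situation; the body of foo is an assumption, and println reflects the Swift 1.x / Xcode 6.2 era of the question (use print on Swift 2 and later):

extension Int {
    func foo() -> Int {
        return self * self
    }
}

let a = 2.foo()     // 4 - works
// -2.foo() was rejected by the Xcode 6.2 compiler ("Could not find member 'foo'");
// newer compilers parse it as -(2.foo()), i.e. -4.
let b = (-2).foo()  // 4 - the brackets make foo() apply to the literal -2
println(b)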

How can I use a float as an argument in LLDB?

I'm debugging a UIProgressView. Specifically I'm calling -setProgress: and -setProgress:animated:.
When I call it in LLDB using:
p (void) [_progressView setProgress:(float)0.5f]
the progressView ends up with a progress value of 0. Apparently, LLDB doesn't parse the float value correctly.
Any idea how I can get float arguments being parsed correctly by LLDB?
Btw, I'm experiencing the same problem in GDB.
In Xcode 4.5 and before, this was a common problem. If I remember correctly, what was really happening there was that the old C no-prototype type promotion rules were in effect. If you were passing a floating point value, it had type double. If you were passing an integral value, it was passed as int, that kind of thing. If you wrote (float)0.8f, lldb would take those 4 bytes of (float) and pass them to something that reads 8 bytes and interprets it as a double.
In Xcode 4.6, lldb will fetch the argument types from the Objective-C runtime, if it can, so it knows that the argument is really taking a float here. You shouldn't even need the (float) cast.
My guess is that when you give lldb a raw pointer to an object, as in p (void) [0x260da630 setProgress:...], the expression parser isn't looking at the object's isa to get the class and extract the argument types from it. As soon as you added a cast to the object address, it got the types.
I think when you wrote setProgress:(float)0.8f for gdb, it would take this as a special indication that this argument is a float type -- in essence, you were providing the prototype. It's something that I think lldb's expression parser should do some time in the future, but the fact that clang is used to do all the expression parsing means that it's a little harder to shoehorn these non-standard meanings into it. (there are already a few, of course, e.g. p $r0 works ;)
Found the problem. In reality my LLDB command looked slightly different:
p (void) [0x260da630 setProgress:(float)0.8f animated:NO]
where 0x260da630 is my UIProgressView. Apparently, the debugger really needs to know the exact type of the receiving object and doesn't honor the cast of the argument, so
p (void) [(UIProgressView*)0x260da630 setProgress:(float)0.8f animated:NO]
works. (Even casting to id wasn't sufficient!)
Thanks for your comments, Martin R and Martin Ullrich, and apologies for having simplified my question for readability in a way that hid the actual problem!
Btw, I swear, I had used the property instead of the address as well. But perhaps restarting Xcode also helped…