I don't quite understand what new functionality the as! operator is supposed to add.
Apple's documentation says:
The as! operator performs a forced cast of the expression to the specified type. The as! operator returns a value of the specified type, not an optional type. If the cast fails, a runtime error is raised. The behavior of x as! T is the same as the behavior of (x as? T)!.
It seems that using as! only makes sense when you know that the downcast will be successful, because otherwise a runtime error will be triggered. But when I know that the downcast will be successful, I just use the as operator. And since neither as! nor as returns an optional, they even return the exact same type.
Maybe I'm missing something, but what's the point?
The as (no bang) operator is for casts that are guaranteed by the language to succeed — primarily, that means casts from a subtype to one of its ancestor types.
The as! and as? operators are for casts that the language can't guarantee will succeed. This comes into play when you have a general type and want to treat it as a more specific subtype. The compiler doesn't know which subtype might really be hiding behind the general type — that information can be decided at run time — so as far as the language knows, there's a possibility of the cast failing.
The difference between as? and as! is how that failure is handled. With the former, the result of the cast gets wrapped in an optional, forcing you to test for the failure and allowing you to handle it in a way that makes sense to your app. The latter assumes the cast will succeed, so you aren't forced to check for failure; you'll just crash if your assumption turns out to be wrong.
So why use as!? It fills the same role as the Implicitly Unwrapped Optional: you can use it (at your own risk) for situations where some behavior is "guaranteed" by some factor external to the language (even if the language itself can't guarantee it), allowing you to skip having to write code to test for failures you don't expect to happen. One of the most common cases of such is when you're depending on the runtime behavior of another class — the language doesn't say that the subviews array of a UIView must contain other UIViews, but its documentation assures you that it will.
Before Swift 1.2, there was no as!; the as operator covered both cases that could crash and cases that couldn't. The new operator requires you to be clear about whether you're doing something that will always succeed, something you expect to always succeed, or something you plan to test for failure.
There are cases where you as a programmer can know that as would succeed, but the compiler can't be sure of it. In such cases, as is a compile-time error, but as! will compile.
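A minimal sketch of all three operators (the class names here are invented for illustration):

class Animal {}
class Dog: Animal {}

let pet: Animal = Dog()          // static type Animal, dynamic type Dog

let upcast = Dog() as Animal     // guaranteed upcast; always compiles and succeeds
let maybeDog = pet as? Dog       // type Dog?: nil if the cast fails at run time
let surelyDog = pet as! Dog      // type Dog: crashes if pet weren't really a Dog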
Related
I've read more than a few answers to similar questions as well as a few tutorials, but none address my main confusion. I'm a native Java coder, but I've programmed in Swift as well.
Why would I ever want to use optionals instead of nulls?
I've read that it's so there are fewer null checks and errors, but these are either necessary or easily avoided with clean programming.
I've also read it's so all references succeed (https://softwareengineering.stackexchange.com/a/309137/227611 and val length = text?.length). But I'd argue this is a bad thing or a misnomer. If I call the length function, I expect it to contain a length. If it doesn't, the code should deal with it right there, not continue on.
What am I missing?
Optionals provide clarity of type. An Int stores an actual value - always, whereas an Optional Int (i.e. Int?) stores either the value of an Int or a nil. This explicit "dual" type, so to speak, allows you to craft a simple function that can clearly declare what it will accept and return. If your function is to simply accept an actual Int and return an actual Int, then great.
func foo(x: Int) -> Int
But if your function wants to allow the return value to be nil, and the parameter to be nil, it must do so by explicitly making them optional:
func foo(x: Int?) -> Int?
In other languages such as Objective-C, any object reference can be nil instead. Pointers in C++ can be null, too. And so any object you receive in Obj-C or any pointer you receive in C++ ought to be checked for nil, just in case it's not what your code was expecting (a real object or pointer).
In Swift, the point is that you can declare object types that are non-optional, and thus whatever code you hand those objects to doesn't need to do any checks. It can just safely use those objects and know they are non-nil. That's part of the power of Swift optionals. And if you receive an optional, you must explicitly unwrap it when you need to access its value. Those who code in Swift try to make their functions and properties non-optional whenever they can, unless they truly have a reason for making them optional.
The other beautiful thing about Swift optionals is all the built-in language constructs for dealing with them, which make the code faster to write, cleaner to read, and more compact, taking a lot of the hassle out of checking and unpacking an optional compared with the equivalent you'd have to write in other languages.
The nil-coalescing operator (??) is a great example, as are if-let and guard and many others.
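For instance, a minimal sketch of those constructs (the function names are made up):

// nil-coalescing: substitute a default when the optional is nil
func displayName(for name: String?) -> String {
    return name ?? "Anonymous"
}

// if-let: run a branch only when the optional holds a value
func printIfPresent(_ name: String?) {
    if let unwrapped = name {
        print("Hello, \(unwrapped)")
    }
}

// guard: bail out early unless the optional can be unwrapped
func requireName(_ name: String?) -> String {
    guard let unwrapped = name else {
        return "Anonymous"
    }
    return unwrapped
}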
In summary, optionals encourage and enforce more explicit type-checking in your code: type-checking that's done by the compiler rather than at runtime. Sure, you can write "clean" code in any language, but it's just a lot simpler and more automatic to do so in Swift, thanks in big part to its optionals (and its non-optionals too!).
Avoids errors at compile time, so that you don't unintentionally pass nils.
In Java, any variable can be null, so it becomes a ritual to check for null before using it. In Swift, only an optional can be nil, so you only have to check optionals for a possible nil value.
You don't always have to check an optional, either: you can work equally well on optionals without unwrapping them. Calling a method on an optional that holds nil, via optional chaining, does not break the code (see the sketch below).
There can be more but those are the ones that help a lot.
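A minimal sketch of that last point, with an invented User type:

struct User {
    var middleName: String?
}

let user = User(middleName: nil)

// Optional chaining: the whole expression becomes nil instead of crashing.
let initial = user.middleName?.first    // type Character?; nil here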
TL/DR: The null checks that you say can be avoided with clean programming can also be avoided in a much more rigorous way by the compiler. And the null checks that you say are necessary can be enforced in a much more rigorous way by the compiler. Optionals are the type construct that make that possible.
var length = text?.length
This is actually a good example of one way that optionals are useful. If text doesn't have a value, then it can't have a length either. In Objective-C, if text is nil, then any message you send it does nothing and returns 0. That fact was sometimes useful and it made it possible to skip a lot of nil checking, but it could also lead to subtle errors.
On the other hand, many other languages point out the fact that you've sent a message to a nil pointer by helpfully crashing immediately when that code executes. That makes it a little easier to pinpoint the problem during development, but run time errors aren't so great when they happen to users.
Swift takes a different approach: if text doesn't point to something that has a length, then there is no length. An optional isn't a pointer, it's a kind of type that either has a value or doesn't have a value. You might assume that the variable length is an Int, but it's actually an Int?, which is a completely different type.
If I call the length function, I expect it to contain a length. If it doesn't, the code should deal with it right there, not continue on.
If text is nil then there is no object to send the length message to, so length never even gets called and the result is nil. Sometimes that's fine: it makes sense that if there's no text, there can't be a length either. You may not care about that; if you were preparing to draw the characters in text, then the fact that there's no length won't bother you, because there's nothing to draw anyway. The optional status of both text and length forces you to deal with the possibility that those variables lack values at the point where you need the values.
Let's look at a slightly more concrete version:
var text : String? = "foo"
var length : Int? = text?.count
Here, text has a value, so length also gets a value, but length is still an optional, so at some point in the future you'll have to check that a value exists before you use it.
var text : String? = nil
var length : Int? = text?.count
In the example above, text is nil, so length also gets nil. Again, you have to deal with the fact that both text and length might not have values before you try to use those values.
var text : String? = "foo"
var length : Int = text.count
Guess what happens here? The compiler says Oh no you don't! because text is an optional, which means that any value you get from it must also be optional. But the code specifies length as a non-optional Int. Having the compiler point out this mistake at compile time is so much nicer than having a user point it out much later.
var text : String? = "foo"
var length : Int = text!.count
Here, the ! tells the compiler that you think you know what you're doing. After all, you just assigned an actual value to text, so it's pretty safe to assume that text is not nil. You might write code like this because you want to allow for the fact that text might later become nil. Don't force-unwrap optionals if you don't know for certain, because...
var text : String? = nil
var length : Int = text!.count
...if text is nil, then you've betrayed the compiler's trust, and you deserve the run time error that you (and your users) get:
error: Execution was interrupted, reason: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)
Now, if text is not optional, then life is pretty simple:
var text : String = "foo"
var length : Int = text.count
In this case, you know that text and length are both safe to use without any checking because they cannot possibly be nil. You don't have to be careful to be "clean" -- you literally can't assign anything that's not a valid String to text, and every String has a count, so length will get a value.
Why would I ever want to use optionals instead of nulls?
Back in the old days of Objective-C, we used to manage memory manually. There was a small number of simple rules, and if you followed the rules rigorously, then Objective-C's retain counting system worked very well. But even the best of us would occasionally slip up, and sometimes complex situations arose in which it was hard to know exactly what to do. A huge portion of Objective-C questions on StackOverflow and other forums related to the memory management rules. Then Apple introduced ARC (Automatic Reference Counting), in which the compiler took over responsibility for retaining and releasing objects, and memory management became much simpler. I'll bet fewer than 1% of Objective-C and Swift questions here on SO relate to memory management now.
Optionals are like that: they shift responsibility for keeping track of whether a variable has, doesn't have, or can't possibly not have a value from the programmer to the compiler.
While learning Swift, I have seen many cases of safe code vs. unsafe code. A perfect example of this is
as? and as!
as?, from my understanding, tries to type cast, but returns nil if the cast fails.
as!, from my understanding, tries to type cast, and crashes if it fails.
My question is: why does as! (and other unsafe syntax with safe counterparts, for that matter) exist? Obviously as? is the better choice. The only answer I could come up with is that maybe using as! is faster for the compiler, and therefore is mainly an optimization piece of syntax.
I'm just curious to know if I'm right about safe syntax being a bit slower to compile, since performance matters to me a great deal.
Thank you for your time.
This question is based upon a false premise: The primary choice of as? vs as! is not generally one of performance. Sure, as! potentially might be a bit more efficient, but in an optimized build, the difference is generally negligible.
One chooses as! vs as? based upon the context:
If the cast can possibly fail at runtime, then you'd use as?. Of course, you'd also add code to check whether the cast succeeded or not.
A common example of a cast that could fail would be whenever the source of the data is some external source (e.g. user input or a network request). A user could paste text values into a field in which you were expecting only numeric values. An external web server could be unavailable or it could even return data in an unexpected format. You'd want to use as? to detect these situations and handle it gracefully.
But if the cast cannot possibly fail at runtime, you use as!. You do so less for performance reasons than for reasons of clarity and concise code.
A common example of a cast that cannot fail is retrieving a cell prototype from a storyboard, where you know the base type was set in Interface Builder, though the Cocoa API cannot know this. It would be inefficient to add a lot of code checking whether the base type was set correctly (especially since this would be a fatal programming error anyway), and it leads to code with a lot of syntactic noise and no benefit. And if you don't do it correctly, you could easily suppress an error that you really would want to flush out during the testing process, making the problem harder to find. This is a good use case for the as! syntax (see the sketch below).
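A sketch of that storyboard case, assuming a prototype cell whose class and identifier ("Cell", CustomCell) were configured in Interface Builder:

import UIKit

class CustomCell: UITableViewCell {}

class DataSource: NSObject, UITableViewDataSource {
    func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return 1
    }

    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        // The prototype's class was set in Interface Builder, so a failed
        // cast is a fatal programming error; crashing early is the point.
        return tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath) as! CustomCell
    }
}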
as! is not unsafe unless your code before it has failed to ensure the cast can't fail. Imagine a situation where your code has already verified the object can indeed be cast to a particular protocol. If for whatever reason you find it convenient to not create a new local variable of that type and cast the first variable to it, then using as! can be appropriate.
The same logic goes for implicitly unwrapped optionals. Since an IBOutlet property can't be assigned yet when your view controller is instantiated (the storyboard hasn't loaded yet), the property has to be optional. But it's irksome to have to treat the property as an optional throughout the class when you know the storyboard has loaded. In fact, if there is a problem locating the storyboard element and assigning it, a 100% guaranteed crash is desired, so that the developer can always spot and fix the problem. A sketch follows.
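For example (the names here are illustrative):

import UIKit

class ProfileViewController: UIViewController {
    // Assigned by the storyboard after init but before viewDidLoad, so an
    // implicitly unwrapped optional spares us unwrapping at every use site.
    @IBOutlet var nameLabel: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()
        nameLabel.text = "Hello"   // crashes loudly if the outlet isn't wired
    }
}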
So returning to as!, this raises a second case of when it's appropriate to use: whenever a crash is the most desirable outcome of the failed cast.
So it's not about compiler speed. It's about code logic convenience. There are appropriate times to use it, and for those times, it's a more compact and easier to read way of representing your logic goals.
As you say, as! should hardly ever be used. There are a few cases, though, where due to legacy issues, the compiler doesn't know what type something should be, even though the developer does. One example of this is the NSCopying protocol. Because Objective-C was not as strict about type safety as Swift, copy() returns an id, which is bridged to Any in Swift. However, when you use copy(), mutableCopy(), and friends, you usually know exactly what type you will get out of it, so as! is typically used to force a cast to the correct type.
let aString: NSAttributedString = ...
let mutableAString = aString.mutableCopy() as! NSMutableAttributedString
You can investigate this question for yourself by compiling something like
func f(a: Any) -> Int {
    return (a as! Int) + 5
}
vs.
func f(a: Any) -> Int {
    return ((a as? Int) ?? 3) + 5
}
With the Swift 4 compiler, running swiftc -O -emit-assembly on this Swift code produces extremely similar output, which should square with your understanding of how as! works and how we would implement it if it wasn't part of the language:
func forceCast<T>(_ subject: Any) -> T {
    guard let result = subject as? T else {
        fatalError("Could not cast \(subject) to \(T.self)")
    }
    return result
}
It's worth noting, as SmartCat already alluded to, that as! isn't actually "unsafe" in the sense that the Swift compiler authors define that word, because it, like as?, actually checks the type of subject. There is an unsafe-in-the-Swift-sense force cast in Swift; it's called, unsurprisingly, unsafeBitCast, and it is equivalent to a C/Objective-C cast: it doesn't check types and just assumes you knew what you were doing. Will it be faster than as? and as!? Without a doubt.
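For illustration, a tiny sketch of unsafeBitCast:

// Reinterprets the bits of a UInt32 as a Float with no type check at all.
// Only valid because both types are exactly 4 bytes wide.
let bits: UInt32 = 0x3F80_0000
let one = unsafeBitCast(bits, to: Float.self)   // 1.0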
So, to answer your title's question: yes, safety has a cost in Swift. The flip side of the coin is that if you write code that avoids casts and optionality, Swift can (in theory) optimize your code far more aggressively than languages that make less strong safety guarantees.
To answer your actual question; there's no difference between as? and as! in terms of performance.
Others have already covered why as! exists; it's so you can tell the compiler "I can't explain why to you, but this will always work; trust me".
Consider the following lookup function that I wrote, which uses optionals and optional binding, and reports a message if the key is not found in the dictionary:
func lookUp<T: Equatable>(key: T, dictionary: [T: T]) -> T? {
    for i in dictionary.keys {
        if i == key {
            return dictionary[i]
        }
    }
    return nil
}
let dict = ["JO":"Jordan",
"UAE":"United Arab Emirates",
"USA":"United States Of America"
]
if let a = lookUp("JO", dictionary: dict) {
    print(a) // prints Jordan
} else {
    print("cant find value")
}
I have rewritten the code, but this time using error handling, a guard statement, removing the -> T? return type, and writing an enum that conforms to ErrorType:
enum lookUpErrors: ErrorType {
    case noSuchKeyInDictionary
}
func lookUpThrows<T: Equatable>(key: T, dic: [T: T]) throws {
    for i in dic.keys {
        guard i == key else {
            throw lookUpErrors.noSuchKeyInDictionary
        }
        print(dic[i]!)
    }
}
do {
    try lookUpThrows("UAE", dic: dict) // prints United Arab Emirates
}
catch lookUpErrors.noSuchKeyInDictionary {
    print("cant find value")
}
Both functions work well, but:
which function grants better performance?
which function is "safer"?
which function is recommended (based on pros and cons)?
Performance
The two approaches should have comparable performance. Under the hood they are both doing very similar things: returning a value with a flag that is checked, and only if the flag shows the result is valid, proceeding. With optionals, that flag is the enum (.None vs .Some); with throws, that flag is an implicit one that triggers the jump to the catch block.
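Conceptually, Optional itself is little more than this enum (a sketch using the .None/.Some spelling mentioned above):

enum MyOptional<T> {
    case None
    case Some(T)
}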
It's worth noting your two functions don't do the same thing (one returns nil if no key matches, the other throws if the first key doesn’t match).
If performance is critical, then you can write this to run much faster by eliminating the unnecessary key-subscript lookup like so:
func lookUp<T: Equatable>(key: T, dictionary: [T: T]) -> T? {
    for (k, v) in dictionary where k == key {
        return v
    }
    return nil
}
and
func lookUpThrows<T: Equatable>(key: T, dic: [T: T]) throws -> T {
    for (k, v) in dic where k == key {
        return v
    }
    throw lookUpErrors.noSuchKeyInDictionary
}
If you benchmark both of these with valid values in a tight loop, they perform identically. If you benchmark them with invalid values, the optional version performs about twice the speed, so presumably actually throwing has a small bit of overhead. But probably not anything noticeable unless you really are calling this function in a very tight loop and anticipating a lot of failures.
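If you want to check this yourself, a rough sketch of such a benchmark (Swift 2-era syntax to match the code above; the loop count is arbitrary and this is not a rigorous microbenchmark):

import Foundation

let start = NSDate()
for _ in 0..<1_000_000 {
    _ = lookUp("XX", dictionary: dict)        // always misses, returns nil
}
print("optional: \(NSDate().timeIntervalSinceDate(start))s")

let startThrows = NSDate()
for _ in 0..<1_000_000 {
    _ = try? lookUpThrows("XX", dic: dict)    // always throws
}
print("throws:   \(NSDate().timeIntervalSinceDate(startThrows))s")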
Which is safer?
They're both identically safe. In neither case can you call the function and then accidentally use an invalid result. The compiler forces you to either unwrap the optional, or catch the error.
In both cases, you can bypass the safety checks:
// force-unwrap the optional
let name = lookUp("JO", dictionary: dict)!
// force-ignore the throw
let name = try! lookUpThrows("JO", dic: dict)
It really comes down to which style of forcing the caller to handle possible failure is preferable.
Which function is recommended?
While this is more subjective, I think the answer’s pretty clear. You should use the optional one and not the throwing one.
For language style guidance, we need only look at the standard library. Dictionary already has a key-based lookup (which this function duplicates), and it returns an optional.
The big reason optional is a better choice is that in this function, there is only one thing that can go wrong. When nil is returned, it is for one reason only and that is that the key is not present in the dictionary. There is no circumstance where the function needs to indicate which reason of several that it threw, and the reason nil is returned should be completely obvious to the caller.
If on the other hand there were multiple reasons, and maybe the function needs to return an explanation (for example, a function that does a network call, that might fail because of network failure, or because of corrupt data), then an error classifying the failure and maybe including some error text would be a better choice.
The other reason optional is better in this case is that failure might even be expected/common. Errors are more for unusual/unexpected failures. The benefit of returning an optional is that it's very easy to use other optional features to handle it, for example optional chaining (lookUp("JO", dictionary: dict)?.uppercaseString) or defaulting using nil-coalescing (lookUp("JO", dictionary: dict) ?? "Team not found"). By contrast, try/catch is a bit of a pain to set up and use, unless the caller really wants "exceptional" error handling, i.e. is going to do a bunch of stuff, some of which can fail, but wants to collect that failure handling down at the bottom.
AirspeedVelocity already has a great answer, but I think it's worth going a bit further into why to use optionals vs. errors.
There are basically four ways for something to go wrong:
Simple error: it fails in only one way, so you don't need to care about why something went wrong. The problem may come either from programmer logic or user data, so you need to be able to handle it at run time and design around it when coding.
This is the case for things like initializing an Int from a String (either the string is parseable as an integer or it's not) or dictionary-style lookups (either there's a value for the key or there's not). Optionals work really well for this in Swift.
Logic error: this is the kind of error that (in theory) comes up only during development, as a result of Doing It Wrong — for example, indexing beyond the bounds of an array.
In ObjC, NSException covers these kinds of cases. In Swift, we have functions like fatalError. I'd assume that part of why NSException isn't surfaced in Swift is that once your program encounters a logic error, it's not really safe to assume anything about its further operation. Logic errors should either be caught during development or cause a (nicely debuggable) crash rather than letting the program continue in an undefined (and thus unsafe) state.
Universal error: there are loads of ways to fail, but they aren't very connected to programmer logic or user action. You might run out of memory to allocate, get a low-level interrupt, or (wait for it...) overflow the stack, but those can happen with almost anything you do and not really because of any specific thing you do.
You see universal errors getting surfaced as exceptions in some other languages, but that means that you have to code around the possibility of any and every call you make being able to fail. And at that point you're writing more error handling than you are actual code.
Recoverable error: This is for when there are lots of ways to go wrong, but not in ways that preclude further operation, and what a program does upon encountering an error might change depending on what kind of error it is. Filesystems and networking are the common examples here: if you can't load a file, it might be because the user got the name wrong (so you should tell the user that) or because the wifi momentarily dropped and will be back shortly (so you might forego the alert and just try again).
In Cocoa, historically, this is what NSError parameters are for. Swift's error handling makes this pattern part of the language.
So, when you're writing new API (for yourself or someone else to call) in Swift, or using new ObjC annotations to make an existing API easier to use from Swift, think about what kind of errors you're dealing with.
Is there only one clear way to fail that isn't a result of API misuse? Use an Optional return type.
Can something fail only if a client doesn't follow your API contract — say, if you're writing a container class that has a subscript and a count, or requiring that some specific sequence of calls be made? Don't burden every bit of code that uses your API with error handling or optional unwrapping — just fatalError or assert (or throw NSException if your Swift API is a front to ObjC code) and document what the right way is for people to use your API.
Okay, so your ObjC init method returns nil iff [super init] returns nil. Should you mark your initializer as failable for Swift, or add an error out-parameter? Think about when that really happens: if -[NSObject init] is returning nil, it's because you chained it off an alloc call that returned nil. If alloc fails, it's already The End Times for your process, so it's not worth handling that case.
Do you have multiple failure cases, some or all of which might be worth reporting to a user? Or that a client calling your API might want to ignore some but not all of? Write a Swift function that throws and a corresponding set of ErrorType values, or an ObjC method that returns an NSError out-parameter.
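As a sketch of how those guidelines look in code (all names are invented; ErrorType is the Swift 2-era spelling used in this thread):

import Foundation

// Simple error -> Optional: one obvious way to fail.
func parseAge(text: String) -> Int? {
    return Int(text)                 // nil if the string isn't numeric
}

// Logic error -> trap: API misuse should crash during development.
func element(at index: Int, of array: [Int]) -> Int {
    precondition(index >= 0 && index < array.count, "index out of bounds")
    return array[index]
}

// Recoverable error -> throws: several causes the caller may distinguish.
enum LoadError: ErrorType { case NotFound, Corrupt }
func loadFile(named name: String) throws -> NSData {
    throw LoadError.NotFound         // stub; real loading would go here
}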
If you are using Swift 2.0, you could use both versions. The second version uses try/catch (introduced with 2.0) and is thus not backwards compatible, which might be a disadvantage to consider.
If there is any performance difference, it will be negligible.
My personal favorite is the first one, as it is straightforward and easy to read. If I had to maintain the second version, I would ask myself why the author took a try/catch approach for such a simple case, and would more likely be confused than enlightened.
If you had many complex conditions with many exit points (throws), then I would go for the second one. But as I said, that is not the case here.
In Swift, the ! mark is used to extract the wrapped value of an optional, but the value can seemingly be accessed without the !:
var optionalString: String? = "optionalString"
println(optionalString)
another example using the !:
var optionalString: String? = "optionalString"
println(optionalString!)
Both snippets above print the right value, so I wonder what the difference is between using and not using !. Is it just to detect an error at runtime when the optional value is nil, or is it something else? Thanks in advance.
As others have already stated, a really good place to start would be with the official documentation. This topic is extraordinarily well covered by the documentation.
From the official documentation:
Trying to use ! to access a non-existent optional value triggers a runtime error. Always make sure that an optional contains a non-nil value before using ! to force-unwrap its value.
println() is probably not the best way to test how the ! operator works. Without the !, println() will print either the still-wrapped value or nil; with it, it will print either the unwrapped value or crash.
The main difference shows up when we try to assign our optional to another variable or pass it as an argument to a function.
Assume optionalValue is an optional integer.
let actualValue = optionalValue
Using this assignment, actualValue is now simply another optional integer. We haven't unwrapped the value at all. We have an optional rather than an integer.
Meanwhile,
let actualValue = optionalValue!
Now we're forcing the unwrap. actualValue will be an integer rather than an optional integer. However, this code will cause a runtime exception if optionalValue is nil.
Your question is answered in the book "The Swift Programming Language" (available in iBooks).
Download it and search it for "!".
You could've easily found the answer without having to ask here. For future reference, please always remember to look in the manual first.
From "The Swift Programming Language":
You can use an "if" statement to find out whether an optional contains a value. If an optional does have a value, it evaluates to "true"; if it has no value at all, it evaluates to "false".
Once you're sure that the optional does contain a value, you can access its underlying value by adding an exclamation mark (!) to the end of the optional's name. The exclamation mark effectively says, "I know that this optional definitely has a value; please use it." This is known as forced unwrapping of the optional's value.
I'd never heard of Swift before now, but reading the documentation, it seems clear:
"Using the ! operator to unwrap an optional that has a value of nil results in a runtime error."
If, in your application, nil is a valid and expected value under normal circumstances, and/or you want to trap and handle it in your code, then I suggest you don't use !. If you never expect the value to be nil, then use ! as a debugging aid. "Runtime error" suggests to me that your program will be terminated with a message reported by the OS.
In my own code, and on numerous mailing list postings, I've noticed confusion due to Nothing being inferred as the least upper bound of two other types.
The answer may be obvious to you*, but I'm lazy, so I'm asking you*:
Under what conditions is inferring Nothing in this way the most desirable outcome?
Would it make sense to have the compiler throw an error in these cases, or a warning unless overridden by some kind of annotation?
* Plural
Nothing is the subtype of everything, so it is in a certain sense the counterpart of Any, which is the supertype of everything. Nothing can't be instantiated; you'll never hold a Nothing object. There are two situations (that I'm aware of) where Nothing is actually useful:
A function that never returns (in contrast to a function that returns no useful value, which would use Unit instead), which happens with infinite loops, infinite blocking, always throwing an exception, or exiting the application
As a way to specify the type of empty containers, e.g. Nil or None. In Java, you can't have a single Nil object for a generic immutable list without casting or other tricks: if you want to create a List of Dates, even the empty element needs to have the right type, which must be a subtype of Date. As Date and, say, Integer don't share a common subtype in Java, you can't create such a Nil instance without tricks, despite the fact that your Nil doesn't even hold any value. Scala, however, has this common subtype for all objects, so you can define Nil as object Nil extends List[Nothing], and use it to start any List you like.
To your second question: Yes, that would be useful. I'd guess there is already a compiler switch for turning on these warnings, but I'm not sure.
It's impossible to infer Nothing as the least upper bound of two types unless those two types are also both Nothing. When you infer the least upper bound of two types that have nothing in common, you'll get Any. (In most such cases you'll actually get AnyRef, because you'll only get Any when a value type like Int or Long is involved.)