I see some code in Nimbus that looks like this:
if (nil == someObject)
but I usually type:
if (someObject == nil)
Are there any differences in these statements?
No.
They're functionally the same, it's just a coding style issue.
In the olden days, your compiler wouldn't warn you if you missed out an equals sign.
if (someObject = nil)
Probably doesn't do what you want. But if you invert them:
if (nil = someObject)
then the compiler will complain.
These days it probably doesn't make any difference.
Technically, no. The former (the style Nimbus uses) is what is endearingly called a "Yoda condition".
The name of the game here is foolproofing. See, the problem is that this:
if (someObject = nil) // SETS someObject to nil
is totally valid, only one character away from == nil, and really easy to miss. However, if you attempt to do this:
if (nil = someObject)
your compiler will freak out, preventing the issue.
Personally, I hate Yoda conditionals, as I think they're hard to read. Avoiding them does mean being extra careful with my code, but hey, I'm the better for it, right? It all comes down to style here, so whatever makes you more comfortable, go for it.
Oh, and if you're using Xcode, this is nearly a moot point. If you check out this question, you'll see that Xcode now warns you if you attempt to do an assignment within an if without extra parens. That is,
if (someObject = nil) // throws a warning, whereas
if ((someObject = nil)) // does not
making the issue much harder to miss.
No, but the convention below makes the code more readable:
Left-hand side: The expression "being interrogated," whose value is more in flux.
Right-hand side: The expression being compared against, whose value is more constant.
The only difference nowadays is that the second form is more readable (or maybe that's subjective and it's only me who prefers it).
Related
I've read more than a few answers to similar questions as well as a few tutorials, but none address my main confusion. I'm a native Java coder, but I've programmed in Swift as well.
Why would I ever want to use optionals instead of nulls?
I've read that it's so there are fewer null checks and errors, but these are necessary or easily avoided with clean programming.
I've also read it's so all references succeed (https://softwareengineering.stackexchange.com/a/309137/227611 and val length = text?.length). But I'd argue this is a bad thing or a misnomer. If I call the length function, I expect it to contain a length. If it doesn't, the code should deal with it right there, not continue on.
What am I missing?
Optionals provide clarity of type. An Int stores an actual value - always, whereas an Optional Int (i.e. Int?) stores either the value of an Int or a nil. This explicit "dual" type, so to speak, allows you to craft a simple function that can clearly declare what it will accept and return. If your function is to simply accept an actual Int and return an actual Int, then great.
func foo(x: Int) -> Int
But if your function wants to allow the return value to be nil, and the parameter to be nil, it must do so by explicitly making them optional:
func foo(x: Int?) -> Int?
In other languages such as Objective-C, objects can always be nil instead. Pointers in C++ can be nil, too. And so any object you receive in Obj-C or any pointer you receive in C++ ought to be checked for nil, just in case it's not what your code was expecting (a real object or pointer).
In Swift, the point is that you can declare object types that are non-optional, and thus whatever code you hand those objects to doesn't need to do any checks. It can just safely use those objects and know they are non-nil. That's part of the power of Swift optionals. And if you receive an optional, you must explicitly unwrap it when you need to access its value. Those who code in Swift try to make their functions and properties non-optional whenever they can, unless they truly have a reason for making them optional.
The other beautiful thing about Swift optionals is all the built-in language constructs for dealing with them, which make the code faster to write, cleaner to read, and more compact, taking a lot of the hassle out of checking and unpacking an optional compared with the equivalent you'd have to write in other languages.
The nil-coalescing operator (??) is a great example, as are if-let and guard and many others.
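For illustration, here is a minimal sketch of those constructs (the names username, displayName, and greet are made up for this example, not from the original answer):

let username: String? = nil

// nil-coalescing (??): fall back to a default when the optional is nil
let displayName = username ?? "Anonymous"

// if-let: bind and use the value only when one is present
if let name = username {
    print("Hello, \(name)")
}

// guard-let: bail out early when the value is missing,
// then use the non-optional binding afterwards
func greet(_ maybeName: String?) {
    guard let name = maybeName else {
        print("No name to greet")
        return
    }
    print("Hello, \(name)")
}

greet(displayName)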
In summary, optionals encourage and enforce more explicit type-checking in your code - type-checking that's done by the compiler rather than at runtime. Sure, you can write "clean" code in any language, but it's just a lot simpler and more automatic to do so in Swift, thanks in big part to its optionals (and its non-optionals too!).
It avoids errors at compile time, so that you don't unintentionally pass nil.
In Java, any variable can be null, so it becomes a ritual to check for null before using it. In Swift, only optionals can be nil, so you only have to check optionals for a possible nil value.
You don't always have to check an optional, either. You can work equally well on optionals without unwrapping them: sending a message to an optional that holds nil does not break the code (there's a small sketch of this below).
There can be more but those are the ones that help a lot.
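A small sketch of that last point, using optional chaining (the variable names are made up for this example):

let middleName: String? = nil

// Optional chaining: because middleName is nil, .count is never evaluated
// and the whole expression is nil instead of crashing.
let length: Int? = middleName?.count

print(length as Any)   // prints "nil"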
TL/DR: The null checks that you say can be avoided with clean programming can also be avoided in a much more rigorous way by the compiler. And the null checks that you say are necessary can be enforced in a much more rigorous way by the compiler. Optionals are the type construct that make that possible.
var length = text?.length
This is actually a good example of one way that optionals are useful. If text doesn't have a value, then it can't have a length either. In Objective-C, if text is nil, then any message you send it does nothing and returns 0. That fact was sometimes useful and it made it possible to skip a lot of nil checking, but it could also lead to subtle errors.
On the other hand, many other languages point out the fact that you've sent a message to a nil pointer by helpfully crashing immediately when that code executes. That makes it a little easier to pinpoint the problem during development, but run time errors aren't so great when they happen to users.
Swift takes a different approach: if text doesn't point to something that has a length, then there is no length. An optional isn't a pointer, it's a kind of type that either has a value or doesn't have a value. You might assume that the variable length is an Int, but it's actually an Int?, which is a completely different type.
If I call the length function, I expect it to contain a length. If it doesn't, the code should deal with it right there, not continue on.
If text is nil then there is no object to send the length message to, so length never even gets called and the result is nil. Sometimes that's fine — it makes sense that if there's no text, there can't be a length either. You may not care about that — if you were preparing to draw the characters in text, then the fact that there's no length won't bother you because there's nothing to draw anyway. The optional status of both text and length forces you to deal with the fact that those variables don't have values at the point where you need the values.
Let's look at a slightly more concrete version:
var text : String? = "foo"
var length : Int? = text?.count
Here, text has a value, so length also gets a value, but length is still an optional, so at some point in the future you'll have to check that a value exists before you use it.
var text : String? = nil
var length : Int? = text?.count
In the example above, text is nil, so length also gets nil. Again, you have to deal with the fact that both text and length might not have values before you try to use those values.
var text : String? = "foo"
var length : Int = text.count
Guess what happens here? The compiler says Oh no you don't! because text is an optional, which means that any value you get from it must also be optional. But the code specifies length as a non-optional Int. Having the compiler point out this mistake at compile time is so much nicer than having a user point it out much later.
var text : String? = "foo"
var length : Int = text!.count
Here, the ! tells the compiler that you think you know what you're doing. After all, you just assigned an actual value to text, so it's pretty safe to assume that text is not nil. You might write code like this because you want to allow for the fact that text might later become nil. Don't force-unwrap optionals if you don't know for certain, because...
var text : String? = nil
var length : Int = text!.count
...if text is nil, then you've betrayed the compiler's trust, and you deserve the run time error that you (and your users) get:
error: Execution was interrupted, reason: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)
Now, if text is not optional, then life is pretty simple:
var text : String = "foo"
var length : Int = text.count
In this case, you know that text and length are both safe to use without any checking because they cannot possibly be nil. You don't have to be careful to be "clean" -- you literally can't assign anything that's not a valid String to text, and every String has a count, so length will get a value.
Why would I ever want to use optionals instead of nulls?
Back in the old days of Objective-C, we used to manage memory manually. There was a small number of simple rules, and if you followed the rules rigorously, then Objective-C's retain counting system worked very well. But even the best of us would occasionally slip up, and sometimes complex situations arose in which it was hard to know exactly what to do. A huge portion of Objective-C questions on StackOverflow and other forums related to the memory management rules. Then Apple introduced ARC (Automatic Reference Counting), in which the compiler took over responsibility for retaining and releasing objects, and memory management became much simpler. I'll bet fewer than 1% of Objective-C and Swift questions here on SO relate to memory management now.
Optionals are like that: they shift responsibility for keeping track of whether a variable has, doesn't have, or can't possibly not have a value from the programmer to the compiler.
While learning Swift, I have seen many cases of safe code vs. unsafe code. A perfect example of this is
as? and as!
as?, from my understanding tries to type cast, but if it doesn't cast, returns nil.
as!, from my understanding, tries to type cast, and crashes if it fails.
My question is: why does as! (and other unsafe syntax with safe counterparts, for that matter) exist? Obviously as? is the better choice. The only answer I could come up with is that maybe as! is faster for the compiler to compile, and is therefore mainly an optimization piece of syntax.
I'm just curious to know whether I'm right about safe syntax being a bit slower to compile, since performance matters to me a great deal.
Thank you for your time.
This question is based upon a false premise: The primary choice of as? vs as! is not generally one of performance. Sure, as! potentially might be a bit more efficient, but in an optimized build, the difference is generally negligible.
One chooses as! vs as? based upon the context:
If the cast can possibly fail at runtime, then you'd use as?. Of course you add code to check to see whether the cast succeeded or not.
A common example of a cast that could fail would be whenever the source of the data is external (e.g. user input or a network request). A user could paste text values into a field in which you were expecting only numeric values. An external web server could be unavailable, or it could even return data in an unexpected format. You'd want to use as? to detect these situations and handle them gracefully.
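As a sketch of that kind of defensive cast (the function and parameter names here are hypothetical):

// userInput comes from outside the program, so the cast may fail;
// as? lets us detect that and recover instead of crashing.
func parseAge(from userInput: Any) -> Int? {
    if let age = userInput as? Int, age >= 0 {
        return age
    }
    return nil   // not an Int (or not a sensible age)
}

print(parseAge(from: 42) as Any)              // Optional(42)
print(parseAge(from: "not a number") as Any)  // nil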
But if the cast cannot possibly fail at runtime, you'd use as!. You do so less for performance reasons than for clarity and concise code.
A common example of a cast that cannot fail would be retrieving a cell prototype from a storyboard, where you know the base type was set in Interface Builder, though the Cocoa API cannot know this. It would be inefficient to add a lot of code checking whether the base type was set correctly (especially since this would be a fatal programming error anyway), and it leads to code with a lot of syntactic noise and no benefit. And if you don't do it correctly, you could easily and accidentally suppress an error that you really would want to flush out during the testing process, making it harder to find the problem. This is a good use case for as! syntax.
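A typical sketch of that storyboard case (assuming a hypothetical CustomCell class that was set as the prototype's class in Interface Builder):

import UIKit

class CustomCell: UITableViewCell { }

class ListViewController: UITableViewController {
    override func tableView(_ tableView: UITableView,
                            cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        // The prototype registered under this identifier is known (by the developer,
        // not by the Cocoa API) to be a CustomCell, so a failed cast here would be a
        // programming error; crashing immediately during testing is the desired outcome.
        let cell = tableView.dequeueReusableCell(withIdentifier: "CustomCell",
                                                 for: indexPath) as! CustomCell
        return cell
    }
}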
as! is not unsafe unless your code before it has failed to ensure the cast can't fail. Imagine a situation where your code has already verified the object can indeed be cast to a particular protocol. If for whatever reason you find it convenient to not create a new local variable of that type and cast the first variable to it, then using as! can be appropriate.
The same logic goes for implicitly unwrapped optionals. When your view controller is instantiated, an IBOutlet property can't be assigned yet (the storyboard hasn't loaded), so the property has to be optional. But it's irksome to have to treat the property as an optional throughout the class when you know it will have a value once the storyboard has loaded. In fact, if there is a problem locating the storyboard element and assigning it, a 100% guaranteed crash is desirable so that the developer can always spot and fix the problem.
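The standard pattern looks something like this (the controller and outlet names are illustrative):

import UIKit

class ProfileViewController: UIViewController {
    // Implicitly unwrapped optional: nil until the storyboard is loaded, but after
    // viewDidLoad it should always have a value, so the rest of the class can use it
    // without unwrapping. If the outlet connection is broken, the guaranteed crash
    // points straight at the problem.
    @IBOutlet var nameLabel: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()
        nameLabel.text = "Hello"
    }
}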
So returning to as!, this raises a second case of when it's appropriate to use: whenever a crash is the most desirable outcome of the failed cast.
So it's not about compiler speed. It's about code logic convenience. There are appropriate times to use it, and for those times, it's a more compact and easier to read way of representing your logic goals.
As you say, as! should hardly ever be used. There are a few cases, though, where due to legacy issues, the compiler doesn't know what type something should be, even though the developer does. One example of this is the NSCopying protocol. Because Objective-C was not as strict about type safety as Swift, copy() returns an id, which is bridged to Any in Swift. However, when you use copy(), mutableCopy(), and friends, you usually know exactly what type you will get out of it, so as! is typically used to force a cast to the correct type.
let aString: NSAttributedString = ...
let mutableAString = aString.mutableCopy() as! NSMutableAttributedString
You can investigate this question for yourself by compiling something like
func f(a: Any) -> Int {
(a as! Int) + 5
}
vs.
func f(a: Any) -> Int {
((a as? Int) ?? 3) + 5
}
With the Swift 4 compiler, running swiftc -O -emit-assembly on this Swift code produces extremely similar output, which should square with your understanding of how as! works and how we would implement it if it wasn't part of the language:
func forceCast<T>(_ subject: Any) -> T {
    guard let result = subject as? T else {
        fatalError("Could not cast \(subject) to \(T.self)")
    }
    return result
}
It's worth noting, as SmartCat already alluded to, that as! isn't actually "unsafe" in the sense that the Swift compiler authors define that word, because it, like as?, actually checks the type of subject. There is an unsafe-in-the-Swift-sense force cast in Swift; it's called, unsurprisingly, unsafeBitCast, and it's equivalent to a C/Objective-C cast: it doesn't check types and just assumes you knew what you were doing. Will it be faster than as! and as? casts? Without a doubt.
So, to answer your title's question: yes, safety has a cost in Swift. The flip side of the coin is that if you write code that avoids casts and optionality, Swift can (in theory) optimize your code far more aggressively than languages that make less strong safety guarantees.
To answer your actual question; there's no difference between as? and as! in terms of performance.
Others have already covered why as! exists: it's so you can tell the compiler "I can't explain why to you, but this will always work; trust me".
While checking whether an object is nil, some people use 1:
if (object == nil) {
//...
}
and others use 2:
if (nil == object) {
//...
}
Any difference between 1 and 2? Which one is better?
The difference is mainly that if you mistakenly drop one = (typing = instead of ==), e.g. like this:
(nil = myObject)
you will get an error, because you can't assign a value to nil. So it is a kind of fail-safe.
The use of nil == object is actually an idiom to prevent the unlucky case where you drop a = in your expression. For example, you want to write:
if (object == nil)
but write:
if (object = nil) {
this is a typical error and one that is very difficult to track down, since the assignment also has a value as an expression, and thus the condition will evaluate to false (no error), but you will also have wiped out your object...
On the other hand, writing
if (nil == object)
you ensure that that kind of error will be detected by the compiler since
if (nil = object)
is not a regular assignment.
Actually, modern compilers (with default settings) will provide a warning for this kind of "unintended" assignment, i.e.:
if (object = nil) {
will raise a warning; but still this can be tricky.
As others pointed out, they are equivalent. There is also another way to do it:
if (!object) {
// object is nil
}
The reason some developers prefer "Yoda conditionals" is that it's less likely to inadvertently write if (object = nil) (note the assignment).
This is not an issue any more since compilers warn when assigning in a conditional expression without extra parentheses.
Since Yoda conditionals are less readable they should be avoided.
They are equivalent. Back in the day it was common to write if (CONST == variable) to reduce the risk of accidental assignment. E.g. if (variable = CONST) would assign the constant to the variable, and the if-statement would evaluate as true or false depending on the value of the constant, not the variable.
Nowadays, IDEs and compilers will usually be smart enough to issue a warning on such lines. And many people prefer the first version due to readability. But really it's a matter of style.
Best practice when using the comparison operator == is to put the constant on the left-hand side of the operator. That way it is impossible to accidentally mistype the assignment operator instead of the comparison.
Example:
( iVarOne == 1 )
is functionally equal to
( 1 == iVarOne)
but
( iVarOne = 1 )
is much different than
( 1 = iVarOne )
This best practice works around the fact that compilers do not complain when you mistype an assignment for a comparison operator...
Nope, the only difference is readability. I prefer the first one, while some other developers may prefer the other.
It's just a coding style issue; it has no technical difference at all.
Some may say that the second is better, since it is more explicit: the nil comes first, so it's easier to notice that we are testing for nil. But again, it depends on the developer's taste.
There is no difference at all; it's all about readability. If you want to write clean code, you should take care of this.
If you place the object on the right of the comparison, it becomes less apparent what you are really doing.
It is not NIL, it is NULL.
They are one and the same. The == operator is a comparison operator. As a general trend, we use (object==NULL)
I have been programming with the iPhone SDK for around 6 months but am a bit confused about a couple of things...one of which I am asking here:
Why are the following considered different?
if (varOrObject == nil)
{
}
vs
if (nil == varOrObject)
{
}
Coming from a Perl background this is confusing to me...
Can someone please explain why one of the two (the second) would be true whereas the first would not, if the two statements are placed one after the other within code? varOrObject would not have changed between the two if statements.
There is no specific code this is happening in, just that I have read in a lot of places that the two statements are different, but not why.
Thanks in advance.
They are the same. What you have read is probably talking about what happens if you mistakenly write == as =: the former will assign the value to the variable and the if condition will be false, while the latter would be a compile-time error:
if (variable = nil) // assigns nil to variable, not an error.
if (nil = variable) // compile time error
The latter gives you the chance to correct the mistake at compile time.
They might be different if you are using Objective-C++ and overriding the '==' operator for the object.
They are exactly the same, it is just a style difference. One would never be true if the other is false.
If it's a recommendation that you use the latter, it's because the typo is easier to catch if you accidentally type = instead of ==. Otherwise, they are identical and your book is wrong.
They can only be different if the == operator is overloaded.
Let's say someone redefines == for a list, so that the contents of two lists are compared rather than the memory addresses (like equals() in Java). Now the same person could theoretically define that emptyList == NULL.
Now (emptyList == NULL) would be true but (NULL == emptyList) would still be false.
While looking through some code, I found two different code snippets for initialization. I don't mean the method names, but the round brackets.
This one has just two of them:
if (self = [super initWithFrame:frame]) {
That's the way I do it all the time, and it seems to work. Now in an Apple example I found this:
if ((self = [super init])) {
Do I have to put it twice into round brackets here? Or is it just fine to put it in one pair of brackets, like the first example?
One pair of brackets is just fine
I call them "paranoia brackets" :)
EDIT: some C/C++ compilers will issue a warning because of the use of the assignment operator (it will say something like "did you mean =="?). Using extra parentheses prevents this warning. But Xcode doesn't show this kind of warning, so there's no need to do that.
In some languages, if( foo = bar ) implies that the assignment was correctly applied. So the call:
if( ( foo = bar ) )
would evaluate the assignment and return the result, with the outer () acting as an LHS. That is, in
blah = foo = bar
the outer () act sort of like blah.
In ANSI C and its children C++ and Objective-C this isn't strictly necessary. However, as has been mentioned, some compilers will issue a warning, since the "=" / "==" typo can be a nasty one. That typo led to the idiom of putting the invariant or constant on the left-hand side so that the compiler catches the problem:
if( nil == foo )
If both sides are variables, though, it's still possible to make the mistake.
There is a good reason for doing this even though gcc isn't warning you or evaluating things differently.
If you are writing code in a team environment, your peers may not be sure whether you meant "=" or just mistyped "==", causing them to peer more closely at what you're doing even though there's no need to. In the style of "write once to be read 1000 times", you put in clues to prevent people from having to waste time when reading your code. For instance, use clear, spelled-out variable names (no economy on bytes these days!). Don't use obtuse and overly optimized constructs in areas that aren't drags on performance - write your code cleanly.
In this case, use (( )) to indicate you knew it was "=" not "==".
Don't forget, Apple is writing their examples not just for a dozen people to read, but potentially for every man, woman, and child on earth to read, now and in the future. Clarity is of utmost importance.
Looks like a cut and paste error to me! (Or a lover of Lisp.) There is no good reason to have the second pair of brackets but, as you note, it's not actually harmful.
If I've got this right, the first one checks that the assignment self = [super initWithFrame:frame] happens, and the second one checks that the result of that assignment is true.
But this could just be a lack of tea speaking...