If you have an object like NSString *someString, what is the difference, if any, between
if (!someString)
vs
if (someString == nil)
Thanks!
The first syntax you use:
if (!someString)
exploits a sort of "ambiguity" of C deriving from the fact that the original C standard lacked a proper boolean type. Therefore, any integer value equal to 0 was interpreted as "false", and any integer value different from 0 was taken as "true". The meaning of ! is defined based on this convention, and current versions of the C standard have kept the original definition for compatibility.
In your specific case, someString is a pointer, so it is first converted to an integer; !someString is then interpreted as true when someString points at the location 0x000000 (i.e. is nil), and as false otherwise.
This is fine in most conditions (I would say always), but in theory NULL/nil could be different from 0x000000 under certain compilers, so (in theory, at least) it would be better to use the second syntax, which is more explicit:
if (someString == nil)
It is also more readable, and since someString is a pointer rather than an integer it is, IMO, better practice in general.
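For illustration, here is a minimal plain-C sketch of the equivalence (a char * stands in for NSString *, since NSString is an Objective-C class, and the names are just for the example):

#include <stdio.h>
#include <stddef.h>

int main(void) {
    const char *someString = NULL;       /* stand-in for an NSString * that is nil */

    if (!someString)
        printf("!someString        -> branch taken\n");
    if (someString == NULL)
        printf("someString == NULL -> branch taken\n");

    someString = "hello";                /* now non-nil: neither test is true */
    if (!someString || someString == NULL)
        printf("never printed\n");
    return 0;
}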
EDIT: about the definition of NULL...
Whether the C standard defines NULL to be 0 is an interesting topic for me...
According to the C99 standard, section 7.17, "Common definitions":
NULL [which] expands to an implementation-defined null pointer constant;
So, NULL is defined in stddef.h as an implementation-defined null pointer constant...
The same document on page 47 states:
An integer constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant.55) If a null pointer constant is converted to a pointer type, the resulting pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function.
So, the null pointer constant (for example (void *)0) can be converted to a null pointer, and this is guaranteed to compare unequal to a pointer to any object or function.
So, I think it basically depends on whether the implementation decides that converting a null pointer constant to a null pointer produces a pointer which, converted back to an integer, gives 0. It is not guaranteed that a null pointer interpreted as an integer equals 0.
I would say that the standard really tries to encourage the null pointer being 0, but leaves the door open for systems where the null pointer is not 0.
For most pointers, they're equivalent, though most coders I know prefer the former as it's more concise.
For weakly linked symbols, the former resolves the symbol (and will cause a crash if it's missing) while an explicit comparison against nil or NULL will not.
The bang, or exclamation mark, prefix operator ! in C is a logical NOT. At least, it's a version of one. If you looked at a typical logical NOT truth table you would see something like this:
Input     Result
1         0
0         1
However in C the logical not operator does something more like this:
Input     Result
non-zero  0
0         1
So when you consider that both NULL and nil in Objective-C evaluate to 0, you know that the logical not operator applied to them will result in a 1.
Now, consider the equality == operator. It compares the value of two items and returns 1 if they are equal, 0 if they are not. If you mapped its results to a truth table then it would look exactly like the results for logical not.
In C and Objective-C programs, conditions are actually evaluated as ints, as opposed to real booleans, because classic C has no dedicated boolean data type. So writing something like this works perfectly fine in C:
if(5) printf("hello\n"); // prints hello
and in addition
if(2029) printf("hello\n"); // also prints hello
Basically, any non-zero int will evaluate as 'true' in C. You combine that with the truth tables for logical negation and equality, and you quickly realize that:
(! someString) and (someString == nil)
are, for all intents and purposes, identical!
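To see it all together, here is a tiny C sketch (hypothetical values; a char * stands in for an object pointer) showing that ! maps any non-zero value to 0 and zero to 1, which is exactly what the == NULL test produces for a pointer:

#include <stdio.h>
#include <stddef.h>

int main(void) {
    /* Logical NOT on plain ints: non-zero -> 0, zero -> 1. */
    printf("!5 = %d, !2029 = %d, !0 = %d\n", !5, !2029, !0);

    /* The same rule applied to a pointer matches the == NULL comparison. */
    const char *p = NULL;
    printf("!p = %d, (p == NULL) = %d\n", !p, p == NULL);
    p = "hello";
    printf("!p = %d, (p == NULL) = %d\n", !p, p == NULL);
    return 0;
}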
So the next logical question is: why prefer one form over the other? From a pure C viewpoint it is mostly a point of style, but most (good) developers would choose the equality test for a number of reasons:
It's closer to what you are trying to express in code. You are trying to check whether the someString variable is nil.
It's more portable. Languages like Java have a real boolean type. You cannot use bang notation on their variables or their null definition. Using equality where it's needed makes it easier to port C code to such languages later on.
Apple may change the definition of nil. OK, no they won't! But it never hurts to be safe!
In your case they mean the same thing: any pointer that is not nil evaluates as YES (true) in a condition, while a nil pointer evaluates as NO (false).
Normally the exclamation mark operator negates a BOOL value.
If you mean to test the condition "foo is nil" you should say that: foo == nil.
If you mean to test a boolean value for falsehood, !foo is okay, but personally I think that a skinny little exclamation point is easy to miss, so I prefer foo == NO.
Writing good code is about clearly conveying your intention not just to the compiler, but to the next programmer that comes along (possibly a future you). In both cases, the more explicit you can be about what you're trying to do, the better.
All that aside, ! and ==nil have the same effect in all the cases I can think of.
! is a negation operator. If your object is nil (for example, it was never assigned), you will get the same result from the truth table as you would with an == nil comparison.
But ! is more commonly used for boolean operations.
if (!isFalse) {
    // if isFalse == NO, then this condition evaluates to YES (true)
    [self doStuff];
}
When you use ! on an object, as in !something, it just checks whether the pointer is nil: if it is, the expression evaluates to true and the if statement fires.
I've read more than a few answers to similar questions as well as a few tutorials, but none address my main confusion. I'm a native Java coder, but I've programmed in Swift as well.
Why would I ever want to use optionals instead of nulls?
I've read that it's so there are fewer null checks and errors, but these are necessary or easily avoided with clean programming.
I've also read it's so all references succeed (https://softwareengineering.stackexchange.com/a/309137/227611 and val length = text?.length). But I'd argue this is a bad thing or a misnomer. If I call the length function, I expect it to contain a length. If it doesn't, the code should deal with it right there, not continue on.
What am I missing?
Optionals provide clarity of type. An Int stores an actual value - always, whereas an Optional Int (i.e. Int?) stores either the value of an Int or a nil. This explicit "dual" type, so to speak, allows you to craft a simple function that can clearly declare what it will accept and return. If your function is to simply accept an actual Int and return an actual Int, then great.
func foo(x: Int) -> Int
But if your function wants to allow the return value to be nil, and the parameter to be nil, it must do so by explicitly making them optional:
func foo(x: Int?) -> Int?
In other languages such as Objective-C, any object reference can be nil instead. Pointers in C++ can be null, too. And so any object you receive in Obj-C, or any pointer you receive in C++, ought to be checked for nil/null, just in case it's not what your code was expecting (a real object or pointer).
In Swift, the point is that you can declare object types that are non-optional, and thus whatever code you hand those objects to doesn't need to do any checks. It can just safely use those objects and know they are non-nil. That's part of the power of Swift optionals. And if you receive an optional, you must explicitly unwrap it when you need to access its value. Those who code in Swift try to make their functions and properties non-optional whenever they can, unless they truly have a reason for making them optional.
The other beautiful thing about Swift optionals is all the built-in language constructs for dealing with them, which make the code faster to write, cleaner to read, and more compact, taking a lot of the hassle out of checking and unwrapping optionals compared with what you'd have to do in other languages.
The nil-coalescing operator (??) is a great example, as are if-let and guard and many others.
In summary, optionals encourage and enforce more explicit type-checking in your code - type-checking that's done by the compiler rather than at runtime. Sure, you can write "clean" code in any language, but it's just a lot simpler and more automatic to do so in Swift, thanks in big part to its optionals (and its non-optionals too!).
Errors are caught at compile time, so you don't unintentionally pass nils around.
In Java, any variable can be null, so it becomes a ritual to check for null before using it. In Swift, only optionals can be nil, so you only have to check optionals for a possible nil value.
You don't always have to check an optional. You can work equally well on optionals without unwrapping them: calling a method on an optional that holds nil (via optional chaining) does not break the code.
There may be more, but those are the ones that help the most.
TL/DR: The null checks that you say can be avoided with clean programming can also be avoided in a much more rigorous way by the compiler. And the null checks that you say are necessary can be enforced in a much more rigorous way by the compiler. Optionals are the type construct that make that possible.
var length = text?.length
This is actually a good example of one way that optionals are useful. If text doesn't have a value, then it can't have a length either. In Objective-C, if text is nil, then any message you send it does nothing and returns 0. That fact was sometimes useful and it made it possible to skip a lot of nil checking, but it could also lead to subtle errors.
On the other hand, many other languages point out the fact that you've sent a message to a nil pointer by helpfully crashing immediately when that code executes. That makes it a little easier to pinpoint the problem during development, but run time errors aren't so great when they happen to users.
Swift takes a different approach: if text doesn't point to something that has a length, then there is no length. An optional isn't a pointer, it's a kind of type that either has a value or doesn't have a value. You might assume that the variable length is an Int, but it's actually an Int?, which is a completely different type.
If I call the length function, I expect it to contain a length. If it doesn't, the code should deal with it right there, not continue on.
If text is nil then there is no object to send the length message to, so length never even gets called and the result is nil. Sometimes that's fine — it makes sense that if there's no text, there can't be a length either. You may not care about that — if you were preparing to draw the characters in text, then the fact that there's no length won't bother you because there's nothing to draw anyway. The optional status of both text and length forces you to deal with the fact that those variables don't have values at the point where you need the values.
Let's look at a slightly more concrete version:
var text : String? = "foo"
var length : Int? = text?.count
Here, text has a value, so length also gets a value, but length is still an optional, so at some point in the future you'll have to check that a value exists before you use it.
var text : String? = nil
var length : Int? = text?.count
In the example above, text is nil, so length also gets nil. Again, you have to deal with the fact that both text and length might not have values before you try to use those values.
var text : String? = "foo"
var length : Int = text.count
Guess what happens here? The compiler says Oh no you don't! because text is an optional, which means that any value you get from it must also be optional. But the code specifies length as a non-optional Int. Having the compiler point out this mistake at compile time is so much nicer than having a user point it out much later.
var text : String? = "foo"
var length : Int = text!.count
Here, the ! tells the compiler that you think you know what you're doing. After all, you just assigned an actual value to text, so it's pretty safe to assume that text is not nil. You might write code like this because you want to allow for the fact that text might later become nil. Don't force-unwrap optionals if you don't know for certain, because...
var text : String? = nil
var length : Int = text!.count
...if text is nil, then you've betrayed the compiler's trust, and you deserve the run time error that you (and your users) get:
error: Execution was interrupted, reason: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)
Now, if text is not optional, then life is pretty simple:
var text : String = "foo"
var length : Int = text.count
In this case, you know that text and length are both safe to use without any checking because they cannot possibly be nil. You don't have to be careful to be "clean" -- you literally can't assign anything that's not a valid String to text, and every String has a count, so length will get a value.
Why would I ever want to use optionals instead of nulls?
Back in the old days of Objective-C, we used to manage memory manually. There was a small number of simple rules, and if you followed the rules rigorously, then Objective-C's retain counting system worked very well. But even the best of us would occasionally slip up, and sometimes complex situations arose in which it was hard to know exactly what to do. A huge portion of Objective-C questions on StackOverflow and other forums related to the memory management rules. Then Apple introduced ARC (Automatic Reference Counting), in which the compiler took over responsibility for retaining and releasing objects, and memory management became much simpler. I'll bet fewer than 1% of Objective-C and Swift questions here on SO relate to memory management now.
Optionals are like that: they shift responsibility for keeping track of whether a variable has, doesn't have, or can't possibly not have a value from the programmer to the compiler.
I wanted to check the firstName and lastName values below, but Swift will not let me. It gives an error like "Binary operator '&&' cannot be applied to two 'Int' operands".
if firstName.length && lastName.length {
}
In Swift you cannot use an integer as a truth value the way you can in C and Objective-C; you have to write an explicit comparison:
if firstName.length > 0 && lastName.length > 0 { }
#Vadian has a good answer, but it could be put more clearly — and there's an extra wrinkle:
For things like the logic operators (&& and ||) and the conditions of if and while statements, C accepts just about any scalar type -- it just treats any zero value (which includes null pointers, though your string length here is just a zero integer) as false, and anything else as true.
In Swift, though, the only type allowed in such Boolean logic contexts is Bool — you can't pass an integer and expect to branch on whether the value is zero, so you have to use a comparison operator like > or ==, or a property or method that returns a Bool value.
Since you're dealing with Strings, you'd do best to use the isEmpty property anyway. Getting the length of a Swift String is not a "free" computation — the same number of bytes could be different numbers of human-readable characters depending on what those bytes are, so you can't count characters without examining bytes. It's a lot faster to answer whether the string is empty than to check all the bytes.
(Then again, maybe you're not using Swift strings anyway? Those don't have the length property you're using.)
I am tutoring someone in basic searching and sorting. In insertion sort I iterate backwards when the current value is smaller than the one before it in numerical terms. Now of course this approach can cause issues, because there is a check which calls for array[-1], which does not exist.
As highlighted in bold below, adding the and x > 0 boolean prevents the index issue.
My question is: how is this the case? Wouldn't the lookup of array[-1] still be made to evaluate both booleans?
the_list = [10,2,4,3,5,7,8,9,6]
for x in range(1,len(the_list)):
    value = the_list[x]
    while value < the_list[x-1] **and x > 0**:
        the_list[x] = the_list[x-1]
        x = x-1
    the_list[x] = value
print the_list
I'm not sure I completely understand the question, and I don't know what programming language this is, but most modern programming languages use so-called short-circuit Boolean evaluation by default so that the logical expression isn't evaluated further once the outcome is known.
You can use that to guard against range overflow, like this:
while x > 0 and value < the_list[x-1]
but the check of x's range here must come before the use.
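For illustration, here is a minimal C sketch of the same guard pattern (hypothetical array and index; C's && also short-circuits), showing that the right-hand operand is never evaluated once the left-hand one is false:

#include <stdio.h>

int main(void) {
    int the_list[] = {10, 2, 4};
    int x = 0;

    /* && short-circuits: when x > 0 is false, the_list[x - 1] is never read,
       so the out-of-range access never happens. */
    if (x > 0 && the_list[x - 1] > the_list[x]) {
        printf("would shift element %d\n", x);
    } else {
        printf("guard stopped evaluation before reading the_list[-1]\n");
    }
    return 0;
}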
An AND operation returns true if and only if both arguments are true, so if one of the arguments is false there is no point in checking the others, as the final value is already known at that point. As for your example, evaluation usually goes from left to right, but that is not a universal principle, and it looks as though the language you are using does not follow that rule (otherwise it should still crash on the array lookup). It may be that this particular implementation optimizes this somehow (which IMHO is not a good idea) and evaluates "simpler" things first (like checking whether x > 0) before it looks up the array. Check the specs for why this exact order works for you, as in most popular languages you would still crash if the x > 0 test were not evaluated before the lookup.
While checking whether an object is nil, some people use 1:
if (object == nil) {
    //...
}
while others use 2:
if (nil == object) {
    //...
}
Any difference between 1 and 2? Which one is better?
The difference is mainly that if you mistakenly drop an =, e.g. like this:
(nil = myObject)
you will get an error because you can't assign a value to nil. So it is a kind of fail-safe.
The use of nil == object is actually an idiom to prevent the unlucky case where you drop an = from your expression. For example, you want to write:
if (object == nil)
but write:
if (object = nil) {
this is a typical error, and one that is very difficult to track down, since the assignment also has a value as an expression, so the condition will evaluate to false (no error), but you will also have wiped out your object...
On the other hand, writing
if (nil == object)
you ensure that that kind of error will be detected by the compiler since
if (nil = object)
is not a regular assignment.
Actually, modern compilers (with default settings) will provide a warning for this kind of "unintended" assignment, i.e.:
if (object = nil) {
will raise a warning; but still this can be tricky.
As others pointed out, they are equivalent. There is also another way to do it:
if (!object) {
    // object is nil
}
The reason some developers prefer "Yoda conditionals" is that it's less likely to inadvertently write if (object = nil) (note the assignment).
This is not an issue any more since compilers warn when assigning in a conditional expression without extra parentheses.
Since Yoda conditionals are less readable they should be avoided.
They are equivalent. Back in the day it was common to write if (CONST == variable) to reduce the risk of accidental assignment. E.g. if (variable = CONST) would assign the constant to the variable, and the if statement would evaluate as true or false depending on the value of the constant, not the variable.
Nowadays, IDEs and compilers will usually be smart enough to issue a warning on such lines. And many people prefer the first version due to readability. But really it's a matter of style.
Best practice when using the comparison operator == is to put the constant on the left-hand side. In this way it is impossible to accidentally type the assignment operator instead of the comparison operator.
example:
( iVarOne == 1 )
is functionally equal to
( 1 == iVarOne)
but
( iVarOne = 1 )
is much different than
( 1 = iVarOne )
This best practice works around the fact that compilers do not always complain when you mistype an assignment for a comparison operator...
Nope; the only difference is readability. I prefer the first one, while some other developers may prefer the other.
It's just a coding style issue; there is no technical difference at all.
Some may say that the second is better since it is more explicit: the nil comes first, so it's easier to notice that we are testing for nil. But again, it depends on the developer's taste.
There is no difference at all. It's all about readability. If you want to write clean code, you should take care of this.
If you place the object on the right of the comparison, it becomes less apparent what you are really doing.
It is not NIL, it is NULL.
They are one and the same. The == operator is a comparison operator. As a general trend, we use (object==NULL)
I'm designing a language, and trying to decide whether true should be 0x01 or 0xFF. Obviously, all non-zero values will be converted to true, but I'm trying to decide on the exact internal representation.
What are the pros and cons for each choice?
It doesn't matter, as long as it satisfies the rules for the external representation.
I would take a hint from C here, where false is defined absolutely as 0, and true is defined as not false. This is an important distinction, compared with picking an absolute value for true. Unless you have a type that has only two states, you have to account for every value within that value type: which ones are true, and which are false.
0 is false partly because the processor has a zero flag that is set when a register holds zero.
No comparable flag is set for any other particular value (0x01, 0xFF, etc.); the zero flag is simply cleared when there is a non-zero value in the register.
So the answers here advocating defining 0 as false and anything else as true are correct.
If you want to "define" a default value for true, then 0x01 is better than most:
It represents the same number in every bit length and signedness
It only requires testing one bit if you want to know whether it's true, should the zero flag be unavailable, or costly to use
No need to worry about sign extension during conversion to other types
Logical and arithmetic expressions act the same on it
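A small C sketch (hypothetical variable names, assuming a typical two's-complement machine) of the bit-length and sign-extension points above:

#include <stdio.h>

int main(void) {
    signed char true01 = 0x01;               /* "true" stored as 0x01 in one byte */
    signed char trueFF = (signed char)0xFF;  /* "true" stored as 0xFF: -1 in a signed byte */

    /* Widening to int: 0x01 is still 1, but 0xFF sign-extends to -1 (0xFFFFFFFF). */
    printf("0x01 widened: %d (0x%X)\n", true01, (unsigned)true01);
    printf("0xFF widened: %d (0x%X)\n", trueFF, (unsigned)trueFF);

    /* Logical and arithmetic use agree for 0x01: adding two flags counts the trues. */
    printf("true01 + true01 = %d\n", true01 + true01);
    return 0;
}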
-Adam
Using -1 has one advantage in a weakly typed language -- if you mess up and use the bitwise and operator instead of the logical and operator, your condition will still evaluate correctly as long as one of the operands has been converted to the canonical boolean representation. This isn't true if the canonical representation is 1.
0xffffffff & 0x00000010 == 0x00000010 (true)
0xffffffff && 0x00000010 == 0xffffffff (true)
but
0x00000001 & 0x00000010 == 0x00000000 (false)
0x00000001 && 0x00000010 == 0xffffffff (true)
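A minimal C sketch of the same pitfall (note that C's && yields 1 rather than an all-ones value, but the bitwise mix-up behaves the same way; the values are just illustrative):

#include <stdio.h>

int main(void) {
    int a = 0x01;   /* "true" in a canonical 0x01 representation */
    int b = 0x10;   /* some other value we consider true (non-zero) */

    printf("a & b  = 0x%X\n", (unsigned)(a & b));   /* 0: the bit patterns share no bits, so the test acts false */
    printf("a && b = %d\n", a && b);                /* 1: logical AND sees two non-zero operands */

    int t = -1;     /* "true" as all ones (0xFFFFFFFF for a 32-bit int) */
    printf("t & b  = 0x%X\n", (unsigned)(t & b));   /* 0x10: still non-zero, so the mistaken & still acts true */
    return 0;
}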
Why are you choosing that non-zero values are true? In Ada true is TRUE and false is FALSE. There is no implicit type conversion to and from BOOLEAN.
IMO, if you want to stick with false = 0x00, you should use 0x01. 0xFF is usually either a sign that some operation overflowed or an error marker, and in both cases it probably means false. Hence the *nix return-value convention for executables: true = 0x00, and any non-zero value is false.
-1 is longer to type than 1...
In the end it doesn't matter since 0 is false and anything else is true, and you will never compare to the exact representation of true.
Edit, for those down voting, please explain why. This answer is essentially the same as the one currently rated at +19. So that is 21 votes difference for what is the same basic answer.
If it is because of the -1 comment: it is true that the person who actually defines "true" (e.g. the compiler writer) is going to have to use -1 instead of 1, assuming they choose an exact representation. -1 is going to take longer to type than 1, and the end result will be the same. The statement is silly; it was meant to be silly, because there is no real difference between the two (1 or -1).
If you are going to mark something down at least provide a rationale for it.
0xff is an odd choice since it has an implicit assumption that 8 bits is your minimum storage unit. But it's not that uncommon to want to store boolean values more compactly than that.
Perhaps you want to rephrase by thinking about whether boolean operators produce something that is just one 0 or 1 bit (which works regardless of sign extension), or is all-zeroes or all-ones (and depends on sign extension of signed two's-complement quantities to maintain all-ones at any length).
I think your life is simpler with 0 and 1.
The pros are none, and the cons are none, too. As long as you provide an automatic conversion from integer to boolean, it will be arbitrary, so it really doesn't matter which numbers you choose.
On the other hand, if you didn't allow this automatic conversion you'd have a pro: you wouldn't have some entirely arbitrary rule in your language. You wouldn't have (7 - 4 - 3) == false, or 3 * 4 + 17 == "Hello", or "Hi mom!" == Complex(7, -2).
I think the C method is the way to go. 0 means false, anything else means true. If you go with another mapping for true, then you are left with the problem of having indeterminate values that are neither true nor false.
If this is a language that you'll be compiling for a specific instruction set that has special support for a particular representation, then I'd let that guide you. But absent any additional information, for a 'standard' internal representation, I'd go with -1 (all 1s in binary). This value extends well to whatever size of boolean you want (a single bit, 8 bits, 16, etc.), and if you break up a "TRUE" or a "FALSE" into a smaller "TRUE" or "FALSE", it's still the same (whereas if you broke up a 16-bit TRUE = 0x0001 you'd get a FALSE = 0x00 and a TRUE = 0x01).
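As a quick C sketch of that last point (hypothetical values, using fixed-width types for clarity): splitting an all-ones TRUE into smaller pieces leaves every piece all-ones, whereas splitting TRUE = 0x0001 leaves one byte that reads as false:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t true_all_ones = 0xFFFF;  /* 16-bit TRUE as all 1s */
    uint16_t true_one      = 0x0001;  /* 16-bit TRUE as 0x0001 */

    /* Break each 16-bit "TRUE" into its two bytes. */
    printf("all-ones: high = 0x%02X, low = 0x%02X\n",
           (unsigned)(true_all_ones >> 8), (unsigned)(true_all_ones & 0xFF));  /* 0xFF and 0xFF: both halves still read as true */
    printf("0x0001:   high = 0x%02X, low = 0x%02X\n",
           (unsigned)(true_one >> 8), (unsigned)(true_one & 0xFF));            /* 0x00 and 0x01: the high half reads as false */
    return 0;
}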
Design the language so that 0 is false and non-zero is true. There is no need to "convert" anything, and thinking "non-zero" instead of some specific value will help you write the code properly.
If you have built-in symbols like "True" then go ahead and pick a value, but always think "non-zero is true" instead of "0x01 is true".
Whatever you do, once you select your values, don't change them. In FORTH-77, true and false were defined as 1 and 0. Then FORTH-83 redefined them as -1 and 0. There were not a few (well, OK, only a few; this is FORTH we are talking about) problems caused by this.