I've only seen people using let to unwrap optionals.
I was wondering if it is a good practice and I could just as well use var, or is there something more to it?
What are the pros and cons of using let versus var?
It's the usual rule:
let is for a constant (you will not be able to modify the value).
The good practice is to always use let when you can (when you don't need to modify the value). It helps the compiler optimize and makes the intent of the code clearer.
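As a minimal sketch (the variable names here are made up for illustration), unwrapping with let is usually all you need, since the unwrapped copy rarely has to change:

let maybeName: String? = "Ada"

if let name = maybeName {
    // name is a constant copy of the unwrapped value and cannot be reassigned
    print("Hello, \(name)")
}

// `if var` also compiles, but it only gives you a mutable local copy;
// it does not write back to maybeName, so reach for it only when you
// actually need to modify the unwrapped value.
if var name = maybeName {
    name += "!"
    print(name)
}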
As mentioned before, you should always use let when you can, as it allows the compiler to do some fancy optimizations and can thus make your program run faster.
There is technically nothing wrong with using var, though.
For further reading, please check out this article:
http://www.touch-code-magazine.com/swift-optionals-use-let/
I'm having problems iterating through an enumerated optional type. I'm looking for general guidance if anyone has any, and I'll post more code if needed.
It all stems from this
@Published var movielist: Movie?
but I'm really confused about how to handle this error.
(vm.movielist?.results ?? []).enumerated()
This way, if vm.movielist is nil, then an empty collection gets used instead.
(This is the first "Fix" that Xcode suggests, although Xcode sometimes mangles the syntax a little when it tries to automatically fix expressions chained together like this.)
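As a small self-contained sketch (Movie, results, and vm come from the question; MovieResult and ViewModel are assumed stand-ins for types not shown), the fallback means the loop simply runs zero times when movielist is nil:

struct MovieResult { let title: String }
struct Movie { let results: [MovieResult] }
struct ViewModel { var movielist: Movie? }   // stand-in for the @Published property's owner

let vm = ViewModel(movielist: nil)

// With the ?? [] fallback there is nothing left to unwrap by hand:
for (index, result) in (vm.movielist?.results ?? []).enumerated() {
    print("\(index + 1). \(result.title)")
}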
I'm trying to build a collection view with pagination (showing cells as previews) and found this tutorial. I'm getting the following error from Xcode and am guessing it must be due to an Xcode update (since the tutorial seems to have worked for a lot of people). I would love a hint on how to fix this:
Downcast from '[UICollectionViewLayoutAttributes]?' to
'[UICollectionViewLayoutAttributes]' only unwraps optionals; did you
mean to use '!'?
You probably did something like this (someOptionalAttributes being of type [UICollectionViewLayoutAttributes]?):
someOptionalAttributes as! [UICollectionViewLayoutAttributes]
However, because someOptionalAttributes is just the optional version of [UICollectionViewLayoutAttributes], you do not need a force downcast, you just need to unwrap it:
someOptionalAttributes!
Force downcasting is only necessary if casting to an entirely new type (usually subclass).
So, yes, you did mean to use ! as said in the error. If you want to be safe, you could use a number of different optional unwrapping techniques.
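For example, in a custom layout's layoutAttributesForElements(in:) override (the subclass name here is an assumption, not something from the tutorial), a guard let avoids the force unwrap entirely:

import UIKit

class PreviewFlowLayout: UICollectionViewFlowLayout {
    override func layoutAttributesForElements(in rect: CGRect) -> [UICollectionViewLayoutAttributes]? {
        // super already returns [UICollectionViewLayoutAttributes]?, so unwrap it instead of downcasting
        guard let attributes = super.layoutAttributesForElements(in: rect) else { return nil }

        // ...adjust each item's attributes for the preview effect here...

        return attributes
    }
}

Alternatively, super.layoutAttributesForElements(in: rect) ?? [] gives you a non-optional array without any unwrapping at all.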
I was reading up on how to program in Swift, and the concept of optionals bugged me a little bit. Not really in terms of why to use optionals, that makes sense, but more so as to in what case would you not want to use optionals. From what I understand, an optional just allows an object to be set to nil, so why would you not want that feature? Isn't setting an object to nil the way you tell ARC to release an object? And don't most of the functions in Foundation and Cocoa return optionals? So outside of having to type an extra character each time to refer to an object, is there any good reason NOT to use an optional in place of a regular type?
There are tons of reasons NOT to use optionals. The main reason: you want to express that a value MUST be available. For example, when you open a file, you want the file name to be a string, not an optional string. Using nil as a filename simply makes no sense.
I will consider two main use cases: Function arguments and function return values.
For function arguments, the following holds: if the argument needs to be provided, an optional should not be used. If handing in nothing is okay and a valid (documented!) input, then use an optional.
For function return values, returning a non-optional is especially nice: you promise the caller that he will receive an object, not either an object or nothing. When you do not return an optional, the caller knows that he may use the value right away instead of checking for nil first.
For example, consider a factory method. Such a method should always return an object, so why would you use an optional here? There are a lot of examples like this.
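A hedged sketch of the contrast (the User type and both functions are invented for illustration):

struct User { let id: Int; let name: String }

// A factory that can always produce a value: return a non-optional,
// so callers can use the result immediately.
func makeGuestUser() -> User {
    User(id: 0, name: "Guest")
}

// A lookup that can legitimately come up empty: return an optional,
// and document when nil is returned (here: no user with that id).
func findUser(id: Int, in users: [User]) -> User? {
    users.first { $0.id == id }
}

let guest = makeGuestUser()                   // no unwrapping needed
if let user = findUser(id: 42, in: [guest]) { // the caller is forced to handle "not found"
    print(user.name)
}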
Actually, most APIs should rather use non-optionals instead of optionals. Most of the time, simply passing/receiving possibly nothing is just not what you want. There are rather few cases where nothing is an option.
Each case where an optional is used must be thoroughly documented: in which circumstances will a method return nothing? When is it okay to hand nothing to a method, and what will the consequences be? That is a lot of documentation overhead.
Then there is also conciseness: If you use an API that uses optional all over the place, your code will be cluttered with null-checks. Of course, if every use of optional is intentional, then these checks are fine and necessary. However, if the API only uses optional because its author was lazy and was simply using optional all over the place, then the checks are unnecessary and pure boilerplate.
But beware!
My answer may sound as if the concept of optionals is quite crappy. The opposite is true! By having a concept like optionals, the programmer is able to declare whether handing in/returning nothing is okay. The caller of a function is always aware of that, and the compiler enforces safety. Compare that to plain old C: you could not declare whether a pointer could be null. You could add documentation comments stating whether it may be null, but such comments were not enforced by the compiler. If the caller forgot to check a return value for null, you got a segfault. With optionals, you can be sure that no one dereferences a null pointer anymore.
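A tiny sketch of that enforcement (open(path:) is a hypothetical function; readLine() really does return String?):

func open(path: String) {
    print("opening \(path)")
}

let maybePath: String? = readLine()        // input may be absent, so the type is optional

// open(path: maybePath)                   // does not compile: 'String?' is not a 'String'
if let path = maybePath {
    open(path: path)                       // inside this branch, path is guaranteed non-nil
}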
So in conclusion, a null-safe type system is one of the major advances in modern programming languages.
The original idea of optionals (which existed long before Swift) is to force the programmer to check a value for nil before using it — or to prevent outside code from passing nil where it is not allowed. A huge share of crashes in software, maybe even most of them, happen at address 0x00000000 (or with a NullPointerException, or the like) precisely because it is far too easy to forget about the nil-pointer scenario. (In 2009, Tony Hoare apologized for inventing the null pointer.)
Not using optionals is as valid and widespread a use case as using them: when the value absolutely cannot be missing, the type should be non-optional; when it can, it should be optional.
But currently the existing frameworks are written in Obj-C without optionals in mind, so the automatically generated bridges between Swift and Obj-C just have to take and return optionals, because it is impossible to automatically and deeply analyze each method and figure out which arguments and return values should be optional. I'm sure that over time Apple will manually fix every case where they got it wrong; right now you should not use those frameworks as an example, because they are definitely not a good one. (For good examples, you could look at a popular functional language like Haskell, which has had optionals from the beginning.)
If I only want to check if something is impossible or not (i.e., I will not be using something like if(possible)), should I name the boolean notPossible and use if(notPossible) or should I name it possible and use if(!possible) instead?
And just to be sure, if I also have to check for whether it is possible, I would name the boolean possible and use if(possible) along with else, right?
You should probably use isPossible.
Negative names for booleans like notPossible is a very bad idea. You might end up having to write things like if (!notPossible) which makes the code difficult to read. Don't do that.
I tend to err on the side of positivity here and use possible. It means someone can't write some code later that does this...
if (!notPossible)
Which is unreadable.
I like to name booleans with consistent short-verb prefixes such as is or has, and I'd find a not prefix peculiar and tricky to mentally "parse" (so, I suspect, would many readers of the code, whether aware of feeling that way or not;-) -- so, I'd either name the variable isPossible (and use !isPossible), or just name the variable isImpossible (many adjectives have handy antonyms like this, and for a prefix has you could use a prefix lacks to make an antonym of the whole thing;-).
I generally try to name my flags so that they will read as nicely as possible where they are being used. That means try to name them so that they won't have to be negated where they are used.
I know some folks insist all names be positive, so that people don't get confused between the negations in the name and those in their head. That's probably a good policy for a boolean in a class interface. But if it's local to a single source file, and I know all the calls will negate it, I'd much rather see if (impossible && ...) than if (!isPossible && ...).
Whichever is easier to read in your specific application. Just be sure you don't end up with "if (!notPossible)".
I think it's considered preferable to avoid using negatives in variable names so that you avoid the double negative of if(!notPossible).
I recommend using isPossible. It just seems to make a lot of sense to use 'is' (or perhaps 'has') whenever you can when naming boolean variables. This is logical because you want to find out if something is possible, right?
I agree that negatively named booleans are a bad idea, but sometimes it's possible to reframe the condition in a way that's positive. For example, you might use pathIsBlocked rather than cannotProceed, or rather than isNotAbleToDie you could use isImmortal.
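A quick sketch of that reframing (flag names invented for illustration):

// Negative name: the check reads as a double negative.
let cannotProceed = false
if !cannotProceed { /* move forward */ }     // "if not cannot proceed"

// Positive reframing of the same fact:
let pathIsBlocked = false
if !pathIsBlocked { /* move forward */ }     // "if the path is not blocked"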
You should name it for exactly what is being stored in it. If you're storing whether it's possible then name it isPossible. If you're storing whether it's impossible name it isImpossible.
In either case you can use an else if you need to check for both cases.
From your description it seems more important to you to check for impossibility so I'd go with isImpossible:
if(isImpossible)
{
    // handle the impossible case here
}
else
{
    // ...and the possible case here
}
Possible Duplicate:
C# 'var' keyword versus explicitly defined variables
EDIT:
For those who are still viewing this, I've completely changed my opinion on var. I think that was largely due to the responses to this topic. I'm an avid 'var' user now, and I think its proponents' comments below were absolutely correct in pretty much all cases. I think the thing I like most about var is that it REALLY DOES reduce repetition (conforms to DRY) and makes your code considerably cleaner. It supports refactoring (when you need to change the return type of something, you have less code cleanup to deal with, and NO, NOT everyone has a fancy refactoring tool!), and anecdotally, people don't really seem to have a problem not knowing the specific type of a variable up front (it's easy enough to "discover" the capabilities of a type on demand, which is generally a necessity anyway, even if you DO know the name of a type).
So here's a big applause for the 'var' keyword!!
This is a relatively simple question...more of a poll really. I am a HUGE fan of C#, and have used it for over 8 years, since before .NET was first released. I am a fan of all of the improvements made to the language, including lambda expressions, extension methods, LINQ, and anonymous types. However, there is one feature from C# 3.0 that I feel has been SORELY misused....the 'var' keyword.
Since the release of C# 3.0, on blogs, forums, and yes, even Stackoverflow, I have seen var replace pretty much every variable that has been written! To me, this is a grave misuse of the feature, and it leads to very arbitrary code that can harbor many obfuscated bugs due to the lack of clarity about what type a variable actually is.
There is only a single truly valid use for 'var' (in my opinion at least). What is that valid use, you ask? The only valid use is when you are incapable of knowing the type, and the only instance where that can happen:
When accessing an anonymous type
Anonymous types have no name you can write in a declaration, so var is the only option. It's the only reason why var was added... to support anonymous types.
So... what's your opinion? Given the prolific use of var on blogs and forums, and its use being suggested/enforced by tools like ReSharper, many up-and-coming developers will see it as a completely valid thing.
Do you think var should be used so prolifically?
Do you think var should ever be used for anything other than an anonymous type?
Is it acceptable to use in code posted to blogs to maintain brevity... terseness? (Not sure about the answer to this one myself... perhaps with a disclaimer.)
Should we, as a community, encourage better use of strongly typed variables to improve code clarity, or allow C# to become more vague and less descriptive?
I would like to know the community's opinions. I see var used a lot, but I have very little idea why, and perhaps there is a good reason (i.e., brevity/terseness).
var is a splendid idea to help implement a key principle of good programming: DRY, i.e., Don't Repeat Yourself.
VeryComplicatedType x = new VeryComplicatedType();
is bad coding, because it repeats VeryComplicatedType, and the effects are all negative: more verbose and boilerplatey code, less readability, silly "makework" for both the reader and the writer of the code. Because of all this, I count var as a very useful enhancement in C# 3 compared to Java and previous versions of C#.
Of course it can be mildly misused, by using as the RHS an expression whose type is not clear and obvious (e.g., a call to a method whose declaration may be far away) -- such misuse may decrease readability (by forcing the reader to hunt for the method's declaration or ponder deeply about some other subtle expression's type) instead of increasing it. But if you stick to using var to avoid repetition, you'll be in its sweet spot, and no misuse.
I think it should be used in those situations where the type is clearly specified elsewhere in the same statement:
Dictionary<string, List<int>> myHashMap = new Dictionary<string, List<int>>();
is a pain to read. This could be replaced by the following with no loss of clarity:
var myHashMap = new Dictionary<string, List<int>>();
Pop quiz!
What type is this:
var Foo = new string[]{"abc","123","yoda"};
How about this:
var Bar = {"abc","123","yoda"};
It takes me roughly no longer to determine what types those are than with the explicitly redundant specification of the type. As a programmer I have no issues with letting a compiler figure out things that are obvious to me. You may disagree.
Cheers.
Never say never. I'm pretty sure there are a bunch of questions where people have expounded their views on var, but here's mine once more.
var is a tool; use it where it's appropriate, and don't use it when it's not. You're right that the only required use of var is when addressing anonymous types, in which case you have no type name to use. Personally, I'd say any other use has to be considered in terms of readability and laziness; specifically, when avoiding use of a cumbersome type name.
var i = 5;
(Laziness)
var list = new List<Customer>();
(Convenience)
var customers = GetCustomers();
(Questionable; I'd consider it acceptable if and only if GetCustomers() returns an IEnumerable)
Read up on Haskell. It's a statically typed language in which you rarely have to state the type of anything. So it uses the same approach as var, as the standard "idiomatic" coding style.
If the compiler can figure something out for you, why write the same thing twice?
A colleague of mine was at first very opposed to var, just as you are, but has now started using it habitually. He was worried it would make programs less self-documenting, but in practice that's caused more by overly long methods.
var MyCustomers = from c in Customers
where c.City == "Madrid"
select new { c.Company, c.Mail };
If I need only Company and Mail from the Customers collection, it would be nonsense to define a new type with just the members I need.
If you feel that giving the same information twice reduces errors (the designers of many web forms that insist you type in your email address twice seem to agree), then you'll probably hate var. If you write a lot of code that uses complicated type specifications then it's a godsend.
EDIT: To expand on this a bit (in case it sounds like I'm not in favour of var):
In the UK (at least at the time I went), it was standard practice to make Computer Science students learn how to program in Standard ML. Like other functional languages it has a type system that puts languages in the C++/Java mould to shame.
Anyway, what I noticed at the time (and heard similar remarks from other students) was that it was a nightmare to get your SML programs to compile because the compiler was so incredibly picky about types, but once they did compile, they almost always ran without error.
This aspect of SML (and other functional languages) seems to be one that the questioner sees as a 'good thing' - i.e. that anything that helps the compiler catch more errors at compile time is good.
Now here's the thing with SML: it uses type inference exclusively when assigning. So I don't think type inference can be inherently bad.
I agree with others that var eliminates redundancy. I have decided to use var where it eliminates redundancy as much as possible. I think consistency is important. Choose a style and stick with it through a project.
As Earwicker indicated, there are some functional languages, Haskell being one and F# being another, where such type inference is used much more pervasively -- the C# analogy would be declaring the return types and parameter types of methods as "var", and then having the compiler infer the static type for you. Static and explicit typing are two orthogonal concerns.
In fact, is it even correct to say that use of "var" is dynamic typing? From what I understood, that's what the new "dynamic" keyword in C# 4.0 is for. "var" is for static type inference. Correct me if I am wrong.
I must admit that when I first saw the var keyword pop up, I was very skeptical.
However, it is definitely an easy way to shorten a new declaration, and I use it all the time for that.
However, when I change the return type of an underlying method and accept the result using var, I do get the occasional run-time error. Most problems are still picked up by the compiler.
The second issue I run into is when I am not sure which method to use (and I am simply looking through the auto-complete). If I choose the wrong one and expect it to be of type FOO when it is actually type BAR, it takes a while to figure that out.
If I had literally specified the variable type in both cases, it would have saved a bit of frustration.
Overall, though, the benefits exceed the problems.
I have to dissent from the view that var reduces redundancy in any meaningful way. In the cases that have been put forward here, type inference can and should come out of the IDE, where it can be applied much more liberally with no loss of readability.