What is the purpose of optionals in Swift? [duplicate]

This question already has answers here:
What is an optional value in Swift?
(15 answers)
Closed 5 years ago.
So I've recently been learning Swift (3.1), and I've had problems with understanding the purpose/practical usage of optionals.
I've researched several sites, and all they talk about is how to use them, not why or when they are used (Sort of why, but not in a way that seems applicable to me). My citations are at the end.
I understand how it's either nil or a value, and how you need to unwrap its possible value with !, as well as how to create implicitly unwrapped (auto-unwrapping) optionals.
My main question is just, what are the practical uses of optionals? Apple's Swift handbook said that optionals are the central point of most of Swift's most powerful features, so I feel like it is a concept very worthwhile to fully understand. I fully understand how to write them, I just can't grasp why or when you would use them.
Citations:
https://www.tutorialspoint.com/swift/swift_optionals.htm
http://blog.teamtreehouse.com/understanding-optionals-swift
Thank you for your time.

As per my understanding:
Swift is a safety-focused language, and one way it enforces safety is by making you state, for each variable, whether it can be nil or must always hold a value.
Optionals are used when you can't guarantee ahead of time that a variable will hold a value at every point in the program.
For example, suppose you bind a text field's text to a variable. The user might simply leave the field blank,
so at runtime there would be no value; to avoid a crash in that case you declare the variable as an optional.
When you access an optional's value, you have to make sure a value is actually present, either with "if let" or by explicitly unwrapping the optional. Explicitly unwrapping optionals is a bad move and should only be done when you're sure there is a value present in that optional variable.
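For illustration, a minimal sketch of that pattern (the names here are made up, not from the question):

// The user's input may be absent, so it is modelled as an optional String.
// readLine() itself returns String? for exactly this reason.
let enteredName: String? = readLine()

// Safe: the body only runs when a value is actually present.
if let name = enteredName {
    print("Hello, \(name)")
} else {
    print("No name was entered")
}

// Risky: force-unwrapping would crash at runtime if enteredName were nil.
// print("Hello, \(enteredName!)")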
Swift uses optionals everywhere.
Disclosure: I'm still new to Swift and this is my first answer, so it may not be up to Stack Overflow's standards.

Related

weak vs unowned in Swift. What are the internal differences?

I understand the usage and superficial differences between weak and unowned in Swift:
The simplest example I've seen is that if there is a Dog and a Bone, the Bone may have a weak reference to the Dog (and vice versa) because each can exist independently of the other.
On the other hand, in the case of a Human and a Heart, the Heart may have an unowned reference to the human, because as soon as the Human becomes... "dereferenced", the Heart can no longer reasonably be accessed. That and the classic example with the Customer and the CreditCard.
So this is not a duplicate of questions asking about that.
My question is, what is the point in having two such similar concepts? What are the internal differences that necessitate having two keywords for what seem essentially 99% the same thing? The question is WHY the differences exist, not what the differences are.
Given that we can just set up a variable like this: weak var customer: Customer!, the advantage of unowned variables being non-optional is a moot point.
The only practical advantage I can see of using unowned vs implicitly unwrapping a weak variable via ! is that we can make unowned references constant via let.
... and that maybe the compiler can make more effective optimizations for that reason.
Is that true, or is there something else happening behind the scenes that provides a compelling argument for keeping both keywords (even though the slight distinction is – based on Stack Overflow traffic – evidently confusing to new and experienced developers alike)?
I'd be most interested to hear from people who have worked on the Swift compiler (or other compilers).
My question is, what is the point in having two such similar concepts? What are the internal differences that necessitate having two keywords for what seem essentially 99% the same thing?
They are not at all similar. They are as different as they can be.
weak is a highly complex concept, introduced when ARC was introduced. It performs the near-miraculous task of allowing you to prevent a retain cycle (by avoiding a strong reference) without risking a crash from a dangling pointer when the referenced object goes out of existence — something that used to happen all the time before ARC was introduced.
unowned, on the other hand, is non-ARC weak (to be specific, it is the same as non-ARC assign). It is what we used to have to risk, it is what caused so many crashes, before ARC was introduced. It is highly dangerous, because you can get a dangling pointer and a crash if the referenced object goes out of existence.
The reason for the difference is that weak, in order to perform its miracle, involves a lot of extra overhead for the runtime, inserted behind the scenes by the compiler. weak references are memory-managed for you. In particular, the runtime must maintain a scratchpad of all references marked in this way, keeping track of them so that if an object weakly referenced goes out of existence, the runtime can locate that reference and replace it by nil to prevent a dangling pointer.
In Swift, as a consequence, a weak reference is always to an Optional (exactly so that it can be replaced by nil). This is an additional source of overhead, because working with an Optional entails extra work, as it must always be unwrapped in order to get anything done with it.
For this reason, unowned is always to be preferred wherever it is applicable. But never use it unless it is absolutely safe to do so! With unowned, you are throwing away automatic memory management and safety. You are deliberately reverting to the bad old days before ARC.
In my usage, the common case arises in situations where a closure needs a capture list involving self in order to avoid a retain cycle. In such a situation, it is almost always possible to say [unowned self] in the capture list. When we do:
It is more convenient for the programmer because there is nothing to unwrap. [weak self] would be an Optional in need of unwrapping in order to use it.
It is more efficient, partly for the same reason (unwrapping always adds an extra level of indirection) and partly because it is one fewer weak reference for the runtime's scratchpad list to keep track of.
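To make that concrete, here is a small sketch of the two capture styles (the class and property names are invented for illustration):

final class Downloader {
    var onFinish: (() -> Void)?

    func start() {
        // [unowned self]: nothing to unwrap, but the closure will crash
        // if it fires after this Downloader has been deallocated.
        onFinish = { [unowned self] in
            self.report()
        }

        // [weak self]: self is an Optional inside the closure and must be
        // unwrapped, but it is safe even if the object is already gone.
        onFinish = { [weak self] in
            guard let self = self else { return }
            self.report()
        }
    }

    func report() { print("finished") }
}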
A weak reference is actually set to nil when the referent deallocates, and you must check it; an unowned reference is also set to nil, but you are not forced to check it.
You can check a weak reference against nil with if let, guard, ?., etc., but it makes no sense to check an unowned one, because you have declared that nil is impossible. If you are wrong, you crash.
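To make the difference concrete, here is a small sketch built on the Customer/CreditCard example from the question (types simplified for illustration):

class Customer {
    let name: String
    init(name: String) { self.name = name }
}

class CreditCard {
    // weak must be an optional var; it becomes nil when the Customer deallocates.
    weak var weakHolder: Customer?
    // unowned can be a non-optional let; reading it after the Customer deallocates traps.
    unowned let unownedHolder: Customer

    init(holder: Customer) {
        self.weakHolder = holder
        self.unownedHolder = holder
    }
}

var customer: Customer? = Customer(name: "Ada")
let card = CreditCard(holder: customer!)

customer = nil   // the last strong reference goes away

if let holder = card.weakHolder {
    print("still have \(holder.name)")
} else {
    print("weak reference was set to nil")   // this branch runs
}

// print(card.unownedHolder.name)   // would crash: object already deallocated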
I have found that, in practice, I never use unowned. There is a minuscule performance penalty, but the extra safety from using weak is worth it to me.
I would leave unowned usage to very specific code that needs to be optimized, not general app code.
The "why does it exist" that you are looking for is that Swift is meant to be able to write system code (like OS kernels) and if they didn't have the most basic primitives with no extra behavior, they could not do that.
NOTE: I had previously said in this answer that unowned is not set to nil. That is wrong; a bare unowned is set to nil. An unowned(unsafe) is not set to nil and could be a dangling pointer. This is for high-performance needs and should generally not be used in application code.

Downcast from '[UICollectionViewLayoutAttributes]?' to '[UICollectionViewLayoutAttributes]' only unwraps optionals

I'm trying to build a collection view with pagination (showing cells as previews) and found this tutorial. I'm getting the error from Xcode and guessing this must be due to an update to Xcode (since the tutorial seemed to have worked for a lot of people). Would love a hint on how to fix this:
Downcast from '[UICollectionViewLayoutAttributes]?' to '[UICollectionViewLayoutAttributes]' only unwraps optionals; did you mean to use '!'?
You probably did something like this (someOptionalAttributes being of type [UICollectionViewLayoutAttributes]?):
someOptionalAttributes as! [UICollectionViewLayoutAttributes]
However, because someOptionalAttributes is just the optional version of [UICollectionViewLayoutAttributes], you do not need a force downcast, you just need to unwrap it:
someOptionalAttributes!
Force downcasting is only necessary if casting to an entirely new type (usually a subclass).
So, yes, you did mean to use ! as said in the error. If you want to be safe, you could use a number of different optional unwrapping techniques.
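For example, assuming (as in that tutorial) the cast happens inside a flow layout subclass, a safer version looks roughly like this (the class name is hypothetical):

import UIKit

class PreviewFlowLayout: UICollectionViewFlowLayout {
    override func layoutAttributesForElements(in rect: CGRect) -> [UICollectionViewLayoutAttributes]? {
        // super already returns [UICollectionViewLayoutAttributes]?, so unwrap it
        // instead of force-downcasting.
        guard let attributes = super.layoutAttributesForElements(in: rect) else {
            return nil
        }
        // ... adjust the attributes for the preview/pagination effect ...
        return attributes
    }
}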

Why NOT use optionals in Swift?

I was reading up on how to program in Swift, and the concept of optionals bugged me a little bit. Not really in terms of why to use optionals, that makes sense, but more so as to in what case would you not want to use optionals. From what I understand, an optional just allows an object to be set to nil, so why would you not want that feature? Isn't setting an object to nil the way you tell ARC to release an object? And don't most of the functions in Foundation and Cocoa return optionals? So outside of having to type an extra character each time to refer to an object, is there any good reason NOT to use an optional in place of a regular type?
There are tons of reasons NOT to use optionals. The main reason: you want to express that a value MUST be available. For example, when you open a file, you want the file name to be a string, not an optional string. Using nil as a filename simply makes no sense.
I will consider two main use cases: Function arguments and function return values.
For function arguments, the following holds: if the argument needs to be provided, an optional should not be used. If handing in nothing is okay and a valid (documented!) input, then hand in an optional.
For function return values, returning a non-optional is especially nice: you promise the caller that they will receive an object, not either an object or nothing. When you do not return an optional, the caller knows they may use the value right away instead of checking for nil first.
For example, consider a factory method. Such a method should always return an object, so why would you use an optional here? There are a lot of examples like this.
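A small sketch of that contrast (the types and functions are hypothetical):

struct Order {
    let id: Int
}

// A factory: it can always produce a value, so it returns a non-optional.
func makeOrder(id: Int) -> Order {
    return Order(id: id)
}

// A lookup: the order may not exist, so an optional is the honest return type.
func findOrder(id: Int, in orders: [Order]) -> Order? {
    return orders.first { $0.id == id }
}

let order = makeOrder(id: 1)            // usable right away, no check needed
if let found = findOrder(id: 2, in: [order]) {
    print("found order \(found.id)")
} else {
    print("no such order")
}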
Actually, most APIs should rather use non-optionals instead of optionals. Most of the time, simply passing/receiving possibly nothing is just not what you want. There are rather few cases where nothing is an option.
Each case where optional is used must be thoroughly documented: In which circumstances will a method return nothing? When is it okay to hand nothing to a method and what will the consequences be? A lot of documentation overhead.
Then there is also conciseness: If you use an API that uses optional all over the place, your code will be cluttered with null-checks. Of course, if every use of optional is intentional, then these checks are fine and necessary. However, if the API only uses optional because its author was lazy and was simply using optional all over the place, then the checks are unnecessary and pure boilerplate.
But beware!
My answer may sound as if the concept of optionals is quite crappy. The opposite is true! By having a concept like optionals, the programmer is able to declare whether handing in/returning nothing is okay. The caller of a function is always aware of that and the compiler enforces safety. Compare that to plain old C: you could not declare whether a pointer could be null. You could add documentation comments that state whether it may be null, but such comments were not enforced by the compiler. If the caller forgot to check a return value for null, you received a segfault. With optionals you can be sure that no one dereferences a null pointer anymore.
So in conclusion, a null-safe type system is one of the major advances in modern programming languages.
The original idea of optionals (which existed long before Swift) is to force programmer to check value for nil before using it — or to prevent outside code from passing nil where it is not allowed. A huge part of crashes in software, maybe even most of them, happens at address 0x00000000 (or with NullPointerException, or alike) precisely because it is way too easy to forget about nil-pointer scenario. (In 2009, Tony Hoare apologized for inventing null pointers).
Not using optionals is as valid and widespread a use case as using them: when the value absolutely cannot be missing, there should be a non-optional type; when it can, there should be an optional.
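In Swift terms that rule is just the presence or absence of a question mark in the declaration (the field names are invented for illustration):

struct Person {
    var name: String          // must always have a value
    var middleName: String?   // may legitimately be missing
}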
But currently, the existing frameworks are written in Obj-C without optionals in mind, so automatically generated bridges between Swift and Obj-C just have to take and return optionals, because it is impossible to automatically deeply analyze each method and figure out which arguments and return values should be optionals or not. I'm sure over time Apple will manually fix every case where they got it wrong; right now you should not use those frameworks as an example, because they are definitely not a good one. (For good examples, you could check a popular functional language like Haskell, which has had optionals since the beginning.)

Why does Objective-C not support overloading? [duplicate]

This question already has an answer here:
Closed 11 years ago.
Possible Duplicate:
Why Objective-C doesn't support method overloading?
I was going through this link which has the answer to my question. But I was not able to understand what the right guy is trying to say. It will be really great if someone can simplify and provide an explanation.
In C++ the compiler has type information and can choose between several methods based on the types. To do the same in Objective C it would have to be done at run-time, because the compiler knows little about object types due to the dynamic nature of the language (i.e. all objects are of type id). While this seems possible, it would be very inefficient in practice.
I think it's a historical artifact. Objective-C is derived from C and Smalltalk, and neither of them support overloading.
If you want overloading, you can use Objective-C++ instead. Just name your sources with the ".mm" extension instead of just ".m".
Just be careful to be sure you know what you are doing if you mix C++ and Objective-C idioms. For example, Objective-C exceptions and C++ exceptions are two completely different animals and cannot be used interchangeably.
What he is trying to say is that method overloading is not possible in a dynamically typed language, because the type of each object is not known until run time. In a statically typed language, overloading can be resolved at compile time: you create several functions with the same name, and the compiler has enough type information to resolve the ambiguity between the various calls. In a dynamically typed language, since the objects are only resolved at run time, it is not possible to choose between the various calls at compile time.
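For contrast, here is roughly how a statically typed language such as Swift (the language discussed elsewhere on this page) resolves overloads; this is only an illustration of the general idea, not Objective-C:

// Two functions share a name; the compiler picks one based on the static
// argument type, so no run-time lookup is needed.
func describe(_ value: Int) -> String {
    return "an Int: \(value)"
}

func describe(_ value: String) -> String {
    return "a String: \(value)"
}

print(describe(42))          // resolved at compile time to the Int overload
print(describe("hello"))     // resolved at compile time to the String overload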

The evilness of 'var' in C#? [duplicate]

This question already has answers here:
Closed 13 years ago.
Possible Duplicate:
C# 'var' keyword versus explicitly defined variables
EDIT:
For those who are still viewing this, I've completely changed my opinion on var. I think it was largely due to the responses to this topic that I did. I'm an avid 'var' user now, and I think its proponents' comments below were absolutely correct in pretty much all cases. I think the thing I like most about var is it REALLY DOES reduce repetition (conforms to DRY), and makes your code considerably cleaner. It supports refactoring (when you need to change the return type of something, you have less code cleanup to deal with, and NO, NOT everyone has a fancy refactoring tool!), and anecdotally, people don't really seem to have a problem not knowing the specific type of a variable up front (it's easy enough to "discover" the capabilities of a type on-demand, which is generally a necessity anyway, even if you DO know the name of a type).
So here's a big applause for the 'var' keyword!!
This is a relatively simple question...more of a poll really. I am a HUGE fan of C#, and have used it for over 8 years, since before .NET was first released. I am a fan of all of the improvements made to the language, including lambda expressions, extension methods, LINQ, and anonymous types. However, there is one feature from C# 3.0 that I feel has been SORELY misused....the 'var' keyword.
Since the release of C# 3.0, on blogs, forums, and yes, even Stack Overflow, I have seen var replace pretty much every variable that has been written! To me, this is a grave misuse of the feature, and leads to very arbitrary code that can have many obfuscated bugs due to the lack of clarity about what type a variable actually is.
There is only a single truly valid use for 'var' (in my opinion at least). What is that valid use, you ask? The only valid use is when you are incapable of knowing the type, and the only instance where that can happen:
When accessing an anonymous type
Anonymous types have no compile-time identity, so var is the only option. It's the only reason why var was added...to support anonymous types.
So... what's your opinion? Given the prolific use of var on blogs, forums, suggested/enforced by tools like ReSharper, etc., many up-and-coming developers will see it as a completely valid thing.
Do you think var should be used so prolifically?
Do you think var should ever be used for anything other than an anonymous type?
Is it acceptable to use in code posted to blogs to maintain brevity... terseness? (Not sure about the answer to this one myself... perhaps with a disclaimer)
Should we, as a community, encourage better use of strongly typed variables to improve code clarity, or allow C# to become more vague and less descriptive?
I would like to know the community's opinions. I see var used a lot, but I have very little idea why, and perhaps there is a good reason (e.g. brevity/terseness).
var is a splendid idea to help implement a key principle of good programming: DRY, i.e., Don't Repeat Yourself.
VeryComplicatedType x = new VeryComplicatedType();
is bad coding, because it repeats VeryComplicatedType, and the effects are all negative: more verbose and boilerplatey code, less readability, silly "makework" for both the reader and the writer of the code. Because of all this, I count var as a very useful enhancement in C# 3 compared to Java and previous versions of C#.
Of course it can be mildly misused, by using as the RHS an expression whose type is not clear and obvious (e.g., a call to a method whose declaration may be far away) -- such misuse may decrease readability (by forcing the reader to hunt for the method's declaration or ponder deeply about some other subtle expression's type) instead of increasing it. But if you stick to using var to avoid repetition, you'll be in its sweet spot, and no misuse.
I think it should be used in those situations where the type is clearly specified elsewhere in the same statement:
Dictionary<string, List<int>> myHashMap = new Dictionary<string, List<int>>();
is a pain to read. This could be replaced by the following with no loss of clarity:
var myHashMap = new Dictionary<string, List<int>>();
Pop quiz!
What type is this:
var Foo = new string[]{"abc","123","yoda"};
How about this:
var Bar = {"abc","123","yoda"};
It takes me roughly no longer to determine what types those are than with the explicitly redundant specification of the type. As a programmer I have no issues with letting a compiler figure out things that are obvious to me. You may disagree.
Cheers.
Never say never. I'm pretty sure there are a bunch of questions where people have expounded their views on var, but here's mine once more.
var is a tool; use it where it's appropriate, and don't use it when it's not. You're right that the only required use of var is when addressing anonymous types, in which case you have no type name to use. Personally, I'd say any other use has to be considered in terms of readability and laziness; specifically, when avoiding use of a cumbersome type name.
var i = 5;
(Laziness)
var list = new List<Customer>();
(Convenience)
var customers = GetCustomers();
(Questionable; I'd consider it acceptable if and only if GetCustomers() returns an IEnumerable)
Read up on Haskell. It's a statically typed language in which you rarely have to state the type of anything. So it uses the same approach as var, as the standard "idiomatic" coding style.
If the compiler can figure something out for you, why write the same thing twice?
A colleague of mine was at first very opposed to var, just as you are, but has now started using it habitually. He was worried it would make programs less self-documenting, but in practice that's caused more by overly long methods.
var MyCustomers = from c in Customers
                  where c.City == "Madrid"
                  select new { c.Company, c.Mail };
If I need only Company and Mail from the Customers collection, it's nonsense to define a new type with just the members I need.
If you feel that giving the same information twice reduces errors (the designers of many web forms that insist you type in your email address twice seem to agree), then you'll probably hate var. If you write a lot of code that uses complicated type specifications then it's a godsend.
EDIT: To expand this a bit (in case it sounds like I'm not in favour of var):
In the UK (at least at the time I went), it was standard practice to make Computer Science students learn how to program in Standard ML. Like other functional languages it has a type system that puts languages in the C++/Java mould to shame.
Anyway, what I noticed at the time (and heard similar remarks from other students) was that it was a nightmare to get your SML programs to compile because the compiler was so incredibly picky about types, but once they did compile, they almost always ran without error.
This aspect of SML (and other functional languages) seems to be one that the questioner sees as a 'good thing' - i.e. that anything that helps the compiler catch more errors at compile time is good.
Now here's the thing with SML: it uses type inference exclusively when assigning. So I don't think type inference can be inherently bad.
I agree with others that var eliminates redundancy. I have decided to use var where it eliminates redundancy as much as possible. I think consistency is important. Choose a style and stick with it through a project.
As Earwicker indicated, there are some functional languages, Haskell being one and F# being another, where such type inference is used much more pervasively -- the C# analogy would be declaring the return types and parameter types of methods as "var", and then having the compiler infer the static type for you. Static and explicit typing are two orthogonal concerns.
In fact, is it even correct to say that use of "var" is dynamic typing? From what I understood, that's what the new "dynamic" keyword in C# 4.0 is for. "var" is for static type inference. Correct me if I am wrong.
I must admit when I first saw the var keyword pop up I was very skeptical.
However, it is definitely an easy way to shorten the lines of a new declaration, and I use it all the time for that.
However, when I change the type of an underlying method and accept the return type using var, I do get the occasional run-time error. Most are still picked up by the compiler.
The second issue I run into is when I am not sure which method to use (and I am simply looking through the auto-complete). If I choose the wrong one and expect it to be of type FOO and it is of type BAR, then it takes a while to figure that out.
If I had literally specified the variable type in both cases, it would have saved a bit of frustration.
Overall, the benefits exceed the problems.
I have to dissent with the view that var reduces redundancy in any meaningful way. In the cases that have been put forward here, type inference can and should come out of the IDE, where it can be applied much more liberally with no loss of readability.