self. in trailing Swift closures, meaning and purpose?

whenever I use a trailing closure on an action ... example:
run(SKAction.wait(forDuration: 10)){timeRemains = false}
I’m seeing this:
Reference to property '(anything)' in closure requires explicit 'self.' to make capture semantics explicit.
What does this mean? And what is it on about? I'm curious because I'm only ever doing this in the context/scope of the property or function I want to call in the trailing closure, so I don't know why I need `self`, and I'm fascinated by the use of the word "semantics" here. Does it have some profound meaning, and will I magically understand closures if I understand this?

Does it have some profound meaning, and will I magically understand closures if I understand this?
No and no. Or perhaps, maybe and maybe.
The reason for this syntactical demand is that this might really be a closure, that is, it might capture and preserve its referenced environment (because the anonymous function you are passing might be stored somewhere for a while). That means that if you refer here to some property of self, such as myProperty, you are in fact capturing a reference to self. Swift demands that you acknowledge this fact explicitly (by saying self.myProperty, not merely myProperty) so that you understand that this is what's happening.
Why do you need to understand it? Because under some circumstances you can end up with a retain cycle, or in some other way preserving the life of self in ways that you didn't expect. This is a way of getting you to think about that fact.
(If it is known that this particular function will not act as a closure, i.e. that it will be executed immediately, then there is no such danger and Swift will not demand that you say self explicitly.)
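To make that concrete, here is a minimal sketch of the situation in the question; the scene class name and the timeRemains property are assumptions based on the snippet above. The completion block handed to run(_:completion:) escapes (it is stored and run 10 seconds later), which is why Swift demands the explicit self., and a capture list is the usual way to avoid extending self's lifetime.

import SpriteKit

class CountdownScene: SKScene {          // hypothetical scene from the question
    var timeRemains = true               // hypothetical property from the snippet

    override func didMove(to view: SKView) {
        // The completion block escapes: it is stored and run 10 seconds later,
        // so referring to timeRemains captures self, and Swift requires the
        // explicit `self.` to make that capture visible.
        run(SKAction.wait(forDuration: 10)) {
            self.timeRemains = false
        }

        // A capture list is the usual way to avoid keeping self alive:
        run(SKAction.wait(forDuration: 10)) { [weak self] in
            self?.timeRemains = false
        }
    }
}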

Related

Where to define typecast to struct in MATLAB OOP?

In the MATLAB OOP framework, it can be useful to cast an object to a struct, i.e., define a function that takes an object and returns a struct with equivalent fields.
What is the appropriate place to do this? I can think of several options:
Build a separate converter object that takes care of conversions between various classes
Add a function struct to the class that does the conversion to struct, and make the constructor accept structs
Neither option seems very elegant: the first means that logic about the class itself is moved to another class, while the second provokes users to call the struct function on any object, which will in general give a warning (structOnObject).
Are there alternatives?
Personally I'd go with the second option, and not worry about provoking users to call struct on other classes; you can only worry about your own code, not that of a third-party, even if the third party is MathWorks. In any case, if they do start to call struct on an arbitrary class, it's only a warning; nothing actually dangerous is likely to happen, it's just not a good practice.
But if you're concerned about that, you can always call your converter method toStruct rather than struct. Or perhaps the best (although slightly more complex) way might be to overload cast for your class, accepting and handling the option 'struct', and passing any other option through to builtin('cast',...).
PS The title of your question refers to typecasting, but what you're after here is casting. In MATLAB, typecasting is a different operation, involving taking the exact bits of one type and reinterpreting them as bits of another type (possibly an array of the output type). See doc cast and doc typecast for more information on the distinction.
The second option sounds much better to me.
A quick and dirty way to get rid of the warning would be disabling it by calling
warning('off', 'MATLAB:structOnObject')
at the start of your program.
The solutions provided in Sam Roberts' answer are however much cleaner. I personally would go for the toStruct() method.

weak vs unowned in Swift. What are the internal differences?

I understand the usage and superficial differences between weak and unowned in Swift:
The simplest example I've seen is that if there is a Dog and a Bone, the Bone may have a weak reference to the Dog (and vice versa) because each can exist independently of the other.
On the other hand, in the case of a Human and a Heart, the Heart may have an unowned reference to the human, because as soon as the Human becomes... "dereferenced", the Heart can no longer reasonably be accessed. That and the classic example with the Customer and the CreditCard.
So this is not a duplicate of questions asking about that.
My question is, what is the point in having two such similar concepts? What are the internal differences that necessitate having two keywords for what seem essentially 99% the same thing? The question is WHY the differences exist, not what the differences are.
Given that we can just set up a variable like this: weak var customer: Customer!, the advantage of unowned variables being non-optional is a moot point.
The only practical advantage I can see of using unowned vs implicitly unwrapping a weak variable via ! is that we can make unowned references constant via let.
... and that maybe the compiler can make more effective optimizations for that reason.
Is that true, or is there something else happening behind the scenes that provides a compelling argument for keeping both keywords (even though the slight distinction is – based on Stack Overflow traffic – evidently confusing to new and experienced developers alike)?
I'd be most interested to hear from people who have worked on the Swift compiler (or other compilers).
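For reference, here is a small sketch of the two declarations the question is comparing, using the classic Customer/CreditCard example; the class names are just placeholders.

class Customer {}

class CreditCard {
    // Implicitly unwrapped weak reference: optional under the hood, but usable
    // without explicit unwrapping; it must be a var, and it becomes nil once
    // the customer is deallocated (so an access after that point traps).
    weak var weakCustomer: Customer!

    // Unowned reference: non-optional and allowed to be a let constant;
    // accessing it after the customer is deallocated is an error.
    unowned let unownedCustomer: Customer

    init(customer: Customer) {
        weakCustomer = customer
        unownedCustomer = customer
    }
}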
My question is, what is the point in having two such similar concepts? What are the internal differences that necessitate having two keywords for what seem essentially 99% the same thing?
They are not at all similar. They are as different as they can be.
weak is a highly complex concept, introduced when ARC was introduced. It performs the near-miraculous task of allowing you to prevent a retain cycle (by avoiding a strong reference) without risking a crash from a dangling pointer when the referenced object goes out of existence — something that used to happen all the time before ARC was introduced.
unowned, on the other hand, is non-ARC weak (to be specific, it is the same as non-ARC assign). It is what we used to have to risk, it is what caused so many crashes, before ARC was introduced. It is highly dangerous, because you can get a dangling pointer and a crash if the referenced object goes out of existence.
The reason for the difference is that weak, in order to perform its miracle, involves a lot of extra overhead for the runtime, inserted behind the scenes by the compiler. weak references are memory-managed for you. In particular, the runtime must maintain a scratchpad of all references marked in this way, keeping track of them so that if an object weakly referenced goes out of existence, the runtime can locate that reference and replace it by nil to prevent a dangling pointer.
In Swift, as a consequence, a weak reference is always to an Optional (exactly so that it can be replaced by nil). This is an additional source of overhead, because working with an Optional entails extra work, as it must always be unwrapped in order to get anything done with it.
For this reason, unowned is always to be preferred wherever it is applicable. But never use it unless it is absolutely safe to do so! With unowned, you are throwing away automatic memory management and safety. You are deliberately reverting to the bad old days before ARC.
In my usage, the common case arises in situations where a closure needs a capture list involving self in order to avoid a retain cycle. In such a situation, it is almost always possible to say [unowned self] in the capture list. When we do:
It is more convenient for the programmer because there is nothing to unwrap. [weak self] would be an Optional in need of unwrapping in order to use it.
It is more efficient, partly for the same reason (unwrapping always adds an extra level of indirection) and partly because it is one fewer weak reference for the runtime's scratchpad list to keep track of.
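As a minimal sketch of that capture-list situation (the Counter class and its closures are made up for illustration): the handlers are stored by the object itself and never outlive it, which is the kind of case where [unowned self] is considered safe.

final class Counter {
    var count = 0

    // [unowned self]: nothing to unwrap inside the closure, and no retain
    // cycle even though the closure is stored on self.
    lazy var increment: () -> Void = { [unowned self] in
        self.count += 1
    }

    // [weak self]: self becomes an Optional inside the closure and has to be
    // unwrapped before use, but it is simply nil if the object is gone.
    lazy var decrement: () -> Void = { [weak self] in
        guard let self = self else { return }
        self.count -= 1
    }
}

let counter = Counter()
counter.increment()
counter.increment()
counter.decrement()
// counter.count is now 1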
A weak reference is actually set to nil when the referent deallocates, and you must check it; an unowned one is also set to nil, but you are not forced to check it.
You can check a weak reference against nil with if let, guard, ?, and so on, but it makes no sense to check an unowned one, because you believe that it cannot be nil. If you are wrong, you crash.
I have found that, in practice, I never use unowned. There is a minuscule performance penalty, but the extra safety from using weak is worth it to me.
I would leave unowned usage to very specific code that needs to be optimized, not general app code.
The "why does it exist" that you are looking for is that Swift is meant to be able to write system code (like OS kernels) and if they didn't have the most basic primitives with no extra behavior, they could not do that.
NOTE: I had previously said in this answer that unowned is not set to nil. That is wrong, a bare unowned is set to nil. A unowned(unsafe) is not set to nil and could be a dangling pointer. This is for high-performance needs and should generally not be in application code.
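A tiny sketch of the distinction in that note, with made-up class names: both declarations compile, but only the bare unowned form is still managed by the runtime.

class Referent {}

class Owner {
    // Bare unowned: the runtime still tracks this reference, so touching it
    // after the referent deallocates traps reliably rather than dangling.
    unowned let tracked: Referent

    // unowned(unsafe): no tracking at all; touching it after the referent
    // deallocates is undefined behavior (a genuine dangling pointer).
    unowned(unsafe) let untracked: Referent

    init(referent: Referent) {
        tracked = referent
        untracked = referent
    }
}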

Why NOT use optionals in Swift?

I was reading up on how to program in Swift, and the concept of optionals bugged me a little bit. Not really in terms of why to use optionals (that makes sense), but more in terms of when you would not want to use them. From what I understand, an optional just allows an object to be set to nil, so why would you not want that feature? Isn't setting an object to nil the way you tell ARC to release an object? And don't most of the functions in Foundation and Cocoa return optionals? So outside of having to type an extra character each time to refer to an object, is there any good reason NOT to use an optional in place of a regular type?
There are tons of reasons NOT to use optionals. The main reason: You want to express that a value MUST be available. For example, when you open a file, you want the file name to be a string, not an optional string. Using nil as a filename simply makes no sense.
I will consider two main use cases: Function arguments and function return values.
For function arguments, the following holds: If the argument needs to be provided, an optional should not be used. If handing in nothing is okay and a valid (documented!) input, then hand in an optional.
For function return values, returning a non-optional is especially nice: You promise the caller that he will receive an object, not either an object or nothing. When you do not return an optional, the caller knows that he may use the value right away instead of checking for nil first.
For example, consider a factory method. Such a method should always return an object, so why should you use an optional here? There are a lot of examples like this.
Actually, most APIs should rather use non-optionals instead of optionals. Most of the time, simply passing/receiving possibly nothing is just not what you want. There are rather few cases where nothing is an option.
Each case where optional is used must be thoroughly documented: In which circumstances will a method return nothing? When is it okay to hand nothing to a method and what will the consequences be? A lot of documentation overhead.
Then there is also conciseness: If you use an API that uses optional all over the place, your code will be cluttered with null-checks. Of course, if every use of optional is intentional, then these checks are fine and necessary. However, if the API only uses optional because its author was lazy and was simply using optional all over the place, then the checks are unnecessary and pure boilerplate.
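A small sketch of the difference at the call site, with hypothetical Config/makeDefaultConfig/loadConfig names: the non-optional return can be used immediately, while the optional one forces a check, which is only worth it when "no value" is genuinely meaningful.

struct Config {
    let name: String
}

// Non-optional return: the caller is promised a value and can use it right away.
func makeDefaultConfig() -> Config {
    return Config(name: "default")
}

// Optional return: the caller must handle the nil case, so reserve this for
// situations where "nothing" is a documented, meaningful result.
func loadConfig(named name: String) -> Config? {
    return name.isEmpty ? nil : Config(name: name)
}

let fallback = makeDefaultConfig()
print(fallback.name)                     // no unwrapping needed

if let custom = loadConfig(named: "debug") {
    print(custom.name)                   // unwrap only where nil is possible
}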
But beware!
My answer may sound as if the concept of optionals is quite crappy. The opposite is true! By having a concept like optionals, the programmer is able to declare whether handing in/returning nothing is okay. The caller of a function is always aware of that, and the compiler enforces safety. Compare that to plain old C: You could not declare whether a pointer could be null. You could add documentation comments that state whether it may be null, but such comments were not enforced by the compiler. If the caller forgot to check a return value for null, you received a segfault. With optionals you can be sure that no one dereferences a null pointer anymore.
So in conclusion, a null-safe type system is one of the major advances in modern programming languages.
The original idea of optionals (which existed long before Swift) is to force the programmer to check a value for nil before using it — or to prevent outside code from passing nil where it is not allowed. A huge part of crashes in software, maybe even most of them, happens at address 0x00000000 (or with a NullPointerException, or alike) precisely because it is way too easy to forget about the nil-pointer scenario. (In 2009, Tony Hoare apologized for inventing null pointers.)
Not using optionals is as valid and widespread a use case as using them: when the value absolutely cannot be missing, there should be a non-optional type; when it can, there should be an optional.
But currently, the existing frameworks are written in Obj-C without optionals in mind, so the automatically generated bridges between Swift and Obj-C just have to take and return optionals, because it is impossible to automatically and deeply analyze each method and figure out which arguments and return values should be optionals or not. I'm sure over time Apple will manually fix every case where they got it wrong; right now you should not use those frameworks as an example, because they are definitely not a good one. (For good examples, you could check a popular functional language like Haskell, which has had optionals since the beginning.)

Why use empty parentheses in Scala if we can just use no parentheses to define a function that does not need any arguments?

As far as I understand, in Scala we can define a function with no parameters either by using empty parentheses after its name, or no parentheses at all, and these two definitions are not synonyms. What is the purpose of distinguishing these 2 syntaxes and when should I better use one instead of another?
It's mostly a question of convention. Methods with empty parameter lists are, by convention, evaluated for their side-effects. Methods without parameters are assumed to be side-effect free. That's the convention.
Scala Style Guide says to omit parentheses only when the method being called has no side-effects:
http://docs.scala-lang.org/style/method-invocation.html
Other answers are great, but I also think it's worth mentioning that no-param methods allow for nice access to a class's fields, like so:
person.name
Because of parameterless methods, you could easily write a method to intercept reads (or writes) to the 'name' field without breaking calling code, like so:
def name = { log("Accessing name!"); _name }
This is called the Uniform Access Principle.
I have another point to add about the usefulness of the convention encouraging an empty parentheses block in the declaration of functions with side effects (and thus later in calls to them).
It concerns the debugger.
If you add a watch in a debugger for, say, process, which in this example refers to a boolean in the focused debug context, either as a variable view or as a pure, side-effect-free function evaluation, it creates a nasty risk for your later troubleshooting.
Indeed, if the debugger keeps that watch as something it tries to evaluate whenever you change the context (change thread, move in the call stack, reach another breakpoint...), which I found to be the case at least with IntelliJ IDEA, or with Visual Studio for other languages, then the side effects of any other process function found in any browsed scope would be triggered...
Just imagine the kind of puzzling troubleshooting this could lead to if you do not keep that risk in mind, simply because of some innocent, regular naming. If the convention were enforced, then in my example the evaluation of the process boolean would never fall back to a process() function call in the debugger watches. Your debugger might still allow you to access the function explicitly by putting process() in the watches, but then it would be clear that you are not directly accessing any attribute or local variable, and a fallback to another process() function in some other browsed scope, should you be unlucky, would at the very least be far less surprising.

How can I lazily load a Perl variable?

I have a variable that I need to pass to a subroutine. It is very possible that the subroutine will not need this variable, and providing the value for the variable is expensive. Is it possible to create a "lazy-loading" object that will only be evaluated if it is actually being used? I cannot change the subroutine itself, so it must still look like a normal Perl scalar to the caller.
You'll want to look at Data::Lazy and Scalar::Defer. Update: There's also Data::Thunk and Scalar::Lazy.
I haven't tried any of these myself, but I'm not sure they work properly for an object. For that, you might try a Moose class that keeps the real object in a lazy attribute which handles all the methods that object provides. (This wouldn't work if the subroutine does an isa check, though, unless it calls isa as a method, in which case you can override it in your class.)
Data::Thunk is the most transparent and robust way of doing this that I'm aware of.
However, I'm not a big fan of it, or of any other similar modules or techniques that try to hide themselves from the user. I prefer something more explicit, like having the code that uses the hard-to-compute value simply call a function to retrieve it. That way you don't need to precompute your value, your intent is more clearly visible, and you also have various options to avoid re-computing the value, like lexical closures, Perl's state variables, or modules like Memoize.
You might look into tying.
I would suggest stepping back and rethinking how you are structuring your program. Instead of passing a variable to a method that it might not need, make that value available in some other way, such as another method call, that can be called as needed (and not when it isn't).
In Moose, data like this is ideally stored in attributes. You can make attributes lazily built, so they are not calculated until they are first needed, but after that the value is saved so it does not need to be calculated a second time.