Initialize Non-Optional Property Before super.init call without duplicating code - swift

One of the most frustrating situations for me in Swift is when I have a class like this:
class A: NSObject {
    var a: Int
    override init() {
        clear()
        super.init()
    }
    func clear() {
        a = 5
    }
}
Of course, this causes multiple compiler errors: 'self' used in method call 'clear' before 'super.init' call and Property 'self.a' not initialized at super.init call.
And changing the order of the constructor lines only fixes the first error:
override init() {
    super.init()
    clear()
}
Usually, I end up doing one of two things. The first option is to make a an implicitly unwrapped optional, but I feel that this is terrible style because a will never be nil after the constructor returns:
var a: Int! = nil
The second option is to copy the functionality of my clear() function in the constructor:
override init() {
    a = 5
    super.init()
}
This works fine for this simplified example, but it unnecessarily duplicates a lot of code in more complex code bases.
Is there a way to initialize a without duplicating my clear function or making a an optional? If not, why not?!

The "why not" in this specific case is very straightforward. What you've written would allow me to write this:
class B: A {
    override func clear() {}
}
And then a would not be initialized. So what you've written can never be legal Swift.
That said, there's a deeper version that probably could be legal but isn't. If you marked the class final or if this were a struct, then the compiler might be able to prove that everything is correctly initialized along all code paths by inlining all the possible method calls, but the compiler doesn't do all that today; it's too complicated. The compiler just says "my proof engine isn't that strong so knock it off."
IMO, the correct solution here is a ! type, though I wouldn't add = nil. That's misleading. I would write it this way:
class A: NSObject {
    var a: Int! // You don't have to assign `!` types; they're automatically nil
    override init() {
        super.init()
        clear()
    }
    func clear() {
        a = 5
    }
}
This says "I am taking responsibility to make sure that a is going to be set before it is used." Which is exactly what you are doing. I do this all the time when I need to pass self as a delegate. I wish the compiler could explore every possible code path across every method call, but it doesn't today (and given what it might do to compile times, maybe I don't wish that).
"but I feel that this is terrible style because a will never be nil after the constructor returns"
That's exactly the point of ! types. They should never be nil by the time any other object can get their hands on it. If it could be nil, then you shouldn't be using !. I agree that ! is dangerous (since the compiler is no longer helping you), but it's not bad style.
The only other reasonable approach IMO is to assign default values. I wouldn't use the actual values, of course; that would be an invitation to subtle bugs. I would just use some obviously-wrong default value to get things in place.
class A: NSObject {
    var a = Int.min // -1 might be reasonable. I just like it to be clearly wrong
    override init() {
        super.init()
        clear()
    }
    func clear() {
        a = 5
    }
}
I don't love this approach, though I've used it in a few places. I don't think it gives you any safety over the ! (it won't crash, but it will almost certainly give you subtle behavioral bugs, and I'd rather have the crash). This is a place where the programmer must pay attention because the compiler isn't powerful enough to prove everything's correct, and the point of ! is to mark those places.

Related

What is the difference between "lazy var funcName: () -> Void" and regular function declaration in Swift

I recently encountered such a code block while working on a different codebase.
I was wondering whether there is any significant difference (in terms of memory impact, etc.) between these two declarations, other than syntactic ease. I regularly use lazy stored properties, but I couldn't visualize how a function can be "lazy". Can you also enlighten me on how functions are processed by the Swift compiler (or share an article explaining this topic)?
Thanks in advance; I wish you bug-free code.
A few differences I can think of off the top of my head:
With functions, you can use parameter labels to make the call site more readable, but since you can't add parameter labels to function types, you can't do that with the lazy var declaration.
// doesn't work
lazy var say: (message: String, to person: String) -> Void = {
    ...
}
// works
func say(message: String, to person: String) {
    ...
}
You can only invoke a function type without any parameter labels :( say("Hello", "Sweeper") instead of say(message: "Hello", to: "Sweeper").
lazy vars are variables, so someone could just change them:
helloFunc = { /* something else */ }
you could prevent this by adding a private setter, but that still allows setting it from within the same class. With a function, its implementation can never be changed at runtime.
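For example (a sketch; Greeter is an illustrative name):
class Greeter {
    private(set) lazy var helloFunc: () -> Void = { print("Hello") }

    func rebind() {
        helloFunc = { print("Goodbye") } // still allowed from inside the class
    }
}
// let g = Greeter()
// g.helloFunc = {}   // error: setter is inaccessible outside Greeter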
lazy vars cannot be generic. Functions can.
// doesn't work
lazy var someGenericThing<T>: (T) -> Void = {
    ...
}
// works
func someGenericThing<T>(x: T) {
    ...
}
You might be able to work around this by making the enclosing type generic, but that changes the semantics of the entire type. With functions, you don't need to do that at all.
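A sketch of that workaround (Printer is an illustrative name):
class Printer<T> {
    lazy var someGenericThing: (T) -> Void = { x in
        print(x)
    }
}

let intPrinter = Printer<Int>()
intPrinter.someGenericThing(42)      // fine
// intPrinter.someGenericThing("hi") // error: this instance is pinned to Int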
If you're implementing a language, the magic needed to implement lazy as a feature is to silently wrap the provided value in a function (of no arguments) that is automatically evaluated the first time the property is read.
So for example if you have
var x = SomeObject() // eagerly construct an instance of SomeObject
and
lazy var x = SomeObject() // construct an instance if/when someone reads x
Behind the scenes, for the lazy var x, either x has not yet been read, or it has been read and therefore has a value. So it is like:
enum Lazy<T> {
    case notReadYet(() -> T) // either you have a function that makes the initial T
    case read(T)             // or you have a value for T
}
var thing: Lazy<SomeObject> = .notReadYet { return SomeObject() }
and with a suitable property wrapper or other language magic, wrap calls to the getter for thing in a switch on the case: if you are in the notReadYet case, automatically invoke the function to produce the T, then set the property to .read(t) for the particular value t that was produced.
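Here is that mechanism written out as a mutating accessor on the Lazy enum above, instead of compiler magic (a sketch):
extension Lazy {
    mutating func value() -> T {
        switch self {
        case .notReadYet(let compute):
            let t = compute() // first read: run the function...
            self = .read(t)   // ...and remember the result
            return t
        case .read(let t):
            return t          // later reads: return the stored value
        }
    }
}

var lazyInt = Lazy<Int>.notReadYet({ print("computing"); return 42 })
print(lazyInt.value()) // prints "computing", then 42
print(lazyInt.value()) // prints 42 only; the closure is not re-run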
If you substitute in the type from your example, () -> Void, you have something like:
var thing: Lazy<() -> Void> = .notReadYet({ return { print("Hello") } })
This is all rather odd, because it's a Void-returning closure and there's not much point making it lazy: it's not expensive to compute, and it doesn't help break a two-phase initialisation loop. So why do it?

Swift issue when storing Object inside a struct

I am confused about how memory is managed here. Let's say there is a scenario:
import Foundation

class SomeObject {
    deinit {
        print("deinitCalled")
    }
}

struct SomeStruct {
    let object = SomeObject()
    var closure: (() -> Void)?
}

func someFunction() {
    var someStruct = SomeStruct()
    someStruct.closure = {
        print(someStruct)
    }
    someStruct.closure?()
}
someFunction()
print("the end")
What I would expect here is:
Optional(test.SomeStruct(object: test.SomeObject, closure: Optional((Function))))
deinitCalled
the end
However what I get is this:
SomeStruct(object: test.SomeObject, closure: Optional((Function)))
the end
And if I look at the memory graph, there is a retain cycle.
How do I manage memory in this case?
First, you should be very, very careful about putting reference types inside of value types, and especially a mutable reference type that is visible to the outside world. Structs are always value types, but you also want them to have value semantics, and it's challenging to do that while containing reference types. (It's very possible, lots of stdlib types do it in order to implement copy-on-write; it's just challenging.)
So the short version is "you almost certainly don't want to do what you're doing here."
But if you have maintained value semantics in SomeStruct, then the answer is to just make a copy. It's always fine to make a copy of a value type.
someStruct.closure = { [someStruct] in
    print(someStruct)
}
This gives the closure its own immutable value that is a copy of someStruct. Future changes to someStruct won't impact this closure.
If you mean for future changes to someStruct to impact this closure, then you may be violating value semantics, and you should redesign (likely by making SomeStruct a class, if you mean it to have reference semantics).
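With the copy in place, the original example behaves as the question expected (assuming the rest of the code is unchanged):
func someFunction() {
    var someStruct = SomeStruct()
    someStruct.closure = { [someStruct] in // copy taken before `closure` is set
        print(someStruct)
    }
    someStruct.closure?()
}

someFunction()
print("the end")
// Prints the struct (with closure: nil, because the copy predates the
// assignment), then "deinitCalled", then "the end". The cycle is gone.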

Are you required to call a superclass's implementation from each and every subclass's function

E.g., you have a subclass which overrides a function f in its superclass. Inside the override, are you required to call the superclass's f function?
The answer to this question is not really specific to Swift; it applies to all OOP languages.
The simple answer is: No, you are not required to call the superclass implementation.
Whether you should call the superclass implementation or not should be stated in the documentation. When you are not sure, you should generally always call the super implementation.
You have to consider that the superclass does something inside the method, and if you don't call the superclass implementation, the subclass may not work properly.
Not calling the superclass implementation means that you want to prevent the superclass implementation from happening and that usually means your whole inheritance model is flawed.
One notable exception is the situation when the superclass provides some default implementation which you want to replace inside your subclass (e.g. UIViewController.loadView). That's not a very common use case though.
There are also other obvious exceptions, for example getters. When you are overriding a getter to return a custom object, there is usually no need to call super. However, again, even getters can technically initialize some structures in the superclass and it might be needed to call super.
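For instance (an illustrative sketch):
class Themed {
    var backgroundName: String { return "light" }
}

class DarkThemed: Themed {
    // The value is replaced wholesale; nothing in super needs to run.
    override var backgroundName: String { return "dark" }
}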
The answer is no, you don't need to call the superclass method you are overriding, in Swift or in OOP in general. That would be a horrible limitation if you did! There are times when you should in fact not call it, and perhaps the most common example is when you create a UIViewController programmatically and have to override loadView() without calling super.loadView().
The default is that you call super, unless you know you're not breaking that function.
I've created a gist. You can copy it to your own playground and play around with it. But the answer is:
I had this exact confusion of yours. You're basically asking what's the difference between extending behavior vs. overriding behavior.
Swift doesn't do a good job of telling you the difference.
Both require marking the function with override, but sometimes you're doing something in addition to the superclass's implementation (extending), and sometimes you're completely rewriting it (overriding).
Suppose we have the following class:
class Person {
var age : Int?
func incrementAge() {
guard age != nil else {
age = 1
return
}
age! += 1
}
func eat() {
print("eat popcorn")
}
}
We can just initialize it and do:
var p1 = Person()
p1.incrementAge() // works fine
Now suppose we did this:
class Boy: Person {
    override func incrementAge() {
        age! += 2
    }
}
var b1 = Boy()
b1.incrementAge()
What do you think is going to happen?!
It will crash, because the superclass does a nil check on age before force-unwrapping, but our subclass never calls super, so age is still nil when age! += 2 runs.
To make it work, we have to call super:
class GoodBoy: Person {
    override func incrementAge() {
        super.incrementAge()
        age! += 2
    }
}
var b2 = GoodBoy()
b2.incrementAge() // works fine.
We could also get away without calling super directly, by replicating its behavior:
class AlternateGoodBoy: Person {
    override func incrementAge() {
        guard age != nil else {
            age = 1
            return
        }
        age! += 2
    }
}
var b3 = AlternateGoodBoy()
b3.incrementAge() // works fine.
The above works, yet the superclass's implementation is not always known to us. A real example is UIKit: we don't know what really happens when viewDidLoad is called, hence we must call super.viewDidLoad().
That being said, sometimes we can just not call super and be totally fine, because we know what super does, or we simply don't care and want to replace it completely. E.g.:
class Girl: Person {
    override func eat() {
        print("eat hotdog")
    }
}
var g1 = Girl()
g1.eat() // doesn't crash, even though you overrode the behavior, because the call to super ISN'T critical
Yet the most common case is that you call super, but also add something on top of it.
class Dad: Person {
    var moneyInBank = 0
    override func incrementAge() {
        super.incrementAge()
        addMoneyToRetirementFunds()
    }
    func addMoneyToRetirementFunds() {
        moneyInBank += 2000
    }
}
var d1 = Dad()
d1.incrementAge()
print(d1.moneyInBank) // 2000
Pro tip:
Unlike most overrides, where you call super first and then do the rest, for a tearDown function it's best to call super.tearDown() at the end of the function. In general, for any 'removal' functions, you'd want to call super at the end, e.g. viewWillDisappear/viewDidDisappear.
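A sketch of that ordering in an XCTest subclass (SomeSystem is an illustrative name):
import XCTest

class SomeSystem {}

class MyTests: XCTestCase {
    var sut: SomeSystem!

    override func setUp() {
        super.setUp()      // set up the superclass first...
        sut = SomeSystem() // ...then your own state
    }

    override func tearDown() {
        sut = nil          // remove your own state first...
        super.tearDown()   // ...then call super, last
    }
}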

Swift Indicate that a closure parameter is retained

Is there any way to indicate to a "client" of a specific method that a closure parameter is going to be retained?
For instance, having the following code:
import Foundation
typealias MyClosureType = () -> Void
final class MyClass {
private var myClosure: MyClosureType?
func whatever(closure: MyClosureType?) {
myClosure = closure
}
}
Anyone could start using this class and passing closures to the method whatever without any idea of whether they are actually being retained, which is error-prone and could lead to memory leaks.
For instance, a "client" doing something like this, would be never deallocated
final class MyDummyClient {
    let myInstance = MyClass()
    func setUp() {
        myInstance.whatever {
            self.whateverHandler()
        }
    }
    func whateverHandler() {
        print("Hey Jude, don't make it bad")
    }
}
That is why I would like to know if there is any way to prevent this type of error: some kind of parameter that I could add to the definition of my method whatever which hints to the client that they need to weakify to avoid leaks.
Whether the closure parameter is escaping or non-escaping is some indication to the caller whether it might be retained. In particular, a non-escaping closure param cannot be retained by a function call.
Per SE-0103, non-escaping closures (currently marked @noescape) will become the default in Swift 3, and you'll have to write @escaping if you want to save the closure, so situations like this will become a little more obvious.
Otherwise, there is no language feature to help you here. You'll have to solve this problem with API design and documentation. If it's something like a handler, I'd recommend a property, obj.handler = { ... }, or a method like obj.setHandler({ ... }) or obj.addHandler({ ... }). That way, when reading the code, you can easily tell that the closure is being saved because of the = or set or add.
(In fact, when compiling Obj-C, Clang explicitly looks for methods named set...: or add...: when determining whether to warn the user about retain cycles. It's possible a similar diagnostic could be added to the Swift compiler in the future.)
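In post-Swift-3 syntax the two ideas combine like this (a sketch; note that Optional closure parameters, like the one in the question, are implicitly escaping, which is why whatever compiles without an annotation):
final class MyClass {
    private var myClosure: (() -> Void)?

    // Storing a non-optional closure requires @escaping, so the signature
    // itself warns the caller that the closure may be kept around.
    func setHandler(_ handler: @escaping () -> Void) {
        myClosure = handler
    }
}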
In the specific case you're presenting, the closure itself is the only thing that will be retained, so if you correctly add [weak self] to your closure when you create it, there shouldn't be any issues.
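For instance, the client from the question could break the cycle like this:
final class MyDummyClient {
    let myInstance = MyClass()

    func setUp() {
        myInstance.whatever { [weak self] in
            self?.whateverHandler() // no strong reference to self is kept
        }
    }

    func whateverHandler() {
        print("Hey Jude, don't make it bad")
    }
}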
I'm not sure what issue you're trying to protect against, but you might also think about using delegation (protocols) rather than a closure.

Swift: Overriding Self-requirement is allowed, but causes runtime error. Why?

I just started to learn Swift (v. 2.x) because I'm curious how the new features play out, especially the protocols with Self-requirements.
The following example is going to compile just fine, but causes arbitrary runtime effects to happen:
// The protocol with Self requirement
protocol Narcissistic {
    func getFriend() -> Self
}

// Base class that adopts the protocol
class Mario: Narcissistic {
    func getFriend() -> Self {
        print("Mario.getFriend()")
        return self
    }
}

// Intermediate class that eliminates the
// Self requirement by specifying an explicit type
// (Why does the compiler allow this?)
class SuperMario: Mario {
    override func getFriend() -> SuperMario {
        print("SuperMario.getFriend()")
        return SuperMario()
    }
}

// Most specific class that defines a field whose
// (polymorphic) access will cause the world to explode
class FireFlowerMario: SuperMario {
    let fireballCount = 42
    func throwFireballs() {
        print("Throwing " + String(fireballCount) + " fireballs!")
    }
}

// Global generic function restricted to the protocol
func queryFriend<T: Narcissistic>(narcissistic: T) -> T {
    return narcissistic.getFriend()
}

// Sample client code

// Instantiate the most specific class
let m = FireFlowerMario()

// The call to the generic function is verified to return
// the same type that went in -- 'FireFlowerMario' in this case.
// But in reality, the method returns a 'SuperMario' and the
// call to 'throwFireballs' will cause arbitrary
// things to happen at runtime.
queryFriend(m).throwFireballs()
You can see the example in action on the IBM Swift Sandbox here.
In my browser, the output is as follows:
SuperMario.getFriend()
Throwing 32 fireballs!
(instead of 42! Or rather, 'instead of a runtime exception', as this method is not even defined on the object it is called on.)
Is this a proof that Swift is currently not type-safe?
EDIT #1:
Unpredictable behavior like this has to be unacceptable.
The real question is what exactly the keyword Self (capital first letter) means.
I couldn't find anything online, but there are at least these two possibilities:
Self is simply a syntactic shortcut for the full class name it appears in, and it could be substituted with the latter without any change in meaning. But then, it cannot have the same meaning as when it appears inside a protocol definition.
Self is a sort of generic/associated type (in both protocols and classes) that gets re-instantiated in deriving/adopting classes. If that is the case, the compiler should have refused the override of getFriend in SuperMario.
Maybe the true definition is neither of those. Would be great if someone with more experience with the language could shed some light on the topic.
Yes, there seems to be a contradiction. The Self keyword, when used as a return type, apparently means 'self as an instance of Self'. For example, given this protocol
protocol ReturnsReceived {
    /// Returns other.
    func doReturn(other: Self) -> Self
}
we can't implement it as follows
class Return: ReturnsReceived {
    func doReturn(other: Return) -> Self {
        return other // Error
    }
}
because we get a compiler error ("Cannot convert return expression of type 'Return' to return type 'Self'"), which disappears if we violate doReturn()'s contract and return self instead of other. And we can't write
class Return: ReturnsReceived {
    func doReturn(other: Return) -> Return { // Error
        return other
    }
}
because this is only allowed in a final class, even though Swift supports covariant return types. (The following actually compiles.)
final class Return: ReturnsReceived {
    func doReturn(other: Return) -> Return {
        return other
    }
}
On the other hand, as you pointed out, a subclass of Return can 'override' the Self requirement and merrily honor the contract of ReturnsReceived, as if Self were a simple placeholder for the conforming class' name.
class SubReturn: Return {
    override func doReturn(other: Return) -> SubReturn {
        // Of course this crashes if other is not a
        // SubReturn instance, but let's ignore this
        // problem for now.
        return other as! SubReturn
    }
}
I could be wrong, but I think that:
if Self as a return type really means 'self as an instance of Self', the compiler should not accept this kind of Self requirement overriding, because it makes it possible to return instances which are not self; otherwise,
if Self as a return type must be simply a placeholder with no further implications, then in our example the compiler should already allow overriding the Self requirement in the Return class.
That said, and no choice about the precise semantics of Self would change this, your code illustrates one of those cases where the compiler can easily be fooled, and the best it can do is generate code that defers checks to run-time. In this case, the checks that should be delegated to the runtime have to do with casting, and in my opinion one interesting aspect revealed by your example is that at a particular spot Swift seems not to delegate anything, hence the inevitable crash is more dramatic than it ought to be.
Swift is able to check casts at run-time. Let's consider the following code.
let sm = SuperMario()
let ffm = sm as! FireFlowerMario
ffm.throwFireballs()
Here we create a SuperMario and downcast it to FireFlowerMario. These two classes are not unrelated, and we are assuring the compiler (as!) that we know what we are doing, so the compiler leaves it as it is and compiles the second and third lines without a hitch. However, the program fails at run-time, complaining that it
Could not cast value of type
'SomeModule.SuperMario' (0x...) to
'SomeModule.FireFlowerMario' (0x...).
when trying the cast in the second line. This is not wrong or surprising behaviour. Java, for example, would do exactly the same: compile the code, and fail at run-time with a ClassCastException. The important thing is that the application reliably crashes at run-time.
Your code is a more elaborate way to fool the compiler, but it boils down to the same problem: there is a SuperMario instead of a FireFlowerMario. The difference is that in your case we don't get a gentle "could not cast" message but, in a real Xcode project, an abrupt and terrific error when calling throwFireballs().
In the same situation, Java fails (at run-time) with the same error we saw above (a ClassCastException), which means it attempts a cast (to FireFlowerMario) before calling throwFireballs() on the object returned by queryFriend(). The presence of an explicit checkcast instruction in the bytecode easily confirms this.
Swift on the contrary, as far as I can see at the moment, does not try any cast before the call (no casting routine is called in the compiled code), so a horrible, uncaught error is the only possible outcome. If, instead, your code produced a run-time "could not cast" error message, or something as gracious as that, I would be completely satisfied with the behaviour of the language.
The issue here seems to be a violation of the contract:
You define getFriend() to return an instance of the receiver's type (Self). The problem is that SuperMario does not return self; it returns a new instance of type SuperMario.
Now, when FireFlowerMario inherits that method, the contract says the method should return a FireFlowerMario, but instead the inherited method returns a SuperMario! This instance is then treated as if it were a FireFlowerMario; specifically, Swift tries to access the instance variable fireballCount, which does not exist on SuperMario, and you get garbage instead.
You can fix it like this:
class SuperMario: Mario {
    required override init() {
        super.init()
    }
    override func getFriend() -> SuperMario {
        print("SuperMario.getFriend()")
        // Dynamically create a new instance of the same type as the receiver.
        let myClass = self.dynamicType
        return myClass.init()
    }
}
Why does the compiler allow it? It has a hard time catching something like this, I guess. For SuperMario, the contract is still valid: the method getFriend does return an instance of the same class. The contract breaks when you create the subclass FireFlowerMario: should the compiler notice that a superclass might violate the contract? That would be an expensive check, probably more suited to a static analyzer, IMHO. (Also, what happens if the compiler doesn't have access to SuperMario's source? What if that class is from a library?)
So it's actually SuperMario's duty to ensure that the contract is still valid when subclassing.
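For reference, later Swift versions removed dynamicType in favor of type(of:), so the same fix would read (a sketch):
class SuperMario: Mario {
    required override init() {
        super.init()
    }
    override func getFriend() -> SuperMario {
        print("SuperMario.getFriend()")
        return type(of: self).init() // new instance of the receiver's dynamic type
    }
}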