Observer method in MVVM Design Pattern in Swift?

I have one question about the code you can see below.
class Bindable<T> {
    var value: T? {
        didSet {
            observer?(value)
        }
    }

    var observer: ((T?) -> ())?

    func bind(observer: @escaping (T?) -> ()) {
        self.observer = observer // this is the line I cannot understand
        observer(value)          // call the new observer right away with the current value
    }
}
Why do we write self.observer = observer? I would expect observer = self.observer instead of self.observer = observer. My reasoning is:
We change the value.
self.observer gets called.
When we call func bind(), the parameter observer becomes equal to self.observer, so everything should work.
But what am I missing?

self.observer = observer means you assign the passed-in function to the class property observer.
The first, self.observer, is the property you declared as var observer: ((T?) -> ())?.
The second, observer, is the one you passed into func bind(observer: @escaping (T?) -> ()).
Here is another example that may help you understand.
class Person {
    var age: Int = 0

    func setAge(age: Int) {
        self.age = age
    }

    // to let you understand better:
    // if the param is `anotherAge`,
    // `self.` is not necessary
    func setAge2(anotherAge: Int) {
        age = anotherAge
    }
}
The self.age = age here is no different from your self.observer = observer; both assign a value to a property of the class.
The only difference is that your observer property holds a function (a closure).
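To see why storing the parameter in the property matters, here is a minimal, hypothetical sketch of how a Bindable is typically wired up in MVVM (the ProfileViewModel, nameLabel, and property names are made up for illustration):

import UIKit

class ProfileViewModel {
    let name = Bindable<String>()          // hypothetical view-model property
}

class ProfileViewController: UIViewController {
    let viewModel = ProfileViewModel()
    let nameLabel = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()
        // bind(observer:) stores the closure in self.observer so that every later
        // change to `value` can call it; it also calls it once immediately.
        viewModel.name.bind { [weak self] name in
            self?.nameLabel.text = name
        }
        viewModel.name.value = "Ada"       // didSet fires the observer, the label updates
    }
}

If bind did the reverse assignment (observer = self.observer), self.observer would stay nil and didSet would have no closure to call when value changes.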

Related

Swift: self in initializer has type (T) -> () -> T

TL;DR:
How does an NSObject's self variable acquire the type (T) -> () -> T?
Remarks
I can see why the use of self this way is not legal. But I am trying to make sense of the second error message.
The Code
struct DummyStorer {
    let dummy: Dummy
}

struct Dummy {
    let storer = DummyStorer(dummy: self)
    // Use of unresolved identifier 'self'
    // OK, that's reasonable. But...
}
class Dummy: NSObject {
    let storer = DummyStorer(dummy: self)
    // Cannot convert value of type '(Dummy) -> () -> Dummy' to expected argument type 'Dummy'
    // ... how does the compiler arrive at this?
}
It is part of NSObjectProtocol:
public protocol NSObjectProtocol {
    func isEqual(_ object: Any?) -> Bool
    var hash: Int { get }
    var superclass: AnyClass? { get }
    func `self`() -> Self // << here !!
    // ...
}
Because Dummy inherits from NSObject, the name self inside the property initializer (where the instance self is not yet available) resolves to this `self`() method instead. An unapplied reference to an instance method has the curried type (Type) -> (Arguments) -> ReturnType, which here is (Dummy) -> () -> Dummy, and that is exactly the type the compiler reports.
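The same curried type appears whenever you reference an instance method without calling it. A small sketch with made-up names (Greeter, greeting) to illustrate:

import Foundation

class Greeter: NSObject {
    func greeting() -> String { "hello" }
}

// Referencing the method on the type, not on an instance, gives the curried form:
let unapplied: (Greeter) -> () -> String = Greeter.greeting
let applied = unapplied(Greeter())      // now a plain () -> String
print(applied())                        // "hello"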

Return arrow as a type in Swift

var textFieldChangedHandler: ((String) -> Void)?
What does it mean when you use the return arrow when declaring the type of a variable in Swift?
It means the variable holds a closure (a function value). Say you are in ClassA and you need to send data somewhere else, say ClassB. Then:
In ClassA:
var textFieldChangedHandler: ((String) -> Void)?

func send(_ value: String) {
    textFieldChangedHandler?(value)
}
Then in ClassB:
let cl = ClassA()
cl.textFieldChangedHandler = { str in
    print(str)
}
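To spell out the type itself: (String) -> Void reads as "a function that takes a String and returns nothing". A minimal sketch (the names printer and handler are made up for illustration):

// (String) -> Void is a function type: one String parameter, no return value.
let printer: (String) -> Void = { text in
    print("received: \(text)")
}
printer("hello")            // prints "received: hello"

// Optional variant, as in the question: the property may hold no closure at all.
var handler: ((String) -> Void)? = nil
handler?("ignored")         // does nothing while handler is nil
handler = printer
handler?("hello again")     // prints "received: hello again"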

Assign Function to other class variable

Sorry if this has been asked many times; I tried many solutions and none of them work for me. I am doing a very basic thing, like this:
class NotificationModel: NSObject {
    var selector = (() -> Void).self
}
The other class:
class TestNotificationClass1 {
    init() {
        var model = NotificationModel.init()
        model.selector = handleNotification // error is here
    }

    func handleNotification() -> Void {
        print("handle function 1")
    }
}
Error description: Cannot assign value of type '() -> Void' to type '(() -> Void).Type'
If you want selector to be able to hold any function with no parameters and no return value then change its declaration to:
var selector: (() -> Void)?
This also makes it optional. If you don't want it to be optional then you need to add an initializer to NotificationModel that takes the desired selector as a parameter as shown below:
class NotificationModel: NSObject {
    var selector: (() -> Void)

    init(selector: @escaping () -> Void) {
        self.selector = selector
        super.init()
    }
}
class TestNotificationClass1 {
    init() {
        var model = NotificationModel(selector: handleNotification)
    }

    func handleNotification() -> Void {
        print("handle function 1")
    }
}
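With the first suggestion (the optional declaration var selector: (() -> Void)?), the original assignment style works as-is. A brief sketch, with a call site added purely for illustration:

class NotificationModel: NSObject {
    var selector: (() -> Void)?   // optional closure property, nil until assigned
}

class TestNotificationClass1 {
    init() {
        let model = NotificationModel()
        model.selector = handleNotification   // compiles now
        model.selector?()                     // prints "handle function 1"
    }

    func handleNotification() {
        print("handle function 1")
    }
}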

Swift @objc protocol - distinguish optional methods with similar signature

Let's say we have a protocol in Swift:
@objc protocol FancyViewDelegate {
    optional func fancyView(view: FancyView, didSelectSegmentAtIndex index: Int)
    optional func fancyView(view: FancyView, shouldHighlightSegmentAtIndex index: Int) -> Bool
}
Note that both methods are optional and have the same prefix signature.
Now our FancyView class looks like this:
class FancyView: UIView {
    var delegate: FancyViewDelegate?

    private func somethingHappened() {
        guard let delegateImpl = delegate?.fancyView else {
            return
        }
        let idx = doALotOfWorkToFindTheIndex()
        delegateImpl(self, idx)
    }
}
The compiler jumps in our face, because the bare reference delegate?.fancyView is ambiguous between the two optional methods.
We could change somethingHappened() to this:
private func somethingHappened() {
    let idx = doALotOfWorkToFindTheIndex()
    delegate?.fancyView?(self, didSelectSegmentAtIndex: idx)
}
However, as you can see, we risk doing a lot of work only to throw away the index afterwards, because the delegate does not implement the optional method.
The question is: how do we if let or guard let bind the implementation of two optional methods with a similar prefix signature?
First, your Objective-C protocol needs to conform to NSObjectProtocol so we can introspect whether a conforming object supports a given method.
Then, when we want to call a specific method, check whether the conforming object supports it, and only if it does, perform the computations needed to call it. For instance, I tried this code:
@objc protocol FancyViewDelegate: NSObjectProtocol {
    optional func fancyView(view: UIView, didSelectSegmentAtIndex index: Int)
    optional func fancyView(view: UIView, shouldHighlightSegmentAtIndex index: Int) -> Bool
}

class FancyView: UIView {
    var delegate: FancyViewDelegate?

    private func somethingHappened() {
        if delegate?.respondsToSelector("fancyView:didSelectSegmentAtIndex:") == true {
            let idx: Int = 0 // compute the index here
            delegate?.fancyView!(self, didSelectSegmentAtIndex: idx)
        }
    }
}
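On a newer Swift, the same check can be written with #selector, which the compiler verifies, instead of a raw selector string. A sketch, assuming Swift 3+ syntax (where the requirements are spelled @objc optional func and respondsToSelector became responds(to:)):

import UIKit

@objc protocol FancyViewDelegate: NSObjectProtocol {
    @objc optional func fancyView(view: UIView, didSelectSegmentAtIndex index: Int)
    @objc optional func fancyView(view: UIView, shouldHighlightSegmentAtIndex index: Int) -> Bool
}

class FancyView: UIView {
    weak var delegate: FancyViewDelegate?

    private func somethingHappened() {
        // #selector is checked by the compiler, so a typo in the method name cannot slip through.
        let selector = #selector(FancyViewDelegate.fancyView(view:didSelectSegmentAtIndex:))
        guard delegate?.responds(to: selector) == true else { return }
        let idx = 0 // compute the index only when the delegate can actually use it
        delegate?.fancyView?(view: self, didSelectSegmentAtIndex: idx)
    }
}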

didSet for weak reference not working as expected

I have this small Swift script, which uses weak references:
#!/usr/bin/env swift
class Thing
{
    deinit
    {
        print("Thing object deallocated")
    }
}

class WeakThing
{
    weak var thing: Thing?
    {
        didSet
        {
            print("Set thing to \(thing)")
        }
    }
}
var thing = Thing()
let weakThing = WeakThing()
weakThing.thing = thing
thing = Thing()
print("weakThing's thing is \(weakThing.thing)")
This prints:
Set thing to Optional(Test.Thing)
Thing object deallocated
weakThing's thing is nil
However, I would expect it to print:
Set thing to Optional(Test.Thing)
Set thing to nil
Thing object deallocated
weakThing's thing is nil
What am I doing incorrectly? I see that the object is being deallocated, and that the value of the thing variable is changing, but my didSet code is not executing.
Property observers (didSet and willSet) are not called when a weak reference is automatically zeroed by ARC.
If you were to manually set the property to nil, you would see the didSet code called.
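For example, with the script above, an explicit assignment does go through the observer:

weakThing.thing = nil   // prints "Set thing to nil", because didSet runs for explicit assignments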
I know this question is very old, but I stumbled across another answer that actually gets the problem solved here: https://stackoverflow.com/a/19344475/4069976
For what it's worth, this is my implementation to watch a deinit as suggested by the answer referenced above. Just make sure you don't create any retain cycles with your onDeinit closure!
import Foundation

private var key: UInt8 = 0

class WeakWatcher {
    private var onDeinit: () -> ()

    init(onDeinit: @escaping () -> ()) {
        self.onDeinit = onDeinit
    }

    static func watch(_ obj: Any, onDeinit: @escaping () -> ()) {
        watch(obj, key: &key, onDeinit: onDeinit)
    }

    static func watch(_ obj: Any, key: UnsafeRawPointer, onDeinit: @escaping () -> ()) {
        // Attach a WeakWatcher to `obj`; when `obj` deallocates, the associated
        // watcher is released too, and its deinit runs the closure.
        objc_setAssociatedObject(obj, key, WeakWatcher(onDeinit: onDeinit), objc_AssociationPolicy.OBJC_ASSOCIATION_RETAIN)
    }

    deinit {
        self.onDeinit()
    }
}
Call it like this when initializing your weak var:
self.weakVar = obj
WeakWatcher.watch(obj, onDeinit: { /* do something */ })
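Wired into the original script, it could look roughly like this (the printed message is just an illustration of the work you would otherwise do in didSet):

var thing = Thing()
let weakThing = WeakThing()
weakThing.thing = thing
WeakWatcher.watch(thing, onDeinit: { print("thing deallocated, weak reference is now nil") })
thing = Thing()   // the old Thing deinits, its associated WeakWatcher deinits, the closure fires
print("weakThing's thing is \(weakThing.thing)")   // nil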