How to send a generic value through a function? [duplicate] - swift

I have code that follows the general design of:
protocol DispatchType {}
class DispatchType1: DispatchType {}
class DispatchType2: DispatchType {}

func doBar<D: DispatchType>(value: D) {
    print("general function called")
}

func doBar(value: DispatchType1) {
    print("DispatchType1 called")
}

func doBar(value: DispatchType2) {
    print("DispatchType2 called")
}
where in reality DispatchType is actually a backend storage type. The doBar functions are optimized methods that depend on the correct storage type. Everything works fine if I do:
let d1 = DispatchType1()
let d2 = DispatchType2()
doBar(value: d1) // "DispatchType1 called"
doBar(value: d2) // "DispatchType2 called"
However, if I make a function that calls doBar:
func test<D:DispatchType>(value:D) {
doBar(value: value)
}
and I try a similar calling pattern, I get:
test(value: d1) // "general function called"
test(value: d2) // "general function called"
This seems like something Swift should be able to handle, since it should be able to determine the type constraints at compile time. Just as a quick test, I also tried writing doBar as:
func doBar<D:DispatchType>(value:D) where D:DispatchType1 {
print("DispatchType1 called")
}
func doBar<D:DispatchType>(value:D) where D:DispatchType2 {
print("DispatchType2 called")
}
but get the same results.
Any ideas if this is correct Swift behavior, and if so, a good way to get around this behavior?
Edit 1: Example of why I was trying to avoid using protocols. Suppose I have the code (greatly simplified from my actual code):
protocol Storage {
// ...
}
class Tensor<S:Storage> {
// ...
}
For the Tensor class I have a base set of operations that can be performed on the Tensors. However, the operations themselves will change their behavior based on the storage. Currently I accomplish this with:
func dot<S:Storage>(_ lhs:Tensor<S>, _ rhs:Tensor<S>) -> Tensor<S> { ... }
While I can put these in the Tensor class and use extensions:
extension Tensor where S: CBlasStorage {
    func dot(_ tensor: Tensor<S>) -> Tensor<S> {
        // ...
    }
}
this has a few side effects which I don't like:
I think dot(lhs, rhs) is preferable to lhs.dot(rhs). Convenience functions can be written to get around this, but that will create a huge explosion of code.
This will cause the Tensor class to become monolithic. I really prefer having it contain the minimal amount of code necessary and expand its functionality by auxiliary functions.
Related to (2), this means that anyone who wants to add new functionality will have to touch the base class, which I consider bad design.
Edit 2: One alternative is that things work as expected if you use constraints for everything:
func test<D:DispatchType>(value:D) where D:DispatchType1 {
doBar(value: value)
}
func test<D:DispatchType>(value:D) where D:DispatchType2 {
doBar(value: value)
}
will cause the correct doBar to be called. This also isn't ideal, as it will cause a lot of extra code to be written, but at least lets me keep my current design.
Edit 3: I came across documentation showing the use of the static keyword with generics. This helps at least with point (1):
class Tensor<S: Storage> {
    // ...
    static func cos(_ tensor: Tensor<S>) -> Tensor<S> {
        // ...
    }
}
allows you to write:
let result = Tensor.cos(value)
and it supports operator overloading:
let result = value1 + value2
it does have the added verbosity of requiring the Tensor prefix. This can be made a little better with:
typealias T<S:Storage> = Tensor<S>
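For example, call sites then shorten to (a tiny sketch, reusing the value names from above):
let result = T.cos(value)           // equivalent to Tensor.cos(value)
let sum = T.cos(value1 + value2)    // operator overloading still applies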

This is indeed correct behaviour as overload resolution takes place at compile time (it would be a pretty expensive operation to take place at runtime). Therefore from within test(value:), the only thing the compiler knows about value is that it's of some type that conforms to DispatchType – thus the only overload it can dispatch to is func doBar<D : DispatchType>(value: D).
Things would be different if generic functions were always specialised by the compiler, because then a specialised implementation of test(value:) would know the concrete type of value and thus be able to pick the appropriate overload. However, specialisation of generic functions is currently only an optimisation (as without inlining, it can add significant bloat to your code), so this doesn't change the observed behaviour.
One solution in order to allow for polymorphism is to leverage the protocol witness table (see this great WWDC talk on them) by adding doBar() as a protocol requirement, and implementing the specialised implementations of it in the respective classes that conform to the protocol, with the general implementation being a part of the protocol extension.
This will allow for the dynamic dispatch of doBar(), thus allowing it to be called from test(value:) and having the correct implementation called.
protocol DispatchType {
    func doBar()
}

extension DispatchType {
    func doBar() {
        print("general function called")
    }
}

class DispatchType1: DispatchType {
    func doBar() {
        print("DispatchType1 called")
    }
}

class DispatchType2: DispatchType {
    func doBar() {
        print("DispatchType2 called")
    }
}

func test<D: DispatchType>(value: D) {
    value.doBar()
}
let d1 = DispatchType1()
let d2 = DispatchType2()
test(value: d1) // "DispatchType1 called"
test(value: d2) // "DispatchType2 called"
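To connect this back to the Tensor/Storage example from Edit 1: a hedged sketch of how the same witness-table technique could host the optimized implementations. The method names and the NativeStorage type are hypothetical, not the question's real backend code:
protocol Storage {
    static func dot(_ lhs: Tensor<Self>, _ rhs: Tensor<Self>) -> Tensor<Self>
}

extension Storage {
    // general implementation, used by any storage that doesn't specialise
    static func dot(_ lhs: Tensor<Self>, _ rhs: Tensor<Self>) -> Tensor<Self> {
        print("generic dot")
        return lhs
    }
}

struct NativeStorage: Storage {}   // relies on the default implementation

struct CBlasStorage: Storage {     // supplies an optimised implementation
    static func dot(_ lhs: Tensor<CBlasStorage>, _ rhs: Tensor<CBlasStorage>) -> Tensor<CBlasStorage> {
        print("CBLAS-accelerated dot")
        return lhs
    }
}

class Tensor<S: Storage> {}

// keeps the free-function dot(lhs, rhs) spelling and dispatches via the witness table
func dot<S: Storage>(_ lhs: Tensor<S>, _ rhs: Tensor<S>) -> Tensor<S> {
    return S.dot(lhs, rhs)
}

_ = dot(Tensor<NativeStorage>(), Tensor<NativeStorage>())   // "generic dot"
_ = dot(Tensor<CBlasStorage>(), Tensor<CBlasStorage>())     // "CBLAS-accelerated dot"
Because the dispatch goes through the protocol witness table, the correct implementation is chosen even when dot is called from another unconstrained generic function.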

Related

How to give each instance its own implementation of a method?

I'm drawing a blank for some reason... If I want to make a bunch of objects from a class, but I want each instance to have its own unique implementation of a certain method, how would I do this?
For example:
class MyClass {
    var name: String
    func doSomething() {
        // Each object would have custom implementation of this method, here.
    }
}
Do I provide each object with its own closure during initialization, and then call that closure in the doSomething() method? I'm trying to figure out the correct or "Swiftly" way to do this. I'm also thinking along the lines of something with protocols, but I can't seem to figure out how to go about this.
I think there are many ways to do it.
In the case of a base class plus some subclasses (e.g. Animal, subclassed by Dog, Cat, etc.), you can do this:
First of all, it's a good idea to define a protocol:
protocol MyProtocol {
func doSomething()
}
Also provide a default implementation, which throws a fatal error if a class doesn't override that method:
extension MyProtocol {
    func doSomething() {
        fatalError("You must override me")
    }
}
Now your base class conforms to the protocol thanks to the default implementation, but calling doSomething() on it will hit the fatal error at runtime:
class MyClass: MyProtocol {
// conformant
}
A child class, however, will run correctly as long as it overrides this function:
class MyOtherClass: MyClass {
    func doSomething() {
        print("Doing it!")
    }
}
You could also move the fatal error into the base class and skip the extension implementation entirely.
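For completeness, a minimal sketch of that variant (assuming the same MyProtocol as above); because the base class now declares the method itself, the subclass marks it override and ordinary class dispatch applies:
class MyClass: MyProtocol {
    func doSomething() {
        fatalError("You must override me")
    }
}

class MyOtherClass: MyClass {
    override func doSomething() {
        print("Doing it!")
    }
}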
In the case of many instances of the same class, that solution makes no sense. Instead, you can use a very simple callback design:
typealias MyDelegate = () -> Void
class MyClass {
    var delegate: MyDelegate?
    func doSomething() {
        delegate?()
    }
}
let x = MyClass()
x.delegate = {
print("do it!")
}
x.doSomething()
// Or you can use a defined function
func doIt() {
print("also doing it")
}
x.delegate = doIt
x.doSomething()
It could also be that you're looking for the Strategy pattern or the Template Method pattern; it depends on your usage details.
Do I provide each object with its own closure during initialization, and then call that closure in the doSomething() method
Yes. That is extremely common and eminently Swifty. An incredibly minimalistic example:
struct S {
    let f: () -> ()
    func doYourThing() { f() }
}

let s = S { print("hello") }
let s2 = S { print("goodbye") }
s.doYourThing() // hello
s2.doYourThing() // goodbye
Giving an object a settable method instance property is very, very easy and common. It doesn't have to be provided during initialization — you might set this property later on, and a lot of built-in objects work that way too.
That, after all, is all you're doing when you create a data task with dataTask(with:completionHandler:). You are creating a data task and handing it a function which it stores, and which it will call when it has performed the actual networking.
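As a rough sketch of that same pattern with URLSession (the URL here is just a placeholder), the completion handler is simply a stored function that the task invokes once the networking finishes:
import Foundation

let url = URL(string: "https://example.com")!
let task = URLSession.shared.dataTask(with: url) { data, response, error in
    // called later, once the network request has completed
    print("received \(data?.count ?? 0) bytes")
}
task.resume()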

Swift: casting an unconstrained generic type to a generic type that conforms to Decodable

Situation
I have two generic classes which will fetch data either from an API or from a database, let's say APIDataSource<I, O> and DBDataSource<I, O> respectively.
I will inject either of the two classes into the view-model when creating it, and the view-model will use that class to fetch the data it needs. I want the view-model to work exactly the same with both classes, so I don't want different generic constraints for the classes.
// pseudocode
ViewModel(APIDataSource<InputModel, ResponseModel>(...))
// I want to change the data source in the future like
ViewModel(DBDataSource<InputModel, ResponseModel>(...))
To fetch data from the API, ResponseModel needs to conform to Decodable because I want to create that object from JSON. To fetch data from the Realm database, it needs to inherit from Object.
Inside the ViewModel I want to get the response like:
// pseudocode
self.dataSource.request("param1", "param2")
If the developer tries to fetch API data from the database or vice versa, it will check for the correct type and throw a proper error.
Stripped-down version of the code for a playground
The following is a stripped-down version of the code which shows what I want to achieve and where I am stuck (casting an unconstrained generic type to a generic type that conforms to Decodable):
import Foundation
// Just to test functions below
class DummyModel: Decodable {
}
// Stripped-down version of a function which will convert JSON to an object of type T
func decode<T: Decodable>(_ type: T.Type) {
    print(type)
}
// This doesn't give a compilation error
// Ignore the inp
func testDecode<T: Decodable>(_ inp: T) {
    decode(T.self)
}

// This gives a compilation error
// Ignore the inp
func testDecode2<T>(_ inp: T) {
    if T.self is Decodable {
        // ??????????
        // How can we cast T at runtime after checking that T conforms to Decodable?
        decode(T.self as! Decodable.Type)
    }
}
testDecode(DummyModel())
Any help, or an explanation of why this cannot work, would be appreciated. Thanks in advance :)
As @matt suggests, I'm moving my various comments over to an answer in the form "your problem has no good solution and you need to redesign your problem."
What you're trying to do is at best fragile, and at worst impossible. Matt's approach is a good solution when you're trying to improve performance, but it breaks in surprising ways if it impacts behavior. For example:
protocol P {}

func doSomething<T>(x: T) -> String {
    if x is P {
        return "\(x) simple, but it's really P"
    }
    return "\(x) simple"
}

func doSomething<T: P>(x: T) -> String {
    return "\(x) is P"
}
struct S: P {}
doSomething(x: S()) // S() is P
So that works just like we expect. But we can lose the type information this way:
func wrapper<T>(x: T) -> String {
return doSomething(x: x)
}
wrapper(x: S()) // S() simple, but it's really P!
So you can't solve this with generics.
Going back to your approach, which at least has the possibility of being robust, it's still not going to work. Swift's type system just doesn't have a way to express what you're trying to say. But I don't think you should be trying to say this anyway.
In the method that fetches data I will check the type of the generic type, and if it conforms to the Decodable protocol I will use it to fetch data from the API, else from the database.
If fetching from the API vs the database represents different semantics (rather than just a performance improvement), this is very dangerous even if you could get it to work. Any part of the program can attach Decodable to any type. It can even be done in a separate module. Adding protocol conformance should never change the semantics (outwardly visible behaviors) of the program, only the performance or capabilities.
I have a generic class which will fetch data either from api or database
Perfect. If you already have a class, class inheritance makes a lot of sense here. I might build it like:
class Model {
    required init(identifier: String) {}
}

class DatabaseModel {
    required init(fromDatabaseWithIdentifier: String) {}
    convenience init(identifier: String) { self.init(fromDatabaseWithIdentifier: identifier) }
}

class APIModel {
    required init(fromAPIWithIdentifier: String) {}
    convenience init(identifier: String) { self.init(fromAPIWithIdentifier: identifier) }
}

class SomeModel: DatabaseModel {
    required init(fromDatabaseWithIdentifier identifier: String) {
        super.init(fromDatabaseWithIdentifier: identifier)
    }
}
Depending on your exact needs, you might rearrange this (and a protocol might also be workable here). But the key point is that the model knows how to fetch itself. That makes it easy to use Decodable inside the class (since it can easily use type(of: self) as the parameter).
Your needs may be different, and if you'll describe them a bit better maybe we'll come to a better solution. But it should not be based on whether something merely conforms to a protocol. In most cases that will be impossible, and if you get it working it will be fragile.
What you'd really like to do here is have two versions of testDecode, one for when T conforms to Decodable, the other for when it doesn't. You would thus overload the function testDecode so that the right one is called depending on the type of T.
Unfortunately, you can't do that, because you can't do a function overload that depends on the resolution of a generic type. But you can work around this by boxing the function inside a generic type, because you can extend the type conditionally.
Thus, just to show the architecture:
protocol P {}

struct Box<T> {
    func f() {
        print("it doesn't conform to P")
    }
}

extension Box where T: P {
    func f() {
        print("it conforms to P")
    }
}
struct S1:P {}
struct S2 {}
let b1 = Box<S1>()
b1.f() // "it conforms to P"
let b2 = Box<S2>()
b2.f() // "it doesn't conform to P"
This proves that the right version of f is being called, depending on whether the type that resolves the generic conforms to the protocol or not.
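Applied to the decode problem from the question, the same trick might look roughly like this (DecodeBox and PlainModel are hypothetical names; decode(_:) and DummyModel are from the question's stripped-down code):
struct DecodeBox<T> {
    func tryDecode() {
        print("\(T.self) is not Decodable; fall back to the database path")
    }
}

extension DecodeBox where T: Decodable {
    func tryDecode() {
        decode(T.self)   // only chosen when T is known to conform to Decodable
    }
}

class PlainModel {}   // hypothetical type that does not conform to Decodable

DecodeBox<DummyModel>().tryDecode()   // routed to decode(_:)
DecodeBox<PlainModel>().tryDecode()   // falls back to the unconstrained version
As with Box above, the overload is chosen at compile time from the concrete type written at the call site, so it still doesn't help if that type is hidden behind an unconstrained generic parameter.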

Difference between using Generic and Protocol as type parameters, and what are the pros and cons of implementing them in a function

Since Swift allows us to use both protocols and generics as parameter types in a function, the scenario below came to mind:
protocol AProtocol {
    var name: String { get }
}

class ClassA: AProtocol {
    var name = "Allen"
}

func printNameGeneric<T: AProtocol>(param: T) {
    print(param.name)
}

func printNameProtocol(param: AProtocol) {
    print(param.name)
}
The first function uses a generic parameter with a type constraint, and the second function uses the protocol as the parameter type directly. However, these two functions have the same effect, which is the point that confuses me. So my questions are:
What are the specific scenarios for each of them (or a case which can only be handled by one of them, but not the other)?
For the given case, both functions produce the same result. Which one is better to implement (or what are the pros and cons of each)?
This great talk mentions generic specialization, an optimization that turns function dispatch from dynamic dispatch (used for protocol-typed parameters) into static dispatch or inlining (possible for generic parameters). Since static dispatch and inlining are less expensive than dynamic dispatch, implementing functions with generics can provide better performance.
@Hamish also gave great information in this post; have a look for more detail.
Here is a new question that came to me:
struct StructA: AProtocol {
    let name = "StructA"   // required by AProtocol
    var a: Int
}

struct StructB: AProtocol {
    let name = "StructB"   // required by AProtocol
    var b: Int
}

func buttonClicked(sender: UIButton) {
    var aVar: AProtocol
    if sender == self.buttonA {
        aVar = StructA(a: 1)
    } else if sender == self.buttonB {
        aVar = StructB(b: 2)
    }
    foo(param: aVar)
}

func foo<T: AProtocol>(param: T) {
    // do something
}
If several types conform to a protocol and are passed into a generic function dynamically under different conditions (as shown above, pressing different buttons passes a different type, StructA or StructB, into the function), would generic specialization still work in this case?
There is actually a video from this year's WWDC about that (it was about performance of classes, structs and protocols; I don't have a link but you should be able to find it).
In your second function, where you pass any value that conforms to that protocol, you are actually passing a container that has 24 bytes of storage for the passed value and 16 bytes for type-related information (used to determine which methods to call, ergo dynamic dispatch). If the passed value is bigger than 24 bytes in memory, the object will be allocated on the heap and the container stores a reference to that object! That is actually extremely time consuming and should certainly be avoided if possible.
In your first function, where you use a generic constraint, the compiler actually creates another function that explicitly performs the function's operations on that concrete type. (If you use this function with lots of different types, your code size may, however, increase significantly; see C++ code bloat for further reference.) However, the compiler can now statically dispatch the methods, inline the function if possible, and certainly does not have to allocate any heap space. As stated in the video mentioned above, code size does not have to increase significantly as code can still be shared... so the function with the generic constraint is certainly the way to go!
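If you want to see the container described above, here is a small playground sketch (hypothetical struct names; the sizes shown are typical for 64-bit platforms and may vary by Swift version):
struct SmallA: AProtocol {             // fits in the 24-byte inline buffer
    var name = "small"
}

struct LargeA: AProtocol {             // too big: the container stores a heap reference
    var name = "large"
    var padding = (0.0, 0.0, 0.0, 0.0)
}

print(MemoryLayout<SmallA>.size)       // 16: a single String
print(MemoryLayout<LargeA>.size)       // 48: exceeds the inline buffer
print(MemoryLayout<AProtocol>.size)    // 40: 24-byte buffer + metadata + witness table pointer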
Now suppose we have the two protocols below:
protocol A {
    func somethingA()
}

protocol B {
    func somethingB()
}

Then the parameter needs to conform to both A and B.

// Generic solution
func methodGeneric<T: A>(t: T) where T: B {
    t.somethingA()
    t.somethingB()
}

// we need protocol C to define the parameter type
protocol C: A, B {}

// Protocol solution
func methodProtocol(c: C) {
    c.somethingA()
    c.somethingB()
}
It seems that nothing is wrong, but when we define a struct like this:
struct S: A, B {
    func somethingB() {
        print("B")
    }
    func somethingA() {
        print("A")
    }
}
methodGeneric works, but we need to change struct S: A, B to struct S: C to make methodProtocol work. Some questions:
Do we really need protocol C?
Why wouldn't we just write func method(s: S)?
You could read more about this in the Generics documentation.
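As a side note, current Swift also lets you avoid a dedicated protocol C by using a protocol composition type; a minimal sketch using the protocols and struct S from above (methodComposed is a hypothetical name):
// no protocol C required; S from above works unchanged
func methodComposed(param: A & B) {
    param.somethingA()
    param.somethingB()
}

methodComposed(param: S())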

Constraining one generic with another in Swift

I've run into a problem where I have some protocol:
protocol Baz {
func bar<T>(input:T)
}
The function bar is made generic because I don't want the protocol itself to have a Self requirement (it needs to be usable in a collection). I have an implementation of the protocol defined as:
class Foo<S>: Baz {
    var value: S
    init(value: S) {
        self.value = value
    }
    func bar<T>(input: T) {
        value = input
    }
}
This gives an error because the compiler doesn't know that S and T are the same type. Ideally I should be able to write something like:
func bar<T where T==S>(input:T) {
value = input
}
or
func bar<T:S>(input:T) {
value = input
}
The first form gives a "Same-type requirement makes generic parameter 'S' and 'T' equivalent" error (which is exactly what I'm trying to do, so I'm not sure why it's an error). The second form gives me an "Inheritance from non-protocol, non-class type 'S'" error.
Any ideas on either how to get this to work, or a better design pattern in Swift?
Update: As @luk2302 pointed out, I forgot to make Foo adhere to the Baz protocol.
@luk2302 has hinted at much of this in the comments, but just to make it explicit for future searchers:
protocol Baz {
func bar<T>(input:T)
}
This protocol is almost certainly useless as written. It is effectively identical to the following protocol (which is also almost completely useless):
protocol Baz {
func bar(input:Any)
}
You very likely mean (and hint that you mean):
protocol Baz {
typealias T
func bar(input: T)
}
As you note, this makes the protocol a PAT (protocol with associated type), which means you cannot put it directly into a collection. As you note, the usual solution to that, if you really need a collection of them, is a type eraser. It would be nice if Swift would automatically write the eraser for you, which it likely will be able to do in the future, and would eliminate the problem. That said, though slightly tedious, writing type erasers is very straightforward.
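For reference, a hand-written eraser for the associated-type version of Baz might look roughly like this (AnyBaz is a hypothetical name; the sketch is self-contained and uses the current associatedtype spelling):
protocol Baz {
    associatedtype T
    func bar(input: T)
}

struct AnyBaz<T>: Baz {
    private let _bar: (T) -> Void
    init<B: Baz>(_ base: B) where B.T == T {
        _bar = { base.bar(input: $0) }
    }
    func bar(input: T) { _bar(input) }
}

// instances erased to the same T can now share a collection, e.g. [AnyBaz<Int>]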
Now while you cannot put a PAT directly into a collection, you can put a generically-constrained PAT into a collection. So as long as you wrap the collection into a type that constrains T, it's still no problem.
If these become complex, the constraint code can become tedious and very repetitive. This can be improved through a number of techniques, however.
Generic structs with static methods can be used to avoid repeatedly providing constraints on free-functions.
The protocol can be converted into a generic struct (this formalizes the type eraser as the primary type rather than "as needed").
Protocols can be replaced with functions in many cases. For example, given this:
protocol Bar {
typealias T
func bar(input: T)
}
struct Foo : Bar {
func bar(input: Int) {}
}
You can't do this:
let bars: [Bar] = [Foo()] // error: protocol 'Bar' can only be used as a generic constraint because it has Self or associated type requirements
But you can easily do this, which is just as good:
let bars: [(Int) -> Void] = [Foo().bar]
This is particularly powerful for single-method protocols.
A mix of protocols, generics, and functions is much more powerful than trying to force everything into the protocol box, at least until protocols add a few more missing features to fulfill their promise.
(It would be easier to give specific advice to a specific problem. There is no one answer that solves all issues.)
EDITED (Workaround for "... an error because the compiler doesn't know that S and T are the same type.")
First of all: this is just a separate note (and perhaps an attempt at redemption for my previous answer, which ended up being me chasing my own tail to produce lots and lots of redundant code) in addition to Rob's excellent answer.
The following workaround will possibly let your implementation (protocol Baz ... / class Foo: Baz ...) mimic the behaviour you initially asked for, in the sense that the class method bar(...) will know whether the generic types S and T are actually the same type, while Foo still conforms to the protocol even in the case where T is not the same type as S.
protocol Baz {
    func bar<T>(input: T)
}

class Foo<S>: Baz {
    var value: S
    init(value: S) {
        self.value = value
    }
    func bar<T>(input: T) {
        if input is S {
            value = input as! S
        } else {
            print("Incompatible types. Throw...")
        }
    }
}
// ok
var a = Foo(value: 1.0) // Foo<Double>
print(a.value) // 1.0
a.bar(2.0)
print(a.value) // 2.0
let myInt = 1
a.bar(myInt) // Incompatible types. Throw...
print(a.value) // 2.0
// perhaps not a loophole we intended
let myAny : Any = 3.0
a.bar(myAny)
print(a.value) // 3.0
The Any and AnyObject loophole here could be closed by creating a dummy protocol that you extend all the types you wish to use with the generics to conform to, while not extending Any and AnyObject.
protocol NotAnyType {}
extension Int: NotAnyType {}
extension Double: NotAnyType {}
extension Optional: NotAnyType {}
// ...

protocol Baz {
    func bar<T: NotAnyType>(input: T)
}

class Foo<S: NotAnyType>: Baz {
    var value: S
    init(value: S) {
        self.value = value
    }
    func bar<T: NotAnyType>(input: T) {
        if input is S {
            value = input as! S
        } else {
            print("Incompatible types. Throw...")
        }
    }
}
// ok
var a = Foo(value: 1.0) // Foo<Double>
// ...
// no longer a loophole
let myAny : Any = 3.0
a.bar(myAny) // compile time error
let myAnyObject : AnyObject = 3.0
a.bar(myAnyObject) // compile time error
This, however, excludes Any and AnyObject from the generic entirely (not only for "loophole casting"), which is perhaps not a sought-after behaviour.

Determining protocol types at runtime

How can we determine whether an instance conforms to a specific sub-protocol, based on user-provided instances? If it's not possible this way, are there any alternative solutions?
API
protocol Super {}
protocol Sub: Super {} // inherits from the Super protocol
class Type1: Super {} // conforms to the Super protocol
class Type2: Type1, Sub {} // conforms to the Sub protocol
inside another API class
func store(closures: [() -> Super]) {
self.closures = closures
}
when it's time to call
func go() {
    for closure in closures {
        var instance = closure()
        if instance is Super {
            // do something - system will behave differently
        } else { // it's Sub
            // do something else - system will behave differently
        }
    }
}
users of the api
class Imp1: Type1 {}
class Imp2: Type2 {}
var closures: [() -> Super] = [ { Imp1() }, { Imp2() } ]
store(closures)
my current workaround within API
func go() {
    for closure in closures {
        var instance = closure()
        var behavior = 0
        if instance as? Type2 != nil { // not so cool, should be through protocols
            behavior = 1               // instead of implementations
        }
        if behavior == 0 {
            // do something within the api
        } else {
            // do something else within the api
        }
        // the instance's overridden method will be called,
        // but that's not important to show here; polymorphism works here.
        // I'm more concerned with how the api can do something different based on the types
    }
}
You are jumping through a lot of hoops to manually recreate dynamic dispatch, i.e. one of the purposes of protocols and classes. Try actually using real runtime polymorphism to solve your problem.
Take this code:
if instance is Super {
//do something
} else { //it's Sub
//do something else
}
What you are saying is, if it’s a superclass, run the superclass method, else, run the subclass. This is a bit inverted – normally when you are a subclass you want to run the subclass code not the other way around. But assuming you turn it around to the more conventional order, you are essentially describing calling a protocol’s method and expecting the appropriate implementation to get called:
(the closures aren’t really related to the question in hand so ignoring them for now)
protocol Super { func doThing() }
protocol Sub: Super { } // Super is actually a bit redundant here

class Type1: Super {
    func doThing() {
        print("I did a super thing!")
    }
}

class Type2: Sub {
    func doThing() {
        print("I did a sub thing!")
    }
}

func doSomething(s: Super) {
    s.doThing()
}

let c: [Super] = [Type1(), Type2()]
for t in c {
    doSomething(s: t)
}
// prints "I did a super thing!", then "I did a sub thing!"
Alternatives to consider: eliminate Sub, and have Type2 inherit from Type1. Or, since there’s no class inheritance here, you could use structs rather than classes.
Almost any time you find yourself wanting to use is?, you probably meant to use an enum. Enums allow you to use the equivalent of is? without feeling bad about it (because it isn't a problem). The reason that is? is bad OO design is that it creates a function that is closed to subtyping, while OOP itself is always open to subtyping (you should think of final as a compiler optimization, not as a fundamental part of types).
Being closed to subtyping is not a problem or a bad thing. It just requires thinking in a functional paradigm rather than an object paradigm. Enums (which are the Swift implementation of a Sum type) are exactly the tool for this, and are very often a better tool than subclassing.
enum Thing {
case Type1(... some data object(s) ...)
case Type2(... some data object(s) ...)
}
Now in go(), instead of an is? check, you switch. Not only is this not a bad thing, it's required and fully type-checked by the compiler.
(Example removes the lazy closures since they're not really part of the question.)
func go(instances: [Thing]) {
    for instance in instances {
        switch instance {
        case .Type1(let ...): // ...Type1 behaviors...
        case .Type2(let ...): // ...Type2 behaviors...
        }
    }
}
If you have some shared behaviors, just pull those out into a function. You're free to let your "data objects" implement certain protocols or be of specific classes if that makes things easier to pass along to shared functions. It's fine if Type2 takes associated data that happens to be a subclass of Type1.
If you come along later and add a Type3, then the compiler will warn you about every switch that fails to consider this. That's why enums are safe while is? is not.
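A concrete, runnable version of that sketch, with hypothetical payload types standing in for the "... some data object(s) ..." placeholders (and current lowercase case names):
struct Payload1 { let value: Int }
struct Payload2 { let name: String }

enum Thing {
    case type1(Payload1)
    case type2(Payload2)
}

func go(instances: [Thing]) {
    for instance in instances {
        switch instance {   // exhaustive: adding a type3 case later is flagged here
        case .type1(let payload):
            print("Type1 behavior with \(payload.value)")
        case .type2(let payload):
            print("Type2 behavior with \(payload.name)")
        }
    }
}

go(instances: [.type1(Payload1(value: 1)), .type2(Payload2(name: "two"))])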
You need objects derived from the Objective-C world to do this:
import Foundation

@objc protocol Super {}
@objc protocol Sub: Super {}

class Parent: NSObject, Super {}
class Child: NSObject, Sub {}

func go(closures: [() -> Super]) {
    for closure in closures {
        let instance = closure()
        if instance is Sub { // check for Sub first; a check for Super is always true
            // do something
        } else {
            // do something else
        }
    }
}
Edit: Version with different method implementations:
protocol Super {
    func doSomething()
}

protocol Sub: Super {}

class Parent: Super {
    func doSomething() {
        // do something
    }
}

class Child: Sub {
    func doSomething() {
        // do something else
    }
}

func go(closures: [() -> Super]) {
    for closure in closures {
        let instance = closure()
        instance.doSomething()
    }
}
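A short usage sketch for the version above (Swift 3+ argument labels assumed):
let closures: [() -> Super] = [ { Parent() }, { Child() } ]
go(closures: closures)   // each instance's own doSomething() runs via the witness table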
}