In essence, there are four questions here about Go interfaces, each slightly harder than the one before it.
Say we import a number of interfaces A, B, C, ..., G:
import (
    "A"
    "B"
    "C"
    // more ...
    "G"
)
and we define a type S:
type S struct {
    // more ...
}
Now, if S "implements" one of the interfaces:
func (s S) Foo() {
    // more ...
}
As things stand, is there any way to tell which of the interfaces A - G S implements, other than searching through the interface declarations of A - G? If not, is there a way to explicitly declare which interface S implements? So far, it appears that the interface S implements is inferred.
Suppose, in the same example, that the interfaces B, C, and D all have a Foo() method with the same signature. In that case, which interface does S implement? Does it depend on the order in which B, C, and D are imported?
Suppose that, rather than importing them, we declare and define the interfaces B, C, and D in the same file, and they again all have a Foo() method with the same signature. Which interface does S implement? Does it depend on the order in which those interfaces are declared or defined?
Suppose we declare and define the interfaces B, C, and D in the same file, where B has a Foo() method, C has a Bar() method, and D has both Foo() and Bar(). If S implements Foo() and Bar(), does S implement B and C, or only D?
No. (But as a side note, you don't import interfaces, you import packages.)
All of them. Order doesn't matter. If you declared variables of type B, C, and D, all of them would be able to hold an S.
All of them. Whether the interfaces are declared in the same file or in different files doesn't matter.
All of them. S implements B because S has Foo(). S implements C because S has Bar(). S implements D because S has both Foo() and Bar().
How can I create a type from an existing function in Dart/Flutter?
I know typedef can create a custom type, something like:
typedef Custom<T> = int Function(T a, T b);
But if I have a function from a dependency and want to create a type for it to use as an argument type, how can I do this?
int functionFromDependency(String a, {int b, ComplexType c}) {
  // ...
}
// something with syntax like `typeof<functionFromDependency>`
void myProcess(typeof<functionFromDependency>? runner) {
  (runner ?? functionFromDependency)(
    a,
    b: b,
    c: c
  );
}
This would be helpful for testing, where I could inject a custom function.
Dart doesn't have something like C++'s decltype that would allow you to statically re-use the type of some other variable.
In the case of a callback argument to a function, I don't think that it would make sense anyway. For example, you could give up static type-checking and do:
void myProcess(Function? runner) {
  (runner ?? functionFromDependency)(
    a,
    b: b,
    c: c
  );
}
but ultimately myProcess still requires that runner be a function that takes a single positional argument and the named parameters b and c, all with types appropriate for the supplied arguments. You might as well do:
typedef MyProcessCallback = void Function(A a, {B b, C c});
void myProcess(MyProcessCallback? runner) {
  ...
}
myProcess should not be trying to conform to its callers; that's backwards. Callers of myProcess should be conforming to myProcess's API instead, which is much easier.
I am new to Scala.
I have a class A that extends a class C. I also have a class B that extends C.
I want function objects of type A -> B to extend C as well (and also other derived types, such as A -> (A -> B)). But I read the following in "Programming in Scala":
A function literal is compiled into a class that when instantiated at runtime
is a function value.
Is there some way of automatically letting A -> B extend C, other than manually having to create a new class that represents the function?
Functions in Scala are modelled via the FunctionN traits. For example, simple one-input, one-output functions are all instances of the following trait:
trait Function1[-T1, +R] extends AnyRef
So what you're asking is "how can I make instances of Function1 also become subclasses of C?". This is not doable via standard subtyping / inheritance, because we obviously can't modify the Function1 trait to make it extend your custom class C. Sure, we could make up a new class to represent the function, as you suggested, but that only takes us so far and is not easy to implement. Not to mention that any function you want to use as a C would first have to be converted to your pseudo-function class, which would make things horrible.
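For illustration, here is a minimal sketch of that manual wrapper approach (FnAB and wrap are made-up names; A, B, and C stand in for the question's classes):
class C
class A extends C
class B extends C

// a function-like class that is also a C
abstract class FnAB extends C with (A => B)

// every ordinary A => B has to be wrapped by hand before it can be treated as a C
def wrap(f: A => B): FnAB = new FnAB { def apply(a: A): B = f(a) }

val asC: C = wrap((a: A) => new B) // usable as a C, but only after wrapping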
What we can do, however, is create a typeclass which then contains an implementation for A -> B, among others.
Let's take the following code as an example:
trait A
trait B
trait C[T]

object C {
  implicit val fa = new C[A] {}
  implicit val fb = new C[B] {}
  implicit val fab = new C[Function1[A, B]] {}
}

object Test extends scala.App {
  val f: A => B = (a: A) => new B {}

  def someMethod[Something: C](s: Something) = {
    // uses "s", for example:
    println(s)
  }

  someMethod(f) // Test$$$Lambda$6/1744347043@dfd3711
}
You haven't specified your motivation for making A -> B extend C, but evidently you want to put A, B, and A -> B under the "same umbrella" because you have, say, some method (call it someMethod) which takes a C, so that with inheritance you could pass it values of type A, B, or A -> B.
With a typeclass you achieve the same thing, with some extra advantages, such as being able to add D to the family one day without changing any existing code (you would just need to provide an implicit value of type C[D] somewhere in scope).
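For instance, here is a minimal sketch of such an extension, reusing C and someMethod from the example above (the trait D and the name fd are made up):
trait D

object D {
  // lives in D's companion object, so it is found via implicit scope
  implicit val fd: C[D] = new C[D] {}
}

someMethod(new D {}) // compiles, without touching A, B, or the existing instances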
So instead of having someMethod take instances of C, it simply takes something (let's call it s) of some type (let's call it Something), with the constraint that C[Something] must exist. If you pass something for which an instance of C doesn't exist, you will get an error:
trait NotC
someMethod(new NotC {})
// Error: could not find implicit value for evidence parameter of type C[NotC]
You achieve the same thing, a family of C whose members are A, B, and A => B, but you sidestep the subtyping problems.
Consider the two pieces of code below. They accomplish the same goal: only those A[T]s where T extends C can be stored in the Container.
However, they use two different approaches to achieve this goal:
1) existentials
2) covariance
I prefer the first solution because then A stays simpler. Is there any reason why I would ever want to use the second solution (covariance)?
My problem with the second solution is that it is not natural, in the sense that it should not be A's responsibility to describe what I can store in a Container and what not; that should be the Container's responsibility. The second solution also gets more complicated once I want to start operating on A, because then I have to deal with everything that comes with covariance.
What benefit would I get by using the second (more complicated, less natural) solution?
object Existentials extends App {
  class A[T](var t: T)

  class C
  class C1 extends C
  class C2 extends C
  class Z

  class Container[T] {
    var t: T = _
  }

  val c = new Container[A[_ <: C]]()
  c.t = new A(new C)
  // c.t = new Z // does not compile
  val r: A[_ <: C] = c.t
  println(r)
}
object Cov extends App {
  class A[+T](val t: T)

  class C
  class C1 extends C
  class C2 extends C
  class Z

  class Container[T] {
    var t: T = _
  }

  val c: Container[A[C]] = new Container[A[C]]()
  c.t = new A(new C)
  // c.t = new A(new Z) // does not compile
  val r: A[C] = c.t
  println(r)
}
EDIT (in response to Alexey's answer):
Commenting on:
"My problem with the second solution is that it is not natural, in the sense that it should not be A's responsibility to describe what I can store in a Container and what not; that should be the Container's responsibility."
If I have class A[T](var t: T), that means I can store only A[T]s, and not A[S] where S <: T, in any container.
However, if I have class A[+T](val t: T), then I can also store A[S] where S <: T in any container.
So by declaring A either invariant or covariant, I decide which kinds of A[S] can be stored in a container (as shown above), and this decision takes place at the declaration of A.
However, I think this decision should instead take place at the declaration of the container, because it is container specific what is allowed to go into that container: only A[T]s, or also A[S] where S <: T.
In other words, changing the variance of A[T] has global effects, while changing the type parameter of a container from A[T] to A[_ <: S] has a well-defined local effect on the container itself. So the principle of "changes should have local effects" also favors the existential solution here.
In the first case A is simpler, but in the second case its clients are. Since there is normally more than one place where you use A, this is often a worthwhile tradeoff. Your own code demonstrates it: when you need to write A[_ <: C] in the first case (in two places), you can just use A[C] in the second one.
In addition, in the first case it is easy to end up writing just A[C] where A[_ <: C] is really desired. Let's say you have a method
def foo(x: A[C]): C = x.t
Now you can't call foo(y) with y: A[C1] even though it would make sense: y.t does have type C.
When this happens in your code, it can be fixed, but what about third-party code?
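For illustration, a rough sketch of the difference (foo2 and foo3 are made-up names; each half assumes the A defined in the corresponding example above):
// with the invariant A[T] from the Existentials example:
def foo(x: A[C]): C = x.t
// foo(new A(new C1))           // does not compile: A[C1] is not an A[C]
def foo2(x: A[_ <: C]): C = x.t
foo2(new A(new C1))             // compiles, but the existential leaks into every such signature

// with the covariant A[+T] from the Cov example:
def foo3(x: A[C]): C = x.t
foo3(new A(new C1))             // compiles: A[C1] <: A[C]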
Of course, this applies to the standard library types as well: if types like Option and List weren't covariant, either the signatures of all methods taking or returning them would have to be more complex, or many programs which are currently valid and make perfect sense would break.
it should not be A's responsibility to describe what I can store in a Container and what not; that should be the Container's responsibility.
Variance isn't about what you can store in a container; it is about when A[B] is a subtype of A[C]. This argument is a bit like saying that you shouldn't have extends at all: otherwise class Apple extends Fruit allows you to store an Apple in a Container[Fruit], and deciding that should be the Container's responsibility.
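A tiny sketch of that distinction (Inv and Cov are made-up names for an invariant and a covariant class):
class C
class C1 extends C

class Inv[T]
class Cov[+T]

implicitly[Cov[C1] <:< Cov[C]]    // compiles: covariance makes Cov[C1] a subtype of Cov[C]
// implicitly[Inv[C1] <:< Inv[C]] // does not compile: with invariance the two types are unrelated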
I've seen code like this a couple of times in Scala libraries. What does it mean?
trait SecuredSettings {
  this: GlobalSettings =>

  def someMethod = {}
  ...
}
This trick is called a "self-type annotation".
It actually does two separate things at once:
1) It introduces a local alias for the this reference (useful when you have nested classes, because then there are several this references in scope).
2) It enforces that the trait can only be mixed into subtypes of some given type (this way you can assume certain methods are in scope). Both uses are sketched below.
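A small sketch of both uses (Outer, Inner, Logging, and Worker are made-up names):
// 1) "self" as an alias for the outer "this":
class Outer { self =>
  private val name = "outer"
  class Inner {
    // inside Inner, "this" is the Inner instance; "self" still refers to the Outer one
    def outerName: String = self.name
  }
}

// 2) "this: SomeTrait =>" restricts where the trait may be mixed in:
trait Logging { def log(msg: String): Unit = println(msg) }
trait Worker { this: Logging =>
  def work(): Unit = log("working") // Logging's members are visible here
}
// new Worker {}                 // does not compile: Logging is required
val w = new Worker with Logging  // compiles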
Google for "scala self type annotation" for many discussions of this subject.
The scala-lang.org site contains a pretty decent explanation of this feature:
http://docs.scala-lang.org/tutorials/tour/explicitly-typed-self-references.html
There are numerous patterns which use this trick in non-obvious ways. For a start, look here:
http://marcus-christie.blogspot.com/2014/03/scala-understanding-self-type.html
trait A {
  def foo: Unit = println("foo")
}

class B { self: A =>
  def bar = foo // compiles: the self type guarantees foo is available
}

// val b = new B     // fails: B's self type requires A
val b = new B with A // compiles
It means that any instance of B must also mix in A. B is not an A, but its instances are guaranteed to be, so you can write B's code as if it were already an A.
trait A { def someMethod = 1 }
trait B { self: A => }
val refOfTypeB: B = new B with A
refOfTypeB.someMethod
The last line results in a compile error. My question is: why is it impossible to reach the method (of A) when it's given that B is also of type A?
So B is not also of type A. The self-type annotation that you've used here indicates specifically that B does not extend A, but that wherever B is mixed in, A must be mixed in at some point as well. Since you typed refOfTypeB as a B instead of B with A, you don't get access to any of A's methods. Inside the implementation of the B trait you can access A's methods, since the compiler knows that A will be available in any concrete class that mixes in B. It might be easier to think of it as "B depends on A" rather than "B is an A".
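A small sketch of that distinction, reusing the asker's A and B and adding a made-up method twice:
trait A { def someMethod = 1 }

trait B { self: A =>
  // A's members are visible inside B thanks to the self type
  def twice: Int = someMethod * 2
}

val ab: B with A = new B with A
ab.twice      // 2
ab.someMethod // 1, allowed because the static type includes A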
For a more thorough explanation see this answer: What is the difference between self-types and trait subclasses?
The problem is that when you declare refOfTypeB, you give it the type B, not B with A.
The self: A => syntax allows you to access the members of A within B, and it is what makes B compile.
However, the static type of refOfTypeB is just B, so the compiler will not let you call A's members on it.
So, the correct syntax should be:
trait A { def someMethod = 1 }
trait B { self: A => }
val refOfTypeB: B with A = new B with A
refOfTypeB.someMethod // 1
Indeed, this is also more expressive, since it states exactly what refOfTypeB is.
This is, in a way, the boilerplate behind the cake pattern.