I know that trait Foo[T] means Foo is a type parameterized by T.
But sometimes I see trait Foo[T1,T2], or trait Foo[T1,T2,R], and I cannot find anything that describes the meaning of multiple types inside a type bracket. Could you please point me to the usages in this case? From what I speculate, Foo[T1,T2] just means it defines two type parameters; it doesn't have to take a T1 and return a T2.
When I read playframework documentation today, I again found myself confused about this question. In the documentation, it says:
A BodyParser[A] is basically an Iteratee[Array[Byte],A], meaning that
it receives chunks of bytes (as long as the web browser uploads some
data) and computes a value of type A as result.
This explanation sounds as if the second type parameter inside a type bracket is a return type.
I also remember that trait Function2[-T1, -T2, +R] extends AnyRef means a function that takes a T1 and a T2 and returns an R.
Why do they put the return type in the bracket? Does it mean the last parameter in a bracket is always a return type? Or did they just happen to define a new type R for the return type?
From what I speculate, Foo[T1,T2] just means it defines two type parameters; it doesn't have to take a T1 and return a T2.
A type parameter means nothing more than "I need some type here, but I'm not interested in which concrete type it is", where 'I' is the programmer who writes the code. Type parameters can be used like any other types such as Int, String or Complex - the only difference is that they are not known until one uses them.
See type Map[A, +B]. When you first read this, you can't know what the A and B are for, thus you have to read the documentation:
A map from keys of type A to values of type B.
It explains the types and their meaning. There is nothing more to know or understand. They are just two types. It would be possible to call Map something like Map[Key, Value], but inside source code it is better when type parameters have only one or two letters. This makes it easier to differentiate between type parameters and concrete types.
It is the documentation which specifies what a type parameter means. And if there is no documentation, you have to take a look at the sources and find their meaning yourself. For example, you have to do this with Function2[-T1, -T2, +R]. The documentation tells us only this:
A function of 2 parameters.
Ok, we know that two of the three type parameters are the parameters the function expects, but what is the third one? We take a look at the sources:
def apply(v1: T1, v2: T2): R
Ah, now we know that T1 and T2 are the parameter types and that R is the return type.
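For instance, here is a small sketch of those three parameters filled in with concrete types (Function2[Int, Int, Long] is just the desugared form of (Int, Int) => Long):

val sum: Function2[Int, Int, Long] = (a: Int, b: Int) => (a + b).toLong
sum(1, 2) // calls apply(v1: T1, v2: T2): R with T1 = Int, T2 = Int, R = Long, and returns 3L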
Type parameters also can be found in method signatures like map:
class List[+A] {
  // ...
  def map[B](f: (A) ⇒ B): List[B]
}
This is what map looks like when you use it with a List. A can be any type - it is the type of the elements the list contains. B is another arbitrary type. When you know what map does, then you know what B does. Otherwise you have to understand map first. map expects a function which can transform each element of a List into another element. Because you know that A stands for the elements of the List, you can derive yourself that B has to be the type A is transformed to.
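As a small illustration using only the standard library, here A is Int and B is String:

val xs: List[Int] = List(1, 2, 3)
val ys: List[String] = xs.map(n => n.toString) // A = Int, B = String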
To answer all your other questions: This shouldn't be done in a single answer. There are a lot of other questions and answers on StackOverflow which can also answer your questions.
Summary
When you see some type parameters, for example in Foo[T1, T2], you should not start to cry. Think: "Ok, I have a Foo which expects a T1 and a T2, and if I want to know what they do I have to read the documentation or the sources."
Multiple types inside a type bracket mean parametrization over multiple types. Take for example
trait Pair[A, B]
This is a pair of values, one having type A, the other having type B.
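A minimal sketch of such a pair in use (SimplePair is a made-up implementation of the trait above, just for illustration); note that neither parameter is a "return type", they are simply the two component types:

case class SimplePair[A, B](first: A, second: B) extends Pair[A, B]

val p: SimplePair[Int, String] = SimplePair(42, "answer")
val n: Int = p.first
val s: String = p.second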
Update:
I think you are interpreting too much into the semantics of type parameters. A type parametrized by multiple parameters is just that and nothing more. The position of a specific type parameter in the list of type parameters does not make it special in any way. Specifically the last parameter in a list of type parameters does not need to stand for 'the return type'.
The sentence from the play framework which you quoted explains the semantics of the type parameters for this one specific type. It does not generalize to other types. The same holds for the Function types: here the last type parameter happens to mean 'the return type'. This is not necessarily the case for other types though. The type Pair[A, B] from above is such an example. Here B is the type of the second component of the pair. There is no notion of a 'return type' here at all.
Type parameters of a parametrized type can appear anywhere inside the definition of the parametrized type where a 'regular' type could appear. That is, type parameters are just names for types which are bound to the actual types only when the parametrized type itself is instantiated.
Consider the following definition of a class Tuple:
class Tuple[A, B](a: A, b: B)
It is instantiated to a type of a tuple of Int and String like this:
type TupleIntString = Tuple[Int, String]
Which is essentially the same as
class TupleIntString(a: Int, b: String)
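For example, using the class and alias defined above:

val t = new Tuple(1, "one") // inferred as Tuple[Int, String]
val u: TupleIntString = new Tuple[Int, String](2, "two")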
For an official source check the Scala Language Specification. Specifically Section 3.4 "Base Types and Member Definitions" under 1. the 4th bullet point says: "The base types of a parameterized type C[T_1, ..., T_n] are the base types of type C , where every occurrence of a type parameter a_i of C has been replaced by the corresponding parameter type T_i."
I think your question can actually be broken in three separate problems:
What's with the multiple type parameters for classes/traits/etc. ?
A classic example is a map from one type of object to another. If you want the type of the keys to be different from the type of the values, but keep both generic, you need two type parameters. So, a Map[A,B] takes keys of generic type A and maps them to values of generic type B. A user who wants a map from Bookmarks to Pages would declare it as Map[Bookmark, Page]. Having only one type parameter would not allow this distinction.
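A short sketch with hypothetical Bookmark and Page classes (made up for illustration, not from any library), just to show the two parameters playing different roles:

case class Bookmark(title: String)
case class Page(url: String)

val index: Map[Bookmark, Page] =
  Map(Bookmark("Scala docs") -> Page("https://docs.scala-lang.org"))

val page: Option[Page] = index.get(Bookmark("Scala docs")) // A = Bookmark, B = Page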
From what I speculate, Foo[T1,T2] just means it defines two type parameters; it doesn't have to take a T1 and return a T2.
No, all type parameters are equal citizens, though they have a meaning in that direction for function objects. See below.
What are all those +/-'s ?
They limit what the type parameters can bind to. The Scala by Example tutorial has a good explanation. See Section 8.2 Variance Annotations.
What is a function in scala?
Why do they put the return type in the bracket? Does it mean the last parameter in a bracket is always a return type? Or did they just happen to define a new type R for the return type?
The Scala by Example tutorial explains this well in Section 8.6 Functions.
Their role is a bit similar to the one multiple type parameters play in a class, since traits are, after all, classes (without any constructor) meant to be added to some other class as a mixin.
The Scala spec gives the following example for Trait with multiple parameters:
Consider an abstract class Table that implements maps from a type of keys A to a type of values B.
The class has a method set to enter a new key/value pair into the table, and a method get that returns an optional value matching a given key.
Finally, there is a method apply which is like get, except that it returns a given default value if the table is undefined for the given key. This class is implemented as follows.
abstract class Table[A, B](defaultValue: B) {
  def get(key: A): Option[B]
  def set(key: A, value: B)
  def apply(key: A) = get(key) match {
    case Some(value) => value
    case None => defaultValue
  }
}
Here is a concrete implementation of the Table class.
class ListTable[A, B](defaultValue: B) extends Table[A, B](defaultValue) {
  private var elems: List[(A, B)] = List()
  def get(key: A) = elems.find(_._1 == key).map(_._2)
  def set(key: A, value: B) = { elems = (key, value) :: elems }
}
Here is a trait that prevents concurrent access to the get and set operations of its
parent class
trait SynchronizedTable[A, B] extends Table[A, B] {
  abstract override def get(key: A): Option[B] =
    synchronized { super.get(key) }
  abstract override def set(key: A, value: B) =
    synchronized { super.set(key, value) }
}
Note that SynchronizedTable does not pass an argument to its superclass, Table,
even though Table is defined with a formal parameter.
Note also that the super calls in SynchronizedTable’s get and set methods statically refer to abstract methods in class Table. This is legal, as long as the calling method is labeled abstract override (§5.2).
Finally, the following mixin composition creates a synchronized list table with
strings as keys and integers as values and with a default value 0:
object MyTable extends ListTable[String, Int](0) with SynchronizedTable
The object MyTable inherits its get and set methods from SynchronizedTable. The super calls in these methods are re-bound to refer to the corresponding implementations in ListTable, which is the actual supertype of SynchronizedTable in MyTable.
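For instance, assuming the definitions above compile as shown, MyTable could be used like this (a sketch, not part of the spec example):

MyTable.set("one", 1)
MyTable("one") // 1
MyTable("two") // 0, the default value passed to ListTable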
Related
Going through Play Form source code right now and encountered this
def bindFromRequest()(implicit request: play.api.mvc.Request[_]): Form[T] = {
I am guessing it takes request as an implicit parameter (so you don't have to call bindFromRequest(request)) of type play.api.mvc.Request[_] and returns a generic type T wrapped in the Form class. But what does [_] mean?
The notation Foo[_] is a short hand for an existential type:
Foo[A] forSome {type A}
So it differs from a normal type parameter by being existentially quantified. There has to be some type so that your code type checks, whereas if you used a type parameter for the method or trait it would have to type check for every type A.
For example this is fine:
val list = List("asd");
def doSomething() {
val l: List[_] = list
}
This would not typecheck:
def doSomething[A]() {
  val l: List[A] = list
}
So existential types are useful in situations where you get some parameterized type from somewhere but you do not know or care about the parameter (or only about some bounds on it, etc.).
In general, you should avoid existential types though, because they get complicated fast. A lot of instances (especially the uses known from Java, where they are called wildcard types) can be avoided by using variance annotations when designing your class hierarchy.
The method doesn't take a play.api.mvc.Request, it takes a play.api.mvc.Request parameterized with another type. You could give the type parameter a name:
def bindFromRequest()(implicit request: play.api.mvc.Request[TypeParameter]): Form[T] = {
But since you're not referring to TypeParameter anywhere else it's conventional to use an underscore instead. I believe the underscore is special cased as a 'black hole' here, rather than being a regular name that you could refer to as _ elsewhere in the type signature.
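As a minimal self-contained sketch (Request here is a stand-in, not Play's actual class), the wildcard and a named-but-unused type parameter accept exactly the same arguments:

trait Request[A] { def body: A }

def describe(request: Request[_]): String = request.toString // wildcard: the body type stays unknown
def describe2[A](request: Request[A]): String = request.toString // named parameter, equivalent when A is never used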
I just learned Scala. Now I am confused about Contravariance and Covariance.
From this page, I learned something below:
Covariance
Perhaps the most obvious feature of subtyping is the ability to replace a value of a wider type with a value of a narrower type in an expression. For example, suppose I have some types Real, Integer <: Real, and some unrelated type Boolean. I can define a function is_positive :: Real -> Boolean which operates on Real values, but I can also apply this function to values of type Integer (or any other subtype of Real). This replacement of wider (ancestor) types with narrower (descendant) types is called covariance. The concept of covariance allows us to write generic code and is invaluable when reasoning about inheritance in object-oriented programming languages and polymorphism in functional languages.
However, I also saw something from somewhere else:
scala> class Animal
defined class Animal
scala> class Dog extends Animal
defined class Dog
scala> class Beagle extends Dog
defined class Beagle
scala> def foo(x: List[Dog]) = x
foo: (x: List[Dog])List[Dog] // Given a List[Dog], just returns it
scala> val an: List[Animal] = foo(List(new Beagle))
an: List[Animal] = List(Beagle#284a6c0)
Parameter x of foo is contravariant; it expects an argument of type List[Dog], but we give it a List[Beagle], and that's okay
[What I think is that the second example should also demonstrate Covariance. From the first example, I learned that we can "apply this function to values of type Integer (or any other subtype of Real)". Correspondingly, here we apply the function to a value of type List[Beagle] (a subtype of List[Dog]). But to my surprise, the second example is used to demonstrate Contravariance.]
I think the two are talking about the same thing, but one demonstrates Covariance and the other Contravariance. I also saw this question from SO. However, I am still confused. Did I miss something, or is one of the examples wrong?
A good recent article (August 2016) on that topic is "Cheat Codes for Contravariance and Covariance" by Matt Handler.
It starts from the general concept as presented in "Covariance and Contravariance of Hosts and Visitors" and the diagram from Andre Tyukin and anoopelias's answer.
And it concludes with:
Here is how to determine if your type ParametricType[T] can/cannot be covariant/contravariant:
A type can be covariant when it does not call methods on the type that it is generic over.
If the type needs to call methods on generic objects that are passed into it, it cannot be covariant.
Archetypal examples:
Seq[+A], Option[+A], Future[+T]
A type can be contravariant when it does call methods on the type that it is generic over.
If the type needs to return values of the type it is generic over, it cannot be contravariant.
Archetypal examples:
`Function1[-T1, +R]`, `CanBuildFrom[-From, -Elem, +To]`, `OutputChannel[-Msg]`
Regarding contravariance,
Functions are the best example of contravariance
(note that they’re only contravariant on their arguments, and they’re actually covariant on their result).
For example:
class Dachshund(
  name: String,
  likesFrisbees: Boolean,
  val weinerness: Double
) extends Dog(name, likesFrisbees)

def soundCuteness(animal: Animal): Double =
  -4.0 / animal.sound.length

def weinerosity(dachshund: Dachshund): Double =
  dachshund.weinerness * 100.0

def isDogCuteEnough(dog: Dog, f: Dog => Double): Boolean =
  f(dog) >= 0.5
Should we be able to pass weinerosity as an argument to isDogCuteEnough? The answer is no, because isDogCuteEnough only guarantees that it can pass, at its most specific, a Dog to the function f.
When the function f expects something more specific than what isDogCuteEnough can provide, it could attempt to call a method that some Dogs don't have (like .weinerness on a Greyhound, which is insane).
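Conversely, a function that accepts something more general than Dog is fine. A hedged sketch, assuming the Animal/Dog classes the article uses (with a sound member on Animal, as soundCuteness above implies):

val rex = new Dog("Rex", true)
isDogCuteEnough(rex, soundCuteness) // compiles: an Animal => Double is a valid Dog => Double
// isDogCuteEnough(rex, weinerosity) // does not compile: Dachshund => Double is too specific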
That you can pass a List[Beagle] to a function expecting a List[Dog] has nothing to do with contravariance of functions; it is because List is covariant, so a List[Beagle] is a List[Dog].
Instead lets say you had a function:
def countDogsLegs(dogs: List[Dog], legCountFunction: Dog => Int): Int
This function counts all the legs in a list of dogs. It takes a function that accepts a dog and returns an int representing how many legs this dog has.
Furthermore lets say we have a function:
def countLegsOfAnyAnimal(a: Animal): Int
that can count the legs of any animal. We can pass our countLegsOfAnyAnimal function to countDogsLegs as the function argument: if it can count the legs of any animal, it can count the legs of dogs, because dogs are animals. This works because functions are contravariant in their argument type.
If you look at the definition of Function1 (functions of one parameter), it is
trait Function1[-A, +B]
That is, they are contravariant on their input and covariant on their output. So Function1[Animal, Int] <: Function1[Dog, Int], since Dog <: Animal.
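A small self-contained sketch of that subtyping relation (simplified Animal/Dog classes, made up for this example):

class Animal { def legs: Int = 4 }
class Dog extends Animal

val countLegsOfAnyAnimal: Animal => Int = a => a.legs
val countDogLegs: Dog => Int = countLegsOfAnyAnimal // compiles: Function1 is contravariant in its input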
Variance is used to indicate subtyping in terms of containers (e.g. List). In most languages, if a function requests an object of class Animal, passing any class that inherits from Animal (e.g. Dog) would be valid. However, in terms of containers, this need not be valid.
If your function wants a Container[A], what are the possible values that can be passed to it? If B extends A and passing Container[B] is valid, then the container is covariant (e.g. List[+T]). If A extends B (the inverse case) and passing Container[B] for a Container[A] is valid, then it is contravariant. Otherwise, it is invariant (which is the default). You could refer to an article where I have tried explaining variance in Scala:
https://blog.lakshmirajagopalan.com/posts/variance-in-scala/
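A brief sketch of the three cases (the class names are made up for illustration):

class Animal
class Dog extends Animal

class CoBox[+A] // covariant: CoBox[Dog] is a subtype of CoBox[Animal]
class Printer[-A] // contravariant: Printer[Animal] is a subtype of Printer[Dog]
class Cell[A] // invariant: Cell[Dog] and Cell[Animal] are unrelated

val box: CoBox[Animal] = new CoBox[Dog] // compiles
val printer: Printer[Dog] = new Printer[Animal] // compiles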
I find that I really can't understand the difference between a "generic type" and a "higher-kinded type".
Scala code:
trait Box[T]
I defined a trait named Box, which is a type constructor that accepts a type parameter T. (Is this sentence correct?)
Can I also say:
Box is a generic type
Box is a higher-kinded type
None of above is correct
When I discuss the code with my colleagues, I often struggle between the words "generic" and "higher-kinded" to express it.
It's probably too late to answer now, and you probably know the difference by now, but I'm going to answer just to offer an alternate perspective, since I'm not so sure that what Greg says is right. Generics is more general than higher kinded types. Lots of languages, such as Java and C# have generics, but few have higher-kinded types.
To answer your specific question, yes, Box is a type constructor with a type parameter T. You can also say that it is a generic type, although it is not a higher kinded type. Below is a broader answer.
This is the Wikipedia definition of generic programming:
Generic programming is a style of computer programming in which algorithms are written in terms of types to-be-specified-later that are then instantiated when needed for specific types provided as parameters. This approach, pioneered by ML in 1973, permits writing common functions or types that differ only in the set of types on which they operate when used, thus reducing duplication.
Let's say you define Box like this. It holds an element of some type, and has a few special methods. It also defines a map function, something like Iterable and Option, so you can take a box holding an integer and turn it into a box holding a string, without losing all those special methods that Box has.
case class Box(elem: Any) {
  // ...some special methods
  def map(f: Any => Any): Box = Box(f(elem))
}

val boxedNum: Box = Box(1)
val extractedNum: Int = boxedNum.elem.asInstanceOf[Int]

val boxedString: Box = boxedNum.map(_.toString)
val extractedString: String = boxedString.elem.asInstanceOf[String]
If Box is defined like this, your code would get really ugly because of all the calls to asInstanceOf, but more importantly, it's not typesafe, because everything is an Any.
This is where generics can be useful. Let's say we define Box like this instead:
case class Box[A](elem: A) {
  def map[B](f: A => B): Box[B] = Box(f(elem))
}
Then we can use our map function for all kinds of stuff, like changing the object inside the Box while still making sure it's inside a Box. Here, there's no need for asInstanceOf since the compiler knows the type of your Boxes and what they hold (even the type annotations and type arguments are not necessary).
val boxedNum: Box[Int] = Box(1)
val extractedNum: Int = boxedNum.elem
val boxedString: Box[String] = boxedNum.map[String](_.toString)
val extractedString: String = boxedString.elem
Generics basically lets you abstract over different types, letting you use Box[Int] and Box[String] as different types even though you only have to create one Box class.
However, let's say that you don't have control over this Box class, and it's defined merely as
case class Box[A](elem: A) {
  // some special methods, but no map function
}
Let's say this API you're using also defines its own Option and List classes (both accepting a single type parameter representing the type of the elements). Now you want to be able to map over all these types, but since you can't modify them yourself, you'll have to define an implicit class to create an extension method for them. Let's add an implicit class Mappable for the extension method and a typeclass Mapper.
trait Mapper[C[_]] {
  def map[A, B](context: C[A])(f: A => B): C[B]
}

implicit class Mappable[C[_], A](context: C[A])(implicit mapper: Mapper[C]) {
  def map[B](f: A => B): C[B] = mapper.map(context)(f)
}
You could define implicit Mappers like so
implicit object BoxMapper extends Mapper[Box] {
  def map[A, B](box: Box[A])(f: A => B): Box[B] = Box(f(box.elem))
}

implicit object OptionMapper extends Mapper[Option] {
  def map[A, B](opt: Option[A])(f: A => B): Option[B] = ???
}

implicit object ListMapper extends Mapper[List] {
  def map[A, B](list: List[A])(f: A => B): List[B] = ???
}

// and so on
and use it as if Box, Option, List, etc. have always had map methods.
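For example, with BoxMapper in implicit scope, map becomes available on a Box through the Mappable extension even though Box itself defines no map method (a sketch based on the definitions above):

val b: Box[String] = Box(1).map(_.toString) // the compiler wraps Box(1) in Mappable using BoxMapper
// b == Box("1")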
Here, Mappable and Mapper are higher-kinded types, whereas Box, Option, and List are first-order types. All of them are generic types and type constructors. Int and String, however, are proper types. Here are their kinds, (kinds are to types as types are to values).
//To check the kind of a type, you can use :kind in the REPL
Kind of Int and String: *
Kind of Box, Option, and List: * -> *
Kind of Mapper: (* -> *) -> * (Mappable, which additionally takes the element type A, has kind (* -> *) -> * -> *)
Type constructors are somewhat analogous to functions (which are sometimes called value constructors). A proper type (kind *) is analogous to a simple value. It's a concrete type that you can use for return types, as the types of your variables, etc. You can just directly say val x: Int without passing Int any type parameters.
A first-order type (kind * -> *) is like a function that looks like Any => Any. Instead of taking a value and giving you a value, it takes a type and gives you another type. You can't use first-order types directly (val x: List won't work) without giving them type parameters (val x: List[Int] works). This is what generics does - it lets you abstract over types and create new types (the JVM just erases that information at runtime, but languages like C++ literally generate new classes and functions). The type parameter C in Mapper is also of this kind. The underscore type parameter (you could also use something else, like x) lets the compiler know that C is of kind * -> *.
A higher-kinded type/higher-order type is like a higher-order function - it takes another type constructor as a parameter. You can't use a Mapper[Int] above, because C is supposed to be of kind * -> * (so that you can do C[A] and C[B]), whereas Int is merely *. It's only in languages like Scala and Haskell with higher-kinded types that you can create types like Mapper above and other things beyond languages with more limited type systems, like Java.
This answer (and others) on a similar question may also help.
There is no difference between 'Higher-Kinded Types' and 'Generics'.
Box is a 'structure' or 'context' and T can be any type.
So T is generic in the English sense... we don't know what it will be and we don't care because we aren't going to be operating on T directly.
C# also refers to these as Generics. I suspect they chose this language because of its simplicity (to not scare people away).
I'm seeing something I do not understand. I have a hierarchy of (say) Vehicles, a corresponding hierarchy of VehicleReaders, and a VehicleReader object with apply methods:
abstract class VehicleReader[T <: Vehicle] {
  ...

object VehicleReader {
  def apply[T <: Vehicle](vehicleId: Int): VehicleReader[T] = apply(vehicleType(vehicleId))
  def apply[T <: Vehicle](vehicleType: VehicleType): VehicleReader[T] = vehicleType match {
    case VehicleType.Car => new CarReader().asInstanceOf[VehicleReader[T]]
    ...
Note that when you have more than one apply method, you must specify the return type. I have no issues when there is no need to specify the return type.
The cast (.asInstanceOf[VehicleReader[T]]) is the reason for the question - without it the result is compile errors like:
type mismatch;
found : CarReader
required: VehicleReader[T]
case VehicleType.Car => new CarReader()
^
Related questions:
Why cannot the compiler see a CarReader as a VehicleReader[T]?
What is the proper type parameter and return type to use in this situation?
I suspect the root cause here is that VehicleReader is invariant on its type parameter, but making it covariant does not change the result.
I feel like this should be rather simple (i.e., this is easy to accomplish in Java with wildcards).
The problem has a very simple cause and really doesn't have anything to do with variance. Consider even more simple example:
object Example {
  def gimmeAListOf[T]: List[T] = List[Int](10)
}
This snippet captures the main idea of your code. But it is incorrect:
val list = Example.gimmeAListOf[String]
What will be the type of list? We asked the gimmeAListOf method specifically for a List[String]; however, it always returns List[Int](10). Clearly, this is an error.
So, to put it in words, when the method has a signature like method[T]: Example[T] it really declares: "for any type T you give me I will return an instance of Example[T]". Such types are sometimes called 'universally quantified', or simply 'universal'.
However, this is not your case: your function returns specific instances of VehicleReader[T] depending on the value of its parameter, e.g. CarReader (which, I presume, extends VehicleReader[Car]). Suppose I wrote something like:
class House extends Vehicle
val reader = VehicleReader[House](VehicleType.Car)
val house: House = reader.read() // Assuming there is a method VehicleReader[T].read(): T
The compiler will happily compile this, but I will get ClassCastException when this code is executed.
There are two possible fixes for this situation. First, you can use an existential (or existentially quantified) type, which can be thought of as a more powerful version of Java wildcards:
def apply(vehicleType: VehicleType): VehicleReader[_] = ...
The signature of this function basically reads "you give me a VehicleType and I return to you an instance of VehicleReader for some type". You will have an object of type VehicleReader[_]; you cannot say anything about the type of its parameter except that this type exists, and that's why such types are called existential.
def apply(vehicleType: VehicleType): VehicleReader[T] forSome {type T} = ...
This is an equivalent definition, and it is probably clearer from it why these types have such properties - the type T is hidden inside the parameter, so you don't know anything about it except that it does exist.
But due to this property of existentials you cannot really obtain any information about the real type parameters. You cannot get, say, a VehicleReader[Car] out of a VehicleReader[_] except via a direct cast with asInstanceOf, which is dangerous unless you store a TypeTag/ClassTag for the type parameter in VehicleReader and check it before the cast. This is sometimes (in fact, most of the time) unwieldy.
That's where the second option comes to the rescue. There is a clear correspondence between VehicleType and VehicleReader[T] in your code, i.e. when you have a specific instance of VehicleType you definitely know the concrete T in the VehicleReader[T] signature:
VehicleType.Car -> CarReader (<: VehicleReader[Car])
VehicleType.Truck -> TruckReader (<: VehicleReader[Truck])
and so on.
Because of this it makes sense to add a type parameter to VehicleType. In this case your method will look like
def apply[T <: Vehicle](vehicleType: VehicleType[T]): VehicleReader[T] = ...
Now input type and output type are directly connected, and the user of this method will be forced to provide a correct instance of VehicleType[T] for that T he wants. This rules out the runtime error I have mentioned earlier.
You will still need the asInstanceOf cast, though. To avoid casting completely you will have to move the VehicleReader instantiation code (e.g. your new CarReader()) to VehicleType, because the only place where you know the real value of VehicleType[T]'s type parameter is where instances of this type are constructed:
sealed trait VehicleType[T <: Vehicle] {
  def newReader: VehicleReader[T]
}

object VehicleType {
  case object Car extends VehicleType[Car] {
    def newReader = new CarReader
  }

  // ... and so on
}
The VehicleReader factory method will then look very clean and be completely typesafe:
object VehicleReader {
  def apply[T <: Vehicle](vehicleType: VehicleType[T]) = vehicleType.newReader
}
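Hypothetical usage, assuming the Car vehicle class and CarReader from the question:

val carReader: VehicleReader[Car] = VehicleReader(VehicleType.Car) // T is inferred as Car
// val truckReader: VehicleReader[Truck] = VehicleReader(VehicleType.Car) // would not compile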
I'd like to know how member types work in Scala, and how I should associate types.
One approach is to make the associated type a type parameter. The advantages of this approach are that I can prescribe the variance of the type, and I can be sure that a subtype doesn't change the type. The disadvantage is that I cannot infer the type parameter from the type in a function.
The second approach is to make the associated type a member of the second type, which has the problem that I can't prescribe bounds on the subtypes' associated types and therefore can't use the type in function parameters (when x: X, X#T might not be in any relation with x.T).
A concrete example would be:
I have a trait for DFAs (could be without the type parameter)
trait DFA[S] { /* S is the type of the symbols in the alphabet */
  trait State { def next(x: S) }
  /* final type Sigma = S */
}
and I want to create a function for running this DFA over an input sequence, and I want
the function must take anything <% Seq[alphabet-type-of-the-dfa] as input sequence type
the function caller needn't specify the type parameters, all must be inferred
I'd like the function to be called with the concrete DFA type (but if there is a solution where the function would not have a type parameter for the DFA, it's OK)
the alphabet types must be unconstrained (ie. there must be a DFA for Char as well as for a yet unknown user-defined class)
the DFAs with different alphabet types are not subtypes
I tried this:
def runDFA[S, D <: DFA[S], SQ <% Seq[S]](d : D)(seq : SQ) = ....
this works, except that the type S is not inferred here, so I have to write the whole type parameter list at each call site.
def runDFA[D <: DFA[S] forSome { type S }, SQ <% Seq[D#Sigma]]( ... same as above
this didn't work (invalid circular reference to type D??? (what is it?))
I also deleted the type parameter, created an abstract type Sigma and tried binding that type in the concrete classes. runDFA would look like
def runDFA[D <: DFA, SQ <% Seq[D#Sigma]]( ... same as above
but this inevitably runs into problems like "type mismatch: expected dfa.Sigma, got D#Sigma"
Any ideas? Pointers?
Edit:
As the answers indicate there is no simple way of doing this, could somebody elaborate on why it is impossible and what would have to change to make it work?
The reason I want runDFA to be a free function (not a method) is that I want other similar functions, like automaton minimization, regular language operations, NFA-to-DFA conversion, language factorization, etc., and having all of this inside one class is against almost any principle of OO design.
First off, you don't need the parameterisation SQ <% Seq[S]. Write the method parameter as Seq[S]. If SQ <% Seq[S], then any instance of SQ is implicitly convertible to Seq[S] (that's what <% means), so when it is passed as a Seq[S] the compiler will automatically insert the conversion.
Additionally, what Jorge said about type parameters on D and making it a method on DFA holds. Because of the way inner classes work in Scala, I would strongly advise putting runDFA on DFA. Until the path-dependent typing stuff works, dealing with inner classes of some external class can be a bit of a pain.
So now you have
trait DFA[S] {
  ...
  def runDFA(seq: Seq[S]) = ...
}
And runDFA is all of a sudden rather easy to infer type parameters for: It doesn't have any.
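For completeness, here is a minimal sketch of what that member method could look like, assuming a start state and assuming next returns the successor State (the original snippet leaves next's return type unspecified):

trait DFA[S] {
  trait State { def next(x: S): State }
  def start: State
  def runDFA(seq: Seq[S]): State =
    seq.foldLeft(start)((state, sym) => state.next(sym))
}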
Scala's type inference sometimes leaves much to be desired.
Is there any reason why you can't have the method inside your DFA trait?
def run[SQ <% Seq[S]](seq: SQ)
If you don't need the D param later, you can also try defining your method without it:
def runDFA[S, SQ <% Seq[S]](d: DFA[S])(seq: SQ) = ...
Some useful info on how the two differ:
From the shapeless guide:
Without type members you cannot write dependently typed functions like the one below. For example:
trait Generic[A] {
  type Repr
  def to(value: A): Repr
  def from(value: Repr): A
}
import shapeless.Generic

def getRepr[A](value: A)(implicit gen: Generic[A]) =
  gen.to(value)
Here the type returned by to depends on the input type A (because the supplied implicit depends on A):
case class Vec(x: Int, y: Int)
case class Rect(origin: Vec, size: Vec)

getRepr(Vec(1, 2))
// res1: shapeless.::[Int,shapeless.::[Int,shapeless.HNil]] = 1 :: 2 :: HNil

getRepr(Rect(Vec(0, 0), Vec(5, 5)))
// res2: shapeless.::[Vec,shapeless.::[Vec,shapeless.HNil]] = Vec(0,0) :: Vec(5,5) :: HNil
without type members this would be impossible :
trait Generic2[A, Repr]

def getRepr2[A, R](value: A)(implicit generic: Generic2[A, R]): R =
  ???
We would have had to pass the desired value of Repr to getRepr as a type parameter, effectively making getRepr useless. The intuitive take-away from this is that type parameters are useful as "inputs" and type members are useful as "outputs".
Please see the shapeless guide for details.