Scala: ParVector does not conform to upper bound [_] =>> Seq[?]

I am using Scala 3.1.3 with the parallel collections module, version 1.0.4 (GitHub).
I have the following type alias, which defines a generic version of a matrix:
type Matrix[T, C[_] <: Seq[_]] = C[C[T]]
As expected, initializing such a matrix works when using Vector as the collection type:
val vecMatrix: Matrix[Double, Vector] =
Vector(Vector(0.5, 0.6), Vector(0.6, 0.8))
However, using ParVector instead yields a compiler error:
val parvecMatrix: Matrix[Double, ParVector] =
ParVector(ParVector(0.5, 0.6), ParVector(0.6, 0.8))
Type argument collection.parallel.immutable.ParVector does not conform to upper bound [_] =>> Seq[?]
I read this as meaning ParVector is not a subtype of Seq. However, the documentation of scala.collection.parallel.ParSeqLike, a linear supertrait of ParVector, explicitly states:
Parallel sequences inherit the Seq trait.
Is the documentation wrong? Or is there an issue with my code that prevents the upper bound from working?
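For what it's worth, the relationship the error complains about can be checked directly; a minimal sketch (assuming the versions mentioned above):
import scala.collection.parallel.immutable.ParVector

summon[Vector[Int] <:< Seq[Int]]        // compiles: Vector is a scala.collection.Seq
// summon[ParVector[Int] <:< Seq[Int]]  // does not compile under these versions: this is what the error above reports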

Related

Scala 3 type lambdas. Example for "curried type parameters"

Scala 3 has a powerful mechanism for expressing type constructors via type lambdas.
Even simple type lambdas can do powerful things, such as expressing partial application of a type constructor (see for example https://stackoverflow.com/a/75428709/336184).
Docs mention "Curried Type Parameters" like
type TL = [X] =>> [Y] =>> (X, Y)
This looks like an even more abstract thing.
Question:
Can anyone give a working example with an implementation of such a type lambda? Also, what is the practical purpose of such an abstraction? Are there any parallels in Haskell?
type TL = [X] =>> [Y] =>> ...
is the same as
type TL[X] = [Y] =>> ...
and should be the same as
type TL[X][Y] = ...
if there were multiple type-parameter lists (MTPL).
So [X] =>> [Y] =>> ... should be a way to introduce such a type anonymously.
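As a working example, here is a minimal Scala 3 sketch (the names PairWithInt, size, p and n are only illustrative, not part of the question):
type TL = [X] =>> [Y] =>> (X, Y)

type PairWithInt = TL[Int]               // applying TL once yields another type lambda, [Y] =>> (Int, Y)
val p: PairWithInt[String] = (1, "one")  // applying it again yields the concrete type (Int, String)

// The curried form also fits wherever a unary type constructor F[_] is expected:
def size[F[_]](fa: F[String]): Int = 1   // hypothetical helper, just for illustration
val n = size[TL[Int]]((2, "two"))        // F is instantiated with TL[Int], i.e. [Y] =>> (Int, Y)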
MTPL along with named type parameters could be useful for specifying some type parameters and inferring the rest of them.
Cannot prove equivalence with a path dependent type
Curried type "does not take type parameters"
https://contributors.scala-lang.org/t/multiple-type-parameter-lists-in-dotty-si-4719/2399
https://github.com/scala/bug/issues/4719
https://docs.scala-lang.org/scala3/reference/experimental/named-typeargs.html
For example specifying some type parameters and inferring the rest can be necessary for type-level calculations. Currently for type-level calculations people either make type parameters nested or use type members instead of type parameters.
When are dependent types needed in Shapeless?
In Haskell you can write
foo :: forall (f :: * -> * -> *) . ()
foo = ()
but in Scala without MTPL implemented, currently you can't write
def foo[F[_][_]]: Unit = ()
you can only write
def foo[F[_,_]]: Unit = ()
If there were MTPL, then for a definition like def foo[F[_][_]]... it would be convenient to have curried type lambdas [X] =>> [Y] =>> ...; you could use them at a call site as foo[[X] =>> [Y] =>> ...].
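For contrast, a sketch of what does compile in current Scala 3, with a single (uncurried) type-parameter list and an uncurried type lambda at the call site:
def foo[F[_, _]]: Unit = ()   // one type-parameter list with two holes

foo[Map]                      // an ordinary binary type constructor
foo[[X, Y] =>> Either[X, Y]]  // an uncurried type lambda as the type argument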
In Haskell all type constructors are curried and there are no type lambdas
Lambda for type expressions in Haskell?

implicit conversions with user defined types?

With the following code:
import Base.convert
abstract type MySequential end
struct MyFiniteSequence{T} <: MySequential
vec::NTuple{N,T} where N
end
Base.convert(MyFiniteSequence, r) = MyFiniteSequence{typeof(r)}((r,))
Now try:
julia> convert(MyFiniteSequence, 1)
MyFiniteSequence{Int64}((1,))
So far so good. Now try to do an implicit conversion:
julia> MyFiniteSequence(1)
ERROR: MethodError: no method matching MyFiniteSequence(::Int64)
Closest candidates are:
MyFiniteSequence(::Tuple{Vararg{T,N}} where N) where T at REPL[2]:2
Stacktrace:
[1] top-level scope at REPL[5]:1
I think there is a problem because of the {T, N} annotations, but I am unsure how the syntax of convert needs to be changed. Is there a way to define convert to get the implicit conversion from Int to the struct?
I believe implicit constructor calls to convert were removed in the transition from Julia 0.7 to 1.0. But you can just define a constructor that calls convert if that's what you want:
julia> MyFiniteSequence(x) = Base.convert(MyFiniteSequence, x)
MyFiniteSequence
julia> MyFiniteSequence(1)
MyFiniteSequence{Int64}((1,))

About scala variance position: What is "the enclosing parameter clause"?

Why is the type position of a method marked as negative?
In the question linked above, n.m. answered:
The variance position of a method parameter is the opposite of the variance position of the enclosing parameter clause.
The variance position of a type parameter is the opposite of the
variance position of the enclosing type parameter clause.
I don't know what the enclosing parameter clause or the enclosing type parameter clause is.
Can you give an example to explain it?
I don't know what the enclosing parameter clause or the enclosing type parameter clause is.
The specification defines one important axiom before stating those lines about variance:
Let the opposite of covariance be contravariance, and the opposite of
invariance be itself. The top-level of the type or template is always
in covariant position. The variance position changes at the following
constructs.
So we begin with the fact that the initial allowed variance for a type parameter is covariant, and now we flip the variance back and forth depending on specific constructs (here are a few examples, there are more):
1. Method parameters (from covariant to contravariant)
2. Type parameter clauses of methods
3. Lower bounds of type parameters
4. Type parameters of parameterized classes, if the corresponding formal parameter is contravariant
Now let's look at these statements again:
The variance position of a method parameter is the opposite of the
variance position of the enclosing parameter clause.
This basically means that if we have a generic method parameter, we flip the variance for it:
def m(param: T) // assume T is a type parameter of the enclosing class
The enclosing parameter clause is everything defined after the method name m and inside the parentheses, which in our case includes param: T. T is in a contravariant position because we had to flip it (remember, the top level always starts in a covariant position), due to rule 1 above.
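A concrete sketch of rule 1 (Box is just an illustrative name):
class Box[+T](init: T) {
  // def put(t: T): Unit = ()
  // would not compile: covariant type T occurs in contravariant position (a method parameter)
  def get: T = init   // fine: the result position stays covariant
}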
The variance position of a type parameter is the opposite of the
variance position of the enclosing type parameter clause.
Let's define a method with a type parameter:
def m[T >: U]() // again, assume U is a type parameter of the enclosing class
The enclosing type parameter clause refers to the square brackets [T >: U]. The variance flips because of the rules: the method's type parameter clause flips the position once (rule 2), so T sits in a contravariant position, and the lower bound flips it once more (rule 3), so U ends up back in a covariant position. That is why a covariant U is accepted there.
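A sketch of that double flip and of the idiom it enables (Box is again only an illustrative name); this is the same shape as def toBuffer[A1 >: A], which appears in a later question below:
class Box[+T](init: T) {
  // A covariant T may appear as the lower bound of a method type parameter:
  def put[U >: T](u: U): Box[U] = new Box(u)   // compiles: the twice-flipped position is covariant again
}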
You can think about it as a game. You have a starting state (covariant, or positive), and then a set of rules which makes positions switch (covariant -> contravariant, contravariant -> covariant, invariant -> invariant). At the end of the game, you have a selected state (position) which is applied to the type parameter.
This blog post explains things in a way which one can reason about.
I think I know what he meant and will try to illustrate.
Given the following preconditions:
trait A
trait B extends A
trait C extends B
val bs: List[B] = List(new B{}, new B{})
val b2b: B => B = identity
A List[+T] is covariant in its type argument (enclosing type parameter clause), but the type of variable it's assignable to (enclosing type clause) is contravariant:
val as: List[A] = bs // this is valid
val cs: List[C] = bs // ...and this is not
Another example involves functions: a Function1[-T, +R] is contravariant in its argument and covariant in its return type, but when assigning to variables the situation is reversed:
val c2a: C => A = b2b // this compiles
val a2c: A => C = b2b // ...and this does not
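Putting the two flips together in a user-defined trait (a sketch; Transform is an illustrative name):
trait Transform[-In, +Out] {
  def apply(in: In): Out      // fine: In flips into a contravariant position, Out stays covariant
  // def invert(out: Out): In // would not compile: Out lands in a parameter (contravariant) position
  //                          // and In in a result (covariant) position
}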

Type inference inconsistency between toList and toBuffer

As per the example below, calling xs.toList.map(_.toBuffer) succeeds, but xs.toBuffer.map(_.toBuffer) fails. But when the steps in the latter are performed using an intermediate result, it succeeds. What causes this inconsistency?
scala> "ab-cd".split("-").toBuffer
res0: scala.collection.mutable.Buffer[String] = ArrayBuffer(ab, cd)
scala> res0.map(_.toBuffer)
res1: scala.collection.mutable.Buffer[scala.collection.mutable.Buffer[Char]] = ArrayBuffer(ArrayBuffer(a, b), ArrayBuffer(c, d))
scala> "ab-cd".split("-").toBuffer.map(_.toBuffer)
<console>:8: error: missing parameter type for expanded function ((x$1) => x$1.toBuffer)
"ab-cd".split("-").toBuffer.map(_.toBuffer)
^
scala> "ab-cd".split("-").toList.map(_.toBuffer)
res3: List[scala.collection.mutable.Buffer[Char]] = List(ArrayBuffer(a, b), ArrayBuffer(c, d))
Look at the definitions of toBuffer and toList:
def toBuffer[A1 >: A]: Buffer[A1]
def toList: List[A]
As you can see, toBuffer is generic, while toList is not.
The reason for this difference is - I believe - that Buffer is invariant, while List is covariant.
Let's say that we have the following classes:
class Foo
class Bar extends Foo
Because List is covariant, you can call toList on an instance of Iterable[Bar] and treat the result as a List[Foo].
If List were invariant, this would not be the case.
Buffer being invariant, if toBuffer was defined as def toBuffer: Buffer[A] you would similarly not be able to treat the result
of toBuffer (on an instance of Iterable[Bar]) as an instance of Buffer[Foo] (as Buffer[Bar] is not a sub-type of Buffer[Foo], unlike for lists).
But by declaring toBuffer as def toBuffer[A1 >: A] (notice the added type parameter A1), we get back the possibility of having toBuffer return an instance of Buffer[Foo]:
all we need is to explicitly set A1 to Foo, or let the compiler infer it (if toBuffer is called at a site where a Buffer[Foo] is expected).
I think this explains the reason why toList and toBuffer are defined differently.
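To make that concrete, a small sketch (bars, foos1, foos2 and foos3 are illustrative names; Foo and Bar are the classes from above):
import scala.collection.mutable.Buffer

class Foo
class Bar extends Foo

val bars: Iterable[Bar] = List(new Bar, new Bar)

val foos1: List[Foo]   = bars.toList        // fine: List is covariant, so List[Bar] <: List[Foo]
val foos2: Buffer[Foo] = bars.toBuffer[Foo] // fine: A1 set explicitly to Foo
val foos3: Buffer[Foo] = bars.toBuffer      // fine too: A1 is inferred as Foo from the expected type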
Now the problem with this is that toBuffer is generic, and this can badly affect inference.
When you do this:
"ab-cd".split("-").toBuffer
You never explicitly say that A1 is String, but because "ab-cd".split("-") has unambiguously the type Array[String], the compiler knows that A is String.
It also knows that A1 >: A (in toBuffer), and without any further constraint, it will infer A1 to be exactly A, in other words String.
So in the end the whole expression returns a Buffer[String].
But here's the thing: in Scala, type inference happens on an expression as a whole.
When you have something like a.b.c, you might expect that Scala will infer an exact type
for a, then from that infer an exact type for a.b, and finally for a.b.c. Not so.
Type inference is deferred to the whole expression a.b.c (see the Scala specification, "6.26.4 Local Type Inference", "case 1: selections").
So, going back to your problem: in the expression "ab-cd".split("-").toBuffer.map(_.toBuffer), the sub-expression "ab-cd".split("-").toBuffer is not typed Buffer[String]; instead
it stays typed as something like Buffer[A1] forSome { type A1 >: String }. In other words, A1 is not fixed yet, we just carry the constraint A1 >: String to the next step of inference.
This next step is map(_.toBuffer), where map is (simplifying a bit) defined as map[C](f: (A1) => C): Buffer[C]. At this point A1
is still not fully known, we only know that A1 >: String.
Here lies our problem. The compiler needs to know the exact type of the anonymous function (_.toBuffer) (simply because instantiating a Function1[A, R] requires knowing the exact types of A and R, just like for any generic type).
So you need to tell it explicitly somehow, as it was not able to infer the type exactly.
This means you need to do either:
"ab-cd".split("-").toBuffer[String].map(_.toBuffer)
Or:
"ab-cd".split("-").toBuffer.map((_:String).toBuffer)

Weird behavior of & function in Set

Set is defined as Set[A]; it takes an invariant type parameter. The snippet below behaves as expected: since we are passing a covariant argument (a Set[String] where a GenSet[Object] is required) and Set is invariant, it fails to compile:
scala> val a = Set(new Object)
a: scala.collection.immutable.Set[Object] = Set(java.lang.Object@118c38f)
scala> val b = Set("hi")
b: scala.collection.immutable.Set[String] = Set(hi)
scala> a & b
<console>:10: error: type mismatch;
found : scala.collection.immutable.Set[String]
required: scala.collection.GenSet[Object]
Note: String <: Object, but trait GenSet is invariant in type A.
You may wish to investigate a wildcard type such as `_ <: Object`. (SLS 3.2.10)
a & b
But the below works:
scala> Set(new Object) & Set("hi")
res1: scala.collection.immutable.Set[Object] = Set()
As I see it, in the snippet above the Scala compiler infers the type Set[Object] for Set("hi"), and hence it works.
What is the type inference doing here? Can someone please link to the specification explaining this behavior and when it happens in general? Shouldn't it be a compile-time error in such cases, since the same operation gives two different outcomes?
Not sure, but I think what you're looking for is described in the language spec under "Local Type Inference" (at this time of writing, section 6.26.4 on page 100).
Local type inference infers type arguments to be passed to expressions of polymorphic type. Say e is of type [ a1 >: L1 <: U1, ..., an >: Ln <: Un ] T and no explicit type
parameters are given.
Local type inference converts this expression to a type application e [ T1, ..., Tn ]. The choice of the type arguments T1, ..., Tn depends on the context in which the expression appears and on the expected type pt. There are three cases.
[ ... ]
If the expression e appears as a value without being applied to value arguments, the type arguments are inferred by solving a constraint system which relates the expression's type T with the expected type pt. Without loss of generality we can assume that T is a value type; if it is a method type we apply eta-expansion to convert it to a function type. Solving means finding a substitution σ of types Ti for the type parameters ai such that
None of the inferred types Ti is a singleton type
All type parameter bounds are respected, i.e. σ Li <: σ ai and σ ai <: σ Ui for i = 1, ..., n.
The expression's type conforms to the expected type, i.e. σ T <: σ pt.
It is a compile time error if no such substitution exists. If several substitutions exist, local-type inference will choose for each type variable ai a minimal or maximal type Ti of the solution space. A maximal type Ti will be chosen if the type parameter ai appears contravariantly in the type T of the expression. A minimal type Ti will be chosen in all other situations, i.e. if the variable appears covariantly, nonvariantly or not at all in the type T. We call such a substitution an optimal solution of the given constraint system for the type T.
In short: Scalac has to choose values for the generic types that you omitted, and it picks the most specific choices possible, under the constraint that the result compiles.
The expression Set("hi") can be either a scala.collection.immutable.Set[String] or a scala.collection.immutable.Set[Object], depending on what the context requires. (A String is a valid Object, of course.) When you write this:
Set(new Object) & Set("hi")
the context requires Set[Object], so that's the type that's inferred; but when you write this:
val b = Set("hi")
the context doesn't specify, so the more-specific type Set[String] is chosen, which (as you expected) then makes a & b be ill-typed.
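A short sketch of the same point, including how to make the a & b version compile (b1 and b2 are illustrative names):
val a = Set(new Object)

val b1 = Set("hi")               // no expected type: inferred as Set[String]
// a & b1                        // does not compile: Set is invariant in its element type

val b2: Set[Object] = Set("hi")  // the expected type forces the inference to pick Object
a & b2                           // compiles, and evaluates to an empty Set[Object]

a & Set[Object]("hi")            // passing the type argument explicitly works as well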