Generic Breeze Vector method - scala

I am trying to implement a generic Scala method that processes Breeze vectors typed as Float or as Double (at the least; even less specificity would be a plus).
Here is a simple example for Vector[Double]:
def vectorSum(vectors: Seq[Vector[Double]]): Vector[Double] = {
vectors.reduce { (v1, v2) => v1 :+ v2 }
}
I am slightly new to Scala and Breeze, so my naive approach to make this generic is:
def vectorSumGeneric[T <: AnyVal](vectors: Seq[Vector[T]]): Vector[T] = {
vectors.reduce { (v1, v2) => v1 :+ v2 }
}
However, this throws the following compile errors:
diverging implicit expansion for type breeze.linalg.operators.OpAdd.Impl2[breeze.linalg.Vector[T],breeze.linalg.Vector[T],That] starting with method v_v_Idempotent_OpAdd in trait VectorOps
not enough arguments for method :+: (implicit op: breeze.linalg.operators.OpAdd.Impl2[breeze.linalg.Vector[T],breeze.linalg.Vector[T],That])That. Unspecified value parameter op.
I've tried some variations including T <% AnyVal and T <% Double, but they do not work either (as expected, probably). The Scala documentation on type bounds does not give me a clue about a use case such as this.
What is the correct way to solve this?

The problem is that the type parameter T can be anything, but you have to make sure that your type T supports at least addition as an algebraic operation. If T is a semiring, then you can add two elements of type T. You can enforce T to be a semiring by specifying a context bound:
def vectorSum[T: Semiring](vectors: Seq[Vector[T]]): Vector[T] = {
vectors.reduce(_ + _)
}
That way you enforce that for every instantiation of T you also have a Semiring[T] in your scope which defines the addition operation. Breeze already defines this structure for all primitive types which support addition.
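For example, a quick usage sketch (assuming the Semiring-based vectorSum above compiles against your Breeze version; DenseVector is Breeze's concrete vector type):
import breeze.linalg.{DenseVector, Vector}
import breeze.math.Semiring
// the same generic method now accepts both Double and Float vectors
val doubles: Seq[Vector[Double]] = Seq(DenseVector(1.0, 2.0), DenseVector(3.0, 4.0))
val floats: Seq[Vector[Float]] = Seq(DenseVector(1.0f, 2.0f), DenseVector(3.0f, 4.0f))
vectorSum(doubles) // DenseVector(4.0, 6.0)
vectorSum(floats)  // DenseVector(4.0, 6.0)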
If you want to support more algebraic operations such as division, then you should constrain your type variable to have a Field context bound.
def vectorDiv[T: Field](vectors: Seq[Vector[T]]): Vector[T] = {
vectors.reduce(_ / _)
}
If you want to support general purpose element-wise binary operations on vectors:
def vectorBinaryOp[T](
    vectors: Seq[Vector[T]], op: (T, T) => T)(
    implicit canZipMapValues: CanZipMapValues[Vector[T], T, T, Vector[T]]): Vector[T] = {
  vectors.reduce { (left, right) =>
    canZipMapValues.map(left, right, op)
  }
}
Then you can define arbitrary binary operations on vectors:
val vectors = Seq(DenseVector(1.0,2.0,3.0,4.0), DenseVector(2.0,3.0,4.0,5.0))
val result = VectorSum.vectorBinaryOp(vectors, (a: Double, b: Double) => (a / b))

Related

In Scala, how to summon a polymorphic function applicable to an input type, without knowing the output type or full type arguments?

Since Scala 2.12 (or is it 2.13, can't be sure), the compiler can infer latent type arguments across multiple methods:
def commutative[A, B]: ((A, B) => (B, A)) = ??? // implementation omitted
val a = (1 -> "a")
val b = commutative.apply(a)
The last line successfully infers A = Int, B = String; unfortunately, this requires an instance a: (Int, String) to be given.
Now I'd like to twist this API for a bit and define the following function:
def findApplicable[T](fn: Any => Any)
Such that findApplicable[(Int, String)](commutative) automatically generates the correct function specialised for A = Int, B = String. Is there a way to do it within the language's capability? Or will I have to upgrade to Scala 3 to do this?
UPDATE 1: It should be noted that the output of commutative can be any type, not necessarily a Function2; e.g. I've tried the following definition:
trait SummonedFn[-I, +O] extends (I => O) {
final def summon[II <: I]: this.type = this
}
Then redefine commutative to use it:
def commutative[A, B]: SummonedFn[(A, B), (B, A)] = ??? // implementation omitted
val b = commutative.summon[(Int, String)]
Oops, this doesn't work; type parameters don't get the same treatment as value parameters.
If at some point some call site knows the types of the arguments (they aren't actually Any => Any), it is doable using type classes:
trait Commutative[In, Out] {
  def swap(in: In): Out
}

object Commutative {
  def swap[In, Out](in: In)(implicit c: Commutative[In, Out]): Out =
    c.swap(in)

  implicit def tuple2[A, B]: Commutative[(A, B), (B, A)] =
    in => in.swap
}
At call site:
def use[In, Out](ins: List[In])(implicit c: Commutative[In, Out]): List[Out] =
ins.map(Commutative.swap(_))
However, this way you have to pass both In as well as Out as type parameters. If there are multiple possible Outs for a single In type, then there is not much you can do.
But if you want to have Input type => Output type implication, you can use dependent types:
trait Commutative[In] {
  type Out
  def swap(in: In): Out
}

object Commutative {
  // help us transform dependent types back into generics
  type Aux[In, Out0] = Commutative[In] { type Out = Out0 }

  def swap[In](in: In)(implicit c: Commutative[In]): c.Out =
    c.swap(in)

  implicit def tuple2[A, B]: Commutative.Aux[(A, B), (B, A)] =
    in => in.swap
}
Call site:
// This code is similar to the original code, but when the compiler
// will be looking for In it will automatically figure Out.
def use[In, Out](ins: List[In])(implicit c: Commutative.Aux[In, Out]): List[Out] =
ins.map(Commutative.swap(_))
// Alternatively, without Aux pattern:
def use2[In](ins: List[In])(implicit c: Commutative[In]): List[c.Out] =
ins.map(Commutative.swap(_))
def printMapped(list: List[(Int, String)]): Unit =
println(list)
// The call site that knows the input and provides implicit
// will also know the exact Out type.
printMapped(use(List("a" -> 1, "b" -> 2)))
printMapped(use2(List("a" -> 1, "b" -> 2)))
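For reference, both calls should print the swapped pairs:
// List((1,a), (2,b))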
That's how you can solve the issue when you know the exact input type. If you don't know it... then you cannot use the compiler (neither in Scala 2 nor in Scala 3) to generate this behavior; you have to implement this functionality yourself using some runtime reflection, e.g. checking types using isInstanceOf, casting to some assumed types and then running predefined behavior, etc.
I'm not sure I understand the question 100%, but it seems like you want to do some kind of advanced partial type application. Usually you can achieve such an API by introducing an intermediary class. And to preserve as much type information as possible you can use a method with a dependent return type.
class FindApplicablePartial[A] {
def apply[B](fn: A => B): fn.type = fn
}
def findApplicable[A] = new FindApplicablePartial[A]
scala> def result = findApplicable[(Int, String)](commutative)
def result: SummonedFn[(Int, String),(String, Int)]
And actually in this case since findApplicable itself doesn't care about type B (i.e. B doesn't have a context bound or other uses), you don't even need the intermediary class, but can use a wildcard/existential type instead:
def findApplicable[A](fn: A => _): fn.type = fn
This works just as well.

Scala pass generic numeric function as parameter

def weirdfunc(message: String, f: (Int, Int) => Int){
println(message + s" ${f(3,5)}")
}
I have the function as above. How do I make it so that the function f is generic for all Numeric types?
What you want is called a higher-ranked type (specifically, rank 2). Haskell has support for these types, and Scala gets a lot of its type theory ideas from Haskell, but Scala has yet to directly support this particular feature.
Now, the thing is, with a bit of black magic, we can get Scala to do what you want, but the syntax is... not pretty. In Scala, functions are always monomorphic, but you want to pass a polymorphic function around as an argument. We can't do that, but we can pass a polymorphic function-like object around that looks and behaves mostly like a function. What would this object look like?
trait NumFunc {
def apply[A : Numeric](a: A, b: A): A
}
It's just a trait that defines a polymorphic apply. Now we can define the function that you really want.
def weirdfunc(message: String, f: NumFunc) = ???
The trouble here, as I mentioned, is that the syntax is really quite atrocious. To call this, we can't just pass in a function anymore. We have to create a NumFunc and pass that in. Essentially, from a type-theoretic perspective, we have to prove to the compiler that our function works for all numeric A. For instance, calling the simple weirdfunc that only takes integers and passing the addition function is very simple:
weirdfunc("Some message", (_ + _))
However, to call our "special" weirdfunc that works for all number types, we have to write this mess.
weirdfunc("Hi", new NumFunc {
  override def apply[A: Numeric](a: A, b: A): A = {
    import math.Numeric.Implicits._
    a + b
  }
})
And we can't hide that away with an implicit conversion because, as I alluded to earlier, functions are monomorphic, so any conversion coming out of a function type is going to be monomorphic.
Bottom line. Is it possible? Yes. Is it worth the costs in terms of readability and usability? Probably not.
Scala has a typeclass for this, so it's quite easy to achieve using a context bound and the standard lib.
def weirdfunc[T: Numeric](message: String, x: T, y: T, f: (T, T) => T) {
println(message + s" ${f(x, y)}")
}
def test[T](a: T, b: T)(implicit ev: Numeric[T]): T = ev.plus(a, b)
weirdfunc[Int]("The sum is", 3, 5, test)
// The sum is 8
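As an equivalent sketch using only the standard library, Numeric.Implicits gives you operator syntax instead of calling ev.plus directly (testOps is a hypothetical name):
import scala.math.Numeric.Implicits._
def testOps[T: Numeric](a: T, b: T): T = a + b // + comes from Numeric.Implicits
weirdfunc[Double]("The sum is", 3.0, 5.0, testOps)
// The sum is 8.0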
Sorry cktang, you cannot generify this. The caller gets to set the generic parameter, not the called function, just like the caller passes the function's value parameters.
However, you can use currying so that you pass an 'f' for Int once and then pass different Int pairs; likewise you can pass an 'f' for Double and then pass different Double pairs.
def weirdfunc[A](message: String, f: (A, A) => A)(x: A, y: A){
println(message + s" ${f(x, y)}")
}
def g(x: Int, y: Int): Int = x * y
val weirdfuncWithF = weirdfunc("hello", g) _
weirdfuncWithF(3, 5)
weirdfuncWithF(2, 3)
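And, as noted above, the same pattern can then be reused for Double (a quick sketch):
def h(x: Double, y: Double): Double = x * y
val weirdfuncWithH = weirdfunc("hello", h) _
weirdfuncWithH(3.0, 5.0) // hello 15.0
weirdfuncWithH(2.0, 3.0) // hello 6.0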
In particular, what you want cannot be done as asked, as it would break the rules of generics: the type parameter is fixed by the caller, not by the called function.

Impose more than one generic type constraint on a Scala type parameter

I want to do the following stuff using Scala's context-bound pattern:
class Polynomial[T: Ring] {
def apply[X: Ring with Includes[T]](x: X): X = ...
...
}
This is a polynomial class which requires that the coefficients are elements of a Ring T. When applying this polynomial to an element (evaluation), the type of the parameter x is required to be a ring, and elements of type T must be implicitly convertible to type X; for example, T = Double and X = SquareMatrix.
How can I impose more than one type constraints to a generic type parameter in Scala?
It seems that the [X: T] syntax is not enough for imposing two constraints on a single type parameter. Using two implicit parameters solved this problem:
def apply[X](x: X)(implicit ring: Ring[X], conv: Includes[X, T]): X = {
  var xi = ring.one
  var sum = ring.zero
  for (i <- 0 to degree) {
    sum += xi * conv.from(coef(i))
    xi *= x
  }
  sum
}
The [X: T] syntax is simply syntactic sugar for requiring an implicit parameter of type T[X]. For example, f and f2 below are identical:
def f[T: X]
def f2[T](implicit xt: X[T])
As for your code, if you explicitly write out the implicit parameter you will be able to express your constraints:
class Polynomial[T: Ring] {
def apply[X](x: X)(implicit xt: Ring[X] with Includes[T]): X = ...
}
Note that you aren't imposing more than one constraint on either X or T; your constraint is simply too complex for the context-bound syntactic sugar. However, Scala does allow you to impose as many context bounds on a type parameter as you like, and you can combine them with upper and lower bounds as well:
def f[T >: List[Int] <: AnyRef : Ordering : List] = ???
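For reference, that signature desugars into ordinary implicit parameters (a sketch of the expansion, with hypothetical parameter names):
def f[T >: List[Int] <: AnyRef](implicit ord: Ordering[T], lst: List[T]) = ???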

What is a "context bound" in Scala?

One of the new features of Scala 2.8 are context bounds. What is a context bound and where is it useful?
Of course I searched first (and found for example this) but I couldn't find any really clear and detailed information.
Robert's answer covers the technical details of context bounds. I'll give you my interpretation of their meaning.
In Scala a View Bound (A <% B) captures the concept of 'can be seen as' (whereas an upper bound <: captures the concept of 'is a'). A context bound (A : C) says 'has a' about a type. You can read the examples about manifests as "T has a Manifest". The example you linked to about Ordered vs Ordering illustrates the difference. A method
def example[T <% Ordered[T]](param: T)
says that the parameter can be seen as an Ordered. Compare with
def example[T : Ordering](param: T)
which says that the parameter has an associated Ordering.
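To make the distinction concrete, here is a small sketch (maxView and maxContext are hypothetical names; Ordered and Ordering come from the standard library):
// the view-bound version, written with its desugared implicit conversion parameter
def maxView[T](a: T, b: T)(implicit view: T => Ordered[T]): T = if (a < b) b else a
// the context-bound version relies on an implicit Ordering instance instead
def maxContext[T: Ordering](a: T, b: T): T = implicitly[Ordering[T]].max(a, b)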
In terms of use, it took a while for conventions to be established, but context bounds are preferred over view bounds (view bounds are now deprecated). One suggestion is that a context bound is preferred when you need to transfer an implicit definition from one scope to another without needing to refer to it directly (this is certainly the case for the ClassManifest used to create an array).
Another way of thinking about view bounds and context bounds is that the first transfers implicit conversions from the caller's scope. The second transfers implicit objects from the caller's scope.
Did you find this article? It covers the new context bound feature, within the context of array improvements.
Generally, a type parameter with a context bound is of the form [T: Bound]; it is expanded to plain type parameter T together with an implicit parameter of type Bound[T].
Consider the method tabulate which forms an array from the results of applying a given function f on a range of numbers from 0 until a given length. Up to Scala 2.7, tabulate could be written as follows:
def tabulate[T](len: Int, f: Int => T) = {
val xs = new Array[T](len)
for (i <- 0 until len) xs(i) = f(i)
xs
}
In Scala 2.8 this is no longer possible, because runtime information is necessary to create the right representation of Array[T]. One needs to provide this information by passing a ClassManifest[T] into the method as an implicit parameter:
def tabulate[T](len: Int, f: Int => T)(implicit m: ClassManifest[T]) = {
val xs = new Array[T](len)
for (i <- 0 until len) xs(i) = f(i)
xs
}
As a shorthand form, a context bound can be used on the type parameter T instead, giving:
def tabulate[T: ClassManifest](len: Int, f: Int => T) = {
val xs = new Array[T](len)
for (i <- 0 until len) xs(i) = f(i)
xs
}
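A usage sketch (the compiler supplies the ClassManifest, known as ClassTag in later Scala versions):
val squares = tabulate(5, (i: Int) => i * i)
// squares: Array[Int] = Array(0, 1, 4, 9, 16)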
(This is a parenthetical note. Read and understand the other answers first.)
Context Bounds actually generalize View Bounds.
So, given this code expressed with a View Bound:
scala> implicit def int2str(i: Int): String = i.toString
int2str: (i: Int)String
scala> def f1[T <% String](t: T) = 0
f1: [T](t: T)(implicit evidence$1: (T) => String)Int
This could also be expressed with a Context Bound, with the help of a type alias representing functions from type F to type T.
scala> trait To[T] { type From[F] = F => T }
defined trait To
scala> def f2[T : To[String]#From](t: T) = 0
f2: [T](t: T)(implicit evidence$1: (T) => java.lang.String)Int
scala> f2(1)
res1: Int = 0
A context bound must be used with a type constructor of kind * => *. However, the type constructor Function1 is of kind (*, *) => *. The use of the type alias partially applies the second type parameter with the type String, yielding a type constructor of the correct kind for use as a context bound.
There is a proposal to allow you to directly express partially applied types in Scala, without the use of the type alias inside a trait. You could then write:
def f3[T : [X](X => String)](t: T) = 0
This is another parenthetical note.
As Ben pointed out, a context bound represents a "has-a" constraint between a type parameter and a type class. Put another way, it represents a constraint that an implicit value of a particular type class exists.
When utilizing a context bound, one often needs to surface that implicit value. For example, given the constraint T : Ordering, one will often need the instance of Ordering[T] that satisfies the constraint. As demonstrated here, it's possible to access the implicit value by using the implicitly method or a slightly more helpful context method:
def **[T : Numeric](xs: Iterable[T], ys: Iterable[T]) =
xs zip ys map { t => implicitly[Numeric[T]].times(t._1, t._2) }
or
def **[T : Numeric](xs: Iterable[T], ys: Iterable[T]) =
xs zip ys map { t => context[T]().times(t._1, t._2) }
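Equivalently, you can name the implicit once instead of summoning it per element, which is just the desugared form of the context bound described above:
def **[T](xs: Iterable[T], ys: Iterable[T])(implicit num: Numeric[T]) =
  xs zip ys map { t => num.times(t._1, t._2) }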

Function syntax puzzler in scalaz

After watching Nick Partidge's presentation on deriving scalaz, I got to looking at this example, which is just awesome:
import scalaz._
import Scalaz._
def even(x: Int): Validation[NonEmptyList[String], Int] =
  if (x % 2 == 0) x.success else "not even: %d".format(x).wrapNel.fail
println( even(3) <|*|> even(5) ) //prints: Failure(NonEmptyList(not even: 3, not even: 5))
I was trying to understand what the <|*|> method was doing, here is the source code:
def <|*|>[B](b: M[B])(implicit t: Functor[M], a: Apply[M]): M[(A, B)]
= <**>(b, (_: A, _: B))
OK, that is fairly confusing (!) - but it references the <**> method, which is declared thus:
def <**>[B, C](b: M[B], z: (A, B) => C)(implicit t: Functor[M], a: Apply[M]): M[C]
= a(t.fmap(value, z.curried), b)
So I have a few questions:
How come the method appears to take a higher-kinded type of one type parameter (M[B]) but can get passed a Validation (which has two type parameters)?
The syntax (_: A, _: B) defines the function (A, B) => Pair[A,B] which the 2nd method expects: what is happening to the Tuple2/Pair in the failure case? There's no tuple in sight!
Type Constructors as Type Parameters
M is a type parameter to one of Scalaz's main pimps, MA, that represents the Type Constructor (aka Higher Kinded Type) of the pimped value. This type constructor is used to look up the appropriate instances of Functor and Apply, which are implicit requirements to the method <**>.
trait MA[M[_], A] {
val value: M[A]
def <**>[B, C](b: M[B], z: (A, B) => C)(implicit t: Functor[M], a: Apply[M]): M[C] = ...
}
What is a Type Constructor?
From the Scala Language Reference:
We distinguish between first-order types and type constructors, which take type parameters and yield types. A subset of first-order types called value types represents sets of (first-class) values. Value types are either concrete or abstract. Every concrete value type can be represented as a class type, i.e. a type designator (§3.2.3) that refers to a class (§5.3), or as a compound type (§3.2.7) representing an intersection of types, possibly with a refinement (§3.2.7) that further constrains the types of its members. Abstract value types are introduced by type parameters (§4.4) and abstract type bindings (§4.3). Parentheses in types are used for grouping. We assume that objects and packages also implicitly define a class (of the same name as the object or package, but inaccessible to user programs).
Non-value types capture properties of identifiers that are not values (§3.3). For example, a type constructor (§3.3.3) does not directly specify the type of values. However, when a type constructor is applied to the correct type arguments, it yields a first-order type, which may be a value type. Non-value types are expressed indirectly in Scala. E.g., a method type is described by writing down a method signature, which in itself is not a real type, although it gives rise to a corresponding function type (§3.3.1). Type constructors are another example, as one can write type Swap[m[_, _], a, b] = m[b, a], but there is no syntax to write the corresponding anonymous type function directly.
List is a type constructor. You can apply the type Int to get a Value Type, List[Int], which can classify a value. Other type constructors take more than one parameter.
The trait scalaz.MA requires that its first type parameter be a type constructor that takes a single type and returns a value type, as expressed by the syntax trait MA[M[_], A] {}. The type parameter definition describes the shape of the type constructor, which is referred to as its Kind. List is said to have the kind * -> *.
Partial Application of Types
But how can MA wrap a value of type Validation[X, Y]? The type Validation has kind (*, *) -> *, and could only be passed as a type argument to a type parameter declared like M[_, _].
This implicit conversion in object Scalaz converts a value of type Validation[X, Y] to a MA:
object Scalaz {
implicit def ValidationMA[A, E](v: Validation[E, A]): MA[PartialApply1Of2[Validation, E]#Apply, A] = ma[PartialApply1Of2[Validation, E]#Apply, A](v)
}
Which in turn uses a trick with a type alias in PartialApply1Of2 to partially apply the type constructor Validation, fixing the type of the errors, but leaving the type of the success unapplied.
PartialApply1Of2[Validation, E]#Apply would be better written as [X] => Validation[E, X]. I recently proposed to add such a syntax to Scala, it might happen in 2.9.
Think of this as a type level equivalent of this:
def validation[A, B](a: A, b: B) = ...
def partialApply1Of2[A, B, C](f: (A, B) => C, a: A): (B => C) = (b: B) => f(a, b)
This lets you combine a Validation[String, Int] with a Validation[String, Boolean], because they both share the type constructor [A] Validation[String, A].
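The same partial application can be spelled out in plain Scala 2 with a type alias, or inline as a so-called type lambda (a sketch; StringValidation is a hypothetical name):
type StringValidation[A] = Validation[String, A] // fixes the error type, leaves the success type free
// or inline, without a named alias:
// ({ type L[A] = Validation[String, A] })#L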
Applicative Functors
<**> demands that the type constructor M have associated instances of Apply and Functor. This constitutes an Applicative Functor, which, like a Monad, is a way to structure a computation through some effect. In this case the effect is that the sub-computations can fail (and when they do, we accumulate the failures).
The container Validation[NonEmptyList[String], A] can wrap a pure value of type A in this 'effect'. The <**> operator takes two effectful values, and a pure function, and combines them with the Applicative Functor instance for that container.
Here's how it works for the Option applicative functor. The 'effect' here is the possibility of failure.
val os: Option[String] = Some("a")
val oi: Option[Int] = Some(2)
val result1 = (os <**> oi) { (s: String, i: Int) => s * i }
assert(result1 == Some("aa"))
val result2 = (os <**> (None: Option[Int])) { (s: String, i: Int) => s * i }
assert(result2 == None)
In both cases, there is a pure function of type (String, Int) => String, being applied to effectful arguments. Notice that the result is wrapped in the same effect (or container, if you like), as the arguments.
You can use the same pattern across a multitude of containers that have an associated Applicative Functor. All Monads are automatically Applicative Functors, but there are even more, like ZipStream.
Option and [A]Validation[X, A] are both Monads, so you could also use Bind (aka flatMap):
val result3 = oi flatMap { i => os map { s => s * i } }
val result4 = for {i <- oi; s <- os} yield s * i
Tupling with `<|*|>`
<|*|> is really similar to <**>, but it provides the pure function for you, to simply build a Tuple2 from the results. (_: A, _: B) is shorthand for (a: A, b: B) => Tuple2(a, b).
And beyond
Here are our bundled examples for Applicative and Validation. I used a slightly different syntax to use the Applicative Functor: (fa ⊛ fb ⊛ fc ⊛ fd) { (a, b, c, d) => .... }
UPDATE: But what happens in the Failure Case?
what is happening to the Tuple2/Pair in the failure case?
If any of the sub-computations fails, the provided function is never run. It is only run if all sub-computations (in this case, the two arguments passed to <**>) are successful. If so, it combines them into a Success. Where is this logic? It's in the Apply instance for [A] Validation[X, A], shown below. We require that the type X has a Semigroup available, which is the strategy for combining the individual errors, each of type X, into an aggregated error of the same type. If you choose String as your error type, Semigroup[String] concatenates the strings; if you choose NonEmptyList[String], the error(s) from each step are concatenated into a longer NonEmptyList of errors. This concatenation happens below when two Failures are combined, using the ⊹ operator (which expands with implicits to, for example, Scalaz.IdentityTo(e1).⊹(e2)(Semigroup.NonEmptyListSemigroup(Semigroup.StringSemigroup))).
implicit def ValidationApply[X: Semigroup]: Apply[PartialApply1Of2[Validation, X]#Apply] =
  new Apply[PartialApply1Of2[Validation, X]#Apply] {
    def apply[A, B](f: Validation[X, A => B], a: Validation[X, A]) = (f, a) match {
      case (Success(f), Success(a))   => success(f(a))
      case (Success(_), Failure(e))   => failure(e)
      case (Failure(e), Success(_))   => failure(e)
      case (Failure(e1), Failure(e2)) => failure(e1 ⊹ e2)
    }
  }
Monad or Applicative, how shall I choose?
Still reading? (Yes. Ed)
I've shown that sub-computations based on Option or [A] Validation[E, A] can be combined with either Apply or with Bind. When would you choose one over the other?
When you use Apply, the structure of the computation is fixed. All sub-computations will be executed; the results of one can't influence the others. Only the 'pure' function has an overview of what happened. Monadic computations, on the other hand, allow the first sub-computation to influence the later ones.
If we used a Monadic validation structure, the first failure would short-circuit the entire validation, as there would be no Success value to feed into the subsequent validation. However, we are happy for the sub-validations to be independent, so we can combine them through the Applicative, and collect all the failures we encounter. The weakness of Applicative Functors has become a strength!
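To make the contrast concrete, here is a sketch using the question's even function (scalaz 6-era syntax assumed, where Validation still exposes flatMap, as mentioned above):
val a = even(3) // Failure(NonEmptyList(not even: 3))
val b = even(5) // Failure(NonEmptyList(not even: 5))
// Applicative: both sub-computations run, and their failures accumulate
a <|*|> b // Failure(NonEmptyList(not even: 3, not even: 5))
// Monadic: once the first validation fails, the second is never consulted
a flatMap { x => b map { y => (x, y) } } // Failure(NonEmptyList(not even: 3))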