So I have a Vec2 class:
class Vec2(val x: Double, val y: Double)
{
def +(other: Vec2): Vec2 = new Vec2(x + other.x, y + other.y)
def -(other: Vec2): Vec2 = new Vec2(x - other.x, y - other.y)
def *(factor: Double): Vec2 = new Vec2(x * factor, y * factor)
def /(divisor: Double): Vec2 = new Vec2(x / divisor, y / divisor)
//Other methods omitted but you get the idea
}
I use this class a lot, and I use it a lot in collections, so I would like shorthand methods for .map(_ + other), .map(_ * factor), etc. I also want to add methods addMultAll(second: Vec2, factor: Double) and multAddAll(factor: Double, second: Vec2), which I think would be clearer and safer as explicit methods; the intended semantics are sketched below.
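To pin down what I mean by those two bulk methods, here is a rough sketch (the names and semantics are only my intent; nothing is implemented yet):
// addMultAll: add `second` to every vector, then scale the result by `factor`
def addMultAll(vecs: Seq[Vec2], second: Vec2, factor: Double): Seq[Vec2] =
  vecs.map(v => (v + second) * factor)
// multAddAll: scale every vector by `factor`, then add `second`
def multAddAll(vecs: Seq[Vec2], factor: Double, second: Vec2): Seq[Vec2] =
  vecs.map(v => v * factor + second)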
So I chose the Traversable class for my implicit. Is this the best / most general class that I can use? I have cribbed and modified the map method from the Scala source code. Is the code below correct? Will it work without problems?
object Vec2
{
import collection.mutable.{ Builder }
import scala.collection._
implicit class ImpVec2Class[+Repr](travLike: TraversableLike[Vec2, Repr])
{
def +++ [That](offset: Vec2)(implicit bf: generic.CanBuildFrom[Repr, Vec2, That]): That =
{
def builder =
{ // extracted to keep method size under 35 bytes, so that it can be JIT-inlined
val b = bf(travLike.repr)
b.sizeHint(travLike)
b
}
val b = builder
for (x <- travLike) b += x + offset
b.result
}
}
}
Do I need to do this for the map methods on Double if I want shortened methods, or do Scalaz / Cats already include them?
OK, I'm going to post this as an answer, as it's a slight improvement. I've removed the [That] type parameter on the method. I've also added an implicit for Array[Vec2], as Array doesn't inherit from the Scala collection traits.
object Vec2
{
import collection.mutable.{ Builder }
import scala.collection._
implicit class ImpVec2Traversible[Repr](travLike: TraversableLike[Vec2, Repr])
{
def +++ (offset: Vec2)(implicit bf: generic.CanBuildFrom[Repr, Vec2, Repr]): Repr =
{
def builder =
{ // extracted to keep method size under 35 bytes, so that it can be JIT-inlined
val b = bf(travLike.repr)
b.sizeHint(travLike)
b
}
val b = builder
for (x <- travLike) b += x + offset
b.result
}
}
implicit class ImpVec2Array(arr: Array[Vec2])
{
def +++ (offset: Vec2): Array[Vec2] = arr.map(_ + offset)
}
}
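A quick usage sketch of this version (I'm assuming a Scala 2.12-era collection library, where CanBuildFrom still exists; the values are made up):
import Vec2._
val offset = new Vec2(2, 3)
val shiftedList: List[Vec2] = List(new Vec2(0, 0), new Vec2(1, 1)) +++ offset
val shiftedArray: Array[Vec2] = Array(new Vec2(0, 0), new Vec2(1, 1)) +++ offset
The import is probably redundant, since Vec2's companion object is already in implicit scope for List[Vec2] and Array[Vec2], but it makes the source of the extension explicit.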
I'm trying to understand how to leverage monads in Scala to solve simple problems as a way of building up my familiarity. One simple problem is estimating pi using a functional random number generator. I'm including the code below for a simple stream-based approach.
I'm looking for help in translating this to a monadic approach. For example, is there an idiomatic way to convert this code to use the State (and other) monads in a stack-safe way?
trait RNG {
def nextInt: (Int, RNG)
def nextDouble: (Double, RNG)
}
case class Point(x: Double, y: Double) {
val isInCircle = (x * x + y * y) < 1.0
}
object RNG {
def nonNegativeInt(rng: RNG): (Int, RNG) = {
val (ni, rng2) = rng.nextInt
if (ni > 0) (ni, rng2)
else if (ni == Int.MinValue) (0, rng2)
else (ni + Int.MaxValue, rng2)
}
def double(rng: RNG): (Double, RNG) = {
val (ni, rng2) = nonNegativeInt(rng)
(ni.toDouble / Int.MaxValue, rng2)
}
case class Simple(seed: Long) extends RNG {
def nextInt: (Int, RNG) = {
val newSeed = (seed * 0x5DEECE66DL + 0xBL) & 0xFFFFFFFFFFFFL
val nextRNG = Simple(newSeed)
val n = (newSeed >>> 16).toInt
(n, nextRNG)
}
def nextDouble: (Double, RNG) = {
val (n, nextRNG) = nextInt
double(nextRNG)
}
}
}
object PI {
import RNG._
def doubleStream(rng: Simple):Stream[Double] = rng.nextDouble match {
case (d:Double, next:Simple) => d #:: doubleStream(next)
}
def estimate(rng: Simple, iter: Int): Double = {
val doubles = doubleStream(rng).take(iter)
val inside = (doubles zip doubles.drop(3))
.map { case (a, b) => Point(a, b) }
.filter(p => p.isInCircle)
.size * 1.0
(inside / iter) * 4.0
}
}
// > PI.estimate(RNG.Simple(10), 100000)
// res1: Double = 3.14944
I suspect I'm looking for something like replicateM from Applicative in Cats, but I'm not sure how to line up the types or how to do it in a way that doesn't accumulate intermediate results in memory. Or is there a way to do it with a for comprehension that can iteratively build up Points?
If you want to iterate using a monad in a stack-safe way, there is a tailRecM method implemented in the Monad type class:
import cats.Monad
import cats.implicits._ // map/flatMap syntax for F[_] in the for comprehension below

// assuming random generates values in [-1.0, 1.0]
def calculatePi[F[_]](iterations: Int)
(random: => F[Double])
(implicit F: Monad[F]): F[Double] = {
case class Iterations(total: Int, inCircle: Int)
def step(data: Iterations): F[Either[Iterations, Double]] = for {
x <- random
y <- random
isInCircle = (x * x + y * y) < 1.0
newTotal = data.total + 1
newInCircle = data.inCircle + (if (isInCircle) 1 else 0)
} yield {
if (newTotal >= iterations) Right(newInCircle.toDouble / newTotal.toDouble * 4.0)
else Left(Iterations(newTotal, newInCircle))
}
// iterates until Right value is returned
F.tailRecM(Iterations(0, 0))(step)
}
calculatePi(10000)(Future { Random.nextDouble }).onComplete(println)
It uses a by-name parameter because you might try to pass in something like Future (even though Future is not lawful), which is eager, so you would end up evaluating the same value over and over again. With a by-name parameter you at least have the chance of passing in a recipe for a side-effecting random number. Of course, if we use Option or List as the monad holding our "random" number, we should also expect funny results.
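A minimal illustration of why the by-name parameter matters (this sketch assumes the global ExecutionContext; it is not part of the code above):
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Random

val once: Future[Double] = Future(Random.nextDouble()) // evaluated eagerly, exactly once
// With a by-value F[Double] parameter, passing `once` would reuse this single value on every iteration.
// With the by-name `random: => F[Double]` above, `Future(Random.nextDouble())` is rebuilt on each `x <- random`.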
The correct solution is to use something that ensures that F[A] is evaluated lazily, and that any side effect inside is evaluated each time you need a value from it. For that you basically have to use one of the effect type classes, e.g. Sync from Cats Effect.
def calculatePi[F[_]](iterations: Int)
(random: F[Double])
(implicit F: Sync[F]): F[Double] = {
...
}
calculatePi(10000)(Coeval( Random.nextDouble )).value
calculatePi(10000)(Task( Random.nextDouble )).runAsync
Alternatively, if you don't care about purity that much, you could pass a side-effecting function or object instead of F[Double] for generating random numbers.
// simplified, hardcoded F = Coeval
import monix.eval.Coeval // Monix provides the cats instances for Coeval in its companion
import cats.Monad
def calculatePi(iterations: Int)
(random: () => Double): Double = {
case class Iterations(total: Int, inCircle: Int)
def step(data: Iterations) = Coeval {
val x = random()
val y = random()
val isInCircle = (x * x + y * y) < 1.0
val newTotal = data.total + 1
val newInCircle = data.inCircle + (if (isInCircle) 1 else 0)
if (newTotal >= iterations) Right(newInCircle.toDouble / newTotal.toDouble * 4.0)
else Left(Iterations(newTotal, newInCircle))
}
Monad[Coeval].tailRecM(Iterations(0, 0))(step).value
}
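Usage might then look like this (a sketch; it assumes Monix's Coeval is on the classpath, as in the snippet above):
import scala.util.Random

// rescale nextDouble to [-1.0, 1.0], matching the assumption stated earlier
println(calculatePi(10000)(() => Random.nextDouble() * 2 - 1))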
Here is another approach that my friend Charles Miller came up with. It's a bit more direct, since it uses an RNG directly, but it follows the same Monad-based approach provided by @Mateusz Kubuszok above.
The key difference is that it leverages the State monad so we can thread the RNG state through the computation and generate the random numbers using the "pure" random number generator.
import cats._
import cats.data._
import cats.implicits._
object PICharles {
type RNG[A] = State[Long, A]
object RNG {
def nextLong: RNG[Long] =
State.modify[Long](
seed ⇒ (seed * 0x5DEECE66DL + 0xBL) & 0xFFFFFFFFFFFFL
) >> State.get
def nextInt: RNG[Int] = nextLong.map(l ⇒ (l >>> 16).toInt)
def nextNatural: RNG[Int] = nextInt.map { i ⇒
if (i > 0) i
else if (i == Int.MinValue) 0
else i + Int.MaxValue
}
def nextDouble: RNG[Double] = nextNatural.map(_.toDouble / Int.MaxValue)
def runRng[A](seed: Long)(rng: RNG[A]): A = rng.runA(seed).value
def unsafeRunRng[A]: RNG[A] ⇒ A = runRng(System.currentTimeMillis)
}
object PI {
case class Step(count: Int, inCircle: Int)
def calculatePi(iterations: Int): RNG[Double] = {
def step(s: Step): RNG[Either[Step, Double]] =
for {
x ← RNG.nextDouble
y ← RNG.nextDouble
isInCircle = (x * x + y * y) < 1.0
newInCircle = s.inCircle + (if (isInCircle) 1 else 0)
} yield {
if (s.count >= iterations)
Right(s.inCircle.toDouble / s.count.toDouble * 4.0)
else
Left(Step(s.count + 1, newInCircle))
}
Monad[RNG].tailRecM(Step(0, 0))(step(_))
}
def unsafeCalculatePi(iterations: Int) =
RNG.unsafeRunRng(calculatePi(iterations))
}
}
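Usage could then look like this (a sketch; the seed and iteration count are arbitrary):
import PICharles._

val reproducible: Double = RNG.runRng(10L)(PI.calculatePi(100000)) // fixed seed, same result every run
val unsafe: Double = PI.unsafeCalculatePi(100000)                  // seeds from the current time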
Thanks Charles & Mateusz for your help!
I created a Combiner trait with subclasses Complex and IntCombiner, and my objective is to make Matrix work with both Complex and Int. But for some reason it doesn't compile, saying:
[com.implicits.TestImplicits1.IntCombiner] do not conform to class Matrix's type parameter bounds [T <: com.implicits.TestImplicits1.Combiner[T]]
val m1 = new Matrix[IntCombiner](3, 3)((1 to 9).sliding(3).map {
But as I understand it, since IntCombiner is a subclass of Combiner, it should work. Why such an error? Please explain.
object TestImplicits1 {
trait Combiner[T] {
def +(b: T): T
def *(b: T): T
}
class Complex(r: Double, i: Double) extends Combiner[Complex] {
val real = r
val im = i
override def +(b: Complex): Complex = {
new Complex(real + b.real, im + b.im)
}
override def *(b: Complex): Complex = {
new Complex((real * b.real) - (im * b.im), real * b.im + b.real * im)
}
}
class IntCombiner(a: Int) extends Combiner[Int] {
val v = a
override def *(b: Int): Int = v * b
override def +(b: Int): Int = v + b
}
class Matrix[T <: Combiner[T]](x1: Int, y1: Int)(ma: Seq[Seq[T]]) {
self =>
val x: Int = x1
val y: Int = y1
def dot(v1: Seq[T], v2: Seq[T]): T = {
v1.zip(v2).map { t: (T, T) => {
t._1 * t._2
}
}.reduce(_ + _)
}
}
object MatrixInt extends App {
def apply[T <: Combiner[T]](x1: Int, y1: Int)(s: Seq[Seq[T]]) = {
new Matrix[T](x1, y1)(s)
}
val m1 = new Matrix[IntCombiner](3, 3)((1 to 9).sliding(3).map {
x => x map { y => new IntCombiner(y) }
}.toSeq)
}
}
F-bounded polymorphism cannot be added to the existing Int class, because Int is just what it is: it does not know anything about your Combiner trait, so it cannot extend it. You could wrap every Int into something like an IntWrapper <: Combiner[IntWrapper] (sketched below), but this would waste quite a bit of memory, and library design around F-bounded polymorphism tends to be tricky.
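For illustration only, such a wrapper might look like this (a sketch reusing the question's Combiner trait; not recommended):
class IntWrapper(val v: Int) extends Combiner[IntWrapper] {
  override def +(b: IntWrapper): IntWrapper = new IntWrapper(v + b.v)
  override def *(b: IntWrapper): IntWrapper = new IntWrapper(v * b.v)
}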
Here is a proposal based on ad-hoc polymorphism and typeclasses instead:
object TestImplicits1 {
trait Combiner[T] {
def +(a: T, b: T): T
def *(a: T, b: T): T
}
object syntax {
object combiner {
implicit class CombinerOps[A](a: A) {
def +(b: A)(implicit comb: Combiner[A]) = comb.+(a, b)
def *(b: A)(implicit comb: Combiner[A]) = comb.*(a, b)
}
}
}
case class Complex(re: Double, im: Double)
implicit val complexCombiner: Combiner[Complex] = new Combiner[Complex] {
override def +(a: Complex, b: Complex): Complex = {
Complex(a.re + b.re, a.im + b.im)
}
override def *(a: Complex, b: Complex): Complex = {
Complex((a.re * b.re) - (a.im * b.im), a.re * b.im + b.re * a.im)
}
}
implicit val intCombiner: Combiner[Int] = new Combiner[Int] {
override def *(a: Int, b: Int): Int = a * b
override def +(a: Int, b: Int): Int = a + b
}
class Matrix[T: Combiner](entries: Vector[Vector[T]]) {
def frobeniusNormSq: T = {
import syntax.combiner._
entries.map(_.map(x => x * x).reduce(_ + _)).reduce(_ + _)
}
}
}
I don't know what you intended with dot there; your x1, y1, and ma seemed to be completely unused, so I added a simple squared-Frobenius-norm example instead (usage sketched below), just to show how the type class and the syntactic sugar for operators work together. Please don't expect anything remotely resembling "high performance" from it; the JVM traditionally never cared much about rectangular arrays and number crunching (at least not on a single compute node; Spark & Co. are a different story). At least your code won't be automatically transpiled to optimized CUDA code, that's for sure.
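For completeness, a small usage sketch (the numbers are made up; the Int instance comes from intCombiner above):
import TestImplicits1._

val m = new Matrix(Vector(Vector(1, 2), Vector(3, 4)))
println(m.frobeniusNormSq) // 1 + 4 + 9 + 16 = 30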
I have the following function, which generates a uniformly distributed value between two bounds:
def Uniform(x: Bounded[Double], n: Int): Bounded[Double] = {
val y: Double = (x.upper - x.lower) * scala.util.Random.nextDouble() + x.lower
Bounded(y, x.bounds)
}
and Bounded is defined as follows:
trait Bounded[T] {
val underlying: T
val bounds: (T, T)
def lower: T = bounds._1
def upper: T = bounds._2
override def toString = underlying.toString + " <- [" + lower.toString + "," + upper.toString + "]"
}
object Bounded {
def apply[T : Numeric](x: T, _bounds: (T, T)): Bounded[T] = new Bounded[T] {
override val underlying: T = x
override val bounds: (T, T) = _bounds
}
}
However, I want Uniform to work on all Fractional[T] values, so I wanted to add a context bound:
def Uniform[T : Fractional](x: Bounded[T], n: Int): Bounded[T] = {
import Numeric.Implicits._
val y: T = (x.upper - x.lower) * scala.util.Random.nextDouble().asInstanceOf[T] + x.lower
Bounded(y, x.bounds)
}
This works fine for Uniform[Double](x: Bounded[Double]), but the other types fail with a ClassCastException at runtime, because the Double returned by Random.nextDouble cannot be cast to them. Is there a way to solve this?
I'd suggest defining a new type class that characterizes types that you can get random instances of:
import scala.util.Random
trait GetRandom[A] {
def next(): A
}
object GetRandom {
def instance[A](a: => A): GetRandom[A] = new GetRandom[A] {
def next(): A = a
}
implicit val doubleRandom: GetRandom[Double] = instance(Random.nextDouble())
implicit val floatRandom: GetRandom[Float] = instance(Random.nextFloat())
// Define any other instances here
}
Now you can write Uniform like this:
def Uniform[T: Fractional: GetRandom](x: Bounded[T], n: Int): Bounded[T] = {
import Numeric.Implicits._
val y: T = (x.upper - x.lower) * implicitly[GetRandom[T]].next() + x.lower
Bounded(y, x.bounds)
}
And use it like this:
scala> Uniform[Double](Bounded(2, (0, 4)), 1)
res15: Bounded[Double] = 1.5325899033654382 <- [0.0,4.0]
scala> Uniform[Float](Bounded(2, (0, 4)), 1)
res16: Bounded[Float] = 0.06786823 <- [0.0,4.0]
There are libraries like rng that provide a similar type class for you, but they tend to be focused on purely functional ways to work with random numbers, so if you want something simpler you're probably best off writing your own.
Let's say I define some operators for my class like this:
class A {
def +(f: Float) = /* ... */
}
val a: A = new A
This allows me to do a + 1f, easy enough. What if I want to let the library's user write 1f + a, too? How can I implement that?
In Scala 2.9 you can import this implicit conversion:
implicit def floatPlusAExtender (x: Float) =
new {
def + (a: A) = a + x
}
and use it as you wanted. Since Scala 2.10 it is better to do this conversion like so:
implicit class FloatPlusAExtender (x: Float) {
def + (a: A) = a + x
}
or even better like so:
implicit class FloatPlusAExtender (val x: Float) extends AnyVal {
def + (a: A) = a + x
}
The last way uses a value class, and unlike the preceding two it provides this functionality with zero overhead (thanks, axel22). Value classes are also new in 2.10.
Or you can just modify A like so:
class A {
def + (x: Float) = /* ... */
def +: (x: Float) = this + x
}
and use it like so:
1f +: a
The last approach is preferable.
One approach is the pimp-my-library pattern:
class FloatWithPlusA(f: Float) {
def +(a: A) = a + f
}
implicit def floatPlusA(f: Float): FloatWithPlusA =
new FloatWithPlusA(f)
val a: A = new A
a + 1.0f /* a.+(1.0f) */
1.0f + a /* floatPlusA(1.0f).+(a) */
Another approach is adding a right-associative method, but with the obvious disadvantage that the syntax of the two operators varies:
class A {
val f: Float = 1.0f
def +(f: Float) = this.f + f
def +:(f: Float) = this.f + f
}
val a: A = new A
a + 1.0f
1.0f +: a
This is a follow-up to this question.
I'm trying to implement vectors in Scala with a generic superclass, using self-types:
trait Vec[V] { self:V =>
def /(d:Double):Vec[V]
def dot(v:V):Double
def norm:Double = math.sqrt(this dot this)
def normalize = self / norm
}
Here's an implementation of a 3D vector:
class Vec3(val x:Double, val y:Double, val z:Double) extends Vec[Vec3]
{
def /(d:Double) = new Vec3(x / d, y / d, z / d)
def dot(v:Vec3) = x * v.x + y * v.y + z * v.z
def cross(v:Vec3):Vec3 =
{
val (a, b, c) = (v.x, v.y, v.z)
new Vec3(c * y - b * z, a * z - c * x, b * x - a * y)
}
def perpTo(v:Vec3) = (this.normalize).cross(v.normalize)
}
Unfortunately this doesn't compile:
Vec3.scala:10: error: value cross is not a member of Vec[Vec3]
def perpTo(v:Vec3) = (this.normalize).cross(v.normalize)
^
What's going wrong, and how do I fix it?
Additionally, any references on self-types would be appreciated because I think these errors are cropping up from my lack of understanding.
To get rid of all the nastiness, you have to specify that the type parameter V is itself a subtype of Vec[V].
Now you can just use V everywhere, because your trait knows that V inherits all Vec[V] methods.
trait Vec[V <: Vec[V]] { self: V =>
def -(v:V): V
def /(d:Double): V
def dot(v:V): Double
def norm:Double = math.sqrt(this dot this)
def normalize: V = self / norm
def dist(v: V) = (self - v).norm
def nasty(v: V) = (self / norm).norm
}
Note the method nasty, which won't compile with Easy Angel's approach: there self / norm has type V, and an unbounded V is not known to have a norm method.
I think that the method / in Vec should return V instead of Vec[V]:
trait Vec[V] { self:V =>
def /(d:Double): V
def dot(v:V):Double
def norm:Double = math.sqrt(this dot this)
def normalize = self / norm
}
The method cross exists in Vec3 (in other words, in V) but not in Vec[V].
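With that change the question's Vec3 compiles unmodified and perpTo works as intended; a quick sanity check (values chosen for illustration):
val a = new Vec3(1, 0, 0)
val b = new Vec3(0, 1, 0)
val p = a.perpTo(b) // cross product of the two normalized vectors
println(p.dot(a))   // 0.0 -- perpendicular to a
println(p.dot(b))   // 0.0 -- perpendicular to b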