I am using the Azavea Numeric Scala library for generic maths operations. However, I cannot use these with the Scala Collections API, as they require a scala Numeric and it appears as though the two Numerics are mutually exclusive. Is there any way I can avoid re-implementing all mathematical operations on Scala Collections for Azavea Numeric, apart from requiring all types to have context bounds for both Numerics?
import Predef.{any2stringadd => _, _}

class Numeric {
  def addOne[T: com.azavea.math.Numeric](x: T) {
    import com.azavea.math.EasyImplicits._
    val y = x + 1     // Compiles
    val seq = Seq(x)
    val z = seq.sum   // Could not find implicit value for parameter num: Numeric[T]
  }
}
Where Azavea Numeric is defined as
trait Numeric[@scala.specialized A] extends java.lang.Object with
  com.azavea.math.ConvertableFrom[A] with com.azavea.math.ConvertableTo[A] with scala.ScalaObject {
  def abs(a: A): A
  ...remaining methods redacted...
}
object Numeric {
  implicit object IntIsNumeric extends IntIsNumeric
  implicit object LongIsNumeric extends LongIsNumeric
  implicit object FloatIsNumeric extends FloatIsNumeric
  implicit object DoubleIsNumeric extends DoubleIsNumeric
  implicit object BigIntIsNumeric extends BigIntIsNumeric
  implicit object BigDecimalIsNumeric extends BigDecimalIsNumeric

  def numeric[@specialized(Int, Long, Float, Double) A: Numeric]: Numeric[A] = implicitly[Numeric[A]]
}
You can use Régis Jean-Gilles's solution, which is a good one, and wrap Azavea's Numeric. You can also try recreating the methods yourself, but using Azavea's Numeric. Aside from NumericRange, most should be pretty straightforward to implement.
You may be interested in Spire though, which succeeds Azavea's Numeric library. It has all the same features, but some new ones as well (more operations, new number types, sorting & selection, etc.). If you are using 2.10 (most of our work is being directed at 2.10), then using Spire's Numeric eliminates virtually all overhead of a generic approach and often runs as fast as a direct (non-generic) implementation.
That said, I think your question is a good suggestion; we should really add a toScalaNumeric method on Numeric. Which Scala collection methods were you planning on using? Spire adds several new methods to Arrays, such as qsum, qproduct, qnorm(p), qsort, qselect(k), etc.
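For illustration, here is a rough sketch of what that can look like (assuming spire.implicits._ brings the Array syntax into scope; the exact imports may vary between Spire versions):

import spire.implicits._

val xs = Array(1.0, 2.0, 3.0)
xs.qsum      // fast generic sum, specialized on Double
xs.qproduct  // fast generic product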
The most general solution would be to write a class that wraps com.azavea.math.Numeric and implements scala.math.Numeric in terms of it:
class AzaveaNumericWrapper[T](implicit val n: com.azavea.math.Numeric[T]) extends scala.math.Numeric[T] {
  def compare(x: T, y: T): Int = n.compare(x, y)
  def minus(x: T, y: T): T = n.minus(x, y)
  // and so on
}
Then implement an implicit conversion:
// NOTE: in scala 2.10, we could directly declare AzaveaNumericWrapper as an implicit class
implicit def toAzaveaNumericWrapper[T]( implicit n: com.azavea.math.Numeric[T] ) = new AzaveaNumericWrapper( n )
The fact that n is itself an implicit is key here: it allows implicit values of type com.azavea.math.Numeric to be automatically used where an implicit value of type scala.math.Numeric is expected.
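For example, with toAzaveaNumericWrapper in scope, something along these lines should now compile (a sketch reusing the question's setup):

// The context bound provides com.azavea.math.Numeric[T];
// the implicit conversion derives the scala.math.Numeric[T] that sum needs.
def sumAll[T: com.azavea.math.Numeric](xs: Seq[T]): T = xs.sum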
Note that to be complete, you'll probably want to do the reverse too (write a class ScalaNumericWrapper that implements com.azavea.math.Numeric in terms of scala.math.Numeric).
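A minimal sketch of that reverse direction, assuming the Azavea trait's methods mirror the ones used above:

class ScalaNumericWrapper[T](implicit val n: scala.math.Numeric[T]) extends com.azavea.math.Numeric[T] {
  def compare(x: T, y: T): Int = n.compare(x, y)
  def minus(x: T, y: T): T = n.minus(x, y)
  def abs(a: T): T = n.abs(a)
  // and so on, for the remaining abstract methods of com.azavea.math.Numeric
}

implicit def toScalaNumericWrapper[T](implicit n: scala.math.Numeric[T]) = new ScalaNumericWrapper(n)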
Now, there is a disadvantage to the above solution: you get a conversion (and thus an instantiation) on each call to a method that has a context bound of type scala.math.Numeric and where only an instance of com.azavea.math.Numeric is in scope.
So you will actually want to define an implicit singleton instance of AzaveaNumericWrapper for each of your numeric types. Assuming that you have types MyType and MyOtherType for which you defined instances of com.azavea.math.Numeric:
implicit object MyTypeIsNumeric extends AzaveaNumericWrapper[MyType]
implicit object MyOtherTypeIsNumeric extends AzaveaNumericWrapper[MyOtherType]
//...
Also, keep in mind that the apparent main purpose of Azavea's Numeric class is to greatly enhance execution speed (mostly due to type parameter specialization).
Using the wrapper as above, you lose the specialization and hence the speed that comes from it. Specialization has to be used all the way down: as soon as you call a generic method that is not specialized, you enter the world of unspecialized generics (even if that method then calls back into a specialized method).
So in cases where speed matters, try to use Azavea's Numeric directly instead of Scala's Numeric (just because AzaveaNumericWrapper uses it internally does not mean that you will get any speed increase, as specialization won't happen here).
You may have noticed that in my examples I avoided defining instances of AzaveaNumericWrapper for types such as Int and Long.
This is because the standard library already provides implicit values of scala.math.Numeric for these types.
You might be tempted to just hide them (via something like import scala.math.Numeric.{ShortIsIntegral => _}), so as to be sure that your own (Azavea-backed) version is used,
but there is no point. The only reason I can think of would be to make it run faster, but as explained above, it won't.
The only way I can think of doing this, without creating a wrapper class, is to use Scala 3's union types like this
type Even = 0 | 2 | 4 | 6 | 8
val even : Even = 4
but that obviously has a limit. Is there a way to create the "entire" range?
As a follow up, what about for other ranges? Is there some way to create a function that restricts the type in some arbitrary way (as dangerous as that sounds)?
You can create a newtype with a smart constructor. There are several ways to do it.
First, manually, to show how it works:
trait Newtype[T] {
  type Type
  protected def wrap(t: T): Type = t.asInstanceOf[Type]
  protected def unwrap(t: Type): T = t.asInstanceOf[T]
}
type Even = Even.Type
object Even extends Newtype[Int] {
  def parse(i: Int): Either[String, Even] =
    if (i % 2 == 0) Right(wrap(i))
    else Left(s"$i is odd")

  implicit class EvenOps(private val even: Even) extends AnyVal {
    def value: Int = unwrap(even)
    def +(other: Even): Even = wrap(even.value + other.value)
    def -(other: Even): Even = wrap(even.value - other.value)
  }
}
You are creating a type Even which the compiler knows nothing about, so it cannot prove that an arbitrary value is an instance of it. But you can force-cast to it and back again: if the JVM at runtime isn't able to catch some issue with it, there is no problem (and since it assumes nothing about Even, it cannot disprove anything by contradiction).
Since Even resolves to Even.Type - that is, the type Type within the Even object - Scala's implicit scope will automatically fetch all implicits defined in object Even, so you can place your extension methods and typeclasses there.
This lets you pretend that this type has some methods defined on it.
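A small usage sketch of the manual newtype above (assuming Scala 2.12+, where Either is right-biased):

val sum: Either[String, Int] = for {
  a <- Even.parse(2)
  b <- Even.parse(4)
} yield (a + b).value  // Right(6)

Even.parse(3)          // Left("3 is odd")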
In Scala 3 you can achieve the same with an opaque type. However, this representation has the nice upside that it is easy to make it cross-compilable with Scala 2 and Scala 3. As a matter of fact, that's what Monix Newtype did, so you can use it instead of implementing this functionality yourself.
import monix.newtypes._

type Even = Even.Type
object Even extends Newtype[Int] {
  // ...
}
Another option is the older macro-annotation-based library Scala Newtype. It will take your type defined as a case class and rewrite the code to implement something similar to what we have above:
import io.estatico.newtype.macros.newtype

@newtype case class Even(value: Int)
however it is harder to add your own smart constructor there, which is why it is usually paired with Refined Types. Then your code would look like:
import eu.timepit.refined._
import eu.timepit.refined.api.Refined
import eu.timepit.refined.numeric
import io.estatico.newtype.macros.newtype

@newtype case class Even(value: Int Refined numeric.Even)
object Even {
  def parse(i: Int): Either[String, Even] =
    refineV[numeric.Even](i).map(Even(_))
}
However, you might want to just use the plain refined type at this point, since the Even newtype wouldn't introduce any domain knowledge beyond what the refinement does.
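For comparison, a minimal sketch of using the refined type directly, with the same eu.timepit.refined imports as above:

type Even = Int Refined numeric.Even

val ok: Either[String, Even]  = refineV[numeric.Even](4)  // Right(4)
val bad: Either[String, Even] = refineV[numeric.Even](3)  // Left(...)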
When implementing Typeclasses for our types, we can use different syntaxes (an implicit val or an implicit object, for example). As an example:
A Typeclass definition:
trait Increment[A] {
  def increment(value: A): A
}
And, as far as I know, we could implement it for Int in the two following ways:
implicit val fooInstance: Increment[Int] = new Increment[Int] {
  override def increment(value: Int): Int = value + 1
}

// or

implicit object fooInstance extends Increment[Int] {
  override def increment(value: Int): Int = value + 1
}
I always use the first one, as in Scala 2.13 it has an abbreviated syntax that looks like this:
implicit val fooInstance: Increment[Int] = (value: Int) => value + 1
But, is there any real difference between them? or is there any recommendation or standard to do this?
There is a related question about implicit defs and implicit classes for conversions, but I'm going more to the point of how to create (best practices) instances of Typeclasses, not about implicit conversions
As far as I know the differences would be:
objects have different initialization rules - quite often they will be lazily initialized (which doesn't matter if you don't perform side effects in the constructor)
it would also be seen differently from Java (but again, you probably won't notice that difference in Scala)
object X will have a type X.type which is a subtype of whatever X extends or implements (but implicit resolution would find that it extends your typeclass, though perhaps with a bit more effort)
So, I wouldn't see any difference in actual usage, BUT the implicit val version could generate less JVM garbage (I say garbage because you wouldn't use any of that extra compiler effort in this particular case).
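Either form is resolved the same way at the use site; a small sketch using the Increment typeclass from the question:

def incrementAll[A](as: List[A])(implicit inc: Increment[A]): List[A] =
  as.map(inc.increment)

incrementAll(List(1, 2, 3))  // List(2, 3, 4), with either fooInstance in scope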
Over the past week or so I've been working on a typed, indexed array trait for Scala. I'd like to supply the trait as a typeclass, and allow the library user to implement it however they like. Here's an example, using a list of lists to implement the 2d array typeclass:
// create a 2d Array typeclass, with additional type parameters
trait IsA2dArray[A, T, Idx0, Idx1] {
  def get(arr: A, x: Int, y: Int): T // get a single element of the array; its type will be T
}

// give this typeclass method syntax
implicit class IsA2dArrayOps[A, T, Idx0, Idx1](value: A) {
  def get(x: Int, y: Int)(implicit isA2dArrayInstance: IsA2dArray[A, T, Idx0, Idx1]): T =
    isA2dArrayInstance.get(value, x, y)
}
// The user then creates a simple case class that can act as a 2d array
case class Arr2d[T, Idx0, Idx1](
  values: List[List[T]],
  idx0: List[Idx0],
  idx1: List[Idx1],
)
// A couple of dummy index element types:
case class Date(i: Int) // an example index element
case class Item(a: Char) // also an example
// The user implements the IsA2dArray typeclass
implicit def arr2dIsA2dArray[T, Idx0, Idx1] = new IsA2dArray[Arr2d[T, Idx0, Idx1], T, Idx0, Idx1] {
  def get(arr: Arr2d[T, Idx0, Idx1], x: Int, y: Int): T = arr.values(x)(y)
}
// create an example instance of the type
val arr2d = Arr2d[Double, Date, Item](
  List(List(1.0, 2.0), List(3.0, 4.0)),
  List(Date(0), Date(1)),
  List(Item('a'), Item('b')),
)
// check that it works
arr2d.get(0, 1)
This all seems fine. Where I am having difficulties is that I would like to constrain the index types to a list of approved types (which the user can change). Since the program is not the original owner of all the approved types, I was thinking of having a typeclass to represent these approved types, and of having the approved types implement it:
trait IsValidIndex[A] // a typeclass, indicating whether this is a valid index type
implicit val dateIsValidIndex: IsValidIndex[Date] = new IsValidIndex[Date] {}
implicit val itemIsValidIndex: IsValidIndex[Item] = new IsValidIndex[Item] {}
then change the typeclass definition to impose a constraint that Idx0 and Idx1 have to implement the IsValidIndex typeclass (and here is where things start not to work):
trait IsA2dArray[A, T, Idx0: IsValidIndex, Idx1: IsValidIndex] {
  def get(arr: A, x: Int, y: Int): T // get a single element of the array; its type will be T
}
This won't compile because it would require the trait to have an implicit parameter for the typeclass, which traits are not allowed to have (see Constraining type parameters on case classes and traits).
This leaves me with two potential solutions, but both of them feel a bit sub-optimal:
Implement the original IsA2dArray typeclass as an abstract class instead, which then allows me to use the Idx0: IsValidIndex syntax directly above (kindly suggested in the link above). This was my original thinking, but
a) it is less user friendly, since it requires the user to wrap whatever type they are using in another class which then extends this abstract class. Whereas with a typeclass, the new functionality can be directly bolted on, and
b) this quickly got quite fiddly and hard to type - I found this blog post (https://tpolecat.github.io/2015/04/29/f-bounds.html) relevant to the problems - and it felt like taking the typeclass route would be easier over the longer term.
The constraint that Idx0 and Idx1 must implement IsValidIndex can be placed in the implicit def that implements the typeclass:
implicit def arr2dIsA2dArray[T, Idx0: IsValidIndex, Idx1: IsValidIndex] = ...
But this is then in the user's hands rather than the library writer's, and there is no guarantee that they will enforce it.
If anyone could suggest either a work-around to square this circle, or an overall change of approach which achieves the same goal, I'd be most grateful. I understand that Scala 3 allows traits to have implicit parameters and therefore would allow me to use the Idx0: IsValidIndex constraint directly in the typeclass generic parameter list, which would be great. But switching over to 3 just for that feels like quite a big hammer to crack a relatively small nut.
I guess the solution is
Implement the original IsA2dArray typeclass as an abstract class instead, which then allows me to use the Idx0: IsValidIndex syntax directly above (kindly suggested in the link above).
This was my original thinking, but a) it is less user friendly, since it requires the user to wrap whatever type they are using in another class which then extends this abstract class.
No, the abstract class will not be extended*; it will still be a type class, just an abstract-class type class rather than a trait type class.
Can I just assume that trait and abstract class are interchangeable when defining typeclasses?
Mostly.
What is the advantage of using abstract classes instead of traits?
https://www.geeksforgeeks.org/difference-between-traits-and-abstract-classes-in-scala/
Unless you have a hierarchy of type classes (like Functor, Applicative, Monad... in Cats): a trait or abstract class (a type class) can't extend several abstract classes (type classes), while it can extend several traits (type classes). But inheritance of type classes is tricky anyway:
https://typelevel.org/blog/2016/09/30/subtype-typeclasses.html
* Well, when we write implicit def arr2dIsA2dArray[T, Idx0, Idx1] = new IsA2dArray[Arr2d[T, Idx0, Idx1], T, Idx0, Idx1] {..., technically it's extending IsA2dArray, but this works the same whether IsA2dArray is a trait or an abstract class.
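A minimal sketch of that, reusing the definitions from the question (the context bounds now live on the type class itself, so they are enforced by the library writer):

abstract class IsA2dArray[A, T, Idx0: IsValidIndex, Idx1: IsValidIndex] {
  def get(arr: A, x: Int, y: Int): T
}

// The instance is provided the same way as before; the IsValidIndex evidence is
// now demanded by the abstract class's constructor, so users cannot skip it.
implicit def arr2dIsA2dArray[T, Idx0: IsValidIndex, Idx1: IsValidIndex]: IsA2dArray[Arr2d[T, Idx0, Idx1], T, Idx0, Idx1] =
  new IsA2dArray[Arr2d[T, Idx0, Idx1], T, Idx0, Idx1] {
    def get(arr: Arr2d[T, Idx0, Idx1], x: Int, y: Int): T = arr.values(x)(y)
  }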
Sorry I'm not very familiar with Scala, but I'm curious if this is possible and haven't been able to figure out how.
Basically, I want to create some convenience initializers that can generate a random sample of data (in this case a grid). The grid will always be filled with instances of a particular type (in this case a Location). But in different cases I might want grids filled with different subtypes of Location, e.g. Farm or City.
In Python, this would be trivial:
def fillCollection(klass, size):
    return [klass() for _ in range(size)]

class City: pass

cities = fillCollection(City, 10)
I tried to do something similar in Scala but it does not work:
def fillGrid[T <: Location](size: Int): Vector[T] = {
  Vector.fill[T](size, size) {
    T()
  }
}
The compiler just says "not found: value T"
So, it it possible to approximate the above Python code in Scala? If not, what's the recommended way to handle this kind of situation? I could write an initializer for each subtype, but in my real code there's a decent amount of boilerplate overlap between them so I'd like to share code if possible.
The best workaround I've come up with so far is to pass a closure into the initializer (which seems to be how the fill method on Vectors already works), e.g.:
def fillGrid[T <: Location](withElem: => T, size: Int = 100): Vector[Vector[T]] = {
  Vector.fill[T](n1 = size, n2 = size)(withElem)
}
That's not a huge inconvenience, but it makes me curious why Scala doesn't support the "simpler" Python-style construct (if it in fact doesn't). I sort of get why having a "fully generic" initializer could cause trouble, but in this case I can't see what the harm would be in generically initializing instances that are all known to be subtypes of a given parent type.
You are correct in that what you have is probably the simplest option. The reason Scala can't do things the Pythonic way is that the type system is much stronger and it has to contend with type erasure. Scala cannot guarantee at compile time that any subclass of Location has a particular constructor, and it will only allow you to do things that it can guarantee will conform to the types (unless you do tricky things with reflection).
If you want to clean it up a little bit, you can make it work more like python by using implicits.
implicit def emptyFarm(): Farm = new Farm
implicit def emptyCity(): City = new City

def fillGrid[T <: Location](size: Int = 100)(implicit withElem: () => T): Vector[Vector[T]] = {
  Vector.fill[T](n1 = size, n2 = size)(withElem())
}

fillGrid[Farm](3)
To make this more usable in a library, it's common to put the implicits in a companion object of Location, so they can all be brought into scope where appropriate.
sealed trait Location
...
object Location {
  implicit def emptyFarm...
  implicit def emptyCity...
}
...
import Location._
fillGrid[Farm](3)
You can use reflection to accomplish what you want...
This is a simple example that will only work if all your subclasses have a zero args constructor.
sealed trait Location
class Farm extends Location
class City extends Location
def fillGrid[T <: Location](size: Int)(implicit TTag: scala.reflect.ClassTag[T]): Vector[Vector[T]] = {
  val TClass = TTag.runtimeClass
  Vector.fill[T](size, size) { TClass.newInstance().asInstanceOf[T] }
}
However, I have never been a fan of runtime reflection, and I hope there could be another way.
Scala cannot do this kind of thing directly because it's not type safe. It will not work if you pass a class without a zero-argument constructor. The Python version throws an error at runtime if you try to do this.
The closure is probably the best way to go.
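For reference, a usage sketch of the closure-based fillGrid from the question, assuming a Farm class like the one in the reflection example above:

// The by-name argument is re-evaluated for every cell, so each slot gets a fresh Farm.
val farms: Vector[Vector[Farm]] = fillGrid(new Farm, size = 3)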
Having used a Scala library that liberally exposes its reliance on implicits to the caller, I have experienced friction around this mechanism: Scala makes it quite hard at times to debug implicit arguments, and there are quite a few places Scala will fill in values for implicit arguments from. (I could almost relate to it as "implicits hell" at one point.)
At one point in my coding, Scala "complained" that an implicit value could not be matched, whereas in fact there was a "collision" of implicit values, each coming from a different import.
Regardless of that perceived brittleness, it may at times feel like borderline abuse of the context design pattern.
Why does it make sense to have implicit parameters in Scala?
In what scenarios would you use them and how would you avoid trouble?
As I'm not sure the experimentation curve and the potential for other team members getting totally confused are worth it, could you possibly suggest other Scala idioms for sharing context between a multitude of Scala functions?
This question is not about a specific implementation at hand; hopefully it's still a good fit for this site.
Generally, using a common type as an implicit parameter is a bad idea.
def badIdea(n: Int)(implicit s: String) = s * n
It doesn't take much to imagine why: you'll get conflicting implicits for the same thing if anyone else adopts this policy. Better to avoid it.
But who really wants to manually stuff in a scala.concurrent.ExecutionContext every time it's needed (which is practically everywhere)?
So the key is: when you have something with a specialized type, especially if it's bookkeeping that might need to be overridden manually but mostly should just do the right thing, then use implicit parameters. (This usually covers type classes as well.)
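A small sketch of the ExecutionContext case (fetchAll and its body are just illustrative placeholders):

import scala.concurrent.{ExecutionContext, Future}

// Nobody wants to pass the execution context explicitly at every call site,
// so it is taken implicitly and normally "just does the right thing".
def fetchAll(urls: List[String])(implicit ec: ExecutionContext): Future[List[String]] =
  Future.traverse(urls)(url => Future(url.toUpperCase))  // stand-in for real I/O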
Then what do you do if you really need a string? Well, wrap it (at least formally; here it's a value class, so in some contexts it will just pass the string around):
class MyWrappedString(val underlying: String) extends AnyVal {}
implicit val myString = new MyWrappedString("bird")
def decentIdea(n: Int)(implicit mws: MyWrappedString) = mws.underlying * n
scala> decentIdea(2) // In the bush?
res14: String = birdbird
Or if you think some additional logic is helpful, write a wrapper that takes an extra type parameter:
class ImplicitWithValue[K, V](val value: V) {
  // Any extra generic logic goes here
}

object ImplicitWithValue {
  class ValuePart[K] {
    def apply[V](v: V) = new ImplicitWithValue[K, V](v)
  }
  private val genericValuePart = new ValuePart[Any]
  private def typedValuePart[K] = genericValuePart.asInstanceOf[ValuePart[K]]
  def apply[K] = typedValuePart[K]
}
Then you can
trait Marker1
implicit val implicit1 = ImplicitWithValue[Marker1]("fish")
def goodIdea(n: Int)(implicit ms: ImplicitWithValue[Marker1, String]) = ms.value * n
scala> goodIdea(3)
res17: String = fishfishfish