Scala even number type

The only way I can think of doing this, without creating a wrapper class, is to use Scala 3's union types, like this:
type Even = 0 | 2 | 4 | 6 | 8
val even : Even = 4
but that obviously has a limit. Is there a way to create the "entire" range?
As a follow up, what about for other ranges? Is there some way to create a function that restricts the type in some arbitrary way (as dangerous as that sounds)?

You can create a newtype with a smart constructor. There are several ways to do it.
First, manually, to show how it works:
trait Newtype[T] {
  type Type
  protected def wrap(t: T): Type = t.asInstanceOf[Type]
  protected def unwrap(t: Type): T = t.asInstanceOf[T]
}
type Even = Even.Type
object Even extends Newtype[Int] {
  def parse(i: Int): Either[String, Even] =
    if (i % 2 == 0) Right(wrap(i))
    else Left(s"$i is odd")

  implicit class EvenOps(private val even: Even) extends AnyVal {
    def value: Int = unwrap(even)
    def +(other: Even): Even = wrap(even.value + other.value)
    def -(other: Even): Even = wrap(even.value - other.value)
  }
}
You are creating a type Even which the compiler knows nothing about, so it cannot prove that an arbitrary value is its instance. But you can force-cast to it and back again - since the JVM at runtime won't be able to catch any issue with it, there is no problem (and since the compiler assumes nothing about Even, it cannot disprove anything by contradiction).
Since Even resolves to Even.Type - that is, type Type within the Even object - Scala's implicit scope will automatically fetch all implicits that are defined in object Even, so you can place your extension methods and typeclasses there.
This will help you pretend that this type has some methods defined.
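A quick usage sketch (not part of the original answer, assuming the definitions above):

// combining two parsed Evens; + comes from the EvenOps implicit class above
val sum: Either[String, Even] = for {
  a <- Even.parse(2)
  b <- Even.parse(4)
} yield a + b // Right of an Even wrapping 6

val odd = Even.parse(3) // Left("3 is odd")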
In Scala 3 you can achieve the same with an opaque type. However, this representation has the nice property that it is easy to make it cross-compile with Scala 2 and Scala 3. As a matter of fact, that's what Monix Newtype did, so you can use it instead of implementing this functionality yourself.
import monix.newtypes._

type Even = Even.Type
object Even extends Newtype[Int] {
  // ...
}
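For comparison, a minimal Scala 3-only sketch using an opaque type (this block is an illustration, not from Monix Newtype; it assumes plain Scala 3 with no libraries):

object evens {
  // inside this object, Even and Int are interchangeable; outside, Even is abstract
  opaque type Even = Int

  object Even {
    def parse(i: Int): Either[String, Even] =
      if (i % 2 == 0) Right(i) else Left(s"$i is odd")
  }

  extension (e: Even) {
    def value: Int = e
    def +(other: Even): Even = e.value + other.value
  }
}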
Another option is the older macro-annotation-based library Scala Newtype. It will take your type defined as a case class and rewrite the code to implement something similar to what we have above:
import io.estatico.newtype.macros.newtype

@newtype case class Even(value: Int)
However, it is harder to add your own smart constructor there, which is why it is usually paired with Refined Types. Then your code would look like:
import eu.timepit.refined._
import eu.timepit.refined.api.Refined
import eu.timepit.refined.numeric
import io.estatico.newtype.macros.newtype

@newtype case class Even(value: Int Refined numeric.Even)
object Even {
  def parse(i: Int): Either[String, Even] =
    refineV[numeric.Even](i).map(Even(_))
}
However, you might want to just use the plain refined type at this point, since the Even newtype wouldn't introduce any domain knowledge beyond what the refinement does.
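For reference, the refined-only version is a one-liner (a sketch, using the same refined imports as above):

import eu.timepit.refined._
import eu.timepit.refined.api.Refined
import eu.timepit.refined.numeric

type Even = Int Refined numeric.Even

val ok = refineV[numeric.Even](4) // Right(4)
val ko = refineV[numeric.Even](3) // Left(...) with the predicate failure message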

Related

Typeclass implementation best syntax

When implementing typeclasses for our types, we can use different syntaxes: an implicit val or an implicit object, for example. Take this typeclass definition:
trait Increment[A] {
  def increment(value: A): A
}
And, as far as I know, we could implement it for Int in the two following ways:
implicit val fooInstance: Increment[Int] = new Increment[Int] {
  override def increment(value: Int): Int = value + 1
}

// or

implicit object fooInstance extends Increment[Int] {
  override def increment(value: Int): Int = value + 1
}
I always use the first one, since for Scala 2.13 it has an abbreviated (SAM) syntax that looks like this:
implicit val fooInstance: Increment[Int] = (value: Int) => value + 1
But is there any real difference between them? Or is there any recommendation or standard for doing this?
There is a related question about implicit defs and implicit classes for conversions, but I'm asking more about best practices for creating instances of typeclasses, not about implicit conversions.
As far as I know the differences would be:
- objects have different initialization rules - quite often they will be lazily initialized (it doesn't matter if you don't perform side effects in the constructor)
- they are also seen differently from Java (but again, you probably won't notice that difference from Scala)
- object X will have a type X.type, which is a subtype of whatever X extends or implements (implicit resolution will still find that it extends your typeclass, though perhaps with a bit more effort)
So I wouldn't see any difference in actual usage, BUT the implicit val version could generate less JVM garbage (I say "garbage" as you wouldn't use any of that extra compiler effort in this particular case).
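To illustrate the third point, a small sketch (the comments show what the compiler infers; this is an illustration, not from the original answer):

implicit object objInstance extends Increment[Int] {
  override def increment(value: Int): Int = value + 1
}
// objInstance: objInstance.type - a singleton type, which is a subtype of Increment[Int]

implicitly[Increment[Int]].increment(41) // 42 - resolution still finds it via subtyping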

How to create a Scala function that can parametrically create instances of sub-types of some type

Sorry I'm not very familiar with Scala, but I'm curious if this is possible and haven't been able to figure out how.
Basically, I want to create some convenience initializers that can generate a random sample of data (in this case a grid). The grid will always be filled with instances of a particular type (in this case a Location). But in different cases I might want grids filled with different subtypes of Location, e.g. Farm or City.
In Python, this would be trivial:
def fillCollection(klass, size):
    return [klass() for _ in range(size)]

class City: pass

cities = fillCollection(City, 10)
I tried to do something similar in Scala but it does not work:
def fillGrid[T <: Location](size: Int): Vector[Vector[T]] = {
  Vector.fill[T](size, size) {
    T()
  }
}
The compiler just says "not found: value T"
So, is it possible to approximate the above Python code in Scala? If not, what's the recommended way to handle this kind of situation? I could write an initializer for each subtype, but in my real code there's a decent amount of boilerplate overlap between them, so I'd like to share code if possible.
The best workaround I've come up with so far is to pass a closure into the initializer (which seems to be how the fill method on Vectors already works), e.g.:
def fillGrid[T <: Location](withElem: => T, size: Int = 100): Vector[Vector[T]] = {
  Vector.fill[T](n1 = size, n2 = size)(withElem)
}
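Usage might then look like this (assuming a Farm subtype with a zero-argument constructor, purely for illustration):

val farms = fillGrid(new Farm, size = 3) // `new Farm` is by-name, so it is re-evaluated for every cell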
That's not a huge inconvenience, but it makes me curious why Scala doesn't support the "simpler" Python-style construct (if it in fact doesn't). I sort of get why having a "fully generic" initializer could cause trouble, but in this case I can't see what the harm would be in generically initializing instances that are all known to be subtypes of a given parent type.
You are correct, in that what you have is probably the simplest option. The reason Scala can't do things the Pythonic way is that the type system is much stronger, and it has to contend with type erasure. Scala cannot guarantee at compile time that any subclass of Location has a particular constructor, and it will only allow you to do things that it can guarantee will conform to the types (unless you do tricky things with reflection).
If you want to clean it up a little bit, you can make it work more like python by using implicits.
// implicit factory values, one per subtype (vals of function type, so implicit search can find them)
implicit val emptyFarm: () => Farm = () => new Farm
implicit val emptyCity: () => City = () => new City

def fillGrid[T <: Location](size: Int = 100)(implicit withElem: () => T): Vector[Vector[T]] = {
  Vector.fill[T](n1 = size, n2 = size)(withElem())
}

fillGrid[Farm](3)
To make this more usable in a library, it's common to put the implicits in a companion object of Location, so they can all be brought into scope where appropriate.
sealed trait Location
...
object Location {
  implicit val emptyFarm...
  implicit val emptyCity...
}
...
import Location._
fillGrid[Farm](3)
You can use reflection to accomplish what you want...
This is a simple example that will only work if all your subclasses have a zero-argument constructor.
sealed trait Location
class Farm extends Location
class City extends Location

def fillGrid[T <: Location](size: Int)(implicit TTag: scala.reflect.ClassTag[T]): Vector[Vector[T]] = {
  val TClass = TTag.runtimeClass
  Vector.fill[T](size, size) { TClass.newInstance().asInstanceOf[T] }
}
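Usage, given the definitions above:

val cities = fillGrid[City](3) // works: City has a zero-argument constructor, and ClassTag[City] is supplied automatically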
However, I have never been a fan of runtime reflection, and I hope there could be another way.
Scala cannot do this kind of thing directly because it's not type safe. It will not work if you pass a class without a zero-argument constructor. The Python version throws an error at runtime if you try to do this.
The closure is probably the best way to go.

How to test type conformance of higher-kinded types in Scala

I am trying to test whether two "containers" use the same higher-kinded type. Look at the following code:
import scala.reflect.runtime.universe._

class Funct[A[_], B]

class Foo[A: TypeTag](x: A) {
  def test[B[_]](implicit wt: WeakTypeTag[B[_]]) =
    println(typeOf[A] <:< weakTypeOf[Funct[B, _]])

  def print[B[_]](implicit wt: WeakTypeTag[B[_]]) = {
    println(typeOf[A])
    println(weakTypeOf[B[_]])
  }
}

val x = new Foo(new Funct[Option, Int])
x.test[Option]
x.print[Option]
The output is:
false
Test.Funct[Option,Int]
scala.Option[_]
However, I expect the conformance test to succeed. What am I doing wrong? How can I test for higher-kinded types?
Clarification
In my case, the values I am testing (the x: A in the example) come in a List[c.Expr[Any]] in a macro, so any solution relying on static resolution (like the one I have given below) will not solve my problem.
It's the mixup between underscores used in type parameter definitions and elsewhere. The underscore in TypeTag[B[_]] means an existential type, hence you get a tag not for B, but for an existential wrapper over it, which is pretty much useless without manual postprocessing.
Consequently typeOf[Funct[B, _]] that needs a tag for raw B can't make use of the tag for the wrapper and gets upset. By getting upset I mean it refuses to splice the tag in scope and fails with a compilation error. If you use weakTypeOf instead, then that one will succeed, but it will generate stubs for everything it couldn't splice, making the result useless for subtyping checks.
Looks like in this case we really hit the limits of Scala in the sense that there's no way for us to refer to raw B in WeakTypeTag[B], because we don't have kind polymorphism in Scala. Hopefully something like DOT will save us from this inconvenience, but in the meanwhile you can use this workaround (it's not pretty, but I haven't been able to come up with a simpler approach).
import scala.reflect.runtime.universe._

object Test extends App {
  class Foo[B[_], T]

  // NOTE: ideally we'd be able to write this, but since it's not valid Scala
  // we have to work around by using an existential type
  // def test[B[_]](implicit tt: WeakTypeTag[B]) = weakTypeOf[Foo[B, _]]
  def test[B[_]](implicit tt: WeakTypeTag[B[_]]) = {
    val ExistentialType(_, TypeRef(pre, sym, _)) = tt.tpe
    // attempt #1: just compose the type manually
    // but what do we put there instead of question marks?!
    // appliedType(typeOf[Foo], List(TypeRef(pre, sym, Nil), ???))
    // attempt #2: reify a template and then manually replace the stubs
    val template = typeOf[Foo[Hack, _]]
    val result = template.substituteSymbols(List(typeOf[Hack[_]].typeSymbol), List(sym))
    println(result)
  }

  test[Option]
}

// has to be top-level, otherwise the substitution magic won't work
class Hack[T]
An astute reader will notice that I used WeakTypeTag in the signature of test, even though I should be able to use TypeTag. After all, we call test on an Option, which is a well-behaved type, in the sense that it doesn't involve unresolved type parameters or local classes that pose problems for TypeTags. Unfortunately, it's not that simple because of https://issues.scala-lang.org/browse/SI-7686, so we're forced to use a weak tag even though we shouldn't need to.
The following is an answer that works for the example I have given (and might help others), but does not apply to my (non-simplified) case.
Stealing from @pedrofurla's hint, and using type classes:
trait ConfTest[A, B] {
  def conform: Boolean
}

trait LowPrioConfTest {
  implicit def ctF[A, B] = new ConfTest[A, B] { val conform = false }
}

object ConfTest extends LowPrioConfTest {
  implicit def ctT[A, B](implicit ev: A <:< B) =
    new ConfTest[A, B] { val conform = true }
}
And add this to Foo:
def imp[B[_]](implicit ct: ConfTest[A, Funct[B, _]]) =
  println(ct.conform)
Now:
x.imp[Option] // --> true
x.imp[List] // --> false

Getting implicit scala Numeric from Azavea Numeric

I am using the Azavea Numeric Scala library for generic maths operations. However, I cannot use these with the Scala Collections API, as they require a scala Numeric and it appears as though the two Numerics are mutually exclusive. Is there any way I can avoid re-implementing all mathematical operations on Scala Collections for Azavea Numeric, apart from requiring all types to have context bounds for both Numerics?
import Predef.{any2stringadd => _, _}

class Numeric {
  def addOne[T: com.azavea.math.Numeric](x: T) {
    import com.azavea.math.EasyImplicits._
    val y = x + 1 // Compiles
    val seq = Seq(x)
    val z = seq.sum // Could not find implicit value for parameter num: Numeric[T]
  }
}
Where Azavea Numeric is defined as
trait Numeric[@scala.specialized A] extends java.lang.Object with
    com.azavea.math.ConvertableFrom[A] with com.azavea.math.ConvertableTo[A] with scala.ScalaObject {
  def abs(a: A): A
  ...remaining methods redacted...
}

object Numeric {
  implicit object IntIsNumeric extends IntIsNumeric
  implicit object LongIsNumeric extends LongIsNumeric
  implicit object FloatIsNumeric extends FloatIsNumeric
  implicit object DoubleIsNumeric extends DoubleIsNumeric
  implicit object BigIntIsNumeric extends BigIntIsNumeric
  implicit object BigDecimalIsNumeric extends BigDecimalIsNumeric

  def numeric[@specialized(Int, Long, Float, Double) A: Numeric]: Numeric[A] = implicitly[Numeric[A]]
}
You can use Régis Jean-Gilles's solution, which is a good one, and wrap Azavea's Numeric. You can also try recreating the methods yourself, but using Azavea's Numeric. Aside from NumericRange, most should be pretty straightforward to implement.
You may be interested in Spire though, which succeeds Azavea's Numeric library. It has all the same features, plus some new ones (more operations, new number types, sorting & selection, etc.). If you are using 2.10 (most of our work is being directed at 2.10), then using Spire's Numeric eliminates virtually all overhead of a generic approach and often runs as fast as a direct (non-generic) implementation.
That said, I think your question is a good suggestion; we should really add a toScalaNumeric method on Numeric. Which Scala collection methods were you planning on using? Spire adds several new methods to Arrays, such as qsum, qproduct, qnorm(p), qsort, qselect(k), etc.
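For example, the generic code from the question might look like this with Spire (a hypothetical sketch; the exact imports and syntax may vary by Spire version):

import spire.math.Numeric
import spire.implicits._

// `num.one` comes from Numeric's Ring structure; `+` syntax comes from spire.implicits
def addOne[T](x: T)(implicit num: Numeric[T]): T = x + num.one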
The most general solution would be to write a class that wraps com.azavea.math.Numeric and implements scala.math.Numeric in terms of it:
class AzaveaNumericWrapper[T](implicit val n: com.azavea.math.Numeric[T]) extends scala.math.Numeric[T] {
  def compare(x: T, y: T): Int = n.compare(x, y)
  def minus(x: T, y: T): T = n.minus(x, y)
  // and so on
}
Then implement an implicit conversion:
// NOTE: in scala 2.10, we could directly declare AzaveaNumericWrapper as an implicit class
implicit def toAzaveaNumericWrapper[T](implicit n: com.azavea.math.Numeric[T]) = new AzaveaNumericWrapper(n)
The fact that n is itself an implicit is key here: it allows implicit values of type com.azavea.math.Numeric to be automatically used where an implicit value of type scala.math.Numeric is expected.
Note that to be complete, you'll probably want to do the reverse too (write a class ScalaNumericWrapper that implements com.azavea.math.Numeric in terms of scala.math.Numeric).
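A sketch of that reverse wrapper (an illustration; the exact set of abstract methods on com.azavea.math.Numeric may differ):

class ScalaNumericWrapper[T](implicit val n: scala.math.Numeric[T]) extends com.azavea.math.Numeric[T] {
  def compare(x: T, y: T): Int = n.compare(x, y)
  def minus(x: T, y: T): T = n.minus(x, y)
  def abs(a: T): T = n.abs(a)
  // and so on
}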
Now, there is a disadvantage to the above solution: you get a conversion (and thus an instantiation) on each call to a method that has a context bound of type scala.math.Numeric where only an instance of com.azavea.math.Numeric is in scope.
So you will actually want to define an implicit singleton instance of AzaveaNumericWrapper for each of your numeric types. Assuming that you have types MyType and MyOtherType for which you defined instances of com.azavea.math.Numeric:
implicit object MyTypeIsNumeric extends AzaveaNumericWrapper[MyType]
implicit object MyOtherTypeIsNumeric extends AzaveaNumericWrapper[MyOtherType]
//...
Also, keep in mind that the apparent main purpose of azavea's Numeric class is to greatly enhance execution speed (mostly due to type parameter specialization). Using the wrapper as above, you lose the specialization and hence the speed that comes from it. Specialization has to be used all the way down: as soon as you call a generic method that is not specialized, you enter the world of unspecialized generics (even if that method then calls back into a specialized method). So in cases where speed matters, try to use azavea's Numeric directly instead of scala's Numeric (just because AzaveaNumericWrapper uses it internally does not mean that you will get any speed increase, as specialization won't happen here).
You may have noticed that in my examples I avoided defining instances of AzaveaNumericWrapper for types Int, Long and so on. This is because the standard library already provides implicit values of scala.math.Numeric for these types. You might be tempted to just hide them (via something like import scala.math.Numeric.{ShortIsIntegral => _}) so as to be sure that your own (azavea-backed) version is used, but there is no point. The only reason I can think of would be to make it run faster, but as explained above, it won't.

Scala singleton factories and class constants

OK, in the question about 'Class Variables as constants', I get the fact that the constants are not available until after the 'official' constructor has been run (i.e. until you have an instance). BUT, what if I need the companion singleton to make calls on the class:
object thing {
  val someConst = 42
  def apply(x: Int) = new thing(x)
}

class thing(x: Int) {
  import thing.someConst
  val field = x * someConst
  override def toString = "val: " + field
}
If I create the companion object first, the 'new thing(x)' (in the companion) causes an error. However, if I define the class first, the 'x * someConst' (in the class definition) causes an error.
I also tried placing the class definition inside the singleton.
object thing {
  var someConst = 42
  def apply(x: Int) = new thing(x)

  class thing(x: Int) {
    val field = x * someConst
    override def toString = "val: " + field
  }
}
However, doing this gives me a 'thing.thing' type object
val t = thing(2)
results in
t: thing.thing = val: 84
The only useful solution I've come up with is to create an abstract class, a companion and an inner class (which extends the abstract class):
abstract class thing

object thing {
  val someConst = 42
  def apply(x: Int) = new privThing(x)

  class privThing(x: Int) extends thing {
    val field = x * someConst
    override def toString = "val: " + field
  }
}

val t1 = thing(2)
val tArr: Array[thing] = Array(t1)
OK, 't1' still has type of 'thing.privThing', but it can now be treated as a 'thing'.
However, it's still not an elegant solution. Can anyone tell me a better way to do this?
PS. I should mention, I'm using Scala 2.8.1 on Windows 7
First, the error you're seeing (you didn't tell me what it is) isn't a runtime error. The thing constructor isn't called when the thing singleton is initialized -- it's called later when you call thing.apply, so there's no circular reference at runtime.
Second, you do have a circular reference at compile time, but that doesn't cause a problem when you're compiling a scala file that you've saved on disk -- the compiler can even resolve circular references between different files. (I tested. I put your original code in a file and compiled it, and it worked fine.)
Your real problem comes from trying to run this code in the Scala REPL. Here's what the REPL does and why this is a problem in the REPL. You're entering object thing and as soon as you finish, the REPL tries to compile it, because it's reached the end of a coherent chunk of code. (Semicolon inference was able to infer a semicolon at the end of the object, and that meant the compiler could get to work on that chunk of code.) But since you haven't defined class thing it can't compile it. You have the same problem when you reverse the definitions of class thing and object thing.
The solution is to nest both class thing and object thing inside some outer object. This will defer compilation until that outer object is complete, at which point the compiler will see the definitions of class thing and object thing at the same time. You can run import thingwrapper._ right after that to make class thing and object thing available in global scope for the REPL. When you're ready to integrate your code into a file somewhere, just ditch the outer class thingwrapper.
object thingwrapper {
  // you only need a wrapper object in the REPL
  object thing {
    val someConst = 42
    def apply(x: Int) = new thing(x)
  }

  class thing(x: Int) {
    import thing.someConst
    val field = x * someConst
    override def toString = "val: " + field
  }
}
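In the REPL this then works as expected (the inferred type follows from the definitions above):

import thingwrapper._
val t = thing(2) // t: thing = val: 84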
Scala 2.12 or later could benefit from SIP-23, which just (August 2016) passed to the next iteration (considered a "good idea", but still a work in progress):
Literal-based singleton types
Singleton types bridge the gap between the value level and the type level and hence allow the exploration in Scala of techniques which would typically only be available in languages with support for full-spectrum dependent types.
Scala’s type system can model constants (e.g. 42, "foo", classOf[String]).
These are inferred in cases like object O { final val x = 42 }. They are used to denote and propagate compile time constants (See 6.24 Constant Expressions and discussion of “constant value definition” in 4.1 Value Declarations and Definitions).
However, there is no surface syntax to express such types. This forces people who need them to create macros that provide workarounds (e.g. shapeless).
This can be changed in a relatively simple way, as the whole machinery to enable this is already present in the scala compiler.
type _42 = 42.type
type Unt = ().type
type _1 = 1 // .type is optional for literals
final val x = 1
type one = x.type // … but mandatory for identifiers