Understanding Scala AnyVal and Null

Why does the following compile? I understand that AnyVal instances correspond to things in the underlying host system that cannot be constructed, and that Null is a subtype of all reference types but not of value types. I pass an AnyVal type (Boolean) to safeMapDifferent, but I don't see how it can satisfy the constraint U >: Null.
object MyMainScala extends App {
  implicit class RichObject[T](o: T) {
    def safeMap[U](method: T => U)(implicit ev: Null <:< U): U =
      Option(o).flatMap(result => Option(method(result))).orNull

    def safeMapDifferent[U >: Null](method: T => U): U =
      Option(o).flatMap(result => Option(method(result))).orNull
  }

  override def main(args: Array[String]): Unit = {
    val testSubject = new Object() {
      def scalaBoolean: Boolean = ???
    }
    // println(testSubject.safeMap(_.scalaBoolean)) // If not commented out, this fails to compile, as I expect.
    println(testSubject.safeMapDifferent(_.scalaBoolean).getClass) // Why does it compile?
  }
}

It's because of autoboxing. If you look in Predef.scala, you will see a bunch of implicit conversion methods that convert Scala AnyVal types to their Java counterparts.
/** @group conversions-anyval-to-java */
implicit def boolean2Boolean(x: Boolean): java.lang.Boolean = x.asInstanceOf[java.lang.Boolean]
When you invoke your print method, println(testSubject.safeMapDifferent(_.scalaBoolean).getClass), you are providing a T => U value, i.e. _.scalaBoolean, which takes testSubject as its parameter (satisfying the T type parameter) and returns Boolean, which does not satisfy U >: Null. Instead of reporting an error, the compiler looks for implicit methods that can convert Boolean into the expected U >: Null type, and it finds boolean2Boolean in Predef.scala, which satisfies this constraint because java.lang.Boolean is a reference type. Hence the code compiles and executes correctly.
def foo[U >: Null](o: U) = println(o)
foo(true) // compiles
foo[java.lang.Boolean](true) // compiles: true is implicitly converted to java.lang.Boolean, which is a reference type and a supertype of scala.Null.
To avoid this, provide the type parameter explicitly:
foo[Boolean](true) // won't compile.
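A minimal, runnable sketch of the boxing at work, adapted from the answer's foo (however the compiler arrives at a reference type for U, the runtime value ends up as a boxed java.lang.Boolean):

```scala
// Null is a subtype of reference types only, so U must end up as a
// reference type or a common supertype such as Any.
def foo[U >: Null](o: U): U = o

// The Boolean argument is boxed on the way in; at runtime the value
// is an instance of java.lang.Boolean, which is what getClass reports.
val boxed = foo(true)
println(boxed.getClass) // class java.lang.Boolean
```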

Related

Scala type parameter inference based on return type (function vs method)

I am trying to create a small matching library in Scala.
I have the following type representing a matcher that expresses a constraint on a type T:
trait Matcher[-T] extends (T => Boolean)
and a matches function that checks whether that constraint holds on a given instance:
def matches[A](x: A, m: Matcher[A]) = m(x)
With this I would like to be able to write checks like:
matches(Option(1), contains(1))
matches(Seq(1,2), contains(1))
where the contains can abstract over any container. I tried the following abstraction using type classes:
trait F[-C[_]] {
  def contains[A >: B, B](c: C[A], x: B): Boolean
}
which I can then use to define contains function:
def contains[A, B[A]](y: A)(implicit f: F[B]): Matcher[B[A]] = new Matcher[B[A]] {
  override def apply(v1: B[A]): Boolean = f.contains(v1, y)
}
with two implicit definitions, one for Option:
implicit object OptionF extends F[Option] {
  override def contains[A >: B, B](c: Option[A], x: B): Boolean = c.contains(x)
}
and for Iterable:
implicit object IterableF extends F[Iterable] {
  override def contains[A >: B, B](c: Iterable[A], x: B): Boolean = c.exists(_ == x)
}
However, I get errors on each call to matches. They are all the same:
Error:(93, 39) ambiguous implicit values:
both object OptionF in object MatchExample of type MatchExample.OptionF.type
and object IterableF in object MatchExample of type MatchExample.IterableF.type
match expected type MatchExample.F[B]
matches(Option(1), contains(1))
It seems that type inference cannot deduce the type properly, which is why both implicits match.
How can the matches function be defined without ambiguity?
I also tried to use implicit conversion to add the matches function directly to any type:
implicit class Mather2Any[A](that: A) {
  def matches(m: Matcher[A]): Boolean = m(that)
}
and that is working just fine:
Option(x1) matches contains(x1)
lx matches contains(x1)
lx matches contains(y1)
What I don't understand is why the matches function does not work while the method does. It looks like the problem is that inference is based only on the return type. For example, instead of contains I could use isEmpty with no arguments, which again works with the matches method but not with the function.
The full code listing is in this gist.
What you need is to split the parameters into two lists:
def matches[A](x: A)(m: Matcher[A]) = m(x)
matches(Option(1))(contains(1))
matches(Seq(1,2))(contains(1))
A gets inferred from x and is then available while type-checking m. With Mather2Any you have the same situation.
Side note: variations on this get asked often, I just find it faster to reply than to find a duplicate.
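A self-contained sketch of the fix, with a simplified contains that only supports Iterable (the simplification is mine, not from the question):

```scala
trait Matcher[-T] extends (T => Boolean)

// Simplified contains: builds a Matcher over any Iterable.
def contains[A](y: A): Matcher[Iterable[A]] =
  new Matcher[Iterable[A]] {
    override def apply(c: Iterable[A]): Boolean = c.exists(_ == y)
  }

// Two parameter lists: A is fixed by `x` before `m` is type-checked.
def matches[A](x: A)(m: Matcher[A]): Boolean = m(x)

matches(Seq(1, 2, 3))(contains(2)) // true, thanks to Matcher's contravariance
```

Because Matcher is contravariant in T, a Matcher[Iterable[Int]] is accepted where a Matcher[Seq[Int]] is expected.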

Scala: how to determine if a type is nullable

I have two questions about nullable types in Scala:
Let's say I wish to define a new class: class myClass[T](x: T), and I'd like to make sure that T is nullable. How do I do that?
I'd like to write a function def myFunc(x: T) (not as part of the previous question), and I'd like to perform one thing if T is nullable or another if not. The difference from the previous question is that here I don't wish to limit T, but rather know if it's nullable or not. How do I do that?
In Scala, all types that extend AnyRef (the equivalent of Object) are nullable. Most of the Scala community avoids using nulls, though, and tends to be more explicit by representing the existence/absence of a value with Option.
1. Use a >: Null <: AnyRef bound:
# def f[T >: Null <: AnyRef](arg: T): T = arg
defined function f
# f(10)
cmd3.sc:1: inferred type arguments [Any] do not conform to method f's type parameter bounds [T >: Null <: AnyRef]
val res3 = f(10)
^
cmd3.sc:1: type mismatch;
found : Int(10)
required: T
val res3 = f(10)
^
Compilation Failed
# f("a")
res3: String = "a"
2. Use implicit type constraints with default values:
# def isNullable[T](arg: T)(implicit sn: Null <:< T = null, sar: T <:< AnyRef = null): Boolean = sn != null && sar != null
defined function isNullable
# isNullable(10)
res8: Boolean = false
# isNullable("a")
res9: Boolean = true
These are similar to static type bounds, except that they are checked during implicit resolution instead of type checking, and they are therefore allowed to fail if you provide default values for them (nulls in this case, no pun intended :))
"Nullable" means that it is a subclass of AnyRef (and not Nothing), therefore, you can enforce that MyClass takes only nullable instances as follows:
case class MyClass[T <: AnyRef](t: T)
MyClass("hey")
MyClass[String](null)
MyClass(null)
// MyClass[Int](3) won't compile, because `Int` is primitive
To determine whether a type is nullable, you could provide implicit methods that generate nullability tokens:
sealed trait Nullability[-T]

case object Nullable extends Nullability[AnyRef] {
  def isNull(t: Any): Boolean = t == null
}

case object NotNullable extends Nullability[AnyVal]

object Nullability {
  implicit def anyRefIsNullable[T <: AnyRef]: Nullability[T] = Nullable
  implicit def anyValIsNotNullable[T <: AnyVal]: Nullability[T] = NotNullable
}

def myFunc[T](t: T)(implicit nullability: Nullability[T]): Unit = {
  nullability match {
    case Nullable =>
      if (t == null) {
        println("that's a null")
      } else {
        println("that's a non-null object: " + t)
      }
    case NotNullable => println("That's an AnyVal: " + t)
  }
}
Now you can use myFunc as follows:
myFunc("hello")
myFunc(null)
myFunc(42)
// outputs:
// that's a non-null object: hello
// that's a null
// That's an AnyVal: 42
This won't compile if you try to use myFunc on Any, because the compiler won't be able to determine whether it's AnyRef or AnyVal, and the two implicit methods will clash. In this way, it can be ensured at compile time that we don't accidentally use myFunc on Any, for which the nullability cannot be determined at compile time.
Even though we don't use null in Scala that often (in favor of Option), you can force a function to take a nullable parameter with
def f[T <: AnyRef](x: T) = ???

Scala upper and lower type bound

I'm having trouble finding a way in scala to simultaneously impose an upper and lower type bound. I need to make a generic function where the type parameter is both hashable (subtype of AnyRef) and nullable (supertype of Null).
I could achieve the former like this:
def foo[T <: AnyRef](t: T) = ???
And the latter like this:
def bar[T >: Null](t: T) = ???
Is there a way that I can do both simultaneously? Thanks.
What about this?
def foo[T >: Null <: AnyRef](t: T) = ???
It should work. That is:
foo(42) // does not compile
foo(null) // compiles
foo("hello") // compiles
Any type that is a subclass of AnyRef can be assigned the value null, so you do not need the lower bound.
def foo[T <: AnyRef](x: T) = x
foo(null) // returns null
That said, since you need to be able to hash the value, it should be noted that if you attempt to dereference null (e.g. null.hashCode) you will get a NullPointerException. For example:
def foo[T <: AnyRef](x: T) = x.hashCode
foo(null) // Throws an NPE
Furthermore, any use of null in Scala programs is strongly discouraged. Bearing all that in mind, I think what you might really want is something like this, which works for any type:
def foo[T](x: Option[T]) = x.hashCode
foo(None) // Works. None is equivalent to no value (and Option(null) == None).
foo(Some(1)) // Works. Note an Int isn't an AnyRef or nullable!
foo(Some("Hello, world!")) // Works.
foo(Option(null)) // Works.
foo(Option(z)) // Works, where z can be any reference-type value, including null.
Option[T] is a functional means of dealing with undefined values (such as nullable types), and it works for any type T.

Scala covariance with lower bound

I am looking at scala in action book and it has this code
sealed abstract class Maybe[+A] {
  def isEmpty: Boolean
  def get: A
  def getOrElse[B >: A](default: B): B = {
    if (isEmpty) default else get
  }
}

final case class Just[A](value: A) extends Maybe[A] {
  def isEmpty = false
  def get = value
}

case object Nil extends Maybe[scala.Nothing] {
  def isEmpty = true
  def get = throw new NoSuchElementException("Nil.get")
}
If the signature of getOrElse is instead defined as
def getOrElse(default: A): A
it doesn't compile.
The author states
"The lower bound B >: A declares that the type parameter B is constrained to some super type of type A"
Yet I seem to be able to do this and it works
val j1 = Just(1)
val l1 = j1.getOrElse("fdsf") //l1 : Any = 1
Is String a supertype of Int? What am I not understanding about why this works? It's as if the argument falls back to type Any (which it is an instance of) rather than Int.
In Scala you cannot have covariant types in method parameters.
This is because allowing covariant types in method parameters breaks type safety.
In order to have a covariant type you must use a bounded type:
getOrElse[B >: A](default: B): B
This says find some type, B, such that it is a superclass of A and that becomes the method return type.
In your case A is Int and you pass in a String. The only type B which is a superclass of both Int and String is Any.
In this case B becomes Any so the method returns Any.
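A runnable sketch of the inference described above, using the Maybe and Just classes from the question (the explicit type ascriptions are mine, added to make the inferred types visible):

```scala
sealed abstract class Maybe[+A] {
  def isEmpty: Boolean
  def get: A
  def getOrElse[B >: A](default: B): B = if (isEmpty) default else get
}
final case class Just[A](value: A) extends Maybe[A] {
  def isEmpty = false
  def get = value
}

val j1 = Just(1)
val l1: Any = j1.getOrElse("fdsf") // B inferred as Any; l1 == 1
val l2: Int = j1.getOrElse(0)      // B inferred as Int
// j1.getOrElse[Int]("fdsf")       // won't compile once B is pinned to Int
```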

what's different between <:< and <: in scala

I already know that:
<: is Scala syntax for a type constraint,
while <:< is a type that leverages Scala implicits to express the same kind of type constraint.
for example:
object Test {
  // the functions foo and bar have the same effect
  def foo[A](i: A)(implicit ev: A <:< java.io.Serializable) = i
  foo(1) // compile error
  foo("hi")

  def bar[A <: java.io.Serializable](i: A) = i
  bar(1) // compile error
  bar("hi")
}
But I want to know when we need to use <: versus <:<.
And if we already have <:, why do we need <:<?
Thanks!
The main difference between the two is that <: is a constraint on the type, while <:< is a type for which the compiler has to find evidence when it is used as an implicit parameter. What that means for our program is that in the <: case, the type inferencer will try to find a type that satisfies the constraint. E.g.
def foo[A, B <: A](a: A, b: B) = (a,b)
scala> foo(1, List(1,2,3))
res1: (Any, List[Int]) = (1,List(1, 2, 3))
Here the inferencer finds that Int and List[Int] have the common super type Any, so it infers that for A to satisfy B <: A.
<:< is more restrictive, because the type inferencer runs before the implicit resolution. So the types are already fixed when the compiler tries to find the evidence. E.g.
def bar[A,B](a: A, b: B)(implicit ev: B <:< A) = (a,b)
scala> bar(1,1)
res2: (Int, Int) = (1,1)
scala> bar(1,List(1,2,3))
<console>:9: error: Cannot prove that List[Int] <:< Int.
bar(1,List(1,2,3))
^
1. def bar[A <: java.io.Serializable](i: A) = i
<: guarantees that the instance i of type parameter A will be a subtype of Serializable.
2. def foo[A](i: A)(implicit ev: A <:< java.io.Serializable) = i
<:< guarantees that the execution context will contain an implicit value (for the ev parameter) of type A <:< java.io.Serializable, i.e. evidence that A is a subtype of Serializable.
This implicit is defined in Predef.scala; for the foo method it proves that the type parameter A is a subtype of Serializable:
implicit def conforms[A]: A <:< A = singleton_<:<.asInstanceOf[A <:< A]
A contrived example of using <:<:
class Boo[A](x: A) {
  def get: A = x
  def div(implicit ev: A <:< Double) = x / 2
  def inc(implicit ev: A <:< Int) = x + 1
}
val a = new Boo("hi")
a.get // OK
a.div // compile-time error: String is not a subtype of Double
a.inc // compile-time error: String is not a subtype of Int

val b = new Boo(10.0)
b.get // OK
b.div // OK
b.inc // compile-time error: Double is not a subtype of Int

val c = new Boo(10)
c.get // OK
c.div // compile-time error: Int is not a subtype of Double
c.inc // OK
If we do not call the methods whose <:< evidence cannot be found, everything compiles and executes.
There are definitely differences between <: and <:<; here is my attempt at explaining which one you should pick.
Let's take two classes:
trait U
class V extends U
The type constraint <: is always used because it drives type inference. That's the only thing it can do: constrain the type on its left-hand side.
The constrained type has to be referenced somewhere, usually in the parameter list (or return type), as in:
def whatever[A <: U](p: A): List[A] = ???
That way, the compiler will throw an error if the input is not a subclass of U, and at the same time allow you to refer to the input's type by name for later use (for example in the return type). Note that if you don't have that second requirement, all this isn't necessary (with exceptions...), as in:
def whatever(p: U): String = ??? // this will obviously only accept T <: U
The Generalized Type Constraint <:< on the other hand, has two uses:
You can use it as an after-the-fact proof that some type was inferred. As in:
class List[+A] {
  def sum(implicit ev: A =:= Int) = ???
}
You can create such a list of any type, but sum can only be called when you have the proof that A is actually Int.
You can use the above 'proof' as a way to infer even more types. This allows you to infer types in two steps instead of one.
For example, in the above List class, you could add a flatten method:
def flatten[B](implicit ev: A <:< List[B]): List[B]
This isn't just a proof, this is a way to grab that inner type B with A now fixed.
This can be used within the same method as well: imagine you want to write a utility sort function, and you want both the element type T and the collection type Coll. You could be tempted to write the following:
def sort[T, Coll <: Seq[T]](l: Coll): Coll
But T isn't constrained to be anything in there: it doesn't appear in the arguments nor output type. So T will end up as Nothing, or Any, or whatever the compiler wants, really (usually Nothing). But with this version:
def sort[T, Coll](l: Coll)(implicit ev: Coll <:< Seq[T]): Coll
Now T appears in the parameter's types. There will be two inference runs (one per parameter list): Coll will be inferred to whatever was given, and then, later on, an implicit will be looked for, and if found, T will be inferred with Coll now fixed. This essentially extracts the type parameter T from the previously-inferred Coll.
So essentially, <:< checks (and potentially infers) types as a side-effect of implicit resolution, so it can be used in different places / at different times than type parameter inference. When they happen to do the same thing, stick to <:.
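A runnable sketch of that two-step inference (the Ordering parameter, the sorted call, and the Seq[T] return type are my additions, made to keep the example concrete):

```scala
// T is extracted from Coll via the <:< evidence in the second
// parameter list; once T is fixed, Ordering[T] can be resolved too.
def sort[T, Coll](l: Coll)(implicit ev: Coll <:< Seq[T],
                           ord: Ordering[T]): Seq[T] =
  ev(l).sorted // <:< extends From => To, so ev upcasts l to Seq[T]

sort(List(3, 1, 2)) // Seq(1, 2, 3)
```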
After some thinking, I believe there is another difference.
For example:
object TestAgain {
  class Test[A](a: A) {
    def foo[A <: AnyRef] = a
    def bar(implicit ev: A <:< AnyRef) = a
  }

  val test = new Test(1)
  test.foo // returns 1
  test.bar // error: Cannot prove that Int <:< AnyRef.
}
This means:
The scope of <: is just the method's own type parameter list, as in foo[A <: AnyRef]. In the example, the method foo declares its own type parameter A, which shadows the A of class Test[A], so the bound never applies to the class's A.
The scope of <:<, on the other hand, is resolved against types that are already fixed: bar has no type parameter of its own, so its constraint applies to Test[A]'s A.
So I think that is the main difference.