I'm a beginner with Scala, but I'm wondering why the following compiles in Scala 3?
import scala.language.strictEquality
case class C(x: String => String) derives CanEqual
Shouldn't this fail because CanEqual is not defined on String => String and hence can't be derived?
No it shouldn't.
Simply speaking, with derives CanEqual here you just let the compiler know that it's legal to use == or != with instances of C. The compiler will add an instance of CanEqual for C itself.
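Roughly speaking (this is a sketch, not the exact code the compiler generates), for a non-generic case class the derivation amounts to something like:
import scala.language.strictEquality

case class C(x: String => String)

object C {
  // hand-written equivalent of `derives CanEqual` for a non-generic class:
  // just permission to compare two Cs with == or !=, nothing more
  given CanEqual[C, C] = CanEqual.derived
}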
Things are different with generic classes (type constructors): there, derives CanEqual requires the presence of CanEqual instances for all type parameters. That constraint, however, is just a consequence of using this particular method of deriving instances. You can always provide an instance yourself, and then even for generic classes you may not need CanEqual for the type parameters. E.g. this seems to work:
class C[T](t: T)
given CanEqual[C[_], C[_]] = CanEqual.derived
println(new C[Int](1) == new C[String]("1")) // false
As I said, it's just a green light for using == or !=; it doesn't construct/derive the comparison operation itself. The equals() method is used, whatever its implementation is. So in your case it will use the default case class equals() method, which compares the function in x by reference.
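For example (a small sketch), with the case class from the question, equality of two C instances comes down to reference equality of the stored functions:
import scala.language.strictEquality

case class C(x: String => String) derives CanEqual

val f: String => String = s => s
val g: String => String = s => s

C(f) == C(f) // true: both wrap the same function instance
C(f) == C(g) // false: behaviorally identical lambdas, but different instances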
Worth noting that scala.language.strictEquality doesn't limit the usage of equals().
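For instance (a quick sketch), equals still compiles where == is rejected:
import scala.language.strictEquality

class A
class B

// (new A) == (new B)   // rejected: no CanEqual[A, B] instance in scope
(new A).equals(new B)   // still compiles: equals is an ordinary method taking Any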
More reading:
https://docs.scala-lang.org/scala3/book/ca-multiversal-equality.html
https://dotty.epfl.ch/docs/reference/contextual/multiversal-equality.html
plus I find the descriptions of this feature in the new editions of "Programming in Scala" and "Programming Scala" a bit easier to digest
Hope it helps a bit
How are primitive types in Scala objects if we do not use the word "new" to instantiate instances of those primitives? Programming in Scala by Martin Odersky describes the reasoning as a "trick" that makes these value classes both abstract and final, which did not quite make sense to me: how are we able to make an instance of these classes if they are abstract? And if such a primitive literal is stored somewhere, say in a variable, does that make the variable an object?
I assume you are using Scala 2.13 with its implementation of literal types. For this explanation you can think of type and class as synonyms, although in reality they are different concepts.
To put it all together, it's worth treating each primitive type as a set of subtypes, each of which represents the type of one single literal value.
So the literal 1 is a value and a type at the same time (the instance 1 of type 1), and it is a subtype of the value class Int.
Let's prove that 1 is a subtype of Int by using implicitly:
implicitly[1 <:< Int] // compiles
The same but using val:
val one:1 = 1
implicitly[one.type <:< Int] // compiles
So one is kind of an instance (object) of type 1 (and an instance of type Int at the same time, because Int is a supertype of 1). You can use this value the same way as any other object (pass it to a function, assign it to other vals, etc.).
val one:1 = 1
val oneMore: 1 = one
val oneMoreGeneric: Int = one
val oneNew:1 = 1
We can assume that all these vals contain the same instance of one single object, because from a practical perspective it doesn't actually matter whether it is the same object or not.
Technically it's not an object at all, because primitives come from the Java (JVM) world, where primitives are not objects; they are a different kind of entity.
The Scala language tries to unify these two concepts into one (everything is a class), so developers don't have to think too much about the differences.
But there are still some differences behind the scenes. Each value class is a subtype of AnyVal, while the rest of the classes (regular classes) are subtypes of AnyRef.
implicitly[1 <:< AnyVal] //compiles
implicitly[Int <:< AnyVal] // compiles
trait AnyTrait
implicitly[AnyTrait <:< AnyVal] // fails to compile
implicitly[AnyTrait <:< AnyRef] // compiles
In addition, because of their non-class nature in the JVM, you can't extend a value class like a regular class or use new to create an instance (the Scala compiler emulates new by itself). That's why, from the perspective of extending value classes, you should think of them as final, and from the perspective of creating instances manually, you should think of them as abstract. But from most other perspectives a value class behaves like any other regular class.
So the Scala compiler can, in a way, extend Int with the types 1, 2, 3, ... and create instances of them for vals, but developers can't do it manually.
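A small sketch of what that looks like in practice (assuming Scala 2.13 with literal types):
val a: 3 = 3     // fine: the literal 3 is the only instance of type 3
// val b: 3 = 4  // does not compile: 4 is not an instance of type 3
val c: Int = a   // fine: 3 <: Int, so this is just an upcast

// but you cannot do either of these yourself:
// new Int                  // Int behaves as abstract: no manual instantiation
// class MyInt extends Int  // Int behaves as final: no user-defined subclasses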
Value classes can be used to achieve type safety without the runtime overhead of allocating wrapper objects.
I had the impression that at runtime such types/classes would "not exist" and would be seen as the simple underlying type (for instance, a value class case class X(i: Int) extends AnyVal would be a simple Int at runtime).
But if you do call a .toString method on a value class instance it would print something like:
scala> val myValueClass = X(3)
myValueClass: X = 3
scala> myValueClass.toString
res5: String = X(3)
so I guess the compiler includes some information after all?
Not really. The compiler creates a static method (in Scala this corresponds to the class's companion object) which is called with your int value as a parameter in order to simulate calling a method on your value class-typed object.
Your value class itself only exists in the source code. In compiled bytecode an actual primitive int is used and static methods are called rather than new object instances with real method calls. You can read more about this mechanism here.
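As a rough sketch of the mechanism (the real encoding differs in details, and the names below are only illustrative), a method on a value class is compiled to a static-like extension method on the companion:
// source code:
case class X(i: Int) extends AnyVal {
  def double: Int = i * 2
}

// conceptually compiled to something like:
//   object X { def double$extension(i: Int): Int = i * 2 }
// and a call site  x.double  becomes  X.double$extension(<the underlying int>),
// so no X instance needs to exist at runtime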
Value classes are designed so that adding or removing extends AnyVal (when it is legal) shouldn't change the results of calculations (except that even non-case value classes get equals and hashCode defined automatically, like case classes). This requires that in some circumstances they survive as real objects, e.g.
def toString(x: Any) = x.toString
toString(myValueClass)
but the situation in your question isn't one of them.
http://docs.scala-lang.org/sips/completed/value-classes.html#expansion-of-value-classes explains more precisely how value classes are implemented and is useful to see in what cases they survive, though some details may have changed since.
I want to use a private constructor in a macro. This example is a positive integer, but the basic pattern could be used not only for other numeric types (like even numbers) but also for string-derived types like email addresses or directory names. By making the constructor private, the user is denied the opportunity to construct an illegal value. I have the following code:
object PosInt {
  import language.experimental.macros
  import reflect.runtime.universe._
  import reflect.macros.Context

  def op(inp: Int): Option[PosInt] = if (inp > 0) Some(new PosInt(inp)) else None

  def apply(param: Int): PosInt = macro apply_impl

  def apply_impl(c: Context)(param: c.Expr[Int]): c.Expr[PosInt] = {
    import c.universe._
    param match {
      case Expr(Literal(i)) if (i.value.asInstanceOf[Int] > 0) =>
      case Expr(Literal(i)) if (i.value.asInstanceOf[Int] == 0) => c.abort(c.enclosingPosition, "0 is not a positive integer")
      case Expr(Literal(i)) => c.abort(c.enclosingPosition, "is not a positive integer")
      case _ => c.abort(c.enclosingPosition, "Not a Literal")
    }
    reify { new PosInt(param.splice) }
  }
}

class PosInt(val value: Int) extends AnyVal
However, if I make the PosInt constructor private, although the macro compiles as expected, I get an error when I try to use the macro. I can't work out how to build the expression tree manually, and I'm not sure whether that would help anyway. Is there any way I can do this?
You still can't use a private constructor even if PosInt is not a value class, so I'll accept an answer that doesn't use a value class. The disadvantage of value classes is that they are subject to type erasure. Plus, classes that I'm interested in, like subsets of 2D coordinates, can't be implemented as value classes anyway. I'm not actually interested in positive integers; I'm just using them as a simple test bed. I'm using Scala 2.11 M5. Scala 2.11 will add the quasiquotes feature, but I haven't worked out how to use quasiquotes yet, as all the material on them at the moment seems to assume a familiarity with Macro Paradise, which I don't have.
Unfortunately for what you are trying to achieve, macros do not work this way. They just manipulate the AST at compile time. Whatever the final result is, it is always something you could have written literally in Scala (without the macro).
Thus, in order to constrain the possible values of PosInt, you will need a runtime check somewhere, either in a public constructor or in a factory method on the companion object.
If runtime exceptions are not palatable to you, then one possible approach would be:
Make the constructor private on the class.
Provide (for example) a create method on the companion object that returns Option[PosInt] (or Try[PosInt], or some other type of your choice that allows you to express a "failure" when the argument is out of range).
Provide an apply method on the companion object similar to your example, which verifies at compile time that the argument is in range and then returns an expression tree that simply calls create(x).get.
Calling .get on the Option is acceptable in this case because you are sure that it will never be None.
The downside is that you have to repeat the check twice: once at compile time, and once at runtime.
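A minimal sketch of that shape (using a regular class, since a non-value-class answer is acceptable here; the name create is just illustrative):
class PosInt private (val value: Int)

object PosInt {
  // runtime-checked factory: the only way to build a PosInt from an arbitrary Int
  def create(i: Int): Option[PosInt] =
    if (i > 0) Some(new PosInt(i)) else None

  // the macro-backed apply would check the literal at compile time and then
  // expand to  PosInt.create(<literal>).get , which is safe because the same
  // check has already succeeded during compilation
}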
I'm not an expert, but I figured I'd give it a shot...
In Java, the scope of a private constructor is limited to the same class... so the PosInt object would need to be moved into the scope of the same class from which it's being called.
With that said, I found an article that shows two ways you can keep a class from being inherited: http://www.developer.com/java/other/article.php/3109251/Stopping-Your-Class-from-Being-Inherited-in-Java-the-Official-Way-and-the-Unofficial-Way.htm
It describes using the "final" keyword in the class declaration to prevent it from being inherited. That's the "official" way. The "unofficial" way is to make the constructor private, but add a public static method that returns an object of the class...
Yes, I know, it is an old question... but it was left unanswered. You never know when an old question will be the top hit in someone's search results...
I am trying to understand the following piece of code, but I don't know what R#X means. Could someone help me?
// define the abstract types and bounds
trait Recurse {
type Next <: Recurse
// this is the recursive function definition
type X[R <: Recurse] <: Int
}
// implementation
trait RecurseA extends Recurse {
type Next = RecurseA
// this is the implementation
type X[R <: Recurse] = R#X[R#Next]
}
object Recurse {
// infinite loop
type C = RecurseA#X[RecurseA]
}
You may obtain a type from an existing instance of a class:
class C {
type someType = Int
}
val c = new C
type t = c.someType
Or you may refer to the type directly, without instantiating an object: C#someType. This form is very useful in type expressions, where you have no room to create intermediate variables.
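For example (a small sketch using the class above), the projection can appear in a signature with no instance of C in sight:
class C {
  type someType = Int
}

// the parameter type is a type projection; no value of C is needed
def f(x: C#someType): Int = x

f(42) // compiles: C#someType is an alias for Int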
Adding some clarifications as it was suggested in comments.
Disclaimer: I have only a partial understanding of how Scala's type system works. I've tried to read the documentation several times, but was only able to extract patchy knowledge from it. However, I have rich experience with Scala and can predict the compiler's behavior in individual cases fairly well.
# is called type projection, and type projection complements normal hierarchical type access via the dot. In every type expression Scala implicitly uses both.
The Scala reference gives examples of such invisible conversions:
t                     ε.type#t
Int                   scala.type#Int
scala.Int             scala.type#Int
data.maintable.Node   data.maintable.type#Node
As you see, every trivial usage of type projection actually works on a type (the one obtained with .type), not on an object. The main practical difference (I'm bad with definitions) is that an object's type is as ephemeral as the object itself: it may change in appropriate circumstances, such as inheritance from an abstract class type. In contrast, the type on the left of a type projection is as stable as the sun. Types (don't confuse them with classes) in Scala are not first-class citizens and cannot be overridden further.
There are different places where a type expression can be used, and some of them allow only stable types. So, basically, a type projection is the more stable way to refer to a type.
When I compile:
object Test extends App {
  implicit def pimp[V](xs: Seq[V]) = new {
    def dummy(x: V) = x
  }
}
I get:
$ fsc -d aoeu go.scala
go.scala:3: error: Parameter type in structural refinement may not refer to an abstract type defined outside that refinement
def dummy(x: V) = x
^
one error found
Why?
(Scala: "Parameter type in structural refinement may not refer to an abstract type defined outside that refinement" doesn't really answer this.)
It's disallowed by the spec. See 3.2.7 Compound Types.
Within a method declaration in a structural refinement, the type of any value parameter may only refer to type parameters or abstract types that are contained inside the refinement. That is, it must refer either to a type parameter of the method itself, or to a type definition within the refinement. This restriction does not apply to the function’s result type.
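A minimal illustration of the rule (a sketch, assuming Scala 2.x): a type parameter of the refined method itself is fine, while a type parameter of the enclosing method is not:
object Allowed {
  // OK: W is a type parameter of dummy itself, declared inside the refinement
  implicit def ok[V](xs: Seq[V]) = new {
    def dummy[W](x: W) = x
  }
}

object Rejected {
  // Not OK: V is defined outside the refinement, so this is rejected with
  // "Parameter type in structural refinement may not refer to an abstract type ..."
  // implicit def bad[V](xs: Seq[V]) = new {
  //   def dummy(x: V) = x
  // }
}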
Before Bug 1906 was fixed, the compiler would have compiled this and you'd have gotten a method not found at runtime. This was fixed in revision 19442 and this is why you get this wonderful message.
The question is then, why is this not allowed?
Here is a very detailed explanation from Gilles Dubochet, from the Scala mailing list back in 2007. It roughly boils down to the fact that structural types use reflection, and the compiler does not know how to look up the method to call if it uses a type defined outside the refinement (the compiler does not know ahead of time how to fill in the second parameter of getMethod in p.getClass.getMethod("pimp", Array(?))).
But go look at the post, it will answer your question and some more.
Edit:
Hello list.
I try to define structural types with abstract datatype in function
parameter. ... Any reason?
I have heard about two questions concerning the structural typing
extension of Scala 2.6 lately, and I would like to answer them here.
1. Why did we change Scala's native values (“int”, etc.) boxing scheme to Java's (“java.lang.Integer”).
2. Why is the restriction on parameters for structurally defined methods (“Parameter type in structural refinement may not refer to abstract type defined outside that same refinement”) required.
Before I can answer these two questions, I need to speak about the
implementation of structural types.
The JVM's type system is very basic (and corresponds to Java 1.4). That
means that many types that can be represented in Scala cannot be
represented in the VM. Path dependant types (“x.y.A”), singleton types
(“a.type”), compound types (“A with B”) or abstract types are all types
that cannot be represented in the JVM's type system.
To be able to compile to JVM bytecode, the Scala compilers changes the
Scala types of the program to their “erasure” (see section 3.6 of the
reference). Erased types can be represented in the VM's type system and
define a type discipline on the program that is equivalent to that of
the program typed with Scala types (saving some casts), although less
precise. As a side note, the fact that types are erased in the VM
explains why operations on the dynamic representation of types (pattern
matching on types) are very restricted with respect to Scala's type
system.
Until now all type constructs in Scala could be erased in some way.
This isn't true for structural types. The simple structural type “{ def
x: Int }” can't be erased to “Object” as the VM would not allow
accessing the “x” field. Using an interface “interface X { int x(); }”
as the erased type won't work either because any instance bound by a
value of this type would have to implement that interface which cannot
be done in presence of separate compilation. Indeed (bear with me) any
class that contains a member of the same name than a member defined in
a structural type anywhere would have to implement the corresponding
interface. Unfortunately this class may be defined even before the
structural type is known to exist.
Instead, any reference to a structurally defined member is implemented
as a reflective call, completely bypassing the VM's type system. For
example def f(p: { def x(q: Int): Int }) = p.x(4) will be rewritten
to something like:
def f(p: Object) = p.getClass.getMethod("x", Array(Int)).invoke(p, Array(4))
And now the answers.
“invoke” will use boxed (“java.lang.Integer”) values whenever the
invoked method uses native values (“int”). That means that the above
call must really look like “...invoke(p, Array(new
java.lang.Integer(4))).intValue”.
Integer values in a Scala program are already often boxed (to allow the
“Any” type) and it would be wasteful to unbox them from Scala's own
boxing scheme to rebox them immediately as java.lang.Integer.
Worst still, when a reflective call has the “Any” return type,
what should be done when a java.lang.Integer is returned? The called
method may either be returning an “int” (in which case it should be
unboxed and reboxed as a Scala box) or it may be returning a
java.lang.Integer that should be left untouched.
Instead we decided to change Scala's own boxing scheme to Java's. The
two previous problems then simply disappear. Some performance-related
optimisations we had with Scala's boxing scheme (pre-calculate the
boxed form of the most common numbers) were easy to use with Java
boxing too. In the end, using Java boxing was even a bit faster than
our own scheme.
“getMethod”'s second parameter is an array with the types of the
parameters of the (structurally defined) method to lookup — for
selecting which method to get when the name is overloaded. This is the
one place where exact, static types are needed in the process of
translating a structural member call. Usually, exploitable static types
for a method's parameter are provided with the structural type
definition. In the example above, the parameter type of “x” is known to
be “Int”, which allows looking it up.
Parameter types defined as abstract types where the abstract type is
defined inside the scope of the structural refinement are no problem
either:
def f(p: { def x[T](t: T): Int }) = p.x[Int](4)
In this example we know that any instance passed to “f” as “p” will
define “x[T](t: T)” which is necessarily erased to “x(t: Object)”. The
lookup is then correctly done on the erased type:
def f(p: Object) = p.getClass.getMethod("x", Array(Object)).invoke(p,
Array(new java.lang.Integer(4)))
But if an abstract type from outside the structural refinement's scope
is used to define a parameter of a structural method, everything breaks:
def f[T](p: { def x(t: T): Int }, t: T) = p.x(t)
When “f” is called, “T” can be instantiated to any type, for example:
f[Int](new { def x(t: Int) = t }, 4)
f[Any](new { def x(t: Any) = 5 }, 4)
The lookup for the first case would have to be “getMethod("x",
Array(int))” and for the second “getMethod("x", Array(Object))”, and
there is no way to know which one to generate in the body of
“f”: “p.x(t)”.
To allow defining a unique “getMethod” call inside “f”'s body for
any instantiation of “T” would require any object passed to “f” as the
“p” parameter to have the type of “t” erased to “Any”. This would be a
transformation where the type of a class' members depend on how
instances of this class are used in the program. And this is something
we definitely don't want to do (and can't be done with separate
compilation).
Alternatively, if Scala supported run-time types one could use them to
solve this problem. Maybe one day ...
But for now, using abstract types for structural method's parameter
types is simply forbidden.
Sincerely,
Gilles.
Discovered the problem shortly after posting this: I have to define a named class instead of using an anonymous class. (Still would love to hear a better explanation of the reasoning though.)
object Test extends App {
  case class G[V](xs: Seq[V]) {
    def dummy(x: V) = x
  }
  implicit def pimp[V](xs: Seq[V]) = G(xs)
}
works.
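For completeness, a small usage sketch of the working version (a hypothetical call site, assuming the object above is in scope):
import Test._

val xs: Seq[Int] = Seq(1, 2, 3)
// xs is converted to G(xs) via the implicit pimp, and G.dummy is an ordinary
// method, so no structural refinement (and no reflection) is involved
xs.dummy(5) // returns 5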