OK, in the question about 'Class Variables as constants', I get the fact that the constants are not available until after the 'official' constructor has been run (i.e. until you have an instance). BUT, what if I need the companion singleton to make calls on the class:
object thing {
  val someConst = 42
  def apply(x: Int) = new thing(x)
}

class thing(x: Int) {
  import thing.someConst
  val field = x * someConst
  override def toString = "val: " + field
}
If I define the companion object first, the 'new thing(x)' (in the companion) causes an error. However, if I define the class first, the 'x * someConst' (in the class definition) causes an error.
I also tried placing the class definition inside the singleton.
object thing {
  val someConst = 42
  def apply(x: Int) = new thing(x)

  class thing(x: Int) {
    val field = x * someConst
    override def toString = "val: " + field
  }
}
However, doing this gives me a 'thing.thing' type object
val t = thing(2)
results in
t: thing.thing = val: 84
The only useful solution I've come up with is to create an abstract class, a companion and an inner class (which extends the abstract class):
abstract class thing

object thing {
  val someConst = 42
  def apply(x: Int) = new privThing(x)

  class privThing(x: Int) extends thing {
    val field = x * someConst
    override def toString = "val: " + field
  }
}
val t1 = thing(2)
val tArr: Array[thing] = Array(t1)
OK, 't1' still has the type 'thing.privThing', but it can now be treated as a 'thing'.
However, it's still not an elegant solution. Can anyone tell me a better way to do this?
PS. I should mention, I'm using Scala 2.8.1 on Windows 7
First, the error you're seeing (you didn't tell me what it is) isn't a runtime error. The thing constructor isn't called when the thing singleton is initialized -- it's called later when you call thing.apply, so there's no circular reference at runtime.
Second, you do have a circular reference at compile time, but that doesn't cause a problem when you're compiling a scala file that you've saved on disk -- the compiler can even resolve circular references between different files. (I tested. I put your original code in a file and compiled it, and it worked fine.)
Your real problem comes from trying to run this code in the Scala REPL. Here's what the REPL does and why that's a problem. You enter object thing, and as soon as you finish, the REPL tries to compile it, because it has reached the end of a coherent chunk of code. (Semicolon inference was able to infer a semicolon at the end of the object, which meant the compiler could get to work on that chunk of code.) But since you haven't yet defined class thing, it can't compile it. You have the same problem when you reverse the definitions of class thing and object thing.
The solution is to nest both class thing and object thing inside some outer object. This defers compilation until that outer object is complete, at which point the compiler sees the definitions of class thing and object thing at the same time. You can run import thingwrapper._ right after that to make class thing and object thing available in global scope for the REPL. When you're ready to integrate your code into a file somewhere, just ditch the outer object thingwrapper.
object thingwrapper {
  // you only need a wrapper object in the REPL
  object thing {
    val someConst = 42
    def apply(x: Int) = new thing(x)
  }
  class thing(x: Int) {
    import thing.someConst
    val field = x * someConst
    override def toString = "val: " + field
  }
}
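Pasted into the REPL, a quick check looks roughly like this (2 * 42 = 84, matching the output shown earlier; the exact type the REPL prints is my assumption):

scala> import thingwrapper._
import thingwrapper._

scala> val t = thing(2)
t: thingwrapper.thing = val: 84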
Scala 2.12 or later could benefit from SIP-23, which just (August 2016) passed to the next iteration (considered a "good idea", but still a work in progress):
Literal-based singleton types
Singleton types bridge the gap between the value level and the type level and hence allow the exploration in Scala of techniques which would typically only be available in languages with support for full-spectrum dependent types.
Scala’s type system can model constants (e.g. 42, "foo", classOf[String]).
These are inferred in cases like object O { final val x = 42 }. They are used to denote and propagate compile time constants (See 6.24 Constant Expressions and discussion of “constant value definition” in 4.1 Value Declarations and Definitions).
However, there is no surface syntax to express such types. This forces people who need them to create macros that provide workarounds (e.g. shapeless).
This can be changed in a relatively simple way, as the whole machinery to enable this is already present in the scala compiler.
type _42 = 42.type
type Unt = ().type
type _1 = 1 // .type is optional for literals
final val x = 1
type one = x.type // … but mandatory for identifiers
The only way I can think of doing this, without creating a wrapper class, is to use Scala 3's union types, like this:
type Even = 0 | 2 | 4 | 6 | 8
val even : Even = 4
but that obviously has a limit. Is there a way to create the "entire" range?
As a follow up, what about for other ranges? Is there some way to create a function that restricts the type in some arbitrary way (as dangerous as that sounds)?
You can create a newtype with a smart constructor. There are several ways to do it.
First, manually, to show how it works:
trait Newtype[T] {
  type Type
  protected def wrap(t: T): Type = t.asInstanceOf[Type]
  protected def unwrap(t: Type): T = t.asInstanceOf[T]
}
type Even = Even.Type
object Even extends Newtype[Int] {
  def parse(i: Int): Either[String, Even] =
    if (i % 2 == 0) Right(wrap(i))
    else Left(s"$i is odd")

  implicit class EvenOps(private val even: Even) extends AnyVal {
    def value: Int = unwrap(even)
    def +(other: Even): Even = wrap(even.value + other.value)
    def -(other: Even): Even = wrap(even.value - other.value)
  }
}
You are creating a type Even which the compiler knows nothing about, so it cannot prove that an arbitrary value is an instance of it. But you can force-cast to it and back again; since the JVM at runtime won't be able to catch any issue with it, there is no problem (and since the compiler assumes nothing about Even, it cannot disprove anything by contradiction).
Since Even resolves to Even.Type - that is, the type Type within the Even object - Scala's implicit scope will automatically fetch all implicits defined in object Even, so you can place your extension methods and typeclasses there. This lets you pretend that the type has some methods defined.
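A quick usage sketch under the definitions above (the example values are mine; Scala 2.12+ for the right-biased Either):

// parse validates at the boundary; afterwards Even values can be
// combined via EvenOps without re-checking evenness
val sum: Either[String, Int] =
  for {
    a <- Even.parse(2)
    b <- Even.parse(4)
  } yield (a + b).value // Right(6)

Even.parse(3) // Left("3 is odd")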
In Scala 3 you can achieve the same with an opaque type. However, this representation has the nice property that it is easy to make it cross-compilable between Scala 2 and Scala 3. As a matter of fact, that's what Monix Newtype did, so you can use it instead of implementing this functionality yourself:
import monix.newtypes._

type Even = Even.Type
object Even extends Newtype[Int] {
  // ...
}
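For comparison, a minimal sketch of the Scala 3 opaque type approach mentioned above (my own illustration, not the Monix API; the names mirror the manual example):

object evens {
  // Inside this object, Even and Int are the same type;
  // outside it, Even is abstract and distinct from Int.
  opaque type Even = Int

  object Even {
    def parse(i: Int): Either[String, Even] =
      if (i % 2 == 0) Right(i)
      else Left(s"$i is odd")
  }

  extension (even: Even) {
    def value: Int = even
    def +(other: Even): Even = even.value + other.value
  }
}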
Another option is the older macro-annotation-based library Scala Newtype. It will take your type defined as a case class and rewrite the code to implement something similar to what we have above:
import io.estatico.newtype.macros.newtype

@newtype case class Even(value: Int)
However, it is harder to add your own smart constructor there, which is why it is usually paired with refined types. Then your code would look like:
import eu.timepit.refined._
import eu.timepit.refined.api.Refined
import eu.timepit.refined.numeric
import io.estatico.newtype.macros.newtype

@newtype case class Even(value: Int Refined numeric.Even)
object Even {
  def parse(i: Int): Either[String, Even] =
    refineV[numeric.Even](i).map(Even(_))
}
However, you might want to just use a plain refined type at this point, since the Even newtype wouldn't introduce any domain knowledge beyond what the refinement does.
There is a library X I'm working on, which depends on another library Y. To support multiple versions of Y, X publishes multiple artifacts named X_Y1.0, X_Y1.1, etc. This is done using multiple subprojects in SBT with version-specific source directories like src/main/scala-Y1.0 and src/main/scala-Y1.1.
So far, it worked well. One minor problem is that sometimes version-specific source directories are too much. Sometimes they require a lot of code duplication because it's syntactically impossible to extract just the tiny differences into separate files. Sometimes doing so introduces performance overhead or makes the code unreadable.
Trying to solve the issue, I've added macro annotations to selectively delete a part of the code. It works like this:
class MyClass {
  @UntilB1_0
  def f: Int = 1

  @SinceB1_1
  def f: Int = 2
}
However, it seems it only works for methods. When I try to use the macro on fields, compilation fails with an error saying "f is already defined as value f". Also, it doesn't work for classes and objects.
My suspicion is that macros are applied during compilation before resolving method overloads, but after basic checks like checking duplicate names.
Is there a way to make the macros work for fields, classes, and objects too?
Here's an example macro to demonstrate the issue.
import scala.annotation.{compileTimeOnly, StaticAnnotation}
import scala.language.experimental.macros
import scala.reflect.macros.blackbox

@compileTimeOnly("enable macro paradise to expand macro annotations")
class Delete extends StaticAnnotation {
  def macroTransform(annottees: Any*): Any = macro DeleteMacro.impl
}

object DeleteMacro {
  def impl(c: blackbox.Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
    import c.universe._
    c.Expr[Nothing](EmptyTree)
  }
}
When the annotation @Delete is used on methods, it works.
class MyClass {
  @Delete
  def f: Int = 1

  def f: Int = 2
}
// new MyClass().f == 2
However, it doesn't work for fields.
class MyClass {
  @Delete
  val f: Int = 1

  val f: Int = 2
}
// error: f is already defined as value f
First of all, good idea :)
It is a strange (and quite uncontrollable) behaviour, and I think that what you want to do is difficult to achieve with macros.
To understand why your expansion doesn't work, I printed the trees after each scalac phase.
Your expansion itself works; indeed, given this code:
class Foo {
  @Delete
  lazy val x: Int = 12
  val x: Int = 10

  @Delete
  def a: Int = 10
  def a: Int = 12
}
the code printed after typer is:
package it.unibo {
  class Foo extends scala.AnyRef {
    def <init>(): it.unibo.Foo = {
      Foo.super.<init>();
      ()
    };
    <empty>; // val removed
    private[this] val x: Int = 10;
    <stable> <accessor> def x: Int = Foo.this.x;
    <empty>; // def removed
    def a: Int = 12
  };
  ...
}
But, unfortunately, the error is thrown anyway. I'm going to explain why this happens.
In scalac, macros are expanded -- at least in Scala 2.13 -- during the packageobjects phase (so after the parser and namer phases).
Here, different things happen, such as (as said here):
infers types,
checks whether types match,
searches for implicit arguments and adds them to trees,
does implicit conversions,
checks whether all type operations are allowed (for example type cannot be a subtype of itself),
resolves overloading,
type-checks parent references,
checks type violations,
searches for implicits,
expands macros,
and creates additional methods for case classes (like apply or copy).
The essential problem here is that we cannot change this order: duplicate val definitions are checked before macro expansion, while the method overloading check happens after it. For this reason @Delete works with methods but doesn't work with vals.
To solve your problem, I think it is necessary to use a compiler plugin: there you can add a phase before the namer, so no error will be thrown. Building a compiler plugin is more difficult than writing macros, but I think it is the best option for your case.
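For reference, a minimal skeleton of such a plugin might look like this (a sketch, untested; the plugin/phase names and the by-name annotation match are my own assumptions, not code from the question):

import scala.tools.nsc.{Global, Phase}
import scala.tools.nsc.plugins.{Plugin, PluginComponent}

class DeletePlugin(val global: Global) extends Plugin {
  val name = "delete-defs"
  val description = "drops definitions annotated with @Delete before the namer runs"
  val components = List(new DeleteComponent(global))
}

class DeleteComponent(val global: Global) extends PluginComponent {
  import global._

  val phaseName = "delete-defs"
  val runsAfter = List("parser")
  override val runsBefore = List("namer") // run before duplicate-name checks

  def newPhase(prev: Phase): Phase = new StdPhase(prev) {
    def apply(unit: CompilationUnit): Unit =
      unit.body = new Transformer {
        override def transform(tree: Tree): Tree = tree match {
          // Only parse trees exist at this point, so the annotation can
          // only be matched by its source name, not by a resolved symbol.
          case md: MemberDef if md.mods.hasAnnotationNamed(TypeName("Delete")) =>
            EmptyTree
          case other =>
            super.transform(other)
        }
      }.transform(unit.body)
  }
}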
On a recent worksheet I was presented with the question asking what would be the output of the following code:
class A { def m(x: Double) = x + x }
class B[Any] extends A { def m(x: Any) = print(x) }
class C[Any] { def m(x: Double) = x + x; def m(x: Any) = print(x) }

val obj1 = new B[Int]; val obj2 = new C[Any]
obj1.m(1); obj1.m(2.3); obj2.m(4); obj2.m(5.6)
I'm quite confused as to what having a concrete type in the square brackets after the class name would mean (i.e. class B[Any]). Is the later expression val obj1 = new B[Int] valid because Int <: Any, Int being a subclass of Any?
When later running the code snippet, the result was simply "1" being printed. This was not what I had expected: I thought the call obj1.m(2.3) would resolve to def m(x: Any), but it seems the compiler actually went up to A and called the m defined in class A.
The later expressions, obj2.m(4) and obj2.m(5.6), seem to make sense, as both 4 and 5.6 would land in the overload def m(x: Double) and thus not print anything out.
In what order exactly does the compiler traverse to find what to call? I'd be very grateful if someone could clear up my confusions with how polymorphism is handled here by Scala, thank you very much :)
When you do class B[Any], you define a class with a type parameter called Any. Don't confuse the type parameter name with the actual class Any. You are just shadowing its name.
You could just as fine do this:
class B[Int]
val obj = new B[String]
You may see why it is bad practice to name type parameters after actual types. Usually, people use single letter names for their type parameters, like this:
class B[T] // I just changed the name of the type parameter from "Int" to "T".
val obj = new B[String]
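To connect this back to the quiz: here is B with the shadowing parameter renamed, annotated with the dispatch that produced the "1" reported above (a sketch; the comments are my reading of it):

class A { def m(x: Double) = x + x }
class B[T] extends A { def m(x: T) = print(x) } // identical to class B[Any]

val obj1 = new B[Int] // T = Int, so B contributes an overload m(x: Int)
obj1.m(1)   // m(x: Int) is more specific: prints 1
obj1.m(2.3) // m(x: Int) does not apply, so A.m(Double) is chosen: prints nothing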
I'm developing a library that depends on another. The dependency has a package object that I'd like to alias into my own package domain, to 'hide' the underlying library from the users of the one I'm developing, for potential later reimplementation of that library. I've tried a couple things, including
object functions {
  def identity(a: Any): Any = a
  def toUpper(s: String): String = s.toUpperCase
}

object renamedfunctions {
  import functions._
}
This compiles, but import renamedfunctions._ brings nothing into scope. I've also tried extending the backing object, but Scala objects cannot be extended. Does anyone know of a way to accomplish what I'm trying to do without forking the underlying library?
It is not possible to do this with Scala packages, in general. Usually, you would only alias a package locally within a file:
import scala.{ math => physics }
scala> physics.min(1, 2)
res6: Int = 1
But this doesn't do what you ask. Packages themselves aren't values or types, so you cannot assign them as such. These will fail:
type physics = scala.math
val physics = scala.math
With a package object, you can grab hold of its concrete members, but not the classes within. For example:
scala> val physics = scala.math.`package`
physics: math.type = scala.math.package$@42fcc7e6
scala> physics.min(1, 2)
res0: Int = 1
But using objects or types that belong to the traditional package won't work:
scala> scala.math.BigDecimal(1)
res1: scala.math.BigDecimal = 1
scala> physics.BigDecimal(1)
<console>:13: error: value BigDecimal is not a member of object scala.math.package
physics.BigDecimal(1)
^
Ok, so what should you do?
The reason you're even considering this is that you want to hide which library you're using, so that it can easily be replaced later. If that's the case, what you should do is hide the library behind another interface or object (a facade). That doesn't mean you need to forward every single method and value contained in the library, only the ones you're actually using. This way, when it comes time to migrate to another library, you only need to change one class, because the rest of the code will only reference the facade.
For example, if we wanted to use min and max from scala.math, but later wanted to replace it with another library that provided a more efficient solution (if such a thing exists), we could create a facade like this:
object Math {
  def min(x: Int, y: Int): Int = scala.math.min(x, y)
  def max(x: Int, y: Int): Int = scala.math.max(x, y)
}
All other classes would use Math.min and Math.max, so that when scala.math was replaced, they could remain the same. You could also make Math a trait (sans implementations) and provide the implementations in a sub-class or object (say ScalaMath), so that classes could inject different implementations.
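A sketch of that trait-based variant, using the ScalaMath name from the sentence above:

trait Math {
  def min(x: Int, y: Int): Int
  def max(x: Int, y: Int): Int
}

// One interchangeable implementation; client code depends only on Math.
object ScalaMath extends Math {
  def min(x: Int, y: Int): Int = scala.math.min(x, y)
  def max(x: Int, y: Int): Int = scala.math.max(x, y)
}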
Unfortunately, the commented-out code crashes the compiler:
package object p { def f = 42 }

package q {
  object `package` { def f = p.f }
}

/*
package object q {
  val `package` = p.`package`
}
*/

package client {
  import q.`package`._

  object Test extends App {
    println(f)
  }
}
That way, clients wouldn't break when you migrated to implementations in a package object.
Simply:
val renamedfunctions = functions
import renamedfunctions._
You can see it being done in the scala library itself: https://github.com/scala/scala/blob/2.12.x/src/library/scala/Predef.scala#L150
val Map = immutable.Map
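Applied to the functions object from the question, a quick check (the call results are my own illustration):

val renamedfunctions = functions
import renamedfunctions._

toUpper("hello") // "HELLO" - the members really are in scope now
identity(42)     // 42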
Suppose I have:
class X {
  val listPrimitive: List[Int] = null
  val listX: List[X] = null
}
and I print out the return types of each method in Scala as follows:
classOf[X].getMethods().foreach { m => println(s"${m.getName}: ${m.getGenericReturnType()}") }
listPrimitive: scala.collection.immutable.List<Object>
listX: scala.collection.immutable.List<X>
So... I can determine that the listX's element type is X, but is there any way to determine via reflection that listPrimitive's element type is actually java.lang.Integer? ...
val list: List[Int] = List[Int](123)
val listErased: List[_] = list
println(s"${listErased(0).getClass()}") // java.lang.Integer
NB: this seems not to be a plain JVM type-erasure issue, since I can find the type parameter of List for listX. It looks like the Scala compiler throws away this type information if and only if the parameter type is one of the java.lang number types.
UPDATE:
I suspect this type information is available, due to the following experiment. Suppose I define:
class TestX {
  def f(x: X): Unit = {
    val floats: List[Float] = x.listPrimitive // type mismatch error
  }
}
and X.class is imported via a jar. The full type information must be available in X.class for this case to correctly fail to compile.
UPDATE2:
Imagine you're writing a scala extension to a Java serialization library. You need to implement a:
def getSerializer(clz:Class[_]):Serializer
function that needs to do different things depending on whether:
clz==List[Int] (or equivalently: List[java.lang.Integer])
clz==List[Float] (or equivalently: List[java.lang.Float])
clz==List[MyClass]
My problem is that I will only ever see:
clz==List[Object]
clz==List[Object]
clz==List[MyClass]
because clz is provided to this function as clz.getMethods()(i).getGenericReturnType().
Starting with clz:Class[_] how can I recover the element type information that was lost?
It's not clear to me that TypeTag will help me, because its usage:

typeTag[T]

requires that I provide T (i.e. at compile time).
So, one path to a solution... Given some clz: Class[_], can I determine the TypeTags of its methods' return types? Clearly this is possible, as this information must be contained (somewhere) in a .class file for the Scala compiler to correctly generate type mismatch errors (see above).
At the Java bytecode level, Ints have to be represented as something else (apparently Object) because a List can only contain objects, not primitives. So that's all Java-level reflection can tell you. But the Scala type information is, as you infer, present (at the bytecode level it's in an annotation, IIRC), so you should be able to inspect it with Scala reflection:
import scala.reflect.runtime.universe._

val list: List[Int] = List[Int](123)
def printTypeOf[A: TypeTag](a: A) = println(typeOf[A])
printTypeOf(list) // prints List[Int]
Response to update2: you should use scala reflection to obtain a mirror, not the Class[_] object. You can go via the class name if need be:
import scala.reflect.runtime.universe._

val rm = runtimeMirror(getClass.getClassLoader)

val someClass: Class[_] = ...
val scalaMirrorOfClass = rm.staticClass(someClass.getName)
// or, without going via the name: rm.classSymbol(someClass)

val someObject: Any = ...
val scalaMirrorOfObject = rm.reflect(someObject) // an InstanceMirror
I guess if you really only have the class, you could create a classloader that only loads that class? I can't imagine a use case where you wouldn't have the class, or even a value, though.
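As a sketch of that route (my own example, reusing the X class from the question): once you have the ClassSymbol, the Scala-level signatures, element types included, are available on its members:

import scala.reflect.runtime.universe._

val rm = runtimeMirror(getClass.getClassLoader)
val clz: Class[_] = classOf[X]
val classSymbol = rm.staticClass(clz.getName)

// Walk the getters and print their un-erased Scala return types.
classSymbol.toType.decls.foreach {
  case m: MethodSymbol if m.isGetter =>
    // prints "listPrimitive: List[Int]" and "listX: List[X]"
    println(s"${m.name}: ${m.returnType}")
  case _ => ()
}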