How to handle different package names in different versions? - scala

I have a 3rd-party library with package foo.bar. I normally use it as:
import foo.bar.{Baz => MyBaz}

object MyObject {
  val x = MyBaz.getX // some method defined in Baz
}
The new version of the library has renamed the package from foo.bar to newfoo.newbar. I have now another version of my code with the slight change as follows:
import newfoo.newbar.{Baz => MyBaz}

object MyObject {
  val x = MyBaz.getX // some method defined in Baz
}
Notice that only the first import is different.
Is there any way I can keep the same version of my code and still switch between different versions of the 3rd party library as and when needed?
I need something like conditional imports, or some alternative approach that achieves the same effect.

The other answer is on the right track but doesn't really get you all the way there. The most common way to do this kind of thing in Scala is to provide a base compatibility trait that has different implementations for each version. In my little abstracted library, for example, I have the following MacrosCompat for Scala 2.10:
package io.travisbrown.abstracted.internal

import scala.reflect.ClassTag

private[abstracted] trait MacrosCompat {
  type Context = scala.reflect.macros.Context

  def resultType(c: Context)(tpe: c.Type)(implicit
    tag: ClassTag[c.universe.MethodType]
  ): c.Type = {
    import c.universe.MethodType

    tpe match {
      case MethodType(_, res) => resultType(c)(res)
      case other => other
    }
  }
}
And this one for 2.11:
package io.travisbrown.abstracted.internal

import scala.reflect.ClassTag

private[abstracted] trait MacrosCompat {
  type Context = scala.reflect.macros.whitebox.Context

  def resultType(c: Context)(tpe: c.Type): c.Type = tpe.finalResultType
}
And then my classes, traits, and objects that use the macro reflection API can just extend MacrosCompat and they'll get the appropriate Context and an implementation of resultType for the version we're currently building (this is necessary because of changes to the macros API between 2.10 and 2.11).
(This isn't originally my idea or pattern, but I'm not sure who to attribute it to. Probably Eugene Burmako?)
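A consumer of the trait looks something like this (an illustrative sketch, not the library's actual code):

package io.travisbrown.abstracted.internal

// Extending MacrosCompat picks up whichever Context alias (and resultType
// implementation) was compiled for the current Scala version.
private[abstracted] object TypeInspector extends MacrosCompat {
  def describe(c: Context)(tpe: c.Type): String = tpe.toString
}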
If you're using SBT, there's special support for version-specific source trees—you can have a src/main/scala for your shared code and e.g. src/main/scala-2.10 and src/main/scala-2.11 directories for version-specific code, and SBT will take care of the rest.
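Recent sbt versions pick these directories up out of the box; if yours doesn't, a setting along these lines in build.sbt is the usual approach (a sketch using sbt 0.13-style keys):

// Add e.g. src/main/scala-2.10 or src/main/scala-2.11 to the compile
// sources, depending on the Scala binary version being built.
unmanagedSourceDirectories in Compile +=
  (sourceDirectory in Compile).value / s"scala-${scalaBinaryVersion.value}"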

You can try to use type aliases:
package myfoo

object mybar {
  type MyBaz = newfoo.newbar.Baz
  // If Baz is a case class or object, it needs to be aliased twice,
  // as both a type and a value:
  // val MyBaz = newfoo.newbar.Baz
}
And then you may simply import myfoo.mybar._ and swap out the mybar object to switch to a different version of the library.
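For example, to target the old version you'd compile an alternative shim against foo.bar while the client code stays byte-for-byte identical (a sketch, assuming Baz is an object, so both aliases are needed):

// myfoo/mybar.scala: shim compiled against the old library version
package myfoo

object mybar {
  type MyBaz = foo.bar.Baz.type
  val MyBaz = foo.bar.Baz
}

// Client code, identical for both library versions:
import myfoo.mybar._

object MyObject {
  val x = MyBaz.getX // some method defined in Baz
}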

Related

Scala macros: How can I get a list of the objects within a given package that inherit some trait?

I have a package foo.bar in which a trait Parent and a series of objects Child1, Child2, Child3 are defined. I would like to get a List[Parent] containing all the child objects defined in foo.bar. How can I write such a macro?
Right now I have the following:
def myMacro(c: blackbox.Context): c.Expr[Set[RuleGroup]] = {
  val parentSymbol = c.mirror.staticClass("foo.bar.Parent")
  c.mirror.staticPackage("foo.bar").info.members
    // get all objects
    .filter { sym =>
      // remove $ objects
      sym.isModule && sym.asModule.moduleClass.asClass.baseClasses.contains(parentSymbol)
    }
    .map { ??? /* retrieve? */ }
  ???
}
I think this is what you'd be looking for:
.map(sym => c.mirror.reflectModule(sym.asModule).instance.asInstanceOf[Parent])
Later edit:
I have tried doing this in a trait (not a macro like the above), and when calling it with a different package than the one it was called from, it returned an empty collection of objects. From what I've read, this might have to do with how classloaders work in Scala, since they don't have knowledge of all the classes being loaded; but I see your macro doesn't use a classloader, so maybe it still works in your case.
For me it worked using the Reflections library like this in a trait:
import org.reflections.Reflections

import scala.reflect.runtime.universe
import scala.reflect.{ClassTag, classTag}
import scala.collection.JavaConverters._

trait ChildObjects {
  def childObjectsOf[Parent: ClassTag](containingPackageFullName: String): Set[Parent] = {
    new Reflections(containingPackageFullName)
      .getSubTypesOf(classTag[Parent].runtimeClass)
      .asScala
      .map { cls =>
        val mirror = universe.runtimeMirror(cls.getClassLoader)
        val moduleSymbol = mirror.moduleSymbol(cls)
        mirror.reflectModule(moduleSymbol).instance.asInstanceOf[Parent]
      }
      .toSet
  }
}
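Usage is then a one-liner (using the package from the question):

object Registry extends ChildObjects

val children: Set[Parent] = Registry.childObjectsOf[Parent]("foo.bar")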
If the trait is not sealed, you can't do that: fundamentally, a non-sealed trait means new subclasses can be added later in a different compilation unit.
If the trait is sealed, then you can use knownDirectSubclasses from ClassSymbolApi, but beware of possible ordering-dependent issues, such as those encountered in circe.
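For the sealed case, a minimal sketch of the knownDirectSubclasses approach (assuming Parent is sealed and every child is an object; SealedValues and its method names are illustrative):

import scala.language.experimental.macros
import scala.reflect.macros.blackbox

object SealedValues {
  def of[Parent]: Set[Parent] = macro ofImpl[Parent]

  def ofImpl[Parent: c.WeakTypeTag](c: blackbox.Context): c.Expr[Set[Parent]] = {
    import c.universe._
    val parent = weakTypeOf[Parent].typeSymbol.asClass
    if (!parent.isSealed)
      c.abort(c.enclosingPosition, s"$parent is not sealed")
    // For object children, knownDirectSubclasses yields their module classes;
    // .module recovers the object symbol so each instance can be referenced.
    val instances = parent.knownDirectSubclasses.toList.map(sym => q"${sym.asClass.module}")
    c.Expr[Set[Parent]](q"_root_.scala.collection.immutable.Set(..$instances)")
  }
}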

Scala: package object vs. singleton object within a package

I want to group a set of similar functions in a library in Scala. Here are two approaches I have seen elsewhere; I want to understand the differences between the two.
Singleton object defined in a package
// src/main/scala/com/example/toplevel/functions.scala
package com.example.toplevel

object functions {
  def foo: Unit = { ... }
  def bar: Unit = { ... }
}
Package object
// src/main/scala/com/example/toplevel/package.scala
package com.example.toplevel

package object functions {
  def foo: Unit = { ... }
  def bar: Unit = { ... }
}
Comparison
As far as I can tell, the first approach requires explicitly importing the functions object whenever you want to use its functions, while the package object approach lets anything inside the functions package access those methods without an import.
I.e., com.example.toplevel.functions.MyClass would have access to com.example.toplevel.functions.foo implicitly.
Is my understanding correct?
If there are no classes defined within com.example.toplevel.functions, it seems the approaches would be equivalent; is this correct?
Answered by terminally-chill in a comment:
yes, your understanding is correct. Anything defined in your package
object will be available without having to import. If you use an
object, you will have to import it even within the same package.
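To illustrate (a sketch; MyClass and baz are hypothetical):

// src/main/scala/com/example/toplevel/functions/MyClass.scala
package com.example.toplevel.functions

class MyClass {
  def baz(): Unit = foo // no import needed: foo lives in this package's package object
}

With the plain object functions approach, the same call would need import com.example.toplevel.functions._ (or the qualified functions.foo) to compile.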

Scala scoping brings in types not shown in import

I read an article suggesting that Scala's type inference might do too much.
Given this piece of code:
package A1 {
  case class Foo(a: Int)

  package object A2 {
    def bar() = Foo(1)
  }
}
--
import A1.A2._

object Main extends App {
  val a: Foo = bar() // error: not found: type Foo
}
It won't compile as Main can't see Foo unless we also import A1.Foo.
Whereas if the type annotation is removed, it compiles fine:
import A1.A2._

object Main extends App {
  val a = bar()
}
The author thinks that, compared to Java, where we have to explicitly import whatever types we use, this reduces readability: imports no longer carry complete information about the set of types the code uses.
I think what he wants is for the types being used, explicitly or implicitly, to be imported, both to make it clear what the code depends on and perhaps to assist static-analysis tools.
I wonder what you think about this.
EDIT:
As @flavian points out, this has little to do with type inference and more to do with how scoping works.
EDIT2:
On second thought, maybe this issue is not important if an IDE can automatically add imports (even for types used implicitly) when the developer wants it to.
--
In your first example the compiler sees
val a: Foo = bar()
and doesn't know what Foo is, so it complains.
To fix this code there are three options.
// Option 1: import Foo
import A1.Foo
val a: Foo = bar()

// Option 2: use the fully qualified name
val a: A1.Foo = bar()

// Option 3: let the compiler infer the type
val a = bar()
These all compile the same.
The last option is not available in Java.
"…compared to Java, where we have to explicitly import whatever types we use…"
Not true.
// we can use Foo with no import
useFoo(x.getFoo());

// and we can use fully qualified names
A1.Foo foo = bar();
The compiler will add to the compiled class file a list of all classes that are needed by the class.
I don't think type inference is the question at play here. Members propagate into scope through direct imports or inheritance. If you had:
trait A1 {
  case class Foo(a: Int)
}

object A2 extends A1
This would correctly bring Foo into scope via A2. Again, as far as I know this is not a type inference problem, but rather has to do with the fact that members and implicits propagate only through imports and inheritance. It's more about how scoping works in Scala than anything else.
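A quick check of that claim (a sketch; Main is illustrative):

import A2._

object Main extends App {
  val foo: Foo = Foo(1) // compiles: both the type and its companion are members of A2
}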

Abstract reflection API in Scala 2.10

Scala 2.10 comes with a great reflection API. There are two entry points to it, however: runtime universe and macro context universe.
When using runtime reflection, you should import scala.reflect.runtime.universe. When using reflection inside a macro implementation, you should import universe from the context.
Is it possible to write some code that works in both environments? How should one obtain the universe?
Consider this example:
class MyReflection(val u: scala.reflect.api.Universe) {
  import u._

  def foo[T: TypeTag] = implicitly[TypeTag[T]].tpe.members // returns MyReflection.u.MemberScope
}

val x = new MyReflection(scala.reflect.runtime.universe)
val members: scala.reflect.runtime.universe.MemberScope = x.foo[String] // BANG! Compiler error
This won't compile because of the type mismatch. At the same time, it is obvious that both scala.reflect.runtime.universe.MemberScope and MyReflection.u.MemberScope in this example share the same API. Is there a way to abstract over different universes?
Or am I perhaps doing something philosophically wrong in trying to export reflection artifacts (MemberScope in this example)?
You can just accept the universe as a parameter:
class MyReflection(val u: scala.reflect.api.Universe) {
  import u._

  def foo[T: TypeTag] = implicitly[TypeTag[T]].tpe.members
}

val x = new MyReflection(scala.reflect.runtime.universe)
Note that you'll have to refer to the universe via your instance of MyReflection to get the path-dependent types right.
val members: x.u.MemberScope = x.foo[String]
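The same class then works from inside a macro implementation too, since the context's universe is also a scala.reflect.api.Universe (a sketch against the 2.10 macro API; myMacroImpl is illustrative):

import scala.reflect.macros.Context

def myMacroImpl(c: Context): c.Expr[Unit] = {
  val r = new MyReflection(c.universe)
  // the result type is path-dependent on this particular instance
  val members: r.u.MemberScope = r.foo[String]
  c.literalUnit
}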
Have a look at this question for more examples and options.

Generating a Scala class automatically from a trait

I want to create a method that generates an implementation of a trait. For example:
trait Foo {
  def a
  def b(i: Int): String
}
object Processor {
  def exec(instance: AnyRef, method: String, params: AnyRef*) = {
    // whatever
  }
}
class Bar {
  def wrap[T] = {
    // Here create a new instance of the implementing class, i.e. if T is Foo,
    // generate a new FooImpl(this)
  }
}
I would like to dynamically generate the FooImpl class like so:
class FooImpl(val wrapped: AnyRef) extends Foo {
  def a = Processor.exec(wrapped, "a")
  def b(i: Int) = Processor.exec(wrapped, "b", i)
}
Manually implementing each of the traits is not something we would like (lots of boilerplate), so I'd like to be able to generate the Impl classes at compile time. I was thinking of annotating the classes and perhaps writing a compiler plugin, but perhaps there's an easier way? Any pointers would be appreciated.
java.lang.reflect.Proxy can do something quite close to what you want:
import java.lang.reflect.{InvocationHandler, Method, Proxy}

class Bar {
  def wrap[T: ClassManifest]: T = {
    val theClass = classManifest[T].erasure.asInstanceOf[Class[T]]
    theClass.cast(
      Proxy.newProxyInstance(
        theClass.getClassLoader(),
        Array(theClass),
        new InvocationHandler {
          def invoke(proxy: AnyRef, method: Method, params: Array[AnyRef]) =
            // route every call on the proxy to the enclosing Bar instance
            Processor.exec(Bar.this, method.getName, params: _*)
        }))
  }
}
With that, you have no need to generate FooImpl.
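Usage would then look something like this (a sketch; it assumes Processor.exec returns a value compatible with the invoked method's result type):

val bar = new Bar
val foo: Foo = bar.wrap[Foo]
foo.b(42) // routed through the InvocationHandler to Processor.exec(bar, "b", 42)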
A limitation is that it will only work for traits in which no methods are implemented: even if a method is implemented in the trait, calling it will still route to the processor and ignore the implementation.
You can write a macro (macros are officially a part of Scala since 2.10.0-M3), something along the lines of Mixing in a trait dynamically. Unfortunately now I don't have time to compose an example for you, but feel free to ask questions on our mailing list at http://groups.google.com/group/scala-internals.
You can see three different ways to do this in ScalaMock.
ScalaMock 2 (the current release version, which supports Scala 2.8.x and 2.9.x) uses java.lang.reflect.Proxy to support dynamically typed mocks and a compiler plugin to generate statically typed mocks.
ScalaMock 3 (currently available as a preview release for Scala 2.10.x) uses macros to support statically typed mocks.
Assuming that you can use Scala 2.10.x, I would strongly recommend the macro-based approach over a compiler plugin. You can certainly make the compiler plugin work (as ScalaMock demonstrates) but it's not easy and macros are a dramatically superior approach.