Moving a package-private class—should I consider that binary incompatible? - scala

Because of an issue with package name aux under Windows, I am moving a helper class within the package hierarchy of my library from
de.sciss.scalainterpreter.aux
to
de.sciss.scalainterpreter
The class is private to the library, i.e. private[scalainterpreter] object Helper.
Now, using the Typesafe Migration Manager, it obviously reports that the change is not compatible:
Found 2 binary incompatibilities
===============================
* class de.sciss.scalainterpreter.aux.Helper does not have a correspondent
in new version
* object de.sciss.scalainterpreter.aux.Helper does not have a correspondent
in new version
But I suspect that if client code does not call into either object, the interfaces are still compatible, and thus I can use a minor version increase to indicate the change, and allow those two versions to be used interchangeably.
Correct?

You are not specifying whether Helper was already package-private before the move, so I'll treat both cases:
If it was already package private:
I suspect that the Migration Manager reports an incompatibility only because it must stay conservative: packages are open in Scala (as in Java), which means that client code might very well define its own class inside the package de.sciss.scalainterpreter.aux.
So by moving Helper, you would indeed break that class.
However, let's be pragmatic: de.sciss.scalainterpreter.aux is your package (and so are its sub-packages), and nobody else should define their own classes there.
With this additional prerequisite, moving Helper is indeed a binary-compatible change as far as client Scala code is concerned.
As for client Java code, it's a bit different: even though Helper is package-private in Scala, it is still public as far as the JVM is concerned, so the Java compiler will happily let client code access Helper (thus client Java code might very well already be accessing Helper, despite it being declared package-private).
If it was not package private before the move:
Well, tough luck. Client code could very well already access Helper, and the move will certainly break that. As a side note, you can employ a little trick to make the change source-compatible, but alas not binary-compatible. Just add the following file:
package de.sciss

package object scalainterpreter {
  object aux {
    val Helper = _root_.de.sciss.scalainterpreter.Helper
  }
}
With the above, you can still access Helper as de.sciss.scalainterpreter.aux.Helper, and it still compiles under Windows (unlike an actual package aux, which does not compile because aux is a reserved file name on Windows).
But again, this is not binary compatible, only source compatible.
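To see why it's only source-compatible: a client like the following keeps compiling unchanged against the shim, because aux.Helper now resolves through the package object; but the same client compiled before the move has the old module class name de.sciss.scalainterpreter.aux.Helper$ baked into its bytecode, so it breaks at link time (a sketch; the println is just illustrative):
import de.sciss.scalainterpreter.aux.Helper  // now resolves to the val in the package object

object Client {
  def main(args: Array[String]): Unit =
    println(Helper)  // same source as before the move
}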

It's easy to see how inlining can break client code, since the inlined code essentially bleeds into the client's interface. This example deliberately provokes a linkage error; you can experiment and do things like javap | grep Helper, but at some level you have to let scalac do its job.
package lib {
  object Lib {
    // import util.Helper
    @inline def result = Helper.help
  }
  // package util {
  private[lib] object Helper {
    @inline def help = "Does this help?"
  }
  // }
}
Sample innocently bystanding client:
package client

object Test {
  import lib.Lib
  def main(args: Array[String]) {
    println(Lib.result)
  }
}
Changing the package of the package-private class (smalac and smala in the transcript are just local aliases for scalac and scala):
$ scala -cp "classes;target" client.Test
Does this help?
apm@halyard ~/tmp/taking-it-private
$ vi lib.scala
apm@halyard ~/tmp/taking-it-private
$ rm -rf classes/*
apm@halyard ~/tmp/taking-it-private
$ smalac -d classes -optimise lib.scala
apm@halyard ~/tmp/taking-it-private
$ smala -cp "classes;target" client.Test
java.lang.ClassNotFoundException: lib.util.Helper$
Javap shows why. [Namely, the call is inlined but it still wants to init the module.]
I haven't followed the discussions, but for example there are links at:
https://github.com/scala/scala/pull/1133
and other discussions on the ML about what expectations about binary compatibility are valid.
https://groups.google.com/forum/?fromgroups=#!topic/scala-internals/sJ-xnWL_8PE

Simply put, there's no reason why it wouldn't be compatible. Linkage happens around signatures; since the object in question is private to the library, clients cannot (or rather, should not) be using it, and binary compatibility is therefore not an issue.

Related

Is there a way to run a single benchmark with sbt-jmh?

I am working on a big sbt project and there is some functionality that I want to benchmark. I decided that I will be using jmh, thus I enabled the sbt-jmh plugin.
I wrote an initial test benchmark that looks like this:
import org.openjdk.jmh.annotations.Benchmark

class TestBenchmark {
  @Benchmark
  def functionToBenchMark = {
    5 + 5
  }
}
However, when I try to run it with jmh:run -i 20 -wi 10 -f1 -t1 .*TestBenchmark.* I get java.lang.InternalError: Malformed class name. I have freshly rebuilt the project and everything compiles and runs just fine.
The first printed message says
Processing 6718 classes from /path-to-repo/target/scala-2.11/classes
with "reflection" generator
I find it weird that the plugin tries to reflect the whole project (I guess including classes within the standard library). Before rebuilding I was getting NoClassDefFoundError, although the project was otherwise working well.
Since there are plenty of classes within the project and I cannot make sure that every little bit conforms to jmh's requirements, I was wondering if there's a way to overcome this issue and reflect only the relevant classes that are annotated with @Benchmark?
My sbt version is 0.13.6 and the sbt-jmh version is 0.2.25.
So this is an issue with Scala and Class.getSimpleName.
It's not abnormal in Scala to have types like this:
object Outer {
  sealed trait Inner
  object Inner {
    case object Inner1 extends Inner
    case object Inner2 extends Inner
  }
}
With the above, calling Outer.Inner.Inner1.getClass().getSimpleName() will throw the exception you're seeing.
I don't think the plugin reflects over the whole project, only over the classes directly referred to by your @State or @Benchmark classes.
Once I had my benchmark file written to avoid such nested types, it worked.
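For instance, here is a minimal sketch of a benchmark whose state lives in flat, top-level classes, so getSimpleName never trips over nesting (names are illustrative):
import org.openjdk.jmh.annotations.{Benchmark, Scope, State}

// Top-level, non-nested types keep Class.getSimpleName happy.
@State(Scope.Thread)
class BenchState {
  var x: Int = 5
}

class FlatBenchmark {
  @Benchmark
  def addition(state: BenchState): Int = state.x + state.x
}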

duplicate package objects in main and test

I have a package object defined in both the main and the test code tree, as shown below. When I execute the program with sbt run, the one in the main code tree takes effect, whereas when I run the test cases (sbt test), the package object defined in the test code tree takes effect. For example:
src/main/scala/com/example/package.scala
package com.example

package object core {
  val foo = "Hello World"
}
src/test/scala/com/example/package.scala
package com.example

package object core {
  val foo = "Goodbye World"
}
On sbt run, the value of com.example.core.foo is "Hello World"; on sbt test it is "Goodbye World".
Is this just a quirk of sbt, or is it well-defined Scala/sbt behaviour? I currently use this behaviour for dependency injection by defining my module bindings for production and test in their corresponding package objects. Is this an advisable approach?
Scala looks up the package object on the classpath like any other class, so it's well-defined behaviour. Since your test and main code compile to different places, and the test classpath puts the test classes first, a different val foo is found in each context.
The way you are using this mechanism is very similar to using implicits. General advice with implicits and implicit resolution is not to abuse it. I think in this case it's not the best way of providing dependencies.
You always have to consider what scope you are in: if you are using a class defined in main from test scope, how do you get foo from main, and how do you get foo from test, whenever you need one or the other? You already have to think about how it will work and consider various scenarios. What if your test class is in a different package; which foo would you get, and does it depend on where the tested class is declared?
Make dependency injection more explicit, so that you don't spend mental cycles on it or run the risk of confusing someone.
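As a sketch of what "more explicit" could look like (all names illustrative), pass the binding in rather than relying on which package object happens to be on the classpath:
trait Greeting { def foo: String }
object ProdGreeting extends Greeting { val foo = "Hello World" }
object TestGreeting extends Greeting { val foo = "Goodbye World" }

// The dependency is now visible at the use site...
class Core(greeting: Greeting) {
  def greet(): String = greeting.foo
}

// ...and each scope chooses its binding explicitly:
// new Core(ProdGreeting) in main, new Core(TestGreeting) in tests.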

Does Scala have a global object or class?

I know programmers are supposed to wrap their code in an application object:
object Hello extends App {
  println("Hello, World")
}
It is required in Eclipse, if I ever want to get any output. However, when I tried to write some code (very casually) in Emacs, I wrote something like this:
class Pair[+T](val first: T, val second: T)

trait Friend[-T] {
  def befriend(someone: T)
}

def makeFriendWith(s: Student, f: Friend[Student]) {
  f.befriend(s)
}
It seems like there is no universal object or class that wraps over the function makeFriendWith. Is Scala like JavaScript, where everything is attached to a global object? If not, what is this function attached to?
Also, why does this work in the console (I compiled it with the scala command and it worked) but not in Eclipse? What's the use of the App object?
Scala doesn't have top-level defs, but your script can be run by either the REPL or the scala script runner.
The precise behavior of your script depends on which way you run it.
The REPL can run scripts line-by-line or whole hog. (Compare :paste and :paste -raw versus :load or -i init.script and the future option -I init.script.)
There is an open issue about making scripting more sensible: the script runner should realize when you're trying to run an App.
There is another effort to make scripting a compiler phase that is easily customized. Scroll to Scripter.scala for code comments about its current heuristics.
In short, your defs must be wrapped in a top-level entity, but exactly how that happens is context-dependent.
There was a recent effort to make an alternative baked-in wrapping scheme available for the REPL.
None of this is mandated by the language spec, any more than special rules pertaining to sbt build files are defined by the language.
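For example, to compile the snippet from the question as an ordinary source file, you would do the wrapping yourself (a sketch; a Student class is added here just to make it self-contained):
class Student

class Pair[+T](val first: T, val second: T)

trait Friend[-T] {
  def befriend(someone: T): Unit
}

object Friends {
  // The def now lives inside a top-level entity.
  def makeFriendWith(s: Student, f: Friend[Student]): Unit =
    f.befriend(s)
}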
You can define methods like this only in the console, which (behind the scenes) automatically wraps them in an anonymous class for you.
Outside of the console, there's no such luxury.
As a JVM language, Scala cannot truly create any top-level entities other than classes and interfaces.
It does, however, have the notion of a "package object", which creates the illusion of value entities (val, var and def) not enclosed in a class or trait.
See http://www.scala-lang.org/docu/files/packageobjects/packageobjects.html for information on package objects.
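A small sketch of that illusion (illustrative names):
// src/main/scala/com/example/package.scala
package com

package object example {
  val greeting = "hello"           // looks like a top-level val
  def twice(n: Int): Int = n * 2   // looks like a top-level def
}

// elsewhere: com.example.twice(21) works without naming any object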
You can run code like this directly in Eclipse: use Scala worksheet. IntelliJ IDEA Scala plugin supports it as well.

Doing something like Python's "import" in Scala

Is it possible to use Scala's import without specifying a main function in an object, and without using the package keyword in the source file with the code you wish to import?
Some explanation: In Python, I can define some functions in some file "Lib.py", write
from Lib import *
in some other file "Run.py" in the same directory, use the functions from Lib in Run, and then run Run with the command python Run.py. This workflow is ideal for small scripts that I might write in an hour.
In Scala, it appears that if I want to include functions from another file, I need to start wrapping things in superfluous objects. I would rather not do this.
Writing Python in Scala is unlikely to yield satisfactory results. Objects are not "superfluous" -- it's your program that is not written in an object oriented way.
First, methods must be inside objects. You can place them inside a package object, and they'll then be visible to anything else that is inside the package of the same name.
Second, if one considers solely objects and classes, then all package-less objects and classes whose class files are present in the classpath, or whose scala files are compiled together, will be visible to each other.
This is as minimal as I could get it:
[$]> cat foo.scala
object Foo {
  def foo(): Boolean = {
    return true
  }
}
// vim: set ts=4 sw=4 et:
[$]> cat bar.scala
object Bar extends App {
  import Foo._
  println(foo)
}
// vim: set ts=4 sw=4 et:
[$]> fsc foo.scala bar.scala
[$]> export CLASSPATH=.:$CLASSPATH # Or else it can't find Bar.
[$]> scala Bar
true
When you just write simple scripts, use Scala's REPL. There, you can define functions and call them without having any enclosing object or package, and without a main method.
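For example, in a 2.x REPL session:
scala> def square(x: Int) = x * x
square: (x: Int)Int

scala> square(4)
res0: Int = 16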
Objects/classes don't have to be in packages, though it's highly recommended. That said, you can also treat singleton objects like packages, i.e., as namespaces for standalone functions, and import their contents as if they were packages.
If you define your application as an object that extends App, then you don't have to define a main method. Just write your code in the body of the object, and the App trait (which extends the special DelayedInit trait) will provide a main method that will execute your code.
If you just want to write a script, you can forgo the object altogether and just write code without any container, then pass your source file to the interpreter (REPL) in non-interactive mode.

How do you do dependency injection with the Cake pattern without hardcoding?

I just read and enjoyed the Cake pattern article. However, to my mind, one of the key reasons to use dependency injection is that you can vary the components being used by either an XML file or command-line arguments.
How is that aspect of DI handled with the Cake pattern? The examples I've seen all involve mixing traits in statically.
Since mixing in traits is done statically in Scala, if you want to vary the traits mixed in to an object, create different objects based on some condition.
Let's take a canonical cake pattern example. Your modules are defined as traits, and your application is constructed as a simple Object with a bunch of functionality mixed in
val application = new Object
  with Communications
  with Parsing
  with Persistence
  with Logging
  with ProductionDataSource

application.startup
Now all of those modules have nice self-type declarations which define their inter-module dependencies, so that line only compiles if all your inter-module dependencies exist, are unique, and are well-typed. In particular, the Persistence module has a self-type which says that anything implementing Persistence must also implement DataSource, an abstract module trait. Since ProductionDataSource inherits from DataSource, everything's great, and that application construction line compiles.
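For reference, the self-type wiring described above looks roughly like this (trait bodies are illustrative):
trait DataSource { def query(q: String): String }

trait ProductionDataSource extends DataSource {
  def query(q: String) = "prod: " + q
}

// Anything mixing in Persistence must also mix in some DataSource.
trait Persistence { this: DataSource =>
  def save(record: String): Unit = { query(record); () }
}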
But what if you want to use a different DataSource, pointing at some local database for testing purposes? Assume further that you can't just reuse ProductionDataSource with different configuration parameters, loaded from some properties file. What you would do in that case is define a new trait TestDataSource which extends DataSource, and mix it in instead. You could even do so dynamically based on a command line flag.
val application =
  if (test)
    new Object
      with Communications
      with Parsing
      with Persistence
      with Logging
      with TestDataSource
  else
    new Object
      with Communications
      with Parsing
      with Persistence
      with Logging
      with ProductionDataSource

application.startup
Now that looks a bit more verbose than we would like, particularly if your application needs to vary its construction on multiple axes. On the plus side, you usually only have one chunk of conditional construction logic like that in an application (or at worst one per identifiable component lifecycle), so at least the pain is minimized and fenced off from the rest of your logic.
Scala is also a scripting language, so your configuration "XML" can simply be a Scala script: it is type-safe and not a different language.
Simply compare the startup commands:
scala -cp first.jar:second.jar startupScript.scala
is not so different than:
java -cp first.jar:second.jar com.example.MyMainClass context.xml
You can always use DI, but you have one more tool.
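In other words, the "config file" itself can be the Scala snippet that does the wiring (a sketch reusing the hypothetical traits from above):
// startupScript.scala: the type-safe "configuration file"
val application = new Object
  with Communications
  with Parsing
  with Persistence
  with Logging
  with TestDataSource   // swap the data source here, without touching the application jars

application.startup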
The short answer is that Scala doesn't currently have any built-in support for dynamic mixins.
I am working on the autoproxy-plugin to support this, although it's currently on hold until the 2.9 release, when the compiler will have new features making it a much easier task.
In the meantime, the best way to achieve almost exactly the same functionality is by implementing your dynamically added behavior as a wrapper class, then adding an implicit conversion back to the wrapped member.
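A sketch of that wrapper-plus-implicit-conversion approach (all names illustrative):
import scala.language.implicitConversions

class Core { def base: String = "core" }

// The dynamically added behaviour lives in a wrapper...
class LoggingCore(val underlying: Core) {
  def baseLogged: String = { println("calling base"); underlying.base }
}

object LoggingCore {
  // ...and an implicit conversion gets you back to the wrapped member.
  implicit def unwrap(wrapper: LoggingCore): Core = wrapper.underlying
}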
Until the AutoProxy plugin becomes available, one way to achieve the effect is to use delegation:
trait Module {
  def foo: Int
}

trait DelegatedModule extends Module {
  var delegate: Module = _
  def foo = delegate.foo
}

class Impl extends Module {
  def foo = 1
}

// later
val composed: Module with ... with ... = new DelegatedModule with ... with ...
composed.delegate = choose() // choose is linear in the number of `Module` implementations
But beware: the downside of this is that it's more verbose, and you have to be careful about initialization order if you use vars inside a trait. Another downside is that if there are path-dependent types within Module above, you won't be able to use delegation that easily.
But if there is a large number of different implementations that can be varied, it will probably cost you less code than listing cases with all possible combinations.
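Concretely, choosing a delegate at runtime could look like this (hypothetical --test flag):
class TestImpl extends Module { def foo = 42 }

object Main {
  def main(args: Array[String]): Unit = {
    val composed = new DelegatedModule {}
    // Pick the delegate from runtime configuration rather than a static mixin.
    composed.delegate = if (args.contains("--test")) new TestImpl else new Impl
    println(composed.foo)
  }
}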
Lift has something along those lines built in. It's mostly wired up in Scala code, but you get some runtime control. http://www.assembla.com/wiki/show/liftweb/Dependency_Injection