Assume you have two SBT projects, one called A and another called B.
A has a subproject called macro that follows the exact pattern shown here (http://www.scala-sbt.org/0.13.0/docs/Detailed-Topics/Macro-Projects.html). In other words, A has a subproject macro with a package that exposes a macro (let's call it macrotools). Both projects, A and B, use the macrotools package (A and B are strictly separate projects; B uses A via dependencies in SBT, with A using publish-local).
Now, A using A's own macrotools package is fine; everything works correctly. However, when B uses A's macrotools package, the following error occurs:
java.lang.IllegalAccessError: tried to access method com.monetise.waitress.types.Married$.<init>()V from class com.monetise.waitress.types.RelationshipStatus$
For those wondering, the macro is this one: https://stackoverflow.com/a/13672520/1519631. In other words, this macro is what is inside the macrotools package.
This is also related to my earlier question, Macro dependency appearing in POM/JAR, except that I am now using SBT 0.13 and following the updated guide for SBT 0.13.
The code referred to above is what is in B; A provides com.monetise.ingredients.macros.tools (a dependency specified in build.sbt):
package com.monetise.waitress.types

import com.monetise.ingredients.macros.tools.SealedContents

sealed abstract class RelationshipStatus(val id: Long, val formattedName: String)
case object Married extends RelationshipStatus(0, "Married")
case object Single extends RelationshipStatus(1, "Single")

object RelationshipStatus {
  // val all: Set[RelationshipStatus] = Set(
  //   Married, Single
  // )
  val all: Set[RelationshipStatus] = SealedContents.values[RelationshipStatus]
}
As you can see, when I use what's commented out, the code works fine (the job of the macro is to fill the Set with all the case objects of an ADT). When I use the macro version, i.e. SealedContents.values[RelationshipStatus], I hit the java.lang.IllegalAccessError.
EDIT
Here are the repos containing the projects
https://github.com/mdedetrich/projectacontainingmacro
https://github.com/mdedetrich/projectb
Note that I had to make some changes, which I forgot about earlier. Because the other project also needs to depend on the macro, the following two lines, which disable macro publishing, have been commented out:
publish := {},
publishLocal := {}
In the Build.scala. Also note that this is a runtime error, not a compile-time error.
EDIT 2
Created a github issue here https://github.com/sbt/sbt/issues/874
This issue is unrelated to SBT. It looks like the macro from "Iteration over a sealed trait in Scala?" that you're using has a bug. Follow the link to see the fix.
Related
I am working on a big sbt project and there is some functionality that I want to benchmark. I decided that I will be using jmh, thus I enabled the sbt-jmh plugin.
I wrote an initial test benchmark that looks like this:
import org.openjdk.jmh.annotations.Benchmark

class TestBenchmark {
  @Benchmark
  def functionToBenchMark = {
    5 + 5
  }
}
However, when I try to run it with jmh:run -i 20 -wi 10 -f1 -t1 .*TestBenchmark.*, I get java.lang.InternalError: Malformed class name. I have freshly rebuilt the project, and everything compiles and runs just fine.
The first printed message says
Processing 6718 classes from /path-to-repo/target/scala-2.11/classes
with "reflection" generator
I find it weird that the plugin tries to reflect over the whole project (I guess including classes from the standard library). Before rebuilding I was getting NoClassDefFoundError, although the project was otherwise working well.
Since there are plenty of classes within the project and I cannot make sure that every bit of it conforms to jmh's requirements, I was wondering if there's a way to overcome this issue and reflect only the relevant classes, i.e. those annotated with @Benchmark?
My sbt version is 0.13.6 and the sbt-jmh version is 0.2.25.
So this is an issue with Scala and Class.getSimpleName.
It's not abnormal in Scala to have types like this:
object Outer {
  sealed trait Inner
  object Inner {
    case object Inner1 extends Inner
    case object Inner2 extends Inner
  }
}
With the above, calling Outer.Inner.Inner1.getClass().getSimpleName() will throw the exception you're seeing.
I don't think it uses the full project, only things that are directly referred to by @State or @Benchmark annotations.
Once I had my benchmark file written that way, it worked.
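A minimal illustration of the difference (the `Demo` object name is mine; the failing case assumes Scala 2.11 on JDK 8, where `getSimpleName` can throw for nested objects like these):

```scala
object Outer {
  sealed trait Inner
  object Inner {
    case object Inner1 extends Inner
    case object Inner2 extends Inner
  }
}

object Demo extends App {
  // The JVM binary name is always safe to ask for:
  println(Outer.Inner.Inner1.getClass.getName) // prints "Outer$Inner$Inner1$"

  // On Scala 2.11 / JDK 8 this can throw
  // java.lang.InternalError: Malformed class name
  val simple =
    try Outer.Inner.Inner1.getClass.getSimpleName
    catch { case e: InternalError => s"<getSimpleName failed: ${e.getMessage}>" }
  println(simple)
}
```

Sticking to `getName` (or catching `InternalError`) avoids depending on the broken simple-name computation.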
I have a Scala project A that has an interface (abstract class) I, its implementations, and a reference to project B (B.jar). A is compiled into a jar file with publish-local and stored in the .ivy directory.
Project B, in turn, uses the I interface from project A; it is also compiled into a jar and published into the .ivy directory.
Here come some design questions in Scala:
Is this a circular dependency, as A refers to B while B refers to A?
If the first question is an issue, I guess the simplest solution is to extract the interface I from A and make it another project that is referenced by both A and B. Isn't it overkill to have a project that contains only one interface? Or is it OK, since B references only one class file in A? What's the best practice in Scala?
There are times when a so-called cyclic dependency can reduce lines of code, so it cannot be dismissed as 'bad practice' out of hand.
It all depends on the context of the project.
You need answers to questions like:
Do the projects need to be in different libraries at all?
If so, can we consider using a DI framework? E.g. Spring, Guice, etc.
Then again, because it is Scala, you don't really need a framework per se to implement this.
Consider the following example:
class IdentityCard(val id: String, val manufacturerCompany: String, person: => Person)

class Person(val firstName: String, val lastName: String, icard: => IdentityCard)

lazy val iCard = new IdentityCard("123", "XYZ", person)
lazy val person: Person = new Person("som", "bhattacharyya", iCard)
These two classes can be in separate jars and still compile and work together with less code.
Note that we are using call-by-name for the dependencies being passed in. We are also initializing the variables lazily, so they are not evaluated until accessed. This allows forward references, i.e. using a variable first and defining it later.
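The lazy vals shown above cannot sit at the top level of a file, so as a sketch (the `Wiring` and `Demo` objects and the accessor methods are my additions, not part of the original answer) a self-contained runnable version might look like:

```scala
class IdentityCard(val id: String, val manufacturerCompany: String, person: => Person) {
  def owner: Person = person // forces the by-name reference only when asked
}

class Person(val firstName: String, val lastName: String, icard: => IdentityCard) {
  def card: IdentityCard = icard
}

object Wiring {
  // Lazy initialization: neither object is built until first accessed,
  // so each can safely forward-reference the other.
  lazy val iCard: IdentityCard = new IdentityCard("123", "XYZ", person)
  lazy val person: Person = new Person("som", "bhattacharyya", iCard)
}

object Demo extends App {
  println(Wiring.person.card.id)        // prints "123"
  println(Wiring.iCard.owner.firstName) // prints "som"
}
```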
It's difficult to give "best practice" advice without more specifics. If "project B" is tightly coupled with project A, they should probably both be in the same project, but in different sub-projects/sub-modules. The interface B uses could also be its own sub-project to remove the circle.
sbt and maven support this; here are the sbt docs.
I have a package object defined in both the main and the test code tree, as shown below. When I execute the program with sbt run, the one in the main code tree takes effect, whereas when I run the test cases (sbt test), the package object defined in the test code tree takes effect. For example:
src/main/scala/com/example/package.scala
package com.example

package object core {
  val foo = "Hello World"
}
src/test/scala/com/example/package.scala
package com.example

package object core {
  val foo = "Goodbye World"
}
On sbt run, the value of com.example.core.foo is "Hello World"; on sbt test, the value of com.example.core.foo is "Goodbye World".
Is this just a quirk of SBT, or is it well-defined Scala/SBT behaviour? I currently use this behaviour for dependency injection, by defining my module bindings for production and test in their corresponding package objects. Is this an advisable approach?
Scala looks for package objects in your current path, so it's well-defined behavior. Since your code in test and main resides in different places, it finds different val foos.
The way you are using this mechanism is very similar to using implicits. General advice with implicits and implicit resolution is not to abuse it. I think in this case it's not the best way of providing dependencies.
You always have to consider what scope you are in: if you are using a class defined in main from test scope, how do you get foo from main, and how do you get foo from test, whenever you need one or the other? You have to think ahead about how it will work and consider various scenarios. What if your test class is in a different package: which foo would you get, and does it depend on where the tested class is declared?
Make dependency injection more explicit, and don't spend mental cycles on it or risk confusing someone.
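A sketch of what "more explicit" could look like here (all names are hypothetical): pass the binding in as an ordinary value instead of relying on whichever package object the classpath happens to provide:

```scala
// Hypothetical explicit wiring: the binding is an ordinary value that is
// passed in where needed, rather than a package-object val that silently
// changes between the main and test classpaths.
trait Greeting {
  def foo: String
}

object ProductionGreeting extends Greeting {
  val foo = "Hello World"
}

object TestGreeting extends Greeting {
  val foo = "Goodbye World"
}

class Greeter(greeting: Greeting) {
  def greet: String = greeting.foo
}

object Demo extends App {
  println(new Greeter(ProductionGreeting).greet) // prints "Hello World"
  println(new Greeter(TestGreeting).greet)       // prints "Goodbye World"
}
```

Tests construct `Greeter` with the test binding directly; no classpath trickery is involved.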
Suppose my entire project configuration is this simple build.sbt:
scalaVersion := "2.11.4"
libraryDependencies += "org.scalaz" %% "scalaz-core" % "7.1.0"
And this is my code:
import scalaz.Equal
import scalaz.syntax.equal._
object Foo {
def whatever[A: Equal](a: A, b: A) = a === b
}
Now when I run sbt doc and open the API docs in the browser, I see the scalaz package in the ScalaDoc root package listing, together with my Foo:
object Foo
package scalaz
I've noticed this with Scalaz before, and I'm not the only one it happens to (see for example the currently published version of the Argonaut API docs). I'm not sure I've seen it happen with any library other than Scalaz.
If I don't actually use anything from Scalaz in my project code, it doesn't show up. The same thing happens on at least 2.10.4 and 2.11.4.
Why is the scalaz package showing up here and how can I make it stop?
I noticed this too. It also happens with the akka.pattern package from Akka, as well as, for example, the upickle package from the upickle project.
Those three packages have two things in common:
They are package objects
They define at least one type in a mixin trait.
So I did a little experiment with two projects:
Project A:
trait SomeFunctionality {
  class P(val s: String)
}

package object projectA extends SomeFunctionality
Project B (depends on Project A):
package projectB

import projectA._

object B extends App {
  val p = new P("Test")
}
And voila: in the ScalaDoc of Project B appear two packages in the root package:
projectA
projectB
It seems like both criteria above have to be met, since eliminating either one solves the issue.
I believe this is a bug in the Scala compiler, so I cannot help you avoid it; the only way would be to change the sources of scalaz in this case. Changing anything in Project B other than removing all references to Project A didn't help.
Update: It seems you can instruct the compiler to exclude specific packages from scaladoc:
scaladoc -skip-packages <pack1>:<pack2>:...:<packN> files.scala
So this would be a workaround here.
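In an sbt build, this can be passed through the doc task's scalac options (sbt 0.13 syntax; skipping scalaz is just this question's example):

```scala
// build.sbt: pass -skip-packages to scaladoc only, not to regular compilation
scalacOptions in (Compile, doc) ++= Seq("-skip-packages", "scalaz")
```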
I have a Scala project which I build using sbt 0.13.5.
Since I implemented a module using shapeless, whenever sbt tries to incrementally compile the project it fails to resolve the required implicits:
[...] could not find implicit value for parameter mapper [...]
[...] could not find implicit value for parameter folder [...]
and so on.
sbt clean and sbt compile solve the issue, but the project is fairly big, and this painfully slows down compilation times and my productivity in turn, as a fresh build can take several minutes.
Any idea of what's going on here?
Some extra info
So, after some more thinking, I formed a hypothesis. The issue occurs when using shapeless records, and looking at the generated files, I think it could be an issue with the macro that generates the singleton type for each record key.
My module takes an HList of ColParser[T, K], declared as:
sealed trait ColParser[T, K] {
  val column: Witness.Aux[K]
}
So the compiler generates a Witness for each ColParser using a macro, and I'm afraid sbt loses track of the generated macro expansions when deciding what to recompile, but this is just a sketchy hypothesis.
As a matter of fact, whenever I change something in the code that invokes my module (e.g. I add/remove a ColParser from the HList), I get the above error.
Forcing the recompilation of the module (by deleting the generated .class files) fixes the issue.