Circular dependency in Scala

I have a Scala project A that contains an interface (abstract class) I, its implementations, and a reference to project B (B.jar). A is packaged with publish-local, i.e. compiled into a jar file and stored in the .ivy directory.
Project B, in turn, uses the interface I from project A; it is likewise compiled into a jar and published to the .ivy directory.
Here come some design questions in Scala:
Is this a circular dependency, since A refers to B while B refers to A?
If so, I guess the simplistic solution is to extract the interface I from A and make it a separate project referenced by both A and B. Isn't it overkill to have a project containing only one interface? Or is that fine, given that B references only one class file in A? What's the best practice in Scala?

There are times when a so-called cyclic dependency can reduce the amount of code, so it cannot be dismissed as 'bad practice' out of hand.
It all depends on the context of the project.
You need answers to questions like:
Do the projects need to be in different libraries at all?
If so, can we consider using a DI framework, e.g. Spring or Guice?
That said, because this is Scala you don't really need a framework per se to implement this.
Consider the following example:
class IdentityCard(val id: String, val manufacturerCompany: String, person: => Person)

class Person(val firstName: String, val lastName: String, icard: => IdentityCard)

// the lazy vals need an enclosing object; Wiring is an illustrative name
object Wiring {
  lazy val iCard: IdentityCard = new IdentityCard("123", "XYZ", person)
  lazy val person: Person = new Person("som", "bhattacharyya", iCard)
}
These two classes can be in separate jars and still compile and work together with less code.
Note that we are using call-by-name for the dependencies being passed in, and lazy initialization of the vals so they are not evaluated until they are accessed. Together these allow forward references, i.e. using a variable first and defining it later.
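For instance, with the illustrative Wiring object above, nothing is constructed until one of the lazy vals is touched:

println(Wiring.person.firstName) // builds person; iCard stays unevaluated (by-name)
println(Wiring.iCard.id)         // iCard is only constructed here, on first access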

It's difficult to give "best practice" advice without more specifics. If project B is tightly coupled to project A, they should probably live in the same project, but as different sub-projects/sub-modules. The interface B uses could also be its own sub-project, which removes the cycle.
Both sbt and Maven support this; see the sbt multi-project docs.
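A minimal build.sbt sketch of that layout (the project names core, a, and b are illustrative):

// core holds only the interface I, so it stays tiny and stable
lazy val core = project.in(file("core"))

// a provides the implementations of I
lazy val a = project.in(file("a")).dependsOn(core)

// b programs against I without depending on a, so there is no cycle
lazy val b = project.in(file("b")).dependsOn(core)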

Related

duplicate package objects in main and test

I have a package object defined in both the main and the test code tree, as shown below. When I execute the program with sbt run, the one in the main code tree takes effect, whereas when I run the test cases (sbt test), the package object defined in the test code tree takes effect. For example:
src/main/scala/com/example/package.scala
package com.example

package object core {
  val foo = "Hello World"
}
src/test/scala/com/example/package.scala
package com.example

package object core {
  val foo = "Goodbye World"
}
On sbt run the value of com.example.core.foo is "Hello World"; on sbt test it is "Goodbye World".
Is this just a quirk of sbt, or is it well-defined Scala/sbt behaviour? I currently use this behaviour for dependency injection, defining my module bindings for production and test in their corresponding package objects. Is this an advisable approach?
Scala looks for package objects along your current path, so this is well-defined behaviour. Since your test and main code reside in different places, each finds a different val foo.
The way you are using this mechanism is very similar to using implicits, and the general advice with implicits and implicit resolution is not to abuse them. I think in this case it is not the best way of providing dependencies.
You always have to consider which scope you are in: if you use a class defined in main from the test scope, how do you reach the foo from main, and how the foo from test, whenever you need one or the other? You have to think up front about how resolution will work in various scenarios. What if your test class is in a different package? Which foo would you get, and does it depend on where the tested class is declared?
Make the dependency injection more explicit, and don't spend mental cycles on it or risk confusing someone.
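As a sketch of what "more explicit" might look like (all names here are illustrative, not from the question), pass the binding in as an ordinary value instead of shadowing a package object:

trait Greeter { def greet(name: String): String }

object ProdGreeter extends Greeter {
  def greet(name: String) = s"Hello $name"
}

// the dependency is visible in the constructor instead of hidden in a package object
class App(greeter: Greeter) {
  def run(): String = greeter.greet("World")
}

val app     = new App(ProdGreeter)
val testApp = new App(new Greeter { def greet(name: String) = s"Goodbye $name" })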

Is it possible to ignore JVM-only properties and safely export to JavaScript?

I have a basic project setup following this Play with ScalaJS example. Other examples I have found using this same pattern would separate the case classes (models) from what would traditionally be their companion objects. That is, the case class would live in the "shared" sub-project, and the "companion object" (really just some object) would live in the "server" sub-project.
It would be highly preferable to keep these two within the same file (i.e. put important stuff in the real companion object), as it is very convenient to place type-class instances there and have them resolve properly. For example:
case class User(id: Int, name: String)
object User {
  val default = User(1, "Guest")
  // I need this for the back-end, but don't need to export to JS
  implicit val reads: Reads[User] = ...
}
Unfortunately, this leads to a linking error, as the Reads type exists solely on the JVM (and it is just one of many such types). But if I were to move val reads into a different file, the implicit resolution of Reads[User] would break throughout the "server" sub-project unless explicit imports were added everywhere (which would be annoying).
Is it possible to explicitly ignore certain properties in the ScalaJS export, while still allowing them to compile for the JVM? I'd like the User case class to export, and possibly even other properties of its companion object, but others that exist on the JVM only could be ignored without disrupting the front-end.
The way I have worked around this in the past (in the Scala.js codebase itself) is with a PlattformExtensions trait that is mixed into the cross-compiled object but differs between JVM and JS:
object User extends UserPlattformExtensions {
  val default = User(1, "Guest")
}
In your JVM project:
trait UserPlattformExtensions {
  implicit val reads: Reads[User] = ???
}
In your JS project:
trait UserPlattformExtensions
In your file organization (with a standard cross project), this would look like the following:
project/
  shared/
    src/main/
      User.scala
  jvm/
    src/main/
      UserPlattformExtensions.scala
  js/
    src/main/
      UserPlattformExtensions.scala
There are no dependency issues, since to the compiler, the source files are assembled as follows:
sources in projectJVM:
  shared/src/main/User.scala
  jvm/src/main/UserPlattformExtensions.scala
sources in projectJS:
  shared/src/main/User.scala
  js/src/main/UserPlattformExtensions.scala
So to each individual compilation run, this whole construct is simply an object that inherits from a trait. Which source directory a given file comes from does not matter to the compilation.
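A sketch of the corresponding build definition, assuming the classic Scala.js sbt plugin's crossProject (CrossType.Full produces exactly the shared/jvm/js layout above):

lazy val user = crossProject.crossType(CrossType.Full).in(file("."))
  .settings(name := "user")
  .jvmSettings(
    // JVM-only dependencies, e.g. the JSON library providing Reads, go here
  )

lazy val userJVM = user.jvm
lazy val userJS  = user.js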

Issue with using Macros in SBT

Assume you have two SBT projects, one called A and another called B.
A has a subproject called macro that follows the exact same pattern as shown here (http://www.scala-sbt.org/0.13.0/docs/Detailed-Topics/Macro-Projects.html). In other words, A has a subproject macro with a package that exposes a macro (let's call it macrotools). Both projects, A and B, use the macrotools package (A and B are strictly separate projects; B uses A via dependencies in SBT, with A using publish-local).
Now, A using A's macrotools package is fine, everything works correctly. However when B uses A's macrotools package, the following error occurs:
java.lang.IllegalAccessError: tried to access method com.monetise.waitress.types.Married$.<init>()V from class com.monetise.waitress.types.RelationshipStatus$
For those wondering, the macro is this one: https://stackoverflow.com/a/13672520/1519631. In other words, that macro is what is inside the macrotools package.
This is also related to my earlier question, Macro dependancy appearing in POM/JAR, except that I am now using SBT 0.13 and following the altered guide for SBT 0.13.
The code being referred to above is shown below; it is what is in B, and com.monetise.ingredients.macros.tools comes from A (a dependency specified in build.sbt):
package com.monetise.waitress.types
import com.monetise.ingredients.macros.tools.SealedContents
sealed abstract class RelationshipStatus(val id: Long, val formattedName: String)

case object Married extends RelationshipStatus(0, "Married")
case object Single extends RelationshipStatus(1, "Single")

object RelationshipStatus {
  // val all: Set[RelationshipStatus] = Set(
  //   Married, Single
  // )
  val all: Set[RelationshipStatus] = SealedContents.values[RelationshipStatus]
}
As you can see, when I use what's commented out, the code works fine (the job of the macro is to fill the Set with all the case objects of an ADT). When I use the macro version, i.e. SealedContents.values[RelationshipStatus], I hit the java.lang.IllegalAccessError.
EDIT
Here are the repos containing the projects
https://github.com/mdedetrich/projectacontainingmacro
https://github.com/mdedetrich/projectb
Note that I had to make some changes, which I forgot about earlier. Because the other project needs to depend on the macro as well, the following two lines, which disable macro publishing, have been commented out:
publish := {},
publishLocal := {}
in the build.scala. Also note this is a runtime error, not a compile-time error.
EDIT 2
Created a github issue here https://github.com/sbt/sbt/issues/874
This issue is unrelated to SBT. It looks like the macro from Iteration over a sealed trait in Scala? that you're using has a bug. Follow the link to see a fix.

Create new *package* in a Scala Compiler Plugin

In my quest to generate new code in a Scala compiler plugin, I have now created working classes. The next logical step is to put those classes in a new, non-existent package. In Java, a package is basically a directory name, but in Scala a package seems to be much more complicated. So far I haven't found/recognized an example where a compiler plugin creates a new package.
At my current level of understanding, I would think that I first need to create a package symbol with:
parentPackage.newPackage(...)
// ...
and then later create a Tree for the package with PackageDef. But PackageDef doesn't take the symbol as a parameter, as one would expect, and searching for:
Scala newPackage PackageDef
returned nothing useful. So it seems that I don't need to do those two steps together. Possibly one is done for me by the compiler, but I don't know which one. So far, what I have looks like this:
val newPkg = parentPackage.newPackage(NoPosition, newTermName(name))
newPkg.moduleClass.setInfo(new PackageClassInfoType(new Scope, newPkg.moduleClass))
newPkg.setInfo(newPkg.moduleClass.tpe)
parentPackage.info.decls.enter(newPkg)
// ...
val newPkgTree = PackageDef(Ident(newPkg.name), List(ClassDef(...)))
I think my answer to your other question should answer this one as well:
How to add a new Class in a Scala Compiler Plugin?

How do you do dependency injection with the Cake pattern without hardcoding?

I just read and enjoyed the Cake pattern article. However, to my mind, one of the key reasons to use dependency injection is that you can vary the components being used by either an XML file or command-line arguments.
How is that aspect of DI handled with the Cake pattern? The examples I've seen all involve mixing traits in statically.
Since mixing in traits is done statically in Scala, if you want to vary the traits mixed into an object, create different objects based on some condition.
Let's take a canonical Cake pattern example. Your modules are defined as traits, and your application is constructed as a simple object with a bunch of functionality mixed in:
val application =
  new Object
    with Communications
    with Parsing
    with Persistence
    with Logging
    with ProductionDataSource

application.startup
Now all of those modules have nice self-type declarations which define their inter-module dependencies, so that line only compiles if all your inter-module dependencies exist, are unique, and are well-typed. In particular, the Persistence module has a self-type which says that anything implementing Persistence must also implement DataSource, an abstract module trait. Since ProductionDataSource inherits from DataSource, everything's great, and that application construction line compiles.
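A hedged sketch of the self-type wiring just described (the method bodies are illustrative):

trait DataSource { def query(sql: String): List[String] }

trait ProductionDataSource extends DataSource {
  def query(sql: String) = Nil // talk to the real database here
}

// the self-type: anything implementing Persistence must also be a DataSource,
// so Persistence can call query without inheriting from DataSource itself
trait Persistence { self: DataSource =>
  def save(row: String): Unit = { query(s"INSERT INTO t VALUES ('$row')"); () }
}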
But what if you want to use a different DataSource, pointing at some local database for testing purposes? Assume further that you can't just reuse ProductionDataSource with different configuration parameters loaded from some properties file. What you would do in that case is define a new trait TestDataSource which extends DataSource, and mix it in instead. You could even do so dynamically, based on a command-line flag.
val application =
  if (test)
    new Object
      with Communications
      with Parsing
      with Persistence
      with Logging
      with TestDataSource
  else
    new Object
      with Communications
      with Parsing
      with Persistence
      with Logging
      with ProductionDataSource

application.startup
Now that looks a bit more verbose than we would like, particularly if your application needs to vary its construction along multiple axes. On the plus side, you usually only have one chunk of conditional construction logic like that in an application (or at worst one per identifiable component lifecycle), so at least the pain is minimized and fenced off from the rest of your logic.
Scala is also a scripting language, so your configuration "XML" can be a Scala script. It is type-safe and not a different language.
Simply look at startup:
scala -cp first.jar:second.jar startupScript.scala
is not so different from:
java -cp first.jar:second.jar com.example.MyMainClass context.xml
You can always use DI, but you have one more tool.
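For instance, startupScript.scala could contain the same wiring shown earlier, now living in the "config file" (a hypothetical script, assuming the application traits are provided by the jars on the classpath):

// startupScript.scala: the wiring is ordinary, compiler-checked Scala
val application =
  new Object
    with Communications
    with Parsing
    with Persistence
    with Logging
    with ProductionDataSource

application.startup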
The short answer is that Scala doesn't currently have any built-in support for dynamic mixins.
I am working on the autoproxy-plugin to support this, although it's currently on hold until the 2.9 release, when the compiler will have new features making it a much easier task.
In the meantime, the best way to achieve almost exactly the same functionality is to implement your dynamically added behavior as a wrapper class, then add an implicit conversion back to the wrapped member.
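A rough sketch of that wrapper-plus-conversion idea (Service and AuditedService are illustrative names, not from any library):

trait Service { def call(x: Int): Int }

// the wrapper adds behavior without touching Service's inheritance hierarchy
class AuditedService(val underlying: Service) {
  def call(x: Int): Int = {
    println(s"call($x)") // the dynamically added behavior
    underlying.call(x)
  }
}

// the conversion back to the wrapped member, so an AuditedService can still be
// used wherever a plain Service is expected (in real code this implicit would
// live in an enclosing object or package object)
import scala.language.implicitConversions
implicit def toService(s: AuditedService): Service = s.underlying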
Until the AutoProxy plugin becomes available, one way to achieve the effect is to use delegation:
trait Module {
  def foo: Int
}

trait DelegatedModule extends Module {
  var delegate: Module = _
  def foo = delegate.foo
}

class Impl extends Module {
  def foo = 1
}

// later
val composed: Module with ... with ... = new DelegatedModule with ... with ...
composed.delegate = choose() // choose is linear in the number of `Module` implementations
But beware: the downside of this is that it's more verbose, and you have to be careful about initialization order if you use vars inside a trait. Another downside is that if there are path-dependent types within Module above, you won't be able to use delegation that easily.
But if there is a large number of different implementations that can be varied, it will probably cost you less code than spelling out every possible combination as a separate case.
Lift has something along those lines built in. It's mostly in Scala code, but you have some runtime control: http://www.assembla.com/wiki/show/liftweb/Dependency_Injection