Suppose my entire project configuration is this simple build.sbt:
scalaVersion := "2.11.4"
libraryDependencies += "org.scalaz" %% "scalaz-core" % "7.1.0"
And this is my code:
import scalaz.Equal
import scalaz.syntax.equal._
object Foo {
  def whatever[A: Equal](a: A, b: A) = a === b
}
Now when I run sbt doc and open the API docs in the browser, I see the scalaz package in the ScalaDoc root package listing, together with my Foo:
object Foo
package scalaz
I've noticed this with Scalaz before, and I'm not the only one it happens to (see for example the currently published version of the Argonaut API docs). I'm not sure I've seen it happen with any library other than Scalaz.
If I don't actually use anything from Scalaz in my project code, it doesn't show up. The same thing happens on at least 2.10.4 and 2.11.4.
Why is the scalaz package showing up here and how can I make it stop?
I noticed this too. It also happens with the akka.pattern package from Akka and, for example, with the upickle package from the upickle project.
Those three packages have two things in common:
They are package objects
They define at least one type in a mixin trait.
So I did a little experiment with two projects:
Project A:
trait SomeFunctionality {
  class P(val s: String)
}

package object projectA extends SomeFunctionality
Project B (depends on Project A):
package projectB

import projectA._

object B extends App {
  val p = new P("Test")
}
And voilà: two packages appear in the root package of Project B's ScalaDoc:
projectA
projectB
It seems like both criteria above have to be met, since eliminating either one solves the issue.
I believe this is a bug in the Scala compiler, so I can't help you avoid it: the only fix would be to change the sources of Scalaz in this case. Changing anything in Project B other than removing all references to Project A didn't help.
Update: It seems like you can instruct the compiler to exclude specific packages from scaladoc.
scaladoc -skip-packages <pack1>:<pack2>:...:<packN> files.scala
So this would be a workaround here.
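In sbt, a minimal sketch of wiring that flag into the doc task (sbt 0.13-style syntax) would be:
// build.sbt: pass -skip-packages to scaladoc so the scalaz package is omitted
scalacOptions in (Compile, doc) ++= Seq("-skip-packages", "scalaz")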
Related
I have a few enumerations implemented as sealed traits and case objects. I prefer the ADT approach because of the non-exhaustive-match warnings, and mostly because we want to avoid type erasure. Something like this:
sealed abstract class Maker(val value: String) extends Product with Serializable {
  override def toString = value
}

object Maker {
  case object ChryslerMaker extends Maker("Chrysler")
  case object ToyotaMaker extends Maker("Toyota")
  case object NissanMaker extends Maker("Nissan")
  case object GMMaker extends Maker("General Motors")
  case object UnknownMaker extends Maker("")

  val tipos = List(ChryslerMaker, ToyotaMaker, NissanMaker, GMMaker, UnknownMaker)

  private val fromStringMap: Map[String, Maker] = tipos.map(s => s.toString -> s).toMap

  def apply(key: String): Option[Maker] = fromStringMap.get(key)
}
This is working well so far. Now we are considering giving other programmers access to our code so they can configure it on site. I see two potential problems:
1) People messing up and writing things like:
case object ChryslerMaker extends Maker("Nissan")
and people forgetting to update the tipos list.
I have been looking into using a configuration file (JSON or CSV) to provide these values and reading them as we do with plenty of other elements, but all the answers I have found rely on macros and seem to be extremely dependent on the Scala version used (2.12 for us).
What I would like to find is:
1a) (Preferred) a way to dynamically create the case objects from a list of strings, making sure the objects are named consistently with the value they hold
1b) (Acceptable) if this proves too hard, a way to obtain the objects and the values during the test phase
2) A check that the number of elements in the list matches the number of case objects created.
I forgot to mention: I have looked briefly at Enumeratum, but I would prefer not to include additional libraries unless I really understand the pros and cons (and right now I am not sure how Enumeratum compares with the ADT approach; if you think this is the best way and can point me to such a discussion, that would work great).
Thanks!
One idea that comes to my mind is to create an SBT SourceGenerator task.
It will read an input file (JSON, CSV, XML, whatever) that is part of your project and create a Scala file.
// ----- File: project/VendorsGenerator.scala -----
import sbt.Keys._
import sbt._

/**
 * An SBT task that generates a managed source file with all vendors.
 */
object VendorsGenerator {
  // For demonstration, I will use this plain List[String] to generate the code,
  // you may change the code to read a file instead.
  // Or maybe this will be good enough.
  final val vendors: List[String] =
    List(
      "Chrysler",
      "Toyota",
      ...
      "Unknown"
    )

  val generatorTask = Def.task {
    // Make the 'tipos' List, which contains all vendors.
    val tipos =
      vendors
        .map(vendorName => s"${vendorName}Vendor")
        .mkString("val tipos: List[Vendor] = List(", ",", ")")

    // Make a case object for each vendor.
    val vendorObjects = vendors.map { vendorName =>
      s"""case object ${vendorName}Vendor extends Vendor { override final val value: String = "${vendorName}" }"""
    }

    // Fill the code template.
    val code =
      List(
        List(
          "package vendors",
          "sealed trait Vendor extends Product with Serializable {",
          "def value: String",
          "override final def toString: String = value",
          "}",
          "object Vendors extends (String => Option[Vendor]) {"
        ),
        vendorObjects,
        List(
          tipos,
          // Lowercase the map keys so the case-insensitive lookup below actually matches.
          "private final val fromStringMap: Map[String, Vendor] = tipos.map(v => v.toString.toLowerCase -> v).toMap",
          "override def apply(key: String): Option[Vendor] = fromStringMap.get(key.toLowerCase)",
          "}"
        )
      ).flatten

    // Save the new file to the managed sources dir.
    val vendorsFile = (sourceManaged in Compile).value / "vendors.scala"
    IO.writeLines(vendorsFile, code)
    Seq(vendorsFile)
  }
}
Now, you can activate your source generator.
This task will be run each time, before the compile step.
// ----- File: build.sbt -----
sourceGenerators in Compile += VendorsGenerator.generatorTask.taskValue
Please note that I suggest this because I have done it before, and because I don't have any macro or metaprogramming experience.
Also, note that this example relies a lot on Strings, which makes the code a little bit hard to understand and maintain.
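For reference, the file this task writes would look roughly like this (a sketch derived from the template above, showing only two vendors and reformatted for readability):
package vendors
sealed trait Vendor extends Product with Serializable {
  def value: String
  override final def toString: String = value
}
object Vendors extends (String => Option[Vendor]) {
  case object ChryslerVendor extends Vendor { override final val value: String = "Chrysler" }
  case object ToyotaVendor extends Vendor { override final val value: String = "Toyota" }
  val tipos: List[Vendor] = List(ChryslerVendor, ToyotaVendor)
  private final val fromStringMap: Map[String, Vendor] = tipos.map(v => v.toString.toLowerCase -> v).toMap
  override def apply(key: String): Option[Vendor] = fromStringMap.get(key.toLowerCase)
}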
BTW, I haven't used Enumeratum, but from a quick look it seems like the best solution to this problem.
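For comparison, here is an untested sketch of the question's enum using Enumeratum's documented findValues/withNameOption API:
import enumeratum._

sealed abstract class Maker(override val entryName: String) extends EnumEntry

object Maker extends Enum[Maker] {
  // findValues is Enumeratum's macro: it collects all the case objects below,
  // so there is no hand-maintained 'tipos' list to forget to update.
  val values = findValues

  case object Chrysler extends Maker("Chrysler")
  case object Toyota extends Maker("Toyota")
  case object Unknown extends Maker("")
}

// Lookup by name, analogous to Maker.apply in the question:
// Maker.withNameOption("Toyota") // Some(Toyota)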
Edit
I have my code ready to read a HOCON file and generate the matching code. My question now is where to place the Scala file in the project directory, and where the files will be generated. I am a little bit confused because there seem to be multiple steps: 1) compile my Scala generator, 2) run the generator, and 3) compile and build the project. Is this right?
Your generator is not part of your project code, but of your meta-project (I know that sounds confusing; you may read this to understand it). As such, you place the generator inside the project folder at the root level (the same folder that holds the build.properties file specifying the sbt version).
If your generator needs some dependencies (I'm sure it does, for reading the HOCON), you place them in a build.sbt file inside that project folder.
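For example, if you parse the HOCON with Typesafe Config (the coordinates below are the usual ones for that library; swap in whatever parser you actually use):
// ----- File: project/build.sbt -----
// A dependency for the generator itself, not for your project's code.
libraryDependencies += "com.typesafe" % "config" % "1.3.2"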
If you plan to add unit tests for the generator, you may create an entire Scala project inside the meta-project (for reference, you may take a look at the project folder of an open-source project I work on; yes, yes, I know, confusing again). My personal suggestion is that, more than testing the generator itself, you should test the generated file instead, or better, both.
The generated file will be automatically placed in the src_managed folder (which lives inside target and is thus ignored by your source code version control).
The path inside it is just for organization, since everything inside the src_managed folder is included by default when compiling.
val vendorsFile = (sourceManaged in Compile).value / "vendors.scala" // Path to the file to write.
In order to access the values defined in the generated file from your source code, you only need to add a package to the generated file and import the values from that package in your code (as with any normal file).
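For example, a sketch assuming the vendors package generated above:
import vendors._

object Demo extends App {
  // Vendors extends (String => Option[Vendor]), so it can be applied directly.
  println(Vendors("Toyota")) // Some(Toyota)
}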
You don't need to worry about anything related with compilation order, if you include your source generator in your build.sbt file, SBT will take care of everything automatically.
sourceGenerators in Compile += VendorsGenerator.generatorTask.taskValue // Activate the source generator.
SBT will re-run your generator every time it needs to compile.
"BTW I get "not found: object sbt" on the imports".
If the project is inside the meta-project space, it will find the sbt package by default, don't worry about it.
I have a bunch of SBT 0.13 project definitions that look like this:
lazy val coreBase = crossProject.crossType(CrossType.Pure).in(file("core"))
  .settings(...)
  .jvmConfigure(_.copy(id = "core"))
  .jsConfigure(_.copy(id = "coreJS"))

lazy val core = coreBase.jvm
lazy val coreJS = coreBase.js
(Mostly because I'm resentful about having to maintain Scala.js builds and don't want to have to type the JVM suffix every time I'm changing projects, etc.)
This doesn't compile in SBT 1.0 because Project no longer has a copy method.
Okay, let's check the migration guide.
Many of the case classes are replaced with pseudo case classes generated using Contraband. Migrate .copy(foo = xxx) to withFoo(xxx).
Cool, let's try it.
build.sbt:100: error: value withId is not a member of sbt.Project
    .jvmConfigure(_.withId("core"))
                    ^
So I asked on Gitter and got crickets.
The links for the 1.0 API docs actually point to something now, which is nice, but they're not very helpful in this case, and trying to read the SBT source gives me a headache. I'm not in a rush to update to 1.0, but I'm going to have to at some point, I guess, and maybe some helpful person will have answered this by then.
(This answer has been edited with information about sbt 1.1.0+ and sbt-crossproject 0.3.1+, which significantly simplify the whole thing.)
With sbt 1.1.0 and later, you can use .withId("core"). But there's something better with sbt-crossproject 0.3.1+; see below.
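Applied to the build definition from the question, that would look roughly like this:
lazy val coreBase = crossProject.crossType(CrossType.Pure).in(file("core"))
  .settings(...)
  .jvmConfigure(_.withId("core"))
  .jsConfigure(_.withId("coreJS"))

lazy val core = coreBase.jvm
lazy val coreJS = coreBase.js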
I don't know about changing the ID of a Project, but here is also a completely different way to solve your original issue, i.e., have core/coreJS instead of coreJVM/coreJS. The idea is to customize crossProject to use the IDs you want to begin with.
First, you'll need to use sbt-crossproject. It is the new "standard" for compilation across several platforms, co-designed by @densh from Scala Native and myself (from Scala.js). Scala.js 1.x will always use sbt-crossproject, but it is also possible to use sbt-crossproject with Scala.js 0.6.x. For that, follow the instructions in the readme. In particular, don't forget the "shadowing" part:
// Shadow sbt-scalajs' crossProject and CrossType from Scala.js 0.6.x
import sbtcrossproject.{crossProject, CrossType}
sbt-crossproject is more flexible than Scala.js' hard-coded crossProject. This means you can customize it more easily. In particular, it has a generic notion of Platform, defining how any given platform behaves.
For a cross JVM/JS project, the new-style crossProject invocation would be
lazy val coreBase = crossProject(JVMPlatform, JSPlatform)
  .crossType(CrossType.Pure)
  .in(file("core"))
  .settings(...)
  .jvmConfigure(_.copy(id = "core"))
  .jsConfigure(_.copy(id = "coreJS"))

lazy val core = coreBase.jvm
lazy val coreJS = coreBase.js
Starting with sbt-crossproject 0.3.1, you can simply tell it not to add the platform suffix for one of your platforms. In your case, you want to avoid the suffix for the JVM platform, so you would write:
lazy val coreBase = crossProject(JVMPlatform, JSPlatform)
  .withoutSuffixFor(JVMPlatform)
  .crossType(CrossType.Pure)
  ...

lazy val core = coreBase.jvm
lazy val coreJS = coreBase.js
and that's all you need to do!
Old answer, applicable to sbt-crossproject 0.3.0 and before
JVMPlatform and JSPlatform are not an ADT; they are designed in an OO style. This means you can create your own. In particular, you can create your own JVMPlatformNoSuffix that would do the same as JVMPlatform but without adding a suffix to the project ID:
import sbt._
import sbtcrossproject._

case object JVMPlatformNoSuffix extends Platform {
  def identifier: String = "jvm"
  def sbtSuffix: String = "" // <-- here is the magical empty string
  def enable(project: Project): Project = project

  val crossBinary: CrossVersion = CrossVersion.binary
  val crossFull: CrossVersion = CrossVersion.full
}
Now that's not quite enough yet, because .jvmSettings(...) and friends are defined to act on a JVMPlatform, not on any other Platform such as JVMPlatformNoSuffix. You'll therefore have to redefine that as well:
implicit def JVMNoSuffixCrossProjectBuilderOps(
    builder: CrossProject.Builder): JVMNoSuffixCrossProjectOps =
  new JVMNoSuffixCrossProjectOps(builder)

implicit class JVMNoSuffixCrossProjectOps(project: CrossProject) {
  def jvm: Project = project.projects(JVMPlatformNoSuffix)

  def jvmSettings(ss: Def.SettingsDefinition*): CrossProject =
    jvmConfigure(_.settings(ss: _*))

  def jvmConfigure(transformer: Project => Project): CrossProject =
    project.configurePlatform(JVMPlatformNoSuffix)(transformer)
}
Once you have all of that in your build (hidden away in a project/JVMPlatformNoSuffix.scala in order not to pollute the .sbt file), you can define the above cross-project as:
lazy val coreBase = crossProject(JVMPlatformNoSuffix, JSPlatform)
  .crossType(CrossType.Pure)
  .in(file("core"))
  .settings(...)

lazy val core = coreBase.jvm
lazy val coreJS = coreBase.js
without any need to explicitly patch the project IDs.
I have a Scala project A that has an interface (abstract class) I, its implementations, and a reference to project B (B.jar). A is packaged with publish-local, compiled into a jar file, and stored in the local Ivy directory.
Project B, in turn, uses the I interface from project A; it is likewise compiled into a jar and published to the local Ivy directory.
Here come some design questions in Scala:
Is this a circular dependency, since A refers to B while B refers to A?
If the first question is an issue, I guess the simplest solution is to extract the interface I from A and make it another project to be referenced by both A and B. Isn't it overkill to have a project that contains only one interface? Or is it OK, given that B references only one class file in A? What's the best practice in Scala?
There are times when a so-called cyclic dependency can reduce lines of code, so it cannot be dismissed outright as "bad practice".
It all depends on the context of the project.
You need answers to questions like:
Do the projects need to be in separate libraries at all?
If so, can we consider using a DI framework, e.g. Spring, Guice, etc.?
That said, because it is Scala you don't really need a framework per se to implement this.
Consider the following example:
class IdentityCard(val id: String, val manufacturerCompany: String, person: => Person)
class Person(val firstName: String, val lastName: String, icard: => IdentityCard)

// The lazy vals need an enclosing object to compile:
object Wiring {
  lazy val iCard = new IdentityCard("123", "XYZ", person)
  lazy val person: Person = new Person("som", "bhattacharyya", iCard)
}
These two classes can be in separate jars and still compile and work together with less code.
Note that we are using call-by-name for the dependencies being passed in. We are also initializing the variables lazily, so they are not evaluated until they are accessed. This allows us to forward-reference, i.e., use a variable first and define it later.
It's difficult to give "best practice" advice without more specifics. If "project B" is tightly coupled with project A, they should probably both be in the same project, but in different sub-projects/sub-modules. The interface B uses could also be its own sub-project to remove the circle.
sbt and Maven support this; here are the sbt docs.
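For illustration, a minimal sketch of that layout (the module names are hypothetical): the shared interface lives in its own sub-project, so a and b can both depend on it without depending on each other.
// build.sbt
lazy val interfaces = project.in(file("interfaces")) // contains only the interface I

lazy val a = project.in(file("a")).dependsOn(interfaces)

lazy val b = project.in(file("b")).dependsOn(interfaces)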
Scopes matter in sbt, and I'm completely OK with that. But there are also delegation rules that let you build a hierarchical structure of settings. I'd like to use them to bring extra settings to more specific scopes.
import sbt._
import Keys._

object TestBuild extends Build {
  val sourceExample = settingKey[Seq[String]]("example source for setting dependency")
  val targetExample = settingKey[Seq[String]]("example of a dependent setting")

  override lazy val settings = super.settings ++ Seq(
    sourceExample := Seq("base"),
    targetExample := "extended" +: sourceExample.value,
    sourceExample in Test += "testing"
  )
}
The example gives me unexpected output:
> show compile:sourceExample
[info] List(base)
> show test:sourceExample
[info] List(base, testing)
> show compile:targetExample
[info] List(extended, base)
> show test:targetExample
[info] List(extended, base)
I expected test:targetExample to be List(extended, base, testing), not List(extended, base). Once I got the result, I immediately figured out why it works as shown: test:targetExample delegates to *:targetExample for the calculated value, but not for the rule that calculates it in the nested scope.
This behavior brings two difficulties to writing my own plugin: as a plugin developer I have extra work defining the same rules in every scope, and as a user I have to memorize the scope definitions of internal tasks in order to use them correctly.
How can I overcome this inconvenience? I'd like to introduce settings with call-by-name semantics instead of call-by-value. What tricks might work?
P.S. libraryDependencies in Test looks much more concise than using % "test".
I should make clear that I perfectly understand that sbt derives values just as described in the documentation. It works as its creator intended it to work.
But why should I obey those rules? I find them completely counter-intuitive: sbt introduces an inheritance semantics that works unlike inheritance as it is usually defined. When you write
trait A { lazy val x: Int = 5 }
trait B extends A { lazy val y: Int = x * 2 }
trait C extends A { override lazy val x: Int = 3 }
you expect (new B with C).y to be 6, not 10. Knowing that it would actually be 10 lets you use this kind of inheritance correctly, but leaves you wanting more conventional means of implementing inheritance. You might even write your own implementation based on a name-to-value dictionary, and proceed further according to the tenth rule of programming.
So I'm searching for a hack that would bring this inheritance semantics in line with the common one. As a starting point, I might write a command that scans all settings and pushes them from parents to children explicitly, and then invoke this command automatically each time sbt runs.
But that seems too dirty to me, so I'm curious whether there is a more graceful way to achieve similar semantics.
The reason for the "incorrect" value is that targetExample depends on sourceExample in Compile scope as in:
targetExample := "extended" +: sourceExample.value
If it should use the sourceExample value from the Test scope, use in Test to be explicit about your wish, as follows:
targetExample := "extended" +: (sourceExample in Test).value
Use inspect to see the dependency chain.
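For example:
> inspect test:targetExample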
BTW, I strongly advise using build.sbt file for such a simple build definition.
You could have default settings and reuse them in different configurations, as described in Plugins Best Practices - Playing nice with configurations. I believe this should give you semantics similar to what you're looking for.
You can define your base settings and reuse them in different configurations:
import sbt._
import Keys._

object TestBuild extends Build {
  val sourceExample = settingKey[Seq[String]]("example source for setting dependency")
  val targetExample = settingKey[Seq[String]]("example of a dependent setting")

  override lazy val settings = super.settings ++
    inConfig(Compile)(basePluginSettings) ++
    inConfig(Test)(basePluginSettings ++ Seq(
      sourceExample += "testing" // we are already "in Test" here
    ))

  lazy val basePluginSettings: Seq[Setting[_]] = Seq(
    sourceExample := Seq("base"),
    targetExample := "extended" +: sourceExample.value
  )
}
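With the base settings applied per configuration, the rule is re-evaluated in the Test scope, so you should now get the value the question expected:
> show test:targetExample
[info] List(extended, base, testing)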
PS. Since you're talking about writing your own plugin, you may also want to look at the new way of writing sbt plugins, namely AutoPlugin, as the old mechanism is now deprecated.
Assume you have two SBT projects, one called A and another called B.
A has a subproject called macro that follows the exact same pattern as shown here (http://www.scala-sbt.org/0.13.0/docs/Detailed-Topics/Macro-Projects.html). In other words, A has a subproject macro with a package that exposes a macro (let's call it macrotools). Both projects, A and B, use the macrotools package (A and B are strictly separate projects; B uses A via dependencies in SBT, with A using publish-local).
Now, A using its own macrotools package is fine; everything works correctly. However, when B uses A's macrotools package, the following error happens:
java.lang.IllegalAccessError: tried to access method com.monetise.waitress.types.Married$.<init>()V from class com.monetise.waitress.types.RelationshipStatus$
For those wondering, the macro is this one: https://stackoverflow.com/a/13672520/1519631. In other words, that macro is what is inside the macrotools package.
This is also related to my earlier question, Macro dependancy appearing in POM/JAR, except that I am now using SBT 0.13 and following the altered guide for SBT 0.13.
The code being referred to above is what is in B; A provides com.monetise.ingredients.macros.tools (a dependency specified in build.sbt):
package com.monetise.waitress.types

import com.monetise.ingredients.macros.tools.SealedContents

sealed abstract class RelationshipStatus(val id: Long, val formattedName: String)
case object Married extends RelationshipStatus(0, "Married")
case object Single extends RelationshipStatus(1, "Single")

object RelationshipStatus {
  // val all: Set[RelationshipStatus] = Set(
  //   Married, Single
  // )
  val all: Set[RelationshipStatus] = SealedContents.values[RelationshipStatus]
}
As you can see, when I use what's commented out, the code works fine (the job of the macro is to fill the Set with all the case objects in an ADT). When I use the macro version, i.e. SealedContents.values[RelationshipStatus], is when I hit the java.lang.IllegalAccessError.
EDIT
Here are the repos containing the projects
https://github.com/mdedetrich/projectacontainingmacro
https://github.com/mdedetrich/projectb
Note that I had to make some changes, which I forgot about earlier. Because the other project needs to depend on the macro as well, the following two lines, which disable macro publishing, have been commented out:
publish := {},
publishLocal := {}
These lines are in Build.scala. Also note this is a runtime error, not a compile-time error.
EDIT 2
Created a github issue here https://github.com/sbt/sbt/issues/874
This issue is unrelated to SBT. It looks like the macro from Iteration over a sealed trait in Scala? that you're using has a bug. Follow the link to see a fix.