Why does a test-scoped setting not hold the correct value?

Scopes matter in sbt, and I'm completely OK with that. But there are also delegation rules that allow you to build a hierarchical structure of settings. I'd like to use them to bring extra settings into more specific scopes.
import sbt._
import Keys._

object TestBuild extends Build {
  val sourceExample = settingKey[Seq[String]]("example source for setting dependency")
  val targetExample = settingKey[Seq[String]]("example of a dependent setting")

  override lazy val settings = super.settings ++ Seq(
    sourceExample := Seq("base"),
    targetExample := "extended" +: sourceExample.value,
    sourceExample in Test += "testing"
  )
}
The example gives me unexpected output:
> show compile:sourceExample
[info] List(base)
> show test:sourceExample
[info] List(base, testing)
> show compile:targetExample
[info] List(extended, base)
> show test:targetExample
[info] List(extended, base)
I expected test:targetExample to be List(extended, base, testing), not List(extended, base). Once I got the result, I immediately figured out why it works as shown: test:targetExample delegates the already-calculated value from *:targetExample, not the rule for calculating it in the nested scope.
This behavior brings two difficulties for me when writing my own plugin. As a plugin developer, I have the extra work of defining the same rules in every scope. And as a user, I have to memorize the scope definitions of internal tasks to use them correctly.
How can I overcome this inconvenience? I'd like to introduce settings with call-by-name semantics instead of call-by-value. What tricks might work for that?
P.S. libraryDependencies in Test looks much more concise than using % test.
I should make it clear that I perfectly understand that sbt derives values exactly as described in the documentation. It works as its creator intended it to work.
But why should I obey these rules? I find them completely counter-intuitive. Sbt introduces an inheritance semantics that works quite unlike how inheritance is usually defined. When you write
trait A { lazy val x : Int = 5 }
trait B extends A { lazy val y : Int = x * 2}
trait C extends A { override lazy val x : Int = 3 }
you expect (new B with C).y to be 6, not 10. Knowing that it would actually come out as 10 under sbt-style delegation allows you to use this kind of inheritance correctly, but it leaves you with the desire to find more conventional means of implementing inheritance. You might even write your own implementation based on a name -> value dictionary, and proceed further according to the tenth rule of programming.
So I'm searching for a hack that would bring the inheritance semantics in line with the common one. As a starting point I might suggest writing a command that scans all settings and pushes them from parents to children explicitly, and then invoking this command automatically each time sbt runs.
But that seems too dirty to me, so I'm curious whether there is a more graceful way to achieve similar semantics.

The reason for the "incorrect" value is that targetExample depends on sourceExample in Compile scope as in:
targetExample := "extended" +: sourceExample.value
If it should use the sourceExample value from the Test scope, use in Test to make that explicit, as follows:
targetExample := "extended" +: (sourceExample in Test).value
Use inspect to know the dependency chain.
BTW, I strongly advise using a build.sbt file for such a simple build definition.
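For instance, a rough build.sbt equivalent of the build above (a sketch, assuming sbt 0.13+ where keys can be defined directly in .sbt files) could be:
// build.sbt -- sketch of the same keys, with the Test dependency made explicit
val sourceExample = settingKey[Seq[String]]("example source for setting dependency")
val targetExample = settingKey[Seq[String]]("example of a dependent setting")

sourceExample := Seq("base")
sourceExample in Test += "testing"

// read explicitly from the Test scope so the "testing" element is picked up
targetExample := "extended" +: (sourceExample in Test).value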

You could have default settings and reuse them in different configurations, as described in Plugins Best Practices - Playing nice with configurations. I believe this should give you semantics similar to what you're looking for.
You can define your base settings and reuse them in different configurations.
import sbt._
import Keys._

object TestBuild extends Build {
  val sourceExample = settingKey[Seq[String]]("example source for setting dependency")
  val targetExample = settingKey[Seq[String]]("example of a dependent setting")

  override lazy val settings = super.settings ++
    inConfig(Compile)(basePluginSettings) ++
    inConfig(Test)(basePluginSettings ++ Seq(
      sourceExample += "testing" // we are already "in Test" here
    ))

  lazy val basePluginSettings: Seq[Setting[_]] = Seq(
    sourceExample := Seq("base"),
    targetExample := "extended" +: sourceExample.value
  )
}
PS. Since you're talking about writing your plugin, you may also want to look at the new way of writing sbt plugins, namely AutoPlugin, as the old mechanism is now deprecated.
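For illustration, a minimal AutoPlugin skeleton carrying the same settings might look roughly like this (the plugin name is made up, and this sketch assumes sbt 0.13.5+):
import sbt._
import Keys._

// Hypothetical plugin name; a sketch of the AutoPlugin style, not a drop-in implementation.
object ExamplePlugin extends AutoPlugin {
  object autoImport {
    val sourceExample = settingKey[Seq[String]]("example source for setting dependency")
    val targetExample = settingKey[Seq[String]]("example of a dependent setting")
  }
  import autoImport._

  override def requires = plugins.JvmPlugin
  override def trigger = allRequirements

  lazy val basePluginSettings: Seq[Setting[_]] = Seq(
    sourceExample := Seq("base"),
    targetExample := "extended" +: sourceExample.value
  )

  // the same inConfig trick as above, now exposed as projectSettings
  override def projectSettings: Seq[Setting[_]] =
    inConfig(Compile)(basePluginSettings) ++
    inConfig(Test)(basePluginSettings ++ Seq(sourceExample += "testing"))
}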

Related

Why value method cannot be used outside macros?

The error message
`value` can only be used within a task or setting macro, such as :=, +=, ++=, Def.task, or Def.setting.
val x = version.value
^
clearly indicates how to fix the problem, for example, using :=
val x = settingKey[String]("")
x := version.value
The explanation in "sbt uses macros heavily" states
The value method itself is in fact a macro, one that if you invoke it
outside of the context of another macro, will result in a compile time
error, the exact error message being...
And you can see why, since sbt settings are entirely declarative, you
can’t access the value of a task from the key, it doesn’t make sense
to do that.
However, I am confused about what is meant by the declarative nature of sbt being the reason. For example, I would intuitively think the following vanilla Scala snippet is semantically similar to sbt's:
def version: String = ???
lazy val x = s"Hello $version" // ok

trait Foo {
  def version: String
  val x = version // ok
}
As this is legal, clearly the Scala snippet is not semantically equivalent to the sbt one. I was wondering if someone could elaborate on why value cannot be used outside macros? Is the reason purely syntactic related to macro syntax or am I missing something fundamental about sbt's nature?
As another sentence there says
Defining sbt’s task engine is done by giving sbt a series of settings, each setting declaring a task implementation. sbt then executes those settings in order. Tasks can be declared multiple times by multiple settings, the last one to execute wins.
So at the moment when the line
val x = version.value
would be executed (if it were allowed!), that whole program is still being set up and SBT doesn't know the final definition of version.
In what sense is the program "still being set up"?
SBT's order of actions is, basically (maybe I'm missing something):
1. All your Scala build code is run.
2. It contains some settings and task definitions; SBT collects these as it encounters them (along with ones from core, plugins, etc.).
3. They are topologically sorted into the task graph, deduplicated ("the last one to execute wins"), etc.
4. Settings are evaluated.
5. Now you are allowed to actually run tasks (e.g. from the SBT console).
version.value is only available after step 4, but val x = version.value runs on step 1.
Wouldn't lazy evaluation take care of that?
Well, when you write val x = ... there is no lazy evaluation. But lazy val x = ... is defined during step 1 too.
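To make this concrete, here is a small sketch of where .value is legal; the greeting key is made up for illustration:
// Hypothetical key, just to illustrate where `.value` may appear.
val greeting = settingKey[String]("a greeting derived from version")

// Illegal: evaluated during step 1, while the build is still being set up.
// val x = version.value

// Legal: inside the := macro, sbt records the dependency on `version`
// and evaluates the right-hand side only once settings are evaluated (step 4).
greeting := s"Hello ${version.value}"

// Also legal: wrap it in Def.setting to get a reusable Def.Initialize[String].
val greetingInit: Def.Initialize[String] = Def.setting(s"Hello ${version.value}")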

Sealed trait and dynamic case objects

I have a few enumerations implemented as sealed traits and case objects. I prefer the ADT approach because of the non-exhaustiveness warnings, and mostly because we want to avoid type erasure. Something like this:
sealed abstract class Maker(val value: String) extends Product with Serializable {
  override def toString = value
}

object Maker {
  case object ChryslerMaker extends Maker("Chrysler")
  case object ToyotaMaker extends Maker("Toyota")
  case object NissanMaker extends Maker("Nissan")
  case object GMMaker extends Maker("General Motors")
  case object UnknownMaker extends Maker("")

  val tipos = List(ChryslerMaker, ToyotaMaker, NissanMaker, GMMaker, UnknownMaker)
  private val fromStringMap: Map[String, Maker] = tipos.map(s => s.toString -> s).toMap

  def apply(key: String): Option[Maker] = fromStringMap.get(key)
}
This is working well so far. Now we are considering giving other programmers access to our code so they can configure it on site. I see two potential problems:
1) People messing up and writing things like:
case object ChryslerMaker extends Maker("Nissan")
and people forgetting to update the tipos list.
I have been looking into using a configuration file (JSON or CSV) to provide these values and reading them as we do with plenty of other elements, but all the answers I have found rely on macros and seem to be extremely dependent on the Scala version used (2.12 for us).
What I would like to find is:
1a) (Preferred) a way to dynamically create the case objects from a list of strings, making sure the objects are named consistently with the value they hold
1b) (Acceptable) if this proves too hard, a way to obtain the objects and the values during the test phase
2) A way to check that the number of elements in the list matches the number of case objects created.
I forgot to mention that I have looked briefly at enumeratum, but I would prefer not to include additional libraries unless I really understand the pros and cons (and right now I am not sure how enumeratum compares with the ADT approach; if you think this is the best way and can point me to such a discussion, that would work great).
Thanks !
One idea that comes to mind is to create an SBT SourceGenerator task.
It will read an input JSON, CSV, XML or whatever file that is part of your project, and will create a Scala file.
// ----- File: project/VendorsGenerator.scala -----
import sbt.Keys._
import sbt._

/**
 * An SBT task that generates a managed source file with all vendors.
 */
object VendorsGenerator {
  // For demonstration, I will use this plain List[String] to generate the code,
  // you may change the code to read a file instead.
  // Or maybe this will be good enough.
  final val vendors: List[String] =
    List(
      "Chrysler",
      "Toyota",
      ...
      "Unknown"
    )

  val generatorTask = Def.task {
    // Make the 'tipos' List, which contains all vendors.
    val tipos =
      vendors
        .map(vendorName => s"${vendorName}Vendor")
        .mkString("val tipos: List[Vendor] = List(", ",", ")")

    // Make a case object for each vendor.
    val vendorObjects = vendors.map { vendorName =>
      s"""case object ${vendorName}Vendor extends Vendor { override final val value: String = "${vendorName}" }"""
    }

    // Fill the code template.
    val code =
      List(
        List(
          "package vendors",
          "sealed trait Vendor extends Product with Serializable {",
          "def value: String",
          "override final def toString: String = value",
          "}",
          "object Vendors extends (String => Option[Vendor]) {"
        ),
        vendorObjects,
        List(
          tipos,
          "private final val fromStringMap: Map[String, Vendor] = tipos.map(v => v.toString.toLowerCase -> v).toMap",
          "override def apply(key: String): Option[Vendor] = fromStringMap.get(key.toLowerCase)",
          "}"
        )
      ).flatten

    // Save the new file to the managed sources dir.
    val vendorsFile = (sourceManaged in Compile).value / "vendors.scala"
    IO.writeLines(vendorsFile, code)
    Seq(vendorsFile)
  }
}
Now, you can activate your source generator.
This task will be run each time, before the compile step.
// ----- File: build.sbt -----
sourceGenerators in Compile += VendorsGenerator.generatorTask.taskValue
Please note that I suggest this because I have done it before, and because I don't have any macro or metaprogramming experience.
Also, note that this example relies a lot on Strings, which makes the code a little bit hard to understand and maintain.
BTW, I haven't used enumeratum, but from a quick look it seems like the best solution to this problem.
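For comparison, a rough sketch of what the same enum might look like with enumeratum (based only on its documented API, so treat it as unverified):
import enumeratum._

sealed abstract class Maker(override val entryName: String) extends EnumEntry

object Maker extends Enum[Maker] {
  // findValues is a macro that collects all the case objects below
  val values = findValues

  case object Chrysler extends Maker("Chrysler")
  case object Toyota extends Maker("Toyota")
  case object Nissan extends Maker("Nissan")
  case object GM extends Maker("General Motors")
}

// Lookup by name, similar to the hand-written apply
Maker.withNameOption("Toyota") // Some(Toyota)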
Edit
I have my code ready to read a HOCON file and generate the matching code. My question now is where to place the Scala file in the project directory, and where the files will be generated. I am a little bit confused because there seem to be multiple steps: 1) compile my Scala generator, 2) run the generator, and 3) compile and build the project. Is this right?
Your generator is not part of your project code, but of your meta-project (I know that sounds confusing; you may read this to understand it). As such, you place the generator inside the project folder at the root level (the same folder where the build.properties file that specifies the sbt version lives).
If your generator needs some dependencies (I'm sure it does for reading the HOCON) you place them in a build.sbt file inside that project folder.
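For example, a sketch of such a project/build.sbt, assuming you use the Typesafe config library to read the HOCON file (the version shown is only indicative):
// ----- File: project/build.sbt -----
// Dependencies declared here are available to the build definition, and thus to the generator.
libraryDependencies += "com.typesafe" % "config" % "1.3.3" // assumed version; adjust as needed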
If you plan to add unit tests to the generator, you may create an entire Scala project inside the meta-project (for reference, you may take a look at the project folder of an open source project I work on - yes, yes, I know, confusing again). My personal suggestion is that, rather than testing the generator itself, you should test the generated file instead, or better, both.
The generated file will be automatically placed in the src_managed folder (which lives inside target and is thus ignored by your source code version control).
The path inside it is just for organization, as everything inside the src_managed folder is included by default when compiling.
val vendorsFile = (sourceManaged in Compile).value / "vendors.scala" // Path to the file to write.
In order to access the values defined in the generated file on your source code, you only need to add a package to the generated file and import the values from that package in your code (as with any normal file).
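For example, with the generated file above declaring package vendors, a sketch of the usage from your regular sources would be:
// Anywhere in src/main/scala; the `vendors` package comes from the generated file.
import vendors._

val maybeToyota: Option[Vendor] = Vendors("Toyota")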
You don't need to worry about anything related to compilation order; if you include your source generator in your build.sbt file, SBT will take care of everything automatically.
sourceGenerators in Compile += VendorsGenerator.generatorTask.taskValue // Activate the source generator.
SBT will re-run your generator every time it needs to compile.
"BTW I get "not found: object sbt" on the imports".
If the project is inside the meta-project space, it will find the sbt package by default, don't worry about it.

How do I change a project's id in SBT 1.0?

I have a bunch of SBT 0.13 project definitions that look like this:
lazy val coreBase = crossProject.crossType(CrossType.Pure).in(file("core"))
  .settings(...)
  .jvmConfigure(_.copy(id = "core"))
  .jsConfigure(_.copy(id = "coreJS"))

lazy val core = coreBase.jvm
lazy val coreJS = coreBase.js
(Mostly because I'm resentful about having to maintain Scala.js builds and don't want to have to type the JVM suffix every time I'm changing projects, etc.)
This doesn't compile in SBT 1.0 because Project doesn't have a copy method now.
Okay, let's check the migration guide.
Many of the case classes are replaced with pseudo case classes generated using Contraband. Migrate .copy(foo = xxx) to withFoo(xxx).
Cool, let's try it.
build.sbt:100: error: value withId is not a member of sbt.Project
.jvmConfigure(_.withId("core"))
^
So I asked on Gitter and got crickets.
The links for the 1.0 API docs actually point to something now, which is nice, but they're not very helpful in this case, and trying to read the SBT source gives me a headache. I'm not in a rush to update to 1.0, but I'm going to have to at some point, I guess, and maybe some helpful person will have answered this by then.
(This answer has been edited with information about sbt 1.1.0+ and sbt-crossproject 0.3.1+, which significantly simplify the whole thing.)
With sbt 1.1.0 and later, you can use .withId("core"). But there's better with sbt-crossproject 0.3.1+, see below.
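That is, the snippet from the question ported to sbt 1.1.0+ would look roughly like this (a sketch, not verified against a real build):
lazy val coreBase = crossProject.crossType(CrossType.Pure).in(file("core"))
  .settings(...)
  .jvmConfigure(_.withId("core"))   // withId replaces the old copy(id = ...)
  .jsConfigure(_.withId("coreJS"))

lazy val core = coreBase.jvm
lazy val coreJS = coreBase.js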
I don't know about changing the ID of a Project, but here is also a completely different way to solve your original issue, i.e., have core/coreJS instead of coreJVM/coreJS. The idea is to customize crossProject to use the IDs you want to begin with.
First, you'll need to use sbt-crossproject. It is the new "standard" for compilation across several platforms, co-designed by @densh from Scala Native and myself (from Scala.js). Scala.js 1.x will always use sbt-crossproject, but it is also possible to use sbt-crossproject with Scala.js 0.6.x. For that, follow the instructions in the readme. In particular, don't forget the "shadowing" part:
// Shadow sbt-scalajs' crossProject and CrossType from Scala.js 0.6.x
import sbtcrossproject.{crossProject, CrossType}
sbt-crossproject is more flexible than Scala.js' hard-coded crossProject. This means you can customize it more easily. In particular, it has a generic notion of Platform, defining how any given platform behaves.
For a cross JVM/JS project, the new-style crossProject invocation would be
lazy val coreBase = crossProject(JVMPlatform, JSPlatform)
  .crossType(CrossType.Pure)
  .in(file("core"))
  .settings(...)
  .jvmConfigure(_.copy(id = "core"))
  .jsConfigure(_.copy(id = "coreJS"))

lazy val core = coreBase.jvm
lazy val coreJS = coreBase.js
Starting with sbt-crossproject 0.3.1, you can simply tell it not to add the platform suffix for one of your platforms. In your case, you want to avoid the suffix for the JVM platform, so you would write:
lazy val coreBase = crossProject(JVMPlatform, JSPlatform)
  .withoutSuffixFor(JVMPlatform)
  .crossType(CrossType.Pure)
  ...

lazy val core = coreBase.jvm
lazy val coreJS = coreBase.js
and that's all you need to do!
Old answer, applicable to sbt-crossproject 0.3.0 and before
JVMPlatform and JSPlatform are not an ADT; they are designed in an OO style. This means you can create your own. In particular, you can create your own JVMPlatformNoSuffix that would do the same as JVMPlatform but without adding a suffix to the project ID:
import sbt._
import sbtcrossproject._

case object JVMPlatformNoSuffix extends Platform {
  def identifier: String = "jvm"
  def sbtSuffix: String = "" // <-- here is the magical empty string
  def enable(project: Project): Project = project

  val crossBinary: CrossVersion = CrossVersion.binary
  val crossFull: CrossVersion = CrossVersion.full
}
Now that's not quite enough yet, because .jvmSettings(...) and friends are defined to act on a JVMPlatform, not on any other Platform such as JVMPlatformNoSuffix. You'll therefore have to redefine that as well:
implicit def JVMNoSuffixCrossProjectBuilderOps(
    builder: CrossProject.Builder): JVMNoSuffixCrossProjectOps =
  new JVMNoSuffixCrossProjectOps(builder)

implicit class JVMNoSuffixCrossProjectOps(project: CrossProject) {
  def jvm: Project = project.projects(JVMPlatformNoSuffix)

  def jvmSettings(ss: Def.SettingsDefinition*): CrossProject =
    jvmConfigure(_.settings(ss: _*))

  def jvmConfigure(transformer: Project => Project): CrossProject =
    project.configurePlatform(JVMPlatformNoSuffix)(transformer)
}
Once you have all of that in your build (hidden away in a project/JVMPlatformNoSuffix.scala in order not to pollute the .sbt file), you can define the above cross-project as:
lazy val coreBase = crossProject(JVMPlatformNoSuffix, JSPlatform)
  .crossType(CrossType.Pure)
  .in(file("core"))
  .settings(...)

lazy val core = coreBase.jvm
lazy val coreJS = coreBase.js
without any need to explicitly patch the project IDs.

Using input args inside a TaskKey

I'm writing an sbt plugin, and have created a TaskKey that needs to get parsed arguments:
lazy val getManager = TaskKey[DeployManager]("Deploy manager")
lazy val getCustomConfig = InputKey[String]("Custom config")
...
getCustomConfig := {
  spaceDelimited("<arg>").parsed(0)
}

getManager := {
  val conf = configResource.evaluated
  ...
}
but I get this error during compilation:
`parsed` can only be used within an input task macro, such as := or Def.inputTask.
I can't define getManager as an InputKey since I later use its value many times, and for an InputKey the value gets created anew on each evaluation (and I need to use the same instance).
You cannot do what you want in a reasonable way in sbt. (And the type system nicely prevents you from doing that in this case).
Imagine that getManager is a TaskKey that takes parsed arguments (as an aside, the sbt way of naming this would probably be manager; the get is implied).
I now decide that, for example, compile depends on getManager. If I type compile in the shell, what arguments should getManager parse?
There is no concept of arguments inside the sbt dependency tree. They are just a shallow (and IMHO somewhat hackish) addition to make for a nicer CLI.
If you want to make getManager configurable, you can add additional settings that getManager depends on, and then use set on the command line to change these where necessary.
So in your case:
lazy val configResource = SettingKey[...]("Config resource")

getManager := {
  val conf = configResource.value
  // ...
}
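Then, from the sbt shell, you can override the resource before running whatever task needs the manager (a sketch, assuming configResource is a SettingKey[String]; the value shown is made up):
> set configResource := "conf/staging.conf"
> getManager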

How to define project-specific settings/keys in sbt 0.13

I have a simple multi project, whereby the root aggregates projects a and b. The root project loads this plugin I'm writing that is supposed to allow easy integration with the build system in our company.
lazy val a = project in file("a")
lazy val b = project in file("b")
Now, I'd like to define some Settings in the plugin that don't make sense in the root project, and can have different values for each sub-project. However, if I just define them like
object P extends Plugin {
  val kind = settingKey[Kind]("App or Lib?")
  val ourResolver = settingKey[Resolver]("...")

  override def projectSettings = Seq(
    // I want this to only be defined in a and b, where `kind` is defined
    ourResolver <<= kind { k => new OurInternalResolver(k) }
  )
}
then sbt will complain that ourResolver cannot be defined because kind is not defined in the root project.
Is there a way to specify a scope for this setting (ourResolver), such that the setting would be defined in every aggregated project except the root project?
Or do I have to make it a SettingKey[Option[_]] and set it to None by default?
Edit: I have quite a large set of settings which progressively depend on kind and then ourResolver and so on, and these settings should only be defined where (read: "in the project scopes where") kind is defined. Added ourResolver to example code to reflect this.
However, if I just define them like....
There's nothing really magical about setting keys. A key is just a String entry tied with a type, so you can define your key the way you're doing now just fine.
Is there a way to specify a scope for this setting, such that the setting would be defined in every aggregated project except the root project?
A setting consists of the following four things:
scope (project, config, task)
a key
dependencies
an ability to evaluate itself (def evaluate(Settings[Scope]): T)
If you did not specify otherwise, your settings are already scoped to a particular project in the build definition.
lazy val a = (project in file("a")).
  settings(commonSettings: _*).
  settings(
    name := "a",
    kind := Kind.App,
    libraryDependencies ++= aDeps(scalaVersion.value)
  )

lazy val b = (project in file("b")).
  settings(commonSettings: _*).
  settings(
    name := "b",
    kind := Kind.Lib,
    libraryDependencies ++= bDeps(scalaVersion.value)
  )

lazy val root = (project in file(".")).
  settings(commonSettings: _*).
  settings(
    name := "foo",
    publishArtifact := false
  ).
  aggregate(a, b)
In the above, kind := Kind.App setting is scoped to project a. So that would answer your question literally.
sbt will complain that kind is not defined in the root project.
This is the part where I'm not clear what's going on. Does this happen when you load the build? Or does it happen when you type kind into the sbt shell? If you're seeing it at startup, that likely means you have a task/setting that's trying to depend on the kind key. Don't load that setting, or reimplement it as something that doesn't use kind.
Another way to avoid this issue may be to give up on using root aggregation. Switch into the subproject and run tasks there, or construct explicit aggregation using ScopeFilter.
I ultimately managed to do this by leveraging derive, a method that creates DerivedSettings. These settings will then be automatically expanded by sbt in every scope where their dependencies are defined.
Therefore I can now write my plugin as:
object P extends Plugin {
  val kind = settingKey[Kind]("App or Lib?")
  val ourResolver = settingKey[Resolver]("...")

  def derivedSettings = Seq(
    // I want this to only be defined in a and b, where `kind` is defined
    ourResolver <<= kind { k => new OurInternalResolver(k) }
  ) map (s => Def.derive(s, trigger = _ != streams.key))

  override def projectSettings = derivedSettings ++ Seq( /* other stuff */ )
}
And if I define kind in a and b's build.sbt, then ourResolver will also end up being defined in those scopes.
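For instance, a and b can each set kind in their own build.sbt (a sketch, reusing the Kind.App and Kind.Lib values from the earlier snippet):
// ----- a/build.sbt -----
kind := Kind.App

// ----- b/build.sbt -----
kind := Kind.Lib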