Using input args inside a TaskKey - scala

I'm writing an sbt plugin and have created a TaskKey that needs to get parsed arguments:
lazy val getManager = TaskKey[DeployManager]("Deploy manager")
lazy val getCustomConfig = InputKey[String]("Custom config")
...
getCustomConfig := {
  spaceDelimited("<arg>").parsed(0)
}
getManager := {
  val conf = configResource.evaluated
  ...
}
but I get this error during compilation:
`parsed` can only be used within an input task macro, such as := or Def.inputTask.
I can't define getManager as an InputKey, since I later use its value many times, and for an InputKey the value gets created anew on each evaluation (and I need to use the same instance).

You cannot do what you want in a reasonable way in sbt. (And the type system nicely prevents you from doing that in this case).
Imagine that getManager is a TaskKey that takes parsed arguments (as an aside, the sbt way of naming this would probably be manager; the get is implied).
I now decide that, for example, compile depends on getManager. If I type compile in the shell, what arguments should getManager parse?
There is no concept of arguments inside the sbt dependency tree. They are just a shallow (and IMHO somewhat hackish) addition to make for a nicer CLI.
If you want to make getManager configurable, you can add additional settings that getManager depends on, and then use set on the command line to change these where necessary.
So in your case:
lazy val configResource = SettingKey[...]("Config resource")

getManager := {
  val conf = configResource.value
  // ...
}
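For example, if configResource were declared as a SettingKey[String] holding a resource path (a hypothetical choice, just for illustration), it could be overridden from the sbt shell before running the task:

> set configResource := "conf/staging.conf"
> getManager

The set command replaces the setting for the rest of the shell session, so every task that depends on configResource sees the same value.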

Related

Why value method cannot be used outside macros?

The error message
`value` can only be used within a task or setting macro, such as :=, +=, ++=, Def.task, or Def.setting.
val x = version.value
^
clearly indicates how to fix the problem, for example, using :=
val x = settingKey[String]("")
x := version.value
The explanation in "sbt uses macros heavily" states:
The value method itself is in fact a macro, one that if you invoke it outside of the context of another macro, will result in a compile time error, the exact error message being...
And you can see why, since sbt settings are entirely declarative, you can’t access the value of a task from the key, it doesn’t make sense to do that.
However, I am confused about what is meant by the declarative nature of sbt being the reason. For example, intuitively I would think the following vanilla Scala snippet is semantically similar to sbt's:
def version: String = ???
lazy val x = s"Hello $version" // ok

trait Foo {
  def version: String
  val x = version // ok
}
As this is legal, clearly the Scala snippet is not semantically equivalent to the sbt one. I was wondering if someone could elaborate on why value cannot be used outside macros? Is the reason purely syntactic related to macro syntax or am I missing something fundamental about sbt's nature?
As another sentence there says
Defining sbt’s task engine is done by giving sbt a series of settings, each setting declaring a task implementation. sbt then executes those settings in order. Tasks can be declared multiple times by multiple settings, the last one to execute wins.
So at the moment when the line
val x = version.value
would be executed (if it were allowed!), that whole program is still being set up and SBT doesn't know the final definition of version.
In what sense is the program "still being set up"?
SBT's order of actions is, basically (maybe missing something):
1. All your Scala build code is run.
2. It contains some setting and task definitions; SBT collects those as it encounters them (along with the ones from core, plugins, etc.).
3. They are topologically sorted into the task graph, deduplicated ("the last one to execute wins"), etc.
4. Settings are evaluated.
5. Now you are allowed to actually run tasks (e.g. from the SBT console).
version.value is only available after step 4, but val x = version.value runs on step 1.
Would not lazy evaluation take care of that?
Well, when you write val x = ... there is no lazy evaluation. But lazy val x = ... runs on step 1 too.
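To make the distinction concrete, here is a minimal sketch (the greeting keys are made up) of where .value is and is not accepted:

lazy val greeting = settingKey[String]("a setting derived from version")

// ok: inside the := macro, version.value is rewritten into a declared dependency
greeting := s"Hello ${version.value}"

// also ok: inside Def.setting (or Def.task for tasks)
lazy val greetingInit: Def.Initialize[String] = Def.setting { s"Hello ${version.value}" }

// not ok: plain Scala code that runs in step 1, before settings are evaluated
// val g = version.value   // `value` can only be used within a task or setting macro ...

Both accepted forms only declare a dependency on version; the actual value is looked up later, in step 4.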

sbt illegal dynamic reference in runMain

I'm trying to run a code generator and pass it the filename to write the output to:
resourceGenerators in (proj, Compile) += Def.task {
  val file = (resourceManaged in (proj, Compile)).value / "swagger.yaml"
  (runMain in (proj, Compile)).toTask(s"api.swagger.SwaggerDump $file").value
  Seq(file)
}.value
However, this gives me:
build.sbt:172: error: Illegal dynamic reference: file
(runMain in (proj, Compile)).toTask(s"api.swagger.SwaggerDump $file").value
Your code snippet has two problems:
You use { ... }.value instead of { ... }.taskValue. The type of resourceGenerators is Seq[Task[Seq[File]]], and when you use value you get Seq[File], not Task[Seq[File]]. That causes a legitimate compile error.
The dynamic variable file is used as the argument of toTask, which the current macro implementation prohibits.
Why static?
Sbt forces task implementations to have static dependencies on other tasks. Otherwise, sbt cannot perform task deduplication and cannot provide correct information in the inspect commands. That means that whatever task evaluation you perform inside a task cannot depend on a variable (a value known only at runtime), as your file does in toTask.
To overcome this limitation, there exist dynamic tasks, whose body allows you to return a task. Every "dynamic dependency" has to be defined inside a dynamic task, and then you can depend on the hoisted-up dynamic values in the task that you return.
Dynamic solution
The following Scastie is the correct implementation of your task. I copy-paste the code so that folks can have a quick look, but go to that Scastie to check that it successfully compiles and runs.
resourceGenerators in (proj, Compile) += Def.taskDyn {
  val file = (resourceManaged in (proj, Compile)).value / "swagger.yaml"
  Def.task {
    (runMain in (proj, Compile))
      .toTask(s"api.swagger.SwaggerDump $file")
      .value
    Seq(file)
  }
}.taskValue
Discussion
If you had fixed the taskValue error, would your task implementation have compiled correctly?
In my opinion, yes, but I haven't looked at the internal implementation closely enough to assert that your task implementation does not hinder task deduplication and dependency extraction. If it does not, the illegal-reference check should disappear.
This is a current limitation of sbt that I would like to get rid of, either by improving the whole macro implementation (hoisting up values and making sure that dependency analysis covers more cases) or by just making the "illegal reference" checks less pessimistic. However, this is a hard problem, it takes time, and it's not likely to happen in the short term.
If this is an issue for you, please file a ticket in sbt/sbt. This is the only way to know the urgency of fixing this issue, if any. For now, the best we can do is to document it.

How to sequentially call an input task and other tasks in sbt

I am trying to override the it:run task in sbt so that it runs two other tasks (actually three, if you include logging) in sequence. The first of these tasks provisions the application (it:provision) and the second actually runs the tests (it:test).
I started off with a simple Task[Unit] for my custom provision task.
lazy val provision = taskKey[Unit]("Provisions the application in an environment based on the configuration.")
I then defined a function to use for the it:run implementation as:
def runIntegrationTestsImpl(): Initialize[Task[Unit]] = Def.inputTask {
  Def.sequential(
    provision in IntegrationTest,
    Def.task(state.value.log.info("Running integration tests.")),
    test in IntegrationTest
  ).value
}
This worked fine. However what I really wanted was to make the provision task an InputTask[Unit].
So I then changed provision to:
lazy val provision = inputKey[Unit]("Provisions the application in an environment based on the configuration.")
And tried to update the it:run implementation:
def runIntegrationTestsImpl(): Initialize[InputTask[Unit]] = {
  val parser = DefaultParsers.any.*.map(_.mkString)
  Def.inputTask {
    val prov = (provision in IntegrationTest).toTask(parser.parsed)
    Def.sequential(
      prov,
      Def.task(state.value.log.info("Running integration tests.")),
      test in IntegrationTest
    ).value
  }
}
This is as close as I got to getting it to compile. I didn't really intend to parse the arguments here, but it seemed toTask was the only way to get an Initialize[Task[Unit]].
No matter what I try and do I still cannot get rid of the following error:
Illegal dynamic reference: prov
Which is referring to the first element in the sequence.
I seem to hit this problem a lot with the sbt macros. Is there a way to achieve what I want?
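One direction that seems consistent with the Def.taskDyn answer above is Def.inputTaskDyn, which lets the parsed arguments participate in building the task sequence dynamically instead of being captured in a local like prov. This is a sketch only, untested against this build (key names taken from the question):

def runIntegrationTestsImpl(): Def.Initialize[InputTask[Unit]] = Def.inputTaskDyn {
  // forward whatever was typed after the task name to the provision input task;
  // toTask expects the raw argument string, typically with a leading space
  val args = sbt.complete.DefaultParsers.spaceDelimited("<arg>").parsed
  Def.sequential(
    (provision in IntegrationTest).toTask(args.mkString(" ", " ", "")),
    Def.task(state.value.log.info("Running integration tests.")),
    test in IntegrationTest
  )
}

Because the Def.sequential value is returned from a dynamic task rather than referenced via .value inside Def.inputTask, the restriction on dynamic references should not be triggered.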

Why does test-scoped setting not hold correct value?

Scopes matter in sbt, and I'm completely OK with that. But there are also delegation rules that allow you to build a hierarchical structure of settings. I'd like to use them to bring extra settings into more specific scopes.
import sbt._
import Keys._

object TestBuild extends Build {
  val sourceExample = settingKey[Seq[String]]("example source for setting dependency")
  val targetExample = settingKey[Seq[String]]("example of a dependent setting")

  override lazy val settings = super.settings ++ Seq(
    sourceExample := Seq("base"),
    targetExample := "extended" +: sourceExample.value,
    sourceExample in Test += "testing"
  )
}
The example gives me unexpected output:
> show compile:sourceExample
[info] List(base)
> show test:sourceExample
[info] List(base, testing)
> show compile:targetExample
[info] List(extended, base)
> show test:targetExample
[info] List(extended, base)
I expect test:targetExample to be List(extended, base, testing), not List(extended, base). Once I got the result, I immediately figured out why it works as shown: test:targetExample delegates to *:targetExample and inherits the calculated value, but not the rule for calculating it in the nested scope.
This behavior brings two difficulties for me when writing my own plugin. As a plugin developer I have extra work to define the same rules in every scope, and as a user I have to memorize the scope definitions of internal tasks to use them correctly.
How can I overcome this inconvenience? I'd like to introduce settings with call-by-name semantics instead of call-by-value. What tricks might work for that?
P.S. libraryDependencies in Test looks much more concise than using % test.
I should make clear that I perfectly understand that sbt derives values just as described in the documentation. It works as the creator intended it to work.
But why should I obey these rules? I find them completely counter-intuitive. Sbt introduces an inheritance semantic that works unlike how inheritance is usually defined. When you write
trait A { lazy val x : Int = 5 }
trait B extends A { lazy val y : Int = x * 2}
trait C extends A { override lazy val x : Int = 3 }
you expect (new B with C).y to be 6, not 10. Knowing that it would actually be 10 allows you to use this kind of inheritance correctly, but it leaves you with the desire to find more conventional means of implementing inheritance. You may even write your own implementation based on a name->value dictionary. And you may proceed further according to the tenth rule of programming.
So I'm searching for a hack that would bring this inheritance semantic in accordance with the common one. As a starting point I may suggest writing a command that scans all settings and pushes them from parents to children explicitly, and then invoking this command automatically each time sbt runs.
But that seems too dirty to me, so I'm curious whether there is a more graceful way to achieve similar semantics.
The reason for the "incorrect" value is that targetExample depends on sourceExample from the scope it was defined in, not from the Test scope, as in:
targetExample := "extended" +: sourceExample.value
If it should use the sourceExample value from the Test scope, use in Test to be explicit about your wish, as follows:
targetExample := "extended" +: (sourceExample in Test).value
Use inspect to know the dependency chain.
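For example:

> inspect test:targetExample

The Provided by, Dependencies and Delegates sections of the output show where the value is actually defined and which scoped keys it depends on.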
BTW, I strongly advise using a build.sbt file for such a simple build definition.
You could have default settings and reuse them in different configurations, as described in Plugins Best Practices - Playing nice with configurations. I believe this should give you semantics similar to what you're looking for.
You can define your base settings and reuse them in different configurations:
import sbt._
import Keys._

object TestBuild extends Build {
  val sourceExample = settingKey[Seq[String]]("example source for setting dependency")
  val targetExample = settingKey[Seq[String]]("example of a dependent setting")

  override lazy val settings = super.settings ++
    inConfig(Compile)(basePluginSettings) ++
    inConfig(Test)(basePluginSettings ++ Seq(
      sourceExample += "testing" // we are already "in Test" here
    ))

  lazy val basePluginSettings: Seq[Setting[_]] = Seq(
    sourceExample := Seq("base"),
    targetExample := "extended" +: sourceExample.value
  )
}
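With this arrangement the rule itself, not just its computed value, is present in each configuration, so (assuming nothing else overrides these keys) show test:targetExample should now report List(extended, base, testing), while show compile:targetExample stays List(extended, base).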
PS. Since you're talking about writing your plugin, you may also want to look at the new way of writing sbt plugins, namely AutoPlugin, as the old mechanism is now deprecated.

Run custom task automatically before/after standard task

I often want to do some customization before one of the standard tasks is run. I realize I can make new tasks that execute existing tasks in the order I want, but I find that cumbersome, and the chance that a developer misses that he is supposed to run my-compile instead of compile is big and leads to hard-to-fix errors.
So I want to define a custom task (say prepare-app) and inject it into the dependency tree of an existing task (say package-bin) so that every time someone invokes package-bin, my custom task is run right before it.
I tried doing this
def mySettings = {
  inConfig(Compile)(Seq(prepareAppTask <<= packageBin in Compile map { (pkg: File) =>
    // fiddle with the /target folder before package-bin makes it into a jar
  })) ++
  Seq(name := "my project", version := "1.0")
}
lazy val prepareAppTask = TaskKey[Unit]("prepare-app")
but it's not executed automatically by package-bin right before it packages the compile output into a jar. So how do I alter the above code so it runs at the right time?
More generally, where do I find info about hooking into other tasks like compile, and is there a general way to ensure that your own tasks are run before and after standard tasks are invoked?
Extending an existing task is documented in the SBT documentation for Tasks (look at the section Modifying an Existing Task).
Something like this:
compile in Compile <<= (compile in Compile) map { _ =>
  // what you want to happen after compile goes here
}
Actually, there is another way - define your task to depend on compile
prepareAppTask := (whatever you want to do) dependsOn compile
and then modify packageBin to depend on that:
packageBin <<= packageBin dependsOn prepareAppTask
(All of the above is untested, but the general thrust should work, I hope.)
As an update to the previous answer by @Paul Butcher, this can be done in a slightly different way in SBT 1.x, since <<= is no longer supported. Here is an example of a task to run before compilation that I use in one of my projects:
lazy val wsdlImport = TaskKey[Unit]("wsdlImport", "Generates Java classes from WSDL")
wsdlImport := {
  import sys.process._
  "./wsdl/bin/wsdl_import.sh".!
  // or do whatever stuff you need
}
(compile in Compile) := ((compile in Compile) dependsOn wsdlImport).value
This is very similar to how it was done before 1.x.
Also, there is another way suggested by the official SBT docs, which is basically a composition of tasks (instead of a dependency hierarchy). Taking the same example as above:
(compile in Compile) := {
  val w = wsdlImport.value
  val c = (compile in Compile).value
  // you can add more tasks to composition or perform some actions with them
  c
}
It feels like this gives more flexibility in some cases, though the first example looks a bit neater to me. Note, however, that in the composed form sbt does not guarantee that wsdlImport runs before compile: tasks referenced via .value are dependencies that may be evaluated in any order, so when ordering matters the dependsOn form is the safer choice.
Tested on SBT 1.2.3, but it should work with other 1.x versions as well.