How to call SBT InputTask dynamically?

I want to create a new custom InputTask (testOnlyCustom) that
calls testOnly with the same arguments as given to testOnlyCustom, and
that, depending on an SBT setting (condition), may call another task (let's call it pre) before calling testOnly. Here, I have to force "sequential" execution.
Thus:
If condition is true:
testOnlyCustom com.dummy.TestSuite calls pre and then testOnly com.dummy.TestSuite
If condition is false:
testOnlyCustom com.dummy.TestSuite calls testOnly com.dummy.TestSuite
While I was able to achieve a solution for testCustom referring to pre and test (and thus taking no arguments), I'm not able to solve the problem for testOnlyCustom, as an InputTask with arguments is involved.
Here is my code:
import sbt._
import sbt.Keys._
import sbt.Def._
import sbtsequential.Plugin._

object Simple extends sbt.Plugin {
  import SimpleKeys._

  object SimpleKeys {
    lazy val condition = SettingKey[Boolean]("mode", "The mode.")
    lazy val pre = TaskKey[Unit]("pre", "Do some pre step.")
    lazy val testWithPre = TaskKey[Unit]("test-with-pre", "Run pre task beforehand.")
    lazy val testCustom = TaskKey[Unit]("test-custom", "Run pre (depending on condition) and then test.")
    lazy val testOnlyWithPre = InputKey[Unit]("test-only-with-pre", "Run selected tests (like test-only in SBT) with pre executed before.")
    lazy val testOnlyCustom = InputKey[Unit]("test-only-configured", "Run pre (depending on condition) and then call test-only.")
  }

  lazy val baseSettings: Seq[sbt.Def.Setting[_]] = Seq(
    // this is working
    testWithPre := test.value,
    testWithPre <<= testWithPre.dependsOn( pre ),
    testCustom := Def.taskDyn {
      val c = condition.value
      if (c) {
        testWithPre
      } else {
        test
      }
    }.value,
    //
    // this is the part my question focuses on
    //
    testOnlyWithPre := testOnly.evaluated,
    testOnlyWithPre <<= testOnlyWithPre.dependsOn( pre ),
    // is this the correct approach?
    testOnlyCustom := Def.inputTaskDyn {
      // ???????????????????????????????
      Def.task()
    }.evaluated
  )

  lazy val testSimpleSettings: Seq[sbt.Def.Setting[_]] = baseSettings
}
Is inputTaskDyn the way to go? What exactly does it do? I have just chosen it because it seems to be the dynamic version for InputTasks. Unfortunately, documentation on inputTaskDyn is sparse.
Is it okay to force "sequential" execution via dependsOn, like I did? I have already seen that SBT 0.13.8 contains Def.sequential, but that does not seem to be applicable to InputTasks?
How do I convert an InputTask into a Task (to be used with taskDyn / inputTaskDyn) while still sticking to evaluated instead of using an explicit parser? Or is there a way to reuse the testOnly parser?
Could someone elaborate a little more on .evaluated and .parsed of InputTask? What exactly do they do under the hood?
It would be great if someone could provide a working solution!
Many thanks in advance
Martin

Just for the record, the SBT 1 equivalent is
testWithPre := test.dependsOn(pre).value
and
testOnlyWithPre := testOnly.dependsOn(pre).evaluated

The best solution I could come up with is:
testOnlyCustom := Def.inputTaskDyn {
  val args: Seq[String] = spaceDelimited("").parsed
  val c = condition.value
  if (c) {
    testOnlyWithPre.toTask(" " + args.head)
  } else {
    testOnly.toTask(" " + args.head)
  }
}.evaluated
But still, this forces me to use a new parser (spaceDelimited) and I am not able to (re-)use the testOnly parser.
Any ideas how to reuse the parser?

Additional comments
First, OlegYch_ indicated on Freenode #sbt that, starting with SBT 0.13.9, input tasks can be executed programmatically via
def runInputTask[T](key: InputKey[T], input: String, state: State): (State, T)
in sbt.Extracted.
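A hedged sketch of how that could be used, assuming SBT 0.13.9+ and the keys from above; the command name is hypothetical, and the leading space mirrors what toTask needs in the solution above:

// Drive the testOnly input task programmatically from a custom command.
lazy val testOnlyCustomCmd = Command.args("testOnlyCustom", "<test names>") { (state, args) =>
  val extracted = Project.extract(state)
  val (newState, _) = extracted.runInputTask(testOnly in Test, " " + args.mkString(" "), state)
  newState
}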
Second, the testOnly parsers can be reused via sbt.Defaults#inputTests.
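For example, a hedged sketch (sbt 0.13 syntax; it likely has to be added within the Test configuration so that the test settings it references resolve) of reusing the full testOnly machinery for testOnlyWithPre, including its parser and tab completion, instead of a hand-rolled spaceDelimited parser:

// Delegate to the same input-task definition that backs testOnly itself,
// then force pre to run first via dependsOn.
testOnlyWithPre := Defaults.inputTests(testOnly).evaluated,
testOnlyWithPre <<= testOnlyWithPre.dependsOn( pre )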

Related

sbt val evaluated more than once

I'm creating a val into my build.sbt, made of a random string, to be used in the Setup and Cleanup methods for scalatest, like this:
import scala.util.Random

val foo = Random.alphanumeric.take(3).mkString
...
Test / testOptions += Tests.Setup(() => {
  // do stuff with it
})
...
Test / testOptions += Tests.Cleanup(() => {
  // do stuff with the same string
})
but it seems that the two functions are actually re-evaluating the val, resulting in two different strings. Forking the JVM (fork := true) does not seem to play a role in it, so I'm kind of out of ideas. Is that intended, and/or is there a way to fix it or another approach to the problem (native to Scala/sbt)?
Apparently the solution was easier than thought:
lazy val foo = SettingKey[String]("foo", "Random string")
foo := Random.alphanumeric.take(3).mkString
and then call foo.value in the sbt code afterwards.
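A hedged sketch of how the setting might then be used so that both hooks observe the same value (the value is read once, outside the callbacks; the printlns are placeholders):

Test / testOptions += {
  val f = foo.value  // settings are evaluated once per load, so this is stable
  Tests.Setup(() => println(s"setup with $f"))
}
Test / testOptions += {
  val f = foo.value  // same value as in Setup
  Tests.Cleanup(() => println(s"cleanup with $f"))
}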

SBT: Evaluating sequence of tasks

I am trying to get the information about all modules in my sbt project.
Having the following code
lazy val getModule = taskKey[Module]("get single module info")
lazy val allModules = taskKey[Seq[Module]]("get all modules info")

getModule := Def.task {
  Module(name.value, description.value, version.value, organization.value)
}.value,
allModules := Def.task {
  val sbtModules = (ThisScope / thisProject).value.aggregate
  sbtModules.map { m =>
    (ThisScope.in(m) / getModule).value
  }
}.value
I'm getting the errors:
[error] problem: Task invocations inside anonymous functions are evaluated independently of whether the anonymous function is invoked or not.
...
[error] /Users/ikryvorotenko/projects/rae/rae-lib/project/SbtToGradlePlugin.scala:27:23: Illegal dynamic reference: m
[error] (ThisScope.in(m) / getModule).value
Does sbt have anything to chain tasks dynamically?
Basically I'm looking for something like Future.sequence for chaining all tasks results into one.
There are a few features described in the Tasks documentation that might be helpful.
First, dynamic computation with Def.taskDyn allows you to use the result of one task to compute another. In your case, allModules should be (Def.taskDyn { ... }).value.
Second, to aggregate a task across multiple subprojects, you can use ScopeFilter and the .all method on a task key, as sketched below.
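A hedged sketch of the ScopeFilter route, assuming the Module case class and the getModule task from the question and that getModule is defined in each aggregated subproject; it evaluates getModule in every project aggregated by the current one:

// Evaluate getModule in all projects aggregated by this one (excluding the root itself).
val aggregatedProjects = ScopeFilter(inAggregates(ThisProject, includeRoot = false))

allModules := getModule.all(aggregatedProjects).value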

Can a parsed inputTask be used to invoke a runTask in sbt?

I'm trying to use sbt as a general task runner (similar to rake/npm). I can get it to parse input the way I want through an inputTask, but I'm absolutely stumped about how to use this to invoke a runTask/fullRunTask:
import sbt.complete.Parser
import sbt.complete.DefaultParsers._

val partners: List[Parser[String]] = List("foo", "bar")
val partnerParser = partners.reduce(_ | _)
val acceptArgs = (' ' ~> partnerParser ~ (' ' ~> StringBasic))

lazy val importDump = inputKey[Unit]("Import static data dump")

lazy val importDumpTask = importDump := {
  val (arg1, arg2) = acceptArgs.parsed
  // how can I make this call?
  // ... runTask(Compile, "foo.bar.baz.DoIt.dispatch", arg1, arg2).value
}
I understand that you can't directly call tasks from other tasks, only "depend" on them, so the above code won't work.
I know I can do something like
mainClass := Some("foo.bar.baz.DoIt.dispatch")
(runMain in Compile).toTask(s" foo.bar.baz.DoIt.dispatch $arg1 $arg2").value
But that means I can't use any of the parsing/autocomplete functionality.
So my question is:
How can I parse input with an inputTask, then call a main method in my code with the resulting arguments?
This is extremely painful to do in sbt. I would recommend writing a shell script (or using sbt's built-in Process support).
That said, it's possible to do this by writing a new Command that mutates the State object provided, adding the tasks you want to run as items in the remainingCommands field.
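A hedged sketch of that approach for sbt 0.13, where State.remainingCommands is a Seq[String] (in sbt 1 it holds Exec values, so the details differ). It reuses the acceptArgs parser from the question, so tab completion still works, and assumes foo.bar.baz.DoIt is the main class to run:

import sbt._
import sbt.complete.Parser
import sbt.complete.DefaultParsers._

// Parsers taken from the question.
val partners: List[Parser[String]] = List("foo", "bar")
val partnerParser = partners.reduce(_ | _)
val acceptArgs = (' ' ~> partnerParser ~ (' ' ~> StringBasic))

// A command that parses its arguments (with completion) and then queues a
// runMain invocation as the next command to execute.
lazy val importDumpCommand: Command = Command("importDump")(_ => acceptArgs) { (state, parsed) =>
  val (partner, path) = parsed
  s"runMain foo.bar.baz.DoIt $partner $path" :: state
}

// In build.sbt:
// commands += importDumpCommand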

SBT InputKey with property-like arguments

Can someone help me create an SBT task that can support property-like arguments from the command line?
lazy val myTask = inputKey[Unit]("my task")

myTask := {
  if (directoryOpt.isEmpty) // directoryOpt comes from an optional command line argument: directory="~/downloads"
    fullRunInputTask(inputKey, Compile, "example.MyTaskClass")
  else
    fullRunInputTask(inputKey, Compile, "example.MyTaskClass", directoryOpt.get)
}
Where the task can be run from command line like:
sbt myTask directory="~/downloads"
I did read the sbt doc at http://www.scala-sbt.org/0.13/docs/Input-Tasks.html. But it only explains how to create a task parser like sbt myTask option1 option2 which does not quite meet my need.
UPDATE:
I used jazmit's solution since that was an easy change. It works well! I will also try Mariusz's solution and update here.
You can use project/Build.scala alongside your build.sbt for your inputs. You can also use Commands instead of Tasks. Below is an example:
import sbt._
import Keys._

object CustomBuild extends Build {
  def myTask = Command.args("myTask", "<name>") { (state, args) =>
    val argMap = args.map { s =>
      s.split("=").toList match {
        case n :: v :: Nil => n -> v
      }
    }.toMap
    //println(argMap) // to see all argument pairs
    // react on the name in the params list
    println("Hi " + argMap.getOrElse("name", "Unknown"))
    state // a Command can modify the state, so you must return it
  }
}
Now you have to add this command to your project; in build.sbt add
commands += myTask
Now you can use it:
> sbt "myTask name=Mario"
> Hi Mario
> sbt myTask
> Hi Unknown
Hope it helps!
More about commands can be found in the sbt documentation on Commands.
You can use system properties to achieve what you want quickly.
From the command line, set a property as follows:
sbt myTask -Ddirectory="~/downloads"
From the task, you can retrieve the value as follows:
val directory = System.getProperty("directory")
If you want to do something more solid with syntax checking, tab completion, etc., you can define an input task as detailed in the Input Tasks documentation linked in the question. If you need property=value syntax, you can define this using the parser combinator library, e.g.:
import sbt.complete.DefaultParsers._
val myArgs: Parser[String] = "directory=" ~> StringEscapable
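Building on that, a hedged sketch of a complete input task with an optional directory=<value> argument; the parser name and the println body are hypothetical placeholders for invoking example.MyTaskClass:

import sbt.complete.DefaultParsers._

lazy val myTask = inputKey[Unit]("my task")

// Optional "directory=<value>" argument; the literal prefix is tab-completed.
val directoryArg = (Space ~> "directory=" ~> StringEscapable).?

myTask := {
  val directoryOpt = directoryArg.parsed
  println(s"directory = ${directoryOpt.getOrElse("<not given>")}")
}

From the sbt shell this would then be invoked as myTask directory="~/downloads", or with no argument at all.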

reproduce task dependencies in actual practice

The sbt task documentation shows an example of task dependencies. It is very simple and artificial, but it works! So I reproduced it in my project/build.scala without problems.
Note that I chose the global scope to make the tasks available for any project and any configuration.
import sbt._
import Keys._

object TestBuild extends Build {
  lazy val sampleTask = taskKey[Int]("A sample task")
  lazy val intTask = taskKey[Int]("An int task")

  override lazy val settings = super.settings ++ Seq(
    intTask := 1 + 2,
    sampleTask := intTask.value + 1
  )
}
Now I'm trying to do something useful and enrich the existing sbt key definitions with a task that collects compiled class names:
import sbt._
import Keys._
import sbt.inc.Analysis
import xsbti.api.ClassLike
import xsbt.api.Discovery.{isConcrete, isPublic}

object TestBuild extends Build {
  lazy val debugAPIs = taskKey[List[String]]("list of all top-level definitions")

  override lazy val settings = super.settings ++ Seq(
    debugAPIs := getAllTop( compile.value )
  )

  private def getAllTop(analysis: Analysis): List[String] =
    Tests.allDefs(analysis).toList collect {
      case c: ClassLike if isConcrete(c) && isPublic(c) => c.name
    }
}
Now I get an error from sbt:
Reference to undefined setting:
{.}/*:compile from {.}/*:debugAPIs (/home/sbt/project/build.scala:11)
So I have two questions:
How should I define debugAPIs properly so that the task would be available for all projects and all configurations?
How can I reproduce this error in a synthetic configuration?
I'm actually more interested in the second question. I'm looking for a deep understanding of how sbt works because I'd like to write a plugin for it.
The problem is that you try to access a key value without a proper Scope.
The documentation gives us some hint here.
By default, all the keys associated with compiling, packaging, and running are scoped to a configuration and therefore may work differently in each configuration. The most obvious examples are the task keys compile, package, and run; but all the keys which affect those keys (such as source-directories or scalac-options or full-classpath) are also scoped to the configuration.
Let's first focus on a very simple example, which maybe doesn't make much sense but illustrates the problem. Let's assume that you want to redefine the compile task in terms of itself.
override lazy val settings = super.settings ++ Seq(
  compile := { compile.value }
)
Running this in SBT will give you an error, which is more or less like this
[error] {.}/*:compile from {.}/*:compile (/tmp/q-23723818/project/Build.scala:12)
[error] Did you mean compile:compile ?
We didn't specify the scope, so SBT picked some defaults: the project was set to ThisBuild (meaning no specific project) and the configuration was set to Global. The setting was undefined in that context. However, it's important to understand that a key is not a setting. A key can exist without a scope, but the value of a key is attached to a scope. Note also that if SBT doesn't find the value in the requested scope, it can delegate to other scopes, but this is another topic.
How can we check this? It turns out to be quite simple. Let's ignore the error and let SBT start.
If you type inspect compile, you'll see that inspect looks at compile:compile, where the value is defined. We can force it to look in a specific scope, e.g. inspect {.}/*:compile will look in the scope that gave us the error.
> inspect {.}/*:compile
[info] No entry for key.
Indeed it's undefined.
How do we solve the issue? You have to give SBT the scope you're looking for. Naively, you could try to add a configuration scope.
// this will NOT work
override lazy val settings = super.settings ++ Seq(
  compile in Compile := { (compile in Compile).value }
)
Well, but there is no global compile; there is only compile per project. You can overcome the issue by overriding not the global settings but the settings of a specific project, specifying the Compile configuration there.
lazy val root = project.in(file(".")).settings(Seq(
  compile in Compile := { (compile in Compile).value }
): _*)
This would work, but what if you want to get the compile value regardless of where it is defined? This is where ScopeFilter comes in handy. Back to your original example. I assume you want to get compile's Analysis object from all the projects.
import sbt._
import Keys._
import sbt.inc.Analysis
import xsbti.api.ClassLike
import xsbt.api.Discovery.{isConcrete, isPublic}

object TestBuild extends Build {
  val debugAPIs = taskKey[Seq[String]]("list of all top-level definitions")
  val compileInAnyProject = ScopeFilter(inAnyProject, inConfigurations(Compile))

  override lazy val settings = super.settings ++ Seq(
    debugAPIs := {
      getAllTop(compile.all(compileInAnyProject).value)
    }
  )

  private def getAllTop(analyses: Seq[Analysis]): Seq[String] =
    analyses.flatMap { analysis =>
      Tests.allDefs(analysis) collect { case c: ClassLike if isConcrete(c) && isPublic(c) => c.name }
    }
}
What we created is a ScopeFilter that filters for any project, and within those projects for the Compile configuration. Then we looked up all the compile values.
You can configure the ScopeFilter to match your needs and filter only for specific projects/configurations or even tasks. But the key to understanding the problem is to remember that in SBT, settings are always scoped.
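For instance, a hedged sketch of narrowing the filter, where core and util are hypothetical project references:

// Only look at the core and util subprojects, and only in the Compile and Test configurations.
val onlyCoreAndUtil = ScopeFilter(
  inProjects(core, util),
  inConfigurations(Compile, Test)
)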
Edit
You have asked how it is that compile is not defined globally yet is available in every project. This is because Defaults.defaultSettings defines it, and each project includes it. If you removed super.settings from your Build definition, you'd see that, among others, compile is undefined.
As for whether you should do it this way: overriding settings in your plugin is in general discouraged in Plugin Best Practices. However, I recommend that you read it, together with the Plugins chapter. It should give you an idea of how to proceed.
You can also get multiple values from multiple scopes by defining a new task that returns them. For example, to get each Analysis together with its project, you could use the following piece of code.
object TestBuild extends Build {
  val debugAPIs = taskKey[Seq[(String, String)]]("list of all top-level definitions")
  val compileInAnyProject = ScopeFilter(inAnyProject, inConfigurations(Compile))

  override lazy val settings = super.settings ++ Seq(
    debugAPIs := {
      getAllTop(analysisWithProject.all(compileInAnyProject).value)
    }
  )

  lazy val analysisWithProject = Def.task { (thisProject.value, compile.value) }

  private def getAllTop(analyses: Seq[(ResolvedProject, Analysis)]): Seq[(String, String)] =
    analyses.flatMap { case (project, analysis) =>
      Tests.allDefs(analysis) collect { case c: ClassLike if isConcrete(c) && isPublic(c) => (project.id, c.name) }
    }
}