Scala conditional compilation

I'm writing a Scala program and I want it to work with two versions of a big library.
This big library's version 2 changes the API very slightly (only one class constructor signature gains an extra parameter).
// Lib v1
class APIClass(a: String, b: Integer) {
  ...
}

// Lib v2
class APIClass(a: String, b: Integer, c: String) {
  ...
}

// And my code extends APIClass... and I have no #IFDEF
class MyClass() extends APIClass("x", 1) { // <-- would be APIClass("x", 1, "y") in library v2
  ...
}
I really don't want to branch my code, because then I'd need to maintain two branches, and tomorrow 3, 4, ... branches for tiny API changes :(
Ideally we'd have a simple preprocessor in Scala, but the idea was rejected long ago by Scala community.
One thing I couldn't really grasp: can Scalameta help simulate a preprocessor in this case, i.e. parse one of two source files conditionally on, say, an environment variable known at compile time?
If not, how would you approach this real-life problem?

1. The C preprocessor (cpp) can be used with Java/Scala if you run cpp before javac or scalac (there is also Manifold).
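For illustration, a minimal sketch of that approach (the .scala.in extension, the LIB_V2 flag, and the shown cpp invocation are conventions assumed here, not anything scalac prescribes; -P suppresses cpp's line markers):

// MyClass.scala.in -- preprocess with: cpp -P -DLIB_V2 MyClass.scala.in > MyClass.scala
#ifdef LIB_V2
class MyClass() extends APIClass("x", 1, "y")
#else
class MyClass() extends APIClass("x", 1)
#endif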
2. If you really want to have conditional compilation in Scala you can use macro annotation (expanding at compile time)
macros/src/main/scala/extendsAPIClass.scala
import scala.annotation.{StaticAnnotation, compileTimeOnly}
import scala.language.experimental.macros
import scala.reflect.macros.blackbox

@compileTimeOnly("enable macro paradise")
class extendsAPIClass extends StaticAnnotation {
  def macroTransform(annottees: Any*): Any = macro ExtendsAPIClassMacro.impl
}

object ExtendsAPIClassMacro {
  def impl(c: blackbox.Context)(annottees: c.Tree*): c.Tree = {
    import c.universe._
    annottees match {
      case q"$mods class $tpname[..$tparams] $ctorMods(...$paramss) extends { ..$earlydefns } with ..$parents { $self => ..$stats }" :: tail =>
        def updateParents(parents: Seq[Tree], args: Seq[Tree]) =
          q"""${tq"APIClass"}(..$args)""" +: parents.filter { case tq"scala.AnyRef" => false; case _ => true }
        val parents1 = sys.env.get("LIB_VERSION") match {
          case Some("1") => updateParents(parents, Seq(q""" "x" """, q"1"))
          case Some("2") => updateParents(parents, Seq(q""" "x" """, q"1", q""" "y" """))
          case None      => parents
        }
        q"""
          $mods class $tpname[..$tparams] $ctorMods(...$paramss) extends { ..$earlydefns } with ..$parents1 { $self => ..$stats }
          ..$tail
        """
    }
  }
}
core/src/main/scala/MyClass.scala (if LIB_VERSION=2)
@extendsAPIClass
class MyClass

//Warning:scalac: {
//  class MyClass extends APIClass("x", 1, "y") {
//    def <init>() = {
//      super.<init>();
//      ()
//    }
//  };
//  ()
//}
build.sbt
ThisBuild / name := "macrosdemo"

lazy val commonSettings = Seq(
  scalaVersion := "2.13.2",
  organization := "com.example",
  version := "1.0.0",
  scalacOptions ++= Seq(
    "-Ymacro-debug-lite",
    "-Ymacro-annotations",
  ),
)

lazy val macros: Project = (project in file("macros")).settings(
  commonSettings,
  libraryDependencies ++= Seq(
    scalaOrganization.value % "scala-reflect" % scalaVersion.value,
  )
)

lazy val core: Project = (project in file("core")).aggregate(macros).dependsOn(macros).settings(
  commonSettings,
)
3. Alternatively you can use Scalameta for code generation (running before compilation)
build.sbt
ThisBuild / name := "scalametacodegendemo"

lazy val commonSettings = Seq(
  scalaVersion := "2.13.2",
  organization := "com.example",
  version := "1.0.0",
)

lazy val common = project
  .settings(
    commonSettings,
  )

lazy val in = project
  .dependsOn(common)
  .settings(
    commonSettings,
  )

lazy val out = project
  .dependsOn(common)
  .settings(
    sourceGenerators in Compile += Def.task {
      Generator.gen(
        inputDir = sourceDirectory.in(in, Compile).value,
        outputDir = sourceManaged.in(Compile).value
      )
    }.taskValue,
    commonSettings,
  )
project/build.sbt
libraryDependencies += "org.scalameta" %% "scalameta" % "4.3.10"
project/Generator.scala
import sbt._

object Generator {
  def gen(inputDir: File, outputDir: File): Seq[File] = {
    val finder: PathFinder = inputDir ** "*.scala"
    for (inputFile <- finder.get) yield {
      val inputStr = IO.read(inputFile)
      val outputFile = outputDir / inputFile.toURI.toString.stripPrefix(inputDir.toURI.toString)
      val outputStr = Transformer.transform(inputStr)
      IO.write(outputFile, outputStr)
      outputFile
    }
  }
}
project/Transformer.scala
import scala.meta._

object Transformer {
  def transform(input: String): String = {
    val (v1on, v2on) = sys.env.get("LIB_VERSION") match {
      case Some("1") => (true, false)
      case Some("2") => (false, true)
      case None      => (false, false)
    }
    var v1 = false
    var v2 = false
    input.tokenize.get.filter(_.text match {
      case "// Lib v1" =>
        v1 = true
        false
      case "// End Lib v1" =>
        v1 = false
        false
      case "// Lib v2" =>
        v2 = true
        false
      case "// End Lib v2" =>
        v2 = false
        false
      case _ => (v1on && v1) || (v2on && v2) || (!v1 && !v2)
    }).mkString("")
  }
}
common/src/main/scala/com/api/APIClass.scala
package com.api
class APIClass(a: String, b: Integer, c: String)
in/src/main/scala/com/example/MyClass.scala
package com.example
import com.api.APIClass
// Lib v1
class MyClass extends APIClass("x", 1)
// End Lib v1
// Lib v2
class MyClass extends APIClass("x", 1, "y")
// End Lib v2
out/target/scala-2.13/src_managed/main/scala/com/example/MyClass.scala
(after sbt out/compile if LIB_VERSION=2)
package com.example
import com.api.APIClass
class MyClass extends APIClass("x", 1, "y")

I see some options, but none of them is "conditional compilation":
You can create 2 modules in your build: they share a common source directory, and each also has its own source directory for version-specific code. Then you publish 2 versions of your whole library.
Or create 3 modules: one with your library and an abstract class/trait that it talks to/through, and 2 others with the version-specific implementations of that trait.
The problem is: what if you build the code against v1 and the user provides v2? Or the opposite? You emitted bytecode, but the JVM expects something else, and it all crashes.
Virtually every time there are such compatibility-breaking changes, a library either refuses to update or forks. Not because you wouldn't be able to generate 2 versions (you would). The problem is downstream: how would your users deal with this situation? If you are writing an application, you can commit to one of these. If you are writing a library and you don't want to lock users into your choices... you have to publish a separate version for each choice.
Theoretically you could create one project with 2 modules which share the same code and use different branches, like #ifdef macros in C++, via Scala macros or Scalameta. But that is a disaster if you want to use an IDE or publish source code that your users can browse in an IDE: no source to look at, no way to jump to a definition's source, disassembled bytecode at best.
So the solution where you simply have separate source directories for the mismatching versions is much easier to read, write and maintain in the long run; a minimal sketch of that layout follows.
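A minimal sbt sketch of that layout, assuming an environment variable selects the library version at build time (the LIB_VERSION variable, the scala-libv1/scala-libv2 directory names, and the com.example biglib coordinates are all made-up placeholders):

val libVersion = sys.env.getOrElse("LIB_VERSION", "1")

lazy val core = project
  .settings(
    // shared code stays in src/main/scala; version-specific code (e.g. MyClass)
    // lives in src/main/scala-libv1 or src/main/scala-libv2
    unmanagedSourceDirectories in Compile +=
      baseDirectory.value / "src" / "main" / ("scala-libv" + libVersion),
    libraryDependencies +=
      "com.example" %% "biglib" % (if (libVersion == "2") "2.0.0" else "1.0.0")
  )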


Auto-Generate Companion Object for Case Class in Scala

I've defined a case class to be used as a schema for a Dataset in Spark.
I want to be able to refer to individual columns from that schema by referencing them programmatically (vs. hardcoding their string value somewhere)
For example, for the following case class
final case class MySchema(id: Int, name: String, timestamp: Long)
I would like to auto-generate the following object
object MySchema {
  val id = "id"
  val name = "name"
  val timestamp = "timestamp"
}
The Macro approach outlined here appears to be what I want, but it won't compile under Scala 2.12. It gives the following errors which are completely baffling to me and show up in a total of 2 Google results with 0 fixes.
[error] pattern var qq$macro$2 in method unapply is never used: use a wildcard `_` or suppress this warning with `qq$macro$2@_`
[error]       case (c@q"$_ class $tpname[..$_] $_(...$params) extends { ..$_ } with ..$_ { $_ => ..$_ }") :: Nil =>
[error]             ^
[error] pattern var qq$macro$19 in method unapply is never used: use a wildcard `_` or suppress this warning with `qq$macro$19@_`
[error]       case (c@q"$_ class $_[..$_] $_(...$params) extends { ..$_ } with ..$_ { $_ => ..$_ }") ::
[error]             ^
[error] pattern var qq$macro$27 in method unapply is never used: use a wildcard `_` or suppress this warning with `qq$macro$27@_`
[error]       q"$mods object $tname extends { ..$earlydefns } with ..$parents { $self => ..$body }" :: Nil =>
[error]       ^
Suppressing the warning as outlined won't work because the macro numbers change every time I compile.
It's also worth noting that the similar SO answer here runs into the same compiler errors as shown above.
IntelliJ also complains about several parts of the macro that the compiler doesn't complain about, but that's not really an issue if I can get it to compile.
Is there a way to fix that Macro approach to work in Scala 2.12 or is there a better Scala 2.12 way to do this? (I can't use Scala 2.13 or higher due to compute environment constraints)
I just checked that the macro is still working in both Scala 2.13.10 and 2.12.17.
Most probably, you didn't set up your project for macro annotations:
build.sbt
//ThisBuild / scalaVersion := "2.13.10"
ThisBuild / scalaVersion := "2.12.17"

lazy val macroAnnotationSettings = Seq(
  scalacOptions ++= (CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, v)) if v >= 13 => Seq("-Ymacro-annotations") // for Scala 2.13
    case _                       => Nil
  }),
  libraryDependencies ++= (CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, v)) if v <= 12 => // for Scala 2.12
      Seq(compilerPlugin("org.scalamacros" % "paradise" % "2.1.1" cross CrossVersion.full))
    case _ => Nil
  })
)

lazy val core = project
  .settings(
    macroAnnotationSettings,
    scalacOptions ++= Seq(
      "-Ymacro-debug-lite", // optional, convenient to see how macros are expanded
    ),
  )
  .dependsOn(macros) // you must split your project into subprojects because macros must be compiled before core

lazy val macros = project
  .settings(
    macroAnnotationSettings,
    libraryDependencies ++= Seq(
      scalaOrganization.value % "scala-reflect" % scalaVersion.value, // necessary for macros
    ),
  )
project structure:
core
  src
    main
      scala
        Main.scala
macros
  src
    main
      scala
        Macros.scala
Then just do sbt clean compile.
The whole project: https://gist.github.com/DmytroMitin/2d9dbd6441ebf167aa127b80fb516afd
sbt documentation:
https://www.scala-sbt.org/1.x/docs/Macro-Projects.html
Scala documentation: https://docs.scala-lang.org/overviews/macros/annotations.html
Examples of build.sbt:
https://github.com/typelevel/simulacrum/blob/master/build.sbt
https://github.com/DmytroMitin/AUXify/blob/master/build.sbt
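If macro annotations remain troublesome in your environment, a runtime-reflection sketch (an alternative to, not a fix for, the macro approach: it reads the field names at runtime instead of generating a companion object) works on stock Scala 2.12 with only the scala-reflect dependency shown above:

import scala.reflect.runtime.universe._

// Lists the case accessors of a case class, in declaration order.
def fieldNames[T: TypeTag]: List[String] =
  typeOf[T].members.sorted.collect {
    case m: MethodSymbol if m.isCaseAccessor => m.name.toString
  }

fieldNames[MySchema] // List("id", "name", "timestamp")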

How can I override tasks `run` and `runMain` in SBT to use my own `ForkOptions`?

Problem
In a multimodule build, each module has its own baseDirectory, but I would like to launch applications defined in modules using the baseDirectory of the root project instead of the baseDirectory of the module involved.
This way, applications would always resolve relative file names against the root folder, which is a very common pattern.
The problem is that ForkOptions enforces the baseDirectory of the module, and apparently there's no easy way to change that because forkOptions is private. I would like to pass a forkOptions populated with the baseDirectory of the root project instead.
Besides, some modules contain two or more applications, so I'd like to have a separate configuration for each application in such modules.
An example tells more than 1000 words:
build.sbt
import sbt._
import Keys._

lazy val buildSettings: Seq[Setting[_]] = Defaults.defaultSettings

lazy val forkRunOptions: Seq[Setting[_]] = Seq(fork := true)

addCommandAlias("r1", "ModuleA/RunnerR1:run")
addCommandAlias("r2", "ModuleA/RunnerR2:run")

lazy val RunnerR1 = sbt.config("RunnerR1").extend(Compile)
lazy val RunnerR2 = sbt.config("RunnerR2").extend(Compile)

lazy val root =
  project
    .in(file("."))
    .settings(buildSettings:_*)
    .aggregate(ModuleA)

lazy val ModuleA =
  project
    .in(file("ModuleA"))
    .settings(buildSettings:_*)
    .configs(RunnerR1, RunnerR2)
    .settings(inConfig(RunnerR1)(
      forkRunOptions ++
      Seq(
        mainClass in Compile := Option("sbt.tests.issueX.Application1"))):_*)
    .settings(inConfig(RunnerR2)(
      forkRunOptions ++
      Seq(
        mainClass in Compile := Option("sbt.tests.issueX.Application2"))):_*)
In SBT console, I would expect this:
> r1
This is Application1
> r2
This is Application2
But I see this:
> r1
This is Application2
> r2
This is Application2
What is the catch?
Not only that... SBT is running the applications in process, not forking them. Why is fork := true not taking effect?
Explanation
see: https://github.com/frgomes/sbt-issue-2247
Turns out that configurations do not work the way one might think they work.
The problem is that, in the snippet below, configuration RunnerR1 does not inherit tasks from module ModuleA as you might expect. So, when you type r1 or r2 (i.e. ModuleA/RunnerR1:run or ModuleA/RunnerR2:run), SBT employs its delegation algorithm to find tasks and settings and, depending on how those tasks and settings were defined, may end up running tasks, or finding settings, from scopes you do not expect.
lazy val ModuleA =
  project
    .in(file("ModuleA"))
    .settings(buildSettings:_*)
    .configs(RunnerR1, RunnerR2)
    .settings(inConfig(RunnerR1)(
      forkRunOptions ++
      Seq(
        mainClass in Compile := Option("sbt.tests.issueX.Application1"))):_*)
This issue is related to usability, since the API provided by SBT is misleading. Eventually this pattern can be improved or better documented, but it's more a usability problem than anything else.
Circumventing the difficulty
Please find below how this issue can be circumvented.
Since ForkOptions is private, we have to provide our own way of running applications, based on SBT's code as much as possible.
In a nutshell, we have to guarantee that we redefine run, runMain and runner in all configurations we have.
import sbt._
import Keys._

//-------------------------------------------------------------
// This file contains a solution for the problem presented by
// https://github.com/sbt/sbt/issues/2247
//-------------------------------------------------------------

lazy val buildSettings: Seq[Setting[_]] = Defaults.defaultSettings ++ runSettings

lazy val runSettings: Seq[Setting[_]] =
  Seq(
    fork in (Compile, run) := true)

def forkRunOptions(s: Scope): Seq[Setting[_]] =
  Seq(
    // see: https://github.com/sbt/sbt/issues/2247
    // see: https://github.com/sbt/sbt/issues/2244
    runner in run in s := {
      val forkOptions: ForkOptions =
        ForkOptions(
          workingDirectory = Some((baseDirectory in ThisBuild).value),
          bootJars         = Nil,
          javaHome         = (javaHome in s).value,
          connectInput     = (connectInput in s).value,
          outputStrategy   = (outputStrategy in s).value,
          runJVMOptions    = (javaOptions in s).value,
          envVars          = (envVars in s).value)
      new {
        val fork_ = (fork in run).value
        val config: ForkOptions = forkOptions
      } with ScalaRun {
        override def run(mainClass: String, classpath: Seq[File], options: Seq[String], log: Logger): Option[String] =
          javaRunner(
            Option(mainClass), Option(classpath), options,
            Some("java"), Option(log), fork_,
            config.runJVMOptions, config.javaHome, config.workingDirectory, config.envVars, config.connectInput, config.outputStrategy)
      }
    },
    runner in runMain in s := (runner in run in s).value,
    run in s <<= Defaults.runTask(fullClasspath in s, mainClass in run in s, runner in run in s),
    runMain in s <<= Defaults.runMainTask(fullClasspath in s, runner in runMain in s)
  )
def javaRunner(mainClass: Option[String] = None,
               classpath: Option[Seq[File]] = None,
               options: Seq[String],
               javaTool: Option[String] = None,
               log: Option[Logger] = None,
               fork: Boolean = false,
               jvmOptions: Seq[String] = Nil,
               javaHome: Option[File] = None,
               cwd: Option[File] = None,
               envVars: Map[String, String] = Map.empty,
               connectInput: Boolean = false,
               outputStrategy: Option[OutputStrategy] = Some(StdoutOutput)): Option[String] = {

  def runner(app: String,
             args: Seq[String],
             cwd: Option[File] = None,
             env: Map[String, String] = Map.empty): Int = {
    import scala.collection.JavaConverters._
    val cmd: Seq[String] = app +: args
    val pb = new java.lang.ProcessBuilder(cmd.asJava)
    if (cwd.isDefined) pb.directory(cwd.get)
    pb.inheritIO
    //FIXME: set environment
    val process = pb.start()
    if (fork) 0
    else {
      def cancel() = {
        if (log.isDefined) log.get.warn("Background process cancelled.")
        process.destroy()
        15
      }
      try process.waitFor catch {
        case e: InterruptedException => cancel()
      }
    }
  }

  val app: String = javaHome.fold("") { p => p.absolutePath + "/bin/" } + javaTool.getOrElse("java")
  val jvm: Seq[String] = jvmOptions.map(p => p.toString)
  val cp: Seq[String] =
    classpath
      .fold(Seq.empty[String]) { paths =>
        Seq(
          "-cp",
          paths
            .map(p => p.absolutePath)
            .mkString(java.io.File.pathSeparator))
      }
  val klass = mainClass.fold(Seq.empty[String]) { name => Seq(name) }
  val xargs: Seq[String] = jvm ++ cp ++ klass ++ options

  if (log.isDefined)
    if (fork) {
      log.get.info(s"Forking: ${app} " + xargs.mkString(" "))
    } else {
      log.get.info(s"Running: ${app} " + xargs.mkString(" "))
    }

  if (cwd.isDefined) IO.createDirectory(cwd.get)
  val exitCode = runner(app, xargs, cwd, envVars)
  if (exitCode == 0)
    None
  else
    Some("Nonzero exit code returned from " + app + ": " + exitCode)
}
addCommandAlias("r1", "ModuleA/RunnerR1:run")
addCommandAlias("r2", "ModuleA/RunnerR2:run")

lazy val RunnerR1 = sbt.config("RunnerR1").extend(Compile)
lazy val RunnerR2 = sbt.config("RunnerR2").extend(Compile)

lazy val root =
  project
    .in(file("."))
    .settings(buildSettings:_*)
    .aggregate(ModuleA)

lazy val ModuleA =
  project
    .in(file("ModuleA"))
    .settings(buildSettings:_*)
    .configs(RunnerR1, RunnerR2)
    .settings(inConfig(RunnerR1)(
      forkRunOptions(ThisScope) ++
      Seq(
        mainClass := Option("sbt.tests.issueX.Application1"))):_*)
    .settings(inConfig(RunnerR2)(
      forkRunOptions(ThisScope) ++
      Seq(
        mainClass := Option("sbt.tests.issueX.Application2"))):_*)

Is there a cleaner way to compose SBT tasks?

There is a useful SBT plugin for formatting license headers:
https://github.com/Banno/sbt-license-plugin
As of 0.1.0, it only formats "scalaSource in Compile":
object Plugin extends sbt.Plugin {
  import LicenseKeys._

  object LicenseKeys {
    lazy val formatLicenseHeaders = TaskKey[Unit]("formatLicenseHeaders", "Includes the license header to source files")
    …
  }

  def licenseSettings = Seq(
    formatLicenseHeaders <<= formatLicenseHeadersTask,
    …
  )
  …

  private def formatLicenseHeadersTask =
    (streams, scalaSource in Compile, license in formatLicenseHeaders, removeExistingHeaderBlock in formatLicenseHeaders) map {
      (out, sourceDir, lic, removeHeader) =>
        modifySources(sourceDir, lic, removeHeader, out.log)
    }
I wrote a pull request where I generalized this to format both java and scala sources used for both compilation and testing here:
https://github.com/Banno/sbt-license-plugin/blob/master/src/main/scala/license/plugin.scala
private def formatLicenseHeadersTask = Def.task[Unit] {
  formatLicenseHeadersTask1.value
  formatLicenseHeadersTask2.value
  formatLicenseHeadersTask3.value
  formatLicenseHeadersTask4.value
}
The tasks formatLicenseHeadersTask1 through formatLicenseHeadersTask4 correspond to the four combinations of {scala,java}Source and {Compile,Test}.
This works but it's really ugly. Is there a way to write the same thing with loops as sketched below?
for {
  src <- Seq( scalaSource, javaSource )
  kind <- Seq( Compile, Test )
  …
}
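One possibility is to collapse the four tasks into one; since sbt's .value macro may not be called inside a lambda or loop body, the directories are enumerated rather than iterated over keys. A sketch, assuming the plugin's keys and the modifySources helper from the snippet above:

private def formatLicenseHeadersTask = Def.task[Unit] {
  val out = streams.value
  val lic = (license in formatLicenseHeaders).value
  val removeHeader = (removeExistingHeaderBlock in formatLicenseHeaders).value
  // .value must stay at the top level of the task body, so the four
  // source directories are listed explicitly instead of looped over keys
  val dirs = Seq(
    (scalaSource in Compile).value,
    (javaSource in Compile).value,
    (scalaSource in Test).value,
    (javaSource in Test).value)
  dirs.foreach(dir => modifySources(dir, lic, removeHeader, out.log))
}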

sbt runs IMain and play makes errors

I've built a little object which can interpret Scala code on the fly and catch a value out of it.
object Interpreter {
  import scala.tools.nsc._
  import scala.tools.nsc.interpreter._

  class Dummy

  val settings = new Settings
  settings.usejavacp.value = false
  settings.embeddedDefaults[Dummy] // to make IMain usable with sbt

  val imain = new IMain(settings)

  def run(code: String, returnId: String) = {
    this.imain.beQuietDuring {
      this.imain.interpret(code)
    }
    val ret = this.imain.valueOfTerm(returnId)
    this.imain.reset()
    ret
  }
}

object Main {
  def main(args: Array[String]) {
    println(Interpreter.run("val x = 1", "x"))
  }
}
In a pure sbt environment, or when called by the Scala interpreter, this code works fine! But if I run it in a simple Play (version 2.2.2) application, it gets a NullPointerException at val ret = this.imain.valueOfTerm(returnId).
Play also uses a modified sbt, so it should probably work. What does Play do that makes this code stop working? Any ideas how to get this code to work in Play?
Note
That's the used build.sbt:
name := "Test"
version := "1.0"
scalaVersion := "2.10.3"
libraryDependencies += "org.scala-lang" % "scala-compiler" % scalaVersion.value
Alternatively I tried this implementation, but it doesn't solve the problem either:
object Interpreter2 {
  import scala.tools.nsc._
  import scala.tools.nsc.interpreter._
  import play.api._
  import play.api.Play.current

  val settings: Settings = {
    lazy val urls = java.lang.Thread.currentThread.getContextClassLoader match {
      case cl: java.net.URLClassLoader => cl.getURLs.toList
      case _ => sys.error("classloader is not a URLClassLoader")
    }
    lazy val classpath = urls map { _.toString }
    val tmp = new Settings()
    tmp.bootclasspath.value = classpath.distinct mkString java.io.File.pathSeparator
    tmp
  }

  val imain = new IMain(settings)

  def run(code: String, returnId: String) = {
    this.imain.beQuietDuring {
      this.imain.interpret(code)
    }
    val ret = this.imain.valueOfTerm(returnId)
    this.imain.reset()
    ret
  }
}
Useful links I found to make this second implementation:
scala.tools.nsc.IMain within Play 2.1
How to set up classpath for the Scala interpreter in a managed environment?
https://groups.google.com/forum/#!topic/scala-user/wV86VwnKaVk
https://github.com/gourlaysama/play-repl-example/blob/master/app/REPL.scala#L18
https://gist.github.com/mslinn/7205854
After spending a few hours on this issue myself, here is a solution that I came up with. It works both inside SBT and outside. It is also expected to work in a variety of managed environments (like OSGi):
import java.net.URL
import scala.tools.nsc.Settings
import scala.tools.nsc.interpreter.IMain

private def getClasspathUrls(classLoader: ClassLoader, acc: List[URL]): List[URL] = {
  classLoader match {
    case null => acc
    case cl: java.net.URLClassLoader => getClasspathUrls(classLoader.getParent, acc ++ cl.getURLs.toList)
    case c =>
      LOGGER.error("classloader is not a URLClassLoader and will be skipped. ClassLoader type that was skipped is " + c.getClass)
      getClasspathUrls(classLoader.getParent, acc)
  }
}

val classpathUrls = getClasspathUrls(this.getClass.getClassLoader, List())
val classpathElements = classpathUrls map { url => url.toURI.getPath }
val classpath = classpathElements mkString java.io.File.pathSeparator

val settings = new Settings
settings.bootclasspath.value = classpath
val imain = new IMain(settings)
// use imain to interpret code. It should be able to access all your application classes as well as dependent libraries.
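For example, reusing the question's interpret/valueOfTerm pattern on top of this imain:

imain.interpret("val x = 21 * 2")
println(imain.valueOfTerm("x")) // Some(42)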
It's because Play uses the "fork in run" feature from sbt. This feature starts a new JVM, and that causes this failure to appear:
[info] Failed to initialize compiler: object scala.runtime in compiler mirror not found.
[info] ** Note that as of 2.8 scala does not assume use of the java classpath.
[info] ** For the old behavior pass -usejavacp to scala, or if using a Settings
[info] ** object programatically, settings.usejavacp.value = true.
See: http://www.scala-sbt.org/release/docs/Detailed-Topics/Forking
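Two possible remedies, sketched under the assumption of a plain sbt/Play project (whether Play's own run honors the fork setting may vary by Play version; the classloader-based answer above is the more robust route):

// Option 1, in build.sbt: don't fork run, so the interpreter inherits sbt's classpath
fork in run := false

// Option 2, in the Interpreter object: follow the hint in the message above
settings.usejavacp.value = true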

Idiomatically defining dynamic tasks in SBT 0.13?

I'm moving an SBT plugin from 0.12 over to 0.13. At various points in my plugin I schedule a dynamic set of tasks onto the SBT build graph.
Below is my old code. Is this still the idiomatic way to express this, or is it possible to leverage the macros to make everything prettier?
import sbt._
import Keys._

object Toplevel extends Build
{
  lazy val ordinals = taskKey[Seq[String]]("A list of things")
  lazy val times = taskKey[Int]("Number of times to list things")
  lazy val inParallel = taskKey[Seq[String]]("Strings to log in parallel")

  lazy val Foo = Project( id="Foo", base=file("foo"),
    settings = Defaults.defaultSettings ++ Seq(
      scalaVersion := "2.10.2",
      ordinals := Seq( "First", "Second", "Third", "Four", "Five" ),
      times := 3,
      inParallel <<= (times, ordinals, streams) flatMap
      { case (t, os, s) =>
        os.map( o => toTask( () =>
        {
          (0 until t).map( _ => o ).mkString(",")
        } ) ).join
      }
    )
  )
}
Apologies for the entirely contrived example!
EDIT
So, taking Mark's advice into account I have the following tidier code:
import sbt._
import Keys._
object Toplevel extends Build
{
lazy val ordinals = taskKey[Seq[String]]("A list of things")
lazy val times = taskKey[Int]("Number of times to list things")
lazy val inParallel = taskKey[Seq[String]]("Strings to log in parallel")
def parTask = Def.taskDyn
{
val t = times.value
ordinals.value.map(o => ordinalTask(o, t)).join
}
def ordinalTask(o: String, t: Int) = Def.task
{
(0 until t).map(_ => o).mkString(",")
}
lazy val Foo = Project( id="Foo", base=file("foo"),
settings = Defaults.defaultSettings ++ Seq(
scalaVersion := "2.10.2",
ordinals := Seq( "First", "Second", "Third", "Four", "Five" ),
times := 3,
inParallel := parTask.value
)
)
}
This seems to be nearly there, but fails the build with:
[error] /home/alex.wilson/tmp/sbt0.13/project/build.scala:13: type mismatch;
[error]  found   : sbt.Def.Initialize[Seq[sbt.Task[String]]]
[error]  required: sbt.Def.Initialize[sbt.Task[?]]
[error]   ordinals.value.map(o => ordinalTask(o, t)).join
You can use Def.taskDyn, which provides the new syntax for flatMap. The difference from Def.task is that the expected return type is a task Initialize[Task[T]] instead of just T. Translating your example,

inParallel := parTask.value

def parTask = Def.taskDyn {
  val t = times.value
  ordinals.value.map(o => ordinalTask(o, t)).joinWith(_.join)
}

def ordinalTask(o: String, t: Int) = Def.task {
  (0 until t).map(_ => o).mkString(",")
}

Note joinWith(_.join) rather than plain join: mapping over ordinals.value yields a Seq[Initialize[Task[String]]], and plain join only reaches Initialize[Seq[Task[String]]] (the type in the error above), while joinWith(_.join) also turns the inner Seq[Task[String]] into a single Task[Seq[String]], producing the Initialize[Task[Seq[String]]] that taskDyn expects.