Defining sbt task that invokes method from project code? - scala

I'm using sbt to build a Scala project. I want to define a very simple task so that when I enter generate at the sbt prompt:
sbt> generate
it will invoke my my.App.main(..) method to generate something.
There is an App.scala file in myproject/src/main/scala/my, and the simplified code is like this:
object App {
  def main(args: Array[String]) {
    val source = readContentOfFile("mysource.txt")
    val result = convert(source)
    writeToFile(result, "mytarget.txt")
  }
  // ignore some methods here
}
I tried to add following code into myproject/build.sbt:
lazy val generate = taskKey[Unit]("Generate my file")

generate := {
  my.App.main(Array())
}
But this doesn't compile, since sbt can't find my.App.
Then I tried to add it to myproject/project/build.scala:
import sbt._
import my._
object HelloBuild extends Build {
  lazy val generate = taskKey[Unit]("Generate my file")

  generate := {
    App.main(Array())
  }
}
But it still doesn't compile: it can't find the package my.
How to define such a task in SBT?

In .sbt format, do:
lazy val generate = taskKey[Unit]("Generate my file")
fullRunTask(generate, Compile, "my.App")
This is documented at http://www.scala-sbt.org/0.13.2/docs/faq.html, “How can I create a custom run task, in addition to run?”
Another approach would be:
lazy val generate = taskKey[Unit]("Generate my file")
generate := (runMain in Compile).toTask(" my.App").value
which works fine in simple cases but isn't as customizable.
Update: Jacek's advice to use resourceGenerators or sourceGenerators instead is good, if it fits your use case — can't tell from your description whether it does.

The other answers fit the question very well, but I think the OP might benefit from mine, too :)
The OP asked to "define a very simple task, that when I input generate in sbt will invoke my my.App.main(..) method to generate something" - and such a separate task might ultimately complicate the build.
sbt already offers a way to generate files at build time - sourceGenerators and resourceGenerators - and, from reading the question, I see no need to define a separate task for this.
In Generating files (see the future version of the document in the commit) you can read:
sbt provides standard hooks for adding source or resource generation
tasks.
With that knowledge in hand, one could think of the following solution:
sourceGenerators in Compile += Def.task {
  my.App.main(Array()) // it's not going to work without one change, though
  Seq[File]()          // a workaround before the above change is in effect
}.taskValue
To make that work properly, you should return a Seq[File] containing the generated files (not an empty Seq[File]()).
The main change needed for the code to work is to move the my.App class to the project folder. It then becomes part of the build definition. This also reflects what the class does, as it's really part of the build, not of the artifact that is the product of the build. When the same code is part of both the build and the artifact, you no longer keep the two concerns separate. If the my.App class participates in the build, it belongs to the build - hence the move to the project folder.
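With App under project/, the generator could then be sketched as follows (the generate method that returns the files it wrote is hypothetical - it's the refactoring suggested under "Other changes aka goodies" below):
// build.sbt - a sketch, assuming my.App lives under project/ and exposes a
// hypothetical generate(targetDir: File): Seq[File] that returns the files it wrote
sourceGenerators in Compile += Def.task {
  my.App.generate((sourceManaged in Compile).value)
}.taskValue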
The project's layout would then be as follows:
$ tree
.
├── build.sbt
└── project
    ├── App.scala
    └── build.properties
Separation of concerns (aka @joescii in da haus)
There's a point in @joescii's answer (which I extend in this answer) - "to make it a separate project that other projects can use. To do this, you will need to put your App object into a separate project and include it as a dependency in project/project", i.e.
Let's assume you've got a separate project build-utils with App.scala under src/main/scala. It's a regular sbt configuration with just the Scala code.
jacek:~/sandbox/so/generate-project-code
$ tree build-utils/
build-utils/
└── src
    └── main
        └── scala
            └── App.scala
You could test it out as a regular Scala application without getting sbt involved. No additional setup is required (and keeping sbt out of your head can be beneficial at times - less setup always helps).
In another project - project-code - that uses App.scala as part of its build, build.sbt is as follows:
project-code/build.sbt
lazy val generate = taskKey[Unit]("Generate my file")

generate := {
  my.App.main(Array())
}
Now the most important part - the wiring between the projects so that the App code is visible to the build of project-code:
project-code/project/build.sbt
lazy val buildUtils = RootProject(
  uri("file:/Users/jacek/sandbox/so/generate-project-code/build-utils")
)

lazy val plugins = project in file(".") dependsOn buildUtils
With the build definition(s), executing generate gives you the following:
jacek:~/sandbox/so/generate-project-code/project-code
$ sbt
[info] Loading global plugins from /Users/jacek/.sbt/0.13/plugins
[info] Loading project definition from /Users/jacek/sandbox/so/generate-project-code/project-code/project
[info] Updating {file:/Users/jacek/sandbox/so/generate-project-code/build-utils/}build-utils...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Updating {file:/Users/jacek/sandbox/so/generate-project-code/project-code/project/}plugins...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Compiling 1 Scala source to /Users/jacek/sandbox/so/generate-project-code/build-utils/target/scala-2.10/classes...
[info] Set current project to project-code (in build file:/Users/jacek/sandbox/so/generate-project-code/project-code/)
> generate
Hello from App.main
[success] Total time: 0 s, completed May 2, 2014 2:54:29 PM
I've changed the code of App to be:
> eval "cat ../build-utils/src/main/scala/App.scala"!
package my
object App {
  def main(args: Array[String]) {
    println("Hello from App.main")
  }
}
The project structure is as follows:
jacek:~/sandbox/so/generate-project-code/project-code
$ tree
.
├── build.sbt
└── project
    ├── build.properties
    └── build.sbt
Other changes aka goodies
I'd also propose some other changes to the code of the source generator:
Move the code out of the main method into a separate method that returns the generated files, and have main call it. That makes reusing the code from sourceGenerators easier (no superfluous Array() argument, and the generated files are returned explicitly).
Use filter or map for convert (to give it a more functional flavour). A sketch combining both suggestions follows below.
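A minimal sketch of that refactoring (the file handling and the convert body are placeholders, not the OP's actual logic):
package my

import java.io.{File, PrintWriter}

object App {
  // Returns the files it generated, so the same code can back a sourceGenerators
  // task directly (no Array() argument, no empty Seq[File]() workaround).
  def generate(targetDir: File): Seq[File] = {
    val lines  = scala.io.Source.fromFile("mysource.txt").getLines().toSeq
    val result = lines.map(convert)                   // convert as a per-line map
    val target = new File(targetDir, "mytarget.txt")
    val out    = new PrintWriter(target)
    try out.write(result.mkString("\n")) finally out.close()
    Seq(target)
  }

  def main(args: Array[String]): Unit = { generate(new File(".")); () }

  private def convert(line: String): String = line    // placeholder conversion
}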

The solution that @SethTisue proposes will work. Another approach is to make it a separate project that other projects can use. To do this, you will need to put your App object into a separate project and include it as a dependency in project/project, or package it as an sbt plugin, ideally with this task definition included.
For an example of how to create a lib that is packaged as a plugin, take a look at snmp4s. The gen directory contains the code that does some code generation (analogous to your App code) and the sbt directory contains the sbt plugin wrapper for gen.
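A bare-bones sketch of what such a plugin wrapper could look like (names are illustrative, and AutoPlugin requires sbt 0.13.5+; snmp4s itself is the real-world reference):
// build.sbt of the plugin project
sbtPlugin := true

// src/main/scala/MyGeneratorPlugin.scala
import sbt._

object MyGeneratorPlugin extends AutoPlugin {
  object autoImport {
    val generate = taskKey[Unit]("Generate my file")
  }
  import autoImport._

  // any project enabling this plugin gets the generate task for free
  override def projectSettings: Seq[Setting[_]] = Seq(
    generate := my.App.main(Array())
  )
}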

Related

Best practices for Scala physical directory structure [duplicate]

I am learning Scala now and I want to write some silly little app like a console Twitter client, or whatever. The question is how to structure the application on disk and logically. I know Python, and there I would just create some files with classes and then import them in the main module, like import util.ssh or from tweets import Retweet (I hope you don't mind those names, they are just for reference). But how should I do this in Scala? Also, I don't have much experience with the JVM and Java, so I am a complete newbie here.
I'm going to disagree with Jens, here, though not all that much.
Project Layout
My own suggestion is that you model your efforts on Maven's standard directory layout.
Previous versions of SBT (before SBT 0.9.x) would create it automatically for you:
dcs#ayanami:~$ mkdir myproject
dcs#ayanami:~$ cd myproject
dcs#ayanami:~/myproject$ sbt
Project does not exist, create new project? (y/N/s) y
Name: myproject
Organization: org.dcsobral
Version [1.0]:
Scala version [2.7.7]: 2.8.1
sbt version [0.7.4]:
Getting Scala 2.7.7 ...
:: retrieving :: org.scala-tools.sbt#boot-scala
confs: [default]
2 artifacts copied, 0 already retrieved (9911kB/134ms)
Getting org.scala-tools.sbt sbt_2.7.7 0.7.4 ...
:: retrieving :: org.scala-tools.sbt#boot-app
confs: [default]
15 artifacts copied, 0 already retrieved (4096kB/91ms)
[success] Successfully initialized directory structure.
Getting Scala 2.8.1 ...
:: retrieving :: org.scala-tools.sbt#boot-scala
confs: [default]
2 artifacts copied, 0 already retrieved (15118kB/160ms)
[info] Building project myproject 1.0 against Scala 2.8.1
[info] using sbt.DefaultProject with sbt 0.7.4 and Scala 2.7.7
> quit
[info]
[info] Total session time: 8 s, completed May 6, 2011 12:31:43 PM
[success] Build completed successfully.
dcs#ayanami:~/myproject$ find . -type d -print
.
./project
./project/boot
./project/boot/scala-2.7.7
./project/boot/scala-2.7.7/lib
./project/boot/scala-2.7.7/org.scala-tools.sbt
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt/0.7.4
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt/0.7.4/compiler-interface-bin_2.7.7.final
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt/0.7.4/compiler-interface-src
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt/0.7.4/compiler-interface-bin_2.8.0.RC2
./project/boot/scala-2.7.7/org.scala-tools.sbt/sbt/0.7.4/xsbti
./project/boot/scala-2.8.1
./project/boot/scala-2.8.1/lib
./target
./lib
./src
./src/main
./src/main/resources
./src/main/scala
./src/test
./src/test/resources
./src/test/scala
So you'll put your source files inside myproject/src/main/scala, for the main program, or myproject/src/test/scala, for the tests.
Since that doesn't work anymore, there are some alternatives:
giter8 and sbt.g8
Install giter8, clone ymasory's sbt.g8 template, adapt it to your needs, and use that. Below, for example, is a run of ymasory's unmodified sbt.g8 template. I think this is one of the best alternatives for starting new projects when you have a good notion of what you want in all your projects.
$ g8 ymasory/sbt
project_license_url [http://www.gnu.org/licenses/gpl-3.0.txt]:
name [myproj]:
project_group_id [com.example]:
developer_email [john.doe@example.com]:
developer_full_name [John Doe]:
project_license_name [GPLv3]:
github_username [johndoe]:
Template applied in ./myproj
$ tree myproj
myproj
├── build.sbt
├── LICENSE
├── project
│   ├── build.properties
│   ├── build.scala
│   └── plugins.sbt
├── README.md
├── sbt
└── src
    └── main
        └── scala
            └── Main.scala
4 directories, 8 files
np plugin
Use softprops's np plugin for sbt. In the example below, the plugin is configured in ~/.sbt/plugins/build.sbt and its settings in ~/.sbt/np.sbt, using the standard sbt script. If you use paulp's sbt-extras, you'd need to install these things under the right Scala version subdirectory in ~/.sbt, as it uses separate configurations for each Scala version. In practice, this is the one I use most often.
$ mkdir myproj; cd myproj
$ sbt 'np name:myproj org:com.example'
[info] Loading global plugins from /home/dcsobral/.sbt/plugins
[warn] Multiple resolvers having different access mechanism configured with same name 'sbt-plugin-releases'. To avoid conflict, Remove duplicate project resolvers (`resolvers`) or rename publishing resolver (`publishTo`).
[info] Set current project to default-c642a2 (in build file:/home/dcsobral/myproj/)
[info] Generated build file
[info] Generated source directories
[success] Total time: 0 s, completed Apr 12, 2013 12:08:31 PM
$ tree
.
├── build.sbt
├── src
│   ├── main
│   │   ├── resources
│   │   └── scala
│   └── test
│       ├── resources
│       └── scala
└── target
    └── streams
        └── compile
            └── np
                └── $global
                    └── out
12 directories, 2 files
mkdir
You could simply create it with mkdir:
$ mkdir -p myproj/src/{main,test}/{resources,scala,java}
$ tree myproj
myproj
└── src
    ├── main
    │   ├── java
    │   ├── resources
    │   └── scala
    └── test
        ├── java
        ├── resources
        └── scala
9 directories, 0 files
Source Layout
Now, about the source layout. Jens recommends following Java style. Well, the Java directory layout is a requirement -- in Java. Scala does not have the same requirement, so you have the option of following it or not.
If you do follow it, assuming the base package is org.dcsobral.myproject, then source code for that package would be put inside myproject/src/main/scala/org/dcsobral/myproject/, and so on for sub-packages.
Two common ways of diverging from that standard are:
Omitting the base package directory, and only creating subdirectories for the sub-packages.
For instance, let's say I have the packages org.dcsobral.myproject.model, org.dcsobral.myproject.view and org.dcsobral.myproject.controller, then the directories would be myproject/src/main/scala/model, myproject/src/main/scala/view and myproject/src/main/scala/controller.
Putting everything together. In this case, all source files would be inside myproject/src/main/scala. This is good enough for small projects. In fact, if you have no sub-projects, it is the same as above.
And this deals with directory layout.
File Names
Next, let's talk about files. In Java, the practice is separating each class in its own file, whose name will follow the name of the class. This is good enough in Scala too, but you have to pay attention to some exceptions.
First, Scala has object, which Java does not have. A class and object of the same name are considered companions, which has some practical implications, but only if they are in the same file. So, place companion classes and objects in the same file.
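A trivial illustration (hypothetical class):
// User.scala - the class and its companion object must be defined in the same file
class User(val name: String)

object User {
  def fromLogin(login: String): User = new User(login)
}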
Second, Scala has a concept known as a sealed class (or trait), which limits subclasses (or implementing objects) to those declared in the same file. This is mostly done to create algebraic data types with pattern matching and exhaustiveness checking. For example:
sealed abstract class Tree
case class Node(left: Tree, right: Tree) extends Tree
case class Leaf(n: Int) extends Tree
scala> def isLeaf(t: Tree) = t match {
     |   case Leaf(n: Int) => println("Leaf "+n)
     | }
<console>:11: warning: match is not exhaustive!
missing combination           Node

       def isLeaf(t: Tree) = t match {
                             ^
isLeaf: (t: Tree)Unit
If Tree was not sealed, then anyone could extend it, making it impossible for the compiler to know whether the match was exhaustive or not. Anyway, sealed classes go together in the same file.
Another naming convention is to name the files containing a package object (for that package) package.scala.
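For example, a package object for org.dcsobral.myproject.model would conventionally live in src/main/scala/org/dcsobral/myproject/model/package.scala:
package org.dcsobral.myproject

package object model {
  // aliases and small helpers shared across the model package
  type Id = Long
  def idOf(s: String): Id = s.toLong
}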
Importing Stuff
The most basic rule is that stuff in the same package sees each other. So, put everything in the same package, and you don't need to concern yourself with what sees what.
But Scala also has relative references and imports. This requires a bit of an explanation. Say I have the following declarations at the top of my file:
package org.dcsobral.myproject
package model
Everything following will be put in the package org.dcsobral.myproject.model. Also, not only everything inside that package will be visible, but everything inside org.dcsobral.myproject will be visible as well. If I just declared package org.dcsobral.myproject.model instead, then org.dcsobral.myproject would not be visible.
The rule is pretty simple, but it can confuse people a bit at first. The reason for this rule is relative imports. Consider now the following statement in that file:
import view._
This import may be relative -- all imports can be relative unless you prefix them with _root_.. It can refer to the following packages: org.dcsobral.myproject.model.view, org.dcsobral.myproject.view, scala.view and java.lang.view. It could also refer to an object named view inside scala.Predef. Or it could be an absolute import referring to a top-level package named view.
If more than one such package exists, one is picked according to some precedence rules. If you need to import something else, you can turn the import into an absolute one.
This import makes everything inside the view package (wherever it is) visible in its scope. If it happens inside a class, an object or a def, then the visibility is restricted to that scope. It imports everything because of the ._, which is a wildcard.
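Two small illustrations of those points (hypothetical packages; the absolute import assumes a top-level view package actually exists):
package org.dcsobral.myproject.model

// _root_ forces an absolute import, bypassing the relative lookup
import _root_.view._

class Renderer {
  def render(): Unit = {
    // an import inside a def (or class/object) is only visible in that scope
    import scala.collection.mutable.ListBuffer
    val names = ListBuffer.empty[String]
  }
}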
An alternative to the relative style might look like this:
package org.dcsobral.myproject.model
import org.dcsobral.myproject.view
import org.dcsobral.myproject.controller
In that case, the packages view and controller would be visible, but you'd have to name them explicitly when using them:
def post(view: view.User): Node =
Or you could use further relative imports:
import view.User
The import statement also lets you rename things on import, or import everything except something. Refer to the relevant documentation for more details.
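For example (standard Scala import syntax, unrelated to this particular project):
import java.util.{List => JList}               // rename List to JList on import
import scala.collection.mutable.{Set => _, _}  // import everything from mutable except Set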
So, I hope this answers all your questions.
Scala supports and encourages the package structure of Java/the JVM, and pretty much the same recommendations apply:
mirror the package structure in the directory structure. This isn't necessary in Scala, but it helps to find your way around
use your inverse domain as a package prefix. For me that means everything starts with de.schauderhaft. Use something that makes sense for you if you don't have your own domain
only put top level classes in one file if they are small and closely related. Otherwise stick with one class/object per file. Exceptions: companion objects go in the same file as the class. Implementations of a sealed class go into the same file.
if your app grows you might want to have something like layers and modules and mirror those in the package structure, so you might have a package structure like this: <domain>.<module>.<layer>.<optional subpackage>.
don't have cyclic dependencies on a package, module or layer level
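A purely illustrative example of the <domain>.<module>.<layer> layout mentioned above (module and layer names are made up):
// billing module, persistence layer
package de.schauderhaft.billing.persistence

class InvoiceRepository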

Using DependsOn between two ScalaJS SBT projects

(Long question ahead. Simplified tl;dr at the bottom).
I have two ScalaJS projects built with SBT - "myapp" and "mylib", in the following directory structure
root/build.sbt
root/myapp/build.sbt
root/myapp/jvm/
root/myapp/js/
root/myapp/shared/
root/mylib/build.sbt
root/mylib/jvm
root/mylib/js
root/mylib/shared
mylib exports an artifact named "com.example:mylib:0.1", which is used as a libraryDependency for myapp.
myapp and mylib are in separate repositories, contain their own build files, and must be able to be built completely separately (i.e. they must contain their own individual build config).
In production, they will be built separately with mylib being first published as a maven artifact before building myapp separately.
In development however, I want to be able to merge these into a parent SBT project so that both can be developed in parallel without needing to use publishLocal after each change.
In a traditional (not scalajs) project this would be quite easy
$ROOT/build.sbt:
lazy val mylib = project
lazy val myapp = project.dependsOn(mylib)
However in ScalaJS, we actually have two projects inside each module - appJVM, appJS, libJVM and libJS. As such, the above configuration only finds the aggregate root project and does not correctly apply the dependsOn configuration to the actual JVM and JS projects.
(i.e. myapp and mylib build.sbt each contains two projects, and an aggregate root project)
Ideally I'd like to be able to do something like the following
lazy val mylibJVM = project
lazy val myappJVM = project.dependsOn(mylibJVM)
lazy val mylibJS = project
lazy val myappJS = project.dependsOn(mylibJS)
Unfortunately this just creates new projects within the root instead of importing the subprojects themselves.
I've also tried various combinations of paths (such as)
lazy val mylibJVM = project.in(file("mylib/jvm"))
But this doesn't pick up the configuration from the build.sbt file in mylib.
Ultimately I keep running up against the same problem - when importing an existing multi-project SBT project into a parent sbt file, it imports the root project, but does not seem to provide a way to import a subproject from an existing multimodule SBT file in a way that lets me add dependsOn configuration to it.
tl;dr
If I have
root/mylib/build.sbt with multiple projects defined and
root/myapp/build.sbt with multiple projects defined
Is it possible to import individual subprojects into root/build.sbt instead of the root project from the submodule?
i.e. can I have two layers of multi-project builds?
After spending a lot of time digging through SBT source code, I managed to figure out a solution. This isn't clean, but it works. (For bonus points, it imports correctly into IntelliJ).
// Add this function to your root build.sbt file.
// It can be used to define a dependency between any
// `ProjectRef` without needing a full project definition.
def addDep(from: String, to: String) = {
  buildDependencies in Global <<= (
    buildDependencies in Global,
    thisProjectRef in from,
    thisProjectRef in to) {
      (deps, fromref, toref) =>
        deps.addClasspath(fromref, ResolvedClasspathDependency(toref, None))
    }
}
// `project` will import the `build.sbt` file
// in the subdirectory of the same name as the `lazy val`
// (performed by an SBT macro). i.e. `./mylib/build.sbt`
//
// This won't reference the actual subprojects directly,
// but will import them into the namespace such that they
// can be referenced as "ProjectRefs", which are implicitly
// converted to from strings.
//
// We then aggregate the JVM and JS ScalaJS projects
// into the new root project we've defined. (Which unfortunately
// won't inherit anything from the child build.sbt)
lazy val mylib = project.aggregate("mylibJVM","mylibJS")
lazy val myapp = project.aggregate("myappJVM","myappJS")
// Define a root project to aggregate everything
lazy val root = project.in(file(".")).aggregate(mylib,myapp)
// We now call our custom function to define a ClassPath dependency
// between `myapp` -> `mylib` for both JVM and JS subprojects.
// In particular, this will correctly find exported artifacts
// so that `myapp` can refer to `mylib` in libraryDependencies
// without needing to use `publishLocal`.
addDep("myappJVM", "mylibJVM")
addDep("myappJS","mylibJS")

How to create a custom package task to jar a subset of classes in SBT

I am trying to define a separate package task without modifying the original task in the compile configuration. This new task will package only a subset of classes conforming to an API which we need to be able to share with other teams so they can write plugins for our application. So the end result will be two jars, one with the full application and a second one with a subset of the classes.
I approached this problem by creating a different configuration which I called pluginApi and would redefine the packageBin task within this new configuration so it does not change the original definition of packageBin. This idea was taken from here:
How to create custom "package" task to jar up only specific package in SBT?
In my build.sbt I have:
lazy val PluginApi = config("pluginApi") extend(Compile) describedAs("Custom plugin api configuration")
lazy val root = project in file(".") overrideConfigs (PluginApi)
This effectively creates my new configuration and I can call
sbt pluginApi:packageBin
Which generates the complete jar in the same way as compile:packageBin would do. I then try to modify the mappings in the new packageBin task with:
mappings in (PluginApi, packageBin) ~= { (ms: Seq[(File, String)]) =>
  ms filter { case (file, toPath) =>
    toPath.startsWith("some/path/defining/api")
  }
}
but this has no effect. I think the reason is because the call to pluginApi:packageBin is delegated to compile:packageBin rather than it being a cloned task.
I can redefine a new packageBin within the new scope like:
packageBin in PluginApi := {
}
However I would have to rewrite all of the packageBin functionality instead of reusing existing code. Also, if that rewriting is unavoidable, I am not sure what the implementation would look like.
Could somebody provide an example about how to achieve this?
You could do it as follows:
lazy val PluginApi = config("pluginApi").extend(Compile)

inConfig(PluginApi)(Defaults.compileSettings) // you have to have the standard compile settings in the new configuration

mappings in (PluginApi, packageBin) := {
  val original = (mappings in (PluginApi, packageBin)).value
  original.filter { case (file, toPath) => toPath.startsWith("some/path/defining/api") }
}

unmanagedSourceDirectories in PluginApi := (unmanagedSourceDirectories in Compile).value
Note that, if you keep your sources in src/main/scala you'll have to override unmanagedSourceDirectories in the newly created configuration.
Normally the unmanagedSourceDirectories contains the configuration name. E.g. src/pluginApi/scala or src/pluginApi/java.
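For reference, the default source layout the new configuration would look for is roughly this (only relevant if you don't override unmanagedSourceDirectories as above):
src
├── main
│   └── scala        <- sources of the Compile configuration
└── pluginApi
    └── scala        <- where the pluginApi configuration looks by default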
I have had similar problems (with more than one jar per project). Our project uses Ant - there you can do it, you just end up repeating yourself a lot.
However, I have come to the conclusion that this scenario (2 JARs for one project) actually can be simplified by splitting the project - i.e. making 2 modules out of it.
This way, I don't have to "fight" tools which assume project==artifact (like sbt, maybe maven?, IDEA's default setting,...).
As a bonus, the compiler helps me verify that my dependencies are correct, i.e. that I did not accidentally make my API package depend on the implementation package. When everything is compiled together and the classes are only split apart in the JAR step, you run the risk of an invalid dependency in your setup that you only notice when testing, because at compile time everything is compiled together.
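A minimal sketch of what such a split could look like in sbt (project names are illustrative):
// build.sbt - two modules instead of two jars cut from one project
lazy val api = project.in(file("api"))      // only the plugin API classes

lazy val core = project.in(file("core"))    // the full application
  .dependsOn(api)                           // api can never depend back on core
Each module then produces its own jar via its regular packageBin, and an invalid api -> implementation dependency fails at compile time instead of surfacing only in tests.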

sbt subproject aggregation and dependency behavior

I have an sbt project with a few subprojects, each of which publishes some artifacts and has a fairly extensive test suite.
When I run the build on my CI server, I want to publish the artifacts to a staging location and run the tests after the publishing task. Since others may want the artifacts, I'd like to tell sbt that I want it to build all the artifacts for all subprojects, then run all the tests, since by default it seems to run them interleaved in an unspecified order.
I have a ScopeFilter giving me access to all my subprojects, so I can make my ciBuild task depend on something like the following
(test in Test).all(subprojectScopeFilter).dependsOn(myArtifactsTask.all(subprojectScopeFilter))
However, that doesn't seem to have any real effect on the order, and I definitely see some subprojects running tests before others have run their myArtifactsTask. I'm guessing that I don't fully understand how all works and it might be saying that each independent subproject's test task depends on that same subproject's myArtifactsTask? If that's the case, how can I specify what I want? Is it documented somewhere that I've missed? The manual describes the basics of all but not how it interacts with other constructs.
sbt automatically resolves the ordering between tasks and projects and builds them in that order.
Here's what you could do. Let's assume you have three projects: a root and two sub-projects, with the key myArtifactTask defined in the root.
project/Build.scala
object MyBuild extends Build {
  val myArtifactTask = TaskKey[Unit]("my-artifact-task", "My Artifact Task")
}
The myArtifactTask is implemented in both sub-projects.
subproject-a/build.sbt
myArtifactTask := {
println("myArtifactTask:project-a")
}
subproject-b/build.sbt
myArtifactTask := {
  println("myArtifactTask:project-b")
}
What you want to do is define your root's build.sbt so that it calls myArtifactTask in both sub-projects. Then you can define a new task, testedArtifact, which depends on myArtifactTask.
build.sbt
lazy val testedArtifact = taskKey[Unit]("Runs myArtifactTask followed by tests")

lazy val inAnyProjectButRoot: ScopeFilter = ScopeFilter(
  inAnyProject -- inProjects(ThisProject)
)

myArtifactTask := {
  myArtifactTask.all(inAnyProjectButRoot).value
}

testedArtifact := {
  (test in Test).all(inAnyProjectButRoot).value
}

testedArtifact <<= testedArtifact.dependsOn(myArtifactTask)
Now calling testedArtifact in the root project will first run all the myArtifactTasks in the sub-projects, followed by the tests.

Call sourceGenerators manually in sbt

I am using sourceGenerators in Compile to generate some Scala source files to target\scala-2.10\src_managed. When I run sbt compile, the sources are generated and compiled along with the regular code under src\main\scala.
But what if I want to generate the sources separately without compiling? What I am looking for is this flow:
call a task to generate source code
use the generated sources for IDE assistance in my regular sources
compile everything
How can this be accomplished?
Update
If I understand you correctly now, you want to be able to call the source generators separately. For that you can simply define a custom task like this somewhere in your /build.sbt or /project/Project.scala file:
val generateSources = taskKey[List[File]]("generate sources")
generateSources <<=
  (sourceGenerators in Compile) { _.join.map(_.flatten.toList) }
you can then call the generator manually from the sbt console like this:
> generateSources
[success] Total time: 0 s, completed 07.04.2014 13:42:41
Side Note:
I would however recommend setting up your IDE to generate the sources, if the only thing you need them for is IDE support.
Old answer for future reference
You don't need to do anything special to use a generated class or object from a non-generated class or object.
In your /build.sbt or /project/Project.scala file you define the source generator:
sourceGenerators in Compile <+= sourceManaged in Compile map { dir =>
  val file = dir / "A.scala"
  IO.write(file, "class A(val name: String)")
  Seq(file)
}
Then you write some code which creates an instance of class A in /src/main/scala/B.scala:
object B extends App {
  val a = new A("It works")
  println(a.name)
}
If you compile this code from sbt, it will consider both generated and non-generated code at compilation:
> run
[info] Compiling 2 scala sources to <...>
[info] Running B
It works
[success] Total time: 0 s, completed 07.04.2014 13:15:47