build.sbt Scoping does not work - scala

Hello, I'm trying to set a value depending on the running task in my build.sbt:
ngScalaModule in fastOptJS := "./plugintest2-fastopt.js"
ngScalaModule in fullOptJS := "./plugintest2-opt.js"
Now when I run the task fastOptJS or fullOptJS, the value of ngScalaModule is not used. When I try this without scoping:
ngScalaModule := "./plugintest2-fastopt.js"
The value is used.
My question is whether I have done something totally wrong, or whether the plugin that provides ngScalaModule has to explicitly implement the scoping.
The setting ngScalaModule is provided by another sbt plugin, as are the two tasks, just in case it makes a difference.
Thanks

In the definition of Angulate2Plugin, we can indeed see that the setting ngScalaModule is only ever read scoped by the project (and not by the configuration nor the task, which are the other two scope axes in sbt).
Therefore, the setting you define with ngScalaModule in fastOptJS is not used by anything: angulate2 only reads ngScalaModule (unscoped).
That means that angulate2 would have to be changed for you to be able to specify a different ngScalaModule in fastOptJS versus fullOptJS. Note that angulate2 itself depends on SJSXPlugin (by the same author) for its sjsxSnippets setting, which is also only scoped per project, so this might have some deep consequences.
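For illustration only (the keys below are made up, not part of angulate2), this is the difference in sbt between reading a key unscoped and reading it scoped to a task, which is what the plugin would have to do to honour your task-scoped setting:

val myOutput = settingKey[String]("output file name")
val myTask = taskKey[Unit]("demo task")

myOutput := "default.js"
myOutput in myTask := "task-specific.js"

myTask := {
  // An unscoped read picks up "default.js" -- this is how angulate2 reads ngScalaModule
  println(myOutput.value)
  // Only a read scoped to the task picks up the value set with `in myTask`
  println((myOutput in myTask).value)
}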

Related

Adding resource to project-specific SBT autoplugins

SBT lets you define autoplugins specific to your project by putting them in ./project.
I'm trying to add resources to one such autoplugin - by which I mean something that it could access through a call to getClass.getResourceAsStream.
I have, however, not been able to work out how to do that, or even if it was possible. There's no documentation that I could find on the subject, and the obvious (simply putting resources in ./project with the plugin) fails.
Is what I'm trying to achieve possible?
Yes, you need to place your resource in ./project/src/main/resources/
For a quick demonstration that this works, assume the file name is test.txt, put the following in your build.sbt:
lazy val hello = taskKey[Unit]("prints the content of test.txt")
hello := println(IO.readStream(getClass.getResourceAsStream("test.txt")))
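If you want to read the resource from inside the project-local autoplugin itself, as described in the question, a sketch could look like the following (the plugin object and task name are made up for illustration):

// ./project/MyPlugin.scala
import sbt._
import Keys._

object MyPlugin extends AutoPlugin {
  override def trigger = allRequirements

  object autoImport {
    val showResource = taskKey[Unit]("prints the content of test.txt")
  }
  import autoImport._

  override def projectSettings = Seq(
    showResource := {
      // The resource lives in ./project/src/main/resources/test.txt,
      // so it is on the classpath of the build definition.
      val stream = getClass.getResourceAsStream("/test.txt")
      println(IO.readStream(stream))
    }
  )
}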

How can I change the compiler flags of an sbt project without causing recompilation?

It often comes up during testing and debugging a Scala project built with sbt that I need to pass some extra compiler flags for a particular file. For example -Xlog-implicits to debug implicit resolution problems. However, changing scalacOptions either in build.sbt or the console invalidates the cache and causes the whole project / test suite to be recompiled. In addition to being annoying to wait so long, this also means that a lot of noise from irrelevant files is printed. Instead it would be better if I could compile a specific file with some extra flags from the sbt console, but I did not find a way to do this.
Problem
The reason why changing the scalac options triggers recompilation is that Zinc, Scala's incremental compiler, cannot possibly know which compiler flags affect the semantics of incremental compilation, so it's pessimistic about it. I believe this can be improved, and some whitelisted flags could be supported, so that next time people like you don't have to ask about it.
Nevertheless, there's a solution to this problem, and it's more generic than it looks at first sight.
Solution
You can create a subproject in your sbt build which is a copy of the project you want to "log implicits" in, but with -Xlog-implicits enabled by default.
// Let's say foo is your project definition
lazy val foo = project.settings(???)

// You define the copy of your project like this
lazy val fooImplicits = foo
  .copy(id = "foo-implicits")
  .settings(
    target := baseDirectory.value./("another-target"),
    scalacOptions += "-Xlog-implicits"
  )
Note the following properties of this code snippet:
We redefine the project ID because when we reuse a project definition the ID stays the same as the previous one (foo in this case), and sbt fails when two projects share the same ID.
We redefine the target directory because we want to avoid recompilation. If we kept it the same as before, then recompiling fooImplicits would delete the compilation products of the previous compilation (and vice versa). That's exactly what we want to avoid.
We add -Xlog-implicits to the scalac options, as you request in this question. In a generic solution, this piece of code would go away.
When is this useful?
This is useful not only for your use case, but also when you want to have different modules of the same project in different Scala versions, as sketched below. This has two benefits:
You don't use sbt's cross-compilation (++), which is known to have some memory issues.
You can add new library dependencies that only exist for a concrete Scala version.
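A minimal sketch of such a copy (the Scala version and the extra dependency are placeholders, not part of the original answer):

lazy val foo211 = foo
  .copy(id = "foo-2.11")
  .settings(
    target := baseDirectory.value./("target-2.11"),
    scalaVersion := "2.11.12",
    // A dependency that only exists for this Scala version (hypothetical)
    libraryDependencies += "org.example" %% "only-for-2.11" % "1.0.0"
  )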
It has more applications, but I hope this addresses your question.

SBT 0.13.8 what does the SettingKey.~= method do

The SettingKey.~= method is used to exclude dependencies from libraryDependencies (see play 2.3.8 sbt excluding logback), but trying to find out what it does is hard as:
There is no documentation about this function at http://www.scala-sbt.org/0.13.12/api/index.html#sbt.SettingKey,
It cannot be searched for using Google, as the method name consists of symbols, and
Examination of the SBT source code (https://github.com/sbt/sbt/blob/0.13/main/settings/src/main/scala/sbt/Structure.scala#L47) does not provide an obvious answer.
Can anyone shed light on what this does?
someScopedKey ~= f
is equivalent to
someScopedKey := f(someScopedKey.value)
In other words, it transforms the previous value of the setting/task with a given function. That's literally all there is to know about it.
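As a sketch of the libraryDependencies use case mentioned in the question (the excluded module name is just an example):

// Transform the previous value of libraryDependencies with a function,
// here filtering out an unwanted module -- equivalent to the := form above.
libraryDependencies ~= { deps =>
  deps.filterNot(_.name == "logback-classic")
}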

Define custom test configurations in sbt

I need to define a custom test configuration in sbt which runs test, but with some extra settings. I've been looking around trying to figure out how to do this, but I can't seem to get it right.
What I would like to do is something like this: > test, which would run the normal test task, and > pipelinetest, which would be exactly the same as test, only with javaOptions += "-Dpipeline.run=run".
I've figured out how to set the javaOptions for test, like this:
javaOptions in test += "-Dpipeline.run=run" so what I would like to be able to do is this: javaOptions in pipelinetest += "-Dpipeline.run=run"
How would I define pipelinetest to achieve this goal? Does this need to be a new task? Or would this be a setting in test? I'm very new to sbt and quite confused about this at the moment, and reading the documentation didn't help, so any help would be greatly appreciated.
I have only a partial answer, but I thought this might be useful info. I was just trying to do something similar for the sbt build in Spark -- I wanted to have a way to run tests with a debugger. Mark Harrah's comment pointed me in the right direction. The change I made was:
lazy val TestDebug = config("testDebug") extend(Test)

...

baseProject
  .configs(TestDebug)
  .settings(inConfig(TestDebug)(Defaults.testTasks): _*)
  .settings(Seq(
    javaOptions in TestDebug ++=
      "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"
        .split(" ").toSeq))
This left my usual invocations of test, testOnly, etc. alone, but now I could also run testDebug:testOnly ..., which would use the extra options defined above. (it probably also created testDebug:test, etc. with those extra options, which aren't useful, but oh well.)
I didn't really understand why, but one important part for me to get this to work was to use inConfig(TestDebug)(Defaults.testTasks), instead of inConfig(TestDebug)(Defaults.testSettings).
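Applied to the original question, a minimal untested sketch might look like the following, assuming your tests already run in a forked JVM (javaOptions only applies to forked processes):

// Custom configuration that reuses the normal test tasks, plus an extra system property
lazy val Pipelinetest = config("pipelinetest") extend(Test)

lazy val root = (project in file("."))
  .configs(Pipelinetest)
  .settings(inConfig(Pipelinetest)(Defaults.testTasks): _*)
  .settings(
    javaOptions in Pipelinetest += "-Dpipeline.run=run"
  )

Plain test keeps its normal options, while pipelinetest:test and pipelinetest:testOnly pick up the extra system property.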
In my case, I ran into trouble figuring out how to (a) get it to work for a multi-project build and (b) deal with the fact that our build is even weirder because it's based on a POM file, which makes the project definitions different from every example.
As usual, my issue with sbt is that I find info which seems related, but my build has some unusual aspects which make me unable to completely cargo-cult the answer; and though it seems like I need only trivial modifications, without a thorough understanding it's hard to modify the examples.

Global launch configuration in Eclipse?

This seems like a simple thing, but I can't find an answer in the existing questions:
How do you add a global argument to all your present and future run or debug configurations? In my case, I need a VM argument, but I see that this could be useful for command-line arguments as well.
Basically, every time I create a unit test I need to create a configuration (or run, which creates one), and then manually edit each one with the same VM argument. This seems silly for such a good tool.
This is not true. You can add the VM arguments to the JRE definition. This is exactly what it is for. I use it myself so that assertions are enabled and heap is 1024mb on every run, even future ones.
Ouch: a 7-year-old bug asking for a run configuration template, precisely for that kind of reason.
This thread proposes an interesting workaround, based on duplicating a placeholder configuration and using string substitution:
You can define variables in Window->Preferences->Run/Debug->String Substitution. For example you can define a projectName_log4j variable with the correct -Dlog4j.configuration=... value.
In a run configuration you can use ${projectName_log4j} and you don't have to remember the real value.
You can define a project-specific "empty" run configuration.
Set the project and the arguments fields in this configuration but not the main class. If you have to create a new run configuration for this project select this one and use 'Duplicate' from its popup-menu to copy this configuration.
You have to simply set the main class and the program arguments.
Also, you can combine both solutions: use a variable and define an "empty" run configuration which uses this variable. The great advantage in this case is that when you begin to use a different log4j config file, you only have to change the variable declaration.
Not ideal, but it may ease your process.