The ways I know about so far are:
Create an Ant build.xml file, make compile and run tasks, and include the appropriate jars in a classpath
Make an sbt project and include dependencies with version numbers in build.sbt
Make a Maven project and include dependencies in the pom.xml file
Run from the command line, setting -classpath explicitly
None of these are bad, but it feels like extra work after being babied with
import json
json.loads('[1, 2]')
and having that work right off the bat, provided I have json installed. In particular tracking down appropriate versions on Mavenhub gets a little tiresome.
Though maybe I'm just being too picky ;-)
What you want is xsbtscript: https://github.com/paulp/xsbtscript
It allows you to create a single script file which includes both the sbt config your code requires along with the Scala code itself.
I think scalas from SBT is better. Either install conscript and run this command:
cs harrah/xsbt --branch v0.10.1
Or create it by hand:
java -Dsbt.main.class=sbt.ScriptMain -Dsbt.boot.directory=/home/user/.sbt/boot -jar sbt-launch.jar "$@"
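That is, a complete scalas wrapper might look like this (a sketch; the boot directory and the path to sbt-launch.jar are assumptions you should adjust for your machine, then chmod +x it and put it on your PATH):
#!/bin/sh
# Forward all script arguments to sbt's ScriptMain.
java -Dsbt.main.class=sbt.ScriptMain \
  -Dsbt.boot.directory="$HOME/.sbt/boot" \
  -jar /path/to/sbt-launch.jar "$@"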
And then use it like this:
#!/usr/bin/env scalas
!#
/***
scalaVersion := "2.9.0-1"

libraryDependencies ++= Seq(
  "net.databinder" %% "dispatch-twitter" % "0.8.3",
  "net.databinder" %% "dispatch-http" % "0.8.3"
)
*/
import dispatch.{ json, Http, Request }
import dispatch.twitter.Search
import json.{ Js, JsObject }
def process(param: JsObject) = {
  val Search.text(txt) = param
  val Search.from_user(usr) = param
  val Search.created_at(time) = param
  "(" + time + ")" + usr + ": " + txt
}
Http.x((Search("#scala") lang "en") ~> (_ map process foreach println))
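Save that as, say, tweets.scala (a hypothetical name), make it executable, and run it directly; the first run fetches the declared dependencies:
chmod +x tweets.scala
./tweets.scala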
Paul's xsbtscript is basically a shell script that downloads and installs all the necessary components to do the same thing. It usually works well, but it has some limitations (it won't go through authenticated proxies, for instance).
I'm currently writing an sbt plugin. The tasks I define use functionality from libraries that are otherwise unrelated to sbt and that I don't want to change. These libraries use slf4j for logging. I would like the logging output to show up in the sbt console as if I had used streams.value.log, so that I can, e.g., turn debug logging on and off in the usual sbt ways like set someTask / logLevel := Level.Debug. Is there a way to do this?
It seems like sbt-slf4j is supposed to solve my problem:
https://index.scala-lang.org/eltimn/sbt-slf4j/sbt-slf4j/1.0.4?target=_2.12
But I wasn't able to get it to work. My project/build.sbt looks like this:
libraryDependencies += "com.eltimn" %% "sbt-slf4j" % "1.0.4"
resolvers += Resolver.bintrayRepo("eltimn", "maven")
and build.sbt like so:
import org.slf4j.LoggerFactory
import org.slf4j.impl.SbtStaticLoggerBinder
val foo = taskKey[Unit]("foobar")
foo := {
  SbtStaticLoggerBinder.sbtLogger = streams.value.log
  // symbolic implementation; the actual implementation lives in a library
  val logger = LoggerFactory.getLogger("foobar")
  logger.warn("Hello!")
}
But running foo does not result in a warning being printed. A warning is printed if I change it to LoggerFactory.getLogger(streams.value.log.name), but this is not an option because again, the code lives in a library.
Is there any good way to solve this?
Is it somehow possible to use an external library inside the build.sbt file?
E.g. I want to write something like this:
import scala.io.Source
import io.circe._ // not possible
version := myTask

lazy val myTask: String = {
  val filename = "version.txt"
  Source.fromFile(filename).getLines.mkString(", ")
  // do some json parsing using the circe library
  // ...
}
One of the things I actually like about sbt is that the build project is (in most ways) just another project (which is also potentially configured by a meta-build project configured by a meta-meta-build project, etc.). This means you can just drop the following line into a project/build.sbt file:
libraryDependencies += "io.circe" %% "circe-jawn" % "0.11.1"
You could also add this to plugins.sbt if you wanted, or any other .sbt file in the project directory, since the filenames (excluding the extension) have no meaning beyond human convention, but I'd suggest following convention and going with build.sbt.
Note though that sbt implicitly imports sbt.io in .sbt files, so the circe import in your build.sbt (at the root level—i.e. the build config, not the build build config) will need to look like this:
import _root_.io.circe.jawn.decode
scalaVersion := decode[String]("\"2.12.8\"").right.get
(For anyone who hasn't seen it before, the _root_ here just means "start the package hierarchy here instead of assuming io is the imported one".)
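Put together for the original use case, the build.sbt part might look something like this (a sketch under the assumptions above; version.txt is presumed to contain a JSON string such as "1.2.3"):
import _root_.io.circe.jawn.decode
import scala.io.Source

// Read version.txt and fail the build with a readable message if it isn't valid JSON.
version := {
  val raw = Source.fromFile("version.txt").getLines.mkString
  decode[String](raw).fold(err => sys.error(s"bad version.txt: ${err.getMessage}"), identity)
}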
I have a requirement where I need to download a bunch of jars from a URL, place them inside a lib directory, and then add them as unmanaged dependencies.
I am kind of stuck on how to do this in build.sbt. I went over the sbt documentation and found ProcessBuilder. With that in mind, I came up with the code below.
for (i <- jarUrls) {
  val wget = s"wget -P $libDir $anonUser#$hgRootURL/$i"
  wget !
}
This runs wget on a bunch of jars and places each file in the mentioned folder. Pretty simple code, but I am unable to run it. The error I get is "Expression of type Unit must conform to DslEntry in SBT file".
How to accomplish this?
build.sbt isn't just a Scala file; sbt does special preprocessing on it (that's why you don't have to write def project = etc.).
Your problem happens because every statement in build.sbt (except imports and definitions) must evaluate to an expression of type DslEntry, since sbt treats each one as a setting. When do you want your wget to be executed? The usual way is to define a task:
lazy val wget = taskKey[Unit]("Wget")

wget := {
  // wget each jar into libDir; jarUrls, libDir, anonUser and hgRootURL
  // are the values from the question's build
  for (i <- jarUrls) {
    val cmd = s"wget -P $libDir $anonUser#$hgRootURL/$i"
    cmd.!
  }
  ()
}
and run it like sbt wget.
You can also make wget task dependent on some other task (or you can think of them as events) in sbt.
See http://www.scala-sbt.org/0.13/docs/Tasks.html
Of course, there are tricky unsafe ways, like:
val init: Unit = {
  // any code you want here
}
But I wouldn't recommend it, since you probably want those files tied to some existing stage, say compile:

wget := {
  // your code here
}

wget <<= wget dependsOn (compile in Compile)
You can also use a regular Scala build definition instead of build.sbt: http://www.scala-sbt.org/0.13/docs/Full-Def.html
I have a Play! 2 for Scala application, and I am using Specs2 for tests. I can run all tests with the test command, or a particular specification with test-only MyParticularSpec.
What I would like to do is mark some particular specifications, or even single methods inside a specification, in order to do things like:
running all tests that are not integration (that is, that do not access external resources)
running all tests that do not access external resources in write mode (but still running the reading tests)
running all tests but a given one
and so on.
I guess something like that should be doable, perhaps by adding some annotations, but I am not sure how to go about it.
Does there exist a mechanism to selectively run some tests and not other ones?
EDIT: I have answered the test-only part myself below. Still, the command line option does not work for the test task. Following the sbt guide, I have tried to create an additional sbt configuration, like
object ApplicationBuild extends Build {
  // more settings

  lazy val UnitTest = config("unit") extend (Test)

  lazy val specs = "org.scala-tools.testing" %% "specs" % "1.6.9" % "unit"

  val main = PlayProject(appName, appVersion, appDependencies, mainLang = SCALA)
    .configs(UnitTest)
    .settings(inConfig(UnitTest)(Defaults.testTasks): _*)
    .settings(
      testOptions in UnitTest += Tests.Argument("exclude integration"),
      libraryDependencies += specs
    )
}
This works when I pass arguments without options, for instance Tests.Argument("plan"). But I was not able to find out how to pass a more complex argument. I have tried
Tests.Argument("exclude integration")
Tests.Argument("exclude=integration")
Tests.Argument("-exclude integration")
Tests.Argument("-exclude=integration")
Tests.Argument("exclude", "integration")
Tests.Argument("exclude \"integration\"")
and probably more. Still no clue what the correct syntax is.
Does anyone know how to pass arguments with options to specs2 from sbt?
First, following the specs2 guide, one must add tags to the specifications, like this:
class MySpec extends Specification with Tags {
  "My spec" should {
    "exclude this test" in {
      true
    } tag ("foo")

    "include this one" in {
      true
    }
  }
}
The command line arguments to include are documented here
Then one can selectively include or exclude tests with
test-only MySpec -- exclude foo
test-only MySpec -- include foo
You can also use the following, without any change to your build:
test-only * -- exclude integration
Tested in Play 2.1-RC3
If you want to pass several arguments, you can add several strings to one Tests.Argument:
testOptions in Test += Tests.Argument("include", "unit")
There are examples of this in the specs2 User Guide here and in the Play documentation there.
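Applied to the question's integration tag, that would look something like this (a sketch; the TestFrameworks.Specs2 prefix scopes the arguments to specs2 and can be dropped if specs2 is your only test framework):
testOptions in Test += Tests.Argument(TestFrameworks.Specs2, "exclude", "integration")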
I'm using Play 2.2, and there are two ways to do this depending on whether or not you are in the Play console.
From the console type: test-only full.namespace.to.TestSpec
From the terminal type: play "test-only full.namespace.to.TestSpec"
I came across this question while trying to figure out how to do something similar for ScalaTest with Play. SBT has detailed documentation on how to configure additional test configurations, but it could use a bit of tweaking for Play.
Apart from the subtly different Project configuration, I found that I wanted to crib a bunch of the test settings from PlaySettings. The following runs and generates an IntelliJ project with integration test sources in the "/it" directory. I may still be missing reporting and lifecycle hooks.
object BuildSettings {
  // required for ScalaTest. See http://stackoverflow.com/questions/10362388/using-scalatest-in-a-playframework-project
  def testSettings = Seq(
    testOptions in Test := Nil
  )

  // variously cribbed from https://github.com/playframework/Play20/blob/master/framework/src/sbt-plugin/src/main/scala/PlaySettings.scala
  // (returned as a Seq: a plain block would silently keep only its last setting)
  def itSettings = Seq(
    sourceDirectory in IntegrationTest <<= baseDirectory / "it",
    scalaSource in Test <<= baseDirectory / "it",
    libraryDependencies += "play" %% "play-test" % play.core.PlayVersion.current % "it"
  )
}
object ApplicationBuild extends Build {
  val main = play.Project(appName, appVersion, Dependencies.dependencies)
    .configs(IntegrationTest)
    .settings(Dependencies.resolutionRepos)
    .settings(BuildSettings.testSettings: _*)
    .settings(Defaults.itSettings: _*)
    .settings(BuildSettings.itSettings: _*)
}
Is there a way to tell sbt to package all needed libraries (scala-library.jar) into the main package, so it is stand-alone? (static?)
Edit 2011:
Since then, retronym (who posted an answer on this page back in 2010) made the sbt plugin "sbt-onejar", now at its new address on GitHub, with docs updated for SBT 0.12.
Packages your project using One-JAR™
onejar-sbt is a simple-build-tool plugin for building a single executable JAR containing all your code and dependencies as nested JARs.
Currently One-JAR version 0.9.7 is used. This is included with the plugin, and need not be separately downloaded.
Original answer:
Directly, this is not possible without extending sbt (a custom action after the model of the "package" sbt action).
GitHub mentions an assembly task, custom made for jetty deployment. You could adapt it for your needs though.
The code is pretty generic (from this post, and user Rio):
project/build/AssemblyProject.scala
import sbt._

trait AssemblyProject extends BasicScalaProject {
  def assemblyExclude(base: PathFinder) = base / "META-INF" ** "*"
  def assemblyOutputPath = outputPath / assemblyJarName
  def assemblyJarName = artifactID + "-assembly-" + version + ".jar"
  def assemblyTemporaryPath = outputPath / "assembly-libs"
  def assemblyClasspath = runClasspath
  def assemblyExtraJars = mainDependencies.scalaJars

  def assemblyPaths(tempDir: Path, classpath: PathFinder, extraJars: PathFinder, exclude: PathFinder => PathFinder) = {
    val (libs, directories) = classpath.get.toList.partition(ClasspathUtilities.isArchive)
    for (jar <- extraJars.get ++ libs) FileUtilities.unzip(jar, tempDir, log).left.foreach(error)
    val base = (Path.lazyPathFinder(tempDir :: directories) ##)
    (descendents(base, "*") --- exclude(base)).get
  }

  lazy val assembly = assemblyTask(assemblyTemporaryPath, assemblyClasspath, assemblyExtraJars, assemblyExclude) dependsOn (compile)

  def assemblyTask(tempDir: Path, classpath: PathFinder, extraJars: PathFinder, exclude: PathFinder => PathFinder) =
    packageTask(Path.lazyPathFinder(assemblyPaths(tempDir, classpath, extraJars, exclude)), assemblyOutputPath, packageOptions)
}
It takes a bit of work, but you can also use Proguard from within SBT to create a standalone JAR.
I did this recently in the SBT build for Scalala.
Working off of what @retronym offered above, I built a simple example that builds a stand-alone jar which includes the Scala libraries (i.e. scala-library.jar) using Proguard with sbt. Thanks, retronym.
The simplest example using sbt-assembly
Create a directory named project in your project's root dir, with a file assembly.sbt containing
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")
In build.sbt:
import AssemblyKeys._ // put this at the top of the file

assemblySettings

jarName in assembly := "myjarnameall.jar"

libraryDependencies ++= Seq(
  "exampleofmydependency" % "mydep" % "0.1"
)

mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) =>
  {
    case s if s.endsWith(".class") => MergeStrategy.last
    case s if s.endsWith("pom.xml") => MergeStrategy.last
    case s if s.endsWith("pom.properties") => MergeStrategy.last
    case x => old(x)
  }
}
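With that, building and running the self-contained jar looks like this (the output path is indicative; the scala-2.x directory depends on your scalaVersion):
sbt assembly
java -jar target/scala-2.10/myjarnameall.jar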
The simplest method is just to create the jar from the command line. If you don't know how to do this, I would highly recommend that you learn. Automation is useful, but it's much better if you know what the automation is doing.
The easiest way to automate the creation of a runnable jar is to use a bash script, or a batch script on Windows.
The simplest way in sbt is just to add the Scala libraries you need to the resource directories:
unmanagedResourceDirectories in Compile := Seq(file("/sdat/bins/ScalaCurrent/lib/scalaClasses"))
So in my environment, ScalaCurrent is a link to the current Scala library (2.11.4 at the time of writing). The key point is that I extract the Scala library jar but place its contents inside the scalaClasses directory. Each extracted library needs to go into this top-level directory.
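The extraction step itself might look like this (a sketch; the paths are illustrative):
mkdir -p /sdat/bins/ScalaCurrent/lib/scalaClasses
cd /sdat/bins/ScalaCurrent/lib/scalaClasses
jar xf ../scala-library.jar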