What is the best way to make an sbt project work in an offline environment?
Is there any way to compile it outside (in a networked environment), e.g. as an uber jar with all its dependencies, and then just run it in the offline environment?
For example, with Java and Maven we can build an uber jar with all dependencies
and then, in the offline environment, just run java -jar MyJar.jar ...
Thanks.
Your best bet for creating an uber/fat jar is the sbt-assembly plugin. Include it in your project by putting something like the following in project/assembly.sbt:
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.6")
Then just running sbt assembly should create a jar with all the dependencies in target/scala-$SCALA_VERSION. You may need to designate a main class for the jar if you want to run it with java -jar (as opposed to java -cp $jarfile $mainclass):
mainClass in assembly := Some("package.mainClass")
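With that in place, the offline workflow is just: build the jar where you have network access, copy it over, and run it. A sketch (the jar name and Scala version directory are illustrative; they depend on your project settings):
$ sbt assembly
$ scp target/scala-2.11/myproject-assembly-1.0.jar offline-host:
# on the offline machine:
$ java -jar myproject-assembly-1.0.jar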
Related
How can I run a Scala script inside an sbt project so that it can access all classes of the sbt project, as well as the Typesafe config? Basically I want the script to run in a similar way to the sbt console.
One option is to assemble a jar using sbt-assembly.
You would need to add the following to a .sbt file in your project directory:
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")
and add at least these two lines to your build file:
assemblyJarName in assembly := "something.jar"
mainClass in assembly := Some("com.example.Main")
Then you can run the assembly task from sbt; this will build an executable "fat" jar with all your dependencies and configuration.
You can use the launcher and the command system to implement an interactive application with autocomplete etc.
Here is a tutorial:
http://www.scala-sbt.org/0.13/docs/Command-Line-Applications.html
You have to invoke the application separately, though; I don't think it is possible to run it directly from the sbt prompt within the application directory.
Apparently project dependencies are not being packaged into the jar generated by:
sbt package
How can dependencies be included?
Well, I use the sbt-assembly plugin to create a jar with dependencies:
(1) Add sbt-assembly to project/assembly.sbt:
echo 'addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.8")' > project/assembly.sbt
(2) run sbt clean assembly to build the jar.
It will create ${name}-assembly-${version}.jar in target/scala-${scalaVersion}
(3) Only if you hit the infamous deduplicate error, use assemblyMergeStrategy as described here.
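For reference, a merge-strategy override typically looks like the following (this is the example pattern from the sbt-assembly docs; adjust the cases to whichever paths actually conflict in your build):
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}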
There's a project called One-JAR that will package a project and all its dependencies into a single jar file. There is an sbt plugin as well:
https://github.com/sbt/sbt-onejar
However if you're just looking to create a standard package (deb, rpm, etc.) there is sbt-native-packager:
https://github.com/sbt/sbt-native-packager
It can place all your dependencies into a Linux package and add the appropriate wrappers to load all your dependencies and start your program or service.
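A minimal sketch of that route (the plugin version is an assumption; check the project page for the current one):
// project/plugins.sbt
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.3.4")
// build.sbt
enablePlugins(JavaAppPackaging)
Then sbt debian:packageBin (or rpm:packageBin, universal:packageBin) produces a package with a launcher script that sets up the classpath and starts your program.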
I am trying to generate several jars from a single project with sbt assembly, each containing some of the dependencies.
So far I have found only this QA that is close to what I am looking for. However, I don't need separate configs; basically, when I run assembly, I just want to generate all the different jars.
To be more concrete, I want to generate:
One jar with my code and some general dependencies
One jar with the Hadoop dependencies <- this is the problem, as I don't know how to say: generate another jar that contains only those dependencies.
One jar with the Scala library
Without going deep into complex sbt configurations, you could try another approach: the Hadoop dependencies being standard, you can mark them as provided in your build to exclude them from the assembly:
"org.apache.hadoop" % "hadoop-client" % "2.6.0" % "provided"
For Scala, the library jar is also standard and can be downloaded separately by your "user". To remove it from the fat jar, use the following setting (sbt-assembly 0.13.0):
assemblyOption in assembly := (assemblyOption in assembly).value.copy(includeScala = false)
The user of your fat jar is then asked to provide both the Scala and Hadoop libraries on the classpath.
For example, when using Spark this is the correct approach, as these two libraries are both provided by the Spark runtime environment. The same logic applies to the Hadoop MapReduce environment.
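On the target machine, the launch command would then look something like this (the jar name and main class are illustrative; hadoop classpath is the standard Hadoop command that prints the locally installed Hadoop jars):
$ java -cp "myapp-assembly-1.0.jar:$SCALA_HOME/lib/scala-library.jar:$(hadoop classpath)" com.example.Main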
This is for Scala 2.11.1 and sbt 0.13.5.
Say I have a Scala/sbt project with the following directory structure:
root/
    build.sbt
    src/ ..
    project/
        plugins.sbt
        build.properties
        LolUtils.scala
and I want to use some external library in LolUtils.scala. How is this generally accomplished in sbt?
If I simply add the libs I need to build.sbt via libraryDependencies += .. then they aren't found and compilation fails on the import line with not found: object ...
If I add a separate project/build.sbt, for some reason sbt starts failing to resolve my plugins; plus I need to manually specify the Scala version in the nested project/build.sbt, which is unnecessary duplication.
What's the best way to accomplish this?
sbt is recursive, which means that it uses itself to compile the build definition, i.e. the *.sbt files and the *.scala files under the project directory. To add extra dependencies for use in the build definition, you have to declare them in project/build.sbt.
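For example, to make a library available to the build definition (the library shown here is just a placeholder):
// project/build.sbt
libraryDependencies += "org.apache.commons" % "commons-lang3" % "3.3.2"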
There is one caveat to that. You can set any scalaVersion for your project, that is, in build.sbt, but you should not modify scalaVersion in project/build.sbt, as it might conflict with the version sbt itself uses (and that may or may not lead to binary incompatibility for plugins).
sbt 0.13.5 uses Scala 2.10.4, and any library you use in the build definition must be compatible with that particular version of Scala:
> about
[info] This is sbt 0.13.5
...
[info] sbt, sbt plugins, and build definitions are using Scala 2.10.4
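Once the dependency is declared in project/build.sbt, code under project/ can import it like any other library. Continuing the placeholder example above:
// project/LolUtils.scala
import org.apache.commons.lang3.StringUtils

object LolUtils {
  def pretty(s: String): String = StringUtils.capitalize(s.trim)
}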
I'm having a good time learning Scala, but I'm having the hardest time grasping how to set up a development environment.
In Ruby
File hierarchy
my_app/
|
+-- Gemfile
+-- app.rb
Gemfile
source :rubygems
gem "mechanize"
app.rb
require "mechanize"
agent = Mechanize.new
page = agent.get("http://google.com")
Install dependencies and run it
$ bundle install
$ ruby app.rb
What's the Scala equivalent with sbt?
I'm reading about sbt and how packages/imports/jar dependencies work in Java/Scala, but I can't seem to filter out the bare bones necessities.
What's the minimal file hierarchy to replicate the above with Scala?
Here's the Java Mechanize lib available on Maven: http://search.maven.org/#search|ga|1|mechanize
Once you run sbt and download the Mechanize dependencies, how do you discern the necessary import statements you need to get this to work?
val agent = new MechanizeAgent
val page: HtmlDocument = agent.get("http://www.google.com")
I got the above working in Eclipse by manually importing the .jars and then importing packages from the libraries until the compiler/runtime errors stopped and the agent worked. But that experience was discouraging and I've come here to repent.
Intent of this question: The Java ecosystem/workflow is overwhelming to me as someone that's used to Ruby's effortless, IDEless workflow. I think a bare bones equivalent would give me a place to start building upon.
Ideally, I'd like to get Scala development working with just Vim and the command line before becoming dependent on Eclipse.
sbt uses a library called Ivy to fetch dependencies from Maven repositories. It is preconfigured to work with a few repositories, including Maven Central.
Once these libraries are "resolved" (downloaded to your computer and hooked up to your project), the Eclipse plugin will add each jar as a dependency in the generated Eclipse project.
Here is how you configure it.
sbt Managed Dependencies
http://www.scala-sbt.org/release/docs/Getting-Started/Basic-Def.html#adding-library-dependencies
Add a dependency in your project's build.sbt file. If a dependency is built against a specific version of Scala, use a double percent sign (%%) between the group and artifact name so sbt selects the right artifact. Don't forget to add an empty line between each setting in your build.sbt file.
libraryDependencies += "com.gistlabs" % "mechanize" % "0.11.0"
libraryDependencies += "org.scalatest" %% "scalatest" % "1.6.1" % "test"
Update the dependencies by running the update command:
$ sbt update
sbt Eclipse Plugin
https://github.com/typesafehub/sbteclipse/wiki/Installing-sbteclipse
You can install the sbt eclipse plugin globally by creating a file at ~/.sbt/plugins/plugins.sbt and putting this line into it:
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.1.0")
Whenever you add or update a dependency, run the following command and refresh your eclipse project:
$ sbt eclipse
I'd like to go a step further than ffk's answer, do much more hand-holding, and actually provide the direct translation of the Ruby example to Scala + sbt.
File hierarchy
Crawler/
+- build.sbt
+- src/
   +- main/
      +- scala/
         +- Crawler.scala
build.sbt
libraryDependencies += "com.gistlabs" % "mechanize" % "0.11.0"
Crawler.scala
import com.gistlabs.mechanize.MechanizeAgent
import com.gistlabs.mechanize.document.Document
object Crawler extends App {
  val agent = new MechanizeAgent
  val page: Document = agent.get("http://google.com")
}
Install dependencies and run it
$ sbt run
To make the project importable into Eclipse or IntelliJ, you need the sbteclipse-plugin or sbt-idea plugin. But rather than having to declare these plugins in project/plugins.sbt for every new project, you can declare them in a global build.sbt:
// in ~/.sbt/plugins/build.sbt
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.1.0")
addSbtPlugin("com.github.mpeltonen" % "sbt-idea" % "1.2.0")
Then, back in your Scala app's root directory:
$ sbt eclipse
or
$ sbt gen-idea
Afterwards, you should be able to open it in the respective IDE.
Note: Whenever you add dependencies in your build.sbt, you'll need to rerun the sbt eclipse/gen-idea command so the IDE can pick them up.