I'm trying to use sbt-native-packager to produce a Docker image of my Scala Play app. I followed the steps described at http://www.scala-sbt.org/sbt-native-packager/formats/docker.html
This is my configuration:
In my plugins.sbt I added the dependency for sbt-native-packager:
// SBT Native
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.2.1")
In my build.sbt I added the plugins for Universal and Docker:
.enablePlugins(PlayScala, JavaAppPackaging)
I also added some extra properties:
javaOptions in Universal ++= Seq(
// JVM memory tuning
"-J-Xmx1024m",
"-J-Xms512m",
// Since play uses separate pidfile we have to provide it with a proper path
// name of the pid file must be play.pid
s"-Dpidfile.path=/var/run/${packageName.value}/play.pid",
// Use separate configuration file for production environment
s"-Dconfig.file=/usr/share/${packageName.value}/conf/production.conf",
// Use separate logger configuration file for production environment
s"-Dlogger.file=/usr/share/${packageName.value}/conf/logback.xml"
)
// exposing the play ports
dockerExposedPorts in Docker := Seq(9000, 9443)
Then I generate the Docker image using the plugin from the sbt CLI:
docker:publishLocal
The Dockerfile gets generated at ./target/docker/Dockerfile.
When I inspect the file I see:
FROM openjdk:latest
WORKDIR /opt/docker
ADD opt /opt
RUN ["chown", "-R", "daemon:daemon", "."]
USER daemon
ENTRYPOINT ["bin/root"]
CMD []
which doesn't seem to contain all the steps necessary to run the app. When I use docker build . I get:
java.nio.file.NoSuchFileException: /var/run/root/play.pid
It seems like the Dockerfile is missing steps that should mkdir /var/run/{APP_NAME}/ (i.e. create the folder inside the Docker container instance) and chown that folder so that Play can create the PID file.
How do I fix the above error?
What's the error message when starting the docker image and how do you start it?
Then there are a couple of notable things.
Play ships with native-packager
You shouldn't have to add any plugin, just configure the Docker-relevant settings. You already linked the correct documentation for the package format (docker).
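A quick way to check this is to list the auto-plugins sbt has enabled for your project; this is a generic sbt command, not specific to the documentation linked above:

```shell
# list the plugins enabled for the current project;
# DockerPlugin / JavaAppPackaging should already appear when PlayScala is enabled
sbt plugins
```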
Archetypes vs. Formats
Your configuration won't work without the play plugin. Take a look at http://www.scala-sbt.org/sbt-native-packager/archetypes/java_app/index.html which explains how to configure a simple application build.
I'd also recommend reading the Formats and Archetypes section here:
http://www.scala-sbt.org/sbt-native-packager/introduction.html#archetype-plugins
Native docker build
Native-packager currently generates two Dockerfiles, which is confusing; sorry for that. We plan to remove the redundant Dockerfile.
Simply go one level deeper and run the docker build command there.
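In other words, something like the following, assuming the stage directory layout produced by docker:stage (the image tag myapp is just a placeholder):

```shell
# the Dockerfile that sits next to the staged binaries is the one to use
cd target/docker/stage
docker build -t myapp .
```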
Hope that helps,
Muki
Related
I am trying to dockerize a REST API built with Scala Play so I can deploy it to AWS EC2 or Kubernetes.
Following this guide:
https://guilhebl.github.io/scala/backend/docker/play/2017/08/23/scala-play-docker-sbt-native-packager-example/
it failed using this sequence of commands:
sbt playGenerateSecret
sbt dist
sbt docker:publishLocal
docker run -p 9000:9000 -e APPLICATION_SECRET="token from above" play-scala-rest-api-example:1.0-SNAPSHOT
with this official example app:
https://github.com/playframework/play-samples/tree/2.8.x/play-scala-rest-api-example
Other teams need a Dockerfile to deploy the app to AWS EC2 with Jenkins, but right now it seems like a Dockerfile is not generated until sbt docker:publishLocal runs.
It would be better if we could find an official tutorial showing how to do this.
Answer from mfirry:
sbt docker:stage
from https://www.scala-sbt.org/sbt-native-packager/formats/docker.html#tasks
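So for the Jenkins/EC2 use case above, sbt can just stage the Dockerfile and Jenkins can run the actual build; a rough sketch, assuming the target/docker/stage layout and reusing the image name from the question:

```shell
sbt docker:stage    # writes the Dockerfile and app files under target/docker/stage
docker build -t play-scala-rest-api-example:1.0-SNAPSHOT target/docker/stage
docker run -p 9000:9000 -e APPLICATION_SECRET="..." play-scala-rest-api-example:1.0-SNAPSHOT
```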
I have a Dockerfile with:
FROM amazonlinux:2017.03
ENV SCALA_VERSION 2.11.8
ENV SBT_VERSION 0.13.13
# Install Java8
RUN yum install -y java-1.8.0-openjdk-devel
# Install Scala and SBT
RUN yum install -y https://downloads.lightbend.com/scala/2.11.8/scala-2.11.8.rpm
RUN yum install -y https://dl.bintray.com/sbt/rpm/sbt-0.13.13.rpm
RUN sbt sbtVersion
COPY . /root
WORKDIR /root
# Exposing port 80
EXPOSE 80
RUN sbt compile
CMD sbt run
And an sbt configuration file with:
name := "hello"
version := "1.0"
scalaVersion := "2.11.8"
libraryDependencies += "com.fasterxml.jackson.core" % "jackson-databind" % "2.9.5"
libraryDependencies += "com.fasterxml.jackson.module" % "jackson-module-scala_2.11" % "2.9.5"
Each time I build the Docker container, sbt downloads the Jackson library anew. How can I speed up this process? Maybe I can execute part of the sbt build before compilation.
Before I added RUN sbt sbtVersion to the Dockerfile, sbt downloaded itself completely on every build; after adding that command it is cached and no longer runs each time I build the container.
Maybe there is a similar caching trick in Docker for the libraries sbt downloads?
First, you don't need to install the Scala RPM, as SBT itself downloads Scala for you (whichever version is configured in your build).
Second, each RUN command creates a new layer, which you usually want to avoid. Combine them:
RUN cmd1 \
&& cmd2 \
&& cmd3
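Applied to the Dockerfile above, the yum invocations could be collapsed into a single layer; shown here as the shell command that would go into one RUN instruction (the Scala RPM is omitted, per the first point, and yum clean all is an added suggestion to keep the layer small):

```shell
# install Java and sbt in one layer, then drop the yum cache
yum install -y java-1.8.0-openjdk-devel \
  && yum install -y https://dl.bintray.com/sbt/rpm/sbt-0.13.13.rpm \
  && yum clean all
```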
Why do you want to build an image for each of your builds? That seems wasteful. Usually you build your artifacts outside of a Docker image and only package up the results.
My advice would be to use the sbt-native-packager plugin with its Docker integration to build an image from your artifacts after you have built them. That way you would only need a JRE in your image, not a JDK and not SBT. Also, you would not need to wait for SBT to initialize when starting your image.
You could use multi-stage builds if you have a new Docker version installed.
You can refer to Travis CI's solution, in .travis.yml:
# These directories are cached to S3 at the end of the build
cache:
directories:
- $HOME/.ivy2/cache
- $HOME/.sbt/boot/
Travis backs up the above two folders to S3; then, every time before a rebuild, it fetches the two folders from the cache.
So a possible way to avoid downloading the Jackson library again is to separate out the .ivy2 and .sbt folders and put them in something like a cache.
For your situation, I think the best solution is to store these in a base image. The base image's Dockerfile could simply be the same as the one you are using now (of course you can simplify it; the point is just that the Jackson library declared in build.sbt ends up in .ivy2). Then tag it, e.g. as mybaseimage:v1.
Your new project's Dockerfile can then be based on mybaseimage:v1; since mybaseimage already has the library in .ivy2 and .sbt, there is no need to download Jackson again every time you build your project's Dockerfile.
The solution may be ugly, but I think it can act as a workaround, just FYI.
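The two-step build described above might look roughly like this (the tag mybaseimage:v1 comes from the answer; Dockerfile.base and myproject are illustrative names):

```shell
# 1. build the base image once; its Dockerfile runs sbt so that
#    the .ivy2 and .sbt directories end up populated inside the image
docker build -t mybaseimage:v1 -f Dockerfile.base .

# 2. the project's Dockerfile starts with "FROM mybaseimage:v1",
#    so rebuilding the project no longer re-downloads Jackson
docker build -t myproject:latest .
```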
I have a scala SBT project where I'm using the native packager plugin. I'm bundling as a JavaServerAppPackaging and would like to generate scripts for automatically registering the application for startup and shutdown with rc.d scripts (Amazon Linux).
In my plugins.sbt:
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.2.0-M5")
In build.sbt
lazy val server =
DefProject("some/server", "server")
.settings(serverModuleDeps)
.settings(ServerSettings.allSettings: _*)
.settings(CloudFormation.defaultSettings: _*)
.settings(serverLoading in Universal := Option(ServerLoader.SystemV))
.settings(serviceAutostart:=true)
.settings(startRunlevels:=Option("3"))
.settings(stopRunlevels:=Option("3"))
.settings(stackRegion := "US_WEST_2")
.settings(codedeployAWSCredentialsProvider := Option(new ProfileCredentialsProvider("devcredentialsprovider")))
.dependsOn(sharedJvm)
.dependsOn(langJVM)
.enablePlugins(JavaServerAppPackaging, SystemVPlugin)
.settings(daemonUser:="ec2-user")
.configure(InBrowserTesting.jvm)
When I run sbt stage I can see a universal folder containing a bin folder with an sh and a cmd file to start the application. However, there are no scripts to register/start the application as a system service. Is there any additional configuration required to have the plugin generate scripts for registering the application? What am I missing?
I have a created a basic project to demonstrate the issue: https://github.com/MojoJojo/sbt-native-packager-test
Your configuration is correct. Your sbt command isn't :)
Running packageBin (which IIRC triggers universal:packageBin) generates only a universal zip file. A systemloader is an operating-system-specific part; that's why it's not included in a universal zip.
Generate a debian or rpm file with
debian:packageBin
rpm:packageBin
The generated deb or rpm package will have the systemloader files included, because they are in the places an rpm/debian-based system would expect them.
Related issue: https://github.com/sbt/sbt-native-packager/issues/869
I have a question about seemingly unnecessary recompilations by SBT. I have the following scenario: I'm running SBT inside a Docker container, attaching the volume with my application source code to the container and launching the container with sbt as the entry point. If I continuously run SBT inside that container, it doesn't recompile the whole app, which is good.
However, if I start SBT natively on OS X, it does a full recompilation. If after that I start it again inside Docker, it again does a full recompilation. It takes very long and is really annoying. What could be the reason for this behavior?
Here's how I launch SBT in the container:
docker run --name=bla -it --net=host -v /Users/me/.ivy2:/tmp/.ivy2 \
-v /Users/me/.aws/config:/root/.aws/config \
-v /Users/me/.sbt:/root/.sbt \
-v /Users/me/projects/myapp:/src 01ac0b888527 \
/bin/sh -c 'sbt -Dsbt.ivy.home=/tmp/.ivy2 -Divy.home=/tmp/.ivy2 -jvm-debug 5005 -mem 3072'
My Java, Scala and SBT versions are the same on the host and in the container. Concretely: Scala 2.11.8, Java 1.8.0_77, SBT 0.13.11
Okay, after a day of debugging I've found the way around this problem.
SBT invalidates the compiled classes mainly based on the following rules:
Canonical path
Last modification date
That is, paths and modification dates have to be exactly the same for:
Source code files
Ivy dependencies jars
JRE's jars
The first two points are quite easy to achieve, because it's just a matter of mapping Docker volumes. The crucial thing is to map to exactly the same path as on the host machine. For example, if you work on OS X as I do, the path to your project sources probably looks like this: /Users/<username>/projects/bla, so in your docker run command you have to do something like:
docker run ... -v /Users/<username>/projects/bla:/Users/<username>/projects/bla ...
You don't have to care about timestamps for the sources and ivy jars, because they will be exactly the same (they are the same files).
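Putting the first two points together, the docker run invocation from the question would change to mount both the sources and the ivy cache at the same paths as on the host; a sketch, reusing the image id and paths from the question (adjust to your setup):

```shell
# mount source tree and ivy cache at identical host/container paths
docker run --name=bla -it --net=host \
  -v /Users/me/.ivy2:/Users/me/.ivy2 \
  -v /Users/me/projects/myapp:/Users/me/projects/myapp \
  -w /Users/me/projects/myapp \
  01ac0b888527 \
  sbt -Dsbt.ivy.home=/Users/me/.ivy2 -Divy.home=/Users/me/.ivy2
```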
Where you do have to care about timestamps is the JRE. I build the Docker image with the JRE baked in (using the sbt-docker plugin), so I ended up reading the modification dates of the local JRE libs and setting the same dates inside the image:
new mutable.Dockerfile {
  ...
  // read the host JRE jars' modification dates
  // (needs java.io.File and java.util.Date in scope)
  val hostJreTimestamp = new Date(new File(javaHome + "/jre/lib/rt.jar").lastModified()).toString
  val hostJceTimestamp = new Date(new File(javaHome + "/jre/lib/jce.jar").lastModified()).toString
  // re-apply those dates to the jars inside the image so SBT sees identical timestamps
  runRaw(s"""touch -d "$hostJreTimestamp" $javaHome/jre/lib/rt.jar""")
  runRaw(s"""touch -d "$hostJceTimestamp" $javaHome/jre/lib/jce.jar""")
  ...
}
And, of course, the JRE should also be installed at exactly the same path as on the host, which might be problematic if you installed Java from an RPM, for example. I ended up downloading the server JRE (which is distributed as a .tar.gz) and extracting it to the right path manually.
So, long story short, it worked in the end: no recompilation, no long waiting time. I was able to find the relevant information from two main sources: the SBT source code, particularly this function: https://github.com/sbt/sbt/blob/0.13/compile/inc/src/main/scala/sbt/inc/IncrementalCommon.scala#L271, and enabling SBT debug output in build.sbt:
logLevel := Level.Debug
incOptions ~= { _.copy(apiDebug = true, relationsDebug = true) }
(prepare for a lot of output)
This is just a guess. As rumoku suggests, it may be an issue with repositories, but I think it's related to SBT itself. From SBT's point of view you are running two different machines, and it must recompile everything when it thinks the files have changed.
I don't know how SBT or the compiler identifies the versions of Scala and Java, but it may be that even though you have the exact same versions of Java and Scala in both environments, SBT thinks they are different. That would make sense, as they are different OSes.
I know that if I add withSources when defining a dependency, sbt can download that sources jar automatically.
For example,
val specs = "org.scala-tools.testing" % "specs_2.8.1" % "1.6.6" % "test" withSources ()
But scala-library.jar and scala-compiler.jar don't need to be defined explicitly, so how can I get sbt to download their sources for me? Then I wouldn't need to configure them manually after generating an IDEA project with sbt-idea-plugin.
You have to change the boot properties. There is a nice description in a recent post on Mathias's blog, decodified:
"How to make SBT download scala library sources" (building on @hseeberger's key starting points)
Here is the relevant part (in case that link ever goes stale)
First, forget about trying to find some “hidden” setting in your SBT project definition enabling Scala library source download! It does not exist (at least not in SBT version 0.7.x).
Rather, there are these two things you need to do in order to whip SBT into submission:
Create an alternative configuration file for your SBT launcher.
Make the SBT launcher use it.
These are the steps in detail:
Find your sbt-launcher-0.7.x.jar file.
Since I'm on OS X and use SBT via Homebrew, mine lives at /usr/local/Cellar/sbt/0.7.5.RC0/libexec/sbt-launch-0.7.5.RC0.jar.
Extract sbt.boot.properties from the sbt subdirectory in the launcher jar.
Fire up your favorite editor and change line 3 to classifiers: sources (i.e. uncomment the line).
Find the sbt script file you created during your SBT setup (e.g. ~/bin/sbt, or, when using Homebrew, /usr/local/Cellar/sbt/0.7.x/bin/sbt).
Add the path to your tweaked sbt.boot.properties file, prefixed with an '@' character and in double quotes, as the second-to-last argument of the java call.
This is what my sbt script file looks like:
#!/bin/sh
java -Xmx768M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256m \
-jar /usr/local/Cellar/sbt/0.7.5.RC0/libexec/sbt-launch-0.7.5.RC0.jar \
"@/usr/local/Cellar/sbt/0.7.5.RC0/libexec/sbt.boot.properties" \
"$@"
Once you have completed these steps SBT should happily download the scala-...-sources.jar files for the Scala compiler and standard library for any new project you create.
To have SBT do this for an existing project, you have to manually delete the project/boot/scala-{version} directory before performing an ‘sbt update’ (SBT does not fetch additional source artifacts if the main jar is already present).
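For an existing project that would look something like this (the exact scala-{version} directory name depends on the Scala version your project uses):

```shell
# delete the cached boot directory for your project's Scala version,
# then re-run update so the source jars are fetched
rm -rf project/boot/scala-2.8.1   # adjust to your project's Scala version
sbt update
```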
Once you have a custom sbt.boot.properties file, there are also other ways to supply it to the SBT launcher.
See SO question "how do I get sbt to use a local maven proxy repository (Nexus)?"
Based on Michael Slinn's comments:
If you are using sbt 0.11.x or above, use this command:
sbt update-sbt-classifiers
Two pieces of information.
(1) SBT Documentation
http://www.scala-sbt.org/0.13.5/docs/Detailed-Topics/Library-Management.html
and I quote:
"To obtain particular classifiers for all dependencies transitively, run the updateClassifiers task. By default, this resolves all artifacts with the sources or javadoc classifier."
This means you should not need to do anything, but you can make it explicit by putting this in your build.sbt:
transitiveClassifiers := Seq("sources", "javadoc")
To actually have SBT download the sources, run:
sbt updateClassifiers
(2) If you are working with the Scala IDE for Eclipse (most likely you are, as plugin development is much more active for Eclipse than for NetBeans), you should configure sbteclipse to fetch the sources with the following setting:
EclipseKeys.withSource := true
Here is the documentation you should read:
https://github.com/typesafehub/sbteclipse/wiki/Using-sbteclipse