xsbt-web-plugin: Running the web servlet container outside of sbt? - scala

I'm using the xsbt-web-plugin to host my servlet. It works fine using container:start.
I now need it to run in the background, like a daemon, even if I hang up, and ideally even if the machine reboots. I'd rather not have to invoke sbt.
I know that the plugin can package a WAR file, but I'm not running Tomcat or anything like that. I just want to do what container:start does, but in a more robust (read: noninteractive) way.
(My goal is a dev demo: I'd hate for my ssh session to drop and take sbt with it, or something similar, while people are using the demo. But we're not ready for production yet and have no servlet infrastructure.)

xsbt-web-plugin is really not meant to act as a production server (with features like automatic restarting, fault recovery, running on boot, etc.); however, I understand the utility of using it this way for small-scale development purposes.
You have a couple of options:
First approach
Run sbt in a screen session, which you can detach from and reattach to at will without interrupting sbt.
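For example (the session name is arbitrary, and these are standard GNU screen commands rather than anything specific to the plugin): start a session with screen -S demo, run sbt container:start inside it, detach with Ctrl-a d, and reattach later with screen -r demo. sbt and the container keep running while you are detached.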
Second approach
Override the shutdown function that triggers on sbt's exit hook, so that the container keeps running after sbt stops.
For this approach, add the following setting to your sbt configuration:
build.sbt:
onLoad in Global := { state => state }
Note that this will override the onLoad setting entirely, so in the (unlikely) case that you have it configured to do other important things, they won't happen.
Now you can launch your container either by running container:start from sbt and then exiting sbt, or simply by running sbt container:start from the command line, which will return after forking the container JVM. Give it a few seconds, then you should be able to make requests to localhost:8080.
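Putting the second approach together, a minimal build.sbt sketch might look like this (the JettyPlugin line assumes xsbt-web-plugin 2.x with Jetty as the container; adjust it for your plugin version and container):
enablePlugins(JettyPlugin)

// keep the forked container JVM alive after sbt exits
onLoad in Global := { state => state }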

Related

On Mac, how do I start a spark-shell in the same environment as the one running in an IntelliJ project?

I am working on a Spark project using Scala and Maven, and sometimes I feel it would be very helpful if I could run the project in an interactive mode.
My question is whether it is possible (and how) to bring up a Spark environment in the terminal that is the same as the environment running in an IntelliJ project.
Or even better (if it is possible): start a REPL environment, under the IntelliJ debugger, while the code is paused at a breakpoint, so we can continue to play with all the variables and instances created so far.
Yes, it is possible, though not very straightforward. I first build a fat JAR using the sbt-assembly plugin (https://github.com/sbt/sbt-assembly) and then use a debug configuration like the one below to start it in the debugger. Note that org.apache.spark.deploy.SparkSubmit is used as the main class, not your application's main class. Your app's main class is specified in the --class parameter instead.
It is a bit tedious to have to create the app JAR file before starting each debug session (if sources have changed). I couldn't get SparkSubmit to work directly with the class files compiled by IntelliJ. I'd be happy to hear about alternative ways of doing this.
*Main class:*
org.apache.spark.deploy.SparkSubmit
*VM Options:*
-cp <SPARK_DIR>/conf/:<SPARK_DIR>/jars/* -Xmx6g -Dorg.xerial.snappy.lib.name=libsnappyjava.jnilib -Dorg.xerial.snappy.tempdir=/tmp
*Program arguments:*
--master
local[*]
--class
com.example.YourSparkApp
<PROJECT_DIR>/target/scala-2.11/YourSparkAppFat.jar
<APP_ARGS>
If you don't care much about initialization, or can insert a loop in the code where the app waits for a keystroke or any other kind of signal before continuing, then you can start your app as usual and simply attach IntelliJ to the app process (Run > Attach to Local Process...).
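A minimal sketch of that last idea, pausing until you press ENTER so the debugger can be attached before the job proceeds (the object name and the Spark setup placeholder are illustrative, not from the original answer):
object YourSparkApp {
  def main(args: Array[String]): Unit = {
    // print pid@hostname so the process is easy to find in Run > Attach to Local Process...
    println(java.lang.management.ManagementFactory.getRuntimeMXBean.getName)
    println("Attach the debugger, then press ENTER to continue...")
    scala.io.StdIn.readLine()
    // build the SparkSession and run the job from here
  }
}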

How can I detect if a Scala Play Framework app is running in console mode?

I have some code that warms up my app's caches that I'd like to run in production or when I start my app with sbt run. However, when I run sbt console, I'd like to skip this code so that I can get to testing on the REPL very quickly without any delays.
Is there a way to detect if my app is being run within sbt console so that I can avoid warming up the caches?

Is there any way to fork the SBT console into a new JVM?

For all the reasons listed here:
http://www.scala-sbt.org/0.13/docs/Running-Project-Code.html
it's sometimes necessary to run your Scala code in a separate JVM from the one in which SBT is running. That's also true of the REPL, which you access from the console or test:console commands.
Unfortunately, it doesn't appear that SBT supports running the console in its own JVM (and I'm posting this question here, as requested in the message):
https://groups.google.com/forum/#!topic/simple-build-tool/W0q62PfSIMo
Can someone confirm that this isn't possible and suggest a possible workaround? I'm trying to play with a ScalaFX app in the console, and I have to quit SBT completely each time I run it. It'd be nice to just have to quit the console and keep SBT running.

Scala - sbt: Is it safe to compile while running?

I often have to run some time-consuming experiments in Scala, and I usually run a second sbt instance for the same project, where I make changes to the code that is running in the other instance and compile.
The reason I do this is so that I don't have to wait for a long running process to finish before I make progress with my code.
My question is: Is it safe to do that, or is there a possibility that recompiling parts of currently running code in sbt/scala will cause problems in my running process?
What I have observed so far is that most of the time it is fine, but I did run into a class not defined error once when refactoring my code while running.
As #marcus mentioned, if the compiler writes a .class file that your running JVM has not yet loaded, the JVM may load it later and find that it doesn't match the classes already loaded. In many instances you'll be fine, but it could cause problems. There are a few things you can do in this situation:
1. Compile in separate directories. Check your code out into two completely different directories and do local commits (assuming you're using git) to push/pull from one copy of the repository to another. This will ensure that your testing doesn't get the compilation changes until you're ready (when you "pull" from the development repository).
2. Use an automated CI system like Jenkins or Travis to run your tests on each commit. This will, similarly to #1, not conflict with your development work since it is a separate checkout of the code.
3. Use sbt-revolver, which runs the program in a separate JVM with the re-start command and will restart it whenever there are changes (see the sketch after this list). This would interrupt your testing, however.
4. Use JRebel, which does a better job of reloading classes than the JVM or most IDEs.
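For option 3, a minimal sketch of wiring in sbt-revolver (the version number is only an example; check the plugin's README for the current one):
project/plugins.sbt:
addSbtPlugin("io.spray" % "sbt-revolver" % "0.9.1")
Then re-start forks the app in its own JVM, re-stop shuts it down, and ~re-start restarts it automatically whenever sources change.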

Running an external process to support an automated IntelliJ test

As a simple example, I have some tests which rely on a fresh (read "empty") local Redis instance. My typical workflow has been to fire up the instance from the terminal and just restart or flushdb manually.
If possible, I'd love to wrap this up in the Run configuration of my tests. The configuration dialog allows me to set up "Before launch" tasks, but these appear to run sequentially. I really want something running in another process in the background that can be shut down at the end of the tests.
I have a few other external processes that I'd like to handle in a similar fashion. I'm not sure the Run/Debug configuration is the right approach. I'm using Scala, and I'm open to other tools if they better suit the objective. The end goal is to have as much as possible a single command that will fire up all the dependencies and shut them down at the end of the test run.
I think I would implement a base class for these tests which spins up Redis before running the tests and then shuts it down afterwards.
For example in ScalaTest you would use the BeforeAndAfter trait:
http://doc.scalatest.org/2.2.1/#org.scalatest.BeforeAndAfter
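A minimal sketch of what such a base could look like with BeforeAndAfter (the port, the sleep, and the assumption that redis-server is on the PATH are all illustrative):
import org.scalatest.{BeforeAndAfter, FunSuite}
import scala.sys.process._

class RedisBackedSpec extends FunSuite with BeforeAndAfter {
  private var redis: Process = _

  before {
    // start a throwaway Redis instance on a test-only port
    redis = Process(Seq("redis-server", "--port", "6399")).run()
    Thread.sleep(500) // crude wait for the server to start accepting connections
  }

  after {
    redis.destroy() // shut the instance down so the next test starts empty
  }

  test("works against a fresh Redis") {
    // talk to Redis on localhost:6399 here
  }
}
For a single Redis instance per suite rather than per test, BeforeAndAfterAll works the same way.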