Play framework: Running separate module of multi-module application - scala

I'm trying to create a multi-module application and run one of its modules separately from the others (from another machine).
Project structure looks like this:
        main
       /    \
module1    module2
I want to run module1 as a separate jar file (or is there a better way of doing this?) on another machine and connect it to the main app using Akka remoting.
What I'm doing:
Running "play dist" command
Unzipping module1.zip from universal folder
Setting +x mode to bin/module1 executable.
Setting my main class (will paste it below): instead of play.core.server.NettyServer im putting my main class: declare -r app_mainclass="module1.foo.Launcher"
Running with external application.conf file.
Here is my main class:
import akka.actor.{Actor, ActorSystem, Props}

class LauncherActor extends Actor {
  def receive = {
    case a => println(s"Received msg: $a ")
  }
}

object Launcher extends App {
  val system = ActorSystem("testsystem")
  val listener = system.actorOf(Props[LauncherActor], name = "listener")
  println(listener.path)
  listener ! "hi!"
  println("Server ready")
}
Here is the console output:
#pavel bin$ ./module1 -Dconfig.file=/Users/pavel/projects/foobar/conf/application.conf
[WARN] [10/18/2013 18:56:03.036] [main] [EventStream(akka://testsystem)] [akka.event-handlers] config is deprecated, use [akka.loggers]
akka://testsystem/user/listener
Server ready
Received msg: hi!
#pavel bin$
So the system shuts down as soon as it reaches the last line of the main method. If I run this code without Play, it works as expected: the object is loaded and it keeps waiting for messages.
Am I doing something wrong? Should I set some options in the module1 executable? Any other ideas?
Thanks in advance!
Update:
Versions:
Scala - 2.10.3
Play! - 2.2.0
SBT - 0.13.0
Akka - 2.2.1
Java 1.7 and 1.6 (tried both)
Build properties:
lazy val projectSettings = buildSettings ++ play.Project.playScalaSettings ++ Seq(
  resolvers := buildResolvers,
  libraryDependencies ++= dependencies
) ++ Seq(
  scalacOptions += "-language:postfixOps",
  javaOptions in run ++= Seq(
    "-XX:MaxPermSize=1024m",
    "-Xmx4048m"
  ),
  Keys.fork in run := true
)
lazy val common = play.Project("common", buildVersion, dependencies, path = file("modules/common"))

lazy val root = play.Project(appName, buildVersion, settings = projectSettings).settings(
  resolvers ++= buildResolvers
).dependsOn(common, module1, module2).aggregate(common, module1, module2)

lazy val module1 = play.Project("module1", buildVersion, path = file("modules/module1")).dependsOn(common).aggregate(common)

lazy val module2: Project = play.Project("module2", buildVersion, path = file("modules/module2")).dependsOn(common).aggregate(common)

So I found a dirty workaround and will use it until I find a better solution. In case someone is interested, I've added this code at the bottom of the Launcher object:
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val shutdown = Future {
  readLine("Press 'ENTER' key to shutdown")
}.map { q =>
  println("**** Shutting down ****")
  System.exit(0)
}

Await.result(shutdown, 100.days)
And now the system runs until I hit the ENTER key in the console. Dirty, I agree, but I didn't find a better solution.
If something better comes up, of course I will mark it as the answer.
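An alternative sketch that might avoid the readLine hack (assuming Akka 2.2's ActorSystem API): block the main thread until the actor system itself is shut down, so the process stays alive as long as the actors do:
import akka.actor.{ActorSystem, Props}

object Launcher extends App {
  val system = ActorSystem("testsystem")
  val listener = system.actorOf(Props[LauncherActor], name = "listener")
  println(listener.path)
  println("Server ready")
  // Returns only after system.shutdown() is called, keeping the JVM alive
  // for remote messages in the meantime.
  system.awaitTermination()
}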

Related

How to do Slick configuration via application.conf from within custom sbt task?

I want to create an sbt task which creates a database schema with Slick. For that, I have a task object like the following in my project:
object CreateSchema {
  val instance = Database.forConfig("localDb")

  def main(args: Array[String]) {
    val createFuture = instance.run(createActions)
    // ...
    Await.ready(createFuture, Duration.Inf)
  }
}
and in my build.sbt I define a task:
lazy val createSchema = taskKey[Unit]("CREATE database schema")
fullRunTask(createSchema, Runtime, "sbt.CreateSchema")
which gets executed as expected when I run sbt createSchema from the command line.
However, the problem is that application.conf doesn't seem to get taken into account (I've also tried different scopes like Compile or Test). As a result, the task fails due to com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'localDb'.
How can I fix this so the configuration is available?
I found a lot of questions here that deal with using the application.conf inside the build.sbt itself, but that is not what I need.
I have set up a little demo using SBT 0.13.8 and Slick 3.0.0, which works as expected (even without modifying "-Dconfig.resource").
Files
./build.sbt
name := "SO_20150915"
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies ++= Seq(
"com.typesafe" % "config" % "1.3.0" withSources() withJavadoc(),
"com.typesafe.slick" %% "slick" % "3.0.0",
"org.slf4j" % "slf4j-nop" % "1.6.4",
"com.h2database" % "h2" % "1.3.175"
)
lazy val createSchema = taskKey[Unit]("CREATE database schema")
fullRunTask(createSchema, Runtime, "somefun.CallMe")
./project/build.properties
sbt.version = 0.13.8
./src/main/resources/reference.conf
hello {
  world = "buuh."
}

h2mem1 = {
  url = "jdbc:h2:mem:test1"
  driver = org.h2.Driver
  connectionPool = disabled
  keepAliveConnection = true
}
./src/main/scala/somefun/CallMe.scala
package somefun

import com.typesafe.config.Config
import com.typesafe.config.ConfigFactory

import slick.driver.H2Driver.api._

/**
 * SO_20150915
 * Created by martin on 15.09.15.
 */
object CallMe {
  def main(args: Array[String]): Unit = {
    println("Hello")
    val settings = new Settings()
    println(s"Settings read from hello.world: ${settings.hw}")

    val db = Database.forConfig("h2mem1")
    try {
      // ...
      println("Do something with your database.")
    } finally db.close
  }
}

class Settings(val config: Config) {
  // This verifies that the Config is sane and has our
  // reference config. Importantly, we specify the "hello"
  // path so we only validate settings that belong to this
  // library. Otherwise, we might throw mistaken errors about
  // settings we know nothing about.
  config.checkValid(ConfigFactory.defaultReference(), "hello")

  // This uses the standard default Config, if none is provided,
  // which simplifies apps willing to use the defaults
  def this() {
    this(ConfigFactory.load())
  }

  val hw = config.getString("hello.world")
}
Result
Running sbt createSchema from the console, I obtain the output
[info] Loading project definition from /home/.../SO_20150915/project
[info] Set current project to SO_20150915 (in build file:/home/.../SO_20150915/)
[info] Running somefun.CallMe
Hello
Settings read from hello.world: buuh.
Do something with your database.
[success] Total time: 1 s, completed 15.09.2015 10:42:20
Ideas
Please verify that this unmodified demo project also works for you.
Then try changing the SBT version in the demo project and see if that changes anything.
Alternatively, recheck your project setup and try using a higher version of SBT.
Answer
So, even though your code resides in your src folder, it is called from within SBT. That means you are trying to load your application.conf from within SBT's classpath context.
Slick uses Typesafe Config internally, so the approach described under background information below is not applicable, as you cannot modify the Config loading mechanism itself.
Instead, try setting the path to your application.conf explicitly via config.resource; see the Typesafe Config documentation (search for config.resource).
Option 1
Either set config.resource (via -Dconfig.resource=...) before starting sbt
Option 2
Or from within build.sbt as Scala code
sys.props("config.resource") = "./src/main/resources/application.conf"
Option 3
Or create a Task in SBT via
lazy val configPath = TaskKey[Unit]("configPath", "Set path for application.conf")
and add
configPath := {
  sys.props("config.resource") = "./src/main/resources/application.conf"
}
to your sequence of settings.
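If you go with Option 3, note that defining configPath alone does not run it; one way to wire it up (a sketch, assuming the createSchema key from your build) is to make your task depend on it:
// run configPath before createSchema, so config.resource is already set
// when Typesafe Config loads the configuration
createSchema <<= createSchema dependsOn configPath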
Please let me know if that worked.
Background information
Recently, I was writing a custom plugin for SBT, where I also tried to access a reference.conf. Unfortunately, I was not able to access any .conf file placed within the project subfolder using the default ClassLoader.
In the end I created a testenvironment.conf in the project folder and used the following code to load the (Typesafe) config:
import java.io.File
import com.typesafe.config.{Config, ConfigFactory}

def getConfig: Config = {
  val classLoader = new java.net.URLClassLoader(Array(new File("./project/").toURI.toURL))
  ConfigFactory.load(classLoader, "testenvironment")
}
or for loading a general application.conf from ./src/main/resources:
def getConfig: Config = {
  val classLoader = new java.net.URLClassLoader(Array(new File("./src/main/resources/").toURI.toURL))
  // no .conf basename given, so look for reference.conf and application.conf
  // using the specific classLoader
  ConfigFactory.load(classLoader)
}

How to include file in production mode for Play framework

An overview of my environments:
Mac OS Yosemite, Play framework 2.3.7, sbt 0.13.7, Intellij Idea 14, java 1.8.0_25
I tried to run a simple Spark program in the Play framework, so I just created a Play 2 project in IntelliJ and changed some files as follows:
app/controllers/Application.scala:
package controllers

import play.api._
import play.api.libs.iteratee.Enumerator
import play.api.mvc._

object Application extends Controller {

  def index = Action {
    Ok(views.html.index("Your new application is ready."))
  }

  def trySpark = Action {
    Ok.chunked(Enumerator(utils.TrySpark.runSpark))
  }

}
app/utils/TrySpark.scala:
package utils

import org.apache.spark.{SparkContext, SparkConf}

object TrySpark {
  def runSpark: String = {
    val conf = new SparkConf().setAppName("trySpark").setMaster("local[4]")
    val sc = new SparkContext(conf)
    val data = sc.textFile("public/data/array.txt")
    val array = data.map(line => line.split(' ').map(_.toDouble))
    val sum = array.first().reduce((a, b) => a + b)
    sum.toString
  }
}
public/data/array.txt:
1 2 3 4 5 6 7
conf/routes:
GET / controllers.Application.index
GET /spark controllers.Application.trySpark
GET /assets/*file controllers.Assets.at(path="/public", file)
build.sbt:
name := "trySpark"
version := "1.0"
lazy val `tryspark` = (project in file(".")).enablePlugins(PlayScala)
scalaVersion := "2.10.4"
libraryDependencies ++= Seq( jdbc , anorm , cache , ws,
"org.apache.spark" % "spark-core_2.10" % "1.2.0")
unmanagedResourceDirectories in Test <+= baseDirectory ( _ /"target/web/public/test" )
I type activator run to run this app in development mode, then open localhost:9000/spark in the browser, and it shows the result 28 as expected. However, when I run activator start to start the app in production mode, it shows the following error message:
[info] play - Application started (Prod)
[info] play - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
[error] application -
! #6kik15fee - Internal server error, for (GET) [/spark] ->
play.api.Application$$anon$1: Execution exception[[InvalidInputException: Input path does not exist: file:/Path/to/my/project/target/universal/stage/public/data/array.txt]]
at play.api.Application$class.handleError(Application.scala:296) ~[com.typesafe.play.play_2.10-2.3.7.jar:2.3.7]
at play.api.DefaultApplication.handleError(Application.scala:402) [com.typesafe.play.play_2.10-2.3.7.jar:2.3.7]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$14$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:205) [com.typesafe.play.play_2.10-2.3.7.jar:2.3.7]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$14$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:202) [com.typesafe.play.play_2.10-2.3.7.jar:2.3.7]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) [org.scala-lang.scala-library-2.10.4.jar:na]
Caused by: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/Path/to/my/project/target/universal/stage/public/data/array.txt
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:251) ~[org.apache.hadoop.hadoop-mapreduce-client-core-2.2.0.jar:na]
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270) ~[org.apache.hadoop.hadoop-mapreduce-client-core-2.2.0.jar:na]
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:201) ~[org.apache.spark.spark-core_2.10-1.2.0.jar:1.2.0]
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205) ~[org.apache.spark.spark-core_2.10-1.2.0.jar:1.2.0]
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203) ~[org.apache.spark.spark-core_2.10-1.2.0.jar:1.2.0]
It seems that my array.txt file is not loaded in production mode. How can I solve this problem?
The problem here is that the public directory will not be available in your root project dir when you run in production. It is packaged as a jar (usually in STAGE_DIR/lib/PROJ_NAME-VERSION-assets.jar), so you will not be able to access its files this way.
I can see two solutions here:
1) Place the file in the conf directory. This will work, but seems very dirty, especially if you intend to use more data files;
2) Place those files in some directory and tell sbt to package it as well. You can keep using the public directory, although it seems better to use a different dir, especially if you want to have many more files.
Supposing array.txt is placed in a dir named datafiles in your project root, you can add this to build.sbt:
mappings in Universal ++=
  (baseDirectory.value / "datafiles" * "*" get) map
    (x => x -> ("datafiles/" + x.getName))
Don't forget to change the paths in your app code:
// (...)
val data = sc.textFile("datafiles/array.txt")
Then just do a clean, and when you run start, stage or dist, those files will be available.
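If the file location differs between development and production, one more option (just a sketch of the idea, not part of the answer above; the data.dir key is an assumption you would add to application.conf yourself) is to read the data directory from configuration instead of hard-coding the path:
package utils

import com.typesafe.config.ConfigFactory

// Resolves data file paths from a "data.dir" key in application.conf,
// e.g. data.dir = "public/data" in development and "datafiles" in production.
object DataPaths {
  private val config = ConfigFactory.load()
  def resolve(name: String): String = s"${config.getString("data.dir")}/$name"
}
TrySpark would then call sc.textFile(DataPaths.resolve("array.txt")) and the same build works in both modes.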

SBT: how to modify it:test with a task that calls the original test task?

In the snippet of my Build.scala file below, the itTestWithService task starts a test server before running the integration tests and stops it afterwards.
I'd like to attach this itTestWithService task to the it:test key. But how?
lazy val mohs =
  Project(id = "mohs", base = file("."))
    .settings(
      // I'd like the following but it creates a cycle that fails at runtime:
      // test in IntegrationTest <<= testWithService
      itTestWithService <<= testWithService
    )
val itTestWithService = taskKey[Unit]("run integration test with background server")

/** run integration tests against a test server.
  * (the server is started before the tests and stopped after the tests) */
lazy val testWithService = Def.task {
  val log = streams.value.log
  val launched = (start in testService).value
  launched match {
    case Success(_) =>
      testAndStop.value
    case Failure(e) =>
      val stack = e.getStackTrace().mkString("\n")
      log.error(s"failed to start test server: $e \n ${stack}")
  }
}
/** run integration tests and then stop the test server */
lazy val testAndStop = Def.taskDyn {
  val _ = (test in IntegrationTest).value
  stop in testService
}
In a related github issue discussion, Josh suggested an approach to the particular case I asked about (overriding it:test with a task that calls the original test task).
The approach works by reimplementing the test task. I don't know if there's a more general way to get access to the original version of the task. (A more general way would be a better answer than this one!)
Here's how to reimplement the it:test task:
/** run integration tests (just like it:test does, but written out explicitly so we can overwrite the it:test key) */
lazy val itTestTask: Initialize[Task[Unit]] = Def.taskDyn {
  for {
    results <- (executeTests in IntegrationTest)
  } yield {
    Tests.showResults(streams.value.log, results, "test missing?")
  }
}
Here's the composite integration test task (evolved slightly from the original question, though that version should work too):
/** run integration tests against a test server.
  * (the server is started before the tests and stopped after the tests) */
lazy val testWithServiceTask = Def.taskDyn {
  (start in testService).value match {
    case Success(_) =>
      testAndStop
    case Failure(e) =>
      val stack = e.getStackTrace().mkString("\n")
      streams.value.log.error(s"failed to start test server: $e \n ${stack}")
      emptyTask
  }
}
/** run integration tests and then stop the test server */
lazy val testAndStop = Def.taskDyn {
  SbtUtil.sequence(itTestTask, stop in testService)
}

val emptyTask = Def.task {}
And now plugging the composite task we've built into the it:test key doesn't create a cycle:
lazy val mohs =
  Project(id = "mohs", base = file("."))
    .settings(
      test in IntegrationTest <<= testWithServiceTask
    )
You can add custom test tags in build.scala. Here's sample code from one of my projects. Keep in mind you don't have to bind it to it:test; you could name it anything you want.
lazy val AcceptanceTest = config("acc") extend(Test)

lazy val Kernel = Project(
  id = "kernel",
  base = file("."),
  settings = defaultSettings ++ AkkaKernelPlugin.distSettings ++ Seq(
    libraryDependencies ++= Dependencies.Kernel,
    distJvmOptions in Dist := "-Xms256M -Xmx2048M",
    outputDirectory in Dist := file("target/dist"),
    distMainClass in Dist := "akka.kernel.Main system.SystemKernel"
  )
).configs(AcceptanceTest)
  .settings(inConfig(AcceptanceTest)(Defaults.testTasks): _*)
  .settings(testOptions in AcceptanceTest := Seq(
    Tests.Argument("-n", "AcceptanceTest"),
    Tests.Argument("-oD")
  ))
Just pay attention to the lazy val at the top and the .configs part.
With that setup, when I type acc:test, it runs all tests tagged with AcceptanceTest. And you can just boot the server as part of what gets called by your test suite. You can even tag tests by whether they need the server or not, so you can separate them once your suite gets big and takes longer to run.
Edit: Added to respond to comments.
To tag tests, create a tag like this:
import org.scalatest.Tag
object AcceptanceTest extends Tag("AcceptanceTest")
then put it in your tests like this:
it("should allow any actor to subscribe to any channel", AcceptanceTest) {
And that corresponds to the same build setup above. Only tests with that tag will be run when I call acc:test.
What came to mind for your problem is the solution I use for that same situation. Right now, you're doing the work in build.scala. I'm not sure it's possible to do exactly what you're describing there, but what I do achieves the same thing a bit differently: I have a trait that I mix in to all tests that need a vagrant server, and I tag those tests as VagrantTest.
It works like a singleton: if one or more tests need the server, it boots, but only one instance boots and all the tests share it.
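A minimal sketch of what such a mix-in could look like, assuming ScalaTest's BeforeAndAfterAll and a hypothetical TestServer helper (the actual vagrant start logic is up to you):
import org.scalatest.{BeforeAndAfterAll, Suite}

// Hypothetical helper; the real start/stop logic for the vagrant box lives elsewhere.
object TestServer {
  @volatile private var started = false
  def ensureStarted(): Unit = synchronized {
    if (!started) {
      // boot the vagrant/test server here
      started = true
    }
  }
}

// Mix this into every suite that needs the server; it boots at most once
// per test run and is shared by all suites.
trait NeedsTestServer extends BeforeAndAfterAll { this: Suite =>
  override def beforeAll(): Unit = {
    TestServer.ensureStarted()
    super.beforeAll()
  }
}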
You could try doing the same, but override it:test in the config: instead of "acc" in the example above, put "it". If that's not quite what you're looking for, someone else may have to chime in.
So, basically, when I call it:test it ends up running all tests tagged with IntegrationTest (as well as those with VagrantTest and a few others), so the longer-running server tests don't get run as often, since they take too long.

scala.tools.nsc.IMain within Play 2.1

I googled a lot and am totally stuck now. I know there are similar questions, but please read to the end: I have tried all the proposed solutions and none of them worked.
I am trying to use the IMain class from scala.tools.nsc within a Play 2.1 project (Using Scala 2.10.0).
Controller Code
This is the code where I try to use IMain in a WebSocket. It is only for testing.
object Scala extends Controller {

  def session = WebSocket.using[String] { request =>

    val interpreter = new IMain()

    val (out, channel) = Concurrent.broadcast[String]

    val in = Iteratee.foreach[String] { code =>
      interpreter.interpret(code) match {
        case Results.Error      => channel.push("error")
        case Results.Incomplete => channel.push("incomplete")
        case Results.Success    => channel.push("success")
      }
    }

    (in, out)
  }
}
As soon as something gets sent over the WebSocket, the following error gets logged by Play:
Failed to initialize compiler: object scala.runtime in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programatically, settings.usejavacp.value = true.
Build.scala
object ApplicationBuild extends Build {

  val appName    = "escalator"
  val appVersion = "1.0-SNAPSHOT"

  val appDependencies = Seq(
    "org.scala-lang" % "scala-compiler" % "2.10.0"
  )

  val main = play.Project(appName, appVersion, appDependencies).settings(
  )
}
What I have tried so far
None of this worked:
I have included fork := true in the Build.scala
A Settings object with:
embeddedDefaults[MyType]
usejavacp.value = true
The solution proposed as an answer to the question Embedded Scala REPL inherits parent classpath
I don't know what to do now.
The problem here is that sbt doesn't add scala-library to the class path.
The following workaround works.
First create a folder lib in the top project directory (the parent of app, conf etc.) and copy scala-library.jar there.
Then you can use the following code to host an interpreter:
import java.io.File
import scala.tools.nsc.Settings
import scala.tools.nsc.interpreter.IMain

val settings = new Settings
settings.bootclasspath.value += scala.tools.util.PathResolver.Environment.javaBootClassPath +
  File.pathSeparator + "lib/scala-library.jar"
val in = new IMain(settings) {
  override protected def parentClassLoader = settings.getClass.getClassLoader()
}
val res = in.interpret("val x = 1")
The above builds the boot classpath by adding the Scala library to the Java boot classpath. It's not a problem with the Play framework; it comes from sbt. The same problem occurs for any Scala project when it runs with sbt (tested with a simple project). When it runs from Eclipse it works fine.
EDIT: Link to sample project demonstrating the above.
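As a quick sanity check that the interpreter actually works after the workaround, you can interpret a snippet and pattern match on the result (a small sketch reusing the in instance from the code above):
import scala.tools.nsc.interpreter.Results

in.interpret("case class Point(x: Int, y: Int); val p = Point(1, 2)") match {
  case Results.Success    => println("compiled and evaluated")
  case Results.Error      => println("compile or runtime error")
  case Results.Incomplete => println("input is incomplete")
}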
I wonder if the reflect jar is missing. Try adding this to appDependencies too:
"org.scala-lang" % "scala-reflect" % "2.10.0"

sbt 0.11 run task examples needed

My projects are still using sbt 0.7.7 and I find it very convenient to have utility classes that I can run from the sbt prompt. I can also combine this with properties that are separately maintained, typically for environment-related values that change from host to host. This is an example of my project definition under the project/build directory:
class MyProject(info: ProjectInfo) extends DefaultProject(info) {
  // ...

  lazy val extraProps = new BasicEnvironment {
    // use the project's Logger for any properties-related logging
    def log = MyProject.this.log
    def envBackingPath = path("paths.properties")

    // define some properties that will go in paths.properties
    lazy val inputFile = property[String]
  }

  lazy val myTask = task { args =>
    runTask(Some("foo.bar.MyTask"),
      runClasspath, extraProps.inputFile.value :: args.toList)
      .dependsOn(compile) describedAs "my-task [options]"
  }
}
I can then use my task as my-task option1 option2 under the sbt shell.
I've read the new sbt 0.11 documentation at https://github.com/harrah/xsbt/wiki including the sections on Tasks and TaskInputs and frankly I'm still struggling on how to accomplish what I did on 0.7.7.
It seems the extra properties could simply be replaced by a separate environment.sbt, and that tasks have to be defined in project/build.scala before being set in build.sbt. It also looks like there is completion support, which looks very interesting.
Beyond that I'm somewhat overwhelmed. How do I accomplish what I did with the new sbt?
You can define a task like this:
val myTask = InputKey[Unit]("my-task")
And your setting:
val inputFile = SettingKey[String]("input-file", "input file description")
You can also define a new configuration like:
lazy val ExtraProps = config("extra-props") extend(Compile)
add this config to your project and use it to scope settings to this configuration:
lazy val root = Project("root", file(".")).config( ExtraProps ).settings(
inputFile in ExtraProps := ...
...
myTask in ExtraPops <<= inputTask { (argTask:TaskKey[Seq[String]]) =>
(argTask, inputFile) map { (args:Seq[String], iFile[String]) =>
...
}
}
).dependsOn(compile)
then launch your task with extra-props:my-task
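To cover the separately maintained properties from the original question, one possible sketch (the paths.properties file and its input.file key are assumptions here, not part of the build above) reads them with plain java.util.Properties when defining the setting:
// fill inputFile from a paths.properties file kept next to the build,
// falling back to a default when the file or key is missing
inputFile in ExtraProps := {
  val props = new java.util.Properties()
  val f = new java.io.File("paths.properties")
  if (f.exists) {
    val in = new java.io.FileInputStream(f)
    try props.load(in) finally in.close()
  }
  Option(props.getProperty("input.file")).getOrElse("default-input.txt")
}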