Scala Play: no application started when grabbing data sources from application.conf

I am trying to read in data sources from my application.conf file, but every time I run my server, or try to run test cases, I get an error saying that there is no started application.
Here is an example of what I am trying to do:
Unit test that is trying to read a property from my application.conf
class DbConfigWebUnitTest extends PlaySpec with OneAppPerSuite {
  implicit override lazy val app: FakeApplication = FakeApplication(
    additionalConfiguration = Map("db.test.url" -> "jdbc:postgresql://localhost:5432/suredbitswebtest",
      "db.test.user" -> "postgres", "db.test.password" -> "postgres", "db.test.driver" -> "org.postgresql.Driver"))

  val dbManagementWeb = new DbManagementWeb with DbConfigWeb with DbTestQualifier

  "DbConfigWebTest" must {
    "have the same username as what is defined in application.conf" in {
      dbManagementWeb.username must be("postgres")
    }
  }
}
Here is my DbConfigWeb
import play.api.Play.current

trait DbConfigWeb extends DbConfig { qualifier: DbQualifier =>
  val url: String = current.configuration.getString(qualifier + ".url").get
  val username: String = current.configuration.getString(qualifier + ".user").get
  val password: String = current.configuration.getString(qualifier + ".password").get
  val driver: String = current.configuration.getString(qualifier + ".driver").get

  override def database: DatabaseDef = JdbcBackend.Database.forURL(url, username, password, null, driver)
  override implicit val session = database createSession
}

trait DbQualifier {
  val qualifier: String
}

trait DbProductionQualifier extends DbQualifier {
  override val qualifier = "db.production"
}

trait DbTestQualifier extends DbQualifier {
  override val qualifier = "db.test"
}
And lastly, here is my stack trace:
[suredbits-web] $ last test:test
[debug] Forking tests - parallelism = false
[debug] Create a single-thread test executor
[debug] Runner for sbt.FrameworkWrapper produced 0 initial tasks for 0 tests.
[debug] Runner for org.scalatest.tools.Framework produced 2 initial tasks for 2 tests.
[debug] Running TaskDef(com.suredbits.web.db.DbConfigWebUnitTest, sbt.ForkMain$SubclassFingerscan@48687c55, false, [SuiteSelector])
[error] Uncaught exception when running com.suredbits.web.db.DbConfigWebUnitTest: java.lang.RuntimeException: There is no started application
sbt.ForkMain$ForkError: There is no started application
at scala.sys.package$.error(package.scala:27)
at play.api.Play$$anonfun$current$1.apply(Play.scala:71)
at play.api.Play$$anonfun$current$1.apply(Play.scala:71)
at scala.Option.getOrElse(Option.scala:120)
at play.api.Play$.current(Play.scala:71)
at com.suredbits.web.db.DbConfigWeb$class.$init$(DbConfigWebProduction.scala:14)
at com.suredbits.web.db.DbConfigWebUnitTest$$anon$1.<init>(DbConfigWebUnitTest.scala:14)
at com.suredbits.web.db.DbConfigWebUnitTest.<init>(DbConfigWebUnitTest.scala:14)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:379)
at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:641)
at sbt.ForkMain$Run$2.call(ForkMain.java:294)
at sbt.ForkMain$Run$2.call(ForkMain.java:284)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

I think the key problem is that vals in Scala traits are initialized at construction time, which is before the test Play application has started (presumably its lifecycle is tied to each spec example). You have a couple of workarounds:
- make everything in DbConfigWeb a def or perhaps a lazy val
- give DbConfigWeb an abstract play.api.Application field from which to extract the config values (rather than using current) and pass it explicitly (the fake application) to whatever DbManagementWeb is as a constructor parameter; a sketch of this appears further below
Here's a simplified version, using the first approach (which works for me):
import play.api.Play.current

trait DbConfig

trait DbConfigWeb extends DbConfig {
  self: DbQualifier =>

  // Using defs instead of vals
  def url: String = current.configuration.getString(qualifier + ".url").get
  def username: String = current.configuration.getString(qualifier + ".user").get
  def password: String = current.configuration.getString(qualifier + ".password").get
  def driver: String = current.configuration.getString(qualifier + ".driver").get
}

trait DbQualifier {
  val qualifier: String
}

trait DbTestQualifier extends DbQualifier {
  override val qualifier = "db.test"
}
and the spec:
import controllers.{DbConfigWeb, DbTestQualifier}
import org.scalatestplus.play.{OneAppPerSuite, PlaySpec}
import play.api.test.FakeApplication

class DbConfigTest extends PlaySpec with OneAppPerSuite {
  implicit override lazy val app: FakeApplication = FakeApplication(
    additionalConfiguration = Map("db.test.url" -> "jdbc:h2:mem:play",
      "db.test.user" -> "sa", "db.test.password" -> "", "db.test.driver" -> "org.h2.Driver"))

  val dbManagementWeb = new DbConfigWeb with DbTestQualifier

  "DbConfigWebTest" must {
    "have the same username as what is defined in application.conf" in {
      dbManagementWeb.username must be("sa")
    }
  }
}
Personally I prefer the second approach, which passes the application state around explicitly rather than relying on play.api.Play.current, which you cannot count on always being started.
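For illustration, here is a minimal sketch of that second approach, assuming the same DbConfig and DbQualifier traits as above (the abstract app member and its wiring are my own naming, not code from the question):

trait DbConfigWeb extends DbConfig {
  self: DbQualifier =>

  // Abstract application member: the caller decides which application's
  // configuration is read, so nothing here touches play.api.Play.current.
  def app: play.api.Application

  def url: String = app.configuration.getString(qualifier + ".url").get
  def username: String = app.configuration.getString(qualifier + ".user").get
  def password: String = app.configuration.getString(qualifier + ".password").get
  def driver: String = app.configuration.getString(qualifier + ".driver").get
}

In the spec you would then wire in the fake application explicitly, e.g. new DbConfigWeb with DbTestQualifier { def app = DbConfigTest.this.app }.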
You mentioned in the comments that lazy vals were not working for you, but I can only conjecture that some chain of calls was forcing initialization: check again that this isn't the case.
Note also that the order of initialization for vals can be complex and, while some might disagree, it's a pretty safe bet to stick to defs as trait members unless you're sure a member holds the result of some expensive operation (in which case a lazy val might be an option).
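To make the val-versus-def distinction concrete, here is a tiny self-contained sketch (names made up for illustration) showing when each one's right-hand side runs:

trait EagerConfig {
  // Evaluated as soon as an instance is constructed
  val url: String = sys.error("There is no started application")
}

trait DeferredConfig {
  // Evaluated only when url is actually called
  def url: String = sys.error("There is no started application")
}

object InitDemo extends App {
  // new EagerConfig {}          // would throw immediately, during construction
  val ok = new DeferredConfig {} // constructs fine; throws only on ok.url
}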

Related

How to bind Slick dependency with Lagom?

So, I have this dependency which is used to create tables and interact with Postgres. Here is a sample class:
class ConfigTable {
  this: DBFactory =>

  import driver.api._

  implicit val configKeyMapper = MappedColumnType.base[ConfigKey, String](e => e.toString, s => ConfigKey.withName(s))

  val configs = TableQuery[ConfigMapping]

  class ConfigMapping(tag: Tag) extends Table[Config](tag, "configs") {
    def key = column[ConfigKey]("key")
    def value = column[String]("value")

    def * = (key, value) <> (Config.tupled, Config.unapply _)
  }

  /**
   * add config
   *
   * @param config
   * @return
   */
  def add(config: Config): Try[Config] = try {
    sync(db.run(configs += config)) match {
      case 1 => Success(config)
      case _ => Failure(new Exception("Unable to add config"))
    }
  } catch {
    case ex: PSQLException =>
      if (ex.getMessage.contains("duplicate key value")) Failure(new Exception("alt id already exists."))
      else Failure(new Exception(ex.getMessage))
  }

  def get(key: ConfigKey): Option[Config] = sync(db.run(configs.filter(x => x.key === key).result)).headOption

  def getAll(): Seq[Config] = sync(db.run(configs.result))
}

object ConfigTable extends ConfigTable with PSQLComponent
PSQLComponent is the abstraction for the database meta-configuration:
import slick.jdbc.PostgresProfile

trait PSQLComponent extends DBFactory {
  val driver = PostgresProfile

  import driver.api.Database

  val db: Database = Database.forConfig("db.default")
}
DBFactory is again an abstraction:
import slick.jdbc.JdbcProfile

trait DBFactory {
  val driver: JdbcProfile

  import driver.api._

  val db: Database
}
application.conf:
db.default {
  driver = "org.postgresql.Driver"
  url = "jdbc:postgresql://localhost:5432/db"
  user = "user"
  password = "pass"

  hikaricp {
    minimumIdle = ${db.default.async-executor.minConnections}
    maximumPoolSize = ${db.default.async-executor.maxConnections}
  }
}
jdbc-defaults.slick.profile = "slick.jdbc.PostgresProfile$"
lagom.persistence.jdbc.create-tables.auto=false
I compile and publish this dependency to Nexus, and am trying to use it in my Lagom microservice.
Here is the Loader Class:
class SlickExapleAppLoader extends LagomApplicationLoader {
  override def load(context: LagomApplicationContext): LagomApplication = new SlickExampleApp(context) {
    override def serviceLocator: ServiceLocator = NoServiceLocator
  }

  override def loadDevMode(context: LagomApplicationContext): LagomApplication = new SlickExampleApp(context) with LagomDevModeComponents {
  }

  override def describeService = Some(readDescriptor[SlickExampleLMSServiceImpl])
}

abstract class SlickExampleApp(context: LagomApplicationContext)
  extends LagomApplication(context)
    // No idea which to use and how; nothing clear from the docs either.
    // with ReadSideJdbcPersistenceComponents
    // with ReadSideSlickPersistenceComponents
    // with SlickPersistenceComponents
    with AhcWSComponents {

  wire[SlickExampleScheduler]
}
I'm trying to implement it in this scheduler:
class SlickExampleScheduler @Inject()(lmsService: LMSService,
                                      configuration: Configuration)(implicit ec: ExecutionContext) {

  val brofile = `SomeDomainObject`

  val gson = new Gson()
  val concurrency = Runtime.getRuntime.availableProcessors() * 10

  implicit val timeout: Timeout = 3.minute
  implicit val system: ActorSystem = ActorSystem("LMSActorSystem")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // Getting ExceptionInInitializerError here for ConfigTable ===> ExceptionLine
  val schedulerImplDao = new SchedulerImplDao(ConfigTable)

  def hitLMSAPI = {
    println("=============>1")
    schedulerImplDao.doSomething()
  }

  system.scheduler.schedule(2.seconds, 2.seconds) {
    println("=============>")
    hitLMSAPI
  }
}
I'm not sure if this is the correct way, and if it's not, what is the correct way of doing this? It is a project requirement to keep the data models separate from the service, for the obvious reason of re-usability.
Exception Stack:
17:50:38.666 [info] akka.cluster.Cluster(akka://lms-impl-application) [sourceThread=ForkJoinPool-1-worker-1, akkaTimestamp=12:20:38.665UTC, akkaSource=akka.cluster.Cluster(akka://lms-impl-application), sourceActorSystem=lms-impl-application] - Cluster Node [akka.tcp://lms-impl-application@127.0.0.1:45805] - Started up successfully
17:50:38.707 [info] akka.cluster.Cluster(akka://lms-impl-application) [sourceThread=lms-impl-application-akka.actor.default-dispatcher-6, akkaTimestamp=12:20:38.707UTC, akkaSource=akka.cluster.Cluster(akka://lms-impl-application), sourceActorSystem=lms-impl-application] - Cluster Node [akka.tcp://lms-impl-application@127.0.0.1:45805] - No seed-nodes configured, manual cluster join required
java.lang.ExceptionInInitializerError
at com.slick.init.impl.SlickExampleScheduler.<init>(SlickExampleScheduler.scala:29)
at com.slick.init.impl.SlickExampleApp.<init>(SlickExapleAppLoader.scala:42)
at com.slick.init.impl.SlickExapleAppLoader$$anon$2.<init>(SlickExapleAppLoader.scala:17)
at com.slick.init.impl.SlickExapleAppLoader.loadDevMode(SlickExapleAppLoader.scala:17)
at com.lightbend.lagom.scaladsl.server.LagomApplicationLoader.load(LagomApplicationLoader.scala:76)
at play.core.server.LagomReloadableDevServerStart$$anon$1.$anonfun$get$5(LagomReloadableDevServerStart.scala:176)
at play.utils.Threads$.withContextClassLoader(Threads.scala:21)
at play.core.server.LagomReloadableDevServerStart$$anon$1.$anonfun$get$3(LagomReloadableDevServerStart.scala:173)
at scala.Option.map(Option.scala:163)
at play.core.server.LagomReloadableDevServerStart$$anon$1.$anonfun$get$2(LagomReloadableDevServerStart.scala:149)
at scala.util.Success.flatMap(Try.scala:251)
at play.core.server.LagomReloadableDevServerStart$$anon$1.$anonfun$get$1(LagomReloadableDevServerStart.scala:147)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:658)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: java.lang.NullPointerException
at com.example.db.models.LoginTable.<init>(LoginTable.scala:29)
at com.example.db.models.LoginTable$.<init>(LoginTable.scala:293)
at com.example.db.models.LoginTable$.<clinit>(LoginTable.scala)
... 24 more
This is how it is working:
abstract class SlickExampleApp(context: LagomApplicationContext) extends LagomApplication(context)
  with SlickPersistenceComponents with AhcWSComponents {

  override implicit lazy val actorSystem: ActorSystem = ActorSystem("LMSActorSystem")
  override lazy val materializer: ActorMaterializer = ActorMaterializer()
  override lazy val lagomServer = serverFor[SlickExampleLMSService](wire[SlickExampleLMSServiceImpl])

  lazy val externalService = serviceClient.implement[LMSService]

  override def connectionPool: ConnectionPool = new HikariCPConnectionPool(environment)

  override def jsonSerializerRegistry: JsonSerializerRegistry = new JsonSerializerRegistry {
    override def serializers: immutable.Seq[JsonSerializer[_]] = Vector.empty
  }

  val loginTable = wire[LoginTable]

  wire[SlickExampleScheduler]
}
> One thing I'd like to report: the Lagom docs about the application.conf configuration of Slick are not correct; it misled me for two days. Then I dug into the library code, and this is how it goes:
private val readSideConfig = system.settings.config.getConfig("lagom.persistence.read-side.jdbc")
private val jdbcConfig = system.settings.config.getConfig("lagom.persistence.jdbc")

private val createTables = jdbcConfig.getConfig("create-tables")
val autoCreateTables: Boolean = createTables.getBoolean("auto")

// users can disable the usage of jndiDbName for userland read-side operations by
// setting the jndiDbName to null. In which case we fallback to slick.db.
// slick.db must be defined otherwise the application will fail to start
val db = {
  if (readSideConfig.hasPath("slick.jndiDbName")) {
    new InitialContext()
      .lookup(readSideConfig.getString("slick.jndiDbName"))
      .asInstanceOf[Database]
  } else if (readSideConfig.hasPath("slick.db")) {
    Database.forConfig("slick.db", readSideConfig)
  } else {
    throw new RuntimeException("Cannot start because read-side database configuration is missing. " +
      "You must define either 'lagom.persistence.read-side.jdbc.slick.jndiDbName' or 'lagom.persistence.read-side.jdbc.slick.db' in your application.conf.")
  }
}

val profile = DatabaseConfig.forConfig[JdbcProfile]("slick", readSideConfig).profile
The configuration it requires is quite different from the one suggested in the docs.
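Reading the quoted code back, the read-side configuration it actually looks up would be shaped roughly like this (a sketch inferred from those getConfig/hasPath calls, with placeholder connection values; it is not copied from the docs):

lagom.persistence {
  jdbc.create-tables.auto = false

  read-side.jdbc {
    slick {
      profile = "slick.jdbc.PostgresProfile$"
      db {
        driver = "org.postgresql.Driver"
        url = "jdbc:postgresql://localhost:5432/db"
        user = "user"
        password = "pass"
      }
    }
  }
}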

Decoupling non-serializable object to avoid Serialization error in Spark

The following class contains the main function which tries to read from Elasticsearch and prints the documents returned:
object TopicApp extends Serializable {
  def run() {
    val start = System.currentTimeMillis()

    val sparkConf = new Configuration()
    sparkConf.set("spark.executor.memory", "1g")
    sparkConf.set("spark.kryoserializer.buffer", "256")

    val es = new EsContext(sparkConf)

    val esConf = new Configuration()
    esConf.set("es.nodes", "localhost")
    esConf.set("es.port", "9200")
    esConf.set("es.resource", "temp_index/some_doc")
    esConf.set("es.query", "?q=*:*")
    esConf.set("es.fields", "_score,_id")

    val documents = es.documents(esConf)
    documents.foreach(println)

    val end = System.currentTimeMillis()
    println("Total time: " + (end - start) + " ms")

    es.shutdown()
  }

  def main(args: Array[String]) {
    run()
  }
}
The following class converts the returned documents to JSON using org.json4s:
class EsContext(sparkConf: HadoopConfig) extends SparkBase {
  private val sc = createSCLocal("ElasticContext", sparkConf)

  def documentsAsJson(esConf: HadoopConfig): RDD[String] = {
    implicit val formats = DefaultFormats

    val source = sc.newAPIHadoopRDD(
      esConf,
      classOf[EsInputFormat[Text, MapWritable]],
      classOf[Text],
      classOf[MapWritable]
    )

    val docs = source.map(
      hit => {
        val doc = Map("ident" -> hit._1.toString) ++ mwToMap(hit._2)
        write(doc)
      }
    )

    docs
  }

  def shutdown() = sc.stop()

  // mwToMap() converts MapWritable to Map
}
The following class creates the local SparkContext for the application:
trait SparkBase extends Serializable {
  protected def createSCLocal(name: String, config: HadoopConfig): SparkContext = {
    val iterator = config.iterator()
    for (prop <- iterator) {
      val k = prop.getKey
      val v = prop.getValue
      if (k.startsWith("spark."))
        System.setProperty(k, v)
    }

    val runtime = Runtime.getRuntime
    runtime.gc()

    val conf = new SparkConf()
    conf.setMaster("local[2]")
    conf.setAppName(name)
    conf.set("spark.serializer", classOf[KryoSerializer].getName)
    conf.set("spark.ui.port", "0")

    new SparkContext(conf)
  }
}
When I run TopicApp I get the following errors:
Exception in thread "main" org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2055)
at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:324)
at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:323)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.map(RDD.scala:323)
at TopicApp.EsContext.documents(EsContext.scala:51)
at TopicApp.TopicApp$.run(TopicApp.scala:28)
at TopicApp.TopicApp$.main(TopicApp.scala:39)
at TopicApp.TopicApp.main(TopicApp.scala)
Caused by: java.io.NotSerializableException: org.apache.spark.SparkContext
Serialization stack:
- object not serializable (class: org.apache.spark.SparkContext, value: org.apache.spark.SparkContext@14f70e7d)
- field (class: TopicApp.EsContext, name: sc, type: class org.apache.spark.SparkContext)
- object (class TopicApp.EsContext, TopicApp.EsContext@2cf77cdc)
- field (class: TopicApp.EsContext$$anonfun$documents$1, name: $outer, type: class TopicApp.EsContext)
- object (class TopicApp.EsContext$$anonfun$documents$1, <function1>)
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:301)
... 13 more
Going through other posts that cover similar issues, most recommend making the classes Serializable or separating the non-serializable objects from the classes.
From the error I got, I inferred that sc is the culprit, since SparkContext is not a serializable class.
How should I decouple SparkContext, so that the applications runs correctly?
I can't run your program to be sure, but the general rule is not to create anonymous functions that refer to members of unserializable classes if they have to be executed on the RDD's data. In your case:
- EsContext has a val of type SparkContext, which is (intentionally) not serializable
- in the anonymous function passed to RDD.map in EsContext.documentsAsJson, you call another method of the same EsContext instance (mwToMap), which forces Spark to serialize that instance, along with the SparkContext it holds
One possible solution is to move mwToMap out of the EsContext class (possibly into a companion object of EsContext; objects need not be serializable, as they are effectively static). If there are other methods of the same nature (write?), they'll have to be moved too. This would look something like:
import EsContext._

class EsContext(sparkConf: HadoopConfig) extends SparkBase {
  private val sc = createSCLocal("ElasticContext", sparkConf)

  def documentsAsJson(esConf: HadoopConfig): RDD[String] = { /* unchanged */ }
  def documents(esConf: HadoopConfig): RDD[EsDocument] = { /* unchanged */ }
  def shutdown() = sc.stop()
}

object EsContext {
  private def mwToMap(mw: MapWritable): Map[String, String] = { ... }
}
If moving these methods out isn't possible (i.e. if they require some of EsContext's members), then consider separating the class that does the actual mapping from this context (which seems to be some kind of wrapper around the SparkContext; if that's what it is, that's all it should be).
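Another common workaround, offered as a generic sketch (hypothetical names, not the code above): copy whatever the closure needs into a local val before calling map, so the closure captures only that value and never `this`:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

// A wrapper mirroring EsContext's shape: it holds a SparkContext,
// which must never end up inside a closure shipped to executors.
class Wrapper(sc: SparkContext) {
  private val prefix = "doc-"

  def label(rdd: RDD[Int]): RDD[String] = {
    val p = prefix        // copy the member into a local val on the driver
    rdd.map(n => p + n)   // the closure captures only the String p, not `this`
  }
}

object LocalCaptureDemo extends App {
  val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("demo"))
  val out = new Wrapper(sc).label(sc.parallelize(1 to 3)).collect()
  out.foreach(println) // doc-1, doc-2, doc-3
  sc.stop()
}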

Play 2.3 FakeApplication mode not setting in test?

I'm using Play 2.3.8 and have some configuration in my GlobalSettings that changes based on the mode of the application. So I have something like this:
object Global extends GlobalSettings {
  override def onLoadConfig(config: Configuration, path: java.io.File, classloader: ClassLoader, mode: Mode.Mode) = {
    println(mode)
    val customConfig = //Based on mode.*
    config ++ configuration ++ Configuration(ConfigFactory.parseMap(customConfig))
  }
}
And then I am trying to write tests to ensure that this behavior works:
class MyTest extends PlaySpec {
  val testApp = FakeApplication(
    additionalConfiguration = Map(
      //SomeSettings And Stuff
      "logger.application" -> "WARN",
      "logger.root" -> "WARN"
    )
  )

  val devApp = new FakeApplication(
    additionalConfiguration = Map(
      //SomeSettings And Stuff
      "logger.application" -> "WARN",
      "logger.root" -> "WARN"
    )
  ) {
    override val mode = Mode.Dev
  }

  val prodApp = new FakeApplication(
    additionalConfiguration = Map(
      //SomeSettings And Stuff
      "logger.application" -> "WARN",
      "logger.root" -> "WARN"
    )
  ) {
    override val mode = Mode.Prod
  }

  "ThisNonWorkingTestOfMine" must {
    "when running application in test mode have config.thing = false" in running(testApp) {
      assertResult(Mode.Test)(testApp.mode)
      assertResult(false)(testApp.configuration.getBoolean("config.thing").get)
    }

    "when running application in dev mode have config.thing = false" in running(devApp) {
      assertResult(Mode.Dev)(devApp.mode)
      assertResult(false)(devApp.configuration.getBoolean("config.thing").get)
    }

    "when running application in prod mode have config.thing = true" in running(prodApp) {
      assertResult(Mode.Prod)(prodApp.mode)
      assertResult(true)(prodApp.configuration.getBoolean("config.thing").get)
    }
  }
}
And when I run these tests I see something a bit odd from my handy println:
Test
null
null
[info] MyTest:
[info] ThisNonWorkingTestOfMine
[info] play - Starting application default Akka system.
[info] play - Shutdown application default Akka system.
[info] - must when running application in test mode have config.thing = false
[info] play - Application started (Dev)
[info] - must when running application in dev mode have config.thing = false
[info] play - Application started (Prod)
[info] - must when running application in prod mode have config.thing = true *** FAILED ***
[info] Expected true, but got false (MyTest.scala:64)
[info] ScalaTest
How do I properly set the mode of a FakeApplication in Play 2.3? The way I have it now is based on a page from Mastering Play, but clearly that isn't the way to go when using onLoadConfig, it seems.
Edit:
I'm also experimenting with OneAppPerTest and creating the FakeApplication in the newAppForTest method, but it's still behaving oddly, with nulls like the method above. This is really strange, because if I set a random property like "foo" -> "bar" in the additionalConfiguration map when making my FakeApplication and then try to read it with config.getString in my Global object, it gets logged as None, even though app.configuration.getString in the test itself shows bar. It feels like there is some kind of disconnect here. And I don't get null for the mode if I use the FakeApplication.apply method rather than new FakeApplication.
So I think this has something to do with the way FakeApplication sets the mode to Mode.Test via an override, because if I copy the FakeApplication class, remove that line, and create my own version that lets me set the mode, I have no issues. In other words, in my tests package I declare the following class:
package play.api.test

import play.api.mvc._
import play.api.libs.json.JsValue
import scala.concurrent.Future
import xml.NodeSeq
import play.core.Router
import scala.runtime.AbstractPartialFunction
import play.api.libs.Files.TemporaryFile
import play.api.{ Application, WithDefaultConfiguration, WithDefaultGlobal, WithDefaultPlugins }

case class FakeModeApplication(
    override val path: java.io.File = new java.io.File("."),
    override val classloader: ClassLoader = classOf[FakeModeApplication].getClassLoader,
    val additionalPlugins: Seq[String] = Nil,
    val withoutPlugins: Seq[String] = Nil,
    val additionalConfiguration: Map[String, _ <: Any] = Map.empty,
    val withGlobal: Option[play.api.GlobalSettings] = None,
    val withRoutes: PartialFunction[(String, String), Handler] = PartialFunction.empty,
    val mode: play.api.Mode.Value
) extends {
  override val sources = None
} with Application with WithDefaultConfiguration with WithDefaultGlobal with WithDefaultPlugins {

  override def pluginClasses = {
    additionalPlugins ++ super.pluginClasses.diff(withoutPlugins)
  }

  override def configuration = {
    super.configuration ++ play.api.Configuration.from(additionalConfiguration)
  }

  override lazy val global = withGlobal.getOrElse(super.global)

  override lazy val routes: Option[Router.Routes] = {
    val parentRoutes = loadRoutes
    Some(new Router.Routes() {
      def documentation = parentRoutes.map(_.documentation).getOrElse(Nil)
      // Use withRoutes first, then delegate to the parentRoutes if no route is defined
      val routes = new AbstractPartialFunction[RequestHeader, Handler] {
        override def applyOrElse[A <: RequestHeader, B >: Handler](rh: A, default: A => B) =
          withRoutes.applyOrElse((rh.method, rh.path), (_: (String, String)) => default(rh))
        def isDefinedAt(rh: RequestHeader) = withRoutes.isDefinedAt((rh.method, rh.path))
      } orElse new AbstractPartialFunction[RequestHeader, Handler] {
        override def applyOrElse[A <: RequestHeader, B >: Handler](rh: A, default: A => B) =
          parentRoutes.map(_.routes.applyOrElse(rh, default)).getOrElse(default(rh))
        def isDefinedAt(x: RequestHeader) = parentRoutes.map(_.routes.isDefinedAt(x)).getOrElse(false)
      }
      def setPrefix(prefix: String) {
        parentRoutes.foreach(_.setPrefix(prefix))
      }
      def prefix = parentRoutes.map(_.prefix).getOrElse("")
    })
  }
}
And then in my test I can use it like so:
val devApp = new FakeModeApplication(
  additionalConfiguration = Map(
    //SomeSettings And Stuff
    "logger.application" -> "WARN",
    "logger.root" -> "WARN"
  ), mode = Mode.Dev
)
And then the mode value comes through as what I set it to and not as null.
I'm posting this as an answer because it does solve the issue I'm facing, but I don't understand why making a FakeApplication with the new keyword, as in new FakeApplication() { override val mode = ... }, causes the mode to come through as null in the onLoadConfig method on GlobalSettings. This feels like a hack rather than a solution, and I'd appreciate it if anyone with enough knowledge around this could post a solution that doesn't involve copying the full FakeApplication class and changing one line.
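A likely explanation, offered only as conjecture: overriding a concrete val is subject to Scala's initialization order. Anything the superclass or its mixins run during construction (onLoadConfig ends up being invoked from that machinery) sees the override before it has been assigned, i.e. as null. A minimal framework-free sketch of the pitfall:

trait Base {
  val mode: String = "test"
  // Runs during trait initialization, before any subclass override is assigned
  println("during construction, mode = " + mode)
}

object InitOrderDemo extends App {
  new Base { override val mode = "dev" } // prints: during construction, mode = null
}

The FakeModeApplication above sidesteps this by making mode a constructor parameter, which is assigned before the superclass initializers run.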

Op-Rabbit with Spray-Json in Akka Http

I am trying to use the op-rabbit library to consume a RabbitMQ queue in an Akka HTTP project.
I want to use spray-json for the marshalling/unmarshalling.
import com.spingo.op_rabbit.SprayJsonSupport._
import com.spingo.op_rabbit.stream.RabbitSource
import com.spingo.op_rabbit.{Directives, RabbitControl}

object Boot extends App with Config with BootedCore with ApiService {
  this: ApiService with Core =>

  implicit val materializer = ActorMaterializer()
  Http().bindAndHandle(routes, httpInterface, httpPort)
  log.info("Http Server started")

  implicit val rabbitControl = system.actorOf(Props[RabbitControl])

  import Directives._

  RabbitSource(
    rabbitControl,
    channel(qos = 3),
    consume(queue(
      "such-queue",
      durable = true,
      exclusive = false,
      autoDelete = false)),
    body(as[User])).
    runForeach { user =>
      log.info(user)
    } // after each successful iteration the message is acknowledged.
}
In a separate file:
case class User(id: Long, name: String)

object JsonFormat extends DefaultJsonProtocol {
  implicit val format = jsonFormat2(User)
}
The error I am getting is:
could not find implicit value for parameter um: akka.http.scaladsl.unmarshalling.FromRequestUnmarshaller[*.*.models.User]
[error] body(as[User])). // marshalling is automatically hooked up using implicits
[error] ^
[error] could not find implicit value for parameter um: com.spingo.op_rabbit.RabbitUnmarshaller[*.*.models.User]
[error] body(as[User])
[error] ^
[error] two errors found
I'm not sure how to get the op-rabbit spray-json support working properly.
Thanks for any help.
Try providing an implicit marshaller for your User class, like they do for Int (in RabbitTestHelpers.scala):
implicit val simpleIntMarshaller = new RabbitMarshaller[Int] with RabbitUnmarshaller[Int] {
  val contentType = "text/plain"
  val contentEncoding = Some("UTF-8")

  def marshall(value: Int) =
    value.toString.getBytes

  def unmarshall(value: Array[Byte], contentType: Option[String], charset: Option[String]) = {
    new String(value).toInt
  }
}
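If you want to stay with spray-json, a hedged sketch that mirrors that Int example for User might look like this (it reuses the implicit format from the JsonFormat object shown in the question; the marshaller member signatures are copied from the snippet above, everything else is illustrative):

import com.spingo.op_rabbit.{RabbitMarshaller, RabbitUnmarshaller}
import spray.json._
import JsonFormat._ // brings the implicit RootJsonFormat[User] into scope

implicit val userMarshaller = new RabbitMarshaller[User] with RabbitUnmarshaller[User] {
  val contentType = "application/json"
  val contentEncoding = Some("UTF-8")

  def marshall(value: User) =
    value.toJson.compactPrint.getBytes("UTF-8")

  def unmarshall(value: Array[Byte], contentType: Option[String], charset: Option[String]) =
    new String(value, charset.getOrElse("UTF-8")).parseJson.convertTo[User]
}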

path parameter: Invalid path Scala Play Config file

I am getting a parsing exception inside of my application.conf file.
Here is my application.conf
db.test.driver = org.postgresql.Driver
db.test.user = "postgres"
db.test.password = "postgres"
db.test.url = "jdbc:postgresql://localhost:5432/gasguru"
Here is the code that I am trying to use to read from my application.conf:
import play.api.Play.current

trait DbConfigWeb extends DbConfig { qualifier: DbQualifier =>
  def url: String = current.configuration.getString(qualifier + ".url").get
  println(url)
  def username: String = current.configuration.getString(qualifier + ".user").get
  def password: String = current.configuration.getString(qualifier + ".password").get
  def driver: String = current.configuration.getString(qualifier + ".driver").get

  override def database: DatabaseDef = JdbcBackend.Database.forURL(url, username, password, null, driver)
  override implicit val session = database createSession
}

trait DbQualifier {
  val qualifier: String
}

trait DbProductionQualifier extends DbQualifier {
  override val qualifier = "db.production"
}

trait DbTestQualifier extends DbQualifier {
  override val qualifier = "db.test"
}
Here is the test case I am trying to run:
class DbConfigWebUnitTest extends PlaySpec with OneAppPerSuite with BeforeAndAfterAll {
  "DbConfigWebTest" must {
    "have the same username as what is defined in application.conf" in {
      val dbManagementWeb = new DbConfigWeb with DbTestQualifier
      dbManagementWeb.username must be("postgres")
    }

    "have the same password as what is defined in application.conf" in {
      val dbManagementWeb = new DbConfigWeb with DbTestQualifier
      dbManagementWeb.password must be("postgres")
    }

    "have the qualifier db.test" in {
      val dbManagementWeb = new DbConfigWeb with DbTestQualifier
      dbManagementWeb.qualifier must be("db.test")
    }
  }
}
And finally, the error message:
[info] - must have the qualifier db.test *** FAILED ***
[info] com.typesafe.config.ConfigException$BadPath: path parameter: Invalid path 'com.suredbits.web.db.DbConfigWebUnitTest$$anonfun$1$$anonfun$apply$mcV$sp$3$$anon$3@1e2cbe08.url': Token not allowed in path expression: '$' ('$' not followed by {, '$' not allowed after '$') (you can double-quote this token if you really want it here)
[info] at com.typesafe.config.impl.Parser.parsePathExpression(Parser.java:1095)
[info] at com.typesafe.config.impl.Parser.parsePath(Parser.java:1135)
[info] at com.typesafe.config.impl.Path.newPath(Path.java:224)
[info] at com.typesafe.config.impl.SimpleConfig.hasPath(SimpleConfig.java:80)
[info] at play.api.Configuration.reportError(Configuration.scala:743)
[info] at play.api.Configuration.readValue(Configuration.scala:132)
[info] at play.api.Configuration.getString(Configuration.scala:151)
[info] at com.suredbits.web.db.DbConfigWeb$class.url(DbConfigWebProduction.scala:14)
[info] at com.suredbits.web.db.DbConfigWebUnitTest$$anonfun$1$$anonfun$apply$mcV$sp$3$$anon$3.url(DbConfigWebUnitTest.scala:27)
[info] at com.suredbits.web.db.DbConfigWeb$class.$init$(DbConfigWebProduction.scala:15)
[info] ...
You are concatenating an object with a string, which calls the default toString method and produces your.class.name@hash. You need to reference the field from the trait instead:
current.configuration.getString(qualifier.qualifier + ".url").get
and do the same on the other calls.
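Applied to the trait from the question, the fix looks like this (the same code with only the lookup keys changed):

trait DbConfigWeb extends DbConfig { qualifier: DbQualifier =>
  // `qualifier` alone is the self-type alias for `this`;
  // `qualifier.qualifier` is the actual String, e.g. "db.test"
  def url: String = current.configuration.getString(qualifier.qualifier + ".url").get
  def username: String = current.configuration.getString(qualifier.qualifier + ".user").get
  def password: String = current.configuration.getString(qualifier.qualifier + ".password").get
  def driver: String = current.configuration.getString(qualifier.qualifier + ".driver").get
}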
Or you can override the toString method to return the qualifier value and keep using it the way you are using it now:
trait DbQualifier {
  val qualifier: String
  override def toString = qualifier
}