Is there a way to prevent Play from auto-reloading?

While working on some projects, I would sometimes prefer to disable Play's auto-reloading feature and reload only manually.
Is there a way to quickly achieve this? (Other than typing start at the play prompt, which adds some overhead as it packages the app.)

Create a new Scala app that starts the Play application without the dev-mode reloader:

import play.api.{Application, ApplicationLoader, Environment, Mode, Play}
import play.core.server.{ServerConfig, ServerProvider}

object MyPlayApp extends App {
  val config = ServerConfig(mode = Mode.Dev)

  // Load the application the same way Play's server would, but without the reloader
  val application: Application = {
    val environment = Environment(config.rootDir, this.getClass.getClassLoader, Mode.Dev)
    val context = ApplicationLoader.createContext(environment)
    val loader = ApplicationLoader(context)
    loader.load(context)
  }
  Play.start(application)

  // Pick the server backend (e.g. Netty) from configuration and bind it to the application
  val serverProvider: ServerProvider = ServerProvider.fromConfiguration(this.getClass.getClassLoader, config.configuration)
  serverProvider.createServer(config, application)
}
Then run it: sbt "runMain MyPlayApp"
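This works because the app is launched through a plain runMain rather than Play's run task, so sbt's file-watching and reloading wrapper never kicks in; to pick up code changes you stop the process and run the command again.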


docker container based library to support elastic4s

I'm using elastic4s and I'm also interested in using a Docker-container-based testing environment for my Elasticsearch.
There are a few libraries, such as testcontainers-scala and docker-it-scala, but I can't find out how to integrate elastic4s with them. Has anyone ever used a Docker-container-based testing environment?
Currently my spec is very simple:
import org.scalatest.{BeforeAndAfterAll, FreeSpec, Matchers}
import org.scalatest.concurrent.ScalaFutures
import org.scalatest.time.{Millis, Seconds, Span}
// plus your elastic4s and Play imports

class ElasticSearchApiServiceSpec extends FreeSpec with BeforeAndAfterAll with ScalaFutures with Matchers {
  implicit val defaultPatience = PatienceConfig(timeout = Span(100, Seconds), interval = Span(50, Millis))

  val configuration: Configuration = app.injector.instanceOf[Configuration]
  val elasticSearchApiService = new ElasticSearchApiService(configuration)

  override protected def beforeAll(): Unit = {
    elasticSearchApiService.elasticClient.execute {
      index into s"peopleIndex/person" doc StringDocumentSource(PeopleFactory.rawStringGoodPerson)
    }
    // since ES is eventually consistent, give the document time to be indexed
    Thread.sleep(3000)
  }

  override protected def afterAll(): Unit = {
    elasticSearchApiService.elasticClient.execute {
      deleteIndex("peopleIndex")
    }
  }

  "ElasticSearchApiService Tests" - {
    "elastic search service should retrieve person info properly - case existing person" in {
      val personInfo = elasticSearchApiService.getPersonInfo("2324").futureValue
      personInfo.get.name shouldBe "john"
    }
  }
}
When I run it, I start Elasticsearch in the background from my terminal, but I now want to use containers so the tests are less dependent on my local environment.
I guess you don't want to depend on an ES server running on your local machine for the tests. Then the simplest approach would be to use testcontainers-scala's GenericContainer to run the official ES Docker image, like this:
import java.net.URL
import scala.io.Source
import com.dimafeng.testcontainers.{ForAllTestContainer, GenericContainer}
import org.scalatest.FlatSpec
import org.testcontainers.containers.wait.strategy.Wait

class GenericContainerSpec extends FlatSpec with ForAllTestContainer {
  override val container = GenericContainer(
    "docker.elastic.co/elasticsearch/elasticsearch:5.5.1",
    exposedPorts = Seq(9200),
    waitStrategy = Wait.forHttp("/")
  )

  "GenericContainer" should "start ES and expose 9200 port" in {
    assert(
      Source.fromInputStream(
        new URL(s"http://${container.containerIpAddress}:${container.mappedPort(9200)}/_status")
          .openConnection()
          .getInputStream)
        .mkString
        .contains("ES server is successfully installed"))
  }
}
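Once the container is up, point your elastic4s client at the mapped port instead of a hard-coded localhost:9200. A minimal sketch, assuming the elastic4s 5.x HTTP client API (HttpClient and ElasticsearchClientUri); adapt the construction to whatever ElasticSearchApiService expects:

import com.dimafeng.testcontainers.{ForAllTestContainer, GenericContainer}
import com.sksamuel.elastic4s.ElasticsearchClientUri
import com.sksamuel.elastic4s.http.HttpClient
import org.scalatest.FreeSpec
import org.testcontainers.containers.wait.strategy.Wait

class ElasticSearchApiServiceContainerSpec extends FreeSpec with ForAllTestContainer {
  override val container = GenericContainer(
    "docker.elastic.co/elasticsearch/elasticsearch:5.5.1",
    exposedPorts = Seq(9200),
    waitStrategy = Wait.forHttp("/")
  )

  // Built lazily so the container is already running when the client is first used
  lazy val elasticClient = HttpClient(
    ElasticsearchClientUri(container.containerIpAddress, container.mappedPort(9200))
  )
}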

Gatling Performance - flushHttpCache

I'm new to Gatling and trying to understand how to get exec(flushHttpCache) incorporated into my script, as I am trying to stop redirects from occurring, since these will skew my results.
I have:
val getStartPage = feed(feeder)
  .exec(http("Test start page (start-page)")
    .exec(flushHttpCache) // <- this fails on compile: "flushHttpCache is not a member of io.gatling.http.request.builder.Http"
    .get("/start-page?id=${userId}")
    .check(status.is(200))
    .check(regex("Start now").exists))
  .pause(longPause)
then:
class myPerformanceTest extends Simulation with HttpConfiguration {

  val happyPath = scenario("testUsers")
    .exec(getStartPage)

  setUp(
    happyPath.inject(atOnceUsers(1))
  ).protocols(httpconf)
}
I tried moving .exec(flushHttpCache) to val happyPath = scenario("testUsers").exec(flushHttpCache), but still no luck.
How do I incorporate flushHttpCache into a script?
Any help appreciated.
You should import
import io.gatling.http.Predef._
not
import io.gatling.http.request.builder.Http
With this import, the second approach you tried (attaching flushHttpCache at the scenario level) will work.
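For illustration, a corrected sketch with the right import, moving flushHttpCache into the chain as an action of its own rather than a method on the request builder (feeder, longPause, httpconf, and HttpConfiguration are assumed to be defined as in your project):

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class myPerformanceTest extends Simulation with HttpConfiguration {

  val getStartPage = feed(feeder)
    .exec(flushHttpCache) // valid here: flushHttpCache is a chain-level action, not a request-builder method
    .exec(http("Test start page (start-page)")
      .get("/start-page?id=${userId}")
      .check(status.is(200))
      .check(regex("Start now").exists))
    .pause(longPause)

  val happyPath = scenario("testUsers").exec(getStartPage)

  setUp(
    happyPath.inject(atOnceUsers(1))
  ).protocols(httpconf)
}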

Unable to use WURFL with Scala

When I run the WURFL demo app for Scala:
object Demo {
  def main(args: Array[String]) {
    // Create WURFL passing a GeneralWURFLEngine object with a wurfl xml
    val wurflWrapper = new Wurfl(new GeneralWURFLEngine("classpath:/resources/wurfl.zip"))
    // Set cache provider
    wurflWrapper.setCacheProvider(new LRUMapCacheProvider)
    // Set Performance/Accuracy Mode
    wurflWrapper.setTargetAccuracy
    // Set Capability Filter
    wurflWrapper.setFilter(
      "can_assign_phone_number",
      "marketing_name",
      "brand_name",
      "model_name",
      "is_smarttv",
      "is_wireless_device",
      "device_os",
      "device_os_version",
      "is_tablet",
      "ux_full_desktop",
      "pointing_method",
      "preferred_markup",
      "resolution_height",
      "resolution_width",
      "xhtml_support_level")
    // User-Agent here
    var userAgent = ""
    // Defining headers
    var headers = Map("Accept-Datetime" -> "Thu, 31 May 2007 20:35:00 GMT")
    headers += ("Content-Type" -> "application/x-www-form-urlencoded")
    var device = wurflWrapper.deviceForHeaders(userAgent, headers)
    val matchType = device.matchType
    if (matchType == MatchType.conclusive) {
      println("Match Type is conclusive")
    }
    val wireless = device.capability("is_wireless_device")
    println("Is wireless: " + wireless)
  }
}
I get this exception:
[main] ERROR net.sourceforge.wurfl.core.GeneralWURFLEngine - cannot initialize: java.lang.NullPointerException: in is null
java.lang.NullPointerException: in is null
at java.util.zip.ZipInputStream.<init>(ZipInputStream.java:101)
at java.util.zip.ZipInputStream.<init>(ZipInputStream.java:80)
at net.sourceforge.wurfl.core.resource.FileLoader.fromZipFile(FileLoader.java:248)
at net.sourceforge.wurfl.core.resource.FileLoader.openInputStream(FileLoader.java:230)
at net.sourceforge.wurfl.core.resource.FileLoader.getStream(FileLoader.java:288)
at net.sourceforge.wurfl.core.resource.XMLResource.getData(XMLResource.java:163)
at net.sourceforge.wurfl.core.resource.DefaultWURFLModel.init(DefaultWURFLModel.java:115)
at net.sourceforge.wurfl.core.resource.DefaultWURFLModel.<init>(DefaultWURFLModel.java:107)
at net.sourceforge.wurfl.core.GeneralWURFLEngine.init(GeneralWURFLEngine.java:340)
at net.sourceforge.wurfl.core.GeneralWURFLEngine.initIfNeeded(GeneralWURFLEngine.java:319)
at net.sourceforge.wurfl.core.GeneralWURFLEngine.getDeviceForRequest(GeneralWURFLEngine.java:451)
at com.scientiamobile.wurfl.Wurfl.deviceForHeaders(Wurfl.scala:77)
at com.Demo$.main(Demo.scala:49)
at com.Demo.main(Demo.scala)
Exception in thread "main" net.sourceforge.wurfl.core.exc.WURFLRuntimeException: WURFL unexpected exception
at net.sourceforge.wurfl.core.GeneralWURFLEngine.initIfNeeded(GeneralWURFLEngine.java:322)
at net.sourceforge.wurfl.core.GeneralWURFLEngine.getDeviceForRequest(GeneralWURFLEngine.java:451)
at com.scientiamobile.wurfl.Wurfl.deviceForHeaders(Wurfl.scala:77)
at com.Demo$.main(Demo.scala:49)
at com.Demo.main(Demo.scala)
Caused by: java.lang.NullPointerException: in is null
at java.util.zip.ZipInputStream.<init>(ZipInputStream.java:101)
at java.util.zip.ZipInputStream.<init>(ZipInputStream.java:80)
at net.sourceforge.wurfl.core.resource.FileLoader.fromZipFile(FileLoader.java:248)
at net.sourceforge.wurfl.core.resource.FileLoader.openInputStream(FileLoader.java:230)
at net.sourceforge.wurfl.core.resource.FileLoader.getStream(FileLoader.java:288)
at net.sourceforge.wurfl.core.resource.XMLResource.getData(XMLResource.java:163)
at net.sourceforge.wurfl.core.resource.DefaultWURFLModel.init(DefaultWURFLModel.java:115)
at net.sourceforge.wurfl.core.resource.DefaultWURFLModel.<init>(DefaultWURFLModel.java:107)
at net.sourceforge.wurfl.core.GeneralWURFLEngine.init(GeneralWURFLEngine.java:340)
at net.sourceforge.wurfl.core.GeneralWURFLEngine.initIfNeeded(GeneralWURFLEngine.java:319)
... 4 more
The "wurfl.zip" is well located under "resources".
I also tried adding it to main Scala classes path, but still not luck.
From a code perspective,
val wurflWrapper = new Wurfl(new GeneralWURFLEngine("classpath:/resources/wurfl.zip"))
is a proper way to initialize your WURFL engine.
You may want to provide information about how you're running the demo: inside an IDE (IDEA, Eclipse or NetBeans), from the command line, or some other way. It can also be useful to tell whether you're using Maven or not.
In case you are running it from the command line, please provide a sample of how you launch the Scala app and how you set the classpath.
Assuming a scenario where you compile with Maven and execute the project directly from the target dir using -cp classes, execution will fail with your classpath error, because resource files are not included in the classes directory.
Make sure that wurfl-scala-example-.jar is included in your classpath.
If you are using the demo project inside IntelliJ IDEA, please make sure that the resources directory is marked as "resources", otherwise the IDEA run tool will not include the wurfl.zip file as a resource.
Hope this helps.
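As a quick diagnostic, it can also help to check whether the zip is actually visible on the classpath before handing the path to the engine; getResource returning null corresponds exactly to the "in is null" NullPointerException thrown from FileLoader. A minimal sketch (the resource path mirrors the one used in the question):

object ResourceCheck extends App {
  // Returns null when /resources/wurfl.zip is not an entry on the classpath
  val url = getClass.getResource("/resources/wurfl.zip")
  println(if (url == null) "wurfl.zip is NOT on the classpath" else s"wurfl.zip found at $url")
}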

Spray, Slick, Spark - OutOfMemoryError: PermGen space

I have successfully implemented a simple web service using Spray and Slick that passes an incoming request through a Spark ML prediction pipeline. Everything was working fine until I tried to add a data layer. I chose Slick because it seems to be popular.
However, I can't quite get it to work right. I have been basing most of my code on the Hello-Slick Activator template. I use a DAO object like so:
object DataDAO {
  val datum = TableQuery[Datum]

  def dbInit = {
    val db = Database.forConfig("h2mem1")
    try {
      Await.result(db.run(DBIO.seq(
        datum.schema.create
      )), Duration.Inf)
    } finally db.close
  }

  def insertData(data: Data) = {
    val db = Database.forConfig("h2mem1")
    try {
      Await.result(db.run(DBIO.seq(
        datum += data,
        datum.result.map(println)
      )), Duration.Inf)
    } finally db.close
  }
}

case class Data(data1: String, data2: String)

class Datum(tag: Tag) extends Table[Data](tag, "DATUM") {
  def data1 = column[String]("DATA_ONE", O.PrimaryKey)
  def data2 = column[String]("DATA_TWO")
  def * = (data1, data2) <> (Data.tupled, Data.unapply)
}
I initialize my database in my Boot object:
object Boot extends App {
  implicit val system = ActorSystem("raatl-demo")
  Classifier.initializeData
  DataDAO.dbInit
  // More service initialization code ...
}
I try to add a record to my database before completing the service request:
val predictionRoute = {
  path("data") {
    get {
      parameter('q) { query =>
        // do Spark stuff to get prediction
        DataDAO.insertData(data)
        respondWithMediaType(`application/json`) {
          complete {
            DataJson(data1, data2)
          }
        }
      }
    }
  }
}
When I send a request to my service, my application crashes with
java.lang.OutOfMemoryError: PermGen space
I suspect I'm implementing the Slick API incorrectly; it's hard to tell from the documentation, because it stuffs all the operations into a main method.
Finally, my conf is the same as the Activator template's:
h2mem1 = {
  url = "jdbc:h2:mem:raatl"
  driver = org.h2.Driver
  connectionPool = disabled
  keepAliveConnection = true
}
Has anyone encountered this before? I'm using Slick 3.1.
java.lang.OutOfMemoryError: PermGen space is normally not caused by the way you use an API; here is what Oracle says about it:
The detail message PermGen space indicates that the permanent generation is full. The permanent generation is the area of the heap where class and method objects are stored. If an application loads a very large number of classes, then the size of the permanent generation might need to be increased using the -XX:MaxPermSize option.
I do not think this is because of an incorrect implementation of the Slick API. It probably happens because you are using multiple frameworks that load many classes.
Your options are:
Increase the perm gen size with -XX:MaxPermSize
Upgrade to Java 8, where the perm gen space is replaced by Metaspace, which is tuned automatically
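If you launch the service through sbt, a forked run is one place to set the flag; a minimal sketch for build.sbt (the 512m value is an assumption, tune it to your workload):

// build.sbt
fork := true                          // run the app in a separate JVM so javaOptions apply
javaOptions += "-XX:MaxPermSize=512m" // only relevant on Java 7 and below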

Play reports that it can't get ClosableLazy value after it has been closed

I am trying to run specification tests in a Play/Scala/ReactiveMongo project. The setup is like this:
class FeaturesSpec extends Specification {
  "Features controller" should {
    "create feature from JSON request" in withMongoDb { app =>
      // do test
    }
  }
}
With MongoDbFixture as follows:
object MongoDBTestUtils {

  def withMongoDb[T](block: Application => T): T = {
    implicit val app = FakeApplication(
      additionalConfiguration = Map("mongodb.uri" -> "mongodb://localhost/unittests")
    )
    running(app) {
      def db = ReactiveMongoPlugin.db
      try {
        block(app)
      } finally {
        dropAll(db)
      }
    }
  }

  def dropAll(db: DefaultDB) = {
    Await.ready(Future.sequence(Seq(
      db.collection[JSONCollection]("features").drop()
    )), 2 seconds)
  }
}
When the tests run, the logs are pretty noisy and complain about a resource being already closed. Although the tests work correctly, this is weird, and I would like to know why it occurs and how to fix it.
Error:
[info] application - ReactiveMongoPlugin stops, closing connections...
[warn] play - Error stopping plugin
java.lang.IllegalStateException: Can't get ClosableLazy value after it has been closed
at play.core.ClosableLazy.get(ClosableLazy.scala:49) ~[play_2.11-2.3.7.jar:2.3.7]
at play.api.libs.concurrent.AkkaPlugin.applicationSystem(Akka.scala:71) ~[play_2.11-2.3.7.jar:2.3.7]
at play.api.libs.concurrent.Akka$$anonfun$system$1.apply(Akka.scala:29) ~[play_2.11-2.3.7.jar:2.3.7]
at play.api.libs.concurrent.Akka$$anonfun$system$1.apply(Akka.scala:29) ~[play_2.11-2.3.7.jar:2.3.7]
at scala.Option.map(Option.scala:145) [scala-library-2.11.4.jar:na]
The exception means that you are using the ReactiveMongo plugin after the application has stopped.
You might want to try using Around:
class withMongoDb extends Around with Scope {
  val db = ReactiveMongoPlugin.db

  override def around[T: AsResult](t: => T): Result = try {
    val res = t
    AsResult.effectively(res)
  } finally {
    ...
  }
}
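A spec would then instantiate the fixture per example, letting specs2 run the body through around (a sketch based on the spec from the question):

class FeaturesSpec extends Specification {
  "Features controller" should {
    "create feature from JSON request" in new withMongoDb {
      // db is in scope here; around() wraps the body and runs the cleanup
    }
  }
}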
You should also take a look at Flapdoodle Embedded Mongo; with that you don't have to delete databases after testing, IIRC.
This problem likely occurs because your test exercises code that references a closed MongoDB instance. After each Play Specs2 test runs, the MongoDB connection is reset, so your first test may pass, but a subsequent test may hold a stale reference to the closed instance and, as a result, fail.
One way to solve this issue is to ensure the following criteria are met in your application:
Avoid using val or lazy val for MongoDB database resources
(Re)Initialize all database references on application start
I wrote up a blog post that describes a solution to the problem within the context of a Play Controller.
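As an illustration of those two points, referencing the database through a def (resolved on every call) rather than a val (captured once at construction) avoids holding on to a closed instance. A minimal sketch against the Play 2.3 ReactiveMongo plugin API used above; the controller name is hypothetical:

class FeaturesController extends Controller {
  // Resolved on every call, so it always refers to the plugin instance
  // of the currently running application.
  def db = ReactiveMongoPlugin.db

  // By contrast, a val would capture the instance that was live when this
  // controller was constructed, and would go stale after an app restart:
  // val db = ReactiveMongoPlugin.db
}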