My FlatSpec tests are throwing:
java.util.concurrent.RejectedExecutionException: Task slick.backend.DatabaseComponent$DatabaseDef$$anon$2@dda460e rejected from java.util.concurrent.ThreadPoolExecutor@4f489ebd[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 2]
But only when I run more than one suite: it fails from the second suite onward, so it seems something isn't reset between tests. I'm using OneAppPerSuite to provide the app context. When I use OneAppPerTest instead, it likewise fails after the first test/suite.
I have an override def beforeEach = tables.foreach(_.truncate) set up to clear the tables, where truncate just deletes all rows from a table: Await.result(db.run(q.delete), Duration.Inf)
I have the following setup for my DAO layer:
SomeMappedDaoClass extends SomeCrudBase with HasDatabaseConfig
where
trait SomeCrudBase { self: HasDatabaseConfig =>
  override lazy val dbConfig = DatabaseConfigProvider.get[JdbcProfile](Play.current)
  implicit lazy val context = Akka.system.dispatchers.lookup("db-context")
}
And in application.conf
db-context {
  fork-join-executor {
    parallelism-factor = 5
    parallelism-max = 100
  }
}
I was refactoring the code to move away from Play's Guice DI. Before, the DAO classes had @Inject() (val dbConfigProvider: DatabaseConfigProvider) and extended HasDatabaseConfigProvider instead, and everything worked perfectly. Now it doesn't, and I don't know why.
Thank you in advance!
Just out of interest, is SomeMappedDaoClass an object? (I know it says class, but...)
When testing the Play framework I have run into this kind of issue with objects that set up connections to parts of the Play Framework.
Between tests and between test files the Play app is killed and restarted; the objects created, however, persist, because being objects they are initialised once within a JVM context (I think).
This can result in an object holding a connection (be it for Slick, an actor, anything...) that references the first app instance used in a test. When that app is terminated and a new test starts a new app, the connection points at nothing.
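For reference, a minimal sketch of the injection-based shape the question describes as having worked (a sketch only; whether the mixin exposes profile or driver depends on the play-slick version). Nothing here touches Play.current or Akka.system at initialisation time, so each test application gets freshly wired dependencies:

import javax.inject.Inject
import play.api.db.slick.{DatabaseConfigProvider, HasDatabaseConfigProvider}
import slick.driver.JdbcProfile

class SomeMappedDaoClass @Inject()(
    protected val dbConfigProvider: DatabaseConfigProvider
) extends HasDatabaseConfigProvider[JdbcProfile] {
  import profile.api._
  // DAO methods go here; db comes from the provider, resolved per
  // injected instance rather than once per JVM.
}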
I came across the same issue and in my case, the above answers did not work out.
My Solution -
implicit val app = new FakeApplication(additionalConfiguration = inMemoryDatabase())
Play.start(app)
Add the above code to your first test case and don't add Play.stop(app). Since all the test cases refer to the first application, it should not be terminated. This worked for me.
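For context, a hedged sketch of where that snippet would live (the suite name and test body are hypothetical):

import org.scalatest.FlatSpec
import play.api.Play
import play.api.test.FakeApplication
import play.api.test.Helpers.inMemoryDatabase

class FirstSpec extends FlatSpec {
  // Started once and intentionally never stopped, so every suite that
  // runs afterwards keeps referring to this same application.
  implicit val app = FakeApplication(additionalConfiguration = inMemoryDatabase())
  Play.start(app)

  "the app" should "be shared across suites" in {
    // test body
  }
}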
I'm using Play 2.5 with Slick. The docs on this topic simply state that everything is managed by Slick and Play's Slick module. This example, however, prints Dispatcher[akka.actor.default-dispatcher]:
class MyDbioImpl @Inject()(protected val dbConfigProvider: DatabaseConfigProvider)(implicit ec: ExecutionContext)
  extends HasDatabaseConfigProvider[JdbcProfile] {
  import profile.api._

  def selectSomeStuff(): Future[MyResult] = db.run {
    println(ec)
    [...]
  }
}
Since the execution context is printed inside db.run, it seems like all of my database access will also be executed on the default execution context.
I found this answer to an older question which, at the time, solved the problem. But that solution has since been deprecated; it is now suggested to use dependency injection to acquire the application context. When I try to do this, I get an error saying that play.akka.actor.slick-context does not exist...
class MyDbioProvider @Inject()(actorSystem: ActorSystem,
                               protected val dbConfigProvider: DatabaseConfigProvider)
  extends Provider[MyDbioImpl] {

  override def get(): MyDbioImpl = {
    val ec = actorSystem.dispatchers.lookup("play.akka.actor.slick-context")
    new MyDbioImpl(dbConfigProvider)(ec)
  }
}
Edit:
Is Slick's execution context a "normal" execution context which is defined in a config file somewhere? Where does the context switch take place? I assumed the entry point to the "database world" is at db.run.
According to Slick:
Every Database contains an AsyncExecutor that manages the thread pool for asynchronous execution of Database I/O Actions. Its size is the main parameter to tune for the best performance of the Database object. It should be set to the value that you would use for the size of the connection pool in a traditional, blocking application (see About Pool Sizing in the HikariCP documentation for further information). When using Database.forConfig, the thread pool is configured directly in the external configuration file together with the connection parameters. If you use any other factory method to get a Database, you can either use a default configuration or specify a custom AsyncExecutor.
Basically it says you don't need to create an isolated ExecutionContext, since Slick already isolates a thread pool internally. Any call you make to Slick is non-blocking, so you should use the default ExecutionContext.
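To illustrate, a minimal sketch of sizing that internal pool when creating a Database from configuration (the mydb path and H2 settings are assumptions; the keys follow Slick's documented Database.forConfig format):

import slick.jdbc.JdbcBackend.Database

object DbSetup {
  // application.conf (assumed):
  //   mydb {
  //     url = "jdbc:h2:mem:test"
  //     driver = "org.h2.Driver"
  //     connectionPool = "HikariCP"
  //     numThreads = 10  // sizes Slick's internal AsyncExecutor
  //   }
  val db = Database.forConfig("mydb")
}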
Slick's implementation of this can be seen in the BasicBackend.scala file: the runInContextSafe method. The code is as follows:
val promise = Promise[R]
val runnable = new Runnable {
  override def run() = {
    try {
      promise.completeWith(runInContextInline(a, ctx, streaming, topLevel, stackLevel = 1))
    } catch {
      case NonFatal(ex) => promise.failure(ex)
    }
  }
}
DBIO.sameThreadExecutionContext.execute(runnable)
promise.future
As shown above, a Promise is used: the action is handed to Slick's internal thread pool for execution, and the Promise's Future is returned to the caller immediately. By the time you call Await.result/ready, the Promise has typically already been completed by Slick's internal threads, so the call merely retrieves the result. This is why db.run is non-blocking from the caller's point of view, even in an environment such as Play.
For details, please refer to Scala's documentation on Future and Promise: https://docs.scala-lang.org/overviews/core/futures.html
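For illustration, a minimal standalone sketch of the same pattern (the names are mine, not Slick's): the caller receives the Future immediately while the work completes on whatever executor it was handed to.

import java.util.concurrent.Executor
import scala.concurrent.{Future, Promise}
import scala.util.control.NonFatal

object PromisePattern {
  // Run body on the given executor and expose the result as a Future.
  def runOn[R](executor: Executor)(body: => R): Future[R] = {
    val promise = Promise[R]()
    executor.execute(new Runnable {
      override def run(): Unit =
        try promise.success(body)
        catch { case NonFatal(ex) => promise.failure(ex) }
    })
    promise.future
  }
}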
I'm looking to schedule something to run once per day, and the code I want to run involves making updates to entries in the database. I have managed to schedule some simple code by overriding the onStart method in Global.scala using Akka, as follows:
override def onStart(app: Application) = {
Akka.system.scheduler.schedule(0.second, 1.second) {
println("hello!")
}
}
The issue is that I want to do something more complicated than logging here, I want to make updates to the database, so I would want to call some function in a models file (models/SomeTable.scala), but I can't import that code in Global.scala.
It sounds like if I want to do something more complicated like that, I should be using Akka's actor system, but I am far from understanding how that works. I've found documentation on how to create Actors in Akka's docs, though how to incorporate them into my Play project is unclear. Where do I write this Actor class, and how is it imported (what is Global.scala allowed to have access to...?)? And if I don't need actors for this, does anyone have some insight into how imports and such work in this part of a Play project?
Note that this part of the Play framework underwent large changes going from Play 2.3.* to 2.4.*, so solutions written for 2.4.* should not be expected to work here.
The information I've gotten above has come mostly from Play's documentation, along with a bunch of related SO questions:
How to schedule task daily + onStart in Play 2.0.4?
how to write cron job in play framework 2.3
Scheduling delaying of jobs tasks in Play framework 2.x app
Where is the job support in Play 2.0?
Was asynchronous jobs removed from the Play framework? What is a better alternative?
Thanks so much in advance!
First of all, you definitely need to read about Akka.
But for your specific task you do not need to import anything into Global. You just need to start your worker actor, and that actor can schedule the regular action itself. As a template:
import akka.actor.{Actor, Cancellable, Props}
import scala.concurrent.duration._
class MyActor extends Actor {
  private var cancellable: Option[Cancellable] = None

  override def preStart(): Unit = {
    super.preStart()
    cancellable = Some(
      context.system.scheduler.schedule(
        1.second,
        24.hours,
        self,
        MyActor.Tick
      )(context.dispatcher)
    )
  }

  override def postStop(): Unit = {
    cancellable.foreach(_.cancel())
    cancellable = None
    super.postStop()
  }

  def receive: Receive = {
    case MyActor.Tick =>
      // here is the start point for your execution
      // NEW CODE WILL BE HERE
  }
}

object MyActor {
  val Name = "my-actor"
  def props = Props(new MyActor)
  case object Tick
}
Here you have an actor class, with preStart and postStop (read more about the actor lifecycle) in which the schedule is defined and cancelled. The scheduled action sends a Tick message to self; in other words, the actor will receive a Tick message every 24 hours, and since receive is defined for Tick, the message will be processed.
So you just need to add your implementation where I placed the comment.
From Global you just need to start this actor in onStart:
Akka.system.actorOf(MyActor.props, MyActor.Name)
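Put together, a minimal sketch of the Play 2.3-style Global.scala (assuming the MyActor above is on the classpath, e.g. under app/actors):

import play.api.{Application, GlobalSettings}
import play.api.libs.concurrent.Akka

object Global extends GlobalSettings {
  override def onStart(app: Application): Unit = {
    // Starting the actor also triggers the preStart scheduling above.
    Akka.system(app).actorOf(MyActor.props, MyActor.Name)
  }
}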
I used the DAO design GitHub Gist by almeidap as an example for my database layer. Unfortunately, current has been deprecated since Play 2.5, so I can't use:
trait MongoHelper extends ContextHelper {
  lazy val db = ReactiveMongoPlugin.db
}
nor
trait MongoHelper extends ContextHelper {
  lazy val reactiveMongoApi = current.injector.instanceOf[ReactiveMongoApi]
  lazy val db = reactiveMongoApi.db
}
Since I cannot inject reactiveMongoApi into a trait, I am wondering how I can solve this. Apart from the fact that using a deprecated method is discouraged, I cannot even start my application: I get the exception There is no started application, caused by my startup code which inserts dummy data into the database on app launch.
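For illustration, this is the shape I am considering (a hedged sketch; UserDao is a hypothetical example): the trait declares the dependency abstractly and each concrete class receives it via constructor injection.

import javax.inject.Inject
import play.modules.reactivemongo.ReactiveMongoApi

trait MongoHelper {
  // Supplied by whichever class mixes this trait in.
  def reactiveMongoApi: ReactiveMongoApi
  lazy val db = reactiveMongoApi.db
}

class UserDao @Inject()(val reactiveMongoApi: ReactiveMongoApi) extends MongoHelper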
I have a Play 2.5.3 application which uses Slick for reading objects from the DB.
The service classes are built in the following way:
class SomeModelRepo @Inject()(protected val dbConfigProvider: DatabaseConfigProvider) {
  val dbConfig = dbConfigProvider.get[JdbcProfile]
  import dbConfig.driver.api._
  val db = dbConfig.db
  ...
Now I need some standalone Scala scripts to perform some operations in the background. I need to connect to the DB within them and I would like to reuse my existing service classes to read objects from DB.
To instantiate a SomeModelRepo class' object I need to pass some DatabaseConfigProvider as a parameter. I tried to run:
object SomeParser extends App {
  object testDbProvider extends DatabaseConfigProvider {
    def get[P <: BasicProfile]: DatabaseConfig[P] = {
      DatabaseConfigProvider.get("default")(Play.current)
    }
  }
  ...
  val someRepo = new SomeModelRepo(testDbProvider)
however, I get the error "There is no started application" on the line with (Play.current). Moreover, the method current in object Play is deprecated and should be replaced with DI.
Is there any way to initialize my SomeModelRepo class' object within the standalone object SomeParser?
Best regards
When you start your Play application, the PlaySlick module handles the Slick configurations for you. With it you have two choices:
inject DatabaseConfigProvider and get the driver from there, or
do a global lookup via DatabaseConfigProvider.get[JdbcProfile](Play.current), which is not preferred.
Either way, you must have your Play app running! Since this is not the case with your standalone scripts, you get the error: "There is no started application".
So, you will have to use Slick's default approach, by instantiating db directly from config:
val db = Database.forConfig("default")
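That said, if you specifically want to reuse SomeModelRepo, a hedged sketch (Slick 3.1 package names; the config path is assumed to match your Play setup) is to build the DatabaseConfig directly and wrap it in a provider:

import play.api.db.slick.DatabaseConfigProvider
import slick.backend.DatabaseConfig
import slick.driver.JdbcProfile
import slick.profile.BasicProfile

object SomeParser extends App {
  // Reads the same "slick.dbs.default" block Play's Slick module uses.
  val jdbcConfig = DatabaseConfig.forConfig[JdbcProfile]("slick.dbs.default")

  object standaloneProvider extends DatabaseConfigProvider {
    def get[P <: BasicProfile]: DatabaseConfig[P] =
      jdbcConfig.asInstanceOf[DatabaseConfig[P]]
  }

  val someRepo = new SomeModelRepo(standaloneProvider)
}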
You have lots of examples at Lightbend's templates.
EDIT: Sorry, I didn't read the whole question. Do you really need to have it as a separate application? You can run your background operations when your app starts, like here. In this example, the InitialData class is instantiated as an eager singleton, so its insert() method runs immediately when the app starts.
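For reference, a hedged sketch of that eager-singleton approach (names hypothetical, following the linked template):

import javax.inject.{Inject, Singleton}
import com.google.inject.AbstractModule

// Runs its seeding logic as soon as the injector constructs it.
@Singleton
class InitialData @Inject()(someModelRepo: SomeModelRepo) {
  def insert(): Unit = {
    // perform the background inserts via someModelRepo here
  }
  insert()
}

class Module extends AbstractModule {
  override def configure(): Unit =
    bind(classOf[InitialData]).asEagerSingleton()
}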
I have a Spark application that involves 2 Scala companion objects as follows.
object actualWorker {
  daoClient

  def update(data, sc) {
    groupedData = sc.getRdd(data).filter.<several_operations>.groupByKey
    groupedData.foreach(x => daoClient.load(x))
  }
}

object SparkDriver {
  getArgs
  sc = getSparkContext
  actualWorker.update(data, sc: SparkContext)
}
The challenge I have is in writing unit tests for this Spark application. I am using Mockito, ScalaTest, and JUnit for these tests.
I am not able to mock the daoClient while writing the unit test. [EDIT1: An additional challenge is that my daoClient is not serializable. Because I am running on Spark, I simply put it in an object (not a class) and it works on Spark, but that makes it non-unit-testable.]
I have tried the following:
1. Make ActualWorker a class that takes an uploadClient in the constructor, then create a client and instantiate it in ActualWorker. Problem: Task not serializable exception.
2. Introduce a trait for the upload client. But I still need to instantiate a client at some point in SparkDriver, which I fear will cause the same Task not serializable exception.
Any inputs here will be appreciated.
PS: I am fairly new to Scala and Spark
While technically not exactly a unit testing framework, I've used https://github.com/holdenk/spark-testing-base to test my Spark code and it works well.
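On the serialization point in the question, one pattern that tends to work is passing a small serializable factory into the worker and building the (non-serializable) client lazily on each executor; in tests you hand in a factory that returns a mock. A hedged sketch, with DaoClient standing in for the real client:

import org.apache.spark.rdd.RDD

// Stand-in for the non-serializable client from the question.
trait DaoClient {
  def load(row: (String, Iterable[Int])): Unit
}

// Only this small recipe is shipped to executors; the client itself
// is constructed inside each executor JVM and never serialized.
trait DaoClientFactory extends Serializable {
  def client: DaoClient
}

class ActualWorker(factory: DaoClientFactory) extends Serializable {
  def update(data: RDD[(String, Int)]): Unit =
    data.groupByKey().foreachPartition { partition =>
      val dao = factory.client
      partition.foreach(dao.load)
    }
}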