How to bind Slick dependency with Lagom? - scala

So, I have this dependency which is used to create tables and interact with Postgres. Here is a sample class:
class ConfigTable {
  this: DBFactory =>

  import driver.api._

  implicit val configKeyMapper = MappedColumnType.base[ConfigKey, String](e => e.toString, s => ConfigKey.withName(s))

  val configs = TableQuery[ConfigMapping]

  class ConfigMapping(tag: Tag) extends Table[Config](tag, "configs") {
    def key = column[ConfigKey]("key")
    def value = column[String]("value")
    def * = (key, value) <> (Config.tupled, Config.unapply _)
  }

  /**
   * add config
   *
   * @param config
   * @return
   */
  def add(config: Config): Try[Config] = try {
    sync(db.run(configs += config)) match {
      case 1 => Success(config)
      case _ => Failure(new Exception("Unable to add config"))
    }
  } catch {
    case ex: PSQLException =>
      if (ex.getMessage.contains("duplicate key value")) Failure(new Exception("alt id already exists."))
      else Failure(new Exception(ex.getMessage))
  }

  def get(key: ConfigKey): Option[Config] = sync(db.run(configs.filter(x => x.key === key).result)).headOption

  def getAll(): Seq[Config] = sync(db.run(configs.result))
}

object ConfigTable extends ConfigTable with PSQLComponent
PSQLComponent is the abstraction for the database meta-configuration:
import slick.jdbc.PostgresProfile

trait PSQLComponent extends DBFactory {
  val driver = PostgresProfile
  import driver.api.Database
  val db: Database = Database.forConfig("db.default")
}
DBFactory is again an abstraction:
import slick.jdbc.JdbcProfile

trait DBFactory {
  val driver: JdbcProfile
  import driver.api._
  val db: Database
}
application.conf:
db.default {
  driver = "org.postgresql.Driver"
  url = "jdbc:postgresql://localhost:5432/db"
  user = "user"
  password = "pass"
  hikaricp {
    minimumIdle = ${db.default.async-executor.minConnections}
    maximumPoolSize = ${db.default.async-executor.maxConnections}
  }
}

jdbc-defaults.slick.profile = "slick.jdbc.PostgresProfile$"

lagom.persistence.jdbc.create-tables.auto = false
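Note that the hikaricp block above substitutes values from db.default.async-executor, which is not shown anywhere in this config; for those substitutions to resolve, an async-executor block along these lines must also be present (the numbers here are illustrative, not values from the question):

db.default.async-executor {
  numThreads = 20
  minConnections = 20
  maxConnections = 20
  queueSize = 100
}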
I compile and publish this dependency to Nexus, and I am trying to use it in my Lagom microservice.
Here is the loader class:
class SlickExapleAppLoader extends LagomApplicationLoader {
  override def load(context: LagomApplicationContext): LagomApplication = new SlickExampleApp(context) {
    override def serviceLocator: ServiceLocator = NoServiceLocator
  }

  override def loadDevMode(context: LagomApplicationContext): LagomApplication = new SlickExampleApp(context) with LagomDevModeComponents {
  }

  override def describeService = Some(readDescriptor[SlickExampleLMSServiceImpl])
}

abstract class SlickExampleApp(context: LagomApplicationContext)
  extends LagomApplication(context)
    // No idea which of these to use or how; nothing is clear from the docs either.
    // with ReadSideJdbcPersistenceComponents
    // with ReadSideSlickPersistenceComponents
    // with SlickPersistenceComponents
    with AhcWSComponents {

  wire[SlickExampleScheduler]
}
I'm trying to implement it in this scheduler:
class SlickExampleScheduler @Inject()(lmsService: LMSService,
                                      configuration: Configuration)(implicit ec: ExecutionContext) {

  val brofile = `SomeDomainObject`

  val gson = new Gson()
  val concurrency = Runtime.getRuntime.availableProcessors() * 10

  implicit val timeout: Timeout = 3.minute
  implicit val system: ActorSystem = ActorSystem("LMSActorSystem")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // Getting the ExceptionInInitializerError here for ConfigTable ===> ExceptionLine
  val schedulerImplDao = new SchedulerImplDao(ConfigTable)

  def hitLMSAPI = {
    println("=============>1")
    schedulerImplDao.doSomething()
  }

  system.scheduler.schedule(2.seconds, 2.seconds) {
    println("=============>")
    hitLMSAPI
  }
}
I'm not sure if this is the correct way, and if it isn't, what the correct way of doing it is. It is a project requirement to keep the data models separate from the service, for the obvious reason of re-usability.
Exception Stack:
17:50:38.666 [info] akka.cluster.Cluster(akka://lms-impl-application) [sourceThread=ForkJoinPool-1-worker-1, akkaTimestamp=12:20:38.665UTC, akkaSource=akka.cluster.Cluster(akka://lms-impl-application), sourceActorSystem=lms-impl-application] - Cluster Node [akka.tcp://lms-impl-application@127.0.0.1:45805] - Started up successfully
17:50:38.707 [info] akka.cluster.Cluster(akka://lms-impl-application) [sourceThread=lms-impl-application-akka.actor.default-dispatcher-6, akkaTimestamp=12:20:38.707UTC, akkaSource=akka.cluster.Cluster(akka://lms-impl-application), sourceActorSystem=lms-impl-application] - Cluster Node [akka.tcp://lms-impl-application@127.0.0.1:45805] - No seed-nodes configured, manual cluster join required
java.lang.ExceptionInInitializerError
  at com.slick.init.impl.SlickExampleScheduler.<init>(SlickExampleScheduler.scala:29)
  at com.slick.init.impl.SlickExampleApp.<init>(SlickExapleAppLoader.scala:42)
  at com.slick.init.impl.SlickExapleAppLoader$$anon$2.<init>(SlickExapleAppLoader.scala:17)
  at com.slick.init.impl.SlickExapleAppLoader.loadDevMode(SlickExapleAppLoader.scala:17)
  at com.lightbend.lagom.scaladsl.server.LagomApplicationLoader.load(LagomApplicationLoader.scala:76)
  at play.core.server.LagomReloadableDevServerStart$$anon$1.$anonfun$get$5(LagomReloadableDevServerStart.scala:176)
  at play.utils.Threads$.withContextClassLoader(Threads.scala:21)
  at play.core.server.LagomReloadableDevServerStart$$anon$1.$anonfun$get$3(LagomReloadableDevServerStart.scala:173)
  at scala.Option.map(Option.scala:163)
  at play.core.server.LagomReloadableDevServerStart$$anon$1.$anonfun$get$2(LagomReloadableDevServerStart.scala:149)
  at scala.util.Success.flatMap(Try.scala:251)
  at play.core.server.LagomReloadableDevServerStart$$anon$1.$anonfun$get$1(LagomReloadableDevServerStart.scala:147)
  at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:658)
  at scala.util.Success.$anonfun$map$1(Try.scala:255)
  at scala.util.Success.map(Try.scala:213)
  at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
  at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
  at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
  at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
  at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
  at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
  at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
  at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
  at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: java.lang.NullPointerException
  at com.example.db.models.LoginTable.<init>(LoginTable.scala:29)
  at com.example.db.models.LoginTable$.<init>(LoginTable.scala:293)
  at com.example.db.models.LoginTable$.<clinit>(LoginTable.scala)
  ... 24 more

This is how it is working:
abstract class SlickExampleApp(context: LagomApplicationContext) extends LagomApplication(context)
  with SlickPersistenceComponents with AhcWSComponents {

  override implicit lazy val actorSystem: ActorSystem = ActorSystem("LMSActorSystem")
  override lazy val materializer: ActorMaterializer = ActorMaterializer()
  override lazy val lagomServer = serverFor[SlickExampleLMSService](wire[SlickExampleLMSServiceImpl])

  lazy val externalService = serviceClient.implement[LMSService]

  override def connectionPool: ConnectionPool = new HikariCPConnectionPool(environment)

  override def jsonSerializerRegistry: JsonSerializerRegistry = new JsonSerializerRegistry {
    override def serializers: immutable.Seq[JsonSerializer[_]] = Vector.empty
  }

  val loginTable = wire[LoginTable]

  wire[SlickExampleScheduler]
}
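As an aside, the ExceptionInInitializerError / NullPointerException in the stack trace above comes from eagerly initializing a singleton (object ConfigTable extends ConfigTable with PSQLComponent, and likewise LoginTable) whose trait vals may not be ready yet. A minimal sketch of how the shared library could drop the singleton object and take the DBFactory as a constructor argument instead (ConfigDao and its simplified table are hypothetical, not code from the question):

import scala.concurrent.Future

// Hypothetical sketch: the DAO takes its database plumbing as a constructor
// argument, so nothing is initialized until the application's DI wires it.
class ConfigDao(val factory: DBFactory) {
  import factory.driver.api._

  class ConfigMapping(tag: Tag) extends Table[(String, String)](tag, "configs") {
    def key = column[String]("key")
    def value = column[String]("value")
    def * = (key, value)
  }
  val configs = TableQuery[ConfigMapping]

  def getAll: Future[Seq[(String, String)]] = factory.db.run(configs.result)
}

// In the Lagom application cake: val configDao = wire[ConfigDao]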
> One thing I'd like to report: the Lagom docs about the application.conf configuration for Slick are not correct; they misled me for two days. Then I dug into the library code, and this is how it goes:
private val readSideConfig = system.settings.config.getConfig("lagom.persistence.read-side.jdbc")
private val jdbcConfig = system.settings.config.getConfig("lagom.persistence.jdbc")

private val createTables = jdbcConfig.getConfig("create-tables")
val autoCreateTables: Boolean = createTables.getBoolean("auto")

// users can disable the usage of jndiDbName for userland read-side operations by
// setting the jndiDbName to null. In which case we fallback to slick.db.
// slick.db must be defined otherwise the application will fail to start
val db = {
  if (readSideConfig.hasPath("slick.jndiDbName")) {
    new InitialContext()
      .lookup(readSideConfig.getString("slick.jndiDbName"))
      .asInstanceOf[Database]
  } else if (readSideConfig.hasPath("slick.db")) {
    Database.forConfig("slick.db", readSideConfig)
  } else {
    throw new RuntimeException("Cannot start because read-side database configuration is missing. " +
      "You must define either 'lagom.persistence.read-side.jdbc.slick.jndiDbName' or 'lagom.persistence.read-side.jdbc.slick.db' in your application.conf.")
  }
}

val profile = DatabaseConfig.forConfig[JdbcProfile]("slick", readSideConfig).profile
The configuration it requires is quite different from the one suggested in the docs.
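Putting that together, a read-side configuration shaped to match the lookup logic above would look roughly like this (a sketch inferred from the code, not from the docs; slick.jndiDbName must be absent for the slick.db branch to be taken):

lagom.persistence.read-side.jdbc {
  slick {
    profile = "slick.jdbc.PostgresProfile$"
    db {
      driver = "org.postgresql.Driver"
      url = "jdbc:postgresql://localhost:5432/db"
      user = "user"
      password = "pass"
    }
  }
}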

Related

Akka gRPC + Slick application causes "IllegalStateException: Cannot initialize ExecutionContext; AsyncExecutor already shut down"

I am trying to develop a gRPC server with Akka gRPC and Slick. I also use Airframe for DI.
Source code is here.
The issue is that it fails if it receives a request while running as a gRPC server.
If it doesn't start as a gRPC server, but just reads resources from the database, the process succeeds.
What is the difference?
The following reads an object from the database with Slick.
The ...Component objects are Airframe design objects; they will be used by the main module.
trait UserRepository {
  def getUser: Future[Seq[Tables.UsersRow]]
}

class UserRepositoryImpl(val profile: JdbcProfile, val db: JdbcProfile#Backend#Database) extends UserRepository {
  import profile.api._
  def getUser: Future[Seq[Tables.UsersRow]] = db.run(Tables.Users.result)
}

trait UserResolveService {
  private val repository = bind[UserRepository]

  def getAll: Future[Seq[Tables.UsersRow]] =
    repository.getUser
}

object userServiceComponent {
  val design = newDesign
    .bind[UserResolveService]
    .toSingleton
}
The following is the gRPC server source code:
trait UserServiceImpl extends UserService {
  private val userResolveService = bind[UserResolveService]
  private val system: ActorSystem = bind[ActorSystem]

  implicit val ec: ExecutionContextExecutor = system.dispatcher

  override def getAll(in: GetUserListRequest): Future[GetUserListResponse] = {
    userResolveService.getAll.map(us =>
      GetUserListResponse(
        us.map(u =>
          myapp.proto.user.User(
            1,
            "t_horikoshi@example.com",
            "t_horikoshi",
            myapp.proto.user.User.UserRole.Admin
          )
        )
      )
    )
  }
}

trait GRPCServer {
  private val userServiceImpl = bind[UserServiceImpl]
  implicit val system: ActorSystem = bind[ActorSystem]

  def run(): Future[Http.ServerBinding] = {
    implicit def ec: ExecutionContext = system.dispatcher

    val service: PartialFunction[HttpRequest, Future[HttpResponse]] =
      UserServiceHandler.partial(userServiceImpl)
    val reflection: PartialFunction[HttpRequest, Future[HttpResponse]] =
      ServerReflection.partial(List(UserService))

    // Akka HTTP 10.1 requires adapters to accept the new actors APIs
    val bound = Http().bindAndHandleAsync(
      ServiceHandler.concatOrNotFound(service, reflection),
      interface = "127.0.0.1",
      port = 8080,
      settings = ServerSettings(system)
    )
    bound.onComplete {
      case Success(binding) =>
        system.log.info(
          s"gRPC Server online at http://${binding.localAddress.getHostName}:${binding.localAddress.getPort}/"
        )
      case Failure(ex) =>
        system.log.error(ex, "occurred error")
    }
    bound
  }
}

object grpcComponent {
  val design = newDesign
    .bind[UserServiceImpl]
    .toSingleton
    .bind[GRPCServer]
    .toSingleton
}
The following is the main module:
object Main extends App {
  val conf = ConfigFactory
    .parseString("akka.http.server.preview.enable-http2 = on")
    .withFallback(ConfigFactory.defaultApplication())
  val system = ActorSystem("GRPCServer", conf)
  val dbConfig: DatabaseConfig[JdbcProfile] =
    DatabaseConfig.forConfig[JdbcProfile](path = "mydb")

  val design = newDesign
    .bind[JdbcProfile]
    .toInstance(dbConfig.profile)
    .bind[JdbcProfile#Backend#Database]
    .toInstance(dbConfig.db)
    .bind[UserRepository]
    .to[UserRepositoryImpl]
    .bind[ActorSystem]
    .toInstance(system)
    .add(userServiceComponent.design)
    .add(grpcComponent.design)

  design.withSession(s =>
    // Await.result(s.build[UserResolveService].getUser, Duration.Inf)) // success
    // Await.result(s.build[UserServiceImpl].getAll(GetUserListRequest()), Duration.Inf)) // success
    s.build[GRPCServer].run() // causes IllegalStateException when it receives a request
  )
}
When UserResolveService and UserServiceImpl are called directly, the process of loading an object from the database succeeds.
However, when running the application as a gRPC server, an error occurs when a request is received.
Though I was thinking about it all day, I couldn't resolve it...
Will you please help me resolve it?
It is resolved: if you execute an async process, you have to start the gRPC server with newSession.
I fixed it like that.
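For reference, a minimal sketch of that fix using Airframe's newSession / start / shutdown lifecycle (the shutdown hook is an illustrative choice, not code from the thread):

// design.withSession { ... } closes the session (and with it Slick's
// AsyncExecutor) as soon as the block returns, before the first request
// arrives. Keeping the session open for the server's lifetime avoids that:
val session = design.newSession
session.start
val binding = session.build[GRPCServer].run()
sys.addShutdownHook {
  session.shutdown // release the session (and the database) only on exit
}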

Play + Akka - Join the cluster and ask actor on another ActorSystem

I am able to make the Play app join the existing Akka cluster and then make ask calls to an actor running on another ActorSystem and get results back. But I am having trouble with a couple of things -
I see the following in the logs when Play tries to join the cluster. I suspect that Play is starting its own Akka cluster? I am really not sure what it means.
Could not register Cluster JMX MBean with name=akka:type=Cluster as it is already registered. If you are running multiple clusters in the same JVM, set 'akka.cluster.jmx.multi-mbeans-in-same-jvm = on' in config
Right now I'm re-initializing the ActorSystem every time a request comes to the controller, which I know is not the right way to do it. I am new to the Scala/Akka/Play stack and am having difficulty figuring out how to make it a singleton service and inject it into my controller.
So far I have got this -
class DataRouter @Inject()(controller: DataController) extends SimpleRouter {
  val prefix = "/v1/data"

  override def routes: Routes = {
    case GET(p"/ip/$datatype") =>
      controller.get(datatype)
    case POST(p"/ip/$datatype") =>
      controller.process
  }
}

case class RangeInput(start: String, end: String)

object RangeInput {
  implicit val implicitWrites = new Writes[RangeInput] {
    def writes(range: RangeInput): JsValue = {
      Json.obj(
        "start" -> range.start,
        "end" -> range.end
      )
    }
  }
}
@Singleton
class DataController @Inject()(cc: ControllerComponents)(implicit exec: ExecutionContext) extends AbstractController(cc) {
  private val logger = Logger("play")
  implicit val timeout: Timeout = 115.seconds

  private val form: Form[RangeInput] = {
    import play.api.data.Forms._
    Form(
      mapping(
        "start" -> nonEmptyText,
        "end" -> text
      )(RangeInput.apply)(RangeInput.unapply)
    )
  }

  def get(datatype: String): Action[AnyContent] = Action.async { implicit request =>
    logger.info(s"show: datatype = $datatype")
    logger.trace(s"show: datatype = $datatype")
    //val r: Future[Result] = Future.successful(Ok("hello " + datatype ))
    val config = ConfigFactory.parseString("akka.cluster.roles = [gateway]").
      withFallback(ConfigFactory.load())
    implicit val system: ActorSystem = ActorSystem(SharedConstants.Actor_System_Name, config)
    implicit val materializer: ActorMaterializer = ActorMaterializer()
    implicit val executionContext = system.dispatcher
    val ipData = system.actorOf(
      ClusterRouterGroup(RandomGroup(Nil), ClusterRouterGroupSettings(
        totalInstances = 100, routeesPaths = List("/user/getipdata"),
        allowLocalRoutees = false, useRoles = Set("static"))).props())
    val res: Future[String] = (ipData ? datatype).mapTo[String]
    //val res: Future[List[Map[String, String]]] = (ipData ? datatype).mapTo[List[Map[String,String]]]
    val futureResult: Future[Result] = res.map { list =>
      Ok(Json.toJson(list))
    }
    futureResult
  }

  def process: Action[AnyContent] = Action.async { implicit request =>
    logger.trace("process: ")
    processJsonPost()
  }

  private def processJsonPost[A]()(implicit request: Request[A]): Future[Result] = {
    logger.debug(request.toString())

    def failure(badForm: Form[RangeInput]) = {
      Future.successful(BadRequest("Test"))
    }

    def success(input: RangeInput) = {
      val r: Future[Result] = Future.successful(Ok("hello " + Json.toJson(input)))
      r
    }

    form.bindFromRequest().fold(failure, success)
  }
}
akka {
  log-dead-letters = off
  log-dead-letters-during-shutdown = off
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }
  remote {
    log-remote-lifecycle-events = off
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = ${myhost}
      port = 0
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://MyCluster@localhost:2541"
    ]
    seed-nodes = ${?SEEDNODE}
  }
}
Answers
Refer to this URL, which has details about configuring the actor system name: https://www.playframework.com/documentation/2.6.x/ScalaAkka#Built-in-actor-system-name
You should not initialize an actor system on every request; use the Play-injected actor system in the Application class. If you wish to customize the actor system, you should do it by modifying the Akka configuration. For that,
you should create your own ApplicationLoader extending GuiceApplicationLoader and override the builder method to supply your own Akka configuration. The rest is taken care of by Play, such as injecting this actor system into the Application for you.
Refer to the URL below:
https://www.playframework.com/documentation/2.6.x/ScalaDependencyInjection#Advanced:-Extending-the-GuiceApplicationLoader
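A minimal sketch of that advice applied to the code above (the IpDataGateway class name and its wiring are illustrative assumptions, not code from the question):

import javax.inject.{Inject, Singleton}
import akka.actor.{ActorRef, ActorSystem}
import akka.cluster.routing.{ClusterRouterGroup, ClusterRouterGroupSettings}
import akka.routing.RandomGroup

// Built once by Guice and reused across requests, instead of creating
// a new ActorSystem inside the controller action on every call.
@Singleton
class IpDataGateway @Inject()(system: ActorSystem) {
  val router: ActorRef = system.actorOf(
    ClusterRouterGroup(
      RandomGroup(Nil),
      ClusterRouterGroupSettings(
        totalInstances = 100,
        routeesPaths = List("/user/getipdata"),
        allowLocalRoutees = false,
        useRoles = Set("static"))).props(),
    name = "ipDataRouter")
}

// The controller then becomes:
// class DataController @Inject()(cc: ControllerComponents, gateway: IpDataGateway)(...)
//   ... (gateway.router ? datatype).mapTo[String] ...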

Kafka tests failing intermittently if not starting/stopping kafka each time

I'm trying to run some integration tests for a data stream using an embedded Kafka cluster. When executing all the tests in an environment other than my local one, the tests fail due to some internal state that's not removed properly.
I can get all the tests running in the non-local environment when I start/stop the Kafka cluster before/after each test, but I only want to start and stop the cluster once, at the beginning and at the end of the execution of my suite of tests.
I tried to remove the local streams state, but that didn't seem to work:
override protected def afterEach(): Unit = KStreamTestUtils.purgeLocalStreamsState(properties)
Is there a way to get my suite of tests running without having to start/stop the cluster each time?
Right below are the relevant classes.
class TweetStreamProcessorSpec extends FeatureSpec
  with MockFactory with GivenWhenThen with Eventually with BeforeAndAfterEach with BeforeAndAfterAll {

  val CLUSTER: EmbeddedKafkaCluster = new EmbeddedKafkaCluster
  val TEST_TOPIC: String = "test_topic"
  val properties = new Properties()

  override def beforeAll(): Unit = {
    CLUSTER.start()
    CLUSTER.createTopic(TEST_TOPIC, 1, 1)
  }

  override def afterAll(): Unit = CLUSTER.stop()

  // if these lines are uncommented, the tests work
  // override def afterEach(): Unit = CLUSTER.stop()
  // override protected def beforeEach(): Unit = CLUSTER.start()

  def createProducer: KafkaProducer[String, TweetEvent] = {
    val properties = Map(
      KEY_SERIALIZER_CLASS_CONFIG -> classOf[StringSerializer].getName,
      VALUE_SERIALIZER_CLASS_CONFIG -> classOf[ReflectAvroSerializer[TweetEvent]].getName,
      BOOTSTRAP_SERVERS_CONFIG -> CLUSTER.bootstrapServers(),
      SCHEMA_REGISTRY_URL_CONFIG -> CLUSTER.schemaRegistryUrlForcedToLocalhost()
    )
    new KafkaProducer[String, TweetEvent](properties)
  }

  def kafkaConsumerSettings: KafkaConfig = {
    val bootstrapServers = CLUSTER.bootstrapServers()
    val schemaRegistryUrl = CLUSTER.schemaRegistryUrlForcedToLocalhost()
    val zookeeper = CLUSTER.zookeeperConnect()

    KafkaConfig(
      ConfigFactory.parseString(
        s"""
        akka.kafka.bootstrap.servers = "$bootstrapServers"
        akka.kafka.schema.registry.url = "$schemaRegistryUrl"
        akka.kafka.zookeeper.servers = "$zookeeper"
        akka.kafka.topic-name = "$TEST_TOPIC"
        akka.kafka.consumer.kafka-clients.key.deserializer = org.apache.kafka.common.serialization.StringDeserializer
        akka.kafka.consumer.kafka-clients.value.deserializer = ${classOf[ReflectAvroDeserializer[TweetEvent]].getName}
        akka.kafka.consumer.kafka-clients.client.id = client1
        akka.kafka.consumer.wakeup-timeout=20s
        akka.kafka.consumer.max-wakeups=10
        """).withFallback(ConfigFactory.load()).getConfig("akka.kafka")
    )
  }

  feature("Logging tweet data from kafka topic") {
    scenario("log id and payload when consuming an update tweet event") {
      publishEventsToKafka(List(upTweetEvent))
      val logger = Mockito.mock(classOf[Logger])
      val pipeline = new TweetStreamProcessor(kafkaConsumerSettings, logger)
      pipeline.start
      eventually(timeout(Span(5, Seconds))) {
        Mockito.verify(logger, Mockito.times(1)).info(s"updating tweet uuid=${upTweetEvent.getUuid}, payload=${upTweetEvent.getPayload}")
      }
      pipeline.stop
    }

    scenario("log id when consuming a delete tweet event") {
      publishEventsToKafka(List(delTweetEvent))
      val logger = Mockito.mock(classOf[Logger])
      val pipeline = new TweetStreamProcessor(kafkaConsumerSettings, logger)
      pipeline.start
      eventually(timeout(Span(5, Seconds))) {
        Mockito.verify(logger, Mockito.times(1)).info(s"deleting tweet uuid=${delTweetEvent.getUuid}")
      }
      pipeline.stop
    }
  }
}
class TweetStreamProcessor(kafkaConfig: KafkaConfig, logger: Logger)
  extends Lifecycle with TweetStreamProcessor with Logging {

  private var control: Control = _
  private val valueDeserializer: Option[Deserializer[TweetEvent]] = None

  // ...

  def tweetsSource(implicit mat: Materializer): Source[CommittableMessage[String, TweetEvent], Control] =
    Consumer.committableSource(tweetConsumerSettings, Subscriptions.topics(kafkaConfig.topicName))

  override def start: Future[Unit] = {
    control = tweetsSource(materializer)
      .mapAsync(1) { msg =>
        logTweetEvent(msg.record.value())
          .map(_ => msg.committableOffset)
      }.batch(max = 20, first => CommittableOffsetBatch.empty.updated(first)) { (batch, elem) =>
        batch.updated(elem)
      }
      .mapAsync(3)(_.commitScaladsl())
      .to(Sink.ignore)
      .run()
    Future.successful(())
  }

  override def stop: Future[Unit] = {
    control.shutdown()
      .map(_ => ())
  }
}
Any help with this would be much appreciated. Thanks in advance.
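No answer is recorded in this thread, but one direction consistent with the purge attempt above is to isolate the streams state per test rather than restarting the cluster, e.g. giving every test a fresh application.id and state directory (an untested sketch; freshStreamsProperties is a hypothetical helper):

import java.nio.file.Files
import java.util.{Properties, UUID}
import org.apache.kafka.streams.StreamsConfig

// Give every test its own application.id and state dir so no broker-side
// consumer-group state or on-disk RocksDB state leaks between tests, even
// though the embedded cluster itself is started only once in beforeAll().
def freshStreamsProperties(bootstrapServers: String): Properties = {
  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, s"tweet-stream-test-${UUID.randomUUID()}")
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers)
  props.put(StreamsConfig.STATE_DIR_CONFIG, Files.createTempDirectory("kstreams-test").toString)
  props
}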

How can ExecutionContext be injected on Play Framework tests?

I want to create tests for my Play Framework application and I keep getting java.lang.RuntimeException: There is no started application. I have an asynchronous controller like this:
class ComputerController @Inject()(computerService: ComputerService)(implicit executionContext: ExecutionContext) {
  def add = Action.async {
    ComputerForm.form.bindFromRequest.fold(
      errorForm => Future.successful(Ok(errorForm.toString)),
      data => {
        val ip = data.ip
        val name = data.name
        val user = data.user
        val password = data.password
        val computer = Computer(ip, name, user, password)
        val futureTask = computerService.add(computer)
        futureTask.map(res => Redirect(routes.HomeController.home()))
      }
    )
  }
}
A helper trait for injecting:
trait Inject {
  val injector = new GuiceApplicationBuilder()
    .in(new File("conf/application.conf"))
    .in(Mode.Test)
    .injector
}
And the test is like this:
class ComputerControllerSpec extends PlaySpec with Inject with MockitoSugar with ScalaFutures {
  lazy val computerService = mock[ComputerService]
  when(computerService.add(any[Computer])) thenReturn Future.successful("Computer added")
  implicit lazy val executionContext = injector.instanceOf[ExecutionContext]
  val controller = new ComputerController(computerService)

  "Computer Controller" should {
    "add a new computer" in {
      val computer = ComputerFormData("127.0.0.1", "Computer", "user", "password")
      val computerForm = ComputerForm.form.fill(computer)
      val result = controller.add.apply {
        FakeRequest()
          .withFormUrlEncodedBody(computerForm.data.toSeq: _*)
      }
      val bodyText = contentAsString(result)
      bodyText mustBe ""
    }
  }
}
I have also tried:
Initializing the executionContext implicit value with ExecutionContext.global instead, and got java.lang.RuntimeException: There is no started application.
Adding with OneAppPerSuite to ComputerControllerSpec, and got akka.actor.OneForOneStrategy - Cannot initialize ExecutionContext; AsyncExecutor already shut down.
Changing "add a new computer" in { to "add a new computer" in new WithApplication {, and got java.lang.RuntimeException: There is no started application.
I don't really know how to inject that implicit ExecutionContext into my tests.
I guess you are missing "new WithApplication()".
Try this:
class ComputerControllerSpec extends PlaySpec with Inject with MockitoSugar with ScalaFutures {
  lazy val computerService = mock[ComputerService]
  when(computerService.add(any[Computer])) thenReturn Future.successful("Computer added")
  implicit lazy val executionContext = injector.instanceOf[ExecutionContext]
  val controller = new ComputerController(computerService)

  "Computer Controller" should {
    "add a new computer" in new WithApplication() {
      val computer = ComputerFormData("127.0.0.1", "Computer", "user", "password")
      val computerForm = ComputerForm.form.fill(computer)
      val result = controller.add.apply {
        FakeRequest()
          .withFormUrlEncodedBody(computerForm.data.toSeq: _*)
      }
      val bodyText = contentAsString(result)
      bodyText mustBe ""
    }
  }
}
The way this test got to work was:
Extending PlaySpec with MockitoSugar and BeforeAndAfterAll
Overriding:
// Before all the tests, start the fake Play application
override def beforeAll() {
  application.startPlay()
}

// After the tests execution, shut down the fake application
override def afterAll() {
  application.stopPlay()
}
And then all the tests run without the thrown exception.
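Assembled, the working spec skeleton looks roughly like this (how the application value is obtained is not shown in the thread, so the builder line below is an assumption, and play.api.Play.start/stop stand in for the startPlay()/stopPlay() helpers above):

class ComputerControllerSpec extends PlaySpec with MockitoSugar with ScalaFutures with BeforeAndAfterAll {
  // Assumption: built the same way as in the Inject trait above
  lazy val application = new GuiceApplicationBuilder().in(Mode.Test).build()

  override def beforeAll(): Unit = play.api.Play.start(application)
  override def afterAll(): Unit = play.api.Play.stop(application)

  // ... mocks, controller, and the test cases from the original spec ...
}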

Close or shutdown of H2 database after tests is not working

I am facing a problem with database clean-up after each test when using ScalaTest with Slick.
Here is the code of the test:
class H2DatabaseSpec extends WordSpec with Matchers with ScalaFutures with BeforeAndAfter {
  implicit override val patienceConfig = PatienceConfig(timeout = Span(5, Seconds))

  val h2DB: H2DatabaseService = new H2DatabaseService
  var db: Database = _

  before {
    db = Database.forConfig("h2mem1")
    h2DB.createSchema.futureValue
  }

  after {
    db.shutdown.futureValue
  }

  "H2 database" should {
    "query a question" in {
      val newQuestion: QuestionEntity = QuestionEntity(Some(1L), "First question")
      h2DB.insertQuestion(newQuestion).futureValue

      val question = h2DB.getQuestionById(1L)
      question.futureValue.get shouldBe newQuestion
    }
  }

  it should {
    "query all questions" in {
      val newQuestion: QuestionEntity = QuestionEntity(Some(2L), "Second question")
      h2DB.insertQuestion(newQuestion).futureValue

      val questions = h2DB.getQuestions
      questions.futureValue.size shouldBe 1
    }
  }
}
The database service just invokes the run method on the defined database:
class H2DatabaseService {
  val db = Database.forConfig("h2mem1")
  val questions = TableQuery[Question]

  def createSchema =
    db.run(questions.schema.create)

  def getQuestionById(id: Long): Future[Option[QuestionEntity]] =
    db.run(questions.filter(_.id === id).result.headOption)

  def getQuestions: Future[Seq[QuestionEntity]] =
    db.run(questions.result)

  def insertQuestion(question: QuestionEntity): Future[Int] =
    db.run(questions += question)
}

class Question(tag: Tag) extends Table[QuestionEntity](tag, "QUESTION") {
  def id = column[Option[Long]]("QUESTION_ID", O.PrimaryKey, O.AutoInc)
  def title = column[String]("TITLE")
  def * = (id, title) <> ((QuestionEntity.apply _).tupled, QuestionEntity.unapply)
}

case class QuestionEntity(id: Option[Long] = None, title: String) {
  require(!title.isEmpty, "title.empty")
}
And the database is defined as follows:
h2mem1 = {
  url = "jdbc:h2:mem:test1"
  driver = org.h2.Driver
  connectionPool = disabled
  keepAliveConnection = true
}
I am using Scala 2.11.8, Slick 3.1.1, H2 database 1.4.192 and ScalaTest 2.2.6.
The error that appears when the tests are executed is Table "QUESTION" already exists. So it looks like the shutdown() method has no effect at all (but it is invoked; I checked in the debugger).
Does anybody know how to handle such a scenario? How do I clean up the database properly after each test?
The database was not being cleaned up correctly because the method was invoked on a different object.
H2DatabaseService has its own db object and the test has its own. The issue was fixed after refactoring the H2 database service and invoking:
after {
  h2DB.db.shutdown.futureValue
}
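A minimal sketch of that refactoring, with the Database passed into the service so the test and the service operate on the same instance (constructor injection is an assumption; the thread only states that the service was refactored):

import scala.concurrent.Future
import slick.jdbc.H2Profile.api._

// The service no longer creates its own Database. The test creates one in
// before { ... }, hands it to the service, and shuts down that same instance after.
class H2DatabaseService(val db: Database) {
  val questions = TableQuery[Question]

  def createSchema: Future[Unit] = db.run(questions.schema.create)
  def insertQuestion(question: QuestionEntity): Future[Int] = db.run(questions += question)
}

// In the spec:
// before { db = Database.forConfig("h2mem1"); h2DB = new H2DatabaseService(db) }
// after  { h2DB.db.shutdown.futureValue }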