How to handle an exception during actor creation? - scala

I am trying to test an actor that uses Akka Typed Persistence. The scenario: the database is offline, and the actor under test should notify another actor that the database is not available.
During the creation of the child actor that persists the data to the database, the following error is thrown:
akka.actor.ActorInitializationException: akka://StoreTestOffline/user/OfflineStoreActor/StoreChilid: exception during creation
at akka.actor.ActorInitializationException$.apply(Actor.scala:202)
at akka.actor.ActorCell.create(ActorCell.scala:696)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:547)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:569)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:293)
at akka.dispatch.Mailbox.run(Mailbox.scala:228)
at akka.dispatch.Mailbox.exec(Mailbox.scala:241)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.IllegalArgumentException: Default journal plugin is not configured, see 'reference.conf'
at akka.persistence.Persistence$.verifyPluginConfigIsDefined(Persistence.scala:193)
at akka.persistence.typed.internal.EventSourcedSettings$.defaultJournalPluginId$1(EventSourcedSettings.scala:57)
at akka.persistence.typed.internal.EventSourcedSettings$.journalConfigFor(EventSourcedSettings.scala:61)
at akka.persistence.typed.internal.EventSourcedSettings$.apply(EventSourcedSettings.scala:39)
at akka.persistence.typed.internal.EventSourcedSettings$.apply(EventSourcedSettings.scala:22)
at akka.persistence.typed.internal.EventSourcedBehaviorImpl.apply(EventSourcedBehaviorImpl.scala:83)
at akka.actor.typed.Behavior$.start(Behavior.scala:331)
at akka.actor.typed.internal.InterceptorImpl$$anon$1.start(InterceptorImpl.scala:45)
at akka.actor.typed.internal.AbstractSupervisor.aroundStart(Supervision.scala:72)
at akka.actor.typed.internal.InterceptorImpl.preStart(InterceptorImpl.scala:68)
at akka.actor.typed.internal.InterceptorImpl$.$anonfun$apply$1(InterceptorImpl.scala:25)
at akka.actor.typed.Behavior$DeferredBehavior$$anon$1.apply(Behavior.scala:264)
at akka.actor.typed.Behavior$.start(Behavior.scala:331)
at akka.actor.typed.internal.adapter.ActorAdapter.preStart(ActorAdapter.scala:238)
at akka.actor.Actor.aroundPreStart(Actor.scala:550)
at akka.actor.Actor.aroundPreStart$(Actor.scala:550)
at akka.actor.typed.internal.adapter.ActorAdapter.aroundPreStart(ActorAdapter.scala:51)
at akka.actor.ActorCell.create(ActorCell.scala:676)
... 9 more
[ERROR] [08/30/2019 12:28:15.391] [StoreTestOffline-akka.actor.default-dispatcher-6] [akka://StoreTestOffline/user] death pact with Actor[akka://StoreTestOffline/user/OfflineStoreActor/StoreChilid#-1253371311] was triggered
akka.actor.typed.DeathPactException: death pact with Actor[akka://StoreTestOffline/user/OfflineStoreActor/StoreChilid#-1253371311] was triggered
at akka.actor.typed.Behavior$.interpretSignal(Behavior.scala:402)
at akka.actor.typed.internal.adapter.ActorAdapter.handleSignal(ActorAdapter.scala:128)
at akka.actor.typed.internal.adapter.ActorAdapter.aroundReceive(ActorAdapter.scala:87)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:612)
at akka.actor.dungeon.DeathWatch.$anonfun$receivedTerminated$1(DeathWatch.scala:67)
at akka.actor.dungeon.DeathWatch.$anonfun$receivedTerminated$1$adapted(DeathWatch.scala:65)
at scala.Option.foreach(Option.scala:438)
at akka.actor.dungeon.DeathWatch.receivedTerminated(DeathWatch.scala:65)
at akka.actor.dungeon.DeathWatch.receivedTerminated$(DeathWatch.scala:64)
at akka.actor.ActorCell.receivedTerminated(ActorCell.scala:447)
at akka.actor.ActorCell.autoReceiveMessage(ActorCell.scala:597)
at akka.actor.ActorCell.invoke(ActorCell.scala:580)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:268)
at akka.dispatch.Mailbox.run(Mailbox.scala:229)
at akka.dispatch.Mailbox.exec(Mailbox.scala:241)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
This is how I spawn the child:
object MessageSupervisorSpec {

  def create(communicator: Option[ActorRef[RpcCmd]], logger: Option[ActorRef[LogCmd]]): Behavior[MessageCmd] =
    Behaviors.setup { context =>
      context.log.info("=============> Start MessageSupervisorSpec <=============")

      val fault = Behaviors
        .supervise(Persistence.create(communicator, logger))
        .onFailure[ActorInitializationException](SupervisorStrategy.stop)
      val store = context.spawn(fault, "StoreChilid")
      context.watch(store)

      def loop(): Behavior[MessageCmd] =
        Behaviors.receiveMessage {
          case SaveMessage(v) =>
            println(v)
            store ! SaveMessage(v)
            Behavior.same
        }

      loop()
    }
}
The code of the child actor:
object Persistence {

  val storeName = "connector-store"

  /*
   * Persist all incoming messages from KAFKA or SAP
   */
  private val commandHandler: Option[ActorRef[RpcCmd]] => (MessageState, MessageCmd) => Effect[MessageEvent, MessageState] =
    communicator => { (_, command) =>
      command match {
        case SaveMessage(data) =>
          Effect
            .persist(MessageSaved(data))
            .thenRun(state => communicator.foreach(actor => actor ! SendMessage(state.value)))
      }
    }

  private val eventHandler: (MessageState, MessageEvent) => MessageState =
    { (_, event) =>
      event match {
        case MessageSaved(data) => MessageState(data)
      }
    }

  def create(communicator: Option[ActorRef[RpcCmd]], logger: Option[ActorRef[LogCmd]]): Behavior[MessageCmd] =
    Behaviors.setup { context =>
      context.log.info("=============> Start PersistenceMessageActor <=============")
      EventSourcedBehavior[MessageCmd, MessageEvent, MessageState](
        persistenceId = PersistenceId(storeName),
        emptyState = MessageState(),
        commandHandler = commandHandler(communicator),
        eventHandler = eventHandler)
        .onPersistFailure(SupervisorStrategy.restartWithBackoff(minBackoff = 10.seconds, maxBackoff = 60.seconds, randomFactor = 0.1))
        .receiveSignal {
          case (_, _) =>
            println("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
            context.log.error("PersistenceMessageActor has been terminated. Please check the persistence.")
            logger.foreach(actor => actor ! SaveLog(Log(Error, "Message store actor was stopped!")))
        }
    }
}
I expect to receive a Signal that I can handle accordingly, but I never receive one.
The question is: how do I handle the case when the actor cannot be started, so that I receive a signal?

It looks like the config is missing akka.persistence.journal.plugin in application.conf, which tells Akka which journal plugin to use (and where to store the journal) for the persistent actor.
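A rough sketch of that configuration, assuming the LevelDB journal and local snapshot store that ship with Akka (an in-memory journal is another common choice for tests):

  akka.persistence {
    journal.plugin = "akka.persistence.journal.leveldb"
    journal.leveldb.dir = "target/journal"
    snapshot-store.plugin = "akka.persistence.snapshot-store.local"
    snapshot-store.local.dir = "target/snapshots"
  }

Separately, the DeathPactException in the log appears because the parent watches the child but never handles its Terminated signal. Below is a minimal sketch, not the original code, of handling that signal in the parent; it assumes the Akka 2.5.x Typed API and reuses the MessageCmd, SaveMessage, RpcCmd, LogCmd and Persistence definitions from the question (ChildFailed extends Terminated and carries the failure cause):

  import akka.actor.ActorInitializationException
  import akka.actor.typed.{ ActorRef, Behavior, ChildFailed, SupervisorStrategy, Terminated }
  import akka.actor.typed.scaladsl.Behaviors

  def create(communicator: Option[ActorRef[RpcCmd]], logger: Option[ActorRef[LogCmd]]): Behavior[MessageCmd] =
    Behaviors.setup[MessageCmd] { context =>
      val fault = Behaviors
        .supervise(Persistence.create(communicator, logger))
        .onFailure[ActorInitializationException](SupervisorStrategy.stop)
      val store = context.spawn(fault, "StoreChilid")
      context.watch(store)

      Behaviors
        .receiveMessage[MessageCmd] {
          case SaveMessage(v) =>
            store ! SaveMessage(v)
            Behaviors.same
        }
        .receiveSignal {
          case (ctx, ChildFailed(ref, cause)) =>
            // the child could not be started (or crashed); notify whoever needs to know
            ctx.log.error("Store child {} failed: {}", ref.path, cause.getMessage)
            Behaviors.same
          case (ctx, Terminated(ref)) =>
            ctx.log.info("Store child {} stopped", ref.path)
            Behaviors.same
        }
    }

With such a handler the parent no longer dies with a DeathPactException when the child fails to start, and it can forward the "database not available" notification wherever it is needed.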

Related

Why does the Akka HTTP client throw an exception for any response status other than successful?

I am using Akka HTTP (v. 10.1.10) to create a client with a proxy.
Each time the response is anything other than successful, I get an error instead of a proper response entity:
akka.http.impl.engine.client.ProxyConnectionFailedException: The HTTP(S) proxy rejected to open a connection to hahahahahhahaahahhahaadsfsd.com:80 with status code: 503 Service Unavailable
at akka.http.impl.engine.client.HttpsProxyGraphStage$$anon$1$$anon$4.onPush(HttpsProxyGraphStage.scala:143)
at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:523)
at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:409)
at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:606)
at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:485)
at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:581)
at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:749)
at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:764)
at akka.actor.Actor.aroundReceive(Actor.scala:539)
at akka.actor.Actor.aroundReceive$(Actor.scala:537)
at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:671)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:612)
at akka.actor.ActorCell.invoke(ActorCell.scala:581)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:268)
at akka.dispatch.Mailbox.run(Mailbox.scala:229)
at akka.dispatch.Mailbox.exec(Mailbox.scala:241)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
In the Akka HTTP core code, I found that the proxy handler implementation deliberately fails the stage if the response is anything other than successful:
case Connecting =>
  val proxyResponse = grab(bytesIn)
  parser.parseBytes(proxyResponse) match {
    case NeedMoreData =>
      pull(bytesIn)
    case ResponseStart(_: StatusCodes.Success, _, _, _, _) =>
      var pushed = false
      val parseResult = parser.onPull()
      require(parseResult == ParserOutput.MessageEnd, s"parseResult should be MessageEnd but was $parseResult")
      parser.onPull() match {
        // NeedMoreData is what we emit in overridden `parseMessage` in case input.size == offset
        case NeedMoreData =>
        case RemainingBytes(bytes) =>
          push(sslOut, bytes) // parser already read more than expected, forward that data directly
          pushed = true
        case other =>
          throw new IllegalStateException(s"unexpected element of type ${other.getClass}")
      }
      parser.onUpstreamFinish()
      log.debug(s"HTTP(S) proxy connection to {}:{} established. Now forwarding data.", targetHostName, targetPort)
      state = Connected
      if (isAvailable(bytesOut)) pull(sslIn)
      if (isAvailable(sslOut)) pull(bytesIn)
    case ResponseStart(statusCode, _, _, _, _) =>
      failStage(new ProxyConnectionFailedException(s"The HTTP(S) proxy rejected to open a connection to $targetHostName:$targetPort with status code: $statusCode"))
    case other =>
      throw new IllegalStateException(s"unexpected element of type $other")
  }
I am wondering what the reason for such an implementation is, if someone knows? And how can I work around it to get a response entity instead of the error when the response from the server is not successful?
Try the recover function on the response Future to capture the failure and return the response entity you want:
Http().singleRequest(HttpRequest(
  uri = url
)).map(response => {
  response
}) recover {
  case e: Exception => {
    e.printStackTrace()
    // anything you want to return
    HttpResponse(StatusCodes.OK)
  }
}

Why does my Akka FSM event time out?

As a learning exercise for Akka FSM, I modeled a simplified order processing flow at a coffee shop. Attached is the state transition diagram. However, one of the test cases I wrote times out and I don't understand why.
FSM (case classes not shown for brevity):
class OrderSystem extends Actor with ActorLogging with LoggingFSM[State, Data] {

  startWith(OrderPending, Data(OrderPending, PaymentPending))

  when(OrderPending) {
    case Event(BaristaIsBusy, _) => stay
    case Event(BaristaIsAvailable(_, PaymentPending), _) => goto(OrderPlaced) using Data(stateName, PaymentPending)
    case Event(b: BaristaIsAvailable, _) => goto(OrderReady)
  }

  val waiting = Data(OrderPlaced, PaymentAccepted)

  when(OrderPlaced) {
    case Event(b: BaristaIsAvailable, `waiting`) => println("1"); goto(OrderReady)
    case Event(b: BaristaIsBusy, `waiting`) => println("2"); goto(OrderPending) using `waiting`
    case Event(_, Data(_, PaymentDeclined)) => println("3"); goto(OrderClosed)
    case Event(_, Data(_, PaymentPending)) => println("4"); stay
  }

  when(OrderReady) {
    case Event(HappyWithOrder, _) => goto(OrderClosed)
    case Event(NotHappyWithOrder, _) => goto(OrderPending) using Data(stateName, PaymentAccepted)
  }

  when(OrderClosed) {
    case _ => stay
  }

  whenUnhandled {
    case Event(e, s) => {
      // state name is available as 'stateName'
      log.warning("Received unhandled request {} in state {}/{}", e, stateName, s)
      stay
    }
  }

  // previous state data is available as 'stateData' and next state data as 'nextStateData'
  // not necessary as LoggingFSM (if configured) will take care of logging
  onTransition {
    case _ -> nextState => log.info("Entering state: {} with payment activity: {} from state: {} with payment activity: {}.",
      nextState, stateData.paymentActivity, nextStateData.fromState, nextStateData.paymentActivity)
  }

  initialize()
}
Failing test:
it should "stay in OrderPlaced state as long as customer has not paid" in {
val orderSystem = system.actorOf(Props[OrderSystem])
orderSystem ! BaristaIsAvailable(OrderPending, PaymentPending)
orderSystem ! SubscribeTransitionCallBack(testActor)
expectMsg(CurrentState(orderSystem, OrderPlaced))
orderSystem ! BaristaIsAvailable(OrderPlaced, PaymentPending)
expectMsg(CurrentState(orderSystem, OrderPlaced))
}
Logs:
2015-09-22 23:29:15.236 [order-system-akka.actor.default-dispatcher-2] [DEBUG] n.a.s.o.OrderSystem - processing Event(BaristaIsAvailable(OrderPending,PaymentPending),Data(OrderPending,PaymentPending)) from Actor[akka://order-system/system/testActor1#-2143558060]
2015-09-22 23:29:15.238 [order-system-akka.actor.default-dispatcher-2] [INFO ] n.a.s.o.OrderSystem - Entering state: OrderPlaced with payment activity: PaymentPending from state: OrderPending with payment activity: PaymentPending.
2015-09-22 23:29:15.239 [order-system-akka.actor.default-dispatcher-2] [DEBUG] n.a.s.o.OrderSystem - transition OrderPending -> OrderPlaced
4
2015-09-22 23:29:15.242 [order-system-akka.actor.default-dispatcher-2] [DEBUG] n.a.s.o.OrderSystem - processing Event(BaristaIsAvailable(OrderPlaced,PaymentPending),Data(OrderPending,PaymentPending)) from Actor[akka://order-system/system/testActor1#-2143558060]
- should stay in OrderPlaced state as long as customer has not paid *** FAILED ***
  java.lang.AssertionError: assertion failed: timeout (3 seconds)
SubscribeTransitionCallBack will only deliver a CurrentState once, and then only Transition callbacks.
You could try to do this:
it should "stay in OrderPlaced state as long as customer has not paid" in {
val orderSystem = TestFSMRef(new OrderSystem)
orderSystem ! SubscribeTransitionCallBack(testActor)
// fsm first answers with current state
expectMsgPF(1.second. s"OrderPending as current state for $orderSystem") {
case CurrentState('orderSystem', OrderPending) => ok
}
// from now on the subscription will yield 'Transition' messages
orderSystem ! BaristaIsAvailable(OrderPending, PaymentPending)
expectMsgPF(1.second, s"Transition from OrderPending to OrderPlaced for $orderSystem") {
case Transition(`orderSystem`, OrderPending, OrderPlaced) => ok
}
orderSystem ! BaristaIsAvailable(OrderPlaced, PaymentPending)
// there is no transition, so there should not be a callback.
expectNoMsg(1.second)
/*
// alternatively, if your state data changes, using TestFSMRef, you could check state data blocking for some time
awaitCond(
p = orderSystem.stateData == ???,
max = 2.seconds,
interval = 200.millis,
message = "waiting for expected state data..."
)
// awaitCond will throw an exception if the condition is not met within max timeout
*/
success
}

How to shutdown a Kafka ConsumerConnector

I have a system that pulls messages from a Kafka topic, and when it's unable to process messages because some external resource is unavailable, it shuts down the consumer, returns the message to the topic, and waits some time before starting the consumer again. The only problem is, shutting down doesn't work. Here's what I see in my logs:
2014-09-30 08:24:10,918 - com.example.kafka.KafkaConsumer [info] - [application-akka.actor.workflow-context-8] Shutting down kafka consumer for topic new-problem-reports
2014-09-30 08:24:10,927 - clients.kafka.ProblemReportObserver [info] - [application-akka.actor.workflow-context-8] Consumer shutdown
2014-09-30 08:24:11,946 - clients.kafka.ProblemReportObserver [warn] - [application-akka.actor.workflow-context-8] Sending 7410-1412090624000 back to the queue
2014-09-30 08:24:12,021 - clients.kafka.ProblemReportObserver [debug] - [kafka-akka.actor.kafka-consumer-worker-context-9] Message from partition 0: key=7410-1412090624000, msg=7410-1412090624000
There are a few layers at work here, but the important code is:
In KafkaConsumer.scala:
protected def consumer: ConsumerConnector = Consumer.create(config.asKafkaConfig)

def shutdown() = {
  logger.info(s"Shutting down kafka consumer for topic ${config.topic}")
  consumer.shutdown()
}
In the routine that observes messages:
(processor ? ProblemReportRequest(problemReportKey)).map {
  case e: ConnectivityInterruption =>
    val backoff = 10.seconds
    logger.warn(s"Can't connect to essential services, pausing for $backoff", e)
    stop()
    // XXX: Shutdown isn't instantaneous, so returning has to happen after a delay.
    // Unfortunately, there's still a race condition here, plus there's a chance the
    // system will be shut down before the message has been returned.
    system.scheduler.scheduleOnce(100 millis) { returnMessage(message) }
    system.scheduler.scheduleOnce(backoff) { start() }
    false
  case e: Exception => returnMessage(message, e)
  case _ => true
}.recover { case e => returnMessage(message, e) }
And the stop method:
def stop() = {
  if (consumerRunning.get()) {
    consumer.shutdown()
    consumerRunning.compareAndSet(true, false)
    logger.info("Consumer shutdown")
  } else {
    logger.info("Consumer is already shutdown")
  }
  !consumerRunning.get()
}
Is this a bug, or am I doing it wrong?
Because your consumer is a def, every call to it creates a new ConsumerConnector, so consumer.shutdown() shuts down that fresh instance rather than the one that is actually consuming. Make consumer a val instead.
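For example, a sketch of the same snippet with consumer as a val (a lazy val also works if construction needs to be deferred):

  // Create the connector once; shutdown() now stops the same instance that is consuming.
  protected val consumer: ConsumerConnector = Consumer.create(config.asKafkaConfig)

  def shutdown() = {
    logger.info(s"Shutting down kafka consumer for topic ${config.topic}")
    consumer.shutdown()
  }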

Play 2.1 Scala SQLException Connection Timed out waiting for a free available connection

I have been working on this issue for quite a while now and I cannot find a solution...
A web app built with play framework 2.2.1 using h2 db (for dev) and a simple Model package.
I am trying to implement a REST JSON endpoint and the code works... but only once per server instance.
def createOtherModel() = Action(parse.json) { request =>
  request.body \ "name" match {
    case _: JsUndefined => BadRequest(Json.obj("error" -> true,
      "message" -> "Could not match name =(")).as("application/json")
    case name: JsValue =>
      if (name.as[String] == "content") { // name must be "content"
        request.body \ "value" match {
          case _: JsUndefined => BadRequest(Json.obj("error" -> true,
            "message" -> "Could not match value =(")).as("application/json")
          case value: JsValue =>
            // this breaks the second time
            val session = ThinkingSession.dummy
            val json = Json.obj(
              "content" -> value,
              "thinkingSession" -> session.id
            )
            Ok(Json.obj("content" -> json)).as("application/json")
        }
      } else {
        BadRequest(Json.obj("error" -> true,
          "message" -> "Name was not content =(")).as("application/json")
      }
  }
}
So basically I read the JSON, echo the "value" value, create a model object, and send its id.
The ThinkingSession.dummy function does this:
def all(): List[ThinkingSession] = {
  // Tried explicitly closing connection, no difference
  //val conn = DB.getConnection()
  //try {
  //  DB.withConnection { implicit conn =>
  //    SQL("select * from thinking_session").as(ThinkingSession.DBParser *)
  //  }
  //} finally {
  //  conn.close()
  //}
  DB.withConnection { implicit conn =>
    SQL("select * from thinking_session").as(ThinkingSession.DBParser *)
  }
}

def dummy: ThinkingSession = {
  (all() head)
}
So this should do a SELECT * FROM thinking_session, create a list of model objects from the result, and return the first element of the list.
This works fine the first time after server start, but the second time I get:
play.api.Application$$anon$1: Execution exception[[SQLException: Timed out waiting for a free available connection.]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10.jar:2.2.1]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2$$anonfun$applyOrElse$3.apply(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2$$anonfun$applyOrElse$3.apply(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
at scala.Option.map(Option.scala:145) [scala-library.jar:na]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2.applyOrElse(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
Caused by: java.sql.SQLException: Timed out waiting for a free available connection.
at com.jolbox.bonecp.DefaultConnectionStrategy.getConnectionInternal(DefaultConnectionStrategy.java:88) ~[bonecp.jar:na]
at com.jolbox.bonecp.AbstractConnectionStrategy.getConnection(AbstractConnectionStrategy.java:90) ~[bonecp.jar:na]
at com.jolbox.bonecp.BoneCP.getConnection(BoneCP.java:553) ~[bonecp.jar:na]
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:131) ~[bonecp.jar:na]
at play.api.db.DBApi$class.getConnection(DB.scala:67) ~[play-jdbc_2.10.jar:2.2.1]
at play.api.db.BoneCPApi.getConnection(DB.scala:276) ~[play-jdbc_2.10.jar:2.2.1]
My application.conf (db section)
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:file:database/[my_db]"
db.default.logStatements=true
db.default.idleConnectionTestPeriod=5 minutes
db.default.connectionTestStatement="SELECT 1"
db.default.maxConnectionAge=0
db.default.connectionTimeout=10000
Initially the only thing set in my config was the connection, and the error occurred; I added all the other settings while reading up on the issue on the web.
What is interesting is that when I use the H2 in-memory db, it works once after server start and fails after that. When I use the H2 file-system db, it only works once, regardless of the server instance.
Can anyone give me some insight into this issue? I have found some posts about BoneCP problems and tried upgrading to 0.8.0-rc1, but nothing changed... I am at a loss =(
Try setting a maxConnectionAge and an idle timeout.
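A sketch of what that might look like (the values are placeholders; these are among the BoneCP settings Play 2.2 exposes):

  db.default.maxConnectionAge=30 minutes
  db.default.idleMaxAge=10 minutes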
Turns out the error was somewhere else entirely... it was a good ol' stack overflow; I have not seen one in a long time. I tried down-voting my own question, but that's not possible^^

ReactiveMongo plugin for Play Framework seems to restart with every query

I am trying to write a Play Framework 2.1 application with ReactiveMongo, following this sample. However, with every call to the plugin, it seems that the application hangs after the operation completes, then the plugin closes and restarts, and we move on. Functionality works, but I am not sure whether it's crashing and restarting along the way.
code:
def db = ReactiveMongoPlugin.db
def nodesCollection = db("nodes")

def index = Action { implicit request =>
  Async {
    Logger.debug("serving nodes list")
    implicit val nodeReader = Node.Node7BSONReader
    val query = BSONDocument(
      "$query" -> BSONDocument()
    )
    val found = nodesCollection.find(query)
    found.toList.map { nodes =>
      Logger.debug("returning nodes list to requester")
      Ok(views.html.nodes.nodes(nodes))
    }
  }
}

def showCreationForm = Action { implicit request =>
  Ok(views.html.nodes.editNode(None, Node.nodeCredForm))
}

def create = Action { implicit request =>
  Node.nodeCredForm.bindFromRequest.fold(
    errors => {
      Ok(views.html.nodes.editNode(None, errors))
    },
    node => AsyncResult {
      Node.createNode(node._1, node._2, node._3) match {
        case Right(myNode) => {
          nodesCollection.insert(myNode).map { _ =>
            Redirect(routes.Nodes.index).flashing("success" -> "Node Added")
          }
        }
        case Left(message) => {
          Future(Redirect(routes.Nodes.index).flashing("error" -> message))
        }
      }
    }
  )
}
logging:
[debug] application - in Node constructor
[debug] application - done inseting, redirecting to nodes page
--- (RELOAD) ---
[info] application - ReactiveMongoPlugin stops, closing connections...
[info] application - ReactiveMongo stopped. [Success(Closed)]
[info] application - ReactiveMongoPlugin starting...
What is wrong with this picture?
There seems to be nothing wrong with that picture. If you had only shown me that log output, I would say you had changed a file in your Play application, which would cause the application to reload.
I guess that is not the case, so your database is probably located within your application directory, causing the application to reload on each change. Where is your database located?