Scala, Swing: thread issue with the Event Dispatch Thread (actors)

I have a Scala class inheriting from SimpleSwingApplication. This class defines a window (with def top = new MainFrame) and instantiates an actor. The actor's code is simple:
class Deselectionneur extends Actor {
  def act() {
    while (true) {
      receive {
        case a: sTable => {
          Thread.sleep(3000)
          a.peer.changeSelection(0, 0, false, false)
          a.peer.changeSelection(0, 0, true, false)
        }
      }
    }
  }
}
The main class also uses "Substance", an API for GUI customization (no more ugly Swing controls with it!).
The actor is triggered when my mouse leaves a given Swing table; it then deselects all the rows of that table.
The actor behaves very well, but when I run my program, each time the actor is called I get this error message:
org.pushingpixels.substance.api.UiThreadingViolationException: State tracking must be done on Event Dispatch Thread
Do you know how I can get rid of this error message?

You need to move the GUI update onto the EDT.
Something like this (I haven't compiled it):
case a: sTable => {
  Thread.sleep(3000) // sleep on the actor's thread; sleeping inside onEDT would freeze the GUI
  scala.swing.Swing.onEDT {
    a.peer.changeSelection(0, 0, false, false)
    a.peer.changeSelection(0, 0, true, false)
  }
}
Some background on the EDT can be found here: http://docs.oracle.com/javase/tutorial/uiswing/concurrency/initial.html
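For reference, a self-contained version of the corrected actor could look like the sketch below (not compiled; it assumes sTable is the question's scala.swing.Table subtype and the old scala.actors API from the question):

import scala.actors.Actor
import scala.swing.Swing

class Deselectionneur extends Actor {
  def act() {
    while (true) {
      receive {
        case a: sTable =>
          Thread.sleep(3000)  // wait on the actor's thread, not on the EDT
          Swing.onEDT {       // only the Swing mutations run on the EDT
            a.peer.changeSelection(0, 0, false, false)
            a.peer.changeSelection(0, 0, true, false)
          }
      }
    }
  }
}

Substance checks the calling thread on every component state change, which is why the violation disappears once the changeSelection calls are marshalled onto the EDT.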

Related

Spark throws Not Serializable Exception inside a foreachRDD operation

I'm trying to implement an observer pattern using Scala and Spark Streaming. The idea is that whenever I receive a record from the stream (from Kafka), I notify the observer by calling the method "notifyObservers" inside the closure. Here's the code:
The stream is provided by the Kafka utils.
The method notifyObservers is defined in an abstract class, following the rules of the pattern.
I think the error is related to the fact that methods can't be serialized.
Am I thinking correctly? And if so, what kind of solution should I follow?
Thanks
def onMessageConsumed() = {
  stream.foreachRDD(rdd => {
    rdd.foreach(consumerRecord => {
      val record = new Record[T](consumerRecord.topic(),
                                 consumerRecord.value())
      // notify observers with the record to compute
      notifyObservers(record)
    })
  })
}
Yes: the classes used in code that is sent to other executors (executed in foreach, etc.) should implement the Serializable interface.
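A minimal sketch of what that can look like, assuming the Record[T] and observer types described in the question (the names here are illustrative, not the asker's actual code):

trait Observer[T] extends Serializable {
  def update(record: Record[T]): Unit
}

abstract class Observable[T] extends Serializable {
  private var observers: List[Observer[T]] = Nil
  def addObserver(o: Observer[T]): Unit = { observers = o :: observers }
  def notifyObservers(record: Record[T]): Unit = observers.foreach(_.update(record))
}

case class Record[T](topic: String, value: T)

The closure passed to rdd.foreach captures this in order to call notifyObservers, so the enclosing class and every observer it references are serialized and shipped to the executors; that is why all of them have to be Serializable.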
Also, if your notification code requires a connection to some resource, you need to wrap foreach into foreachPartition, something like this:
stream.foreachRDD(rdd => {
  rdd.foreachPartition(rddPartition => {
    // set up connection to external component
    rddPartition.foreach(consumerRecord => {
      val record = new Record[T](consumerRecord.topic(),
                                 consumerRecord.value())
      notifyObservers(record)
    })
    // close connection to external component
  })
})

Akka. How to know that all children actors finished their job

I created a Master actor and child actors (created using a router from the Master).
The Master receives some Job and splits it into small tasks, which it sends to the child actors (to the routees).
The problem I am trying to solve is how to properly notify the Master when the child actors have finished their job.
In some tutorials (the Pi approximation, and an example from the Scala in Action book), after receiving the responses from the children, the Master actor compares the size of the initial list of tasks with the size of the received results:
if (receivedResultsFromChildren.size == initialTasks.size) {
  // this means the children finished their job
}
But I think this is very fragile, because if some child actor throws an exception, it will not send a result back to the sender (back to the Master), so this condition will never evaluate to true.
So how do I properly notify the Master that all children have finished their jobs?
I think one option is to Broadcast(PoisonPill) to the children and then listen for the Terminated(router) message (using so-called deathWatch). Is that an OK solution?
If using Broadcast(PoisonPill) is better, should I register a supervision strategy that stops a routee in case of an exception? Because if an exception occurs, the routee will be restarted (as far as I know), which means the Master actor would never receive Terminated(router). Is that correct?
In Akka this is actually quite simple.
The successful children can send an ordinary reply message to the parent actor. Unexpected failures from failing actors can be caught in the supervision strategy and handled appropriately (e.g. by restarting the actor, or by stopping it and removing it from the set of actors to wait for).
So it could look something like this:
var waitingFor = Set.empty[ActorRef]

override def preStart() = ??? // Start the children with their subtasks

// within the decider, sender() is the failing child
override val supervisorStrategy = OneForOneStrategy() {
  case _ => {
    waitingFor -= sender()
    if (waitingFor.isEmpty) ??? // processing finished
    Stop
  }
}

override def receive = {
  case Reply => {
    waitingFor -= sender()
    if (waitingFor.isEmpty) ??? // processing finished
  }
}
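As for the deathwatch idea from the question, a rough sketch (the Reply message and the Master constructor are illustrative) would be to watch every routee, stop failing children instead of restarting them, and count Terminated messages:

import akka.actor._
import akka.actor.SupervisorStrategy.Stop

case class Reply(result: Any)

class Master(routees: Seq[ActorRef]) extends Actor {
  var waitingFor = routees.toSet

  // deathwatch: we receive a Terminated for every child that stops
  override def preStart() = waitingFor.foreach(context.watch)

  // stop a failing child so it terminates instead of being silently restarted
  override val supervisorStrategy = OneForOneStrategy() {
    case _: Exception => Stop
  }

  def receive = {
    case Reply(result) =>
      context.unwatch(sender())
      waitingFor -= sender()
      if (waitingFor.isEmpty) { /* processing finished */ }
    case Terminated(child) =>
      waitingFor -= child
      if (waitingFor.isEmpty) { /* processing finished, some tasks failed */ }
  }
}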

How do I warm up an actor's state from database when starting up?

My requirement is to start a long-running process that tags all the products that have expired. It runs every night at 1:00 AM. Customers may be accessing some of the products on the website around the time the job runs, so those products have actor instances; the others live only in the persistent store, not yet having instances because no customers are accessing them.
Where should I hook up the logic that reads an actor's latest state from the persistent store and creates a brand-new actor? Should I put that call in the PreStart override? If so, how can I tell the ProductActor that a new actor is being created?
Or should I send the ProductActor a message like LoadMeFromAzureTable, which loads the state from the persistent store after the actor has been created?
There are different ways to do it depending on what you need; there isn't precisely one "right" answer.
You could use a Persistent Actor to recover state from a durable store automatically on startup (or to recover in case of a crash). Or, if you don't want to use that module (still in beta as of July 2015), you could do it yourself in one of two ways:
1) You could load your state in PreStart, but I'd only go with this if you can make the operation async via your database client and use the PipeTo pattern to send the results back to yourself incrementally. But if you need to have ALL the state resident in memory before you start doing work, then you need to...
2) Make a finite state machine using behavior switching. Start in a gated state, send yourself a message to load your data, and stash everything that comes in. Then switch to a receiving state and unstash all messages when your state is done loading. This is the approach I prefer.
Example (just mocking the DB load with a Task):
public class ProductActor : ReceiveActor, IWithUnboundedStash
{
    public IStash Stash { get; set; }

    public ProductActor()
    {
        // begin in gated state
        BecomeLoading();
    }

    private void BecomeLoading()
    {
        Become(Loading);
        LoadInitialState();
    }

    private void Loading()
    {
        Receive<DoneLoading>(done =>
        {
            BecomeReady();
        });

        // stash any messages that come in until we're done loading
        ReceiveAny(o =>
        {
            Stash.Stash();
        });
    }

    private void LoadInitialState()
    {
        // load your state here async & send back to self via PipeTo
        Task.Run(() =>
        {
            // database loading task here
            return new Object();
        }).ContinueWith(tr =>
        {
            // do whatever (e.g. error handling)
            return new DoneLoading();
        }).PipeTo(Self);
    }

    private void BecomeReady()
    {
        Become(Ready);

        // our state is ready! put all those stashed messages back in the mailbox
        Stash.UnstashAll();
    }

    private void Ready()
    {
        // handle those unstashed + new messages...
        ReceiveAny(o =>
        {
            // do whatever you need to do...
        });
    }
}
/// <summary>
/// Marker message signalling that the initial load has completed.
/// </summary>
public class DoneLoading {}
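For completeness, the Persistent Actor route mentioned above would look roughly like the sketch below. It uses JVM Akka's Scala API for illustration (Akka.NET's Akka.Persistence mirrors it), and the TagExpired / ProductTagged messages are made up for the example:

import akka.persistence._

case class TagExpired(id: String)
case class ProductTagged(id: String)

class PersistentProductActor extends PersistentActor {
  override def persistenceId = "product-actor"

  private var expired = Set.empty[String]

  // replayed automatically from the journal on startup (and after a crash)
  override def receiveRecover: Receive = {
    case ProductTagged(id) => expired += id
  }

  override def receiveCommand: Receive = {
    case TagExpired(id) =>
      persist(ProductTagged(id)) { evt => expired += evt.id }
  }
}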

New Relic async tracing in Akka without Play

I have an application written using the Akka library, and I want to use New Relic to monitor it. I've noticed that in the case of Play applications, all async actions during request handling are handled properly, and all actors involved are shown in the web transaction trace.
But when I try to instrument a pure Akka application using custom Java transaction traces, I can't achieve the same result: all traces consist of just one line with the doJob method name. Code below:
case class NewRelicRequest(...) extends Request { ... }
case class NewRelicResponse(...) extends Response { ... }

class MyApiActor extends Actor {
  def receive = {
    case MyRequest(_) => doJob(...)
    case MyOtherRequest(_) => doOtherJob(...)
  }

  @Trace(dispatcher = true)
  private def doJob(...) {
    NewRelic.setRequestAndResponse(NewRelicRequest("/doJob"), NewRelicResponse(...))
    fooActor ! msg
  }

  @Trace(dispatcher = true)
  private def doOtherJob(...) {
    NewRelic.setRequestAndResponse(NewRelicRequest("/doOtherJob"), NewRelicResponse(...))
    (barActor ? msg).pipeTo(sender)
  }
}
Can someone explain which cases are supported, and how I can achieve async traces similar to those I see for Play apps?

Using Akka with Scalatra

My target is building a highly concurrent backend for my widgets. I currently expose the backend as a web service which receives requests to run a specific widget (using Scalatra), fetches the widget's code from the DB, and runs it in an actor (using Akka) which then replies with the results. So imagine I'm doing something like:
get("/run:id") {
...
val actor = Actor.actorOf("...").start
val result = actor !! (("Run",id), 10000)
...
}
Now I believe this is not the best concurrent solution and I should somehow combine listening for requests and running widgets in one actor implementation. How would you design this for maximum concurrency? Thanks.
You can start your actors in an Akka boot file or in your own ServletContextListener, so that they are started without being tied to a servlet.
Then you can look them up with the Akka registry:
Actor.registry.actorFor[MyActor] foreach { _ !! (("Run", id), 10000) }
Apart from that, there is no real integration between Akka and Scalatra at the moment, so for now the best you can do is use blocking requests to a bunch of actors.
I'm not sure, but I wouldn't necessarily spawn an actor for each request; rather, have a pool of widget actors to which you can send those requests. If you use a supervisor hierarchy, you can use a supervisor to resize the pool if it is too big or too small.
class MyContextListener extends ServletContextListener {
  def contextInitialized(sce: ServletContextEvent) {
    val factory = SupervisorFactory(
      SupervisorConfig(
        OneForOneStrategy(List(classOf[Exception]), 3, 1000),
        Supervise(actorOf[WidgetPoolSupervisor], Permanent) :: Nil))
    factory.newInstance.start
  }

  def contextDestroyed(sce: ServletContextEvent) {
    Actor.registry.shutdownAll()
  }
}
class WidgetPoolSupervisor extends Actor {
  self.faultHandler = OneForOneStrategy(List(classOf[Exception]), 3, 1000)

  override def preStart() {
    (1 to 5) foreach { _ =>
      self.spawnLink[MyWidgetProcessor]
    }
    Scheduler.schedule(self, 'checkPoolSize, 5, 5, TimeUnit.MINUTES)
  }

  protected def receive = {
    case 'checkPoolSize => {
      // Implement logic that checks how quickly the actors respond, and if
      // it takes too long, add some actors to the pool.
      // As a bonus you can keep downsizing the pool until it reaches 1,
      // or until messages start coming back too late.
    }
  }
}
class ScalatraApp extends ScalatraServlet {
  get("/run/:id") {
    // The !! construct should not appear anywhere else in your code except
    // in the Scalatra action. You don't want to block anywhere else, but in a
    // Scalatra action it's OK, as the web request itself is synchronous and
    // needs to wait for the full response to come back anyway.
    Actor.registry.actorFor[MyWidgetProcessor] map {
      _ !! (("Run", id), 10000)
    } getOrElse {
      throw new HeyIExpectedAResultException()
    }
  }
}
Please regard the code above as pseudo code that happens to look like Scala; I just wanted to illustrate the concept.