I have a SwingWorker actor which computes a plot for display from a parameters object it gets sent, then draws the plot on the EDT. Some GUI elements can tweak the parameters for this plot; when they change I build a new parameter object and send it to the worker.
This works so far.
Now, when moving a slider, many events are created and queue up in the worker's mailbox, but I only need to compute the plot for the very last set of parameters. Is there a way to drop all messages from the mailbox, keep only the last one, and process just that?
Currently the code looks like this:
val worker = new SwingWorker {
  def act() {
    while (true) {
      receive {
        case params: ExperimentParameters =>
          // somewhat expensive
          val result = RunExperiments.generateExperimentData(params)
          Swing.onEDT { GuiElement.redrawWith(result) }
      }
    }
  }
}
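For reference, the GUI side just builds a fresh parameters object on every change and sends it to the worker, roughly like this (a sketch only; the slider, panel, and the ExperimentParameters constructor here are made up, not part of my actual code):
// Hypothetical GUI wiring: each slider movement mails a fresh parameter object to the worker.
import scala.swing._
import scala.swing.event.ValueChanged

val amplitudeSlider = new Slider { min = 0; max = 100; value = 50 }

val controls = new BoxPanel(Orientation.Vertical) {
  contents += amplitudeSlider
  listenTo(amplitudeSlider)
  reactions += {
    case ValueChanged(`amplitudeSlider`) =>
      // Hypothetical constructor; the real ExperimentParameters fields are not shown above.
      worker ! new ExperimentParameters(amplitudeSlider.value)
  }
}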
Meanwhile I have found a solution: check the actor's mailbox size and simply skip the message if it is not 0, since a newer set of parameters is already waiting.
val worker = new SwingWorker {
  def act() {
    while (true) {
      receive {
        case params: ExperimentParameters =>
          // only render if no newer parameters are already queued
          if (mailboxSize == 0) {
            // somewhat expensive
            val result = RunExperiments.generateExperimentData(params)
            Swing.onEDT { GuiElement.redrawWith(result) }
          }
      }
    }
  }
}
Another approach: remember the last event without processing it, wait with a very short timeout, and process the remembered event once the timeout fires.
It could look like this (not tested):
while (true) {
  var lastReceived: Option[ExperimentParameters] = None
  // block until at least one message arrives
  receive { case params: ExperimentParameters => lastReceived = Some(params) }
  // then drain the mailbox, keeping only the newest parameters
  while (lastReceived.isDefined) {
    receiveWithin(0) {
      case params: ExperimentParameters => lastReceived = Some(params)
      case TIMEOUT =>
        // mailbox is empty: process the newest parameters and clear the slot
        val result = RunExperiments.generateExperimentData(lastReceived.get)
        Swing.onEDT { GuiElement.redrawWith(result) }
        lastReceived = None
    }
  }
}
I have a sessionization use case. I keep my sessions in memory thanks to mapWithState() and update them for each incoming log. When a session ends, signaled by a specific log, I want to retrieve it and remove it from my State.
The problem I stumble upon is that I cannot retrieve AND remove (remove()) my session at the end of each batch: retrieval happens outside the updateFunction() and the removal within it. In other words, once removed the session can no longer be retrieved, and once a session ends there should be no more logs for it, hence no more calls for that key.
I can still retrieve my ended sessions without removing them, but then the number of "dead" sessions kept in the state keeps growing, which, left unchecked, will eventually threaten the system itself. This solution is not acceptable.
As it seems like a common use case, I was wondering if anyone had come up with a solution?
EDIT
Sample code below:
def mapWithStateContainer(iResultParsing: DStream[(String, SessionEvent)]) = {
  val lStateSpec = StateSpec.function(stateUpdateFunction _).timeout(Seconds(TIMEOUT))
  val lResultMapWithState: DStream[(String, Session)] =
    iResultParsing.mapWithState(lStateSpec).stateSnapshots()
  val lClosedSession: DStream[(String, Session)] =
    lResultMapWithState.filter(_._2.mTimeout)
  // ideally remove lClosedSession from the state here
}
private def stateUpdateFunction(iKey: String,
                                iValue: Option[SessionEvent],
                                iState: State[Session]): Option[(String, Session)] = {
  var lResult: Option[(String, Session)] = None
  if (iState.isTimingOut()) {
    val lClosedSession = iState.get()
    lClosedSession.mTimeout = true
    lResult = Some(iKey, lClosedSession)
  } else if (iState.exists) {
    val lUpdatedSession = updateSession(iState.get(), iValue)
    iState.update(lUpdatedSession)
    lResult = Some(iKey, lUpdatedSession)
    // we wish to remove lUpdatedSession from the state once retrieved with lResult
    /*if (lUpdatedSession.mTimeout) {
      iState.remove()
      lResult = None
    }*/
  } else {
    val lInitialState = initSession(iValue)
    iState.update(lInitialState)
    lResult = Some(iKey, lInitialState)
  }
  lResult
}
private def updateSession(iCurrentSession: Session,
                          iNewData: Option[SessionEvent]): Session = {
  // user disconnects manually
  if (iNewData.get.mDisconnection) {
    iCurrentSession.mTimeout = true
  }
  iCurrentSession
}
Instead of calling stateSnapshots() on the result of mapWithState, you can return the updated state as the return value of your mapping function. This way, the finalized state is always available outside your stateful DStream.
This means that you can do:
else if (iState.exists) {
  val lUpdatedSession = updateSession(iState.get(), iValue)
  iState.update(lUpdatedSession)
  if (lUpdatedSession.mTimeout) {
    iState.remove()
  }
  Some(iKey, lUpdatedSession)
}
And now change your graph to:
val lResultMapWithState = iResultParsing
  .mapWithState(lStateSpec)
  .flatMap(_.toSeq) // the mapping function returns an Option, so unwrap it before filtering
  .filter { case (_, session) => session.mTimeout }
What happens now is that the state is removed internally, but because you're returning it from your StateSpec function, it is still available to you outside for further processing.
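Downstream you can then consume the closed sessions like any other DStream; a rough sketch (the println is just a placeholder for whatever persistence your pipeline needs):
lResultMapWithState.foreachRDD { rdd =>
  rdd.foreach { case (key, session) =>
    // placeholder: persist or publish the finished session as required
    println(s"Session $key closed, timeout = ${session.mTimeout}")
  }
}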
An API I'm polling has a field, cachedUntil, that defines how long the returned value is cached. The goal is to create an Observable that polls and emits an event each time the cache has expired. What distinguishes this case is that the caching interval is not regular, i.e. Observable.interval does not apply.
In what ways is it possible to implement an Observable that has this behaviour?
The following snippet gives a function that polls the API, emits the requested events, and returns the delay until the next call, derived from cachedUntil.
def getContracts(subscriber: Subscriber[Set[EveContract]]): Option[Long] = {
  logger.debug("Fetching new contracts")
  try {
    val response = parser.getResponse(auth)
    if (response == null) {
      subscriber.onError(new RuntimeException("Unable to fetch contracts from EVE servers"))
      None
    } else if (response.hasError) {
      logger.error(response.getError.toString)
      subscriber.onError(new RuntimeException(response.getError.toString))
      None
    } else {
      subscriber.onNext(response.getAll.toSet) // Emit new polled data
      Some(response.getCachedUntil.getTime - new Date().getTime) // Return the cache delay
    }
  } catch {
    case aex: ApiException ⇒
      logger.error("An error occurred when querying the EVE API.")
      logger.debug("ApiException: ", aex)
      subscriber.onError(aex)
      None
  }
}
It is possible to use Scheduler workers to reschedule a call to getContracts:
Observable[Set[EveContract]](observer ⇒ {
  val worker = Schedulers.newThread().createWorker()

  def scheduleContracts(delay: Long) {
    worker.schedule(new Action0 {
      override def call() {
        if (!observer.isUnsubscribed) {
          getContracts(observer) match {
            // Reschedule a contract fetch after the returned delay has passed.
            case Some(d) ⇒
              logger.debug(s"Rescheduling contract fetch in: ${d / 1000} s")
              scheduleContracts(d)
            case _ ⇒
              // Otherwise do nothing.
              logger.debug("Not rescheduling contract fetch, an error has occurred.")
          }
        } else {
          logger.trace("Subscriber has unsubscribed.")
        }
      }
    }, delay, TimeUnit.MILLISECONDS)
  }

  scheduleContracts(0L)
})
However, I'm very interested in possible other solutions.
I want to omit all messages of the same type except the last one:
def receive = {
  case Message(type: MessageType, data: Int) =>
    // remove previous and get only the last message of the passed MessageType
}
for example when I send:
actor ! Message(MessageType.RUN, 1)
actor ! Message(MessageType.RUN, 2)
actor ! Message(MessageType.FLY, 1)
then I want to receive only:
Message(MessageType.RUN, 2)
Message(MessageType.FLY, 1)
Of course this only matters when the messages are sent very quickly, or under high CPU load.
You could wait a very short amount of time, storing the most recent messages that arrive, and then process only those most recent ones. This can be accomplished by sending messages to yourself together with scheduleOnce; see the second example under Akka HowTo: Common Patterns, Scheduling Periodic Messages. Instead of scheduling ticks whenever the last tick ends, you can wait until new messages arrive. Here's an example of something like that:
import akka.actor.Cancellable
import scala.concurrent.duration._

case class ProcessThis(msg: Message)
case object ProcessNow

var onHold = Map.empty[MessageType, Message]
var timer: Option[Cancellable] = None

def receive = {
  case msg @ Message(t, _) =>
    // keep only the latest message of each type, and start the timer if not already running
    onHold += t -> msg
    if (timer.isEmpty) {
      import context.dispatcher
      timer = Some(context.system.scheduler.scheduleOnce(1.millis, self, ProcessNow))
    }
  case ProcessNow =>
    timer foreach { _.cancel() }
    timer = None
    for (m <- onHold.values) self ! ProcessThis(m)
    onHold = Map.empty
  case ProcessThis(Message(t, data)) =>
    // really process the message
}
Incoming Messages are not actually processed right away, but are stored in a Map that keeps only the last of each MessageType. On the ProcessNow tick message, they are really processed.
You can change the length of time you wait (in my example set to 1 millisecond) to strike a balance between responsiveness (the time from a message arriving to a response) and efficiency (CPU or other resources used or held up).
type is a reserved word in Scala and not a good name for a field, so let's use messageType instead. This code should do what you want:
var lastMessage: Option[Message] = None

def receive = {
  case m: Message =>
    if (lastMessage.fold(false)(_.messageType != m.messageType)) {
      // do something with lastMessage.get
    }
    lastMessage = Some(m)
}
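Both answers above assume message definitions roughly along these lines (a sketch only; the question doesn't show the actual declarations):
// Hypothetical declarations matching the patterns used above.
sealed trait MessageType
object MessageType {
  case object RUN extends MessageType
  case object FLY extends MessageType
}

case class Message(messageType: MessageType, data: Int)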
Let's say you have a GUI component and 10 threads all tell it to repaint at essentially the same time, so that they all arrive before a single paint operation takes place. Instead of naively wasting resources repainting 10 times, just merge/ignore all but the last one and repaint once (or more likely twice: once for the first and once for the last). My understanding is that the Swing repaint manager does this.
Is there a way to accomplish this same type of behavior in a Scala Actor? Is there a way to look at the queue and merge messages, or to ignore all but the last of a certain type?
Something like this?:
// assumes lastRepaint (a var holding the time of the last repaint, in millis)
// and minInterval are defined elsewhere
act =
  loop {
    react {
      case Repaint(a, b) =>
        if (lastRepaint + minInterval < System.currentTimeMillis) {
          lastRepaint = System.currentTimeMillis
          repaint(a, b)
        }
    }
  }
If you want to repaint whenever the actor's thread gets a chance, but no more often than that, then:
(UPDATE: repainting using the last message arguments)
act =
  loop {
    react {
      case r @ Repaint(_, _) =>
        var lastMsg = r
        // drain any queued Repaint messages, keeping only the newest one
        def findLast: Unit = {
          reactWithin(0) {
            case r @ Repaint(_, _) =>
              lastMsg = r
              findLast
            case TIMEOUT => repaint(lastMsg.a, lastMsg.b)
          }
        }
        findLast
    }
  }
So I want to write some network code that appears to be blocking, without actually blocking a thread. I'm going to send some data out on the wire, and have a 'queue' of responses that will come back over the network. I wrote up a very simple proof of concept, inspired by the producer/consumer example on the actor tutorial found here: http://www.scala-lang.org/node/242
The thing is, using receive appears to take up a thread, so I'm wondering if there's any way to avoid taking up a thread and still get the 'blocking feel'. Here's my code sample:
import scala.actors.Actor._
import scala.actors.Actor

case class Request(s: String)
case class Message(s: String)

class Connection {
  private val act: Actor = actor {
    loop {
      react {
        case m: Message => receive { case r: Request => reply { m } }
      }
    }
  }

  def getNextResponse(): Message = {
    (act !? new Request("get")).asInstanceOf[Message]
  }

  // this would call the network layer and send something over the wire
  def doSomething() {
    generateResponse()
  }

  // this is simulating the network layer getting some data back
  // and sending it to the appropriate Connection object
  private def generateResponse() {
    act ! new Message("someData")
    act ! new Message("moreData")
    act ! new Message("even more data")
  }
}

object runner extends Application {
  val conn = new Connection()
  conn.doSomething()
  println(conn.getNextResponse())
  println(conn.getNextResponse())
  println(conn.getNextResponse())
}
Is there a way to do this without using receive, and thereby make it threadless?
You could rely directly on react, which should block without taking up a thread:
class Connection {
  private val act: Actor = actor {
    loop {
      react {
        case r: Request => reply { r }
      }
    }
  }
[...]
I expect that you can use react rather than receive and not have actors take up threads the way receive does. There is a thread on this issue at receive vs react.
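For the Connection above, that could look roughly like this (untested sketch): replace the inner receive with a nested react, so the actor is simply re-scheduled when the Request arrives instead of parking a thread.
class Connection {
  private val act: Actor = actor {
    loop {
      react {
        case m: Message =>
          // wait for the next Request without holding a thread
          react {
            case r: Request => reply(m)
          }
      }
    }
  }
  // getNextResponse, doSomething, generateResponse unchanged
}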