Changing Akka actor state by passing a method with arguments to "become" - scala

I am having some trouble using become in my Akka actor. Basically, my actor has a structure like so:
// This is where I store information received by the actor.
// In my real application it has more fields, though.
case class Information(list: List[AnyRef]) {
  def received(x: AnyRef) = {
    Information(list :+ x)
  }
}

class MyActor extends Actor {
  // Initial receive block that simply waits for a "start" signal
  def receive = {
    case Start => {
      become(waiting(Information(List())))
    }
  }

  // The main waiting state. In my real application, I have multiple of
  // these, which all have a parameter of type "Information".
  def waiting(info: Information): Receive = {
    // If a certain amount of messages was received, I decide what action
    // to take next.
    if (someCondition) {
      decideNextState(info)
    }
    return {
      case Bar(x) => {
        //
        // !!! Problem occurs here !!!
        //
        // This is where the problem occurs, apparently. After a decision has
        // been made (i.e. decideNextState was invoked), the info list should
        // have been cleared. But when I check the size of the info list here,
        // after a decision has been made, it appears to still contain all the
        // messages received earlier.
        //
        become(waiting(info received x))
      }
    }
  }

  def decideNextState(info: Information) {
    // Some logic, then the received information list is cleared and
    // we enter a new state.
    become(waiting(Information(List())))
  }
}
Sorry for the long code snippet, but I couldn't really make it any smaller.
The part where the problem occurs is marked in the comments. I am passing a parameter to the method that returns the Receive partial function, which is then passed to become. However, the created partial function seems to somehow preserve state from an earlier invocation. The problem is a bit difficult to explain, but I did my best to describe it in the comments in the code, so please read those and I'll clarify anything that is unclear.

Your logic is a little convoluted, but I'll take a shot at what could be the problem:
If someCondition is true, your actor steps into a state, let's call it S1, characterized by the value Information(List()). You then return (by the way, avoid using return unless it is absolutely necessary) a receive method which will put your actor into a state S2 characterized by Information(somePreviousList :+ x). So at this point your stack of states has S1 on top. But when you receive a Bar(x) message, the state S2 is pushed, covering S1, and you actually transition into a state characterized by an Information holding the old values plus your new x.
Or something like that; the recursion in your actor is a bit mesmerizing.
But I'll suggest rewriting that code, since the state that changes is something of type Information, and you are manipulating it using Akka's actor state transitions, which are not the best tool for that. become and unbecome are meant to be used to transition between different states of the actor's behavior. That is, an actor can have a different behavior at any time, and you use become and unbecome to switch between these behaviors.
Why not do something like this ?
class MyActor extends Actor {
  private var info = Information(List.empty)

  def receive = {
    case Start => info = Information(List()) // a bit redundant, but it's just to match 1:1 with your code
    case Bar(x) => {
      if (someCondition) {
        info = Information(List.empty)
      }
      info = info received x
    }
  }
}
I might not have captured your entire idea, but you get the picture.
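If you do want to keep the accumulated Information in the behavior itself rather than in a var, here is a minimal sketch of how that could look with context.become in Akka 2.x, where become replaces the current behavior by default (discardOld = true), so no stale behaviors pile up on the stack. It reuses Information, Start and Bar from the question; someCondition is assumed here to be a predicate on the updated state:

import akka.actor.Actor

class MyActor extends Actor {
  def receive = {
    case Start => context.become(waiting(Information(List.empty)))
  }

  def waiting(info: Information): Receive = {
    case Bar(x) =>
      val updated = info received x
      // Decide the next state explicitly and hand exactly that state to the
      // next behavior; with the default discardOld = true this replaces the
      // current behavior instead of stacking a new one on top of it.
      val next = if (someCondition(updated)) Information(List.empty) else updated
      context.become(waiting(next))
  }
}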

Scala futures and JMM

I have a question about the JMM and Scala futures.
In the following code, I have a non-immutable Data class. I create an instance of it inside one thread (inside the Future.apply body), and then subscribe to the completion event.
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

object Hello extends App {
  Future {
    new Data(1, "2")
  }.foreach { d =>
    println(d)
  }

  Thread.sleep(100000)
}

class Data(var someInt: Int, var someString: String)
Can we guarantee that:
1. The foreach body is called from the same thread where the Data instance was created?
2. If not, can we guarantee that actions inside Future.apply happen-before (in terms of the JMM) actions inside the foreach body?
Completion happens-before callback execution.
Disclaimer: I am the main contributor.
I had a sort-of similar question, and what I found is:
1) In the doc IntelliJ so conveniently pulled up for me, it says
Asynchronously processes the value in the future once the value becomes available...
2) on https://docs.scala-lang.org/overviews/core/futures.html it says
The result becomes available once the future completes.
Basically, nowhere that I can find does it explicitly say that there is a memory barrier. I suspect, however, that it is a safe assumption that there is; otherwise the language would simply not work.
No.
You can get a good idea of this by looking through the source code for Promise/DefaultPromise/Future, which schedules the callback for foreach on the execution context/adds it to the listeners without any special logic requiring it to run on the original thread...
But you can also verify it experimentally, by trying to set up an execution context and threads such that something else will already be queued for execution when the Future in which Data was created completes.
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

implicit val context = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(2))

Future {
  new Data(1, "2")
  println("Data created on: " + Thread.currentThread().getName)
  Thread.sleep(100)
}.foreach { _ =>
  println("Data completed on: " + Thread.currentThread().getName)
}

Future { // occupies the second thread
  Thread.sleep(1000)
}

Future { // queued for execution while the first future is still executing
  Thread.sleep(2000)
}
My output:
Data created on: pool-$n-thread-1
Data completed on: pool-$n-thread-2
2.
Less confident here than I'd like to be, but I'll give it a shot:
Yes.
DefaultPromise, the construct underlying Future, is wrapping an atomic reference, which behaves like a volatile variable. Since the write-to for updating the result must happen prior to the read-from which passes the result to the listener so it can run the callback, JMM volatile variable rules turn this into a happens-before relationship.
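As a minimal sketch of what that guarantee buys you in practice (reusing the mutable Data class from the question): writes made to the instance before the promise is completed are visible inside the callback, even when the callback runs on another thread.

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{Future, Promise}

object HappensBeforeSketch extends App {
  val p = Promise[Data]()

  Future {
    val d = new Data(0, "")
    // Plain, non-volatile writes...
    d.someInt = 42
    d.someString = "updated"
    // ...made before completing the promise are published by the completion.
    p.success(d)
  }

  // Completion happens-before callback execution, so both writes are visible
  // here even if this callback runs on a different thread.
  p.future.foreach { d =>
    println(d.someInt + " " + d.someString) // prints "42 updated"
  }

  Thread.sleep(1000)
}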
I don't think there are any guarantees that foreach is called from the same thread.
foreach will not be called until the future completes successfully. onComplete is a more idiomatic way of providing a callback to process the result of a Future.
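For completeness, a small sketch of that onComplete variant, which also lets you observe failures (again reusing the Data class from the question):

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.util.{Failure, Success}

Future {
  new Data(1, "2")
}.onComplete {
  case Success(d)  => println("got " + d.someInt + ", " + d.someString)
  case Failure(ex) => println("computation failed: " + ex.getMessage)
}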

Activiti Java Service Task: Passivate w/out the need for receive task

This has already been answered, but the solutions have not been working out for me.
Activiti's asynchronous behaviour is fairly simple and only allows the user to enable a flag which tells the Activiti engine to insert such a task into an execution queue (managed by a pool of threads).
What I want is not to insert my Java service task into a pool but to passivate its behaviour and only complete the task when an external signal is received and/or a callback is called.
My attempt:
class customAsyncTask extends TaskActivityBehavior {

  override def execute(execution: ActivityExecution): Unit = {
    val future = Future {
      println(s"Executing customAsyncTask -> ${execution.getCurrentActivityName}")
    }
    future.onComplete {
      case Success(result) => leave(execution)
      case _ => // whatever
    }
  }

  def signal(processInstanceId: String, transition: String) = {
    val commandExecutor = main.processEngine.getProcessEngineConfiguration
      .asInstanceOf[ProcessEngineConfigurationImpl].getCommandExecutor
    val command = new customSignal(processInstanceId, transition)
    commandExecutor.execute(command)
  }
}
In the code sample above I have registered a Scala future callback which, when invoked, will terminate the current activity and move to the next one.
I also have a signal method which builds a custom signal that, based on the process ID and a transition name, will call execution.take with the appropriate transition.
In both cases I am getting the following error (the bottom of the stack changes a little):
java.lang.NullPointerException
at org.activiti.engine.impl.persistence.entity.ExecutionEntity.performOperationSync(ExecutionEntity.java:636)
at org.activiti.engine.impl.persistence.entity.ExecutionEntity.performOperation(ExecutionEntity.java:629)
at org.activiti.engine.impl.persistence.entity.ExecutionEntity.take(ExecutionEntity.java:453)
at org.activiti.engine.impl.persistence.entity.ExecutionEntity.take(ExecutionEntity.java:431)
at org.activiti.engine.impl.bpmn.behavior.BpmnActivityBehavior.performOutgoingBehavior(BpmnActivityBehavior.java:140)
at org.activiti.engine.impl.bpmn.behavior.BpmnActivityBehavior.performDefaultOutgoingBehavior(BpmnActivityBehavior.java:66)
at org.activiti.engine.impl.bpmn.behavior.FlowNodeActivityBehavior.leave(FlowNodeActivityBehavior.java:44)
at org.activiti.engine.impl.bpmn.behavior.AbstractBpmnActivityBehavior.leave(AbstractBpmnActivityBehavior.java:47)
Unfortunately, it is highly likely that the engine is erasing the information concerning the execution when the execute method returns, even though no complete/leave/take has been called. Even though my callback has the execution object in scope, when I query for information using its process ID all I receive is null.
So, what am I doing wrong here? How can I achieve the behaviour that I want?
I don't see anything specific. I would have said you need to extend a class that implements SignalableActivityBehavior, but I think TaskActivityBehavior actually does this.
While the stack indicates the NPE is coming from leave(), I am confused why leave is calling take, since take is a transition event and really should only happen on a task labeled as synchronous.
All I can offer is, Camunda have an example implementation that is similar to your scenario. You may be able to use this to help you:
https://github.com/camunda/camunda-bpm-examples/tree/master/servicetask/service-invocation-asynchronous
It seems that Activiti uses thread-local variables, which means that calling engine methods from Scala threads (the Scala execution context) is pointless, since those threads do not share the context.
To solve it, all I have to do from my callback is make a signal call, much as if I were calling from a remote system. The only difference is that I do not need to save my process instance identifier.
The code looks as such:
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

class AsynchronousServiceTask extends AbstractBpmnActivityBehavior {

  val exec_id: String = "executionId"

  override def execute(execution: ActivityExecution) = {
    val future = Future { println("Something") }
    future onComplete {
      case _ => myobject.callSignalForMe(execution.getId)
    }
  }

  override def signal(execution: ActivityExecution, signalName: String, signalData: AnyRef) = {
    println("Signal called, leaving current activity..")
    leave(execution)
  }
}
Basically, myobject holds the runtime engine and will inject the signal in a thread-local context. All clean and working as intended.
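callSignalForMe itself is not shown above; a minimal sketch of what such a helper could look like, assuming an Activiti 5.x process engine is available (RuntimeService.signal(executionId) is the standard way to trigger a waiting activity from outside the engine's own threads):

object myobject {
  // Assumed to be initialised elsewhere with the running process engine.
  var processEngine: org.activiti.engine.ProcessEngine = _

  def callSignalForMe(executionId: String): Unit = {
    // The engine picks this up on one of its own threads and invokes the
    // behaviour's signal(...) method, which then calls leave(execution).
    processEngine.getRuntimeService.signal(executionId)
  }
}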

Querying a continuously running operation for its current state/value in Scala

I have a procedure that continuously updates a value. I want to be able to periodically query the operation for the current value. In my particular example, every update can be considered an improvement and the procedure will eventually converge on a final, best answer, but I want/need access to the intermediate results. The speed with which the loop executes and the time it takes to converge both matter.
As an example, consider this loop:
var current = 0
while (current < 100) {
  current = current + 1
}
I want to be able to get the value of current on any loop iteration.
A solution with an Actor would be:
class UpdatingActor extends Actor {
  var current: Int = 0

  def receive = {
    case Update => {
      current = current + 1
      if (current < 100) self ! Update
    }
    case Query => sender ! current
  }
}
You could get rid of the var using become or FSM, but this example is more clear IMO.
Alternatively, one actor could run the operation and send updated results on every loop iteration to another actor, whose sole responsibility is updating the value and responding to queries about it. I don't know much about "agents" in Akka, but this seems like a potential use case for one.
What are better/alternative ways of doing this using Scala? I don't need to use actors; that was just one solution that came to mind.
Your actor-based solution is ok.
Sending the intermediate result after each change to a "result provider" actor would be a good idea as well if the calculation blocks the actor for a long time and you want to make sure that you can always get the intermediate result. Another alternative would be to make the actual calculator actor a child of the actor that collects the best result. That way the thing acts as a single actor from the outside, and you have the actor that has state (the current best result) separated from the actor that does the computation, which might fail.
An agent would be a solution somewhere between the very low-level @volatile/AtomicInteger approach and an Actor. An agent is something that can only be modified by running a transform on it (and there is a queue for transforms), but which has a current state that can always be accessed. It is not location transparent, though, so stay with the actor approach if you need that.
Here is how you would solve this with an agent. You have one thread which does a long-running calculation (simulated by Thread.sleep) and another thread that just prints out the best current result in regular intervals (also simulated by Thread.sleep).
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.concurrent._
import akka.agent.Agent

object Main extends App {
  val agent = Agent(0)

  def computation(): Unit = {
    for (i <- 0 until 100) {
      agent.send { current =>
        Thread.sleep(1000) // to simulate a long-running computation
        current + 1
      }
    }
  }

  def watch(): Unit = {
    while (true) {
      println("Current value is " + agent.get)
      Thread.sleep(1000)
    }
  }

  global.execute(new Runnable {
    def run() = computation()
  })

  watch()
}
But all in all I think an actor-based solution would be superior. For example you could do the calculation on a different machine than the result tracking.
The scope of the question is a little wide, but I'll try :)
First, your example is perfectly fine, I don't see the point of getting rid of the var. This is what actors are for: protect mutable state.
Second, based on what you describe you don't need an actor at all.
class UpdatingActor {
  // @volatile so that the value written by the computing thread is visible
  // to the thread that polls soWhatsGoingOn.
  @volatile private var current = 0

  def startCrazyJob() {
    while (current < 100) {
      current = current + 1
    }
  }

  def soWhatsGoingOn: Int = current
}
You just need one thread to call startCrazyJob and a second one that will periodically call soWhatsGoingOn.
IMHO, the actor approach is better, but it's up to you to decide if it's worth importing the akka library just for this use case.
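A rough sketch of that two-thread usage (the thread setup and sleep interval are just for illustration):

object PollingExample extends App {
  val worker = new UpdatingActor

  // One thread runs the long computation...
  new Thread(new Runnable {
    def run(): Unit = worker.startCrazyJob()
  }).start()

  // ...while the main thread periodically asks for the current value.
  for (_ <- 1 to 5) {
    println("Current value: " + worker.soWhatsGoingOn)
    Thread.sleep(10)
  }
}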

AKKA: Painless Actor Error Notifications

Beautiful People Lifestyle
A component BEAUTIFUL is using her internal akka.actor.Actor to do certain things.. ( such as "acting up" for example )
There are other MAN components that would really like to "interact" with the BEAUTIFUL
When the BEAUTIFUL finds a MAN worthy, before "interacting" with him, she agrees to take in his phone number ( let's call it an ErrorHandler ), you know just to give him a call in case he left in the morning and forgot his Rolex on her bed side table
The BEAUTIFUL though is "high maintenance" ( aren't they all.. ), and every time something bad happens inside the BEAUTIFUL's internal Actor ( e.g. an OmgBrokenNailException, UglyPurseThrowable, etc.. ), she goes nuts, needs to stop completely and to call a MAN using that phone number ( e.g. errorHandler.getOnItNowHoney( message, throwable ) )
Beautiful People Problem
Akka supervisors allow you to register two types of FaultHandlers => OneForOneStrategy and AllForOneStrategy, where, if the underlying Actor overrides preRestart/postRestart, it gets access to an actual "throwable", e.g.:
override def preRestart( reason: Throwable ) { // do with throwable... }
The problem is that both of these strategies will try to restart the Actor(s), which is not something that I am looking for. I am looking for the Actor to call an external ErrorHandler with the "throwable" and stop.
If I don't use these strategies, when an exception is thrown from within the Actor, postStop is called on the Actor, which is cool, but it does not take in a "throwable":
override def postStop() { // no access to "throwable"... }
Help the MAN to get that throwable
In Akka 2 you should use DeathWatch
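A minimal sketch of what that could look like in Akka 2.x: the MAN's actor watches the BEAUTIFUL's actor and receives a Terminated message when it stops. Note that DeathWatch only tells you that the actor died, not which Throwable killed it, so the throwable itself still has to be forwarded separately (for example with the try/catch approach below).

import akka.actor.{Actor, ActorRef, Terminated}

class Man(beautiful: ActorRef) extends Actor {
  // Ask to be sent Terminated(beautiful) when the watched actor stops.
  context.watch(beautiful)

  def receive = {
    case Terminated(ref) =>
      println(s"$ref has stopped; time to go pick up the Rolex")
  }
}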
If I understand your problem correctly, I think it would make sense to wrap your message processing in a try/catch block and let the actor handle sending the Throwable and stopping itself.
def receive = {
  case message => try {
    message match {
      case ... // your actual message processing goes here
    }
  } catch {
    case t: Throwable => {
      errorHandler ! t
      self.stop
    }
  }
}

How to call the correct method in Scala/Java based on the types of two objects without using a switch statement?

I am currently developing a game in Scala where I have a number of entities (e.g. GunBattery, Squadron, EnemyShip, EnemyFighter) that all inherit from a GameEntity class. Game entities broadcast things of interest to the game world and one another via an event/message system. There are a number of EventMessages (EntityDied, FireAtPosition, HullBreach).
Currently, each entity has a receive(msg:EventMessage) as well as more specific receive methods for each message type it responds to (e.g. receive(msg:EntityDiedMessage) ). The general receive(msg:EventMessage) method is just a switch statement that calls the appropriate receive method based on the type of message.
As the game is in development, the list of entities and messages (and which entities will respond to which messages) is fluid. Ideally, if I want a game entity to be able to receive a new message type, I just want to be able to code the logic for the response, not do that and also have to update a match statement elsewhere.
One thought I had would be to pull the receive methods out of the Game entity hierarchy and have a series of functions like def receive(e:EnemyShip,m:ExplosionMessage) and def receive(e:SpaceStation,m:ExplosionMessage) but this compounds the problem as now I need a match statement to cover both the message and game entity types.
This seems related to the concepts of Double and Multiple dispatch and perhaps the Visitor pattern but I am having some trouble wrapping my head around it. I am not looking for an OOP solution per se, however I would like to avoid reflection if possible.
EDIT
Doing some more research, I think what I am looking for is something like Clojure's defmulti.
You can do something like:
(defmulti receive (juxt :entity :msgType))
(defmethod receive [:fighter :explosion] [msg] "fighter handling explosion message")
(defmethod receive [:player-ship :hullbreach] [msg] "player ship handling hull breach")
You can easily implement multiple dispatch in Scala, although it doesn't have first-class support. With the simple implementation below, you can encode your example as follows:
object Receive extends MultiMethod[(Entity, Message), String]("")
Receive defImpl { case (_: Fighter, _: Explosion) => "fighter handling explosion message" }
Receive defImpl { case (_: PlayerShip, _: HullBreach) => "player ship handling hull breach" }
You can use your multi-method like any other function:
Receive(fighter, explosion) // returns "fighter handling explosion message"
Note that each multi-method implementation (i.e. defImpl call) must be contained in a top-level definition (a class/object/trait body), and it's up to you to ensure that the relevant defImpl calls occur before the method is used. This implementation has lots of other limitations and shortcomings, but I'll leave those as an exercise for the reader.
Implementation:
class MultiMethod[A, R](default: => R) {

  private var handlers: List[PartialFunction[A, R]] = Nil

  def apply(args: A): R = {
    handlers find {
      _.isDefinedAt(args)
    } map {
      _.apply(args)
    } getOrElse default
  }

  def defImpl(handler: PartialFunction[A, R]) = {
    handlers +:= handler
  }
}
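To make the example self-contained, here is a sketch of the minimal type hierarchy the snippets above assume (Entity, Fighter, PlayerShip, Message, Explosion and HullBreach are names taken from the example, not from the question's actual code):

trait Entity
class Fighter extends Entity
class PlayerShip extends Entity

trait Message
class Explosion extends Message
class HullBreach extends Message

object Receive extends MultiMethod[(Entity, Message), String]("")

// Registrations live in a top-level definition; they must run before use.
object Wiring {
  Receive defImpl { case (_: Fighter, _: Explosion) => "fighter handling explosion message" }
  Receive defImpl { case (_: PlayerShip, _: HullBreach) => "player ship handling hull breach" }
}

object Example extends App {
  Wiring // force the defImpl registrations to run
  println(Receive(new Fighter, new Explosion))  // "fighter handling explosion message"
  println(Receive(new Fighter, new HullBreach)) // falls back to the default ("")
}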
If you're really worried about the effort it takes to create/maintain the switch statement, you could use metaprogramming to generate the switch statement by discovering all EventMessage types in your program. It's not ideal, but metaprogramming is generally one of the cleanest ways to introduce new constraints on your code; in this case that'd be the requirement that if an event type exists, there is a dispatcher for it, and a default (ignore?) handler that can be overridden.
If you don't want to go that route, you can make EventMessage a sealed trait (with each concrete message a case class extending it), which lets the compiler complain if you forget to handle a message type in your match statement. I wrote a game server that was used by ~1.5 million players, and used that kind of static typing to ensure that my dispatch was comprehensive; it never caused an actual production bug.
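A minimal sketch of that sealed hierarchy (the message names come from the question; the field names are invented for illustration):

sealed trait EventMessage
case class EntityDied(entityId: Long) extends EventMessage
case class FireAtPosition(x: Double, y: Double) extends EventMessage
case class HullBreach(severity: Int) extends EventMessage

// Because EventMessage is sealed, the compiler emits a "match may not be
// exhaustive" warning if a new subtype is added and this match is not updated.
def dispatch(msg: EventMessage): Unit = msg match {
  case EntityDied(id)       => println(s"entity $id died")
  case FireAtPosition(x, y) => println(s"firing at ($x, $y)")
  case HullBreach(severity) => println(s"hull breach of severity $severity")
}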
Chain of Responsibility
A standard mechanism for this (not scala-specific) is a chain of handlers. For example:
trait Handler[Msg] {
  def handle(msg: Msg): Unit
}
Then your entities just need to manage a list of handlers (here Msg stands for the game's base message type):
abstract class AbstractEntity {
  def handlers: List[Handler[Msg]]
  def receive(msg: Msg): Unit = handlers.foreach(_.handle(msg))
}
Then your entities can declare the handlers inline, as follows:
class Tank extends AbstractEntity {
  lazy val handlers = List(
    new Handler[Msg] {
      def handle(msg: Msg) = msg match {
        case ied: IedMsg => // handle
        case _ => // no-op
      }
    },
    new Handler[Msg] {
      def handle(msg: Msg) = msg match {
        case efm: EngineFailureMsg => // handle
        case _ => // no-op
      }
    }
  )
}
Of course, the disadvantage here is that you lose some readability, and you still have to remember the boilerplate: a no-op catch-all case in each handler.
Actors
Personally I would stick with the duplication. What you have at the moment looks a lot like treating each entity as if it is an Actor. For example:
class Tank extends Entity with Actor {
  def act() {
    loop {
      react {
        case ied: IedMsg => // handle
        case efm: EngineFailureMsg => // handle
        case _ => // no-op
      }
    }
  }
}
At least here you get into the habit of adding a case statement within the react loop. This can call another method in your actor class which takes the appropriate action. Of course, the benefit of this is that you take advantage of the concurrency model provided by the actor paradigm. You end up with a loop which looks like this:
react {
  case ied: IedMsg => _explosion(ied)
  case efm: EngineFailureMsg => _engineFailure(efm)
  case _ =>
}
You might want to look at Akka, which offers a more performant actor system with more configurable behaviour and more concurrency primitives (STM, agents, transactors, etc.).
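For reference, a rough sketch of the same Tank written against Akka's API instead of the old scala.actors library (IedMsg, EngineFailureMsg and the handler methods are assumed to exist as above):

import akka.actor.Actor

class TankActor extends Actor {
  def receive = {
    case ied: IedMsg           => _explosion(ied)
    case efm: EngineFailureMsg => _engineFailure(efm)
    case _                     => // ignore everything else
  }

  private def _explosion(msg: IedMsg): Unit = { /* handle */ }
  private def _engineFailure(msg: EngineFailureMsg): Unit = { /* handle */ }
}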
No matter what, you have to do some updating; the application won't just magically know which response action to do based off of the event message.
Cases are well and good, but as the list of messages your object responds to gets longer, so does its response time. Here is a way to respond to messages that performs at the same speed no matter how many you register with it. The example does need to use the Class object, but no other reflection is used.
import java.util.HashMap;

public class GameEntity {

    private HashMap<Class<?>, ActionObject> registeredEvents = new HashMap<Class<?>, ActionObject>();

    public void receiveMessage(EventMessage message) {
        ActionObject action = registeredEvents.get(message.getClass());
        if (action != null) {
            action.performAction();
        } else {
            // Code for if the message type is not registered
        }
    }

    protected void registerEvent(EventMessage message, ActionObject action) {
        Class<?> messageClass = message.getClass();
        registeredEvents.put(messageClass, action);
    }
}
public class Ship extends GameEntity {

    public Ship() {
        // Do these 3 lines of code for every message you want the class to
        // register for. This example is for a ship getting hit.
        EventMessage getHitMessage = new GetHitMessage();
        ActionObject getHitAction = new GetHitAction();
        super.registerEvent(getHitMessage, getHitAction);
    }
}
There are variations of this using Class.forName(fullPathName) and passing in the pathname strings instead of the objects themselves if you want.
Because the logic for performing an action is contained in the superclass, all you have to do to make a subclass is register what events it responds to and create an ActionObject that contains the logic for its response.
I'd be tempted to elevate every message type into a method signature and Interface. How this translates into Scala I'm not totally sure, but this is the Java approach I would take.
Killable, KillListener, Breachable, BreachListener and so on will surface the logic of your objects and the commonalities between them in a way which permits runtime inspection (instanceof) as well as helping runtime performance. Things which don't process kill events won't be put in a java.util.List<KillListener> to be notified. You can then avoid the creation of multiple new concrete objects all the time (your EventMessages) as well as lots of switching code.
public interface KillListener {
    void notifyKill(Entity entity);
}
After all, a method call in Java can already be understood as a message; just use raw Java syntax.
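A rough sketch of how an entity might maintain such a listener list (written in Scala to match the rest of the thread; Entity and the KillListener interface are as defined above):

import scala.collection.mutable.ListBuffer

class EnemyShip extends Entity {
  // Only objects that actually care about kill events register here, so
  // nothing else pays the cost of being notified.
  private val killListeners = ListBuffer.empty[KillListener]

  def addKillListener(listener: KillListener): Unit = {
    killListeners += listener
  }

  def die(): Unit = {
    killListeners.foreach(_.notifyKill(this))
  }
}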