This may be related to a previous question, but I am not so sure...
I have a Scala/Actor-based subsystem that uses 3 cooperating actors to do some work. Each of the Actors is actually a DaemonActor. External messages are sent into a primary Actor, and occasionally messages are sent from a secondary Actor to the primary, asking it to do stuff with the data it collected from the external messages.
I wrote a test-driver Scala program that starts up the subsystem in question, and uses a DaemonActor to send messages to the subsystem (that is to the primary Actor).
It turns out that messages sent into the primary Actor were processed by the primary Actor, but messages sent from the secondary subsystem Actor to the primary Actor were not processed.
I discovered that if I made the Actor in the test-driver program a regular (non-daemon) Actor instead of a DaemonActor, everything worked as expected. This was 100% deterministic: when the external test-driver used an Actor, the subsystem always behaved; when the external test-driver used a DaemonActor, the subsystem always misbehaved. No other changes were made to the code when switching between Actors and DaemonActors.
To make things even stranger, when I made an expanded test driver that used 2 Actors to send 2 different types of messages to the subsystem, I had to make one of the test driver's actors a DaemonActor or the subsystem receiving the messages misbehaved.
Seems pretty random :-)
One caveat to note: The test driver actors actually call methods on a subsystem class which "translates" the method call into a message send to the primary subsystem actor. This is for compatibility with Java code.
I tried a number of different ways to tell if messages were being processed. However I did it, I needed some info from the program while it was running, so I resorted to printing stuff out. Thus my reference to the question thread about printing and flushing buffers. The only thing that seemed to affect behaviour was Actor vs. DaemonActor.
I could send out code, but it would be kind of a lot.
Any insight would be appreciated!
One possible problem: have you started the actors?
What exactly is the workflow around the DaemonActor, and are you using Scala 2.7 or 2.8? If you post your code on a gist or pastebin-type system, I'm sure many of us would be happy to look at it. :)
In 2.8, regular actors prevent the runtime from terminating while they are active; DaemonActors, as the name implies, do not. If you are simply sending the DaemonActor one or more messages and then the program ends, it may never even get to the point of sending messages to the other Actors.
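For illustration, a minimal sketch using the old scala.actors API (object and message names are hypothetical):

```scala
import scala.actors.{Actor, DaemonActor}

// A DaemonActor does not keep the runtime alive.
object DaemonWorker extends DaemonActor {
  def act() {
    react { case msg => println("daemon actor got: " + msg) }
  }
}

// A regular Actor keeps the runtime alive until it terminates.
object RegularWorker extends Actor {
  def act() {
    react { case msg => println("regular actor got: " + msg) }
  }
}

object Driver {
  def main(args: Array[String]) {
    DaemonWorker.start()
    DaemonWorker ! "hello"  // with only daemon actors running, the JVM may exit
                            // before this is processed and nothing is printed
    RegularWorker.start()
    RegularWorker ! "hello" // a non-daemon actor keeps the runtime alive until
                            // it has handled this message
  }
}
```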
Is it possible in Akka Actors to install some kind of 'hook' that allows you to run a self-defined piece of code every time a new message arrives in an actor? Note, this is not the moment when the actor starts handling the message with receive but the moment when the message arrives in the actor and is put into its mailbox. Also note that I want to change the default behavior, not just the behavior for one individual actor. Ideally I would change this behavior at just one spot throughout my code and it would affect all actors automatically, or by only requiring 1-2 lines of code in each file/actor (such as an import statement).
For example, using this hook it should be possible to log a message every time it arrives, or to calculate and print the Fibonacci of the size of the mailbox before/after insertion.
If you control the spawning of the actor (or are willing to use this mailbox as the default for actors which don't specifically set a mailbox), you can use a custom mailbox. See the docs for details.
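As a rough sketch (class names and the hook body are hypothetical, following the documented custom-mailbox pattern), a mailbox like the one below runs arbitrary code on every enqueue; setting it as akka.actor.default-mailbox.mailbox-type applies it to every actor that doesn't override its mailbox:

```scala
import java.util.concurrent.ConcurrentLinkedQueue
import akka.actor.{ ActorRef, ActorSystem }
import akka.dispatch.{ Envelope, MailboxType, MessageQueue, ProducesMessageQueue }
import com.typesafe.config.Config

// Message queue that invokes a hook every time a message is put into the mailbox.
class HookedMessageQueue extends MessageQueue {
  private val queue = new ConcurrentLinkedQueue[Envelope]()

  def enqueue(receiver: ActorRef, handle: Envelope): Unit = {
    // The "hook": runs on arrival, before the actor's receive ever sees the message.
    println(s"message for $receiver arrived, mailbox size before insert: ${queue.size}")
    queue.offer(handle)
  }
  def dequeue(): Envelope = queue.poll()
  def numberOfMessages: Int = queue.size
  def hasMessages: Boolean = !queue.isEmpty
  def cleanUp(owner: ActorRef, deadLetters: MessageQueue): Unit =
    while (hasMessages) deadLetters.enqueue(owner, dequeue())
}

// MailboxType that Akka instantiates from configuration, e.g.
//   akka.actor.default-mailbox.mailbox-type = "mypackage.HookedMailbox"
class HookedMailbox(settings: ActorSystem.Settings, config: Config)
    extends MailboxType with ProducesMessageQueue[HookedMessageQueue] {
  override def create(owner: Option[ActorRef], system: Option[ActorSystem]): MessageQueue =
    new HookedMessageQueue
}
```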
I have some actors that kill themselves when they are idle or when other system constraints require them to. The actors that hold ActorRefs to them are watching for their Terminated(ref), but there is a race condition: messages meant for those actors can be sent before the Terminated notification arrives, and I'm trying to figure out a clean way to handle that.
I was considering subscribing to DeadLetter and using that to signal the sender that their ref is stale and that they need to get or spawn a new target ActorRef.
However, in Akka Typed, I cannot find any way to get to dead letters other than using the untyped co-existence path, so I figure I'm likely approaching this wrong.
Is there a better pattern for dealing with dead downstream refs and redirecting messages to new downstream refs, short of requiring some kind of ack handshake for every message?
Consider dead letters a debugging tool rather than something to implement delivery guarantees with (true for both Akka Typed and untyped).
If an actor needs to be certain that a message was delivered, the message protocol will need to include an ack. To do resending, the actor will also need to keep a buffer of in-flight/not-yet-acknowledged messages so that it can resend them.
We have some ideas on an abstraction for different levels of reliability of message delivery; we'll see whether that fits into Akka 2.6 or happens later. It is prototyped in: https://github.com/akka/akka/pull/25099
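For illustration, a minimal Akka Typed sketch of such an ack-plus-buffer protocol (all names here are hypothetical); on a Terminated signal for the consumer, the producer could resend everything still in inFlight to a freshly obtained ref:

```scala
import akka.actor.typed.{ ActorRef, Behavior }
import akka.actor.typed.scaladsl.Behaviors

object Consumer {
  // Every delivery carries an id and a replyTo so the consumer can acknowledge it.
  final case class Deliver(id: Long, payload: String, replyTo: ActorRef[Producer.Ack])

  def apply(): Behavior[Deliver] = Behaviors.receiveMessage { msg =>
    // ... process msg.payload ...
    msg.replyTo ! Producer.Ack(msg.id)
    Behaviors.same
  }
}

object Producer {
  sealed trait Command
  final case class Send(payload: String) extends Command
  final case class Ack(id: Long) extends Command

  // inFlight holds everything sent but not yet acknowledged; those entries are
  // what would be resent if the consumer terminates and is replaced.
  def apply(consumer: ActorRef[Consumer.Deliver],
            nextId: Long = 0L,
            inFlight: Map[Long, String] = Map.empty): Behavior[Command] =
    Behaviors.receive { (context, message) =>
      message match {
        case Send(payload) =>
          consumer ! Consumer.Deliver(nextId, payload, context.self)
          apply(consumer, nextId + 1, inFlight + (nextId -> payload))
        case Ack(id) =>
          apply(consumer, nextId, inFlight - id) // drop the acknowledged message
      }
    }
}
```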
I'm currently implementing a system that receives inbound messages from an external monitoring system. I'm translating these messages into more concise 'events', and I'm using these to alter the state of 'Managed System' objects. Akka Actors seemed like a good use case for encapsulating mutable state in concurrent applications.
The managed systems are identified by a name (99% of the time this is a hostname). Whenever a proper event is received, the system routes the message to the correct actor based on the name property. At first I used actorSelection and the complete paths of said actors, but that was very ugly, and I saw several people advise against relying on the fully qualified name of an actor to deliver messages.
So I've set up a simple EventBus, which is great as I can now simply do:
eventBus.subscribe(subscriber1, "/managedSystem01")
eventBus.subscribe(subscriber2, "/managedSystem02")
eventBus.publish(MonitoringEvent("/managedSystem01", MonitoringMessage("managedSystem01", "N", "CPU_LOAD_HIGH", True)))
eventBus.publish(MonitoringEvent("/managedSystem02", MonitoringMessage("managedSystem02", "Y", "DISK_USAGE_HIGH", True)))
Of course, I now have the issue that, should I receive an event that concerns a managed system for which I've not spawned an actor yet (this is entirely possible; it is unfortunately impossible for me to get a complete list of managed systems), the message will be routed to the dead-letter mailbox.
Ideally I don't want this to happen. When it is unable to address a specific actor, I want to spawn a new one dynamically.
I suppose that, theoretically, I could subscribe to DeadLetter messages but:
That sounds a little 'hacky', since those messages are essentially reserved for the system
Is it even possible to recover the original message (in my case, the MonitoringMessage) that was sent to the DeadLetter mailbox?
Alternatively is there a way to check if there are ZERO subscribers to a certain "topic"?
What you describe ("send to Actor by some identifier, if it does not exist buffer until it gets created and then deliver to that newly on-demand created Actor") is implemented in Akka as Cluster Sharding.
While it is designed primarily for sharding load (work) across a cluster, you could use it locally as well, since your requirement is essentially a scaled-down (to one node) version of the problem it solves. It takes care of starting new Actors if they don't exist for a given identifier, etc., so you'd simply subscribe the shard region to the events and it'll take care of creating the actors for you.
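As a rough, untested sketch of that approach (the entity actor, the field layout of MonitoringMessage, and the shard count are assumptions based on your snippet; it also requires the ActorSystem to be configured with akka.actor.provider = cluster, even for a single node):

```scala
import akka.actor.{ Actor, ActorSystem, Props }
import akka.cluster.sharding.{ ClusterSharding, ClusterShardingSettings, ShardRegion }

// Assumed message shape, based on the publish calls in the question.
final case class MonitoringMessage(systemName: String, ack: String, alarm: String, active: Boolean)

// One entity actor per managed system; created on demand by the shard region
// the first time a message for that systemName arrives.
class ManagedSystem extends Actor {
  def receive = {
    case msg: MonitoringMessage =>
      // update this managed system's state from msg
  }
}

object ManagedSystemSharding {
  // The entity id is the managed system's name; the shard id is derived from it.
  val extractEntityId: ShardRegion.ExtractEntityId = {
    case msg: MonitoringMessage => (msg.systemName, msg)
  }
  val extractShardId: ShardRegion.ExtractShardId = {
    case msg: MonitoringMessage => (math.abs(msg.systemName.hashCode) % 100).toString
  }

  def start(system: ActorSystem) =
    ClusterSharding(system).start(
      typeName = "ManagedSystem",
      entityProps = Props[ManagedSystem](),
      settings = ClusterShardingSettings(system),
      extractEntityId = extractEntityId,
      extractShardId = extractShardId)
}

// Usage: route every event through the shard region instead of looking up actors.
// val region = ManagedSystemSharding.start(system)
// region ! MonitoringMessage("managedSystem03", "N", "CPU_LOAD_HIGH", active = true)
```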
It is said:
Akka ensures that each instance of an actor runs in its own lightweight thread and that messages are processed one at a time.
Can you please explain what is the reason of processing messages one at a time in an Actor?
This way we can guarantee thread safety inside an Actor.
Because an actor will only ever handle one message at any given time, we can guarantee that the actor's local state is safe to access, even though the Actor itself may be switching the Threads on which it executes. Akka guarantees that state written while handling message M1 is visible to the Actor once it handles M2, even though it may now be running on a different thread (normally guaranteeing this kind of safety comes at a huge cost; Akka handles this for you).
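A tiny illustration (hypothetical actor): the mutable counter below needs no locks or volatile, precisely because only one message is handled at a time and Akka guarantees the visibility described above:

```scala
import akka.actor.Actor

// Safe mutable state: messages are handled one at a time, so `count` is never
// accessed concurrently, and writes from one message are visible to the next.
class Counter extends Actor {
  private var count = 0

  def receive = {
    case "increment" => count += 1
    case "get"       => sender() ! count
  }
}
```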
It also originates from the original Actor model description, which is a concurrency abstraction in which actors handle messages one by one and respond by performing one of these actions: sending other messages, changing their behaviour, or creating new actors.
I'm using slick to store data in database, and there I use the threadLocalSession to store the sessions.
The repositories are used to do the crud, and I have an Akka service layer that access the slick repositories.
I found this link, where Adam Gent asks something near what I'm asking here: Akka and Java libraries that use ThreadLocals
My concern is about how Akka processes a message: as I store the database session in a ThreadLocal, can I have two messages being processed at the same time in the same thread?
Let's say two add-user messages (A and B) are sent to the user service, message A is partially processed and then stops, and message B starts being processed on the same thread on which A was being processed. Which session will that thread have stored in its threadLocalSession?
Each actor processes its messages one at a time, in the order it received them*. Therefore, if you send messages A, B to the same actor, then they are never processed concurrently (of course the situation is different if you send each of the messages to different actors).
The problem with the use of ThreadLocals is that in general it is not guaranteed that an actor processes each of its messages on the same thread.
So if you send a message M1 and then a message M2 to actor A, it is guaranteed that M1 is processed before M2. What is not guaranteed is that M2 is processed on the same thread as M1.
In general, you should avoid using ThreadLocals, as the whole point of actors is that they are a unit of consistency, and you are safe to modify their internal state via message passing. If you really need more control on the threads which execute the processing of messages, look into the documentation of dispatchers: http://doc.akka.io/docs/akka/2.1.0/java/dispatchers.html
*Except if you change their mailbox implementation, but that's a non-default behavior.
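If you conclude you really do need thread affinity despite that advice, here is a sketch of the dispatcher route (names are hypothetical; the configuration follows the documented PinnedDispatcher pattern), which keeps an actor on one dedicated thread so a ThreadLocal survives between messages:

```scala
import akka.actor.{ Actor, ActorSystem, Props }
import com.typesafe.config.ConfigFactory

// An actor whose messages must all be processed on the same thread, e.g. because
// it relies on a ThreadLocal such as Slick's threadLocalSession.
class UserService extends Actor {
  def receive = {
    case msg =>
      // work that reads/writes a ThreadLocal always runs on the same thread here
  }
}

object PinnedExample extends App {
  // PinnedDispatcher dedicates a single thread to the actor; disabling the core
  // timeout keeps that same thread alive between messages.
  val config = ConfigFactory.parseString(
    """
    my-pinned-dispatcher {
      type = PinnedDispatcher
      executor = "thread-pool-executor"
      thread-pool-executor.allow-core-timeout = off
    }
    """).withFallback(ConfigFactory.load())

  val system = ActorSystem("example", config)
  val userService = system.actorOf(
    Props[UserService]().withDispatcher("my-pinned-dispatcher"), "userService")
}
```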