Send Akka messages with database update - scala

I am trying to implement a method in Scala that performs a couple of database updates using Slick (in the same DB transaction) and then sends several Akka messages. Both sending the messages and the DB updates should be atomic. In the JEE world this happens pretty much transparently, with JMS and the DB (JPA, for instance) participating in the same transaction coordinated by JTA. How do I achieve this with Akka and Slick? Examples would be very beneficial.

To continue the discussion in the comments, here is how I see the solution to your problem:
Start with a main actor which performs the DB interaction. For example, on a Start message it updates the database using Slick, saves the Connection in one of the actor's instance variables, and sends messages to the child actors. Those actors have to send a confirmation back to your main actor, for example a ConfirmTransaction message. In reaction to that message you perform a commit on the previously saved Connection and close it (or release it to the pool). The main actor also has to supervise the child actors: if a child fails after the message has been sent (or a timeout occurs), you have to roll back the transaction via the saved Connection.
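A minimal sketch of that flow, assuming plain JDBC transaction handling (a Connection factory passed in from outside), a hypothetical child actor described by workerProps, and made-up message names (Start, ConfirmTransaction, TransactionTimeout):

```scala
import java.sql.Connection

import akka.actor.{Actor, ActorRef, Props, Terminated}
import scala.concurrent.duration._

// Hypothetical messages for this sketch
case object Start
case object ConfirmTransaction
case object TransactionTimeout

// `openConnection` stands in for however you obtain a JDBC connection
// (e.g. from the underlying DataSource); `workerProps` describes the child actor.
class MainActor(openConnection: () => Connection, workerProps: Props) extends Actor {
  import context.dispatcher

  private var connection: Option[Connection] = None

  def receive: Receive = {
    case Start =>
      val conn = openConnection()
      conn.setAutoCommit(false)
      // ... perform the database updates on `conn` here ...
      connection = Some(conn)

      val worker: ActorRef = context.actorOf(workerProps)
      context.watch(worker)          // supervise: we get Terminated if it dies
      worker ! Start                 // hypothetical "do your part" message
      context.system.scheduler.scheduleOnce(30.seconds, self, TransactionTimeout)

    case ConfirmTransaction =>       // child confirmed: commit and release
      connection.foreach { c => c.commit(); c.close() }
      connection = None

    case TransactionTimeout | Terminated(_) =>  // child failed or timed out: roll back
      connection.foreach { c => c.rollback(); c.close() }
      connection = None
  }
}
```

Note that this is manual, actor-driven transaction management rather than the transparent JTA-style coordination you get in JEE: the commit happens only after the child's confirmation, and a terminated child or a timeout triggers a rollback.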

Related

Order fulfilment with Akka FSM, storing state

I am trying to build an order fulfilment component with Akka FSM. I have a few basic doubts about how the state is stored and moved forward upon an event from the user.
Consider states
ORDER_CLEAN, ORDER_INIT, ORDER_PAYMENT_WAITING, ORDER_PAYMENT_SUCCESS, ORDER_DELIVERY, ORDER_COMPLETE
Events as
EV_CART_CHECKOUT, EV_PROCEED_PAYMENT, EV_PAYMENT_SUCCESSFUL, EV_ITEMS_PACKED, EV_DELIVERED
State changes as
(EV_CART_CHECKOUT, ORDER_CLEAN) -> ORDER_INIT
(EV_PROCEED_PAYMENT, ORDER_INIT) -> ORDER_PAYMENT_WAITING
(EV_PAYMENT_SUCCESSFUL, ORDER_PAYMENT_WAITING) -> ORDER_PAYMENT_SUCCESS
(EV_ITEMS_PACKED, ORDER_PAYMENT_SUCCESS) -> ORDER_DELIVERY
(EV_DELIVERED, ORDER_DELIVERY) -> ORDER_COMPLETE
Questions
When we create an FSM actor starting at ORDER_CLEAN with the event EV_CART_CHECKOUT, will this actor stay alive until we bring it to the ORDER_COMPLETE state (assuming we stop the actor in that state)?
If yes to the above point, given that we store the order status in a database, how do we trigger a new event on that actor? Do we need to maintain an order_id-to-actor mapping and trigger events through it? If 10K unique orders are currently being processed, do we then maintain a mapping for all 10K actors? If so, what is the best data structure for maintaining these mappings for a large number of orders?
In continuation of the 2nd point, if actors go down, how do we bring them back to the same state? Is a supervisor actor the only way to solve this? Or do we need to check the actor's status and then send the event?
At any state, the user might not trigger the next event for days; is it good to keep the actor alive for such a long time, or is it better to create a new actor with the updated state?
What are better approaches to address these problems with Akka FSM?
If we are talking about a non-persistent actor, then generally speaking we can't assume it will be alive between events. You might simply restart or redeploy the service, so the answer to your 1st question is no.
To trigger a new event on the actor, you should create the actor and initialise its state machine with the last valid state from the DB. You could either use Akka Persistence or just read the current order state from the DB and pass it to the actor.
Actors are very lightweight objects, but talking about 10K orders I would suggest terminating the actor after each transition.
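As a sketch of how those transitions might look in an Akka (classic) FSM actor: the Scala names below are just renderings of the states and events from the question, the `initialState` constructor argument is one way to pass in the last valid state read from the DB, and the persistence call itself is only indicated by a comment.

```scala
import akka.actor.{ActorLogging, FSM}

// States from the question
sealed trait OrderState
case object OrderClean          extends OrderState
case object OrderInit           extends OrderState
case object OrderPaymentWaiting extends OrderState
case object OrderPaymentSuccess extends OrderState
case object OrderDelivery       extends OrderState
case object OrderComplete       extends OrderState

// Events from the question
sealed trait OrderEvent
case object EvCartCheckout      extends OrderEvent
case object EvProceedPayment    extends OrderEvent
case object EvPaymentSuccessful extends OrderEvent
case object EvItemsPacked       extends OrderEvent
case object EvDelivered         extends OrderEvent

// FSM data: here just the order id
final case class OrderData(orderId: String)

// `initialState` is the last valid state read from the DB (ORDER_CLEAN for a new order)
class OrderFsm(orderId: String, initialState: OrderState)
  extends FSM[OrderState, OrderData] with ActorLogging {

  startWith(initialState, OrderData(orderId))

  when(OrderClean)          { case Event(EvCartCheckout, _)      => goto(OrderInit) }
  when(OrderInit)           { case Event(EvProceedPayment, _)    => goto(OrderPaymentWaiting) }
  when(OrderPaymentWaiting) { case Event(EvPaymentSuccessful, _) => goto(OrderPaymentSuccess) }
  when(OrderPaymentSuccess) { case Event(EvItemsPacked, _)       => goto(OrderDelivery) }
  when(OrderDelivery)       { case Event(EvDelivered, _)         => goto(OrderComplete) }
  when(OrderComplete)(FSM.NullFunction) // terminal state; the actor can also just stop here

  onTransition {
    case from -> to =>
      // persist the new order status to the DB here, then (optionally) stop the actor
      log.info("Order {}: {} -> {}", orderId, from, to)
  }

  initialize()
}
```

Following the answer above, you would create such an actor on demand for a given order (initialising it from the DB), send it the event, persist the new state in onTransition, and then stop it rather than keeping 10K of them alive.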

Where to put calculation executed regularly that updates actor's internal state?

I am learning Scala and Akka.
In the problem I am trying to solve, I want an actor to read a real-time data stream and perform a calculation that updates its state.
Every 3 seconds I send a request through a Scheduler for the actor to return its state.
I have pretty much everything implemented, with my actor having a broadcaster and receiver and the state-updating function in place, but I am not entirely sure where the calculation should run. I could put the calculations in a separate thread that is always running inside the actor, but I would like to know if there is a more elegant way to do this in Scala.
I would suggest dividing the work between two actors. The parent actor manages the child worker actor and tracks the state. It sends a message to the child worker actor to trigger data processing.
The child worker actor processes the data stream; don't forget to wrap the processing in a Future so that it doesn't block the actor from processing messages. It also periodically sends messages to the parent with the current state. So the child worker is stateless: it sends notifications when its state changes.
If you want to know the current state of the work overall, you ask the parent. In principle, you can merge this into one actor which sends the status message to itself. I wouldn't update the state directly from the Future, to avoid concurrency issues: the data processing work running in the Future can possibly run on a different thread than the message processing.
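A minimal sketch of that layout, with made-up message names (Process, StateUpdate, GetState) and a placeholder computation standing in for the real stream calculation:

```scala
import akka.actor.{Actor, ActorRef, Props}
import akka.pattern.pipe
import scala.concurrent.Future
import scala.util.Random

// Hypothetical messages for this sketch
case object Process
final case class StateUpdate(value: Double)
case object GetState

// Child: stateless worker. The heavy calculation runs inside a Future so the
// actor stays responsive; the result is piped back to the parent as a message.
class WorkerActor extends Actor {
  import context.dispatcher

  def receive: Receive = {
    case Process =>
      Future {
        val result = Random.nextDouble() // placeholder for the real stream calculation
        StateUpdate(result)
      }.pipeTo(context.parent)
  }
}

// Parent: owns the current state and the worker; the state is updated only
// from messages, never from inside a Future, so there are no data races.
class ParentActor extends Actor {
  private val worker: ActorRef = context.actorOf(Props(new WorkerActor), "worker")
  private var currentState: Double = 0.0

  def receive: Receive = {
    case Process            => worker ! Process // e.g. sent every 3 seconds by the Scheduler
    case StateUpdate(value) => currentState = value
    case GetState           => sender() ! currentState
  }
}
```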

Akka: what is the reason of processing messages one at a time in an Actor?

It is said:
Akka ensures that each instance of an actor runs in its own lightweight thread and that messages are processed one at a time.
Can you please explain the reason for processing messages one at a time in an Actor?
This way we can guarantee thread safety inside an Actor.
Because an actor will only ever handle one message at any given time, we can guarantee that accessing the actor's local state is safe, even though the actor itself may be switching the threads it executes on. Akka guarantees that state written while handling message M1 is visible to the actor once it handles M2, even though it may now be running on a different thread (normally guaranteeing this kind of safety comes at a huge cost; Akka handles this for you).
It also originates from the original Actor model description, which is a concurrency abstraction: actors handle messages one by one and respond to each by performing one of these actions: sending other messages, changing their behaviour, or creating new actors.
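As a small, made-up illustration of that guarantee: the mutable counter below needs no locks or volatile fields, because Akka never runs two message handlers of the same actor concurrently and makes writes from one message visible to the next.

```scala
import akka.actor.Actor

// Hypothetical messages
case object Increment
case object Get

// `count` is plain mutable state, yet safe: only one message is processed
// at a time, and writes made while handling one message are visible to the next.
class Counter extends Actor {
  private var count = 0

  def receive: Receive = {
    case Increment => count += 1
    case Get       => sender() ! count
  }
}
```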

Akka FSM actor and round-robin routing

I want to convert a set of actors into an FSM using Akka FSM. Currently the system is designed in such a way that every actor knows what to do with the results of its action and which actor is next in the processing sequence.
Now I want to have some dedicated actors which only do the things they should know about (and not the entire message routing), and a central FSM which knows how to route messages and drive the transformation flow.
A client sends some request to the FSM actor; the FSM actor, on transition to the next state, sends a message to some actor in an onTransition block. That actor replies to the sender with some message, which is processed inside the FSM state until the request is finished.
So far everything looks good; however, I'm not sure what will happen if multiple clients start interacting with the FSM actor. Will the "workflow" be recorded somewhere, so that flows from different clients won't collide at some point (e.g. the FSM actor receiving a message from another client instead of the originating one)?
Is it safe to have, say, 10 FSM actors behind a round-robin router, or do I need to create a new FSM actor on every client request and then kill it once finished?
Each Akka FSM actor has only one state at a time, so you can't use multiple FSM actors behind a round-robin router in this scenario. You may consider creating a new FSM actor on every request from a client. There are other options (a shared multi-user non-Akka FSM, or a pool of FSM actors which may be "busy"), but creating a per-user FSM should be the better solution because of the lightweight nature of Akka actors.
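A minimal sketch of the per-request variant, with a hypothetical front-end actor and made-up message names; each request gets its own FSM instance, so state from different clients can never collide:

```scala
import akka.actor.{Actor, ActorRef, Props}

// Hypothetical request message; requestId is assumed unique per request
final case class StartWorkflow(requestId: String)

// Front-end that creates one FSM actor per request, so every workflow has
// its own isolated state and replies cannot get mixed up between clients.
class WorkflowFrontend(fsmProps: Props) extends Actor {
  def receive: Receive = {
    case msg @ StartWorkflow(requestId) =>
      val fsm: ActorRef = context.actorOf(fsmProps, s"workflow-$requestId")
      // forward keeps the original sender, so the FSM can reply to the client
      // directly and stop itself (context.stop(self)) when the flow is done
      fsm forward msg
  }
}
```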

Akka Slick and ThreadLocal

I'm using Slick to store data in a database, and I use the threadLocalSession to store the sessions.
The repositories are used to do the CRUD, and I have an Akka service layer that accesses the Slick repositories.
I found this link, where Adam Gent asks something close to what I'm asking here: Akka and Java libraries that use ThreadLocals
My concern is about how Akka processes a message: since I store the database session in a ThreadLocal, can I have two messages being processed at the same time on the same thread?
Let's say two add-user messages (A and B) are sent to the user service, message A is partially processed and stops, and message B starts to be processed on the same thread that message A started on. Which of them will have the session stored in its localSession?
Each actor processes its messages one at a time, in the order it received them*. Therefore, if you send messages A, B to the same actor, then they are never processed concurrently (of course the situation is different if you send each of the messages to different actors).
The problem with the use of ThreadLocals is that in general it is not guaranteed that an actor processes each of its messages on the same thread.
So if you send a message M1 and then a message M2 to actor A, it is guaranteed that M1 is processed before M2. What is not guaranteed that M2 is processed on the same thread as M1.
In general, you should avoid using ThreadLocals, as the whole point of actors is that they are a unit of consistency and you are safe to modify their internal state via message passing. If you really need more control over the threads which execute the message processing, look into the documentation of dispatchers: http://doc.akka.io/docs/akka/2.1.0/java/dispatchers.html
*Except if you change their mailbox implementation, but that's a non-default behavior.
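If you do end up stuck with a ThreadLocal-based API, one dispatcher-level option (not mentioned in the answer above, with a hypothetical UserService actor) is a pinned dispatcher, which dedicates a single thread to the actor so a ThreadLocal set for one message is still there for the next:

```scala
import akka.actor.{Actor, ActorSystem, Props}
import com.typesafe.config.ConfigFactory

// Hypothetical actor that calls the threadLocalSession-based repositories
class UserService extends Actor {
  def receive: Receive = {
    case _ => () // run the Slick repository calls here
  }
}

object Main extends App {
  // A PinnedDispatcher gives this actor its own dedicated thread, so a
  // ThreadLocal set while processing one message is still present for the next.
  val config = ConfigFactory.parseString(
    """
    db-pinned-dispatcher {
      type = PinnedDispatcher
      executor = "thread-pool-executor"
    }
    """).withFallback(ConfigFactory.load())

  val system = ActorSystem("app", config)
  val userService = system.actorOf(
    Props(new UserService).withDispatcher("db-pinned-dispatcher"),
    "user-service")
}
```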