Subscribing to ALL ActorEvents - azure-service-fabric

In my WebApi I want to subscribe to the ActorEvents for all of my ActorIds. Subscribing to a single ActorId is easy, and works. However, I'm wondering if there is a way to subscribe to all of them, current and future, at once, which I might've missed in the documentation.
Currently, I'm iterating through all my ActorIds on WebApi startup and subscribing. I then subscribe to new Ids and unsubscribe from deleted ones. However, this is somewhat cumbersome, and since it happens during startup, it will throw errors and stop the API process if any of the actors aren't up and running (which happens on deployment).
Also worth considering: the same ActorId may be used by multiple IActors, for example UserActor(Bob), Wishlist(Bob), etc.
Any suggestions on how best to subscribe to everything?
Thanks for your time and suggestions!

You can't subscribe to future Actor events, because those Actors don't exist yet. As a workaround, you can have the Actors register themselves with a central hub as they are created, and use that instead of ActorEvents. One way to do this is with something like pub/sub.
An ActorId simply identifies an Actor and is used to determine which ActorService partition it is stored on. It does not hold Actor type info, so it's safe to re-use it across different Actor types.
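To make the idea concrete, here is a minimal, framework-agnostic sketch of that registration/pub-sub approach, written in Scala purely for illustration; `EventHub`, `ActorEvent`, and the listener are assumptions for this sketch, not Service Fabric APIs:

```scala
// Minimal sketch of the "central hub" idea (not a Service Fabric API).
// Every actor publishes its events to the hub as they happen; the WebApi
// subscribes once and receives events from all current and future actors.
object EventHub {
  // Hypothetical event shape: which actor (id + type) raised which payload.
  final case class ActorEvent(actorId: String, actorType: String, payload: Any)

  private var subscribers = List.empty[ActorEvent => Unit]

  def subscribe(handler: ActorEvent => Unit): Unit = synchronized {
    subscribers = handler :: subscribers
  }

  // Called by each actor instead of (or in addition to) raising an IActorEvent.
  def publish(event: ActorEvent): Unit = synchronized {
    subscribers.foreach(_.apply(event))
  }
}

// WebApi side: a single subscription covers every actor, including ones created later.
// Carrying the actor type in the event also disambiguates ids shared across actor types.
object WebApiListener {
  def start(): Unit =
    EventHub.subscribe(e => println(s"${e.actorType}(${e.actorId}): ${e.payload}"))
}
```

In a real deployment the hub would be a durable service or broker rather than an in-memory object, but the shape of the flow is the same: actors push on creation and on every event, and the WebApi subscribes exactly once.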

Related

What are the best practices when working with data from multiple sources in Flutter/Bloc?

The Bloc manual describes the example of a simple Todos app. It works as an example, but I get stuck when trying to make it into a more realistic app. Clearly, a more realistic Todos app needs to keep working when the user temporarily loses network connection, and also needs to occasionally check the server for updates that the user might have added from another device.
So as a basic data model I have:
dataFromServer, which is refreshed every five minutes, and
localData, that describes what changes have been made locally but haven't been synchronized to the server yet.
My current idea is to have three kinds of events:
on<GetTodosFromServer>() which runs every few minutes to check the server for updates and only changes the dataFromServer,
on<TodoAdded>() (and its friends TodoDeleted, TodoChecked, and so on) which get triggered when the user changes the data, and only change the localData, and
on<SyncTodoToServer>() which runs whenever the user changes the todo list, or when network connectivity is restored, and tries to send the changes to the server, retrieves the new value from the server, and then sets the new dataFromServer and localData.
So obviously there's a lot of interaction between these three methods. When a new todo is added after the synchronization to the server starts, but before synchronization is finished, it needs to stay in the local changes object. When GetTodosFromServer and SyncTodoToServer both return server data, they need to find out who has the latest data and keep that. And so on.
Coming from a Redux background, I'm used to having two reducers (one for local data, one for server data) that would only respond to simple actions. E.g. an action { "type": "TodoSuccessfullySyncedToServer", uploadedData: [...], serverResponse: [...] } would be straightforward to parse for both the localData and the dataFromServer reducer. The reducer doesn't contain any of the business logic, it receives actions one by one and all you need to think about inside the reducer is the state before the action, the action itself, and the state after the action. Anything you rely on to handle the action will be in the action itself, not in the context. So different pieces of code that generate those actions can just fire these actions without thinking, knowing that the reducer will handle them correctly.
Bloc on the other hand seems to mix business logic and updating the state. API calls are made within the event handlers, which will emit a value possibly many seconds later. So every time you return from an asynchronous call in an event handler, you need to think about how the state might have changed while that call was happening and the consequences this has on what you're currently doing. Also, an object in the state can be updated by different events that need to coordinate among themselves how to avoid conflicts while doing so.
Is there a best practice on how to avoid the complexity this brings? Is it best practice to split large events into "StartSyncToServer" and "SuccessfullySyncedToServer" events, where the second behaves a lot like a Redux reducer? I don't see any of that in the examples, so is there another way this complexity is typically avoided in Bloc? Or is Bloc entirely unopinionated on these things?
I'm not looking for personal opinions here, only if there's something I missed in the Bloc manual (or other authoritative source) about how this was intended to work.

Is there a way to rely on Postgres Notify/Listen mechanism?

I have implemented a Notify/Listen mechanism, so when a special request is sent to the web server, using notify I can notify the workers (in Python) that there's a pending request waiting to be processed.
The implementation works fine, but the problem is that if the worker server is restarting, the notification gets lost, since at that particular time there's no listener.
I could implement a service like RabbitMQ or similar, but my needs are so simple that implementing such a monster is too much.
Is there any way, a configuration variable perhaps, that can give some persistence to the notification mechanism?
Thanks in advance
I don't think there is a way to persist notification channels, but you can simply store the pending requests in a table and have the worker check for any missed work on startup.
Either a timestamp or a pending/completed flag would work, depending on what kind of work it's doing.
For consistency, you can have the NOTIFY fire from an INSERT trigger on the queue table, and have the worker always check for any remaining work (not just a specific request) when notified.
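A minimal sketch of that queue-table pattern, assuming a hypothetical `pending_requests` table (all table, trigger, and function names here are made up). The SQL is the important part; the surrounding worker code uses plain JDBC in Scala only for illustration, and the same approach applies to a Python worker:

```scala
import java.sql.DriverManager
import scala.collection.mutable.ListBuffer

object PendingRequestQueue {
  // DDL for the queue table plus an INSERT trigger that fires NOTIFY.
  // EXECUTE FUNCTION needs Postgres 11+; older versions use EXECUTE PROCEDURE.
  val ddl: Seq[String] = Seq(
    """CREATE TABLE IF NOT EXISTS pending_requests (
      |  id         bigserial   PRIMARY KEY,
      |  payload    text        NOT NULL,
      |  processed  boolean     NOT NULL DEFAULT false,
      |  created_at timestamptz NOT NULL DEFAULT now()
      |)""".stripMargin,
    """CREATE OR REPLACE FUNCTION notify_pending_request() RETURNS trigger AS $$
      |BEGIN
      |  PERFORM pg_notify('pending_request', NEW.id::text);
      |  RETURN NEW;
      |END;
      |$$ LANGUAGE plpgsql""".stripMargin,
    """CREATE TRIGGER pending_request_notify
      |  AFTER INSERT ON pending_requests
      |  FOR EACH ROW EXECUTE FUNCTION notify_pending_request()""".stripMargin
  )

  // Run on worker startup and again on every notification: anything still marked
  // unprocessed gets handled, so nothing is lost while the worker was down.
  def drainPending(jdbcUrl: String)(handle: String => Unit): Unit = {
    val conn = DriverManager.getConnection(jdbcUrl)
    try {
      val rows = ListBuffer.empty[(Long, String)]
      val rs = conn.createStatement().executeQuery(
        "SELECT id, payload FROM pending_requests WHERE NOT processed ORDER BY id")
      while (rs.next()) rows += ((rs.getLong("id"), rs.getString("payload")))

      for ((id, payload) <- rows) {
        handle(payload) // application-specific work
        val upd = conn.prepareStatement(
          "UPDATE pending_requests SET processed = true WHERE id = ?")
        upd.setLong(1, id)
        upd.executeUpdate()
      }
    } finally conn.close()
  }
}
```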

Design of a simple REST API with Akka + Persistence

I'm building a simple REST API for generating some objects that must be created and sent periodically out of the API. The nature of the objects doesn't matter, and neither does the framework supporting the REST interface (Spray, Play Framework, whatever else). My question is: what would be a good, scalable actor design for this system using Akka? Suppose the service crashes, is migrated, or is stopped for whatever reason. In order to recover the description of the tasks, i.e. which objects must be sent and when, is akka-persistence a good way to go here, or is it better to persist such things in a traditional DB?
Thanks.
NOTE: I would also like to know, supposing there's an actor which is not stateful itself but creates many child actors, whether it's good practice to use akka-persistence to replay the messages that cause this actor to create its children again (the children also being non-stateful).
In a traditional DB you would most likely end up modeling this with timestamps and events, and with event sourcing this is already the native model.
Akka-persistence would be a natural fit for this scenario, since it will persist every event about what objects must be created and periodically sent out. The snapshot support will also help with speed of recovery when the number of events gets very large.
In the case of crashes or migration, the recovery process will handle this just fine.
Regarding your note, if the actor is truly stateless then there is no need to persist the events that cause the children to be created since they can be recreated on demand. If the existence of the children does need to be recovered, then the actor is not stateless. In that case then it may indeed make sense to persist those events.
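For illustration, a minimal akka-persistence sketch of that model using the classic `PersistentActor` API; the command, event, and state types (`ScheduleDelivery`, `DeliveryScheduled`, `SchedulerState`) are assumptions standing in for your domain:

```scala
import akka.actor.Props
import akka.persistence.{PersistentActor, SnapshotOffer}

// Hypothetical command, persisted event, and state for "what to send, and when".
final case class ScheduleDelivery(objectId: String, intervalSeconds: Long)   // command
final case class DeliveryScheduled(objectId: String, intervalSeconds: Long)  // persisted event
final case class SchedulerState(schedules: Map[String, Long] = Map.empty) {
  def updated(e: DeliveryScheduled): SchedulerState =
    copy(schedules = schedules + (e.objectId -> e.intervalSeconds))
}

class DeliveryScheduler extends PersistentActor {
  override def persistenceId: String = "delivery-scheduler"

  private var state = SchedulerState()

  override def receiveCommand: Receive = {
    case cmd: ScheduleDelivery =>
      persist(DeliveryScheduled(cmd.objectId, cmd.intervalSeconds)) { event =>
        state = state.updated(event)
        // start a timer / child actor for the new schedule here
      }
    case "snapshot" =>
      saveSnapshot(state) // keeps recovery fast once the event log grows large
  }

  // On restart or migration, the journal (plus the latest snapshot) rebuilds the schedule.
  override def receiveRecover: Receive = {
    case event: DeliveryScheduled               => state = state.updated(event)
    case SnapshotOffer(_, snap: SchedulerState) => state = snap
  }
}

object DeliveryScheduler {
  def props: Props = Props(new DeliveryScheduler)
}
```

Note that only the actor owning the schedule needs a persistenceId; a parent that merely spawns stateless workers can stay a plain Actor and recreate its children on demand.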

How do I introduce a new event denormalizer in a CQRS system?

According to CQRS à la Greg Young, event handlers (and the downstream event denormalizers) react to incoming events that were published earlier by the event publisher.
Now let's suppose that at runtime we want to add a new event denormalizer. Basically, this is easy, but it needs to bring its data up to the current state.
What is the best way to do this?
Should I send an out-of-band request to the event store and ask for all previously emitted events?
Or is there a better way to do this?
You can fetch and replay all (required) events against the new handler. This can be done in a separate process since what you essentially want is to get the persisted view models into the proper state.
Have a look at Rinat Abdullin's Lokad.CQRS sample project for a production example. Especially the SaaS.Engine.StartupProjectionRebuilder might be an interesting source even though it's rather complex.
One can also build the projections so that they remember which event they saw last. Then on any startup, they ask for that event and everything after it. Re-starting an old projection and building a new one then become roughly the same thing.
If you embrace complex integration across bounded contexts, you may need to drop the entire read model and rebuild it.
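A minimal sketch of such a checkpointing denormalizer; the `EventStore` and `ProjectionStorage` interfaces below are assumptions for illustration, not any particular library's API:

```scala
// A denormalizer that remembers the last event it processed (its checkpoint).
// On startup it asks the event store for everything after that position, so
// bootstrapping a brand-new projection and catching up an old one share one code path.
trait EventStore {
  def eventsFrom(position: Long): Iterator[(Long, Any)] // (globalPosition, event), assumed API
}

trait ProjectionStorage {
  def loadCheckpoint(projection: String): Long           // 0 when the projection is new
  def saveCheckpoint(projection: String, pos: Long): Unit
  def apply(projection: String, event: Any): Unit        // update the denormalized tables
}

final class ProjectionRebuilder(store: EventStore, storage: ProjectionStorage) {
  def catchUp(projectionName: String): Unit = {
    val start = storage.loadCheckpoint(projectionName)
    for ((pos, event) <- store.eventsFrom(start)) {
      storage.apply(projectionName, event)   // idempotent handlers make re-runs safe
      storage.saveCheckpoint(projectionName, pos)
    }
  }
}
```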

EventStore and more than one unit of work?

In replies to a few questions, Jonathan Oliver mentions using an AsynchronousCommitDispatcher to handle multiple units of work.
I am still in the design stage of my project (and still learning CQRS and ES) and have a few questions:
Would I create an AsynchronousCommitDispatcher for each aggregate root that will be affected by a domain event being raised?
What happens if I have some sort of locking mechanism where the dispatched event cannot make a change to an aggregate root if it is locked by another user? Does AsynchronousCommitDispatcher retry if there is a lock?
What if the system goes down before a domain event is handled? Unless I persist the fact that it has not been handled, won't it be lost?
My initial understanding was that these kinds of dispatchers were for messaging across the wire or for updating the read model. Here we are using one to update another aggregate root. Is this correct?
TIA
JD
The commit dispatchers are all about pushing events onto the wire after everything has completed successfully. No, you don't need more than one dispatcher for a given endpoint. The AsyncCommitScheduler (which uses a dispatcher) is multi-threaded and can dispatch more than one event at a time.
A dispatcher is not about handling an incoming message--that's what your message handlers are for. The dispatcher just sends once everything is complete.
Yes, dispatchers can help update read models, but not in the way you think. Instead, the dispatchers just push the messages into your messaging framework (MSMQ, RabbitMQ, or, at a higher level, NServiceBus/MassTransit). Then, once a message is received on the read side, you update your view model tables accordingly.
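As a rough sketch of that split (names are illustrative, not NEventStore's actual API): the dispatcher only hands already-committed events to the transport, and a separate handler on the read side updates the view-model tables:

```scala
// The dispatcher's only job: push events that have already been committed onto the bus.
trait MessageBus { def publish(event: Any): Unit } // adapter over MSMQ/RabbitMQ/NServiceBus/etc.

final class CommitDispatcher(bus: MessageBus) {
  def dispatch(committedEvents: Seq[Any]): Unit =
    committedEvents.foreach(bus.publish)
}

// Read side: an ordinary message handler keeps a denormalized view-model table in sync.
// The event type and table are hypothetical.
final class OrderSummaryHandler {
  def handle(event: Any): Unit = event match {
    // case e: OrderPlaced => insert or update the order-summary row for e.orderId
    case _ => () // ignore events this view does not care about
  }
}
```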