Hard restart directive in Akka? - scala

Is there an elegant way of doing a hard restart of an actor - i.e. clearing out the mailbox along with internal state?
I know it can be done by calling context.stop and reinitializing upon the DeathWatch / Terminated message, but that's a bit clunky.

No, clearing out the mailbox is exactly what is done by terminating the actor. If you were to try that without the termination semantics, how could you ever be sure that you cleared everything? New messages could come in at any point in time.
So, to do that hard restart, you return the Stop directive from the supervisor strategy and then create a new child once you receive that actor's Terminated message.
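A minimal sketch of that pattern with classic (untyped) Akka actors; the `Worker` class and its failure trigger are made up for illustration:

```scala
import akka.actor.{Actor, ActorRef, OneForOneStrategy, Props, SupervisorStrategy, Terminated}
import akka.actor.SupervisorStrategy.Stop

// Hypothetical worker; any exception makes the supervisor stop it outright.
class Worker extends Actor {
  def receive: Receive = {
    case "boom" => throw new RuntimeException("hard restart, please")
    case msg    => // normal processing ...
  }
}

class Supervisor extends Actor {
  // Stop instead of the default Restart: a stopped actor's mailbox is discarded.
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy() { case _: Exception => Stop }

  private var child: ActorRef = newChild()

  private def newChild(): ActorRef = {
    val c = context.actorOf(Props[Worker])
    context.watch(c) // DeathWatch: we get Terminated when the child stops
    c
  }

  def receive: Receive = {
    case Terminated(ref) if ref == child =>
      child = newChild() // fresh instance: empty mailbox, fresh state
    case msg =>
      child.forward(msg)
  }
}
```

Messages sent to the old child between its failure and the Terminated notification go to dead letters, which is the price of the "hard" semantics.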

Related

How to persist and replay NestJS CQRS event and saga across restart?

I am making an application which will need to use NestJS' CQRS module, as the requirements naturally lend themselves to that pattern.
Updates to the application logic are expected to be frequent and to happen during busy hours (that's just how my management works...), so the application needs to be able to restart gracefully. However, this means that events started just before the shutdown may not finish, or, even if they do, some sagas may not trigger because some of their events happened before the restart... I'd like to ensure that doesn't happen.
I'm aware of NestJS' OnApplicationShutdown and OnApplicationBootstrap hooks, which exist for exactly this purpose, but I'm not sure what I should do in them. How can I capture all events that have unfinished handlers and sagas? Then, after a restart, how can I make the event bus aware of the events monitored by sagas, without re-executing the handlers that already ran?
I guess the second part could be worked around with a random ID per event/handler combo, which would be looked up in a log: if present, the handler is skipped; if not, it is executed and added to the log... But even with such a workaround, I don't see how I could do the first part. There will be a lot of events, and sagas (by definition) execute commands, meaning they have side effects... Even if all commands could be made idempotent, the sheer quantity of events and the frequent restarts mean that replaying from the very first command is a no-go.
I've seen this package but I'm not sure if it solves this particular use case, or if it's really just logging the events, and pretty much nothing more.
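The ID-per-event/handler workaround from the question can be sketched like this (names are illustrative; a real app would persist the log in a database table rather than an in-memory Set, and write to it in the same transaction as the handler's side effects):

```typescript
type Handler = (payload: unknown) => void;

// Records each (eventId, handler) pair after a successful run; on replay
// after a restart, recorded pairs are skipped instead of re-executed.
class IdempotentDispatcher {
  private processed = new Set<string>(); // stand-in for a persisted log

  dispatch(eventId: string, handlerName: string, handler: Handler, payload: unknown): boolean {
    const key = `${eventId}:${handlerName}`;
    if (this.processed.has(key)) {
      return false; // already ran before the restart
    }
    handler(payload);
    this.processed.add(key); // record only after the handler succeeded
    return true;
  }
}
```

The "capture unfinished work" half of the question then reduces to replaying all recent events through this dispatcher on bootstrap: finished handlers are no-ops, unfinished ones run for the first time.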

Is there a way to rely on Postgres Notify/Listen mechanism?

I have implemented a Notify/Listen mechanism, so when a special request is sent to the web server, using notify I can notify the workers (in Python) that there's a pending request waiting to be processed.
The implementation works fine, but the problem is that if the workers server is restarting, the notification gets lost, since at that particular time there's no listener.
I could set up a service like RabbitMQ or similar, but my needs are so simple that implementing such a monster is too much.
Is there any way, a configuration variable perhaps, that can give some persistence to the notification mechanism?
Thanks in advance
I don't think there is a way to persist notification channels, but you can simply store the pending requests to a table, and have the worker check for any missed work on startup.
Either a timestamp or a pending/completed flag would work, depending on what kind of work it's doing.
For consistency, you can have the NOTIFY fire from an INSERT trigger on the queue table, and have the worker always check for any remaining work (not just a specific request) when notified.
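That queue-table-plus-trigger setup can be sketched like this (table and channel names are illustrative; `EXECUTE FUNCTION` needs PostgreSQL 11+, older versions use `EXECUTE PROCEDURE`):

```sql
CREATE TABLE tasks (
  id         bigserial   PRIMARY KEY,
  payload    jsonb       NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now(),
  done_at    timestamptz           -- NULL = still pending
);

CREATE FUNCTION notify_task() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('tasks', NEW.id::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tasks_notify
  AFTER INSERT ON tasks
  FOR EACH ROW EXECUTE FUNCTION notify_task();

-- On worker startup, and again on each notification, drain everything pending
-- rather than only the notified row, so nothing missed during downtime is lost:
-- SELECT id, payload FROM tasks WHERE done_at IS NULL ORDER BY id;
```

The NOTIFY then becomes a mere wake-up hint; the table is the source of truth, so a restart loses nothing.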

Are the callbacks in a QuickFIX client guaranteed to be called?

I have a QuickFIX initiator which receives frequent market data updates.
Although I process each update as quickly as possible, I still have a concern regarding the callbacks.
Let's say QuickFIX called my callback function, and while I'm still processing that call, another message arrives. What will happen in this situation? Is it guaranteed that my callback will be invoked for the next message, or can the engine skip it because the previous call is still in progress?
Thanks
If the message is not malformed, then yes, a callback should be triggered for every message received.
(If it is malformed, the engine will reject automatically and will not pass the message to your application code.)
Incoming messages are actually collected in a queue while you are processing them. For this reason, you should not perform time-intensive operations in the callbacks. If you have some lengthy processing, dispatch it to another thread so as not to cause the queue to back up.
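A minimal hand-off sketch of that advice (plain C++, not the QuickFIX API): the callback only enqueues the message and returns, while a worker thread does the slow part, so the engine's delivery queue never backs up.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

// Thread-safe queue between the FIX callback (producer) and a worker (consumer).
class MessageQueue {
public:
    void push(std::string msg) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(msg));
        }
        cv_.notify_one();
    }

    // Blocks until a message is available, then returns the oldest one.
    std::string pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        std::string msg = std::move(queue_.front());
        queue_.pop();
        return msg;
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::string> queue_;
};
```

In `fromApp` (or an `onMessage` overload) you would serialize what you need, call `push`, and return immediately; a worker thread loops on `pop` and does the heavy processing.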

How to recover a Perl program from where it exited

I have a Perl program that takes a long time to run. The user may exit it occasionally, and I would like to implement a mechanism to recover the program from where it exited.
My idea is to use the Storable/Dumper modules to save the state of the program before it exits and to restore that state when it resumes.
But how can I move the program to where it exited? Can I just set a recovery point where it exited and jump to that point directly after it resumes?
You can use the concept of transactions and design the program around that, but having the user kill a process as an expected way of interacting with it doesn't sound like a good idea.
Maybe giving better feedback to the user about the program state would solve this issue instead of dealing with hacky behaviour.
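If you do go the checkpoint route, the usual shape is not "jump back to where the code was" but "persist how much work is done and skip it on the next run". A sketch of that loop, shown in Python with pickle standing in for Storable (file name and the doubling "work" are illustrative):

```python
import os
import pickle

STATE_FILE = "progress.pkl"  # illustrative path

def run(items, state_file=STATE_FILE):
    """Process items, checkpointing after each one so that a restart
    resumes from the first unprocessed item instead of the beginning."""
    done = 0
    if os.path.exists(state_file):
        with open(state_file, "rb") as f:
            done = pickle.load(f)  # recovery point left by a previous run
    results = []
    for i in range(done, len(items)):
        results.append(items[i] * 2)  # stand-in for the real work
        with open(state_file, "wb") as f:
            pickle.dump(i + 1, f)  # commit the recovery point
    return results
```

This only works cleanly if each unit of work is atomic from the program's point of view, which is the transaction idea above in miniature.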

Microsoft Message Queue Missing Messages

I am using C# and .NET Framework 1.1 (yes, it's old, but I inherited this stuff and can't upgrade). I place messages on a transactional queue, but they don't make it onto the queue about 50% of the time. I'm running in a workgroup on Windows XP Professional with all service packs installed. I don't see any messages in the dead-letter queue either.
Any ideas where to look?
If it isn't hitting the queue at all and isn't going to the dead-letter queue, it suggests the item isn't being sent to the queue. You should be able to confirm that this is the case by switching on the journal for the queue.
Assuming it isn't hitting the queue, it is probably a transaction issue. I would check that you are definitely committing the message to the queue every time. Make sure there aren't any exceptions being thrown and swallowed that cause the transaction to roll back or never be committed (essentially the same thing). Also make sure there aren't any conditional statements that mean the commit gets skipped.
I would add some logging around every location where a transaction is started, committed or rolled back, and also around any location where you create a message. You can then review your log to see the order of events and spot what's going astray.
Another option would be to remove all of the transaction code and test the code against a non-transactional queue. If the messages all appear then it is a transactional problem. If not, the issue is elsewhere.
I use MSMQ a lot and the one thing I have learned through experience is that it works really well and the weak point is me :-)
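For reference, the committed-every-time shape the answer describes looks roughly like this with the .NET 1.1 System.Messaging API (the queue path is illustrative; the key points are that Commit is unconditional on success and failures are logged and rethrown rather than swallowed):

```csharp
using System;
using System.Messaging;

public class QueueSender
{
    public static void Send(string body)
    {
        MessageQueue queue = new MessageQueue(@".\private$\orders"); // hypothetical path
        MessageQueueTransaction txn = new MessageQueueTransaction();
        txn.Begin();
        try
        {
            queue.Send(body, txn);
            txn.Commit(); // without this, the send silently evaporates
            Console.WriteLine("committed send");
        }
        catch (Exception ex)
        {
            txn.Abort(); // make failures loud instead of swallowing them
            Console.WriteLine("send failed: " + ex.Message);
            throw;
        }
    }
}
```

An uncommitted transactional send produces no message and no dead letter, which matches the "missing about 50% of the time" symptom.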