Service Fabric long-running process

I have a Web API stateless service that creates an Actor, which does some long-running processing via a reminder (fire and forget) and stores its own progress in local state. I am unable to get the progress of that long-running process because of the single-threaded nature of the Actor: any call to the method that reads the progress will wait until the long-running process has completed. Does anyone have a solution for this (without using an external data source)?

If you simply wish to get the current state of your Actor without having to wait for an Actor lock, you can use the underlying ActorService that hosts the Actors to query the state without interrupting, or being blocked by, the long-running Actor method.
The ActorService hosting the Actors is really just a StatefulService (with some bells and whistles), and you can communicate with it the same way you would communicate with any Service: add an IService interface to it and then use a ServiceProxy to talk to it. This SO answer shows how to do that: How to get state from service fabric actor without waiting for other methods to complete?
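A minimal sketch of that approach, assuming a custom ActorService subclass; the interface, class, application URI, and "progress" state name are all hypothetical (and wiring up a service remoting listener in CreateServiceReplicaListeners is omitted for brevity):

using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Runtime;
using Microsoft.ServiceFabric.Services.Client;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

public interface IProgressQueryService : IService
{
    Task<int> GetProgressAsync(ActorId actorId);
}

internal class LongRunningActorService : ActorService, IProgressQueryService
{
    public LongRunningActorService(StatefulServiceContext context, ActorTypeInformation typeInfo)
        : base(context, typeInfo)
    {
    }

    public Task<int> GetProgressAsync(ActorId actorId)
    {
        // Reads the named state directly from the actor state provider;
        // this does not take the actor's turn-based lock.
        return StateProvider.LoadStateAsync<int>(actorId, "progress", CancellationToken.None);
    }
}

// Caller side (e.g. in the Web API service): address the partition that owns the actor.
public static async Task<int> QueryProgressAsync(ActorId actorId)
{
    var proxy = ServiceProxy.Create<IProgressQueryService>(
        new Uri("fabric:/MyApp/LongRunningActorService"),
        new ServicePartitionKey(actorId.GetPartitionKey()));
    return await proxy.GetProgressAsync(actorId);
}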
If you want to get progress along the way, even during the execution of your Actor method, you can force a save of the state changes in the middle of your long-running execution by calling SaveStateAsync.
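For illustration, a hedged sketch of the reminder handler inside the actor (which implements IRemindable), flushing progress as it goes; totalSteps, DoStepAsync, and the "progress" state name are assumptions:

public async Task ReceiveReminderAsync(string reminderName, byte[] state, TimeSpan dueTime, TimeSpan period)
{
    const int totalSteps = 100; // hypothetical amount of work
    for (int step = 0; step < totalSteps; step++)
    {
        await DoStepAsync(step);                                 // hypothetical unit of work
        await StateManager.SetStateAsync("progress", step + 1);
        await SaveStateAsync();                                  // flush so out-of-band readers see fresh progress
    }
}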

You could create a ProgressTrackingActor and periodically update it from the existing Actor. Query the ProgressTrackingActor for progress.
You can use an ActorReference to indicate which Actor to query progress for, or use the same ActorId value.
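A rough sketch of that pattern; IProgressTrackingActor and its methods are hypothetical:

public interface IProgressTrackingActor : IActor
{
    Task ReportProgressAsync(int percentComplete);
    Task<int> GetProgressAsync();
}

// Inside the long-running actor: push progress to a tracking actor keyed by the same ActorId.
private async Task ReportAsync(int percentComplete)
{
    var tracker = ActorProxy.Create<IProgressTrackingActor>(this.Id);
    await tracker.ReportProgressAsync(percentComplete);
}

// In the Web API: query the tracker instead of the busy actor.
public static async Task<int> GetProgressAsync(ActorId workerActorId)
{
    var tracker = ActorProxy.Create<IProgressTrackingActor>(workerActorId);
    return await tracker.GetProgressAsync();
}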

Related

How to add a periodic task in Dart's compute function?

I want to send a log to the server periodically, every 30 seconds.
For performance, I want to use a different thread via the compute function.
But Timer is not working inside compute.
Any suggestions for running a task on a different thread periodically in Flutter?
You could just use an Isolate directly. It's what compute does under the hood.
However, I don't think sending information to a server would block your UI thread too much.
On top of that, if you're using app state to determine the log message, I would just keep it in the main Isolate.
I would probably wrap the (Material|Widget|Cupertino)App in a StatefulWidget and add a Timer.periodic in initState, as in the sketch below.
Note also that communicating with an Isolate means your message has to be copied into its memory, while sending an HTTP request to a server is usually async and non-UI-blocking.
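A minimal sketch of that approach; sendLogToServer is a hypothetical stand-in for your actual HTTP call:

import 'dart:async';
import 'package:flutter/material.dart';

class LoggingApp extends StatefulWidget {
  const LoggingApp({super.key});

  @override
  State<LoggingApp> createState() => _LoggingAppState();
}

class _LoggingAppState extends State<LoggingApp> {
  Timer? _logTimer;

  @override
  void initState() {
    super.initState();
    // Fires every 30 seconds; the HTTP call itself is async
    // and does not block the UI thread.
    _logTimer = Timer.periodic(const Duration(seconds: 30), (_) {
      sendLogToServer(); // hypothetical: POST the current log buffer
    });
  }

  @override
  void dispose() {
    _logTimer?.cancel();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) => const MaterialApp(home: Placeholder());
}

Future<void> sendLogToServer() async {
  // Placeholder for your real HTTP request.
}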

Architecting a configurable user notification service

I am building an application which needs to send notifications to users at a fixed time of day. Users can choose which time of day they would like to be notified, and on which days. For example, a user might like to be notified at 6am every day, or at 7am only on weekdays.
On the back-end, I am unsure how to architect the service that sends these notifications. The solution needs to handle:
concurrency, so I can scale my servers (notifications should not be duplicated)
system restarts
if a user changes their preferences, pending notifications should be rescheduled
Using a message broker such as RabbitMQ and a task scheduler such as Celery may meet your requirements.
Asynchronous, or non-blocking, processing is a method of separating the execution of certain tasks from the main flow of a program. This provides you with several advantages, including allowing your user-facing code to run without interruption.
Message passing is a method which program components can use to communicate and exchange information. It can be implemented synchronously or asynchronously and can allow discrete processes to communicate without problems. Message passing is often implemented as an alternative to traditional databases for this type of usage because message queues often implement additional features, provide increased performance, and can reside completely in-memory.
Celery is a task queue built on an asynchronous message-passing system. It can be used as a bucket into which programming tasks can be dumped. The program that submitted the task can continue to execute and remain responsive, and can later poll Celery to see whether the computation is complete and retrieve the data.
While Celery is written in Python, its protocol can be implemented in any language: a Celery worker is just a program connecting to the broker to process messages, so if your language has an AMQP client, there shouldn't be much work to create a worker for it.
There's also another way to be language-independent: use REST tasks. Instead of your tasks being functions, they're URLs. With this approach you can even create simple web servers that enable preloading of code: simply expose an endpoint that performs an operation, and create a task that just performs an HTTP request to that endpoint.
Here is the Python example from the official documentation:

from celery import Celery
from celery.schedules import crontab

app = Celery()

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Calls test('hello') every 10 seconds.
    sender.add_periodic_task(10.0, test.s('hello'), name='add every 10')

    # Calls test('world') every 30 seconds
    sender.add_periodic_task(30.0, test.s('world'), expires=10)

    # Executes every Monday morning at 7:30 a.m.
    sender.add_periodic_task(
        crontab(hour=7, minute=30, day_of_week=1),
        test.s('Happy Mondays!'),
    )

@app.task
def test(arg):
    print(arg)
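To actually run these schedules you would typically start both a beat scheduler and a worker, e.g. celery -A tasks beat and celery -A tasks worker (assuming the code above lives in tasks.py).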
As I see it, you need three types of entities: users (storing an email address or some other way to reach the user), notifications (storing what you want to send to the user: text etc.), and schedules (storing when the user wants to be notified). You need to store entities of those types in some kind of database.
A schedule should be connected to a user; a notification should be connected to a user and a schedule.
Assume you have a cron job that starts some script every minute. This script tries to get all notifications whose schedule matches the current time (the job's start time); see the sketch below. Don't forget to implement some kind of overlap prevention.
The script then places tasks (with all needed data: type of notification, users to notify, etc.) in a queue (beanstalkd or something similar). You can create as many workers (even on different physical instances) as you want to serve this queue without worrying about duplication - this gives you great scalability.
If a user changes their schedule, it affects all their notifications at the same moment. There are no pending notifications to reschedule, because notifications are only produced at the moment they should actually be sent.
This is a very high-level description. Many things depend on the language, database(s), queue server, and worker implementation.
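For illustration, a rough Python sketch of that per-minute dispatcher; fetch_due_notifications and enqueue are hypothetical stand-ins for your database query and your queue client (beanstalkd or similar):

import datetime
import json

def fetch_due_notifications(now):
    ...  # hypothetical: SELECT notifications joined to schedules matching `now`

def enqueue(body):
    ...  # hypothetical: put the task on beanstalkd or similar

def dispatch_due_notifications(now=None):
    # Truncate to the minute so every run of the cron job sees the same key.
    now = now or datetime.datetime.utcnow().replace(second=0, microsecond=0)
    # Overlap prevention: claim rows atomically in the database
    # (e.g. UPDATE ... SET claimed_at = now WHERE claimed_at IS NULL)
    # so two overlapping runs never pick up the same notification.
    for notification in fetch_due_notifications(now):
        enqueue(json.dumps({
            "user_id": notification["user_id"],
            "type": notification["type"],
            "payload": notification["text"],
        }))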

Is there a way to rely on the Postgres Notify/Listen mechanism?

I have implemented a Notify/Listen mechanism, so when a special request is sent to the web server, I can use NOTIFY to tell the workers (written in Python) that there's a pending request waiting to be processed.
The implementation works fine, but the problem is that if the worker server restarts, the notification gets lost, since at that particular moment there is no listener.
I could use a service like RabbitMQ or similar, but my needs are so simple that implementing such a monster is too much.
Is there any way, a configuration variable perhaps, that can give some persistence to the notification mechanism?
Thanks in advance
I don't think there is a way to persist notification channels, but you can simply store the pending requests in a table and have the worker check for any missed work on startup.
Either a timestamp or a pending/completed flag would work, depending on what kind of work it's doing.
For consistency, you can have the NOTIFY fire from an INSERT trigger on the queue table, and have the worker always check for any remaining work (not just a specific request) when notified; a sketch follows below.
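A minimal worker-side sketch using psycopg2; the requests table, its completed column, the new_request channel, the connection string, and process are all assumptions:

import select
import psycopg2
import psycopg2.extensions

def process(request_id):
    ...  # hypothetical handler: do the work and mark the row completed

conn = psycopg2.connect("dbname=app")  # assumed connection string
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()

# 1. On startup, catch up on anything that arrived while we were down.
cur.execute("SELECT id FROM requests WHERE completed = false")
for (request_id,) in cur.fetchall():
    process(request_id)

# 2. Then block on notifications for new work.
cur.execute("LISTEN new_request;")
while True:
    if select.select([conn], [], [], 60) == ([], [], []):
        continue  # timed out; loop and wait again
    conn.poll()
    while conn.notifies:
        notify = conn.notifies.pop(0)
        process(int(notify.payload))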

Stopping the execution of a currently running job after some time

I am using Quartz to schedule jobs to be executed daily as part of a much larger web application. However, after a couple of days, the administrator would like to stop the execution of a particular job (maybe because it is no longer needed). How do I go about doing this? I read the API docs for the Scheduler and it has a method called interrupt(JobKey jobKey), but that method only works with the same instance of the scheduler that was used to schedule the job.
interrupt(JobKey jobKey)
Request the interruption, within this Scheduler instance, of all
currently executing instances of the identified Job, which must be an
implementor of the InterruptableJob interface.
Is there any way of getting the instance of an existing scheduler? Or maybe use singletons?
You should definitely use a singleton instance of your scheduler. I recommend using an IoC container to manage this in a clean and efficient way.
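One hedged sketch without a container: Quartz's StdSchedulerFactory.getDefaultScheduler() returns the same underlying Scheduler on every call, so any part of the web application can retrieve it and request the interrupt. The JobKey and the work methods below are hypothetical, and the job must implement InterruptableJob:

import org.quartz.InterruptableJob;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class DailyJob implements InterruptableJob {
    private volatile boolean interrupted = false;

    @Override
    public void interrupt() {
        // Called by Scheduler.interrupt(jobKey); signal the work loop to stop.
        interrupted = true;
    }

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        while (!interrupted && hasMoreWork()) {
            doNextChunk();
        }
    }

    private boolean hasMoreWork() { return true; }  // hypothetical stand-in
    private void doNextChunk() { /* unit of work */ }
}

class JobAdmin {
    static void stopDailyJob() throws SchedulerException {
        // Hands back the same Scheduler instance that scheduled the job.
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.interrupt(JobKey.jobKey("dailyJob", "reports")); // hypothetical key
    }
}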

Windows Workflow: Persistence and Polling

I'm currently learning the WF framework, so bear with me; mostly I'm looking for where to start looking, not necessarily a direct answer. I just can't seem to figure out how to begin researching what I'd like in The Google.
Let's say I have a simple one-step workflow (much more complicated than that, but for simplicity's sake). This workflow needs to watch a certain record in the database to see when it changes. I don't have the capability to "push" via a trigger from the database when the row changes, so I need to poll for it every so often.
This workflow needs to be persisted to the database to be durable against restarts and whatnot as this is a long-running workflow. I'm trying to figure out the best way to get it to check every 3 minutes or so and also persist to the database. Do the persistence capabilities of the framework allow for that? It seems to be time-based. And since the workflow won't be reawakened by an external event, how does it reload from the database and check the same step it did previously again? Does it attempt the last unfulfilled activity automatically upon reloading?
Do "while" activities with a delay attached to it work at all, or can it be handled solely through the persistence services?
I'm not sure what you mean by "handled solely through persistence services"? Persistence refers only to the storing of an idle workflow.
You could have a Delay and a Code activity in a Sequence inside a While loop. While in the Delay, the workflow goes idle and may be persisted if necessary. However, depending on how much state must be persisted and/or how many such workflows are running at any one time, a leaner approach may be necessary.
A leaner approach would be to externalise the DB watching and have some "DB watching" workflow service raise an event when the desired change has occurred. This service would be added to the workflow runtime.
To that end you need a service contract, defined by an interface carrying the [ExternalDataExchange] attribute. This interface in turn defines an event that the service raises when the desired DB change is detected. It also defines a method that a workflow can call to specify what change this service should be looking for. The method should accept an instance GUID so that the requesting instance can be found when the DB change is detected.
In the workflow you use a CallExternalMethodActivity to call this service's method, then flow to a HandleExternalEventActivity which listens for the event. At this point the workflow goes idle and can be persisted. It remains there until the service raises the event.
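A hedged sketch of that contract (WF 3.x); the interface, event, and argument names are all hypothetical:

using System;
using System.Workflow.Activities;

[ExternalDataExchange]
public interface IDbWatcherService
{
    // Called from the workflow via a CallExternalMethodActivity to say
    // which record this instance is waiting on.
    void WatchRecord(Guid workflowInstanceId, int recordId);

    // Raised by the polling service when the record changes; the workflow's
    // HandleExternalEventActivity listens for this and wakes the instance up.
    event EventHandler<RecordChangedEventArgs> RecordChanged;
}

[Serializable]
public class RecordChangedEventArgs : ExternalDataEventArgs
{
    public RecordChangedEventArgs(Guid instanceId, int recordId)
        : base(instanceId)
    {
        RecordId = recordId;
    }

    public int RecordId { get; private set; }
}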