Azure Service Fabric and Scheduled Tasks

Say you have 30+ console applications running on a Windows machine that can be invoked manually or through Windows Scheduled Tasks. What would be the recommended way to move them to, or reimplement them in, Service Fabric?
One way of implementing this would be as one Service Fabric application with many stateless services (or Reliable Actors using timers/reminders), each listening to a Service Bus queue/topic, and then using Azure Scheduler to send messages to the queue/topic.
What would be the pros/cons of such an implementation? This article seems to list a few of them.
What would be other ways to implement this?
Some people seem to be advocating for including a pub/sub framework in Service Fabric; if that becomes part of Service Fabric, would it be a valid option?

I would look at using Azure Functions. This would be great for simplicity and, being serverless compute, there is no need to spin up and configure a bus or queue. Use stateless reliable API services and have a timer-triggered Azure Function call the stateless service directly.
See here for a start:
https://azure.microsoft.com/en-us/services/functions/
This video shows a timer-triggered function doing database clean-up, but there's no reason it couldn't make an HTTP call instead.
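
For illustration, a minimal sketch of such a timer-triggered function making the HTTP call. The schedule, endpoint URL and function name are hypothetical placeholders, not taken from the question:

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class NightlyJobTrigger
{
    private static readonly HttpClient Client = new HttpClient();

    [FunctionName("NightlyJobTrigger")]
    public static async Task Run(
        [TimerTrigger("0 0 2 * * *")] TimerInfo timer, // 02:00 every day
        ILogger log)
    {
        // Call the stateless service's HTTP endpoint instead of a DB clean-up.
        var response = await Client.PostAsync(
            "http://myapp.eastus.cloudapp.azure.com:8080/api/jobs/nightly", null);
        log.LogInformation($"Triggered job, status: {response.StatusCode}");
    }
}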

I like your idea of converting the console applications to actors and using reminders. However, I don't see the need for Service Bus or the Azure Scheduler.
It seems to me that you only need to expose a few API methods on the actors: one to create/modify the run schedule, and a second that allows the actor to be invoked manually/immediately (while still maintaining turn-based concurrency). The actor could store its full schedule internally, but it only ever needs to calculate the next time to execute - and set the reminder accordingly.
Also, keep in mind that actor reminders are triggered under all circumstances, whereas a timer stops if Service Fabric deactivates the actor.
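
For illustration, a minimal sketch of that shape. The names (IScheduledJobActor, SetScheduleAsync, RunNowAsync) are hypothetical, and the schedule is simplified to a fixed interval rather than a full calendar:

using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Runtime;

public interface IScheduledJobActor : IActor
{
    Task SetScheduleAsync(TimeSpan interval); // create/modify the run schedule
    Task RunNowAsync();                       // manual, turn-based invocation
}

internal class ScheduledJobActor : Actor, IScheduledJobActor, IRemindable
{
    public ScheduledJobActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId) { }

    public async Task SetScheduleAsync(TimeSpan interval)
    {
        // Persist the schedule, then arm a reminder for the next run only.
        await StateManager.SetStateAsync("interval", interval);
        await RegisterReminderAsync("run", null, dueTime: interval,
            period: TimeSpan.FromMilliseconds(-1)); // fire once, re-arm after each run
    }

    public Task RunNowAsync() => ExecuteJobAsync();

    public async Task ReceiveReminderAsync(string reminderName, byte[] state,
        TimeSpan dueTime, TimeSpan period)
    {
        await ExecuteJobAsync();
        // Re-registering under the same name replaces the previous reminder.
        var interval = await StateManager.GetStateAsync<TimeSpan>("interval");
        await RegisterReminderAsync("run", null, interval,
            TimeSpan.FromMilliseconds(-1));
    }

    private Task ExecuteJobAsync()
    {
        // The console application's work goes here.
        return Task.CompletedTask;
    }
}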


Are independent (micro)services a paradox?

Ideas about microservices:
Microservices should be functionally independent
Microservices should specialize in doing some useful work in a domain they own
Microservices are intended to communicate with each other
I find these ideas to be contradictory.
Let me give a hypothetical business scenario.
I need a service that executes a workflow that has a step requiring auto-translation.
My business has an auto-translation service. There is a RESTful API where I can POST a source language, target language and text, and it returns a translation. This is a perfect example of a useful standalone service. It is reusable and completely unaware of its consumers.
Should the workflow service that my business needs leverage this service? If so, then my service has a "dependency" on another service.
If you take this reasoning to the extreme - avoiding every dependency - every service in the world would end up containing every piece of functionality in the world.
Now, I know you're thinking we can break this dependency by moving out of request-response (REST) and into messaging. My service publishes a translation request message; a translation response message is published when the translation is complete, and my service consumes it. OK, but my service has to freeze the workflow and continue when the message arrives. It's still "waiting", even if the waiting is truly asynchronous (say the workflow state is persisted and the translation message arrives a day later). This is just a delayed request-response.
For me personally, "independent" is a quality that applies along multiple dimensions. A service may not be independent from a runtime perspective, but it can be independent from the development, deployment, operations and scalability perspectives.
For example, the translation service may be independently developed, deployed and operated by another team. At the same time, that team can scale the translation service independently of your business workflow service according to the demand they get, and you can scale the business workflow service according to your own demand (of course, downstream dependencies come into play here, but that's a whole other topic).

Throttle API calls to external service using Scala

I have a service exposing a REST endpoint that, after a couple of transformations, calls a third-party service also via its REST endpoint.
I would like to implement some sort of throttling in my service to avoid being throttled by this third-party service. Note that my service's endpoint accepts only one request, not a list of them. I'm using Play, and we also have Akka Streams as a dependency.
My first thought was to have my service save the requests into a database table and then have an Akka Streams Source, leveraging the throttle function, pick up tasks, apply the transformations and then call the external service.
Is this a reasonable approach, or does it have any severe drawbacks?
Thanks!
Why save the requests to the database? Does the queue need to survive restarts and/or do you run a load-balanced setup that needs to somehow synchronize the requests?
If you don't need the above, I'd think using just a Source.queue to hold the task data would work equally well.
And maybe you already thought of this: if you want to make your endpoint more resilient, you should allow your API to send a 'sorry, busy' response and drop the request instead of queuing it once the queue grows beyond a certain size.
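
To illustrate the shape of that, here is a minimal sketch. The question is about Scala, but the same pipeline is shown here in Akka.NET, whose Streams API mirrors the Scala one almost line for line; the Request/Response types, CallThirdPartyAsync and the rate numbers are all hypothetical:

using System;
using System.Threading.Tasks;
using Akka.Actor;
using Akka.Streams;
using Akka.Streams.Dsl;

public class Throttler
{
    private readonly ISourceQueueWithComplete<Request> _queue;

    public Throttler(ActorSystem system)
    {
        _queue = Source
            // Buffer up to 1000 pending requests; drop new ones when full.
            .Queue<Request>(1000, OverflowStrategy.DropNew)
            // At most 10 calls per second to the third-party service.
            .Throttle(10, TimeSpan.FromSeconds(1), 10, ThrottleMode.Shaping)
            .SelectAsync(4, req => CallThirdPartyAsync(req))
            .To(Sink.Ignore<Response>())
            .Run(system.Materializer());
    }

    // Called from the controller action: returns false when the queue is
    // full, so the endpoint can answer 'sorry, busy' instead of queuing.
    public async Task<bool> TryEnqueueAsync(Request request)
    {
        var result = await _queue.OfferAsync(request);
        return result is QueueOfferResult.Enqueued;
    }

    private Task<Response> CallThirdPartyAsync(Request request) =>
        Task.FromResult(new Response()); // stand-in for the real REST call
}

public class Request { }
public class Response { }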

The right way to call a fire-and-forget method on a Service Fabric service

I have a method on ServiceA that I need to call from ServiceB. The method takes upwards of 5 minutes to execute and I don't care about its return value. (Output from the method is handled another way)
I have setup my method in IServiceA like this:
[OneWay]
Task LongRunningMethod(int param1);
However, that doesn't appear to run, because I am getting System.TimeoutException: "This can happen if message is dropped when service is busy or its long running operation and taking more time than configured Operation Timeout."
One choice is to increase the timeout, but it seems that there should be a better way.
Is there?
For fire-and-forget or long-running operations, the best solution is using a message bus as middleware that handles this dependency between the two processes.
To do what you want without middleware, your caller would have to worry about many things, like timeouts (as in your case), delivery guarantees (confirmation), service availability, exceptions and so on.
With the middleware, the only worry your application logic needs is the delivery guarantee; the rest is handled by the middleware and the receiver.
There are many options, like:
Azure Service Bus
Azure Storage Queue
MSMQ
Event Hub
and so on.
I would not recommend the Service Fabric remoting, Task.Run() or thread-based workarounds that many places suggest, because they will just bring you extra work and won't run as smoothly as the middleware approach.
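
For illustration, a minimal sketch of the hand-off over an Azure Service Bus queue. The queue name, payload format and connection string handling are assumptions, not from the question:

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public static class FireAndForget
{
    private const string QueueName = "long-running-jobs"; // hypothetical

    // ServiceB: enqueue the request and return immediately.
    public static Task EnqueueAsync(string connectionString, int param1)
    {
        var client = new QueueClient(connectionString, QueueName);
        var body = Encoding.UTF8.GetBytes(param1.ToString());
        return client.SendAsync(new Message(body));
    }

    // ServiceA: pull work at its own pace; no caller is blocked for 5 minutes.
    public static void StartProcessing(string connectionString)
    {
        var client = new QueueClient(connectionString, QueueName);
        client.RegisterMessageHandler(
            async (message, token) =>
            {
                var param1 = int.Parse(Encoding.UTF8.GetString(message.Body));
                await LongRunningMethod(param1);
                // With AutoComplete the message is settled on success and
                // redelivered if the handler throws - the delivery guarantee.
            },
            new MessageHandlerOptions(args => Task.CompletedTask)
            {
                MaxConcurrentCalls = 1,
                AutoComplete = true
            });
    }

    private static Task LongRunningMethod(int param1) =>
        Task.CompletedTask; // stand-in for the real 5-minute method
}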

Behaviour when reducing instances of a Bluemix application

I have an orchestrator service which keeps track of the instances that are running and what request they are currently dealing with. If a new instance is required, I make a REST call to increase the instances and wait for the new instance to connect to the orchestrator. It's one request per instance.
The orchestrator tracks whether an instance is doing anything and knows which instances can be stopped; however, there is nothing in the API that allows me to reduce the number of instances by stopping a particular instance, which is what I am trying to achieve.
Is there anything I can do to manipulate the platform into deterministically stopping the instances that I want to stop? Perhaps by having long running HTTP requests to the instances I require and killing the request when it's no longer required, then making the API call to reduce the number of instances?
Part of the issue here is that I don't know the specifics of the current behavior...
Assuming you're talking about CloudFoundry/Instant Runtime applications, all of the instances of an application run behind a load balancer which uses round-robin to distribute requests across the instances (unless you have a session-affinity cookie set up). Differentiating between instances for incoming requests or manual scaling is not recommended and is an anti-pattern. You cannot control which instance the scale-down task will choose.
If you really want that level of control over each instance, maybe you should deploy them as separate applications: MyApp1, MyApp2, MyApp3, etc. All of your applications can have the same route (myapp.mybluemix.net). Each of the applications can now distinguish itself by its name (VCAP_APPLICATION), allowing you to terminate them individually.

Using SignalR in Azure Worker Roles

I have an Azure hosted web application which works alongside a number of instances of a worker role. Currently the web app passes work to these workers by placing messages in an Azure queue for the workers to pick up. The workers pass status and progress messages back by placing messages into a 'feedback' queue. At the moment, in order to inform my browser clients as to progress, I make ajax based periodic polling calls in the browser to an MVC controller method which in turn reads the Azure 'feedback' queue and returns these messages as json back to the browser.
Obviously, SignalR looks like a very attractive alternative to this clumsy polling/queuing approach, but I have found very little guidance on how to go about this when multiple worker roles (as opposed to the web role) need to send status to individual or all clients.
The SignalR.WindowsAzureServiceBus project by Clemens Vasters looks superb, but leaves one a bit high and dry at the end, i.e. a good example solution is lacking.
Added commentary: from my reading so far it seems that no direct communication from worker role (as opposed to web role) to browser client via the SignalR approach is possible. It seems that workers have to communicate with the web role using queues. This in turn forces a polling approach, i.e. the queues must be polled for messages from the worker roles - and this polling apparently has to be driven from the browser (how can a polling loop be set up in a web role?).
In summary, SignalR, even with the SignalR.WindowsAzureServiceBus scale-out approach of Clemens Vasters, cannot handle direct communication from worker role to browser.
Any comments from the experts would be appreciated.
You can use your worker roles as SignalR clients, so they will send messages to the web role (which is SignalR server) and the web role in turn will forward messages to clients.
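For illustration, a minimal sketch of that pattern with the .NET SignalR client. The hub name, method names and group usage are hypothetical; browsers would join the group on connect (not shown):

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

// Worker role: connect to the web role's hub as a plain SignalR .NET client.
public class ProgressReporter
{
    private IHubProxy _hub;

    public async Task ConnectAsync()
    {
        var connection = new HubConnection("http://myapp.cloudapp.net/");
        _hub = connection.CreateHubProxy("ProgressHub");
        await connection.Start();
    }

    // Called whenever the worker makes progress; replaces the feedback queue.
    public Task ReportAsync(string jobId, int percent) =>
        _hub.Invoke("ReportProgress", jobId, percent);
}

// Web role: the hub forwards worker progress to the subscribed browsers.
public class ProgressHub : Microsoft.AspNet.SignalR.Hub
{
    public void ReportProgress(string jobId, int percent)
    {
        Clients.Group(jobId).progressUpdated(percent);
    }
}
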
We use Azure Service Bus Queues to send data to our SignalR web roles which then forward on to the clients.
The CAT pages have very good examples of how to set up asynchronous loops and sending.
Keep in mind that my knowledge of these two technologies is very basic - I'm just starting - and I might have misunderstood your question, but it seems pretty obvious to me:
Aren't web roles capable of subscribing to a queue server where the worker role deposits the messages? If so, there would be no client "pulling": the queue service would provide the web-server-side code with a new message, and through SignalR you would push changes to the client without any client requests involved. The communication between web and worker would remain the same (which, in my opinion, is the proper way to do it).
If you are using the one of the SignalR scaleout backplanes you can get workers talking to connected clients via your web application.
How to publish messages using the SignalR SqlMessageBus explains how to do this.
It also links to a fully worked example which demonstrates one way of doing this.
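For reference, with the SQL Server backplane the wiring is a one-liner at startup (the connection string name here is an assumption); after that, a hub method invoked on any web role instance reaches clients connected to any other instance:

using System.Configuration;
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Register the SQL Server backplane before mapping SignalR, so every
        // instance sees messages published by any other instance.
        GlobalHost.DependencyResolver.UseSqlServer(
            ConfigurationManager.ConnectionStrings["SignalR"].ConnectionString);
        app.MapSignalR();
    }
}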
Alternative message bus products such as NServiceBus could be worth investigating. NServiceBus has the ability to deliver messages asynchronously across process boundaries without the need for polling.