Can I schedule a Twilio job to make API calls into my back-end? - callback

Is there a way to set up a Twilio scheduled job which is not making outgoing calls? I just want it to make an API call into my own back-end service.

Not today, unfortunately. You can investigate the cloud-based schedulers called out in this blog post:
4 ways to schedule Node.js code
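Until such a feature exists, any external scheduler that can fire an HTTP request will do. A minimal Python sketch using only the standard library (the endpoint URL and the hourly interval are placeholder assumptions):

```python
import sched
import time
import urllib.request

BACKEND_URL = "https://example.com/api/jobs/run"  # placeholder back-end endpoint

def call_backend():
    """Make a single POST into the back-end service."""
    req = urllib.request.Request(BACKEND_URL, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status

def schedule_repeating(scheduler, interval, action):
    """Run `action` every `interval` seconds using the stdlib scheduler."""
    def tick():
        action()
        scheduler.enter(interval, 1, tick)
    scheduler.enter(interval, 1, tick)

# Usage (blocks forever, firing call_backend once an hour):
# s = sched.scheduler(time.time, time.sleep)
# schedule_repeating(s, 3600, call_backend)
# s.run()
```

In production you would more likely hand this job to cron or one of the hosted schedulers from the linked post; the point is only that the trigger lives outside Twilio.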

Related

ExpressJS: expose Event Processing system as a REST Service API

I am looking for a way to expose an existing event processing system to the external world through a REST interface. In the existing design we have RabbitMQ message queues: a publisher posts a message and then waits for the processed result on a separate output queue, with the message ID used to match results on the output queue to the original message.
Now I want to expose this to external consumers, but we don't want to expose our RabbitMQ endpoint, so I was wondering whether anyone has managed to achieve something similar using ExpressJS. The diagram above shows the current thought process.
The main challenge is that some of this message processing can take more than a couple of minutes, so I was not sure how best to design an API like this. Should I create a polling interface for the client, or is there a technology these days that eliminates polling on the client side to check whether the message has been processed and fetch the result?
Can someone please help me with a good approach to this sort of requirement?
I finally ended up going the webhook way. Now when the REST API service receives a request, the client must also provide a webhook URL; this is registered alongside the client's request, and the server calls it back when the results are available.
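That flow can be sketched without any real HTTP or RabbitMQ plumbing. A Python sketch (`WebhookBroker` and its fields are invented names; `post` stands in for a real HTTP client):

```python
import uuid

class WebhookBroker:
    """Tracks which webhook to call back when a message's result arrives."""

    def __init__(self, post):
        self.post = post    # function(url, payload) performing the HTTP POST
        self.pending = {}   # message_id -> webhook URL

    def submit(self, payload, webhook_url):
        """Accept a client request: remember the webhook, publish the message."""
        message_id = str(uuid.uuid4())
        self.pending[message_id] = webhook_url
        # ...here `payload` would be published to RabbitMQ tagged with message_id...
        return message_id

    def on_result(self, message_id, result):
        """Called when the result queue delivers a processed message."""
        webhook_url = self.pending.pop(message_id, None)
        if webhook_url is not None:
            self.post(webhook_url, {"id": message_id, "result": result})
```

The key design point is that the client never touches RabbitMQ: the broker correlates the queue's message ID with the registered webhook and pushes the result out.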

Throttle API calls to external service using Scala

I have a service exposing a REST endpoint that, after a couple of transformations, calls a third-party service also via its REST endpoint.
I would like to implement some sort of throttling on my service to avoid being throttled by this third-party service. Note that my service's endpoint accepts only one request and not a list of them. I'm using Play and we also have Akka Streams as dependency.
My first thought was to have my service save the requests into a database table and then have an Akka Streams Source, leveraging the throttle function, pick up tasks, apply the transformations, and call the external service.
Is this a reasonable approach, or does it have any severe drawbacks?
Thanks!
Why save the requests to the database? Does the queue need to survive restarts and/or do you run a load-balanced setup that needs to somehow synchronize the requests?
If you don't need the above, I'd think using only Source.queue to hold the task data would work just as well.
And maybe you already thought of this: if you want to make your endpoint more resilient, you should allow your API to send a 'sorry, busy' response and drop the request instead of queuing it once your queue grows beyond a certain size.
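The Source.queue-plus-throttle idea can be sketched outside Akka too. A plain-Python stand-in (class and parameter names are invented) combining a bounded buffer, a minimum release interval, and the 'sorry, busy' rejection:

```python
import collections
import time

class ThrottledQueue:
    """Bounded in-memory queue that rejects when full and releases work
    at a fixed maximum rate."""

    def __init__(self, max_size, min_interval):
        self.max_size = max_size
        self.min_interval = min_interval   # seconds between released tasks
        self.tasks = collections.deque()
        self.last_release = 0.0

    def offer(self, task):
        """Enqueue, or answer 'sorry, busy' (False) when the queue is full."""
        if len(self.tasks) >= self.max_size:
            return False
        self.tasks.append(task)
        return True

    def poll(self, now=None):
        """Release the next task only if the rate limit allows it."""
        now = time.monotonic() if now is None else now
        if not self.tasks or now - self.last_release < self.min_interval:
            return None
        self.last_release = now
        return self.tasks.popleft()
```

An `offer` returning False maps to the HTTP 429/503 response suggested above; in Akka Streams the same behaviour falls out of `Source.queue` with a bounded buffer and the `throttle` operator.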

Queued messages + API endpoint

We have developed a modular web app with a very powerful API, and now we need a queuing tool for delayed or time-consuming jobs. We are looking at RabbitMQ and AWS SQS, but these two just store messages and you have to fetch messages from them manually, or have I misunderstood?
We would like to channel all messages through our API, so when a message is published to the queue it should be POST-ed (after some delay) to our interface.
So my question:
Is there any tool for queuing that support http post (with oauth2)?
If not, is this approach somehow valid:
Create a worker that polls messages from the queue
and POSTs them to the API with some client?
(We would then have to maintain a CLI tool, which we want to avoid.)
Are there any alternatives?
When using SQS, polling is the only way out.
To make things easier you can write this polling logic in an AWS Lambda function, because Lambda functions do not have the overhead of maintaining infrastructure and servers.
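Whether it runs in Lambda or on a server, that worker reduces to a small, transport-agnostic loop body. A Python sketch (`receive`, `post_to_api`, and `delete` are injected stand-ins for real SQS and HTTP clients such as boto3 and requests):

```python
def drain_queue(receive, post_to_api, delete):
    """One polling pass: receive a batch, POST each message to the API,
    and delete only the messages that were forwarded successfully."""
    forwarded = 0
    for message in receive():                  # e.g. SQS ReceiveMessage batch
        if post_to_api(message["Body"]):       # e.g. POST with an OAuth2 token
            delete(message["ReceiptHandle"])   # ack so SQS won't redeliver
            forwarded += 1
    return forwarded
```

Deleting only on successful delivery means SQS's visibility timeout redelivers anything the API rejected, which gives you retries for free.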

Azure Service Fabric and Scheduled Tasks

Say you have 30+ console applications running on a Windows machine which can be invoked manually or through Windows Scheduled Tasks. What would be the recommended way to move them to / implement them in Service Fabric?
One way of implementing this would be as one Service Fabric application with many stateless services (Reliable Actor using Timers/Reminders) each listening to the Service Bus queue/topic, and then use Azure Scheduler to send messages to the queue/topic.
What would be the pros/cons of such an implementation? This article seems to list a few of them.
What would be other ways to implement this?
It seems some people are advocating for including a pub/sub framework in Service Fabric; if that becomes part of Service Fabric, would it be a valid option?
I would look at using Azure Functions; this is great for simplicity and, being serverless compute, means no need to spin up and configure a bus or queue. Then use stateless Reliable API services and have the timed Azure Function call the stateless service directly.
See here for a start:
https://azure.microsoft.com/en-us/services/functions/
The video below shows a timer doing a DB clean-up, but there is no reason this couldn't be an HTTP call.
Video
I like your idea of converting the console applications to actors and using reminders. However, I don't see the need for Service Bus or the Azure Scheduler.
It seems to me that you only need to expose a few API methods on the actors: one to create/modify the run schedule, and a second that allows the actor to be manually/immediately invoked (while still maintaining turn-based concurrency). The actor could store its full schedule internally, but it only ever needs to calculate the next time to execute, and set the reminder accordingly.
Also, keep in mind that actor reminders are triggered under all circumstances, whereas a timer stops if Service Fabric deactivates the actor.
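The "only calculate the next time to execute" idea boils down to a small function. A Python sketch (representing the daily schedule as sorted seconds-since-midnight is an assumption of this sketch; the real actor would pass the resulting delay to its reminder registration):

```python
DAY = 24 * 60 * 60  # seconds in a day

def next_due(schedule, now):
    """Given daily run times (seconds since midnight, sorted ascending) and
    the current time-of-day, return the delay in seconds until the next run,
    i.e. the due time the actor would set its reminder to."""
    for t in schedule:
        if t > now:
            return t - now
    # No more runs today: wrap around to the first run tomorrow.
    return schedule[0] + DAY - now
```

Storing the full schedule in actor state and re-running this calculation each time a reminder fires keeps exactly one reminder pending per actor.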

Using SignalR in Azure Worker Roles

I have an Azure hosted web application which works alongside a number of instances of a worker role. Currently the web app passes work to these workers by placing messages in an Azure queue for the workers to pick up. The workers pass status and progress messages back by placing messages into a 'feedback' queue. At the moment, in order to inform my browser clients as to progress, I make ajax based periodic polling calls in the browser to an MVC controller method which in turn reads the Azure 'feedback' queue and returns these messages as json back to the browser.
Obviously, SignalR looks like a very attractive alternative to this clumsy polling/queuing approach, but I have found very little guidance on how to go about this when multiple worker roles (as opposed to the web role) need to send status to individual clients or to all clients.
The SignalR.WindowsAzureServiceBus project by Clemens Vasters looks superb but leaves one a bit high and dry at the end, i.e. a good example solution is lacking.
Added commentary: from my reading so far, it seems that no direct communication from a worker role (as opposed to the web role) to a browser client is possible via the SignalR approach. Workers apparently have to communicate with the web role using queues, which in turn forces a polling approach: the queues must be polled for messages from the worker roles, and this polling appears to have to originate from (be driven by) the browser. (How could a polling loop be set up in a web role?)
In summary, SignalR, even with the SignalR.WindowsAzureServiceBus scale-out approach from Clemens Vasters, cannot handle direct communication from a worker role to the browser.
Any comments from the experts would be appreciated.
You can use your worker roles as SignalR clients, so they will send messages to the web role (which is SignalR server) and the web role in turn will forward messages to clients.
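That worker-as-client topology can be sketched in a few lines. A Python stand-in (the `Hub` class and its methods are invented names standing in for a real SignalR hub on the web role) showing the forwarding step:

```python
class Hub:
    """Minimal stand-in for a SignalR hub hosted on the web role: worker
    roles connect as clients, and anything a worker sends is broadcast to
    the connected browser clients."""

    def __init__(self):
        self.browser_clients = []   # callables that deliver to a browser

    def connect_browser(self, deliver):
        """A browser connects and supplies its delivery callback."""
        self.browser_clients.append(deliver)

    def from_worker(self, message):
        """Invoked when a worker-role 'client' sends a status message."""
        for deliver in self.browser_clients:
            deliver(message)
```

The important property is that the worker pushes into the hub instead of anything polling the worker, so browsers receive progress as soon as it is reported.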
We use Azure Service Bus Queues to send data to our SignalR web roles which then forward on to the clients.
The CAT pages have very good examples of how to set up asynchronous loops and sending.
Keep in mind that my knowledge of these two technologies is very basic, I'm just starting, and I might have misunderstood your question, but this seems pretty obvious to me:
Aren't web roles capable of subscribing to a queue server where the worker role deposits the message? If so, there would be no client "pulling": the queue service would deliver each new message to the web-server-side code, and through SignalR you would push changes to the client without any client requests involved. The communication between web and worker roles would remain the same (which, in my opinion, is the proper way to do it).
If you are using the one of the SignalR scaleout backplanes you can get workers talking to connected clients via your web application.
How to publish messages using the SignalR SqlMessageBus explains how to do this.
It also links to a fully worked example which demonstrates one way of doing this.
Alternative message bus products such as NServiceBus could be worth investigating. NServiceBus has the ability to deliver messages asynchronously across process boundaries without the need for polling.