How to emulate a queue for a CGI script? - perl

In my CGI script I make a long (up to 10 seconds) request to another server, parse the results and show the response to my user (via AJAX). But the other server's owner asked me to make no more than one request per 10 seconds, so:
I need to save each request from my users;
every ten seconds I can make at most one request to the other server.
At first I thought about cron, which would open a simple text file (the queue file), read the first line and send it as a request to the other server. After that it would save the result in another file (where I'd cache all results). So my CGI script would first check the cache file and try to find the result there, and only if the result is not found would it save the task in the queue file (for cron).
But cron runs only once per minute, so my users would have to wait far too long.
So how can I do this from CGI alone?
Maybe like this:
After checking the cache file, the CGI script estimates the time needed to complete the request (by reading the current queue file) and sends this estimate to the HTML page (where I can pick it up and make another AJAX request after that time has passed).
It then saves the request to the queue file and forks. The forked process waits until its request is at the top of the queue and then makes the request to the other server.
Finally it saves the result in the cache file. What do you think?
Maybe a module has already been written for such tasks?
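For reference, the cache-check / estimate / enqueue part of the flow above could be sketched like this (Python for illustration, since the original is Perl; the file names, the one-request-per-line queue format and the JSON cache are all assumptions, and the forked worker is left out):

```python
# Sketch of the queue/cache idea: a plain-text queue file (one request
# key per line) plus a JSON cache file keyed by request key.
import fcntl
import json
import os

QUEUE_FILE = "queue.txt"
CACHE_FILE = "cache.json"

def check_cache(key, cache_file=CACHE_FILE):
    """Return the cached result for `key`, or None if not cached yet."""
    if not os.path.exists(cache_file):
        return None
    with open(cache_file) as f:
        return json.load(f).get(key)

def enqueue(key, queue_file=QUEUE_FILE):
    """Append the request to the queue; return its 0-based position."""
    with open(queue_file, "a+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # serialize concurrent CGI processes
        f.seek(0)
        position = len(f.readlines())   # requests already ahead of us
        f.write(key + "\n")
        fcntl.flock(f, fcntl.LOCK_UN)
    return position

def estimated_wait(position, interval=10):
    """Seconds until this queued request can be sent (one per `interval`)."""
    return position * interval
```

The CGI script would call `check_cache` first, and on a miss call `enqueue`, report `estimated_wait(position)` back to the page, and fork a worker that sleeps until its line reaches the top of the queue.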

One option is to create a local daemon/service (Linux/Windows) that handles sending all requests to the remote server. Your web service can talk to this daemon instead of the remote service using the same protocol, except on a private port/socket. The daemon can accept requests from the web server/application and every ten seconds, if there is a pending request it can send it on to the remote server, and when there is a response, it can forward it back to the incoming request socket. You can think of this daemon as a proxy server that simply adds a queueing functionality. Note that the daemon doesn't actually have to parse either the incoming request or returning results; it just forwards the bits on to the destination in each case. It only has to implement the queueing and networking functionality.
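A minimal sketch of that queueing-proxy idea (Python for illustration; the class and method names are made up, and `forward` stands in for the real networking code that talks to the remote server):

```python
# The daemon owns the only path to the remote server and drains its
# queue at most once per `interval` seconds.
import queue
import threading
import time

class QueueingProxy:
    def __init__(self, forward, interval=10):
        self.forward = forward          # callable: request -> response
        self.interval = interval
        self.pending = queue.Queue()    # (request, reply_callback) pairs

    def submit(self, request, reply):
        """Called by the web server; `reply` receives the response later."""
        self.pending.put((request, reply))

    def run(self, stop):
        """Forward pending requests, at most one per `interval` seconds."""
        while not stop.is_set():
            try:
                request, reply = self.pending.get(timeout=0.1)
            except queue.Empty:
                continue
            reply(self.forward(request))
            stop.wait(self.interval)    # rate limit before the next one
```

In the real daemon, `submit` would be fed by an accept loop on the private socket and `reply` would write the response back to the waiting web-server connection; the queue and the ten-second pacing are the only logic the daemon adds.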

Related

What to do if a RESTful API is only partly successful

In our design we have something of a dilemma. We have a database of projects, and each project has a status. We have a REST API to change a project from "Ready" status to "Cleanup" status. Two things must happen:
1. update the status in the database
2. send out an email to the approvers
Currently the RESTful API does (1) and, if that is successful, does (2).
But sometimes the email fails to send, and since (1) is already committed, it is not possible to roll back.
I don't want to send the email prior to the commit, because I want to be sure the commit succeeded before sending the email.
I thought about undoing step (1), but that is very hard. The status change involves adding new records to the history table, so I would need to delete them; and if another person makes other changes concurrently, the undo might get messed up.
So what can I do? If (2) fails, should I return "200 OK" to the client?
It seems like the best option is to return "500 Server Error" with an error message that says "The project status was changed. However, sending the email to the approvers failed. Please take appropriate action."
Perhaps I should not try to do (1) + (2) in a single operation? But that just shifts the burden to the client, which is worse!
Just some random thoughts:
You can have a "notification sent" status flag along with a datetime of submission. When an email succeeds, the flag flips; if not, it stays. When changes are submitted, your code iterates through ALL unsent notifications and tries to send them. I have no idea what backend DB you are using, but I believe many have the functionality to send emails as well. You could have a scheduled job (SQL Server Agent for MSSQL) that runs hourly and tries to send once the submission datetime has lapsed by a certain amount, or starts setting off alarms if it keeps failing.
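That "notification sent" flag plus a retry sweep can be sketched as follows (Python, with an in-memory list standing in for the real database table; all names are made up for illustration):

```python
# Outbox-style sketch: the status change and the pending notification
# are recorded together, and a later sweep retries anything unsent.
import datetime

notifications = []   # stand-in for a DB table of pending notifications

def record_status_change(project_id):
    """Commit the status change and queue the email in the same step."""
    notifications.append({
        "project": project_id,
        "submitted": datetime.datetime.utcnow(),
        "sent": False,
    })

def sweep_unsent(send_email):
    """Retry every unsent notification; flip the flag only on success."""
    for n in notifications:
        if not n["sent"]:
            try:
                send_email(n["project"])
                n["sent"] = True
            except Exception:
                pass   # stays unsent; the next sweep retries it
```

The point is that (1) always commits, and (2) becomes eventually consistent: a failed email is simply retried by the next sweep (e.g. the hourly scheduled job) rather than forcing a rollback.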
If it is that insanely important, then maybe you could integrate a third-party service such as SendGrid to run as a backup sending mechanism. That, of course, would cost more money though...
Traditionally I've always separated functions like this into a backend worker process that handles this kind of administrative task across many different applications. Some notifications get sent out every morning, some every 15 minutes, some are weekly summaries. If I run into a crash-and-burn, I light up the event log, and we are (lucky/unlucky) enough to have server monitoring tools that alert us on specified application events.

How to avoid a website's server temporarily sleeping

I have a web application (ASP.NET C#) and I need my application to send some data to another web service at about 8:00 am.
I used the Timer class (a static variable), the timer's Elapsed event handler and an interval (about 20 min) to check the hour and send the data to the web service at 8:00 am.
Everything works, and the timer's event handler runs fine.
But after two hours, if my website has had no requests or traffic, the site is temporarily put to sleep! So the timer and other static variables are lost and the task fails.
Then, if a user sends a request to visit the website, the web application starts again and the timer and other static variables become active.
If we don't have any visits or requests, can we avoid the server temporarily putting the website to sleep?
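The hour check the question describes boils down to something like this (a Python sketch for illustration; the original uses a static System.Timers.Timer in C#, and `should_send` is a made-up name). Note it is exactly this kind of in-process state that is lost when the application pool is recycled:

```python
# Called from the timer's Elapsed handler every ~20 minutes: send once
# per day, the first time the clock passes send_hour.
import datetime

def should_send(now, last_sent_date, send_hour=8):
    """True when it is past send_hour and today's send hasn't happened."""
    return now.hour >= send_hour and last_sent_date != now.date()
```

Because `last_sent_date` (like the C# static Timer) lives only in process memory, an idle-timeout recycle silently drops it, which is why schedules like this are usually moved out of the web process entirely.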

What is going wrong in this SIP call? Multiple NOTIFY messages in a row before RTP established

The long string of NOTIFY messages happens after the called number answers, and after about 20-30 seconds the 503 happens, and then the call connects fine with audio.
If that trace is for a single call, it's an incredibly complex one. After spending a bit of time looking it over, I don't think it is for a single call; instead there are a few different calls mixed up in it. It's complicated by the fact that 10.10.20.1 is a Back to Back User Agent (B2BUA) and is initiating its own calls in response to different events.
As to your question about the NOTIFY request: it is originally generated by the UAC at 10.10.10.3 as part of what appears to be an attended transfer. The REFER request is the start of the transfer. An implicit subscription, which is what the NOTIFY request is part of, gets created for a REFER transaction (see https://www.rfc-editor.org/rfc/rfc3515, and also https://www.rfc-editor.org/rfc/rfc4488, which deals with suppressing the implicit subscription).
For an attended transfer, the NOTIFY request allows a call-leg end point to indicate that the transfer has been processed successfully. In this case it looks like the user agent at 10.10.10.3 isn't happy to accept the transfer until it gets a response to its NOTIFY request. This is unusual behaviour: typically NOTIFY requests are for just that, notifying agents of events, not controlling call flow. Once 10.10.10.3 gets the 503 response to its NOTIFY request, it finally starts sending RTP to 10.10.20.4. It mustn't care what the response is, as 503 is an error condition and would usually cause whatever was waiting for it to fail.

Cancel a GWT RPC call

In this example there is a good description of how to implement timeout logic using Timer#schedule. But there is a pitfall. Suppose we have two RPC requests: the first does a lot of computation on the server (or retrieves a large amount of data from the database), and the second is a tiny request that returns results immediately. If we make the first request, we will not receive results immediately; instead we hit the timeout, and after the timeout we make the second, tiny request. Then abortFlag from the example will be true, so we can retrieve the results of the second request, but we can also still receive the results of the first request that timed out earlier (because the AsyncCallback object of the first call was not destroyed).
So we need some way of cancelling the first RPC call after the timeout occurs. How can I do this?
Let me give you an analogy.
You, the boss, made a call to a supplier to get some product info. The supplier says they need to call you back, because the info will take some time to gather. So you gave them the contact details of your foreman.
Your foreman waits for the call. Then you tell your foreman to cancel the info request if it takes more than 30 minutes.
Your foreman thinks you are bonkers, because he cannot cancel the request: he does not have an account that gives him the privilege to access the supplier's ordering system.
So your foreman simply ignores any response from the supplier after 30 minutes. Your ingenious foreman sets up a timer on his phone and ignores the call from the supplier after 30 minutes. Even if you killed your foreman and cut off all communication links, the vendor would still be busy servicing your request.
There is nothing on the GWT client side to cancel. The callback is merely a JavaScript object waiting to be invoked.
To cancel the call, you need to tell the server side to stop wasting CPU resources (if that is your concern). Your server side must be programmed to provide a service API which, when invoked, would cancel the job and return immediately to trigger your GWT callback.
You can refresh the page, and that would discard the page request and close the socket, but the server side would still be running. And when the server side completes its tasks and tries to send an HTTP response, it would fail, saying in the server logs that it had lost the client socket.
It is a very straightforward piece of reasoning.
Therefore, it comes down to the design of your servlet/service: how a previous request can be identified by a subsequent request.
Cascaded Callbacks
If request 2 is dependent on the status of request 1, you should use a cascaded callback: if request 2 is to run on success, place request 2 inside the onSuccess block of request 1's callback, rather than submitting the two requests one after another.
Otherwise, your timer should trigger request 2, and request 2 would have two responsibilities:
tell the server to cancel the previous request
get the small piece of info
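The "foreman ignores late calls" idea amounts to tagging each request with a generation number and dropping any response that carries a stale one. A sketch (Python for illustration; in GWT the same check would live inside the AsyncCallback, and all names here are made up):

```python
# Client-side "cancellation" by ignoring stale responses: bumping the
# generation counter makes every earlier in-flight request obsolete.
class RpcClient:
    def __init__(self):
        self.generation = 0

    def start_request(self):
        """Begin a new request; return its generation tag."""
        self.generation += 1
        return self.generation

    def on_response(self, generation, result, deliver):
        """Deliver only if this response belongs to the latest request."""
        if generation == self.generation:
            deliver(result)
        # otherwise: a timed-out or superseded call; silently ignored
```

This does not stop the server from working, just as the foreman cannot touch the supplier's ordering system; it only guarantees the page never acts on the first, timed-out result.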

How to Resume the Persisted Workflow with Delay Activity without Reloading into memory

I am creating a workflow for leave applications. My requirement is that if a participant has not responded within the specified time, the request needs to pass to the next-level participant for approval.
Suppose a requester submits a leave request and the Team Lead needs to approve it within 7 days. If the Team Lead has not responded within 7 days, it automatically has to go to the Manager for approval.
Generally, to achieve this we would write a Windows service that checks periodically and sends the notifications once the period has elapsed.
But I want to achieve this without writing a Windows service. Is there any possibility in WF 4.0?
I am trying it like this: once the requester submits the request, I show the request in the participant's mailbox and persist the workflow. Once the participant responds, I resume the workflow (I save the workflow instance ID) and pass the participant's response on for further workflow execution.
If the participant does not respond, how do I escalate / send the request to the manager without using a Windows service?
Is it possible to do anything with the Delay activity?
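The escalation rule itself is simple to state (a Python sketch, purely for illustration; in WF 4.0 this is the decision a Delay activity racing a response would implement, and the names here are made up):

```python
# Who may approve a leave request at a given moment: the Team Lead
# until the deadline passes, then the Manager.
import datetime

def current_approver(submitted, now, deadline_days=7):
    """Return the approval level in effect at time `now`."""
    deadline = submitted + datetime.timedelta(days=deadline_days)
    return "team_lead" if now < deadline else "manager"
```

The hard part is not the rule but who evaluates it while the workflow is persisted and out of memory, which is exactly what the hosting answer below addresses.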
If you create a workflow service, it is hosted in the WorkflowServiceHost, and this periodically checks whether there are expired timers and resumes those.
You must host the workflow engine somewhere...
If it's not in a Windows service, it should be in IIS.
You can also host it in a "normal" command-line application, but if you close the application the workflow will stop.