Does SOAP support canceling a call?

When using the SOAP protocol, is it possible to cancel a pending remote function call?
I see three different situations:
A) Making a request to a service that takes a long time to complete. For example, when copying a directory containing a lot of files, can the file-copy loop be canceled?
B) Making a request that returns a long list. For example, when querying a big in-memory list of user names, can the transmission of this list-response be canceled?
C) Canceling a call that is still on the internal call queue; in other words, before the server has begun processing it. This can happen when issuing a lot of asynchronous calls in a short time.

From the client's point of view, cancelling a synchronous (request-response) SOAP call is the same as for any other HTTP call - just disconnect and stop listening for the response. A well-written server will check whether the client is still connected before proceeding with lengthy operations (e.g. in .NET the server would check IsClientConnected) and should cancel the operation if not.
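As a rough sketch of that server-side check, assuming a Java servlet container driven from Scala (the servlet API has no direct IsClientConnected equivalent, so we probe the connection by flushing between units of work; listFilesToCopy and copyOne are hypothetical helpers):

    import java.io.IOException
    import javax.servlet.http.{HttpServlet, HttpServletRequest, HttpServletResponse}

    // Hypothetical sketch: abort a long file-copy loop once the client drops.
    // Many containers only notice a dropped connection when a write or flush
    // actually touches the socket, so this probe is best-effort.
    class CopyServlet extends HttpServlet {
      override def doPost(req: HttpServletRequest, resp: HttpServletResponse): Unit = {
        val out = resp.getOutputStream
        for (file <- listFilesToCopy()) {           // hypothetical helper
          try out.flush()                           // throws once the client has disconnected
          catch { case _: IOException => return }   // client gone: cancel the loop
          copyOne(file)                             // hypothetical helper doing the real work
        }
      }
      private def listFilesToCopy(): Seq[String] = Seq.empty // placeholder
      private def copyOne(file: String): Unit = ()           // placeholder
    }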
One-way calls cannot be cancelled in this manner, however, because you've already sent the payload and disconnected. Cancelling a one-way call would require a call to some sort of cancellation method on the SOAP service, which the service would have to explicitly support.

Related

Making an HTTP API server asynchronous with Future, how does it make it non-blocking?

I am trying to write an HTTP API server which does basic CRUD operations on a specific resource. It talks to an external DB server to perform the operations.
Future support in Scala is pretty good, and futures are used for all non-blocking computation. I have used futures in many places where we wrap an operation in a future and move on; the callback is triggered when the value eventually becomes available.
Coming to an HTTP API server's context, it is possible to implement non-blocking asynchronous calls, but doesn't a GET or a POST call still block the main thread?
When a GET request is made, a success 200 means the data was written to the DB successfully and not lost. Until the data is written to the server, the thread that was created is still blocked until the final acknowledgement has been received from the database that the insert was successful, right?
The main thread (created when the HTTP request was received) could delegate and get a Future back, but isn't it still blocked until onSuccess is triggered, which happens when the value is available, meaning the DB call was successful?
I am failing to understand how an HTTP server could be designed to maximize efficiency: what happens when a few hundred requests hit a specific endpoint, and how is that dealt with? I've been told that Slick takes the best approach.
Could someone explain a successful HTTP request lifecycle with and without futures, assuming there are 100 DB connection threads?
When a GET request is made, a success 200 means the data was written to the DB successfully and not lost. Until the data is written to the server, the thread that was created is still blocked until the final acknowledgement has been received from the database that the insert was successful, right?
The thread that was created for the specific request need not be blocked at all. When you start an HTTP server, you always have the "main" thread running and waiting for requests to come in. Once a request arrives, it is usually offloaded to a thread taken from a thread pool (or ExecutionContext). The thread serving the request doesn't need to block on anything; it only needs to register a callback which says "once this future completes, please complete this request with a success or failure indication". In the meantime, the client socket is still awaiting a response from your server; nothing has been returned yet. If, for example, we're on Linux and using epoll, we hand the kernel a list of file descriptors to monitor for incoming data, and we get a notification back when that data becomes available.
We get this for free when running on top of the JVM due to how java.nio is implemented for Linux.
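A minimal Scala sketch of that callback registration (here complete is a hypothetical stand-in for whatever your HTTP toolkit uses to finish an async request, e.g. completing a servlet AsyncContext or an Akka HTTP route):

    import scala.concurrent.{ExecutionContext, Future}
    import ExecutionContext.Implicits.global

    // Hypothetical: `complete` finishes the pending HTTP response.
    def handleInsert(complete: Either[Throwable, String] => Unit): Unit = {
      val saved: Future[String] = Future {
        "row-id" // stand-in for the DB driver call; a truly non-blocking
                 // driver would hand back a Future without parking a thread
      }
      // Register the callback and return at once; no thread sits waiting.
      saved.onComplete {
        case scala.util.Success(id) => complete(Right(id)) // e.g. respond 200
        case scala.util.Failure(e)  => complete(Left(e))   // e.g. respond 500
      }
    }

The request-serving thread runs only the few instructions above and is immediately free to serve other connections.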
The main thread (created when the HTTP request was received) could delegate and get a Future back, but isn't it still blocked until onSuccess is triggered, which happens when the value is available, meaning the DB call was successful?
The main thread usually won't be blocked, as it is what's in charge of accepting new incoming connections. If you think about it logically, had the main thread blocked until your request completed, we could only serve one concurrent request, and who wants a server that can handle only a single request at a time?
In order to accept multiple requests, it never processes the route on the thread on which it accepts the connection; it always delegates that work to a background thread.
In general, there are many ways of doing efficient IO in both Linux and Windows. The former has epoll while the latter has IO completion ports. For more on how epoll works internally, see https://eklitzke.org/blocking-io-nonblocking-io-and-epoll
First off, there has to be something blocking the final main thread for it to keep running. But that's no different from having a thread pool and joining it. I'm not exactly sure what you're asking here, since I think we both agree that using threads/concurrency is better than a single-threaded operation.
Future is easy and efficient because it abstracts all the thread handling away from you. By default, all new futures run on the global implicit ExecutionContext, which is just a default thread pool. Once you kick off a Future, it is scheduled on a thread from that pool and runs, and your program's execution continues. There are also convenient constructs for manipulating the result of a future directly: for example, you can map and flatMap over futures, and once a future completes, your transformation will run on its result.
It's not like single-threaded languages, where a single blocking call inside a future would actually block the entire execution.
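For instance, a minimal sketch of that composition style, with fetchUser and fetchOrders as hypothetical asynchronous calls:

    import scala.concurrent.Future
    import scala.concurrent.ExecutionContext.Implicits.global // the default pool

    // Hypothetical async operations; real code would hit a DB or remote API.
    def fetchUser(id: Long): Future[String] = Future(s"user-$id")
    def fetchOrders(user: String): Future[List[String]] =
      Future(List(s"order-of-$user"))

    // Nothing here blocks: each step is scheduled when the previous completes.
    val summary: Future[String] =
      fetchUser(1L)
        .flatMap(fetchOrders)                    // runs once the user arrives
        .map(orders => s"${orders.size} orders") // transforms the eventual value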
When you're comparing efficiency, what are you comparing it to?
In general "non-blocking" may mean different things in different contexts: non-blocking = asynchronous (your second question) and non-blocking = non-blocking IO (your first question). The second question is a bit simpler (addresses more traditional or well-known aspect let's say), so let's start from it.
The main thread (created when the HTTP request was received) could delegate and get a Future back, but isn't it still blocked until onSuccess is triggered, which happens when the value is available, meaning the DB call was successful?
It is not blocked, because the Future runs on a different thread, so your main thread and the thread where you execute your DB-call logic run concurrently (the main thread is still able to handle other requests while the DB-call code of the previous request is executing).
When a GET request is made, a success 200 means the data was written to the DB successfully and not lost. Until the data is written to the server, the thread that was created is still blocked until the final acknowledgement has been received from the database that the insert was successful, right?
This aspect is about IO. The thread making the DB call (network IO) is not necessarily blocked. That is the case for the old "thread per request" model, where the thread really is blocked and you need to create another thread for another DB request. Nowadays, however, non-blocking IO has become popular. You can google for more details, but in general it allows you to use one thread for several IO operations, as the sketch below illustrates.
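To make the "one thread, several IO operations" point concrete, here is a toy, deliberately simplified Scala sketch sitting directly on java.nio. It is an illustration of the mechanism, not a DB driver: one thread registers many sockets with a selector and is woken only when one of them actually has data, instead of parking a thread per connection.

    import java.net.InetSocketAddress
    import java.nio.ByteBuffer
    import java.nio.channels.{SelectionKey, Selector, ServerSocketChannel, SocketChannel}

    // Toy illustration of non-blocking IO: a single thread services
    // arbitrarily many connections via one selector.
    object OneThreadManySockets {
      def main(args: Array[String]): Unit = {
        val selector = Selector.open()
        val server   = ServerSocketChannel.open()
        server.bind(new InetSocketAddress(9000))
        server.configureBlocking(false)
        server.register(selector, SelectionKey.OP_ACCEPT)

        val buf = ByteBuffer.allocate(4096)
        while (true) {
          selector.select()                       // sleeps until some socket is ready
          val keys = selector.selectedKeys().iterator()
          while (keys.hasNext) {
            val key = keys.next(); keys.remove()
            if (key.isAcceptable) {               // new connection: register it too
              val client = server.accept()
              client.configureBlocking(false)
              client.register(selector, SelectionKey.OP_READ)
            } else if (key.isReadable) {          // data arrived on some connection
              buf.clear()
              val n = key.channel().asInstanceOf[SocketChannel].read(buf)
              if (n < 0) key.channel().close()    // peer closed the connection
              // otherwise: hand buf's contents to the protocol layer
            }
          }
        }
      }
    }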

JsonTextReader.ReadAsync not reacting to cancellation

I am implementing an application that relies on Office365 API Streaming Subscriptions. When listening on subscriptions, I am reading the almost infinite result of a POST request. In practice, I get a stream that contains a single JSON object, which starts with some warm-up properties and has an array of events.
I am using the JsonTextReader to parse the stream. There is a while (await jsonReader.ReadAsync(readCts.Token))... loop that seamlessly parses the stream.
It is working just great... well, almost.
At predefined intervals, I get a KeepAlive notification. The reader identifies them, so I want to use this to reset the readCts CancellationTokenSource timeout. If the notification does not arrive in time, the read operation should be canceled. So far so good. But timeout-based canceling works only if the underlying network connection is healthy (simulated by setting the timeout to less than the keep-alive event interval).
After interrupting the connection, the async operation hangs and the cancellation token has no effect on it. The logical connection is lost as well; re-establishing the physical one does not resume the stream.
I have tried setting the HttpClient instance's timeout, but that had no effect either. Finally, I managed it by setting WinHttpHandler.ReceiveDataTimeout. But for that, I am using a separate HttpClient instance.
1) Is the behavior of cancellation described above normal?
2) I know that HttpClient instances should be reused. But in general my API calls do not run for hours, and I will probably have several such calls in parallel. Can I share one HttpClient instance, or do I need as many instances as I have parallel requests?
Thank you.

Semaphore error logged in mobicents sip servlet

We have an application written against Mobicents SIP Servlets, currently this is using v2.1.547 but I have also tested against v3.1.633 with the same behavior noted.
Our application is working as a B2BUA, we have an incoming SIP call and we also have an outbound SIP call being placed to an MRF which is executing VXML. These two SIP calls are associated with a single SipApplicationSession - which is the concurrency model we have configured.
The scenario which recreates this 100% of the time is as follows:
inbound call placed to our application (call is not answered)
outbound call placed to MRF
inbound call hangs up
application attempts to terminate the SipSession associated with the outbound call
I am seeing this being logged:
2015-12-17 09:53:56,771 WARN [SipApplicationSessionImpl] (MSS-Executor-Thread-14) Failed to acquire session semaphore java.util.concurrent.Semaphore#55fcc0cb[Permits = 0] for 30 secs. We will unlock the semaphore no matter what because the transaction is about to timeout. THIS MIGHT ALSO BE CONCURRENCY CONTROL RISK. app Session is5faf5a3a-6a83-4f23-a30a-57d3eff3281c;SipController
I am willing to believe our application might somehow be triggering this behavior, but I can't see how at the moment. I would have thought acquiring/releasing the Semaphore was entirely internal to the implementation, so it should ensure that nothing acquires the Semaphore without ever releasing it?
Any pointers on how to get to the bottom of this would be appreciated, as I said it is 100% repeatable so getting logs etc is all possible.
It's hard to tell without seeing any logs or application code showing how you access the sessions and schedule messages to be sent. But if you use the same SipApplicationSession in an asynchronous manner, you may want to use our vendor-specific asynchronous API https://mobicents.ci.cloudbees.com/job/MobicentsSipServlets-Release/lastSuccessfulBuild/artifact/documentation/jsr289-extensions-apidocs/org/mobicents/javax/servlet/sip/SipSessionsUtilExt.html#scheduleAsynchronousWork(java.lang.String,%20org.mobicents.javax.servlet.sip.SipApplicationSessionAsynchronousWork), which guarantees that access to the SipApplicationSession is serialized and avoids any concurrency issues.

Why Lift 3 round trips are doing 2 kinds of HTTP request

I am using Lift 3 round trips and I am trying to understand what happens behind the scenes.
Why are there two kinds of request:
GET on comet_request
POST on ajax_request
Lift uses HTTP long polling for asynchronous responses to the browser. I won't go into great detail on why the Lift developers have chosen long polling over other implementations, like WebSockets, but there are well-thought-out reasons, and if you're interested just do a quick search through the Lift mailing list where it's been discussed many times.
The gist of how it works is that the browser makes a request to the server, and the server holds the request open until there is information to send. When information becomes available, it gets pushed down the pipe, the browser processes it, and the browser initiates a new long poll request. Lift uses the servlet container's asynchronous support to hold the connection open with very little resource consumption, and because Javascript is asynchronous by nature, waiting on new information is not resource intensive for the browser either. Since there is a limit on the number of requests a browser can make to the same domain at once, Lift only opens one of these long poll connections at a time and multiplexes responses from what could be many different "responders" through it.
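The multiplexing can be pictured with a deliberately simplified Scala sketch (not Lift's actual code; a real implementation must also handle races, timeouts, and reconnects): many responders push into one hub, and whichever single long-poll request is currently held open is completed with the next message.

    import java.util.concurrent.ConcurrentLinkedQueue
    import java.util.concurrent.atomic.AtomicReference
    import scala.concurrent.{Future, Promise}

    // Simplified sketch of long-poll multiplexing.
    class LongPollHub {
      private val pending = new AtomicReference[Promise[String]]()
      private val backlog = new ConcurrentLinkedQueue[String]()

      // Called when the browser opens its long-poll (comet) GET request.
      def awaitNext(): Future[String] = {
        val p = Promise[String]()
        Option(backlog.poll()) match {
          case Some(msg) => p.success(msg) // data already queued: answer at once
          case None      => pending.set(p) // otherwise hold the request open
        }
        p.future
      }

      // Called by any server-side responder, e.g. a round trip that finished.
      def push(msg: String): Unit =
        Option(pending.getAndSet(null)) match {
          case Some(p) => p.success(msg)   // complete the held GET request
          case None    => backlog.add(msg) // nobody waiting: buffer for later
        }
    }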
Initially Lift's asynchronous support was added so that data generated by server side events could be pushed to the client as they occurred. With the growth in popularity of client side frameworks, the ability to push asynchronous data initiated by client events became useful, hence the addition of round trips. The idea is that the client makes a request to the server, and rather than respond immediately, the server does some stuff in another thread then sends a response (potentially much) later. To users of the client side API, this is modeled as a promise, but behind the scenes what happens is that Lift receives the request and responds immediately (remember, we can't have too many requests open to the same domain) but will stream the actual data that satisfies the promise through the long polling connection when it becomes available.
So, that's what you're seeing. Your initial request is the ajax POST, which triggers the beginning of a round trip. If you were to look at the data returned by that request, you'd see that it's not the data that satisfies the promise. The actual response data is delivered via Lift's long polling mechanism, and that is what you see with the GET request.

GoLang simple REST API should use GoRoutines

My question is very simple.
Should I make use of goroutines for a very simple REST API?
I basically only do simple queries to a DB, verify sessions, or do logins. Are any of these operations worth setting up a goroutine for? When are goroutines useful, and how should I set them up?
The net/http package already takes care of this for you. Once you call Serve (or more likely ListenAndServe) the following happens:
Serve accepts incoming HTTP connections on the listener l, creating a new service goroutine for each. The service goroutines read requests and then call handler to reply to them. Handler is typically nil, in which case the DefaultServeMux is used.
See http://golang.org/pkg/net/http/ for more.
You may want another goroutine if a request triggers the need for longer processing and you don't want to make the client wait.
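A minimal Go sketch of that case, with a hypothetical audit-log write standing in for the slow work: the handler answers the client immediately and lets the extra goroutine finish in the background.

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        path := r.URL.Path // copy what we need before the handler returns
        go func() {
            // Runs after the response is sent; errors must be reported
            // out-of-band (logs, retry queue) since the client is gone.
            log.Println("writing audit record for", path)
        }()
        fmt.Fprintln(w, "accepted")
    }

    func main() {
        http.HandleFunc("/", handler)
        // ListenAndServe already starts one goroutine per connection for you.
        log.Fatal(http.ListenAndServe(":8080", nil))
    }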