HttpListener prevent Timeout - c#-3.0

I implemented an HttpListener to process SOAP requests. This works fine, but I can't find a solution to the problem that some SOAP requests take too much time, resulting in timeouts on the client side.
How do I let the requesting client know that its request has not timed out?
I thought about sending "dummy" information while the request is being processed, but the HttpListener only seems to send the data when you Close() the response object, and that can be done only once, so I suppose this is not the right thing to do.
Solution:
Thread aliveWorker = new Thread(() =>
{
    try
    {
        // Send a single space every five seconds so the client keeps receiving
        // data while the real work is still running.
        while (context.Response.OutputStream.CanWrite)
        {
            context.Response.OutputStream.WriteByte((byte) ' ');
            context.Response.OutputStream.Flush();
            Thread.Sleep(5000);
        }
    }
    catch (ThreadInterruptedException)
    {
        // Interrupt() below wakes the sleeping thread; swallowing the exception
        // lets the keep-alive thread end cleanly instead of crashing the process.
    }
});
aliveWorker.Start();
doWork();
aliveWorker.Interrupt();
createTheRealResponse();

Sending dummy information is not a bad idea.
I think you need to call the Flush() method on the HttpListenerResponse's OutputStream after writing the dummy data, and you must also enable the SendChunked property.
Try sending a dummy space at regular intervals:
response.SendChunked = true;
response.OutputStream.WriteByte((byte)' ');
response.OutputStream.Flush();
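Applied to the keep-alive thread in the question's solution, that means enabling chunking before the first dummy byte goes out (a sketch; context is the HttpListenerContext of the current request):
context.Response.SendChunked = true;   // set before the first write; it cannot be changed once headers are sent
aliveWorker.Start();                   // the keep-alive thread can now flush its dummy bytes
doWork();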

I see two options: increase the timeout on the client side, or extend the protocol with operation-status requests from the client for long-running operations.

If you are using .NET 4.5, take a look at the HttpListenerTimeoutManager class; you can use it as a base to implement custom timeout behaviour.
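For example, a minimal sketch (assuming .NET 4.5+ on Windows, where HttpListener sits on top of HTTP.sys) that raises the relevant server-side timeouts before starting the listener; the prefix URL is just a placeholder:
HttpListener listener = new HttpListener();
listener.Prefixes.Add("http://localhost:8080/soap/");
// TimeoutManager exposes the HTTP.sys timeouts for this listener instance.
listener.TimeoutManager.IdleConnection = TimeSpan.FromMinutes(5);
listener.TimeoutManager.HeaderWait = TimeSpan.FromMinutes(2);
listener.Start();
This only relaxes the server-side limits; the client's own request timeout still has to be large enough for the slow SOAP calls.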

Related

Using `delay` in Sidekiq with dynamic class/method names

Let's say I have some internal business logic that determines what mailer needs to be sent.
The output of this business logic is a Hash in the following format -
{ mailer_class: SomeMailer, mailer_method: "foo_email" }
{ mailer_class: OtherMailer, mailer_method: "bar_email" }
# etc...
I need to call the appropriate mailer based on the info above, so I try something like this with Sidekiq's built-in delay -
data = { mailer_class: ..., mailer_method: "..." }
data[:mailer_class].delay.send(data[:mailer_method])
This results in Sidekiq queueing up the send method, which will eventually be called on my mailer.
Functionally it might work correctly, because that class will, after all, receive the appropriate method. But it feels a bit dirty, and it confuses other processes that watch the Sidekiq queue because they expect to see a mailer method name but find :send instead.
Is there a good way around this or am I stuck modifying the rest of my application logic to work with this?
Thanks!
Why not pass that Hash to a Sidekiq Worker which knows how to send emails with that class/method combo?
def perform(hash)
  hash['mailer_class'].constantize.send(hash['mailer_method'])
end
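Fleshed out, the worker and the enqueue call might look like this (a sketch; the MailerWorker name is hypothetical, and .deliver is added because the mailer method itself only builds the message — depending on your Rails version this may need to be deliver_now):
class MailerWorker
  include Sidekiq::Worker

  def perform(hash)
    # Sidekiq round-trips arguments through JSON, so keys arrive as strings
    # and the class has to be passed by name.
    hash['mailer_class'].constantize.send(hash['mailer_method']).deliver
  end
end

# Enqueueing with the hash produced by the business logic:
data = { mailer_class: SomeMailer, mailer_method: "foo_email" }
MailerWorker.perform_async('mailer_class' => data[:mailer_class].to_s,
                           'mailer_method' => data[:mailer_method])
This keeps the queue readable: other consumers see MailerWorker jobs carrying an explicit class name and method instead of a :send call.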

Stream a response in Spring mvc

This is the situation: let's say I have an endpoint that receives a request to retrieve data in a range of time (or whatever), and the result of that request is a big list that I get from a database, say a list of "Person" objects. For each of these person objects I have to call another method that may be a little slow, and it would delay the response a lot if I had to wait until it has executed for every element of this big list.
What I would like to accomplish is to stream the response through a REST endpoint so that my front end does not have to wait until the whole list is processed to start displaying it on the screen.
I am confused here: I know that an asynchronous method using Spring's @Async lets the consumer get a response even if the task is not finished yet, but as far as I understand, this is helpful for things like sending emails, or any other task or series of tasks whose result you are not going to display on the screen.
But in the case of a response that is meant to be displayed on the screen, I guess I should stream a chunk of data as soon as I have a whole "Person" object ready.
What is the right way to accomplish this? Is an @Async method of any help in this situation, or should I just find a way to detect when a person object is ready and stream it? Or am I terribly wrong and not understanding the concepts of async and streaming?
A little example would help.
Thanks.
I have been trying to understand the same concepts for the last three days, and here is my understanding, which may help you.
Asynchronous REST endpoint:
If your REST endpoint does some complex business logic or calls some external service and may take some time to respond, it is better to respond from the API as soon as possible and move the time-consuming logic to the background (a separate thread). This is where asynchronous processing helps.
Chunked output:
If your endpoint is expected to send a large amount of data, and you want to improve the user experience by starting to render the output (in the UI) as soon as pieces of it become available, chunked output from the REST endpoint is the better approach.
Using Jersey we can achieve both asynchronous processing and chunked output, as in the sample below.
public ChunkedOutput<String> getChunkedResponse() {
    final ChunkedOutput<String> output = new ChunkedOutput<String>(String.class);

    new Thread() {
        public void run() {
            try {
                String chunk;
                int index = 0;
                while ((chunk = getWordAtIndex(index)) != null) {
                    output.write(chunk);
                    index++;
                }
            } catch (IOException e) {
                // Add code to handle the IOException during this operation
            } finally {
                try {
                    output.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }.start();

    return output; // This output object may be returned way before output is created
}
I have tried out a sample to test this with a Jersey and Spring Boot combination. You can check it out in my git repository here.
Hope it helps.
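Since the question is about Spring MVC specifically, here is a rough equivalent using StreamingResponseBody (Spring 4.2+; @GetMapping needs 4.3+). PersonService, findPersons and enrichAndSerialize are hypothetical stand-ins for the database query and the slow per-person call, so treat this as a sketch rather than a drop-in solution:
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.StreamingResponseBody;

@RestController
public class PersonStreamController {

    private final PersonService personService;

    public PersonStreamController(PersonService personService) {
        this.personService = personService;
    }

    @GetMapping("/persons/stream")
    public ResponseEntity<StreamingResponseBody> streamPersons() {
        StreamingResponseBody body = out -> {
            for (Person person : personService.findPersons()) {
                String json = personService.enrichAndSerialize(person); // the slow per-person step
                out.write((json + "\n").getBytes());
                out.flush(); // push each person to the client as soon as it is ready
            }
        };
        return ResponseEntity.ok(body);
    }
}
The controller method returns immediately; the lambda runs later on a container-managed thread, so the client starts receiving persons one by one instead of waiting for the whole list.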

How to get notified when unfiltered Netty server actually gets shutdown?

I have an Unfiltered Netty server that I need to shut down and restart after every test.
val mockService = unfiltered.netty.Server.http(mockServicePort).handler(mockServicePlan)

before {
  proxyServer.start()
}

after {
  proxyServer.stop()
}
Currently, this is not working, and I am fairly certain that is because the stop() function is non-blocking, so the following start() call happens too early.
I looked for a way to block on, or get notified of, server closure, but that does not appear to be surfaced through the current API.
Is there a better way of achieving what I am trying to achieve?
Easy answer: replace your Unfiltered Netty server with an http4s Blaze server.
var server: org.http4s.server.Server = null

val go: Task[Server] = org.http4s.server.blaze.BlazeBuilder
  .bindHttp(mockServicePort)
  .mountService(mockService)
  .start

before {
  server = go.run
}

after {
  server.shutdown.run
}
There's also an awaitShutdown that blocks until the server shuts down. But since shutdown is a Task, it's easy to get notified when it has finished. Just flatMap that shit.
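For instance, since shutdown is a Task, sequencing anything after the server has actually stopped is just composition (a sketch against the scalaz-based http4s API shown above; the println is only a placeholder):

after {
  // run blocks until the whole Task completes, and the flatMapped step only
  // executes once shutdown has finished, i.e. once the port is released.
  server.shutdown.flatMap(_ => Task.delay(println("server is down"))).run
}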

How do I call a method on my ServiceWorker from within my page?

I have a ServiceWorker registered on my page and want to pass some data to it so it can be stored in an IndexedDB and used later for network requests (it's an access token).
Is the correct thing just to use network requests and catch them on the SW side using fetch, or is there something more clever?
Note for future readers wondering similar things to me:
Setting properties on the SW registration object, e.g. setting self.registration.foo to a function within the service worker and doing the following in the page:
navigator.serviceWorker.getRegistration().then(function(reg) { reg.foo; })
Results in TypeError: reg.foo is not a function. I presume this has something to do with the lifecycle of a ServiceWorker, meaning you can't modify it and expect those modifications to be accessible in the future, so any interface with a SW likely has to be postMessage-style, so perhaps just using fetch is the best way to go...?
So it turns out that you can't actually call a method inside a SW from your app (due to lifecycle issues), so you have to use the postMessage API to pass serialized JSON messages around (so no passing callbacks, etc.).
You can send a message to the controlling SW with the following app code:
navigator.serviceWorker.controller.postMessage({'hello': 'world'})
Combined with the following in the SW code:
self.addEventListener('message', function (evt) {
  console.log('postMessage received', evt.data);
})
Which results in the following in my SW's console:
postMessage received Object {hello: "world"}
So by passing in a message (a JS object) that indicates the function and arguments I want to call, my event listener can receive it and call the right function in the SW. To return a result to the app code, you will also need to pass a port of a MessageChannel into the SW and then respond via postMessage. For example, in the app you'd create and send over a MessageChannel with the data:
var messageChannel = new MessageChannel();
messageChannel.port1.onmessage = function(event) {
  console.log(event.data);
};
// This sends the message data as well as transferring messageChannel.port2 to the service worker.
// The service worker can then use the transferred port to reply via postMessage(), which
// will in turn trigger the onmessage handler on messageChannel.port1.
// See https://html.spec.whatwg.org/multipage/workers.html#dom-worker-postmessage
navigator.serviceWorker.controller.postMessage(message, [messageChannel.port2]);
and then you can respond via it in your Service Worker within the message handler:
evt.ports[0].postMessage({'hello': 'world'});
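Putting the service-worker side together, the message handler can dispatch on a message type and answer over the transferred port (a sketch; the 'storeToken' type and token field are hypothetical):
self.addEventListener('message', function (evt) {
  if (evt.data && evt.data.type === 'storeToken') {
    // e.g. persist evt.data.token to IndexedDB here, then acknowledge:
    evt.ports[0].postMessage({ ok: true });
  }
});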
To pass data to your service worker, the approach above is a good way. But in case someone is still having a hard time implementing it, there is another hack for it:
1 - Append your data as query parameters to the URL you register the service worker with (e.g. sw.js -> sw.js?a=x&b=y&c=z).
2 - In the service worker, read that data back from self.location.search.
Note that this is only beneficial if the data you pass does not change very often for a particular client; otherwise the service worker URL keeps changing, and every time the client reloads or revisits, a new service worker is installed.
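A minimal sketch of that hack (the token parameter name is just an example; URLSearchParams is available in service worker scope in current browsers):
// Page: encode the data in the URL used to register the service worker.
navigator.serviceWorker.register('/sw.js?token=abc123');

// Service worker: read it back from the script's own URL.
var params = new URLSearchParams(self.location.search);
var token = params.get('token'); // "abc123"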

ASP.NET Web Api: Delegate after Request

I have a problem with streams and Web API.
I return a stream that is consumed by Web API. Currently, I put the socket back into a pool right after getting the stream, but this causes some errors.
Instead, I must put the socket into the pool AFTER the request has ended (the stream has been consumed and is now closed).
Is there a delegate for this, or some other best practice?
Example code:
public HttpResponseMessage Get(int fileId)
{
    var response = new HttpResponseMessage(HttpStatusCode.OK);
    Stream s = GetFile(fileId);
    response.Content = new StreamContent(s);
    return response;
}
Stream GetFile(int id)
{
    FSClient fs = GetFSClient();
    Stream s = fs.GetFileStream(id);
    AddFSToPool(fs);
    return s;
}
GetFile uses a self-written FileServer client.
It has an option to reuse FileServer connections. These connections are stored in a pool (the pool contains only unused FileServer connections). When the next request calls GetFSClient(), it gets a connected one from the pool (and removes it from the pool).
But if another request comes in and takes a FileServer connection that is sitting in the pool (because it is marked unused), there is still the problem that its stream may actually still be in use.
Now I want to put the FSClient into the pool only after the request has ended and the stream has been fully consumed.
Is there an entry point for that?
Stream is seen as a volatile/temporary resource; no wonder it implements IDisposable.
Stream is also not thread-safe, since it has a Position: once it has been read to the end it must be reset back to the start, and if two threads read the same stream they will most likely get different chunks.
As such, I would not even attempt to solve this problem. Re-using streams on a web site (which is inherently multi-user/multi-threaded) is not recommended.
UPDATE
As I said, I still think the best option is to rethink the solution, but if you need to register something that runs after the request finishes, use RegisterForDispose on the request:
public HttpResponseMessage Get(HttpRequestMessage req, int fileId)
{
    ....
    req.RegisterForDispose(myStream);
}
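If the goal is specifically to hand the FSClient back to the pool (rather than dispose it), one option is to register a small IDisposable whose Dispose does the hand-back; Web API disposes everything registered this way once the request is torn down, i.e. after the response stream has been written and closed. FSClient, GetFSClient, GetFileStream and AddFSToPool are the names from the question; DisposeAction is a hypothetical helper, so treat this as a sketch:
sealed class DisposeAction : IDisposable
{
    private readonly Action _onDispose;
    public DisposeAction(Action onDispose) { _onDispose = onDispose; }
    public void Dispose() { _onDispose(); }
}

public HttpResponseMessage Get(HttpRequestMessage request, int fileId)
{
    FSClient fs = GetFSClient();
    Stream s = fs.GetFileStream(fileId);

    var response = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StreamContent(s)
    };

    // Runs when Web API disposes the request, i.e. after the stream has been consumed.
    request.RegisterForDispose(new DisposeAction(() => AddFSToPool(fs)));
    return response;
}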