HTTP Sender and REST conventions - Mirth

I'm writing a C# Web API server application, and will send JSON to it via a Mirth HTTP Sender destination. This post is about how to handle error conditions. Specifically, there are three scenarios I want to handle:
1. Sometimes we take the C# application server offline for a short period for system upgrade or maintenance, and Mirth is unable to connect at all. I want Mirth to queue all messages in order, and when the server is available, process them in the order they were received.
2. The server receives the request, but rejects it due to a problem with the content of the request, e.g., missing a required field. In accordance with REST conventions, the server will return a 400-level HTTP response. This message would be rejected every time it's submitted, so it should not be re-sent; just log the failure and move on to the next message.
3. The server receives the request, but something goes wrong on the server, and the server returns an HTTP 500 Server Error response. This would be the appropriate response, for example, when something in the server environment has gone wrong. One real-world example was the time the Web API server was running, but somebody rebooted the database server. REST conventions would suggest we continue to resend the message until the transient problem has been resolved.
For #1, initially I had it queue on failure/always, but it appears the response transformer never runs for messages that were queued (at least, the debug statements never showed in the log). I have turned queueing off, and set it to retry every ten seconds for an hour, and that seems to give the desired behavior. Am I on the right track here, or missing something?
For #2 and #3, returning any HTTP 400 or 500 error invokes the 1-hour retries. What I want is to apply the 1-hour retries for the 500 errors, but not the 400 errors. I’ve tried responseStatus = SENT in the response transformer, but the response transformer only runs once, after the hour has expired, and not for each retry.
This seems like a common problem, yet I’m not finding a solution. How are the rest of you handling this?

You're close!
So by default, the response transformer will only run if there's a response payload to transform. For connection problems, or possibly for 4xx/5xx responses that contain no payload, the response transformer won't execute.
However, if you set your response data types (from the Summary -> Set Data Types dialog, or from the Destinations -> Edit Response, Message Templates tab) to Raw, then the response transformer will execute every time, because the Raw data type considers even an empty payload to be "transformable".
So turn queuing back on, and set your response data types to Raw. Then in the response transformer, if you look at the Reference tab there's a category for HTTP Sender:
You'll want the "response status line"; that's the "HTTP/1.1 200 OK" line of the response, which contains the response code. Here's a response transformer script that forces 4xx responses to ERROR so they are logged and not retried, while 5xx responses and connection failures stay QUEUED and keep retrying:
if (responseStatus == QUEUED) {
    var statusLine = $('responseStatusLine');
    if (statusLine) {
        var parts = statusLine.split(' ');
        if (parts.length >= 2) {
            var responseCode = parseInt(parts[1], 10);
            // Force 4xx responses to error
            if (responseCode >= 400 && responseCode < 500) {
                responseStatus = ERROR;
                responseStatusMessage = statusLine;
            }
        }
    }
}

Related

How to produce a response body with asynchronously created body chunks in Swift Vapor

I am looking into the Swift Vapor framework.
I am trying to create a controller class that maps data obtained over an SSL link to a third-party system (an Asterisk PBX server) into a response body that is streamed to the client over time.
So I need to send the received text lines (obtained separately on the SSL connection) as they come in, without waiting for a 'complete response' to be constructed.
Seeing this example:
return Response(status: .ok) { chunker in
    for name in ["joe\n", "pam\n", "cheryl\n"] {
        sleep(1)
        try chunker.send(name)
    }
    try chunker.close()
}
I thought it might be the way to go.
But what I see when connecting to the Vapor server is that the REST call waits for the loop to complete before the three lines are received as the result.
How can I get try chunker.send(name) to send its characters back to the client without first waiting for the loop to complete?
In the real code the controller method can potentially keep an HTTP connection to the client open for a long time, sending Asterisk activity data to the client as soon as it is obtained. So each .send(name) should actually pass data to the client immediately, without waiting for the final .close() call.
Adding a try chunker.flush() did not produce any better result.
HTTP requests aren't really designed to work like that. Different browsers and clients will function differently depending on their implementations.
For instance, if you connect with telnet to the chunker example you pasted, you will see the data is sent every second. Safari, on the other hand, will wait for the entire response before displaying it.
If you want to send chunked data like this reliably, you should use a protocol like WebSockets that is designed for it.

Camel does not set 'Transfer-Encoding: chunked' when the response is prepared by an exception handler processor

I am implementing REST services using Apache CXF running on ServiceMix, and for that I have a Camel route that does some processing, sends the message over a queue, processes some more, and sends back the reply. Something like this:
from("direct:start")
.process(A)
.process("activemq:abc")
.process(B);
On this route I have applied some basic validation and an exception handler, and when I have to stop the route in either case, I use something like this:
exchange.getOut().setBody(response);
exchange.setProperty(Exchange.ROUTE_STOP, Boolean.TRUE);
I use SoapUI, restclient-UI, and PuTTY to make HTTP requests, and I get a proper response body displayed in all of them. Now I wanted to preserve the request headers, so I made a small change everywhere in the code so that response bodies are set on exchange.getIn() only. For example, in case of a validation failure I do:
exchange.getIn().setBody(response);
exchange.setProperty(Exchange.ROUTE_STOP, Boolean.TRUE);
Just with this little change, the REST clients I am using to make requests stopped displaying the response body. According to the server logs the response is being generated, and according to the logs in the REST clients I am getting the proper response, but they are unable to display the response body, and only in the case where I stop the route partway through. A normal response displays just fine. Only restclient-UI was considerate enough to show an error explaining why it is not displaying the body:
Byte array conversion from response body stream failed.
Digging deeper, I found the one response header that was present in the success response but missing from the error response:
Transfer-Encoding chunked
The error response is around 1000 characters long and contains a Content-Length header. I am not sure, but I think the problem has something to do with this. I would really like to keep working with exchange.getIn(), but these different kinds of responses prepared by Camel are confusing me. How can I make sure my Camel responses are always displayed properly?
The Content-Length header is preserved from the original request, so you need to remove it so that Camel CXF can work out the new body length for the response and set Content-Length accordingly.
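For example, a small processor along these lines could strip the header before the route is stopped; this is just a sketch, and the class name and error body are placeholders (Exchange.CONTENT_LENGTH is simply Camel's constant for the "Content-Length" header name):
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

// Illustrative sketch: drop the Content-Length carried over from the request
// so Camel CXF can recompute the length (or fall back to chunked encoding)
// for the new error body.
public class StripContentLengthProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        exchange.getIn().removeHeader(Exchange.CONTENT_LENGTH);
        exchange.getIn().setBody("validation failed"); // placeholder error body
        exchange.setProperty(Exchange.ROUTE_STOP, Boolean.TRUE);
    }
}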

Long GET request on REST API, handling server crashes

I have a REST API where a GET request can take 10-20 seconds. So I usually return a 202 code with a location like http://fakeserver/pending/blah where the client can check the status of the request. pending/blah returns a 200 code with "Status: pending" if the request is still pending, and a 303 code when it's done, with a final location for the result: http://fakeserver/finished/blah.
But what if the server crashes during the request processing? Should pending/blah return a 303 code, and then finished/blah return a 404? How can I alert the client that the resource may be available at a location, but I'm not sure? Assume the requests are persistent, so that when the server reboots, it continues processing the request.
First of all, I'd make the state of the processed resource an internal field of the resource itself. This way you can avoid using strange endpoints like /finished/blah/ or /pending/blah/ and instead introduce a single endpoint /resources/blah/ which will, among other fields, return the state it's currently in.
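For example, such a representation might look something like this (the field names are illustrative, not prescribed):
{
    "id": "blah",
    "state": "pending",
    "result": null
}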
After changing the architecture to the endpoint mentioned above, if you ask for blah and the server has crashed you can:
1. return 200 with a pending status - the client doesn't necessarily have to know about the crash
2. return 404, a simple not found, with an extra message that the server has crashed
3. return 500 and inform the client explicitly what the problem is
Other useful codes may also be 409 or 503. Returning any 3XX is not a good idea IMO, since no redirection applies here. Personally I'd go for 200 (option 1) or 500 (option 3).

REST API & HTTP Status Code

I have a bunch of PUT operations which execute actions on the input resource.
Let's make an example: I have a payment operation in my API which states that a credit card must be charged a specific amount.
In my code I first verify whether there is sufficient credit on the card and then execute the operation. If there isn't sufficient credit I simply return 400, but I am not sure that is correct.
Which is the correct HTTP Status Code in cases like this?
I can, of course, send a response with HTTP 200 and attach a payload with further details explaining the error. I can also send back an HTTP 400 Bad Request, or even better an HTTP 412 Precondition Failed.
Which is the correct code to send in the response in scenario like this where the validation failed? Is there any resource that I can read to understand the rationale behind HTTP Status Codes and HTTP Verbs?
Use 422 Unprocessable Entity.
The 422 status code means the server understands the content type of the request entity (hence a 415 Unsupported Media Type status code is inappropriate), and the syntax of the request entity is correct (thus a 400 Bad Request status code is inappropriate) but was unable to process the contained instructions.
Failing that, simply use 400 for any error having to do with your business domain. As of June 2014, the description for error 400 was amended to read:
The server cannot or will not process the request due to something that is perceived to be a client error
If the operation failed because of data sent by the user (which seems to be the case here), you should use status code 400 (general) or 422 (more precise, but coming from the WebDAV spec). You can return additional hints about the error within the payload (the structure is up to you), like:
{
    "error": {
        "field": "amount",
        "message": "The amount isn't correct - insufficient credit."
    }
}
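As a sketch of how returning a 422 with such a payload might look server-side (the question doesn't say which stack is in use, so this assumes JAX-RS, and the helper and type names are placeholders):
import java.math.BigDecimal;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class PaymentResource {

    // Charge a card; reject with 422 and an error payload if credit is insufficient.
    public Response charge(String cardId, BigDecimal amount) {
        if (!hasSufficientCredit(cardId, amount)) {
            String body = "{\"error\": {\"field\": \"amount\", "
                        + "\"message\": \"The amount isn't correct - insufficient credit.\"}}";
            return Response.status(422)
                    .entity(body)
                    .type(MediaType.APPLICATION_JSON)
                    .build();
        }
        chargeCard(cardId, amount);
        return Response.ok().build();
    }

    private boolean hasSufficientCredit(String cardId, BigDecimal amount) {
        return false; // placeholder for the real credit check
    }

    private void chargeCard(String cardId, BigDecimal amount) {
        // placeholder for the real charge operation
    }
}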
I think that code 412 doesn't apply here, since it must be returned when your server doesn't meet a condition specified by the client (see the If-* headers like If-Match, If-Modified-Since, ...).
Hope it helps you,
Thierry
IMO: I would stick with 200, then parse out the response and deal with that. HTTP status codes are protocol status codes, not something you should use for dealing with application logic.
{
    "error": {
        "field": "amount",
        "message": "The amount isn't correct - insufficient credit."
    }
}
In the case of the above payload, the service call itself worked fine, warranting a return code of 200. However, your application logic now needs to deal with the error reported.
If we use an HTTP status code to indicate the error, it will start to get flagged in our logs etc. even though there was no technical error.

Single request to jetty interpreted twice with http error code 401

When I send GET HTTP requests to an EJB served by Jetty, I often get a 401 response even though the auth parameters are correct.
When I look into the Jetty logs I see this:
2013-06-27 11:54:11.004:DBUG:oejs.Server:REQUEST /app/general/launch on AsyncHttpConnection#3adf0ddc,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=-5,l=34,c=0},r=1
2013-06-27 11:54:11.021:DBUG:oejs.Server:RESPONSE /app/general/launch 401
2013-06-27 11:54:11.066:DBUG:oejs.Server:REQUEST /app/general/launch on AsyncHttpConnection#3adf0ddc,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=-5,l=102,c=0},r=2
I suspect that the request is not fully read (too large a request entity, or too large headers?), as it is parsed twice for a single request. Is there a way to fix this?
What do HttpParser{s=-5,l=34,c=0} and HttpParser{s=-5,l=102,c=0} mean?
When I deactivate authentication (security constraints using a simple Jetty realm), the request is only parsed once.
401 means that the server requires authentication credentials, and that the client either has not sent them or the ones it sent have not been authorized.
Some client implementations will resend the request, this time including the credentials, if they receive a 401. If your client is doing that, that would explain why you see the request twice on the server.
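If the client supports it, sending the credentials preemptively avoids that initial 401 round trip. A minimal sketch with java.net.HttpURLConnection, assuming HTTP Basic auth (the question doesn't say which scheme is in use) and placeholder URL and credentials:
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PreemptiveBasicAuth {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/app/general/launch"); // placeholder
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Send the Authorization header up front instead of waiting for a 401 challenge.
        String credentials = "user:password"; // placeholder
        String encoded = Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + encoded);
        System.out.println("HTTP " + conn.getResponseCode());
    }
}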
The HttpParser toString() method returns the current status of the HttpParser. Here's the code:
return String.format("%s{s=%d,l=%d,c=%d}",
        getClass().getSimpleName(),
        _state,
        _length,
        _contentLength);
So s is the state. -5 is STATE_HEADER. And l and c represent the length and the contentLength.