I've been digging into an application that uses HttpClient 4.1.x to make RESTful calls under Spring.
I have this working well both when dealing with the HttpClient directly and when using it as the transport for RestTemplate, but I've run into a need that I'm not sure is covered.
The BasicResponseHandler treats the response content as a string and returns it, provided the status code from the server is less than 300. The RESTful system I'm working with provides an XML document as part of an error response (status code >= 400). This XML response contains information that may be of use to the client developer.
What I'd like to know is whether anyone has experience dealing with this via the ResponseHandler interface. Essentially, the BasicResponseHandler throws a ClientProtocolException whenever the status is >= 300. The AbstractHttpClient implementation traps that exception, silently consumes the entity, then re-throws the IOException (ClientProtocolException) that was thrown.
Would it be advisable to create a subclass of ClientProtocolException to carry the additional information?
In the case of an error status, the handler would unmarshal any existing document into its respective type (if available) and then throw the exception, thus preserving the content of the response.
Or is there another mechanism that I'm missing to handle this case?
The purpose of the ResponseHandler interface is to enable the caller to digest HTTP responses without buffering message content in memory. An extra benefit of using this interface is not having to worry about resource deallocation which is taken care of automatically by HttpClient.
In your particular case you should consider building a higher level domain object from the low level HTTP response content instead of returning a simple, unrepresentative string.
So, instead of throwing an exception, consider returning to the caller an object consisting of the request status (success, failure, partial response, etc.) and a parsed XML document or a JAXB object representing the message content.
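The pattern suggested above can be sketched in plain Java. This is a hypothetical model of the idea, not the HttpClient API: the names `ApiResult`, `ErrorDetail`, and `DomainResponseHandler` are illustrative, and the real implementation would read the entity via a `ResponseHandler<T>` and unmarshal the XML error document (e.g. with JAXB).

```java
// Hypothetical sketch: instead of throwing on status >= 300, the handler
// returns a result object carrying both the outcome and any parsed error
// document. None of these names are part of HttpClient.
final class ErrorDetail {
    final String code;
    final String message;
    ErrorDetail(String code, String message) { this.code = code; this.message = message; }
}

final class ApiResult {
    final int status;
    final String body;          // success payload, if any
    final ErrorDetail error;    // parsed error document, if any
    ApiResult(int status, String body, ErrorDetail error) {
        this.status = status; this.body = body; this.error = error;
    }
    boolean isSuccess() { return status < 300; }
}

final class DomainResponseHandler {
    // Stands in for ResponseHandler<ApiResult>.handleResponse(HttpResponse):
    // the real code would read the entity stream here.
    ApiResult handle(int status, String entityBody) {
        if (status < 300) {
            return new ApiResult(status, entityBody, null);
        }
        // In the real code, unmarshal the XML error document here.
        ErrorDetail detail = new ErrorDetail("E" + status, entityBody);
        return new ApiResult(status, null, detail);
    }
}
```

The caller then inspects `isSuccess()` and the error detail instead of catching a ClientProtocolException, so the error document is never silently discarded.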
I'd like to discuss how to implement content negotiation for error cases, and to hear your opinions or experiences on this topic. Please be aware that some APIs implement RFC 7807 and others don't.
The main theses to discuss are (only concerning error responses):
The Accept header names the MIME types that the client is able to handle for 2xx responses. It can also be used to decide which error response format is rendered (recommended), but this is not required. E.g. if we return an RFC 7807 problem detail, we typically use application/problem+json or application/problem+xml, even though the client requested application/pdf.
The 406 response code is only for reporting that the server is unable to create a 2xx response in the requested format(s). It is not applicable when another problem occurred but the server cannot render the error response in a client-compatible format.
In the case of RFC 7807, we would mostly derive the format from the Accept header. If the client requests application/json and we return application/problem+json, it is the same format but not the same semantics, and therefore has a different schema than the 2xx response.
In case of errors, the client has to deal with response formats that it is not able to render. To minimize confusion, an OpenAPI spec for the API lists the error response formats that can be returned.
If the client sends an Accept header of application/problem+xml, it only states a preference for a specific content type for error responses, but does not specify one for 2xx responses, so the server would use its preferred format (mostly JSON).
What do you think about that?
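One reasonable policy for thesis 1 can be sketched as follows. This is an assumption about one way to derive the error media type from the Accept header, not a required behavior; the class and method names are illustrative.

```java
import java.util.List;

// Illustrative sketch: derive the error response media type from the
// Accept header, falling back to a problem+json default.
final class ErrorContentNegotiator {
    static String errorMediaType(List<String> acceptedTypes) {
        // An explicit problem-detail preference wins.
        if (acceptedTypes.contains("application/problem+json")) return "application/problem+json";
        if (acceptedTypes.contains("application/problem+xml")) return "application/problem+xml";
        // Otherwise derive from the requested success format.
        if (acceptedTypes.contains("application/xml")) return "application/problem+xml";
        // Default: problem+json, even if the client asked for e.g. application/pdf.
        return "application/problem+json";
    }
}
```

Under this policy a client requesting application/pdf still gets a problem+json error body, which matches the observation that the Accept header is a recommendation, not a hard requirement, for error rendering.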
We are building a REST microservice with Scala 3, ZIO 2, ZIO logging and Tapir.
For context-specific logging we want to use the MDC and set an attribute there that is taken from the request payload.
Is it possible to access the request payload in DefaultServerLog, extract the MDC attribute, and then use it with ZIO logging's MDC feature, i.e. create a LogAnnotation from the extracted attribute so that it is also logged by all DefaultServerLog methods (doLogWhenHandled etc.)? Currently this works for our own log statements, but not for those of Tapir/ZIO-HTTP.
See answer here https://softwaremill.community/t/how-to-get-access-to-the-request-payload-in-tapir-ziohttp-defaultserverlog/84/3.
Adam Warski:
"This is usually problematic as the body of the request is a stream of bytes, which is read from the socket as it arrives. That is, the request isn’t loaded into memory by default.
You can work-around this by reading the whole request into memory using serverRequest.underlying.asInstanceOf[zio.http.Request].body.asArray, extracting the required info and enriching the fiber-locals appropriately. You might also need to substitute the Request with a copy, which has the body provided as a byte array (in a “strict” form), so that the “proper” body parser doesn’t try to re-read from the network (where nothing will be available).
However, this has some downsides: the body will be parsed twice (once by your interceptor, once by the parsing that’s defined later); and it will be read into memory (which might be problematic if you don’t have a limit on the size of the body)."
According to this document, https://vertx.io/docs/vertx-web/java/#_route_match_failures, Vert.x-Web will signal a 405 error if a route matches the path but doesn't match the HTTP method. However, per the Mozilla documentation, https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/405, the response should contain an Allow header. Is there any reason why it's not added to the response?
It only adds the content-length header.
Vert.x Version - 4.3.4
Due to the way Vert.x keeps track of handlers internally (in a skip list), it is not trivial to identify which handlers would trigger the right state; we can clearly identify which methods are invalid, but it is not so trivial to say which ones are valid.
This is something we are planning to improve in future releases, as we would like the internal routing algorithm to move from a skip list to a compressed tree.
With a tree, it will be possible to determine unambiguously which methods are valid.
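To illustrate what a path-indexed structure makes easy: once routes are grouped by path, the set of valid methods for a 405 Allow header becomes a direct lookup. This is a hypothetical sketch, not Vert.x API; all names here are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical route table grouping methods by path, as a tree-based
// router could. Computing the Allow header is then trivial.
final class RouteTable {
    private final Map<String, Set<String>> methodsByPath = new HashMap<>();

    void addRoute(String method, String path) {
        methodsByPath.computeIfAbsent(path, p -> new TreeSet<>()).add(method);
    }

    // Returns the Allow header value for a 405, or null if the request is
    // either a match (no 405) or the path doesn't exist at all (a 404).
    String allowHeader(String path, String requestedMethod) {
        Set<String> methods = methodsByPath.get(path);
        if (methods == null || methods.contains(requestedMethod)) return null;
        return String.join(", ", methods);
    }
}
```

A skip list ordered by handler priority, by contrast, does not keep this per-path grouping, which is why enumerating the valid methods is not trivial there.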
If I want to send a message and handle the response somewhere in the code, what is the API? What components do I need? How do I construct them or get handles to them? What methods do I call? How do I add new message types?
Here is a sequence diagram I made for the messages exchanged as part of downloading candidate transaction sets referenced by proposals:
To send a message, a component needs a PeerImp object (generally held via shared_ptr<Peer>) on which it calls the send(shared_ptr<Message>) method. There is only one generic implementation of send, and it handles every protocol buffer message type. This call returns void (i.e. no request object).
When a message is received from a peer, the onMessage(MessageType) method for that message type is called. There is a different overload of onMessage for each message type.
Consider when you write code for HTTP. A popular idiom in JavaScript looks like this:
const response = await http.get(url, params)
There are some important differences between this pattern and the one for RTXP (the official name of our message protocol):
HTTP has an association between request and response. RTXP generally does not have this association, but in one notable example it does. TMGetLedger is a request message type, and TMLedgerData is its response message type. They both have a requestCookie field to associate a response with its request. The request generates a (random?) identifier for its “request cookie”, and the response copies that cookie.
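The requestCookie pattern described above can be modeled in a few lines. The real code is C++; this is a Java sketch for illustration only, with all names invented: a sender stamps each TMGetLedger-style request with a cookie, and when the TMLedgerData-style response arrives with the cookie copied back, it routes to the pending handler.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;
import java.util.function.Consumer;

// Hypothetical model of request/response association via a request cookie.
final class CookieTracker {
    private final Map<Long, Consumer<String>> pending = new HashMap<>();
    private final Random random = new Random();

    // Generate a cookie for an outgoing request and remember its handler.
    long sendRequest(Consumer<String> onResponse) {
        long cookie = random.nextLong();   // the "request cookie"
        pending.put(cookie, onResponse);
        return cookie;                     // stamped into the request message
    }

    // Called when a response carrying the copied cookie is received.
    // Returns false for an unsolicited or duplicate response.
    boolean onResponse(long cookie, String payload) {
        Consumer<String> handler = pending.remove(cookie);
        if (handler == null) return false;
        handler.accept(payload);
        return true;
    }
}
```

An unmatched cookie (returning false here) corresponds to a response that doesn't pair with any request, which is the kind of violation the fee mechanism described later would charge for.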
With HTTP, the code that sends the request passes a handler expecting the response. Generally, the response handler is different for each place in the code that sends a request. Not so with RTXP. Instead, each request message type typically corresponds to exactly one response message type, and every message of a given type has the same handler. That means each place in the code that sends a request of a given message type uses the same response handler. I suspect that:
most message types are sent from exactly one place in the code
when one message type is sent from multiple places, then that message has an enumeration field to distinguish them (with “type” in its name)
each message type was designed for exactly the place in the code that needed to send it, which makes them hard to reuse
Most request message types are different from their response message types. The one notable exception is TMGetObjectByHash which has a Boolean query field that distinguishes a request (true) from a response (false).
There is some room for uniform handling of each message type:
A message is generally expected to be independently verifiable. If a response says it has the header for ledger ABCD, then the handler expects it can hash the header to get the digest ABCD.
A response is generally expected to correspond to a request.
If these expectations are violated, the peer that sent the message is penalized. Our server tracks a “fee” for each peer that measures its reliability. Bad messages are “charged” various fees based on the kind of violation. These fees only exist on the server. They have no bearing on the ledger.
PeerImp objects are obtained from the Overlay object. There is exactly one Overlay per Application, obtained by calling its overlay() method.
I have a RESTful service for managing, let's say, devices. It provides the usual functionality:
GET /devices
GET /devices/:id
POST /devices
PUT /devices/:id
DELETE /devices/:id
The device object might be defined as follows:
{
  id: 123,
  name: "Smoke detector",
  firmware: "21.0.103",
  battery: "ok",
  last_maintenance: "2017-07-07",
  last_alarm: "2014-02-01 12:11:10",
  // ...
}
There is an application that can read device state via some device-specific reader. The application itself has no idea how to interpret the data it reads, but it can ask the server to do so. In our case let's assume the data contains the following: battery status, firmware version, last alarm.
If I were implementing a regular RPC service, I would create a function with "parse" semantics: it accepts the raw data and returns an updated device object (or, alternatively, only the part of the device object containing the parsed state). But I doubt that I could find a good REST solution for such a function. Currently I am doing it via PATCH, but I personally do not like this solution, and therefore I will not present it here. I believe there should be a good solution for this class of problems.
So the question: how should I fit my "parse" logic in REST paradigm?
POST it to a /parsed-device-state URL, which will return a 201 Created, a Location header pointing to the place where you can get the parsed data from, and if you like, return the parsed data in the 201 as well (along with an additional Content-Location header with the same value as the Location header). Or if it takes a long time to parse, use 202 Accepted, and the same Location header. The caller can then poll that provided location until the results are ready.
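Concretely, the exchange described above could look like this (the payload shape and resource id are illustrative):

```
POST /parsed-device-state HTTP/1.1
Content-Type: application/octet-stream

<raw reader data>

HTTP/1.1 201 Created
Location: /parsed-device-state/42
Content-Location: /parsed-device-state/42
Content-Type: application/json

{ "battery": "ok", "firmware": "21.0.103", "last_alarm": "2014-02-01 12:11:10" }
```

In the slow-parsing variant, the first response would instead be 202 Accepted with the same Location header, and the client would poll that URL until the parsed state is available.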
So the question: how should I fit my "parse" logic in REST paradigm?
How would you fit your parse logic into a web site?
You'd probably start with a bookmark. GET $BOOKMARK would return a representation of a form. The form might include an input control like a text area element that allows the consumer to enter a representation, or an input control that allows the consumer to select a file. The consumer would submit the form, and the agent would create a request from the information in the form. That would probably be a POST (you aren't likely to include an arbitrary file's representation in the query string) to whatever resource was specified as the action of the form. The server's response would provide a representation of the result.
If parsing were a particularly slow process, then the response instead might be a representation including links to resources that could be used to track the progress of the parsing. The whole protocol in this case looks a lot like putting work on a queue, and then polling for updates.
It's the right answer to a problem that is not a great fit for HTTP:
The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction.
To some degree, what you are trying to do with your function is transfer compute, which may be why it feels like you are trimming corners off of the peg to fit it in the hole.
An alternative approach, which is a better fit for HTTP, is to think about transferring a representation of the behavior. The API client gets a function that understands how to parse apples into oranges, and then runs that code on the information it keeps locally. Think JavaScript: we get a representation of the behavior from the server (which can embed into that representation information the server has that the client will need), and then execute the result locally. Metadata in the headers describes the lifetime of the representation in a way that is understood by any standards-compliant cache.