DISCLAIMER: At first it seemed to me like too simple a question to ask; however, I couldn't find any definitive answer, and there is a chance the answer is out there in the community but not documented because it's too simple :}
The questions are:
Is there a canonical URL for a health check of a web service? If yes, what is it?
Is there a common way of encoding the health information, whether in the body or in the return code?
(Canonical in the sense that most tools and platforms recognize it and support it natively.)
It is up to the REST service developer how to implement the health check of their service. The main reason is that a REST service implements certain business logic, and different logic has different "attributes" of what makes the service healthy.
Regarding encoding health information, the normal way is to rely on the response status code: a response in the 5xx range is considered problematic, since it means the server failed to fulfill the request.
Codes in the 4xx range should not be used for health checking, since they mean the problem was caused by the client.
Alongside the status code, services often supply error details in the response body, like:
{"status": "ERROR", "description": "Here is the error description"}
P.S. Some implementations extend the status code range and introduce their own codes, which are to be treated in whatever special way the service developers intend.
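For illustration, here is a minimal sketch of such a health-check endpoint using the JDK's built-in com.sun.net.httpserver; the /health path and the body fields are just common conventions I've assumed, not a standard:

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HealthCheckServer {

    // Placeholder check; a real service would ping its database, queues, etc.
    static boolean dependenciesHealthy() {
        return true;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/health", exchange -> {
            boolean healthy = dependenciesHealthy();
            String body = healthy
                    ? "{\"status\": \"UP\"}"
                    : "{\"status\": \"ERROR\", \"description\": \"dependency unavailable\"}";
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            // 200 when healthy, 503 (a 5xx code) when the service cannot fulfill requests
            exchange.sendResponseHeaders(healthy ? 200 : 503, bytes.length);
            exchange.getResponseBody().write(bytes);
            exchange.close();
        });
        server.start();
    }
}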
What is the value of RESTful “methods” (i.e. GET, POST, PUT, DELETE, PATCH, etc.)?
Why not just make every client use the “GET” method with any and all relevant params, headers, request bodies, JSON, etc.?
On the server side, the response to each method is custom & independently coded!
For example, what difference does it make to issue a database query via GET instead of POST?
I understand that GET is for queries that don’t change the DB (or anything else?).
And POST is for calls that do make changes.
But, near as I can tell, the RESTful standard doesn’t prevent one from coding up a server response to GET that issues a stored procedure call that indeed DOES change the DB.
Vice versa… the RESTful standard doesn’t prevent one from coding up a server response to POST that issues a stored procedure call that indeed does NOT change ANYTHING!
I’m not arguing that a midtier (HTTP) “RESTlike” layer is necessary. It clearly is.
Let's say I'm wrong (and I may be). Isn't it still likely that there are numerous REST servers violating the proper use of these protocols and suffering ZERO repercussions?
The following do not directly address my questions but merely dance uncomfortably around them like an acidhead stoner at a Dead concert:
Different Models for RESTful GET and POST
RESTful - GET or POST - what to do?
GET vs POST in REST Web Service
PUT vs POST in REST
I just spent ~80 hours trying to communicate a PATCH to my REST server (older Android Java doesn't recognize the newer PATCH, so I had to issue a stupid kluge X-HTTP-Method-Override header). A POST would have worked fine but the sysop wouldn't budge because he respects REST.
I just don’t understand why I should bother with each individual method. They don't seem to have much impact on idempotence. They seem to be mere guidelines. And if you "violate" these "guidelines" they give someone else a chance to point a feckless finger at you. But so what?
Aren't these guidelines more trouble than they're worth?
I'm just confused. Please excuse the stridency of my post.
Aren’t REST GET/POST/etc. methods superfluous?
What is the value of RESTful “methods” (i.e. GET, POST, PUT, DELETE, PATCH, etc.)?
First, a clarification. Those aren't RESTful methods; those are HTTP methods. The web is a reference implementation (for the most part) of the REST architectural style.
Which means that the authoritative answers to your questions are documented in the HTTP specification.
But, near as I can tell, the RESTful standard doesn’t prevent one from coding up a server response to GET that issues a stored procedure call that indeed DOES change the DB.
The HTTP specification designates certain methods as being safe. Informally, this means that a method is read-only; the client is not responsible for any side effects that may occur on the server.
The purpose of distinguishing between safe and unsafe methods is to allow automated retrieval processes (spiders) and cache performance optimization (pre-fetching) to work without fear of causing harm.
But you are right: the HTTP standard doesn't prevent you from changing your database in response to a GET request. In fact, it even specifically calls out a case where you may choose to do that:
a safe request initiated by selecting an advertisement on the Web will often have the side effect of charging an advertising account.
The HTTP specification also designates certain methods as being idempotent:
Of the request methods defined by this specification, PUT, DELETE, and safe request methods are idempotent.
The motivation for having idempotent methods? Unreliable networks:
Idempotent methods are distinguished because the request can be repeated automatically if a communication failure occurs before the client is able to read the server's response.
Note that the client here might not be the user agent, but an intermediary component (like a reverse proxy) participating in the conversation.
Thus, if I'm writing a user agent, or a component, that needs to talk to your server, and your server conforms to the definition of methods in the HTTP specification, then I don't need to know anything about your application protocol to know how to correctly handle lost messages when the method is GET, PUT, or DELETE.
On the other hand, POST doesn't tell me anything, and since the unacknowledged message may still be on its way to you, it is dangerous to send a duplicate copy of the message.
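As a sketch of what this buys you: a generic client-side helper (here using Java 11's java.net.http; the class name is made up) can retry automatically after a communication failure, but only when the method is one the specification defines as idempotent:

import java.io.IOException;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Set;

public class IdempotentRetryClient {

    // Methods the HTTP spec defines as idempotent: the safe methods plus PUT and DELETE.
    private static final Set<String> IDEMPOTENT = Set.of("GET", "HEAD", "OPTIONS", "PUT", "DELETE");

    private final HttpClient client = HttpClient.newHttpClient();

    public HttpResponse<String> send(HttpRequest request) throws IOException, InterruptedException {
        try {
            return client.send(request, HttpResponse.BodyHandlers.ofString());
        } catch (IOException e) {
            // Communication failure: resend automatically only if the method is idempotent.
            // For POST we cannot know whether the first request was applied, so we rethrow.
            if (IDEMPOTENT.contains(request.method())) {
                return client.send(request, HttpResponse.BodyHandlers.ofString());
            }
            throw e;
        }
    }
}

Note that the helper needs nothing application-specific: the method name alone carries the information.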
Isn't it still likely that there are numerous REST servers violating the proper use of these protocols and suffering ZERO repercussions?
Absolutely -- remember, the reference implementation of hypermedia is HTML, and HTML doesn't include support for PUT or DELETE. If you want to afford a hypermedia control that invokes an unsafe operation, while still conforming to the HTTP and HTML standards, then POST is your only option.
Aren't these guidelines more trouble than they're worth?
Not really? They offer real value in reliability, and the extra complexity they add to the mix is pretty minimal.
I just don’t understand why I should bother with each individual method. They don't seem to have much impact on idempotence.
They don't impact it, they communicate it.
The server already knows which of its resources are idempotent receivers. It's the client and the intermediary components that need that information. The HTTP specification gives you the ability to communicate that information for free to any other compliant component.
Using the maximally appropriate method for each request means that you can deploy your solution into a topology of commodity components, and it just works.
Alternatively, you can give up reliable messaging. Or you can write a bunch of custom code in your components to tell them explicitly which of your endpoints are idempotent receivers.
POST vs PATCH
Same song, different verse. If a resource supports OPTIONS, GET, and PATCH, then I can discover everything I need to know to execute a partial update, and I can do so using the same commodity implementation I use everywhere else.
Achieving the same result with POST is a whole lot more work. For instance, you need some mechanism for communicating to the client that POST has partial update semantics, and what media-types are accepted when patching a specific resource.
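For comparison, here is a sketch of that discovery-plus-partial-update flow using nothing but standard semantics (the resource URI is hypothetical; Allow, Accept-Patch, and the JSON merge patch media type come from the HTTP and PATCH specifications):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PatchDiscoveryExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        URI resource = URI.create("https://api.example.com/users/42"); // hypothetical resource

        // Discover whether PATCH is supported and which patch media types are accepted.
        HttpRequest options = HttpRequest.newBuilder(resource)
                .method("OPTIONS", HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<Void> caps = client.send(options, HttpResponse.BodyHandlers.discarding());
        System.out.println("Allow: " + caps.headers().firstValue("Allow").orElse(""));
        System.out.println("Accept-Patch: " + caps.headers().firstValue("Accept-Patch").orElse(""));

        // Partial update: send only the fields being changed (JSON merge patch, RFC 7396).
        HttpRequest patch = HttpRequest.newBuilder(resource)
                .header("Content-Type", "application/merge-patch+json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString("{\"email\": \"new@example.com\"}"))
                .build();
        HttpResponse<String> response = client.send(patch, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}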
What do I lose by making each call on the client GET and the server honoring such just by paying attention to the request and not the method?
Conforming user-agents are allowed to assume that GET is safe. If you have side effects (writes) on endpoints accessible via GET, then the agent is allowed to pre-fetch the endpoint as an optimization -- the side effects start firing even though nobody expects it.
If the endpoint isn't an idempotent receiver, then you have to consider that the GET calls can happen more than once.
Furthermore, the user agent and intermediary components are allowed to make assumptions about caching -- requests that you expect to get all the way through to the server don't, because conforming components along the way are permitted to serve replies out of their own cache.
To ice the cake, you are introducing another risk: undefined behavior.
A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.
Where I believe you are coming from, though I'm not certain, is more of an RPC point of view. Client sends a message, server responds; so long as both participants in the conversation have a common understanding of the semantics of the message, does it matter if the text in the message says "GET" or "POST" or "PATCH"? Of course not.
RPC is a fantastic choice when it fits the problem you are trying to solve.
But...
RPC at web scale is hard. Can your team deliver that? Can your team deliver it cost-effectively?
On the other hand, HTTP at scale is comparatively simple; there's an enormous ecosystem of goodies, using scalable architectures, that are stable, tested, well understood, and inexpensive. The tires are well and truly kicked.
You and your team hardly have to do anything; a bit of block and tackle to comply with the HTTP standards, and from that point on you can concentrate on delivering business value while you fall into the pit of success.
We have a legacy application that allows our developers to "add" messages via a ThreadLocal in Java.
The current SOAP endpoints will scoop these messages off the thread and then package them up in the response.
The endpoints also catch all exceptions and then marshal those exceptions via this same mechanism to normalize the passing of messages (be they informational, warning, or error).
These messages are rich objects (they have a code, severity, classification, and then the actual message text.)
This is nice in many ways, because now we have a standard way to communicate meaningful messages to the user (or calling service), but it also makes using the API more challenging, because now the client must pick out the messages from the response AND also pick out the real payload.
Any web service can communicate messages this way...but only a handful do.
I would like to start moving our application towards a REST API but I am struggling on how best to handle the messaging. I am not super keen on adding an envelope to each of our REST responses because this really pollutes the API.
The alternative appears to be adding these messages to custom HTTP headers. Is this the "preferred" approach? Remember I will have a list of one or more of these messages, and I will likely have to serialize them as JSON as well -- something like the sketch below.
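To make the header idea concrete, here is a rough sketch of what I have in mind; the X-App-Messages header name is something I made up, and a real implementation would use a proper JSON library rather than hand-rolled serialization:

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.List;

public class MessageHeaderSketch {

    // The rich message object described above: code, severity, classification, text.
    record AppMessage(String code, String severity, String classification, String text) { }

    // Build the value for a hypothetical "X-App-Messages" response header.
    static String toHeaderValue(List<AppMessage> messages) {
        StringBuilder json = new StringBuilder("[");
        for (int i = 0; i < messages.size(); i++) {
            AppMessage m = messages.get(i);
            if (i > 0) json.append(",");
            json.append("{\"code\":\"").append(m.code())
                .append("\",\"severity\":\"").append(m.severity())
                .append("\",\"classification\":\"").append(m.classification())
                .append("\",\"text\":\"").append(m.text()).append("\"}");
        }
        json.append("]");
        // Header values are effectively limited to ASCII and bounded in size,
        // so the JSON is Base64-encoded; the client decodes and parses it.
        return Base64.getEncoder().encodeToString(json.toString().getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        List<AppMessage> messages = List.of(
                new AppMessage("W-1001", "WARNING", "VALIDATION", "Name was truncated to 50 characters"));
        System.out.println("X-App-Messages: " + toHeaderValue(messages));
        // The response body then carries only the real payload, with no envelope.
    }
}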
Thanks.
While I was reading about automated JUnit test case generation in Eclipse, I came across this sentence:
the testcases were generated to test both the synchronous and asynchronous clients.
I googled a lot to find the definition of these two terms and the difference between them but couldn't find any appropriate answer.
Could anyone please explain what synchronous and asynchronous clients are?
From EAI Patterns:
In a synchronous implementation of a Web Service, the client connection remains open from the time the request is submitted to the server. The client will wait until the server sends back the response message....
At the present time, most Web Services toolkits only support synchronous messaging by default. However, using existing standards and tools such as asynchronous message queuing frameworks, some vendors have emulated asynchronous messaging for Web Services.
With asynchronous clients, the client should be able to handle data coming in from the server after the server has done its job. Asynchronous requests are a 'fire and forget' mechanism: the target will inform you about the progress.
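As a small illustration, here is the same request issued both ways using Java 11's java.net.http client, which supports both styles:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class SyncVsAsyncClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/")).build();

        // Synchronous: the calling thread blocks until the response arrives.
        HttpResponse<String> syncResponse = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("sync status: " + syncResponse.statusCode());

        // Asynchronous: sendAsync returns immediately; a callback handles the response later.
        CompletableFuture<Void> future = client
                .sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenAccept(r -> System.out.println("async status: " + r.statusCode()));

        System.out.println("request sent, free to do other work while waiting...");
        future.join(); // only so this demo doesn't exit before the callback runs
    }
}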
I have a project currently in production delivering some web services using the REST approach. Now I need to deliver some of these web services in SOAP too (which means I will need to deliver some of the same web services in SOAP, and some slightly different ones), so I ask you:
Should I incorporate the SOAP stack (libraries, configuration files, ...) into the existing project, building another layer that delivers the data in envelopes (some people call it an "anti-corruption layer")?
Should I build another project using just the common canonical model (turning it into a shared library)?
... Or how do you proceed in similar situations?
Please consider that our ideal target is an SOA architecture.
Thanks.
In our projects we have a facade layer which exposes the services and maps to business entities, and a business layer where the business logic is run.
So to add a SOAP end point for an existing service, we just create a new facade and call in to the same business logic.
In many cases it is even simpler: since we use WCF, we can have an HTTP SOAP endpoint for external clients and a binary TCP/IP endpoint for internal clients. The new endpoint can be added by changing the configuration, without any need to change the code.
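The same facade idea can be sketched in Java as well (the JAX-RS and JAX-WS annotations here are an assumption about the stack, and the class names are made up); both thin endpoints delegate to the one business-layer class:

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// Business layer: the single place where the logic lives.
class OrderService {
    String findOrder(String id) {
        return "order " + id; // placeholder for the real business logic
    }
}

// REST facade (JAX-RS): maps HTTP requests onto the business layer.
@Path("/orders")
class OrderRestFacade {
    private final OrderService service = new OrderService();

    @GET
    @Path("/{id}")
    public String getOrder(@PathParam("id") String id) {
        return service.findOrder(id);
    }
}

// SOAP facade (JAX-WS): a second, thin endpoint over the same business layer.
@WebService
class OrderSoapFacade {
    private final OrderService service = new OrderService();

    @WebMethod
    public String getOrder(String id) {
        return service.findOrder(id);
    }
}

(On newer stacks the packages are jakarta.ws.rs and jakarta.jws instead of javax.)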
The way I think about an SOA system, you have messages and pub/sub. The message is the interface. Getting those messages into and out of the system is an implementation detail. I create an endpoint that accepts a raw message document (more REST-like, but not really REST) as well as an endpoint that accepts the message as a single parameter to a SOAP call. The code that processes the incoming message is a separate concern from the HTTP endpoint enablement.
You can use an ESB for this, where the ESB receives the SOAP messages and sends the REST request to the back end. WSO2 ESB provides this functionality. Please look at this sample [1].
[1] http://wso2.org/project/esb/java/4.0.0/docs/samples/proxy_samples.html#Sample152