REST Design: What HTTP verb should be used to retrieve a dynamic resource?

I have a scenario in which a REST API manages a resource we will call Group. A Group contains members, and the group resource is dynamic: whenever you retrieve it, you get the latest data, so a query must run server-side to recount the members of the group. In other words, the request modifies data, since the result of running the query is stored.
Given a *group_id* it should return a minimal amount of information like
{
  "group_id": "5t7yu8i9io0op",
  "group_name": "That's my name",
  "size": 34
}
So a GET to this resource causes the resource to change, since a subsequent GET could return a new value for 'size'. This tells me the request is not safe (it has a side effect), which suggests POST should be used to retrieve this resource. Am I correct in this conclusion?
If I am correct, do you think it is advisable to also provide a GET method that only returns the currently stored data for the group (e.g. the size could be out of date, even the name)? I suppose in that case I should return a last-modified date as one of the fields so that the user knows how up to date the resource is and can then elect to use the POST method... but then I am left wondering why anyone would do that, so why not provide ONLY the POST method and forget about GET?
Confused I am!
Thanks in advance.
[EDIT]
@Satish posted a link in his/her answer to the HTTP spec. Section 9.1.1 ends with this sentence:
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
So in my scenario, the requester does not really care about the side effect that the value for 'size' is recomputed as a direct result of making the request. They want the group information, and it just so happens that, to provide accurate, up-to-date group data, the size query must be run to refresh that value. While the request does cause data to change, which would normally suggest POST, the user did not request that side effect, so a GET request would be acceptable and more intuitive, would it not? And it would still be RESTful according to this sentence.
[2nd EDIT]
@Satish asks a very important question in the comments, so for others who read this I'll explain the problem further:
Normally you would not run the group query to update its size from a REST request. As members are added to or removed from a group, you would update the computed size of that group, store it, and then a simple GET request would always return the correct size. However, our situation is more complicated: a group is stored only as a query definition in ElasticSearch (somewhat like a view in an RDBMS). Members are not added to or removed from groups; they are added to a much larger set of data (a collection in MongoDB). There are hundreds, potentially thousands, of different 'group definitions', so it is not practical to recompute the size of every group when the collection changes. We cannot know which groups might change size when an item is added to or removed from the collection - you only know who is in a group, and what its size is, by running the group definition. I hope that clears things up. :)

You should use GET. Even if the dynamic resource is changing, you did not request that change through your request, and you are not accountable for it. Ref: http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
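To make that concrete, here is a minimal sketch of what such a GET handler could look like. Flask is assumed here, and the route, the in-memory store and run_group_query are all made up for illustration:

from flask import Flask, jsonify, abort

app = Flask(__name__)

# In-memory stand-ins for the real stores; in the question's setup the
# group definition lives in ElasticSearch and the members in MongoDB.
groups = {"5t7yu8i9io0op": {"group_name": "That's my name", "size": 0}}

def run_group_query(group_id):
    # Placeholder for the ElasticSearch query that counts current members.
    return 34

@app.route("/groups/<group_id>")
def get_group(group_id):
    group = groups.get(group_id)
    if group is None:
        abort(404)
    # Recomputing and storing 'size' is a side effect the client did not
    # explicitly ask for, which RFC 2616 sec. 9.1.1 still allows for GET.
    group["size"] = run_group_query(group_id)
    return jsonify(group_id=group_id,
                   group_name=group["group_name"],
                   size=group["size"])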

In your case, when you do a GET you retrieve some information about the Group; you don't modify the group structure. OK, the group can be changed by an external entity, so your next GET may bring you different data. Am I right? Who modifies the structure of the group, and when?
So you should use GET, because the resource is modified from somewhere else and not by your call, which only performs a read operation.
EDIT
After you edited the question I just want to add that I agree about the side effects.
It matters whether you explicitly sent data or a change command to the server, or whether you just read something without having to pay attention to what the server side does to give you the response. More intuitively:
GET - Requests data from a specified resource
POST - Submits data to be processed to a specified resource

Your operation is a combination of GET and POST (it reads and also updates), so you should use POST.
Refer: http://adarshdchaurasia.wordpress.com/2013/09/26/http-get-vs-post/
You should not use GET, because if you do, intermediaries such as caches and search engines may cache or prefetch the responses. That may cause unintentional data updates on your server side, which you do not want. GET is meant to return content without updating anything on the server; POST is meant to update things on the server and return a result for that operation.

Related

What REST method should be used to implement a simple inter-process communication?

This is more a theoretical question than a practical one.
We have a backend application that uploads CSV files to a frontend application; then, and only then, the backend sends an empty POST request to tell the frontend to start processing those files to update its database.
For this question it doesn't matter whether this is a good design (I think it isn't), what those files are, or what the database is: I just want to understand the REST "syntax" better.
I'm referring to Wikipedia and restfulapi.net, but I'm not convinced by any alternative, because:
GET: the request sender doesn't receive data;
POST (the one currently used): the request sender isn't inserting data via the request body (the data comes from the external files, if any, and the operations on it can be insert/update/delete);
PUT: sounds good, but again, the data is not in the request body;
PATCH: sounds best, but the data is not in the body (also, am I wrong, or is it deprecated/unused?);
DELETE: doesn't always need to delete.
I know it's common practice to use POST requests to let machines yell "go!" at each other, but I never thought it was right.
What do you think - in theory - would be the proper method?
The actual reference for the semantics of the HTTP methods is RFC 7231, not the pages you referenced in your question.
POST is a catch all method and requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics.
4.3.3. POST
The POST method requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics. For example, POST is used for the following functions (among others):
- Providing a block of data, such as the fields entered into an HTML form, to a data-handling process;
- Posting a message to a bulletin board, newsgroup, mailing list, blog, or similar group of articles;
- Creating a new resource that has yet to be identified by the origin server; and
- Appending data to a resource's existing representation(s).
[...]
Responses to POST requests are only cacheable when they include explicit freshness information. However, POST caching is not widely implemented.
In these scenarios, the receiving application knows where the CSV files will be and monitors that location. When it finds one, it processes it and then deletes or archives it. The application will likely have its own criteria for considering itself ready to process, e.g. time of day, size of file etc.
If the data load on the front end takes a long time you could "partition" the updates based on "importance". How you define importance would be up to your business rules. You could then POST a list of CSV filenames/locations to the front end. The list would be ordered by importance. The front end could then update its database based on that importance. Scheduling less important data for a more appropriate time of day.
If the backend knows the difference between new users and updated users you could use PUT and POST. The front end could assign higher priority to PUT requests as they relate to new users, perhaps assigning lower priority and staggered syncing for CSV filenames in POST requests.
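As a rough sketch of that idea (the /imports endpoint, the host name and the payload shape are all assumptions here, not part of the original design):

import requests

# Instead of an empty POST, send an importance-ordered list of CSV
# locations so the front end can schedule less important files later.
payload = {
    "files": [
        {"location": "/uploads/new_users.csv", "priority": 1},
        {"location": "/uploads/updated_users.csv", "priority": 2},
    ]
}
resp = requests.post("https://frontend.example.com/imports", json=payload)
resp.raise_for_status()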

Simple model when requesting a collection, extended model when requesting a resource - how?

I have the following URI: /articles/:id, where article is a resource on the web service with an associated model/class. Now I need to return only partial data for each resource when the collection is requested (to save bandwidth and speed things up), but when a single item is requested from the collection I need to send full data. My question: should I use two models/classes for the same resource on the server and instantiate a different one depending on whether the collection or a single resource is requested? Or should there be only one model/class, with not all fields filled with data when a collection is requested? Or is there another approach?
I suggest the approach described here, using a fields query parameter.
If the API is going to be open to everyone to use and client usage is going to be unpredictable, then by default you probably need to limit the fields that you return. Just make sure you document in some way all the possible fields that could be used, in case a client actually needs them.
If the API is going to be consumed only by an app or apps you made, then by default you could return all of the fields and then your app can pass that fields parameter to speed things up.
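A minimal sketch of that fields parameter, assuming a Flask endpoint and in-memory articles (all names here are hypothetical):

from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in data; a real service would query its database instead.
ARTICLES = [
    {"id": 1, "title": "First", "body": "...", "author": "alice"},
    {"id": 2, "title": "Second", "body": "...", "author": "bob"},
]

@app.route("/articles")
def list_articles():
    # ?fields=id,title,author widens the default minimal representation.
    fields = request.args.get("fields", "id,title").split(",")
    return jsonify([{k: a[k] for k in fields if k in a} for a in ARTICLES])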

Handling long queries without violating REST

We have a REST api, and we've done a pretty good job at sticking to the spirit of REST. However, we have an important consumer, and they're requesting a way to reconcile their datastore. The flow works like this:
Consumer makes a GET call to retrieve all inventory objects created within a date range. Let's say this returns 1 million inventory VINs.
Consumer compares the payload with their own datastore and sees that they're missing 5,000 inventory objects.
Consumer would like to make a request with the 5,000 VIN ids and get those 5,000 objects back.
The problem is that the long query string (a JSON array of VINs) bumps into the query-string length limit imposed by our server. Possible ideas: make 5,000 separate calls (seems horrible), increase the query-string length limit on the server (we'd rather not), or use POST instead (not RESTful?).
So, I'm wondering what Roy Fielding would do...
What about a POST submitting a JSON file with the id list to a new resource, e.g. one called /inventory/difference?
If the computation takes long, you can answer with 202 Accepted and the id of the resource being generated, then point back to it at /inventory/difference/:id.
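Sketching that flow server-side (Flask again; the in-memory job table and the request body shape are assumptions):

import uuid
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
jobs = {}  # job id -> result list, or None while still computing

@app.route("/inventory/difference", methods=["POST"])
def create_difference():
    vins = request.get_json()["vins"]  # assumed body: {"vins": [...]}
    job_id = str(uuid.uuid4())
    jobs[job_id] = None                # a real app would hand vins to a worker
    return jsonify(id=job_id), 202, {"Location": f"/inventory/difference/{job_id}"}

@app.route("/inventory/difference/<job_id>")
def get_difference(job_id):
    if job_id not in jobs:
        abort(404)
    result = jobs[job_id]
    if result is None:
        return "", 202                 # still computing; retry later
    return jsonify(result)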
Somewhat similar to what moonwave99 suggested, but instead you create a resource called a "set".
You POST to /set a list of identifiers that you wish to be in the set. The result of the POST is a redirect URL to the resource that names the specific set.
So:
POST /set
Result:
303 See Other
Location: /set/123
Then:
GET /set/123
Returns the list of items in the set.
Sets are orthogonal to the use case of "fetching differences", they're simply a compilation of items.
If the creation of a set takes a long time, and you consider the set itself to be a snapshot of the data, the server can simply reply to GET /set/123 with 202 Accepted until the actual dataset has been completed.
You can then use:
GET /set/123/identifiers
to get a collection of the actual identifiers in the set, if you like.
You can do something like
POST /setfromquery
and send a list of criteria (name like "John*", city = "Los Angeles", etc.). This doesn't really need its own specific resource; just define your query "language" to include both simple lists of IDs and perhaps other filter criteria.
Set operations (unions, differences, etc.) also become possible. Lots of powerful things can be done with a set resource.
Finally, of course, there's the ever popular:
DELETE /set/123
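From the client's side, the whole set workflow could look like this (the requests library; the host and body shape are assumptions):

import time
import requests

BASE = "https://api.example.com"        # hypothetical host
missing_vins = ["VIN0001", "VIN0002"]   # stand-in for the 5,000 ids

# POST the id list as a body, keeping it out of the query string entirely.
resp = requests.post(f"{BASE}/set", json={"ids": missing_vins})
set_url = BASE + resp.headers["Location"]

# Poll until the snapshot is ready; 202 means the set is still being built.
while True:
    resp = requests.get(set_url)
    if resp.status_code != 202:
        break
    time.sleep(1)
items = resp.json()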
I don't think anyone would fault you in working around GET not accepting a request body by using POST for a request that needs a request body. You are just being pragmatic.
I agree, making 5000 individual requests or upping the query string limit are ugly. POST is the way forward.
Using a POST without creating a resource just seemed too dirty to me. In the end, we made it so that there is a limit of 100 ids requested per "chunk". In practice these requests will rarely exceed 100 ids, so hacking REST principles to accommodate an edge case seemed like a bad idea. I made sure the limitation was clearly defined in our API docs. Done and done...

RESTful API design: should unchangeable data in an update (PUT) be optional?

I'm in the middle of implementing a RESTful API, and I am unsure about the 'community accepted' behavior for the presence of data that can not change. For example, in my API there is a 'file' resource that when created contains a number of fields that can not be modified after creation, such as the file's binary data, and some metadata associated with it. Additionally, the 'file' can have a written description, and tags associated.
My question concerns doing an update to one of these 'file' resources. A GET of a specific 'file' will return all the metadata, description & tags associated with the file, plus the file's binary data. Should a PUT of a specific 'file' resource include the 'read only' fields? I realize that it can be coded either way: a) include the read only fields in the PUT data and then verify they match the original (or issue an error), or b) ignore the presence of the read only fields in the PUT data because they can't change, never issuing an error if they don't match or are missing because the logic ignores them.
Seems like it could go either way and be acceptable. The second method, ignoring the read-only fields, can be more compact, because the API client can skip sending that read-only data if they want, which seems good for people who know what they are doing...
Personally, I think both ways are acceptable... however, if I were you, I'd opt for option A (check the read-only fields to ensure they have not changed; otherwise throw an error). Depending on the scope of your project, you cannot assume how deeply consumers know your RESTful WS, because most of them don't read documentation or WADL, even the experienced ones. :)
If you don't provide immediate feedback to consumers that certain fields are read-only, they will falsely assume that your web service took care of all the changes they made without double-checking, or, once they find the "inconsistent" updates, they'll complain to others that your web service is buggy.
You can approach this in two different ways when a read-only field doesn't match the original value:
Don't process the request. Send a 409 Conflict code and a specific error message.
Process the request, send a 200 OK, and a message stating that changes made to the read-only fields were ignored.
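A sketch of the first option, with hypothetical field names (READ_ONLY and the stored representation would come from your own model):

# Fields of the 'file' resource that must never change after creation.
READ_ONLY = {"binary_data", "content_type", "created_at"}  # hypothetical

def read_only_violations(stored: dict, incoming: dict) -> list:
    """Names of read-only fields the client tried to change."""
    return [f for f in READ_ONLY
            if f in incoming and incoming[f] != stored.get(f)]

# In the PUT handler:
#   violations = read_only_violations(stored_file, request_body)
#   if violations:
#       return 409 Conflict with {"read_only_changed": violations}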
Unless the read-only data makes up a significant portion of the data (to the extreme that transmitting the read-only data has a noticeable impact on network traffic and response times), you should write the service to accept the read only fields in the PUT and check them for changes. It's just simpler to have the same data going in and out.
Look at it this way: You could make inclusion of the read only fields optional in the PUT, but you will still have to / should write the code in the service to check that any read only fields that were received contain the expected values. You have to write the read only checking either way.
Prohibiting the read-only fields in the PUT is a bad idea because it will require the clients to strip away fields they received from you in the GET. This requires that the client get more intimately involved with your data and semantics than they really need to be. The clients will consider this a headache, an unnecessary complication, and downright mean of you to add to their burden. Taking data received from your GET, modifying one field of interest, and sending it back to you with a PUT should be a brain-dead simple round-trip for the client. Don't complicate things when you don't have to.

RESTful way to create multiple items in one request

I am working on a small client server program to collect orders. I want to do this in a "REST(ful) way".
What I want to do is:
Collect all orderlines (product and quantity) and send the complete order to the server
At the moment I see two options to do this:
Send each orderline to the server: POST qty and product_id
I actually don't want to do this, because I want to limit the number of requests to the server, so option 2:
Collect all the orderlines and send them to the server at once.
How should I implement option 2? A couple of ideas I have:
Wrap all orderlines in a JSON object and send this to the server, or use an array to post the orderlines.
Is it a good idea or good practice to implement option 2, and if so, how should I do it?
What is good practice?
I believe that another correct way to approach this would be to create another resource that represents your collection of resources.
For example, imagine that we have an endpoint like /api/sheep/{id} and we can POST to /api/sheep to create a sheep resource.
Now, if we want to support bulk creation, we should consider a new flock resource at /api/flock (or /api/<your-resource>-collection if you lack a better meaningful name). Remember that resources don't need to map to your database or app models. This is a common misconception.
Resources are a higher-level representation, unrelated to your data. Operating on a resource can have significant side effects, like firing an alert to a user, updating other related data, initiating a long-lived process, etc. For example, we could map a file system or even the unix ps command as a REST API.
I think it is safe to assume that operating on a resource may also mean creating several other entities as a side effect.
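To make the flock idea concrete, a minimal sketch (Flask; the in-memory list stands in for real persistence, and the body shape is assumed):

import itertools
from flask import Flask, jsonify, request

app = Flask(__name__)
sheep, next_id = [], itertools.count(1)

@app.route("/api/flock", methods=["POST"])
def create_flock():
    # One request, many resources: each entry becomes an individual sheep.
    created = [{"id": next(next_id), **body}
               for body in request.get_json()["sheep"]]
    sheep.extend(created)
    return jsonify(created), 201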
Although bulk operations (e.g. batch create) are essential in many systems, they are not formally addressed by the RESTful architectural style.
I found that POSTing a collection as you suggested basically works, but problems arise when you need to report failures in response to such a request. Such problems are worse when multiple failures occur for different causes or when the server doesn't support transactions.
My suggestion: if there is no performance problem, for example when the service provider is on a LAN (not a WAN) or the data is relatively small, it's worth sending 100 POST requests to the server. Keep it simple: start with separate requests, and if you run into a performance problem, try to optimize.
Facebook explains how to do this: https://developers.facebook.com/docs/graph-api/making-multiple-requests
Simple batched requests
The Batch API takes in an array of logical HTTP requests represented as JSON arrays - each request has a method (corresponding to HTTP methods GET/PUT/POST/DELETE etc.), a relative_url (the portion of the URL after graph.facebook.com), an optional headers array (corresponding to HTTP headers) and an optional body (for POST and PUT requests). The Batch API returns an array of logical HTTP responses represented as JSON arrays - each response has a status code, an optional headers array and an optional body (which is a JSON-encoded string).
Your idea seems valid to me. The implementation is a matter of your preference. You can use JSON or plain parameters for this (an "order_lines[]" array) and do
POST /orders
Since you are going to create more than one resource in a single action (an order and its lines), it's vital to validate each and every one of them and save them only if all of them pass validation, i.e. do it in a transaction.
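A sketch of that all-or-nothing rule (plain Python; validate_line is a made-up validator):

def validate_line(line: dict):
    """Hypothetical per-line check; returns an error string or None."""
    if line.get("qty", 0) <= 0:
        return f"bad qty for product {line.get('product_id')}"
    return None

def create_order(order_lines: list):
    # Validate every line first and save nothing unless all pass, so the
    # whole POST /orders succeeds or fails as a single transaction.
    errors = [e for e in map(validate_line, order_lines) if e]
    if errors:
        raise ValueError(errors)  # map to a 422 response in the HTTP layer
    # ... begin DB transaction, insert the order and its lines, commit ...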
I've actually been wrestling with this lately, and here's what I'm working towards.
If a POST that adds multiple resources succeeds, return a 200 OK (I was considering a 201, but the user ultimately doesn't land on a resource that was created) along with a page that displays all resources that were added, either in read-only or editable fashion. For instance, a user is able to select and POST multiple images to a gallery using a form comprising only a single file input. If the POST request succeeds in its entirety the user is presented with a set of forms for each image resource representation created that allows them to specify more details about each (name, description, etc).
In the event that one or more resources fail to be created, the POST handler aborts all processing and appends each individual error message to an array. Then a 409 Conflict is returned, and the user is routed to an error page that presents the contents of the error array, as well as a way back to the form that was submitted.
I guess it's better to send separate requests within a single connection (e.g. over HTTP keep-alive). Of course, your web server should support it.
You don't want to send the HTTP header overhead 100 times for 100 orderlines, nor do you want to generate more requests than necessary.
Send the whole order in one JSON object to the server, to: server/order or server/order/new.
Return something that points to: server/order/order_id
Also consider using PUT instead of POST for creation.