REST API endpoint: GET multiple resources

The operation below retrieves, out of all the books, only the ones specified in the request. It is clearly a GET operation.
From your perspective, what are the pros and cons of the two approaches below?
Note:
GET - api/library/2/books/
Retrieves all the books from library 2.
Using GET:
GET - api/library/2/books/3/5/10/33/...../pages
Using POST:
POST - api/library/2/books/pages
Body:
{
    "books_id": [
        2,
        30,
        40,
        20,
        30
    ]
}
I'm really in doubt here between using the POST or GET method to implement this. Book IDs in the URL will get really messy if there are, say, 100-200 books to retrieve. I'd like some enlightenment here.
I'm using PHP to handle the REST application, and all the methods above are valid.

This operation matches the standard semantics of the GET method, and therefore the expectations of various software. For example:
• many HTTP clients know that they can automatically retry GET requests in case of errors
• it's easier to cache responses to GET
If your book IDs are independent from library IDs, then it may be better to drop the reference to the library, and do just
GET /api/books/3,5,10,33/pages
Books Ids on URL will get really messy if there was like 100-200 books to retrieve
If every book ID is 6 digits long, this adds up to just 700-1400 bytes. This is well within the range supported by any good HTTP client. To really push the practical limits on URL length, you would need many more books — but do you really need (or want) to support retrieval of so many pages at once?
(Alternatively, though, your book IDs might be much longer — perhaps UUIDs.)
If you do run into limits on URL length, it’s OK to use POST to a dedicated “endpoint”:
POST /api/books/bulk-pages
{"books_id": [3, 5, 10, 33]}
POST is defined in RFC 7231 § 4.3.3 as a sort of “catch-all” method:
process the representation enclosed in the request according to the resource's own specific semantics. For example, POST is used for the following functions (among others):
o Providing a block of data, such as the fields entered into an HTML form, to a data-handling process;
As a curiosity, there has been a recent attempt to standardize a SEARCH method that would allow request payloads like POST, but also be safe and idempotent like GET. Unfortunately, that effort has stalled, so you probably shouldn’t try to use SEARCH now.
Technically, the protocol allows you to send a payload even with a GET request, but as RFC 7231 § 4.3.1 notes, this is unusual and may cause trouble:
A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.

Query parameters
For retrieving multiple books, consider query parameters (using ?):
GET /api/library/2/books?id=3,5,10,33
Matrix parameters
For retrieving pages of multiple books, you could consider matrix parameters (using ;):
GET /api/library/2/books;id=3,5,10,33/pages
Then you can also use query parameters to filter the pages.
• For more details about matrix and query parameters, refer to this question.
• Also refer to the following sections of the RFC 3986 for more details: §3.3 Path and §3.4 Query
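PHP does not parse matrix parameters natively, so a handler has to pull them out of the raw path itself. A rough sketch, where the segment layout and the extra lang query parameter are just assumptions for illustration:
<?php
// Example request: GET /api/library/2/books;id=3,5,10,33/pages?lang=en
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

// Capture the "id=..." matrix parameter attached to the "books" segment.
$bookIds = [];
if (preg_match('#/books;id=([0-9,]+)#', $path, $m)) {
    $bookIds = array_map('intval', explode(',', $m[1]));
}

// Ordinary query parameters (after "?") are still available in $_GET,
// so they can be used to filter the pages, e.g. by language.
$lang = $_GET['lang'] ?? null;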

POST in your case does not look semantically correct. Maybe you can try query parameters, like:
GET - api/library/2/books/pages?ids=[1,2,3,4,5]
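If you go this way, note that the value [1,2,3,4,5] happens to be valid JSON, so on the PHP side it can be decoded directly (a sketch, assuming the parameter is called ids):
<?php
// GET api/library/2/books/pages?ids=[1,2,3,4,5]
// $_GET['ids'] arrives as the string "[1,2,3,4,5]", which json_decode accepts.
$ids = json_decode($_GET['ids'] ?? '[]', true);
if (!is_array($ids)) {
    http_response_code(400);   // malformed ids parameter
    exit;
}
$ids = array_map('intval', $ids);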

Related

GET Vs POST in REST

According to REST, we should use GET if we have to retrieve some data, and use POST if it's creating a resource.
But if we have to take multiple parameters (more than 7-8), or a list of UUIDs, say, then shouldn't we use POST rather than GET?
To avoid:
The complexity of the URL
Future scope to incorporate any new field
URL length which may limit us in future
If we are not encoding our URL, then we may risk exposing params (the least significant point, though)
Thanks.
GET Vs POST in REST
Use GET when the semantics of GET fit the use case; in other words, when you are trying to retrieve a copy of the latest representation of some resource.
Use POST when no other standardized method supports the semantics you need.
You can use POST for everything, but you give up the advantages of using more specialized methods -- the ability of general purpose components to do intelligent things because they understand the semantics of the request. For instance, GET has safe semantics, which means that a general purpose component knows that it can pre-fetch a representation of the resource before the user needs it, or automatically repeat a request if it doesn't get a response from the server.
What HTTP gives you is an extensible collection of refinements of the general purpose POST method, adding more specific constraints so that general purpose components can leverage the resulting properties.
The complexity of the URL
Complexity in the URL really isn't a big deal. As far as general purpose components are concerned, a URL is just an opaque sequence of bytes that happens to abide by certain production rules. For the most part, the effective target-uri of a web request is treated as an atomic unit, so what might seem "complex" to a human being doesn't bother the machines at all (for instance, take a look at the URL used when you submit a search from the Google home page).
URL length which may limit us in future
We care a bit about URL length. RFC 3986 doesn't restrict the length of the URI, but some implementations of general purpose components will fail if the length is far outside the norm. So you probably don't want to include a url encoded copy of the unabridged works of Shakespeare in the query part of your request.
Future scope to incorporate any new field
Again, there's not a lot of difference here. Adding new optional elements to a URI template is really no different than adding new optional elements to a message template.
we may risk exposing params
We also want to be careful about sensitive information. As far as the machines are concerned, the URI is just an identifier; there's no particular reason for them to treat a specific sequence of bytes as secret, which means that the URI may be exposed at rest (in a client's history or list of bookmarks, or in the server's access logs). Restricting sensitive information to the body of a message reduces the chance of the data escaping beyond its intended use.
Note that REST, and leveraging the different HTTP methods, isn't the only way to get useful work done. SOAP (and more recently gRPC) decided that a different collection of trade-offs was better -- in effect, reducing HTTP to a transport, rather than an application in itself.
According to REST, we should ... use POST if it's creating a resource.
This is an incorrect interpretation of REST. It's a very common interpretation, but incorrect. The semantics of POST are defined by RFC 7231; it means that, and not something else.
The suggestion that POST should only be used for create is a misleading oversimplification. The earliest reference to it that I've been able to find is a blog post by Paul Prescod in 2002; and of course it became very popular with the arrival of Ruby on Rails.
But recall: REST is the architectural style of the world wide web. HTML, the most common hypertext media type in use on the web, has native support for only two HTTP methods: GET, used to fetch resource representations from a server, and POST, which does everything else.
You should also use POST if you have sensitive data such as a username and/or password, which are best encoded as form parameters (key-value pairs).

REST API: 414 Request-URI Too Long using GET

While there have been many topics related to this issue, I still couldn't find a solution to this problem.
I have a REST API GET call that can be very specific, for example:
/v1/books?id=40,41,42,43,44,45,46,47...
However, I sometimes get a 414 Request-URI Too Long error because the list of ids is long.
I've read in every topic related to this problem that we have to use POST instead of GET when there are many parameters (instead of trying to raise the max limit in Apache, which I agree is not a good solution).
But I'm trying to fetch books, not create ones! REST API is very specific that POST is for creating new entries.
And since I'm using Slim Framework, if I call a POST to fetch books, it'll be expecting different parameters to create a new book. Slim can't register two different POST /v1/books routes, as it'll always use the first one it finds, no matter what parameters you send... (I'll either be fetching or creating books, never both at the same time.)
So, is there a solution somewhere to my problem ?
I'm a bit surprised there's not a REST solution yet...
Looked so hard, couldn't find anything...
Thanks in advance!
PS : I'm consuming this GET API using cURL/PHP, no JS/AJAX in there.
But I'm trying to fetch books, not create ones! REST API is very specific that POST is for creating new entries.
No, POST is for all sorts of things. See It is Okay to Use POST, by Roy T Fielding.
It isn’t RESTful to use POST for information retrieval when that information corresponds to a potential resource, because that usage prevents safe reusability and the network-effect of having a URI.
414 URI Too Long indicates that the target-uri in the request's start-line has tripped the server's arbitrary length limit. Since the server is the authority for its own resources, it gets to make that sort of decision for itself.
The idiomatically correct answer is to create a new resource; which is to say you POST your information to the server, and the server creates a new resource and a matching identifier. For example, the server could save the contents of your post into a random file name, and then send you back the information you wanted with an identifier that encodes that filename, so that you can GET any updates later.
POST /v1/books
id=40,41,42,43,44,45,46,47...
201 Created
Location: /v1/book-lists/9d133345-ded1-47ab-a954-a81c1d6d487f
Content-Location: /v1/book-lists/9d133345-ded1-47ab-a954-a81c1d6d487f
-- current representation of /v1/book-lists/9d133345-ded1-47ab-a954-a81c1d6d487f here --
Subsequent requests to see if the representation has changed could then be sent to the book-lists URI.
This is not, of course, free. Somebody has to decide that they want this in their domain application protocol, design the resources, implement the server side caching of the query payload, and so on.
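As a rough plain-PHP sketch of that pattern (the storage directory, the form-encoded body, and the identifier format are assumptions, not a prescribed implementation):
<?php
// POST /v1/books with body "id=40,41,42,..."
// Creates a durable "book list" resource and points the client at it.
$store = sys_get_temp_dir() . '/book-lists';
if (!is_dir($store)) {
    mkdir($store);
}

parse_str(file_get_contents('php://input'), $form);          // form-encoded body
$ids = array_map('intval', explode(',', $form['id'] ?? ''));

$listId = bin2hex(random_bytes(16));                          // server-chosen identifier
file_put_contents("$store/$listId.json", json_encode($ids));  // the "cached query payload"

http_response_code(201);
header("Location: /v1/book-lists/$listId");
header("Content-Location: /v1/book-lists/$listId");
header('Content-Type: application/json');
echo json_encode(['id' => $listId, 'book_ids' => $ids]);

// A later GET /v1/book-lists/{id} handler would read "$store/{id}.json"
// and return the current representation of that list.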
Also, note that this doesn't actually solve the problem, given that the server should not be expected to support arbitrarily long (aka infinitely long) requests. It really only gives you some breathing room between the length the server thinks is too long for a target-uri and the length the server thinks is too long for a payload (413 Payload Too Large).
So if you are designing an API, you need to think about what use cases you want to support, what data lengths are at the extremes of those use cases, and choose a domain application protocol that satisfies them, subject to your other constraints.

REST: delete multiple items in a batch

I need to delete multiple items by id in a batch; however, HTTP DELETE does not support a body payload.
Work around options:
1. DELETE /path/abc?itemId=1&itemId=2&itemId=3: on the server side it will be parsed as a list of ids, and the delete operation will be performed on each item.
2. POST /path/abc with a JSON payload containing all the ids: { ids: [1, 2, 3] }
How bad this is and which option is preferable? Any alternatives?
Update: Please note that performance is key here; it is not an option to execute a delete operation for each individual id.
Over the years, many people have been in doubt about this, as we can see in the related questions shown alongside. The accepted answers range from "for sure, do it" to "it's clearly mistreating the protocol". Since many of those questions were asked years ago, let's dig into the HTTP/1.1 specification from June 2014 (RFC 7231) for a better understanding of what's clearly discouraged and what isn't.
The first proposed workaround:
First, about resources and the URI itself on Section 2:
The target of an HTTP request is called a "resource". HTTP does not limit the nature of a resource; it merely defines an interface that might be used to interact with resources. Each resource is identified by a Uniform Resource Identifier (URI).
Based on this, some may argue that since HTTP does not limit the nature of a resource, a URI containing more than one id is possible. I personally believe it's a matter of interpretation here.
About your first proposed workaround (DELETE '/path/abc?itemId=1&itemId=2&itemId=3'), we can conclude that it's discouraged if you think of a resource as a single document in your entity collection, but acceptable if you think of a resource as the entity collection itself.
The second proposed workaround:
About your second proposed workaround (POST '/path/abc' with body: { ids: [1, 2, 3] }), using the POST method for deletion could be misleading. Section 4.3.3 says about POST:
The POST method requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics. For example, POST is used for the following functions (among others): Providing a block of data, such as the fields entered into an HTML form, to a data-handling process; Posting a message to a bulletin board, newsgroup, mailing list, blog, or similar group of articles; Creating a new resource that has yet to be identified by the origin server; and Appending data to a resource's existing representation(s).
While there's some room for interpretation in the "among others" functions for POST, it clearly conflicts with the fact that we have the DELETE method for resource removal, as we can see in Section 4.1:
The DELETE method removes all current representations of the target resource.
So I personally strongly discourage the use of POST to delete resources.
An alternative workaround:
Inspired on your second workaround, we'd suggest one more:
DELETE '/path/abc' with body: { ids: [1, 2, 3] }
It's almost the same as proposed in workaround two, but using the correct HTTP method for deletion instead. Here we arrive at the confusion about using an entity body in a DELETE request. There are many people out there stating that it isn't valid, but let's stick with Section 4.3.5 of the specification:
A payload within a DELETE request message has no defined semantics; sending a payload body on a DELETE request might cause some existing implementations to reject the request.
So, we can conclude that the specification doesn't prevent DELETE from having a body payload. Unfortunately, some existing implementations could reject the request... But how does this affect us today?
It's hard to be 100% sure, but a modern request made with fetch just doesn't allow a body for GET and HEAD. That's what the Fetch Standard states in Section 5.3, item 34:
If either body exists and is non-null or inputBody is non-null, and request’s method is GET or HEAD, then throw a TypeError.
And we can confirm it's implemented in the same way in the fetch polyfill at line 342.
Final thoughts:
Since the alternative workaround with DELETE and a body payload is left viable by the HTTP specification, and is supported by all modern browsers with fetch (and since IE10 with the polyfill), I recommend it as a valid and fully working way to do batch deletes.
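On the server side, PHP reads the body of a DELETE request the same way as any other non-GET body; a minimal sketch, with deleteItems() standing in for whatever batched removal your data store supports:
<?php
// Hypothetical batched removal against the data store (e.g. one
// "DELETE ... WHERE id IN (...)" statement instead of N single deletes).
function deleteItems(array $ids): void {
    // ...
}

if ($_SERVER['REQUEST_METHOD'] === 'DELETE') {
    // DELETE /path/abc with body {"ids": [1, 2, 3]}
    $payload = json_decode(file_get_contents('php://input'), true) ?: [];
    deleteItems(array_map('intval', $payload['ids'] ?? []));
    http_response_code(204);   // No Content: the whole batch is gone
}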
It's important to understand that the HTTP methods operate in the domain of "transferring documents across a network", and not in your own custom domain.
Your resource model is not your domain model is not your data model.
Alternative spelling: the REST API is a facade to make your domain look like a web site.
Behind the facade, the implementation can do what it likes, subject to the consideration that if the implementation does not comply with the semantics described by the messages, then it (and not the client) is responsible for any damage caused by the discrepancy.
DELETE /path/abc?itemId=1&itemId=2&itemId=3
So that HTTP request says specifically "Apply the delete semantics to the document described by /path/abc?itemId=1&itemId=2&itemId=3". The fact that this document is a composite of three different items in your durable store, each of which needs to be removed independently, is an implementation detail. Part of the point of REST is that clients are insulated from precisely this sort of knowledge.
However, and I feel like this is where many people get lost, the metadata returned by the response to that delete request tells the client nothing about resources with different identifiers.
As far as the client is concerned, /path/abc is a distinct identifier from /path/abc?itemId=1&itemId=2&itemId=3. So if the client did a GET of /path/abc and received a representation that includes itemIds 1, 2, and 3, and then submits the delete you describe, it will still have the cached representation of /path/abc, items included, after the delete succeeds.
This may, or may not, be what you want. If you are doing REST (via HTTP), it's the sort of thing you ought to be thinking about in your design.
POST /path/abc
some-useful-payload
This method tells the client that we are making some (possibly unsafe) change to /path/abc, and if it succeeds then the previous representation needs to be invalidated. The client should repeat its earlier GET /path/abc request to refresh its prior representation rather than using any earlier invalidated copy.
But as before, it doesn't affect the cached copies of other resources:
/path/abc/1
/path/abc/2
/path/abc/3
All of these are still going to be sitting there in the cache, even though they have been "deleted".
To be completely fair, a lot of people don't care, because they aren't thinking about clients caching the data they get from the web server. And you can add metadata to the responses sent by the web server to communicate to the client (and intermediate components) that the representations don't support caching, or that the results can be cached but they must be revalidated with each use.
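In PHP that metadata is just response headers; which directive you pick is a design choice, and $responseBody here is only a stand-in for whatever representation you are about to send:
<?php
$responseBody = json_encode(['items' => []]);   // stand-in representation

// Allow caching, but force clients/intermediaries to revalidate before reuse.
header('Cache-Control: max-age=0, must-revalidate');
header('ETag: "' . md5($responseBody) . '"');

// Alternatively, forbid storing this representation at all:
// header('Cache-Control: no-store');

echo $responseBody;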
Again: Your resource model is not your domain model is not your data model. A REST API is a different way of thinking about what's going on, and the REST architectural style is tuned to solve a particular problem, and therefore may not be a good fit for the simpler problem you are trying to solve.
That doesn’t mean that I think everyone should design their own systems according to the REST architectural style. REST is intended for long-lived network-based applications that span multiple organizations. If you don’t see a need for the constraints, then don’t use them. That’s fine with me as long as you don’t call the result a REST API. I have no problem with systems that are true to their own architectural style. -- Fielding, 2008

What's the best way to send complex option data in a GET request without using a body payload?

So I finished a server in Node using Express (developed through testing), and while developing my frontend I realized that Java doesn't allow any body payload in GET requests. I've done some reading around and understand that the HTTP specs do allow this, but most often the standard is not to put any payload in GET. So if not this, then what's the best way to put this into a GET request? The data I include is mostly simple fields, except that in my structure some of them are nested and some of them are arrays; that's why sending JSON seemed so easy. How else should I do this?
For example I have a request with this data
{
    token,
    filter: {
        author_id,
        id
    },
    limit,
    offset
}
I've done some reading around and understand that the HTTP specs do allow this, but most often the standard is not to put any payload in GET.
Right - the problem is that there are no defined semantics, which means that even if you control both the client and the server, you can't expect intermediaries participating in the discussion to cooperate. See RFC-7231
The data I include is mostly simple fields except in my structure some of them are nested and some of them are arrays, that's why sending JSON seemed so easy. How else should I do this?
The HTTP POST method is the appropriate way to deliver a payload to a resource. POST is the most general method in the HTTP vocabulary; it covers all use cases, even those covered by the other methods.
What you lose with POST is the fact that the request is safe and idempotent, and you don't get any decent caching behavior.
On the other hand, if the JSON document is being used to constrain the representation that is returned by the resource, then it is correct to say that the JSON is part of the identifier for that document, in which case you encode it into the query
/some/hierarchical/part?{encoded json goes here}
This gives you back the safe semantic, supports caching, and so on.
Of course, if your json structures are complicated, then you may find yourself running into various implicit limits on URI length.
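A sketch of that encoding and decoding in PHP; the parameter name q is an assumption, and the field names are just the ones from the question:
<?php
// Client side: put the whole JSON filter document into the query part of the URI.
// (Anything sensitive, like the token, is better kept out of the URI entirely.)
$filter = ['filter' => ['author_id' => 7, 'id' => 42], 'limit' => 20, 'offset' => 0];
$uri = '/some/hierarchical/part?q=' . rawurlencode(json_encode($filter));

// Server side: PHP has already percent-decoded the value, so it is plain JSON again.
$decoded = json_decode($_GET['q'] ?? 'null', true);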
I found some interesting conventions for GET that allow more complex objects to be passed (such as arrays and objects with nested properties). Many frameworks that support GET queries seem to parse these.
For arrays, repeat the field. For example, for the array ids=[1,2,3]:
test.com?ids=1&ids=2&ids=3
For nested objects such as
{
    filter.id: 5,
    filter.post: 2
}
test.com?filter[id]=5&filter[post]=2
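In plain PHP the bracket convention is parsed for you, while a bare repeated key is not (PHP keeps only the last value unless you write ids[]=...), so it is worth checking what your framework actually does:
<?php
// Nested objects: PHP turns the bracket syntax into nested arrays automatically.
parse_str('filter[id]=5&filter[post]=2', $q);
// $q === ['filter' => ['id' => '5', 'post' => '2']]

// Arrays: plain PHP keeps only the LAST value of a bare repeated key ...
parse_str('ids=1&ids=2&ids=3', $a);        // $a === ['ids' => '3']
// ... so the repeated-key form needs empty brackets to become an array.
parse_str('ids[]=1&ids[]=2&ids[]=3', $b);  // $b === ['ids' => ['1', '2', '3']]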

How to design RESTful search/filtering? [closed]

I'm currently designing and implementing a RESTful API in PHP. However, I have been unsuccessful implementing my initial design.
GET /users # list of users
GET /user/1 # get user with id 1
POST /user # create new user
PUT /user/1 # modify user with id 1
DELETE /user/1 # delete user with id 1
So far pretty standard, right?
My problem is with the first one, GET /users. I was considering sending parameters in the request body to filter the list, because I want to be able to specify complex filters without getting a super long URL, like:
GET /users?parameter1=value1&parameter2=value2&parameter3=value3&parameter4=value4
Instead I wanted to have something like:
GET /users
# Request body:
{
    "parameter1": "value1",
    "parameter2": "value2",
    "parameter3": "value3",
    "parameter4": "value4"
}
which is much more readable and gives you great possibilities to set complex filters.
Anyway, file_get_contents('php://input') didn't return the request body for GET requests. I also tried http_get_request_body(), but the shared hosting that I'm using doesn't have pecl_http. Not sure it would have helped anyway.
I found this question and realized that GET probably isn't supposed to have a request body. It was a bit inconclusive, but they advised against it.
So now I'm not sure what to do. How do you design a RESTful search/filtering function?
I suppose I could use POST, but that doesn't seem very RESTful.
The best way to implement a RESTful search is to consider the search itself to be a resource. Then you can use the POST verb because you are creating a search. You do not have to literally create something in a database in order to use a POST.
For example:
POST http://example.com/people/searches
Accept: application/json
Content-Type: application/json

{
    "terms": {
        "ssn": "123456789"
    },
    "order": { ... },
    ...
}
You are creating a search from the user's standpoint. The implementation details of this are irrelevant. Some RESTful APIs may not even need persistence. That is an implementation detail.
If you use the request body in a GET request, you're breaking the REST principle, because your GET request won't be cacheable: the cache system uses only the URL.
What's worse, your URL can't be bookmarked, because the URL alone doesn't contain all the information needed to bring the user back to this page.
Use URL or Query parameters instead of request body parameters, e.g.:
/myapp?var1=xxxx&var2=xxxx
/myapp;var1=xxxx/resource;var2=xxxx
In fact, the HTTP RFC 7231 says that:
A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.
For more information take a look here.
It seems that resource filtering/searching can be implemented in a RESTful way. The idea is to introduce a new endpoint called /filters/ or /api/filters/.
Using this endpoint, a filter can be considered a resource and hence created via the POST method. This way the body can, of course, carry all the parameters, and complex search/filter structures can be built.
After creating such a filter, there are two possibilities for getting the search/filter result.
1. A new resource with a unique ID is returned along with a 201 Created status code. Then, using this ID, a GET request can be made to /api/users/ like:
GET /api/users/?filterId=1234-abcd
2. After the new filter is created via POST, the server doesn't reply with 201 Created but immediately with 303 See Other and a Location header pointing to /api/users/?filterId=1234-abcd. This redirect is handled automatically by the underlying HTTP library.
In both scenarios two requests need to be made to get the filtered results; this may be considered a drawback, especially for mobile applications. For mobile applications I'd use a single POST call to /api/users/filter/.
How to keep created filters?
They can be stored in a DB and used later on. They can also be stored in some temporary storage, e.g. Redis, with a TTL after which they expire and are removed.
What are the advantages of this idea?
Filters and filtered results are cacheable and can even be bookmarked.
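A sketch of the redirect variant with a short-lived filter kept in Redis (assumes the phpredis extension; the key prefix, TTL, and endpoint paths are arbitrary choices for illustration):
<?php
// POST /api/filters/ with a JSON body describing the filter
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$filterDocument = file_get_contents('php://input');       // keep the raw JSON as-is
$filterId = bin2hex(random_bytes(16));
$redis->setEx("filter:$filterId", 3600, $filterDocument);  // expires after one hour

// 303 See Other: the client's HTTP library follows the redirect with a GET,
// ending up at /api/users/?filterId=... for the actual filtered results.
http_response_code(303);
header("Location: /api/users/?filterId=$filterId");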
I think you should go with request parameters, but only as long as there isn't an appropriate HTTP header to accomplish what you want to do. The HTTP specification does not explicitly say that GET cannot have a body. However, this paper states:
By convention, when GET method is used, all information required to identify the resource is encoded in the URI. There is no convention in HTTP/1.1 for a safe interaction (e.g., retrieval) where the client supplies data to the server in an HTTP entity body rather than in the query part of a URI. This means that for safe operations, URIs may be long.
As I'm using a Laravel/PHP backend, I tend to go with something like this:
/resource?filters[status_id]=1&filters[city]=Sydney&page=2&include=relatedResource
PHP automatically turns [] params into an array, so in this example I'll end up with a $filter variable that holds an array/object of filters, along with a page and any related resources I want eager loaded.
If you use another language, this might still be a good convention and you can create a parser to convert [] to an array.
FYI: I know this is a bit late, but for anyone who is interested:
Depending on how RESTful you want to be, you will have to implement your own filtering strategy, as the HTTP spec is not very clear on this. I'd like to suggest URL-encoding all the filter parameters, e.g.:
GET api/users?filter=param1%3Dvalue1%26param2%3Dvalue2
I know it's ugly but I think it's the most RESTful way to do it and should be easy to parse on the server side :)
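Decoding that on the PHP side is indeed a one-liner, since PHP has already undone the outer percent-encoding by the time the value reaches $_GET:
<?php
// GET api/users?filter=param1%3Dvalue1%26param2%3Dvalue2
// $_GET['filter'] has already been decoded once: "param1=value1&param2=value2"
parse_str($_GET['filter'] ?? '', $filters);
// $filters === ['param1' => 'value1', 'param2' => 'value2']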
Don't fret too much about whether your initial API is fully RESTful or not (especially when you are just in the alpha stages). Get the back-end plumbing to work first. You can always do some sort of URL transformation/rewriting to map things out, refining iteratively until you get something stable enough for widespread testing ("beta").
You can define URIs whose parameters are encoded by position and convention on the URIs themselves, prefixed by a path you know you'll always map to something. I don't know PHP, but I would assume that such a facility exists (as it exists in other languages with web frameworks):
I.e., do a "user" type of search with param[i]=value[i] for i=1..4 on store #1 (with value1,value2,value3,... as shorthand for URI query parameters):
1) GET /store1/search/user/value1,value2,value3,value4
or
2) GET /store1/search/user,value1,value2,value3,value4
or as follows (though I would not recommend it, more on that later)
3) GET /search/store1,user,value1,value2,value3,value4
With option 1, you map all URIs prefixed with /store1/search/user to the search handler (or whatever the PHP equivalent is), defaulting to searches for resources under store1 (equivalent to /search?location=store1&type=user).
By a convention documented and enforced by the API, parameter values 1 through 4 are separated by commas and presented in that order.
Option 2 adds the search type (in this case user) as positional parameter #1. Either option is just a cosmetic choice.
Option 3 is also possible, but I don't think I would like it. I think the ability to search within a certain resource should be presented in the URI itself, preceding the search (indicating clearly in the URI that the search is scoped to that resource).
The advantage of this over passing query parameters is that the search is part of the URI path (thus treating a search as a resource, a resource whose contents can - and will - change over time). The disadvantage is that the parameter order is mandatory.
Once you do something like this, you can use GET, and it would be a read-only resource (since you can't POST or PUT to it - it gets updated when it's GET'ed). It would also be a resource that only comes to exist when it is invoked.
One could also add more semantics to it by caching the results for a period of time, or by having a DELETE cause the cached results to be discarded. This, however, might run counter to what people typically use DELETE for (and people typically control caching with caching headers anyway).
How you go about it is a design decision, but this is the way I'd go about it. It is not perfect, and I'm sure there will be cases where doing this is not the best thing to do (especially for very complex search criteria).
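For completeness, here is one rough way the positional scheme from option 1 could be dispatched in plain PHP; the path prefix, the comma convention, and the handler hand-off are just the conventions from this example:
<?php
// GET /store1/search/user/value1,value2,value3,value4
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

if (preg_match('#^/(?<store>[^/]+)/search/(?<type>[^/]+)/(?<values>[^/]+)$#', $path, $m)) {
    $store  = $m['store'];                  // e.g. "store1"
    $type   = $m['type'];                   // e.g. "user"
    $values = explode(',', $m['values']);   // positional parameters, order is mandatory
    // ... hand the ($store, $type, $values) triple to the search handler ...
} else {
    http_response_code(404);
}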