I believe this header can be used to maintain affinity with a specific replica, in order to get write-then-read consistency in a series of calls to the Azure AD Graph API. But I can't find any documentation for it, save for a mention on this page (https://msdn.microsoft.com/en-us/library/azure/ad/graph/howto/azure-ad-graph-api-error-codes-and-error-handling) saying not to use it under certain error conditions. What is the correct way to use this header?
I have a resource called “subscriptions”
I need to update a subscription’s send date. When a request is sent to my endpoint, my server will call a third-party system to update the passed subscription.
“subscriptions” have other types of updates. For instance, you can change a subscription’s frequency. This operation also involves calling a third-party system from my server.
To be truly “RESTful,” must I force these different types of updates to share an endpoint?
PATCH subscriptions/:id
I can hypothetically use my controller behind the endpoint to fire different functions depending on the query string... But what if I need to add a third or fourth “update” type action? Should they ALL run through this single PATCH route?
To be truly “RESTful,” must I force these different types of updates to share an endpoint?
No - but you will often want to.
Consider how you would support this on the web: you might have a number of different HTML forms, each accepting a slightly different set of inputs from the user. When the form is submitted, the browser will use the input controls and form metadata to construct an HTTP (POST) request. The target URI of the request is copied from the form action.
So your question is analogous to: should we use the same action for all of our different forms?
And the answer is yes, if you want the general purpose HTTP application to understand which resource is expected to change in response to the message. One reason that you might want that is cache invalidation; using the right target URI allows all of the caches to understand which previously cached responses should not be reused.
Is that choice free? No - it adds some ambiguity to your access logs, and routing the request to the appropriate handler in your code takes a bit more work.
Trying to use PATCH with a different target URI for each kind of update is a little bit weird, and suggests that maybe you are trying to stretch PATCH beyond the standard constraints.
PATCH (and PUT) have remote authoring semantics; what they mean is "make your copy of the target resource look like my copy". These are methods we would use if we were trying to fix a spelling error on a web page.
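As a rough sketch of those semantics (the URI and field name below are only illustrative; the format shown is JSON Merge Patch), a request like:

PATCH /subscriptions/42 HTTP/1.1
Content-Type: application/merge-patch+json

{ "sendDate": "2020-02-01" }

says "here is what my copy of /subscriptions/42 looks like in the part I'm changing; make your copy match".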
Trying to change the representation of one resource by sending a remote authoring request to a different resource makes it harder for the general-purpose HTTP application components to add value. You are coloring outside of the lines, and that means accepting the liability if anything goes wrong, because you are using standardized messages in a non-standard way.
That said, it is reasonable to have many different resources that present representations of the same domain entity. Instead of putting everything you know about a user into one web page, you can spread it out among several that are linked together.
You might have, for example, a web page for an invoice, and then another web page for shipping information, and another web page for billing information. You now have a resource model with clearer separation of concerns, and can combine the standardized meanings of PUT/PATCH with this resource model to further your business goals.
We can create as many resources as we need (in the web level; at the REST level) to get a job done. -- Webber, 2011
So, in your example, would I do one endpoint like this: user/:id/invoice/:id, and then another like this: user/:id/billing/:id?
Resources, not endpoints.
GET /invoice/12345
GET /invoice/12345/shipping-address
GET /invoice/12345/billing-address
Or
GET /invoice/12345
GET /shipping-address/12345
GET /billing-address/12345
The spelling conventions that you use for resource identifiers don't actually matter very much.
So if it makes life easier for you to stick all of these into a hierarchy that includes both users and invoices, that's also fine.
I know that most of today's REST APIs paginate their responses.
But I can't find any guidance on how to choose the ideal size of the batch returned by an API call: should it be 10, 100, 1000? In short: what factors should drive the choice of the size of an API response?
Some people state that it should be based on the number of elements displayed by the UI. I don't agree with this, as not all APIs are directly linked to a UI, and, in any case, modern REST APIs let the caller choose the number of items in the output batch with a configurable parameter, up to a certain maximum.
So, how could we define the value for this "maximum number of elements returned by an HTTP request"?
Should it be based on the payload size? Internal architecture of the API? Should it be based on performance measurement?
Any insight on this? I'm not really looking for an explicit figure, but rather for techniques that could help find the answer. Ideally I'd like to see the process followed by some successful APIs, but I cannot find any.
My general approach to this is to:
Try to avoid paging
Make REST clients resilient against paging changes, by using next and previous links, instead of using a ?page= attribute. A REST client knows another page is available strictly by the appearance of the link.
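For example, a page representation can carry the links itself, so clients never compute page numbers (the payload shape here is only an illustration, not any particular standard):

{
  "items": [ ... ],
  "links": {
    "prev": "/records?cursor=abc",
    "next": "/records?cursor=def"
  }
}

When the next link is absent, the client knows it is on the last page.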
I don't use hard rules or measurements to figure out when paging is needed. Paging is generally painful, so my first approach would be to try to figure out what requirements drives the need for paging, and if that requirement can be removed.
Once I've determined it's not possible to remove this requirement another way, I would set the page cut-off as large as is reasonable, to reduce the likelihood that clients need to make additional requests.
If it's a closed API (used only by clients you control), pick whatever the UI wants; it's trivial to change. If clients can choose from several options, you can include a pageSize parameter. Or, better...
If it's an open API (used by clients you don't control), then let clients control what size paging they want. Rather than support a pageNumber parameter, support offset and limit. Offset is how many records to skip before starting to return records, and limit is the maximum number of records to return. If the client is not happy with how their request performs, they can adjust the parameters to suit their needs. Any upper limit your API has should be driven by what your service can handle. It's neither possible nor desirable for you to try to figure out the Magic Maximum Page Size that makes all clients happy and no clients sad.
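For instance (the numbers here are arbitrary):

GET /records?offset=200&limit=50

skips the first 200 records and returns at most 50; a client that finds that too slow or too chatty simply adjusts the two values on its next request.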
Also, please note that none of this has anything to do with ReST, which is silent when it comes to paging.
I usually make a rough performance measurement by hand. I want as few requests as possible, but do not want to risk timeouts.
I am building my first REST API. I could do most of my queries without any problem but now I encountered a use case I don't know how to solve.
Here is the use case.
I submit a dataset to the API, the dataset is then stored in database (this part works as intended). When stored in the database, it creates different resources due to business rules.
So now I don't know how to tell the client the location of the newly created resources, since there could be more than one.
I read "Can the Location header be used for multiple resource locations in a 201 Created response?", which tells me that only one Location header is allowed.
Should I rethink my POST method? Should I use a different way to tell the client where the resources are?
Yes, the Location header requires a single identifier. It's intended for one resource you should follow in order to complete the request according to some predefined semantics.
You can use the Link header instead. Then you can have multiple URIs. Check RFC 5988 for a few examples, and don't forget to document it properly.
As an alternative, keep in mind that the semantics of the POST method are determined by you, so there's nothing wrong with returning the list of links in the response payload, as long as the resource format allows it in some way and it's documented.
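A minimal sketch of what such a response could look like (the URIs and the body shape are invented for illustration):

HTTP/1.1 201 Created
Location: /datasets/42
Link: </reports/7>; rel="related"
Link: </summaries/13>; rel="related"
Content-Type: application/json

{ "created": ["/datasets/42", "/reports/7", "/summaries/13"] }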
What is the REST-ful way of deleting multiple items?
My use case is that I have a Backbone Collection wherein I need to be able to delete multiple items at once. The options seem to be:
Send a DELETE request for every single record (which seems like a bad idea if there are potentially dozens of items);
Send a DELETE where the ID's to delete are strung together in the URL (i.e., "/records/1;2;3");
In a non-REST way, send a custom JSON object containing the ID's marked for deletion.
All options are less than ideal.
This seems like a gray area of the REST convention.
Sending a DELETE request for every single record (your first option) is a viable RESTful choice, but obviously has the limitations you have described.
Don't do the second option (stringing the IDs together in the URL). It would be construed by intermediaries as meaning "DELETE the (single) resource at /records/1;2;3" - so a 2xx response to this may cause them to purge their cache of /records/1;2;3; not purge /records/1, /records/2 or /records/3; proxy a 410 response for /records/1;2;3; or other things that don't make sense from your point of view.
Your third option (a custom JSON object describing the deletion) is best, and can be done RESTfully. If you are creating an API and you want to allow mass changes to resources, you can use REST to do it, but exactly how is not immediately obvious to many. One method is to create a "change request" resource (e.g. by POSTing a body such as records=[1,2,3] to /delete-requests) and poll the created resource (specified by the Location header of the response) to find out if your request has been accepted, rejected, is in progress, or has completed. This is useful for long-running operations. Another way is to send a PATCH request to the list resource, /records, the body of which contains a list of resources and actions to perform on those resources (in whatever format you want to support). This is useful for quick operations where the response code for the request can indicate the outcome of the operation.
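A minimal sketch of the "change request" variant described above (the /delete-requests URI and the payload are assumptions, not a standard):

POST /delete-requests HTTP/1.1
Content-Type: application/json

{ "records": [1, 2, 3] }

HTTP/1.1 202 Accepted
Location: /delete-requests/87

The client then polls GET /delete-requests/87 to find out whether the deletion is pending, completed, or rejected.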
Everything can be achieved whilst keeping within the constraints of REST, and usually the answer is to make the "problem" into a resource, and give it a URL.
So, batch operations, such as delete here, or POSTing multiple items to a list, or making the same edit to a swathe of resources, can all be handled by creating a "batch operations" list and POSTing your new operation to it.
Don't forget, REST isn't the only way to solve any problem. “REST” is just an architectural style and you don't have to adhere to it (but you lose certain benefits of the internet if you don't). I suggest you look down this list of HTTP API architectures and pick the one that suits you. Just make yourself aware of what you lose out on if you choose another architecture, and make an informed decision based on your use case.
There are some bad answers to this question on Patterns for handling batch operations in REST web services? which have far too many upvotes, but ought to be read too.
If GET /records?filteringCriteria returns array of all records matching the criteria, then DELETE /records?filteringCriteria could delete all such records.
In this case the answer to your question would be DELETE /records?id=1&id=2&id=3.
I think the Mozilla Storage Service SyncStorage API v1.5 shows a good way to delete multiple records using REST.
Deletes an entire collection.
DELETE https://<endpoint-url>/storage/<collection>
Deletes multiple BSOs from a collection with a single request.
DELETE https://<endpoint-url>/storage/<collection>?ids=<ids>
ids: deletes BSOs from the collection whose ids are in the provided comma-separated list. A maximum of 100 ids may be provided.
Deletes the BSO at the given location.
DELETE https://<endpoint-url>/storage/<collection>/<id>
http://moz-services-docs.readthedocs.io/en/latest/storage/apis-1.5.html#api-instructions
This seems like a gray area of the REST convention.
Yes, so far I have only come across one REST API design guide that mentions batch operations (such as a batch delete): the Google API Design Guide.
This guide mentions the creation of "custom" methods that can be associated with a resource by using a colon, e.g. https://service.name/v1/some/resource/name:customVerb; it also explicitly mentions batch operations as a use case:
A custom method can be associated with a resource, a collection, or a service. It may take an arbitrary request and return an arbitrary response, and also supports streaming request and response. [...] Custom methods should use HTTP POST verb since it has the most flexible semantics [...] For performance critical methods, it may be useful to provide custom batch methods to reduce per-request overhead.
So you could do the following according to google's api guide:
POST /api/path/to/your/collection:batchDelete
...to delete a bunch of items of your collection resource.
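Per the quoted passage, the request can be arbitrary, so the body of the POST is up to your API; something like the following would be in the spirit of the guide (the "ids" field name is just an illustrative assumption):

{ "ids": ["123", "456", "789"] }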
I've allowed for a wholesale replacement of a collection, e.g. PUT ~/people/123/shoes where the body is the entire collection representation.
This works for small child collections of items where the client wants to review the items, prune out some, add some others in, and then update the server. They could PUT an empty collection to delete all.
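Concretely, replacing the whole child collection might look like this (the representation is just an example):

PUT ~/people/123/shoes
Content-Type: application/json

[
  { "id": 9, "colour": "black" },
  { "id": 12, "colour": "brown" }
]

and a PUT with an empty array ([]) deletes every shoe in one request.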
This would mean GET ~/people/123/shoes/9 would still remain in cache even though a PUT deleted it, but that's just a caching issue and would be a problem if some other person deleted the shoe.
My data/systems APIs always use ETags as opposed to expiry times so the server is hit on each request, and I require correct version/concurrency headers to mutate the data. For APIs that are read-only and view/report aligned, I do use expiry times to reduce hits on origin, e.g. a leaderboard can be good for 10 mins.
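Those validators are standard HTTP; a rough sketch of the flow:

GET ~/people/123/shoes
→ 200 OK, ETag: "v42"

PUT ~/people/123/shoes
If-Match: "v42"
[new collection representation]

→ 200 OK if nothing changed in the meantime, or 412 Precondition Failed if another client modified the collection first.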
For much larger collections, like ~/people, I tend not to need multiple delete, the use-case tends not to naturally arise and so single DELETE works fine.
In future, and from experience with building REST APIs and hitting the same issues and requirements, like audit, I'd be inclined to use only GET and POST verbs and design around events, e.g. POST a change of address event, though I suspect that'll come with its own set of problems :)
I'd also allow front-end devs to build their own APIs that consume stricter back-end APIs, since there are often practical, valid client-side reasons why they don't like strict "Fielding zealot" REST API designs, and for productivity and cache-layering reasons.
You can POST a deleted resource :). The URL will be
POST /deleted-records
and the body will be
{"ids": [1, 2, 3]}
I'm building a REST service where I want to implement a way to deprecate certain URIs when they shouldn't be supported anymore for one reason or another. As functions are deprecated, they will be replaced by new ones that work in similar (but not identical) ways. This means that at some point, I will have to start responding with 410 Gone.
The idea is that all client software should be updated, and after say six months all users should have had the chance to upgrade. At this time, the deprecated URIs will start to inform the client that it's out of date, so that the client can display a message to the user. This time is not known in advance, though, and can't explicitly be written in the documentation.
The problem I want to solve is:
Is there an HTTP header field I should use to indicate that a certain URI will cease to work at a certain time and, if so, which?
This can't be the first time someone wants to solve this problem. Is there an unofficial header field already in use, or should I design my own? Note that I don't want to add this information to the content itself, as that would mean that every resource was changed and needs to be refreshed by the client, which is of course not what happened.
Strictly speaking, no. The resources should be driving your application's state, so if there is a change, the URI linking would provide the necessary changes to your application.
As for an HTTP header, you are free to add custom headers, normally starting with X-, but it's important to know that changes to URIs are only interesting to developers, not users.
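For example (the header names and values here are entirely made up; document whatever you choose in your API docs):

HTTP/1.1 200 OK
X-Deprecated-After: 2018-12-31
X-Deprecation-Info: https://example.com/docs/migration

A client that understands the custom headers can warn its users; everyone else simply ignores them.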