Do I need to put path params in a URI if I don't use them? [REST APIs]

I'm having a debate with a senior of mine at work and I want to know if what he says is true. Imagine I have a path /users/bucket-list that gets the currently logged-in user's bucket list. Since I get the ID of the logged-in user from the context, do I still need to name the path /users/:user_id/bucket-list? I don't use the path param, but my senior thinks it should still be there, while I think that since I don't use it I should omit it. I want to hear your thoughts on this.

TL;DR
You are "doing it wrong"
Most of the time, you'll get away with it
Getting away with it is the wrong goal
Any information that can be named can be a resource -- Fielding, 2000
In most cases, I find that the easiest way to reason about "resources" is to substitute "documents", and then, once the basic ideas are in place, generalize if necessary.
One of the design problems that we face in creating our API is figuring out our resources; should "Alice's bucket-list" be presented separately from "Bob's bucket-list", or do they belong together? Do we have one resource for the entire list, or one resource for each entry in the list, and so on.
A related problem we need to consider in our design is how many representations a resource should support. This might include choosing to support multiple file formats (csv vs plain-text vs json, etc), and different languages (EN vs FR), and so on.
Your senior's proposed design is analogous to having two different resources. And having done that, everything will Just Work[tm]. There's no confusion about which resource is being identified, authorization is completely separate from identification, and so on.
Your design, however, is analogous to having a single resource with multiple representations, where a representation is chosen based on who is looking at it. And that's kind of a mess -- certainly your server can interpret the HTTP request, but general purpose components are not going to know that your resource has different identification semantics than every other resource on the internet.
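To make the contrast concrete, here is a minimal sketch of the two designs, assuming Flask and a hypothetical current_user_id() helper that reads the authenticated user from the request context; every name in it is illustrative rather than prescriptive.

from flask import Flask, jsonify

app = Flask(__name__)

def current_user_id():
    # Hypothetical helper: in a real app this would come from the
    # Authorization header or the session; stubbed for the sketch.
    return "alice"

def load_bucket_list(user_id):
    # Stand-in for whatever storage you actually use.
    return {"user": user_id, "items": ["see the northern lights"]}

# The senior's design: the user is part of the identifier, so Alice's
# list and Bob's list are two different resources with two different URIs.
@app.route("/users/<user_id>/bucket-list")
def bucket_list_for(user_id):
    return jsonify(load_bucket_list(user_id))

# The asker's design: one URI whose representation depends on who asks.
# General-purpose components only ever see /users/bucket-list, so they
# cannot tell that two responses to the same URI describe different lists.
@app.route("/users/bucket-list")
def my_bucket_list():
    return jsonify(load_bucket_list(current_user_id()))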
We normally distinguish between different representations using the Vary header, but the Authorization field is sort of out of bounds there; see RFC 7231.
In practice, you are likely to get away with your design because we have special rules about how shared caches interact with authenticated requests; see RFC 7234.
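Roughly, those special rules say a shared cache must not reuse a response to a request that carried an Authorization header unless the response explicitly opts back in (for example with public, s-maxage, or must-revalidate). A hedged illustration of the usual choice, again in Flask:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/users/bucket-list")
def my_bucket_list():
    resp = jsonify({"items": ["see the northern lights"]})
    # "private" keeps the response out of shared caches entirely, which is
    # why the single-URI design usually gets away with it in practice.
    resp.headers["Cache-Control"] = "private, max-age=60"
    # The risky variant: "public" or "s-maxage" would opt the response back
    # into shared caching even though the request was authenticated.
    # resp.headers["Cache-Control"] = "public, s-maxage=60"
    return resp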
But "likely to get away with it" is pretty weak. The point of having common standards is to get interop. If you are going to risk interop, you had better be getting something very valuable back in exchange. Nothing you have presented here suggests a compensating advantage.

Related

REST API URL design for returning only a single property, from a performance viewpoint

I am developing a REST API and a frontend as microservices. I know some basic principles of URL design, but there is a performance issue and I'm not sure how to deal with it.
To display the webpage conveniently, I'd like to get certain information about more than 100 resources per page. (Actually, a BFF exists as an orchestration layer.)
Since the target resource includes an aggregation over a large number of database records, it takes about 3 seconds per request. However, the information I want on the webpage is only a small part of it, and it doesn't require the complex aggregation, so fetching just that part would make the response time much shorter.
Take this case as an example.
There is an article resource, and articles/:id returns the resource data, which involves a complex aggregation. In this case, though, all I need is the count of comments, which can be obtained quickly by issuing a SQL COUNT statement, without a counter cache.
However, in the REST API designs I've examined, I've never seen a GET request that returns only a specific field.
And with microservices, an API should only return resource state so that services stay loosely coupled, which suggests the API shouldn't focus on specific fields.
What kind of URL design or performance optimization should I consider in the face of this performance problem?
REST is the architectural style of the world wide web; the style and the web were developed in parallel in the 1990s. Given the usage patterns and technical constraints of the time, the attention of the style is focused more on caching large-grained documents than on trying to reduce transport latency.
So you would be more likely to design your representation so that the count is present somewhere in it, and addressable, so that you can call attention to it. Thus: fragments.
So if you already had a resource with the identifier /articles, and having the comment count was important enough, then you might treat the representation like a DTO (Data Transfer Object), and simply include the comment count in the representation, accessible via some identifier like /articles#comment-count.
That's not necessarily a great fit for your use case.
An alternative is to just introduce a stand-alone comment count resource.
Any information that can be named can be a resource -- Fielding, 2000
If you are actually doing REST, then the spelling of the URI doesn't matter (consider: when was the last time you cared where the Google search form actually submitted your query?). The identifiers are used as identifiers; general-purpose clients don't try to extract semantics from them.
So using /d8a496c4-51c5-4eeb-8cbd-d5e777cbdee7 as your identifier for the comment-count should "just work".
It's not particularly friendly to the human reader, of course, so you might prefer something else. URI design is, in this sense, a lot like choosing a good variable name -- the machines don't care, so we make choices that are easier for the human beings to manage. That normally means choosing a spelling that is "consistent" with the other spellings in your API.
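Putting those two ideas together (a stand-alone count resource with a reader-friendly spelling), a sketch might look like the following; it assumes Flask and a SQLite comments table, and the route and schema are purely illustrative.

import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/articles/<int:article_id>/comment-count")
def comment_count(article_id):
    # A plain COUNT(*) sidesteps the expensive aggregation that the full
    # /articles/<id> representation needs.
    con = sqlite3.connect("app.db")
    (count,) = con.execute(
        "SELECT COUNT(*) FROM comments WHERE article_id = ?",
        (article_id,),
    ).fetchone()
    con.close()
    resp = jsonify({"article": article_id, "comment_count": count})
    # A count usually tolerates a little staleness, so let caches help.
    resp.headers["Cache-Control"] = "max-age=30"
    return resp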
RFC 3986 introduces distinctions between the path, the query, and the fragment that you can expect general-purpose components to understand; one potentially important distinction is that reference resolution describes how a general-purpose component can compute a new identifier from a base URI and a relative reference.
/articles/comment-count + ./1 -> /articles/1
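If you want to check a resolution like that mechanically, Python's standard library implements the RFC 3986 rules; the https://example.org base below is only there to make the example absolute.

from urllib.parse import urljoin

base = "https://example.org/articles/comment-count"
print(urljoin(base, "./1"))  # https://example.org/articles/1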

REST API Design - Single General Endpoint or Many Specific endpoints

This is a relatively subjective question, but I want to get other people's opinions nonetheless.
I am designing a REST API that will be accessed by internal systems (a couple of client apps at most).
In general the API needs to update parameters of different car brands. Each car brand has around 20 properties, some of which are shared between all car brands, and some specific for each brand.
I am wondering what is a better approach to the design for the endpoints of this API.
Should I use a single endpoint that takes in a string, i.e. a JSON document with all the properties of the car brand, along with the ID of the car brand?
Or should I provide a separate endpoint per car brand, each with a body containing the exact properties necessary for that brand?
So in the first approach I have a single endpoint with a string parameter that I expect to be JSON with all the necessary values:
PUT /api/v1/carBrands/
Whereas in the second approach I have an endpoint per car brand, and each endpoint has a typed DTO object representing all the values it needs:
PUT /api/v1/carBrand/1
PUT /api/v1/carBrand/2
.
.
.
PUT /api/v1/carBrand/n
The first approach seems to save a lot of repetitive code; after all, the only difference is the set of parameters. However, since it accepts an arbitrary string, there is no way for the end user to know what they should pass: they will need someone to tell them and/or have to read the documentation.
The second approach is a lot more readable, and anyone can fill in the data, since they know what it is. But it involves replicating mostly the same code around 20 times.
It's really hard for me to pick an option, since both approaches have their drawbacks. How should I judge which is the better option?
I am wondering what is a better approach to the design for the endpoints of this API.
Based on your examples, it looks as though you are asking about resource design, and in particular whether you should use one large resource, or a family of smaller ones.
REST doesn't answer that question... not directly, anyway. What REST does do is identify that caching granularity is at the resource level. If there are two pieces of information, and you want the invalidation of one to also invalidate the other, then those pieces of information should be part of the same resource, which is to say they should be accessed using the same URI.
If that's not what you want, then you should probably be leaning toward using separated resources.
I wouldn't necessarily expect that making edits to Ford should force the invalidation of my local copy of Ferrari, so that suggests that I may want to treat them as two different resources, rather than two sub-resources.
Compare
/api/v1/carBrands#Ford
/api/v1/carBrands#Ferrari
with
/api/v1/carBrands/Ford
/api/v1/carBrands/Ferrari
In the former case, I've got one resource in my cache (/api/v1/carBrands); any changes I make to it invalidate the entire resource. In the latter case, I've got two resources cached; changing one ignores the other.
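To make that concrete, here is a toy client-side cache keyed by URI; the helper names are made up, and a real HTTP cache would be driven by Cache-Control headers rather than explicit calls.

# Toy cache: the URI is the key, so a write to one resource evicts
# only that entry.
cache = {}

def cached_get(uri, fetch):
    if uri not in cache:
        cache[uri] = fetch(uri)   # e.g. an actual HTTP GET
    return cache[uri]

def put(uri, body, send):
    send(uri, body)               # e.g. an actual HTTP PUT
    cache.pop(uri, None)          # invalidate just this resource

# With /api/v1/carBrands/Ford and /api/v1/carBrands/Ferrari as separate
# keys, updating Ford leaves the cached Ferrari untouched; with a single
# /api/v1/carBrands resource, one key holds both brands, so any update
# evicts everything.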
It's not wrong to use one or the other; both are fine and have plenty of history. They make different trade-offs, and one or the other may be a better fit for the problem you are trying to solve today.

REST design principles: Referencing related objects vs Nesting objects

My team and I are refactoring a REST API, and I have come to a question.
For the sake of brevity, let us assume that we have an SQL database with four tables: Teachers, Students, Courses and Classrooms.
Right now, all the relations between items are represented in the REST API by referencing the URL of the related item. For example, for a course we could have the following:
{ "id":"Course1", "teacher": "http://server.com/teacher1", ... }
In addition, if I ask for a list of courses through a GET call to /courses, I get a list of references, as shown below:
{
  ... // pagination details
  "items": [
    {"href": "http://server1.com/course1"},
    {"href": "http://server1.com/course2"},
    ...
  ]
}
All this is nice and clean, but if I want a list of all the course titles with the teachers' names, and I have 2000 courses and 500 teachers, I have to do the following:
Approximately 2500 queries just to read the data.
Implement the join between the teachers and courses
Optimize with caching etc, so that I will do it as fast as possible.
My problem is that this method creates a lot of network traffic, with thousands of REST API calls, and that I have to re-implement the natural join that the database would do far more efficiently.
Colleagues say that this approach is the standard way of implementing a REST API, but then a relatively simple query becomes a big hassle.
My question therefore is:
1. Is it wrong if we nest the teacher information in the courses?
2. Should the listing of items, e.g. GET /courses, return a list of references or a list of items?
Edit: After some research I would say the model I have in mind corresponds mainly to the one shown in jsonapi.org. Is this a good approach?
My problem is that this method creates a lot of network traffic, with thousands of REST API calls, and that I have to re-implement the natural join that the database would do far more efficiently. Colleagues say that this approach is the standard way of implementing a REST API, but then a relatively simple query becomes a big hassle.
Your colleagues have lost the plot.
Here's your heuristic - how would you support this use case on a web site?
You would probably do it by defining a new web page that produces the report you need. You'd run the query, use the result set to generate a bunch of HTML, and ta-da! The client has the information they need in a standardized representation.
A REST API is the same thing, with more emphasis on machine readability. Create a new document, with a schema so that your clients can understand the semantics of the document you return to them, tell the clients how to find the target URI for the document, and voilà.
Creating new resources to handle new use cases is the normal approach to REST.
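In code, "define a new web page" can be as small as one more route that runs the join where the data lives and returns a single document; the sketch below assumes Flask and a SQLite database with courses and teachers tables, and every name in it is illustrative.

import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/course-roster")
def course_roster():
    # One query, one document: the join happens in the database,
    # not in 2500 round trips from the client.
    con = sqlite3.connect("app.db")
    rows = con.execute(
        "SELECT c.title, t.name"
        " FROM courses c JOIN teachers t ON c.teacher_id = t.id"
    ).fetchall()
    con.close()
    return jsonify({"items": [{"course": title, "teacher": name}
                              for title, name in rows]})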
Yes, I totally think you should design something similar to jsonapi.org. As a rule of thumb, I would say "prefer a solution that requires fewer network calls". That's especially true if the number of network calls drops by an order of magnitude.
Of course, that doesn't eliminate the need to limit the request/response size if it becomes unreasonable.
Real-life solutions must strike a proper balance. A clean API is nice as long as it works.
So in your case I would do something like:
GET /courses?include=teachers
Or
GET /courses?includeTeacher=true
Or
GET /courses?includeTeacher=brief|full
In the last one, the response would contain only the teacher's ID for brief, and full teacher details for full.
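A rough sketch of how the server could honour such a flag, with Flask and stand-in storage helpers assumed; only the include parameter spelling mirrors the first example above.

from flask import Flask, request, jsonify

app = Flask(__name__)

def load_courses():
    # Stand-in for the real query.
    return [{"id": "course1", "title": "Algebra", "teacher_id": "t1"}]

def load_teachers(teacher_ids):
    # Stand-in: one batched lookup instead of one request per course.
    return {"t1": {"name": "Ada"}}

@app.route("/courses")
def list_courses():
    courses = load_courses()
    if request.args.get("include") == "teachers":
        teachers = load_teachers({c["teacher_id"] for c in courses})
        for course in courses:
            course["teacher"] = teachers[course["teacher_id"]]
    return jsonify({"items": courses})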
My problem is that this method creates a lot of network traffic, with thousands of REST API calls, and that I have to re-implement the natural join that the database would do far more efficiently. Colleagues say that this approach is the standard way of implementing a REST API, but then a relatively simple query becomes a big hassle.
Have you actually measured the overhead generated by each request? If not, how do you know that the overhead will be too high? From an object-oriented programmer's perspective it may sound bad to perform each call on its own; your design, however, lacks one important asset that helped the Web grow to its current size: caching.
Caching can occur at multiple levels: you can do it at the API level, the client might do it, or an intermediary server might do it. Fielding even made it a constraint of REST! So, if you want to comply with the REST architectural philosophy, you should also support caching of responses. Caching helps reduce the number of requests that have to be calculated or even processed by a single server. With the help of stateless communication you can even introduce a multitude of servers that all perform calculations for billions of requests and act as one cohesive system to the client. An intermediary cache may further reduce, significantly, the number of requests that actually reach the server.
A URI as a whole (including any path, matrix or query parameters) is effectively the key for a cache. Upon receiving a GET request, an application checks whether its cache already contains a stored response for that URI; if the stored data is "fresh enough", it returns that response directly to the client on behalf of the server. If the stored data has exceeded its freshness threshold, the cache throws it away and routes the request to the next hop in line (which might be the actual server, or a further intermediary).
Spotting resources that are ideal for caching is not always easy, though most data doesn't change so quickly that caching can be neglected entirely. Introducing caching should therefore be of general interest, especially as your API produces more traffic.
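Stripped of protocol detail, the lookup an intermediary performs looks roughly like this toy in-memory version; a real cache derives the freshness lifetime from the Cache-Control and Expires headers rather than from a parameter.

import time

# Toy cache: the full request URI is the key, and each entry remembers
# when it stops being "fresh enough".
cache = {}

def cached_get(uri, fetch, max_age=60):
    entry = cache.get(uri)
    if entry and entry["expires"] > time.time():
        return entry["body"]      # answered on behalf of the server
    body = fetch(uri)             # forward to the next hop in line
    cache[uri] = {"body": body, "expires": time.time() + max_age}
    return body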
While certain media types such as HAL (hal+json), JSON:API, and so on allow you to embed content gathered from related resources into the response, embedding content has some potential drawbacks, such as:
Utilization of the cache might be low due to mixing data that changes quickly with data that is more static
The server might calculate data the client won't need
One server calculates the whole response
If related resources are only linked to instead of embedded directly, a client certainly has to fire off a further request to obtain that data, but that request is more likely to be (partly) served by a cache, which, as mentioned a couple of times throughout this post, reduces the workload on the server. A positive side effect is that you may gain more insight into what clients are actually interested in (if, for instance, you run the intermediary cache yourself).
Is it wrong if we nest the teacher information in the courses?
It is not wrong, but it might not be ideal, as explained above.
Should the listing of items, e.g. GET /courses, return a list of references or a list of items?
It depends. There is no right or wrong.
As REST is just a generalization of the interaction model used on the Web, basically the same concepts apply to REST as well. Depending on the size of the "item", it might be beneficial to return a short summary of the item's content plus a link to the item itself; similar things are done on the Web. For a list of students enrolled in a course, this might be the name and matriculation number, plus a link under which further details of that student can be requested, accompanied by a link-relation name that gives the link some semantic context, which a client can use to decide whether invoking that URI makes sense.
Such link-relation names are either standardized by IANA, taken from common vocabularies such as Dublin Core or schema.org, or custom extensions as defined in RFC 8288 (Web Linking). For the above-mentioned list of students enrolled in a course you could, for example, make use of the about relation name to hint to a client that further information on the current item can be found by following the link. If you want to enable pagination, the first, next, prev and last relations can, and probably should, be used as well.
This is actually what HATEOAS is all about: linking data together and giving the links meaningful relation names, so as to span a kind of semantic net between resources. When things are simply embedded into a response, such semantic graphs can be harder to build and maintain.
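As a rough illustration (the exact shape depends on the media type you choose; HAL and JSON:API each define their own layout), one page of a course's student list might carry its links like this:

{
  "items": [
    {
      "name": "Jane Doe",
      "matriculationNumber": "12345",
      "links": [
        {"rel": "about", "href": "http://server1.com/students/12345"}
      ]
    }
  ],
  "links": [
    {"rel": "next", "href": "http://server1.com/courses/course1/students?page=2"},
    {"rel": "last", "href": "http://server1.com/courses/course1/students?page=5"}
  ]
}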
In the end, whether you embed or reference resources basically boils down to an implementation choice. I hope I could shed some light on the usefulness of caching and the benefits it can yield, especially in large-scale systems, as well as on the benefit of providing link-relation names for URIs, which enhance the semantic context of the relations used within your API.

REST API naming convention: referencing unique resources with nested paths

Given that there is a one-to-many relationship between users and comments, and all IDs are guaranteed to be unique,
what are the considerations in naming the operation as:
DELETE /users/{user_uuid}/comments/{comment_uuid}
or
DELETE /comments/{comment_uuid}?
In the former, user_uuid is redundant, as it's not needed to delete a comment. Is it worth keeping user_uuid just to make the URLs look RESTful?
Both work fine as well-structured RESTful resources, as long as the comment_uuid is indeed a UUID. Neither hints at the underlying implementation or strikes me as screaming "this is an RPC service" :)
Whatever you choose, rule #1... Keep it consistent.
That being said, I prefer the first one as it reinforces semantic information that this is a user comment. I can see that and know pretty much what I'm getting back, without making a request.
Comment is a bad example for showing a counter-case, because most comments are from users, but think about this: conceivably, you could have some other type of entity that leaves comments. Imagine registering bots in your system as /bot/{bot_uuid}.
Now, if you go with just /comment, did you just delete a user comment or a bot comment?
Compare that, as you're scanning code, with /bot/{bot_uuid}/comment/{comment_uuid}. The more verbose form is a lot clearer, IMO.
Finally, if someone provides a GET request for a specific comment, /users/{user_uuid}/comments/{comment_uuid}, I know the URL for the user: just drop the comment part. Sure, most might guess /user/{user_uuid}, but like I said, user and comment are bad examples; as you get more non-typical resource names it becomes less obvious. The thing is, if you're always explicit, you don't have to worry when the resources look like these:
/widget/{widget_uuid}/contrawidget/{co_uuid}/parts/{part_uuid}
/spaceframe/{spaceframe_uuid}/parts/{part_uuid}
Now, would you just do parts?
/parts/{part_uuid}
Probably not, as it could be confusing to the consumer.
Is it worth keeping user_uuid just to make the URLs look RESTful?
No. The business value that you get from making the identifiers look RESTful is indistinguishable from zero.
You might do it for other reasons: URI design is primarily about making things easier for humans. As far as the machines are concerned, all of the URIs could just be UUIDs with no other hints.
That said, there is something important to consider....
/users/{user_uuid}/comments/{comment_uuid}
/comments/{comment_uuid}
These are different identifiers; therefore, as far as the clients are concerned, they are different resources. This means that, as far as clients are concerned, they are cached separately. Invalidating one does not invalidate the other.
(Of course, other clients, unaware that the DELETE happened, will continue using cached copies of both resources. Cache invalidation is one of the two hard problems).
I think your question is a design question and not a RESTful question, as #ray said, and as with all design questions the answer is... it depends.
I also prefer the first one, because the comment (as I understand a comment) could not exist without a user.
For this kind of question I use the Entity-Control-Boundary (ECB) pattern. It basically proposes a way to interact with your application in the context of certain key entities, rather than all the entities of the system, and I generally use this as my rule for identifying the paths that make the most sense.

Forcing web api consumers to accept new fields in responses

I'm creating v2 of an existing RESTful web API.
The responses are JSON lists of objects, roughly in the form:
[
  {
    "name1": "value1",
    "name2": "value2"
  },
  {
    "name1": "value3",
    "name2": "value4"
  }
]
One problem we've observed with v1 is that some clients access fields by integer position instead of by name. This means that if we decide to add fields to the response (which we had originally considered a compatibility-preserving change), then some of our clients' code breaks, unless we add the fields at the end. Even then, other clients' code breaks anyway, because they fail in some way when they encounter an unexpected attribute name.
To counter this in v2, we are considering randomly reordering the fields in every response. This will force clients to index fields by name instead of by position.
Additionally, we are considering adding a randomly-named field to every response. This will force clients to ignore fields they don't recognize.
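For concreteness, the post-processing we have in mind is small; something like the sketch below, where the x_ prefix and the helper name are made up and random is deliberately left unseeded.

import json
import random
import string

def harden(records):
    # Shuffle key order and inject a throwaway field, so clients can
    # neither rely on field position nor choke on unknown names.
    hardened = []
    for record in records:
        items = list(record.items())
        random.shuffle(items)                        # defeat access-by-position
        canary = "x_" + "".join(random.choices(string.ascii_lowercase, k=8))
        items.append((canary, "ignore-me"))          # defeat unknown-field crashes
        hardened.append(dict(items))
    return hardened

body = json.dumps(harden([{"name1": "value1", "name2": "value2"}]))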
While this sounds somewhat barmy, it does have the advantage that we will be able to add new fields, safe in the knowledge that this won't break any clients. This means we can issue compatible updates as v2.1, v2.3, etc. at the same URL, and that means we will only have to maintain and support a smaller number of API versions.
The alternative is to issue compatibility-breaking v3, v4, etc. at new URLs, which means we will have to maintain and support many incompatible API versions, stretching us that little bit thinner.
Is this a bad idea, and if so, why? Are there any other similar ideas I should think about?
Update: The first couple of responses point out that if I document the issue (i.e. indicate in the docs that fields may be added or reordered) then I am no longer to blame if client code breaks when I subsequently add or reorder fields. Sadly, I don't think this is an appropriate option for us: many dozens of organisations rely on the functionality of our APIs for real-world transactions with substantial financial impact. These organisations are not technically oriented, and the resulting implementations at the client end cover the whole spectrum of technical proficiency.
We already documented that fields may get added or reordered in the docs for v1, and that clearly didn't work, because now we're having to issue v2: many clients, due to lack of time or experience or ability, still wrote code that breaks when we add new fields. If I were now to add fields to the interface, it would break a dozen different companies' interfaces to us, which means they (and we) are bleeding money every minute. If I were to refuse to revert the change or fix it, saying "They should have read the docs!", then I would soon be out of a job, and rightly so. We may attempt to educate the 'failing' partners, but this is doomed to fail, as the problem gets larger every month while we continue to grow.
My question is: can I systemically head the whole issue off at the pass, preventing this situation from ever arising, no matter what clients try to do? If the techniques I suggest would work, why shouldn't I use them? Why isn't everyone else using them?
If you want your media types to be "evolvable", make that point very clear in the documentation. Similarly, if the placement order of fields is not guaranteed, make that explicitly clear too. If you supply sample code for your API, make sure it does not rely on field ordering.
However, even assuming that you have to maintain different versions of your media types, you don't have to version the URI. REST gives you the ability to maintain the same version-agnostic URI but use HTTP content negotiation (via the Accept and Content-Type headers) to offer different payloads at the same URI.
Therefore any client that doesn't explicitly wish to accept your new v2/v3/etc encoding won't get it. By default, you can return the old v1 encoding with the original field ordering and all of those brittle client apps will work fine. However, new client developers will know (thanks to your documentation) to indicate via Accept that they are willing and able to see the new fields and they don't care about their order. Best of all, you can (and should) use the same URI throughout. Remember - different payloads like this are just different representations of the same underlying resource, so the URI should be the same.
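A sketch of what that negotiation could look like on the server, assuming Flask and a made-up vendor media type; your old, position-sensitive clients never send the new Accept value, so they keep receiving the v1 shape.

from flask import Flask, request, jsonify

app = Flask(__name__)

V2_MEDIA_TYPE = "application/vnd.example.v2+json"   # assumed name

@app.route("/widgets")
def widgets():
    record = {"name1": "value1", "name2": "value2", "name3": "added in v2"}
    if V2_MEDIA_TYPE in request.headers.get("Accept", ""):
        resp = jsonify([record])                               # evolvable v2 payload
        resp.headers["Content-Type"] = V2_MEDIA_TYPE
    else:
        resp = jsonify([{k: record[k] for k in ("name1", "name2")}])  # legacy v1 payload
    resp.headers["Vary"] = "Accept"      # same URI, negotiated representation
    return resp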
I've decided to run with the described techniques, to the max. I haven't heard any objections to them that hold any water for me. Brian's answer, about re-using the same URI for different API versions, is a solid and much-appreciated complementary idea (with upvote), but I can't award it 'best answer' because it doesn't get to the core of my original question.