So let's say that I have two endpoints:
example.com/v1/send
example.com/v1/read
and now I have to change something in /send without losing backward compatibility. So I'm creating:
example.com/v2/send
But what should I do then? Do I need to create example.com/v2/read, which will do the same as /v1/read? And let's imagine that there are lots of controllers with hundreds of endpoints. Would I be creating a whole new version like that for every small change to an endpoint? Or should my frontend use the API like this?
example.com/v1/send
example.com/v2/read
What is the best practice?
Over time, new endpoints may be added, some endpoints may be removed, the model may change, etc. That's what versioning is for: tracking the changes.
It's likely that you will support both version 1 and version 2 for a certain period, but you will hardly support both versions forever. At some point you may drop version 1 and want to keep only version 2 fully up and running.
So, consider the new version of the API as an API that can be used independently from the previous versions. In other words, a particular client should target one version of the API instead of multiple. And, of course, it's desirable to have backwards compatibility if possible.
On a related note: instead of adding the version to the URL, have you considered using a media type to handle the versioning?
For instance, have a look at the GitHub API. All requests are handled by a default version of the API, but the target version can be defined (and the clients are encouraged to define the target version) in the Accept header, using a media type:
Accept: application/vnd.github.v3+json
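For example, a client can pin the API version purely through the Accept header. Here is a minimal sketch using Java's built-in HttpClient (Java 11+); the /users/octocat endpoint is just a convenient public example:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GitHubV3Example {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.github.com/users/octocat"))
                // The target version is negotiated via the media type, not the URL
                .header("Accept", "application/vnd.github.v3+json")
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```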
Related
Why not just make your backend api route start with /api?
Why do we want to have the /v1 bit? Why not just api/? Can you give a concrete example? What are the benefits of either?
One of the major challenges surrounding exposing services is handling updates to the API contract. Clients may not want to update their applications when the API changes, so a versioning strategy becomes crucial. A versioning strategy allows clients to continue using the existing REST API and migrate their applications to the newer API when they are ready.
There are four common ways to version a REST API.
Versioning through URI Path
http://www.example.com/api/1/products
One way to version a REST API is to include the version number in the URI path.
xMatters uses this strategy, and so do DevOps teams at Facebook, Twitter, Airbnb, and many more.
The internal version of the API uses the 1.2.3 format, so it looks as follows:
MAJOR.MINOR.PATCH
Major version: This is the version used in the URI; it denotes breaking changes to the API. Internally, a new major version implies creating a new API, and the version number is used to route to the correct host.
Minor and patch versions: These are transparent to the client and used internally for backward-compatible updates. They are usually communicated in change logs to inform clients about new functionality or a bug fix.
This solution often uses URI routing to point to a specific version of the API. Because cache keys (in this situation URIs) are changed by version, clients can easily cache resources. When a new version of the REST API is released, it is perceived as a new entry in the cache.
Pros: Clients can cache resources easily
Cons: This solution has a pretty big footprint in the code base as introducing breaking changes implies branching the entire API
Ref: https://www.xmatters.com/blog/blog-four-rest-api-versioning-strategies/#:~:text=Clients%20may%20not%20want%20to,API%20when%20they%20are%20ready.
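To make the URI-path approach concrete, here is a minimal sketch; the framework choice (Spring MVC), the controller, and the DTO names are assumptions for illustration only:

```java
import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical sketch of URI-path versioning: each major version gets its own path.
@RestController
public class ProductController {

    public record ProductV1(String name) {}
    public record ProductV2(String name, String sku) {} // breaking change: new required field

    // Version 1 keeps serving the original contract
    @GetMapping("/api/1/products")
    public List<ProductV1> listV1() {
        return List.of(new ProductV1("Widget"));
    }

    // Version 2 lives at its own path, so existing clients are untouched
    @GetMapping("/api/2/products")
    public List<ProductV2> listV2() {
        return List.of(new ProductV2("Widget", "W-001"));
    }
}
```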
I want to identify what might be considered a best practice for URI versioning of APIs, regarding the logic of the back-end implementation.
Let's say we have a Java application with the following API:
http://.../api/v1/user
Request:
{
"first name": "John",
"last name": "Doe"
}
After a while, we need to add 2 more mandatory fields to the user API:
http://.../api/v2/user
Request:
{
"first name": "John",
"last name": "Doe",
"age": 20,
"address": "Some address"
}
We are using separate DTOs for each version, one having 2 fields, and another having 4 fields.
We have only one entity in the application, but my question is how we should handle the logic, as a best practice. Is it OK to handle this in only one service?
If those two new fields, "age" and "address", were not mandatory, this would not be considered a breaking change; but since they are, I am thinking that there are a few options:
use only one manager/service in the business layer for all user API versions (but the complexity of the code in that single manager will grow considerably over time and it will become hard to maintain)
use only one manager for all user API versions, plus a translator class that makes older API versions compatible with the new ones (a rough sketch of this option follows after this list)
use a new manager/service in the business layer for each user API version
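To make the second option concrete, here is a rough sketch of a single service plus a translator; all class names and the placeholder defaults are hypothetical:

```java
// Hypothetical sketch: one business-layer service, with a small translator that
// upgrades V1 payloads to the V2 shape before they reach the service.
public class UserRequestTranslator {

    public record UserV1Request(String firstName, String lastName) {}
    public record UserV2Request(String firstName, String lastName, Integer age, String address) {}

    // V1 payloads lack the new mandatory fields, so the translator has to decide
    // what to do about them: supply agreed defaults, mark them unknown, or reject.
    public UserV2Request toV2(UserV1Request v1) {
        return new UserV2Request(v1.firstName(), v1.lastName(), null, null);
    }

    // The single service then only ever deals with the V2 shape, e.g.:
    //   userService.createUser(translator.toV2(v1Request));
}
```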
If I use only one manager for all user API versions and put some constraints/validations there, V2 will work, but V1 will throw an exception because those fields are not present.
I know that versioning is a big topic, but I could not find a specific answer on the web until now.
My intuition says that having a single manager for all user API versions will result in a method that has nothing to do with clean code. I am also thinking that any change added with a new version should be as loosely coupled as possible, because it will be easier to deprecate older methods and remove them over time.
You are correct in your belief that versioning with APIs is a contentious issue.
You are also making a breaking change and so incrementing the version of your API is the correct decision (w.r.t. semver).
Ideally your backend code will be under version control (e.g. Git/GitHub). In this case you can safely consider V1 to be a specific commit in your repository. This is the code that has been deployed and is serving traffic for V1. You can then continue making changes to your code as you see fit. At some point you will have added some new breaking changes and decide to mark a specific commit as V2. You can then deploy V2 alongside V1. When you decide to deprecate V1 you can simply stop serving traffic to it.
You'll need some method of ensuring that only V1 traffic goes to the V1 backend and V2 traffic to the V2 backend. Generally this is done with a reverse proxy; popular choices include NGINX and Apache. Any sufficient reverse proxy will let you route requests based on the path, so that requests prefixed with /api/v1 are forwarded to Backend1 and requests prefixed with /api/v2 to Backend2.
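The answer names NGINX and Apache; purely as an illustration of the same path-based routing idea in Java, here is a sketch using Spring Cloud Gateway (the gateway choice, the route ids, and the backend hostnames are all assumptions):

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical sketch: forward /api/v1/** and /api/v2/** to two separately
// deployed backends, mirroring what an NGINX/Apache reverse proxy would do.
@Configuration
public class VersionRoutingConfig {

    @Bean
    public RouteLocator versionRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("v1", r -> r.path("/api/v1/**").uri("http://backend-v1:8080"))
                .route("v2", r -> r.path("/api/v2/**").uri("http://backend-v2:8080"))
                .build();
    }
}
```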
Hopefully this model will help keep your code clean: the master branch in your repository only needs to deal with the most recent API. If you need to make changes to older API versions this can be done with relative ease: branch off the V1 commit, make your changes, and then define the HEAD of that modified branch as the 'new' V1.
A couple of assumptions about your backend have been made for this answer that you should be aware of. Firstly, your backend can be scaled horizontally; for example, this means that if you interact with a database then the multiple versions of your API can all safely access that database concurrently. Secondly, that you have the resources to deploy replica backends.
Hopefully that explanation makes sense; if not, send any questions my way!
If you can entertain code changes to your existing API, then you can refer to this link. The links mentioned at the bottom of that post also point to the respective GitHub source code, which can be helpful if you decide to introduce the code changes after some trial and error.
The mentioned approach (using @JsonView) basically prevents you from introducing multiple DTOs of a single entity for the same or multiple clients. It also lets you refrain from introducing a new API version every time you add new fields to your existing API.
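A hedged sketch of the @JsonView idea, assuming Spring MVC plus Jackson; the X-API-Version header and every name below are invented for illustration and are not taken from the linked post:

```java
import com.fasterxml.jackson.annotation.JsonView;
import org.springframework.http.converter.json.MappingJacksonValue;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

// One User class, two serialization views, no per-version DTOs.
public class Views {
    public interface V1 {}
    public interface V2 extends V1 {} // V2 exposes everything V1 does, plus the new fields
}

class User {
    @JsonView(Views.V1.class) public String firstName = "John";
    @JsonView(Views.V1.class) public String lastName = "Doe";
    @JsonView(Views.V2.class) public Integer age = 20;
    @JsonView(Views.V2.class) public String address = "Some address";
}

@RestController
class UserEndpoint {

    // A single endpoint; the view (and thus the visible fields) is picked at runtime.
    @GetMapping("/api/user/{id}")
    public MappingJacksonValue get(@PathVariable long id,
                                   @RequestHeader(value = "X-API-Version", defaultValue = "1") int version) {
        MappingJacksonValue body = new MappingJacksonValue(new User());
        body.setSerializationView(version >= 2 ? Views.V2.class : Views.V1.class);
        return body;
    }
}
```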
We have a lot of different microservices, and among other things we test the REST APIs of these microservices. These automated REST API tests are not included in the microservice projects/repos. Instead we have a test-automation project containing all the different tests, like API tests and end-to-end tests, because we want to test everything as a black box.
Now the problem is that infinitely many combinations of different microservice versions can be tested against on a test environment (for example, executing the tests today against Microservice A version 1.0 and Microservice B version 2.0 is different from executing the same tests tomorrow against Microservice A version 1.1 and Microservice B version 2.1). So we will need some kind of versioning or tagging of our test-automation project or of the executed tests, so that we are able to identify which combinations of microservice versions are valid and which are not valid/working, because e.g. some tests will fail.
Are there any recommendations or experiences to implement and integrate such a versioning/tagging mechanism?
To me the actual problem is already grounded in your design. You state that you maintain some microservices based on a REST architecture, yet in such an environment you don't need to version any endpoint to start with. Fielding himself answered the question of how an API in a REST environment should be versioned by simply responding: Don't.
But why is that? One of the few constraints REST has is HATEOAS (or Hate-Us, as I tend to pronounce it), which stands for Hypermedia As The Engine Of Application State. This acronym basically describes the interaction model used on the Web: the client has no presumptions about the content it will receive and only renders to the user what it received, including any URIs returned by the server. A browser will trigger a state change upon calling an endpoint targeted by a URI invoked by the user. This might be a link, an image or a form button (or what not). The core idea here is that the client is served by the API or server with all the information it needs to take further actions, and it just presents the results to the user.
While browsing the Web page of your preferred manufacturer or vendor you will most likely receive an HTML page containing images, links and further content. The browser itself isn't aware of the product offered on that site, though it is still able to render the result to the user, as it knows how to render HTML. If you visit another page your browser will still be able to render HTML regardless of the content that page offers. And if one of these pages changes in some way, your browser will still be able to render the result to you, unless the server responds with a media type that your browser isn't yet aware of, which is very unlikely on the Web.
What most self-claimed "REST" APIs return, however, is some arbitrary content specific to a certain API, even though most of them use application/json as the representation format. A tailor-made client, with knowledge of the API built in, usually interacts with such an API, but is very unlikely to be able to interact with any other API out there. If something at the API level changes, the likelihood of breaking that client without additional updates is therefore high. This is very common in RPC-like systems such as SOAP, RMI and CORBA.
Such clients often assume that certain endpoints, such as /api/users/12345, return data about a particular user, most likely in a JSON representation. The payload is later unmarshalled into an object of the underlying programming language, probably ignoring any unknown fields and nulling out expected fields that aren't available in the response. The fundamental problem here is that clients assume that certain endpoints return certain types. "Smart" developers will now introduce versioning to the endpoints, so that the above-mentioned URI changes to /api/v1/users/12345 for a JSON representation containing the old fields, while /api/v2/users/12345 returns the new fields. Both versions, however, still describe the same user. Having two different URIs for the same user is already bad design per se, but the versioning of an endpoint usually does not come alone: the whole API is versioned, so that when you encounter a breaking change you are forced to introduce a whole new API version, either copying the other, unmodified resources or reusing the same models internally, just exposed under multiple URIs again.
Instead of assuming that endpoints return a certain type with a predefined representation format, clients and server should negotiate about the content. HTTP in particular supports content-type negotiation, where a client informs the server about its capabilities and the server should respond in a representation format understood by the client. This could be something like application/vnd.acmee-users+json or application/vcard+xml or the like. A client understanding application/vnd.acmee-users.v2+json, for example, might be served the new representation by the server, while older clients will still inform the server that they only understand application/vnd.acmee-users+json and be served that representation. How the server handles the change internally is of no interest to the client. It is just interested in a representation format it can handle.
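A hedged Spring MVC sketch of this kind of negotiation, reusing the vendor media types from the paragraph above; the class, record and field names are invented:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// One URI per user; the representation is chosen from the Accept header.
@RestController
@RequestMapping("/api/users")
public class UserRepresentations {

    public record UserV1(String firstName, String lastName) {}
    public record UserV2(String firstName, String lastName, Integer age, String address) {}

    // Older clients send: Accept: application/vnd.acmee-users+json
    @GetMapping(value = "/{id}", produces = "application/vnd.acmee-users+json")
    public UserV1 asV1(@PathVariable long id) {
        return new UserV1("John", "Doe");
    }

    // Newer clients send: Accept: application/vnd.acmee-users.v2+json
    @GetMapping(value = "/{id}", produces = "application/vnd.acmee-users.v2+json")
    public UserV2 asV2(@PathVariable long id) {
        return new UserV2("John", "Doe", 20, "Some address");
    }
}
```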
Versioning media types is, however, also not the preferred way of handling changes for some architects out there, as you fundamentally still describe the same thing, just with slightly different syntax or slightly different semantics. HTML, for example, still ships as text/html, but not as text/html5 or the like. It is explicitly designed to stay backwards compatible. HTML5 output will still be rendered by a browser that only supports HTML 4.01 or even HTML 2. Maybe not all elements will be rendered the same way as in an HTML5-compatible browser, but the client won't stop working unexpectedly.
Mark Nottingham, co-chair of the IETF HTTP working group, stated that the underlying principle of versioning is to not break existing clients. According to him:
This implies that API versioning absolutely cannot be tied to software versioning in any way; doing so will needlessly limit (and often break) your clients, and generally upset people. (Source)
Nottingham even states that the product token used in the User-Agent or Server header should be preferred over any URI or media-type versioning to produce responses specific to certain software versions. With the sheer number of client software out there I'm not the biggest fan of that approach, however, as it would require the server to have detailed knowledge of the capabilities of the HTTP clients and the versions in use. For APIs that only have a limited number of clients, probably most of them under the same control as the API/server, this could be a viable approach, though.
As you might see for yourself, in a REST architecture there is no real need to version the endpoints themselves, as clients will only handle what they are served by the API/server. Whether a product-token approach is preferable over a media-type-based one may be rather a matter of opinion. The latter, however, should be based on standardized media types registered with IANA. In the best case the media type itself is designed in a backward-compatible way, like HTML, which avoids introducing new media types for the same things over and over again.
As Phil Sturgeon mentioned in one of his blog posts
If people are going to design their APIs as RPC with a RESTish facade, they should just commit to being an RPC API and build endpoint for specific clients like they’re literally already doing.
Just be honest about it. Hide the false intention, RPC the lot, document as such, and maybe just use gRPC.
I am designing a REST API, and lately I have put some thought into how to make the most of caching for dynamic content (after the response that I got on this topic), while respecting the principles of HTTP (and thus REST).
Obviously the canonical solution (at least in my understanding) is to use ETags, but this will not decrease the number of requests in any way, just their size.
I was thinking of embedding a version in the URL (it will be server-produced, based on the actual content - be it a serial number or some hash). I will explain the scheme and the user scenario and how I think it will help, and then ask my questions.
Setup
GET /entity/{id}/
returns a temporary redirect to /entity/{id}/{current_version} and no-cache headers.
GET /entity/{id}/{latest_version}
returns an OK response with cache-forever headers.
GET /entity/{id}/{old_version}
returns 410 Gone (I don't want to actually keep old versions).
GET /entity/?[query]
is some search that returns a list of links to current versions of result entities. No cache.
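To make the setup concrete, a rough Spring MVC sketch of the scheme; the VersionStore type and its methods are hypothetical stand-ins for however current versions and content are actually looked up:

```java
import java.net.URI;
import java.time.Duration;
import org.springframework.http.CacheControl;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class EntityController {

    // Hypothetical lookup of the current version and its content
    public interface VersionStore {
        String currentVersion(String id);
        String content(String id, String version);
    }

    private final VersionStore store;

    public EntityController(VersionStore store) {
        this.store = store;
    }

    // GET /entity/{id}/ -> temporary redirect to the current version, never cached
    @GetMapping("/entity/{id}/")
    public ResponseEntity<Void> redirectToCurrent(@PathVariable String id) {
        URI current = URI.create("/entity/" + id + "/" + store.currentVersion(id));
        return ResponseEntity.status(HttpStatus.FOUND)
                .location(current)
                .cacheControl(CacheControl.noCache())
                .build();
    }

    // GET /entity/{id}/{version} -> cache "forever" if current, 410 Gone otherwise
    @GetMapping("/entity/{id}/{version}")
    public ResponseEntity<String> byVersion(@PathVariable String id,
                                            @PathVariable String version) {
        if (!version.equals(store.currentVersion(id))) {
            return ResponseEntity.status(HttpStatus.GONE).build();
        }
        return ResponseEntity.ok()
                .cacheControl(CacheControl.maxAge(Duration.ofDays(365)).cachePublic())
                .body(store.content(id, version));
    }
}
```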
Use scenario and how I think it would help
The user application (AJAX) will always start with some kind of query, then it has to pull the descriptions of the entities. Since changes to a single client's result set are not expected to be very dynamic, it seems like a good idea to use the above scheme and have the client pull fresh results from the query every time; if most of the entities did not change since the last visit, they will already be cached in the browser. If this hypothesis is true, this will lead to a significant decrease in the number of requests, as well as in total size.
Using ETags would result in a much simpler URI scheme, but probably a more complicated and heavier server-side implementation.
Notes and questions
1
I know somebody will propose that /entity/{id}/ should be a collection that returns a list of versions, but versions are not actually stored, useful or desired. It is more of a synonym for the latest one. My question here is whether somebody sees any problem with that, besides general principles. This is a protected API, I do not care about SEO in this case, and it is transparent to the client. Actually, as the API will be more or less hyperlinked, it is not expected that /entity/{id}/ will normally be called directly; clients will use whatever the results return. It can be used, for example, for context-free links.
2
I have some doubts about 410 Gone for old versions. On one hand this version is not available anymore and clients should not be accessing it anyway. On the other hand, if a client asks for it after all (for whatever reason), it may make sense to return a permanent redirect to /entity/{id}/ (probably better than a temporary redirect to the current version).
3
Speaking of redirects. 301 is cemented for permanent redirect, but is 302 the best choice for temporary? Most important is browser support (it will be AJAX).
4
Of course, the main issue is the usage of URLs instead of ETags for caching (relying on the browser caches). If somebody has real experience under high load (relative to server capabilities, cough), I would appreciate them sharing it.
Additional notes
After some more research, there is an issue with versioned resources: the propagation of updates for linked resources. There are two options:
Link to a specific version of the resource. This means that the server-side logic will be heavy and cumbersome, as updates have to be propagated to linked resources through reverse links;
Link to the /latest/ version. This means that even if concrete versions of both the resource and the linked resource are cached locally, clients (browsers) will have to make a request to /latest/ in order to 'check' the latest version of a linked resource. Of course it is a small request (only a redirect), and if the resource didn't change, the Location is already cached. One problem may be that resources are often pulled through such links (as opposed to a query result pointing to a particular version). Another (much worse) problem is that an old version of the resource then links to the newest version of another - this can be a data inconsistency (i.e. somebody edited a document and also changed a linked attachment - the client will get the old version of the document and the new one for the attachment).
Both options are unsatisfactory. In this light, caching of dynamic data is possible only for 'leaf'-level resources - ones that do not link to any others, but just have direct attribute values.
Final notes
After research and discussion, versioned resources are not the brightest idea as a general architecture. After measurement, and given the opportunity, something could be retrofitted into a canonical API for 'plain' resources. I would accept Roysvork's comment ('It is my opinion that the reason this is difficult is that it is not really a very good idea.') as the solution, if it were a separate answer :)
Possible Duplicate:
How to version REST URIs
I'm currently writing a REST service and I would like to get some ideas on how my clients can specify the service version when making a request.
For example when I upgrade my REST server to version 2 I don't want calls to break for all clients that have implemented version 1 of my service.
I've seen others add the version in the URL or specify the version in a header somewhere.
There may not be a "best" way to implement this, but I would appreciate some thoughts on the subject (pros and cons of each, etc.).
Thanks
We do it via separate routes:
http://my.service.com/v1/user/1
http://my.service.com/v2/user/1
In the case of no change between versions, we just map the route to the controller that services the v1 version of the resource.
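A hedged sketch of that "map the unchanged route to the same controller" idea, assuming Spring MVC; all names are invented:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// The user resource did not change between v1 and v2, so both paths share one handler.
@RestController
public class UserRouteController {

    public record User(long id, String name) {}

    @GetMapping({"/v1/user/{id}", "/v2/user/{id}"})
    public User get(@PathVariable long id) {
        return new User(id, "example"); // same representation served for both versions
    }
}
```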
Yes, this does create multiple URLs for the same resource, and the wonky hardcore REST evangelists will start crying, but this way makes it easy to manage for you and your users. I find users can't even use request headers to set things like content types, never mind an X-Path or something like that to handle versioning...
If you really want to avoid the duplicate-resource issue, you could pass in a GET parameter like version:
http://my.service.com/user/1?version=1
If no version is given, default to whatever you like. This is actually fully REST-dogmatic, but I think it puts a lot onto your API users.
You could do some kind of user lookup table to route between versions if you have a way to map a user or API key to a version, but this is pretty crazy overhead.
I would recommend versioning via the Accept/Content-Type header. The URIs shouldn't change across versions unless there is a large structural change to the resources themselves. Here is a great explanation: Best practices for API versioning?