REST vs SOAP evolvability

I get the benefits of changing link URIs, but that's really not what this question is about.
What I mean by evolvability is adding new features to a service or modifying (where possible) existing ones, and that's it.
SOAP isn't as bad as the REST community tends to make it out to be when it comes to evolvability. For example:
1. In REST we can add a new rel; in SOAP we can add a new method. Both
types of old clients will continue working with the new services.
2. In REST we can add a new form field and set its default value; in
SOAP we could define the service arguments as some ServiceArgs class and
add a new field to ServiceArgs. That's ugly, but it works.
What are the evolvability examples when SOAP clients break and you can do nothing about it, while REST clients are handling the situation gracefully?
Thanks!

SOAP is a contract-based technology. The entire client/server interaction is written up and codified in a big document (the WSDL) and must be agreed upon and honored by both sides in order for things to work. If either side decides to add features, the other side must "evolve" in lock-step with it. Both sides are completely coupled, joined at the hip, glued together, married, for ever.
The typical approach to enhancing your SOAP services is to create new WSDL documents for the new versions of the service while also maintaining the older ones. Another technique is to create a new interface containing the new methods and inherit from the old one. The approach you describe in #1 is, IMO, breaking the SOAP rules: the client and server will now be using different contracts, and it only works because additive changes (like new methods) can be shoe-horned in and most of the time things will keep working. The moment someone makes a destructive change, the client's contract no longer matches the server's and it's game over. It's a difficult process to manage, which is why most organizations opt to create an entirely new WSDL for each new version of the API.
REST doesn't magically make all of these problems go away, but it makes things easier to manage by not forcing you to bundle your entire distributed system's "contract" into one artifact. You're using HTTP? Great, then you get to use all of the wonderful HTTP features that the web uses too: proxy servers, URLs, content negotiation, authentication, etc. You want to communicate using JSON encoding as well as XML? Knock yourself out. It's trivial to do in REST at any time, without affecting existing clients. You want security? Fine, start challenging for authenticated credentials using HTTP's in-built support for exactly that. All of these things (HTTP, JSON, etc) are standardized and described in different places and that's exactly how it should be.
SOAP combines the transmission protocol, location information, payload description, encoding choice and RPC methods into one ginormous document. If you want to make any change to anything in that list, you need a new document. Worse still, some of those things can't be changed at all.
REST separates those things out so that the pieces can evolve independently. Your URLs (or "URIs", to be more precise) are returned at runtime and, assuming the client doesn't start hardcoding them, are evolvable without any changes to the client. Additive changes to your media types are trivial if your documentation makes it clear that new fields may appear in the future. You've also got the option of versioning your media types, allowing the co-existence of v1/v2/v3... media types within your system, and the client can pick (using the Accept and Content-Type headers in HTTP) which one it wants to use.
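To make that concrete, here is a sketch of what such negotiation could look like on the wire; the vendor media type application/vnd.example.user.v2+json and the URI are purely illustrative:

GET /users/1 HTTP/1.1
Host: api.example.com
Accept: application/vnd.example.user.v2+json

HTTP/1.1 200 OK
Content-Type: application/vnd.example.user.v2+json

{
  "id": 1,
  "name": "Alice",
  "email": "alice@example.com"
}

An older client keeps sending Accept: application/vnd.example.user.v1+json and keeps receiving the v1 representation from the exact same URI; neither the URL nor the old client has to change.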
Ever heard the joke about the Porsche owner who buys a brand new car whenever the ashtray gets full? That's SOAP. What should be a trivial change requires a major overhaul. REST gives you the vacuum cleaner. You don't have to use it, but it sure is cheaper.

Related

REST API Design - adapter service - how to mark endpoints for different directions?

Let's imagine a web service X that has a single purpose - help to integrate two existing services (A and B) having different domain models. Some sort of adapter pattern.
There are cases when A wants to call B, and cases when B wants to call A.
How should endpoints of X be named to make clear for which direction each endpoint is meant?
For example, let's assume that the service A manages "apples". And the service B wants to get updates on "apples".
The adapter service X would have two endpoints:
PUT /apples - when A wants to push updated "apples" to B
GET /apples - when B wants to read "apples" from A
(without waiting for a push from A)
Such an endpoint structure is quite misleading. The endpoints are quite different and use different domain models: the PUT endpoint expects A's model, while the GET endpoint returns B's model.
I would appreciate any advice on designing the API in such a case.
I don't like my own variant:
PUT /gateway-for-A/apples
GET /gateway-for-B/apples
In my view, it is fine, but can be improved:
PUT /gateway-for-A/apples
GET /gateway-for-B/apples
Because
forward slashes are conventionally used to show the hierarchy between individual resources and collections: /gateway-for-A/apples
What can be improved:
it is better to use lowercase
remove unnecessary words
So I would stick with the following URIs:
PUT /a/apples
GET /b/apples
Read more here about RESTful API naming conventions.
First things first: REST has no endpoints, but resources
Next, in terms of HTTP you should use the same URI for updating the state of a resource and for retrieving it. Caching, which operates on the effective URI of a resource, will automatically invalidate any stored responses for a URI once a non-safe operation is performed on it and will forward the request to the origin server. If you split these concerns onto different URIs, you bypass that cache management, which is otherwise performed for you under the hood.
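As an illustration of that invalidation behaviour (URIs hypothetical): a cache that has stored a response for /apples must drop it as soon as an unsafe request targets the same URI.

GET /apples HTTP/1.1
Host: x.example.com

HTTP/1.1 200 OK
Cache-Control: max-age=3600
Content-Type: application/json

[ { "id": 7, "state": "ripe" } ]

A subsequent PUT /apples (or POST, DELETE, PATCH) to that URI invalidates the stored response, so the next GET /apples is forwarded to the origin server again. Had the update been sent to a separate URI such as /gateway-for-A/apples, the cached /apples response would have silently stayed stale.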
Note further that HTTP/0.9, HTTP/1.0 and HTTP/1.1 themselves don't have a "push" option. HTTP is a request-response protocol, and as such a client that is interested in updates done to a resource should poll the respective resource whenever it needs them. If you really need push, you basically have to switch to WebSockets or the like. While HTTP/2 introduced server-push functionality, this effectively just populates your local cache, sparing the client from actually requesting the resource and letting it use the previously received and cached one instead.
Such an endpoint structure is quite misleading. The endpoints are quite different and use different domain models: the PUT endpoint expects A's model, while the GET endpoint returns B's model.
A resource shouldn't map your domain model 1:1. Usually, in a REST architecture there can be far more resources than there are entities in your domain model. Just think of form-like resources that explain to a client how to request the creation or update of a resource, and the like.
On the Web, and therefore also in REST architectures, the representation format exchanged should be based on well-defined media types. These media types should define the syntax and semantics of the elements that can be found within an exchanged document of that kind. In particular, the elements provide affordances that tell a client what they can be used for: a button wants to be pressed, while a slider can be dragged left or right to change some numeric value. You never have to consult external documentation once support for that media type is added to your client and/or server. A rule of thumb in regards to REST is to design the system as if you were interacting with a traditional Web page, and then translate the concepts you used for interacting with that Web page onto the REST application domain.
Client and server should furthermore use content-type negotiation to agree on the representation format the server should generate for responses so that clients can process them. REST is all about indirections that ultimately allow a server to change its internals without negatively affecting well-behaved clients. Staying interoperable while changing is an inherent design goal of REST. If that is not important to you, REST is probably overkill for your needs and something more (Web-)RPC-based may serve you better.
In regards to your actual question: IMO a message queue could be a better fit for your problem than trying to force your design into a REST architecture.
I was hoping that there is a well-known pattern for an adapter service (when two different services are being integrated without knowing each other's formats).
I'd compare that case with communication attempts among humans from different countries. Imagine a Chinese speaker of Mandarin trying to communicate with a French person. Either the Chinese speaker needs to be able to talk French, the French person needs to be able to talk Mandarin, they both use an intermediary language such as English, or they make use of a translator. In terms of trust and cost, the last option might be the most expensive of all. And as learning languages is a time-consuming, ongoing process, this usually won't happen quickly unless special support is added, by hiring people with those language skills or using external translators.
The beauty of REST is that servers and clients aren't limited to one particular representation format (a.k.a. language). In contrast to traditional RPC services, which limit themselves to one syntax, in REST servers and clients can support a multitude of media types. I.e. your browser knows how to process HTML pages, how to render JPG, PNG, GIF, ... images, how to embed Microsoft Word, Excel, ... documents and so forth. This support was added over the years and allows a browser to render a plethora of documents.
So, one option is to create a translation service that is able to translate one representation into another and then acts as a middleman in the process; the other is to add support for the not-yet-understood media types directly to the service/client. The more media types your clients/servers are able to process, the more likely they are to interoperate with other peers in the network.
The former approach clearly requires that the middleman service supports at least the two representation formats issued by A and B, but on the other hand it allows you to use services not directly under your control. If at least one of the services is under your control, though, directly adding the not-yet-supported media type could be less work in the end, especially when library support for that media type is already available or can be obtained easily.
In a REST architecture, clients and servers aren't built with the purpose of knowing the other one by heart; needing that kind of knowledge is already a hint of strong coupling between the two. They shouldn't be aware of the other's "API" beyond using HTTP as the transfer protocol and URIs as the addressing scheme. Everything else is negotiated and discovered on the fly. If they don't share the same language capabilities, the server will respond with a 406 Not Acceptable response, informing the client that they don't speak the same languages and thus won't be able to communicate meaningfully.
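On the wire, such a failed negotiation could look like this; the media-type names are illustrative, and HTTP doesn't mandate any particular body format for a 406, though servers often list what they could have produced:

GET /apples HTTP/1.1
Host: x.example.com
Accept: application/vnd.service-b.apples+json

HTTP/1.1 406 Not Acceptable
Content-Type: application/json

{ "supported": ["application/vnd.service-a.apples+json"] }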
As mentioned before, REST is all about introducing indirections that aid the decoupling intent, allowing servers to evolve freely in the future without those changes breaking clients, as these will simply cope with the change. Eventual future change is therefore an inherent design concept. If even one participant in a REST architecture doesn't respect these design concepts, it is a potential candidate for reintroducing the problems traditional RPC services suffered from in the past: breaking clients on a required change, maintaining v2/3/4/.../n of various different APIs, and scaling issues due to the direct coupling of clients and servers.
Not sure why you need to distinguish it in the path and why the domain or subdomain is not enough for it:
A: PUT x.example.com/apples -> X: PUT b.example.com/apples
B: GET x.example.com/apples -> X: GET a.example.com/apples
As for the model: you want to do PUSH-PULL in a system which is designed for REQ-REP. Let's translate the above: pushApples(apples) and pullApples() -> apples. If this is all they do, then PUT and GET are just fine with the /apples URI if you ask me. Maybe /apples/new is somewhat more expressive if you need only the updates, but I would rather use an If-Modified-Since header instead, and maybe push with If-Unmodified-Since.
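A sketch of that conditional polling (timestamp and host are illustrative): B asks X for changes since its last poll and gets a cheap 304 when nothing happened.

GET /apples HTTP/1.1
Host: x.example.com
If-Modified-Since: Tue, 15 Nov 2022 08:12:31 GMT

HTTP/1.1 304 Not Modified

Only when the apples actually changed does the server answer 200 with a fresh representation. Symmetrically, A can send its PUT with If-Unmodified-Since (or an If-Match ETag), so it never overwrites a state it hasn't seen; the server rejects a stale update with 412 Precondition Failed.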
Though I think you should describe what the two services do, not what you do with the apples, which appear to be a collection of database entities rather than a web resource, which lives several layers above the database. Currently your URIs describe the communication, not the services. For example, why is X necessary at all; why don't the services call each other directly? If you can answer that question, then you will understand what X does with the apples, how to name its operations, and how to design the URIs and methods which describe them.

REST API with generic User

I have created a few REST APIs by now, and I always preferred a solution where I created an endpoint for each resource.
For example:
GET .../employees/{id}/account
GET .../supervisors/{id}/account
and the same with the other HTTP methods like PUT, POST and DELETE. This blows up my API pretty much. My REST APIs in general preferred redundancy to reduce complexity, but in cases like this it always feels a bit cumbersome. So I tried another approach where I work with inheritance to keep things DRY.
In this case there is a base class User, and via inheritance my employee and supervisor models extend from it. Now I only need one endpoint like
GET .../accounts/{id}
and the server decides which object is returned. While this thins out my API, it also increases complexity, and in my API documentation (where I use Spring REST Docs) I have to document two different objects for the same endpoint.
Now I am not sure what is the right way to do it (or at least the better way). When I think about REST, I think in resources, so my employees are a separate resource, as are my supervisors.
Because I have always followed this approach, I think I might be mentally stuck in it and may have lost objectivity.
It would be great if you could give me any objective advice on how this should be handled.
I built an online service that deals with this too. It's called Wirespec:
https://wirespec.dev
The backend automatically creates the URL for users and their endpoints dynamically with very little code. The frontend is written in Kotlin, while the backend that generates APIs for users is written in Node.js. In both cases the amount of code is negligible and self-maintaining, meaning that if the user changes the name of their API, the endpoint automatically updates with the name. Here are some examples:
API: https://wirespec.dev/Wirespec/projects/apis/Stackoverflow/apis/getUserDetails
Endpoint: https://api.wirespec.dev/wirespec/stackoverflow/getuserdetails?id=100
So to answer your question: it really doesn't matter where you place the username in the URL.
Try signing in to Wirespec with your GitHub account and you'll see where your GitHub username appears in the URL.
There is, unfortunately, no right or wrong answer to this one; it solely depends on how you want to design things.
With that being said, you need to distinguish between client and server. A client shouldn't know the nifty details of your API. It is just an arbitrary consumer of your API that is fed all the information it needs in order to make informed choices. I.e. if you want the client to send some data to the server that follows a certain structure, the best advice is to use form-like representations, such as HAL-FORMS, Ion or even HTML. Forms not only teach a client about the respective properties a resource supports but also about the HTTP operation to use, the target URI to send the request to, and the representation format to send the data in, which in the case of HTML is application/x-www-form-urlencoded most of the time.
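As a minimal sketch of that idea in plain HTML (the URI and field names are made up), everything the client needs for building the request travels in the representation itself:

<form action="/accounts/123" method="post" enctype="application/x-www-form-urlencoded">
  <input type="text" name="name" value="Jane Doe">
  <input type="email" name="email" value="jane@example.com">
  <button type="submit">Update account</button>
</form>

The client only has to understand HTML; the target URI, the method and the supported fields are all learned from the form rather than from out-of-band documentation.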
In regards to receiving data from the server, a client shouldn't attempt to extract knowledge from URIs directly, as they may change over time and thus break clients that rely on such a methodology. Instead it should rely on link relation names. Several link relation names may be attached to a single URI; a client that doesn't know the meaning of one should simply ignore it. Here, either one of the standardized link relation names should be used, or an extension mechanism as defined by Web linking. While an arbitrary client might not make sense of such an "arbitrary string" out of the box, the link relation name may be considered the predicate of a triple, as often used in ontologies, that "connects" the current resource with the one the link was annotated for. From a set of URIs and link relation names you can therefore "learn" a semantic graph over all the resources and how they are connected to each other.
I.e. you might annotate a URI pointing to a form resource with prefetch to hint to a client that it may load the content of the referenced URI while it is idle, as the likelihood is high that it will want to load that resource next anyway. The same URI might also be annotated with edit-form to hint that the resource provides an edit form for sending data to the server. It might further carry a Web linking extension such as https://acme.org/ref/orderForm that allows clients supporting such a custom extension to react to the resource accordingly.
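Expressed in HAL, such an annotated link could look like the following; the URIs and the custom relation are hypothetical:

{
  "_links": {
    "self": { "href": "/orders/42" },
    "edit-form": { "href": "/orders/42/edit" },
    "prefetch": { "href": "/orders/42/edit" },
    "https://acme.org/ref/orderForm": { "href": "/orders/42/edit" }
  }
}

Three relation names point at the same URI here; a client invokes the relations it understands and safely ignores the rest.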
In your accounts example, it is totally fine to return different data for different resources under the same URI path. I.e. resource A, pointing to an employee account, might only contain the properties name, age, position and salary, while resource B, pointing to a supervisor, could also contain a list of subordinates or the like. To a generic HTTP client these are two totally different resources, even though both use a URI structure like /accounts/{id}. Resources in a REST architecture are untyped, meaning they don't have a type out of the box per se. Think of an arbitrary Web page you access through your browser: the browser is not aware of whether the page it renders contains details about a specific car or the most recent local news. HTML is designed to express a multitude of different data in the same way. Different media types, though, may provide more concrete hints about the data exchanged. I.e. text/vcard, application/vcard+xml or application/vcard+json all represent data describing an entity (a human person, a juristic entity, an animal, ...), while application/mathml+xml might be used to express mathematical formulas, and so on. The more general a media type is, the more widespread usage it may find; with more narrow media types, however, you can provide more specific support. With content-type negotiation you also have a tool at hand where a client can express its capabilities to servers, and if the server/API is smart enough it can respond with a representation the client is able to handle.
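To illustrate with the fields assumed above, two accounts under the same URI template can simply carry different payloads. GET /accounts/17 (an employee) might return

{ "name": "Jane Doe", "age": 34, "position": "engineer", "salary": 52000 }

while GET /accounts/23 (a supervisor) might return

{ "name": "John Roe", "age": 48, "position": "team lead", "subordinates": ["/accounts/17", "/accounts/19"] }

A generic client renders whatever fields it finds; it never needed to know up front that two "types" exist.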
This, in essence, is all REST is, and followed correctly it allows the decoupling of clients from specific servers. While this might sound confusing and burdensome to implement at first, these techniques are intended if you strive for a long-lasting environment that is still able to operate in decades to come. Evolution is inherently integrated into this philosophy and supported by the decoupled design. If you don't need all of that, REST might not be what you actually want to do. But if you still want something like REST, you should for sure design the interactions between client and server as if you were interacting with a typical Web server. After all, REST is just a generalization of the concepts that have been used on the Web quite successfully for the past two decades.

REST - PUT vs PATCH to minimize coupling between client & API when adding new properties

We are building a set of new REST APIs.
Let's say we have a resource /users with the following fields:
{
  "id": 1,
  "email": "test@user.com"
}
Clients implement this API and can then update this resource by sending a new resource representation to PUT /users/1.
Now let's say we add a new property name to the model like so:
{
  "id": 1,
  "email": "test@user.com",
  "name": "test user"
}
If the models the existing clients use to call our API are not updated, then calls to PUT /users/1 will remove the new name property, since PUT is supposed to replace the resource. I know the clients could work directly with the raw JSON to ensure they always round-trip any new properties added to the API, but that is a lot of extra work, and under normal circumstances clients are going to create their own model representations of the API resources on their side. This means that any time a new property is added, all clients need to update the code/models on their side to make sure they aren't accidentally removing properties. This creates unneeded coupling between systems.
As a way to solve this problem, we are considering not implementing PUT operations at all and switching updates to PATCH, where properties that aren't passed in are simply left unchanged. That seems technically correct, but might not be in the spirit of REST. I am also slightly concerned about client support for the PATCH verb.
How are others solving this problem? What is the best practice here?
You are in a situation where you need some form of API versioning. The most appropriate way is probably using a new media-type every time you make a change.
This way you can support older versions and a PUT would be perfectly legal.
If you don't want this and just stick with PATCH: PATCH is supported everywhere except in ancient browsers. Not something to worry about.
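For completeness, a sketch of such a partial update using JSON Merge Patch (RFC 7396) against the /users/1 resource from the question:

PATCH /users/1 HTTP/1.1
Host: api.example.com
Content-Type: application/merge-patch+json

{ "email": "new@user.com" }

Only email changes; id and name stay untouched. Since PATCH semantics depend entirely on the patch document's media type, advertising it via the Accept-Patch response header defined in RFC 5789 (e.g. Accept-Patch: application/merge-patch+json) tells clients exactly which patch formats the server understands; application/json-patch+json (RFC 6902) would be the more expressive alternative.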
Switching from PUT to PATCH will not fix your problem, IMO. The root cause is that clients already expect the data returned for a representation to follow a certain type. According to Fielding:
A REST API should never have “typed” resources that are significant to the client.
(Source)
Instead of using typed resources, clients should use content-type negotiation to exchange data. Here, media-type formats that are generic enough to gain widespread adoption are for sure beneficial; certain domains may, however, require a more specific representation format.
Think of a car-vendor Web page where you can retrieve the data of your preferred car. You, as a human, can easily identify that the data depicts a typical car. However, the media type you most likely received the data in (HTML) does not state through its syntax or the semantics of its elements that the data describes a car, unless some semantic annotation attributes or elements are present; yet you are still able to update the data or use it elsewhere.
This is possible because HTML ships with a rich specification of its elements and attributes, such as Web forms that not only describe the supported or expected input parameters but also the URI where to send the data, the representation format to use upon sending (implicitly application/x-www-form-urlencoded, though it may be overridden via the enctype attribute) and the HTTP method to use, which in HTML is fixed to either GET or POST. Through this, a server is able to teach a client how a request needs to be built. As a consequence, the client does not need to know anything besides the HTTP, URI and HTML specifications.
As Web pages are usually filled with all kinds of unrelated stuff, such as ads, styling information or scripts, and the XML(-like) syntax is not everyone's favourite, as it may slightly increase the size of the actual payload, most so-called "REST" APIs want to exchange JSON-based documents instead. While plain JSON is not an ideal representation format, as it does not ship with link support at all, it is very popular. Certain additions such as JSON Hyper-Schema (application/schema+json hyper-schema) or the JSON Hypertext Application Language (HAL) (application/hal+json) add support for links and link relations. These can be used to render data received from the server as-is. However, if you want a response to automatically drive your application state (i.e. to dynamically draw the GUI with the processed data), a more specific representation format is needed that your client can parse and act upon accordingly, as it understands what the server wants it to do with it (= affordance). If you want to instruct a client on how to build a request, other media types such as HAL-FORMS or Ion need to be supported. Certain media types furthermore allow you to use a concept called profiles, which lets you annotate a resource with a semantic type. HAL JSON, i.e., does support something like that, where the Content-Type header may contain a value such as application/hal+json;profile=http://schema.org/Car, hinting to the media-type processor that the payload follows the definition of the given profile so it may apply further validity checks.
As the representation format should be generic enough to gain widespread usage, and URIs themselves shouldn't hint to a client what kind of data to expect, another mechanism is needed: link relation names. These are basically annotations for URIs that tell a client about the purpose of a certain link. A pageable collection might return links annotated with first, prev, next and last, which are pretty obvious in what they do. Other links might be hinted with prefetch, telling a client that a resource can be loaded right after the current resource has finished loading, as it is very likely to be retrieved next. Such link relation names, however, should be either standardized (defined in a proposal or RFC and registered with IANA) or follow the extension scheme proposed by Web linking (i.e. as used by Dublin Core). A client that just uses the URI accompanying an invoked link relation name will still work if the server changes its URI scheme, instead of breaking because it attempted to parse parameters out of the URI itself.
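As a sketch, using the standard Link response header from Web linking (RFC 8288) on a hypothetical pageable collection:

GET /users?page=3 HTTP/1.1
Host: api.example.com

HTTP/1.1 200 OK
Content-Type: application/hal+json
Link: </users?page=1>; rel="first", </users?page=2>; rel="prev", </users?page=4>; rel="next", </users?page=9>; rel="last"

A client that wants the next page simply invokes whatever URI stands behind rel="next"; if the server later restructures its query parameters, nothing on the client side breaks.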
In regards to de/coupling in a distributed system: a certain amount of coupling has to exist, otherwise the parties won't be able to communicate at all. The point, though, is that the coupling should be based on well-defined and standardized formats that plenty of clients may support, instead of on specific representation formats only a very limited number of clients support (in the worst case only your own client). Instead of coupling directly to the API and an undefined JSON-based syntax (maybe with external documentation of the semantics of the respective fields), the coupling should occur on the media types the parties can use to exchange the data. Here, the question to ask is not which media type to support but how many you want to support. The more media types your client or server supports, the more likely it is to interact with other peers in the distributed system. In the grand scheme of things, you want a server to be able to serve a plethora of clients, while a single client should (in the best case) be able to interact with every server without the need for constant adaptation.
So, if you really want to decouple clients from servers, you should take a closer look at how the Web actually works and try to mimic its interaction model in your application layer. As "Uncle Bob" Robert C. Martin put it:
An architecture is about intent! (Source)
and the intention behind the REST architecture is the decoupling of clients from servers/services. As such, supporting multiple media types (or defining your own that is generic enough to reach widespread adoption), looking up URIs via their accompanying link relation names, relying on content-type negotiation, and relying only on the provided data may help you achieve the degree of decoupling you are looking for.
All nice and well in theory, but so far every REST API I have encountered in my career had predefined contracts that changed over time.
The problem here is that almost all of those so-called "REST APIs" are RPC services at heart, which shouldn't be termed "REST" to start with; this is, though, a community issue. Usually such APIs ship with external documentation (e.g. Swagger) that just reintroduces the same problems classical RPC solutions, such as CORBA, RMI or SOAP, suffered from. The documentation may be seen as the IDL in that process, without the strict need for skeleton classes, though most "frameworks" use some kind of typed data classes that will either ignore a recently introduced field (in the best case) or totally blow up on invocation.
One of the problems REST suffers from is that most people haven't read Fielding's thesis and therefore don't see the big picture REST tries to establish, yet claim to know what REST is. They mix things up and call their services RESTful, which leads to a situation where REST != REST. The ones pointing out what a REST architecture is and how one might achieve it are called dreamers and unworldly, while the ones proclaiming the wrong term (RPC over HTTP = REST) continue to do so, adding to the confusion of especially those just learning the whole matter.
I admit that developing a true REST architecture is really, really hard, as it is just too easy to introduce some form of coupling. Hence a very careful design is needed, which takes time and costs money; money plenty of companies can't or don't want to spend, especially in a domain where new technologies evolve on a regular basis and the people responsible for developing such solutions often leave the company before the whole process is finished.
Just saying it shouldn’t be ‘typed’ is not really a viable solution
Well, how often did you need to change your browser because it couldn't interact with a Web page? I'm not talking about browser-specific CSS or JS quirks. How often did the Web itself need to change in the last 2-3 decades? Similar to the Web, the REST architecture is intended for long-lasting applications for years to come, supporting natural evolution by design. For simple frontend-to-backend systems it is for sure overkill. It starts to shine especially in cases where there are multiple peers, not under your control, that you can interact with.

How to version/tag a test framework with REST API tests against multiple microservices?

We have a lot of different microservices, and we test, among other things, the REST APIs of these microservices. These automated REST API tests are not included in the microservice projects/repos. Instead we have a test-automation project containing all the different tests, like API tests and end-to-end tests, because we want to test everything as a black box.
Now the problem is that infinitely many combinations of microservice versions are possible on a test environment (example: executing the tests today against microservice A in version 1.0 and microservice B in version 2.0 is different from executing the same tests tomorrow against microservice A in version 1.1 and microservice B in version 2.1). So we need some kind of versioning or tagging of our test-automation project or of the executed tests, so that we are able to identify which combinations of microservice versions are valid and which are not valid/working, because e.g. some tests will fail.
Are there any recommendations or experiences for implementing and integrating such a versioning/tagging mechanism?
To me the actual problem lies in your design itself. You state that you maintain some microservices based on a REST architecture, yet in such an environment you don't need to version any endpoint to start with. Fielding himself answered the question of how an API in a REST environment should be versioned by simply responding: Don't.
But why is that? One of the few constraints REST imposes is HATEOAS (or Hate-Us, as I tend to pronounce it), which stands for Hypermedia As The Engine Of Application State. This acronym basically describes the interaction model used on the Web: have no presumption about the content you will receive, and only render to the user what you did receive, including any URIs returned by the server. A browser triggers a state change upon invoking a URI targeted by the user; this might be a link, an image or a form button (or what not). The core idea is that the client is served by the API or server with all the information it needs to take further actions and just presents the results to the user.
While browsing the Web page of your preferred manufacturer or vendor, you might notice that you'll most likely receive an HTML page containing images, links and further content. The browser itself isn't aware of the product offered on that site, yet it is still able to render the result to the user, as it knows how to render HTML. If you visit another page, your browser will still be able to render HTML regardless of the content that page offers. And if one of these pages changes in some way, your browser will still be able to render the result to you, unless the server responds with a media type your browser isn't yet aware of, which is very unlikely on the Web.
What most self-proclaimed "REST" APIs return, however, is arbitrary content specific to a certain API, even if most of them use application/json as the representation format. A tailor-made client with knowledge of the API built in usually interacts with such an API, but it is very unlikely to be able to interact with any other API out there. If something changes at the API level, the likelihood of breaking that client without an accompanying update is therefore high. This is very common in RPC-like systems such as SOAP, RMI and CORBA.
Such clients often assume that certain endpoints, such as /api/users/12345, return data about a particular user, most likely in a JSON representation. The payload is then marshalled to an object of the underlying programming language, probably ignoring unknown fields and nulling out specified fields that aren't available in the response. The fundamental problem here is that clients assume certain endpoints to have certain types. "Smart" developers will now introduce versioning, so that the above-mentioned URI changes to /api/v1/users/12345 for a JSON representation containing the old fields, while /api/v2/users/12345 returns the new fields. Both versions, however, still describe the same user. Having two different URIs for the same user is already bad design per se, but versioning an endpoint usually does not come alone: usually the whole API is versioned, so that upon a breaking change you are forced to introduce a whole new API version, either copying the otherwise unmodified resources or reusing the same models internally, just exposed under multiple URIs yet again.
Instead of assuming that endpoints return a certain type in a predefined representation format, clients and servers should negotiate the content. HTTP in particular supports content-type negotiation, where a client informs the server about its capabilities and the server responds in a representation format the client understands. This could be something like application/vnd.acmee-users+json or application/vcard+xml or the like. A client understanding application/vnd.acmee-users.v2+json, i.e., might be served the new representation by the server, while older clients still inform the server that they only understand application/vnd.acmee-users+json and get served such a representation. How the server handles the change internally is of no interest to the client; it is only interested in a representation format it can handle.
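On the wire, the two clients hitting the very same URI could look like this (the URI is hypothetical, the media types as above):

GET /api/users/12345 HTTP/1.1
Accept: application/vnd.acmee-users.v2+json

HTTP/1.1 200 OK
Content-Type: application/vnd.acmee-users.v2+json

while the legacy client sends

GET /api/users/12345 HTTP/1.1
Accept: application/vnd.acmee-users+json

HTTP/1.1 200 OK
Content-Type: application/vnd.acmee-users+json

One user, one URI, two representations; no /v1/ or /v2/ path segments needed.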
Though versioning media types is also not the preferred way of versioning for some architects out there, as you fundamentally still describe the same thing, just with slightly different syntax or slightly different semantics. HTML, i.e., still ships as text/html, not as some text/html5 or the like. It is explicitly designed to stay backwards compatible: a server generating HTML 5 output will still be rendered by a browser that only supports HTML 4.01 or 2. Maybe not all elements will be rendered the same way as in an HTML 5 compatible browser, but the client won't stop working unexpectedly.
Mark Nottingham, co-chair of the IETF HTTP working group, stated that the underlying principle of versioning is to not break existing clients. Therefore, according to him:
This implies that API versioning absolutely cannot be tied to software versioning in any way; doing so will needlessly limit (and often break) your clients, and generally upset people. (Source)
Nottingham even states that the product token used in User-Agent or Server headers should be preferred over any URI or media-type versioning for producing responses specific to certain software versions. With the sheer number of client software out there, I'm not the biggest fan of that approach, as it would require the server to know the capabilities of the HTTP clients and the versions in use. For APIs with only a limited number of clients, though, probably most of them under the same control as the API/server, this could be a viable approach.
As you might see, in a REST architecture there is no real need to version endpoints themselves, as clients will only handle what they are served by the API/server. Whether a product-token approach is preferable over a media-type-based one may be rather opinionated. The latter, however, should be based on standardized media types registered with IANA. In the best case the media type itself is designed in a backward-compatible way, like HTML, which avoids introducing new media types for the same things over and over again.
As Phil Sturgeon mentioned in one of his blog posts
If people are going to design their APIs as RPC with a RESTish facade, they should just commit to being an RPC API and build endpoint for specific clients like they’re literally already doing.
Just be honest about it. Hide the false intention, RPC the lot, document as such, and maybe just use gRPC.

What are the advantages of using REST APIs directly over API wrappers?

When should I choose one over the other?
The way I see it, API wrappers are much simpler to use, but I feel like there's something I'm not seeing, so can you enlighten me?
REST is a software design approach to decouple clients from APIs, though it is often misunderstood as a simple URI design thingy because it is usually layered on HTTP.
The advantage of using APIs and clients that support REST is clearly that clients are not coupled to any API in particular and are therefore tolerant of changes done on the server side, like moving resources to different endpoints. Just as a browser is able to present the content of a nearly infinite number of Web pages, truly RESTful clients should behave identically and be able to communicate with any API that supports REST. Such a client may learn on the fly how to deal with a new content type by looking up some processing routine dynamically (similar to plugins in certain applications) or fall back to a default handling (reporting errors or presenting unknown data as plain text).
An API wrapper, by contrast, is often used to create clients that are limited to one particular API. This simplifies development, as the client can contain the specific logic needed for interaction with the API, like encoding/decoding the messages sent to or received from the service, listing all available operations, and similar stuff. Often the URI endpoints are either injected through properties or hardcoded into the application; the content type is often limited to XML or JSON, and the rules on how to treat responses are hardcoded into the client directly. All these steps, however, tightly couple the client to the API. If the API changes (or is enriched with further endpoints), the wrapper has to be updated and shipped to each consumer, otherwise users either won't be able to use the API or won't get the latest features.
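A sketch of the difference, with made-up URIs and relation names: a wrapper bakes the request in, while a REST client discovers it. The wrapper always issues something like

GET /api/v1/users/42

and breaks the day that path moves. A link-following client instead starts at the entry point and navigates by relation name:

GET / HTTP/1.1
Accept: application/hal+json

HTTP/1.1 200 OK
Content-Type: application/hal+json

{ "_links": { "users": { "href": "/people" } } }

If the server renames /api/v1/users to /people, every consumer of the wrapper needs an update shipped to it, while the link-following client never notices.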
API wrappers are often tailor-made for the usage of one API and are also often much simpler to implement. They, however, require constant updating whenever the API itself changes, as the wrapper is incapable of handling those changes itself. REST clients, on the other hand, are far more complicated to develop, as the client somehow has to know (or learn) the semantic meaning of a certain response and has to infer how to act upon it. Some parts of this are still an active field of research (at least for automated processing).
As to when to use which: in cases where you have to create an all-purpose client, a REST client is for sure the right thing to do; identifying the correct semantic treatment will be the true challenge for such clients, IMO. In cases where you only need to provide a client frontend for customers (or users) and not much change is expected in the API itself, a wrapper may be much easier to implement. However, please don't call such a client RESTful!