I am planning to write a RESTful API and I am unsure how to handle versioning.
I have read many discussions and blog articles which suggest using the Accept header for versioning.
But then I found the following website listing popular REST APIs and their versioning methods, and most of them use the URL for versioning.
Why?
Why do most people say "don't use the URL, use the Accept header", while popular APIs use the URL?
Both mechanisms are valid. You need to know your consumer to know which path to follow. In general, working with enterprises and academically-minded folks tends to point developers towards Resource Header versioning. However, if your clients are smaller businesses, then the URL versioning approach is more widely used.
The Pros and Cons (I'm sure there are more, and some of the Cons have work-arounds not mentioned here). A short code sketch contrasting the two approaches follows the list.
It's more explorable. For most requests you can just use a browser, whereas the Resource Header implementation requires a more programmatic approach to testing. However, because not all HTTP requests are explorable in a browser (POST requests, for example), you should use a REST client like Postman or Paw anyway. URI Pro/Header Con
With a URI-versioned API, resource identification and the resource's representation are munged together. This violates the basic principles of REST: one resource should be identified by one and only one endpoint. In this regard, the Resource Header versioning choice is the more academically idealistic one. Header Pro/URI Con
A URI-versioned API is less error prone and more familiar to client developers. Versioning by URL allows a developer to figure out the version of a service at a glance. If the client developer forgets to include a resource version in the header, you have to decide whether they should be directed to the latest version (which can cause errors as the version increments) or receive a 301 (Moved Permanently) redirect. Either way there is more confusion for your more novice client developers. URI Pro/Header Con
URI versioning lends itself to hosting multiple versions in the same application. In this case you do not have to develop your framework further.
Note: If you do this, your directory structure will most likely contain a substantial amount of duplicate code in the v2 directory. Also, deploying updates requires a system restart, so this technique should be avoided if possible. URI Pro/Header Con
It is easier to add versioning to the HTTP headers for an existing project that didn't have versioning in mind from its inception. Header Pro/URI Con
According to the RMM Level 3 REST Principle: Hypermedia Controls, you should use the HTTP Accept and Content-Type headers to handle versioning of data as well as describing data. Header Pro/URI Con.
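To make the trade-off concrete, here is a minimal client-side sketch in Python using the requests library. The host api.example.com, the version values and the vendor media type are placeholders, not a real service:

    import requests

    # Option 1: version in the URL path. Easy to explore in a browser,
    # but the same product now lives at version-specific identifiers.
    r1 = requests.get("https://api.example.com/v2/products/123")

    # Option 2: version in the Accept header. One stable URI per resource;
    # the version merely selects the representation.
    r2 = requests.get(
        "https://api.example.com/products/123",
        headers={"Accept": "application/vnd.example.v2+json"},
    )

    for response in (r1, r2):
        if response.status_code == 200:
            print(response.json())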
Here are some helpful links if you want to do some further reading:
Martin Fowler's description of the Richardson Maturity Model
API Versioning - Pivotal Labs
HATEOAS
Informit.com's Article on Versioning REST Services
Related
I have a SOAP service that is running over HTTP. Is this also a REST service? What are the criteria that would make it a REST service? What are the criteria that would definitively exclude it as a REST service? There are posts (e.g. here) that compare REST and SOAP but do not seem to answer this question directly. My answer is: yes, a SOAP service at its functional level is an HTTP request that returns an XML payload where state is not maintained by the server, and it is therefore a REST service.
Fielding stated in his dissertation:
REST provides a set of architectural constraints that, when applied as a whole, emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems.
If you compare the above-mentioned properties with Web browsing, you will find plenty of similarities between the two, as Fielding simply took the concepts that made the Web such a success and applied them to a more general field, one that should also allow applications to "surf the Web".
In order to rightfully call an architecture REST, it has to support self-descriptiveness, scalability and cacheability, while also respecting and adhering to the rules and semantics outlined by the underlying transport protocol and enforcing the usage of well-defined standards, such as media types, link-relation names, HTTP operations, URI standards, ...
Self-descriptiveness of a service is achieved through HATEOAS (or "hate-us", as I tend to pronounce it, since people like me who see the benefit in REST always have to stress this key term, which has therefore also ended up in its own meme). Via HATEOAS the server serves the client all the available "actions" a client could take from the current "state" the client is in. An "action" here is just a link with an accompanying link-relation name a client can use to deduce when best to invoke that URI. The media type of the response may define what to do with such links. HTML, for example, states that clicking a link triggers a GET request and loads the content either in the current pane or in a new tab, depending on the link's attributes. Other media types may define something similar or something entirely different. The general motto, though, is: progress through exploration. The interaction model in a REST architecture is therefore best designed as affordances and a state machine, while the actual service should follow more of a Web-site approach, where the server teaches the client, e.g. what a request has to look like and where to send it (similar to HTML forms).
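As a rough illustration of "progress through exploration", here is a hedged Python sketch of a client that selects its next request purely by link-relation name rather than by hard-coded URIs. The response shape and the relation name "orders" are invented for the example:

    import requests

    def follow(uri: str, wanted_rel: str) -> dict:
        """Fetch a resource and follow the advertised link whose relation
        name matches, without a-priori knowledge of the target URI."""
        doc = requests.get(uri).json()
        for link in doc.get("links", []):
            if link["rel"] == wanted_rel:
                # The client decides WHETHER to follow via the relation name;
                # the server alone decides WHERE the link points.
                return requests.get(link["href"]).json()
        raise LookupError(f"no link with relation {wanted_rel!r}")

    # Hypothetical usage: the entry point advertises an "orders" transition.
    orders = follow("https://api.example.com/", "orders")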
As plenty of Web pages are more or less static and the majority of requests are retrieval-focused, the Web relies heavily on caching. The same is generally expected from REST APIs, hence the strong requirement for cacheability: proper caching can reduce the workload on servers quite notably.
Keeping client state away from servers also allows new copies of a service to be added on servers behind a load balancer or in new regions, and thus increases scalability. A client usually does not care where it retrieves its data from, so a server might just return a URI pointing to a clone instead of to itself.
SOAP, on the other hand, is RPC, like Java's remote method invocation (RMI) or CORBA, where you have your own interface definition language (IDL) to generate client-side stub code for you that contains the actual logic for transforming certain objects into byte streams and sending them over the wire, where certain methods are invoked.
Where SOAP violates REST constraints is clearly in its lack of caching support, as well as the out-of-band knowledge that needs to be available before actually using a client. SOAP messages are almost always exchanged as POST operations, which are not cacheable by default. Certain HTTP headers would allow intermediary servers to cache the response, but SOAP does not make use of them and thus lacks general support for caching.
A client developed for SOAP endpoint A will most likely not be interoperable with a further SOAP endpoint B run by a different company. While one might argue that a Web client also does not know how to process each of the different media types, browsers usually provide a plugin mechanism to load that kind of knowledge into the client. A media type is, in addition, standardized (at least it should be) and may therefore be usable with plenty of servers (think of Flash support, for example). A further problem of SOAP services is that once anything changes in the WSDL definition, clients unaware of the update will most likely stop working with the updated service until the client code is regenerated against the latest version of the stub classes.
Regarding the XML format exchanged in SOAP: while it is technically doable for a REST service to return a SOAP XML payload, the format itself lacks support for HATEOAS, which is a necessity and not an option. How should a client make further choices based simply on the content of the received response, without any a-priori knowledge of the API itself?
I hope you can see that SOAP lacks support for caching, may have problems with scalability and leads to a tight coupling of clients to the actual API. The lack of HATEOAS support in the SOAP message envelope/header/body also does not allow clients to explore the API freely and thus adapt to server changes automatically.
Proper REST services follow the architectural guidelines spelled out in chapter five of Roy Fielding's dissertation. Most people erroneously use the term "REST API" when they really mean "HTTP API". Statelessness is a necessary but not sufficient condition for an API to adhere to the REST architectural guidelines.
Can anyone explain the difference between Swagger and HATEOAS? I have searched many times, but nobody has given a proper, detailed answer covering these two aspects.
The main difference between Swagger and HATEOAS, IMO, which is not covered in the accepted answer, is that Swagger is only needed for RPC-esque APIs. Such APIs, however, have hardly anything to do with REST.
There is a further, widespread misconception that anything exchanged via HTTP is automatically RESTful (i.e. in accordance with the REST architectural style), which it is not. REST just defines a set of constraints that are not choices or options but mandatory, from start to finish. There is nothing wrong with not being RESTful, but it is wrong to call such an architecture REST.
Swagger describes the operations that can be performed on an endpoint and the payload (including headers and the expected representation formats) that needs to be sent to the service, and also describes what a client might expect as a response. This allows Swagger to be used both as documentation and as a testing framework for the API. Due to the tight coupling of Swagger to the API, it behaves much like a typical RPC service description, similar to WSDL files in SOAP or stub and skeleton classes in RMI or CORBA. If either the endpoint or something in the payload changes, clients implemented against the Swagger documentation will probably break over time, reintroducing the same problems typical RPC implementations have.
REST and HATEOAS, on the other hand, are designed for discovery and further evolution. REST isn't a protocol but an architectural style that describes the interaction flow between client and server in a distributed system. It basically took the concepts which made the Web so successful and translated them onto the application layer. The same concepts that apply to the browsable Web therefore also apply to REST, so it is no surprise that HATEOAS (the usage of and support for links, link relations and link names) behaves similarly to the Web.
When designing a REST architecture it is beneficial to think of a state machine where the server provides all of the information a client needs to take further actions. Asbjørn Ulsberg gave a great talk back in 2016 where he explains affordances and how a state machine might be implemented through HATEOAS. In the toaster example Asbjørn gave in his talk, a toaster may have the states off, on, heating and idle. Turning the toaster on leads to a state transition from off to on, followed by a transition to heating until a certain temperature is reached, at which point the state transitions to idle; it then switches between idle and heating until the toaster is turned off.
HATEOAS will provide a client with the information on the current state and include links a client can invoke to transition to the next state, e.g. turning the toaster off again. It's important to stress here that the server provides the client with every action the client might perform next. There is no need for a client implementor to consult any proprietary API documentation in order for the client to be able to interact with a REST service. Further, URIs do not have to be meaningful or designed to convey a semantically expressive structure, as clients will determine whether invoking a URI makes sense via its link-relation name. Such relation names are either specified by IANA, by a common approach such as Dublin Core or schema.org, or by absolute URIs acting as extension attributes which might point to a human-readable description, which might further be surfaced to the user via mouse-over tooltips or the like.
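A minimal sketch of how a server could model the toaster's states and the transitions it advertises to clients; the relation names and URIs are invented for this example:

    # For each state, the links (possible client-triggered transitions) the
    # server would embed in its response. Switching between "heating" and
    # "idle" happens server-side based on temperature, so clients only ever
    # see the controls they may invoke themselves.
    TOASTER_TRANSITIONS = {
        "off":     [{"rel": "turn-on",  "href": "/toaster/on"}],
        "on":      [{"rel": "turn-off", "href": "/toaster/off"}],
        "heating": [{"rel": "turn-off", "href": "/toaster/off"}],
        "idle":    [{"rel": "turn-off", "href": "/toaster/off"}],
    }

    def render_state(state: str) -> dict:
        """Build the representation returned for the current state."""
        return {"state": state, "links": TOASTER_TRANSITIONS[state]}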
I hope you can see by yourself that Swagger is only needed to describe RPC Web APIs rather than applications that follow the REST architectural design. Messages exchanged via REST APIs should include all the information needed by a client to make informed choices on the next state transition. As such, it is beneficial to design such message flows and interactions as a state machine.
Update:
How are Swagger and HATEOAS mutually exclusive? The former documents your endpoints (making code auto-generation possible) and the latter adds meta-information to your endpoints which tells the consumer what they can do (i.e. which other endpoints are available). These are very different things.
I never stated that they are mutually exclusive, just that they serve two different purposes, where if you follow one approach the other becomes more or less useless. Using both does not make much sense, though.
Let's move the discussion to the Web domain, as this is probably easier to understand, and REST is de facto just a generalization of the concepts used on the Web; taking this step is natural and also a good recommendation when designing REST architectures in general. Think of a case where you, as a user, want to send some data to the server. You have never used the service before, so you basically don't know what a request has to look like.
In Swagger you would consult the endpoint documentation, select the option that most likely solves your task, read up on what the request needs to look like, and hack a test case into your application that ends up generating an HTTP request sent to the respective location. Auto-generating code might spare you some hacking time, though you still need to integrate the stub classes into your application and test the whole thing at least once, just to be safe. If you later need to integrate a second service of that API, or of yet another API, you have to start from the beginning: look up the Swagger documentation, generate or hack the interaction code and integrate it into your domain. Plenty of manual steps are involved, and in the case of API changes you need to update your client, as it might otherwise stop working.
In the Web example, however, you just start your browser/Web client and invoke the respective URI that allows you to send data to the server. The server will most likely send you an HTML form you merely need to fill out and submit, which automatically sends the request to the server for processing. This is HATEOAS. You used the given controls to drive your workflow. The server taught your client every little detail it needed to make a valid request: the target URI to send the request to, the HTTP method to use and, most often implicitly, the media type the payload should be in. In addition, it gave your client a skeleton of the expected and/or supported elements of the payload; e.g. the form may require you to fill out a couple of input fields, select among a given set of choices, or use some other control such as a date or time picker whose value is translated into a valid date or time representation for you. All you needed to do was invoke the respective resource in your Web client. No auto-generation, no integration into your browser/application. Using other services (from the same or different providers) will most likely work the same way, so there is no need to change or update your HTTP client (browser) as long as the media types in which requests and responses are exchanged are supported.
In the case where you rely on Swagger's RPC-esque documentation, that documentation is the source of truth on how to interact with the service, and mixing in some HATEOAS information doesn't provide any benefit. In the Swagger case, carrying around additional meta-information that bloats the request/response for no obvious reason (as all the required information is already given in the reference documentation) will, with some certainty, lead people to question the sanity of the service's developers and to ask for payload reduction. Just look around here on SO for a while and you will find plenty of questions asking how to optimize the interaction further and reduce message size to a minimum, because they process every little request and don't make use of response caching at all. In the HATEOAS case, pointing to an external reference is just useless, as peers in such an architecture most likely already have support for the required necessities, such as URIs, HTTP and the respective media types, built in. In cases where custom media types are used, support can be added at runtime via plug-ins or add-ons (if supported).
So, Swagger and HATEOAS are not mutually exclusive, but one becomes more or less useless once you have decided on the other.
Swagger: Swagger aids in development across the entire API lifecycle, from design and documentation, to test and deployment. (Refer to swagger.io)
HATEOAS: Hypermedia as the Engine of Application State
An Ion Form is a Collection Object where the value member array contains Form Fields. Ion Forms ensure that resource transitions (links) that support data submissions can be discovered automatically (colloquially referred to as HATEOAS). (Refer to https://ionspec.org/)
One is a framework that supports designing and testing APIs; the other is an API design architecture.
Building a RESTful API is not a binary concept. That is why we use the Richardson Maturity Model to measure how RESTful an API is.
Based on this maturity model:
At level 0 we provide mechanisms for clients of the API to call methods on the server (simple RPC).
At level 1 we expose resources on the server, so the client of the API has direct access to the resources it requires (exposing resources).
At level 2 we provide a uniform way for the client of the API to interact with the exposed resources, and the HTTP protocol has these methods (using HTTP verbs to interact with resources).
The ultimate step is to make our API explorable by the client. HATEOAS provides such functionality (over HTTP), meaning that it adds relevant links and affordances (extra methods) that can be executed on the resource, so the client of the API can understand its behavior.
Based on these definitions, in a properly designed RESTful API there is no coupling between client and server, and the client can interact with the exposed endpoints and discover them.
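To illustrate the jump from level 2 to level 3, compare two hypothetical responses for the same order resource; the structure and relation names are invented for this sketch:

    # Level 2: a uniform interface, but the client must know all other
    # endpoints in advance from out-of-band documentation.
    level_2 = {"id": 42, "status": "pending"}

    # Level 3: the same data plus hypermedia controls; the client discovers
    # the available transitions from the response itself.
    level_3 = {
        "id": 42,
        "status": "pending",
        "links": [
            {"rel": "self",   "href": "/orders/42"},
            {"rel": "cancel", "href": "/orders/42/cancellation"},
            {"rel": "items",  "href": "/orders/42/items"},
        ],
    }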
On the other hand, Swagger is a tool that helps you document your API, along with some extra goodies (code generators).
I believe that Swagger (with the help of SwaggerHub) provides services for implementing a RESTful endpoint up to maturity level 2. But it does not go any further, and it does not provide proper support for HATEOAS.
You can define your resources and HTTP verbs in (JSON/YAML) files. Based on this definition, Swagger can generate API documentation and the extra goodies (client stubs and skeletal implementations of the server API).
For all those who have worked with Java RMI, SOAP, etc., the extra-goodies part is a reminder of old technologies with tight coupling between client and server, because the stubs and skeletal implementations are all built from the same API definition file.
I see that the Microsoft-managed REST APIs exposed in Azure offer two ways to do versioning:
a) x-ms-version in header
b) api-version in query string
I wanted to understand the decision behind the choice between the two. I read somewhere that x-ms-version is legacy and the way forward is the query string versioning mode. Is this correct?
Also, as per Scott Hanselman's blog, the query string parameter is not his preferred way; he would choose the URL path segment. So I am wondering why Microsoft adopted this option. I do agree that each person has their own preference, but it would be helpful to know the reason for Microsoft's selection.
The x-ms-version header is legacy and only maintained for backward compatibility. In fact, the notion of using the x- prefix has been deprecated since the introduction of RFC 6648 in 2012.
Using api-version in the query string is one of the official conventions outlined in §12 Versioning of the Microsoft REST API Guidelines. The guidelines also allow the URL segment method, but the query string method is the most prevalent.
Fielding himself has pretty strong opinions about API versioning - namely don't do it. The only universally accepted approach to implementing a versioned REST API is to use media type negotiation. (e.g. accept: application/json;v=1.0 or accept: application/vnd.acme.v1+json). GitHub versions their API this way.
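A hedged sketch of what media type negotiation looks like from the client side in Python with the requests library; the host is a placeholder, and the vendor media type merely mirrors the pattern mentioned above:

    import requests

    # Request version 1 of the representation via media type negotiation;
    # the URI itself stays version-free.
    response = requests.get(
        "https://api.example.com/orders/42",
        headers={"Accept": "application/vnd.acme.v1+json"},
    )

    # A well-behaved server echoes the negotiated type back.
    print(response.headers.get("Content-Type"))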
While media type negotiation abides by the REST constraints and isn't all that difficult to implement, it's the least used method. There are likely many explanations why, but the reality is that there are at least three other very common methods: by query string, by URL segment, or by header.
Pedantic musings aside, the most common forms are likely due to the ease with which a client can access the service. The header method is not much different from using media type negotiation; if a header is going to be used, then using the Accept and Content-Type headers with media type negotiation is the better option. This leaves just the query string and URL segment approaches.
While the URL segment approach is common, it has a few pitfalls that most service authors don't consider. Its proliferation has very much been a case of "Well, that's how [insert company here] does it". In my opinion, the URL segment method suffers from the following issues:
1. URLs are not stable. They change over time with each new API version.
2. As part of the uniform interface, the URL path is supposed to identify a resource. If your URL paths are api/v1/products/123 and api/v2/products/123, it implies that there are two different products, which is not true. The v1 versus v2 URL segment implies a different representation, yet in all API versions the resource is still the same logical product.
3. If your API supports HATEOAS, then link generation is a challenge when you have incongruent resource versioning. Mature service taxonomies and the REST constraints themselves are meant to help services and resources evolve over time. It's quite feasible that api/orders could be at 1.0 while a referenced line item at api/products/123 supports 1.0-3.0. If the version number is baked into the URL, which URL should the service generate? It could assume the same version, but that could be wrong. Any other option couples to the related resources and may not be what the client wants. A service cannot know all of the API versions of different resources a client understands. The server should therefore only generate links in the form of vanilla resource identifiers (e.g. api/products/123) and let the client specify which API version to use. Any other manipulation of links by the client greatly diminishes the value of supporting HATEOAS. Of course, if everything is the same version or on the same resource, this may be a non-issue.
Issues 2 and 3 are further complicated by a client that persists resource links. When an API version is sunset, any old links to the resource break, even though the resource still exists, just not in the expected, now obsolete representation.
This brings things full circle to the query string method. While it's true that the query string will vary by API version, it's a parameter, not part of the identifier (i.e. the path). The URL path stays consistent for clients across all versions, and it's easy for clients to append the query parameter for the version they understand. API versions are commonly numeric, date-based, or both, and sometimes they carry a status too. The URL api/products/123?api-version=2018-03-10-beta1 is arguably much cleaner than api/2018-03-10-beta1/products/123.
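From the client's perspective the query string method is essentially a one-liner; here is a sketch against a hypothetical endpoint:

    import requests

    # The path identifies the resource; the version rides along as a parameter.
    response = requests.get(
        "https://api.example.com/products/123",
        params={"api-version": "2018-03-10-beta1"},
    )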
In conclusion, while the query string method may not be the true RESTful method to version a resource, it tends to carry more of the expected traits while remaining easy to consume (which isn't a REST constraint). In conjunction with the Microsoft REST API Guidelines, this is why ASP.NET API Versioning defaults to the query string method out-of-the-box, even though all of these methods are supported.
Hopefully, this provides some useful insight into how the different styles can affect your service taxonomy, aside from pure preference.
I want to know the main difference between REST and API. Sometimes I see "REST API" in programming documents; is REST or API the same as REST API? I would like to know more about the relation between REST, API and REST API.
REST is a type of API. Not all APIs are REST, but all REST services are APIs.
API is a very broad term. Generally it's how one piece of code talks to another. In web development API often refers to the way in which we retrieve information from an online service. The API documentation will give you a list of URLs, query parameters and other information on how to make a request from the API, and inform you what sort of response will be given for each query.
REST is a set of rules/standards/guidelines for how to build a web API. Since there are many ways to do so, having an agreed upon system of structuring an API saves time in making decisions when building one, and saves time in understanding how to use one.
Other popular API paradigms include SOAP and GraphQL.
Note that the above attempts to answer the question in regards to how the terms are commonly used in web development. Roman Vottner has offered a different answer below which offers good insights into the original definition of the term REST with more technical precision than I have provided here.
REST mostly just refers to using the HTTP protocol the way it was intended. Use the GET HTTP method on a URL to retrieve information, possibly in different formats based on HTTP Accept headers. Use the POST HTTP method to create new items on the server, PUT to edit existing items, DELETE to delete them. Make the API idempotent, i.e. repeating the same query with the same information should yield the same result. Structure your URLs in a hierarchical manner etc.
REST is just a guiding principle for how to use URLs and the HTTP protocol to structure an API. It says nothing about return formats, which may just as well be JSON.
That is opposed to, for example, APIs that send binary or XML messages to a designated port, not using differences in HTTP methods or URLs at all.
There is no comparison between REST and API; REST is a type of API.
API, in general, is a set of protocols deployed over application software so that it can communicate with other software components (like a browser interacting with servers), providing an interface to the services the application software offers to its consumers.
And REST is a set of principles an API follows, in which the server provides whatever information the client desires when interacting with its services.
REST is basically a style of web architecture that governs the behavior of clients and servers, while API is a more general set of protocols deployed over software to help it interact with other software.
REST is geared only towards web applications and mostly deals with HTTP requests and responses, which makes it practically usable from any programming language and easy to test.
API is an acronym for Application Programming Interface and defines a set of structures (i.e. classes) one has to implement in order to interact with the service the API was exposed for. APIs usually expose operations that can be invoked, including any required or supported arguments as well as the expected responses. Classical examples here are CORBA IDL, SOAP or RMI in the Java ecosystem, but also RPC-like usages of Web systems specified in documentation such as Swagger or OpenAPI.
REST (REpresentational State Transfer), on the contrary, was specified by Fielding in his doctoral thesis, where he analyzed how user interaction occurs on the Web. He realized that on the Web only a transport protocol, a naming scheme for things and a well-defined exchange format are needed to exchange messages or documents. These three parts therefore define the interface for interacting with peers in such an ecosystem. The transport layer is covered by HTTP, while the naming scheme is defined by URI/IRI. Contrary to traditional RPC protocols, which usually support only one syntax, REST is actually independent of a particular syntax. To maintain interoperability, client and server need to negotiate it, which HTTP supports through the Accept request header and the Content-Type response header. As long as client and server support HTTP, URI/IRI and a set of negotiated representation formats, defined by hypermedia-capable media types, they will be able to interact with each other. In a narrower sense, REST therefore has no API other than HTTP, URI/IRI and the respective media types.
However, things are unfortunately not that easy. Most people understand something very different by the terms REST and REST API. While URIs should not convey any semantics themselves (after all, they are just pointers to a resource), plenty of programmers attribute more importance to URIs than they should. Some clients, for example, will attempt to extract knowledge from URIs or expect certain URIs to return responses of a certain type. It may seem natural to expect a URI such as https://api.acme.org/users/1 to return a representation that describes a particular user of that particular system. External documentation may specify that a JSON structure following a template such as
{
"id": 1,
"firstName": "Roman",
"lastName": "Vottner",
"role": "Admin",
...
}
can be expected. However, such a thing is closer to RPC than to REST. The response is neither self-descriptive, as required by REST, nor does it follow a representation format backed by a well-defined media type that defines the syntax and each of the elements that may form a message. Clients are therefore usually tailor-made for exactly one particular system (or "REST API", if you will) and can't be used to interact with different systems out of the box without further manual integration/updates. External documentation such as OpenAPI or Swagger is used to describe the available endpoints, the payload templates a server will be able to process and the expected responses, depending on the input. This documentation is therefore the source of truth and thus defines the API; a client can look it up or even use it to auto-generate stub classes for interacting with the server side, similar to SOAP.
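For contrast, a more self-descriptive variant of the same user representation would carry its controls inline. A sketch follows, written as a Python dict; the _links structure follows the HAL convention (application/hal+json), and the relation names are invented:

    user = {
        "id": 1,
        "firstName": "Roman",
        "lastName": "Vottner",
        "role": "Admin",
        # Hypermedia controls let a client proceed without consulting
        # external documentation for the next valid requests.
        "_links": {
            "self":  {"href": "/users/1"},
            "roles": {"href": "/users/1/roles"},
        },
    }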
I therefore don't agree with the answer given by dave. While his definition may be suitable for RPC systems or for the commonly understood term "REST API", for actual REST architectures his explanation isn't fitting at all and thus, IMO at least, not correct either. REST isn't a collection of rules, standards and/or guidelines. It is a small set of constraints that simply ensure that peers in such an architecture avoid coupling, support future evolution and become more robust to change.
API is basically a set of functions and procedures which allow one application to access the features of another application.
REST is a set of rules or guidelines to build a web API.
It is basically an architectural style for networked applications on the web which is limited to client-server based applications.
Read more at: https://www.freelancinggig.com/blog/2018/11/02/what-is-the-difference-between-api-and-rest-api/
I have been reading about REST and SOAP, and I understand why implementing REST can be beneficial over using a SOAP protocol. However, I still don't understand why there isn't a "WSDL" equivalent in the REST world. I have seen posts saying there is "no need" for the WSDL or that it would be redundant in the REST world, but I don't understand why. Isn't it always useful to programmatically bind to a definition and create proxy classes instead of manually coding? I don't mean to get into a philosophical debate; I am just looking for the reason there is no WSDL in REST, or why it is not needed. Thanks.
The Web Application Description Language (WADL) is basically the equivalent to WSDL for RESTful services but there's been an ongoing controversy whether something like this is needed at all.
Joe Gregorio has written a nice article about that topic which is worth a read.
WSDL describes service endpoints. REST clients should not be coupled to server endpoints (i.e. they should not be aware of URLs in advance). REST clients are coupled to the media types that are transferred between client and server.
It may make sense to auto-generate classes on the client to wrap around the returned media types. However, as soon as you start to create proxy classes around the service interactions, you start to obscure the HTTP interactions and risk degenerating back towards an RPC model.
RSDL aims to turn REST into hypermedia; in other words, it carries more information than a service descriptor like WSDL or WADL. For example, it holds information about navigation, like hypertext and hyperlinks.
For example, given a current resource, you have a set of links to other, related resources.
However, I didn't find REST clients that support this format, or REST server solutions with a feature to auto-generate it.
I think there is still a long way to go before a conclusion is reached about it. See the long history of HTML and W3C vs. the browsers, lol.
For more details about REST as hypermedia, see: http://en.wikipedia.org/wiki/HATEOAS
Note: Roy Fielding has criticized these tendencies in REST APIs without the hypermedia approach: http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
My conclusion: nowadays WADL is more common than RSDL, and integration frameworks like Camel CXF already support WADL (generate and consume), because it is similar to WSDL and therefore easier to understand in this migration process (SOAP to REST).
Let's see the next chapters ;)
Isn't it always useful to programmatically bind to a definition and create proxy classes instead of manually coding?
Agree wholeheartedly, this is why I use Swagger.io
Swagger is a powerful open source framework backed by a large ecosystem of tools that helps you design, build, document, and consume your RESTful APIs.
So basically I use Swagger to describe my models, endpoints, etc., and then I use other tools like swagger-codegen to generate the proxy classes instead of manually coding them.
See also: RAML
There is an RSDL (RESTful Service Description Language), which is equivalent to WSDL. The URLs below describe its practice: http://en.wikipedia.org/wiki/HATEOAS and http://en.wikipedia.org/wiki/RSDL.
The problem is that we have lots of tools to generate code from WSDL to Java, or the reverse.
But I didn't find any tool to generate code from RSDL.
WSDL is extensible, allowing the description of endpoints and their messages regardless of which message formats or network protocols are used to communicate.
REST, however, uses the network protocol itself, employing HTTP verbs and the URI to represent an object's state.
WSDLs tell you: at this place, if you send this message, you'll perform this action and get this format back as a result.
In REST, if I wanted to create a new profile I would use the verb POST with a JSON body or HTTP server variables describing my profile, sent to the URL /profile.
The POST should return a server-side generated ID, using the status code 201 (Created) and the header Location: *new_profile_id* (for example, 12345).
I can then perform updates changing the state of /profile/12345 using the HTTP verb POST, say to change my email address or phone number, obviously changing the state of the remote object.
GET would return the current status of /profile/12345.
PUT is usually used for client-side generated IDs.
DELETE is obvious.
HEAD gets the status without returning the body.
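That flow can be sketched in a few lines of Python with the requests library; the host and payload are placeholders, and the Location value is assumed to be a relative path for simplicity:

    import requests

    BASE = "https://api.example.com"

    # Create a profile; the server assigns the ID and reports it via Location.
    created = requests.post(BASE + "/profile", json={"email": "jo@example.org"})
    assert created.status_code == 201
    profile_url = created.headers["Location"]  # e.g. /profile/12345

    # Update the state of the new resource, then read it back.
    requests.post(BASE + profile_url, json={"email": "new@example.org"})
    current = requests.get(BASE + profile_url).json()

    # Remove the resource again.
    requests.delete(BASE + profile_url)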
With REST, a well-designed API should be self-documenting and thus easier to use.
This is a great article on REST. It really helped me understand it, too.
The WSDL 2.0 specification added support for REST web services too, a best-of-both-worlds scenario. The problem is that WSDL 2.0 is not yet widely supported by most tools out there. WSDL 2.0 is a W3C recommendation; WSDL 1.1 is not a W3C recommendation but is widely supported by tools and developers.
Ref:
http://www.ibm.com/developerworks/library/ws-restwsdl/
The Web Application Description Language (WADL) is an XML vocabulary used to describe RESTful web services.
As with WSDL, a generic client can load a WADL file and be immediately equipped to access the full functionality of the corresponding web service.
Since RESTful services have simpler interfaces, WADL is not nearly as necessary to these services as WSDL is to RPC-style SOAP services.