RESTful API runtime discoverability / HATEOAS client design

For a SaaS startup I'm involved in, I am building both a RESTful web API and a couple of client apps on different platforms that consume it. I think I've got the API figured out, but now I'm turning to the clients. As I've been reading about REST, I see that a key part of REST is discovery, but there seems to be a lot of debate between two different interpretations of what discovery really means:
Developer discovery: The developer hard-codes copious amounts of API details into the client, such as resource URIs, query parameters, supported HTTP methods, and other details discovered by browsing the docs and experimenting with the API's responses. This type of discovery IMHO necessitates cool URIs (links that must never change), raises the API versioning question, and leads to hard coupling of the client code to the API. Not much better, it seems, than using a well-documented collection of RPCs.
Runtime discovery: The client app itself is able to figure out everything it needs with little or no out-of-band information (presumably, only a knowledge of the media types the API deals with). Links can be hot. But to make the API very efficient, a lot of link templating for query parameters seems to be needed, which lets out-of-band info creep back in. There are possibly other difficulties I haven't thought of yet, since I haven't gotten to that point in development. But I do like the idea of loose coupling.
Runtime discovery seems to be the holy grail of REST, but I'm seeing precious little discussion about how to implement such a client. Almost all REST sources I've found seem to assume Developer discovery. Does anyone know of some Runtime discovery resources? Best practices? Examples or libraries with real code? I'm working in PHP (Zend Framework) for one client and Objective-C (iOS) for the other.
Is Runtime discovery a realistic goal, given the present set of tools and knowledge in the developer community? I can write my client to treat all of the URIs in an opaque manner, but how to do this most efficiently is a question, especially over low-bandwidth connections. Anyway, URIs are only part of the equation. What about link templating in the Runtime context? How about communicating which methods are supported, aside from making a lot of OPTIONS requests?

This is definitely a tough nut to crack. At Google, we've implemented our Discovery Service that all our new APIs are built against. The TL;DR version is we generate a JSON Schema-like spec that our clients can parse - many of them dynamically.
That means easier SDK upgrades for developers and easier, better maintenance for us.
By no means a perfect solution, but many of our devs seem to like it.
See the link for more details (and make sure to watch the video).

Fascinating. What you are describing is basically the HATEOAS principle. What is HATEOAS, you ask? Read this: http://en.wikipedia.org/wiki/HATEOAS
In layman's terms, HATEOAS means link following. This approach decouples your client from specific URLs and gives you the flexibility to change your API without breaking anyone.
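To make "link following" concrete, here is a minimal sketch in Python (using the requests library) of a client that hard-codes only one root URL and navigates purely by link-relation names. The root URL and the links/rel/href response convention are hypothetical, for illustration only:

```python
import requests

API_ROOT = "https://api.example.com/"  # hypothetical entry point


def follow(rel, document):
    """Find a link by its relation name in a document and fetch it.

    Assumes a made-up convention where every representation carries a
    'links' array of {'rel': ..., 'href': ...} objects.
    """
    for link in document.get("links", []):
        if link["rel"] == rel:
            return requests.get(link["href"]).json()
    raise LookupError(f"no link with rel={rel!r}")


# The only URL the client hard-codes is the root; everything else is
# discovered at runtime, so the server is free to relocate resources.
root = requests.get(API_ROOT).json()
orders = follow("orders", root)
first_order = follow("first", orders)
```

Because the client keys off relation names rather than URL structure, the server can reorganize its URI space without breaking anything, which is exactly the decoupling described above.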

You did your homework and you got to the heart of it: runtime discovery is the holy grail. Don't chase it.
UDDI tells a poignant story of runtime discovery: http://en.wikipedia.org/wiki/Universal_Description_Discovery_and_Integration

One of the requirements that should be satisfied before you can call an API 'RESTful' is that it should be possible to write a generic client application on top of that API. With the generic client, a user should be able to access all the API's functionality. A generic client is a client application that does not assume that any resource has a specific structure beyond the structure that is defined by the media type. For example, a web browser is a generic client that knows how to interpret HTML, including HTML forms etc.
Now, suppose we have a HTTP/JSON API for a web shop and we want to build a HTML/CSS/JavaScript client that gives our customers an excellent user experience. Would it be a realistic option to let that client be a generic client application? No. We want to provide a specific look-and-feel for every specific data element and every specific application state. We don't want to include all knowledge about these presentation-specifics in the API, on the contrary, the client should define the look and feel and the API should only carry the data. This implies that the client has hard-coded coupling of specific resource elements to specific layouts and user interactions.
Is this the end of HATEOAS and thus the end of REST? Yes and no.
Yes, because if we hard-code knowledge about the API into the client, we lose the benefit of HATEOAS: server-side changes may break the client.
No, for two reasons:
Being "RESTful" is a property of the API, not of the client. As long as it is possible, in theory, to build a generic client that offers all capabilities of the API, the API can be called RESTful. The fact that clients don't obey the rules, is not the API's fault. The fact that a generic client would have a lousy user experience is not an issue. Why is it important to know that it is possible to have a generic client, if we don't actually have that generic client? This brings me to the second reason:
A RESTful API offers clients the option to choose how generic they want to be, i.e. how resilient to server-side changes they want to be. Clients which need to provide a great user experience may still be resilient to URI changes, to changes in default values and more. Clients doing batch jobs without user interaction may be resilient to other kinds of changes.
If you are interested in practical examples, check out my JAREST paper. The last section is about HATEOAS. You will see that with JAREST, even highly interactive and visually attractive clients can be quite resilient to server-side changes, though not 100%.

I think the important point about HATEOAS is not that it is some client-side holy grail, but that it isolates the client from URI changes - the assumption is that you are using known (or developer-discovered custom) link relations that allow the client to identify, for example, which of an object's links leads to the editable form. The important point is to use a media type that is hypermedia-aware (e.g. HTML, XHTML, etc.).

You write:
To make the API very efficient, a lot of link templating for query parameters seems to be needed, which makes out-of-band info creep back in.
If that link template is supplied in the previous request, then there is no out-of-band information. For example, an HTML search form uses link templating (/search?q=%#) to generate a URL (/search?q=hateoas), yet the client (the web browser) knows nothing other than how to use HTML forms and GET.
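The same idea exists outside HTML in the form of RFC 6570 URI Templates, which let a server hand the client a link template in-band. Below is a small hedged sketch; the template string and host are made up, while the uritemplate package is a real Python implementation of the RFC:

```python
# pip install uritemplate  (an implementation of RFC 6570 URI Templates)
from uritemplate import URITemplate

# Suppose a previous response carried this template - the JSON analogue of
# an HTML search form. The client learned it at runtime, not from docs.
template = URITemplate("https://api.example.com/search{?q,page}")  # hypothetical

url = template.expand(q="hateoas", page="2")
print(url)  # https://api.example.com/search?q=hateoas&page=2
```

Nothing about the query-string layout is hard-coded in the client; if the server renames or reorders parameters, it simply serves a different template.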

Related

Difference between Swagger & HATEOAS

Can anyone explain the difference between Swagger and HATEOAS? I have searched many times, but nobody has given a properly detailed answer covering these two aspects.
The main difference between Swagger and HATEOAS, IMO, which is not covered in the accepted answer, is that Swagger is only needed for RPC'esque APIs. Such APIs, however, have hardly anything to do with REST.
There is a further, widespread misconception that anything exchanged via HTTP is automatically RESTful (i.e., in accordance with the REST architectural style), which it is not. REST just defines a set of constraints that are not choices or options but are mandatory, from start to finish. There is nothing wrong with not being RESTful, but it is wrong to call such an architecture REST.
Swagger describes the operations that can be performed on an endpoint and the payload (including headers and the expected representation formats) that needs to be sent to the service, and it also describes what a client might expect as a response. This allows Swagger to be used both as documentation and as a testing framework for the API. Due to Swagger's tight coupling to the API, it behaves much like a typical RPC service description, similar to WSDL files in SOAP or stub and skeleton classes in RMI or CORBA. If either the endpoint or something in the payload changes, clients implemented against a Swagger documentation will probably break over time, reintroducing the very problems typical RPC implementations have.
REST and HATEOAS, on the other hand, are designed for discovery and further evolution. REST isn't a protocol but an architectural style that describes the interaction flow between client and server in a distributed system. It basically took the concepts that made the Web so successful and translated them onto the application layer. So the same concepts that apply to the browsable Web also apply to REST, and it is therefore no surprise that HATEOAS (the usage of and support for links, link relations and link names) behaves similarly to the Web.
When designing a REST architecture, it is beneficial to think of a state machine where the server provides all of the information a client needs to take further actions. Asbjørn Ulsberg gave a great talk back in 2016 in which he explains affordances and how a state machine might be implemented through HATEOAS. Besides common or standardized media types and relation names, no out-of-band knowledge is necessary to interact with the service. In the toaster example Asbjørn gave in his talk, a toaster may have the states off, on, heating and idle: turning the toaster on leads to a state transition from off to on, followed by a transition to heating until a certain temperature is reached, at which point the state transitions to idle; the toaster then alternates between idle and heating until it is turned off.
HATEOAS will provide the client with information on the current state and include links the client can invoke to transition to the next state, i.e. turning the toaster off again. It's important to stress here that the server provides the client with every action the client might perform next. There is no need for a client implementor to consult any proprietary API documentation in order for the client to be able to interact with the REST service. Further, URIs do not have to be meaningful or designed to convey a semantically expressive structure, as clients will determine whether invoking a URI makes sense via the link-relation name. Such relation names are either specified by IANA, by a common approach such as Dublin Core or schema.org, or are absolute URIs acting as extension attributes, which might point to a human-readable description that could in turn be shown to the user via mouse-over tooltips or the like.
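To make the toaster example concrete, a response while the toaster is heating might look something like the sketch below. The JSON layout and member names are illustrative only; HATEOAS mandates the presence of such controls, not any particular format:

```json
{
  "state": "heating",
  "links": [
    {
      "rel": "https://example.org/rels/turn-off",
      "href": "https://example.org/toaster/power",
      "method": "PUT"
    }
  ]
}
```

While the toaster is off, the server would instead include only a turn-on link; the client decides what it can do next purely from the relation names that are present.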
I hope you can see for yourself that Swagger is only needed to describe RPC Web APIs rather than applications that follow the REST architectural design. Messages exchanged via REST APIs should include all the information a client needs to make informed choices about the next state transition. As such, it is beneficial to design those message flows and interactions as a state machine.
Update:
How are Swagger and HATEOAS mutually exclusive? The former documents your endpoints (making auto-generating code possible) and the latter adds meta-information to your endpoints which tell the consumer what they can do (i.e. which other endpoints are available). These are very different things.
I never stated that they are mutually exclusive, just that they serve two different purposes, where if you follow one approach the other becomes more or less useless. Using both does not make much sense, though.
Let's move the discussion to the Web domain, as this is probably more easily understandable, and REST is de facto just a generalization of the concepts used on the Web, so taking this step is natural and also a good recommendation for designing REST architectures in general. Think of a case where you, as a user, want to send some data to the server. You have never used the service before, so you basically don't know what a request has to look like.
In Swagger, you would consult the endpoint documentation, select the option that most likely solves your task, read up on how the request needs to look, and hack a test case into your application that ends up generating an HTTP request sent to the respective location. Auto-generating code might spare you some hacking time, though you still need to integrate the stub classes into your application and test the whole thing at least once, just to be safe. If you later need to integrate a second service of that API, or of another API altogether, you start from the beginning: look up the Swagger documentation, generate or hack the interaction code, and integrate it into your domain. Plenty of manual steps are involved, and when the API changes you need to update your client, as otherwise it might stop working.
In the Web example, however, you just start your browser/Web client and invoke the URI that allows you to send data to the server. The server will most likely send you an HTML form; you fill it out and click the send button, which automatically submits the request to the server for processing. This is HATEOAS. You used the given controls to drive your workflow. The server taught your client every little detail it needed to make a valid request: the target URI to send the request to, the HTTP method to use and, most often implicitly, the media type the payload should be in. In addition, it gave your client a skeleton of the expected and/or supported elements of the payload; i.e., the form may require you to fill out a couple of input fields, select among a given set of choices, or use some other control such as a date or time picker whose value is translated into a valid date or time representation for you. All you needed to do was invoke the respective resource in your Web client. No auto-generation, no integration into your browser/application. Using other services (from the same or different providers) will most likely work the same way, so there is no need to change or update your HTTP client (browser) as long as the media types in which requests and responses are exchanged are supported.
In the case where you rely on Swagger's RPC'esque documentation, that documentation is the source of truth on how to interact with the service, and mixing in some HATEOAS information doesn't provide any benefit. In the Swagger case, carrying around additional meta-information that bloats up the request/response for no obvious reason, given that all the required information is already in the reference documentation, will with some certainty lead people to question the sanity of the service's developers and ask for payload reduction. Just look around SO for a while and you will find plenty of questions asking how to optimize the interaction further and reduce message size to a minimum, since such clients process every little request and make no use of response caching at all. In the HATEOAS case, pointing to an external reference is simply useless, as peers in such an architecture most likely already have support for the required necessities, such as URIs, HTTP and the respective media types, built in. Where custom media types are used, support can be added at runtime via plug-ins or add-ons (if supported).
So, Swagger and HATEOAS are not mutually exclusive, but whichever one you did not choose becomes more or less useless once you decide on one route or the other.
Swagger: Swagger aids in development across the entire API lifecycle, from design and documentation, to test and deployment. (Refer to swagger.io)
HATEOAS: Hypermedia as the Engine of Application State
An Ion Form is a Collection Object where the value member array contains Form Fields. Ion Forms ensure that resource transitions (links) that support data submissions can be discovered automatically (colloquially referred to as HATEOAS). (Refer to https://ionspec.org/)
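For a rough idea of what that looks like on the wire, here is a hedged sketch of an Ion-style form; the member set below is simplified and illustrative, so consult ionspec.org for the normative details:

```json
{
  "href": "https://example.io/users",
  "rel": ["form"],
  "method": "POST",
  "value": [
    { "name": "email" },
    { "name": "password", "secret": true }
  ]
}
```

A client that understands the Ion media type can discover from the representation itself that a POST submission with these fields is possible - no out-of-band documentation needed, which is the HATEOAS point being made.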
One is a framework that supports designing and testing APIs; the other is an API design architecture.
Building a RESTful API is not a binary concept. That is why we use the Richardson maturity model in order to measure how RESTful an API is.
Based on this maturity model:
At level 0, we provide mechanisms for clients of the API to call some methods on the server (simple RPC).
At level 1, we expose resources on the server, so the client of the API can have direct access to the resources it requires (exposing resources).
At level 2, we provide a uniform way for the client to interact with the API (the exposed resources), and the HTTP protocol offers these methods (using HTTP verbs to interact with resources).
At level 3, the ultimate step, we make our API explorable by the client. HATEOAS provides such functionality (over HTTP), meaning that it adds relevant links and affordances (extra methods) that can be executed on the resource, so the client of the API can understand its behavior.
Based on these definitions, in a properly designed RESTful API there is no coupling between client and server, and the client can interact with the exposed endpoints and discover them. (A sketch contrasting a level 2 response with a level 3 response follows.)
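As a hedged illustration, here are hypothetical payloads for the same resource at level 2 and level 3; the links convention is made up for the example, not any particular standard:

```python
# Level 2: a bare resource representation; the client must already know,
# out-of-band, which operations are valid next.
order_level2 = {"id": 42, "status": "pending"}

# Level 3: the same resource plus hypermedia controls describing the
# transitions that are valid right now.
order_level3 = {
    "id": 42,
    "status": "pending",
    "links": [
        {"rel": "self",    "href": "/orders/42"},
        {"rel": "approve", "href": "/orders/42/approval", "method": "POST"},
        {"rel": "cancel",  "href": "/orders/42",          "method": "DELETE"},
    ],
}
```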
On the other hand, Swagger is a tool that helps you document your API, along with some extra goodies (code generators).
I believe that Swagger (with the help of SwaggerHub) provides services for implementing a RESTful endpoint up to maturity level 2. But it does not go any further, and it does not provide proper support for HATEOAS.
You can define your resources and HTTP verbs in (JSON/YAML) files, and based on this definition Swagger can generate API documentation and the extra goodies (client stubs and skeletal implementations of the server API).
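For a feel of what such a definition file looks like, here is a minimal, hypothetical OpenAPI (Swagger) document for a single endpoint; real definitions also carry servers, security schemes, reusable components and so on:

```yaml
openapi: 3.0.0
info:
  title: Orders API        # hypothetical service
  version: 1.0.0
paths:
  /orders/{id}:
    get:
      summary: Fetch a single order
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: integer }
      responses:
        "200":
          description: The requested order
          content:
            application/json:
              schema:
                type: object
                properties:
                  id: { type: integer }
                  status: { type: string }
```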
For all those people who have worked with Java RMI, SOAP and the like, the extra-goodies part is a reminder of old technologies where there was tight coupling between client and server, because the stubs and skeletal implementations were all built from the same API definition file.

Is RESTful (HATEOAS) practical for specialised clients?

Is there a proof-of-concept client (i.e. a web application) that represents a real-world application implemented using, and taking advantage of, RESTful principles?
All I could find are API browsers, but the development of a real-world application (i.e. a social network or an e-commerce website) is quite different.
I've read Roy's work and related papers, but I still can't grasp how to make the most of REST in client development. I always end up storing state on the client or specialising the media-type rendering. For example, the same resource (i.e. a profile resource) is rendered differently based on context (i.e. on the homepage, on the product page or on the dedicated profile page), so farewell media-type -> code-on-demand rendering.
I really can't see any advantage (in the way I work) of HATEOAS over an API with a well-defined/auto-generated IDL (i.e. JSON hyper-schema).
My current conclusion is that only generic clients (i.e. Google) can benefit from HATEOAS, not real-world/specialised applications. Specialised client development doesn't seem to gain anything from your API being HATEOAS-enabled instead of IDL-described.
While it's true that HATEOAS gives you URI flexibility, and human discovery of flows, the real benefit is using it as an encoding of resource state.
If you have a state machine associated with a resource, you will have some states that permit certain state transitions and not others.
The opportunity to effect a possible state transition is offered to REST clients via operations against resource URIs - using HATEOAS hypermedia, you can define the transitions by a known rel link name, and then include or exclude the rel links depending on which transitions are permitted by the current state.
This means the logic of determining which transitions are valid is kept server-side - the client can choose to hide or disable UI options depending on whether the associated rel link is present.
Another reason to include or exclude a particular rel link may be related to the access control permissions offered to the current user. Simply exclude them if the current user isn't permitted to carry out the transition.
If you are not dynamically including or excluding rel links based on resource state and/or the state of the authorized user, then your analysis of the pros and cons is pretty spot on, because you are not using them for the real reason they were included. After all, the S in REST stands for state! :)
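Here is a hedged server-side sketch of that pattern; the domain objects (order.state, user.can) and the representation format are stand-ins, not any specific framework's API:

```python
def order_representation(order, user):
    """Assemble a hypermedia representation of an order per request.

    A transition's rel link simply never appears unless the current
    resource state and the caller's permissions allow it, so the
    client can hide any control whose link is absent.
    """
    links = [{"rel": "self", "href": f"/orders/{order.id}"}]
    if order.state == "pending":
        links.append({"rel": "cancel",
                      "href": f"/orders/{order.id}", "method": "DELETE"})
        if user.can("approve-orders"):  # access control also trims links
            links.append({"rel": "approve",
                          "href": f"/orders/{order.id}/approval", "method": "POST"})
    elif order.state == "approved":
        links.append({"rel": "ship",
                      "href": f"/orders/{order.id}/shipment", "method": "POST"})
    return {"id": order.id, "state": order.state, "links": links}
```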
HATEOAS is a design philosophy / style / flavor, and adopting it is largely a matter of taste, or a tradeoff between full-blown code generation and a hand-written API.
The key differentiating aspect of HATEOAS is the way references to other resources in the API are constructed (namely, by a full URL). This removes a lot of the documentation burden that you would otherwise encounter if the API response included only an ID (and not the full URL of the resource).
However, when you use HATEOAS with JSON instead of XML, you lose some of the other context (e.g. should I PUT, GET or POST to this endpoint?), so you must supplement it with some other kind of metadata if you want to generate a client, or documentation for humans.
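One common (but non-standard) way to restore that context is to annotate each link with the HTTP method and expected media type; the member names in this sketch are an illustrative convention rather than part of any spec:

```json
{
  "id": 42,
  "links": [
    { "rel": "self",   "href": "/orders/42", "method": "GET" },
    { "rel": "update", "href": "/orders/42", "method": "PUT",
      "accepts": "application/json" }
  ]
}
```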
In my experience, HATEOAS APIs are much easier for humans to consume with simple REST clients (e.g. cURL) than a WSDL or IDL, which assumes the client is using generated code and will never touch the API directly.
Tradeoffs
So why would you choose HATEOAS over WSDL or some other generated option?
The basic assumption for APIs (which is not always true) is that they will have many flavors of clients / consumers, possibly implemented in different languages. This means that over time, writing and updating clients is more work than writing the service.
If you or your business are going to maintain the API clients yourself then there is a cost tradeoff between generating code for all of the clients (WSDL, SWIG, etc.) or hiring a language-specific developer to maintain one.
Chances are a generated API client is not going to follow the idiomatic style of any given language, and the code is generally ugly. If these things matter to you, then you will probably want a human to write the client code. If you don't care about this, then you can stop reading about HATEOAS and use a WSDL or similar approach instead.
If you do want to optimize for a human consuming the API, though, HATEOAS succeeds because it conveys contextual information to a human, which makes it easier to write clients without extensive API documentation.
Example
For an example of a HATEOAS-like API, take a look at the GitHub API. It is quite easy to browse with a REST client, and once you learn how to authenticate you can find most of the things you want by following referenced data URLs. You will still need to consult the documentation for specific details and advanced use cases (like POSTing data), but it is very easy to write a simple client for GitHub without pulling in a GitHub client library or reading the docs end-to-end.
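As a hedged sketch of that browsing style, the snippet below walks from a GitHub user representation to that user's repositories by following a URL the response itself provides; unauthenticated requests are rate-limited, so treat it as illustrative:

```python
import requests

# The only URL constructed by hand is the entry point; from there we
# follow URLs that the representations themselves hand back to us.
user = requests.get("https://api.github.com/users/octocat").json()

# The user representation carries a ready-made URL for the repo listing.
repos = requests.get(user["repos_url"]).json()
for repo in repos[:5]:
    print(repo["full_name"], repo["html_url"])
```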

Should a Netflix or Twitter-style web service use REST or SOAP? [closed]

I've implemented two REST services: Twitter and Netflix. Both times, I struggled to find the use and logic involved in the decision to expose these services as REST instead of SOAP. I hope somebody can clue me in to what I'm missing and explain why REST was used as the service implementation for services such as these.
Implementing a REST service takes infinitely longer than implementing a SOAP service. Tools exist for all modern languages/frameworks/platforms to read in a WSDL and output proxy classes and clients. Implementing a REST service is done by hand and - get this - by reading documentation. Furthermore, while implementing these two services, you have to make "guesses" as to what will come back across the pipe as there is no real schema or reference document.
Why write a REST service that returns XML anyway? The only difference is that with REST you don't know the types each element/attribute represents - you are on your own to implement it and hope that one day a string doesn't come across in a field you thought was always an int. SOAP defines the data structure using the WSDL so this is a no-brainer.
I've heard the complaint that with SOAP you have the "overhead" of the SOAP Envelope. In this day and age, do we really need to worry about a handful of bytes?
I've heard the argument that with REST you can just pop the URL into the browser and see the data. Sure, if your REST service is using simple or no authentication. The Netflix service, for instance, uses OAuth which requires you to sign things and encode things before you can even submit your request.
Why do we need a "readable" URL for each resource? If we were using a tool to implement the service, do we really care about the actual URL?
A canary in a coal mine.
I have been waiting for a question like this for close to a year now. It was inevitable that this day would come and I am sure we are going to see many more questions like this in the coming months.
The warning signs
You are absolutely correct, it does take longer to build RESTful clients than SOAP clients. The SOAP toolkits take away lots of boilerplate code and make client proxy objects available with almost no effort. With a tool like Visual Studio and a server URL I can be accessing remote objects of arbitrary complexity, locally in under five minutes.
Services that return application/xml and application/json are so annoying for client developers. What are we supposed to do with that blob of data?
Fortunately, lots of sites that provide REST services also provide a bunch of client libraries so that we can use those libraries to get access to a bunch of strongly typed objects. Seems kind of dumb though. If they had used SOAP we could have code-gen’d those proxy classes ourselves.
SOAP overhead, ha. It’s latency that kills. If people are really concerned about the number of excess bytes going across the wire then maybe HTTP is not the right choice. Have you seen how many bytes are used by the user-agent header?
Yeah, have you ever tried using a web browser as a debugging tool for anything other than HTML and JavaScript? Trust me, it sucks. You can only use two of the verbs, the caching is constantly getting in the way, the error handling swallows so much information, and it's constantly looking for a goddamn favicon.ico. Just shoot me.
Readable URL. Only nouns, no verbs. Yeah, that’s easy as long as we are only doing CRUD operations and we only need to access a hierarchy of objects in one way. Unfortunately most applications need a wee bit more functionality than that.
The impending disaster
There are a metric boatload of developers currently building applications that integrate with REST services who are in the process of coming to the same set of conclusions that you have. They were promised simplicity, flexibility, scalability, evolvability and the holy grail of serendipitous reuse. The characteristics of the web itself - how could things go wrong?
However, they are finding that versioning is just as much of a problem, but the compiler doesn't help detect issues. The hand-written client code is a pain to maintain as the data structures evolve and URLs get refactored. Designing APIs around just nouns and four verbs can be really hard, especially with RESTful URL zealots telling you when you can and cannot use query strings.
Developers are going to start asking why we are wasting our effort supporting both JSON and XML formats - why not just focus our efforts on one and do it well?
How did things go so wrong
I’ll tell you what went wrong. We as developers let the marketing departments take advantage of our primary weakness. Our eternal search for the silver bullet blinded us to the reality of what REST really is. On the surface REST seems so easy and simple. Name your resources with Urls and use GET, PUT, POST and DELETE. Hell, us devs already know how to do that, we have been dealing with databases for years that have tables and columns and SQL statements that have SELECT, INSERT, UPDATE and DELETE. It should have been a piece of cake.
There are other parts of REST that some people discuss, such as self-descriptiveness and the hypermedia constraint, but those constraints are not as simple as resource identification and the uniform interface. They seem to add complexity where the desired goal is simplicity.
This watered down version of REST became validated in developer culture in many ways. Server frameworks were created that encouraged Resource Identification and the uniform interface, but did nothing to support the other constraints. Terms started to float around differentiating the approaches, (HI-REST vs LO-REST, Corporate REST vs Academic REST, REST vs RESTful).
A few people scream out that if you don’t apply all of the constraints it’s not REST. You will not get the benefits. There is no half REST. But those voices were labelled as religious zealots who were upset that their precious term had been stolen from obscurity and made mainstream. Jealous people who try to make REST sound more difficult than it is.
REST, the term, has definitely become mainstream. Almost every major web property that has an API supports "REST". Twitter and Netflix are two very high profile ones. The scary thing is that I can only think of one public API that is self-descriptive and there are a handful that truly implement the hypermedia constraint. Sure some sites like StackOverflow and Gowalla support links in their responses, but there are huge gaping holes in their links. The StackOverflow API has no root page. Imagine how successful the web site would have been if there was no home page for the web site!
You were misled I’m afraid
If you have made it this far, the short answer to your question is those APIs (Netflix and Twitter) do not conform to all of the constraints and therefore you will not get the benefits that REST apis are supposed to bring.
REST clients do take longer to build than SOAP clients but they are not tied to one specific service, so you should be able to re-use them across services. Take the classic example, of a web browser. How many services can a web browser access? What about a Feed Reader? Now how many different services can the average Twitter client access? Yes, just one.
REST clients are not supposed to be built to interface with a single service, they are supposed to be built to handle specific media types that could be served by any service. The obvious question to that is, how can you build a REST client for a service that delivers application/json or application/xml. Well you can’t. That’s because those formats are completely useless to a REST client. You said it yourself,
you have to make "guesses" as to what will come back across the pipe as there is no real schema or reference document
You are absolutely correct for services like Twitter. However, the self-descriptive constraint in REST says that the HTTP content type header should describe exactly the content that is being transmitted across the wire. Delivering application/json and application/xml tells you nothing about the content.
When it comes to considering the performance of REST-based systems, it is necessary to look at the bigger picture. Talking about envelope bytes is like talking about loop unwinding when comparing a quick-sort to a shell-sort. There are scenarios where SOAP can perform better, and there are scenarios where REST can perform better. Context is everything.
REST gains much of its performance advantage by being very flexible about what media types it supports and by having sophisticated support for caching. For caching to work well though nearly all of the constraints must be adhered to.
Your last point about readable URLs is by far the most ironic. If you truly commit to the hypermedia constraint, then every URL could be a GUID and the client developer would lose nothing in readability.
The fact that URIs should be opaque to the client is one of the most key things when developing REST systems. Readable URLs are convenient for the server developer and well structured URLs make it easier for the server framework to dispatch requests, but those are implementation details that should have no impact on the developers consuming the API.
The Twitter API is not even close to being RESTful, and that is why you are unable to see any benefit to using it over SOAP. The Netflix API is much closer, but its use of generic media types demonstrates that failing to adhere to even a single constraint can have a profound impact on the benefits derived from the service.
It may not be all their fault
I’ve done a whole lot of dumping on the service providers, but it takes two to dance RESTfully. A service may follow all of the constraints religiously and a client can still easily undo all of the benefits.
If a client hard-codes URLs to access certain types of resources, then it is preventing the server from changing those URLs. Any kind of URL construction based on implicit knowledge of how the service structures its URLs is a violation.
Making assumptions about what type of representation will be returned from a link can lead to problems. Making assumptions about the content of the representation based on knowledge that is not explicitly stated in the HTTP headers is definitely going to create coupling that will cause pain in the future.
Should they have used SOAP?
Personally, I don’t think so. REST done right allows a distributed system to evolve over the long term. If you are building distributed systems that have components that are developed by different people and need to last for many years, then REST is a pretty good option.
SOAP is an object-oriented, remote procedure call technology stack. It works by building a new abstraction on top of an existing protocol (HTTP).
REST is a document oriented approach, that simply uses the features of an existing protocol (HTTP). "REST" is just a buzzword -- the concept is this: Just use the web the way it was designed to work!
In response to the edits to the question:
"Implementing a REST service takes infinitely longer than implementing a SOAP service."
Um, no, it can't be infinitely longer. And in cases where what you are trying to retrieve is already a document or file, it's actually much faster. For example, the OGC spec for WMS (Web Mapping Service) defines both a SOAP and REST version of the protocol, and there's a reason why almost nobody implements the SOAP version -- it's because if you're trying to get a map, it's a lot easier to just build a URL and fetch image bytes from that URL than it is to bother with encapsulating it into a SOAP message. But yes, I will agree that if the point of the web service is to transfer some strongly-typed object in a domain object model, SOAP is better suited for that use.
"Why write a REST service that returns XML anyway?"
Well, yes, that can be silly. But it depends on what the XML is. If there's a clearly defined schema for it somewhere, then there's no ambiguity. For example, you can think of WSDL URLs as being a kind of RESTful web service for retrieving information about a web service. In this case, adding the overhead of another SOAP request would be pointless.
In general, REST wins when the content that is being transferred can be thought of as a file, as a single unit. SOAP wins when the content needs to be treated as an object with members.
"I've heard the complaint that with SOAP you have the "overhead" of the SOAP Envelope. In this day and age, do we really need to worry about a handful of bytes?"
Yes. Not in every circumstance, but there are sites with a great deal of traffic where it makes a difference. Is it enough of a difference to outweigh the semantic differences of using SOAP instead of REST? I doubt it. If you're doing an object remoting protocol and the number of bytes is making a difference, SOAP is probably not the tool for you anyway -- maybe you should be using CORBA or DCOM instead.
"I've heard the argument that with REST you can just pop the URL into the browser and see the data."
Yes, and this is a large argument in favor of REST if it makes sense to view the data in a browser. For example, with image data, it's an easy way to debug the service -- just paste the URL into your browser's address bar and see what the image looks like. Or if the data returned is in XML, and you have a referenced XML stylesheet that renders into readable HTML in the browser, then you get the benefit of semantic markup and easy visualization all in one package. But you are correct, this benefit mostly evaporates when working with more complex authentication schemes. If you can't encode all your authentication information into each HTTP request, then I would argue that it doesn't count as REST at all.
"Why do we need a "readable" URL for each resource? If we were using a tool to implement the service, do we really care about the actual URL?"
Well, it depends. Why do we need readable URLs for any resource on the web? You can read Tim Berners-Lee's essay Cool URIs Don't Change for the rationale, but basically, as long as the resource may still be useful in the future, the URI for that resource should stay the same.
Obviously, for transient resources (like the "today's Money" link in the essay) there is no need for it, since the need to reference the resource goes away if the corresponding resource goes away. But for more permanent resources (like StackOverflow questions, for example, or movies on IMDB), you want to have a URL that will work forever. When you're designing a web service, you need to decide if the resources themselves could outlive your service, and if so, then REST is probably the right way to go.
For the record, yes, I've been developing web pages since well before NetFlix or Twitter existed. And no, I've not yet had any need or opportunity to implement a client to either NetFlix or Twitter's services. But even if their services are atrociously difficult to work with, that doesn't mean the technology they implemented their services on top of is bad -- only that those two implementations are bad.
To make a long story short: REST and SOAP are just tools. They each have strengths and weaknesses. If the only tool you have is a hammer, then every problem looks like a nail. So get to know both tools, and learn how to use them correctly, and then choose the right tool for each job.
An honest question deserves an honest answer. But first, why did you use the text of this question as an answer to another question if you did not think it was rhetorical in nature?
Anyway:
"Tools exist for all modern languages/frameworks/platforms to read in a WSDL and output proxy classes and clients. Implementing a REST service is done by hand by reading documentation."
Just like browser vendors have read and re-read the HTML 4.01 specification up and down to try to implement a consistent browsing experience. Have you reflected on the fact that browsers were invented long before internet banking and Stack Overflow, and yet you can use a browser to do just those things? This is possible for the sole reason that everybody agrees to use HTML (and related formats like CSS, JS, JPEG etc).
Blogging is actually not that new, and someone came up with AtomPub, which allows any blogging software to access and update posts in a blog, much like any web browser can access any web page. That's pretty neat, and works because of the RESTful constraints imposed by the protocol.
But for Twitter and Netflix, there is no universal agreement that "all microblogs in existence shall use the media type application/tweet", mainly because microblogging is so new. Maybe in a few years' time a few microblogging services will settle on the same API so that Twitter, Facebook, Identica and others can interoperate. None of their existing APIs are anywhere near RESTful, however much they claim to be, so I don't expect it to happen real soon.
"Furthermore, while implementing these two services, you have to make "guesses" as to what will come back across the pipe as there is no real schema or reference document."
You've hit the nail on the head. REST is all about distributed hypermedia, and that pretty much sums it up. A browser looks at what it gets back from a request and shows it to the user. An HTML page usually spawns a lot more GET requests, for example for CSS, scripts and images. An image is typically only rendered to the screen, JavaScript is executed, and so on. Each time, the browser does what it does because it found the link in an <img> or <style> tag and the response media type was image/jpeg or text/css.
If Twitter makes a hypermedia-based API, it will probably always return an application/tweet every time you follow a link to a tweet, but the client should never assume it, and should always check what it gets before acting on it.
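A minimal sketch of that "check before acting" discipline, with hypothetical handler functions standing in for whatever media types the client actually understands:

```python
import requests


def handle_json(document):
    """Hypothetical handler for generic JSON documents."""
    return document


def handle_feed(text):
    """Hypothetical handler for Atom feeds."""
    return text


def follow(url):
    """Fetch a link and dispatch on what actually came back, rather
    than assuming a particular representation in advance."""
    response = requests.get(url)
    media_type = response.headers.get("Content-Type", "").split(";")[0].strip()
    if media_type == "application/json":
        return handle_json(response.json())
    if media_type == "application/atom+xml":
        return handle_feed(response.text)
    raise ValueError(f"no handler registered for {media_type!r}")
```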
"Why write a REST service that returns XML anyway?"
This all boils down to media types. Like HTML: if you see an element and you have no idea what it actually means, the HTML spec instructs you to ignore it and process the "body" of the tag if it has one. Likewise, the Atom spec instructs you to ignore unknown elements and foreign markup (from different namespaces) and not process the body (IIRC).
Designing media types for generic problem domains (as in the HTML media type for the rich-text problem domain) is very hard. Making media types for very narrow problem domains is probably a lot easier (like a tweet). But it's always a good idea to design for extensibility and to specify how clients (and servers) are supposed to react when they see elements or data items that don't match the spec. JPEG, for example, has an application-specific record type (e.g. APP1) which is used to contain all sorts of metadata.
"I've heard the complaint that with SOAP you have the "overhead" of the SOAP Envelope. In this day and age, do we really need to worry about a handful of bytes?"
No, we don't. REST is absolutely not about being efficient over the wire; it actually trades wire efficiency away. REST's efficiency comes from the possibilities of caching enabled by all the other constraints. Fielding's dissertation notes: "The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs. The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction." I don't think the SOAP envelope byte-count overhead is a valid concern.
"I've heard the argument that with REST you can just pop the URL into the browser and see the data."
Yes, that's also an invalid argument. It doesn't work that way. Even if it did work, most narrow REST APIs out there use media types that browsers have no idea about and it still won't work.
But there are a lot more possibilities than a browser for testing an HTTP-based API, like command-line utilities or browser extensions that let you control almost any aspect of an HTTP request, inspect response headers and discover links to follow. But even so, this is nowhere near as easy as generating WSDL stubs and writing a three-line program to call the function.
"Why do we need a "readable" URL for each resource? If we were using a tool to implement the service, do we really care about the actual URL?"
If you look at how the web works, I'm pretty sure that humans are by and large glad that the URI for a wikipedia page looks like this, http://en.wikipedia.org/wiki/Stack_overflow instead of http://en.wikipedia.org/wiki/?oldid=376349090. But it actually is not important to REST. The important thing to try to get right is to choose to place relevant data in the URI that is not likely to change. You might think that the database ID will never change, but what happens when two data sets need to be merged? All your primary keys change. The page title (Stack_overflow) will not change.
Sorry for the long response, but I believe this question is valid, and hasn't been addressed before here on SO. I'm sure Darrel Miller will add his answer once he's back too.
Martin Fowler has a post on the Richardson Maturity Model which does a great job explaining the difference between SOAP and REST.
WSDL and other document level protocols are redundant. The HTTP protocol supports a much richer set of operations besides just serving documents and submitting forms.
Supporters of REST are uncomfortable with that redundancy.

Alternatives to YQL

This is a multi-part question. I just watched a very interesting presentation on YQL by the lead developer (a graduate of my MS program). While it was very compelling, and I am looking forward to trying it out, I am wondering if anyone knows of alternative frameworks for querying multiple web service APIs to make them appear seamless, the apparent purpose of YQL?
Yahoo's strategy has been to create XML schema definitions that bind a given web service's parameters into their YQL Open Table query parameters, which I think is very clever. Is there any tool that attempts (perhaps I am naive here) to automate the discovery of parameters in, say, a REST API? I am aware that with SOAP APIs, because there is a published WSDL, automation is easier, but is there really no way yet to do this with REST? Is anyone trying?
Yes people are trying to produce description languages for REST. The most popular effort is WADL. There are lots of questions about WADL here on SO. Is it a good idea? In my opinion no.
REST does not need a discovery model beyond what it already has with hypermedia, because REST is trying to solve a problem at a different architectural layer than web services. Web services deliver data to an application's business logic/domain model. REST is about delivering content and behaviour to a presentation layer.
How about an analogy? Think of the difference between an object and a struct in C++. A struct is just simple data that some client process is going to manipulate. That's what a web service does: it returns a chunk of data, a struct. Sure, maybe it did a bunch of server-side processing to produce the result, but the end result is a lump of data. A REST interface delivers an object, i.e. it contains both data and the methods that can be used to manipulate that object. By definition, if you understand the uniform interface and you understand the returned media type, you already know what you can do with the response. Discovery mechanisms are redundant.
If you find this hard to believe, then think about the web. How does a web browser discover web pages? The web has no formalized discovery mechanism, and yet there is a world of information out there that we can discover with a web browser.
There is this little website http://zachgrav.es/yql/tablesaw/ which indeed auto-discovers parameters in a REST api and turns it into a YQL compatible table.
There are two ways to find information. Either you use a 100% unambiguous language or you use a natural language. Anything in between like YQL is doomed to fail because it delivers neither and works well only with the examples its authors tout.
I blogged about this at http://zscraper.wordpress.com/2012/05/30/enough-with-crawling-2. My personal stance is that you'll always get the most accurate results if you do your homework first, i.e. study the target domain and figure out how to query it unambiguously.
To answer your question and give you an alternative -- try Bobik. This is a cloud-backed scraping service that you control via REST API. Compose your "queries" in traditional syntax (Bobik supports Javascript, JQuery, XPATH and CSS) and call Bobik to run them from any client-side environment (webpages, mobile apps, or your server).
Hope this helps.

What is the reason for using WADL?

To describe RESTful we can say that every resource has its own URI. Using HTTP GET, POST, PUT and DELETE, we can operate on these resources. All resources are representational. Whoever wants to use our resources can do so via a browser or REST client.
That's the main idea of a RESTful architecture. This architecture allows services on the internet. So why does this architecture need WADL? What does WADL offer that standard HTTP does not? Why does WADL need to exist?
The purpose of WADL is to define a contract. A contract specifies how one party can call another.
When you create a web application from scratch, you don't need a contract and WADL.
When you integrate your system with another system and you can communicate clearly with their development team, you don't need a contract or WADL (because you can make a phone call to clear things up).
However, when you integrate a complex enterprise system with several other complex enterprise systems maintained by several different companies (or federal institutions), then, believe me, you want a communication contract defined as strictly as possible. Then you need WADL or an Open Specification. You need it badly.
People with a weak enterprise background tend to see all of IT as a collection of separate web applications developed independently. But enterprise reality is sometimes tough. Sometimes you can't even call or write to the people developing the application you have to integrate with. Sometimes you are communicating with a legacy application that is no longer maintained - it just runs, and you need to figure out how to communicate with it properly. In such conditions you need a contract, because it saves your ass.
Actually, client generation is a minor feature of contract definition. It's just a toy. A contract forces bad communicators to state integration rules clearly. This is the main reason to use WADL or an Open Specification or whatever.
Using WADL implies that you just might be gracious enough to actually define the data / documents you are passing back and forth. Say you are passing some XML fragments, they might actually be part of a defined schema.
Whether or not you use the DL to generate code is not very important to me. What matters, in my subjective opinion, is that it is important to have a formal agreement on interfaces between business partners. Even if what is passed is obvious, it helps to identify who has to fix what later if somebody changes the previous interface.
Data format is just as much a part of an interface as verb names.
WADL appeals to people coming from the SOAP world where it is common to use a code generator to create client side code based on the WSDL. I don't think that mechanism is useful in REST as it creates client code that is coupled to server endpoints.
I believe that if you properly define your media types and use hypermedia within those media types, then it is not necessary to have WADL. The description of the available endpoints is contained within the media-type definitions themselves. And if you are now saying to yourself, "but application/xml doesn't contain any information about available hyperlinks", then I say BINGO. That's why I don't think application/xml and application/json are appropriate media types for REST. I'm not saying don't use XML or JSON; just don't use the generic media type name.
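To illustrate the point, a hypothetical vendor-specific media type tells the client exactly which hypermedia rules govern the body, which the generic application/json never could:

```
HTTP/1.1 200 OK
Content-Type: application/vnd.example.order+json

{
  "id": 42,
  "links": [ { "rel": "payment", "href": "/orders/42/payment" } ]
}
```

A client that recognises application/vnd.example.order+json knows from that media type's spec that the links member carries the valid transitions; a client that doesn't can safely refuse to guess.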
The other appeal of WADL is for documenting REST services. Unfortunately, it leads developers down the wrong path, as WADL attempts to document server-side endpoints. Documenting REST services should focus primarily on the media types. A client developer should be able to write a REST client without knowing any URL other than the root URL.
WADL allows you to generate code, tests and documentation. Actually, there are a few very useful tools utilizing WADL; you can see some examples here. The problem with "pure" REST, as described in Fielding's dissertation, is writing clients that support hypermedia (imagine writing a Java Swing-based client application, for example). With WADL this task is completely automated, which is a huge advantage in my view. Testing becomes way easier, too.
Before I give my explanation, let me say that most pure REST extremists will deride it to the ends of the earth. I don't agree with them, as I'd rather get something done, but just so you know.
WADL is a description of a web service API, a little like WSDL is for SOAP type web services, that is designed to be more in tune with RESTful interfaces (something WSDL is poor at).
Its primary usage, in my experience, is to allow you to generate client code that can call the service (handy if it's a very large API, where this literally saves hours of work). It also serves the purpose of documenting a REST-like interface.
REST specifies nothing about WADL.
When you want to expose REST services, the best way is to generate a WADL and share it with consumers (similar to WSDL in SOAP-based web services). WADL is used to describe the service all in one place.
WADL is not necessary to use. But if you are working with a complex existing application and you want to implement a REST service call by replacing an EJB/SOAP service call, then it is very safe and good practice to use WADL. By using client-side Java stubs generated from the WADL, you will stay in sync with the service.
You can generate client-side Java stubs from a WADL file with the help of the wadl2java Maven plugin.