After I read that sentence:
SOAP was designed for a distributed computing environment whereas
REST was designed for a point to point environment.
I searched for information about DCE and the "point to point environment". I learned about DCE, but I found nothing about a point-to-point environment. So what is it?
I believe "SOAP was designed for a distributed computing environment" boils down to meaning SOAP is an RPC mechanism. From what I understand, "point to point environment" is really just another way of saying that the protocol implementing the style is client/server and only supports unicast addressing.
It's a meaningless contrast, really. And, if my understanding is correct, the "point to point environment" bit is rubbish: HTTP may support nothing other than unicast addressing of resources, but that doesn't mean another protocol supporting the RESTful style, hypothetical or otherwise, couldn't have an addressing mechanism that mapped onto multiple resources at once.
Of course, I'm only making an educated guess.
REST - used on the web for larger-scale internet applications and websites (e.g. twitter.com). Point to point = the internet.
SOAP - used in business systems, commonly with existing XML schemas from third parties.
I want to dive into the whole diversity of tools which provide connection between programs over the network.
To clarify the question, I divide it into subquestions:
Why were some groups of programs (or specific tools/frameworks/approaches, together with the programming languages where those frameworks can be used) popular in each period of time? (I expect a description of the problems that were solved, a description of the tools, why those tools were considered the best solution to those problems at the time, and why some of them lost popularity.)
What is the entire history of software communication over the network? (tools/approaches and their popularity, decade by decade)
What are the modern solutions to this problem?
I can distinguish only two significant approaches.
RPC, RMI and their implementations (I saw this, but it is about a concrete problem and specific tools to solve it; I want to see the place of this problem in the whole picture of interconnecting programs over the network. I have heard about these implementations: ONC RPC, XML-RPC, CORBA, DCOM, gRPC. But which are active now? Which are reasonable to use? Which are preferable, and why? I want the answers not to be opinion-based, so I accept answers like "technology A is better than technology B for problem X because ..." only if there is reliable research, statistics or facts). I heard that RPC and RMI were popular 10 years ago. Are they still?
Web services: REST, SOAP.
Am I missing something? Maybe there are technologies that solve the problem in a completely new way? Maybe there are technologies that can be treated as replacements for RPC (RMI) and web services? Can we replace RPC (RMI) with REST for any task? Or only for modern tasks? Should I separate the technologies not into RPC and web services, but in some other manner?
As a partial answer, I can give you my feedback on the use of RabbitMQ.
As explained here, it provides a lot of different ways to use it:
RPC, by implementing a "callback" queue (see the sketch below)
One-to-one and one-to-many routing strategies, to propagate your events through your whole infrastructure and target the right destination.
It comes with the ability to persist messages, to avoid losing data when a crash occurs, and also with plugins that extend its possibilities (e.g. the x-delayed plugin)
This technology, written in Erlang, is powerful and is a must-try in terms of communication between programs.
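To make the "callback queue" point from the first bullet concrete, here is a minimal sketch of the RPC pattern over RabbitMQ using the Python pika client. The broker address, queue name and payload are assumptions for the example; a matching server would consume rpc_queue and publish its answer to the reply_to queue.

```
import uuid
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Exclusive, broker-named queue where the server will send its reply.
result = channel.queue_declare(queue="", exclusive=True)
callback_queue = result.method.queue

corr_id = str(uuid.uuid4())
response = None

def on_response(ch, method, props, body):
    global response
    if props.correlation_id == corr_id:  # match the reply to our request
        response = body

channel.basic_consume(queue=callback_queue,
                      on_message_callback=on_response,
                      auto_ack=True)

# Publish the request, telling the server where and under which id to reply.
channel.basic_publish(
    exchange="",
    routing_key="rpc_queue",  # the (hypothetical) server listens on this queue
    properties=pika.BasicProperties(reply_to=callback_queue,
                                    correlation_id=corr_id),
    body=b"42",
)

while response is None:  # drive I/O until the reply arrives
    connection.process_data_events(time_limit=1)
print("got:", response)
```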
To your question "Am I missing something": yes.
Very popular communication patterns are the so-called event-driven or message-driven protocols. These protocols are often used in distributed systems such as web applications, microservices and IoT environments. The communication is completely asynchronous and allows building scalable, loosely coupled systems.
There are many different frameworks and methods for event-driven systems, like WebSockets, WebHooks and pub-sub, and messaging libraries like ActiveMQ, OpenMQ, RabbitMQ, ZeroMQ and MQTT.
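For a taste of the pub-sub style in code, here is a minimal sketch with ZeroMQ's Python binding (pyzmq). The port, topic name and single-process setup are assumptions chosen to keep the example self-contained; normally publisher and subscriber would be separate programs.

```
import time
import zmq

ctx = zmq.Context()

pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5556")

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "sensors/")  # topic filter

time.sleep(0.2)  # let the subscription propagate (the "slow joiner" problem)
pub.send_string("sensors/temp 21.5")  # topic prefix + payload
print(sub.recv_string())  # -> "sensors/temp 21.5"
```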
Hope this info helps with your research.
I see more and more software organizations using gRPC in their service-oriented architectures, but people are also still using REST. In what use cases does it make sense to use gRPC, and when does it make sense to use REST for inter-service communication?
Interestingly, I've come across open source projects that use both REST and gRPC. For instance, Kubernetes and Docker Swarm both employ gRPC to some extent for cluster coordination, but also expose REST APIs for interfacing with master/leader nodes. Why not use gRPC up and down?
Edit: for a summary of my own thoughts on this topic, see
https://medium.com/#natemurthy/rest-rpc-and-brokered-messaging-b775aeb0db3
When done correctly, REST improves long-term evolvability and scalability at the cost of performance and added complexity. REST is ideal for services that must be developed and maintained independently, like the Web itself. Client and server can be loosely coupled and change without breaking each other.
RPC services can be simpler and perform better, at the cost of flexibility and independence. RPC services are ideal for circumstances where client and server are tightly coupled and follow the same development cycle.
However, most so-called REST services don't really follow REST at all, because REST became just a buzzword for any kind of HTTP API. In fact, most so-called REST APIs are so tightly coupled, they offer no advantage over an RPC design.
Given that, my somewhat cynical answers to your question are:
Some people are adopting gRPC for the same reason they adopted REST a few years ago: design-by-buzzword.
Many people are realizing the way they implement REST amounts to RPC anyway, so why not go with a standardized RPC framework and implement it correctly, instead of insisting on poor REST implementations?
REST is a solution for problems that appear in projects that span several organizations and have long-term goals. Maybe people are realizing they don't really need REST and looking for better options.
Depending on gRPC's future roadmap, people will continue migrating to it and letting REST (over HTTP) go quiet.
gRPC is more convenient in many ways:
Usually fast (like super-fast)
(Almost) No "design dichotomy" ― what's the right end-point to use, what's the right HTTP verb to use, etc.
No dealing with the messy input/response serialization baloney, since gRPC handles the serialization: more efficient data encoding, plus HTTP/2, which makes things faster through multiplexed requests over a single connection and header compression
Define/declare your input/response once and generate reliable clients for different languages (of course, the ones that are "supported"; this is a HUGE advantage); see the sketch after this list
Formalized set of errors ― this is debatable but so far they are more directly applicable to API use cases than the HTTP status codes
In any case, you will have to deal with all of gRPC's troubles too, since nothing in this world is infallible, but so far it "looks better" than REST, and has actually proven that.
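To illustrate the generated-client point from the list above, here is a minimal sketch of a Python gRPC call. The service, message names and port are hypothetical; order_pb2 and order_pb2_grpc stand for the modules protoc would generate from a .proto definition such as the one in the comment.

```
# Hypothetical order.proto compiled with protoc:
#   service Orders { rpc GetOrder (GetOrderRequest) returns (Order); }
#   message GetOrderRequest { int64 id = 1; }
#   message Order { int64 id = 1; string sku = 2; }
import grpc
import order_pb2
import order_pb2_grpc

channel = grpc.insecure_channel("localhost:50051")
stub = order_pb2_grpc.OrdersStub(channel)  # generated, typed client
order = stub.GetOrder(order_pb2.GetOrderRequest(id=42))
print(order.sku)
```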
I think you can have the best of both worlds. In any case, gRPC largely follows HTTP semantics (over HTTP/2) but explicitly allows full-duplex streaming. It diverges from typical REST conventions in that it uses static paths for performance reasons during call dispatch: parsing call parameters out of paths, query parameters and the payload body adds latency and complexity.
The promise of REST has always been a uniform interface. An ideal REST client would be able to talk to a wide range of RESTful resources, even those that did not exist when the client was coded.
Unfortunately, this ideal has never really materialized except for REST’s original case — the World Wide Web of human-readable documents.
At this point, most interfaces that call themselves “RESTful” are really a baroque kind of RPC, where request and response data is smeared over methods, query strings, headers, status codes, payloads, all in a variety of fragile formats.
Most of the uniformity in today’s “RESTful” interfaces is in the heads of developers. They “know” that POST /orders/ is probably going to add a new order. But they still have to program their clients to “know” that, for every API they talk to, often making lots of errors.
Still, there is some uniformity that can actually be useful in code. For example, if you have a “RESTful” API, then you can often add a transparent, finely tunable caching layer to it nearly for free. This is possible because (semantically correct) HTTP messages already carry all the standardized information needed for caching: request method, URL, status code, Cache-Control, Vary and all that. In gRPC, you have to roll your own caching.
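As a small illustration of that standardized caching machinery, here is a sketch of a conditional GET with ETag/If-None-Match using the Python requests library; the URL is hypothetical.

```
import requests

url = "https://api.example.com/orders/42"  # hypothetical resource
first = requests.get(url)
etag = first.headers.get("ETag")

# Revalidate: if the representation is unchanged, the server answers
# 304 Not Modified with an empty body and a generic cache can reuse its copy.
second = requests.get(url, headers={"If-None-Match": etag} if etag else {})
print(second.status_code)  # 200 (changed) or 304 (cached copy still valid)
```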
But the real reason for the current dominance of “REST” is not this sort of minor affordances. It’s really just the success of the World Wide Web. At some point in history, it transpired that everyone already had a performant, flexible HTTP server (to serve their Web site) and a solid HTTP client (to view said site), so when people started adding machine-readable resources, it was just easier and cheaper to stick to the same HTTP ways. They used HTTP methods and headers and status codes because that’s what their Web servers already understood and logged. Tools like PHP allowed them to do this with zero deployment overhead over their regular Web sites.
If uniformity and alignment with the World Wide Web are not important for you, then RPC is a tried and true architectural choice, and gRPC is a solid implementation that can save you some trouble, as ɐuıɥɔɐɯ explains.
I am fairly new to building applications using the RESTful architecture. As a matter of fact, all I have done so far would be categorized as Level 2 REST by Leonard Richardson, and I know Fielding would happily categorize it as non-RESTful.
I have spent hours trying to understand HATEOAS and how to reach level 3 (hypermedia controls), and I see it more clearly now. I conceptualize the application as a series of state transitions, and the resources dynamically provide links with information on how to move from one state to another.
But everything related to HATEOAS seems inherent to human-computer interaction. I mean, even when the resources provide the links that enable the application user to move to the next state, it is ultimately the user who drives the application from one state to the other by causing the use of the provided links.
But how are things supposed to work when we are dealing with computer-to-computer interaction? After all, when it comes to service orientation the idea of service composition is key, and we cannot naively assume that the client is always going to be a human being. Many services are designed to be consumed by non-human users, and some interactions/orchestrations might be fairly complex: the type of thing that is typically modeled with BPM or BPEL.
Are REST, and particularly HATEOAS, only usable in applications that imply human intervention? And if not, how is this supposed to work otherwise?
I am getting the vibe that REST is only good for certain types of solutions and inadequate for others, but the literature out there has failed to explain those inadequacies, selling REST instead as the cure for all evils. I just don't quite get how to use it for proper service composition when humans are not the drivers.
I'd really appreciate any references or insights on this, because, believe me, I have spent two days straight reading everything I have been able to find on this topic, and I have not yet been able to reach any reasonable, well-documented conclusions.
Well, your client app can parse the response to get the possible actions. In that case the actual URLs are obtained not from prior knowledge of the API, but from the response to the initial call (usually a GET). All human-less.
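As a sketch of that human-less flow: a client that knows only the API entry point and some link relations, and discovers the actual URLs at runtime. The entry-point URL and the HAL-style "_links" layout are assumptions for the example.

```
import requests

root = requests.get("https://api.example.com/").json()  # initial GET
orders_url = root["_links"]["orders"]["href"]  # discovered, not hard-coded

orders = requests.get(orders_url).json()
for item in orders["_links"].get("item", []):  # follow further transitions
    print(item["href"])
```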
It sounds almost as if you're comparing SOA to REST/hypermedia and failing to see that SOA is a strategy for designing a complex system made out of other systems, while REST/hypermedia is a software architecture style that applies a bunch of constraints on client-server communication. The client, however, can be either a server or a human; it doesn't matter.
To use or not to use REST/hypermedia is not something to bother with when outlining/designing service composition. It's a question that comes into play when trying to achieve syntactic interoperability. Many times it comes down to comparing REST to SOAP and other technical details.
I am wondering if it would be possible to develop an enterprise-level web application without the use of a standard MVC structure and application server, by carrying the business/flow logic and session data to the client-side JavaScript and making it talk to REST data services directly. Maybe we could make use of an authorization/authentication layer and a second validation layer sitting on top of the data services. All these services operate on standard HTTP methods, support configurable logging & monitoring, and content or query parameters are all contained in the HTTP request/response body. Static HTML and JavaScript are served to the browser, and the rest is carried out by JavaScript functions talking to the HTTP-based authorization/authentication, validation and then data services. Do you think this kind of architecture could satisfy enterprise-level web application requirements?
It's possible but unlikely; what are the drivers that suggest this architecture to you? Is it just to be different or are there some specific aspects that this best addresses?
"by carrying the business/flow logic and session data to the client-side JavaScript and make it talk to REST data services directly"
In theory you'd still be able to have an appropriately layered solution (Business Logic (BL) script vs UI focused script) but practically speaking it'd be messy, and you'd lose the ability to physically separate it into different tiers. This could "bite" you at any number of places in the life of the system.
"Enterprise" grade systems are seldom small, I hate to think how much logic you'd be having to send over the wire to support a given action/process.
Putting all the BL into a scripting language ties you to that platform, and platforms change over time. The bad thing about scripts is that whilst they are stable to a degree I'd suggest they are more exposed to change than server-based platforms like Java or .Net. In an enterprise scenario, the servers will have very tight change control and upgrade paths mapped out for them - whereas browsers are much more open to regular change.
There's the issue of compatibility - unless you're tied to a specific browser (down to the version level), guaranteeing consistent behavior is going to be harder, and will likely require more development effort. Let's say you deliver the solution successfully; what do you do when the business wants to take advantage of mobile computing - say iPads? Your only option is going to be a browser - you won't be able to take advantage of any of the native advantages of the platform. "The web and browsers" might seem like they'll be around forever - but then I'm guessing that's what mainframe folks said at the time. A server-centered solution is going to give you more life for less expense.
Staffing will be an issue - you'll need very strong JavaScript and server-side developers.
Security: having your core BL out on the client where it's much more exposed sounds very dangerous.
EDIT:
Web apps can be slow for many reasons, not many of which are reason enough to put all your BL in JavaScript on the client. Building apps for performance is a whole field of endeavor on its own; I suggest you get more familiar with architecting and implementing for performance before you write off n-tier web apps altogether :)
Regarding keeping your layers separated: there are different ways of doing this, but it boils down to abstraction, and more correctly to keeping good design principles in mind; if you haven't heard of SOLID, that would be a good place to start. In terms of implementation, start reading up on Dependency Inversion (FYI, self-promotion: the article's mine and it's .Net-focused, but you should have no problem tracking down Java-based ones too).
Are people still writing SOAP services or is it a technology that has passed its architectural shelf life? Are people returning to binary formats?
The alternative to SOAP is not binary formats.
I think you're seeing a surge in the desire to leave the complexities of WS-* behind in favor of REST and JSON, because they're much simpler to use and don't require frameworks to be used successfully. The problems that WS-* ostensibly tries to solve aren't problems for most users, but those users have to pay for the complexity anyway.
I still write WS-*–based services. Somewhat surprisingly, I've had less trouble with them when trying to inter-operate with less capable developers. This is because if I send them a WSDL file, they know how to crank it through their tool and get an API they can call, while being blissfully unaware what is happening under the hood. To give customers a REST-ful service, I have to start talking to them about HTTP and XML, which they really don't understand as well as they think they do, and then I start getting a headache.
In other words, to be successful with REST, both the service provider and consumer have to know what they're doing (and they can keep things simple and come up with a great, non–WS-* solution). With WS-* technologies, it can still succeed even if only one party has a clue.
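For a flavour of the "crank the WSDL through a tool" workflow, here is a sketch using the Python zeep library; the WSDL URL and operation name are hypothetical.

```
from zeep import Client

# Everything (types, endpoints, bindings) is read from the WSDL.
client = Client("https://example.com/orders?wsdl")
result = client.service.GetOrder(orderId=42)  # typed proxy call, no raw HTTP/XML in sight
print(result)
```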
I think, however, that REST-oriented standards that are much less complicated than the current WS standards will eventually emerge, and when that happens, comparable tools will be available too.
I think so. RESTful solutions are more and more sensible for the vast majority of use cases; the complexities of SOAP and other RPC technologies just aren't worth the effort anymore.
I wouldn't consider SOAP legacy at all. REST vs. SOAP is really just the continuation of the debate of COM/CORBA vs. HTTP POST/GET, etc. SOAP is nothing more than an updated version of the same principles defined by COM and CORBA (contracts, providers, consumers, etc.). It's just that SOAP appears to have succeeded (at least partially) where the other two failed (and it could be that SOAP just has a better marketing team); that is, SOAP really does allow different systems to connect rather easily compared to its predecessors. That being said, it still suffers from the same drawbacks that COM/CORBA did... it can get really complex.
I think REST is just coming back into style at the moment. It's nothing new, people are just taking another look at it. Look at the web. It's REST and it's been around for years. 5 years from now people are going to look back and say the same thing about it being legacy and the need to change. It's the nature of software development. Everything goes in cycles.
The debate about which one is better is going to be just like the tabs vs. spaces debate. There are going to be people on different sides swearing that one is better. Really in the end, they both accomplish the same goal. Sure one will be a better solution than the other in some situations, but in the end neither will be superior 100% of the time.
We were using SOAP, but since we control both messaging endpoints (a thick client out on the web connecting to our servers) we decided that the "lingua franca" of XML wasn't offering any real benefit. Instead, we're experimenting with binary serialization via Google protocol buffers, and we like everything we've learned so far. It's somewhat CORBA-esque, but doesn't make me grumpy the way CORBA did. We still haven't found the best fit for the RPC layer, but we're pretty sure the payload will be protocol buffers.
The point I'm trying to make is that if you control both sides of the conversation, there are significant efficiency advantages in bypassing the XML tax.
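As a sketch of what bypassing the XML tax looks like in code, assuming a hypothetical Order message compiled with protoc:

```
# Hypothetical order.proto:  message Order { int64 id = 1; string sku = 2; }
# Compiled with:  protoc --python_out=. order.proto
import order_pb2

msg = order_pb2.Order(id=42, sku="ABC-123")
wire = msg.SerializeToString()  # compact, schema-driven binary encoding
print(len(wire))  # typically a fraction of the equivalent XML

decoded = order_pb2.Order.FromString(wire)
assert decoded.sku == "ABC-123"
```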
Yes, some people still are (and now it's 2011!). I think the main reason is that MS WCF automatically generates SOAP bindings. The horror.
It's impossible to define what the best technology solution is without considering what the problem is; in other words, what the context is. Both REST and SOAP have their place. If you have a high-traffic site and a development audience that is comfortable with REST, then SOAP would be a bad choice, primarily because the message size is so incredibly bloated. If you have a small-scale site with a modest development budget, then SOAP will be a superior choice, due to automatic proxy generation from WSDL. To make a fair comparison, it should be mentioned that implementing a REST conversation takes more development time and is therefore more expensive, a very relevant fact for your boss.
While it is true that SOAP is a more complicated protocol, in my experience this doesn't translate to maintainability issues. That's because the messages ride on HTTP and can be easily debugged just like REST messages, and the SOAP stacks available on the major platforms are very solid.
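To illustrate that debuggability: a SOAP call really is just an HTTP POST, so it can be captured and replayed like any other request. Here is a sketch with the Python requests library; the endpoint, SOAPAction and envelope contents are hypothetical.

```
import requests

envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetOrder xmlns="http://example.com/orders"><orderId>42</orderId></GetOrder>
  </soap:Body>
</soap:Envelope>"""

resp = requests.post("https://example.com/orders",
                     data=envelope,
                     headers={"Content-Type": "text/xml; charset=utf-8",
                              "SOAPAction": "http://example.com/orders/GetOrder"})
print(resp.status_code, resp.text[:200])  # inspect the raw reply while debugging
```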
The complexity of SOAP is of course an advantage if your requirements include sophisticated items like federated message security. On the other hand, those kinds of requirements are not seen that often in my experience. The WS standards committee may have been vulnerable to some YAGNI issues. Now that web service communication is commonplace, it's turning out to be simpler than was originally envisioned.