Connection between programs over the network - REST

I want to dive into the whole range of tools that connect programs over a network.
To clarify the question, I have divided it into subquestions:
Why were certain groups of programs (or specific tools/frameworks/approaches, together with the programming languages those frameworks can be used with) popular in each period? (I expect a description of the problems being solved, of the tools, of why those tools were considered the best solutions at the time, and of why some of them later lost popularity.)
What is the entire history of software communication over the network? (the popularity of tools/approaches, decade by decade)
What are the modern solutions to this problem?
I can distinguish only two significant approaches.
RPC, RMI and their implementations. (I saw this question, but it deals with one concrete problem and the specific tools that solve it; I want to see where this problem fits in the whole picture of interconnecting programs over a network. I have heard of these implementations: ONC RPC, XML-RPC, CORBA, DCOM, gRPC. But which of them are still active? Which are reasonable to use? Which are preferable, and why? I don't want the answers to be opinion-based, so I accept claims like "technology A is better than technology B for problem X because ..." only if they are backed by reliable research, statistics or facts.) I heard that RPC and RMI were popular 10 years ago. Are they still?
Web services: REST, SOAP.
Am I missing something? Maybe there are technologies that solve the problem in a completely new way? Maybe there are technologies that can be treated as replacements for RPC (RMI) and web services? Can we replace RPC (RMI) with REST for any task, or only for modern tasks? Should I categorize the technologies not as RPC vs. web services but in some other manner?
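(For concreteness, here is the RPC style in miniature, using Python's standard-library XML-RPC modules, one of the implementations listed above. The host, port and add function are illustrative only, not taken from any particular system.)

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# Expose a single procedure over HTTP (address and port are arbitrary).
server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client calls it as if it were a local function - that is the whole
# point of the RPC style.
proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8000/")
print(proxy.add(2, 3))  # prints 5
```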

As a partial answer, I can give you my feedback on the use of RabbitMQ.
As explained here, it can be used in several different ways:
RPC, by implementing a "callback" queue
One-to-one and one-to-many routing strategies, to propagate your events through your whole infrastructure and target the right destination
It also comes with the ability to persist messages, so that data is not lost when a crash occurs, and with plugins that extend its possibilities (e.g. the x-delayed plugin)
This technology, written in Erlang, is powerful and is a must-try for communication between programs.
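(A minimal sketch of the callback-queue RPC pattern mentioned above, assuming a Python client using the pika library, a broker on localhost, and a server already consuming a queue named rpc_queue; all names are illustrative:)

```python
import uuid
import pika

# Connect to a local broker (assumed to be running).
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare an exclusive, auto-named queue that will receive the reply.
result = channel.queue_declare(queue="", exclusive=True)
callback_queue = result.method.queue

response = None
corr_id = str(uuid.uuid4())

def on_response(ch, method, props, body):
    global response
    if props.correlation_id == corr_id:  # match the reply to our request
        response = body

channel.basic_consume(queue=callback_queue,
                      on_message_callback=on_response,
                      auto_ack=True)

# Publish the request, telling the server where to send the reply.
channel.basic_publish(
    exchange="",
    routing_key="rpc_queue",  # the server side is assumed to consume this
    properties=pika.BasicProperties(reply_to=callback_queue,
                                    correlation_id=corr_id),
    body=b"42",
)

while response is None:
    connection.process_data_events(time_limit=1)
print("Got reply:", response)
```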

To your question "Am I missing something": yes.
Very popular communication patterns are the so-called event-driven or message-driven protocols. They are often used in distributed systems such as web applications, microservices and IoT environments. The communication is completely asynchronous and allows building scalable, loosely coupled systems.
There are many frameworks and methods for event-driven systems, such as WebSockets, webhooks and publish-subscribe, and messaging technologies like ActiveMQ, OpenMQ, RabbitMQ, ZeroMQ and MQTT.
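(As a rough illustration of the publish-subscribe pattern, here is a minimal ZeroMQ example using the Python pyzmq binding; the endpoint and topic name are made up for the example:)

```python
import time
import zmq

ctx = zmq.Context()

# Publisher side: binds a PUB socket and sends topic-prefixed messages.
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")

# Subscriber side: connects and filters on a topic prefix.
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "sensor")

time.sleep(0.2)  # give the subscription time to propagate (slow-joiner issue)
pub.send_string("sensor temperature=21.5")
print(sub.recv_string())  # -> "sensor temperature=21.5"
```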
Hope this info helps with your research.

Related

Is the direct use of UDP/TCP sockets becoming a last resort for use in production code?

Recently, I created a lightweight wrapper around the C++ Boost.Asio library for some network communication and used it to prototype some new functionality. When more internal funding came our way, we quickly moved over to a system that used Kafka, to take advantage of an existing microservice framework. No problem; I figured we would move to a different network model later on, and the internals were more important to my job than the network communication.
My question is: given the number of technologies that abstract away network interfaces (e.g. Kafka, gRPC, ActiveMQ, ZeroMQ), is the use of raw TCP/UDP sockets becoming a last resort, with software architects trying to find an existing broker/stream processor/message-passing tool to fit their model? Or are there still many new production developments built directly on TCP/UDP sockets, not counting the people who write network libraries such as those mentioned above?
Note that I don't work with Kafka, gRPC, etc. in my line of work, but I have used UDP/TCP sockets extensively in the past, so forgive any misunderstanding of those particular technologies.
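(As a point of reference for what those tools abstract away, here is roughly what a bare TCP exchange looks like with Python's plain socket API; addresses and ports are illustrative:)

```python
import socket

# Server side: bind, listen, accept.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 9000))
server.listen(1)

# Client side: connect and send raw bytes.
client = socket.create_connection(("127.0.0.1", 9000))
conn, _ = server.accept()

client.sendall(b"ping")
print(conn.recv(4))  # b'ping' - framing, retries and reconnects are all on you

for s in (client, conn, server):
    s.close()
```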

SOAP for distributed transactions

I have been reading about the differences between REST and SOAP. I see in many posts that SOAP is the better choice for distributed transactional resources.
Please give me a practical example of SOAP being used for a distributed transaction.
SOAP was the main player inside enterprise applications for many years, simply because there was no alternative. REST came later.
Since SOAP is a protocol, it is easier to build tools around it, because you always know how it behaves (i.e. as the protocol defines). For this reason, and because it is a mature technology, a lot of other specifications were built around it to cover almost any use one might have for SOAP; see a list here. Some of them, of course, cover transactional semantics. If you use SOAP with a technology like Java or C# (the heavyweight champions of the enterprise application field), these transactional specifications are already implemented in the framework or libraries, and you just use them.
REST, on the other hand, is an architectural style for building applications. It is harder to pin down to a set of specifications, and you can implement it in many ways. It also goes somewhat against "the way of SOAP" by staying away from creating new standards or specifications and instead reusing those of the web. For this reason, there are no specs or tools to help you with transactional RESTful services; you have to build your own.
So when your application is built from self-contained web services, these services need to cooperate to produce the application's outcome, and you need a distributed transaction to guarantee that the outcome is consistent (all operations succeed or none do), then it is (more) practical to go for the technology with the better tooling to support it.
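(To make the tooling point concrete: given a WSDL contract, a typed client can be generated mechanically. Here is a hedged sketch using the Python zeep library, where the WSDL URL and the CreateOrder operation are entirely hypothetical; the WS-* transaction specs plug into this same contract-driven machinery on platforms like Java and .NET:)

```python
from zeep import Client

# Hypothetical service: zeep reads the WSDL and builds a typed client for it.
client = Client("http://example.com/orders?wsdl")

# Operation name and parameters come from the (hypothetical) contract.
result = client.service.CreateOrder(customerId=42, items=["sku-1", "sku-2"])
print(result)
```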

Developing a chat/real-time web application

I am currently doing research on building a chat system with more than 10k users connected online. I came across technologies and ways to do it such as Jabber (XMPP), WebSockets, long polling and push. As far as I know, long polling might not work given the number of users. I know there are a lot of ways to accomplish this. I also know that the Facebook and Google chat systems are built on XMPP.
I would truly appreciate it if anyone could point me in the right direction. I believe all these methods and technologies are good depending on the scale of the project. I definitely need performance and scalability.
I've used Socket.io together with NodeJS for such a chat application. It scaled to over 10K concurrent users on moderate servers and there was a lot of room to grow.
This does depend on your limitations, though:
What kind of hardware are you planning on using?
Which operating system would power your servers?
Which client platforms are you targeting?
Do you have an existing infrastructure you need to fit this into?
Do you have a previously selected programming language?
What existing skill sets do your team members have, and how readily could your team adopt new platforms and languages if necessary?
Take all of the above into consideration when making your decision.
Personally, I've found XMPP to be quite adequate, but a bit bloated for my purposes. YMMV.
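(The answer above used Socket.io on NodeJS; purely as an illustrative sketch, the same event-broadcast pattern looks roughly like this in the python-socketio port, served here via aiohttp:)

```python
import socketio
from aiohttp import web

sio = socketio.AsyncServer(async_mode="aiohttp")
app = web.Application()
sio.attach(app)

@sio.event
async def connect(sid, environ):
    print("client connected:", sid)

@sio.event
async def chat_message(sid, data):
    # Broadcast the message to every connected client.
    await sio.emit("chat_message", data)

if __name__ == "__main__":
    web.run_app(app, port=8080)  # port is arbitrary
```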
You are comparing a fruit basket and three different varieties of oranges.
XMPP is the only protocol you have mentioned that is actually designed to support a chat system (of which many implementations exist). The others are simply asynchronous messaging protocols/techniques. XMPP already supports HTTP-based chat via BOSH. Without a doubt it will also support WebSockets when that specification is finalized. There is actually a draft of this already written, but at this point it appears to be a draft building on a draft, so there will probably be few, if any, implementations.
Using XMPP would allow you to build on a proven technology for implementing a chat system and would let you choose which transport to use "under the hood". You haven't actually said whether you need an HTTP-based transport or not, but with XMPP you can use the stock TCP-socket-based transport or an HTTP-based one (BOSH), with the knowledge that it will also support WebSockets in the future.
The other benefit, of course, is that this is a widely used standard that allows reuse of existing clients, servers and libraries in pretty much all popular (and not so popular) languages and platforms.
Scalability is not too much of a concern at the numbers you are quoting, as most (maybe all) existing XMPP servers will handle that many users.
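(To give a feel for how little code a proven stack leaves you to write, here is a minimal echo bot using the Python slixmpp library; the JID and password are placeholders:)

```python
import slixmpp

class EchoBot(slixmpp.ClientXMPP):
    def __init__(self, jid, password):
        super().__init__(jid, password)
        self.add_event_handler("session_start", self.on_start)
        self.add_event_handler("message", self.on_message)

    async def on_start(self, event):
        self.send_presence()
        await self.get_roster()

    def on_message(self, msg):
        if msg["type"] in ("chat", "normal"):
            # Echo the incoming message back to the sender.
            msg.reply("Echo: %s" % msg["body"]).send()

xmpp = EchoBot("bot@example.com", "secret")  # placeholder credentials
xmpp.connect()
xmpp.process(forever=True)
```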

Advantages of using an Erlang web server for a web application

Note: this question is heavily shaped by the main requirements of the web application I am building: high availability and fault tolerance. All the other requirements (like scalability and number of users) are not in question here.
I got advice from one of the members of this community to use an Erlang web server as the back end for my web application.
The suggestion was that I could use something like Mochiweb as a back end and Django/Ruby on Rails as a front end, communicating over JSON in a service-oriented model.
The only obvious advantage of this approach that I can see is that development of the front-end part is 'as usual': regular MVC stuff in Ruby on Rails or any other common framework of one's choice.
But what about other advantages? Do they actually exist?
Sure, Erlang/OTP adds fault tolerance to the system in question, but doesn't adding a web front-end layer reduce that fault tolerance to a much lower level?
Don't we introduce a single point of failure by coupling Ruby on Rails with Mochiweb? Of course Mochiweb can cope with faults, but what if something goes wrong on the front-end side?
Technically, the Erlang/OTP platform does not do anything about fault tolerance and high availability on its own. It just makes it easy to implement concurrent and distributed software. An Erlang web server running on a single machine can fail like any other, if only because the hardware fails.
So for HA sites it is more important to have proper redundancy and fallback scenarios for hardware and software failures than to use any specific software stack or platform. It will probably be slightly easier to implement in Erlang than on other platforms (if you are familiar with it, of course), but exactly the same results can be achieved with pure Ruby/Python/Java/C or almost anything else.
The web industry has tons of experience setting up fault-tolerant front ends. It is just a matter of setting up multiple web machines (often light reverse proxies) and some sort of HA manager (built into many load-balancing solutions). The back end is usually the harder part.
I wouldn't use Erlang as a front-end web server if the back end is some other technology.
Many of the benefits of Erlang as a web server appear when the back end also runs on Erlang, the biggest being lower I/O costs. When your front end and back end are completely separate software stacks, you lose that benefit.
If you're going to build something on Rails, you might as well use something you can get more help with on the front end, such as nginx.

Is this thinking accurate on ESB vs. REST-styled interfaces?

To tie together various legacy applications, some of them mainframe-based, I'm trying to compare using an ESB like MQSeries, a WS-* approach, and something more RESTful.
Is there much substance to the idea that writing the interfaces in a REST style, instead of MQ or even WS-*, may have the secondary benefit of taking us closer to web-enabling portions of the apps (for use by humans with browsers)?
Applications were being web-enabled for years before REST became a fad, and will continue to be for years to come. I don't see much of a relationship between the two.
BTW, I don't think of MQSeries as having much to do with ESBs; it's just a message-queueing system, largely equivalent to MSMQ.