REST for low-latency messaging vs. sockets

Why don't you see more people using a REST architecture for client/server systems? You see people using sockets, TIBCO RV, EMS, or MQ, but I haven't seen much basic REST architecture.
Does anyone know of a reason to avoid this architecture for client/server communication that needs high throughput and low latency?

REST is not a good fit for every problem.
REST is best for resource management. If you are writing web services (as with a client-server system) then you find you want things like language-agnostic data representation, argument validation, client/server code generation, error handling, and access controls. REST basically requires you to code those things yourself.
On the other hand, it adds the HTTP layer. You get seamless integration of proxies, caching, and so on, but you do lose some speed due to HTTP headers, the web-server front end, etc.

I don't know that I would necessarily avoid it but I can think of a couple of reasons why I might not choose it for a high through-put, low latency service. First, you have to deal with the entire web stack to get your message to your service. This could introduce a number of unnecessary layers and services that would delay messages. A custom service need only support the protocol layers required by the service itself.
Second, unless your service is the only service hosted on the web server, you'll be competing with other requests for your messages to be serviced. While having a custom endpoint for your service may not solve all resource contention problems, at least you don't have to compete for access from other services to your endpoint.
Third, a custom protocol need only carry the actual service-related protocol information and may result in smaller packet sizes, because you don't need to carry the additional HTTP protocol overhead. This would particularly affect protocols that exchange small messages, as the header information would be a larger fraction of the message size.
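To put rough numbers on that last point, here is a small Python sketch (with invented but realistic header values) comparing the bytes on the wire for a tiny payload wrapped in HTTP/1.1 framing versus a minimal length-prefixed custom frame:

```python
import json
import struct

# A small application message, typical of a ticking price feed.
payload = json.dumps({"symbol": "ABC", "bid": 101.25, "ask": 101.27}).encode()

# Hypothetical but realistic HTTP/1.1 request framing for the same payload.
http_request = (
    b"POST /quotes HTTP/1.1\r\n"
    b"Host: pricing.internal.example\r\n"
    b"Content-Type: application/json\r\n"
    b"Content-Length: " + str(len(payload)).encode() + b"\r\n"
    b"Connection: keep-alive\r\n"
    b"\r\n" + payload
)

# A minimal custom framing: 4-byte big-endian length prefix, then the payload.
custom_frame = struct.pack(">I", len(payload)) + payload

print(f"payload:       {len(payload)} bytes")
print(f"HTTP framing:  {len(http_request)} bytes "
      f"({len(http_request) - len(payload)} bytes of overhead)")
print(f"custom frame:  {len(custom_frame)} bytes "
      f"({len(custom_frame) - len(payload)} bytes of overhead)")
```

Running this shows on the order of 130 bytes of HTTP framing around a payload of under 50 bytes, versus 4 bytes for the length prefix; with larger payloads the fixed header cost matters far less.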

Does gRPC vs NATS or Kafka make any sense?

For a long time, when it comes to microservice architecture, NATS and Kafka have been the first options that come to my mind. But recently I found the gRPC template in dotnet core and it grabbed my attention. I read a lot about it and watched a lot of videos, but I don't think any of them addressed gRPC properly, as they usually contrast gRPC with message brokers or with protocols such as REST, which I think is inappropriate (although a comparison with SOAP would be relevant).
My assumption is that gRPC is a modern version of SOAP with better performance and less implementation hassle thanks to its protocol buffers. I also think that gRPC can by no means be compared against Kafka or NATS, and that it cannot replace RESTful services any more than SOAP could.
Now, the question: to what extent are my assumptions true? For example, when it comes to selecting a communication bridge between nodes in a cluster, should I now put gRPC among my options (alongside NATS, Kafka, Rabbit, etc.), or should I only consider it when creating a web proxy to bridge external requests to my microservices?
Finally, how about real-time communication: can gRPC completely replace WebSocket/socket.io/SignalR? What does it replace?
I often see people misplacing these technologies with respect to one crucial aspect: public authentication.
For instance, consider this benchmark of Inverted Json (https://github.com/lega911/ijson), which compares tools such as iJson, RabbitMQ, NATS, ZeroMQ, and others.
Notice that NATS, ZeroMQ, and iJson are not meant to be used as public endpoints (for instance, NATS has user/password, token, and key authentication, but that is useless in an open environment such as a web browser, because there is no way to keep the key private).
gRPC, on the other hand, works just fine with JWT and OAuth2, making it perfectly safe for public endpoints (as safe as any other HTTP endpoint), because those tokens are server-signed (so even though they are public, they can't be forged or tampered with).
So, what I'm trying to say is: there are technologies meant to face the public and technologies meant to glue together servers and processes within servers (which are private connections).
gRPC is public-facing; ZeroMQ and iJson are totally private (iJson, for instance, doesn't have any kind of authentication). NATS works with keys or passwords, so although it is "safer" than iJson and ZeroMQ, it is not meant to be public.
When you say REST (I'm assuming HTTP here, because REST is just an architecture) or WebSocket/socket.io/SignalR, you are describing public interfaces. gRPC covers you here: it is comparable to REST for request/response, and to WebSocket/socket.io/SignalR because it supports half- and full-duplex streaming (similar to sockets).
Nats, iJson, ZeroMQ, on the other hand, are not meant to do that. They are meant to communicate between services.
So, basically: REST/WebSocket/socket.io/SignalR = gRPC.
Internal communication between services (on the same server or on different servers) = NATS, iJson, ZeroMQ.
(Notice that I'm not even considering the other technologies in that benchmark, because in my opinion they are products, not simple libraries you can use to achieve an end, such as RabbitMQ, nginx, etc. The remaining ones I'm not familiar enough with to have an opinion, though I'm surprised by how uvloop fares there.)
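To make the "safe for public endpoints" claim concrete, attaching a server-signed token to every call with the Python grpcio library looks roughly like the sketch below; the orders_pb2/orders_pb2_grpc modules and the OrderService RPC are invented placeholders for whatever your generated code provides:

```python
import grpc

# Placeholder names: assume `orders_pb2` / `orders_pb2_grpc` were generated
# from a hypothetical orders.proto declaring an OrderService.GetOrder RPC.
import orders_pb2
import orders_pb2_grpc

def make_channel(target: str, jwt_token: str) -> grpc.Channel:
    # TLS for the transport, plus per-call credentials carrying the signed token.
    channel_creds = grpc.ssl_channel_credentials()
    call_creds = grpc.access_token_call_credentials(jwt_token)
    composite = grpc.composite_channel_credentials(channel_creds, call_creds)
    return grpc.secure_channel(target, composite)

channel = make_channel("api.example.com:443", jwt_token="<server-signed JWT>")
stub = orders_pb2_grpc.OrderServiceStub(channel)
response = stub.GetOrder(orders_pb2.GetOrderRequest(order_id="42"))
```

The token itself is whatever your identity provider issues; the point is that the signature, not the secrecy of the channel endpoint, is what makes the call trustworthy.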
Your intuition is correct that gRPC is not comparable to an asynchronous queueing system like Kafka, RabbitMQ, etc.
It is however a replacement for synchronous server to server communication technologies often implemented over SOAP, RPC, REST, etc. where you are expecting to get a response from another server rather than firing a message into a queue and then effectively forgetting about the message.
gRPC is definitely an option for real-time communication. It can replace socket communication if you are not streaming to the browser (which has no native gRPC support); have a look at its bidirectional streaming support.
As for replacing Kafka/RabbitMQ: gRPC can be used as a pub/sub system, since it supports bidirectional streaming, but I would not recommend it.
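For reference, a bidirectional-streaming handler with the Python grpcio library looks roughly like this; the chat_pb2/chat_pb2_grpc modules are assumed to be generated from a hypothetical chat.proto declaring `rpc Chat(stream Message) returns (stream Message)`:

```python
from concurrent import futures

import grpc

# Assumed generated code from a hypothetical chat.proto:
#   rpc Chat(stream Message) returns (stream Message);
import chat_pb2
import chat_pb2_grpc

class ChatService(chat_pb2_grpc.ChatServicer):
    def Chat(self, request_iterator, context):
        # Each incoming message is echoed back on the same long-lived stream,
        # which is what makes gRPC comparable to a socket-style connection.
        for message in request_iterator:
            yield chat_pb2.Message(text=f"echo: {message.text}")

def serve() -> None:
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    chat_pb2_grpc.add_ChatServicer_to_server(ChatService(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    serve()
```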

What can go wrong if we do NOT follow RESTful best practices?

TL;DR : scroll down to the last paragraph.
There is a lot of talk about best practices when defining RESTful APIs: what HTTP methods to support, which HTTP method to use in each case, which HTTP status code to return, when to pass parameters in the query string vs. in the path vs. in the content body vs. in the headers, how to do versioning, result set limiting, pagination, etc.
If you are already determined to make use of best practices, there are lots of questions and answers out there about what is the best practice for doing any given thing. Unfortunately, there appears to be no question (nor answer) as to why use best practices in the first place.
Most of the best practice guidelines direct developers to follow the principle of least surprise, which, under normal circumstances, would be a good enough reason to follow them. Unfortunately, REST-over-HTTP is a capricious standard, the best practices of which are impossible to implement without becoming intimately involved with it, and the drawback of intimate involvement is that you tend to end up with your application being very tightly bound to a particular transport mechanism. So, some people (like me) are debating whether the benefit of "least surprise" justifies the drawback of littering the application with REST-over-HTTP concerns.
A different approach examined as an alternative to best practices suggests that our involvement with HTTP should be limited to the bare minimum necessary in order to get an application-defined payload from point A to point B. According to this approach, you only use a single REST entry point URL in your entire application, you never use any HTTP method other than HTTP POST, never return any HTTP status code other than HTTP 200 OK, and never pass any parameter in any way other than within the application-specific payload of the request. The request will either fail to be delivered, in which case it is the responsibility of the web server to return an "HTTP 404 Not Found" to the client, or it will be successfully delivered, in which case the delivery of the request was "HTTP 200 OK" as far as the transport protocol is concerned, and anything else that might go wrong from that point on is exclusively an application concern, and none of the transport protocol's business. Obviously, this approach is kind of like saying "let me show you where to stick your best practices".
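For illustration only, the "bare minimum HTTP" approach described above boils down to something like this Flask 2.x sketch (the endpoint path, command names, and envelope shape are all invented):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical application-level handlers, keyed by a command name in the payload.
HANDLERS = {
    "get_user": lambda params: {"id": params.get("id"), "name": "example"},
}

@app.post("/api")  # the single entry-point URL
def api():
    envelope = request.get_json(force=True)
    handler = HANDLERS.get(envelope.get("command"))
    if handler is None:
        # Still HTTP 200: the failure lives in the application payload, not the status code.
        return jsonify({"ok": False, "error": "UnknownCommand"})
    try:
        return jsonify({"ok": True, "result": handler(envelope.get("params", {}))})
    except Exception as exc:  # everything past delivery is an application concern
        return jsonify({"ok": False, "error": type(exc).__name__})
```

Every outcome other than "the request never arrived" is expressed inside the JSON envelope, never in the HTTP status code.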
Now, there are other voices that say that things are not that simple, and that if you do not follow the RESTful best practices, things will break.
The story goes that, for example, in the event of unauthorized access you should return an actual "HTTP 401 Unauthorized" (instead of a successful response containing a JSON-serialized UnauthorizedException) because, upon receiving the 401, the browser will prompt the user for credentials. Of course this does not really hold water, because REST requests are not issued by browsers being used by human users.
Another, more sophisticated way the story goes is that usually, between the client and the server exist proxies, and these proxies inspect HTTP requests and responses, and try to make sense out of them, so as to handle different requests differently. For example, they say, somewhere between the client and the server there may be a caching proxy, which may treat all requests to the exact same URL as identical and therefore cacheable. So, path parameters are necessary to differentiate between different resources, otherwise the caching proxy might only ever forward a request to the server once, and return cached responses to all clients thereafter. Furthermore, this caching proxy may need to know that a certain request-response exchange resulted in a failure due to a particular error such as "Permission Denied", so as to again not cache the response, otherwise a request resulting in a temporary error may be answered with a cached error response forever.
So, my questions are:
Besides "familiarity" and "least surprise", what other good reasons are there for following REST best practices? Are these concerns about proxies real? Are caching proxies really so dumb as to cache REST responses? Is it hard to configure the proxies to behave in less dumb ways? Are there drawbacks in configuring the proxies to behave in less dumb ways?
It's worth considering that what you're suggesting is the way that HTTP APIs used to be designed for a good 15 years or so. API designers are tending to move away from that approach these days. They really do have their reasons.
Some points to consider if you want to avoid using ReST over HTTP:
ReST over HTTP is an efficient use of the HTTP/S transport mechanism. Avoiding the ReST paradigm runs the risk of every request / response being wrapped in verbose envelopes. SOAP is an example of this.
ReST encourages client and server decoupling by putting application semantics into standard mechanisms - HTTP and XML/JSON (or other data formats). These protocols and standards are well supported by standard libraries and have been built up over years of experience. Sure, you can create your own 'unauthorized' response body with a 200 status code, but ReST frameworks just make it unnecessary, so why bother?
ReST is a design approach which encourages a view of your distributed system that focuses on data rather than functionality, and this has proven to be a useful way of building distributed systems. Avoiding ReST runs the risk of drifting towards very RPC-like mechanisms, which have some risks of their own:
they can become very fine-grained and 'chatty'
which can be an inefficient use of network bandwidth
which can tightly couple client and server, through introducing statefulness and temporal coupling between requests.
and can be difficult to scale horizontally
Note: there are times when an RPC approach is actually a better way of breaking down a distributed system than a resource-oriented approach, but they tend to be the exceptions rather than the rule.
existing tools for developers make debugging / investigations of ReSTful APIs easier. It's easy to use a browser to do a simple GET, for example. And tools such as Postman or RestClient already exist for more complex ReST-style queries. In extreme situations tcpdump is very useful, as are browser debugging tools such as firebug. If every API call has application layer semantics built on top of HTTP (e.g. special response types for particular error situations) then you immediately lose some value from some of this tooling. Building SOAP envelopes in PostMan is a pain. As is reading SOAP response envelopes.
network infrastructure around caching really can be as dumb as you're asking. It's possible to get around this, but you really do have to think about it, and it will inevitably involve increased network traffic in some situations where it's unnecessary. And caching responses for repeated queries is one way in which APIs scale out, so you'll likely need to 'solve' the problem of caching repeated queries yourself (i.e. reinvent the wheel).
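As a concrete version of the caching point just above: the "boring" ReST alternative is to let standard HTTP caching metadata do the work, so that commodity proxies can cache for you. A minimal Flask sketch (resource name and body invented):

```python
import hashlib

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.get("/products/<int:product_id>")
def get_product(product_id: int):
    resp = jsonify({"id": product_id, "name": "example product"})
    # Standard HTTP caching metadata: commodity reverse proxies and browser
    # caches understand these headers, so no bespoke caching layer is needed.
    resp.headers["Cache-Control"] = "public, max-age=60"
    resp.set_etag(hashlib.sha1(resp.get_data()).hexdigest())
    # Answers with 304 Not Modified when the client's If-None-Match matches.
    return resp.make_conditional(request)
```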
Having said all that, if you want to look into a pure message-passing design for your distributed system rather than a ReSTful one, why consider HTTP at all? Why not simply use some message-oriented middleware (e.g. RabbitMQ) to build your application, possibly with some sort of HTTP bridge somewhere for Internet-based clients? Using HTTP as a pure transport mechanism involving a simple 'message accepted / not accepted' semantics seems overkill.
REST is intended for long-lived network-based applications that span multiple organizations. If you don’t see a need for the constraints, then don’t use them. -- Roy T Fielding
Unfortunately, there appears to be no question (nor answer) as to why use best practices in the first place.
When in doubt, go back to the source
Fielding's dissertation really does quite a good job at explaining how the REST architectural constraints ensure that you don't destroy the properties those constraints are designed to protect.
Keep in mind - before the web (which is the reference application for REST), "web scale" wasn't a thing; the notion of a generic client (the browser) that could discover and consume thousands of customized applications (provided by web servers) had not previously been realized.
According to this approach, you only use a single REST entry point URL in your entire application, you never use any HTTP method other than HTTP POST, never return any HTTP status code other than HTTP 200 OK, and never pass any parameter in any way other than within the application-specific payload of the request.
Yup - that's a thing, it's called RPC; you are effectively taking the web, and stripping it down to a bare message transport application that just happens to tunnel through port 80.
In doing so, you have stripped away the uniform interface -- you've lost the ability to use commodity parts in your deployment, because nobody can participate in the conversation unless they share the same interpretation of the message data.
Note: that doesn't at all imply that RPC is "broken"; architecture is about tradeoffs. The RPC approach gives up some of the value derived from the properties guarded by REST, but that doesn't mean it doesn't pick up value somewhere else. Horses for courses.
Besides "familiarity" and "least surprise", what other good reasons are there for following REST best practices?
Cheap scaling of reads - as your offering becomes more popular, you can service more clients by installing a farm of commodity reverse-proxies that will serve cached representations where available, and only put load on the server when no fresh representation is available.
Prefetching - if you are adhering to the safety provisions of the interface, agents (and intermediaries) know that they can download representations at their own discretion without concern that the operators will be liable for loss of capital. AKA - your resources can be crawled (and cached)
Similarly, use of idempotent methods (where appropriate) communicates to agents (and intermediaries) that retrying the send of an unacknowledged message causes no harm (for instance, in the event of a network outage); a small retry sketch follows this list.
Independent innovation of clients and servers, especially cross organizations. Mosaic is a museum piece, Netscape vanished long ago, but the web is still going strong.
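As promised above, here is a small illustration of the idempotency point: retry policy can be pushed into generic HTTP tooling precisely because the methods advertise their own safety. This sketch uses the requests/urllib3 retry machinery (the `allowed_methods` parameter name assumes urllib3 1.26+, and the URL is invented):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retries are configured only for idempotent methods: resending a PUT that was
# never acknowledged cannot create a second order, whereas retrying a POST could.
retry = Retry(
    total=3,
    backoff_factor=0.5,
    status_forcelist=[502, 503, 504],
    allowed_methods=["GET", "HEAD", "PUT", "DELETE"],  # note: not POST
)
session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retry))

# Safe to retry: PUT to a specific resource URI is idempotent by contract.
session.put("https://api.example.com/orders/42", json={"status": "confirmed"})
```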
Of course this does not really hold any water, because REST requests are not issued by browsers being used by human users.
Of course they are -- where do you think you are reading this answer?
So far, REST works really well at exposing capabilities to human agents; which is to say that the server side is so ubiquitous at this point that we hardly think about it any more. The notion that you -- the human operator -- can use the same application to order pizza, run diagnostics on your house, and remote start your car is as normal as air.
But you are absolutely right that replacing the human still seems a long ways off; there are various standards and media types for communicating semantic content of data -- the automated client can look at markup, identify a phone number element, and provide a customized array of menu options from it -- but building into agents the sorts of fuzzy intelligence needed to align offered capabilities with goals, or to recover from error conditions, seems to be a ways off.

Microservices: REST vs Messaging

I have heard that Amazon uses HTTP for its microservice-based architecture. An alternative is to use a messaging system like RabbitMQ or Solace. I personally have experience with a Solace-based microservice architecture, but never with REST.
Any idea what the various big-league implementations such as Amazon, Netflix, UK Gov, etc. use?
The other aspect is that, in microservices, the following things are required (among others):
* Pattern matching
* Async messaging - the receiving system may be down
* Publish/subscribe
* Cache-load events - i.e. on start-up, a service may need to load all data from a couple of other services, and should be notified when the data is completely loaded, so that it 'knows' it is now ready to service requests
These aspects come naturally with messaging rather than REST. Why should anyone use REST (except for a public API)? Thanks.
A standard that I've followed in the past is to use web services when the key requirement is speed (and data loss isn't critical) and messaging when the key requirement is reliability. Like you've said, if the receiving system is down, a message will sit on a queue until the system comes back up to process it. If it's a REST endpoint and it's down, requests will simply fail.
A REST API presumes the use of HTTP only; that is quite stone-age technology and does not support asynchronous messaging. To plug messaging in, I would consider WebSocket gateways.
(Sorry if these statements turn out to be naive.)

Microservices, AMQP or REST [duplicate]

A little background.
Very big monolithic Django application. All components use the same database. We need to separate services so we can independently upgrade some parts of the system without affecting the rest.
We use RabbitMQ as the broker for Celery.
Right now we have two options:
HTTP Services using a REST interface.
JSON-RPC over AMQP to an event-loop service
My team is leaning towards HTTP because that's what they are familiar with, but I think the advantages of using RPC over AMQP far outweigh that familiarity.
AMQP gives us the ability to easily add load balancing and high availability, with guaranteed message delivery.
With HTTP, by contrast, we have to create client HTTP wrappers to work with the REST interfaces, and we have to put in a load balancer and set up that infrastructure in order to have HA, etc.
With AMQP I can just spawn another instance of the service, it will connect to the same queue as the other instances and bam, HA and load balancing.
Am I missing something with my thoughts on AMQP?
First of all:
REST and RPC are architectural patterns; AMQP is a wire-level protocol and HTTP is an application protocol, both of which run on top of TCP/IP.
AMQP is a special-purpose protocol whereas HTTP is a general-purpose one; thus HTTP has damn high overhead compared to AMQP.
AMQP is asynchronous by nature, whereas HTTP is synchronous by nature.
Both REST and RPC use data serialization; which format is up to you, and it depends on your infrastructure. If you are using Python everywhere, I think you can use Python's native serialization, pickle, which should be faster than JSON or other formats.
Both HTTP+REST and AMQP+RPC can run in heterogeneous and/or distributed environments.
So if you are choosing between HTTP+REST and AMQP+RPC, the answer really comes down to infrastructure complexity and resource usage. Without any specific requirements both solutions will work fine, but I would rather build some abstraction so I can switch between them transparently.
You said that your team is familiar with HTTP but not with AMQP. If development time is important, there is your answer.
If you want to build an HA infrastructure with minimal complexity, I guess the AMQP protocol is what you want.
I have experience with both, and the advantages of RESTful services are:
they map well onto web interfaces
people are familiar with them
they are easy to debug (due to the general-purpose nature of HTTP)
it is easy to provide an API to third-party services.
Advantages of an AMQP-based solution:
damn fast
flexible
cost-effective (in terms of resource usage)
Note that you can provide a RESTful API to third-party services on top of your AMQP-based API, since REST is not a protocol but rather a paradigm; you just need to keep that in mind when designing your AMQP RPC API. I have done it this way to provide an API to external third-party services, and to give access to the API from those parts of the infrastructure that run on an old codebase or where it is not possible to add AMQP support.
If I understand correctly, your question is about how best to organize communication between the different parts of your software, not how to provide an API to end users.
If you have a high-load project, RabbitMQ is a damn good piece of software, and you can easily add any number of workers running on different machines. It also has mirroring and clustering out of the box. One more thing: RabbitMQ is built on top of Erlang/OTP, which is a highly reliable, stable platform ... (bla-bla-bla); that is good not only for marketing but for engineers too. I had an issue with RabbitMQ only once, when nginx logs took up all the disk space on the same partition where RabbitMQ was running.
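The "just spawn another instance" effect the question describes maps onto RabbitMQ's competing-consumers pattern; a minimal worker with the pika client might look like this (queue name and handler are invented):

```python
import pika

def handle(ch, method, properties, body):
    # ... do the actual work for this request ...
    print(f"processed: {body!r}")
    ch.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="tasks", durable=True)

# Every worker started with this script attaches to the same queue; RabbitMQ
# round-robins messages between them, and prefetch_count=1 keeps a slow worker
# from hoarding work. That is the "spawn another instance and you get load
# balancing and HA" effect described above.
channel.basic_qos(prefetch_count=1)
channel.basic_consume(queue="tasks", on_message_callback=handle)
channel.start_consuming()
```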
UPD (May 2018):
Saurabh Bhoomkar posted a link to the MQ vs. HTTP article written by Arnold Shoon on June 7th, 2012, here's a copy of it:
I was going through my old files and came across my notes on MQ and thought I’d share some reasons to use MQ vs. HTTP:
If your consumer processes at a fixed rate (i.e. can’t handle floods to the HTTP server [bursts]) then using MQ provides the flexibility for the service to buffer the other requests vs. bogging it down.
Time independent processing and messaging exchange patterns — if the thread is performing a fire-and-forget, then MQ is better suited for that pattern vs. HTTP.
Long-lived processes are better suited for MQ, as you can send a request and have a separate thread listening for responses (note that WS-Addressing allows HTTP to process in this manner but requires both endpoints to support that capability).
Loose coupling where one process can continue to do work even if the other process is not available vs. HTTP having to retry.
Request prioritization where more important messages can jump to the front of the queue.
XA transactions – MQ is fully XA compliant – HTTP is not.
Fault tolerance – MQ messages survive server or network failures – HTTP does not.
MQ provides for 'assured' delivery of messages once and only once; HTTP does not.
MQ provides the ability to do message segmentation and message grouping for large messages – HTTP does not have that ability, as it treats each transaction separately.
MQ provides a pub/sub interface where-as HTTP is point-to-point.
UPD (Dec 2018):
As noted by Kevin in the comments below, it is questionable whether RabbitMQ scales better than RESTful services. My original answer was based on simply adding more workers, which is just one part of scaling, and it holds true only as long as a single AMQP broker's capacity is not exceeded; beyond that you need more advanced techniques such as Highly Available (Mirrored) Queues, which means that both HTTP- and AMQP-based services involve non-trivial complexity to scale at the infrastructure level.
After careful thought I also removed the claim that maintaining an AMQP broker (RabbitMQ) is simpler than maintaining any HTTP server: the original answer was written in June 2013 and a lot has changed since then, but the main change is that I have gained more insight into both approaches, so the best I can say now is "your mileage may vary".
Also note that comparing HTTP and AMQP is, to some extent, comparing apples to oranges, so please do not interpret this answer as the ultimate guidance to base your decision on; rather, take it as one source, or as a reference for your further research into what exact solution will match your particular case.
The irony of the solution the OP had to accept is that AMQP and other MQ solutions are often used to insulate callers from the inherent unreliability of HTTP-only services -- to provide some level of timeout and retry logic and message persistence so the caller doesn't have to implement its own HTTP insulation code. A very thin HTTP gateway or adapter layer over a reliable AMQP core, with the option to go straight to AMQP using a more reliable client protocol like JSON-RPC, would often be the best solution for this scenario.
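A rough sketch of the kind of thin HTTP-over-AMQP adapter described here, following the standard RabbitMQ RPC pattern with pika and Flask (queue names, payload shape, and the /orders endpoint are all invented; a real gateway would pool connections rather than open one per request):

```python
import json
import time
import uuid

import pika
from flask import Flask, jsonify, request

app = Flask(__name__)

def amqp_rpc(payload: dict, routing_key: str = "rpc.orders", timeout: float = 5.0) -> dict:
    """Publish one request over AMQP and block until the matching reply arrives."""
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    # Exclusive, auto-named reply queue for this one call.
    reply_queue = channel.queue_declare(queue="", exclusive=True).method.queue
    corr_id = str(uuid.uuid4())
    response: dict = {}

    def on_reply(ch, method, properties, body):
        if properties.correlation_id == corr_id:
            response["body"] = json.loads(body)

    channel.basic_consume(queue=reply_queue, on_message_callback=on_reply, auto_ack=True)
    channel.basic_publish(
        exchange="",
        routing_key=routing_key,
        properties=pika.BasicProperties(reply_to=reply_queue, correlation_id=corr_id),
        body=json.dumps(payload),
    )
    deadline = time.monotonic() + timeout
    while "body" not in response and time.monotonic() < deadline:
        connection.process_data_events(time_limit=1)  # pump I/O, dispatch on_reply
    connection.close()
    return response.get("body", {"ok": False, "error": "timeout"})

@app.post("/orders")
def create_order():
    # The HTTP layer stays paper-thin: it only translates the request into an
    # AMQP RPC call and relays whatever the backend service answered.
    return jsonify(amqp_rpc({"action": "create_order", "data": request.get_json()}))
```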
Your thoughts on AMQP are spot on!
Furthermore, since you are transitioning from a monolithic to a more distributed architecture, adopting AMQP for communication between the services is all the more suitable for your use case. Here is why…
Communication via a REST interface (and, by extension, HTTP) is synchronous in nature — this synchronous nature of HTTP makes it a not-so-great option as the pattern of communication in a distributed architecture like the one you describe. Why?
Imagine you have two services, service A and service B, in your Django application that communicate via REST API calls. These API calls usually play out this way: service A makes an HTTP request to service B, waits idly for the response, and only proceeds to the next task after getting a response from service B. In essence, service A is blocked until it receives a response from service B.
This is problematic because one of the goals of microservices is to build small autonomous services that are always available even if one or more other services are down — no single point of failure. The fact that service A connects directly to service B and, in fact, waits for a response introduces a level of coupling that detracts from the intended autonomy of each service.
AMQP, on the other hand, is asynchronous in nature — this asynchronous nature makes it a great fit for your scenario and others like it.
If you go down the AMQP route, instead of service A making requests to service B directly, you can introduce an AMQP based MQ between these two services. Service A will add requests to the Message Queue. Service B then picks up the request and processes it at its own pace.
This approach decouples the two services and, by extension, makes them autonomous. This is true because:
If service B fails unexpectedly, service A will keep accepting requests and adding them to the queue as though nothing happened. The requests will always be in the queue for service B to process when it's back online (a short sketch of the durable-queue setup this relies on follows this list).
If service A experiences a spike in traffic, service B won't even notice, because it only picks up requests from the message queue at its own pace.
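The "requests wait in the queue while service B is down" behaviour depends on both the queue and the messages being marked durable; with the pika client that is roughly (queue name and payload invented):

```python
import json

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Durable queue: survives a broker restart.
channel.queue_declare(queue="service_b_requests", durable=True)

# Persistent message: written to disk, so it is still there when service B
# comes back up, even though service A has long since moved on.
channel.basic_publish(
    exchange="",
    routing_key="service_b_requests",
    body=json.dumps({"task": "resize_image", "image_id": 123}),
    properties=pika.BasicProperties(delivery_mode=2),
)
connection.close()
```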
This approach also has the added benefit of being easy to scale— you can add more queues or create copies of service B to process more requests.
Lastly, service A does not have to wait for a response from service B, so end users don't have to wait long either — this leads to improved performance and, by extension, a better user experience.
Just in case you are considering moving from HTTP to AMQP in your distributed architecture and are not sure how to go about it, you can check out this 7-part beginner guide on message queues and microservices. It shows you how to use a message queue in a distributed architecture by walking you through a demo project.

What are the pros and cons of HTTP callbacks vs. message passing?

We are looking to develop a number of services, but are not sure which "response" mechanism is the best route to go. The two contenders are:
HTTP callbacks, where the service would update the client application via "pinging" it with update messages sent via HTTP requests
Message Passing, where the service would update the client via publishing messages into a pub-sub queue on a message server
In both cases, both the caller and the services are within our network, we have full control over them, and things we develop are the only users of the services.
What are the pros / cons of each way of providing status updates to the calling application, and what, if any, pros / cons would there be for making the initial request via one method or the other?
Note: The first service we have in mind for this is an email service similar to SendGrid, which we can't use for various reasons, but still need the same functionality.
The main difference would be the quality of service that you get "out of the box" with a messaging server.
If you go with HTTP then your application has to take care of what happens when a message doesn't arrive as expected. To get an idea of the issues you need to consider and the complexities involved in solving them, take a look at WS-ReliableMessaging or HTTPLR.
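To give a feel for what "your application has to take care of it" means with HTTP callbacks, even a minimal delivery loop ends up owning the timeout, retry, backoff, and give-up policy itself; a rough sketch using the requests library (URL and payload invented):

```python
import time

import requests

def deliver_callback(url: str, payload: dict, attempts: int = 5) -> bool:
    """Best-effort webhook delivery with exponential backoff.

    Everything here (timeouts, retry count, what counts as success, what to do
    after the final failure) is application code you now own and must test.
    """
    for attempt in range(attempts):
        try:
            resp = requests.post(url, json=payload, timeout=2)
            if resp.ok:
                return True
        except requests.RequestException:
            pass  # network error: fall through to the retry path
        time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
    # Out of attempts: persist to a dead-letter store, alert, or drop -- your call.
    return False

deliver_callback("https://client.example.com/email-status",
                 {"message_id": "abc", "status": "delivered"})
```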
With messaging, you get a configurable level of reliability out of the box. And there's a lot of good choice these days such as ActiveMQ, RabbitMQ, 0MQ.
My personal preference is for reliability to be handled at the transport layer (by messaging), but then for a good discussion and dissenting view, check out "Nobody Needs Reliable Messaging."