Does HTTP/2 multiplexing violate REST API rules?

Multiplexing is a pretty cool feature of http/2. It allows using one connection to serve multiple requests from a single client simultaneously.
My question is: does this multiplexing feature violate REST API rules?
I understand that a REST API enforces a request-response architecture, but multiplexing without the server-push (streaming) feature enabled is still essentially a one request -> one response paradigm, so that is not a violation, right?
A REST API also enforces statelessness, and I'm lost there: is multiplexing over a single connection considered stateful or stateless?
If I want to upgrade a REST API that is currently implemented with HTTP/1.1 to HTTP/2, am I free to use the multiplexing feature, or do I have to run stream after stream (req1, res1, req2, res2, ...)?

Network multiplexing and a REST API are two entirely different concerns, at different layers of responsibility.
Multiplexing is about how communication signals flow, not about the architectural pattern of HTTP message exchange (which is what REST is all about).
From the REST perspective, it does not matter:
how electrical signals flow in the cable or wirelessly;
what type of cable or other physical medium you use for transferring data;
whether you maintain a single physical (TCP) connection throughout several request-response cycles or open and close a TCP connection for each HTTP request-response;
whether you use something other than TCP (that's not a good idea, but theoretically, as long as the communication is ensured to have integrity, consistency and stability, which is what TCP brings, it doesn't much matter how the physical connection is established).
Because,
REST is an architectural pattern for designing and implementing web applications.
Multiplexing is about how the underlying signals/connection are implemented.
As long as HTTP messages flow seamlessly between client and server, the physical and transport layers have nothing to do with REST endpoints; hence, there is nothing in multiplexing that can violate anything in REST, since, again, the two serve entirely different purposes.
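To make this concrete, here is a minimal sketch (assuming the Python httpx client installed with its optional HTTP/2 support, and a hypothetical https://api.example.com service): the REST calls are written exactly the same way regardless of the transport; whether the two exchanges are multiplexed over one HTTP/2 connection or sent over separate HTTP/1.1 connections is decided entirely below the level a REST API designer cares about.

    import httpx

    # Minimal sketch: the same two REST calls whether the client negotiates
    # HTTP/1.1 or HTTP/2 underneath. Assumes `pip install httpx[http2]` and a
    # hypothetical https://api.example.com service.
    with httpx.Client(http2=True, base_url="https://api.example.com") as client:
        orders = client.get("/orders/42")  # still one request -> one response
        user = client.get("/users/7")      # still stateless: each request is self-contained
        print(orders.http_version, user.http_version)  # e.g. "HTTP/2" if the server supports it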

Related

What is the difference between "WebSocket" and "REST API"?

I always use a REST API when I get or post some data.
But a WebSocket can also do that.
So I am confused about the difference between a WebSocket and a REST API
when I try to get or post some data.
A REST API uses HTTP as the underlying protocol for communication, which in turn follows the request and response paradigm. With WebSockets, although the communication still starts off over HTTP, it is then upgraded to the WebSocket protocol if both the server and the client support it (not all entities support the WebSocket protocol).
Now with WebSockets, it is possible to establish a full duplex and persistent connection between the client and a server. This means that unlike a request and a response, the connection stays open for as long as the application is running, and since it is full duplex, two way simultaneous communication is possible i.e now the server is capable of initiating a communication and 'push' some data to the client.
This is the primary concept used in realtime technology, where you are able to get new updates as server pushes without the client having to request (refresh the page) repeatedly. Examples of such applications are Uber car location tracking, push notifications, stock market prices updating in realtime, etc.
Here's a video from a presentation I gave earlier this month about websockets and how they are different than using the regular REST APIs: https://www.youtube.com/watch?v=PJZ06MLXGwA&list=PLZWI9MjJG-V_Y52VWLPZE1KtUTykyGTpJ&index=2
You can provide a REST API along with a WebSocket API for different purposes. It's up to your requirements and it depends on what you want to achieve.
For example, a WebSocket API can be used to provide real-time notifications while the REST API can be used to manage your resources.
There are a few details you should be aware of:
REST is a protocol independent architectural style frequently implemented over the HTTP protocol and it's supposed to be stateless.
WebSocket is a bi-directional, full-duplex and persistent connection protocol, hence it's stateful.
Just to mention one example of application that provides different APIs: Stack Exchange provides a REST API along with a WebSocket API.
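As a rough illustration of the notification side only, here is a minimal sketch using the Python websockets library (the event name and payload are invented for the example, and depending on the library version the handler may also receive a path argument). A plain REST endpoint would still be the natural place to fetch or manage the underlying resource.

    import asyncio
    import json
    import websockets

    async def notifications(websocket):
        # The server initiates the messages: no per-update request from the client.
        for n in range(3):
            await websocket.send(json.dumps({"event": "price-update", "value": 100 + n}))
            await asyncio.sleep(1)

    async def main():
        async with websockets.serve(notifications, "localhost", 8765):
            await asyncio.Future()  # serve until cancelled

    asyncio.run(main())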
I haven't yet fully understood what a REST API is, but I guess you refer to it in a broader sense, as web systems that provide structured data about a specific resource (such as a customer or a product) upon a POST or GET call over HTTP.
The main difference, from a practical and simplistic standpoint, is that HTTP GET / POST follows a request-response model: the server sends a response upon a request sent by the client.
In the case of WebSockets, communication is bidirectional. Either the server or the client can send information to the counterpart at any time.
To visualize the difference: a page that provides stock market data over HTTP GET will issue a new request to the server every X seconds to get the updated price. Using WebSockets, the server can send the new price directly to the web browser as soon as it changes.
You may also be interested in looking into long polling, which is a technique used with HTTP GET / POST to provide similar functionality to WebSockets (though it is a totally different thing).
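To sketch the stock-price contrast above in code (the endpoints are hypothetical; the requests and websockets Python packages are assumed): the polling client asks on a timer whether anything changed, while the WebSocket client just waits for the server to push.

    import asyncio
    import json
    import time

    import requests
    import websockets

    def poll_price():
        # HTTP GET polling: ask every 5 seconds, even when nothing has changed.
        while True:
            r = requests.get("https://example.com/api/price/ACME", timeout=10)
            print("polled:", r.json()["price"])
            time.sleep(5)

    async def watch_price():
        # WebSocket push: the server sends a message only when the price changes.
        async with websockets.connect("wss://example.com/prices/ACME") as ws:
            async for message in ws:
                print("pushed:", json.loads(message)["price"])

    # poll_price()                  # run one or the other
    # asyncio.run(watch_price())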
WebSockets are, just like sockets, an interface between two adjacent layers.
Specifically, in the ARPANET reference model, sockets are an interface between the Transport layer and the Application layer; in the OSI reference model, they represent an interface between the Session layer and the Transport layer. Interface means that they reside "in between" layers (at their boundary).
WebSockets are the sockets interface "migrated" from the Session/Transport layer boundary to the Session/Presentation boundary of the OSI model. This was done in order to overcome the limitations of sockets in the web world, where all communication is "free" by default only on port 80, the port for HTTP traffic. The HTTP protocol, which sits on top of the (guaranteed delivery) TCP Transport layer, is part of the Session layer in the OSI model, so it can be considered a "transport" as well for the layer above it, the Presentation layer.
Since the "I" in "API" stands for "Interface", both sockets and WebSockets are a form of API, although the term API belongs to modern jargon. A REST API is also an interface between the Session and Presentation layers of the OSI model.
The difference between the REST API interface and the WebSockets interface is that WebSockets is a full-duplex, persistent TCP connection established via a 3-way handshake over HTTP. A REST API over HTTP is, just like HTTP, a request/response (non-standard) protocol in which, unless connections are reused via keep-alive or HTTP/2, a TCP connection is created for each new request, i.e. it is not inherently persistent.

Designing a REST API with req/resp and pub/sub requirements

I'm currently designing a REST interface for a distributed system. It is a client/server architecture with two message exchange patterns:
req/resp: the most RESTful approach; it would be a CRUD interface to access/create/modify/delete objects on the server.
pub/sub: this is my main doubt. I need the server to send asynchronous notifications to the client as soon as possible.
Searching in the web I found that one solution could be to implement REST-servers in the server and client: Publish/subscribe REST-HTTP Simple Protocol web services architecture?
Another alternative would be to implement blocking-REST and so the client doesn't need to listen in a specific port: Using blocking REST requests to implement publish/subscribe
I would like to know which options would you consider to implement an interface like this one. Thanks!
Web Sockets can provide a channel for the service to update web clients live. There are other techniques, such as HTTP long polling, where the client makes a "blocking" request (as you referred to it): the service holds the request for a period shorter than a timeout (say 50 seconds) and writes a response when it has data. The web client then immediately issues another request. This loop creates a continuous channel through which messages can be "sent" from the server to the client, even though it is initiated from the client (which keeps firewalls, proxies, etc. happy).
There are libraries such as socket.io, SignalR and many others that wrap this logic, gracefully fall back from WebSockets to long polling, and abstract away the details for you.
I would recommend writing some sample WebSocket and long polling code just to understand the mechanics, but then relying on libraries like those mentioned above to get it right.
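For reference, here is a rough client-side sketch of that long-poll loop (the /notifications endpoint, cursor parameter and event format are made up): the server holds each request open until it has data or roughly 50 seconds pass, and the client immediately asks again.

    import requests

    def handle(event):
        print("got event:", event)  # application-specific processing would go here

    cursor = None
    while True:
        try:
            resp = requests.get(
                "https://example.com/api/notifications",
                params={"since": cursor},
                timeout=60,  # a bit longer than the server-side hold time
            )
            resp.raise_for_status()
            body = resp.json()
            for event in body.get("events", []):
                handle(event)
            cursor = body.get("cursor", cursor)  # resume point for the next blocking request
        except requests.Timeout:
            continue  # nothing arrived within the hold window; ask again immediately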

Microservices, AMQP or REST [duplicate]

A little background.
Very big monolithic Django application. All components use the same database. We need to separate services so we can independently upgrade some parts of the system without affecting the rest.
We use RabbitMQ as a broker to Celery.
Right now we have two options:
HTTP Services using a REST interface.
JSONRPC over AMQP to an event loop service
My team is leaning towards HTTP because that's what they are familiar with but I think the advantages of using RPC over AMQP far outweigh it.
AMQP provides us with the capabilities to easily add in load balancing, and high availability, with guaranteed message deliveries.
Whereas with HTTP we have to create client HTTP wrappers to work with the REST interfaces, we have to put in a load balancer and set up that infrastructure in order to have HA etc.
With AMQP I can just spawn another instance of the service, it will connect to the same queue as the other instances and bam, HA and load balancing.
Am I missing something with my thoughts on AMQP?
First of all,
REST and RPC are architectural patterns; AMQP is a wire-level protocol and HTTP an application protocol, both of which run on top of TCP/IP
AMQP is a specific-purpose protocol whereas HTTP is a general-purpose one; thus HTTP has damn high overhead compared to AMQP
AMQP is asynchronous by nature whereas HTTP is synchronous by nature
both REST and RPC use data serialization; the format is up to you and depends on your infrastructure. If you are using Python everywhere, I think you can use Python's native serialization, pickle, which should be faster than JSON or any other format.
both HTTP+REST and AMQP+RPC can run in heterogeneous and/or distributed environments
So if you are choosing what to use, HTTP+REST or AMQP+RPC, the answer is really a matter of infrastructure complexity and resource usage. Without any specific requirements both solutions will work fine, but I would rather build some abstraction to be able to switch between them transparently.
You said that your team is familiar with HTTP but not with AMQP. If development time is an important factor, you have your answer.
If you want to build HA infrastructure with minimal complexity, I guess the AMQP protocol is what you want.
I have had experience with both, and the advantages of RESTful services are:
they map well onto a web interface
people are familiar with them
easy to debug (due to the general-purpose nature of HTTP)
easy to provide an API to third-party services.
Advantages of AMQP-based solution:
damn fast
flexible
cost-effective (in resources usage meaning)
Note that you can provide a RESTful API to third-party services on top of your AMQP-based API (since REST is not a protocol but rather a paradigm), but you should keep that in mind while building your AMQP RPC API. I have done it this way to provide an API to external third-party services and to give API access to those parts of the infrastructure that run on an old codebase or where it is not possible to add AMQP support.
If I am right, your question is about how to better organize communication between different parts of your software, not how to provide an API to end-users.
If you have a high-load project, RabbitMQ is a damn good piece of software and you can easily add any number of workers running on different machines. Also, it has mirroring and clustering out of the box. One more thing: RabbitMQ is built on top of Erlang/OTP, which is a highly reliable, stable platform ... (bla-bla-bla); it is good not only for marketing but for engineers too. I have had an issue with RabbitMQ only once, when nginx logs took up all the disk space on the same partition where RabbitMQ was running.
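To illustrate the "add any number of workers" point, here is a hedged worker sketch using the pika client (the queue name and broker URL are illustrative only). Start a second copy of this process and RabbitMQ will round-robin messages between the consumers on the same queue, which is the load balancing described above.

    import pika

    connection = pika.BlockingConnection(
        pika.URLParameters("amqp://guest:guest@localhost:5672/%2F"))
    channel = connection.channel()
    channel.queue_declare(queue="tasks", durable=True)
    channel.basic_qos(prefetch_count=1)  # hand each worker one message at a time

    def on_message(ch, method, properties, body):
        print("processing", body)
        ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after the work is done

    channel.basic_consume(queue="tasks", on_message_callback=on_message)
    channel.start_consuming()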
UPD (May 2018):
Saurabh Bhoomkar posted a link to the MQ vs. HTTP article written by Arnold Shoon on June 7th, 2012; here's a copy of it:
I was going through my old files and came across my notes on MQ and thought I’d share some reasons to use MQ vs. HTTP:
If your consumer processes at a fixed rate (i.e. can’t handle floods to the HTTP server [bursts]) then using MQ provides the flexibility for the service to buffer the other requests vs. bogging it down.
Time independent processing and messaging exchange patterns — if the thread is performing a fire-and-forget, then MQ is better suited for that pattern vs. HTTP.
Long-lived processes are better suited to MQ, as you can send a request and have a separate thread listening for responses (note that WS-Addressing allows HTTP to process in this manner but requires both endpoints to support that capability).
Loose coupling where one process can continue to do work even if the other process is not available vs. HTTP having to retry.
Request prioritization where more important messages can jump to the front of the queue.
XA transactions – MQ is fully XA compliant – HTTP is not.
Fault tolerance – MQ messages survive server or network failures – HTTP does not.
MQ provides for ‘assured’ delivery of messages once and only once – HTTP does not.
MQ provides the ability to do message segmentation and message grouping for large messages – HTTP does not have that ability as it treats each transaction separately.
MQ provides a pub/sub interface whereas HTTP is point-to-point.
UPD (Dec 2018):
As noted by @Kevin in the comments below, it is questionable that RabbitMQ scales better than RESTful services. My original answer was based on simply adding more workers, which is just one part of scaling, and as long as a single AMQP broker's capacity is not exceeded it holds true; beyond that, it requires more advanced techniques like Highly Available (Mirrored) Queues, which means both HTTP- and AMQP-based services involve some non-trivial complexity to scale at the infrastructure level.
After careful thought I also removed the claim that maintaining an AMQP broker (RabbitMQ) is simpler than any HTTP server: the original answer was written in June 2013 and a lot has changed since then, but the main change is that I have gained more insight into both approaches, so the best I can say now is "your mileage may vary".
Also note that comparing HTTP and AMQP is apples to oranges to some extent, so please do not interpret this answer as the ultimate guidance to base your decision on, but rather take it as one source, or as a reference for your further research to find out which exact solution will match your particular case.
The irony of the solution the OP had to accept is that AMQP or other MQ solutions are often used to insulate callers from the inherent unreliability of HTTP-only services, providing some level of timeout and retry logic and message persistence so the caller doesn't have to implement its own HTTP insulation code. A very thin HTTP gateway or adapter layer over a reliable AMQP core, with the option to go straight to AMQP using a more reliable client protocol like JSONRPC, would often be the best solution for this scenario.
Your thoughts on AMQP are spot on!
Furthermore, since you are transitioning from a monolithic to a more distributed architecture, then adopting AMQP for communication between the services is more ideal for your use case. Here is why…
Communication via a REST interface, and by extension HTTP, is synchronous in nature. This synchronous nature of HTTP makes it a not-so-great option as the pattern of communication in a distributed architecture like the one you describe. Why?
Imagine you have two services, service A and service B, in your Django application that communicate via REST API calls. These API calls usually play out this way: service A makes an HTTP request to service B, waits idly for the response, and only proceeds to the next task after getting a response from service B. In essence, service A is blocked until it receives a response from service B.
This is problematic because one of the goals with microservices is to build small autonomous services that would always be available even if one or more services are down– No single point of failure. The fact that service A connects directly to service B and in fact, waits for some response, introduces a level of coupling that detracts from the intended autonomy of each service.
AMQP, on the other hand, is asynchronous in nature. This asynchronous nature of AMQP makes it great for use in your scenario and others like it.
If you go down the AMQP route, instead of service A making requests to service B directly, you can introduce an AMQP based MQ between these two services. Service A will add requests to the Message Queue. Service B then picks up the request and processes it at its own pace.
This approach decouples the two services and, by extension, makes them autonomous. This is true because:
If service B fails unexpectedly, service A will keep accepting requests and adding them to the queue as though nothing happened. The requests would always be in the queue for service B to process them when it’s back online.
If service A experiences a spike in traffic, service B won’t even notice because it only picks up requests from the Message Queues at its own pace
This approach also has the added benefit of being easy to scale— you can add more queues or create copies of service B to process more requests.
Lastly, service A does not have to wait for a response from service B, and the end users also don't have to wait long. This leads to improved performance and, by extension, a better user experience.
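Since the question already pairs Celery with RabbitMQ, a minimal sketch of this decoupling could look like the following (module and task names are made up): service A only enqueues and returns immediately, while a worker in service B drains the queue at its own pace, even if it was down when the work was enqueued.

    # tasks.py (illustrative). Run the worker for "service B" with: celery -A tasks worker
    from celery import Celery

    app = Celery("tasks", broker="amqp://guest:guest@localhost:5672//")

    @app.task(acks_late=True)      # acknowledge only after successful processing
    def process_order(order_id):
        ...                        # service B's actual work goes here

    # Somewhere in "service A":
    # process_order.delay(42)      # enqueue and move on; no waiting on service B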
Just in case you are considering moving from HTTP to AMQP in your distributed architecture and you are not sure how to go about it, you can check out this 7-part beginner guide on message queues and microservices. It shows you how to use a message queue in a distributed architecture by walking you through a demo project.

What are the pitfalls of using Websockets in place of RESTful HTTP?

I am currently working on a project that requires the client to request a big job and send it to the server. Then the server divides up the job and responds with an array of URLs for the client to make a GET call on and stream back the data. I am the greenhorn on the project and I am currently using Spring websockets to improve efficiency. Instead of the clients constantly pinging the server to see if it has results ready to stream back, the websocket will now just directly contact the client, hooray!
Would it be a bad idea to have websockets manage the whole process from end to end? I am using STOMP with Spring websockets, will there still be major issues with ditching REST?
With RESTful HTTP you have a stateless request/response system where the client sends request and server returns the response.
With webSockets you have a stateful (or potentially stateful) message passing system where messages can be sent either way and sending a message has a lower overhead than with a RESTful HTTP request/response.
The two are fairly different structures with different strengths.
The primary advantages of a connected webSocket are:
Two way communication. So, the server can notify the client of anything at any time. So, instead of polling a server on some regular interval to see if there is something new, a client can establish a webSocket and just listen for any messages coming from the server. From the server's point of view, when an event of interest for a client occurs, the server simply sends a message to the client. The server cannot do this with plain HTTP.
Lower overhead per message. If you anticipate a lot of traffic flowing between client and server, then there's a lower overhead per message with a webSocket. This is because the TCP connection is already established and you just have to send a message on an already open socket. With an HTTP REST request, you have to first establish a TCP connection which is several back and forths between client and server. Then, you send the HTTP request, receive the response and close the TCP connection. The HTTP request will necessarily include some overhead such as all cookies that are aligned with that server even if those are not relevant to the particular request. HTTP/2 (newest HTTP spec) allows for some additional efficiency in this regard if it is being used by both client and server because a single TCP connection can be used for more than just a single request/response. If you charted all the requests/responses going on at the TCP level just to make an https REST request/response, you'd be surprised how much is going on compared to just sending a message over an already established webSocket.
Higher Scale in some circumstances. With lower overhead per message and no client polling to find out if something is new, this can lead to added scalability (higher number of clients a given server can serve). There are downsides to the webSocket scalability too (see below).
Stateful connections. Without resorting to cookies and session IDs, you can directly store state in your program for a given connection. While a lot of development has been done with stateless connections to solve most problems, sometimes it's just simpler with stateful connections.
The primary advantages of a RESTful HTTP request/response are:
Universal support. It's hard to get more universally supported than HTTP. While webSockets enjoy relatively good support now, there are still some circumstances where webSocket support isn't regularly available.
Compatible with more server environments. There are server environments that don't allow long running server processes (some shared hosting situations). These environments can support HTTP request, but can't support long running webSocket connections.
Higher Scale in some circumstances. The webSocket requirement for a continuously connected TCP socket adds some new scale requirements to the server infrastructure that HTTP requests don't demand. So, this ends up being a tradeoff space. If the advantages of webSockets aren't really needed or being used in a significant way, then HTTP requests might actually scale better. It definitely depends upon the specific usage profile.
For a one-off request/response, a single HTTP request is more efficient than establishing a webSocket, using it and then closing it. This is because opening a webSocket starts with an HTTP request/response and then after both sides have agreed to upgrade to a webSocket connection, the actual webSocket message can be sent.
Stateless. If your job is not made more complicated by having a stateless infrastructure, then a stateless world can make scaling or fail-over much easier (just add or remove server processes behind a load balancer).
Automatically Cacheable. With the right server settings, http responses can be cached by browser or by proxies. There is no such built-in mechanism for requests sent via webSockets.
So, to address the way you asked the question:
What are the pitfalls of using websockets in place of RESTful HTTP?
At large scale (hundreds of thousands of clients), you may have to do some special server work in order to support large numbers of simultaneously connected webSockets.
Not all clients or toolsets support webSockets, or support requests made over them to the same level that they support HTTP requests.
Some of the less expensive server environments don't support the long running server processes required to support webSockets.
If it's important to your application to get progress notifications back to the client, you could either use a long running http connection with continuing progress being sent down or you can use a webSocket. The webSocket is likely easier. If you really only need the webSocket for the relatively short duration of this particular activity, then you may find the best overall set of tradeoffs comes by using a webSocket only for the duration of time when you need the ability to push data to the client and then using http requests for the normal request/response activities.
It really depends on your requirements. REST services can be much more transparent and easier for developers to pick up than WebSockets.
Using Websockets, you remove most of the advantages that RESTful webservices offer, such as the ability to reference a resource via a URI. Really what you should be doing is to figure out what the advantages are of REST and hypermedia, and based on that decide whether those advantages are important to you.
It's of course entirely possible to create a RESTful webservice and augment it with a websocket-based API for real-time responses.
But if you are creating a service that only you are going to consume in a controlled environment, the only disadvantage might be that not every client supports websockets, while pretty much any type of environment can do a simple http call.

Is ReST over websockets possible?

I am planning to develop a web based chat application which takes in ReSTful requests, translate them to XMPP and deliver them to an XMPP server.
Using websockets for this kind of chat based application looked promising as the events (or responses) can be delivered asynchronously. But if I use websockets as the underlying protocol for transferring the requests from the browser, can this still be considered as a ReSTful design? If yes, how are the URIs, verbs (GET, POST...), parameters represented in the websocket message? Wrap them in an xml/json and send it?
Also, ReSTful architecture states that no session state should be stored on the server. But here, in this case, when an XMPP client session is created, the state of that session will be stored on the server (violating the statelessness constraint).
REST is an architectural style that does not impose a protocol. So yes, you can do REST with Web Sockets, REST with HTTP and REST with FTP if you like.
The main reason to use HTTP is that it is easy and fairly simple to communicate with any component or programming language via HTTP, and also because HTTP supports distributed environments with multiple intermediaries (proxies, firewalls, ...), so you can deploy your service on any topology and anyone will be able to access it.
My rant:
If you are a RESTliban and Roy Fielding's dissertation is the source of truth, verbs are never acknowledged as part of the semantics. URIs are the semantics. The use of different verbs for different actions has been an elegant evolution of REST over HTTP, but it is not part of the "truth". You can check the scenario of REST over HTTP evaluated by Roy in chapter six of his dissertation. There is no mention of verbs. And notice it is an evaluation scenario, not the specification.
TLDR;
If you need realtime two way communications via the internet and the client is a web browser, the best choice is Web Sockets. You could then implement an application level protocol on top of web sockets to implement a RESTful Web Service.
Yes. You can use REST over WebSocket with a library like SwaggerSocket.
Why would you want to build a REST API on top of sockets? IMHO the benefit of a REST API is to leverage standard HTTP protocol capabilities, like stateless requests and semantic verbs such as GET and DELETE, to build an API that can be easily understood by (client) developers. Since sockets do not offer HTTP verbs and so on, you would end up building some kind of HTTP layer for sockets, which is IMHO not reasonable.
If you really were to build such a thing, I'd recommend using the HTTP protocol as a blueprint and modelling the socket protocol on HTTP.
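As a hedged sketch of that "HTTP as a blueprint" idea (the envelope format and endpoint below are invented for illustration, not any standard): each message carries an id, a verb, a path and a body, and the reply echoes the id with a status, mimicking a request/response pair over the socket.

    import asyncio
    import json
    import websockets

    async def call(ws, request_id, method, path, body=None):
        # Send an HTTP-like envelope and wait for the matching reply.
        await ws.send(json.dumps({"id": request_id, "method": method, "path": path, "body": body}))
        return json.loads(await ws.recv())  # e.g. {"id": 1, "status": 200, "body": {...}}

    async def main():
        async with websockets.connect("wss://example.com/api") as ws:
            user = await call(ws, 1, "GET", "/users/7")
            created = await call(ws, 2, "POST", "/messages", {"to": 7, "text": "hi"})
            print(user, created)

    asyncio.run(main())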
REST architectural style mostly presumes 2 entities viz. client and server.
As we move more towards the real-time web and the development of reactive systems, WebSockets will increasingly start replacing the use of REST APIs.
WS allows both data push and pull, which blurs the usual distinction between server and client.
STOMP, AMQP and XMPP can be used as messaging protocols.
The data itself may be JSON, Google Protocol Buffers, or Apache Avro.
WebSockets are not tied to web servers; they can also be used in standalone apps such as mobile or desktop apps.
I don't understand why you would convert XMPP into REST and then run REST over WS. The point of WebSocket is to take the XMPP protocol directly to the browser, thereby avoiding all of the translation issues.
There are JavaScript libraries that can talk XMPP from the browser to the server. All you need is to proxy the XMPP traffic from WS over into TCP and then straight into your XMPP server. Kaazing has a gateway that allows you to do this.
If you want to use open source, you will need to write a JavaScript XMPP library. There are examples that show how to write a JS library for simple protocols. You just have to find one and extend the concept to the XMPP protocol.
So to recap, here is how the architecture would look:
Your XMPP Client code <-> XMPP JavaScript Library <-> WebSocket over http <-> WebSocket to TCP Proxy <-> XMPP Server
where the XMPP Client code and the XMPP JavaScript Library runs in the browser, and the WS to TCP proxy along with the XMPP server are all server-side.
I understand this post is really old, but wanted to interject a bit further on the notion that "So if I choose a REST architecture I forfeit the ability for real-time communications?".
In a word, no. A number of REST style implementations I have had experience with leverage REST for compatibility, discoverability, and as a means to scale across different devices in the shadow of IoT.
However, WS can be used in addition to REST to facilitate near real-time transmission. There are also a number of abstractions which really help with this and allow you to focus on building your API and deciding how the RT components of the consuming applications should operate.
I would suggest taking a look at things like Tibco Smart-Sockets, or SignalR if you're looking to build a REST API and would like to avoid re-creating the wheel for your RT needs.
I created a project that adds callbacks to the web socket send function: https://github.com/ModernEdgeSoftware/WebSocketR2
Message IDs are established so the client can implement callbacks. It handles message retries after timeouts as well as reconnects to the server if the connection gets dropped. You can then structure your payload to be as RESTful as you want by adding verbs and paths.
This is similar to when a video game studio uses UDP to achieve the speeds they need, but their net code implements a lot of TCP like features for reliability.
The OP's original question is: "Is ReST over websockets possible?"
What this question implies is the following: Is REST API possible over Websockets as a transport.
Of course, OP did not mean the following: Is REST architectural style possible over Websockets. His question was more an operational one i.e. can REST API requests, such as GET, PUT, POST, DELETE etc. be exchanged over a Websockets pipe.
To answer this question, we have to understand that both sockets and Websockets are the same type of interface (full duplex, with a 3-way handshake protocol), but the difference is that the sockets interface originated in the ARPANET reference model. In that network model, sockets were an interface between the Session layer and the Transport layer. The word "interface" means that it resides "in between" network layers, i.e. at their boundary. In other words, sockets are not part of any specific network layer.
Websockets are the same type of socket interface, but in the OSI 7-layer network model they no longer reside between the Session and Transport layers. Instead, they reside between the Session layer and the Presentation layer. Why there? Why this "move"? The motivation was to be able to leverage the HTTP protocol as a transport for sockets. And what is so special about the HTTP protocol? In enterprise establishments there are many network zones and segments, and these security domains are protected by firewalls. Firewalls, as we know, have associated rules for inbound/outbound traffic. If we want two components in two different network zones to talk to each other, we have to ensure that the ports on the corresponding firewalls are open. That would involve collaboration with infrastructure and operations teams, business approvals etc., and would introduce significant delays in achieving a simple thing: two components communicating with each other.
Which brings us to our use case: the Websockets interface is placed between the Session OSI layer (where HTTP resides) and the Presentation OSI layer (where things like TLS reside). Port 80, the port for HTTP traffic, is typically already open on firewalls, so no involvement of operations and infrastructure is needed. And our two components can now converse over a Websockets communication pipe.
Back to the OP's question. Any kind of string payload can be transferred over sockets. Sockets/Websockets are an ideal mechanism for transporting all sorts of custom protocols, whether STOMP, HL7, FHIR, or many others. GET, PUT, POST and DELETE requests are different operations on a REST API endpoint. These operations take the form of specific strings, and as we saw, sockets/Websockets are very convenient for passing strings back and forth. In the case of REST over HTTP, though, you are leveraging the whole HTTP "infrastructure" available in all modern browsers, such as Chrome, Firefox, Edge etc., as well as web servers such as Apache, nginx, IIS, OHS, IHS etc. In other words, a REST API piggybacks on an established, string-based protocol called HTTP that is built into both the client and the server side. The same cannot be said about Websockets: you would have to ensure that every type of client and server complies with your (custom) transport solution based on Websockets!
I just spotted a new post on the blog of a company providing cloud solutions and "Server-end/Service as a Platform" (SaaS) for games.
I'm not advertising this company, nor have I used them, so I don't even know how good or bad they are.
However, they explain very clearly the reasons for, and the benefits of, using WebSockets alongside REST.
Have a read on their blog
REST requires a stateless protocol according to the statelessness constraint; WebSocket is a stateful protocol, so it is not possible.