The scenario: many different clients are requesting the same resource from a reverse proxy server. Does the reverse proxy forward a request to the origin server (the one holding the resource) for every client request? If so, is that the default behaviour, or does it depend on the configuration (the request headers: ETags, If-None-Match, etc.) between the proxy and the origin server? Thanks
This question might be more appropriate over on Server Fault. That said, it depends heavily on the reverse proxy setup: both on the reverse-proxy software in use (there are many kinds, from nginx to Apache to custom C++ implementations) and on how each is configured. If you're trying to write a reverse proxy, syndicating one upstream request to many clients is a tempting but dangerous idea. It works very well for public, static pages, but you have to be careful not to open just one connection to, for example, facebook.com, and send the result to two different people (showing one of them the other's personal data). For that reason, it should only be attempted when the resource marks, through appropriate headers, that it is safe to cache.
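As a rough illustration of that last point, a proxy deciding whether a response may be shared between clients could gate on the caching headers before syndicating it. This is a minimal sketch (a deliberate simplification of the RFC 7234 rules; the class and helper names are made up):

```java
import java.util.Map;

public class ShareCheck {
    // Naive check of whether an upstream response may be served to
    // multiple clients. Real proxies implement the full RFC 7234
    // rules; this sketch only looks at a few Cache-Control tokens.
    static boolean safeToShare(Map<String, String> responseHeaders) {
        String cc = responseHeaders.getOrDefault("Cache-Control", "").toLowerCase();
        if (cc.contains("private") || cc.contains("no-store")) {
            return false; // marked per-user or uncacheable: never share
        }
        // Only share when the origin explicitly opted in.
        return cc.contains("public") || cc.contains("s-maxage");
    }
}
```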
I am looking to build a Java backend that services an incoming REST call by making a large number of HTTPS POST requests to other HTTP servers. A single REST call will translate into aggregating the data from several thousand HTTPS POST requests. Since I am looking to scale to several thousand remote HTTP servers, I am evaluating Vert.x to help make the POST requests event-driven and non-blocking.
I'd like to know the best approach for distributing these POST requests across my verticles. I see several examples of verticles implementing HTTP servers, with each verticle waking up on a connect event, but I don't see any examples of load balancing HTTP client work across verticles in Vert.x.
One approach would be to have the verticles synchronize their access to a producer-consumer queue, but it is not a good idea to have blocking code in a verticle.
Your best bet might be createHttpServer and/or the RouteMatcher included with Vert.x. The great thing about either is that they scale dynamically with zero configuration. Any other kind of service must be balanced at the load-balancer level, but for HTTP servers Vert.x does this for you.
If you're looking to deploy your application across multiple machines, you might want to look into Hazelcast (Vert.x uses it in its core libraries, so it ships with Vert.x): http://www.hazelcast.com
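For the client-side distribution asked about above, one non-blocking option is worth sketching: deploy several instances of one verticle that all consume from the same event-bus address, and Vert.x will round-robin the messages across them, so no shared blocking queue is needed. A minimal sketch, assuming the Vert.x 3 API; the address "post.jobs", the class name, and the target path are all invented:

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;
import io.vertx.ext.web.client.WebClient;

public class PostWorkerVerticle extends AbstractVerticle {
    @Override
    public void start() {
        WebClient client = WebClient.create(vertx);
        // Each instance registers on the same address; Vert.x
        // round-robins messages across the registered consumers.
        vertx.eventBus().<String>consumer("post.jobs", msg -> {
            String host = msg.body(); // assume the message carries the target host
            client.post(443, host, "/data").ssl(true).send(ar -> {
                if (ar.succeeded()) {
                    msg.reply(ar.result().bodyAsString());
                } else {
                    msg.fail(500, ar.cause().getMessage());
                }
            });
        });
    }

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Eight instances share the "post.jobs" address, no queue needed.
        vertx.deployVerticle("PostWorkerVerticle",
                new DeploymentOptions().setInstances(8));
    }
}
```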
I am making a transition from SOAP to REST, and I would like to convince my colleagues that this is a good move. We don't need the extra security mechanisms that SOAP can provide. For us the overhead of SOAP and WSDL has only proven to be a headache over the years.
Apart from the obvious simplifications, one really valuable advantage for our system would be the HTTP caching mechanisms. I have read things on the subject, but I still don't fully understand why these caching mechanisms can't be applied to SOAP messages.
Is it simply because REST, by convention, encodes all the parameters in the URL? Since a GET call can also have a body with parameters, I understand that REST isn't restricted to URL parameters, but do the caching mechanisms stop working if you do that?
SOAP, when using HTTP as the transfer mechanism, is sent via HTTP POST requests. As HTTP POST is non-idempotent, it is not cached at the HTTP level.
REST may be cached, provided the requests in question are idempotent ones: GET, PUT and (theoretically) DELETE. Any POST requests still won't be cached. That said, if you are planning to do a lot of work with this, you should look into how caches check whether they are still valid. In particular, the ETag header is a good way to implement caching, provided you've got a cheap way to compute what the value should be.
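For instance, a minimal servlet-style sketch of ETag validation might look like this (the helper methods and the max-age value are invented for illustration):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ResourceServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String etag = computeEtag(req.getPathInfo()); // assumed to be cheap
        // If the client's cached copy is still current, skip the body.
        if (etag.equals(req.getHeader("If-None-Match"))) {
            resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED); // 304
            return;
        }
        resp.setHeader("ETag", etag);
        resp.setHeader("Cache-Control", "max-age=60"); // 60s of freshness
        resp.getWriter().write(loadResource(req.getPathInfo()));
    }

    // Hypothetical helpers: a real version would derive the ETag from a
    // version number or content hash, and fetch the actual resource.
    private String computeEtag(String path) {
        return "\"" + Integer.toHexString(String.valueOf(path).hashCode()) + "\"";
    }

    private String loadResource(String path) {
        return "resource at " + path;
    }
}
```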
Is it simply because REST, by convention, encodes all the parameters in the URL? Since a GET call can also have a body with parameters, I understand that REST isn't restricted to URL parameters, but do the caching mechanisms stop working if you do that?
REST does not prescribe any particular mechanism for encoding request parameters in the URL. What is recommended is that clients never do any URL synthesis: let the server do all the URL creation (by whatever mechanism you want, such as embedding values in the path or in query parameters). What you definitely shouldn't do is a GET where the client sends a body to the server: any cache is likely to lose that. Instead, when a request doesn't correspond to a simple fetch of some resource, POST or PUT the complex document to the server and GET the result of the operation as a separate stage.
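To make that two-step pattern concrete, here is a hedged sketch using Java's built-in HTTP client: POST the complex query document, then GET the result as a separate, cacheable fetch. The URLs and the Location-header convention are assumptions for illustration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TwoStepQuery {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Step 1: submit the complex query document; assume the server
        // replies with the URL of a result resource in the Location header.
        HttpRequest post = HttpRequest
                .newBuilder(URI.create("https://api.example.com/searches"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"filter\":\"complex\"}"))
                .build();
        HttpResponse<Void> created =
                client.send(post, HttpResponse.BodyHandlers.discarding());
        String resultUrl = created.headers().firstValue("Location").orElseThrow();

        // Step 2: GET the result; this request is a plain fetch, so
        // intermediaries are free to cache it.
        HttpRequest get = HttpRequest.newBuilder(URI.create(resultUrl)).GET().build();
        HttpResponse<String> result =
                client.send(get, HttpResponse.BodyHandlers.ofString());
        System.out.println(result.body());
    }
}
```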
A POST of a complex document that returns the result of the operation as another complex document is pretty much how SOAP is encoded. If you're tempted to do that, you might as well use SOAP directly, as it has much more mature tooling in many languages. If you want to run REST that way, you're probably doing it wrong somewhere…
Stating that POST requests are non-idempotent in general is not correct. POST should be used for non-idempotent operations, but SOAP does not really use HTTP correctly, and as a result it can happen that POST is used for an idempotent operation, for example fetching an entity by ID (which you would normally use GET for).
Also, according to this: Is it possible to cache POST methods in HTTP? (check the answer by reBoot), POST responses can be cached according to the RFC, although browsers and proxies may not implement this very well. In any case, if you add the appropriate HTTP headers, SOAP-over-HTTP messages should be cacheable using the HTTP caching mechanisms.
Another reason could be that the SOAP URI always points at the SOAP server; it does not identify which resource is being requested or which resource should be cached. A cache server cannot cache something it does not know about. REST, however, has a URI for each resource (there can be multiple URIs for the same resource), which helps a caching server do its job.
On top of this, as mentioned in other answers, only some of the HTTP verbs are cached, like GET and PUT (mostly GET).
I am planning to develop a web-based chat application that takes in RESTful requests, translates them to XMPP, and delivers them to an XMPP server.
Using WebSockets for this kind of chat-based application looks promising, since the events (or responses) can be delivered asynchronously. But if I use WebSockets as the underlying protocol for transferring the requests from the browser, can this still be considered a RESTful design? If yes, how are the URIs, verbs (GET, POST, ...), and parameters represented in the WebSocket message? Wrap them in XML/JSON and send that?
Also, the RESTful architectural style states that no session state should be stored on the server. But in this case, when an XMPP client session is created, the state of that session will be stored on the server (violating the stateless constraint).
REST is an architectural style that does not impose a protocol. So yes, you can do REST with Web Sockets, REST with HTTP and REST with FTP if you like.
The main reasons to use HTTP are that it is easy and fairly simple to communicate with any component or programming language via HTTP, and that HTTP supports distributed environments with multiple intermediaries (proxies, firewalls, ...), so you can deploy your service on any topology and anyone will be able to access it.
My rant:
If you are a RESTliban and Roy Fielding's dissertation is the source of truth, verbs are never acknowledged as part of the semantics: the URIs are the semantics. Using different verbs for different actions has been an elegant evolution of REST over HTTP, but it is not part of the "truth". You can check the scenario of REST over HTTP that Roy evaluates in chapter six of his dissertation: no mention of verbs. And note that it is an evaluation scenario, not a specification.
TL;DR:
If you need real-time two-way communication over the internet and the client is a web browser, the best choice is WebSockets. You could then implement an application-level protocol on top of WebSockets to implement a RESTful web service.
Yes, you can use REST over WebSocket with a library like SwaggerSocket.
Why would you want to build a REST API on top of sockets? IMHO the benefit of a REST API is leveraging the standard HTTP protocol's possibilities, such as stateless requests and semantic verbs like GET and DELETE, to build an API that can be easily understood by (client) developers. Since sockets offer no HTTP verbs and so on, you would end up building some kind of HTTP layer for sockets, which IMHO is not reasonable.
If you really were to build such a thing, I'd recommend using the HTTP protocol as a blueprint and implementing the socket protocol like HTTP.
The REST architectural style mostly presumes two entities, a client and a server.
As we move towards the real-time web and the development of reactive systems, WebSockets will increasingly start replacing the use of REST APIs.
WebSockets allow both push and pull of data, which blurs the distinction between server and client.
STOMP, AMQP, or XMPP can be used as the messaging protocol.
The data itself may be JSON, Google Protocol Buffers, or Apache Avro.
WebSockets are not tied to web servers; they can be used in standalone apps such as mobile or desktop applications too.
I don't understand why you would convert XMPP into REST and then run REST over WS. The point of WebSocket is to take the XMPP protocol directly to the browser, thereby avoiding all of the translation issues.
There are JavaScript libraries that can talk XMPP from the browser to the server. All you need is to proxy the XMPP traffic from WS over into TCP and then straight into your XMPP server. Kaazing has a gateway that allows you to do this.
If you want to use open source, you will need to write a JavaScript XMPP library. There are examples that show how to write a JS library for simple protocols. You just have to find one and extend the concept to the XMPP protocol.
So to recap, here is how the architecture would look:
Your XMPP Client code <-> XMPP JavaScript Library <-> WebSocket over HTTP <-> WebSocket to TCP Proxy <-> XMPP Server
where the XMPP Client code and the XMPP JavaScript Library runs in the browser, and the WS to TCP proxy along with the XMPP server are all server-side.
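For a rough idea of what the "WebSocket to TCP Proxy" hop could look like in Java, here is a minimal sketch using the standard JSR-356 WebSocket API. The XMPP host, port, and the naive text-frame relaying are assumptions; a production proxy (and XMPP-over-WebSocket per RFC 7395) needs far more care around framing and errors:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/xmpp")
public class XmppWsProxy {
    private Socket upstream;

    @OnOpen
    public void onOpen(Session session) throws IOException {
        // One plain TCP connection to the XMPP server per browser client.
        upstream = new Socket("xmpp.example.com", 5222); // assumed host/port
        // Pump bytes from the XMPP server back to the browser.
        new Thread(() -> {
            try {
                byte[] buf = new byte[4096];
                int n;
                while ((n = upstream.getInputStream().read(buf)) != -1) {
                    session.getBasicRemote()
                           .sendText(new String(buf, 0, n, StandardCharsets.UTF_8));
                }
            } catch (IOException ignored) {
                // Upstream closed; the WebSocket side will be closed too.
            }
        }).start();
    }

    @OnMessage
    public void onMessage(String frame) throws IOException {
        // Forward the browser's stanza to the XMPP server as-is.
        OutputStream out = upstream.getOutputStream();
        out.write(frame.getBytes(StandardCharsets.UTF_8));
        out.flush();
    }

    @OnClose
    public void onClose(Session session) throws IOException {
        upstream.close();
    }
}
```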
I understand this post is really old, but I wanted to interject a bit further on the notion that "if I choose a REST architecture, I forfeit the ability for real-time communications".
In a word, no. A number of REST style implementations I have had experience with leverage REST for compatibility, discoverability, and as a means to scale across different devices in the shadow of IoT.
However, beyond using WS alongside REST to facilitate near-real-time transmission, there are also a number of abstractions that really help with this and allow you to focus on building your API and deciding how the RT components of the consuming applications should operate.
I would suggest taking a look at things like Tibco Smart-Sockets or SignalR if you're looking to build a REST API and would like to avoid reinventing the wheel for your RT needs.
I created a project that adds callbacks to the web socket send function: https://github.com/ModernEdgeSoftware/WebSocketR2
Message IDs are established so the client can implement callbacks. It handles message retries after timeouts, as well as reconnecting to the server if the connection gets dropped. You can then structure your payload to be as RESTful as you want by adding verbs and paths.
This is similar to how a video game studio uses UDP to achieve the speeds they need, while their net code implements a lot of TCP-like features for reliability.
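For illustration only, such a "RESTful" envelope could be as simple as the class below. These field names are invented to show the idea, not WebSocketR2's actual wire format:

```java
// A hypothetical request envelope for REST-over-WebSocket: the id lets
// the client match the eventual response to a callback, while verb and
// path make the payload read like an HTTP request line.
public class WsRequest {
    public final String id;   // e.g. a UUID, echoed back in the response
    public final String verb; // "GET", "POST", "PUT", "DELETE"
    public final String path; // e.g. "/players/7"
    public final String body; // JSON payload, may be null for GET

    public WsRequest(String id, String verb, String path, String body) {
        this.id = id;
        this.verb = verb;
        this.path = path;
        this.body = body;
    }
}
```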
The OP's original question is: "Is ReST over websockets possible?"
What this question implies is the following: is a REST API possible with WebSockets as the transport?
Of course, the OP did not mean the following: is the REST architectural style possible over WebSockets? The question was more an operational one, i.e. can REST API requests such as GET, PUT, POST, DELETE, etc. be exchanged over a WebSocket pipe?
To answer this question, we have to understand that both sockets and WebSockets are the same type of interface (full duplex, with a 3-way handshake), but the difference is that the sockets interface originated in the ARPANET reference model. In that network model, sockets were an interface between the Session layer and the Transport layer. The word "interface" means that it resides "in between" network layers, i.e. at their boundary. In other words, sockets are not part of any specific network layer.
WebSockets are the same type of socket interface, but in the OSI 7-layer network model they no longer sit between the Session and Transport layers; instead, they sit between the Session layer and the Presentation layer. Why there? Why this "move"? The motivation was to be able to leverage the HTTP protocol as a transport for sockets. And what is so special about HTTP? In enterprise environments there are many network zones and segments, and these security domains are protected by firewalls. Firewalls, as we know, have rules for inbound/outbound traffic, so if we want two components in different network zones to talk to each other, we have to ensure the ports on the corresponding firewalls are open. That would involve collaboration from infrastructure and operations teams, business approvals, etc., and would introduce significant delays in achieving a simple thing: two components communicating with each other.
Which brings us to our use case: the WebSocket interface sits between the Session OSI layer (where HTTP resides) and the Presentation OSI layer (where things like TLS reside). By default, port 80 is open on all firewalls, so no involvement of operations or infrastructure is needed, and our two components can now converse over a WebSocket communication pipe.
Back to the OP's question. Any kind of string data can be transferred over sockets. Sockets/WebSockets are an ideal mechanism for transporting all sorts of custom protocols, whether STOMP, HL7, FHIR, or many others. GET, PUT, POST and DELETE requests are different operations on a REST API endpoint; these operations take the form of specific strings, and as we saw, sockets/WebSockets are very convenient for passing strings back and forth. In the case of REST over HTTP, though, you are leveraging the whole HTTP "infrastructure" available in all modern browsers (Chrome, Firefox, Edge, etc.) as well as in web servers such as Apache, nginx, IIS, OHS, IHS, etc. In other words, a REST API piggybacks on an established, text-based protocol called HTTP that is built into both the client and server sides. The same cannot be said of WebSockets: you would have to ensure that every type of client and server complies with your custom, WebSockets-based transport!
I just spotted a new post on the blog of a company providing cloud solutions and a "server end/service as a platform" (SaaS) offering for games.
I'm not advertising this company, nor have I used them, so I don't even know how good or bad they are.
However, they explain very clearly the reasons for, and the benefits of, using WebSockets with REST.
Have a read on their blog.
REST requires a stateless protocol, per the statelessness constraint; WebSocket is a stateful protocol, so it is not possible.
I have a question about the design of an application I'm working on.
I made a monolithic Java application with sockets open 24/7, something like a game server. I'm just trying to say it's a single-jar application rather than a modular, servlet/page-based web application.
I would now like to add a RESTful API to this application, so people/clients can make HTTP requests to it to obtain certain info. Because of the monolithic nature of my Java application, I'm unsure how to implement this. One other important thing: I'm expecting multiple requests per second, so it would be nice if an existing HTTP server could handle the requests, somehow forward them to my app to build a reply, and then send the reply back.
Some things I have thought of:
wrap my application in a Tomcat application, although I'm not sure whether Tomcat can run an application continuously rather than just mapping requests to servlets
open a socket and parse incoming HTTP requests myself (or there is probably a lib for that?). I fear this will have an impact on performance, and I would rather use an existing HTTP server since they are optimized for high traffic.
use an existing HTTP server to handle the requests (Apache, lighttpd, ...) and have it forward the requests to my app via something like SCGI, or use a server that can forward via XML-RPC. Are there any other technologies/protocols to do this?
Any advice on how to handle this?
Thanks!
I'd decouple your RESTful service endpoint as much as possible from your original application. This allows you to scale (add multiple servers for your REST endpoint), and also to change your original application without having to change your REST API directly.
Clients <== REST (HTTP) ==> RESTful endpoint <== legacy (sockets) ==> Legacy backend
So your REST server is on the one hand a service provider for your clients, but at the same time also a client of your original backend.
I would design the RESTful API first and then pick one of the existing REST frameworks for Java, like Restlet, and implement the REST service itself. At the same time, you can start implementing a gateway between the REST server and your original backend, using sockets.
Pay attention to scalability and performance (i.e. you may want to use connection pools for the REST <=> backend bridge rather than spawning a socket per incoming API request) and also think about the possible advantages of HTTP: you might benefit from caching, etc., as far as your backend application logic allows.
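As a starting point, a bare-bones Restlet endpoint bridging to the legacy socket protocol could look roughly like the sketch below. The port numbers and the one-line command protocol are assumptions, and it deliberately opens a socket per request, which is exactly what the pooling advice above tells you to avoid in production:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

import org.restlet.Server;
import org.restlet.data.Protocol;
import org.restlet.resource.Get;
import org.restlet.resource.ServerResource;

public class InfoResource extends ServerResource {
    @Get
    public String represent() {
        return queryLegacy("status"); // hypothetical command
    }

    // Naive bridge: one socket per request, assuming the legacy app
    // answers one-line commands with one-line replies. Replace with a
    // connection pool for real traffic, as noted above.
    private String queryLegacy(String command) {
        try (Socket s = new Socket("localhost", 9000); // assumed legacy port
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            out.println(command);
            return in.readLine();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        // Expose the resource over HTTP on port 8182.
        new Server(Protocol.HTTP, 8182, InfoResource.class).start();
    }
}
```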