Forward HTTP RESTful API requests from an HTTP server to my application - rest

I have a question about the design of an application I'm working on.
I made a monolithic Java application with sockets open 24/7, something like a game server. I'm just trying to say it's a single-JAR application rather than a modular servlet/page-based web application.
I would now like to add a RESTful API to this application, so people/clients can make HTTP requests to my application to obtain certain info. Because of the monolithic nature of my Java application, I'm unsure how to implement this. One other important thing: I'm expecting multiple requests per second, so it would be nice if I could have an existing HTTP server handle the requests, somehow forward them to my app to build a reply, and have the HTTP server send the reply back.
Some things I have thought of:
wrap my application in a Tomcat application, although I'm not sure whether Tomcat can run an application continuously rather than just mapping requests to servlets.
open a socket and parse incoming HTTP requests myself (or there is probably a lib for that?). I fear this will have an impact on performance, and I would rather use existing HTTP servers because they are optimized for high traffic.
use an existing HTTP server to handle the requests (Apache, lighttpd, ...) and have it forward requests to my app via something like SCGI, or use a server that can forward via XML-RPC. Are there any other technologies/protocols to do this?
Any advice on how to handle this?
Thanks!

I'd decouple your RESTful service endpoint as much as possible from your original application. This allows you to scale (add multiple servers for your REST endpoint), but also to change your original application without having to change your REST API directly.
Clients <== REST (HTTP) ==> RESTful endpoint <== legacy (sockets) ==> Legacy backend
So your REST server is on the one hand a service provider for your clients, but at the same time acts as a client of your original backend.
I would design the RESTful API and then pick one of the existing REST frameworks for Java, like Restlet, and implement the REST service itself. At the same time you can start implementing a gateway between the REST server and your original backend, by using sockets.
Pay attention to scalability and performance (e.g. you may want to use connection pools for the REST <=> backend bridge rather than spawning a socket per incoming API request), and also think of the possible advantages of HTTP: you might benefit from caching, etc., as far as your backend application logic allows.
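A minimal sketch of that gateway idea, assuming JAX-RS on the REST side and a small hand-rolled socket pool toward the legacy backend (the resource path, backend host/port, and line protocol are placeholders, not anything from your application):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;

    @Path("/players")
    public class PlayerResource {

        // Tiny hand-rolled pool so each API request reuses a backend connection
        // instead of opening a new socket per call.
        private static final BlockingQueue<Socket> POOL = new ArrayBlockingQueue<>(10);
        static {
            try {
                for (int i = 0; i < 10; i++) {
                    POOL.add(new Socket("localhost", 9000)); // placeholder host/port of the legacy backend
                }
            } catch (IOException e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        @GET
        @Path("{id}")
        @Produces("application/json")
        public String get(@PathParam("id") String id) throws Exception {
            Socket s = POOL.take();              // borrow a pooled backend connection
            try {
                PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                out.println("GET_PLAYER " + id); // whatever line protocol your backend speaks
                return in.readLine();            // assume the backend answers with one JSON line
            } finally {
                POOL.put(s);                     // hand the connection back to the pool
            }
        }
    }

In production you would want broken-connection handling and a real pool implementation, but the shape stays the same: the REST layer only translates HTTP into your backend's own protocol.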

Related

Designing a REST API with req/resp and pub/sub requirements

Nowadays I'm designing a REST interface for a distributed system. It is a client/server architecture but with two message exchange patterns:
req/resp: the most RESTful approach, it would be a CRUD interface to access/create/modify/delete objects in the server.
pub/sub: this is my main doubt. I need the server to send asynchronous notifications to the client as soon as possible.
Searching the web, I found that one solution could be to implement REST servers on both the server and the client: Publish/subscribe REST-HTTP Simple Protocol web services architecture?
Another alternative would be to implement blocking REST so the client doesn't need to listen on a specific port: Using blocking REST requests to implement publish/subscribe
I would like to know which options you would consider to implement an interface like this one. Thanks!
Web Sockets can provide a channel for the service to update web clients live. There are other techniques like HTTP long polling, where the client makes a "blocking" request (as you referred to it): the service holds the request for a period shorter than a timeout (say 50 sec) and writes a response when it has data. The web client then immediately issues another request. This loop creates a continuous channel where messages can be "sent" from the server to the client, but since it is initiated from the client it plays nicely with firewalls, proxies, etc.
There are libraries such as socket.io, SignalR and many others that wrap this logic, even fall back from WebSockets to long polling gracefully for you, and abstract away the details.
I would recommend writing some sample WebSocket and long-polling examples just to understand the mechanics, but then relying on libraries like those mentioned above to get it right.
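As a rough illustration of the long-polling loop described above (the notification URL and the 55-second read timeout are made-up values; a real client would also need error handling and backoff):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.SocketTimeoutException;
    import java.net.URL;

    public class LongPollClient {
        public static void main(String[] args) throws Exception {
            while (true) {
                HttpURLConnection conn = (HttpURLConnection)
                        new URL("http://example.com/notifications").openConnection();
                conn.setReadTimeout(55_000); // slightly longer than the server's ~50 s hold
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println("notification: " + line); // handle pushed data
                    }
                } catch (SocketTimeoutException e) {
                    // no data arrived this round; just loop and poll again
                }
                // immediately issue the next blocking request
            }
        }
    }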

Load balance large number of https POST calls in Vert.x

I am looking to build a Java backend that services an incoming REST call by making a large number of HTTPS POST requests to other HTTP servers. A single REST call will translate into aggregating the data from several thousand HTTPS POST requests. Since I am looking to scale to several thousand remote HTTP servers, I am evaluating Vert.x to do event-driven, non-blocking HTTPS POST requests.
I'd like to know the best approach to distribute these POST requests among my verticles. I do see several examples of verticles implementing HTTP servers, with each verticle waking up on a connect event. However, I do not see any examples of HTTP client verticle load balancing in Vert.x.
One approach would be to have verticles synchronize their access to a producer/consumer queue, but it would not be a good idea to have blocking code in a verticle.
Your best bet might be using createHttpServer and/or RouteMatcher, which are included with Vert.x. The great thing about either is that they scale dynamically with zero configuration. Other kinds of services have to be load-balanced at a separate level, but Vert.x does this for you with HTTP servers.
If you're looking to deploy your application across multiple machines, you might want to look into Hazelcast (Vert.x uses this in its core libraries, so it ships with Vert.x): http://www.hazelcast.com
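A minimal sketch of that idea, assuming Vert.x 3.6+ with the vertx-web Router (the route, port, and instance count are placeholders): several instances of the same HTTP server verticle can listen on one port, and Vert.x round-robins incoming connections across them with no extra configuration.

    import io.vertx.core.AbstractVerticle;
    import io.vertx.core.DeploymentOptions;
    import io.vertx.core.Vertx;
    import io.vertx.ext.web.Router;

    public class ApiVerticle extends AbstractVerticle {
        @Override
        public void start() {
            Router router = Router.router(vertx);
            router.post("/aggregate").handler(ctx ->
                    // fan out the outbound HTTPS POSTs here with a non-blocking client,
                    // then write the aggregated result to ctx.response()
                    ctx.response().end("accepted"));
            // Each instance registers on the same port; Vert.x shares the connections.
            vertx.createHttpServer().requestHandler(router).listen(8080);
        }

        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();
            vertx.deployVerticle(ApiVerticle.class.getName(),
                    new DeploymentOptions().setInstances(4)); // e.g. one instance per core
        }
    }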

Is ReST over websockets possible?

I am planning to develop a web-based chat application that takes in ReSTful requests, translates them to XMPP, and delivers them to an XMPP server.
Using WebSockets for this kind of chat application looked promising, as the events (or responses) can be delivered asynchronously. But if I use WebSockets as the underlying protocol for transferring the requests from the browser, can this still be considered a ReSTful design? If yes, how are the URIs, verbs (GET, POST, ...), and parameters represented in the WebSocket message? Wrap them in XML/JSON and send that?
Also, ReSTful architecture states that no session state will be stored on the server. But in this case, when an XMPP client session is created, the state of this session will be stored on the server (violating the stateless constraint).
REST is an architectural style that does not impose a protocol. So yes, you can do REST with Web Sockets, REST with HTTP and REST with FTP if you like.
The main reason to use HTTP is that it is easy and fairly simple to communicate with any component or programming language via HTTP, and also because HTTP supports distributed environments with multiple intermediaries: proxies, firewalls, and so on. So you can deploy your service on any topology and anyone will be able to access it.
My rant:
If you are a RESTliban and Roy Fielding's dissertation is the source of truth, verbs are never acknowledged as part of the semantics; URIs are the semantics. The usage of different verbs for different actions has been an elegant evolution of REST over HTTP, but it is not part of the "truth". You can check the scenario of REST over HTTP evaluated by Roy in chapter six of his dissertation: no mention of verbs. And notice it is an evaluation scenario, not the specification.
TLDR;
If you need real-time, two-way communication over the internet and the client is a web browser, the best choice is Web Sockets. You could then implement an application-level protocol on top of Web Sockets to implement a RESTful web service.
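To make the "application-level protocol" idea concrete, here is a hedged sketch using the standard Java WebSocket client API (JSR 356); the JSON framing with method/uri/id fields and the endpoint URL are invented for illustration, not an existing convention:

    import java.net.URI;
    import javax.websocket.ClientEndpoint;
    import javax.websocket.ContainerProvider;
    import javax.websocket.OnMessage;
    import javax.websocket.OnOpen;
    import javax.websocket.Session;
    import javax.websocket.WebSocketContainer;

    @ClientEndpoint
    public class RestOverWsClient {

        @OnOpen
        public void onOpen(Session session) throws Exception {
            // Hypothetical framing: wrap the verb, URI and a correlation id in JSON.
            String frame = "{\"method\":\"GET\",\"uri\":\"/rooms/42/messages\",\"id\":1}";
            session.getBasicRemote().sendText(frame);
        }

        @OnMessage
        public void onMessage(String message) {
            // The server would echo the id so the client can match request and response.
            System.out.println("response frame: " + message);
        }

        public static void main(String[] args) throws Exception {
            WebSocketContainer container = ContainerProvider.getWebSocketContainer();
            container.connectToServer(RestOverWsClient.class,
                    URI.create("ws://localhost:8080/chat")); // placeholder endpoint
            Thread.sleep(2000); // crude: keep the JVM alive long enough to see a reply
        }
    }

The correlation id is what keeps the request/response pairing explicit once replies can arrive asynchronously and out of order over the same socket.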
Yes. You can use REST over WebSocket with a library like SwaggerSocket.
Why would you want to build a REST API on top of sockets? IMHO the benefit of a REST API is to leverage standard HTTP protocol possibilities like stateless requests and semantic verbs like GET and DELETE to build an API that can be easily understood by (client) developers. Since sockets do not offer HTTP verbs and so on, you would have to build some kind of HTTP layer for sockets, which is IMHO not reasonable.
In case you would really build such a thing, I'd recommend using the HTTP protocol as a blueprint and implementing the socket protocol like HTTP.
The REST architectural style mostly presumes two entities, viz. client and server.
As we move more towards the real-time web and the development of reactive systems, WebSockets will increasingly start replacing the use of REST APIs.
WS allows data push as well as pull, which blurs the strict separation of server and client.
STOMP, AMQP, XMPP can be used as messaging protocols.
The data itself may be JSON, Google Protocol Buffers, or Apache Avro.
WebSockets are not tied to web servers and browsers; they can also be used in standalone apps such as mobile or desktop applications.
I don't understand why you would convert XMPP into REST and then run REST over WS. The point of WebSocket is to take the XMPP protocol directly to the browser, thereby avoiding all of the translation issues.
There are JavaScript libraries that can talk XMPP from the browser to the server. All you need is to proxy the XMPP traffic from WS over into TCP and then straight into your XMPP server. Kaazing has a gateway that allows you to do this.
If you want to use open source, you will need to write a JavaScript XMPP library. There are examples that show how to write a JS library for simple protocols. You just have to find one and extend the concept to the XMPP protocol.
So to recap, here are the way the architecture would look:
Your XMPP Client code <-> XMPP JavaScript Library <-> WebSocket over http <-> WebSocket to TCP Proxy <-> XMPP Server
where the XMPP Client code and the XMPP JavaScript Library runs in the browser, and the WS to TCP proxy along with the XMPP server are all server-side.
I understand this post is really old, but wanted to interject a bit further on the notion that "So if I choose a REST architecture I forfeit the ability for real-time communications?".
In a word, no. A number of REST style implementations I have had experience with leverage REST for compatibility, discoverability, and as a means to scale across different devices in the shadow of IoT.
However, WS can be used in addition to REST to facilitate near-real-time transmission, and there are a number of abstractions that really help with this and allow you to focus on building your API and deciding how the real-time components of the consuming applications should operate.
I would suggest taking a look at things like Tibco Smart-Sockets, or SignalR if you're looking to build a REST API and would like to avoid re-creating the wheel for your RT needs.
I created a project that adds callbacks to the web socket send function: https://github.com/ModernEdgeSoftware/WebSocketR2
Message IDs are established so the client can implement callbacks. It handles message retries after timeouts, as well as reconnects to the server if the connection gets dropped. You can then structure your payload to be as RESTful as you want by adding verbs and paths.
This is similar to when a video game studio uses UDP to achieve the speeds they need, but their net code implements a lot of TCP like features for reliability.
The OP's original question is: "Is ReST over websockets possible?"
What this question implies is the following: is a REST API possible over WebSockets as a transport?
Of course, the OP did not mean the following: is the REST architectural style possible over WebSockets? His question was more an operational one, i.e. can REST API requests, such as GET, PUT, POST, DELETE, etc., be exchanged over a WebSockets pipe?
To answer this question, we have to understand that both sockets and WebSockets are the same type of interface (full duplex, with a 3-way handshake), but the difference is that the sockets interface originated in the ARPANET reference model. In that network model, sockets were an interface between the Session layer and the Transport layer. The word "interface" means that it resides "in between" network layers, i.e. at their boundary. In other words, sockets are not part of any specific network layer.
WebSockets are the same type of socket interface, but in the OSI 7-layer network model they no longer reside between the Session and Transport layers. Instead, they reside between the Session layer and the Presentation layer. Why there? Why this "move"? The motivation was to be able to leverage the HTTP protocol as a transport for sockets. And what is so special about the HTTP protocol? In enterprise establishments there are a lot of network zones and segments, and these security domains are protected by firewalls. Firewalls, as we know, have associated rules for inbound/outbound traffic. If we want two components in two different network zones to talk to each other, we have to ensure that the ports on the corresponding firewalls are open. That would involve collaboration with infrastructure and operations teams, business approvals, etc., and would introduce significant delays in achieving a simple thing: two components communicating with each other.
Which brings us to our use case: the WebSockets interface placed between the Session OSI layer (where HTTP resides) and the Presentation OSI layer (where things like TLS reside). By default, port 80 is open on all firewalls, so no involvement of operations and infrastructure is needed, and our two components can now converse over a WebSockets communication pipe.
Back to the OP's question. Any type of string list can be transferred over sockets. Sockets/WebSockets are an ideal mechanism for transporting all sorts of custom protocols, whether they are STOMP, HL7, FHIR, or many others. GET, PUT, POST and DELETE requests are different operations on a REST API endpoint. These operations take the form of a specific string list, and as we saw, sockets/WebSockets are very convenient for passing string lists back and forth. In the case of REST over HTTP, though, you are leveraging the whole HTTP "infrastructure" available in all modern browsers, such as Chrome, Firefox, Edge, etc., as well as web servers such as Apache, nginx, IIS, OHS, IHS, etc. In other words, the REST API piggybacks on an established, string-list-based protocol called HTTP that is built into both the client and server sides. This cannot be said about WebSockets: you would have to ensure that every type of client and server complies with your (custom) transport solution based on WebSockets!
I just spotted a new post on the blog of a company providing a cloud solution and "Server-end/Service as a Platform" (SaaS) for games.
I'm not advertising this company, nor have I used them, so I don't even know how good or bad they are.
However, they explain very clearly the reasons for, and the benefits of, using WebSockets with REST.
Have a read on their blog.
REST requires a stateless protocol according to the statelessness constraint; WebSocket is a stateful protocol, so it is not possible.

How often does RESTful client pull server data

I have a RESTful web service application that I developed using the NetBeans IDE. The application uses a MySQL server as its back end. What I am wondering now is how often a client application that uses my RESTful application would refresh to reflect data changes on the server.
Are there any default pull intervals that clients get from the RESTful application? Does the framework (JAX-RS) do something about it, or is that my business to take care of?
Thanks in advance
#Abraham
There are no such rules. The only thing you can use to implement this properly is HTTP's caching capabilities. The service must include control information saying how long the representation of a particular resource can be cached, whether it must be revalidated, whether it must never be cached, etc.
On the client application side of things, each client may decide its own way of keeping itself in sync with the service. It can be done by storing data locally and serving the end user from a local cache, etc. The service cannot (and shouldn't) know how clients are implemented; the only thing the service can do is include caching information in response messages, as I already mentioned above.
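For example, a hedged JAX-RS sketch (the resource path and the 60-second max-age are arbitrary choices, not defaults): the service attaches Cache-Control metadata to each response and leaves it to the client how to use it.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.CacheControl;
    import javax.ws.rs.core.Response;

    @Path("/reports")
    public class ReportResource {

        @GET
        @Produces("application/json")
        public Response latest() {
            CacheControl cc = new CacheControl();
            cc.setMaxAge(60); // clients may reuse this representation for 60 seconds
            return Response.ok(loadLatestReport()).cacheControl(cc).build();
        }

        private String loadLatestReport() {
            return "{\"status\":\"ok\"}"; // placeholder for the MySQL-backed lookup
        }
    }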
It is your responsibility to schedule the calls to the service to execute again and again. You can set a timeout interval, but there is no built-in pull interval.

Rate-limiting RESTful consumer

I am consuming a rate-limited web service, which only allows me to make 5 calls per second. I am using my server to proxy these calls to a web client:
Mashery > My Web Server > Client's Browser
I have optimized the usage of this web service, but there are still occasional times when I go over the rate limit. What I would like to do instead is hold the client's request for one second (or longer if warranted) before making the web service call to Mashery.
There are some ways I can solve this by building a simple queue system with a database back-end, but I'd rather avoid that if something already exists. Does something already exist to rate-limit the consuming side of this?
To reliably throttle requests to only 5 per second, I would employ a queue/worker system. But before that, I would just request a higher rate limit by contacting API support for that platform. Of course, I would also consider using caching if it's a read-only method, you're pulling down a lot of the same data, and it's compliant with the API TOS.
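If you do end up throttling in your own proxy layer first, a minimal sketch with Guava's RateLimiter (the class and method names around the Mashery call are placeholders; the 5 permits per second matches the limit from the question) could look like this:

    import com.google.common.util.concurrent.RateLimiter;

    public class MasheryProxy {

        // Smooths outgoing traffic to at most 5 permits per second.
        private static final RateLimiter LIMITER = RateLimiter.create(5.0);

        public String forward(String clientRequest) {
            LIMITER.acquire();                 // blocks until a permit is available
            return callMashery(clientRequest); // the actual upstream call
        }

        private String callMashery(String request) {
            // placeholder for the real HTTP call to the rate-limited API
            return "upstream response for " + request;
        }
    }

A blocking limiter like this holds the client's request briefly, as you describe, which works for occasional bursts; for sustained overload you still want the queue/worker approach or a higher limit.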