WebSockets and REST API in the same Tomcat-based application

I have read up on WebSockets providing full-duplex connections over TCP, which can be used in scenarios where long polling was previously needed to push live updates from the server to the client. Now I have a Tomcat-based application which serves multiple REST-based web service responses, and I want a couple of APIs to be implemented using WebSockets, say to render a dashboard with the latest data where multiple users are working on it concurrently. Is that possible? My concern here is that even if the connection is upgraded from HTTP to the WebSocket protocol, wouldn't the WebSocket require a separate port to run on rather than the default Tomcat port 8080? In that case, should I house the WebSocket-based endpoints separately from the Tomcat-based application already running? Please do correct me if any of the above is wrong.

A couple of months ago, I wrote a small Spring Boot webapp with embedded Tomcat that provides both REST endpoints and WebSocket support, both via the same port. So, yes, that works... if you wanna sneak a peek: https://github.com/tommybrettschneider/pinterest-boot
Besides that, this post should also clarify things:
Shall I use WebSocket on ports other than 80?
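To make that concrete, here is a minimal sketch of a plain JSR-356 WebSocket endpoint that can sit in the same webapp as your REST resources and is served on the same Tomcat port (8080). The class name and the /dashboard path are only illustrative, and a Tomcat version with JSR-356 support (Tomcat 8, or a late 7.0.x release) is assumed.

```java
import java.io.IOException;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Hypothetical endpoint: clients connect to ws://host:8080/<context-path>/dashboard,
// i.e. the same port Tomcat already uses for the REST endpoints.
@ServerEndpoint("/dashboard")
public class DashboardEndpoint {

    @OnOpen
    public void onOpen(Session session) throws IOException {
        // Push an initial snapshot when a dashboard client connects.
        session.getBasicRemote().sendText("{\"status\": \"connected\"}");
    }

    @OnMessage
    public void onMessage(String message, Session session) throws IOException {
        // Echo back; a real dashboard would broadcast updates to all open sessions.
        session.getBasicRemote().sendText(message);
    }
}
```

Clients then connect to ws://host:8080/<context-path>/dashboard while the REST endpoints keep answering normal HTTP requests on the same port.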

Related

Is it possible to run a Golang REST web app on an internal (private) IIS server?

I would like to create a web service with Go that runs either on IIS (7, 8 or 10) or under Tomcat 7.0. We have multiple environments, each with multiple servers, all running Windows 2008 R2, 2012 or 2016. All servers are private (10.x). My goal is to add some REST services to a COTS product that uses both IIS and Tomcat. I'd prefer to avoid gluing nginx or something else onto either server at the risk of impairing the COTS product or having their tech support people not answer the phone. Again, the servers are only accessible via corporate VPN and are not public internet-facing.
Which server would offer the easiest path to get something working -- Tomcat or IIS?
That's not really about Go, but there are at least two solutions I can think of:
Reverse proxying of HTTP requests:
Write a plain Go server serving requests via HTTP.
Maybe turn it into a proper Windows service using golang.org/x/sys/windows/svc.
Deploy it.
If it's to be hosted on the same machine which runs IIS, then it's fine to make it listen on localhost only. Note that it will need a dedicated TCP port to listen on, and you'll need to make that port configurable in your server.
Set up reverse proxying in your IIS so that it forwards requests coming to whatever (part of an) URL you want to the Go server.
Use FastCGI:
Go supports serving requests over the FastCGI protocol by means of its standard library, and IIS supports FastCGI workers. So it's possible to (re-)write your Go server to use FastCGI instead of HTTP and then hook it into IIS via this protocol.
The pros and cons of these solutions, as I view them, are:
Serving over plain HTTP is more familiar to developers and makes the server more "portable", in the sense that it will be easier to change its deployment scheme if/when you need to, right up to making it available to the Internet directly. Conversely, with FastCGI, you'll always need a FastCGI host as middleware.
There have been rumors that Go's HTTP code is more fine-tuned for performance than its FastCGI code. Still, this will only be a concern under reasonably hard-core loads.
One possible upside of FastCGI over HTTP is that it can be served over pipes rather than TCP. For instance, you might get it served over named pipes, which are supported by IIS's FastCGI module, and there exist third-party packages for Go implementing support for them (even including one from Microsoft®). The upside of this is that pipes are believed to incur less overhead for data transfer (basically it's just shoveling bytes between in-kernel buffers belonging to two processes instead of pushing them through the whole TCP/IP stack), and using pipes frees you from the need to dedicate a TCP port to the Go server. Still, I have no personal experience with this kind of setup and its performance trade-offs.

Application Server and Web Server on Two Different Machines

Today I'm hosting a Laravel v4 web application on a MacMini. Why a Mac? Because I created the application logic in Objective-C (leveraging my experience with iOS dev). Whether or not this was the right choice isn't the point of the question.
What I'm interested in knowing is how can I separate my web and application server. For instance, if I put my web server on Linode (or whatever) how do I go about communicating back and forth between the web server and the application server? Is there some sort of resource I can look to to understand how to do this?
Assumptions
Here are some assumptions I'm making:
I'm guessing Laravel and the Objective-C application are part of the same "system", so I'm just gonna treat this as if you need a web server to send requests to a PHP application.
The Linode server will be a web server which sends requests to the PHP application (Laravel).
Hosting PHP Applications
There are three moving parts:
The web server (Apache, Nginx)
The application gateway (PHP-FPM)
The application
The gateway and the code must live on the same computer/server. The web server can live on a separate computer/server.
This means you'll need your Macintosh to run PHP-FPM, which can then listen for remote connections and send them to the PHP application.
Macintosh
Install php-fpm on your Mac. Make sure it can listen for remote network connections. This is usually done via the listen directive in the www.conf file; you can listen for connections on the remote network interface (whatever IP address the computer is assigned).
Linode
Install Nginx or Apache and have it proxy FastCGI requests to your Macintosh server at the Mac's IP address (the one you set it up to listen on in the step above).
Firewalls
You may need to ensure the firewalls at both ends allow incoming/outgoing connections on the networks being used to communicate with each other.

Dynamic webpages: how many TCP connections for each include/GET request?

I have a semantic web/RDF application which runs on Tomcat and is backed by MySQL. Apache HTTPD is in front as a reverse proxy, and the OS is Red Hat Linux 6. I am seeing a lot of connections to port 80 per IP. What I want to know is what determines whether the include for a CSS or .js file is served over the existing TCP socket, or whether a new one is created for each GET that occurs while the web page is "built". Is that exclusively the application itself, or is it the Apache web server, or the Linux kernel as well?

Can a Java web app listen on a TCP port in a local network?

Forgive the triviality of my question. I was asked this question and wasn't able to find a proper answer, so I decided to research it myself and understand. I have a Spring, Maven, etc. background. Supposing I deployed my web app on a box at 192.168.0.10 in my network, can I listen on, say, port 9090 of 192.168.0.10 and do something with it in my application itself, running on Tomcat 7 on the usual port 8080?
All this is supposed to do is listen on a port and display a graph on the client side based on the value received.
I was thinking that, using Maven, I would have a JAR-packaged project handling the networking bit and transferring control to the web app. Even that is really blurry in my mind.
Can anyone clarify things a little bit for me?
Thanks in advance
Why do you need a different port? Effectively, your web app is already callable on the port provided by Tomcat. You can have various servlets, each distinguished by URL, and one can return graphs. There are lots more possibilities, but I don't see any need for another port.
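To illustrate that idea, here's a minimal sketch (the class name and the /graph-data mapping are made up for this example): a plain servlet in the same webapp, reachable on the usual Tomcat port 8080, could return the latest value that the client-side graph polls for.

```java
import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet: answers on http://192.168.0.10:8080/<context-path>/graph-data
@WebServlet("/graph-data")
public class GraphDataServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // In a real app this value would come from whatever you are measuring;
        // a random number stands in for it here.
        double latestValue = Math.random() * 100;
        resp.setContentType("application/json");
        resp.getWriter().write("{\"value\": " + latestValue + "}");
    }
}
```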

Highly available standalone java server built using J2SE

What is the best way to make a standalone Java server built using the J2SE Socket API highly available? Using an HTTP server would have been a good choice, especially for the built-in features, e.g. security, clustering, transactions, etc., but the server should be capable of accepting TCP/IP socket connections from Java & non-Java clients (mainly legacy). Tomcat does not accept non-HTTP TCP/IP requests, right? Moreover, this post points out that using a servlet to implement a socket connection is not good practice. What would be a good approach?
After exploring online, this is what I have come up with. A standalone Java application can be made highly available by using a combination of the following:
Two VMs deployed with HAProxy and keepalived to form the highly available load-balancing layer.
Keepalived will keep the load balancers in active-passive mode, and HAProxy will forward the requests to a cluster of backend socket-based Java server apps.
At least two VMs deployed with the custom socket-based Java server apps (see the sketch below). The HAProxy servers will distribute the requests over these two VMs.
Use at least two Terracotta servers to share state between the Java server apps. Terracotta will provide the memory sharing and help the custom Java servers scale.
Use MySQL NDB Cluster for the database.
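For what it's worth, the "custom socket-based Java server apps" in a layout like this can be as simple as a plain J2SE ServerSocket loop. The sketch below is only illustrative (port 9000 and the line-based ACK protocol are arbitrary assumptions); each instance would run on one of the VMs behind HAProxy.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal plain-TCP server; HAProxy would balance client connections across
// two or more instances of this process running on separate VMs.
public class LegacySocketServer {

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(50);
        try (ServerSocket server = new ServerSocket(9000)) {  // arbitrary example port
            while (true) {
                Socket client = server.accept();
                pool.submit(() -> handle(client));
            }
        }
    }

    private static void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                // Shared state (e.g. via Terracotta or the database) would be consulted here;
                // this sketch just acknowledges each line it receives.
                out.println("ACK " + line);
            }
        } catch (Exception e) {
            // A production server would log and handle client errors properly.
        }
    }
}
```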
Any suggestions?