Deploy HTTP/2 to an L7 load-balanced pool of web servers

I have to implement HTTP/2 on a website and am working through the implementation plan.
The infrastructure is as such :
Users ( connecting over tcp 443 only )
|
V
asa fw
|
V
l7 lb ( ssl/tls terminates here and least connection load balancing )
/ \ encrypted communication using self signed cert between
V V lb and web servers
web01 web02
The question is:
In order to have the website fully HTTP/2 compliant (HTTPS, multiplexing, server push, and the like), do both the LB and the web servers have to support HTTP/2, or only the web servers?
I believe both of them need to support HTTP/2, for reasons such as: a) this is a layer-7 LB, hence HTTP-protocol aware; b) the termination happens on the LB; c) the full journey from visitor to web server must be HTTP/2 compliant.
But as I am not an expert on the subject, I'd appreciate it if anyone out there could share their personal experience.
Thanks

You need to support HTTP/2 where the HTTPS terminates.
I'm not sure why you think you NEED to be fully HTTP/2 compliant: most of the benefits, at least at present until server push becomes more mainstream, are gained by speaking HTTP/2 at the client-facing endpoint. See here for more info:
HTTP2 with node.js behind nginx proxy
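As a minimal sketch of that idea (nginx used here purely for illustration as the terminating proxy; host names, certificate paths, and backend addresses are assumptions), HTTP/2 is enabled only on the client-facing listener, while the connections to the web servers remain HTTP/1.1:

```
# Hypothetical nginx config: HTTP/2 terminates at the load balancer;
# traffic to the web servers is proxied over HTTP/1.1 + TLS.
server {
    listen 443 ssl http2;              # HTTP/2 negotiated with clients via ALPN
    server_name www.example.com;       # assumed host name

    ssl_certificate     /etc/nginx/tls/example.crt;   # assumed paths
    ssl_certificate_key /etc/nginx/tls/example.key;

    location / {
        proxy_pass https://webservers; # backend pool, HTTP/1.1 over TLS
        proxy_http_version 1.1;
    }
}

upstream webservers {
    least_conn;                        # matches the least-connection policy above
    server web01.internal:443;
    server web02.internal:443;
}
```

Browsers only negotiate HTTP/2 on the connection they open themselves, so the LB-to-backend leg staying on HTTP/1.1 is invisible to clients.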

Related

Outgoing connection proxy for http ingoing traffic

I've got two applications, a client and a rest server on two different servers.
The server is in the DMZ, and the client is on a hosted server.
My enterprise IT department wants only outgoing connections to the hosted server, so that the firewall only ever sees outbound traffic.
They suggest the following architecture:
hosted dmz
Client <---------------> Server
Proxy server Proxy client
1) Proxy server opens a tcp socket
2) Proxy client connects to this tcp socket permanently
3) http requests can be forwarded from client app to rest server app through the tcp connection
Do you know of any software that implements such an active proxy mechanism? (e.g. Apache, nginx...)
Is it more secure than just opening port 80 for the web-hosted machine?
Do you know any software that implements such an active proxy mechanism ?
There are a variety of solutions, from netcat (nc) to SOCKS5. I suggest starting with netcat, since it is much easier to understand and configure.
Is it more secure than just opening port 80 for the web hosted machine?
Yes, it is more secure than just forwarding port 80 to the DMZ, since you are punching a hole in the firewall only for that specific flow.
On the other hand, adding an access list on top of the port-80 forward should make the two roughly equivalent, but there may be other constraints, like corporate politics, hardware limitations, etc.

Complex HAProxy setup with HTTP/2 and HTTP/1 together

We have a complex HAProxy setup where one HTTPS frontend is used with multiple backends chosen by path, host name, subdomain, SNI, SSL termination, etc. Now, in one of our cases, we need to use HTTP/2 with all those HTTP backends, and most importantly it must support host-name-based routing to different backends. Our backends support both HTTP/2 and HTTP/1; we just need to terminate SSL and forward traffic there based on host name.
I couldn't find many examples related to HTTP/2 in HAProxy. All the ones I found used mode tcp in the frontend. I am afraid using mode tcp could break my HTTP routing.
So how can I accomplish this?
Our Haproxy version is 1.7.1
OS Ubuntu16.04
originally asked on discourse.haproxy.org
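For reference, HAProxy did not support HTTP/2 natively until version 1.8, so on 1.7 the only route is mode tcp, which (as you suspect) loses host-based HTTP routing. On 1.8 or later, a sketch like the following (frontend, backend, and certificate names are assumptions) keeps mode http, and therefore host-name routing, while terminating HTTP/2 via ALPN:

```
# Hypothetical HAProxy 1.8+ config: terminate TLS and HTTP/2 at the
# frontend, keep mode http so host-based routing still works.
frontend https_in
    mode http
    bind :443 ssl crt /etc/haproxy/certs/ alpn h2,http/1.1

    acl host_app1 hdr(host) -i app1.example.com
    acl host_app2 hdr(host) -i app2.example.com
    use_backend be_app1 if host_app1
    use_backend be_app2 if host_app2
    default_backend be_default

backend be_app1
    server web01 10.0.0.11:8080 check
```

HAProxy maps the HTTP/2 :authority pseudo-header onto Host, so the existing hdr(host) ACLs keep working for HTTP/2 clients.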

Which ports does Secure Gateway Client use?

I plan to place the Secure Gateway Client in the DMZ of an on-premises environment, so I need to open outbound ports for the SG Client to connect to Secure Gateway on Bluemix. The following question is similar to mine, but the answer doesn't list the required ports.
For the Bluemix Secure Gateway service, how does the data center's network need to be configured?
The following Bluemix Doc shows Outbound 443 is needed.
https://www.ng.bluemix.net/docs/troubleshoot/SecureGateway/ts_index-gentopic1.html#ts_sg_006
What are the best practices for running the Secure Gateway client?
Before you install the Docker client into your environment, ensure that both the internet and your on-premises assets are accessible and all host names are resolvable by a DNS. The client uses outbound port 443 to connect to the IBM Bluemix environment; normally this port is open since it is secure. Ensure you check or modify additional firewall and iptables rules that might apply.
But the tcpdump I captured when I executed "docker run -it ibmcom/secure-gateway-client XXXX" showed that the SG Client used outbound 443 and 9000. Is it correct that the only ports the SG Client uses are outbound 443 and 9000?
Correct. If you are closing down both outbound and inbound ports with your firewall, then for outbound traffic allow ports 443 and 9000. So your initial assertion is correct.
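As an illustrative sketch only (iptables shown; a default-deny outbound policy is an assumption about your firewall stance), the outbound allowances would look like:

```
# Hypothetical iptables rules on the SG client host: default-deny outbound,
# then allow established traffic plus the two ports the client needs.
iptables -P OUTPUT DROP
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443  -j ACCEPT
iptables -A OUTPUT -p tcp --dport 9000 -j ACCEPT
```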

PingFederate SSO on port 9031

Why do SSO providers like PingFederate run on ports that aren't well known, like 9031? Does this enhance security? It seems like it just increases connectivity issues in organizations with strict firewall rules.
That's just a default semi-random port so that it doesn't clash with existing services on the same machine and is a high port so that the server can run under a non-privileged user account.
For production usage one would typically change it to 443 and/or run a reverse-proxy/loadbalancer in front of the SSO server (on port 443).
Generally, security is managed at the perimeter of a network. For the deployments I have been involved in, port 443 is predominantly used for SSO (e.g. PingFederate) at the perimeter. For the internal network, I have seen two models: (i) change the HTTPS port in PingFederate to 443, or (ii) use load-balancer port forwarding from 443 to 9031. I usually see item (i) in Windows deployments and item (ii) in Linux deployments, where reserved ports are avoided. There really isn't a true security enhancement in either pattern.
As Hans points out, PingFederate uses 9031 as a default so that conflicts with other processes on a server are avoided when first deploying the technology. As the SSO capability matures into an environment, the proper port for the service can be managed. The default port avoids installation issues that can be frustrating to people new to the technology.
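A sketch of option (ii) on a Linux host, redirecting 443 to 9031 so PingFederate never needs to bind a privileged port (iptables shown for illustration; interface and rule placement are assumptions):

```
# Hypothetical NAT redirect: clients connect to 443, while PingFederate
# keeps listening on its default 9031 under an unprivileged account.
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 9031
```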

How to disable HAProxy after backend servers are down

Can anyone help me with this issue? I have installed the HAProxy load balancer and it is working perfectly, but the problem is elsewhere. When the application connects to the backend server directly, without the load balancer, and the server is down, the application says "trying to reconnect". This is good, because the user knows that the server is down. But when the application connects through the load balancer and the server is down, the application stays open and doesn't say "trying to reconnect". This is because the app is connected directly to HAProxy and thinks the connection is fine. Do you have any ideas how to make HAProxy refuse connections (or shut the service down) when all backend servers are down, and come back up again when some of the servers are up?
I think you're asking the same question as How can I make HAProxy reject TCP connections when all backend servers are down
You want to explicitly reject the connection if backend servers are down:
acl site_dead nbsrv lt 1
tcp-request connection reject if site_dead
Or acl site_dead nbsrv(backend_name) lt 1, where backend_name is the name of the specific backend to check.
nbsrv documentation
acl documentation
tcp-request documentation
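Putting those pieces together, a minimal sketch (frontend, backend, and server names are assumptions) might look like:

```
# Hypothetical HAProxy config: refuse new connections outright when no
# server in bk_app passes its health check, so the client sees a hard
# failure instead of a silently accepted connection.
frontend ft_app
    mode tcp
    bind :5000
    acl site_dead nbsrv(bk_app) lt 1
    tcp-request connection reject if site_dead
    default_backend bk_app

backend bk_app
    mode tcp
    server web01 10.0.0.1:5000 check
    server web02 10.0.0.2:5000 check
```

As soon as a health check succeeds again, nbsrv rises above 0 and HAProxy accepts connections without any restart.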