I have an HAProxy server through which requests are routed to the backend. The backend servers run Node.js/Geddy. Enabling gzip on Geddy requires changes to a core module, and I did not want to risk doing that.
My question: if I enable compression in HAProxy, can it be configured to handle the compression/decompression itself, sending the uncompressed request to the Geddy backend and the compressed response back to the client?
          compression/decompression          uncompressed request,
               happens here                    no change to code
browser <=============> HAProxy <=============> Node/geddy1
On the other hand, if there is an easy way to enable compression on Geddy that I'm missing, I'd be happy to implement that. Also, if anyone wants to discuss my exploration of Geddy, I'd be happy to document the code changes I believe are needed so that someone else can review them.
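For concreteness, a hedged sketch of the configuration I have in mind (assuming HAProxy 1.5+, which added response compression; the backend name, address, and MIME types are placeholders):

backend geddy
    mode http
    # Compress responses in HAProxy instead of in Geddy.
    compression algo gzip
    compression type text/html text/plain text/css application/json application/javascript
    # "offload" strips Accept-Encoding from the forwarded request, so the
    # backend never attempts compression and sees plain traffic only.
    compression offload
    server geddy1 127.0.0.1:4000 check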
Related
What configuration should I set to improve web server performance (for example, Apache Tomcat)? I need my web server to process many requests at the same time.
Something like Facebook's HipHop?
Or should I just use a cache, e.g., Redis?
To improve web server performance, there are a few steps to follow:
Implement a high-performance reverse proxy, like nginx.
Let nginx handle all static content delivery.
Terminate SSL/TLS negotiation in nginx, because traffic encryption uses CPU time and memory.
Do your content compression (gzip) in nginx.
In summary, avoid assigning your web server any task other than handling dynamic content and backend processing; see the sketch below.
PS: You can also take a look at Varnish.
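A minimal sketch of that offloading in nginx (certificate paths, document root, and the upstream address are placeholders):

server {
    listen 443 ssl;                  # SSL/TLS terminated here, not in the app
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    gzip on;                         # compression done by nginx
    gzip_types text/plain text/css application/json application/javascript;

    location /static/ {              # static content served directly from disk
        root /var/www/example;
    }

    location / {                     # only dynamic requests reach the backend
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}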
I'm working on a microservice architecture based on Docker, Registrator, Consul, and HAProxy.
I'm also using consul-template to dynamically generate the HAProxy config file. Everything works fine: when I add multiple instances of the same microservice, the HAProxy configuration is updated immediately and requests are dispatched correctly using a round-robin strategy.
My problem occurs when I remove some instances (scale down). If a container is shut down while a request is in flight, I get an error.
I'm new to HAProxy, so: is there a way to configure HAProxy to retry a failed request against another endpoint if a container disappears?
To clarify: I'm using layer 7 routing (mode http) for my frontends and backends. Here is a small sample of my consul-template file:
backend hello-backend
    balance roundrobin
    mode http
    {{range service "HelloWorld" }}server {{.Node}} {{.Address}}:{{.Port}} check
    {{end}}
    # Path stripping
    reqrep ^([^\ ]*)\ /hello/(.*) \1\ /\2

frontend http
    bind *:8080
    mode http
    acl url_hello path_beg /hello
    use_backend hello-backend if url_hello
Thank you for your help.
It isn't possible for HAProxy to resend a request that has already been sent to a backend.
Here's a forum post from Willy, the creator.
redispatch only happens when the request is still in haproxy. Once it has been sent, it cannot be performed. It must not be performed either for non-idempotent requests, because there is no way to know whether some processing has begun on the server before it died and returned an RST.
http://haproxy.formilux.narkive.com/nGKXq6WU/problems-with-haproxy-down-servers-and-503-errors
The post is quite old, but it is still applicable based on more recent discussions. If a request is larger than tune.bufsize (16 KB by default), HAProxy hasn't even retained the entire request in memory at the point an error occurs.
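What HAProxy does support is retrying while the request is still held in HAProxy, i.e. when the connection to a server fails before anything has been sent. A minimal sketch (server names and addresses are placeholders):

backend app
    mode http
    # Retry a failed *connection* up to three times...
    retries 3
    # ...and allow the retry to pick a different server.
    option redispatch
    server node1 10.0.0.1:8080 check
    server node2 10.0.0.2:8080 check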
Both fortunately (for the craft) and unfortunately (for purposes of real-world utility), Willy has always insisted on correct behavior by HAProxy, and he is indeed correct that it is inappropriate to retry non-idempotent requests once they have been sent to a back-end server, because there are certainly cases where this would result in duplicate processing.
For GET requests which, by definition, should be idempotent (a GET request must be repeatable without consequence, otherwise it should not have been designed to use GET -- it should have been POST or another verb) there's a viable argument that resending to a different back-end would be a legitimate course of action, but this also is not currently supported.
Varnish, by contrast, does support a do-over, which I have used (behind HAProxy) with success on GET requests where I have on-line and near-line storage for the same object namespace. Old, "unpopular" files are migrated to near-line (slower, cheaper) storage, but all requests are sent to on-line storage, with the retry destination of near-line if on-line returns a 404. But, I've never tried this with requests other than GET.
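In VCL that do-over looks roughly like this (a sketch under assumed backend names and Varnish 4 syntax, not the exact production config):

vcl 4.0;

backend online   { .host = "10.0.0.10"; .port = "8080"; }
backend nearline { .host = "10.0.0.20"; .port = "8080"; }

sub vcl_recv {
    # First attempt goes to on-line storage; a restart goes to near-line.
    if (req.restarts == 0) {
        set req.backend_hint = online;
    } else {
        set req.backend_hint = nearline;
    }
}

sub vcl_deliver {
    # A 404 from on-line storage triggers a retry of the same GET.
    if (resp.status == 404 && req.restarts == 0 && req.method == "GET") {
        return (restart);
    }
}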
Ideally, your solution would be for your back-ends to be declared unhealthy, perhaps by deliberately failing their HTTP health checks for a drain period before shutting down. One fairly simple approach is for the health check to require the presence of a static file, which gets deleted from the back-end before shutdown, as sketched below. Alternatively, you can put the backend into maintenance mode through the stats/admin UI or socket, preventing more requests from being initiated while allowing running requests to drain.
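A hedged sketch of the drain-before-shutdown approach (URL, names, and timings are placeholders): the check passes only while a static file exists on the node, so delete the file, wait for HAProxy to mark the server DOWN, then stop the container.

backend hello-backend
    balance roundrobin
    mode http
    # 404 after the file is deleted => check fails => server marked DOWN.
    option httpchk GET /health.txt
    server node1 10.0.0.1:8080 check inter 2s fall 2 rise 2

Alternatively, drive maintenance mode over the admin socket before shutdown:

# Requires: stats socket /var/run/haproxy.sock level admin
echo "disable server hello-backend/node1" | socat stdio /var/run/haproxy.sock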
I'm working with Django and Loggly, and I need to decide between using Loggly with rsyslog or with its RESTful API. For the second option, I'd use grequests, sending a single request at a time (i.e., just to make the calls non-blocking, but I wouldn't send requests in bulk).
What are the advantages of using rsyslog over the RESTful API and vice versa?
Haven't tested it yet, but using the syslog approach has several advantages:
You can centralize logs at the system level, without any particular configuration in the Django app.
Logging is decoupled from the Django app; you can set it to log to a file, a remote syslog server, or Loggly, without touching the Django app.
It should be faster if using UDP.
If using a centralized syslog server, you only have to set up the Loggly agent there.
On the other hand, using the REST API would couple the app to the Loggly implementation, and it could itself raise errors while trying to report errors (DNS resolution failures, network problems, etc.).
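For reference, the syslog route needs nothing beyond Python's stock handler in settings.py; a minimal sketch (the socket address is an assumption for a typical Linux box):

# settings.py -- route Django logs to the local syslog daemon (rsyslog),
# which can then forward to Loggly; no Loggly-specific code in the app.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'syslog': {
            'class': 'logging.handlers.SysLogHandler',
            'address': '/dev/log',  # or ('localhost', 514) for UDP
        },
    },
    'root': {
        'handlers': ['syslog'],
        'level': 'INFO',
    },
}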
I have a working setup using a hardware load balancer that controls redirection in such a fashion that all requests to http://example.com/login/* are redirected (using HTTP 302) to https://example.com/login/* and all requests that are NOT for /login are inversely redirected from HTTPS to HTTP.
This allows me to wrap the login functions and user/password exchange in SSL, but otherwise avoids slowing connections with encryption, and it also solves mixed-content warnings for embedded content in some browsers.
The load balancer, however, is end-of-life, and I am looking for a replacement solution, preferably in software.
I think HAProxy is going to be able to serve as my load-balancing solution, but I have only been able to find configuration examples and documentation for redirecting everything from HTTP to HTTPS, or vice versa.
Is it possible to do what I am proposing using HAproxy or should I look for a different solution?
I realize I will need to use the development version of HAProxy to support SSL at all.
I would suggest you do not use a DEV build for your production environment.
To answer your question, I would assume you're going to use HAProxy version 1.4:
Is it possible to do what I am proposing using HAProxy or should I look for a different solution?
Yes, it is possible, but you have to use another piece of software to handle the HTTPS traffic; Stunnel has proven to be good at this. So I'd say the setup is going to be:
HAProxy 1.4:

# Redirect http://../login to https://../login
# ("redirect prefix" appends the original URI to the prefix, so the
# prefix must not itself end in /login)
frontend HTTPSRedirect
    bind 1.2.3.4:80
    default_backend AppServers
    redirect prefix https://www.domain.com if { path_beg -i /login }

# Handler for requests coming from Stunnel4
frontend HTTPReceiver
    bind 5.6.7.8:80
    default_backend AppServers

Stunnel4:

[https]
accept = 443
; server mode needs a certificate; the path is a placeholder
cert = /etc/stunnel/stunnel.pem
; forward decrypted traffic to the HAProxy IP
connect = 5.6.7.8:80
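The question also asks for the inverse redirect (HTTPS back to HTTP for everything outside /login). A hedged sketch, reusing the addresses above: add the redirect to the frontend that receives decrypted traffic from Stunnel.

frontend HTTPReceiver
    bind 5.6.7.8:80
    default_backend AppServers
    # Anything that is not /login gets bounced back to plain HTTP.
    acl is_login path_beg -i /login
    redirect prefix http://www.domain.com code 302 if !is_login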
I'm looking to start using JavaScript on the server, most likely with Node.js, as well as using WebSockets to communicate with clients. However, there doesn't seem to be a lot of information about encrypted WebSocket communication using TLS and the wss:// scheme. In fact, the only server that I've seen explicitly support wss:// is Kaazing.
This TODO is the only reference I've been able to find in the various node implementations. Am I missing something or are the websocket js servers not ready for encrypted communication yet?
Another option could be using something like lighttpd or Apache to proxy to a Node listener; has anyone had success there?
TLS/SSL support works for this WebSocket implementation in Node.js; I just tested it: https://github.com/Worlize/WebSocket-Node/issues/29
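By way of illustration, a minimal wss:// sketch with that package (the 'websocket' module on npm); certificate paths and the port are placeholders:

// Serve WebSocket-Node on top of an HTTPS server to get wss://.
var https = require('https');
var fs = require('fs');
var WebSocketServer = require('websocket').server;

var httpsServer = https.createServer({
    key: fs.readFileSync('privkey.pem'),
    cert: fs.readFileSync('cert.pem')
}, function (request, response) {
    response.writeHead(404);   // plain HTTPS requests are not served
    response.end();
});
httpsServer.listen(8443);

var wsServer = new WebSocketServer({ httpServer: httpsServer });

wsServer.on('request', function (request) {
    var connection = request.accept(null, request.origin);
    connection.on('message', function (message) {
        if (message.type === 'utf8') {
            connection.sendUTF(message.utf8Data);  // simple echo
        }
    });
});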
Well you have stream.setSecure() and server.setSecure().
I'm guessing you should be able to use one of those (especially the latter) to use TLS with WebSockets, since in the end a WebSocket is just a normal HTTP connection "upgraded" to a WebSocket.
Using TLS in the normal HTTP server object should theoretically also secure the WebSocket; only testing can confirm this.