Re-asking the question from the HAProxy Discourse site here in the hope of getting more eyes on it.
I am using HAProxy version 1.9.4 2019/02/06 to proxy HTTP traffic to an h2c backend. However, I am seeing HAProxy set :scheme to https (and, as far as I can tell, use SSL in the request) when proxying the request. When I hit the backend directly, :scheme is set to http and the request is non-SSL, as expected. I have verified this HAProxy behavior using Wireshark.
Any suggestions on what I should change in my configuration so that :scheme gets set to http when the request is proxied to the backend?
I am using curl 7.54.0 to make requests:
$ curl http://localhost:9090
where HA-Proxy is listening on port 9090.
My HA-Proxy config file:
global
maxconn 4096
daemon
defaults
log global
option http-use-htx
timeout connect 60s
timeout client 60s
timeout server 60s
frontend waiter
mode http
bind *:9090
default_backend local_node
backend local_node
mode http
server localhost localhost:8080 proto h2
It's not supported yet. The client=>haproxy connection can be HTTP/2, the haproxy=>server connection cannot.
https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#1.1
HTTP/2 is only supported for incoming connections, not on connections
going to servers.
Just add proto h2 to the server definition.
Quoting the example from the documentation:
server server1 192.168.1.13:80 proto h2
It's an experimental feature of haproxy-1.9; you must enable option http-use-htx to use it.
option http-use-htx is enabled by default since haproxy-2.0-dev3.
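Putting those two pieces together, a minimal sketch for 1.9 (the server address is the documentation example quoted above):
defaults
    mode http
    option http-use-htx   # experimental in 1.9, needed for proto h2 towards servers
backend local_node
    server server1 192.168.1.13:80 proto h2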
This was reported as an issue on the HAProxy GitHub and has been fixed in version 2.0.
Related
I’m quite new to HAProxy and have what I believe to be a simple use-case:
I want to redirect requests made to my KVM host to the URLs of the guest VMs.
In my case the redirects are for several VMs that serve HTTPS content.
Example:
-- HAProxy is running on the KVM host (10.10.10.5 - ansout.mine.local)
-- HTTPS guest VM running on KVM (172.10.10.5 - ans.lab.local)
How can I make a request from a client (on the KVM network) to 'http://ansout.mine.local' and have it redirected to 'https://ans.lab.local'?
It would seem I need to use the 'http-request redirect' function in HAProxy, but I still can't wrap my head around it.
Could anyone kindly provide some pointers on how to achieve the example above?
Many thanks in advance.
I assume that the client should then connect directly to ans.lab.local.
listen http-in
bind :80
log stdout format raw daemon
mode http
option httplog
tcp-request inspect-delay 5s
tcp-request content accept if HTTP
timeout client 5s
timeout connect 30s
timeout server 30s
http-request redirect code 301 location https://ans.lab.local if { hdr(host) -i ansout.mine.local }
If you have several hosts that you want to redirect, please take a look at HAProxy maps.
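For example, a sketch using a map file in recent HAProxy versions; the map file path and the second entry are made up for illustration:
# /etc/haproxy/redirects.map maps a Host header to a target URL
ansout.mine.local    https://ans.lab.local
other.mine.local     https://other.lab.local
listen http-in
    # other settings as in the config above
    http-request redirect code 301 location %[req.hdr(host),lower,map(/etc/haproxy/redirects.map)] if { req.hdr(host),lower,map(/etc/haproxy/redirects.map) -m found }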
I am trying to set up haproxy on an EC2 instance but am facing the error below:
503 Service Unavailable. No server is available to handle this
request.
Any help is highly appreciated. I tried many ways but all in vain.
My haproxy version is 1.5 and this is my haproxy.cfg file:
frontend main
bind *:80
default_backend server
backend server
balance roundrobin
server node1 xx.xx.xx.xx:80 check maxconn 32
server node2 xx.xx.xx.xx:80 check maxconn 32
The config file you shared is probably not complete. It should contain mode http in the frontend and backend sections if it is not set in the defaults section.
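A minimal sketch of what that could look like (the server addresses stay as the placeholders from your snippet; the timeout values are only illustrative additions):
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s
frontend main
    bind *:80
    default_backend server
backend server
    balance roundrobin
    server node1 xx.xx.xx.xx:80 check maxconn 32
    server node2 xx.xx.xx.xx:80 check maxconn 32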
Also check that the web server is up and running and that you can access it.
Allow the connection to the web server through the firewall.
You can also share the full config file so the exact issue can be identified.
Hope this helps!
The question seems quite straightforward, however I have not been able to find a proper answer.
In haproxy I have 1 backend, say:
backend-1
and 2 frontends, say:
frontend-1
frontend-2
In the backend stanza I want to set a "timeout server" parameter, but only if the connection comes from frontend-1.
As I didn't find anything I tried to figure it out myself:
backend backend-1
bind *:80
option <blahblah_option>
timeout server 1d if frontend frontend-1
This syntax does not work; I am only showing it to illustrate what I am trying to achieve.
This is not doable yet in HAProxy.
Later, you will be able to set timeouts using tcp-request and http-request rules.
What we usually do to work around this for now is to set up two backends with the same parameters but different server timeouts.
This is useful when only a few URLs deserve a long server timeout.
Edit, following up on your comment about multiple health checks:
Well, that's why the server's 'track' directive exists:
backend my_app
server srv1 10.0.0.1:80 check
backend my_app_longtime
server srv1 10.0.0.1:80 track my_app/srv1
In the conf above, the server in the my_app_longtime backend won't be checked. That said, it will follow the same state as srv1 in the my_app backend.
Baptiste
I did it like this and it worked. It made it possible to extend the timeout for specific app URLs, which are more time-consuming. I used the track directive for the health check - thanks Baptiste.
frontend www-http
bind 10.0.0.1:80
default_backend app
acl long_url path_beg -i /long_url
use_backend app-extended if long_url
backend app
server web-1 10.0.0.2:80 check
backend app-extended
server web-1 10.0.0.2:80 track app/web-1
timeout server 10m
I need both layer 7 and layer 4. I have an API where I divert requests to two different backends: one backend for GET requests and one backend for PUT, GET & DELETE requests. That is my layer 7 frontend.
The layer 4 frontend handles the websites that make the API requests; they are just split round-robin.
Layer 7 was working consistently before I added this layer 4 section, i.e. when HAProxy just did our load balancing for the API. As I migrate the websites to the layer 4 frontend, API requests (i.e. layer 7) sometimes get lost, but not always.
As you can see, sometimes the request is lost and sometimes it works:
[richv#lb2 ~]$ curl api
curl: (52) Empty reply from server <-- this is not ok, HTTP 503
[richv#lb2 ~]$ curl api
{"error_code":256,"error_message":"No Site specified"} <-- this is ok, because I didn't give all the headers.
The frontends are:
frontend layer7-http-listener
bind *:80
bind *:443 ssl crt /etc/httpd/certs/haproxy.pem
mode http
option httpclose
option httplog
option forwardfor
option accept-invalid-http-request
frontend layer4-listener
bind *:80 transparent
bind *:443 transparent
bind *:3306
mode tcp
option tcplog
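For reference, a sketch of how the GET / write split described in the question is usually expressed; the acl and backend names are only illustrative, not taken from the question:
frontend layer7-http-listener
    # binds and options as in the question
    mode http
    acl is_get method GET
    use_backend backend_reads if is_get
    default_backend backend_writes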
I am hosting two different application versions on the same servers, on different ports. With this basic configuration I expect requests to be sent round-robin to the different ports. But what I am observing is that the request gets broadcast to ALL of my server endpoints. Meaning, in the example below, my request to port 8080 gets forwarded to both www.myappdemo.com:5001 and www.myappdemo.com:5002, although the response sent by the proxy is ALWAYS from www.myappdemo.com:5001.
Can anyone tell me what is wrong here?
global
debug
maxconn 256
defaults
mode http
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend http-in
bind *:8080
default_backend servers
backend servers
balance roundrobin
server svr_50301 www.myappdemo.com:5001 maxconn 32 check
server svr_50302 www.myappdemo.com:5002 maxconn 32 check
I can advise you to enable logging and the stats web interface; after that you can provide us with more logs, and you can also check in the web interface whether haproxy detects your second server (svr_50302) as alive.
References for the HAProxy 1.5 docs:
Web Interface - http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-stats%20admin
Good info on how to enable logging - http://webdevwonders.com/haproxy-load-balancer-setup-including-logging-on-debian/
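A minimal sketch of a stats section as described in the first link (the port and URI are arbitrary choices):
listen stats
    bind *:8404
    mode http
    stats enable
    stats uri /stats
    stats refresh 10s
    stats admin if TRUE   # allows managing servers from the stats page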
Best Regards,
Dani