How to redirect to a domain name with HTTPS using HAProxy

I receive a request and want to redirect it to another host, using a DNS name and the HTTPS protocol. For example, my server is http://8.8.8.8:10101/partnerA/getUser. I want HAProxy to redirect this to https://partner.com/partnerA/getUser (the same path as the source).
I also want to filter by path so that different paths get different redirect destinations: for example, http://8.8.8.8:10101/partnerB/getMarketShare should be redirected by HAProxy to https://subdomainb.differentpartner.com/partnerB/getMarketShare (again the path is kept, but the path decides which host name to redirect to).
I tried the haproxy.cfg below:
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main
bind *:10101
acl url_partnerA path_beg -i /partnerA
acl url_partnerB path_beg -i /partnerB
http-request redirect scheme https if url_partnerA
http-request redirect scheme https if url_partnerB
http-request redirect prefix https://partnerA.com if url_partnerA
http-request redirect prefix https://subdomainb.differentpartner.com/ if url_partnerA
default_backend app
#---------------------------------------------------------------------
# round robin balancing between the various backends
backend app
balance roundrobin
# server app1 127.0.0.1:11003 check
But every time I POST to http://8.8.8.8:10101/partnerA/getUser (over plain HTTP), the log from haproxy -f haproxy10101.cfg -d gives me this:
00000000:main.accept(0005)=0009 from [8.8.8.8:48554] ALPN=<none>
00000000:main.clireq[0009:ffffffff]: POST /partnerA/getUser HTTP/1.1
00000000:main.clihdr[0009:ffffffff]: Host: 8.8.8.8:10101
00000000:main.clihdr[0009:ffffffff]: User-Agent: curl/7.47.0
00000000:main.clihdr[0009:ffffffff]: Accept: */*
00000000:main.clihdr[0009:ffffffff]: Authorization: Basic dGNhc2g6RzBqM2tmMHJsMWYzIQ==
00000000:main.clihdr[0009:ffffffff]: Content-Type: application/json
00000000:main.clihdr[0009:ffffffff]: Postman-Token: 45a236c-740a-4859-a13a-1c45195a99f2
00000000:main.clihdr[0009:ffffffff]: cache-control: no-cache
00000000:main.clihdr[0009:ffffffff]: Content-Length: 218
00000000:main.clicls[0009:ffffffff]
00000000:main.closed[0009:ffffffff]
Is there anything I'm missing to make it work? Thanks.
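For reference, here is a minimal sketch of a frontend that should produce the redirects described above. It is untested against this setup and simply reuses the host names from the question, so treat it as an assumption rather than a verified fix. A single http-request redirect prefix rule with an absolute https:// URL changes scheme and host in one step while keeping the original path and query string, so the separate redirect scheme lines are not needed (a redirect is a terminating action, so only the first matching redirect rule takes effect anyway):

frontend main
    bind *:10101
    acl url_partnerA path_beg -i /partnerA
    acl url_partnerB path_beg -i /partnerB
    # absolute URLs: scheme and host change, the original path and query string are preserved
    http-request redirect prefix https://partner.com code 302 if url_partnerA
    http-request redirect prefix https://subdomainb.differentpartner.com code 302 if url_partnerB
    default_backend app

Also note that a redirect only tells the client where to retry; curl, for instance, only follows the resulting 302 when invoked with -L.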

Related

Sometimes the server connection fails via load balancing

My server sits behind a GCP HTTPS load balancer; the backend server runs multiple Node processes on different ports with pm2 start -i, and HAProxy distributes connections across those nodes.
Connection log, taken from the server with this code:
io.on('connection', (socket) => {
    console.debug("connection!", socket.id);
});
About one in every two connection attempts to the server fails.
Below is the log from haproxy -d. The first block (ending in 400 Bad Request) is a failed connection; the block right after it (ending in 101 Switching Protocols) is a successful one.
00000041:http-in.clireq[000a:ffffffff]: GET /socket.io/?EIO=3&transport=websocket&sid=rf3JvyUz2KCKV4KDAABS HTTP/1.1
00000041:http-in.clihdr[000a:ffffffff]: User-Agent: websocket-sharp/1.0
00000041:http-in.clihdr[000a:ffffffff]: Host: mydomain.com
00000041:http-in.clihdr[000a:ffffffff]: Upgrade: websocket
00000041:http-in.clihdr[000a:ffffffff]: Sec-WebSocket-Key: 9JCkV46YNC66nIUaZQZl9w==
00000041:http-in.clihdr[000a:ffffffff]: Sec-WebSocket-Version: 13
00000041:http-in.clihdr[000a:ffffffff]: X-Cloud-Trace-Context: 1210ae7f3bb6e56c817e7f5ad30e1d24/17748396103009389644
00000041:http-in.clihdr[000a:ffffffff]: Connection: Upgrade
00000041:http-in.clihdr[000a:ffffffff]: Via: 1.1 google
00000041:http-in.clihdr[000a:ffffffff]: X-Forwarded-For: source ip, dest ip
00000041:http-in.clihdr[000a:ffffffff]: X-Forwarded-Proto: https
00000041:websockets.srvrep[000a:000b]: HTTP/1.1 400 Bad Request
00000041:websockets.srvhdr[000a:000b]: Connection: close
00000041:websockets.srvhdr[000a:000b]: Content-type: text/html
00000041:websockets.srvhdr[000a:000b]: Content-Length: 18
00000041:websockets.srvcls[000a:adfd]
00000041:websockets.clicls[adfd:adfd]
00000041:websockets.closed[adfd:adfd]
00000042:http-in.accept(0007)=000a from [130.211.3.23:53189] ALPN=<none>
00000042:http-in.clireq[000a:ffffffff]: GET /socket.io/?EIO=3&transport=websocket&sid=xmZEqtHokEmBfs2QAABT HTTP/1.1
00000042:http-in.clihdr[000a:ffffffff]: User-Agent: websocket-sharp/1.0
00000042:http-in.clihdr[000a:ffffffff]: Host: mydomain.com
00000042:http-in.clihdr[000a:ffffffff]: Upgrade: websocket
00000042:http-in.clihdr[000a:ffffffff]: Sec-WebSocket-Key: 8PKkFxEv3c3KqIUW8dQbLA==
00000042:http-in.clihdr[000a:ffffffff]: Sec-WebSocket-Version: 13
00000042:http-in.clihdr[000a:ffffffff]: X-Cloud-Trace-Context: de0fa56d1689317bb9212879da8edfcb/11012650454108009103
00000042:http-in.clihdr[000a:ffffffff]: Connection: Upgrade
00000042:http-in.clihdr[000a:ffffffff]: Via: 1.1 google
00000042:http-in.clihdr[000a:ffffffff]: X-Forwarded-For: source ip, dest ip
00000042:http-in.clihdr[000a:ffffffff]: X-Forwarded-Proto: https
00000042:websockets.srvrep[000a:000b]: HTTP/1.1 101 Switching Protocols
00000042:websockets.srvhdr[000a:000b]: Upgrade: websocket
00000042:websockets.srvhdr[000a:000b]: Connection: Upgrade
00000042:websockets.srvhdr[000a:000b]: Sec-WebSocket-Accept: tW86o/zu95tHQayPP7IlGNXi96s=
The two requests look identical, yet give two different results. I don't know why one of them gets 400 Bad Request.
haproxy.cfg
global
maxconn 4096
defaults
mode http
balance roundrobin
option redispatch
option forwardfor
timeout connect 5s
timeout queue 5s
timeout client 50s
timeout server 50s
frontend http-in
bind *:80
default_backend servers
# Any URL beginning with socket.io will be flagged as 'is_websocket'
acl is_websocket path_beg /socket.io
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
# The connection to use if 'is_websocket' is flagged
use_backend websockets if is_websocket
backend servers
server server1 10.168.0.50:80
# server server2 [Address]:[Port]
backend websockets
balance source
option http-server-close
option forceclose
cookie io prefix indirect nocache # using the `io` cookie set upon handshake
server ws-server1 10.168.0.50:5000 weight 1 maxconn 1024 cookie ws-server1 check
server ws-server2 10.168.0.50:5001 weight 1 maxconn 1024 cookie ws-server2 check
#server ws-server3 10.168.0.50:5002 weight 1 maxconn 1024 check
I originally used cookie SRVNAME insert with server names SA and SB, but the socket.io documentation says to use cookie io prefix indirect nocache, with server names ws-server1 and ws-server2.
What I have tested:
The client uses both long polling and websocket transports.
A test client with the option {transports: ['websocket']} always succeeds, but the real client does not use only the websocket transport.
I don't know why it fails.
If I use only ws-server1, the connection always succeeds, but once ws-server2 is in use the connection sometimes fails. I suspect a sticky-session problem; I tried adding the cookie option to haproxy.cfg, but that did not solve it.
How can I solve this problem?
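One approach that is sometimes suggested for socket.io behind a proxy that hides the real client address (a hedged sketch, not a verified fix for this setup) is to hash on the sid query parameter, which appears in both the failing and the succeeding requests in the log above, so that the polling handshake and the later websocket upgrade for the same session always land on the same node:

backend websockets
    # hash on the socket.io session id; requests without a sid (the very first handshake)
    # fall back to round robin
    balance url_param sid
    option http-server-close
    server ws-server1 10.168.0.50:5000 weight 1 maxconn 1024 check
    server ws-server2 10.168.0.50:5001 weight 1 maxconn 1024 check

The reasoning: balance source behind the GCP load balancer may see the balancer's address rather than the client's, and cookie stickiness only helps if the client sends the io cookie back, which not every websocket client does.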

haproxy.conf frontend/subdirectory to backend on a different host

I have a pretty basic DigitalOcean container set up to host a personal blog (jcress.org).
I'd like jcress.org/drobot/ to forward to my OctoPrint server, hosted on a Raspberry Pi in my basement. HAProxy will handle HTTP auth for requests originating outside the LAN.
nginx serves a port I'm forwarding from the Raspberry Pi with ssh -R; all of that seems to work.
When the request lands on the Raspberry Pi I see a broken render of the login page (screenshot not included); filling in the form and hitting Log In doesn't work, and I don't see any activity in /var/log/haproxy.log.
From the LAN the page loads correctly (screenshot not included).
Here's haproxy.conf
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
option http-server-close
option forwardfor
maxconn 2000
timeout connect 5s
timeout client 15min
timeout server 15min
frontend public
log /dev/log local0 debug
bind :::80 v4v6
bind :::443 v4v6 ssl crt /etc/ssl/snakeoil.pem
option forwardfor except 127.0.0.1
use_backend webcam if { path_beg /webcam/ }
use_backend octoprint_lan if { hdr_beg(host) -i 10.0 }
default_backend octoprint
backend octoprint_lan
reqrep ^([^\ :]*)\ /(.*) \1\ /\2
option forwardfor
server octoprint1 127.0.0.1:5000
errorfile 503 /etc/haproxy/errors/503-no-octoprint.http
backend octoprint
http-request set-header Host octopi-drobot.local
reqrep ^([^\ :]*)\ /drobot/?(.*) \1\ /\2
option forwardfor
server octoprint1 127.0.0.1:5000
errorfile 503 /etc/haproxy/errors/503-no-octoprint.http
acl ValidOctoPrintUser http_auth(OctoPrintUsers)
http-request auth realm OctoPrint if !ValidOctoPrintUser
userlist OctoPrintUsers
user USAR insecure-password PASSWARD
here's what shows up in the haproxy log:
Oct 15 17:10:43 octopi-drobot haproxy[3777]: ::1:57030 [15/Oct/2019:17:10:42.938] public octoprint/octoprint1 0/0/92/45/137 200 3074 - - ---- 9/9/1/1/0 0/0 "GET / HTTP/1.0"
It looks like the CSS isn't loading when you're not on the LAN. You need to add rules to handle the CSS/JavaScript files as well. Take a look at the OctoPrint login page's HTML source and add the necessary rules.
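For illustration, a hedged sketch of the kind of rule that answer is describing (the /static/ prefix is a guess; the real asset paths have to be read from the login page's HTML source). It routes OctoPrint's stylesheets and scripts through the same backend that serves the rewritten /drobot/ pages:

frontend public
    # send OctoPrint's CSS/JS/static assets to the octoprint backend as well
    use_backend octoprint if { path_beg /static/ }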

How to link a frontend to a backend when the request paths are different?

I have an HAProxy setup with HTTPS offloading, and I'm trying to point requests made to the frontend to their corresponding backends, but I've bumped into some obstacles.
I have a backend server on http://<internal_ip>:9000/abc (NOT in the root of the web server), and when I set up a frontend for https://<my.address.com>/abc the routing works as expected and I see the login page.
But I also have another backend server, which is on http://<internal_ip_2>:8888 (IN the root of the web server; it makes its own redirect to http://<internal_ip_2>:8888/def), and I want it to be accessible at https://<my.address.com>/def. In this case the routing doesn't work.
How can I make https://<my.address.com>/def point to http://<internal_ip_2>:8888? Here is my .cfg.
Using HAProxy 1.7.
# Automaticaly generated, dont edit manually.
# Generated on: 2019-01-28 13:59
global
maxconn 1000
stats socket /tmp/haproxy.socket level admin
uid 80
gid 80
nbproc 1
hard-stop-after 15m
chroot /tmp/haproxy_chroot
daemon
tune.ssl.default-dh-param 2048
server-state-file /tmp/haproxy_server_state
listen HAProxyLocalStats
bind 127.0.0.1:2200 name localstats
mode http
stats enable
stats refresh 10
stats admin if TRUE
stats show-legends
stats uri /haproxy/haproxy_stats.php?haproxystats=1
timeout client 5000
timeout connect 5000
timeout server 5000
frontend shared-frontend-merged
bind 200.129.168.14:443 name 200.129.168.14:443 no-sslv3 ssl crt-list /var/etc/haproxy/shared-frontend.crt_list
mode http
log global
option http-keep-alive
option forwardfor
acl https ssl_fc
http-request set-header X-Forwarded-Proto http if !https
http-request set-header X-Forwarded-Proto https if https
timeout client 30000
acl aclcrt_shared-frontend var(txn.txnhost) -m reg -i ^ifamcmc\.ddns\.net(:([0-9]){1,5})?$
acl ACL1 var(txn.txnpath) -m sub -i abc
acl ACL2 var(txn.txnpath) -m sub -i def
http-request set-var(txn.txnhost) hdr(host)
http-request set-var(txn.txnpath) path
use_backend glpi_ipvANY if ACL1
use_backend ciweb_ipvANY if ACL2
frontend http-to-https
bind 200.129.168.14:80 name 200.129.168.14:80
mode http
log global
option http-keep-alive
timeout client 30000
http-request redirect scheme https
backend abc_ipvANY
mode http
id 102
log global
timeout connect 30000
timeout server 30000
retries 3
option httpchk OPTIONS /
server abc 10.100.0.30:9000 id 103 check inter 1000
backend def_ipvANY
mode http
id 104
log global
timeout connect 30000
timeout server 30000
retries 3
option httpchk OPTIONS /
server def 10.100.0.40:8888 id 105 check inter 1000
I expect access to https://<my.address.com>/def to correctly reach the backend at http://<internal_ip_2>:8888:
https://<my.address.com>/abc ------> http://<internal_ip>:9000/abc (OK)
https://<my.address.com>/def ------> http://<internal_ip_2>:8888 (NOT OK)
Have your HAProxy system do the initial forwarding based on ports, and then match on your directory with path-based ACLs.
Please see below:
frontend a-frontend-conf
# Declare an ACL using path_beg (Path Begins)
acl path_images path_beg /images
# Use backend server1 if acl condition path_images is fulfilled
use_backend server1 if path_images
backend server1
[...]
Source: https://serverfault.com/questions/659793/haproxy-how-to-balance-traffic-within-directory-reached
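Applied to the configuration in the question (where the use_backend lines reference glpi_ipvANY and ciweb_ipvANY, presumably the same backends listed as abc_ipvANY and def_ipvANY), a hedged, untested sketch for the /def case is to strip the prefix inside the backend so the server on port 8888 sees the request against its root. HAProxy 1.7 supports the same reqrep technique used elsewhere on this page:

backend def_ipvANY
    mode http
    # rewrite "GET /def/whatever HTTP/1.1" to "GET /whatever HTTP/1.1" before forwarding
    reqrep ^([^\ :]*)\ /def/?(.*)     \1\ /\2
    server def 10.100.0.40:8888 id 105 check inter 1000

The use_backend rule on ACL2 stays in the frontend; only the backend rewrites the path.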

HAProxy frontend rule matching order

I have an HAProxy configuration as follows (HAProxy 1.7). We want to catch all OPTIONS requests and respond to them directly instead of routing them to the backends (which have basic auth enabled).
This was working fine when we developed it, but now it seems not to be matching the rules in order (I'm not sure what we have or haven't done to cause this):
global
log 127.0.0.1 local1
tune.ssl.default-dh-param 2048
lua-load /etc/haproxy/cors.lua
stats socket /var/run/haproxy.sock mode 400
# Default certificate and key directories
ca-base /etc/ssl/private
crt-base /etc/ssl/private
# User lists used to enforce HTTP Basic Authentication
userlist ul_100123-2ovt9rsu
user app1 password $6$lCjf6VnWhI$kcjmpWdV.odeYf4psUhcVKs49ZtPk3MDhg5wtLNUx658A3EWdDHJQqs9xCD1d.7zG05M2nwOxdkC6o/MSpifv0
userlist ul_100123-9uvsclqr
user app1 password $6$DlcLoDMMu$wDm3O0W1eiQuk8gI.GmpzI1.jbBf.UYQ.KM73nHa1tGZJNfzkDpVnLUhh7v7C9yPHB1oo0cRrFnfOdeyAf/eU1
# Front-end for public services which have SSL termination at the router.
frontend term
bind *:443 accept-proxy ssl no-sslv3 crt router/fred-external.pem crt router/fred-external.ace.pem crt router
reqadd X-Forwarded-Proto:\ https
rspidel ^(Server|X-Powered-By):
option forwardfor
mode http
http-request use-service lua.cors-response if METH_OPTIONS { req.hdr(origin) -m found }
acl host_match_100123-2ovt9rsu ssl_fc_sni -i 2ovt9rsu.fredurl.com
use_backend b_term_100123-2ovt9rsu if host_match_100123-2ovt9rsu
......
If I curl -X OPTIONS to 2ovt9rsu.fredurl.com, it matches the second rule and forwards me to the b_term_100123-2ovt9rsu backend, which then fails because I haven't provided auth credentials.
If I curl -X OPTIONS to Anything.fredurl.com, it matches the first http-request rule and responds with the CORS response as expected.
Why does 2ovt9rsu.fredurl.com not match the first http-request rule and return the CORS response?
In the logs we can see
Nov 7 18:24:09 localhost haproxy[37302]: 94.45.23.22:49853 [07/Nov/2017:18:24:09.807] term~ b_term_100123-2ovt9rsu/<lua.cors-response> -1/-1/-1/-1/73 401 249 - - PR-- 0/0/0/0/3 0/0 "OPTIONS / HTTP/1.1"
when the request gets forwarded to the backend
http-request rules are evaluated before use_backend, so the config looks right to me. Did you set the Origin header when you curled? The CORS rule only fires if { req.hdr(origin) -m found } matches, so try something like curl -X OPTIONS -H "Origin: https://example.com" https://2ovt9rsu.fredurl.com/.
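If the goal really is to intercept every OPTIONS request, one hedged variant (an assumption about the desired behaviour, not part of the original answer) is to drop the Origin condition so the Lua service answers whether or not the header is present:

frontend term
    # answer every OPTIONS request directly, even without an Origin header
    http-request use-service lua.cors-response if METH_OPTIONS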

HAProxy heartbeat with backend based on HTTP POST

I want to create a configuration such that the heartbeat between haproxy and the backend is based on HTTP POST.
Does anyone have any idea about this?
I have tried the configuration below, but it only sends an HTTP HEAD to the backend server (I want an HTTP POST):
backend mlp
mode http
balance roundrobin
server mlp1 192.168.12.165:9210 check
server mlp2 192.168.12.166:9210 check
Thanks for your help.
@Mohsin,
Thank you so much, it does indeed work.
But I want to specify the request message, and my configuration doesn't seem to work. I'd appreciate your help with that too.
[root@LB_vAPP_1 tmp]# more /var/www/index.txt
POST / HTTP/1.1\r\nHost: 176.16.0.8:2234\r\nContent-Length: 653\r\n\r\n<?xml version=\"1.0\" encoding=\"gb2312\"?>\r\n<svc_init ver=\"3.2.0\">\r\n<hdr ver=\"3.2.0\">\r\n<client>\r\n<id>915948</id>\r\n<pwd>915948</pwd>\r\n<serviceid></serviceid>\r\n</client>\r\n<requestor><id>13969041845</id></requestor>\r\n</hdr>\r\n<slir ver=\"3.2.0\" res_type=\"SYNC\">\r\n<msids><msid enc=\"ASC\" type=\"MSISDN\">00000000000</msid></msids>\r\n<eqop>\r\n<resp_req type=\"LOW_DELAY\"/>\r\n<hor_acc>200</hor_acc>\r\n</eqop>\r\n<geo_info>\r\n<CoordinateReferenceSystem>\r\n<Identifier
>\r\n<code>4326</code>\r\n<codeSpace>EPSG</codeSpace>\r\n<edition>6.1</edition>\r\n</Identifier\r\n</CoordinateReferenceSystem>\r\n</geo_info>\r\n<loc_type type=\"CURRENT_OR_LAST\"/>\r\n<prio type=\"HIGH\"/>\r\n</slir>\r\n</svc_init>\r\n\r\n\r\n\r\n
My haproxy.conf file is as follows:
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local7
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
ulimit-n 65536
daemon
nbproc 1
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
defaults
mode tcp
retries 3
log global
option redispatch
# option abortonclose
retries 3
timeout queue 28s
timeout connect 28s
timeout client 28s
timeout server 28s
timeout check 1s
maxconn 32000
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend mlp
mode tcp
option persist
# bind 10.68.97.42:9211 ssl crt /etc/ssl/server.pem
#bind 10.68.97.42:9211
bind 10.68.97.42:9210
default_backend mlp
frontend supl
mode tcp
option persist
bind 10.68.97.42:7275
default_backend supl
#-------------
# option1 http check
#------------
backend mlp
mode http
balance roundrobin
option httpchk POST / HTTP/1.1\r\nHost: 176.16.0.8:2234\r\nContent-Length: 653\r\n\r\n{<?xml version=\"1.0\" encoding=\"gb2312\"?>\r\n<svc_init ver=\"3.2.0\">\r\n<hdr ver=\"3.2.0\">\r\n<client>\r\n<id>915948</id>\r\n<pwd>915948</pwd>\r\n<serviceid></serviceid>\r\n</client>\r\n<requestor><id>13969041845</id></requestor>\r\n</hdr>\r\n<slir ver=\"3.2.0\" res_type=\"SYNC\">\r\n<msids><msid enc=\"ASC\" type=\"MSISDN\">00000000000</msid></msids>\r\n<eqop>\r\n<resp_req type=\"LOW_DELAY\"/>\r\n<hor_acc>200</hor_acc>\r\n</eqop>\r\n<geo_info>\r\n<CoordinateReferenceSystem>\r\n<Identifier>\r\n<code>4326</code>\r\n<codeSpace>EPSG</codeSpace>\r\n<edition>6.1</edition>\r\n</Identifier>\r\n</CoordinateReferenceSystem>\r\n</geo_info>\r\n<loc_type type=\"CURRENT_OR_LAST\"/>\r\n<prio type=\"HIGH\"/>\r\n</slir>\r\n</svc_init>\r\n\r\n\r\n\r\n}
http-check expect rstring <result resid=\"4\">UNKNOWN SUBSCRIBER</result>
server mlp1 192.168.12.165:9210 check
server mlp2 192.168.12.166:9210 check
#server mlp2 192.168.12.166:9210 check
backend supl
mode tcp
source 0.0.0.0 usesrc clientip
balance roundrobin
server supl1 192.168.12.165:7275 check
server supl2 192.168.12.166:7275 check
#server supl2 192.168.12.166:7275 check
@Mohsin,
Thanks for your answer; it gave me the critical clue to resolve this issue.
My final configuration is below. It now works the way I want (it sends the specified request and checks for the specified response). I'm posting it here in the hope that it helps others as well. One point: getting the Content-Length right is very important.
backend mlp
mode http
balance roundrobin
option httpchk POST / HTTP/1.1\r\nUser-Agent:HAProxy\r\nHost:176.16.0.8:2234\r\nContent-Type:\ text/xml\r\nContent-Length:516\r\n\r\n91594891594813969041845000000000003200
http-check expect rstring <result resid=\"4\">UNKNOWN SUBSCRIBER</result>
server mlp1 192.168.12.165:9210 check
server mlp2 192.168.12.166:9210 check
I was able to get this working after a bit of experimenting.
This was my setup:
HAProxy -> NGINX -> Backend
I was sniffing the requests at the NGINX stage with tcpdump to see what was actually happening.
In order to change the health check request we have to follow a hack described in the documentation to change the HTTP version and send headers:
It is possible to send HTTP headers after the string by concatenating them using \r\n and backslash-escaped spaces. This is useful to send Host headers when probing a virtual host.
This is the raw http check I want to send:
POST ${ENDPOINT} HTTP/1.0
Content-Type: application/json
{"body": "json"}
The big issue here is that HAProxy adds a new header by itself: Connection: close, so this is what NGINX gets:
POST ${ENDPOINT} HTTP/1.0
Content-Type: application/json
{"body": "json"}
Connection: close
This leads, at least in my case, to 400 errors due to a malformed request.
The fix is to add a Content-Length header:
POST ${ENDPOINT} HTTP/1.0
Content-Type: application/json
Content-Length: 16
{"body": "json"}
Connection: close
Since the Content-Length should take precedence over the actual length, this forces the last header to be ignored. This is what NGINX passes to the backend:
POST ${ENDPOINT} HTTP/1.0
Host: ~^(.+)$
X-Real-IP: ${IP}
X-Forwarded-For: ${IP}
Connection: close
Content-Length: 16
Content-Type: application/json
{"body": "json"}
This is my final check:
option httpchk POST ${ENDPOINT} HTTP/1.0\r\nContent-Type:\ application/json\r\nContent-Length:\ 16\r\n\r\n{\"body\":\"json\"}
If your body is just JSON, you should be OK copying and pasting this and adjusting the Content-Length.
However, I do recommend that you follow the same procedure and sniff the actual health checks, because, with the characters one has to escape in the config file, creating the request properly can be tricky.
Open the haproxy/conf/haproxy.conf file. Go to the end of the file and you will see a line 'option httpchk GET /'; change GET to POST and you are done.
Let me know if you face any problems.
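As a concrete, though untested, sketch for the backend in the question, assuming the servers accept a health-check POST against /:

backend mlp
    mode http
    balance roundrobin
    # send the health check as a POST instead of the default method
    option httpchk POST /
    server mlp1 192.168.12.165:9210 check
    server mlp2 192.168.12.166:9210 check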