HAProxy dynamic configuration

Here is my setup:
we have one external load balancer (AWS) attached to the root domain mydomain.com
the external load balancer forwards traffic to an HAProxy instance, and HAProxy has to forward it further to one of two internal load balancers
we have 2 internal load balancers: the first points to the latest version of our app, the second to the version one release behind
each of our clients can have one to many subdomains, like sub1.mydomain.com, sub2.mydomain.com, sub3.mydomain.com
some subdomains should be routed to the old version and some to the new one, so the same client can have both old and new subdomains
Basically:
sub1.mydomain.com -> latest-load-balancer
sub2.mydomain.com -> older-load-balancer
The problem is how to set up this routing: we can't stop/start HAProxy for each new subdomain, and there could be more than 10k of these subdomains in the future.

HAProxy can use maps to decide which backend (internal load balancer) to use, based on the domain.
HAProxy can be reloaded instead of restarted when the map file changes.
If you don't want to reload at all, you can pass map commands to the UNIX stats socket, changing the map in real time.
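A minimal sketch of that approach (file paths, backend names, and the internal hostnames below are placeholders, not values from the question):

# /etc/haproxy/hosts.map -- one "subdomain backend" pair per line:
# sub1.mydomain.com latest
# sub2.mydomain.com older

frontend http-in
    bind *:80
    # look up the Host header in the map; fall back to "latest" if no match
    use_backend %[req.hdr(host),lower,map(/etc/haproxy/hosts.map,latest)]

backend latest
    server latest-lb latest-lb.internal:80

backend older
    server older-lb older-lb.internal:80

With a stats socket enabled (stats socket /var/run/haproxy.sock mode 600 level admin in the global section), entries can then be added or changed at runtime without any reload:

echo "add map /etc/haproxy/hosts.map sub3.mydomain.com older" | socat stdio /var/run/haproxy.sock
echo "set map /etc/haproxy/hosts.map sub2.mydomain.com latest" | socat stdio /var/run/haproxy.sock

Note that runtime changes live only in memory, so the map file on disk should be updated as well if the entries need to survive a reload.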

Related

Two servers behind a load balancer must not be reachable directly from the public

I'm facing an issue in my lab. My lab infra is on DigitalOcean. I have two servers (Droplets) and one Load Balancer. The two servers are actually webservers. I can reach the website through the Load Balancer, but I want it reachable only through the load balancer, not directly on the webservers.
How can I manage that?
You can
Drop the public IP of the servers
Block requests via a firewall if they are not coming from your load balancer
Not have the process that is actually serving the requests listen on the public IP of the server
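For the firewall option, a minimal iptables sketch (the load balancer's private IP and the port are assumptions for illustration; DigitalOcean's Cloud Firewalls can express the same rule):

# accept HTTP only from the load balancer's private address, drop the rest
iptables -A INPUT -p tcp --dport 80 -s 10.10.0.5 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j DROP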

Route53: 2 A records to 1 load balancer

I have two hosted zones in Route53:
domain1.com > A record > Simple routing policy > my_load_balancer
domain2.com > A record > Simple routing policy > my_load_balancer
my_load_balancer is the same for both A records. I'd like to use the same application load balancer (my_load_balancer) to forward requests for domain1.com to target1, and requests for domain2.com to target2 (I have already created the rules on the load balancer). However, I am getting "request timed out" for domain2 and cannot figure out the cause. It looks like Route53 does not resolve the second domain pointing to the same load balancer.
Is 1 load balancer for 2 targets/domains (hosted on 2 different instances) possible at all (I read somewhere that it is)? What is missing? Please advise.
In case someone else comes across the same issue: the above (and below) setup works fine.
The basic settings are as follows:
Route53:
Hosted zone domain1.com: A record > my_app_load_balancer
Hosted zone domain2.com: A record > my_app_load_balancer
(where my_app_load_balancer is the same load balancer)
On the load balancer:
Rule 1: if host is domain1.com then forward to the target for domain1.com
Default rule: if host is domain2.com then forward to the target for domain2.com
In my case this was not working due to DNS cache issues on my machine. I was testing from Windows, so to make it work I ran ipconfig /flushdns.
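For reference, a host-header rule like Rule 1 can be created with the AWS CLI along these lines (all ARNs below are placeholders):

aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:region:account-id:listener/app/my_load_balancer/xxxx/yyyy \
    --priority 10 \
    --conditions Field=host-header,Values=domain1.com \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:account-id:targetgroup/target1/zzzz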

How to use a different DNS name for OpenShift 3.11 routes than the default wildcard DNS name?

I'm not able to get a custom domain record working with an OpenShift cluster. I've read tons of articles, Stack Overflow posts, and this YouTube video https://www.youtube.com/watch?v=Y7syr9d5yrg. All of them seem to "almost" be useful, but there is always something missing, and I haven't been able to get this working by myself.
The scenario is as follows. I've got an OpenShift cluster deployed on an IBM Cloud account. I've registered myinnovx.com and want to use it with an OpenShift app. Cluster details:
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
openshift v3.11.146
kubernetes v1.11.0+d4cacc0
I've got an application deployed with a blue/green strategy. In the following screenshot, you can see the routes I've available.
mobile-blue: I created this one manually pointing to my custom domain mobileoffice.myinnovx.com
mobile-office: Created with oc expose service mobile-office --name=mobile-blue to use external access.
mobile-green: Openshift automatically generated a route for the green app version. (Source2Image deployment)
mobile-blue: Openshift automatically generated a route for the blue app version. (Source2Image deployment)
I've set up two CNAME records on my DNS edit page as follows:
In several blogs/articles, I've found that I'm supposed to point my wildcard record to the router route canonical name. But I don't have any route canonical name in my cluster. I don't even have an Ingress route configured.
I'm at a loss here as to what I'm missing. Any help is greatly appreciated.
This is a current export of my DNS:
$ORIGIN myinnovx.com.
$TTL 86400
@ IN SOA ns1.softlayer.com. msalimbe.ar.ibm.com. (
        2019102317 ; Serial
        7200       ; Refresh
        600        ; Retry
        1728000    ; Expire
        3600 )     ; Minimum
@ 86400 IN NS ns1.softlayer.com.
@ 86400 IN NS ns2.softlayer.com.
*.myinnovx.com 900 IN CNAME .mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud.
mobileoffice 900 IN CNAME mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud
mobile-test.myinnovx.com 900 IN A 169.63.244.76
I think you almost got it, Matias.
The FQDN - mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud - resolves for me to an IP that is part of SOFTLAYER-RIPE-4-30-31 and is accessible from the Internet. So, it should be possible to configure what you want.
The snapshot of the DNS records in your question isn't displaying the entries in full, but what might be missing is a dot (.) at the end of both the "Host/Service" and the "Value/Target". Something like this:
mobileoffice.myinnovx.com. CNAME 900 (15min) mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud.
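You can check what the record actually resolves to with dig once it has propagated:

dig +short mobileoffice.myinnovx.com CNAME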
Most of what I'm about to say only applies to OpenShift 3.x. In OpenShift 4.x things are sufficiently different that most of the below doesn't quite apply.
By default OpenShift 3.11 exposes applications via Red Hat's custom HAProxy Ingress Controller (colloquially known as the "Router"). The typical design in an OpenShift 3.x cluster is to designate particular cluster hosts for running cluster infrastructure workloads like the HAProxy Router and the internal OpenShift registry (usually using the node-role.kubernetes.io/infra=true node label).
For convenience, so admins don't have to manually create a DNS record for each exposed OpenShift application, there is a wildcard DNS entry that points to the load balancer associated with the HAProxy Router. Its DNS name is configured via the openshift_master_default_subdomain variable in the Ansible inventory file used for your cluster installation.
The structure of this record is generally something like *.apps.<cluster name>.<dns subdomain>, but it can be anything you like.
If you want to have a prettier DNS name for your applications you can do a couple things.
The first is to create a DNS entry myapp.example.com pointing to your load balancer, and have your load balancer configured to forward those requests to the cluster hosts where the HAProxy Router is running on ports 80/443. You can then configure your application's Route object to use the hostname myapp.example.com instead of the default <app name>-<project name>.apps.<cluster name>.<dns subdomain>.
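A sketch of setting that hostname at expose time (the hostname and route name here are examples, not values from the cluster above):

oc expose service mobile-office --hostname=myapp.example.com --name=myapp-custom

Alternatively, spec.host can be edited on an existing Route object.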
Another method would be to do what you're suggesting: let the application use the default wildcard route name, but create a DNS CNAME record pointing to the original wildcard route name. For example, if my openshift_master_default_subdomain is apps.openshift-dev.example.com and my application route is myapp-myproject.apps.openshift-dev.example.com, then I could create a CNAME DNS record myapp.example.com pointing to myapp-myproject.apps.openshift-dev.example.com.
The key thing that makes either of the above work is that the HAProxy Router doesn't care what the hostname of the request is. All it's going to do is match the Host header of the incoming request (SNI must be set in the case of TLS requests, with the HAProxy Router configured for passthrough) against all of the Route objects in the cluster and see if any of them match. So if your DNS/load balancer configuration is set up to bring requests to the HAProxy Router and the Host header matches a Route, that request will get forwarded to the appropriate OpenShift service.
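You can see that behavior by pointing a request straight at the router with the hostname of an existing Route (203.0.113.10 below stands in for the router's load balancer IP):

curl -v --resolve myapp.example.com:80:203.0.113.10 http://myapp.example.com/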
In your case I don't think you have the CNAME pointed at the right place. You need to point your CNAME at the wildcard hostname your application Route is using.
Also, please note that the instructions for custom DNS setup for a route on OpenShift v4 are a bit different and are not correctly displayed in the web console:
apps.<clustername>.<clusterid>.<shard>.openshiftapps.com will not resolve to anything. *.apps.<clustername>.<clusterid>.<shard>.openshiftapps.com is the wildcard entry, so you need something prepended to it.
To align with the way it was on v3, we usually chose the arbitrary string elb, e.g. elb.apps.<clustername>.<clusterid>.<shard>.openshiftapps.com. That will hit the routers.
Here is the related BZ - https://bugzilla.redhat.com/show_bug.cgi?id=1925132

Gatling with load balanced IP hash Nginx

I'm load testing a Tomcat web application with 4 nodes. Those nodes are configured in an Nginx upstream with ip_hash:
upstream tomcat_nodes {
    ip_hash;
    server example:8888 weight=2 max_fails=3 fail_timeout=10s;
    server example:8888 weight=4 max_fails=3 fail_timeout=10s;
    server example:8888 weight=2 max_fails=3 fail_timeout=10s;
    server example:8888 weight=2 max_fails=3 fail_timeout=10s;
}
Anyway, I use Gatling for load and performance testing, but every time I start a test, all traffic is routed to one node. Only when I change the load balancing method to least_conn or round-robin is the traffic divided. But this application needs a persistent node to do the work.
Is there any way to let Gatling route the traffic to all 4 nodes during a run? Maybe with a setup configuration? I'm using this setUp right now:
setUp(
  scenario1.inject(
    atOnceUsers(50),
    rampUsers(300) over (1800 seconds)
  ).protocols(httpConf)
)
Thank you!
ip_hash;
Specifies that a group should use a load balancing method where requests are distributed between servers based on client IP addresses. Since Gatling injects all virtual users from a single machine, every request comes from the same client IP, which is why ip_hash sends all of your test traffic to one node.
You should use sticky:
Enables session affinity, which causes requests from the same client to be passed to the same server in a group of servers.
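In the upstream block that would look roughly like this (sticky is an NGINX Plus directive, as the edit below notes; the cookie name and expiry are illustrative):

upstream tomcat_nodes {
    server example:8888 weight=2 max_fails=3 fail_timeout=10s;
    # route each client back to the server that set its cookie
    sticky cookie srv_id expires=1h path=/;
}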
Edit:
Right, I didn't see that it's for nginx plus only :(
I found this post (maybe it helps...):
https://serverfault.com/questions/832790/sticky-sessions-with-nginx-proxy
Reference to: https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng
There is also a version of the module for older versions of nginx:
http://dgtool.treitos.com/2013/02/nginx-as-sticky-balancer-for-ha-using.html
Reference to: https://code.google.com/archive/p/nginx-sticky-module/

GCE + K8S - Accessing referral IP address

With a standard Kubernetes deployment on Google Container Engine, including services configured with the Kubernetes load balancer settings (which create network load balancers), is it possible to access the user's (or referring) IP address in an application? In the case of PHP, checking the common headers in the $_SERVER superglobal only yields the server and internal network addresses.
Not yet. Services go through kube-proxy, which answers the client connection and proxies through to the backend (your PHP server). The address you'd see would be the IP of whichever kube-proxy the connection went through.
Work has been done, and a tracking issue is still open to switch over to an iptables-only proxy. That would allow your PHP server to get the actual client IP.