Route53: 2 A records to 1 load balancer - amazon-route53

I have two hosted zones in Route53:
domain1.com > A record > Simple routing policy > my_load_balancer
domain2.com > A record > Simple routing policy > my_load_balancer
my_load_balancer is the same application load balancer for both A records. I'd like to use it to forward requests for domain1.com to target1 and requests for domain2.com to target2 (I have already created the rules on the load balancer). However, I am getting "request timed out" for domain2 and cannot figure out the cause. It looks like Route53 does not resolve the second domain pointing to the same load balancer.
Is 1 load balancer for 2 targets/domains (hosted on 2 different instances) possible at all (I read somewhere that it is)? What is missing? Please advise.

In case someone else comes across the same issue: the setup above (and below) works fine.
The basic settings are as follows:
Route53:
Hosted zone domain1.com: A record > my_app_load_balancer
Hosted zone domain2.com: A record > my_app_load_balancer
(where my_app_load_balancer is the same load balancer)
On the load balancer:
Rule 1: if host is domain1.com then forward to the target for domain1.com
Default rule: forward all remaining requests (i.e. domain2.com) to the target for domain2.com
In my case this was not working due to DNS caching on my machine. I was testing from Windows, so to make it work I ran ipconfig /flushdns
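For reference, host-header rules like the ones above can also be created with the AWS CLI. This is only a sketch: LISTENER_ARN and TG_DOMAIN1_ARN are placeholders for your own listener and target group ARNs, and the priority is arbitrary.

```shell
# Sketch: attach a host-header rule to an existing ALB listener.
# LISTENER_ARN and TG_DOMAIN1_ARN are placeholders, not real resources.
aws elbv2 create-rule \
  --listener-arn "$LISTENER_ARN" \
  --priority 10 \
  --conditions Field=host-header,Values=domain1.com \
  --actions Type=forward,TargetGroupArn="$TG_DOMAIN1_ARN"
```

Requests whose Host header matches no rule fall through to the listener's default action, which is why the default rule above handles domain2.com.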

How to use a different dns name for OpenShift 3.11 routes than the default wildcard dns name?

I'm not able to get a custom domain record working with an OpenShift cluster. I've read tons of articles, Stack Overflow posts, and this YouTube video: https://www.youtube.com/watch?v=Y7syr9d5yrg. All seem to "almost" be useful for me, but there is always something missing, and I'm not able to get this working by myself.
The scenario is as follows: I've got an OpenShift cluster deployed on an IBM Cloud account. I've registered myinnovx.com, and I want to use it with an OpenShift app. Cluster details:
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
openshift v3.11.146
kubernetes v1.11.0+d4cacc0
I've got an application deployed with a blue/green strategy. These are the routes I have available:
mobile-blue: I created this one manually pointing to my custom domain mobileoffice.myinnovx.com
mobile-office: Created with oc expose service mobile-office --name=mobile-blue to use external access.
mobile-green: Openshift automatically generated a route for the green app version. (Source2Image deployment)
mobile-blue: Openshift automatically generated a route for the blue app version. (Source2Image deployment)
I've set up two CNAME records on my DNS edit page as follows:
In several blogs/articles, I've found that I'm supposed to point my wildcard record to the router route canonical name. But I don't have any route canonical name in my cluster. I don't even have an Ingress route configured.
I'm at a loss here as to what I'm missing. Any help is greatly appreciated.
This is a current export of my DNS:
$ORIGIN myinnovx.com.
$TTL 86400
@ IN SOA ns1.softlayer.com. msalimbe.ar.ibm.com. (
2019102317 ; Serial
7200 ; Refresh
600 ; Retry
1728000 ; Expire
3600) ; Minimum
@ 86400 IN NS ns1.softlayer.com.
@ 86400 IN NS ns2.softlayer.com.
*.myinnovx.com 900 IN CNAME .mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud.
mobileoffice 900 IN CNAME mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud
mobile-test.myinnovx.com 900 IN A 169.63.244.76
I think you almost got it, Matias.
The FQDN - mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud - resolves for me to an IP that is part of SOFTLAYER-RIPE-4-30-31 and is accessible from the Internet. So, it should be possible to configure what you want.
The snapshot of the DNS records in your question isn't displaying the entries in full, but what might be missing is a dot (.) at the end of both the "Host/Service" and "Value/Target" values. Something like this:
mobileoffice.myinnovx.com. CNAME 900 (15min) mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud.
Most of what I'm about to say applies only to OpenShift 3.x. In OpenShift 4.x things are sufficiently different that most of the below doesn't quite apply.
By default OpenShift 3.11 exposes applications via Red Hat's custom HAProxy Ingress Controller (colloquially known as the "Router"). The typical design in an OpenShift 3.x cluster is to designate particular cluster hosts for running cluster infrastructure workloads like the HAProxy Router and the internal OpenShift registry (usually via the node-role.kubernetes.io/infra=true node label).
For convenience, so admins don't have to manually create a DNS record for each exposed OpenShift application, there is a wildcard DNS entry that points to the load balancer associated with the HAProxy Router. Its name is configured via the openshift_master_default_subdomain variable in the Ansible inventory file used for the cluster installation.
The structure of this record is generally something like *.apps.<cluster name>.<dns subdomain>, but it can be anything you like.
If you want to have a prettier DNS name for your applications you can do a couple things.
The first is to create a DNS entry myapp.example.com pointing to your load balancer, and configure the load balancer to forward those requests to the cluster hosts where the HAProxy Router is running, on ports 80/443. You can then configure your application's Route object to use the hostname myapp.example.com instead of the default <app name>-<project name>.apps.<cluster name>.<dns subdomain>.
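For illustration, a Route pinned to a custom hostname might look like the following; the names and namespace here are hypothetical, not taken from the question:

```yaml
# Hypothetical Route with a custom hostname (all names are placeholders)
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
  namespace: myproject
spec:
  host: myapp.example.com
  to:
    kind: Service
    name: myapp
```

The spec.host field is what the Router matches against the incoming Host header.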
Another method would be to do what you're suggesting: let the application use the default wildcard route name, but create a DNS CNAME pointing to that name. For example, if my openshift_master_default_subdomain is apps.openshift-dev.example.com and my application route is myapp-myproject.apps.openshift-dev.example.com, then I could create a CNAME record myapp.example.com pointing to myapp-myproject.apps.openshift-dev.example.com.
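In zone-file terms, that CNAME would be a single record along these lines (the 300-second TTL is an arbitrary choice):

```
myapp.example.com. 300 IN CNAME myapp-myproject.apps.openshift-dev.example.com.
```

Note the trailing dots, which mark both names as fully qualified.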
The key thing that makes either approach work is that the HAProxy Router doesn't care what the hostname of the request is. All it's going to do is match the Host header of the incoming request (SNI must be set in the case of TLS requests with the HAProxy Router configured for passthrough) against all of the Route objects in the cluster and see if any of them match. So if your DNS/load balancer configuration is set up to bring requests to the HAProxy Router and the Host header matches a Route, that request will get forwarded to the appropriate OpenShift service.
In your case I don't think you have the CNAME pointed at the right place. You need to point your CNAME at the wildcard hostname your application Route is using.
Also, please note that the instructions for custom DNS setup for a route on OpenShift v4 are a bit different and are not displayed correctly in the web console:
apps.<clustername>.<clusterid>.<shard>.openshiftapps.com will not resolve to anything. *.apps.<clustername>.<clusterid>.<shard>.openshiftapps.com is the wildcard entry, so you need something prepending that.
To align with the way it was on v3, we usually chose an arbitrary string such as elb, e.g. elb.apps.<clustername>.<clusterid>.<shard>.openshiftapps.com. That will hit the routers.
Here is the related BZ - https://bugzilla.redhat.com/show_bug.cgi?id=1925132

CIS Firewall Issue

Finding it difficult to work out how to apply firewall rules for the Global Load Balancers & DNS records managed as part of the IBM CIS service. Unable to find descriptive documentation regarding that.
We would appreciate help on how to address the concerns below:
We have an enterprise plan for the IBM CIS service, and currently, under a single CIS instance, we have 2 Global Load Balancers plus 4 DNS records managed via it.
Our requirements are:
1) To whitelist Global Load Balancers so they are accessible only from a defined set of IP ranges
2) To whitelist DNS records so they are accessible only from a defined set of IP ranges
3) To blacklist certain Global Load Balancer URL patterns
It is not clear to us how to use the "IP rule" or "Domain lockdown" features. Examples or a scenario-based approach explaining the use case of each of these options would help.
I am part of the development team for CIS. Can you please open a support ticket with the following information:
CRN for the Instance and Domain name
Details on the Firewall Rules you want to add
1) To whitelist Global Load Balancers to be accessible only from the defined set of IP ranges
2) To whitelist DNS records to be accessible only from the defined set of IP ranges ----- Are you asking for Private DNS?
3) To blacklist certain Global Load Balancer URL patterns ---- Can you please give examples?
Thanks, Vasu

How do I host my script in a Google Cloud server?

So I have created something small: an image rehost, where I wish to use a Python script. I have a URL such as https://i.imgur.com/VBPNX9p.jpg, but with my rehost it would be
https://ip:port/abc123def456
so whenever I access that page it would give me the URL that I posted here.
However, the issue I am having is that I have no clue how to actually host the server that I made with Node.js. Right now I just used the external IP with port 5000. When I tried to send the image through my home IP using
https://external_ip:5000/abc123
the server doesn't recognize anything and nothing is being sent to the server, so I think I have set up something wrong.
I am using a Google Cloud server, and I would like to know how I can host my own server on Google Cloud.
As you are having trouble adding a firewall rule, I'm going to suggest making sure port 5000 is open and not 8888.
To open the firewall rule for port 5000 in Google Cloud Platform follow these steps.
1) Navigate to VPC Network > Firewall rules > Create firewall rule.
2) In the 'Create a firewall rule' page, select these settings:
Name - choose a name for this firewall rule
Network - select the name of the network your instance belongs to, most probably 'default' unless you've configured a custom network.
Direction of traffic - 'Ingress'.
Action on match - 'Allow'.
Targets - 'All instances in the network'.
Source filter - 'IP ranges'.
Source IP ranges - '0.0.0.0/0'.
Second source filter - 'None'.
Specified protocols and ports - 'tcp:5000' or 'udp:5000' depending on whether the protocol you are using uses tcp or udp.
3) Hit 'Create'.
This will create a rule allowing traffic on port 5000 to all instances in your network from all IP address sources.
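The same rule can be created from the command line with gcloud; a minimal sketch, assuming the default network (the rule name allow-port-5000 is an arbitrary choice):

```shell
# Sketch: open tcp:5000 to all source IPs on the default network.
# "allow-port-5000" is an arbitrary rule name.
gcloud compute firewall-rules create allow-port-5000 \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:5000 \
  --source-ranges=0.0.0.0/0
```

Adding --target-tags later lets you narrow the rule to tagged instances, matching the lock-down advice below.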
My advice would be to see if these settings work, and then once this is confirmed, lock down the settings by specifying a specific IP address or range of IP addresses in the 'Source IP ranges' text box, adding a target tag to your instance, and selecting 'Specified target tags' so the port is only open to that instance.
If this doesn't work, you may have a firewall turned on within the instance itself, which you would need to configure (or turn off).
For more detailed information about setting firewall rules please see here.
For running Node.js on a GCE VM, I suggest you use the Bitnami Node.js package on the GCP Marketplace, which includes the latest version of Node.js, Apache, Python, and Redis. Using a pre-configured Node.js environment gets you up and running quickly because everything works out of the box. Manually configuring an environment can be a difficult and time-consuming hurdle to developing an application.
Also, if you wish to do URL redirection, you can use the URL map feature provided with the Google Cloud HTTP load balancer. This feature allows you to direct traffic to different instances based on the incoming URL. For example, you can send requests for http://www.example.com/audio to one backend service, which contains instances configured to deliver audio files, and requests for http://www.example.com/video to another backend service, which contains instances configured to deliver video files. You can find the steps to configure this and more information here.

Gatling with load balanced IP hash Nginx

I'm load testing a Tomcat web application with 4 nodes. Those nodes are configured through Nginx with ip_hash:
ip_hash;
server example:8888 weight=2 max_fails=3 fail_timeout=10s;
server example:8888 weight=4 max_fails=3 fail_timeout=10s;
server example:8888 weight=2 max_fails=3 fail_timeout=10s;
server example:8888 weight=2 max_fails=3 fail_timeout=10s;
Anyway, I use Gatling for load and performance testing, but every time I start a test all traffic is routed to one node. Only when I change the load-balancing method to least_conn or round robin is the traffic divided. But this application needs a persistent node to do the work.
Is there any way to let Gatling route the traffic to all 4 nodes during a run? Maybe with a setup configuration? I'm using this setUp right now:
setUp(
  scenario1.inject(
    atOnceUsers(50),
    rampUsers(300) over (1800 seconds)
  ).protocols(httpConf)
)
Thank you!
ip_hash;
Specifies that a group should use a load-balancing method where requests are distributed between servers based on client IP addresses. Since all your Gatling traffic originates from a single injector machine, every request shares the same client IP and therefore hashes to the same backend node.
You should use sticky:
Enables session affinity, which causes requests from the same client to be passed to the same server in a group of servers.
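A minimal sketch of what that would look like; the upstream name and server addresses are placeholders:

```nginx
upstream tomcat_cluster {
    server node1.example:8888;
    server node2.example:8888;
    # NGINX Plus only: cookie-based session affinity
    sticky cookie srv_id expires=1h path=/;
}
```

With cookie-based affinity, distribution depends on the session cookie rather than the client IP, so traffic from a single test machine can still spread across nodes.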
Edit:
Right, I didn't see that it's for nginx plus only :(
I found this post (maybe it helps...):
https://serverfault.com/questions/832790/sticky-sessions-with-nginx-proxy
Reference to: https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng
There is also a version of the module for older versions of nginx:
http://dgtool.treitos.com/2013/02/nginx-as-sticky-balancer-for-ha-using.html
Reference to: https://code.google.com/archive/p/nginx-sticky-module/

HAProxy dynamic configuration

Here is my setup:
we have one external load balancer (AWS) attached to the root domain mydomain.com
the external load balancer forwards traffic to an HAProxy instance, and HAProxy has to forward it further to one of two internal load balancers
we have 2 internal load balancers: the first points to the latest version of our app, the second to the previous version
each of our clients can have 1 to many subdomains, like sub1.mydomain.com, sub2., sub3.
some subdomains should be routed to the old version and some to the new, so the same client can have both old and new subdomains
Basically:
sub1.mydomain.com -> latest-load-balancer
sub2.mydomain.com -> older-load-balancer
The problem is how to set up this routing; we can't stop/start HAProxy for each new subdomain, and there could be more than 10k such subdomains in the future.
HAProxy can use maps to decide which backend (internal load balancer) to use, based on the domain.
HAProxy can be reloaded instead of restarted when the map changes.
If you don't want to even reload, you can send map commands to the Unix socket, changing the map in real time.
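A sketch of how that could look; the map file path, backend names, and internal load balancer addresses are assumptions, not taken from the question:

```
# /etc/haproxy/domains.map contains lines of "<host> <backend>":
#   sub1.mydomain.com latest-load-balancer
#   sub2.mydomain.com older-load-balancer

frontend fe_http
    bind *:80
    # Look up the backend by Host header; unknown hosts fall back to the latest version
    use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/domains.map,latest-load-balancer)]

backend latest-load-balancer
    server latest internal-lb-latest.example.com:80

backend older-load-balancer
    server older internal-lb-older.example.com:80
```

At runtime, entries can then be added without a reload via the admin socket (assuming a stats socket directive is configured), e.g.: echo "add map /etc/haproxy/domains.map sub3.mydomain.com older-load-balancer" | socat stdio /var/run/haproxy.sock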