How to use a different DNS name for OpenShift 3.11 routes than the default wildcard DNS name? - kubernetes

I'm not able to get a custom domain record working with an OpenShift cluster. I've read tons of articles, Stack Overflow posts, and this YouTube video https://www.youtube.com/watch?v=Y7syr9d5yrg. All seem to "almost" be useful for me, but there is always something missing, and I'm not able to get this working by myself.
The scenario is as follows. I've got an OpenShift cluster deployed on an IBM Cloud account. I've registered myinnovx.com and I want to use it with an OpenShift app. Cluster details:
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
openshift v3.11.146
kubernetes v1.11.0+d4cacc0
I've got an application deployed with a blue/green strategy. In the following screenshot, you can see the routes I have available.
mobile-blue: I created this one manually, pointing to my custom domain mobileoffice.myinnovx.com
mobile-office: Created with oc expose service mobile-office --name=mobile-blue to allow external access.
mobile-green: OpenShift automatically generated a route for the green app version. (Source2Image deployment)
mobile-blue: OpenShift automatically generated a route for the blue app version. (Source2Image deployment)
I've set up two CNAME records on my DNS edit page as follows:
In several blogs/articles, I've found that I'm supposed to point my wildcard record to the router route canonical name. But I don't have any route canonical name in my cluster. I don't even have an Ingress route configured.
I'm at a loss here as to what I'm missing. Any help is greatly appreciated.
This is the response I get testing my DNS:
This is a current export of my DNS:
$ORIGIN myinnovx.com.
$TTL 86400
@ IN SOA ns1.softlayer.com. msalimbe.ar.ibm.com. (
2019102317 ; Serial
7200 ; Refresh
600 ; Retry
1728000 ; Expire
3600) ; Minimum
@ 86400 IN NS ns1.softlayer.com.
@ 86400 IN NS ns2.softlayer.com.
*.myinnovx.com 900 IN CNAME .mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud.
mobileoffice 900 IN CNAME mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud
mobile-test.myinnovx.com 900 IN A 169.63.244.76

I think you almost got it, Matias.
The FQDN - mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud - resolves for me to an IP that is part of SOFTLAYER-RIPE-4-30-31 and is accessible from the Internet. So, it should be possible to configure what you want.
The screenshot of the DNS records in your question isn't displaying the entries in full, but what might be missing is a trailing dot (.) at the end of both the "Host/Service" and "Value/Target" fields. Something like this:
mobileoffice.myinnovx.com. CNAME 900 (15min) mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud.
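Once the records are saved with the trailing dots, you can verify the resolution chain with dig, using the names from your zone; both lookups should end at the IBM Cloud cluster domain and then an IP:
dig +short mobileoffice.myinnovx.com
dig +short anything.myinnovx.com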

Most of what I'm about to say only applies to OpenShift 3.x. In OpenShift 4.x things are sufficiently different that most of the below doesn't quite apply.
By default, OpenShift 3.11 exposes applications via Red Hat's custom HAProxy Ingress Controller (colloquially known as the "Router"). The typical design in an OpenShift 3.x cluster is to designate particular cluster hosts for running cluster infrastructure workloads like the HAProxy router and the internal OpenShift registry (usually using the node-role.kubernetes.io/infra=true node label).
For convenience, so admins don't have to manually create a DNS record for each exposed OpenShift application, there is a wildcard DNS entry that points to the load balancer associated with the HAProxy Router. The DNS name of this entry is configured via openshift_master_default_subdomain in the Ansible inventory file used for your cluster installation.
The structure of this record is generally something like *.apps.<cluster name>.<dns subdomain>, but it can be anything you like.
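For example, the relevant line lives in the [OSEv3:vars] section of the inventory and might look like this (the subdomain value here is just illustrative):
[OSEv3:vars]
openshift_master_default_subdomain=apps.mycluster.example.com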
If you want to have a prettier DNS name for your applications you can do a couple things.
The first is to create a DNS entry myapp.example.com pointing to your load balancer, and have your load balancer configured to forward those requests to the cluster hosts where the HAProxy Router is listening on ports 80/443. You can then configure your application's Route object to use the hostname myapp.example.com instead of the default <app name>-<project name>.apps.<cluster name>.<dns subdomain>.
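For example, the custom hostname can be set when exposing the service (the service name, route name, and hostname here are placeholders):
# Create a route whose host is the pretty DNS name instead of the wildcard default
oc expose service myapp --name=myapp-custom --hostname=myapp.example.com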
Another method would be to do what you're suggesting: let the application use the default wildcard route name, but create a DNS CNAME pointing to the original wildcard route name. For example, if my openshift_master_default_subdomain is apps.openshift-dev.example.com and my application route is myapp-myproject.apps.openshift-dev.example.com, then I could create a CNAME DNS record myapp.example.com pointing to myapp-myproject.apps.openshift-dev.example.com.
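With that record in place, resolving the pretty name should chain through the wildcard route name. A quick sanity check (the output shown is only illustrative):
dig +short myapp.example.com
# myapp-myproject.apps.openshift-dev.example.com.
# 203.0.113.10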
The key thing that makes either of the above work is that the HAProxy router doesn't care what the hostname of the request is. All it's going to do is match the Host header of the incoming request (SNI must be set in the case of TLS requests with the HAProxy router configured for passthrough) against all of the Route objects in the cluster and see if any of them match. So if your DNS/load balancer configuration is set up to bring requests to the HAProxy Router and the Host header matches a Route, that request will get forwarded to the appropriate OpenShift service.
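A handy way to confirm the router side independently of DNS is to send a request straight at the router with the expected Host header; for example (the router IP and hostnames below are placeholders):
# Plain HTTP: override the Host header so it matches the Route
curl -H 'Host: myapp-myproject.apps.openshift-dev.example.com' http://203.0.113.10/
# TLS: --resolve connects to the router IP while still sending the matching SNI/Host
curl --resolve myapp.example.com:443:203.0.113.10 https://myapp.example.com/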
In your case I don't think you have the CNAME pointed at the right place. You need to point your CNAME at the wildcard hostname your application Route is using.

Also, please note that the instructions for custom DNS setup for a route on OpenShift v4 are a bit different and are not correctly displayed in the web console:
apps.<clustername>.<clusterid>.<shard>.openshiftapps.com will not resolve to anything. *.apps.<clustername>.<clusterid>.<shard>.openshiftapps.com is the wildcard entry, so you need something prepended to it.
To align with the way it was on v3, we usually chose the arbitrary string elb, e.g. elb.apps.<clustername>.<clusterid>.<shard>.openshiftapps.com. That will hit the routers.
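A quick check makes the difference visible (cluster name, ID, and shard below are placeholders):
# The bare apps domain has no record of its own
dig +short apps.mycluster.abcd.p1.openshiftapps.com
# Any label under the wildcard, such as the arbitrary "elb" prefix, resolves to the routers
dig +short elb.apps.mycluster.abcd.p1.openshiftapps.com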
Here is the related BZ - https://bugzilla.redhat.com/show_bug.cgi?id=1925132

Related

GKE Kubernetes external domain provider

I built a simple cluster in GKE with two services using this tutorial:
https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
After finishing that, I'm able to access my service using the external IP address, so I bought a domain to use with this IP address. After setting up an A record in the DNS settings pointing to that IP address, the domain doesn't work; it just loads and then shows ERR_CONNECTION_TIMED_OUT. Do I need to do something in the Google console, or how can I make this IP public and accessible through the domain?
Please refer to the official documentation, which describes the steps you need to take to configure domain names with a static IP.
These are the steps you need to cover:
Go to the NETWORKING section of the GCP console, then VPC Network -> External IP addresses, to ensure that you are using a static IP address, not an ephemeral one.
Go to Network services -> Cloud DNS. You need to create a DNS zone, entering your domain name in the DNS name field. After creation you will see Add record set, where you need to paste your external IP address.
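The same can also be done with gcloud; here is a rough sketch, assuming your domain is example.com (the IP, zone, and region names are placeholders):
# Reserve a static external IP in the region of your load balancer
gcloud compute addresses create my-static-ip --region=us-central1
# Create a public DNS zone for the domain
gcloud dns managed-zones create my-zone --dns-name="example.com." --description="my zone"
# Add an A record pointing the domain at the static IP
gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction add 203.0.113.10 --name="example.com." --ttl=300 --type=A --zone=my-zone
gcloud dns record-sets transaction execute --zone=my-zone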
There is also a good tutorial on YouTube about setting up a custom domain on GCP. Let me know if it works for you.

How do I host my script in a Google Cloud server?

So I have created something small, an image rehost, where I wish to use a Python script. I have a URL such as https://i.imgur.com/VBPNX9p.jpg, but with my rehost it would be
https://ip:port/abc123def456
so whenever I access that page it would give me the URL that I posted here.
However, the issue I am having is that I have no clue how to actually host the server that I made with Node.js. Right now I just used the external IP with port 5000. When I tried to send the image from my home IP by using
https://external_ip:5000/abc123
the server doesn't recognize anything and nothing is being sent to the server, which makes me think I have set something up wrong.
I am using a Google Cloud server, and I would like to know how I can host my own server on Google Cloud.
As you are having trouble adding a firewall rule, I'm going to suggest making sure port 5000 is open and not 8888.
To open the firewall rule for port 5000 in Google Cloud Platform follow these steps.
1) Navigate to VPC Network > Firewall rules > Create firewall rule.
2) In the 'Create a firewall rule' page, select these settings:
Name - choose a name for this firewall rule
Network - select the name of the network your instance belongs to, most probably
'default' unless you've configured a custom network.
Direction of traffic - 'Ingress'.
Action on match - 'Allow'.
Targets - 'All instances in the network'.
Source filter - 'IP ranges'.
Source IP ranges - '0.0.0.0/0'.
Second source filter - 'None'.
Specified protocols and ports - 'tcp:5000' or 'udp:5000' depending on whether the protocol you are using uses tcp or udp.
3) Hit 'Create'.
This will create a rule allowing traffic on port 5000 to all instances in your network from all IP address sources.
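If you prefer the command line, the equivalent rule can be created with gcloud (the rule name here is just an example):
gcloud compute firewall-rules create allow-tcp-5000 \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:5000 \
    --source-ranges=0.0.0.0/0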
My advice would be to see if these settings work, and then, once you've confirmed this, lock down the settings by specifying a specific IP address or range of IP addresses in the 'Source IP ranges' text box, and by adding a target tag to your instance and specifying 'Specified target tags' so the port is only open to that instance.
If this doesn't work, you may have a firewall rule turned on within the instance, which you would need to configure (or turn it off).
For more detailed information about setting firewall rules please see here.
For running Node.js on a GCE VM, I suggest you use the Bitnami Node.js package on GCP Marketplace, which includes the latest version of Node.js, Apache, Python, and Redis. Using a pre-configured Node.js environment gets you up and running quickly because everything works out of the box. Manually configuring an environment can be a difficult and time-consuming hurdle to developing an application.
Also, if you wish to do URL redirection, you can use the URL map feature provided with the Google Cloud HTTP load balancer. This feature allows you to direct traffic to different instances based on the incoming URL. For example, you can send requests for http://www.example.com/audio to one backend service, which contains instances configured to deliver audio files, and requests for http://www.example.com/video to another backend service, which contains instances configured to deliver video files. You can find configuration steps and more information here.
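As a rough sketch of that URL map setup with gcloud (all backend service names are placeholders and must already exist):
# Default backend for everything that doesn't match a path rule
gcloud compute url-maps create web-map --default-service=web-backend
# Send /audio/* and /video/* requests to dedicated backend services
gcloud compute url-maps add-path-matcher web-map \
    --path-matcher-name=media-paths \
    --new-hosts=www.example.com \
    --default-service=web-backend \
    --path-rules="/audio/*=audio-backend,/video/*=video-backend"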

How to allow egress to maps.googleapis.com

I have microservices running on GKE clusters. They need to communicate with https://maps.googleapis.com/. All these microservices are running in a cluster created in a custom network. Do I need to allow egress for these clusters/nodes, or is communication allowed by default since it is also a GCP service? If I need to allow a firewall rule for egress, how can I do that for a domain name instead of an IP? I read that the IPs for maps.googleapis.com may change. Can you please help me?
GKE works on the same infrastructure as Google Compute Engine.
Unfortunately, it is not possible to add firewall rules with a destination defined as a DNS name.
Although the Google Maps API is part of Google's services, there is no template or anything like that to add it as an exception to the firewall, and the firewall does not know anything about Google services. If you block all egress traffic, access to all APIs will be blocked too.
So, you need to get the IP ranges of the API somehow and add them to the firewall.
I found only one way to get all the ranges (using DNS names), described here. But you need to have:
the Google Maps APIs Premium Plan or a previous Google Maps APIs for Work or Google Maps for Business license.
If you have it, just go to that link where you can get a current list of domains related to Google Maps API.
If not, you can try to allow traffic to all the addresses that Google publishes as its CIDR blocks; it might help.
You can get it by nslookup command:
nslookup -q=TXT _spf.google.com 8.8.8.8
And then get all "include" names from the answer, like:
nslookup -q=TXT _netblocks.google.com 8.8.8.8
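Once you have the ranges from those lookups, you could allow egress to them explicitly; a sketch (the network name and CIDR blocks are placeholders, substitute whatever the nslookup output returns):
gcloud compute firewall-rules create allow-egress-google-apis \
    --network=custom-net \
    --direction=EGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --destination-ranges=64.233.160.0/19,66.102.0.0/20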

OpenShift Origin Route Hostname not accessible

I have a query which is basically a clarification regarding Routes in OpenShift Origin.
I managed to set up OpenShift Origin version 1.4.0-rc1 on CentOS hosted in a local VMware installation. I'm also able to pull and set up an image for nginx, and the pod status shows Running. I'm able to access nginx on the service endpoint as well. Now, as per the documentation, if I want to access this nginx instance outside the hosting system I need to create a Route, which I also did.
The confusion is about the Create Route screen in the OpenShift Web Console: it either generates a hostname or allows you to enter one. I tried both options; the generated hostname seems to be a long subdomain-style hostname and it doesn't work. What I mean is that I'm not able to access this hostname from anywhere on the network, including the hosting OS.
To summarize, the service endpoint, which looks like 172.x.x.x, is working on the local machine that is hosting OpenShift, but the generated/entered hostname for the route doesn't work from anywhere.
Please clarify the idea behind the route concept and how one could access a service from outside the host machine (part of the same network).
As stated in documentation:
An OpenShift Origin route exposes a service at a host name, like www.example.com, so that external clients can reach it by name. DNS resolution for a host name is handled separately from routing; your administrator may have configured a cloud domain that will always correctly resolve to the OpenShift Origin router, or if using an unrelated host name you may need to modify its DNS records independently to resolve to the router.
It is important to note the difference between "route" and "router". The OpenShift router (mentioned above) listens for all requests to OpenShift-deployed applications and has to be deployed beforehand in order for routes to work.
https://docs.openshift.org/latest/architecture/core_concepts/routes.html
So once you have the router deployed and working, all routes that you create in OpenShift should resolve to where that OpenShift router is listening. For example, configure your DNS with a wildcard (this is a dnsmasq wildcard example):
address=/.yourdomain.com/107.117.239.50
This way, all your "routes" to services would look like this:
service1.yourdomain.com
service2.yourdomain.com
...
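You can verify the wildcard is working with a quick lookup (the name and IP come from the example above):
dig +short service1.yourdomain.com
# 107.117.239.50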
Hope this helps

Redirecting a subdomain to AWS instance

I have a domain example.com that is hosted on WebFaction. However, I would like to redirect its subdomain (e.g. sub.example.com) to one of my AWS instances, which has a public DNS of:
https://ec2-xx-xxx-xxx-xxx.ap-southeast-1.compute.amazonaws.com:8083 (please note the port number).
This instance is also assigned an Elastic IP address.
So far, the solutions I have tried are:
Using CNAME redirection; however, it does not work because of this: https://forums.aws.amazon.com/thread.jspa?threadID=55995
Then I proceeded to use the old-fashioned .htaccess:
Redirect permanent / http://ec2-xx-xx-x-xxx.compute-1.amazonaws.com
order deny,allow
However, I want to keep sub.example.com in the address bar instead of having it change to the AWS public DNS.
Does anyone know the best way to solve this? Thanks.
If you have assigned an Elastic IP to the instance, you should be able to just set up a new 'A' record in your DNS that points directly to that IP address, no?
Listening on the specific port should be handled by the bindings on the instance (either through Apache or IIS).