Forwarding All Traffic from Global External IP to a Domain on GCP - kubernetes

I have an Autopilot GKE cluster set up. There is an Ingress which is an entry point to the app deployed in the cluster. I managed to configure SSL and HTTP -> HTTPS redirection with ease.
I also configured Cloud DNS that resolves my domain name to the cluster's IP (global static IP, let's name it global-front-app-ip).
This works without any problems: I'm able to access the app with the domain I own. My setup is very similar to the one described in this article.
What I'm trying to achieve now is to redirect all clients that access the app via the load balancer's IP global-front-app-ip to the domain name (http://global-front-app-ip -> http://my-domain.com).
I played with LB forwarding rules and Cloud Armor but haven't found a working solution.
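One pattern that may help (a sketch only; the redirect-only Service is hypothetical, and it assumes the default backend lives on your existing Ingress, since a second GKE Ingress would normally provision a second load balancer): requests made directly against the IP arrive without a matching Host header, so a default backend can catch them and answer with a 301 to the domain.

```yaml
# Sketch: requests matching no host rule (e.g. those sent to the bare IP)
# fall through to the default backend, which only returns
# "301 https://my-domain.com$request_uri".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: front-app
  annotations:
    kubernetes.io/ingress.global-static-ip-name: global-front-app-ip
spec:
  defaultBackend:
    service:
      name: redirect-to-domain   # hypothetical redirect-only Deployment/Service
      port:
        number: 80
  rules:
  - host: my-domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: front-app      # the real app Service
            port:
              number: 80
```

The redirect backend can be as small as a single nginx pod whose only server block returns the 301.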

Related

Keycloak internal and external link

I understand that this question has been asked and discussed in different forms before. However, I still lack clear guidelines on how to handle the situation.
Our Keycloak setup has multiple Keycloak replicas and sits behind a load balancer without a fixed IP, in a separate infrastructure. Our DNS records look like:
CNAME keycloak.acme.com public-lb.acme.com
And public-lb.acme.com forwards the request to specific instances of keycloak.
One of our end-user applications is located in a completely different infrastructure with strict access controls. The application is built in Java and uses the Keycloak integration org.keycloak:keycloak-servlet-filter-adapter. We do not have any custom adapters and simply follow the "standard" configuration:
{
  "auth-server-url" : "https://keycloak.acme.com",
  ..
}
However, this does not work, since keycloak.acme.com's IP address has to be whitelisted in that "special" infrastructure; without that, validation requests from the application inside the "special" infrastructure never reach Keycloak. And we cannot whitelist the IP, since the IP of our load balancer public-lb.acme.com is not fixed and changes over time.
We have a "tunnel" between the Keycloak infrastructure and that "special" infrastructure, with a dedicated IP CIDR range which is whitelisted.
Hence we have created a special internal load balancer that sits in the tunnel's CIDR range and forwards requests to the Keycloak replicas. Unfortunately that internal load balancer does not have a fixed IP address either; it can change over time.
Since we do not have a fixed IP address, is the only correct method to add a DNS record inside the "special" infrastructure pointing to the internal load balancer? Something like:
CNAME keycloak.acme.com internal-lb.acme.com
Or are there any alternative solutions? I understand the historical reasons behind this.
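Your CNAME idea is the standard split-horizon answer: serve a private view of the zone inside the "special" infrastructure that shadows only this one name. A minimal zone fragment to illustrate (BIND syntax; everything except the two hostnames from the question is illustrative):

```
; private view of acme.com, resolvable only inside the "special" infrastructure
$TTL 300
keycloak.acme.com.  IN  CNAME  internal-lb.acme.com.
```

Because the record is a CNAME, changes to the internal load balancer's IP are picked up automatically as long as internal-lb.acme.com itself stays current. The fallback, if no internal resolver is available, is pinning the name in each host's /etc/hosts, which is much harder to keep up to date.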

Using CloudFlare's CustomHostname with k8s ingress to enable CustomDomain

We have a custom domain feature, which allows our clients to use our services with their custom DNS records.
For example, our client ACME has a CNAME from login.acme.com to ssl.company.com. Right now we are using a k8s cluster to serve such traffic: for each custom domain, we create an ingress, an external service, and a certificate using the Let's Encrypt cert-manager.
We started using the Cloudflare WAF, and it provides a CustomHostname feature which lets us do the same as our CD cluster but without changing the host header. So for the example above we get:
host: login.acme.com -> login.acme.com
SNI: login.acme.com -> ssl.company.com
The issue is, of course, how to map a generic k8s ingress to allow such traffic.
When we did the POC we used this method and it worked, but now it has stopped working. We have also tried a default backend and an unhosted ingress path.
We are using the nginx-ingress controller but are migrating to another API gateway, such as Kong.
Any help will be appreciated.
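For what it's worth, with ingress-nginx the TLS certificate is selected by SNI while routing is done on the Host header, so the two can be declared independently in one Ingress. A sketch under that assumption (the Service and Secret names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: acme-custom-domain
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - ssl.company.com               # cert chosen by the SNI Cloudflare sends
    secretName: ssl-company-com-tls # hypothetical cert-manager Secret
  rules:
  - host: login.acme.com            # routed by the unchanged Host header
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: login-frontend    # hypothetical backend Service
            port:
              number: 80
```

If the POC setup stopped working, it is worth checking whether a controller upgrade tightened SNI/Host mismatch handling, as that is exactly the mismatch this traffic creates.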

IAP connector not routing request to on-prem. "No healthy upstream"

I'm trying to set up Identity-Aware Proxy for my backend services, parts of which reside in GCP and others on-prem, according to the instructions given in the following links:
Enabling IAP for on-premises apps and
Overview of IAP for on-premises apps
After following the guide I ended up in a partial state: the services running on GCP and serving at an https endpoint are perfectly accessible via IAP. However, the app running on-prem is not reachable through pods* and the external load balancer*.
Current Architecture followed:
Steps Followed
On GCP project
Created a VPC network with one subnet in a single region (in my case asia-southeast1)
Used IAP connector https://github.com/GoogleCloudPlatform/iap-connector
Configured the mapping for 2 domains.
For app in GCP
source: gcp.domain.com
destination: app1.domain.com (serving at https endpoint)
For the app on-prem (another GCP project)
source: onprem.domain.com
destination: app2.domain.com (serving at https endpoint but not exposed to internet)
Configured a VPN tunnel between both projects so the networks are connected
Enabled IAP for the load balancer created by the deployment
Added the corresponding accounts with the IAP-secured Web App User role to allow access to the services
On-prem
Created a VPC network in a region with one subnet (asia-southeast1)
Created a VM on that VPC in the same region
Assigned that VM to an instance group
Created an internal HTTPS load balancer with that instance group as backend
Secured the load balancer with SSL
Set up a VPN tunnel to the first project
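The two source/destination pairs above end up in the IAP connector's routing configuration. The exact schema varies between versions of the iap-connector repo, so treat the following as an illustrative shape rather than a copy-paste config:

```yaml
# Illustrative only; check the sample values in the iap-connector repo
# for the exact schema of your version.
routing:
- name: gcp-app
  source: gcp.domain.com          # hostname users hit, protected by IAP
  destination: app1.domain.com    # backend in this project
- name: onprem-app
  source: onprem.domain.com
  destination: app2.domain.com    # reachable only through the VPN tunnel
```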
What have I tried?
Logged in to pods and pinged different pods; all pods were reachable.
Logged in to nodes and checked connectivity to the remote VM on ports 80 and 443; both were reachable.
Pinged the remote VM from inside the pods; not reachable.
Expected Behaviour:
The user requests the load balancer at app1.domain.com; IAP authenticates and authorizes the user with OAuth and grants access to the web app.
The user requests the load balancer at app2.domain.com; IAP authenticates and authorizes the user with OAuth and grants access to the web app running on-prem.
Actual Behaviour
A request to app1.domain.com prompts the OAuth screen; after authenticating, the website is returned to the user.
A request to app2.domain.com prompts the OAuth screen; after authenticating, the browser returns 503 - "No healthy upstream".
Note:
I am using a separate GCP project to simulate on-premise.
Both projects are peered via VPN tunnel.
Both peering projects have subnets in the same region.
I have used an internal HTTPS load balancer in my on-prem project to make my VM visible in my host project, so that the external load balancer can route requests to the VM's https endpoint.
** I suspect that if the pods could reach the remote VM, the problem might well be resolved. It's just a wild guess.
Thank you so much, guys. I'm looking forward to your responses.

Getting DNS for Load Balancer in GCP

In Google Kubernetes Engine, I created a Load Balancer (external IP address). I can access it using the IP address. However, I want to get a domain name. (I am not asking about buying my own domain and adding DNS records.) I am not able to find how to get such a URL.
For example, in Azure Kubernetes Service, I created a Load Balancer and added a label, so I can get a URL like http://<dns_label_which_i_gave>.<region_name>.cloudapp.azure.com. For trial purposes, I don't have to pay for a domain and I get an easy-to-read domain name.
How do I get the same with a GCP Load Balancer?
With Google Cloud you can't do this. The load balancer exposes an IP, and you have to create an A record at your registrar to make the link.
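One trial-friendly workaround (not a GCP feature, just a public wildcard-DNS service): nip.io and sslip.io resolve any hostname that embeds an IP back to that IP, so you get a readable URL without creating any record. A sketch, using a hypothetical load balancer IP:

```shell
IP=34.120.10.10                       # hypothetical external IP of the LB
HOST="$(echo "$IP" | tr . -).nip.io"  # nip.io resolves 34-120-10-10.nip.io to 34.120.10.10
echo "http://$HOST"                   # -> http://34-120-10-10.nip.io
```

This is fine for demos and trials; for anything lasting you are back to the A-record answer above.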

How to alias my domain's subdomain with my jelastic environment subdomain?

I have a Kubernetes cluster hosted in a Jelastic environment env.jelastic-provider.com. In that k8s cluster, I am exposing a frontend app on app.env.jelastic-provider.com. I would like to use a CNAME record to alias my custom domain www.example.com to the frontend subdomain app.env.jelastic-provider.com. How can I achieve that? My DNS provider does not offer ANAME records.
Currently, I have defined a CNAME record aliasing www.example.com to app.env.jelastic-provider.com at my DNS provider. On the Jelastic side, I've bound www.example.com to env.jelastic-provider.com with the jelastic.environment.Binder.BindExtDomain API method, which of course doesn't work, because I'd need to bind to app.env.jelastic-provider.com, and that does not seem to be possible.
Do I have a way out not involving:
serving my frontend e.g. through CDN instead of my cluster
using ANAME record
?
Edit
Following the advice of Jelastic and of my Jelastic provider, I was able to make some good progress. As of today, attaching external IPs to the k8s cluster worker nodes is not supported yet; it will come in a later release of the Jelastic Kubernetes JPS. We can see in that manifest that most of the configuration is there; only the attachment of the IP to the worker nodes isn't done, as it is pretty involved.
Therefore, the only solution I am left with, according to this answer from Jelastic, is to add an nginx load balancer in front of my k8s cluster and configure the DNS for it. To do so, I need to configure SSL on that nginx instance, as the cluster will not work correctly without https. So the first steps are:
Add an nginx node in front of the cluster
Install the Let's Encrypt addon on the nginx node
Configure an A record on my domain provider's panel, linking the IPv4 address resulting from the previous Let's Encrypt installation with www.example.com
When the A record is valid, update the Let's Encrypt addon so that it takes the domain into account.
I also got rid of my domain bindings, as they are useless with A records.
If I do all that, then I can again access a working k8s cluster. The kubernetes dashboard as well as the kubernetes api are working.
What is, however, not working, is the access to my cluster's subdomains. As I stated in my original post, I need to access app.env.jelastic-provider.com. This is where I am now stuck. How can I now access that subdomain?
Using a CNAME record together with a public IP is the only way out you are looking for.
Custom Domain Name
Public IP
So, long story short. After the initial configuration mentioned in the edit of my initial post:
Add an nginx node in front of the cluster
Install the Let's Encrypt addon on the nginx node
Configure an A record on my domain provider's panel, linking the IPv4 address resulting from the previous Let's Encrypt installation with www.example.com
When the A record is valid, update the Let's Encrypt addon so that it takes the domain into account.
the address https://www.example.com leads to my cluster again, with a working k8s dashboard and API. Then:
at my domain provider, I've added another A record, named app, for app.env.jelastic-provider.com, pointing to the IPv4 of the nginx load balancer
in the Let's Encrypt configuration of the nginx load balancer, I've added the app.example.com external domain
in the nginx-jelastic.conf file, I've added
server {
    listen *:80;
    listen [::]:80;
    server_name app.example.com;

    location / {
        proxy_pass http://app.env.jelastic-provider.com;
    }
}
in the ssl.conf file, I've added
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate /var/lib/jelastic/SSL/jelastic.chain;
    ssl_certificate_key /var/lib/jelastic/SSL/jelastic.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://app.env.jelastic-provider.com;
    }
}
Of course, the above SSL config is not perfect; it must be tuned for production purposes.
EDIT
I noticed there is one downside to proceeding with a frontal nginx load balancer this way: whatever headers or config you set in the load balancer can be overridden by the cluster's own ingress controller. If you go down this road, make sure both configs are kept consistent.
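If the front nginx must control specific headers, the usual move is to set them explicitly in the proxied location and then keep the ingress controller's own header handling aligned with it. A sketch of what the location block above could grow into (the header choices are illustrative):

```nginx
location / {
    proxy_set_header Host              app.example.com;            # preserve the public host
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_pass http://app.env.jelastic-provider.com;
}
```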