GKE: ingress with subdomain

I used to work with an OpenShift/OKD cluster deployed in AWS, where it was possible to connect the cluster to a domain name from Route53. Then, as soon as I deployed an ingress with some host mappings (where the hosts defined in the ingress were subdomains of the base domain), all the necessary LB rules (Routes in OpenShift) and the subdomain itself were created by OpenShift and were directly available. For example: OpenShift is connected to the domain "somedomain.com", which is registered in Route53. In the ingress I have a host mapping like:
hosts:
  - host: sub1.somedomain.com
    paths:
      - path
After deployment I can reach sub1.somedomain.com. Is this kind of functionality available in GKE?
So far I have seen only mapping to static IP.
Also, I read here https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-http2 that if I need to connect a service to an ingress, the service has to be of type NodePort. Is it really so? In OpenShift that was not required; any normal ClusterIP service could be connected to an ingress.
Thanks in advance!

I think you should consider other Ingress Controllers for your use case.
I'm not an expert on GKE, but judging from Best practices for enterprise multi-tenancy below, you additionally need to consider how to route multiple Ingress hostnames through a wildcard subdomain, the way OpenShift does.
Set up HTTP(S) Load Balancing with Ingress:
You can create and configure an HTTP(S) load balancer by creating a Kubernetes Ingress resource,
which defines how traffic reaches your Services and how the traffic is routed to your tenant's application.
By registering Services with the Ingress resource, the Services' naming convention becomes consistent,
showing a single ingress, such as tenanta.example.com and tenantb.example.com.
Basically, the routing feature depends on the Ingress Controller.
From what I can tell, the default GKE Ingress Controller just creates a Google Cloud HTTP(S) Load Balancer; unlike OpenShift, it does not handle this kind of multi-tenancy by default.
By contrast, in OpenShift the Ingress Controller is implemented using HAProxy with a dynamic configuration feature, as follows:
LB --tenanta.example.com--> HAProxy (directly forwards the tenanta.example.com traffic to the target pod IPs) ---> Target Pods

How services are exposed depends on the K8s implementation of each cloud provider.
If the ingress controller is a component inside your cluster, a ClusterIP is enough to make your service reachable (internally, from inside the cluster itself).
If the ingress definition configures an external element (in the case of GKE, a load balancer), that element isn't part of the cluster and can't reach the ClusterIP (which is only accessible internally). A NodePort is required in this case.
So, in your case, either you expose your service as a NodePort, or you configure GKE with another ingress controller, installed locally in the cluster, instead of using the default one.
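As a minimal sketch of what the default GKE ingress expects (the names web-service and web-ingress are placeholders I made up; with container-native load balancing via the NEG annotation, GKE can also target ClusterIP Services, but the classic instance-group mode needs NodePort):

apiVersion: v1
kind: Service
metadata:
  name: web-service            # placeholder name
spec:
  type: NodePort               # required by the default GKE ingress in instance-group mode
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress            # placeholder name
spec:
  rules:
    - host: sub1.somedomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80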

So far GKE does not provide the ability to dynamically create subdomains. The desired situation would be for a GKE cluster to be assigned a DNS zone managed in GCP, with a mimic of OpenShift Routes driven by, for example, ingress annotations.
But the reality right now is that you have to create the subdomain or domain yourself, as well as the IP address which you connect that domain to. That particular GCP IP address can then be attached to an ingress (by name) using annotations, or it can be used in a LoadBalancer service.
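As a sketch of that manual wiring, assuming you have already reserved a global static IP named web-static-ip with gcloud and pointed your DNS record at it (the resource names here are mine):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    # attach the Ingress to the pre-reserved GCP address, by name
    kubernetes.io/ingress.global-static-ip-name: web-static-ip
spec:
  rules:
    - host: sub1.somedomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80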

Related

Exposing Service from a BareMetal (Kubeadm) Kubernetes Cluster to the outside world

Exposing a Service from a bare-metal (kubeadm-built) Kubernetes cluster to the outside world. I am trying to access my NGINX service from outside the cluster to get the NGINX output in the web browser.
For that, I have created a deployment and service for NGINX as shown below.
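(The manifests were not included in the post; below is a minimal reconstruction of what such an NGINX deployment and service typically look like, with all names assumed.)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25      # any recent NGINX image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80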
From my search, I found that we have the options below to expose a service to the outside world:
MetalLB
Ingress NGINX
Some Helm resources
I would like to learn about all three of these, or any other approaches, in a way that helps me learn new things.
GOAL
Exposing a Service from a bare-metal (kubeadm-built) Kubernetes cluster to the outside world.
How can I give my service its own public IP, accessible from outside the cluster?
You need to set up MetalLB to get an external IP address for LoadBalancer-type services. It will give the service an IP address from your local network.
Then you can do port mapping (configured in your router) of incoming traffic on ports 80 and 443 to that external service IP address.
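As a minimal layer-2 sketch, assuming MetalLB v0.13+ with its CRD-based configuration and a free address range on your LAN (the range and names are examples):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: home-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # pick a range your DHCP server does not hand out
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: home-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - home-pool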
I have done a similar setup; you can check it in detail here:
https://developerdiary.me/lets-build-low-budget-aws-at-home/
You need to deploy an ingress controller in your cluster so that it gives you an entry point where your applications can be accessed. Traditionally, in a cloud-native environment it would automatically provision a LoadBalancer for you that reads the rules you define inside your Ingress objects and routes requests to the appropriate services.
One of the most commonly used ingress controllers is the NGINX Ingress Controller. There are multiple ways to deploy it (manifests, Helm, operators). For bare-metal clusters there are several extra considerations, which you can read about here.
MetalLB is still in beta, so it's your choice whether to use it. If you don't have a hard requirement to expose the ingress controller as a LoadBalancer, you can expose it as a NodePort Service, which will be accessible on all the nodes in the cluster. You can then map that NodePort Service in your DNS so that the ingress rules are evaluated.
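A sketch of that NodePort exposure (the selector labels match the upstream ingress-nginx manifests at the time of writing, so verify them against your install; the nodePort values are arbitrary picks from the 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: http        # named container port on the controller pod
      nodePort: 30080
    - name: https
      port: 443
      targetPort: https
      nodePort: 30443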

Is it possible to have multiple ingress resources with a single GKE ingress controller

In the GKE Ingress documentation, it states that:
When you create an Ingress object, the GKE Ingress controller creates a Google Cloud HTTP(S) Load Balancer and configures it according to the information in the Ingress and its associated Services.
To me it seems that I cannot have multiple ingress resources with a single GCP ingress controller. Instead, GKE creates a new ingress controller for every ingress resource.
Is this really so, or is it possible to have multiple ingress resources with a single ingress controller in GKE?
I would like to have one GCP load balancer as the ingress entry point, with a static IP and DNS configured, and then have multiple applications running in the cluster, each registering its own ingress resource with application-specific host and/or path specifications.
Please note that I'm very new to GKE, GCP and Kubernetes in general, so it might be that I have misunderstood something.
I think the question you're actually asking is slightly different than what you have written. You want to know if multiple Ingress resources can be linked to a single GCP Load Balancer, not GKE Ingress controller. Based on the concept of a controller, there is only one GKE Ingress controller in a cluster, which is responsible for fulfilling multiple resources and provisioning multiple load balancers.
So, to answer the question directly (because I've been searching for a straight answer for a long time!):
Combining multiple Ingress resources into a single Google Cloud load balancer is not supported.
Source: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
Sad.
However, using the nginx-ingress controller is one way to at least minimize the number of external (GCP) load balancers provisioned (it only provisions a single TCP load balancer). But since that load balancer is for TCP traffic, it cannot terminate SSL or apply firewall rules for you (Cloud Armor cannot be used, for instance).
The only way I know of to have a single HTTPS load-balancer in GCP terminate SSL and route traffic to multiple services in GKE is to combine the ingresses into a single resource with all paths and certificates defined in one place.
(If anybody figures out a way to do it with multiple separate ingress resources, I'd love to hear it!)
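For reference, the combined-resource workaround looks roughly like this (hosts, service names, and the TLS secret are examples): one Ingress holding every host and path, so the GKE controller provisions a single load balancer.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: combined-ingress
spec:
  tls:
    - hosts:
        - tenanta.example.com
        - tenantb.example.com
      secretName: example-tls      # one certificate secret covering both hosts (assumption)
  rules:
    - host: tenanta.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tenant-a-svc
                port:
                  number: 80
    - host: tenantb.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tenant-b-svc
                port:
                  number: 80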
Yes, it is possible to have a single ingress controller for multiple ingress resources.
You can create multiple ingress resources, one per path requirement, and all of them will be managed by a single ingress controller.
There are also multiple ingress controller options available; you can use NGINX, for example, which will create one LB and manage the paths.
Inside Kubernetes, if you create a service of type LoadBalancer, it will create a new LB resource in GCP, so make sure your microservice services are of type ClusterIP and all your traffic goes into the K8s cluster via ingress paths.
When you set up the ingress controller, it creates one service of type LoadBalancer; you can use that IP in your DNS servers to forward subdomains and paths to the K8s cluster.

Expose pods in AKS to internet with existing setup

We have a request to expose certain pods in an AKS environment to the internet for 3rd party use.
Currently we have a private AKS cluster with a managed standard-SKU load balancer in front, using the advanced Azure networking (basically Calico), where each pod gets its own private IP from the VNet IP space. All private IPs currently route through a firewall via a user-defined route in order to reach the internet, and vice versa. Traffic to and from on-prem routes over a VPN connection through Azure Virtual WAN. I don't want to change any existing routing behavior unless 100% necessary.
My question is: how do you expose specific pods of an existing private AKS cluster to the internet? The entire cluster does not need to be exposed. The issue I foresee is the ephemeral pods and ever-changing IPs, which make simple NATing in the firewalls not an option. I've also thought about simply making a new AKS cluster with a public load balancer. The issue there, though, is security, as traffic must still go through the firewalls, and likely could with the existing user-defined routes.
What is the recommended way to set up an architecture where certain pods in AKS are accessible over the internet, while still allowing those pods to reach the other pods over the private network? I want to avoid exposing all pods to the internet.
There are a couple of options that you can use in order to expose your application outside your network, such as a Service of type:
NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
There is also another option, which is to use an Ingress. IMO this is the best way to expose HTTP applications externally, because it's possible to create rules by path and host, which gives you much more flexibility than Services. Ingress supports only HTTP/HTTPS; if you need TCP, then go with Services.
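For example, a minimal LoadBalancer Service that exposes only the pods matching its selector (names and ports are made up; on AKS you could also add the service.beta.kubernetes.io/azure-load-balancer-internal annotation to keep the LB private):

apiVersion: v1
kind: Service
metadata:
  name: api-external            # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: api                    # only pods with this label are exposed
  ports:
    - port: 443
      targetPort: 8443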
I'd recommend you take a look at these links to understand in depth how Services and Ingress work:
Kubernetes Services
Kubernetes Ingress
NGINX Ingress
AKS network concepts
Deploy the NGINX ingress controller and bind the ingress controller service to a public load balancer. Define Ingress rules for the Kubernetes services that you want to access from the internet. Note that the ingress controller provides the entry point to the services running inside Kubernetes.
Several years later, and I wanted to post an update.
We did successfully implement a scalable ingress option for our private AKS cluster using NGINX as the ingress. The basic flow was:
Public IP > NAT to frontend private IP of NGINX > NGINX path rules that point to your pod/service
Taking as an example the URL www.example.com/service1 for a microservice, the public DNS entry you create is what resolves www.example.com to the public IP that you will NAT to the private IP of NGINX. Then, the rules you create within NGINX take the specific /service1 path of the URL and use it to route to the specific service you pointed it at. It behaves much like URL switching in other load balancers; that is really all NGINX is doing for you. In NGINX ingress syntax, this involves specifying a host name (URL) and an associated rule with a backend path and service name. The service name in this example is service1, and the path is / because service1 sits just behind the root.
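In Kubernetes terms, that rule looks roughly like the following, assuming the ingress-nginx controller and a backend service literally named service1 (this uses the controller's documented capture-group rewrite so /service1/... reaches the service at /):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service1-ingress
  annotations:
    # strip the /service1 prefix before forwarding to the backend
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /service1(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: service1
                port:
                  number: 80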
Something like this saves cost by using fewer public IPs. For example, you can use a subdomain to easily NAT traffic to a separate test environment: www.test.example.com and www.example.com can point to separate public IPs, which you can NAT to separate AKS clusters running NGINX. That way, your NGINX rules can be identical, because NGINX is only looking for /service1, assuming you've mirrored your test and prod environments.
Many ways to do this, but a few recommendations from lessons learned:
use subdomains to break out multiple environments
standardize your NGINX private front-end IP across environments (make them all end in .100, as an example)
create a standard NGINX ingress template where you really only need to modify the serviceName; your hostName should be static within an environment
have your devs include this template and deploy their microservices with Helm, rather than relying on an infrastructure team to update NGINX services; the latter sort of defeats the DevOps mentality and the speed gains

Why do we need a load balancer to expose kubernetes services using ingress?

For a sample microservice-based architecture deployed on Google Kubernetes Engine, I need help validating my understanding:
We know services are supposed to load balance traffic across the pods of a ReplicaSet.
When we create an NGINX ingress controller and ingress definitions to route to each service, a load balancer is also set up automatically.
I had read somewhere that creating an NGINX ingress controller means an NGINX controller (Deployment) and a LoadBalancer-type service get created behind the scenes. I am not sure whether this is true.
It seems load balancing is being done by services, while URL-based routing is being done by the ingress controller.
So why do we need a load balancer? It is not meant to load balance across multiple instances; it will just forward all the traffic to the NGINX reverse proxy, which will route requests based on URL.
Please correct me if my understanding is wrong.
A Service of type LoadBalancer and an Ingress are both ways to reach your application externally, although they work in different ways.
Service:
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a selector (see below for why you might want a Service without a selector).
There are several types of Services, and one of them is the LoadBalancer type, which permits you to expose your application externally by assigning an external IP to the service. A new external IP is assigned for each LoadBalancer service.
The load balancing itself is handled by kube-proxy.
Ingress:
An API object that manages external access to the services in a cluster, typically HTTP.
Ingress may provide load balancing, SSL termination and name-based virtual hosting.
When you set up an ingress (e.g. nginx-ingress), a Service of type LoadBalancer is created for the ingress-controller pods, a load balancer is automatically created in your cloud provider, and a public IP is assigned to the nginx-ingress service.
This load balancer/public IP is used for incoming connections to all your services, and nginx-ingress is responsible for handling the incoming connections.
For example:
Suppose you have 10 services of LoadBalancer type: this results in 10 new public IPs being created, and you need to use the corresponding IP for each service you want to reach.
But if you use an ingress, only 1 IP is created, and the ingress is responsible for routing each incoming connection to the correct service based on the PATH/URL you defined in the ingress configuration. With ingress you can (a sketch combining a few of these follows the list):
Use regex in paths to choose the service to route to;
Use SSL/TLS;
Inject custom headers;
Redirect requests to a default service if one of the services fails (default-backend);
Create whitelists based on IPs;
Etc.
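A sketch combining a few of those features with the nginx-ingress controller (all names, the CIDR, and the TLS secret are examples; the annotations are ingress-nginx specific):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"                      # treat paths as regex
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8"   # IP allowlist (example range)
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls        # TLS certificate secret, assumed to exist
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: api-svc
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port:
                  number: 80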
An important note about load balancing with Ingress:
GCE/AWS load balancers do not provide weights for their target pools. This was not an issue with the old LB kube-proxy rules which would correctly balance across all endpoints.
With the new functionality, the external traffic is not equally load balanced across pods, but rather equally balanced at the node level (because GCE/AWS and other external LB implementations do not have the ability for specifying the weight per node, they balance equally across all target nodes, disregarding the number of pods on each node).
The ingress controller pods (NGINX, for example) need to be exposed outside the Kubernetes cluster as the entry point for all north-south traffic coming into the cluster. One way to do that is via a LoadBalancer. You could use NodePort as well, but it's not recommended for production; or you could deploy the ingress controller directly on the host network of a host with a public IP. Having a load balancer also gives you the ability to spread traffic across multiple replicas of the ingress controller pods.
When you use an ingress controller, traffic comes from the load balancer to the ingress controller and then goes to backend pod IPs based on the rules defined in the ingress resource. This bypasses the Kubernetes service and the load balancing (by kube-proxy, at layer 4) that the service offers. Internally, the ingress controller discovers all the pod IPs from the Kubernetes service's endpoints and routes traffic directly to the pods.
It seems loadbalancing is being done by services. URL based routing is being done by ingress controller.
Services do balance the traffic between pods. But they aren't accessible from outside the cluster in Google Kubernetes Engine by default (ClusterIP type). You can create services of LoadBalancer type, but each service gets its own IP address (Network Load Balancer), so it can get expensive. Also, if you have one application with different services, it's much better to use an Ingress object, which provides a single entry point. When you create an Ingress object, the GKE ingress controller creates a Google Cloud HTTP(S) load balancer (an nginx ingress controller would instead sit behind a single TCP load balancer). An Ingress object, in turn, can be associated with one or more Service objects.
Then you can get the assigned load balancer IP from the Ingress object:
kubectl get ingress ingress-name --output yaml
As a result, your application in the pods becomes accessible from outside the Kubernetes cluster:
LoadBalancerIP/url1 -> service1 -> pods
LoadBalancerIP/url2 -> service2 -> pods
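The relevant part of that command's output looks something like this (the address is an example):

status:
  loadBalancer:
    ingress:
      - ip: 203.0.113.10      # the external IP assigned to the ingress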

What's the exact flow of an outside request coming into a k8s pod via Ingress?

Hi all,
I know the k8s NodePort and ClusterIP service types well.
But I am very confused about the Ingress approach: how does a request from outside actually get to a pod in k8s via Ingress?
Suppose the K8s master IP is 1.2.3.4 and, after the Ingress setup, it can connect to a backend service (e.g. myservice) on a port (e.g. 9000).
Now, how can I reach this myservice:9000 from outside, i.e. through 1.2.3.4? There's no entry port on the 1.2.3.4 machine.
And many docs always say to visit it via 'foo.com' as configured in the ingress YAML file. But that seems like magic, because xxx.com obviously needs DNS; you can't just invent any xxx.com you like, have it become a real website, and map it to your machine!
The key part of the picture is the Ingress Controller. It's an instance of a proxy (could be nginx or haproxy or another ingress type) and runs inside the cluster. It acts as an entrypoint and lets you add more sophisticated routing rules. It reads Ingress Resources that are deployed with apps and which define the routing rules. This allows each app to say what the Ingress Controller needs to do for routing to it.
Because the controller runs inside the cluster, it needs to be exposed to the outside world. You can do this with NodePort, but if you're using a cloud provider then it's more common to use a LoadBalancer. This gives you an external IP and port that reach the Ingress controller, and you can point DNS entries at that. If you do point DNS at it, then you have the option of routing rules based on DNS (such as using different subdomains for different apps).
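Tying this back to the question's example, an Ingress mapping foo.com to myservice:9000 would look roughly like this; for testing without real DNS you can point an /etc/hosts entry (or curl --resolve) at the controller's external IP:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservice-ingress
spec:
  rules:
    - host: foo.com            # only matched when the request carries this Host header
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myservice
                port:
                  number: 9000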
The article 'Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?' has some good explanations and diagrams, including one for Ingress.