What is the glue between k8s ingress and google load balancers - kubernetes

I am using Kubernetes on Google Container Engine, and I still don't understand how the load balancers are "magically" getting configured when I create or update any of my ingresses.
My understanding was that I needed to deploy a glbc / GCE L7 container, and that container would watch the ingresses and do the job. I've never deployed such a container. So maybe it is part of this glbc cluster add-on, and it works even before I do anything?
Yet, on my cluster, I can see an "l7-default-backend-v1.0" Replication Controller in kube-system, with its pod and NodePort service, and it corresponds to what I see in the LB configs/routes. But I can't find anything like an "l7-lb-controller" that would do the provisioning; no such container exists on the cluster.
So where is the magic? What is the glue between the ingresses and the LB provisioning?

Google Container Engine runs the glbc "glue" on your behalf unless you explicitly request it to be disabled as a cluster add-on (see https://cloud.google.com/container-engine/reference/rest/v1/projects.zones.clusters#HttpLoadBalancing).
Just like you don't see a pod in the system namespace for the scheduler or controller manager (as you would if you deployed Kubernetes yourself), you don't see the glbc controller pod either.
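In other words, on GKE the only object you create yourself is the Ingress, and the hidden glbc controller provisions the GCE L7 load balancer from it. Here is a minimal sketch of such an Ingress, with hypothetical names and the extensions/v1beta1 API of that era; glbc requires the backing service to be of type NodePort:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  backend:
    # Hypothetical NodePort service; glbc routes all traffic to it.
    serviceName: my-nodeport-service
    servicePort: 80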

Related

How to expose app deployed on Rancher k3s to the Internet

I have different deployments in different namespaces and I would like to expose some of them to the Internet, even though I don't have a static, public IP available.
The different services are deployed on Rancher k3s and every service which should be publicly accessible has an Ingress defined in the same namespace.
I was trying to follow Rancher - How to expose my services publicly?, but I didn't really get what I have to do, and moreover:
Why do we need to define a LoadBalancer? It seems to me that the ingress controller used by k3s (Traefik?) already creates one. If this is a must (or a good way to go), how exactly should the service be defined?
I don't have any Rancher UI in my environment. Therefore, is there a way to achieve what is described in that link in a declarative way?
Is there a way to use services like No-IP or FreeDNS for the final hostname?
If I get it right, you deployed Kubernetes manually on bare-metal/VM nodes and now you want to reach your deployments running inside that cluster.
There are two levels of load balancing in this setup: the one managed by your ingress controller, which sounds like Traefik in your case, and a second L4 load balancer that it is recommended to run in front of your workers to reach the ingress pods, which are usually deployed on multiple/all nodes. Traefik, or any other ingress controller, will load-balance traffic inside the k8s cluster without issue even if you don't have an L4 load balancer, but it is not recommended: if you lose the node it runs on, no traffic can reach the Kubernetes cluster anymore. You "just" need to have your DNS resolution pointing at your public IP and routed to one of your workers, or to the LB in front of them. However, if you don't have an L4 LB, you'll need to have your ingress pods listening on ports 80 and/or 443 (see the sketch below).
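One way to get ingress pods listening on 80/443 on every node is a DaemonSet with hostPorts. This is only a sketch with hypothetical names and a stock Traefik image (the ServiceAccount and RBAC it needs are omitted); it is not the Traefik deployment that k3s ships by default:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: ingress-controller
  template:
    metadata:
      labels:
        app: ingress-controller
    spec:
      containers:
      - name: traefik
        image: traefik:v2.10
        args:
        - --entrypoints.web.address=:80
        - --entrypoints.websecure.address=:443
        - --providers.kubernetesingress
        ports:
        # Bind directly on each node so DNS can point at any worker.
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443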
Most things that you do in the Rancher UI are just an easier way to see your k8s objects; all ingress configuration can be achieved via kubectl, k9s (strongly recommend that one!), Lens or other methods. However, k8s objects are still k8s objects. In this case, you need to have your services exposed as ClusterIP, which is then reachable by the ingress pods.
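Declaratively, that boils down to a ClusterIP Service plus an Ingress in the same namespace. A minimal sketch, where all names and the hostname are hypothetical placeholders for your own:
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-namespace
spec:
  type: ClusterIP
  selector:
    app: my-app        # must match your pods' labels
  ports:
  - port: 80
    targetPort: 8080   # the port your container listens on
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-namespace
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80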
I've never used such a solution natively from k8s, but when I had to, the internet router was able to do this part; once you're there, it is internal routing.
I hope this helps. Ingress can definitely be a tough one to grasp!

Is it possible or right to configure Nginx Ingress Controller on Openshift cluster?

I have a Kubernetes cluster setup available on my system. I have verified that my definitions for the Ingress controller and Ingress resources work on it, as I am able to invoke services inside my K8s cluster from outside the cluster.
Now, I have to move all my resources to an OpenShift cluster. I just deployed my definitions of Ingress resources from K8s to OpenShift, but those are not working, as I am not able to access services from outside the cluster. Note that I didn't deploy an Ingress controller on OpenShift, which I did on K8s.
The problem is, I don't want to use OpenShift Route as an Ingress alternative. So, in order to get my Ingress resources working on the OpenShift (OC) cluster, what am I supposed to do? Shall I install an Ingress controller on OpenShift like I did on the K8s cluster?
I don't want to use any OpenShift-specific resources on the OpenShift cluster; I just want to utilize its embedded Kubernetes.
Kindly suggest.

How to talk to Kubernetes CRD service within a pod in the same k8s cluster?

I installed the Spark on K8s operator in my K8s cluster and I have an app running within the cluster. I'd like to enable this app to talk to the SparkApplication CRD service. What would be the endpoint I should use? (Or what's the K8s endpoint within a K8s cluster?)
It's clearly documented here. So basically, it creates a NodePort type of service. It also specifies that it could create an Ingress to access the UI. For example:
...
status:
  sparkApplicationId: spark-5f4ba921c85ff3f1cb04bef324f9154c9
  applicationState:
    state: COMPLETED
  completionTime: 2018-02-20T23:33:55Z
  driverInfo:
    podName: spark-pi-83ba921c85ff3f1cb04bef324f9154c9-driver
    webUIAddress: 35.192.234.248:31064
    webUIPort: 31064
    webUIServiceName: spark-pi-2402118027-ui-svc
    webUIIngressName: spark-pi-ui-ingress
    webUIIngressAddress: spark-pi.ingress.cluster.com
In this case, you could use 35.192.234.248:31064 to access your UI. Internally within the K8s cluster, you could use spark-pi-2402118027-ui-svc.<namespace>.svc.cluster.local or simply spark-pi-2402118027-ui-svc if you are within the same namespace.
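As a quick check from inside the cluster, something like this should reach the UI; the namespace (default) and the service port (4040, the usual Spark UI port) are assumptions, so adjust them to your deployment:
kubectl run tmp --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://spark-pi-2402118027-ui-svc.default.svc.cluster.local:4040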

Multiple Host Kubernetes Ingress Controller

I've been studying Kubernetes for a few weeks now, and using the kube-lego NGINX examples (https://github.com/jetstack/kube-lego) have successfully deployed services to Kubernetes cluster using Rancher on DigitalOcean.
I've deployed sample static sites, Wordpress, Laravel, Craft CMS, etc., all of which use custom Namespaces, Deployments, Secrets, Containers with external registries, Services, and Ingress definitions.
Using the example (lego) NGINX Ingress Controller setup, I'm able to apply DNS to the exposed IP address of my K8s cluster, and have the resulting sites appear.
What I don't know, though, is how to allow for multiple hosts to have Ingress Controllers service the same deployments, and thus provide HA Ingress to the cluster. (by applying an external load balancer service, or geo-ip, or what-have-you).
Rancher (stable) allows me to add multiple hosts, I've spun up 3 to 5 at a time, and Kubernetes is configured and deployed across all Hosts. Furthermore, I'll define many replicas and/or deployments (listed above) and they will be spread over the cluster and accessible as would be expected. I've even specified multiple replicas of the Ingress Controller, but of course they all get scheduled on the same host, giving me only one IP address of Ingress.
So how do I allow multiple hosts (each with their own public facing IP address) to allow ingress into the cluster? I've also read about setting up multiple Ingress Controllers, but then you must specify what deployment/services are being serviced by what Ingress Controller, which then totally defeats the purpose.
Maybe I'm missing something, but if K8s multi-host is supposed to provide HA, and the Host with the Ingress Controller goes down, then the service will be rescheduled on the other Hosts, but the IP address that everything is pointing to will be dead, and thus an outage. Any way to have multiple IP Addresses to the same set of deployment/services?
I investigated my setup a bit more today, and I think I found out why I was having difficulty. The "LoadBalancer" type is often mentioned as being for use with cloud providers (both in the docs and in what @fiunchinho describes). I was using it with a Rancher setup, which auto-creates an HA-Proxy load balancer ingress for you on the hosts.
By default, it will just be scheduled on one of the hosts. You can specify that you want it scheduled globally by providing an annotation of io.rancher.scheduler.global: "true".
Like so:
annotations:
  # Create load balancers on every host in the environment
  io.rancher.scheduler.global: "true"
http://rancher.com/docs/rancher/v1.6/en/rancher-services/load-balancer/
I preferred LoadBalancer over NodePort because I wanted the ability to send port 80 (and in the future port 443) to any of the Nodes, and have them successfully fulfil my request by inspecting the Host header, and directing as-needed.
These LBs can also be setup in the Rancher UI under the "Infrastructure Stack" menu. I have successfully removed the single LB, and re-added one with an "Always run one instance of this container on every host" option enabled.
After this was configured, I could make a request to any of the Hosts for any of the Ingresses, and get a response, no matter what host the container was scheduled on.
So cool!
The ingress controller is deployed like any regular pod. That means that you can have as many replicas as you'd like, which will be spread among all your nodes.
You need a Service object that groups all the pods for the ingress controller.
Then you just need to expose that Service outside the cluster. You can do that using a LoadBalancer service if you are on a cloud provider, or you can just use a NodePort service.
The point is that the service will balance the traffic that your ingress controller receives between all the pods that are running on different kubernetes nodes. If one of the nodes goes down, it doesn't really matter, because there are other nodes containing ingress controller pods.
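A minimal sketch of such a Service for an nginx ingress controller; the label selector and node ports are assumptions that must match your own controller deployment:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  selector:
    app: nginx-ingress   # must match the ingress controller pods' labels
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443
With this in place, every node answers on ports 30080/30443 and kube-proxy balances the traffic across all ingress controller replicas, so any node's public IP can serve as an entry point.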

Kubernetes External Load Balancer Service on DigitalOcean

I'm building a container cluster using CoreOS and Kubernetes on DigitalOcean, and I've seen that in order to expose a Pod to the world you have to create a Service with Type: LoadBalancer. I think this is the optimal solution so that you don't need to add an external load balancer outside Kubernetes, like nginx or haproxy. I was wondering if it is possible to create this using DO's Floating IPs.
Things have changed: DigitalOcean created their own cloud provider implementation, as answered here, and they are maintaining a Kubernetes "Cloud Controller Manager" implementation:
Kubernetes Cloud Controller Manager for DigitalOcean
Currently digitalocean-cloud-controller-manager implements:
nodecontroller - updates nodes with cloud provider specific labels and addresses, also deletes kubernetes nodes when deleted on the cloud provider.
servicecontroller - responsible for creating LoadBalancers when a service of Type: LoadBalancer is created in Kubernetes.
To try it out, clone the project on your master node.
Next, get an API token from https://cloud.digitalocean.com/settings/api/tokens and run:
export DIGITALOCEAN_ACCESS_TOKEN=abc123abc123abc123
scripts/generate-secret.sh
kubectl apply -f do-cloud-controller-manager/releases/v0.1.6.yml
There are more examples here.
What will happen once you do the above? DO's cloud manager will create a load balancer (which has a failover mechanism out of the box; more on that in the load balancer's documentation).
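From there, exposing a workload is just a regular Service of type LoadBalancer; a minimal sketch with hypothetical names that the cloud controller manager would act on:
apiVersion: v1
kind: Service
metadata:
  name: http-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app      # hypothetical label; match your pods
  ports:
  - port: 80
    targetPort: 8080 # the port your container listens on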
Things will change again soon, as DigitalOcean is jumping on the Kubernetes bandwagon; check here, and you will have the choice to let them manage your Kubernetes cluster instead of worrying about a lot of the infrastructure yourself (this is my understanding of the service; let's see how it works when it becomes available...).
The LoadBalancer type of service is implemented by adding code to the kubernetes master specific to each cloud provider. There isn't a cloud provider for Digital Ocean (supported cloud providers), so the LoadBalancer type will not be able to take advantage of Digital Ocean's Floating IPs.
Instead, you should consider using a NodePort service or attaching an ExternalIP to your service and mapping the exposed IP to a DO floating IP.
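As a sketch of that suggestion, with hypothetical names, where 10.x.x.x stands for the node IP that your DO floating IP maps to:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: NodePort
  selector:
    app: my-nginx   # hypothetical label; match your pods
  externalIPs:
  - 10.x.x.x        # the node address the floating IP forwards to
  ports:
  - port: 80
    targetPort: 80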
It is actually possible to expose a service through a floating IP. The only catch is that the external IP you need to use is a little unintuitive.
From what it seems, DO has some sort of overlay network for their Floating IP service. To get the actual IP you need to expose, you have to SSH into your gateway droplet and find its anchor IP by hitting the metadata service:
curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
and you will get something like
10.x.x.x
This is the address that you can use as the external IP in a LoadBalancer-type service in Kubernetes.
Example:
kubectl expose rc my-nginx --port=80 --public-ip=10.x.x.x --type=LoadBalancer