Is there a way to not use GKE's standard load balancer? - kubernetes

I'm trying to use Kubernetes to make configurations and deployments explicitly defined, and I also like Kubernetes' pod scheduling mechanisms. There are (for now) just 2 apps running as 2 replicas each on 3 nodes. But Google Kubernetes Engine's load balancer is extremely expensive for a small app like ours (at least for the moment), and at the same time I'm not willing to switch to a single-instance hosting solution on a container, or to deploying the app on Docker Swarm, etc.
Using the nodes' IPs seemed like a hack, and I thought it might expose some security issues inside the cluster. Therefore I configured a Træfik ingress and an ingress controller to get around Google's expensive flat rate for load balancing, but it turns out an outward-facing ingress spins up a standard load balancer, or I'm missing something.
I hope I'm missing something, since at these rates ($16 a month) I cannot rationalize using Kubernetes for this app from the start.
Is there a way to use GKE without using Google's load balancer?

An Ingress is just a set of rules that tell the cluster how to route to your services, and a Service is another set of rules to reach and load-balance across a set of pods, based on the selector. A service can use 3 different routing types:
ClusterIP - this gives the service an IP that's only available inside the cluster which routes to the pods.
NodePort - this creates a ClusterIP, and then creates an externally reachable port on every single node in the cluster. Traffic to those ports routes to the internal service IP and then to the pods.
LoadBalancer - this creates a ClusterIP, then a NodePort, and then provisions a load balancer from a provider (if available like on GKE). Traffic hits the load balancer, then a port on one of the nodes, then the internal IP, then finally a pod.
These different types of services are not mutually exclusive but actually build on each other, which explains why anything public must use a NodePort. Think about it - how else would traffic reach your cluster? A cloud load balancer just directs requests to your nodes and points at one of the NodePort ports. If you don't want a GKE load balancer then you can already skip it and access those ports directly.
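For illustration, here is a minimal sketch of a NodePort Service - the name, selector, and port numbers are assumptions, not from the question:

apiVersion: v1
kind: Service
metadata:
  name: yourapp
spec:
  type: NodePort
  selector:
    app: yourapp        # must match your pods' labels
  ports:
    - port: 80          # ClusterIP port inside the cluster
      targetPort: 80    # container port on the pods
      nodePort: 30080   # reachable on every node's IP

Traffic to any-node-ip:30080 then reaches the pods without a cloud load balancer.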
The downside is that those ports are limited to the 30000-32767 range. If you need a standard HTTP port like 80/443, you can't accomplish this with a Service and instead must specify the port directly in your Deployment. Use the hostPort setting to bind the containers directly to port 80 on the node:
containers:
  - name: yourapp
    image: yourimage
    ports:
      - name: http
        containerPort: 80
        hostPort: 80  # this will bind to port 80 on the actual node
This might work for you, and it routes traffic directly to the container without any load balancing, but if a node has problems or the app stops running on a node, then it will be unavailable.
If you still want load balancing, you can run a DaemonSet (so that it's available on every node) with Nginx (or any other proxy) exposed via hostPort, and have it route to your internal services. An easy way to run this is with the standard nginx-ingress package, but skip creating the LoadBalancer service for it and use the hostPort setting instead. The Helm chart can be configured for this:
https://github.com/helm/charts/tree/master/stable/nginx-ingress
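As a rough sketch, the relevant Helm values might look like this (value names follow the stable chart at the time; verify them against its values.yaml):

controller:
  kind: DaemonSet       # one controller pod per node
  daemonset:
    useHostPort: true   # bind ports 80/443 directly on each node
  service:
    type: ClusterIP     # avoid provisioning a cloud LoadBalancer

With that in place, DNS records pointing at your nodes' IPs replace the cloud load balancer.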

One option is to completely disable this feature on your GKE cluster. When creating the cluster (on console.cloud.google.com) under Add-ons disable HTTP load balancing. If you are using gcloud you can use gcloud beta container clusters create ... --disable-addons=HttpLoadBalancing.
Alternatively, you can also inhibit the GCP Load Balancer by adding an annotation to your Ingress resources, kubernetes.io/ingress.class=somerandomstring.
For newly created ingresses, you can put this in the yaml document:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: somerandomstring
...
If you want to do that for all of your Ingresses you can use this example snippet (be careful!):
kubectl get ingress --all-namespaces \
-o jsonpath='{range .items[*]}{"kubectl annotate ingress -n "}{.metadata.namespace}{" "}{.metadata.name}{" kubernetes.io/ingress.class=somerandomstring\n"}{end}' \
| sh -x
Ingresses are pretty useful with Kubernetes, so I suggest you check out the nginx ingress controller and, after deploying it, annotate your Ingresses accordingly.

If you specify the Ingress class as an annotation on the Ingress object
kubernetes.io/ingress.class: traefik
Traefik will pick it up, while the Google load balancer will ignore it. There is also a bit of Traefik documentation on this.
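A minimal sketch of such an Ingress (the name, host, and service details are illustrative assumptions):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: traefik  # handled by Traefik, ignored by GCE
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - backend:
              serviceName: myapp   # your existing Service
              servicePort: 80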

You could deploy the nginx ingress controller in NodePort mode (e.g. if using the Helm chart, set controller.service.type to NodePort) and then load-balance amongst your instances using DNS. Just make sure you have static IPs for the nodes, or you could even create a DaemonSet that somehow updates your DNS with each node's IP.
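For instance, a rough sketch of the Helm values (value names follow the stable nginx-ingress chart; the explicit port numbers are assumptions - omit them to let Kubernetes pick):

controller:
  service:
    type: NodePort
    nodePorts:
      http: 30080    # illustrative fixed port
      https: 30443   # illustrative fixed port

Your DNS A records would then point at the nodes' static IPs.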
Traefik seems to support a similar configuration (e.g. through serviceType in its helm chart).

Related

Exposing a Service from a bare-metal (kubeadm) Kubernetes cluster to the outside world

Exposing a Service from a bare-metal (kubeadm) Kubernetes cluster to the outside world: I am trying to access my NGINX service from outside the cluster, to see the NGINX output in a web browser.
For that, I have created a deployment and a service for NGINX.
From my research, I found the following options to expose a service to the outside world:
MetalLB
Ingress NGINX
Some Helm resources
I would like to learn about all three of these approaches (and any others), as that will help me learn new things.
GOAL
Exposing a Service from a bare-metal (kubeadm-built) Kubernetes cluster to the outside world.
How can I give my service its own public IP so that it can be accessed from outside the cluster?
You need to set up MetalLB to get an external IP address for LoadBalancer-type services. It will give the service an IP address from your local network.
Then you can set up port mapping (configured in your router) of incoming traffic on ports 80 and 443 to your service's external IP address.
I have done a similar setup; you can check it out in detail here:
https://developerdiary.me/lets-build-low-budget-aws-at-home/
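For reference, a minimal sketch of a MetalLB layer-2 address pool (this uses the older ConfigMap format - newer MetalLB releases use IPAddressPool custom resources - and the address range is an assumption for your LAN):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 192.168.1.240-192.168.1.250   # free IPs on the local network

LoadBalancer services then receive IPs from this range, and the router forwards ports 80/443 to those IPs.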
You need to deploy an ingress controller in your cluster so that you have an entry point through which your applications can be accessed. Traditionally, in a cloud-native environment, it would automatically provision a LoadBalancer for you that reads the rules you define inside your Ingress objects and routes each request to the appropriate service.
One of the most commonly used ingress controllers is the Nginx Ingress Controller. There are multiple ways to deploy it (manifests, Helm, operators). For bare-metal clusters there are several considerations, which you can read about here.
MetalLB is still in beta, so it's your choice whether you want to use it. If you don't have a hard requirement to expose the ingress controller as a LoadBalancer, you can expose it as a NodePort Service, which will be accessible on all the nodes in the cluster. You can then map that NodePort Service in your DNS so that the ingress rules are evaluated.

Kubernetes (EKS) Design confusions

I am a bit new to Kubernetes and I am working with EKS.
I have two main apps, each with a number of pods, and I have set up an ELB for external access.
I also have a small app with just 1-2 pods. I don't want to set up an ELB just for this small app. I looked at NodePort, but in that case I can't use the default HTTPS port 443.
So I feel the best thing in this case would be to move the small app outside the cluster, perhaps onto an EC2 instance. Or is there some other way to expose the small app while keeping it inside the cluster itself?
You can try using the host's (node's) network via hostPort (not recommended for production use in Kubernetes):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          hostPort: 443
The hostPort feature allows to expose a single container port on the host IP. Using the hostPort to expose an application to the outside of the Kubernetes cluster has the same drawbacks as the hostNetwork approach discussed in the previous section. The host IP can change when the container is restarted, two containers using the same hostPort cannot be scheduled on the same node and the usage of the hostPort is considered a privileged operation on OpenShift.
Extra
I don't want to set up an ELB just for this small app.
Ideally, you would use Deployments together with an Ingress and an ingress controller. That way there is a single ELB for the whole EKS cluster, and all services go through that single entry point.
All pods and deployments can run in a single cluster if you want, with the single ingress entry point handling the traffic into the EKS cluster.
https://kubernetes.io/docs/concepts/services-networking/ingress/
You can read this article on how to set up ingress on AWS EKS to get an idea.
You can use different domains for exposing different services.
Example:
https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/
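As a rough sketch, such a multi-domain Ingress might look like this (hosts, TLS secret, and service names are illustrative assumptions):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apps
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - app1.example.com
        - app2.example.com
      secretName: apps-tls        # a pre-created TLS secret
  rules:
    - host: app1.example.com
      http:
        paths:
          - backend:
              serviceName: app1
              servicePort: 80
    - host: app2.example.com
      http:
        paths:
          - backend:
              serviceName: app2
              servicePort: 80

Both apps (including the small one) then share one ELB, each reachable under its own domain.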

How do Traefik / Nginx (ingress controllers) forward requests to two different services configured with the same port number?

Basically I have the following HDFS cluster setup using docker-compose:
Node 1 with IP 192.168.1.1 has these services deployed:
Namenode1:9000
HMaster1:8300
ZooKeeper1:1291
Node 2 with IP 192.168.1.2 has these services deployed:
Namenode2:9000
ZooKeeper2:1291
How do Traefik / Nginx (ingress controllers) forward requests to two different services configured with the same port number?
There are several great tutorials on how ingress and load balancing work in Kubernetes, e.g. this one by Mark Betz. As a general rule, it helps to think in terms of services and workloads instead of the specific nodes your workloads are running on.
A workload deployed in Kubernetes (a so-called Pod) has its own internal IP address. That pod can have one or more ports open, just on that pod-owned IP address.
If you now have several pods to distribute the load, e.g. 5 web server processes or backend-logic replicas, it would be hard for a client (inside the cluster) to keep track of all those pod IPs, because they change whenever a pod is updated or restarted after a crash. This is why Kubernetes has the concept of Services. A Service provides a stable DNS name and IP (the ClusterIP) which transparently "forwards" to one of the healthy pods. So your client only needs to know the DNS name and does not have to keep track of the specific pod IPs.
If you now want to expose such a service to the public, there are different ways. Either you set your service to type: LoadBalancer, which sets up load balancer infrastructure at your cloud provider and routes traffic to the nodes and then to the pods - or - you already have an ingress controller in place and just define the routing based on host names and paths. An ingress controller itself is such a load-balanced service with an attached cloud load balancer, and it also has some pods (running e.g. a traefik or nginx container) which then route your packets accordingly.
So coming back to your initial question: if you want to expose a service with several pods of the same kind, you would first create a Service resource that matches your pods using the selector, and then create one single Ingress resource that provides a hostname/path and references this service (see the sketch below). The ingress controller picks up those Ingress resources and configures the traefik or nginx instance accordingly. The ingress controller doesn't really care about the host IPs and port numbers, because it acts on the internal Kubernetes ClusterIPs, so you don't even need (and shouldn't) expose such a service directly when you have an ingress in place.
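A minimal sketch of that idea for HTTP traffic (all names, hosts, and label selectors here are illustrative assumptions): two Services can both use port 9000 because each gets its own ClusterIP, and the Ingress distinguishes them by hostname:

apiVersion: v1
kind: Service
metadata:
  name: namenode1
spec:
  selector:
    app: namenode1      # matches the pods of the first workload
  ports:
    - port: 9000
      targetPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: namenode2
spec:
  selector:
    app: namenode2      # matches the pods of the second workload
  ports:
    - port: 9000
      targetPort: 9000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: namenodes
spec:
  rules:
    - host: nn1.example.com
      http:
        paths:
          - backend:
              serviceName: namenode1
              servicePort: 9000
    - host: nn2.example.com
      http:
        paths:
          - backend:
              serviceName: namenode2
              servicePort: 9000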
I hope this answers your question regarding exposing two workloads over an ingress controller. For details, check the Kubernetes docs on Ingresses. Based on the services you named (ZooKeeper, HDFS), load balancing and ingresses might not be what you need for this case. ZooKeeper instances should be internal in most cases and need to be addressed individually, so you might want to check out headless services for this use case. Also check the Kubernetes docs for a way to run ZooKeeper.
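For completeness, a minimal sketch of such a headless Service (the name and labels are assumptions; the port is the ZooKeeper client port from the question's setup): setting clusterIP: None gives each matching pod its own DNS record instead of a single load-balanced IP.

apiVersion: v1
kind: Service
metadata:
  name: zookeeper-headless
spec:
  clusterIP: None         # headless: no single virtual IP
  selector:
    app: zookeeper
  ports:
    - name: client
      port: 1291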

Why do we need a load balancer to expose kubernetes services using ingress?

For a sample microservice-based architecture deployed on Google Kubernetes Engine, I need help validating my understanding:
We know services are supposed to load-balance traffic across a pod ReplicaSet.
When we create an nginx ingress controller and ingress definitions to route to each service, a load balancer is also set up automatically.
I had read somewhere that creating an nginx ingress controller means an nginx controller (Deployment) and a LoadBalancer-type service get created behind the scenes. I am not sure if this is true.
It seems load balancing is being done by services, while URL-based routing is done by the ingress controller. So why do we need a load balancer? It is not meant to load-balance across multiple instances; it will just forward all the traffic to the nginx reverse proxy, which will route requests based on the URL.
Please correct me if my understanding is wrong.
A Service of type LoadBalancer and an Ingress are both ways to reach your application externally, although they work in different ways.
Service:
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a selector (see below for why you might want a Service without a selector).
There are several types of Services; one of them is the LoadBalancer type, which permits you to expose your application externally by assigning an external IP to your service. A new external IP is assigned for each LoadBalancer service.
The load balancing itself is handled by kube-proxy.
Ingress:
An API object that manages external access to the services in a cluster, typically HTTP.
Ingress may provide load balancing, SSL termination and name-based virtual hosting.
When you set up an ingress (e.g. nginx-ingress), a Service of type LoadBalancer is created for the ingress-controller pods, a load balancer is automatically created at your cloud provider, and a public IP is assigned to the nginx-ingress service.
This load balancer/public IP is used for incoming connections to all your services, and nginx-ingress is responsible for handling those incoming connections.
For example:
Suppose you have 10 services of LoadBalancer type: this results in 10 new public IPs, and you need to use the corresponding IP for each service you want to reach.
But if you use an ingress, only 1 IP is created, and the ingress is responsible for routing each incoming connection to the correct service based on the PATH/URL you defined in the ingress configuration (see the sketch after this list). With ingress you can:
Use regexes in paths to choose which service to route to;
Use SSL/TLS;
Inject custom headers;
Fall back to a default service when no rule matches (default-backend);
Create whitelists based on IPs;
Etc...
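A rough sketch of such a single-IP fanout (service names, paths, and the ingress class are illustrative; the annotations follow nginx-ingress conventions):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /  # strip the matched prefix
spec:
  backend:                      # default backend for unmatched requests
    serviceName: default-svc
    servicePort: 80
  rules:
    - http:
        paths:
          - path: /app1
            backend:
              serviceName: service1
              servicePort: 80
          - path: /app2
            backend:
              serviceName: service2
              servicePort: 80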
An important note about load balancing with Ingress:
GCE/AWS load balancers do not provide weights for their target pools. This was not an issue with the old LB kube-proxy rules which would correctly balance across all endpoints.
With the new functionality, the external traffic is not equally load balanced across pods, but rather equally balanced at the node level (because GCE/AWS and other external LB implementations do not have the ability for specifying the weight per node, they balance equally across all target nodes, disregarding the number of pods on each node).
The pods of an ingress controller (nginx, for example) need to be exposed outside the Kubernetes cluster as the entry point of all north-south traffic coming into the cluster. One way to do that is via a LoadBalancer. You could use NodePort as well, but it's not recommended for production, or you could deploy the ingress controller directly on the host network on a host with a public IP. Having a load balancer also gives you the ability to spread the traffic across multiple replicas of the ingress controller pods.
When you use an ingress controller, the traffic comes from the load balancer to the ingress controller and then goes to backend pod IPs based on the rules defined in the Ingress resource. This bypasses the Kubernetes service and the load balancing (by kube-proxy at layer 4) that the service offers. Internally, the ingress controller discovers all the pod IPs from the Kubernetes service's endpoints and routes traffic directly to the pods.
It seems load balancing is being done by services. URL-based routing is being done by the ingress controller.
Services do balance the traffic between pods, but they aren't accessible from outside the cluster in Google Kubernetes Engine by default (ClusterIP type). You can create services of LoadBalancer type, but each service gets its own IP address (Network Load Balancer), so it can get expensive. Also, if you have one application with several services, it's much better to use an Ingress object, which provides a single entry point. When you create an Ingress object, the ingress controller (e.g. the nginx one) creates a Google Cloud HTTP(S) load balancer. An Ingress object, in turn, can be associated with one or more Service objects.
Then you can get the assigned load balancer IP from ingress object:
kubectl get ingress ingress-name --output yaml
As a result, the applications in your pods become accessible from outside the Kubernetes cluster:
LoadBalancerIP/url1 -> service1 -> pods
LoadBalancerIP/url2 -> service2 -> pods

How to make an HTTP request from a K8 pod to a NodePort service in the same cluster

I need a service in a K8s pod to be able to make HTTP calls to downstream services, load-balanced by a NodePort, within the same cluster and namespace.
My constraints are these:
I can do this only through manipulation of Deployment and Service entities (no Ingress - I don't have that level of access to the cluster)
I cannot add any K8s plugins
The port that the NodePort exposes must be randomized, not hard-coded
This whole thing must be automated. I can't set the deployment with the literal value of the exposed port; it needs to be set by some sort of variable or similar process.
Is this possible, and, if so, how?
It probably can be done, but it will not be straightforward and you might have to add some custom automation. A NodePort service is meant to be used by entities outside your cluster.
For intra-cluster communication, a regular Service (with a ClusterIP) works as designed. Your service can reach another service using DNS service discovery. For example, svc-name.mynamespace.svc.cluster.local would be the DNS entry for a service svc-name in the mynamespace namespace.
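As a minimal sketch (the service name, namespace, labels, and port are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: svc-name
  namespace: mynamespace
spec:
  selector:
    app: downstream-app   # must match the downstream pods' labels
  ports:
    - port: 8080          # port clients connect to
      targetPort: 8080    # port the pods listen on

A pod in the same namespace can then simply call http://svc-name:8080, with no NodePort involved.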
If you can only use a NodePort, which is essentially a port on your K8s nodes, you could create another Deployment or Pod running something like nginx or haproxy. Then front this deployment with a regular K8s Service with a ClusterIP, and have nginx or haproxy forward to the NodePort on all the nodes in your Kubernetes cluster. Also, configure it so that it only forwards to listening NodePorts, using some kind of health check.
The above seems like an unnecessary extra step, but if NodePort from within the cluster is what you need (for some reason), it should do the trick.