Multiple ingress controllers in different zones on bare metal - kubernetes

I have two zones, each with two master nodes. Today I created a simple ingress-nginx controller and successfully pointed the DNS record test.example.com to one of the public IPs in zone-1.
But now I want to create another ingress-nginx controller in zone-2 and point test.example.com to the public IP address of that zone with cloud DNS as well.
What approach should I take? Is there any best practice?

Your question is unclear and would benefit from a minimal reproducible example; you can find the manual here.
According to your title you're running a Kubernetes cluster on bare metal, yet you mention using cloud DNS. Where does the cloud DNS come from?
If you're using pure bare metal, then consider using MetalLB.
MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.
But this approach has a few other limitations one ought to be aware of, one of them is about Ingress status:
Because NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller does not update the status of Ingress objects it manages
$ kubectl get ingress
NAME           HOSTS               ADDRESS   PORTS
test-ingress   myapp.example.com             80
Despite the fact there is no load balancer providing a public IP address to the NGINX Ingress controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service.
Please see the example below:
Given the following 3-node Kubernetes cluster (the external IP is added as an example; in most bare-metal environments this value is <None>):
$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3
one could edit the ingress-nginx Service and add the following field to the object spec
spec:
  externalIPs:
  - 203.0.113.1
  - 203.0.113.2
  - 203.0.113.3
which would in turn be reflected on Ingress objects as follows:
$ kubectl get ingress -o wide
NAME           HOSTS               ADDRESS                               PORTS
test-ingress   myapp.example.com   203.0.113.1,203.0.113.2,203.0.113.3   80
See more detailed info here

Related

Expose service to the world from my bare metal k8s node

I am following this guide to expose a service running on my bare metal k8s cluster to the world.
The guide suggests using metallb for giving external access. The problem is, during the setup process of metallb, I am asked to give a range of available IP addresses.
The hosting provider I am using is very basic, and all I have is the IP address of the Linux instance that is running my K8s node. So my question is, how can I provision an IP address for assigning to my application? Is this possible with a single IP?
Alternatively, I'd love to get this done with a NodePort, but I need to support HTTPS traffic and I am not sure it's possible if I go that way.
Specify a single IP using CIDR notation: your-ip/32 (for example, 192.168.10.0/32).
Single IP Address Load Balancer
Using a single IP address is possible. In this case you don't need the speaker pods that announce the external IP addresses and thus no pod security labels. So, if you install using helm, prepare a metallb-values-helm.yaml file:
speaker:
  enabled: false
Then install metallb:
kubectl create namespace metallb-system
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb --namespace metallb-system -f metallb-values-helm.yaml
Now prepare a configuration of the public IP address in a metallb-config-ipaddress.yaml file:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb-ip
  namespace: metallb-system
spec:
  addresses:
  # The IP address of the kubernetes master node
  - 192.168.178.10/32
And apply:
kubectl apply -f metallb-config-ipaddress.yaml
Multiple Services Sharing the Same IP Address
This should already work for a single service. However, if you want to apply multiple services on different ports of the same IP address, you need to provide an annotation in every service manifest, as described here. A service manifest will look like:
apiVersion: v1
kind: Service
metadata:
  name: cool-service
  namespace: default
  annotations:
    metallb.universe.tf/allow-shared-ip: "key-to-share-192.168.178.10"
...
The string "key-to-share-192.168.178.10" is arbitrary, but it must be the same for all services that share the IP. If there really is just a single IP address in your pool (as specified above), you don't have to select it explicitly with loadBalancerIP: 192.168.178.10; that would only be required if you had multiple IP addresses and wanted to pick one. So that's all.
What's next?
You can also use nginx-ingress as an ingress controller behind your MetalLB load balancer (MetalLB is still required to expose the nginx-ingress service itself). Then you can, for example, separate your services via subdomains (all pointing to the same IP address), like
service1.domain.com
service2.domain.com
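A minimal sketch of such an Ingress, assuming two hypothetical Services named service1 and service2 that each expose port 80 behind nginx-ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: subdomain-routing
spec:
  ingressClassName: nginx
  rules:
  - host: service1.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: service2.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80
Both hostnames resolve (via your DNS) to the single MetalLB IP, and nginx-ingress routes requests by the Host header.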

Clean way to connect to services running on the same host as the Kubernetes cluster

I have a single node Kubernetes cluster, installed using k3s on bare metal. I also run some services on the host itself, outside the Kubernetes cluster. Currently I use the external IP address of the machine (192.168.200.4) to connect to these services from inside the Kubernetes network.
Is there a cleaner way of doing this? What I want to avoid is having to reconfigure my Kubernetes pods if I decide to change the IP address of my host.
Possible magic I wish existed: a Kubernetes Service or IP that automagically points to my external IP (192.168.200.4), or a DNS name that points to the node's external IP address.
That's what ExternalName services are for (https://kubernetes.io/docs/concepts/services-networking/service/#externalname):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: ${my-hostname}
  ports:
  - port: 80
Then you can access the service from within Kubernetes as my-service.${namespace}.svc.cluster.local.
See: https://livebook.manning.com/concept/kubernetes/external-service
After the service is created, pods can connect to the external service through the external-service.default.svc.cluster.local domain name (or even external-service) instead of using the service's actual FQDN. This hides the actual service name and its location from pods consuming the service, allowing you to modify the service definition and point it to a different service any time later, by only changing the externalName attribute or by changing the type back to ClusterIP and creating an Endpoints object for the service—either manually or by specifying a label selector on the service and having it created automatically.
ExternalName services are implemented solely at the DNS level—a simple CNAME DNS record is created for the service. Therefore, clients connecting to the service will connect to the external service directly, bypassing the service proxy completely. For this reason, these types of services don't even get a cluster IP.
This relies on using a resolvable hostname for your machine. On minikube there's a DNS alias host.minikube.internal that is set up to resolve to an IP address that routes to your host machine; I don't know if k3s supports something similar.
Thanks @GeertPt,
With minikube's host.minikube.internal in mind I searched around and found that CoreDNS has a DNS entry for each host it's running on. This only seems to be the case for K3s.
Checking
kubectl -n kube-system get configmap coredns -o yaml
reveals there is the following entry:
NodeHosts: |
  192.168.200.4 my-hostname
So if the hostname doesn't change, I can use this instead of the IP.
Also, if you're running plain docker you can use host.docker.internal to access the host.
So to sum up:
from minikube: host.minikube.internal
from docker: host.docker.internal
from k3s: <hostname>
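If the hostname ever does change, the book excerpt above also mentions another option: a selector-less ClusterIP Service with a manually managed Endpoints object. The host IP then appears in exactly one place, and pods only ever reference the Service name. A minimal sketch (the service name and port are hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: host-postgres
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: host-postgres   # must match the Service name
subsets:
- addresses:
  - ip: 192.168.200.4   # the host's external IP, maintained only here
  ports:
  - port: 5432
Pods connect to host-postgres.default.svc.cluster.local; if the host IP changes, only the Endpoints object needs updating.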

DNS server in kubernetes for translating LAN hosts

I am using a bare-metal cluster of 1 master and 2 nodes on premises in my home lab, with Istio, MetalLB and Calico.
I want to create a DNS server in kubernetes that translates IPs for the hosts on the LAN.
Is it possible to use the coreDNS already installed in k8s?
Yes, it's possible but there are some points to consider when doing that. Most of them are described in the Stackoverflow answer below:
Stackoverflow.com: Questions: How to expose Kubernetes DNS externally
For example: The DNS server would be resolving the queries that are internal to the Kubernetes cluster (like nslookup kubernetes.default.svc.cluster.local).
I've included an example of how you can expose your CoreDNS to external sources and add a Service that points to some IP address.
Steps:
Modify the CoreDNS Service to be available outside.
Modify the configMap of your CoreDNS accordingly to:
CoreDNS.io: Plugins: K8s_external
Create a Service that is pointing to external device.
Test
Modify the CoreDNS Service to be available outside.
You are probably already aware of how Services work and which types can be made available outside the cluster. You will need to change your CoreDNS Service from ClusterIP to either NodePort or LoadBalancer (I'd reckon LoadBalancer is the better idea considering MetalLB is used and you will be accessing the DNS server on port 53):
$ kubectl edit --namespace=kube-system service/coredns (or kube-dns)
A side note!
CoreDNS is using TCP and UDP simultaneously, it could be an issue when creating a LoadBalancer. Here you can find more information on it:
Metallb.universe.tf: Usage (at the bottom)
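A hedged sketch of the workaround described there: since a single Service could not mix TCP and UDP in older Kubernetes/MetalLB versions, you create two LoadBalancer Services that share one IP via MetalLB's allow-shared-ip annotation. The names, namespace and the k8s-app: kube-dns selector below are assumptions based on a standard CoreDNS deployment:
apiVersion: v1
kind: Service
metadata:
  name: coredns-lb-udp
  namespace: kube-system
  annotations:
    metallb.universe.tf/allow-shared-ip: "coredns-external"
spec:
  type: LoadBalancer
  selector:
    k8s-app: kube-dns
  ports:
  - name: dns-udp
    protocol: UDP
    port: 53
    targetPort: 53
---
apiVersion: v1
kind: Service
metadata:
  name: coredns-lb-tcp
  namespace: kube-system
  annotations:
    metallb.universe.tf/allow-shared-ip: "coredns-external"   # same key allows sharing the IP
spec:
  type: LoadBalancer
  selector:
    k8s-app: kube-dns
  ports:
  - name: dns-tcp
    protocol: TCP
    port: 53
    targetPort: 53
MetalLB may colocate Services with the same sharing key on one address; to guarantee it, additionally pin spec.loadBalancerIP on both Services.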
Modify the configMap of your CoreDNS
If you would like to resolve a domain like, for example, example.org, you will need to edit the configMap of CoreDNS in the following way:
$ kubectl edit configmap --namespace=kube-system coredns
Add the line to the Corefile:
k8s_external example.org
This plugin allows an additional zone to resolve the external IP address(es) of a Kubernetes service. This plugin is only useful if the kubernetes plugin is also loaded.
The plugin uses an external zone to resolve in-cluster IP addresses. It only handles queries for A, AAAA and SRV records; all others result in NODATA responses. To make it a proper DNS zone, it handles SOA and NS queries for the apex of the zone.
-- CoreDNS.io: Plugins: K8s_external
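For orientation, the line usually sits alongside the kubernetes plugin inside the cluster zone block of the Corefile. A sketch of what the edited Corefile might look like (your defaults will differ in detail):
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    k8s_external example.org
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}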
Create a Service that is pointing to external device.
Following on the link that I've included, you can now create a Service that will point to an IP address:
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: default
spec:
  clusterIP: None
  externalIPs:
  - 192.168.200.123
  type: ClusterIP
Test
I've used minikube with --driver=docker (with NodePort), but I'd reckon you can use the external IP of your LoadBalancer to check it:
dig @192.168.49.2 test.default.example.org -p 32261 +short
192.168.200.123
where:
@192.168.49.2 - IP address of minikube
test.default.example.org - service-name.namespace.k8s_external_domain
-p 32261 - NodePort port
+short - to limit the output
Additional resources:
Linux.die.net: Man: Dig

Where does a kubernetes ingress get its IP address from?

I have a Typhoon Kubernetes cluster on AWS with an nginx ingress controller installed.
If I add a test ingress object it looks like this:
NAMESPACE   NAME   CLASS    HOSTS   ADDRESS      PORTS   AGE
default     foo    <none>   *       10.0.8.180   80      143m
Question: Where does my ingress controller get that address (10.0.8.180) from?
There is no (LoadBalancer) service on the system with that address.
(Because it is a private address, external-dns does not work correctly.)
In order to answer your first question:
Where does a kubernetes ingress get its IP address from?
we have to dig a bit into the code and its behavior.
It all starts here with publish-service flag:
publishSvc = flags.String("publish-service", "",
`Service fronting the Ingress controller
Takes the form "namespace/name". When used together with update-status, the
controller mirrors the address of this service's endpoints to the load-balancer
status of all Ingress objects it satisfies.`)
The flag variable (publishSvc) is later assigned to another variable (PublishService):
PublishService: *publishSvc,
Later in the code you can find that if this flag is set, this code is being run:
if s.PublishService != "" {
    return statusAddressFromService(s.PublishService, s.Client)
}
The statusAddressFromService function takes the value of the publish-service flag as an argument, queries Kubernetes for the Service with that name, and returns the IP address of that Service.
If this flag is not set, it will instead query Kubernetes for the IP address of the node where the nginx ingress pod is running. This is the behaviour you are experiencing, and it makes me think that you didn't set this flag. This also answers your second question:
Why does it take the address of the node instead of the NLB?
Meanwhile you can find all yaml for all sort of platforms in k8s nginx ingress documentation.
Let's have a look at the AWS ingress YAML.
Notice that publish-service there has the value ingress-nginx/ingress-nginx-controller (<namespace>/<service>), and this is what you also want to do.
TL;DR: All you have to do is create a LoadBalancer Service and set publish-service to <namespace_name>/<service_name>, as sketched below.
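For illustration, this is roughly how the flag appears among the controller Deployment's container args (the image tag is just a placeholder; the namespace/service value matches the upstream AWS manifest mentioned above):
containers:
- name: controller
  image: registry.k8s.io/ingress-nginx/controller:v1.8.1   # placeholder tag
  args:
  - /nginx-ingress-controller
  - --publish-service=ingress-nginx/ingress-nginx-controller
  # other args omitted
With this set, the controller writes the LoadBalancer Service's address into the status of every Ingress it manages, instead of the node IP.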

Is there a way to not use GKE's standard load balancer?

I'm trying to use Kubernetes to make configurations and deployments explicitly defined and I also like Kubernetes' pod scheduling mechanisms. There are (for now) just 2 apps running on 2 replicas on 3 nodes. But Google's Kubernetes Engine's load balancer is extremely expensive for a small app like ours (at least for the moment) at the same time I'm not willing to change to a single instance hosting solution on a container or deploying the app on Docker swarm etc.
Using the node's IP seemed like a hack, and I thought it might expose some security issues inside the cluster. Therefore I configured a Træfik ingress and an ingress controller to overcome Google's expensive flat rate for load balancing, but it turns out an outward-facing ingress spins up a standard load balancer, or I'm missing something.
I hope I'm missing something, since at these rates ($16 a month) I cannot rationalize using Kubernetes for this app from the start.
Is there a way to use GKE without using Google's load balancer?
An Ingress is just a set of rules that tell the cluster how to route to your services, and a Service is another set of rules to reach and load-balance across a set of pods, based on the selector. A service can use 3 different routing types:
ClusterIP - this gives the service an IP that's only available inside the cluster which routes to the pods.
NodePort - this creates a ClusterIP, and then creates an externally reachable port on every single node in the cluster. Traffic to those ports routes to the internal service IP and then to the pods.
LoadBalancer - this creates a ClusterIP, then a NodePort, and then provisions a load balancer from a provider (if available like on GKE). Traffic hits the load balancer, then a port on one of the nodes, then the internal IP, then finally a pod.
These different types of services are not mutually exclusive but actually build on each other, and this explains why anything public must be using a NodePort. Think about it - how else would traffic reach your cluster? A cloud load balancer just directs requests to your nodes and points to one of the NodePort ports. If you don't want a GKE load balancer then you can already skip it and access those ports directly, as sketched below.
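A minimal NodePort Service sketch (the app name, label and node port are hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: yourapp
spec:
  type: NodePort
  selector:
    app: yourapp
  ports:
  - port: 80          # cluster-internal port
    targetPort: 80    # container port
    nodePort: 30080   # reachable on every node, must be within 30000-32767
Traffic sent to <any-node-ip>:30080 then reaches the pods without any cloud load balancer involved.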
The downside is that the ports are limited to the 30000-32767 range. If you need a standard HTTP port like 80/443 then you can't accomplish this with a Service and must instead specify the port directly in your Deployment. Use the hostPort setting to bind the container directly to port 80 on the node:
containers:
- name: yourapp
  image: yourimage
  ports:
  - name: http
    containerPort: 80
    hostPort: 80 ### this will bind to port 80 on the actual node
This might work for you and routes traffic directly to the container without any load-balancing, but if a node has problems or the app stops running on a node then it will be unavailable.
If you still want load-balancing then you can run a DaemonSet (so that it's available on every node) with Nginx (or any other proxy) exposed via hostPort and then that will route to your internal services. An easy way to run this is with the standard nginx-ingress package, but skip creating the LoadBalancer service for it and use the hostPort setting. The Helm chart can be configured for this:
https://github.com/helm/charts/tree/master/stable/nginx-ingress
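A values.yaml sketch for that chart follows; the exact key names vary between chart versions, so treat these as assumptions and check the chart's documented values before installing:
controller:
  kind: DaemonSet        # run one controller pod per node
  daemonset:
    useHostPort: true    # bind ports 80/443 directly on each node
  service:
    type: ClusterIP      # skip provisioning a cloud LoadBalancer
Install it with helm install -f values.yaml against the chart linked above.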
One option is to completely disable this feature on your GKE cluster. When creating the cluster (on console.cloud.google.com) under Add-ons disable HTTP load balancing. If you are using gcloud you can use gcloud beta container clusters create ... --disable-addons=HttpLoadBalancing.
Alternatively, you can also inhibit the GCP Load Balancer by adding an annotation to your Ingress resources, kubernetes.io/ingress.class=somerandomstring.
For newly created ingresses, you can put this in the yaml document:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: somerandomstring
...
If you want to do that for all of your Ingresses you can use this example snippet (be careful!):
kubectl get ingress --all-namespaces \
-o jsonpath='{range .items[*]}{"kubectl annotate ingress -n "}{.metadata.namespace}{" "}{.metadata.name}{" kubernetes.io/ingress.class=somerandomstring\n"}{end}' \
| sh -x
Now using Ingresses is pretty useful with Kubernetes, so I suggest you check out the nginx ingress controller and after deployment, annotate your Ingresses accordingly.
If you specify the Ingress class as an annotation on the Ingress object
kubernetes.io/ingress.class: traefik
Traefik will pick it up while the Google Load Balancer will ignore it. There is also a bit of Traefik documentation on this part.
You could deploy the nginx ingress controller in NodePort mode (e.g. if using the Helm chart, set controller.service.type to NodePort, as sketched below) and then load-balance amongst your instances using DNS. Just make sure you have static IPs for the nodes, or you could even create a DaemonSet that somehow updates your DNS with each node's IP.
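A minimal sketch of that Helm invocation (chart and release names as used elsewhere in this thread; the same key applies to the newer ingress-nginx chart):
helm install nginx-ingress stable/nginx-ingress \
  --set controller.service.type=NodePort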
Traefik seems to support a similar configuration (e.g. through serviceType in its helm chart).