Traefik load balancer via helm chart does not route any traffic - kubernetes

I was trying to set up a Traefik load balancer as an alternative to nginx-ingress. I used the Helm chart from https://github.com/helm/charts/tree/master/stable/traefik and installed it on my GKE cluster with RBAC enabled, since I use Kubernetes v1.12:
helm install --name traefik-lb --namespace kube-system --set rbac.enabled=true stable/traefik
My test application's ingress.yaml points to the new ingress class now:
kubernetes.io/ingress.class: "traefik"
What I've seen in the logs is that Traefik reloads its config all the time. I would also like to know if Traefik definitely needs a TLS cert to "just" route traffic.

What I've seen in the logs is that traefik reloads its config all the time.
It should reload every time the Ingress resources associated with it (the Traefik ingress controller) change. If it reloads constantly without any change to your cluster, there may be an issue with Traefik itself or with the way your cluster is set up.
I would also like to know if traefik definitely needs a TLS cert to "just" route traffic.
No, it doesn't. This basic example from the documentation shows that you don't need TLS if you don't want to set it up.
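For illustration, a minimal HTTP-only Ingress for Traefik could look like the sketch below (the names, host and port are placeholders, not taken from the question; the extensions/v1beta1 API matches a v1.12-era cluster):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app                          # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - host: myapp.example.com             # hypothetical host
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-svc       # hypothetical Service
          servicePort: 80
# No "tls:" section is required; Traefik will route plain HTTP.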

Related

Migrate deployments and services from stable/nginx-ingress to kubernetes/ingress-nginx

I'm trying to migrate our ingress controllers from the old stable/nginx-ingress to the newer kubernetes/ingress-nginx.
I have followed their instructions for zero-downtime deployments.
Create a second nginx-controller with the kubernetes/ingress-nginx helm chart.
The ingressClassName has to be different from the original (see the values sketch after these steps).
original ingressClassName: nginx
new ingressClassName: nginx2
Update DNS to point to the new nginx2 ELB.
Get rid of the old nginx-controller.
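For reference, the second controller's class is typically set through chart values along these lines (a sketch; the file name is hypothetical and the exact keys depend on your ingress-nginx chart version):

# values-nginx2.yaml (hypothetical file name)
controller:
  ingressClass: nginx2            # class watched by the new controller
  ingressClassResource:
    name: nginx2                  # IngressClass resource, on newer chart/cluster versions

The new controller is installed with these values while the old one keeps watching class nginx.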
This is all great, but all of our services/deployments are attached to ingressClassName: nginx. We can update the DNS, but then the services attached to the old class won't receive traffic. We can update the services at the same time, but they update at different times, which will cause some kind of outage during the switch.
All of the research I have done seems to stop at the controller level. It doesn't go deeper and explain how to keep all the services connected during the switch.
How can I get both nginx controllers to route traffic to the application at the same time? I have not been able to get that to happen at the service or nginx controller level.
Or maybe I'm thinking about it incorrectly, and it can work in a different way.
Thanks.
There are multiple methods; below is one using Istio, with documentation for alternative methods linked at the end for reference.
You can avoid downtime during the migration by splitting the traffic. There are a few traffic-splitting tools; with Istio installed, Kubernetes gains a built-in traffic-splitting capability that lets you direct a percentage of traffic to the new ingress controller while keeping the rest on the old one.
Install Istio in your cluster and configure your ingress resources to use the Istio gateway instead of the ingress controllers directly.
Install Istio in your cluster.
Configure your ingress resources to use the Istio gateway.
Create a VirtualService for your ingress resources and gradually increase the share of traffic going to the new ingress controller (see the sketch below); also make sure to update your DNS records to point to the new ingress controller's IP.
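A rough sketch of such a weighted VirtualService (the host and destination service names are placeholders, not taken from the question):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ingress-migration                     # hypothetical name
spec:
  hosts:
  - myapp.example.com                         # hypothetical host
  http:
  - route:
    - destination:
        host: old-nginx-ingress-controller    # Service of the old controller
      weight: 80
    - destination:
        host: new-ingress-nginx-controller    # Service of the new controller
      weight: 20                              # shift gradually towards 100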
For reference and further information, please check the official Istio page and the Istio Service Mesh Workshop.
For alternative methods, please refer to the options below:
Canary Deployments and Type LoadBalancer

Kubernetes nginx ingress controller not working

I'm using minikube to have a local Kubernetes cluster.
I enabled the ingress addon with the following command :
minikube addons enable ingress
I have created an application that is currently running in my cluster, along with a Service tied to it, plus an Ingress resource where mydomain.name forwards traffic to that Service.
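The Ingress looks roughly like this (the resource and Service names and the port below are placeholders, not taken from the question):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                # hypothetical name
spec:
  rules:
  - host: mydomain.name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-svc  # hypothetical Service
            port:
              number: 80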
The issue that I have is that this will not work unless I add an entry like the following one to the /etc/hosts file:
<kubernetes_cluster_ip_address> mydomain.name
I thought that having the ingress controller with an ingress resource would handle the DNS resolution, and that I wouldn't need to add any entry to the /etc/hosts file, but maybe I am wrong.
Thank you for your help

List all available ingress controllers on Kubernetes

I have a GKE cluster with Traefik being used as an ingress controller.
I want to create a GKE ingress, but I can't find anywhere which kubernetes.io/ingress.class to use.
I tried to use kubernetes.io/ingress.class: gce, but nothing happened... it's almost like the ingress was completely ignored.
Is there a way to list all available ingress controllers/classes? Or, at least, which kubernetes.io/ingress.class should I use to create a GKE Ingress? (I'll still use traefik for other ingresses).
Run kubectl describe on the Ingress. If you see create/add events, you have an Ingress controller running in the cluster; otherwise, you probably have the HttpLoadBalancing (GKE Ingress Controller) add-on disabled on your GKE cluster.
On GKE, kubernetes.io/ingress.class: gce is the default ingress class: if no such annotation is defined under the metadata section, the Ingress resource uses the GCP GCLB L7 load balancer to serve traffic. So, have you tried setting the annotation to an empty string?
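For illustration, a minimal GKE Ingress could look like the sketch below (the backend Service name and port are placeholders, not from the question):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gke-ingress                      # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "gce"   # or omit the annotation entirely
spec:
  backend:
    serviceName: my-service              # hypothetical Service (typically of type NodePort on GKE)
    servicePort: 80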
That being said, answering the following questions will help me understand the context:
Could you please define your use case? Are you trying to define two Ingresses for the same service, or to convert the current Traefik ingress to a GCE ingress?
Could you please attach your GKE Ingress definition, so we can check whether there is a syntax error?

Securing Nginx-Ingress > Istio-Ingress

I have an Istio ingress which is working, with traffic going in to the microservices, and traffic between microservices is encrypted within the Istio mesh. But I don't want to expose the Istio ingress to the public.
So I tried deploying an NGINX or HAProxy ingress (with HTTPS certs), pointed it to the Istio ingress, and everything is working great.
My only worry now is that traffic between the NGINX ingress (HTTPS termination) and the Istio ingress is not encrypted.
What is the usual way on Istio to get full encryption of traffic while still using an NGINX/HAProxy ingress?
I guess one way is to run HAProxy in TCP mode towards the Istio ingress, with certs on the Istio ingress. I haven't tried it, but it should work. A wild idea is running the NGINX ingress within the Istio mesh, but then I would lose some Istio ingress capabilities.
What is the recommended way, or do you have any suggestions? How is Istio usually exposed in a real production environment, for example?
Since I don't use a cloud load balancer in front of the Voyager instances, but instead expose Voyager/HAProxy on a host port,
I collocated Voyager (hostPort) and Istio (hostPort) on the same machines, labelled frontend, via a DaemonSet with a node selector (and taints). Then I just pointed Voyager to load-balance against loopback/localhost on the host port I specified for Istio:
backendRule:
- 'server local-istio localhost:30280'
This way no unencrypted traffic traverses the network between Voyager/HAProxy and the Istio ingress, since they now communicate on the same host. I have two frontend nodes which are being load-balanced, so I have redundancy. It is kind of an improvisation and breaks Kubernetes logic, but on the other hand it works great.
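A rough sketch of the collocation (names, labels, tolerations and the image are placeholders, not the exact values used; only the host port matches the backendRule above):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: istio-ingress                   # hypothetical name
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingress
  template:
    metadata:
      labels:
        app: istio-ingress
    spec:
      nodeSelector:
        role: frontend                  # only schedule onto the "frontend" nodes
      tolerations:
      - key: dedicated
        value: frontend
        effect: NoSchedule
      containers:
      - name: istio-ingress
        image: istio/proxyv2            # placeholder image reference
        ports:
        - containerPort: 80
          hostPort: 30280               # the host port Voyager reaches via localhost:30280

The Voyager/HAProxy DaemonSet would carry the same nodeSelector and tolerations so that both controllers land on the same frontend nodes.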
Another solution was to use self-signed certificates on Istio, then just point Voyager/HAProxy at the Istio instances. But this requires multiple TLS terminations, since Voyager is also terminating HTTPS. The advantage of this is that you can leave the Voyager and Istio instances for Kubernetes to distribute; there is no need to bind them to specific machines.

does daemonset need RBAC in kubernetes?

When I deploy a DaemonSet in Kubernetes (1.7+), e.g. the nginx ingress controller as a DaemonSet, do I need to set up some RBAC rules? I know I need to set up RBAC rules if I use a Deployment.
To deploy the ingress controller, you need to enable some RBAC rules. You can find the RBAC rules in the nginx controller repository: https://github.com/kubernetes/ingress/blob/master/examples/rbac/nginx/nginx-ingress-controller-rbac.yml
To create a DaemonSet you don't need to create RBAC rules for it. You might need RBAC for what is running in your Pod, whether it is managed by a Deployment, a DaemonSet or anything else. It is the software running inside that may want to interact with the Kubernetes API, as is the case for an ingress controller. So it is in fact irrelevant how you make the Pod happen; the RBAC (Cluster)Role, bindings etc. are determined by what software you deploy and what access it needs.
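For illustration, the kind of RBAC objects an ingress controller typically needs looks roughly like this (heavily trimmed and with hypothetical names; the manifest linked above is the complete version):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount      # hypothetical name
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "secrets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx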
I was able to enable RBAC using helm (--set rbac.create=true) and this error is not seen anymore, and the nginx ingress controller is working as expected!
helm install --name my-release stable/nginx-ingress --set rbac.create=true