Add TLS ingress to Kubernetes deployment - kubernetes

I have a working kubernetes cluster where ingress and letsencrypt are working just fine when I use helm charts. I have a deployment not included in a chart that I want to expose using ingress with TLS. How can I do this with kubectl commands?
EDIT: I can manually create an ingress but I don't have a secret so HTTPS won't work. So my question is probably "How to create a secret with letsencrypt to use on a new ingress for an existing deployment"

Google provides a way to do this for their own managed certificates. The documentation for it is at https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs.
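If letsencrypt already works for the helm-chart releases, cert-manager is most likely what issues those certificates, and it can issue one for a hand-written Ingress too: annotate the Ingress with a cluster issuer and cert-manager will create the TLS secret for you. A minimal sketch, assuming cert-manager is installed, a ClusterIssuer named letsencrypt-prod exists, and the service/host names are placeholders:

```yaml
# Sketch only: "letsencrypt-prod", the host, and the service name/port
# are assumptions; substitute your own values.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-deployment-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: my-deployment-tls   # cert-manager creates this secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-deployment-svc
                port:
                  number: 80
```

Apply it with kubectl apply -f ingress.yaml; once the certificate is issued, the referenced secret appears and HTTPS works without any manual secret creation.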

Related

ingress-nginx tls not working on AKS when deployed in separate namespace - Ingress is looking for the cert secret in the wrong namespace

I have an ingress-nginx controller installed though helm in a namespace called nginx
My services run in the default namespace
I have a SecretProviderClass in the nginx namespace, which is referenced from the nginx controller yaml in the nginx namespace.
When I deploy an Ingress resource into the default namespace, and I go into the logs of the controller pod, I see
W0930 13:57:10.224167 7 backend_ssl.go:47] Error obtaining X.509 certificate: no object matching key "default/ingress-tls-csi" in local store
Clearly it is looking in the wrong namespace for the secret.
What is the right way to handle this? I have tried to duplicate the SecretProviderClass in the default namespace, but it doesn't seem to be creating the secret. I have also tried to point my Ingress to nginx/ingress-tls-csi instead of just ingress-tls-csi, but then it complains about changing an immutable field when I try to deploy, even if I delete the Ingress resource first.
The docs I am referencing are https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-nginx-tls
Conveniently, there they use the same namespace for the services and the controller. I'd prefer not to do that, as the controller needs privilege escalation and I don't want to allow that in the namespace my services run in.
We solve a similar problem using reflector:
https://github.com/EmberStack/kubernetes-reflector
It is a Kubernetes addon that keeps ConfigMaps and Secrets in sync across namespaces.
Install reflector and add the following annotations to the secret you need to have available in the default namespace:
reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "default"
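A sketch of what the annotated source secret looks like, reusing the ingress-tls-csi name and nginx namespace from the question (the certificate data is a placeholder; if the secret is created by the CSI driver rather than applied by hand, the annotations would instead need to be attached wherever the driver's secretObjects metadata is defined):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ingress-tls-csi
  namespace: nginx
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "default"
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded cert>
  tls.key: <base64-encoded key>
```

Reflector then maintains a copy of the secret in the default namespace, where the Ingress can reference it as plain ingress-tls-csi.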

InfluxDb on Kubernetes with TLS ingress

I'm setting up influxdb2 on a kubernetes cluster using helm. I have enabled ingress and it works ok on port 80, but when I enable TLS and set the "secretName" to an existing TLS secret on kubernetes, it times out on port 443. Am I right in assuming that "secretName" in the helm chart refers to a kubernetes cluster secret? Or is it a secret within influxdb itself? I can't find any useful documentation about this.
It is a reference to a new Kubernetes secret that will be created for the TLS cert. It does not have to reference an existing secret. If you run kubectl get secrets after a successful apply, you will see a secret named something like <cert-name>-afr5d
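For context, the chart value maps onto the standard Kubernetes Ingress TLS block. A sketch of what gets rendered, with an assumed host and secret name:

```yaml
# Fragment of the rendered Ingress; host and secret name are placeholders.
spec:
  tls:
    - hosts:
        - influxdb.example.com
      secretName: influxdb-tls   # a Kubernetes TLS secret, not an InfluxDB setting
```

So the name lives entirely on the Kubernetes side; InfluxDB itself never sees it.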

Traefik load balancer via helm chart does not route any traffic

I was trying to set up a traefik load balancer as an alternative LB for nginx-ingress. I used the helm chart from https://github.com/helm/charts/tree/master/stable/traefik and installed on my GKE cluster with rbac enabled since I use Kubernetes v1.12:
helm install --name traefik-lb --namespace kube-system --set rbac.enabled=true stable/traefik
My test application's ingress.yaml points to the new ingress class now:
kubernetes.io/ingress.class: "traefik"
What I've seen in the logs is that Traefik reloads its config all the time. I would also like to know if Traefik definitely needs a TLS cert to "just" route traffic.
What I've seen in the logs is that traefik reloads its config all the time.
It should reload every time you change the Ingress resources associated with it (The Traefik ingress controller). If it reloads all the time without any change to your cluster, there may be an issue with Traefik itself or the way your cluster is set up.
I would also like to know if traefik definitely needs a TLS cert to "just" route traffic.
No, it doesn't. This basic example from the documentation shows that you don't need TLS if you don't want to set it up.
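A sketch of a plain-HTTP Ingress that Traefik will route without any certificate. The extensions/v1beta1 API group matches the Kubernetes 1.12 cluster mentioned in the question; host and service names are placeholders:

```yaml
# No tls: section at all; Traefik routes this over plain HTTP.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-app
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
    - host: test.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: test-app
              servicePort: 80
```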

How do I need to annotate ingresses for traefik to generate letsencrypt certificates

I am using traefik as an ingress-controller and want to serve other ingresses via auto-generated letsencrypt certificates and enforce SSL.
I set up traefik with its official helm chart like this:
helm install stable/traefik --name traefik --set dashboard.enabled=true,dashboard.domain=traefik.mycompany.com,rbac.enabled=true,externalIP=123.456.789.123,ssl.enabled=true,ssl.enforced=true,ssl.permanentRedirect=true,acme.enabled=true,acme.staging=false,acme.challengeType=http-01
How do I need to annotate the ingresses for the apps I need to expose to use an auto-generated letsencrypt certificate?
With this setup traefik.mycompany.com is delivered via SSL, but with a wildcard certificate for the default host *.example.com.
I dug through the whole traefik documentation (https://docs.traefik.io/) but could only find out how to set up the ingresses themselves.
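With the Traefik 1.x helm chart, ACME certificates are typically requested per Ingress host once the ingress carries the traefik class annotation, so usually no extra ACME-specific annotation is needed on the app ingresses themselves. A sketch under that assumption (service name and host are placeholders, and the exact behavior depends on the chart's ACME settings, e.g. acme.onHostRule):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: app.mycompany.com   # Traefik's ACME should request a cert for this host
      http:
        paths:
          - backend:
              serviceName: my-app
              servicePort: 80
```

If the wildcard default certificate is still served instead, it usually means the ACME challenge for that host failed, so checking the Traefik logs for ACME errors is a good next step.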

does daemonset need RBAC in kubernetes?

When I deploy a daemonset in kubernetes (1.7+), e.g. nginx ingress as a daemonset, do I need to set some RBAC rules? I know I need to set some RBAC rules if I use a deployment.
To deploy ingress, you need to enable some RBAC rules. In the nginx controller repository you can find the RBAC rules: https://github.com/kubernetes/ingress/blob/master/examples/rbac/nginx/nginx-ingress-controller-rbac.yml
To create a daemonset you don't need to create RBAC rules for it. You might need RBAC for what is running in your Pod, whether it runs via a Deployment, a DaemonSet, or anything else. It is the software running inside that might want to interact with the Kubernetes API, as is the case with an ingress controller. So it is in fact irrelevant how you make the Pod happen; it is the software you deploy that defines what access rules, RBAC (Cluster)Roles, bindings, etc. it needs.
I was able to enable RBAC using helm (--set rbac.create=true) and this error is not seen anymore, and the nginx ingress controller is working as expected!
helm install --name my-release stable/nginx-ingress --set rbac.create=true
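For reference, rbac.create=true makes the chart render a ServiceAccount plus the role objects for it. A condensed sketch of the kind of RBAC an ingress controller typically needs (names and the exact rule list vary by chart version):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress
rules:
  # The controller watches services, endpoints, secrets and ingresses cluster-wide.
  - apiGroups: [""]
    resources: ["services", "endpoints", "secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress
subjects:
  - kind: ServiceAccount
    name: nginx-ingress
    namespace: default
```

The controller Pod then runs under that ServiceAccount, regardless of whether it was created by a Deployment or a DaemonSet.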