InfluxDB on Kubernetes with TLS ingress

I'm setting up influxdb2 on a Kubernetes cluster using Helm. I have enabled ingress and it works fine on port 80, but when I enable TLS and set "secretName" to an existing TLS secret on Kubernetes, it times out on port 443. Am I right in assuming that "secretName" in the Helm chart refers to a Kubernetes cluster secret? Or is it a secret within InfluxDB itself? I can't find any useful documentation about this.

It is a reference to a new Kubernetes secret that is going to be created, corresponding to the TLS cert. It does not have to reference an existing secret. If you run kubectl get secrets after a successful apply, you would see a secret named something like <cert-name>-afr5d
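If you do want to point the chart at an existing cluster secret, a minimal sketch looks like the following (the secret and host names are placeholders, and the ingress field names are assumed from the influxdata/influxdb2 chart's values.yaml, so verify them against your chart version):
kubectl create secret tls influxdb-tls --cert=tls.crt --key=tls.key -n <influxdb-namespace>
# values passed to helm (field names assumed; check the chart's values.yaml)
ingress:
  enabled: true
  tls: true
  hostname: influxdb.example.com   # placeholder hostname
  secretName: influxdb-tls         # must match the Kubernetes TLS secret created above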

Related

Nginx Ingress controller performance issue

I have a GKE cluster, external domain name, and letsencrypt certs.
When I use a load balancer and instruct pods to use certs that I generate with certbot, performance is quite good. But I have to renew the certs manually, which takes a lot of effort.
When I use an ingress controller and let cert-manager renew the certs by itself, the additional hops add latency and make the traffic path more complex: the connection is HTTP/2 from client to ingress, and then becomes plain HTTP from ingress to pods.
Is there any way to remove the extra hops when using the nginx ingress controller and eliminate the performance issue?
There is no extra hop if you are using cert-manager with an ingress.
You can use cert-manager; it will save the cert into a secret and attach it to the ingress. However, it's up to you where you do TLS termination.
You can also pass HTTPS traffic all the way to the pod for end-to-end encryption; if you do TLS termination at the ingress level, the backend traffic to the pod will be plain HTTP.
Internet > ingress (TLS from secret) > plain HTTP if you terminate at the ingress > service > pods
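As a rough sketch of that path with TLS terminated at the ingress (all names and the host are placeholders; the cert-manager.io/cluster-issuer annotation assumes a ClusterIssuer called letsencrypt-prod already exists):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumed issuer name
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls          # cert-manager stores the issued certificate here
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp          # backend service reached over plain HTTP
                port:
                  number: 80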
If you want to use the certificate inside the pod, you can mount the secret into the pod and the application can use it from there.
https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod
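For illustration, a minimal pod spec that mounts a TLS secret as files (the secret name my-tls, the image and the mount path are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-tls
spec:
  containers:
    - name: app
      image: nginx                   # placeholder image
      volumeMounts:
        - name: tls
          mountPath: /etc/tls        # tls.crt and tls.key show up here as files
          readOnly: true
  volumes:
    - name: tls
      secret:
        secretName: my-tls           # the Kubernetes TLS secret to mount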
If you use the secret in a pod, you might need to reload the pod when the certificate changes; in that case you can use Reloader to automatically roll out the pods.
Reloader : https://github.com/stakater/Reloader

Prometheus cannot scrape kubernetes metrics

I have setup a kubernetes cluster using kubeadm. I then deployed prometheus on it using the community helm charts.
I notice that prometheus cannot scrape metrics from the scheduler, etcd or the controller manager.
For example I see errors like this:
Get "https://192.168.3.83:10259/metrics": dial tcp 192.168.3.83:10259: connect: connection refused
The reason I get these errors is because there is in fact nothing listening on https://192.168.3.83:10259/metrics. This is because kube-scheduler has --bind-address set to 127.0.0.1
One way I can fix this is by manually editing the manifest files in /etc/kubernetes/manifests, changing --bind-address to 0.0.0.0
When I do this prometheus is able to scrape those metrics.
However, is this the correct solution? I assume that those manifest files are actually managed by Kubernetes itself, and that I should probably not edit them directly but do something else instead. But what?
Edit: I have since noticed that changes I make to the manifest files do indeed get overwritten when doing an upgrade. And now I have again lost the etcd and other metrics.
I must be missing something obvious here.
I thought that maybe changing the "clusterconfiguration" ConfigMap would do the trick, but whether you can do this (and how you should do it) is not documented anywhere.
I have an out-of-the-box Kubernetes and an out-of-the-box Prometheus, and it does not collect metrics. I cannot be the only one running into this issue. Is there really no solution?
Exposing kube-scheduler, etcd or kube-controller-manager (and persisting the changes)
You can expose the metrics on 0.0.0.0 just as you have done, by editing the configmap and then pulling those changes to each control plane node. These changes will then be persisted across upgrades. For etcd this can also be done in another way, which might be preferable (see further down).
First step: edit the configmap with the below command:
kubectl edit -n kube-system cm/kubeadm-config
Add/change the relevant bind addresses as described here; for example, for etcd as outlined below:
kind: ClusterConfiguration
etcd:
  local:
    extraArgs:
      listen-metrics-urls: http://0.0.0.0:2381
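Since the question also asks about kube-scheduler and kube-controller-manager, the corresponding entries in the same ClusterConfiguration look roughly like this (extraArgs syntax as used by kubeadm's v1beta2/v1beta3 config; verify against your kubeadm version):
kind: ClusterConfiguration
scheduler:
  extraArgs:
    bind-address: 0.0.0.0            # expose the scheduler's /metrics on all interfaces
controllerManager:
  extraArgs:
    bind-address: 0.0.0.0            # same for the controller manager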
Second step: NOTE: Please read here to understand the upgrade command before applying it to any cluster you care about, since it might also update cluster component versions (unless you just did an upgrade) :)
For the changes to be reflected you thus need to run kubeadm upgrade node on each control plane node (one at a time, please). This will bring down the affected pods (those to which you have made changes) and start new instances with the metrics exposed. You can verify before and after with, for example: netstat -tulpn | grep etcd
For etcd the default port in Prometheus is 2379, so it also needs to be adjusted to 2381 as below in your Prometheus values file:
kubeEtcd:
  service:
    port: 2381
    targetPort: 2381
Source for the above solution here
Accessing existing etcd metrics without exposing them further
For etcd metrics there is a second, perhaps preferable, way of accessing the metrics: using the already exposed HTTPS metrics endpoint on port 2379 (which requires client certificate authentication). You can verify this with curl:
curl https://<your IP>:2379/metrics -k --cert /etc/kubernetes/pki/etcd/healthcheck-client.crt --key /etc/kubernetes/pki/etcd/healthcheck-client.key
For this to work we need to supply Prometheus with the correct certificates as a secret in Kubernetes. The steps are described here and outlined below:
Create a secret in the namespace where Prometheus is deployed.
kubectl -n monitoring create secret generic etcd-client-cert --from-file=/etc/kubernetes/pki/etcd/ca.crt --from-file=/etc/kubernetes/pki/etcd/healthcheck-client.crt --from-file=/etc/kubernetes/pki/etcd/healthcheck-client.key
Add the following to your Prometheus Helm values file:
prometheus:
  prometheusSpec:
    secrets: ['etcd-client-cert']
kubeEtcd:
  serviceMonitor:
    scheme: https
    insecureSkipVerify: false
    serverName: localhost
    caFile: /etc/prometheus/secrets/etcd-client-cert/ca.crt
    certFile: /etc/prometheus/secrets/etcd-client-cert/healthcheck-client.crt
    keyFile: /etc/prometheus/secrets/etcd-client-cert/healthcheck-client.key
Prometheus should now be able to access the HTTPS endpoint with the certificates that we mounted from the secret. I would say this is the preferred way for etcd, since we don't expose an additional open HTTP endpoint.

Need help configuring simple TLS/SSL within a k8s cluster for pod-to-pod communication over HTTPS

I need help on how to configure TLS/SSL on a k8s cluster for internal pod-to-pod communication over HTTPS. I am able to curl http://servicename:port over HTTP, but for HTTPS I end up with an NSS error on the client pod.
I generated a self-signed cert with CN=*.svc.cluster.local (as all the services in k8s end with this) and I am stuck on how to configure it in k8s.
Note: I exposed the main svc on port 8443 and I am doing this in my local Docker Desktop setup on a Windows machine.
No Ingress --> Because communication happens within the cluster itself.
Without any CRD (custom resource definition) or cert-manager:
You can store your self-signed certificate in a Kubernetes secret and mount it into the pod as a volume.
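A minimal sketch of that, reusing the wildcard name from the question (the secret name internal-tls, file names and validity are arbitrary; -addext needs OpenSSL 1.1.1 or newer, and note that a wildcard only matches a single DNS label):
# generate the self-signed cert with the name in a SAN, since most clients ignore the CN alone
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=*.svc.cluster.local" \
  -addext "subjectAltName=DNS:*.svc.cluster.local"
# store it as a TLS secret; it can then be mounted into pods as a secret volume
kubectl create secret tls internal-tls --cert=tls.crt --key=tls.key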
If you don't want to use the CRD or cert-manager, you can use the native Kubernetes API to generate a certificate, which will be trusted by all the pods by default.
https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
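For orientation, the flow on that page revolves around a CertificateSigningRequest object roughly like the one below (the name, signerName and the CSR itself are illustrative; the certificate is issued into .status.certificate after kubectl certificate approve):
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace                 # illustrative name
spec:
  request: <base64-encoded PKCS#10 CSR>     # generated with openssl/cfssl for the service DNS name
  signerName: example.com/serving           # illustrative signer; see the linked docs for real signer names
  usages:
    - digital signature
    - key encipherment
    - server auth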
Managing the self-signed certificate across all pods and services might be hard, so I would suggest using a service mesh. A service mesh encrypts the network traffic using mTLS.
https://linkerd.io/2.10/features/automatic-mtls/#:~:text=By%20default%2C%20Linkerd%20automatically%20enables,TLS%20connections%20between%20Linkerd%20proxies.
Mutual TLS for service-to-service communication is managed by the sidecar containers in the case of a service mesh.
https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/
In this case, no ingress and no cert-manager are required.
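For example, with Linkerd, opting a namespace into the mesh (and therefore into mTLS between the injected proxies) is roughly an annotation like this, assuming Linkerd is already installed in the cluster:
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                       # placeholder namespace
  annotations:
    linkerd.io/inject: enabled       # Linkerd injects sidecar proxies; proxy-to-proxy traffic uses mTLS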

Add TLS ingress to Kubernetes deployment

I have a working Kubernetes cluster where ingress and letsencrypt are working just fine when I use Helm charts. I have a deployment, not included in a chart, that I want to expose using ingress with TLS. How can I do this with kubectl commands?
EDIT: I can manually create an ingress but I don't have a secret, so HTTPS won't work. So my question is probably "How to create a secret with letsencrypt to use on a new ingress for an existing deployment?"
Google provides a way to do this for their own managed certificates. The documentation for it is at https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs.
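Sketched from that documentation, it looks roughly like this (the domain and resource names are placeholders; ManagedCertificate is a GKE-specific API and the ingress must use GKE's external load balancer):
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-managed-cert
spec:
  domains:
    - myapp.example.com              # placeholder domain you control
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    networking.gke.io/managed-certificates: my-managed-cert   # attach the managed cert
    kubernetes.io/ingress.class: gce                          # GKE's external HTTP(S) load balancer
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp          # existing service in front of the deployment
                port:
                  number: 80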

Traefik load balancer via helm chart does not route any traffic

I was trying to set up a Traefik load balancer as an alternative to nginx-ingress. I used the helm chart from https://github.com/helm/charts/tree/master/stable/traefik and installed it on my GKE cluster with RBAC enabled, since I use Kubernetes v1.12:
helm install --name traefik-lb --namespace kube-system --set rbac.enabled=true stable/traefik
My test application's ingress.yaml points to the new ingress class now:
kubernetes.io/ingress.class: "traefik"
What I've seen in the logs is that Traefik reloads its config all the time. I would also like to know if Traefik definitely needs a TLS cert to "just" route traffic.
What I've seen in the logs is that traefik reloads its config all the time.
It should reload every time you change the Ingress resources associated with it (the Traefik ingress controller). If it reloads all the time without any change to your cluster, there may be an issue with Traefik itself or the way your cluster is set up.
I would also like to know if traefik definitely needs a TLS cert to "just" route traffic.
No, it doesn't. This basic example from the documentation shows that you don't need TLS if you don't want to set it up.
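A minimal sketch of such an ingress without any TLS section (names are placeholders; extensions/v1beta1 matches the v1.12 cluster from the question, while current clusters would use networking.k8s.io/v1 with a slightly different backend syntax):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-test-app
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
    - host: app.example.com          # placeholder host
      http:
        paths:
          - path: /
            backend:
              serviceName: my-test-app   # placeholder backend service
              servicePort: 80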