Configuring Let's Encrypt with Traefik using Helm - Kubernetes

I'm deploying Traefik to my Kubernetes cluster using Helm. Here's what I have at the moment:
helm upgrade --install load-balancer --wait --set ssl.enabled=true,ssl.enforced=true,acme.enabled=true,acme.email=an#email.com stable/traefik
I'm trying to configure Let's Encrypt. According to this documentation, you add the domains to the bottom of the .toml file.
Looking at the code for the helm chart, there's no provision for such configuration.
Is there another way to do this or do I need to fork the chart to create my own variation of the .toml file?

Turns out this is the chicken and the egg problem, described here.
For the helm chart, if acme.enabled is set to true, then Traefik will automatically generate and serve certificates for domains configured in Kubernetes ingress rules. This is the purpose of the onHostRule = true line in the yaml file (referenced above).
To use Traefik with Let's Encrypt, we have to create an A record in our DNS server that points to the IP address of our load balancer, which we can't do until Traefik is up and running. However, this configuration needs to exist before Traefik starts.
The only solution (at this stage) is to kill the first Pod after the A record configuration has propagated.
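For completeness, a minimal sketch of that workaround, assuming the chart labels the Traefik pods with app=traefik and the release runs in kube-system (both are assumptions, adjust them to your install):
# once the A record has propagated, delete the running Traefik pod;
# the Deployment recreates it and the new pod can complete the ACME challenge
kubectl delete pod -n kube-system -l app=traefik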

Note that the stable/traefik chart now supports the ACME DNS-01 protocol. Using DNS avoids the chicken-and-egg problem.
See: https://github.com/kubernetes/charts/tree/master/stable/traefik#example-aws-route-53
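For reference, a hedged sketch of what the DNS-01 values look like, based on the Route 53 example in that README; exact key names can vary between chart versions, and the email, region and credentials below are placeholders:
helm upgrade --install load-balancer stable/traefik \
  --set acme.enabled=true \
  --set acme.email=you@example.com \
  --set acme.challengeType=dns-01 \
  --set acme.dnsProvider.name=route53 \
  --set acme.dnsProvider.route53.AWS_REGION=eu-west-1 \
  --set acme.dnsProvider.route53.AWS_ACCESS_KEY_ID=<access-key-id> \
  --set acme.dnsProvider.route53.AWS_SECRET_ACCESS_KEY=<secret-access-key>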

Related

Setting up NiFi for use with Kafka in Kubernetes using Helm in a VirtualBox

I need to set up NiFi in Kubernetes (microk8s), in a VM (Ubuntu, using VirtualBox) using a helm chart. The end goal is to have two-way communication with Kafka, which is also already deployed in Kubernetes.
I have found a helm chart for NiFi available through Cetic here. Kafka is already set up to allow external access through a NodePort, so my assumption is that I should do the same for NiFi (at least for simplicity's sake), though any alternative solution is welcome.
From the documentation, NodePort access is an option:
NodePort: Exposes the service on each Node's IP at a static port (the NodePort). You'll be able to contact the NodePort service, from outside the cluster, by requesting NodeIP:NodePort.
Additionally, the documentation states (paraphrasing):
service.type defaults to NodePort
However, this does not appear to be true for this Helm chart, given that the default in the chart's values.yaml file is service.type=ClusterIP.
I have very little experience with any of these technologies, so my question is, how do I actually set up the NiFi helm chart YAML file to allow two-way communication (presumably via NodePorts)? Is it as simple as "requesting NodeIP:NodePort", and if so, how do I do this?
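(For what it's worth, contacting NodeIP:NodePort is as direct as it sounds; a sketch of how both values can be looked up once the service exists, assuming the release ends up with a service named nifi:)
kubectl get nodes -o wide             # the INTERNAL-IP / EXTERNAL-IP columns give a usable NodeIP
kubectl get svc nifi -n <namespace>   # the PORT(S) column shows <port>:<nodePort>/TCP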
UPDATE
I attempted JM Robles's approach (which does not use helm), but the API version used for Ingress is out-of-date and I haven't been able to figure out how to fix it.
I also tried GetInData's approach, but the helm commands provided result in: Error: unknown command "nifi" for "helm".
I found an answer, for anyone facing a similar problem. As of late January 2023, the following can be used to set up NiFi as described in the question:
helm repo add cetic https://cetic.github.io/helm-charts
helm repo update
helm install -n <namespace> \
  --set persistence.enabled=true \
  --set service.type=NodePort \
  --set properties.sensitiveKey=<key you want> \
  --set auth.singleUser.username=<your username> \
  --set auth.singleUser.password=<password you select, must be at least 12 characters> \
  nifi cetic/nifi
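The same settings can also be kept in a values file instead of --set flags; a sketch equivalent to the command above (the placeholders are the same ones you would fill in on the command line):
persistence:
  enabled: true
service:
  type: NodePort
properties:
  sensitiveKey: <key you want>
auth:
  singleUser:
    username: <your username>
    password: <password you select, must be at least 12 characters>
and install it with: helm install -n <namespace> -f values.yaml nifi cetic/nifi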

Keep permanent IP for ingress-nginx (and using kustomize)

Problem
On a fresh Kubernetes cluster, I want to configure an nginx ingress with a public IP that remains stable, regardless of later changes to the nginx ingress controller; i.e., I want to be able to delete and restore the ingress without changes to the IP address. The IP address is handed out by my Kubernetes provider.
There is an inspiring guide on github.com/kubernetes/ingress-nginx; however, the solution is very handcrafted and does not explain how to do it with automation tools like helm or kustomize.
I want (1) to use ingress-nginx, which installs via helm, (2) to have the public IP stable, regardless of helm (un)install, and (3) to use kustomize.
Question
I have isolated my kustomize configuration on a branch here on github, three files. (If I could or should provide this in a better way, let me know.)
When I apply this, it creates two LoadBalancer endpoints instead of a single one; ingress-nginx does not use the public-static-ip service's endpoint but launches its own.
How do I have to change the kustomization so it picks up the helm chart but also uses the public-static-ip service's endpoint?
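(No accepted solution is recorded here, but to illustrate the kind of configuration in question: a minimal sketch of a kustomization.yaml using kustomize's helmCharts generator, built with kustomize build --enable-helm, that pins the chart's own Service to the pre-allocated address via the chart's controller.service.loadBalancerIP value. The IP and names below are placeholders, not taken from the linked branch:)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ingress-nginx
helmCharts:
- name: ingress-nginx
  repo: https://kubernetes.github.io/ingress-nginx
  releaseName: ingress-nginx
  includeCRDs: true
  valuesInline:
    controller:
      service:
        loadBalancerIP: 203.0.113.10   # placeholder: the stable address handed out by the provider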

How do I use Crossplane to install Helm charts (with provider-helm) into another cluster

I'm evaluating Crossplane as our go-to tool to deploy our clients' different solutions and have struggled with one issue:
We want to install Crossplane into one cluster on GCP (which we create manually) and use that Crossplane to provision new clusters on which we can install Helm charts and deploy as usual.
The main problem so far is that we haven't figured out how to tell crossplane to install the helm charts into other clusters than itself.
This is what we have tried so far:
The provider-config in the example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: helm-provider
spec:
  credentials:
    source: InjectedIdentity
...which works but installs everything into the same cluster as crossplane.
and the other example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      name: cluster-credentials
      namespace: crossplane-system
      key: kubeconfig
...which required a lot of Makefile scripting to make generating a kubeconfig for the new cluster easier, and even with that kubeconfig it still gives a lot of errors (it does begin to create something in the new cluster, but it doesn't work all the way; I get errors like: "PodUnschedulable Cannot schedule pods: gvisor").
I have only tried Crossplane for a couple of days, so I'm aware that I might be approaching this from a completely wrong angle, but I do like the promise of Crossplane and its approach compared to Terraform and the like.
So the question is: am I thinking about this completely wrong, or am I missing something obvious?
The second test with the kubeconfig feels quite complicated right now (many steps that have to be done in the correct order).
Thanks
As you've noticed, ProviderConfig with InjectedIdentity is for the case where provider-helm installs the helm release into the same cluster.
To deploy to other clusters, provider-helm needs a kubeconfig file for the remote cluster, which needs to be provided as a Kubernetes Secret and referenced from the ProviderConfig. So, as long as you've provided a proper kubeconfig for an external cluster that is accessible from your Crossplane cluster (a.k.a. the control plane), provider-helm should be able to deploy the release to the remote cluster.
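For example, a sketch of creating that Secret so it matches the secretRef in the ProviderConfig above (the kubeconfig file path is a placeholder):
kubectl -n crossplane-system create secret generic cluster-credentials \
  --from-file=kubeconfig=./new-cluster.kubeconfig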
So, it looks like you're on the right track regarding configuring provider-helm, and since you observed something getting deployed to the external cluster, you provided a valid kubeconfig, and provider-helm could access and authenticate to the cluster.
The last error you're getting sounds like some incompatibility between your cluster and the release, e.g. the external cluster only allows pods that run with gVisor and the application you want to install with provider-helm is not labelled accordingly.
As a troubleshooting step, you might try installing that Helm chart with exactly the same configuration to the external cluster via the Helm CLI, using the same kubeconfig you built.
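For instance (a sketch; the release name, chart and values file are placeholders):
KUBECONFIG=./new-cluster.kubeconfig helm install <release> <chart> -f <values.yaml>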
Regarding the inconvenience of building the kubeconfig you mentioned: provider-helm needs a way to access that external Kubernetes cluster, and a kubeconfig is the most common way to do that. However, if you see an alternative that makes things easier for some common use cases, it could be implemented, and it would be great if you could create a feature request in the repo for this.
Finally, I am wondering how you're creating those external clusters. If it makes sense to create them with Crossplane as well, e.g. GKE with provider-gcp, then you can compose a Helm ProviderConfig together with a GKE Cluster resource, so that the appropriate Secret and ProviderConfig are created whenever you create a new cluster. You can check this as an example: https://github.com/crossplane-contrib/provider-helm/blob/master/examples/in-composition/composition.yaml#L147

kubernetes: deploying kong helm chart

I deploy Kong via Helm on my Kubernetes cluster, but I can't configure it the way I want.
helm install stable/kong -f values.yaml
values.yaml:
{
"persistence.size":"1Gi",
"persistence.storageClass":"my-kong-storage"
}
Unfortunately, the created PersistentVolumeClaim stays at 8G instead of 1Gi. Even adding "persistence.enabled": false has no effect on the deployment, so I think all my configuration is bad.
What should be a good configuration file?
I am using a Rancher Kubernetes deployment on bare-metal servers.
I use Local Persistent Volumes (working well with a mongo-replicaset deployment).
What you are trying to do is configure a dependency chart (a.k.a. subchart), which works a little differently from a main chart when it comes to writing values.yaml. Here is how you can do it:
First, the content of values.yaml should not be surrounded with curly braces, so remove them from the code you posted in the question.
Then, since postgresql is a dependency chart of kong, use the name of the dependency chart as a key and nest the options you need to modify under it, in the following form:
<dependency-chart-name>:
  <configuration-key-name>: <configuration-value>
For Rancher, you have to write it as follows:
#values.yaml for rancher
postgresql.persistence.storageClass: "my-kong-storage"
postgresql.persistence.size: "1Gi"
If, on the other hand, you are using Helm itself with vanilla Kubernetes, you can write values.yaml as below:
#values.yaml for helm
postgresql:
  persistence:
    storageClass: "my-kong-storage"
    size: "1Gi"
More about Dealing with SubChart values
More about Postgresql chart configuration
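A quick way to check that the override is picked up, as a sketch using the same Helm 2-style command as in the question:
# render without installing and check the PersistentVolumeClaim's storage request
helm install --dry-run --debug stable/kong -f values.yaml
# after a real install, the CAPACITY column shows what was actually provisioned
kubectl get pvc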
Please tell us which cluster setup you are using. A cloud-managed service? A custom Kubernetes setup?
The problem you are facing is that there is a "minimum size" of storage to be provisioned. For example, in IBM Cloud it is 20 GB.
So even if 2 GB are requested in the PVC, you will end up with a 20 GB PV.
Please check the documentation of your NFS provisioner / storage class.
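For example (a sketch; the PVC name is a placeholder):
kubectl get storageclass my-kong-storage -o yaml   # shows the provisioner and its parameters
kubectl describe pvc <pvc-name>                    # the Events section shows what the provisioner actually did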

Passing the Google cloud service account file to traefik

As per https://docs.traefik.io/configuration/acme/
I've created a secret like so:
kubectl --namespace=gitlab-managed-apps create secret generic traefik-credentials \
--from-literal=GCE_PROJECT=<id> \
--from-file=GCE_SERVICE_ACCOUNT_FILE=key.json
And passed it to the helm chart by using: --set acme.dnsProvider.$name=traefik-credentials
However I am still getting the following error:
{"level":"error","msg":"Unable to obtain ACME certificate for domains \"traefik.my.domain.com\" detected thanks to rule \"Host:traefik.my.domain.com\" : cannot get ACME client googlecloud: Service Account file missing","time":"2019-01-14T21:44:17Z"}
I don't know why/if Traefik uses the GCE_SERVICE_ACCOUNT_FILE variable. All Google tooling and third-party integrations use the GOOGLE_APPLICATION_CREDENTIALS environment variable for that purpose (and all Google API clients automatically pick up this variable), so it looks like Traefik may have made a poor decision here by calling it something else.
I recommend you look at the Pod spec of the Traefik pod (the volumes and volumeMounts fields) to see if the Secret is mounted to the Pod correctly.
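For example (a sketch; the label selector is an assumption, adjust it to your release):
kubectl -n gitlab-managed-apps get pods -l app=traefik
kubectl -n gitlab-managed-apps get pod <pod-name> -o yaml
# inspect spec.volumes and spec.containers[].volumeMounts in the output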
If you follow this tutorial https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform you can learn how to mount IAM Service accounts to any Pod. So maybe you can combine this with the Helm chart itself and figure out what you need to do to make this work.