Helmfile with additional resource without chart - kubernetes-helm

I know this is maybe a weird question, but I want to ask if it's possible to also manage single resources (e.g. a ConfigMap or Secret) without a separate chart?
For example, I install nginx-ingress and would like to additionally apply a Secret which contains HTTP basic-authentication data.
I can just reference the nginx-ingress repo directly in my helmfile, but do I really need to create a separate Helm chart just to also apply the basic-auth Secret?
I have many releases which need a single additional resource (like a JSON ConfigMap or a single Secret), and it would be cumbersome to always need a separate chart for each release.
Thank you!

Sorry, Helmfile only manages entire Helm releases.
There are a couple of escape hatches you might be able to use. Helmfile hooks can run arbitrary shell commands on the host (as distinct from Helm hooks, which usually run Jobs in the cluster), so you could in principle kubectl apply a file in a hook, as sketched below. Helmfile also has some integration with Kustomize, so it might be possible to add resources that way. And, as you've noted, you can always write a local chart and put whatever YAML you need in it.
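A rough sketch of the hook approach (the presync event is real Helmfile syntax, but the release and file names here are illustrative):

releases:
  - name: nginx-ingress
    chart: stable/nginx-ingress
    hooks:
      - events: ["presync"]
        # runs on the host before the release is synced, not inside the cluster
        command: "kubectl"
        args: ["apply", "-f", "basic-auth-secret.yaml"]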
The occasional chart does support including either arbitrary extra resources or specific configuration content; the Bitnami MariaDB chart, to pick one, supports putting anything you want under an extraDeploy value. You could use this in combination with Helmfile values: to inject more resources:
releases:
  - name: mariadb
    chart: bitnami/mariadb
    values:
      - extraDeploy:
          - |-
            apiVersion: v1
            kind: ConfigMap
            ...

Related

Helm - Override existing configmap or pass contents in

I am working with the community Prometheus Helm charts, and am trying to override the existing ConfigMap without having to install with Helm and then do a kubectl apply. I was curious if there is a way to either:
During the helm install, set my ConfigMap in place of the one it installs.
If not, use the custom value the chart provides to set your configuration. This is going to include a lot of lines and I wanted to avoid putting them on the command line, so I was wondering what the best way is to just cat the file out and set it as the value instead.
If it helps, I am essentially passing --set rules.custom here and then wanting to fill in the ConfigMap myself. If I can just override the ConfigMap with my own, that would be even better.
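For the cat-the-file-out part, one hedged sketch: Helm 2.10+ has --set-file, which reads a value's content from a file, so assuming rules.custom is the right value path, something like the following keeps the content off the command line (the chart and file names are illustrative):

helm install stable/prometheus --name prometheus \
  --set-file rules.custom=custom-rules.yaml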

Look up secrets from gcloud secrets manager directly as secretGenerator with kustomize

I am setting up my Kubernetes cluster using kubectl -k (kustomize). Like any other such arrangement, I depend on some secrets during deployment. The route I want to go is to use the secretGenerator feature of kustomize to fetch my secrets from files or environment variables.
However, managing said files or environment variables in a secure and portable manner has shown itself to be a challenge, especially since I have 3 separate namespaces for test, stage, and production, each requiring a different set of secrets.
So I thought surely there must be a way for me to manage the secrets in my cloud provider's official way (Google Cloud Platform - Secret Manager).
So what would a secretGenerator that accesses secrets stored in Secret Manager look like?
My naive guess would be something like this:
secretGenerator:
  - name: juicy-environment-config
    google-secret-resource-id: projects/133713371337/secrets/juicy-test-secret/versions/1
    type: some-google-specific-type
Is this at all possible?
What would the example look like?
Where is this documented?
If this is not possible, what are my alternatives?
I'm not aware of a plugin for that. The plugin system in Kustomize is somewhat new (added about 6 months ago), so there aren't a ton in the wild so far, and Secret Manager itself is only a few weeks old. You can find docs for writing one at https://github.com/kubernetes-sigs/kustomize/tree/master/docs/plugins though. That page links to a few Go plugins for secrets management, so you could probably take one of those and rework it against the GCP API.
There is a Go plugin for this (I helped write it), but plugins weren't supported until more recent versions of Kustomize, so you'll need to install Kustomize directly and run it like kustomize build <path> | kubectl apply -f - rather than kubectl -k. This is a good idea anyway, IMO, since newer standalone versions of Kustomize have a lot of useful features beyond what's built into kubectl.
As seen in the examples, after you've installed the plugin (or you can run it within Docker, see readme) you can define files like the following and commit them to version control:
my-secret.yaml
apiVersion: crd.forgecloud.com/v1
kind: EncryptedSecret
metadata:
  name: my-secrets
  namespace: default
source: GCP
gcpProjectID: my-gcp-project-id
keys:
  - creds.json
  - ca.crt
In your kustomization.yaml you would add
generators:
  - my-secret.yaml
and when you run kustomize build it'll automatically retrieve your secret values from Google Secret Manager and output Kubernetes Secret objects.
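As a usage sketch (assuming a standalone Kustomize v3.x; plugin support was alpha at the time, so the exact flag may differ by version, and the overlay path is illustrative):

kustomize build --enable_alpha_plugins overlays/test | kubectl apply -f -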

How to bind kubernetes resource to helm release

If I run kubectl apply -f <some statefulset>.yaml separately, is there a way to bind the StatefulSet to a previous Helm release (e.g. by specifying some tags in the YAML file)?
As far as I know, you cannot do it.
Yes, you can always create resources via templates before installing the Helm chart.
However, I have never seen a solution for binding an already-applied resource to an existing release.

Helm and configmap checksum annotations

I'm working on a Jenkins deployment using a wrapper for the standard chart (stable/jenkins). The chart includes a value flag that lets you totally replace the ConfigMap with your own, as long as you match the format of the original. But I'm running into a problem: the checksum annotation in the deployment is based on the original ConfigMap, not my replacement, so I have to manually force the deployment's pods to re-roll after updating the ConfigMap. I could use a post-upgrade hook in my own chart with a Job that does the scale-down-and-back-up dance, but that seems slightly gross.
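For context, the checksum annotation in question is the standard Helm pattern of hashing the rendered ConfigMap template into the pod template, something like this (a sketch of the usual idiom, not the exact stable/jenkins source):

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # hashes the chart's own configmap template, so a ConfigMap
        # supplied from outside the chart never changes this value
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}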
This is not currently possible with Helm 2, but will be more directly doable in Helm 3 via chart scripts.
My eventual solution was to fork the jenkins chart and cut it down to only the parts I needed.

Passing variables to args field in a yaml file, kubernetes

I am writing a YAML file to use with Kubernetes and I am wondering how to pass variables to the args field.
I need to do something like this:
args: ['--arg1=http://12.12.12.12:8080','--arg2=11.11.11.11']
But I don't want to hard-code those values for --arg1 and --arg2; instead it should be something like:
args: ['--arg1='$HOST1,'--arg2='$HOST2]
How should I do this?
You have two options that are quite different, and the right one really depends on your use case, but both are worth knowing:
1) Helm lets you create templates of Kubernetes definitions that can use variables.
Variables are supplied when you install the Helm chart, before the resulting manifests are deployed to Kubernetes.
You can change the variables later on, but all that does is regenerate the YAML and re-deploy a "static" version of the result (template + variables = YAML that's sent to Kubernetes). A sketch follows.
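A minimal sketch of option 1 (the chart layout and the value names host1/host2 are made up for illustration):

# templates/pod.yaml, inside the container spec of the chart:
args:
  - "--arg1=http://{{ .Values.host1 }}:8080"
  - "--arg2={{ .Values.host2 }}"

# then at install time:
# helm install ./mychart --set host1=12.12.12.12 --set host2=11.11.11.11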
2) ConfigMaps allow you to separate configuration from the pod manifest and share that configuration across several pods/deployments.
You can then reference the ConfigMap from your pod/deployment manifests; a sketch of this follows as well.
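A sketch of option 2, combined with Kubernetes' own $(VAR) substitution, which expands environment variables defined earlier in the container spec into args (the ConfigMap name and keys are made up):

apiVersion: v1
kind: ConfigMap
metadata:
  name: hosts-config
data:
  HOST1: "http://12.12.12.12:8080"
  HOST2: "11.11.11.11"
---
# in the container spec of the pod/deployment:
env:
  - name: HOST1
    valueFrom:
      configMapKeyRef:
        name: hosts-config
        key: HOST1
  - name: HOST2
    valueFrom:
      configMapKeyRef:
        name: hosts-config
        key: HOST2
args: ["--arg1=$(HOST1)", "--arg2=$(HOST2)"]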
Hope this helps!