How to replace existing configmap in kubernetes using helm - kubernetes-helm

I want to replace the coredns ConfigMap data in the kube-system namespace, as below.
First snippet:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
data:
  Corefile: |
    abc:53 {
        log
        errors
        cache 30
        forward . IP1 IP2 IP3
    }
    xyz:53 {
        log
        errors
        cache 30
        forward . IP1 IP2 IP3
    }
But I want to read from values.yaml and build the ConfigMap data from those values. I have created the template below for that, inside the chart's templates directory. When I run helm install, it throws an error saying the "coredns" ConfigMap already exists.
Second snippet:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
data:
  Corefile: |
    {{- range $domain := splitList " " .Values.dns_int_domains }}
    {{ $domain }}:53 {
        log
        errors
        cache 30
        {{- range $dns_int_server := splitList " " .Values.dns_int_servers }}
        {{- if $dns_int_server }}
        forward . {{ $dns_int_server }}
        {{- end }}
        {{- end }}
    }
    {{- end }}
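For reference, the values.yaml feeding this template would hold space-separated lists along these lines (key names are taken from the template, values from the first snippet):

# values.yaml (keys match the template above)
dns_int_domains: "abc xyz"
dns_int_servers: "IP1 IP2 IP3"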
If I run kubectl apply or kubectl create configmap on this file, the ConfigMap is created with the raw template text (second snippet) rather than the rendered data (first snippet). How do I create or replace an existing ConfigMap's data with the rendered output of the code above?
Some examples on the internet show creating a ConfigMap with a different name, "custom-coredns", but I am not sure what additional changes need to be made to the coredns Deployment so that it uses the new ConfigMap data for its Corefile. I see the following in the kubectl describe pods output of the coredns pod:
Args:
  -conf
  /etc/coredns/Corefile
My requirement is to replace the Corefile data. Instead of preparing the data manually and then running kubectl apply, I want to automate it, either with Helm reading values from values.yaml or some other way. Any approach to achieve this is welcome.
I would be grateful if someone could help me out. Thanks in advance!
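One way to get past the "already exists" error (a sketch, not from the thread; the chart path and release name are assumptions) is to render the chart locally with helm template and apply the output over the existing object, which updates the coredns ConfigMap in place:

helm template coredns-override ./coredns-override -f values.yaml \
  | kubectl apply -n kube-system -f -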

Related

Approach for configmap and secret for a yaml file

I have a YAML file which needs to be loaded into my pods. This YAML file has both sensitive and non-sensitive data, and it needs to be present at a path which I have included as an env variable in my containers:
env:
  - name: CONFIG_PATH
    value: /myapp/config/config.yaml
If my understanding is right, a ConfigMap is the right choice, but then I am forced to put sensitive data like the password as plain text in the values.yaml of the Helm chart.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
  labels:
    app: {{ .Release.Name }}-config
data:
  config.yaml: |
    configuration:
      settings:
        Password: "{{ .Values.config.password }}"
        Username: myuser

values.yaml:

config:
  password: "mypassword"
I mounted the above ConfigMap as follows:
volumeMounts:
  - name: {{ .Release.Name }}-config
    mountPath: /myapp/config/
So I wanted to try a Secret. If I use a Secret, it is loaded as environment variables inside the pod, but it does not end up in this config.yaml file.
If I convert the above YAML file into a Secret instead of a ConfigMap, should I convert the entire config.yaml into a base64-encoded Secret? My YAML file has many more entries, it would look cumbersome, and I don't consider that a solution.
If I put the content under the Secret's stringData, then the value is taken as it is.
How do I make sure that config.yaml is loaded into the pods without the password being exposed in values.yaml? Is there a way to combine a ConfigMap and a Secret?
I read about projected volumes, but I don't see how they cover merging a ConfigMap and Secrets into a single config.yaml.
Any help would be appreciated.
Kubernetes has no real way to construct files out of several parts. You can embed an entire (small) file in a ConfigMap or a Secret, but you can't ask the cluster to assemble a file out of parts in multiple places.
In Helm, one thing you can do is to put the configuration-file data into a helper template:
{{- define "config.yaml" -}}
configuration:
  settings:
    Password: "{{ .Values.config.password }}"
    Username: myuser
{{ end -}}
In the ConfigMap you can use this helper template rather than embedding the content directly:
apiVersion: v1
kind: ConfigMap
metadata: { ... }
data:
  config.yaml: |
{{ include "config.yaml" . | indent 4 }}
If you move it to a Secret you do in fact need to base64 encode it. But with the helper template that's just a matter of invoking the template and encoding the result.
apiVersion: v1
kind: Secret
metadata: { ... }
data:
  config.yaml: {{ include "config.yaml" . | b64enc }}
If it's possible to set properties in this file directly via environment variables (like Spring properties) or to insert environment-variable references in the file (like a Ruby ERB file) that could let you put the bulk of the file into a ConfigMap, but use a Secret for specific values; you would need a little more wiring to also make the environment variables available.
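As a sketch of that wiring (the APP_PASSWORD variable, the myapp-secret Secret, and the placeholder expansion are all assumptions; it only works if the application itself expands environment-variable references when reading config.yaml):

# The ConfigMap holds the bulk of config.yaml with a placeholder for the password:
#   Password: "${APP_PASSWORD}"
# The container then gets the real value from a Secret via an environment variable:
env:
  - name: APP_PASSWORD
    valueFrom:
      secretKeyRef:
        name: myapp-secret      # hypothetical Secret holding only the password
        key: password
volumeMounts:
  - name: config                # volume backed by the ConfigMap above
    mountPath: /myapp/config/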
You briefly note a concern around passing the credential as a Helm value. This does in fact require having it in plain text at deploy time, and an operator could helm get values later to retrieve it. If this is a problem, you'll need some other path to inject or retrieve the secret value.

In Helm, can I loop over a set of manifests in my Helm chart?

I am developing a Helm chart for an application that I've developed. One of the components requires multiple manifests that each need to be deployed multiple times. So far, I have taken care of that by putting all of those manifests into a single file, then looping at the top. Something like this:
{{- range .Values.thingies }}
---
apiVersion: v1
kind: ConfigMap
...
---
apiVersion: v1
kind: PersistentVolume
...
---
apiVersion: v1
kind: PersistentVolumeClaim
...
As the list of manifests grows, though, this is starting to feel a little cumbersome. Is there a way to loop over a directory of manifests rather than needing to put everything into the same file?
Another option that I'm not fond of is to implement the same loop in each of the individual manifests.
You can use "Named Templates".
Put this in one file (e.g. _cm.tpl):
{{- define "mychart.configmap" }}
apiVersion: v1
kind: ConfigMap
...
{{- end }}
and use it in another (e.g. thingies.yaml):
{{- range .Values.thingies }}
{{- template "mychart.configmap" }}
{{- end }}
Documentation for Named Templates: https://helm.sh/docs/chart_template_guide/named_templates/
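If each manifest needs per-item values, one variation (a sketch; the .name field is an assumption about what each entry in .Values.thingies carries) is to pass the loop item to the named template as its context:

{{- /* _cm.tpl */}}
{{- define "mychart.configmap" }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .name }}
data: {}
{{- end }}

{{- /* thingies.yaml: "." inside the included template is the current loop item */}}
{{- range .Values.thingies }}
---
{{ include "mychart.configmap" . }}
{{- end }}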

ArgoCD multiple files into argocd-rbac-cm configmap data

Is it possible to pass a CSV file to the data of the "argocd-rbac-cm" ConfigMap? Since I've deployed Argo CD through GitOps (with the official argo-cd Helm chart), I would not like to hardcode a large CSV file inside the ConfigMap itself; I'd prefer to reference a CSV file directly from the Git repository where the Helm chart is located.
And is it also possible to pass more than one file-like key?
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    <<< something to have this append many files >>>
    <<< https://gitlab.custom.net/proj_name/-/blob/master/first_policy.csv # URL from the first csv file in the git repository >>>
    <<< https://gitlab.custom.net/proj_name/-/blob/master/second_policy.csv # URL from the second csv file in the git repository >>>
Thanks in advance!
Any external evaluation in policy.csv would lead to unpredictable behaviour in the cluster and would complicate the Argo CD codebase without obvious gains. That's why this ConfigMap should be set statically, before deploying anything.
You basically have two options:
Correctly set .server.rbacConfig as per https://github.com/argoproj/argo-helm/blob/master/charts/argo-cd/templates/argocd-configs/argocd-rbac-cm.yaml: build your configuration with some bash scripting, assign it to a variable, e.g. RBAC_CONFIG, and pass it in your CI/CD pipeline as helm upgrade ... --set "server.rbacConfig=$RBAC_CONFIG" (see the shell sketch after option 2 below).
Extend the chart with your own template and use the .Files.Get function to create the ConfigMap from files that already exist in your repository (see https://helm.sh/docs/chart_template_guide/accessing_files/), with something like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
data:
  policy.csv: |-
    {{- $files := .Files }}
    {{- range tuple "first_policy.csv" "second_policy.csv" }}
    {{ $files.Get . }}
    {{- end }}
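For completeness, a shell sketch of option 1 (the argo/argo-cd chart reference and the exact server.rbacConfig.policy\.csv value path are assumptions based on the linked template; the file names come from the question):

# Concatenate the policy files, then hand the result to the chart value.
# --set-file reads the value from a file, avoiding quoting problems with multi-line CSV content.
cat first_policy.csv second_policy.csv > combined_policy.csv
helm upgrade --install argocd argo/argo-cd \
  --set-file "server.rbacConfig.policy\.csv=combined_policy.csv"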

How to use kubectl to patch statefulset envFrom

I have a Kubernetes StatefulSet and I'm using envFrom to add environment variables from ConfigMaps and Secrets, by defining configMapRefs and secretRefs in an 'extra-values.yaml' file and including that file in my helm install command.
The Statefulset.yaml snippet:
apiVersion: apps/v1
kind: StatefulSet
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name | lower }}
          envFrom:
          {{- if .Values.envFrom }}
          {{- toYaml .Values.envFrom | nindent 10 }}
          {{- end }}
The values.yaml file has a single envFrom: line with no children, and the extra-values.yaml file contains the configMapRefs and secretRefs:
envFrom:
  - configMapRef:
      name: my-configmap-name
  - configMapRef:
      name: another-configmap-name
  - secretRef:
      name: my-secret-name
  - secretRef:
      name: second-secret-name
The Helm install command:
helm install myapp /some-folder/myapps-chart-folder -f extra-values.yaml
What I want to do is install myapp without the extra-values.yaml file, and then use the kubectl patch command to add the configMapRefs and secretRefs to the statefulset and its pods.
I can manually do a kubectl edit statefulset to make these changes, which will terminate and restart the pod(s) with the correct environment variables.
But I cannot for the life of me figure out the correct syntax and parameters for the kubectl patch command, despite hours of research, trial, and error, and repeated headbanging. Help!
Thanks to mdaniel for the answer, which contains the clue to what I was missing. Basically, I completely overlooked the fact that the containers element is an array (because my statefulset only specified one container, duh). In all of the kubectl patch command variations that I tried, I did not treat containers as an array, and never specified the container name, so kubectl patch never really had the correct information to act on.
So as suggested, the command that worked was something like this:
kubectl patch statefulset my-statefulset -p '{"spec": {"template": {"spec": {"containers": [{"name":"the-container-name", "envFrom": [{"configMapRef":{"name":"my-configmap-name"}}, {"configMapRef":{"name":"another-configmap-name"}}] }] }}}}'
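For readability, the same strategic-merge patch can also be kept in a file (a sketch; assumes a kubectl version new enough to support --patch-file):

# envfrom-patch.yaml: containers are merged by name, so only the named container is touched
cat > envfrom-patch.yaml <<'EOF'
spec:
  template:
    spec:
      containers:
        - name: the-container-name
          envFrom:
            - configMapRef:
                name: my-configmap-name
            - configMapRef:
                name: another-configmap-name
EOF
kubectl patch statefulset my-statefulset --patch-file envfrom-patch.yaml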

Setup a Kubernetes Namespace and Roles Using Helm

I am trying to set up Kubernetes for my company. In that process I am trying to learn Helm.
One of the tasks I have is to set up automation that takes a supplied namespace name parameter, creates a namespace, and sets up the correct permissions in that namespace for the deployment user account.
I can do this simply with a script that uses kubectl like this:
kubectl create namespace $namespaceName
kubectl create rolebinding deployer-edit --clusterrole edit --user deployer --namespace $namespaceName
But I am wondering if I should set up things like this using Helm charts. As I look at Helm charts, it seems that everything is a deployment. I am not sure that this fits the model of "deploying" things. It is more just a general setup of a namespace that will then allow deployments into it. But I want to try it out as a Helm chart if it is possible.
How can I create a Kubernetes namespace and rolebinding using Helm?
A Namespace is a Kubernetes object and it can be described in YAML, so Helm can create one. mdaniel's answer below describes the syntax for doing it for a single Namespace and the corresponding RoleBinding.
There is a chicken-and-egg problem if you are trying to use this syntax to create the Helm installation namespace, though. In Helm 3, metadata about the installation is stored in Kubernetes objects, usually in the same namespace you're installing into
helm install release-name ./a-chart-that-creates-a-namespace --namespace ns
If the namespace doesn't already exist, then Helm can't retrieve the installation metadata; or, if it does, then the declaration of the Namespace object in the chart will conflict with an existing object in the cluster. You can create other objects this way (like RoleBindings) but Namespaces themselves are a problem.
But! You can create other namespaces safely. You can also use Helm's templating constructs to create multiple objects based on what's present in the .Values configuration. So if your values.yaml file (possibly environment-specific) has
namespaces: [service-a, service-b]
clusterRole: edit
user: deploy
Then you can write a template file like
{{- $top := . }}
{{- range $namespace := .Values.namespaces -}}
---
apiVersion: v1
kind: Namespace
metadata:
  name: {{ $namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: {{ $namespace }}
  name: deployer-edit
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ $top.Values.clusterRole }}
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: {{ $top.Values.user }}
{{ end -}}
This will create two YAML documents for each item in .Values.namespaces. Since the range looping construct overwrites the . special variable, we save its value in a $top local variable before we start, and then use $top.Values where we'd otherwise need to reference .Values. We also need to make sure to explicitly name the metadata: { namespace: } of each object we create, since we're not using the default installation namespace.
You need to make sure the helm install --namespace name isn't any of the namespaces you're managing with this chart.
This would let you have a single chart that manages all of the per-service namespaces. If you need to change the set of services, you can just update the chart values and helm upgrade. The one other caution is that this will happily delete namespaces with no warning if you remove a value from the .Values.namespaces list, and take everything in those namespaces with it (notably, any PersistentVolumeClaims that have data you might need).
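A sketch of installing such a chart, keeping the Helm metadata out of the namespaces it manages (the release name, chart path, and helm-meta namespace are placeholders):

helm upgrade --install namespaces ./namespaces-chart \
  --namespace helm-meta --create-namespace \
  -f values-prod.yaml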
Almost any chart for an install that needs to interact with kubernetes itself will include RBAC resources, so it is for sure not just Deployments
# templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: {{ .Release.Namespace }}
  name: {{ .Values.bindingName }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ .Values.clusterRole }}
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: {{ .Values.user }}
then a values.yaml isn't strictly required, but helps folks know what values could be provided:
# values.yaml
bindingName: deployment-edit
clusterRole: edit
user: deployer
Helm v3 has --create-namespace which will create the provided --namespace if it doesn't already exist, which isn't very declarative but does achieve the end result just like the kubectl version
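For example (the release and chart names are placeholders):

helm install my-release ./my-chart --namespace team-apps --create-namespace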
It's also theoretically possible to have the chart create the Namespace, but I would not guess that helm uninstall the-namespaced-rolebinding will do the right thing, since the order of item removal matters a lot:
# templates/00namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.theNamespace }}
and then run helm --namespace kube-system ... or any NS other than the real one, since it doesn't yet exist
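A sketch of that install, with the chart creating a namespace other than the one Helm records its metadata in (the names are placeholders; theNamespace matches the template above):

helm install my-release ./my-chart \
  --namespace kube-system \
  --set theNamespace=team-apps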