How to use kubectl to patch statefulset envFrom - kubernetes

I have a Kubernetes StatefulSet and I'm using envFrom to add environment variables from ConfigMaps and Secrets, by defining configMapRefs and secretRefs in an 'extra-values.yaml' file and including that file in my helm install command.
The StatefulSet.yaml snippet:
apiVersion: apps/v1
kind: StatefulSet
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
      - name: {{ .Chart.Name | lower }}
        envFrom:
        {{- if .Values.envFrom }}
        {{- toYaml .Values.envFrom | nindent 10 }}
        {{- end }}
The values.yaml file has a single envFrom: line with no children, and the extra-values.yaml file contains the configMapRefs and secretRefs:
envFrom:
  - configMapRef:
      name: my-configmap-name
  - configMapRef:
      name: another-configmap-name
  - secretRef:
      name: my-secret-name
  - secretRef:
      name: second-secret-name
The Helm install command:
helm install myapp /some-folder/myapps-chart-folder -f extra-values.yaml
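For reference, when extra-values.yaml is supplied, the template above renders the container roughly like this (a sketch of the rendered output; "the-container-name" stands in for whatever {{ .Chart.Name | lower }} produces):

containers:
  - name: the-container-name
    envFrom:
      - configMapRef:
          name: my-configmap-name
      - configMapRef:
          name: another-configmap-name
      - secretRef:
          name: my-secret-name
      - secretRef:
          name: second-secret-name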
What I want to do is install myapp without the extra-values.yaml file, and then use the kubectl patch command to add the configMapRefs and secretRefs to the statefulset and its pods.
I can manually do a kubectl edit statefulset to make these changes, which will terminate and restart the pod(s) with the correct environment variables.
But I cannot for the life of me figure out the correct syntax and parameters for the kubectl patch command, despite hours of research, trial, and error, and repeated headbanging. Help!

Thanks to mdaniel for the answer, which contains the clue to what I was missing. Basically, I completely overlooked the fact that the containers element is an array (because my statefulset only specified one container, duh). In all of the kubectl patch command variations that I tried, I did not treat containers as an array, and never specified the container name, so kubectl patch never really had the correct information to act on.
So as suggested, the command that worked was something like this:
kubectl patch statefulset my-statefulset -p '{"spec": {"template": {"spec": {"containers": [{"name":"the-container-name", "envFrom": [{"configMapRef":{"name":"my-configmap-name"}}, {"configMapRef":{"name":"another-configmap-name"}}] }] }}}}'
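For readability, the same strategic-merge patch, including the secretRefs, can live in a file; this is a sketch assuming a reasonably recent kubectl that supports --patch-file, and as far as I know envFrom has no merge key, so the patch replaces the whole list and should include every ref you want to keep:

# patch-envfrom.yaml (hypothetical file name)
spec:
  template:
    spec:
      containers:
        - name: the-container-name
          envFrom:
            - configMapRef:
                name: my-configmap-name
            - configMapRef:
                name: another-configmap-name
            - secretRef:
                name: my-secret-name
            - secretRef:
                name: second-secret-name

kubectl patch statefulset my-statefulset --patch-file patch-envfrom.yaml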

Related

How to replace existing configmap in kubernetes using helm

I want to replace coredns configmap data in kube-system namespace as below.
First snippet:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
data:
  Corefile: |
    abc:53 {
        log
        errors
        cache 30
        forward . IP1 IP2 IP3
    }
    xyz:53 {
        log
        errors
        cache 30
        forward . IP1 IP2 IP3
    }
But I want to read values from values.yaml and build the ConfigMap data from them. I created the template below for that, inside the chart's templates directory. When I run helm install, it throws an error saying the "coredns" ConfigMap already exists.
Second snippet:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
data:
  Corefile: |
    {{- range $domain := splitList " " .Values.dns_int_domains }}
    {{ $domain }}:53 {
        log
        errors
        cache 30
        {{- range $dns_int_server := splitList " " .Values.dns_int_servers }}
        {{- if $dns_int_server }}
        forward . {{ $dns_int_server }}
        {{- end }}
        {{- end }}
    }
    {{- end }}
If I run kubectl apply or kubectl create configmap on that file, the ConfigMap is created with the raw template text (second snippet), not the rendered data (first snippet). How can I create, or replace, an existing ConfigMap's data with the rendered output of the code above?
Some examples on the internet create a ConfigMap with a different name, such as "custom-coredns", but I am not sure what additional changes need to be made to the coredns Deployment so that it uses the new ConfigMap for its Corefile. I see the following in the kubectl describe pods output of a coredns pod:
Args:
  -conf
  /etc/coredns/Corefile
My requirement is to replace the Corefile data without preparing it manually and running kubectl apply; I want to automate it, either with Helm reading values from values.yaml or some other way.
I would be grateful if someone could help me out. Thanks in advance!
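A hedged sketch of one possible approach (not from the original thread): because the existing coredns ConfigMap is not managed by Helm, helm install refuses to touch it, but you can render the chart locally and overwrite the ConfigMap directly. "mydns" and ./chartname are placeholder names, and this assumes the chart renders only this ConfigMap:

# Render the template with values.yaml, then overwrite the live ConfigMap in kube-system
helm template mydns ./chartname -f values.yaml | kubectl replace -n kube-system -f -

# If the Corefile does not include the reload plugin, restart CoreDNS to pick up the change
kubectl rollout restart deployment coredns -n kube-system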

How to change a pod name

I'm very new to k8s and the related stuff, so this may be a stupid question: How to change the pod name?
I am aware the pod name seems to be set in the Helm chart; in my values.yaml I have this:
...
hosts:
  - host: staging.application.com
    paths:
      ...
      - fullName: application
        svcPort: 80
        path: /*
...
Since the application runs in both the prod and staging environments, and the pod name is just something like application-695496ec7d-94ct9, I can't tell which pod is for prod or staging, and can't tell whether a request came from prod or not. So I changed it to:
hosts:
  - host: staging.application.com
    paths:
      ...
      - fullName: application-staging
        svcPort: 80
        path: /*
I deployed it to staging; the pod was updated/recreated automatically, but the pod name still remains the same. I was confused by that and don't know what is missing. I'm not sure whether it is related to fullnameOverride, but that is empty, so it should be fine.
...the pod name still remains the same
The code snippet in your question is likely the Helm values for an Ingress; in that case it is not related to the Deployment or the Pod.
Look into the Helm template that defines the Deployment spec for the pod, search for the name field, and see which Helm value is assigned to it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox # <-- change this and you will see the pod name change along with it; the Helm syntax surrounding this field will tell you how the name is constructed/assigned
  labels:
    app: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["ash","-c","sleep 3600"]
Save the spec, apply it, and check with kubectl get pods --selector app=busybox. You should see one pod whose name starts with busybox. Now open the file, change the name to custom, re-apply, and run the get again; you will see two pods with different name prefixes (see the command sketch below). Clean up with kubectl delete deployment busybox custom.
This example shows how the name of the Deployment is used for the pod(s) underneath it. You can paste the Helm template surrounding the name field into your question for further examination if you like.
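A short command sketch of that walkthrough, assuming the spec above is saved as busybox.yaml:

kubectl apply -f busybox.yaml                # creates Deployment "busybox"
kubectl get pods --selector app=busybox      # one pod named busybox-<hash>-<id>
# edit busybox.yaml: change metadata.name from "busybox" to "custom", then:
kubectl apply -f busybox.yaml                # creates a second Deployment "custom"
kubectl get pods --selector app=busybox      # now two pods with different name prefixes
kubectl delete deployment busybox custom     # clean up both Deployments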

Kubernetes: Restart pods when config map values change

I have a pod with the following specs
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    env:
      - name: WATCH_NAMESPACE
        valueFrom:
          configMapKeyRef:
            name: watch-namespace-config
            key: WATCH_NAMESPACE
  restartPolicy: Always
I also created a ConfigMap
kubectl create configmap watch-namespace-config \
--from-literal=WATCH_NAMESPACE=dev
The pod reads values from the watch-namespace-config ConfigMap.
When I manually change the ConfigMap values, I want the pod to restart automatically to reflect the change. Is that possible in any way?
This is currently a feature in progress https://github.com/kubernetes/kubernetes/issues/22368
For now, use Reloader - https://github.com/stakater/Reloader
It watches for changes in ConfigMaps and/or Secrets, then performs a rolling upgrade on the relevant DeploymentConfig, Deployment, DaemonSet, StatefulSet and Rollout.
How to use it - https://github.com/stakater/Reloader#how-to-use-reloader
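In practice you deploy Reloader in the cluster and annotate the workload you want restarted; a minimal sketch based on the Reloader README (the Deployment name here is just an example):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # Reloader watches the ConfigMaps/Secrets referenced by this Deployment
    # and performs a rolling restart whenever any of them change.
    reloader.stakater.com/auto: "true"
spec:
  ...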
As you correctly mentioned, once you update a ConfigMap or Secret, the Deployment/Pod/StatefulSet is not updated on its own.
An alternative solution for this scenario is to use Kustomize.
Kustomize generates a unique name for the ConfigMap/Secret every time you update it, by appending a hash of the content, for example <configmap-name>-xxxxxx.
If you then run:
kubectl kustomize . | kubectl apply -f -
kubectl will "update" the workloads with the new ConfigMap values.
Working Example(s) using Kustomization:
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/08-Kustomization
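A minimal sketch of such a kustomization.yaml, reusing the watch-namespace-config ConfigMap from the question (the pod.yaml file name is a placeholder):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - pod.yaml                    # the manifest(s) that reference the ConfigMap

configMapGenerator:
  - name: watch-namespace-config
    literals:
      - WATCH_NAMESPACE=dev

The generated ConfigMap gets a name like watch-namespace-config-<hash>, and Kustomize rewrites the references in the listed resources, so changing a literal yields a new name and therefore a rollout for Deployments/StatefulSets that consume it.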

using node selector helm chart to assign pods to a specific node pool

I'm trying to assign pods to a specific node pool as part of the helm command, so in the end the Deployment YAML should look like this:
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    node-name: dev-cpu-pool
I'm using this command as part of a Jenkinsfile deployment:
sh "helm upgrade -f charts/${job_name}/default.yaml --set nodeSelector.name=${deployNamespace}-cpu-pool --install ${deployNamespace}-${name} helm/${name} --namespace=${deployNamespace} --recreate-pods --version=${version}"
The deployment works fine and the pod is up and running, but for some reason I cannot see the nodeSelector key and value in the deployed Deployment YAML, and as a result the pods are not assigned to the specific node pool I want. Any idea what is wrong? Should I put a placeholder in my chart template, or is that not a must?
The artifacts that Helm submits to the Kubernetes API are exactly the result of rendering the chart templates; nothing more, nothing less. If your templates don't include a nodeSelector: block then the resulting Deployment never will either. Even if you helm install --set ... things that could match Kubernetes API fields, nothing will implicitly fill them in.
If you want an option to specify rarely-used fields like nodeSelector: then your chart code needs to include them. You can make the presence of the field conditional on the value being set, but you do need to explicitly list it out:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      {{- if .Values.nodeSelector }}
      nodeSelector: {{- .Values.nodeSelector | toYaml | nindent 8 }}
      {{- end }}
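With that block in the template, the value can come from values.yaml or the command line; a hypothetical invocation that produces the Deployment YAML shown in the question (release name and chart path are placeholders):

helm upgrade --install my-release ./helm/mychart \
  --set nodeSelector.node-name=dev-cpu-pool

Note that --set nodeSelector.name=... (as in the Jenkins command above) would render name: dev-cpu-pool rather than node-name: dev-cpu-pool.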

POD specific folder in hostPath in Kubernetes manifests

I am fairly new to Kubernetes and need some help.
I am using a StatefulSet with 3 replicas. There are 2 worker nodes.
I want to provision a separate hostPath for each replica rather than a hardcoded one. The hostPath is also local to the worker node.
For example:
volumeMounts:
  - mountPath: /usr/local/store
    name: store
volumes:
  - name: store
    hostPath:
      path: /space/myapp/$(POD_NAME)
Here POD_NAME is app-0, app-1, app-2 (3 replicas).
It is fine for our need to have /space/myapp/app-0, /space/myapp/app-1, /space/myapp/app-2 created on multiple worker nodes.
I did some reading and could not come across any obvious solution.
One solution would be to not use replicas and instead create 3 individual Pods, each with its own hardcoded hostPath, but that is not desirable.
Could you please guide, what in Kubernetes can help me achieve this? Grateful for any help or direction.
The thing you want is the StatefulSet's PVC templates (volumeClaimTemplates). You are definitely in a worse position by requiring hostPath, but either the formal CSI support or the older FlexVolume support can help with that problem.
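A minimal sketch of that direction, assuming a StorageClass named local-storage (for example, one backed by the local static provisioner) already exists on the cluster; each replica then gets its own PersistentVolumeClaim (store-app-0, store-app-1, store-app-2) instead of a shared hostPath:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  serviceName: app
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: busybox                    # placeholder image
          command: ["sh", "-c", "sleep 3600"]
          volumeMounts:
            - name: store
              mountPath: /usr/local/store
  volumeClaimTemplates:
    - metadata:
        name: store
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: local-storage     # assumed to exist
        resources:
          requests:
            storage: 1Gi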
I am not discussing whether you need to use Deployments vs. StatefulSets with PVC templates, as that is out of the scope of the question, though it may help to research that first.
One possible solution would be to manage the replicas yourself rather than using the Kubernetes replicas feature (as "everything" isn't being replicated). Since the question is tagged "kubernetes-helm", I am assuming you can use Helm templates.
What you want can be achieved using the following:
Define all non-common pod properties in a values.yaml file.
Indicative code:
chartname:
  pods:
    - name: pod1
      path: volumeHostpathForPod1
    - name: pod2
      path: volumeHostpathForPod2
    ...etc
In your chartname/templates/pod.yaml (or equivalent file), indicative code:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: "statefulset"
  ...(other StatefulSet properties)
spec: # StatefulSet spec
  template:
    spec: # pod spec
      containers:
        {{- range .Values.pods }}
        - ...(common pod properties)
          name: "pod{{ .name }}"
          volumeMounts:
            - mountPath: /usr/local/store
              name: "storeForPod{{ .name }}"
        {{- end }}
      volumes:
        {{- range .Values.pods }}
        - name: "storeForPod{{ .name }}"
          hostPath:
            path: "/space/myapp/{{ .path }}"
        {{- end }}
Generate the final Kubernetes specs using Helm, e.g.: helm install <release-name> ./chartname -f values.yaml [--dry-run] [--debug]