Pod-specific folder in hostPath in Kubernetes manifests

I am fairly new to Kubernetes and need some help.
I am using a StatefulSet with 3 replicas, and there are 2 worker nodes.
I am looking to provision a separate hostPath for each replica rather than a single hardcoded hostPath. The hostPath is a local directory on the worker node.
For example -
volumeMounts:
- mountPath: /usr/local/store
  name: store
volumes:
- name: store
  hostPath:
    path: /space/myapp/$(POD_NAME)
Here POD_NAME is app-0, app-1, app-2 (3 replicas).
It is fine for our need to have /space/myapp/app-0, /space/myapp/app-1, /space/myapp/app-2 created on multiple worker nodes.
I did some reading but could not find an obvious solution.
One option is to not use replicas and instead create 3 individual Pods, each with its own hardcoded hostPath, but that is not desirable.
Could you please advise what in Kubernetes can help me achieve this? Grateful for any help or direction.

The thing you want is a StatefulSet, which brings with it PVC templates (volumeClaimTemplates). You are definitely in a worse position by requiring hostPath, but either the formal CSI support or the older FlexVolume support can help with that problem.
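For illustration only, here is a minimal sketch of the PVC-template idea using the built-in local volume type rather than a CSI or FlexVolume driver. The storage class name, node name, image, and sizes are assumptions, and the per-replica directories and PersistentVolumes must be created up front:

# StorageClass with no dynamic provisioner (assumed name: local-storage)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# One local PV per directory/node; repeat for each replica directory
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-pv-0
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  local:
    path: /space/myapp/app-0        # directory must already exist on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["worker-1"]      # assumed node name
---
# StatefulSet excerpt: each replica gets its own PVC from the template
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  serviceName: app
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest         # assumed image
        volumeMounts:
        - name: store
          mountPath: /usr/local/store
  volumeClaimTemplates:
  - metadata:
      name: store
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-storage
      resources:
        requests:
          storage: 1Gi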

I am not discussing whether you should use Deployments vs. StatefulSets with PVC templates, as that is out of the scope of the question, though it may help to research that first.
One possible solution would be to manage the replicas yourself rather than using the Kubernetes replicas feature (as "everything" isn't being replicated). Since the question is tagged "kubernetes-helm", I am assuming you can use Helm templates.
What you want can be achieved using the following:
Define all non-common pod properties in a values.yaml file.
Indicative code:
# chartname/values.yaml
pods:
  - name: pod1
    path: volumeHostpathForPod1
  - name: pod2
    path: volumeHostpathForPod2
  # ...etc
In your chartname/templates/pod.yaml (or an equivalent file), indicative code:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: "statefulset"
  # ... (other StatefulSet properties)
spec:                    # StatefulSet spec
  template:
    spec:                # pod spec
      containers:
        {{- range .Values.pods }}
        - # ... (common container properties)
          name: "pod{{ .name }}"
          volumeMounts:
            - mountPath: /usr/local/store
              name: "storeForPod{{ .name }}"
        {{- end }}
      volumes:
        {{- range .Values.pods }}
        - name: "storeForPod{{ .name }}"
          hostPath:
            path: "/space/myapp/{{ .path }}"
        {{- end }}
Generate the final Kubernetes specs using Helm, e.g.:
helm install <release-name> ./chartname -f values.yaml [--dry-run] [--debug]
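To preview the rendered manifests without installing anything, Helm's template command can be used with the same chart and values:

helm template <release-name> ./chartname -f values.yaml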

Related

How to use kubectl to patch a StatefulSet's envFrom

I have a Kubernetes StatefulSet and I'm using envFrom to add environment variables from ConfigMaps and Secrets, by defining configMapRefs and secretRefs in an 'extra-values.yaml' file and including that file in my helm install command.
The StatefulSet.yaml snippet:
apiVersion: apps/v1
kind: StatefulSet
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name | lower }}
          envFrom:
            {{- if .Values.envFrom }}
            {{- toYaml .Values.envFrom | nindent 10 }}
            {{- end }}
The values.yaml file has a single envFrom: line with no children, and the extra-values.yaml file contains the configMapRefs and secretRefs:
envFrom:
  - configMapRef:
      name: my-configmap-name
  - configMapRef:
      name: another-configmap-name
  - secretRef:
      name: my-secret-name
  - secretRef:
      name: second-secret-name
The Helm install command:
helm install myapp /some-folder/myapps-chart-folder -f extra-values.yaml
What I want to do is install myapp without the extra-values.yaml file, and then use the kubectl patch command to add the configMapRefs and secretRefs to the statefulset and its pods.
I can manually do a kubectl edit statefulset to make these changes, which will terminate and restart the pod(s) with the correct environment variables.
But I cannot for the life of me figure out the correct syntax and parameters for the kubectl patch command, despite hours of research, trial, and error, and repeated headbanging. Help!
Thanks to mdaniel for the answer, which contains the clue to what I was missing. Basically, I completely overlooked the fact that the containers element is an array (because my statefulset only specified one container, duh). In all of the kubectl patch command variations that I tried, I did not treat containers as an array, and never specified the container name, so kubectl patch never really had the correct information to act on.
So as suggested, the command that worked was something like this:
kubectl patch statefulset my-statefulset -p '{"spec": {"template": {"spec": {"containers": [{"name":"the-container-name", "envFrom": [{"configMapRef":{"name":"my-configmap-name"}}, {"configMapRef":{"name":"another-configmap-name"}}] }] }}}}'
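If the inline JSON becomes hard to read, the same strategic merge patch can be kept in a file and applied with --patch-file, which recent kubectl releases support; the file name here is hypothetical:

# envfrom-patch.yaml (hypothetical file name)
spec:
  template:
    spec:
      containers:
        - name: the-container-name          # must match the existing container's name
          envFrom:
            - configMapRef:
                name: my-configmap-name
            - configMapRef:
                name: another-configmap-name
            - secretRef:
                name: my-secret-name
            - secretRef:
                name: second-secret-name

kubectl patch statefulset my-statefulset --patch-file envfrom-patch.yaml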

How to mount the same volume onto all pods in a Kubernetes namespace

We have a namespace in Kubernetes where I would like some secrets (files like jks, properties, ts, etc.) to be made available to all the containers in all the pods (we have one JVM per container and one container per pod in each Deployment).
I have created the secrets using kustomize and plan to reference them as a volume in the spec of each Deployment, then add a volumeMount for that Deployment's container. I would like this volume to be mounted in every container deployed in our namespace.
I want to know if kustomize (or anything else) can help me mount this volume on all the Deployments in this namespace.
I have tried the following patchesStrategicMerge:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: myNamespace
spec:
  template:
    spec:
      imagePullSecrets:
        - name: pull-secret
      containers:
        - volumeMounts:
            - name: secret-files
              mountPath: "/secrets"
              readOnly: true
      volumes:
        - name: secret-files
          secret:
            secretName: mySecrets
            items:
              - key: key1
                path: ...somePath
              - key: key2
                path: ...somePath
It requires a name in the metadata section, which does not help me because all my Deployments have different names.
Inject Information into Pods Using a PodPreset
You can use a PodPreset object to inject information like secrets, volume mounts, environment variables, etc. into pods at creation time.
Update, Feb 2021: the PodPreset feature only made it to alpha and was removed in Kubernetes v1.20. See the release notes: https://kubernetes.io/docs/setup/release/notes/
The v1alpha1 PodPreset API and admission plugin has been removed with no built-in replacement. Admission webhooks can be used to modify pods on creation. (#94090, @deads2k) [SIG API Machinery, Apps, CLI, Cloud Provider, Scalability and Testing]
PodPreset (https://kubernetes.io/docs/tasks/inject-data-application/podpreset/) is one way to do this, but for it to apply, all pods in your namespace should match the label selector you specify in the PodPreset spec.
Another way (and the most popular one) is to use Dynamic Admission Control (https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) and write a mutating webhook in your cluster which will edit your pod spec and add all the secrets you want to mount; a registration sketch follows below. Using this you can also make other changes to your pod spec, such as mounting volumes, adding labels, and more.
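As a rough illustration only (the webhook server itself, its Service, and the CA bundle are not shown, and all names here are assumptions), the registration object for such a mutating webhook would look roughly like this:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: secret-volume-injector           # assumed name
webhooks:
  - name: inject.secrets.example.com     # assumed name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: secret-injector             # assumed Service fronting your webhook code
        namespace: myNamespace
        path: /mutate
      caBundle: <base64-encoded CA cert>
    rules:
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
    namespaceSelector:
      matchLabels:
        inject-secrets: "true"            # label the target namespace accordingly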
Standalone kustomize supports applying one patch to many resources; see the example "Patching multiple resources at once" in the kustomize documentation. The kustomize built into kubectl doesn't support this feature.
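A minimal sketch of that approach, assuming one container per Deployment and hypothetical manifest file names; note that this inline JSON patch sets the volumes and volumeMounts lists wholesale, so it would need adjusting if the Deployments already define other entries:

# kustomization.yaml
resources:
  - app1-deployment.yaml    # hypothetical
  - app2-deployment.yaml    # hypothetical
patches:
  - target:
      kind: Deployment      # applies to every Deployment in this kustomization
    patch: |-
      - op: add
        path: /spec/template/spec/volumes
        value:
          - name: secret-files
            secret:
              secretName: mySecrets
      - op: add
        path: /spec/template/spec/containers/0/volumeMounts
        value:
          - name: secret-files
            mountPath: /secrets
            readOnly: true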
To mount a secret as a volume you need to update the YAML of your pod/deployment manifest files and redeploy them.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx
      volumeMounts:
        - name: my-secret-volume
          mountPath: /etc/secretpath
  volumes:
    - name: my-secret-volume
      secret:
        secretName: my-secret
kustomize (or anything else) will not mount it for you.

K8s: Multi-container pod

I'm writing a Helm chart for a multi-container pod. One container must always be running, but another may gracefully shut down. When it goes down, the Service ends up in a state without an endpoint IP. The pod status at that point is Running, but its conditions are Ready: False and ContainersReady: False. How can I handle this?
I could split the containers into two pods sharing a PV, but I don't want to do that. Right now I'm using shared volumes to communicate between the containers.
apiVersion: batch/v1beta1
kind: CronJob
spec:
  schedule: "{{ .Values.schedule }}"
  concurrencyPolicy: Replace
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          volumes:
            - name: "shared-dir"
              emptyDir: {}
          containers:
            - name: {{ .Values.*.name }}
              image: ...
            - name: {{ .Values.*.name }}
              image: ...
I expected that one container would generate a few files, place them in the shared volume, and gracefully shut down, while nginx would keep serving them to other services. The next time the job runs, all containers would be restarted per the concurrencyPolicy.
Have you considered using init containers instead? They allow you to prepare the volume before the long-running container uses it, and an init container runs only once in the pod's life cycle. Configuring probes may also serve you well.
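A minimal sketch of that idea applied to the job's pod template (the generator name and image are assumptions): the file-producing work moves into an init container, so nginx is the only long-running container and the pod's Ready condition no longer depends on a container that intentionally exits:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: file-publisher               # assumed name
spec:
  schedule: "{{ .Values.schedule }}"
  concurrencyPolicy: Replace
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          volumes:
            - name: shared-dir
              emptyDir: {}
          initContainers:
            - name: generator        # produces the files, then exits
              image: my-generator:latest   # assumed image
              volumeMounts:
                - name: shared-dir
                  mountPath: /out
          containers:
            - name: nginx            # serves the generated files
              image: nginx:stable
              volumeMounts:
                - name: shared-dir
                  mountPath: /usr/share/nginx/html
                  readOnly: true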

Invalid spec when I run pod.yaml

When I run my Pod I get this error: Pod "cas-de" is invalid: spec: Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations (only additions to existing tolerations)
However, I searched the Kubernetes website and didn't find anything wrong (I really don't understand where my mistake is).
Is it better to set volumeMounts in a Pod or in a Deployment?
apiVersion: v1
kind: Pod
metadata:
  name: cas-de
  namespace: ds-svc
spec:
  containers:
    - name: ds-mg-cas
      image: "docker-all.xxx.net/library/ds-mg-cas:latest"
      imagePullPolicy: Always
      ports:
        - containerPort: 8443
        - containerPort: 6402
      env:
        - name: JAVA_APP_CONFIGS
          value: "/apps/ds-cas/configs"
        - name: JAVA_EXTRA_PARAMS
          value: "-Djava.security.auth.login.config=./config/jaas.config -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6402"
      volumeMounts:
        - name: ds-cas-config
          mountPath: "/apps/ds-cas/context"
  volumes:
    - name: ds-cas-config
      hostPath:
        path: "/apps/ds-cas/context"
The YAML template is valid. Most likely some fields that are forbidden to change were modified and then kubectl apply ... was executed. This looks like a development workflow. The solution is to delete the existing pod with kubectl delete pod cas-de and then run kubectl apply -f file.yaml or kubectl create -f file.yaml.
There are several fields on objects that you simply aren't allowed to change after the object has initially been created. As a specific example, the reference documentation for Containers notes that volumeMounts "cannot be updated". If you hit one of these cases, you need to delete and recreate the object (possibly creating the new one first with a different name).
Is it better to set volumeMounts in a Pod or in a Deployment?
Never use bare Pods; always prefer using one of the Controllers that manages Pods, most often a Deployment.
Changing to a Deployment will actually solve this problem because updating a Deployment's pod spec will go through the sequence of creating a new Pod, waiting for it to become available, and then deleting the old one for you. It never tries to update a Pod in place.
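For illustration, a sketch of the same pod spec wrapped in a Deployment (the labels are assumptions; the container and volume sections are carried over from the Pod above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cas-de
  namespace: ds-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cas-de          # assumed label
  template:
    metadata:
      labels:
        app: cas-de
    spec:
      containers:
        - name: ds-mg-cas
          image: "docker-all.xxx.net/library/ds-mg-cas:latest"
          imagePullPolicy: Always
          # ... same ports, env, and volumeMounts as in the Pod above
      volumes:
        - name: ds-cas-config
          hostPath:
            path: "/apps/ds-cas/context"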

Kubernetes rolling update in case of secret update

I have a Replication Controller with one replica using a secret. How can I update or recreate its (lone) pod—without downtime—with the latest secret value when the secret changes?
My current workaround is increasing number of replicas in the Replication Controller, deleting the old pods, and changing the replica count back to its original value.
Is there a command or flag to induce a rolling update retaining the same container image and tag? When I try to do so, it rejects my attempt with the following message:
error: Specified --image must be distinct from existing container image
A couple of issues, #9043 and #13488, describe the problem reasonably well, and I suspect a rolling-update approach will eventuate shortly (like most things in Kubernetes), though it's unlikely for 1.3.0. The same issue applies to updating ConfigMaps.
Kubernetes will do a rolling update whenever anything in the deployment pod spec is changed (typically the image to a new version), so one suggested workaround is to set an env variable in your deployment pod spec (e.g. RESTART_).
Then when you've updated your secret/configmap, bump the env value in your deployment (via kubectl apply, patch, or edit) and Kubernetes will start a rolling update of your deployment.
Example Deployment spec:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-nginx
spec:
  replicas: 2
  template:
    metadata:
    spec:
      containers:
        - name: nginx
          image: "nginx:stable"
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /etc/nginx/conf.d
              name: config
              readOnly: true
            - mountPath: /etc/nginx/auth
              name: tokens
              readOnly: true
          env:
            - name: RESTART_
              value: "13"
      volumes:
        - name: config
          configMap:
            name: test-nginx-config
        - name: tokens
          secret:
            secretName: test-nginx-tokens
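For example, a one-liner that bumps the variable using the names from the spec above (the new value is arbitrary) and thereby triggers a rollout:

kubectl patch deployment test-nginx -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","env":[{"name":"RESTART_","value":"14"}]}]}}}}'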
Two tips:
- your environment variable name can't start with an _ or it magically disappears somehow
- if you use a number for your restart variable you need to wrap it in quotes
If I understand correctly, Deployment should be what you want.
Deployment supports rolling update for almost all fields in the pod template.
See http://kubernetes.io/docs/user-guide/deployments/