I have a rather peculiar use case. Specifically, prior to the deployment of my statefulset I am deploying a ConfigMap which contains an environment variable setting (namely RECREATE_DATADIR) which instructs the pod's container to create a new data structure on the file system.
However, during the typical lifetime of the container the data structure should NOT be recreated. Hence, right after the pod is successfully running, I am changing the ConfigMap and reapplying it, so that if the pod ever fails, it won't recreate the data directory structure when it respawns.
How can I achieve this same result using Helm charts?
You can create a Job as part of your Helm chart with the post-install Helm hook. The Job will have ConfigMap edit permissions, will use a kubectl image (bitnami/kubectl, for example), and will patch the ConfigMap value to false using kubectl commands.
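For illustration, a minimal sketch of such a hook Job, assuming the ConfigMap is named datadir-config, the key is RECREATE_DATADIR, and a ServiceAccount called configmap-patcher with RBAC permission to patch ConfigMaps already exists (all of these names are assumptions):

apiVersion: batch/v1
kind: Job
metadata:
  name: disable-recreate-datadir
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: configmap-patcher   # needs RBAC allowing "patch" on configmaps
      restartPolicy: Never
      containers:
        - name: patch-configmap
          image: bitnami/kubectl:latest
          command:
            - kubectl
            - patch
            - configmap
            - datadir-config
            - --type=merge
            - -p
            - '{"data":{"RECREATE_DATADIR":"false"}}'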
A pod can be created by a Deployment, ReplicaSet, or DaemonSet. If I am updating a pod's container specs, is it OK for me to simply modify the YAML file that created the pod? Would it be erroneous once I have done that?
Brief Question:
Is kubectl apply -f xxx.yml the silver bullet for all pod updates?
...if I am updating a pod's container specs, is it OK for me to simply modify the yaml file that created the pod?
Since the pod spec is part of the controller spec (e.g. Deployment, DaemonSet), to update the container spec you naturally start with the controller spec. Also, a running pod is largely immutable; there isn't much you can change directly unless you do a replace, which is what the controller is already doing.
You should not make changes to the pods directly, but update the spec.template.spec section of the Deployment used to create the pods.
The reason for this is that the Deployment is the controller that manages the ReplicaSets and therefore the pods that are created for your application. That means that if you apply changes to the pod manifest directly, and something like a pod rescheduling/restart happens, the changes made to the pod will be lost, because the ReplicaSet will recreate the pod according to its own specification and not the specification of the last running pod.
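For illustration, a minimal Deployment manifest (names and image are made up) showing where spec.template.spec sits; that is the section you edit and re-apply, not the Pod itself:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:                  # the pod template managed by the Deployment
    metadata:
      labels:
        app: my-app
    spec:                    # this is spec.template.spec
      containers:
        - name: my-app
          image: my-registry/my-app:1.2.0   # change the image here, then kubectl apply -f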
You are safe to use kubectl apply to apply changes to existing resources, but if you are unsure, you can always extract the current state of the deployment from Kubernetes and pipe that output into a YAML file to create a backup:
kubectl get deploy/<name> --namespace <namespace> -o yaml > deploy.yaml
Another option is to use the internal rollback mechanism of Kubernetes to restore a previous revision of your deployment. See https://learnk8s.io/kubernetes-rollbacks for more info on that.
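For reference, the rollback commands look roughly like this (deployment name, namespace and revision number are placeholders):

kubectl rollout history deployment/<name> -n <namespace>                 # list recorded revisions
kubectl rollout undo deployment/<name> -n <namespace>                    # roll back to the previous revision
kubectl rollout undo deployment/<name> -n <namespace> --to-revision=2    # or to a specific revision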
The Deployment resource object is still not supported and not enabled in our cluster.
We are using a Pod resource object YAML file, something like below:
apiVersion: v1
kind: Pod
metadata:
  name: sample-test
  namespace: default
spec:
  automountServiceAccountToken: false
  containers:
I have explored the patch and put REST APIs for Pod (kubectl patch and kubectl replace); they update to the new image version and the pod restarts.
I need help with the below:
When the image version is the same, it will not update and the pod will not restart.
How can I achieve a Pod restart? Is there any API for this, or any alternate approach? My pod also refers to a ConfigMap and a Secret; after I make changes to the Secret, I want to restart the pod so that it can pick up the updated value.
Suppose a patch is applied with a new container image and it fails (status is Failed). I want to roll back to the previous version. How can I achieve this with a standalone Pod, without using a Deployment? Is there any alternate approach?
Your scenarios can be handled like this:
When the image version is the same, it will not update and the pod will not restart. How can I achieve a Pod restart? Is there any API for this, or any alternate approach? My pod also refers to a ConfigMap and a Secret; after I make changes to the Secret, I want to restart the pod so that it can pick up the updated value.
Create a new Secret/ConfigMap each time and update the Pod YAML to use the new ConfigMap/Secret name rather than the old one.
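A sketch of that idea with made-up names (app-config-v2, app-secret-v2); since most of a running Pod's spec is immutable, the Pod has to be deleted and recreated with the new references:

kubectl create configmap app-config-v2 --from-file=./config/ -n default
kubectl create secret generic app-secret-v2 --from-literal=password=<new-value> -n default

# excerpt of the updated Pod yaml - note the new names
spec:
  containers:
    - name: sample-test
      envFrom:
        - configMapRef:
            name: app-config-v2        # was app-config-v1
      volumeMounts:
        - name: secret-vol
          mountPath: /etc/secret
  volumes:
    - name: secret-vol
      secret:
        secretName: app-secret-v2      # was app-secret-v1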
Suppose a patch is applied with a new container image and it fails (status is Failed). I want to roll back to the previous version. How can I achieve this with a standalone Pod, without using a Deployment? Is there any alternate approach?
Before you do a Pod update, get the current Pod YAML using kubectl like this:
kubectl get pod <pod-name> -o yaml -n <namespace>
After getting the YAML, generate the new Pod YAML and apply it. In case of failure, clean up the new resources created (ConfigMaps & Secrets) and apply the older version of the Pod to achieve the rollback.
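A rough shell sketch of that manual flow (pod and file names are placeholders):

# 1. back up the current Pod definition
kubectl get pod sample-test -n default -o yaml > sample-test-backup.yaml

# 2. delete the old Pod and apply the new definition
kubectl delete pod sample-test -n default
kubectl apply -f sample-test-new.yaml

# 3. if the new Pod fails, clean up the new ConfigMaps/Secrets, then restore the backup
#    (you may need to strip status, resourceVersion and uid from the backup first)
kubectl delete pod sample-test -n default --ignore-not-found
kubectl apply -f sample-test-backup.yaml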
I modified a ConfigMap's environment from DEV to FAT, and now I want it to take effect in all my pods in the dabai-fat namespace. How can I restart all pods in the namespace? Modifying them one by one is too slow, and I have more than 20 deployment services now. What is the easy way to enable the new config?
You should prefer mounted ConfigMaps for your solution, since with them you will not need a pod restart.
The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync.
The total delay from the moment the ConfigMap is updated to the moment new keys are projected into the pod can be as long as the kubelet sync period (1 minute by default) plus the TTL of the ConfigMap cache in the kubelet (1 minute by default). You can trigger an immediate refresh by updating one of the pod's annotations. It is important to remember that a container using a ConfigMap as a subPath volume will not receive ConfigMap updates.
How to Add ConfigMap data to a Volume
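A minimal sketch of mounting a ConfigMap as a volume (the ConfigMap name, mount path and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: config-vol
          mountPath: /etc/config       # files here are refreshed when the ConfigMap changes
  volumes:
    - name: config-vol
      configMap:
        name: app-config               # avoid subPath mounts, or updates will not be propagated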
You should not edit an already existing ConfigMap.
The question Restart pods when configmap updates in Kubernetes? has the best possible answer to your question.
First, use Deployments so it's easy to scale everything.
Second, create a new ConfigMap and point the Deployment to it. If the new ConfigMap is broken, the Deployment won't finish scaling up the new pods; if it is correct, the Deployment will scale the old pods down to 0 and schedule new pods that use the new ConfigMap.
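A hedged sketch of that approach with made-up names; create the new ConfigMap under a versioned name, switch the Deployment's reference, and re-apply, which triggers a rolling replacement of its pods:

kubectl create configmap app-config-fat --from-literal=ENVIRONMENT=FAT -n dabai-fat

# in the Deployment manifest, switch the reference and re-apply:
#   envFrom:
#     - configMapRef:
#         name: app-config-fat        # was app-config-dev
kubectl apply -f deployment.yaml -n dabai-fat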
When installing Consul using Helm, it expects the cluster to dynamically provision the PersistentVolume requested by the consul-helm chart. That is the default behavior.
I have a PV and PVC created manually and need this PV to be used by the consul-helm chart. Is it possible to install Consul using Helm so that it uses a manually created PV in Kubernetes?
As #coderanger said
For this to be directly supported the chart author would have to provide helm variables you could set. Check the docs.
As shown in the GitHub docs, there are no variables to change that.
If you have to change it, you would have to work with consul-statefulset.yaml; this chart provisions volumes dynamically for each StatefulSet pod created:
volumeMounts
volumeClaimTemplates
Use helm fetch to download the Consul chart files to your local directory:
helm fetch stable/consul --untar
Then I found a GitHub answer with a good explanation and example of using one PV & PVC across all replicas of a StatefulSet, so I think it could actually work with the consul chart.
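Very much a sketch: the claim name data and the storageClassName manual are assumptions that must match your manually created PV and what consul-statefulset.yaml actually generates. The idea is to give your PV a dedicated storageClassName and set the same class in the chart's volumeClaimTemplates, so the PVC generated for the pod binds to your pre-created PV:

# excerpt of the edited consul-statefulset.yaml
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: manual   # must match the storageClassName on the manually created PV
      resources:
        requests:
          storage: 10Gi          # must not exceed the PV's capacity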
My Jenkins X Nexus pod has run out of disk space and I need to increase the size of its persistent volume claim.
I can see the YAML file for this in the Kubernetes dashboard; however, when I try to change it I get: PersistentVolumeClaim "jenkins-x-nexus" is invalid: spec: Forbidden: field is immutable after creation.
Deleting the pod and quickly trying to update the yaml doesn't work either.
Our version of Kubernetes (1.8) doesn't have kubectl stop, so is there a way to stop the replication controller in order to change the YAML?
Our version of Kubernetes (1.8) doesn't have kubectl stop, so is there a way to stop the replication controller in order to change the YAML?
You can scale the RC to 0, and it will stop spawning pods.
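For example (replication controller name and namespace are placeholders):

kubectl scale rc <rc-name> --replicas=0 -n <namespace>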
I can see the YAML file for this in the Kubernetes dashboard; however, when I try to change it I get: PersistentVolumeClaim "jenkins-x-nexus" is invalid: spec: Forbidden: field is immutable after creation.
That message means that you cannot change the size of your volume. There are several tickets on GitHub about that limitation regarding different types of volumes; that one, for example.
So, to change the size, you need to create a new, bigger PVC and somehow migrate your data from the old volume to the new one.
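A sketch of the replacement PVC (name, size and storage class are assumptions); once it is bound, the data can be copied over, for example from a temporary pod that mounts both volumes:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-x-nexus-bigger
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi              # the new, larger size
  storageClassName: standard     # assumed storage class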