Cannot update Kubernetes pod from yaml generated from kubectl get pod pod_name -o yaml

I have a pod in my Kubernetes cluster that needed an update to add a securityContext. So I generated a YAML file using:
kubectl get pod pod_name -o yaml > mypod.yaml
After adding the required securityContext and executing:
kubectl apply -f mypod.yaml
no changes are observed in the pod.
Whereas a freshly created YAML file works perfectly fine.
New YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: default
spec:
  securityContext:
    runAsUser: 1010
  containers:
  - command:
    - sleep
    - "4800"
    image: ubuntu
    name: myubuntuimage

Immutable fields
In Kubernetes you can find information about Immutable fields.
A lot of fields in APIs tend to be immutable: they can't be changed after creation. This is true, for example, for many of the fields in Pods. There is currently no way to declaratively specify that fields are immutable, so one has to rely on either built-in validation for core types or validating webhooks for CRDs.
Why?
There are resources in Kubernetes which have immutable fields by design, i.e. after creation of an object, those fields cannot be mutated anymore. E.g. a pod's specification is mostly unchangeable once it is created. To change the pod, it must be deleted, recreated and rescheduled.
Editing existing pod configuration
If you try to apply the new config with the security context using kubectl apply, you will get an error like the one below:
The Pod "mypod" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
You will get the same output if you use kubectl patch:
kubectl patch pod mypod -p '{"spec":{"securityContext":{"runAsUser":1010}}}'
kubectl edit will not change this specific configuration either:
$ kubectl edit pod
Edit cancelled, no changes made.
Solution
If you need only one pod, you must delete it and create a new one with the requested configuration.
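For a bare Pod, that means roughly the following (a minimal sketch, using the mypod.yaml file from the question):

kubectl delete pod mypod
kubectl apply -f mypod.yaml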
A better solution is to use a resource that manages Pods for you, such as a Deployment. You change the configuration by updating the PodTemplateSpec of the Deployment: a new ReplicaSet is created, and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
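For example, the Pod from the question could be wrapped in a Deployment along these lines (a rough sketch, not a drop-in manifest; the app: mypod label is just illustrative). Changing the securityContext inside spec.template then makes the Deployment roll out new Pods:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mypod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mypod
  template:
    metadata:
      labels:
        app: mypod
    spec:
      securityContext:
        runAsUser: 1010
      containers:
      - name: myubuntuimage
        image: ubuntu
        command: ["sleep", "4800"]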

Related

When do I use `apply` and when do I use `rollout`?

I am new to Kubernetes and am a bit confused about the apply and rollout commands. If I update the Kubernetes configuration file, should I use kubectl apply -f or kubectl rollout?
If I update the Kubernetes configuration and run kubectl apply -f, it terminates the running pod and creates a new one.
But rollout also has a restart command which is used to restart the pods. So when should I use rollout restart?
kubectl apply -f is used to apply a configuration file to Kubernetes (i.e. to deploy your desired application), while kubectl rollout is used to manage and check the application you have deployed.
Example
Suppose your deployment configuration looks like the YAML below and you saved it in a file called nginx.yaml. To deploy the nginx app from that file, use kubectl apply -f nginx.yaml; to check whether the application deployed successfully, use kubectl rollout status deployment/nginx.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
And if you have updated the YAML locally and want to replace the existing object with it, use kubectl replace -f nginx.yaml.
One major difference I can think of is that kubectl apply can be used for all Kubernetes objects (Pods, Deployments, ConfigMaps, Secrets, etc.), whereas kubectl rollout applies specifically to workload objects like Deployments and StatefulSets.
Further, kubectl rollout restart is useful for restarting the pods without requiring any change in the spec fields, which is not possible with kubectl apply: if we run kubectl apply with no changes in the spec fields, the pods will not get updated, as there is nothing to update.
Consider the scenario where some configuration (say, an external certificate) is mounted into the pods as a ConfigMap, and a change in the ConfigMap does not cause the pods to be updated automatically. kubectl rollout restart is useful in such scenarios to create new pods, which can then read the updated configuration from the ConfigMap.
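A minimal sketch of that workflow (the deployment name is just illustrative):

# recreate the pods so they pick up the updated ConfigMap contents
kubectl rollout restart deployment my-app
kubectl rollout status deployment my-app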
Also, an important note from the docs:
Note: A Deployment's rollout is triggered if and only if the
Deployment's Pod template (that is, .spec.template) is changed, for
example if the labels or container images of the template are updated.
Other updates, such as scaling the Deployment, do not trigger a
rollout.
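For instance (illustrative commands against the nginx deployment above), scaling leaves the revision history untouched, while changing the pod template creates a new revision:

kubectl scale deployment nginx --replicas=5               # no new rollout
kubectl set image deployment nginx nginx=nginx:1.16.0     # triggers a rollout
kubectl rollout history deployment nginx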

Kubectl get deployments, no resources

I've just started learning Kubernetes. In every tutorial the writer generally uses "kubectl ... deployments" to control the newly created deployments. Now, with those commands (e.g. kubectl get deployments) I always get the response No resources found in default namespace., and I have to use "pods" instead of "deployments" to make things work (which works fine).
Now my question is: what is causing this to happen, and what is the difference between using a Deployment and a Pod? I set the Docker driver when first starting minikube; does that have something to do with this?
First let's brush up on some terminology.
Pod - It's the basic building block for Kubernetes. It groups one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers.
Deployment - It is a controller which wraps Pods and manages their life cycle, which is to say it reconciles the actual state to the desired state. There is one more layer in between Deployment and Pod, which is the ReplicaSet: a ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.
(Visualization omitted: a diagram of the Deployment → ReplicaSet → Pod hierarchy.)
In your case, what might have happened:
Either you created a Pod, not a Deployment. Therefore, when you do kubectl get deployment you don't see any resources. Note that when you create a Deployment, it in turn creates a ReplicaSet for you, which creates the defined Pods.
Or maybe you created your Deployment in a different namespace. If that's the case, use this command to find your Deployments in that namespace: kubectl get deploy NAME_OF_DEPLOYMENT -n NAME_OF_NAMESPACE
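A quick way to see which of the two you actually created (a sketch with illustrative names):

kubectl get pods                                 # bare Pods show up here
kubectl get deployments                          # only Deployments show up here
kubectl create deployment nginx --image=nginx    # creates a Deployment (plus ReplicaSet and Pods)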
More information to clarify your concepts:
Below, the section inside spec.template is essentially the Pod manifest you would write if you were to create the Pod manually and not take the Deployment route. As I said earlier, in simple terms Deployments are a wrapper around your Pods, so anything you see outside the spec.template path is the configuration that defines how you want to manage (scaling, affinity, etc.) your Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
A Deployment is a controller providing a higher-level abstraction on top of Pods and ReplicaSets. A Deployment provides declarative updates for Pods and ReplicaSets. Deployments internally create ReplicaSets, within which Pods are created.
Use cases of Deployments are documented here.
One reason for No resources found in default namespace could be that you created the Deployment in a specific namespace and not in the default namespace.
You can see Deployments in a specific namespace or in all namespaces via:
kubectl get deploy -n namespacename
kubectl get deploy -A

Adding a volume to a Kubernetes StatefulSet using kubectl patch

Problem summary:
I am following the Kubernetes guide to set up a sample Cassandra cluster. The cluster is up and running, and I would like to add a second volume to each node in order to enable backups for Cassandra that would be stored on a separate volume.
My attempt at a solution:
I tried editing my cassandra-statefulset.yaml file by adding a new volumeMounts and volumeClaimTemplates entry, and reapplying it, but got the following error message:
$ kubectl apply -f cassandra-statefulset.yaml
storageclass.storage.k8s.io/fast unchanged
The StatefulSet "cassandra" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
I then tried to enable rolling updates and patch my configuration following the documentation here:
https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/
$ kubectl patch statefulset cassandra -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
statefulset.apps/cassandra patched (no change)
My cassandra-backup-patch.yaml:
spec:
  template:
    spec:
      containers:
        volumeMounts:
        - name: cassandra-backup
          mountPath: /cassandra_backup
  volumeClaimTemplates:
  - metadata:
      name: cassandra-backup
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast
      resources:
        requests:
          storage: 1Gi
However this resulted in the following error:
$ kubectl patch statefulset cassandra --patch "$(cat cassandra-backup-patch.yaml)"
The request is invalid: patch: Invalid value: "map[spec:map[template:map[spec:map[containers:map[volumeMounts:[map[mountPath:/cassandra_backup name:cassandra-backup]]]]] volumeClaimTemplates:[map[metadata:map[name:cassandra-backup] spec:map[accessModes:[ReadWriteOnce] resources:map[requests:map[storage:1Gi]] storageClassName:fast]]]]]": cannot restore slice from map
Could anyone please point me to the correct way of adding an additional volume for each node or explain why the patch does not work? This is my first time using Kubernetes so my approach may be completely wrong. Any comment or help is very welcome, thanks in advance.
The answer is in your first log:
The StatefulSet "cassandra" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy'
You can't change some fields in a statefulset after creation. You will likely need to delete and recreate the statefulset to add a new volumeClaimTemplate.
edit:
It can often be useful to leave your pods running even when you delete the StatefulSet. To accomplish this, use the --cascade=false flag on the delete operation.
kubectl delete statefulset <name> --cascade=false
Then your workload will stay running while you recreate your StatefulSet with the updated volumeClaimTemplates.
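Putting it together, a rough sequence could look like this (manifest name taken from the question; on newer kubectl versions the flag is --cascade=orphan):

kubectl delete statefulset cassandra --cascade=false
kubectl apply -f cassandra-statefulset.yaml
kubectl rollout status statefulset cassandra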
As mentioned by switchboard.op, deleting is the answer.
but
Watch out for deleting these objects:
PersistentVolumeClaim (kubectl get pvc)
PersistentVolume (kubectl get pv)
These will, for example, be deleted if you do a plain helm uninstall instead of kubectl delete statefulset/<item>. Unless there is some other reference to the volumes, and if you don't have backups of the previous YAMLs that contain the IDs (i.e. not just the ones generated from Helm templates, but the ones from the orchestrator), you might have a painful day ahead of you.
PVCs and PVs hold IDs and other reference properties for the underlying (probably/mostly vendor-specific) volume, e.g. S3 or another object or file storage implementation used in the background as a volume in a Pod or other resources.
Deleting or otherwise modifying a StatefulSet doesn't affect mounting of the correct resource, as long as you preserve the PVC name within the spec.
If in doubt, always copy the whole volume locally before doing anything destructive to PVCs and PVs that you will need in the future, or before running commands without knowing the underlying source code, e.g. by:
kubectl cp <some-namespace>/<some-pod>:/var/lib/something /tmp/backup-something
and then just load it back by reversing the arguments.
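For example, restoring would be the same command with the source and destination swapped (placeholders as above):

kubectl cp /tmp/backup-something <some-namespace>/<some-pod>:/var/lib/something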
Also, for Helm usage: delete the StatefulSet, then issue a helm upgrade command, and it will recreate the missing StatefulSet without touching PVCs and PVs.

How to verify the rolling update?

I tried to automate the rolling update when ConfigMap changes are made. But I am confused about how I can verify whether the rolling update is successful or not. I found the command
kubectl rollout status deployment test-app -n test
But I guess this is used when we are performing a rollback rather than a rolling update. What's the best way to know whether the rolling update is successful or not?
I think it is fine,
kubectl rollout status deployments/test-app -n test can be used to verify the rollout of a deployment as well.
As an additional step, you can run,
kubectl rollout history deployments/test-app -n test as well.
If you need further clarity, run
kubectl get deployments -o wide and check the READY and IMAGE fields.
ConfigMap generation and rolling update
I tried to automate the rolling update when the configmap changes are made
It is good practice to create new resources instead of mutating (updating in place). kubectl kustomize supports this workflow:
The recommended way to change a deployment's configuration is to
create a new configMap with a new name,
patch the deployment, modifying the name value of the appropriate configMapKeyRef field.
You can deploy using Kustomize to automatically create a new ConfigMap every time you want to change content by using configMapGenerator. The old ones can be garbage collected when not used anymore.
With Kustomize's configMapGenerator you get a generated name.
Example
kind: ConfigMap
metadata:
  name: example-configmap-2-g2hdhfc6tk
This name gets reflected in your Deployment, which then triggers a new rolling update, but with a new ConfigMap, leaving the old one unchanged.
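A minimal kustomization.yaml that produces such a generated ConfigMap could look roughly like this (file and key names are illustrative):

configMapGenerator:
- name: example-configmap-2
  literals:
  - somekey=somevalue
resources:
- deployment.yaml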
Deploy both Deployment and ConfigMap using
kubectl apply -k <kustomization_directory>
When handling changes this way, you are following the practice called Immutable Infrastructure.
Verify deployment
To verify a successful deployment, you are right. You should use:
kubectl rollout status deployment test-app -n test
When leaving the old ConfigMap unchanged and creating a new ConfigMap for the new ReplicaSet, it is clear which ConfigMap belongs to which ReplicaSet.
Rollback will also be easier to understand, since the old and new ReplicaSets each use their own ConfigMap (on change of content).
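If a rollback is ever needed, it is then a single command against the Deployment (name and namespace from the question):

kubectl rollout undo deployment test-app -n test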
Your command is fine to check if an update went through.
Now, a ConfigMap change eventually gets applied to the Pod. There is no need to do a rolling update for that. Depending on what you have passed in the ConfigMap, you probably could have restarted the service and that's it.
What's the best way to know if the rolling update is successful or not?
To check whether your rolling update was executed correctly, your command works fine; you could also check whether your replicas are running.
I tried to automate the rolling update when the configmap changes are made.
You could use Reloader to perform your rolling updates automatically when a ConfigMap/Secret is changed.
Reloader can watch changes in ConfigMap and Secret and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, Daemonsets and Statefulsets.
Let's explore how Reloader works in a practical way, using an nginx deployment as an example.
First install Reloader in your cluster:
kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
You will see a new container named reloader-reloader-...; this container is responsible for 'watching' your deployments and performing the rolling updates when necessary.
Create a ConfigMap with your values. In my case I'll create one called my-config and set a key called myvar.value with the value hello:
kubectl create configmap my-config --from-literal=myvar.value=hello
Now, let's create a simple deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
  labels:
    app: nginx
  annotations:
    configmap.reloader.stakater.com/reload: my-config
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: MYVAR
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: myvar.value
In this example, the nginx image is used; the value from my ConfigMap is read and set in an environment variable called MYVAR.
For Reloader to work, you must specify the name of your ConfigMap in the annotations; in the example above it is:
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: my-config
Apply the deployment example with kubectl apply -f mydeployment-example.yaml and check the variable in the new pod.
$ kubectl exec -it $(kubectl get pods -l=app=nginx --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}') env | grep MYVAR
MYVAR=hello
Now let's change the value of the variable:
Edit the ConfigMap with kubectl edit configmap my-config, change the value of myvar.value to hi, then save and close.
After saving, Reloader will recreate your container and pick up the new value from the ConfigMap.
To check if the rolling update was executed successfully:
kubectl rollout status deployment deployment-example
Check the new value:
$ kubectl exec -it $(kubectl get pods -l=app=nginx --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}') env | grep MYVAR
MYVAR=hi
That's all!
Check the Reloader GitHub repository for more options.
I hope it helps!
Mounted ConfigMaps are updated automatically (see the Kubernetes documentation reference).
To check the rollout history, use the steps below.
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout history deployment busybox
REVISION  CHANGE-CAUSE
1         kubectl create --filename=busybox.yaml --record=true
Update the image on the deployment as below:
$ kubectl set image deployment.apps/busybox *=busybox --record
deployment.apps/busybox image updated
Check the new rollout history, which will list the new change cause for the rollout:
$ kubectl rollout history deployment busybox
REVISION  CHANGE-CAUSE
1         kubectl create --filename=busybox.yaml --record=true
2         kubectl set image deployment.apps/busybox *=busybox --record=true
To roll back the deployment, use the undo subcommand of the rollout command:
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout undo deployment busybox
deployment.apps/busybox rolled back
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout history deployment busybox
REVISION  CHANGE-CAUSE
2         kubectl set image deployment.apps/busybox *=busybox --record=true
3         kubectl create --filename=busybox.yaml --record=true

What is the recommended way to get the pods of a Kubernetes deployment?

Especially considering all the asynchronous procedures involved with creating and updating a deployment, I find it difficult to reliably find the current pods associated with the current version of a given deployment.
Currently, I do:
1. Add unique labels to the deployment's template.
2. Get the revision number of the deployment.
3. Get all replica sets with the labels.
4. Filter them further to find the one with the correct revision number.
5. Extract the pod template hash from the replica set.
6. Get all pods with the labels plus the pod template hash.
This is awkward and complex. Besides, I am not sure that (4) and (6) are guaranteed to yield only the wanted objects. But I cannot filter by ownerReferences, can I?
Is there a more robust and simpler way?
When you create a Deployment, it creates a ReplicaSet, which creates Pods.
The ReplicaSet contains an "ownerReferences" field which includes the name and the UID of the parent Deployment.
Pods contain the same field with a link to the parent ReplicaSet.
Here is an example of ReplicaSet info:
# kubectl get rs nginx-deployment-569477d6d8 -o yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
...
  name: nginx-deployment-569477d6d8
  namespace: default
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: nginx-deployment
    uid: acf5fe8a-5d0e-11e8-b14f-42010a8000fc
...
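In practice, a rough way to go from a Deployment to its current Pods (label values are illustrative; the hash comes from the ReplicaSet name above) is to use the pod-template-hash label that the ReplicaSet stamps onto the Pods it creates:

# find the ReplicaSets that share the Deployment's labels
kubectl get rs -l app=nginx
# list the Pods created by a specific ReplicaSet via its pod-template-hash label
kubectl get pods -l app=nginx,pod-template-hash=569477d6d8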