What's the difference between patch and replace for a Deployment in k8s? - kubernetes

I want to update the image for a k8s Deployment, and I found two REST API verbs in k8s for updating a Deployment: PATCH and PUT.
The official documentation says that PATCH is for updating and PUT is for replacing, but after testing with the two commands:
kubectl patch -p ...
kubectl replace -f ...
there seems to be no difference between the two methods.
Both can be rolled back, and in both cases new pods with new names were created.
Is the only difference between these two commands the request body? (patch needs only the changed fields, while put needs the whole spec)

According to the documentation:
kubectl patch
is to change the live configuration of a Deployment object. You do not change the configuration file that you originally used to create the Deployment object.
kubectl replace
If replacing an existing resource, the complete resource spec must be provided.

replace is a full replacement. You have to have ALL the fields present.
patch is partial.
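As a sketch, with nginx-deployment and the container name nginx as placeholder names, the two styles of image update look like this:

```shell
# PATCH: send only the fields that change; containers are merged by name
# (strategic merge patch).
kubectl patch deployment nginx-deployment \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"nginx:1.9.1"}]}}}}'

# PUT (replace): the file must contain the complete resource spec.
kubectl get deployment nginx-deployment -o yaml > deploy.yaml
# ... edit the image field in deploy.yaml ...
kubectl replace -f deploy.yaml
```

Both end up triggering the same rollout, which is why the observable behaviour (new pods, rollback history) looks identical; only the request body differs.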

Related

Manually creating and editing Kubernetes objects

Most Kubernetes objects can be created with kubectl create, but if you need, say, a DaemonSet, you're out of luck: there is no imperative subcommand for it.
On top of that, the objects being created through kubectl can only be customized minimally (e.g. kubectl create deployment allows you to only specify the image to run and nothing else).
So, considering that Kubernetes actually expects you to either edit a minimally configured object with kubectl edit to suit your needs or write a spec from scratch and then use kubectl apply to apply it, how does one figure out all possible keywords and their meanings to properly describe the object they need?
I expected to find something similar to the Docker Compose file reference, but when looking at the DaemonSet docs, I found only a single example spec that doesn't even explain most of its keys.
The spec of the resources in .yaml file that you can run kubectl apply -f on is described in Kubernetes API reference.
Considering DaemonSet, its spec is described here. Its template is actually the same as in the Pod resource.
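As a minimal sketch (the names and image below are placeholders), every key here is documented in the API reference:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-ds         # placeholder name
spec:
  selector:
    matchLabels:
      app: example
  template:                # same schema as a Pod's spec
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: nginx:1.9.1 # placeholder image
```

Drilling down the API reference from DaemonSetSpec to PodTemplateSpec to PodSpec gives you the meaning of each key.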

Need to change pod definition before rolling update

Kubernetes newbie question: Can I somehow include my pod definition inside my deployment definition?
I currently have a pod.yml, a service.yml and a deployment.yml file. Both in pod.yml and deployment.yml I specify my docker image and tag: user/image:v1.
To do a rolling update, I tried doing kubectl set image deployment/api api=user/image:v2
However, that doesn't work on its own. It seems to conflict with the image tag in the pod definition: I also need to update the pod to tag v2 for kubectl set image to take effect. I feel like I'm doing something wrong. Thoughts?
Yes, you can include all definitions in one file. Have a look at the guestbook-all-in-one.yaml example.
The recommended way to do a rolling update is to change the file and then use apply:
$ vim guestbook-all-in-one.yaml # make the desired changes
$ kubectl apply -f guestbook-all-in-one.yaml # apply these changes
If possible, you should also have this file under version control, so that the file with the current status is always easily accessible.
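In particular, you usually don't need a standalone pod.yml at all: the Deployment already embeds the pod definition under spec.template, so kubectl set image has only one place to update. A sketch of such an all-in-one file (names and image are placeholders), with documents separated by ---:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:              # the embedded pod definition
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: user/image:v1
---
apiVersion: v1
kind: Service            # the service can live in the same file
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
  - port: 80
```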

Automated alternative for initiating a rolling update for a deployment

So in order to update the images running on a pod, I have to modify the deployment config (yaml file), and run something like kubectl apply -f deploy.yaml.
This means, if I'm not editing the yaml file manually I'll have to use some template / search and replace functionality. Which isn't really ideal.
Are there any better approaches?
It seems there is a kubectl rolling-update command, but I'm not sure if this works for 'deployments'.
For example running the following: kubectl rolling-update wordpress --image=eu.gcr.io/abcxyz/wordpress:deploy-1502443760
Produces an error of:
error: couldn't find a replication controller with source id == default/wordpress
I am using this for changing images in Deployments:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
If you view the yaml files as the source of truth, then use a tag like stable in the yaml and only issue kubectl set image commands when the tag is moved. Use the sha256 image digest to actually trigger a rollout: image names are compared as strings, so updating from :stable to :stable is a no-op even if the tag now points to a different image.
See updating a deployment for more details.
The above requires the deployment replica count to be set to more than 1, which is explained here: https://stackoverflow.com/a/45649024/1663462.
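A sketch of the digest-based rollout (the deployment name, container name, and the digest value are placeholders):

```shell
# Moving a tag alone does not trigger a rollout: :stable -> :stable is a
# plain string comparison and therefore a no-op. Pinning the image by
# digest makes every push observable to the Deployment.
kubectl set image deployment/wordpress \
  wordpress=eu.gcr.io/abcxyz/wordpress@sha256:<digest-of-the-new-image>
```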

updating the deployment, need to change multiple values

I am trying to automate the update to the deployment using
kubectl set
I have no issues using the kubectl set image command to push a new version of the docker image out, but I also need to add a new persistent disk for the new image to use. I don't believe I can set two different options using the set command. What would be the best option to do this?
http://kubernetes.io/docs/user-guide/managing-deployments/#in-place-updates-of-resources has the different options you have.
You can use kubectl apply to modify multiple fields at once.
Apply a configuration to a resource by filename or stdin. This resource will be created if it doesn't exist yet. To use 'apply', always create the resource initially with either 'apply' or 'create --save-config'. JSON and YAML formats are accepted.
Alternately, one can use kubectl patch.
Update field(s) of a resource using strategic merge patch. JSON and YAML formats are accepted.
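As a sketch, a single strategic merge patch can change the image and add the persistent disk together. All names below (my-deployment, app, data, my-pvc) are placeholders; containers and volumes are merged by name, so only the entries you list are touched:

```shell
# --type strategic is the default for kubectl patch; shown for clarity.
# The patch body may be YAML or JSON.
kubectl patch deployment my-deployment --type strategic -p '
spec:
  template:
    spec:
      containers:
      - name: app
        image: user/image:v2
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-pvc
'
```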

Kubernetes deployments: Editing the 'spec' of a pod's YAML file fails

The env element added in spec.containers of a pod using the K8s dashboard's Edit doesn't get saved. Does anyone know what the problem is?
Is there any other way to add environment variables to pods/containers?
I get this error when doing the same by editing the file using nano:
# pods "EXAMPLE" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
Thanks.
Not all fields can be updated. This is sometimes noted in the kubectl explain output for the object (and the error you got lists the fields that can be changed, so the others presumably cannot):
$ kubectl explain pod.spec.containers.env
RESOURCE: env <[]Object>
DESCRIPTION:
    List of environment variables to set in the container. Cannot be updated.
    EnvVar represents an environment variable present in a Container.
If you deploy your Pods using a Deployment object, then you can change the environment variables in that object with kubectl edit: the Deployment will roll out updated versions of the Pods that have the variable changes and kill the older Pods that do not. That method does not change a Pod in place, but it is one way to get what you need.
Another option for you may be to use ConfigMaps. If you use the volume plugin method for mounting the ConfigMap and your application is written to be aware of changes to the volume and reload itself with new settings on change, it may be an option (or at least give you other ideas that may work for you).
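A sketch of the volume-plugin approach (all names and the image are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  settings.properties: |
    log_level=debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: user/image:v1   # placeholder image
    volumeMounts:
    - name: config
      mountPath: /etc/app  # the app must watch this path for changes
  volumes:
  - name: config
    configMap:
      name: app-config
```

Editing the ConfigMap later updates the mounted files without touching the Pod spec, which sidesteps the immutable-fields restriction entirely.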
We cannot edit the env variables, resource limits, or service account of a pod that is running live.
But we can definitely edit/update the image name, tolerations, active deadline seconds, etc.
However, a Deployment can be easily edited, because the pod is a child template of the Deployment specification.
In order to "edit" a running pod with the desired changes, the following approach can be used.
Extract the pod definition to a file, make the necessary changes, delete the existing pod, and create a new pod from the edited file:
kubectl get pod my-pod -o yaml > my-new-pod.yaml
vi my-new-pod.yaml
kubectl delete pod my-pod
kubectl create -f my-new-pod.yaml
Not sure about others, but when I edited the pod YAML from the Google Kubernetes Engine workloads page, I got the same error. When I retried after some time, it worked.
It feels like another update was going on at the same time; when I edited the YAML quickly and applied the changes, it worked.