Need to change pod definition before rolling update - kubernetes

Kubernetes newbie question: Can I somehow include my pod definition inside my deployment definition?
I currently have a pod.yml, a service.yml and a deployment.yml file. In both pod.yml and deployment.yml I specify my Docker image and tag: user/image:v1.
To do a rolling update, I tried doing kubectl set image deployment/api api=user/image:v2
However, that doesn't work on its own. It seems to conflict with the image tag in the pod definition: I need to also update the pod to tag v2 for kubectl set image to work. I feel like I'm doing something wrong. Thoughts?

Yes, you can include all definitions in one file. Have a look at the guestbook-all-in-one.yaml example.
The recommended way to do a rolling update is to change the file and then use apply:
$ vim guestbook-all-in-one.yaml # make the desired changes
$ kubectl apply -f guestbook-all-in-one.yaml # apply these changes
If possible, you should also have this file under version control, so that the file with the current status is always easily accessible.
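For illustration, here is a minimal sketch of such a combined file, reusing the api deployment and image names from the question (the labels, replica count and ports are assumptions); multiple resources simply go in one file separated by ---:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: user/image:v1
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 8080

With this layout there is no separate pod.yml; the pod template lives inside the Deployment, so kubectl set image (or editing the file and re-applying it) has only one place to update.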

Related

How does Kustomize find the deployment's yaml file?

Following this GitHub Actions auto-deploy to GKE workflow:
https://docs.github.com/en/actions/deployment/deploying-to-your-cloud-provider/deploying-to-google-kubernetes-engine
In the last step, notice these lines:
./kustomize edit set image gcr.io/PROJECT_ID/IMAGE:TAG=gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA
./kustomize build . | kubectl apply -f -
How does Kustomize know which file to change the image in?
Does it find the file by searching, then grab it fully and just apply changes on it?
How does it work?
How does Kustomize know which file to change the image in? Does it find the file by searching, then grab it fully and just apply changes on it?
Kustomize doesn't know in which file to change the image. For the most part, Kustomize doesn't care about "files". In this case, the command is adding a configuration for the image transformer. Running that kustomize edit command...
kustomize edit set image gcr.io/PROJECT_ID/IMAGE:TAG=gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA
Adds a configuration like this to your kustomization.yaml:
images:
- name: gcr.io/PROJECT_ID/IMAGE:TAG
  newName: gcr.io/my_project/my_image
  newTag: 12345678
This means "whenever you find an image reference for gcr.io/PROJECT_ID/IMAGE:TAG, replace it with the given values". This will work for deployments, pods, statefulsets, daemonsets, and all other native kubernetes resources that contain image references.

What's the differences between patch and replace the deployment in k8s?

I want to update the image for a k8s deployment, and I found two REST APIs in k8s for updating the deployment: PATCH and PUT.
I found out that PATCH is for updating and PUT is for replacing in the official documentation, but after testing with the two commands:
kubectl patch -p ...
kubectl replace -f ...
there seems to be no difference between the two methods.
Both of them can be rolled back, and the name of the new pod changes in both cases.
Is the only difference between these two commands in the request body? (patch needs only the changed part, while put needs the whole spec)
According to the documentation:
kubectl patch
is to change the live configuration of a Deployment object. You do not change the configuration file that you originally used to create the Deployment object.
kubectl replace
If replacing an existing resource, the complete resource spec must be provided.
replace is a full replacement. You have to have ALL the fields present.
patch is partial.
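As an illustration of the difference in what you send (the deployment name, container name and image below are just placeholders), a patch can carry only the field being changed, while replace needs a file containing the full resource:

# patch: send only the changed fields; the containers list is merged by container name
kubectl patch deployment api -p '{"spec":{"template":{"spec":{"containers":[{"name":"api","image":"user/image:v2"}]}}}}'

# replace: deployment.yaml must contain the complete resource spec
kubectl replace -f deployment.yaml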

How to update a Deployment via editing yml file

The official Kubernetes guidelines instruct updating the deployment either by performing a command-line set:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
or by inline editing (which will launch the default editor, I guess):
kubectl edit deployment/nginx-deployment
However, both processes make consistency harder, given that one then needs to go and update the my-deployment.yml file offline, which is where the up-and-running deployment came from (and this deprives one of the advantage of keeping manifests version-controlled).
Is there a way to
launch a deployment via the file
perform (when needed) updates to the same file
update the deployment by pointing to the same, updated file?
You can do it simply by following these steps:
Edit the deployment.yaml file
Run the command below:
kubectl apply -f deployment.yaml
This is what I usually follow. You can also use kubectl patch or kubectl edit.
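A minimal sketch of that workflow, reusing the nginx-deployment name from the question (the rollout status step is optional):

vim deployment.yaml                                   # e.g. bump image: nginx:1.9.1
kubectl apply -f deployment.yaml                      # push the change to the cluster
kubectl rollout status deployment/nginx-deployment    # watch the rollout complete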

Automated alternative for initiating a rolling update for a deployment

So in order to update the images running on a pod, I have to modify the deployment config (yaml file), and run something like kubectl apply -f deploy.yaml.
This means, if I'm not editing the yaml file manually, I'll have to use some template or search-and-replace functionality, which isn't really ideal.
Are there any better approaches?
It seems there is a kubectl rolling-update command, but I'm not sure if this works for 'deployments'.
For example, running the following: kubectl rolling-update wordpress --image=eu.gcr.io/abcxyz/wordpress:deploy-1502443760
Produces an error of:
error: couldn't find a replication controller with source id == default/wordpress
I am using this for changing images in Deployments:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
If you view the YAML files as the source of truth, then use a tag like stable in the YAML and only issue kubectl set image commands when the tag is moved (use the sha256 image ID to actually trigger a rollout; image names are matched as strings, so updating from :stable to :stable is a no-op even if the tag now points to a different image).
See updating a deployment for more details.
The above requires the deployment replica count to be set to more than 1, which is explained here: https://stackoverflow.com/a/45649024/1663462.
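A sketch of the digest-based variant mentioned above, reusing the wordpress deployment from the question (the container name and digest value are placeholders):

kubectl set image deployment/wordpress wordpress=eu.gcr.io/abcxyz/wordpress@sha256:<digest>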

Kubernetes deployments: Editing the 'spec' of a pod's YAML file fails

The env element added in spec.containers of a pod using the K8s dashboard's Edit doesn't get saved. Does anyone know what the problem is?
Is there any other way to add environment variables to pods/containers?
I get this error when doing the same by editing the file using nano:
# pods "EXAMPLE" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
Thanks.
Not all fields can be updated. This fact is sometimes mentioned in the kubectl explain output for the object (and the error you got lists the fields that can be changed, so the others probably cannot):
$ kubectl explain pod.spec.containers.env
RESOURCE: env <[]Object>

DESCRIPTION:
     List of environment variables to set in the container. Cannot be updated.

     EnvVar represents an environment variable present in a Container.
If you deploy your Pods using a Deployment object, then you can change the environment variables in that object with kubectl edit since the Deployment will roll out updated versions of the Pod(s) that have the variable changes and kill the older Pods that do not. Obviously, that method is not changing the Pod in place, but it is one way to get what you need.
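For example, in a Deployment the env list sits inside the pod template, so changing it there (with kubectl edit or by re-applying the file) rolls out new Pods; the names and values below are illustrative:

spec:
  template:
    spec:
      containers:
      - name: example
        image: example:latest
        env:
        - name: LOG_LEVEL
          value: debug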
Another option for you may be to use ConfigMaps. If you use the volume plugin method for mounting the ConfigMap and your application is written to be aware of changes to the volume and reload itself with new settings on change, it may be an option (or at least give you other ideas that may work for you).
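A rough sketch of the volume approach (all names are placeholders): the ConfigMap data shows up as files under the mount path, and later changes to the ConfigMap are eventually reflected in the mounted volume without recreating the Pod.

      containers:
      - name: example
        image: example:latest
        volumeMounts:
        - name: config
          mountPath: /etc/config
      volumes:
      - name: config
        configMap:
          name: app-config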
We cannot edit the env variables, resource limits, or service account of a pod that is running live.
But we can definitely edit/update the image name, tolerations, active deadline seconds, etc.
However, the Deployment can be edited easily, because the pod is a child template of the Deployment specification.
In order to "edit" the running pod with desired changes, the following approach can be used.
Extract the pod definition to a file, Make necessary changes, Delete the existing pod, and Create a new pod from the edited file:
kubectl get pod my-pod -o yaml > my-new-pod.yaml
vi my-new-pod.yaml
kubectl delete pod my-pod
kubectl create -f my-new-pod.yaml
Not sure about others, but when I edited the pod YAML from the Google Kubernetes Engine workloads page, the same error came up for me. When I retried after some time, it worked.
It feels like some update was going on at the same time, so I tried to edit the YAML quickly and apply the changes, and it worked.