We have deployed the etcd of our Kubernetes cluster as static pods, three of them. We want to update the pods to define some labels and a readiness probe for them. I have searched but found no question or article that covers this, so I'd like to know the best practice for upgrading a static pod.
For example, I found that modifying the yaml file directly may leave the pod unscheduled for a long time. Maybe I should remove the old file and create a new one instead?
You need to recreate the pod if you want to define a readiness probe for it; for labels, an edit should suffice.
Kubernetes throws the following error if you try to edit the readinessProbe:
# * spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
See also https://stackoverflow.com/a/40363057/499839
Have you considered using DaemonSets? https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
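If you stay with static pods, one way to recreate them cleanly is to move the manifest out of the directory the kubelet watches so the pod is stopped, edit it, then move it back. A sketch, assuming the kubeadm default staticPodPath of /etc/kubernetes/manifests and a manifest named etcd.yaml:
# on each control-plane node, one node at a time, to keep etcd quorum
mv /etc/kubernetes/manifests/etcd.yaml /tmp/etcd.yaml
# the kubelet notices the file is gone and stops the mirror pod
vi /tmp/etcd.yaml        # add the labels and readinessProbe here
mv /tmp/etcd.yaml /etc/kubernetes/manifests/etcd.yaml
# the kubelet recreates the pod from the updated manifest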
Related
I was trying to create a file before the application starts up in a Kubernetes cluster, using initContainers.
But when I set up the pod.yaml and try to apply it with "kubectl apply -f pod.yaml", it throws the error below:
(screenshot of the "spec: Forbidden: pod updates may not change fields other than ..." error)
Like the error says, you cannot update a Pod by adding or removing containers. To quote the documentation ( https://kubernetes.io/docs/concepts/workloads/pods/#pod-update-and-replacement ):
Kubernetes doesn't prevent you from managing Pods directly. It is possible to update some fields of a running Pod, in place. However, Pod update operations like patch, and replace have some limitations.
This is because you usually don't create Pods directly; instead you use Deployments, Jobs, StatefulSets (and more), which are higher-level resources that define Pod templates. When you modify the template, Kubernetes simply deletes the old Pod and then schedules the new version.
In your case:
You could delete the pod first, then create it again with the new spec you defined. But take into consideration that the Pod may be scheduled on a different node of the cluster (if you have more than one) and may get a different IP address, as Pods are disposable entities.
Or change your definition to a slightly more complex one, a Deployment ( https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ ), which can be changed as desired; each time you make a change to its definition, the old Pod will be removed and a new one will be scheduled.
From the spec of your Pod, I see that you are using a volume to share data between the init container and the main container. This is the right approach, but you don't necessarily need a hostPath. If the only purpose of the volume is to share data between the init container and the other containers, you can simply use the emptyDir type, which acts as a temporary volume that can be shared between containers and that will be cleaned up when the Pod is removed from the cluster for any reason.
You can check the documentation here: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
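A minimal sketch of that pattern (the names, image, and paths below are placeholders, not taken from your spec):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
  - name: prepare-file
    image: busybox
    # write the file the main container needs into the shared volume
    command: ["sh", "-c", "echo 'generated by init' > /work/index.html"]
    volumeMounts:
    - name: workdir
      mountPath: /work
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  volumes:
  - name: workdir
    emptyDir: {}    # shared scratch space, removed together with the Pod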
Let's say I have 10 pods running a stable version, and I wish to replace the image of one of them to run a newer version before a full rollout.
Is there a way to do that?
Not as such: every pod managed by a Deployment is expected to be identical, including running the same image. You can't change a pod's image once it's been created, and if you change the Deployment's image, it will try to recreate all of its managed pods.
If the only thing you're worried about is the pod starting up, the default behavior of a deployment is to start 25% of its specified replicas with the new image. The old pods will continue running uninterrupted until the new replicas successfully start and pass their readiness checks. If the new pods immediately go into CrashLoopBackOff state, the old pods will still be running.
If you want to start a pod specifically as a canary deployment, you can create a second Deployment to handle that. You'll need to include some label on the pods (for instance, canary: 'true') where you can distinguish the canary from main pods. This would be present in the pod spec, and in the deployment selector, but it would not be present in the corresponding Service selector: the Service matches both canary and non-canary pods. If this runs successfully then you can remove the canary Deployment and update the image on the main Deployment.
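A rough sketch of that layout (the names, labels, and images below are placeholders): the canary label keeps the two Deployments' selectors disjoint, while the Service selects only on app and therefore matches both sets of pods.
# The main Deployment is identical except for name: myapp-main, canary: "false", and the stable image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      canary: "true"
  template:
    metadata:
      labels:
        app: myapp
        canary: "true"
    spec:
      containers:
      - name: myapp
        image: myapp:new        # candidate image
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp                  # no canary key, so pods from both Deployments receive traffic
  ports:
  - port: 80
    targetPort: 8080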
Like the other answer mentioned, it sounds like you are talking about a canary deployment. You can do this with plain Kubernetes and also with Istio. I prefer Istio, as it gives you fine-grained control over traffic weighting, i.e. you could send 1% of traffic to the canary and 99% to the control, which is great for testing in production. It also lets you route using HTTP headers.
https://istio.io/latest/blog/2017/0.1-canary/
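For illustration, a weighted Istio route might look roughly like this (the host name and the v1/v2 subsets are assumptions; the subsets would have to be defined in a matching DestinationRule):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp
  http:
  - route:
    - destination:
        host: myapp
        subset: v1      # control
      weight: 99
    - destination:
        host: myapp
        subset: v2      # canary
      weight: 1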
If you want to do it with plain Kubernetes, just create two Deployments with unique names (myappv1 and myappv2, for example) that share the same app= label. Then create a Service whose selector is that app label. The Service will round-robin between the v1 and v2 Deployments.
For monitoring purposes, I want to rely on a pod's restartCount. However, I cannot seem to do that for certain apps, as restartCount is not reset even after rebooting the whole node the pod is scheduled to run on.
Usually, restarting a pod resets this, unless the pod name of the restarted pod is the same (e.g. true for etcd, kube-controller-manager, kube-scheduler and kube-apiserver).
For those cases, there is a long-running minor issue as well as the idea to use kubectl patch.
To sum up the info there, kubectl edit will not allow changing anything in status. Unfortunately, neither does, for example,
kubectl -n kube-system patch pod kube-controller-manager-some.node.name --type='json' -p='[{"op": "replace", "path": "/status/containerStatuses/0/restartCount", "value": 14}]'
The Pod "kube-controller-manager-some.node.name" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
So, I am checking if anyone has found a workaround?
Thanks!
Robert
This seems to be quite an old issue (2017). Take a look here.
I believe the solution was supposed to be implementing unique UIDs for static pods. The issue got reopened a few days ago as another GitHub issue and hasn't been implemented to this day.
I have found a workaround for it. You need to change the static pod manifest file, e.g. by adding some random annotation to the pod.
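For example (a sketch, assuming the kubeadm manifest path; the annotation key and value are arbitrary), editing the file in place is enough for the kubelet to recreate the mirror pod, which starts again with restartCount 0:
# on the node running the static pod
vi /etc/kubernetes/manifests/kube-controller-manager.yaml
# in the manifest, add or bump an arbitrary annotation:
metadata:
  annotations:
    force-restart: "2"    # change this value whenever you want a fresh pod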
Let me know if it was helpful.
For debugging and testing purposes I'd like to find the most convenient way of launching Kubernetes pods and altering their specification on the fly.
The launching part is quite easy with imperative commands.
Running
kubectl run nginx-test --image nginx --restart=Never
gives me exactly what I want: a single pod not managed by any controller like a Deployment or ReplicaSet. Easy to play with and clean up when needed.
However, when I try to edit the spec with
kubectl edit po nginx-test
I'm getting the following warning:
pods "nginx-test" was not valid:
* spec: Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations (only additions to existing tolerations)
i.e. only a limited set of Pod spec fields is editable at runtime.
OPTIONS FOUND SO FAR:
Saving the Pod spec into a file:
kubectl get po nginx-test -oyaml > nginx-test.yaml
then editing it and recreating the pod with
kubectl apply -f nginx-test.yaml
A bit heavyweight for changing just one field, though.
Creating a Deployment instead of a single Pod and then editing the spec section in the Deployment itself.
The cons are:
an additional API object (the Deployment) is needed, which you should not forget to clean up when you are done
the Pod names are autogenerated in the form nginx-test-xxxxxxxxx-xxxx and are less convenient to work with.
So is there any simpler option (or possibly some elegant workaround) for editing an arbitrary field in the Pod spec?
I would appreciate any suggestion.
You should absolutely use a Deployment here.
For the use case you're describing, most of the interesting fields on a Pod cannot be updated, so you need to manually delete and recreate the pod yourself. A Deployment manages that for you. If a Deployment owns a Pod, and you delete the Deployment, Kubernetes knows on its own to delete the matching Pod, so there's not really any more work.
(There's not really any reason to want a bare pod; you almost always want one of the higher-level controllers. The one exception I can think of is using kubectl run to get a debugging shell inside the cluster.)
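For that one exception, something like the following gives a throwaway shell that is removed again when you exit (busybox is just an example image):
kubectl run -it --rm debug-shell --image=busybox --restart=Never -- sh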
The generated Pod name can be a minor hassle. One trick that's useful here: with any reasonably recent kubectl, you can give the Deployment name to commands like kubectl logs:
kubectl logs deployment/nginx-test
There are also various "dashboard" type tools out there that will let you browse your current set of pods, so you can do things like read logs without having to copy-and-paste the full pod name. You may also be able to set up tab completion for kubectl, and type
kubectl logs nginx-test<TAB>
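Tab completion comes with kubectl itself; in bash, for example:
source <(kubectl completion bash)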
The env element added under spec.containers of a pod using the Kubernetes dashboard's Edit doesn't get saved. Does anyone know what the problem is?
Is there any other way to add environment variables to pods/containers?
I get this error when doing the same by editing the file using nano:
# pods "EXAMPLE" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
Thanks.
Not all fields can be updated. This is sometimes mentioned in the kubectl explain output for the object (and the error you got lists the fields that can be changed, so the others presumably cannot):
$ kubectl explain pod.spec.containers.env
RESOURCE: env <[]Object>
DESCRIPTION:
List of environment variables to set in the container. Cannot be updated.
EnvVar represents an environment variable present in a Container.
If you deploy your Pods using a Deployment object, then you can change the environment variables in that object with kubectl edit since the Deployment will roll out updated versions of the Pod(s) that have the variable changes and kill the older Pods that do not. Obviously, that method is not changing the Pod in place, but it is one way to get what you need.
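As a shortcut (the Deployment and variable names here are made up), kubectl set env changes the Deployment's pod template in one step and triggers the same rollout:
kubectl set env deployment/my-deployment MY_VAR=my-value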
Another option for you may be to use ConfigMaps. If you use the volume plugin method for mounting the ConfigMap and your application is written to be aware of changes to the volume and reload itself with new settings on change, it may be an option (or at least give you other ideas that may work for you).
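A minimal sketch of that approach (the ConfigMap name, key, image, and mount path are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-settings
data:
  settings.properties: |
    log.level=debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: my-app:latest        # placeholder image
    volumeMounts:
    - name: settings
      mountPath: /etc/app       # one file per ConfigMap key appears here
  volumes:
  - name: settings
    configMap:
      name: app-settings
Changes to the ConfigMap are eventually synced into the mounted files, so an application that watches /etc/app can reload without the Pod being recreated (this does not apply to keys mounted via subPath).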
We cannot edit the environment variables, resource limits, or service account of a pod that is live and running.
We can, however, edit/update the image name, tolerations (additions only), active deadline seconds, etc.
A Deployment, on the other hand, can be edited easily, because the Pod is defined as a child template inside the Deployment specification.
In order to "edit" a running pod with the desired changes, the following approach can be used:
extract the pod definition to a file, make the necessary changes, delete the existing pod, and create a new pod from the edited file:
kubectl get pod my-pod -o yaml > my-new-pod.yaml
vi my-new-pod.yaml
kubectl delete pod my-pod
kubectl create -f my-new-pod.yaml
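If I remember correctly, kubectl replace with the --force flag combines the delete and create steps into one command:
kubectl replace --force -f my-new-pod.yaml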
Not sure about others, but when I edited the pod YAML from the Google Kubernetes Engine workloads page, the same error came up for me. When I retried after some time, though, it worked.
It feels like some other update was going on at the same time earlier, so I tried editing the YAML quickly and applying the changes, and it worked.