I'm trying to understand what kubectl rollout status <deployment name> does.
I'm using k8s-node-api, and from this thread (https://github.com/kubernetes-client/javascript/issues/536), the maintainer suggests using the k8s watch API to watch for changes in the deployment, but I'm not sure what to check.
Questions:
How do I make sure the new deployment succeeded?
How do I tell that the new deployment failed?
Is it safe to assume that if the spec/containers/0/image changes to something different than what I'm expecting, it means there is a new deployment and I should stop watching?
My questions are probably ambiguous because I'm new to k8s.
Any guidance will be great!
I can't use kubectl - I'm writing code that does the same thing, based on what kubectl does.
As I mentioned in the comment section, to inspect any object or process in Kubernetes you can use kubectl - see: kubernetes.io/docs/reference/kubectl/overview.
Take a look at how to run the proper command to get the required information - kubectl-rollout.
If you want to check how the rollout process works behind the scenes, look at the source code - src-code-rollout-kubernetes.
Pay attention to this if you are using node-api:
The node-api group was migrated to a built-in API in the k8s.io/api repo with the v1.14 release. This repo is no longer maintained, and no longer synced with core kubernetes as of the v1.18 release.
I often use the following 2 commands to check a deployment's status:
kubectl describe deployment <your-deployment-name>
kubectl get deployment <your-deployment-name> -oyaml
The first will show you some events about the process of scheduling the deployment.
The second is more detailed. It contains all of your deployment's resource info in YAML format.
Is that enough for your needs?
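Since the original question is about doing this from code rather than kubectl, here is a minimal sketch of reading the same information programmatically. It assumes @kubernetes/client-node with its pre-1.0 positional API; the deployment name and namespace are placeholders.
```typescript
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const apps = kc.makeApiClient(k8s.AppsV1Api);

// Roughly the information `kubectl get deployment <name> -o yaml` would show,
// reduced to the fields that matter for tracking a rollout.
async function printDeploymentStatus(name: string, namespace: string): Promise<void> {
  const { body: dep } = await apps.readNamespacedDeployment(name, namespace);
  console.log({
    desired: dep.spec?.replicas,
    updated: dep.status?.updatedReplicas,
    available: dep.status?.availableReplicas,
    conditions: dep.status?.conditions?.map(c => `${c.type}=${c.status} (${c.reason ?? ''})`),
  });
}

printDeploymentStatus('my-app', 'default').catch(console.error);
```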
After digging through the k8s source code, I was able to implement this logic myself in Node.js:
How do I make sure the new deployment succeeded?
How do I tell that the new deployment failed?
https://github.com/stavalfi/era-ci/blob/master/packages/steps/src/k8s/utils.ts#L387
Basically, I'm subscribing to events about a specific deployment (AFTER changing something in it, for example, the image).
Is it safe to assume that if the spec/containers/0/image changes to something different than what I'm expecting, it means there is a new deployment and I should stop watching?
Yes. But https://github.com/stavalfi/era-ci/blob/master/packages/steps/src/k8s/utils.ts#L62 will also help to identify that there is a new deployment going on and yours is no longer the "latest-deployment".
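For reference, here is a minimal sketch of the checks kubectl rollout status performs, applied to the objects a watch delivers. This is my reading of that logic, not a copy of the linked utils.ts; it assumes @kubernetes/client-node with its pre-1.0 positional API, and the name and namespace are placeholders.
```typescript
import * as k8s from '@kubernetes/client-node';

type RolloutState = 'in-progress' | 'complete' | 'failed';

// The same conditions kubectl rollout status checks on a Deployment.
function rolloutState(dep: k8s.V1Deployment): RolloutState {
  // The controller has not observed the latest spec yet.
  if ((dep.metadata?.generation ?? 0) > (dep.status?.observedGeneration ?? 0)) return 'in-progress';
  // kubectl reports failure when the Progressing condition says ProgressDeadlineExceeded.
  const progressing = dep.status?.conditions?.find(c => c.type === 'Progressing');
  if (progressing?.reason === 'ProgressDeadlineExceeded') return 'failed';
  const desired = dep.spec?.replicas ?? 1;
  if ((dep.status?.updatedReplicas ?? 0) < desired) return 'in-progress';                      // new pods still rolling out
  if ((dep.status?.replicas ?? 0) > (dep.status?.updatedReplicas ?? 0)) return 'in-progress';  // old pods still terminating
  if ((dep.status?.availableReplicas ?? 0) < (dep.status?.updatedReplicas ?? 0)) return 'in-progress';
  return 'complete';
}

// Watch a single deployment (after changing its image) and stop once the rollout settles.
async function waitForRollout(name: string, namespace: string): Promise<void> {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault();
  const watch = new k8s.Watch(kc);
  let req: { abort: () => void } | undefined;
  req = await watch.watch(
    `/apis/apps/v1/namespaces/${namespace}/deployments`,
    { fieldSelector: `metadata.name=${name}` },
    (_phase, dep: k8s.V1Deployment) => {
      const state = rolloutState(dep);
      console.log('rollout state:', state);
      if (state !== 'in-progress') req?.abort();
    },
    err => { if (err) console.error(err); },
  );
}

waitForRollout('my-app', 'default').catch(console.error);
```
The failure case is driven by spec.progressDeadlineSeconds on the Deployment, which defaults to 600 seconds.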
For More Info
I wrote an answer about how deployment works under the hood: https://stackoverflow.com/a/66092577/806963
Related
I have a Kubernetes cluster and do all my deployments in a declarative manner, by executing the "apply" command in the CI/CD pipeline.
"Apply" works in such a way that it merges the state of an object with the manifest that came with the command. As a result, you may make manual changes - for example, add a new key-value to a ConfigMap - and "apply" will leave it intact, even though this key doesn't exist in source code.
So I wonder, how can I detect such issues? Doing "delete" and "create" is not an option since it disrupts availability. I don't want to move away from "apply" since it's production. I just want to find manual modifications in a namespace.
kubediff is what you are looking for.
You can run it as a command-line tool e.g.:
$ ./kubediff k8s
Checking ReplicationController 'kubediff'
*** .spec.template.spec.containers[0].args[0]: '-repo=https://github.com/weaveworks/kubediff' != '-repo=https://github.com/<your github repo>'
Checking Secret 'kubediff-secret'
Checking Service 'kubediff'
or as a service inside K8s cluster. This mode also gives you a simple UI showing the output.
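If you'd rather run a similar check from code instead of the kubediff CLI, here is a minimal sketch of the idea for a single ConfigMap. It assumes @kubernetes/client-node (pre-1.0 positional API) and js-yaml; the file, object and namespace names are placeholders. It flags keys whose live value differs from the manifest as well as keys that exist only in the cluster, i.e. the manual additions that "apply" leaves intact.
```typescript
import { readFileSync } from 'fs';
import * as yaml from 'js-yaml';
import * as k8s from '@kubernetes/client-node';

// Compare the .data map of a local manifest against the live object in the cluster.
function diffData(manifestData: Record<string, string> = {}, liveData: Record<string, string> = {}): string[] {
  const problems: string[] = [];
  for (const key of Object.keys(manifestData)) {
    if (liveData[key] !== manifestData[key]) problems.push(`.data.${key}: live value differs from manifest`);
  }
  for (const key of Object.keys(liveData)) {
    if (!(key in manifestData)) problems.push(`.data.${key}: present in cluster but not in the manifest`);
  }
  return problems;
}

async function checkConfigMap(file: string, name: string, namespace: string): Promise<void> {
  const manifest = yaml.load(readFileSync(file, 'utf8')) as k8s.V1ConfigMap;
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault();
  const core = kc.makeApiClient(k8s.CoreV1Api);
  const { body: live } = await core.readNamespacedConfigMap(name, namespace);
  diffData(manifest.data, live.data).forEach(line => console.log('***', line));
}

checkConfigMap('configmap.yaml', 'my-config', 'default').catch(console.error);
```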
kubectl patch or kubectl replace are suitable for your use case. Take a look at this blog, which explains the difference between them well.
If you always want to update the ConfigMap to whatever is in your manifest, then go for replace.
I cannot find a way to roll back my deployment directly, like kubectl rollout undo xxx, in the third-party k8s client fabric8, but the client does offer ways to modify k8s objects.
So I'm wondering: is there a way to roll back a deployment by modifying the Kubernetes Deployment object?
There is no direct way to do something like that in the fabric8 kubernetes-client.
So you will have to simulate the kubectl rollout command using the client to modify the Deployment object (pretty much as you described).
This is however a really neat feature and I am sure that it wouldn't be that hard to add to the client.
I can see that you've already raised an issue about it, so we can take it from there.
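In the meantime, the rollback can be simulated by hand. fabric8 is a Java client, so the sketch below only illustrates the idea using the JavaScript client discussed earlier in this thread (@kubernetes/client-node, pre-1.0 positional API): find the previous ReplicaSet owned by the Deployment and copy its pod template back into the Deployment spec, which is roughly what kubectl rollout undo does. Names and the revision selection are illustrative.
```typescript
import * as k8s from '@kubernetes/client-node';

const REVISION = 'deployment.kubernetes.io/revision';

async function rollbackDeployment(name: string, namespace: string): Promise<void> {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault();
  const apps = kc.makeApiClient(k8s.AppsV1Api);

  // ReplicaSets created by the Deployment carry a revision annotation; sort newest first.
  const { body: rsList } = await apps.listNamespacedReplicaSet(namespace);
  const owned = rsList.items
    .filter(rs => rs.metadata?.ownerReferences?.some(o => o.kind === 'Deployment' && o.name === name))
    .sort((a, b) =>
      Number(b.metadata?.annotations?.[REVISION] ?? 0) - Number(a.metadata?.annotations?.[REVISION] ?? 0));

  const template = owned[1]?.spec?.template; // owned[0] is the current revision, owned[1] the previous one
  if (!template) throw new Error('no previous revision to roll back to');

  // kubectl drops the ReplicaSet-specific pod-template-hash label before reusing the template.
  if (template.metadata?.labels) delete template.metadata.labels['pod-template-hash'];

  // Writing the old template back into the Deployment triggers a new rollout to that state.
  await apps.patchNamespacedDeployment(
    name, namespace, { spec: { template } },
    undefined, undefined, undefined, undefined,
    { headers: { 'Content-Type': 'application/strategic-merge-patch+json' } },
  );
}

rollbackDeployment('my-app', 'default').catch(console.error);
```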
After some intense Google and SO searching, I couldn't find any document that mentions both rolling-update and set image and stresses the difference between the two.
Can anyone shed some light? When would I use one rather than the other?
EDIT: It's worth mentioning that I'm already working with Deployments (rather than ReplicationControllers directly) and that I'm using YAML configuration files. It would also be nice to know if there's a way to perform any of those using configuration files rather than direct commands.
In older k8s versions the ReplicationController was the only resource to manage a group of replicated pods. To update the pods of a ReplicationController you use kubectl rolling-update.
Later, k8s introduced the Deployment which manages ReplicaSet resources. The Deployment could be updated via kubectl set image.
Working with Deployment resources (as you already do) is the preferred way. I guess the ReplicationController and its rolling-update command are mainly still there for backward compatibility.
UPDATE: As mentioned in the comments:
To update a Deployment I used kubectl patch, as it can also change other things, like adding new env vars, whereas kubectl set image is rather limited and can only change the image version (a sketch of such a patch follows below). Also note that patch can be applied to all k8s resources and is not restricted to Deployments.
Later, I shifted my deployment processes to use helm - a really neat and k8s-native package management tool. I can highly recommend having a look at it.
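For illustration, here is a minimal sketch of the kind of patch mentioned in the update above: one strategic-merge patch that both bumps the image and adds an env var, which kubectl set image alone cannot do. It assumes @kubernetes/client-node (pre-1.0 positional API); the container name, image and variable are placeholders, and the same patch body can be fed to kubectl patch.
```typescript
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const apps = kc.makeApiClient(k8s.AppsV1Api);

// One strategic-merge patch: containers are merged by name, so this updates the image
// of the existing container and adds an env var in the same call.
const patch = {
  spec: {
    template: {
      spec: {
        containers: [{
          name: 'my-app',
          image: 'registry.example.com/my-app:1.2.3',
          env: [{ name: 'FEATURE_FLAG', value: 'on' }],
        }],
      },
    },
  },
};

apps.patchNamespacedDeployment(
  'my-app', 'default', patch,
  undefined, undefined, undefined, undefined,
  { headers: { 'Content-Type': 'application/strategic-merge-patch+json' } },
).catch(console.error);
```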
I've got a local deployment system that is mirroring our production system. Both are deployed by calling kubectl apply -f deployments-and-services.yaml
I'm tagging all builds with the current git hash, which means that for clean deploys to GKE, all the services have a new docker image tag which means that apply will restart them, but locally to minikube the tag is often not changing which means that new code is not run. Before I was working around this by calling kubectl delete and then kubectl create for deploying to minikube, but as the number of services I'm deploying has increased, that is starting to stretch the dev cycle too far.
Ideally, I'd like a better way to tell kubectl apply to restart a deployment rather than just depending on the tag.
I'm curious how people have been approaching this problem.
Additionally, I'm building everything with bazel which means that I have to be pretty explicit about setting up my build commands. I'm thinking maybe I should switch to just delete/creating the one service I'm working on and leave the others running.
But in that case, maybe I should just look at telepresence and run the service I'm dev'ing on outside of minikube all together? What are best practices here?
I'm not entirely sure I understood your question but that may very well be my reading comprehension :)
In any case, here are a few thoughts that popped up while reading this (again, not sure what you're trying to accomplish).
Option 1: maybe what you're looking for is to scale down and back up, i.e. scale your deployment to, say, 0 and then back up. Given you're using a ConfigMap and maybe you only want to update that, the command would be kubectl scale --replicas=0 -f foo.yaml and then back to whatever you need.
Option 2: if you want to apply the deployment without killing any pods, for example, look into the cascade=false option (google it).
Option 3: look up the rollout option to manage deployments; not sure if it works on services though.
Finally, and this is just me talking: share some more details, like which version of k8s you are using, and maybe provide an actual use-case example to better describe the issue.
Kubernetes only triggers a new rollout when something has changed. If your image pull policy is set to Always, you can delete your pods to get the new image. If you want Kubernetes to handle the deployment, you can update the YAML file to contain a constantly changing metadata field (I use seconds since epoch), which will trigger a change. Ideally, you should be tagging your images with unique tags from your CI/CD pipeline, using the commit reference they were built from. This gets around the issue and allows you to take full advantage of the Kubernetes rollback feature.
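Here is a minimal sketch of the "constantly changing metadata field" trick done via the API instead of editing the YAML by hand: patching an annotation on the pod template changes the template, so the Deployment controller rolls new pods even if the image tag is unchanged (newer kubectl versions expose this as kubectl rollout restart). It assumes @kubernetes/client-node (pre-1.0 positional API); the annotation key and names are placeholders.
```typescript
import * as k8s from '@kubernetes/client-node';

// Bump an annotation on the pod template so the template differs from the previous one,
// forcing the Deployment controller to roll new pods even with an unchanged image tag.
async function forceRollout(name: string, namespace: string): Promise<void> {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault();
  const apps = kc.makeApiClient(k8s.AppsV1Api);
  const patch = {
    spec: {
      template: {
        metadata: {
          // "redeploy/timestamp" is an arbitrary, made-up annotation key; seconds since epoch as the value.
          annotations: { 'redeploy/timestamp': String(Math.floor(Date.now() / 1000)) },
        },
      },
    },
  };
  await apps.patchNamespacedDeployment(
    name, namespace, patch,
    undefined, undefined, undefined, undefined,
    { headers: { 'Content-Type': 'application/strategic-merge-patch+json' } },
  );
}

forceRollout('my-app', 'default').catch(console.error);
```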
(I am (all things considered) a Kubernetes rookie.)
I know that kubectl create -f myDeployment.yaml will send my deployment specification off to the cluster to be reified, and if it says to start three replicas of its contained pod template then Kubernetes will set about starting up three pods.
I wonder: is there a Kubernetes concept or practice of somehow uploading the deployment for reference later and then "activating" it later? Perhaps by, say, changing replicas from zero to some positive number? If this is not a meaningful question, or this isn't the Right Way To Think About Things, I'd appreciate pointers as well.
I don't think your idea would work well with Kubernetes. Firstly, there is no way of "pausing" a Deployment or any other ReplicationController or ReplicaSet, besides setting the replicas to 0, as you mentioned.
The next issue is that the YAML you would get from the apiserver isn't the same as the one you created. The controller manager adds some annotations, default values and statuses. So it would be hard to verify the Deployment that way.
IMO a better way to verify Deployments is to add them to a version control system and peer-review the YAML files. Then you can create or update them on the apiserver with kubectl apply -f myDeployment.yaml. If the Deployment is syntactically wrong, kubectl will complain about it and you can patch the Deployment accordingly. This also simplifies the update procedure of Deployments.
A Deployment can be paused; please refer to https://kubernetes.io/docs/user-guide/deployments/#pausing-and-resuming-a-deployment, or see the information in kubectl rollout pause -h.
You can adjust the replicas of a paused deployment, but changes to the pod template will not trigger a rollout. If the deployment is paused in the middle of a rollout, it will not continue until you resume it.
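Since the paused state is just a field in the Deployment spec, the behaviour described above can also be driven from code. Here is a minimal sketch assuming @kubernetes/client-node (pre-1.0 positional API); names and the replica count are placeholders.
```typescript
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const apps = kc.makeApiClient(k8s.AppsV1Api);
const patchOpts = { headers: { 'Content-Type': 'application/strategic-merge-patch+json' } };

async function pauseScaleResume(name: string, namespace: string): Promise<void> {
  // Pause: template changes made from now on will not trigger a rollout.
  await apps.patchNamespacedDeployment(name, namespace, { spec: { paused: true } },
    undefined, undefined, undefined, undefined, patchOpts);
  // Scaling still works while the deployment is paused.
  await apps.patchNamespacedDeployment(name, namespace, { spec: { replicas: 3 } },
    undefined, undefined, undefined, undefined, patchOpts);
  // Resume: any pending template changes are rolled out from this point.
  await apps.patchNamespacedDeployment(name, namespace, { spec: { paused: false } },
    undefined, undefined, undefined, undefined, patchOpts);
}

pauseScaleResume('my-app', 'default').catch(console.error);
```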