Is there the concept of uploading a Deployment without causing pods to start? - kubernetes

(I am (all things considered) a Kubernetes rookie.)
I know that kubectl create -f myDeployment.yaml will send my deployment specification off to the cluster to be reified, and if it says to start three replicas of its contained pod template then Kubernetes will set about starting up three pods.
I wonder: is there a Kubernetes concept or practice of somehow uploading the deployment for reference later and then "activating" it later? Perhaps by, say, changing replicas from zero to some positive number? If this is not a meaningful question, or this isn't the Right Way To Think About Things, I'd appreciate pointers as well.

I don't think your idea would work well with Kubernetes. Firstly, there is no way of "pausing" a Deployment, ReplicationController, or ReplicaSet, besides setting the replicas to 0, as you mentioned.
The next issue is that the YAML you would get back from the apiserver isn't the same as the one you created: the controller manager adds annotations, default values, and status fields. So it would be hard to verify the Deployment that way.
IMO a better way to verify Deployments is to add them to a version control system and peer-review the YAML files. Then you can create or update them on the apiserver with kubectl apply -f myDeployment.yaml. If the Deployment is syntactically wrong, kubectl will complain about it and you can fix the Deployment accordingly. This also simplifies the update procedure for Deployments.
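For illustration, a minimal version of that workflow might look like this (the file and deployment names are just examples, and kubectl diff requires a reasonably recent kubectl):

kubectl diff -f myDeployment.yaml                 # optional: preview what would change before applying
kubectl apply -f myDeployment.yaml                # create or update; fails fast on syntax errors
# if you do go the "replicas: 0, activate later" route from the question:
kubectl scale deployment my-deployment --replicas=3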

A Deployment can be paused; please refer to https://kubernetes.io/docs/user-guide/deployments/#pausing-and-resuming-a-deployment, or see kubectl rollout pause -h.
You can adjust the replicas of a paused Deployment, but changes to the pod template will not trigger a rollout. If the Deployment is paused in the middle of a rollout, it will not continue until you resume it.
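For example (using the nginx-deployment name from the linked docs):

kubectl rollout pause deployment/nginx-deployment
kubectl scale deployment/nginx-deployment --replicas=5                # takes effect even while paused
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1      # recorded, but no rollout yet
kubectl rollout resume deployment/nginx-deployment                    # now the image change rolls out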

Related

If we want to modify the configuration of running pods, is it advisable to do it on the Deployment or on the Pods?

If we have a requirement to modify a property of running pods, which is the recommended way, and what's the reason?
I guess that once a pod is deployed as part of a deployment, we can modify the pod's properties either with kubectl edit pod or with kubectl edit deploy.
I would like to understand whether there is any difference between these two actions.
Modify the Deployment, not the Pod.
Why?
The Deployment describes the desired state for your pods. The Deployment controller continuously watches the Deployment object in a control loop. It reads the desired pod state from the Deployment specification and tries to ensure that state in the cluster. So if you edit the pod and change something, the Deployment controller will overwrite the change on the next resync, because your modification is not present in the Deployment specification.
For the most part you can't edit the pods. In the API definition of a PodSpec, the containers and initContainers fields are both described as "Cannot be updated." Almost all of the interesting things in a Pod spec are in the Container sub-objects.
The corollary to this is that you can't "modify properties of running pods" for the most part; you can only delete and replace them with new pods with the properties you want. If you edit the pod template in a deployment spec, Kubernetes will do exactly that.
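As a small illustration (names are hypothetical), the supported path is to change the template through the Deployment, which replaces the pods for you:

kubectl set image deployment/my-deploy my-container=my-image:v2   # triggers a rolling replacement of the pods
kubectl edit pod my-deploy-6d4cf56db6-abcde                       # most fields are rejected as immutable, and anything
                                                                  # you do change is lost once the pod is recreated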

How to check if k8s deployment is passed/failed manually?

I'm trying to understand what kubectl rollout status <deployment name> does.
I'm using k8s-node-api, and from this thread (https://github.com/kubernetes-client/javascript/issues/536), the maintainer suggest using k8s-watch api to watch for changes in the deployment, but I'm not sure what to check.
Questions:
How to make sure the new deployment succeeded?
How to tell if the new deployment failed?
Is it safe to assume that if the spec/containers/0/image changes to something different than what I'm expecting, it means there is a new deployment and I should stop watching?
My questions are probably ambiguous because I'm new to k8s.
Any guidance will be great!
I can't use kubectl - I'm writing code that does this based on what kubectl does.
As we discussed in the comment section, to check any object or process in Kubernetes you have to use kubectl - see: kubernetes.io/docs/reference/kubectl/overview.
Take a look at how to execute the proper command to get the required information - kubectl-rollout.
If you want to check how the rollout process works in the background, look at the source code - src-code-rollout-kubernetes.
Pay attention to the following if you are using node-api:
The node-api group was migrated to a built-in API in the k8s.io/api repo with the v1.14 release. This repo is no longer maintained, and no longer synced with core kubernetes as of the v1.18 release.
I often use the following two commands to check deployment status:
kubectl describe deployment <your-deployment-name>
kubectl get deployment <your-deployment-name> -oyaml
The first will show you events about the process of scheduling the deployment.
The second is more detailed: it contains all of your deployment's resource info in YAML format.
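If it helps, what you typically look at in the second command's output is the status section; a trimmed example (the values will differ in your cluster) looks roughly like:

status:
  observedGeneration: 7        # the controller has processed this generation of the spec
  replicas: 3
  updatedReplicas: 3
  readyReplicas: 3
  availableReplicas: 3
  conditions:
  - type: Available
    status: "True"
  - type: Progressing
    status: "True"
    reason: NewReplicaSetAvailable   # rollout finished; a failed one shows ProgressDeadlineExceeded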
Is that enough for your needs?
After digging through the k8s source code, I was able to implement this logic myself in Node.js:
How to make sure the new deployment succeeded?
How to tell if the new deployment failed?
https://github.com/stavalfi/era-ci/blob/master/packages/steps/src/k8s/utils.ts#L387
Basically, I'm subscribing to events about a specific deployment (AFTER changing something in it, for example, the image).
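For reference, the checks I ended up reproducing are roughly what kubectl rollout status does; expressed with plain kubectl/jsonpath (the deployment name is just an example), the idea is:

kubectl get deploy my-deploy -o jsonpath='{.metadata.generation} {.status.observedGeneration}'
# wait until observedGeneration >= generation, i.e. the controller has seen the latest spec
kubectl get deploy my-deploy -o jsonpath='{.spec.replicas} {.status.updatedReplicas} {.status.availableReplicas}'
# success: updatedReplicas == spec.replicas and availableReplicas == spec.replicas
# failure: the Progressing condition flips to reason ProgressDeadlineExceeded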
Is it safe to assume that if the spec/containers/0/image changes to something different than what I'm expecting, it means there is a new deployment and I should stop watching?
Yes. But https://github.com/stavalfi/era-ci/blob/master/packages/steps/src/k8s/utils.ts#L62 will also help to identify that there is a new deployment going on and yours is no longer the "latest" deployment.
For More Info
I wrote an answer about how deployment works under the hood: https://stackoverflow.com/a/66092577/806963

kubernetes creates more pods than scale amount

I have encountered a strange situation in one of our clusters, where all of a sudden a number of new pods have been created so that we end up with a greater number of running pods than the scale amount.
So in the dashboard it will show
serviceX pods: 8/2
and then 8 running instances of that service
Questions
How can this possibly happen?
Is there an easy way to get rid of the extra pods (which all seem to be running)?
I have tried changing the scale amount in the dashboard and the extra pods do not disappear.
Both Pod and Deployment are full-fledged objects in the Kubernetes API. A Deployment manages creating Pods by means of ReplicaSets. What it boils down to is that the Deployment will create Pods with a spec taken from its template.
In your case the deployment named edgeservicepublic-svc is set to have 13 replicas. A Deployment is a kind of controller in Kubernetes, so naturally this controller will continuously check whether 13 pods exist. When a deployment is added to the cluster, it will automatically spin up the requested number of pods and then monitor them. If a pod dies, the deployment will automatically re-create it. Probably at first not enough pods were created, so the controller keeps working to reach the desired number of them.
To make sure your deployment works properly, you can delete the deployment and verify that its pods are deleted. Make sure that you haven't set up an autoscaler ($ kubectl get hpa); if so, delete it. Then, if you want to change the deployment specification, edit the deployment configuration file and apply the changes ($ kubectl apply -f deployment_configuration_file.yaml).
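Concretely, that boils down to something like the following (the HPA name is hypothetical):

kubectl get hpa                                        # is a HorizontalPodAutoscaler overriding the replica count?
kubectl delete hpa my-hpa                              # remove it if it shouldn't be there
kubectl delete deployment edgeservicepublic-svc       # the pods it manages should disappear too
kubectl apply -f deployment_configuration_file.yaml   # recreate it with the desired replica count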
Useful documentation about deployments and autoscaling in the context of GKE.
EDIT:
Basically, first check the autoscaler, then delete it if it exists. I told you to delete the deployment because you said that you tried to change the scale amount / number of replicas. So if you want to be 100% sure that the changes are applied, delete the whole deployment and then recreate it with the desired number of replicas. Of course you can just apply changes in the deployment configuration file ($ kubectl edit ...) or ($ kubectl apply -f), but sometimes existing pods are not deleted, so deleting the deployment is safer. You could also create a new deployment with the same parameters but a different name.

Kubernetes Automated Rollbacks

I was trying to find an alternative to the docker-swarm rollback command, which allows you to specify a rollback strategy in the deployment file.
In k8s, ideally it should use the readinessProbe, and if the probe doesn't pass within failureThreshold it should roll back before starting deployment of the next pod (to avoid downtime).
Currently, in my deployment script, I'm using the hook kubectl rollout status deployment $DEPLOYMENT_NAME || kubectl rollout undo deployment $DEPLOYMENT_NAME, which works, but it's not ideal because the first rollout command only errors long after the unhealthy pod has been deployed and a healthy one has been destroyed, which causes downtime.
Ideally, it shouldn't even kill the current pod before a new one passes the readinessProbe.
There is no specific rollback strategy in a Kubernetes Deployment. You could try a combination of RollingUpdate with maxUnavailable (aka Proportional Scaling), then at some point pause your deployment, resume if everything looks good, and roll back if something went wrong.
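A sketch of the relevant Deployment fragment (image, path, port, and numbers are placeholders; tune them to your capacity):

spec:
  minReadySeconds: 10          # only count a pod as available after it has been ready this long
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never drop below the desired replica count
      maxSurge: 1              # bring up one extra pod at a time
  template:
    spec:
      containers:
      - name: app
        image: registry.example.com/app:1.2.3
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          failureThreshold: 3

With maxUnavailable: 0, old pods are only taken down once their replacements pass the readinessProbe, so the rollout status || rollout undo hook no longer causes downtime while it waits.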
The recommended way is really to use another deployment as a canary: split the traffic through a load balancer between the canary and the non-canary, then, if everything goes well, upgrade the non-canary and shut down the canary. If something goes wrong, shut down the canary and keep the non-canary until the issue is fixed.
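A minimal sketch of that setup, with made-up names: one Service whose selector matches both tracks, and two Deployments whose replica counts set the rough traffic split.

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp              # deliberately omits the track label, so it matches both Deployments
  ports:
  - port: 80
    targetPort: 8080
# stable Deployment: labels app: myapp, track: stable, e.g. 9 replicas, current image
# canary Deployment: labels app: myapp, track: canary, e.g. 1 replica, new image
# traffic is split roughly 9:1; promote by updating the stable Deployment and deleting the canary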
Another strategy is to use something like Istio that facilitates canary deployments.

Kubernetes rolling update vs set image

After some intense Google and SO searching, I couldn't find any document that mentions both rolling-update and set image and stresses the difference between the two.
Can anyone shed some light? When would I use one rather than the other?
EDIT: It's worth mentioning that I'm already working with Deployments (rather than ReplicationControllers directly) and that I'm using YAML configuration files. It would also be nice to know if there's a way to perform either of those using configuration files rather than direct commands.
In older k8s versions the ReplicationController was the only resource to manage a group of replicated pods. To update the pods of a ReplicationController you use kubectl rolling-update.
Later, k8s introduced the Deployment which manages ReplicaSet resources. The Deployment could be updated via kubectl set image.
Working with Deployment resources (as you already do) is the preferred way. I guess the ReplicationController and its rolling-update command are mainly still there for backward compatibility.
UPDATE: As mentioned in the comments:
To update a Deployment I used kubectl patch, as it can also change things like adding new env vars, whereas kubectl set image is rather limited and can only change the image version. Also note that patch can be applied to all k8s resources and is not restricted to Deployments.
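Side by side, with hypothetical names:

kubectl set image deployment/my-deploy my-container=my-image:2.0   # can only change container images
kubectl patch deployment my-deploy -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-container","env":[{"name":"FEATURE_FLAG","value":"on"}]}]}}}}'
# patch (here a strategic merge patch) can touch any field of the spec, not just the image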
Later, I shifted my deployment processes to use Helm - a really neat, Kubernetes-native package management tool. I can highly recommend having a look at it.