terminationGracePeriodSeconds in kind pod vs kind deployment - kubernetes

I'm trying to set a graceful shutdown period for my pods. I found out you can add a field called terminationGracePeriodSeconds to the Helm charts to set the period. I then looked for examples and came across these:
https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace
In the above link they define the value in a kind: Pod template.
https://pracucci.com/graceful-shutdown-of-kubernetes-pods.html
In the above link they define the value in a kind: Deployment template.
Is there a difference between the 2 kinds in regard to where I define this value?

Is there a difference between the 2 kinds in regard to where I define this value?
No. A Deployment has a field template:, which is actually a PodTemplateSpec (most of the structure of a Pod) and includes the same terminationGracePeriodSeconds property.
A good way to check the documentation for a field is to use kubectl explain.
E.g.
kubectl explain Deployment.spec.template.spec
and
kubectl explain Pod.spec
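For illustration, a minimal sketch showing the same field in both places (names and images are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod                        # placeholder name
spec:
  terminationGracePeriodSeconds: 60   # grace period set directly on a bare Pod
  containers:
  - name: app
    image: my-image:v1                # placeholder image
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment                 # placeholder name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      terminationGracePeriodSeconds: 60   # same field, under spec.template.spec
      containers:
      - name: app
        image: my-image:v1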

Related

Assign specific pods to statefulset and others to deployment

I'm moving from a Kubernetes StatefulSet to a Deployment. The StatefulSet has an associated persistent volume, but the Deployment will not have one.
To do this smoothly, I'd like half the replicas to run under the StatefulSet and half under the Deployment. I've read through some documentation and it seems I could use spec.selector.matchLabels to select which pods each one manages, but I don't understand exactly how to achieve that.
I'm also somewhat confused about whether I'd be selecting "nodes" or "pods" here, but it seems like pods are what I should care about.
How do I use stateful set for half my pods, and deployment for the other half?
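For context, spec.selector.matchLabels selects pods rather than nodes, and a workload's selector must match the labels in its own pod template; a minimal sketch with placeholder labels:

spec:
  selector:
    matchLabels:
      app: my-app        # placeholder; selects the pods this workload owns
  template:
    metadata:
      labels:
        app: my-app      # must match the selector above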

What does kind in Kubernetes YAML mean?

I'm very new to Kubernetes. What is meant by kind: Deployment? What are the different kinds? Is there any documentation available for this?
kind specifies the type of Kubernetes object to be created from the YAML file.
kind: Deployment represents the Kubernetes Deployment object.
You can use the following command to view different supported objects:
kubectl api-resources
You can also review the API reference for a detailed overview of the Kubernetes objects.
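For example, in this minimal manifest the kind field alone determines that a Namespace object gets created (the name is a placeholder):

apiVersion: v1
kind: Namespace
metadata:
  name: demo   # placeholder name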

Evaluate affinity rules

I updated a StatefulSet, and the pods that were deleted as part of that update are now pending forever. So I described the pods and saw that they cannot be scheduled on any node because the nodes don't match the pod affinity/anti-affinity rules. This StatefulSet, however, has no affinity rules at all.
My question
How can I evaluate the affinity rules of my statefulset, so that I can see what affinity rules are hindering these pods from starting?
I believe it must be a different deployment that is hindering these pods from starting up, but I am clueless as to which one it might be.
Check the following in order to determine the possible root cause:
1. Check whether your nodes have taints (kubectl describe node {Node_Name} | grep Taint); if they do, you'll need matching tolerations in order to schedule a workload on those nodes.
2. Check whether your definition has the field nodeName pointing to a node that doesn't exist.
3. As Prateek Jain recommended above, check your pod with describe in order to see what exactly is being overridden in your definition.
4. StatefulSet pods might be blocked from deletion because you have some pv-protection in place; the best way to troubleshoot that situation is running kubectl get events -n ${yournamespace}, which lists every event in your namespace.
Try to see if any warning or error message is displayed.
NOTE: If you get too many events, try to filter using --field-selector=type!=Normal,reason!=Unhealthy
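To evaluate the affinity rules actually set on a pending pod (as opposed to what your manifest declares), you can also dump the live spec; a small sketch, where the pod name is a placeholder:

# Print the effective affinity/anti-affinity of the pending pod, if any
kubectl get pod my-statefulset-0 -o jsonpath='{.spec.affinity}'
# Show scheduling events for just that pod
kubectl get events --field-selector involvedObject.name=my-statefulset-0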
✌

Schedule pod on specific node without modifying PodSpec

As a k8s cluster administrator, I want to specify on which nodes (using labels) pods will be scheduled, but without modifying any PodSpec section.
So, nodeSelector, affinity and taints can't be options.
Is there any other solution?
PS: the reason I can't modify the PodSpec is that the deployed applications are available as Helm charts and I don't control those files. Moreover, if I change the PodSpec, the change will be lost on the next release upgrade.
You can use the PodNodeSelector admission controller for this:
This admission controller has the following behavior:
If the Namespace has an annotation with a key scheduler.alpha.kubernetes.io/node-selector, use its value as the node selector.
If the namespace lacks such an annotation, use the clusterDefaultNodeSelector defined in the PodNodeSelector plugin configuration file as the node selector.
Evaluate the pod’s node selector against the namespace node selector for conflicts. Conflicts result in rejection.
Evaluate the pod’s node selector against the namespace-specific whitelist defined in the plugin configuration file. Conflicts result in rejection.
First of all, you will need to enable this admission controller. The way to enable it depends on your environment, but it's done by passing --enable-admission-plugins=PodNodeSelector to the kube-apiserver.
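If you also want the clusterDefaultNodeSelector behaviour mentioned above, the plugin reads a configuration file passed to the API server via --admission-control-config-file; a sketch of that file, with placeholder selectors:

podNodeSelectorPluginConfig:
  clusterDefaultNodeSelector: mydefaultlabel=mydefaultvalue   # placeholder cluster-wide default
  node-selector-test: mynodelabel=mynodelabelvalue            # optional per-namespace override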
Then create a namespace and annotate it with whatever node label you want all Pods in that namespace to have:
kubectl create ns node-selector-test
kubectl annotate ns node-selector-test \
scheduler.alpha.kubernetes.io/node-selector=mynodelabel=mynodelabelvalue
To test it you could do something like this:
kubectl run busybox \
-n node-selector-test -it --restart=Never --attach=false --image=busybox
kubectl get pod busybox -n node-selector-test -o yaml
It should output something like this:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  ...
spec:
  ...
  nodeSelector:
    mynodelabel: mynodelabelvalue
Now, unless that label exists on some nodes, this Pod will never be scheduled, so put this label on a node to see it scheduled:
kubectl label node myfavoritenode mynodelabel=mynodelabelvalue
No, there are no other ways to specify on which nodes pods will be scheduled; only labels and selectors.
I think the problem with Helm is related to that issue.
For now, the only way for you is to change the spec, remove the release, and deploy a new one with the updated spec.
UPD
@Janos Lenart provides a way to manage scheduling per namespace. That is a good idea if your releases are already split among namespaces and you don't want to spawn different pods on different nodes within a single release. Otherwise, you will have to create new releases in new namespaces, and in that case I highly recommend using selectors in the Pod spec.

How does 'kubectl apply -f' update a deployment behind the scenes?

I have a deployment created with a YAML file, the image for the containers is image:v1.
Now I update the file to image:v2, and do kubectl apply -f newDeploymentFile.yml. Does Kubernetes use rolling update behind the scenes to update my deployment or some other way?
What happens exactly is controlled by the Deployment itself:
.spec.strategy.type: RollingUpdate (default) or Recreate
.spec.strategy.rollingUpdate: see the docs I've linked for the explanation of maxSurge and maxUnavailable
(I assumed that by deployment you actually mean a Deployment type object and not speaking in general.)
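For illustration, a minimal sketch of those fields inside a Deployment manifest (the numbers are just example values):

spec:
  strategy:
    type: RollingUpdate    # default: replace pods gradually
    rollingUpdate:
      maxSurge: 1          # at most 1 pod above the desired replica count during the update
      maxUnavailable: 0    # never drop below the desired replica count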