Deployment vs Pod - restartPolicy change - Kubernetes

I was wondering what would happen in this scenario, or if it's even possible:
Kubernetes cluster -
if the Deployment has a container restartPolicy of Always,
but on the Pod level you specify a restartPolicy of Never,
which one will Kubernetes honor?

As @Turing85 commented, in the normal use case a Deployment and its Pods cannot have different restartPolicies, as the Deployment creates the Pods. If you try to alter a Pod's restartPolicy manually after it is created (e.g. with kubectl edit pod <pod-name>), you will get an error, as this property cannot be changed after creation. However, we can trick a Deployment, or more specifically the underlying ReplicaSet, into accepting a manually created Pod. ReplicaSets in Kubernetes know which Pods are theirs through the use of labels. If you inspect the ReplicaSet belonging to your Deployment, you will see a label selector that shows you which labels need to be present for the ReplicaSet to consider a Pod part of it.
So if you want to manually create a Pod that is later managed by the ReplicaSet, you first create a Pod with the desired restartPolicy. After this Pod has started and is ready, you delete an existing Pod of the ReplicaSet and update the labels of your Pod so that it carries the ReplicaSet's selector labels. Now there is a Pod in the ReplicaSet with a different restartPolicy.
This is really hacky and depends on the timing of the deletion and the label update, because as soon as you delete a Pod in the ReplicaSet it will try to create a new one. You essentially have to be faster with the label change than the ReplicaSet is with the creation of a new Pod.
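A rough sketch of that sequence with plain kubectl, assuming a Deployment whose Pods carry the label app=web; the ReplicaSet name, Pod names and image below are placeholders:

# inspect the ReplicaSet's label selector; for a Deployment-owned ReplicaSet
# this usually also includes a pod-template-hash label
kubectl get rs web-5d4f8c7b9 -o jsonpath='{.spec.selector.matchLabels}'

# create your own Pod with restartPolicy: Never, but without the selector labels yet
kubectl run web-manual --image=nginx --restart=Never --labels='stage=manual'

# once it is ready: delete one of the ReplicaSet's Pods, then immediately apply
# every selector label (including pod-template-hash) so the ReplicaSet adopts your Pod
kubectl delete pod web-5d4f8c7b9-abcde --wait=false
kubectl label pod web-manual app=web pod-template-hash=5d4f8c7b9 --overwrite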

Related

Kubernetes : How to delete a specific pod managed by StatefulSet without it being recreated?

I have a StatefulSet with 2 pods. It has a headless service and each pod has a LoadBalancer service that makes it available to the world.
Let's say pod names are pod-0 and pod-1.
If I want to delete pod-0 but keep pod-1 active, I am not able to do that.
I tried
kubectl delete pod pod-0
This deletes it, but the pod is then recreated because the StatefulSet's replicas is set to 2.
So I tried
kubectl delete pod pod-0
kubectl scale statefulset some-name --replicas=1
This deletes pod-0, deletes pod-1 and then recreates pod-0. I guess that's because when replicas is set to 1, the StatefulSet wants to keep pod-0 active but not pod-1.
But how do I keep pod-1 active and delete pod-0?
This is not supported by the StatefulSet controller. Probably the best you could do is try to create that pod yourself with a sleep shim and maybe you could be faster. But then the StatefulSet controller will just be unhappy forever.
You can try using a custom controller like:
https://github.com/openkruise/kruise/
You get more fine-grained control, including selective pod removal, if you use a CloneSet custom resource.
apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
spec:
  # ...
  replicas: 4
  scaleStrategy:
    podsToDelete:
    - sample-9m4hp # you select which pod to remove
https://openkruise.io/en-us/docs/cloneset.html
The issue of removing specific pods of a Deployment or StatefulSet has been open for years with no resolution:
https://github.com/kubernetes/kubernetes/issues/45509
A StatefulSet always creates its pods with indices 0..(replicas-1).
If you really want finer control over individual pods, I guess you would need to create separate StatefulSet objects (each with replicas = 1).
ReplicaSets in Kubernetes 1.21+ support the controller.kubernetes.io/pod-deletion-cost Pod annotation to address this for Deployments, but so far there is nothing for StatefulSets.
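For the Deployment case, a minimal illustration of that annotation (the pod and Deployment names are placeholders); the ReplicaSet prefers to delete the pods with the lowest deletion cost first when scaling down:

# mark the pod you want removed as the cheapest one to delete
kubectl annotate pod pod-to-remove controller.kubernetes.io/pod-deletion-cost=-1000

# then scale the Deployment down by one; the annotated pod goes first
kubectl scale deployment some-name --replicas=3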

Can a Pod be managed by two different ReplicaSets?

3 pods were running under ReplicationController 'rc1', then I deleted only rc1 (not the pods) and created a new ReplicaSet 'rs1' with the same label selector as rc1. So, as expected, rs1 matched the existing pods created by rc1.
After some time, I created the ReplicationController rc2 with the same manifest file as that of rc1. Now, rc2 spun up new pods instead of picking up the existing pods with the same labels.
So I was wondering if it is possible that a pod can be scoped under two different ReplicaSets/ReplicationControllers?
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.
So I was wondering if it is possible that a pod can be scoped under two different ReplicaSets/ReplicationControllers?
The link a ReplicaSet has to its Pods is via the Pods’ metadata.ownerReferences field, which specifies what resource the current object is owned by. All Pods acquired by a ReplicaSet have their owning ReplicaSet’s identifying information within their ownerReferences field. It’s through this link that the ReplicaSet knows of the state of the Pods it is maintaining and plans accordingly.
A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the OwnerReference is not a Controller and it matches a ReplicaSet’s selector, it will be immediately acquired by said ReplicaSet. That is explained very well (with examples) in the official documentation.
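A quick way to see that link in practice (the pod name here is a placeholder):

# print the ownerReferences of a pod; an adopted pod will show its ReplicaSet here
kubectl get pod rs1-abcde -o jsonpath='{.metadata.ownerReferences}'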
After some time, I created the ReplicationController rc2 with the same manifest file as that of rc1. Now, rc2 spun up new pods instead of picking up the existing pods with the same labels.
Since your existing pods already have rs1 set as their controlling owner, rc2 cannot acquire them and therefore spins up its own new pods; a pod cannot be owned by two controllers at the same time. Also, please note that a Deployment that configures a ReplicaSet is now the recommended way to set up replication.
A ReplicationController ensures that a specified number of pod replicas are running at any one time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available.
If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a ReplicationController are automatically replaced if they fail, are deleted, or are terminated.
Hope that helps.

Do ReplicaSets replace Pods?

I have a conceptual question: do ReplicaSets use Pod settings?
Before I applied my ReplicaSet I deleted my Pods, so there is no information about my old Pods left.
If I apply the ReplicaSet now, does it reference the Pod settings, with all settings like readinessProbe/livenessProbe, etc.?
My question came up because my replicaset.yml has a containers section where I specified my Docker image. Why does it need that information? Isn't it redundant, since this information is already in my pods.yml?
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: test1
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: test1
        image: test/test
Pods are the smallest deployable units of computing that can be
created and managed in Kubernetes.
A Pod (as in a pod of whales or pea pod) is a group of one or more
containers (such as Docker containers), with shared storage/network,
and a specification for how to run the containers.
See https://kubernetes.io/docs/concepts/workloads/pods/pod/.
So, you can specify how your Pod will be scheduled (one or more containers, ports, probes, volumes, etc.).
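For instance, a standalone pods.yml along the lines of the question might look roughly like this (the probe path and port are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: test1
  labels:
    app: web
spec:
  containers:
  - name: test1
    image: test/test
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080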
But in case of a node failure, or anything bad that can harm the Pod, the Pod won't be rescheduled (you would have to recreate it manually). So, in that case, you need a controller. Kubernetes provides several controllers (each one for a different purpose). They are:
ReplicaSet
ReplicationController
Deployment
StatefulSet
DaemonSet
Job
CronJob
All of the above controllers, together with the Pod, are called workloads, because they all have a podTemplate section and they all create some number of identical Pods, as specified by the spec.replicas field (if this field exists in the corresponding workload manifest). They are all higher-level concepts than the Pod.
Though a Deployment is usually more suitable than a bare ReplicaSet, this answer focuses on ReplicaSet vs Pod, because that is what the question is about.
In addition, each one of the above controllers has its own purpose. For example, a ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.
A ReplicaSet contains a selector to identify and acquire Pod(s), and a Pod template specifying the configuration of the new Pods it should create to meet the replicas criteria. It creates and deletes Pod(s) as needed to reach the desired number, and when it needs to create new Pod(s), it uses its Pod template.
The Pod(s) maintained by a ReplicaSet have a metadata.ownerReferences field that tells which resource owns the current Pod(s).
A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the OwnerReference is not a controller and it matches a ReplicaSet’s selector, it will be immediately acquired by said ReplicaSet.
Ref: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
Now, it's time to answer your questions.
Since a ReplicaSet is one of the Pod controllers (listed above), it obviously needs a podTemplate (using this template, your Pods will be scheduled). All of the Pods the ReplicaSet creates will have the same Pod configuration (same containers, same ports, same readiness/livenessProbe, volumes, etc.). So having this podTemplate is not redundant information, it's needed. And if you have a Pod controller like a ReplicaSet (or another one, as you need), you don't need the standalone Pod anymore, because the ReplicaSet (or the other controller) will create the Pod(s) for you.
I guess you got the answer.
Do ReplicaSets replace Pods?
Yes, if you have replicaset.yml you don't need pods.yml.
I have a conceptual question: do ReplicaSets use Pod settings?
Before I applied my ReplicaSet I deleted my Pods, so there is no information about my old Pods left. If I apply the ReplicaSet now, does it reference the Pod settings, with all settings like readinessProbe/livenessProbe, etc.?
No. The ReplicaSet manifest has to contain the Pod specification itself, in order to determine the configuration of the Pods that should be deployed.
With the labels, you link the ReplicaSet to running Pods.
You don't link the replicaset.yml manifest to the pods.yml manifest.
Don’t use naked Pods (that is, Pods not bound to a ReplicaSet or Deployment) if you can avoid it. Naked Pods will not be rescheduled in the event of a node failure.
In 99% of the cases, there isn't a separate pods.yml manifest.
The pods + the ReplicaSet are defined in a single manifest, hence the containers section in the replicaset.yml.
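For instance, a current apps/v1 version of the manifest from the question, with the Pod settings, including a readinessProbe, embedded in the ReplicaSet's template (the probe path and port are placeholders):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: test1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: test1
        image: test/test
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080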

Pod gets recreated after deletion

I'm unable to delete a Kubernetes pod; it keeps getting recreated.
There's no service or deployment associated with the pod. There is a label on the pod though, is that the root cause?
If I edit the label out with kubectl edit pod podname, it removes the label from the pod, but a new pod with the same label is created at the same time.
Pods can be created by ReplicationControllers or ReplicaSets. The latter might in turn be created by a Deployment. The described behavior strongly indicates that the Pod is managed by one of these two.
You can check for these with these commands:
kubectl get rs
kubectl get rc
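If one of them shows up, a short sketch of the follow-up (the ReplicaSet name is a placeholder):

# a ReplicaSet's pods are usually named <replicaset-name>-<random-suffix>,
# so the owner is easy to spot in the listing
kubectl get rs

# delete the ReplicaSet (or, if it is owned by a Deployment, the Deployment)
# instead of the pod, so nothing recreates the pod afterwards
kubectl delete rs <replicaset-name>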

How to delete kubernetes pods (and other resources) in the system namespace

I have by mistake added a pod to the system namespace "kube-system", and now I am unable to remove this pod. It also seems to have created a replica set. Every time I delete these items, they are recreated.
I can't seem to find a way to delete pods or replica sets belonging to the system namespace "kube-system".
If you created the pod using kubectl run, then you will need to delete the deployment (which created the replica set, which created the pod). Otherwise, the higher level controllers will continue to ensure that the objects they are responsible for keeping running stay around in the system, even if you try to delete them manually. Try kubectl get deployment --namespace=kube-system to see if you have a deployment in the kube-system namespace. If so, deleting it should also delete the replica set and the pods that you created.
If a pod is recreated even after kubectl delete pod <pod-name>, it means that the pod is controlled by a higher-level Kubernetes object such as a Deployment, ReplicaSet or ReplicationController.
You can use kubectl describe pod <pod-name> | grep "Controlled By" (older kubectl versions print a Controllers field instead) to find which controller your pod belongs to. You need to delete this higher-level object to delete the pod.
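Putting it together for the kube-system case above, one possible sequence (the deployment name is a placeholder):

# find out what owns the pod
kubectl describe pod <pod-name> --namespace=kube-system | grep -i "controlled by"

# if it is a ReplicaSet created by a Deployment, delete the Deployment;
# the ReplicaSet and the pod are removed along with it
kubectl get deployment --namespace=kube-system
kubectl delete deployment <deployment-name> --namespace=kube-system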