How to permanently delete pods in the "default" namespace? When I delete only the pods, they come back because of a "replication controller" - kubernetes

"How to permanently delete pods inside "default" namespace? As when I delete pods, they are coming back because of "replication controller".
As this is in a Default namespace, I am sure that we can delete it permanently. Any idea how to do it ?

I'd like to add an update to what was already said in the previous answer.
Basically, in Kubernetes you have several abstraction layers. As you can read in the documentation:
A Pod is the basic execution unit of a Kubernetes application - the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents processes running on your cluster.
It is rarely deployed as a separate entity. In most cases it is part of a higher-level object such as a Deployment or ReplicationController. I would advise you to familiarize yourself with the general concept of controllers, especially Deployments, as they are currently the recommended way of setting up replication [source]:
Note: A Deployment that configures a ReplicaSet is now the recommended way to set up replication.
As you can read further:
A ReplicationController ensures that a specified number of pod replicas are running at any one time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available.
This also applies to the situation where certain pods are deleted by the user. The replication controller doesn't care why the pods were deleted. Its role is simply to make sure they are always up and running. It's a very simple concept: when you don't want certain pods to exist any more, you must get rid of the higher-level object that manages them and ensures they are always available.

Read about Replication Controllers, then delete the ReplicationController.
It can't "ensure that a specified number of pod replicas are running" when it's dead.
kubectl delete replicationcontroller <name>
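If you are not sure which higher-level object manages a given pod, you can check its ownerReferences first. A quick sketch (the pod and controller names below are placeholders):
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
# e.g. prints: ReplicationController/my-rc
kubectl delete replicationcontroller my-rc
# if the owner is a ReplicaSet created by a Deployment, delete the Deployment instead, e.g.:
kubectl delete deployment my-deployment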

Related

In Kubernetes, what is the real purpose of replicasets?

I am aware of the hierarchical order of k8s resources. In brief,
service: a service is what exposes the application to the outer world (or within the cluster). (The service types, like ClusterIP, NodePort, Ingress, are not so relevant to this question.)
deployment: a deployment is what is responsible for keeping a set of pods running.
replicaset: a replica set is what a deployment in turn relies on to keep the set of pods running.
pod: a pod consists of a container or a group of containers.
container: the actual required application runs inside the container.
The thing I want to emphasise in this question is why we have replicasets. Why doesn't the deployment directly handle or take responsibility for keeping the required number of pods running, instead of relying in turn on a replicaset for this?
If k8s is designed this way, there should definitely be some benefit to having replicasets, and this is what I want to explore/understand in depth.
Both essentially serve the same purpose. Deployments are a higher abstraction and, as the name suggests, deal with creating, maintaining and upgrading the deployment (the collection of pods) as a whole.
Whereas the primary responsibility of ReplicationControllers or ReplicaSets is to maintain a set of identical replicas (which you can achieve declaratively using deployments too, but internally a deployment creates a replicaset to enable this).
More specifically, when you are trying to perform a "rolling" update to your deployment, such as updating the image versions, the deployment internally creates a new replica set and performs the rollout. During the rollout you can see two replicasets for the same deployment.
So in other words, a Deployment needs the lower-level "encapsulation" of ReplicaSets to achieve this.
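As a rough illustration of that rollout behaviour (the names and image tags below are made up, not taken from the question), a minimal Deployment and the commands to watch its ReplicaSets might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25

# change the pod template, e.g.:
kubectl set image deployment/nginx-deployment nginx=nginx:1.26
# while the rollout is in progress, two ReplicaSets exist for the same Deployment:
kubectl get replicasets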

StatefulSet update: recreate THEN delete pods

The Kubernetes StatefulSet RollingUpdate strategy deletes and recreates each Pod in order. I am interested in updating a StatefulSet by recreating a pod and then deleting the old Pod (note the reversal), one-by-one.
This is interesting to me because:
There is no reduction in the number of Ready Pods. I understand this is how a normal Deployment update works too (i.e. a Pod is only deleted after the new Pod replacing it is Ready).
More importantly, it allows me to perform application-specific live migration during my StatefulSet upgrade. I would like to "migrate" data from (old) pod-i to (new) pod-i before (old) pod-i is terminated (I would implement this in (new) pod-i readiness logic).
Is such an update strategy possible?
This is inherently possible with Deployments, but not StatefulSets. StatefulSets are used when you care strongly about an exact number of replicas with well known names. Deployments are used for more elastic workloads.
You may be able to accomplish your goal by using multiple StatefulSets, e.g. instead of a StatefulSet of 3 replicas, use 3 StatefulSets of 1 replica each. Then deploy an additional StatefulSet for your data migration before removing one of the previous ones (see the sketch below).
Alternatively, this may be a use case for an Operator to manage the application.
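A rough sketch of the multi-StatefulSet idea (the names, labels and image are illustrative, not taken from the question): instead of one 3-replica StatefulSet, create app-a, app-b and app-c with one replica each:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app-a          # repeat for app-b and app-c with member: b / member: c
spec:
  serviceName: app
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      member: a
  template:
    metadata:
      labels:
        app: myapp
        member: a
    spec:
      containers:
      - name: myapp
        image: myapp:1.0

For the migration you would then add an extra StatefulSet (say app-d) running the new version, wait for its pod to become Ready and copy the data over, and only afterwards delete one of the old StatefulSets.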
No, because pods have specific names based on their ordinal (-0, -1, etc) and there can only be one pod at a time with a given name. Deployments and DaemonSets can burst for updates because their names are randomized so it doesn't matter what order you do things in.

Is there a cloud-native friendly method to select a master among the replicas?

Is there a way in Kubernetes to upgrade a given type of pod first when we have a deployment or stateful set with two or more replicas (where one pod is the master and the others are not)?
To be specific, my requirement is to ensure that when an upgrade is triggered on the deployment/stateful set, the master is upgraded as the last pod among the given number of replicas.
The only thing that's built into Kubernetes is the automatic sequential naming of StatefulSet pods.
If you have a StatefulSet, one of its pods is guaranteed to be named statefulsetname-0. That pod can declare itself the "master" for whatever purposes this is required. A pod can easily determine (by looking at its hostname(1)) whether it is the "master", and if it isn't, it can also easily determine which pod is. Updates happen by default in numerically reverse order, so statefulsetname-0 will be upgraded last, which matches your requirement.
StatefulSets have other properties, which you may or may not want. It's impossible for another pod to take over as the master if the first one fails; startup and shutdown happens in a fairly rigid order; if any part of your infrastructure is unstable then you may not be able to reliably scale the StatefulSet.
If you don't want a StatefulSet, you can implement your own leader election in a couple of ways (use a service like ZooKeeper or etcd that can help you maintain this state across the cluster; bring in a library for a leader-election algorithm like Raft). Kubernetes doesn't provide this on its own. The cluster will also be unaware of the "must upgrade the leader last" requirement, but if the current leader is terminated, another pod can take over the leadership.
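For illustration, the ordinal-based check described above could look roughly like this inside the container (a sketch, assuming the convention that ordinal 0 is the leader and that the StatefulSet and its headless Service are both named app):

# e.g. at the start of the container's entrypoint script
ORDINAL="${HOSTNAME##*-}"        # app-0 -> 0, app-1 -> 1, ...
if [ "$ORDINAL" = "0" ]; then
  echo "I am the master"
else
  echo "I am a follower; the master is reachable at app-0.app"
fi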
The easiest way is probably having the master in one deployment/statefulset and the followers in another deployment/statefulset. This approach ensures the update persists and can make use of the update strategies in k8s.
Since k8s does not differentiate pods by their containers, nor by any role specific to your application architecture ('master'), it is better to manage your own deployments when you have a specific sequence that is outside of deployment/statefulset control. You can patch, but the change will not persist across a rollout restart.
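For example (the resource names and image tags are made up), with the followers and the master in separate StatefulSets you can order the rollout yourself:

kubectl set image statefulset/myapp-follower myapp=myapp:2.0
kubectl rollout status statefulset/myapp-follower
# only after the followers are done, upgrade the master:
kubectl set image statefulset/myapp-master myapp=myapp:2.0
kubectl rollout status statefulset/myapp-master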

What happens when we scale the kubernetes deployment and change the configuration of one of the pods or containers?

When I scale the application by creating a deployment, let's say I am running the nginx service on a 3-node cluster.
Nginx is running in containers in multiple pods.
If I change the nginx configuration in one of the pods, does it propagate to all the nodes and pods, because it is running in a cluster and is scaled?
does it propagate to all the nodes and pods because it is running in a cluster and scaled.
No. It only propagates when you change the deployment yaml. Then it re-creates the pods one by one with the new configuration.
I would like to add a few more things to what was already said. First of all, you are not even supposed to make any changes to Pods which are managed, let's say, by a ReplicaSet, ReplicationController or Deployment. These are objects which provide an additional abstraction layer, and it is their responsibility to ensure that there is a given number of Pods of a certain kind running in your kubernetes cluster.
It doesn't matter how many nodes your cluster consists of, as the mentioned controllers span all the nodes in the cluster.
Changes made in a single Pod will not only not propagate to other Pods, but may also be easily lost if such a newly created Pod with the changed configuration crashes.
Remember that one of the tasks of the Deployment is to make sure that a certain number of Pods of a given type (specified in the Pod template section of the Deployment) are always up and running. When your manually reconfigured Pod goes down, your Deployment (actually the ReplicaSet created by the Deployment) acts behind the scenes and recreates such a Pod. But how does it recreate it? Does it take into consideration the changes you introduced to that Pod? Of course not; it will recreate it based on the template it is given in the Deployment.
If you want to make changes to your Pods one by one, kubernetes allows you to do so by providing the so-called rolling update mechanism.
Here you can read about the old-fashioned approach using a ReplicationController, which is not used any more as it has been replaced by Deployments and ReplicaSets, but I think it's still worth reading just to grasp the concept.
Currently the Deployment is the way to go. You can read about updating a Deployment here. Note that the default update strategy is RollingUpdate, which ensures that changes are not applied to all Pods at once but one by one.
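In practice that usually means editing the Deployment manifest (or its pod template via kubectl) and letting the rolling update replace the Pods one by one. For illustration only (assuming the Deployment is named nginx and its manifest lives in nginx-deployment.yaml):

kubectl apply -f nginx-deployment.yaml     # after editing the pod template in the file
kubectl rollout status deployment/nginx    # watch the pods being replaced one by one
kubectl rollout history deployment/nginx   # list previous revisions
kubectl rollout undo deployment/nginx      # roll back if the new configuration misbehaves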

Force Kubernetes Pod shutdown before starting a new one in case of disruption

I'm trying to set up a stateful Apache Flink application in Kubernetes and I need to save the current state in case of a disruption, such as someone deleting the pod or it being rescheduled due to cluster resizing.
I added a preStop hook to the container that accomplishes this behaviour, but when I delete a pod using kubectl delete pod it spins up a new Pod before the old one terminates.
Guides such as this one use the Recreate update strategy to make sure only one pod runs at a time. This works fine when updating a deployment, but it does not cover disruptions like the ones I described above. I also tried setting spec.strategy.rollingUpdate.maxSurge to 0, but that made no difference.
Is it possible to configure my Deployment in such a way that no pod ever starts before another one is terminated, or do I need to switch to StatefulSets?
I agree with @Cosmic Ossifrage, as StatefulSets make it easy to achieve your goal. Each Pod in a StatefulSet has a unique, persistent identity and a stable hostname that Kubernetes Engine maintains regardless of where it is scheduled.
Therefore, StatefulSet Pods are deployed in sequential order and terminated in reverse ordinal order, and the StatefulSet controller handles one Pod at a time, only proceeding after the previous one has been completely deleted.
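A rough sketch of what that could look like for the Flink pod (the image, script path and timings below are placeholders, not taken from the question):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: flink-app
spec:
  serviceName: flink-app
  replicas: 1
  selector:
    matchLabels:
      app: flink-app
  template:
    metadata:
      labels:
        app: flink-app
    spec:
      terminationGracePeriodSeconds: 300      # give the state snapshot time to complete
      containers:
      - name: flink
        image: flink:1.17
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "/opt/scripts/save-state.sh"]   # placeholder for your preStop logic

Because the StatefulSet controller only creates the replacement flink-app-0 after the old one has fully terminated, the preStop hook finishes before the new Pod starts, unlike with a Deployment.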