I created a ReplicationController using the below command.
kubectl run nginx --image=nginx -r=2 --generator=run/v1
Now I tried upgrading the image to version 1.7.1.
kubectl set image rc/nginx nginx=nginx:1.7.1
But, the image doesn't seem to update.
watch -n1 'kubectl describe pods | grep "Image:"'
I also tried kubectl edit .... and kubectl apply -f ...., but the image is not getting updated.
How do I update an image in K8S ReplicationController?
The documentation describes how to perform a rolling upgrade on replication controllers: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#rolling-updates
You need to know that your image actually is updated in the replication controller, but the replication controller won't kill the existing pods and spawn new ones with the new image. So to achieve that you need to do one of two things:
Manually kill pods
Scale your RC to 0 to kill the pods and then back to the desired number of replicas, e.g. kubectl scale --replicas=3 rc/nginx (see the sketch below)
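A minimal sketch of the scale-down/scale-up approach, assuming the RC from the question is still named nginx and originally ran 2 replicas:
kubectl set image rc/nginx nginx=nginx:1.7.1      # update the image recorded in the RC template
kubectl scale rc/nginx --replicas=0               # kill the existing pods
kubectl scale rc/nginx --replicas=2               # new pods are created from the updated template
kubectl get pods -o jsonpath='{.items[*].spec.containers[*].image}'   # verify the running image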
A replication controller is only able to scale the number of replicas of a given pod and can't do any updates by itself.
There is a way to "update" your ReplicationController using kubectl rolling-update, but it does not update it literally.
This is what happens when you run kubectl rolling-update (link1):
Creating a new replication controller with the updated configuration.
Increasing/decreasing the replica count on the new and old controllers until the correct number of replicas is reached.
Deleting the original replication controller.
Rolling updates are initiated with the kubectl rolling-update command:
$ kubectl rolling-update NAME \
([NEW_NAME] --image=IMAGE | -f FILE)
Assume that we have a current replication controller named foo and it is running image image:v1 (link2)
kubectl rolling-update foo [foo-v2] --image=myimage:v2
If the user doesn't specify a name for the 'next' replication
controller, then the 'next' replication controller is renamed to the
name of the original replication controller.
Here are some more examples from the kubectl reference:
Update pods of frontend-v1 using new replication controller data in
frontend-v2.json.
kubectl rolling-update frontend-v1 -f frontend-v2.json
Update pods of frontend-v1 using JSON data passed into stdin.
cat frontend-v2.json | kubectl rolling-update frontend-v1 -f -
Update the pods of frontend-v1 to frontend-v2 by just changing the
image, and switching the name of the replication controller.
kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2
Update the pods of frontend by just changing the image, and keeping
the old name.
kubectl rolling-update frontend --image=image:v2
Abort and reverse an existing rollout in progress (from frontend-v1 to
frontend-v2).
kubectl rolling-update frontend-v1 frontend-v2 --rollback
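Applied to the nginx ReplicationController from the question, the whole image change could therefore be done in place while keeping the old name (a sketch; it assumes the RC is still called nginx):
kubectl rolling-update nginx --image=nginx:1.7.1
kubectl get pods -w   # watch the old pods being replaced one by one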
There are alternatives to the ReplicationController (link3)
ReplicaSet (it still does not support updating a Pod's image on its own)
ReplicaSet is the next-generation ReplicationController that supports
the new set-based label selector. It’s mainly used by Deployment as a
mechanism to orchestrate pod creation, deletion and updates. Note that
we recommend using Deployments instead of directly using Replica Sets,
unless you require custom update orchestration or don’t require
updates at all.
Deployment (Recommended) (It works as an orchestrator for ReplicaSets, so it supports updates by design)
Deployment is a higher-level API object that updates its underlying
Replica Sets and their Pods in a similar fashion as kubectl rolling-update.
Deployments are recommended if you want this rolling
update functionality, because unlike kubectl rolling-update, they are
declarative, server-side, and have additional features.
kubectl run nginx1 --image nginx --replicas=3
kubectl get deployment nginx1 --export -o yaml
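With a Deployment, the image change from the original question becomes a declarative rolling update. A minimal sketch, assuming the Deployment created above is named nginx1 and kubectl run gave its container the same name:
kubectl set image deployment/nginx1 nginx1=nginx:1.7.1
kubectl rollout status deployment/nginx1   # a new ReplicaSet is created and replicas are shifted over gradually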
Related
A pod can be created by a Deployment, a ReplicaSet or a DaemonSet. If I am updating a pod's container specs, is it OK for me to simply modify the yaml file that created the pod? Would it be erroneous once I have done that?
Brief Question:
Is kubectl apply -f xxx.yml the silver bullet for all pod updates?
...if I am updating a pod's container specs, is it OK for me to simply modify the yaml file that created the pod?
Since the pod spec is part of the controller spec (e.g. Deployment, DaemonSet), to update the container spec you naturally start with the controller spec. Also, a running pod is largely immutable; there isn't much you can change directly unless you do a replace - which is what the controller is already doing.
You should not make changes to the pods directly, but update the spec.template.spec section of the deployment used to create the pods.
The reason for this is that the deployment is the controller that manages the replicasets and therefore the pods that are created for your application. That means that if you apply changes to the pod manifest directly, and something like a pod rescheduling/restart happens, the changes made to the pod will be lost, because the replicaset will recreate the pod according to its own specification and not the specification of the last running pod.
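For example, a rough sketch of where the image lives in a Deployment manifest (the names my-app and my-app:v2 are placeholders, not taken from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:v2   # change the image here, then kubectl apply -f the file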
You are safe to use kubectl apply to apply changes to existing resources, but if you are unsure, you can always extract the current state of the deployment from Kubernetes and pipe that output into a yaml file to create a backup:
kubectl get deploy/<name> --namespace <namespace> -o yaml > deploy.yaml
Another option is to use the internal rollback mechanism of Kubernetes to restore a previous revision of your deployment. See https://learnk8s.io/kubernetes-rollbacks for more info on that.
I added a pod through Kubernetes Dashboard. I used Create new resource and I created a pod from input.
I then tried to delete it with:
kubectl delete -n default pod pod-name-0
It deletes it, but it gets redeployed. As I understand it, I should delete its deployment first. So to list deployments, I used
kubectl get deployments
But it's not there. How do I permanently delete a pod?
The pods are maintained by a ReplicationController and are automatically replaced if they fail, are deleted, or are terminated. You should check:
kubectl describe pods POD_NAME
kubectl describe replicationcontrollers/REPLICATION_CONTROLLER_NAME
Alternatively, you can check the ReplicaSet with kubectl get rs
Afterwards you can: kubectl edit rs REPLICASET_NAME and change the replicas count up or down as you desire.
Nice explanation regarding ReplicaSet vs ReplicationController
I usually restart my applications by:
kubectl scale deployment my-app --replicas=0
Followed by:
kubectl scale deployment my-app --replicas=1
which works fine. I also have another running application but when I look at its replicaset I see:
$ kubectl get rs
NAME          DESIRED   CURRENT   READY   AGE
another-app   2         2         2       2d
So to restart that correctly I would of course need to:
kubectl scale deployment another-app --replicas=0
kubectl scale deployment another-app --replicas=2
But is there a better way to do this so I don't have to manually look at the replicasets before scaling/restarting my application (which might have replicas > 1)?
You can restart pods by using a label selector:
kubectl delete pods -l name=myLabel
You can do a rolling restart of all pods for a deployment, so that you don't take the service down:
kubectl patch deployment your_deployment_name -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
And since Kubernetes version 1.15 you can simply run:
kubectl rollout restart deployment your_deployment_name
To make changes in your current deployment you can use kubectl rollout pause deployment/YOUR_DEPLOYMENT. This way the deployment will be marked as paused and won't be reconciled by the controller. After it's paused you can make the necessary changes to your configuration and then resume it by using kubectl rollout resume deployment/YOUR_DEPLOYMENT. This way it will create a new replicaset with the updated configuration.
A pod with the new configuration will be started, and once it is in Running status, the pod with the old configuration will be terminated.
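A rough sketch of that flow, assuming a Deployment named my-app and a new image tag my-app:v2 (both placeholders):
kubectl rollout pause deployment/my-app
kubectl set image deployment/my-app my-app=my-app:v2   # changes made while paused are rolled out together on resume
kubectl rollout resume deployment/my-app
kubectl rollout status deployment/my-app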
Using this method you will also be able to roll the deployment back to a previous version. Use:
kubectl rollout history deployment/YOUR_DEPLOYMENT
to check the history of the rollouts, and then execute the following command to roll back:
kubectl rollout undo deployment/YOUR_DEPLOYMENT --to-revision=REVISION_NO
When we try to redeploy a deployment that is already running, the Deployment does not do a rolling update on the replicasets, which means the old and the new replica sets keep running.
I have tried setting revisionHistoryLimit to 1, but it does not help either. Every time, I end up deleting/scaling down the old replicaset with
kubectl delete replicaset/<old_replicaset_name> -n <namespace>
or
kubectl scale replicaset/<old_replicaset_name> --replicas=0
I tried installing dgraph (single server) using Kubernetes.
I created pod using:
kubectl create -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml
Now all I need to do is to delete the created pods.
I tried deleting the pod using:
kubectl delete pod pod-name
The result shows pod deleted, but the pod keeps recreating itself.
I need to remove those pods from my Kubernetes. What should I do now?
I faced the same issue. Run:
kubectl get deployment
You will get the deployment corresponding to your pod. Copy its name and then run:
kubectl delete deployment xyz
Then check: no new pods will be created.
The link provided by the OP may be unavailable; see the update section below.
As you specified, you created your dgraph server using https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml, so just use the same file to delete the resources you created:
$ kubectl delete -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml
Update
Basically, this is an explanation of the reason.
Kubernetes has several kinds of workload objects (those that contain a PodTemplate in their manifest). These are:
Pods
Controllers (basically Pod controllers)
ReplicationController
ReplicaSet
Deployment
StatefulSet
DaemonSet
Job
CronJob
See who controls whom:
ReplicationController -> Pod(s)
ReplicaSet -> Pod(s)
Deployment -> ReplicaSet(s) -> Pod(s)
StatefulSet -> Pod(s)
DaemonSet -> Pod(s)
Job -> Pod
CronJob -> Job(s) -> Pod
a -> b means that a creates and controls b, and the value of the field
.metadata.ownerReferences in b's manifest is a reference to a. For
example,
apiVersion: v1
kind: Pod
metadata:
  ...
  ownerReferences:
  - apiVersion: apps/v1
    controller: true
    blockOwnerDeletion: true
    kind: ReplicaSet
    name: my-repset
    uid: d9607e19-f88f-11e6-a518-42010a800195
...
This way, deletion of the parent object will also delete the child objects via garbage collection.
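As a side note, if you ever want to delete a parent object without garbage-collecting its children, kubectl can orphan them instead. A sketch with a placeholder name (on kubectl versions older than 1.20 the flag is spelled --cascade=false):
kubectl delete deployment my-deploy --cascade=orphan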
So, a's controller ensures that a's current status matches
a's spec. Say, if one deletes b, then b will be deleted. But
a is still alive, and a's controller sees that there is a
difference between a's current status and a's spec. So a's
controller recreates a new b object to match a's spec.
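To see which controller owns a given pod, you can inspect its ownerReferences directly (a sketch; <pod-name> is a placeholder):
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[*].kind}/{.metadata.ownerReferences[*].name}'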
The OP created a Deployment, which created a ReplicaSet, which in turn created the Pod(s). So here the solution was to delete the root object, which was the Deployment.
$ kubectl get deploy -n {namespace}
$ kubectl delete deploy {deployment name} -n {namespace}
Note
Another problem that may arise during deletion is the following:
If there is any finalizer in the .metadata.finalizers[] section, then the deletion will only be performed after the task(s) handled by the associated controller have completed. If you want to delete the object without waiting for the finalizer(s)' action(s), you have to remove those finalizer(s) first. For example,
$ kubectl patch -n {namespace} deploy {deployment name} --patch '{"metadata":{"finalizers":[]}}'
$ kubectl delete -n {namespace} deploy {deployment name}
You can perform a graceful pod deletion with the following command:
kubectl delete pods <pod>
If you want to delete a Pod forcibly using kubectl version >= 1.5, do the following:
kubectl delete pods <pod> --grace-period=0 --force
If you’re using any version of kubectl <= 1.4, you should omit the --force option and use:
kubectl delete pods <pod> --grace-period=0
If even after these commands the pod is stuck on Unknown state, use the following command to remove the pod from the cluster:
kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'
Pods in Kubernetes also depend on the type of controller that owns them, like:
Replication Controllers
Replica Sets
Statefulsets
Deployments
Daemon Sets
Pod
Do kubectl describe pod <podname> and check
apiVersion: apps/v1
kind: StatefulSet
metadata:
Now run kubectl get <pod-kind>
Finally, delete that controller and the pod will be deleted as well.
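For the dgraph example above, that roughly translates to the following (a sketch; it assumes the StatefulSet created by dgraph-single.yaml is named dgraph):
kubectl get statefulset
kubectl delete statefulset dgraph
kubectl get pods   # the pods owned by the StatefulSet are garbage collected along with it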
#Shudipta Sharma's answer is obviously the correct way to delete the pods. I would just like to make sure the author understands why this is happening.
The reason is the "mindset" of Kubernetes, in which Pods are considered ephemeral, throwaway entities. As Pods come and go, StatefulSets are one way of ensuring that a given number of pods with unique identities will be running at any given time. Looking at the yaml file you used to deploy:
# This StatefulSet runs 1 pod with one Zero, one Alpha & one Ratel containers.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dgraph
spec:
  serviceName: "dgraph"
  replicas: 1
By deploying this you are basically saying that you want Kubernetes to always run 1 replica of that Pod, at any time. When you delete the Pod, that condition is no longer true so after deletion, there is another Pod spawning to make sure the condition above will be valid.
The way that #Shudipta Sharma provided is simply deletion of that StatefulSet, so you no longer have a desired state that keeps an eye on the number of running Pods.
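If you wanted to stop the pods but keep the StatefulSet object around, an alternative sketch would be to change the desired state instead of deleting it:
kubectl scale statefulset dgraph --replicas=0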
You can find more about that in Kubernetes documentation on:
StatefulSets
Cluster's desired state
More about Kubernetes objects and difference between each of them
Delete the deployment, not the pods. It is the deployment that keeps creating new pods; you can see a different pod name after you delete a pod.
kubectl get all
kubectl delete deployment DEPLOYMENTNAME