Let's say that I have a running pod named my-pod
my-pod reads the secrets from foobar-secrets
Now let's say that I update some value in foobar-secrets
kubectl patch secret foobar-secrets --namespace kube-system --context=cluster-1 --patch "{\"data\": {\"FOOBAR\": \"$FOOBAR_BASE64\"}}"
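To confirm the patch landed, the new value can be decoded back using the same names:
kubectl get secret foobar-secrets --namespace kube-system --context=cluster-1 -o jsonpath='{.data.FOOBAR}' | base64 -d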
What should I do to restart/reload the pod so that it picks up the new value?
https://github.com/stakater/Reloader is the usual solution for a fully standalone setup. Another option is https://github.com/jimmidyson/configmap-reload or similar, but that requires coordination with the daemon process, which must expose some kind of reload API.
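For Reloader, the wiring is a single annotation on the workload. A minimal sketch, assuming Reloader is already installed in the cluster and using a hypothetical Deployment name:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app   # hypothetical workload that consumes foobar-secrets
  annotations:
    reloader.stakater.com/auto: "true"   # watch referenced secrets/configmaps; roll the pods on change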
Related
I added a pod through Kubernetes Dashboard. I used Create new resource and I created a pod from input.
I then tried to delete it with:
kubectl delete -n default pod pod-name-0
It deletes it, but it gets redeployed. As I understand it, I should delete its deployment first. So to list deployments, I used
kubectl get deployments
But it's not there. How do I permanently delete a pod?
The pods are maintained by a ReplicationController, and they are automatically replaced if they fail, are deleted, or are terminated. You should check:
kubectl describe pods POD_NAME
kubectl describe replicationcontrollers/REPLICATION_CONTROLLER_NAME
Alternatively, you can check the ReplicaSets: kubectl get rs
Afterwards you can run kubectl edit rs REPLICASET_NAME and change the replica count up or down as you desire.
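The non-interactive equivalent, if you just want to change the count (the name is a placeholder):
kubectl scale rs REPLICASET_NAME --replicas=0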
Nice explanation regarding ReplicaSet vs ReplicationController
The Deployment resource object is still not supported in our cluster and not enabled.
We are using the Pod resource object via a YAML file, something like below:
apiVersion: v1
kind: Pod
metadata:
  name: sample-test
  namespace: default
spec:
  automountServiceAccountToken: false
  containers:
I have explored the patch and put REST APIs for Pods (kubectl patch and replace): with a new image version they update the pod and the pod restarts.
I need help with the following:
When the image version is the same, the pod will not update and will not restart.
How can I achieve a pod restart? Is there an API for this, or any alternate approach? My pod also references a configmap and a secret; after I make changes to the secret, I want to restart the pod so that it picks up the updated value.
Suppose a patch is applied with a new container image and the pod ends up in a failed state. I want to roll back to the previous version. How can I achieve this with a standalone pod, without using a Deployment? Is there any alternate approach?
Your scenarios can be handled like this:
When the image version is the same, the pod will not update and will not restart. How can I achieve a pod restart? Is there an API for this, or any alternate approach? My pod also references a configmap and a secret; after I make changes to the secret, I want to restart the pod so that it picks up the updated value.
Create a new secret/configmap each time and update the pod yaml to use the new configmap/secret rather than the old name.
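A minimal sketch of that flow, with hypothetical names and values; note that most pod fields are immutable, so the pod has to be recreated once its manifest points at the new secret:
kubectl create secret generic foobar-secrets-v2 --from-literal=FOOBAR=newvalue
# edit the pod yaml to reference foobar-secrets-v2 instead of the old name, then:
kubectl replace --force -f sample-test.yaml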
Suppose a patch is applied with a new container image and the pod ends up in a failed state. I want to roll back to the previous version. How can I achieve this with a standalone pod, without using a Deployment? Is there any alternate approach?
Before you update the Pod, get the current Pod yaml using kubectl, like this:
kubectl get pod <pod-name> -o yaml -n <namespace>
After getting the yaml, generate the new pod yaml and apply it. In case of failure, clean up the new resources you created (configmaps & secrets) and apply the older version of the pod to achieve a rollback.
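A sketch of that rollback flow, using the pod and namespace names from the manifest above:
kubectl get pod sample-test -o yaml -n default > pod-backup.yaml
# ... generate and apply the new pod yaml; if the new pod ends up Failed:
kubectl delete pod sample-test -n default
kubectl create -f pod-backup.yaml   # may need status/uid/resourceVersion stripped from the backup first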
I have followed the instructions from this blog post to set up a k3s cluster on a couple of Raspberry Pi 4s.
I'm now trying to get my hands dirty with Traefik as the front end, but I'm having issues with the way it has been deployed as a 'HelmChart', I think.
From the k3s docs
It is also possible to deploy Helm charts. k3s supports a CRD
controller for installing charts. A YAML file specification can look
as following (example taken from
/var/lib/rancher/k3s/server/manifests/traefik.yaml):
So I have been starting k3s with the --no-deploy traefik option in order to add it manually with my own settings. I therefore apply a yaml like this:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.64.0.tgz
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"
    kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
    dashboard:
      enabled: true
      domain: "traefik.k3s1.local"
But when trying to iterate over settings to get it working the way I want, I'm having trouble tearing it down. If I try kubectl delete -f on this yaml, it just hangs indefinitely, and I can't seem to find a clean way to delete all the resources manually either.
I've been resorting to reinstalling my entire cluster over and over because I can't seem to clean up properly.
Is there a way to delete all the resources created by a chart like this without the helm cli (which I don't even have)?
Are you sure that kubectl delete -f is hanging?
I had the same issue as you and it seemed like kubectl delete -f was hanging, but it was really just taking a long time.
As far as I can tell, when you issue the kubectl delete -f, a pod in the kube-system namespace with a name of helm-delete-* should spin up and try to delete the resources deployed via helm. You can get the full name of that pod by running kubectl -n kube-system get pods and finding the one named helm-delete-<name of yaml>-<id>. Then use the pod name to look at the logs with kubectl -n kube-system logs helm-delete-<name of yaml>-<id>.
An example of what I did was:
kubectl delete -f jenkins.yaml # seems to hang
kubectl -n kube-system get pods # look at pods in kube-system namespace
kubectl -n kube-system logs helm-delete-jenkins-wkjct # look at the delete logs
I see two options here:
Use the --now flag to delete the resources in your yaml file with minimal delay.
Use the --grace-period=0 --force flags to force-delete the resources (see the example below).
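For example, against the HelmChart manifest above (assuming it was saved as traefik.yaml):
kubectl delete -f traefik.yaml --now
kubectl delete -f traefik.yaml --grace-period=0 --force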
There are other options, but you'll need the Helm CLI for them.
Please let me know if that helped.
I tried installing dgraph (single server) using Kubernetes.
I created the pod using:
kubectl create -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml
Now all I need to do is delete the created pods.
I tried deleting the pod using:
kubectl delete pod pod-name
The result shows pod deleted, but the pod keeps recreating itself.
I need to remove those pods from my Kubernetes. What should I do now?
I did face the same issue. Run this command:
kubectl get deployment
You will get the deployment corresponding to your pod. Copy its name and then run:
kubectl delete deployment xyz
Then check: no new pods will be created.
The link provided by the OP may be unavailable; see the update section below.
Since you created your dgraph server using https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml, just use the same file to delete the resources you created:
$ kubectl delete -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml
Update
Basically, this is an explanation of the reason.
Kubernetes has several workload objects (those whose manifests contain a PodTemplate). These are:
Pods
Controllers (basically Pod controllers):
  ReplicationController
  ReplicaSet
  Deployment
  StatefulSet
  DaemonSet
  Job
  CronJob
See who controls whom:
ReplicationController -> Pod(s)
ReplicaSet -> Pod(s)
Deployment -> ReplicaSet(s) -> Pod(s)
StatefulSet -> Pod(s)
DaemonSet -> Pod(s)
Job -> Pod
CronJob -> Job(s) -> Pod
a -> b means that a creates and controls b, and the value of the .metadata.ownerReferences field in b's manifest is a reference to a. For example,
apiVersion: v1
kind: Pod
metadata:
  ...
  ownerReferences:
  - apiVersion: apps/v1
    controller: true
    blockOwnerDeletion: true
    kind: ReplicaSet
    name: my-repset
    uid: d9607e19-f88f-11e6-a518-42010a800195
  ...
This way, deleting the parent object will also delete the child objects via garbage collection.
So a's controller ensures that a's current status matches a's spec. Say one deletes b: then b will be deleted, but a is still alive, and a's controller sees that there is a difference between a's current status and a's spec. So a's controller creates a new b object to match a's spec.
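A quick way to check who owns a pod from the command line (the pod name here is hypothetical):
kubectl get pod my-pod -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'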
The OP created a Deployment, which created a ReplicaSet, which in turn created the Pod(s). So here the solution was to delete the root object, which was the Deployment.
$ kubectl get deploy -n {namespace}
$ kubectl delete deploy {deployment name} -n {namespace}
Note
Another problem that may arise during deletion is the following:
If there are any finalizers in the .metadata.finalizers[] section, the deletion will only be performed after the associated controller completes its task(s). If you want to delete the object without running the finalizers' actions, you have to remove those finalizers first. For example,
$ kubectl patch -n {namespace} deploy {deployment name} --patch '{"metadata":{"finalizers":[]}}'
$ kubectl delete -n {namespace} deploy {deployment name}
You can perform a graceful pod deletion with the following command:
kubectl delete pods <pod>
If you want to delete a Pod forcibly using kubectl version >= 1.5, do the following:
kubectl delete pods <pod> --grace-period=0 --force
If you’re using any version of kubectl <= 1.4, you should omit the --force option and use:
kubectl delete pods <pod> --grace-period=0
If the pod is stuck in the Unknown state even after these commands, use the following command to remove it from the cluster:
kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'
Pods in Kubernetes also behave differently depending on what owns them. A pod may be managed by:
ReplicationControllers
ReplicaSets
StatefulSets
Deployments
DaemonSets
or it may be a bare Pod.
Run kubectl describe pod <podname> and check what kind of resource owns it, e.g.:
apiVersion: apps/v1
kind: StatefulSet
metadata:
Now run kubectl get <pod-kind>.
Finally, delete that resource, and the pod will be deleted along with it.
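A concrete run-through with hypothetical names, assuming the pod turns out to be owned by a StatefulSet:
kubectl describe pod dgraph-0 | grep "Controlled By"   # e.g. Controlled By: StatefulSet/dgraph
kubectl get statefulset
kubectl delete statefulset dgraph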
@Shudipta Sharma's answer is obviously the correct way to delete the pods; I would just like to make sure the author understands why this is happening.
The reason is the "mindset" of Kubernetes, in which Pods are considered ephemeral, throwaway entities. As Pods come and go, StatefulSets are one way of ensuring that a given number of pods with unique identities will be running at any given time. Looking at the yaml file you used to deploy:
# This StatefulSet runs 1 pod with one Zero, one Alpha & one Ratel containers.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dgraph
spec:
  serviceName: "dgraph"
  replicas: 1
By deploying this, you are basically telling Kubernetes to always run 1 replica of that Pod, at any time. When you delete the Pod, that condition is no longer true, so after the deletion another Pod is spawned to make the condition above valid again.
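To watch this desired-state mechanism in action without deleting anything permanently, you can scale the StatefulSet down instead (the name comes from the manifest above):
kubectl scale statefulset dgraph --replicas=0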
The approach @Shudipta Sharma provided simply deletes that StatefulSet, so there is no longer a desired state keeping an eye on the number of running Pods.
You can find more about that in Kubernetes documentation on:
StatefulSets
Cluster's desired state
More about Kubernetes objects and difference between each of them
Delete the deployment, not the pods. It is the deployment that keeps creating new pods; you can see a different pod name after you delete a pod.
kubectl get all
kubectl delete deployment DEPLOYMENTNAME
I want to debug the pod in a simple way, therefore I want to start the pod without a deployment.
But kubectl automatically creates a deployment:
$ kubectl run nginx --image=nginx --port=80
deployment "nginx" created
So I have to create an nginx.yaml file:
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
And I create the pod like below; then it creates the pod only:
kubectl create -f nginx.yaml
pod "nginx" created
How can I specify kind: Pod on the command line to avoid creating a deployment?
// I run under minikube 0.20.0 and kubernetes 1.7.0 under Windows 7
kubectl run nginx --image=nginx --port=80 --restart=Never
--restart=Always: The restart policy for this Pod. Legal values [Always, OnFailure, Never]. If set to Always
a deployment is created, if set to OnFailure a job is created, if set to Never, a regular pod is created. For the latter two --replicas must be 1. Default Always [...]
See the official documentation: https://kubernetes.io/docs/user-guide/kubectl-conventions/#generators
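To verify that only a bare pod was created and no deployment, you can check both resources afterwards:
kubectl get pod nginx
kubectl get deployments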
There are two ways one can create a pod through the command line:
kubectl run nginx --image=nginx --restart=Never
OR
kubectl run --generator=run-pod/v1 nginx1 --image=nginx
See official documentation.
https://kubernetes.io/docs/reference/kubectl/conventions/#generators
Use generators for this; by default, kubectl run will create a Deployment object. If you want to override this behavior, use the "run-pod/v1" generator.
kubectl run --generator=run-pod/v1 nginx1 --image=nginx
You may refer to the link below for a better understanding:
https://kubernetes.io/docs/reference/kubectl/conventions/#generators
I'm relatively new to Kubernetes, but it seems it has evolved quite a bit since this question was asked. As of its latest versions (I'm running v1.16), generators are deprecated, and they are completely removed in v1.18.
See the corresponding ticket and the release notes.
Release notes explicitly say:
Remove all the generators from kubectl run. It will now only create
pods.
I've tested kubectl run with various --restart flags and never got any deployments created. What we get now is called a "naked" Pod. And while you might be tempted to use it, it goes against k8s best practices:
Don’t use naked Pods (that is, Pods not bound to a ReplicaSet or
Deployment) if you can avoid it. Naked Pods will not be rescheduled in
the event of a node failure.
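If a managed, reschedulable workload is what you actually want on a modern kubectl, a minimal sketch (reusing the nginx image from above) is to create the Deployment explicitly:
kubectl create deployment nginx --image=nginx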
When you use "kubectl run nginx --image=nginx --port=80", it creates a deployment by default.
To create just a pod, you have two options:
kubectl run --generator=run-pod/v1 nginx --image=nginx --port=80
kubectl run nginx --image=nginx --port=80 --restart=Never