Unable to deploy MariaDB on Kubernetes using openstack-helm charts - kubernetes

I am trying to deploy OpenStack on Kubernetes using Helm charts. I am seeing the error below when trying to deploy MariaDB. mariadb-server-0 looks for a PVC that is in a Lost state. I tried creating the PersistentVolume and assigning it manually, but the pod still looks for the lost PVC, as shown in the error below.
2018-10-05T17:05:04.087573+00:00 node2: kubelet[9897]: E1005 17:05:04.087449 9897 desired_state_of_world_populator.go:273] Error processing volume "mysql-data" for pod "mariadb-server-0_openstack(c259471b-c8c0-11e8-9636-441ea14dfc98)": error processing PVC "openstack"/"mysql-data-mariadb-server-0": PVC openstack/mysql-data-mariadb-server-0 has non-bound phase ("Lost") or empty pvc.Spec.VolumeName ("pvc-74e81ef0-bb97-11e8-9636-441ea14dfc98")
Is there a way to delete the old PVC entry from the cluster, so that MariaDB doesn't look for it while deploying?
Thanks,
Ab

To delete a PVC, you can just use the typical kubectl commands.
See all the PVCs:
kubectl -n <namespace> get pvc
To delete PVCs:
kubectl -n <namespace> delete pvc <pvc-name-from-the-previous-command>
Similarly, I would check the PVs to see if there are any dangling PVs. Note that PVs are cluster-scoped, so no namespace flag is needed.
See all the PVs:
kubectl get pv
To delete PVs:
kubectl delete pv <pv-name-from-the-previous-command>
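For the asker's specific case, a minimal sketch (assuming the openstack-helm chart runs MariaDB as a StatefulSet in the openstack namespace, which the mariadb-server-0 name suggests) would be to delete the PVC stuck in the Lost phase and then the pod, so the StatefulSet recreates both:
kubectl -n openstack delete pvc mysql-data-mariadb-server-0
kubectl -n openstack delete pod mariadb-server-0
The StatefulSet controller should then create a fresh mysql-data-mariadb-server-0 claim and bind it to an available PV.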

Related

Pod is not found when trying to delete, however, can be patched

I have a pod that I can see on GKE. But if I try to delete them, I got the error:
kubectl delete pod my-pod --namespace=kube-system --context=cluster-1
Error from server (NotFound): pods "my-pod" not found
However, if I try to patch it, the operation was completed successfully:
kubectl patch deployment my-pod --namespace kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secrets-update\":\"`date +'%s'`\"}}}}}" --context=cluster-1
deployment.apps/my-pod patched
Same namespace, same context, same pod. Why does kubectl fail to delete the pod?
kubectl patch deployment my-pod --namespace kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secrets-update\":\"`date +'%s'`\"}}}}}" --context=cluster-1
You are patching the deployment here, not the pod.
Additionally, your pod will not be called "my-pod"; it will be named after your deployment plus a hash (a random set of letters and numbers), something like "my-pod-ace3g".
To see the pods in the namespace use
kubectl get pods -n {namespace}
Since you've put the deployment in the "kube-system" namespace, you would use
kubectl get pods -n kube-system
Side note: generally, don't use the kube-system namespace unless your deployment is related to cluster functionality. There's a namespace called default that you can use to test things.
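Putting that together for the question's setup (Deployment my-pod in kube-system on context cluster-1, all taken from the question), you could first list the real pod names and then delete the Deployment itself instead of a single pod:
kubectl get pods -n kube-system --context=cluster-1 | grep my-pod
kubectl delete deployment my-pod -n kube-system --context=cluster-1
Deleting the Deployment removes its ReplicaSet and pods; deleting only a pod would just cause the ReplicaSet to replace it.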

How to delete persistent volumes in Kubernetes

I am trying to delete persistent volumes on a Kubernetes cluster. I ran the following command:
kubectl delete pv pvc-08e65270-b7ce-11e9-ba0b-0a1e280502e2 pvc-08e87826-b7ce-11e9-ba0b-0a1e280502e2 pvc-08ea5f97-b7ce-11e9-ba0b-0a1e280502e2 pvc-08ec1cac-b7ce-11e9-ba0b-0a1e280502e2
However it showed:
persistentvolume "pvc-08e65270-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08e87826-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08ea5f97-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08ec1cac-b7ce-11e9-ba0b-0a1e280502e2" deleted
But the command did not exit, so I pressed CTRL+C to force it to quit. After a few minutes, I ran:
kubectl get pv
And the status is Terminating, but the volumes do not actually get deleted.
How can I delete these persistent volumes?
It is not recommended to delete a PV directly; that should be handled by the cloud provisioner. If you need to remove a PV, just delete the pod bound to the claim and then the PVC. After that, the cloud provisioner should remove the PV as well.
kubectl delete pvc --all
It can sometimes take a while, so be patient.
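If you are not sure which pod is still holding a claim, a quick check before deleting (standard kubectl, placeholder names) is:
kubectl -n <namespace> describe pvc <pvc-name>
The describe output shows the bound volume and the pods currently using the claim.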
Delete all the pods that are using the PVC you want to delete, then delete the PVC (PersistentVolumeClaim) and PV (PersistentVolume), in that order.
Something like below (in sequence):
kubectl delete pod --all (or kubectl delete pod <pod-name>)
kubectl delete pvc --all (or kubectl delete pvc <pvc-name>)
kubectl delete pv --all (or kubectl delete pv <pv-name>)
The kubectl commands mentioned by other answers in this thread should work the same way. In sequence:
kubectl delete sts sts-name
kubectl delete pvc pvc-name
kubectl delete pv pv-name
Some more useful info
If you see something stuck in the Terminating state, it's because of guardrails set in place by k8s. These are referred to as 'Finalizers'.
If your PV is stuck in the Terminating state after deletion, it is likely because you deleted the PV before deleting the PVC.
If your PVC is stuck in the Terminating state after deletion, it is likely because your pods are still using it (simply delete the pods/statefulset in such cases).
If you wish to delete a resource stuck in the Terminating state, use the commands below to bypass the PVC and PV protection finalizers.
kubectl patch pvc pvc_name -p '{"metadata":{"finalizers":null}}'
kubectl patch pv pv_name -p '{"metadata":{"finalizers":null}}'
Here is the documentation on PVC retention policy.
Here is the documentation on PV reclaim policy.
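Before removing the finalizers, it can be worth confirming what is actually blocking the deletion; a minimal check (placeholder names, standard kubectl) is:
kubectl describe pv <pv-name>
kubectl get pvc <pvc-name> -o jsonpath='{.metadata.finalizers}'
describe shows the bound claim and recent events, while the jsonpath output shows which finalizers are still set on the claim.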
PVs are cluster resources provisioned by an administrator, whereas PVCs are a user's request for storage. I guess the corresponding PVC is still deployed.
Delete the deployment. E.g.:
kubectl delete deployment mongo-db
List the Persistent Volume Claim. E.g.:
kubectl get pvc
Delete the corresponding PVC. E.g.:
kubectl delete pvc mongo-db
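Whether the PV disappears once its claim is gone depends on the PV's reclaim policy. A quick check, and a change of policy if needed (placeholder PV name), could look like this:
kubectl get pv <pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
With Delete, the provisioner removes the PV and its backing storage automatically once the claim is deleted; with Retain, the PV stays until it is cleaned up manually.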

kubectl create doesn't seem to do anything

I am running the command
kubectl create -f mypod.yaml --namespace=mynamespace
as I need to specify the environment variables through a configMap I created and specified in the mypod.yaml file. Kubernetes returns
pod/mypod created
but kubectl get pods doesn't show it in my list of pods, and I can't access it by name, as if it did not exist. However, if I try to create it again, it says that the pod is already created.
What may cause this, and how would I diagnose the problem?
By default, kubectl commands operate in the default namespace. But you created your pod in the mynamespace namespace.
Try one of the following:
kubectl get pods -n mynamespace
kubectl get pods --all-namespaces
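If you would rather not pass -n on every command, you can also set the default namespace of your current context (standard kubectl; mynamespace is the namespace from the question):
kubectl config set-context --current --namespace=mynamespace
After that, a plain kubectl get pods will list the pods in mynamespace.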

Kubernetes how to deploy a pod created by configuration file?

I created a pod from a configuration file, with a volume and privileged security.
How can I deploy this pod?
I tried to deploy it with kubectl run or with a deployment configuration file, but that creates a new pod without my volume and without the privileged security.
Best regards,
Daniel
Use these commands to create and verify
This will create the pod
# kubectl create -f abc-pod.yml
This will list the running pods
# kubectl get pods
This will show the details of that pod
# kubectl describe pod <pod_name>
This will show the logs of a container in the pod
# kubectl logs <pod_name> -c <container_name>
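Since kubectl run generates its own minimal spec, the volume and privileged settings have to live in the configuration file itself. A rough sketch of such a file (pod name, image, and paths are hypothetical placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod        # hypothetical pod name
spec:
  containers:
  - name: app                 # hypothetical container name and image
    image: nginx
    securityContext:
      privileged: true        # the privileged setting from the question
    volumeMounts:
    - name: data
      mountPath: /data        # hypothetical mount path inside the container
  volumes:
  - name: data                # the volume from the question
    hostPath:
      path: /var/lib/data     # hypothetical host path
Creating it with kubectl create -f (as in the first command above) preserves these settings, whereas kubectl run ignores them.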

How to delete all resources from Kubernetes at one time?

Include:
Daemon Sets
Deployments
Jobs
Pods
Replica Sets
Replication Controllers
Stateful Sets
Services
...
If there is a ReplicationController, then when I delete some deployments they will be regenerated. Is there a way to reset Kubernetes back to its initial status?
Method 1: To delete everything from the current namespace (which is normally the default namespace) using kubectl delete:
kubectl delete all --all
all refers to all resource types such as pods, deployments, services, etc. --all is used to delete every object of that resource type instead of specifying it using its name or label.
To delete everything from a certain namespace you use the -n flag:
kubectl delete all --all -n {namespace}
Method 2: You can also delete a namespace and re-create it. This will delete everything that belongs to it:
kubectl delete namespace {namespace}
kubectl create namespace {namespace}
Note (thanks #Marcus): all in kubernetes does not refer to every kubernetes object, such as admin-level resources (limits, quota, policy, authorization rules). If you really want to make sure to delete everything, it's better to delete the namespace and re-create it. Another way to do that is to use kubectl api-resources to get all resource types, as seen here:
kubectl delete "$(kubectl api-resources --namespaced=true --verbs=delete -o name | tr "\n" "," | sed -e 's/,$//')" --all
Kubernetes Namespace would be the perfect option for you. You can easily create a namespace resource.
kubectl create -f custom-namespace.yaml
where custom-namespace.yaml contains:
apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace
Now you can deploy all of the other resources (Deployment, ReplicaSet, Services, etc.) in that custom namespace.
If you want to delete all of these resources, you just need to delete the custom namespace. By deleting the custom namespace, all of the other resources are deleted as well. Without that, a ReplicaSet might create new pods when existing pods are deleted.
To work with the namespace, you need to add the --namespace flag to your k8s commands.
For example:
kubectl create -f deployment.yaml --namespace=custom-namespace
you can list all the pods in custom-namespace.
kubectl get pods --namespace=custom-namespace
You can also delete Kubernetes resources with the help of labels attached to them. For example, suppose the label below is attached to all resources:
metadata:
  name: label-demo
  labels:
    env: dev
    app: nginx
Now just execute the commands below.
Deleting resources using the app label:
$ kubectl delete pods,rs,deploy,svc,cm,ing -l app=nginx
Deleting resources using the environment label:
$ kubectl delete pods,rs,deploy,svc,cm,ing -l env=dev
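To double-check what a label selector matches before deleting, you can run the same selector with kubectl get (using the labels from the snippet above):
kubectl get pods,rs,deploy,svc,cm,ing -l app=nginx --show-labels
This lists exactly the objects the delete command would remove.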
You can also try kubectl delete all --all --all-namespaces
all refers to all resource types
--all refers to all objects of those resource types, including uninitialized ones
--all-namespaces means in all namespaces
First back up your namespace resources, then delete all resources found with the get all command:
kubectl get all --namespace={your-namespace} -o yaml > {your-namespace}.yaml
kubectl delete -f {your-namespace}.yaml
Nevertheless, some resources may still exist in your cluster.
Check with
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found --namespace {your-namespace}
If you really want to COMPLETELY delete your namespace, go ahead with:
kubectl delete namespace {your-namespace}
(tested with Client v1.23.1 and Server v1.22.3)
If you want to delete all K8s resources in the cluster, the easiest way is to delete the entire namespace.
kubectl delete ns <name-space>
kubectl delete deploy,service,job,statefulset,pdb,networkpolicy,prometheusrule,cm,secret,ds -n namespace -l label
kubectl delete all --all
to delete all the resources in the cluster.
After deleting all resources, k8s will relaunch the default services for the cluster.