Deleting namespace stuck in "Terminating" state - Kubernetes

I want to delete a namespace created in Kubernetes.
Command I executed:
kubectl delete namespaces devops-ui
But the process is taking too long (~20 min) and counting.
On checking the minikube dashboard, a pod is still there that is not getting deleted; it is in Terminating state.
Any solution?

Please delete the pods first using the below command:
kubectl delete pod pod_name_here --grace-period=0 --force --namespace devops-ui
Now delete the namespace:
kubectl delete namespaces devops-ui

When you delete a namespace, it triggers deletion of all the entities within that namespace.
You can run "kubectl get all -n namespace-name" to see the status of all the components within the namespace.
Ideally it is preferable to wait for all the pods to be cleanly deleted, instead of forcing the pod deletion with --grace-period=0: that only deletes the etcd record for the pod, but the corresponding containers could still be running.
Reference: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/
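To see what is keeping a namespace in Terminating, you can also inspect its status conditions; a minimal sketch, assuming the devops-ui namespace from the question:
kubectl get namespace devops-ui -o jsonpath='{.status.conditions}'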

Some CRDs have finalizers, and this will prevent a namespace from terminating.
Example followed from here:
https://github.com/kubernetes/kubernetes/issues/60807#issuecomment-408599873
@ManifoldFR, I had the same issue as yours and I managed to make it work by making an API call with a JSON file.
kubectl get namespace annoying-namespace-to-delete -o json > tmp.json
Then edit tmp.json and remove "kubernetes" from the spec.finalizers list.
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json https://kubernetes-cluster-ip/api/v1/namespaces/annoying-namespace-to-delete/finalize
Note: use this https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/ if you are running a test cluster and need to get cluster API access.
In my case it showed the resources holding up deletion (in the default namespace):
{
  "type": "NamespaceContentRemaining",
  "status": "True",
  "lastTransitionTime": "2020-10-09T09:35:11Z",
  "reason": "SomeResourcesRemain",
  "message": "Some resources are remaining: cephblockpools.ceph.rook.io has 2 resource instances, cephclusters.ceph.rook.io has 1 resource instances"
},
{
  "type": "NamespaceFinalizersRemaining",
  "status": "True",
  "lastTransitionTime": "2020-10-09T09:35:11Z",
  "reason": "SomeFinalizersRemain",
  "message": "Some content in the namespace has finalizers remaining: cephblockpool.ceph.rook.io in 2 resource instances, cephcluster.ceph.rook.io in 1 resource instances"
}
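If you prefer not to hand-edit tmp.json, a rough one-liner variant of the same finalize call, assuming kubectl proxy is running on 127.0.0.1:8001 and jq is installed:
kubectl get namespace annoying-namespace-to-delete -o json | jq '.spec.finalizers = []' | curl -k -H "Content-Type: application/json" -X PUT --data-binary @- http://127.0.0.1:8001/api/v1/namespaces/annoying-namespace-to-delete/finalize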


How to determine what the affinity/anti-affinity of a programmatically created pod is?

We are having an issue where, rarely, a pod fails to be scheduled with an error that all 16 nodes failed due to affinity/anti-affinity. We would not expect affinity to prevent any of the nodes from being used for scheduling.
I'd like to determine the actual cause of the affinity failing during scheduling, and for that I think I need to know which affinities the pod was initialized with. However, I can't look at chart configuration files, since these particular pods are being scheduled programmatically at runtime. Is there a kubectl command I can use to view what the pod's affinity was set to, or to determine why every node is failing its affinity checks?
Figured this out on my own. The command I used was:
kubectl get pods <pod_name> -o json | jq '.spec.affinity'
I had to yum install jq for this to work. If instead you want to look at the affinity of all pods, I think you need to remove the pod name and add .items[] in front of .spec in the jq command, as shown below.
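A minimal sketch of that all-pods variant, assuming jq is installed:
kubectl get pods -o json | jq '.items[].spec.affinity'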
For those curious, my affinity has this:
{
  "key": "host",
  "operator": "In",
  "values": [
    "yes"
  ]
}
That "yes" doesn't seem quite right to me. So yeah something funky is happening in our pod creation.

kubectl patch doesn't update status subresource

I am trying to update the status subresource for a Custom Resource and I see a discrepancy between the curl and kubectl patch commands. When I use a curl call it works perfectly fine, but when I use the kubectl patch command it says patched but with no change. Here are the commands that I used.
Using Curl:
When I connect to kubectl proxy and run the below curl call, it's successful and updates the status subresource on my CR.
curl -XPATCH -H "Accept: application/json" -H "Content-Type: application/json-patch+json" --data '[{"op": "replace", "path": "/status/state", "value": "newState"}]' 'http://127.0.0.1:8001/apis/acme.com/v1alpha1/namespaces/acme/myresource/default/status'
Kubectl patch command:
Using kubectl patch says the CR is patched, but with no change, and the status sub-resource is not updated.
$ kubectl -n acme patch myresource default --type='json' -p='[{"op": "replace", "path": "/status/state", "value":"newState"}]'
myresource.acme.com/default patched (no change)
However, when I do the kubectl patch on the other sub-resources like spec, it works fine. Am I missing something here?
As of kubectl v1.24, it is possible to patch subresources with an additional flag e.g. --subresource=status. This flag is considered "Alpha" but does not require enabling the feature.
As an example, with a yaml merge:
kubectl patch MyCrd myresource --type=merge --subresource status --patch 'status: {healthState: InSync}'
The Sysdig "What's New?" for v1.24 includes some more words about this flag:
Some kubectl commands like get, patch, edit, and replace will now contain a new flag --subresource=[subresource-name], which will allow fetching and updating status and scale subresources for all API resources.
You now can stop using complex curl commands to directly update subresources.
The --subresource flag is scheduled for promotion to "Beta" in Kubernetes v1.27 through KEP-2590: graduate kubectl subresource support to beta. The lifecycle of this feature can be tracked in #2590 Add subresource support to kubectl.
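Applied to the command from the question, a sketch of the equivalent JSON-patch call (assuming kubectl v1.24+):
kubectl -n acme patch myresource default --type='json' --subresource=status -p='[{"op": "replace", "path": "/status/state", "value":"newState"}]'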

How to fetch all the k8s objects which have a finalizer attached

I am trying to delete a namespace but it is stuck in the Terminating state. I tried removing the finalizer and applying a replace, but I was not able to succeed. Below are the steps and the error:
[root@~]# kubectl replace "/api/v1/namespaces/service-catalog/finalize" -f n.json
namespace/service-catalog replaced
[root@~]# k get ns service-catalog
NAME              STATUS        AGE
service-catalog   Terminating   6d21h
[root@~]# k delete ns service-catalog
Error from server (Conflict): Operation cannot be fulfilled on namespaces "service-catalog": The system is ensuring all content is removed from this namespace. Upon completion, this namespace will automatically be purged by the system.
In the namespace I had created a few CRD objects, and my best guess is that those are what is preventing it from being deleted. Right now I am not able to remember all the CRD objects that I created.
Is there a way I can query all the objects with the finalizer: service-catalog?
I was looking for all the finalizers that were used in our cluster and this worked for me. It checks all types of objects in all namespaces and returns their finalizers; you can probably use awk and grep to filter it down to what you're looking for (see the example below).
kubectl get all -o custom-columns=Kind:.kind,Name:.metadata.name,Finalizers:.metadata.finalizers --all-namespaces
Note: this doesn't return cluster-scoped resources.
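For example, a rough filter that hides rows with no finalizers (relying on the <none> placeholder that custom-columns prints for empty fields):
kubectl get all -o custom-columns=Kind:.kind,Name:.metadata.name,Finalizers:.metadata.finalizers --all-namespaces | grep -v '<none>'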
To get the registered CRD list, use:
$ kubectl get crds
elasticsearches.kubedb.com                 2020-03-03T04:05:13Z
elasticsearchversions.catalog.kubedb.com   2020-03-03T04:05:16Z
etcds.kubedb.com                           2020-03-03T04:05:13Z
etcdversions.catalog.kubedb.com            2020-03-03T04:05:16Z
ingresses.voyager.appscode.com             2020-03-03T05:07:42Z
m3dbclusters.operator.m3db.io              2020-03-02T10:56:55Z
Once you have the CRDs, you can find the objects of that type in given namespace:
$ kubectl get m3dbclusters.operator.m3db.io -n m3db
NAME           AGE
m3db-cluster   47h
To list all objects along with the finalizers, you can use custom-columns.
# kubectl get <crd-name> -n <namespace> -o custom-columns=Kind:.kind,Name:.metadata.name,Finalizers:.metadata.finalizers
$ kubectl get m3dbclusters.operator.m3db.io -n m3db -o custom-columns=Kind:.kind,Name:.metadata.name,Finalizers:.metadata.finalizers
Kind          Name           Finalizers
M3DBCluster   m3db-cluster   [operator.m3db.io/etcd-deletion]
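To also cover cluster-scoped resources and custom types not included in "kubectl get all", a rough sketch that walks every listable API resource and greps for the finalizer in question (slow on large clusters, intended only as an illustration):
for r in $(kubectl api-resources --verbs=list -o name); do
  kubectl get "$r" --all-namespaces --ignore-not-found \
    -o custom-columns=Kind:.kind,Namespace:.metadata.namespace,Name:.metadata.name,Finalizers:.metadata.finalizers \
    2>/dev/null | grep -i service-catalog
done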

How to get Kubernetes resource information (overall CPU and memory usage) through APIs

I have installed minikube in a VM and I have a service account token with all the privileges. Is there any API from Kubernetes to fetch the overall resource usage?
To get CPU and memory usage you can use the following (depending on the object you'd like to see):
kubectl top pods
or
kubectl top nodes
which will show you:
$ kubectl top pods
NAME                       CPU(cores)   MEMORY(bytes)
nginx-1-5d4f8f66d9-xmhnh   0m           1Mi
The API response might look like the following:
$ curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods
...
{
  "metadata": {
    "name": "nginx-1-5d4f8f66d9-xmhnh",
    "namespace": "default",
    "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-1-5d4f8f66d9-xmhnh",
    "creationTimestamp": "2019-07-29T11:48:13Z"
  },
  "timestamp": "2019-07-29T11:48:11Z",
  "window": "30s",
  "containers": [
    {
      "name": "nginx",
      "usage": {
        "cpu": "0",
        "memory": "1952Ki"
      }
    }
  ]
}
...
As for the API, there are a few ways of accessing it.
You can use a proxy by running kubectl proxy --port=8080 &
This command runs kubectl in a mode where it acts as a reverse proxy. It handles locating the API server and authenticating.
See kubectl proxy for more details.
Then you can explore the API with curl, wget, or a browser, like so:
curl http://localhost:8080/api/
You can access it without proxy by using authentication token.
It is possible to avoid using kubectl proxy by passing an authentication token directly to the API server, like this:
Using the jsonpath approach:
# Check all possible clusters, as your .KUBECONFIG may have multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
# Select name of cluster you want to interact with from above output:
export CLUSTER_NAME="some_server_name"
# Point to the API server referring to the cluster name
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
# Gets the token value
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 -d)
# Explore the API with TOKEN
curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
And you can also access the API using one of several official client libraries, for example Go or Python. Other libraries are available as well.
If you install the Kubernetes metrics server, it will expose those metrics as an API: https://github.com/kubernetes-incubator/metrics-server
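Putting the two together, a sketch of querying the Metrics API directly with the token obtained above (assuming metrics-server is installed and $APISERVER/$TOKEN are set as in the earlier snippet):
curl -X GET $APISERVER/apis/metrics.k8s.io/v1beta1/nodes --header "Authorization: Bearer $TOKEN" --insecure
curl -X GET $APISERVER/apis/metrics.k8s.io/v1beta1/pods --header "Authorization: Bearer $TOKEN" --insecure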

"There was a problem authenticating with your cluster" when making GitLab and k8s cluster integration

I created a k8s cluster in AWS by using kops.
I entered the Kubernetes cluster name: test.fuzes.io
API URL: https://api.test.fuzes.io/api/v1
and I filled the CA Certificate field with the result of
kubectl get secret {secret_name} -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
and finally I filled the Service Token field with the result of
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}')
But when I save the changes, I get the message:
There was a problem authenticating with your cluster. Please ensure your CA Certificate and Token are valid.
and I can't install the Helm Tiller (Kubernetes error: 404).
I really don't know what I did wrong. Please help me.
As @fuzes confirmed, cluster re-creation can be a workaround for this issue.
This was also described in GitLab Issues - Kubernetes authentication not consistent.
In short:
Using the same Kubernetes cluster integration configuration in multiple projects authenticates correctly on one but not the other.
Another suggestion is to work around this by just setting the CI variables (KUBE_NAMESPACE and KUBECONFIG) instead of using the Kubernetes integration.
Hope this will be helpful for future reference.
Adjust the API URL to https://api.test.fuzes.io:6443 (6443 is the default port the kube master listens on for the api-server; if you have changed it, use the custom one).
Use this command to validate the port: "kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'"
This command will print the api-server URL; you can add it directly in the field asked for.
Next, for your CA certificate, make sure you copy the whole command output, including the BEGIN CERTIFICATE and END CERTIFICATE lines.
With this you will be able to add the cluster.
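For the Service Token field, a sketch of pulling just the decoded token value, assuming the gitlab-admin service account secret from the question exists in kube-system:
kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode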
kubectl cluster-info | \
grep 'Kubernetes master' | \
awk '/http/ {print $NF}'
returns https://control.pomazan.xyz/k8s/clusters/c-t7qr5
But use something like https://80.211.195.192:6443 as the API URL.
{"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
This issue appears in many people's environments; finally it can be resolved!