kubectl patch doesn't update status subresource

I am trying to update the status subresource for a Custom Resource and I see a discrepancy between the curl and kubectl patch commands. When I use a curl call it works perfectly fine, but when I use the kubectl patch command it says patched but makes no change. Here are the commands that I used.
Using Curl:
When I connect to kubectl proxy and run the below curl call, it's successful and updates the status subresource on my CR.
curl -XPATCH -H "Accept: application/json" -H "Content-Type: application/json-patch+json" --data '[{"op": "replace", "path": "/status/state", "value": "newState"}]' 'http://127.0.0.1:8001/apis/acme.com/v1alpha1/namespaces/acme/myresource/default/status'
Kubectl patch command:
Using kubectl patch says the CR is patched, but nothing changes and the status subresource is not updated.
$ kubectl -n acme patch myresource default --type='json' -p='[{"op": "replace", "path": "/status/state", "value":"newState"}]'
myresource.acme.com/default patched (no change)
However, when I kubectl patch other parts of the resource, like spec, it works fine. Am I missing something here?

As of kubectl v1.24, it is possible to patch subresources with an additional flag, e.g. --subresource=status. This flag is considered "Alpha" but does not require enabling a feature gate.
As an example, with a yaml merge:
kubectl patch MyCrd myresource --type=merge --subresource status --patch 'status: {healthState: InSync}'
The Sysdig "What's New?" for v1.24 includes some more words about this flag:
Some kubectl commands like get, patch, edit, and replace will now contain a new flag --subresource=[subresource-name], which will allow fetching and updating status and scale subresources for all API resources.
You now can stop using complex curl commands to directly update subresources.
The --subresource flag is scheduled for promotion to "Beta" in Kubernetes v1.27 through KEP-2590: graduate kubectl subresource support to beta. The lifecycle of this feature can be tracked in #2590 Add subresource support to kubectl.
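Applied to the question above, the failing command should work once the flag is added; a minimal sketch, assuming kubectl v1.24+ against the same CR:
kubectl -n acme patch myresource default --subresource=status --type='json' -p='[{"op": "replace", "path": "/status/state", "value":"newState"}]'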

Related

Kubernetes increase resources for all deployments

I am new to Kubernetes. I have a K8s cluster with multiple deployments (more than 150), each scaled to more than 4 pods.
I have a requirement to increase resource limits for all deployments in the cluster; and I'm aware I can increase this directly via my deployment YAML.
However, I'm thinking if there is any way I can increase the resources for all deployments at one go.
Thanks for your help in advance.
There are a few things to point out here:
There is a kubectl patch command that allows you to:
Update field(s) of a resource using strategic merge patch, a JSON
merge patch, or a JSON patch.
JSON and YAML formats are accepted.
See examples below:
kubectl patch deploy deploy1 deploy2 --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value":"120Mi"}]'
or:
kubectl patch deploy $(kubectl get deploy -o go-template --template '{{range .items}}{{.metadata.name}}{{" "}}{{end}}') --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value":"120Mi"}]'
For further reference see this doc.
If your deployments share a label, you can target them via the kubectl set command:
kubectl set resources deployment -l key=value --limits memory=120Mi
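If the deployments don't all share a label, kubectl set resources also accepts --all to select every deployment in the namespace (worth double-checking on your kubectl version):
kubectl set resources deployment --all --limits memory=120Mi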
Also, you can use additional CLI tools like sed, awk or xargs. For example:
kubectl get deployments -o name | sed -e 's/.*\///g' | xargs -I {} kubectl patch deployment {} --type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Always"}]'
or:
kubectl get deployments -o name | xargs -I {} kubectl patch {} -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +'%s')\"}}}}}"
It is also worth noting that configuration files should be stored in version control before being pushed to the cluster. See the Configuration Best Practices for more details.
You can use kustomize's "components" system if you want to set them all to the same thing, but that's unlikely. A better solution is probably to write a little Python (or whatever language you prefer) script to modify all the YAML files and push them back into source control.
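As a rough sketch of that scripted approach, here is one way in shell rather than Python, assuming the manifests live under ./manifests and that the yq (v4) CLI is installed; both are assumptions, adjust to your repo:
# bump the memory limit of the first container in every manifest, in place
for f in manifests/*.yaml; do
  yq -i '.spec.template.spec.containers[0].resources.limits.memory = "120Mi"' "$f"  # yq v4 syntax
done
# then push the result back to source control
git add manifests && git commit -m "Increase memory limits to 120Mi"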

How can I get READY, STATUS, RESTARTS, AGE, etc. in kubectl as custom-columns?

I just want to list pods with their .status.podIP as an extra column.
It seems that as soon as I specify -o=custom-columns= the default columns NAME, READY, STATUS, RESTARTS, AGE disappear.
The closest I was able to get is
kubectl get pod -o wide -o=custom-columns="NAME:.metadata.name,STATUS:.status.phase,RESTARTS:.status.containerStatuses[0].restartCount,PODIP:.status.podIP"
but that is not really equivalent to the default columns in the following ways:
READY: I don't know how to get the default output (which looks like 2/2 or 0/1) using custom columns.
STATUS: In the default behaviour, STATUS can be Running, Failed, or Evicted, but .status.phase will never be Evicted. It seems that the default STATUS is a combination of .status.phase and .status.reason. Is there a way to show .status.phase if it's Running, and .status.reason otherwise?
RESTARTS: This only shows the restarts of the first container in the pod (I guess the sum of all containers would be the correct one)
AGE: Again I don't know how to get the age of the pod using custom-columns
Does anybody know the definitions of the default columns in custom-columns syntax?
I checked the differences in the API requests between kubectl get pods and kubectl get pods -o custom-columns:
With aggregation:
curl -k -v -XGET -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json' -H 'User-Agent: kubectl/v1.18.8 (linux/amd64) kubernetes/9f2892a' 'http://127.0.0.1:8001/api/v1/namespaces/default/pods?limit=500'
Without aggregation:
curl -k -v -XGET -H 'Accept: application/json' -H 'User-Agent: kubectl/v1.18.8 (linux/amd64) kubernetes/9f2892a' 'http://127.0.0.1:8001/api/v1/namespaces/default/pods?limit=500'
So you will notice that when -o custom-columns is used, kubectl gets a PodList instead of a Table in the response body. PodList does not have that aggregated data, so to my understanding it is not possible to get the same output with kubectl using custom-columns.
Here's a code snippet responsible for the output that you desire. A possible solution would be to fork the client and customize it to your own needs since, as you might already have noticed, this output requires some custom logic. Another possible solution would be to use one of the Kubernetes API client libraries. Lastly, you may want to try extending kubectl's functionality with kubectl plugins.
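If you don't want to fork the client or write a plugin, a middle ground (a sketch, not from the answer above) is to request the Table representation yourself through kubectl proxy and print the server-computed cells, which are exactly the default NAME/READY/STATUS/RESTARTS/AGE values:
# ask the API server for the Table representation and dump each row's cells
curl -s -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
  'http://127.0.0.1:8001/api/v1/namespaces/default/pods?limit=500' \
  | jq -r '.rows[] | .cells | @tsv'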

Deleting namespace stuck at "Terminating" state

I want to delete a namespace created in kubernetes.
Command I executed:
kubectl delete namespaces devops-ui
But the process is taking too long (~20 mins and counting).
On checking the minikube dashboard, a pod is still there which is not getting deleted; it is in Terminating state.
Any solution?
Please delete the pods first using the below command:
kubectl delete pod pod_name_here --grace-period=0 --force --namespace devops-ui
now delete the namespace
kubectl delete namespaces devops-ui
When you delete a namespace, it triggers deletion of all the entities within that namespace.
You can run "kubectl get all -n namespace-name" and see the status of all the components within the namespace.
Ideally it is preferable to wait for all the pods to be cleanly deleted, instead of forcing pod deletion with --grace-period=0: that only deletes the etcd record for the pod, while the corresponding containers could still be running.
Reference: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/
Some CRDs have finalizers, and these will prevent a namespace from terminating.
Example followed from here
https://github.com/kubernetes/kubernetes/issues/60807#issuecomment-408599873
@ManifoldFR, I had the same issue as yours and I managed to make it work by making an API call with a JSON file.
kubectl get namespace annoying-namespace-to-delete -o json > tmp.json
then edit tmp.json and remove "kubernetes" from the spec.finalizers array
curl -k -H "Content-Type: application/json" -X PUT --data-binary #tmp.json https://kubernetes-cluster-ip/api/v1/namespaces/annoying-namespace-to-delete/finalize
Note: see https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/ if you are running a test cluster and need to get cluster API access.
In my case it showed the resources holding up deletion (in the namespace's status conditions):
{
  "type": "NamespaceContentRemaining",
  "status": "True",
  "lastTransitionTime": "2020-10-09T09:35:11Z",
  "reason": "SomeResourcesRemain",
  "message": "Some resources are remaining: cephblockpools.ceph.rook.io has 2 resource instances, cephclusters.ceph.rook.io has 1 resource instances"
},
{
  "type": "NamespaceFinalizersRemaining",
  "status": "True",
  "lastTransitionTime": "2020-10-09T09:35:11Z",
  "reason": "SomeFinalizersRemain",
  "message": "Some content in the namespace has finalizers remaining: cephblockpool.ceph.rook.io in 2 resource instances, cephcluster.ceph.rook.io in 1 resource instances"
}
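The same finalize trick works without the temporary file; a sketch assuming kubectl proxy is running on 127.0.0.1:8001 and jq is installed:
# strip spec.finalizers and PUT the result straight to the finalize endpoint
kubectl get namespace annoying-namespace-to-delete -o json \
  | jq '.spec.finalizers = []' \
  | curl -H "Content-Type: application/json" -X PUT --data-binary @- \
    'http://127.0.0.1:8001/api/v1/namespaces/annoying-namespace-to-delete/finalize'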

How to get running pod status via REST API

Any idea how to get a pod's status via the Kubernetes REST API for a pod with a known name?
I can do it via kubectl by just typing "kubectl get pods --all-namespaces", since the output lists STATUS as a separate column, but I'm not sure which REST API to use to get the STATUS of a running pod.
Thank you
You can just query the API server:
curl -k -X GET -H "Authorization: Bearer [REDACTED]" \
https://127.0.0.1:6443/api/v1/pods
If you want to get the status you can pipe them through something like jq:
curl -k -X GET -H "Authorization: Bearer [REDACTED]" \
https://127.0.0.1:6443/api/v1/pods \
| jq '.items[] | .metadata.name + " " + .status.phase'
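Since the pod name is known, you can also hit the single-pod endpoint directly and pull just the phase; a sketch with NAMESPACE and POD as placeholders:
curl -k -H "Authorization: Bearer [REDACTED]" \
  https://127.0.0.1:6443/api/v1/namespaces/NAMESPACE/pods/POD \
  | jq -r '.status.phase'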
When you know the command but are not sure which REST API call it maps to, run the command with the -v9 option as below. Note that kubectl supports only a subset of operations in the imperative way (get, delete, create, etc.), so it's better to get familiar with the REST API.
kubectl -v9 get pods
The above will output the REST API call. This can be modified appropriately, and the output can be piped to jq to get a subset of the data.

Kubernetes REST API - Create deployment

I was looking at the Kubernetes API endpoints listed here. I'm trying to create a deployment, which can be done from the terminal using kubectl run CLUSTER-NAME IMAGE-NAME PORT. However, I can't seem to find any endpoint for this command in the link I posted above. I can create a pod using curl POST /api/v1/namespaces/{namespace}/pods and then delete it using curl -X DELETE http://localhost:8080/api/v1/namespaces/default/pods/pod-name, where pod-name has to be a single pod (if there are 100 pods, each must be deleted individually). Is there an API endpoint for creating and deleting deployments?
To make it easier to eliminate fields or restructure resource representations, Kubernetes supports multiple API versions, each at a different API path, such as /api/v1 or /apis/extensions/v1beta1; to extend the Kubernetes API, API groups are implemented.
Currently there are several API groups in use:
the core group (oftentimes called legacy, due to not having an explicit group name), which is at REST path /api/v1 and is not specified as part of the apiVersion field, e.g. apiVersion: v1.
the named groups are at REST path /apis/$GROUP_NAME/$VERSION, and use apiVersion: $GROUP_NAME/$VERSION (e.g. apiVersion: batch/v1). Full list of supported API groups can be seen in Kubernetes API reference.
To manage extensions resources such as Ingress, Deployments, and ReplicaSets refer to Extensions API reference.
As described in the reference, to create a Deployment:
POST /apis/extensions/v1beta1/namespaces/{namespace}/deployments
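Note that on current clusters (v1.16 and later) the extensions/v1beta1 path for Deployments has been removed and the apps/v1 group is used instead. A minimal sketch through kubectl proxy; the nginx-demo name and image are illustrative only:
curl -X POST -H "Content-Type: application/json" \
  http://localhost:8001/apis/apps/v1/namespaces/default/deployments \
  --data '{
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "nginx-demo"},
    "spec": {
      "replicas": 1,
      "selector": {"matchLabels": {"app": "nginx-demo"}},
      "template": {
        "metadata": {"labels": {"app": "nginx-demo"}},
        "spec": {"containers": [{"name": "nginx", "image": "nginx:1.25"}]}
      }
    }
  }'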
I debugged this by running kubectl with verbose logging: kubectl --v=9 update -f dev_inventory.yaml.
It showed the use of an API call like this one:
curl -i http://localhost:8001/apis/extensions/v1beta1/namespaces/default/deployments
Note that the first path element is apis, not the usual api. I don't know why it's like this, but the command above works.
I might be too late to help in this question, but here is what I tried on v1.9 to deploy a StatefulSet:
curl -kL -XPOST -H "Accept: application/json" -H "Content-Type: application/json" \
-H "Authorization: Bearer <*token*>" --data #statefulset.json \
https://<*ip*>:6443/apis/apps/v1/namespaces/eng-pst/statefulsets
I converted statefulset.yaml to JSON because I saw that the data format the API used for the POST was JSON.
I ran this command to find out the API call I need to make for my k8s object:
kubectl --v=10 apply -f statefulset.yaml
(might not need the v=10 level, but I wanted as much info as I could get)
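If you need the same YAML-to-JSON conversion, one option (assuming your kubectl supports --dry-run=client) is to let kubectl print the object as JSON without creating it:
kubectl create -f statefulset.yaml --dry-run=client -o json > statefulset.json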
The Kubernetes REST API documentation is quite sophisticated, but unfortunately the Deployment documentation is missing.
Since the REST schema is identical to other resources, you can figure out the REST calls:
GET retrieve a deployment by name:
curl -H "Authorization: Bearer ${KEY}" ${API_URL}/apis/extensions/v1beta1/namespaces/${namespace}/deployments/${NAME}
POST create a new deployment
curl -X POST -d #deployment-definition.json -H "Content-Type: application/json" -H "Authorization: Bearer ${KEY}" ${API_URL}/apis/extensions/v1beta1/namespaces/${namespace}/deployments
You should be able to use these calls right away when you provide values for the placeholders:
API key ${KEY}
API URL ${API_URL}
Deployment name ${NAME}
Namespace ${namespace}
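Following the same pattern, the remaining verbs fall out naturally; for example, deleting a deployment (same placeholders as above):
curl -X DELETE -H "Authorization: Bearer ${KEY}" ${API_URL}/apis/extensions/v1beta1/namespaces/${namespace}/deployments/${NAME}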
Have you tried the analogous URL? Note that Deployments are not under the core /api/v1 path and the resource name is plural:
http://localhost:8080/apis/apps/v1/namespaces/default/deployments/deployment-name