Is calling Delete in the Kubernetes Go API an idempotent operation, i.e. can it safely be called twice?
If so, is there any documentation defining this property?
The Go code just states:
Delete deletes the given obj from Kubernetes cluster.
Essentially, this statement is what one would expect anyway when looking at the code.
The API is served over HTTP, so you can check RFC 7231 on this:
A request method is considered "idempotent" if the intended effect on the server of multiple identical requests with that method is the same as the effect for a single such request. Of the request methods defined by this specification, PUT, DELETE, and safe request methods are idempotent.
If you're using kubectl, the delete command will fail on the second run because the resource cannot be found. You can prevent it from failing by using the --ignore-not-found flag.
$ kubectl run nginx --image nginx
pod/nginx created
$ kubectl delete pod nginx
pod "nginx" deleted
$ kubectl delete pod nginx
Error from server (NotFound): pods "nginx" not found
$ kubectl delete pod nginx --ignore-not-found
So it's idempotent on the server but not on the client.
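If you are calling Delete from code rather than kubectl, the usual pattern is therefore to treat a NotFound error on the repeated call as success (in Go, apimachinery's errors.IsNotFound check is the common way to do this). Below is a minimal sketch of that pattern, written with the Kubernetes Python client purely for illustration since the question does not pin down which Go client is in use; the pod name and namespace are placeholders:
from kubernetes import client, config
from kubernetes.client.rest import ApiException

def delete_pod_idempotent(name, namespace="default"):
    """Delete a pod and treat 'already gone' (HTTP 404) as success."""
    config.load_kube_config()  # use config.load_incluster_config() when running in a pod
    v1 = client.CoreV1Api()
    try:
        v1.delete_namespaced_pod(name, namespace)
    except ApiException as e:
        # a 404 means the pod is already gone, which is the desired end state
        if e.status != 404:
            raise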
Related
"oc get deployment" command is returning "No resources Found" as the result.
Even if I specify the namespace with the -n option to the above command, I get the same result.
However, the oc get pods command returns the correct result.
Meanwhile, the oc version is:
oc - v3.6.0
kubernetes - v1.6.1
openshift - v3.11.380
Check if you are connected to the correct Kubernetes environment (especially if you're running more than one).
If that is correct, then I guess either you don't have any deployments at all, or the deployments are in a different namespace than you think.
Try out listing all deployments:
oc get deployments -A
There are other objects that create pods, such as a StatefulSet or a DaemonSet. Because it is OpenShift, my feeling is that the pods were created by a DeploymentConfig, which is a popular way to create applications there.
Anyway, you can find out which object owns the pods by looking at the pod's ownerReferences. This command should work:
oc get pod -o yaml <podname> | grep ownerReference -A 6
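If you want to do the same check programmatically, here is a small sketch using the Kubernetes Python client that prints the owner of each pod; the namespace and the app=myapp label selector are placeholders (pods created by a DeploymentConfig will typically show a ReplicationController as their owner):
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
pods = v1.list_namespaced_pod("my-namespace", label_selector="app=myapp")  # placeholders
for pod in pods.items:
    for ref in pod.metadata.owner_references or []:
        print(pod.metadata.name, "->", ref.kind, ref.name)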
We are using Airflow to schedule Spark jobs on Kubernetes. Recently, I encountered a scenario where:
Airflow received error 404 with the message "pods pod-name not found"
I manually checked that the pod was actually working fine at that time. In fact, I was able to collect logs using kubectl logs -f -n namespace podname
Because of this, Airflow created another pod for running the same job, which resulted in a race condition.
Airflow uses the Kubernetes Python client's read_namespaced_pod() API:
def read_pod(self, pod):
    """Read POD information"""
    try:
        return self._client.read_namespaced_pod(pod.metadata.name, pod.metadata.namespace)
    except BaseHTTPError as e:
        raise AirflowException(
            'There was an error reading the kubernetes API: {}'.format(e)
        )
I believe read_namespaced_pod() calls the Kubernetes API. In order to investigate this further, I would like to check the logs of the Kubernetes API server.
Can you please share steps to check what is happening on the Kubernetes side?
Note: Kubernetes version is 1.18 and Airflow version is 1.10.10.
Answering the question from the perspective of logs/troubleshooting:
I believe read_namespaced_pod() calls the Kubernetes API. In order to investigate this further, I would like to check the logs of the Kubernetes API server.
Yes, you are correct, this function calls the Kubernetes API. You can check the logs of the Kubernetes API server by running:
$ kubectl logs -n kube-system KUBERNETES_API_SERVER_POD_NAME
I would also consider checking the kube-controller-manager:
$ kubectl logs -n kube-system KUBERNETES_CONTROLLER_MANAGER_POD_NAME
The example output of it:
I0413 12:33:12.840270 1 event.go:291] "Event occurred" object="default/nginx-6799fc88d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6799fc88d8-kchp7"
A side note!
The above commands will work assuming that the kube-apiserver and kube-controller-manager Pods are visible to you (in managed clusters they may not be).
Can you please share steps to check what is happening on the Kubernetes side?
This question targets the basics of troubleshooting/logs checking.
For that you can use the following commands (and the ones mentioned earlier):
$ kubectl get RESOURCE RESOURCE_NAME:
example: $ kubectl get pod airflow-pod-name
you can also add -o yaml for more information
$ kubectl describe RESOURCE RESOURCE_NAME:
example: $ kubectl describe pod airflow-pod-name
$ kubectl logs POD_NAME:
example: $ kubectl logs airflow-pod-name
Additional resources:
Kubernetes.io: Docs: Concepts: Cluster administration: Logging Architecture
Kubernetes.io: Docs: Tasks: Debug application cluster: Debug cluster
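Besides the logs, you can reproduce the exact call Airflow makes with a few lines of the same Kubernetes Python client and inspect the status code and reason the API server returns; the pod name and namespace below are placeholders:
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()
try:
    pod = v1.read_namespaced_pod("airflow-pod-name", "my-namespace")  # placeholders
    print(pod.metadata.name, pod.status.phase)
except ApiException as e:
    # a 404 here corresponds to the "pods pod-name not found" error Airflow reported
    print(e.status, e.reason, e.body)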
Hi, I'm a newbie to k8s and I was wondering where and how kubectl sends requests to the kube-apiserver.
So for example, if I'm sending a request such as "kubectl get pods --all-namespaces" (and my default Kubernetes endpoint is set as "192.168.64.2:8443"), my understanding is that this would translate to an HTTPS request such as "https://192.168.64.2:8443/api/v1/pods......etc" and kubectl would use the authentication stored in the .kube/config file. Am I right?
And I also have a metrics-server up and running on endpoint "172.17.0.8:4443", but how does kubectl know to use this IP when I run "kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes/<NODE_NAME> | jq"? Are all kubectl commands directed to one IP?
Thanks in advance.
Authentication is one of the steps that kubectl performs. You can see what happens when a command runs by increasing its verbosity, for example:
kubectl get pods -v9 --all-namespaces
Kubernetes knows the resource definitions and their implementations; you can check the available resource types with:
kubectl api-resources
So the Kubernetes api-server knows which resources belong to metrics-server and how to call it.
All kubectl requests go to the api-server. The api-server can either answer by itself or delegate to other components, for example an extension API server.
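To illustrate that delegation, here is a small sketch with the Kubernetes Python client that reads node metrics through the api-server, which in turn proxies the request to metrics-server via the metrics.k8s.io APIService registration (it assumes metrics-server is installed):
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()
# The request goes to the api-server endpoint from the kubeconfig, not to
# metrics-server's own IP; the api-server forwards it internally.
node_metrics = api.list_cluster_custom_object("metrics.k8s.io", "v1beta1", "nodes")
for item in node_metrics["items"]:
    print(item["metadata"]["name"], item["usage"])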
I have a service called my-service with an endpoint called refreshCache. my-service is hosted on multiple servers, and occasionally I want an event in my-service on one of the servers to trigger refreshCache on my-service on all servers. To do this I manually maintain a list of all the servers that host my-service, pull that list, and send a REST request to <server>/.../refreshCache for each server.
I'm now migrating my service to k8s. Similarly to before, where I was running refreshCache on all servers that hosted my-service, I now want to be able to run refreshCache on all the pods that host my-service. Unfortunately I cannot manually maintain a list of pod IPs, as my understanding is that IPs are ephemeral in k8s, so I need to be able to dynamically get the IPs of all pods in a node, from within a container in one of those pods. Is this possible?
Note: I'm aware this information is available with kubectl get endpoints ..., however kubectl will not be available within my container.
To achieve this, the best way is to use the in-cluster K8s config from inside the pod.
The K8s client library can help with this. Here is an example Python script that can be used to get pods and their metadata from inside a pod:
from kubernetes import client, config

def trigger_refresh_cache():
    # it works only if this script is run by K8s as a POD
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    print("Listing pods with their IPs:")
    ret = v1.list_pod_for_all_namespaces(label_selector='app=my-service')
    for i in ret.items:
        print("%s\t%s\t%s" %
              (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
    # Rest of the logic goes here to trigger endpoint
Here the method load_incluster_config() is used, which loads the in-cluster configuration via the service account attached to that pod.
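As a rough sketch of the part marked "Rest of the logic goes here", you could POST to each pod IP as below; the 8080 port, the /refreshCache path, and the app=my-service label are assumptions you would need to adapt to your deployment:
import requests
from kubernetes import client, config

def trigger_refresh_cache(port=8080, path="/refreshCache"):  # assumed port and path
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    pods = v1.list_pod_for_all_namespaces(label_selector="app=my-service")
    for pod in pods.items:
        url = "http://%s:%d%s" % (pod.status.pod_ip, port, path)
        try:
            requests.post(url, timeout=5)
        except requests.RequestException as exc:
            print("refresh failed for %s: %s" % (pod.metadata.name, exc))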
You don't need kubectl to access the Kubernetes API. You can do it with any tool that can make HTTP requests.
The Kubernetes API is a simple HTTP REST API, and all the authentication information that you need is present in the container if it runs as a Pod in the cluster.
To get the Endpoints object named my-service from within a container in the cluster, you can do:
curl -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://kubernetes.default.svc:443/api/v1/namespaces/{namespace}/endpoints/my-service
Note: replace {namespace} with the namespace of the my-service Endpoints resource.
And to extract the IP addresses from the returned JSON, you could pipe the output to a tool like jq:
... | jq -r '.subsets[].addresses[].ip'
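If you would rather not shell out to curl, the same request can be made directly from Python using only the mounted service account credentials; this sketch assumes the requests library is available in the image and uses the my-service name from the example above:
import requests

SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

def get_endpoint_ips(namespace, name="my-service"):
    with open(SA_DIR + "/token") as f:
        token = f.read()
    url = "https://kubernetes.default.svc:443/api/v1/namespaces/%s/endpoints/%s" % (namespace, name)
    resp = requests.get(url,
                        headers={"Authorization": "Bearer " + token},
                        verify=SA_DIR + "/ca.crt")
    resp.raise_for_status()
    return [addr["ip"]
            for subset in resp.json().get("subsets", [])
            for addr in subset.get("addresses", [])]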
Note that the Pod from which you are executing this needs read permissions for the Endpoints resource, otherwise the API request is denied.
You can do this with a ClusterRole, ClusterRoleBinding, and Service Account (you need to set this up only once):
kubectl create sa endpoint-reader
kubectl create clusterrole endpoint-reader --verb=get,list --resource=endpoints
kubectl create clusterrolebinding endpoint-reader --serviceaccount=default:endpoint-reader --clusterrole=endpoint-reader
Then, use the endpoint-reader ServiceAccount for the Pod from which you want to execute the above curl command by specifying it in the pod.spec.serviceAccountName field.
Granting permissions for any other API operations (i.e. combinations of verbs and resources) works in the same way.
I'm trying to delete an existing job using
kubectl delete job/job-name -n my-namespace
But this error is displayed
Scaling the resource failed with: Job.batch "kong-loop" is invalid:
spec.template: Invalid value: api.PodTemplateSpec{...}: field is
immutable; Current resource version 12189833
The solution posted by @esnible does work in this scenario, but it is simpler to do these steps:
Delete job with cascade false
kubectl delete job/jobname -n namespace --cascade=false
Delete any pod that exists
kubectl delete pod/podname -n namespace
Solution found in this Google Groups discussion: https://groups.google.com/forum/#!topic/kubernetes-users/YVmUgktoqtI
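The same non-cascading delete can also be done through the API, for example with the Kubernetes Python client, by setting the propagation policy to Orphan (which is what --cascade=false amounts to); the job name and namespace below are placeholders:
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
batch = client.BatchV1Api()
try:
    # Orphan the Pods, mirroring kubectl delete --cascade=false;
    # any leftover Pods then have to be deleted separately.
    batch.delete_namespaced_job("jobname", "namespace", propagation_policy="Orphan")
except ApiException as e:
    if e.status != 404:
        raise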
kubectl does an HTTP PUT to the job during the delete process. This PUT fails because the Job has gotten itself into an invalid state. We must DELETE without PUTing.
Try
kubectl proxy
curl -X DELETE localhost:8001/apis/batch/v1/namespaces/<namespace>/jobs/<jobname>
Then kill the kubectl proxy process. The namespace is typically default.