To know the container start time, we generally describe the pod using:
kubectl describe pod <pod-name>. I need to access the container's start time via the kubectl API, as a timestamp or in any other format. Does this exist in the API?
Effectively you could grab this via the status and state transitions. With kubectl it would look like this:
kubectl get pod $PODNAME -o jsonpath='{.status.conditions[?(@.type=="Ready")].lastTransitionTime}'
would yield 2021-05-25T15:57:03Z right now for me.
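If it is specifically the container's start time you are after (rather than the Ready condition's transition), the pod status also exposes that directly; a quick sketch, assuming a single-container pod:
# Pod-level start time
kubectl get pod $PODNAME -o jsonpath='{.status.startTime}'
# Per-container start time (only present while the container is in the Running state)
kubectl get pod $PODNAME -o jsonpath='{.status.containerStatuses[0].state.running.startedAt}'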
You could give the pod API access, but that would be tricky (there is no easy policy way to say "access only to itself"). There is the Downward API volume system, but I don't think it includes this field.
I have written a shell script to solve this problem. It fetches pod info from inside the pod via the Kubernetes API and parses it for the required lastTransitionTime parameter:
# The API server address and the service account credentials mounted into every pod
APISERVER=https://kubernetes.default.svc.cluster.local
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
TOKEN=$(cat ${SERVICEACCOUNT}/token)
# List the pods in the namespace and keep the last lastTransitionTime
# (-k skips TLS verification; pass --cacert ${SERVICEACCOUNT}/ca.crt to verify instead)
curl -sk -H "Authorization: Bearer ${TOKEN}" ${APISERVER}/api/v1/namespaces/${NAMESPACE}/pods | grep "lastTransitionTime" | tail -1
This script outputs the last restart (transition) time of the pod (hence the tail -1) for me.
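A possible refinement, assuming the default behaviour where the pod's hostname equals the pod name, is to request only this pod's object instead of the whole pod list:
# $HOSTNAME defaults to the pod name, so this fetches only our own pod
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  ${APISERVER}/api/v1/namespaces/${NAMESPACE}/pods/${HOSTNAME} \
  | grep "lastTransitionTime" | tail -1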
Related
I would like to know how to find the Service name from the Pod name in Kubernetes.
Can anyone suggest an approach?
Services (spec.selector) and Pods (metadata.labels) are bound through shared labels.
So, you want to find all Services that include (some) of the Pod's labels.
kubectl get services \
  --selector=${KEY-1}=${VALUE-1},${KEY-2}=${VALUE-2},... \
  --namespace=${NAMESPACE}
Where ${KEY} and ${VALUE} are the Pod's label key(s) and value(s).
It's challenging, though, because a Service's selector labels can differ from the Pod's labels: you don't want an empty intersection, but a Service's selector may well be only a subset of any Pod's labels.
The following isn't quite what you want, but you may be able to extend it. Given the above, it enumerates the Services in a Namespace and, using each Service's selector labels, enumerates the Pods that match them:
NAMESPACE="..."

SERVICES="$(\
  kubectl get services \
  --namespace=${NAMESPACE} \
  --output=name)"

for SERVICE in ${SERVICES}
do
  # Read the Service's selector and convert it to key=value,key=value form
  SELECTOR=$(\
    kubectl get ${SERVICE} \
    --namespace=${NAMESPACE} \
    --output=jsonpath="{.spec.selector}" \
    | jq -r '.|to_entries|map("\(.key)=\(.value)")|@csv' \
    | tr -d '"')
  # List the Pods matching that selector
  PODS=$(\
    kubectl get pods \
    --selector=${SELECTOR} \
    --namespace=${NAMESPACE} \
    --output=name)
  printf "%s: %s\n" ${SERVICE} ${PODS}
done
NOTE This requires jq because I'm unsure whether it's possible to use kubectl's JSONPath to range over a Service's labels and reformat these as needed. Even using jq, my command's messy:
Get the Service's selector as {"k1":"v1","k2":"v2",...}
Convert this to "k1=v1","k2=v2",...
Trim the extra " characters with tr (see the illustration below)
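For illustration, with a hypothetical selector app=nginx,tier=web (and assuming a kubectl recent enough to print the selector map as JSON), the stages look like this:
{"app":"nginx","tier":"web"}      # jsonpath output of .spec.selector
"app=nginx","tier=web"            # after jq's @csv
app=nginx,tier=web                # after tr -d '"'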
If you want to do this for all Namespaces, you can wrap everything in:
NAMESPACES=$(kubectl get namespaces --output=name)
for NAMESPACE in ${NAMESPACES}
do
...
done
You can get information about a pod's Service from its environment variables.
(ref: https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#environment-variables)
kubectl exec <pod_name> -- printenv | grep SERVICE
Example (the original answer includes screenshots): first getting details about the pod, then reading the service from its environment variables.
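The output typically looks something like this (a hypothetical service named my-service; addresses and ports will differ, and note that these variables are only injected for Services that already existed when the pod started):
MY_SERVICE_SERVICE_HOST=10.0.0.11
MY_SERVICE_SERVICE_PORT=8080
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443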
I just want to list pods with their .status.podIP as an extra column.
It seems that as soon as I specify -o=custom-columns= the default columns NAME, READY, STATUS, RESTARTS, AGE will disappear.
The closest I was able to get is
kubectl get pod -o wide -o=custom-columns="NAME:.metadata.name,STATUS:.status.phase,RESTARTS:.status.containerStatuses[0].restartCount,PODIP:.status.podIP"
but that is not really equivalent to the default columns in the following ways:
READY: I don't know how to get the default output (which looks like 2/2 or 0/1) by using custom columns
STATUS: In the default behaviour, STATUS can be Running, Failed, Evicted, but .status.phase will never be Evicted. It seems that the default STATUS is a combination of .status.phase and .status.reason. Is there a way to show .status.phase if it's Running, but .status.reason otherwise?
RESTARTS: This only shows the restarts of the first container in the pod (I guess the sum of all containers would be the correct one)
AGE: Again I don't know how to get the age of the pod using custom-columns
Does anybody know the definitions of the default columns in custom-columns syntax?
I checked the differences in the API requests between kubectl get pods and kubectl get pods -o custom-columns:
With aggregation:
curl -k -v -XGET \
  -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json' \
  -H 'User-Agent: kubectl/v1.18.8 (linux/amd64) kubernetes/9f2892a' \
  'http://127.0.0.1:8001/api/v1/namespaces/default/pods?limit=500'
Without aggregation:
curl -k -v -XGET \
  -H 'Accept: application/json' \
  -H 'User-Agent: kubectl/v1.18.8 (linux/amd64) kubernetes/9f2892a' \
  'http://127.0.0.1:8001/api/v1/namespaces/default/pods?limit=500'
So you will notice that when -o custom-columns is used, kubectl gets a PodList instead of a Table in the response body. A PodList does not contain that aggregated data, so to my understanding it is not possible to get the same output from kubectl get pods using custom-columns.
Here's a code snippet responsible for the output that you desire. A possible solution would be to fork the client and customize it to your own needs since, as you may have noticed, this output requires some custom logic. Another possible solution would be to use one of the Kubernetes API client libraries. Lastly, you may want to try extending kubectl's functionality with kubectl plugins.
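As a rough client-side approximation of the default columns (not equivalent, for the reasons above: READY becomes per-container true/false flags rather than an n/m count, STATUS is only the phase, and AGE is a raw timestamp):
kubectl get pods -o custom-columns='NAME:.metadata.name,READY:.status.containerStatuses[*].ready,STATUS:.status.phase,RESTARTS:.status.containerStatuses[*].restartCount,AGE:.metadata.creationTimestamp'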
I have a Kubernetes cluster, in which different pods are running in different namespaces. How do I know if any pod failed?
Is there any single command to check the failed pod list or the restarted pod list?
And reason for the restart(logs)?
It depends on whether you want detailed information or just want to check the last few failed pods.
I would recommend you to read about Logging Architecture.
If you would like to have this detailed information, you should use 3rd-party software, as described in the Kubernetes documentation - Logging Using Elasticsearch and Kibana - or another tool such as Fluentd.
If you are using a cloud environment, you can use its integrated cloud logging tools (e.g. in Google Cloud Platform you can use Stackdriver).
If you want to check the logs to find the reason why a pod failed, this is well described in the K8s docs under Debug Running Pods.
If you want to get logs from a specific pod:
$ kubectl logs ${POD_NAME} -n ${NAMESPACE}
First, look at the logs of the affected container:
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}
If your container has previously crashed, you can access the previous container's crash log with:
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
You can obtain additional information using:
$ kubectl get events -o wide --all-namespaces | grep <your condition>
A similar question was posted in this SO thread; you can check it for more details.
This'll work: kubectl get pods --all-namespaces | grep -Ev '([0-9]+)/\1' (it filters out pods whose READY count is complete, leaving only the problematic ones).
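If you prefer filtering on the server side instead of with grep, a field selector is another option (note that pods stuck in e.g. CrashLoopBackOff still have phase Running and will not show up here):
# List pods whose phase is Failed, across all namespaces
kubectl get pods --all-namespaces --field-selector=status.phase=Failed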
Also, Lens is pretty good in these situations.
Most of the time, the reason for an app failure is printed in the last logs of the previous container. You can see them by simply adding the --previous flag to your kubectl logs ... command.
There is a documentation article here explaining how one can reserve resources on a node for system use.
What I did not manage to figure out is how one can get these values. If I understand things correctly, kubectl top nodes will return available resources, but I would like to see kube-reserved, system-reserved and eviction-threshold as well.
Is it possible?
By checking the kubelet's flags, we can get the values of kube-reserved, system-reserved and eviction-threshold.
SSH into the $NODE; ps aufx | grep kubelet will list the running kubelet process and its flags.
kube-reserved and system-reserved values mainly matter for scheduling, as the scheduler only sees the node's allocatable resources.
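Without SSH access you can also see the effect of those reservations by comparing the node's Capacity and Allocatable (allocatable is roughly capacity minus kube-reserved, system-reserved and the hard eviction threshold); for example:
# Print capacity and allocatable for a node
kubectl get node $NODE_NAME -o jsonpath='{.status.capacity}{"\n"}{.status.allocatable}{"\n"}'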
To see your eviction threshold (evictionHard) or systemReserved values, first log in on the master node and start kubectl proxy in the background using the following command:
kubectl proxy --port=8001 &
After that, run the following command to see the desired node's config (replace the node name in the variable, e.g. VAR="worker-2"):
VAR="NODE_NAME"; curl -sSL "http://localhost:8001/api/v1/nodes/$VAR/proxy/configz"
You should see a result that looks like this:
"evictionHard":{"imagefs.available":"15%","memory.available":"100Mi","nodefs.available":"10%","nodefs.inodesFree":"5%"},
"systemReserved":{"cpu":"600m","memory":"0.5Gi"}
Enjoy ;)
I was trying to get the Ready status of pods by using -o=jsonpath.
To be clearer about what I want, I would like to get the value 1/1 of the following example using -o=jsonpath.
NAME READY STATUS RESTARTS AGE
some_pod 1/1 Running 1 34d
I have managed to get some information such as the pod name or namespace.
kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{end}'
And I get something like:
some_namespace1 pod_name1
However, I don't know how to get the Ready status. What I would like to have is an output similar to this:
some_namespace1 pod_name1 1/1
I know I can use bash commands like cut:
kubectl get pods --all-namespaces | tail -1 | cut -d' ' -f8
However, I would like to get it by using kubectl
You can get all the pods status using the following command:
kubectl get pods -o jsonpath={.items[*].status.phase}
You can use a similar command for the name:
kubectl get pods -o jsonpath={.items[*].metadata.name}
EDIT:
You need to compare .status.replicas and .status.readyReplicas (on the owning workload, e.g. a Deployment) to see how many replicas are ready.
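For example, for a hypothetical Deployment named my-deployment, that comparison could look like:
kubectl get deployment my-deployment -o jsonpath='{.status.readyReplicas}/{.status.replicas}'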
I think this isn't directly reported in the Kubernetes API.
If you kubectl get pod ... -o yaml (or -o json) you'll get back an object matching a List (not included in the API docs) where each item is a Pod in the Kubernetes API, and -o jsonpath values follow that object structure. In particular a PodStatus has a list of ContainerStatus, each of which may or may not be ready, but the API itself doesn't return the counts as first-class fields.
There are a couple of different JSONPath implementations. I think Kubernetes only supports the syntax in the Kubernetes documentation, which doesn't include any sort of "length" function. (The original JavaScript implementation and a readily Googlable Java implementation both seem to, with slightly different syntax.)
The best I can come up with after playing with this is to report all of the individual container "ready" statuses:
kubectl get pods \
-o $'jsonpath={range .items[*]}{.metadata.name}\t{.status.containerStatuses[*].ready}\n{end}'
($'...' is bash/zsh syntax) but this still requires some post-processing to get back the original counts.
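One way to do that post-processing, assuming jq is available, is to count the true values per pod and rebuild the n/m form (a sketch, not the only option):
# namespace, name, and ready-containers/total-containers per pod
kubectl get pods --all-namespaces -o json | jq -r '.items[] |
  "\(.metadata.namespace)\t\(.metadata.name)\t\([.status.containerStatuses[]? | select(.ready)] | length)/\(.status.containerStatuses | length)"'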