I am trying to get into the kubernetes-dashboard Pod, but I keep getting this error:
C:\Users\USER>kubectl exec -n kubernetes-dashboard kubernetes-dashboard-66c887f759-bljtc -it -- sh
OCI runtime exec failed: exec failed: unable to start container process: exec: "sh": executable file not found in $PATH: unknown
command terminated with exit code 126
The Pod is running normally and I can access the Kubernetes UI via the browser. However, I had some issues getting it running earlier and wanted to get inside the pod to run some commands, but I always get the same error shown above.
When I try the same command with a pod running nginx for example, it works:
C:\Users\USER>kubectl exec my-nginx -it -- sh
/ # ls
bin home proc sys
dev lib root tmp
docker-entrypoint.d media run usr
docker-entrypoint.sh mnt sbin var
etc opt srv
/ # exit
Any explanation, please?
Prefix the command to run with /bin so your updated command will look like:
kubectl exec -n kubernetes-dashboard <POD_NAME> -it -- /bin/sh
The reason you're getting that error is that Git for Windows ships a slightly modified MSYS environment that rewrites command arguments. Generally, using the full path /bin/sh or /bin/bash works.
That error message means literally what it says: there is no sh or any other shell in the container. There's no particular requirement that a container have a shell, and if a Docker image is built FROM scratch (as the Kubernetes dashboard image is) or a "distroless" image, it just may not contain one.
In most cases you shouldn't need to "enter a container", and you should use kubectl exec (or docker exec) sparingly if at all. This is doubly true in Kubernetes: it's not just that changes you make manually will get lost when the container exits, but also that in Kubernetes you typically have multiple replicas that you can't manually edit all at once, and also that in some cases the cluster can delete and recreate a Pod outside of your control.
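If you really do need a shell next to a distroless container and your cluster supports ephemeral containers, kubectl debug can attach a throwaway debugging container that shares the target's process namespace. A rough sketch (the busybox image and the pod/container names here are placeholders for your own):
kubectl debug -n kubernetes-dashboard kubernetes-dashboard-66c887f759-bljtc -it --image=busybox --target=kubernetes-dashboard -- sh
Nothing you change in the debug container survives the Pod, but it lets you inspect the target's processes and, via /proc/<pid>/root, its filesystem.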
I would like to use kubectl cp to copy a file from a completed pod to my local host (local computer). I used kubectl cp <namespace>/<pod-name>:/<path-to-file> <local-path>, however, it gave me this error: cannot exec into a container in a completed pod; current phase is Succeeded. Is there a way I can copy a file from a completed pod? It does not need to be kubectl cp. Any help appreciated!
Nope. If the pod is gone, it's gone for good. Only possibility would be if the data is stored in a PV or some other external resource. Pods are cattle, not pets.
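To illustrate the PV route: if the completed pod wrote its output to a PersistentVolumeClaim, you can mount the same claim in a throwaway pod and copy from there instead. A sketch, where the claim name my-data and the /data mount path are assumptions to replace with your own:
kubectl run pvc-reader --image=busybox --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"containers":[{"name":"pvc-reader","image":"busybox","command":["sleep","3600"],"volumeMounts":[{"name":"data","mountPath":"/data"}]}],"volumes":[{"name":"data","persistentVolumeClaim":{"claimName":"my-data"}}]}}'
kubectl wait --for=condition=Ready pod/pvc-reader --timeout=60s
kubectl cp pvc-reader:/data ./data
kubectl delete pod pvc-reader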
You can find the files, because the containers of a pod in the state Completed are not deleted; they are just not running.
I am not aware of any way to do it via Kubernetes itself, but here is how to do it if your container runtime is Docker:
$ ssh <node where the pod is running>
$ docker ps -a | grep <pod name>          # note the container ID in the first column
$ docker cp <container id>:/your/files ./
The files in containers are just overlayfs mounts; if the container still exists, the files still exist.
So if you are using the containerd runtime or something else, look somewhere like /var/lib/containers instead (I don't know where every runtime keeps its overlayfs mounts, but they have to live somewhere on the node; you can find out where with $ mount).
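If the node runs containerd, a rough equivalent using crictl (assuming it is installed on the node, as it usually is on kubeadm-provisioned clusters) would be:
$ ssh <node where the pod ran>
$ crictl ps -a | grep <pod name>      # also lists exited containers; note the container ID
$ crictl inspect <container id>       # JSON with the container's configuration, including its log path
$ mount | grep overlay                # shows where the overlayfs layers are mounted on the node
From there you can copy the files out of whatever directory those commands point you to.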
I know that the default location is under /var/lib/kubelet.
However, I don't want to depend on this knowledge. I'd like to get the location using the command line or by checking some config file (which hopefully is stored in a more permanent location).
Is there a way to determine where the location is, if it has indeed been changed by the user?
So it depends on how the --root-dir parameter was applied to the kubelet:
For example, you should see it in systemctl status kubelet.
Another approach is to search the journal for KubeletRootDir: journalctl -u kubelet | grep KubeletRootDir
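You can also look at the running process directly, which works however the flag was wired in (a sketch, assuming you have shell access to the node and the kubelet runs as a host process):
# print the kubelet command line and pull out an explicit --root-dir flag, if any
ps -o args= -C kubelet | tr ' ' '\n' | grep -- '--root-dir' \
  || echo "no --root-dir flag; the default applies"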
If --root-dir is not set, the default is /var/lib/kubelet/, and you can list a pod's mounted volume with something like:
sudo ls -l <root-dir>/pods/$(kubectl get pod -n mynamespace mypod -o 'jsonpath={.metadata.uid}')/volumes/kubernetes.io~empty-dir/mounted_volume
Hope this helps.
I tried this command:
kubectl logs --tail
I got this error/help output:
Error: flag needs an argument: --tail
Aliases:
logs, log
Examples:
# Return snapshot logs from pod nginx with only one container
kubectl logs nginx
# Return snapshot logs for the pods defined by label app=nginx
kubectl logs -lapp=nginx
# Return snapshot of previous terminated ruby container logs from pod web-1
kubectl logs -p -c ruby web-1
# Begin streaming the logs of the ruby container in pod web-1
kubectl logs -f -c ruby web-1
# Display only the most recent 20 lines of output in pod nginx
kubectl logs --tail=20 nginx
# Show all logs from pod nginx written in the last hour
kubectl logs --since=1h nginx
# Return snapshot logs from first container of a job named hello
kubectl logs job/hello
# Return snapshot logs from container nginx-1 of a deployment named nginx
kubectl logs deployment/nginx -c nginx-1
Ummm, I just want to see all the logs. Isn't this a common thing to want to do? How can I tail all the logs for a cluster?
kail from the top answer is Linux and macOS only, but Stern also works on Windows.
It can match pods by, e.g., a regex on the name, and then follow their logs.
To follow ALL pods in the default namespace without printing any prior logs, you would run e.g.:
stern ".*" --tail 0
For absolutely everything, incl. internal stuff happening in kube-system namespace:
stern ".*" --all-namespaces --tail 0
Alternatively you could e.g. follow all login-.* containers and get some context with
stern "login-.*" --tail 25
If you don't mind using a third party tool, kail does exactly what you're describing.
Streams logs from all containers of all matched pods. [...] With no arguments, kail matches all pods in the cluster.
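For reference, a few typical invocations (flag names per the kail README, so double-check with kail --help):
$ kail                      # stream logs from every pod in the cluster
$ kail --ns kube-system     # limit to a namespace
$ kail -l app=nginx         # limit to pods matching a label selector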
With plain kubectl, the only thing you can do is get the logs of multiple pods using label selectors, for example:
kubectl logs -f -l 'app in (nginx, php)'
(Repeating -l does not combine selectors; the last one wins, so use a set-based selector if you want to match several apps at once.)
For getting all logs of the entire cluster you have to set up centralized log collection, such as Elasticsearch, Fluentd and Kibana (EFK). The simplest way is to install it with Helm charts, as described here: https://linux-admin.tech/kubernetes/logging/2018/10/24/elk-stack-installation.html
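If you go the Helm route, the rough shape of it is below; the chart names assume the Elastic Helm repository and move around between releases, so treat this as a sketch and follow the linked guide or the charts' own docs:
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch
helm install kibana elastic/kibana
# plus a log shipper (Fluentd, Filebeat, ...) running as a DaemonSet to ship container logs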
I would recommend using a nice bash script named kubetail.
You can just download the bash script, add it to your project, and run, for example:
$ ./some-tools-directory/kubetail.sh --selector app=user --since 10m
This tails the logs of all pods with the label app=user.
Notice the nice display of colors per pod.
(*) Run ./some-tools-directory/kubetail.sh -h to see all the execution options.
kubetail.sh <search term> [-h] [-c] [-n] [-t] [-l] [-d] [-p] [-s] [-b] [-k] [-v] [-r] [-i] -- tail multiple Kubernetes pod logs at the same time
where:
-h, --help Show this help text
-c, --container The name of the container to tail in the pod (if multiple containers are defined in the pod).
Defaults to all containers in the pod. Can be used multiple times.
-t, --context The k8s context. ex. int1-context. Relies on ~/.kube/config for the contexts.
-l, --selector Label selector. If used the pod name is ignored.
-n, --namespace The Kubernetes namespace where the pods are located (defaults to "default")
-f, --follow Specify if the logs should be streamed. (true|false) Defaults to true.
-d, --dry-run Print the names of the matched pods and containers, then exit.
-p, --previous Return logs for the previous instances of the pods, if available. (true|false) Defaults to false.
-s, --since Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to 10s.
-b, --line-buffered This flag indicates to use line-buffered output. Defaults to false.
-e, --regex The type of name matching to use (regex|substring)
-j, --jq If your output is json - use this jq-selector to parse it.
example: --jq ".logger + \" \" + .message"
-k, --colored-output Use colored output (pod|line|false).
pod = only color pod name, line = color entire line, false = don't use any colors.
Defaults to line.
-z, --skip-colors Comma-separated list of colors to not use in output
If you have green foreground on black, this will skip dark grey and some greens -z 2,8,10
Defaults to: 7,8
--timestamps Show timestamps for each log line
--tail Lines of recent log file to display. Defaults to -1, showing all log lines.
-v, --version Prints the kubetail version
-r, --cluster The name of the kubeconfig cluster to use.
-i, --show-color-index Show the color index before the pod name prefix that is shown before each log line.
Normally only the pod name is added as a prefix before each line, for example "[app-5b7ff6cbcd-bjv8n]",
but if "show-color-index" is true then color index is added as well: "[1:app-5b7ff6cbcd-bjv8n]".
This is useful if you have color blindness or if you want to know which colors to exclude (see "--skip-colors").
Defaults to false.
examples:
kubetail.sh my-pod-v1
kubetail.sh my-pod-v1 -c my-container
kubetail.sh my-pod-v1 -t int1-context -c my-container
kubetail.sh '(service|consumer|thing)' -e regex
kubetail.sh -l service=my-service
kubetail.sh --selector service=my-service --since 10m
kubetail.sh --tail 1
I have hardly ever seen anyone pull all logs from an entire cluster, because you usually either need logs to search for a specific issue or to follow (-f) a routine, or you collect audit information, or you stream all logs to a log sink so they are ready for monitoring (e.g. Prometheus).
However, if there is a need to fetch all logs, the --tail option is not what you're looking for: tail only shows the last N lines of a single log source, precisely to avoid spilling its entire history to your terminal.
For Kubernetes, you can write a simple script in a language of your choice (bash, Python, whatever) that runs kubectl get all --show-all --all-namespaces and iterates over the pods, calling kubectl -n <namespace> logs <pod> for each. But be aware that a pod may contain multiple containers with individual logs, and that there are also logs on the cluster nodes themselves, state changes of the deployments, changing meta information, volume provisioning, and heaps more.
That's probably why it's quite uncommon to pull all logs from an entire cluster, and why there is no easy shortcut for doing so.
# assumes you have pre-set the KUBECONFIG or are using the default one ...
do_check_k8s_logs(){
    # set the desired namespaces here vvvv
    for namespace in apiv2 default kube-system; do
        while read -r pod; do
            while read -r container; do
                # last 2000 lines per container
                kubectl -n "$namespace" logs "$pod" -c "$container" | tail -n 2000
            done < <(kubectl -n "$namespace" get pod "$pod" -o json | jq -r '.status.containerStatuses[].name')
        done < <(kubectl -n "$namespace" get pods -o json | jq -r '.items[].metadata.name') \
            | tee -a ~/Desktop/k8s-$namespace-logs.$(date "+%Y%m%d_%H%M%S").log
    done
}
do_check_k8s_logs
For your applications' data, you probably just want to tail all the pods in the cluster.
But if you want logs for the control plane of the cluster, you can use (on EKS):
https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-eks-now-delivers-kubernetes-control-plane-logs-to-amazon-/
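For completeness, enabling those EKS control plane logs from the CLI looks roughly like this (the cluster name is a placeholder; the log types are the ones EKS supports):
aws eks update-cluster-config --name my-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'
The logs then land in a CloudWatch Logs group for the cluster.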
How can I set the timeout for the kubectl exec command ?
The command below does not work:
kubectl exec -it pod_name bash --requrest-timeout=0 -n test
You have a typo, try:
kubectl exec -it pod_name bash --request-timeout=0 -n test
See the official kubectl documentation for --request-timeout:
--request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
Note that "0" is already the default.
We ran into this issue standing up an on-prem instance of K8s. The answer in our situation was haproxy.
If you have a load balancer in front of your K8s API (control plane), I'd look at a timeout on it as the culprit.
I believe the default for haproxy was 20 seconds, so after I changed it to 60m we never noticed the problem again.
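For reference, the relevant haproxy timeouts look roughly like this (the values are examples; they belong in whichever defaults or backend section fronts the API server):
defaults
    timeout client  60m
    timeout server  60m
    timeout tunnel  60m   # keeps long-lived exec/attach/port-forward streams open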