Sorry for the newbie question. I am trying to deploy an image into k3d (a dockerized version of k3s).
k3d image import -c my-cluster registry.gitlab.com/aaa/bbb/ccc/hello123
Now I can see the image on a node:
kubectl get node my-node -o json | grep hello123
However, the documentation doesn't say much about what "import" does. Is my image running? Is it allocated to a pod yet? Where can I find its logs?
If I knew what pod it's running in, I could do kubectl logs. The list of the cluster's pods doesn't show anything relevant.
I am beginning to think my image isn't running yet.
Edit: This is further confirmed by
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
showing nothing relevant.
What's the next step?
You have just imported the image into the cluster nodes' local image store; it has not yet been assigned to any pod.
Once you create a pod that references the same image and tag, the kubelet will use the locally available image instead of pulling it.
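As a minimal sketch (the pod name hello123 is an assumption; imagePullPolicy: Never tells the kubelet to use the imported image and never attempt a pull):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello123   # hypothetical name
spec:
  containers:
  - name: hello123
    image: registry.gitlab.com/aaa/bbb/ccc/hello123
    imagePullPolicy: Never   # use the image imported by k3d, never pull
EOF
kubectl logs hello123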
If you can ssh to the k8s node (kubectl get nodes -o wide and ssh user@nodeip), you can run docker commands like:
docker images
You can expect to see the image that you pulled in the list.
If none of the pods are running, docker ps will return an empty list.
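Note that k3d/k3s nodes actually run containerd rather than Docker, so on a k3d node you would use crictl instead. A sketch, assuming k3d's default node naming (k3d-<cluster>-server-0), since each node is itself a Docker container on your host:
docker exec -it k3d-my-cluster-server-0 crictl images | grep hello123
docker exec -it k3d-my-cluster-server-0 crictl ps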
I would like to use kubectl cp to copy a file from a completed pod to my local host (local computer). I used kubectl cp <pod>:<file path> <local path>, however, it gave me an error: cannot exec into a container in a completed pod; current phase is Succeeded. Is there a way I can copy a file from a completed pod? It does not need to be kubectl cp. Any help appreciated!
Nope. If the pod is gone, it's gone for good. Only possibility would be if the data is stored in a PV or some other external resource. Pods are cattle, not pets.
You can still find the files, because the containers of a pod in the Completed state are not deleted, they are just not running.
I am not aware of any way to do it via Kubernetes itself, but here is how to do it if your container runtime is Docker:
$ ssh <node where the pod is>
$ docker ps -a | grep <pod name>   # the first column is the container ID
$ docker cp <container id>:/your/files ./
The files in containers are just overlayfs mounts; if the container still exists, the files still exist.
So if you are using the containerd runtime or something else, look under /var/lib/containers or similar (different runtimes keep their overlayfs mounts in different places, but they must be somewhere on the node; running $ mount there can help you find where).
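For containerd, a hedged sketch using crictl (the exact rootfs location depends on your runtime configuration, hence the placeholder):
$ ssh <node where the pod is>
$ crictl ps -a | grep <pod name>                  # find the exited container ID
$ crictl inspect <container id> | grep -i rootfs  # look for its root filesystem path
$ sudo cp <rootfs path>/your/files ./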
I have been trying to install Entando 6 on my Mac following the instructions on http://docs.entando.com, however when deploying to Kubernetes I get an error with quickstart-kc-deployer. Has anyone managed to successfully go through with the installation?
[screenshot: deployment failure]
I am also new to Kubernetes and have been trying to access the logs, but so far I have not been able to get at them to understand the root cause of the failure. Help on that is more than welcome as well.
Thanks.
If you're in a local development environment, the best bet is to try the new instructions at dev.entando.org. If you're installing on a cloud Kubernetes provider, try the updated instructions here.
I've reproduced them here for completeness:
Install Multipass (https://multipass.run/#install)
Launch VM
multipass launch --name ubuntu-lts --cpus 4 --mem 8G --disk 20G
Open a shell multipass shell ubuntu-lts
Install k3s curl -sfL https://get.k3s.io | sh -
Download Entando custom resource definitions
curl -L -C - https://raw.githubusercontent.com/entando/entando-releases/v6.2.0/dist/qs/custom-resources.tar.gz | tar -xz
Create custom resources
sudo kubectl create -f dist/crd
Create namespace
sudo kubectl create namespace entando
Download Helm chart
curl -L -C - -O https://raw.githubusercontent.com/entando/entando-releases/v6.2.0/dist/qs/entando.yaml
Configure access to your cluster
IP=$(hostname -I | awk '{print $1}')
sed -i "s/192.168.64.25/$IP/" entando.yaml
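As for reading the logs: a minimal sketch, assuming the deployer runs in the entando namespace created above (the exact pod name will differ on your install):
sudo kubectl -n entando get pods
sudo kubectl -n entando describe pod <quickstart-kc-deployer-pod>   # events often show why it failed
sudo kubectl -n entando logs <quickstart-kc-deployer-pod>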
If you want to deploy on a cloud provider (EKS, AKS, GKE) then there are new instructions under the Configuration and Operations section at
https://dev.entando.org/next/tutorials
I tried this command:
kubectl logs --tail
I got this error/help output:
Error: flag needs an argument: --tail
Aliases:
logs, log
Examples:
# Return snapshot logs from pod nginx with only one container
kubectl logs nginx
# Return snapshot logs for the pods defined by label app=nginx
kubectl logs -lapp=nginx
# Return snapshot of previous terminated ruby container logs from pod web-1
kubectl logs -p -c ruby web-1
# Begin streaming the logs of the ruby container in pod web-1
kubectl logs -f -c ruby web-1
# Display only the most recent 20 lines of output in pod nginx
kubectl logs --tail=20 nginx
# Show all logs from pod nginx written in the last hour
kubectl logs --since=1h nginx
# Return snapshot logs from first container of a job named hello
kubectl logs job/hello
# Return snapshot logs from container nginx-1 of a deployment named nginx
kubectl logs deployment/nginx -c nginx-1
Ummm, I just want to see all the logs. Isn't this a common thing to want to do? How can I tail all the logs for a cluster?
kail from the top answer is Linux and macOS only, but Stern also works on Windows.
It can do pod matching based on e.g. a regex match for the name, and then can follow the logs.
To follow ALL pods in the default namespace without printing any prior logs, you would run e.g.:
stern ".*" --tail 0
For absolutely everything, incl. internal stuff happening in kube-system namespace:
stern ".*" --all-namespaces --tail 0
Alternatively, you could follow all login-.* containers and get some context with e.g.:
stern "login-.*" --tail 25
If you don't mind using a third party tool, kail does exactly what you're describing.
Streams logs from all containers of all matched pods. [...] With no arguments, kail matches all pods in the cluster.
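A few hedged invocations (flag names per kail's README; check kail --help on your version):
kail                    # stream logs from every pod in the cluster
kail --ns kube-system   # limit to one namespace
kail -l app=nginx       # limit to pods matching a label selector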
The only thing you can do with plain kubectl is to get the logs of multiple pods using a label selector, e.g.:
kubectl logs -f -l 'app in (nginx,php)'
(Note that repeating -l just overrides the previous selector, so use a set-based expression like the above to match several labels.)
For getting all logs of the entire cluster, you have to set up centralized log collection, such as Elasticsearch, Fluentd and Kibana. The simplest way is to install them using Helm charts, as described here: https://linux-admin.tech/kubernetes/logging/2018/10/24/elk-stack-installation.html
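A hedged sketch of the Helm route (repo and chart names are the upstream defaults; the linked article may use different values):
helm repo add elastic https://helm.elastic.co
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
helm install elasticsearch elastic/elasticsearch
helm install fluentd fluent/fluentd
helm install kibana elastic/kibana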
I would recommend using a nice bash script named kubetail.
You can just download the bash script, add it to your project, and run, for example:
$ ./some-tools-directory/kubetail.sh --selector app=user --since 10m
to tail all pods with the label app=user.
Notice the nice display of colors per pod: [screenshot omitted]
(*) Run ./some-tools-directory/kubetail.sh -h to see some nice execution options.
kubetail.sh <search term> [-h] [-c] [-n] [-t] [-l] [-d] [-p] [-s] [-b] [-k] [-v] [-r] [-i] -- tail multiple Kubernetes pod logs at the same time
where:
-h, --help Show this help text
-c, --container The name of the container to tail in the pod (if multiple containers are defined in the pod).
Defaults to all containers in the pod. Can be used multiple times.
-t, --context The k8s context. ex. int1-context. Relies on ~/.kube/config for the contexts.
-l, --selector Label selector. If used the pod name is ignored.
-n, --namespace The Kubernetes namespace where the pods are located (defaults to "default")
-f, --follow Specify if the logs should be streamed. (true|false) Defaults to true.
-d, --dry-run Print the names of the matched pods and containers, then exit.
-p, --previous Return logs for the previous instances of the pods, if available. (true|false) Defaults to false.
-s, --since Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to 10s.
-b, --line-buffered This flag indicates to use line-buffered output. Defaults to false.
-e, --regex The type of name matching to use (regex|substring)
-j, --jq If your output is json - use this jq-selector to parse it.
example: --jq ".logger + \" \" + .message"
-k, --colored-output Use colored output (pod|line|false).
pod = only color pod name, line = color entire line, false = don't use any colors.
Defaults to line.
-z, --skip-colors Comma-separated list of colors to not use in output
If you have green foreground on black, this will skip dark grey and some greens -z 2,8,10
Defaults to: 7,8
--timestamps Show timestamps for each log line
--tail Lines of recent log file to display. Defaults to -1, showing all log lines.
-v, --version Prints the kubetail version
-r, --cluster The name of the kubeconfig cluster to use.
-i, --show-color-index Show the color index before the pod name prefix that is shown before each log line.
Normally only the pod name is added as a prefix before each line, for example "[app-5b7ff6cbcd-bjv8n]",
but if "show-color-index" is true then color index is added as well: "[1:app-5b7ff6cbcd-bjv8n]".
This is useful if you have color blindness or if you want to know which colors to exclude (see "--skip-colors").
Defaults to false.
examples:
kubetail.sh my-pod-v1
kubetail.sh my-pod-v1 -c my-container
kubetail.sh my-pod-v1 -t int1-context -c my-container
kubetail.sh '(service|consumer|thing)' -e regex
kubetail.sh -l service=my-service
kubetail.sh --selector service=my-service --since 10m
kubetail.sh --tail 1
I have hardly ever seen anyone pull all logs from an entire cluster, because you usually either need logs to manually search for certain issues, follow (-f) a routine, collect audit information, or stream all logs to a log sink to have them prepared for monitoring (e.g. Prometheus).
However, if there is a need to fetch all logs, the --tail option is not what you're looking for: tail only shows the last N lines of a given log source and avoids spilling the entire log history of a single source to your terminal.
For Kubernetes, you can write a simple script in a language of your choice (bash, Python, whatever) that runs kubectl get all --show-all --all-namespaces and iterates over the pods to run kubectl -n <namespace> logs <pod>. Be aware, though, that a pod may have multiple containers with individual logs each, and that there are also logs on the cluster nodes themselves, state changes of the deployments, extra meta information that changes, volume provisioning, and heaps more.
That's probably the reason why it's quite uncommon to pull all logs from an entire cluster, and why there's no easy (shortcut) way to do so.
# assumes you have pre-set the KUBECONFIG or are using the default one ...
do_check_k8s_logs(){
  # set the desired namespaces here vvvv
  for namespace in apiv2 default kube-system; do
    while read -r pod ; do
      while read -r container ; do
        # dump the last 2000 lines of each container's log
        kubectl -n "$namespace" logs "$pod" -c "$container" | tail -n 2000
      done < <(kubectl -n "$namespace" get pod "$pod" -o json \
                | jq -r '.status.containerStatuses[].name')
    done < <(kubectl -n "$namespace" get pods -o json | jq -r '.items[].metadata.name') \
      | tee -a "$HOME/Desktop/k8s-$namespace-logs.$(date +%Y%m%d_%H%M%S).log"
  done
}
do_check_k8s_logs
For your applications' logs, you probably just want to tail all the pods in the cluster.
But if you want logs for the control plane of a cluster, you can use:
https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-eks-now-delivers-kubernetes-control-plane-logs-to-amazon-/
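If you're on EKS, a hedged sketch of turning those control-plane logs on via the AWS CLI (the log types are the ones EKS documents; cluster and region are placeholders), after which they are delivered to CloudWatch Logs:
aws eks update-cluster-config \
  --region <region> \
  --name <cluster-name> \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'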
I have a Kubernetes cluster in Azure using AKS and I'd like to 'login' to one of the nodes. The nodes do not have a public IP.
Is there a way to accomplish this?
The procedure is described at length in the Azure documentation:
https://learn.microsoft.com/en-us/azure/aks/ssh. It consists of running a pod that you use as a relay to ssh into the nodes, and it works perfectly fine:
You probably have specified the ssh username and public key during the cluster creation. If not, you have to configure your node to accept them as the ssh credentials:
$ az vm user update \
--resource-group MC_myResourceGroup_myAKSCluster_region \
--name node-name \
--username theusername \
--ssh-key-value ~/.ssh/id_rsa.pub
To find your nodes names:
az vm list --resource-group MC_myResourceGroup_myAKSCluster_region -o table
When done, run a pod with an ssh client inside on your cluster; this is the pod you will use to ssh to your nodes:
kubectl run -it --rm my-ssh-pod --image=debian
# install ssh components, as there are none in the Debian image
apt-get update && apt-get install openssh-client -y
On your workstation, get the name of the pod you just created:
$ kubectl get pods
Add your private key into the pod:
$ kubectl cp ~/.ssh/id_rsa pod-name:/id_rsa
Then, in the pod, connect via ssh to one of your nodes:
ssh -i /id_rsa theusername@10.240.0.4
(to find the nodes IPs, on your workstation):
az vm list-ip-addresses --resource-group MC_myResourceGroup_myAKSCluster_region -o table
This Gist and this page have pretty good explanations of how to do it: SSHing into the nodes, not shelling into the pods/containers.
You can use this instead of SSH. It will create a tiny privileged pod and use nsenter to access the node.
https://github.com/mohatb/kubectl-wls
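For reference, a hedged sketch of what such a tool does under the hood (pod name, image, and node name are illustrative): it schedules a privileged pod in the host PID namespace on the target node and nsenters into PID 1, which gives you a root shell on the node.
kubectl run node-shell --rm -it --restart=Never --image=alpine --overrides='
{
  "spec": {
    "nodeName": "<node-name>",
    "hostPID": true,
    "containers": [{
      "name": "node-shell",
      "image": "alpine",
      "stdin": true,
      "tty": true,
      "securityContext": {"privileged": true},
      "command": ["nsenter", "--target", "1", "--mount", "--uts", "--ipc", "--net", "--pid"]
    }]
  }
}'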