Kubectl status nodes provides different responses for equivalent clusters - kubernetes

I have recently started using kubectl krew (v0.3.4), which I then used to install the "status" plugin (v0.4.1).
I am currently managing several different clusters and checking the nodes' status. Most of the clusters respond with something like:
Node/[NodeName], created 25d ago linux Oracle Linux Server 7.8
(amd64), kernel 4.1.12-124.36.4.el7uek.x86_64, kubelet v1.18.2, kube-proxy v1.18.2
cpu: 0.153/7 (2%)
mem: 4.4GB/7.1GB (63%)
ephemeral-storage: 2.2GB
There is one cluster that answers, for some reason:
Node/[nodeName], created 11d ago
linux Oracle Linux Server 7.8 (amd64), kernel 4.1.12-124.26.5.el7uek.x86_64, kubelet v1.18.2, kube-proxy v1.18.2
cpu: 5, mem: 7.1GB, ephemeral-storage: 2.2GB
(Let me clarify that I'm trying to automate some resource checks, and the inconsistent display is quite annoying; besides, the used vs. total resources view is exactly what I need!)
I am unable to locate the status plugin's repository, and I have no idea where to take this issue. kubectl version says both clusters have the same server version, and I'm executing the kubectl status command from my local machine in both cases, so I am completely out of ideas.
Does anyone know why this might be happening, or where I can go to look for answers?

To display used and total resources, you can use kubectl top:
Display Resource (CPU/Memory/Storage) usage.
The top command allows you to see the resource consumption for nodes or pods.
This command requires Metrics Server to be correctly configured and working on the server.
Available Commands:
node Display Resource (CPU/Memory/Storage) usage of nodes
pod Display Resource (CPU/Memory/Storage) usage of pods
Usage:
kubectl top [flags] [options]
You can also have a look at Tools for Monitoring Resources inside Kubernetes docs.
As for doing the same using the Kubernetes Python client, you can use:
from kubernetes.config import load_kube_config
from kubernetes.client import CustomObjectsApi
load_kube_config()
cust = CustomObjectsApi()
cust.list_cluster_custom_object('metrics.k8s.io', 'v1beta1', 'nodes') # All node metrics
cust.list_cluster_custom_object('metrics.k8s.io', 'v1beta1', 'pods') # All Pod Metrics
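Since the goal is to automate a used vs. total check, one option is to combine the metrics above with the node capacity reported by the core API. A rough sketch, assuming the field layout returned by the metrics.k8s.io API (adjust the printout to your needs):
from kubernetes.config import load_kube_config
from kubernetes.client import CoreV1Api, CustomObjectsApi

load_kube_config()
core = CoreV1Api()
cust = CustomObjectsApi()

# Total (allocatable) resources per node, from the core API
allocatable = {n.metadata.name: n.status.allocatable for n in core.list_node().items}

# Current usage per node, from the metrics API
for item in cust.list_cluster_custom_object('metrics.k8s.io', 'v1beta1', 'nodes')['items']:
    name = item['metadata']['name']
    print(name, 'used:', item['usage'], 'allocatable:', allocatable.get(name))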

Related

Kind Kubernetes cluster doesn't have container logs

I have installed a Kubernetes cluster using kind, as it was easier to set up and run in my local VM. I also installed Docker separately. I then created a Docker image for a Spring Boot application I built that prints messages to stdout, and added it to kind's local registry. Using this newly created local image, I created a deployment in the Kubernetes cluster with the kubectl apply -f config.yaml CLI command. Using a similar method I've also deployed Fluentd, hoping to collect the logs from /var/log/containers, which would be mounted into the Fluentd container.
I noticed the /var/log/containers/ symlink doesn't exist. However, there is /var/lib/docker/containers/, and it has folders for some containers that were created in the past. None of the new container IDs seem to exist in /var/lib/docker/containers/ either.
I can see logs in the console when I run kubectl logs pod-name even though I'm unable to find the logs in the local storage.
Following the answer given by a Stack Overflow member in another thread, I was able to get some information, but not all of it.
I have confirmed Docker is configured with the JSON logging driver by running the following command:
docker info | grep -i logging
When I run the following command (found in the thread mentioned above) I can get the container ID.
kubectl get pod pod-name -ojsonpath='{.status.containerStatuses[0].containerID}'
However, I cannot use it with docker inspect, as Docker is not aware of any such container, which I assume is because it is managed by the kind control plane.
I'd appreciate it if the experts in the forum could help identify where the logs are written and recreate the /var/log/containers symbolic link so I can access the container logs.
It's absolutely normal that your locally installed Docker doesn't show the containers running in pods created by kind Kubernetes. Let me explain why.
First, we need to figure out why kind actually needs Docker. It does not need it for running the containers inside pods; it needs Docker to create a container which will act as a Kubernetes node - and on this container you will have the pods which contain the containers you are looking for.
kind is a tool for running local Kubernetes clusters using Docker container “nodes”.
So basically the layers are: your VM -> a container hosted on your VM's Docker, acting as a Kubernetes node -> on this container there are pods -> in those pods are the containers.
In kind quickstart section you can find more detailed information about image used by kind:
This will bootstrap a Kubernetes cluster using a pre-built node image. Prebuilt images are hosted at kindest/node, but to find images suitable for a given release currently you should check the release notes for your given kind version (check with kind version) where you'll find a complete listing of images created for a kind release.
Back to your question, let's find missing containers!
On my local VM, I set up kind Kubernetes and installed the kubectl tool. Then I created an example nginx deployment. By running kubectl get pods I can confirm the pods are working.
Let's find container which is acting as node by running docker ps -a:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d2892110866 kindest/node:v1.21.1 "/usr/local/bin/entr…" 50 minutes ago Up 49 minutes 127.0.0.1:43207->6443/tcp kind-control-plane
Okay, now we can exec into it and find the containers. Note that the kindest/node image does not use Docker as its container runtime; it uses containerd, which we can inspect with crictl.
Let's exec into the node with docker exec -it 1d2892110866 sh:
# ls
bin boot dev etc home kind lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
#
Now we are in node - time to check if containers are here:
# crictl ps -a
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
135c7ad17d096 295c7be079025 47 minutes ago Running nginx 0 4e5092cab08f6
ac3b725061e12 295c7be079025 47 minutes ago Running nginx 0 6ecda41b665da
a416c226aea6b 295c7be079025 47 minutes ago Running nginx 0 17aa5c42f3512
455c69da57446 296a6d5035e2d 57 minutes ago Running coredns 0 4ff408658e04a
d511d62e5294d e422121c9c5f9 57 minutes ago Running local-path-provisioner 0 86b8fcba9a3bf
116b22b4f1dcc 296a6d5035e2d 57 minutes ago Running coredns 0 9da6d9932c9e4
2ebb6d302014c 6de166512aa22 57 minutes ago Running kindnet-cni 0 6ef310d8e199a
2a5e0a2fbf2cc 0e124fb3c695b 57 minutes ago Running kube-proxy 0 54342daebcad8
1b141f55ce4b2 0369cf4303ffd 57 minutes ago Running etcd 0 32a405fa89f61
28c779bb79092 96a295389d472 57 minutes ago Running kube-controller-manager 0 2b1b556aeac42
852feaa08fcc3 94ffe308aeff9 57 minutes ago Running kube-apiserver 0 487e06bb5863a
36771dbacc50f 1248d2d503d37 58 minutes ago Running kube-scheduler 0 85ec6e38087b7
Here they are. You can also notice that there are other containers acting as Kubernetes components.
For further container debugging I would suggest reading the documentation about debugging Kubernetes nodes with crictl.
Please also note that on your local VM there is the file ~/.kube/config, which has the information kubectl needs to communicate between your VM and the Kubernetes cluster (in the case of kind, the Docker container running locally).
Hope it helps. Feel free to ask any questions.
EDIT - ADDED INFO ON HOW TO SET UP MOUNT POINTS
Answering the question from the comments about mounting a directory from the node to the local VM: we need to set up "Extra Mounts". Let's create the definition needed for kind:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # add a mount from /tmp/logs on the host to /var/log/pods on the node
  extraMounts:
  - hostPath: /tmp/logs/
    containerPath: /var/log/pods
    # optional: if set, the mount is read-only.
    # default false
    readOnly: false
    # optional: if set, the mount needs SELinux relabeling.
    # default false
    selinuxRelabel: false
    # optional: set propagation mode (None, HostToContainer or Bidirectional)
    # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
    # default None
    propagation: Bidirectional
Note that I'm using /var/log/pods instead of /var/log/containers/ - this is because on a cluster created by kind, the containers directory has only symlinks to the logs in the pods directory.
Save this YAML, for example as cluster-with-extra-mount.yaml, then create a cluster using it (create the directory /tmp/logs before running this command!):
kind create cluster --config=/tmp/cluster-with-extra-mount.yaml
Then all the container logs will be in /tmp/logs on your VM.

jvm heap usage history in a killed Kubernetes pod

I have a Play Framework based Java application deployed in Kubernetes. One of the pods died due to running out of memory / a memory leak. Locally, I can use some utilities to monitor JVM heap usage. I am new to Kubernetes.
I'd appreciate it if you could tell me how to check the heap usage history of my application in a Kubernetes pod that got killed. kubectl get events on the killed pod gives the event history, but I want to check the object-wise heap usage history on that dead pod. Thanks much.
You can install addons or external tools like Prometheus or metrics-server.
Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud.
You can define queries:
For CPU percentage
avg((sum (rate (container_cpu_usage_seconds_total {container_name!="" ,pod="<Pod name>" } [5m])) by (namespace , pod, container ) / on (container , pod , namespace) ((kube_pod_container_resource_limits_cpu_cores >0)*300))*100)
For Memory percentage
avg((avg (container_memory_working_set_bytes{pod="<pod name>"}) by (container_name , pod ))/ on (container_name , pod)(avg (container_spec_memory_limit_bytes>0 ) by (container_name, pod))*100)
Take a look: prometheus-pod-memory-usage.
You can visualize such metrics using Grafana - take a look at how to set it up with Prometheus - grafana-prometheus-setup.
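If you would rather pull these numbers programmatically than view them in Grafana, Prometheus also exposes an HTTP query API. A minimal Python sketch; the Prometheus URL is a placeholder for wherever your server is reachable, and the query mirrors the memory expression above:
import requests

PROM_URL = "http://prometheus-server.monitoring.svc:9090"  # placeholder, adjust to your setup
QUERY = 'avg(container_memory_working_set_bytes{pod="<pod name>"}) by (container)'

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY})
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    container = result["metric"].get("container", "<unknown>")
    mem_bytes = float(result["value"][1])  # the value is a [timestamp, "value"] pair
    print(f"{container}: {mem_bytes / (1024 * 1024):.1f} MiB")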
Metrics-server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.
Metrics Server collects resource metrics from Kubelets and exposes them in Kubernetes apiserver through Metrics API for use by Horizontal Pod Autoscaler and Vertical Pod Autoscaler. Metrics API can also be accessed by kubectl top, making it easier to debug autoscaling pipelines.
You can execute:
$ kubectl top pod <your-pod-name> --namespace=your-namespace --containers
The command above will give you both the CPU and the memory usage for a given pod and its containers.
First see how to install metrics-server: metrics-server-installation.
Otherwise, if you want to check CPU/memory usage without installing any third-party tool, you can get the memory and CPU usage of a pod from its cgroup:
Open a shell in the running container: kubectl exec -it pod_name -- /bin/bash
For CPU usage, cd /sys/fs/cgroup/cpu and run cat cpuacct.usage
For memory usage, cd /sys/fs/cgroup/memory and run cat memory.usage_in_bytes
Remember that memory usage is in bytes.
Take a look: memory-usage-kubernetes.

About k8s metrics server: only some resources can be monitored

Version
k8s version: v1.19.0
metrics server: v0.3.6
I set up a k8s cluster and the metrics server. It can check the nodes and the pods running on the master node,
but pods on the worker nodes cannot be seen; they return unknown.
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
u-29 1160m 14% 37307Mi 58%
u-31 2755m 22% 51647Mi 80%
u-32 4661m 38% 32208Mi 50%
u-34 1514m 12% 41083Mi 63%
u-36 1570m 13% 40400Mi 62%
When the pod is running on a worker node, it returns: unable to fetch pod metrics for pod default/nginx-7764dc5cf4-c2sbq: no metrics known for pod
When the pod is running on the master node, it returns CPU and memory:
NAME CPU(cores) MEMORY(bytes)
nginx-7cdd6c99b8-6pfg2 0m 2Mi
This is a community wiki answer based on OP's comment posted for better visibility. Feel free to expand it.
The issue was caused by using different versions of docker on different nodes. After upgrading docker to v19.3 on both nodes and executing kubeadm reset the issue was resolved.
Generally the metrics server receives the metrics via the kubelet, so maybe there is a problem retrieving the information from it.
You will need to look at the following configuration flags mentioned in the readme.
Configuration
Depending on your cluster setup, you may also need to change flags passed to the Metrics Server container. Most useful flags:
--kubelet-preferred-address-types - The priority of node address types used when determining an address for connecting to a particular node (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
--kubelet-insecure-tls - Do not verify the CA of serving certificates presented by Kubelets. For testing purposes only.
--requestheader-client-ca-file - Specify a root certificate bundle for verifying client certificates on incoming requests.
Maybe you can try the configuration changes below:
--kubelet-preferred-address-types=InternalIP
--kubelet-insecure-tls
You might also refer to this ticket for more information.

glusterfs: failed to get the 'volume file' from server

I see the following error in the pod logs:
, the following error information was pulled from the glusterfs log to help diagnose this issue:
[2020-01-10 20:57:47.132637] E [glusterfsd-mgmt.c:1804:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2020-01-10 20:57:47.132690] E [glusterfsd-mgmt.c:1940:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:vol_32dd7b246275)
I have GlusterFS installed on three servers - servers 1, 2 and 3. I am using Heketi for dynamic provisioning of PVCs. PVC creation is successful, but pod creation shows the status below when I try to mount something on this volume:
kubectl get pods
NAME READY STATUS RESTARTS AGE
test-pod 0/2 ContainerCreating 0 4m22s
This is not an answer to the Heketi issue, but it introduces an alternative Gluster-based solution for providing persistent storage in Kubernetes.
The Kadalu project (https://kadalu.io) is a Gluster-based solution which natively integrates with Kubernetes without using glusterd (Gluster's management layer).
Kadalu provides Kubernetes storage in just two steps.
Install the Kadalu operator using:
$ kubectl create -f https://kadalu.io/operator-latest.yaml
Register your storage device, through which you can provision persistent volumes to the applications running in Kubernetes. (Install the kubectl plugin on the master node using pip3 install kubectl-kadalu.)
$ kubectl kadalu storage-add storage-pool1 --type Replica3 \
--device node1:/dev/vdc \
--device node2:/dev/vdc \
--device node3:/dev/vdc
That's it! You are now ready for PV claims. Refer to this link for a quick start: https://kadalu.io/docs/quick-start
The latest blog post (https://kadalu.io/blog/kadalu-kubernetes-storage) explains the different configurations available with Kadalu.

Checking kubernetes pod CPU and memory

I am trying to see how much memory and CPU are utilized by a Kubernetes pod. I ran the following command for this:
kubectl top pod podname --namespace=default
I am getting the following error:
W0205 15:14:47.248366 2767 top_pod.go:190] Metrics not available for pod default/podname, age: 190h57m1.248339485s
error: Metrics not available for pod default/podname, age: 190h57m1.248339485s
What do I do about this error? Is there any other way to get CPU and memory usage of the pod?
I saw the sample output of this command which shows CPU as 250m. How is this to be interpreted?
Do we get the same output if we enter the pod and run the linux top command?
CHECK WITHOUT METRICS SERVER or ANY THIRD PARTY TOOL
If you want to check a pod's CPU/memory usage without installing any third-party tool, you can get the memory and CPU usage of the pod from its cgroup.
Exec into the pod: kubectl exec -it pod_name -n namespace -- /bin/bash
Run cat /sys/fs/cgroup/cpu/cpuacct.usage for CPU usage
Run cat /sys/fs/cgroup/memory/memory.usage_in_bytes for memory usage
Make sure you have added the resources section (requests and limits) to the deployment so that the limits are reflected in the cgroup and the container respects the limits set at the pod level.
NOTE: the memory value is in bytes. These values depend on pod activity and change frequently.
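Keep in mind that cpuacct.usage is a cumulative counter (total CPU time in nanoseconds since the container started), so to turn it into a utilization figure you need two samples. A minimal sketch, assuming the cgroup v1 paths above (on cgroup v2 the file names differ), which you could run inside the container:
import time

CPU_USAGE = "/sys/fs/cgroup/cpu/cpuacct.usage"             # cumulative CPU time in nanoseconds
MEM_USAGE = "/sys/fs/cgroup/memory/memory.usage_in_bytes"  # current memory usage in bytes

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

cpu_start = read_int(CPU_USAGE)
time.sleep(5)
cpu_end = read_int(CPU_USAGE)

# nanoseconds of CPU time consumed over 5 seconds of wall clock -> cores used
cores_used = (cpu_end - cpu_start) / (5 * 1_000_000_000)
mem_mib = read_int(MEM_USAGE) / (1024 * 1024)

print(f"cpu: {cores_used:.3f} cores, memory: {mem_mib:.1f} MiB")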
kubectl top pod <pod-name> -n <fed-name> --containers
FYI, this is on v1.16.2
Use k9s for a super easy way to check all your resources' cpu and memory usage.
As described in the docs, you should install metrics-server
250m means 250 millicpu. The CPU resource is measured in CPU units; in Kubernetes, 1 CPU is equivalent to:
1 AWS vCPU
1 GCP Core
1 Azure vCore
1 Hyperthread on a bare-metal Intel processor with Hyperthreading
Fractional values are allowed. A Container that requests 0.5 CPU is
guaranteed half as much CPU as a Container that requests 1 CPU. You
can use the suffix m to mean milli. For example 100m CPU, 100
milliCPU, and 0.1 CPU are all the same. Precision finer than 1m is not
allowed.
CPU is always requested as an absolute quantity, never as a relative
quantity; 0.1 is the same amount of CPU on a single-core, dual-core,
or 48-core machine.
No. kubectl top pod podname shows metrics for a given pod. Linux top and free run inside a container and report metrics based on the Linux system information stored in the virtual filesystem /proc/; they are not aware of the cgroup in which they run.
There are more details on these links:
Why top and free inside containers don't show the correct container memory
Kubernetes top vs Linux top
A quick way to check CPU/Memory is by using the following kubectl command. I found it very useful.
kubectl describe PodMetrics <pod_name>
replace <pod_name> with the pod name you get by using
kubectl get pod
You need to run the metrics server to make the commands below work with correct data:
kubectl get hpa
kubectl top node
kubectl top pods
Without the metrics server:
Go into the pod by running the command below:
kubectl exec -it pods/{pod_name} -- sh
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
You will get the memory usage of the pod in bytes.
Not sure why it's not here already:
To see all pods with their age - kubectl get pods --all-namespaces
To see memory and CPU - kubectl top pods --all-namespaces
As Heapster is deprecated and will not have any future releases, you should go with installing metrics-server.
You can install metrics-server in the following way:
Clone the metrics-server GitHub repo: git clone https://github.com/kubernetes-incubator/metrics-server.git
Edit the deploy/1.8+/metrics-server-deployment.yaml file and add the following section just after the command section:
- command:
  - /metrics-server
  - --metric-resolution=30s
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP
Run the following command: kubectl apply -f deploy/1.8+
It will install all the requirements you need for metrics server.
For more info, please have a look at my following answer:
How to Enable KubeAPI server for HPA Autoscaling Metrics
If you use Prometheus operator or VictoriaMetrics operator for Kubernetes monitoring, then the following PromQL queries can be used for determining per-container, per-pod and per-node resource usage:
Per-container memory usage in bytes:
sum(container_memory_usage_bytes{container!~"POD|"}) by (namespace,pod,container)
Per-container CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!~"POD|"}[5m])) by (namespace,pod,container)
Per-pod memory usage in bytes:
sum(container_memory_usage_bytes{container!=""}) by (namespace,pod)
Per-pod CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (namespace,pod)
Per-node memory usage in bytes:
sum(container_memory_usage_bytes{container!=""}) by (node)
Per-node CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (node)
Per-node memory usage percentage:
100 * (
sum(container_memory_usage_bytes{container!=""}) by (node)
/ on(node)
kube_node_status_capacity{resource="memory"}
)
Per-node CPU usage percentage:
100 * (
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (node)
/ on(node)
kube_node_status_capacity{resource="cpu"}
)
An alternative approach without having to install the metrics server.
It requires you to install crictl on the worker nodes where the pods are running. There is a Kubernetes task for this defined in the official docs.
Once you have installed it properly, you can use the commands below. (I had to use sudo in my case, but it may not be required depending on your Kubernetes cluster install.)
Find the container ID of your pod: sudo crictl ps
Use stats to get CPU and RAM: sudo crictl stats <CONTAINERID>
Sample output for reference:
CONTAINER CPU % MEM DISK INODES
873f04b6cef94 0.50 54.16MB 28.67kB 8
To check the usage of individual pods in Kubernetes, type the following commands in a terminal:
$ docker ps | grep <pod_name>
This will give you the list of running containers in Kubernetes.
Then check CPU and memory utilization using:
$ docker stats <container_id>
CONTAINER_ID NAME CPU% MEM USAGE/LIMIT MEM% NET_I/O BLOCK_I/O PIDS
You need to deploy Heapster or the metrics server to see the CPU and memory usage of the pods.
You can use API as defined here:
For example:
kubectl -n default get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh | jq
{
"kind": "PodMetrics",
"apiVersion": "metrics.k8s.io/v1beta1",
"metadata": {
"name": "nginx-7fb5bc5df-b6pzh",
"namespace": "default",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh",
"creationTimestamp": "2021-06-14T07:54:31Z"
},
"timestamp": "2021-06-14T07:53:54Z",
"window": "30s",
"containers": [
{
"name": "nginx",
"usage": {
"cpu": "33239n",
"memory": "13148Ki"
}
},
{
"name": "git-repo-syncer",
"usage": {
"cpu": "0",
"memory": "6204Ki"
}
}
]
}
Where nginx-7fb5bc5df-b6pzh is pod's name.
Note that CPU is measured in nanoCPUs, where 10^9 nanoCPUs = 1 CPU.
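If you want to consume this output programmatically, recent versions of the Kubernetes Python client ship a parse_quantity helper that converts quantities such as "33239n" or "13148Ki" into plain numbers. A rough sketch, reusing the example pod name from above:
from kubernetes.config import load_kube_config
from kubernetes.client import CustomObjectsApi
from kubernetes.utils import parse_quantity

load_kube_config()
metrics = CustomObjectsApi()

pod = metrics.get_namespaced_custom_object(
    'metrics.k8s.io', 'v1beta1', 'default', 'pods', 'nginx-7fb5bc5df-b6pzh')

for container in pod['containers']:
    cpu_cores = parse_quantity(container['usage']['cpu'])     # e.g. "33239n" -> 0.000033239 cores
    mem_bytes = parse_quantity(container['usage']['memory'])  # e.g. "13148Ki" -> bytes
    print(container['name'],
          f"{cpu_cores * 1000:.3f} millicores,",
          f"{mem_bytes / (1024 * 1024):.1f} MiB")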
I know this is an old thread, but I just found it trying to do something similar. In the end, I found I can just use the Visual Studio Code Kubernetes plugin. This is what I did:
Select the cluster and open the Workloads/Pods section, find the pod you want to monitor (you can reach the pod through any other grouping in the Workloads section)
Right-click on the pod and select "Terminal"
Now you can either cat the files described above or use the "top" command to monitor CPU and memory in real-time.
Hope it helps
In case you are using minikube, you can enable the metrics-server addon; this will show the information in the dashboard.
If you exec into your pod, using sh or bash, you can run the top command which will give you some stats about resource utilisation that updates every few moments.
Metrics are available only if the metrics server is enabled or a third-party solution like Prometheus is configured. Otherwise you need to look at /sys/fs/cgroup/cpu/cpuacct.usage for CPU usage, which is the total CPU time consumed by this cgroup/container, and /sys/fs/cgroup/memory/memory.usage_in_bytes for memory usage, which is the total memory consumed by all the processes in the cgroup/container.
Also don't forget another beast called QoS, which can have the values Guaranteed, Burstable and BestEffort. If your pod is Burstable or BestEffort, it can be OOMKilled under node memory pressure even if it has not breached its own CPU or memory limits.
Kubernetes is FUN!!!