Using Colima I am getting `error: Metrics API not available` when running `kubectl top nodes` on Mac

I am using Colima version 0.4.6 on macOS 12.6.1. I am trying to create a test Ubuntu pod for training purposes, following the example in "How can I keep a container running on Kubernetes?". After creating that pod and running kubectl get pods, the pod hangs in the Pending state, and running kubectl top nodes gives the error: Metrics API not available.
Any help or pointers in the right direction would be appreciated. I am hoping to get pods up and running so I can launch a little test environment.
Please let me know if more info is needed.
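For reference, kubectl top relies on the metrics-server add-on, which a fresh Colima Kubernetes cluster may not include. A minimal sketch, assuming you want the upstream release manifest (check the metrics-server project for the version appropriate to your cluster):

kubectl get deployment metrics-server -n kube-system
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

The Pending pod is likely a separate scheduling issue; kubectl describe pod <pod-name> should show the reason under Events.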

Related

How to debug a kubernetes cluster?

As the question shows, I have very little knowledge about Kubernetes. Following a tutorial, I made a Kubernetes cluster to run a web app on a local server using Minikube. I have applied the Kubernetes components and they are running, but the web server does not respond to HTTP requests. My problem is that the whole system I have created is like a black box to me, and I have literally no idea how to open it and see where the problem is. Can you explain how I can debug such implementations in a sensible way? Thanks.
Use a tool like the Kubernetes Dashboard (https://github.com/kubernetes/dashboard).
You can install kubectl (https://kubernetes.io/docs/tasks/tools/install-kubectl/) and deploy kubernetes-dashboard into the cluster, then use the kubectl command to query information about a pod or container, or use the Dashboard web UI to inspect the cluster.
For more information, please refer to https://kubernetes.io/
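A minimal sketch of deploying the Dashboard from its published manifest and reaching it through kubectl proxy (the version in the URL is an assumption; check the project's releases for the current one):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
kubectl proxy

With the proxy running, the UI is served at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.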
kubectl get pods
will show you all your pods and their status, a quick check to make sure that everything is at least running.
If there are pods that are unhealthy, then
kubectl describe pod <pod name>
will give some more information, e.g. image not found, etc.
kubectl logs <pod name> --all-containers
is often the next step; use -f to follow the logs as you exercise your API.
It is possible to attach most IDE debuggers to images running in a pod, but the instructions differ depending on the language and IDE used.
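Since the symptom here is a web server that does not answer HTTP requests, one more step that often helps: bypass the Service and Ingress layer and talk to the pod directly with a port-forward. A small sketch (the pod name and ports are placeholders):

kubectl port-forward <pod name> 8080:80
curl -v http://localhost:8080/

If this responds, the problem is in the Service or Ingress wiring; if not, it is inside the container itself.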

Getting Unknown target for HPA

I am actually new to Kubernetes, but by now I am comfortable with terms such as deployment, pods, etc.
I was trying an example of HPA (Horizontal Pod Autoscaler); as a prerequisite, metrics-server is already integrated, but after all those things I am not able to see the HPA working as expected.
When I execute the command below:
kubectl get hpa
the target column shows <unknown>. I have tried everything I could find on online forums but have not had any breakthrough.
Any help would be really appreciated.
Thank you
I was getting the same issue; it was fixed after adding CPU requests in my pod definition. The points below can be the reason in most cases:
The metrics-server is not installed in your Kubernetes cluster; you can check with the command
kubectl get deploy,svc -n kube-system | egrep metrics-server
Check whether you have provided resources in your Deployment/StatefulSet/Pod definitions.
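A minimal sketch of the fix, assuming a hypothetical Deployment named web (the name, patch values, and autoscaler thresholds are illustrative, not from the original post): add CPU requests so the HPA has a baseline to compute utilization against, then create the autoscaler and inspect it.

kubectl patch deployment web --patch '
spec:
  template:
    spec:
      containers:
      - name: web
        resources:
          requests:
            cpu: 100m
'
kubectl autoscale deployment web --cpu-percent=50 --min=1 --max=5
kubectl describe hpa web

kubectl describe hpa also surfaces the failure reason under Conditions and Events (e.g. FailedGetResourceMetric), which helps distinguish a missing metrics-server from missing resource requests.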

How to find the pod that led to an error in GKE

If I look at my logs in GCP Logging, I see, for instance, that I got a request that returned a 500:
log_message: "Method: some_cloud_goo.Endpoint failed: INTERNAL_SERVER_ERROR"
I would like to quickly go to the pod that served it and run kubectl logs on it, but I did not find a way to do this.
I am fairly new to k8s and GKE; is there any way to trace back the pod that handled that request?
You could run kubectl get pods on each node to check the status of all pods, and then get a detailed description of an error by running kubectl describe pod <pod-name>.
As mentioned in Neelam's answer, you can get the pod names with the command kubectl get pods -A and look through all your pods' logs to find the error.
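Alternatively, GKE log entries for containers carry Kubernetes resource labels, so you can often recover the pod name directly from Cloud Logging instead of checking every pod. A hedged sketch using the gcloud CLI (the filter text is an assumption based on the message above):

gcloud logging read 'resource.type="k8s_container" AND textPayload:"INTERNAL_SERVER_ERROR"' --limit=10 --format='value(resource.labels.pod_name)'
kubectl logs <pod name from the output above>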
Or, alternatively, you could deploy a custom monitoring stack such as Elastic GKE Logging, available as a click-to-deploy application in the GCP GitHub catalog. It can be installed from the Marketplace with a few clicks and is a free way to get a complete monitoring system; once deployed, you can filter your logs in a Kibana dashboard.

Kubernetes Master Server is failing to become up and running

Installed kubeadm v1.6.0-alpha, kubectl v1.5.3, kubelet v1.5.3.
Executed the command kubeadm init to bring the Kubernetes master up.
Issue observed: it gets stuck with the log message below:
Created API client, waiting for the control plane to become ready
How can I get the Kubernetes master server up and running, or how can I debug the issue?
Could you try using kubelet and kubectl 1.6 to see if it is a version mismatch?
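When kubeadm init hangs at this stage, the control-plane containers usually failed to start, and the kubelet logs say why. A minimal sketch for a systemd-based host (the unit name and container runtime are the common defaults for this era and may differ on your machine):

systemctl status kubelet
journalctl -u kubelet -f
docker ps -a | grep kube-apiserver

A version mismatch between kubeadm and the kubelet, as suggested above, typically shows up in these logs.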

calico-policy-controller on the worker node is in a restart loop. How can I check why?

I have two CoreOS stable machines (with the latest stable version installed) to test Kubernetes. I installed Kubernetes 1.5.1 using the script from https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic and patched it with https://github.com/kfirufk/coreos-kubernetes-multi-node-generic-install-script.
I installed the controller script on one machine and the worker script on the other. kubectl get nodes shows both servers.
kubectl get pods --namespace=kube-system shows that calico-policy-controller-2j5dn restarts a lot, and on the worker server I do see that calico-policy-controller restarts a lot. Any idea how to investigate this issue further?
How can I check why it restarts? Are there any logs for this container?
kubectl logs --previous $id --namespace=kube-system
I added --previous because when the controller restarts, the pod gets a different random suffix appended to its name.
In my case, the kube-policy-controller started on one server but requested etcd2 certificates that were generated on a different server.
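To see why the container is restarting before digging through its logs, it can also help to ask Kubernetes for the last termination state. A small sketch (using the pod name from kubectl get pods above):

kubectl describe pod calico-policy-controller-2j5dn --namespace=kube-system
kubectl get pod calico-policy-controller-2j5dn --namespace=kube-system -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'

The exit code and reason (e.g. Error vs OOMKilled) narrow down whether it is a crash, a failed probe, or a resource problem.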