Exporting Kubernetes pod cpu/memory information - ibm-cloud

I am running Kubernetes cluster 1.5.3 on IBM Bluemix, and I would like to get the pods' resource utilization (memory and CPU) as raw data points. Does Kubernetes expose such an API?
➜ bluemix git:(master) ✗ k cluster-info
Kubernetes master is running at https://x:x
Heapster is running at
https://x:x/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at
https://x:x/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at
https://x:x/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

You can use Heapster or kube-state-metrics to achieve this. In many Kubernetes deployments Heapster is already installed. Both can be easily deployed in-cluster.
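For example, here is a minimal sketch of querying Heapster's model API through the apiserver proxy, assuming Heapster runs in kube-system as shown in the cluster-info output above; the pod name my-pod and the default namespace are placeholders:

kubectl proxy &
# CPU usage rate (millicores) for the pod; replace my-pod with your pod's name
curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/heapster/api/v1/model/namespaces/default/pods/my-pod/metrics/cpu/usage_rate
# Memory usage (bytes) for the same pod
curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/heapster/api/v1/model/namespaces/default/pods/my-pod/metrics/memory/usage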

Related

Istio Installation successful but not able to deploy POD

I have successfully installed Istio in a k8s cluster.
Istio version is 1.9.1
Kubernetes CNI plugin used: Calico version 3.18 (the Calico pod is up and running)
kubectl get pod -A
NAMESPACE      NAME                                    READY   STATUS    RESTARTS   AGE
istio-system   istio-egressgateway-bd477794-8rnr6      1/1     Running   0          124m
istio-system   istio-ingressgateway-79df7c789f-fjwf8   1/1     Running   0          124m
istio-system   istiod-6dc55bbdd-89mlv                  1/1     Running   0          124m
When I try to deploy the sample nginx app I get the error below:
failed calling webhook "sidecar-injector.istio.io":
Post "https://istiod.istio-system.svc:443/inject?timeout=30s":
context deadline exceeded
When I disable automatic proxy sidecar injection, the pod is deployed without any errors:
kubectl label namespace default istio-injection-
I am not sure how to fix this issue; could someone please help me with it?
In this case, adding hostNetwork: true under spec.template.spec in the istiod Deployment may help; a minimal sketch follows below the quote.
This seems to be a workaround when using the Calico CNI for pod networking (see: failed calling webhook "sidecar-injector.istio.io").
As we can find in the Kubernetes Host namespaces documentation:
HostNetwork - Controls whether the pod may use the node network namespace. Doing so gives the pod access to the loopback device, services listening on localhost, and could be used to snoop on network activity of other pods on the same node.
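A minimal sketch of that change, assuming a standard istiod Deployment in the istio-system namespace:

# Patch the istiod Deployment to run its pods in the host network namespace
kubectl -n istio-system patch deployment istiod \
  --type merge \
  -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'

Alternatively, run kubectl -n istio-system edit deployment istiod and add hostNetwork: true under spec.template.spec by hand.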

GKE pod health status in Google Cloud GCP

Are there any metrics I can use to know whether pods are in the Running state or in an errored state (CrashLoopBackOff, etc.) in GKE on Google Cloud?
Basically I want a metric I can export to Stackdriver that can tell whether my jobs are running healthy pods or whether the pods have errors and none are running (Evicted, CrashLoopBackOff, etc.).
According to the official documentation, Cloud Monitoring supports a number of metric types from Google Kubernetes Engine, listed under Kubernetes metrics.
For your case, I believe you can use, for example, kubernetes.io/container/restart_count.
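As a sketch, a Metrics Explorer or alerting filter on container restarts could look like the following (the cluster name my-cluster is a placeholder):

metric.type = "kubernetes.io/container/restart_count"
resource.type = "k8s_container"
resource.labels.cluster_name = "my-cluster"

A steadily increasing restart_count for a container is a reasonable proxy for CrashLoopBackOff.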

Unable to see Pods' CPU and Memory utilization; graphs are missing in Kubernetes dashboard

K8s VERSION = v1.18.6
I have deployed the Kubernetes dashboard using the following command and added a privileged user, with which I logged into the dashboard,
but I am not able to see the Pods' CPU and memory utilization; the graphs are missing from the Kubernetes dashboard.
The Kubernetes Metrics Server is an aggregator of resource usage data in your cluster.
To deploy the Metrics Server, run the following command:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
Verify that the metrics-server deployment is running the desired number of pods with the following command.
kubectl get deployment metrics-server -n kube-system
Output
NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 1/1 1 1 6m
You can also validate with the command below:
kubectl top nodes
If it shows node CPU utilization, the metrics should then come up in the Dashboard as well.
Resource usage metrics are only available for K8s clusters once Metrics Server has been installed.
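As an additional check (assuming Metrics Server is healthy), the resource metrics API itself should respond:

# Raw resource metrics API; should return node metrics as JSON
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
# Per-pod usage across all namespaces
kubectl top pods --all-namespaces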

kube-state-metrics is not working when only the master node is running in a cluster

I am monitoring a Kubernetes cluster (deployed using kubeadm) with Grafana and Prometheus.
I am facing some difficulty with kube-state-metrics: when I bring up only the master node, kube-state-metrics shows as down in the Prometheus targets, but when I also bring up one worker node, kube-state-metrics is up in the Prometheus targets.
Another interesting part: when only the master node is up, I see one kube-state-metrics pod up and running in the kube-system namespace, but I am unable to access the metrics.
Am I missing anything in my understanding of kube-state-metrics?
Please help me out.
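A diagnostic sketch that may help narrow this down (the label and service name follow the upstream kube-state-metrics manifests and may differ in your install):

# Where is the kube-state-metrics pod scheduled?
kubectl -n kube-system get pods -l app.kubernetes.io/name=kube-state-metrics -o wide
# Does its Service have ready endpoints for Prometheus to scrape?
kubectl -n kube-system get endpoints kube-state-metrics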

Unable to get kubernetes dashboard

I've installed a new cluster (version 1.13.5 of kubectl, kubelet, and kubeadm), then I've installed flannel and added a worker node.
Now I'm trying to add the Kubernetes dashboard to my cluster, but after I run
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
I have this situation:
kubernetes-dashboard-**** 0/1 CrashLoopBackOff 1 8s
Then if I get the logs I can see this:
Error while initializing connection to Kubernetes apiserver...
Where am I wrong?
It seems that the problem was on the worker: when I put the dashboard on the master, the pod starts.
Maybe the kube dashboard has to be installed on the master (a scheduling sketch for that is shown below), or there is something wrong with flannel and the master-to-node communication.
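If you do want to pin the dashboard to the master while investigating, a hypothetical patch (label and taint keys per the kubeadm defaults of that era) is:

# Schedule the dashboard onto the master and tolerate its NoSchedule taint
kubectl -n kube-system patch deployment kubernetes-dashboard \
  --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"node-role.kubernetes.io/master":""},"tolerations":[{"key":"node-role.kubernetes.io/master","effect":"NoSchedule"}]}}}}'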
Also check whether the api-server pod is running and whether KubeDNS is working fine.
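For example (a minimal sketch; the dns-test pod name is arbitrary):

# Is the apiserver static pod up? (kubeadm labels it component=kube-apiserver)
kubectl -n kube-system get pods -l component=kube-apiserver
# Is the cluster DNS up? (CoreDNS/kube-dns pods carry the k8s-app=kube-dns label)
kubectl -n kube-system get pods -l k8s-app=kube-dns
# Quick in-cluster DNS resolution check
kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default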