Missing information when scraping data in grafana using prometheus and minikube - kubernetes

I am trying to use Prometheus and Grafana to get information such as cluster CPU usage, cluster memory usage, and pod CPU usage in my Kubernetes cluster. I am using minikube in WSL2. I am using the following commands to get everything up and running:
To start minikube:
$ minikube start
To add repo and install prometheus:
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install prometheus prometheus-community/prometheus
To create a NodePort service on port 9090:
$ kubectl expose service prometheus-server --type=NodePort --target-port=9090 --name=prometheus-server-ext
To access the Prometheus server from outside minikube:
$ minikube service prometheus-server-ext
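As a side note, the NodePort URL can also be printed without opening a browser (the --url flag is standard minikube; this line is just for reference, not one of the original steps):
$ minikube service prometheus-server-ext --url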
To add the Grafana repo and install it with Helm:
$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo update
$ helm install grafana grafana/grafana
To create a NodePort service on port 3000:
$ kubectl expose service grafana --type=NodePort --target-port=3000 --name=grafana-ext
To access the Grafana server from outside minikube:
$ minikube service grafana-ext
To decode and get the Grafana password (username: admin):
$ kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
Then I add a data source of type Prometheus with the URL seen above: https://192.168.49.2:30988.
Up to this point everything works as expected. Then I import two different dashboards with the IDs 6417 and 12740 and get the following results (the post shows a screenshot of each dashboard).
My question is: why do I only see the number of pods and cluster memory usage, but no CPU usage for the pods or the cluster? It seems like a lot of information is missing.
Here is the JSON code for dashboard with id 6417: https://pastebin.com/ywrT2qmQ
Here is the JSON code for dashboard with id 12740: https://pastebin.com/2neEzESu
I get the dashboards by importing them with the IDs 6417 and 12740 (the post includes a screenshot of the import dialog).
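One way to narrow this down is to query Prometheus directly over the NodePort and check whether the cAdvisor CPU series exist at all. This is my own debugging step, not part of the original post; adjust the URL and scheme to whatever minikube service printed:
$ curl -sG "http://192.168.49.2:30988/api/v1/query" --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total[5m])) by (pod)'
If the response contains an empty data.result array, the dashboards have nothing to plot and the problem is on the scraping side rather than in Grafana.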

Related

Bitnami MongoDB cluster can't be accessed from the Robo 3T client in Kubernetes

I have installed the Bitnami MongoDB cluster with helm install my-release bitnami/mongodb -f values-production.yaml, with architecture: replicaset and replicaCount: 2, on an AWS server.
Helm version: version.BuildInfo{Version:"v3.2.4"}, chart mongodb-8.2.1, app version 4.2.8, kubectl GitVersion:"v1.16.8".
It installed successfully: one StatefulSet along with three pods and one svc. I set up a port-forward to access MongoDB from the Robo 3T client as localhost using the username and password, but I am getting an error: Authorization failed on admin database as root.
I have tried port-forwarding to the svc, the pod, and the StatefulSet, and all give the same error.
kubectl port-forward --namespace default svc/my-release-mongodb-headless 27017:27017
Did I do something wrong, or is there anything extra I need to mention in the values-production.yaml file?
It will install and work fine with the command below:
helm install mongo-cluster-name . -f values-production.yaml --set architecture=replicaset --set replicaCount=2
This will start the StatefulSet, svc, and pods along with the disks. If we port-forward to the svc named mongo-cluster-name-headless, we can access it as localhost:
kubectl port-forward --namespace default svc/mongo-cluster-name-headless 27017:27017
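Once the port-forward is up, the root password for the admin database can be read back from the chart's Secret and used from Robo 3T on localhost. The Secret and key names below follow the Bitnami chart's defaults for this release name; this is only a sketch, so adjust them if yours differ:
kubectl get secret --namespace default mongo-cluster-name-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode ; echo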
You can't do a port-forward in a replicaset architecture.
https://docs.mongodb.com/kubernetes-operator/master/tutorial/deploy-replica-set/#enable-external-access-for-a-replica-set
You are using Helm? Nice, then check these lines, there is an option made just for you: https://github.com/bitnami/charts/blob/master/bitnami/mongodb/values.yaml#L446
This is going to create a LoadBalancer for each of your primary & secondary MongoDB pods.
But LoadBalancers are quite expensive (around 15 euros/month for me), so 30 euros/month just to access the replica set (2 replicas minimum) from outside.
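For reference, a rough sketch of what enabling that option looks like on the command line. The parameter names are my reading of the externalAccess block in the linked values.yaml, so double-check them against your chart version:
helm install my-release bitnami/mongodb \
  --set architecture=replicaset \
  --set replicaCount=2 \
  --set externalAccess.enabled=true \
  --set externalAccess.service.type=LoadBalancer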

Postgres Kubernetes Helm chart fails when Istio is enabled

I have installed Istio in my AKS cluster and enabled it for a namespace called database as below:
kubectl label namespace database istio-injection=enabled
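To double-check that the label actually landed on the namespace (just a verification step I add here, not part of the original post):
kubectl get namespace database --show-labels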
I'm installing the Postgres database with Helm 3 into the database namespace:
helm install pg-db bitnami/postgresql-ha --version=2.0.1 -n database
A few seconds later the database starts to fail because the database pod is not considered healthy.
When I disable sidecar injection for the database as below, it doesn't restart. How can I run this Helm chart without disabling the sidecar?
podAnnotations:
  sidecar.istio.io/inject: "false"
Listing the pods:
pg-db-postgresql-ha-postgresql-1 logs:
pg-db-postgresql-ha-pgpool-5475f499b8-7z4ph logs:
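The listings above are screenshots in the original post; the equivalent commands would be roughly the following (pod names taken from the captions above, namespace assumed from the install command):
kubectl get pods -n database
# add -c <container> if the pod has more than one container, e.g. with the Istio sidecar injected
kubectl logs pg-db-postgresql-ha-postgresql-1 -n database
kubectl logs pg-db-postgresql-ha-pgpool-5475f499b8-7z4ph -n database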

Prometheus is not compatible with Kubernetes v1.16

I installed the stable/prometheus helm chart with some minor changes proposed at helm/charts#17268 to make it compatible with Kubernetes v1.16
After installation, none of the Kubernetes Grafana dashboards show correct values. I am using the 8769 dashboard (https://grafana.com/grafana/dashboards/8769), which provides a lot of information on CPU, memory, network, etc. This dashboard works properly on older k8s versions, but on v1.16 it shows no results. I also randomly tried some other dashboards (8588, 6879, 10551), but they either only show the requested resources for each pod rather than the live usage, or show nothing.
These dashboards send a PromQL query to Prometheus and display the results. For example, this is the PromQL query for CPU usage from the 8769 dashboard:
sum (rate (container_cpu_usage_seconds_total{id!="/",namespace=~"$Namespace",pod_name=~"^$Deployment.*$"}[1m])) by (pod_name)
I don't know if I have to change the PromQL or whether the problem is somewhere else.
Kubernetes 1.16 removes the labels pod_name and container_name from the cAdvisor metrics; they were duplicates of pod and container.
You need to change pod_name -> pod and container_name -> container in the Grafana dashboards' JSON models.
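Applied to the query above, the rewrite looks like this (only the two label names change, the rest of the query stays the same):
sum (rate (container_cpu_usage_seconds_total{id!="/",namespace=~"$Namespace",pod=~"^$Deployment.*$"}[1m])) by (pod)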
Try the installation this way; the new CRDs had some issues, so I used the old CRDs:
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.32/example/prometheus-operator-crd/alertmanager.crd.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.32/example/prometheus-operator-crd/prometheus.crd.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.32/example/prometheus-operator-crd/prometheusrule.crd.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.32/example/prometheus-operator-crd/servicemonitor.crd.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.32/example/prometheus-operator-crd/podmonitor.crd.yaml
helm install --name prometheus --namespace monitoring stable/prometheus-operator --set prometheusOperator.createCustomResource=false
Make sure the CRDs don't already exist; you can delete them via:
kubectl delete crd --all
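If deleting every CRD in the cluster is too aggressive for your environment, a narrower variant (assuming only the prometheus-operator CRDs from the monitoring.coreos.com group need to go) would be:
kubectl get crd | grep monitoring.coreos.com
kubectl delete crd alertmanagers.monitoring.coreos.com prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com podmonitors.monitoring.coreos.com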

Gitlab-installed Helm: Error: context deadline exceeded

I have a Kubernetes cluster installed in AWS with kops. I've installed Helm Tiller with the GitLab UI. The Tiller service seems to be working via GitLab; for example, I've installed Ingress from the GitLab UI.
But when trying to use that same Tiller from my CLI, I can't get it to work. When I run helm init, it says Tiller is already installed (which makes total sense):
helm init --tiller-namespace gitlab-managed-apps --service-account tiller
$HELM_HOME has been configured at C:\Users\danie\.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
But when trying to, for example, list the charts, it takes 5 minutes and then times out:
$ helm list --tiller-namespace gitlab-managed-apps --debug
[debug] Created tunnel using local port: '60471'
[debug] SERVER: "127.0.0.1:60471"
Error: context deadline exceeded
What am I missing so I can use the GitLab-installed Tiller from my CLI?
Are you sure that your Tiller server is installed in the "gitlab-managed-apps" namespace? By default it's installed to the 'kube-system' namespace, as per the official installation instructions on the GitLab website, which would mean this is what causes your helm ls command to fail (just skip the --tiller-namespace flag).
The best way to verify it is via:
kubectl get deploy/tiller-deploy -n gitlab-managed-apps
Do you see any Tiller-related Deployment object in that namespace?
Assuming you can operate your kops cluster with the current kube context, you should have no problem running the helm client locally. You can always explicitly use the --kube-context argument with the helm command.
Update:
I think I know what causes your problem: Helm, when installed via the GitLab UI, uses a secured connection (SSL) between helm and tiller (proof here).
Knowing that, you should retrieve the set of certificates from the Secret object that is mounted on the Tiller Pod:
#The CA
ca.cert.pem
ca.key.pem
#The Helm client files
helm.cert.pem
helm.key.pem
#The Tiller server files
tiller.cert.pem
tiller.key.pem
and then connect the helm client to the tiller server using the following command, as explained here:
helm ls --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem
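A rough sketch of pulling those files out of the cluster. The Secret name and key names here are assumptions (on GitLab-managed clusters the Secret is typically called tiller-secret and holds ca.crt / tls.crt / tls.key), so list the Secrets in the namespace first:
kubectl get secrets -n gitlab-managed-apps
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath="{.data.ca\.crt}" | base64 --decode > ca.cert.pem
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath="{.data.tls\.crt}" | base64 --decode > helm.cert.pem
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath="{.data.tls\.key}" | base64 --decode > helm.key.pem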
Here's the way I've been doing this.
First open a shell in the gitlab tiller pod:
# replace the pod name, tiller-deploy-5bb888969c-7bzpl with your own
kubectl exec -n gitlab-managed-apps tiller-deploy-5bb888969c-7bzpl -it -- sh
Then use the pod's native helm binary and the mounted certs to connect to tiller:
$ env | grep TILLER_TLS_CERTS
#cd to the result, in my case /etc/certs
$ cd /etc/certs
# connect to tiller with the certs using the native helm (/helm) in my case:
$ /helm ls --tls --tls-ca-cert ./ca.crt --tls-cert ./tls.crt --tls-key ./tls.key

helm install stable/gocd returns an error

After installing Helm, I'm trying to install GoCD for containerization.
The command helm install stable/gocd --name gocd --namespace gocd is throwing the following error:
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout
Please help me resolve this issue. What might be causing the error? How can I correct it so that GoCD is installed through Helm?
Install the GoCD Helm chart
Helm is a package manager for Kubernetes. Kubernetes packages are called charts. Charts are curated applications for Kubernetes.
Install the GoCD Helm chart with these commands:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install stable/gocd --name gocd --namespace gocd
Access the GoCD server
After you’ve installed the GoCD helm chart, you should be able to access the GoCD server from the Ingress IP.
The Ingress IP address can be obtained as specified below:
Minikube
minikube ip
Others
ip=$(kubectl get ingress --namespace gocd gocd-server -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
echo "http://$ip"
It might take a few minutes for the GoCD server to come up for the first time. You can check if the GoCD server is up with this command:
kubectl get deployments --namespace gocd
The column Available should show 1 for gocd-server.
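Alternatively, to block until the Deployment is fully rolled out instead of polling by hand (this uses the gocd-server Deployment name from above):
kubectl rollout status deployment/gocd-server --namespace gocd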
The GoCD server on startup will look like this.
Now that you have accessed the GoCD server successfully, you will need to configure the Kubernetes elastic agent plugin.