Grafana is not able to get Prometheus metrics although Prometheus Datasource is validated successfully

I am trying to configure Grafana to visualize metrics collected by Prometheus.
My Prometheus data source is validated successfully, but when I try to create a dashboard it shows the error "can not read property 'result' of undefined".
I am adding screenshots.

It looks like you are pointing at the Node Exporter endpoint, not the Prometheus server. The default Prometheus server port is 9090, so try changing your data source URL to http://192.168.33.22:9090
Grafana doesn't query Node Exporter directly; it queries the Prometheus server, which gathers the time series statistics.
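A quick way to confirm the URL points at Prometheus rather than Node Exporter (a sketch reusing the IP from the question; adjust for your host) is Prometheus's health endpoint:
curl -s http://192.168.33.22:9090/-/healthy
# Prometheus answers here; Node Exporter (typically on :9100) only serves /metrics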

Please see the guide below to fix the issue!
This will work as long as both Grafana and Prometheus are running as Docker containers, so before you begin, run the command below to make sure both the Prometheus and Grafana containers are up:
docker ps
To connect Prometheus to Grafana, you will need the IP address of the Prometheus server container as seen from the Docker host.
Use this command in your terminal to display all the container IDs:
docker ps -a
You will see your Prometheus server container ID displayed, for example "faca0c893603". Copy the ID and run the command below in your terminal to see the IP address of your Prometheus server:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' faca0c893603
Note: faca0c893603 is the container ID of the prom/prometheus server.
When you run the command, it will display the IP address (e.g. 172.17.0.3) of the Prometheus container, which you then combine with the Prometheus server port in the Grafana data source.
In the Grafana data source settings, set the URL to http://172.17.0.3:9090, then Save & Test.
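Alternatively, a sketch that avoids hard-coded container IPs: put both containers on a user-defined Docker network, so Grafana can reach Prometheus by container name (the names prometheus and grafana below are illustrative):
docker network create monitoring
docker run -d --name prometheus --network monitoring -p 9090:9090 prom/prometheus
docker run -d --name grafana --network monitoring -p 3000:3000 grafana/grafana
# then set the Grafana data source URL to http://prometheus:9090
Container IPs on the default bridge network can change across restarts, so name-based resolution on a user-defined network is more stable.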

Related

Kubernetes beginner - cannot load the exposed service in browser

I am learning Kubernetes and minikube, and I am following this tutorial:
https://minikube.sigs.k8s.io/docs/handbook/accessing/
But I am running into a problem: I am not able to load the exposed service. Here are the steps I take:
minikube start
The cluster info returns
Kubernetes control plane is running at https://127.0.0.1:50121
CoreDNS is running at https://127.0.0.1:50121/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Then I create a deployment
kubectl create deployment hello-minikube1 --image=k8s.gcr.io/echoserver:1.4
and expose it as a service
kubectl expose deployment hello-minikube1 --type=NodePort --port=8080
When I list the services, I don't get a URL:
minikube service list
|-----------|-----------------|-------------|-----|
| NAMESPACE | NAME            | TARGET PORT | URL |
|-----------|-----------------|-------------|-----|
| default   | hello-minikube1 | 8080        |     |
|-----------|-----------------|-------------|-----|
And when I try to get the URL, I am not getting it; it seems to be empty:
minikube service hello-minikube1 --url
This is the response (first line is empty):
🏃 Starting tunnel for service hello-minikube2.
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
Why am I not getting the URL, and why can I not connect to it? What did I miss?
Thanks!
Please use the minikube ip command to get the IP of minikube, then combine it with the service's NodePort.
Also, refer to the link below:
https://minikube.sigs.k8s.io/docs/handbook/accessing/#:~:text=minikube%20tunnel%20runs%20as%20a,on%20the%20host%20operating%20system.
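A minimal sketch of the IP route (it assumes a driver whose node IP is reachable from the host, e.g. hyperkit or virtualbox; with the Docker driver on macOS the node IP is not directly routable, which is why minikube starts a tunnel instead). The NodePort 31312 is illustrative; read yours from the service listing:
kubectl get service hello-minikube1
# e.g. hello-minikube1   NodePort   10.98.10.103   <none>   8080:31312/TCP   1m
curl http://$(minikube ip):31312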
As per this issue, the Docker driver needs an active terminal session. minikube switched macOS users to the Docker driver by default a few releases ago when no local configuration is found. I believe you can get your original behavior back by using the hyperkit driver on macOS:
minikube start --driver=hyperkit
You can also set it to the default using:
minikube config set driver hyperkit
This will help you to solve your issue.

Grafana dashboard imports successfully via grafana.com but shows no data

I have successfully run Grafana locally on port 3000 and imported the default dashboards with IDs 1860 and 405 using Import via grafana.com. But the problem is that there is no data available.
How do I configure it to load the data?
My default data source:
Got it, so I am assuming you have tested the data source, i.e. on Save & Test you get: Data source is working.
I just imported the same dashboard 1860 and it works for me. Some things you may want to check:
See if you have installed the correct node exporter for your OS.
Check that node exporter is running.
Check that Prometheus has a scrape configuration defined for this node exporter. You can refer to the example here: https://prometheus.io/docs/guides/node-exporter/
This dashboard shows node exporter resources, so if your node exporter is running on a custom port other than 9100 you need to adjust the dashboard accordingly.
If the above steps don't help, the best way to troubleshoot is: stop the Prometheus service/script, check the node_exporter port, configure prometheus.yml to point to this port, then start the service/script passing --config.file=./prometheus.yml explicitly (see the sketch below).
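A minimal sketch of that last step, assuming node_exporter listens on localhost:9100 and you run the Prometheus binary from its own directory (the paths and the job name node are illustrative):
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
EOF
./prometheus --config.file=./prometheus.yml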
The dashboard itself is fine; I just installed and ran it. I have also attached the pics for your reference.
You should be able to see at least one node exporter. If nothing is shown, no exporter is sending data, which means you are not monitoring node exporter data, and you have to fix the node exporter on that host.
This check should return all the node exporters sending data to the Prometheus server. In my case, only localhost is sending.
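To run that check from the command line (a sketch; it assumes Prometheus listens on localhost:9090 and your scrape job is named node), query the up metric, which is 1 for every target Prometheus can currently scrape:
curl -s -G 'http://localhost:9090/api/v1/query' --data-urlencode 'query=up{job="node"}'
# each entry in data.result is one target; a value of "1" means it is up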

k8s, RabbitMQ, and Peer Discovery

We are trying to run an instance of the RabbitMQ chart with Helm from the helm/charts/stable/rabbitmq project. I had it running perfectly, but then I had to restart k8s for some maintenance. Now we are completely unable to launch the RabbitMQ chart in any way, shape, or form. I am not even trying to run the chart with any variables, i.e. just the default values.
Here is all I am doing:
helm install stable/rabbitmq
I have confirmed I can run the default chart on my local k8s, which I'm running with Docker for Desktop. When we run the rabbit chart on our shared k8s in exactly the same way as on the desktop, and the same way we did before the restart, the following error is thrown:
Failed to get nodes from k8s - 503
I have also posted an issue on the Helm charts repo. Click here to see the issue on GitHub.
We suspect DNS but are unable to confirm anything yet. What is very frustrating is that after the restart every single other chart we installed restarted perfectly, except Rabbit, which now will not start at all.
Does anyone know what I could do to get Rabbit's peer discovery to work? Has anyone seen an issue like this after restarting k8s?
So I actually got Rabbit to run. It turns out the k8s peer discovery could not connect over the default port 443; I had to use the external port 6443, because kubernetes.default.svc.cluster.local resolved to the public port and could not find the internal one, so yeah, our config is messed up too.
It took me a while to realize that the value below was not being overridden when I overrode it with helm install . -f server-values.yaml.
rabbitmq:
  configuration: |-
    ## Clustering
    cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
    cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
    cluster_formation.k8s.port = 6443
    cluster_formation.node_cleanup.interval = 10
    cluster_formation.node_cleanup.only_log_warning = true
    cluster_partition_handling = autoheal
    # queue master locator
    queue_master_locator=min-masters
    # enable guest user
    loopback_users.guest = false
I had to add cluster_formation.k8s.port = 6443 to the chart's main values.yaml file instead of my own. Once the port was changed specifically in values.yaml, Rabbit started right up.
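If you hit the same symptom, a quick sanity check (a sketch; the pod name dns-test is illustrative) is to compare what the service name resolves to inside the cluster with the real API server endpoint and port:
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup kubernetes.default.svc.cluster.local
kubectl get endpoints kubernetes
# the ENDPOINTS column shows the API server address and port, e.g. 192.168.64.2:6443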
I'm wondering what the reason is for using the rabbit_peer_discovery_k8s plugin if values.yaml defaults to 1 replica (your manifest file does not override this setting)?
I was trying to reproduce your issue with the override values you provided (dev-server.yaml), as per the details in your GitHub issue #10811, but I somewhat failed. Here are my observations:
If I install the RabbitMQ chart with your custom values, my rabbitmq-dev-default-0 pod gets stuck in the CrashLoopBackOff state.
It's quite hard for me to troubleshoot it further, as Bitnami's rabbitmq image containers, used by this Helm chart, are shipped with a non-root account.
On the other hand, if the rabbitmq chart is installed on my Kubernetes cluster (v1.13.2) in its simplest form:
helm install stable/rabbitmq
then I observe a similar issue: the rabbitmq server survives a simulated VM restart of all cluster nodes (including the master), but I cannot connect to it from outside.
Post VM restart, I'm getting the following error from my Python MQ client:
socket.gaierror: [Errno -2] Name or service not known
A few remarks here:
Yes, I did the port-forward(s) as per the instructions from the "helm status " command:
The readiness probe works fine:
curl -sS -f --user user:<my_pwd> 127.0.0.1:15672/api/healthchecks/node
{"status":"ok"}
rabbitmqctl-to-rabbitmq-server connectivity from inside the container works fine too:
kubectl exec rabbitmq-dev-default-0 -- rabbitmqctl list_queues
warning: the VM is running with native name encoding of latin1 which may cause Elixir to malfunction as it expects utf8. Please ensure your locale is set to UTF-8 (which can be verified by running "locale" in your shell)
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
name messages
hello 11
From the moment I used kubectl port-forward to the pod instead of the service, connectivity to the rabbitmq server was restored:
kubectl port-forward --namespace default pod/rabbitmq-dev-default-0 5672:5672
$ python send.py
[x] Sent 'Hello World!'

grafana variable still catches old metrics info

I use Grafana + Prometheus to monitor k8s pods. When a pod is removed, I clean all the metrics belonging to the removed pod, but I can still see them in Grafana.
Variable
For example, I defined a variable named node. The query expression is {instance=~"(.+)", job="node status"}, which catches all metrics, and I use the regex /instance="([^"]+):9100"/ to match the IP of each monitored target. When I click the node label on the dashboard, it displays all target IPs. When one of these targets is removed, I use the HTTP API provided by Prometheus to clean all the metrics belonging to that target, but when I click the node label it still displays the removed target's IP. Why, and how can I delete this IP?
It seems that the Prometheus targets are not updated, even though some of the pods were evicted. You can check this on the Prometheus http://yourprometheus/targets page.
Does Prometheus run inside of the K8s cluster?
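Also note that deleting a series only sticks if the target is also gone from the scrape configuration; otherwise the next scrape recreates it. A sketch of the cleanup calls (they assume Prometheus was started with --web.enable-admin-api, and the instance address 10.0.0.1:9100 is illustrative):
curl -g -X POST 'http://yourprometheus/api/v1/admin/tsdb/delete_series?match[]={instance="10.0.0.1:9100"}'
curl -g -X POST 'http://yourprometheus/api/v1/admin/tsdb/clean_tombstones'
Grafana can also cache variable values: set the variable's Refresh option to "On Time Range Change" so it re-queries Prometheus.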

kubernetes dashboard is not accessible from outside

I have installed and configured Kubernetes in my Ubuntu virtual machine.
Reference: Document Link
I started the Kubernetes proxy using the command below:
kubectl proxy --address='0.0.0.0'
I'm able to access my dashboard via http://localhost:8001 on localhost, but when I try to access the dashboard from outside using http://192.168.33.30:8001/, I get the following error:
<h3>Unauthorized</h3>
Can anyone help me on this?
It works using the command below:
kubectl proxy --address='0.0.0.0' --accept-hosts='^.*$' --port=8001
After this, I am able to access the Kubernetes dashboard from outside using the VM IP address.
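To verify from another machine (a sketch reusing the VM IP from the question), hit one of the API endpoints the proxy exposes:
curl http://192.168.33.30:8001/version
Be aware that --accept-hosts='^.*$' disables the proxy's host check entirely, so anyone who can reach that port gets unauthenticated API access; only do this on a trusted network.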