Does anyone know whether PostgreSQL has built-in /metrics (or something like that)?
I've searched the web and all I found were third-party open source tools that send metrics to Prometheus.
Thanks :)
Unfortunately, PostgreSQL doesn't have a built-in /metrics endpoint. All sorts of metrics can, however, be obtained through SQL queries against the system statistics views (pg_stat_*).
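For instance, a minimal sketch (using the psycopg2 driver; the connection string is an assumption you would adjust for your setup) that pulls a few counters from pg_stat_database:

# Hypothetical example: read a few counters from the pg_stat_database view.
# Requires: pip install psycopg2-binary
import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres host=localhost")  # adjust DSN
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT datname, numbackends, xact_commit, blks_hit, blks_read
        FROM pg_stat_database
        WHERE datname IS NOT NULL
    """)
    for datname, numbackends, xact_commit, blks_hit, blks_read in cur.fetchall():
        print(f"{datname}: backends={numbackends} commits={xact_commit} "
              f"cache_hits={blks_hit} disk_reads={blks_read}")
conn.close()

An exporter is essentially a small service that runs queries like this on a schedule and republishes the results in Prometheus' text format.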
Here is the list of monitoring tools.
For Prometheus, there is a pretty good exporter.
I'm starting out with tracing tools. I would like to use Grafana Tempo as the backend storage and Jaeger as the UI. Is it possible for this stack to work together? I'm running it in Docker via the official docker-compose files.
I checked the Jaeger documentation and did not find anything about Grafana Tempo support. There is only Elasticsearch, Cassandra, InfluxDB, etc., but not Grafana Tempo.
Thanks
You have to remember that when you use Tempo (or Loki), these systems do not index the data. That is why they are so inexpensive to operate; the tradeoff is that you cannot do a full-text search across the data in bulk, and it is also why Jaeger does not support Tempo as a backend. The way all the Grafana projects work is that when you troubleshoot, you start with a metric, isolate down to a small timeframe or a specific component, and then pivot to logs or traces. Unfortunately, when troubleshooting there are lots of good reasons to start with logs or traces instead, but that is not possible with these backends. This is the tradeoff between indexing and not indexing, and it is why they are cheap to operate in comparison to OpenSearch/Elasticsearch.
I barely managed to set up Prometheus & Grafana on my new Raspberry Pi (running Raspbian). Now I would like to monitor a smart power plug with a REST API. That means I could send a curl command and receive some data back:
$ curl --location --request GET '[Switch IP]/report'
{
    "power": 35.804927825927734,
    "relay": true,
    "temperature": 21.369983673095703
}
However, I am at a loss as to how to get this data automagically queried and parsed by Prometheus. My Google-fu is failing me, as all the results explain how to query Prometheus. Any hints would be greatly appreciated.
It's non-trivial, unfortunately.
Prometheus "scrapes" HTTP endpoints and expects these to publish metrics using Prometheus' exposition format. This is a simple text format that lists metrics with their values. I was unable to find a good example.
You would need an "exporter" that interacts with your device, creates metrics (in the Prometheus format), and publishes them on an HTTP endpoint (not REST, just a simple text page).
Then, you'd point the Prometheus server at this exporter's endpoint and Prometheus would periodically read the metrics representing your device and enable you to interact with the results.
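As a rough illustration, a minimal exporter for this plug could look like the sketch below, written in Python with the prometheus_client and requests libraries; the plug IP, port 9100, and metric names are assumptions for the example:

# Hypothetical exporter sketch for the plug's /report endpoint.
# Requires: pip install prometheus_client requests
import time
import requests
from prometheus_client import Gauge, start_http_server

PLUG_URL = "http://192.168.1.50/report"  # replace with your switch IP

power = Gauge("smartplug_power_watts", "Instantaneous power draw reported by the plug")
relay = Gauge("smartplug_relay_on", "1 if the relay is on, 0 otherwise")
temperature = Gauge("smartplug_temperature_celsius", "Temperature reported by the plug")

if __name__ == "__main__":
    start_http_server(9100)  # exposition-format page at http://<pi address>:9100/metrics
    while True:
        data = requests.get(PLUG_URL, timeout=5).json()
        power.set(data["power"])
        relay.set(1 if data["relay"] else 0)
        temperature.set(data["temperature"])
        time.sleep(15)  # refresh roughly once per scrape interval

# Scraping http://<pi address>:9100/metrics then returns lines such as:
#   smartplug_power_watts 35.8
#   smartplug_relay_on 1.0
#   smartplug_temperature_celsius 21.4
# and the matching scrape job in prometheus.yml would look roughly like:
#   scrape_configs:
#     - job_name: smartplug
#       static_configs:
#         - targets: ["localhost:9100"]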
There are a few possible approaches to make this a bit more straightforward:
https://github.com/ricoberger/script_exporter
https://github.com/grafana/agent/issues/1371 — discussing a possible script_exporter integration
https://github.com/prometheus/pushgateway — Prometheus’ push gateway
https://github.com/prometheus/blackbox_exporter — Prometheus’ blackbox exporter
https://medium.com/avmconsulting-blog/pushing-bash-script-result-to-prometheus-using-pushgateway-a0760cd261e — this post shows something similar
I have a use case in which metrics will be written to Kafka topics, and from there I have to send these metrics to a Grafana collection point.
Can it be done without a datasource?
Any idea how it can be done?
You need to store your metrics somewhere and then visualize them. If you want to use Grafana, you can store metric data from Kafka in Elasticsearch via connectors. I think you can also store them in InfluxDB, Graphite, or Prometheus. You can use the data source plugins that Grafana provides.
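As a rough sketch of the Kafka-to-InfluxDB route (assuming JSON messages on the topic, the kafka-python and influxdb-client libraries, and an InfluxDB 2.x instance that Grafana reads through its InfluxDB data source; the topic, bucket, and credentials below are made up):

# Hypothetical bridge: consume JSON metrics from a Kafka topic and write them
# to InfluxDB, which Grafana can then query as a data source.
# Requires: pip install kafka-python influxdb-client
import json

from kafka import KafkaConsumer
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

consumer = KafkaConsumer(
    "metrics",                                 # topic name (assumption)
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

influx = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = influx.write_api(write_options=SYNCHRONOUS)

for message in consumer:
    payload = message.value                    # e.g. {"name": "cpu_load", "value": 0.42}
    point = Point(payload["name"]).field("value", float(payload["value"]))
    write_api.write(bucket="metrics", record=point)

A Kafka Connect sink (for Elasticsearch or InfluxDB) achieves the same thing without custom code.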
Using Kibana is also a good option. Kibana is similar to Grafana; Elasticsearch and Kibana are part of the Elastic Stack.
I found this open-source project, which is basically a Kafka plugin for Grafana.
https://github.com/zylklab/lorca
You can either use it straightaway or get inspired to write your own Grafana plugin.
Is it possible to monitor pods or to get mail alerts when a pod is down? How do I set up the alert?
Yes, it is possible; you have to set up Prometheus with Alertmanager.
I recommend using prometheus-operator as an easier way to start with monitoring.
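For illustration, a "pod not running" alert rule might look roughly like this (it assumes kube-state-metrics is installed so that the kube_pod_status_phase metric exists; with prometheus-operator the same rule would be wrapped in a PrometheusRule resource, and the mail itself is sent by an Alertmanager email receiver):

groups:
  - name: pod-availability
    rules:
      - alert: PodNotRunning
        expr: sum by (namespace, pod) (kube_pod_status_phase{phase=~"Pending|Unknown|Failed"}) > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} has not been running for 5 minutes"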
It depends on whether you want to use open-source apps or paid software for monitoring and alerting.
As #FL3SH advised, the most commonly used software for monitoring and sending alerts is Prometheus with Alertmanager. There are many "how to" tutorials for this solution online, for example this one.
However, there are also many paid products to monitor your cluster/pods, alert you, create history charts, etc. (like Datadog, Sysdig, Dynatrace), as well as mixed solutions (like Prometheus and Grafana, cAdvisor, Kibana, etc.). For more information you can check this article.
Please note that each cloud provider offers some specific monitoring features.
I am trying to figure out how to build a scalable database system. I settled on using PostgreSQL and am trying to figure out how to implement load balancing. I looked into HAProxy, which I really liked. I noticed that there are multiple different high-availability configurations of PostgreSQL: http://www.postgresql.org/docs/8.3/static/high-availability.html. Which one would be best to pair with HAProxy?
I have used HAProxy for MySQL, but that was because there were no tailor-made options for MySQL, and HAProxy does a great job. For PostgreSQL, there are quite a few tailor-made options. Maybe you could have a look at pgpool?
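If you do put HAProxy in front of PostgreSQL, a minimal TCP-mode sketch looks something like the following (the addresses are placeholders, and a real setup would separate the primary from read-only replicas and tie health checks to your replication/failover mechanism):

listen postgresql
    bind *:5432
    mode tcp
    balance roundrobin
    option tcp-check
    server pg1 10.0.0.11:5432 check
    server pg2 10.0.0.12:5432 check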
Are you looking for scalability alone, or failover too? Which version of PostgreSQL are you using?