Does Jaeger support Grafana Tempo as a backend?

I'm starting out with tracing tools. I would like to use Grafana Tempo as the backend storage and Jaeger as the UI. Is it possible to make this stack work together? I'm running it in Docker via the official docker-compose files.
I checked the Jaeger documentation and did not find anything about Grafana Tempo support. There is only Elasticsearch, Cassandra, InfluxDB, etc., but not Grafana Tempo.
Thanks

You have to remember that Tempo (like Loki) does not index the data it stores. That is why these systems are so inexpensive to operate; the tradeoff is that you cannot do a full-text search across the data in bulk, and that is why Jaeger does not support Tempo as a backend. The way all the Grafana projects are meant to be used is that when you troubleshoot, you start with a metric, isolate the problem down to a small timeframe or a specific component, and then pivot to logs or traces. Unfortunately, there are lots of good reasons to start troubleshooting from logs or traces instead, and that is not possible with these backends. This is the tradeoff between indexing and not indexing, and it is why Tempo and Loki are cheap to operate in comparison to OpenSearch/Elasticsearch.
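Note that Tempo can ingest spans in Jaeger format, so your instrumentation does not have to change; it is only the Jaeger query UI that cannot read Tempo's storage. Traces stored in Tempo are queried through Grafana instead. Here is a minimal datasource provisioning sketch; the service name "tempo" and port 3200 are assumptions that must match your own docker-compose setup:

```yaml
# grafana/provisioning/datasources/tempo.yaml -- minimal sketch.
# The hostname "tempo" and port 3200 are assumptions; match them to
# the service name and query port in your docker-compose file.
apiVersion: 1
datasources:
  - name: Tempo
    type: tempo
    access: proxy
    url: http://tempo:3200
```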

Related

Does PostgreSQL have built-in open metrics?

Does anyone know whether PostgreSQL has a built-in /metrics endpoint (or something like that)?
I've searched the web and all I found were third-party open source tools that send metrics to Prometheus.
Thanks :)
Unfortunately, PostgreSQL doesn't have a built-in /metrics endpoint. All sorts of metrics can, however, be obtained through SQL queries against the system statistics views, such as pg_stat_database and pg_stat_activity.
The PostgreSQL wiki maintains a list of monitoring tools.
For Prometheus, there is a pretty good exporter, postgres_exporter.
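As a rough sketch of how the exporter slots in (assuming postgres_exporter runs beside the database on its default port 9187; the target hostname is a placeholder), the Prometheus side is a single scrape job:

```yaml
# prometheus.yml (fragment) -- sketch of a scrape job for postgres_exporter.
# "postgres-exporter" is a placeholder hostname; 9187 is the exporter's
# default listen port.
scrape_configs:
  - job_name: postgresql
    static_configs:
      - targets: ["postgres-exporter:9187"]
```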

Can Grafana cluster use cockroachDB as its metadata DB?

I see that a Grafana cluster can use PostgreSQL or MySQL as its metadata DB.
Can it also use CockroachDB?
(In general, I'm looking for an HA solution for Grafana, where the DB is also HA)
Thanks,
Moshe
You might be interested in following along with this issue: https://github.com/grafana/grafana/issues/8900
There are a couple of problems that prevent it from working out of the box right now. A big one is that CockroachDB has only experimental support for altering the data type of an existing column (ALTER TABLE ... ALTER COLUMN ... TYPE ...), which Grafana's schema migrations rely on.

Sending metrics from kafka to grafana

I have a use case in which metrics will be written to Kafka topics, and from there I have to send these metrics to a Grafana collection point.
Can it be done without a datasource?
Any idea how it can be done?
You need to store your metrics somewhere and then visualize them. If you want to use Grafana, you can move metric data from Kafka to Elasticsearch via a Kafka Connect sink connector; a sketch of that route follows below. You can also store the metrics in InfluxDB, Graphite, or Prometheus, and use the data source plugins Grafana provides for each.
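As a rough sketch of the connector route (this assumes Kafka Connect with the Confluent Elasticsearch sink connector installed; the connector name, topic name, and Elasticsearch URL are placeholders), the configuration you would submit to the Connect REST API looks like this, written as YAML for readability:

```yaml
# Kafka Connect Elasticsearch sink -- sketched as YAML for readability;
# the Connect REST API accepts the same keys as JSON. The topic name
# and connection URL are placeholders.
name: metrics-to-elasticsearch
config:
  connector.class: io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
  tasks.max: "1"
  topics: metrics            # the Kafka topic(s) your metrics land in
  connection.url: http://elasticsearch:9200
  key.ignore: "true"         # generate document IDs instead of using record keys
  schema.ignore: "true"      # index records without a registered schema
```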
Using Kibana is also a good option. Kibana is to Elasticsearch roughly what Grafana is to its data sources; Elasticsearch and Kibana are both part of the Elastic Stack.
I found this open source project, which is basically a Kafka plugin for Grafana.
https://github.com/zylklab/lorca
You can either use it as-is or take it as inspiration to write your own Grafana plugin.

How can I monitor my pods running on Kubernetes?

Is it possible to monitor pods, or to get mail alerts when a pod is down? How do I set up such an alert?
Yes, it is possible: you have to set up Prometheus with Alertmanager.
I recommend using prometheus-operator as an easier way to start with monitoring.
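As a rough sketch of what the alerting side looks like (this assumes kube-state-metrics is deployed and scraped, since it is what exposes kube_pod_status_phase; the rule name, threshold, and labels are illustrative only):

```yaml
# pod-alerts.rules.yaml -- minimal Prometheus alerting rule sketch.
# Assumes kube-state-metrics is deployed and scraped, since it exposes
# kube_pod_status_phase; names and thresholds are illustrative.
groups:
  - name: pod-alerts
    rules:
      - alert: PodDown
        # fires when a pod has been Failed or Unknown for 5 minutes
        expr: sum by (namespace, pod) (kube_pod_status_phase{phase=~"Failed|Unknown"}) > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is down"
```

Mail delivery is then handled on the Alertmanager side, via a routing rule pointing at a receiver with email_configs (SMTP host plus to/from addresses).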
It depends if you want to use open source apps or you want to use paid software for monitoring and alerting.
As FL3SH advised, the most commonly used software for monitoring and alerting is Prometheus with Alertmanager. There are many "how to" tutorials online for this solution, for example this one.
However, there is also plenty of paid software for monitoring your cluster/pods, alerting you, keeping historical graphs, etc. (like Datadog, Sysdig, Dynatrace), as well as mixed solutions (like Prometheus and Grafana, cAdvisor, Kibana, etc.). For more information you can check this article.
Please note that each cloud provider offers some specific monitoring features.

Logging and event tracer on Kubernetes

Is there any way of getting merged logs from more than one deployment on Kubernetes? What's the best way of logging events for all deployments?
Look at the Elasticsearch, Logstash, and Kibana (ELK) stack, with Filebeat or Fluentd shipping log data from individual deployments/pods into your Elasticsearch DB. Once the data is in the DB, use Kibana to visualize and search the merged logs; Logstash can be used to modify the data in flight. A simple Google search will yield lots of resources on setting this up, and a Filebeat sketch follows below.
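As a rough sketch of the Filebeat side (Filebeat normally runs as a DaemonSet so every node's container logs are picked up; the Elasticsearch host below is a placeholder), a minimal filebeat.yml that tails all container logs on a node and enriches them with pod metadata:

```yaml
# filebeat.yml -- minimal sketch for shipping pod logs to Elasticsearch.
# Filebeat usually runs as a DaemonSet so every node is covered; the
# Elasticsearch host below is a placeholder for your own endpoint.
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log

processors:
  # Enrich each event with pod, namespace, and label metadata,
  # which is what makes the merged logs searchable per deployment.
  - add_kubernetes_metadata:
      host: ${NODE_NAME}   # injected via the DaemonSet's env section
      matchers:
        - logs_path:
            logs_path: /var/log/containers/

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
```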