I would like to monitor all the ELK services running in our Kubernetes clusters to make sure they are still running properly.
I can monitor the Kibana portal via its URL, and Elasticsearch via Prometheus and its metrics (Elasticsearch exposes some interesting metrics for confirming that it is working well).
But does something similar exist for Filebeat, Logstash, etc.? Do these daemons expose metrics for Prometheus, so that their state can be watched and analyzed?
Thank you very much for any hints.
There is an exporter for Elasticsearch here: https://github.com/prometheus-community/elasticsearch_exporter and an exporter for Kibana here: https://github.com/pjhampton/kibana-prometheus-exporter. These will enable your Prometheus to scrape the endpoints and collect metrics.
We are also working on a new profiler inside OpenSearch which will provide much more detailed metrics and fix a lot of bugs. It will also natively provide an endpoint for Prometheus to scrape. You can follow along at https://github.com/opensearch-project/OpenSearch/issues/539; this is in active development, in case you are looking for an open-source alternative to Elasticsearch and Kibana.
Yes, both the Beats and Logstash have metrics endpoints for monitoring.
These monitoring endpoints are built to be consumed by Metricbeat, but since they return JSON you can use other tools to monitor them.
For Logstash the metrics endpoint is enabled by default, listening on localhost on port 9600, and according to the documentation you have these two endpoints:
node
node_stats
For the Beats family you need to enable the endpoint as if you were going to consume the metrics with Metricbeat; this documentation explains how to do that.
Then you will have two endpoints:
stats
state
So you would just need to use those endpoints to collect the metrics.
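Since those endpoints just return JSON, a small script can already turn them into a basic health check. Here is a minimal sketch; the exact shape of the `_node/stats` response should be verified against your Logstash version, and the sample payload below is illustrative:

```python
import json
import urllib.request

def fetch_logstash_events(base_url="http://localhost:9600"):
    """Fetch node stats from Logstash's monitoring API and
    return the top-level event counters."""
    with urllib.request.urlopen(f"{base_url}/_node/stats") as resp:
        stats = json.load(resp)
    return stats["events"]

# Trimmed sample of a _node/stats response (shape assumed from the docs):
sample = json.loads("""
{
  "host": "logstash-0",
  "events": {"in": 1200, "filtered": 1200, "out": 1180, "duration_in_millis": 532}
}
""")

# A simple health signal: how many events entered the pipeline
# but have not come out yet.
backlog = sample["events"]["in"] - sample["events"]["out"]
print(backlog)  # 20
```

The same pattern works for the Beats `stats` endpoint; only the JSON keys differ.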
I want to know if it is possible to get metrics for the services inside pods using Prometheus.
I don't mean monitoring the pods themselves, but the processes inside those pods: for example, containers that run Apache or Nginx alongside their main service, so I can retrieve metrics for the web server as well as the main service (for example, a WordPress image that also comes with a configured Apache).
The cluster already has running kube-state-metrics, node-exporter and blackbox exporter.
Is it possible? If so, how can I manage to do it?
Thanks in advance
Prometheus works by scraping an HTTP endpoint that provides the actual metrics. That's where you get the term "exporter". So if you want to get metrics from the processes running inside of pods you have three primary steps:
You must modify those processes to export the metrics you care about. This is inherently custom for each kind of application. The good news is that there are lots of pre-built exporters, including ones for Nginx and Apache, which you mention. Most application frameworks can also export Prometheus metrics themselves, e.g. MicroProfile, Quarkus, and many more.
You must then modify your pod definition to expose the HTTP endpoint that those processes now provide. This is very straightforward, but it will depend on the configuration you specify for your exporters.
You must then modify your Prometheus to scrape those targets. This will depend on your monitoring stack. For OpenShift you will find the docs here for enabling user workload monitoring, and here for providing exporter details.
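As a concrete illustration of the second step, here is a hypothetical pod definition that runs an exporter sidecar next to nginx and exposes its metrics port (image tags, names, and the port are illustrative; 9113 is the nginx exporter's conventional default):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: nginx-exporter          # sidecar translating nginx status into Prometheus metrics
      image: nginx/nginx-prometheus-exporter:1.1
      ports:
        - containerPort: 9113
          name: metrics             # a named port makes relabeling in Prometheus easier
```

A scrape job (or a ServiceMonitor/PodMonitor, if you run the Prometheus Operator) can then select pods by that named `metrics` port.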
In one of my projects, scaling and orchestration are implemented using the technologies of a local cloud provider, with no Docker or Kubernetes. The project has poor logging and monitoring, so I'd like to install Prometheus, Loki, and Grafana for metrics, logs, and visualisation respectively. Unfortunately, I have found no articles with instructions for using Prometheus without Kubernetes.
But is it possible? If so, is it a good approach, and how would I do it? I also know that Prometheus and Loki can automatically discover services in Kubernetes to extract metrics and logs, but will the same work with a custom orchestration system?
I can't comment on Loki, but Prometheus is definitely doable.
Prometheus supports a number of service discovery mechanisms, Kubernetes being just one of them. If you look at the list of options (the ones ending in _sd_config) you can see whether your provider is there.
If it is not, then a generic service discovery mechanism can be used. Maybe DNS-based discovery will work with your custom system? If not, then with some glue code, file-based service discovery will almost certainly work.
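File-based discovery is just a JSON (or YAML) file of targets that your glue code rewrites, plus a scrape job pointing at it. A sketch of the prometheus.yml fragment (paths and the job name are illustrative):

```yaml
scrape_configs:
  - job_name: custom-orchestrator
    file_sd_configs:
      - files:
          - /etc/prometheus/targets/*.json
        refresh_interval: 30s   # Prometheus re-reads the files on this interval
```

Each file then contains target groups such as `[{"targets": ["10.0.0.5:9100"], "labels": {"service": "api"}}]`; whenever your orchestrator starts or stops an instance, the glue code rewrites the file and Prometheus picks up the change without a restart.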
Yes, I'm running Prometheus, Loki, etc. just fine in an AWS ECS cluster. It just requires a bit more configuration, especially around service discovery (if you are not already using something like ECS Service Discovery or HashiCorp Consul).
I have a REST API and I would like to check the health of specific endpoints. Can Grafana be used to monitor the health of specific endpoints? Are there any plugins for that? I know it can be integrated with Zabbix; are there any other ways to do it?
Any help would be great.
You can look at the Prometheus blackbox exporter and use it to monitor the health of your endpoints.
https://github.com/prometheus/blackbox_exporter
Prometheus can then be added as a data source in Grafana. And yes, you first have to create a dashboard/panel if you want to use Grafana's alerting capability.
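A typical prometheus.yml fragment for probing an endpoint through the blackbox exporter looks roughly like this (the module name, target URL, and exporter address are illustrative and must match your blackbox.yml and deployment):

```yaml
scrape_configs:
  - job_name: blackbox-http
    metrics_path: /probe
    params:
      module: [http_2xx]            # module defined in blackbox.yml
    static_configs:
      - targets:
          - https://api.example.com/health
    relabel_configs:
      - source_labels: [__address__]      # pass the URL as the ?target= parameter
        target_label: __param_target
      - source_labels: [__param_target]   # keep the URL as the instance label
        target_label: instance
      - target_label: __address__         # actually scrape the exporter itself
        replacement: blackbox-exporter:9115
```

The resulting `probe_success` and `probe_http_status_code` metrics are what you would graph and alert on in Grafana.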
That really depends on what monitoring the health of specific endpoints means for you. Usually it is one of:
metric collection: No, that is not a native task for Grafana. There was a dedicated Worldping plugin for Grafana (https://grafana.com/grafana/plugins/raintank-worldping-app), but it is deprecated. Usually your monitoring tool (Zabbix, Prometheus, Dynatrace, ...) does this.
metric visualization: Yes, this is the task Grafana is best at. It can visualize metrics from supported time series databases/apps.
alerting: Yes, but only in the graph panel, so there is overhead: you need to manage dashboards/panels for every single metric just to have alerting. Again, monitoring tools usually have a better design for this task.
=> Use monitoring tools for monitoring and use Grafana just for the graphs.
I'm working on a production Kubernetes cluster with an HTTP-based application, and I'd like to set up monitoring and alerting for HTTP errors. It is clear how to check the uptime of the service (using a monitoring tool, e.g. Stackdriver), but not at all clear how to monitor the HTTP failure rate.
I've got an nginx-ingress-controller as an end-point (with external load balancer).
How to collect and view the metrics such as latency, HTTP failures, etc. from this load balancer?
In particular I need to know when the HTTP response failure rate exceeds some percentage.
If you are looking at monitoring HTTP 4xx and 5xx errors, for example, I believe the best way is to aggregate the load balancer and nginx ingress controller logs in a logging tool. If you are looking for open-source solutions, you could use something like Elasticsearch with Kibana to visualize the errors over a time frame. To ship the logs you can use a forwarder like Fluent Bit or Fluentd.
If you have a budget for a paid tool you can use a commercially available solution like:
Loggly
Datadog logging
Papertrail
etc.
Then you can set up alerts with any of these tools. For Elasticsearch you can use something like ElastAlert.
If you are using GCP you can also use their Logging tool, create a custom metric, and alert on that metric.
Another alternative, though it may not have the metrics you are looking for, is to use Prometheus with an NGINX ingress Prometheus exporter to monitor nginx metrics (it depends on which metrics you'd like to monitor).
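With the ingress controller's Prometheus metrics in place, the "failure rate exceeds some percentage" condition can be expressed as a PromQL alerting rule. A sketch, assuming the metric names exposed by the ingress-nginx controller (verify them against your version; the 5% threshold is illustrative):

```yaml
groups:
  - name: http-errors
    rules:
      - alert: HighHTTP5xxRate
        expr: |
          sum(rate(nginx_ingress_controller_requests{status=~"5.."}[5m]))
            / sum(rate(nginx_ingress_controller_requests[5m])) > 0.05
        for: 10m      # require the condition to hold for 10 minutes before firing
        annotations:
          summary: "More than 5% of requests are failing with 5xx responses"
```

Computing the ratio from `rate()` over a window, rather than raw counters, is what makes the alert robust to traffic volume changes.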
I am trying to figure out how best to collect metrics from a set of Spring Boot based services running within a Kubernetes cluster. Looking at the various docs, it seems the choice for internal monitoring is between Actuator and Spectator, with metrics either being pushed to an external collection store such as Redis or StatsD, or pulled, in the case of Prometheus.
Since the number of instances of a given service will vary, I don't see how Prometheus can be configured to poll those running services, since it will lack knowledge of them. I am also building around a Eureka service registry, so I'm not sure whether that gets polled first in this configuration.
Any real world insight into this kind of approach would be welcome.
You should use the Prometheus Java client (https://www.robustperception.io/instrumenting-java-with-prometheus/) for instrumentation. Approaches like Redis and StatsD are to be avoided, as they mean hitting the network on every single event, greatly limiting what you can monitor.
Use file_sd service discovery in Prometheus to provide it with a list of targets from Eureka (https://www.robustperception.io/using-json-file-service-discovery-with-prometheus/), though if you're using Kubernetes, as your tag hints, Prometheus has a direct integration there.
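The Eureka-to-file_sd glue can be a very small script. A sketch that converts a simplified Eureka registry payload into the file_sd target-group format; the sample payload's shape (including the `port: {"$": ...}` nesting) is an assumption to check against your Eureka version:

```python
import json

def eureka_to_file_sd(apps_json):
    """Convert a simplified Eureka registry payload into
    Prometheus file_sd target groups."""
    groups = []
    for app in apps_json["applications"]["application"]:
        # Keep only instances Eureka reports as healthy.
        targets = [
            f"{inst['ipAddr']}:{inst['port']['$']}"
            for inst in app["instance"]
            if inst.get("status") == "UP"
        ]
        if targets:
            groups.append({"targets": targets,
                           "labels": {"job": app["name"].lower()}})
    return groups

# Simplified sample of a Eureka /eureka/apps response (shape is an assumption):
sample = {
    "applications": {"application": [
        {"name": "ORDERS",
         "instance": [
             {"ipAddr": "10.0.0.5", "port": {"$": 8080}, "status": "UP"},
             {"ipAddr": "10.0.0.6", "port": {"$": 8080}, "status": "DOWN"},
         ]}
    ]}
}

print(json.dumps(eureka_to_file_sd(sample), indent=2))
```

Run something like this on a timer, write the output to the file your file_sd_configs entry watches, and Prometheus will track the registry as instances come and go.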