How to push Mule (Java-based) logs to Prometheus storage?

I have a Mule application which mostly makes HTTP requests and logs as plain text. I want to push these logs as metrics to Prometheus. Since this is a legacy application, it would take a substantial amount of time to change the code to push metrics directly into Prometheus storage.
The idea is to show the Prometheus metrics in a Grafana dashboard.
Is there any intermediate tool that converts plain-text logs to metrics?
Anything that helps with this requirement would be appreciated.
FYI: we have Nagios and Splunk doing this task as of now; we are looking to move our solution to Prometheus and Grafana.

In situations like these you can use tools like https://github.com/fstab/grok_exporter to convert logs into metrics.
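For example, grok_exporter tails a log file, matches each line against grok patterns, and exposes the results on a /metrics endpoint that Prometheus can scrape, so the Mule code doesn't need to change. The following is a minimal sketch of a grok_exporter config; the log path, the match pattern, and the label names are assumptions to adapt to the actual Mule log format.

# grok_exporter config sketch (config_version 3); path, pattern and port are assumptions
global:
  config_version: 3
input:
  type: file
  path: /var/log/mule/app.log        # hypothetical Mule log location
  readall: false
imports:
- type: grok_patterns
  dir: ./patterns                    # standard grok pattern library shipped with grok_exporter
metrics:
- type: counter
  name: mule_http_requests_total
  help: HTTP request log lines, by method and status.
  match: '%{TIMESTAMP_ISO8601:time} .* %{WORD:method} .* status=%{NUMBER:status}'
  labels:
    method: '{{.method}}'
    status: '{{.status}}'
server:
  port: 9144                         # grok_exporter's default listen port

Prometheus then scrapes http://<host>:9144/metrics like any other target, and Grafana reads the resulting series from Prometheus.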

Why are metrics available via http://localhost:56789/metrics but not returned via https://example.com/path/serviceforapp/metrics?

Kubernetes, Prometheus, and a test application that can return either metrics or a simple phrase in response to an HTTP request.
The end goal is to see graphs in Grafana, and I want to make sure that everything works up to the current point in the setup.
Now I want to view the metrics by URL, to confirm that the ServiceMonitor is working correctly and that the metrics reach Grafana. But so far I haven't been able to find a good enough troubleshooting guide.
I assume it's okay not to show everyone your metrics, but I want to make sure that this is normal behavior, and to know which way to google in case I want to allow it.
This is intended behavior when using Grafana to visualize data from Managed Service for Prometheus. The metrics are only reachable over HTTP while you are browsing Grafana because the deployment port-forwards the frontend service to your local machine; since it is not good practice to leave metrics exposed to everyone, they are not published publicly by default.
Google Cloud APIs all require authentication using OAuth2, but Grafana doesn't support OAuth2 authentication for Prometheus data sources. To use Grafana with Managed Service for Prometheus, you must run the Prometheus UI frontend as an authentication proxy.
You can refer to the documentation on Grafana and How-to-Deploy-Grafana to learn more.
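Once the frontend proxy is running in the cluster, Grafana can point at it as an ordinary Prometheus data source. A minimal provisioning sketch, assuming a frontend service in the monitoring namespace on port 9090 (the service name, namespace, and port are assumptions and depend on how the frontend was deployed):

# grafana data source provisioning sketch; service name, namespace and port are assumptions
apiVersion: 1
datasources:
  - name: Managed Service for Prometheus
    type: prometheus
    access: proxy
    url: http://frontend.monitoring.svc.cluster.local:9090   # the auth-proxy frontend service
    isDefault: true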

Moving Logs into a Kubernetes Cluster

I have Grafana running inside a Kubernetes cluster and I want to push logs from outside of Kubernetes (apps not running in K8s, databases, etc.) into the cluster so I can view them in Grafana. What's the best way of doing this?
Grafana is a GUI for reporting on data stored in other databases. It sounds like you are capturing metrics from the cluster and that data is stored in another database; if you are running Prometheus, that is the database holding Grafana's time-series data. Depending on the volume of data, you may also end up running a long-term storage system like Thanos in the future to keep that data over time.
Back to logging: similarly, to use Grafana for logs you'll need to implement some kind of logging database. The most popular is the formerly open-source ELK (Elasticsearch, Logstash, Kibana) stack; you can now use OpenSearch, which is an open-source fork of Elasticsearch and Kibana. Most Kubernetes distributions come with Fluentd, which replaces Logstash for shipping data, and you can also install Fluentd or Fluent Bit on any host to send data to this stack. You'll find that Grafana is not the best tool for log analysis, so most people use Kibana (OpenSearch Dashboards); you can use Grafana as well, it's just painful IMO.
Another option, if you don't want to run ELK, is Grafana Loki, another open-source database for logging. It's a lot simpler, but also more limited in how you can query the logs because of the way it indexes them. It works nicely with Grafana, but once again it is not a full-text indexing technology, so it will be a bit limited. For shipping logs from hosts outside the cluster, you can run Promtail (Loki's agent) on those hosts and point it at Loki, as in the sketch below.
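A minimal Promtail sketch for a host outside the cluster, assuming Loki has been exposed via an Ingress or LoadBalancer; the URL, labels, and log path are hypothetical:

# promtail config sketch for an external host; URL, labels and path are assumptions
server:
  http_listen_port: 9080
positions:
  filename: /var/lib/promtail/positions.yaml
clients:
  - url: http://loki.example.com/loki/api/v1/push   # Loki push endpoint exposed by the cluster
scrape_configs:
  - job_name: external-apps
    static_configs:
      - targets: [localhost]
        labels:
          job: external-app
          host: db-server-01                        # hypothetical external host
          __path__: /var/log/myapp/*.log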
Hope this is helpful, let me know if you have questions!

How to properly monitor all ELK components with Prometheus?

I would like to monitor all the ELK services running in our Kubernetes clusters to be sure they are still running properly.
I am able to monitor the Kibana portal via its URL, and Elasticsearch via Prometheus and its metrics (ES exposes some interesting metrics for confirming that it is working well).
But does something similar exist for Filebeat, Logstash, ...? Do these daemons expose metrics that Prometheus can scrape and use to analyze their state?
Thank you very much for any hints.
There is an exporter for Elasticsearch here: https://github.com/prometheus-community/elasticsearch_exporter and an exporter for Kibana here: https://github.com/pjhampton/kibana-prometheus-exporter. These will let Prometheus scrape the endpoints and collect metrics.
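Once those exporters are running, Prometheus just needs scrape jobs for them. A sketch under assumptions: the target hostnames are hypothetical, elasticsearch_exporter listens on its default port 9114, and the Kibana plugin serves metrics under /_prometheus/metrics on Kibana's own port.

# prometheus scrape config sketch; target hostnames are assumptions
scrape_configs:
  - job_name: elasticsearch
    static_configs:
      - targets: ['elasticsearch-exporter:9114']     # elasticsearch_exporter default port
  - job_name: kibana
    metrics_path: /_prometheus/metrics               # path added by kibana-prometheus-exporter
    static_configs:
      - targets: ['kibana:5601']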
We are also working on a new profiler inside OpenSearch which will provide much more detailed metrics and fix a lot of bugs, and it will natively provide an exporter for Prometheus to scrape: https://github.com/opensearch-project/OpenSearch/issues/539. You can follow along there; this is in active development if you are looking for an open-source alternative to Elasticsearch and Kibana.
Yes, both the Beats and Logstash have metrics endpoints for monitoring.
These monitoring endpoints are built to be consumed by Metricbeat, but since they return JSON you can use other tools to consume them.
For Logstash the metrics endpoint is enabled by default, listening on localhost on port 9600, and from the documentation you have these two endpoints:
node
node_stats
For the Beats family you need to enable it, just as you would if consuming the metrics with Metricbeat; the documentation explains how to do that (see the config sketch after this list).
Then you will have two endpoints:
stats
state
So you would just need to use those endpoints to collect the metrics.
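For example, for Filebeat the local endpoint is switched on through the http settings in filebeat.yml; a minimal sketch, assuming the default port:

# filebeat.yml snippet enabling the local monitoring endpoint (default port 5066)
http:
  enabled: true
  host: localhost
  port: 5066
# the endpoints are then at http://localhost:5066/stats and http://localhost:5066/state,
# just as Logstash serves http://localhost:9600/_node and /_node/stats by default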

Custom CloudWatch metrics with the EKS CloudWatch Agent

I have set up Container Insights as described in the documentation.
Is there a way to remove some of the metrics sent over to CloudWatch?
Details :
I have a small cluster (3 client-facing namespaces, ~8 services per namespace) with some custom monitoring, logging, etc. in their own separate namespaces, and I just want to use CloudWatch for critical client-facing metrics.
The problem I am having is that the agent sends over 500 metrics to CloudWatch, where I am really only interested in a few of the important ones, especially as AWS bills per metric.
Is there any way to limit which metrics get sent to CloudWatch?
It would be especially helpful if I could send metrics only from certain namespaces, for example excluding the kube-system namespace.
My configmap is:
cwagentconfig.json: |
  {
    "logs": {
      "metrics_collected": {
        "kubernetes": {
          "cluster_name": "*****",
          "metrics_collection_interval": 60
        }
      },
      "force_flush_interval": 5
    }
  }
I have searched for a while now, but couldn't really find anything on:
  "metrics_collected": {
    "kubernetes": {
I've looked as best I can and you're right, there's little or nothing to find on this topic. Before I make the obvious-but-unhelpful suggestions of either using Prometheus or asking on the AWS forums, here's a quick look at what the CloudWatch agent actually does.
The CloudWatch agent gets container metrics either from cAdvisor, which runs as part of the kubelet on each node, or from the Kubernetes metrics-server API (which also gets its metrics from the kubelet and cAdvisor). cAdvisor is well documented, and it's likely that the CloudWatch agent uses the Prometheus-format metrics cAdvisor produces to construct its own list of metrics.
That's just a guess though, unfortunately, since the CloudWatch agent doesn't seem to be open source. That also means it may be possible to just set a 'measurement' option within the kubernetes section and select metrics based on Prometheus metric names, but probably that's not supported. (If you do ask AWS, the Premium Support team keeps an eye on the forums, so you might get lucky and get an answer without paying for support.)
So, if you can't cut down the metrics created by Container Insights, what are your other options? Prometheus is easy to deploy, and you can set up recording rules to cut down on the number of metrics it actually saves. It doesn't push to CloudWatch by default, but you can keep the metrics locally if you have some space on your node, or use a remote storage service like MetricFire (the company I work for, to be clear!) which provides Grafana to go along with it. You can also export metrics from CloudWatch and use Prometheus as your single source of truth, but that means more storage on your cluster.
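To make the recording-rules idea concrete, here's a minimal sketch that keeps only a per-namespace CPU aggregate and excludes kube-system; the rule name and the cAdvisor metric chosen are illustrative assumptions.

# prometheus recording rule sketch; rule and metric names are illustrative
groups:
  - name: namespace-aggregates
    rules:
      - record: namespace:container_cpu_usage_seconds:rate5m
        expr: >
          sum by (namespace) (
            rate(container_cpu_usage_seconds_total{namespace!="kube-system"}[5m])
          )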
If you prefer to view your metrics in CloudWatch, there are tools like prometheus-to-cloudwatch which scrape Prometheus endpoints and send the data to CloudWatch, much like (I'm guessing) the CloudWatch agent does. That service has include and exclude settings for deciding which metrics are sent to CloudWatch.
I've written a blog post on EKS Architecture and Monitoring in case that's of any help to you. Good luck, and let us know which option you go for!

Kubernetes - monitor number requests

I have an app running on Google Container Engine.
I would like to monitor the number of requests per second my API is receiving. How can I do this?
Is there a way to monitor this from historical metrics in Stackdriver, as I have opted for Stackdriver Premium?
Looking at https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/ I see that Stackdriver is deployed in the cluster by default, and according to https://cloud.google.com/monitoring/api/metrics it looks like this includes the metrics you are looking for.