Monitor a file inside a GCS bucket using Prometheus - google-cloud-storage

Hi, I am learning Prometheus and Grafana, and here is a summary of my use case:
I have created a sample GCS bucket inside my project and uploaded a sample file into that bucket.
I have installed Grafana, Prometheus and Stackdriver, and configured the Storage API for Prometheus together with the Stackdriver Compute and Storage APIs.
Grafana and Prometheus are running successfully.
Now I need to monitor the sample file in my storage bucket at a 5m interval using a Prometheus query and populate the availability metric in a Grafana dashboard. Could anyone help me with how to do this?

Related

Prometheus Grafana Pod metrics

We are using AWS EKS to deploy our application in a Kubernetes cluster, and we have set up Grafana using the AWS managed service for Prometheus.
However, most of the metrics related to the application do not show any data.
The namespace is visible on some of the dashboards, but it seems that none of the applications are captured by Prometheus.
These are Java applications built on top of Spring Boot.
The dashboard shows the namespace but captures no data; I tried many other dashboards with no luck.
Does this need some changes in the application's deployment.yaml?
Any help on the same is highly appreciated.
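For reference, if the Prometheus scrape configuration discovers pods via annotations (an assumption that depends on your setup), the pod template in deployment.yaml typically carries annotations like the ones below; the port and path here are assumptions based on Spring Boot Actuator with the Micrometer Prometheus registry:

spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"                    # assumed application port
        prometheus.io/path: "/actuator/prometheus"    # assumed Actuator/Micrometer metrics endpoint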

How to monitor a container running db2 image using Prometheus and also react app using Prometheus?

I have to build a monitoring solution using Prometheus and Grafana for a service built with React (front end) + Node.js + Db2 (containerised). I have no idea where to start; can someone suggest resources to learn from? Thank you.
First of all, you need to install Prometheus and Grafana in your Kubernetes cluster following the instructions given for each:
Prometheus: https://prometheus.io/docs/prometheus/latest/installation/
Grafana: https://grafana.com/docs/grafana/latest/installation/
Next, you need to understand that Prometheus is a pull-based metrics collection system. It retrieves metrics from configured targets (endpoints) at given intervals and displays the results.
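As an illustration, a minimal prometheus.yml showing a scrape interval and one target could look like this (the job name and target address are placeholders):

global:
  scrape_interval: 15s              # how often Prometheus pulls metrics from its targets

scrape_configs:
  - job_name: "my-service"          # placeholder job name
    static_configs:
      - targets: ["my-service:8080"]   # placeholder host:port exposing a /metrics endpoint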
You can set up a working monitoring system by implementing the steps below:
1) Instrument your application code so that Prometheus is able to scrape metrics from it. For this, you need to add instrumentation to the code via one of the supported Prometheus client libraries.
2) Configure Prometheus to scrape the metrics exposed by the service. Prometheus supports a Kubernetes custom resource named ServiceMonitor, introduced by the Prometheus Operator, that can be used to configure Prometheus to scrape the metrics defined in step 1 (a minimal example is sketched after these steps).
3) Observe the scraped metrics. Next, you can view the defined metrics in either the Prometheus UI or the Grafana UI by configuring Grafana's support for Prometheus.
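For step 2, a minimal ServiceMonitor sketch could look like the following, assuming the Prometheus Operator is installed and that your Service has the label app: my-service and a port named metrics (all of these names are placeholders):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-service-monitor          # placeholder name
  labels:
    release: prometheus             # must match the Operator's serviceMonitorSelector (assumption)
spec:
  selector:
    matchLabels:
      app: my-service               # placeholder label on the Service to scrape
  endpoints:
    - port: metrics                 # named Service port that serves the metrics endpoint
      interval: 30s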

Service Level metrics Prometheus in k8

I would like to see Kubernetes Service-level metrics in Grafana from the underlying Prometheus server.
For instance:
1) If I have 3 application pods exposed through a Service, I would like to see service-level metrics for CPU, memory & network I/O pressure, the total # of requests, and the # of failed requests.
2) Also, if I have a group of pods (replicas) related to an application which doesn't have a Service on top of them, I would like to see the aggregated metrics of those pods in a single view on Grafana.
What would be the Prometheus queries to achieve this?
Service level metrics for CPU, memory & network I/O pressure
If you have Prometheus installed on your Kubernetes cluster, all those statistics are already being collected by Prometheus. There are many good articles about how to install and use Kubernetes + Prometheus together; check one of them as an example.
Here is an example of a query that fetches container memory usage:
container_memory_usage_bytes{image="CONTAINER:VERSION"}
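Similar cAdvisor metrics exist for CPU and network I/O; for example (the label filters are placeholders, and the pod label may be called pod or pod_name depending on your Kubernetes version):

# per-container CPU usage rate, in cores
rate(container_cpu_usage_seconds_total{image="CONTAINER:VERSION"}[5m])

# per-pod network I/O
rate(container_network_receive_bytes_total{pod=~"my-app-.*"}[5m])
rate(container_network_transmit_bytes_total{pod=~"my-app-.*"}[5m])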
Total # of requests, # of requests failed
Those are service-level metrics, and to collect them you need to use a Prometheus exporter created specifically for your service. Check the list of exporters, find the one you need for your service, and follow its instructions.
If you cannot find an exporter for your application, you can write one yourself; the official documentation explains how to do that.
application which doesn"t have Service on top of them would like to see the aggregated metrics of the pods related to that application in a single view on grafana
It is possible to combine any graphs in a single view in Grafana using Dashboards and Panels. Check the official documentation; all of those topics are pretty detailed and easy to understand.
Aggregation can be done by Prometheus itself using its aggregation operators.
All metrics from Kubernetes have labels, so you can group by them:
sum(http_requests_total) by (application, group), where application and group are labels.
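The same approach works for a group of pods that has no Service on top of it: filter by a shared label or a pod name pattern and aggregate, for example (the namespace and pod prefix are placeholders):

# total CPU usage across all replicas of one application
sum(rate(container_cpu_usage_seconds_total{namespace="my-namespace", pod=~"my-app-.*"}[5m]))

# total memory usage of the same group of pods
sum(container_memory_usage_bytes{namespace="my-namespace", pod=~"my-app-.*"})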
Also, here is the official Prometheus instruction on how to add Prometheus to Grafana as a data source.
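Besides adding it through the UI, Grafana can also provision the data source from a file; a minimal sketch, assuming Prometheus is reachable at http://prometheus:9090 (placeholder URL):

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090     # placeholder address of the Prometheus server
    isDefault: true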

Can I add custom metrics to Bluemix Graphite?

I'm testing the Bluemix container service. Since IBM provides a Grafana/Graphite instance for my account to collect CPU/memory stats on all of my containers, I naturally want to add my own statistics.
Is it possible to report custom stats from the Kubernetes cluster or from inside my containers to the IBM Graphite?
According to the documentation, you can (and have to) use the CoreOS prometheus-operator to provide a Prometheus instance in your cluster and then add a Prometheus data source to Grafana.
AFAIK you cannot add the Prometheus data source to the metric.ng.bluemix.net Grafana.
WARNING: the current version of the linked CoreOS repository is for Kubernetes 1.6 (Bluemix runs 1.5). You have to get an older version of the scripts used by CoreOS.

How does Grafana import old data when I restart Prometheus?

I use grafana to show metrics from prometheus.
But when I restart the Prometheus server, Grafana will not draw the data that was scraped before.
How can I make Grafana draw all the data that has been scraped by Prometheus?
I don't think Grafana knows or cares about Prometheus restarts. Are you running Prometheus in Docker? Do you have Prometheus storage set to persistent storage? Grafana will just graph the data it gets from the respective data store.
The correct answer should be addressed here: How to persist data in Prometheus running in a Docker container?
In a nutshell, you need to launch the Prometheus Docker container with its data directory (/prometheus by default) mounted as a persistent volume, so you don't lose data upon restart. I smashed my head over it for a week, and finally got it to work :)
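For reference, a minimal docker-compose sketch that keeps the Prometheus data on a named volume, assuming the official prom/prometheus image (the volume and file names are placeholders):

version: "3"
services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - prometheus-data:/prometheus                        # default TSDB path inside the image
      - ./prometheus.yml:/etc/prometheus/prometheus.yml    # your scrape configuration

volumes:
  prometheus-data:                                         # named volume survives container restarts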