I can set static_configs to provide metric endpoints to Prometheus. Is there a way to dynamically set metric endpoints in Docker Swarm? For example, can we provide some label in the docker-compose.yaml file which helps Prometheus auto-discover the metrics endpoints?
myApp:
  image: ...
  labels:
    prom/scrape: true # something like this
    prom/port: 3000
  ....
Prometheus has no native service discovery support for Docker Swarm (unlike, for example, Kubernetes service discovery).
However, for auto-discovering metric endpoints in Docker Swarm, you can use the generic file-based service discovery mechanism. It works with a file that contains the desired metric endpoints. Prometheus watches this file on disk and applies any changes dynamically, which means you can update the file at runtime and Prometheus will immediately pick up the change.
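As a minimal sketch of what this can look like (the job name, file path, target addresses and labels are placeholders), you point a scrape job at a watched targets file, and the targets file itself can be YAML or JSON:

# prometheus.yml - scrape job using file-based service discovery (sketch)
scrape_configs:
  - job_name: 'swarm-services'              # illustrative job name
    file_sd_configs:
      - files:
          - /etc/prometheus/targets/*.yml   # Prometheus watches these files for changes

# /etc/prometheus/targets/myapp.yml - written/updated by your discovery tool
- targets: ['10.0.0.5:3000', '10.0.0.6:3000']   # task IPs and ports (placeholders)
  labels:
    service: myApp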
There is a file service discovery integration for Docker Swarm named prometheus-swarm-discovery. This tool should be able to dynamically write the file that is used by Prometheus file service discovery, so you don't have to implement this logic yourself.
We have a central monitoring cluster that monitors different k8s clusters (running various microservices).
Currently we've deployed Prometheus using manifests, but we plan to move to the Prometheus Operator.
My question is: is service discovery possible for Prometheus in this kind of a setup? Will I be able to annotate my pods?
Of course, you'll be able to do service discovery with the Prometheus operator for Kubernetes.
However, it does not work as it does with a standalone Prometheus server and the kubernetes_sd_config configuration.
With the operator, service discovery works through a custom resource called ServiceMonitor. This resource uses label selectors that target Services carrying specific labels. You can find an example in the official GitHub repository.
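For illustration, a minimal ServiceMonitor could look like the sketch below (the names, labels and port are illustrative; the release label is an assumption that must match the serviceMonitorSelector of your Prometheus resource):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp
  labels:
    release: prometheus        # assumption: matches the operator's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: myapp               # selects Services carrying this label
  endpoints:
    - port: metrics            # named port on the Service exposing /metrics
      interval: 30s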
I have to build a monitoring solution using Prometheus and Grafana for a service built with React (front end) + Node.js + Db2 (containerized). I have no idea where to start; can someone suggest resources for learning? Thank you.
First of all, you need to install Prometheus and Grafana in your Kubernetes cluster following the instructions given for each:
Prometheus: https://prometheus.io/docs/prometheus/latest/installation/
Grafana: https://grafana.com/docs/grafana/latest/installation/
Next, you need to understand that Prometheus is a pull-based metrics collection system. It retrieves metrics from configured targets (endpoints) at given intervals, stores them, and makes them available for querying.
You can set up a working monitoring system by implementing the steps below:
Instrument your application code so that Prometheus can scrape metrics from it -
For this, you need to add instrumentation to the code via one of the supported Prometheus client libraries.
Configure Prometheus to scrape the metrics exposed by the service - Prometheus supports a K8s custom resource named ServiceMonitor, introduced by the Prometheus Operator, that can be used to configure Prometheus to scrape the metrics defined in step 1.
Observe the scraped metrics - Next, you can observe the defined metrics in either the Prometheus UI or the Grafana UI by adding Prometheus as a Grafana data source (see the sketch below).
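As a sketch of that last step, Grafana can be pointed at Prometheus through a data source provisioning file (the URL below assumes an operator-managed Prometheus Service in the monitoring namespace and is only an example):

# Grafana data source provisioning file (sketch)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-operated.monitoring.svc:9090   # assumption: operator-managed Prometheus service
    isDefault: true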
I installed prometheus-adapter with helm.
Now I don't know how to configure prometheus-adapter so that my Kubernetes cluster can communicate with an external server where Prometheus is installed.
Where and how can I connect prometheus-adapter to Prometheus?
I want to use data from Prometheus for my external metrics in Kubernetes.
First, you'll need to deploy the Prometheus Operator.
This walkthrough assumes that Prometheus is deployed in the prom namespace. Most of the sample commands and files are namespace-agnostic, but there are a few commands or pieces of configuration that rely on that namespace. If you're using a different namespace, simply substitute that in for prom when it appears.
Note that if you are deploying on a non-x86_64 (amd64) platform, you'll need to change the image field in the Deployment to be the appropriate image for your platform.
The default adapter configuration should work with a standard Prometheus Operator setup, but if you've got custom relabelling rules, or your labels weren't exactly namespace and pod, you may need to edit the configuration in the adapter's ConfigMap. The configuration walkthrough provides an overview of how configuration works.
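For reference, a custom rule in the adapter's configuration typically looks like the sketch below, in the spirit of the upstream walkthrough (the series query and naming are only an example):

rules:
  - seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
    resources:
      overrides:
        namespace: {resource: "namespace"}   # map the namespace label to the namespace resource
        pod: {resource: "pod"}               # map the pod label to the pod resource
    name:
      matches: "^(.*)_total$"
      as: "${1}_per_second"
    metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'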
Make sure that you have registered the API with the API aggregator (part of the main Kubernetes API server).
Try fetching the discovery information for it:
$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
Since you've set up Prometheus to collect your app's metrics, you should see a pods/http_request resource show up. This represents the http_requests_total metric, converted into a rate and aggregated to have one datapoint per pod. Notice that this is the same API a HorizontalPodAutoscaler would use.
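A HorizontalPodAutoscaler consuming that custom metric might look like the following sketch (the Deployment name and threshold are illustrative; the metric name must match what the discovery call above returns):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: sample-app                   # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_request         # the pods/http_request resource exposed by the adapter
        target:
          type: AverageValue
          averageValue: 500m         # scale out above 0.5 requests/s per pod (example threshold)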
The API is registered as custom.metrics.k8s.io/v1beta1, and you can find more information about aggregation at Concepts: Aggregation.
You can find more information in this walkthrough.
Let me know if it helps.
If you just want prometheus-adapter to communicate with Prometheus, you need to provide the Prometheus service URL to prometheus-adapter so that it knows where to grab the metrics from.
The default Prometheus service URL is http://prometheus.svc:9090. You need to figure out what your Prometheus service URL is.
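With the Helm chart, this is typically done through the chart values; a sketch, assuming the chart exposes prometheus.url and prometheus.port and that your external Prometheus is reachable at the placeholder address shown:

# values.yaml for the prometheus-adapter chart (sketch; URL is a placeholder)
prometheus:
  url: http://prometheus.example.internal   # external Prometheus server (assumption)
  port: 9090
  path: ""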
I have two Kubernetes clusters representing dev and staging environments.
Separately, I am also deploying a custom DevOps dashboard which will be used to monitor these two clusters. On this dashboard I will need to show information such as:
RAM/HD Space/CPU usage of each deployed Pod in each environment
Pod health (as in if it has too many container restarts etc)
Pod uptime
All these stats have to be at the cluster level and preferably also per namespace. As in, if I query for a particular namespace, I have to get all the resource usage of that namespace.
So the webservice layer of my dashboard will send a service request to the master node of my respective cluster in order to fetch this information.
Another thing I need is to implement real time notifications in my DevOps dashboard. Every time a container fails, I need to catch that event and notify relevant personnel.
I have been reading around and two things that pop up a lot are Prometheus and Metric Server. Do I need both or will one do? I set up Prometheus on a local cluster but I can't find any endpoints it exposes which could be called by my dashboard service. I'm also trying to set up Prometheus AlertManager but so far it hasn't worked as expected. Trying to fix it now. Just wanted to check if these technologies have the capabilities to meet my requirements.
Thanks!
I don't know why you are considering your own custom monitoring system. Prometheus operator provides all the functionality that you mentioned.
You will end up with just your own Grafana dashboard containing all the required information.
If you need custom notifications, you can set them up in Alertmanager by creating the correct prometheusrules.monitoring.coreos.com resources; you can find a lot of preconfigured PrometheusRules in kubernetes-mixin.
Using labels and namespaces in Alertmanager, you can set up a route that notifies the person responsible for a given deployment.
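For example, a PrometheusRule firing on frequent container restarts might look like this sketch (the release label is an assumption that must match your operator's ruleSelector, and the threshold is illustrative; your Alertmanager routes would then match on labels such as severity or namespace):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pod-restart-alerts
  labels:
    release: prometheus                      # assumption: matches the operator's ruleSelector
spec:
  groups:
    - name: pod.rules
      rules:
        - alert: PodRestartingTooOften
          expr: increase(kube_pod_container_status_restarts_total[1h]) > 5
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Pod {{ $labels.pod }} in {{ $labels.namespace }} is restarting too often"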
Do I need both or will one do? Yes, you need both: Prometheus collects and aggregates metrics, while Metrics Server exposes metrics from your cluster nodes for Prometheus to scrape.
If you have problems with Prometheus, Alertmanager and so on, consider using the Helm chart as an entry point.
Prometheus + Grafana are a pretty standard setup.
Installing kube-prometheus or prometheus-operator via Helm will give you Grafana, Alertmanager, node-exporter and kube-state-metrics by default, all set up for Kubernetes metrics.
Configure Alertmanager to do something with the alerts. SMTP is usually the first thing set up, but I would recommend some sort of event manager if this is a service people need to rely on.
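A minimal Alertmanager configuration for e-mail notifications might look like this sketch (the SMTP host and addresses are placeholders):

global:
  smtp_smarthost: 'smtp.example.com:587'     # placeholder SMTP relay
  smtp_from: 'alertmanager@example.com'
route:
  receiver: team-email
  group_by: ['alertname', 'namespace']
receivers:
  - name: team-email
    email_configs:
      - to: 'oncall@example.com'             # placeholder recipient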
Although a dashboard isn't part of your requirements, this will inform how you can connect to Prometheus as a data source. There is documentation on adding a Prometheus data source to Grafana.
There are a number of prebuilt charts available to add to Grafana. There are some charts to visualise alertmanager too.
Your external service won't be scraping the metrics directly; it will be querying the collected data stored in Prometheus inside your cluster. To access the API externally you will need to set up an external path to the Prometheus service. This can be configured via an ingress controller in the Helm deployment:
prometheus.ingress.enabled: true
You can do the same for the Alertmanager API and Grafana if needed:
alertmanager.ingress.enabled: true
grafana.ingress.enabled: true
You could use Grafana outside the cluster as your dashboard via the same prometheus ingress if it proves useful.
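Expressed as structured Helm values, those settings could look roughly like this (the hostnames are placeholders and the exact keys depend on the chart version you deploy):

prometheus:
  ingress:
    enabled: true
    hosts:
      - prometheus.example.com       # placeholder hostname
alertmanager:
  ingress:
    enabled: true
    hosts:
      - alertmanager.example.com     # placeholder hostname
grafana:
  ingress:
    enabled: true
    hosts:
      - grafana.example.com          # placeholder hostname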
We deploy on Kubernetes and our project is based on Spring Cloud. In a multi-node deployment the services register with Eureka using their default hostnames: the gateway is deployed on node A and the config service on node B, and they couldn't reach each other through Eureka. I changed to eureka.instance.prefer-ip-address: true, but then they can only reach each other when they run on the same host; the registration does not use the Kubernetes cluster IP. I want to know how services can access each other in Kubernetes.
In version 7-201712-EA of Activiti Cloud we provided an example using services running the Netflix libraries in Kubernetes - the stable GitHub tags and Docker images are available to refer to. We approached it by creating a Kubernetes Service for each component and getting the component to register with Eureka using the Kubernetes Service name.
To make sure the component declared the correct service name to Eureka, we set eureka.instance.hostname in the component, which can be set in the Deployment YAML by specifying an environment variable or using the default environment variable EUREKA_INSTANCE_HOSTNAME. We also kept things simple by using the same port for the Java app in the Pod and for the Service. Again, this can be made to match by setting the port in the Pod spec and passing the SERVER_PORT environment variable to the Spring Boot app.
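A Deployment snippet along those lines might look like this sketch (the image, names and port are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: config-service
spec:
  selector:
    matchLabels:
      app: config-service
  template:
    metadata:
      labels:
        app: config-service
    spec:
      containers:
        - name: config-service
          image: myorg/config-service:latest    # placeholder image
          ports:
            - containerPort: 8080
          env:
            - name: EUREKA_INSTANCE_HOSTNAME
              value: config-service             # matches the Kubernetes Service name
            - name: SERVER_PORT
              value: "8080"                     # same port in the Pod and the Service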
Check the spring-cloud-kubernetes project