I am using haproxy_exporter with Prometheus; it runs on its default port, 9101.
After configuring the files I am not able to get it running on the default port.
Config file for HAProxy:
frontend frontend
    bind :1234
    use_backend backend

backend backend
    server server 0.0.0.0:9000

frontend monitoring
    bind :1235
    no log
    stats uri /
    stats enable
Config file for Prometheus:
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    monitor: 'codelab-monitor'

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'production'
    static_configs:
      - targets: ['localhost:8080', 'localhost:8081']
        labels:
          group: 'production'

  - job_name: 'canary'
    static_configs:
      - targets: ['localhost:8082']
        labels:
          group: 'canary'

  - job_name: 'test'
    static_configs:
      - targets: ['localhost:9091']

  - job_name: 'test1'
    static_configs:
      - targets: ['localhost:9091']

  - job_name: 'test2'
    static_configs:
      - targets: ['localhost:9091']

  - job_name: 'haproxy'
    static_configs:
      - targets: ['localhost:9188']
Please, can anyone help me out with this?
You should not set up stats on a frontend, but on a listener (keyword listen, not frontend):
listen monitoring
    mode http
    bind *:1235
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
    stats auth username:password
I strongly recommend that you also use a username/password to access your stats.
Finally, you can scrape data from HAProxy with haproxy_exporter using this command:
haproxy_exporter -haproxy.scrape-uri="http://username:password@<haproxy-dns>:1235/?stats;csv"
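Before pointing the exporter at it, you can check that the stats CSV endpoint itself answers (same placeholder host and credentials as above):
curl -u username:password 'http://<haproxy-dns>:1235/?stats;csv'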
If everything is fine with your setup, you should be able to query the haproxy exporter with this curl:
curl http://localhost:9101/metrics
And the output should contain:
haproxy_up 1
If the output is haproxy_up 0, then there is a communication issue between HAProxy and the haproxy_exporter; double-check the -haproxy.scrape-uri value.
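Also note that the scrape config in the question points Prometheus at localhost:9188, while the exporter listens on 9101 by default; assuming the default port, a matching scrape job would look like this:
  - job_name: 'haproxy'
    static_configs:
      - targets: ['localhost:9101']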
Related
I am installing kube-prometheus-stack with Helm and I am adding some custom scrape configuration to Prometheus which requires authentication. I need to pass basic_auth with username and password in the values.yaml file.
The thing is that I need to commit the values.yaml file to a repo, so I am wondering how I can have the username and password set in the values file, maybe from a secret in Kubernetes or some other way?
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: myjob
        scrape_interval: 20s
        metrics_path: /metrics
        static_configs:
          - targets:
              - myservice.default.svc.cluster.local:80
        basic_auth:
          username: prometheus
          password: prom123456
The scrape config supports a password_file parameter, so you can mount your own secret via volumes and volumeMounts:
Disclaimer: I haven't tested this myself, as I am not using kube-prometheus-stack, but I guess something like this should work:
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: myjob
        scrape_interval: 20s
        metrics_path: /metrics
        static_configs:
          - targets:
              - myservice.default.svc.cluster.local:80
        basic_auth:
          username: prometheus
          # the secret is mounted as a directory, so point at the key inside it
          password_file: /etc/scrape_passwordfile/password
    # Additional volumes on the output StatefulSet definition.
    volumes:
      - name: scrape-passwordfile   # Kubernetes names cannot contain underscores
        secret:
          secretName: scrape-passwordfile
          optional: false
    # Additional VolumeMounts on the output StatefulSet definition.
    volumeMounts:
      - name: scrape-passwordfile
        mountPath: "/etc/scrape_passwordfile"
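For the sketch above to work, the referenced secret needs to exist with a key named password; it could be created with something like this (secret name, key and namespace are my assumptions, not chart defaults):
kubectl create secret generic scrape-passwordfile --from-literal=password='prom123456' --namespace monitoring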
Another option is to ditch additionalScrapeConfigs and use additionalScrapeConfigsSecret to store the whole config inside a secret:
## If additional scrape configurations are already deployed in a single secret file you can use this section.
## Expected values are the secret name and key
## Cannot be used with additionalScrapeConfigs
additionalScrapeConfigsSecret: {}
# enabled: false
# name:
# key:
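As a rough sketch of that alternative (the secret name, key and file name are placeholders of my choosing), put the whole scrape config into a file, store it in a secret, and reference it from the values:
kubectl create secret generic additional-scrape-configs --from-file=additional-scrape-configs.yaml --namespace monitoring

additionalScrapeConfigsSecret:
  enabled: true
  name: additional-scrape-configs
  key: additional-scrape-configs.yaml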
I want to monitor a Spring Boot microservices application with about 20 microservices, running on Docker Compose, using Prometheus and Grafana.
What is the best approach:
1- Having one job with multiple targets for each microservice?
scrape_configs:
  - job_name: 'services-job'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['service-one:8080']
        labels:
          group: 'service-one'
      - targets: ['service-two:8081']
        labels:
          group: 'service-two'
2- Having multiple jobs with single target for each service?
scrape_configs:
  - job_name: 'service-one-job'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['service-one:8080']
        labels:
          group: 'service-one'

  - job_name: 'service-two-job'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['service-two:8081']
        labels:
          group: 'service-two'
The way you group your targets by job has nothing to do with the number of endpoints to scrape.
You need to group all the targets with the same purpose in the same job. That's exactly what the documentation says:
A collection of instances with the same purpose, a process replicated for scalability or reliability for example, is called a job.
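For example, with the single-job layout from option 1 you can still tell the services apart in queries by the group label attached to each target; a rough sketch, assuming the usual Spring Boot Actuator/Micrometer request metric name:
sum by (group) (rate(http_server_requests_seconds_count{job="services-job"}[5m]))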
I am currently working on scraping metrics from a WebLogic server using the WebLogic Monitoring Exporter. I am trying to display these metrics using Prometheus. My prometheus.yml file contents are:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'wls-exporter'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    metrics_path: '/wls-exporter/metrics'
    static_configs:
      - targets: ['localhost:7001']
    basic_auth:
      username: 'weblogic'
      password: 'password1'
Now, whenever I execute prometheus.exe, nothing happens.
So what am I doing wrong here?
PS: I am on Windows 7.
Based on your last log, I suggest trying to run Prometheus with the --storage.tsdb.no-lockfile flag.
I had several cases on Windows 7 where the data folder got corrupted and Prometheus would not start. When I used the above flag I had no issues running Prometheus on Windows 7 or Windows 10.
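On Windows that would look something like this (the config file name is assumed; flag syntax applies to Prometheus 2.x):
prometheus.exe --config.file=prometheus.yml --storage.tsdb.no-lockfile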
I have Prometheus with node-exporter, cAdvisor and Grafana on the same instance.
I have other instances with node-exporter and cAdvisor collecting metrics for Grafana.
Now I have created a Grafana template that accepts the instance name.
As we have 2 instances here, the template shows the following in the dropdown:
the IP address of the second instance
node-exporter in the case of the first instance
So when selecting the instance with the IP it works great, but in the case of the instance showing the name node-exporter it is not working. It works if I manually pass cadvisor to the query.
Here is the query:
count(container_last_seen{instance=~"$server:.*",image!=""})
Here is the prometheus.yml file where all the targets are set. As node-exporter runs in the same instance where Prometheus is, I have used localhost there. Please check below.
prometheus.yml
global:
  scrape_interval: 5s
  external_labels:
    monitor: 'my-monitor'

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']

  - job_name: 'lab2'
    static_configs:
      - targets: ['52.32.2.X:9100']

  - job_name: 'cadvisor'
    static_configs:
      - targets: ['52.32.2.X:8080', 'cadvisor:8080']
If I try to edit the targets and use localhost instead of node-exporter, it does not even show up in the dropdown.
The node selection is working well for the host metrics but not for the container metrics.
NOTE: It is working for the containers of the instance whose IP is shown in the dropdown, but not for the host that does not show an IP.
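One way to see the mismatch: the instance label defaults to the scrape target address, so the local host appears as node-exporter:9100 in one job and cadvisor:8080 in the other, and the $server:.* regex can only match one of them. A speculative, untested sketch that presents the local cAdvisor target under the same host name the node-exporter job uses:
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['52.32.2.X:8080', 'cadvisor:8080']
    relabel_configs:
      # hypothetical: rewrite only the local cadvisor target so that
      # its instance label shares the host part with node-exporter
      - source_labels: [__address__]
        regex: 'cadvisor:(.*)'
        target_label: instance
        replacement: 'node-exporter:${1}'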
docker-compose.yml:
This is the docker-compose file to run the Prometheus, node-exporter and Alertmanager services. All the services are running great. Even the health status in the Targets menu of Prometheus shows OK.
version: '2'
services:
  prometheus:
    image: prom/prometheus
    privileged: true
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./alertmanger/alert.rules:/alert.rules
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9090:9090'

  node-exporter:
    image: prom/node-exporter
    ports:
      - '9100:9100'

  alertmanager:
    image: prom/alertmanager
    privileged: true
    volumes:
      - ./alertmanager/alertmanager.yml:/alertmanager.yml
    command:
      - '--config.file=/alertmanager.yml'
    ports:
      - '9093:9093'
prometheus.yml
This is the Prometheus config file with the scrape targets and the Alertmanager target set. The Alertmanager target URL is working fine.
global:
  scrape_interval: 5s
  external_labels:
    monitor: 'my-monitor'

# this is where I have simple alert rules
rule_files:
  - ./alertmanager/alert.rules

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']

alerting:
  alertmanagers:
    - static_configs:
        - targets: ['some-ip:9093']
alert.rules:
Just a simple alert rule to fire an alert when a service is down:
ALERT service_down
IF up == 0
alertmanager.yml
This is to send a message to Slack when an alert fires.
global:
  slack_api_url: 'https://api.slack.com/apps/A90S3Q753'

route:
  receiver: 'slack'

receivers:
  - name: 'slack'
    slack_configs:
      - send_resolved: true
        username: 'tara gurung'
        channel: '#general'
        api_url: 'https://hooks.slack.com/services/T52GRFN3F/B90NMV1U2/QKj1pZu3ZVY0QONyI5sfsdf'
Problems:
All the containers are working fine, yet I am not able to figure out the exact problem. What am I really missing? Checking the Alerts page in Prometheus shows:
Alerts
No alerting rules defined
Your ./alertmanager/alert.rules file is not included in your docker config, so it is not available in the container. You need to add it to the prometheus service:
prometheus:
  image: prom/prometheus
  privileged: true
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
    - ./alertmanager/alert.rules:/alertmanager/alert.rules
  command:
    - '--config.file=/etc/prometheus/prometheus.yml'
  ports:
    - '9090:9090'
And probably give an absolute path inside prometheus.yml:
rule_files:
  - "/alertmanager/alert.rules"
You also need to make sure your alerting rules are valid. Please see the Prometheus docs for details and examples. Your alert.rules file should look something like this:
groups:
  - name: example
    rules:
      # Alert for any instance that is unreachable for >5 minutes.
      - alert: InstanceDown
        expr: up == 0
        for: 5m
Once you have multiple files, it may be better to add the entire directory as a volume rather than individual files.
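Before restarting, you can also validate the rules file with promtool (using the path as mounted above; run it wherever the file is available):
promtool check rules /alertmanager/alert.rules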
If you need answers to this question, see the explanation at this link:
How to make alert rules visible on Prometheus User Interface?
Your alert rules inside the prometheus.yml should look like this:
rule_files:
  - "/etc/prometheus/alert.rules.yml"
You need to stop the Alertmanager and Prometheus containers and run this:
docker run -d --name prometheus_ops -p 9191:9090 -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml -v $(pwd)/alert.rules.yml:/etc/prometheus/alert.rules.yml prom/prometheus
Verify that you can see the alert rules config path: exec into the Prometheus container by its container ID and go to /etc/prometheus:
docker exec -it fa99f733f69b sh
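Once the container is up, the loaded rules can also be checked over the HTTP API (using port 9191 as mapped in the docker run command above):
curl http://localhost:9191/api/v1/rules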