I can see that Flower is showing the worker count as 1, but in the exporter metrics I see a worker count of zero.
I am following the URL below:
https://github.com/zerok/celery-prometheus-exporter
Here are my docker-compose.yml and prometheus.yml files:
docker-compose.yml
....
  celery-exporter:
    image: zerok/celery-prometheus-exporter
    ports:
      - '8888:8888'
and my prometheus.yml
prometheus.yml
.....
- job_name: celery-exporter
static_configs:
- targets: ['celery-exporter:8888']
Let me know if I have to configure anything in the airflow.cfg file to enable it.
Thanks in advance!
By default, celery-prometheus-exporter connects to redis://redis:6379/0.
But my Airflow instance uses the database named "1", not "0", so I use the environment variable
BROKER_URL=redis://redis:password@172.17.0.1:6379/1
172.17.0.1 is the default docker0 IP address.
Check the name of your Redis database.
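For example, a minimal sketch of passing that broker URL to the exporter in docker-compose.yml (the password, host IP, and database number are assumptions you should adapt to your setup):
  celery-exporter:
    image: zerok/celery-prometheus-exporter
    environment:
      # assumed values: point this at the same Redis database Airflow uses as its broker
      - BROKER_URL=redis://redis:password@172.17.0.1:6379/1
    ports:
      - '8888:8888'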
I am locally deploying a full-stack app via Docker Compose and would like to derive the backend and frontend ports from a single environment variable $PORT. For example, if $PORT = 3000, then the backend port should be 3000 and the frontend port should be 3001. And if $PORT = 4000, then the backend port should be 4000 and the frontend port should be 4001.
For this, I would like to do something like this in my docker-compose.yml:
version: "3"
services:
backend:
...
ports:
- "${PORT}:3000"
frontend:
...
ports:
- "$((PORT + 1)):4200"
This fails with ERROR: Invalid interpolation format for "environment" option in service "frontend". Is there a way to achieve this in Docker Compose?
Compose only supports a limited set of environment variable substitutions: ${VARIABLE}, ${VARIABLE:-default}, ${VARIABLE:?error message}, and the latter two options without colons. You cannot do other substitutions, computation, or shell callouts in Compose.
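For example, these are the supported forms, shown here with the PORT variable from the question (a sketch, not tied to any particular service):
ports:
  - "${PORT}:3000"                    # plain substitution; empty if PORT is unset
  - "${PORT:-3000}:3000"              # fall back to 3000 if PORT is unset or empty
  - "${PORT:?PORT must be set}:3000"  # abort with an error message if PORT is unset or empty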
For this particular case, you can let Docker pick the host port number for you. This is less predictable than the scheme you describe, but it doesn't require any special setup. Instead of using two numbers in ports:, just specify the container port number:
version: '3.8'
services:
  backend:
    ports:
      - '3000'
  frontend:
    ports:
      - '4200'
To find the corresponding host port number, you need to run docker-compose port:
docker-compose port frontend 4200
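This prints the host endpoint Docker assigned, for example something like the following (the actual port is chosen dynamically and will differ):
0.0.0.0:49153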
I don't think it's possible to do that. But why don't you set those port variables in your environment beforehand? For example, run before everything else:
export PORT=3000 #or whatever number you want
export PORT_INC=$(($PORT+1))
And then use them like this:
ports:
  - "$PORT_INC:4200"
I'm developing a project based on the GitHub template dunglas/symfony-docker, to which I want to add a PostgreSQL database.
It seems that my docker-compose.yml file is incorrectly configured, because the communication between PHP and PostgreSQL is malfunctioning.
Indeed, when I try to perform a Symfony migration, Doctrine returns the following error:
password authentication failed for user "postgres"
When I inspect the PHP logs, I notice that PHP is waiting for the database:
php_1 | Still waiting for db to be ready... Or maybe the db is not reachable.
My docker-compose.yml:
version: "3.4"
services:
php:
links:
- database
build:
context: .
target: symfony_php
args:
SYMFONY_VERSION: ${SYMFONY_VERSION:-}
SKELETON: ${SKELETON:-symfony/skeleton}
STABILITY: ${STABILITY:-stable}
restart: unless-stopped
volumes:
- php_socket:/var/run/php
healthcheck:
interval: 10s
timeout: 3s
retries: 3
start_period: 30s
environment:
# Run "composer require symfony/orm-pack" to install and configure Doctrine ORM
DATABASE_URL: ${DATABASE_URL}
# Run "composer require symfony/mercure-bundle" to install and configure the Mercure integration
MERCURE_URL: ${CADDY_MERCURE_URL:-http://caddy/.well-known/mercure}
MERCURE_PUBLIC_URL: https://${SERVER_NAME:-localhost}/.well-known/mercure
MERCURE_JWT_SECRET: ${CADDY_MERCURE_JWT_SECRET:-!ChangeMe!}
caddy:
build:
context: .
target: symfony_caddy
depends_on:
- php
environment:
SERVER_NAME: ${SERVER_NAME:-localhost, caddy:80}
MERCURE_PUBLISHER_JWT_KEY: ${CADDY_MERCURE_JWT_SECRET:-!ChangeMe!}
MERCURE_SUBSCRIBER_JWT_KEY: ${CADDY_MERCURE_JWT_SECRET:-!ChangeMe!}
restart: unless-stopped
volumes:
- php_socket:/var/run/php
- caddy_data:/data
- caddy_config:/config
ports:
# HTTP
- target: 80
published: 80
protocol: tcp
# HTTPS
- target: 443
published: 443
protocol: tcp
# HTTP/3
- target: 443
published: 443
protocol: udp
###> doctrine/doctrine-bundle ###
database:
image: postgres:${POSTGRES_VERSION:-13}-alpine
environment:
POSTGRES_DB: ${POSTGRES_DB:-app}
# You should definitely change the password in production
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-ChangeMe}
POSTGRES_USER: ${POSTGRES_USER:-symfony}
volumes:
- db-data:/var/lib/postgresql/data:rw
# You may use a bind-mounted host directory instead, so that it is harder to accidentally remove the volume and lose all your data!
# - ./docker/db/data:/var/lib/postgresql/data:rw
###< doctrine/doctrine-bundle ###
volumes:
php_socket:
caddy_data:
caddy_config:
###> doctrine/doctrine-bundle ###
db-data:
###< doctrine/doctrine-bundle ###
Extract of my .env file:
POSTGRES_DB=proximityNL
POSTGRES_PASSWORD=postgres
POSTGRES_USER=postgres
DATABASE_URL="postgresql://postgres:postgres@database:5432/proximityNL?serverVersion=13&charset=utf8"
Can you help me?
Best regards.
UPDATE:
Indeed, I understood on Saturday that it was just necessary to remove the orphans:
docker-compose down --remove-orphans --volumes
When running in a container, 127.0.0.1 refers to the container itself. Docker Compose creates a virtual network where each container has its own IP address. You can address the containers by their service names.
So your connection string should point to database:5432 instead of 127.0.0.1:5432, like this:
DATABASE_URL="postgresql://postgres:postgres@database:5432/proximityNL?serverVersion=13&charset=utf8"
You use database because that's the service name of your PostgreSQL container in your Docker Compose file.
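Once the URL uses the service name, you can sanity-check the database from inside the Compose network; a quick sketch using the user and database names from the .env above (the Doctrine command assumes the ORM pack is installed):
docker-compose exec database psql -U postgres -d proximityNL -c 'SELECT 1'
docker-compose exec php bin/console doctrine:query:sql 'SELECT 1'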
In Docker Compose you can reach containers by their service name.
So try to use the service name in your config:
DATABASE_URL="postgresql://postgres:postgres@database:5432/proximityNL?serverVersion=13&charset=utf8"
and maybe add a link between your php and database services:
services:
  php:
    links:
      - database
This is the way I connect a Java app to a MySQL DB.
Docker should map DNS resolution from the Docker Host into your containers.
See Networking in Compose.
Because of that, your DB URL should look like:
"postgresql://postgres:postgres@database:5432/..."
I understood on Saturday that it was just necessary to remove the orphans:
docker-compose down --remove-orphans --volumes
I used this docker-compose file (fairly basic); however, after configuring and building it, when I open http://[server-ip]:9090/targets I get the following information:
speedtest (0/1 up)
Error: Get "http://speedtest:9798/metrics": dial tcp: lookup speedtest on 127.0.0.11:53: no such host
I understand that it can't find that host; it's just that the configuration itself wasn't touched, and it actually looks legit to me:
docker-compose
services:
  speedtest:
    tty: true
    stdin_open: true
    expose:
      - 9798
    ports:
      - 9798:9798
    image: miguelndecarvalho/speedtest-exporter
    restart: always
    networks:
      - back-tier
prometheus.yml
  - job_name: 'speedtest'
    metrics_path: /metrics
    scrape_interval: 5m
    scrape_timeout: 60s # running speedtest needs time to complete
    static_configs:
      - targets: ['speedtest:9798']
Can someone spot the issue? How is speedtest not found on the local DNS server? Everything is exposed and it still doesn't find the right host.
Edit: I have a DNS server configured by dnsmasq.
If Prometheus is bound to the host's network and you're trying to access speedtest on the host's network too, then you should reference speedtest as localhost, not speedtest:
static_configs:
  - targets: ['localhost:9798']
NOTE: Docker Compose only provides DNS resolution for services (i.e. speedtest) within the Compose project. If you were to run Prometheus as one of the Docker Compose services too, then you'd be able to use Compose's DNS resolution to resolve speedtest to the container on port 9798.
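If you'd rather keep scraping speedtest:9798, a minimal sketch of running Prometheus in the same Compose file (assuming your existing back-tier network and a prometheus.yml next to the compose file):
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - 9090:9090
    networks:
      - back-tier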
I have Prometheus with node-exporter, cAdvisor and Grafana on the same instance.
I have other instances with node-exporter and cAdvisor for collecting metrics into Grafana.
Now I have created a Grafana template variable that accepts the instance name.
As we have 2 instances here, the template shows the following in the drop-down:
the IP address of the second instance
node-exporter in the case of the first instance
So when selecting the instance shown with its IP it works great, but in the case of the instance showing with the name node-exporter it's not working. It works if I manually pass cadvisor to the query.
Here is the query:
count(container_last_seen{instance=~"$server:.*",image!=""})
Here is the prometheus.yml file where all the targets are set. As node-exporter runs on the same instance as Prometheus, I have used localhost there. Please check below:
prometheus.yml
global:
  scrape_interval: 5s
  external_labels:
    monitor: 'my-monitor'
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']
  - job_name: 'lab2'
    static_configs:
      - targets: ['52.32.2.X:9100']
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['52.32.2.X:8080','cadvisor:8080']
If I try to edit the targets and use localhost instead of node-exporter, it doesn't even show up in the drop-down at all.
The node selection is working well for the host metrics but not for the container metrics.
NOTE: It is working for the containers whose IP is shown in the drop-down, but not for the host that doesn't show an IP.
docker-compose.yml:
This is the docker-compose file to run the Prometheus, node-exporter and Alertmanager services. All the services are running great. Even the health status in the Targets menu of Prometheus shows OK.
version: '2'
services:
  prometheus:
    image: prom/prometheus
    privileged: true
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./alertmanger/alert.rules:/alert.rules
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9090:9090'
  node-exporter:
    image: prom/node-exporter
    ports:
      - '9100:9100'
  alertmanager:
    image: prom/alertmanager
    privileged: true
    volumes:
      - ./alertmanager/alertmanager.yml:/alertmanager.yml
    command:
      - '--config.file=/alertmanager.yml'
    ports:
      - '9093:9093'
prometheus.yml
This is the Prometheus config file with the scrape targets and the alerting target set. The Alertmanager target URL is working fine.
global:
  scrape_interval: 5s
  external_labels:
    monitor: 'my-monitor'
# this is where I have simple alert rules
rule_files:
  - ./alertmanager/alert.rules
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['some-ip:9093']
alert.rules:
Just a simple alert rule to show an alert when a service is down:
ALERT service_down
IF up == 0
alertmanager.yml
This is to send a message to Slack when an alert fires.
global:
  slack_api_url: 'https://api.slack.com/apps/A90S3Q753'
route:
  receiver: 'slack'
receivers:
  - name: 'slack'
    slack_configs:
      - send_resolved: true
        username: 'tara gurung'
        channel: '#general'
        api_url: 'https://hooks.slack.com/services/T52GRFN3F/B90NMV1U2/QKj1pZu3ZVY0QONyI5sfsdf'
Problems:
All the containers are working fine; I am not able to figure out the exact problem. What am I really missing? Checking the alerts in Prometheus shows:
Alerts
No alerting rules defined
Your ./alertmanager/alert.rules file is not included in your docker config, so it is not available in the container. You need to add it to the prometheus service:
prometheus:
  image: prom/prometheus
  privileged: true
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
    - ./alertmanager/alert.rules:/alertmanager/alert.rules
  command:
    - '--config.file=/etc/prometheus/prometheus.yml'
  ports:
    - '9090:9090'
And probably give an absolute path inside prometheus.yml:
rule_files:
  - "/alertmanager/alert.rules"
You also need to make sure your alerting rules are valid. Please see the Prometheus docs for details and examples. Your alert.rules file should look something like this:
groups:
  - name: example
    rules:
      # Alert for any instance that is unreachable for >5 minutes.
      - alert: InstanceDown
        expr: up == 0
        for: 5m
Once you have multiple files, it may be better to add the entire directory as a volume rather than individual files.
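A sketch of what that could look like, assuming you keep all rule files under ./alertmanager/ on the host (the mount paths are illustrative):
prometheus:
  image: prom/prometheus
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
    - ./alertmanager/:/etc/prometheus/rules/
  command:
    - '--config.file=/etc/prometheus/prometheus.yml'
and in prometheus.yml, since rule_files accepts globs:
rule_files:
  - "/etc/prometheus/rules/*.rules"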
If you need answers to this question, see the explanation at this link:
How to make alert rules visible on Prometheus User Interface?
Your alert rules inside prometheus.yml should look like this:
rule_files:
  - "/etc/prometheus/alert.rules.yml"
You need to stop the Alertmanager and Prometheus containers and run this:
docker run -d --name prometheus_ops -p 9191:9090 -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml -v $(pwd)/alert.rules.yml:/etc/prometheus/alert.rules.yml prom/prometheus
Verify that you can see the alert.rules config path: exec into the Prometheus container by its container ID and go to /etc/prometheus:
docker exec -it fa99f733f69b sh
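From inside the container you can also check that the rules file parses; a sketch, assuming the file was mounted at the path shown above (promtool ships in the prom/prometheus image):
ls /etc/prometheus
promtool check rules /etc/prometheus/alert.rules.yml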