How to delete Prometheus & Grafana data manually? - docker-compose

I have a VM with 4 GB of storage. I set up Docker, Prometheus and Grafana with the following config:
prometheus:
  image: prom/prometheus:latest
  container_name: prometheus
  restart: unless-stopped
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
    - prometheus_data:/prometheus
  command:
    - '--config.file=/etc/prometheus/prometheus.yml'
    - '--storage.tsdb.path=/prometheus'
    - '--web.console.libraries=/etc/prometheus/console_libraries'
    - '--web.console.templates=/etc/prometheus/consoles'
    - '--web.enable-lifecycle'
  expose:
    - 9090
  networks:
    - monitoring
grafana:
  image: grafana/grafana:latest
  ports:
    - 3000:3000
  networks:
    - monitoring
I forgot to add the following flags to the Prometheus entry:
- "--storage.tsdb.retention.time=60m"
- "--storage.tsdb.retention.size=512MB"
so my disk filled up pretty quickly and I cannot perform any operations because the disk is full. I am getting errors like these from different sources:
error: file write error: No space left on device
logger=cleanup t=2022-12-11T14:06:53.402543926Z level=error msg="failed to lock and execute cleanup of old login attempts" error="database or disk is full"
ts=2022-12-11T13:38:35.765Z caller=db.go:908 level=error component=tsdb msg="compaction failed" err="compact head: persist head block: populate block: write chunks: preallocate: no space left on device"
I attempted to delete the prometheus and grafana containers from Docker, but that did not reduce the disk usage.
How can I delete all the previously collected data (I don't care about it) and start over with those retention limits?
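Since all the collected samples live in the prometheus_data named volume, one way to start fresh is to remove that volume and bring the stack back up with the retention flags in place. A minimal sketch: the real volume name is prefixed with your compose project name (check docker volume ls), so <project> below is only a placeholder:

# stop the stack and remove the containers (named volumes are kept by default)
docker-compose down

# find the actual volume name, usually <project>_prometheus_data
docker volume ls

# delete the volume and with it all previously collected TSDB data
docker volume rm <project>_prometheus_data

# alternatively, docker-compose down -v removes all named volumes of the project in one go

# recreate everything, now with the retention flags added to the command: list
docker-compose up -d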

Related

Parse docker-compose.yml with yq to generate documentation

I'm trying to parse my docker-compose file using yq (the Go implementation from https://github.com/mikefarah/yq) to auto-generate documentation in AsciiDoc.
My docker-compose.yml looks fairly simple and does nothing out of the ordinary:
---
version: "3.3"
services:
  # prometheus metrics
  node_exporter:
    image: prom/node-exporter:latest
    container_name: node_exporter
    labels:
      description: Prometheus exporter to monitor system metrics
    restart: always
    command:
      - --path.rootfs=/host
    pid: host
    network_mode: host
    # ports:
    #   - 9100:9100
    # The network_mode: host tells docker to run the container as if it was running on the
    # server itself, so all ports exposed by the container will directly be mapped to the server.
    volumes:
      - /:/host:ro,rslave
      - /etc/timezone:/etc/timezone:ro
  # prometheus metrics
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    restart: always
    expose:
      - 9110
    ports:
      - 9110:8080
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
  # Manage containers
  portainer:
    image: portainer/portainer-ce:alpine
    container_name: portainer
    command: -H unix:///var/run/docker.sock --admin-password-file /tmp/portainer_passwords
    restart: always
    ports:
      - 9990:9000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
      - ./assets/portainer.passwd:/tmp/portainer_passwords
      - /etc/timezone:/etc/timezone:ro
volumes:
  portainer_data:
I want some information for each service. Most important to me are the image, container_name, restart and ports, plus maybe labels -> description, which is a field I use for some documentation (what the respective service actually does).
I don't know how to get these fields combined per service. When I run yq eval '.services.[] | .container_name, .services.[] | .image' $composeFile I first get 3 lines with the container names and then 3 lines with the images:
node_exporter
cadvisor
portainer
prom/node-exporter:latest
gcr.io/cadvisor/cadvisor:latest
portainer/portainer-ce:alpine
This result is not grouped by service. I'd prefer something like this:
node_exporter
prom/node-exporter:latest
cadvisor
gcr.io/cadvisor/cadvisor:latest
portainer
portainer/portainer-ce:alpine
Or, since I want to generate AsciiDoc, the perfect solution would be this:
|node_exporter |prom/node-exporter:latest
|cadvisor |gcr.io/cadvisor/cadvisor:latest
|portainer |portainer/portainer-ce:alpine
This way I can generate the body of an AsciiDoc table with information on my services for my documentation.
Anyone got an idea how I can get yq to work as I intend?
Each .services.[] starts a new iteration. Do it once and extract all you need from there:
yq eval '.services.[] | "|" + .container_name + "| " + .image'
|node_exporter| prom/node-exporter:latest
|cadvisor| gcr.io/cadvisor/cadvisor:latest
|portainer| portainer/portainer-ce:alpine
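If you also want restart, ports and the description label in the same row, the same iteration can simply be extended. A sketch, assuming yq v4: // supplies a fallback for services without labels or ports, and join flattens the ports list:

yq eval '.services.[] | "|" + .container_name + " |" + .image + " |" + .restart + " |" + ((.ports // []) | join(",")) + " |" + (.labels.description // "")' $composeFile

Each | then marks one column of the AsciiDoc table body.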

Is it possible from within a Docker container to use two input(capture) streams simultaneously from the same external sound card?

General Information:
OS: Ubuntu 20.04 LTS
Docker: v20.10.6
docker-compose: v1.29.2
Sound card: Steinberg UR22mkII
Drivers: ALSA
We are developing a web service to record audio signals and display various properties and analyses for anomaly detection. For some analyses larger window sizes are necessary, but some realtime plots are also included. The realtime plots are done via JavaScript (p5 module), while everything else is processed via Flask and Python and visualized via Grafana.
We have now encountered the problem that these two different clients cannot access the same audio device at the same time. On the host system this can be solved with the dsnoop plugin from asoundrc (https://www.alsa-project.org/wiki/Asoundrc), but so far we have not been able to implement this functionality in the Docker environment.
We have already tried to pass the virtual audio devices through via the docker-compose file, but without success (the compose file is enclosed). The ALSA libraries inside the container are also installed correctly. We suspect that it has something to do with the setup of the internal Docker environment, but are stuck at this point.
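For reference, a typical dsnoop definition looks roughly like the following. This is only a sketch: the hw:1,0 card index and the idea of bind-mounting the file into the container as /etc/asound.conf are assumptions, not a verified setup.

# asound.conf -- shared capture device via the dsnoop plugin (card index assumed)
pcm.dsnooped {
    type dsnoop
    ipc_key 1025
    slave {
        pcm "hw:1,0"    # Steinberg UR22mkII; adjust card/device numbers
    }
}

# in docker-compose.yml the file would be mounted into the webapp container, e.g.:
#   volumes:
#     - ./asound.conf:/etc/asound.conf:ro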
We are grateful for any tips or hints!
Docker-Compose File:
version: "3.8"
services:
webapp:
build: .
restart: always
depends_on:
- influxdb
- grafana
ports:
- 5000:5000
volumes:
- ./:/aad
devices:
- /dev/snd:/dev/snd
environment:
# output gets written to the docker-compse console without buffering it
- PYTHONUNBUFFERED=1
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "10"
influxdb:
image: influxdb
restart: always
ports:
- 8086:8086
volumes:
# mount volumes to store data and configuration files. Dirs are createt if necessary
- ./influxdb/data:/var/lib/influxdb2
- ./influxdb/config:/etc/influxdb2
# mount script to be executed (only) after initial setup is done
- ./influxdb/scripts:/docker-entrypoint-initdb.d
environment:
# setup of database is only executet if no boltdb file is found in the specified path so the container with influx can be rebooted same as once setup
DOCKER_INFLUXDB_INIT_USERNAME: aad
DOCKER_INFLUXDB_INIT_PASSWORD: .......
DOCKER_INFLUXDB_INIT_ORG: aaddev
DOCKER_INFLUXDB_INIT_BUCKET: training
DOCKER_INFLUXDB_INIT_ADMIN_TOKEN: .......
DOCKER_INFLUXDB_INIT_MODE: setup
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "10"
grafana:
image: grafana/grafana
# port mapping only need for easieser debugging
ports:
- 3000:3000
restart: always
depends_on:
- influxdb
volumes:
- grafana-storage:/var/lib/grafana
- ./grafana/provisioning:/etc/grafana/provisioning
environment:
GF_SECURITY_ADMIN_USER: aad
GF_SECURITY_ADMIN_PASSWORD: .......
GF_PATHS_CONFIG: /etc/grafana/grafana.ini
GF_USERS_DEFAULT_THEME: light
GF_AUTH_ANONYMOUS_ENABLED: "True"
GF_SECURITY_ALLOW_EMBEDDING: "True"
GF_AUTH_ANONYMOUS_ORG_NAME: Main Org.
GF_AUTH_ANONYMOUS_ORG_ROLE: Viewer
GF_DASHBOARDS_MIN_REFRESH_INTERVAL: 1s
GF_AUTH_BASIC_ENABLED: "True"
GF_DISABLE_LOGIN_FORM: "True"
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "10"
volumes:
grafana-storage:
Python Environment:
name: aad
channels:
  - anaconda
  - conda-forge
  - defaults
dependencies:
  - portaudio=19.6.0=h7b6447c_4
  - flask=1.1.2=pyhd3eb1b0_0
  - librosa=0.8.0=pyh9f0ad1d_0
  - matplotlib=3.3.4=py38h06a4308_0
  - numpy=1.20.1=py38h93e21f0_0
  - pandas=1.2.4=py38h2531618_0
  - pip=21.0.1=py38h06a4308_0
  - pyaudio=0.2.11=py38h7b6447c_2
  - python=3.8.8=hdb3f193_5
  - scikit-learn=0.24.1=py38ha9443f7_0
  - scipy=1.6.2=py38had2a1c9_1
  - tqdm=4.59.0=pyhd3eb1b0_1
  - werkzeug=1.0.1=pyhd3eb1b0_0
  - pip:
      - influxdb-client==1.16.0
      - rx==3.2.0

Changing max message size for RabbitMQ

I am currently using RabbitMQ version 3.7 on my project.
I'm setting up my RabbitMQ image in a docker-compose file.
My question is how do I change the default maximum message size of 512 MB to ~2 GB.
rabbit:
  container_name: rabbitMQ
  hostname: rabbit
  image: "rabbitmq:3.7-management"
  environment:
    - RABBITMQ_DEFAULT_USER=admin
    - RABBITMQ_DEFAULT_PASS=mypass
  ports:
    - "15672:15672"
    # - "5672:5672"
  volumes:
    - ./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
From what I gathered I needed to change a property in rabbitmq.conf, but everything I tried had no effect on the allowed size in production.
Is it even possible to change this value?
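For what it's worth: newer RabbitMQ versions (3.8+) expose this limit as max_message_size in rabbitmq.conf; as far as I can tell the broker caps it at 512 MiB, so ~2 GB is not reachable via this setting alone. A sketch under those assumptions:

# rabbitmq.conf -- max_message_size requires rabbitmq:3.8 or later (not read by 3.7)
# value in bytes; 536870912 = 512 MiB, which appears to be the upper bound
max_message_size = 536870912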

How to use fluent-bit with Docker-compose

I want to use the fluent-bit docker image to help me persist the ephemeral docker container logs to a location on my host (and later use it to ship logs elsewhere).
I am facing issues such as:
Cannot start service clamav: failed to initialize logging driver: dial tcp 127.0.0.1:24224: connect: connection refused
I have read a number of posts, including configuring fluentbit with docker, but I'm still at a loss.
My docker-compose setup is made up of nginx, our app, Keycloak, Elasticsearch and ClamAV. I have added fluent-bit and made it start first via depends_on. I changed the other services to use the fluentd logging driver.
Part of config:
clamav:
  container_name: clamav-app
  image: tiredofit/clamav:latest
  restart: always
  volumes:
    - ./clamav/data:/data
    - ./clamav/logs:/logs
  environment:
    - ZABBIX_HOSTNAME=clamav-app
    - DEFINITIONS_UPDATE_FREQUENCY=60
  networks:
    - iris-network
  expose:
    - "3310"
  depends_on:
    - fluentbit
  logging:
    driver: fluentd
fluentbit:
  container_name: iris-fluent
  image: fluent/fluent-bit:latest
  restart: always
  networks:
    - iris-network
  volumes:
    - ./fluent-bit/etc:/fluent-bit/etc
  ports:
    - "24224:24224"
    - "24224:24224/udp"
I have tried to proxy_pass 24224 to fluentbit in nginx and start nginx first, and that avoided the error for clamav and es, but I get the same error with keycloak.
So how can I configure the services to use the host, or is it that localhost is not the "external" host?
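One thing to keep in mind is that the fluentd logging driver connects from the Docker daemon on the host, not from inside the compose network, so the target defaults to localhost:24224 on the host and fluent-bit must already be listening when the other containers start. A sketch of making that explicit (fluentd-address, fluentd-async-connect and tag are standard fluentd logging-driver options; the values are assumptions):

clamav:
  # ...
  logging:
    driver: fluentd
    options:
      # address as seen from the Docker host, not from inside the compose network
      fluentd-address: localhost:24224
      # don't fail container start-up if fluent-bit is not reachable yet
      fluentd-async-connect: "true"
      tag: clamav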

Docker Swarm error - invalid mount config for type

This is my Docker compose/stack file. When I deploy on a single node, everything works fine, but when I deploy on multiple nodes I get the following error:
invalid mount config for type "bind": bind source path does not exist
version: '3'
services:
  shinyproxy:
    build: /etc/shinyproxy
    deploy:
      replicas: 3
    user: root:root
    hostname: shinyproxy
    image: shinyproxy-example
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 5000:5000
    networks:
      - proxynetwork
  mysql:
    image: mysql
    deploy:
      replicas: 3
    volumes:
      - /mysqldata:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root_password
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: password
    networks:
      - proxynetwork
  keycloak:
    deploy:
      replicas: 3
    image: jboss/keycloak
    volumes:
      - /etc/letsencrypt/live/ds-gym.de/fullchain.pem:/etc/x509/https/tls.crt
      - /etc/letsencrypt/live/ds-gym.de/privkey.pem:/etc/x509/https/tls.key
      #- /theme/govuk-social-providers/:/opt/jboss/keycloak/themes/govuk-social-providers/
    environment:
      - PROXY_ADDRESS_FORWARDING=true
      - KEYCLOAK_USER=myadmin
      - KEYCLOAK_PASSWORD=mypassword
    ports:
      - 8443:8443
    networks:
      - proxynetwork
networks:
  proxynetwork:
    external: true
I understand that the volume paths are expected to exist on every other node too, but I think this is very bad practice, and my other 2 nodes are just workers anyway. How can I solve this problem? Hopefully there is a solution which allows me to keep the volumes, since I use the same file with docker-compose build to build my images.
Can someone help me?
Thank you :-)
If possible, you could restrict this service to the node that has the required host paths using placement constraints. However, I'm guessing that's not an option in this use case.
Host-mounted volumes should really not be used in a swarm deployment, as they would require redundant data in the filesystems across the nodes (all files need to be present on all nodes).
One solution would be to use NFS volumes:
volumes:
  example:
    driver_opts:
      type: "nfs"
      o: "addr=<NFS_SERVER_IP>,nolock,soft,rw"
      device: ":/docker/path/to/configs"
This solution requires you to host an NFS server, though. Also keep in mind that this approach is fine for configs but should not be used for file systems that need to provide high-performance access.
Regarding your question about keeping your docker-compose file the same across environments: while it is technically possible, most modern projects consist of a base compose file plus environment-specific overrides for volumes, networks, images, etc., as sketched below.
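A minimal sketch of that pattern (the file names follow common compose conventions; the contents and the stack name mystack are assumptions):

# docker-compose.yml       -> base definition shared by all environments
# docker-compose.prod.yml  -> swarm-only overrides (NFS volumes, replicas, ...)
# deploy both together:
#   docker stack deploy -c docker-compose.yml -c docker-compose.prod.yml mystack

# docker-compose.prod.yml (sketch)
version: '3'
services:
  mysql:
    volumes:
      - mysqldata:/var/lib/mysql
volumes:
  mysqldata:
    driver_opts:
      type: "nfs"
      o: "addr=<NFS_SERVER_IP>,nolock,soft,rw"
      device: ":/docker/mysql"

With the override in place, the base file stays usable for docker-compose build on a single machine.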
In a swarm your services are spread across your available nodes.
I suppose your "to be mounted" directories exist on the manager node, so pin the services that need them to the manager node like so:
deploy:
  placement:
    constraints:
      - node.role == manager