Add a route in docker-compose

I have a VM in the cloud running Docker containers. It has two containers of interest: wireguard and redmine.
Redmine uses LDAP authorization, and the LDAP server sits in a private LAN (behind NAT) that I reach over a WireGuard VPN.
I need to add a route inside the Redmine container so that Redmine can access the private LAN via the WireGuard container.
Right now I do it by hand after the containers start:
docker-compose exec redmine ip route add 192.168.42.0/23 via 172.20.0.50
Could you advise me how to build this into my pipeline?
P.S. The Redmine image already has ENTRYPOINT and CMD directives in its Dockerfile.
version: '3.9'
services:
  wireguard:
    image: linuxserver/wireguard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    volumes:
      - ./wireguard-config:/config
      - /lib/modules:/lib/modules
    networks:
      default:
        ipv4_address: 172.20.0.50
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1 # for clients mode
    restart: unless-stopped
  postgres:
    image: postgres:14.2-alpine
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      - 'POSTGRES_PASSWORD=MySUperSecret'
      - 'POSTGRES_DB=redmine'
  redmine:
    image: redmine:5.0.1-alpine
    cap_add:
      - NET_ADMIN
    volumes:
      - redmine-files:/usr/src/redmine/files
      - ./redmine-plugins:/usr/src/redmine/plugins
      - ./configuration.yml:/usr/src/redmine/config/configuration.yml
    ports:
      - 80:3000
    depends_on:
      - postgres
    environment:
      - 'REDMINE_DB_POSTGRES=postgres'
      - 'REDMINE_DB_DATABASE=redmine'
      - 'REDMINE_DB_PASSWORD=MySUperSecret'
      - 'REDMINE_PLUGINS_MIGRATE=true'
    restart: unless-stopped
networks:
  default:
    ipam:
      config:
        - subnet: 172.20.0.0/24
volumes:
  postgres-data:
  redmine-files:
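
One way to automate the route without rebuilding the image is to override the entrypoint in compose, so the route is added before control passes to the image's own startup script. A minimal sketch, assuming the stock ENTRYPOINT and CMD of the official redmine image (/docker-entrypoint.sh and rails server -b 0.0.0.0; adjust if yours differ):

redmine:
  image: redmine:5.0.1-alpine
  cap_add:
    - NET_ADMIN          # required for ip route add
  entrypoint: /bin/sh
  command:
    - -c
    - |
      # Add the route to the private LAN via the wireguard container,
      # then hand off to the image's original entrypoint/CMD.
      ip route add 192.168.42.0/23 via 172.20.0.50
      exec /docker-entrypoint.sh rails server -b 0.0.0.0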

I solved my problem:
services:
  wireguard:
    image: linuxserver/wireguard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    ports:
      - 3000:3000
    environment:
      - TZ=Europe/Moscow
    volumes:
      - ./wireguard-config:/config
      - /lib/modules:/lib/modules
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1 # for clients mode
    restart: unless-stopped
  postgres:
    image: postgres:14.2-alpine
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      - 'POSTGRES_PASSWORD=MySUperSecret'
      - 'POSTGRES_DB=redmine'
  redmine:
    image: redmine:5.0.2-alpine
    network_mode: service:wireguard
    volumes:
      - redmine-files:/usr/src/redmine/files
      - ./redmine-plugins:/usr/src/redmine/plugins
      - ./configuration.yml:/usr/src/redmine/config/configuration.yml
    # ports:
    #   - 80:3000
    depends_on:
      - postgres
    environment:
      - 'REDMINE_DB_POSTGRES=postgres'
      - 'REDMINE_DB_DATABASE=redmine'
      - 'REDMINE_DB_PASSWORD=MySUperSecret'
      - 'REDMINE_PLUGINS_MIGRATE=true'
    restart: unless-stopped
volumes:
  postgres-data:
  redmine-files:
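
With network_mode: service:wireguard, the redmine container shares the wireguard container's network stack, so any routes WireGuard sets up apply to Redmine as well; that is also why the 3000:3000 port mapping moves to the wireguard service. A quick way to verify, assuming some reachable host in the private LAN such as 192.168.42.10 (hypothetical):

# routes seen by both containers, since they share one network namespace
docker compose exec wireguard ip route
docker compose exec redmine ping -c 1 192.168.42.10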


Container fails to resolve host for curl

I've been working with docker-compose.yml files for about three years now.
In my solution I have 6 containers that reside on one network, defined as a bridge. One of the containers must have access to a specific internet host, which it does without any issue. I'll call this the "original" solution.
My next task is to replicate the original solution on my workstation, ensuring that the replica can't interact with the original. My understanding is that I simply create another network, define it as a bridge, and I should be good to go.
The issue is that the one container in my replica solution is unable to resolve a cURL call to the internet host. The code is nigh on identical between original and replica (I added curl debugging to the replica, which is how I caught the unresolved-host error), and the only difference I can see is that it's on its own network (but that network is bridged, so it should still be able to resolve the internet host).
The YAML files I'm using are as follows:
original-docker-compose.yml
version: '3.8'
services:
  console-mysql:
    container_name: console-mysql
    image: PRIVATE_ECR_ADDRESS/mysql:5.7.34
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - "7306:3306"
    volumes:
      - ./perImageFiles/console-mysql/db:/var/lib/mysql
      - ./perImageFiles/console-mysql/seed:/docker-entrypoint-initdb.d
    env_file:
      - ./.env
    networks:
      - dev-vlt-console
  console-www:
    image: PRIVATE_ECR_ADDRESS/bnoe-console:latest
    container_name: console-www
    env_file:
      - ./.env
    ports:
      - 7080:80
    volumes:
      - ../../console-www:/var/www/secure
      - ./perImageFiles/console/console-nginx.conf:/etc/nginx/conf.d/default.conf
      - ./perImageFiles/console/nginx.conf:/etc/nginx/nginx.conf
      - ./perImageFiles/console/logs-nginx:/var/log/nginx/
      - ./perImageFiles/console/console-www-entrypoint.sh:/console-www-entrypoint.sh
    depends_on:
      - console-www-php
    networks:
      - dev-vlt-console
  console-proc:
    container_name: console-proc
    image: PRIVATE_ECR_ADDRESS/bnoe-processor:latest
    env_file:
      - ./.env
    ports:
      - 7082:9001
    volumes:
      - ../../console-www:/var/www/secure
      - ./perImageFiles/proc/supervisord.conf:/etc/supervisor/conf.d/supervisord.conf
      - ./perImageFiles/dicom/storage:/var/lib/orthanc/db-v6
      - ../../console-www/scripts/docker/console-proc-docker-run.sh:/console-proc-docker-run.sh
    entrypoint: /console-proc-docker-run.sh
    depends_on:
      - console-mysql
    networks:
      - dev-vlt-console
  console-www-php:
    container_name: console-www-php
    image: PRIVATE_ECR_ADDRESS/bnoe-console-php:latest
    volumes:
      - ../../console-www:/var/www/secure
      - ./perImageFiles/console/php-log.conf:/usr/local/etc/php-fpm.d/zz-log.conf
      - ./perImageFiles/console/www-php.ini:/usr/local/etc/php/conf.d/www-php.ini
      - ./perImageFiles/console/console-php-entrypoint.sh:/console-php-entrypoint.sh
    env_file:
      - ./.env
      - ./.setupenv
    entrypoint: /console-php-entrypoint.sh
    depends_on:
      - console-mysql
    networks:
      - dev-vlt-console
  console-dicom:
    image: jodogne/orthanc-plugins:1.9.7
    container_name: console-dicom
    depends_on: [console-mysql]
    ports: [7084:8042, 10401:10401]
    volumes:
      - ./perImageFiles/dicom/orthanc.json:/etc/orthanc/orthanc.json
      - ./perImageFiles/dicom/storage:/var/lib/orthanc/db-v6
      - ./perImageFiles/dicom/plugins:/usr/share/orthanc/plugins
      - ./perImageFiles/dicom/lua-scripts:/usr/share/orthanc/lua-scripts
    env_file:
      - ./.env
    networks:
      - dev-vlt-console
  console-vpacs:
    image: PRIVATE_ECR_ADDRESS/vpacs:latest
    container_name: console-vpacs
    depends_on: [console-mysql]
    ports: ["7085:8042"]
    secrets:
      - vpacs-orthanc.json
    networks:
      - dev-vlt-console
secrets:
  console-orthanc.json:
    file: ./perImageFiles/dicom/orthanc.json
  vpacs-orthanc.json:
    file: ./perImageFiles/vpacs/orthanc.json
networks:
  dev-vlt-console:
    name: dev-vlt-console
    driver: bridge
replica-docker-compose.yml
version: '3.8'
services:
  console-mysql-dev3:
    container_name: console-mysql-dev3
    image: PRIVATE_ECR_ADDRESS/mysql:5.7.34
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - "60001:3306"
    volumes:
      - ./perImageFiles/console-mysql/db:/var/lib/mysql
      - ./perImageFiles/console-mysql/seed:/docker-entrypoint-initdb.d
    env_file:
      - ./.env
    networks:
      - dev-vlt-console-dev3
  console-www-dev3:
    image: PRIVATE_ECR_ADDRESS/bnoe-console:latest
    container_name: console-www-dev3
    env_file:
      - ./.env
    ports:
      - 60002:80
    volumes:
      - ../console-www:/var/www/secure
      - ./perImageFiles/console/console-nginx.conf:/etc/nginx/conf.d/default.conf
      - ./perImageFiles/console/nginx.conf:/etc/nginx/nginx.conf
      - ./perImageFiles/console/logs-nginx:/var/log/nginx/
      - ./perImageFiles/console/console-www-entrypoint.sh:/console-www-entrypoint.sh
    depends_on:
      - console-www-php-dev3
    networks:
      - dev-vlt-console-dev3
  console-proc-dev3:
    container_name: console-proc-dev3
    image: PRIVATE_ECR_ADDRESS/bnoe-processor:latest
    env_file:
      - ./.env
    ports:
      - 60003:9001
    volumes:
      - ../console-www:/var/www/secure
      - ./perImageFiles/proc/supervisord.conf:/etc/supervisor/conf.d/supervisord.conf
      - ./perImageFiles/dicom/storage:/var/lib/orthanc/db-v6
      - ../console-www/scripts/docker/console-proc-docker-run.sh:/console-proc-docker-run.sh
    entrypoint: /console-proc-docker-run.sh
    depends_on:
      - console-mysql-dev3
    networks:
      - dev-vlt-console-dev3
  console-www-php-dev3:
    container_name: console-www-php-dev3
    image: PRIVATE_ECR_ADDRESS/bnoe-console-php:latest
    volumes:
      - ../console-www:/var/www/secure
      - ./perImageFiles/console/php-log.conf:/usr/local/etc/php-fpm.d/zz-log.conf
      - ./perImageFiles/console/www-php.ini:/usr/local/etc/php/conf.d/www-php.ini
      - ./perImageFiles/console/console-php-entrypoint.sh:/console-php-entrypoint.sh
    env_file:
      - ./.env
      - ./.setupenv
    entrypoint: /console-php-entrypoint.sh
    depends_on:
      - console-mysql-dev3
    networks:
      - dev-vlt-console-dev3
  console-dicom-dev3:
    image: jodogne/orthanc-plugins:1.9.7
    container_name: console-dicom-dev3
    depends_on: [console-mysql-dev3]
    ports: [60004:8042, 60005:10401]
    volumes:
      - ./perImageFiles/dicom/orthanc.json:/etc/orthanc/orthanc.json
      - ./perImageFiles/dicom/storage:/var/lib/orthanc/db-v6
      - ./perImageFiles/dicom/plugins:/usr/share/orthanc/plugins
      - ./perImageFiles/dicom/lua-scripts:/usr/share/orthanc/lua-scripts
    env_file:
      - ./.env
    networks:
      - dev-vlt-console-dev3
  console-vpacs-dev3:
    image: PRIVATE_ECR_ADDRESS/vpacs:latest
    container_name: console-vpacs-dev3
    depends_on: [console-mysql-dev3]
    ports: ["60006:8042"]
    secrets:
      - vpacs-orthanc-dev3.json
    networks:
      - dev-vlt-console-dev3
secrets:
  console-orthanc-dev3.json:
    file: ./perImageFiles/dicom/orthanc.json
  vpacs-orthanc-dev3.json:
    file: ./perImageFiles/vpacs/orthanc.json
networks:
  dev-vlt-console-dev3:
    name: dev-vlt-console-dev3
    driver: bridge
Am I missing something here? I've added tens of other containers, all with their own networks configured as "bridge", and they are all able to access the internet without issue.
I've read posts from five years ago about this, and the only resolution that seemed to work was rebooting the Docker host, which I've done, but it didn't help.
Any thoughts or comments?
Thanks
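
A few checks that may help narrow this down; one common culprit when a host runs many bridge networks is overlapping or exhausted default address pools. These commands only assume the network names from the files above, with example.com standing in for the real internet host:

# Compare the subnets Docker assigned to the two networks
docker network inspect dev-vlt-console dev-vlt-console-dev3 \
  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'

# Test DNS resolution from a throwaway container on the replica network
docker run --rm --network dev-vlt-console-dev3 alpine nslookup example.com

# Inspect the resolver configuration the container actually uses
docker run --rm --network dev-vlt-console-dev3 alpine cat /etc/resolv.conf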

Volume-mounting the ClickHouse docker image to override config.xml and connect with Tabix

clickhouse:
  build: ./db/clickhouse
  restart: unless-stopped
  volumes:
    # Store data to HDD
    - ./clickhouse-data:/var/lib/clickhouse/
    # Base Clickhouse cfg
    - ./clickhouse/config.xml:/etc/clickhouse-server/config.xml
    - ./clickhouse/users.xml:/etc/clickhouse-server/users.xml
  ports:
    - "8123:8123" # for http clients
    - "9000:9000" # for console client
  environment:
    - CLICKHOUSE_USER=oussema
    - CLICKHOUSE_PASSWORD=root
    - CLICKHOUSE_DB=DWH
    - CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT=1
  ulimits:
    nofile:
      soft: 262144
      hard: 262144
tabix:
  image: spoonest/clickhouse-tabix-web-client
  ports:
    - "8080:80"
  depends_on:
    - clickhouse
  restart: unless-stopped
  environment:
    - CH_NAME=clickhouse
    - CH_HOST=https://127.0.0.1:8123
    - CH_LOGIN=oussema
    - CH_PASSWORD=root
Here is a working example for test purposes:
docker-compose.yml
version: "3.0"
services:
  clickhouse:
    image: yandex/clickhouse-server
    ports:
      - "8123:8123"
    healthcheck:
      test: wget --no-verbose --tries=1 --spider localhost:8123/ping || exit 1
      interval: 2s
      timeout: 2s
      retries: 16
    environment:
      - CLICKHOUSE_USER=default
      - CLICKHOUSE_PASSWORD=12345
  tabix:
    image: spoonest/clickhouse-tabix-web-client
    ports:
      - "8080:80"
    depends_on:
      - clickhouse
    restart: unless-stopped
    environment:
      - CH_NAME=clickhouse
      - CH_HOST=http://localhost:8123
      - CH_LOGIN=default
      - CH_PASSWORD=12345
To run and test it:
# run container
docker compose up
# browse the tabix endpoint
http://localhost:8080/
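
Note that Tabix runs entirely in the browser, so CH_HOST is resolved by the client machine, not inside the Docker network: it must be an address the browser can reach, with a scheme matching how ClickHouse is actually exposed (plain HTTP on 8123 by default, which is why https://127.0.0.1:8123 in the first config fails). A quick sanity check from the machine running the browser:

# Should print "Ok." if the published HTTP port is reachable
curl http://localhost:8123/ping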

How to run docker(s) with a MongoDB cluster?

I use this docker-compose file, which runs MongoDB plus the mongo-express web interface:
https://gist.github.com/adamelliotfields/cd49f056deab05250876286d7657dc4b
How can I run containers with a MongoDB cluster and mongo-express as the web interface?
I have attached my docker-compose file: I took it from Bitnami and added mongo-express, but the web interface does not work.
Here it is:
version: '3.1'
services:
  mongodb-sharded:
    image: docker.io/bitnami/mongodb-sharded:4.4
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-sharded
      - MONGODB_SHARDING_MODE=mongos
      - MONGODB_CFG_PRIMARY_HOST=mongodb-cfg
      - MONGODB_CFG_REPLICA_SET_NAME=cfgreplicaset
      - MONGODB_REPLICA_SET_KEY=replicasetkey123
      - MONGODB_ROOT_PASSWORD=example
    ports:
      - "27017:27017"
  mongodb-shard0:
    image: docker.io/bitnami/mongodb-sharded:4.4
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-shard0
      - MONGODB_SHARDING_MODE=shardsvr
      - MONGODB_MONGOS_HOST=mongodb-sharded
      - MONGODB_ROOT_PASSWORD=example
      - MONGODB_REPLICA_SET_MODE=primary
      - MONGODB_REPLICA_SET_KEY=replicasetkey123
      - MONGODB_REPLICA_SET_NAME=shard0
    volumes:
      - 'shard0_data:/bitnami'
  mongodb-cfg:
    image: docker.io/bitnami/mongodb-sharded:4.4
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-cfg
      - MONGODB_SHARDING_MODE=configsvr
      - MONGODB_ROOT_PASSWORD=example
      - MONGODB_REPLICA_SET_MODE=primary
      - MONGODB_REPLICA_SET_KEY=replicasetkey123
      - MONGODB_REPLICA_SET_NAME=cfgreplicaset
    volumes:
      - 'cfg_data:/bitnami'
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
      # - 27017:27017
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: example
      XX_CONFIG_MONGODB_URL: mongodb://root:example@mongodb-sharded:27017/
      ME_CONFIG_MONGODB_URL: mongodb://root:example@mongodb-cfg:27017/
      # mongodb://[username:password@]host1[:port1][,...hostN[:portN]][/[defaultauthdb][?options]]
volumes:
  shard0_data:
    driver: local
  cfg_data:
    driver: local
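
One detail stands out: the URL mongo-express actually uses (ME_CONFIG_MONGODB_URL) points at the config server, while clients should normally go through the mongos router, i.e. the mongodb-sharded service. A sketch of a corrected service, assuming the root credentials above (the authSource=admin option is an assumption based on where Bitnami creates the root user):

mongo-express:
  image: mongo-express
  restart: always
  ports:
    - 8081:8081
  environment:
    ME_CONFIG_MONGODB_ADMINUSERNAME: root
    ME_CONFIG_MONGODB_ADMINPASSWORD: example
    # Connect through the mongos router, not the config server
    ME_CONFIG_MONGODB_URL: mongodb://root:example@mongodb-sharded:27017/?authSource=admin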

Can't connect mariadb and phpmyadmin containers

I get the error "mysqli::real_connect(): (HY000/2002): No such file or directory" when trying to log in to phpMyAdmin. I verified I can connect to the DB container from the localhost using mysql -h 127.0.0.1 -P 3306 -u root -p. Below is my docker-compose file:
version: "3.7"
########################### SECRETS
secrets:
  mysql_root_password:
    file: $DOCKERDIR/secrets/mysql_root_password
########################### SERVICES
services:
  # Portainer - WebUI for Containers
  portainer:
    container_name: portainer
    image: portainer/portainer-ce:latest
    restart: unless-stopped
    command: -H unix:///var/run/docker.sock
    security_opt:
      - no-new-privileges:true
    ports:
      - "$PORTAINER_PORT:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - $DOCKERDIR/portainer/data:/data
    environment:
      - TZ=$TZ
  # MariaDB - MySQL Database
  db:
    container_name: db
    image: linuxserver/mariadb:latest
    restart: always
    security_opt:
      - no-new-privileges:true
    ports:
      - "$MARIADB_PORT:3306"
    volumes:
      - $DOCKERDIR/mariadb/data:/config
    environment:
      - PUID=$PUID
      - PGID=$PGID
      - TZ=$TZ
      - FILE__MYSQL_ROOT_PASSWORD=/run/secrets/mysql_root_password
    secrets:
      - mysql_root_password
  # phpMyAdmin - Database management
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: phpmyadmin
    restart: unless-stopped
    depends_on:
      - db
    security_opt:
      - no-new-privileges:true
    ports:
      - "$PHPMYADMIN_PORT:80"
    volumes:
      - $DOCKERDIR/phpmyadmin:/etc/phpmyadmin
    environment:
      - PMA_HOST=db
      #- PMA_ARBITRARY=1
      - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql_root_password
    secrets:
      - mysql_root_password
  # Dozzle - Real-time Docker Log Viewer
  dozzle:
    image: amir20/dozzle:latest
    container_name: dozzle
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    ports:
      - "$DOZZLE_PORT:8080"
    environment:
      DOZZLE_LEVEL: info
      DOZZLE_TAILSIZE: 300
      DOZZLE_FILTER: "status=running"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
For the life of me, I can't figure out what I'm doing wrong when logging in to phpMyAdmin. Can someone explain my mistake or mistakes and point me in the right direction? Thanks.
I figured the issue out: I had the network set on the phpmyadmin section but not on db. Once I added the network statement to the db section, I was able to connect.
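
For reference, a minimal sketch of that fix: both services join the same user-defined network so phpMyAdmin can reach db by service name (the network name backend is arbitrary here):

services:
  db:
    # ...as above...
    networks:
      - backend
  phpmyadmin:
    # ...as above...
    networks:
      - backend
networks:
  backend: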

Launching more containers with a Traefik+Odoo+Postgres .yml

I'm using a .yml file to launch an Odoo instance and its PostgreSQL database behind Traefik, and it works perfectly fine: it assigns the domain and subdomain with a Let's Encrypt certificate.
My issue is that I now want to launch more of these, but I haven't succeeded in doing so.
I've tried:
Changing the ports: in the odoo section
Removing the ports: section from the odoo section
Editing the ports: in the traefik section
The .yml:
version: "3"
services:
  odoo:
    image: odoo:12.0
    depends_on:
      - db
    restart: unless-stopped
    networks:
      - internal
    ports:
      - "8069:8069"
      - "8072:8072"
    environment:
      - HOST=db
      - USER=${ODOO_USER}
      - PASSWORD=${ODOO_PASS}
    volumes:
      - ./odoo/odoo-web-data:/var/lib/odoo
      - ./odoo/config:/etc/odoo
      - ./odoo/addons:/mnt/extra-addons
      - ./odoo/logs:/var/log/odoo
    labels:
      - 'traefik.http.routers.odoo.rule=Host(`${ODOO_TRAEFIK_URL}`)'
      - 'traefik.http.routers.odoo.entrypoints=websecure'
      - 'traefik.http.routers.odoo.tls.certresolver=odoo'
      - 'traefik.port=8069'
      - "traefik.http.routers.http-catchall.rule=hostregexp(`{host:.+}`)"
      - "traefik.http.routers.http-catchall.entrypoints=web"
      - "traefik.http.routers.http-catchall.middlewares=redirect-to-https@docker"
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
  db:
    image: postgres:10
    restart: unless-stopped
    networks:
      - internal
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=${ODOO_USER}
      - POSTGRES_PASSWORD=${ODOO_PASS}
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - ./pgdata:/var/lib/postgresql/data/pgdata
  traefik:
    image: traefik:v2.0
    networks:
      - internal
      - web
    ports:
      # The HTTP port
      - "80:80"
      - "443:443"
      # The Web UI (enabled by --api.insecure=true)
      - "8080:8080"
    volumes:
      - "./traefik/letsencrypt:/letsencrypt"
      - "./traefik/traefik.yml:/etc/traefik.yml"
      - "/var/run/docker.sock:/var/run/docker.sock"
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker"
      - "--providers.docker.defaultRule=Host(`{{ trimPrefix `/` .Name }}.${TRAEFIK_DEFAULT_DOMAIN}`)"
      - "--entryPoints.web.address=:80"
      - "--entryPoints.websecure.address=:443"
      - "--certificatesResolvers.odoo.acme.httpchallenge=true"
      - "--certificatesresolvers.odoo.acme.httpchallenge.entrypoint=web"
      - "--certificatesresolvers.odoo.acme.email=${ACME_EMAIL}"
      - "--certificatesresolvers.odoo.acme.storage=/letsencrypt/acme.json"
networks:
  internal:
  web:
    external: true
Assuming I already have a Traefik + Odoo + Postgres stack running, how can I launch more instances of these together?
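
One common pattern is to give each additional Odoo stack its own compose project (or unique service names) and unique Traefik router names, publish no host ports on the new odoo services (host ports are what collide), and tell Traefik which container port to target. A minimal sketch for a second instance, assuming Traefik can reach it over a shared network and a hypothetical ODOO2_TRAEFIK_URL variable:

odoo2:
  image: odoo:12.0
  depends_on:
    - db2
  restart: unless-stopped
  networks:
    - internal
  # no ports: section needed; Traefik reaches the container directly
  labels:
    - 'traefik.http.routers.odoo2.rule=Host(`${ODOO2_TRAEFIK_URL}`)'
    - 'traefik.http.routers.odoo2.entrypoints=websecure'
    - 'traefik.http.routers.odoo2.tls.certresolver=odoo'
    # Traefik v2 replacement for the v1-style traefik.port label
    - 'traefik.http.services.odoo2.loadbalancer.server.port=8069'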