How to run curl commands inside the Kong Docker container at startup? - docker-compose

I have set up a Kong Docker container. It starts with this docker-compose file:
kong:
  image: "${KONG_DOCKER_TAG}"
  user: ${KONG_USER}
  depends_on:
    - kong-database
    - kong-migrations
  environment:
    KONG_ADMIN_ACCESS_LOG: /dev/stdout
    KONG_ADMIN_ERROR_LOG: /dev/stderr
    KONG_ADMIN_LISTEN: '0.0.0.0:8001'
    KONG_CASSANDRA_CONTACT_POINTS: ${KONG_CASSANDRA_CONTACT_POINTS}
    KONG_DATABASE: ${KONG_DATABASE}
    KONG_PG_DATABASE: ${KONG_PG_DATABASE}
    KONG_PG_HOST: ${KONG_PG_HOST}
    KONG_PROXY_LISTEN: '0.0.0.0:8000'
    KONG_PROXY_LISTEN_SSL: 0.0.0.0:8443
    KONG_NGINX_HTTP_INCLUDE: custom-nginx-kong.conf
    KONG_PG_USER: ${KONG_PG_USER}
    KONG_PROXY_ACCESS_LOG: /dev/stdout
    KONG_PROXY_ERROR_LOG: /dev/stderr
    KONG_PG_PASSWORD: ${KONG_PG_PASSWORD}
  networks:
    - external-network
  configs:
    - source: kong-config
      target: /usr/local/kong/custom-nginx-kong.conf
      # Permissions -r--r--r--
      mode: 0444
  healthcheck:
    test: ["CMD", "curl", "-f", "http://kong:8001"]
    interval: 5s
    timeout: 2s
    retries: 15
  restart: on-failure
  deploy:
    restart_policy:
      condition: on-failure
    labels:
      com.docker.lb.hosts: ${APP_URL_KONG}
      com.docker.lb.port: 8080
      com.docker.lb.network: external-network
      com.docker.lb.backend_mode: vip
I want to execute curl commands (e.g. the POST below, which adds a new service in Kong) automatically, immediately after my container is created, either through some script or by running the command directly. How can I set up this automation through the docker-compose file above? Please help me add services automatically on startup through docker-compose!
curl -i -X POST http://<admin-hostname>:8001/services \
--data name=example_service \
--data url='http://mockbin.org'
Thanks in advance!
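One way to automate this is a one-shot sidecar service that waits for Kong's healthcheck and then posts the configuration. A minimal sketch (the kong-setup name and the curlimages/curl image are my choices, not from the question; note that the long depends_on form is honored by docker-compose but ignored by swarm's docker stack deploy, which the deploy: section above suggests you may be using):
kong-setup:
  image: curlimages/curl:latest
  networks:
    - external-network
  depends_on:
    kong:
      condition: service_healthy
  # Runs once against the Admin API over the shared network, then exits.
  command: >
    curl -i -X POST http://kong:8001/services
    --data name=example_service
    --data url=http://mockbin.org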

Related

Cannot pass successfully through healthcheck with lsof

I've tried to bring up a docker-compose stack with three services, but it fails on the healthcheck.
The check searches for a specific listening port with lsof: test: ["CMD", "lsof", "-t", "-i:3001"]
Nevertheless, when I try to reach the service on that specific port, bound in the docker-compose file, it works seamlessly for me.
I've also tried running the command inside the running container, and that did not work either. Then I realized I should run it with sudo, and that way it returns a successful response (0).
I was going to lean on sudo, but then I found that the Docker daemon runs as root, which breaks my theory.
Does anyone have an idea what could be keeping my healthcheck from succeeding?
Here is the code:
version: '3.9'
services:
  frontend:
    container_name: front-end
    build: ./src/front-end
    ports:
      - 3000:3000
    restart: on-failure
    networks:
      - frontend
    healthcheck:
      test: ["CMD", "lsof", "-t", "-i:3000"]
      interval: 10s
      timeout: 5s
      retries: 5
    depends_on:
      backend:
        condition: service_healthy
  backend:
    container_name: back-end
    build: ./src/back-end
    working_dir: /backend
    environment:
      - BACKEND_PORT=3001
    ports:
      - 3001:3001
    restart: on-failure
    networks:
      - frontend
      - backend
    healthcheck:
      test: ["CMD", "lsof", "-t", "-i:3001"]
      interval: 10s
      timeout: 5s
      retries: 5
    depends_on:
      database:
        condition: service_healthy
  database:
    container_name: database
    image: mongo:6.0.4
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${ECONDOS_MONGODB_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${ECONDOS_MONGODB_PASSWORD}
    networks:
      - backend
    healthcheck:
      test: ["CMD", "mongosh", "--eval", "db.runCommand({ ping: 1 })", "--quiet"]
      interval: 10s
      timeout: 5s
      retries: 5
networks:
  frontend:
  backend:
docker-compose -v
docker-compose version 1.29.2, build 5becea4c
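A hedged guess at the cause: lsof can only inspect processes the calling user has access to, so a healthcheck running as an unprivileged user may see nothing on a port that is genuinely open (which would explain why sudo returns 0). A sketch of a probe that sidesteps lsof entirely, assuming nc (netcat) exists in the image:
healthcheck:
  # Exits 0 as soon as something is listening on 3001, regardless of process ownership.
  test: ["CMD-SHELL", "nc -z localhost 3001"]
  interval: 10s
  timeout: 5s
  retries: 5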

Mongo container with a replica set with only one node in docker-compose

I want to create a Docker container with an instance of Mongo. In particular, I would like to create a replica set with only one node (since I'm interested in transactions and they are only available for replica sets).
Dockerfile
FROM mongo
RUN echo "rs.initiate();" > /docker-entrypoint-initdb.d/replica-init.js
CMD ["--replSet", "rs0"]
docker-compose.yml
version: "3"
services:
db:
build:
dockerfile: Dockerfile
context: .
ports:
- "27017:27017"
If I use the Dockerfile alone everything is fine, whereas with docker-compose it does not work: in fact, if I then log into the container I get prompted with rs0:OTHER> instead of rs0:PRIMARY>.
I consulted these links, but the proposed solutions are not working:
https://github.com/docker-library/mongo/issues/246#issuecomment-382072843
https://github.com/docker-library/mongo/issues/249#issuecomment-381786889
This is the compose file I have used for a while now for local development. You can remove the keyfile pieces if you don't need to connect via SSL.
version: "3.8"
services:
mongodb:
image : mongo:4
container_name: mongodb
hostname: mongodb
restart: on-failure
environment:
- PUID=1000
- PGID=1000
- MONGO_INITDB_ROOT_USERNAME=mongo
- MONGO_INITDB_ROOT_PASSWORD=mongo
- MONGO_INITDB_DATABASE=my-service
- MONGO_REPLICA_SET_NAME=rs0
volumes:
- mongodb4_data:/data/db
- ./:/opt/keyfile/
ports:
- 27017:27017
healthcheck:
test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
interval: 10s
start_period: 30s
command: "--bind_ip_all --keyFile /opt/keyfile/keyfile --replSet rs0"
volumes:
mongodb4_data:
It uses Docker's health check (with a startup delay) to sneak in the rs.initiate() if it actually needs it after it's already running.
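To verify the node actually became PRIMARY (a quick check, assuming the container name and credentials from the file above):
$ docker exec mongodb mongo -u mongo -p mongo --authenticationDatabase admin --quiet \
    --eval "rs.status().members[0].stateStr"
It should print PRIMARY once the healthcheck has had a chance to run rs.initiate().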
To create a keyfile:
Mac:
openssl rand -base64 741 > keyfile
chmod 600 keyfile
Linux:
openssl rand -base64 756 > keyfile
chmod 600 keyfile
sudo chown 999 keyfile
sudo chgrp 999 keyfile
The top answer stopped working for me in later MongoDB and Docker versions. Particularly because rs.initiate().ok would throw an error if the replica set was already initiated, causing the whole command to fail. In addition, connecting from another container was failing because the replica set's sole member had some random host, which wouldn't allow the connection. Here's my new docker-compose.yml:
services:
  web:
    # ...
    environment:
      DATABASE_URL: mongodb://root:root@db/?authSource=admin&tls=false
  db:
    build:
      context: ./mongo/
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: root
    ports:
      - '27017:27017'
    volumes:
      - data:/data/db
    healthcheck:
      test: |
        test $$(mongosh --quiet -u root -p root --eval "try { rs.initiate({ _id: 'rs0', members: [{ _id: 0, host: 'db' }] }).ok } catch (_) { rs.status().ok }") -eq 1
      interval: 10s
      start_period: 30s
volumes:
  data:
Inside ./mongo/, I have a custom Dockerfile that looks like:
FROM mongo:6
RUN echo "password" > /keyfile \
&& chmod 600 /keyfile \
&& chown 999 /keyfile \
&& chgrp 999 /keyfile
CMD ["--bind_ip_all", "--keyFile", "/keyfile", "--replSet", "rs0"]
This Dockerfile is suitable for development, but you'd definitely want a securely generated and persistent keyfile to be mounted in production (and therefore strike the entire RUN command).
You still need to issue replSetInitiate even if there's only one node in the RS.
See also here.
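For reference, issuing it by hand would look like this (a sketch; db-container is a placeholder name, and mongosh ships with the mongo:6 image used above):
$ docker exec -it db-container mongosh -u root -p root --authenticationDatabase admin \
    --eval "rs.initiate({ _id: 'rs0', members: [{ _id: 0, host: 'db' }] })"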
I had to do something similar to build tests around ChangeStreams, which are only available when running Mongo as a replica set. I don't remember where I pulled this from, so I can't explain it in detail, but it does work for me. Here is my setup:
Dockerfile
FROM mongo:5.0.3
RUN echo "rs.initiate({'_id':'rs0', members: [{'_id':1, 'host':'127.0.0.1:27017'}]});" > "/docker-entrypoint-initdb.d/init_replicaset.js"
RUN echo "12345678" > "/tmp/key.file"
RUN chmod 600 /tmp/key.file
RUN chown 999:999 /tmp/key.file
CMD ["mongod", "--replSet", "rs0", "--bind_ip_all", "--keyFile", "/tmp/key.file"]
docker-compose.yml
version: '3.7'
services:
  mongo:
    build: .
    restart: always
    ports:
      - 27017:27017
    healthcheck:
      test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u admin -p pass --quiet) -eq 1
      interval: 10s
      start_period: 30s
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: pass
      MONGO_INITDB_DATABASE: test
Run docker-compose up and you should be good.
Connection string: mongodb://admin:pass@localhost:27017/test
Note: You shouldn't use this in production obviously, adjust the key "12345678" in the Dockerfile if security is a concern.
If you just need a single-node MongoDB replica set via docker-compose.yml, you can simply use this:
mongodb:
  image: mongo:5
  restart: always
  command: ["--replSet", "rs0", "--bind_ip_all"]
  ports:
    - 27018:27017
  healthcheck:
    test: mongo --eval "rs.initiate()"
    start_period: 5s
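One caveat with this minimal variant (an inference from rs.initiate() defaults, not something the answer states): with no explicit member list, the sole member's host is the container's own hostname, so a client on the host machine may need to skip replica-set discovery:
mongosh "mongodb://localhost:27018/?directConnection=true"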
This one works fine for me:
version: '3.4'
services:
  ludustack-db:
    container_name: ludustack-db
    command: mongod --auth
    image: mongo:latest
    hostname: mongodb
    ports:
      - '27017:27017'
    env_file:
      - .env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD}
      - MONGO_INITDB_DATABASE=${MONGO_INITDB_DATABASE}
      - MONGO_REPLICA_SET_NAME=${MONGO_REPLICA_SET_NAME}
    healthcheck:
      test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
      interval: 60s
      start_period: 60s

Docker losing Postgres server data when I shut down the backend container

I've got the following docker-compose file and it serves up the application on port 80 fine.
version: '3'
services:
  backend:
    build: ./Django-Backend
    command: gunicorn testing.wsgi:application --bind 0.0.0.0:8000 --log-level debug
    expose:
      - "8000"
    volumes:
      - static:/code/backend/static
    env_file:
      - ./.env.prod
  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - static:/static
    depends_on:
      - backend
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
volumes:
  static:
  postgres_data:
Once in there I can log into the admin and add an extra user, which gets saved to the database; I can reload the page and the user is still there. Once I stop the backend Docker container, however, that user is gone. Given that Postgres is running in a different container and I'm not bringing it down, I'm unsure how stopping the backend container and restarting it makes the data unavailable.
Thanks in advance.
EDIT:
I'm bringing up the docker container with the following command.
docker-compose -f docker-compose.prod.yml up -d
I'm bringing down the container by just stopping it in Docker Desktop.
I'm running Django 3 for the backend, and I've also tried adding a superuser in the terminal while the container is running:
# python manage.py createsuperuser
Username (leave blank to use 'root'): mikey
Email address:
Password:
Password (again):
This password is too common.
Bypass password validation and create user anyway? [y/N]: y
Superuser created successfully.
Which works and the user appears while the container is running. However, once again when I shut the container down via docker desktop and then restart it that user that was just created is gone.
FURTHER EDIT:
settings.py uses dotenv ("from dotenv import load_dotenv"):
DATABASES = {
    "default": {
        "ENGINE": os.getenv("SQL_ENGINE"),
        "NAME": os.getenv("SQL_DATABASE"),
        "USER": os.getenv("SQL_USER"),
        "PASSWORD": os.getenv("SQL_PASSWORD"),
        "HOST": os.getenv("SQL_HOST"),
        "PORT": os.getenv("SQL_PORT"),
    }
}
with the .env.prod file having the following values:
DEBUG=0
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=postgres
SQL_USER=postgres
SQL_PASSWORD=postgres
SQL_HOST=db
SQL_PORT=5432
SOLUTION:
Read the comments to see the diagnosis by the other legends, but the updated docker-compose file looks like this. Note the "depends_on" block.
version: '3'
services:
  backend:
    build: ./Django-Backend
    command: gunicorn testing.wsgi:application --bind 0.0.0.0:8000 --log-level debug
    expose:
      - "8000"
    volumes:
      - static:/code/backend/static
    env_file:
      - ./.env.prod
    depends_on:
      - db
  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - static:/static
    depends_on:
      - backend
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    expose:
      - "5432"
volumes:
  static:
  postgres_data:
FINAL EDIT:
Added the following code to my entrypoint.sh file to ensure Postgres is ready to accept connections before the backend starts.
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
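An alternative to the nc loop (a sketch, assuming a Compose release that implements the Compose Specification; the long depends_on form was dropped from the version: '3' file format shown here and only came back with the spec) is to give Postgres a healthcheck and gate the backend on it:
db:
  image: postgres
  healthcheck:
    # pg_isready ships in the postgres image and exits 0 once the server accepts connections.
    test: ["CMD-SHELL", "pg_isready -U postgres"]
    interval: 5s
    timeout: 5s
    retries: 5
backend:
  depends_on:
    db:
      condition: service_healthy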

Kong: docker-compose [PostgreSQL error] failed to retrieve PostgreSQL server_version_num: host or service not provided, or not known

I'm trying to learn how to use Kong for my API server, but I hit this error:
kong_1 | nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:388: [PostgreSQL error] failed to retrieve PostgreSQL server_version_num: host or service not provided, or not known
My docker-compose.yaml is below:
version: "3"
networks:
kong-net:
driver: bridge
services:
# Create a service named db.
kong-postgres:
# Use the Docker Image postgres. This will pull the newest release.
image: "postgres"
# Give the container a name. You can changes to something else.
container_name: "kong-postgres"
# Setup the username, password, and database name. You can changes these values.
environment:
- POSTGRES_USER=kong
- POSTGRES_PASSWORD=kong
- POSTGRES_DB=kong
# Maps port 54320 (localhost) to port 5432 on the container. You can change the ports to fix your needs.
ports:
- "5432:5432"
restart: on-failure
# Set a volume some that database is not lost after shutting down the container.
# I used the name postgres-data but you can changed it to something else.
volumes:
- ./postgres-data:/var/lib/postgresql/data
kong:
image: "kong:latest"
command: "kong migrations bootstrap"
depends_on:
- kong-postgres
environment:
KONG_ADMIN_LISTEN: '0.0.0.0:8001,0.0.0.0:8444 ssl'
KONG_DATABASE: postgres
KONG_PG_HOST: kong-postgres
KONG_PG_DATABASE: kong
KONG_PG_PASSWORD: kong
KONG_PG_USER: kong
networks:
- kong-net
ports:
- "8000:8000/tcp"
- "8001:8001/tcp"
- "8443:8443/tcp"
- "8444:8444/tcp"
healthcheck:
test: ["CMD", "kong", "health"]
interval: 10s
timeout: 10s
retries: 10
restart: on-failure
Also tried running it in 2 steps:
docker-compose up kong-postgres is OK:
$ docker-compose up kong-postgres
Starting kong-postgres ... done
Attaching to kong-postgres
kong-postgres |
kong-postgres | PostgreSQL Database directory appears to contain a database; Skipping initialization
kong-postgres |
kong-postgres | 2019-11-20 08:22:37.057 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
kong-postgres | 2019-11-20 08:22:37.057 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
kong-postgres | 2019-11-20 08:22:37.057 UTC [1] LOG: listening on IPv6 address "::", port 5432
kong-postgres | 2019-11-20 08:22:37.060 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
kong-postgres | 2019-11-20 08:22:37.128 UTC [25] LOG: database system was shut down at 2019-11-20 08:08:28 UTC
kong-postgres | 2019-11-20 08:22:37.176 UTC [1] LOG: database system is ready to accept connections
And I can connect to the database via psql -h localhost -p 5432 -U kong -d kong:
$ psql -h localhost -p 5432 -U kong -d kong
Password for user kong:
psql (11.5, server 12.1 (Debian 12.1-1.pgdg100+1))
WARNING: psql major version 11, server major version 12.
Some psql features might not work.
Type "help" for help.
kong=# \q
docker-compose up kong fails:
$ docker-compose up kong
kong-postgres is up-to-date
Recreating kong_kong_1 ... done
Attaching to kong_kong_1
kong_1 | Error: [PostgreSQL error] failed to retrieve PostgreSQL server_version_num: host or service not provided, or not known
kong_1 |
kong_1 | Run with --v (verbose) or --vv (debug) for more details
P.S.: The official Docker Compose template fails too:
kong-migrations-up_1 | Error: Cannot run migrations: database needs bootstrapping; run 'kong migrations bootstrap'
kong-migrations-up_1 |
kong-migrations-up_1 | Run with --v (verbose) or --vv (debug) for more details
version: "3.7"
volumes:
kong_data: {}
networks:
kong-net:
services:
#######################################
# Postgres: The database used by Kong
#######################################
kong-database:
image: postgres:9.6
container_name: kong-postgres
restart: on-failure
networks:
- kong-net
volumes:
- kong_data:/var/lib/postgresql/data
environment:
POSTGRES_USER: kong
POSTGRES_PASSWORD: ${KONG_PG_PASSWORD:-kong}
POSTGRES_DB: kong
ports:
- "5432:5432"
healthcheck:
test: ["CMD", "pg_isready", "-U", "kong"]
interval: 30s
timeout: 30s
retries: 3
#######################################
# Kong database migration
#######################################
kong-migration:
image: ${KONG_DOCKER_TAG:-kong:latest}
command: kong migrations bootstrap
networks:
- kong-net
restart: on-failure
environment:
KONG_DATABASE: postgres
KONG_PG_HOST: kong-database
KONG_PG_DATABASE: kong
KONG_PG_USER: kong
KONG_PG_PASSWORD: ${KONG_PG_PASSWORD:-kong}
depends_on:
- kong-database
#######################################
# Kong: The API Gateway
#######################################
kong:
image: ${KONG_DOCKER_TAG:-kong:latest}
restart: on-failure
networks:
- kong-net
environment:
KONG_DATABASE: postgres
KONG_PG_HOST: kong-database
KONG_PG_DATABASE: kong
KONG_PG_USER: kong
KONG_PG_PASSWORD: ${KONG_PG_PASSWORD:-kong}
KONG_PROXY_LISTEN: 0.0.0.0:8000
KONG_PROXY_LISTEN_SSL: 0.0.0.0:8443
KONG_ADMIN_LISTEN: 0.0.0.0:8001
depends_on:
- kong-database
healthcheck:
test: ["CMD", "kong", "health"]
interval: 10s
timeout: 10s
retries: 10
ports:
- "8000:8000"
- "8001:8001"
- "8443:8443"
- "8444:8444"
#######################################
# Konga database prepare
#######################################
konga-prepare:
image: pantsel/konga:latest
command: "-c prepare -a postgres -u postgresql://kong:${KONG_PG_PASSWORD:-kong}#kong-database:5432/konga"
networks:
- kong-net
restart: on-failure
depends_on:
- kong-database
#######################################
# Konga: Kong GUI
#######################################
konga:
image: pantsel/konga:latest
restart: always
networks:
- kong-net
environment:
DB_ADAPTER: postgres
DB_URI: postgresql://kong:${KONG_PG_PASSWORD:-kong}#kong-database:5432/konga
NODE_ENV: production
depends_on:
- kong-database
ports:
- "1337:1337"
I gave up on docker-compose for running Kong and went back to plain docker commands, in the few steps below:
1. Create a Docker network:
$ docker network create kong-net
2. Start your database:
$ docker run -d --name kong_database \
--network=kong-net \
-p 5432:5432 \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_DB=kong" \
--volume "$PWD/postgres-data":/var/lib/postgresql/data \
postgres:9.6
3. Prepare your database:
$ docker run --rm \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong_database" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong_database" \
kong:latest kong migrations bootstrap
4. Start Kong
$ docker run -d --name kong \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong_database" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong_database" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
--volume "$PWD/conf":/etc/nginx \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 8444:8444 \
kong:latest
5. Use Kong
$ curl -i http://localhost:8001/
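From here, registering a service and a route through the Admin API looks like this (standard Kong Admin API calls; the names example_service and /example are arbitrary):
$ curl -i -X POST http://localhost:8001/services \
    --data name=example_service \
    --data url='http://mockbin.org'
$ curl -i -X POST http://localhost:8001/services/example_service/routes \
    --data 'paths[]=/example'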
I use the Konga GUI for the Kong Admin API:
$ docker run --rm \
--network=kong-net \
pantsel/konga -c prepare -a postgres -u postgresql://kong@kong_database:5432/konga_db
$ docker run -d --name konga \
-p 1337:1337 \
--network=kong-net \
-e "DB_ADAPTER=postgres" \
-e "DB_HOST=kong_database" \
-e "DB_USER=kong" \
-e "DB_DATABASE=konga_db" \
-e "KONGA_HOOK_TIMEOUT=120000" \
-e "NODE_ENV=production" \
pantsel/konga
Open http://localhost:1337/ to start using it.
I hope this helps someone else.
P.S.: I'd still like to have a sample docker-compose.yml as well.
I had the same error.
A solution for local development is to use:
POSTGRES_HOST_AUTH_METHOD: trust
Put this in your docker-compose file, under the kong-database service's environment.
This is very unsafe for production, because it trusts all connections to the DB.
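For clarity, a sketch of where that setting lands, using the service layout from the official template above:
kong-database:
  image: postgres:9.6
  environment:
    POSTGRES_USER: kong
    POSTGRES_DB: kong
    # Development only: accepts every connection without a password.
    POSTGRES_HOST_AUTH_METHOD: trust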

How to keep the certbot container running?

I'm using the certbot/certbot container as in:
docker-compose run -d --rm --entrypoint 'certbot certonly --webroot -w /var/www/certbot --staging --email example@domain.se -d example.com --rsa-key-size 4096 --agree-tos --force-renewal ; sleep 3600' certbot
on the following compose file:
version: '3.5'
services:
  nginx:
    image: nginx:1.15-alpine
    restart: unless-stopped
    volumes:
      - "~/dev/docker/projects/common/volumes/letsencrypt/nginx:/etc/nginx/conf.d"
      - "~/dev/docker/projects/common/volumes/letsencrypt/certbot/conf:/etc/letsencrypt"
      - "~/dev/docker/projects/common/volumes/letsencrypt/certbot/www:/var/www/certbot"
      - "~/dev/docker/projects/common/volumes/letsencrypt/nginx:/var/www/nginx"
      - "~/dev/docker/projects/common/volumes/logs:/var/log/nginx"
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - "~/dev/docker/projects/common/volumes/letsencrypt/certbot/conf:/etc/letsencrypt"
      - "~/dev/docker/projects/common/volumes/letsencrypt/certbot/www:/var/www/certbot"
      - "~/dev/docker/projects/common/volumes/logs:/var/log/letsencrypt"
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
But it ignores the sleep command and the container goes away.
Whereas running the following:
docker-compose run -d --rm --entrypoint 'sleep 3600' certbot
keeps the container up and running.
I would like to keep the container up and running even after certbot has failed.
You could move "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'" into a dedicated script, for example start.sh.
Mount it with docker-compose volumes:
volumes:
  - "./start.sh:/start.sh"
entrypoint: /start.sh
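A sketch of what start.sh would contain: the same renew loop, which no longer needs the $$ escaping once it lives in a real file. Remember to make it executable with chmod +x start.sh.
#!/bin/sh
trap exit TERM
while :; do
  certbot renew
  # Sleep in the background and wait on it, so the TERM trap fires promptly.
  sleep 12h &
  wait $!
done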