Enable logging in PostgreSQL using docker-compose

I am using Postgres as a service in my docker-compose file. I want logging to a log file to be enabled when I do docker-compose up. One way to enable logging is by editing the postgresql.conf file, but that is not practical in this case. Another way is to do something like this
docker run --name postgresql -itd --restart always sameersbn/postgresql:10-2 -c logging_collector=on
but this isn't useful either, because I am not starting it from an image with docker run but as a docker-compose service. Any idea how I can run docker-compose up with logging enabled in Postgres?

Here is a docker-compose file that passes the -c options as the service command:
version: '3.6'
services:
  postgresql:
    image: postgres:11.5
    container_name: platops_postgres
    volumes: ['platops-data:/var/lib/postgresql/data/', 'postgres-logs:/var/log/postgresql/']
    command: ["postgres", "-c", "logging_collector=on", "-c", "log_directory=/var/log/postgresql", "-c", "log_filename=postgresql.log", "-c", "log_statement=all"]
    environment:
      - POSTGRES_USER=postgresql
      - POSTGRES_PASSWORD=postgresql
    ports: ['5432:5432']
volumes:
  platops-data: {}
  # uncomment and set the path of the folder to maintain persistence
  # data-postgresql:
  #   driver: local
  #   driver_opts:
  #     o: bind
  #     type: none
  #     device: /path/of/db/postgres/data/
  postgres-logs: {}
  # uncomment and set the path of the folder to maintain persistence
  # data-postgresql:
  #   driver: local
  #   driver_opts:
  #     o: bind
  #     type: none
  #     device: /path/of/db/postgres/logs/
For more information, see the documentation for the official postgres container image.
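To confirm the collector is actually on once the stack is up, you can query the running server (a quick sanity check using the container name and user from the compose file above; adjust to your setup):
docker exec -it platops_postgres psql -U postgresql -c 'SHOW logging_collector;'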

Just like your command with docker run:
docker run --name postgresql -itd --restart always sameersbn/postgresql:10-2 -c logging_collector=on
where the -c logging_collector=on argument is appended to the image's ENTRYPOINT ["/sbin/entrypoint.sh"] to enable logging (see the Dockerfile).
In the docker-compose.yml file, use command: like this:
version: "3.7"
services:
  database:
    image: sameersbn/postgresql:10-2
    command: "-c logging_collector=on"
    # ......
When the PostgreSQL container runs, it will execute: /sbin/entrypoint.sh -c logging_collector=on.

Related

How do I run a bash script in a docker container after it starts?

I'm trying to run a bash script after a Postgres container starts, which 1) creates a new table within the Postgres DB, and 2) runs a copy command that loads the contents of a CSV file into the newly created table.
Currently, I'm specifying the execution of the script within my docker-compose.yml file using the "command" argument, but I find that it doesn't allow the Postgres container to successfully start. I receive the following information from the log:
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
When I remove the "command" argument everything is fine. Here is what my docker-compose.yml files looks like now:
# docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; uvicorn app.main:app --host 0.0.0.0'
    volumes:
      - .:/app
    expose: # new
      - 8000
    environment:
      - DATABASE_URL=postgresql://fastapi_traefik:fastapi_traefik@db:5432/fastapi_traefik
    depends_on:
      - db
    labels: # new
      - "traefik.enable=true"
      - "traefik.http.routers.fastapi.rule=Host(`fastapi.localhost`)"
  db:
    image: postgres:13-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      - "/Users/theComputerPerson/:/tmp"
    expose:
      - 5432
    environment:
      - POSTGRES_USER=fastapi_traefik
      - POSTGRES_PASSWORD=fastapi_traefik
      - POSTGRES_DB=fastapi_traefik
    command: /bin/bash -c "/tmp/newtable.sh"
  traefik: # new
    image: traefik:v2.2
    ports:
      - 8008:80
      - 8081:8080
    volumes:
      - "./traefik.dev.toml:/etc/traefik/traefik.toml"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
volumes:
  postgres_data:
It may be worth noting that I'm trying to customize some of the aspects of this FastAPI project, and to turn your attention to the development files and not the production files. Please let me know if I can provide any additional information in the comments.
You are overriding the default container image startup command.
According to the official PostgreSQL container image page, you can extend initialization by adding your .sh scripts (or even .sql files) to the /docker-entrypoint-initdb.d directory.
See https://hub.docker.com/_/postgres.
The caveat of this approach is that those scripts are only executed when the database is initialized for the first time, i.e. when the data directory is empty; otherwise they are skipped.
Another approach is to override the default container image command and append yours in bash style: postgres; /bin/bash -c "/tmp/newtable.sh";
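A minimal sketch of the first approach, assuming the table creation and CSV copy can be moved into an init script (the file names and table columns below are hypothetical, not from the question):
db:
  image: postgres:13-alpine
  volumes:
    - postgres_data:/var/lib/postgresql/data/
    # scripts here run once, only when the data directory is initialized for the first time
    - ./newtable.sh:/docker-entrypoint-initdb.d/newtable.sh:ro
    - ./data.csv:/tmp/data.csv:ro
  environment:
    - POSTGRES_USER=fastapi_traefik
    - POSTGRES_PASSWORD=fastapi_traefik
    - POSTGRES_DB=fastapi_traefik
with newtable.sh along these lines:
#!/bin/sh
set -e
# executed by the official entrypoint after the temporary local server is started
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d "$POSTGRES_DB" <<'SQL'
CREATE TABLE mytable (id integer, name text);
\copy mytable FROM '/tmp/data.csv' CSV HEADER
SQL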

Receiving an error from a docker-compose that the user must own the data directory

Every time I try to build my image, I get the following error:
The server must be started by the user that owns the data directory.
The following is my docker-compose file:
version: "3.7"
services:
db:
image: postgres
container_name: xxxxxxxxxxxx
volumes:
- ./postgres-data:/var/lib/postgresql/data
environment:
POSTGRES_DB: $POSTGRES_DB
POSTGRES_USER: $POSTGRES_USER
POSTGRES_PASSWORD: $POSTGRES_PASSWORD
nginx:
image: nginx:latest
restart: always
container_name: xxxxxxxxxxxx-nginx
volumes:
- ./deployment/nginx:/etc/nginx
logging:
driver: none
depends_on: ["radio"]
ports:
- 8080:80
- 8081:443
radio:
build:
context: .
dockerfile: "./deployment/Dockerfile"
image: test-radio
command: './manage.py runserver 0:3000'
container_name: xxxxxxxxxxxxxxx
restart: always
depends_on: ["db"]
volumes:
- type: bind
source: ./api
target: /app/api
- type: bind
source: ./xxxxxx
target: /app/xxxxx
environment:
POSTGRES_DB: $POSTGRES_DB
POSTGRES_USER: $POSTGRES_USER
POSTGRES_PASSWORD: $POSTGRES_PASSWORD
POSTGRES_HOST: $POSTGRES_HOST
AWS_KEY_ID: $AWS_KEY_ID
AWS_ACCESS_KEY: $AWS_ACCESS_KEY
AWS_S3_BUCKET_NAME: $AWS_S3_BUCKET_NAME
networks:
default:
The image is built with the following run.sh file:
#!/usr/bin/env sh
if [ ! -f .pass ]; then
  openssl rand -base64 32 > .pass
fi
#export POSTGRES_DB="xxxxxxxxxxxxxxxxx"
#export POSTGRES_USER="xxxxxxxxxxxxxx"
#export POSTGRES_PASSWORD="xxxxxxxxxxxxxxxxxxxx"
#export POSTGRES_HOST="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export POSTGRES_DB="xxxxxxxxxxxxxxxxxx"
export POSTGRES_USER="xxxxxxxxxxxxxxxxxxxx"
export POSTGRES_PASSWORD="`cat .pass`"
export POSTGRES_HOST="db"
export AWS_KEY_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_S3_BUCKET_NAME=""
echo "Your psql password is in .pass do not commit this file."
echo "The app will be available on localhost:8080 shortly"
if [ -z "$1" ]; then
docker-compose up
else
docker-compose up $1
fi
I'm wondering if my error is being caused by attempting to use a bash script to deploy the service on a Windows machine?
Details on the issue
The behavior observed by the OP definitely comes from a UID/GID mismatch, given that the specification
volumes:
- ./postgres-data:/var/lib/postgresql/data
(which can be viewed as a docker-compose equivalent of docker run -v "$PWD/postgres-data:/var/lib/postgresql/data" …) bind-mounts the $PWD/postgres-data folder inside the container, giving access to its files as is (including owner/group metadata).
Also, note that the handling of owner/group metadata between host and containers only relies on the numeric UID and GID, not on the owner and group names.
For more information about UIDs and GIDs in a Docker context, see also that article on Medium.
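To see the mismatch concretely, you can compare the numeric owner of the bind-mounted directory on the host with the UID the server runs as in the image (the path follows the compose file above; 999 is what the official Debian-based image typically uses):
# on the host: numeric UID/GID owning the data directory
ls -lnd ./postgres-data
# in the image: the UID/GID the postgres user maps to
docker run --rm postgres id postgres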
Workarounds if the bind-mount is necessary
For completeness, several possible solutions to work around the bind-mount UID-mismatch issue (including the most straightforward one, which consists of changing the files' UID :) are described in this answer on StackOverflow:
How to have host and container read/write the same files with Docker?
Other solutions
Following #ParanoidPenguin's comment, you may want to use a named volume, which mainly consists in using:
the docker volume command
and/or the docker run option -v …:….
Remarks:
docker run -v PATH1:PATH2 … triggers a bind-mount of PATH1 (host) to PATH2 (container) if and only if PATH1 is absolute (i.e., starts with a /) (e.g., -v "$PWD:$PWD" is a common idiom)
docker run -v NAME:PATH2 … mounts volume NAME to PATH2 (container) if and only if NAME does not contain any / (i.e., matches the regexp [a-zA-Z0-9][a-zA-Z0-9_.-]*).
even if we don't run docker volume create foo beforehand by hand, docker run -v foo:/data --rm -it debian will create the named volume foo if need be.
in order to populate the files of a named volume (or, respectively, to back them up) you can use an ephemeral container of image debian, ubuntu or so, combining at the same time a bind-mount and a volume mount:
Add a file /home/user/bar.txt in a new volume foo
file1=/home/user/bar.txt # initial file
uid=2000 # target User-ID in the volume
gid=2000 # target Group-ID in the volume
docker pull debian
docker run -v "$file1:$file1:ro" -v foo:/data \
-e file1="$file1" -e uid="$uid" -e gid="$gid" \
--rm -it debian bash -exc \
'cp -v -- "$file1" /data/bar.txt && chown -v $uid:$gid /data/bar.txt'
docker volume ls
Backup the foo volume in a tarball
date=$(date +'%Y%m%d_%H%M%S')
back="backup_$date.tar.gz"
destdir=/home/user/backup
mkdir -p "$destdir"
docker run -v foo:/data -v "$destdir:/backup" -e back="$back" \
--rm -it debian bash -exc 'tar cvzf "/backup/$back" /data'
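Conversely (not part of the original answer, but following the same pattern), such a tarball can be restored into the volume with another ephemeral container; the tarball name below is a placeholder:
back=backup_20230101_120000.tar.gz  # adjust to the backup you created
destdir=/home/user/backup
docker run -v foo:/data -v "$destdir:/backup:ro" -e back="$back" \
  --rm -it debian bash -exc 'tar xvzf "/backup/$back" -C /'
Since tar strips the leading / when creating the archive, extracting with -C / puts the files back under /data.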

Automatically set up a replica set and restore a database in MongoDB using Docker

This is my Dockerfile:
FROM mongo
WORKDIR /usr/src/app
COPY db /usr/src/app/db
COPY replica.js /usr/src/app/
CMD mongo
The replica.js is as follows:
rs.initiate();
This is my docker-compose file
mongo_server:
  image: mongo
  hostname: mongo_server.$ENV_NAME
  build:
    context: ./mongo
    dockerfile: Dockerfile
  expose:
    - 27017
  ports:
    - "$MONGO_PORT:27017"
  restart: always
  networks:
    localnet:
      aliases:
        - mongo_server.$ENV_NAME
  command: --replSet $MONGO_REPLICA --bind_ip_all
  volumes:
    - "mongovolume:/data/db"
The problem is that after docker-compose up succeeds,
I then need to run two commands manually:
docker exec 2b2 sh -c "mongo < /usr/src/app/replica.js" # 2b2 is id of container mongo
and
docker exec 2b2 sh -c "mongorestore --drop -d mydb /usr/src/app/db"
Only then is the replica set configured and the database restored. My question is: could I make this automatic, for example by moving the commands into an entrypoint.sh called from the Dockerfile, or by configuring them in docker-compose.yml, to reduce the manual work?
There is definitely a way by adding another container in your docker-compose file:
mongo_restore:
  image: mongo
  build:
    context: ./mongo
    dockerfile: Dockerfile
  networks:
    localnet:
      aliases:
        - mongo_server.$ENV_NAME
  entrypoint:
    - sh
  command:
    - -c
    - |
      # Step 1: Wait until mongo_server is fully up and running. Please insert your own code to check.
      # Step 2: Execute your restore script but make sure to target mongo_server instead
  volumes:
    - "mongovolume:/data/db"
There might be some syntax errors here and there, but the idea is the same; I have used this method in some other projects :)
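As a rough sketch of what those two placeholder steps could look like (an assumption on my part, not from the original answer; the host name, database name, and paths are taken from the question, and the wait loop is just one possible readiness check):
  command:
    - -c
    - |
      # Step 1: wait until mongo_server answers a ping (use mongosh instead of mongo on newer images)
      until mongo --host mongo_server.$ENV_NAME --eval "db.adminCommand('ping')" >/dev/null 2>&1; do
        sleep 2
      done
      # Step 2: initiate the replica set and restore the dump, targeting mongo_server
      mongo --host mongo_server.$ENV_NAME /usr/src/app/replica.js
      mongorestore --host mongo_server.$ENV_NAME --drop -d mydb /usr/src/app/db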

Add custom config location to Docker Postgres image preserving its access parameters

I have written a Dockerfile like this:
FROM postgres:11.2-alpine
ADD ./db/postgresql.conf /etc/postgresql/postgresql.conf
CMD ["-c", "config_file=/etc/postgresql/postgresql.conf"]
It just adds a custom config location to the generic Postgres image.
Now I have the following docker-compose service description
db:
  build:
    context: .
    dockerfile: ./db/Dockerfile
  environment:
    POSTGRES_PASSWORD: passwordhere
    POSTGRES_USER: user
    POSTGRES_DB: db_name
  ports:
    - 5432:5432
  volumes:
    - ./run/db-data:/var/lib/db/data
The problem is that I can no longer connect to the DB remotely using these credentials if I add this config option. Without that CMD line it works just fine.
If I prepend "postgres" in the CMD it has the same effect, since the underlying entrypoint script prepends it itself.
Provided all the files are where they need to be, I believe the only problem with your setup is that you've omitted an actual executable from the CMD -- specifying just the option. You need to actually run postgres:
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]
That should work!
EDIT in response to OP's first comment below
First, I did confirm that behavior doesn't change whether "postgres" is in the CMD or not. It's exactly as you said. Onward!
Then I thought there must be a problem with the particular postgresql.conf in use. If we could just figure out what the default file is... turns out we can!
How to get the existing postgres.conf out of the postgres image
1. Create docker-compose.yml with the following contents:
version: "3"
services:
db:
image: postgres:11.2-alpine
environment:
- POSTGRES_PASSWORD=passwordhere
- POSTGRES_USER=user
- POSTGRES_DB=db_name
ports:
- 5432:5432
volumes:
- ./run/db-data:/var/lib/db/data
2. Spin up the service using
$ docker-compose run --rm --name=postgres db
3. In another terminal get the location of the file used in this release:
$ docker exec -it postgres psql --dbname=db_name --username=user --command="SHOW config_file"
config_file
------------------------------------------
/var/lib/postgresql/data/postgresql.conf
(1 row)
4. View the contents of default postgresql.conf
$ docker exec -it postgres cat /var/lib/postgresql/data/postgresql.conf
5. Replace local config file
Now all we have to do is replace the local config file ./db/postgresql.conf with the contents of the known-working-state config and modify it as necessary.
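For example, one hypothetical way to seed the local file directly from the running container (paths and names as in the steps above):
$ docker exec postgres cat /var/lib/postgresql/data/postgresql.conf > ./db/postgresql.conf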
Database objects are only created once!
Database objects are only created once by the postgres container (source). So when developing the database parameters we have to remove them to make sure we're in a clean state.
Here's a nuclear (be careful!) option to
(1) force-remove all Docker containers (running or exited), and then
(2) remove all Docker volumes not attached to containers:
$ docker rm $(docker ps -a -q) -f && docker volume prune -f
So now we can be sure to start from a clean state!
Final setup
Let's bring our Dockerfile back into the picture (just like you have in the question).
docker-compose.yml
version: "3"
services:
db:
build:
context: .
dockerfile: ./db/Dockerfile
environment:
- POSTGRES_PASSWORD=passwordhere
- POSTGRES_USER=user
- POSTGRES_DB=db_name
ports:
- 5432:5432
volumes:
- ./run/db-data:/var/lib/db/data
Connect to the db
Now all we have to do is build from a clean state.
# ensure all volumes are deleted (see above)
$ docker-compose build
$ docker-compose run --rm --name=postgres db
We can now (still) connect to the database:
$ docker exec -it postgres psql --dbname=db_name --username=user --command="SELECT COUNT(1) FROM pg_database WHERE datname='db_name'"
Finally, we can edit the postgres.conf from a known working state.
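As a purely illustrative example of such an edit (these values are assumptions, not part of the original answer), you might start from the dumped default file and change only a few settings in ./db/postgresql.conf:
# ./db/postgresql.conf -- copied from the container's default config, then edited
listen_addresses = '*'     # keep accepting remote connections
max_connections = 100
logging_collector = on     # example tweak
log_statement = 'all'      # example tweak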
As per this other discussion, your CMD only has arguments and is missing the command itself. Try:
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]

Postgres in docker-compose can't find/mount /etc/postgresql/postgres.conf

I'm trying to mount my postgres.conf and pg_hba.conf using docker-compose, and I'm having difficulty understanding why it works when run with the docker CLI but doesn't with docker-compose.
The following docker-compose file causes the container to crash with this error:
/usr/local/bin/docker-entrypoint.sh: line 176: /config_file=/etc/postgresql/postgres.conf: No such file or directory
docker-compose.yml
services:
  postgres-master:
    image: postgres:11.4
    container_name: postgres-master
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
      - /home/agilob/dockers/pg/data:/var/lib/postgresql/data:rw
      - $PWD/pg:/etc/postgresql:rw
      - /etc/localtime:/etc/localtime:ro
    hostname: 'primary'
    environment:
      - PGHOST=/tmp
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
      - MAX_CONNECTIONS=10
      - MAX_WAL_SENDERS=5
      - PG_MODE=primary
      - PUID=1000
      - PGID=1000
    ports:
      - "5432:5432"
    command: 'config_file=/etc/postgresql/postgres.conf hba_file=/etc/postgresql/pg_hba.conf'
This command works fine:
docker run -d --name some-postgres -v "$PWD/postgres.conf":/etc/postgresql/postgresql.conf postgres -c 'config_file=/etc/postgresql/postgresql.conf'
Also, when I remove the command: section and run the same docker-compose file:
$ docker-compose -f postgres-compose.yml up -d
Recreating postgres-master ... done
$ docker exec -it postgres-master bash
root@primary:/# cd /etc/postgresql
root@primary:/etc/postgresql# ls
pg_hba.conf postgres.conf
The files are present in /etc/postgresql.
Files in $PWD/pg are present:
$ ls pg
pg_hba.conf postgres.conf
The following works fine:
command: postgres -c config_file='/etc/postgresql/postgres.conf' -c 'hba_file=/etc/postgresql/pg_hba.conf'
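Once the corrected command: is in place, you can verify that the container actually picked up the mounted files (a quick check using the container name and credentials from the compose file above):
$ docker exec -it postgres-master psql -U postgres -c 'SHOW config_file;' -c 'SHOW hba_file;'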