I know lots of questions sound like this, and they all have the same answer: delete your volumes to force it to reinitialize.
The problem is, I'm being careful to delete my volumes, yet it consistently spins up the container incorrectly every time.
My docker-compose.yml
version: "3.1"
services:
  db:
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_PASSWORD=changeme
      - POSTGRES_USER=myuser
    image: postgres
My process:
$ docker volume ls
DRIVER VOLUME NAME
$ docker-compose up -v # or docker-compose up --force-recreate
yet it always creates the "postgres" user instead of myuser. The startup output shows that the data directory "will be owned by user 'postgres'", and I can only docker exec as postgres, not myuser.
The instructions seem very straightforward. Am I missing something, or is this a bug?
What happens when you use the compose file above?
I can only docker exec as postgres, not myuser
The environment variable POSTGRES_USER controls the database user, not the Linux user. Take a look at the Arbitrary --user Notes section in the documentation to learn how to change the Linux user.
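A quick way to see the distinction (the container name db_1 is an assumption; check docker ps for yours): the database role myuser exists, while a matching Linux account does not.
$ docker exec -it db_1 psql -U myuser mydb -c '\du'   # the database role myuser is listed
$ docker exec -u myuser -it db_1 bash                 # fails: no such Linux user in the image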
Related
Goal:
Run Postgres in Docker by pulling postgres from Docker Hub (https://hub.docker.com/_/postgres)
Background:
I got this message when I tried running Postgres with Docker:
Error: Database is uninitialized and superuser password is not specified.
You must specify POSTGRES_PASSWORD to a non-empty value for the
superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
I found an explanation of why this happens at https://andrew.hawker.io/dailies/2020/02/25/postgres-uninitialized-error/.
Problem:
"Update your docker-compose.yml or corresponding configuration with the POSTGRES_HOST_AUTH_METHOD environment variable to revert back to previous behavior or implement a proper password." (https://andrew.hawker.io/dailies/2020/02/25/postgres-uninitialized-error/)
I don't understand how to apply that solution to my current situation.
Where can I find the docker-compose.yml?
Info:
I'm a newbie with PostgreSQL and Docker.
If you want to run PostgreSQL in Docker, you have to pass the POSTGRES_PASSWORD variable to the docker run command, like this:
$ docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
The error message is telling you exactly that.
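Once the container is up, a quick way to confirm the server started correctly (container name taken from the command above):
$ docker exec -it some-postgres psql -U postgres -c 'SELECT version();'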
Read more at https://hub.docker.com/_/postgres
A docker-compose.yml is just another option; you can simply use docker run as in the first answer. If you want to use docker-compose, the documentation has an example of it:
stack.yml
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
and run: docker-compose -f stack.yml up.
Everything is here:
https://hub.docker.com/_/postgres
So I'm setting up with Docker Swarm.
I am now cool with the docker stack deploy -c docker-compose.yml myapp command, which replaces my former docker-compose up.
But one of my services is my DB, and I need to run pg_restore inside it.
Previously with compose, I would run:
docker-compose run --rm postgres pg_restore --rest-of-command
How can I do the same with stack deploy?
Unfortunately, the container created with compose is not the same as the one from stack deploy: the first one is called myapp_postgres while the second myapp_postgres.1.zamd6kb6cy4p8mtfha0gn50vh.
I guess I could write something like docker exec 035803286af0, but then I lose all the benefits of the config from docker-compose.yml, which in this case is:
postgres:
  env_file:
    - ./.env
  image: postgres:11.0-alpine
  volumes:
    - "..:/app" # to make the dump accessible to the container
    - "/var/run/postgresql:/var/run/postgresql"
So this solution is not very IaC.
So ain't there a docker service run or something?
Thanks
You can follow the Docker image docs (Initialization scripts section) and create a *.sh script under /docker-entrypoint-initdb.d that will run pg_restore ... when the Postgres container starts as part of the Docker service.
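A minimal sketch of such a script, assuming the dump is reachable at /app/dump.pgdump via the bind mount from your compose file (the file name is a placeholder):
#!/bin/bash
# /docker-entrypoint-initdb.d/10-restore.sh
# Runs only on first initialization, i.e. when the data directory is empty.
set -e
# /app is the ..:/app bind mount from the compose file; dump.pgdump is an assumed name.
pg_restore --username "$POSTGRES_USER" --dbname "${POSTGRES_DB:-$POSTGRES_USER}" --no-owner /app/dump.pgdump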
It isn't a direct answer to your question; however, it may achieve your goal of restoring the dump during Postgres initialization.
The documentation of the postgres Docker image says the following about the env var POSTGRES_DB:
This optional environment variable can be used to define a different name for the default database that is created when the image is first started. If it is not specified, then the value of POSTGRES_USER will be used.
I have found that this is not true at all. For example, with this config:
version: '3.7'
services:
  db:
    image: postgres:11.3-alpine
    restart: always
    container_name: store
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=custom
      - POSTGRES_DB=customname
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
    secrets:
      - db_password
volumes:
  postgres_data:
secrets:
  db_password:
    file: config/.secrets.db_password
The default database is called postgres, and not customname as I have specified:
$ docker exec -it store psql -U custom customname
psql: FATAL: database customname does not exist
$ docker exec -it store psql -U custom postgres
psql (11.3)
Type "help" for help.
postgres=# ^D
Am I missing something obvious?
Providing the environment variables, as you did, SHOULD create the customname database when the container is initialized. There is no need to create the username and database in /docker-entrypoint-initdb.d/ init scripts.
I would make sure there isn't a leftover postgres_data volume. If you have previously started the container without specifying the environment variables, the volume gets created for the default postgres database. The next time you start the container (with POSTGRES_DB specified), the database creation step is skipped.
Just to make sure, remove any created volume (the name should be something like *_postgres_data)
docker volume ls
docker volume rm <volume_name>
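As a shortcut, docker-compose can remove its own named volumes for you, which forces a clean re-initialization on the next start:
$ docker-compose down --volumes
$ docker-compose up -d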
See also User and DB were not created from environment variable arguments. Hope that helps.
You need to create the database first.
If you want to do that automatically for new data directories, then the official Docker Postgres image has an option to do so by placing Initialization Scripts with the extension .sql in the /docker-entrypoint-initdb.d/ directory.
For example, create a file with contents like:
CREATE USER custom_user;
CREATE DATABASE custom_db;
GRANT ALL PRIVILEGES ON DATABASE custom_db TO custom_user;
And save it to /docker-entrypoint-initdb.d/create-db.sql in the container, e.g. with COPY in the Dockerfile. Scripts with the extension .sql inside that directory will only run if the data directory is empty, and multiple files run in alphabetical order of the file names.
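For example, a minimal Dockerfile doing just that (image tag matching the question; create-db.sql is the script above):
FROM postgres:11.3-alpine
COPY create-db.sql /docker-entrypoint-initdb.d/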
If you want to set it up manually, you can also do that with the createdb utility
createdb [connection-option...] [option...] [dbname [description]]
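For instance, against the container from the question (store and the custom superuser come from your config):
$ docker exec -it store createdb -U custom customname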
Or by connecting to the postgres database and using the CREATE DATABASE ... command, e.g.
docker exec -it store psql -U custom -d postgres -c 'CREATE DATABASE customname;'
If you connect interactively as in your question, you can do the following:
$ docker exec -it store psql -U custom postgres
psql (11.3)
Type "help" for help.
postgres=# CREATE DATABASE customname;
CREATE DATABASE
postgres=# \c customname
The last command will connect you to the customname database.
If you've changed the username/password since the very first run, try deleting the previously created volume:
docker volume rm <volume-name>
Then run the compose file again
GitLab CI keeps ignoring the SQL files in /docker-entrypoint-initdb.d/* in this project.
Here is the docker-compose.yml:
version: '3.6'
services:
  testdb:
    image: postgres:11
    container_name: lbsn-testdb
    restart: always
    ports:
      - "65432:5432"
    volumes:
      - ./testdb/init:/docker-entrypoint-initdb.d
Here is the .gitlab-ci.yml:
stages:
  - deploy

deploy:
  stage: deploy
  image: debian:stable-slim
  script:
    - bash ./deploy.sh
The deployment script basically uses rsync to deploy the content of the repository to the server via SSH:
rsync -rav --chmod=Du+rwx,Dgo-rwx,u+rw,go-rw -e "ssh -l gitlab-ci" --exclude=".git" --delete ./ "gitlab-ci@$DEPLOY_SERVER:test/"
and then ssh's into the server to stop and restart the container:
ssh "gitlab-ci#$DEPLOY_SERVER" "cd test && docker-compose down && docker-compose up --build --detach"
This all goes well, but when the container starts up, it is supposed to run all the files that are in /docker-entrypoint-initdb.d/* as we can see here.
But instead, when doing docker logs -f lbsn-testdb on the server, I can see it stating
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
and I have no clue why that happens. When running this container locally, or even when I ssh to that server, clone the repo, and bring up the containers manually, it all goes well and the SQL files are parsed. Just not when GitLab CI does it.
Any ideas on why that is?
This turned out to be easier than I expected, and to have nothing to do with GitLab CI at all, but with file permissions.
I passed --chmod=Du+rwx,Dgo-rwx,u+rw,go-rw to rsync, which looked really secure because only the user can do stuff. I confess that I probably copy-pasted it from somewhere on the internet. But then the files are mounted into the Docker container, and in there they have those permissions as well:
-rw------- 1 1005 1004 314 May 8 15:48 100-create-database.sql
On the host, my gitlab-ci user owns those files, so inside the container they are owned by that same UID 1005, and no other user gets any permissions on them.
Inside the container, though, the user doing the work is postgres, and it can't read those files. Instead of complaining about that, it just ignores them. That might be something to create an issue about…
Now that I pass --chmod=D755,F644, it looks like this:
-rw-r--r-- 1 1005 1004 314 May 8 15:48 100-create-database.sql
and the docker logs say
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/100-create-database.sql
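For reference, the corrected rsync invocation is the same command as before, with only the --chmod value changed:
rsync -rav --chmod=D755,F644 -e "ssh -l gitlab-ci" --exclude=".git" --delete ./ "gitlab-ci@$DEPLOY_SERVER:test/"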
Too simple to have thought of it in the first place :-/
If you have already run the postgres service before, the init files will be ignored when you restart it, so try using --build to build the image again:
docker-compose up --build -d
And before you run it again:
Check the existing volumes with
docker volume ls
Then remove the one used by your pg service with
docker volume rm {volume_name}
Make sure the volume is not in use by a container; if it is, remove the container as well.
I found this topic while investigating a similar problem with a PostgreSQL installation using the docker-compose tool.
The solution is basically the same. For the provided configuration:
version: '3.6'
services:
  testdb:
    image: postgres:11
    container_name: lbsn-testdb
    restart: always
    ports:
      - "65432:5432"
    volumes:
      - ./testdb/init:/docker-entrypoint-initdb.d
Your deployment script should set 0755 permissions on your postgres container volume, e.g. chmod -R 0755 ./testdb in this case. It is important to make all subdirectories visible, so the -R option is required.
The official Postgres image (the Alpine variant used below) runs under an internal postgres user with UID 70. Your application user on the host most likely has a different UID, like 1000 or something similar. That is why the postgres init script misses installation steps: a permissions error. This issue has been around for several years and still exists in the latest PostgreSQL version (currently 12.1).
Please be aware of the security implications of init files that are readable by everyone on the system. It is better to use shell environment variables to pass secrets into the init script.
Here is a docker-compose example:
postgres:
image: postgres:12.1-alpine
container_name: app-postgres
environment:
- POSTGRES_USER
- POSTGRES_PASSWORD
- APP_POSTGRES_DB
- APP_POSTGRES_SCHEMA
- APP_POSTGRES_USER
- APP_POSTGRES_PASSWORD
ports:
- '5432:5432'
volumes:
- $HOME/app/conf/postgres:/docker-entrypoint-initdb.d
- $HOME/data/postgres:/var/lib/postgresql/data
The corresponding create-users.sh script for creating users may look like this:
#!/bin/bash
set -o nounset
set -o errexit
set -o pipefail
POSTGRES_USER="${POSTGRES_USER:-postgres}"
POSTGRES_PASSWORD="${POSTGRES_PASSWORD}"
APP_POSTGRES_DB="${APP_POSTGRES_DB:-app}"
APP_POSTGRES_SCHEMA="${APP_POSTGRES_SCHEMA:-app}"
APP_POSTGRES_USER="${APP_POSTGRES_USER:-appuser}"
APP_POSTGRES_PASSWORD="${APP_POSTGRES_PASSWORD:-app}"
DATABASE="${APP_POSTGRES_DB}"
# Create single database.
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "CREATE DATABASE ${DATABASE}"
# Create app user.
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "CREATE USER ${APP_POSTGRES_USER} SUPERUSER PASSWORD '${APP_POSTGRES_PASSWORD}'"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "GRANT ALL PRIVILEGES ON DATABASE ${DATABASE} TO ${APP_POSTGRES_USER}"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --dbname "${DATABASE}" --command "CREATE SCHEMA ${APP_POSTGRES_SCHEMA} AUTHORIZATION ${APP_POSTGRES_USER}"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "ALTER USER ${APP_POSTGRES_USER} SET search_path = ${APP_POSTGRES_SCHEMA},public"
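Since the compose snippet lists the variable names without values, docker-compose passes them through from the shell environment. An invocation might look like this (values are placeholders):
$ export POSTGRES_PASSWORD='changeme'
$ export APP_POSTGRES_PASSWORD='app-secret'
$ docker-compose up -d postgres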
I know there are probably many ways to do this. What I am looking for is a way to do it using (preferably) only my Dockerfile and one container.
Here is my current Dockerfile:
FROM postgres:latest
ENV POSTGRES_USER=myuser
ENV POSTGRES_PASSWORD=mypassword
Here is the command I used to build the image:
docker build -t my_db .
And here is the command that I use to run the container:
docker run -p 5432:5432 my_db
What I would like to do is have the data stored in the container if possible, but I don't seem to understand how or where Postgres stores its data. I saw in another Stack Overflow post that Postgres stores it by default in /var/lib/postgresql/data, however when I look in that folder I see nothing. I can however verify that Postgres is running, because I am using a client called TeamSQL, and from that client I can create tables and insert/read data.
I can also verify that when I stop the container and restart it, the data is definitely not persisted.
Note: this is running on macOS, but I don't think that is relevant.
You should use Docker volumes: when you stop your container, the data persists on the host machine, and when you start the container again, the volume is mounted back into it.
docker volume create pgdata
docker run -p 5432:5432 -v pgdata:/var/lib/postgresql/data my_db
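Alternatively, a bind mount to a host directory also persists the data and makes it visible on the host (the pgdata path here is just an example):
$ docker run -p 5432:5432 -v "$PWD/pgdata:/var/lib/postgresql/data" my_db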