I am using a docker-compose file (version 3.7) to run PostgreSQL, but I can't get my init.sql to run through the docker-entrypoint-initdb.d entry point.
I understand that the volume directory on my machine needs to be empty, and I am sure I am doing that correctly. I must be missing something fairly simple/obvious.
version: '3.7'
services:
  postgresdb:
    environment:
      POSTGRES_DB: appdb
      POSTGRES_USER: appdb
      POSTGRES_PASSWORD: ########
    image: bitnami/postgresql:latest
    ports:
      - "5432:5432"
    volumes:
      - ./db/postgres_volume:/bitnami:rw
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
    restart: always
And here is db/init.sql:
CREATE TABLE user (
id SERIAL NOT NULL PRIMARY KEY,
name VARCHAR(255) NOT NULL
);
My volume directory is empty.
$ ls db/postgres_volume/
$
The container starts with no issues, but from the logs I can't tell whether the init.sql script was executed. The scripts seem to be loaded, but I am not sure about execution.
$ docker-compose -f docker-compose.yml up -d postgresdb
Creating network "app_default" with the default driver
Creating app_postgresdb_1 ... done
$
postgresdb_1 | INFO ==> Loading custom scripts...
postgresdb_1 | INFO ==> Loading user's custom files from /docker-entrypoint-initdb.d ...
postgresdb_1 | INFO ==> Starting PostgreSQL in background...
Logging into the container and then into my database, I see no tables.
$ docker exec -it 737845344cf2 bash
I have no name!@737845344cf2:/$ psql -U appdb appdb
appdb=> \dt
Did not find any relations.
After the container is created, I see a "postgresql" folder inside db/postgres_volume, but it has no contents.
$ ls db/postgres_volume/
postgresql
$ ls db/postgres_volume/postgresql/
$
First thing: it will never create the table because you are using the reserved word user in the table creation. Change user to "user":
CREATE TABLE "user"(
user_id serial PRIMARY KEY,
username VARCHAR (50) UNIQUE NOT NULL
)
Second thing: the above will work fine for initializing the DB, but there is a mounting issue in the latest image that might be the reason the mount directory seemed empty.
Also, per the documentation (see "Persisting your database"), the persistence path is -v /path/to/postgresql-persistence:/bitnami/postgresql.
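If you want to double-check the fix end to end, you can re-run the init scripts against a fresh data directory, since they are only executed when that directory is empty. A minimal sketch, assuming the data under ./db/postgres_volume can be thrown away:
docker-compose down
rm -rf ./db/postgres_volume/*          # discards existing data
docker-compose up -d postgresdb
docker-compose logs postgresdb         # look for the "Loading user's custom files" line
docker-compose exec postgresdb psql -U appdb -d appdb -c '\dt'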
Related
I am trying to start a PostgreSQL docker container of version 10.5.
Before that I used version 9.6 in the same docker-compose.yml file, and no data has been populated in the database.
Now, after changing the version of the postgres container, I'm not able to run docker-compose up. It throws the error below.
FATAL: database files are incompatible with server
DETAIL: The data directory was initialized by PostgreSQL version 9.6,
which is not compatible with this version 10.5 (Debian
10.5-2.pgdg90+1)
This is how the docker-compose.yml file looks like.
version: '2'
services:
  postgres_service:
    container_name: postgresql_container
    restart: always
    image: postgres:10.5
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./postgresql/init:/docker-entrypoint-initdb.d
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=password
volumes:
  postgres-data:
    driver: local
Can someone please let me know where the issue is? Where am I making a mistake?
Do I need to delete any volumes before proceeding with the new postgres version?
I also have PostgreSQL installed locally.
postgres=# select version();
version
-------------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 10.10 (Ubuntu 10.10-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0, 64-bit
(1 row)
Will this cause any issue?
The problem is caused by the volume your compose file created to store your database: it still keeps the old data initialized by PostgreSQL 9.6. That volume is named postgres-data and was created because you use a named volume in your docker-compose.yml. To get rid of it, you can use one of the approaches below:
Using the docker-compose command:
Run docker-compose down -v; this will stop all containers in that compose project and remove all the named volumes it defines.
You can take a look at the docker-compose down command documentation.
Using the docker volume command:
Run docker volume ls to get the list of current volumes on your machine; you should see your volume in that list too:
DRIVER VOLUME NAME
local postgres-data
Run docker volume rm postgres-data to remove that volume. If your container is still running and you can't remove it, you can use -f to force the removal.
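For example, a minimal reset of this particular setup could look like the sketch below. Note that this throws the old 9.6 data away, so dump it first with pg_dump if you still need it:
docker-compose down -v                  # stops the containers and removes the postgres-data volume
docker-compose up -d                    # re-creates the volume, now initialized by postgres:10.5
docker volume ls | grep postgres-data   # confirm the volume was re-created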
Hope that helps!
What worked for me was deleting the pgdata folder in the root of my project and running docker-compose build. On the next start it showed me
/usr/local/bin/docker-entrypoint.sh: /docker-entrypoint-initdb.d/create-multiple-postgres-databases.sh: /bin/bash: bad interpreter: Permission denied
To fix it, I ran
chmod +x pg-init-scripts/create-multiple-postgresql-databases.sh
Notice that the .sh file name should match the one you have. And finally, docker-compose up -d.
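A quick way to double-check that the executable bit is actually set (the path comes from the chmod command above):
ls -l pg-init-scripts/create-multiple-postgresql-databases.sh
# the mode should now read -rwxr-xr-x instead of -rw-r--r--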
The documentation of the postgres Docker image says the following about the env var POSTGRES_DB:
This optional environment variable can be used to define a different name for the default database that is created when the image is first started. If it is not specified, then the value of POSTGRES_USER will be used.
I have found that this is not true at all. For example, with this config:
version: '3.7'
services:
  db:
    image: postgres:11.3-alpine
    restart: always
    container_name: store
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=custom
      - POSTGRES_DB=customname
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
    secrets:
      - db_password
volumes:
  postgres_data:
secrets:
  db_password:
    file: config/.secrets.db_password
The default database is called postgres, and not customname as I have specified:
$ docker exec -it store psql -U custom customname
psql: FATAL: database customname does not exist
$ docker exec -it store psql -U custom postgres
psql (11.3)
Type help for help.
postgres=# ^D
Am I missing something obvious?
Providing the environment variables, as you did, SHOULD create the customname database when the container is initialized. There is no need to create the username and database in the /docker-entrypoint-initdb.d/ init scripts.
I would make sure there isn't any hanging postgres_data volume. If you have previously started the container without specifying the environment variables, the volume gets created for the default postgres database. The next time you start the container (with the POSTGRES_DB env specified), the database creation part is skipped.
Just to make sure, remove any created volume (the name should be something like *_postgres_data)
docker volume ls
docker volume rm <volume_name>
See User and DB were not created from environment variable arguments as well. Hope that helps
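If it helps, here is a minimal reset-and-verify sketch. The volume name myproject_postgres_data is hypothetical; use whatever docker volume ls actually prints:
docker-compose down
docker volume rm myproject_postgres_data                      # hypothetical name, taken from `docker volume ls`
docker-compose up -d
docker exec -it store psql -U custom -d customname -c '\l'    # customname should now exist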
You need to create the database first.
If you want to do that automatically for new data directories, then the official Docker Postgres image has an option to do so by placing Initialization Scripts with the extension .sql in the /docker-entrypoint-initdb.d/ directory.
For example, create a file with contents like:
CREATE USER custom_user;
CREATE DATABASE custom_db;
GRANT ALL PRIVILEGES ON DATABASE custom_db TO custom_user;
And save it to /docker-entrypoint-initdb.d/create-db.sql in the container, e.g. with COPY in the Dockerfile. Scripts with the .sql extension inside that directory will only run if the data directory is empty, and multiple files run in alphabetical order of their file names.
If you want to set it up manually, you can also do that with the createdb utility
createdb [connection-option...] [option...] [dbname [description]]
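For instance, createdb can be invoked through the running container as well. A small sketch, reusing the container name from the question:
docker exec -it store createdb -U postgres customname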
Or by connecting to the postgres database and using the CREATE DATABASE ... command, e.g.
docker exec -it store psql -U postgres -c 'CREATE DATABASE customname;'
If you connect interactively as in your question, you can do the following:
$ docker exec -it store psql -U postgres
psql (11.3)
Type help for help.
postgres=# CREATE DATABASE customname;
CREATE DATABASE
postgres=# \c customname
The last command will connect you to the customname database.
If you've changed the username/password since the very first run, try deleting the previously created volume
docker volume rm <volume-name>
Then run the compose file again
GitLab CI keeps ignoring the SQL files in /docker-entrypoint-initdb.d/* in this project.
Here is docker-compose.yml:
version: '3.6'
services:
  testdb:
    image: postgres:11
    container_name: lbsn-testdb
    restart: always
    ports:
      - "65432:5432"
    volumes:
      - ./testdb/init:/docker-entrypoint-initdb.d
Here is .gitlab-ci.yml:
stages:
  - deploy

deploy:
  stage: deploy
  image: debian:stable-slim
  script:
    - bash ./deploy.sh
The deployment script basically uses rsync to deploy the content of the repository to the server via SSH:
rsync -rav --chmod=Du+rwx,Dgo-rwx,u+rw,go-rw -e "ssh -l gitlab-ci" --exclude=".git" --delete ./ "gitlab-ci@$DEPLOY_SERVER:test/"
and then ssh's into the server to stop and restart the container:
ssh "gitlab-ci#$DEPLOY_SERVER" "cd test && docker-compose down && docker-compose up --build --detach"
This all goes well, but when the container starts up, it is supposed to run all the files that are in /docker-entrypoint-initdb.d/* as we can see here.
But instead, when doing docker logs -f lbsn-testdb on the server, I can see it stating
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
and I have no clue why that happens. When I run this container locally, or even when I SSH to that server, clone the repo, and bring up the containers manually, it all goes well and the SQL files are parsed. Just not when GitLab CI does it.
Any ideas on why that is?
This was easier than I expected, and in the end had nothing to do with GitLab CI but with file permissions.
I passed --chmod=Du+rwx,Dgo-rwx,u+rw,go-rw to rsync, which looked really secure because only the user can do anything. I confess that I probably copy-pasted it from somewhere on the internet. But the files are then mounted into the Docker container, and in there they have those permissions as well:
-rw------- 1 1005 1004 314 May 8 15:48 100-create-database.sql
On the host my gitlab-ci user owns those files; inside the container they are accordingly owned by the user with ID 1005, and no permissions are given to any other user.
Inside the container, however, the user doing the work is postgres, and it can't read those files. Instead of complaining about that, it just ignores them. That might be something to create an issue about…
Now that I pass --chmod=D755,F644, it looks like this:
-rw-r--r-- 1 1005 1004 314 May 8 15:48 100-create-database.sql
and the docker logs say
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/100-create-database.sql
Too easy to think of in the first place :-/
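For reference, the full rsync call with the relaxed permissions then looks roughly like this (same command as above, only the --chmod value changed):
rsync -rav --chmod=D755,F644 -e "ssh -l gitlab-ci" --exclude=".git" --delete ./ "gitlab-ci@$DEPLOY_SERVER:test/"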
If you have already run the postgres service before, the init files will be ignored when you restart it, so try using --build to rebuild the image:
docker-compose up --build -d
And before you run it again:
Check the existing volumes with
docker volume ls
Then remove the one that you are using for your pg service with
docker volume rm {volume_name}
-> Make sure that the volume is not being used by a container; if it is, remove that container as well.
I found this topic while investigating a similar problem with a PostgreSQL installation using the docker-compose tool.
The solution is basically the same. For the provided configuration:
version: '3.6'
services:
  testdb:
    image: postgres:11
    container_name: lbsn-testdb
    restart: always
    ports:
      - "65432:5432"
    volumes:
      - ./testdb/init:/docker-entrypoint-initdb.d
Your deployment script should set 0755 permissions on your postgres container volume, e.g. chmod -R 0755 ./testdb in this case. It is important to make all subdirectories visible, so the -R option to chmod is required.
The official Postgres image (Alpine variant) runs under the internal postgres user with UID 70. Your application user on the host most likely has a different UID, like 1000 or something similar. That is why the postgres init script skips installation steps: it hits a permissions error. This issue has been around for several years and still exists in the latest PostgreSQL version (currently 12.1).
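A quick way to see the mismatch is to compare the numeric owner of the init files on the host with the postgres UID inside the container. A small sketch, using the container name from the compose file above:
ls -ln ./testdb/init                   # numeric UID/GID of the files on the host
docker exec lbsn-testdb id postgres    # UID of the postgres user inside the container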
Please be aware of the security implications of having init files that are readable by everyone on the system. It is good practice to use shell environment variables to pass secrets into the init script.
Here is a docker-compose example:
postgres:
  image: postgres:12.1-alpine
  container_name: app-postgres
  environment:
    - POSTGRES_USER
    - POSTGRES_PASSWORD
    - APP_POSTGRES_DB
    - APP_POSTGRES_SCHEMA
    - APP_POSTGRES_USER
    - APP_POSTGRES_PASSWORD
  ports:
    - '5432:5432'
  volumes:
    - $HOME/app/conf/postgres:/docker-entrypoint-initdb.d
    - $HOME/data/postgres:/var/lib/postgresql/data
The corresponding create-users.sh script for creating the users may look like this:
#!/bin/bash
set -o nounset
set -o errexit
set -o pipefail
POSTGRES_USER="${POSTGRES_USER:-postgres}"
POSTGRES_PASSWORD="${POSTGRES_PASSWORD}"
APP_POSTGRES_DB="${APP_POSTGRES_DB:-app}"
APP_POSTGRES_SCHEMA="${APP_POSTGRES_SCHEMA:-app}"
APP_POSTGRES_USER="${APP_POSTGRES_USER:-appuser}"
APP_POSTGRES_PASSWORD="${APP_POSTGRES_PASSWORD:-app}"
DATABASE="${APP_POSTGRES_DB}"
# Create single database.
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "CREATE DATABASE ${DATABASE}"
# Create app user.
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "CREATE USER ${APP_POSTGRES_USER} SUPERUSER PASSWORD '${APP_POSTGRES_PASSWORD}'"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "GRANT ALL PRIVILEGES ON DATABASE ${DATABASE} TO ${APP_POSTGRES_USER}"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --dbname "${DATABASE}" --command "CREATE SCHEMA ${APP_POSTGRES_SCHEMA} AUTHORIZATION ${APP_POSTGRES_USER}"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "ALTER USER ${APP_POSTGRES_USER} SET search_path = ${APP_POSTGRES_SCHEMA},public"
Here is my docker-compose.yml:
version: "2"
services:
  my_postgres:
    image: postgres:9.6
    volumes:
      - /Users/my_user_name/test_docker/my_volume_space:/var/lib/postgresql
    ports:
      - "5432:5432"
I entered the following commands on my Mac:
docker-machine start
docker-machine env
eval "$(docker-machine env default)"
docker-compose up
psql -h 192.168.99.100 -p 5432 -U postgres
create table test (my_id bigserial primary key);
INSERT INTO test (my_id) values (1);
SELECT * FROM test;
\q
Originally I thought the above commands would cause a .sql file to be created in ./my_volume_space on the host computer. But I don't see any .sql file in ./my_volume_space, just an empty data directory.
Furthermore, if I docker-compose down and docker-compose up again, I can see that my data in the database is now gone.
I suspect that when I create data while the container is running, it is not stored back to ./my_volume_space, so when I restart there is nothing to mount from the host.
Can someone tell me what I am doing wrong?
Thanks
Path (bind-mount) volumes do not work on docker-machine (macOS) with the postgres image (source).
The work-around is to use named volumes. Example docker-compose.yaml:
services:
  test-postgres-compose:
    ...
    volumes:
      - pgdata:/var/lib/postgresql/data
    ...
volumes:
  pgdata:
Docker compose volumes info
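A quick persistence check with the named volume in place could look like this sketch (service name and table borrowed from the question above):
docker-compose up -d && sleep 10   # give the first-time init a moment
docker-compose exec test-postgres-compose psql -U postgres -c 'CREATE TABLE IF NOT EXISTS test (my_id bigserial PRIMARY KEY);'
docker-compose down && docker-compose up -d && sleep 5
docker-compose exec test-postgres-compose psql -U postgres -c '\dt'   # "test" should still be listed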
I believe the volume path in the container should be /var/lib/postgresql/data, not /var/lib/postgresql.
I am using docker-compose to deploy a multi-container Python Flask web application. I'm having difficulty understanding how to create tables in the PostgreSQL database during the build, so that I don't have to add them manually with psql.
My docker-compose.yml file is:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/flask-app/static
  env_file: .env
  command: /usr/local/bin/gunicorn -w 2 -b :8000 app:app
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
data:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  command: "true"
postgres:
  restart: always
  image: postgres:latest
  volumes_from:
    - data
  ports:
    - "5432:5432"
I don't want to have to enter psql in order to type in:
CREATE DATABASE my_database;
CREATE USER this_user WITH PASSWORD 'password';
GRANT ALL PRIVILEGES ON DATABASE "my_database" to this_user;
\i create_tables.sql
I would appreciate guidance on how to create the tables.
The COPY approach in the Dockerfile didn't work for me, but I managed to run my init.sql file by adding the following to docker-compose.yml:
volumes:
  - ./init.sql:/docker-entrypoint-initdb.d/init.sql
init.sql was in the same directory as my docker-compose.yml.
I picked the solution from this gist. Check this article for more information.
I don't want to have to enter psql in order to type in
You can simply use container's built-in init mechanism:
COPY init.sql /docker-entrypoint-initdb.d/10-init.sql
This makes sure that your SQL is executed after the DB server has properly booted up.
Take a look at their entrypoint script. It does some preparations to start psql correctly and looks in the /docker-entrypoint-initdb.d/ directory for files ending in .sh, .sql and .sql.gz.
The 10- prefix in the filename is there because files are processed in ASCII order. You can name your other init files 20-create-tables.sql and 30-seed-tables.sql.gz, for example, and be sure that they are processed in the order you need.
Also note that the invoking command does not specify a database. Keep that in mind if you are, say, migrating to docker-compose and your existing .sql files don't specify a DB either.
Your files will be processed at the container's first start rather than at the build stage, though. Since Docker Compose stops containers and then resumes them, there's almost no difference, but if it's crucial for you to init the DB at the build stage, I suggest still using the built-in init method by calling /docker-entrypoint.sh from your Dockerfile and then cleaning up the /docker-entrypoint-initdb.d/ directory.
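To confirm that the entrypoint actually picked your file up, the container log is the quickest place to look. A minimal check (substitute your own container name for the placeholder):
docker logs <container-name> 2>&1 | grep docker-entrypoint-initdb.d
# you want to see "running /docker-entrypoint-initdb.d/10-init.sql" here rather than "ignoring"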
I would create the tables as part of the build process. Create a new Dockerfile in a new directory ./database/
FROM postgres:latest
COPY . /fixtures
WORKDIR /fixtures
RUN /fixtures/setup.sh
./database/setup.sh would look something like this:
#!/bin/bash
set -e
/etc/init.d/postgresql start
psql -f create_fixtures.sql
/etc/init.d/postgresql stop
Put your create user, create database, create table sql (and any other fixture data) into a create_fixtures.sql file in the ./database/ directory.
and finally your postgres service will change to use build:
postgres:
  build: ./database/
  ...
Note: Sometimes you'll need a sleep 5 (or even better a script to poll and wait for postgresql to start) after the /etc/init.d/postgresql start line. In my experience either the init script or the psql client handles this for you, but I know that's not the case with mysql, so I thought I'd call it out.
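If you do need to wait, a small polling loop with pg_isready (which ships with the Postgres image) is usually enough. This is just a sketch of the idea:
#!/bin/bash
# poll until the server accepts connections, giving up after ~30 seconds
for i in $(seq 1 30); do
    pg_isready -h localhost -p 5432 && break
    sleep 1
done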