Facing issues due to ownership on mounted folder with Docker - postgresql

The following command works fine:
sudo docker run -d -p 8080:80 --name openproject -e SECRET_KEY_BASE=somesecret \
-v /var/lib/openproject/pgdata:/var/lib/postgresql/9.6/main \
-v /var/lib/openproject/logs:/var/log/supervisor \
-v /var/lib/openproject/static:/var/db/openproject \
openproject/community:8
But this command doesn't start the container:
sudo docker run -d -p 8080:80 --name openproject -e SECRET_KEY_BASE=somesecret \
-v ~/Dropbox/openproject/pgdata:/var/lib/postgresql/9.6/main \
-v /var/lib/openproject/logs:/var/log/supervisor \
-v ~/Dropbox/openproject/static:/var/db/openproject \
openproject/community:8
I've also tried making /var/lib/openproject/pgdata a symlink to ~/Dropbox/openproject/pgdata, but that didn't work either.
The Docker logs say: PostgreSQL Config owner (postgres:102) and data owner (app:1000) do not match, and config owner is not root.
Is there any way to mount a non-root folder onto a root-owned folder inside the Docker container and resolve this issue?
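One commonly suggested workaround (a sketch only, not verified against this image) is to make the ownership of the host directory match the UID that the error message reports for the config owner (postgres:102), so data owner and config owner agree:
sudo mkdir -p ~/Dropbox/openproject/pgdata
# 102 is taken from the "postgres:102" in the error message above
sudo chown -R 102:102 ~/Dropbox/openproject/pgdata
After that, retry the second docker run command.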

Related

Why my local file is empty after mounting?

When I try to mount a PostgreSQL data directory, I see that my local directory is empty.
This is my code:
winpty docker run -it \
-e POSTGRES_USER="root" \
-e POSTGRES_PASSWORD="root" \
-e POSTGRES_DB="ny_taxi" \
-v /c/src/ny:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:13
When I run that command in MINGW64, Docker produces a file named "ny;C" and it's empty.
Why is it empty, and why is it named "ny;C" instead of "ny"? How can I fix this problem?
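A commonly cited explanation (a sketch, untested here) is that MSYS/MINGW path conversion rewrites the -v argument before Docker sees it, which is where the odd "ny;C" name comes from; a leading double slash on the host path is the usual way to keep MSYS from converting it:
winpty docker run -it \
-e POSTGRES_USER="root" \
-e POSTGRES_PASSWORD="root" \
-e POSTGRES_DB="ny_taxi" \
-v //c/src/ny:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:13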

Install pgRouting in docker postgis-postgresql container

I created a postgis database with docker using the postgis image as usual
docker run -d \
--name mypostgres \
-p 5555:5432 \
-e POSTGRES_PASSWORD=postgres \
-v /data/postgres/data:/var/lib/postgresql/data \
-v /data/postgres/lib:/usr/lib/postgresql/10/lib \
postgis/postgis:10-3.0
Now I can see all the extensions in the database; it has postgis, which is fine, but it does not have pgrouting.
So I pulled another image:
docker pull pgrouting/pgrouting:11-3.1-3.1.3
and ran the same command:
docker run -d \
--name pgrouting \
-p 5556:5432 \
-e POSTGRES_PASSWORD=postgres \
-v /data/pgrouting/data/:/var/lib/postgresql/data/ \
-v /data/postgres/lib/:/usr/lib/postgresql/11/lib/ \
pgrouting/pgrouting:11-3.1-3.1.3
But when I execute this command:
create extension pgrouting;
I get this error message:
could not load library "/usr/lib/postgresql/11/lib/plpgsql.so": /usr/lib/postgresql/11/lib/plpgsql.so: undefined symbol: AllocSetContextCreate
I can't solve this problem. Can anyone help me?
Thanks a lot.
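A hedged observation rather than a confirmed fix: the undefined symbol suggests the plpgsql.so being loaded is the PostgreSQL 10 copy that /data/postgres/lib/ was filled with by the first container, bind-mounted over the PostgreSQL 11 lib path. Dropping that lib mount lets the pgrouting image load its own bundled libraries:
docker run -d \
--name pgrouting \
-p 5556:5432 \
-e POSTGRES_PASSWORD=postgres \
-v /data/pgrouting/data/:/var/lib/postgresql/data/ \
pgrouting/pgrouting:11-3.1-3.1.3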

Can't find /var/lib/postgresql/data/ directory on Ubuntu after creating the Docker container

I found the following mentioned in many places:
docker run -d \
--name some-postgres \
-e POSTGRES_PASSWORD=mysecretpassword \
-e PGDATA=/var/lib/postgresql/data/pgdata \
-v /custom/mount:/var/lib/postgresql/data \
postgres
My only question is that I am unable to find the /var/lib/postgresql/data/pgdata directory itself. I don't see any postgresql directory under /var/lib. Why is that? And how does it work if there is no such directory?
The -v in your command mounts /custom/mount on your host (the machine where you run the docker command) to the container's /var/lib/postgresql/data. So the pgdata you are looking for is at /custom/mount/pgdata on the host.
Of course, /custom/mount is only an example path; you have to replace it with your real directory.
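A quick way to see the mapping in practice (a sketch, using the container name and paths from the command above and assuming the container is running):
# On the host: the data files live under the mounted directory
ls /custom/mount/pgdata
# Inside the container: the same files appear under the mount target
docker exec -it some-postgres ls /var/lib/postgresql/data/pgdata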

Hasura: use SSL certificates for the Postgres connection

I can run Hasura from the Docker image.
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://username:password@hostname:port/dbname \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
hasura/graphql-engine:latest
But I also have a Postgres instance that can only be accessed with three certificates:
psql "sslmode=verify-ca sslrootcert=server-ca.pem \
sslcert=client-cert.pem sslkey=client-key.pem \
hostaddr=$DB_HOST \
port=$DB_PORT \
user=$DB_USER dbname=$DB_NAME"
I don't see a configuration for Hasura that allows me to connect to a Postgres instance in such a way.
Is this something I'm supposed to pass into the database connection URL?
How should I do this?
You'll need to mount your certificates into the docker container and then configure libpq (which is what hasura uses underneath) to use the required certificates with these environment variables. It'll be something like this (I haven't tested this):
docker run -d -p 8080:8080 \
-v /absolute-path-of-certs-folder:/certs \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://hostname:port/dbname \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
-e PGSSLMODE=verify-ca \
-e PGSSLCERT=/certs/client-cert.pem \
-e PGSSLKEY=/certs/client-key.pem \
-e PGSSLROOTCERT=/certs/server-ca.pem \
hasura/graphql-engine:latest
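A small follow-up check (a sketch, assuming the paths above and that the image ships a shell): confirm the certificates are actually visible inside the container, and keep in mind that libpq will refuse a client key file that is group- or world-readable.
# On the host, before starting the container (hypothetical path):
chmod 0600 /absolute-path-of-certs-folder/client-key.pem
# After starting it (replace <container-id> with the real ID):
docker exec -it <container-id> ls -l /certs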

How do I upgrade Docker Postgresql without removing existing data?

I am a beginner at both Docker and PostgreSQL.
How do I upgrade Docker PostgreSQL 9.5 to 9.6 without losing my current database?
FYI: I'm using Ubuntu 14 and Docker 17.09.
Thanks in advance.
To preserve data across Docker containers, a volume is required. The volume mounts directly onto the file system of the container and is preserved when the container is killed. It sounds, though, like the container was created without a volume attached. The best way to get that data is to copy the data folder out of the container onto the host file system, then create a new Docker container from the new image and copy the data directory into the running container's data directory, in this case pgdata:/var/lib/postgresql/data:
docker cp [containerID]:/var/lib/postgresql/data /home/user/data/data-dir/
docker stop [containerID]
docker run -it --rm -v pgdata:/var/lib/postgresql/data postgres
docker cp /home/user/data/data-dir [containerID]:/var/lib/postgresql/data
In case that doesn't work, I would just dump the current databases and restore them into the new container.
If you do not store the database files on external storage (outside of the container), then I know only one way to preserve your database:
1) Back up the database
2) Shut down the postgres 9.5 container
3) Run a new postgres 9.6 container
4) Restore the backup
You can use pg_dumpall to back up the whole database cluster:
pg_dumpall > backupfile
The resulting dump can be restored with psql:
psql -f backupfile postgres
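A sketch of the same steps driven from the host via docker exec (the container names here are placeholders, and the default postgres superuser is assumed):
docker exec old-postgres95 pg_dumpall -U postgres > backupfile
docker stop old-postgres95
docker run -d --name new-postgres96 -e POSTGRES_PASSWORD=mysecretpassword -v pgdata96:/var/lib/postgresql/data postgres:9.6
# give the new server a few seconds to finish initializing before restoring
docker exec -i new-postgres96 psql -U postgres < backupfile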
I know it's been some time since you asked, but I hope my solution will help future Googlers :)
I've tried to create a solution that is as stateless as possible, to be compatible with CI and upgrade scripts.
The script:
Backs up the whole pg instance using pg_dumpall.
Uses the dump to create the new instance using initdb and psql -f.
The only requirement is a volume with some existing pg_data directory in it.
docker stop lms_db_1
DB_NAME=lms
DB_USERNAME=lmsweb
DB_PASSWORD=123456
CURRENT_DATE=$(date +%d-%m-%Y_%H_%M_%S)
MOUNT_PATH=/pg_data
PG_OLD_DATA=/pg_data/11/data
PG_NEW_DATA=/pg_data/13/data
BACKUP_FILENAME=v11.$CURRENT_DATE.sql
BACKUP_PATH=$MOUNT_PATH/backup/$BACKUP_FILENAME
BACKUP_DIR=$(dirname "$BACKUP_PATH")
VOLUME_NAME=lms_db-data-volume
# Step 1: Create a backup
docker run --rm -v $VOLUME_NAME:$MOUNT_PATH \
-e PGDATA=$PG_OLD_DATA \
-e POSTGRES_DB="${DB_NAME:-db}" \
-e POSTGRES_USER="${DB_USERNAME:-postgres}" \
-e POSTGRES_PASSWORD="${DB_PASSWORD:-postgres}" \
postgres:11-alpine \
/bin/bash -c "chown -R postgres:postgres $MOUNT_PATH \
&& su - postgres /bin/bash -c \"/usr/local/bin/pg_ctl -D \\\"\$PGDATA\\\" start\" \
&& mkdir -p \"$BACKUP_DIR\" \
&& pg_dumpall -U $DB_USERNAME -f \"$BACKUP_PATH\" \
&& chown postgres:postgres \"$BACKUP_PATH\""
# Step 2: Create a new database from the backup
docker run --rm -v $VOLUME_NAME:$MOUNT_PATH \
-e PGDATA=$PG_NEW_DATA \
-e POSTGRES_DB="${DB_NAME:-db}" \
-e POSTGRES_USER="${DB_USERNAME:-postgres}" \
-e POSTGRES_PASSWORD="${DB_PASSWORD:-postgres}" \
postgres:13-alpine \
/bin/bash -c "ls -la \"$BACKUP_DIR\" \
&& mkdir -p \"\$PGDATA\" \
&& chown -R postgres:postgres \"\$PGDATA\" \
&& rm -rf $PG_NEW_DATA/* \
&& su - postgres -c \"initdb -D \\\"\$PGDATA\\\"\" \
&& su - postgres -c \"pg_ctl -D \\\"\$PGDATA\\\" -l logfile start\" \
&& su - postgres -c \"psql -f $BACKUP_PATH\" \
&& printf \"\\\nhost all all all md5\\\n\" >> \"\$PGDATA/pg_hba.conf\" \
"