When I try to mount a local directory for a PostgreSQL database, the directory stays empty.
This is my code:
winpty docker run -it \
-e POSTGRES_USER="root" \
-e POSTGRES_PASSWORD="root" \
-e POSTGRES_DB="ny_taxi" \
-v /c/src/ny:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:13
When I run that command in MINGW64, I see Docker produce a file named "ny;C", and it's empty.
Why is it empty, and why is it named "ny;C" instead of "ny"? How can I fix this?
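For what it's worth, the usual culprit here (an assumption, since I can't reproduce your setup) is the automatic POSIX-to-Windows path conversion that Git Bash/MINGW64 applies to arguments before Docker ever sees them; a mangled mount name like "ny;C" is a typical symptom. A sketch of two common workarounds on your exact command:
# Workaround 1: disable MSYS path conversion for this one invocation
MSYS_NO_PATHCONV=1 winpty docker run -it \
-e POSTGRES_USER="root" \
-e POSTGRES_PASSWORD="root" \
-e POSTGRES_DB="ny_taxi" \
-v /c/src/ny:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:13
# Workaround 2: double the leading slash so the shell leaves the path alone
#   -v //c/src/ny:/var/lib/postgresql/data \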
I created a PostGIS database with Docker using the postgis image, as usual:
docker run -d \
--name mypostgres \
-p 5555:5432 \
-e POSTGRES_PASSWORD=postgres \
-v /data/postgres/data:/var/lib/postgresql/data \
-v /data/postgres/lib:/usr/lib/postgresql/10/lib \
postgis/postgis:10-3.0
Now I can see all the extensions in the database; it has PostGIS, so that part is fine, but it does not have pgRouting.
So I pulled another image:
docker pull pgrouting/pgrouting:11-3.1-3.1.3
and ran the same kind of command:
docker run -d \
--name pgrouting \
-p 5556:5432 \
-e POSTGRES_PASSWORD=postgres \
-v /data/pgrouting/data/:/var/lib/postgresql/data/ \
-v /data/postgres/lib/:/usr/lib/postgresql/11/lib/ \
pgrouting/pgrouting:11-3.1-3.1.3
But when I execute this command:
CREATE EXTENSION pgrouting;
I get this error message:
could not load library "/usr/lib/postgresql/11/lib/plpgsql.so": /usr/lib/postgresql/11/lib/plpgsql.so: undefined symbol: AllocSetContextCreate
I can't solve this problem. Can anyone help me? Thanks a lot.
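In case it helps, a plausible cause (an assumption based on the error text, not verified): the second -v overlays the container's /usr/lib/postgresql/11/lib with /data/postgres/lib, which was populated by the PostgreSQL 10 PostGIS container, so the version 11 server tries to load a version 10 plpgsql.so and fails on the missing symbol. The pgRouting image already ships everything it needs under /usr/lib/postgresql/11/lib, so a sketch without that mount would be:
docker run -d \
--name pgrouting \
-p 5556:5432 \
-e POSTGRES_PASSWORD=postgres \
-v /data/pgrouting/data/:/var/lib/postgresql/data/ \
pgrouting/pgrouting:11-3.1-3.1.3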
I run a Docker container with Apache Airflow.
If I set executor = LocalExecutor, everything works fine; however, if I set executor = CeleryExecutor and run a DAG, the following exception is printed:
[2020-07-13 04:17:41,065] {{celery_executor.py:266}} ERROR - Error fetching Celery task state, ignoring it:OperationalError('(psycopg2.OperationalError) FATAL: password authentication failed for user "airflow"\n')
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/executors/celery_executor.py", line 108, in fetch_celery_task_state
However, I do provide the following environment variables in the docker run call:
docker run --name test -it \
-p 8000:80 -p 5555:5555 -p 8080:8080 \
-v `pwd`:/app \
-e AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_DEFAULT_REGION \
-e PYTHONPATH=/app \
-e ENVIRONMENT=local \
-e XCOMMAND \
-e POSTGRES_PORT=5432 \
-e POSTGRES_HOST=postgres \
-e POSTGRES_USER=project_user \
-e POSTGRES_PASSWORD=password \
-e DJANGO_SETTINGS_MODULE=config.settings.local \
-e AIRFLOW_DB_NAME=project_airflow_dev \
-e AIRFLOW_ADMIN_USER=project_user \
-e AIRFLOW_ADMIN_EMAIL=admin@project.com \
-e AIRFLOW_ADMIN_PASSWORD=password \
-e AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://project_user:password@postgres:5432/project_airflow_dev \
-e AIRFLOW__CORE__EXECUTOR=CeleryExecutor \
-e AIRFLOW__CELERY__BROKER_URL=redis://redis:6379/1 \
--network="project-network" \
--link project_cassandra_1:cassandra \
--link project_postgres_1:postgres \
--link project_redis_1:redis \
registry.dkr.ecr.us-east-2.amazonaws.com/airflow:v1.0
With LocalExecutor everything is fine: I can log into the admin UI, trigger the DAG, and get successful results. It's only when I switch to CeleryExecutor that I get this odd error about the "airflow" user, as if the AIRFLOW__CORE__SQL_ALCHEMY_CONN env var were not visible or used at all.
Any ideas?
Solution:
Adding the AIRFLOW__CELERY__RESULT_BACKEND env var fixed the issue. Without it, the Celery result backend presumably fell back to a default airflow:airflow connection string, which matches the user in the authentication error.
...
-e AIRFLOW__CELERY__RESULT_BACKEND=db+postgresql+psycopg2://project_user:password@postgres:5432/project_airflow_dev \
...
or edit airflow.cfg
[celery]
result_backend = db+postgresql://airflow:airflow@postgres/airflow
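If you want to confirm what the container actually resolved, one quick check (a sketch; "test" is just the container name from the command above, and this assumes Airflow 1.10's airflow.configuration module) is to print the effective setting from inside the container:
docker exec -it test python -c \
"from airflow.configuration import conf; print(conf.get('celery', 'result_backend'))"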
I found the following mentioned in many places:
docker run -d \
--name some-postgres \
-e POSTGRES_PASSWORD=mysecretpassword \
-e PGDATA=/var/lib/postgresql/data/pgdata \
-v /custom/mount:/var/lib/postgresql/data \
postgres
My only question is that I am unable to find the /var/lib/postgresql/data/pgdata directory itself. I don't see any postgresql directory under /var/lib on my host. Why is that? And how does it work if there is no such directory?
The -v in your command mounts /custom/mount on your host (the machine where you run the docker command) to the container's /var/lib/postgresql/data. So the pgdata directory you are looking for is at /custom/mount/pgdata on the host.
Of course, /custom/mount is only an example path; you have to replace it with your real directory.
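If you want to see this for yourself, compare the two views of the same directory (using the example names from the command above):
# On the host: the cluster files live under the mounted directory
ls /custom/mount/pgdata
# Inside the container: the same files at the mount target
docker exec some-postgres ls /var/lib/postgresql/data/pgdata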
I can run Hasura from the Docker image:
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://username:password@hostname:port/dbname \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
hasura/graphql-engine:latest
But I also have a Postgres instance that can only be accessed with three certificates:
psql "sslmode=verify-ca sslrootcert=server-ca.pem \
sslcert=client-cert.pem sslkey=client-key.pem \
hostaddr=$DB_HOST \
port=$DB_PORT \
user=$DB_USER dbname=$DB_NAME"
I don't see a configuration option for Hasura that allows me to connect to a Postgres instance in such a way.
Is this something I'm supposed to pass in the database connection URL?
How should I do this?
You'll need to mount your certificates into the Docker container and then configure libpq (which is what Hasura uses underneath) to use the required certificates via its environment variables. It'll be something like this (I haven't tested this):
docker run -d -p 8080:8080 \
-v /absolute-path-of-certs-folder:/certs \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://hostname:port/dbname \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
-e PGSSLMODE=verify-ca \
-e PGSSLCERT=/certs/client-cert.pem \
-e PGSSLKEY=/certs/client-key.pem \
-e PGSSLROOTCERT=/certs/server-ca.pem \
hasura/graphql-engine:latest
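One extra caveat that's easy to hit (a libpq rule, not anything Hasura-specific): libpq refuses to use a client key file that is group- or world-readable, so tighten the permissions before mounting, for example:
chmod 0600 /absolute-path-of-certs-folder/client-key.pem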
The following command works fine:
sudo docker run -d -p 8080:80 --name openproject -e SECRET_KEY_BASE=somesecret \
-v /var/lib/openproject/pgdata:/var/lib/postgresql/9.6/main \
-v /var/lib/openproject/logs:/var/log/supervisor \
-v /var/lib/openproject/static:/var/db/openproject \
openproject/community:8
But this command doesn't start the container:
sudo docker run -d -p 8080:80 --name openproject -e SECRET_KEY_BASE=somesecret \
-v ~/Dropbox/openproject/pgdata:/var/lib/postgresql/9.6/main \
-v /var/lib/openproject/logs:/var/log/supervisor \
-v ~/Dropbox/openproject/static:/var/db/openproject \
openproject/community:8
I've also tried making /var/lib/openproject/pgdata a symlink to ~/Dropbox/openproject/pgdata, but that didn't work either.
The Docker logs say: PostgreSQL Config owner (postgres:102) and data owner (app:1000) do not match, and config owner is not root.
Is there any way to mount a non-root folder onto a root-owned folder inside the Docker container and solve this issue?
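One thing worth trying (untested, and the UID 102 is read straight from that log line, so it may differ in other image versions) is to hand the host directory to the UID that PostgreSQL runs as inside the container:
sudo chown -R 102:102 ~/Dropbox/openproject/pgdata
Keep in mind that Dropbox syncs files as your own user and may fight those ownership changes, and syncing a live PostgreSQL data directory is risky in itself, so keeping pgdata outside Dropbox and backing it up separately may be the safer route.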