Install pgRouting in a Docker PostGIS/PostgreSQL container - postgresql

I created a PostGIS database with Docker using the postgis image, as usual:
docker run -d \
--name mypostgres \
-p 5555:5432 \
-e POSTGRES_PASSWORD=postgres \
-v /data/postgres/data:/var/lib/postgresql/data \
-v /data/postgres/lib:/usr/lib/postgresql/10/lib \
postgis/postgis:10-3.0
Now I can see all the extensions in the database; it has PostGIS, which is fine, but it does not have pgRouting.
So I pulled another image:
docker pull pgrouting/pgrouting:11-3.1-3.1.3
and ran the same command:
docker run -d \
--name pgrouting \
-p 5556:5432 \
-e POSTGRES_PASSWORD=postgres \
-v /data/pgrouting/data/:/var/lib/postgresql/data/ \
-v /data/postgres/lib/:/usr/lib/postgresql/11/lib/ \
pgrouting/pgrouting:11-3.1-3.1.3
But when I execute this command:
CREATE EXTENSION pgrouting;
I get this error message:
could not load library "/usr/lib/postgresql/11/lib/plpgsql.so": /usr/lib/postgresql/11/lib/plpgsql.so: undefined symbol: AllocSetContextCreate
I can't solve this problem. Can anyone help me?
Thanks a lot.
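A likely cause (my reading of the error, not confirmed in the thread): the second container mounts /data/postgres/lib/, a directory populated from the PostgreSQL 10 image, over /usr/lib/postgresql/11/lib/, so the PostgreSQL 11 server loads libraries built for version 10 and also loses the pgRouting libraries shipped in the image. AllocSetContextCreate stopped being an exported function in PostgreSQL 11, which is exactly the kind of mismatch that produces this undefined-symbol error. A minimal sketch of a fix, assuming the libraries bundled in the pgRouting image are sufficient, is to drop the lib mount entirely:
docker run -d \
--name pgrouting \
-p 5556:5432 \
-e POSTGRES_PASSWORD=postgres \
-v /data/pgrouting/data/:/var/lib/postgresql/data/ \
pgrouting/pgrouting:11-3.1-3.1.3
Then create the extension; CASCADE also installs PostGIS, which pgRouting depends on:
psql -h localhost -p 5556 -U postgres -c 'CREATE EXTENSION pgrouting CASCADE;'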

Related

Why is my local file empty after mounting?

When I try to mount a data directory for PostgreSQL, I see that my local directory is empty.
This is my code:
winpty docker run -it \
-e POSTGRES_USER="root" \
-e POSTGRES_PASSWORD="root" \
-e POSTGRES_DB="ny_taxi" \
-v /c/src/ny:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:13
When I run that code in MINGW64, I see Docker produce a file named "ny;C", and it's empty.
Why is it empty, and why is it named "ny;C" instead of "ny"? How can I fix this problem?
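A likely culprit (an assumption, since the shell setup isn't confirmed): MSYS/Git Bash treats the colon-separated -v argument as a path list and rewrites it into Windows form, turning the colon into ";" and prefixing drive letters, which is where the stray "ny;C" name comes from. A common workaround, untested here, is a double leading slash so MSYS leaves the path alone:
winpty docker run -it \
-e POSTGRES_USER="root" \
-e POSTGRES_PASSWORD="root" \
-e POSTGRES_DB="ny_taxi" \
-v //c/src/ny:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:13
Setting MSYS_NO_PATHCONV=1 before the command is another way to disable the path conversion.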

Airflow - Switching to CeleryExecutor results in password authentication failed for user "airflow" exception

I run a Docker container with Apache Airflow.
If I set executor = LocalExecutor, everything works fine; however, if I set executor = CeleryExecutor and run a DAG, I get the following exception printed:
[2020-07-13 04:17:41,065] {{celery_executor.py:266}} ERROR - Error fetching Celery task state, ignoring it:OperationalError('(psycopg2.OperationalError) FATAL: password authentication failed for user "airflow"\n')
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/executors/celery_executor.py", line 108, in fetch_celery_task_state
However, I do provide the following ENV variables in the docker run call:
docker run --name test -it \
-p 8000:80 -p 5555:5555 -p 8080:8080 \
-v `pwd`:/app \
-e AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_DEFAULT_REGION \
-e PYTHONPATH=/app \
-e ENVIRONMENT=local \
-e XCOMMAND \
-e POSTGRES_PORT=5432 \
-e POSTGRES_HOST=postgres \
-e POSTGRES_USER=project_user \
-e POSTGRES_PASSWORD=password \
-e DJANGO_SETTINGS_MODULE=config.settings.local \
-e AIRFLOW_DB_NAME=project_airflow_dev \
-e AIRFLOW_ADMIN_USER=project_user \
-e AIRFLOW_ADMIN_EMAIL=admin@project.com \
-e AIRFLOW_ADMIN_PASSWORD=password \
-e AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://project_user:password@postgres:5432/project_airflow_dev \
-e AIRFLOW__CORE__EXECUTOR=CeleryExecutor \
-e AIRFLOW__CELERY__BROKER_URL=redis://redis:6379/1 \
--network="project-network" \
--link project_cassandra_1:cassandra \
--link project_postgres_1:postgres \
--link project_redis_1:redis \
registry.dkr.ecr.us-east-2.amazonaws.com/airflow:v1.0
With LocalExecutor everything is fine: I can log into the admin UI, trigger the DAG, and get successful results. It's only when I switch to CeleryExecutor that I get this weird error about the "airflow" user, as if the AIRFLOW__CORE__SQL_ALCHEMY_CONN env var were not visible or used at all.
Any ideas?
Solution:
Adding AIRFLOW__CELERY__RESULT_BACKEND env var fixed the issue.
...
-e AIRFLOW__CELERY__RESULT_BACKEND=db+postgresql+psycopg2://project_user:password@postgres:5432/project_airflow_dev \
...
or edit airflow.cfg
[celery]
result_backend = db+postgresql://airflow:airflow@postgres/airflow
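This fits the symptom (my explanation, not part of the original answer): the Celery result backend is configured separately from sql_alchemy_conn, so when AIRFLOW__CELERY__RESULT_BACKEND is unset, the workers fall back to the default result_backend from airflow.cfg, which points at the "airflow" user, producing the authentication failure even though the main connection string is correct.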

Hasura: use SSL certificates for the Postgres connection

I can run Hasura from the Docker image:
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://username:password@hostname:port/dbname \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
hasura/graphql-engine:latest
But I also have a Postgres instance that can only be accessed with three certificate files:
psql "sslmode=verify-ca sslrootcert=server-ca.pem \
sslcert=client-cert.pem sslkey=client-key.pem \
hostaddr=$DB_HOST \
port=$DB_PORT \
user=$DB_USER dbname=$DB_NAME"
I don't see a configuration for Hasura that allows me to connect to a Postgres instance in such a way.
Is this something I'm supposed to pass into the database connection URL?
How should I do this?
You'll need to mount your certificates into the Docker container and then configure libpq (which is what Hasura uses underneath) to use the required certificates with its standard environment variables (PGSSLMODE, PGSSLCERT, PGSSLKEY, PGSSLROOTCERT). It'll be something like this (I haven't tested this):
docker run -d -p 8080:8080 \
-v /absolute-path-of-certs-folder:/certs \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://hostname:port/dbname \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
-e PGSSLMODE=verify-ca \
-e PGSSLCERT=/certs/client-cert.pem \
-e PGSSLKEY=/certs/client-key.pem \
-e PGSSLROOTCERT=/certs/server-ca.pem \
hasura/graphql-engine:latest
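One detail worth checking (an assumption based on standard libpq behavior, not part of the original answer): libpq refuses to use a client key file that is readable by group or others, so the mounted key may need restrictive permissions before the container starts:
chmod 0600 /absolute-path-of-certs-folder/client-key.pem
The files must still be readable by the user the Hasura process runs as inside the container.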

Facing issues due to ownership on mounted folder with Docker

The following command works fine:
sudo docker run -d -p 8080:80 --name openproject -e SECRET_KEY_BASE=somesecret \
-v /var/lib/openproject/pgdata:/var/lib/postgresql/9.6/main \
-v /var/lib/openproject/logs:/var/log/supervisor \
-v /var/lib/openproject/static:/var/db/openproject \
openproject/community:8
But this command doesn't start the container:
sudo docker run -d -p 8080:80 --name openproject -e SECRET_KEY_BASE=somesecret \
-v ~/Dropbox/openproject/pgdata:/var/lib/postgresql/9.6/main \
-v /var/lib/openproject/logs:/var/log/supervisor \
-v ~/Dropbox/openproject/static:/var/db/openproject \
openproject/community:8
I've also tried making /var/lib/openproject/pgdata a symlink to ~/Dropbox/openproject/pgdata, but that didn't work either.
The Docker logs say: PostgreSQL Config owner (postgres:102) and data owner (app:1000) do not match, and config owner is not root.
Is there any way to mount a non-root-owned folder onto a root-owned folder inside the Docker container and get around this issue?
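A sketch of one way out, assuming the container's postgres user really has UID 102 as the log message suggests: change the host directory's ownership to match what the image expects before starting the container:
sudo chown -R 102:102 ~/Dropbox/openproject/pgdata
Be aware that Dropbox may then struggle to sync files it no longer owns, so keeping the pgdata volume outside the Dropbox folder (as in the working command) may be the more practical choice.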

How to suppress INFO messages when running psql scripts

I'm seeing INFO messages when I run my tests, and I thought I had gotten rid of them by setting the client_min_messages PGOPTION. Here's my command:
PGOPTIONS='--client-min-messages=warning' \
psql -h localhost \
-p 5432 \
-d my_db \
-U my_user \
--no-align \
--field-separator '|' \
--pset footer \
--quiet \
-v AUTOCOMMIT=off \
-X \
-v VERBOSITY=terse \
-v ON_ERROR_STOP=1 \
--pset pager=off \
-f tests/test.sql \
-o "$test_results"
Can someone advise me on how to turn off the INFO messages?
This works for me on Postgres 9.1.4 on Debian GNU/Linux with bash:
env PGOPTIONS='-c client_min_messages=WARNING' psql ...
(Still works for Postgres 12 on Ubuntu 18.04 LTS with bash.)
It's also what the manual suggests. In most shells, setting environment variables also works without an explicit leading env. See maxschlepzig's comment.
Note, however, that there is no message level INFO for client_min_messages.
That's only applicable to log_min_messages and log_min_error_statement.
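To illustrate that last point (a small demonstration of documented behavior, not from the original answer): messages raised at the INFO level are always sent to the client, while NOTICE and below respect client_min_messages. Adjust the connection flags to match the command above:
PGOPTIONS='-c client_min_messages=warning' psql -X -d my_db -c \
"DO \$\$ BEGIN RAISE NOTICE 'suppressed'; RAISE INFO 'still shown'; END \$\$;"
Only 'still shown' appears, so INFO output has to be silenced at its source (for example by changing the RAISE level) rather than through client_min_messages.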