Hasura: using SSL certificates for the Postgres connection - postgresql

I can run Hasura from the Docker image:
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://username:password@hostname:port/dbname \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
hasura/graphql-engine:latest
But I also have a Postgres instance that can only be accessed with three certificates:
psql "sslmode=verify-ca sslrootcert=server-ca.pem \
sslcert=client-cert.pem sslkey=client-key.pem \
hostaddr=$DB_HOST \
port=$DB_PORT \
user=$DB_USER dbname=$DB_NAME"
I don't see a configuration for Hasura that allows me to connect to a Postgres instance in such a way.
Is this something I'm supposed to pass into the database connection URL?
How should I do this?

You'll need to mount your certificates into the Docker container and then configure libpq (which is what Hasura uses underneath) to pick up the required certificates through its standard environment variables. It'll be something like this (I haven't tested this):
docker run -d -p 8080:8080 \
-v /absolute-path-of-certs-folder:/certs \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://hostname:port/dbname \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
-e PGSSLMODE=verify-ca \
-e PGSSLCERT=/certs/client-cert.pem \
-e PGSSLKEY=/certs/client-key.pem \
-e PGSSLROOTCERT=/certs/server-ca.pem \
hasura/graphql-engine:latest
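One extra gotcha worth checking (this is general libpq behavior, not something specific to Hasura): libpq refuses to use a client key file that is readable by group or others, so tighten the key's permissions on the host before mounting the folder. A minimal sketch, using a throwaway directory as a stand-in for your real certs folder:

```shell
# Stand-in for your real certs folder (hypothetical path); libpq requires
# the private key to be readable only by its owner (mode 0600).
CERTS_DIR=$(mktemp -d)
touch "$CERTS_DIR/client-key.pem"
chmod 0600 "$CERTS_DIR/client-key.pem"
stat -c '%a' "$CERTS_DIR/client-key.pem"
```

With the real folder in place, you can also mount it read-only (-v /absolute-path-of-certs-folder:/certs:ro) so the container cannot modify the key material.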

Related

Why is my local folder empty after mounting?

When I try to mount a volume for a PostgreSQL database, I see that my local directory stays empty.
This is my code:
winpty docker run -it \
-e POSTGRES_USER="root" \
-e POSTGRES_PASSWORD="root" \
-e POSTGRES_DB="ny_taxi" \
-v /c/src/ny:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:13
When I run that command on MINGW64, Docker produces a file named "ny;C" and it's empty.
Why is it empty, and why is it named "ny;C" instead of "ny"? How can I fix that problem?
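This looks like MSYS/Git Bash path conversion mangling the -v argument rather than a Docker or Postgres problem. A commonly suggested workaround (untested here, and assuming your files really live at C:\src\ny) is to use a double leading slash so MINGW leaves the path alone:

```shell
# Double leading slash (//c/...) suppresses MSYS path rewriting,
# so Docker receives the volume spec unmodified.
winpty docker run -it \
  -e POSTGRES_USER="root" \
  -e POSTGRES_PASSWORD="root" \
  -e POSTGRES_DB="ny_taxi" \
  -v //c/src/ny:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:13
```

Alternatively, prefixing the whole command with MSYS_NO_PATHCONV=1 disables the path rewriting for that one invocation (this variable is honored by Git for Windows).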

Install pgRouting in a Docker postgis-postgresql container

I created a PostGIS database with Docker using the postgis image as usual:
docker run -d \
--name mypostgres \
-p 5555:5432 \
-e POSTGRES_PASSWORD=postgres \
-v /data/postgres/data:/var/lib/postgresql/data \
-v /data/postgres/lib:/usr/lib/postgresql/10/lib \
postgis/postgis:10-3.0
Now I can see all the extensions in the database; it has PostGIS, which is fine, but it doesn't have pgRouting.
So I pulled another image:
docker pull pgrouting/pgrouting:11-3.1-3.1.3
and ran the same kind of command:
docker run -d \
--name pgrouting \
-p 5556:5432 \
-e POSTGRES_PASSWORD=postgres \
-v /data/pgrouting/data/:/var/lib/postgresql/data/ \
-v /data/postgres/lib/:/usr/lib/postgresql/11/lib/ \
pgrouting/pgrouting:11-3.1-3.1.3
But when I execute this command:
create extension pgrouting;
I get this error message:
could not load library "/usr/lib/postgresql/11/lib/plpgsql.so": /usr/lib/postgresql/11/lib/plpgsql.so: undefined symbol: AllocSetContextCreate
I can't solve this problem. Can anyone help me?
Thanks a lot.
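The undefined-symbol error is consistent with the second -v mount: it overlays the PostgreSQL 11 container's library directory with the PostgreSQL 10 libraries left behind by the first container, so the version 11 server loads a plpgsql.so built for version 10 (whose symbols no longer match). A sketch of the likely fix (untested) is simply to drop that mount and let the image use its own libraries:

```shell
# Same as the original command, minus the lib mount that overlaid
# PostgreSQL 10 libraries onto the PostgreSQL 11 image.
docker run -d \
  --name pgrouting \
  -p 5556:5432 \
  -e POSTGRES_PASSWORD=postgres \
  -v /data/pgrouting/data/:/var/lib/postgresql/data/ \
  pgrouting/pgrouting:11-3.1-3.1.3
```

Then, connected to the target database, run: create extension pgrouting; (note the extension is named pgrouting, not postrouting).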

How to run the Schema Registry container for a SASL/PLAIN Kafka cluster

I want to run the cp-schema-registry image on AWS ECS, so first I am trying to get it running in Docker locally. I have a command like this:
docker run -e SCHEMA_REGISTRY_HOST_NAME=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS="1.kafka.address:9092,2.kafka.address:9092,3.kafka.address:9092" \
-e SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL=SASL_PLAINTEXT \
confluentinc/cp-schema-registry:5.5.3
(I have replaced the Kafka addresses.)
My consumers/producers connect to the cluster with params:
["sasl.mechanism"] = "PLAIN"
["sasl.username"] = <username>
["sasl.password"] = <password>
Docs seem to indicate there is a file I can create with these parameters, but I don't know how to pass this into the docker run command. Can this be done?
Thanks to OneCricketeer for the help above with the SCHEMA_REGISTRY_KAFKASTORE_SASL_JAAS_CONFIG var. The command ended up like this (I added port 8081:8081 so I could test with curl):
docker run -p 8081:8081 -e SCHEMA_REGISTRY_HOST_NAME=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS="1.kafka.broker:9092,2.kafka.broker:9092,3.kafka.broker:9092" \
-e SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL=SASL_SSL \
-e SCHEMA_REGISTRY_KAFKASTORE_SASL_MECHANISM=PLAIN \
-e SCHEMA_REGISTRY_KAFKASTORE_SASL_JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="user" password="pass";' confluentinc/cp-schema-registry:5.5.3
Then test with curl localhost:8081/subjects and get [] as a response.
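Beyond listing subjects, you can smoke-test writes by registering a trivial Avro schema against a throwaway subject (the subject name test-value here is arbitrary, not something your cluster requires):

```shell
# POST a minimal Avro schema ("string") to the registry; a successful
# registration returns the assigned schema id as JSON, e.g. {"id":1}.
curl -s -X POST \
  -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schema": "{\"type\": \"string\"}"}' \
  localhost:8081/subjects/test-value/versions
```

After that, curl localhost:8081/subjects should list test-value, confirming the registry can write to its Kafka backing topic, not just read.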

Facing issues due to ownership on mounted folder with Docker

The following command works fine:
sudo docker run -d -p 8080:80 --name openproject -e SECRET_KEY_BASE=somesecret \
-v /var/lib/openproject/pgdata:/var/lib/postgresql/9.6/main \
-v /var/lib/openproject/logs:/var/log/supervisor \
-v /var/lib/openproject/static:/var/db/openproject \
openproject/community:8
But this command doesn't start the container:
sudo docker run -d -p 8080:80 --name openproject -e SECRET_KEY_BASE=somesecret \
-v ~/Dropbox/openproject/pgdata:/var/lib/postgresql/9.6/main \
-v /var/lib/openproject/logs:/var/log/supervisor \
-v ~/Dropbox/openproject/static:/var/db/openproject \
openproject/community:8
I've also tried making /var/lib/openproject/pgdata a symlink to ~/Dropbox/openproject/pgdata, but that didn't work either.
The Docker logs say: PostgreSQL Config owner (postgres:102) and data owner (app:1000) do not match, and config owner is not root.
Is there any way to mount a non-root folder onto a root-owned folder inside the Docker container and solve this issue?
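Per the log message, the image's postgres user is uid 102, while your Dropbox folder is owned by your desktop user (uid 1000). One way out (an assumption based on that log line, untested against this image) is to hand the mounted folder to uid 102 before starting the container:

```shell
# 102 is the postgres uid the container reports in its error message;
# recursively give it ownership of the data directory on the host.
sudo chown -R 102:102 ~/Dropbox/openproject/pgdata
```

Be aware that Dropbox may fight you here: it re-syncs files it manages and can disturb their metadata, so a path outside Dropbox (as in your first, working command) is generally a safer home for Postgres data.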

How to promote the master after failover on PostgreSQL with Docker

First of all, I'm using this setup: postgres-docker-cluster. Everything works fine during the failover: I stop the master and slave1 takes its place. But if I turn the master back on, I'm not sure how to promote it to master again. I would appreciate any pointers in the right direction. Do I need to promote it manually? Sorry, I'm pretty new to this concept (HA).
This setup uses repmgr, pgpool2, and PostgreSQL 9.5.
Some info on the setup:
postgresql-cluster-pgsql
postgresql-cluster-pgpool
docker-compose.yml
So I figured out how to (sort of) solve the problem:
Create the containers manually
Master
docker run \
-e INITIAL_NODE_TYPE='master' \
-e NODE_ID=1 \
-e NODE_NAME='node1' \
-e CLUSTER_NODE_NETWORK_NAME='pgmaster' \
-e POSTGRES_PASSWORD='monkey_pass' \
-e POSTGRES_USER='monkey_user' \
-e POSTGRES_DB='monkey_db' \
-e CLUSTER_NODE_REGISTER_DELAY=5 \
-e REPLICATION_DAEMON_START_DELAY=120 \
-e CLUSTER_NAME='pg_cluster' \
-e REPLICATION_DB='replication_db' \
-e REPLICATION_USER='replication_user' \
-e REPLICATION_PASSWORD='replication_pass' \
-v cluster-archives:/var/cluster_archive \
-p 5432:5432 \
--net mynet \
--net-alias pgmaster \
--name pgmastertest \
paunin/postgresql-cluster-pgsql
Slave
docker run \
-e INITIAL_NODE_TYPE='standby' \
-e NODE_ID=2 \
-e NODE_NAME='node2' \
-e REPLICATION_PRIMARY_HOST='pgmaster' \
-e CLUSTER_NODE_NETWORK_NAME='pgslave1' \
-e REPLICATION_UPSTREAM_NODE_ID=1 \
-v cluster-archives:/var/cluster_archive \
-p 5441:5432 \
--net mynet \
--net-alias pgslave1 \
--name pgslavetest \
paunin/postgresql-cluster-pgsql
Pgpool
docker run \
-e PCP_USER='pcp_user' \
-e PCP_PASSWORD='pcp_pass' \
-e PGPOOL_START_DELAY=120 \
-e REPLICATION_USER='replication_user' \
-e REPLICATION_PASSWORD='replication_pass' \
-e SEARCH_PRIMARY_NODE_TIMEOUT=5 \
-e DB_USERS='monkey_user:monkey_pass' \
-e BACKENDS='0:pgmaster:5432:1:/var/lib/postgresql/data:ALLOW_TO_FAILOVER,1:pgslave1::::' \
-p 5430:5432 \
-p 9898:9898 \
--net mynet \
--net-alias pgpool \
--name pgpooltest \
paunin/postgresql-cluster-pgpool
On the line BACKENDS='0:pgmaster:5432:1:/var/lib/postgresql/data:ALLOW_TO_FAILOVER,1:pgslave1::::' \ you can add more slaves to pgpool.
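For reference, the colon-separated fields in each BACKENDS entry appear to map onto Pgpool's per-backend settings (this mapping is inferred from the values above, not taken from the image's docs):

```shell
# node_id : hostname : port : weight : data_directory : flag
# Empty trailing fields fall back to the image's defaults, so a third
# slave (hypothetical alias pgslave2) could be appended like this:
-e BACKENDS='0:pgmaster:5432:1:/var/lib/postgresql/data:ALLOW_TO_FAILOVER,1:pgslave1::::,2:pgslave2::::' \
```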
Stop the master pgmaster; the slave pgslave1 will be promoted after a few seconds.
Add a new slave container:
docker run \
-e INITIAL_NODE_TYPE='standby' \
-e NODE_ID=3 \
-e NODE_NAME='node1' \
-e REPLICATION_PRIMARY_HOST='pgslave1' \
-e CLUSTER_NODE_NETWORK_NAME='pgmaster' \
-e REPLICATION_UPSTREAM_NODE_ID=2 \
-v cluster-archives:/var/cluster_archive \
-p 5432:5432 \
--net mynet \
--net-alias pgmaster \
--name pgmastertest3 \
paunin/postgresql-cluster-pgsql
On the following lines:
-e REPLICATION_PRIMARY_HOST='pgslave1' \ make sure you are pointing to the alias of the new master (pgslave1).
-e REPLICATION_UPSTREAM_NODE_ID=2 \ make sure you are pointing to the new master's node id (2).
-e NODE_ID=3 \ make sure this id doesn't already exist in the table repl_nodes.
--net-alias pgmaster \ you can use the one from your old master, or one that you already added to pgpool in BACKENDS='0:pgmaster:5432:1:/var/lib/postgresql/data:ALLOW_TO_FAILOVER,1:pgslave1::::' \ ; otherwise, if the new master fails, repmgr won't be able to recover it.
It's a little manual, but it does what I need, and that's to add a new slave to the new master.
When the master fails, the PostgreSQL cluster elects another master from the standby nodes (based on the node weight in the cluster). So when the ex-master is finally brought back to life, the cluster remains loyal to its current master; the ex-master is initiated back into the cluster, but this time as a standby. All of that is managed entirely by PostgreSQL, not by Pgpool.
So what you would expect is that if the new master (the ex-standby) fails (or is scaled to 0), the cluster would fail over to the ex-master and elect it as leader once again, and when the standby is scaled up again it would join as a standby and things would be back to normal. And that is exactly what the PostgreSQL cluster does.
But most probably the Pgpool service will fail at that moment, because whenever a node fails, Pgpool marks that node's status as DOWN, and even if the node comes back to life it will not notify Pgpool, so your traffic will not reach that node.
So if you check the recovered node's status, after its recovery, on the Pgpool container using the PCP commands:
pcp_node_info -U pcp_user -h localhost -n 1 # master node id
pgmaster 5432 down 1
So what you have to do is re-attach the fallen node to Pgpool manually using:
pcp_attach_node -U pcp_user -h localhost -n 1 # master node id
--- executed successfully ---
pcp_node_info -U pcp_user -h localhost -n 1 # master node id
pgmaster 5432 up 1
At this point Pgpool recognizes the ex-master node once again and can direct traffic to it.
After that, whenever you remove (scale to 0) the ex-standby (now master) service, the whole solution (PostgreSQL + Pgpool) fails over to the actual master, and you can then bring the standby up again and re-attach it to Pgpool.
P.S. The downtime is only Pgpool's failover downtime, and the Pgpool service keeps its original configuration: nothing added, nothing restarted (well, except for the PostgreSQL node that failed, hopefully :D).