Better way to run multiple docker commands - postgresql

I have a docker container for my database; let's call it my_pgdb. I am writing a script in which I issue multiple commands one by one, i.e.,
docker exec -i my_pgdb pg_dump --schema-only -U my_user my_db > schema.sql
docker exec -i my_pgdb dropdb -U my_user "${db_name}_my_db" || true
docker exec -i my_pgdb createdb -U my_user "${db_name}_my_db"
cat schema.sql | docker exec -i my_pgdb psql -U my_user "${db_name}_my_db"
Is there a better way to merge these commands into one? I am running docker exec multiple times; can I issue all the commands in a single invocation, or is there some other, more efficient approach?

You can run all of the commands in a single docker exec by feeding a heredoc to sh, like this:
docker exec -i my_pgdb sh <<-EOF
pg_dump --schema-only -U my_user my_db > schema.sql
dropdb -U my_user "${db_name}_my_db" || true
createdb -U my_user "${db_name}_my_db"
psql -U my_user "${db_name}_my_db" < schema.sql
EOF
Note that, unlike in your script, schema.sql is now created inside the container rather than on the host.
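One subtlety worth knowing, illustrated below with plain sh in place of docker exec so the sketch runs anywhere: because the EOF delimiter is unquoted, ${db_name} is expanded by the host shell before the script is sent to the inner shell.

```shell
db_name=staging
# Unquoted delimiter: ${db_name} is expanded by the *outer* shell before
# the script is piped to the inner sh -- the same thing that happens with
# `docker exec -i my_pgdb sh <<-EOF` above.
result=$(sh <<EOF
echo "${db_name}_my_db"
EOF
)
echo "$result"   # → staging_my_db
```

Quoting the delimiter (<<'EOF') would instead defer expansion to the shell inside the container, where db_name is not set.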

Related

Dockerized Postgresql : Delete PGDATA content

I am trying to configure replication between two Postgresql docker containers. All the tutorials out there, which are based on regular VMs rather than containers, state that a backup of the master has to be taken on the replica/standby with a command like this:
pg_basebackup -h 192.168.0.108 -U replicator -p 5432 -D $PGDATA -Fp -Xs -P -R
Unfortunately, this throws an error:
pg_basebackup: error: directory "/var/lib/postgresql/data" exists but is not empty
I cannot delete the contents of this folder, because the Postgresql service would crash and I would be kicked out of the container.
So, how can I do this, given that I cannot stop the Postgresql service in a containerized application?
These instructions follow the Percona docs for configuring postgres replication.
Create a network for the postgres servers:
docker network create pgnet
Start the primary database server:
docker run -d \
--network pgnet \
-v primary_data:/var/lib/postgresql/data \
--name pg_primary \
-e POSTGRES_PASSWORD=secret \
-e POSTGRES_DB=exampledb \
docker.io/postgres:14
Create a replication user and configure pg_hba.conf to allow
replication access:
docker exec pg_primary \
psql -U postgres -c \
"CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'secret'"
docker exec pg_primary \
sh -c 'echo host replication replicator 0.0.0.0/0 md5 >> $PGDATA/pg_hba.conf'
docker exec pg_primary \
psql -U postgres -c 'select pg_reload_conf()'
Prepare the pgdata directory for the replica server. We do this by running the pg_basebackup command in an ephemeral container:
docker run -it --rm \
--network pgnet \
-v replica_data:/var/lib/postgresql/data \
--name pg_replica_init \
docker.io/postgres:14 \
sh -c 'pg_basebackup -h pg_primary -U replicator -p 5432 -D $PGDATA -Fp -Xs -P -R -W'
(The -W in the above command forces pg_basebackup to prompt for a
password. There are other ways you could provide the password; the
postgres docs have details.)
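As a sketch of those non-interactive alternatives (using the hypothetical credentials from this walkthrough): libpq reads the password from the PGPASSWORD environment variable, or from a pgpass file in host:port:database:user:password format, which it silently ignores unless the file is mode 0600.

```shell
# Option 1: pass the password in the environment of the ephemeral container:
#   docker run ... -e PGPASSWORD=secret docker.io/postgres:14 \
#     sh -c 'pg_basebackup -h pg_primary -U replicator -p 5432 -D $PGDATA -Fp -Xs -P -R'
# Option 2: a pgpass file (created with mktemp here; normally ~/.pgpass).
# The database field is "replication" to match replication connections.
pgpass=$(mktemp)
echo 'pg_primary:5432:replication:replicator:secret' > "$pgpass"
chmod 600 "$pgpass"          # libpq skips the file unless it is mode 0600
stat -c '%a' "$pgpass"       # → 600
rm -f "$pgpass"
```

pg_basebackup would pick the file up via the PGPASSFILE environment variable, or from ~/.pgpass for the container's user.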
Start the replica:
docker run -d \
--network pgnet \
-v replica_data:/var/lib/postgresql/data \
--name pg_replica \
docker.io/postgres:14
Verify that replication is working. Create a table in the primary:
docker exec pg_primary \
psql -U postgres exampledb -c 'create table example_table (id int, name text)'
See that it shows up in the replica:
docker exec pg_replica \
psql -U postgres exampledb -c '\d'
You should see as output:
List of relations
Schema | Name | Type | Owner
--------+---------------+-------+----------
public | example_table | table | postgres
(1 row)

Readiness probe failure for Postgres because of "root"

I am trying to execute the very standard SELECT 1 command as a readiness probe command.
I've been through numerous posts on Postgres, root user and Kubernetes but I still can't get it right although this might be trivial.
I defined the ENV variables POSTGRES_USER=postgres and POSTGRES_DB=mydb in the ConfigMap referenced by my Postgres StatefulSet manifest.
I understood I had to use bash to execute the command, since ENV variables are not interpolated otherwise.
All of the following commands fail with FATAL: role "root" does not exist:
bash -c pg_isready -U $POSTGRES_USER -p 5432 -> FATAL
bash -c psql -w -U $POSTGRES_USER -d $POSTGRES_DB -c 'SELECT 1;' -> FATAL
# bash doesn't read ENV ?
bash -c psql -w -U $POSTGRES_USER -d mydb -c 'SELECT 1;' -> FATAL
# maybe current user from host? "me" from "whoami"
bash -c psql -w -U me -d postgres -c 'SELECT 1;' -> FATAL
Then I executed commands to check whether bash gets the ENV variables -> NO
kubectl exec -it pg-dep-0 -- bash -c psql -U $POSTGRES_USER -d $POSTGRES_DB -c 'SELECT 1;' -> FATAL
kubectl exec -it pg-dep-0 -- psql -U $POSTGRES_USER -d $POSTGRES_DB -c 'SELECT 1;' -> FATAL
but:
kubectl exec -it pg-dep-0 -- psql -U postgres -c 'SELECT 1;' <-- correct
kubectl exec -it pg-dep-0 -- sh
/ # echo $POSTGRES_USER
postgres <-- correct
succeed, without bash, but these don't rely on ENV variables either.
Finally, setting the values directly and not using bash, with psql -U postgres -c 'SELECT 1;' (without bash) as the command for the readiness probe, also fails: Postgres is up, but the test returns "Readiness probe failed".
I used directly:
spec.template.spec.containers:
[...]
        envFrom:
          - configMapRef:
              name: config
        readinessProbe:
          exec:
            command:
              - bash
              - "-c"
              - |
                psql -U "postgres" -d "mydb" -c 'SELECT 1;'

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  POSTGRES_HOST: "pg-svc"
  POSTGRES_DB: "mysb"
  POSTGRES_USER: "postgres"
A bit lost, any advice?
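The symptoms are consistent with a bash -c quoting pitfall rather than anything Postgres-specific: -c takes a single string, so in `bash -c psql -U $POSTGRES_USER ...` only the word `psql` is the script and everything after it becomes positional parameters. psql then runs with no -U at all and defaults to the operating-system user, which is root in many containers. A docker-free demonstration, with echo standing in for psql:

```shell
# Unquoted: only the first word is the script; the rest become $0, $1, ...
out=$(bash -c echo hello world)    # runs bare `echo`; the arguments are lost
echo "[$out]"                      # → []

# Quoted: bash executes the whole string verbatim.
out=$(bash -c 'echo hello world')
echo "[$out]"                      # → [hello world]
```

So a probe command of the form `bash -c 'psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "SELECT 1;"'` (one quoted string) should see the variables: exec probes do inherit the container's environment, the values just aren't interpolated without a shell around the whole command.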

how to connect to psql using password on kubernetes

I want to connect to psql using a password with a kubectl exec command on kubernetes, like
kubectl exec -it postgres -- bash `psql -h $IP -U admin --password password -p $PORT dbname`
I tried the command
kubectl exec -it $podid -- bash `psql -h $IP -U admin --password -p $PORT postgresdb`
and
kubectl exec -it $podid -- bash -c "psql --dbname=postgresql://postgres:password@$ip:$port/postgresdb -c 'INSERT INTO public."user" (user_id,user_pw,user_nm,email,phone,is_admin) VALUES ('admin','admin','admin','admin@admin.com',NULL,true);'"
but these commands did not work.
How can I connect to psql using kubernetes commands and password?
try
kubectl exec -it <postgrespod-name> -- bash (or sh)
Once you are inside the container, you can just do
PGPASSWORD=<password> psql -h <host or service name or localhost> -U <username> <dbname>
another option is
kubectl exec -it postgres -- bash `PGPASSWORD=<password> psql -h <host or service name or localhost> -U <username> <dbname>`
The accepted answer did not work for me (command PGPASSWORD=<...> not found).
This worked:
kubectl exec -ti postgres -- env PGPASSWORD=$PASSWD psql -h <host> -U <username> <dbname>
As you are asking only for the command, I assume you already have all the Deployments, Services, etc. in place.
Please execute command below:
$ kubectl exec -ti <postgresspod-name> -- psql -h <node/host IP> -U postgresadmin --password -p <port> postgresdb
Password for user postgresadmin:
After you type the password you will be connected to Postgres.
For me, only this works:
kubectl exec postgres-postgresql-0 -n nte -- sh -c 'PGPASSWORD=postgres psql -U "postgres" -d "database_name" -c "select * from table_name"'
You can try this way (note that psql has no command-line password flag; -p is the port, so the password has to come from PGPASSWORD or ~/.pgpass):
kubectl exec -i <postgres podname> -- env PGPASSWORD=<password> psql -h <host or service name or localhost> -U <username> <dbname>

How can I access the postgres.conf of a postgres container on gitlab-ci?

On Gitlab-CI I set up a postgres service for my database and would like to inspect the config file of it.
To find it, I had postgres return the location of its config file, but when I go to that directory, it is empty.
How can I access it?
.gitlab-ci.yaml:
image: maven:3.5.3-jdk-8
services:
  - postgres
variables:
  POSTGRES_DB: custom_db
  POSTGRES_USER: custom_user
  POSTGRES_PASSWORD: custom_pass
connect:
  image: postgres
  script:
    - export PGPASSWORD=$POSTGRES_PASSWORD
    - psql -h "postgres" -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "SELECT 'OK' AS status;"
    - psql -h "postgres" -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "SHOW config_file;"
    - cd /var/lib/postgresql/data
    - dir
    - ls -a
    - cat postgresql.conf
The respective job output:
Running with gitlab-runner 11.8.0 (4745a6f3)
on docker-auto-scale 72989761
Using Docker executor with image postgres ...
Starting service postgres:latest ...
Pulling docker image postgres:latest ...
Using docker image sha256:30bf4f039abe0affe9fe4f07a13b00ea959299510626d650c719fb10c4f41470 for postgres:latest ...
Waiting for services to be up and running...
Pulling docker image postgres ...
Using docker image sha256:30bf4f039abe0affe9fe4f07a13b00ea959299510626d650c719fb10c4f41470 for postgres ...
Running on runner-72989761-project-7829066-concurrent-0 via runner-72989761-srm-1551974294-08e28deb...
Cloning repository...
Cloning into '/builds/kimtang/SpringBootTimeWithSwagger'...
Checking out 1399a232 as master...
Skipping Git submodules setup
$ export PGPASSWORD=$POSTGRES_PASSWORD
$ psql -h "postgres" -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "SELECT 'OK' AS status;"
status
--------
OK
(1 row)
$ psql -h "postgres" -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "SHOW config_file;"
config_file
------------------------------------------
/var/lib/postgresql/data/postgresql.conf
(1 row)
$ cd /var/lib/postgresql/data
$ dir
$ ls -a
.
..
$ cat postgresql.conf
cat: postgresql.conf: No such file or directory
ERROR: Job failed: exit code 1
Why does it state it is in /var/lib/postgresql/data but then can not be found?
You're connected via psql to the remote postgres service container, but you're checking a local directory in the job container. If you really want to see what's going on in the service container, ssh into the runner machine and then use docker exec -i -t <container_name> /bin/sh to log into the container. You will have to make the job run for a long time, though, so put a sleep in there.

how do i backup a database in docker

I'm running my app using docker-compose with the below yml file:
postgres:
  container_name: postgres
  image: postgres:${POSTGRES_VERSION}
  volumes:
    - postgresdata:/var/lib/postgresql/data
  expose:
    - "5432"
  environment:
    - POSTGRES_DB=42EXP
    - POSTGRES_USER=${POSTGRES_USER}
    - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
node:
  container_name: node
  links:
    - postgres:postgres
  depends_on:
    - postgres

volumes:
  postgresdata:
As you can see here, I'm using a named volume to manage postgres state.
According to the official docs, I can back up a volume like this:
docker run --rm -v postgresdata:/var/lib/postgresql/data -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/postgresql/data
Some other tutorials suggested I use the pg_dump utility provided by postgres for backups:
pg_dump -Fc database_name_here > database.bak
I guess I would have to go inside the postgres container to run it and mount the backup directory to the host.
Is one approach better/preferable than the other?
To run pg_dump you can use docker exec command:
To backup:
docker exec -u <your_postgres_user> <postgres_container_name> pg_dump -Fc <database_name_here> > db.dump
To drop db (Don't do it on production, for test purpose only!!!):
docker exec -u <your_postgres_user> <postgres_container_name> psql -c 'DROP DATABASE <your_db_name>'
To restore:
docker exec -i -u <your_postgres_user> <postgres_container_name> pg_restore -C -d postgres < db.dump
Also, you can use the docker-compose analog of exec. In that case you can use the short service name (postgres) instead of the full container name (composeproject_postgres).
See the docs for docker exec, docker-compose exec, and pg_restore.
Since you have
expose:
  - "5432"
you can run
pg_dump -U <user> -h localhost -Fc <db_name> > 1.dump
pg_dump connects to port 5432 to make the dump; since postgres is listening on that port in the container, you will be dumping the db from the container. (For connections from the host, the port must also be published with ports:, not just exposed.)
You can also run
docker-compose exec -T postgres sh -c 'pg_dump -cU $POSTGRES_USER $POSTGRES_DB' | gzip > netbox.sql.gz
where netbox.sql.gz is the name of the backup file.
Restoring would be:
gunzip -c netbox.sql.gz | docker-compose exec -T postgres sh -c 'psql -U $POSTGRES_USER $POSTGRES_DB'
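Since the dump is just a byte stream, the gzip/gunzip pair above round-trips it exactly. A postgres-free sketch of the same pipeline, with echo standing in for pg_dump and cat for psql:

```shell
# Compress the "dump" on the way out...
echo 'CREATE TABLE example (id int);' | gzip > netbox.sql.gz
# ...and stream it back uncompressed on restore.
gunzip -c netbox.sql.gz | cat    # → CREATE TABLE example (id int);
rm -f netbox.sql.gz
```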