pg_dump of postgresql pod with no terminal access as it's a prod env - postgresql

So I want to take a pg_dump of a PostgreSQL instance running in a pod (OpenShift). I was thinking of creating a CronJob that would ssh into the PostgreSQL pod and run the pg_dump command, but a CronJob actually creates its own pod and executes the command there. Any idea how we can create a .bak file / take a backup of a PostgreSQL instance whose terminal we cannot access?

Thanks @a_horse_with_no_name, it worked. I created a dummy PostgreSQL pod in the dev namespace, where I had terminal access, and executed the pg_dump command as follows:
pg_dump --username=username --host=host --port=port postgres > pguat.bak
It created the pguat.bak file. I then restored it to the new PostgreSQL pod with the following command:
psql --username=username --host=host --port=port postgres < pguat.bak
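The same idea also works unattended: because pg_dump connects over the network, a scheduled job's own pod can simply be the client, which sidesteps the ssh problem entirely. A minimal sketch (the client image, the service host pgsql-service, and the PGPASSWORD handling are assumptions; adapt them to your cluster):
# Run pg_dump in a throwaway client pod; its stdout streams back to the caller
oc run pgdump-client --rm -i --restart='Never' \
  --image=docker.io/bitnami/postgresql:latest \
  --env="PGPASSWORD=$PGPASSWORD" \
  --command -- pg_dump --username=username --host=pgsql-service --port=5432 postgres \
  > pguat.bak
A CronJob pod can run the same client command on a schedule, as long as it writes the dump somewhere persistent.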

Related

Install postgres extension into Bitnami container as superuser on initial startup

I am using the Bitnami Postgres Docker container and noticed that my ORM, which uses UUIDs, requires the uuid-ossp extension to be available. After some trial and error I noticed that I had to install it manually using the postgres superuser, since my custom non-root user created via the POSTGRESQL_USERNAME environment variable is not allowed to execute CREATE EXTENSION "uuid-ossp";.
I'd like to know what a script inside /docker-entrypoint-initdb.d might look like that can execute this command against the specific database; more precisely, it should automate the following steps I had to perform manually:
psql -U postgres   # this requires interactive password input
\c target_database
CREATE EXTENSION "uuid-ossp";
I think that something like this should work; PGPASSWORD supplies the password, so no interactive input is needed, and the SQL is fed to psql on stdin instead of being typed interactively:
PGPASSWORD=$POSTGRESQL_POSTGRES_PASSWORD psql -U postgres <<'EOSQL'
\c target_database
CREATE EXTENSION "uuid-ossp";
EOSQL
If you want to do it on startup, you need to add a file to the startup scripts. Check out the config section of their image documentation: https://hub.docker.com/r/bitnami/postgresql-repmgr/
If you're deploying it via Helm, you can add your scripts under the postgresql.initdbScripts variable in values.yaml.
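For instance, a startup script dropped into the init directory might look like the following sketch (the file name is arbitrary; target_database is the placeholder from the question, and POSTGRESQL_POSTGRES_PASSWORD is the Bitnami superuser password variable):
#!/bin/bash
# /docker-entrypoint-initdb.d/20-uuid-ossp.sh -- executed once, on first startup
PGPASSWORD="$POSTGRESQL_POSTGRES_PASSWORD" psql -U postgres -d target_database \
  -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";'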
If the deployment is already running, you'll need to connect as the repmgr user, not the Postgres user you created. By default that user is deliberately NOT a superuser, for security purposes; this way most of your connections are unprivileged.
For example, I deployed bitnami/postgresql-ha via Helm to a k8s cluster in a namespace called "data" with the release name "prod-pg". I can connect to the database as a privileged user by running:
export REPMGR_PASSWORD=$(kubectl get secret --namespace data prod-pg-postgresql-ha-postgresql -o jsonpath="{.data.repmgr-password}" | base64 --decode)
kubectl run prod-pg-postgresql-ha-client \
--rm --tty -i --restart='Never' \
--namespace data \
--image docker.io/bitnami/postgresql-repmgr:14 \
--env="PGPASSWORD=$REPMGR_PASSWORD" \
--command -- psql -h prod-pg-postgresql-ha-postgresql -p 5432 -U repmgr -d repmgr
This drops me into an interactive terminal:
$ ./connect-db.sh
If you don't see a command prompt, try pressing enter.
repmgr=#

Backup PostgreSQL database running in a container on Digital Ocean

I'm a newbie to Docker. I'm working on a project written by another developer; it runs on Digital Ocean, Ubuntu 18.04, and consists of 2 containers (one for the Django app, one for the PostgreSQL database).
I now need to take a backup of the database. I found a bash script written by the previous programmer:
#!/usr/bin/env bash
### Create a database backup.
###
### Usage:
### $ docker-compose -f <environment>.yml (exec |run --rm) postgres backup
set -o errexit
set -o pipefail
set -o nounset
working_dir="$(dirname ${0})"
source "${working_dir}/_sourced/constants.sh"
source "${working_dir}/_sourced/messages.sh"
message_welcome "Backing up the '${POSTGRES_DB}' database..."
if [[ "${POSTGRES_USER}" == "postgres" ]]; then
message_error "Backing up as 'postgres' user is not supported. Assign 'POSTGRES_USER' env with another one and try again."
exit 1
fi
export PGHOST="${POSTGRES_HOST}"
export PGPORT="${POSTGRES_PORT}"
export PGUSER="${POSTGRES_USER}"
export PGPASSWORD="${POSTGRES_PASSWORD}"
export PGDATABASE="${POSTGRES_DB}"
backup_filename="${BACKUP_FILE_PREFIX}_$(date +'%Y_%m_%dT%H_%M_%S').sql.gz"
pg_dump | gzip > "${BACKUP_DIR_PATH}/${backup_filename}"
message_success "'${POSTGRES_DB}' database backup '${backup_filename}' has been created and placed in '${BACKUP_DIR_PATH}'."
My first question: is that command right? I mean, if I ran
docker-compose -f production.yml (exec |run --rm) postgres backup
would that create a backup of my database at the written location?
Second question: can I run this command while the database container is running, or should I run docker-compose down, then run the backup command, then docker-compose up again?
Sure, you can run that script to take a backup. One way is to execute a shell in the container with docker-compose exec db /bin/bash and then run the script there.
Another way is to run a new postgres container attached to the network the compose stack created:
docker run -it --name pgback -v /path/backup/host:/var/lib/postgresql/data --network composeNetwork postgres /bin/bash
This creates a new postgres container attached to the compose network, with a bind mount attached; you can then copy the script into the container and back the database up to the volume, so the dump is saved outside the container.
Then, whenever you want to take a backup, simply start the container again:
docker start -a -i pgback
You don't need to create another compose file: just copy the script into the container and run it. You could also build a new postgres image that contains the script and run it from CMD. There are plenty of ways to do it.
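To make the first question concrete: the usage comment at the top of the script already gives the invocation, so with your environment file it would be the following sketch (assuming the database service is indeed named postgres in production.yml):
# Inside the already-running database container
docker-compose -f production.yml exec postgres backup
# ...or in a one-off container if the service is not running
docker-compose -f production.yml run --rm postgres backup
As for the second question: pg_dump takes a consistent snapshot while the server is up, so there is no need to stop the stack to back it up.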

A container is a database server. How to ask its Dockerfile to complete its construction after that container has started?

I am using a postgis/postgis Docker image to set a database server for my application.
The database server must have a tablespace created, then a database.
Then each time another application will start from another container, it will run a Liquibase script that will update the database schema (create tables, index...) when needed.
On a terminal, to prepare the database container, I'm running these commands:
# Run a naked Postgis container
sudo docker run --name ecoemploi-postgis \
  -e POSTGRES_PASSWORD=postgres \
  -d -v /data/comptes-france:/data/comptes-france postgis/postgis

# Send 'bash level' commands to create the directory for the tablespace
sudo docker exec -it ecoemploi-postgis \
  bin/sh -c 'mkdir /tablespace && chown postgres:postgres /tablespace'
Then, to complete my step 1, I have to run SQL statements to create the tablespace from a PostGIS point of view, and to create the database with a CREATE DATABASE.
I connect manually to psql inside my container:
sudo docker exec -it ecoemploi-postgis bin/sh \
  -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
and run these commands manually:
CREATE TABLESPACE data LOCATION '/tablespace';
CREATE DATABASE comptesfrance TABLESPACE data;
exit
But I would like to have a container created from a single Dockerfile that has done all the needed work. The difficulty is that it has to be done in two parts:
One before the container is started (creating directories, granting them user:group ownership).
One after it is started for the first time: declaring the tablespace and creating the database. If I understand the base image correctly, this should happen after the docker-entrypoint.sh entrypoint has run?
What is the right way to write a Dockerfile that creates a container having done all these steps?
The PostGIS image "is based on the official postgres image", so it should be able to use the /docker-entrypoint-initdb.d mechanism. Any files you put in that directory will be run the first time the database container is started. The postgis Dockerfile already uses this directory to install the PostGIS extensions into the default database.
That means you can put your build-time setup directly into the Dockerfile, and copy the startup-time script into that directory.
FROM postgis/postgis:12-3.0
RUN mkdir /tablespace && chown postgres:postgres /tablespace
COPY createdb.sql /docker-entrypoint-initdb.d/20-createdb.sql
# Use default ENTRYPOINT/CMD from base image
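For example, createdb.sql can simply contain the two statements you ran manually above (a sketch; init scripts in /docker-entrypoint-initdb.d run as the postgres superuser on first start):
-- createdb.sql: executed once, when the data directory is first initialized
CREATE TABLESPACE data LOCATION '/tablespace';
CREATE DATABASE comptesfrance TABLESPACE data;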
For the particular setup you describe, this may not be necessary. Each database runs in an isolated filesystem space and starts with an empty data directory, so there's not a specific need to create an alternate data directory; Docker style is to just run multiple databases if you need isolated storage. Similarly, the base postgres image will create a database for you at first start (named by the POSTGRES_DB environment variable).
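Concretely, if the custom tablespace turns out to be unnecessary, first start could be reduced to a single command; this sketch reuses the run command from the question and just adds the stock POSTGRES_DB variable:
# The base image creates the comptesfrance database automatically on first start
sudo docker run --name ecoemploi-postgis \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=comptesfrance \
  -d -v /data/comptes-france:/data/comptes-france postgis/postgis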
In order to run a container, your Dockerfile must be functional and complete.
You must put the queries in a bash file and, on the last line, add an ENTRYPOINT that runs this bash script.

pg_upgrade tool failed: invalid "unknown" user columns

PostgreSQL update from 9.6 to 10.4 (on Fedora 28) has me stuck: one table in one database has a column of data type "unknown". I would gladly remove the column, but since I cannot get the postgresql service to start (because "An old version of the database format was found"), I have no access to the database. In more detail:
postgresql-setup --upgrade fails.
/var/lib/pgsql/upgrade_postgresql.log attributes this failure to a column with data type "unknown": "...Checking for invalid 'unknown' user columns: fatal .... check tables_using_unknown.txt". And "tables_using_unknown.txt" specifies one column in one table that I wish I could drop, but can't, because I can't get the server to start:
systemctl start postgresql.service fails, and
systemctl status postgresql.service complains about the "old version of the database"
I have found no obvious way to install postgresql 9.6 on Fedora 28.
Is there a way to drop the column without a running server? Or at least produce a dump of the database? Or can I force the upgrade tool to drop columns with data type "unknown"? Or is there any other obvious solution that I'm missing?
Here's what finally worked for me:
I used a docker container (on the same machine) with postgres 9.6 to access the "old" database directory,
converted the problematic column from "unknown" to "text" in the container,
dumped the relevant database to a file on the container's host, and then
loaded the dumped db into the postgres 10.4 environment.
Not pretty, but worked. In more detail:
I copied postgresql's data directory (/var/lib/pgsql/data/ in Fedora) -- containing the database that could not be converted -- to a new, empty directory /home/hj/pg-problem/.
I created a Dockerfile (a text file) called "Docker-pg-problem" with the following content:
FROM postgres:9.6
# my databases need German locale;
# if you just need en_US, comment the next two lines out.
RUN localedef -i de_DE -c -f UTF-8 -A /usr/share/locale/locale.alias de_DE.UTF-8
ENV LANG de_DE.utf8
and saved it as the only file in the new, empty folder /home/hj/pg-problem/docker/.
I started the docker daemon and ran a container that uses the data from my copy of the problematic data (in /home/hj/pg-problem/data/) as data directory for the postgres 9.6 server in the container. (NB: the "docker build" command in line three needs a working internet connection, takes a while, and should finish saying "Successfully built").
root@host: cd /home/hj/pg-problem/docker
root@host: service docker start
root@host: docker build -t hj/failed-update -f Docker-pg-problem .
root@host: docker run -it --rm -p 5431:5432 \
  -v /home/hj/pg-problem/data:/var/lib/postgresql/data:z \
  --name failed-update -e POSTGRES_PASSWORD=secret hj/failed-update
Then, I opened a terminal in the container to fix the database:
hj@host: docker exec -it failed-update bash
Inside the container, I fixed and dumped the database:
root@container: su postgres
postgres@container: psql <DB-name>
<DB-name>=# alter table <Table-name> alter column <Col-Name> type text;
<DB-name>=# \q
postgres@container: pg_dump <DB-name> > /var/lib/postgresql/data/dbREPAIRED.sql
I dumped the db right into the data directory so I could easily access the dumped file from the docker host.
On the docker host, the dumped database was, obviously, in /home/hj/pg-problem/data/dbREPAIRED.sql, and from there I could load it into postgresql 10:
postgres@host: createdb <DB-name>
postgres@host: psql <DB-name> < /home/hj/pg-problem/data/dbREPAIRED.sql
Since I was on a laptop with limited disk space, I deleted the docker stuff:
root@host: docker rm $(docker ps -a -q)
root@host: docker rmi $(docker images -q)

Unable to start postgresql service in Redhat linux 7

I have installed PostgreSQL 9.4 on a Redhat 7 server. It was installed through postgresql-9.4.3-1-linux-x64.run. It displayed a clear message: "postgres is installed on your machine". Now when I log in with
su - postgres
it doesn't ask for a password and goes to a bash prompt. If I type psql, it displays "command not found". When I tried starting the service as the root user with
service postgresql initdb
I get:
The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl.
I tried restarting postgres, which didn't work. I searched and found nothing. I know the issue is with starting the service.
service postgresql initdb
initdb is an independent command that creates a new database cluster.
initdb -- create a new PostgreSQL database cluster
initdb [option...] [--pgdata | -D] directory
You must use it on its own, not as an argument to the service command. Read the documentation on how to use this command: initdb
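A standalone invocation looks like the following sketch (the data-directory path is an assumption and must match your installation):
# Create a new cluster in an empty directory, as the postgres user
su - postgres -c 'initdb --pgdata=/var/lib/pgsql/data'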
Use service postgresql start to start the postgresql service and service postgresql stop to stop it.
psql: "command not found"
Try switching to the postgres user with su postgres (without the dash); the dash changes the $PATH environment variable. If that doesn't help, specify the command by its full path, for example /usr/bin/psql.
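If the full path is unknown, it can be located first; in this sketch the /opt path is only the usual default of the EnterpriseDB-style .run installer, so treat it as an assumption:
# Find the psql binary, then extend PATH so a plain `psql` works
find / -name psql -type f 2>/dev/null
export PATH="$PATH:/opt/PostgreSQL/9.4/bin"   # assumed installer location
psql --version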