PostgreSQL in Docker - GUI with internal connection to specific database? [duplicate]

I'd love to run pgadmin4 in our infrastructure in such a way that the Postgres servers are preconfigured during the Docker build or on first start.
I've tried modifying the internally used /var/lib/pgadmin/pgadmin4.db SQLite DB on first start, which however results in an error in the UI (once the particular Postgres server is selected):
definition of service "" not found
I've tried the following:
Directory structure:
find ./ -print | sed -e 's;[^/]*/;|____;g;s;____|; |;g'
|____
|____dump
| |____servergroup.csv
| |____server.csv
| |____import_db.sh
|____Dockerfile
Where Dockerfile is:
cat Dockerfile
# rebuild:
# docker build -t pgadmin4:3.0-custom .
# run:
# docker run --rm -it -e PGADMIN_DEFAULT_EMAIL=admin -e PGADMIN_DEFAULT_PASSWORD=admin -p 8081:80 pgadmin4:3.0-custom
FROM dpage/pgadmin4:3.0
COPY dump/ /dump
RUN \
apk add --no-cache sqlite && \
chmod +x /dump/import_db.sh && \
# we rely on the current entrypoint.sh impl
sed -i '/python run_pgadmin.py/a \/dump\/import_db.sh' /entrypoint.sh && \
cat /entrypoint.sh
In fact it just modifies the stock entrypoint.sh (https://github.com/postgres/pgadmin4/blob/master/pkg/docker/entrypoint.sh) to run the import_db.sh script on first start.
Where dump/import_db.sh is:
cat dump/import_db.sh
#!/bin/sh
echo ".tables" | sqlite3 -csv /var/lib/pgadmin/pgadmin4.db
# remove header and `1,1,Servers` entry (would cause duplicates)
cat /dump/servergroup.csv | sed '1d' | grep -v 1,1,Servers > /tmp/servergroup.in.csv
echo "csv servergroup:"
cat /tmp/servergroup.in.csv
echo "DB servergroup:"
sqlite3 -csv -header /var/lib/pgadmin/pgadmin4.db "select * from servergroup;"
echo ".import /tmp/servergroup.in.csv servergroup" | sqlite3 -csv /var/lib/pgadmin/pgadmin4.db
# remove header
cat /dump/server.csv | sed '1d' > /dump/server.in.csv
echo "csv server:"
cat /dump/server.in.csv
echo "DB server:"
sqlite3 -csv -header /var/lib/pgadmin/pgadmin4.db "select * from server;"
echo ".import /dump/server.in.csv server" | sqlite3 -csv /var/lib/pgadmin/pgadmin4.db
CSV file contents:
cat dump/server.csv
id,user_id,servergroup_id,name,host,port,maintenance_db,username,password,role,ssl_mode,comment,discovery_id,hostaddr,db_res,passfile,sslcert,sslkey,sslrootcert,sslcrl,sslcompression,bgcolor,fgcolor,service
1,1,2,servername,localhost,5432,postgres,postgres,"",,prefer,,,"","",,<STORAGE_DIR>/.postgresql/postgresql.crt,<STORAGE_DIR>/.postgresql/postgresql.key,,,0,,,
cat dump/servergroup.csv
id,user_id,name
2,1,my-group
1,1,Servers
Any idea how to fix my error? Or any other approach that would give me a pre-configured pgadmin4 Docker container?

The current version of the dpage/pgadmin4 image is 4.24. This version supports external configuration of the server definition list via servers.json:
{
  "Servers": {
    "test": {
      "Name": "test",
      "Group": "Servers",
      "Port": 5432,
      "Username": "postgres",
      "Host": "postgres",
      "SSLMode": "prefer",
      "MaintenanceDB": "postgres"
    }
  }
}
Volume binding can be configured as below:
volumes:
  - ./servers.json:/pgadmin4/servers.json
The first time the container is started, server groups and servers will be configured automatically.
UPD: The JSON format has more fields, all of which are optional. Note that passwords cannot be imported/exported this way, for obvious security reasons.
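If you are not using compose, the same file can be mounted with a plain docker run. A minimal sketch, assuming the 4.24 image; the container name, host port, and credentials are placeholders:

# one-off run with a preloaded server list
docker run -d --name pgadmin -p 8081:80 \
  -e PGADMIN_DEFAULT_EMAIL=admin@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=admin \
  -v "$PWD/servers.json:/pgadmin4/servers.json" \
  dpage/pgadmin4:4.24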

It looks like this changes the service column value to an empty string instead of NULL.
Can you try updating the value of the service column to NULL:
sqlite> UPDATE server SET service = NULL;
then commit the changes, restart pgAdmin4, and try connecting to that server again.
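If that turns out to be the cause, the fix can also be scripted so it runs right after the CSV import (e.g. at the end of import_db.sh above). A sketch, assuming the same pgadmin4.db path:

# reset empty service names to NULL so pgAdmin doesn't try to look up a connection service
sqlite3 /var/lib/pgadmin/pgadmin4.db "UPDATE server SET service = NULL;"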

Related

run conditional migration sh script via docker compose yml

Been googling around for this but I can't get the exact answer.
I am building a mobile app and I want to run additional migration scripts when the environment is "local".
I have a docker-compose-local.yml which builds the db
database:
  build:
    context: ./src/Database/
    dockerfile: Dockerfile
  container_name: database
  ports:
    - 1401:1433
  environment:
    ACCEPT_EULA: 'Y'
    SA_PASSWORD: 'password'
    ENVIRONMENT: 'local'
  networks:
    - my-network
and then a Dockerfile with an entrypoint
ENTRYPOINT ["/usr/src/app/entry-point.sh"]
And then a script that runs migrations.
#!/bin/bash
# wait for MSSQL server to start
export STATUS=1
i=0
MIGRATIONS=$(ls migrations/*.sql | sort -V)
SEEDS=$(ls seed/*.sql)
while [[ $STATUS -ne 0 ]] && [[ $i -lt 30 ]]; do
  i=$((i+1))
  /opt/mssql-tools/bin/sqlcmd -t 1 -U sa -P $SA_PASSWORD -Q "select 1" >> /dev/null
  STATUS=$?
done
if [ $STATUS -ne 0 ]; then
  echo "Error: MSSQL SERVER took more than thirty seconds to start up."
  exit 1
fi
echo "======= MSSQL SERVER STARTED ========" | tee -a ./config.log
# Run the setup script to create the DB and the schema in the DB
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i create-database.sql
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i create-database-user.sql
for f in $MIGRATIONS
do
  echo "Processing migration $f"
  /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i $f
done
# RUN THIS ONLY FOR ENVIRONMENT = local
for s in $SEEDS
do
  echo "Seeding $s"
  /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i $s
done
Currently everything works perfectly fine, except the seeds are added for all environments.
I only want to run the seed scripts if environment = local.
How can this condition be written into this script?
Alternatively, is there a cleaner way to do this?
Thanks
There are multiple ways to achieve your goal. Three that come to mind quickly are:
1. Insert and check the environment variable in the script (what you are trying to do now)
2. Have two versions of the script in the container and change the entrypoint in the docker-compose file (either with environment variables or by using multiple compose files)
3. Build two different versions of the Docker image for the local and production environments
With your current setup the first alternative is the easiest:
# RUN THIS ONLY FOR ENVIRONMENT = local
if [[ "$ENVIRONMENT" == "local" ]]; then
  for s in $SEEDS
  do
    echo "Seeding $s"
    /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i $s
  done
fi
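A quick way to sanity-check the guard outside the container, assuming the script above is saved as entry-point.sh:

# seeds run in the first invocation only
ENVIRONMENT=local ./entry-point.sh
ENVIRONMENT=production ./entry-point.sh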

Why does psql -f COPY FROM STDIN fail when -c succeeds?

Using psql with COPY FROM STDIN works fine when executed via -c (inline command), but the same thing fails if -f (script file) is used. I've created a Docker-based test to demonstrate below; tested on macOS with zsh and on Debian with bash.
I was unable to find any relevant documentation on why this would be, but I imagine it has to do with psql's special \copy functionality. Can someone help illuminate me?
# create test data
echo "1,apple
2,orange
3,banana">testdata.csv
# create test script
echo "drop table if exists fruits;
create table fruits (id INTEGER, name VARCHAR);
copy fruits from stdin with delimiter as ',' csv;
select * from fruits">testscript.pg
# create network
docker network create pgtest
# run Postgres server
echo "starting postgres server"
PG_CONTAINER_ID=$(docker run -d --name=pgtest --rm --network=pgtest -h database -e POSTGRES_USER=user1 -e POSTGRES_PASSWORD=pass1 -e POSTGRES_DB=db1 -p 6432:5432 postgres:12)
echo "sleeping for 5 seconds (wait for server to start)"
sleep 5
docker logs $PG_CONTAINER_ID
echo "*"
echo "*"
echo "*"
echo "run psql script using inline with -c"
cat testdata.csv | docker run -i --rm --network=pgtest postgres:12 psql postgres://user1:pass1@database:5432/db1 -c "$(cat testscript.pg)"
echo "*"
echo "*"
echo "*"
echo "run psql script using file with -f"
cat testdata.csv | docker run -i -v $PWD:/host --rm --network=pgtest postgres:12 psql postgres://user1:pass1@database:5432/db1 -f /host/testscript.pg
# stop server
echo "*"
echo "*"
echo "*"
docker stop $PG_CONTAINER_ID
docker rm $PG_CONTAINER_ID
The output of the psql commands look like this:
*
*
*
run psql script using inline with -c
NOTICE: table "fruits" does not exist, skipping
id | name
----+--------
1 | apple
2 | orange
3 | banana
(3 rows)
*
*
*
run psql script using file with -f
DROP TABLE
CREATE TABLE
psql:/host/testscript.pg:5: ERROR: invalid input syntax for type integer: "select * from fruits"
CONTEXT: COPY fruits, line 1, column id: "select * from fruits"
In the first case (execution with -c), the COPY data are read from standard input.
In the second case (execution with -f), the input file acts as input to psql (if you like, standard input is redirected from that file). So PostgreSQL interprets the rest of the file as COPY data and complains about the content. You'd have to mix the COPY data into the file:
/* script with copy data */
COPY mytable FROM STDIN (FORMAT 'csv');
1,item 1,2021-11-01
2,"item 2, better",2021-11-11
\.
/* next statement */
ALTER TABLE mytable ADD newcol text;
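Alternatively, if you want to keep the data on the pipe instead of embedding it in the script, psql's \copy meta-command can read from psql's own standard input via pstdin. A sketch against the test above (testscript2.pg is a made-up file name):

# \copy ... from pstdin takes the COPY data from psql's stdin, even under -f
cat > testscript2.pg <<'EOF'
drop table if exists fruits;
create table fruits (id INTEGER, name VARCHAR);
\copy fruits from pstdin with (format csv)
select * from fruits;
EOF
cat testdata.csv | docker run -i -v $PWD:/host --rm --network=pgtest postgres:12 psql postgres://user1:pass1@database:5432/db1 -f /host/testscript2.pg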

Why isn't my postgres docker container mounting the database?

So I have two Docker images which are used to host Postgres servers. One is for first-time setup and puts a .sh and a .sql script into the appropriate directory for Postgres to load them (which works as expected). The second image is for running any other time, where the database has already been created. The Dockerfiles for these look like this:
FROM postgres:9.3
RUN mkdir -p /etc/postgresql/9.3/main/
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
ADD Setup.sh /docker-entrypoint-initdb.d/1.sh
ADD schema.sql /docker-entrypoint-initdb.d/2.sql
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
and
FROM postgres:9.3
MAINTAINER Andrew Broadbent <andrew.broadbent@manchester.ac.uk>
RUN mkdir -p /etc/postgresql/9.3/main/
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
I run the first from the following shell script:
# Create the volumes for the data backend database.
docker volume create --name psql-data-etc
docker volume create --name psql-data-log
docker volume create --name psql-data-lib
# Create data store database
echo -e "\n${TITLE}[Data Store Database]${NC}"
read -p " User name: " user
read -s -p " Password: " password
echo ""
read -p " Database name: " dbname
docker run -v psql-data-etc:/etc/postgresql -v psql-data-log:/var/log/postgresql -v psql-data-lib:/var/lib/postgresql -e "NEW_USER=$user" -e "NEW_PASSWORD=$password" -e "POSTGRES_DB=$dbname" -p 9001:5432 -P -d --name psql-data postgres-setup-datastore
and I run the second from the following:
echo -e "\n${TITLE}[Data Store Database]${NC}"
read -p " User name: " user
read -s -p " Password: " password
docker run -v psql-data-etc:/etc/postgresql -v psql-data-log:/var/log/postgresql -v psql-data-lib:/var/lib/postgresql -e "POSTGRES_USER=$user" -e "POSTGRES_PASSWORD=$password" -p 9001:5432 -P --name psql-data postgres-runtime
If I run the first script, connect to the database, make some changes, then remove the container and run the second script, the changes I made to the database aren't persisted. How do I persist them? I managed to get it working earlier, but I'm not sure what I've changed and can't seem to get it working any more.
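One thing worth checking (an assumption based on the Dockerfiles shown, not something the question confirms): the postgres:9.3 base image declares VOLUME /var/lib/postgresql/data, so mounting only the parent /var/lib/postgresql leaves the actual data directory on an anonymous volume that is discarded with the container. A sketch of mounting the data directory itself (psql-data-pgdata is a made-up volume name):

# persist the directory postgres actually writes to
docker volume create --name psql-data-pgdata
docker run -v psql-data-pgdata:/var/lib/postgresql/data \
  -e "POSTGRES_USER=$user" -e "POSTGRES_PASSWORD=$password" \
  -p 9001:5432 -P --name psql-data postgres-runtime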

How do I replace the --D flag for pg_dumpall in Postgres?

I'm trying to create a PostgreSQL backup script using this answer as the basis of my script. The script is:
#! /bin/bash
# backup-postgresql.sh
# by Craig Sanders
# this script is public domain. feel free to use or modify as you like.
DUMPALL="/usr/bin/pg_dumpall"
PGDUMP="/usr/bin/pg_dump"
PSQL="/usr/bin/psql"
# directory to save backups in, must be rwx by postgres user
BASE_DIR="/var/backups/postgres"
YMD=$(date "+%Y-%m-%d")
DIR="$BASE_DIR/$YMD"
mkdir -p $DIR
cd $DIR
# get list of databases in the system, exclude the template dbs
DBS=$($PSQL -l -t | egrep -v 'template[01]' | awk '{print $1}')
# first dump entire postgres database, including pg_shadow etc.
$DUMPALL -D | gzip -9 > "$DIR/db.out.gz"
# next dump globals (roles and tablespaces) only
$DUMPALL -g | gzip -9 > "$DIR/globals.gz"
# now loop through each individual database and backup the schema and data separately
for database in $DBS; do
SCHEMA=$DIR/$database.schema.gz
DATA=$DIR/$database.data.gz
# export data from postgres databases to plain text
$PGDUMP -C -c -s $database | gzip -9 > $SCHEMA
# dump data
$PGDUMP -a $database | gzip -9 > $DATA
done
The line:
$DUMPALL -D | gzip -9 > "$DIR/db.out.gz"
is returning this error:
psql: FATAL: role "root" does not exist
/usr/lib/postgresql/9.3/bin/pg_dumpall: invalid option -- 'D'
When I look at the PostgreSQL docs, there doesn't seem to be a -D option anymore. What should the updated command look like?
This is the modified script I ended up using to periodically back up my PostgreSQL database:
#! /bin/bash
# backup-postgresql.sh
# by Craig Sanders
# this script is public domain. feel free to use or modify as you like.
DUMPALL="/usr/bin/pg_dumpall"
PGDUMP="/usr/bin/pg_dump"
PSQL="/usr/bin/psql"
# directory to save backups in, must be rwx by postgres user
BASE_DIR="/var/backups/postgres"
YMD=$(date "+%Y-%m-%d")
DIR="$BASE_DIR/$YMD"
mkdir -p $DIR
cd $DIR
# get list of databases in the system, exclude the template dbs
DBS=$($PSQL -l -t | egrep -v 'template[01]' | awk '{print $1}' | egrep -v '^\|' | egrep -v '^$')
# first dump entire postgres database, including pg_shadow etc.
$DUMPALL -c -f "$DIR/db.out"
# next dump globals (roles and tablespaces) only
$DUMPALL -g -f "$DIR/globals"
# now loop through each individual database and backup the schema and data separately
for database in $DBS; do
SCHEMA=$DIR/$database.schema
DATA=$DIR/$database.data
# export data from postgres databases to plain text
$PGDUMP -C -c -s $database -f $SCHEMA
# dump data
$PGDUMP -a $database -f $DATA
done
# delete backup files older than 30 days
OLD=$(find $BASE_DIR -type d -mtime +30)
if [ -n "$OLD" ] ; then
echo deleting old backup files: $OLD
echo $OLD | xargs rm -rfv
fi
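To run the script periodically, a sketch of a crontab entry for the postgres user (the install path is an assumption):

# edit with: crontab -u postgres -e
# run nightly at 02:00
0 2 * * * /usr/local/bin/backup-postgresql.sh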