Rancher - PostgreSQL Docker container hangs on interactive exec - postgresql

I used to have Docker on Windows, but now it has been replaced by Rancher on Windows in my company.
In theory my docker commands should not be affected by that, but they were.
-- starting the container
docker run -d --name my-database \
-e POSTGRES_PASSWORD=secret \
-p 5432:5432 \
postgres:12.12-alpine
-- create database
docker exec -i my-database psql -U postgres < database.sql
It always hangs on the last line of the database.sql script (whatever that line happens to be).
database.sql
CREATE ROLE my_app LOGIN PASSWORD 'my-app-123';
CREATE DATABASE my_app_db
WITH
ENCODING = 'UTF8'
OWNER = my_app
CONNECTION LIMIT = 20;
If I get a shell and run the statements manually, it works fine:
$ docker exec -i my-database bash
bash-5.0# psql -h localhost -U postgres
postgres=# CREATE ROLE my_app LOGIN PASSWORD 'my-app-123';
CREATE ROLE
postgres=# CREATE DATABASE my_app_db
postgres-# WITH
postgres-# ENCODING = 'UTF8'
postgres-# OWNER = my_app
postgres-# CONNECTION LIMIT = 20;
CREATE DATABASE
postgres-#
Any idea or suggestion?

Related

How to supply password to 'psql' command while running PostgreSQL in Google Colab

I have installed a PostgreSQL server in Google Colab as follows:
# Install postgresql server
!apt update > /dev/null
!apt install postgresql > /dev/null
!service postgresql start
and have then configured the 'postgres' userid and database as follows:
# Setup a password `pass` for username `postgres`
!sudo -u postgres psql -U postgres -c "ALTER USER postgres PASSWORD 'pass';"
#
# Setup a database with name `praxis` to be used
!sudo -u postgres psql -U postgres -c 'DROP DATABASE IF EXISTS praxisdb;'
!sudo -u postgres psql -U postgres -c 'CREATE DATABASE praxisdb;'
Subsequently, I have created a table, inserted data and run the select command
!psql -h localhost -p 5432 -Upostgres -W -dpraxisdb -c 'create table dept ... ;'
!psql -h localhost -p 5432 -Upostgres -W -dpraxisdb -c "INSERT INTO Dept ... ;"
!psql -h localhost -p 5432 -Upostgres -W -dpraxisdb -c "select * from dept;"
All this works perfectly, but each time I am prompted to enter the password, which I would like to avoid. Based on what I read in the documentation on password files, I created a file ~/.pgpass as follows:
!echo "localhost:5432:praxisdb:postgres:pass" > ~/.pgpass
!chmod 0600 ~/.pgpass
Now, when I execute any command, e.g.
!psql -c "select * from dept;"
I get the error
psql: error: FATAL: role "root" does not exist
Where does this role root come from? I looked at the file ~/.pgpass and noted that its owner and group were root, so I changed them to postgres using chown and chgrp, but that does not solve the problem. What else should I do to solve this?
The version of Postgres and the OS are as follows:
!sudo -u postgres psql -V
psql (PostgreSQL) 12.13 (Ubuntu 12.13-0ubuntu0.20.04.1)
You misunderstand how the password file works. You still have to specify the host, port, database and user in your connection request; the client library then searches the password file for the matching entry and reads the password from it.
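In other words, keep the connection parameters and only drop -W; a minimal sketch, assuming the ~/.pgpass entry from the question (localhost:5432:praxisdb:postgres:pass) and that the file is readable by the OS user running psql:
!psql -h localhost -p 5432 -U postgres -d praxisdb -c "select * from dept;"
Because host, port, database and user match the ~/.pgpass line, the password is picked up automatically and no prompt appears.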

bash script to have a postgres DB in a docker container

I'm having trouble creating a Postgres DB using this bash script:
#! /bin/bash
docker pull postgres
docker run --name coverage-postgres -e POSTGRES_PASSWORD=password -p 5432:5432 -d postgres
export CONTAINER_ID=$(sudo docker ps -a | grep coverage-postgres | head -c12)
sleep 2s
sudo docker exec -it $CONTAINER_ID psql -U postgres -c "create user coverage_user with password 'password';"
sleep 0.5
sudo docker exec -it $CONTAINER_ID psql -U postgres -c "create database coverage owner coverage_user;"
sleep 0.5
sudo docker exec -it $CONTAINER_ID psql -U postgres -c "grant all privileges on database coverage to coverage_user;"
sleep 0.5
sudo docker exec -it $CONTAINER_ID psql -U postgres -c "\c coverage coverage_user" # it seems useless...
sleep 0.5
sudo docker exec -it $CONTAINER_ID psql -U postgres -c "CREATE TABLE IF NOT EXISTS postal_codes (id,...;"
sleep 0.5
sudo docker exec -it $CONTAINER_ID psql -U postgres -c "CREATE UNIQUE INDEX ... ;"
# exit from container
exit
# restart container
docker start $CONTAINER_ID
In particular, the database is created, the user is created, and the table is created, but... it ends up in the postgres db, not in the coverage db.
I've tried "CREATE TABLE coverage.postal_codes", but coverage is a db, not a schema, so it didn't work.
I've tried psql -U coverage_user, but the system tells me that the database coverage_user doesn't exist.
So of course I thought "I have to specify the database!". Then I tried psql -U coverage, using the name of the database, but this time the system makes fun of me and, changing its mind, tells me that the role coverage doesn't exist.
I tried a workaround: within the command -c "\c coverage coverage_user" I concatenated the other commands this way:
-c "\c coverage coverage_user; CREATE TABLE...; CREATE UNIQUE INDEX...;"
but, of course, that didn't work either.
One premise: I know there are other ways to do this, but I would like to understand what I am missing with these specific commands.
Solution
#! /bin/bash
docker pull postgres
docker run --name coverage-postgres -e POSTGRES_PASSWORD=password -p 5432:5432 -d postgres
export CONTAINER_ID=$(sudo docker ps -a | grep coverage-postgres | head -c12)
sleep 2s
docker exec -it $CONTAINER_ID psql -U postgres -c "create user coverage_user with password 'password';"
sleep 0.5
docker exec -it $CONTAINER_ID psql -U postgres -c "create database coverage owner coverage_user;"
sleep 0.5
docker exec -it $CONTAINER_ID psql -U postgres -c "grant all privileges on database coverage to coverage_user;"
sleep 0.5
docker exec -it $CONTAINER_ID psql -U coverage_user -c "CREATE TABLE IF NOT EXISTS postal_codes (id int)" coverage
Explanation
https://www.postgresql.org/docs/9.2/app-psql.html
psql [option...] [dbname [username]]
Just add the dbname after the options, and set the user with the -U option. You can also pass the dbname as the argument of -d.
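For illustration, the two spellings are equivalent; a minimal sketch reusing the names from the script above:
docker exec -it $CONTAINER_ID psql -U coverage_user -d coverage -c "CREATE TABLE IF NOT EXISTS postal_codes (id int)"
docker exec -it $CONTAINER_ID psql -U coverage_user -c "CREATE TABLE IF NOT EXISTS postal_codes (id int)" coverage
Either way, the statement runs as coverage_user against the coverage database instead of the default postgres database.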

Importing postgres database in a docker postgres container

I am trying to import an existing database into a postgres docker container.
This is how I proceed:
docker run --name pg-docker -e POSTGRES_PASSWORD=***** -d -p 5432:5432 -v BCS/postgres_data postgres
Then
docker exec -it pg-docker bash
psql -U postgres
postgres=# CREATE DATABASE myDB;
psql -U postgres myDB < BCS/mydb.sql
but when I execute the command \dt I get the error Did not find any relations,
even though my database already has tables.
So what am I doing wrong?
First thing, it is better to go with the approach mentioned by #LinPy, or better still, to copy the dump at build time.
Dockerfile
FROM postgres
COPY mydb.sql /docker-entrypoint-initdb.d/
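For reference, building and running that image might look like this (the tag pg-with-initdb is just an example name); note that scripts in /docker-entrypoint-initdb.d are only executed when the container starts with an empty data directory:
docker build -t pg-with-initdb .
docker run --name pg-docker -e POSTGRES_PASSWORD=***** -d -p 5432:5432 pg-with-initdb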
Another option, if you only need the DB to be created and have no script to run:
FROM postgres
ENV POSTGRES_DB=mydb
This will create the DB for you.
POSTGRES_DB
This optional environment variable can be used to define a different
name for the default database that is created when the image is first
started. If it is not specified, then the value of POSTGRES_USER will
be used.
In the above, the Postgres entrypoint will take care of the SQL script.
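The same POSTGRES_DB variable can also be passed at run time instead of being baked into an image; a minimal sketch:
docker run --name pg-docker -e POSTGRES_PASSWORD=***** -e POSTGRES_DB=mydb -d -p 5432:5432 postgres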
Second thing, about the current issue with the database name: Postgres will not simply keep the uppercase letters, unless you use some trick (such as quoting the identifier).
Postgresql treats the db name as lowercase, normalising it. However, the field in the postgres-api does not replicate this behaviour, thus allowing you to create a database with capital letters. The fix could be to warn the user that no uppercase letters are allowed in the db name and to add a validation rule to the API to stop a user creating such a database.
postgres-api
Change your command to
create DB
docker exec -it pg-docker bash
psql -U postgres
postgres=# CREATE DATABASE myDB;
verify DB
postgres=# \l
                              List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 mydb      | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
so the import command will be
psql -U postgres mydb < BCS/mydb.sql
or
psql -d mydb -U postgres -f ab.sql
You can also redirect your SQL scripts to the container like this:
Postgres user:
docker exec -i my-postgres-container psql -U postgres < created-db.sql
Regular user:
docker exec -i my-postgres-container psql -d my-db -U my-user < create-schema.sql
If you are sure that the database installed everything correctly but you are still not seeing tables, there are 2 things you should double-check:
When you connect, are you connecting to the right database? If you are using psql in a terminal, the database is specified with the -d switch.
psql -h <host> -U <user> -d <dbname>
You can also change your database after you connect using the \connect <dbname> command.
Are you specifying the right schema? \dt will show you tables, but you need to specify a schema first using set schema:
postgres=# \dt
...
<no tables>
...
postgres=# set schema 'my_schema';
postgres=# \dt
...
<my tables>
...
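If you do not know which schema name to use, you can list the schemas and check the current search path first; a short sketch (my_schema is only an example name):
postgres=# SHOW search_path;
postgres=# \dn
postgres=# set schema 'my_schema';
postgres=# \dt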

How to restore postgres within a docker?

I create backups like this: docker exec DOCKER pg_dump -U USER -F t DB | gzip > ./FILE.tar.gz
What's the best way to restore the database given that the database runs within a container?
For your case (the backup file lives on the host, as produced by the command above):
gunzip -c backup.tar.gz | docker exec -i <CONTAINER> pg_restore -U <USER> -F t -d <DB>
Remote restore is also available if your container is public facing and remote connections are allowed in pg_hba.conf for postgresql:
gunzip < backup.tar.gz | pg_restore -U <USER> -F t -d <DB> -h <HOST_IP> -p 5432
As a rule of thumb, it is a good idea to document the backup and restore commands specific to your project.
How to take a backup of the data in a running PostgreSQL container
Create a folder in your root:
mkdir -p '/myfolder/bdbackup'
Download the postgres image you are using and execute the following command:
docker run --name demo1 -e POSTGRES_PASSWORD=password -v /myfolder/bdbackup:/var/lib/postgresql/data -d postgres
docker exec -it demo1 psql -U postgres
The backup will be stored in the folder /myfolder/bdbackup.
You can stop or kill the container at any time, but the data will remain on the host.
Then re-run a postgres container with the same command:
docker run --name demo2 -e POSTGRES_PASSWORD=password -v /myfolder/bdbackup:/var/lib/postgresql/data -d postgres
docker exec -it demo2 psql -U postgres
and execute the following query: select * from emp;
You can see that the data has been restored.

How to generate a Postgresql Dump from a Docker container?

I would like to have a way to enter into the Postgresql container and get a data dump from it.
Use the following command from a UNIX or a Windows terminal:
docker exec <container_name> pg_dump <db_name> > backup
The following command will dump only inserts from all tables:
docker exec <container_name> pg_dump --column-inserts --data-only <db_name> > inserts.sql
I have a container named postgres with the mounted volume -v /backups:/backups.
To back up the gzipped DB my_db I use:
docker exec postgres pg_dump -U postgres -F t my_db | gzip >/backups/my_db-$(date +%Y-%m-%d).tar.gz
Now I have
user#my-server:/backups$ ls
my_db-2016-11-30.tar.gz
Although the mountpoint solution above looked promising, the following is the only solution that worked for me after multiple iterations:
docker run -it -e PGPASSWORD=my_password postgres:alpine pg_dump -h hostname -U my_user my_db > backup.sql
What was unique in my case: I have a password on the database that needs to be passed in; I needed to pass in the tag (alpine); and finally the host's version of the psql tools was different from the docker version.
This one, using the container_name instead of the database/schema name, works for me:
docker exec {container_name} pg_dump -U {user_name} > {backup_file_name}
In my instance, the database name, user and password are assumed to be declared in docker-compose.yaml.
I hope it can help someone.
For those who struggled with permissions, I used the following command with success to perform my dump:
docker exec -i MY_CONTAINER_NAME /bin/bash -c "PGPASSWORD=MY_PASSWORD pg_dump -Fc -h localhost -U postgres MY_DB_NAME" > /home/MY_USER/db-$(date +%d-%m-%y).backup
This will mount the current working directory (pwd) and include your environment variables:
docker run -it --rm \
--env-file <(env) \
-w /working \
--volume $(pwd):/working \
postgres:latest /usr/bin/pg_dump -Fc -h localhost -U postgres MY_DB_NAME > /working/db-$(date +%d-%m-%y).backup
Another workaround is to start postgresql with a mountpoint to the location of the dump in docker,
like docker run -v <location of the files>.
Then perform a docker inspect on the running container:
docker inspect <container_id>
You can find a "Volumes" tag inside with a corresponding location. Go to that location and you will find all the postgresql/mysql files. It worked for me. Let us know if it works for you as well.
Good luck
To run the container that has the Postgres user and password, you need to have the preconfigured variables set as container environment variables.
For example:
docker run -it --rm --link <container_name>:<data_container_name> -e POSTGRES_PASSWORD=<password> postgres /usr/bin/pg_dump -h <data_container_name> -d <database_name> -U <postgres_username> > dump.sql
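For completeness, a dump.sql produced this way can be loaded back with plain psql, mirroring the import commands shown in the earlier answers (all names are placeholders):
docker exec -i <container_name> psql -U <postgres_username> -d <database_name> < dump.sql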