Postgres container not recognizing POSTGRES_USER_FILE environment variable

For whatever reason I can't get postgres to recognize the POSTGRES_USER_FILE environment variable.
With the following secret:
echo "pass" | docker secret create psql_pass -
When I create the following service, I can log in (using Adminer) with the account "pass":"pass":
docker service create --name psql --secret psql_pass -e POSTGRES_PASSWORD_FILE=/run/secrets/psql_pass -e POSTGRES_USER=pass postgres
But when I create THIS service, I CANNOT log in with the same account "pass":"pass":
docker service create --name psql --secret psql_pass -e POSTGRES_PASSWORD_FILE=/run/secrets/psql_pass -e POSTGRES_USER_FILE=/run/secrets/psql_pass postgres
(Yes, it should be user:pass, but I wanted to illustrate that it's the exact same secret being used.)
I have verified that the secret is mounted correctly in the container (cat /run/secrets/psql_pass).
Am I missing something here? Why isn't POSTGRES_USER_FILE being recognized?

Related

How to put Nextcloud in Kubernetes into maintenance mode

I'm trying to migrate my Nextcloud instance to a Kubernetes cluster. I've successfully deployed a Nextcloud instance using openEBS-cStor storage. Before I can "kubectl cp" my old files to the cluster, I need to put Nextcloud in maintenance mode.
This is what I've tried so far:
Shell access to pod
Navigate to folder
Run the OCC command to put Nextcloud in maintenance mode
These are the commands I used for the OCC way:
kubectl exec --stdin --tty -n nextcloud nextcloud-7ff9cf449d-rtlxh -- /bin/bash
su -c 'php occ maintenance:mode --on' www-data
# This account is currently not available.
Any tips on how to put Nextcloud in maintenance mode would be appreciated!
The su command fails because there is no shell associated with the www-data user.
What worked for me is explicitly specifying the shell in the su command:
su -s /bin/bash www-data -c "php occ maintenance:mode --on"
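For reference, both steps can be collapsed into a single command run from outside the pod. This is a sketch using the pod name and namespace from the question, and it assumes occ lives at /var/www/html/occ as in the stock Nextcloud image:
kubectl exec -n nextcloud nextcloud-7ff9cf449d-rtlxh -- \
    su -s /bin/bash www-data -c "php /var/www/html/occ maintenance:mode --on"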

Install postgres extension into Bitnami container as superuser on initial startup

I am using the Bitnami Postgres Docker container and noticed that my ORM which uses UUIDs requires the uuid-ossp extension to be available. After some trial and error I noticed that I had to manually install it using the postgres superuser since my custom non-root user created via the POSTGRESQL_USERNAME environment variable is not allowed to execute CREATE EXTENSION "uuid-ossp";.
I'd like to know what a script inside /docker-entrypoint-initdb.d might look like that can execute this command into the specific database, to be more precise to automate the following steps I had to perform manually:
psql -U postgres // this requires interactive password input
\c target_database
CREATE EXTENSION "uuid-ossp";
I think that something like this should work (setting PGPASSWORD removes the interactive password prompt, and -c runs the SQL without an interactive session):
PGPASSWORD="$POSTGRESQL_POSTGRES_PASSWORD" psql -U postgres -d target_database -c 'CREATE EXTENSION "uuid-ossp";'
If you want to do it on startup you need to add a file to the startup scripts. Check out the config section of their image documentation: https://hub.docker.com/r/bitnami/postgresql-repmgr/
If you're deploying it via Helm, you can add your scripts under the postgresql.initdbScripts variable in values.yaml.
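For example, a minimal sketch of that approach (the script name is arbitrary; whether init scripts run with enough privileges to create extensions can vary by chart version, so check the linked docs):
cat > extension-values.yaml <<'EOF'
postgresql:
  initdbScripts:
    create_extension.sql: |
      CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
EOF
helm install my-release bitnami/postgresql-ha -f extension-values.yaml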
If the deployment is already running, you'll need to connect as the repmgr user, not the Postgres user you created. The user you created is deliberately NOT a superuser, for security: this way most of your connections are unprivileged.
For example, I deployed bitnami/postgresql-ha via Helm to a k8s cluster, in a namespace called "data" with the release name "prod-pg". I can connect to the database as a privileged user by running:
export REPMGR_PASSWORD=$(kubectl get secret --namespace data prod-pg-postgresql-ha-postgresql -o jsonpath="{.data.repmgr-password}" | base64 --decode)
kubectl run prod-pg-postgresql-ha-client \
--rm --tty -i --restart='Never' \
--namespace data \
--image docker.io/bitnami/postgresql-repmgr:14 \
--env="PGPASSWORD=$REPMGR_PASSWORD" \
--command -- psql -h prod-pg-postgresql-ha-postgresql -p 5432 -U repmgr -d repmgr
This drops me into an interactive terminal (the two commands above are wrapped in my connect-db.sh script):
$ ./connect-db.sh
If you don't see a command prompt, try pressing enter.
repmgr=#
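From there the extension can be created by hand, matching the manual steps from the question (this assumes the repmgr user has sufficient privileges; target_database is the example name from the question):
repmgr=# \c target_database
target_database=# CREATE EXTENSION IF NOT EXISTS "uuid-ossp";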

Postgres docker error: FATAL: password authentication failed for user

I created a Postgres container with Docker:
sudo docker run -d \
--name dev-postgres \
-e POSTGRES_PASSWORD=test \
-e POSTGRES_USER=test \
-v ${HOME}/someplace/:/var/lib/postgresql/data \
-p 666:5432 \
postgres
I give the Postgres instance test as both the username and the password, as specified in the docs.
Port 5432 inside the container is mapped to port 666 on the host.
Now I want to try this out using psql
psql --host=localhost --port=666 --username=test
I'm prompted to enter the password for user test and after entering test, I get
psql: error: FATAL: password authentication failed for user "test"
There are several different problems that can cause this:
The version of Postgres on the host and in the container might not be the same.
If you change the Postgres version used by the container, make sure the new container is not crashing: reusing the same data directory across versions causes problems, because the directory was initialized by the other version. You can use docker logs [container name] to debug crashes.
If you changed the env parameters, the Docker volume may be the problem: POSTGRES_USER and POSTGRES_PASSWORD only take effect when the data directory is first initialized, so a volume created with the old values keeps the old credentials and the new user is never created. The brute-force fix (warning: this removes ALL containers, images, and volumes on the host, not just the Postgres ones):
docker stop $(docker ps -qa) && docker system prune -af --volumes
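If the stale volume is the culprit, a more surgical fix is to remove only this container and its data directory, so the image re-runs initialization with the current env values (this reuses the bind mount from the question; only do it if the old data is disposable):
docker rm -f dev-postgres
sudo rm -rf ${HOME}/someplace/    # wipes the old cluster so initialization runs again
sudo docker run -d \
    --name dev-postgres \
    -e POSTGRES_PASSWORD=test \
    -e POSTGRES_USER=test \
    -v ${HOME}/someplace/:/var/lib/postgresql/data \
    -p 666:5432 \
    postgres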
If libraries that use Postgres are giving you trouble, you might need to install extra packages for them to work. These two are the ones Stack Overflow answers most often reference:
sudo apt install libpq-dev postgresql-client
Other problems usually trace back to the Docker configuration itself (for example, pointing psql at the wrong host port).

Backup PostgreSQL database running in a container on Digital Ocean

I'm a newbie to Docker. I'm working on a project written by another developer that runs on a Digital Ocean Ubuntu 18.04 droplet. It consists of 2 containers (one for the Django app, one for the PostgreSQL database).
I'm now required to take a backup of the database, and I found a bash script written by the previous programmer:
### Create a database backup.
###
### Usage:
### $ docker-compose -f <environment>.yml (exec |run --rm) postgres backup
set -o errexit
set -o pipefail
set -o nounset
working_dir="$(dirname ${0})"
source "${working_dir}/_sourced/constants.sh"
source "${working_dir}/_sourced/messages.sh"
message_welcome "Backing up the '${POSTGRES_DB}' database..."
if [[ "${POSTGRES_USER}" == "postgres" ]]; then
message_error "Backing up as 'postgres' user is not supported. Assign 'POSTGRES_USER' env with another one and try again."
exit 1
fi
export PGHOST="${POSTGRES_HOST}"
export PGPORT="${POSTGRES_PORT}"
export PGUSER="${POSTGRES_USER}"
export PGPASSWORD="${POSTGRES_PASSWORD}"
export PGDATABASE="${POSTGRES_DB}"
backup_filename="${BACKUP_FILE_PREFIX}_$(date +'%Y_%m_%dT%H_%M_%S').sql.gz"
pg_dump | gzip > "${BACKUP_DIR_PATH}/${backup_filename}"
message_success "'${POSTGRES_DB}' database backup '${backup_filename}' has been created and placed in '${BACKUP_DIR_PATH}'."
My first question is: is that command right? I mean, if I ran
docker-compose -f production.yml (exec |run --rm) postgres backup
would that create a backup of my database at the configured location?
Second question: can I run this command while the database container is running, or should I run docker-compose down, take the backup, and then run docker-compose up again?
Sure, you can run that script to take a backup. One way is to execute a shell in the container with docker-compose exec db /bin/bash and then run the script there.
Another way is to run a new postgres container attached to the compose network:
docker run -it --name pgback -v /path/backup/host:/var/lib/postgresql/data --network composeNetwork postgres /bin/bash
This creates a new postgres container attached to the network created by compose, with a bind-mounted volume; you can then copy the script into the container and back the database up to the volume, so the dump ends up outside the container.
Then, whenever you want a backup, simply start the container again:
docker start -a -i pgback
You don't need to create another compose file; just copy the script into the container and run it. You could also build a new postgres image that includes the script and runs it from CMD. There are plenty of ways to do it.
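On the second question: there is no need to stop anything, since pg_dump takes a consistent snapshot of a running database. For a quick one-off dump you can even skip the helper container. This is a sketch that assumes the compose service is named postgres and that the POSTGRES_* variables are set in the container, as the script above expects (-T disables the pseudo-TTY so the piped output isn't corrupted):
docker-compose -f production.yml exec -T postgres \
    sh -c 'pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB"' | gzip > backup.sql.gz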

Dockerfile ENV variables not being accepted

I'm using Docker and a Dockerfile to build images. I want to build a PostgreSQL image, so I'm using this Dockerfile:
ARG POSTGRES_USER=vetouz
ARG POSTGRES_PASSWORD=***
ARG POSTGRES_DB=vetouz_mediatheque
FROM postgres:latest
USER postgres
EXPOSE 5432
Then I run the image using this command
docker run -e POSTGRES_PASSWORD=vetouz -d --name postgres postgres:latest
When I do that, the role vetouz, the password, and the db vetouz_mediatheque are not created, and I don't understand why. I know this because when I access the container with sudo docker exec -it postgres bash and run psql -U vetouz, I get the error role "vetouz" does not exist.
It works if I run my image with the following command:
docker run -e POSTGRES_PASSWORD=*** -e POSTGRES_USER=vetouz -e POSTGRES_DB=vetouz_mediatheque -d --name postgres postgres:latest
But I would rather define my variables in the dockerfile.
Any idea why it's not working?
Please use ENV instead of ARG. ARG values are only available during the build; ENV values are available at runtime as well. (Note also that your ARGs are declared before FROM, which limits their scope to the FROM line itself.)
Source
https://docs.docker.com/engine/reference/builder/#arg
https://docs.docker.com/engine/reference/builder/#env
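Applied to the Dockerfile from the question, that gives the following sketch (the placeholder password is for illustration only; see the security caveats below):
FROM postgres:latest
ENV POSTGRES_USER=vetouz
ENV POSTGRES_PASSWORD=change-me
ENV POSTGRES_DB=vetouz_mediatheque
USER postgres
EXPOSE 5432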
YOUR PROBLEM
As already stated, you are using ARG, which is only available when building the Docker image. But using env variables to bake sensitive information into a Docker image is not a safe approach, and I will explain why.
SECURITY CONCERNS
But I would rather define my variables in the dockerfile.
This is OK for information that is not sensitive, but it is not a best practice for secrets like passwords: the database credentials end up stored in plain text in the Dockerfile, and even if you use ARG to set the ENV vars they remain visible in the image layers.
docker run -e POSTGRES_PASSWORD=*** -e POSTGRES_USER=vetouz -e POSTGRES_DB=vetouz_mediatheque -d --name postgres postgres:latest
This is also a bad security practice, because now your database credentials are saved in your bash history.
On a Linux machine you can check with:
history | grep -i POSTGRES
A MORE SECURE APPROACH
Create a .env file:
POSTGRES_USER=vetouz
POSTGRES_PASSWORD=your-password-here
POSTGRES_DB=vetouz_mediatheque
Don't forget to add the .env file to .gitignore:
echo ".env" >> .gitignore
Running the Docker Container
Now run the docker container with:
docker run --env-file ./.env -d --name postgres postgres:latest
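To verify that the variables were picked up, connect with the values from the .env file (the container name comes from the run command above):
docker exec -it postgres psql -U vetouz -d vetouz_mediatheque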