How do I handle passwords and dockerfiles? - postgresql

I've created an image for docker which hosts a postgresql server. In the Dockerfile, I set the 'USER', and I pass a constant password into a run of psql:
USER postgres
RUN /etc/init.d/postgresql start && psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" && createdb -O docker docker
Ideally either before or after calling 'docker run' on this image, I'd like the caller to have to input these details into the command line, so that I don't have to store them anywhere.
I'm not really sure how to go about this. Does docker have any support for reading stdin into an environment variable? Or perhaps there's a better way of handling this all together?

At build time
You can use build arguments in your Dockerfile:
ARG password=defaultPassword
USER postgres
RUN /etc/init.d/postgresql start && psql --command "CREATE USER docker WITH SUPERUSER PASSWORD '$password';" && createdb -O docker docker
Then build with:
$ docker build --build-arg password=superSecretPassword .
At run time
For setting the password at runtime, you can use an environment variable (ENV) that you can evaluate in an entrypoint script (ENTRYPOINT):
ENV PASSWORD=defaultPassword
ADD entrypoint.sh /docker-entrypoint.sh
USER postgres
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["postgres"]
Within the entrypoint script, you can then create a new user with the given password as soon as the container starts:
#!/bin/bash
# Start a temporary server that only listens on localhost
pg_ctl -D /var/lib/postgresql/data \
    -o "-c listen_addresses='localhost'" \
    -w start
# Create the user with the password taken from the PASSWORD environment variable
psql --command "CREATE USER docker WITH SUPERUSER PASSWORD '$PASSWORD';"
# Stop the temporary server again
pg_ctl -D /var/lib/postgresql/data -m fast -w stop
# Hand control over to the CMD (e.g. "postgres")
exec "$@"
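You would then build the image and supply the password when starting the container, for example (the my-postgres tag is an assumption):
$ docker build -t my-postgres .
$ docker run -e PASSWORD=superSecretPassword my-postgres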
You can also have a look at the Dockerfile and entrypoint script of the official postgres image, from which I've borrowed most of the code in this answer.
A note on security
Storing secrets like passwords in environment variables (both at build and at run time) is not particularly secure: build arguments can end up in the image history, and runtime environment variables are visible to anyone who can run docker inspect on the container. Unfortunately, to my knowledge, Docker does not really offer any better solution for this, right now. An interesting discussion on this topic can be found in this question.
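For example, with the classic builder, a build-time password can often be recovered straight from the image metadata (a sketch; the image tag is an assumption):
$ docker history --no-trunc my-postgres | grep password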

You could use an environment variable in your Dockerfile and override its default value when you call docker run, using the -e or --env argument.
You will also need to amend the init script referenced by the CMD instruction so that it runs the psql command on startup.
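As a minimal sketch (the variable name PASSWORD and the image tag are assumptions):
# In the Dockerfile
ENV PASSWORD=defaultPassword
# At run time, overriding the default
$ docker run --env PASSWORD=superSecretPassword my-postgres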

Related

docker postgres init script seems will not create db user, command is ignored

I have a simple dockerfile:
FROM postgres:latest
ENV POSTGRES_PASSWORD password
ENV POSTGRES_USER postgres
ENV POSTGRES_DB evesde
COPY init.sh /docker-entrypoint-initdb.d/
and my init file is chmod'ed to 777:
#!/bin/bash
psql -U "postgres" -d "evesde" -e "create role yaml with login encrypted password 'password';"
when running a container it will say:
psql: warning: extra command-line argument "create role yaml with login encrypted password 'password';" ignored
I'm not sure why this is happening, and when using an interactive terminal, this command seemingly worked. I don't see any additional information and wasn't sure what was going wrong.
The postgres docker page is: https://hub.docker.com/_/postgres
Looking at it more deeply, I noticed that running the command itself fails in an interactive terminal with the same error, but if I first enter psql with psql -U "postgres" -d "evesde" and then run the command, it works.
I think it fails when the command is passed in via exec, likely related to the ' quoting.
You want -c instead of -e.
-e turns on "echo queries"
-c runs the command and exits
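Applied to the script from the question:
#!/bin/bash
psql -U "postgres" -d "evesde" -c "create role yaml with login encrypted password 'password';"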
Have you considered putting just the create role command in a file called create_role.sql and copying that into /docker-entrypoint-initdb.d/?
Based on testing, it looks like an equivalent but simpler solution is to put the SQL command as one line in a file, 00_roles.sql, and copy that into the container instead of the init.sh script.
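For example (using the file name suggested above):
-- 00_roles.sql
create role yaml with login encrypted password 'password';
and in the Dockerfile:
COPY 00_roles.sql /docker-entrypoint-initdb.d/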

How to restore a Postgresdump while building a Docker image?

I'm trying to avoid touching a shared dev database in my workflow; to make this easier, I want to have Docker image definitions on my disk for the schemas I need. I'm stuck however at making a Dockerfile that will create a Postgres image with the dump already restored. My problem is that while the Docker image is being built, the Postgres server isn't running.
While messing around in the container in a shell, I tried starting the server manually, but I'm not sure of the proper way to do so. /docker-entrypoint.sh doesn't seem to do anything, and I can't figure out how to "correctly" start the server.
So what I need to do is:
start with "FROM postgres"
copy the dump file into the container
*start the PG server*
*run psql to restore the dump file*
*kill the PG server*
(Steps I don't know are in italics, the rest is easy.)
What I'd like to avoid is:
Running the restore manually into an existing container, the whole idea is to be able to switch between different databases without having to touch the application config.
Saving the restored image, I'd like to be able to rebuild the image for a database easily with a different dump. (Also it doesn't feel very Docker to have unrepeatable image builds.)
This can be done with the following Dockerfile, provided an example.pg dump file:
FROM postgres:9.6.16-alpine
LABEL maintainer="lu#cobrainer.com"
LABEL org="Cobrainer GmbH"
ARG PG_POSTGRES_PWD=postgres
ARG DBUSER=someuser
ARG DBUSER_PWD=P#ssw0rd
ARG DBNAME=sampledb
ARG DB_DUMP_FILE=example.pg
ENV POSTGRES_DB launchpad
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD ${PG_POSTGRES_PWD}
ENV PGDATA /pgdata
COPY wait-for-pg-isready.sh /tmp/wait-for-pg-isready.sh
COPY ${DB_DUMP_FILE} /tmp/pgdump.pg
RUN set -e && \
    nohup bash -c "docker-entrypoint.sh postgres &" && \
    /tmp/wait-for-pg-isready.sh && \
    psql -U postgres -c "CREATE USER ${DBUSER} WITH SUPERUSER CREATEDB CREATEROLE ENCRYPTED PASSWORD '${DBUSER_PWD}';" && \
    psql -U ${DBUSER} -d ${POSTGRES_DB} -c "CREATE DATABASE ${DBNAME} TEMPLATE template0;" && \
    pg_restore -v --no-owner --role=${DBUSER} --exit-on-error -U ${DBUSER} -d ${DBNAME} /tmp/pgdump.pg && \
    psql -U postgres -c "ALTER USER ${DBUSER} WITH NOSUPERUSER;" && \
    rm -rf /tmp/pgdump.pg
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
    CMD pg_isready -U postgres -d launchpad
where the wait-for-pg-isready.sh is:
#!/bin/bash
set -e
# Find the first non-loopback IP address reported by ifconfig
get_non_lo_ip() {
    local _ip _non_lo_ip _line _nl=$'\n'
    while IFS=$': \t' read -a _line; do
        [ -z "${_line%inet}" ] &&
            _ip=${_line[${#_line[1]}>4?1:2]} &&
            [ "${_ip#127.0.0.1}" ] && _non_lo_ip=$_ip
    done < <(LANG=C /sbin/ifconfig)
    printf ${1+-v} $1 "%s${_nl:0:$[${#1}>0?0:1]}" $_non_lo_ip
}
get_non_lo_ip NON_LO_IP
# Poll until the server accepts connections
until pg_isready -h $NON_LO_IP -U "postgres" -d "launchpad"; do
    >&2 echo "Postgres is not ready - sleeping..."
    sleep 4
done
>&2 echo "Postgres is up - you can execute commands now"
For the two "unsure steps":
start the PG server: nohup bash -c "docker-entrypoint.sh postgres &" can take care of it
kill the PG server: it's not really necessary
The above scripts together with a more detailed README are available at https://github.com/cobrainer/pg-docker-with-restored-db
You can utilise volumes.
The postgres image has an environment variable you could set: PGDATA.
See docs: https://hub.docker.com/_/postgres/
You could then mount a pre-created volume containing the exact DB data that you require and pass the PGDATA path as an argument to the image.
https://docs.docker.com/storage/volumes/#start-a-container-with-a-volume
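A minimal sketch of that approach (the volume name pgdata-prepopulated is an assumption):
$ docker volume create pgdata-prepopulated
$ docker run -e PGDATA=/var/lib/postgresql/data/pgdata \
    -v pgdata-prepopulated:/var/lib/postgresql/data \
    postgres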
Alternate solution can also be found here: Starting and populating a Postgres container in Docker
A general approach that I remember using on other projects, and that should work for any system you want to initialize, is:
Instead of trying to do this during the build, use Docker Compose dependencies (a sketch follows the list) so that you end up with:
your db service that fires up the database without any initialization that requires it to be live
a db-init service that:
takes a dependency on db
waits for the database to come up using say dockerize
then initializes the database while maintaining idempotency (e.g. using schema migration)
and exits
your application services that now depend on db-init instead of db
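A minimal docker-compose.yml sketch of that layout (the service names, the db-init and app images, and the migration command are assumptions):
version: "3"
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
  db-init:
    # hypothetical image containing dockerize and the schema migrations
    image: my-db-init
    depends_on:
      - db
    # wait for the database, then run an idempotent migration script and exit
    command: dockerize -wait tcp://db:5432 -timeout 60s ./run-migrations.sh
  app:
    # hypothetical application image; starts after db-init
    image: my-app
    depends_on:
      - db-init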

Docker - extend the parent's ENTRYPOINT

I've got a custom image based on the official postgres image, and I want to extend the entrypoint of the parent image so that it creates new users and databases if they don't exist yet, every time a container starts up. Is it possible? Like my image would execute all the commands from the standard entrypoint plus my own shell script.
I know about putting my own scripts into the /docker-entrypoint-initdb.d directory, but it seems that they get executed only when the volume is created the first time.
What you need to do is something like this:
setup_user.sh
#!/bin/bash
# Give the database server some time to start before creating the user
sleep 10
echo "execute commands to setup user"
setup.sh
#!/bin/bash
# Run the user setup in the background, then start the database in the foreground
sh setup_user.sh &
./docker-entrypoint.sh postgres
And your image should use the ENTRYPOINT as
ENTRYPOINT ["/setup.sh"]
You need to start your setup script in the background and let the original entrypoint script do its work of starting the database.
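A minimal Dockerfile sketch wiring this together (the paths are assumptions):
FROM postgres
COPY setup_user.sh /setup_user.sh
COPY setup.sh /setup.sh
RUN chmod +x /setup.sh /setup_user.sh
ENTRYPOINT ["/setup.sh"]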
In addition to the accepted answer to Docker - extend the parent's ENTRYPOINT: instead of sleeping for a fixed time, you may want to consider executing your script similarly to how the docker-entrypoint.sh of the postgres docker image does it (to init the DB, it starts the server, executes the initialization commands, and shuts it down again). Thus:
setup_user.sh
su - "$YOUR_PG_USER" -c '/usr/local/bin/pg_ctl -D /var/lib/postgresql/data -o "-c listen_addresses='localhost'" -w start'
psql -U "$YOUR_PG_USER" "$YOUR_PG_DATABASE" < "$YOUR_SQL_COMMANDS"
su - "$YOUR_PG_USER" -c '/usr/local/bin/pg_ctl -D /var/lib/postgresql/data -m fast -w stop'
setup.sh
./setup_user.sh && ./docker-entrypoint.sh postgres

Getting User name + password to docker container

I've really been struggling over the past few days trying to set up some docker containers and shell scripts to create an environment for my application to run in.
The long and short of it is that I have a web server which requires a database to operate. My aim is to have end users unzip the content onto their docker machine, run a build script (which just builds the relevant docker images), then run a OneTime.sh script (which creates the volumes and databases necessary); during this script, they are prompted for the user name and password they would like for the superuser of the database.
The problem I'm having is getting those values to the docker image. Here is my script:
# Create the volumes for the data backend database.
docker volume create --name psql-data-etc
docker volume create --name psql-data-log
docker volume create --name psql-data-lib
# Create data store database
echo -e "\n${TITLE}[Data Store Database]${NC}"
docker run -v psql-data-etc:/etc/postgresql -v psql-data-log:/var/log/postgresql -v psql-data-lib:/var/lib/postgresql -p 9001:5432 -P --name psql-data-onetime postgres-setup
# Close containers
docker stop psql-data-onetime
docker rm psql-data-onetime
docker stop psql-transactions-onetime
docker rm psql-transactions-onetime
And here is the docker file:
FROM ubuntu
#Required environment variables: USERNAME, PASSWORD, DBNAME
# Add the PostgreSQL PGP key to verify their Debian packages.
# It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
# Add PostgreSQL's repository. It contains the most recent stable release
# of PostgreSQL, ``9.3``.
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" > /etc/apt/sources.list.d/pgdg.list
# Install ``python-software-properties``, ``software-properties-common`` and PostgreSQL 9.3
# There are some warnings (in red) that show up during the build. You can hide
# them by prefixing each apt-get statement with DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y python-software-properties software-properties-common postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3
# Note: The official Debian and Ubuntu images automatically ``apt-get clean``
# after each ``apt-get``
# Run the rest of the commands as the ``postgres`` user created by the ``postgres-9.3`` package when it was ``apt-get installed``
USER postgres
# Complete configuration
USER root
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
# Expose the PostgreSQL port
EXPOSE 5432
# Add VOLUMEs to allow backup of config, logs and databases
RUN mkdir -p /var/run/postgresql && chown -R postgres /var/run/postgresql
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
# Run setup script
ADD Setup.sh /
CMD ["sh", "Setup.sh"]
The script 'Setup.sh' is the following:
echo -n " User name: "
read user
echo -n " Password: "
read password
echo -n " Database Name: "
read dbname
/etc/init.d/postgresql start
/usr/lib/postgresql/9.3/bin/psql --command "CREATE USER $user WITH SUPERUSER PASSWORD '$password';"
/usr/lib/postgresql/9.3/bin/createdb -O $user $dbname
exit
Why doesn't this work? (I don't get prompted to enter the text, and it throws an error that the parameters are bad.) What is the proper way to do something like this? It feels like it's probably a pretty common problem to solve, but I cannot for the life of me find any non-convoluted examples of this behaviour.
The main purpose of this is to make life easier for the end user, so if I could just prompt them for the user name, password, and dbname, (plus calling the correct scripts), that would be ideal.
EDIT:
After running, the log file looks like this:
User name:
Password:
Database Name:
Usage: /etc/init.d/postgresql {start|stop|restart|reload|force-reload|status} [version ..]
EDIT 2:
After updating to CMD ["sh", "-x", "Setup.sh"]
I get:
+ echo -n ' User name: '
+ read user
: bad variable nameuser
+ echo -n ' Password: '
+ read password
: bad variable namepassword
+ echo -n ' Database Name: '
+ read dbname
: bad variable dbname

Why can't you start postgres in docker using "service postgres start"?

All the tutorials point to running postgres in the format of
docker run -d -p 5432 \
-t <your username>/postgresql \
/bin/su postgres -c '/usr/lib/postgresql/9.2/bin/postgres \
-D /var/lib/postgresql/9.2/main \
-c config_file=/etc/postgresql/9.2/main/postgresql.conf'
Why can't we in our Docker file have:
ENTRYPOINT ["/etc/init.d/postgresql-9.2", "start"]
And simply start the container by
docker run -d psql
Is that not the purpose of Entrypoint or am I missing something?
The difference is that the init script provided in /etc/init.d is not an entry point. Its purpose is quite different: to get the entry point started in the background, and then report on the success or failure to the caller. That script causes a postgres process, usually indirectly via pg_ctl, to be started, detached from the controlling terminal.
For docker to work best, it needs to run the application directly, attached to the docker process. That way it can usefully and generically terminate it when the user asks for it, or quickly discover and respond to the process crashing.
To exemplify what IfLoop said.
Using CMD in your Dockerfile:
USER postgres
CMD ["/usr/lib/postgresql/9.2/bin/postgres", "-D", "/var/lib/postgresql/9.2/main", "-c", "config_file=/etc/postgresql/9.2/main/postgresql.conf"]
To run:
$ docker run -d -p 5432:5432 psql
Watching PostgreSQL logs:
$ docker logs -f POSTGRES_CONTAINER_ID