How to achieve persistence with Volumes on Windows Containers? - postgresql

In our company we are trying to migrate an application to Docker with Windows Containers. The application uses a PostgreSQL database.
We are able to get the application running inside the container. However, whenever we stop the container and start a new one from the same image, all the changes made to the database are gone.
How can we achieve persistence with data volumes on Windows Containers?
We've read multiple articles that persistence can be accomplished with data volumes.
We've followed this guide and were able to achieve persistence without any problem on Linux Containers:
https://elanderson.net/2018/02/setup-postgresql-on-windows-with-docker/
However on Windows Containers something is missing to get us where we need.
The Dockerfile we are using for creating an image with postgres on Windows Containers is:
-----START-----
FROM microsoft/aspnet:4.7.2-windowsservercore-1709
EXPOSE 5432
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN [Net.ServicePointManager]::SecurityProtocol = 'Tls12, Tls11, Tls' ; \
Invoke-WebRequest -UseBasicParsing -Uri 'https://get.enterprisedb.com/postgresql/postgresql-9.6.10-2-windows-x64.exe' -OutFile 'postgresql-installer.exe' ; \
Start-Process postgresql-installer.exe -ArgumentList '--mode unattended --superpassword password' -Wait ; \
Remove-Item postgresql-installer.exe -Force
SHELL ["cmd", "/S", "/C"]
RUN setx /M PATH "C:\\Program Files\\PostgreSQL\\9.6\\bin;%PATH%" && \
setx /M DATA_DIR "C:\\Program Files\\PostgreSQL\\9.6\\data" && \
setx /M PGPASSWORD "password"
RUN powershell -Command "Do { pg_isready -q } Until ($?)" && \
echo listen_addresses = '*' >> "%DATA_DIR%\\postgresql.conf" && \
echo host all all 0.0.0.0/0 trust >> "%DATA_DIR%\\pg_hba.conf" && \
echo host all all ::0/0 trust >> "%DATA_DIR%\\pg_hba.conf" && \
net stop postgresql-x64-9.6
-----END-----
The commands we are using to build the image and run the container are:
docker build -t psql1709 .
docker run -d -it -p 8701:5432 --name postgresv1 -v "posgresData:c:\Program Files\PostgreSQL\9.6\data" psql1709

The problem likely is that the DATA_DIR is not set when running the container, and as a result, the database is written to a different path than the path where your volume is mounted.
Each RUN instruction in a Dockerfile is executed in a new container, and the resulting filesystem changes of that step are committed to a new layer.
However, memory state is not persisted, so when you run:
setx /M DATA_DIR "C:\\Program Files\\PostgreSQL\\9.6\\data"
that environment variable is only known during that RUN instruction, but not after.
To set an environment variable that's persisted as part of the image you build (and that will be set for subsequent RUN instructions, and when running the image/container), use the ENV Dockerfile instruction:
ENV DATA_DIR "C:\\Program Files\\PostgreSQL\\9.6\\data"
(I'm not using Windows, so double check if the quoting/escaping works as expected)
Note: I see you're setting PGPASSWORD in an environment variable; be aware that (when using ENV), environment variables can be seen by anyone that has access to the container or image (docker inspect will show this information). In this case, the password seems to be a "default" password, so likely not a concern.
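As a sketch, assuming the same paths as in the question (and, as noted, double-check the quoting/escaping on Windows), the setx line would be replaced by an ENV instruction:

```dockerfile
FROM microsoft/aspnet:4.7.2-windowsservercore-1709

# ENV is persisted in the image metadata: every subsequent RUN step and
# any container started from the image sees DATA_DIR, so the server and
# the mounted volume agree on the data path.
ENV DATA_DIR="C:\\Program Files\\PostgreSQL\\9.6\\data"
```

The `-v "posgresData:c:\Program Files\PostgreSQL\9.6\data"` mount in your docker run command then lines up with where the server actually writes its data.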

Related

backup postgresql from azure container instance

I created an Azure Container Instance and ran postgresql in it, and mounted an Azure storage account. How can I set up backup jobs, possibly with a scheduler?
When I run the command
az container exec --resource-group Vitalii-demo --name vitalii-demo --exec-command "pg_dumpall -c -U postgrace > dump.sql"
I get an error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"pg_dumpall -c -U postgrace > dump.sql\": executable file not found in $PATH"
I read that
Azure Container Instances currently supports launching a single process with az container exec, and you cannot pass command arguments. For example, you cannot chain commands like in sh -c "echo FOO && echo BAR", or execute echo FOO.
Perhaps there is an opportunity to run as a task? Thanks.
Unfortunately - and as you already mentioned - it's not possible to run any commands with arguments like echo FOO or chain multiple commands together with &&.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-exec#run-a-command-with-azure-cli
You should be able to run an interactive shell by using --exec-command /bin/bash.
But this will not help if you want to schedule the backups programmatically.
pg_dumpall can also be configured by environment variables:
https://www.postgresql.org/docs/9.3/libpq-envars.html
You could launch your backup-container with the correct environment variables in order to connect your database service:
PGHOST
PGPORT
PGUSER
PGPASSWORD
When having these variables set, a simple pg_dumpall should totally do what you want.
Hope that helps.
UPDATE:
Yikes, even when configuring the connection via environment variables, you won't be able to specify the desired output file... Sorry.
You could create your own Docker image with a pre-configured script for dumping your PostgreSQL database.
Doing it that way, you can configure the output-file in your script and then simply execute the script with --exec-command dump_my_db.sh.
Keep in mind that your script has to be located somewhere in the default $PATH - e.g. /usr/local/bin.
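A minimal sketch of such a script (the /backups output directory and the file-name scheme are assumptions; the connection details come from the PG* environment variables listed above):

```shell
#!/bin/sh
# dump_my_db.sh -- hypothetical backup script; connection details are
# taken from PGHOST/PGPORT/PGUSER/PGPASSWORD set on the container.

# Build a timestamped output path, e.g. /backups/dump-20190101-120000.sql
backup_filename() {
  echo "/backups/dump-$(date +%Y%m%d-%H%M%S).sql"
}

# Dump all databases into that file.
run_backup() {
  outfile="$(backup_filename)"
  pg_dumpall -c > "$outfile" && echo "wrote $outfile"
}
```

Baked into the image at e.g. /usr/local/bin/dump_my_db.sh (somewhere on the default $PATH, as noted above), it can then be invoked with --exec-command dump_my_db.sh.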

sqlproxy: inject secrets into sqlproxy.cnf

I have a few proxysql (https://proxysql.com/) instances (running in Kubernetes). However, I don't want to hardcode the db credentials in the config file (proxysql.cnf). I was hoping I could use ENV variables but I wasn't able to get that to work. What is the proper way to include secrets in a proxysql instance without hard coding passwords in plain text files?
I was thinking of including the config file as a secret and mounting it in Kubernetes (which seems like overkill, or wrong), or running envsubst in a startup script or init container.
Thoughts?
What I ended up doing was I ran a sidecar with an init script as a configmap:
#!/bin/sh
echo "Check if mysqld is running..."
while ! nc -z 127.0.0.1 6032; do
  sleep 0.1
done
echo "mysql is running!"
echo "Loading Runtime Data..."
echo "INSERT INTO mysql_users(username,password,default_hostgroup) VALUES ('$USERNAME','$PASSWORD',1);" | mysql -u $PROXYSQL_USER -p$PROXYSQL_PASSWORD -h 127.0.0.1 -P6032
echo "LOAD MYSQL USERS TO RUNTIME;" | mysql -u $PROXYSQL_USER -p$PROXYSQL_PASSWORD -h 127.0.0.1 -P6032
echo "Runtime Data loaded."
while true; do sleep 300; done;
Seems to work nicely.
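For the envsubst idea from the question, a startup or init-container script that renders proxysql.cnf from a template could look like this sketch (template and output paths are assumptions; sed is used as a stand-in so the sketch doesn't depend on gettext's envsubst being installed):

```shell
#!/bin/sh
# Render proxysql.cnf from a template, substituting ${DB_USER} and
# ${DB_PASSWORD} placeholders with values from the environment
# (e.g. injected from a Kubernetes Secret). Values containing '|' or
# '&' would need extra escaping; this is a minimal sketch.
render_config() {
  template="$1"
  output="$2"
  sed -e "s|\${DB_USER}|${DB_USER}|g" \
      -e "s|\${DB_PASSWORD}|${DB_PASSWORD}|g" \
      "$template" > "$output"
}
```

For example, render_config /etc/proxysql/proxysql.cnf.tmpl /etc/proxysql.cnf before starting proxysql, so no credentials live in the image or the config file checked into source control.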

Replacing postgresql.conf in a docker container

I am pulling the postgres:12.0-alpine docker image to build my database. My intention is to replace the postgresql.conf file in the container to reflect the changes I want (changing data directory, modify backup options etc). I am trying with the following docker file
FROM postgres:12.0-alpine
# create the custom user
RUN addgroup -S custom && adduser -S custom_admin -G custom
# create the appropriate directories
ENV APP_HOME=/home/data
ENV APP_SETTINGS=/var/lib/postgresql/data
WORKDIR $APP_HOME
# copy entrypoint.sh
COPY ./entrypoint.sh $APP_HOME/entrypoint.sh
# copy postgresql.conf
COPY ./postgresql.conf $APP_HOME/postgresql.conf
RUN chmod +x /home/data/entrypoint.sh
# chown all the files to the app user
RUN chown -R custom_admin:custom $APP_HOME
RUN chown -R custom_admin:custom $APP_SETTINGS
# change to the app user
USER custom_admin
# run entrypoint.sh
ENTRYPOINT ["/home/data/entrypoint.sh"]
CMD ["custom_admin"]
my entrypoint.sh looks like
#!/bin/sh
rm /var/lib/postgresql/data/postgresql.conf
cp ./postgresql.conf /var/lib/postgresql/data/postgresql.conf
echo "replaced .conf file"
exec "$@"
But I get an exec error saying 'custom_admin: not found' on the exec "$@" line. What am I missing here?
In order to provide a custom configuration, please use the command below:
docker run -d --name some-postgres -v "$PWD/my-postgres.conf":/etc/postgresql/postgresql.conf postgres -c 'config_file=/etc/postgresql/postgresql.conf'
Here my-postgres.conf is your custom configuration file.
Refer to the Docker Hub page for more information about the postgres image.
Better to use the answer suggested by @Thilak; you do not need a custom image just to use a custom config.
Now, the problem with CMD ["custom_admin"] in the Dockerfile: whatever command you pass to CMD is executed at the end of the entrypoint, and normally that command is the main, long-running process of the container. custom_admin looks like a user, not a command, so you need to replace it with the process that should run as the main process of the container.
Change CMD to
CMD ["postgres"]
I would suggest building on the official entrypoint, which does many tasks out of the box; your own entrypoint just starts the container, with no DB initialization etc.
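To illustrate, a minimal entrypoint along those lines might look like this sketch (paths taken from the question's Dockerfile; handing off to the image's official docker-entrypoint.sh instead of the bare command would preserve initdb):

```shell
#!/bin/sh
# Sketch: swap in the custom postgresql.conf, then hand control to the
# command given as CMD (e.g. "postgres"). Note exec "$@" -- with "$@",
# not "$#" as in the question.
set -e

replace_config() {
  src="$1"; dst="$2"
  [ -f "$src" ] && cp "$src" "$dst" && echo "replaced $dst"
}

main() {
  # Tolerate a missing config on first start, before initdb has run.
  replace_config /home/data/postgresql.conf \
                 /var/lib/postgresql/data/postgresql.conf || true
  exec "$@"
}

main "$@"
```

With CMD ["postgres"], the container's PID 1 becomes the postgres server itself, so docker stop delivers signals to the right process.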

postgres in Docker with data directory on named or bind volume works on Windows Server 2019 but not Windows Server 2016

I'm currently trying to run PostgreSQL 10.6.1 in a docker container (Base-Image is mcr.microsoft.com/windows/servercore:ltsc2016) that should be run on Windows Server 2016 Datacenter (Version 1607, Build 14393.2759).
Everything runs fine, including PostgreSQL's initdb and running PostgreSQL afterwards - as long as PostgreSQL's data directory is not a volume on the docker host.
If I try to run it having the PostgreSQL's data directory placed on either a named or bind volume, PostgreSQL is able to do its initdb but fails to start afterwards.
I suspect that the failure is due to the fact that mounted volumes are visible as symlinks within containers based on mcr.microsoft.com/windows/servercore:ltsc2016, causing PostgreSQL to fail. On mcr.microsoft.com/windows/servercore:1809 this behaviour changed: mounted volumes look like 'normal' directories, and PostgreSQL is able to start.
As there is no useful error message at all, and running a database without any persistent storage outside the container sounds rather useless, any help is appreciated.
The Dockerfile I'm using looks like this:
FROM mcr.microsoft.com/windows/servercore:ltsc2016
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ARG version=10.6-1
RUN [Net.ServicePointManager]::SecurityProtocol = 'Tls12, Tls11, Tls' ; \
Invoke-WebRequest $('https://get.enterprisedb.com/postgresql/postgresql-{0}-windows-x64-binaries.zip' -f $env:version) -OutFile 'postgres.zip' -UseBasicParsing ; \
Expand-Archive postgres.zip -DestinationPath C:\ ; \
Remove-Item postgres.zip ; \
Remove-Item 'C:\pgsql\pgAdmin 4' -Recurse ; \
Remove-Item 'C:\pgsql\StackBuilder' -Recurse ; \
Remove-Item 'C:\pgsql\doc' -Recurse
RUN [Net.ServicePointManager]::SecurityProtocol = 'Tls12, Tls11, Tls' ; \
Invoke-WebRequest 'http://download.microsoft.com/download/0/5/6/056DCDA9-D667-4E27-8001-8A0C6971D6B1/vcredist_x64.exe' -OutFile vcredist_x64.exe ; \
Start-Process vcredist_x64.exe -ArgumentList '/install', '/passive', '/norestart' -NoNewWindow -Wait ; \
Remove-Item vcredist_x64.exe
SHELL ["cmd", "/S", "/C"]
RUN setx /M PATH "C:\pgsql\bin;%PATH%"
RUN MD data
RUN setx /M PGDATA "C:\data"
ENV PGUSER postgres
ENV PGPASSWORD postgres
COPY docker-entrypoint.cmd /
COPY start.cmd /
EXPOSE 5432
ENTRYPOINT ["docker-entrypoint.cmd"]
CMD [ "start.cmd"]
The referenced docker-entrypoint.cmd looks like this:
@ECHO %PGPASSWORD% > pw.txt
@ECHO OFF
IF NOT EXIST %PGDATA%/PG_VERSION (
initdb.exe --encoding=UTF8 --username=%PGUSER% --pwfile=pw.txt
@echo host all all 0.0.0.0/0 trust > %PGDATA%/pg_hba.conf
@echo host all all ::0/0 trust >> %PGDATA%/pg_hba.conf
)
%*
The referenced start.cmd looks like this:
@echo off
pg_ctl start -o "-h * -c max_prepared_transactions=100 -c max_connections=1000"
REM logic to keep container alive
:ENDLESS
CALL :sleep 100000
goto ENDLESS
exit
REM https://superuser.com/questions/48231/how-do-i-make-a-batch-file-wait-sleep-for-some-seconds
:sleep
ping 127.0.0.1 -n %1 -w 1000 > NUL

Run SQL script after start of SQL Server on docker

I have a Dockerfile with below code
FROM microsoft/mssql-server-windows-express
COPY ./create-db.sql .
ENV ACCEPT_EULA=Y
ENV sa_password=##$wo0RD!
CMD sqlcmd -i create-db.sql
and I can create the image, but when I run a container from that image I don't see the created database on the SQL Server, because the script is executed before SQL Server has started.
Can I make the script execute after the SQL Server service has started?
RUN gets used to build the layers in an image. CMD is the command that is run when you launch an instance (a "container") of the built image.
Also, if your script depends on those environment variables and you're on an older version of Docker, it might fail because the variables are not defined the way you want them defined!
In older versions of Docker, the Dockerfile ENV instruction uses spaces instead of "=".
Your Dockerfile should probably be:
FROM microsoft/mssql-server-windows-express
COPY ./create-db.sql .
ENV ACCEPT_EULA Y
ENV SA_PASSWORD ##$wo0RD!
RUN sqlcmd -i create-db.sql
This will create an image containing the database with your password inside it.
(If the SQL file somehow uses the environment variables, this wouldn't make sense, as you might as well update the SQL file before you copy it over.) If you want to be able to override the password between the docker build and docker run steps by using docker run --env sa_password=##$wo0RD! ..., you will need to change the last line to:
CMD sqlcmd -i create-db.sql && .\start -sa_password $env:SA_PASSWORD \
-ACCEPT_EULA $env:ACCEPT_EULA -attach_dbs \"$env:attach_dbs\" -Verbose
Which is a modified version of the CMD line that is inherited from the upstream image.
You can follow this link https://github.com/microsoft/mssql-docker/issues/11.
Credits to Robin Moffatt.
Change your docker-compose.yml file to contain the following
mssql:
  image: microsoft/mssql-server-windows-express
  environment:
    - SA_PASSWORD=##$wo0RD!
    - ACCEPT_EULA=Y
  volumes:
    # directory with sql script on pc to /scripts/
    # - ./data/mssql:/scripts/
    - ./create-db.sql:/scripts/
  command:
    - /bin/bash
    - -c
    - |
      # Launch MSSQL and send to background
      /opt/mssql/bin/sqlservr &
      # Wait 30 seconds for it to be available
      # (lame, I know, but there's no nc available to start prodding network ports)
      sleep 30
      # Run every script in /scripts
      # TODO set a flag so that this is only done once on creation,
      # and not every time the container runs
      for foo in /scripts/*.sql
      do /opt/mssql-tools/bin/sqlcmd -U sa -P $$SA_PASSWORD -l 30 -e -i $$foo
      done
      # So that the container doesn't shut down, sleep this thread
      sleep infinity
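The fixed sleep 30 in the compose file above is fragile. A retry loop is more robust; here is a sketch (assuming the same sqlcmd path as above, with the retry count as a parameter):

```shell
#!/bin/sh
# Poll SQL Server until it accepts a trivial query, instead of
# sleeping a fixed 30 seconds. Returns non-zero if it never comes up.
wait_for_sqlserver() {
  retries="${1:-30}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    if /opt/mssql-tools/bin/sqlcmd -U sa -P "$SA_PASSWORD" \
        -Q "SELECT 1" >/dev/null 2>&1; then
      echo "SQL Server is up"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "SQL Server did not come up after $retries attempts" >&2
  return 1
}
```

In the compose command block, wait_for_sqlserver (or the inlined loop) would replace the sleep 30 before the scripts are run.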