proxysql: inject secrets into proxysql.cnf - Kubernetes

I have a few proxysql (https://proxysql.com/) instances (running in Kubernetes). However, I don't want to hardcode the db credentials in the config file (proxysql.cnf). I was hoping I could use ENV variables but I wasn't able to get that to work. What is the proper way to include secrets in a proxysql instance without hard coding passwords in plain text files?
I was thinking of including the config file as one secret and mounting it in Kubernetes (seems like overkill or just wrong), or running envsubst in a startup script or init container.
Thoughts?

What I ended up doing was running a sidecar with an init script mounted as a ConfigMap:
#!/bin/sh
# wait until the ProxySQL admin interface accepts connections
echo "Waiting for the ProxySQL admin interface..."
while ! nc -z 127.0.0.1 6032; do
  sleep 0.1
done
echo "ProxySQL admin interface is up!"
echo "Loading runtime data..."
# create the application user from env-provided credentials
echo "INSERT INTO mysql_users(username,password,default_hostgroup) VALUES ('$USERNAME','$PASSWORD',1);" | mysql -u "$PROXYSQL_USER" -p"$PROXYSQL_PASSWORD" -h 127.0.0.1 -P6032
echo "LOAD MYSQL USERS TO RUNTIME;" | mysql -u "$PROXYSQL_USER" -p"$PROXYSQL_PASSWORD" -h 127.0.0.1 -P6032
echo "Runtime data loaded."
# keep the sidecar container alive
while true; do sleep 300; done
Seems to work nicely.
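For reference, the credentials in that script arrive as environment variables; a minimal sketch of supplying them, assuming a Secret and ConfigMap with hypothetical names (the sidecar container then pulls the variables in via secretKeyRef/envFrom and mounts the script from the ConfigMap):
# hypothetical resource names; adjust to your deployment
kubectl create secret generic proxysql-db-credentials \
  --from-literal=USERNAME=app_user \
  --from-literal=PASSWORD='changeme' \
  --from-literal=PROXYSQL_USER=admin \
  --from-literal=PROXYSQL_PASSWORD='changeme'
kubectl create configmap proxysql-init-script --from-file=init.sh
This keeps the passwords out of proxysql.cnf and out of the image; only the Secret holds them.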

Related

How to achieve persistence with Volumes on Windows Containers?

In our company we are trying to migrate an application to Docker with Windows Containers. The application uses a PostgreSQL database.
We are able to get the application running inside the Container. However, whenever we stop the container and start a new one from the same image, all the changes made to the database are gone.
How can we achieve persistence with data volumes on Windows Containers?
We've read in multiple articles that persistence can be accomplished with data volumes.
We've followed this guide and were able to achieve persistence without any problem on Linux Containers:
https://elanderson.net/2018/02/setup-postgresql-on-windows-with-docker/
However, on Windows Containers something is missing to get us where we need to be.
The Dockerfile we are using for creating an image with postgres on Windows Containers is:
-----START-----
FROM microsoft/aspnet:4.7.2-windowsservercore-1709
EXPOSE 5432
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN [Net.ServicePointManager]::SecurityProtocol = 'Tls12, Tls11, Tls' ; \
Invoke-WebRequest -UseBasicParsing -Uri 'https://get.enterprisedb.com/postgresql/postgresql-9.6.10-2-windows-x64.exe' -OutFile 'postgresql-installer.exe' ; \
Start-Process postgresql-installer.exe -ArgumentList '--mode unattended --superpassword password' -Wait ; \
Remove-Item postgresql-installer.exe -Force
SHELL ["cmd", "/S", "/C"]
RUN setx /M PATH "C:\\Program Files\\PostgreSQL\\9.6\\bin;%PATH%" && \
setx /M DATA_DIR "C:\\Program Files\\PostgreSQL\\9.6\\data" && \
setx /M PGPASSWORD "password"
RUN powershell -Command "Do { pg_isready -q } Until ($?)" && \
echo listen_addresses = '*' >> "%DATA_DIR%\\postgresql.conf" && \
echo host all all 0.0.0.0/0 trust >> "%DATA_DIR%\\pg_hba.conf" && \
echo host all all ::0/0 trust >> "%DATA_DIR%\\pg_hba.conf" && \
net stop postgresql-x64-9.6
----END----
The commands we are using to build the image and run the container are:
docker build -t psql1709 .
docker run -d -it -p 8701:5432 --name postgresv1 -v "posgresData:c:\Program Files\PostgreSQL\9.6\data" psql1709
The problem is likely that DATA_DIR is not set when running the container, and as a result the database is written to a different path than the one where your volume is mounted.
Each RUN instruction in a Dockerfile is executed in a new container, and the resulting filesystem changes of that step are committed to a new layer.
However, memory state is not persisted, so when you run:
setx /M DATA_DIR "C:\\Program Files\\PostgreSQL\\9.6\\data"
That environment variable is only set during that RUN instruction, not afterwards.
To set an environment variable that is persisted as part of the image that you build (and will be set for subsequent RUN instructions, and when running the image/container), use the ENV Dockerfile instruction:
ENV DATA_DIR "C:\\Program Files\\PostgreSQL\\9.6\\data"
(I'm not using Windows, so double check if the quoting/escaping works as expected)
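One quick way to verify that the variable is persisted in the image (a sketch, using the image tag from the question) is to inspect the image's configured environment:
docker inspect --format "{{ .Config.Env }}" psql1709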
Note: I see you're setting PGPASSWORD in an environment variable; be aware that (when using ENV), environment variables can be seen by anyone that has access to the container or image (docker inspect will show this information). In this case, the password seems to be a "default" password, so likely not a concern.

MongoDB Atlas: is it safe to whitelist all IPs, given that someone attempting to access the database still needs a password?

I have a Google App Engine app with my express server. I also have my db in MongoDB Atlas. I currently have my MongoDB Atlas whitelisting all IPs. The connection string is in the code for my express server running on Google Cloud. Presumably any attacker trying to get into the database would still need a username and password for the connection string.
Is it safe to do this?
If it's not safe, then how do I whitelist my google app engine on Mongo Atlas?
Is it safe to do this?
"Safe" is a relative term. It is safer than having an unauthed database open to the internet, but the weakest link is now your password.
A whitelist is an additional layer of security, so that if someone knows or can guess your password, they can't just connect from anywhere: they must be connecting from a set of known IP addresses. This makes the attack surface smaller, so the database is less likely to be broken into by a random person on the internet.
If it's not safe, then how do I whitelist my google app engine on Mongo Atlas?
You would need to determine the IP ranges of your application, and plug in that range into the whitelist.
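If you do end up with a fixed set of egress addresses, one way to add a range is the project IP access list API (the same endpoint the script in the next answer uses); a hedged sketch with placeholder keys and CIDR:
curl -s --user "$MONGO_ATLAS_API_PK:$MONGO_ATLAS_API_SK" --digest \
  --header "Content-Type: application/json" \
  --request POST \
  "https://cloud.mongodb.com/api/atlas/v1.0/groups/$MONGO_ATLAS_API_PROJECT_ID/accessList" \
  --data '[{ "cidrBlock": "203.0.113.0/24", "comment": "app engine egress range" }]'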
here is an answer i left elsewhere. hope it helps someone who comes across this:
this script will be kept up to date on my gist
why
mongo atlas provides reasonably priced access to a managed mongo DB. CSPs where containers are hosted charge too much for their managed mongo DB. they all suggest setting an insecure CIDR (0.0.0.0/0) to allow the container to access the cluster. this is obviously ridiculous.
this entrypoint script is deliberately surgical, to maintain least-privileged access: only the current hosted IP address of the service is whitelisted.
usage
set as the entrypoint for the Dockerfile
run in cloud init / VM startup if not using a container (and delete the last line exec "$@", since that is just for containers)
behavior
uses the mongo atlas project IP access list endpoints
will detect the hosted IP address of the container and whitelist it with the cluster using the mongo atlas API
if the service has no whitelist entry it is created
if the service has an existing whitelist entry that matches current IP no change
if the service IP has changed the old entry is deleted and new one is created
when a whitelist entry is created the service sleeps for 60s to wait for atlas to propagate access to the cluster
env
setup
1. create API key for org
2. add API key to project
3. copy the public key (MONGO_ATLAS_API_PK) and secret key (MONGO_ATLAS_API_SK)
4. go to project settings page and copy the project ID (MONGO_ATLAS_API_PROJECT_ID)
5. provide the following values in the env of the container service:
SERVICE_NAME: unique name used for creating / updating (deleting old) whitelist entry
MONGO_ATLAS_API_PK: step 3
MONGO_ATLAS_API_SK: step 3
MONGO_ATLAS_API_PROJECT_ID: step 4
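for example, when running the container directly, the values can be passed like this (placeholder values):
docker run \
  -e SERVICE_NAME=my-api \
  -e MONGO_ATLAS_API_PK=xxxx \
  -e MONGO_ATLAS_API_SK=xxxxxxxx \
  -e MONGO_ATLAS_API_PROJECT_ID=xxxxxxxxxxxx \
  my-image:latest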
deps
bash
curl
jq CLI JSON parser
# alpine / apk
apk update \
&& apk add --no-cache \
bash \
curl \
jq
# ubuntu / apt
export DEBIAN_FRONTEND=noninteractive \
&& apt-get update \
&& apt-get -y install \
bash \
curl \
jq
script
#!/usr/bin/env bash
# -- ENV -- #
# these must be available to the container service at runtime
#
# SERVICE_NAME
#
# MONGO_ATLAS_API_PK
# MONGO_ATLAS_API_SK
# MONGO_ATLAS_API_PROJECT_ID
#
# -- ENV -- #
set -e
mongo_api_base_url='https://cloud.mongodb.com/api/atlas/v1.0'
check_for_deps() {
deps=(
bash
curl
jq
)
for dep in "${deps[@]}"; do
if [ ! "$(command -v $dep)" ]
then
echo "dependency [$dep] not found. exiting"
exit 1
fi
done
}
make_mongo_api_request() {
local request_method="$1"
local request_url="$2"
local data="$3"
curl -s \
--user "$MONGO_ATLAS_API_PK:$MONGO_ATLAS_API_SK" --digest \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--request "$request_method" "$request_url" \
--data "$data"
}
get_access_list_endpoint() {
echo -n "$mongo_api_base_url/groups/$MONGO_ATLAS_API_PROJECT_ID/accessList"
}
get_service_ip() {
echo -n "$(curl https://ipinfo.io/ip -s)"
}
get_previous_service_ip() {
local access_list_endpoint=`get_access_list_endpoint`
local previous_ip=`make_mongo_api_request 'GET' "$access_list_endpoint" \
| jq --arg SERVICE_NAME "$SERVICE_NAME" -r \
'.results[]? as $results | $results.comment | if test("\\[\($SERVICE_NAME)\\]") then $results.ipAddress else empty end'`
echo "$previous_ip"
}
whitelist_service_ip() {
local current_service_ip="$1"
local comment="Hosted IP of [$SERVICE_NAME] [set#$(date +%s)]"
if (( "${#comment}" > 80 )); then
echo "comment field value will be above 80 char limit: \"$comment\""
echo "comment would be too long due to length of service name [$SERVICE_NAME] [${#SERVICE_NAME}]"
echo "change comment format or service name then retry. exiting to avoid mongo API failure"
exit 1
fi
echo "whitelisting service IP [$current_service_ip] with comment value: \"$comment\""
response=`make_mongo_api_request \
'POST' \
"$(get_access_list_endpoint)?pretty=true" \
"[
{
\"comment\" : \"$comment\",
\"ipAddress\": \"$current_service_ip\"
}
]" \
| jq -r 'if .error then . else empty end'`
if [[ -n "$response" ]];
then
echo 'API error whitelisting service'
echo "$response"
exit 1
else
echo "whitelist request successful"
echo "waiting 60s for whitelist to propagate to cluster"
sleep 60s
fi
}
delete_previous_service_ip() {
local previous_service_ip="$1"
echo "deleting previous service IP address of [$SERVICE_NAME]"
make_mongo_api_request \
'DELETE' \
"$(get_access_list_endpoint)/$previous_service_ip"
}
set_mongo_whitelist_for_service_ip() {
local current_service_ip=`get_service_ip`
local previous_service_ip=`get_previous_service_ip`
if [[ -z "$previous_service_ip" ]]; then
echo "service [$SERVICE_NAME] has not yet been whitelisted"
whitelist_service_ip "$current_service_ip"
elif [[ "$current_service_ip" == "$previous_service_ip" ]]; then
echo "service [$SERVICE_NAME] IP has not changed"
else
echo "service [$SERVICE_NAME] IP has changed from [$previous_service_ip] to [$current_service_ip]"
delete_previous_service_ip "$previous_service_ip"
whitelist_service_ip "$current_service_ip"
fi
}
check_for_deps
set_mongo_whitelist_for_service_ip
# run CMD
exec "$@"
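as a usage sketch (image, env file and command are placeholders), the script can also be supplied as the entrypoint at run time, assuming it is copied into the image at /entrypoint.sh:
docker run --env-file atlas.env \
  --entrypoint /entrypoint.sh \
  my-image:latest \
  node server.js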

How to restore a Postgres dump while building a Docker image?

I'm trying to avoid touching a shared dev database in my workflow; to make this easier, I want to have Docker image definitions on my disk for the schemas I need. I'm stuck however at making a Dockerfile that will create a Postgres image with the dump already restored. My problem is that while the Docker image is being built, the Postgres server isn't running.
While messing around in the container in a shell, I tried starting the server manually, but I'm not sure of the proper way to do so. /docker-entrypoint.sh doesn't seem to do anything, and I can't figure out how to "correctly" start the server.
So what I need to do is:
start with "FROM postgres"
copy the dump file into the container
start the PG server
run psql to restore the dump file
kill the PG server
(The steps I don't know how to do are "start the PG server" and "kill the PG server"; the rest is easy.)
What I'd like to avoid is:
Running the restore manually into an existing container, the whole idea is to be able to switch between different databases without having to touch the application config.
Saving the restored image, I'd like to be able to rebuild the image for a database easily with a different dump. (Also it doesn't feel very Docker to have unrepeatable image builds.)
This can be done with the following Dockerfile, provided you supply an example.pg dump file:
FROM postgres:9.6.16-alpine
LABEL maintainer="lu@cobrainer.com"
LABEL org="Cobrainer GmbH"
ARG PG_POSTGRES_PWD=postgres
ARG DBUSER=someuser
ARG DBUSER_PWD=P@ssw0rd
ARG DBNAME=sampledb
ARG DB_DUMP_FILE=example.pg
ENV POSTGRES_DB launchpad
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD ${PG_POSTGRES_PWD}
ENV PGDATA /pgdata
COPY wait-for-pg-isready.sh /tmp/wait-for-pg-isready.sh
COPY ${DB_DUMP_FILE} /tmp/pgdump.pg
RUN set -e && \
nohup bash -c "docker-entrypoint.sh postgres &" && \
/tmp/wait-for-pg-isready.sh && \
psql -U postgres -c "CREATE USER ${DBUSER} WITH SUPERUSER CREATEDB CREATEROLE ENCRYPTED PASSWORD '${DBUSER_PWD}';" && \
psql -U ${DBUSER} -d ${POSTGRES_DB} -c "CREATE DATABASE ${DBNAME} TEMPLATE template0;" && \
pg_restore -v --no-owner --role=${DBUSER} --exit-on-error -U ${DBUSER} -d ${DBNAME} /tmp/pgdump.pg && \
psql -U postgres -c "ALTER USER ${DBUSER} WITH NOSUPERUSER;" && \
rm -rf /tmp/pgdump.pg
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
CMD pg_isready -U postgres -d launchpad
where the wait-for-pg-isready.sh is:
#!/bin/bash
set -e
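# parse `ifconfig` output and store a non-loopback IPv4 address in the variable named by $1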
get_non_lo_ip() {
local _ip _non_lo_ip _line _nl=$'\n'
while IFS=$': \t' read -a _line ;do
[ -z "${_line%inet}" ] &&
_ip=${_line[${#_line[1]}>4?1:2]} &&
[ "${_ip#127.0.0.1}" ] && _non_lo_ip=$_ip
done< <(LANG=C /sbin/ifconfig)
printf ${1+-v} $1 "%s${_nl:0:$[${#1}>0?0:1]}" $_non_lo_ip
}
get_non_lo_ip NON_LO_IP
until pg_isready -h $NON_LO_IP -U "postgres" -d "launchpad"; do
>&2 echo "Postgres is not ready - sleeping..."
sleep 4
done
>&2 echo "Postgres is up - you can execute commands now"
For the two "unsure steps":
start the PG server
nohup bash -c "docker-entrypoint.sh postgres &" can take care of it
kill the PG server
It's not really necessary, since the server started during a RUN instruction does not outlive that instruction anyway; only the filesystem changes are committed to the image
The above scripts together with a more detailed README are available at https://github.com/cobrainer/pg-docker-with-restored-db
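As a usage sketch (image tag and build-arg values are placeholders), the image can be built and started roughly like this:
docker build \
  --build-arg DBNAME=sampledb \
  --build-arg DB_DUMP_FILE=example.pg \
  -t restored-db-image .
docker run -d -p 5432:5432 restored-db-image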
You can utilise volumes.
The postgres image has an environment variable you can set: PGDATA
See docs: https://hub.docker.com/_/postgres/
You could then point it at a pre-created volume containing the exact db data that you require, passing this as an argument when running the image.
https://docs.docker.com/storage/volumes/#start-a-container-with-a-volume
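For example (a sketch; the volume and image names are placeholders), you could pre-populate a named volume once and then point PGDATA at it when starting the container:
docker volume create pgdata-sampledb
docker run -d \
  -e PGDATA=/var/lib/postgresql/data/pgdata \
  -v pgdata-sampledb:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:9.6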
Alternate solution can also be found here: Starting and populating a Postgres container in Docker
A general approach to this, which should work for any system you want to initialize and which I remember using on other projects, is the following:
Instead of trying to do this during the build, use Docker Compose dependencies so that you end up with:
your db service that fires up the database without any initialization that requires it to be live
a db-init service that:
takes a dependency on db
waits for the database to come up using, say, dockerize (see the sketch after this list)
then initializes the database while maintaining idempotency (e.g. using schema migration)
and exits
your application services that now depend on db-init instead of db
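For the db-init service, a minimal sketch of its command (assuming dockerize is installed in its image and migrate.sh is your idempotent migration script):
#!/bin/sh
# block until Postgres accepts TCP connections, then run migrations and exit
dockerize -wait tcp://db:5432 -timeout 60s
./migrate.sh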

Backup postgresql database from 4D

I am using 4D for the front-end and PostgreSQL for the back-end, so I have a requirement to take database backups from the front-end.
Here what i have done so far for taking backups in 4D.
C_LONGINT(i_pg_connection)
i_pg_connection:=PgSQL Connect ("localhost";"admin";"admin";"test_db")
LAUNCH EXTERNAL PROCESS("C:\\Program Files\\PostgreSQL\\9.5\\bin\\pg_dump.exe -h localhost -p 5432 -U admin -F c -b -v -f C:\\Users\\Admin_user\\Desktop\\backup_test\\db_backup.backup test_db")
PgSQL Close (i_pg_connection)
But it's not taking the backup.
The backup command itself is OK, because it works perfectly when run from the command prompt.
What's wrong in my code?
Thanks in advance.
Unneeded commands in your code
If you are using LAUNCH EXTERNAL PROCESS to do the backup then you do not need the PgSQL CONNECT and PgSQL CLOSE.
These plug-in commands do not execute in the same context as LAUNCH EXTERNAL PROCESS so they are unneeded in this situation.
Make sure you have write access
If the 4D Database is running as a Service, or more specifically as a user that does not have write access to C:\Users\Admin_user\..., then it could be failing due to a permissions issue.
Make sure that you are writing to a location that you have write access to, and also be sure to check the $out and $err parameters to see what the Standard Output and Error Streams are.
You need to specify a password for pg_dump
Another problem is that you are not specifying the password.
You could either use the PGPASSWORD environment variable or use a pgpass.conf file in the user's profile directory.
Regarding the PGPASSWORD environment variable, the documentation has the following warning:
Use of this environment variable is not recommended for security reasons, as some operating systems allow non-root users to see process environment variables via ps; instead consider using the ~/.pgpass file
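For reference, each line of the pgpass file has the form hostname:port:database:username:password; on Windows the file lives at %APPDATA%\postgresql\pgpass.conf. A single entry matching the command above would look like this (placeholder password):
localhost:5432:test_db:admin:your_postgresql_password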
Example using pgpass.conf
The following example assumes you have a pgpass.conf file in place:
C_TEXT($c;$in;$out;$err)
$c:="C:\\Program Files\\PostgreSQL\\9.5\\bin\\pg_dump.exe -h localhost -p 5432 -U admin -F"
$c:=$c+" c -b -v -f C:\\Users\\Admin_user\\Desktop\\backup_test\\db_backup.backup test_db"
LAUNCH EXTERNAL PROCESS($c;$in;$out;$err)
TRACE
Example using PGPASSWORD environment variable
The following example sets the PGPASSWORD environment variable before the call to pg_dump and then clears the variable after the call:
C_TEXT($c;$in;$out;$err)
SET ENVIRONMENT VARIABLE ( "PGPASSWORD" ; "your postgreSQL password" )
$c:="C:\\Program Files\\PostgreSQL\\9.5\\bin\\pg_dump.exe -h localhost -p 5432 -U admin -F"
$c:=$c+" c -b -v -f C:\\Users\\Admin_user\\Desktop\\backup_test\\db_backup.backup test_db"
LAUNCH EXTERNAL PROCESS($c;$in;$out;$err)
SET ENVIRONMENT VARIABLE ( "PGPASSWORD" ; "" ) // clear password for security
TRACE
Debugging
Make sure to use the debugger to check the $out and $err to see what the underlying issue is.

Getting User name + password to docker container

I've really been struggling over the past few days trying to set up some docker containers and shell scripts to create an environment for my application to run in.
The long and short of it is that I have a web server which requires a database to operate. My aim is to have end users unzip the content onto their docker machine, run a build script (which just builds the relevant docker images), then run a OneTime.sh script (which creates the necessary volumes and databases). During this script, they are prompted for the user name and password they would like for the superuser of the database.
The problem I'm having is getting those values to the docker image. Here is my script:
# Create the volumes for the data backend database.
docker volume create --name psql-data-etc
docker volume create --name psql-data-log
docker volume create --name psql-data-lib
# Create data store database
echo -e "\n${TITLE}[Data Store Database]${NC}"
docker run -v psql-data-etc:/etc/postgresql -v psql-data-log:/var/log/postgresql -v psql-data-lib:/var/lib/postgresql -p 9001:5432 -P --name psql-data-onetime postgres-setup
# Close containers
docker stop psql-data-onetime
docker rm psql-data-onetime
docker stop psql-transactions-onetime
docker rm psql-transactions-onetime
And here is the docker file:
FROM ubuntu
#Required environment variables: USERNAME, PASSWORD, DBNAME
# Add the PostgreSQL PGP key to verify their Debian packages.
# It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
# Add PostgreSQL's repository. It contains the most recent stable release
# of PostgreSQL, ``9.3``.
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" > /etc/apt/sources.list.d/pgdg.list
# Install ``python-software-properties``, ``software-properties-common`` and PostgreSQL 9.3
# There are some warnings (in red) that show up during the build. You can hide
# them by prefixing each apt-get statement with DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y python-software-properties software-properties-common postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3
# Note: The official Debian and Ubuntu images automatically ``apt-get clean``
# after each ``apt-get``
# Run the rest of the commands as the ``postgres`` user created by the ``postgres-9.3`` package when it was ``apt-get installed``
USER postgres
# Complete configuration
USER root
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
# Expose the PostgreSQL port
EXPOSE 5432
# Add VOLUMEs to allow backup of config, logs and databases
RUN mkdir -p /var/run/postgresql && chown -R postgres /var/run/postgresql
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
# Run setup script
ADD Setup.sh /
CMD ["sh", "Setup.sh"]
The script 'Setup.sh' is the following:
echo -n " User name: "
read user
echo -n " Password: "
read password
echo -n " Database Name: "
read dbname
/etc/init.d/postgresql start
/usr/lib/postgresql/9.3/bin/psql --command "CREATE USER $user WITH SUPERUSER PASSWORD '$password';"
/usr/lib/postgresql/9.3/bin/createdb -O $user $dbname
exit
Why doesn't this work? (I don't get prompted to enter the text, and it throws an error that the parameters are bad.) What is the proper way to do something like this? It feels like it's probably a pretty common problem to solve, but I cannot for the life of me find any non-convoluted examples of this behaviour.
The main purpose of this is to make life easier for the end user, so if I could just prompt them for the user name, password, and dbname, (plus calling the correct scripts), that would be ideal.
EDIT:
After running, the log file looks like this:
User name:
Password:
Database Name:
Usage: /etc/init.d/postgresql {start|stop|restart|reload|force-reload|status} [version ..]
EDIT 2:
After updating to CMD ["sh", "-x", "Setup.sh"]
I get:
echo -n User name:
+read user
:bad variable nameuser
echo -n Password:
+read password
:bad variable namepassword
echo -n Database Name:
+read dbname
:bad variable dbname