Where is the Postgres username/password being created in this Dockerfile? - postgresql

So I was following this tutorial:
https://realpython.com/blog/python/django-development-with-docker-compose-and-machine/
I have everything up and running; however, there are a few things going on that I'm not able to follow or understand.
In the main docker-compose.yml we have:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
    - redis:redis
  volumes:
    - /usr/src/app
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000

postgres:
  restart: always
  image: postgres:latest
  ports:
    - "5432:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data/
You will notice there is an env_file containing:
DB_NAME=postgres
DB_USER=postgres
DB_PASS=postgres
My question is: when are the postgres user and password being set? If I run this docker-compose setup everything works, meaning the web app can pass credentials to the postgres database and establish a connection. What I can't follow, however, is where those credentials are being set in the first place.
I was assuming that in the base postgres Dockerfile there would be some instruction to set the database name, username and password, but I don't see one. Here is a copy of the base postgres Dockerfile below.
# vim:set ft=dockerfile:
FROM debian:jessie
# explicitly set user/group IDs
RUN groupadd -r postgres --gid=999 && useradd -r -g postgres --uid=999 postgres
# grab gosu for easy step-down from root
ENV GOSU_VERSION 1.7
RUN set -x \
&& apt-get update && apt-get install -y --no-install-recommends ca-certificates wget && rm -rf /var/lib/apt/lists/* \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture)" \
&& wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture).asc" \
&& export GNUPGHOME="$(mktemp -d)" \
&& gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
&& gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu \
&& rm -r "$GNUPGHOME" /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu \
&& gosu nobody true \
&& apt-get purge -y --auto-remove ca-certificates wget
# make the "en_US.UTF-8" locale so postgres will be utf-8 enabled by default
RUN apt-get update && apt-get install -y locales && rm -rf /var/lib/apt/lists/* \
&& localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
ENV LANG en_US.utf8
RUN mkdir /docker-entrypoint-initdb.d
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
ENV PG_MAJOR 9.6
ENV PG_VERSION 9.6.1-1.pgdg80+1
RUN echo 'deb http://apt.postgresql.org/pub/repos/apt/ jessie-pgdg main' $PG_MAJOR > /etc/apt/sources.list.d/pgdg.list
RUN apt-get update \
&& apt-get install -y postgresql-common \
&& sed -ri 's/#(create_main_cluster) .*$/\1 = false/' /etc/postgresql-common/createcluster.conf \
&& apt-get install -y \
postgresql-$PG_MAJOR=$PG_VERSION \
postgresql-contrib-$PG_MAJOR=$PG_VERSION \
&& rm -rf /var/lib/apt/lists/*
# make the sample config easier to munge (and "correct by default")
RUN mv -v /usr/share/postgresql/$PG_MAJOR/postgresql.conf.sample /usr/share/postgresql/ \
&& ln -sv ../postgresql.conf.sample /usr/share/postgresql/$PG_MAJOR/ \
&& sed -ri "s!^#?(listen_addresses)\s*=\s*\S+.*!\1 = '*'!" /usr/share/postgresql/postgresql.conf.sample
RUN mkdir -p /var/run/postgresql && chown -R postgres /var/run/postgresql
ENV PATH /usr/lib/postgresql/$PG_MAJOR/bin:$PATH
ENV PGDATA /var/lib/postgresql/data
VOLUME /var/lib/postgresql/data
COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 5432
CMD ["postgres"]

Not sure which postgres image you are using.
Look at the official postgres image documentation for complete information. It allows the user to specify the environment variables below, and these can easily be overridden at run time:
POSTGRES_PASSWORD
POSTGRES_USER
PGDATA
POSTGRES_DB
POSTGRES_INITDB_ARGS
Environment variables can be set using one of the three methods below, depending on your case.
Run image: if you are running the docker image directly, pass each variable to docker run with -e KEY=VALUE. Please refer to the documentation for more details here
docker run -e POSTGRES_PASSWORD=secret -e POSTGRES_USER=postgres <other options> image/name
Dockerfile: if you need to specify the environment variables in a Dockerfile, set them as shown below. Please refer to the documentation for more details here
ENV POSTGRES_PASSWORD=secret
ENV POSTGRES_USER=postgres
docker-compose: if you need to specify the environment variables in docker-compose.yml, set them on the postgres service as shown below. Please refer to the documentation for more details here
postgres:
  environment:
    - POSTGRES_PASSWORD=secret
    - POSTGRES_USER=postgres
Hope this is useful.
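In the setup from the question, another option that fits the existing layout is to add the POSTGRES_* values to the same .env file and point the postgres service at it. A minimal sketch, assuming the tutorial's values are kept:
# .env (add these alongside DB_NAME/DB_USER/DB_PASS)
POSTGRES_DB=postgres
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres

# docker-compose.yml
postgres:
  restart: always
  image: postgres:latest
  env_file: .env
  ports:
    - "5432:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data/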

I think the postgres user and password are being set in the entrypoint script, e.g. around line 23 of the official image's entrypoint:
https://github.com/docker-library/postgres/blob/e4942cb0f79b61024963dc0ac196375b26fa60dd/9.6/docker-entrypoint.sh
Can you check your entrypoint?
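For reference, the credential-handling part of that entrypoint boils down to roughly the following (paraphrased, not the verbatim script). It only runs when $PGDATA is empty, i.e. on the very first start, and when POSTGRES_PASSWORD is unset it falls back to trust authentication, which is why the tutorial's stack connects even though nothing ever sets a password explicitly:
# paraphrased from docker-entrypoint.sh of the official postgres image
if [ "$POSTGRES_PASSWORD" ]; then
    pass="PASSWORD '$POSTGRES_PASSWORD'"
    authMethod=md5
else
    pass=""
    authMethod=trust            # no password set -> accept all connections
fi

# create the requested database unless it is the default "postgres" one
if [ "${POSTGRES_DB:-postgres}" != 'postgres' ]; then
    psql --username postgres -c "CREATE DATABASE \"$POSTGRES_DB\";"
fi

# give the bootstrap superuser the requested password (a non-default POSTGRES_USER is created instead of altered)
psql --username postgres -c "ALTER USER \"${POSTGRES_USER:-postgres}\" WITH SUPERUSER $pass;"

# open remote access with the chosen auth method
echo "host all all all $authMethod" >> "$PGDATA/pg_hba.conf"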

All you need is
docker run -e POSTGRES_PASSWORD=123456 postgres
In the same way you can set the username:
docker run -e POSTGRES_USER=xyz postgres

Are you using a -v mounted folder? If so, you need to remove the entire pgdata/ folder you created previously so that postgres can re-initialize itself using the new environment variables.
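For example, with the compose file from the question (volume names can differ per project, check docker volume ls):
docker-compose stop postgres
docker-compose rm -f postgres
docker volume rm <project>_pgdata     # or: rm -rf ./pgdata for a bind-mounted host folder
docker-compose up -d postgres         # the entrypoint re-runs initdb and picks up POSTGRES_* again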

Related

Error deploying a custom image with the crunchydata postgres operator

I've created a custom image based on timescaledb where I've installed wal2json and I'm trying to deploy it on my kubernetes cluster using the crunchydata-postgres-operator. I've managed to set everything up except credentials to access the database.
I'm trying to create the pgo cluster with the following command:
pgo create cluster my-db --ccp-image-prefix="eu.gcr.io/<project-id>" --ccp-image="timescale-custom" -c latest -d <dbname> -u <username> --password="<my_password>"
This command executes successfully, but the database deployment enters a CrashLoopBackOff because of the following error:
Error: Database is uninitialized and superuser password is not specified.
You must specify POSTGRES_PASSWORD to a non-empty value for the
superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
You may also use "POSTGRES_HOST_AUTH_METHOD=trust" to allow all
connections without a password. This is *not* recommended.
See PostgreSQL documentation about "trust":
https://www.postgresql.org/docs/current/auth-trust.html
Seeing this, I tried to set POSTGRES_PASSWORD and POSTGRES_USER in the Dockerfile, but this does not alleviate the problem.
I know that these environment variables are usually set in the k8s deployment.yaml or in docker-compose.yml, but even though the postgres operator has some default credentials, they don't seem to be applied to the container.
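For comparison, outside the operator I would normally pass them in the pod spec, something like this generic (not operator-specific) snippet, where my-db-credentials is a hypothetical Secret:
# deployment.yaml (generic example)
spec:
  containers:
    - name: database
      image: eu.gcr.io/<project-id>/timescale-custom:latest
      env:
        - name: POSTGRES_USER
          value: <username>
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: my-db-credentials
              key: password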
Dockerfile:
FROM postgres:12 AS build
ENV VERSION 1_0
RUN buildDeps="curl build-essential ca-certificates git pkg-config glib2.0 postgresql-server-dev-$PG_MAJOR" \
&& apt-get update \
&& apt-get install -y --no-install-recommends ${buildDeps} \
&& echo "deb http://apt.postgresql.org/pub/repos/apt/ stretch-pgdg main" > /etc/apt/sources.list.d/pgdg.list \
&& curl https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
&& apt-get update \
&& apt-get install -y --no-install-recommends libc++1 postgresql-server-dev-$PG_MAJOR \
&& mkdir -p /tmp/build \
&& curl -o /tmp/build/${VERSION}.tar.gz -SL "https://github.com/eulerto/wal2json/archive/wal2json_${VERSION}.tar.gz" \
&& cd /tmp/build/ \
&& tar -xzf /tmp/build/${VERSION}.tar.gz -C /tmp/build/ \
&& cd /tmp/build/wal2json-wal2json_${VERSION} \
&& make && make install \
&& mkdir /outputs \
&& cp wal2json.so /outputs/ \
&& cd / \
&& rm -rf /tmp/build \
&& apt-get remove -y --purge ${buildDeps} \
&& apt-get autoremove -y --purge \
&& rm -rf /var/lib/apt/lists/
FROM timescale/timescaledb:1.7.4-pg12
COPY --from=build /outputs/wal2json.so /usr/local/lib/postgresql/
RUN echo "host replication all 127.0.0.1/32 trust" >> /var/lib/postgresql/data/pg_hba.conf
ENV POSTGRES_USER=<username> POSTGRES_PASSWORD=<password>
Edit:
After running the same command with the --debug flag I've received this output:
DEBU[0000] debug flag is set to true
DEBU[0000] in initConfig with url=https://127.0.0.1:8443
DEBU[0000] using PGO_NAMESPACE env var pgo
DEBU[0000] GetSessionCredentials called
DEBU[0000] PGOUSER environment variable is being used at /home/mycloud/.pgo/pgo/pgouser
DEBU[0000] pgouser file found at /home/mycloud/.pgo/pgo/pgouser contains admin:examplepassword
DEBU[0000] [admin examplepassword]
DEBU[0000] username=[admin] password=[examplepassword]
DEBU[0000] setting up httpclient with TLS
DEBU[0000] GetTLSTransport called
DEBU[0000] create cluster called
DEBU[0000] IsValidForResourceName: my-db
DEBU[0000] createCluster called...[https://127.0.0.1:8443/clusters]
DEBU[0000] &{200 OK 200 HTTP/1.1 1 1 map[Content-Length:[193] Content-Type:[application/json] Date:[Sat, 09 Jan 2021 19:56:05 GMT] Www-Authenticate:[Basic realm="Restricted"]] 0xc00017c080 193 [] false false map[]

Two docker images into a single docker image [duplicate]

This question already has answers here: Docker: Combine multiple images (4 answers). Closed 2 years ago.
I have a requirement to create a single docker image by merging the "RabbitMQ-management" and "MongoDB" images. Is it possible to do it?
I've tried the Dockerfile below and am able to access RabbitMQ, but MongoDB is not working. Could you please help me?
# cat Dockerfile
FROM rabbitmq:3.7.17
RUN rabbitmq-plugins enable --offline rabbitmq_management
# extract "rabbitmqadmin" from inside the "rabbitmq_management-X.Y.Z.ez" plugin zipfile
# see https://github.com/docker-library/rabbitmq/issues/207
RUN apt-get update && apt-get install -y gnupg2
RUN gpg --keyserver ha.pool.sks-keyservers.net --recv-keys 0C49F3730359A14518585931BC711F9BA15703C6 && \
gpg --export 0C49F3730359A14518585931BC711F9BA15703C6 > /etc/apt/trusted.gpg.d/mongodb.gpg
ARG MONGO_PACKAGE=mongodb-org
ARG MONGO_REPO=repo.mongodb.org
ENV MONGO_PACKAGE=${MONGO_PACKAGE} MONGO_REPO=${MONGO_REPO}
ENV MONGO_MAJOR 3.4
ENV MONGO_VERSION 3.4.18
RUN echo "deb http://$MONGO_REPO/apt/debian jessie/${MONGO_PACKAGE%-unstable}/$MONGO_MAJOR main" | tee "/etc/apt/sources.list.d/${MONGO_PACKAGE%-unstable}.list"
RUN echo "/etc/apt/sources.list.d/${MONGO_PACKAGE%-unstable}.list"
RUN apt-get update
RUN apt-get install -y ${MONGO_PACKAGE}=$MONGO_VERSION
RUN set -eux; \
erl -noinput -eval ' \
{ ok, AdminBin } = zip:foldl(fun(FileInArchive, GetInfo, GetBin, Acc) -> \
case Acc of \
"" -> \
case lists:suffix("/rabbitmqadmin", FileInArchive) of \
true -> GetBin(); \
false -> Acc \
end; \
_ -> Acc \
end \
end, "", init:get_plain_arguments()), \
io:format("~s", [ AdminBin ]), \
init:stop(). \
' -- /plugins/rabbitmq_management-*.ez > /usr/local/bin/rabbitmqadmin; \
[ -s /usr/local/bin/rabbitmqadmin ]; \
chmod +x /usr/local/bin/rabbitmqadmin; \
apt-get update; apt-get install -y --no-install-recommends python ca-certificates; rm -rf /var/lib/apt/lists/*; \
rabbitmqadmin --version
EXPOSE 15671 15672 27017
I'm using the command below to run RabbitMQ (it is working):
# docker run -it -p 15672:15672 -p 5672:5672 --hostname my-rabbitmq image
Command to start the MongoDB container (it is not working):
docker run -d -v /tmp/mongodb:/data/db -p 27017:27017 image mongod
It is not impossible to achieve what you are asking, but I would say it's not the correct way of doing it. Docker provides isolation, and the guideline is one process per container (should, not must).
If you follow this concept, it's easy to use other tools to bring up multiple containers together. For example, instead of a complicated Dockerfile that merges two images, you can write a nice docker-compose.yml:
version: '3'
services:
  mongo:
    image: mongo
    volumes:
      - /tmp/mongodb:/data/db
    ports:
      - "27017:27017"
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "15672:15672"
      - "5672:5672"
    depends_on:
      - mongo
Which is more readable IMO, and probably achieves the same outcome.
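That said, if you really must keep both processes in a single image, the usual workaround is a small launcher script used as the CMD. A rough sketch only (start-all.sh is a name I made up; COPY it into the image built from the merged Dockerfile above and set CMD ["./start-all.sh"]):
#!/bin/bash
# start-all.sh - start MongoDB in the background, keep RabbitMQ in the foreground
set -e
mkdir -p /data/db
mongod --fork --logpath /var/log/mongod.log --dbpath /data/db
exec rabbitmq-server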

How to install pgsql driver on docker php:7.1-apache?

I have an API Platform project with a PostgreSQL DB and I can't find how to enable the pdo_pgsql driver with Docker.
Here is my Dockerfile:
FROM php:7.1-apache
# PHP extensions
ENV APCU_VERSION 5.1.7
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
libicu-dev \
zlib1g-dev \
libpq-dev \
libzip-dev \
libpcre3-dev \
ssmtp vim git cron zip \
&& docker-php-ext-install \
pdo \
pdo_pgsql \
zip
# Apache config
RUN a2enmod rewrite
ADD docker/apache/vhost.conf /etc/apache2/sites-available/000-default.conf
# Add the application
ADD . /app
WORKDIR /app
# Install composer
RUN ./docker/composer.sh \
&& mv composer.phar /usr/bin/composer \
&& composer global require "hirak/prestissimo:^0.3"
RUN usermod -u 1000 www-data
#RUN chown -R www-data:www-data /app/var/cache /app/var/logs /app/var/sessions
CMD ["/app/docker/start.sh"]
and here is my docker-compose.yml file:
web:
  container_name: web-api-front
  build: .
  environment:
    SYMFONY_ENV: dev
  volumes:
    - .:/app
  ports:
    - 8084:80

psql:
  container_name: psql-api-front
  image: postgres
  environment:
    POSTGRES_PASSWORD: ''
    POSTGRES_USER: dbuser
    POSTGRES_DB: dbname
  ports:
    - "5433:5432"
  volumes:
    - ./docker/sql:/var/sql
I've tried a lot of websites but I still can't find a way to enable pgsql.
When I do
var_dump(PDO::getAvailableDrivers());
I only have
array(2) { [0]=> string(6) "sqlite" [1]=> string(5) "mysql" }
Also, when I run
docker-compose up
I have this in my logs; I'm not sure what it means:
psql-api-front | LOG: database system was shut down at 2017-08-01 08:18:57 UTC
psql-api-front | LOG: MultiXact member wraparound protections are now enabled
psql-api-front | LOG: database system is ready to accept connections
psql-api-front | LOG: autovacuum launcher started
What am I doing wrong?
That means that PostgreSQL itself is working properly, but for Apache/PHP to connect you will need to add the PostgreSQL libraries and drivers:
FROM php:7.1-apache
# PHP extensions
ENV APCU_VERSION 5.1.7
RUN buildDeps=" \
libicu-dev \
zlib1g-dev \
libsqlite3-dev \
libpq-dev \
" \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
$buildDeps \
libicu52 \
zlib1g \
sqlite3 \
git \
php5-pgsql \
&& rm -rf /var/lib/apt/lists/* \
&& docker-php-ext-install \
intl \
mbstring \
pdo_mysql \
pdo_pgsql \
pdo \
pgsql \
zip \
pdo_sqlite \
&& apt-get purge -y --auto-remove $buildDeps
RUN pecl install \
apcu-$APCU_VERSION \
xdebug \
&& docker-php-ext-enable xdebug \
&& docker-php-ext-enable --ini-name 05-opcache.ini \
opcache \
&& docker-php-ext-enable --ini-name 20-apcu.ini \
apcu
ARG SYMFONY_ENV=dev
ENV SYMFONY_ENV=dev
RUN if [ "$SYMFONY_ENV" != "dev" ]; then \
sed -i '1 a xdebug.remote_enable=1' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && \
sed -i '1 a xdebug.remote_handler=dbgp' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && \
sed -i '1 a xdebug.remote_autostart=0' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && \
sed -i '1 a xdebug.remote_connect_back=1 ' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && \
sed -i '1 a xdebug.remote_port=9001' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && \
sed -i '1 a xdebug.remote_log=/var/log/xdebug_remote.log' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
fi;
# Apache config
RUN a2enmod rewrite
ADD docker/apache/vhost.conf /etc/apache2/sites-available/000-default.conf
# PHP config
ADD docker/php/php.ini /usr/local/etc/php/php.ini
# Add the application
ADD . /app
WORKDIR /app
RUN chmod +x /app/docker/composer.sh
# Install composer
RUN /app/docker/composer.sh \
&& mv composer.phar /usr/bin/composer \
&& composer global require "hirak/prestissimo:^0.3"
ENV PATH="$PATH:$HOME/.composer/vendor/bin"
# to define
ARG INSTALL_DEP=true
RUN if [ -n "$INSTALL_DEP" ]; then \
if [ "$SYMFONY_ENV" != "prod" ]; then \
composer install --prefer-dist --no-scripts --no-dev --no-progress --no-suggest --optimize-autoloader --classmap-authoritative && composer run-script continuous-pipe; \
else \
composer install -o --no-interaction --prefer-dist --no-scripts && composer run-script continuous-pipe; \
fi; \
fi;
# Remove cache and logs if some and fixes permissions
RUN rm -rf var/cache/* && rm -rf var/logs/* && rm -rf var/sessions/* && chmod a+r var/ -R
# Apache gets grumpy about PID files pre-existing
RUN rm -f /var/run/apache2/apache2.pid
RUN a2enmod ssl
EXPOSE 443
CMD ["/app/docker/apache/run.sh"]
This should work properly, and you can compare it with your existing configuration.
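After changing the Dockerfile, rebuild without cache and verify the driver is actually compiled in, for example:
docker-compose build --no-cache web
docker-compose run --rm web php -m | grep -i pgsql     # should list pdo_pgsql (and pgsql)
docker-compose run --rm web php -r 'var_dump(PDO::getAvailableDrivers());'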

pg_dump server and pg_dump version mismatch in docker

When I run the command psql --version within the railsApp container, I get 9.4.12 and when I run the same within the postgres container, I get 9.6.2. How can I get the versions to match?
I am getting the following error when I try to run a migration on the Rails app, which does a pg_dump SQL import.
pg_dump: server version: 9.6.2; pg_dump version: 9.4.12
pg_dump: aborting because of server version mismatch
rails aborted!
Here's my docker-compose.yml file:
version: "2.1"
services:
  railsApp:
    build:
      context: ./
    ports:
      - "3000:3000"
    links:
      - postgres
    volumes:
      - .:/app
  postgres:
    image: postgres:9.6
    ports:
      - "5432:5432"
    volumes:
      - ./.postgres:/var/lib/postgresql
The Dockerfile:
FROM ruby:2.3.3
# setup /app as our working directory
RUN mkdir /app
WORKDIR /app
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Set debconf to run non-interactively
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
# Install base dependencies
RUN apt-get update && apt-get install -y -q --no-install-recommends \
apt-transport-https \
build-essential \
ca-certificates \
curl \
git \
libssl-dev \
python \
rsync \
software-properties-common \
wget \
postgresql-client \
graphicsmagick \
&& rm -rf /var/lib/apt/lists/*
# Install node and npm with nvm
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.33.0/install.sh | bash
ENV NVM_DIR=/root/.nvm
ENV NODE_VERSION v7.2.1
ENV NODE_PATH $NVM_DIR/versions/node/$NODE_VERSION
ENV PATH $NODE_PATH/bin:./node_modules/.bin:$PATH
RUN source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
# Install our ruby dependencies
ADD Gemfile Gemfile.lock /app/
RUN bundle install
# copy the rest of our code over
ADD . /app
ENV RAILS_ENV development
ENV SECRET_KEY_BASE a6bdc5f788624f00b68ff82456d94bf81bb50c2e114b2be19af2e6a9b76f9307b11d05af4093395b0471c4141b3cd638356f888e90080f8ae60710f992beba8f
# Expose port 3000 to the Docker host, so we can access it from the outside.
EXPOSE 3000
# Set the default command to run our server on port 3000
CMD ["rails", "server", "-p", "3000", "-b", "0.0.0.0"]
I had the same issue; I used an alternative way to take a dump.
First I accessed the container's terminal, ran pg_dump inside it, and then copied the file from the container to the host.
Below are the commands
docker exec -it <container-id> /bin/bash   # open a shell inside the container
pg_dump > ~/dump                           # take the dump inside the container
docker cp <container-id>:/root/dump ~/dump # copy the dump file to the host
Hope the above solution helps.
The easiest approach is to use the correct postgres version in docker-compose.yml. Change:
postgres:
  image: postgres:9.6
To:
postgres:
  image: postgres:9.4.2
All available versions here.
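Alternatively, keep postgres:9.6 and upgrade the client inside the railsApp image instead. A sketch of the extra Dockerfile lines, assuming the ruby:2.3.3 base is still Debian jessie (adjust the distribution name otherwise):
# install the 9.6 client from the PostgreSQL apt repository instead of the distro's 9.4 package
RUN echo 'deb http://apt.postgresql.org/pub/repos/apt/ jessie-pgdg main' > /etc/apt/sources.list.d/pgdg.list \
    && wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
    && apt-get update \
    && apt-get install -y --no-install-recommends postgresql-client-9.6 \
    && rm -rf /var/lib/apt/lists/*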

How to create docker image for postgis that will enable extension at build time or before container fully running?

What I mean is that I want to create a docker image for postgis that will be completely usable right after build, so that if a user runs
docker run -e POSTGRES_USER=user somepostgis
the user database would be created and the extensions already installed.
The official postgres image can't be used for that AFAIK.
Basically I need to write a script and set it as the entrypoint. This script should create the database and the extensions with the postgres server running on a different port, and then restart it on port 5432.
But I don't know sh and docker well enough to do that. Right now it's saying that there is no pg_ctl command.
If you want to help, you can fork:
FROM ubuntu:15.04
#ENV RELEASE_NAME lsb_release -sc
#RUN apt-get update && apt-get install wget
#RUN echo "deb http://apt.postgresql.org/pub/repos/apt ${RELEASE_NAME}-pgdg main" >> /etc/apt/sources.list
#RUN cat /etc/apt/sources.list
#RUN wget --quiet -O - http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc | sudo apt-key add -
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
postgresql-9.4-postgis-2.1 \
curl \
&& curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture)" \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture).asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu \
&& apt-get purge -y --auto-remove curl
RUN mkdir /docker-entrypoint-initdb.d
COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
RUN chmod +x /docker-entrypoint.sh
RUN ls -l /docker-entrypoint.sh
EXPOSE 5432
CMD ["postgres"]
So I'm trying to do something like that, but it doesn't work.
#!/bin/bash
: ${POSTGRES_DB:=$POSTGRES_USER}
gosu postgres pg_ctl start -w -D ${PGDATA} -o "-p 5433"
gosu postgres createuser ${POSTGRES_USER}
gosu postgres createdb ${POSTGRES_DB} -s -E UTF8
gosu postgres psql -d ${POSTGRES_DB} -c "create extension if not exists postgis;"
gosu postgres psql -d ${POSTGRES_DB} -c "create extension if not exists postgis_topology;"
pg_ctl -w restart
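For what it's worth, you can avoid the start-on-another-port-and-restart dance entirely by building on the official postgres image and dropping a script into /docker-entrypoint-initdb.d; the stock entrypoint runs everything in that directory right after it has created $POSTGRES_USER and $POSTGRES_DB. A rough sketch of that approach (file names are my own; this is essentially what the community postgis images do):
# Dockerfile
FROM postgres:9.4
RUN apt-get update \
    && apt-get install -y --no-install-recommends postgresql-9.4-postgis-2.1 \
    && rm -rf /var/lib/apt/lists/*
COPY init-postgis.sh /docker-entrypoint-initdb.d/

# init-postgis.sh
#!/bin/sh
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<'EOSQL'
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS postgis_topology;
EOSQL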