Docker Postgres file illegal option

I want to put variables inside the CMD of a Dockerfile for a Postgres container that ships the certificates needed for SSL. The Dockerfile is built from a docker-compose.yml file that has this database as one service and an app as another.
db.Dockerfile
FROM postgres:14.5-alpine
ENV EXT_KEY .key
COPY ./.docker/dev/init-database.sh /docker-entrypoint-initdb.d/
COPY ./.docker/dev/migrations/database_schema.tar ./
COPY ./.docker/dev/certs/out/postgresdb$EXT_KEY /var/lib/postgresql
COPY ./.docker/dev/certs/out/postgresdb.crt /var/lib/postgresql
COPY ./.docker/dev/certs/out/myCA.crt /var/lib/postgresql
COPY ./.docker/dev/certs/out/myCA.crl /var/lib/postgresql
COPY ./.docker/dev/certs/out/news_user$EXT_KEY ./
COPY ./.docker/dev/certs/out/news_user.crt ./
RUN chown 0:70 /var/lib/postgresql/postgresdb$EXT_KEY && chmod 640 /var/lib/postgresql/postgresdb$EXT_KEY
RUN chown 0:70 /var/lib/postgresql/postgresdb.crt && chmod 640 /var/lib/postgresql/postgresdb.crt
RUN chown 0:70 /var/lib/postgresql/myCA.crt && chmod 640 /var/lib/postgresql/myCA.crt
RUN chown 0:70 /var/lib/postgresql/myCA.crl && chmod 640 /var/lib/postgresql/myCA.crl
RUN chown 0:70 ./news_user$EXT_KEY && chmod 640 ./news_user$EXT_KEY
RUN chown 0:70 ./news_user.crt && chmod 640 ./news_user.crt
RUN chown postgres:postgres /docker-entrypoint-initdb.d/init-database.sh
EXPOSE 5432
USER postgres
ENTRYPOINT ["docker-entrypoint.sh"]
CMD [ "-c", "ssl=on" , "-c", "ssl_cert_file=/var/lib/postgresql/postgresdb.crt", "-c",\
"ssl_key_file=/var/lib/postgresql/postgresdb.${EXT_KEY}", "-c",\
"ssl_ca_file=/var/lib/postgresql/myCA.crt", "-c", "ssl_crl_file=/var/lib/postgresql/myCA.crl" ]
docker-compose.yml
version: "3.8"
services:
news_database:
build:
context: ../..
dockerfile: ./.docker/dev/db.Dockerfile
container_name: news_database
restart: unless-stopped
env_file:
- .env
ports:
- "5432:5432"
volumes:
- news_db:/var/lib/postgresql/data
news_app:
...
volumes:
news_db:
driver: local
When I run this, the variable is not present in the CMD and therefore the container fails.
Attempt 1
I tried changing the final command from array to string format
CMD -c ssl=on -c ssl_cert_file=/var/lib/postgresql/postgresdb.crt -c ssl_key_file=/var/lib/postgresql/postgresdb.key -c ssl_ca_file=/var/lib/postgresql/myCA.crt -c ssl_crl_file=/var/lib/postgresql/myCA.crl
It gives me a /bin/sh: illegal option - error.
Attempt 2
I removed the entrypoint completely and tried directly calling postgres with a CMD
CMD postgres -c ssl=on -c ssl_cert_file=/var/lib/postgresql/postgresdb.crt -c ssl_key_file=/var/lib/postgresql/postgresdb.key -c ssl_ca_file=/var/lib/postgresql/myCA.crt -c ssl_crl_file=/var/lib/postgresql/myCA.crl
It immediately gives me another error when I run it via docker-compose:
postgres: could not access the server configuration file "/var/lib/postgresql/data/postgresql.conf": No such file or directory
All I want is to have variables inside that CMD. Can someone kindly tell me a way to make this work?

Your ENTRYPOINT instruction is specified as a JSON array, meaning it's in exec form: no shell will be invoked for the execution of docker-entrypoint.sh. As there is no shell invoked, there won't be any environment variable expansion.
To make it work, try this:
ENTRYPOINT [ "sh", "-c", \
"docker-entrypoint.sh \
-c ssl=on \
-c ssl_cert_file=/var/lib/postgresql/postgresdb.crt \
-c ssl_key_file=/var/lib/postgresql/postgresdb${EXT_KEY} \
-c ssl_ca_file=/var/lib/postgresql/myCA.crt \
-c ssl_crl_file=/var/lib/postgresql/myCA.crl \
${0} ${#}" ]

Related

Docker 'backup' process container not seeing Database container postgres

I have a docker-compose.yml and associated Dockerfiles that give me simple dev and prod environments for an nginx-uvicorn-django-postgres stack. I want to add an optional 'backup' container that just runs cron to periodically connect to the 'postgres' container.
# backup container - derived from [this blog][1]
ARG DOCKER_REPO
ARG ALPINE_DOCKER_IMAGE # ALPINE
ARG ALPINE_DOCKER_TAG # LATEST
FROM ${DOCKER_REPO}${ALPINE_DOCKER_IMAGE}:${ALPINE_DOCKER_TAG}
ARG DB_PASSWORD
ARG DB_HOST # "db"
ARG DB_PORT # "5432"
ARG DB_NAME # "ken"
ARG DB_USERNAME # "postgres"
ENV PGPASSWORD=${DB_PASSWORD} HOST=${DB_HOST} PORT=${DB_PORT} PSQL_DB_NAME=${DB_NAME} \
USERNAME=${DB_USERNAME}
RUN printenv
RUN mkdir /output && \
mkdir /output/backups && \
mkdir /scripts && \
chmod a+x /scripts
COPY ./scripts/ /scripts/
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/15min/${DB_NAME}_15
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/daily/${DB_NAME}_day
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/weekly/${DB_NAME}_week
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/monthly/${DB_NAME}_month
RUN apk update && \
apk upgrade && \
apk add --no-cache postgresql-client && \
chmod a+x /etc/periodic/15min/${DB_NAME}_15 && \
chmod a+x /etc/periodic/daily/${DB_NAME}_day && \
chmod a+x /etc/periodic/weekly/${DB_NAME}_week && \
chmod a+x /etc/periodic/monthly/${DB_NAME}_month
The django container is derived from the official Python image and connects (through psycopg2) using ENV values for host, dbname, username, password, and port. The 'backup' container has these same values, but I get this error from the command line:
> pg_dump --host="$HOST" --port="$PORT" --username="$USERNAME" --dbname="$PSQL_DB_NAME"
pg_dump: error: could not translate host name "db" to address: Name does not resolve
Is Alpine missing something relevant that is present in the official Python image?
Edit:
I am running a system of shell scripts that take care of housekeeping for different configurations, so
> ./ken.sh dev_server
will set up the environment variables and then run docker-compose for the project and the containers
docker-compose.yml doesn't explicitly create a network.
I don't know what "db" should resolve to beyond just 'db://'? - its what the django container gets and it is able to resolve a connection to the 'db' service.
services:
  db:
    image: ${DOCKER_REPO}${DB_DOCKER_IMAGE}:${DB_DOCKER_TAG} # postgres:14
    container_name: ${PROJECT_NAME}_db
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - PGPASSWORD
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    command: ["postgres", "-c", "log_statement=all"]
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -h db"]
      interval: 2s
      timeout: 5s
      retries: 25
This is the 'dev_server' script run by the parent ken.sh script
function dev_server() {
  trap cleanup EXIT
  wait_and_launch_browser &
  docker-compose -p "${PROJECT_NAME}" up -d --build db nginx web pgadmin backup
  echo "Generate static files and copy them into static and file volumes."
  source ./scripts/generate_static_files.sh
  docker-compose -p "${PROJECT_NAME}" logs -f web nginx backup
}
Update: Worked through "Reasons why docker containers can't talk to each other" and found that all the containers are on a ken_default network, from 170.20.0.2 to 170.20.0.6.
I can docker exec ken_backup backup ken_db -c2, but not from db to backup, because the db container doesn't include ping.
From a shell on backup I cannot ping ken_db - ken_db doesn't resolve, nor does 'db'.
I can't make much of that and I'm not sure what to try next.
You are running the backup container as a separate service.
Docker Compose creates a default network per project (per docker-compose.yml file), and only the services started in that project are attached to it.
You need to get the DB and your backup container on the same docker network.
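One way to do that is to declare an explicitly named network that both services (or both compose projects) attach to. A minimal sketch, with service names and the network name assumed:
services:
  db:
    image: postgres:14
    networks:
      - backend
  backup:
    build: ./backup
    networks:
      - backend
networks:
  backend:
    name: shared_backend
With a shared network like this, the backup container can reach the database by its service name (db) regardless of which project started each container.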

Customize postgres start command in default docker postgres image

I want to change the start command of postgres to support SSL in the default docker image
db.Dockerfile
FROM postgres:14.5-alpine
COPY ./.docker/dev/init-database.sh /docker-entrypoint-initdb.d/
COPY ./.docker/dev/migrations/database_schema.tar ./
COPY ./.docker/dev/certs/out/server.key /var/lib/postgresql
COPY ./.docker/dev/certs/out/server.crt /var/lib/postgresql
COPY ./.docker/dev/certs/out/myCA.crt /var/lib/postgresql
COPY ./.docker/dev/certs/out/myCA.crl /var/lib/postgresql
COPY ./.docker/dev/certs/out/news_user.key ./
COPY ./.docker/dev/certs/out/news_user.crt ./
RUN chown 0:70 /var/lib/postgresql/server.key && chmod 640 /var/lib/postgresql/server.key
RUN chown 0:70 /var/lib/postgresql/server.crt && chmod 640 /var/lib/postgresql/server.crt
RUN chown 0:70 /var/lib/postgresql/myCA.crt && chmod 640 /var/lib/postgresql/myCA.crt
RUN chown 0:70 /var/lib/postgresql/myCA.crl && chmod 640 /var/lib/postgresql/myCA.crl
RUN chown 0:70 ./news_user.key && chmod 640 ./news_user.key
RUN chown 0:70 ./news_user.crt && chmod 640 ./news_user.crt
RUN chown postgres:postgres /docker-entrypoint-initdb.d/init-database.sh
EXPOSE 5432
CMD postgres -c ssl=on -c ssl_cert_file=/var/lib/postgresql/server.crt -c ssl_key_file=/var/lib/postgresql/server.key -c ssl_ca_file=/var/lib/postgresql/myCA.crt -c ssl_crl_file=/var/lib/postgresql/myCA.crl
When I run this image, I get an error saying
"root" execution of the PostgreSQL server is not permitted.
news_database | The server must be started under an unprivileged user ID to prevent
news_database | possible system security compromise. See the documentation for
news_database | more information on how to properly start the server.
The default postgres image comes with a user named postgres, so I tried adding a USER postgres line before the CMD.
Now it gives me a new error saying
postgres: could not access the server configuration file "/var/lib/postgresql/data/postgresql.conf": No such file or directory
Can someone kindly tell me how to fix this so that the postgres command works?
UPDATE 1
I am not running the Dockerfile directly; I run it via a docker-compose.yml file, since I have a python script in another container that accesses the database.
docker-compose.yml
version: "3.8"
services:
news_database:
build:
context: ../..
dockerfile: ./.docker/dev/db.Dockerfile
container_name: news_database
restart: unless-stopped
env_file:
- .env
ports:
- "5432:5432"
volumes:
- news_db:/var/lib/postgresql/data
# https://stackoverflow.com/a/55835081/5371505
# https://stackoverflow.com/questions/72213661/test-connection-to-postgres-with-ssl-with-the-command-line
healthcheck:
test: ["CMD-SHELL", "pg_isready -d 'hostaddr=$DATABASE_HOST user=$DATABASE_USER port=$DATABASE_PORT dbname=$DATABASE_NAME'"]
interval: 5s
timeout: 5s
retries: 5
# Dont add a 'restart' policy to the app because we run it as a cronjob regardless of whether it succeeds or fails
news_app:
# https://stackoverflow.com/questions/65594752/docker-compose-how-to-reference-files-in-other-directories
build:
context: ../..
dockerfile: ./.docker/dev/app.Dockerfile
env_file:
- .env
image: news_app
container_name: news_app
depends_on:
news_database:
condition: service_healthy
volumes:
news_db:
driver: local
I run the above file with this command
docker compose -f ".docker/dev/docker-compose.yml" up -d --build news_database && docker compose -f ".docker/dev/docker-compose.yml" logs --follow
I see a couple of problems with your Dockerfile. First, I've stripped out a few bits to reduce the complexity to something I can reproduce locally, so I have:
FROM postgres:14.5-alpine
COPY ./certs/out/server.key /var/lib/postgresql
COPY ./certs/out/server.crt /var/lib/postgresql
RUN chown 0:70 /var/lib/postgresql/server.key && chmod 640 /var/lib/postgresql/server.key
RUN chown 0:70 /var/lib/postgresql/server.crt && chmod 640 /var/lib/postgresql/server.crt
EXPOSE 5432
CMD postgres -c ssl=on -c ssl_cert_file=/var/lib/postgresql/server.crt -c ssl_key_file=/var/lib/postgresql/server.key -c ssl_ca_file=/var/lib/postgresql/myCA.crt -c ssl_crl_file=/var/lib/postgresql/myCA.crl
If we build a pgtest image from this Dockerfile and run it, we see:
$ docker build -t pgtest .
...
$ docker run --rm pgtest
"root" execution of the PostgreSQL server is not permitted.
The server must be started under an unprivileged user ID to prevent
possible system security compromise. See the documentation for
more information on how to properly start the server.
What's going on here? It turns out this is caused by how you've written your CMD directive. You need to use the "exec style", which means passing in a JSON array rather than a string:
CMD ["postgres", \
"-c", "ssl=on", \
"-c", "ssl_cert_file=/var/lib/postgresql/server.crt", \
"-c", "ssl_key_file=/var/lib/postgresql/server.key"]
This is necessary because there's a check in the entrypoint script for
the image that looks like this:
if [ "$1" = 'postgres' ] && ! _pg_want_help "$#"; then
When you write:
CMD postgres -c ...
This wraps the command in a call to /bin/sh -c, so the entrypoint's $1 will be /bin/sh rather than postgres, thus skipping the important database initialization logic. When we use the exec form of the CMD statement, $1 will be postgres, as expected.
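To make the difference concrete, here is what each form actually asks Docker to run (an illustration of the resulting command line, not literal output from the image):
# shell form: CMD postgres -c ssl=on ...
docker-entrypoint.sh /bin/sh -c 'postgres -c ssl=on ...'
# exec form: CMD ["postgres", "-c", "ssl=on", ...]
docker-entrypoint.sh postgres -c ssl=on ...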
With that fix, the image runs correctly:
$ docker run --rm -e POSTGRES_PASSWORD=secret pgtest
...
PostgreSQL init process complete; ready for start up.
2022-08-28 12:48:48.784 UTC [1] LOG: starting PostgreSQL 14.5 on x86_64-pc-linux-musl, compiled by gcc (Alpine 11.2.1_git20220219) 11.2.1 20220219, 64-bit
2022-08-28 12:48:48.784 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-08-28 12:48:48.784 UTC [1] LOG: listening on IPv6 address "::", port 5432
2022-08-28 12:48:48.786 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-08-28 12:48:48.789 UTC [48] LOG: database system was shut down at 2022-08-28 12:48:48 UTC
2022-08-28 12:48:48.792 UTC [1] LOG: database system is ready to accept connections
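To confirm TLS is actually in use, you can force an SSL connection over TCP from inside the container. A hedged check, assuming the container was started with --name pgtest and POSTGRES_PASSWORD=secret as above:
docker exec -e PGPASSWORD=secret pgtest psql 'host=localhost sslmode=require user=postgres' -c '\conninfo'
The \conninfo output should include an "SSL connection" line with the protocol and cipher in use.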

pg_dump server and pg_dump version mismatch in docker

When I run the command psql --version within the railsApp container, I get 9.4.12 and when I run the same within the postgres container, I get 9.6.2. How can I get the versions to match?
I am getting the following error when I try to run a migration on the Rails app, which does a pg_dump SQL import.
pg_dump: server version: 9.6.2; pg_dump version: 9.4.12
pg_dump: aborting because of server version mismatch
rails aborted!
Here's my docker-compose.yml file:
version: "2.1"
services:
railsApp:
build:
context: ./
ports:
- "3000:3000"
links:
- postgres
volumes:
- .:/app
postgres:
image: postgres:9.6
ports:
- "5432:5432"
volumes:
- ./.postgres:/var/lib/postgresql
The Dockerfile:
FROM ruby:2.3.3
# setup /app as our working directory
RUN mkdir /app
WORKDIR /app
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Set debconf to run non-interactively
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
# Install base dependencies
RUN apt-get update && apt-get install -y -q --no-install-recommends \
apt-transport-https \
build-essential \
ca-certificates \
curl \
git \
libssl-dev \
python \
rsync \
software-properties-common \
wget \
postgresql-client \
graphicsmagick \
&& rm -rf /var/lib/apt/lists/*
# Install node and npm with nvm
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.33.0/install.sh | bash
ENV NVM_DIR=/root/.nvm
ENV NODE_VERSION v7.2.1
ENV NODE_PATH $NVM_DIR/versions/node/$NODE_VERSION
ENV PATH $NODE_PATH/bin:./node_modules/.bin:$PATH
RUN source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
# Install our ruby dependencies
ADD Gemfile Gemfile.lock /app/
RUN bundle install
# copy the rest of our code over
ADD . /app
ENV RAILS_ENV development
ENV SECRET_KEY_BASE a6bdc5f788624f00b68ff82456d94bf81bb50c2e114b2be19af2e6a9b76f9307b11d05af4093395b0471c4141b3cd638356f888e90080f8ae60710f992beba8f
# Expose port 3000 to the Docker host, so we can access it from the outside.
EXPOSE 3000
# Set the default command to run our server on port 3000
CMD ["rails", "server", "-p", "3000", "-b", "0.0.0.0"]
I had the same issue, and I used an alternative way to take a dump: access the container's terminal, run pg_dump inside it, and copy the dump file from the container to the host.
Below are the commands:
docker exec -it <container-id> /bin/bash    # open a shell inside the postgres container
pg_dump -U postgres <dbname> > ~/dump       # take the dump inside the container
docker cp <container-id>:/root/dump ~/dump  # copy the dump file to the host
Hope the above solution helps.
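The interactive shell can also be skipped by invoking pg_dump through docker exec directly and redirecting on the host. A sketch, with the container id, superuser, and database name assumed:
docker exec <container-id> pg_dump -U postgres <dbname> > ~/dump.sql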
The easiest approach is to use a matching postgres version in the docker-compose file. Change:
postgres:
  image: postgres:9.6
To:
postgres:
  image: postgres:9.4.2
All available versions are listed on Docker Hub.
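Before and after changing the tag, you can confirm which versions are in play directly from the running services. A sketch, with service names taken from the compose file above:
docker-compose exec railsApp pg_dump --version
docker-compose exec postgres postgres --version
pg_dump only needs to match the server's major version (it can dump older servers, but refuses newer ones), so any 9.4.x client works against a 9.4.x server.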

Where is the Postgres username/password being created in this Dockerfile?

So I was following this tutorial:
https://realpython.com/blog/python/django-development-with-docker-compose-and-machine/
I have everything up and running; however, there are a few things going on that I'm not able to follow or understand.
In the main Docker-Compose we have:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
    - redis:redis
  volumes:
    - /usr/src/app
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
postgres:
  restart: always
  image: postgres:latest
  ports:
    - "5432:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data/
You will notice there is an env_file containing:
DB_NAME=postgres
DB_USER=postgres
DB_PASS=postgres
My question is: when are the postgres user and password being set? If I run this docker-compose setup, everything works, meaning the web app can pass credentials to the postgres database and establish a connection. I'm not able to follow, however, where those credentials are being set in the first place.
I assumed that in the base postgres Dockerfile there would be some instruction to set the database name, username, and password, but I do not see one. Here is a copy of the base postgres Dockerfile below.
# vim:set ft=dockerfile:
FROM debian:jessie
# explicitly set user/group IDs
RUN groupadd -r postgres --gid=999 && useradd -r -g postgres --uid=999 postgres
# grab gosu for easy step-down from root
ENV GOSU_VERSION 1.7
RUN set -x \
&& apt-get update && apt-get install -y --no-install-recommends ca-certificates wget && rm -rf /var/lib/apt/lists/* \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture)" \
&& wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture).asc" \
&& export GNUPGHOME="$(mktemp -d)" \
&& gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
&& gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu \
&& rm -r "$GNUPGHOME" /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu \
&& gosu nobody true \
&& apt-get purge -y --auto-remove ca-certificates wget
# make the "en_US.UTF-8" locale so postgres will be utf-8 enabled by default
RUN apt-get update && apt-get install -y locales && rm -rf /var/lib/apt/lists/* \
&& localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
ENV LANG en_US.utf8
RUN mkdir /docker-entrypoint-initdb.d
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
ENV PG_MAJOR 9.6
ENV PG_VERSION 9.6.1-1.pgdg80+1
RUN echo 'deb http://apt.postgresql.org/pub/repos/apt/ jessie-pgdg main' $PG_MAJOR > /etc/apt/sources.list.d/pgdg.list
RUN apt-get update \
&& apt-get install -y postgresql-common \
&& sed -ri 's/#(create_main_cluster) .*$/\1 = false/' /etc/postgresql-common/createcluster.conf \
&& apt-get install -y \
postgresql-$PG_MAJOR=$PG_VERSION \
postgresql-contrib-$PG_MAJOR=$PG_VERSION \
&& rm -rf /var/lib/apt/lists/*
# make the sample config easier to munge (and "correct by default")
RUN mv -v /usr/share/postgresql/$PG_MAJOR/postgresql.conf.sample /usr/share/postgresql/ \
&& ln -sv ../postgresql.conf.sample /usr/share/postgresql/$PG_MAJOR/ \
&& sed -ri "s!^#?(listen_addresses)\s*=\s*\S+.*!\1 = '*'!" /usr/share/postgresql/postgresql.conf.sample
RUN mkdir -p /var/run/postgresql && chown -R postgres /var/run/postgresql
ENV PATH /usr/lib/postgresql/$PG_MAJOR/bin:$PATH
ENV PGDATA /var/lib/postgresql/data
VOLUME /var/lib/postgresql/data
COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 5432
CMD ["postgres"]
Not sure which postgres image you are using.
Look at the official postgres image documentation for complete information. It allows the user to specify the environment variables below, and these can easily be overridden at run time.
POSTGRES_PASSWORD
POSTGRES_USER
PGDATA
POSTGRES_DB
POSTGRES_INITDB_ARGS
Environment variables can be set using the three methods below, depending on your case.
Run image: if you are running the docker image directly, include each environment variable in docker run with -e KEY=VALUE. See the docker run documentation for more details.
docker run -e POSTGRES_PASSWORD=secret -e POSTGRES_USER=postgres <other options> image/name
Dockerfile: if you need to set the environment variables in a Dockerfile, specify them as below. See the Dockerfile ENV documentation for more details.
ENV POSTGRES_PASSWORD=secret
ENV POSTGRES_USER=postgres
docker-compose: if you need to set the environment variables in docker-compose.yml, specify them as below, on the postgres service (they configure the postgres image, so they belong there rather than on web). See the compose documentation for more details.
postgres:
  environment:
    - POSTGRES_PASSWORD=secret
    - POSTGRES_USER=postgres
Hope this is useful.
I think the postgres user and password are being set in the entrypoint, around line 23 of the official image's entrypoint script:
https://github.com/docker-library/postgres/blob/e4942cb0f79b61024963dc0ac196375b26fa60dd/9.6/docker-entrypoint.sh
Can you check your entrypoint?
All you need is
docker run -e POSTGRES_PASSWORD=123456 postgres
In the same way you can set the username, using the POSTGRES_USER variable:
docker run -e POSTGRES_USER=xyz postgres
Are you using a -v mounted folder? If so, you need to remove the entire pgdata/ folder you created previously, so that postgres can re-create itself using the new environment variables; the entrypoint only applies the POSTGRES_* variables when the data directory is empty.
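A minimal sketch of that reset (volume and project names assumed; note this permanently deletes the database contents):
docker-compose stop postgres && docker-compose rm -f postgres
docker volume rm <project>_pgdata   # or: rm -rf ./pgdata for a bind-mounted folder
docker-compose up -d postgres
On the next start the data directory is empty, initdb runs again, and the new credentials take effect.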

How to restore a postgres database with fig?

I'm trying to use fig (http://www.fig.sh/) for a django app, and I can't recreate the database from a dump. I try:
fig run db pg_restore -d DBNAME < backup.sql
And get:
socket.error: [Errno 104] Connection reset by peer
But this runs (though I still don't see the tables in the db):
fig run db pg_restore < backup.sql
This is the Dockerfile:
FROM python:3.4
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
ADD backup.sql /code/
RUN pip install -r requirements.txt
RUN pg_restore -d postgres backup.sql
ADD . /code/
And fig.yml:
db:
  image: postgres
  ports:
    - 5432
web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
When you run
fig run db pg_restore -d DBNAME < backup.sql
the postgres server is not running. You've replaced the startup of the daemon with the pg_restore command.
I would suggest doing something like this:
Move backup.sql to dockerfiles/db/backup.sql
Create a dockerfiles/db/Dockerfile
Change your fig.yml to use build for the db instead
Dockerfile
FROM postgres
ADD . /files
WORKDIR /files
RUN /etc/init.d/postgresql start && \
    pg_restore -d DBNAME < backup.sql && \
    /etc/init.d/postgresql stop
fig.yml
db:
  build: dockerfiles/db
Now when you run any fig commands, your database should be ready to go.
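One caveat about restoring during the build: the postgres image declares /var/lib/postgresql/data as a VOLUME, so database files written there by a RUN step are discarded when a container starts. The pattern the official image supports is to drop the dump into /docker-entrypoint-initdb.d/, which the entrypoint runs automatically on first initialization. A sketch, assuming backup.sql is a plain-SQL dump:
FROM postgres
# .sql files in this directory are applied with psql on first start
COPY backup.sql /docker-entrypoint-initdb.d/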