I want to create a database for autotests in Kubernetes. I want to build an image (postg-my-app-v1) from the postgres image, add my changelog files, and include Liquibase. When I deploy this image with Helm I just want to specify the container (postg-my-app-v1), and it should start a pod with the database and create the tables from the Liquibase changelog.
For now my Dockerfile looks like this:
FROM postgres
ADD /changelog /liquibase/changelog
I don't understand how to add Liquibase to this image. Or must I use Docker Compose, or a Helm lifecycle postStart hook for Liquibase?
FROM docker-proxy.tcsbank.ru/liquibase/liquibase:3.10.x AS Liquibase
FROM docker-proxy.tcsbank.ru/postgres:9.6.12 AS Postgres
ENV POSTGRES_DB bpm
ENV POSTGRES_USER priest
ENV POSTGRES_PASSWORD Bpm_123
COPY --from=Liquibase /liquibase /liquibase
ENV JAVA_HOME /usr/local/openjdk-11
COPY --from=Liquibase $JAVA_HOME $JAVA_HOME
ENV LIQUIBASE_CHANGELOG /liquibase/changelog/
COPY /changelog $LIQUIBASE_CHANGELOG
COPY liquibase.sh /usr/local/bin/
COPY main.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/liquibase.sh && \
chmod +x /usr/local/bin/main.sh && \
ln -s /usr/local/bin/main.sh / && \
ln -s /usr/local/bin/liquibase.sh /
ENTRYPOINT ["main.sh"]
main.sh
#!/bin/bash
# Run the Liquibase update in the background, prefixing its output so it is easy to spot in the logs
bash liquibase.sh | awk '{print "liquiBase script: " $0}' &
# Start PostgreSQL in the foreground via the stock entrypoint
bash docker-entrypoint.sh postgres
liquibase.sh
#!/bin/bash
# Wait up to 120 seconds for PostgreSQL to start accepting connections
for COUNTER in {1..120}
do
    sleep 1s
    echo "check db $COUNTER times"
    if pg_isready
    then
        break
    fi
done
echo "try execute liquibase"
bash /liquibase/liquibase --url="jdbc:postgresql://localhost:5432/$POSTGRES_DB" --username="$POSTGRES_USER" --password="$POSTGRES_PASSWORD" --changeLogFile=/liquibase/changelog/changelog.xml update
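To sanity-check the combined image locally before wiring it into a Helm chart, a quick build and run is usually enough (a rough sketch; the image tag and the "liquiBase script:" prefix come from the files above, the container name is arbitrary):
docker build -t postg-my-app-v1 .
docker run --rm --name postg-test -p 5432:5432 postg-my-app-v1
# in another terminal: the Liquibase output is prefixed, so it is easy to spot
docker logs postg-test 2>&1 | grep "liquiBase script:"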
I am a beginner with both Docker and PostgreSQL.
How do I upgrade Docker PostgreSQL 9.5 to 9.6 without losing my current database?
FYI: I'm using Ubuntu 14 and Docker 17.09.
Thanks in advance.
To preserve data across Docker containers, a volume is required. A volume is mounted into the container's file system and is preserved when the container is killed. It sounds, though, like your container was created without a volume attached. The best way to recover that data is to copy the container's data folder to the host file system, then create a container from the new image and copy the data directory into that running container's data directory (in this case pgdata:/var/lib/postgresql/data):
docker cp [containerID]:/var/lib/postgresql/data /home/user/data/data-dir/
docker stop [containerID]
docker run -it --rm -v pgdata:/var/lib/postgresql/data postgres
docker cp /home/user/data/data-dir [containerID]:/var/lib/postgresql/data
In case that doesn't work, I would just dump the current databases and restore them into the new container.
If you do not store the database files on external storage (outside of the container), then I know only one way to preserve your database:
1) Back up the database
2) Shut down the postgres 9.5 container
3) Run the new postgres 9.6 container
4) Restore the backup
You can use pg_dumpall to back up the full database cluster:
pg_dumpall > backupfile
The resulting dump can be restored with psql:
psql -f backupfile postgres
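As a rough sketch of those four steps with Docker commands (the container names old-pg and new-pg and the postgres superuser are placeholders, adjust them to your setup):
# 1) Back up the whole 9.5 cluster to a file on the host
docker exec old-pg pg_dumpall -U postgres > backupfile
# 2) Shut down the old container
docker stop old-pg
# 3) Run a new 9.6 container with a fresh data directory
docker run -d --name new-pg -e POSTGRES_PASSWORD=postgres postgres:9.6
# 4) Restore the backup into the new container
cat backupfile | docker exec -i new-pg psql -U postgres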
I know it's been some time since you asked, but I hope my solution will help future Googlers :)
I've tried to create a solution that is as stateless as possible, to be compatible with CI and upgrade scripts.
The script:
Backs up the whole pg instance using pg_dumpall.
Uses the dump to create the new instance using initdb and psql -f.
The only requirement is a volume with some existing pg_data directory in it.
docker stop lms_db_1
DB_NAME=lms
DB_USERNAME=lmsweb
DB_PASSWORD=123456
CURRENT_DATE=$(date +%d-%m-%Y_%H_%M_%S)
MOUNT_PATH=/pg_data
PG_OLD_DATA=/pg_data/11/data
PG_NEW_DATA=/pg_data/13/data
BACKUP_FILENAME=v11.$CURRENT_DATE.sql
BACKUP_PATH=$MOUNT_PATH/backup/$BACKUP_FILENAME
BACKUP_DIR=$(dirname "$BACKUP_PATH")
VOLUME_NAME=lms_db-data-volume
# Step 1: Create a backup
docker run --rm -v $VOLUME_NAME:$MOUNT_PATH \
-e PGDATA=$PG_OLD_DATA \
-e POSTGRES_DB="${DB_NAME:-db}" \
-e POSTGRES_USER="${DB_USERNAME:-postgres}" \
-e POSTGRES_PASSWORD="${DB_PASSWORD:-postgres}" \
postgres:11-alpine \
/bin/bash -c "chown -R postgres:postgres $MOUNT_PATH \
&& su - postgres /bin/bash -c \"/usr/local/bin/pg_ctl -D \\\"\$PGDATA\\\" start\" \
&& mkdir -p \"$BACKUP_DIR\" \
&& pg_dumpall -U $DB_USERNAME -f \"$BACKUP_PATH\" \
&& chown postgres:postgres \"$BACKUP_PATH\""
# Step 2: Create a new database from the backup
docker run --rm -v $VOLUME_NAME:$MOUNT_PATH \
-e PGDATA=$PG_NEW_DATA \
-e POSTGRES_DB="${DB_NAME:-db}" \
-e POSTGRES_USER="${DB_USERNAME:-postgres}" \
-e POSTGRES_PASSWORD="${DB_PASSWORD:-postgres}" \
postgres:13-alpine \
/bin/bash -c "ls -la \"$BACKUP_DIR\" \
&& mkdir -p \"\$PGDATA\" \
&& chown -R postgres:postgres \"\$PGDATA\" \
&& rm -rf $PG_NEW_DATA/* \
&& su - postgres -c \"initdb -D \\\"\$PGDATA\\\"\" \
&& su - postgres -c \"pg_ctl -D \\\"\$PGDATA\\\" -l logfile start\" \
&& su - postgres -c \"psql -f $BACKUP_PATH\" \
&& printf \"\\\nhost all all all md5\\\n\" >> \"\$PGDATA/pg_hba.conf\" \
"
When I run the command psql --version within the railsApp container I get 9.4.12, and when I run the same within the postgres container I get 9.6.2. How can I get the versions to match?
I am getting the following error when I try to run a migration on the Rails app, which does a pg_dump SQL import:
pg_dump: server version: 9.6.2; pg_dump version: 9.4.12
pg_dump: aborting because of server version mismatch
rails aborted!
Here's my docker-compose.yml file:
version: "2.1"
services:
railsApp:
build:
context: ./
ports:
- "3000:3000"
links:
- postgres
volumes:
- .:/app
postgres:
image: postgres:9.6
ports:
- "5432:5432"
volumes:
- ./.postgres:/var/lib/postgresql
The Dockerfile:
FROM ruby:2.3.3
# setup /app as our working directory
RUN mkdir /app
WORKDIR /app
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Set debconf to run non-interactively
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
# Install base dependencies
RUN apt-get update && apt-get install -y -q --no-install-recommends \
apt-transport-https \
build-essential \
ca-certificates \
curl \
git \
libssl-dev \
python \
rsync \
software-properties-common \
wget \
postgresql-client \
graphicsmagick \
&& rm -rf /var/lib/apt/lists/*
# Install node and npm with nvm
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.33.0/install.sh | bash
ENV NVM_DIR=/root/.nvm
ENV NODE_VERSION v7.2.1
ENV NODE_PATH $NVM_DIR/versions/node/$NODE_VERSION
ENV PATH $NODE_PATH/bin:./node_modules/.bin:$PATH
RUN source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
# Install our ruby dependencies
ADD Gemfile Gemfile.lock /app/
RUN bundle install
# copy the rest of our code over
ADD . /app
ENV RAILS_ENV development
ENV SECRET_KEY_BASE a6bdc5f788624f00b68ff82456d94bf81bb50c2e114b2be19af2e6a9b76f9307b11d05af4093395b0471c4141b3cd638356f888e90080f8ae60710f992beba8f
# Expose port 3000 to the Docker host, so we can access it from the outside.
EXPOSE 3000
# Set the default command to run our server on port 3000
CMD ["rails", "server", "-p", "3000", "-b", "0.0.0.0"]
I had the same issue; I used an alternative way to take a dump.
First I accessed the container's terminal, ran pg_dump inside Docker, and copied the file from the container to the host.
Below are the commands
docker exec -it <container-id> /bin/bash # accessing docker terminal
pg_dump > ~/dump # taking dump inside the docker
docker cp <container-id>:/root/dump ~/dump #copying the dump files to host
Hope the above solution helps.
The easiest approach is to use the matching postgres version in the docker-compose.yml. Change:
postgres:
  image: postgres:9.6
To:
postgres:
  image: postgres:9.4.2
All available versions are listed on the postgres image page on Docker Hub.
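After changing the tag and recreating the containers, you can confirm that both sides now agree (a small check on my part; service names as in the compose file above, with the stack up):
docker-compose exec railsApp psql --version
docker-compose exec postgres psql --version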
So I was following this tutorial:
https://realpython.com/blog/python/django-development-with-docker-compose-and-machine/
I have everything up and running, however there are a few things going on that I'm not able to follow or understand.
In the main docker-compose.yml we have:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
    - redis:redis
  volumes:
    - /usr/src/app
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
postgres:
  restart: always
  image: postgres:latest
  ports:
    - "5432:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data/
You will notice there is an env_file containing:
DB_NAME=postgres
DB_USER=postgres
DB_PASS=postgres
My question is: when are the postgres user and password being set? If I run this docker-compose setup, everything works, meaning the web app can pass credentials to the postgres database and establish a connection. I'm not able to follow, however, where those credentials are being set in the first place.
I was assuming that in the base postgres Dockerfile there would be some instruction to set the database name, username and password, but I do not see it there. Here is a copy of the base postgres Dockerfile below.
# vim:set ft=dockerfile:
FROM debian:jessie
# explicitly set user/group IDs
RUN groupadd -r postgres --gid=999 && useradd -r -g postgres --uid=999 postgres
# grab gosu for easy step-down from root
ENV GOSU_VERSION 1.7
RUN set -x \
&& apt-get update && apt-get install -y --no-install-recommends ca-certificates wget && rm -rf /var/lib/apt/lists/* \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture)" \
&& wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture).asc" \
&& export GNUPGHOME="$(mktemp -d)" \
&& gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
&& gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu \
&& rm -r "$GNUPGHOME" /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu \
&& gosu nobody true \
&& apt-get purge -y --auto-remove ca-certificates wget
# make the "en_US.UTF-8" locale so postgres will be utf-8 enabled by default
RUN apt-get update && apt-get install -y locales && rm -rf /var/lib/apt/lists/* \
&& localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
ENV LANG en_US.utf8
RUN mkdir /docker-entrypoint-initdb.d
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
ENV PG_MAJOR 9.6
ENV PG_VERSION 9.6.1-1.pgdg80+1
RUN echo 'deb http://apt.postgresql.org/pub/repos/apt/ jessie-pgdg main' $PG_MAJOR > /etc/apt/sources.list.d/pgdg.list
RUN apt-get update \
&& apt-get install -y postgresql-common \
&& sed -ri 's/#(create_main_cluster) .*$/\1 = false/' /etc/postgresql-common/createcluster.conf \
&& apt-get install -y \
postgresql-$PG_MAJOR=$PG_VERSION \
postgresql-contrib-$PG_MAJOR=$PG_VERSION \
&& rm -rf /var/lib/apt/lists/*
# make the sample config easier to munge (and "correct by default")
RUN mv -v /usr/share/postgresql/$PG_MAJOR/postgresql.conf.sample /usr/share/postgresql/ \
&& ln -sv ../postgresql.conf.sample /usr/share/postgresql/$PG_MAJOR/ \
&& sed -ri "s!^#?(listen_addresses)\s*=\s*\S+.*!\1 = '*'!" /usr/share/postgresql/postgresql.conf.sample
RUN mkdir -p /var/run/postgresql && chown -R postgres /var/run/postgresql
ENV PATH /usr/lib/postgresql/$PG_MAJOR/bin:$PATH
ENV PGDATA /var/lib/postgresql/data
VOLUME /var/lib/postgresql/data
COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 5432
CMD ["postgres"]
Not sure which postgres image you are using.
Look at the official postgres image for complete information. It lets the user specify the environment variables below, and these can easily be overridden at run time:
POSTGRES_PASSWORD
POSTGRES_USER
PGDATA
POSTGRES_DB
POSTGRES_INITDB_ARGS
Environment variables can be set using one of the three methods below, depending on your case.
Run image: If running the Docker image directly, include the environment variables in docker run with -e KEY=VALUE. Please refer to the documentation for more details.
docker run -e POSTGRES_PASSWORD=secret -e POSTGRES_USER=postgres <other options> image/name
Dockerfile: If you need to specify the environment variables in a Dockerfile, specify them as shown below. Please refer to the documentation for more details.
ENV POSTGRES_PASSWORD=secret
ENV POSTGRES_USER=postgres
docker-compose: If you need to specify the environment variables in docker-compose.yml, specify them as below. Please refer to the documentation for more details.
postgres:
  environment:
    - POSTGRES_PASSWORD=secret
    - POSTGRES_USER=postgres
Hope this is useful.
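A quick way to see this in action (just a sketch; the container name, user and password here are arbitrary) is to start a container with the variables set and then connect as the user that the entrypoint creates:
docker run -d --name some-postgres -e POSTGRES_USER=myuser -e POSTGRES_PASSWORD=secret postgres:9.6
# give the entrypoint a few seconds to finish initdb, then list the roles
docker exec -it some-postgres psql -U myuser -c '\du'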
I think the postgres user and password are being set by the entrypoint, as on line 23 of the official image's entrypoint:
https://github.com/docker-library/postgres/blob/e4942cb0f79b61024963dc0ac196375b26fa60dd/9.6/docker-entrypoint.sh
Can you check your entrypoint?
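If you want to see that part without browsing the repo, you can dump the entrypoint straight out of the image and search for the variables it consumes (just a suggestion, not part of the original answer):
docker run --rm --entrypoint cat postgres:9.6 /docker-entrypoint.sh | grep -n POSTGRES_USER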
All you need is
docker run -e POSTGRES_PASSWORD=123456 postgres
In the same way you can set up the username like below:
docker run -e POSTGRES_USER=xyz postgres
Are you using a -v mounted folder? If so, you need to remove the entire pgdata/ folder you created previously, so that postgres can re-initialize itself using the new environment variables.
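For example, assuming the old data lives in a named volume called pgdata (the names here are placeholders), recreating the database with fresh credentials could look like this:
docker rm -f postgres           # remove the old container
docker volume rm pgdata         # or: rm -rf ./pgdata for a bind-mounted host folder
docker run -d --name postgres -e POSTGRES_USER=xyz -e POSTGRES_PASSWORD=123456 -v pgdata:/var/lib/postgresql/data postgres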
I've really been struggling over the past few days trying to set up some Docker containers and shell scripts to create an environment for my application to run in.
The long and short of it is that I have a web server which requires a database to operate. My aim is to have end users unzip the content onto their Docker machine, run a build script (which just builds the relevant Docker images), then run a OneTime.sh script (which creates the volumes and databases necessary); during this script, they are prompted for the user name and password they would like for the superuser of the database.
The problem I'm having is getting those values to the docker image. Here is my script:
# Create the volumes for the data backend database.
docker volume create --name psql-data-etc
docker volume create --name psql-data-log
docker volume create --name psql-data-lib
# Create data store database
echo -e "\n${TITLE}[Data Store Database]${NC}"
docker run -v psql-data-etc:/etc/postgresql -v psql-data-log:/var/log/postgresql -v psql-data-lib:/var/lib/postgresql -p 9001:5432 -P --name psql-data-onetime postgres-setup
# Close containers
docker stop psql-data-onetime
docker rm psql-data-onetime
docker stop psql-transactions-onetime
docker rm psql-transactions-onetime
And here is the Dockerfile:
FROM ubuntu
#Required environment variables: USERNAME, PASSWORD, DBNAME
# Add the PostgreSQL PGP key to verify their Debian packages.
# It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
# Add PostgreSQL's repository. It contains the most recent stable release
# of PostgreSQL, ``9.3``.
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" > /etc/apt/sources.list.d/pgdg.list
# Install ``python-software-properties``, ``software-properties-common`` and PostgreSQL 9.3
# There are some warnings (in red) that show up during the build. You can hide
# them by prefixing each apt-get statement with DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y python-software-properties software-properties-common postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3
# Note: The official Debian and Ubuntu images automatically ``apt-get clean``
# after each ``apt-get``
# Run the rest of the commands as the ``postgres`` user created by the ``postgres-9.3`` package when it was ``apt-get installed``
USER postgres
# Complete configuration
USER root
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
# Expose the PostgreSQL port
EXPOSE 5432
# Add VOLUMEs to allow backup of config, logs and databases
RUN mkdir -p /var/run/postgresql && chown -R postgres /var/run/postgresql
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
# Run setup script
ADD Setup.sh /
CMD ["sh", "Setup.sh"]
The script 'Setup.sh' is the following:
echo -n " User name: "
read user
echo -n " Password: "
read password
echo -n " Database Name: "
read dbname
/etc/init.d/postgresql start
/usr/lib/postgresql/9.3/bin/psql --command "CREATE USER $user WITH SUPERUSER PASSWORD '$password';"
/usr/lib/postgresql/9.3/bin/createdb -O $user $dbname
exit
Why doesn't this work? (I don't get prompted to enter the text, and it throws an error that the parameters are bad.) What is the proper way to do something like this? It feels like it's probably a pretty common problem to solve, but I cannot for the life of me find any non-convoluted examples of this behaviour.
The main purpose of this is to make life easier for the end user, so if I could just prompt them for the user name, password, and dbname (plus call the correct scripts), that would be ideal.
EDIT:
After running the log file looks like this:
User name:
Password:
Database Name:
Usage: /etc/init.d/postgresql {start|stop|restart|reload|force-reload|status} [version ..]
EDIT 2:
After updating to CMD ["sh", "-x", "Setup.sh"]
I get:
echo -n User name:
+read user
:bad variable nameuser
echo -n Password:
+read password
:bad variable namepassword
echo -n Database Name:
+read dbname
:bad variable dbname
I am using the official postgresql Docker image (version 9.4). I have extended the Dockerfile so that I can alter settings in postgresql.conf etc. using a bash script. The script is added and run at the entrypoint successfully for a single sed command, but when I put in 2 or more sed commands I get the following error:
/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/config.sh
: No such file or directoryread
/var/lib/postgresql/data/postgresql.conf
I am trying on Windows 10, in combination with Vagrant and VirtualBox, using NFS file system on shared folders, via the vagrant-winnfsd plugin.
Why is this happening? How can I alter my bash script in order to work with more configuration settings? Is there a better way?
Dockerfile:
FROM postgres:9.4
RUN echo "Europe/Athens" > /etc/timezone \
&& dpkg-reconfigure -f noninteractive tzdata
RUN localedef -i el_GR -c -f UTF-8 -A /usr/share/locale/locale.alias el_GR.UTF-8
ADD config.sh /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/config.sh
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
config.sh:
#!/bin/bash
sed -i -e"s/^#logging_collector = off.*$/logging_collector = on/" /var/lib/postgresql/data/postgresql.conf
sed -i -e"s/^max_connections = 100.*$/max_connections = 1000/" /var/lib/postgresql/data/postgresql.conf
database.yml
postgres:
  container_name: postgres-9.4
  image: ***/postgres-9.4
  volumes_from:
    - postgres_data
  ports:
    - 5432:5432
  environment:
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=password
    - POSTGRES_DB=database
    - USERMAP_UID=999
    - USERMAP_GID=999
postgres_data:
  container_name: postgres_data
  image: ***/postgres-9.4
  volumes:
    - ./services/postgres:/etc/postgresql
    - ./services/postgres:/var/lib/postgresql
    - ./services/postgres/logs:/var/log/postgresql
  command: "true"
You might want to try using a RUN statement to execute your bash script, or just run sed directly with both commands combined by a semicolon:
RUN sed -i -e 's/^#\(logging_collector = \).*/\1on/; s/^\(max_connections = \).*/\11000/' \
    /var/lib/postgresql/data/postgresql.conf
A more scalable solution would be to put the sed program in an external file, then use these statements:
ADD postgres-edit.sed /var/local
RUN sed -i -f /var/local/postgres-edit.sed /var/lib/postgresql/data/postgresql.conf
postgres-edit.sed:
# sed script to edit postgresql configuration
s/^#\(logging_collector = \).*/\1on/
s/^\(max_connections = \).*/\11000/
Seems like a duplicate of How to customize the configuration file of the official PostgreSQL docker image?.
Copy-paste of my answer at https://stackoverflow.com/a/40598124/385548.
Inject custom postgresql.conf into postgres Docker container
The default postgresql.conf file lives within the PGDATA dir (/var/lib/postgresql/data), which makes things more complicated, especially when running the postgres container for the first time, since the docker-entrypoint.sh wrapper invokes the initdb step to initialize the PGDATA dir.
To customize the PostgreSQL configuration in Docker consistently, I suggest using the config_file postgres option together with Docker volumes, like this:
Production database (PGDATA dir as Persistent Volume)
docker run -d \
-v $CUSTOM_CONFIG:/etc/postgresql.conf \
-v $CUSTOM_DATADIR:/var/lib/postgresql/data \
-e POSTGRES_USER=postgres \
-p 5432:5432 \
--name postgres \
postgres:9.6 postgres -c config_file=/etc/postgresql.conf
Testing database (PGDATA dir will be discarded after docker rm)
docker run -d \
-v $CUSTOM_CONFIG:/etc/postgresql.conf \
-e POSTGRES_USER=postgres \
--name postgres \
postgres:9.6 postgres -c config_file=/etc/postgresql.conf
Debugging
Remove the -d (detach) option from the docker run command to see the server logs directly.
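Alternatively (a small addition, not from the original answer), if the container is already running detached you can follow its logs with:
docker logs -f postgres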
Connect to the postgres server with the psql client and query the configuration:
docker run -it --rm --link postgres:postgres postgres:9.6 sh -c 'exec psql -h $POSTGRES_PORT_5432_TCP_ADDR -p $POSTGRES_PORT_5432_TCP_PORT -U postgres'
psql (9.6.0)
Type "help" for help.
postgres=# SHOW all;
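You can also check individual settings, for example to confirm that the custom config file was actually picked up (just a suggestion to round things off):
postgres=# SHOW config_file;
postgres=# SHOW max_connections;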