I'm new to Docker. I'm working on a project written by another developer, running on a DigitalOcean Ubuntu 18.04 droplet. It consists of 2 containers (one for the Django app, one for the PostgreSQL database).
I now need to take a backup of the database, and I found a bash file written by the previous programmer:
### Create a database backup.
###
### Usage:
### $ docker-compose -f <environment>.yml (exec |run --rm) postgres backup
set -o errexit
set -o pipefail
set -o nounset
working_dir="$(dirname ${0})"
source "${working_dir}/_sourced/constants.sh"
source "${working_dir}/_sourced/messages.sh"
message_welcome "Backing up the '${POSTGRES_DB}' database..."
if [[ "${POSTGRES_USER}" == "postgres" ]]; then
message_error "Backing up as 'postgres' user is not supported. Assign 'POSTGRES_USER' env with another one and try again."
exit 1
fi
export PGHOST="${POSTGRES_HOST}"
export PGPORT="${POSTGRES_PORT}"
export PGUSER="${POSTGRES_USER}"
export PGPASSWORD="${POSTGRES_PASSWORD}"
export PGDATABASE="${POSTGRES_DB}"
backup_filename="${BACKUP_FILE_PREFIX}_$(date +'%Y_%m_%dT%H_%M_%S').sql.gz"
pg_dump | gzip > "${BACKUP_DIR_PATH}/${backup_filename}"
message_success "'${POSTGRES_DB}' database backup '${backup_filename}' has been created and placed in '${BACKUP_DIR_PATH}'."
My first question is: is that command right? I mean, if I ran:
docker-compose -f production.yml (exec |run --rm) postgres backup
would that create a backup of my database at the configured location?
Second question: can I run this command while the database container is running, or should I run docker-compose down, then run the backup command, then run docker-compose up again?
Sure, you can run that script to back up the database while it is running: pg_dump takes a consistent snapshot from the live server, so there is no need to bring the stack down first. One way to do it is to open a shell in the container with docker-compose exec db /bin/bash (use your actual service name) and then run the script there.
Another way is to run a new postgres container attached to the compose network:
docker run -it --name pgback -v /path/backup/host:/var/lib/postgresql/data --network composeNetwork postgres /bin/bash
This creates a new postgres container attached to the network created by compose, with a bind mount attached, so you can put the script in that container and back the database up onto the mount, keeping the dump outside the container.
Then, whenever you want a backup, simply start the container and run it:
docker start -a -i pgback
You don't need to create another compose file; just copy the script into the container and run it. You could also build a new postgres image with the script baked in and run it from CMD, just to run the container. There are plenty of ways to do it.
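To make the first question concrete: assuming the database service is named postgres in production.yml (as the script's usage header suggests), something like this should create a compressed dump in the backup directory configured in constants.sh, with the database container left running the whole time:
docker-compose -f production.yml exec postgres backup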
I am trying to create multiple PostgreSQL databases using a Dockerfile and create a container from this image.
My sample setup looks like this:
Dockerfile:
FROM postgres:11.8
COPY init.sql /docker-entrypoint-initdb.d
init.sql:
CREATE DATABASE firstdb;
CREATE DATABASE seconddb;
CREATE DATABASE thirddb;
In order to build the Docker image and open a shell in a running container, I run the following commands:
docker build -t postgres:v11.8 .
docker run -it postgres:v11.8 bash
One of the problems I'm facing right now is the error below as soon as I try to connect using the psql -U postgres command:
psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket
"/var/run/postgresql/.s.PGSQL.5432"?
The second issue I have is how to turn the separate CREATE DATABASE lines within init.sql into a single line or a loop.
Thanks, guys!
When you run your image with only the -it flags and bash as the command, that bash replaces the image's default command, so the PostgreSQL server process never actually starts (hence the missing Unix socket). First, run your container as it should run, without overriding the command:
docker container run --name db <your-custom-image>:<tag>
After that, if you want a bash shell inside the container, use docker exec -it with the correct container name (db).
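A minimal sketch of that flow, assuming the image tag from your build command (the POSTGRES_PASSWORD value is just an example; recent official postgres images refuse to start without a password or an explicit auth setting):
docker run -d --name db -e POSTGRES_PASSWORD=example postgres:v11.8
# give initialization a few seconds, then:
docker exec -it db psql -U postgres -c '\l'
The \l listing should show firstdb, seconddb and thirddb once init.sql has been applied.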
I am using a postgis/postgis Docker image to set up a database server for my application.
The database server must have a tablespace created, then a database.
Then, each time another application starts from another container, it runs a Liquibase script that updates the database schema (creating tables, indexes...) when needed.
On a terminal, to prepare the database container, I'm running these commands:
# Run a naked Postgis container
sudo docker run --name ecoemploi-postgis \
  -e POSTGRES_PASSWORD=postgres \
  -d -v /data/comptes-france:/data/comptes-france postgis/postgis

# Send 'bash level' commands to create the directory for the tablespace
sudo docker exec -it ecoemploi-postgis \
  bin/sh -c 'mkdir /tablespace && chown postgres:postgres /tablespace'
Then, to complete my step 1, I have to run SQL statements to create the tablespace from a PostGIS point of view, and create the database with a CREATE DATABASE.
I connect manually to the psql of my container:
sudo docker exec -it ecoemploi-postgis bin/sh \
  -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
And I run these commands manually:
CREATE TABLESPACE data LOCATION '/tablespace';
CREATE DATABASE comptesfrance TABLESPACE data;
exit
But I would like to have a container created from a single Dockerfile that has done all the needed work. The difficulty is that it has to be done in two parts:
One before the container is started (creating directories, granting them user:group).
One after it is started for the first time: declaring the tablespace and creating the database. If I understand the base image correctly, this should happen after its entrypoint, docker-entrypoint.sh, has run?
What is the right way to write a Dockerfile that produces a container with all these steps done?
The PostGIS image "is based on the official postgres image", so it should be able to use the /docker-entrypoint-initdb.d mechanism. Any files you put in that directory will be run the first time the database container is started. The postgis Dockerfile already uses this directory to install the PostGIS extensions into the default database.
That means you can put your build-time setup directly into the Dockerfile, and copy the startup-time script into that directory.
FROM postgis/postgis:12-3.0
RUN mkdir /tablespace && chown postgres:postgres /tablespace
COPY createdb.sql /docker-entrypoint-initdb.d/20-createdb.sql
# Use default ENTRYPOINT/CMD from base image
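If it helps, createdb.sql here can be exactly the two statements from the manual session in the question; every *.sql file in /docker-entrypoint-initdb.d is run by psql on the container's very first start:
CREATE TABLESPACE data LOCATION '/tablespace';
CREATE DATABASE comptesfrance TABLESPACE data;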
For the particular setup you describe, this may not be necessary. Each database runs in an isolated filesystem space and starts with an empty data directory, so there's not a specific need to create an alternate data directory; Docker style is to just run multiple databases if you need isolated storage. Similarly, the base postgres image will create a database for you at first start (named by the POSTGRES_DB environment variable).
In order to run a container, your Dockerfile must be functional and complete.
You can put the queries in a bash file and, on the last line, set an ENTRYPOINT that runs this bash script.
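If you take that route, note that you don't strictly have to override ENTRYPOINT: a bash file dropped into /docker-entrypoint-initdb.d is already executed by the stock entrypoint on first initialization. A sketch of such a file, reusing the SQL from the question (this adapts the suggestion rather than reproducing it literally):
#!/bin/bash
set -e
# Runs once, during first-time initialization, against the bootstrap server
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" \
  -c "CREATE TABLESPACE data LOCATION '/tablespace';" \
  -c "CREATE DATABASE comptesfrance TABLESPACE data;"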
I'm running a dockerized mongo container.
I'd like to create a mongo image with some initialized data.
Any ideas?
A more self-contained approach:
create javascript files that initialize your database
create a derived MongoDB docker image that contains these files
There are many answers that use disposable containers or create volumes and link them, but this seems overly complicated. If you take a look at the mongo docker image's docker-entrypoint.sh, you see that line 206 executes /docker-entrypoint-initdb.d/*.js files on initialization using a syntax: mongo <db> <js-file>. If you create a derived MongoDB docker image that contains your seed data, you can:
have a single docker run command that stands up a mongo with seed data
have data persisted through container stops and starts
reset that data with docker stop, rm, and run commands
easily deploy with runtime schedulers like k8s, mesos, swarm, rancher
This approach is especially well suited to:
POCs that just need some realistic data for display
CI/CD pipelines that need consistent data for black box testing
example deployments for product demos (sales engineers, product owners)
How to:
Create and test your initialization scripts (grooming data as appropriate)
Create a Dockerfile for your derived image that copies your init scripts
FROM mongo:3.4
COPY seed-data.js /docker-entrypoint-initdb.d/
Build your docker image
docker build -t mongo-sample-data:3.4 .
Optionally, push your image to a docker registry for others to use
Run your docker image
docker run \
--name mongo-sample-data \
-p 27017:27017 \
--restart=always \
-e MONGO_INITDB_DATABASE=application \
-d mongo-sample-data:3.4
By default, docker-entrypoint.sh will apply your scripts to the test db; the above run command env var MONGO_INITDB_DATABASE=application will apply these scripts to the application db instead. Alternatively, you could create and switch to different dbs in the js file.
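One quick way to check that the seed data landed in the application db (container and db names as in the run command above):
docker exec mongo-sample-data mongo application --eval "db.getCollectionNames()"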
I have a github repo that does just this - here are the relevant files.
With the latest release of the mongo Docker image, something like this works for me:
FROM mongo
COPY dump /home/dump
COPY mongo_restore.sh /docker-entrypoint-initdb.d/
The mongo restore script (mongo_restore.sh) looks like this:
#!/bin/bash
# Restore from dump
mongorestore --drop --gzip --db "<RESTORE_DB_NAME>" /home/dump
And you can build the image normally:
docker build -t <TAG> .
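And run it the same way you would run the stock image (the container name here is just an example):
docker run -d --name mongo-restored -p 27017:27017 <TAG>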
First create a docker volume
docker volume create --name mongostore
then create your mongo container
docker run -d --name mongo -v mongostore:/data/db mongo:latest
The -v switch here is responsible for mounting the volume mongostore at the /data/db location, which is where mongo saves its data. The volume is persistent (on the host). Even with no containers running you will see your mongostore volume listed by
docker volume ls
You can kill the container and create a new one (same line as above) and the new mongo container will pick up the state of the previous container.
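For example, reusing the run command above:
docker rm -f mongo
docker run -d --name mongo -v mongostore:/data/db mongo:latest
The new container starts with whatever data the previous one left in mongostore.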
Initializing the volume
Mongo initializes a new database if none is present. This is responsible for creating the initial data in the mongostore. Let's say that you want to create a brand new environment using a pre-seeded database. The problem becomes how to transfer data from your local environment (for instance) to the volume before creating the mongo container. I'll list two cases.
Local environment
You're using either Docker for Mac/Windows or Docker Toolbox. In this case you can easily mount a local drive to a temporary container to initialize the volume. Eg:
docker run --rm -v /Users/myname/work/mongodb:/incoming \
-v mongostore:/data alpine:3.4 cp -rp /incoming/* /data
This doesn't work for cloud storage. In that case you need to copy the files.
Remote environment (AWS, GCP, Azure, ...)
It's a good idea to tar/compress things up to speed the upload.
tar czf mongodata.tar.gz /Users/myname/work/mongodb
Then create a temporary container to untar and copy the files to the mongostore. The tail -f /dev/null just makes sure that the container doesn't exit.
docker run -d --name temp -v mongostore:/data alpine:3.4 tail -f /dev/null
Copy files to it
docker cp mongodata.tar.gz temp:.
Untar and move to the volume
docker exec temp sh -c 'tar xzf mongodata.tar.gz && cp -rp mongodb/* /data'
Cleanup
docker rm temp
You could also copy the files to the remote host and mount them from there, but I tend to avoid interacting with the remote host at all.
Disclaimer. I'm writing this from memory (no testing).
Here is how it's done with docker-compose. I use an older mongo image, but docker-entrypoint.sh accepts *.js and *.sh files for all versions of the image.
docker-compose.yaml:
version: '3'
services:
  mongo:
    container_name: mongo
    image: mongo:3.2.12
    ports:
      - "27017:27017"
    volumes:
      - mongo-data:/data/db:cached
      - ./deploy/local/mongo_fixtures:/fixtures
      - ./deploy/local/mongo_import.sh:/docker-entrypoint-initdb.d/mongo_import.sh

volumes:
  mongo-data:
    driver: local
mongo_import.sh:
#!/bin/bash
# Import from fixtures
mongoimport --db wcm-local --collection clients --file /fixtures/properties.json && \
mongoimport --db wcm-local --collection configs --file /fixtures/configs.json
And my mongo_fixtures JSON files are the product of mongoexport and have the following format:
{"_id":"some_id","field":"value"}
{"_id":"another_id","field":"value"}
This should help those using this without a custom Dockerfile, just using the image straight away with the right entrypoint set up right in your docker-compose file. Cheers!
I've found a way that is somewhat easier for me.
Say you have a database in a docker container on your server, and you want to back it up, here’s what you could do.
What might differ between your setup and mine is the name of the mongo Docker container (mongodb here, the default when using elastic_spence). So make sure you start your container with --name mongodb to match the following steps:
$ docker run \
--rm \
--link mongodb:mongo \
-v /root:/backup \
mongo \
bash -c 'mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR'
And to restore the database from a dump.
$ docker run \
--rm \
--link mongodb:mongo \
-v /root:/backup \
mongo \
bash -c 'mongorestore /backup --host $MONGO_PORT_27017_TCP_ADDR'
If you need to download the dump from your server, you can use scp:
$ scp -r root@IP:/root/backup ./backup
Or upload it:
$ scp -r ./backup root@IP:/root/backup
P.S: Original source by Tim Brandin available at https://blog.studiointeract.com/mongodump-and-mongorestore-for-mongodb-in-a-docker-container-8ad0eb747c62
Thank you!
Previously, to access a file in a running dokku instance I would run:
docker ps to get the container ID followed by
ls /var/lib/docker/aufs/diff/<container-id>/app/...
note: I'm just using 'ls' as an example command. I ultimately want to reference a particular file.
This must have changed, as the container ID is no longer accessible via this path. There are loads of dirs in that folder, but none that match any of the running containers.
It seems like mounting a volume for the entire container would be overkill in this scenario. I know I can access the file system using dokku run project-name ls, and also docker exec <container-id> ls, but neither of those will satisfy my use case.
To explain a bit more fully, in my dokku project, I have some .sql files that I'm using to bootstrap my postgres DB. These files get pushed up via git push with the rest of the project.
I'm hoping to use the postgres dokku plugin to run the following:
dokku postgres:connect db-name < file-name.sql
This is what I had previously been doing using:
dokku postgres:connect db-name < /var/lib/docker/aufs/diff/<container-id>/app/file-name.sql but that no longer works.
Any thoughts on this? Am I going about this all wrong?
Thanks very much for any thoughts.
Here's an example:
dokku apps:list
=====> My Apps
DummyApp
dokku enter DummyApp
This drops you into a bash shell in the DummyApp container.
Never rely on the /var/lib/docker filesystem paths: most of the data stored there depends on the storage driver currently in use, so it is subject to change.
cat the file from an existing container
docker exec <container> cat file.sql | dokku postgres:connect db-name
cat the file from an image
docker run --rm <image> cat file.sql | dokku postgres:connect db-name
Copy file from an existing container
docker cp <container>:file.sql file.sql
dokku postgres:connect db-name < file.sql
I'm trying to run PostgreSQL in a Docker container, but of course I need my database data to be persistent, so I'm trying to use a data-only container that exposes a volume to store the database in.
So my data container has this Dockerfile:
FROM ubuntu
# Create data directory
RUN mkdir -p /data/postgresql
# Create /data volume
VOLUME /data/postgresql
Which I run:
docker run --name postgresql_data lyapun/postgresql_data true
In my postgresql.conf I set:
data_directory = '/data/postgresql'
Then I run my postgresql container like this:
docker run -d --name postgre --volumes-from postgresql_data lyapun/postgresql
And I got:
2014-07-04 07:45:57 GMT FATAL: data directory "/data/postgresql" has wrong ownership
2014-07-04 07:45:57 GMT HINT: The server must be started by the user that owns the data directory.
How do I deal with this issue? I googled a lot for information about using postgresql with docker volumes, but I didn't find anything.
Thanks!
OK, it seems like I found a workaround for this issue.
Instead of running postgres like this:
CMD ["/usr/lib/postgresql/9.1/bin/postgres", "-D", "/var/lib/postgresql/9.1/main", "-c", "config_file=/etc/postgresql/9.1/main/postgresql.conf"]
I wrote a bash script (run.sh):
#!/bin/bash
# Fix ownership and permissions on the mounted data directory, then start postgres as the postgres user
chown -Rf postgres:postgres /data/postgresql
chmod -R 700 /data/postgresql
sudo -u postgres /usr/lib/postgresql/9.1/bin/postgres -D /var/lib/postgresql/9.1/main -c config_file=/etc/postgresql/9.1/main/postgresql.conf
And replaced the CMD in the postgresql image with:
CMD ["bash", "/run.sh"]
It works!
You have to set the ownership of the directory /data/postgresql to the same user that runs your postgresql binary. For example, on Ubuntu it is usually the postgres user.
Then you have to use this command:
chown postgres:postgres /data/postgresql
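If the container isn't running (because it dies on that FATAL error), one way to apply the chown is from a throwaway container that mounts the same volumes. This is only a sketch; it assumes the lyapun/postgresql image contains the postgres user and lets you override its command:
docker run --rm --volumes-from postgresql_data lyapun/postgresql chown -R postgres:postgres /data/postgresql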
A better way to solve that issue, assuming your postgres image is named "postgres" and that your backup is ./backup.tar:
First, add this to your postgres Dockerfile:
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
Then run:
docker run -it --name postgres -v $(pwd):/db postgres sh -c "tar xvf /db/backup.tar --no-overwrite-dir" && \
docker run -it --name data --volumes-from postgres busybox true && \
docker rm postgres && \
docker run -it --name postgres --volumes-from=data postgres
You don't have permission issues since the archive is extracted by the postgres user of your postgres image, so it is the owner of the extracted files.
You can then back up your data using the data container. The advantage of this solution is that you don't have to chmod/chown every time you run the image.
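For the backup itself, a sketch of the reverse of the restore above (the paths mirror the VOLUME list, and the archive lands in the current host directory):
docker run --rm --volumes-from data -v $(pwd):/db postgres tar cvf /db/backup.tar /etc/postgresql /var/log/postgresql /var/lib/postgresql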
This type of error is quite common when you bind-mount an NTFS directory into your Docker container. NTFS directories don't support ext3 file and directory access control.
The only way to make it work is to mount a directory from an ext3 drive into your container.
I got a bit desperate when I was playing around with Apache/PHP containers, linking the www folder. After I switched to linking files that reside on an ext3 filesystem, the problem disappeared.
I published a short Docker tutorial on youtube, may it helps to understand this problem: https://www.youtube.com/watch?v=eS9O05TTFjM