I have deployed a Mongo image in a Docker container via Docker Cloud. It is linked to a Meteor app. Is there any way to back up the data on the container?
Create another Docker container that runs a backup script on a cron schedule and stores the dump on a shared volume.
Also see Cron containers for docker - how do they actually work?
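A rough sketch of that idea, assuming the Mongo container is named mongo and is reachable from the backup container (via a link or a shared network), and that a host directory such as /srv/backups is mounted in for the output. The container name, paths, and image tag here are placeholders, and a sleep loop stands in for cron to keep the sketch short:
# backup.sh - runs inside the backup container; the official mongo image already includes mongodump
mongodump --host mongo --out /backups/dump
tar -czf /backups/dump-$(date +%F-%H%M).tar.gz -C /backups dump
rm -rf /backups/dump
# start the backup container
docker run -d --name mongo-backup \
  --link mongo:mongo \
  -v /srv/backups:/backups \
  -v "$(pwd)/backup.sh":/backup.sh:ro \
  mongo:3.4 \
  sh -c 'while true; do sh /backup.sh; sleep 86400; done'
If both containers run on the same user-defined network instead, drop --link and mongodump can reach the database by container name.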
I have an issue dumping data from a mongo container running in a swarm. I can't use docker run inside the swarm, and I can't connect another container to run mongodump, because the main network is not attachable.
I googled this issue and found only solutions based on docker-compose --link, which doesn't work in swarm mode.
My plan was:
Run another mongo container with the command mongodump --host main_mongo_container --out some_volume.
Compress the dump into a tar archive.
Upload the dump to S3.
Run the script via cron.
I don't have enough experience to solve this myself. Has anyone had experience automating mongo data dumps from a swarm container to S3?
Many thanks in advance!
Why not run a swarm service that performs an hourly backup? You can then automate uploading it wherever you need with a script, or just store it on an EBS volume. Here's a simple example using DigitalOcean block storage.
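A minimal sketch of that approach, assuming the stack's overlay network is named mongo_net, the mongo service is reachable as mongo, and the backup image bundles mongodump plus the AWS CLI. The image name, bucket, and network are placeholders:
# backup.sh, baked into the backup image
mongodump --host mongo --out /tmp/dump
tar -czf /tmp/dump.tar.gz -C /tmp dump
aws s3 cp /tmp/dump.tar.gz s3://my-backup-bucket/mongo/dump-$(date +%F-%H%M).tar.gz
rm -rf /tmp/dump /tmp/dump.tar.gz
# run it as a service on the same overlay network as the mongo service
docker service create --name mongo-backup \
  --network mongo_net \
  --env AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  --env AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  --env AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
  my-registry/mongo-backup \
  sh -c 'while true; do sh /backup.sh; sleep 3600; done'
Because it runs as a service, it joins the stack's overlay network even when that network is not attachable, which is exactly what blocks the plain docker run approach.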
I have a container called "postgres", built with a plain docker command, that has PostgreSQL configured inside it. I also have a docker-compose setup with two services - "api" and "nginx".
How can I add the "postgres" container to my existing docker-compose setup as a service, without rebuilding? The PostgreSQL database is configured manually and filled with data, so rebuilding is a really, really bad option.
I went through the docker-compose documentation, but sadly found no way to do this without a rebuild.
Unfortunately this is not possible.
You don't reference containers in docker-compose; you reference images.
You need to create a volume and/or bind mount to keep your database data.
This is because containers do not persist data on their own: if you filled one with data without a bind mount or volume attached, you will lose everything once the container is removed.
Recommendation:
docker cp
docker cp copies files between a container and the host. https://docs.docker.com/engine/reference/commandline/container_cp/
Create a folder to hold all your PostgreSQL data (e.g. /home/user/postgre_data/);
Copy the contents of your PostgreSQL container's data directory into this folder (see the Docker Hub postgres page for the data directory location);
Run a new PostgreSQL container (same version) with a bind mount pointing to the new folder;
This will preserve all your data, and you will then be able to use it as a volume or bind mount in docker-compose (see the example commands after the references below).
Reference of docker-compose volumes: https://docs.docker.com/compose/compose-file/#volumes
Reference of postgres docker image: https://hub.docker.com/_/postgres/
Reference of volumes and bind mounts: https://docs.docker.com/storage/bind-mounts/#choosing-the--v-or---mount-flag
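A rough sketch of those steps, assuming the existing container is called postgres, its data lives in the default /var/lib/postgresql/data, and postgres:12 matches the version inside the container. Adjust names, paths, and version to your setup:
# 1. create the host folder and copy the data directory out of the existing container
mkdir -p /home/user/postgre_data
docker cp postgres:/var/lib/postgresql/data /home/user/postgre_data/
# 2. run a new container of the same version with a bind mount pointing at that copy
#    (file ownership may need adjusting, since the postgres image runs as its own user)
docker run -d --name postgres_new \
  -v /home/user/postgre_data/data:/var/lib/postgresql/data \
  postgres:12
The same bind mount can then be declared under the service's volumes: key in docker-compose.yml.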
You can save this container as a new image using docker container commit and use that newly created image in your docker-compose file:
docker container commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
However, I prefer creating images with Dockerfiles and scripts to seed the data, etc.
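For illustration only - the image name myapp/postgres-seeded is made up, and note that (as discussed for the mongo case further down) docker commit does not capture data stored in a VOLUME, so this only helps if the data lives in the container's own filesystem:
# snapshot the existing container's filesystem as a new image
docker container commit postgres myapp/postgres-seeded:latest
# then reference that image from the compose file, e.g.
#   services:
#     postgres:
#       image: myapp/postgres-seeded:latest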
I have created the karaf Dockerfile from scratch and it works with my application. Now, the PostgreSQL and MongoDB containers need to be running on the same network as the karaf container for the final step. Essentially, what I have so far is three separate Dockerfiles, and what I need is for the containers to be able to communicate with each other. How do I approach this?
First, use the docker network ls command; it will show the networks that exist on the machine.
Then run your MongoDB container and set the --net parameter:
docker run --net karaf_default mongo
The mongo and karaf containers will now be on the same network. (You can also check the docs for --link.)
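A sketch of the same idea with an explicitly created user-defined network (app_net and my-karaf-image are placeholder names):
docker network create app_net
docker run -d --name karaf --net app_net my-karaf-image
docker run -d --name postgres --net app_net postgres
docker run -d --name mongo --net app_net mongo
# containers on app_net can now resolve each other by name,
# e.g. mongodb://mongo:27017 from inside the karaf container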
I have a docker-compose file that links a seed script with a mongo image from docker's public registry. I can populate the database with:
docker-compose up
docker-compose run seed_script
I can connect to this mongo container with the mongo cli from my host system and verify that the seed script is working and there's data in the database.
When I'm done seeding, I can see the mongo container ID with docker ps. I stop the containers by pressing Ctrl+C in the docker-compose terminal and commit the changes to the mongo container:
docker commit <mongo container ID> mongo-seeded-data
However, when I run that container individually, it's empty:
docker run -p 27017:27017 mongo-seeded-data
mongo
> show dbs
local 0.000GB
If I bring up the docker-compose containers again and use my host mongo client, I can see the data:
docker-compose up
mongo
> show dbs
seeded_db 0.018GB
local 0.000GB
I committed the container with the data in it. Why is it not there when I bring up the container? What is docker-compose doing differently than docker run?
Because there is a VOLUME defined in the mongo image.
docker commit saves the contents of the overlay fs as a new image, but VOLUME data lives outside the overlay fs. That's why it doesn't get saved.
I had the same problem in the past and resolved it by patching the mongo image to use a different data directory, so that the data would be written inside the container's filesystem instead of into the VOLUME.
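A sketch of that patch, assuming a custom image built on top of the official one (the tag and directory name are placeholders; the official mongo image declares VOLUME /data/db, so any other path stays in the overlay fs and survives docker commit):
# Dockerfile
FROM mongo:3.4
# write data somewhere that is NOT declared as a VOLUME in the base image
RUN mkdir -p /data/seeddb && chown -R mongodb:mongodb /data/seeddb
CMD ["mongod", "--dbpath", "/data/seeddb"]
Seed a container built from this image, and docker commit will then include the seeded database.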
I have a database docker container that is writing its data to another data-only container. The data-only container has a volume where it stores the data of the database. Is there a "docker" way of migrating this data-only container from one machine to another? I read about docker save and docker load but these commands save and load images, not containers. I want to be able to package the docker container along with its volumes and move it to another machine.
Check out the Flocker project. It's a very interesting solution to this problem, using ZFS to snapshot and replicate the storage volume between hosts.
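Apart from Flocker, a low-tech sketch that stays within plain Docker is to archive the volume through a throwaway container and restore it on the other machine (data_container and /var/lib/mydb are placeholder names):
# on the source machine: archive the data-only container's volume to the current directory
docker run --rm --volumes-from data_container -v $(pwd):/backup busybox \
  tar cf /backup/dbdata.tar /var/lib/mydb
# copy dbdata.tar to the target machine, recreate the data-only container, then restore
docker create --name data_container -v /var/lib/mydb busybox
docker run --rm --volumes-from data_container -v $(pwd):/backup busybox \
  tar xf /backup/dbdata.tar -C /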