How to restore an OrientDB container from backup

I am running OrientDB as a Docker container with mounts on the host filesystem. The databases are mounted on the host at /opt/orientdb/databases, and I would like to restore, say, a database from a container on one host into a different container on another host.
Do I just copy the database directory /opt/orientdb/databases/<database-name> from the first host to the second and restart the container?
If not, what is the best way to restore from backups when running in Docker containers?
OrientDB version 2.1.12
Thanks

Yes, you can copy the database from host1 to host2 and run another container on host2.
Be sure to stop the container on host1, sync/copy the database, then start the container on host2.
If you need to run the two instances at the same time, you can use replication:
http://orientdb.com/docs/last/Distributed-Configuration.html
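A minimal sketch of that sequence, assuming the hostnames host1/host2 and the mount path from the question (the container name, image tag, and in-container databases path are assumptions based on the official image):
# on host1: stop the container so the database files are consistent on disk
docker stop orientdb
# copy the database directory to host2
rsync -a /opt/orientdb/databases/<database-name> host2:/opt/orientdb/databases/
# on host2: start a container with the same bind mount
docker run -d --name orientdb -v /opt/orientdb/databases:/orientdb/databases orientdb:2.1.12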

Related

Docker Postgres data host volume mapping

I'm trying to containerize a PostgreSQL server, and this container will host many other applications as well. The requirement is that the PostgreSQL server data be mapped to a host volume, so that we don't lose the data when the container is stopped, and that the next time the container starts, the same directory can be mapped again and Postgres can reuse the old data. Below is the Dockerfile. Note that I'm using Ubuntu 22.04 on the host.
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y postgresql
ENTRYPOINT ["tail", "-f", "/dev/null"]
Docker image is built using the command
docker build -t pg_test .
and the container is run using the command
docker run --name test -v /home/me/data:/var/lib/postgresql/14/main pg_test
'/home/me/data' is an empty host directory where I want to map the Postgres server data. '/var/lib/postgresql/14/main' is the directory inside the container where Postgres is supposed to store its data.
Once the docker container starts, I enter the docker container using the command
docker exec -it test bash
and once I'm inside, I'm trying to start the PostgreSQL service. But PostgreSQL fails to start as there is no data in '/var/lib/postgresql/14/main' directory. I understand that since I have mapped an empty host directory to '/var/lib/postgresql/14/main' directory, postgres doesn't have the files required to start.
I understand that I'm doing it the wrong way, but I couldn't find a way around it. Can anyone please help me to do this the right way, if there is one?
Any help would be appreciated.
You should use the official postgres Docker image; it will set up the database for you when you start the container. You can find instructions at https://hub.docker.com/_/postgres
If you must use a custom image, you will need to initialize the database yourself, usually by running initdb or whatever your system provides.
But really, you should use the appropriate Docker image, and if you need more services, start each in its own container and connect them to the postgres one.
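For illustration, a minimal run of the official image with a host bind mount might look like this (the password, tag, and container name are placeholders; the official image keeps its data in /var/lib/postgresql/data):
docker run -d --name pg \
  -e POSTGRES_PASSWORD=secret \
  -v /home/me/data:/var/lib/postgresql/data \
  postgres:14
On the first start the image runs initdb into the mounted directory; on later starts it reuses the existing data, which is exactly the persistence behaviour asked for.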

How to add an already built docker container to docker-compose?

I have a container called "postgres", built with a plain docker command, that has a configured PostgreSQL inside it. Also, I have a docker-compose setup with two services - "api" and "nginx".
How to add the "postgres" container to my existing docker-compose setup as a service, without rebuilding? The PostgreSQL database is configured manually, and filled with data, so rebuilding is a really, really bad option.
I went through the docker-compose documentation but, sadly, found no way to do this without a rebuild.
Unfortunately this is not possible.
You don't reference containers in docker-compose; you use images.
You need to create a volume and/or bind mount to keep your database data.
This is because containers themselves do not persist data: if you filled one with data without a bind mount or a volume attached, you will lose everything once the container is removed (docker container rm).
Recommendation:
docker cp
docker cp copies files between a container and the host. https://docs.docker.com/engine/reference/commandline/container_cp/
1. Create a folder to hold all your PostgreSQL data (e.g. /home/user/postgre_data/);
2. Copy the contents of your PostgreSQL container's data directory into this folder with docker cp (see the postgres page on Docker Hub for the data directory location);
3. Run a new PostgreSQL container (same version) with a bind mount pointing to the new folder.
This preserves all your data, and you will be able to use the volume or bind mount from docker-compose.
Reference of docker-compose volumes: https://docs.docker.com/compose/compose-file/#volumes
Reference of postgres docker image: https://hub.docker.com/_/postgres/
Reference of volumes and bind mounts: https://docs.docker.com/storage/bind-mounts/#choosing-the--v-or---mount-flag
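Putting those steps together, a sketch might look like this (container names are assumptions, and the in-container path assumes the official image's default data directory):
# stop the container first so the copied files are consistent
docker stop postgres
docker cp postgres:/var/lib/postgresql/data /home/user/postgre_data
# run a fresh container of the same version with a bind mount
docker run -d --name postgres2 \
  -v /home/user/postgre_data:/var/lib/postgresql/data \
  postgres:<same-version>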
You can save this container as a new image using docker container commit and use that newly created image in your docker-compose:
docker container commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
However, I prefer creating images with Dockerfiles and scripts to load the data, etc.
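For example, a sketch with assumed names:
docker container commit postgres mypostgres:snapshot
Note that commit does not capture data stored in volumes. You can then point the image: field of a new docker-compose service at mypostgres:snapshot.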

Docker best practice to access host's services

What is best practice to access the host's services within a docker container?
I'd like to access PostgreSQL running on the host within my application which runs in a docker container.
The easiest approach I've found is to use docker container run --net="host" which, based on this answer, behaves as follows:
Such a container will share the network stack with the docker host and from the container point of view, localhost (or 127.0.0.1) will refer to the docker host.
Be aware that any port opened in your docker container would be opened on the docker host. And this without requiring the -p or -P docker run option.
This does not seem to be best practice, since containers should be isolated from the host.
Other approaches I've found involve awk-ing the host's IP. Is that the way to go?
The best option in this case is to treat the host as a remote machine. That way the container stays portable and has no strict dependency on network locations when connecting to the database.
In addition to the drawbacks of --network=host mentioned above, that option tightly couples the container to the host by assuming the database is found on localhost.
The way to treat the machine as a remote one is to use standard network constructs such as IP and DNS. Define a new DNS entry for the container that points to the host where the database runs, using the --add-host option of docker run:
docker run --add-host db-static:<ip-address-of-host> ...
Then, inside the container, you connect to the database via the hostname db-static.
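For example, with a PostgreSQL client inside the container (user and database names are placeholders):
psql -h db-static -U myuser -d mydb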

Backup Mongo in a Docker container

I have deployed a Mongo image in a Docker container via Docker Cloud. It is linked to a Meteor app. Is there any way to back up the data in the container?
Create another Docker container that runs a cron-controlled script to perform the backup and store it on a shared volume.
Also see Cron containers for docker - how do they actually work?
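A sketch of such a sidecar, with a plain sleep loop standing in for cron and all names (container, volume, schedule) assumed:
# shares a named volume for the dumps and links to the mongo container
docker run -d --name mongo-backup \
  --link mongo:mongo \
  -v mongo_backups:/backups \
  mongo \
  bash -c 'while true; do mongodump --host mongo --out /backups/$(date +%F); sleep 86400; done'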

The right way to move a data-only docker container from one machine to another

I have a database docker container that is writing its data to another data-only container. The data-only container has a volume where it stores the data of the database. Is there a "docker" way of migrating this data-only container from one machine to another? I read about docker save and docker load but these commands save and load images, not containers. I want to be able to package the docker container along with its volumes and move it to another machine.
Check out the Flocker project. It is a very interesting solution to this problem, using ZFS to snapshot and replicate storage volumes between hosts.
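If you prefer plain Docker tooling instead of Flocker, the commonly documented pattern is to archive the volume through a temporary container (container names and the /data mount point are assumptions):
# on the source machine: tar the data-only container's volume
docker run --rm --volumes-from data-container -v $(pwd):/backup busybox \
  tar czf /backup/data.tar.gz /data
# copy data.tar.gz to the target machine, create an empty data-only container there,
# then unpack the archive into it
docker run --rm --volumes-from data-container -v $(pwd):/backup busybox \
  tar xzf /backup/data.tar.gz -C /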