Backup and restore Rocket.Chat on Docker with MongoDB

I use this Docker image: https://hub.docker.com/_/rocket.chat/
Here is the code I used:
docker run --name db -d mongo:3.0 --smallfiles
docker run --name rocketchat --link db -d rocket.chat
I tried several things, but I can't find a way to have a clean backup/restore system.
Any advice?

For posterity: backing up Rocket.Chat on SERVER 1 and restoring it on SERVER 2, based on the official Docker image:
SERVER 1
cd /backups
docker run -it --rm --link db -v /backups:/backups mongo:3.0 mongodump -h db -o /backups/mongoBACKUP
tar czf mongoBACKUP.tar.gz mongoBACKUP/
Then send mongoBACKUP.tar.gz to SERVER 2, into /backups.
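For example, with scp (assuming SSH access from SERVER 1 to SERVER 2; adjust the user and host to your setup):
scp /backups/mongoBACKUP.tar.gz root@SERVER2:/backups/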
SERVER 2 (+ test on :3000)
docker run --name db -d mongo:3.0 --smallfiles
cd /backups
tar xzf mongoBACKUP.tar.gz
docker run -it --rm --name mongorestore -v /backups/mongoBACKUP:/var/dump --link db:db mongo mongorestore --host db /var/dump
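A quick sanity check before starting Rocket.Chat is to list the databases on the restored instance, e.g.:
docker run -it --rm --link db mongo:3.0 mongo --host db --eval "printjson(db.adminCommand('listDatabases'))"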
docker run -p 3000:3000 --name rocket --env ROOT_URL=http://yourwebsite.test --expose 3000 --link db -d rocket.chat

Related

How to update Mongo 4.0.28 in a Docker container

I use:
docker run --name db -d mongo:4.0 --smallfiles --replSet rs0 --oplogSize 128
docker exec -ti db mongo --eval "printjson(rs.initiate())"
Then start Rocket.Chat linked to this mongo instance:
docker run --name rocketchat -p 80:3000 --link db --env ROOT_URL=http://localhost --env MONGO_OPLOG_URL=mongodb://db:27017/local -d rocket.chat
Now Mongo 4.0 is deprecated.
How can I upgrade Mongo in Docker?
You can do it this way:
Create mongo dump
docker exec -i db /usr/bin/mongodump --username <username> --password <password> --authenticationDatabase admin --db <database_name> --out dump
Run a new Mongo container (note that --smallfiles was removed in MongoDB 4.2, so it is dropped here)
docker run -d --name dbnew mongo:4.4 --replSet rs0 --oplogSize 128
Copy the dump out of the old container, into the new one, and restore it there
docker cp db:/dump dump
docker cp dump dbnew:/dump
docker exec -i dbnew /usr/bin/mongorestore --username <username> --password <password> --authenticationDatabase admin --db <database_name> /dump/<database_name>
Check that the new DB works fine, then stop and remove the old container
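For example (list the databases in the new container, then remove the old one once you're satisfied):
docker exec -it dbnew mongo --eval "printjson(db.adminCommand('listDatabases'))"
docker stop db && docker rm db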
Rename new container
docker rename dbnew db
If you want your new container's data to be persistent (which you probably do), you need to use Docker volumes.
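For example, with a named volume (the volume name mongodata is just an example) mounted at the Mongo data directory:
docker run -d --name dbnew -v mongodata:/data/db mongo:4.4 --replSet rs0 --oplogSize 128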

Dockerized ThingsBoard + dockerized PostgreSQL

I launch a container running PostgreSQL with the command:
docker run -p 5432:5432 -d -it -e POSTGRES_USER='postgres' -e POSTGRES_PASSWORD='postgres' -e POSTGRES_DB='thingsboard' --name postgres postgres
Then, I launch ThingsBoard providing some environment variables to use the PostgreSQL database:
docker run -it -p 9090:9090 -p 1883:1883 -p 5683:5683/udp -v ~/.mytb-data:/data -v ~/.mytb-logs:/var/logs/thingsboard --name thingsboard --restart always -e SPRING_DATASOURCE_URL=jdbc:postgresql://<MY_LOCAL_IP>:5432/thingsboard -e SPRING_DATASOURCE_USERNAME=postgres -e SPRING_DATASOURCE_PASSWORD=postgres thingsboard/tb-postgres
where <MY_LOCAL_IP> is my IP address on the local network. I checked PostgreSQL, which actually binds to <MY_LOCAL_IP>:5432 (verified through PGAdmin).
The thingsboard container returns an error:
I expected ThingsBoard itself to create the tables in the thingsboard database, but it seems that doesn't happen. Any guess about the possible cause of this error? Thanks.
It turned out the problem was caused by the volumes: ~/.mytb-data and ~/.mytb-logs had been created earlier and were not empty. The containers work as long as ThingsBoard is launched without them:
docker run -it -p 9090:9090 -p 1883:1883 -p 5683:5683/udp --name thingsboard --restart always -e SPRING_DATASOURCE_URL=jdbc:postgresql://<MY_LOCAL_IP>:5432/thingsboard -e SPRING_DATASOURCE_USERNAME=postgres -e SPRING_DATASOURCE_PASSWORD=postgres thingsboard/tb-postgres
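If you want to keep the volume mounts, an alternative (assuming the old contents are disposable) is to empty the host directories so ThingsBoard can initialize from scratch, and then launch with the original command including the -v options:
rm -rf ~/.mytb-data/* ~/.mytb-logs/*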

Exposed ports become unresponsive after heavy load - Docker

I run a PostgreSQL database in a Docker container on my server, with port 5432 exposed. After heavy load from users, this container becomes unresponsive on that port.
docker run -d --env POSTGRES_PASSWORD=postgres --env POSTGRES_USER=user --env POSTGRES_DB=database -p 5432:5432 postgres
To resolve this, I have to enter the container, make a backup, recreate the container, and import the backup.
$ docker exec -it [id] sh
# pg_dump -U user database > /backup.pgsql
# exit
$ docker cp [id]:/backup.pgsql ~/backup.pgsql
$ docker stop [id]
$ docker run -d --env POSTGRES_PASSWORD=postgres --env POSTGRES_USER=user --env POSTGRES_DB=database -p 5432:5432 postgres
$ docker cp ~/backup.pgsql [new-id]:/backup.pgsql
$ docker exec -it [new-id] sh
# psql -U user database < /backup.pgsql
# exit
And everything works again until the next heavy load.
Why does this happen?
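As a side note, the same dump and restore can be driven entirely from the host, without an interactive shell; for example, with placeholder container names db-old and db-new:
docker exec db-old pg_dump -U user database > ~/backup.pgsql
docker exec -i db-new psql -U user database < ~/backup.pgsql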

Docker: repository name must be lowercase

I'm getting errors trying to run my Docker containers. I need postgres and redis connected to my server application.
docker pull postgres
docker rm -f syda-postgres
docker run -p 30203:5432 --name syda-postgres -e POSTGRES_PASSWORD=password POSTGRES_USER=root POSTGRES_DB=syda postgres
docker pull redis
docker rm -f syda-inmemory
docker run -d -p 30204:6379 --name syda-inmemory redis redis-server --appendonly yes
docker pull docker.url.ee/syda/server:latest
docker rm -f syda-server
docker run -d -p 30202:8080 --name syda-server --link syda-postgres:postgres --link syda-inmemory:redis docker.url.ee/syda/server:latest
This is the error I'm getting:
Error: No such container: syda-postgres
docker: invalid reference format: repository name must be lowercase.
See 'docker run --help'.
Error: No such container: syda-server
docker: Error response from daemon: could not get container for syda-postgres: No such container: syda-postgres.
See 'docker run --help'.
docker run -p 30203:5432 --name syda-postgres -e POSTGRES_PASSWORD=password POSTGRES_USER=root POSTGRES_DB=syda postgres
That tries to run a container from the image named POSTGRES_USER=root, with POSTGRES_DB=syda postgres as the command/arguments to the entrypoint. You need to pass -e for each variable, like:
docker run -p 30203:5432 --name syda-postgres \
-e POSTGRES_PASSWORD=password -e POSTGRES_USER=root -e POSTGRES_DB=syda \
postgres
Also, note that links are deprecated; you should use a shared network for communication between containers. This is often done with a compose file (a rough compose sketch follows the script below). If you need to do it from a script, you could run:
docker pull postgres
docker pull redis
docker pull docker.url.ee/syda/server:latest
docker rm -f syda-postgres
docker rm -f syda-inmemory
docker rm -f syda-server
docker network rm syda-net
docker network create syda-net
docker run -p 30203:5432 --net syda-net --name syda-postgres \
-e POSTGRES_PASSWORD=password -e POSTGRES_USER=root -e POSTGRES_DB=syda \
postgres
docker run -d -p 30204:6379 --net syda-net --name syda-inmemory \
redis redis-server --appendonly yes
docker run -d -p 30202:8080 --net syda-net --name syda-server \
docker.url.ee/syda/server:latest
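For reference, a rough compose-file equivalent of that script could look like this (same images, ports and variables as above; it is a sketch to adapt, and the server still has to be configured to reach the database and redis by these service names):
version: "3"
services:
  syda-postgres:
    image: postgres
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: root
      POSTGRES_DB: syda
    ports:
      - "30203:5432"
  syda-inmemory:
    image: redis
    command: redis-server --appendonly yes
    ports:
      - "30204:6379"
  syda-server:
    image: docker.url.ee/syda/server:latest
    ports:
      - "30202:8080"
    depends_on:
      - syda-postgres
      - syda-inmemory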

Correct syntax to do a mongodump of a MongoDB Docker instance?

I'm running an Ubuntu 16.04 LTS server with some Docker containers. One of these containers is a MongoDB container, where my data is stored.
Now I'm trying to make a backup with mongodump.
The problem for me is that MongoDB runs as a Docker container, while the backup should be stored outside of the container.
I think the syntax for this is something like this:
docker run \
--rm \
-it \
--link DOCKER_CONTAINER_NAME:mongo_alias \
-v /backup:/backup \
mongo mongodump \
--host mongo_alias \
--out /backup/
But I'm not sure about the parameters I have to use...
This is what I get for my MongoDB container via docker ps:
7bee41bfa08a mongo:3.4 "docker-entrypoint..." 4 months ago Up 2 months 27017/tcp mongo_db
And this is my docker-compose file:
version: "3"
services:
mongo_db:
container_name: mongo_db
image: 'mongo:3.4'
restart: 'always'
volumes:
- '/opt/mongo/project/live:/data/db'
So it should look like this?
docker run \
--rm \
-it \
--link mongo_db:mongo_alias \ # mongo_alias can be chosen freely?
-v /backup:/backup \ # Don't understand /backup:/backup
mongo mongodump \
--host mongo_alias \
--out /backup/ # This is in the root of the server?
Define the backup to run via compose as well. This will create the new container on the same network as the main mongo container. If you have any compose network definitions you will need to duplicate them in each compose file.
Create a second compose file for the backup command: docker-compose-backup.yml
version: "3"
services:
mongo_db_backup:
image: 'mongo:3.4'
volumes:
- '/opt/mongo/project/live_backup:/backup'
command: |
mongodump --host mongo_db --out /backup/
Then run the backup
docker-compose -f docker-compose-backup.yml run mongo_db_backup
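If the backup should run regularly, this command can be scheduled with cron, for example (the project directory is just a placeholder):
0 2 * * * cd /opt/mongo/project && docker-compose -f docker-compose-backup.yml run --rm mongo_db_backup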
You can also do this without docker-compose, directly from your host
Backup all databases
docker exec -t your-db-container mongodump --host mongo_db --out /backup/
Restore all databases
Move your backup folder to your host volume folder
docker exec -t your-db-container mongorestore /backup/
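If /backup is not a volume mounted into the container, you can copy the dump in with docker cp before restoring, for example:
docker cp ./backup your-db-container:/backup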