Mongodump of a Docker MongoDB instance to a single timestamp-named file - mongodb

I'm doing a backup dump of a MongoDB Docker instance (mongo_db) via docker-compose (thanks to Matt for that snippet so far):
version: "3"
services:
mongo_db_backup:
image: 'mongo:3.4'
volumes:
- '/backup:/backup'
command: |
mongodump --host mongo_db --out /backup/ --db specific
Executing the command
$ docker-compose run mongo_db_backup
gives me all collections of the specific db and stores them in /backup/specific.
Is it possible to get only one single (compressed) dump file, which is named after the current time?
I'm using --out to get the files into the folder. The docs say I cannot use --archive together with --out.
Furthermore, I need to use an environment variable to set the archive output. Something like this:
mongo_db_backup:
  image: 'mongo:3.4'
  volumes:
    - '/backup:/backup'
  command:
    - sh
    - -c
    - |
      mongodump
      --host mongo_db
      --gzip
      --db specific
      $$(
        if [ $TYPE = "hour" ]
        then echo "--archive=/backup/hour/$$(date +"%H").gz"
        elif [ $TYPE = "day" ]
        then echo --archive=/backup/day/$$(date +"%d").gz
        fi
      )
Executing with $ docker-compose run -e TYPE=day mongo_db_backup

You can change your compose file to the below:
version: "3"
services:
mongo_db_backup:
image: 'mongo:3.4'
volumes:
- '/backup:/backup'
command: sh -c "mongodump --host mongo_db --gzip --archive=/backup/$$(date +'%Y%m%d_%H%M%S') --db $${DB:=specific}"
Now if you want to change the DB, you can run it like below:
docker-compose run -e DB=abc mongo_db_backup
If you want to use it like docker-compose run mongo_db_backup abc, you would need to create an entrypoint.sh script and handle the arguments in that, so it is easier to do it using environment variables.
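For completeness, a minimal sketch of what such an entrypoint.sh could look like (hypothetical; the host name, backup path and default db are taken from the compose file above):
#!/bin/sh
# hypothetical entrypoint.sh: take the database name as the first positional
# argument and fall back to "specific" when none is given
DB="${1:-specific}"
exec mongodump --host mongo_db --gzip \
    --archive="/backup/$(date +'%Y%m%d_%H%M%S').gz" \
    --db "$DB"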
Edit-1 - Default behavior on missing environment variable
If you need to change the command based on whether the environment variable is specified or not, you can change the command to the below
command: sh -c "mongodump --host mongo_db --gzip --archive=/backup/$$(date +'%Y%m%d_%H%M%S') $$(if [ -z $$DB ]; then echo '--db default_db'; else echo --db $$DB; fi)"
Edit-2: Multiline command in compose with if/else
To solve the issue of using multiline commands in compose, you need to use a combination of the array form and a multiline string:
command:
  - sh
  - -c
  - |
    multi line shell script
Below is the command I worked out for your update
command:
  - bash
  - -c
  - |
    TYPE=$${TYPE:=day}
    if [ ! -d /backup/hour ]; then mkdir /backup/hour; fi
    if [ ! -d /backup/day ]; then mkdir /backup/day; fi
    mongodump --host mongo_db --gzip \
      --db test \
      $$( \
        if [ "$$TYPE" == "hour" ]; then \
          echo "--archive=/backup/hour/$$(date +'%H').gz"; \
        elif [ "$$TYPE" == "day" ]; then \
          echo "--archive=/backup/day/$$(date +'%d').gz"; \
        fi \
      )
Since docker-compose processes variables, we need to escape each $ using $$, so $TYPE becomes $$TYPE. Also, mongodump is a single command, so if you split it across multiple lines you need to use \ for line continuation.
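For example (assuming the service definition above):
docker-compose run mongo_db_backup              # TYPE defaults to "day", writes /backup/day/<DD>.gz
docker-compose run -e TYPE=hour mongo_db_backup # writes /backup/hour/<HH>.gz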

Related

Properly escaping quotes when running a command in kubernetes

I want to run a mongodb command in a Kubernetes deployment.
In my yaml file, I want to run the following:
command: ["mongo --port ${MONGODBCACHE_PORT} --host ${MONGODBCACHE_BIND_IP} \
--eval "rs.initiate('{ _id: \"test\", members: [ { _id: 0, host: \"${MONGODBCACHE_BIND_IP}:${MONGODBCACHE_BIND_IP}\" },]}')" && \
./mycommand "]
I checked that the environment variables are present correctly. How do I escape the characters when running this command?
Put the shell in command, and mongo with the other arguments in the args field, which is an array. Like this:
command: ["/bin/bash", "-c"]
args:
  - mongo
  - --port
  - ${MONGODBCACHE_PORT}
  - --host
  - ${MONGODBCACHE_BIND_IP}
  - --eval
  - rs.initiate('{ _id: "test", members: [ { _id: 0, host: "${MONGODBCACHE_BIND_IP}:${MONGODBCACHE_BIND_IP}" } ] }') && ./mycommand
Hope this will help.
Got it working with a slightly modified configuration in the manifest:
command: ["/bin/bash", "-c"]
args:
  - /usr/bin/mysql -u root -p$DB_ROOT_PASS -h $DB_HOST -e "CREATE USER IF NOT EXISTS $DB_USER@'%' IDENTIFIED BY '$DB_PASS';"
Although it's the mysql CLI client, this should work for any other command.
The env variables must exist, of course.

MongoDB with Docker container, not able to restore database with different name using mongorestore

This is the setup I have:
Created a mongodb instance with docker
sudo docker run -p 27017:27017 -e MONGODB_DATABASE=DEV -e MONGODB_USER=dev -e MONGODB_PASSWORD=dev123 -e MONGODB_ADMIN_PASSWORD=dev123 -e MONGODB_ROLE=readWriteAnyDatabase --name mymongo -v testdb:/var/lib/mongodb/data -d mongo
Entered the container using
sudo docker exec -it container-id /bin/bash
Executed the command
mongodump -d DEV -u dev -p dev123 (works perfectly)
Now the ISSUE happens while restoring to a different database:
mongorestore --db test ./dump/DEV throws the below error
Failed: test.duke: error reading database: not authorized on test to execute command { listCollections: 1, cursor: { batchSize: 0 } }
Stuck for 3 days now, any help would be appreciated (beginner to both Docker and MongoDB).
If your other mongo database has authentication then you should use:
mongorestore -u <username> -p <password> --authenticationDatabase=<database name> --db=test ./dump/DEV
Other advice would be to create dumps like:
mongodump --port 55555 -d testdb --gzip --archive=testdb.tar
and then restore like:
mongorestore --port 55555 --gzip --archive=testdb.tar
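If the goal is specifically to restore that dump under a different database name, newer mongorestore versions also support remapping namespaces from an archive (a sketch, reusing the archive created above):
mongorestore --port 55555 --gzip --archive=testdb.tar --nsFrom='testdb.*' --nsTo='test.*'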

Docker mongodb config file

There is a way to link the /data/db directory of the container to your local host, but I cannot find anything about configuration. How do I link /etc/mongo.conf to something on my local file system? Or maybe some other approach is used. Please share your experience.
I'm using the mongodb 3.4 official docker image. Since the mongod doesn't read a config file by default, this is how I start the mongod service:
docker run -d --name mongodb-test -p 37017:27017 \
-v /home/sa/data/mongod.conf:/etc/mongod.conf \
-v /home/sa/data/db:/data/db mongo --config /etc/mongod.conf
Removing -d will show you the initialization of the container.
Using a docker-compose.yml:
version: '3'
services:
  mongodb_server:
    container_name: mongodb_server
    image: mongo:3.4
    env_file: './dev.env'
    command:
      - '--auth'
      - '-f'
      - '/etc/mongod.conf'
    volumes:
      - '/home/sa/data/mongod.conf:/etc/mongod.conf'
      - '/home/sa/data/db:/data/db'
    ports:
      - '37017:27017'
then
docker-compose up
When you run the docker container using this:
docker run -d -v /var/lib/mongo:/data/db \
-v /home/user/mongo.conf:/etc/mongo.conf -p port:port image_name
/var/lib/mongo is the host's mongo folder.
/data/db is a folder in the docker container.
I merely wanted to know the command used to specify a config for mongo through the docker run command.
First you want to specify the volume flag with -v to map a file or directory from the host to the container. So if you had a config file located at /home/ubuntu/ and wanted to place it within the /etc/ folder of the container you would specify it with the following:
-v /home/ubuntu/mongod.conf:/etc/mongod.conf
Then specify the command for mongo to read the config file after the image like so:
mongo -f /etc/mongod.conf
If you put it all together, you'll get something like this:
docker run -d --net="host" --name mongo-host -v /home/ubuntu/mongod.conf:/etc/mongod.conf mongo -f /etc/mongod.conf
For some reason I have to use MongoDB VERSION 3.0.1.
Now: 2016-09-13 17:42:06
This is what I found:
#first step: run mongo 3.0.1 without conf
docker run --name testmongo -p 27017:27017 -d mongo:3.0.1
#sec step:
docker exec -it testmongo cat /entrypoint.sh
#!/bin/bash
set -e

if [ "${1:0:1}" = '-' ]; then
    set -- mongod "$@"
fi

if [ "$1" = 'mongod' ]; then
    chown -R mongodb /data/db

    numa='numactl --interleave=all'
    if $numa true &> /dev/null; then
        set -- $numa "$@"
    fi

    exec gosu mongodb "$@"
fi

exec "$@"
I find that there are two ways to start a mongod service.
What I tried:
docker run --name mongo -d -v your/host/dir:/container/dir mongo:3.0.1 -f /container/dir/mongod.conf
The last -f is a mongod parameter; you can also use --config instead.
Make sure the path your/host/dir exists and the file mongod.conf is in it.

Customize the configuration of the official PostgreSQL docker image

I am using the official postgresql docker image (version 9.4). I have extended the Dockerfile so I can alter the settings in postgresql.conf etc., using a bash script. It successfully adds and runs the script on entrypoint for a single sed command, but when I put 2 or more sed commands, I get the following error:
/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/config.sh
: No such file or directoryread
/var/lib/postgresql/data/postgresql.conf
I am trying on Windows 10, in combination with Vagrant and VirtualBox, using NFS file system on shared folders, via the vagrant-winnfsd plugin.
Why is this happening? How can I alter my bash script in order to work with more configuration settings? Is there a better way?
Dockerfile:
FROM postgres:9.4
RUN echo "Europe/Athens" > /etc/timezone \
&& dpkg-reconfigure -f noninteractive tzdata
RUN localedef -i el_GR -c -f UTF-8 -A /usr/share/locale/locale.alias el_GR.UTF-8
ADD config.sh /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/config.sh
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
config.sh:
#!/bin/bash
sed -i -e"s/^#logging_collector = off.*$/logging_collector = on/" /var/lib/postgresql/data/postgresql.conf
sed -i -e"s/^max_connections = 100.*$/max_connections = 1000/" /var/lib/postgresql/data/postgresql.conf
database.yml
postgres:
  container_name: postgres-9.4
  image: ***/postgres-9.4
  volumes_from:
    - postgres_data
  ports:
    - 5432:5432
  environment:
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=password
    - POSTGRES_DB=database
    - USERMAP_UID=999
    - USERMAP_GID=999
postgres_data:
  container_name: postgres_data
  image: ***/postgres-9.4
  volumes:
    - ./services/postgres:/etc/postgresql
    - ./services/postgres:/var/lib/postgresql
    - ./services/postgres/logs:/var/log/postgresql
  command: "true"
You might want to try using a RUN statement to execute your bash script or just run sed directly with both commands combined with a semicolon:
RUN sed -i -e 's/^#\(logging_collector = \).*/\1on/; s/^\(max_connections = \).*/\11000/' \
/var/lib/postgresql/data/postgresql.conf
A more scalable solution would be to put the sed program in an external file, then use these statements:
ADD postgres-edit.sed /var/local
RUN sed -i -f /var/local/postgres-edit.sed /var/lib/postgresql/data/postgresql.conf
postgres-edit.sed:
# sed script to edit postgresql configuration
s/^#\(logging_collector = \).*/\1on/
s/^\(max_connections = \).*/\11000/
Seems like a duplicate of How to customize the configuration file of the official PostgreSQL docker image?.
Copy-paste of my answer at https://stackoverflow.com/a/40598124/385548.
Inject custom postgresql.conf into postgres Docker container
The default postgresql.conf file lives within the PGDATA dir (/var/lib/postgresql/data), which makes things more complicated, especially when running the postgres container for the first time, since the docker-entrypoint.sh wrapper invokes the initdb step for PGDATA dir initialization.
To customize PostgreSQL configuration in Docker consistently, I suggest using config_file postgres option together with Docker volumes like this:
Production database (PGDATA dir as Persistent Volume)
docker run -d \
-v $CUSTOM_CONFIG:/etc/postgresql.conf \
-v $CUSTOM_DATADIR:/var/lib/postgresql/data \
-e POSTGRES_USER=postgres \
-p 5432:5432 \
--name postgres \
postgres:9.6 postgres -c config_file=/etc/postgresql.conf
Testing database (PGDATA dir will be discarded after docker rm)
docker run -d \
-v $CUSTOM_CONFIG:/etc/postgresql.conf \
-e POSTGRES_USER=postgres \
--name postgres \
postgres:9.6 postgres -c config_file=/etc/postgresql.conf
Debugging
Remove the -d (detach option) from the docker run command to see the server logs directly.
Connect to the postgres server with the psql client and query the configuration:
docker run -it --rm --link postgres:postgres postgres:9.6 sh -c 'exec psql -h $POSTGRES_PORT_5432_TCP_ADDR -p $POSTGRES_PORT_5432_TCP_PORT -U postgres'
psql (9.6.0)
Type "help" for help.
postgres=# SHOW all;

How to export all collections in MongoDB?

I want to export all collections in MongoDB using the command:
mongoexport -d dbname -o Mongo.json
The result is:
No collection specified!
The manual says that if you don't specify a collection, all collections will be exported.
However, why doesn't this work?
http://docs.mongodb.org/manual/reference/mongoexport/#cmdoption-mongoexport--collection
My MongoDB version is 2.0.6.
For lazy people, use mongodump, it's faster:
mongodump -d <database_name> -o <directory_backup>
And to "restore/import" it (from directory_backup/dump/):
mongorestore -d <database_name> <directory_backup>
This way, you don't need to deal with all collections individually. Just specify the database.
Note that I would recommend against using mongodump/mongorestore for large data sets. It is very slow, and once you get past 10/20 GB of data it can take hours to restore.
I wrote a bash script for that. Just run it with 2 parameters (database name, dir to store files).
#!/bin/bash

if [ ! $1 ]; then
    echo " Example of use: $0 database_name [dir_to_store]"
    exit 1
fi
db=$1
out_dir=$2
if [ ! $out_dir ]; then
    out_dir="./"
else
    mkdir -p $out_dir
fi

tmp_file="fadlfhsdofheinwvw.js"
echo "print('_ ' + db.getCollectionNames())" > $tmp_file
cols=`mongo $db $tmp_file | grep '_' | awk '{print $2}' | tr ',' ' '`
for c in $cols
do
    mongoexport -d $db -c $c -o "$out_dir/exp_${db}_${c}.json"
done
rm $tmp_file
For local and remote dump and restore:
For Local
Local dump
mongodump -d mydb -o ./mongo-backup
Local restore
mongorestore -d mydb ./mongo-backup/mydb
For remote
Remote dump
mongodump --uri "mongodb+srv://Admin:MYPASS@appcluster.15lf4.mongodb.net/mytestdb" -o ./mongo-backup
Remote restore
mongorestore --uri "mongodb+srv://Admin:MYPASS@appcluster.15lf4.mongodb.net/mytestdb" ./mongo-backup/mytestdb
Update:
If you're using mongo 4.0 you may encounter a snapshot error; then you can run with this argument: --forceTableScan. See here for more information. The error is something like this:
mongodump error reading collection: BSON field 'FindCommandRequest.snapshot' is an unknown field.
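For example, the flag can simply be added to the dump command (a sketch based on the remote dump above):
mongodump --forceTableScan --uri "mongodb+srv://Admin:MYPASS@appcluster.15lf4.mongodb.net/mytestdb" -o ./mongo-backup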
To export all collections:
mongodump -d database_name -o directory_to_store_dumps
To restore them:
mongorestore -d database_name directory_backup_where_mongodb_tobe_restored
Follow the steps below to create a mongodump from the server and import it to another server/local machine which has a username and a password:
1. mongodump -d dbname -o dumpname -u username -p password
2. scp -r user@remote:~/location/of/dumpname ./
3. mongorestore -d dbname dumpname/dbname/ -u username -p password
Please let us know where you have installed your MongoDB (either in Ubuntu or in Windows).
For Windows:
Before exporting you must connect to your MongoDB in the cmd prompt and make sure that you are able to connect to your localhost.
Now open a new cmd prompt and execute the below command:
mongodump --db database name --out path to save
eg: mongodump --db mydb --out c:\TEMP\op.json
Visit https://www.youtube.com/watch?v=hOCp3Jv6yKo for more details.
For Ubuntu:
Log in to the terminal where MongoDB is installed and make sure you are able to connect to your MongoDB.
Now open a new terminal and execute the below command:
mongodump -d database name -o file name to save
eg: mongodump -d mydb -o output.json
Visit https://www.youtube.com/watch?v=5Fwd2ZB86gg for more details.
Previous answers explained it well; I am adding my answer to help in case you are dealing with a remote password-protected database:
mongodump --host xx.xxx.xx.xx --port 27017 --db your_db_name --username your_user_name --password your_password --out /target/folder/path
I realize that this is quite an old question and that mongodump/mongorestore is clearly the right way if you want a 100% faithful result, including indexes.
However, I needed a quick and dirty solution that would likely be forwards and backwards compatible between old and new versions of MongoDB, provided there's nothing especially wacky going on. And for that I wanted the answer to the original question.
There are other acceptable solutions above, but this Unix pipeline is relatively short and sweet:
mongo --quiet mydatabase --eval "db.getCollectionNames().join('\n')" | \
grep -v system.indexes | \
xargs -L 1 -I {} mongoexport -d mydatabase -c {} --out {}.json
This produces an appropriately named .json file for each collection.
Note that the database name ("mydatabase") appears twice. I'm assuming the database is local and you don't need to pass credentials but it's easy to do that with both mongo and mongoexport.
Note that I'm using grep -v to discard system.indexes, because I don't want an older version of MongoDB to try to interpret a system collection from a newer one. Instead I'm allowing my application to make its usual ensureIndex calls to recreate the indexes.
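For instance, a sketch of the same pipeline against a remote, authenticated server (the host and credentials below are placeholders):
mongo --quiet -u myuser -p mypass --authenticationDatabase admin myhost/mydatabase \
    --eval "db.getCollectionNames().join('\n')" | \
  grep -v system.indexes | \
  xargs -L 1 -I {} mongoexport -h myhost -u myuser -p mypass --authenticationDatabase admin \
    -d mydatabase -c {} --out {}.json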
You can do it using the mongodump command.
Step 1: Open a command prompt.
Step 2: Go to the bin folder of your MongoDB installation (C:\Program Files\MongoDB\Server\4.0\bin).
Step 3: Then execute the following command:
mongodump -d your_db_name -o destination_path
your_db_name = test
destination_path = C:\Users\HP\Desktop
Exported files will be created in destination_path\your_db_name folder (in this example C:\Users\HP\Desktop\test)
References : o7planning
In case you want to connect to a remote MongoDB server like mongolab.com, you should pass connection credentials, e.g.:
mongoexport -h id.mongolab.com:60599 -u username -p password -d mydb -c mycollection -o mybackup.json
If you are OK with the bson format, then you can use the mongodump utility with the same -d flag. It will dump all the collections to the dump directory (the default, can be changed via the -o option) in the bson format. You can then import these files using the mongorestore utility.
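For example (a sketch, assuming a database named mydb and the default dump directory):
mongodump -d mydb                 # writes BSON files to ./dump/mydb
mongorestore -d mydb dump/mydb    # loads them back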
If you're dealing with remote databases you can try these commands given that you don't mind the output being BSON
1. Dump out as a gzip archive
mongodump --uri="mongodb://YOUR_USER_ID:YOUR_PASSWORD@YOUR_HOST_IP/YOUR_DB_NAME" --gzip --archive > YOUR_FILE_NAME
2. Restore (Copy a database from one to another)
mongorestore --uri="mongodb://$targetUser:$targetPwd@$targetHost/$targetDb" --nsFrom="$sourceDb.*" --nsTo="$targetDb.*" --gzip --archive
You can use mongo --eval 'printjson(db.getCollectionNames())' to get the list of collections
and then do a mongoexport on all of them.
Here is an example in ruby
out = `mongo #{DB_HOST}/#{DB_NAME} --eval "printjson(db.getCollectionNames())"`
collections = out.scan(/\".+\"/).map { |s| s.gsub('"', '') }
collections.each do |collection|
  system "mongoexport --db #{DB_NAME} --collection #{collection} --host '#{DB_HOST}' --out #{collection}_dump"
end
I needed the Windows batch script version. This thread was useful, so I thought I'd contribute my answer to it too.
mongo "{YOUR SERVER}/{YOUR DATABASE}" --eval "rs.slaveOk();db.getCollectionNames()" --quiet>__collections.txt
for /f %%a in ('type __collections.txt') do @set COLLECTIONS=%%a
for %%a in (%COLLECTIONS%) do mongoexport --host {YOUR SERVER} --db {YOUR DATABASE} --collection %%a --out data\%%a.json
del __collections.txt
I had some issues using set /p COLLECTIONS=<__collections.txt, hence the convoluted for /f method.
I found, after trying lots of convoluted examples, that a very simple approach worked for me.
I just wanted to take a dump of a db locally and import it on a remote instance:
on the local machine:
mongodump -d databasename
then I scp'd my dump to my server machine:
scp -r dump user@xx.xxx.xxx.xxx:~
then from the parent dir of the dump simply:
mongorestore
and that imported the database,
assuming the mongodb service is running of course.
If you want, you can export all collections to csv without specifying --fields (this will export all fields).
From http://drzon.net/export-mongodb-collections-to-csv-without-specifying-fields/ run this bash script:
OIFS=$IFS;
IFS=",";

# fill in your details here
dbname=DBNAME
user=USERNAME
pass=PASSWORD
host=HOSTNAME:PORT

# first get all collections in the database
collections=`mongo "$host/$dbname" -u $user -p $pass --eval "rs.slaveOk();db.getCollectionNames();"`;
collections=`mongo $dbname --eval "rs.slaveOk();db.getCollectionNames();"`;
collectionArray=($collections);

# for each collection
for ((i=0; i<${#collectionArray[@]}; ++i));
do
    echo 'exporting collection' ${collectionArray[$i]}
    # get comma separated list of keys. do this by peeking into the first document in the collection and get his set of keys
    keys=`mongo "$host/$dbname" -u $user -p $pass --eval "rs.slaveOk();var keys = []; for(var key in db.${collectionArray[$i]}.find().sort({_id: -1}).limit(1)[0]) { keys.push(key); }; keys;" --quiet`;
    # now use mongoexport with the set of keys to export the collection to csv
    mongoexport --host $host -u $user -p $pass -d $dbname -c ${collectionArray[$i]} --fields "$keys" --csv --out $dbname.${collectionArray[$i]}.csv;
done
IFS=$OIFS;
If you want to dump all collections in all databases (which is an expansive interpretation of the original questioner's intent) then use
mongodump
All the databases and collections will be created in a directory called 'dump' in the current location.
You can create a zip file by using the following command. It will create a zip file of the database {dbname} provided. You can later import the zip file into your MongoDB.
Windows filepath=C:\Users\Username\mongo
mongodump --archive={filepath}\+{filename}.gz --gzip --db {dbname}
Here's what worked for me when restoring an exported database:
mongorestore -d 0 ./0 --drop
where ./0 contained the exported bson files. Note that --drop will overwrite existing data.
If you want to use mongoexport and mongoimport to export/import each collection from a database, I think this utility can be helpful for you.
I've used a similar utility a couple of times:
LOADING=false
usage()
{
cat << EOF
usage: $0 [options] dbname
OPTIONS:
-h Show this help.
-l Load instead of export
-u Mongo username
-p Mongo password
-H Mongo host string (ex. localhost:27017)
EOF
}
while getopts "hlu:p:H:" opt; do
  MAXOPTIND=$OPTIND

  case $opt in
    h)
      usage
      exit
      ;;
    l)
      LOADING=true
      ;;
    u)
      USERNAME="$OPTARG"
      ;;
    p)
      PASSWORD="$OPTARG"
      ;;
    H)
      HOST="$OPTARG"
      ;;
    \?)
      echo "Invalid option $opt"
      exit 1
      ;;
  esac
done
shift $(($MAXOPTIND-1))
if [ -z "$1" ]; then
  echo "Usage: export-mongo [opts] <dbname>"
  exit 1
fi

DB="$1"
if [ -z "$HOST" ]; then
  CONN="localhost:27017/$DB"
else
  CONN="$HOST/$DB"
fi

ARGS=""
if [ -n "$USERNAME" ]; then
  ARGS="-u $USERNAME"
fi
if [ -n "$PASSWORD" ]; then
  ARGS="$ARGS -p $PASSWORD"
fi
echo "*************************** Mongo Export ************************"
echo "**** Host: $HOST"
echo "**** Database: $DB"
echo "**** Username: $USERNAME"
echo "**** Password: $PASSWORD"
echo "**** Loading: $LOADING"
echo "*****************************************************************"
if $LOADING ; then
  echo "Loading into $CONN"
  tar -xzf $DB.tar.gz
  pushd $DB >/dev/null

  for path in *.json; do
    collection=${path%.json}
    echo "Loading into $DB/$collection from $path"
    mongoimport $ARGS -d $DB -c $collection $path
  done

  popd >/dev/null
  rm -rf $DB
else
  DATABASE_COLLECTIONS=$(mongo $CONN $ARGS --quiet --eval 'db.getCollectionNames()' | sed 's/,/ /g')

  mkdir /tmp/$DB
  pushd /tmp/$DB 2>/dev/null

  for collection in $DATABASE_COLLECTIONS; do
    mongoexport --host $HOST -u $USERNAME -p $PASSWORD -d $DB -c $collection --jsonArray -o $collection.json >/dev/null
  done

  pushd /tmp 2>/dev/null
  tar -czf "$DB.tar.gz" $DB 2>/dev/null
  popd 2>/dev/null
  popd 2>/dev/null
  mv /tmp/$DB.tar.gz ./ 2>/dev/null
  rm -rf /tmp/$DB 2>/dev/null
fi
If you have this issue:
Failed: can't create session: could not connect to server: connection() : auth error: sasl conversation error: unable to authenticate using mechanism "SCRAM-SHA-1": (AuthenticationFailed) Authentication failed.
then add --authenticationDatabase admin
eg:
mongodump -h 192.168.20.30:27018 --authenticationDatabase admin -u dbAdmin -p dbPassword -d dbName -o path/to/folder
If you want to back up all the dbs on the server, without having to worry about what the dbs are called, use the following shell script:
#!/bin/sh

md=`which mongodump`
pidof=`which pidof`
mdi=`$pidof mongod`
dir='/var/backup/mongo'

if [ ! -z "$mdi" ]
then
    if [ ! -d "$dir" ]
    then
        mkdir -p $dir
    fi
    $md --out $dir >/dev/null 2>&1
fi
This uses the mongodump utility, which will back up all DBs if none is specified.
You can put this in your cronjob, and it will only run if the mongod process is running. It will also create the backup directory if none exists.
Each DB backup is written to an individual directory, so you can restore individual DBs from the global dump.
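For example, a hypothetical crontab entry (the script path below is an assumption) that runs the backup every night at 02:30:
30 2 * * * /usr/local/bin/mongo-backup.sh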
I dump all collections with Robo 3T.
I run the command below on vagrant/homestead. It works for me:
mongodump --host localhost --port 27017 --db db_name --out db_path
Some of the options are now deprecated; in version 4.4.5 here is how I have done it:
mongodump --archive="my-local-db" --db=my
mongorestore --archive="my-local-db" --nsFrom='my.*' --nsTo='mynew.*'
Read more about restore here: https://docs.mongodb.com/database-tools/mongorestore/
First of all, start MongoDB: go to C:\Program Files\MongoDB\Server\3.2\bin and run mongod.exe to start the MongoDB server.
Command in Windows to Export
Command to export a MongoDB database in Windows from the "remote-server" (with its internal IP address and port) to the local machine directory C:/Users/Desktop/temp-folder.
C:\> mongodump --host remote_ip_address:27017 --db <db-name> -o C:/Users/Desktop/temp-folder
Command in Windows to Import
Command to import a MongoDB database in Windows to the "remote-server" from the local machine directory C:/Users/Desktop/temp-folder/db-dir.
C:\> mongorestore --host=ip --port=27017 -d <db-name> C:/Users/Desktop/temp-folder/db-dir
This is the simplest technique to achieve your aim.
mongodump -d db_name -o path/filename.json
#!/bin/bash
# mongodump using sh script
TIMESTAMP=`date +%F-%H%M`
APP_NAME="folder_name"
BACKUPS_DIR="/xxxx/tst_file_bcup/$APP_NAME"
BACKUP_NAME="$APP_NAME-$TIMESTAMP"

/usr/bin/mongodump -h 127.0.0.1 -d <dbname> -o $BACKUPS_DIR/$APP_NAME/$BACKUP_NAME
tar -zcvf $BACKUPS_DIR/$BACKUP_NAME.tgz $BACKUPS_DIR/$APP_NAME/$BACKUP_NAME
rm -rf /home/wowza_analytics_bcup/wowza_analytics/wowza_analytics

### delete backups older than 7 days automatically using the given command
find /home/wowza_analytics_bcup/wowza_analytics/ -mindepth 1 -mtime +7 -delete
There are multiple options depending on what you want to do
1) If you want to export your database to another mongo database, you should use mongodump. This creates a folder of BSON files which have metadata that JSON wouldn't have.
mongodump
mongorestore --host mongodb1.example.net --port 37017 dump/
2) If you want to export your database into JSON you can use mongoexport, except you have to do it one collection at a time (this is by design). However, I think it's easiest to export the entire database with mongodump and then convert to JSON.
# -d is a valid option for both mongorestore and mongodump
mongodump -d <DATABASE_NAME>
for file in dump/*/*.bson; do bsondump $file > $file.json; done
Even in mongo version 4 there is no way to export all collections at once. To export a specified collection to a specified output file from a local MongoDB instance running on port 27017, you can use the following command:
.\mongoexport.exe --db=xstaging --collection=products --out=c:/xstaging.products.json
Open the connection
Start the server
Open a new command prompt
Export:
mongo/bin> mongoexport -d webmitta -c domain -o domain-k.json
Import:
mongoimport -d dbname -c newCollecionname --file domain-k.json
Where
webmitta (db name)
domain (collection name)
domain-k.json (output file name)