I've dumped some databases using mongodump, which means I currently have a folder (named after the date I dumped the databases) containing one folder for each database I had, and inside each database folder there are two files per collection: one .bson and one .metadata.json.
Now I want to use mongorestore to rebuild databases and their collections using the following command:
mongorestore --db user --verbose /home/nvsh120/projects/database/11-26-20/user
but it doesn't work and exits with the following error:
uncaught exception: SyntaxError: unexpected token: identifier :
@(shell):1:15
So the problem was that I was trying to run mongorestore from the mongo shell, whereas I should have run it from the Windows/Linux command prompt/terminal.
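For example, exiting the mongo shell and running the same command directly from the terminal works:
mongorestore --db user --verbose /home/nvsh120/projects/database/11-26-20/user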
In case someone needs this: it is for running a mongorestore command against a MongoDB that runs in a Docker container (pulled image).
You cannot run mongorestore in the container's mongo shell.
First, run this command to get a bash shell inside the specific Docker container:
sudo docker exec -it <name-of-container> bash
Next, run the restore command in this bash.
root@52f66390ae10:/# mongorestore -d <db-name> <path-to-dump-in-docker-container>
Keep in mind, I have run my Docker MongoDB with a docker-compose.yml file in which I have mounted the database dump as a volume.
Here is a snippet from the docker-compose file:
volumes:
- <path-to-dump-in-local>:<path-to-db-in-container> (Usually /data/db)
Volumes can also be mounted as a part of the docker run command.
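For reference, a rough docker run equivalent, using the same placeholders as above (adjust them to your own paths and container name):
docker run -d --name <name-of-container> \
    -v <path-to-dump-in-local>:<path-to-dump-in-docker-container> \
    mongo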
I’m currently trying to restore a mongodump made with mongodb:3.4-jessie into a newer version, mongodb:4.2.3-bionic.
When I try to execute my command:
sudo docker exec mongo mongorestore --db=mock --gzip /mongorestore/app
It returns me with this error:
2020-05-01T00:01:29.405+0000 the --db and --collection args should only be used when restoring from a BSON file. Other uses are deprecated and will not exist in the future; use --nsInclude instead
2020-05-01T00:01:29.406+0000 Failed: mongorestore target '/home/user1/mongorestore/app' invalid: stat /home/user1/mongorestore/app: no such file or directory
2020-05-01T00:01:29.406+0000 0 document(s) restored successfully. 0 document(s) failed to restore.
The app folder contains BSON files as well as .json.gz files.
I can't upgrade the older dump, as it's the only thing left, and I really want to use a newer version of mongo.
Thanks a lot!
Your command was blocked by problems before it could attempt to restore data to a newer mongodb release.
You're running mongorestore inside a docker container but the input data directory /mongorestore/app does not seem to be inside the container (unless you mounted it in a previous step not seen here and passed the wrong path to mongorestore). You can use the Docker run command's --mount or --volume options to mount host directories into a container. Then pass its in-container path to the mongorestore command. See the docker run command docs.
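A sketch of what that could look like, assuming the dump lives at /home/user1/mongorestore on the host (taken from the error message) and should appear at /mongorestore inside the container:
docker run -d --name mongo \
    --mount type=bind,source=/home/user1/mongorestore,target=/mongorestore \
    mongo:4.2.3-bionic
docker exec mongo mongorestore --db=mock --gzip /mongorestore/app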
mongorestore is warning about this use of the --db option but it's not clear if that's because it can't find the input data directory or if it needs the --nsInclude option instead. See the mongorestore command docs.
You shouldn't need to use sudo with docker exec, and doing so could cause permission problems with the mounted output files. The mongorestore command shouldn't need sudo either, but if I'm wrong about that, write docker exec mongo sudo mongorestore ....
The mongodump and mongorestore docs suggest that the --gzip option expects all the files to be compressed, not just some of them. Maybe it checks each file's .gz filename extension to decide whether to decompress it, but the docs don't say that it supports that case.
I'm betting mongorestore can restore BSON files from an older release. That file format should be very stable.
I ran into the same issue but with Mongo 5 and it worked like so:
mongorestore --host=hostname --port=portnum \
--archive=/path/to/archive.gz --gzip --verbose \
--nsInclude="mydbname.*" \
--convertLegacyIndexes
where mydbname is the name of the db that I used when dumping the collections.
If you use another dbname now, then you need to convert them using --nsFrom="mydbname.*" --nsTo="newdbname.*"
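For example, combined with the command above (newdbname is a placeholder for the database name you want to restore into):
mongorestore --host=hostname --port=portnum \
    --archive=/path/to/archive.gz --gzip --verbose \
    --nsInclude="mydbname.*" \
    --nsFrom="mydbname.*" --nsTo="newdbname.*" \
    --convertLegacyIndexes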
All from: https://docs.mongodb.com/database-tools/mongorestore/
I'm new to Docker and MongoDB, so I expect I'm missing some steps. Here's what I have in my Dockerfile so far:
FROM python:2.7
RUN apt-get update \
&& apt-get install -y mongodb \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir -p /data/db
RUN service mongodb start
RUN mongod --fork --logpath /var/log/mongodb.log
RUN mongo db --eval 'db.createUser({user:"dbuser",pwd:"dbpass",roles:["readWrite","dbAdmin"]})'
The connection fails on the last command:
Error: couldn't connect to server 127.0.0.1:27017 at src/mongo/shell/mongo.js:145
exception: connect failed
How can I connect successfully? Should I change the host/IP, and to what, in which commands?
Several things going wrong here. First are the commands you're running:
RUN service mongodb start
RUN mongod --fork --logpath /var/log/mongodb.log
Each of these commands runs to create a layer in Docker. Once the command being run returns, the temporary container that was started is stopped, and any files changed in the container are captured to make a new layer. No processes persist between these commands.
These commands are also the background (forking) versions of the startup commands. In Docker you'll find this problematic: when you use such a command as your container's command, the container dies as soon as the command finishes. Pid 1 in the container has the same role as pid 1 on a Linux OS; once it dies, so does everything else.
The second issue I'm seeing is mixing data with your container, in the form of initializing the database with the last RUN command. This fails since there's no database running (see above). I'd recommend instead making an entrypoint that configures the database if one does not already exist, and then using a volume in your docker-compose.yml or on your docker run command line to persist data between containers.
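A minimal sketch of such an entrypoint, reusing the createUser call from your Dockerfile (untested; the marker file is just an illustrative way to detect the first run):
#!/bin/bash
# docker-entrypoint.sh: initialize the database on first run, then run mongod in the foreground
set -e
if [ ! -f /data/db/.initialized ]; then
    # start a temporary background instance to create the user
    mongod --fork --logpath /var/log/mongodb.log
    mongo db --eval 'db.createUser({user:"dbuser",pwd:"dbpass",roles:["readWrite","dbAdmin"]})'
    mongod --shutdown
    touch /data/db/.initialized
fi
# run in the foreground as pid 1 so the container stays alive
exec mongod --logpath /var/log/mongodb.log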
If you absolutely must initialize the data as part of your image, then you can try merging the various commands into a single run:
RUN mongod --fork --logpath /var/log/mongodb.log \
&& mongo db --eval 'db.createUser({user:"dbuser",pwd:"dbpass",roles:["readWrite","dbAdmin"]})'
I think you misunderstood what Dockerfiles are used for.
As the Dockerfile reference points out, a
Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
The whole concept of an image is to derive running containers from it, which are then filled with data and queried (in the case of a database), called by an external container or the host (in the case of a web service), or used in many other possible ways.
To answer your question I'll assume that:
You want to use a mongo database to store data.
You have some Python code which needs access to mongo.
You want some initial data in your database.
To do so:
Run a mongo database
docker run --name my-mongo -d mongo
Note: There is no need to write a custom image. Use the official mongo image!
Create a python image which contains your script
a) Write your Dockerfile
FROM python:3-alpine
ADD my_script.py /
RUN pip install any-dependency-you-might-need
CMD [ "python", "./my_script.py" ]
b) Write your my_script.py
Insert your application code here. It will be executed in the Python container. And as mongo will be linked, you can use something like client = MongoClient('mongodb://mongo:27017/') to get started.
Run your python container with a link to mongo
a) Build it:
docker build -t my-python-magic .
b) Run it:
docker run -d --name python-magic-container --link my-mongo:mongo my-python-magic
Note: The --link here links the running container named my-mongo so that it can be reached inside python-magic-container under the hostname mongo. That's why you can use it in your Python script.
I hope this helped you - don't hesitate to ask or modify your question if I misunderstood you.
I am a newbie to MongoDB. I am trying to take a backup of my database using mongodump, but whenever I use this command I get the error below:
ReferenceError: mongodump is not defined
I tried creating a new user with all the roles, but I still get the same error. Should I add a specific role to take a backup? Or am I doing something wrong?
'mongodump' is a command/tool which is included in the 'mongodb-tools' package. If you don't have this package installed on your machine, it makes sense that it is not defined. The mongodb-tools package also provides several other tools used for importing and exporting databases (like mongorestore).
That being said, 'mongodump' is not a mongo-shell command, so you shouldn't be using it in the mongo shell. It's a separate command that you execute just like you execute 'mongod' or 'mongo'.
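For example, from your operating system's shell (not the mongo shell), something like this would work once the tools are installed (the database name and output path are placeholders):
mongodump --db mydb --out /path/to/backup
mongorestore --db mydb /path/to/backup/mydb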
One simple way is to right-click mongodump.exe and choose "Run as administrator". It will create a dump folder in the bin directory of your MongoDB home, containing backups of all databases.
If you want to go with commands, open cmd as administrator and go to the bin directory of your MongoDB installation, where you'll be able to run commands such as mongorestore, mongodump, etc. with the intended parameters, e.g. for a specific db or to interact with a remote MongoDB.
Tested on v4.2.3.
Note: many folks try to execute these commands in the mongo shell, where you run queries, which is wrong; these tools need to be used separately.
Here are two simple examples of a full backup, with and without authentication.
mongodump -h hostname -v -u sys_account -p sys_password --authenticationDatabase admin --out folder_location_for_backup
if no authentication
mongodump -h hostname -v --out folder_location_for_backup
Here are the mongorestore commands as well:
mongorestore -h hostname -v -u admin_user -p admin_password --authenticationDatabase admin --dir folder_location_where_backup_is_located
if no authentication
mongorestore -h hostname -v --dir folder_location_where_backup_is_located
For Windows, you need to start the executable file by running the following command:
"C:\Program Files\MongoDB\Server\3.4\bin\mongodump.exe" --db your_database_name
The above command will export your database to a dump folder. This folder will be located where you have kept your "data" folder: if you use the default "data/db" folder then it will be there, but if you use a different location then it will be created there instead.
This command must be run in a normal command prompt, not in the mongo shell. It is an executable, not a mongo shell command. The official docs link is here.
Download the MongoDB Command Line Database Tools. The mongodump tool is part of the MongoDB Database Tools package.
Chances are that mongodump is being run inside the mongo shell.
mongodump should be run directly on the command line or terminal (NOT inside the mongo shell).
To create a dump of a database:
mongodump --db database_name
To create a dump of a single collection of a database:
mongodump --db database_name --collection collection_name
I used mongodump to back up my database since I want to move it from being hosted at compose.io to being hosted locally on the server itself using mupx.
Once I setup the app and have it running, how can I restore the mongodump? I am using mupx, and when I ssh into the server I see that mongodb is inside a docker container.
What are the steps needed to use mongorestore given that I can copy the mongodump files from my local pc to the server.
1) Use scp command to copy the mongodump folder from my local pc to the server
2) SSH into the server
At this point I am logged into the server and am in the same directory as the dump folder. Mongodb is running inside docker. How can I use mongorestore to restore mongodb to the data in the dump folder?
I figured out how to do it. Here are step-by-step instructions.
1) Copy dump folder to server
scp -r /local_path/to/dump_folder root@111.222.33.4:/remote/path
2) SSH into server
ssh root@111.222.33.4
3) Copy from root of server to inside docker container
docker cp dump_folder mongodb:/dump_folder
4) Go into mongodb docker container
docker exec -it mongodb bash
5) check if copied folder exists
ls (you should see dump_folder, if you named it the same folder as in this example)
6) use mongorestore
mongorestore --drop -d AppName dump_folder
For example, say you copied the mongo dump to the /data folder of the server.
When you run the Docker container, you can mount the /data folder into the container:
docker run -v /data:/var/lib/mongodb -p 27017 ....
After that, get a shell inside the Docker container and go to /var/lib/mongodb. You can see the mongo dump there by using the ls command.
Here, you can use mongorestore to restore mongo data.
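For example, from inside the container (assuming the directory created by mongodump is named dump):
mongorestore /var/lib/mongodb/dump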
I deployed my app on a Ubuntu server using mup deploy (https://github.com/arunoda/meteor-up) with the option "setupMongo": true in the mup.json file.
Everything works fine, and I would like to back up the MongoDB database daily to FTP or S3, or to set up a MongoDB replica on another server (to avoid copying the whole database every time, but that seems more complicated).
If deployed with mup, you are in luck.
You can find the steps here: https://github.com/xpressabhi/mup-data-backup
Here are the steps again:
MongoDB Data Backup deployed via mup
These commands work only if Meteor was deployed with the mup tool. Mup creates a Docker container for MongoDB, hence taking a backup becomes easy with these commands.
Backup
Take a backup of the running app's data from Docker, then copy it to a local folder outside of Docker.
docker exec -it mongodb mongodump --archive=/root/mongodump.gz --gzip
docker cp mongodb:/root/mongodump.gz mongodump_$(date +%Y-%m-%d_%H-%M-%S).gz
Copy backup to server
Move data to another server/local machine or a backup location
scp /path/to/dumpfile root@serverip:/path/to/backup
Delete old data from meteor deployment
Get into the mongo console running in Docker, then drop the current database before loading the new data.
docker exec -it mongodb mongo appName
db.runCommand( { dropDatabase: 1 } )
Restore data to meteor docker
docker cp /path/to/dumpfile mongodb:/root/mongodump.gz
docker exec -it mongodb mongorestore --archive=/root/mongodump.gz --gzip
The best way is to mongodump it.
Assuming it's running on the mup instance itself: since it only listens on 127.0.0.1, you would have to SSH in and use mongodump.
If you simply run it:
mongodump
It will create a directory called dump containing your backup.
If you want to do this remotely, you would have to edit /etc/mongodb.conf to ensure it binds globally. You will have to create users, though, since it will be publicly accessible; then set auth to true.
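In the legacy /etc/mongodb.conf shipped with that package, the relevant lines would look roughly like this (a sketch; newer MongoDB versions use a YAML config with net.bindIp and security.authorization instead):
bind_ip = 0.0.0.0
auth = true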
You could then mongodump from your own machine (you can download the mongodump binary from mongodb.org):
./mongodump --host <your server ip address> --username <username> --password <password>
This answer is inspired by:
sheharyar.me/blog/regular-mongo-backups-using-cron
It uses a script to: mongodump -> tar -> wput (ftp)
First, create a bash script:
#!/bin/bash
MONGO_DATABASE="your_db_name"
APP_NAME="your_app_name"
MONGO_HOST="127.0.0.1"
MONGO_PORT="27017"
TIMESTAMP=`date +%F-%H%M`
MONGODUMP_PATH="/usr/bin/mongodump"
BACKUPS_DIR="/home/username/backups/$APP_NAME"
BACKUP_NAME="$APP_NAME-$TIMESTAMP"
# mongo admin --eval "printjson(db.fsyncLock())"
# $MONGODUMP_PATH -h $MONGO_HOST:$MONGO_PORT -d $MONGO_DATABASE
$MONGODUMP_PATH -d $MONGO_DATABASE
# mongo admin --eval "printjson(db.fsyncUnlock())"
mkdir -p $BACKUPS_DIR
mv dump $BACKUP_NAME
tar -zcvf $BACKUPS_DIR/$BACKUP_NAME.tgz $BACKUP_NAME
rm -rf $BACKUP_NAME
wput $BACKUPS_DIR/$BACKUP_NAME.tgz ftp://login:password@ftp.domain.com/backups/
Save it as mongo_backup.sh and run:
chmod +x mongo_backup.sh
bash mongo_backup.sh
Then, to schedule it as a daily cron job, run:
sudo su
crontab -e
And enter this new line:
00 00 * * * /bin/bash /home/username/scripts/mongo_backup.sh
That's it.