How to do a backup of MongoDB [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I think someone has already suggested:
1. Stop the mongod process.
2. Back up the data directory.
Is this reliable? I mean, does it guarantee a 100% successful restore?
Also, I can't find which directory stores the data... is there any command that can help me find it?

If the mongod process exits cleanly (that is, no crashes or kill -9), then it is safe to copy the data files somewhere.
If your current installation breaks (for example, data corruption due to unclean shutdown), you can delete its files, copy that backup over and start mongod again.
Default data directory is /data/db, but it may be set to another value in your config file. For example, I set it to /var/lib/mongodb.
You can also use mongodump to do a backup from a live server (this may impact performance). Use mongorestore to restore backups made by mongodump.
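A sketch of the mongodump approach (the backup location and timestamp format here are just examples, and the mongodump/mongorestore lines are commented out because they need a running server):

```shell
# Timestamped backup directory; /tmp/mongo-backups is a hypothetical location
BACKUP_DIR=/tmp/mongo-backups/$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"
# Dump from the live server into it (this adds load, so avoid peak hours):
# mongodump --out "$BACKUP_DIR"
# Restore later with:
# mongorestore "$BACKUP_DIR"
echo "$BACKUP_DIR"
```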

At IGN we do hot backups via mongodump running as an hourly cron job, and take filer snapshots (NetApp storage) on a half-hourly basis. If your system is not too write-heavy and you can afford to block writes, try using fsync with lock to flush the writes to disk and prevent further writes. This can be followed by mongodump, and upon completion you can unlock the db. Please note that you have to be an admin to do this.
db.runCommand({fsync: 1, lock: 1})
db.fsyncUnlock()  // release the lock once mongodump completes

Related

Good time to shut down Postgres server [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 2 months ago.
When doing a clean shutdown, PostgreSQL has the postmaster cleanly stop everything.
This means the postmaster may take a checkpoint, archive and upload WAL files, etc.
This can be a time-consuming job. I want to know if there is a way to see how many WAL files (or their total size) still need to be archived before sending the kill command to PG.
Is there a way to know or predict the time PG will take to stop by querying some pg tables?
Archiving is not your problem here, because WAL segments are filled one after the other and archived right when they are full. Also, there is no spike in WAL activity during a shutdown. There is only one final WAL segment that gets archived once the shutdown checkpoint is done.
What can take some time is the checkpoint that is executed during shutdown. There is a trick to speed that up: run an explicit checkpoint right before shutdown. Then the final checkpoint will have little to do and can finish quickly:
# explicit checkpoint to speed up the shutdown
psql -c CHECKPOINT
# shutdown the PostgreSQL server
pg_ctl stop
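As for the original question about pending archiving: WAL segments waiting for the archiver leave .ready marker files inside the data directory, so you can count them directly. A sketch, assuming $PGDATA points at the data directory (the subdirectory is pg_xlog/archive_status on servers older than PostgreSQL 10):

```shell
# Count WAL segments still queued for archiving (prints 0 if the path is absent)
ls "$PGDATA/pg_wal/archive_status"/*.ready 2>/dev/null | wc -l
```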

How to install MongoDB after downloading [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
Task:
To install MongoDB after downloading
To Note:
I can't install it from the terminal for certain reasons, and I am using Ubuntu 14.04.
I just don't know what to do after downloading, because there are no executable files or anything.
This is a pretty good article for installing MongoDB step by step.
How-to-install-mongodb-on-ubuntu
Click here for mongoDB tutorials
To start your MongoDB server: mongod
You may get an error that the database directory does not exist. In that case, create your database directory (default path): mkdir -p /data/db and then restart your server. If you did not get any error, skip this part. You can also change your database directory path; the command for that is mongod --dbpath /your/path
Open a new terminal and execute: mongo
If you have any questions, feel free to comment. Good luck.
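The steps above can be sketched as a short session (the mongod and mongo lines are commented out since they need the MongoDB binaries installed; /tmp/mongo-data is just an example path):

```shell
DBPATH=/tmp/mongo-data
mkdir -p "$DBPATH"            # create the data directory first
# mongod --dbpath "$DBPATH"   # start the server against it
# mongo                       # then connect from a second terminal
echo "$DBPATH"
```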

Importing a .dmp file in Oracle 10g Express Edition [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
I am having a problem using the command below on Windows. I have installed the Oracle 10g server on my local machine, which I am able to connect to using a client IDE.
When I try to use the command below to import a dump file into my local DB
"imp system/ file=tms.dump log=test.log" in the command prompt
where the imp binary and the dump file are located in
"C:\oraclexe\app\oracle\product\10.2.0\server\bin"
I get the error below:
error: unable to write logfile
I do not know how to create the log file.
Thanks
The most likely reason for the error is that you are using an account which doesn't have write privileges on bin. You haven't specified a path, so like most utilities, imp will write its log files to the current directory.
bin is traditionally the sub-directory for holding executables. It is a very bad idea to use it for storing application data such as dump files.
Instead you should be working from a different location, ideally some sub-directory which you use solely for storing dump files. Either way, it must be a directory for which your user has write privileges.
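A sketch of that fix (the directory name is an example, and the imp line is commented out because it needs an Oracle client installed):

```shell
# Work from a directory your user can write to, not the Oracle bin directory
DUMP_DIR="$HOME/dumps"
mkdir -p "$DUMP_DIR"
cd "$DUMP_DIR"
# imp system/ file=tms.dump log=test.log   # the log file now lands somewhere writable
pwd
```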

Does mongodump lock the database?

I'm in the middle of setting up a backup strategy for Mongo and was just curious whether mongodump locks the database before performing the dump.
I found this on mongo's google group:
Mongodump does a simple query on the live system and does not require
a shutdown. Like all queries it requires a read lock while running but
doesn't block any more than normal queries.
If you have a replica set you will probably want to use the --oplog
flag to do your backups.
See the docs for more information
http://docs.mongodb.org/manual/administration/backups/
Additionally I found this previously asked question
MongoDB: mongodump/restore vs. backup up files directly
Excerpt from above question
Locking and copying files is only an option when you don't have heavy
write load.
mongodump can be run against live server. It will create some
additional load, so don't do it on peak hours. Also, it is advised to
do it on a secondary node (if you don't use replica sets, you should).
There are some complications when you have a DB so large that no
single machine can hold it. See this document.
Also, if you have a replica set, you can take down one of the secondaries and copy its files directly. See http://www.mongodb.org/display/DOCS/Backups:
Mongodump does not lock the db, which means other read and write operations will continue normally.
Actually, both mongodump and mongorestore are non-blocking. So if you want to mongodump/mongorestore a db, it's your responsibility to make sure that it is really the desired snapshot backup/restore. To do this, you must stop all other write operations while taking or restoring backups with mongodump/mongorestore. If you are running a sharded environment, it's also recommended that you stop the balancer.
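A sketch of a dump and restore that keeps the caveats above in mind (the commands are commented out since they need a live deployment; the host name and output path are examples):

```shell
# Date-stamped output directory for the dump
OUT=/tmp/backup-$(date +%Y-%m-%d)
# Dump from a secondary with the oplog for a point-in-time snapshot:
# mongodump --host secondary.example:27017 --oplog --out "$OUT"
# Replay the captured oplog on restore so the snapshot stays consistent:
# mongorestore --oplogReplay "$OUT"
echo "$OUT"
```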

How to get rid of "Device busy" during reboot, redhat 5.1 without modifying rc.sysinit? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I have a samba mount located within /opt. I have a script in init.d called sysinit that is linked to in rc6.d. This gets called on a reboot (the first thing, I set it to K01sysinit) and it is supposed to unmount the /opt directory. However, on reboot I see that it is failing from the commands in the rc.sysinit file. When I manually run my sysinit script and then reboot, everything works fine. Am I running into some sort of race condition here where the rc.sysinit umount command is getting run before the other script is done unmounting /opt, or is something else going on? Or do I not understand how run levels work? I thought that what happened on a reboot is that the stuff from rc6.d is run first and then the unmounting from rc.sysinit occurs.
The solution I found was that I needed to create a lock file in /var/lock/subsys so that the rc.sysinit file knew that the service I created was "running". Without that, it would never create the KXXsysinit symlinks necessary so that my script would be run with a "stop" command on shutdown or reboot.
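A minimal skeleton of that trick (on RHEL the lock path would be /var/lock/subsys/sysinit; /tmp is used here only so the sketch runs without root, and the umount line is a commented placeholder):

```shell
#!/bin/sh
# Skeleton init script: rc treats the service as "running" while the lock file exists
LOCK=/tmp/subsys-sysinit
case "${1:-start}" in
  start)
    touch "$LOCK"        # created on start so rc knows to run "stop" at shutdown
    ;;
  stop)
    # umount /opt        # unmount the samba share here (needs root)
    rm -f "$LOCK"
    ;;
esac
```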