connecting to remote mongo db server and running clone - mongodb

I am migrating from server OLD at the old hosting company to server NEW at the new hosting company.
I want to run the clone command so I clone the mongoDB from OLD to NEW.
For OLD:
The public ip address is: 44.55.66.77.
The machine login user name is: admin, and the password is password
What is the right way to do this?
So far I can't even log into server OLD.
I have tried the following commands on NEW:
mongo -u admin -p password 44.55.66.77
mongo remote-ip:44.55.77.66 -u admin -p password
Those don't work.
I also tried this from mongo shell:
db.copyDatabase('OldDb', 'NewDb', '44.55.66.77', 'admin', 'password')
and I get a "could not connect to server" error message.
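(For reference, the usual mongo shell syntax for reaching a remote server looks something like the line below; this assumes mongod on OLD listens on the default port 27017 and that admin/password are MongoDB credentials defined in the admin database, not just the machine's OS login.)
mongo --host 44.55.66.77 --port 27017 -u admin -p password --authenticationDatabase admin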

Aside from the firewall considerations involved in copying data between MongoDB servers, db.copyDatabase() (aka the copydb command) has a number of important usage caveats, including:
copydb does not produce point-in-time snapshots of the source database; writing data to the source or destination database during the copy process will result in divergent data sets
copydb does not lock the destination server during its operation, so the copy will occasionally yield to allow other operations to complete.
There is also a known issue that copydb may not work with the role-based privileges in MongoDB 2.4 if you have authentication enabled (see SERVER-8213, which was recently fixed in the 2.5.x development releases).
A much better approach to migrating your data would be to restore from a normal backup using mongodump/mongorestore or file system snapshots. The Backup & Recovery section of the MongoDB manual has tutorials covering procedures for different deployment types.
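For example, a minimal mongodump/mongorestore migration run from NEW could look like the sketch below (the database names, port, and credentials are taken from the question and are only assumptions; adjust them to your deployment):
# dump OldDb from the OLD server over the network
mongodump --host 44.55.66.77 --port 27017 -u admin -p password --authenticationDatabase admin -d OldDb -o ./dump
# restore it into NewDb on the local (NEW) server
mongorestore -d NewDb ./dump/OldDb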

Related

MongoDB restore from file backup

I have a backup of /data/db that contains all the .wt files along with the journal directory etc. I stopped the db, replaced the current db directory with the backed-up one, and started the db. This works: Mongo starts up, but when I run "show databases" there are no results. The local machine (that was backed up) did not have authentication enabled. The machine I am using to attempt the restore does have it enabled, yet I am able to start the mongo shell without authenticating.
Is there another step to this process?
Is the authentication difference an issue?
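For what it's worth, the procedure described above usually amounts to something like this sketch (the service name, backup path, and data-directory ownership are assumptions that depend on how mongod was installed):
sudo systemctl stop mongod                        # stop the running instance
sudo rsync -a /path/to/backup/data/db/ /data/db/  # replace the dbPath contents with the backup
sudo chown -R mongodb:mongodb /data/db            # mongod must own its data files
sudo systemctl start mongod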

Database transfer from Heroku to Digital Ocean

I'm currently in the process of switching my cloud server from Heroku to DigitalOcean. However, is there a way to migrate the database from the Heroku server to the DigitalOcean one? I use PostgreSQL for my database.
I hope you already got a solution, but in case you didn't, I'll provide a simple guide on how I did it. I am going to assume that you have already created a Postgres database on DigitalOcean. You also need to navigate to your project directory and log in to Heroku using the Heroku CLI, and you need PostgreSQL installed or at least a psql client (installing PostgreSQL covers this, as it comes with psql).
Step 1: Create a backup and download the backup from heroku postgres
heroku pg:backups:capture --app <app_name>
heroku pg:backups:download --app <app_name>
The first command will create a backup of your database and the second will download it to your current directory; it's a .dump file. If you would like to read more, here is an article.
Step 2: Connect to your remote (digital ocean’s) database using psql
Before you can do this, you need to add the machine you are connecting from to the database's list of trusted sources. If you don't, you'll get a Connection Timed Out error, because the database's firewall doesn't allow connections to the database from your local machine or resource (for security reasons).
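Once your IP is on the trusted-sources list, you can test the connection with psql using the connection parameters from your DigitalOcean dashboard, along the lines of:
psql "postgresql://<database_username>:<database_password>@<host>:<port>/<database>?sslmode=require"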
Step 3: Import the Database
pg_restore -d "postgresql://<database_username>:<database_password>@<host>:<port>/<database>?sslmode=require" --jobs 4 -c "/path/to/dump_file.dump"
This will import your database from your dump file. Just substitute the variables with the connection parameters that you get from your dashboard. If you would like to read more, here is another article for this step.
One more thing to make clear: sometimes you will see some harmless error messages when running this command, but it will push through anyway. To learn more about pg_restore, read this article.
And that's it, your database has been migrated. Now, how can you confirm it worked? As for me, I used pgAdmin to connect to the remote database and I saw the tables and data as expected.
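If you prefer the command line over pgAdmin, a quick sanity check is to list the tables over the same connection string, for example:
psql "postgresql://<database_username>:<database_password>@<host>:<port>/<database>?sslmode=require" -c "\dt"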
Hope this helps anyone with the same problem :)

MongoDB- backing up and restoring users and roles

What are best practices for synching users and roles between Mongo instances?
On the same Windows machine, I am trying to copy MongoDB users and roles in the admin database from one Mongo instance to another. Authentication is 'on' for each instance. No combination of mongodump/mongorestore or mongoexport/mongoimport I have tried works. With mongodump/mongorestore, the restore step displays:
assuming users in the dump directory are from <= 2.4 (auth version 1)
Failed: the users and roles collections in the dump have an incompatible auth version with target server: cannot restore users of auth version 1 to a server of auth version 5
I found no command line option to tell it not to do this silly thing. I only have Mongo version 4 installed.
You would think --dumpDbUsersAndRoles and --restoreDbUsersAndRoles would be symmetrical, but they are not.
I was able to run this,
mongoexport -p 27017 -u admin --password please -d admin --collection system.roles --out myRoles.json
However, when trying mongoimport
mongoimport -p 26017 -u admin --password please -d admin --collection "system.roles" --file myRoles.json
the output displays
error validating settings: invalid collection name: collection name 'system.roles' is not allowed to begin with 'system.'
Primer
Users are attached to databases. Ideally, you have your database specific users stored in the respective database. All “global” users should go into admin. The good part: replica sets take care of syncing those users to each member of the replica set.
Solution
That being said, the way to deal with this seems fairly obvious.
For a worst case scenario, it is much easier to have a .js file ready which simply recreates the 3-4 global roles instead of fiddling with the system.* collections in the admin database. This has the advantage that you can also do other setup automatically, like sharding setup if TSHTF and you need to rebuild your cluster from scratch.
// select the admin database; in a .js file, use getSiblingDB() rather than the interactive-only "use" helper
db = db.getSiblingDB("admin");
db.createRole({ /* role definition */ })
db.createRole({ /* role definition */ })
// do other stuff, like sharding setup
Run it against the primary of your replica set or a mongos instance (if you have a sharded cluster) using
mongo daHost:27017/admin myjsfile.js
after you set up your machines but before you enable authentication.
Another option would be to use Ansible for user creation.
As for dump and restore, you might want to leave out the collection name.
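One way to read that last suggestion is to dump and restore the whole admin database rather than naming a system.* collection, e.g. (ports and credentials as in the question; this is only a sketch and not a guaranteed fix for the auth-version mismatch):
mongodump --port 27017 -u admin -p please --authenticationDatabase admin -d admin -o ./admin-dump
mongorestore --port 26017 -u admin -p please --authenticationDatabase admin -d admin ./admin-dump/admin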

Migrate mongodb to Google Cloud Compute Cluster

I have a mongodb in a replica set running with a cloud provider called compose.io.
I just created a new Google cloud compute mongodb cluster using these instructions
I want to copy all the data in my compose database to the compute instance.
One path I have been following has led me to get a file system backup of the running database and store it locally. I have opened that database locally and executed mongodump (I didn't seem to have permission to do that against the remote database) so I have the output of mongodump and a file system copy of the database stored on my machine.
I have no idea how to get any of this into the compute cluster I created. I don't seem to be able to run mongorestore, although figuring that out is still my main path at the moment. I am getting authentication errors, which may mean I'm not getting the command right, or it may be a database configuration issue. I am not sure yet.
I tried mongorestore from my local machine to the machine holding the primary database in the replica set.
Edit:
The last thing I tried was to scp the mongodump output onto that machine and run mongorestore there.
I got this error:
2015-01-28T23:35:40.303+0000 Creating index: { key: { _id: 1 }, ns: "admin.system.users", name: "_id_" }
Error creating index admin.system.users: 13 err: "not authorized to create index on admin.system.users"
Aborted
Now I don't seem to be able to run any commands in mongo that require any kind of privileges, such as listing databases. I tried passing credentials for users that existed in the original database, but that is not working so far.
Here is one possible fix.
Turn off auth in mongod.conf:
# mongod.conf
#auth=false
noauth=true
Run the mongorestore, then restart mongod with auth enabled.
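Roughly, the sequence is the one sketched below (the service name and dump path are assumptions; adjust them for your setup):
# with noauth=true set in mongod.conf
sudo service mongod restart
mongorestore ./dump          # no credentials needed while auth is off
# re-enable auth in mongod.conf (remove noauth=true), then
sudo service mongod restart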

Remote trigger a postgres database backup

I would like to back up my production database before and after running a database migration, triggered from my deploy server (not the database server). I've got a PostgreSQL 8.4 server sitting on a CentOS 5 machine. The website accessing the database is on a Windows 2008 server running an MVC.Net application; it checks out changes from source control, compiles the project, runs any DB changes, then deploys to IIS.
I have the DB server set up with a crontab job for daily backups, but I also want a way of triggering a backup from the deploy server during the deploy process. From what I can figure out, there isn't a way to tell the database from a client connection to back itself up. If I call pg_dump from the web server as part of the deploy script it will create the backup on the web server (not desirable). I've looked at the COPY command, and it probably won't give me what I want. MS SQL Server lets you call the BACKUP command from within a DB connection, which puts the backups on the database machine.
I found this post about MySQL, and that it's not a supported feature in MySQL. Is Postgres the same? Remote backup of MySQL database
What would be the best way to accomplish this? I thought about creating a small application that makes an SSH connection to the DB Server, then calls pg_dump? This would mean I'm storing SSH connection information on the server, which I'd really rather not do if possible.
Create a database user pgbackup and assign him read-only privileges to all your database tables.
Setup a new OS user pgbackup on CentOS server with a /bin/bash shell.
Log in as pgbackup, create a pair of SSH authentication keys without a passphrase, and allow this user to log in using the generated key:
su - pgbackup
ssh-keygen -q -t rsa -f ~/.ssh/id_rsa -N ""
cp -a ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Create a file ~pgbackup/.bash_profile:
exec pg_dump databasename --file=`date +databasename-%F-%H-%M-%S-%N.sql`
Set up your script on Windows to connect using ssh and authenticate using the private key. It will not be able to do anything besides creating a database backup, so it would be reasonably safe.
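The trigger from the deploy server is then just an SSH login as that user (via OpenSSH or plink on Windows); the exec in .bash_profile runs pg_dump on the database server and the session ends when the dump finishes. The key path and hostname below are placeholders:
ssh -i /path/to/pgbackup_private_key pgbackup@centos-db-server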
I think this could be possible if you create a trigger that uses the PostgreSQL module dblink to make a remote database connection from within PL/pgSQL.
I'm not sure what you mean but I think you can just use pg_dump from your Windows computer:
pg_dump --host=centos-server-name > backup.sql
You'd need to install the Windows version of PostgreSQL there so that pg_dump.exe is available, but you don't need to start the PostgreSQL service or even create a tablespace there.
Hi Mike, you are correct.
Using pg_dump we can save the backup only on the local system. In our case we created a script on the DB server for taking the base backup, and an expect script on another server which runs that script on the database server.
All our servers are Linux servers; we did this using shell scripts.