Mongodump spontaneously failed: "error dumping metadata" - mongodb

My remote mongodump backup script worked for months until today. I'm suddenly getting this error:
Failed: error dumping metadata: error converting index (<nil>): conversion of BSON value '2' of type 'bson.Decimal128' not supported
mongodump fails on my remote backup server. However, when I run mongodump on the server where my production database lives, it works. Both servers use the exact same version of mongodump:
mongodump version: r3.4.1
git version: 5e103c4f5583e2566a45d740225dc250baacfbd7
Go version: go1.7
os: linux
arch: amd64
compiler: gc
The only place I've found any reference to this error is a Chinese blog (http://blog.5ibc.net/p/102326.html). However, their problem was that they were using an old version of mongo.
Does anyone know what went wrong or how to fix this?

Solved. The versions of mongodump on the production server and the backup server were indeed the same. However, my script was actually executing mongodump on the jump server that connects the backup server to the production server, and that jump server had an out-of-date version of the MongoDB tools. I don't know why it only started failing yesterday after running for months, but it worked after updating the tools.
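A minimal way to rule this out is to compare the tool version on every machine involved, including any intermediate hosts the script runs on (the hostnames below are placeholders, not the ones from my setup):
# run locally and over SSH on each hop, then compare the reported versions
mongodump --version
ssh user@jump-host 'mongodump --version'
ssh user@production-host 'mongodump --version'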

Related

mongorestore failed to restore indexes for large data set

I took a dump from my server; one collection has 65.8M records and the whole DB has about 74M. The gzipped dump for this DB comes to around 4.26GB.
When I downloaded this gzipped archive dump file to one of my local machines (Windows) and ran mongorestore on it, it restored all the data successfully, but while restoring the indexes it gave me the following exception:
Failed: leads.business: error creating indexes for leads.business: createIndex error: connection(localhost:27017[-5]) unable to decode message length: read tcp 127.0.0.1:51636->127.0.0.1:27017: i/o timeout
2020-05-21T00:15:23.128+0500 74181602 document(s) restored successfully. 0 document(s) failed to restore.
So I searched for this exception and found that the issue had been fixed; the MongoDB JIRA ticket is https://jira.mongodb.org/browse/TOOLS-2394.
Then I tried restoring the same dump onto my Ubuntu machine, which has the latest MongoDB version. It didn't give me any exception but seemed to hang. I waited for more than half an hour, longer than the complete DB restore had taken, but mongorestore never responded; the cursor kept blinking as if something were being processed, but nothing happened. I tried this twice on Ubuntu.
Important Information:
MongoDB Server Details:
OS: Ubuntu
Description: Ubuntu 18.04.3 LTS
Release: 18.04
Codename: bionic
MongoDB Version: 4.2.6
MongoDB Local Machine Details (Windows):
OS: Windows 10
MongoDB Version: 4.2.0
MongoDB Local Machine Details (Ubuntu):
OS: Ubuntu
Description: Ubuntu 20.04 LTS
Release: 20.04
Codename: Focal Fossa
MongoDB Version: 4.2.6
After updating the Windows machine to MongoDB 4.2.6 (to avoid the exception I was getting earlier, as per ticket https://jira.mongodb.org/browse/TOOLS-2394), I ran the restore again.
Logs:
2020-05-21T04:04:16.107+0500 leads.business 27.6GB
2020-05-21T04:04:18.835+0500 leads.business 27.7GB
2020-05-21T04:04:18.849+0500 restoring indexes for collection leads.business from metadata
As can be seen, it started restoring indexes at 2020-05-21T04:04:18.849+0500, and it's now 4:54, so almost an hour has passed.
It can be verified from the screenshot as well.
If anyone else faced this issue please share your thoughts. Thanks!
It worked fine on MongoDB 4.2.6; it's just taking a lot of time. Please find the logs below.
2020-05-21T03:25:19.233+0500 preparing collections to restore from
....
....
....
2020-05-21T04:04:18.835+0500 leads.business 27.7GB
2020-05-21T04:04:18.849+0500 restoring indexes for collection leads.business from metadata
2020-05-21T05:14:01.598+0500 finished restoring leads.business (65803772 documents, 0 failures)
2020-05-21T05:14:01.658+0500 74181602 document(s) restored successfully. 0 document(s) failed to restore.
As can be seen, the collection restore (excluding indexes) starts at 03:25:19 and ends at 04:04:18.
The index restore started at 04:04:18 and ended at 05:14:01.
Conclusion: restoring the indexes takes more time than restoring the whole collection, but it works fine.
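If the long index build is a problem, one possible workaround (not part of the run above, just a sketch assuming a gzipped archive like the one described here, with a placeholder file name) is to skip index creation during the restore and build the indexes separately afterwards:
# restore the data only, skipping the index builds
mongorestore --gzip --archive=leads.archive.gz --noIndexRestore
# then create the needed indexes later from the mongo shell, when convenient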

Restoring PostgreSQL database without having a dump just the database files

My hosting provider upgraded my Ubuntu server and it no longer boots. The only way I can still access my data is read-only via a rescue environment (SSH shell).
I am running a Postgres 9.1 installation on the crashed server. I am not able to start the Postgres server in the rescue environment, and I do not have a database dump created with pg_dump either.
However, I was able to copy the whole /var/lib/postgresql folder to a new machine. I installed Postgres 9.1 on this machine and afterwards replaced its /var/lib/postgresql with my old files.
When I start the postgres server, I get something like "incorrect checksum in control file".
Is there any way to restore the database content without using pg_dump (since I don't have a current dump and I am not able to run it on the defective machine)?
Indeed it was a 32-bit vs. 64-bit issue. Initially I had tried to restore the data on a 64-bit machine, but I had another old server running 32-bit Ubuntu, and there it simply worked by copying the Postgres main directory. Finally I was able to log into the database and create a dump.
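A rough sketch of that file-level restore, assuming Postgres 9.1 is already installed on a machine with the same architecture as the crashed server (paths are the Ubuntu defaults; the source directory name is a placeholder):
# stop the freshly installed cluster and replace its data directory with the rescued files
sudo service postgresql stop
sudo rsync -a rescue-copy/postgresql/ /var/lib/postgresql/
sudo chown -R postgres:postgres /var/lib/postgresql
sudo service postgresql start
# once it comes up, take a proper logical dump
sudo -u postgres pg_dumpall > all-databases.sql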

mongorestore from MongoDB to DocumentDB

I am trying to run mongorestore.exe to load a DB dump of collections into a DocumentDB database. I have experience with MongoDB and Azure but not much with DocumentDB.
I am getting an error
error parsing command line options: unknown option "ssl"
if I use the command from this tutorial.
I have locally installed MongoDB Community Server ("Windows Server 2008 R2 64-bit and later, with SSL support x64") at the latest stable version, 3.2.4.
It looks like the --ssl option might not be available since version 3 (link).
However, SSL is enforced by DocumentDB.
Any idea how to migrate an existing database from MongoDB to DocumentDB? The DB is quite large (~GB), so mongoimport would take too long; we need to use mongorestore, I believe.
Updated, command example:
mongorestore.exe --host myhost123.documents.azure.com:10250 -u myhost123 -p somepassword== --db myhost123 --ssl --sslAllowInvalidCertificates
gives me this error:
error parsing command line options: unknown option "ssl"
If I remove the two ssl options (--ssl --sslAllowInvalidCertificates) I get back an error which kind of makes sense as SSL is enforced on Azure DocumentDB:
Failed: error connecting to db server: no reachable servers
According to your description, I installed MongoDB Community Server ("Windows Server 2008 R2 64-bit and later, with SSL support x64", version 3.4.3) and followed the official tutorial about importing data into the API for MongoDB with mongorestore. After some trials, I could restore my local files generated by mongodump to my Azure DocumentDB (MongoDB API) that way.
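The command has the same shape as the one you posted; a sketch with every account-specific value replaced by a placeholder (host, port, credentials, database, and dump path are all placeholders):
mongorestore.exe --host <account>.documents.azure.com:<port> -u <account> -p <password> --ssl --sslAllowInvalidCertificates --db <database> <path-to-dump>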
Failed: error connecting to db server: no reachable servers
I also encountered this error and found that it was caused by IP Access Control: my current IP was not included in the allowed list of client IP addresses. You can find more details about the DocumentDB firewall here.

Mongodump Fails Frequently After upgrade from 2.6 to 3.0

I'm in the process of upgrading our database to Mongo 3.0, and I'm at the step of upgrading our daily backup process from mongodump 2.6.1 to 3.0.1, which has greater performance due to parallelized collection downloads.
I'm running into an issue where the mongodump fails midway through with the error
....
2015-04-10T00:42:54.606+0000 [##############..........] XXX.XXXXXXX 6804841/11236617 (60.6%)
2015-04-10T00:42:57.352+0000 Failed: error reading collection: Closed explicitly.
Out of 8 attempts, 6 of them failed, and 2 of them went through fine. I've been unable to find anything else online about this particular error.
The entire mongodump is around 1TB in size, with thousands of collections. The failure happens somewhere in the middle. The mongodump does actually start up, as many .bson files start accumulating on disk, and I can see the progress in the output of mongodump.
When running the same setup against a 150GB Mongo 2.4 instance it seems to go fine, though it likely hasn't been running long enough to hit the error.
The Mongo database version I'm dumping from is 2.4; we're planning on upgrading 2.4 -> 2.6 -> 3.0, so we wanted to upgrade the mongodump tool in advance, hoping it would work fine against 2.4 and 2.6.
The current backup servers are using mongodump 2.6.1 against the 2.4 Mongo databases, and they have been humming along fine with 100% reliability in the mongodump stage of the backup pipeline.
The mongodump backup servers (Google Compute Engine VMs) are separate machines from the Mongo servers (bare-metal servers), and the Mongo servers are behind a firewall. So we establish an SSH tunnel between the two machines, then perform a mongodump against the forwarded --port. It looks like so:
ssh -M -N -L 1234:localhost:27017 <remote_ip>
mongodump --port 1234 --username XXX --password XXX --out /tmp/dir
Can anyone give me some hints as to what might be going on? We will need to use mongodump 3.0 when our mongo databases are fully upgraded to 3.0.
UPDATE: Another error I'm getting is
2015-04-14T22:56:37.939+0000 Failed: error reading collection: read tcp XXX.X.X.X:XXXXX: use of closed network connection
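Both errors suggest the connection to mongod is being closed partway through the dump. One thing worth ruling out (purely an assumption, not a confirmed fix) is the SSH tunnel itself dropping the forwarded connection; keep-alive options can be added to the tunnel command shown above:
# same tunnel as above, with keep-alives so the forwarded connection isn't silently dropped
ssh -M -N -o ServerAliveInterval=30 -o ServerAliveCountMax=6 -L 1234:localhost:27017 <remote_ip>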

mongodump assertion 17369

Folks,
When running a mongodump command, I get the following error:
assertion: 17369 Backing up users and roles is only supported for clusters with auth schema versions 1 or 3, found: 5
Any suggestions on how to address this? MongoDB v2.8.
Your version of mongodump might be too old. Running a v2.6 client against a v2.8+ server (with the new auth schema) will give this error.
In my case I was running MongoDB v3.0 on the server while trying to make a dump with a MongoDB v2.6 client. After upgrading mongodb-org-tools to v3.0 on my laptop, the problem went away.
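A quick way to check and fix this on a Debian/Ubuntu client, assuming the mongodb-org packages mentioned above (the auth schema query is run against the server from the mongo shell):
# compare the client tool version with the server
mongodump --version
# on the server, the cluster's auth schema version is stored in admin.system.version
mongo admin --eval 'db.system.version.findOne({_id: "authSchema"})'
# upgrade just the tools package on the client
sudo apt-get install --only-upgrade mongodb-org-tools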