MongoDB writeback exception

Our production MongoDB cluster is a sharded cluster with 3 replica sets of 3 servers each and, of course, another 3 config servers.
We also have 14 web servers (clients) that connect directly to MongoDB through a mongos process running on each of them.
The entire cluster receives 5000 inserts per minute.
Sometimes we start getting exceptions from our Java applications when they perform operations against MongoDB.
This is the stack trace:
caused by com.mongodb.MongoException: writeback
com.mongodb.CommandResult.getException(CommandResult.java:100)
com.mongodb.CommandResult.throwOnError(CommandResult.java:134)
com.mongodb.DBTCPConnector._checkWriteError(DBTCPConnector.java:142)
com.mongodb.DBTCPConnector.say(DBTCPConnector.java:183)
com.mongodb.DBTCPConnector.say(DBTCPConnector.java:155)
com.mongodb.DBApiLayer$MyCollection.insert(DBApiLayer.java:270)
com.mongodb.DBApiLayer$MyCollection.insert(DBApiLayer.java:226)
com.mongodb.DBCollection.insert(DBCollection.java:147)
com.mongodb.DBCollection.insert(DBCollection.java:90)
com.mongodb.DBCollection$insert$0.call(Unknown Source)
If I check the mongos process through the REST _status command it provides, it returns 200 OK. We can work around the problem by restarting the Tomcat we are using and restarting the mongos process, but I would like to find a definitive solution. Having to restart everything in the middle of the night is not a happy solution.
When this error happens, 2 or 3 other web servers get the same error at the same time, so I imagine there is a problem in the entire MongoDB cluster, not a problem in a single isolated web server.
Does anyone know why Mongo returns a writeback error, and how to fix it?
I'm using MongoDB 2.2.0.
Thanks in advance.
Fer

I believe you are seeing the writeback error "leak" into the getLastError output and then continue to be reported even when the operation in question has not errored. This was an issue in earlier versions of MongoDB 2.2 and has since been fixed; see:
https://jira.mongodb.org/browse/SERVER-7958
https://jira.mongodb.org/browse/SERVER-7369
https://jira.mongodb.org/browse/SERVER-4532
As of writing this answer, I would recommend 2.2.4, or basically whatever the latest release on the 2.2 branch is, to resolve your problem.
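For context on where the error surfaces: in the 2.x Java driver, any write performed with an acknowledged write concern is followed by a getLastError call, and that is where the leaked writeback status turns into the MongoException in your stack trace. A minimal sketch against the old 2.x API (host and collection names are illustrative):

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.Mongo;
import com.mongodb.MongoException;
import com.mongodb.WriteConcern;

public class WritebackDemo {
    public static void main(String[] args) throws Exception {
        // Connect through a local mongos (illustrative address).
        Mongo mongo = new Mongo("localhost", 27017);
        DBCollection coll = mongo.getDB("mydb").getCollection("events");
        try {
            // With an acknowledged write concern the driver issues getLastError
            // after the insert; on affected 2.2.x servers the spurious
            // "writeback" status leaks into that response and is thrown here,
            // even though the insert itself went through.
            coll.insert(new BasicDBObject("msg", "hello"), WriteConcern.SAFE);
        } catch (MongoException e) {
            System.err.println("getLastError reported: " + e.getMessage());
        } finally {
            mongo.close();
        }
    }
}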

Related

Mongodb localhost:27017 replica set switched to secondary localhost:27027

I'm fairly new to MongoDB, just a couple of months in. I just converted my MongoDB database to a replica set so I can watch collections. I only added one secondary, which I'm now guessing may not be the best choice after reading that you should create an odd number of members, but it all runs on localhost on one machine. I went through the instructions and got replication working fine for half a day while running my programs. But for some reason it recently switched the database on port 27017 from primary to secondary. The primary was previously on localhost:27017 and the secondary on localhost:27027. Now my normal program can't connect to localhost:27017 without an error, which I believe is because it is now a secondary when it was the primary before, assuming you can only connect to a primary. Here is the error message:
Exception in thread "main" com.mongodb.MongoNotPrimaryException: Command failed with error 10107 (NotWritablePrimary): 'not master' on server localhost:27017.
I'm perplexed why MongoDB switched the replica set primary in the first place. I doubt an error occurred, though it's certainly possible; I haven't had a single localhost error in months of development. So my questions:
1. Why did MongoDB switch the primary in the first place?
2. For now, how would I switch 27017 back to being the primary, so my existing programs can function again?
3. Eventually, in production, what is the best methodology to handle this, assuming a lookup of a DNS entry to an IP address and the primary suddenly changing because of a failover?
4. Given question 3 is a bit more involved, is there something I can do in my development environment to better simulate a production environment?
I use Stack Overflow extensively but this is my first post, so thanks to anyone who can provide advice.
Without knowing more about the replica set configuration and the circumstances of the switchover, I'm not sure anyone could confidently answer question 1, but it may not be important compared to question 3.
When you want to manually switch the primary, you can manipulate the member priority settings:
https://docs.mongodb.com/manual/tutorial/force-member-to-be-primary/
Or run manual commands to freeze or step down the current primary:
https://docs.mongodb.com/manual/tutorial/force-member-to-be-primary/#force-a-member-to-be-primary-using-database-commands
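As a sketch of the second approach: the replSetStepDown command can be issued from any driver as well as the shell. For example, with the Java driver (the 120-second step-down window is illustrative):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

public class StepDownPrimary {
    public static void main(String[] args) {
        // Connect to the current primary (address illustrative).
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            // Ask the primary to step down for 120 seconds; during that window
            // another electable member (e.g. localhost:27027) can become primary.
            client.getDatabase("admin").runCommand(new Document("replSetStepDown", 120));
        } catch (Exception e) {
            // Older servers close all connections on step-down, so the command
            // can "fail" with a network error even when it actually worked.
            System.out.println("Step-down request sent: " + e.getMessage());
        }
    }
}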
The safest option is to ensure your application is aware of all members of the replica set. Then, when something unexpected happens, the application will fail over to a writable node without any issues.
https://mongodb.github.io/mongo-java-driver/3.4/driver/tutorials/connect-to-mongodb/#connect-to-a-replica-set
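For example, a minimal replica-set-aware connection with the Java driver might look like this (host names and the replica set name rs0 are illustrative):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;

public class ReplicaSetConnect {
    public static void main(String[] args) {
        // List every member as a seed and name the replica set explicitly; the
        // driver then tracks elections and routes writes to whichever member
        // is currently primary, so a failover does not break the application.
        try (MongoClient client = MongoClients.create(
                "mongodb://localhost:27017,localhost:27027/?replicaSet=rs0")) {
            MongoDatabase db = client.getDatabase("test");
            db.listCollectionNames().forEach(System.out::println);
        }
    }
}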
I can only suggest setting up some VMs or containers as replica set members to better represent a production environment.
https://hub.docker.com/_/mongo
I was able to solve the problem by using a connection string that included both replica set members, which I was unaware I needed to do. For example, in Java:
mongoClient = MongoClients.create("mongodb://localhost:27017,localhost:27027");
This also worked for MongoDB Compass, so I was able to connect to the secondary database. I didn't know you needed to provide the addresses of all replica set members when connecting, but in retrospect it makes good sense in case something is down.
If you need a replica set for testing, you can create a single-node replica set: follow the instructions for creating a replica set but only add one node.
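A sketch of initiating such a single-node set from the Java driver, assuming the lone mongod was started with --replSet rs0 (the set name and port are illustrative, and rs.initiate() in the mongo shell achieves the same thing):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;
import java.util.List;

public class SingleNodeReplicaSet {
    public static void main(String[] args) {
        // directConnection=true (recent drivers) talks to the member directly,
        // which is required before the replica set has been initiated.
        try (MongoClient client = MongoClients.create(
                "mongodb://localhost:27017/?directConnection=true")) {
            Document config = new Document("_id", "rs0")
                    .append("members", List.of(
                            new Document("_id", 0).append("host", "localhost:27017")));
            Document result = client.getDatabase("admin")
                    .runCommand(new Document("replSetInitiate", config));
            System.out.println(result.toJson());
        }
    }
}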

Problems with clustered Mongodb 4.x multi-document transactions

I have a problem with my MongoDB cluster when it comes to using multi-document transactions.
I have a 5-server MongoDB replica set divided across three datacenters: two servers in the first datacenter, two in the second, and one (an arbiter) in the third. At any given time one of the servers is the primary and the others are secondary replicas.
I wrote an application in Java using Spring Boot, and I use multi-document transactions in Mongo massively.
Everything works fine when all the DB servers are up. But when I wanted to test high availability by eliminating one of the datacenters, I encountered strange problems: my application started to hang on each transaction (I need to wait about a minute, and then I get a timeout from the database), but it still works fine when transactions are not used :-(.
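For reference, the transactions follow the standard session pattern; below is a minimal sketch of that pattern with the plain Java driver (host, database, and collection names are illustrative, not my actual code):

import com.mongodb.client.ClientSession;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class TransactionSketch {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create(
                "mongodb://host1.example.com:27017,host2.example.com:27017/?replicaSet=rs0")) {
            MongoCollection<Document> orders =
                    client.getDatabase("shop").getCollection("orders");
            MongoCollection<Document> audit =
                    client.getDatabase("shop").getCollection("audit");
            try (ClientSession session = client.startSession()) {
                // Both inserts commit or abort together; withTransaction retries
                // transient errors but cannot succeed if the cluster can no
                // longer majority-commit writes.
                session.withTransaction(() -> {
                    orders.insertOne(session, new Document("item", "abc"));
                    audit.insertOne(session, new Document("event", "order-created"));
                    return null;
                });
            }
        }
    }
}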
Below is the exception I get in my application:
2020-07-06 16:58:51.748 ERROR 6 --- [0.0-5555-exec-4] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.data.mongodb.MongoTransactionException: Query failed with error code 251 and error message 'Encountered non-retryable error during query :: caused by :: Transaction 4 has been aborted.' on server mongos-dc2.fake.domain.com:27017; nested exception is com.mongodb.MongoQueryException: Query failed with error code 251 and error message 'Encountered non-retryable error during query :: caused by :: Transaction 4 has been aborted.' on server mongos-dc2.fake.domain.com27017]
What could be the reason for that? Could you tell me what I should do to fix this behaviour?
I figured out that the culprit is the Mongo arbiter.
https://docs.mongodb.com/manual/tutorial/add-replica-set-arbiter/
As I described, four of my Mongo servers are ordinary replicas (two in the first datacenter and two in the second), but the last one (in the third datacenter) is the arbiter, whose role is only to vote in the election of a new primary.
I changed that node type from arbiter to normal replica and everything started to work as expected :/
That still doesn't fully answer my question, because I don't know for sure why something that was supposed to work, an official Mongo feature, just failed. A likely explanation is that multi-document transactions depend on majority-committed writes, and an arbiter votes but holds no data, so it can never acknowledge a write: with five voting members a majority is three, and after losing a datacenter only two data-bearing members remained, so majority writes (and therefore transactions) could not complete even though plain writes kept working. But perhaps this is a question for the MongoDB team more than a topic for Stack Overflow.
Nevertheless, I hope this will help someone.

MongoDB inserts slow down when in Replica Set mode

I'm running MongoDB 2.4.5 and recently I've started digging into replica sets to get some kind of redundancy.
I started the same mongod instance with the --replSet parameter and also added an arbiter to the running replica set. What happened was that writes to Mongo slowed down significantly (from 15 ms to 30-60 ms, sometimes even around 300 ms). As soon as I restarted it in non-replica-set mode, performance went back to normal.
I also set up the newest 3.0 version of MongoDB with no data and ran the same tester as before, and the result was quite similar: writes were at least 50% slower in replica set mode.
I could not find many examples of such behaviour online, so I guess something is wrong with my Mongo configuration or OS configuration.
Any ideas? Thanks for the help.
It sounds like you're using "replica acknowledged" write concern, which means that the operation will not return until the data has been written to both the primary and a replica. The write concern can be set when doing any write operation (from 2.6 onwards; the 2.4 documentation suggests that calling getLastError causes a write concern of replica acknowledged in 2.4, so are you doing that in your test code?).
Read this section (v3) or this section (v2.4) of the MongoDB documentation to understand the implications of different write concerns, and try explicitly setting it to Acknowledged.
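For example, with the Java driver the write concern can be set per collection, which makes the two modes easy to compare (a minimal sketch; collection names are illustrative):

import com.mongodb.WriteConcern;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class WriteConcernDemo {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            // W2 waits for the primary plus one secondary to acknowledge each
            // write, so every insert pays the replication round trip.
            MongoCollection<Document> replicaAcked = client.getDatabase("test")
                    .getCollection("docs").withWriteConcern(WriteConcern.W2);
            // ACKNOWLEDGED returns as soon as the primary applies the write,
            // so latency stays close to standalone performance.
            MongoCollection<Document> primaryAcked = client.getDatabase("test")
                    .getCollection("docs").withWriteConcern(WriteConcern.ACKNOWLEDGED);
            replicaAcked.insertOne(new Document("mode", "w2"));
            primaryAcked.insertOne(new Document("mode", "acknowledged"));
        }
    }
}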
Okay, so the problem was the C# library. I used the native C# driver instead (it works fine even with MongoDB 2.4.5) and there now seems to be no difference in performance. Thanks for the help.

mongodb sharding issue with 2.5.5 development version

I am trying to perform performance testing of one of my applications using MongoDB. I am using the 2.5.5 development version. Sharding works fine when I read and write data through mongos.
To run the performance test I need to start 600-700 MongoDB connection threads against the mongos. Each thread queries around 2000 documents, which are distributed across two shards. The test runs fine for a few minutes, but after some time it stops working with the error "Connection refused by one of the shards". Looking closely at it, I found that the server runs out of ports when this many threads request data.
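For what it's worth, the usual pattern for a load test like this is a single shared MongoClient with a capped internal connection pool, rather than one socket per thread; a minimal sketch with the modern Java driver (pool size and thread count are illustrative):

import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LoadTest {
    public static void main(String[] args) throws InterruptedException {
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString("mongodb://localhost:27017"))
                // Cap the number of sockets the client may open to mongos.
                .applyToConnectionPoolSettings(b -> b.maxSize(200))
                .build();
        // One client shared by every worker; threads borrow pooled connections
        // instead of each opening its own socket and exhausting ports.
        try (MongoClient client = MongoClients.create(settings)) {
            MongoCollection<Document> coll =
                    client.getDatabase("perf").getCollection("docs");
            ExecutorService workers = Executors.newFixedThreadPool(700);
            for (int i = 0; i < 700; i++) {
                workers.submit(() -> coll.find().limit(2000)
                        .forEach(doc -> { /* consume the document */ }));
            }
            workers.shutdown();
            workers.awaitTermination(10, TimeUnit.MINUTES);
        }
    }
}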
Could anyone please have a look and let me know whether this is a MongoDB bug in the dev version or something I am doing wrong when connecting to the database?
Your help will be much appreciated.
Thanks,
Vibhu

How can I find out why mongodb crashes?

My MongoDB instance has recently started crashing at random times on an Ubuntu machine; it usually stays up for a day or so. The Mongo log has no trace of the crash, just the last operation and the point where I restarted the server. I need some guidance in finding the problem, since the log doesn't have any information. Is there another log I should be looking at?
The setup is fairly straightforward: a single instance (no sharding) of MongoDB 2.2 running on an Ubuntu box with a pretty much default install.
The only change I have made recently, which seems to coincide with this in timing, is that I replaced some simple map-reduce executions with the aggregation framework.
Thanks.
There is an unofficial MongoDB tool suite called mtools; try downloading it and running it against your logs. It can tell you why the server went down, how many times it restarted, and many more details.
You can get it on GitHub.
You can also view the actual cause of the crash through the logs; the common log path is /var/log/mongodb/mongod.log.
Or use this command:
db.adminCommand( { getLog : "global" } )