MongoDB: A timeout occured after 30000ms selecting a server using CompositeServerSelector

I'm completely stumped. I am using the latest C# driver (2.3.0.157) and the latest MongoDB (3.2). The DB is running as a standalone setup with no replication or sharding. I've tried running locally on Windows as well as remotely on Amazon Linux.
I keep getting a timeout error, but mysteriously it sometimes just works (maybe once every 20-30 attempts).
I am creating the connection as such:
private static readonly string ConnectionString = ConfigurationManager.ConnectionStrings["MongoDB"].ToString();
private static readonly string DataBase = ConfigurationManager.ConnectionStrings["MongoDBDatabase"].ToString();

private static IMongoDatabase _database;

public static IMongoDatabase GetDatabase(string database)
{
    if (_database == null)
    {
        var client = new MongoClient(ConnectionString);
        _database = client.GetDatabase(database);
    }
    return _database;
}
And calling it like this:
public static List<Earnings> GetEarnings()
{
    var db = GetDatabase(DataBase);
    if (db == null) return new List<Earnings>();

    var logs = db.GetCollection<Earnings>("EarningsData");
    var all = logs.Find(new BsonDocument()).ToEnumerable().OrderBy(x => x.Symbol).ToList();
    return all;
}
And it times out on the logs.Find call. Here's the full message:
Additional information:
A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = ReadPreferenceServerSelector{ ReadPreference = { Mode = Primary, TagSets = [] } }, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : "1", ConnectionMode : "Direct", Type : "Unknown", State : "Disconnected", Servers : [{ ServerId: "{ ClusterId : 1, EndPoint : "XX.XX.XX.XX:27017" }", EndPoint: "XX.XX.XX.XX:27017", State: "Disconnected", Type: "Unknown" }] }.
I've tried using the fully qualified host name, adding connect=direct and connect=replicaSet to the connection string, using MongoClientSettings instead of the connection string, and everything else I could find on forums and StackOverflow. I'm at a loss and not even sure where to look next. Any advice?
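For completeness, the MongoClientSettings variant I tried looked roughly like this (a sketch from memory, not my exact code; the host is a placeholder like above):
var settings = new MongoClientSettings
{
    Server = new MongoServerAddress("XX.XX.XX.XX", 27017),
    ConnectionMode = ConnectionMode.Direct,            // same effect as connect=direct
    ServerSelectionTimeout = TimeSpan.FromSeconds(30)  // the default that keeps expiring
};
var client = new MongoClient(settings);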
I should also add, I can connect fine from the command line and RoboMongo...

We finally figured out how to work around this issue, but I still don't understand what's happening. In our case, we have a server that spawns ~10 SignalR hubs that get their data from MongoDB. It seems that when the app was starting up, it was making several rapid calls to MongoDB to get the initial set of data, and while this occasionally worked, most times it didn't. We ended up solving it by adding a one-second delay between loading each SignalR hub so the initial queries were staggered a bit and we didn't have contention.
The weird thing about this is that none of these collections has a large amount of data, and the initial load is usually < 100 documents per collection (max). Once things are initialized, it doesn't seem to matter how often we hit MongoDB. It only seems to be an issue on the initial load.
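For illustration, the workaround amounts to something like the following (a simplified sketch, not our real code; assume hubLoaders is an IEnumerable<Func<Task>> where each delegate spins up one hub and runs its initial MongoDB query):
// Load hubs one at a time, with a one-second gap, so their initial
// MongoDB queries don't race each other at startup.
foreach (var loadHub in hubLoaders)
{
    await loadHub();        // hub issues its initial MongoDB query here
    await Task.Delay(1000); // breathing room before the next hub starts
}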

An old topic, but I was getting a similar error (2.11.0-beta2, netcoreapp3.1) and then I realised DocumentDB is restricted to connectivity within the same VPC. It's mentioned here:
https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html
Amazon DocumentDB (with MongoDB compatibility) clusters are deployed within an Amazon Virtual Private Cloud (Amazon VPC). They can be accessed directly by Amazon EC2 instances or other AWS services that are deployed in the same Amazon VPC. Additionally, Amazon DocumentDB can be accessed by EC2 instances or other AWS services in different VPCs in the same AWS Region or other Regions via VPC peering.
Check you're in the same VPC. If not, good luck.

Related

NextJs + Mongoose + Mongo Atlas multiple connections even with caching

I am using NextJS to build an app. I am using MongoDB via mongoosejs to connect to my database hosted in MongoDB Atlas.
My database connection file looks like below:
import mongoose from "mongoose";

const MONGO_URI =
  process.env.NODE_ENV === "development"
    ? process.env.MONGO_URI_DEVELOPMENT
    : process.env.MONGO_URI_PRODUCTION;

console.log(`Connecting to ${MONGO_URI}`);

const database_connection = async () => {
  if (global.connection?.isConnected) {
    console.log("reusing database connection");
    return;
  }
  const database = await mongoose.connect(MONGO_URI, {
    authSource: "admin",
    useNewUrlParser: true
  });
  global.connection = { isConnected: database.connections[0].readyState };
  console.log("new database connection created");
};

export default database_connection;
I have seen this MongoDB developer community thread and this GitHub thread.
The problem seems to happen only in dev mode (when you run yarn run dev). In the production version hosted on Vercel there seems to be no issue. I understand that in dev mode the server is restarted every time a change is saved, so to cache a connection you need to use a global variable. As you can see above, I have done exactly that. The server even logs reusing database connection, yet MongoDB Atlas then shows ~10 more connections opened.
How can I solve this issue or what am I doing wrong?

Cannot connect to Atlas MongoDB from Azure Functions

I just created an Azure Function that should connect to my instance of MongoDB on Atlas, basically following this tutorial:
https://www.mongodb.com/blog/post/how-to-integrate-azure-functions-with-mongodb
From my local development with Visual Studio, everything works fine and I can connect to the Atlas environment, but when I deploy the code to Azure, the following exception is raised:
A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : "1", ConnectionMode : "ReplicaSet", Type : "ReplicaSet", State : "Disconnected", Servers : [{ ServerId: "{ ClusterId : 1, EndPoint : "Unspecified/ltdevcluster-shard-00-00.qkeby.mongodb.net:27017" }", EndPoint: "Unspecified/ltdevcluster-shard-00-00.qkeby.mongodb.net:27017", ReasonChanged: "Heartbeat", State: "Disconnected", ServerVersion: , TopologyVersion: , Type: "Unknown", HeartbeatException: "MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.
---> MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server.
---> System.IO.EndOfStreamException: Attempted to read past the end of the stream.
If I instead set the Network Access to "everywhere", again everything works fine.
Now, in the Network Access panel of Atlas, I added the IPs retrieved from the Azure Portal under my function app => Networking => Inbound traffic and Outbound traffic (a total of 1 IP for inbound and 3 IPs for outbound).
But adding those 4 IPs has not solved the issue.
What else should I do?
If you are using a static IP, as a workaround you can check this: How do I set a static IP in Functions?
You can also set up a Private Endpoint, and for the security of the database credentials check the secrets engine integration using Vault.
You can refer to How to connect Azure Function with MongoDB Atlas and Azure functions unable to connect with Mongo Db Atlas M10.

Intermittent TimeoutException connecting Atlas MongoDb from .net core 2.2 at Linux-based docker container on Azure

I have an application using .NET Core 2.2 that connects to a MongoDB cluster v3.6 in Atlas. The application is hosted in Azure as a Linux Docker container. The app uses the MongoDB .NET driver 2.7.3. The app periodically (once every couple of minutes) receives the following timeout exceptions:
System.TimeoutException at MongoDB.Driver.Core.Clusters.Cluster.ThrowTimeoutException
and
System.TimeoutException at MongoDB.Driver.Core.Connections.TcpStreamFactory.ConnectAsync
The Mongo client instance is configured according to the MongoDB docs, i.e.:
var url = MongoUrl.Create("mongodb+srv://user:password@cluster.gcp.mongodb.net/?authSource=admin&retryWrites=true&ssl=true");
var clientSettings = MongoClientSettings.FromUrl(url);
clientSettings.SslSettings = new SslSettings() { EnabledSslProtocols = SslProtocols.Tls12 };

void SocketConfigurator(Socket s) => s.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

clientSettings.ClusterConfigurator = builder =>
    builder.ConfigureTcp(tcp => tcp.With(socketConfigurator: (Action<Socket>)SocketConfigurator));

return new MongoClient(clientSettings);
I checked a number of SO questions, including MongoDB C# 2.0 TimeoutException and SocketTimeout with opened connection in MongoDB, but the suggestions seem to be either outdated (reported as fixed in the current version of the driver) or without permanent positive effect (setting timeouts in the connection string, i.e. connectTimeoutMS=90000&socketTimeoutMS=90000&maxIdleTimeMS=90000). The second one (setting tcp_keepalive_time) seems not to be applicable to a Docker container in Azure. Please help.
Have you tried a setting like this:
var client = new MongoClient(new MongoClientSettings
{
    Server = new MongoServerAddress("xxxx"),
    ClusterConfigurator = builder =>
    {
        builder.ConfigureCluster(settings => settings.With(serverSelectionTimeout: TimeSpan.FromSeconds(10)));
    }
});
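If that helps, it can be combined with the keepalive configurator from your question and an idle-time cap in one place, e.g. (a sketch with arbitrarily chosen values; "xxxx" stays a placeholder):
var clientSettings = new MongoClientSettings
{
    Server = new MongoServerAddress("xxxx"),
    MaxConnectionIdleTime = TimeSpan.FromSeconds(45),  // example value, tune to taste
    ClusterConfigurator = builder =>
    {
        // fail server selection faster than the 30s default
        builder.ConfigureCluster(c => c.With(serverSelectionTimeout: TimeSpan.FromSeconds(10)));
        // keepalive socket option, as in the question
        builder.ConfigureTcp(tcp => tcp.With(
            socketConfigurator: (Action<Socket>)(s =>
                s.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true))));
    }
};
var client = new MongoClient(clientSettings);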

Unable to connect to MongoDb using MongoClientSettings as parameter to MongoClient

I am developing a C# MVC Web API which uses MongoDB as the backend. I tried connecting to my MongoDB database using
MongoClient mongoClient = new MongoClient(connectionString)
where connectionString is in the format: mongodb://Username:Password@hostname.eastus.cloudapp.azure.com
MongoDB is hosted in a virtual machine in Azure. I am able to connect to the database and all works well. But I am getting frequent exceptions:
"MongoDb.driver.MongoConnectionException".An exception occurred while
receivinf a message from server--->System.IO.IOException:Unable to
read data from the transport connection : A connection attempt failed
because the connected party did not properly respond after a period of
time,......"
So after a bit of research I learnt that Azure kills idle connections and that I have to set MaxConnectionIdleTime.
In order to set MaxConnectionIdleTime I decided to connect to MongoDB in the way below:
var credential = MongoCredential.CreateCredential("dbname", "UserName", "Password");
var settings = new MongoClientSettings
{
    Credentials = new[] { credential },
    Server = new MongoServerAddress("HostName", 27017),
    MaxConnectionIdleTime = new TimeSpan(0, 3, 0)
};
MongoClient mongoClient = new MongoClient(settings);
In this case I am using the same username/password combination given in the connection string which I used to connect before.
While trying to connect, I am getting the inner exception:
MongoDB.Driver.MongoAuthenticationException: "Unable to authenticate using sasl protocol mechanism SCRAM-SHA-1".
"MongoDB.Driver.MongoConnectionException": An exception occurred while receiving a message from the server ---> System.IO.IOException: Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, ....
The reason behind this exception is that, when hosted in Azure, Azure tries to kill idle connections but the C# driver is not aware of this. The driver tries to execute queries on the killed connections without knowing that the connection no longer exists.
The solution that worked for me is to set maxIdleTimeMS=45000 in the connection string.
This way the driver will not use a connection that has been idle for a long time.
Here is the connection string that worked for me:
connectionString="mongodb://Username:Password@hostname.eastus.cloudapp.azure.com/?connectTimeoutMS=30000&socketTimeoutMS=30000&waitQueueTimeoutMS=30000&maxIdleTimeMS=45000"
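Equivalently, if you build the client from MongoClientSettings instead of a raw connection string, the same idle cap can be applied like this (a minimal sketch using the same placeholder credentials as above):
// Same 45-second idle limit, set on MongoClientSettings instead
// of via maxIdleTimeMS in the connection string.
var settings = MongoClientSettings.FromUrl(
    MongoUrl.Create("mongodb://Username:Password@hostname.eastus.cloudapp.azure.com"));
settings.MaxConnectionIdleTime = TimeSpan.FromSeconds(45);
var client = new MongoClient(settings);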
I had a similar error with my Azure-hosted MongoDB (Cosmos DB). It turned out to be network settings such that I had blocked all access. Changing it to allow access from "All networks" fixed the issue.
The error is very misleading; I would have expected a connection timeout.
A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : "1", ConnectionMode : "ReplicaSet", Type : "ReplicaSet", State : "Disconnected", Servers : [{ ServerId: "{ ClusterId : 1, EndPoint : "Unspecified/XXXX.documents.azure.com:10255" }", EndPoint: "Unspecified/XXXX.documents.azure.com:10255", State: "Disconnected", Type: "Unknown", HeartbeatException: "MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. ---> MongoDB.Driver.MongoAuthenticationException: Unable to authenticate using sasl protocol mechanism SCRAM-SHA-1. ---> MongoDB.Driver.MongoCommandException: Command saslContinue failed: Not Authenticated.
To troubleshoot, I tried from MongoDB Compass as well and that didn't work either, showing me it wasn't the code.

MongoS sharding metadata manager failed, asking for the instance to be manually reset

My MongoS servers are not starting; they log this error:
SHARDING [Balancer] caught exception while doing balance: Server's sharding metadata manager failed to initialize and will remain in this state until the instance is manually reset :: caused by :: HostNotFound: unable to resolve DNS for host confserv_1.xyz.com
2016-05-02T17:57:06.612+0530 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "DB2255-2016-05-02T17:57:06.611+0530-5727479aa1051c5fb04fcc49", server: "mongoS1", clientAddr: "", time: new Date(1462192026611), what: "balancer.round", ns: "", details: { executionTimeMillis: 35, errorOccured: true, errmsg: "Server's sharding metadata manager failed to initialize and will remain in this state until the instance is manually reset :: caused by :: HostNotFoun..." } }
When I connect to the config server using the host name it works fine.
I tried to restart the MongoS server but it does not come up.
I checked the Mongo code and found this error mentioned in
https://github.com/mongodb/mongo/blob/master/src/mongo/db/s/sharding_state.cpp
// TODO: remove after v3.4.
// This is for backwards compatibility with old style initialization through metadata
// commands/setShardVersion. As well as all assignments to _initializationStatus and
// _setInitializationState_inlock in this method.
if (_getInitializationState() == InitializationState::kInitializing) {
    auto waitStatus = _waitForInitialization_inlock(deadline, lk);
    if (!waitStatus.isOK()) {
        return waitStatus;
    }
}

if (_getInitializationState() == InitializationState::kError) {
    return {ErrorCodes::ManualInterventionRequired,
            str::stream() << "Server's sharding metadata manager failed to initialize and will "
                             "remain in this state until the instance is manually reset"
                          << causedBy(_initializationStatus)};
}
But it does not mention what manual intervention is required.
The current Mongo version is 3.2.6.
I just ran into this problem while trying to harden the security configuration. As in your case, I was able to connect to the config servers from all mongos instances.
In my case I was also testing a setup with members of replica sets in different datacenters, and I had the problem only after stepping down some primaries.
I noticed in the end that, contrary to what the error message suggests, the issue was happening on some primaries in one datacenter, which were not able to route back to the config server. After fixing the routing problem (/etc/hosts, eventually), no more problems occurred on the Mongo side.