I'm using the official Perl driver for working with MongoDB. To catch and handle errors, the Try::Tiny and Safe::Isa modules are recommended. However, it doesn't work as expected. Please check the code below, which should work according to the documentation but in fact doesn't:
use MongoDB;
use Try::Tiny;
use Safe::Isa;

my $client;
try {
    $client = MongoDB->connect('mongodb://localhost');
    $client->connect;
} catch {
    warn "caught error: $_";
};

my $collection = $client->ns('foo.bar');
try {
    my $all = $collection->find;
} catch {
    warn "2 - caught error: $_";
};
Since connections are established automatically according to the documentation, there will be no exception on connect(). But there is no exception on the request either! I also added the $client->connect line to force a connection, but again no exception. I'm running this script on a machine where there is no MongoDB installed and no MongoDB Docker container running, so an exception definitely must appear.
Could someone explain what I am doing wrong?
This is a subtle issue. find returns a cursor object but doesn't issue the query immediately. From the documentation for MongoDB::Collection:
Note, a MongoDB::Cursor object holds the query and does not issue the
query to the server until the `result` method is called on it or until
an iterator method like `next` is called.
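So the fix is to consume the cursor inside the try block: calling `result`, or an iterator method such as `all` or `next`, forces the query to be issued, and the connection failure then surfaces as a catchable exception. A minimal sketch:

try {
    my $cursor = $collection->find;
    # all() iterates the cursor, which issues the query to the server;
    # with no server reachable, this is where the exception is thrown
    my @docs = $cursor->all;
} catch {
    warn "2 - caught error: $_";
};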
I have an ASP.NET Core application with Npgsql that is set up to retry when a database connection fails:
services.AddDbContext<DatabaseContext>(
    opt => opt.UseNpgsql(Configuration.GetConnectionString("DefaultConnection"),
        (options) =>
        {
            options.EnableRetryOnFailure(10);
        })
);
However, when I restart the database during a request, it will first throw an exception:
Exception thrown: 'Npgsql.PostgresException' in System.Private.CoreLib.dll: '57P01: terminating connection due to administrator command'
That propagates into my code and causes an exception without any retries.
Looking into the documentation for Npgsql, it seems that some exceptions are marked as IsTransient, meaning they will be retried, but this one is not. I assume there's a reason for that, but I'd still like to be able to restart the Postgres database without interrupting requests, by simply having them retry after a while. I could put try/catch blocks around all the requests and check for this particular exception, but that seems like a clunky solution (sketched below, after the package reference).
How do I handle this situation gracefully?
npgsql package used:
<PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL" Version="5.0.10" />
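For illustration, the clunky per-request approach would look roughly like this. This is a sketch only: _db and Users stand in for your own DbContext and entities.

// using Npgsql; using Microsoft.EntityFrameworkCore;
try
{
    var users = await _db.Users.ToListAsync();
}
catch (PostgresException ex) when (ex.SqlState == "57P01")
{
    // wait for the database to come back, then retry once (illustrative)
    await Task.Delay(TimeSpan.FromSeconds(5));
    var users = await _db.Users.ToListAsync();
}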
Managed to find an answer myself: you can add the error code in EnableRetryOnFailure:
services.AddDbContext<DatabaseContext>(
    opt => opt.UseNpgsql(Configuration.GetConnectionString("DefaultConnection"),
        (options) =>
        {
            options.EnableRetryOnFailure(10, TimeSpan.FromSeconds(30), new string[] { "57P01" });
        })
);
Now the only question that remains is whether that is a good idea or not.
I am working on building a Rocket REST API using a MongoDB database. I have been able to successfully create a connection and start up the server without errors:
# Cargo.toml
[dependencies]
rocket = "0.5.0-rc.1"
dotenv = "0.15.0"
mongodb = "1.2.2"
[dependencies.openssl]
version = "0.10"
features = ["vendored"]
#[macro_use] extern crate rocket;
use rocket::State;
// (database and routes are the application's own modules; the snippet
// combines code from two files)

#[get("/")]
pub async fn index(client: &State<ClientPointer>) -> &'static str {
    let _dbs = client.0.list_databases(None, None).await.unwrap();
    "Fetched databases"
}

#[launch]
async fn rocket() -> _ {
    let client = database::connect::pool(1, 32).await.unwrap();
    rocket::build()
        .mount("/", routes!(routes::index))
        .manage(database::rocket::ClientPointer(client))
}
However, when the route is invoked I get the following output:
>> Matched: (index) GET /
thread 'rocket-worker-thread' panicked at 'there is no timer running, must be called from the context of a Tokio 0.2.x runtime', C:\Users\lukasdiegelmann\.cargo\registry\src\github.com-1ecc6299db9ec823\tokio-0.2.25\src\time\driver\handle.rs:24:32
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
>> Handler index panicked.
>> This is an application bug.
>> A panic in Rust must be treated as an exceptional event.
>> Panicking is not a suitable error handling mechanism.
>> Unwinding, the result of a panic, is an expensive operation.
>> Panics will severely degrade application performance.
>> Instead of panicking, return `Option` and/or `Result`.
>> Values of either type can be returned directly from handlers.
>> A panic is treated as an internal server error.
>> Outcome: Failure
>> No 500 catcher registered. Using Rocket default.
>> Response succeeded.
So it seems like there is something wrong with the versioning of the async runtime being used. But I could not find where, because the error does not really give me a hint, and the mongodb Rust driver appears to be using a 0.2.x version of tokio, namely ~0.2.18.
Edit: I have gained a little more insight, and it seems that rocket-0.5.0-rc.1 has started using a tokio-1.x runtime, whereas mongodb-1.2.2 has not. This obviously poses a big problem, since I would either have to run two runtimes simultaneously or ditch mongodb for now, which is not exactly the definition of a solution.
MongoDB has released a 2.0.0-beta version of its driver that uses tokio-1.x.
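Assuming you switch to the beta, the Cargo.toml change is just a version bump (the exact pre-release suffix may differ by the time you read this):

[dependencies]
rocket = "0.5.0-rc.1"
mongodb = "2.0.0-beta"

Since both then run on tokio 1.x, Rocket's runtime can drive the driver's futures directly, and the panic about a missing Tokio 0.2 timer goes away.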
In the startup method of my application, I want to check that the MongoDB credentials provided to the application are OK. If they are, I continue the startup; if not, the application is supposed to exit, as it cannot connect to the DB. The code snippet is below:
// Create the client
MongoClient mongodb = null;
try {
    mongodb = MongoClient.createShared(vertx, mongo_cnf, mongo_cnf.getString("pool_name"));
}
catch (Exception e) {
    log.error("Unable to create MongoDB client. Cause: '{}'. Bailing out", e.getMessage());
    System.exit(-1);
}
If I provide wrong credentials, the catch block is not called. Yet I get the following on the console:
19:35:43.017 WARN org.mongodb.driver.connection - Exception thrown during connection pool background maintenance task
com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=null, userName='user', source='admin', password=<hidden>, mechanismProperties={}}
at com.mongodb.connection.SaslAuthenticator.wrapException(SaslAuthenticator.java:162)
at com.mongodb.connection.SaslAuthenticator.access$200(SaslAuthenticator.java:39)
... many lines
The question is: how do I intercept this exception in my code and handle it properly?
The exception is thrown on the MongoDB Java driver's daemon thread, so you cannot catch it.
The Vert.x MongoClient abstracts you from direct interaction with the MongoDB Java driver, so you can't modify anything related to the client.
You could access the mongo client instance via reflection, but as it's already created, you cannot pass additional configuration to it.
If you used com.mongodb.async.client.MongoClient, you could pass a ServerListener, which could access the exception so you could examine it (please see this answer for more details: https://stackoverflow.com/a/46526000/1126831).
But it's only possible to specify the ServerListener at the moment the mongo client is constructed, which happens inside the Vert.x MongoClient wrapper, and there's no way to pass this additional configuration (a rough sketch of the raw-driver hook follows below).
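For reference, a rough sketch of what that listener looks like on the raw driver, shown here with the sync driver's MongoClientOptions (the async client takes the same listener through its settings builder). This is illustrative only; as noted, there is no way to plug it into the Vert.x wrapper:

import com.mongodb.MongoClientOptions;
import com.mongodb.event.*;

MongoClientOptions options = MongoClientOptions.builder()
    .addServerListener(new ServerListener() {
        @Override
        public void serverOpening(ServerOpeningEvent event) { }

        @Override
        public void serverClosed(ServerClosedEvent event) { }

        @Override
        public void serverDescriptionChanged(ServerDescriptionChangedEvent event) {
            // A failed authentication surfaces here as the description's
            // exception, e.g. the MongoSecurityException from the console
            // output above.
            if (event.getNewDescription().getException() != null) {
                System.err.println("MongoDB error: "
                        + event.getNewDescription().getException().getMessage());
            }
        }
    })
    .build();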
Currently the exception is not thrown, which is in my opinion a design mistake, since you receive an object that you cannot work with. Feel free to open a bug: https://github.com/vert-x3/vertx-mongo-client/issues
What you can do to detect that your client is "dead on arrival" is to wait for a connection timeout:
// Default is 30s, which is quite long
JsonObject config = new JsonObject().put("serverSelectionTimeoutMS", 5_000);
MongoClient client = MongoClient.createShared(vertx, config, "pool_name");

JsonObject query = new JsonObject();  // empty query matches any document
JsonObject fields = new JsonObject(); // empty projection returns all fields
client.findOne("some_collection", query, fields, (h) -> {
    if (h.succeeded()) {
        //...
    }
    else {
        // Notify that the client is dead
    }
});
When attempting to publish a Service Fabric application to a local cluster, the cluster fails to load the application, stating the error in the title. The stack trace points me to an exception line in OwinCommunicationListener.cs:
try
{
    this.eventSource.LogInfo("Starting web server on " + this.listeningAddress);
    this.webApp = WebApp.Start(this.listeningAddress, appBuilder => this.startup.Invoke(appBuilder));
    this.eventSource.LogInfo("Listening on " + this.publishAddress);
    return Task.FromResult(this.publishAddress);
}
catch (Exception ex)
{
    var logString = $"Web server failed to open endpoint {endpointName}. {ex.ToString()}";
    this.eventSource.LogFatal(logString);
    this.StopWebServer();
    throw ex; // points to this line from cluster manager
}
I am unable to inspect the exception thrown, but there is no useful exception information other than a TargetInvocationException with a stack trace to the line noted above. Why won't this application load on my local cluster?
It's hard to say without an actual exception message or stack trace, but judging by the location from which the exception was thrown, and the fact that the problem resolved itself the next morning, the most likely and most common cause is that the port you were trying to use to open the web listener was taken by some other process at the time, and the next morning the port was free again. This, by the way, isn't really specific to Service Fabric: you're just trying to open a socket on a port that was taken by someone else.
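A quick way to confirm, from a PowerShell prompt on the machine hosting the local cluster (the port and PID below are illustrative):

# Find out which process owns the port your listener wants, e.g. 8080
netstat -ano | findstr :8080
# Then look up the owning process by the PID from the last column
tasklist /FI "PID eq 1234"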
I'm honestly more curious about why you couldn't inspect the exception. I can think of three things off the top of my head to help with that:
Use "throw" instead of "throw ex" so you don't reset the stack trace (see the sketch after this list).
Look at your logs. It looks like you're writing out an ETW event in your catch block. What did it say?
Use the Visual Studio debugger: Simply set a breakpoint in the catch block and start the application with debugging by pressing F5.
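For the first point, the change to the catch block above is one line; with a bare "throw;" the original stack trace is preserved when the exception is rethrown:

catch (Exception ex)
{
    var logString = $"Web server failed to open endpoint {endpointName}. {ex.ToString()}";
    this.eventSource.LogFatal(logString);
    this.StopWebServer();
    throw; // rethrow without resetting the stack trace
}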
For the relevant part of our server stack, we're running:
NGINX 1.2.3
PHP-FPM 5.3.10 with PECL mongo 1.2.12
MongoDB 2.0.7
CentOS 6.2
We're getting some strange, but predictable, behavior when the MongoDB server goes away (crashes, gets killed, etc.). Even with a try/catch block around the connection code, i.e.:
try
{
    $mdb = new Mongo('mongodb://localhost:27017');
}
catch (MongoConnectionException $e)
{
    die( $e->getMessage() );
}

$db = $mdb->selectDB('collection_name');
Depending on which PHP-FPM workers have connected to mongo already, the connection state is cached, causing further exceptions to go unhandled, because the $mdb connection handler can't be used. The troubling thing is that the try does not consistently fail for a considerable amount of time, up to 15 minutes later, when -- I assume -- the php-fpm processes die/respawn.
Essentially, the behavior is that when you hit a worker that hasn't connected to mongo yet, you get the die message above, and when you connect to a worker that has, you get an unhandled exception from $mdb->selectDB('collection_name'); because catch does not run.
When PHP is a single process, i.e. via Apache with mod_php, this behavior does not occur. Just for posterity, going back to Apache/mod_php is not an option for us at this time.
Is there a way to fix this behavior? I don't want the connection state to be inconsistent between different php-fpm processes.
Edit:
While I wait for the driver to be fixed in this regard, my current workaround is to do a quick poll to determine whether the driver can handle requests, and then load (or not load) the MongoDB library and run queries only if it can connect and query:
try
{
    // connect
    $mongo = new Mongo("mongodb://localhost:27017");

    // try to do anything with the connection handle
    try
    {
        $mongo->YOUR_DB->YOUR_COLLECTION->findOne();
        $mongo->close();
        define('MONGO_STATE', TRUE);
    }
    catch (MongoCursorException $e)
    {
        $mongo->close();
        error_log('Error connecting to MongoDB: ' . $e->getMessage());
        define('MONGO_STATE', FALSE);
    }
}
catch (MongoConnectionException $e)
{
    error_log('Error connecting to MongoDB: ' . $e->getMessage());
    define('MONGO_STATE', FALSE);
}
The PHP mongo driver's connectivity code is getting a big overhaul in the 1.3 release, currently in beta2 as of this writing. Based on your description, your issues may be resolved by the fixes for:
https://jira.mongodb.org/browse/PHP-158
https://jira.mongodb.org/browse/PHP-465
Once it is released you will be able to see the full list of fixes here:
https://jira.mongodb.org/browse/PHP/fixforversion/10499
Or, alternatively, on the PECL site. If you can test 1.3 and confirm that your issues are still present, then I'm sure the driver devs would love to hear from you before the 1.3.0 release, especially if the problem is easily reproducible.