In the startup method of my application, I want to check that the MongoDB credentials provided to the application are OK. If they are, I continue the startup; if not, the application should exit, as it cannot connect to the DB. The code snippet is below:
// Create the client
MongoClient mongodb = null;
try {
    mongodb = MongoClient.createShared(vertx, mongo_cnf, mongo_cnf.getString("pool_name"));
}
catch (Exception e) {
    log.error("Unable to create MongoDB client. Cause: '{}'. Bailing out", e.getMessage());
    System.exit(-1);
}
If I provide wrong credentials, the catch block is not called. Yet I get the following on the console:
19:35:43.017 WARN org.mongodb.driver.connection - Exception thrown during connection pool background maintenance task
com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=null, userName='user', source='admin', password=<hidden>, mechanismProperties={}}
at com.mongodb.connection.SaslAuthenticator.wrapException(SaslAuthenticator.java:162)
at com.mongodb.connection.SaslAuthenticator.access$200(SaslAuthenticator.java:39)
... many lines
The question is: how can I intercept this exception in my code and handle it properly?
The exception is thrown in the MongoDB Java driver's daemon thread, so you cannot catch it.
The Vert.x MongoClient abstracts away direct interaction with the MongoDB Java driver, so you can't modify anything related to the underlying client.
You could access the Mongo client instance via reflection, but as it's already created, you cannot pass additional configuration to it.
If you used com.mongodb.async.client.MongoClient directly, you could pass a ServerListener that receives the exception, so you could examine it (please see this answer for more details - https://stackoverflow.com/a/46526000/1126831).
But the ServerListener can only be specified at the moment the Mongo client is constructed, which happens inside the Vert.x MongoClient wrapper, and there's no way to pass this additional configuration through.
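For reference, outside of Vert.x the linked approach would look roughly like the sketch below with the plain MongoDB Java driver. This is only an illustration, assuming a 3.7+ driver with the unified MongoClientSettings; the connection string is a placeholder and the exact builder methods vary between driver versions:
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.event.ServerClosedEvent;
import com.mongodb.event.ServerDescriptionChangedEvent;
import com.mongodb.event.ServerListener;
import com.mongodb.event.ServerOpeningEvent;

ServerListener listener = new ServerListener() {
    @Override
    public void serverOpening(ServerOpeningEvent event) { }

    @Override
    public void serverClosed(ServerClosedEvent event) { }

    @Override
    public void serverDescriptionChanged(ServerDescriptionChangedEvent event) {
        // On an authentication failure the new description carries the cause,
        // e.g. the MongoSecurityException from the console output above
        Throwable failure = event.getNewDescription().getException();
        if (failure != null) {
            System.err.println("MongoDB server check failed: " + failure.getMessage());
        }
    }
};

MongoClientSettings settings = MongoClientSettings.builder()
        .applyConnectionString(new ConnectionString("mongodb://user:password@localhost:27017/admin"))
        .applyToServerSettings(builder -> builder.addServerListener(listener))
        .build();
MongoClient client = MongoClients.create(settings);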
Currently the exception is not surfaced to the caller, which is in my opinion a design mistake, since you receive an object that you cannot work with. Feel free to open a bug: https://github.com/vert-x3/vertx-mongo-client/issues
What you can do to detect that your client is "dead on arrival" is to wait for the connection timeout:
// Default is 30s, which is quite long
JsonObject config = new JsonObject().put("serverSelectionTimeoutMS", 5_000);
MongoClient client = MongoClient.createShared(vertx, config, "pool_name");

JsonObject query = new JsonObject();   // match any document
JsonObject fields = new JsonObject();  // return all fields
client.findOne("some_collection", query, fields, h -> {
    if (h.succeeded()) {
        // Credentials are fine, continue startup
    } else {
        // Notify that the client is dead (e.g. log h.cause() and exit)
    }
});
I have a problem consuming a REST API with the RestTemplate exchange method.
I tried some other public APIs and it worked with the same code (just changing the URL), so I think it is not a problem with my code but maybe with the network proxy or something else.
The error text:
org.springframework.web.client.ResourceAccessException: I/O error on POST request for : Connection timed out: connect; nested exception is java.net.ConnectException: Connection timed out: connect
RestTemplate exchange call:
ResponseEntity<FooDto> responseEntity = restTemplate.exchange(restUrl, HttpMethod.POST, httpEntity, FooDto.class);
The way you define the RestTemplate is crucial here: you need to set the connection timeout and the read timeout.
It is important to tune them so they don't slow down your application's responsiveness.
SimpleClientHttpRequestFactory clientHttpRequestFactory = new SimpleClientHttpRequestFactory();
// Connect timeout: time is in milliseconds
clientHttpRequestFactory.setConnectTimeout(3000);
// Read timeout: time is in milliseconds
clientHttpRequestFactory.setReadTimeout(3000);
RestTemplate template = new RestTemplate(clientHttpRequestFactory);
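With those timeouts in place, an unreachable endpoint fails within a few seconds instead of hanging, and the failure surfaces as a ResourceAccessException you can handle around the exchange call. A rough sketch, reusing template from above and restUrl, httpEntity and FooDto from the question:
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.ResourceAccessException;

try {
    ResponseEntity<FooDto> responseEntity =
            template.exchange(restUrl, HttpMethod.POST, httpEntity, FooDto.class);
    // 2xx response: use responseEntity.getBody()
} catch (ResourceAccessException e) {
    // connect/read timeout, refused connection, proxy dropping the request, ...
    System.err.println("POST to " + restUrl + " failed: " + e.getMessage());
}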
When attempting to publish a Service Fabric application to a local cluster, the cluster fails to load the application stating the error in the title. The stack trace points me to an exception line in OwinCommunicationListener.cs:
try
{
    this.eventSource.LogInfo("Starting web server on " + this.listeningAddress);
    this.webApp = WebApp.Start(this.listeningAddress, appBuilder => this.startup.Invoke(appBuilder));
    this.eventSource.LogInfo("Listening on " + this.publishAddress);
    return Task.FromResult(this.publishAddress);
}
catch (Exception ex)
{
    var logString = $"Web server failed to open endpoint {endpointName}. {ex.ToString()}";
    this.eventSource.LogFatal(logString);
    this.StopWebServer();
    throw ex; // points to this line from cluster manager
}
I am unable to inspect the exception thrown, but there is no useful exception information other than a TargetInvocationException with a stack trace to the line noted above. Why won't this application load on my local cluster?
It's hard to say without an actual exception message or stack trace. But judging by where the exception was thrown and the fact that the problem resolved itself the next morning, the most likely (and most common) cause is that the port you were trying to open the web listener on was taken by some other process at the time, and by the next morning the port was free again. This, by the way, isn't really specific to Service Fabric: you're just trying to open a socket on a port that is already taken by someone else.
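If you want to quickly rule a port conflict in or out, a simple bind test is usually enough. The snippet below is a generic Java/JVM sketch rather than anything Service Fabric specific, with a placeholder port number; on Windows, netstat -ano gives you the same answer plus the owning process ID.
import java.io.IOException;
import java.net.ServerSocket;

public final class PortCheck {
    // Returns true if nothing is currently bound to the given port
    static boolean portIsFree(int port) {
        try (ServerSocket ignored = new ServerSocket(port)) {
            return true;   // bind succeeded, the port is free
        } catch (IOException e) {
            return false;  // bind failed, another process owns the port
        }
    }

    public static void main(String[] args) {
        // 8080 is a placeholder; use the port from your listeningAddress
        System.out.println("Port free? " + portIsFree(8080));
    }
}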
I'm honestly more curious about why you couldn't inspect the exception. I can think of three things off the top of my head to help with that:
Use "throw" instead of "throw ex" so you don't reset the stack trace.
Look at your logs. It looks like you're writing out an ETW event in your catch block. What did it say?
Use the Visual Studio debugger: Simply set a breakpoint in the catch block and start the application with debugging by pressing F5.
I am writing an apache-camel RabbitMQ consumer. I would like to react somehow to connection problems (i.e. try to reconnect). Is it possible to configure apache-camel to automatically reconnect?
If not, how can I find out that a connection to the queue was interrupted? I've done the following test:
start the queue (and some producer)
start my consumer (it was getting messages as expected)
stop the queue (the messages stopped arriving, as expected, but no exception was thrown)
start the queue (no new messages were received)
I am using Camel from Scala (via akka-camel), but a Java solution would probably also be OK.
You can pass the flag automaticRecoveryEnabled=true in the URI, and Camel will reconnect if the connection is lost.
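For example (a sketch only: the exchange/queue names and credentials are placeholders, and option names can vary slightly between Camel versions):
import org.apache.camel.builder.RouteBuilder;

public class RabbitConsumerRoute extends RouteBuilder {
    @Override
    public void configure() {
        // automaticRecoveryEnabled is handed down to the underlying RabbitMQ Java client
        from("rabbitmq://localhost:5672/my.exchange"
                + "?queue=my.queue"
                + "&username=guest&password=guest"
                + "&automaticRecoveryEnabled=true")
            .to("log:incoming");
    }
}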
For automatic RabbitMQ resource recovery (connections/channels/consumers/queues/exchanges/bindings) when failures occur, check out Lyra (which I authored). Example usage:
Config config = new Config()
.withRecoveryPolicy(new RecoveryPolicy()
.withMaxAttempts(20)
.withInterval(Duration.seconds(1))
.withMaxDuration(Duration.minutes(5)));
ConnectionOptions options = new ConnectionOptions().withHost("localhost");
Connection connection = Connections.create(options, config);
The rest of the API is just the amqp-client API, except your resources are automatically recovered when failures occur.
I'm not sure about camel-rabbitmq specifically, but hopefully there's a way you can swap in your own resource creation via Lyra.
The current camel-rabbitmq component just creates the connection and channel when the consumer or producer is started, so it doesn't get a chance to catch the connection exception :(.
When I run this test with an invalid hostname or invalid user/password, it waits about 2 minutes before failing. I would ideally like it to fail immediately if the user/password is incorrect or the hostname/port is wrong.
HikariConfig config = new HikariConfig();
config.setMaximumPoolSize(1);
config.setDataSourceClassName("com.mysql.jdbc.jdbc2.optional.MysqlDataSource");
config.addDataSourceProperty("serverName", "localhost");
config.addDataSourceProperty("url", "jdbc:mysql://localhost:3306/project_one?useServerPrepStmts=true&autoReconnect=false");
config.addDataSourceProperty("port", "3306");
config.addDataSourceProperty("databaseName", "project_one");
config.addDataSourceProperty("user", "root");
config.addDataSourceProperty("password", "");
config.addDataSourceProperty("autoReconnect", false);
HikariDataSource ds = new HikariDataSource(config);
Connection connection = ds.getConnection();
Statement statement = connection.createStatement();
statement.executeQuery("SELECT 1");
Yes, release 1.2.9 introduced a fail-fast option (current release is 1.3.3). The configuration property is initializationFailFast. Set that to true, and the pool should fail quickly. Enabling debug logging for com.zaxxer.hikari in your logging framework (log4j, slf4j, etc) can provide more details about why the connection failure occurred.
Starting from v3.0.0 of HikariCP, you have to use the property initializationFailTimeout.
The property initializationFailFast was deprecated in v2.4.10 and removed in v3.0.0.
Related issue #770 and pull request #771
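For example, a minimal sketch against a recent HikariCP (reusing the JDBC URL and credentials from the question; on versions before 3.0 the call would be config.setInitializationFailFast(true) instead):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://localhost:3306/project_one");
config.setUsername("root");
config.setPassword("");
// Give up after ~1 second if the first connection cannot be established
// (bad credentials, wrong host/port, database down) instead of retrying
config.setInitializationFailTimeout(1000);

try (HikariDataSource ds = new HikariDataSource(config)) {
    // reaching this point means the pool obtained its first connection
}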
Following the recommended transaction setup for Squeryl, in my Boot.scala:
import net.liftweb.squerylrecord.SquerylRecord
import org.squeryl.Session
import org.squeryl.adapters.H2Adapter
SquerylRecord.initWithSquerylSession(Session.create(
DriverManager.getConnection("jdbc:h2:lift_proto.db;DB_CLOSE_DELAY=-1", "sa", ""),
new H2Adapter
))
The first startup works fine. I can connect via H2's web-interface and if I use my app, it updates the database appropriately. However if I restart jetty without restarting the JVM, I get:
java.sql.SQLException: No suitable driver found for jdbc:h2:lift_proto.db;DB_CLOSE_DELAY=-1
The same result is had if I replace "DB_CLOSE_DELAY=-1" with "AUTO_SERVER=TRUE", or remove it entirely.
Following the recommendations on the Squeryl list, I tried C3P0:
import com.mchange.v2.c3p0.ComboPooledDataSource
val cpds = new ComboPooledDataSource
cpds.setDriverClass("org.h2.Driver")
cpds.setJdbcUrl("jdbc:h2:lift_proto")
cpds.setUser("sa")
cpds.setPassword("")
org.squeryl.SessionFactory.concreteFactory =
Some(() => Session.create(
cpds.getConnection, new H2Adapter())
)
This produces similar behavior:
WARNING: A C3P0Registry mbean is already registered. This probably means that an application using c3p0 was undeployed, but not all PooledDataSources were closed prior to undeployment. This may lead to resource leaks over time. Please take care to close all PooledDataSources.
To be sure it wasn't anything I was doing which was causing this, I started and stopped the server without calling a transaction { } block. No exceptions were thrown. I then added to my Boot.scala:
transaction { /* Do nothing */ }
And the exception was once again thrown (I'm assuming because connections are lazy). So I moved the db initialization code to its own file away from Lift:
SessionFactory.concreteFactory = Some(()=>
Session.create(
java.sql.DriverManager.getConnection("jdbc:h2:mem:test", "sa", ""),
new H2Adapter
))
transaction {}
Results were unchanged. What am I doing wrong? I cannot find any mention of needing to explicitly close connections or sessions in the Squeryl documentation, and this is my first time using JDBC.
I found mention of the same issue here on the Lift google group, but no resolution.
Thanks for any help.
When you say you are restarting Jetty, I think what you're actually doing is reloading your webapp within Jetty. Neither the H2 database nor C3P0 will automatically shut down when your app reloads, which explains the errors you are receiving when Lift tries to initialize them a second time. You don't see the error when you don't create a transaction block because both H2 and C3P0 are initialized lazily, when the first DB connection is retrieved.
I tend to use BoneCP as a connection pool myself. You can configure the minimum number of pooled connections to be > 1, which will stop H2 from shutting down without the need for DB_CLOSE_DELAY=-1. Then you can use:
LiftRules.unloadHooks append { () =>
  yourPool.close() // should destroy the pool and its associated threads
}
That will close all of the connections when Lift is shutdown, which should properly shutdown the h2 database as well.