Is there a way to configure Hikari to "fail fast" if the database connection data is invalid?

When I run this test with an invalid hostname or incorrect user/password, it waits about two minutes before failing. Ideally, I would like it to fail immediately if the user/password is incorrect, or if the hostname/port is wrong.
import java.sql.Connection;
import java.sql.Statement;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setMaximumPoolSize(1);
config.setDataSourceClassName("com.mysql.jdbc.jdbc2.optional.MysqlDataSource");
config.addDataSourceProperty("serverName", "localhost");
config.addDataSourceProperty("url", "jdbc:mysql://localhost:3306/project_one?useServerPrepStmts=true&autoReconnect=false");
config.addDataSourceProperty("port", "3306");
config.addDataSourceProperty("databaseName", "project_one");
config.addDataSourceProperty("user", "root");
config.addDataSourceProperty("password", "");
config.addDataSourceProperty("autoReconnect", false);

HikariDataSource ds = new HikariDataSource(config);
Connection connection = ds.getConnection();
Statement statement = connection.createStatement();
statement.executeQuery("SELECT 1");

Yes, release 1.2.9 introduced a fail-fast option (current release is 1.3.3). The configuration property is initializationFailFast. Set that to true, and the pool should fail quickly. Enabling debug logging for com.zaxxer.hikari in your logging framework (log4j, slf4j, etc) can provide more details about why the connection failure occurred.
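For example (a sketch against the pre-3.0 API, assuming the setter follows the property name as HikariConfig's JavaBean-style setters generally do):

// Fail pool initialization immediately if the first connection cannot be made
config.setInitializationFailFast(true);
// The constructor then throws on bad credentials or an unreachable host
HikariDataSource ds = new HikariDataSource(config);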

Starting from v3.0.0 of HikariCP, you have to use the property initializationFailTimeout.
The property initializationFailFast was deprecated in v2.4.10 and removed in v3.0.0.
Related issue #770 and pull request #771
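A sketch of the 3.x equivalent (per the HikariCP docs, a positive value is the number of milliseconds to wait for an initial connection before startup fails; a negative value skips the initial connection attempt entirely):

// HikariCP >= 3.0: fail startup if no connection can be acquired within 1 second
config.setInitializationFailTimeout(1000);
HikariDataSource ds = new HikariDataSource(config); // throws PoolInitializationException on failure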

How to query Flink's queryable state

I am using Flink 1.8.0 and I am trying to query my job state.
val descriptor = new ValueStateDescriptor("myState", Types.CASE_CLASS[Foo])
descriptor.setQueryable("my-queryable-State")
I used port 9067, which is the default port according to this. My client:
val client = new QueryableStateClient("127.0.0.1", 9067)
val jobId = JobID.fromHexString("d48a6c980d1a147e0622565700158d9e")
val execConfig = new ExecutionConfig
val descriptor = new ValueStateDescriptor("my-queryable-State", Types.CASE_CLASS[Foo])
val res: Future[ValueState[Foo]] = client.getKvState(jobId, "my-queryable-State","a", BasicTypeInfo.STRING_TYPE_INFO, descriptor)
res.map(_.toString).pipeTo(sender)
but I am getting :
[ERROR] [06/25/2019 20:37:05.499] [bvAkkaHttpServer-akka.actor.default-dispatcher-5] [akka.actor.ActorSystemImpl(bvAkkaHttpServer)] Error during processing of request: 'org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:9067'. Completing with 500 Internal Server Error response. To change default exception handling behavior, provide a custom ExceptionHandler.
java.util.concurrent.CompletionException: org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:9067
What am I doing wrong?
How and where should I define QueryableStateOptions?
If you want to use queryable state, you need to add the proper jar to your Flink installation. The jar is flink-queryable-state-runtime; it can be found in the opt folder of your Flink distribution, and you should move it to the lib folder.
As for the second question, QueryableStateOptions is just a class that is used to create static ConfigOption definitions. Those definitions are then used to read the configuration from the flink-conf.yaml file, so currently the only way to configure queryable state is through the flink-conf.yaml file in your Flink distribution.
EDIT: Also, try reading the Flink documentation on queryable state; it provides more info on how queryable state works. You shouldn't connect directly to the server port; instead, you should use the proxy port, which by default is 9069.
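For illustration, a minimal Java sketch of a client that talks to the proxy (a sketch under these assumptions: the flink-queryable-state-runtime jar has been moved to lib, queryable state is enabled, and the state type is String rather than the question's case class; the job ID and state name are taken from the question):

import java.util.concurrent.CompletableFuture;

import org.apache.flink.api.common.JobID;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.queryablestate.client.QueryableStateClient;

// Connect to the proxy port (9069 by default), not the server port (9067)
QueryableStateClient client = new QueryableStateClient("127.0.0.1", 9069);

JobID jobId = JobID.fromHexString("d48a6c980d1a147e0622565700158d9e");
ValueStateDescriptor<String> descriptor =
        new ValueStateDescriptor<>("my-queryable-State", Types.STRING);

CompletableFuture<ValueState<String>> future = client.getKvState(
        jobId, "my-queryable-State", "a",
        BasicTypeInfo.STRING_TYPE_INFO, descriptor);

future.thenAccept(state -> {
    try {
        System.out.println(state.value());
    } catch (Exception e) {
        e.printStackTrace();
    }
});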

How to properly handle exceptions in MongoClient for Vert.x

In the startup method of my application I want to check that the credentials for MongoDB provided to the application are OK. If they are OK, I continue the startup, if not, the application is supposed to exit as it cannot connect to the DB. The code snippet is as below:
// Create the client
MongoClient mongodb = null;
try {
    mongodb = MongoClient.createShared(vertx, mongo_cnf, mongo_cnf.getString("pool_name"));
}
catch (Exception e) {
    log.error("Unable to create MongoDB client. Cause: '{}'. Bailing out", e.getMessage());
    System.exit(-1);
}
If I provide wrong credentials, the catch block is not called. Yet I get the following on the console:
19:35:43.017 WARN org.mongodb.driver.connection - Exception thrown during connection pool background maintenance task
com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=null, userName='user', source='admin', password=<hidden>, mechanismProperties={}}
at com.mongodb.connection.SaslAuthenticator.wrapException(SaslAuthenticator.java:162)
at com.mongodb.connection.SaslAuthenticator.access$200(SaslAuthenticator.java:39)
... many lines
The question is: how do I intercept this exception in my code and handle it properly?
The exception is thrown in the MongoDB Java driver's daemon thread, so you cannot catch it.
The Vert.x MongoClient abstracts away direct interaction with the MongoDB Java driver, so you can't modify anything related to the underlying client.
You could access the Mongo client instance via reflection, but since it has already been created, you cannot pass additional configuration to it.
If you used com.mongodb.async.client.MongoClient you could pass a ServerListener, which could access the exception so you could examine it (see this answer for more details: https://stackoverflow.com/a/46526000/1126831).
But the ServerListener can only be specified when the Mongo client is constructed, which happens inside the Vert.x MongoClient wrapper, and there is no way to pass this additional configuration through; a sketch of the listener approach with the plain driver follows below.
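For illustration only, a minimal sketch using the plain MongoDB Java driver's 3.7+ settings API, outside the Vert.x wrapper (the connection string and the reaction to the failure are illustrative assumptions):

import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.event.ServerClosedEvent;
import com.mongodb.event.ServerDescriptionChangedEvent;
import com.mongodb.event.ServerListener;
import com.mongodb.event.ServerOpeningEvent;

ServerListener listener = new ServerListener() {
    @Override
    public void serverOpening(ServerOpeningEvent event) { }

    @Override
    public void serverClosed(ServerClosedEvent event) { }

    @Override
    public void serverDescriptionChanged(ServerDescriptionChangedEvent event) {
        // Authentication failures surface here as the new description's exception
        if (event.getNewDescription().getException() != null) {
            // e.g. log the cause and trigger an orderly shutdown
        }
    }
};

MongoClientSettings settings = MongoClientSettings.builder()
        .applyConnectionString(new ConnectionString("mongodb://user:password@localhost:27017/admin"))
        .applyToServerSettings(b -> b.addServerListener(listener))
        .build();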
Currently the exception is not thrown, which is in my opinion a design mistake, since you receive an object that you cannot work with. Feel free to open a bug: https://github.com/vert-x3/vertx-mongo-client/issues
What you can do to detect that your client is "dead on arrival" is to wait for a connection timeout:
// Default serverSelectionTimeoutMS is 30s, which is quite long
JsonObject config = new JsonObject().put("serverSelectionTimeoutMS", 5_000);
MongoClient client = MongoClient.createShared(vertx, config, "pool_name");

JsonObject query = new JsonObject();  // empty query: match any document
JsonObject fields = new JsonObject(); // empty projection: return all fields
client.findOne("some_collection", query, fields, h -> {
    if (h.succeeded()) {
        // ...
    }
    else {
        // Notify that the client is dead
    }
});

Smack API throws "Session establishment not offered by server" Exception

I'm trying to connect to the Vines XMPP server from Java using the Smack API. However, when I use the following connection code:
ConnectionConfiguration connectionConfiguration = new ConnectionConfiguration("localhost", 5222);
this.connection = new XMPPConnection(connectionConfiguration);
connection.connect();
connection.login(AUCTION_LOGIN, AUCTION_PASSWORD, AUCTION_RESOURCE);
I receive the following error message:
Caused by: Session establishment not offered by server:
at org.jivesoftware.smack.SASLAuthentication.bindResourceAndEstablishSession(SASLAuthentication.java:456)
I understand the issue to be related to sessions now being deprecated from the XMPP protocol. I've been unsuccessful in finding a way to use the ConnectionConfiguration class and others to work around this problem.
You appear to be using an old, vulnerable Smack version (i.e., < 4.0.0).
The current stable Smack version does not throw such an exception, as it considers session binding to be optional. It can even be completely disabled with setLegacySessionDisabled(true).
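For illustration, a rough sketch against a Smack 4.1-era API (hedged: builder method names changed across 4.x releases, and setLegacySessionDisabled is a static switch; the AUCTION_* constants come from the question):

import org.jivesoftware.smack.tcp.XMPPTCPConnection;
import org.jivesoftware.smack.tcp.XMPPTCPConnectionConfiguration;

// Opt out of legacy session establishment entirely
XMPPTCPConnection.setLegacySessionDisabled(true);

XMPPTCPConnectionConfiguration config = XMPPTCPConnectionConfiguration.builder()
        .setHost("localhost")
        .setPort(5222)
        .setServiceName("localhost") // the XMPP domain
        .setUsernameAndPassword(AUCTION_LOGIN, AUCTION_PASSWORD)
        .setResource(AUCTION_RESOURCE)
        .build();

XMPPTCPConnection connection = new XMPPTCPConnection(config);
connection.connect();
connection.login();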

Handling connection failures in apache-camel

I am writing an apache-camel RabbitMQ consumer. I would like to react somehow to connection problems (i.e. try to reconnect). Is it possible to configure apache-camel to automatically reconnect?
If not, how can I find out that a connection to the queue was interrupted? I've done the following test:
start the queue (and some producer)
start my consumer (it was getting messages as expected)
stop the queue (the messages stopped arriving, as expected, but no exception was thrown)
start the queue (no new messages were received)
I am using Camel in Scala (via akka-camel), but a Java solution would probably also be OK.
You can pass the flag automaticRecoveryEnabled=true in the endpoint URI, and Camel will reconnect if the connection is lost.
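For example, a sketch of a consumer route with the flag set (hedged: the exchange, queue, and log endpoint names are illustrative):

import org.apache.camel.builder.RouteBuilder;

public class RabbitMqConsumerRoute extends RouteBuilder {
    @Override
    public void configure() {
        // automaticRecoveryEnabled tells the underlying RabbitMQ client
        // to re-establish the connection if it is lost
        from("rabbitmq://localhost:5672/my.exchange"
                + "?automaticRecoveryEnabled=true&queue=my.queue")
            .to("log:received");
    }
}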
For automatic recovery of RabbitMQ resources (connections/channels/consumers/queues/exchanges/bindings) when failures occur, check out Lyra (which I authored). Example usage:
Config config = new Config()
    .withRecoveryPolicy(new RecoveryPolicy()
        .withMaxAttempts(20)
        .withInterval(Duration.seconds(1))
        .withMaxDuration(Duration.minutes(5)));

ConnectionOptions options = new ConnectionOptions().withHost("localhost");
Connection connection = Connections.create(options, config);
The rest of the API is just the amqp-client API, except your resources are automatically recovered when failures occur.
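For instance, the recovered connection is used exactly like a plain amqp-client connection (the queue name here is illustrative):

import com.rabbitmq.client.Channel;

Channel channel = connection.createChannel();
channel.queueDeclare("my.queue", true, false, false, null);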
I'm not sure about camel-rabbitmq specifically, but hopefully there's a way you can swap in your own resource creation via Lyra.
The current camel-rabbitmq just creates the connection and channel when the consumer or producer is started, so it doesn't get a chance to catch the connection exception :(.

Squeryl 0.9.5 (with Lift 2.4) not releasing database connections/pools

Following the recommended transaction setup for Squeryl, in my Boot.scala:
import java.sql.DriverManager
import net.liftweb.squerylrecord.SquerylRecord
import org.squeryl.Session
import org.squeryl.adapters.H2Adapter

SquerylRecord.initWithSquerylSession(Session.create(
  DriverManager.getConnection("jdbc:h2:lift_proto.db;DB_CLOSE_DELAY=-1", "sa", ""),
  new H2Adapter
))
The first startup works fine. I can connect via H2's web-interface and if I use my app, it updates the database appropriately. However if I restart jetty without restarting the JVM, I get:
java.sql.SQLException: No suitable driver found for jdbc:h2:lift_proto.db;DB_CLOSE_DELAY=-1
I get the same result if I replace "DB_CLOSE_DELAY=-1" with "AUTO_SERVER=TRUE", or remove it entirely.
Following the recommendations on the Squeryl list, I tried C3P0:
import com.mchange.v2.c3p0.ComboPooledDataSource

val cpds = new ComboPooledDataSource
cpds.setDriverClass("org.h2.Driver")
cpds.setJdbcUrl("jdbc:h2:lift_proto")
cpds.setUser("sa")
cpds.setPassword("")

org.squeryl.SessionFactory.concreteFactory =
  Some(() => Session.create(cpds.getConnection, new H2Adapter()))
This produces similar behavior:
WARNING: A C3P0Registry mbean is already registered. This probably means that an application using c3p0 was undeployed, but not all PooledDataSources were closed prior to undeployment. This may lead to resource leaks over time. Please take care to close all PooledDataSources.
To be sure it wasn't anything I was doing which was causing this, I started and stopped the server without calling a transaction { } block. No exceptions were thrown. I then added to my Boot.scala:
transaction { /* Do nothing */ }
And the exception was once again thrown (I'm assuming because connections are lazy). So I moved the db initialization code to its own file away from Lift:
SessionFactory.concreteFactory = Some(() =>
  Session.create(
    java.sql.DriverManager.getConnection("jdbc:h2:mem:test", "sa", ""),
    new H2Adapter
  ))

transaction {}
Results were unchanged. What am I doing wrong? I cannot find any mention of needing to explicitly close connections or sessions in the Squeryl documentation, and this is my first time using JDBC.
I found mention of the same issue here on the Lift google group, but no resolution.
Thanks for any help.
When you say you are restarting Jetty, I think what you're actually doing is reloading your webapp within Jetty. Neither the H2 database nor C3P0 will automatically shut down when your app reloads, which explains the errors you receive when Lift tries to initialize them a second time. You don't see the error when you don't create a transaction block because both H2 and C3P0 are initialized lazily, when the first DB connection is retrieved.
I tend to use BoneCP as a connection pool myself. You can configure the minimum number of pooled connections to be > 1, which will stop h2 from shutting down without the need for DB_CLOSE_DELAY=-1. Then you can use:
LiftRules.unloadHooks append { () =>
  yourPool.close() // should destroy the pool and its associated threads
}
That will close all of the connections when Lift is shut down, which should properly shut down the H2 database as well.
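For reference, a rough Java sketch of the BoneCP configuration described above (hedged: pool sizing and the JDBC URL are illustrative; in a Lift app you would create the pool in Boot.scala and close it from the unload hook shown above):

import com.jolbox.bonecp.BoneCPDataSource;

BoneCPDataSource pool = new BoneCPDataSource();
pool.setDriverClass("org.h2.Driver");
pool.setJdbcUrl("jdbc:h2:lift_proto");
pool.setUsername("sa");
pool.setPassword("");
pool.setMinConnectionsPerPartition(2); // > 1 keeps H2 from shutting down between uses
pool.setPartitionCount(1);

// from the unload hook:
pool.close();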