While doing a stress test on my application, I get the following error:
com.mongodb.MongoWaitQueueFullException: Too many threads are already waiting for a connection. Max number of threads (maxWaitQueueSize) of 500 has been exceeded.
So I am thinking of raising the maxWaitQueueSize limit via configuration.
I am using Spring Boot to configure the MongoDB connection. I am using @EnableAutoConfiguration in my application, and I have declared only spring.data.mongodb.uri=mongodb://user:password@ip:27017 in the application.properties file.
How do I configure the maxWaitQueueSize property with spring boot?
How do I decide a good value for the maxWaitQueueSize?
If you're using MongoDB 3.0+, you can set waitQueueMultiple in your Mongo URI:
spring.data.mongodb.uri=mongodb://user:password@ip:27017/?waitQueueMultiple=10
waitQueueMultiple is a number that the driver multiplies the maxPoolSize value by, to obtain the maximum number of threads allowed to wait for a connection to become available from the pool.
How do I decide a good value for the maxWaitQueueSize?
It's not directly related to MongoDB, but you can read more about pool sizing in the HikariCP GitHub wiki.
In com.mongodb.MongoClientURI, you can find the URI parameters that are mapped onto MongoClientOptions:
if (key.equals("maxpoolsize")) {
builder.connectionsPerHost(Integer.parseInt(value));
} else if (key.equals("minpoolsize")) {
builder.minConnectionsPerHost(Integer.parseInt(value));
} else if (key.equals("maxidletimems")) {
builder.maxConnectionIdleTime(Integer.parseInt(value));
} else if (key.equals("maxlifetimems")) {
builder.maxConnectionLifeTime(Integer.parseInt(value));
} else if (key.equals("waitqueuemultiple")) {
builder.threadsAllowedToBlockForConnectionMultiplier(Integer.parseInt(value));
} else if (key.equals("waitqueuetimeoutms")) {
builder.maxWaitTime(Integer.parseInt(value));
} else if (key.equals("connecttimeoutms")) {
builder.connectTimeout(Integer.parseInt(value));
} else if (key.equals("sockettimeoutms")) {
builder.socketTimeout(Integer.parseInt(value));
} else if (key.equals("autoconnectretry")) {
builder.autoConnectRetry(_parseBoolean(value));
} else if (key.equals("replicaset")) {
builder.requiredReplicaSetName(value);
} else if (key.equals("ssl")) {
if (_parseBoolean(value)) {
builder.socketFactory(SSLSocketFactory.getDefault());
}
}
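So, to raise the limit via the connection string with Spring Boot, you can combine maxPoolSize and waitQueueMultiple in application.properties (a sketch; the numbers are placeholders to tune for your load):
spring.data.mongodb.uri=mongodb://user:password@ip:27017/?maxPoolSize=200&waitQueueMultiple=10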
I am using the Spring Boot WebFlux starter, and this issue also happens there. The whole application is located at https://github.com/yigubigu/webfluxbenchmark, where I tried to benchmark the performance of WebFlux against the original MVC. I tried to add a MongoClientFactoryBean, but it doesn't work:
@Bean
public MongoClientFactoryBean mongoClientFactoryBean() {
    MongoClientFactoryBean factoryBean = new MongoClientFactoryBean();
    factoryBean.setHost("localhost");
    factoryBean.setPort(27017);
    factoryBean.setSingleton(true);
    MongoClientOptions options = MongoClientOptions.builder()
            .connectionsPerHost(1000)
            .minConnectionsPerHost(500)
            .threadsAllowedToBlockForConnectionMultiplier(10)
            .build();
    factoryBean.setMongoClientOptions(options);
    return factoryBean;
}
You can achieve this by injecting a MongoClientOptions object into your MongoTemplate.
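For example, a minimal sketch for Spring Boot 1.x with the 3.x driver, assuming (as the auto-configuration suggests) that a MongoClientOptions bean is picked up when the MongoClient is built; the values are placeholders:

import com.mongodb.MongoClientOptions;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MongoConfig {

    // Hedged sketch: Spring Boot's Mongo auto-configuration uses this bean,
    // if present, when building the MongoClient. The values are placeholders.
    @Bean
    public MongoClientOptions mongoClientOptions() {
        return MongoClientOptions.builder()
                .connectionsPerHost(100)
                .threadsAllowedToBlockForConnectionMultiplier(10)
                .build();
    }
}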
This maxWaitQueueSize limit is computed here in the Java client source code:
https://github.com/mongodb/mongo-java-driver/blob/3.10.x/driver-core/src/main/com/mongodb/connection/ConnectionPoolSettings.java#L273
It is the product of maxConnectionPoolSize and threadsAllowedToBlockForConnectionMultiplier, and hence can be modified through ?maxPoolSize= and ?waitQueueMultiple= in the connection URI.
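For example, with the driver defaults of maxPoolSize=100 and waitQueueMultiple=5 the product is 100 × 5 = 500 waiting threads, which matches the limit of 500 reported in the error above; raising either factor raises the limit.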
Related
I'm upgrading my Grails 2.4 web application to Grails 3, and I'm considering switching from my custom DAO to GORM for my Mongo database.
I'm trying to understand how to setup GORM correctly, in particular about connection options, but its documentation is slightly misleading to me.
The Advanced Configuration ("Mongo Database Connection Configuration") states
Available options and their descriptions are defined in the MongoOptions javadoc.
so I'm tempted to assume that I'm allowed to use any of those options.
But later on in the same section (Configuration Options Guide) I read
Below is a complete example showing all configuration options:
showing only 9 options.
My issue is 'converting' my custom DAO
MongoClientOptions options = new MongoClientOptions.Builder()
        .connectionsPerHost(1000)
        .threadsAllowedToBlockForConnectionMultiplier(5)
        .maxWaitTime(4000)
        .socketTimeout(2000)
        .build();
List<ServerAddress> list = getMongoReplicaSet();
mongo = new MongoClient(list, options);
mongo.setReadPreference(ReadPreference.nearest());
to an equivalent configuration
grails {
    mongodb {
        options {
            connectionsPerHost = 1000
            threadsAllowedToBlockForConnectionMultiplier = 5
            maxWaitTime = 4000
            socketTimeout = 2000
        }
    }
}
but how to define the read preference? Am I allowed to do something like this?
grails {
    mongodb {
        options {
            readPreference = com.mongodb.ReadPreference.nearest()
        }
    }
}
Thanks in advance!
Yes, you can set anything in the MongoClientOptions.Builder class via configuration, although your syntax is wrong; it should be:
grails {
    mongodb {
        options {
            readPreference = com.mongodb.ReadPreference.nearest()
        }
    }
}
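For instance, merging the read preference with the options carried over from the custom DAO above should then look like this (a sketch; the values are simply the ones from the question):

grails {
    mongodb {
        options {
            connectionsPerHost = 1000
            threadsAllowedToBlockForConnectionMultiplier = 5
            maxWaitTime = 4000
            socketTimeout = 2000
            readPreference = com.mongodb.ReadPreference.nearest()
        }
    }
}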
I'm trying to follow the steps listed at http://dev.mysql.com/doc/connector-j/en/connector-j-master-slave-replication-connection.html which states
To enable this functionality, use the com.mysql.jdbc.ReplicationDriver
class when configuring your application server's connection pool
From https://github.com/brettwooldridge/HikariCP - it says
HikariCP will attempt to resolve a driver through the DriverManager
based solely on the jdbcUrl
So is this configuration all that's needed?
db.default.url=jdbc:mysql:replication ...
Squeryl has a number of DB adapters, but my understanding is that these are unrelated?
http://squeryl.org/api/index.html#org.squeryl.adapters.MySQLInnoDBAdapter
Sorry for the keyword loading; I'm just not too sure where I need to focus.
Thanks
Brent
For people hitting this in 2020: Hikari uses
com.mysql.jdbc.jdbc2.optional.MysqlDataSource
as a data source. If I look at the code of the above class, it has a method named getConnection which returns a Connection instance:
protected Connection getConnection(Properties props) throws SQLException {
    String jdbcUrlToUse = null;
    if (!this.explicitUrl) {
        StringBuffer jdbcUrl = new StringBuffer("jdbc:mysql://");
        if (this.hostName != null) {
            jdbcUrl.append(this.hostName);
        }
        jdbcUrl.append(":");
        jdbcUrl.append(this.port);
        jdbcUrl.append("/");
        if (this.databaseName != null) {
            jdbcUrl.append(this.databaseName);
        }
        jdbcUrlToUse = jdbcUrl.toString();
    } else {
        jdbcUrlToUse = this.url;
    }

    Properties urlProps = mysqlDriver.parseURL(jdbcUrlToUse, (Properties) null);
    urlProps.remove("DBNAME");
    urlProps.remove("HOST");
    urlProps.remove("PORT");
    Iterator keys = urlProps.keySet().iterator();
    while (keys.hasNext()) {
        String key = (String) keys.next();
        props.setProperty(key, urlProps.getProperty(key));
    }

    return mysqlDriver.connect(jdbcUrlToUse, props);
}
where mysqlDriver is declared as
protected static final NonRegisteringDriver mysqlDriver;
If I check the connect method of the NonRegisteringDriver class, it looks like this:
public Connection connect(String url, Properties info) throws SQLException {
    if (url != null) {
        if (StringUtils.startsWithIgnoreCase(url, "jdbc:mysql:loadbalance://")) {
            return this.connectLoadBalanced(url, info);
        }
        if (StringUtils.startsWithIgnoreCase(url, "jdbc:mysql:replication://")) {
            return this.connectReplicationConnection(url, info);
        }
    }

    Properties props = null;
    if ((props = this.parseURL(url, info)) == null) {
        return null;
    } else if (!"1".equals(props.getProperty("NUM_HOSTS"))) {
        return this.connectFailover(url, info);
    } else {
        try {
            com.mysql.jdbc.Connection newConn = ConnectionImpl.getInstance(this.host(props), this.port(props), props, this.database(props), url);
            return newConn;
        } catch (SQLException var6) {
            throw var6;
        } catch (Exception var7) {
            SQLException sqlEx = SQLError.createSQLException(Messages.getString("NonRegisteringDriver.17") + var7.toString() + Messages.getString("NonRegisteringDriver.18"), "08001", (ExceptionInterceptor) null);
            sqlEx.initCause(var7);
            throw sqlEx;
        }
    }
}
After looking at the code, it looks like replication URLs are supported. I haven't tried it yet; I will try and let you know from personal experience. From the code, it looks directly feasible.
Squeryl offers different MySQL adapters because InnoDB supports referential keys while MyISAM does not. It seems like what you're doing should be handled at the connection pool level, so I don't think your Squeryl configuration will have an effect.
I've never configured Hikari for replicated MySQL, but if it requires an alternative JDBC driver I'd be surprised if you can provide a JDBC URL and everything just works. I'm guessing that Hikari's default functionality is to pick the plain vanilla MySQL JDBC driver unless you tell it otherwise. Luckily, Hikari has quite a few config options including the ability to set a specific driverClassName.
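For illustration, here is a minimal sketch of pinning the driver class in HikariCP (host names, database, and credentials are placeholders; also note the follow-up below reporting that this combination does not actually work):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class ReplicationPool {

    // Hedged sketch: point Hikari at the replication driver explicitly instead
    // of letting it resolve a driver from the URL alone. Hosts, database, and
    // credentials are placeholders, not tested configuration.
    public static HikariDataSource create() {
        HikariConfig config = new HikariConfig();
        config.setDriverClassName("com.mysql.jdbc.ReplicationDriver");
        config.setJdbcUrl("jdbc:mysql:replication://master-host,replica-host/mydb");
        config.setUsername("user");
        config.setPassword("password");
        return new HikariDataSource(config);
    }
}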
Replication allows for a different URL:
jdbc:mysql:replication://[server1],[server2],[server2]/[database]
I've never tried it, but I assume this will resolve to the ReplicationDriver.
And I find myself back here. Please note: Hikari doesn't support the ReplicationDriver.
https://github.com/brettwooldridge/HikariCP/issues/625#issuecomment-251613688
MySQL Replication Driver simply does NOT work together with HikariCP.
And
https://groups.google.com/forum/#!msg/hikari-cp/KtKgzR8COrE/higEHoPkAwAJ
... nobody running anything resembling a mission critical application takes MySQL's driver-level replication support seriously.
For a project I'm working on, we need to write PaxExam integration tests that run across multiple Karaf containers.
The idea would be to find a way to extend/configure PaxExam to start up one or more Karaf containers, deploy a bunch of bundles there, and then start the test Karaf container which will then test the functionality.
We need this for performance tests and other things.
Does someone know anything about that? Is that actually possible in PaxExam?
I'll answer my own question, after having found this interesting article.
In particular, have a look at the sections Using the Karaf Shell and Distributed integration tests in Karaf:
http://planet.jboss.org/post/advanced_integration_testing_with_pax_exam_karaf
This is basically what the article says:
First of all, you have to change the test probe header to allow dynamic package imports:
@ProbeBuilder
public TestProbeBuilder probeConfiguration(TestProbeBuilder probe) {
    probe.setHeader(Constants.DYNAMICIMPORT_PACKAGE, "*;status=provisional");
    return probe;
}
After that, the article suggests the following code, which is able to execute commands in the Karaf shell:
@Inject
CommandProcessor commandProcessor;

// The article assumes an executor and a timeout constant; declare them in the
// test class, for example:
private final ExecutorService executor = Executors.newCachedThreadPool();
private static final long COMMAND_TIMEOUT = 10000L;

protected String executeCommands(final String... commands) {
    String response;
    final ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    final PrintStream printStream = new PrintStream(byteArrayOutputStream);
    final CommandSession commandSession = commandProcessor.createSession(System.in, printStream, System.err);
    FutureTask<String> commandFuture = new FutureTask<String>(
            new Callable<String>() {
                public String call() {
                    try {
                        for (String command : commands) {
                            System.err.println(command);
                            commandSession.execute(command);
                        }
                    } catch (Exception e) {
                        e.printStackTrace(System.err);
                    }
                    return byteArrayOutputStream.toString();
                }
            });
    try {
        executor.submit(commandFuture);
        response = commandFuture.get(COMMAND_TIMEOUT, TimeUnit.MILLISECONDS);
    } catch (Exception e) {
        e.printStackTrace(System.err);
        response = "SHELL COMMAND TIMED OUT: ";
    }
    return response;
}
Then the rest is fairly trivial: you will have to implement a layer able to start up child instances of Karaf:
public void createInstances() {
    // Install broker feature that is provided by FuseESB
    executeCommands("admin:create --feature broker brokerChildInstance");
    // Install producer feature provided by an imaginary feature repo
    executeCommands("admin:create --featureURL mvn:imaginary/repo/1.0/xml/features --feature producer producerChildInstance");
    // Install consumer feature provided by an imaginary feature repo
    executeCommands("admin:create --featureURL mvn:imaginary/repo/1.0/xml/features --feature consumer consumerChildInstance");
    // Start the child instances
    executeCommands("admin:start brokerChildInstance");
    executeCommands("admin:start producerChildInstance");
    executeCommands("admin:start consumerChildInstance");
    // You will need to destroy the child instances once you are done;
    // using @After seems the right place to do that.
}
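The article leaves the teardown to the reader; a minimal sketch of it, assuming the executeCommands helper and the instance names used above, could be:

@After
public void destroyInstances() {
    // Hedged sketch: stop and destroy the child instances created in
    // createInstances(). admin:stop/admin:destroy are the standard Karaf 2.x
    // instance administration commands.
    for (String instance : new String[] { "brokerChildInstance", "producerChildInstance", "consumerChildInstance" }) {
        executeCommands("admin:stop " + instance);
        executeCommands("admin:destroy " + instance);
    }
}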
Our application is built upon a MongoDB replica set.
I'd like to catch all exceptions thrown during the time frame when the replica set is in the process of automatic failover.
I will make the application retry or wait for the failover to complete, so that the failover won't affect users.
I found a document describing the behavior of the Java driver here: https://jira.mongodb.org/browse/DOCS-581
I wrote a test program to find all possible exceptions; they are all MongoException but with different messages:
MongoException.Network: "Read operation to server /10.11.0.121:27017 failed on database test"
MongoException: "can't find a master"
MongoException: "not talking to master and retries used up"
MongoException: "No replica set members available in [ here is replica set status ] for { "mode" : "primary"}"
Maybe more...
I'm confused and not sure whether it is safe to distinguish them by error message.
Also, I don't want to catch all MongoExceptions.
Any suggestions?
Thanks
I am now of the opinion that Mongo in Java is particularly weak in this regard. I don't think your strategy of interpreting the error codes scales well or will survive driver evolution. This is, of course, opinion.
The good news is that the Mongo driver provides a way to get the status of a ReplicaSet: http://api.mongodb.org/java/2.11.1/com/mongodb/ReplicaSetStatus.html. You can use it directly to figure out whether there is a Master visible to your application. If that is all you want to know, then http://api.mongodb.org/java/2.11.1/com/mongodb/Mongo.html#getReplicaSetStatus() is all you need. Grab that, check for a non-null master, and you are on your way.
ReplicaSetStatus rss = mongo.getReplicaSetStatus();
boolean driverInFailover = rss.getMaster() == null;
If what you really need is to figure out if the ReplSet is dead, read-only, or read-write, this gets more difficult. Here is the code that kind-of works for me. I hate it.
@Override
public ReplSetStatus getReplSetStatus() {
    ReplSetStatus rss = ReplSetStatus.DOWN;
    MongoClient freshClient = null;
    try {
        if ( mongo != null ) {
            ReplicaSetStatus replicaSetStatus = mongo.getReplicaSetStatus();
            if ( replicaSetStatus != null ) {
                if ( replicaSetStatus.getMaster() != null ) {
                    rss = ReplSetStatus.ReadWrite;
                } else {
                    /*
                     * When mongo.getReplicaSetStatus().getMaster() returns null, it takes a
                     * fresh client to assert whether the ReplSet is read-only or completely
                     * down. I freaking hate this, but take it up with 10gen.
                     */
                    freshClient = new MongoClient( mongo.getAllAddress(), mongo.getMongoClientOptions() );
                    replicaSetStatus = freshClient.getReplicaSetStatus();
                    if ( replicaSetStatus != null ) {
                        rss = replicaSetStatus.getMaster() != null ? ReplSetStatus.ReadWrite : ReplSetStatus.ReadOnly;
                    } else {
                        log.warn( "freshClient.getReplicaSetStatus() is null" );
                    }
                }
            } else {
                log.warn( "mongo.getReplicaSetStatus() returned null" );
            }
        } else {
            throw new IllegalStateException( "mongo is null?!?" );
        }
    } catch ( Throwable t ) {
        log.error( "Ignore unexpected error", t );
    } finally {
        if ( freshClient != null ) {
            freshClient.close();
        }
    }
    log.debug( "getReplSetStatus(): {}", rss );
    return rss;
}
I hate it because it doesn't follow the Mongo Java Driver convention of your application only needs a single Mongo and through this singleton you connect to the rest of the Mongo data structures (DB, Collection, etc). I have only been able to observe this working by new'ing up a second Mongo during the check so that I can rely upon the ReplicaSetStatus null check to discriminate between "ReplSet-DOWN" and "read-only".
What is really needed in this driver is some way to ask direct questions of the Mongo to see if the ReplSet can be expected at this moment to support each of the WriteConcerns or ReadPreferences. Something like...
/**
 * @return true if current state of Client can support readPreference, false otherwise
 */
boolean mongo.canDoRead( ReadPreference readPreference )

/**
 * @return true if current state of Client can support writeConcern; false otherwise
 */
boolean mongo.canDoWrite( WriteConcern writeConcern )
This makes sense to me because it acknowledges the fact that the ReplSet may have been great when the Mongo was created, but conditions right now mean that Read or Write operations of a specific type may fail due to changing conditions.
In any event, maybe http://api.mongodb.org/java/2.11.1/com/mongodb/ReplicaSetStatus.html gets you what you need.
When Mongo is failing over, there are no nodes in a PRIMARY state. You can just get the replica set status via the replSetGetStatus command and look for a master node. If you don't find one, you can assume that the cluster is in a failover transition state, and can retry as desired, checking the replica set status on each failed connection.
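A hedged sketch of that check with the 2.x Java driver (the members/stateStr field names come from the replSetGetStatus server output; error handling is omitted):

import com.mongodb.BasicDBList;
import com.mongodb.BasicDBObject;
import com.mongodb.CommandResult;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

// Hedged sketch: ask the server directly whether any member is PRIMARY.
// replSetGetStatus must be run against the admin database.
public static boolean hasPrimary(MongoClient mongo) {
    CommandResult result = mongo.getDB("admin").command(new BasicDBObject("replSetGetStatus", 1));
    BasicDBList members = (BasicDBList) result.get("members");
    for (Object m : members) {
        DBObject member = (DBObject) m;
        if ("PRIMARY".equals(member.get("stateStr"))) {
            return true; // a master exists; failover (if any) is complete
        }
    }
    return false; // no primary: likely mid-failover, retry after a delay
}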
I don't know the Java driver implementation itself, but I'd catch all MongoExceptions, then filter them based on getCode(). If the error code does not apply to replica set failures, I'd rethrow the MongoException.
The problem is that, to my knowledge, there is no error code reference in the documentation. Well, there is a stub here, but this is fairly incomplete. The only way is to read the code of the Java driver to know what codes it uses…
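As a sketch of that filtering pattern (hedged: FAILOVER_CODES and the retry policy are hypothetical placeholders, since, as said, there is no authoritative list of codes):

import java.util.HashSet;
import java.util.Set;

import com.mongodb.MongoException;

// Hedged sketch: catch broadly, retry only what looks failover-related,
// and rethrow everything else.
public class FailoverRetry {

    // Hypothetical: populate by reading the driver/server sources, as noted above.
    private static final Set<Integer> FAILOVER_CODES = new HashSet<Integer>();

    public static void runWithRetry(Runnable operation, int maxAttempts) throws InterruptedException {
        for (int attempt = 1; ; attempt++) {
            try {
                operation.run();
                return;
            } catch (MongoException e) {
                if (!FAILOVER_CODES.contains(e.getCode()) || attempt >= maxAttempts) {
                    throw e; // not a failover symptom, or out of retries
                }
                Thread.sleep(1000L * attempt); // back off while the election completes
            }
        }
    }
}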
Platform: 64-bit Windows OS, spymemcached-2.7.3.jar, J2EE
We want to use two memcached/Membase servers for our caching solution. We want to allocate 1 GB of memory to each server, so in total we can cache 2 GB of data.
We are using the spymemcached Java client for setting and getting data from memcached. We are not using any replication between the two Membase servers.
We load the MemcachedClient object at J2EE application startup:
URI server1 = new URI("http://192.168.100.111:8091/pools");
URI server2 = new URI("http://127.0.0.1:8091/pools");
ArrayList<URI> serverList = new ArrayList<URI>();
serverList.add(server1);
serverList.add(server2);
client = new MemcachedClient(serverList, "default", "");
After that we use the MemcachedClient to get and set values in memcached/Membase:
Object obj = client.get("spoon");
client.set("spoon", 50, "Hello World!");
It looks like the MemcachedClient is setting and getting values only from server1.
If we stop server1, it fails to get/set values. Should it not use server2 when server1 is down? Please let me know if we are doing anything wrong here...
The spymemcached Java client does not handle Membase failover for a particular node.
Ref: https://blog.serverdensity.com/handling-memcached-failover/
We need to handle it manually (in our code).
We can do this by using a ConnectionObserver.
Here is my code:
public static void main(String a[]) throws InterruptedException {
    try {
        URI server1 = new URI("http://192.168.100.111:8091/pools");
        URI server2 = new URI("http://127.0.0.1:8091/pools");
        final ArrayList<URI> serverList = new ArrayList<URI>();
        serverList.add(server1);
        serverList.add(server2);
        final MemcachedClient client = new MemcachedClient(serverList, "bucketName", "");
        client.addObserver(new ConnectionObserver() {
            @Override
            public void connectionLost(SocketAddress arg0) {
                // called when a connection is lost
                for (MemcachedNode node : client.getNodeLocator().getAll()) {
                    if (!node.isActive()) {
                        client.shutdown();
                        // re-init your client here; after re-init it will connect to your secondary node
                        break;
                    }
                }
            }

            @Override
            public void connectionEstablished(SocketAddress arg0, int arg1) {
                // called when a connection is established
            }
        });
        Object obj = client.get("spoon");
        client.set("spoon", 50, "Hello World!");
    } catch (Exception e) {
        e.printStackTrace(); // don't swallow exceptions silently
    }
}
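As for the "re-init your client here" comment, a hedged sketch of that step (the host comparison is crude and the bucket name/credentials are placeholders; the constructor is the same one used in the question):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.net.URI;
import java.util.ArrayList;
import java.util.List;

import net.spy.memcached.MemcachedClient;

// Hedged sketch: rebuild the client against the surviving servers after a
// connection is lost, so subsequent get/set calls go to the remaining node.
public static MemcachedClient reinitClient(List<URI> allServers, SocketAddress deadNode) throws IOException {
    ArrayList<URI> survivors = new ArrayList<URI>();
    String deadHost = (deadNode instanceof InetSocketAddress)
            ? ((InetSocketAddress) deadNode).getHostString()
            : "";
    for (URI server : allServers) {
        if (!server.getHost().equals(deadHost)) {
            survivors.add(server);
        }
    }
    return new MemcachedClient(survivors, "bucketName", "");
}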
client.get() would use the first available node, and therefore your value would be stored/updated on one node only.
You seem to be contradicting yourself in your requirements: first you say that 'we want to allocate 1 GB memory to each memcache/membase server so total we can cache 2 GB data', which implies a distributed cache model (a particular key is stored on one node in the cache farm), and then you expect to fetch it even if that node is down, which obviously won't happen.
If you need your cache farm to survive a node failure without losing the data cached on that node, you should use replication, which is available in Membase. But obviously you would pay the price of storing the same values multiple times, so your desire of '1GB per server...total 2GB of cache' won't be possible.