I'm upgrading my Grails 2.4 web application to Grails 3, and I'm considering switching from my custom DAO to GORM for my Mongo database.
I'm trying to understand how to set up GORM correctly, in particular the connection options, but its documentation is slightly misleading to me.
The Advanced Configuration ("Mongo Database Connection Configuration") states
Available options and their descriptions are defined in the MongoOptions javadoc.
so I'm tempted to assume that I'm allowed to use any of those options.
But later on in the same section (Configuration Options Guide) I read
Below is a complete example showing all configuration options:
but the example shows only 9 options.
My issue is 'converting' my custom DAO
MongoClientOptions options = new MongoClientOptions.Builder()
        .connectionsPerHost(1000)
        .threadsAllowedToBlockForConnectionMultiplier(5)
        .maxWaitTime(4000)
        .socketTimeout(2000)
        .build();
List<ServerAddress> list = getMongoReplicaSet();
mongo = new MongoClient(list, options);
mongo.setReadPreference(ReadPreference.nearest());
to an equivalent configuration
grails {
    mongodb {
        options {
            connectionsPerHost = 1000
            threadsAllowedToBlockForConnectionMultiplier = 5
            maxWaitTime = 4000
            socketTimeout = 2000
        }
    }
}
but how do I define the read preference? Am I allowed to do something like this?
grails {
    mongodb {
        options {
            readPreference = com.mongodb.ReadPreference.nearest()
        }
    }
}
Thanks in advance!
Yes, you can set anything in the MongoClientOptions.Builder class via configuration. Although your syntax is wrong, it should be:
grails {
    mongodb {
        options {
            readPreference = com.mongodb.ReadPreference.nearest()
        }
    }
}
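For reference, that configuration is roughly the declarative equivalent of calling the builder directly; this is only a sketch, assuming the 3.x Java driver:

MongoClientOptions options = new MongoClientOptions.Builder()
        .connectionsPerHost(1000)
        .threadsAllowedToBlockForConnectionMultiplier(5)
        .maxWaitTime(4000)
        .socketTimeout(2000)
        .readPreference(com.mongodb.ReadPreference.nearest())
        .build();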
I'm on a project where I need to manage connections to both a MongoDB Instance and a PostgreSQL instance.
My current idea is to make a custom type containing an Arc<Mutex<pgConnection>> and an Arc<Mutex<MongoConnection>> in a struct that is itself wrapped in an Arc, which would be passed to the Actix Web app_data initialization function.
e.g.
// this is pseudo-code, kinda
type DbPoolPG = r2d2::Pool<ConnectionManager<PostgreSQL>>;
// won't be an r2d2 pool; the official MongoDB driver handles pooling automatically
type DbPoolMongo = r2d2::Pool<ConnectionManager<MongoDB>>;

struct DatabaseConnections {
    pg: Arc<Mutex<DbPoolPG>>,
    mongo: Arc<Mutex<DbPoolMongo>>,
}
#[actix_web::main]
async fn main() -> io::Result<()> {
    // Create connection pools
    let postgres_pool = r2d2::Pool::builder()
        .build(manager)
        .expect("Failed to create pool.");
    let mongo_pool = mongo.create_connection();

    let connections = DatabaseConnections {
        pg: Arc::new(Mutex::new(postgres_pool)),
        mongo: Arc::new(Mutex::new(mongo_pool)),
    };

    // Start HTTP server
    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(Arc::new(connections)))
            .resource("/{name}", web::get().to(index))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
The idea seems a bit too simple to actually work, though. Does anyone else have any ideas?
Never mind: you can just call app_data twice as long as the types are different, and then access each one in a handler by its type.
For performance optimisation we are trying to read data from the Mongo secondary server for selected scenarios. I am using an inline query with "withReadPreference(ReadPreference.secondaryPreferred())" to read the data; please find the code snippet below.
What I want to confirm is that the data we are getting is actually coming from the secondary server after executing the highlighted inline query. Is there any method available to check this from Java or Spring Boot?
public User read(final String userId) {
    final ObjectId objectId = new ObjectId(userId);
    final User user = collection.withReadPreference(ReadPreference.secondaryPreferred())
            .findOne(objectId)
            .as(User.class);
    return user;
}
Pretty much the same way in Java. Note we use secondary(), not secondaryPreferred(); this guarantees reads from a secondary ONLY:
import com.mongodb.ReadPreference;

{
    // This is your "regular" primaryPreferred collection:
    MongoCollection<BsonDocument> tcoll = db.getCollection("myCollection", BsonDocument.class);

    // ... various operations on tcoll, then create a new
    // handle that FORCES reads from secondary and will time out and
    // fail if no secondary can be found:
    MongoCollection<BsonDocument> xcoll = tcoll.withReadPreference(ReadPreference.secondary());

    BsonDocument f7 = xcoll.find(queryExpr).first();
}
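If you want to confirm programmatically where each read actually went, one option is to register a CommandListener on the client and log the server address each command is routed to. This is a minimal sketch, assuming the 3.x Java driver; the class name ServerLoggingListener is just illustrative:

import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.event.CommandFailedEvent;
import com.mongodb.event.CommandListener;
import com.mongodb.event.CommandStartedEvent;
import com.mongodb.event.CommandSucceededEvent;

// Logs which server each command is sent to, so a find() issued with
// secondaryPreferred() can be seen going to a secondary member.
public class ServerLoggingListener implements CommandListener {

    @Override
    public void commandStarted(CommandStartedEvent event) {
        System.out.println(event.getCommandName() + " -> "
                + event.getConnectionDescription().getServerAddress());
    }

    @Override
    public void commandSucceeded(CommandSucceededEvent event) { }

    @Override
    public void commandFailed(CommandFailedEvent event) { }

    public static void main(String[] args) {
        MongoClientOptions options = MongoClientOptions.builder()
                .addCommandListener(new ServerLoggingListener())
                .build();
        MongoClient client = new MongoClient("localhost", options);
        // run the read(...) call above and watch which host the "find" command goes to
    }
}

With secondaryPreferred() you should see the find commands logged against a secondary member's address whenever one is available.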
I get the error com.mongodb.MongoWaitQueueFullException: Too many threads are already waiting for a connection. Max number of threads (maxWaitQueueSize) of 500 has been exceeded. while doing a stress test on my application.
So I am thinking of configuring the maxWaitQueueSize property via configuration.
I am using spring boot to configure the mongodb connection. I am using @EnableAutoConfiguration in my Application and I have declared only spring.data.mongodb.uri=mongodb://user:password@ip:27017 in the application.properties file.
How do I configure the maxWaitQueueSize property with spring boot?
How do I decide a good value for the maxWaitQueueSize?
If you're using MongoDB 3.0+, you can set waitQueueMultiple in your Mongo URI:
spring.data.mongodb.uri=mongodb://user:password@ip:27017/?waitQueueMultiple=10
waitQueueMultiple is a number the driver multiplies the maxPoolSize value by to get the maximum number of threads allowed to wait for a connection to become available from the pool. For example, with maxPoolSize=100 and waitQueueMultiple=10, up to 1000 threads may wait.
How do I decide a good value for the maxWaitQueueSize?
It's not directly related to MongoDB, but you can read more about pool sizing in the HikariCP GitHub wiki.
In com.mongodb.MongoClientURI, you can find the URI parameters and the MongoClientOptions builder calls they map to:
if (key.equals("maxpoolsize")) {
builder.connectionsPerHost(Integer.parseInt(value));
} else if (key.equals("minpoolsize")) {
builder.minConnectionsPerHost(Integer.parseInt(value));
} else if (key.equals("maxidletimems")) {
builder.maxConnectionIdleTime(Integer.parseInt(value));
} else if (key.equals("maxlifetimems")) {
builder.maxConnectionLifeTime(Integer.parseInt(value));
} else if (key.equals("waitqueuemultiple")) {
builder.threadsAllowedToBlockForConnectionMultiplier(Integer.parseInt(value));
} else if (key.equals("waitqueuetimeoutms")) {
builder.maxWaitTime(Integer.parseInt(value));
} else if (key.equals("connecttimeoutms")) {
builder.connectTimeout(Integer.parseInt(value));
} else if (key.equals("sockettimeoutms")) {
builder.socketTimeout(Integer.parseInt(value));
} else if (key.equals("autoconnectretry")) {
builder.autoConnectRetry(_parseBoolean(value));
} else if (key.equals("replicaset")) {
builder.requiredReplicaSetName(value);
} else if (key.equals("ssl")) {
if (_parseBoolean(value)) {
builder.socketFactory(SSLSocketFactory.getDefault());
}
}
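So, for example, the pool-related settings can go straight into the Spring Boot URI property; the values below are only illustrative:

spring.data.mongodb.uri=mongodb://user:password@ip:27017/?maxPoolSize=200&waitQueueMultiple=10&waitQueueTimeoutMS=5000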
I am using spring-boot-starter-webflux, and this issue also happens there. I tried adding a MongoClientFactoryBean, but it doesn't work.
The whole application is located at https://github.com/yigubigu/webfluxbenchmark. I tried to benchmark the performance of webflux against the original mvc.
@Bean
public MongoClientFactoryBean mongoClientFactoryBean() {
    MongoClientFactoryBean factoryBean = new MongoClientFactoryBean();
    factoryBean.setHost("localhost");
    factoryBean.setPort(27017);
    factoryBean.setSingleton(true);

    MongoClientOptions options = MongoClientOptions.builder()
            .connectionsPerHost(1000)
            .minConnectionsPerHost(500)
            .threadsAllowedToBlockForConnectionMultiplier(10)
            .build();
    factoryBean.setMongoClientOptions(options);

    return factoryBean;
}
You can achieve this by injecting an object of MongoOptions into your MongoTemplate.
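A minimal sketch of that idea, assuming Spring Data MongoDB with the legacy com.mongodb.MongoClient API; the bean names, database name and pool values are illustrative:

import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ServerAddress;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoTemplate;

@Configuration
public class MongoConfig {

    @Bean
    public MongoClient mongoClient() {
        MongoClientOptions options = MongoClientOptions.builder()
                .connectionsPerHost(100)
                // wait queue size = connectionsPerHost * multiplier (100 * 10 = 1000)
                .threadsAllowedToBlockForConnectionMultiplier(10)
                .build();
        return new MongoClient(new ServerAddress("localhost", 27017), options);
    }

    @Bean
    public MongoTemplate mongoTemplate(MongoClient mongoClient) {
        return new MongoTemplate(mongoClient, "test");
    }
}

With Spring Boot, defining your own MongoClient bean like this typically takes precedence over the auto-configured one.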
This maxWaitQueueSize limit is computed here in the Java driver source code:
https://github.com/mongodb/mongo-java-driver/blob/3.10.x/driver-core/src/main/com/mongodb/connection/ConnectionPoolSettings.java#L273
It is the product of maxConnectionPoolSize and threadsAllowedToBlockForConnectionMultiplier, and hence can be modified through ?maxPoolSize= and ?waitQueueMultiple= in the connection URI. With the driver defaults (a pool size of 100 and a multiplier of 5) this comes out to exactly the 500 in the error message above.
I'm trying to follow the steps listed at http://dev.mysql.com/doc/connector-j/en/connector-j-master-slave-replication-connection.html which states
To enable this functionality, use the com.mysql.jdbc.ReplicationDriver
class when configuring your application server's connection pool
From https://github.com/brettwooldridge/HikariCP - it says
HikariCP will attempt to resolve a driver through the DriverManager
based solely on the jdbcUrl
So is this configuration all that's needed?
db.default.url=jdbc:mysql:replication ...
Squeryl has a number of db adapters, but my understanding is that these are unrelated?
http://squeryl.org/api/index.html#org.squeryl.adapters.MySQLInnoDBAdapter
Sorry for the keyword loading; I'm just not too sure where I need to focus.
Thanks
Brent
For people hitting this in 2020, Hikari uses
com.mysql.jdbc.jdbc2.optional.MysqlDataSource
as a data source. If I look at the code of the above class, it has a getConnection method which returns a Connection instance.
protected Connection getConnection(Properties props) throws SQLException {
    String jdbcUrlToUse = null;
    if (!this.explicitUrl) {
        StringBuffer jdbcUrl = new StringBuffer("jdbc:mysql://");
        if (this.hostName != null) {
            jdbcUrl.append(this.hostName);
        }
        jdbcUrl.append(":");
        jdbcUrl.append(this.port);
        jdbcUrl.append("/");
        if (this.databaseName != null) {
            jdbcUrl.append(this.databaseName);
        }
        jdbcUrlToUse = jdbcUrl.toString();
    } else {
        jdbcUrlToUse = this.url;
    }

    Properties urlProps = mysqlDriver.parseURL(jdbcUrlToUse, (Properties) null);
    urlProps.remove("DBNAME");
    urlProps.remove("HOST");
    urlProps.remove("PORT");

    Iterator keys = urlProps.keySet().iterator();
    while (keys.hasNext()) {
        String key = (String) keys.next();
        props.setProperty(key, urlProps.getProperty(key));
    }

    return mysqlDriver.connect(jdbcUrlToUse, props);
}
where mysqlDriver is an instance of
protected static final NonRegisteringDriver mysqlDriver;
If I check the connect method of the NonRegisteringDriver class, it looks like this:
public Connection connect(String url, Properties info) throws SQLException {
    if (url != null) {
        if (StringUtils.startsWithIgnoreCase(url, "jdbc:mysql:loadbalance://")) {
            return this.connectLoadBalanced(url, info);
        }

        if (StringUtils.startsWithIgnoreCase(url, "jdbc:mysql:replication://")) {
            return this.connectReplicationConnection(url, info);
        }
    }

    Properties props = null;
    if ((props = this.parseURL(url, info)) == null) {
        return null;
    } else if (!"1".equals(props.getProperty("NUM_HOSTS"))) {
        return this.connectFailover(url, info);
    } else {
        try {
            com.mysql.jdbc.Connection newConn = ConnectionImpl.getInstance(this.host(props), this.port(props), props, this.database(props), url);
            return newConn;
        } catch (SQLException var6) {
            throw var6;
        } catch (Exception var7) {
            SQLException sqlEx = SQLError.createSQLException(Messages.getString("NonRegisteringDriver.17") + var7.toString() + Messages.getString("NonRegisteringDriver.18"), "08001", (ExceptionInterceptor) null);
            sqlEx.initCause(var7);
            throw sqlEx;
        }
    }
}
After looking at the code, it looks like replication URLs are supported. I haven't tried it yet; I will try and report back from personal experience, but from the code it looks directly feasible.
Squeryl offers different MySQL adapters because innodb supports referential keys, while myisam does not. It seems like what you're doing should be handled at the connection pool level, so I don't think your Squeryl configuration will have an effect.
I've never configured Hikari for replicated MySQL, but if it requires an alternative JDBC driver I'd be surprised if you can provide a JDBC URL and everything just works. I'm guessing that Hikari's default functionality is to pick the plain vanilla MySQL JDBC driver unless you tell it otherwise. Luckily, Hikari has quite a few config options including the ability to set a specific driverClassName.
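If you did want to point Hikari at the replication driver explicitly, the configuration would look roughly like the sketch below (untested; host names and credentials are placeholders, and note the later answer reporting that the Replication driver does not work with HikariCP):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class ReplicationPool {
    public static HikariDataSource build() {
        HikariConfig config = new HikariConfig();
        // Explicit driver class instead of letting Hikari resolve one from the jdbcUrl
        config.setDriverClassName("com.mysql.jdbc.ReplicationDriver");
        config.setJdbcUrl("jdbc:mysql:replication://master-host,slave-host-1,slave-host-2/mydb");
        config.setUsername("user");
        config.setPassword("password");
        return new HikariDataSource(config);
    }
}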
Replication allows for a different URL:
jdbc:mysql:replication://[server1],[server2],[server3]/[database]
I've never tried it, but I assume this will resolve to the ReplicationDriver.
And I find myself back here. Please note, HikariCP doesn't support the Replication driver.
https://github.com/brettwooldridge/HikariCP/issues/625#issuecomment-251613688
MySQL Replication Driver simply does NOT work together with HikariCP.
And
https://groups.google.com/forum/#!msg/hikari-cp/KtKgzR8COrE/higEHoPkAwAJ
... nobody running anything resembling a mission critical application takes MySQL's driver-level replication support seriously.
I am using Play 1.2.5, MongoDB and Morphia module 1.2.9 in my application.
To create a secure and encrypted connection to the db, I installed MongoDB with SSL enabled, using the following links:
http://docs.mongodb.org/manual/administration/ssl/
http://www.mongodb.org/about/tutorial/build-mongodb-on-linux/
Now I'm able to connect to the mongo shell using mongo --ssl, and I'm also able to verify whether MongoDB is running or not via https://mylocalhost.com:27017/
But after enabling SSL in MongoDB, I am not able to connect to it through my play application.
Following are the lines I used in the application.conf to connect to the db
morphia.db.host=localhost
morphia.db.port=27017
morphia.db.db=test
Is there any configuration available to connect over SSL?
I did some googling but was not able to find any solutions. Please help me with this.
Thanks in advance.
The Morphia module does not support SSL connections for the moment, and I am not sure whether the Morphia library supports it. Please create an issue on GitHub to track this requirement: https://github.com/greenlaw110/play-morphia/issues?state=open
I use spring-data and came up against the same issue. With spring-data I was able to construct a Mongo object myself and pass it as a constructor param; Morphia might have the same mechanism. The key is:
options.socketFactory = SSLSocketFactory.getDefault();
After that, make sure you install the SSL public key into your key store and it should work.
public class MongoFactory {

    public Mongo buildMongo(String replicaSet, boolean slaveOk, int writeNumber, int connectionsPerHost, boolean useSSL) throws UnknownHostException {

        ServerAddress addr = new ServerAddress();
        List<ServerAddress> addresses = new ArrayList<ServerAddress>();
        int port = 0;
        String host = new String();

        if (replicaSet == null)
            throw new UnknownHostException("Please provide hostname");
        replicaSet = replicaSet.trim();
        if (replicaSet.length() == 0)
            throw new UnknownHostException("Please provide hostname");

        // Parse a comma-separated "host:port,host:port" list into ServerAddresses
        StringTokenizer tokens = new StringTokenizer(replicaSet, ",");
        while (tokens.hasMoreTokens()) {
            String token = tokens.nextToken();
            int idx = token.indexOf(":");
            if (idx > 0) {
                port = Integer.parseInt(token.substring(idx + 1));
                host = token.substring(0, idx).trim();
            }
            addr = new ServerAddress(host.trim(), port);
            addresses.add(addr);
        }

        MongoOptions options = new MongoOptions();
        options.autoConnectRetry = true;
        if (useSSL) {
            // Switch the driver to an SSL socket factory
            options.socketFactory = SSLSocketFactory.getDefault();
        }
        options.connectionsPerHost = connectionsPerHost;
        options.w = writeNumber;
        options.fsync = false;
        options.wtimeout = 5000;
        options.connectTimeout = 5000;
        options.autoConnectRetry = true;
        options.socketKeepAlive = true;

        Mongo m = new Mongo(addresses, options);
        if (slaveOk) {
            m.setReadPreference(ReadPreference.SECONDARY);
        }
        return m;
    }
}
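Usage would then be along these lines; this is only a sketch, and the host names, truststore path and password are placeholders. The trustStore system properties point the JVM at a store that contains the Mongo server's public certificate:

public static void main(String[] args) throws UnknownHostException {
    // Point the JVM at a truststore holding the server's public certificate
    System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore.jks");
    System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

    Mongo m = new MongoFactory().buildMongo(
            "mongo1.example.com:27017,mongo2.example.com:27017", // replica set members
            true,   // slaveOk: allow reads from secondaries
            1,      // write concern: acknowledge from one node
            100,    // connectionsPerHost
            true);  // useSSL
}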