I have a question about configuring the size of the connection pool when using Zuul by itself, and not using Ribbon or other Netflix components.
We have a system that uses Zuul to proxy requests to a Mule server. We are only using Zuul and not Ribbon. We have defined 4 routes that call the Mule services. One of these services is long running, probably around 3 seconds per call.
When we load the system with 40 simultaneous users, we get this error:
org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
at org.apache.http.impl.conn.tsccm.ConnPoolByRoute.getEntryBlocking(ConnPoolByRoute.java:412)
at org.apache.http.impl.conn.tsccm.ConnPoolByRoute$1.getPoolEntry(ConnPoolByRoute.java:298)
at org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager$1.getConnection(ThreadSafeClientConnManager.java:238)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:423)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:115)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at org.springframework.cloud.netflix.zuul.filters.route.SimpleHostRoutingFilter.forwardRequest(SimpleHostRoutingFilter.java:262)
at org.springframework.cloud.netflix.zuul.filters.route.SimpleHostRoutingFilter.forward(SimpleHostRoutingFilter.java:225)
at org.springframework.cloud.netflix.zuul.filters.route.SimpleHostRoutingFilter.run(SimpleHostRoutingFilter.java:177)
at com.netflix.zuul.ZuulFilter.runFilter(ZuulFilter.java:112)
I looked through the code to figure out how to change the size of the connection pool and found this:
private static ClientConnectionManager newConnectionManager() throws Exception {
    KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
    trustStore.load(null, null);

    SSLSocketFactory sf = new MySSLSocketFactory(trustStore);
    sf.setHostnameVerifier(SSLSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER);

    SchemeRegistry registry = new SchemeRegistry();
    registry.register(new Scheme("http", PlainSocketFactory.getSocketFactory(), 80));
    registry.register(new Scheme("https", sf, 443));

    ThreadSafeClientConnManager cm = new ThreadSafeClientConnManager(registry);
    cm.setMaxTotal(Integer.parseInt(System.getProperty("zuul.max.host.connections", "200")));
    cm.setDefaultMaxPerRoute(Integer.parseInt(System.getProperty("zuul.max.host.connections", "20")));
    return cm;
}
At first I thought all I had to do was increase the value of zuul.max.host.connections, which would raise the per-route maximum, but then I noticed that the same system property is also used to set the total maximum number of connections.
Is setting this system property the correct way to control the pool sizes? Or should we be using another component, such as Ribbon, to better manage these connections?
You can configure:
zuul.host.maxTotalConnections=1000
zuul.host.maxPerRouteConnections=100
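If your configuration lives in application.yml rather than a properties file, the same settings would presumably look like this:

zuul:
  host:
    maxTotalConnections: 1000
    maxPerRouteConnections: 100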
If you are still looking for a solution, I took the following approach.
1- Disable SimpleHostRoutingFilter by passing -Dzuul.SimpleHostRoutingFilter.route.disable=true as a system property.
2- Write your own customized routing filter; in my case I copied SimpleHostRoutingFilter and made some modifications so that these properties can be set, as sketched below.
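A rough skeleton of that approach (the class name, filter order and sizing parameters here are only illustrative, and the actual request-forwarding logic would be copied from SimpleHostRoutingFilter) might look like:

public class ConfigurableHostRoutingFilter extends ZuulFilter {

    private final CloseableHttpClient httpClient;

    public ConfigurableHostRoutingFilter(int maxTotal, int maxPerRoute) {
        // Pool sized from your own configuration instead of zuul.max.host.connections
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(maxTotal);
        cm.setDefaultMaxPerRoute(maxPerRoute);
        this.httpClient = HttpClients.custom().setConnectionManager(cm).build();
    }

    @Override
    public String filterType() { return "route"; }

    @Override
    public int filterOrder() { return 100; } // same order as SimpleHostRoutingFilter

    @Override
    public boolean shouldFilter() {
        // Only handle requests that were routed to a concrete host, like SimpleHostRoutingFilter does
        return RequestContext.getCurrentContext().getRouteHost() != null;
    }

    @Override
    public Object run() {
        // Copy the request-forwarding logic from SimpleHostRoutingFilter here,
        // executing the proxied call with this.httpClient
        return null;
    }
}

Registering an instance of it as a Spring @Bean should be enough for Zuul to pick it up, since Spring Cloud registers all ZuulFilter beans automatically.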
We have the following custom connection pooling implemented for RestTemplate:
PoolingHttpClientConnectionManager poolingConnManager =
new PoolingHttpClientConnectionManager();
poolingConnManager.setDefaultMaxPerRoute(restClientprops.getRestClientMaxPerRoutePool());
poolingConnManager.setMaxTotal(restClientprops.getRestClientMaxTotalPool());
HttpClientBuilder httpClientBuilder = HttpClients.custom()
.setConnectionManager(poolingConnManager)
.setKeepAliveStrategy(DefaultConnectionKeepAliveStrategy.INSTANCE)
.setMaxConnPerRoute(restClientprops.getRestClientMaxPerRoutePool())
.setMaxConnTotal(restClientprops.getRestClientMaxTotalPool());
HttpComponentsClientHttpRequestFactory requestFactory = new HttpComponentsClientHttpRequestFactory();
requestFactory.setConnectTimeout(restClientprops.getConnectTimeout());
requestFactory.setReadTimeout(restClientprops.getReadTimeout());
requestFactory.setConnectionRequestTimeout(restClientprops.getConnectionRequestTimeout());
requestFactory.setHttpClient(httpClientBuilder.build());
this.restTemplate = new RestTemplate(requestFactory);
I am changing it to a WebClient implementation, and this is what I could come up with:
HttpClient httpClient = HttpClient
.create(ConnectionProvider.create("webclient-pool", restClientprops.getRestClientMaxTotalPool()))
.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, restClientprops.getConnectTimeout())
.responseTimeout(Duration.ofMillis(restClientprops.getConnectionRequestTimeout()))
.doOnConnected(conn -> conn.addHandler(new ReadTimeoutHandler(restClientprops.getReadTimeout(), TimeUnit.MILLISECONDS)))
.keepAlive(true);
Per this URL https://github.com/reactor/reactor-netty/issues/1159, from what I understood the connection request timeout is renamed to responseTimeout in the WebClient HttpClient. Is that accurate?
How should I set MaxConnPerRoute in WebClient, the equivalent of what is in the RestTemplate implementation?
Is keepAlive(true) an accurate translation of setKeepAliveStrategy(DefaultConnectionKeepAliveStrategy.INSTANCE)?
Appreciate your help.
Per this URL https://github.com/reactor/reactor-netty/issues/1159, from what I understood the connection request timeout is renamed to responseTimeout in the WebClient HttpClient. Is that accurate?
Yes, that's true. You can find more about all the timeouts that can be configured for the HttpClient in the Reference Documentation.
How should I set MaxConnPerRoute in WebClient, the equivalent of what is in the RestTemplate implementation?
You can provide connection pool configuration per remote address (if that's what you mean by MaxConnPerRoute); see the javadoc for forRemoteHost.
ConnectionProvider.builder("test")
        .maxConnections(2) // default max connections
        .forRemoteHost(<socket-address>, spec -> spec.maxConnections(1)) // max connections only for this socket address
        .build();
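As an illustration of how that could plug into the WebClient setup from the question (the host name and port below are hypothetical, and restClientprops is the properties holder from the question), something along these lines should work:

// Hypothetical remote address that gets its own per-host limit
SocketAddress backend = new InetSocketAddress("backend.example.com", 8080);

ConnectionProvider provider = ConnectionProvider.builder("webclient-pool")
        .maxConnections(restClientprops.getRestClientMaxTotalPool())      // default limit
        .forRemoteHost(backend, spec -> spec.maxConnections(
                restClientprops.getRestClientMaxPerRoutePool()))          // limit for this host only
        .build();

HttpClient httpClient = HttpClient.create(provider);

WebClient webClient = WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(httpClient))
        .build();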
Is keepAlive(true) an accurate translation of setKeepAliveStrategy(DefaultConnectionKeepAliveStrategy.INSTANCE)?
If you mean specifying whether the connection is persistent or not, then YES, this configuration has to be used (by default the connection IS persistent). If you mean SO_KEEPALIVE, then NO; you have to use the .option() configuration instead. You can find more in the Reference Documentation.
HttpClient.create()
.option(ChannelOption.SO_KEEPALIVE, true)
This configuration can be removed if you use the timeout settings provided by Reactor Netty:
.doOnConnected(conn -> conn.addHandler(new ReadTimeoutHandler(restClientprops.getReadTimeout(), TimeUnit.MILLISECONDS)))
I have a Windows desktop application on .NET Framework that a user can interact with. There is a "connect" button that sets up an Apache Geode client connection to the Geode server.
When the Geode server is down (the locator; there is only one), the desktop application hangs indefinitely and needs to be forcefully closed.
This is how we connect to the server; it hangs on the last line indefinitely:
PoolFactory poolFactory = _cache.GetPoolManager()
.CreateFactory()
.SetSubscriptionEnabled(true)
.AddLocator(_host, _port);
return poolFactory.Create(_serviceName);
I would like to add a time-out period for this method to return or throw exceptions or anything.
Alternatively, I've wrapped it in a different kind of timer outside of this to just return, but when trying to run the above code again (to try to connect the application again), the pool is still trying to connect.
I have tried to destroy it like this:
// destroy the pool
var poolManager = _cache.GetPoolManager();
if (poolManager != null)
{
    var pool = poolManager.Find(_serviceName);
    if (pool != null && pool.Destroyed == false)
        pool.Destroy();
}
But then pool.Destroy(); hangs indefinitely.
How can I have a mechanism to attempt to connect to the Geode server, return if not connected within 10 seconds, and then be able to try to connect again using the same cache?
Note: the above was trying to re-use the same cache. Setting the cache to null and starting all over again would probably work, but I am looking for a more correct way to do it.
It depends on what version of the .NET native client you are using: in 10.x there is a connect-timeout property that should work. Additionally, you can wrap your connection code in a try {...} catch {} block and handle Apache.Geode.Client.TimeoutException and/or the "No locators available" error.
Example:
var _cache = new CacheFactory()
    .Set("log-level", "config")
    .Set("log-file", @"C:\temp\test.log")
    .Set("connect-timeout", "26000ms")
    .Create();

_cache.GetPoolManager()
    .CreateFactory()
    .SetSubscriptionEnabled(true)
    .AddLocator("hostname-or-ip", portNumber)
    .Create("TestPool");
I'm posting the question here just to be sure I'm not barking up the wrong tree.
How do I get the number of used (and free) connections to MongoDB, but from the client's perspective (e.g. a Java client), using the 4.x driver?
There are posts about using serverStatus() (Get the number of open connections in mongoDB using java), but that presumes having 'admin' access to MongoDB; a 'regular' user (a db user with lower privileges, e.g. access to only one database) cannot run serverStatus(). And it only provides a view from the server side (there are N connections from IP x).
Other posts mention how to set up the connection pool size, e.g. using MongoClients.create(MongoClientSettings settings) (see the 4.x API reference: https://mongodb.github.io/mongo-java-driver/4.0/apidocs/mongodb-driver-sync/com/mongodb/client/MongoClients.html):
MongoCredential credential = MongoCredential.createCredential(
username,
"admin",
password.toCharArray());
MongoClient mongoClient = MongoClients.create(MongoClientSettings.builder()
.applyToClusterSettings(
builder -> builder.hosts(Arrays.asList(new ServerAddress(hostname, portNumber))))
.credential(credential)
.applyToConnectionPoolSettings(builder -> builder
.minSize(connectionPoolMinimumSize)
.maxSize(connectionPoolMaximumSize))
.readConcern(readConcern)
.readPreference(readPreference)
.writeConcern(writeConcern)
.build());
But none of them provides a means to get the used and available connections in the connection pool.
As mentioned by Oleg, using a ConnectionPoolListener would be a way, but that is available only in the 3.x drivers; the ConnectionPoolListener methods are marked as deprecated in 4.x (although it is still mentioned in the JMX Monitoring section: http://mongodb.github.io/mongo-java-driver/4.0/driver-reactive/reference/monitoring/).
You can use connection pool monitoring, which is described here, to keep track of connection states and deduce the counts you are looking for.
I don't know if the Java driver exposes the counters you are looking for as public APIs; many drivers don't.
Finally got this working:
created a custom connection pool listener, implementing the com.mongodb.event.ConnectionPoolListener...
public class CustomConnectionPoolListener implements ConnectionPoolListener {
...
}
... and had the stats counters updated in a store (accessible later):
@Override
public void connectionCreated(ConnectionCreatedEvent event) {
    ConnectionPoolStatsPOJO cps = mongoConnectionPoolList.get(connectionPoolAlias);
    cps.incrementConnectionsCreated();
    mongoConnectionPoolList.put(connectionPoolAlias, cps);
}
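For reference, a minimal self-contained variant of such a listener (the class and counter names here are only illustrative, not the exact code above) could be as simple as:

public class PoolStatsListener implements ConnectionPoolListener {

    // Track totals with simple atomic counters
    private final AtomicInteger connectionsCreated = new AtomicInteger();
    private final AtomicInteger connectionsInUse = new AtomicInteger();

    @Override
    public void connectionCreated(ConnectionCreatedEvent event) {
        connectionsCreated.incrementAndGet();
    }

    @Override
    public void connectionCheckedOut(ConnectionCheckedOutEvent event) {
        connectionsInUse.incrementAndGet();
    }

    @Override
    public void connectionCheckedIn(ConnectionCheckedInEvent event) {
        connectionsInUse.decrementAndGet();
    }

    public int getConnectionsCreated() { return connectionsCreated.get(); }
    public int getConnectionsInUse() { return connectionsInUse.get(); }
}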
attached this custom connection pool listener to the MongoClient connection:
ConnectionPoolListener customConnPoolListener = new CustomConnectionPoolListener(...); /* added some references in the */
...
MongoClientSettings mongoClientSettings = MongoClientSettings.builder()
.applicationName(applicationName)
.applyConnectionString(connURI)
.credential(credential)
.readConcern(readConcern)
.readPreference(readPreference)
.writeConcern(writeConcern)
.applyToConnectionPoolSettings(builder -> builder
.minSize(connectionPoolMinimumSize)
.maxSize(connectionPoolMaximumSize)
.addConnectionPoolListener(customConnPoolListener)
)
.retryWrites(true)
.retryReads(true)
.build();
...
MongoClient mongoClient = MongoClients.create(mongoClientSettings);
....
finally, to access the connection pool stats, just query the store:
ConnectionPoolStatsPOJO connectionPoolStats = MongoDB_ConnectionPool_Repository.getInstance().getMongoConnectionPoolList().get(connectionPoolAlias);
Thanks to @D. SM for pointing in the right direction.
My application, an API server, is meant to be organized as follows:
MainVerticle is called on startup and should create all the objects the application needs to work: mainly a MongoDB connection pool (MongoClient.createShared(...)) and a global configuration object available instance-wide. It also starts the HTTP listener, i.e. several instances of an HttpVerticle.
HttpVerticle is in charge of receiving requests and, based on the command xxx in the payload, executing the XxxHandler.handle(...) method.
Most of the XxxHandler.handle(...) methods will need to access the database. In addition, some of them will also deploy additional verticles with parameters from the global conf. For example, LoginHandler.handle(...) will deploy a verticle to keep the user's state while they are connected, and this verticle will be undeployed when the user logs out.
I can't figure out how to get the global configuration object while being in XxxHandler.handle(...) or in a "sub"-verticle. Same for the mongo client.
Q1: For configuration data, I tried to use SharedData. In `MainVerticle.start()` I have:
LocalMap<String, String> lm = vertx.sharedData().getLocalMap("conf");
lm.put("var", "val");
and in `HttpVerticle.start()` I have:
LocalMap<String, String> lm = vertx.sharedData().getLocalMap("conf");
log.debug("var={}", lm.get("var"));
but the log output is var=null... What am I doing wrong?
Q2: Besides this basic example with a <String, String> map type, what if the value is a mutable object like a JsonObject, which is actually what I would need?
Q3: Finally, how do I make the instance of the Mongo client available to all verticles?
Instead of getLocalMap() you should be using getClusterWideMap(). Then you should be able to operate on shared data across the whole cluster and not just within one verticle.
Be aware that the shared-data operations are asynchronous, so the code might look like this (code in Groovy):
vertx.sharedData().getClusterWideMap( 'your-name' ){ AsyncResult<AsyncMap<String,String>> res ->
    if( res.succeeded() )
        res.result().put( 'var', 'val', { log.info "put succeeded: ${it.succeeded()}" } )
}
You should be able to use any Serializable objects in your map.
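Since the rest of your code is in Java, a rough Java equivalent of the Groovy snippet above (assuming the callback-style Vert.x 3.x SharedData API, and noting that cluster-wide maps are only available when Vert.x runs clustered) would be:

vertx.sharedData().<String, String>getClusterWideMap("your-name", res -> {
    if (res.succeeded()) {
        AsyncMap<String, String> map = res.result();
        map.put("var", "val",
                ar -> log.debug("put succeeded: {}", ar.succeeded()));
    }
});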
I am currently using ServicePartitionResolver to get the HTTP endpoint of another application within my cluster.
var resolver = ServicePartitionResolver.GetDefault();
var partition = await resolver.ResolveAsync(serviceUri, partitionKey ?? ServicePartitionKey.Singleton, CancellationToken.None);
var endpoints = JObject.Parse(partition.GetEndpoint().Address)["Endpoints"];
return endpoints[endpointName].ToString().TrimEnd('/');
This works as expected, however if I redeploy my target application and its port changes on my local dev box, the source application still returns the old endpoint (which is now invalid). Is there a cache somewhere that I can clear? Or is this a bug?
Yes, they are cached. If you know that the partition is no longer valid, or if you receive an error, you can call resolver.ResolveAsync() using the overload that takes the earlier ResolvedServicePartition previousRsp, which triggers a refresh.
This api-overload is used in cases where the client knows that the
resolved service partition that it has is no longer valid.
See this article too.
Yes, they are cached. There are two solutions to overcome this.
The simplest code change is to replace var resolver = ServicePartitionResolver.GetDefault(); with var resolver = new ServicePartitionResolver();. This forces the service to create a new ServicePartitionResolver object every time, whereas GetDefault() returns the cached object.
[Recommended] The right way of handling this is to implement a custom CommunicationClientFactory deriving from CommunicationClientFactoryBase, then initialize a ServicePartitionClient and call InvokeWithRetryAsync. It is documented clearly in Service Communication, in the Communication clients and factories section.