Issue with HttpClient in AEM

I have HttpClient code written against the org.apache.commons.httpclient package.
In it I set the connection timeout and socket timeout this way:
final HttpClient http = new HttpClient(this.connectionManager);
http.getParams().setParameter("http.connection.timeout", this.connectionTimeout);
http.getParams().setParameter("http.socket.timeout", this.socketTimeout);
Now the Adobe Cloud Manager code quality check has raised an issue that the timeout is not being set (which is not true).
They suggested setting the timeouts like this:
@Reference
private HttpClientBuilderFactory httpClientBuilderFactory;

public void doThis() {
    HttpClientBuilder builder = httpClientBuilderFactory.newBuilder();
    RequestConfig requestConfig = RequestConfig.custom()
        .setConnectTimeout(5000)
        .setSocketTimeout(5000)
        .build();
    builder.setDefaultRequestConfig(requestConfig);
    HttpClient httpClient = builder.build();
    // do something with the client
}
Refer to this link.
But HttpClientBuilderFactory does not belong to org.apache.commons.httpclient; it belongs to the newer Apache HttpClient 4.x API (org.apache.http.client), and its builder always returns a CloseableHttpClient.
How do I resolve this security issue? Can I add an annotation to suppress it, or will I have to rewrite all my code?
This is on an Adobe Experience Manager 6.5 instance.

Could it be that you are not setting the right timeout parameter?
You are setting the property http.connection.timeout, which is not available in the class org.apache.commons.httpclient.params.HttpClientParams.
http.getParams() returns an instance of HttpClientParams, which has the socket timeout and the connection manager timeout but not a connection timeout. You could probably use the constant HttpClientParams.CONNECTION_MANAGER_TIMEOUT to set a timeout for the connection manager.
On the other hand, the property http.connection.timeout is available on the class HttpConnectionParams.
Constant field values reference
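If the goal is simply to make the timeouts take effect on the 3.x client, a minimal sketch (reusing the connectionManager and timeout fields from the question) would go through the typed setters on org.apache.commons.httpclient.params.HttpConnectionManagerParams rather than raw parameter names:
final HttpClient http = new HttpClient(this.connectionManager);
// HttpConnectionManagerParams (3.x) exposes typed setters for the same properties
HttpConnectionManagerParams connParams = this.connectionManager.getParams();
connParams.setConnectionTimeout(this.connectionTimeout); // http.connection.timeout
connParams.setSoTimeout(this.socketTimeout);             // http.socket.timeout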

The problem is that Adobe ships two versions of HttpClient: the old 3.x, with the package structure org.apache.commons.httpclient.HttpClient, and the one that HttpClientBuilderFactory gives out, which is 4.x (org.apache.http.client.HttpClient).
I was breaking my head over this. Finally we were left with two options:
1) Rewrite all our commons-httpclient (3.x) code against the newer org.apache.http (4.x) API, where timeouts are configured through RequestConfig (setConnectTimeout / setSocketTimeout)
OR
2) @SuppressWarnings("CQRules:ConnectionTimeoutMechanism")
We chose option number 2, as the effort around the rewrite was huge and we are planning to go live soon.
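For reference, a minimal sketch of option 2 (the method name here is hypothetical): the suppression goes on the code that the rule flags, so the rest of the codebase stays untouched.
@SuppressWarnings("CQRules:ConnectionTimeoutMechanism")
private HttpClient buildLegacyHttpClient() {
    // legacy 3.x client kept as-is; the Cloud Manager rule is suppressed only here
    final HttpClient http = new HttpClient(this.connectionManager);
    http.getParams().setParameter("http.connection.timeout", this.connectionTimeout);
    http.getParams().setParameter("http.socket.timeout", this.socketTimeout);
    return http;
}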

Related

How do I set MaxConnPerRoute, ConnectionRequestTimeout, keepAliveStrategy in Spring WebFlux WebClient

We have the following custom connection pooling implemented for RestTemplate:
PoolingHttpClientConnectionManager poolingConnManager =
new PoolingHttpClientConnectionManager();
poolingConnManager.setDefaultMaxPerRoute(restClientprops.getRestClientMaxPerRoutePool());
poolingConnManager.setMaxTotal(restClientprops.getRestClientMaxTotalPool());
HttpClientBuilder httpClientBuilder = HttpClients.custom()
.setConnectionManager(poolingConnManager)
.setKeepAliveStrategy(DefaultConnectionKeepAliveStrategy.INSTANCE)
.setMaxConnPerRoute(restClientprops.getRestClientMaxPerRoutePool())
.setMaxConnTotal(restClientprops.getRestClientMaxTotalPool());
HttpComponentsClientHttpRequestFactory requestFactory = new HttpComponentsClientHttpRequestFactory();
requestFactory.setConnectTimeout(restClientprops.getConnectTimeout());
requestFactory.setReadTimeout(restClientprops.getReadTimeout());
requestFactory.setConnectionRequestTimeout(restClientprops.getConnectionRequestTimeout());
requestFactory.setHttpClient(httpClientBuilder.build());
this.restTemplate = new RestTemplate(requestFactory);
I am changing it to a WebClient implementation, and this is what I could come up with:
HttpClient httpClient = HttpClient
.create(ConnectionProvider.create("webclient-pool", restClientprops.getRestClientMaxTotalPool()))
.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, restClientprops.getConnectTimeout())
.responseTimeout(Duration.ofMillis(restClientprops.getConnectionRequestTimeout()))
.doOnConnected(conn -> conn.addHandler(new ReadTimeoutHandler(restClientprops.getReadTimeout(), TimeUnit.MILLISECONDS)))
.keepAlive(true);
Per this URL: https://github.com/reactor/reactor-netty/issues/1159
From what I understood, the connection request timeout is renamed to responseTimeout in the WebClient's HttpClient. Is that accurate?
How should I set MaxConnPerRoute in the WebClient, as I do in the RestTemplate implementation?
Is keepAlive(true) an accurate translation of setKeepAliveStrategy(DefaultConnectionKeepAliveStrategy.INSTANCE)?
Appreciate your help.
Per this URL https://github.com/reactor/reactor-netty/issues/1159, from what I understood the connection request timeout is renamed to responseTimeout in the WebClient's HttpClient. Is that accurate?
Yes, that's true. You can find more about all the timeouts that can be configured for the HttpClient in the Reference Documentation.
How should I set MaxConnPerRoute in the WebClient, as I do in the RestTemplate implementation?
You can provide connection pool configuration per remote address (if that's what you mean by MaxConnPerRoute); see the javadoc for forRemoteHost.
ConnectionProvider.builder("test")
    .maxConnections(2) // default max connections
    .forRemoteHost(<socket-address>, spec -> spec.maxConnections(1)) // max connections only for this socket address
    .build();
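For example, a sketch of a per-host limit roughly equivalent to setDefaultMaxPerRoute; the host name and port are assumptions, and the restClientprops getters are carried over from the question:
// global cap (like setMaxTotal) plus a per-route cap for one remote address
ConnectionProvider provider = ConnectionProvider.builder("webclient-pool")
    .maxConnections(restClientprops.getRestClientMaxTotalPool())
    .forRemoteHost(InetSocketAddress.createUnresolved("api.example.com", 443),
        spec -> spec.maxConnections(restClientprops.getRestClientMaxPerRoutePool()))
    .build();
WebClient webClient = WebClient.builder()
    .clientConnector(new ReactorClientHttpConnector(HttpClient.create(provider)))
    .build();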
Is keepAlive(true) an accurate translation of setKeepAliveStrategy(DefaultConnectionKeepAliveStrategy.INSTANCE)?
If you mean specifying whether the connection is persistent or not, then YES, this configuration has to be used. By default the connection IS persistent. If you mean SO_KEEPALIVE, then NO, you have to use the .option() configuration. You can find more in the Reference Documentation.
HttpClient.create()
.option(ChannelOption.SO_KEEPALIVE, true)
The following configuration can be removed if you use the timeout settings provided by Reactor Netty (for example responseTimeout) instead:
.doOnConnected(conn -> conn.addHandler(new ReadTimeoutHandler(restClientprops.getReadTimeout(), TimeUnit.MILLISECONDS)))
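A sketch of that simplification, reusing the restClientprops getters from the question (mapping the read timeout to responseTimeout here is an assumption):
// Reactor Netty's own timeouts replace the manual ReadTimeoutHandler
HttpClient httpClient = HttpClient
    .create(ConnectionProvider.create("webclient-pool", restClientprops.getRestClientMaxTotalPool()))
    .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, restClientprops.getConnectTimeout())
    .responseTimeout(Duration.ofMillis(restClientprops.getReadTimeout()));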

How to get the number of connections used (and free) to the MongoDB (from a client perspective)?

I'm posting the question here just to be sure I'm not barking up the wrong tree.
How do I get the number of connections used (and free) to MongoDB, but from a client perspective (e.g. a Java client), using the 4.x driver?
There are posts about using serverStatus() (Get the number of open connections in mongoDB using java), but that presumes having 'admin' access to MongoDB. A 'regular' user (a db user with lower privileges, e.g. access to only one database) cannot run serverStatus(). And serverStatus() provides only a view from the server side (there are N connections from IP x).
Other posts mention how to set up the connection pool size, e.g. using MongoClients.create(MongoClientSettings settings) (see the 4.x API reference: https://mongodb.github.io/mongo-java-driver/4.0/apidocs/mongodb-driver-sync/com/mongodb/client/MongoClients.html):
MongoCredential credential = MongoCredential.createCredential(
username,
"admin",
password.toCharArray());
MongoClient mongoClient = MongoClients.create(MongoClientSettings.builder()
.applyToClusterSettings(
builder -> builder.hosts(Arrays.asList(new ServerAddress(hostname, portNumber))))
.credential(credential)
.applyToConnectionPoolSettings(builder -> builder
.minSize(connectionPoolMinimumSize)
.maxSize(connectionPoolMaximumSize))
.readConcern(readConcern)
.readPreference(readPreference)
.writeConcern(writeConcern)
.build());
But none of them provided a means to get the used and available connections of the connection pool.
As mentioned by Oleg, using a ConnectionPoolListener would be a way; several of its methods are marked as deprecated in the 4.x drivers, although it is still mentioned in the JMX Monitoring section (http://mongodb.github.io/mongo-java-driver/4.0/driver-reactive/reference/monitoring/).
You can use connection pool monitoring, which is described here, to keep track of connection states and deduce the counts you are looking for.
I don't know if the Java driver exposes the counters you are looking for as public APIs; many drivers don't.
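A minimal sketch of that approach with the 4.x event API (the class name and the two counter accessors are assumptions):
// derive in-use and available counts from connection pool events (com.mongodb.event)
public class PoolStatsListener implements ConnectionPoolListener {
    private final AtomicInteger open = new AtomicInteger();
    private final AtomicInteger checkedOut = new AtomicInteger();

    @Override public void connectionCreated(ConnectionCreatedEvent event) { open.incrementAndGet(); }
    @Override public void connectionClosed(ConnectionClosedEvent event) { open.decrementAndGet(); }
    @Override public void connectionCheckedOut(ConnectionCheckedOutEvent event) { checkedOut.incrementAndGet(); }
    @Override public void connectionCheckedIn(ConnectionCheckedInEvent event) { checkedOut.decrementAndGet(); }

    public int inUse() { return checkedOut.get(); }
    public int available() { return open.get() - checkedOut.get(); }
}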
Finally got this working:
Created a custom connection pool listener implementing com.mongodb.event.ConnectionPoolListener...
public class CustomConnectionPoolListener implements ConnectionPoolListener {
...
}
... and having the stats counters updated in a store (accessible later):
@Override
public void connectionCreated(ConnectionCreatedEvent event) {
ConnectionPoolStatsPOJO cps = mongoConnectionPoolList.get(connectionPoolAlias);
cps.incrementConnectionsCreated();
mongoConnectionPoolList.put(connectionPoolAlias, cps);
}
Attached this custom connection pool listener to the MongoClient connection:
ConnectionPoolListener customConnPoolListener = new CustomConnectionPoolListener(...); /* added some references in the */
...
MongoClientSettings mongoClientSettings = MongoClientSettings.builder()
.applicationName(applicationName)
.applyConnectionString(connURI)
.credential(credential)
.readConcern(readConcern)
.readPreference(readPreference)
.writeConcern(writeConcern)
.applyToConnectionPoolSettings(builder -> builder
.minSize(connectionPoolMinimumSize)
.maxSize(connectionPoolMaximumSize)
.addConnectionPoolListener(customConnPoolListener)
)
.retryWrites(true)
.retryReads(true)
.build();
...
MongoClient mongoClient = MongoClients.create(mongoClientSettings);
....
Finally, to access the connection pool stats, just query the store:
ConnectionPoolStatsPOJO connectionPoolStats = MongoDB_ConnectionPool_Repository.getInstance().getMongoConnectionPoolList().get(connectionPoolAlias);
Thanks to @D. SM for pointing me in the right direction.

Project Reactor and Server Side Events

I'm looking for a solution where the backend publishes an event to the frontend as soon as a modification is done on the server side. To be more concrete, I want to emit a new List of objects as soon as one item is modified.
I've tried implementing this in a Spring Boot project that uses reactive web and MongoDB, whose @Tailable cursor publishes an event as soon as the capped collection is modified. The problem is that capped collections have some limitations and are not really compatible with what I want to do: I cannot update an existing element if the new one has a different size (as I understand it, this is illegal because you cannot make a rollback).
I honestly don't even know if it's doable, but maybe I'm lucky and I'll run into a rocket scientist right here who will prove otherwise.
Thanks in advance!!
*** EDIT:
Sorry for the vague question. Yes, I'm more focused on the HOW, using the Spring Reactive framework.
When I had a similar need - to inform the frontend that something is done on the backend side - I used a message queue.
I published a message to the queue from the backend and the frontend consumed the message.
But I am not sure if that is what you're looking for.
If you are using WebFlux with Spring Reactor, I think you can simply have the client send the request with an Accept header of 'text/event-stream' or 'application/stream+json', and have an API that produces those content types. This gives you the SSE model without too much effort.
@GetMapping(value = "/stream", produces = {MediaType.TEXT_EVENT_STREAM_VALUE, MediaType.APPLICATION_STREAM_JSON_VALUE, MediaType.APPLICATION_JSON_UTF8_VALUE})
public Flux<Message> get(HttpServletRequest request) {
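For reference, a more complete sketch of such an endpoint; the Message type and the one-second demo source are assumptions (in practice the Flux would come from a reactive repository or a sink):
@RestController
public class MessageStreamController {

    // any Flux works here; this one just emits a demo Message every second
    @GetMapping(value = "/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Message> stream() {
        return Flux.interval(Duration.ofSeconds(1))
                   .map(tick -> new Message("update " + tick));
    }
}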
Just as an idea - maybe you need to use web socket technology here:
The frontend (I assume it's a client-side application that runs in a browser, written in React, Angular or something like that) can establish a web-socket connection with the backend server.
When the process on the backend finishes, the message from backend to frontend can be sent.
You can do the emitting of changes by hand. For example, the endpoint:
public final Sinks.Many<SimpleInfoEvent> infoEventSink = Sinks.many().multicast().onBackpressureBuffer();
private final AtomicLong counter = new AtomicLong(); // id sequence used for the ServerSentEvent ids

@RequestMapping(path = "/sseApproach", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ServerSentEvent<SimpleInfoEvent>> sse() {
    return infoEventSink.asFlux()
        .map(e -> ServerSentEvent.builder(e)
            .id(counter.incrementAndGet() + "")
            .event(e.getClass().getName())
            .build());
}
Code anywhere for emitting data:
infoEventSink.tryEmitNext(new SimpleInfoEvent("any custom event"));
Watch out for threads and things like subscribeOn and publishOn, but basically (when not using any third-party code) this should work well enough.

Will setDefaultRequestConfig method override system properties - CloseableHttpClient

I am using the following code to make an HTTP request.
RequestConfig requestConfig = RequestConfig.custom()
.setConnectTimeout(10000)
.setConnectionRequestTimeout(10000)
.setSocketTimeout(300000)
.build();
CloseableHttpClient httpClient = HttpClientBuilder.create().useSystemProperties().setDefaultRequestConfig(requestConfig).build();
My simple question: will the method setDefaultRequestConfig remove all the system properties and keep only the properties given above, OR will it only override the given properties and keep the other system properties when making the HTTP request?
System properties that HttpClientBuilder can optionally take into consideration are as follows
ssl.TrustManagerFactory.algorithm
javax.net.ssl.trustStoreType
javax.net.ssl.trustStore
javax.net.ssl.trustStoreProvider
javax.net.ssl.trustStorePassword
ssl.KeyManagerFactory.algorithm
javax.net.ssl.keyStoreType
javax.net.ssl.keyStore
javax.net.ssl.keyStoreProvider
javax.net.ssl.keyStorePassword
https.protocols
https.cipherSuites
http.proxyHost
http.proxyPort
http.nonProxyHosts
http.keepAlive
http.maxConnections
http.agent
Request-level configuration has no effect on any of those settings, with the exception of the proxy host and port. Proxy settings at the request level will override those at the system level.
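For illustration, a sketch of that exception (the proxy host name is an assumption): a proxy set on the RequestConfig takes precedence over http.proxyHost / http.proxyPort picked up via useSystemProperties(), while the other system properties above remain in effect.
RequestConfig requestConfig = RequestConfig.custom()
    .setConnectTimeout(10000)
    .setSocketTimeout(300000)
    .setProxy(new HttpHost("proxy.example.internal", 8080)) // overrides the system proxy for these requests
    .build();
CloseableHttpClient httpClient = HttpClientBuilder.create()
    .useSystemProperties()
    .setDefaultRequestConfig(requestConfig)
    .build();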

Service Fabric ServicePartitionResolver ResolveAsync

I am currently using the ServicePartitionResolver to get the HTTP endpoint of another application within my cluster.
var resolver = ServicePartitionResolver.GetDefault();
var partition = await resolver.ResolveAsync(serviceUri, partitionKey ?? ServicePartitionKey.Singleton, CancellationToken.None);
var endpoints = JObject.Parse(partition.GetEndpoint().Address)["Endpoints"];
return endpoints[endpointName].ToString().TrimEnd('/');
This works as expected; however, if I redeploy my target application and its port changes on my local dev box, the source application still returns the old endpoint (which is now invalid). Is there a cache somewhere that I can clear? Or is this a bug?
Yes, they are cached. If you know that the partition is no longer valid, or if you receive an error, you can call the resolver.ResolveAsync() overload that takes the earlier ResolvedServicePartition (previousRsp), which triggers a refresh.
This API overload is used in cases where the client knows that the resolved service partition it has is no longer valid.
See this article too.
Yes, they are cached. There are two solutions to overcome this.
The simplest code change is to replace var resolver = ServicePartitionResolver.GetDefault(); with var resolver = new ServicePartitionResolver();. This forces the service to create a new ServicePartitionResolver object every time, whereas GetDefault() returns the cached one.
[Recommended] The right way of handling this is to implement a custom CommunicationClientFactory that derives from CommunicationClientFactoryBase, then initialize a ServicePartitionClient and call InvokeWithRetryAsync. It is documented clearly in Service Communication, in the Communication clients and factories section.