EJB 3.1 client max connections for each EJB facade - JBoss

For example, we have two stateless EJB facades, UserFacade and ContactFacade.
The client is a Tomcat web application that occasionally makes remote calls to UserFacade and ContactFacade.
In JBoss we have configuration like:
<!-- Maximum number of connections in client invoker's -->
<!-- connection pool (socket transport). Defaults to 50. -->
<!--entry><key>clientMaxPoolSize</key> <value>20</value></entry-->
Is this configuration per EJB3 facade? For example, if clientMaxPoolSize is 50, does that mean 50 connections for UserFacade and 50 for ContactFacade, or 25 for each facade?
Also, is it useful to keep a connection pool on the client side and make remote calls through it, and should the client clear connections (or take some other action) when the connection count for a facade rises above 25 (or 50)?
In some client applications I have used a connection pool; it has some advantages and disadvantages.
Any suggestions?
Thanks

Is this configuration per EJB3 facade? For example, if clientMaxPoolSize is 50, does that mean 50 connections for UserFacade and 50 for ContactFacade, or 25 for each facade?
Ans: It will allow 50 for UserFacade and 50 for ContactFacade.
Is it useful to keep a connection pool on the client side, and what happens when the connection count for a facade rises above 25 (50)?
Ans: Callers will go into wait mode and wait for a connection to become available.
In some client applications I have used a connection pool; it has some advantages and disadvantages. Any suggestions?
Ans: A connection pool is good when you want to serve a large amount of processing with less hardware. But make sure your configuration matches your average application load.
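For context, a minimal sketch of the client side; the JNDI names, host, and port here are illustrative assumptions, not values from the question:

    import java.util.Properties;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    // Hypothetical JBoss 5-style remote lookup from the Tomcat client.
    // UserFacade and ContactFacade are the question's remote interfaces.
    public class FacadeClient {
        public static void main(String[] args) throws Exception {
            Properties env = new Properties();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "org.jnp.interfaces.NamingContextFactory");
            env.put(Context.PROVIDER_URL, "jnp://appserver:1099"); // assumed host/port
            InitialContext ctx = new InitialContext(env);
            // Per the answer above, each facade's proxy draws on its own
            // clientMaxPoolSize-limited pool of remoting connections.
            UserFacade users = (UserFacade) ctx.lookup("UserFacadeBean/remote");
            ContactFacade contacts = (ContactFacade) ctx.lookup("ContactFacadeBean/remote");
        }
    }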

Related

What exactly does a connection pool for databases like PostgreSQL do?

I know the general idea: a connection pool is a pool of reusable connections that speeds up traffic to the database, because connections can be reused instead of constantly being created anew.
But this is a very high-level explanation. It doesn't explain what is meant by a connection and why the connection pool helps, since even with a pool such as client -> PgBouncer -> PostgreSQL, the client still has to create a connection to the proxy, even though it no longer creates one to the database.
So what does the connection from (e.g.) client -> PgBouncer consist of, and why is creating this connection faster than creating the connection PgBouncer -> PostgreSQL?
There are two uses of a connection pool:
- It prevents opening and closing database connections all the time. There is certainly some overhead in establishing a TCP connection to PgBouncer, but that is far cheaper than establishing a database connection. When you start a database connection, additional work is done:
  - a server process is started, which is far more expensive than a TCP connection
  - PostgreSQL loads cached metadata tables
- It puts a limit on the number of client connections, thereby preventing database overload. The advantage over limiting max_connections is that connections in excess of the limit won't receive an error, but will be queued waiting for a connection to become free.
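To make the first point measurable, here is a rough JDBC timing sketch. The ports follow the PostgreSQL and PgBouncer defaults; the database name and credentials are assumptions:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ConnectCost {
        // Times a fresh connect plus a trivial query against one endpoint.
        static long connectMillis(String url) throws Exception {
            long t0 = System.nanoTime();
            try (Connection c = DriverManager.getConnection(url, "app", "secret");
                 Statement s = c.createStatement()) {
                s.execute("SELECT 1");
            }
            return (System.nanoTime() - t0) / 1_000_000;
        }

        public static void main(String[] args) throws Exception {
            // Direct: TCP handshake + backend process start + catalog cache loading.
            System.out.println("direct:    "
                    + connectMillis("jdbc:postgresql://localhost:5432/app") + " ms");
            // Via PgBouncer: TCP handshake only; the server connection is reused.
            System.out.println("pgbouncer: "
                    + connectMillis("jdbc:postgresql://localhost:6432/app") + " ms");
        }
    }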

Vert.x HTTP client max connection pooling - is this pool per endpoint or in total?

I am using the Vert.x web client (3.8.5) for an API gateway and setting setMaxPoolSize to 20. Is this limit per endpoint, or in total across all endpoints?
I am deploying my application with 36 verticles and one web client per verticle, which makes 36 web clients in total, and my application needs to connect to more than 1000 different ip:port destinations.
Now, to get the benefit of connection pooling, if the above limit is on total connections, I would need setMaxPoolSize >= 1000, which makes the overall connections from the application 1000 * 36 >= 36000. What are the advisable settings for this use case?
If I set maxPoolSize = 20, none of the connections have expired (expiry time = 60s), and only, say, 10 of them are in use, what happens when a request comes in for an ip:port that isn't in the pool? Does it get queued, or is one of the unused connections disconnected and a new connection (for the new ip:port) established?
What should my client configuration be for an API gateway handling multiple concurrent requests to different ip:port destinations?
Thanks,
Nitish
After reading the Vert.x code, I figured out that maxPoolSize is per destination.
So, in the above case, the total would be number of HTTP clients * maxPoolSize (per destination).
I don't expect more than 100 concurrent requests to any single destination host, so setting this value to 5 gives me 5 * 36 (36 HTTP clients) = 180 possible connections to each destination, which covers that comfortably.
Note: if you are running a good number of HTTP clients in an instance with multiple verticles, you also need to raise the maximum number of open file descriptors.
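A minimal configuration sketch following that finding; the pool size of 5 comes from the calculation above, and the keep-alive timeout is an assumption chosen to match the 60 s expiry mentioned in the question:

    import io.vertx.core.Vertx;
    import io.vertx.ext.web.client.WebClient;
    import io.vertx.ext.web.client.WebClientOptions;

    public class GatewayClient {
        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();
            WebClientOptions options = new WebClientOptions();
            options.setMaxPoolSize(5);        // cap applies per destination (host:port)
            options.setKeepAliveTimeout(60);  // seconds an idle pooled connection is kept
            WebClient client = WebClient.create(vertx, options);
        }
    }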

ADO.NET background pool validation

In Java, application servers like JBoss EAP can periodically verify the connections in a database pool (https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/6.4/html/administration_and_configuration_guide/sect-database_connection_validation). This has been very useful for removing stale connections.
I'm now looking at an ADO.NET application, and I was wondering if there is any similar functionality that could be used with Microsoft SQL Server?
I ended up finding this post by Redgate that describes some of the validation that goes on when connections are taken from the pool:

If the connection has died because a router has decided that it no longer wants to forward your packets, and no other routers like you either, then there is no way to know this unless you try to send some data and don't get a response.

If you create a connection, a connection pool is created, and connections are put into the pool and not used, then the longer they sit there, the bigger the chance of something bad happening to them. When you go to use a connection, there is nothing to warn you that a router has stopped forwarding your packets; until you use it, you do not know that there is a problem.

This was an issue with connection pooling that was fixed in the first .NET 4 reliability update (see issue 14, which vaguely describes this) with a feature called "Connection Pool Resiliency". The update meant that when a connection is about to be taken from the pool, it is checked for TCP validity and only returned if it is in a good state.
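For comparison, the JBoss-style background validation the question starts from boils down to something like the following sketch. This is the idea only, not the actual EAP implementation; the interval and timeout are assumptions:

    import java.sql.Connection;
    import java.util.Queue;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Conceptual sketch: periodically test idle pooled connections and
    // evict the dead ones, so callers never receive a stale connection.
    // Assumes a thread-safe queue (e.g. ConcurrentLinkedQueue).
    public class BackgroundValidator {
        static void start(Queue<Connection> idlePool) {
            ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
            ses.scheduleAtFixedRate(() -> idlePool.removeIf(c -> {
                try {
                    return !c.isValid(2); // 2-second validation timeout (assumption)
                } catch (Exception e) {
                    return true;          // treat failures as dead connections
                }
            }), 30, 30, TimeUnit.SECONDS); // validation interval (assumption)
        }
    }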

Get HTTP connection pool from WebSphere 6.1

All,
I am making REST client calls from an EJB container (IBM WebSphere v6.1) and cannot find any way to get an HTTP connection factory from WAS.
Is this possible in WAS 6.1?
I would expect to be able to access this via JNDI, so that connection pool configuration, socket timeout, connection timeout, connections per URL, etc. could be centrally managed.
If not, the alternative is to use a client API such as HttpClient 4.3. But this has its own kettle of fish:
They recommend BasicHttpClientConnectionManager: "This connection manager implementation should be used inside an EJB container". However, this implies one connection per thread, which in an application with many threads will exhaust the resources of the OS.
The other alternative, PoolingHttpClientConnectionManager, seems a much better fit with most of the required controls, but the comments on the basic manager say explicitly that the pooling manager shouldn't be used in an EJB container-managed context. Scanning its code, the pooling manager uses Future from the concurrency library but doesn't appear to use threads directly.
Any suggestions about the best way forward would be appreciated; some options seem to be:
- Test with PoolingHttpClientConnectionManager, with the risk of subtle problems (see the sketch at the end of this post).
- Play safe with BasicHttpClientConnectionManager, but set short response and socket timeouts to constrain the number of concurrent sockets, at the cost of lots of factory overhead. Yuck.
- Some other way of getting access to the pool of HTTP connections in WAS 6.1.
- Something else.
Any suggestions for this rather icky problem would be ideal.
Please don't suggest upgrading WAS, although future versions (i.e. the WAS Commerce version) do seem to have a JCA HTTP adaptor, and 8.5 has a built-in REST client.
Please don't publish responses relating to MQ/JMS, JDBC connection pooling, or setting up resource adapters for EIS other than HTTP.
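To make the first option concrete, here is a minimal HttpClient 4.3 sketch of the pooling manager setup; the pool limits are illustrative assumptions, not recommendations:

    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;
    import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

    public class PooledRestClient {
        public static CloseableHttpClient build() {
            PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
            cm.setMaxTotal(50);            // total sockets across all routes (assumption)
            cm.setDefaultMaxPerRoute(10);  // sockets per host:port (assumption)
            return HttpClients.custom()
                    .setConnectionManager(cm)
                    .build();
        }
    }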

Tomcat inter-webapp HTTP communication

Given two web apps running on the same Tomcat 6: if you make an HTTP call from one app to the other, will Tomcat "short-circuit" this call, or will it go all the way out onto the network before calling home?
@thomasz's answer shows the need for more detail. We're using Spring's RestTemplate to do the communication. Its pluggable architecture lets you provide your own ClientHttpRequestFactory.
Would it be possible to implement a ClientHttpRequest so that, if the request were to localhost, it could persuade Tomcat to handle it internally?
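For reference, this is the plug-in point in question, sketched with Spring's stock factory; the hypothetical short-circuiting factory is exactly the piece that does not exist:

    import org.springframework.http.client.SimpleClientHttpRequestFactory;
    import org.springframework.web.client.RestTemplate;

    public class InterAppClient {
        public static RestTemplate build() {
            RestTemplate template = new RestTemplate();
            // The default factory opens a real HTTP connection; a custom
            // ClientHttpRequestFactory set here is where any localhost
            // "short circuit" would have to be implemented.
            template.setRequestFactory(new SimpleClientHttpRequestFactory());
            return template;
        }
    }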
No, the request will go through all the layers, including the loopback interface. Tomcat does not treat requests to the same web container differently. After all, how could it? You are accessing some URL via URLConnection, HttpClient, a raw socket, or the like; Tomcat would have to somehow intercept (instrument) your application's code and dynamically replace the HTTP call with some internal invocation. Possible, but very complicated.
To make matters worse, you can easily cause deadlock or starvation under high load. Imagine your Tomcat worker thread pool has 10 threads, and 10 concurrent users access the same servlet at the same time. Every servlet instance now tries to connect to the same web container, but the worker thread pool is exhausted. So all these servlets block, waiting for an idle worker thread; but an idle thread will never appear, because the servlets themselves are occupying all of them!