ADO.NET background pool validation

In Java, application servers like JBoss EAP have the option to periodically verify the connections in a database pool (https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/6.4/html/administration_and_configuration_guide/sect-database_connection_validation). This has been very useful for removing stale connections.
I'm now looking at an ADO.NET application, and I was wondering whether there is any similar functionality that can be used with Microsoft SQL Server?
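For context, the per-connection check that JBoss-style background validation performs boils down to something like the JDBC sketch below. It is purely illustrative: Connection.isValid() is the standard JDBC 4 hook such validators can use, and the class name and timeout are placeholders of mine, not anything from JBoss.

import java.sql.Connection;
import java.sql.SQLException;

// Illustrative only: roughly what a background validation pass does for each
// idle pooled connection before deciding whether to keep it or evict it.
public final class ConnectionValidator {

    // Returns true if the connection still answers within the timeout.
    // isValid() performs a lightweight round trip, so a silently dead TCP path
    // is detected here rather than when application code tries to use it.
    public static boolean isStillUsable(Connection connection, int timeoutSeconds) {
        try {
            return connection != null && connection.isValid(timeoutSeconds);
        } catch (SQLException e) {
            return false;
        }
    }
}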

I ended up finding this post by Redgate that describes some of the validation that goes on when connections are taken from the pool:
If the connection has died because a router has decided that it no
longer wants to forward your packets and no other routers like you
either then there is no way to know this unless you try to send some
data and don’t get a response.
If you create a connection and a connection pool is created and
connections are put into the pool and not used, the longer they are in
there, the bigger the chance of something bad happening to them.
When you go to use a connection there is nothing to warn you that a
router has stopped forwarding your packets until you go to use it; so
until you use it, you do not know that there is a problem.
This was an issue with connection pooling that was fixed in the first
.NET 4 reliability update (see issue 14, which vaguely describes this)
with a feature called “Connection Pool Resiliency”. The update meant
that when a connection is about to be taken from the pool, it is
checked for TCP validity and only returned if it is in a good state.

Related

Socket connections closing when manually deploying

We made a chat module in our project using socket.io. When the load is balanced and we deploy manually, socket connections that get switched to different servers are disconnected and some of the messaging events are not processed. I solved the load-balancing problem with the socket.io-redis library; it acts as a gateway and solves this problem thanks to Redis.
Another problem is that when I deploy manually, the PIDs of the server processes change, the socket.io connections are instantly disconnected on the client, and afterwards the client does not actually reconnect even though it reports being connected.
Do you think that using tools such as Travis CI would solve the problems with the manual deploy process?
Another question: if a system that scales out to 3 servers behind the load balancer later goes back to 2 servers, the socket connections will be closed again; what approach might be required to solve this? I thought of separating the socket.io service from the monolithic structure, keeping it on a single server, and scaling that server vertically when the load increases.
We are using AWS Elastic Beanstalk, which performs load balancing automatically.

Java/Tomcat Open TCP Connections - Resource Monitor

Right now we are running into a problem where we have a bunch of "open TCP connections" on our Windows servers that are running a Tomcat web server. The Java code is doing a SOAP call to a vendor, and we see a lot of open connections in Resource Monitor showing the vendor's IP address. I've tried a couple of different methods of doing the SOAP call, thinking the connection wasn't being explicitly closed somewhere behind the scenes. Nothing has worked so far, so I'm thinking that I may be misunderstanding what this page is actually showing.
What is the lifecycle for a TCP connection as it relates to the Windows Resource Monitor? Is it normal for connections that are no longer being used to stay out there for a while? If not, how do I remedy the situation?
It'll be either a connection pool or a resource leak in your code.
To make sure it's not a resource leak, check your code to confirm that whatever object is making the network call closes the connection after it's used; otherwise you'll be waiting until the garbage collector runs.
However, if the network client supports connection pooling, then closing it may only place the open connection back into a pool ready for quick re-use. You don't say which client API you're using, but if it supports pooling then it should provide an API to control how long released connections remain in the pool.
There is no Windows Winsock-level pooling or persistence. If the underlying socket gets closed then that's it, it gets closed.
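To make the "close after use" point concrete, here is a rough sketch of a SOAP-style POST that releases its socket deterministically. It assumes the call ultimately goes through HttpURLConnection and Java 9+ (for readAllBytes); the endpoint, envelope, and class name are placeholders, and your actual SOAP client API may manage this differently.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public final class SoapCallExample {

    // Posts a SOAP envelope and closes both streams so the socket is released
    // promptly instead of lingering until the garbage collector runs.
    public static String callEndpoint(String endpoint, String soapEnvelope) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(soapEnvelope.getBytes(StandardCharsets.UTF_8));
        }
        try (InputStream in = conn.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        } finally {
            // Signals that no further requests are expected, letting the
            // underlying socket be closed rather than kept for keep-alive reuse.
            conn.disconnect();
        }
    }
}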

HornetQ local connection never timing out

My application, running in a JBoss standalone environment, relies on HornetQ (v2.2.5.Final) middleware to exchange messages between parts of my application in a local environment - not over the network.
The default TTL (time-to-live) value for the connection is 60000 ms. I am thinking of changing it to -1 since, from an operational point of view, I expect to keep sending messages through that connection from time to time (at intervals not known in advance). That would also prevent issues like JMS queue connection failure.
The question is: what are the issues with never timing out a connection on the server side, in such a context? Is that a good choice? If not, is there a strategy suited to this situation?
The latest versions of HornetQ automatically disable connection checking for in-VM connections, so there shouldn't be any issues if you configure this manually.
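For completeness, if you do want to set the TTL explicitly from client code rather than rely on that behaviour, the knob is exposed on the core client's ServerLocator. The sketch below is only indicative of the 2.2.x API; the class and method names are written from memory, so treat them as assumptions and check them against your version.

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.ClientSession;
import org.hornetq.api.core.client.ClientSessionFactory;
import org.hornetq.api.core.client.HornetQClient;
import org.hornetq.api.core.client.ServerLocator;
import org.hornetq.core.remoting.impl.invm.InVMConnectorFactory;

public final class InVmSessionExample {

    public static ClientSession createSession() throws Exception {
        // In-VM transport: traffic never leaves the JVM, so a dead router
        // cannot silently kill this connection.
        ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(InVMConnectorFactory.class.getName()));
        // -1 disables the server-side TTL check for connections from this locator.
        locator.setConnectionTTL(-1);
        ClientSessionFactory factory = locator.createSessionFactory();
        return factory.createSession();
    }
}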

Get HTTP connection pool from WebSphere 6.1

All
I am making REST client calls from an EJB container (IBM WebSphere v6.1) and cannot find any way to get an HTTP connection factory from WAS.
Is this possible in WAS 6.1?
I would expect to be able to access this via JNDI so that the connection pool configuration, socket timeout, connection timeout, connections per URL, etc. could be centrally managed.
If not, the alternative is to use a client API such as HttpClient 4.3. But this has its own kettle of fish:
They recommend 'BasicHttpClientConnectionManager': "This connection manager implementation should be used inside an EJB container". However, this implies one connection per thread, which in an application with many threads will exhaust the resources of the OS.
The other alternative, 'PoolingHttpClientConnectionManager', seems to be a much better fit with many of the required controls, but the comments on the Basic manager say explicitly that the Pooling manager shouldn't be used in an EJB container-managed context. Scanning its code, it looks like the Pooling manager uses Future from the concurrency library but doesn't appear to use threads directly.
Any suggestions about the best way forward would be appreciated - some options seem to be:
Test with PoolingHttpClientConnectionManager - with the risk of subtle problems (a rough 4.3 setup is sketched at the end of this question)
Play safe with 'BasicHttpClientConnectionManager' but set short response and socket timeouts to constrain the number of concurrent sockets at the cost of lots of factory overhead. Yuk.
Some other way of getting access to the pool of HTTP connections in WAS 6.1.
Something else
Any suggestions for this rather icky problem would be ideal.
Please don't suggest upgrading WAS - although later versions (i.e. the WAS Commerce version) do seem to have a JCA HTTP adaptor, and 8.5 has a built-in REST client.
Please don't publish responses relating to MQ/JMS, JDBC connection pooling or setting up resource adaptors for EIS other than HTTP.
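For reference, if option 1 is tried, the HttpClient 4.3 wiring that centralises the controls mentioned above (total pool size, connections per route/URL, connection and socket timeouts) looks roughly like this. The limits and timeouts are placeholders, not recommendations.

import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public final class RestClientFactory {

    // Builds one shared client backed by a bounded connection pool.
    public static CloseableHttpClient build() {
        PoolingHttpClientConnectionManager pool = new PoolingHttpClientConnectionManager();
        pool.setMaxTotal(50);            // total sockets across all target hosts (placeholder)
        pool.setDefaultMaxPerRoute(10);  // sockets per route/URL (placeholder)

        RequestConfig timeouts = RequestConfig.custom()
                .setConnectTimeout(5000)            // connection timeout in ms
                .setSocketTimeout(10000)            // socket (read) timeout in ms
                .setConnectionRequestTimeout(2000)  // max wait for a free pooled connection in ms
                .build();

        return HttpClients.custom()
                .setConnectionManager(pool)
                .setDefaultRequestConfig(timeouts)
                .build();
    }
}

Whether this is safe inside a WAS 6.1 EJB container is exactly the open question in option 1, so the sketch only shows the wiring, not a verdict.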

How do you know if PgBouncer is working?

I've set up PgBouncer and configured it to connect to my Postgres DB, and it all connects just fine; however, I'm not sure if it's actually working.
I have a PHP script that runs as a daemon and picks up beanstalk jobs. The problem is that for every distinct user/action on the system it opens a new connection to Postgres and then leaves that connection idling, because the daemon doesn't actually stop running, so the connection is never terminated (a quick fix for this was to reset the connection at the end of the script loop, but that is going to be inefficient with many connects).
Anyway, this caused Postgres to eventually run out of connections and lock up...
So PgBouncer seems like the answer.
But now when I run it, I see the same DB connection multiple times when I do ps ax | grep postgres.
Isn't PgBouncer supposed to only ever have one connection open to the DB and route all traffic through that connection, then open a new connection if that one is busy?
At present I have 3 processes for one database (my access control system) and 2 for the other (my client-specific data).
To me, it feels like if I roll out these changes I will just be faced with the same problem: the connections will get eaten up again because they're not being released.
I hope that explains enough for someone to offer any advice.
It sounds like an important step here is to fix the scripts so they release the connections when they're done. Once you do that, PgBouncer will help reduce the connection setup/teardown overhead, but in its default session pooling mode it won't give you the ability to maintain more connections to Pg than you otherwise could.
However, you can also use PgBouncer in transaction-pooling mode. When used for transaction pooling, PgBouncer keeps a pool of idle backend connections and only assigns one when a client starts a transaction (BEGIN). Connections are returned to the pool after the client does a COMMIT or ROLLBACK. That means that in transaction pooling mode you can have large numbers of open connections to PgBouncer; they don't each need a corresponding connection to a PostgreSQL backend, so long as most of them are idle at any point in time and don't have a transaction open.
The only real downside of transaction pooling mode is that it breaks applications that expect to SET session-level variables, keep prepared statements across transactions, etc. Applications may need modification, e.g. to use SET LOCAL.
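To make that last point concrete, here is a hedged sketch of keeping session state transaction-scoped so it plays well with transaction pooling. The daemon in the question is PHP, but the pattern is the same in any client; it is shown here in plain JDBC, and the port, credentials, table, and the statement_timeout setting are all placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public final class TransactionPoolingFriendly {

    public static void main(String[] args) throws Exception {
        // Connect through PgBouncer rather than straight to PostgreSQL
        // (6432 is a common PgBouncer port; adjust to your setup).
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:6432/mydb", "myuser", "mypassword")) {
            conn.setAutoCommit(false); // run the work in an explicit transaction
            try (Statement st = conn.createStatement()) {
                // SET LOCAL lasts only until COMMIT/ROLLBACK, so it is safe under
                // transaction pooling; a plain SET would leak session state onto
                // whichever client is handed this backend connection next.
                st.execute("SET LOCAL statement_timeout = '5s'");
                st.execute("UPDATE jobs SET state = 'done' WHERE id = 42");
            }
            conn.commit(); // the backend connection goes back to PgBouncer's pool here
        } // closing the connection releases it promptly, as recommended above
    }
}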