ORMLite --- After .commit , .setAutoCommit --- Connection NOT closed - postgresql

I use ORMLite in a solution made up of a server and clients.
On the server side I use PostgreSQL; on the client side I use SQLite. In code I use the same ORMLite methods, without worrying about which database is being managed (PostgreSQL or SQLite). I also use a pooled connection source.
I don't keep connections open: when I need an SQL query, ORMLite takes care of opening and closing the connection.
Sometimes I use the following code to perform a long operation in the background on the server side, i.e. on the PostgreSQL database.
final ConnectionSource OGGETTO_ConnectionSource = ...... ;
final DatabaseConnection OGGETTO_DatabaseConnection =
    OGGETTO_ConnectionSource.getReadOnlyConnection( "tablename" ) ;
OGGETTO_DAO.setAutoCommit(OGGETTO_DatabaseConnection, false);
// do long operation with Sql Queries ;
OGGETTO_DAO.commit(OGGETTO_DatabaseConnection);
OGGETTO_DAO.setAutoCommit(OGGETTO_DatabaseConnection, true);
I noticed that the number of open connections kept increasing, so after some time it grew big enough to stop the server (SQLException: "too many clients connected to DB").
I discovered that this is due to the code snippet above: it seems that after this snippet the connection is not closed and remains open.
Of course I cannot add "OGGETTO_ConnectionSource.close()" at the end, because that closes the pooled connection source.
If I add "OGGETTO_DatabaseConnection.close();" at the end, it doesn't work either: open connections continue to increase.
How can I solve this?

I discovered that this is due to the code snippet above: it seems that after this snippet the connection is not closed and remains open.
Let's RTFM. Here are the javadocs for the ConnectionSource.getReadOnlyConnection(...) method. I will quote:
Return a database connection suitable for read-only operations. After you are done,
you should call releaseConnection(DatabaseConnection).
You need to do something like the following code:
DatabaseConnection connection = connectionSource.getReadOnlyConnection("tablename");
try {
    dao.setAutoCommit(connection, false);
    try {
        // do long operation with Sql Queries
        ...
        dao.commit(connection);
    } finally {
        dao.setAutoCommit(connection, true);
    }
} finally {
    connectionSource.releaseConnection(connection);
}
BTW, this is approximately what the TransactionManager.callInTransaction(...) method is doing although it has even more try/finally blocks to ensure that the connection's state is reset. You should probably switch to it. Here are the docs for ORMLite database transactions.
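The general shape of the fix above (acquire, work inside try, release in finally) can be demonstrated without ORMLite at all. Here is a self-contained toy sketch (ToyPool and its methods are made up for illustration, standing in for getReadOnlyConnection/releaseConnection) showing why the finally-based release keeps the checked-out count from growing:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy stand-in for a pooled connection source: it just counts checkouts.
class ToyPool {
    private final AtomicInteger checkedOut = new AtomicInteger();

    String acquire() {                // stands in for getReadOnlyConnection(...)
        checkedOut.incrementAndGet();
        return "conn";
    }

    void release(String conn) {       // stands in for releaseConnection(...)
        checkedOut.decrementAndGet();
    }

    int inUse() { return checkedOut.get(); }
}

public class ReleaseDemo {
    public static void main(String[] args) {
        ToyPool pool = new ToyPool();
        String conn = pool.acquire();
        try {
            // long-running work with the connection would go here;
            // even if it throws, the finally below still runs
        } finally {
            pool.release(conn);       // without this, inUse() grows on every call
        }
        System.out.println("in use after work: " + pool.inUse());
    }
}
```

If the release call is skipped, the count climbs by one per invocation, which is exactly the "too many clients" failure mode described in the question.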

Related

How to handle multiple mongo databases connections

I have a multi-tenant application, i.e., each customer has their own database.
My stack is NestJS with MongoDB, and to route HTTP requests to the right database I use the nestjs-tenancy lib; it scopes the connection based on the request subdomain.
But this lib does not work when I need to execute something asynchronous, like a queue job (Bull). So in the queue consumer I create a new connection to the database I need to access. I extract the info about which database from the queue job data:
try {
    await mongoose.connect(
        `${this.configService.get('database.host')}/${
            job.data.tenant
        }?authSource=admin`,
    );
    const messageModel = mongoose.model(
        Message.name,
        MessageSchema,
    );
    // do a lot of stuff with messageModel
    await messageModel.find();
    await messageModel.save();
    ....
} finally {
    await mongoose.connection.close();
}
I have two different jobs that may run at the same time for the same database.
I'm noticing that I sometimes get errors about the connection, like:
MongoExpiredSessionError: Cannot use a session that has ended
MongoNotConnectedError: Client must be connected before running operations
So I'm definitely doing something wrong. PS: in all my operations I use await.
Any clue what it could be?
Should I use const conn = mongoose.createConnection() instead of mongoose.connect()?
Should I always close the connection or leave it "open"?
EDIT:
After some testing, I'm sure that I can't use mongoose.connect(). Changing it to mongoose.createConnection() solved the issues. But I'm still not sure whether I need to close the connection. If I close it, I'm sure I'm not overloading Mongo with lots of connections, but at the same time every request will create a new connection, and I have hundreds of jobs running at once for different tenants.

When to close hbase connection and what will happen if connection not closed

I'm new to HBase. I want to perform two scan operations on different tables.
So I have written a generic function, in Scala, to scan a table, and it closes the connection at the end.
fuctionhbasescan(Tablename1, scanfield)
fuctionhbasescan(Tablename2, scanfield)
When I call the function for Tablename1, it works fine and returns the result.
But when I call the same function for Tablename2, it says the connection is closed.
Is only one connection established for the instance in Scala? Do we need to close the connection at the end of the process, in the driver?
Please help me understand the connection process and how it works.
Note: the connection is established (via HConnectionManager) using ConnectionFactory.createConnection, and then connection.getTable.
Not sure based on the info provided, but the connection may have been closed by the first function call. You can:
Check whether the connection isClosed, as documented at
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Connection.html
and, if necessary, create another connection with createConnection (see
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/ConnectionFactory.html ).
Avoid closing the connection until you're done with it: move the close call out of the inner function, and close the connection only after both scans are done.
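The suggested restructuring, create the connection once, pass it into the scan helper, and close it only in the caller, can be sketched like this. This is a self-contained Java illustration with a stand-in connection type (FakeConnection is made up; the real HBase Connection needs a running cluster), but the shape is the same:

```java
// Stand-in for an HBase-style connection: usable until close() is called.
class FakeConnection implements AutoCloseable {
    private boolean closed = false;

    String scan(String table) {
        if (closed) throw new IllegalStateException("connection closed");
        return "rows of " + table;
    }

    @Override
    public void close() { closed = true; }
}

public class ScanDemo {
    // The helper takes the connection as a parameter and does NOT close it.
    static String hbaseScan(FakeConnection conn, String table) {
        return conn.scan(table);
    }

    public static void main(String[] args) {
        FakeConnection conn = new FakeConnection();
        try {
            System.out.println(hbaseScan(conn, "Tablename1"));
            System.out.println(hbaseScan(conn, "Tablename2")); // still open, so this works
        } finally {
            conn.close(); // close exactly once, after both scans are done
        }
    }
}
```

If the helper closed the connection itself (as in the original code), the second call would hit the "connection closed" error described in the question.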

SQLTimeoutException in play-slick

I'm using play-slick with slick 3.0.0 in this way:
I got a connection by
val conn = db.createSession.conn
then got statement:
val statement = conn.prepareStatement(querySQL)
and returned a ResultSet:
Future{statement.executeQuery()}
But I got a problem: after using this query about 50 times, I got the exception:
SQLTimeoutException: Timeout after 1000ms of waiting for a connection.
I know this may be caused by connections not being closed, and I didn't close the connection or session manually in my code.
I want to know:
Will a connection created my way be closed and returned to the connection pool automatically?
Was my situation caused by connections not being released?
How to close a connection manually?
Any help would be greatly appreciated!
Remark: it would be most helpful if you posted your full code (including the call that is performed 50 times).
Will a connection created my way be closed and returned to the connection pool automatically?
No. Even though Java 7 (and up) provides the so-called try-with-resources statement (see https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html ) to auto-close your resources, AFAIK this mechanism is not available in Scala (please correct me, somebody, if this is not true).
Still, Scala provides the loan pattern (see https://wiki.scala-lang.org/display/SYGN/Loan , especially using ), which provides an FP way of closing resources in a finally block.
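For reference, the Java mechanism mentioned above looks like this, a self-contained sketch with a stand-in resource class (FakeSession is made up for illustration; also note that Scala 2.13 later added scala.util.Using with the same intent as the loan pattern):

```java
// try-with-resources: close() is called automatically, even if the body throws.
class FakeSession implements AutoCloseable {
    static boolean closed = false;

    String query(String sql) { return "result of " + sql; }

    @Override
    public void close() { closed = true; }
}

public class TryWithResourcesDemo {
    public static void main(String[] args) {
        try (FakeSession session = new FakeSession()) {
            System.out.println(session.query("select 1"));
        } // session.close() runs here automatically
        System.out.println("closed: " + FakeSession.closed);
    }
}
```

The same guarantee is what the loan pattern's using wrapper provides: the caller can never forget the close call, because it lives in one place.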
Was my situation caused by connection didn't release?
As long as you don't provide your full code, this is only a guess. But yes: not closing connections exhausts the connection pool, so eventually no new connections are available.
How to close a connection manually?
connection.close()

How to check if a database connection is still open in Ado.Net?

Is there a way to check if a database connection is open? I am trying to check the connection state, but it shows ConnectionState as Open even when the database is down.
private bool IsValidConnection(IDbConnection connection)
{
    return (connection != null && connection.State == ConnectionState.Open);
}
While I don't have an answer, I can tell you why stuff like this happens.
It basically happens with any protocol above TCP that does not send periodic heartbeats.
TCP provides no way to detect that the remote end is closed without sending data, and even then it can take considerable time to detect the failure of the remote end.
In an ideal world, the server would send a TCP FIN packet when it goes down, but this does not happen if someone yanks out the network cable, the server crashes hard, or an in-between NAT gateway/firewall silently drops the connection. This results in the client not knowing the server is gone; you have to send it something and assume the server is gone if you don't receive a response within a reasonable time, or if an error occurs (typically the first few send() calls after a server has silently gone away won't error out).
So, if you want to make sure a DB connection is OK, issue a select 1; query. Some lower-level DB APIs may have a ping()/isAlive() or similar method that can be used for the same purpose.
That looks like the correct code to check if your DB connection is still open.
I would expect to see an exception thrown on your connection.Open() when the database is down. Is the database down before you open the connection or does it go down after the open?
A simple way would be to code a button with a query that retrieves some data from some table of the database, and catch any error that is returned.
That would show whether the connection is there or not.

ADO.NET SqlData Client connections never go away

An ASP.NET application I am working on may have a couple hundred users trying to connect. We get an error that the maximum number of connections in the pool has been reached. I understand the concept of connection pools in ADO.NET, although in testing I've found that a connection is left "sleeping" on the MS SQL 2005 server days after the connection was made and the browser was closed. I have tried to limit the connection lifetime in the connection string, but this has no effect. Should I raise the maximum number of connections? Or have I completely misdiagnosed the real problem?
All of your database connections must either be wrapped in a try...finally:
SqlConnection myConnection = new SqlConnection(connString);
myConnection.Open();
try
{
    // use the connection here...
}
finally
{
    myConnection.Close();
}
Or...much better yet, create a DAL (Data Access Layer) that has the Close() call in its Dispose method and wrap all of your DB calls with a using:
using (MyQueryClass myQueryClass = new MyQueryClass())
{
// DB Stuff here...
}
A few notes: ASP.NET does rely on you to release your connections. They will be released through GC after they've gone out of scope, but you cannot count on this behavior, as it may take a very long time to kick in, much longer than you may be able to afford before your connection pool runs out. Speaking of which, ADO.NET actually pools your connections to the database as you request them (recycling old connections rather than releasing them completely and then requesting them anew from the database). This doesn't really matter as far as you are concerned: you still must call Close()!
Also, you can control the pooling by using your connection string (e.g. Min/Max pool size, etc.). See this article on MSDN for more information.
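For example, pooling behavior can be tuned directly in the connection string. The keyword names below are the standard SqlClient ones; the server, database, and values are placeholders for illustration only:

```
Server=myServer;Database=myDb;Integrated Security=true;
Pooling=true;Min Pool Size=5;Max Pool Size=100;Connection Lifetime=300;
```

Max Pool Size caps how many connections the pool will hand out before callers start queuing (and eventually timing out), while Connection Lifetime retires pooled connections older than the given number of seconds when they are returned to the pool.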