I have an "article" table on a Postgresql 9.1 database and a trigger that notifies a channel on each insert.
I'd like to create a node.js script that catches those inserts and pushes notifications to connected clients using Socket.io. So far I'm using the node-postgres module to LISTEN on the channel, but it seems the LISTEN query times out after about 10-15 seconds and stops catching the inserts. I could issue a new LISTEN when the timeout happens, but I'm not sure how to properly implement the continuation.
Here's my postgresql notification procedure:
CREATE FUNCTION article_insert_notify() RETURNS trigger AS $$
BEGIN
    NOTIFY "article_watcher";
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
The trigger:
CREATE TRIGGER article_insert_trigger
AFTER INSERT ON article
FOR EACH ROW EXECUTE PROCEDURE article_insert_notify();
And the node.js code:
var pg = require('pg'),
    pgConnection = "postgres://user:pass@localhost/db";

pg.connect(pgConnection, function(err, client) {
    client.query('LISTEN "article_watcher"');
    client.on('notification', function(data) {
        console.log(data.payload);
    });
});
How can I ensure a full-time LISTEN, or how could I catch those timeouts to reissue a LISTEN query? Or does a module other than node-postgres offer more appropriate tools for this?
I got an answer to my issue on the node-postgres repo. To quote Brianc:
pg.connect is used to create pooled connections. Using a connection pool connection for listen events really isn't supported or a good idea though.
[...]
To 'listen' a connection by definition must stay open permanently. For
a connection to stay open permanently it can never be returned to the
connection pool.
The correct way to listen in this case is to use a standalone client:
var pg = require('pg'),
    pgConnectionString = "postgres://user:pass@localhost/db";

var client = new pg.Client(pgConnectionString);
client.connect();
client.query('LISTEN "article_watcher"');
client.on('notification', function(data) {
    console.log(data.payload);
});
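To also cover the reconnection part of the question (a restarted server or a dropped connection), here is a minimal sketch along the same lines; the 5-second retry delay is arbitrary, and the connection string and channel are the ones from above:

var pg = require('pg');

function listen() {
    var client = new pg.Client("postgres://user:pass@localhost/db");
    var retried = false;
    function retry() {
        if (retried) return; // guard: a connect error and an 'error' event may both fire
        retried = true;
        client.end();             // discard the broken client...
        setTimeout(listen, 5000); // ...and start over with a fresh one
    }
    client.on('error', retry); // fires when an established connection is lost
    client.connect(function(err) {
        if (err) return retry(); // could not connect; retry later
        client.query('LISTEN "article_watcher"');
        client.on('notification', function(data) {
            console.log(data.payload);
        });
    });
}

listen();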
LISTEN is supposed to last for the session lifetime, or until you issue UNLISTEN. So as long as your code is running, notifications should be delivered. Note that, IIRC, PostgreSQL makes no promise to deliver one notification per NOTIFY: if you have many inserts, identical notifications may be folded together and delivered as a single NOTIFY. I'm not sure about 9.1; around that version they introduced the NOTIFY payload, and since notifications with distinct payloads are delivered separately, the coalescing may matter less there.
A permanent connection in pg-promise:

db.connect({direct: true})
    .then(...)
    .catch(...);
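For a fuller picture, here is a sketch of a resilient listener along the lines of the pg-promise documentation; the channel name is taken from the question above, and the option and context names (direct, onLost, e.client) follow the docs as I recall them, so check your pg-promise version:

var pgp = require('pg-promise')();
var db = pgp('postgres://user:pass@localhost/db');

function listen() {
    db.connect({direct: true, onLost: function(err, e) {
        // the direct connection was lost; detach the handler and reconnect
        e.client.removeListener('notification', onNotification);
        setTimeout(listen, 5000);
    }})
        .then(function(conn) {
            conn.client.on('notification', onNotification);
            return conn.none('LISTEN "article_watcher"');
        })
        .catch(function() {
            setTimeout(listen, 5000); // initial connect failed; retry
        });
}

function onNotification(data) {
    console.log(data.payload);
}

listen();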
I have a multi-tenant application, i.e., each customer has their own database.
My stack is NestJS with MongoDB, and to route HTTP requests to the right database I use the lib nestjs-tenancy, which scopes the connection based on the request subdomain.
But this lib does not work when I need to execute something asynchronous, like a queue job (bull). So in the queue consumer I create a new connection to the database I need to access, extracting the database info from the queue job data:
try {
    await mongoose.connect(
        `${this.configService.get('database.host')}/${
            job.data.tenant
        }?authSource=admin`,
    );
    const messageModel = mongoose.model(
        Message.name,
        MessageSchema,
    );
    // do a lot of stuff with messageModel
    await messageModel.find();
    await messageModel.save();
    // ...
} finally {
    await mongoose.connection.close();
}
I have two different jobs that may run at the same time for the same database.
I'm noticing that I sometimes get errors about the connection, like:
MongoExpiredSessionError: Cannot use a session that has ended
MongoNotConnectedError: Client must be connected before running operations
So I'm certainly doing something wrong. PS: I use await in all my operations.
Any clue what it could be?
Should I use const conn = mongoose.createConnection() instead of mongoose.connect()?
Should I always close the connection, or leave it "open"?
EDIT:
After some testing, I'm sure I can't use mongoose.connect(). Changing it to mongoose.createConnection() solved the issues. But I'm still unsure whether I need to close the connection. Closing it, I know I'm not overloading Mongo with a lot of connections, but at the same time every job then creates a new connection, and I have hundreds of jobs running at once for different tenants...
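For illustration, a minimal per-job sketch of the createConnection() approach; databaseHost stands in for this.configService.get('database.host'), and Message/MessageSchema are the asker's own:

const mongoose = require('mongoose');

async function handleJob(job, databaseHost) {
    // each job gets its own isolated connection, so two parallel jobs
    // can no longer close each other's session mid-operation
    const conn = mongoose.createConnection(
        `${databaseHost}/${job.data.tenant}?authSource=admin`
    );
    try {
        const messageModel = conn.model(Message.name, MessageSchema);
        await messageModel.find();
        // ... do the rest of the work ...
    } finally {
        await conn.close(); // safe: nothing else shares this connection
    }
}

As for closing: a common compromise is to cache one connection per tenant (e.g. in a Map keyed by job.data.tenant) and reuse it across jobs, so you neither open a connection per job nor let the connection count grow without bound.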
I use ORMLite in a solution made up of a server and clients.
On the server side I use PostgreSQL; on the client side, SQLite. In code I use the same ORMLite methods, without caring which DB is being managed (PostgreSQL or SQLite). I also use pooled connections.
I don't keep connections open: when I need an SQL query, ORMLite takes care of opening and closing the connection.
Sometimes I use the following code to perform a long background operation on the server side, i.e. on the PostgreSQL DB:
final ConnectionSource OGGETTO_ConnectionSource = ...... ;
final DatabaseConnection OGGETTO_DatabaseConnection =
        OGGETTO_ConnectionSource.getReadOnlyConnection( "tablename" ) ;
OGGETTO_DAO.setAutoCommit(OGGETTO_DatabaseConnection, false);
// do long operation with SQL queries
OGGETTO_DAO.commit(OGGETTO_DatabaseConnection);
OGGETTO_DAO.setAutoCommit(OGGETTO_DatabaseConnection, true);
I noted that the number of open connections kept increasing, until after some time it was so large that the server stopped working (SQLException: "too many clients connected to DB").
I discovered that this is due to the code snippet above: it seems that after this snippet the connection is not closed and remains open.
Of course I cannot add "OGGETTO_ConnectionSource.close()" at the end, because that closes the whole pooled connection source.
If I add "OGGETTO_DatabaseConnection.close();" at the end, it doesn't work either; open connections continue to increase.
How to solve it?
I discovered that this is due to the code snippet above: it seems that after this snippet the connection is not closed and remains open.
Let's RTFM. Here are the javadocs for the ConnectionSource.getReadOnlyConnection(...) method. I will quote:
Return a database connection suitable for read-only operations. After you are done,
you should call releaseConnection(DatabaseConnection).
You need to do something like the following code:
DatabaseConnection connection = connectionSource.getReadOnlyConnection("tablename");
try {
    dao.setAutoCommit(connection, false);
    try {
        // do long operation with SQL queries
        ...
        dao.commit(connection);
    } finally {
        dao.setAutoCommit(connection, true);
    }
} finally {
    connectionSource.releaseConnection(connection);
}
BTW, this is approximately what the TransactionManager.callInTransaction(...) method does, although it has even more try/finally blocks to ensure that the connection's state is reset. You should probably switch to it. Here are the docs for ORMLite database transactions.
I'm trying to understand how a Java (client) application that communicates, through JDBC, with a PostgreSQL database (server) can "catch" the result produced by a query that is fired (via a trigger) whenever a record is inserted into a table.
So, to clarify: via JDBC I install a trigger procedure prepared to execute a query whenever a record is inserted into a given table, and this query's execution will produce an output (wrapped in a ResultSet, I suppose). My problem is that I have no idea how the client becomes aware of those results, which are produced asynchronously.
I wonder if JDBC supports any "callback" mechanism able to catch the results produced by a query fired by a trigger procedure on an "INSERT INTO table" event. And if there is no such "callback" mechanism, what is the best approach to achieve this result?
Thank you in advance :)
Triggers can't return a resultset.
There's no way to send such a result to the JDBC driver.
There are a few dirty hacks you can use to get results from a trigger to the client, but they're all exactly that. Things like:
DECLARE a cursor for the resultset, then send the cursor name as a NOTIFY payload, so the app can FETCH ALL FROM <cursorname>;
Create a TEMPORARY table and report the name via NOTIFY
It is more typical to append anything the trigger needs to communicate to the app to a table that exists for that purpose, and have the app SELECT from it after the operation that fired the trigger has run.
In most cases if you need to do this, you're probably using a trigger where a regular function is a better fit.
I have a postgresql-plsh function
CREATE OR REPLACE FUNCTION MDN_REG_LOCATE(MDN VARCHAR(50), CALLID VARCHAR(50)) RETURNS void AS '
#!/bin/sh
/home/infoobjects/Projects/java/execute.sh $1 $2 &
logger -t "data" "$1$2"
' LANGUAGE plsh;
and execute.sh calls a Java process (method) which takes 3 minutes to execute. I have made the call asynchronous by appending & to the execute.sh invocation.
My problem is that the PostgreSQL function still waits for the result and does not act asynchronously, even though the shell script itself is backgrounded: the logger call in the function above logs immediately after MDN_REG_LOCATE() is called, yet the function still waits the full 3 minutes for the process to finish. I don't know what I'm missing; please help me with this.
Thanks in advance.
Simply backgrounding a process isn't enough; it's still attached to its parent process. See this answer.
You're likely to be much better off reworking your long-running Java program to keep a persistent connection to the database, where it LISTENs for tasks. Your client (or a trigger, or whatever would normally invoke your PL/Sh function) sends a NOTIFY with the parameters as a payload, or (for older Pg versions) INSERTs a row into a queue table and then sends a NOTIFY to tell the listening client to look at the queue table.
The listening client can then run the background process with no worries about holding up the database. Best of all, the NOTIFY is transactional; it's only delivered when the transaction that sent it commits, at the time it commits.
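For illustration, a sketch of that listen-worker pattern as a small node.js process (the same idea works from Java or any other client language); the channel name and payload format are made up, and the script path is the asker's own:

var pg = require('pg'),
    spawn = require('child_process').spawn;

var client = new pg.Client('postgres://user:pass@localhost/db');
client.connect();
client.query('LISTEN "mdn_reg_locate"');

client.on('notification', function(msg) {
    // the payload carries the task parameters, e.g. "<MDN> <CALLID>"
    var args = msg.payload.split(' ');
    // the 3-minute job now runs outside any database session
    spawn('/home/infoobjects/Projects/java/execute.sh', args);
});

The database side then only has to run something like SELECT pg_notify('mdn_reg_locate', mdn || ' ' || callid), which returns immediately.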
Is there a way to check whether a database connection is open? I am trying to check the connection state, but it shows ConnectionState.Open even when the database is down.
private bool IsValidConnection(IDbConnection connection)
{
    return (connection != null && connection.State == ConnectionState.Open);
}
While I don't have an answer, I can tell you why stuff like this happens.
It basically happens with any protocol above TCP that does not send periodic heartbeats.
TCP provides no way to detect that the remote end is closed without sending data, and even then it can take considerable time to detect the failure of the remote end.
In an ideal world, the server would send a TCP FIN packet when it goes down, but this does not happen if someone yanks out the network cable, the server crashes hard, or an in-between NAT gateway/firewall silently drops the connection. The result is that the client does not know the server is gone; you have to send it something and assume the server is gone if you don't receive a response within a reasonable time, or if an error occurs (typically the first few send() calls after a server silently disappears won't error out).
So, if you want to make sure a DB connection is OK, issue a "select 1;" query. Some lower-level DB APIs may have a ping() / isAlive() or similar method that can be used for the same purpose.
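For illustration, the same idea from node.js (the asker's code is C#/ADO.NET, where the analogous fix is to run a trivial query and treat any exception as "connection dead"); the helper name is made up:

function isConnectionAlive(client, callback) {
    client.query('SELECT 1', function(err) {
        callback(!err); // usable only if the round-trip actually succeeded
    });
}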
That looks like the correct code to check if your DB connection is still open.
I would expect to see an exception thrown on your connection.Open() when the database is down. Is the database down before you open the connection or does it go down after the open?
A simple way would be to code a button with a query that retrieves some data from some table in the database, and catch any error that is returned.
That would show whether the connection is there or not.