I have a problem with (or misunderstanding of) the Postgres trigger -> PERFORM pg_notify -> capture in PHP flow.
My platform is PHP 5.6 on CentOS with Postgres.
I have to add a trigger on a notifications table, and whenever a new notification is added to that table an SMS has to be sent to the corresponding user.
So I added a trigger function like this:
CREATE FUNCTION xxx_sms_trigger() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
DECLARE
BEGIN
    PERFORM pg_notify('sms', NEW.id || '');
    RETURN NEW;
END;
$$;
Inserting new notifications from PHP works fine.
Now I have a separate file where I capture the notification with pg_get_notify, but I don't fully understand this flow: how can Postgres trigger some unknown PHP script that isn't running as a service, and how can I make this work?
You do need a PHP script running as a service if PHP is going to be the language that receives the notifications you produce. As @FelipeRosa says, that script will need to connect to the database, then issue at least one command:
listen sms;
There is a good example of LISTEN on the PHP manual page (http://www.php.net/manual/en/function.pg-get-notify.php).
I haven't coded in PHP in a few years. Recently I implemented this logic in Python, but it should be about the same. I did a little research, and I can find select() in PHP, but it seems the Postgres socket descriptor is not available in PHP, so you can't use select() unless you can get hold of that descriptor.
Anyway, that thread is here (http://postgresql.1045698.n5.nabble.com/Is-there-any-way-to-listen-to-NOTIFY-in-php-without-polling-td5749888.html). There is a polling example for the PHP script side near the bottom. You can issue the LISTEN as described above (once), then put your pg_get_notify() in a loop with a sleep for the amount of time you are willing to queue notifications.
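For illustration, here is a minimal sketch of that polling loop, assuming a local database named mydb and the sms channel from the question (adjust the connection string to your setup):
<?php
// Minimal polling listener (sketch): connect, LISTEN once, then poll.
$conn = pg_connect("dbname=mydb");
pg_query($conn, "LISTEN sms;");
while (true) {
    $notify = pg_get_notify($conn);
    if ($notify) {
        // The payload holds whatever the trigger passed to pg_notify(), e.g. the row id
        print_r($notify);
    }
    sleep(2); // how long you are willing to queue notifications
}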
Just FWIW, in Python I don't poll; I use select.select(pg_conn, ...), and when data arrives on the Postgres connection I check it for notifications, so there is no polling. It would be nice if you could find a way to use select() in PHP instead of looping.
-g
Here's a cohesive example that registers interest in a table insertion, waits for a notification (or a timeout), and responds to the caller. We use a timestamp preceded by the letter 'C' as the notification channel, since Postgres requires the channel name to be a proper identifier.
Postgres SQL
/* We want to know when items of interest get added to this table.
   Asynchronous insertions are possible from a different process or server. */
DROP TABLE IF EXISTS History;
CREATE TABLE History (
    HistoryId   INT PRIMARY KEY,
    MYKEY       CHAR(17),
    Description TEXT,
    TimeStamp   BIGINT
);
/* Table of registered interest in a notification */
DROP TABLE IF EXISTS Notifications;
CREATE TABLE Notifications (
    NotificationId SERIAL PRIMARY KEY,  -- generated automatically; the PHP below only supplies MYKEY and Channel
    Channel        VARCHAR(20),
    MYKEY          CHAR(17)
);
/* Function to process a single insertion to History table */
CREATE OR REPLACE FUNCTION notify_me()
    RETURNS trigger AS
$BODY$
DECLARE
    ch varchar(20);
BEGIN
    FOR ch IN
        SELECT DISTINCT Channel FROM Notifications
        WHERE MYKEY = NEW.MYKEY
    LOOP
        /* NOTIFY ch, 'from notify_me trigger'; */
        EXECUTE 'NOTIFY C' || ch || ', ' || quote_literal('from notify_me') || ';';
        DELETE FROM Notifications WHERE Channel = ch;
    END LOOP;
    RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql;
/* Trigger to process all insertions to History table */
DROP TRIGGER IF EXISTS HistNotify ON History CASCADE;
CREATE TRIGGER HistNotify AFTER INSERT ON History
    FOR EACH ROW EXECUTE PROCEDURE notify_me();
PHP code
// $conn is a PDO connection handle to the Postgres DB
// $MYKEY is a key field of interest
$TimeStamp = time(); // UNIX time (seconds since 1970) of the request
$timeout = 120; // Maximum seconds before responding
// Register our interest in new history log activity
$rg = $conn->prepare("INSERT INTO Notifications (MYKEY, Channel) VALUES (?,?)");
$rg->execute(array($MYKEY, $TimeStamp));
// Wait until something to report
$conn->exec('LISTEN C'.$TimeStamp.';'); // Prepend ‘C’ to get notification channel
$conn->exec('COMMIT;'); // Postgres may need this to start listening
$notify = $conn->pgsqlGetNotify(PDO::FETCH_ASSOC, $timeout*1000); // Convert from sec to ms; returns false on timeout
// Unregister our interest
$st = $conn->prepare("DELETE FROM Notifications WHERE Channel=?");
$st->execute(array($TimeStamp));
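For completeness, here is a sketch of the producer side that would fire the trigger. The table and column names come from the SQL above; the second PDO connection handle ($conn2) and the inserted values are hypothetical:
// Hypothetical producer (possibly a different process or server):
// inserting into History fires the HistNotify trigger, which notifies
// every channel registered in Notifications for the same MYKEY.
$ins = $conn2->prepare("INSERT INTO History (HistoryId, MYKEY, Description, TimeStamp) VALUES (?,?,?,?)");
$ins->execute(array(42, $MYKEY, 'something interesting happened', time()));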
Here is an example of how to migrate the "Python way" mentioned by @Greg to PHP. After starting the script below, open a new connection to the Postgres DB and run NOTIFY "test", 'I am the payload'.
Sources:
http://initd.org/psycopg/docs/advanced.html#asynchronous-notifications
https://gist.github.com/chernomyrdin/96812377f1ac5bf567b8
<?php
$dsn = 'user=postgres dbname=postgres password=postgres port=5432 host=localhost';
$connection = \pg_connect($dsn);
if (\pg_connection_status($connection) === \PGSQL_CONNECTION_BAD) {
    throw new \Exception(
        sprintf('The database connect failed: %s', \pg_last_error($connection))
    );
}
\pg_query($connection, 'LISTEN "test"');
while (true) {
    $read   = [\pg_socket($connection)];
    $write  = null;
    $except = null;
    // Block until the Postgres socket becomes readable or 60 seconds pass
    $num = \stream_select($read, $write, $except, 60);
    if ($num === false) {
        throw new \Exception('Error in obtaining the stream resource');
    }
    if (\pg_connection_status($connection) !== \PGSQL_CONNECTION_OK) {
        throw new \Exception('pg_connection_status() is not PGSQL_CONNECTION_OK');
    } elseif ($num) {
        $notify = \pg_get_notify($connection);
        if ($notify !== false) {
            var_dump($notify);
        }
    }
}
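To test it, something along these lines (a hypothetical second connection; you can equally run the NOTIFY from psql) sends the notification the listening script is waiting for:
<?php
// Hypothetical sender: raises a notification on the "test" channel.
$sender = \pg_connect('user=postgres dbname=postgres password=postgres port=5432 host=localhost');
\pg_query($sender, "NOTIFY \"test\", 'I am the payload'");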
According to this, you should first make the application listen to the desired channel by issuing the command LISTEN <channel> (via pg_query, for example) before you can deliver notifications to the application.
Here's a little example.
The PHP script (I named it teste.php; it's the same as the one at http://php.net/manual/pt_BR/function.pg-get-notify.php):
<?php
$conn = pg_pconnect("dbname=mydb");
if (!$conn) {
    echo "An error occurred.\n";
    exit;
}
// Listen once on the channel used by the trigger (pg_notify('sms', ...))
pg_query($conn, 'LISTEN sms;');
while (true) {
    $notify = pg_get_notify($conn);
    if (!$notify) {
        echo "No messages\n";
        // change this as you want
    } else {
        print_r($notify);
        // your code here
    }
    sleep(2);
}
Keep the script running (I assume you are using Linux):
php teste.php > log.txt 2>&1 &
Note that:
2>&1 redirects both standard output and standard error into the log.txt file.
& runs the whole thing in the background
You can follow the log.txt with this command:
tail -f log.txt
Related
If I call a stored procedure using JdbcIO.Write, is it possible to capture the ID (primary key) if the stored procedure returns this data?
public JdbcIO.Write<MyObject> writeMyObject() {
    final String UPSERT_MY_OBJECT = "EXEC [MySchema].[UspertMyObject] ?,?,?";
    // If my stored procedure returns the generated or existing ID
    // is it possible to update the object I'm writing with the ID?
    return JdbcIO.<MyObject>write()
        .withDataSourceConfiguration(myDataSourceConfig)
        .withStatement(UPSERT_MY_OBJECT)
        .withPreparedStatementSetter((JdbcIO.PreparedStatementSetter<MyObject>) (myObject, ps) -> {
            ps.setInt(1, myObject.getFieldOne());
            ps.setString(2, myObject.getFieldTwo());
            ps.setString(3, myObject.getFieldThree());
        });
}
I don't think it's possible, but as a workaround you can wait for the write to finish (with the Wait transform; see the example there) and then read the records back from the database.
I would like to safely drop a Firebird table. I have 3 transactions: one to recreate the table, one to do something with the table (just inserting a single row to keep it simple), and the last one to drop the table.
If all these transactions are executed using a single connection, this works. If I use different connections, then the drop command fails with:
lock conflict on no wait transaction
unsuccessful metadata update
object TABLE "DEMO" is in use
private static void Test() {
    using var conn1 = new FbConnection(ConnectionString);
    using var conn2 = new FbConnection(ConnectionString);
    using var conn3 = new FbConnection(ConnectionString);
    conn1.Open();
    conn2.Open();
    conn3.Open();

    ExecuteTxn(conn1, cmd => {
        cmd.CommandText = "recreate table demo (id int primary key)";
        cmd.ExecuteNonQuery();
    });

    ExecuteTxn(conn2, cmd => {
        cmd.CommandText = "insert into demo (id) values (1)";
        cmd.ExecuteNonQuery();
    });

    ExecuteTxn(conn3, cmd => {
        cmd.CommandText = "drop table demo";
        cmd.ExecuteNonQuery();
    });
}

private static void ExecuteTxn(FbConnection conn, Action<FbCommand> todo) {
    using (var txn = conn.BeginTransaction())
    using (var cmd = conn.CreateCommand()) {
        cmd.Transaction = txn;
        todo(cmd);
        txn.Commit();
    }
}
I realized that changing the transaction options like this:
txn = conn.BeginTransaction(new FbTransactionOptions { TransactionBehavior = FbTransactionBehavior.Wait })
seems to help, but I'm not sure whether this is the right thing to do or just a coincidence...
Using Firebird 3.0.6, FirebirdSql.Data.FirebirdClient.dll 7.5.0.0
As far as I understand it, the problem has to do with how Firebird caches certain metadata, which might result in existence locks being retained, which will prevent deletion of the object. In addition, it is possible - this is a guess! - that the Firebird ADO.net provider retains the statement handle with the insert statement prepared, which will also result in an existence lock being retained.
Executing in a WAIT transaction (optionally with a timeout) is considered an appropriate workaround by the Firebird core developers.
For reference, see the following tickets:
CORE-3766 - Transaction can`t change metadata if it is run in no_wait and there is another connect that once had queried these metadata
CORE-6382 - Triggers accessing a table prevent concurrent DDL command from dropping that table
In certain cases, switching from Firebird ClassicServer or Firebird SuperClassic to Firebird SuperServer can also prevent this problem.
However, if you want a more in-depth explanation, it might be worthwhile to ask this question on the firebird-devel mailing list.
If the handler function passed to tcp_server() from the socket module runs as a fiber, is it possible to communicate with each tcp_connection via fiber.channel?
Yes, it is.
#!/usr/bin/tarantool
local fiber = require('fiber')
local socket = require('socket')
local clients = {}
function rc_handle(s)
    -- You can save the socket reference in some table
    clients[s] = true
    -- You can create a channel.
    -- I recommend attaching it to the socket,
    -- so it'll be easier to collect garbage.
    s.channel = fiber.channel()
    -- You can also get the reference to the handling fiber.
    -- It'll help you to tell alive clients from dead ones.
    s.fiber = fiber.self()
    s:write(string.format('Message for %s:%s: %s',
        s:peer().host, s:peer().port, s.channel:get()
    ))
    -- Don't forget to unref the client if it's done manually,
    -- or you could make the clients table a weak table.
    clients[s] = nil
end
server = socket.tcp_server('127.0.0.1', 3003, {
    name = 'srv',
    handler = rc_handle,
})
function greet_all(msg)
    -- So you can broadcast your message to all the clients
    for s, _ in pairs(clients) do
        s.channel:put(msg)
    end
end
require('console').start()
Of course, this snippet is far from perfect, but I hope it'll help you get the work done.
What I am trying to do is call a function whenever there is an update to the DB. I use the 'Committed transaction information' feature to get the status of the DB update.
pg_xact_commit_timestamp(xid)
SELECT pg_xact_commit_timestamp(xmin) ts FROM "TABLE_NAME" WHERE pg_xact_commit_timestamp(xmin) IS NOT NULL;
The problem: after the first update to the DB, pg_xact_commit_timestamp(xmin) will never be empty again. Is there a way, once the function has executed, to set the commit_timestamp list back to empty?
My requirement: to execute a function only when there is a new update (without relying on any information about the previous update).
I understand that this is an overuse of a built-in function.
E.g.
Connection DBconn = DBConnection.connect(url, user, password);
Boolean DBStatus = checkDBStatus(DBconn, configuration);
if (DBStatus) {
    System.out.println("DB updated recently");
    feedbackStatusJSON = runBusinessLogic();
} else {
    System.out.println("DB not updated");
}
I defined a trigger in PostgreSQL 9.1 that should fire whenever a record is updated, using:
CREATE OR REPLACE FUNCTION check() RETURNS TRIGGER AS $$
BEGIN
    PERFORM doubletEngine.checkForDoublet(NEW.id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
DROP TRIGGER IF EXISTS check_upd ON people;
CREATE TRIGGER check_upd AFTER UPDATE ON people
    FOR EACH ROW
    WHEN (OLD.* IS DISTINCT FROM NEW.*)
    EXECUTE PROCEDURE check();
The method doubletEngine.checkForDoublet() was introduced to PL/Java 1.4.3 using
CREATE OR REPLACE FUNCTION doubletEngine.checkForDoublet(varchar) RETURNS void AS 'DoubletSearcher.checkForDoublet' LANGUAGE java;
Inside the method checkForDoublet(), I try to connect to the database using
Class.forName("org.postgresql.Driver");
String url = "jdbc:postgresql://127.0.0.1:5432/db";
c = DriverManager.getConnection(url, "postgres", "postgres");
if (c == null) {
    throw new RuntimeException("Could not connect to server");
}
...but c is still null. Calling this code directly from my IDE works perfectly, but when the trigger fires, it only throws the RuntimeException. Is there anything I'm missing?
According to the PL/Java documentation, the connection should be obtained as follows:
Connection conn = DriverManager.getConnection("jdbc:default:connection");