I need to ping an HTTP service (via a GET or POST request) from a trigger every time an insert occurs in a Postgres table.
Is there any simple way to achieve that with a standard PostgreSQL installation?
In case there isn't, is there any way to do it with additional libraries?
You can do this with PL/perlu or PL/pythonu. However, I strongly recommend that you don't do it this way. DNS problems, problems with the remote server, and the like will cause your PostgreSQL backends to stall, severely disrupting database performance and possibly causing you to exhaust max_connections.
Instead, have the trigger insert the details of the change into a log table and send a NOTIFY. Have a client application LISTEN for the notification, read the records the trigger inserted into the log table, and make the appropriate HTTP requests.
The client should claim the requests from the log table one by one using SELECT ... FOR UPDATE, and DELETE each row once its request completes successfully.
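A minimal sketch of that pattern, with the trigger side as SQL strings and the listener in Python. The table name http_request_queue, the channel name http_queue, and the psycopg2-style connection object are illustrative assumptions, not part of the answer:

```python
import select  # used to wait on the connection's socket for NOTIFY wakeups

# Illustrative DDL: a queue table plus a trigger that logs the request
# and wakes any listener. Run this once against the database.
SETUP_SQL = """
CREATE TABLE IF NOT EXISTS http_request_queue (
    id      bigserial PRIMARY KEY,
    url     text NOT NULL,
    created timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION enqueue_http_ping() RETURNS trigger AS $$
BEGIN
    INSERT INTO http_request_queue (url)
    VALUES ('https://example.com/ping?id=' || NEW.id);
    NOTIFY http_queue;  -- wake the listening client
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER watched_table_ping
AFTER INSERT ON watched_table
FOR EACH ROW EXECUTE FUNCTION enqueue_http_ping();
"""

# Claim one queued row at a time; SKIP LOCKED lets several workers cooperate.
CLAIM_SQL = """
SELECT id, url FROM http_request_queue
ORDER BY id LIMIT 1
FOR UPDATE SKIP LOCKED
"""

def drain_queue(conn, do_request):
    """Process queued rows one by one, deleting each row only on success."""
    while True:
        with conn.cursor() as cur:
            cur.execute(CLAIM_SQL)
            row = cur.fetchone()
            if row is None:
                conn.commit()
                return
            row_id, url = row
            do_request(url)  # the actual HTTP GET/POST happens here
            cur.execute("DELETE FROM http_request_queue WHERE id = %s", (row_id,))
        conn.commit()

def listen_forever(conn, do_request):
    """LISTEN on the channel and drain the queue whenever a NOTIFY arrives."""
    with conn.cursor() as cur:
        cur.execute("LISTEN http_queue")
    conn.commit()
    while True:
        drain_queue(conn, do_request)
        if select.select([conn], [], [], 60.0) != ([], [], []):
            conn.poll()            # psycopg2-specific: fetch pending notifies
            del conn.notifies[:]   # we only care that something happened
```

Because the HTTP call happens outside any database transaction, a slow or unreachable service stalls only this helper process, not the backends.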
See this answer about sending email from within the DB for some more detail.
I need to issue a long sequence of REST calls to the same server (let's call it myapi.com). At the moment, I am using the Rust library reqwest as follows:
I create a reqwest::Client with all default settings.
For each REST call:
I use client.post("https://myapi.com/path/to/the/api") to create a reqwest::RequestBuilder.
I configure the RequestBuilder to obtain a reqwest::Request.
I send() the Request, read the reqwest::Response.
I drop everything except the Client, start again.
I read in the docs that reqwest is supposed to pool connections within the same Client. Given that I always reuse the same Client, I would expect the first API call to take somewhat longer (owing to the initial TCP and TLS handshakes), but I consistently observe quite high latency across all requests. So I am wondering whether connections are reused at all, or re-established every time. If they are not reused, how can I make requests recycle the same connection? I suspect latency would drop drastically if I could save myself some round trips.
How can one write code in an Azure PostgreSQL server that sends email?
Azure PostgreSQL server is fully managed by Azure, and there is no option to install extensions apart from the limited set already available.
You usually shouldn't send emails from the database. If the email sending becomes slow, you can get cascading locks. And if it fails, then what should you do? You can store the email to be sent in a table, then have a process written in your favorite language read the table and do the sending. Or you can set up LISTEN/NOTIFY to do the same thing. Or you could combine them, if you want the transactionality of a separate table but don't want to poll it periodically.
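A hedged sketch of the table-plus-external-process option, assuming a queue table named outgoing_email and Python's standard smtplib for delivery (both are illustrative choices, not Azure specifics):

```python
import smtplib
from email.message import EmailMessage

def build_message(recipient, subject, body):
    """Turn one queued row into a MIME message."""
    msg = EmailMessage()
    msg["To"] = recipient
    msg["From"] = "app@example.com"  # assumed sender address
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_pending(conn, smtp_host="localhost"):
    """Poll the queue table and send each email, deleting rows that succeed."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT id, recipient, subject, body FROM outgoing_email "
            "ORDER BY id FOR UPDATE SKIP LOCKED"
        )
        rows = cur.fetchall()
        with smtplib.SMTP(smtp_host) as smtp:
            for row_id, recipient, subject, body in rows:
                smtp.send_message(build_message(recipient, subject, body))
                cur.execute("DELETE FROM outgoing_email WHERE id = %s", (row_id,))
    conn.commit()
```

Run send_pending from cron or a small daemon; rows are only deleted after the SMTP server accepts the message, so a failed send is retried on the next pass.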
Another option of course is not to use hosting solutions which prevent you from doing what you want.
Is it proper programming practice/software design to have a REST API call another REST API? If not, what would be the recommended way of handling this scenario?
If I understand your question correctly, then YES, it is extremely common.
You are describing the following, I presume:
Client makes an API call to Server-1, which in the process of servicing this request makes another request to API Server-2, takes the response from Server-2, does some reformatting or data extraction, and packages that up to respond back to the Client?
This sort of thing happens all the time. The downside is that unless the connection between Server-1 and Server-2 is very low latency (e.g. they are on the same network) and the bandwidth used is small, the Client will have to wait quite a while for the response. Obviously there can be caching between the two back-end servers to help mitigate this.
It is pretty much the same as Server-1 making a SQL query to a database in order to answer the request.
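As a toy sketch of that synchronous pattern (all names here are hypothetical; fetch_from_server2 stands in for a real HTTP client call):

```python
def handle_client_request(item_id, fetch_from_server2):
    """Server-1 endpoint: delegate to Server-2, then reformat the result."""
    raw = fetch_from_server2(item_id)  # blocking call out to Server-2
    return {                           # extract/reshape the payload for the Client
        "id": item_id,
        "name": raw.get("display_name", "unknown"),
    }
```

The Client only ever sees Server-1's response; from its point of view, Server-2 is an implementation detail, just as a database would be.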
An alternative interpretation of your question might be that the Client is asking Server-1 to queue an operation that Server-2 would pick up and execute asynchronously. This also is very common (it's how Google crawls your website, for instance). This scenario would have Server-1 respond to Client immediately without needing to wait for the results of the operation undertaken by Server-2. A message queue or database table is usually used as an intermediary between servers in this case.
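A toy sketch of this asynchronous variant, using an in-process queue as a stand-in for the message queue or database table that would sit between real servers:

```python
import queue
import uuid

jobs = queue.Queue()  # stand-in for a message broker or queue table
results = {}

def server1_accept(payload):
    """Server-1: enqueue the work and respond to the Client at once."""
    job_id = str(uuid.uuid4())
    jobs.put((job_id, payload))
    return {"job_id": job_id, "status": "accepted"}  # Client is not kept waiting

def server2_worker():
    """Server-2: pick up queued jobs and execute them asynchronously."""
    while not jobs.empty():
        job_id, payload = jobs.get()
        results[job_id] = payload.upper()  # stand-in for the real operation
        jobs.task_done()
```

The Client can later poll Server-1 (or receive a callback) with the returned job_id to fetch the finished result.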
Another approach is to have your REST API(1) store the request details in a queue table. Add a backend process that checks that queue table every, say, 100 milliseconds; that backend is the one that calls the other REST API(2).
In your REST API(1), create a loop that checks whether the transaction in the queue has been processed. If it has, fetch the result details and return them to the client; if not, keep looping until the process is done.
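That waiting loop can be sketched as follows; a timeout is added so API(1) does not spin forever, and the function names are illustrative:

```python
import time

def wait_for_result(is_processed, get_result, timeout=30.0, interval=0.1):
    """Poll every `interval` seconds until is_processed() is true,
    then return get_result(); give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_processed():
            return get_result()
        time.sleep(interval)  # the 100 ms check interval described above
    raise TimeoutError("queued request was not processed in time")
```

In practice is_processed would query the queue table for the row's status, and get_result would read the stored response from API(2).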
I'm trying to clear all of the clients and alerts from Sensu, but they keep coming back.
With large numbers of clients, Uchiwa is unable to efficiently or reliably delete them all.
I have also tried deleting all of the keys in Redis while sensu-api and sensu-server services are stopped, but once they are restarted, all of the clients come back, including clients that don't exist and are failing their keepalive checks.
Do I have to empty all of the RabbitMQ queues as well?
Use Uchiwa, the API, or the CLI to remove the client(s). If you want to delete all clients, open Uchiwa -> Clients, select all clients, and then choose Delete from the Actions dropdown.
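If Uchiwa cannot handle the volume, a hedged sketch of doing the same through the Sensu Core API (GET /clients, then DELETE /clients/:name; localhost:4567 is the default sensu-api address and may differ in your setup):

```python
import json
import urllib.request

API = "http://localhost:4567"  # assumed sensu-api address; adjust as needed

def client_names(raw_json):
    """Extract client names from a GET /clients response body."""
    return [c["name"] for c in json.loads(raw_json)]

def delete_all_clients():
    """List every registered client, then delete each one by name."""
    with urllib.request.urlopen(API + "/clients") as resp:
        names = client_names(resp.read())
    for name in names:
        req = urllib.request.Request(API + "/clients/" + name, method="DELETE")
        urllib.request.urlopen(req)
    return names
```

Note that clients whose agents are still running (and still publishing keepalives) will re-register themselves shortly after deletion.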
If I write a function for PostgreSql using PLV8, can I call an url with a get/post request from my PLV8 function?
No, as explained by Milen; use an untrusted PL like PL/perlu, PL/pythonu, PL/javau, etc.
Doing this has the same problem as sending email from a trigger, in that unexpected issues like DNS configuration problems could leave all your database connections busy waiting on HTTP connection attempts so nothing else can get any work done.
Instead, use LISTEN and NOTIFY to wake an external helper script that uses a queue table to manage the requests, as explained in the answer linked above.
No, according to this page and my understanding of "trusted":
PL/v8 is a trusted procedural language that is safe to use, fast to run and easy to develop, powered by V8 JavaScript Engine.