I am struggling to read my Postgres NOTICE messages in my C++ API. I can only read EXCEPTION messages using the function PQresultErrorMessage(PGresult), but not lower-level messages.
PQresultErrorField(res, PG_DIAG_SEVERITY) returns a null pointer.
How do I read NOTICE and other low-level messages?
(Using PostgreSQL 9.2)
Set up a notice receiver or notice processor using PQsetNoticeReceiver / PQsetNoticeProcessor. Both install callbacks that are invoked when a notice or warning message arrives asynchronously from the server. Note that this may happen before, during, or after processing of query data.
It's safe to assume that once all query results are returned (PQexec or whatever has returned) and you've called PQconsumeInput to make sure nothing else is waiting, all notices for the last command have been received. The PQconsumeInput call shouldn't really be necessary; it's just there to be cautious.
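For example, a minimal receiver might look like this (a sketch; the conninfo string and the DO block are placeholders):

#include <libpq-fe.h>
#include <cstdio>

// Called once per NOTICE/WARNING; res carries the same diagnostic
// fields as an error result, so PQresultErrorField works on it.
static void noticeReceiver(void *arg, const PGresult *res)
{
    (void)arg;
    const char *severity = PQresultErrorField(res, PG_DIAG_SEVERITY);
    const char *message = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY);
    std::fprintf(stderr, "[%s] %s\n", severity ? severity : "?", message ? message : "");
}

int main()
{
    PGconn *conn = PQconnectdb("dbname=test"); // placeholder conninfo
    if (PQstatus(conn) != CONNECTION_OK) return 1;

    PQsetNoticeReceiver(conn, noticeReceiver, nullptr);

    // Any command that raises a notice now invokes the callback:
    PGresult *res = PQexec(conn, "DO $$ BEGIN RAISE NOTICE 'hello'; END $$;");
    PQclear(res);
    PQfinish(conn);
    return 0;
}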
See the documentation for libpq.
I have been looking for a way to design my API so it will be idempotent, part of which is making my POST request routes idempotent, and I stumbled upon this article.
(If I have misunderstood something, please correct me!)
It gives a good explanation of the general idea, but what's lacking are examples of how the author actually implemented it.
Someone asked the author how he would guarantee atomicity, so he added a code example.
Essentially, his code example covers two cases.
The flow when everything goes well:
Open a transaction on the DB that holds the data that needs to be changed by the POST request
Inside this transaction, execute the needed change
Set the idempotency key and its value, which is the response to the client, inside the Redis store
Set an expire time on that key
Commit the transaction
The flow when something inside the code goes wrong:
An exception occurs inside the flow of the function
A rollback of the transaction is performed
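In nodejs-style pseudocode, that flow condenses to something like this (a sketch; client, applyChange, redisClient, and KEY_EXPIRE_TIME are hypothetical stand-ins, not the article's literal code):

async function handlePost(idempotencyKey, payload) {
  const session = client.startSession();  // 1. open a transaction on the DB
  session.startTransaction();
  try {
    const response = await applyChange(payload, session); // 2. execute the change
    await redisClient.setAsync(idempotencyKey,            // 3. key -> response in Redis
      JSON.stringify(response), 'EX', KEY_EXPIRE_TIME);   // 4. with an expire time
    // <-- a crash right here leaves the key in Redis while nothing is committed
    await session.commitTransaction();                    // 5. commit
    return response;
  } catch (err) {
    await session.abortTransaction(); // an exception rolls back the DB only
    throw err;
  }
}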
Notice that the transaction that is opened is for a certain DB; let's call it A.
However, it does not cover the Redis store he also uses, meaning the rollback of the transaction will only affect DB A.
So it covers the case where something happens inside the code that makes it impossible to complete the transaction.
But what will happen if the machine the code runs on crashes after it has already set the expire time on the key and is about to commit the transaction?
In that case, the key will be available in the Redis store, but the transaction will not have been committed.
This results in a situation where the service is sure the needed changes have already happened when in fact they haven't; the machine failed before it could finish them.
I need to design the API in such a way that if either the change to the data or the setting of the key and value in Redis fails, both are rolled back.
What is the solution to this problem?
How can I guarantee atomicity of changing the needed data in one database while at the same time setting the key and the needed response in Redis, and if either of them fails, roll back both? (Including the case where a machine crashes in the middle of the actions.)
Please add a code example when answering! I'm using the same technologies as in the article (nodejs, redis, and mongo for the data itself).
Thanks :)
Per the code example you shared in your question, the behavior you want is to make sure there was no crash on the server between the moment the idempotency key is set in Redis, marking the transaction as already done, and the moment the transaction is in fact persisted in your database.
However, when using Redis and another database together you have two independent points of failure, and two actions executed sequentially at different moments (and even if they were executed asynchronously at the same time, there is no guarantee the server won't crash before both complete).
What you can do instead is include in your transaction an INSERT statement into a table holding relevant information about this request, including the idempotency key. Since the ACID properties ensure atomicity, either all statements in the transaction execute successfully or none of them do, which means your idempotency key will be present in your database if and only if the transaction succeeded.
You can still use Redis, as it will provide faster lookups than your database.
A code example is provided below, but it may be worth thinking about how relevant a failure between the Redis write and the database commit really is to your business (could it be handled with another strategy?) to avoid over-engineering.
async function execute(idempotentKey) {
  let db;
  try {
    // Append an insert into the executions table to the statement,
    // so the idempotency record is persisted atomically with the transaction.
    const query = `
      UPDATE firsttable SET ...;
      UPDATE secondtable SET ...;
      INSERT INTO executions (idempotent_key, success) VALUES (:idempotent_key, true);
    `;
    db = await dbConnection();
    await db.beginTransaction();
    // bind the key (the exact binding syntax depends on your driver)
    await db.execute(query, { idempotent_key: idempotentKey });
    // set the key on Redis with the value "false" (commit not yet confirmed)
    await redisClient.setAsync(idempotentKey, false, 'EX', process.env.KEY_EXPIRE_TIME);
    /*
      If the server crashes exactly here, the idempotent key will be in Redis
      with false as its value. In that case there are two possibilities: the
      commit to the database succeeded or it didn't. If on the next request
      Redis provides a false value, query the database to verify whether the
      transaction was executed.
    */
    await db.commit();
    // You can now set the key's value to true, meaning the commit succeeded
    // and you won't need to query the database to verify it.
    await redisClient.setAsync(idempotentKey, true);
  } catch (err) {
    if (db) await db.rollback();
    throw err;
  }
}
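For completeness, the read side implied by the comment above might look like this (a sketch reusing the hypothetical dbConnection and redisClient helpers; the result shape of db.execute depends on your driver):

async function alreadyExecuted(idempotentKey) {
  const cached = await redisClient.getAsync(idempotentKey);
  if (cached === 'true') return true;  // commit was confirmed earlier
  if (cached === 'false') {
    // Commit status unknown: fall back to the executions table.
    const db = await dbConnection();
    const rows = await db.execute(
      'SELECT 1 FROM executions WHERE idempotent_key = :idempotent_key',
      { idempotent_key: idempotentKey }
    );
    return rows.length > 0;
  }
  return false;  // no key in Redis: never attempted, or the key expired
}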
I'd like to forward all notifications from PostgreSQL into task queues in RabbitMQ named the same as the channel given in NOTIFY channel. Does PostgreSQL have something that would act like LISTEN *?
Inspecting the source for Skeeter, it seems that PQnotifies might be of interest. PostgreSQL's documentation on libpq also mentions PQconsumeInput as a way to consume input from the server. From the documentation:
PQconsumeInput normally returns 1 indicating "no error", but returns 0 if there was some kind of trouble (in which case PQerrorMessage can be consulted). Note that the result does not say whether any input data was actually collected. After calling PQconsumeInput, the application can check PQisBusy and/or PQnotifies to see if their state has changed.
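For reference, the loop those two calls imply would look something like this in plain libpq (a sketch; the conninfo string and channel name are placeholders, and each channel still has to be LISTENed to explicitly):

#include <libpq-fe.h>
#include <sys/select.h>
#include <cstdio>

int main()
{
    PGconn *conn = PQconnectdb("dbname=test"); // placeholder conninfo
    if (PQstatus(conn) != CONNECTION_OK) return 1;

    PQclear(PQexec(conn, "LISTEN my_channel")); // one LISTEN per channel

    const int sock = PQsocket(conn);
    for (;;)
    {
        // Block until the connection's socket becomes readable.
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(sock, &fds);
        if (select(sock + 1, &fds, nullptr, nullptr, nullptr) < 0) break;

        if (!PQconsumeInput(conn)) break; // connection trouble
        PGnotify *n;
        while ((n = PQnotifies(conn)) != nullptr)
        {
            // n->relname is the channel name: this is where a message
            // would be published to the RabbitMQ queue of the same name.
            std::printf("channel=%s payload=%s\n", n->relname, n->extra);
            PQfreemem(n);
        }
    }
    PQfinish(conn);
    return 0;
}

Npgsql seems to expose the same mechanism through its notification support, so something equivalent should be possible without writing C.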
Am I on the right path? Since I'm using .NET I'd prefer not writing any C, so any suggestions are welcome.
I've tried pgsql-listen-exchange but either I'm doing something wrong or the plugin doesn't work for RabbitMQ 3.6 (there's only a 3.5 release). I created an issue.
Specific to RabbitMQ: As an alternative to listening for everything from PostgreSQL, I guess I could create an exchange, have something poll it for queues, and just create a listener for each queue. I'll be looking into this as well.
I was reading about "idempotent methods", but I don't quite get it.
1.1. So the GET method must be idempotent.
1.2. An idempotent HTTP method is a HTTP method that can be called many times without different outcomes. It would not matter if the method is called only once, or ten times over. The result should be the same. - See more at: http://restcookbook.com/HTTP%20Methods/idempotency/
Okay, that was the theory. Now a specific case:
2.1. I have exposed a GET method that returns all records in the DB.
2.2. Somebody called this method and it returned 1000 results.
2.3. The application is running, so in a few minutes I have 1001 records in the DB.
2.4. Somebody (maybe the same caller) called this method again and now it returned 1001 results.
Is my GET method still idempotent, or should it be changed to POST?
Yes.
Because the GET is not changing the resource. That's the distinction.
Consider:
GET /currenttime
Perfectly valid request, idempotent, but you'll get a new answer pretty much every time you call it.
An idempotent HTTP method is a HTTP method that can be called many times without different outcomes. It would not matter if the method is called only once, or ten times over. The result should be the same.
The opening sentence is somewhat unfortunate but the rest explains it pretty clearly.
The key point to note here is that the outcome may not be altered by any number of subsequent calls of the same method. The state of the resource, a representation of which you're GETting, is free to be changed by other means, though.
In your example it isn't the GET request that's changing the state of the database. It's an external factor.
Is my GET method still idempotent or should it be changed to POST?
Yes, the way you describe it, it's both idempotent and safe as it does not modify the state of your resources and it will always yield the same result provided that other parties do not alter the resource state between calls. Calling it does not affect the result of calling it.
I have a scenario where 2 db connections might both run Model.find_or_initialize_by(params) and raise an error: PG::UniqueViolation: ERROR: duplicate key value violates unique constraint
I'd like to update my code so it can gracefully recover from it. Something like:
record = nil
begin
  record = Model.find_or_initialize_by(params)
rescue ActiveRecord::RecordNotUnique
  record = Model.where(params).first
end
return record
The trouble is that there's not a nice/easy way to reproduce this on my local machine, so I'm not confident that my fix actually works.
So I thought I'd get a bit creative and try calling create twice (locally) in a row, which should then raise PG::UniqueViolation: ERROR; I could then rescue it and make sure everything is handled gracefully.
But I get this error: PG::InFailedSqlTransaction: ERROR: current transaction is aborted, commands ignored until end of transaction block
I get this error even when I wrap everything in individual transaction blocks:
record = nil
Model.transaction do
  record = Model.create(params)
end
begin
  Model.transaction do
    record = Model.create(params)
  end
rescue ActiveRecord::RecordNotUnique
end
Model.transaction do
  record = Model.where(params).first
end
return record
My questions:
What's the right way to gracefully handle the race condition I mentioned at the very beginning of this post?
How do I test this locally?
I imagine there's probably something simple that I'm missing here, but it's late and perhaps I'm not thinking too clearly.
I'm running postgres 9.3 and rails 4.
EDIT: It turns out that find_or_initialize_by should have been find_or_create_by, and the errors I was getting were from the actual save call that happened later in execution. #VeryTiredWhenIWroteThis
Has this actually happened?
Model.find_or_initialize_by(params)
should never raise an ActiveRecord::RecordNotUnique error, as it does not save anything to the DB. It just instantiates a new ActiveRecord object.
In the second snippet, however, you are creating records.
create (without a bang) does not throw exceptions caused by validations, but
ActiveRecord::RecordNotUnique is always thrown in case of a duplicate, by both create and create!
If you're creating records you don't need transactions at all. Postgres, being ACID compliant, guarantees that only one of the two operations succeeds, and once it responds, its changes are durable (a single-statement query against Postgres is also a transaction). So your code above is almost fine if you replace find_or_initialize_by with find_or_create_by:
begin
  record = Model.find_or_create_by(params)
rescue ActiveRecord::RecordNotUnique
  record = Model.where(params).first
end
You can test whether the code behaves correctly by simply trying to create the same record twice in a row, as sketched below. However, this will not test that ActiveRecord::RecordNotUnique is actually thrown correctly under race conditions.
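The twice-in-a-row check might look like this (a sketch; Model and params stand in for your real class and attributes, which must be backed by a unique index):

record = Model.create(params)    # first insert succeeds
begin
  record = Model.create(params)  # second insert violates the unique index
rescue ActiveRecord::RecordNotUnique
  record = Model.where(params).first
end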
It's also not the responsibility of your app to test this, and testing it is not easy. You would have to run Rails in multi-threaded mode on your machine, or test against a multi-process staging instance. WEBrick, for example, handles only one request at a time. You can use the Puma application server, but on MRI there is no true concurrency (GIL); threads only yield the GIL on blocking IO. Because talking to Postgres is IO, I'd expect some concurrent requests, but to be 100% sure, the best testing scenario would be to deploy on Passenger with multiple workers and then use JMeter to run concurrent requests against the server.
Is there a command to know whether the kdb server is busy running a query? Even better, is there a way to know the percentage completion of the query being run?
So far I've been looking at the top screen on Linux to decide which server to use...
Unfortunately, not directly. The reason is the single-threaded nature of a kdb process. In practice this is easily worked around by adding some basic logging to your server: whenever a query comes in, log to a file the time it arrived and the time the result was returned to the user.
Take a look at the .z.pg and .z.ps handlers, which are called to process synchronous and asynchronous requests, respectively. By default they behave like "value", which evaluates the incoming string and returns the result. Just replace them with your own function that also logs events to a file or a log server.
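A minimal version might look like this (a sketch; the log file path is a placeholder, and protected evaluation around value is omitted for brevity):

.z.pg:{[q]
  h:hopen `:query.log;                 / plain-text log file (placeholder path)
  h string[.z.p]," IN  ",(-3!q),"\n";  / log arrival before evaluating
  res:value q;                         / evaluate, as the default handler does
  h string[.z.p]," OUT ",(-3!q),"\n";  / log completion
  hclose h;
  res}

An IN line without a matching OUT then tells you the server is still busy with (or died during) that query.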
Besides the above solution, a simpler way is to keep checking the port.
Normally all queries run against a port, and a kdb server can open multiple ports for different purposes.
Details:
Use the code below to probe a port. If the port is busy, a null result is returned, and you can then kill the process behind the port and restart it, or whatever the requirement is.
The code attempts to open a handle to the port with a 3-second timeout; a busy (single-threaded) server cannot accept the connection in time, so the attempt fails and null is returned.
.server.testQuery:{[inPort]
  / protected evaluation: try to open a handle with a 3000 ms timeout;
  / if the server is busy, the connect times out and 0N is returned
  res:@[{hopen(x;3000)};`$"::",string inPort;0N];
  if[not null res;hclose res];
  :res
  };