Showing a warning to the end user through a Postgres trigger without aborting the transaction

I am trying to validate a field through a Postgres trigger.
If the targeted field has a decimal value, I need to throw a warning while still allowing the user to save the record.
I tried the options
RAISE EXCEPTION, RAISE - USING
but they throw an error in the UI and the transaction is aborted.
I also tried
RAISE NOTICE, RAISE WARNING
but no warning is shown and the record is simply saved.
It would be great if anyone could help with this.
Thanks in advance.

You need to set client_min_messages to a level that'll show NOTICEs and WARNINGs. You can do this:
At the transaction level with SET LOCAL
At the session level with SET
At the user level with ALTER USER
At the database level with ALTER DATABASE
Globally in postgresql.conf
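For example (some_user and some_db are placeholders):

-- transaction level (reverts at COMMIT/ROLLBACK):
SET LOCAL client_min_messages = 'notice';
-- session level:
SET client_min_messages = 'notice';
-- user level:
ALTER USER some_user SET client_min_messages = 'notice';
-- database level:
ALTER DATABASE some_db SET client_min_messages = 'notice';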
You must then check for messages from the server after running queries and display them to the user or otherwise handle them. How to do that depends on the database driver you're using, which you haven't specified. PgJDBC? libpq? other?
Note that raising a notice or warning will not cause the transaction to pause and wait for user input. You really don't want to do that. Instead RAISE an EXCEPTION that aborts the transaction. Tell the user about the problem, and re-run the transaction if they approve it, possibly with a flag set to indicate that an exception should not be raised again.
It would be technically possible to have a PL/Perlu, PL/Pythonu, or PL/Java trigger pause execution while it asked the client via a side-channel (like a TCP socket) to approve an action. It'd be a really bad idea, though.
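For reference, a minimal sketch of the warning-only approach from the question (table, column, and trigger names are placeholders; NEW.amount stands in for the field being validated):

CREATE OR REPLACE FUNCTION warn_on_decimals() RETURNS trigger AS $$
BEGIN
    -- warn if the value has a fractional part, but do not abort
    IF NEW.amount <> trunc(NEW.amount) THEN
        RAISE WARNING 'amount % has a fractional part', NEW.amount;
    END IF;
    RETURN NEW;  -- returning NEW lets the save proceed
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER check_amount
    BEFORE INSERT OR UPDATE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE warn_on_decimals();

Remember the client will only see the warning if client_min_messages is set appropriately and the driver surfaces server messages, as described above.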

Related

PostgreSQL JDBC driver hide prepared statement parameters from logging

I need to hide prepared statement parameters from debug-level logging and from exception messages, because they contain security-critical values. For example, when using pgp_sym_encrypt and an exception is thrown from the database, the exception message shows the full statement with its parameters, including the second parameter, the encryption key password.
Is there any way to hide these kinds of values, especially in the exception message?
The safest way is to do the encryption on the client side and never send the password to the database. Once you send it to the database, it will be very hard to absolutely control what happens to it. Consider that if there is a way to configure the database to suppress this logging, then there is also a way to reverse that configuration.

Can I log the script that invokes DELETE query?

I have to investigate who or what caused table rows to disappear.
So I am thinking about creating an "on before delete" trigger that logs the script that invokes the deletion. Is this possible? Can I get the db client name, or even better, the script that invokes the delete query, and log it to another temporarily created log table?
I am open to other solutions, too.
Thanks in advance!
You can't get "the script" which issued the delete statement, but you can get various other information:
current_user will return the current Postgres user that initiated the delete statement
inet_client_addr() will return the IP address of the client's computer
current_query() will return the complete statement that caused the trigger to fire
More details about these functions are available in the manual:
http://www.postgresql.org/docs/current/static/functions-info.html
The Postgres Wiki contains two examples of such an audit trigger:
https://wiki.postgresql.org/wiki/Audit_trigger_91plus
https://wiki.postgresql.org/wiki/Audit_trigger (somewhat outdated)
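A minimal sketch of such an audit trigger, much simpler than the wiki examples (table and column names are made up for illustration; note that inet_client_addr() returns NULL for connections over a Unix-domain socket):

CREATE TABLE delete_log (
    deleted_at  timestamptz NOT NULL DEFAULT now(),
    db_user     text,
    client_addr inet,
    query       text
);

CREATE OR REPLACE FUNCTION log_delete() RETURNS trigger AS $$
BEGIN
    INSERT INTO delete_log (db_user, client_addr, query)
    VALUES (current_user, inet_client_addr(), current_query());
    RETURN OLD;  -- returning OLD lets the delete proceed
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log_row_delete
    BEFORE DELETE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE log_delete();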

ActiveRecord find_or_initialize_by race conditions

I have a scenario where 2 db connections might both run Model.find_or_initialize_by(params) and raise an error: PG::UniqueViolation: ERROR: duplicate key value violates unique constraint
I'd like to update my code so it could gracefully recover from it. Something like:
record = nil
begin
  record = Model.find_or_initialize_by(params)
rescue ActiveRecord::RecordNotUnique
  record = Model.where(params).first
end
return record
The trouble is that there's not a nice/easy way to reproduce this on my local machine, so I'm not confident that my fix actually works.
So I thought I'd get a bit creative and try calling create twice (locally) in a row, which should raise the PG::UniqueViolation error; then I could rescue from it and make sure everything is handled gracefully.
But I get this error: PG::InFailedSqlTransaction: ERROR: current transaction is aborted, commands ignored until end of transaction block
I get this error even when I wrap everything in individual transaction blocks:
record = nil
Model.transaction do
  record = Model.create(params)
end
begin
  Model.transaction do
    record = Model.create(params)
  end
rescue ActiveRecord::RecordNotUnique
end
Model.transaction do
  record = Model.where(params).first
end
return record
My questions:
What's the right way to gracefully handle the race condition I mentioned at the very beginning of this post?
How do I test this locally?
I imagine there's probably something simple that I'm missing here, but it's late and perhaps I'm not thinking too clearly.
I'm running postgres 9.3 and rails 4.
EDIT Turns out that find_or_initialize_by should have been find_or_create_by, and the errors I was getting were from the actual save call that happened later in execution. #VeryTiredWhenIWroteThis
Has this actually happened?
Model.find_or_initialize_by(params)
should never raise an ActiveRecord::RecordNotUnique error, as it does not save anything to the db. It just instantiates a new ActiveRecord object.
In the second snippet, however, you are creating records.
create (without bang) does not throw exceptions caused by validations, but
ActiveRecord::RecordNotUnique is always thrown in case of a duplicate, by both create and create!.
If you're creating records you don't need explicit transactions at all. Postgres, being ACID compliant, guarantees that only one of the two operations succeeds, and once it reports success the changes are durable. (A single-statement query against Postgres is also a transaction.) So your above code is almost fine if you replace find_or_initialize_by with find_or_create_by:
begin
  record = Model.find_or_create_by(params)
rescue ActiveRecord::RecordNotUnique
  record = Model.where(params).first
end
You can test whether the code behaves correctly by simply trying to create the same record twice in a row. However, this will not test that ActiveRecord::RecordNotUnique is actually thrown correctly under race conditions.
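At the SQL level, that duplicate-create test is easy to reproduce in psql (the table here is made up for illustration):

CREATE TABLE models (name text UNIQUE);
INSERT INTO models (name) VALUES ('foo');
INSERT INTO models (name) VALUES ('foo');
-- ERROR:  duplicate key value violates unique constraint "models_name_key"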
It's also not the responsibility of your app to test this, and testing it is not easy. You would have to start Rails in multithreaded mode on your machine, or test against a multi-process staging Rails instance. WEBrick, for example, handles only one request at a time. You can use the Puma application server, but on MRI there is no true concurrency (GIL); threads only release the GIL on blocking IO. Because talking to Postgres is IO, I'd expect some concurrent requests, but to be 100% sure, the best testing scenario would be to deploy on Passenger with multiple workers and then use JMeter to run concurrent requests against the server.

How To: Transaction Rollback in Squeryl

Can anybody please tell me how to handle a transaction rollback in Squeryl explicitly?
And also, how can we add or remove columns in Squeryl dynamically?
Thanks...
Just to elaborate a bit on the response from #didierd: there is one Session/Connection bound to each transaction. You can access the current Session, and thereby the Connection, with code like:
Session.currentSession.connection
Or, if you're not sure whether you're within a transaction:
Session.currentSessionOption map {_.connection}
If you do roll back the transaction this way, it will be your responsibility to start a new one or to make sure there is no further use of the connection, so use with care.
You have access to the JDBC java.sql.Connection (connection in Session), so if you really cannot use transaction / inTransaction, you can call rollback there.
With access to the connection, you can also execute arbitrary SQL statements and so change the database schema, but be mindful that your Squeryl-using code has a static, compile-time-known schema.

continue insert when exception is raised in postgres

Hi,
I am trying to insert a batch of records at a time. When any of the records fails to insert, I need to trap that record, log it to my failed-record maintenance table, and then the insert should continue. Kindly help on how to do this.
If you are using a Spring or EJB container, there is a simple trick which works very well: provide a LogService with a logWarning(String message) method. The method must be annotated/configured with the REQUIRES_NEW transaction setting, so the log entry is committed in its own transaction even if the surrounding one rolls back.
If not, you'll have to simulate it with API calls: open a separate connection for the logging, begin a transaction when you enter the method, and commit it before leaving.
When not using transactions for the insert, there is actually nothing special you need to do, as by default most databases run in autocommit mode and commit after every statement.
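Alternatively, if the batch runs inside the database, PL/pgSQL can trap the error per row and log it without aborting the rest of the batch, because each BEGIN ... EXCEPTION block runs in its own subtransaction. A rough sketch (all table and column names are invented):

CREATE OR REPLACE FUNCTION insert_batch() RETURNS void AS $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT * FROM staging_records LOOP
        BEGIN
            INSERT INTO target_table (id, payload) VALUES (r.id, r.payload);
        EXCEPTION WHEN OTHERS THEN
            -- record the failing row and the error, then keep going
            INSERT INTO failed_records (id, payload, error_message)
            VALUES (r.id, r.payload, SQLERRM);
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;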