PGSQL: Trigger counter of rows of another table

I want to create a trigger so that whenever a table, let's call it A, is updated, it counts the number of rows in A and updates a value in another table B. How can this be done?

There is a full sample here:
http://www.postgresql.org/docs/9.3/static/plpgsql-trigger.html
To create a stored procedure you are supposed to provide:
CREATE FUNCTION your_function_name(parameters_go_here) RETURNS returned_data_type_or_trigger_for_triggers AS string_value_with_body;
Normally a semicolon ends the command, so you couldn't type the body. Other databases change the command delimiter; PostgreSQL instead allows a special string delimiter, the dollar sign pair ($$). Please read more in this topic:
What are '$$' used for in PL/pgSQL
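For illustration, a trivial function whose body is wrapped in dollar quotes (the function itself is just an invented example):

CREATE FUNCTION add_one(n integer) RETURNS integer AS $$
BEGIN
    RETURN n + 1;  -- semicolons inside the $$ pair don't end the CREATE command
END;
$$ LANGUAGE plpgsql;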
Going back to counters - they are tricky. Let's say you count posts in topics, and you move posts between two topics. Since the topics have counters, both topics are updated, and that in turn means an update lock on each. What would happen if two concurrent moves ran between the same two topics in opposite directions? The first move locks its base topic and tries to lock the other one; the second move does the same in reverse, and each ends up waiting for the other - a deadlock. :D
To deal with it you need to ensure that both transactions take the locks in the same order. For instance, sort the topics by ID and always lock the first one first.
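A minimal sketch of such a trigger (all names are assumptions; b is a one-row table holding the count). Recounting on every statement is simple but scans A each time; for big tables you would increment/decrement instead, which is exactly where the locking caveat above comes in:

CREATE TABLE b (row_count integer NOT NULL DEFAULT 0);
INSERT INTO b (row_count) VALUES (0);

CREATE FUNCTION refresh_a_count() RETURNS trigger AS $$
BEGIN
    UPDATE b SET row_count = (SELECT count(*) FROM a);
    RETURN NULL;  -- the return value is ignored for statement-level triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER a_count_trg
AFTER INSERT OR UPDATE OR DELETE ON a
FOR EACH STATEMENT EXECUTE PROCEDURE refresh_a_count();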

Related

Why does Postgres sequence item go up even if object creation fails?

I have a Postgres database where one of my models is Client, simply indexed by its primary key. I was having an issue creating clients because somewhere along the line someone created a client while explicitly setting its primary key, which (I have read) doesn't advance Postgres' sequence for Clients - the sequence responsible for auto-incrementing the primary key by 1 whenever a Client object is created.
I ran some SQL queries to play around with it, and found that the current sequence value was in fact 1 lower (262) than the highest Client id in the database (263), so it was saying that a Client with the ID 263 already existed. I tried creating a Client in our front-end application, got the error again, and decided to re-run the queries. I saw that there was no new client created in the database, as expected, but I also noticed that the sequence value did go up to 263, so when I tried creating a client again it worked!
Is this normal behavior for a PostgreSQL sequence to increment even if creation of its related model fails? If so, it seems like that could cause some serious issues.
Yes, this is the expected behaviour. See the docs:
nextval
Advance the sequence object to its next value and return that value. This is done atomically: even if multiple sessions execute nextval concurrently, each will safely receive a distinct sequence value.
If a sequence object has been created with default parameters, successive nextval calls will return successive values beginning with 1. Other behaviors can be obtained by using special parameters in the CREATE SEQUENCE command; see its command reference page for more information.
Important: To avoid blocking concurrent transactions that obtain numbers from the same sequence, a nextval operation is never rolled back; that is, once a value has been fetched it is considered used, even if the transaction that did the nextval later aborts. This means that aborted transactions might leave unused "holes" in the sequence of assigned values.
Note that nextval is normally set as the default value for an autoincrement/serial column.
Also try to imagine how hard and inefficient it would be if nextval were to roll back. Essentially you would have to block every client calling nextval until the whole transaction that acquired the value had finished. In that case, forget about concurrent inserts.
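You can see the behaviour directly; the table here is just an invented example:

CREATE TABLE clients (id serial PRIMARY KEY, name text);

BEGIN;
INSERT INTO clients (name) VALUES ('first');   -- consumes id 1 from the sequence
ROLLBACK;                                      -- the row is gone ...

INSERT INTO clients (name) VALUES ('second');  -- ... but this row gets id 2
SELECT id FROM clients;                        -- returns only 2: a "hole" at 1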
If so it seems like that could cause some serious issues.
Like what? The issue in your case was that someone manually specified a value for an autoincrement column. You should never do that unless you are a samurai. :)
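If someone has already done it and the sequence now lags behind the table, you can resynchronise it manually; the sequence name below assumes the default naming for a serial column:

SELECT setval('clients_id_seq', (SELECT max(id) FROM clients));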

when should I use an After trigger instead of a Before trigger?

Afaik, although I'm pretty new to Postgres, Before-triggers are less expensive than After-triggers.
After all, if you want to change the current record (using NEW), you can change it before it is written. In contrast, with After-triggers you need two writes: one verbatim write and one as a result of the after-trigger.
At the same time, all functionality that is available in after-triggers seems to be available in before-triggers, if I'm not mistaken.
So why would you ever use After-triggers to begin with?
If you're changing the record upon which the trigger is acting, use a BEFORE trigger. Likewise, if you're doing some complex logic that may prevent the record from being changed, use a BEFORE trigger.
For almost anything else, use an AFTER trigger. An example might be where you're inserting child records which rely upon the primary key of the record being inserted - for example, adding an entry to a history table for a newly inserted row. The parent row won't exist yet in a BEFORE trigger, so the insert would fail foreign key checks.
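A sketch of both patterns (table and column names are invented for illustration):

-- BEFORE trigger: adjust the row before it is written; no second write needed
CREATE FUNCTION set_updated_at() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();
    RETURN NEW;  -- the modified row is what actually gets stored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_before_update
BEFORE UPDATE ON items
FOR EACH ROW EXECUTE PROCEDURE set_updated_at();

-- AFTER trigger: the parent row already exists, so the FK check passes
CREATE FUNCTION log_item() RETURNS trigger AS $$
BEGIN
    INSERT INTO items_history (item_id, logged_at) VALUES (NEW.id, now());
    RETURN NULL;  -- the return value is ignored for AFTER row triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_after_insert
AFTER INSERT ON items
FOR EACH ROW EXECUTE PROCEDURE log_item();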

How to modify a record within a record type in oracle forms

I created 3 blocks in my Oracle 10g form: Headers, Lines and Lines Details. I am fetching the records using cursors for all three blocks and everything is working fine. Now, in the Lines Details block there is a numeric field called priority. By default I am using the FIFO method for the priority value, starting from 1 up to n. Now I want the user to decide the priority, such that any specific record can be shifted up or down to increase or decrease the priority without committing the line details. Once the user is satisfied with the priority, he will click on save to commit the changes. Please help me with this. Thanks in advance.
Locate the changed record and, based on its current priority value, make the new priority equal to the current priority +/- the number of times the user clicked Up or Down. Declare a record type variable with exactly the same columns as your Lines Details data block. Copy all the records, including the changed record, into the record type variable. Clear the block with no validation and then re-populate it from the record type variable. To display the records as per the new priority values, modify your default ORDER BY clause. This will solve your problem.

PostgreSQL transaction variables

This question is sort of a follow-up to this question, but it's a different enough topic that I feel it merits its own discussion. For a bit of background, you can refer to it.
As a part of a new file importing system, I am building an audit system based on this wiki page. But one of the things that I would like to include in the audit trail is the name of the file that the data came from (these files are archived for long-term storage, so if there are questions, I can always go back).
One way I could go is to create an import_batch record, record the name of the file there, and then just stamp records when they update. That is the path I'm going down. But it feels a bit clunky. I've been pondering the idea of having the audit trigger be able to get the import_batch_id without it having to be in the NEW.* record. It seems to me there are at least a couple of ways I might be able to accomplish this.
I could have a function that creates a temp table and stores any information in it that I want (such as batch # or file name or whatever). This seems pretty clean, and as I understand it the table would only live for the duration of the transaction, and I wouldn't have to worry about naming collisions. Each transaction would have a temp table named "tmp_import_info".
If I only care about the import_batch_id (which comes from a sequence), I could probably just get the current value of the sequence. I'm not 100% sure how this would behave in a multi-user setting. I would think it would be possible for trans #1 to create import_batch_id #222 and then trans #2 to start and get #223, and then my audit trail would record the wrong data.
Are there other options that I'm not seeing here? Is there a way to add a transaction/session variable? Basically, something like pg_settings, but one that allows inserts, updates and deletes of values.
It feels like the best option might be the temp table.
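For reference, the temp-table idea might look roughly like this (names taken from the question; the values and the audit-trigger usage are assumptions):

-- at the start of the importing transaction:
CREATE TEMP TABLE tmp_import_info (
    import_batch_id integer,
    filename        text
) ON COMMIT DROP;  -- lives only for the duration of this transaction
INSERT INTO tmp_import_info VALUES (222, 'import_2014_01_31.csv');

-- later, inside the audit trigger, in the same transaction:
SELECT import_batch_id FROM tmp_import_info;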
The main good news for variant 2 is - quoting the manual here:
currval
Return the value most recently obtained by nextval for this sequence in the current session. (An error is reported if nextval has never been called for this sequence in this session.) Because this is returning a session-local value, it gives a predictable answer whether or not other sessions have executed nextval since the current session did.
Store your import file names in a table with a serial primary key. You can refer to the last value from the sequence with currval or lastval. Concurrent users cannot interfere. As long as you don't call nextval on the same sequence again yourself inside your own transaction, this is safe.
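A sketch under those assumptions (table and file names are invented; the sequence name follows the default serial naming):

CREATE TABLE import_batch (
    import_batch_id serial PRIMARY KEY,
    filename        text NOT NULL
);

-- inside the importing transaction:
INSERT INTO import_batch (filename) VALUES ('import_2014_01_31.csv');

-- later in the same session, e.g. from the audit trigger:
SELECT currval('import_batch_import_batch_id_seq');  -- the id assigned above

Alternatively, INSERT ... RETURNING import_batch_id hands you the new id directly at insert time.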

How to safely increment a counter in Entity Framework

Let's say I have a table that tracks the number of times a file was downloaded, and I expose that table to my code via EF. When the file is downloaded I want to update the count by one. At first, I wrote something like this:
var fileRecord = (from r in context.Files where r.FileId == 3 select r).Single();
fileRecord.Count++;
context.SaveChanges();
But then when I examined the actual SQL that is generated by these statements I noticed that the incrementing isn't happening on the DB side but instead in my memory. So my program reads the value of the counter in the database (say 2003), performs the calculation (new value is 2004) and then explicitly updates the row with the new Count value of 2004. Clearly this isn't safe from a concurrency perspective.
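Roughly, the generated SQL amounts to a read followed by a write of the now-stale value (a simplified sketch, not EF's literal output):

SELECT [Count] FROM [Files] WHERE [FileId] = 3;        -- reads 2003
UPDATE [Files] SET [Count] = 2004 WHERE [FileId] = 3;  -- another session may have incremented in between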
I was hoping the query would end up looking instead like:
UPDATE Files SET Count = Count + 1 WHERE FileId=3
Can anyone suggest how I might accomplish this? I'd prefer not to lock the row before the read and then unlock it after the update, because I'm afraid of blocking reads by other users (unless there is some way to lock a row for writes without blocking reads).
I also looked at doing a Entity SQL command but it appears Entity SQL doesn't support updates.
Thanks
You're certainly welcome to call a stored procedure with EF. Write a sproc with the SQL you show, then create a function import in your EF model mapped to said sproc.
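A minimal sketch of such a sproc, assuming SQL Server and the table/column names from the question (the procedure name is invented):

CREATE PROCEDURE IncrementDownloadCount
    @FileId int
AS
BEGIN
    -- the increment happens atomically on the database side
    UPDATE Files SET [Count] = [Count] + 1 WHERE FileId = @FileId;
END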
You will need to do some locking in order to get this to work, but you can minimise the amount of locking.
When you read the count with the intention of updating it, you must lock it; this can be done by placing the read and the update inside a transaction scope. This will protect you from race conditions.
When you only want to read the value, you can do so with a transaction isolation level of ReadUncommitted; such a read will then not be blocked by the read/write lock above.