Access arbitrary metadata in AFTER DELETE trigger - PostgreSQL

I'm thinking about creating archive tables in our database.
I can create an AFTER DELETE trigger that moves the row to an archive table, but I need to fill a deleted_by field that holds the id of the user who removed the data. To be clear, this user is an entity in our application, not an internal Postgres user.
If Postgres had a way to attach some metadata to the transaction, I could use it inside the trigger to fill this field. Maybe I can use variables for that? Is there an existing solution to this problem?
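For what it's worth, Postgres does support the "variables" idea through custom settings: the application can attach a transaction-scoped value with SET LOCAL or set_config(), and a trigger can read it back with current_setting(). A minimal sketch, where myapp.user_id, orders, and orders_archive are all hypothetical names:

-- Application side: attach the acting user to the transaction.
-- SET LOCAL values disappear automatically when the transaction ends.
BEGIN;
SET LOCAL myapp.user_id = '42';
DELETE FROM orders WHERE id = 1;
COMMIT;

-- Trigger side: read the setting back. Passing true as the second argument
-- makes current_setting() return NULL instead of raising an error when the
-- setting was never set.
CREATE FUNCTION archive_deleted_row() RETURNS trigger AS $$
BEGIN
    INSERT INTO orders_archive
    SELECT OLD.*, current_setting('myapp.user_id', true)::bigint;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_archive_trg
    AFTER DELETE ON orders
    FOR EACH ROW EXECUTE FUNCTION archive_deleted_row();  -- EXECUTE PROCEDURE on Postgres 10 and older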

I suggest you write a stored procedure that inserts the row into the archive table and then deletes it from the original table. The API should use only that procedure to delete a row, with the user id passed as an argument.
You can still write a trigger that inserts the row into the archive table with a NULL user id if someone uses a plain DELETE instead of the procedure. In that case, the archive table must store the original table's primary key in a nullable UNIQUE column to prevent duplicates.
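A minimal sketch of that suggestion, under an assumed placeholder schema of orders(id bigint PRIMARY KEY, payload text) and orders_archive(id bigint UNIQUE, payload text, deleted_by bigint); the UNIQUE constraint is what lets the fallback trigger avoid duplicating a row the procedure already archived:

-- The API calls this instead of a plain DELETE, e.g. CALL delete_order(1, 42);
CREATE PROCEDURE delete_order(p_id bigint, p_user bigint)
LANGUAGE sql AS $$
    INSERT INTO orders_archive (id, payload, deleted_by)
    SELECT id, payload, p_user FROM orders WHERE id = p_id;
    DELETE FROM orders WHERE id = p_id;
$$;

-- Fallback for plain DELETEs: archive with a NULL user id, and do nothing
-- when the procedure has already archived the row (the DELETE inside the
-- procedure fires this trigger too).
CREATE FUNCTION archive_fallback() RETURNS trigger AS $$
BEGIN
    INSERT INTO orders_archive (id, payload, deleted_by)
    VALUES (OLD.id, OLD.payload, NULL)
    ON CONFLICT (id) DO NOTHING;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_archive_fallback_trg
    AFTER DELETE ON orders
    FOR EACH ROW EXECUTE FUNCTION archive_fallback();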

Related

prohibit specifying the id manually when inserting a new record

I want to prohibit specifying the id manually when inserting a new record, so that Postgres itself always generates the value via BIGSERIAL. How can I do this? I think it can be done with CREATE TRIGGER ... BEFORE INSERT, but I don't know what condition I need so that an id entered by the user never slips through.
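For reference, modern Postgres can enforce this declaratively with an identity column, and a BEFORE INSERT trigger covers the BIGSERIAL case; a minimal sketch with placeholder names (items, force_generated_id):

-- Declarative route: GENERATED ALWAYS AS IDENTITY rejects a user-supplied id
-- unless the insert explicitly says OVERRIDING SYSTEM VALUE.
CREATE TABLE items (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name text
);
-- INSERT INTO items (id, name) VALUES (1, 'x');  -- fails with an error
INSERT INTO items (name) VALUES ('x');            -- id is generated

-- Trigger route, if the column must stay BIGSERIAL: overwrite whatever id
-- the client supplied (this burns an extra sequence value on each insert,
-- which is usually harmless).
CREATE FUNCTION force_generated_id() RETURNS trigger AS $$
BEGIN
    NEW.id := nextval(pg_get_serial_sequence('items', 'id'));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_force_id
    BEFORE INSERT ON items
    FOR EACH ROW EXECUTE FUNCTION force_generated_id();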

ADF mapping data flow only inserting, never updating

I have an ADF data flow that will only insert. It never updates rows.
Below is a screenshot of the flow, and of the Alter Row task that sets the insert/update policies.
[screenshot: data flow]
[screenshot: alter row task]
There is a source table and a destination table.
There is a source table for new data.
A lookup is done against the key of the destination table.
Two columns are then generated, a hash of the source data & hash of the destination data.
In the Alter Row task, the policies are as follows:
Insert: if the lookup found no matching id.
Update: if lookup found a matching id and the checksums do not match (i.e. user exists but data is different between the source and existing record).
Otherwise it should do nothing.
The sink allows inserts and updates.
Even so, on the first run it inserts all records, and on the second run it inserts all the records again, even though they already exist.
I think I am misunderstanding the process, so I would appreciate any expertise or advice.
Thank you Joel Cochran for your valuable input. I reproduced the scenario and am posting this as an answer to help other community members.
If you are using the upsert method in the sink, add an Alter Row transformation with an Upsert if condition and write the expression for the upsert.
If you are using insert and update as your update methods in the sink, then in the Alter Row transformation use both Insert if and Update if conditions, so that data is inserted or updated in the sink according to those conditions.
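As a rough illustration of those conditions (the column names lookup_id, src_hash, and dst_hash are hypothetical, standing in for the lookup output key and the two computed hashes), the Alter Row expressions from the question would look something like:

Insert if:  isNull(lookup_id)
Update if:  !isNull(lookup_id) && src_hash != dst_hash

It is also worth checking the Key columns setting on the sink, since updates rely on it to match target rows.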

Storing duplicate data as a column in Postgres?

In a database project, I have a users table that has a computed value avg_service_rating, and another table called services with all the services associated with the user and the ratings for each service. Is there a computationally light way to maintain avg_service_rating without updating it every time an INSERT is done on the services table? Perhaps something like a generated column, but with a function call instead? Any direct advice or links to resources would be greatly appreciated as well!
CREATE TABLE users (
    username VARCHAR PRIMARY KEY,
    avg_service_rating NUMERIC, -- is it possible to store some function call for this column?
    ...
);
CREATE TABLE services (
    username VARCHAR NOT NULL REFERENCES users (username),
    service_date DATE NOT NULL,
    rating INTEGER,
    PRIMARY KEY (username, service_date)
);
If the values should be consistent, a generated column won't fit the bill, since it is only recomputed if the row itself is modified.
I see two solutions:
1. Have a trigger on the services table that updates the users table whenever a rating is added or modified. That slows down data modifications, but not your queries.
2. Turn users into a view. The original users table would be renamed, and it loses the avg_service_rating column, which is computed on the fly by the view.
To make the illusion perfect, create an INSTEAD OF INSERT OR UPDATE OR DELETE trigger on the view that modifies the underlying table. Then your application does not need to be changed.
With this solution you pay a certain price both on SELECT and on data modifications, but the latter price will be lower, since you don't have to modify two tables (and users might receive fewer modifications than services). An added advantage is that you avoid data duplication.
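A minimal sketch of the trigger variant, using the tables from the question and simply recomputing the average on every change (it ignores the corner case of an UPDATE that moves a row to a different username):

CREATE FUNCTION refresh_avg_rating() RETURNS trigger AS $$
DECLARE
    v_user varchar;
BEGIN
    -- OLD is the only row variable available on DELETE, NEW otherwise.
    IF TG_OP = 'DELETE' THEN
        v_user := OLD.username;
    ELSE
        v_user := NEW.username;
    END IF;
    UPDATE users
       SET avg_service_rating = (SELECT avg(rating)
                                   FROM services
                                  WHERE username = v_user)
     WHERE username = v_user;
    RETURN NULL;  -- the return value is ignored in an AFTER trigger
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER services_avg_rating_trg
    AFTER INSERT OR UPDATE OR DELETE ON services
    FOR EACH ROW EXECUTE FUNCTION refresh_avg_rating();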
A generated column would only be useful if the source data is in the same table row.
Otherwise your options are a view (where you could call a function or calculate the value via a subquery), or an AFTER INSERT OR UPDATE trigger on the services table that updates users.avg_service_rating. With a trigger, if you get a lot of updates on the services table, you need to consider possible concurrency issues; the upside is that the figure does not have to be recalculated every time a row in the users table is read.
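A minimal sketch of the view variant that both answers describe (users_base is a placeholder name; the rename means existing queries against users keep working):

ALTER TABLE users RENAME TO users_base;
ALTER TABLE users_base DROP COLUMN avg_service_rating;

CREATE VIEW users AS
SELECT u.*,  -- all remaining base columns
       (SELECT avg(s.rating)
          FROM services s
         WHERE s.username = u.username) AS avg_service_rating
  FROM users_base u;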

sql update trigger to grab updated data and also select other row data

I am trying to find a way so that when a specific column gets updated on a table, an update trigger (or maybe something else) can select the stop number column from the same row whose datetime was updated. I want to capture the stop number and the column's before/after values into another table. I do OK with SQL, but I'm no expert, so I can't think of how to accomplish this.
Is it possible?
Yes, it is. Have a read through this. Basically, there are two virtual tables, deleted and inserted, that you can query in a trigger. Deleted contains the rows being deleted, and inserted (you guessed it) the rows being inserted.
"How does that help? I'm doing an update." Indeed, but an update is effectively a delete followed by an insert, so in an AFTER UPDATE trigger you can get at the old values in deleted and the new values in inserted.

Access 2010 Insert Trigger

I have a parent table (Assessment) and two child tables with one-to-one relationships defined. To make sure that a child row is never added without a parent entry, I want to add an insert trigger to the child table (ConsequenceAssessment, in this case). The following ConsequenceAssessment BeforeChange trigger fires, but I cannot find how to reference the INSERTED rowset. There is an OLD recordset that works for an update, but how do I access the inserted row? The following is my best attempt, but the ConsequenceAssessment table does not yet include the new row, and therefore the trigger always hits the RaiseError.
UPDATE: Just found out that I can enforce referential integrity on a one-to-one relationship within Access (rookie misunderstanding). I would still like to know how to access the updated recordset. With MS SQL Server, this is implemented via the INSERTED table, which is available within the scope of an INSERT trigger. So, what is the equivalent in MS Access?
In a Before Change data macro, [fieldname] refers to the new value and [old].[fieldname] refers to the old value (which would be Null for an insert).
In your particular case [ConsequenceAssessment].[id] appears to be the primary key for that table, not a foreign key referring to the [Assessment] (parent) table. So, the lookup is simply searching for the wrong key value in the parent table.