Prohibit specifying the id manually when inserting a new record - PostgreSQL

I want to prohibit specifying the id manually when inserting a new record, so that Postgres itself always generates the value via BIGSERIAL. How can I do this? I think it could be done with CREATE TRIGGER ... BEFORE INSERT, but I don't know what condition I need so that an id entered by the user doesn't slip through.
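One thing to note about the trigger idea: with BIGSERIAL, the column default is applied before a BEFORE INSERT trigger runs, so inside the trigger NEW.id is already set and a user-supplied id is hard to distinguish from a generated one. On Postgres 10 or later, an identity column gives exactly this behavior declaratively; a minimal sketch (the table name is illustrative):

-- GENERATED ALWAYS AS IDENTITY makes Postgres itself reject manual ids:
CREATE TABLE orders (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    note text
);

INSERT INTO orders (note) VALUES ('ok');          -- id is generated automatically
INSERT INTO orders (id, note) VALUES (42, 'no');  -- ERROR: cannot insert a non-DEFAULT
                                                  -- value into column "id"

(An explicit id can still be forced with OVERRIDING SYSTEM VALUE, but a plain INSERT cannot supply one.)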

Related

Postgres: auto-populating an `INSERT` field based on session variable

I have a web app backed by Postgres.
Each web app request should only read/write data for the current logged-in user.
Every table with user data has a user_id column.
I occasionally have bugs where I forget to add user_id = ? to the WHERE clause of an SQL query. To protect against this problem in a general way, I'm looking into Postgres row-level security:
Set a policy on every user data table: CREATE POLICY table_policy ON table USING (user_id::TEXT = current_setting('app.user_id'))
In the web app, when a request begins, set the current logged-in user ID on the request's connection: SET app.user_id = ?.
This allows me to completely ignore user_id when writing SELECT and UPDATE statements.
My remaining problem is INSERTs. Is there a way to avoid having to provide user_id on INSERTs?
Just having a look at the manual:
"Existing table rows are checked against the expression specified in USING, while new rows that would be created via INSERT or UPDATE are checked against the expression specified in WITH CHECK."
It seems that you just have to add a WITH CHECK clause to your policy in addition to the USING clause; it will apply to INSERT and UPDATE statements.
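A sketch of what that could look like, assuming a table named documents with a bigint user_id column (names are illustrative):

ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

CREATE POLICY documents_policy ON documents
    USING (user_id::text = current_setting('app.user_id'))        -- filters SELECT/UPDATE/DELETE
    WITH CHECK (user_id::text = current_setting('app.user_id'));  -- validates INSERT and UPDATE

-- To avoid supplying user_id on INSERT at all, a column default can pull it
-- from the same session variable the web app already sets:
ALTER TABLE documents
    ALTER COLUMN user_id SET DEFAULT current_setting('app.user_id')::bigint;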

Postgres Unique Sequences in one table based on owner/foreign key

I am creating a web application that will store all user information in one database using permissions, roles, and FKs to restrict data access. One of the tables in this application tracks work orders created by each user (i.e. the work order table has an FK to the user table).
I want to ensure that each user has their own uninterrupted sequence of 'work order IDs', assigned when the work order is scheduled. That is, if user 1 creates his first work order, it is assigned #1; if user 2 creates his fifth work order, it is assigned #5.
The work order table has a UUID primary key, so each record is distinguishable, and the user FK has a not-null constraint.
Based on my research so far, it seems like Postgres Sequences would likely be my best answer. I would need to create a sequence for each user, and incorporate it into a trigger to stamp the work order record with the next appropriate ID. However, this seems like it would be very performance intensive, and creating a new sequence for every user would have its own set of challenges.
A second approach could be to create a second table that tracks each user's latest sequence, query it, increment it, and update both the work order table and the number tracking table. However, in this scenario, I think it would be susceptible to race conditions if two users were to convert records at exactly the same time.
I'm unsure what the best way to solve the problem would be. Is there another way that would provide better performance?
Sequences won't work for you, because they are not transactional by design: if an insert with a generated number fails, that number is consumed even after a ROLLBACK.
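A quick illustration of that gap behavior (the table name is illustrative):

CREATE TABLE t (id bigserial PRIMARY KEY);

BEGIN;
INSERT INTO t DEFAULT VALUES;  -- consumes sequence value 1
ROLLBACK;                      -- the row is undone ...
INSERT INTO t DEFAULT VALUES;  -- ... but this row gets id 2; value 1 is gone for good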
You should create a second table
CREATE TABLE counters (
    user_id       bigint PRIMARY KEY REFERENCES users ON DELETE CASCADE,
    work_order_id bigint NOT NULL DEFAULT 0
);
Then you get the next number with
UPDATE counters
   SET work_order_id = work_order_id + 1
 WHERE user_id = ?  -- the user creating the work order
RETURNING work_order_id;
That is atomic and safe from race conditions. Just make sure you run that UPDATE and the INSERT in the same database transaction; then they will either both succeed or both be undone.
This will serialize inserts into the work orders table per user, but gap-less sequences are always a performance problem.
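A sketch of the two statements inside one transaction, assuming a work_orders table with a UUID primary key as described in the question; table and column names, and the ? placeholders, are illustrative:

BEGIN;

-- Reserve the next per-user number. The row lock taken here serializes
-- concurrent work-order inserts for the same user until COMMIT/ROLLBACK.
UPDATE counters
   SET work_order_id = work_order_id + 1
 WHERE user_id = ?
RETURNING work_order_id;

-- gen_random_uuid() is built in from Postgres 13 (earlier versions: pgcrypto).
INSERT INTO work_orders (id, user_id, work_order_number)
VALUES (gen_random_uuid(), ?, ?);  -- ? = the value RETURNING gave above

COMMIT;  -- on ROLLBACK, both the counter bump and the insert are undone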

Access arbitrary metadata in after delete trigger

I'm thinking about creating archive tables in our database.
I can create an AFTER DELETE trigger that would move the row to the archive table, but I need to fill a deleted_by field that holds the id of the user who removed the data. To be clear, this user is an entity in our application, not an internal Postgres user.
If Postgres had a way to attach some metadata to the transaction, I could use it inside the trigger to fill this field. Maybe I can use variables for that? Is there an existing solution to this problem?
I suggest you write a stored procedure that inserts the row into the archive table and deletes it from the original table. The API should then use only that procedure to delete a row; the user id is passed as an argument.
You can still write a trigger that inserts the row into the archive table with a NULL user id if someone attempts to use a plain DELETE instead of the procedure. In that case, the archive row must carry the primary key from the original table in a UNIQUE (nullable) column to prevent duplicates.
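A minimal sketch of both pieces, assuming a source table items with a bigint primary key id and an archive table items_archive; all names are illustrative (trigger syntax is Postgres 11+):

CREATE TABLE items_archive (
    item_id    bigint UNIQUE,  -- primary key of the archived row; blocks duplicates
    payload    jsonb,
    deleted_by bigint          -- application user id; NULL for plain DELETEs
);

-- The procedure the API should call: archives the row with the caller's user id.
CREATE FUNCTION delete_item(p_item_id bigint, p_user_id bigint)
RETURNS void LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO items_archive (item_id, payload, deleted_by)
    SELECT id, to_jsonb(items), p_user_id
    FROM items WHERE id = p_item_id;

    DELETE FROM items WHERE id = p_item_id;
END;
$$;

-- Safety net for plain DELETEs: archive with a NULL user id. The UNIQUE column
-- plus ON CONFLICT keeps the row the procedure already wrote from being duplicated.
CREATE FUNCTION archive_item() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO items_archive (item_id, payload, deleted_by)
    VALUES (OLD.id, to_jsonb(OLD), NULL)
    ON CONFLICT (item_id) DO NOTHING;
    RETURN OLD;
END;
$$;

CREATE TRIGGER items_archive_trg
AFTER DELETE ON items
FOR EACH ROW EXECUTE FUNCTION archive_item();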

sql update trigger to grab updated data and also select other row data

I am trying to find a way so that when a specific column gets updated on a table, an update trigger (or maybe something else) can select the stop number column from the same row the datetime was updated on. I want to capture the stop number, and the column's data before and after the update, into another table. I do OK with SQL but I'm no expert, so I just can't think of how to accomplish this.
Is it possible?
Yes, it is. Have a read through this. Basically there are two virtual tables, deleted and inserted, that you can query in a trigger. deleted contains the row that is being deleted, and inserted (you guessed it) the row being inserted.
"How does that help? I'm doing an update." Indeed, but an update is effectively a delete followed by an insert, so in an AFTER UPDATE trigger you can get at the old value in deleted.

Prevent insertion if the records already exist in sqlite

I am programming for iPhone and I am using an SQLite DB for my app. I have a situation where I want to insert records into the table only if they don't already exist; otherwise the records should not be inserted.
How can I do this? Please can anybody suggest a suitable query for this.
Thank you, one and all.
Looking at SQLite's INSERT page http://www.sqlite.org/lang_insert.html.
You can do it using the following syntax
INSERT OR IGNORE INTO tablename ....
Example
INSERT OR IGNORE INTO tablename(id, value, data) VALUES(2, 4562, 'Sample Data');
Note: you need to have a KEY on the table columns that uniquely identifies a row. INSERT OR IGNORE will skip the insert only when the row being inserted would duplicate such a KEY.
In the above example, if you have a KEY on id, then another row with id = 2 will not be inserted.
If instead you have a composite KEY on id and value, then only the combination id = 2 and value = 4562 will cause a new row not to be inserted.
In short, there must be a key that uniquely identifies a row; only then can the database know there is a duplicate which should not be allowed.
Otherwise, if you do not have a KEY, you would have to go the route of SELECTing first and checking whether the row is already there. But even then, whichever columns you use in that condition can be added as a KEY to the table, so you can simply use INSERT OR IGNORE.
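A minimal end-to-end sketch, assuming a fresh table; the name and columns are illustrative:

CREATE TABLE tablename (
    id    INTEGER PRIMARY KEY,  -- the KEY that makes duplicates detectable
    value INTEGER,
    data  TEXT
);

INSERT OR IGNORE INTO tablename(id, value, data) VALUES (2, 4562, 'Sample Data');
-- Running the same statement again inserts nothing: id = 2 already exists.
INSERT OR IGNORE INTO tablename(id, value, data) VALUES (2, 9999, 'Other Data');
-- Also ignored: the duplicate is detected on the KEY (id), not on the other columns.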
In SQLite it is not possible to ALTER a table to add a constraint like UNIQUE or PRIMARY KEY; for that you need to recreate the table. Look at this FAQ on sqlite.org:
http://sqlite.org/faq.html#q11
Hello Sankar, what you can do is perform a SELECT for the record you wish to insert, then check the response: via SQLite's SQLITE_NOTFOUND flag you can tell whether that record already exists or not. If it doesn't exist you can insert it; otherwise you skip inserting.
I hope this is helpful.