I have two tables, order and payment_token, with a one-to-many / many-to-one relationship defined between them; the payment_token table contains the ID of the order table as a foreign key. When I update the payment_token table, it stores duplicate entries. I want to update the payment_token table without affecting the order table and without using a native query. How can I do that?
In a database project, I have a users table which has a computed value avg_service_rating. There is another table called services with all the services associated with the user and the ratings for each service. Is there a computationally light way in which I can maintain the avg_service_rating without updating it every time an INSERT is done on the services table? Perhaps something like a generated column, but with a function call instead? Any direct advice or links to resources will be greatly appreciated as well!
CREATE TABLE users (
username VARCHAR PRIMARY KEY,
avg_service_ratings NUMERIC, -- is it possible to store some function call for this column?
...
);

CREATE TABLE service (
username VARCHAR NOT NULL REFERENCES users (username),
service_date DATE NOT NULL,
rating INTEGER,
PRIMARY KEY (username, service_date)
);
If the values should be consistent, a generated column won't fit the bill, since it is only recomputed if the row itself is modified.
I see two solutions:
have a trigger on the services table that updates the users table whenever a rating is added or modified. That slows down data modifications, but not your queries.
Turn users into a view. The original users table would be renamed, and it loses the avg_service_rating column, which is computed on the fly by the view.
To make the illusion perfect, create an INSTEAD OF INSERT OR UPDATE OR DELETE trigger on the view that modifies the underlying table. Then your application does not need to be changed.
With this solution you pay a certain price both on SELECT and on data modifications, but the latter price will be lower, since you don't have to modify two tables (and users might receive fewer modifications than services). An added advantage is that you avoid data duplication.
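To illustrate, here is a minimal sketch of the view solution in PostgreSQL; the name users_base is invented and the column list is trimmed to username, so treat it as a template rather than a drop-in implementation:

ALTER TABLE users RENAME TO users_base;
ALTER TABLE users_base DROP COLUMN avg_service_ratings;

CREATE VIEW users AS
SELECT u.username,
       (SELECT avg(s.rating)
        FROM service AS s
        WHERE s.username = u.username) AS avg_service_ratings
FROM users_base AS u;

-- INSTEAD OF trigger so that data modifications on the view reach the base table
CREATE FUNCTION users_dml() RETURNS trigger
LANGUAGE plpgsql AS
$$BEGIN
   CASE TG_OP
      WHEN 'INSERT' THEN
         INSERT INTO users_base (username) VALUES (NEW.username);
         RETURN NEW;
      WHEN 'UPDATE' THEN
         UPDATE users_base SET username = NEW.username
         WHERE username = OLD.username;
         RETURN NEW;
      WHEN 'DELETE' THEN
         DELETE FROM users_base WHERE username = OLD.username;
         RETURN OLD;
   END CASE;
END;$$;

CREATE TRIGGER users_dml INSTEAD OF INSERT OR UPDATE OR DELETE ON users
   FOR EACH ROW EXECUTE FUNCTION users_dml();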
A generated column would only be useful if the source data is in the same table row.
Otherwise your options are a view (where you could call a function or calculate the value via a subquery), or an AFTER UPDATE OR INSERT trigger on the service table, which updates users.avg_service_ratings. With a trigger, if you get a lot of updates on the service table you'd need to consider possible concurrency issues, but it would mean the figure doesn't need to be calculated every time a row in the users table is accessed.
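A minimal sketch of that trigger approach, reusing the table and column names from the question (the function and trigger names are invented):

CREATE FUNCTION refresh_avg_rating() RETURNS trigger
LANGUAGE plpgsql AS
$$BEGIN
   UPDATE users
   SET avg_service_ratings = (SELECT avg(rating)
                              FROM service
                              WHERE username = NEW.username)
   WHERE username = NEW.username;
   RETURN NEW;
END;$$;

CREATE TRIGGER service_rating_change
   AFTER INSERT OR UPDATE OF rating ON service
   FOR EACH ROW EXECUTE FUNCTION refresh_avg_rating();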
I am creating a web application that will store all user information in one database using permissions, roles, and FKs to restrict data access. One of the tables in this application tracks work orders created by each user (i.e. the work order table has an FK to the user table).
I want to ensure that each user has their own uninterrupted sequence of 'work order IDs' that are assigned when the work order is scheduled. That is, if user 1 creates his first work order, it will be assigned #1; if user 2 creates his fifth work order, it will be assigned #5.
The work order table has a UUID primary key, so each record is distinguishable, and the user FK has a not-null constraint.
Based on my research so far, it seems like Postgres Sequences would likely be my best answer. I would need to create a sequence for each user, and incorporate it into a trigger to stamp the work order record with the next appropriate ID. However, this seems like it would be very performance intensive, and creating a new sequence for every user would have its own set of challenges.
A second approach could be to create a second table that tracks each user's latest sequence, query it, increment it, and update both the work order table and the number tracking table. However, in this scenario, I think it would be susceptible to race conditions if two users were to convert records at exactly the same time.
I'm unsure what the best way to solve the problem would be. Is there another way that would provide better performance?
Sequences won't work for you, because they are not transactional by design: if an insert with a generated number fails, that number is consumed even after a ROLLBACK.
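A quick demonstration (seq_demo is a throwaway name):

CREATE SEQUENCE seq_demo;

BEGIN;
SELECT nextval('seq_demo'); -- returns 1
ROLLBACK;

SELECT nextval('seq_demo'); -- returns 2: the rolled-back value stays consumed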
You should create a second table
CREATE TABLE counters (
user_id bigint PRIMARY KEY REFERENCES users ON DELETE CASCADE,
work_order_id bigint NOT NULL DEFAULT 0
);
Then you get the next number for a given user with

UPDATE counters
SET work_order_id = work_order_id + 1
WHERE user_id = $1 -- the ID of the current user
RETURNING work_order_id;
That is atomic and safe from race conditions. Just make sure you run that update and the insert in the same database transaction, then they will either both succeed or both fail and be undone.
This will serialize inserts into the work orders table per user, but gap-less sequences are always a performance problem.
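As an illustration, the counter update and the insert can even be combined into one atomic statement with a data-modifying CTE; the work_orders column names below are assumptions, and gen_random_uuid() requires PostgreSQL 13 or the pgcrypto extension:

WITH next AS (
   UPDATE counters
   SET work_order_id = work_order_id + 1
   WHERE user_id = 42 -- the current user's ID
   RETURNING work_order_id
)
INSERT INTO work_orders (id, user_id, work_order_id)
SELECT gen_random_uuid(), 42, next.work_order_id
FROM next;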
I'm thinking about creating archive tables in our database.
I can create an AFTER DELETE trigger that would move the row to the archive table, but I need to fill the deleted_by field, which holds the ID of the user that removed the data. To be clear, this user is an entity in our application, not an internal Postgres user.
If Postgres had a way to attach some metadata to the transaction, I could use it inside the trigger to fill this field. Maybe I can use variables for that? Is there an existing solution to this problem?
I suggest you write a stored procedure that inserts the row into the archive table and deletes it from the original table. The API then uses only that procedure to delete a row; the user ID is passed as an argument.
You can still write a trigger that inserts the row into the archive table with a NULL user ID if someone attempts to use a plain DELETE instead of the procedure. In that case, the archive table should store the original table's primary key in a nullable UNIQUE column to prevent duplicates.
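A minimal sketch of the suggested procedure, with invented table and column names (orders, orders_archive, payload):

CREATE PROCEDURE archive_and_delete(p_id bigint, p_user_id bigint)
LANGUAGE plpgsql AS
$$BEGIN
   INSERT INTO orders_archive (id, payload, deleted_by)
   SELECT id, payload, p_user_id
   FROM orders
   WHERE id = p_id;

   DELETE FROM orders WHERE id = p_id;
END;$$;

-- the API deletes rows only through the procedure, passing the application user's ID
CALL archive_and_delete(1, 42);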
I have written a working T-SQL MERGE statement. The premise is that Database A contains records about customers' support calls. If they are returning a product for repair, Database B is to be populated with certain data elements from Database A (e.g. customer name, address, product ID, serial number, etc.). So I will run an SQL Server job that executes an SSIS package every half hour or so, in which the MERGE will do one of the following:

If the support call in Database A requires a product return and it is not in Database B, INSERT it into Database B.

If the support call in Database A requires a product return and it is in Database B, but data has changed, UPDATE it in Database B.

If there is a product return in Database B but it is no longer indicated as a product return in Database A (yes, this can happen: a customer can change their mind at a later time/date and not want to pay for a replacement product), DELETE it from Database B.
My problem is that Database B has an additional table with a 1-to-many FK relationship with the table being populated in the MERGE. I do not know how, or even if, I can use a MERGE statement to first delete the records in the table with the FK constraint before deleting the records as I am currently doing in my MERGE statement.
Obviously, one way would be to get rid of the DELETE in the MERGE and hack out writing IDs to delete in a temp table, then deleting from the FK table, then the PK table. But if I can somehow delete from both tables in WHEN NOT MATCHED BY SOURCE that would be cleaner code. Can this be done?
You can only UPDATE, DELETE, or INSERT into/from one table per query.
However, if you added an ON DELETE CASCADE to the FK relationship, the sub-table would be cleaned up as you delete from the primary table, and it would be handled in a single operation.
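For instance, the FK could be dropped and recreated with cascading deletes; all object names below are invented, since the actual schema isn't shown:

ALTER TABLE dbo.ReturnDetail
DROP CONSTRAINT FK_ReturnDetail_ProductReturn;

ALTER TABLE dbo.ReturnDetail
ADD CONSTRAINT FK_ReturnDetail_ProductReturn
    FOREIGN KEY (ReturnID) REFERENCES dbo.ProductReturn (ReturnID)
    ON DELETE CASCADE;

After that, the DELETE issued by WHEN NOT MATCHED BY SOURCE removes the child rows in the same operation.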
I have 2 tables in SQL Server with primary keys set to identity. They are related and work fine.
I then created a form in VB 2008 and tried inserting some values into my database. The respective primary keys work, but the primary key from the parent table won't show up in the child table. I did create a relationship in VB using ADO.NET, and all the details of my tables are defined in the data table. For example:
cust table (custid, name, ...)
book table (bookid, bookname, ..., custid)
In VB my insert statements are something like Insert into cust(name) values(@name) and insert into book(bookname) values(@bookname). I do not include the ID columns, as they are auto-generated in the database tables.
My question is: how do I insert the custid into the book table when the data is stored back into the tables in my database?
Please advise with an example, as I'm not half as good as you guys.
Kind Regards
You have to know which customer you want to associate with the book before INSERTing the book. If you don't know beforehand, you can't. So somewhere in your form there should be a way to select a customer. Then when you create a book, you grab that customer's ID and insert it along with the other book info.
You don't actually say that you created a foreign key constraint between the two tables!
You need to:
Ensure that you create an explicit foreign key on the BOOK table to point to a customer in the CUST table.
First insert the customer.
Then find out what the customer's auto-generated ID was. That value is in @@IDENTITY. Store it somewhere, e.g. @CUSTID.
Insert the book, specifying @CUSTID as the customer's ID.
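A rough sketch of these steps in T-SQL, with sample values standing in for the form's inputs:

-- step 1: an explicit foreign key from book to cust
ALTER TABLE book
ADD CONSTRAINT FK_book_cust FOREIGN KEY (custid) REFERENCES cust (custid);

-- steps 2-4: insert the customer, capture the generated ID, insert the book
DECLARE @name NVARCHAR(100) = N'Jane Doe'; -- sample values (assumptions)
DECLARE @bookname NVARCHAR(100) = N'SQL Basics';
DECLARE @CUSTID INT;

INSERT INTO cust (name) VALUES (@name);
SET @CUSTID = @@IDENTITY; -- SCOPE_IDENTITY() is a safer alternative if triggers also insert rows

INSERT INTO book (bookname, custid) VALUES (@bookname, @CUSTID);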