I am creating two tables, one called quizes with:
id, quiz_name, plays
and the second one called quizes_taken with:
session_id, quiz_id
and I would like to store the row count in the plays column of the quizes table, but I have no clue how to do it.
In more detail, what I am trying to achieve is that on every insert/update into quizes_taken, the plays column in the quizes table is updated with the row count for that quiz's ID from quizes_taken.
If someone could explain how to achieve it, I'd be grateful!
Thanks in advance!
You could create a view or materialized view for the expected output:
CREATE OR REPLACE VIEW quizes_live_update AS
SELECT a.id, a.quiz_name, COUNT(b.quiz_id) AS plays
FROM quizes a
LEFT JOIN quizes_taken b ON a.id = b.quiz_id   -- LEFT JOIN so quizzes with no plays still show plays = 0
GROUP BY a.id, a.quiz_name;
Postgres: Create view
Although, if you need to proceed with the table approach, you can set up triggers on the quizes_taken table that update the plays column in the quizes table, as sketched below.
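A minimal sketch of such a trigger, assuming the table and column names from the question and a hypothetical function name refresh_plays (PL/pgSQL):
CREATE OR REPLACE FUNCTION refresh_plays() RETURNS trigger AS $$
DECLARE
    affected_quiz integer;
BEGIN
    -- Work out which quiz was touched; OLD is only available on DELETE/UPDATE.
    IF TG_OP = 'DELETE' THEN
        affected_quiz := OLD.quiz_id;
    ELSE
        affected_quiz := NEW.quiz_id;
    END IF;

    -- Recount the rows for that quiz and store the result in quizes.plays.
    UPDATE quizes
    SET plays = (SELECT COUNT(*) FROM quizes_taken WHERE quiz_id = affected_quiz)
    WHERE id = affected_quiz;

    RETURN NULL;  -- the return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_refresh_plays
AFTER INSERT OR UPDATE OR DELETE ON quizes_taken
FOR EACH ROW EXECUTE PROCEDURE refresh_plays();
Keeping a counter column like this duplicates information the view already derives, so it is mainly worth it if plays is read far more often than quizes_taken is written.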
I have two tables: table A(price) and table B(name),
and I want to get the average price of something from table A using the name from table B.
I tried this:
select AVG(a.price) from a,b where b.name='something'
but this returns the average of all the items in table A.
I also tried using a join like this:
select AVG(a.price) from a left join b on b.name='something'
but it returns the same thing as before.
You are missing a key in your join. The price in table A must belong to a record with a specific key, and that same key must be available in table B alongside the name. It might be something like below.
Select avg(price) from A
Join B on A.key = B.key
Where B.name = 'something'
Then again this depends on your table structure.
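For illustration only, assuming a hypothetical shared key column item_id linking the two tables, the structure and query could look like this:
-- Hypothetical schema: table B names the items, table A stores their prices.
CREATE TABLE b (item_id INT PRIMARY KEY, name VARCHAR(100));
CREATE TABLE a (id INT PRIMARY KEY, item_id INT REFERENCES b(item_id), price DECIMAL(10,2));

-- Average price of the item named 'something', matched through the shared key.
SELECT AVG(a.price)
FROM a
JOIN b ON a.item_id = b.item_id
WHERE b.name = 'something';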
Is it possible to add a new column to an existing table from another table using insert or update in conjunction with a full outer join?
In my main table I am missing some records in one column; the other table has all of those records, and I want to bring the full record set into the main table. Something like this:
UPDATE maintable
SET all_records= othertable.records
FROM
FULL JOIN othertable on maintable.col = othertable.records;
where maintable.col has the same id as othertable.records.
I know I could simply join the tables, but I have a lot of comments in the maintable that I don't want to have to copy-paste back in if possible. As I understand it, using where is the equivalent of a left join, so it won't show me what I'm missing.
EDIT:
What I want is effectively a new maintable.col with all the records, which I can then pare down based on the presence of records in other columns from other tables.
Try this:
UPDATE maintable
SET all_records = o.records
FROM othertable o
WHERE maintable.col = o.records;
This is the general syntax to use in postgres when updating via a join.
HTH
EDIT
BTW you will need to change this - I used your example, but you are updating the maintable with the column used for the join! Your SET needs to be something like SET missingcol = o.extracol, as in the sketch below.
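In other words, with missingcol and extracol standing in for whatever your real column names are, the corrected statement would look something like:
UPDATE maintable
SET missingcol = o.extracol      -- placeholder column names
FROM othertable o
WHERE maintable.col = o.records;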
AMENDED GENERALISED ANSWER (following off-line chat)
To take a simplified example, suppose that you have two tables maintable and subtable, each with the same columns, but where the subtable has extra records. For both tables id is the primary key. To fill maintable with the missing records, for pre 9.5 versions of Postgres you must use the following syntax:
INSERT INTO maintable (SELECT * FROM subtable s WHERE NOT EXISTS
(SELECT 1 FROM maintable m WHERE m.id = s.id));
Since 9.5 there is a (preferred) alternative:
INSERT INTO maintable (SELECT * FROM subtable) ON CONFLICT DO NOTHING;
This is preferred because (apart from being simpler) it avoids the situation that has been known to arise in the former, where a race condition is created between the INSERT and the sub-SELECT.
Obviously when the columns are different, you need to specify in the INSERT statement which columns are inserted from which. Something like:
INSERT INTO maintable (id, ColA, ColB)
(SELECT id, ColE, ColG FROM subtable ....)
Similarly the common field might not be id in both tables. However, the simplified example should be enough to point you in the right direction.
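To make that concrete, here is a hypothetical sketch using made-up column names (col_a, col_b in maintable mapping to col_e, col_g in subtable), with id as the primary key in both tables:
-- Pre-9.5 style, with an explicit column mapping:
INSERT INTO maintable (id, col_a, col_b)
SELECT s.id, s.col_e, s.col_g
FROM subtable s
WHERE NOT EXISTS (SELECT 1 FROM maintable m WHERE m.id = s.id);

-- 9.5+ style; ON CONFLICT (id) relies on id being a primary key or unique column:
INSERT INTO maintable (id, col_a, col_b)
SELECT id, col_e, col_g FROM subtable
ON CONFLICT (id) DO NOTHING;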
I got this trigger as a tip and I would like to know how it works with updates. It is supposed to create a record every time there is an update or insert action on my main table.
create trigger tblTriggerAuditRecord on tblOrders
after update, insert
as
begin
insert into tblOrdersAudit
(OrderID, OrderApprovalDateTime, OrderStatus, UpdatedBy, UpdatedOn )
select i.OrderID, i.OrderApprovalDateTime, i.OrderStatus, SUSER_SNAME(), getdate()
from tblOrders t
inner join inserted i on t.OrderID=i.OrderID
end
go
From my understanding, it inserts all the inserted records from the main table into the stated columns of the audit table, including the timestamp and user, but what about updates? What if I update rows in my main table? Shouldn't I also have a join on the updated records?
Hope my question is clear, thanks a lot for help!
There is no table called updated when a trigger is fired. In the case of an update, the old values from your main table are found in a table called deleted, and the new ones are (as with an insert) in the table inserted.
That's the same as in this example:
UPDATE tabEmployee SET Salary = Salary * 1.05
OUTPUT inserted.EmployeeName, deleted.Salary, inserted.Salary
INTO tabSalaryHistory (EmployeeName, OldSalary, NewSalary)
In this example, every employee gets a salary increase. The value before the increase is stored in the output table deleted and the new value in inserted.
Have a look at this for better understanding.
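Applied to the trigger above, if you also wanted the audit row to record the previous status on updates, a rough sketch could look like this (OldOrderStatus is a hypothetical extra column in tblOrdersAudit):
create trigger tblTriggerAuditRecord on tblOrders
after update, insert
as
begin
  insert into tblOrdersAudit
    (OrderID, OrderApprovalDateTime, OrderStatus, OldOrderStatus, UpdatedBy, UpdatedOn)
  select i.OrderID, i.OrderApprovalDateTime, i.OrderStatus,
         d.OrderStatus,            -- NULL for plain inserts, the old value for updates
         SUSER_SNAME(), getdate()
  from inserted i
  left join deleted d on d.OrderID = i.OrderID
end
go
Note that joining back to tblOrders (as in the original) is not needed; the inserted pseudo-table already contains the new row values.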
Good Day,
I'm currently using PostgreSQL as my backend and I have to make huge changes to my table fields.
I will be using two tables.
Table 1: Old Index, Product Id, Address, Contact no
Table 2: New Index, Old Index, Product Id, Address, Contact no, Email
I have to migrate all details from Table 1 to Table 2. I'm using a different index for Table 2.
For my other tables to recognize my old index I used this query
Update Table 2 Set OldIndex =Table2.index
From(select Oldindex from Table 1)as new,Table 1
Where Table1.Productid =Table2.Productid
I have other tables related to Table 1, so my goal is to replace the old index with the new index and hope that the other tables can see the changes too.
But I'm not sure I'm doing this right. My query is slow. I hope someone can test my query and point me in the right direction if I'm doing it all wrong; thank you in advance.
Would you mind trying MERGE?
MERGE INTO Table2 AS b
USING Table1 AS p
ON p.product_id = b.product_id
WHEN MATCHED THEN
  UPDATE SET OldIndex = p.OldIndex;
I do not know how it works for postgresql, but you can find some samples here: https://wiki.postgresql.org/wiki/MergeTestExamples
The way to do this in PostgreSQL is to use a writable CTE (available in 9.1 and later).
In this way you would do something like:
WITH up AS (UPDATE table2
SET ....
FROM table1 t1
WHERE t1.product_id = table2.product_id
RETURNING product_id)
INSERT INTO table2 (...)
SELECT ...
FROM table1
WHERE product_id NOT IN (select product_id from up);
You can find some examples here.
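As a concrete but hypothetical illustration, assuming both tables have product_id, address and contact_no columns, the pattern could be filled in like this:
WITH up AS (
    UPDATE table2 t2
    SET address    = t1.address,
        contact_no = t1.contact_no
    FROM table1 t1
    WHERE t1.product_id = t2.product_id
    RETURNING t2.product_id
)
INSERT INTO table2 (product_id, address, contact_no)
SELECT product_id, address, contact_no
FROM table1
WHERE product_id NOT IN (SELECT product_id FROM up);
The UPDATE handles products that already exist in table2; the INSERT then adds only the products the UPDATE did not touch.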
I have a table in my database, and I want each row to have a unique id and the rows to be numbered sequentially.
For example: I have 10 rows, each with an id, starting at 0 and ending at 9. When I remove a row from the table, let's say row number 5, a "hole" appears. And afterwards when I add more data, the "hole" is still there.
It is important for me to know the exact number of rows and to have data at every row so that I can access my table arbitrarily.
Is there a way in SQLite to do this? Or do I have to manage removing and adding data manually?
Thank you in advance,
Ilya.
It may be worth considering whether you really want to do this. Primary keys usually should not change through the lifetime of the row, and you can always find the total number of rows by running:
SELECT COUNT(*) FROM table_name;
That said, the following trigger should "roll down" every ID number whenever a delete creates a hole:
CREATE TRIGGER sequentialize_ids AFTER DELETE ON table_name FOR EACH ROW
BEGIN
UPDATE table_name SET id=id-1 WHERE id > OLD.id;
END;
I tested this on a sample database and it appears to work as advertised. If you have the following table:
id name
1 First
2 Second
3 Third
4 Fourth
and you delete the row where id=2, afterwards the table will be:
id name
1 First
2 Third
3 Fourth
This trigger can take a long time and has very poor scaling properties (it takes longer for each row you delete and each remaining row in the table). On my computer, deleting 15 rows at the beginning of a 1000 row table took 0.26 seconds, but this will certainly be longer on an iPhone.
I strongly suggest that you re-think your design. In my opinion you're asking for trouble in the future (e.g. if you create another table and want to have some relations between the tables).
If you want to know the number of rows just use:
SELECT count(*) FROM table_name;
If you want to access rows in the order of id, just define this field using PRIMARY KEY constraint:
CREATE TABLE test (
id INTEGER PRIMARY KEY,
...
);
and get rows using ORDER BY clause with ASC or DESC:
SELECT * FROM table_name ORDER BY id ASC;
SQLite creates an index for the primary key field, so this query is fast.
I think that you would be interested in reading about LIMIT and OFFSET clauses.
The best source of information is the SQLite documentation.
If you don't want to take Stephen Jennings's very clever but performance-killing approach, just query a little differently. Instead of:
SELECT * FROM mytable WHERE id = ?
Do:
SELECT * FROM mytable ORDER BY id LIMIT 1 OFFSET ?
Note that OFFSET is zero-based, so you may need to subtract 1 from the variable you're indexing in with.
If you want to reclaim deleted row ids, the VACUUM command or the auto_vacuum pragma may be what you seek; see the links and the rough sketch below:
http://www.sqlite.org/faq.html#q12
http://www.sqlite.org/lang_vacuum.html
http://www.sqlite.org/pragma.html#pragma_auto_vacuum
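As a rough sketch (the exact rules are in the linked docs, so treat this as an assumption to verify):
-- auto_vacuum generally needs to be set before any tables are created,
-- or be followed by a VACUUM, for the change to take effect.
PRAGMA auto_vacuum = FULL;

-- VACUUM rebuilds the database file; for tables without an explicit
-- INTEGER PRIMARY KEY column it may also renumber rowids.
VACUUM;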