I have an appointment table with a patient_id column. I want each patient to be able to make a maximum of 3 appointments, so I want a trigger that checks the total number of appointments made by a patient (using patient_id) before data is inserted and raises an error if the count exceeds 3.
Can anyone tell me how I can do that?
Thanks
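The question doesn't say which database engine is in use, so here is a minimal sketch assuming MySQL; the trigger name and error message are made up, and the table/column names are taken from the question. Checking for 3 or more existing rows before the insert is what enforces a maximum of three appointments:
DELIMITER //
CREATE TRIGGER appointment_limit
BEFORE INSERT ON appointment
FOR EACH ROW
BEGIN
    DECLARE existing INT;
    -- count the appointments this patient already has
    SELECT COUNT(*) INTO existing
    FROM appointment
    WHERE patient_id = NEW.patient_id;
    -- 3 or more existing rows means the new one would exceed the maximum
    IF existing >= 3 THEN
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'A patient can make at most 3 appointments';
    END IF;
END//
DELIMITER ;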
I am creating 2 tables, one called quizes with:
id, quiz_name, plays
and the second one called quizes_taken with:
session_id, quiz_id
and I would like to store the row count in the plays column of the quizes table, but I have no clue how to do it.
In more detail, what I am trying to achieve is that on every update/insert into quizes_taken, the plays column in table quizes is updated with the row count for that quiz's ID from table quizes_taken.
If someone could explain how to achieve it, I'd be grateful!
Thanks in advance!
You could make a view or materialized view for the expected result output:
CREATE OR REPLACE VIEW quizes_live_update AS
SELECT a.id , a.quiz_name, COUNT(b.quiz_id) as plays
FROM quizes a
JOIN quizes_taken b ON a.id = b.quiz_id
GROUP BY a.id, a.quiz_name
Postgres: Create view
Although, if you need to proceed with the table approach, you can set up upsert triggers on the quizes_taken table to perform the update and insert on the plays column of the quizes table, as sketched below.
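A rough sketch of that trigger approach, assuming PostgreSQL (the function and trigger names here are made up, and EXECUTE FUNCTION needs PostgreSQL 11+; older versions use EXECUTE PROCEDURE instead). It recounts plays for the affected quiz on every insert/update into quizes_taken:
CREATE OR REPLACE FUNCTION refresh_plays() RETURNS trigger AS $$
BEGIN
    -- recompute the play count for the quiz touched by this row
    UPDATE quizes
       SET plays = (SELECT COUNT(*) FROM quizes_taken WHERE quiz_id = NEW.quiz_id)
     WHERE id = NEW.quiz_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER quizes_taken_refresh_plays
AFTER INSERT OR UPDATE ON quizes_taken
FOR EACH ROW EXECUTE FUNCTION refresh_plays();
A DELETE trigger would be needed as well if plays should also go down when rows are removed, but the question only asked about update/insert.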
I got this trigger as a tip and I would like to know how it works with updates. It is supposed to create a record every time there is an update or insert action on my main table.
create trigger tblTriggerAuditRecord on tblOrders
after update, insert
as
begin
insert into tblOrdersAudit
(OrderID, OrderApprovalDateTime, OrderStatus, UpdatedBy, UpdatedOn )
select i.OrderID, i.OrderApprovalDateTime, i.OrderStatus, SUSER_SNAME(), getdate()
from tblOrders t
inner join inserted i on t.OrderID=i.OrderID
end
go
From my understanding, it inserts all the inserted records into the audit table for the stated columns, including the timestamp and user, but what about updates? What if I update rows in my main table? Shouldn't I also have a join on the updated records?
I hope my question is clear, thanks a lot for the help!
There is no table called updated when a trigger is fired. In the case of an update, you'll find the old values from your main table in a table called deleted, and the new ones are (as in the case of an insert) in the table inserted.
That's the same as in this example:
UPDATE tabEmployee SET Salary = Salary * 1.05
OUTPUT inserted.EmployeeName, deleted.Salary, inserted.Salary
INTO tabSalaryHistory (EmployeeName, OldSalary, NewSalary)
In this example, every employee gets a salary increase. The value from before the increase comes from deleted and the new value from inserted; the OUTPUT clause writes both into tabSalaryHistory.
Have a look at this for better understanding.
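To make that concrete for the trigger in the question, a rewritten version along these lines joins inserted to deleted so that an update records both the old and the new status; on an insert the LEFT JOIN simply yields NULL for the old value. Note the OldStatus column in the audit table is an assumption, not something from the original post:
CREATE TRIGGER tblTriggerAuditRecord ON tblOrders
AFTER UPDATE, INSERT
AS
BEGIN
    INSERT INTO tblOrdersAudit
        (OrderID, OrderApprovalDateTime, OrderStatus, OldStatus, UpdatedBy, UpdatedOn)
    SELECT i.OrderID, i.OrderApprovalDateTime, i.OrderStatus,
           d.OrderStatus,            -- NULL for rows that were just inserted
           SUSER_SNAME(), GETDATE()
    FROM inserted i
    LEFT JOIN deleted d ON d.OrderID = i.OrderID;
END
GO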
I need to automatically insert a row into a stats table, identified by the month number, if the new month does not yet exist as a row.
'cards' is a running count of individual IDs that stores a current value (which gets reset at rollover time), a rollover count, and a running total of all events on that ID.
'stats' keeps a running count of all IDs' events and how many rollovers occurred in a given month.
CREATE TABLE IDS (ID_Num VARCHAR(30), Curr_Count INT, Rollover_Count INT, Total_Count INT);
CREATE TABLE stats(Month char(10), HitCount int, RolloverCount int);
CREATE TRIGGER update_Tstats BEFORE UPDATE OF Total_Count ON IDS
WHEN 0=(SELECT HitCount FROM stats WHERE Month = strftime('%m','now'))
BEGIN
INSERT INTO stats (Month, HitCount, RolloverCount) VALUES (strftime('%m', 'now'),0,0);
END;
(I also tried IS NULL at the other end of the WHEN clause... still no joy.)
I did have it working to a point, but as the rollover value was updated twice per cycle (the value is changed up and down via an SQL query I have in a Python script), it gave me double-ups in the stats rollover count. So now I'm running a double query in my script. However, this all falls over if the current month number does not exist in the stats table.
All I need to do is check whether a blank record exists for the current month for the Python script's UPDATE queries to run against, and if not, INSERT one. The script itself can't do a 'run once' type of query on initial start-up, because it may run for days, including spanning a new-month changeover.
Any assistance would be hugely appreciated.
To check whether a record exists, use EXISTS:
CREATE TRIGGER ...
WHEN NOT EXISTS (SELECT 1 FROM stats WHERE Month = ...)
BEGIN
INSERT INTO stats ...
END;
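Filled in with the schema from the question, the whole trigger might look like this (an untested sketch):
CREATE TRIGGER update_Tstats
BEFORE UPDATE OF Total_Count ON IDS
WHEN NOT EXISTS (SELECT 1 FROM stats WHERE Month = strftime('%m', 'now'))
BEGIN
    -- create an empty row for the current month so later UPDATEs have a target
    INSERT INTO stats (Month, HitCount, RolloverCount)
    VALUES (strftime('%m', 'now'), 0, 0);
END;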
As my question title says, how can I get the maximum value from a table?
In my app I have a table named dataset_master.
The table has a field named dataset_id, which I add manually as an auto-increment value.
So, the first time, when no record has been inserted in the table yet and I insert the first record, I set dataset_id to 1 (this happens only the first time).
Then, before inserting each following record, I fire a query to get the max value of dataset_id and insert dataset_id + 1 (this is for the next record, and so on).
In my case I use the following query to get the maximum dataset_id:
SELECT MAX(dataset_id) FROM dataset_master where project_id = 1
Here in my application I want to get the maximum value of the dataset_id field from the dataset_master table.
This query works properly when I insert records into dataset_master: each time I get the proper maximum dataset_id. But when I delete records in sequence (say 1 to 5 out of 10) and then insert a new record, I still get the last maximum number each time, like this:
if my table has 10 records, then dataset_id runs from 1 to 10;
when I delete records 1 to 5, records (and dataset_ids) 6 to 10 remain in the table;
and then when I insert a new record, I still get 10 (the maximum number) each time, so each new record gets dataset_id + 1, i.e. 11.
What the problem is, I don't know (maybe a mistake in the query?); please give your suggestions.
You need to reset the sequence in the sqlite_sequence table. I'd advise you not to worry about this though, as by the time it becomes a problem, this will be the least of your headaches.
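For completeness, resetting that counter looks roughly like the statement below; this only matters if the table was actually created with AUTOINCREMENT, and the table name is just the one from the question:
-- sqlite_sequence holds one (name, seq) row per AUTOINCREMENT table
UPDATE sqlite_sequence SET seq = 0 WHERE name = 'dataset_master';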
I think the problem is not in your query, but in your insert. Do you force dataset_id when inserting new rows?
I have a table in my database and I want each row in my table to have a unique id and the rows to be numbered sequentially.
For example: I have 10 rows, each with an id, starting from 0 and ending at 9. When I remove a row from the table, let's say row number 5, a "hole" appears. And afterwards I add more data, but the "hole" is still there.
It is important for me to know the exact number of rows and to have data at every row position in order to access my table arbitrarily.
Is there a way in SQLite to do this? Or do I have to manually manage the removing and adding of data?
Thank you in advance,
Ilya.
It may be worth considering whether you really want to do this. Primary keys usually should not change through the lifetime of the row, and you can always find the total number of rows by running:
SELECT COUNT(*) FROM table_name;
That said, the following trigger should "roll down" every ID number whenever a delete creates a hole:
CREATE TRIGGER sequentialize_ids AFTER DELETE ON table_name FOR EACH ROW
BEGIN
UPDATE table_name SET id=id-1 WHERE id > OLD.id;
END;
I tested this on a sample database and it appears to work as advertised. If you have the following table:
id name
1 First
2 Second
3 Third
4 Fourth
And delete where id=2, afterwards the table will be:
id name
1 First
2 Third
3 Fourth
This trigger can take a long time and has very poor scaling properties (it takes longer for each row you delete and each remaining row in the table). On my computer, deleting 15 rows at the beginning of a 1000 row table took 0.26 seconds, but this will certainly be longer on an iPhone.
I strongly suggest that you re-think your design. In my opinion you're asking for trouble in the future (e.g. if you create another table and want to have some relations between the tables).
If you want to know the number of rows just use:
SELECT count(*) FROM table_name;
If you want to access rows in the order of id, just define this field using PRIMARY KEY constraint:
CREATE TABLE test (
id INTEGER PRIMARY KEY,
...
);
and get rows using ORDER BY clause with ASC or DESC:
SELECT * FROM table_name ORDER BY id ASC;
Sqlite creates an index for the primary key field, so this query is fast.
I think that you would be interested in reading about LIMIT and OFFSET clauses.
The best source of information is the SQLite documentation.
If you don't want to take Stephen Jennings's very clever but performance-killing approach, just query a little differently. Instead of:
SELECT * FROM mytable WHERE id = ?
Do:
SELECT * FROM mytable ORDER BY id LIMIT 1 OFFSET ?
Note that OFFSET is zero-based, so you may need to subtract 1 from the variable you're indexing with.
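For example, to fetch what you would think of as row number 5, you ask for offset 4:
SELECT * FROM mytable ORDER BY id LIMIT 1 OFFSET 4;  -- fifth row in id order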
If you want to reclaim deleted row ids, the VACUUM command or pragma may be what you seek:
http://www.sqlite.org/faq.html#q12
http://www.sqlite.org/lang_vacuum.html
http://www.sqlite.org/pragma.html#pragma_auto_vacuum