I am programming for iPhone and I am using an SQLite DB for my app. I have a situation where I want to insert records into a table only if they don't already exist; otherwise the records should not be inserted.
How can I do this? Can anybody suggest a suitable query for this?
Thank you, one and all.
Looking at SQLite's INSERT page, http://www.sqlite.org/lang_insert.html, you can do it using the following syntax:
INSERT OR IGNORE INTO tablename ....
Example
INSERT OR IGNORE INTO tablename(id, value, data) VALUES(2, 4562, 'Sample Data');
Note: you need a KEY on the table columns that uniquely identifies a row. INSERT OR IGNORE only skips the insert when the new row would duplicate such a KEY.
In the above example, if you have a KEY on id, then another row with id = 2 will not be inserted.
If you have a KEY on the combination of id and value, then another row with id = 2 and value = 4562 will not be inserted.
In short, there must be a key that uniquely identifies a row; only then does the database know there is a duplicate which should not be allowed.
Otherwise, if you do not have a KEY, you would need to go the SELECT-and-then-check-whether-a-row-is-already-there route. But even then, whatever condition you are checking on the columns, you can usually add those columns as a KEY to the table and simply use INSERT OR IGNORE.
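For illustration, here is a minimal sketch reusing the example table from above (column names are just the example ones) showing how a UNIQUE constraint on id makes the second insert a no-op:
CREATE TABLE tablename (
    id    INTEGER UNIQUE,
    value INTEGER,
    data  TEXT
);
INSERT OR IGNORE INTO tablename(id, value, data) VALUES(2, 4562, 'Sample Data');
-- this second statement is silently skipped because id = 2 already exists
INSERT OR IGNORE INTO tablename(id, value, data) VALUES(2, 9999, 'Other Data');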
In SQLite it is not possible to ALTER an existing table to add a constraint like UNIQUE or PRIMARY KEY. For that you need to recreate the table. Look at this FAQ on sqlite.org:
http://sqlite.org/faq.html#q11
Hello Sankar, what you can do is perform a SELECT query for the record you wish to insert and then check the response (for example via SQLite's SQLITE_NOTFOUND result code) to see whether that record already exists. If it doesn't exist you insert it; otherwise you skip the insert.
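As a rough sketch of that approach, borrowing the example table from the other answer (names assumed), you either check first and insert only if nothing comes back, or fold both steps into one statement:
SELECT 1 FROM tablename WHERE id = 2;
-- if the SELECT returns no row, go ahead and INSERT; or, as a single statement:
INSERT INTO tablename(id, value, data)
SELECT 2, 4562, 'Sample Data'
WHERE NOT EXISTS (SELECT 1 FROM tablename WHERE id = 2);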
I hope this is helpful.
Related
I have a non-empty PostgreSQL table with a GENERATED ALWAYS AS IDENTITY column id. I do a bulk insert with the C++ binding pqxx::stream_to, which I'm assuming uses COPY FROM. My problem is that I want to know the ids of the newly created rows, but COPY FROM has no RETURNING clause. I see several possible solutions, but I'm not sure if any of them is good, or which one is the least bad:
Provide the ids manually through COPY FROM, taking care to give the values which the identity sequence would have provided, then afterwards synchronize the sequence with setval(...).
First stream the data to a temp table with a custom index column for ordering. Then do something like
INSERT INTO foo (col1, col2)
SELECT ttFoo.col1, ttFoo.col2 FROM ttFoo
ORDER BY ttFoo.idx RETURNING foo.id
and depend on the fact that the identity sequence produces ascending numbers to correlate them with ttFoo.idx. (I cannot do RETURNING ttFoo.idx too, because only the inserted row is available there, and it doesn't contain idx.)
Query the current value of the identity sequence prior to insertion, then check afterwards which rows are new.
I would assume that this is a common situation, yet I don't see an obviously correct solution. What do you recommend?
You can find out which rows have been affected by your current transaction using the system columns. The xmin column contains the ID of the inserting transaction, so to return the id values you just copied, you could:
BEGIN;
COPY foo(col1,col2) FROM STDIN;
SELECT id FROM foo
WHERE xmin::text = (txid_current() % (2^32)::bigint)::text
ORDER BY id;
COMMIT;
The WHERE clause comes from this answer, which explains the reasoning behind it.
I don't think there's any way to optimise this with an index, so it might be too slow on a large table. If so, I think your second option would be the way to go, i.e. stream into a temp table and INSERT ... RETURNING.
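For that temp-table route, a rough sketch could look like this (table and column names are taken from the question; the column types are assumptions):
BEGIN;
CREATE TEMP TABLE ttFoo (idx int, col1 text, col2 text);
-- stream the rows into ttFoo here (pqxx::stream_to / COPY ttFoo FROM STDIN)
INSERT INTO foo (col1, col2)
SELECT col1, col2 FROM ttFoo ORDER BY idx
RETURNING id;
COMMIT;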
I think you can create id with type uuid.
First, generate random ids on the client, then bulk insert them; that way you will not need to return the ids from the database.
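A minimal sketch of that idea (table and column names are placeholders): define id as uuid, generate the values client-side with any uuid library, and include them as a regular column in the stream/COPY, so there is nothing to fetch back afterwards.
CREATE TABLE foo (
    id   uuid PRIMARY KEY,
    col1 text,
    col2 text
);
-- the client supplies the uuid values itself as part of the bulk load
COPY foo (id, col1, col2) FROM STDIN;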
I have an insert that records data from a web form and inserts it into my table. I'd like to run an update immediately after my insert that reads the previously inserted record, finds all the null fields, and updates that record's null fields with the string --.
The data type for all my fields is varchar.
I have 20+ forms, each with 100+ fields, so I'm looking for a function that is smart enough to read/update the fields that have null values without specifically enumerating/writing out each field in the update statement. That would just take way too long.
Does anyone know of a way to simply read which fields have null values and update any fields that are null to a string, in my case --?
If you can't alter your existing code, I would go with an insert trigger, so after every insert you can check for the null values and update them like below:
create trigger triggername
on yourtable
after insert
as
begin
    update t
    set t.col1 = isnull(i.col1, '--'),
        t.col2 = isnull(i.col2, '--')
        -- ...rest of the columns
    from yourtable t
    join inserted i
        on i.matchingcol = t.matchingcol
end
The issue with the above approach is that you will have to touch all inserted rows. I would still go with this approach, since filtering many columns with many OR clauses is not good for performance.
If it is just for display purposes, I would go with a view.
Instead of an update after the insert, you may try changing the table structure.
Set the default value of the columns to --. If no value is provided for a column during the insert, -- will be inserted automatically.
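In SQL Server that would look roughly like this per column (the table, column, and constraint names are placeholders); note that the default only applies when the column is omitted from the INSERT, not when an explicit NULL is supplied:
ALTER TABLE yourtable
ADD CONSTRAINT DF_yourtable_col1 DEFAULT '--' FOR col1;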
My table has three rows and I don't want to add any more rows.
However, I want to be able to SELECT and UPDATE on the table.
What is the best way to block further inserts?
Assuming you have a primary key named id with the current values 1,2 and 3 you could do something like this:
alter table the_table
add constraint limit_values check (id in (1,2,3));
Now if you try to insert a new row, you either get a primary key violation (because 1,2 and 3 already exist) or you get a check constraint violation when you try to insert a different ID value that does not yet exist.
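For example, with that constraint in place (the values here are only illustrative):
insert into the_table (id) values (4);  -- rejected: violates the limit_values check constraint
insert into the_table (id) values (1);  -- rejected: primary key violation, id 1 already exists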
I'm currently working with Firebird and attempting to use the UPDATE OR INSERT functionality to solve a particular new case within our software. Basically, we need to pull data off of a source and put it into an existing table, then update that data at regular intervals and add any new references. The source is not a database, so it isn't a matter of using MERGE to link two tables (unless we make a separate table and then merge it, but that seems unnecessary).
The problem rests on the fact that we cannot use the primary key of the existing table for matching, because we need to match based on the ID we get from the source. We can use the MATCHING clause no problem, but the issue is that the primary key of the existing table gets updated to the next key every time, because it has to be in the query due to the chance of an insert. Here is the query (along with the C# parameter additions) to demonstrate the problem.
UPDATE OR INSERT INTO existingtable (PrimaryKey, UniqueSourceID, Data) VALUES (?,?,?) MATCHING (UniqueSourceID);
this.AddInParameter("PrimaryKey", FbDbType.Integer, itemID);
this.AddInParameter("UniqueSourceID", FbDbType.Integer, source.id);
this.AddInParameter("Data", FbDbType.SmallInt, source.data);
The problem is that every time the UPDATE fires, the primary key also changes to the next incremented key. I need a way to leave the primary key alone when updating, but still supply it when the statement inserts.
Do not generate the primary key manually; let a trigger generate it when necessary:
CREATE SEQUENCE seq_existingtable;
SET TERM ^ ;
CREATE TRIGGER Gen_PK FOR existingtable
ACTIVE BEFORE INSERT
AS
BEGIN
IF (NEW.PrimaryKey IS NULL) THEN NEW.PrimaryKey = NEXT VALUE FOR seq_existingtable;
END^
SET TERM ; ^
Now you can omit the PK field from your statement:
UPDATE OR INSERT INTO existingtable (UniqueSourceID, Data) VALUES (?,?) MATCHING (UniqueSourceID);
and when the insert is triggered by the statement then the trigger will take care of creating the PK. If you need to know the generated PK then use the RETURNING clause of the UPDATE OR INSERT statement.
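That last part would look something like this (column names as in the question):
UPDATE OR INSERT INTO existingtable (UniqueSourceID, Data)
VALUES (?, ?)
MATCHING (UniqueSourceID)
RETURNING PrimaryKey;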
I have a table:
CREATE TABLE Demo (
id uniqueidentifier PRIMARY KEY
);
I want to create an AFTER UPDATE (or INSTEAD OF UPDATE) trigger, and I need to know OLD and NEW values of "id" column inside the trigger.
Is it possible to do this?
No. This isn't possible.
If the table has 2 rows and you update both of them there is no way of knowing which row in INSERTED maps to which in DELETED.
You can use the OUTPUT clause for this (outside a trigger) though.
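For example, outside a trigger (the NEWID() update is only illustrative), each output row pairs the old and new value for one updated row, which is exactly the mapping a trigger cannot give you:
UPDATE Demo
SET id = NEWID()
OUTPUT deleted.id AS old_id, inserted.id AS new_id;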
Or you could add an IDENTITY column to the table and use that to join INSERTED and DELETED (it is not permitted to update identity columns so that gives you something immutable)