Postgres SELECT & DELETE WHERE command returns empty while the row exists - postgresql

I'm currently writing an app that needs to remove a row from a table.
I have a junction table where each row has 2 foreign keys. To test it, I hardcoded a row (the test one) and then tried to post another using pg, hence there are 2 rows.
I tried to delete using pg with the line
DELETE FROM playlistsongs WHERE playlist_id = $1 AND song_id = $2 RETURNING id
pg returns not found. So out of curiosity, I tried to select the row from the command line, and it returns this:
Is there any explanation for why the query did not find any data? And how would I delete the row?
Thanks in advance!

Related

ignite delete row issue

I created a table user(_key(user_id,type), user_id(int), type(string), name(string)) and had a row (1, "2", "Scott"). Then I updated the row values to (2, "2", "admin") and then deleted the row with delete from user where user_id = 2 and type = "2". The SQL scripts executed successfully, but when I select * from user again, the row is still there. Ignite version is 2.9.1. Does anybody have this issue?
Ignite doesn't support primary key modification. As a result, you're not able to change the "user_id" value since it's part of the PK. As a workaround, you can remove the existing row and insert a new one with the updated value.
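A minimal sketch of that workaround, assuming the user table and the values from the question (delete by the old key, then insert the row with the new key):
DELETE FROM user WHERE user_id = 1 AND type = '2';
INSERT INTO user (user_id, type, name) VALUES (2, '2', 'admin');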

Delete duplicates from a huge table in Postgresql

I have an unusual problem: I need to delete duplicate records from a table in PostgreSQL. Since I have duplicate records, I don't have a primary key or unique index on this table. The table contains about 20 million records and it has duplicates in it. The query below is taking too long:
DELETE FROM temp a USING temp b WHERE a.recordid = b.recordid AND a.ctid < b.ctid;
So what would be a better approach to handle such a huge table with no index on it?
I appreciate any help.
If you have enough free space, you can copy the table without duplicates, then drop the old table and rename the new one,
like this
INSERT INTO new_table
SELECT DISTINCT ON (column) *
FROM old_table
ORDER BY column ASC;
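The "remove old table and rename new table" part would then look roughly like this, using the same placeholder table names as above:
DROP TABLE old_table;
ALTER TABLE new_table RENAME TO old_table;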
Use COPY TO to dump the table.
Then Unix sort -u to de-duplicate it.
Drop or truncate the table in Postgres, use COPY FROM to read it back in.
Add a primary key column.
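A hedged sketch of those steps, assuming the temp table from the question and an illustrative file path (the sort runs in a shell, not in Postgres):
COPY temp TO '/tmp/temp_dump.csv' WITH (FORMAT csv);
-- in a shell: sort -u /tmp/temp_dump.csv > /tmp/temp_dedup.csv
TRUNCATE temp;
COPY temp FROM '/tmp/temp_dedup.csv' WITH (FORMAT csv);
ALTER TABLE temp ADD COLUMN id bigserial PRIMARY KEY;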

fastest way of inserting data into a table

I have a Postgres database, and I have inserted some data into the table. Because of issues with the internet connection, some of the data couldn't be written. The file that I am trying to write into the database is large (about 330712484 rows; even the wc -l command takes a while to complete).
Now, the column row_id is the (integer) primary key, and is already indexed. Since some of the rows could not be inserted into the table, I wanted to insert these specific rows into the table. (I estimate only about 1.8% of the data isn't inserted into the table ...) As a start, I tried to see if the primary keys were already inside the database, like so:
import csv
import psycopg2

conn = psycopg2.connect(connector)
cur = conn.cursor()
with open(fileName) as f:
    # read and parse the CSV header line
    header = f.readline().strip()
    header = list(csv.reader([header]))[0]
    print(header)
    # check the first few rows against the table
    for i, l in enumerate(f):
        if i > 10:
            break
        print(l.strip())
        row_id = l.split(',')[0]
        query = 'select * from raw_data.chartevents where row_id={}'.format(row_id)
        cur.execute(query)
        print(cur.fetchall())
cur.close()
conn.close()
Even for the first few rows of data, checking whether the primary key exists takes a really long time.
What would be the fastest way of doing this?
The fastest way to insert data in PostgreSQL is using the COPY protocol, which is implemented in psycopg2. COPY will not allow you to check whether the target id already exists, though. The best option is to COPY your file's contents into a temporary table, then INSERT or UPDATE from it, as in the Batch Update article I wrote on my http://tapoueh.org blog a while ago.
With a recent enough version of PostgreSQL you may use
INSERT INTO ...
SELECT * FROM copy_target_table
ON CONFLICT (pkey_name) DO NOTHING
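A minimal end-to-end sketch of that approach, assuming the raw_data.chartevents table and row_id primary key from the question; the staging table name and file path are illustrative:
CREATE TEMP TABLE chartevents_stage (LIKE raw_data.chartevents INCLUDING DEFAULTS);
COPY chartevents_stage FROM '/path/to/data.csv' WITH (FORMAT csv, HEADER true);
INSERT INTO raw_data.chartevents
SELECT * FROM chartevents_stage
ON CONFLICT (row_id) DO NOTHING;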
Can I offer a workaround?
The index will be checked for each row inserted, and Postgres performs the whole insert in a single transaction, so you are effectively storing all this data to disk before it is committed.
Could I suggest you drop the indexes to avoid this slowdown, then split the file into smaller files using head -n [int] > newfile or something similar, and then perform the COPY commands separately for each one.
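A rough sketch of that split-and-load idea, under the assumption that the primary key constraint is named chartevents_pkey and the chunk files come from a shell split of the CSV; all names and paths here are illustrative:
-- in a shell: split -l 1000000 data.csv chunk_
ALTER TABLE raw_data.chartevents DROP CONSTRAINT chartevents_pkey;
COPY raw_data.chartevents FROM '/path/to/chunk_aa' WITH (FORMAT csv);
-- repeat the COPY for each chunk file, then restore the primary key:
ALTER TABLE raw_data.chartevents ADD PRIMARY KEY (row_id);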

Duplicate Key error when using INSERT DEFAULT

I am getting a duplicate key error, DB2 SQL Error: SQLCODE=-803, SQLSTATE=23505, when I try to INSERT records. The primary key is one column, INTEGER 4, Generated, and it is the first column.
The insert looks like this: INSERT INTO SCHEMA.TABLE1 values (DEFAULT, ?, ?, ...)
It's my understanding that using the value DEFAULT will just let DB2 auto-generate the key at the time of insert, which is what I want. This works most of the time, but sometimes/randomly I get the duplicate key error. Thoughts?
More specifically, I'm running against DB2 9.7.0.3, using Scriptella to copy a bunch of records from one database to another. Sometimes I can process a bunch with no problems, other times I'll get the error right away, other times after 2 records, or 20 records, or 30 records, etc. Does not seem to be a pattern, nor is it the same record every time. If I change the data to copy 1 record instead of a bunch, sometimes I'll get the error one time, then it's fine the next time.
I thought maybe some other process was inserting records during my batch program, and creating keys at the same time. However, the tables I'm copying TO should not have any other users/processes trying to INSERT records during this same time frame, although there could be READS happening.
Edit: adding create info:
Create table SCHEMA.TABLE1 (
SYSTEM_USER_KEY INTEGER NOT NULL
generated by default as identity (start with 1 increment by 1 cache 20),
COL2...,
)
alter table SCHEMA.TABLE1
add constraint SYSTEM_USER_SYSTEM_USER_KEY_IDX
Primary Key (SYSTEM_USER_KEY);
You most likely have records in your table with IDs that are bigger than the next value of your identity sequence. To find out what value your sequence is currently at, run the following query.
select s.nextcachefirstvalue-s.cache, s.nextcachefirstvalue-s.increment
from syscat.COLIDENTATTRIBUTES as a inner join syscat.sequences as s on a.seqid=s.seqid
where a.tabschema='SCHEMA'
and a.TABNAME='TABLE1'
and a.COLNAME='SYSTEM_USER_KEY'
So basically what happened is that somehow you got records in your table with ids that are bigger than the current last value of your identity sequence. So sooner or later these ids will collide with identity-generated ids.
There are different reasons on how this could have happened. One possibility is that data was loaded which already contained values for the id column or that records were inserted with an actual value for the ID. Another option is that the identity sequence was reset to start at a lower value than the max id in the table.
Whatever the cause, you may also want the fix:
SELECT MAX(<primary_key_column>) FROM <table>;
ALTER TABLE <table> ALTER COLUMN <primary_key_column> RESTART WITH <number from previous query + 1>;
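With the table from this question that would look roughly like the following; the value 500 is only an assumed result of the MAX query:
SELECT MAX(SYSTEM_USER_KEY) FROM SCHEMA.TABLE1;   -- suppose this returns 500
ALTER TABLE SCHEMA.TABLE1 ALTER COLUMN SYSTEM_USER_KEY RESTART WITH 501;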

Prevent insertion if the records already exist in sqlite

I am programming for iPhone and I am using an SQLite DB for my app. I have a situation where I want to insert records into the table only if they don't already exist; otherwise the records should not get inserted.
How can I do this? Can anybody please suggest a suitable query for this?
Thank you one and all,
Looking at SQLite's INSERT page (http://www.sqlite.org/lang_insert.html), you can do it using the following syntax:
INSERT OR IGNORE INTO tablename ....
Example
INSERT OR IGNORE INTO tablename(id, value, data) VALUES(2, 4562, 'Sample Data');
Note: you need to have a KEY on the table columns which uniquely identifies a row. INSERT OR IGNORE will only skip inserting a new row when a duplicate KEY would otherwise be inserted.
In the above example, if you have a KEY on id, then another row with id = 2 will not be inserted.
If you have a KEY only on id and value together, then only the combination of id = 2 and value = 4562 will cause a new row not to be inserted.
In short, there must be a key that uniquely identifies a ROW; only then will the database know there is a duplicate which SHOULD NOT be allowed.
Otherwise, if you do not have a KEY, you would need to go the route of doing a SELECT and then checking whether a row is already there. But even then, whichever columns you are using in that condition can be added as a KEY to the table, so you can simply use INSERT OR IGNORE.
In SQLite it is not possible to ALTER the table and add a constraint like UNIQUE or PRIMARY KEY. For that you need to recreate the table. Look at this FAQ on sqlite.org
http://sqlite.org/faq.html#q11
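A minimal sketch of what that recreation could look like, reusing the example table from above; the composite UNIQUE key on (id, value) is just an illustration:
CREATE TABLE tablename_new (
    id INTEGER,
    value INTEGER,
    data TEXT,
    UNIQUE (id, value)
);
INSERT INTO tablename_new SELECT id, value, data FROM tablename;
DROP TABLE tablename;
ALTER TABLE tablename_new RENAME TO tablename;
INSERT OR IGNORE INTO tablename(id, value, data) VALUES(2, 4562, 'Sample Data');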
Hello Sankar, what you can do is perform a SELECT query for the record you wish to insert and then check the response; via SQLite's SQLITE_NOTFOUND flag you can check whether that record already exists or not. If it doesn't exist you can insert it; otherwise you skip inserting.
I hope this is helpful.
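In plain SQL, that select-then-insert route could look like this sketch, reusing the illustrative table and values from the earlier answer:
SELECT 1 FROM tablename WHERE id = 2 AND value = 4562;
-- if the SELECT returns no row, it is safe to insert:
INSERT INTO tablename(id, value, data) VALUES(2, 4562, 'Sample Data');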