I have a problem where, if my table is empty, I can create new items with Directus. But if the table already has data, I get the error "ID: Value has to be unique". I'm using auto-increment for my IDs. Any ideas what I've done wrong?
I had a similar issue after importing data from another Directus instance and database. I was able to resolve it by going into the Postgres database with psql and resetting the auto-increment sequence:

SELECT setval('my_table_id_seq', (SELECT max(id) FROM my_table));

Then I inserted one row:

INSERT INTO my_table (column1, column2, ...) VALUES (value1, value2, ...);

After that I was able to create new table rows through Directus.
I have an unusual problem: I need to delete duplicate records from a table in PostgreSQL. Because the records are duplicated, the table has no primary key and no unique index. It contains about 20 million records, duplicates included. The query below takes far too long:

DELETE FROM temp a USING temp b WHERE a.recordid = b.recordid AND a.ctid < b.ctid;

What would be a better approach for such a huge table with no index on it?
Any help is appreciated.
If you have enough free space, you can copy the table without the duplicates, then drop the old table and rename the new one.
Like this:

INSERT INTO new_table
SELECT DISTINCT ON (column) *
FROM old_table
ORDER BY column ASC;
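Put together, the swap might look like this (a sketch; new_table, old_table, and recordid stand in for your actual names):

```sql
-- create an empty copy with the same column definitions
CREATE TABLE new_table (LIKE old_table INCLUDING ALL);

-- keep one arbitrary row per recordid
INSERT INTO new_table
SELECT DISTINCT ON (recordid) *
FROM old_table
ORDER BY recordid;

-- swap the tables
DROP TABLE old_table;
ALTER TABLE new_table RENAME TO old_table;
```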
Use COPY TO to dump the table.
Then use Unix sort -u to de-duplicate it.
Drop or truncate the table in Postgres, use COPY FROM to read it back in.
Add a primary key column.
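Sketched end to end (the file paths and big_table are placeholders; note that COPY with a file path reads and writes files on the database server, while psql's \copy works with client-side files):

```sql
COPY big_table TO '/tmp/big_table.txt';
-- in a shell, outside Postgres:  sort -u /tmp/big_table.txt > /tmp/big_table.dedup.txt
TRUNCATE big_table;
COPY big_table FROM '/tmp/big_table.dedup.txt';
-- finally, add a surrogate key so duplicates cannot reappear
ALTER TABLE big_table ADD COLUMN id bigserial PRIMARY KEY;
```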
I have a table with two columns: id, which is varchar, and data, which is jsonb. I also have a CSV file with new IDs that I would like to insert into the table. The data I would like to assign to these IDs is identical, and if an ID already exists I would like to overwrite its current data value with the new data. This is what I have done so far:
INSERT INTO "table" ("id", "data")
VALUES ('[IDs from CSV file]', '{dataObject}')
ON CONFLICT (id) DO UPDATE SET data = '{dataObject}';
I have got it working with a single ID, but I would now like to run this for every ID in my csv-file, hence the array in the example to illustrate this. Is there a way to do this using a query? I was thinking I could create a temporary table and import the IDs there, but I am still not sure how I would utilize that table with my query.
Yes: use a staging table to load your CSV into, and make sure to truncate it before each upload. After uploading:
insert into prod_table
select * from csv_upload
on conflict (id) do update
set data = excluded.data;
Don't complicate the process unnecessarily.
Import the CSV into a temporary table T2.
Update T1 where rows match in T2.
Insert into T1 from T2 where rows do not match.
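The update-then-insert steps sketched out (t1, t2, and ids.csv are placeholder names, '{dataObject}' is the placeholder from the question, and \copy is a psql meta-command):

```sql
-- stage the IDs from the CSV
CREATE TEMP TABLE t2 (id varchar PRIMARY KEY);
\copy t2 FROM 'ids.csv' WITH (FORMAT csv)

-- update rows that already exist
UPDATE t1
SET data = '{dataObject}'
WHERE t1.id IN (SELECT id FROM t2);

-- insert the rows that don't exist yet
INSERT INTO t1 (id, data)
SELECT id, '{dataObject}'
FROM t2
WHERE NOT EXISTS (SELECT 1 FROM t1 WHERE t1.id = t2.id);
```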
I have an issue where Postgres complains about a duplicate ID after an import of some initial data, and I am trying to work out how to advance the id column's counter.
Details:
I have recently uploaded some initial data into a Postgres table, where the id is set to autoincrement in my Sequelize model definition. For example:
sequelize.define('user', {
  id: {
    type: Sequelize.INTEGER,
    primaryKey: true,
    autoIncrement: true
  },
  name: Sequelize.STRING
});
The data insert looks like:
INSERT INTO "users" ("id","name") VALUES (1, 'bob');
INSERT INTO "users" ("id","name") VALUES (2, 'brooke');
INSERT INTO "users" ("id","name") VALUES (3, 'nico');
Now from my node.js application when I try to insert a new record it complains that Key (id)=(1) already exists. The SQL Sequelize is using is of the form:
INSERT INTO "users" ("id","name") VALUES (DEFAULT, 'nico');
If I empty the table and try again, or retry the operation enough times, then the counter does increment. The issue seems to be that Postgres cannot tell what the current max id is from the existing records.
What would be the right way to tell Postgres to update the counters, following uploading initial data into the database?
BTW using Postgres 9.6
After a bit more searching, it turns out this will do what I need:
SELECT setval('users_id_seq', max(id)) FROM users;
This sets the sequence to the current maximum id in the table, here my users table. Note: to check whether a sequence is associated with a column, this works:
SELECT pg_get_serial_sequence('patients', 'id')
The only thing to note is that the returned value is schema-qualified (e.g. 'public.users_id_seq'); setval() accepts that form directly.
I'll add the setval() to my initial data script.
Try dropping the table before your initial data insert; it may be persisting from a previous run, in which case (1, 'bob') would already be in your table before you tried adding it again.
This happened to me because I inserted records using literal, numeric values (instead of DEFAULT or undefined) as arguments for the auto-incremented column. Doing so circumvents the column's underlying sequence object's increment call, hence making the sequence's value out of sync with the values in the column in the table.
SELECT setval('users_id_seq', (SELECT MAX(id) from users));
The name of the sequence is auto-generated and is by default tablename_columnname_seq (you can confirm it with pg_get_serial_sequence if in doubt).
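To avoid hard-coding the sequence name, the lookup and the reset can be combined in one statement (users and id are the names from the question; the COALESCE guards against an empty table):

```sql
SELECT setval(
  pg_get_serial_sequence('users', 'id'),
  COALESCE((SELECT MAX(id) FROM users), 1)
);
```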
All,
I am trying to bulk insert some data into a table using the COPY command, and I can't seem to get around the unique key error. Here's my workflow.
Create a dump of the data I want to move to another server
COPY (
  SELECT *
  FROM mytable
  WHERE created_at >= '2012-10-01'
) TO 'D:\tmp\file.txt';
Create a new "temp" table in the target DB, then COPY the data in like so:
COPY temp FROM 'D:\tmp\file.txt';
I now want to move the data from the "temp" table into the master table in the target DB like so:
INSERT INTO master SELECT * FROM temp
WHERE id NOT IN (SELECT id FROM master)
This runs fine, but nothing gets inserted and no fields are updated. Does anyone have a clue what might be going on here? The schemas for temp and master are identical. Any help on this matter would be great! I am using PostgreSQL 9.2.
Adam
This can happen if there's a NULL value in the IN list.
In SQL, any comparison with NULL evaluates to NULL rather than true (you need the special IS NULL test to match a NULL). This has the unfortunate consequence that NOT IN matches nothing at all if SELECT id FROM master returns even one NULL value.
See if there are any rows returned from this query:
SELECT id
FROM master
WHERE id is null;
If not, then this isn't your problem.
If there are values, then the fix is to exclude null ids from the list:
INSERT INTO master
SELECT *
FROM temp
WHERE id NOT IN (SELECT id FROM master where id is not null)
The other thing to consider is that every row in temp may already exist in master, in which case there is simply nothing new to insert!
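An alternative that is not affected by NULLs at all is NOT EXISTS, which compares row by row instead of building an IN list:

```sql
INSERT INTO master
SELECT t.*
FROM temp t
WHERE NOT EXISTS (
  SELECT 1 FROM master m WHERE m.id = t.id
);
```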
I am new to iPhone development. I am building an application with a database; I have created the database with two tables, and I want to insert values from one table into the other, but I do not understand how to do this.
If anybody knows, please help me with the code.
Nested query: INSERT INTO Table SELECT blah... (note there is no VALUES keyword when inserting from a query).
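A minimal sketch (table and column names are hypothetical; this INSERT ... SELECT syntax works in SQLite and most other SQL databases):

```sql
-- copy matching columns from one table into another
INSERT INTO destination_table (id, name)
SELECT id, name
FROM source_table;
```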