PostgreSQL - INSERT INTO statement

What I'm trying to do is select various rows from a certain table and insert them right back into the same table. My problem is that I keep running into the whole "duplicate PK" error - is there a way to skip the PK field when executing an INSERT INTO statement in PostgreSQL?
For example:
INSERT INTO reviews SELECT * FROM reviews WHERE rev_id=14;
The rev_id in the preceding SQL is the PK, which I somehow need to skip. (To clarify: I am using * in the SELECT statement because the number of table columns can increase dynamically.)
So finally, is there any way to skip the PK field?
Thanks in advance.

You can list just the columns you want to insert, so the PK will be generated automatically:
insert into reviews (col1, col2, col3) select col1, col2, col3 from reviews where rev_id=14
Do not retrieve or insert the id column.

insert into reviews (col0, col1, ...) select col0, col1, ... from reviews where rev_id=14;
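Since the column list can change over time, one option is to build it from the catalog instead of hard-coding it; a minimal sketch, assuming the PK column is always named rev_id:
-- Builds a comma-separated list of every column in reviews except the PK;
-- the result can be spliced into the INSERT/SELECT by whatever code issues the query.
SELECT string_agg(quote_ident(column_name), ', ' ORDER BY ordinal_position) AS column_list
FROM information_schema.columns
WHERE table_name = 'reviews'
  AND column_name <> 'rev_id';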

Related

Liquibase insert select multiple rows postgres

I want to insert into table1 multiple rows from table2. The problem is that I have some fields in table1 that I want to compute, and some rows that I want to select from table2. For example something like this:
insert into table1 (id, selectField1, selectField2, constant)
values ((gen_random_uuid()), (select superField1 from table2), (select superField2 from table2), 'test');
So the logic is to select superField1 and superField2 from all the rows in table2 and insert them into table1 with the constant value 'test' and generated uuids. superField1 and superField2 should come from the same row in table2 when inserting into table1. How can I achieve something like this using Liquibase?
P.S.: I'm using the <sql> tag since it's easier to implement in SQL than as an XML changeset; an XML version would be appreciated too, but plain SQL is enough. The DBMS is Postgres.
Don't use the VALUES clause if the source is a SELECT statement:
insert into table1 (id, selectField1, selectField2, constant)
select gen_random_uuid(), superField1, superField2, 'test'
from table2;
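One caveat: on PostgreSQL versions before 13, gen_random_uuid() is not built in and comes from the pgcrypto extension, so the changelog may also need:
-- Needed only on PostgreSQL < 13, where gen_random_uuid() lives in pgcrypto
CREATE EXTENSION IF NOT EXISTS pgcrypto;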

Issue Insert Into (BigTable1) SELECT From (BiggerTable2)

I am selecting a set of data from bigTable1 (indexed) and then inserting it into another table, bigTable2 (also indexed).
I have two options; which is the better idea?
Option: 1
INSERT INTO bigTable2 (bigTable2.Col1, bigTable2.Col2)
SELECT bigTable1.Col1, bigTable1.Col2 FROM bigTable1 (nolock)
WHERE bigTable1.col3 between #value1 and #value2
Option: 2
CREATE TABLE #TEMP (Col1 int, Col2 varchar(200))
INSERT INTO #TEMP (Col1, Col2)
SELECT bigTable1.Col1, bigTable1.Col2 FROM bigTable1 (nolock)
WHERE bigTable1.col3 between #value1 and #value2
INSERT INTO bigTable2 (bigTable2.Col1, bigTable2.Col2)
SELECT Col1, Col2 FROM #TEMP
I do not want to lock bigTable1. Please advise: which of the two is better? Is there any other suggestion?
If you don't want to lock the table, go with the first approach. It's a one-step operation and, no matter how long it takes, the source table stays unlocked, so you don't block others.
The second option would be appropriate if you did want the source table locked during the read: filling an empty, non-indexed temp table is faster, so the lock on the source would be held for a shorter period.
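A minimal sketch of option 1 written with the explicit WITH (NOLOCK) hint syntax; the @value1/@value2 variables, their type, and the sample values are assumptions standing in for the original placeholders:
-- Range bounds; the int type and sample values are placeholders, not from the question
DECLARE @value1 int = 1, @value2 int = 1000;

INSERT INTO bigTable2 (Col1, Col2)
SELECT bt1.Col1, bt1.Col2
FROM bigTable1 AS bt1 WITH (NOLOCK)
WHERE bt1.col3 BETWEEN @value1 AND @value2;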

Entity Framework: View exclusion without primary key

I am using SQL Server where I have designed a view to sum the results of two tables and I want the output to be a single table with the results. My query simplified is something like:
SELECT SUM(col1), col2, col3
FROM Table1
GROUP BY col2, col3
This gives me the data I want, but when updating my EDM the view is excluded because "a primary key cannot be inferred".
With a little research I modified the query to spoof an id column as follows:
SELECT ROW_NUMBER() OVER (ORDER BY col2) AS 'ID', SUM(col1), col2, col3
FROM Table1
GROUP BY col2, col3
This kind of query gives me a nice increasing set of ids. However, when I attempt to update my model it still excludes my view because it cannot infer a primary key. How can we use views that aggregate records and connect them with Linq-to-Entities?
As already discussed in the comments, you can try adding MAX(id) as ID to the view. Based on your feedback this would become:
SELECT ISNULL(MAX(id), 0) as ID,
SUM(col1),
col2,
col3
FROM Table1
GROUP BY col2, col3
Another option is to try creating an index on the view (note that indexed views require WITH SCHEMABINDING and do not allow MIN/MAX aggregates, so this will not combine with the MAX(id) version above):
CREATE UNIQUE CLUSTERED INDEX idx_view1 ON dbo.View1(id)
I use this in the view definition (ALTER VIEW):
ISNULL(ROW_NUMBER() OVER(ORDER BY ActionDate DESC), -1) AS RowID
I use this clause in views that join multiple tables.
ROW_NUMBER() never actually returns NULL, so the -1 fallback is never used; the ISNULL wrapper just makes the column non-nullable, which lets EF infer it as a key.
This is all I needed to add in order to import my view into EF6.
select ISNULL(1, 1) keyField
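Pulling those answers together, a hedged sketch of what the full view might look like (the view name, the column aliases, and the CREATE OR ALTER syntax, which needs SQL Server 2016 SP1 or later, are assumptions based on the question and the answers above):
-- ISNULL makes the ROW_NUMBER column non-nullable in the view's metadata,
-- so EF can infer it as the view's key when the model is updated.
CREATE OR ALTER VIEW dbo.View1
AS
SELECT ISNULL(ROW_NUMBER() OVER (ORDER BY col2), -1) AS ID,
       SUM(col1) AS SumCol1,
       col2,
       col3
FROM dbo.Table1
GROUP BY col2, col3;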

select where not exists excluding identity column

I am inserting only new records from a "dump" table into a live table, i.e. records that do not already exist in the live table. My issue is that there is an identity column I don't want to insert into the live table; I want the live table's identity column to take care of incrementing the value, but I am getting an insert error: "Insert Error: Column name or number of supplied values does not match table definition." Is there a way around this, or is the only fix to remove the identity column altogether?
Thanks,
Sam
You need to list all the needed columns in your query, excluding the identity column.
One more reason why you should never use SELECT *.
INSERT liveTable
(col1, col2, col3)
SELECT col1, col2, col3
FROM dumpTable dt
WHERE NOT EXISTS
(
SELECT 1
FROM liveTable lt
WHERE lt.Id = dt.Id
)
Pro tip: You can also achieve the above with an OUTER JOIN between the dump and live tables and a WHERE liveTable.col1 IS NULL filter (you will probably need to qualify the column names selected with the dump table alias).
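A minimal sketch of that join-based variant, filtering on the join key rather than col1 in case col1 can legitimately be NULL:
-- Rows with no match in liveTable come back with NULLs on the lt side,
-- so keeping only those rows inserts just the new records.
INSERT INTO liveTable (col1, col2, col3)
SELECT dt.col1, dt.col2, dt.col3
FROM dumpTable dt
LEFT JOIN liveTable lt ON lt.Id = dt.Id
WHERE lt.Id IS NULL;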
I figured out the issue.... my live table didn't have the ID field set as an identity, somehow when I created it that field wasn't set up correctly.
You can leave that column out of your insert statement, like this:
insert into destination (col2, col3, col4)
select col2, col3, col4 from source
Don't just do
insert into destination
select * from source

Copy three columns from one database table to another

I'm updating an iPhone app with a SQLite3 database. The users have a database on their phone currently, and I need to update three of the columns with new data (stored in a separate database) if the ids of the rows match.
I've been able to attach the two databases and copy an entire table, but not just three columns.
database1
table1
id, col1, col2, col3, col4
database2
table1
id, col1, col2, col3, col4
I want to copy col1, col2, & col3 (not col4) from database1, table1 to database2, table1 if the ids match.
You could use a query along the following lines:
-- SQLite has no SELECT INTO; copy the three columns with INSERT INTO ... SELECT
-- (assumes the target database is attached as db2 and the source as db1)
INSERT INTO db2.table2 (field1, field2, field3)
SELECT field1, field2, field3
FROM db1.table1
WHERE field1 = 1;
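If the goal is to update existing rows in database2 when the ids match, rather than insert new ones, a correlated-subquery UPDATE may be closer to what's needed; a sketch, assuming database1 is attached under the name db1 and database2's copy of table1 is in the current (main) database:
-- Overwrite col1..col3 in the local table1 with the values from db1.table1
-- for every row whose id exists in both databases.
UPDATE table1
SET col1 = (SELECT s.col1 FROM db1.table1 s WHERE s.id = table1.id),
    col2 = (SELECT s.col2 FROM db1.table1 s WHERE s.id = table1.id),
    col3 = (SELECT s.col3 FROM db1.table1 s WHERE s.id = table1.id)
WHERE id IN (SELECT id FROM db1.table1);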
Hope this helps (and works for you) :)