Good Day,
I am currently having trouble with IDENTITY_INSERT. I have a linked server set up with the required permissions.
I have tested that I can use IDENTITY_INSERT with a simple insert query, but it does not work when I use the following code:
SET IDENTITY_INSERT table_name ON
INSERT INTO table_name SELECT * FROM OPENQUERY([server_name], 'SELECT * FROM ##Tmp2');
Using the above method I receive the following error:
An explicit value for the identity column in table 'table_name' can only be specified when a column list is used and IDENTITY_INSERT is ON.
Can anyone kindly help me with this? I would like to select data from a temp table on a different server, and I can't omit the IDENTITY column because it is required as a link to other tables.
Thank you in advance.
The answer was provided by @allmhuran:
"You're not providing an explicit column list.
insert into table_name (aColumn, anotherColumn, ...) select aColumn, anotherColumn, ... from ..."
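Applied to the OPENQUERY insert above, a minimal sketch of the fix (id, col1 and col2 are placeholders for your table's actual columns):
SET IDENTITY_INSERT table_name ON;
-- List every column explicitly, on both sides of the insert
INSERT INTO table_name (id, col1, col2)
SELECT id, col1, col2
FROM OPENQUERY([server_name], 'SELECT id, col1, col2 FROM ##Tmp2');
SET IDENTITY_INSERT table_name OFF;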
I've narrowed it down to two possibilities: dynamic SQL and using a CASE statement.
However, I've failed with both of these.
I simply don't understand dynamic SQL and how I would use it in my case.
This is my attempt using case statements; one of many failed variations.
SELECT column_name,
CASE WHEN column_name = 'address' THEN (**update statement gives syntax error within here**)
END
FROM information_schema.columns
WHERE table_name = 'employees';
As an overview, I'm using Axios to talk to my Node server, which makes calls to my Heroku database using MassiveJS.
Maybe this isn't the way to go - so here's my main problem:
I've run into trouble because the values I plan to use as column names are sent to my server as strings. The exact call that I've been trying to use is
update employees
set $1 = $2
where employee_id = $3;
Once again, I'm passing values into those placeholders using Massive.
I get back the error { error: syntax error at or near "'address'" } because my incoming values are strings. My thought process was that the above statement would let me use variables, since 'address' is wrapped in quotes.
But alas, my thought process has failed me.
This seems to be close to answering my question, but I can't seem to figure out what to do in my case if using dynamic SQL.
How to use dynamic column names in an UPDATE or SELECT statement in a function?
Thanks in advance.
I will show you a way to do this by using a function.
First, we create the employees table:
CREATE TABLE employees(
id BIGSERIAL PRIMARY KEY,
column1 TEXT,
column2 TEXT
);
Next, we create a function that requires three parameters:
columnName - the name of the column that needs to be updated
columnValue - the new value to which the column needs to be updated
employeeId - the id of the employee that will be updated
By using the format function we generate the update query as a string and use the EXECUTE command to execute the query.
Here is the code of the function.
CREATE OR REPLACE FUNCTION update_columns_on_employee(columnName TEXT, columnValue TEXT, employeeId BIGINT)
RETURNS VOID AS
$$
-- %I safely quotes the column name as an identifier and %L quotes the
-- value as a literal, which guards against SQL injection
DECLARE update_statement TEXT := format('UPDATE employees SET %I = %L WHERE id = %s', columnName, columnValue, employeeId);
BEGIN
EXECUTE update_statement;
end;
$$ LANGUAGE plpgsql;
Now, let's insert some data into the employees table:
INSERT INTO employees(column1, column2) VALUES ('column1_start_value','column2_start_value');
So now we have an employee with an id of 1, whose column1 value is 'column1_start_value' and whose column2 value is 'column2_start_value'.
If we want to update the value of column2 from 'column2_start_value' to 'column2_new_value', all we have to do is execute the following call:
SELECT * FROM update_columns_on_employee('column2','column2_new_value',1);
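To verify the result (using the sample row above):
SELECT * FROM employees WHERE id = 1;
-- id | column1             | column2
-- ---+---------------------+------------------
--  1 | column1_start_value | column2_new_value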
In my script I have to do a lot of selects against a joined table, so instead I decided to put this join into a temporary table.
First I thought:
1. Create table
2. Put the data from the join into a table
3. Drop the table
But then I thought, what if the script fails before I dropped the table?
So I decided to go with:
1. Drop the table
2. Create the table
3. Put the data from the join into a table
I don't really mind if the table is left there until the next time I run the script, so the second option works too.
But what if somebody had already dropped the table?
I saw some systems have a "drop table if exists", but unfortunately DB2 does not. I would like to do something that won't make the script die when the DROP TABLE fails.
Ideas? On any of this? Thanks!
EDIT: I forgot to say this is in a PERL script!
The best way to do this is by using an anonymous block, like in the code below.
You need to run the DROP TABLE in dynamic SQL and catch the exception in the block.
--#SET TERMINATOR #
begin
declare statement varchar(128);
-- SQLSTATE '42704' ("undefined name") is raised when the table does not exist
declare continue handler for sqlstate '42704' BEGIN END;
SET STATEMENT = 'DROP TABLE MYTABLE';
EXECUTE IMMEDIATE STATEMENT;
end #
This code will run as-is in DB2; it does not need to be part of a procedure or function.
Why not look for the table first? If you find it, it needs to be dropped; if you don't, it doesn't.
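For example (a sketch; the schema and table names are placeholders):
SELECT COUNT(*)
FROM SYSCAT.TABLES
WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'MYTABLE';
-- 1 means the table exists and needs to be dropped; 0 means it does not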
There is db2perf_quiet_drop, which might work the way you want. It's a free add-on :)
You can look into this post too:
http://www.dbforums.com/showthread.php?1609047-DB2-equivalent-for-mysql-s-DROP-TABLE-IF-EXISTS
If this doesn't work for you, please let me know what error you are getting so I can try to help :)
Or something along these lines might work (pseudocode: check the catalog first; if the table is missing, create and populate it, otherwise have a custom procedure drop it):
if not exists (select 1 from syscat.tables where tabname = 'DETAILVAL') then
    create table detailval
    (
        id int,
        detaildeptNo int,
        info varchar(255)
    );
    insert into detailval values (1, 1, 'detail values A');
    insert into detailval values (2, 1, 'detail values B');
    insert into detailval values (3, 1, 'detail values C');
    insert into detailval values (4, 2, 'detail values D');
else
    call customStoredproc('droptable');
end if;
I think you should look into working with temporary tables (DECLARE GLOBAL TEMPORARY TABLE). They live in the temporary table space; by default their rows are deleted at each commit, and the table itself disappears when the session ends.
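A minimal sketch of the idea (table and column names are placeholders, and a user temporary table space is assumed to exist):
-- ON COMMIT PRESERVE ROWS keeps the rows across commits (the default
-- deletes them at each commit); the table vanishes when the session ends.
DECLARE GLOBAL TEMPORARY TABLE tmp_join AS (
    SELECT o.order_id, c.customer_name
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
) DEFINITION ONLY ON COMMIT PRESERVE ROWS NOT LOGGED;

INSERT INTO SESSION.tmp_join
SELECT o.order_id, c.customer_name
FROM orders o
JOIN customers c ON o.customer_id = c.customer_id;

-- The table is addressed through the SESSION schema
SELECT * FROM SESSION.tmp_join;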
You can also easily query SYSCAT.TABLES, like this:
select COUNT(*) from SYSCAT.TABLES where TRIM(TABNAME) = '<some_table_name>'
If this query returns 0, then the table does not exist.
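Putting the two ideas together, a sketch of a conditional drop in the same kind of anonymous block as above (MYTABLE is a placeholder; you may also want to filter on TABSCHEMA):
--#SET TERMINATOR #
begin
declare cnt int;
select count(*) into cnt from syscat.tables where trim(tabname) = 'MYTABLE';
if cnt > 0 then
execute immediate 'DROP TABLE MYTABLE';
end if;
end #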
I am writing an experimental script to do a SQL comparison (collated as case-sensitive) and I am having issues with SET IDENTITY_INSERT <Table> ON.
I have switched this option on and disabled foreign key checks, but it still seems to be complaining about the former.
Here are the steps I followed:
1 - I created a linked server
EXEC sp_addlinkedserver @server=N'xxx.xxx.xxx.xxx', @srvproduct=N'SQL Server'
2 - I added the login credentials
EXEC master.dbo.sp_addlinkedsrvlogin
@rmtsrvname = N'xxx.xxx.xxxx.xxx',
@locallogin = NULL,
@useself = N'False',
@rmtuser = N'xxxxxxxxxxx',
@rmtpassword = N'xxxxxxxxxxx'
3 - In the same batch, I set IDENTITY_INSERT, disabled foreign key checks, and ran the following merge script. Note: the deferred query returns an XML field, which is disallowed over distributed servers, so I cast it to NVARCHAR(MAX).
SET IDENTITY_INSERT [DATABASE1].[dbo].[TABLE1] ON
ALTER TABLE [DATABASE1].[dbo].[TABLE1] NOCHECK CONSTRAINT ALL
MERGE [DATABASE1].[dbo].[TABLE1]
USING OPENQUERY([xxx.xxx.xxx.xxx], 'SELECT S.ID, S.EventId, S.SnapshotTypeID, CAST(S.Content AS NVARCHAR(MAX)) AS Content FROM [DATABASE1].[dbo].[TABLE1] AS S') AS S
ON (CAST([DATABASE1].[dbo].[TABLE1].Content AS NVARCHAR(MAX)) = S.Content)
WHEN NOT MATCHED BY TARGET
THEN INSERT VALUES (S.ID, S.EventId, S.SnapshotTypeID, CAST(S.Content AS XML))
WHEN MATCHED
THEN UPDATE SET [DATABASE1].[dbo].[TABLE1].EventId = S.EventId,
[DATABASE1].[dbo].[TABLE1].SnapshotTypeID = S.SnapshotTypeID,
[DATABASE1].[dbo].[TABLE1].Content = S.Content
COLLATE Latin1_General_CS_AS;
GO
The error message I am getting reads as follows:
Msg 8101, Level 16, State 1, Line 4
An explicit value for the identity column in table 'Database1.dbo.Table' can only be specified when a column list is used and IDENTITY_INSERT is ON.
How can I fix this? As I mentioned, this script is only an experiment for one of the systems I am writing. I am probably reinventing the wheel somewhere, but it's all about learning in this exercise.
An explicit value for the identity column in table 'Database1.dbo.Table' can only be specified when a column list is used and IDENTITY_INSERT is ON.
You have no column list in your MERGE's INSERT clause.
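A sketch of the fix, using the columns from your own query:
WHEN NOT MATCHED BY TARGET
    THEN INSERT (ID, EventId, SnapshotTypeID, Content)
         VALUES (S.ID, S.EventId, S.SnapshotTypeID, CAST(S.Content AS XML))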
I am having problems with my query.
Basically, what I am trying to do is empty out a table and copy the records from the same table in another database.
I did use SET IDENTITY_INSERT to allow explicit values for the identity column before performing my insert. But somehow, it still throws this error message:
Msg 8101, Level 16, State 1, Line 3
An explicit value for the identity column in table 'dbo.UI_PAGE' can only be specified when a column list is used and IDENTITY_INSERT is ON.
Below is my query:
DELETE FROM [DB1].[dbo].[MY_TABLE]
SET IDENTITY_INSERT [DB1].[dbo].[MY_TABLE] ON
INSERT INTO [DB1].[dbo].[MY_TABLE]
SELECT *
FROM [DB2].[dbo].[MY_TABLE]
SET IDENTITY_INSERT [DB1].[dbo].[MY_TABLE] OFF
Can someone point out which step I am doing wrong?
Thanks a lot!
You have to specify all the column names when inserting with IDENTITY_INSERT ON:
INSERT INTO [DB1].[dbo].[MY_TABLE] (TableID, Field1, Field2, Field3...)
SELECT * FROM [DB2].[dbo].[MY_TABLE]
In case you did not know, there is a nifty little trick in SSMS: if you select a table in Object Explorer and expand its nodes, you can press Ctrl+C on the Columns node and that will place a comma-delimited list of the column names in your clipboard's text buffer.
In addition to the first answer given by Ross Bush: if your table has many columns, you can get those column names by using this command.
SELECT column_name + ','
FROM information_schema.columns
WHERE table_name = 'TableName'
for xml path('')
After removing the last comma (','), just copy and paste the column names.
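If you are on SQL Server 2017 or later (an assumption about your version), STRING_AGG avoids the trailing comma entirely:
SELECT STRING_AGG(column_name, ', ')
FROM information_schema.columns
WHERE table_name = 'TableName';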
Some SQL servers have a feature where INSERT is skipped if it would violate a primary/unique key constraint. For instance, MySQL has INSERT IGNORE.
What's the best way to emulate INSERT IGNORE and ON DUPLICATE KEY UPDATE with PostgreSQL?
With PostgreSQL 9.5, this is now native functionality (like MySQL has had for several years):
INSERT ... ON CONFLICT DO NOTHING/UPDATE ("UPSERT")
9.5 brings support for "UPSERT" operations.
INSERT is extended to accept an ON CONFLICT DO UPDATE/IGNORE clause. This clause specifies an alternative action to take in the event of a would-be duplicate violation.
...
Further example of new syntax:
INSERT INTO user_logins (username, logins)
VALUES ('Naomi',1),('James',1)
ON CONFLICT (username)
DO UPDATE SET logins = user_logins.logins + EXCLUDED.logins;
Edit: in case you missed warren's answer, PG9.5 now has this natively; time to upgrade!
Building on Bill Karwin's answer, to spell out what a rule based approach would look like (transferring from another schema in the same DB, and with a multi-column primary key):
CREATE RULE "my_table_on_duplicate_ignore" AS ON INSERT TO "my_table"
WHERE EXISTS(SELECT 1 FROM my_table
WHERE (pk_col_1, pk_col_2)=(NEW.pk_col_1, NEW.pk_col_2))
DO INSTEAD NOTHING;
INSERT INTO my_table SELECT * FROM another_schema.my_table WHERE some_cond;
DROP RULE "my_table_on_duplicate_ignore" ON "my_table";
Note: The rule applies to all INSERT operations until the rule is dropped, so not quite ad hoc.
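If you need it closer to ad hoc, one option (a sketch, relying on PostgreSQL's transactional DDL) is to create and drop the rule inside a single transaction, so other sessions never see the rule at all; note that CREATE RULE takes a strong lock on the table for the duration:
BEGIN;
CREATE RULE "my_table_on_duplicate_ignore" AS ON INSERT TO "my_table"
WHERE EXISTS(SELECT 1 FROM my_table
WHERE (pk_col_1, pk_col_2)=(NEW.pk_col_1, NEW.pk_col_2))
DO INSTEAD NOTHING;
INSERT INTO my_table SELECT * FROM another_schema.my_table WHERE some_cond;
DROP RULE "my_table_on_duplicate_ignore" ON "my_table";
COMMIT;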
For those of you that have Postgres 9.5 or higher, the new ON CONFLICT DO NOTHING syntax should work:
INSERT INTO target_table (field_one, field_two, field_three )
SELECT field_one, field_two, field_three
FROM source_table
ON CONFLICT (field_one) DO NOTHING;
For those of us who have an earlier version, this left join will work instead:
INSERT INTO target_table (field_one, field_two, field_three )
SELECT source_table.field_one, source_table.field_two, source_table.field_three
FROM source_table
LEFT JOIN target_table ON source_table.field_one = target_table.field_one
WHERE target_table.field_one IS NULL;
Try to do an UPDATE. If it doesn't modify any row, that means the row didn't exist, so do an INSERT. Obviously, you do this inside a transaction.
You can of course wrap this in a function if you don't want to put the extra code on the client side. You also need a retry loop for the very rare race condition inherent in that approach.
There's an example of this in the documentation: http://www.postgresql.org/docs/9.3/static/plpgsql-control-structures.html, example 40-2 right at the bottom.
That's usually the easiest way. You can do some magic with rules, but it's likely going to be a lot messier. I'd recommend the wrap-in-function approach over that any day.
This works for single-row or few-row values. If you're dealing with large numbers of rows, for example from a subquery, you're best off splitting it into two queries, one for the INSERT and one for the UPDATE (with an appropriate join/subselect, of course; no need to write your main filter twice).
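A sketch along the lines of the documentation's merge_db example (table and column names are placeholders):
CREATE OR REPLACE FUNCTION merge_db(k INT, v TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- First try to update an existing row
        UPDATE db SET data = v WHERE key = k;
        IF found THEN
            RETURN;
        END IF;
        -- Not there: try to insert. Another session might insert the
        -- same key concurrently, so catch the unique violation and retry.
        BEGIN
            INSERT INTO db (key, data) VALUES (k, v);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Do nothing, and loop to try the UPDATE again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;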
To get the insert-ignore logic, you can do something like the example below. I found that simply inserting from a SELECT of literal values worked best; then you can mask out the duplicate keys with a NOT EXISTS clause. To get the update-on-duplicate logic, I suspect a pl/pgsql loop would be necessary.
INSERT INTO manager.vin_manufacturer
(SELECT * FROM( VALUES
('935',' Citroën Brazil','Citroën'),
('ABC', 'Toyota', 'Toyota'),
('ZOM',' OM','OM')
) as tmp (vin_manufacturer_id, manufacturer_desc, make_desc)
WHERE NOT EXISTS (
--ignore anything that has already been inserted
SELECT 1 FROM manager.vin_manufacturer m where m.vin_manufacturer_id = tmp.vin_manufacturer_id)
)
INSERT INTO mytable(col1,col2)
SELECT 'val1','val2'
WHERE NOT EXISTS (SELECT 1 FROM mytable WHERE col1='val1')
As @hanmari mentioned in his comment, when inserting into a postgres table, ON CONFLICT (..) DO NOTHING is the best code to use to avoid inserting duplicate data:
query = "INSERT INTO db_table_name(column_name)
VALUES(%s) ON CONFLICT (column_name) DO NOTHING;"
The ON CONFLICT line of code will allow the insert statement to still insert the non-conflicting rows of data. The query-and-values code is an example of inserting data from an Excel spreadsheet into a postgres db table.
I have constraints on a postgres table to make sure the ID field is unique. Instead of running a delete on duplicate rows of data, I add a line of SQL code that renumbers the ID column starting at 1.
Example (restarting the serial column's backing sequence, which by default is named <table>_<column>_seq):
q = 'ALTER SEQUENCE my_table_id_column_seq RESTART WITH 1'
If my data has an ID field, I do not use it as the primary/serial ID; I create a separate ID column and set it to serial.
I hope this information is helpful to everyone.
*I have no college degree in software development/coding. Everything I know about coding, I have studied on my own.
Looks like PostgreSQL supports a schema object called a rule.
http://www.postgresql.org/docs/current/static/rules-update.html
You could create a rule ON INSERT for a given table, making it do NOTHING if a row exists with the given primary key value, or else making it do an UPDATE instead of the INSERT if a row exists with the given primary key value.
I haven't tried this myself, so I can't speak from experience or offer an example.
This solution avoids using rules:
BEGIN
INSERT INTO tableA (unique_column,c2,c3) VALUES (1,2,3);
EXCEPTION
WHEN unique_violation THEN
UPDATE tableA SET c2 = 2, c3 = 3 WHERE unique_column = 1;
END;
but it has a performance drawback (see PostgreSQL.org):
A block containing an EXCEPTION clause is significantly more expensive
to enter and exit than a block without one. Therefore, don't use
EXCEPTION without need.
On bulk, you can always delete the rows before the insert. A deletion of a row that doesn't exist doesn't cause an error, so it's safely skipped.
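A sketch of the idea (table names are placeholders), with both statements in one transaction so the delete and insert are atomic:
BEGIN;
DELETE FROM target_table
WHERE id IN (SELECT id FROM source_table);
INSERT INTO target_table
SELECT * FROM source_table;
COMMIT;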
For data import scripts, to replace "IF NOT EXISTS" in a way, there's a slightly awkward formulation that nevertheless works:
DO
$do$
BEGIN
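-- PERFORM runs the query and discards the result; FOUND is set to true if it returned any rows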
PERFORM id
FROM whatever_table;
IF NOT FOUND THEN
-- INSERT stuff
END IF;
END
$do$;