My program has to accomplish these tasks:
Connect to a remote SQL Server database.
Get the data of each table in the database, and of each column of each table.
With the data obtained, write queries to create a copy of the database.
Everything goes fine, but when I try to run the resulting query in my DBMS (Microsoft SQL Server Management Studio),
it returns errors on the last entry, although everything goes fine if I run one single query. What am I doing wrong? Here is a snippet of the result:
IF NOT EXISTS (select * from dbo.sysobjects where id = object_id(N'[test]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
BEGIN
CREATE TABLE dbo.test(
KeyId numeric(18, 0) PRIMARY KEY NOT NULL,
intNumTest numeric(18, 0) NULL,
intNumTest2 numeric(18, 0) NULL);
END
Imagine there are hundreds of these queries; every single one runs fine on its own, but they return errors when run together.
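One thing worth checking is how the generated statements are joined together: if one block runs straight into the next with no separator, or the last statement is truncated, the whole script is parsed as a single batch and the problem only surfaces at the end. It can help to emit each block as its own batch so that a failure in one statement does not affect the rest. A sketch of what the generated script could look like with GO separators (the second table is invented purely for illustration):
IF NOT EXISTS (select * from dbo.sysobjects where id = object_id(N'[test]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
BEGIN
CREATE TABLE dbo.test(
KeyId numeric(18, 0) PRIMARY KEY NOT NULL,
intNumTest numeric(18, 0) NULL,
intNumTest2 numeric(18, 0) NULL);
END
GO
IF NOT EXISTS (select * from dbo.sysobjects where id = object_id(N'[test2]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
BEGIN
CREATE TABLE dbo.test2(
KeyId numeric(18, 0) PRIMARY KEY NOT NULL,
intNumTest numeric(18, 0) NULL);
END
GO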
I am slowly working through a feature where I am importing large CSV files. There is a chance that the contents of a CSV file, once uploaded, will trigger a uniqueness conflict. I've combed Stack Overflow for similar resources, but I still can't seem to get my trigger to update another table when a duplicate entry is found. The following code is what I have currently implemented for this process. It lives in a Rails app, but the underlying SQL is the following.
When a user uploads a file, the following happens when it is processed.
CREATE TEMP TABLE codes_temp ON COMMIT DROP AS SELECT * FROM codes WITH NO DATA;
create or replace function log_duplicate_code()
returns trigger
language plpgsql
as
$$
begin
insert into duplicate_codes(id, campaign_id, code_batch_id, code, code_id, created_at, updated_at)
values (gen_random_uuid(), excluded.campaign_id, excluded.code_batch_id, excluded.code, excluded.code_id, now(), now());
return null;
end;
$$;
create trigger log_duplicate_code
after insert on codes
for each row execute procedure log_duplicate_code();
INSERT INTO codes SELECT * FROM codes_temp ct
ON CONFLICT (campaign_id, code)
DO update set updated_at = excluded.updated_at;
DROP TRIGGER log_duplicate_code ON codes;
When I try to run this process, nothing happens at all. If I upload a file with the value CODE01 and then upload it again with CODE01, the duplicate_codes table doesn't get populated at all and I don't understand why. No error gets raised either, so it seems like DO UPDATE ... is doing something. What am I missing here?
I also have some questions that come to mind even if this were to work as intended, since I am uploading millions of these codes.
1) Should my trigger be a statement trigger instead of a row trigger, for scalability?
2) What if someone else tries to upload another file that has millions of codes? I have my code wrapped in a transaction. Would a new, separate trigger be created? Would this conflict with a previously running upload?
####### EDIT #1 #######
Thanks to Adriens' comment I do see that AFTER INSERT does not have the OLD keyword. I updated my code to use EXCLUDED and I receive the following error for the trigger:
ERROR: missing FROM-clause entry for table "excluded" (PG::UndefinedTable)
Finally, here are the S.O. posts I've used to try to tailor my code, but I just can't seem to make it work.
####### EDIT #2 #######
Here is a little more context on how this is implemented.
When the CSV is loaded, a staging table called codes_temp is created and dropped at the end of the transaction. This table has no unique constraints. From what I read, only the actual table that I want to insert codes into should raise the unique constraint error.
In my INSERT statement, the DO update set updated_at = excluded.updated_at; doesn't trigger a unique constraint error. As of right now, I don't know whether it should or not. I borrowed this logic from the S.O. question postgresql log into another table with on conflict; it seemed to me like I had to update something if I specified the DO UPDATE SET clause.
Lastly, the correct criteria for codes in the database are as follows.
For example, here are some entries in my codes table:
id, campaign_id, code
1, 1, CODE01
2, 1, CODE02
3, 1, CODE03
If any of these codes appears again in a later upload, it should not be inserted into the codes table; it needs to be inserted into the duplicate_codes table instead, because it was already uploaded before.
id, campaign_id, code
1, 1, CODE01
2, 1, CODE02
3, 1, CODE03
As for the codes_temp table, I don't have any unique constraints on it, so there is no criterion for selecting the right row.
postgresql log into another table with on conflict
Postgres insert on conflict update using other table
Postgres on conflict - insert to another table
How to do INSERT INTO SELECT and ON DUPLICATE UPDATE in PostgreSQL 9.5?
Seems to me something like:
INSERT INTO
codes
SELECT
distinct on(campaign_id, code) *
FROM
codes_temp ct
ORDER BY
campaign_id, code, id DESC;
Assuming id was assigned sequentially, the above would select the most recent row into codes.
Then:
INSERT INTO
duplicate_codes
SELECT
*
FROM
codes_temp AS ct
LEFT JOIN
codes
ON
ct.id = codes.id
WHERE
codes.id IS NULL;
The above would select the rows in codes_temp that were not selected into codes and insert them into the duplicates table.
Obviously this is not tested on your data set. I would create a small test data set that has uniqueness conflicts and test with that.
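For instance, a minimal, hypothetical test set-up might look like the following. The table definitions are simplified assumptions based on the columns mentioned in the question, and the second INSERT selects ct.* so that only the staging table's columns are written to duplicate_codes:
CREATE TABLE codes (
    id bigint PRIMARY KEY,
    campaign_id bigint,
    code text,
    UNIQUE (campaign_id, code)
);
CREATE TABLE duplicate_codes (LIKE codes);
CREATE TEMP TABLE codes_temp (LIKE codes);
-- rows 1-3 are "new"; row 4 repeats the (campaign_id, code) pair of row 1
INSERT INTO codes_temp VALUES
    (1, 1, 'CODE01'),
    (2, 1, 'CODE02'),
    (3, 1, 'CODE03'),
    (4, 1, 'CODE01');
-- first statement: keep one row per (campaign_id, code), preferring the highest id
INSERT INTO codes
SELECT DISTINCT ON (campaign_id, code) *
FROM codes_temp
ORDER BY campaign_id, code, id DESC;
-- second statement: everything that did not make it into codes goes to duplicate_codes
INSERT INTO duplicate_codes
SELECT ct.*
FROM codes_temp ct
LEFT JOIN codes ON ct.id = codes.id
WHERE codes.id IS NULL;
-- expected outcome: codes holds ids 2, 3 and 4; duplicate_codes holds id 1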
Currently I'm working on a simple library project using Embarcadero C++Builder 10.3 Community Edition, and Firebird and FlameRobin to create databases.
So far, I have only needed simple queries connected to a single database. Therefore, I used TFDConnection and TFDPhysFbDriverLink to connect to a .fdb file, then TFDQuery to create SQL commands, and TDataSource. It works great.
Unfortunately, now I must join two tables. How do I write this command? I tried this:
SELECT * FROM users_books
join books on
users_books.id_book = books.id
where users_books and books are databases.
I got an error:
SQL error code = -204
Table unknown
BOOKS.
So I think I must connect somehow to these two databases simultaneously. How to do that?
Firebird databases are isolated and don't know about other databases. As a result, it is not possible to join tables across databases with a normal select statement.
What you can do is use PSQL (Procedural SQL), for example in an EXECUTE BLOCK. You can then use FOR EXECUTE STATEMENT ... ON EXTERNAL to loop over the table in the other database, and then 'manually' join the local table using FOR SELECT (or vice versa).
For example (assuming a table user_books in the remote database, and a table books in the current database):
execute block
returns (book_id integer, book_title varchar(100), username varchar(50))
as
begin
for execute statement 'select book_id, username from user_books'
on external 'users_books' /* may need AS USER and PASSWORD clause as well */
into book_id, username do
begin
for select book_title from books where id = :book_id
into book_title do
begin
suspend;
end
end
end
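If the remote database needs explicit credentials, the ON EXTERNAL clause can be extended with AS USER and PASSWORD; a sketch of the same block with those clauses added (the connection string, user name and password below are placeholders):
execute block
returns (book_id integer, book_title varchar(100), username varchar(50))
as
begin
  for execute statement 'select book_id, username from user_books'
      on external 'server:C:\data\users_books.fdb'
      as user 'SYSDBA' password 'masterkey'
      into book_id, username do
  begin
    for select book_title from books where id = :book_id
        into book_title do
    begin
      suspend;
    end
  end
end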
I have a function to insert data from one table to another:
$BODY$
BEGIN
INSERT INTO backups.calls2 (uid,queue_id,connected,callerid2)
SELECT distinct (c.uid) ,c.queue_id,c.connected,c.callerid2
FROM public.calls c
WHERE c.connected is not null;
RETURN;
EXCEPTION WHEN unique_violation THEN NULL;
END;
$BODY$
And the structure of the table:
CREATE TABLE backups.nc_calls_id
(
uid character(30) NOT NULL,
queue_id integer,
callerid2 text,
connected timestamp without time zone,
id serial NOT NULL,
CONSTRAINT calls2_pkey PRIMARY KEY (uid)
)
WITH (
OIDS=FALSE
);
When I first executed this query, everything went OK: 200,000 rows were inserted into the new table with unique ids.
But now, when I execute it again, no rows are inserted.
From the rather minimalist description given (no PostgreSQL version, no CREATE FUNCTION statement showing params etc, no other table structure, no function invocation) I'm guessing that you're attempting to do a merge, where you insert a row only if it doesn't exist by skipping rows if they already exist.
What the above function will do is skip all rows if any row already exists.
You need to either use a loop to do the insert within individual BEGIN ... EXCEPTION blocks (slow) or LOCK the table and do an INSERT INTO ... SELECT ... FROM newtable WHERE NOT EXISTS (SELECT 1 FROM oldtable where oldtable.key = newtable.key).
The INSERT INTO ... SELECT ... WHERE NOT EXISTS method will perform a lot better but will fail if more than one runs concurrently or if anything else inserts into the destination table at the same time. LOCKing the destination table before running it will make sure it's safe.
The PL/PgSQL looping BEGIN ... EXCEPTION method sounds nice and safe at first glance. Then you think about what happens when you run two of them at once. One will insert some keys first, one will insert other keys first, so they have a split of the values between them. That's OK, together they make up the full set. But what if only one of them commits and the other fails for some reason? You'll have an interesting sparsely inserted result. For that reason it's probably best to lock the destination table if using this approach too ... in which case you might as well use the vastly more efficient single pass INSERT with subquery-based uniqueness violation check.
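A sketch of the locked, single-pass variant, using the table and column names from the question (DISTINCT ON is substituted here to keep one row per uid, since uid is the destination's primary key):
BEGIN;
-- block concurrent writers to the destination so the NOT EXISTS check stays valid
LOCK TABLE backups.calls2 IN EXCLUSIVE MODE;
INSERT INTO backups.calls2 (uid, queue_id, connected, callerid2)
SELECT DISTINCT ON (c.uid) c.uid, c.queue_id, c.connected, c.callerid2
FROM public.calls c
WHERE c.connected IS NOT NULL
  AND NOT EXISTS (
      SELECT 1 FROM backups.calls2 b WHERE b.uid = c.uid
  );
COMMIT;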
I have a system that has been running on Windows Server 2003 and SQL Server 2000 for a number of years without a problem. Recently I moved the system to Windows Server 2008 R2 and SQL Server 2008 R2 and have started get a very strange intermittent problems.
I am inserting a row into a table with an identity column and executing a select to retrieve the identity value. Most of the time this code works fine, but at random intervals the identity isn't returned. Sometimes no row is found, so dr.Read returns false; other times I get a row back but there is no identity column in it. I checked the database and the insert has succeeded; it's just that the identity isn't returned. There are no triggers on the table.
I also tried changing the SQL to:
INSERT INTO test (value)
VALUES (100)
SELECT SCOPE_IDENTITY() AS 'identity' OPTION (MAXDOP 1)
in case I was running into the 'max degree of parallelism' bug but that didn't help either.
Here is relevant code (slightly simplified for illustration):
Dim dr As SqlDataReader = Nothing
Dim objCommand As New SqlCommand
Dim oConn As New SqlConnection
Dim id as integer
oConn.ConnectionString = connStr
oConn.open()
objCommand = New SqlCommand("INSERT INTO test (value) VALUES (100) SELECT SCOPE_IDENTITY() AS 'identity'", oConn, tr)
try
dr = objCommand.ExecuteReader
If Not dr.Read Then
Throw (New Exception("Could not read IDENTITY"))
Else
id = dr("identity")
End If
Catch
Throw
Finally
If Not dr Is Nothing Then dr.Close()
oConn.close()
End Try
What if you simply execute them as two separate statements?
objCommand = New SqlCommand("INSERT INTO test (value) VALUES (100); SELECT SCOPE_IDENTITY() AS 'identity'", oConn, tr)
SCOPE_IDENTITY gives you "the last identity value inserted into an identity column in the same scope."
The other option on SQL Server 2008 R2 would be to use the OUTPUT clause:
INSERT INTO test (value)
OUTPUT INSERTED.ID
VALUES (100)
This will output the ID of the row that was inserted - and that's the identity value you want.
From your code, you should be able to fetch that ID if you use ExecuteScalar (instead of ExecuteReader) and just convert the object you get back to an integer.
I'm using SQL Query Analyzer to build a report from the database on one machine (A), and I'd like to create a temp table on a database server on another machine(B) and load it with the data from machine A.
To be more specific, I have a report that runs on machine A (machine.a.com), pulling from schema tst. Using SQL Query Analyzer, I log into the server at machine.a.com and then have access to the tst schema:
USE tst;
SELECT *
FROM prospect;
I would like to create a temp table from this query window, only I'd like it built on another machine (call it machine.b.com). What syntax would I use for this? My guess is something like:
CREATE TABLE machine.b.com.#temp_prospect_list(name varchar(45) Not Null, id decimal(10) Not Null);
And then I'd like to load this new table with something like:
INSERT INTO machine.b.com.#temp_prospect_list VALUES (
USE tst;
SELECT *
FROM prospect; );
The syntax to access a remote server in T-SQL is to fully qualify any table name with the following four-part name (brackets included when necessary):
[LinkedServer].[RemoteDatabase].[Schema].[Table]
So, for example, to run a SELECT statement on one server that accesses a table on another server:
SELECT * FROM [machine.b.com].tst.dbo.table7;
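To load the data from machine A into a table on machine B, the same four-part naming can be used as the INSERT target. A sketch, assuming a linked server named [machine.b.com] has already been configured (for example with sp_addlinkedserver) and that the target is a permanent staging table on the remote server, since a local #temp table cannot be addressed through a four-part name (the remote database and staging table names below are placeholders):
INSERT INTO [machine.b.com].[RemoteDb].dbo.prospect_list_staging (name, id)
SELECT name, id
FROM tst.dbo.prospect;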