insert into select - stored procedure affects 0 rows - select

I am using SQL Server 2014. I created a stored procedure to update a table, but when I run it, it affects 0 rows. I'm expecting to see 501 rows affected, as the actual INSERT statement returns that when run alone. The table being updated is pre-populated.
I also tried pre-populating the table with 500 records to see if the stored procedure would pull the last row, but it still affects 0 rows.
Create PROCEDURE UPDATE_STAGING
(@StatementType NVARCHAR(20) = '')
AS
BEGIN
    IF @StatementType = 'Insertnew'
    BEGIN
        INSERT INTO owner.dbo.MVR_Staging
        (
            policy_number,
            quote_number,
            request_id,
            CreateTs,
            mvr_response_raw_data
        )
        select
            p.pol_num,
            A.pol_number,
            R.Request_ID,
            R.CreateTS,
            R._raw_data
        from TABLE1 A with (NOLOCK)
        left join TABLE2 R with (NOLOCK)
            on R.Request_id = isnull(A.CACHE_REQUEST_ID, A.Request_id)
        inner join TABLE3 P
            on p.quote_policy_num = a.policy_number
        where
            A.[SOURCE] = 'MVR'
            and A.CREATED_ON >= '2020-01-01'
    END
    IF @StatementType = 'Select'
    BEGIN
        SELECT *
        FROM owner.dbo.MVR_Staging
    END
END
to run:
exec UPDATE_STAGING insertnew
GO

Some corrections to your code that are not related to your issue, but are good to keep as best practice for clean code. When declaring a stored procedure parameter, there is no point in using parentheses: (@StatementType NVARCHAR(20) = ''). Also, you should be using ELSE IF @StatementType = 'Select'; without ELSE, the second IF condition is always evaluated. Execute the procedure as exec UPDATE_STAGING 'insertnew', since the parameter is NVARCHAR. As for your real issue, you could try commenting out the INSERT part and leaving only the SELECT, to see whether any rows are returned.
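Putting those corrections together, the procedure might look like the sketch below (table and column names are taken from the question as-is; only the parameter declaration, the ELSE IF, and the call syntax change):

```sql
CREATE PROCEDURE UPDATE_STAGING
    @StatementType NVARCHAR(20) = ''   -- no parentheses needed around the parameter list
AS
BEGIN
    IF @StatementType = 'Insertnew'
    BEGIN
        INSERT INTO owner.dbo.MVR_Staging
            (policy_number, quote_number, request_id, CreateTs, mvr_response_raw_data)
        SELECT p.pol_num,
               A.pol_number,
               R.Request_ID,
               R.CreateTS,
               R._raw_data
        FROM TABLE1 A WITH (NOLOCK)
        LEFT JOIN TABLE2 R WITH (NOLOCK)
            ON R.Request_id = ISNULL(A.CACHE_REQUEST_ID, A.Request_id)
        INNER JOIN TABLE3 P
            ON P.quote_policy_num = A.policy_number
        WHERE A.[SOURCE] = 'MVR'
          AND A.CREATED_ON >= '2020-01-01';
    END
    ELSE IF @StatementType = 'Select'   -- ELSE IF: skipped when the first branch ran
    BEGIN
        SELECT * FROM owner.dbo.MVR_Staging;
    END
END
GO

-- Pass the argument as a string literal:
EXEC UPDATE_STAGING 'Insertnew';
```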

Related

Is it worth Parallel/Concurrent INSERT INTO... (SELECT...) to the same Table in Postgres?

I was attempting an INSERT INTO ... (SELECT ...) (inserting a batch of rows from a SELECT subquery) onto the same table in my database. For the most part it was working; however, I did see a "Deadlock" exception logged every now and then. Does it make sense to do this, or is there a way to avoid a deadlock scenario? At a high level, my queries both resemble this structure:
CREATE OR REPLACE PROCEDURE myConcurrentProc() LANGUAGE plpgsql
AS $procedure$
DECLARE
BEGIN
LOOP
EXIT WHEN row_count = 0;
WITH cte AS (SELECT *
FROM TableA tbla
WHERE EXISTS (SELECT 1 FROM TableB tblb WHERE tblb.id = tbla.id))
INSERT INTO concurrent_table (SELECT id FROM cte);
COMMIT;
UPDATE log_tbl
SET status = 'FINISHED'
WHERE job_name = 'tblA_and_B_job';
END LOOP;
END
$procedure$;
And the other script that runs in parallel and INSERTs into the same table is basically:
CREATE OR REPLACE PROCEDURE myConcurrentProc() LANGUAGE plpgsql
AS $procedure$
DECLARE
BEGIN
LOOP
EXIT WHEN row_count = 0;
WITH cte AS (SELECT *
FROM TableC tblc
WHERE EXISTS (SELECT 1 FROM TableD d WHERE d.id = tblc.id))
INSERT INTO concurrent_table (SELECT id FROM cte);
COMMIT;
UPDATE log_tbl
SET status = 'FINISHED'
WHERE job_name = 'tbl_C_and_D_job';
END LOOP;
END
$procedure$;
So you can see I'm querying two different tables in each script, but inserting into the same concurrent_table. I also have the UPDATE statement that writes to a log table, so I suppose that could also cause issues. Is there any way to use BEGIN ... END and COMMIT here to avoid deadlock/concurrency issues, or should I just create a second table to hold the "tbl_C_and_D_job" data?

How to nest a SELECT into a UPDATE statement in PL/pgSQL

I have the below code, which works well; the problem is I am creating a table each time, which means I need to recreate all indexes and delete the old tables once the new ones have been created.
DO
$do$
DECLARE
m text;
arr text[] := array['e09000001','e09000007','e09000033','e09000019'];
BEGIN
FOREACH m IN ARRAY arr
LOOP
EXECUTE format($fmt$
CREATE TABLE %I AS
SELECT a.ogc_fid,
a.poly_id,
a.title_no,
a.wkb_geometry,
a.distcode,
SUM(COALESCE((ST_Area(ST_Intersection(a.wkb_geometry, b.wkb_geometry))/ST_Area(a.wkb_geometry))*100, 0)) AS aw
FROM %I a
LEFT OUTER JOIN filter_ancientwoodlands b ON
ST_Overlaps(a.wkb_geometry, b.wkb_geometry) OR ST_Within(b.wkb_geometry, a.wkb_geometry)
GROUP BY a.ogc_fid,
a.poly_id,
a.title_no,
a.wkb_geometry,
a.distcode;
$fmt$, m || '_splitv2_aw', m || '_splitv2_distcode');
END LOOP;
END
$do$
Instead I would like to just create a new column in the existing table and update it. I have done this with simple queries like:
ALTER TABLE e09000001 ADD COLUMN area double precision;
UPDATE e09000001 SET area=ST_AREA(wkb_geometry);
I am having a lot of trouble figuring out how to use UPDATE and SET with my more complicated SELECT statement above. Does anyone know how I can achieve this?
UPDATE: So I tried doing what @abelisto suggested:
UPDATE test_table
SET aw = subquery.aw_temp
FROM (SELECT SUM(COALESCE((ST_Area(ST_Intersection(a.wkb_geometry, b.wkb_geometry))/ST_Area(a.wkb_geometry))*100, 0)) AS aw_temp
FROM test_table a
LEFT OUTER JOIN filter_ancientwoodlands b ON
ST_Overlaps(a.wkb_geometry, b.wkb_geometry) OR ST_Within(b.wkb_geometry, a.wkb_geometry)
GROUP BY a.ogc_fid,
a.poly_id,
a.title_no,
a.wkb_geometry,
a.distcode) AS subquery;
But the query just runs for a long time (going on an hour now) when it should only take a few seconds. Can anyone see an error in my code?
You need a WHERE clause to join the FROM expression to the table being updated.
Perhaps like this:
UPDATE test_table
SET aw = subquery.aw_temp
FROM (SELECT SUM(COALESCE((ST_Area(ST_Intersection(a.wkb_geometry, b.wkb_geometry))/ST_Area(a.wkb_geometry))*100, 0)) AS aw_temp,a.wkb_geometry
FROM test_table a
LEFT OUTER JOIN filter_ancientwoodlands b ON
ST_Overlaps(a.wkb_geometry, b.wkb_geometry) OR ST_Within(b.wkb_geometry, a.wkb_geometry)
GROUP BY a.ogc_fid,
a.poly_id,
a.title_no,
a.wkb_geometry,
a.distcode) AS subquery
WHERE
subquery.wkb_geometry = test_table.wkb_geometry;

Unexpected difference between two pieces of sql

I have two pieces of SQL which I think should give identical results, but they do not. They both involve this function:
create or replace function viewFromList( lid integer, offs integer, lim integer ) returns setof resultsView as $xxx$
BEGIN
return query select resultsView.* from resultsView, list, files
where list_id=lid and
resultsView.basename = files.basename and
idx(imgCmnt,file_id) > 0
order by idx(imgCmnt,file_id)
limit lim offset offs ;
return;
END;
$xxx$ language plpgsql;
The first is:
drop table if exists t1;
create temp table t1 as select * from viewFromList( lid, frst, nmbr::integer );
select count(*) into rv.height from t1;
The second is:
DECLARE
t1 resultsView;
....
select viewFromList( lid, frst, nmbr::integer ) into t1;
select count(*) into rv.height from t1;
Seems to me, rv.height should get the same value in both cases. It doesn't. If it matters, the correct answer in my case is 7; the second piece of code produces 12. I have, of course, looked at the result of the call to viewFromList with the appropriate values. When run in psql, it returns the expected 7 rows.
Can someone tell me what's going on?
Thanks.

using recursive CTE within a function

I wanted to try out using a recursive CTE for the first time, so I wrote a query to show the notes in a musical scale based on the root note and the steps given the different scales.
When running the script itself all is well, but the second I try to make it into a function, I get the error: relation "temp_scale_steps" does not exist.
I am using PostgreSQL 9.4.1. I can't see any reason why this would not work. Any advice would be gratefully received.
The code below:
create or replace function scale_notes(note_id int, scale_id int)
returns table(ordinal int, note varchar(2))
as
$BODY$
drop table if exists temp_min_note_seq;
create temp table temp_min_note_seq
as
select min(note_seq_id) as min_note_id from note_seq where note_id = $1
;
drop table if exists temp_scale_steps;
create temp table temp_scale_steps
as
with recursive steps (ordinal, step) as
(
select ordinal
,step
from scale_steps
where scale_id = $2
union all
select ordinal+1
,step
from steps
where ordinal < (select max(ordinal) from scale_steps where scale_id = $2)
)
select ordinal
,sum(step) as temp_note_seq_id
from steps
group by 1
order by 1
;
select x.ordinal
,n.note
from
(
select ordinal
,min_note_id + temp_note_seq_id as temp_note_seq_id
from temp_scale_steps
join temp_min_note_seq on (1=1)
) x
join note_seq ns on (x.temp_note_seq_id = ns.note_seq_id)
join notes n on (ns.note_id = n.note_id)
order by ordinal;
$BODY$
language sql volatile;
In response to comments, I have changed the script so that the query is done in one step, and now all works. However, I would still be interested to know why the version above does not work.
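For reference, "done in one step" might look like the sketch below: this is only a guess at the rewrite, folding the temp tables into CTEs so the whole body is a single statement (the original table and column names are kept; nothing here is confirmed by the question):

```sql
create or replace function scale_notes(note_id int, scale_id int)
returns table(ordinal int, note varchar(2))
as
$BODY$
with recursive min_note as (
    -- replaces temp_min_note_seq
    select min(note_seq_id) as min_note_id from note_seq where note_id = $1
), steps (ordinal, step) as (
    select ordinal, step from scale_steps where scale_id = $2
    union all
    select ordinal + 1, step
    from steps
    where ordinal < (select max(ordinal) from scale_steps where scale_id = $2)
), scale as (
    -- replaces temp_scale_steps
    select ordinal, sum(step) as temp_note_seq_id
    from steps
    group by 1
)
select s.ordinal, n.note
from scale s
join min_note m on true
join note_seq ns on (m.min_note_id + s.temp_note_seq_id = ns.note_seq_id)
join notes n on (ns.note_id = n.note_id)
order by s.ordinal;
$BODY$
language sql stable;
```

With no temp tables, the single-statement body avoids the parse-time dependency that produced the "relation does not exist" error.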

Array-like access to variables in T-SQL

In my stored procedure I have multiple similar variables @V1, @V2 ... @V20 (let's say 20 of them), FETCHed from a record. How would I use dynamic SQL to make 20 calls to another stored procedure using those variables as parameters?
Of course the @V[i] syntax is incorrect, but it expresses the intent:
fetch next from maincursor into @status, @V1, @V2, ...
while @i < 21
begin
    -- ??? execute sp_executesql 'SecondSP', '@myParam int', @myParam = @V[i]
    -- or
    -- ??? execute SecondSP @V[i]
    set @i = @i + 1
end
As others have said, set up a temporary table, insert the values that you need into it. Then "iterate" through it executing the necessary SQL from those values. This will allow you to have 0 to MANY values to be executed, so you don't have to set up a variable for each.
The following is a complete sample of how you may go about doing that without cursors.
SET NOCOUNT ON
DECLARE @dict TABLE (
    id INT IDENTITY(1,1), -- a unique identity column for reference later
    value VARCHAR(50),    -- your parameter value to be passed into the procedure
    executed BIT          -- BIT to mark a record as being executed later
)
-- INSERT YOUR VALUES INTO @dict HERE
-- Set executed to 0 (so that the execution process will pick it up later)
-- This may be a SELECT statement from another table in your database to load the values into @dict
INSERT @dict
SELECT 'V1Value', 0 UNION ALL
SELECT 'V2Value', 0
DECLARE @currentid INT
DECLARE @currentvalue VARCHAR(50)
WHILE EXISTS (SELECT * FROM @dict WHERE executed = 0)
BEGIN
    -- Get the next record to execute
    SELECT TOP 1 @currentid = id
    FROM @dict
    WHERE executed = 0
    -- Get the parameter value
    SELECT @currentvalue = value
    FROM @dict
    WHERE id = @currentid
    -- EXECUTE THE SQL HERE
    --sp_executesql 'SecondSP', '@myParam int', @myParam =
    PRINT 'SecondSP ' + '@myParam int ' + '@myParam = ' + @currentvalue
    -- Mark record as having been executed
    UPDATE d
    SET executed = 1
    FROM @dict d
    WHERE id = @currentid
END
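Where the sample PRINTs a placeholder, the real call could use sp_executesql with a parameterized batch. A sketch (SecondSP and @myParam are the hypothetical names from the question; the parameter type is guessed from @currentvalue):

```sql
-- Replaces the PRINT placeholder inside the loop above.
EXEC sp_executesql
    N'EXEC SecondSP @myParam;',   -- batch to run
    N'@myParam VARCHAR(50)',      -- parameter definition string
    @myParam = @currentvalue;     -- bind the loop's current value
```

Passing the value as a bound parameter, rather than concatenating it into the string, avoids quoting problems and SQL injection.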
Use a #TempTable
If you are on SQL Server 2005 or later, you can create a #TempTable in the parent stored procedure, and it is available in the child stored procedure that it calls.
CREATE TABLE #TempTable
(col1 datatype
,col2 datatype
,col3 datatype
)
INSERT INTO #TempTable
(col1, col2, col3)
SELECT
col1, col2, col3
FROM ...
EXEC @ReturnCode = YourOtherProcedure
Within the other procedure, you have access to #TempTable to SELECT, DELETE, etc.
Make that child procedure work on a set of data, not on one element at a time.
Remember: in SQL, loops suck performance away!
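A sketch of the child side of that pattern (the procedure and column names are made up for illustration): the child can read #TempTable directly, because a local temp table created by a caller is visible to nested procedures in the same session.

```sql
-- Hypothetical child procedure: it sees #TempTable created by its caller.
CREATE PROCEDURE YourOtherProcedure
AS
BEGIN
    -- Operate on the whole set at once instead of looping row by row
    SELECT col1, col2, col3
    FROM #TempTable;

    RETURN 0;
END
```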
Why not just use a table variable instead, and loop through the table getting each value?
Basically, treat each row in the table as your array cell, using a table that has one column.
Just a thought. :)
This seems like an odd request: will you always have a fixed set of variables? What if the number changes from 20 to 21, and so on; are you constantly going to have to declare new variables?
Is it possible, instead of retrieving the values into separate variables, to return them each as individual rows and just loop through them in a cursor?
If not, and you have to use the individual variables as explained, here's one solution:
declare @V1 nvarchar(100)
set @V1 = 'hi'
declare @V2 nvarchar(100)
set @V2 = 'bye'
declare @V3 nvarchar(100)
set @V3 = 'test3'
declare @V4 nvarchar(100)
set @V4 = 'test4'
declare @V5 nvarchar(100)
set @V5 = 'end'
declare aCursor cursor for
select @V1
union select @V2 union select @V3
union select @V4 union select @V5
open aCursor
declare @V nvarchar(100)
fetch next from aCursor into @V
while @@FETCH_STATUS = 0
begin
    exec TestParam @V
    fetch next from aCursor into @V
end
close aCursor
deallocate aCursor
I don't really like this solution; it seems messy and unscalable. Also, as a side note: the way you phrased your question suggests you are asking whether T-SQL has arrays. Natively it does not, although a quick search can point you toward workarounds if you absolutely need them.