Inconsistent cursor results when looping over databases - tsql

I have several databases with very similar names (my-db-1, my-db-2, my-db-3, my-db-4). I want to execute the same stored procedure on each of these databases, so I decided to use a cursor. However, I am getting some strange results. First, here is the simple code I am executing through SQL Server Management Studio 2008:
DECLARE @db_cursor CURSOR
DECLARE @name varchar(255)
DECLARE @Sql nvarchar(4000)

SET @db_cursor = CURSOR FOR
    SELECT name FROM sys.databases WHERE name LIKE 'my-db-%'

OPEN @db_cursor
FETCH NEXT FROM @db_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Sql = 'USE [' + @name + ']; PRINT DB_NAME();'
    EXEC sp_sqlexec @Sql
    FETCH NEXT FROM @db_cursor INTO @name
END
CLOSE @db_cursor
DEALLOCATE @db_cursor
Executing this multiple times in a row within 2 seconds, I get strange results:
Execution1:
my-db-1
my-db-2
my-db-3
my-db-4
Execution2:
my-db-1
my-db-2
Execution3:
my-db-1
my-db-2
my-db-3
my-db-4
Execution4:
my-db-1
It seems completely random. Sometimes I'll get all 4 databases printed after 10 executions; sometimes after just 2 executions only 1 database gets printed.
This SQL is executing on Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Developer Edition (64-bit) on Windows NT 6.1 (Build 7600: ) through Microsoft SQL Server Management Studio 10.50.1600.1
Does anyone have any ideas?

Try declaring your cursor as FAST_FORWARD.
The default is an updateable cursor, and the update locks it requires probably conflict with another process accessing sys.databases.
Ref.: DECLARE CURSOR
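A minimal sketch of the same loop with a FAST_FORWARD cursor variable (I've also swapped the undocumented sp_sqlexec for sp_executesql and used QUOTENAME; both are my substitutions, not part of the fix itself):

DECLARE @db_cursor CURSOR
DECLARE @name sysname
DECLARE @Sql nvarchar(4000)

-- FAST_FORWARD = forward-only, read-only: no update locks are taken
SET @db_cursor = CURSOR FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE name LIKE 'my-db-%'

OPEN @db_cursor
FETCH NEXT FROM @db_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Sql = N'USE ' + QUOTENAME(@name) + N'; PRINT DB_NAME();'
    EXEC sp_executesql @Sql
    FETCH NEXT FROM @db_cursor INTO @name
END
CLOSE @db_cursor
DEALLOCATE @db_cursor

Alternatively, a STATIC cursor would also decouple the loop from concurrent changes to sys.databases, since it works from a snapshot in tempdb.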

Related

PGAgent - Scheduling a job to drop my old partitions

I have a script below which runs fine when I execute it directly via pgAdmin; however, when I schedule it to run using pgAgent, the run shows as successful but my partitions are still untouched.
Below is the PostgreSQL version our company is currently using.
"PostgreSQL 14.2 (EnterpriseDB Advanced Server 14.2.1) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1), 64-bit"
DECLARE
    TYPE tbl_arr IS VARRAY(5) OF VARCHAR2(30);
    partncnt INTEGER;
    drop_sql VARCHAR2(500);
    tbnames tbl_arr;
    t_partition_name user_tab_partitions.partition_name%TYPE;
BEGIN
    tbnames := tbl_arr('MYTABLE');
    FOR i IN 1..1 LOOP
        SELECT count(1) INTO partncnt FROM user_tab_partitions WHERE table_name = tbnames(i);
        IF partncnt > 7 THEN
            SELECT partition_name INTO t_partition_name FROM user_tab_partitions WHERE table_name = tbnames(i) AND partition_position = 2;
            drop_sql := 'alter table ' || tbnames(i) || ' drop partition ' || t_partition_name;
            EXECUTE IMMEDIATE drop_sql;
        ELSE
            DBMS_OUTPUT.PUT_LINE('No Partition to drop');
        END IF;
    END LOOP;
END;
Checking its last run, you can see that it ran successfully:
[screenshot of the successfully run job]
Managed to find the root cause: apparently, for Postgres you need to ensure that pgAgent runs as the correct user/schema. As soon as we had it running that way, the job executed successfully based on the interval I had set.
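If anyone else hits this, a quick sanity check (my suggestion, not part of the original fix) is to schedule a throwaway pgAgent step like the one below and compare its output with your interactive session; a mismatch in user or search_path explains why the same block behaves differently under the agent:

DO $$
BEGIN
    -- log which user and schema resolution the job step actually runs under
    RAISE NOTICE 'user=%, search_path=%', current_user, current_setting('search_path');
END;
$$;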

Writing a for-loop using psql

I am trying to run a for loop via the psql command on Linux to do some spatial operations in a PostGIS database.
My SQL command, which I successfully ran in DBeaver, looks something like this:
DO $$
DECLARE
    the_name varchar(50);
BEGIN
    FOR the_name IN
        SELECT "name" FROM myschema.mytable WHERE "idcolumn" = 'selected id' LIMIT 4
    LOOP
        -- some operations inside the loop...
    END LOOP;
    RAISE NOTICE 'the name - (%)', the_name;
END;
$$;
I usually copy-paste the DBeaver SQL command inside psql -c and it runs fine, but I am getting a syntax error when using the above code. I am a beginner in SQL and any help is appreciated. Thanks.
Edit
This is the error message - it's weird because I don't have this number in the code. It's probably reading from memory or something.
ERROR: syntax error at or near "377643"
LINE 1: DO 377643 declare the_name varchar(50); BEGIN FOR the_...
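That stray number is a strong hint that the shell, not psql, is rewriting the command: inside double quotes (or unquoted), Bash expands $$ to its own process ID, which destroys the dollar-quoting around the DO block. A sketch of an invocation that keeps $$ intact, using a quoted heredoc (mydb stands in for your database name; putting the block in a file and running psql -f script.sql avoids the problem the same way):

psql -d mydb <<'EOF'
DO $$
DECLARE
    the_name varchar(50);
BEGIN
    FOR the_name IN
        SELECT "name" FROM myschema.mytable WHERE "idcolumn" = 'selected id' LIMIT 4
    LOOP
        -- the notice is moved inside the loop here so each name prints
        RAISE NOTICE 'the name - (%)', the_name;
    END LOOP;
END;
$$;
EOF

The quoted 'EOF' delimiter tells the shell to pass everything through without expansion.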

Flyway migration error with DB2 11.1 SP including pure xml DDL

I have a fairly complex Db2 V11.1 SP that compiles and deploys manually, but when I add the SQL to a migration script I get this issue:
https://github.com/flyway/flyway/issues/2795
As the SP compiles and deploys manually, I am confident the SP SQL is OK.
Does anyone have any idea what the underlying issue might be?
DB2 11.1
Flyway 6.4.1 (I have tried 7.x versions with the same result)
The SP uses pureXML functions, so the SP SQL includes $ and # characters.
I tried using obscure statement terminator chars (~, ^), but a simple test with pureXML functions and # as the statement terminator seemed to work:
--#SET TERMINATOR #
SET SCHEMA CORE
#
CREATE OR REPLACE PROCEDURE CORE.XML_QUERY
LANGUAGE SQL
BEGIN
    DECLARE GLOBAL TEMPORARY TABLE OPTIONAL_ELEMENT (
        LEG_SEG_ID BIGINT,
        OPTIONAL_ELEMENT_NUM INTEGER,
        OPTIONAL_ELEMENT_LIST VARCHAR(100),
        CLSEQ INTEGER
    ) ON COMMIT PRESERVE ROWS NOT LOGGED WITH REPLACE;

    insert into session.optional_element
    select distinct LEG_SEG_ID, A.OPTIONAL_ELEMENT_NUM, A.OPTIONAL_ELEMENT_LIST, A.CLSEQ
    from core.leg_seg, XMLTABLE('$d/LO/O' passing XMLPARSE(DOCUMENT(optional_element_xml)) as "d"
        COLUMNS
            OPTIONAL_ELEMENT_NUM INTEGER PATH '@Num',
            OPTIONAL_ELEMENT_LIST VARCHAR(100) PATH 'text()',
            CLSEQ INTEGER PATH '@Seq') AS A
    WHERE iv_id = 6497222690 and optional_element_xml is not null;
END
#
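One thing worth ruling out (an assumption on my part; the linked issue doesn't confirm it): by default Flyway substitutes ${...} placeholders in migration scripts, so XQuery variables such as $d can confuse the parser. Placeholder replacement can be switched off in flyway.conf:

# stop Flyway from interpreting ${...} sequences in the migration SQL
flyway.placeholderReplacement=false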

CDC in SS 2008 R2 not capturing data, but no errors either

First post, so I'll get right to it. Thank you in advance for your answers and consideration.
I have full privileges on the database engine that the DB in question is running on, including sysadmin.
To the best of my knowledge, I have enabled this correctly according to the documentation, doing the following:
Running the command EXEC sys.sp_cdc_enable_db via a C# application that I am using as an interface for setting up, recording, and comparing DML database changes.
From the same application, running the command:
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'ORD_ATTACHMENTS',
    @role_name     = NULL
I have verified that the DB in question is ready for CDC using SELECT [name], database_id, is_cdc_enabled FROM sys.databases.
I have also verified the table's readiness using SELECT [name], is_tracked_by_cdc FROM sys.tables.
Running SELECT * FROM [msdb].[dbo].[cdc_jobs] WHERE [database_id] = DB_ID() in the database context yields the following information for the capture job:
maxtrans: 7200
maxscans: 10
continuous: 1
pollinginterval: 5
retention and threshold are 0.
After inserting records into the table in question via SSMS, the related CDC table, though present, does not have any data in it. No errors were encountered, and the record was successfully added to the source table.
Additional information:
The database server used to use Windows fibers (lightweight pooling). I have switched this off, reconfigured, and rebooted the server.
The database used to have compatibility set to SQL Server 2005 (90), but I have updated this to SQL Server 2008 (100) and again rebooted the server.
I also set the Change Tracking property to true for the database in question, but I have since learned that this is irrelevant.
The source table has the following fields:
[AttachmentID] [bigint] IDENTITY(1,1) NOT NULL,
[ORDNUM] [nvarchar](10) NOT NULL,
[FileName] [nvarchar](260) NOT NULL,
[FileContent] [varbinary](max) NULL,
[CreatedOn] [datetime] NOT NULL CONSTRAINT [DF_ORD_ATTACHMENTS_CreatedOn] DEFAULT (getdate())
No fields are excluded from CDC for this table.
Thank you in advance for any assistance.
Best regards,
Chris.
Update 2016-09-20 15:15:
Ran the following:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Agent XPs', 1;
GO
RECONFIGURE
GO
Have now switched to a test DB to simplify matters. Re-enabled CDC on my new test table (fields are a bigint identity PK and a nullable NVARCHAR(50) field). Still not working. Also, the capture job has no history entries under SQL Server Agent.
Update 2016-09-20 20:09
Ran sp_MScdc_capture_job in the DB context. Depending on job settings, this can be a continuously executing procedure. Data was found in the CDC table upon running this. Will try to figure out how to engage it automatically.
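For reference, the documented wrapper for starting the capture job (rather than calling the internal sp_MScdc_capture_job directly) is sys.sp_cdc_start_job; note that it still depends on SQL Server Agent being up:

USE CDCTest;
GO
-- starts the CDC capture job for the current database
EXEC sys.sp_cdc_start_job @job_type = N'capture';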
Update 2016-09-28 17:19
The capture job is scripted as follows:
USE [msdb]
GO
/****** Object: Job [cdc.CDCTest_capture] Script Date: 2016-09-28 5:18:13 PM ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object: JobCategory [REPL-LogReader] Script Date: 2016-09-28 5:18:13 PM ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'REPL-LogReader' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'REPL-LogReader'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END
DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'cdc.CDCTest_capture',
    @enabled=1,
    @notify_level_eventlog=2,
    @notify_level_email=0,
    @notify_level_netsend=0,
    @notify_level_page=0,
    @delete_level=0,
    @description=N'CDC Log Scan Job',
    @category_name=N'REPL-LogReader',
    @owner_login_name=N'sa', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [Starting Change Data Capture Collection Agent] Script Date: 2016-09-28 5:18:14 PM ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Starting Change Data Capture Collection Agent',
    @step_id=1,
    @cmdexec_success_code=0,
    @on_success_action=3,
    @on_success_step_id=0,
    @on_fail_action=3,
    @on_fail_step_id=0,
    @retry_attempts=10,
    @retry_interval=1,
    @os_run_priority=0, @subsystem=N'TSQL',
    @command=N'RAISERROR(22801, 10, -1)',
    @server=N'AECON-SQL',
    @database_name=N'CDCTest',
    @flags=4
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [Change Data Capture Collection Agent] Script Date: 2016-09-28 5:18:14 PM ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Change Data Capture Collection Agent',
    @step_id=2,
    @cmdexec_success_code=0,
    @on_success_action=1,
    @on_success_step_id=0,
    @on_fail_action=2,
    @on_fail_step_id=0,
    @retry_attempts=10,
    @retry_interval=1,
    @os_run_priority=0, @subsystem=N'TSQL',
    @command=N'sys.sp_MScdc_capture_job',
    @server=N'AECON-SQL',
    @database_name=N'CDCTest',
    @flags=4
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'CDC capture agent schedule.',
    @enabled=1,
    @freq_type=64,
    @freq_interval=0,
    @freq_subday_type=0,
    @freq_subday_interval=0,
    @freq_relative_interval=0,
    @freq_recurrence_factor=0,
    @active_start_date=20160920,
    @active_end_date=99991231,
    @active_start_time=0,
    @active_end_time=235959,
    @schedule_uid=N'd1fc7d85-c051-4b24-af84-5505308caaf0'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO
Chris,
When you enable CDC at the DB and table level, a number of further objects are created underneath the cdc schema. Of importance are the various functions, the _CT table, and the two jobs cdc.XXXX_capture & cdc.XXXX_cleanup (where XXXX is the full name of the database).
From your description thus far, especially considering the latest update, the problem appears to lie with the jobs themselves.
At the outset, and it might sound obvious, but do you have SQL Agent running on this instance? I only ask because it is not mentioned in your initial description.
If it is already running, then you will need to get in a little deeper.
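A quick way to check from T-SQL (sys.dm_server_services is available from SQL Server 2008 R2 SP1 onwards; on earlier builds, look in the Services console instead):

-- shows the state of the SQL Server and SQL Server Agent services
SELECT servicename, status_desc
FROM sys.dm_server_services;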
If you navigate to your SQL Agent/Jobs folder (under SSMS), locate the capture job, right click and request it to be scripted, you should find the following.
4 calls:
sp_add_job @job_name=N'cdc.XXXX_capture'
sp_add_jobstep @step_name=N'Starting Change Data Capture Collection Agent'
sp_add_jobstep @step_name=N'Change Data Capture Collection Agent'
sp_add_jobschedule @name=N'CDC capture agent schedule.'
The second of those sp_add_jobstep calls is the one that executes the code you indicated above, @command=N'sys.sp_MScdc_capture_job'.
You can attempt to kick the job off manually to see if that brings it to life or, at the very least, gets some data into the _CT table.
In addition, check the last of those calls, the schedule, sp_add_jobschedule. This should also be enabled, with @freq_type=64 (to ensure it starts when the Agent starts).
Please provide the results of what you find in the response to assist further troubleshooting.
Thanks,
The problem turned out to be that the SQL Server Agent service was not running, although SQL Server indicated that it was. Turning this service on in the Services console allowed data to be captured in CDC.
Special thanks to LogicalMan who very patiently worked with me through all of this.

How to log an error into a table from SQL*Plus

I am very new to Oracle, so please bear with me if this is covered elsewhere.
I have a MS SQL box running Jobs calling batch files running scripts in SQLPLUS to ETL to an Oracle 10G database.
I have an intermittent issue with a script that causes the ETL to fail, which at the minute, without error logging, is something of an unknown. The current solution highlights the load failure based on row counts taken before and after the script has finished.
I'd like to be able to insert any errors encountered while running the offending script into an error log table on the same database receiving the data loads.
There's nothing too technical about the script; at a high level it performs the following steps, all via SQL code with no procedural calls:
Updates a table with Date and current row counts
Pulls data from a remote source into a staging table
Merges the Staging table into an intermediate staging table
Performs some transformational actions
Merges the intermediate staging table into the final Fact table
Updates a table with new row counts
Please advise whether it is possible to pass error messages, codes, line numbers, etc. via SQL*Plus into a database table, and if so, the easiest method of achieving this.
The first few lines of the script are shown below to give a flavour:
/*set echo off*/
set heading off
set feedback off
set sqlblanklines on
/* ID 1 BATCH START TIME */
INSERT INTO CITSDMI.CITSD_TIMETABLE_ORDERLINE TGT
(TGT.BATCH_START_TIME)
(SELECT SYSDATE FROM DUAL);
COMMIT;
insert into CITSDMI.CITSD_TIMETABLE_ALL_LOADS
(LOAD_NAME, LOAD_CRITICALITY,LOAD_TYPE,BATCH_START_TIME)
values
('ORDERLINE','HIGH','SMART',(SELECT SYSDATE FROM DUAL));
commit;
/* Clear the Staging Tables */
TRUNCATE TABLE STAGE_SMART_ORDERLINE;
Commit;
TRUNCATE TABLE TRANSF_FACT_ORDERLINE;
Commit;
and so it goes on with the rest of the steps.
Any assistance will be greatly appreciated.
While I don't fully understand your requirement, here are a couple of pointers.
The WHENEVER command lets you control what SQL*Plus should do when an error occurs, e.g.:
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
WHENEVER OSERROR EXIT FAILURE ROLLBACK
INSERT ...
INSERT ...
This will cause sqlplus to exit with error status 1 if any of the following statements fail.
You can also have WHENEVER SQLERROR CONTINUE ...
Since WHENEVER ... EXIT FAILURE/SUCCESS controls the exit status, the calling script/program will know whether it worked or failed.
Logging
To a file: use SPOOL to write the output to a file.
To a table: the best way is to wrap your statements in PL/SQL anonymous blocks and use exception handlers to log errors.
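A common refinement (my addition, not part of this answer) is to do the table logging inside an autonomous transaction, so the error row is committed without committing whatever the failing step left half-done:

-- hypothetical helper; the_error_tab is the table used in the example below
CREATE OR REPLACE PROCEDURE log_error (
    p_name  IN VARCHAR2,
    p_errno IN NUMBER,
    p_errm  IN VARCHAR2
) AS
    PRAGMA AUTONOMOUS_TRANSACTION;  -- commits independently of the caller's transaction
BEGIN
    INSERT INTO the_error_tab ( name, errno, errm )
    VALUES ( p_name, p_errno, p_errm );
    COMMIT;
END;
/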
So, putting the above together, using a UNIX shell as the invoker:
sqlplus -S /nolog <<EOF
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
CONNECT ${USRPWD}
SPOOL ${SPLFILE}
BEGIN
    INSERT INTO the_table ( c1, c2 ) VALUES ( '${V1}', '${V2}' );
EXCEPTION
    WHEN OTHERS THEN
        INSERT INTO the_error_tab ( name, errno, errm ) VALUES ( 'the_script', SQLCODE, SQLERRM );
        COMMIT;
        RAISE;  -- re-raise after logging so sqlplus still exits with a failure status
END;
/
SPOOL OFF
QUIT
EOF
if [ ${?} -eq 0 ]
then
echo "Success!"
else
echo "Oh dear!! See ${SPLFILE}"
fi