CDC in SQL Server 2008 R2 not capturing data, but no errors either

First post, so I'll get right to it. Thank you in advance for your answers and consideration.
I have full privileges on the database engine that the DB in question is running on, including sysadmin.
To the best of my knowledge, I have enabled this correctly according to documentation, doing the following:
Running the command EXEC sys.sp_cdc_enable_db via a C# application
that I am using as an interface for setting up, recording,
and comparing DML database changes.
From the same application, running the command
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'ORD_ATTACHMENTS',
    @role_name     = NULL
I have verified that the DB in question is ready for CDC using SELECT [name], database_id, is_cdc_enabled FROM sys.databases.
The table's readiness I have also verified using SELECT [name], is_tracked_by_cdc FROM sys.tables.
Running SELECT * FROM [msdb].[dbo].[cdc_jobs] WHERE [database_id] = DB_ID() in the database context yields the following information for the capture job:
maxtrans: 7200
maxscans: 10
continuous: 1
pollinginterval: 5
retention and threshold are 0.
After inserting records into the table in question via SSMS, the related CDC table, though present, does not have any data in it. No errors were encountered, and the record was successfully added to the source table.
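(For reference, the usual way to read captured rows is through the generated table-valued function rather than querying the change table directly; this is a sketch, assuming the default capture instance name dbo_ORD_ATTACHMENTS:)

```sql
-- Sketch: read all captured changes for the table, assuming the
-- default capture instance name dbo_ORD_ATTACHMENTS.
DECLARE @from_lsn binary(10), @to_lsn binary(10);
SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_ORD_ATTACHMENTS');
SET @to_lsn   = sys.fn_cdc_get_max_lsn();

-- One row per captured change in the LSN range.
SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_ORD_ATTACHMENTS(@from_lsn, @to_lsn, N'all');
```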
Additional information:
Database server used to use Windows fibers (lightweight pooling). I
have switched this off, reconfigured, and rebooted the server.
Database used to have compatibility set to SQL Server 2005 (90), but
I have updated this to SQL Server 2008 (100). Again rebooted the
server.
I also set the Change Tracking property to true for the
database in question, but I have since learned that this is
irrelevant.
The source table has the following fields:
[AttachmentID] [bigint] IDENTITY(1,1) NOT NULL,
[ORDNUM] [nvarchar](10) NOT NULL,
[FileName] [nvarchar](260) NOT NULL,
[FileContent] [varbinary](max) NULL,
[CreatedOn] [datetime] NOT NULL CONSTRAINT [DF_ORD_ATTACHMENTS_CreatedOn] DEFAULT (getdate())
No fields are excluded from CDC for this table.
Thank you in advance for any assistance.
Best regards,
Chris.
Update 2016-09-20 15:15:
Ran the following:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Agent XPs', 1;
GO
RECONFIGURE
GO
Have now switched to a test DB to simplify matters. Re-enabled CDC on my new test table (a bigint identity primary key and a nullable NVARCHAR(50) field). Still not working. Also, the capture job has no history entries under SQL Server Agent.
Update 2016-09-20 20:09
Ran sp_MScdc_capture_job in the DB context. Depending on job settings, this can be a continuously executing procedure. Data appeared in the CDC table upon running this. Will try to figure out how to engage it automatically.
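For what it's worth, the documented way to start the capture job (rather than calling sp_MScdc_capture_job directly) is sys.sp_cdc_start_job, run in the context of the CDC-enabled database:

```sql
-- Starts the CDC capture job for the current database.
-- Requires the SQL Server Agent service to be running.
EXEC sys.sp_cdc_start_job @job_type = N'capture';
```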
Update 2016-09-28 17:19
The capture job is scripted as follows:
USE [msdb]
GO
/****** Object: Job [cdc.CDCTest_capture] Script Date: 2016-09-28 5:18:13 PM ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object: JobCategory [REPL-LogReader] Script Date: 2016-09-28 5:18:13 PM ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'REPL-LogReader' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'REPL-LogReader'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END
DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'cdc.CDCTest_capture',
@enabled=1,
@notify_level_eventlog=2,
@notify_level_email=0,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'CDC Log Scan Job',
@category_name=N'REPL-LogReader',
@owner_login_name=N'sa', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [Starting Change Data Capture Collection Agent] Script Date: 2016-09-28 5:18:14 PM ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Starting Change Data Capture Collection Agent',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=3,
@on_success_step_id=0,
@on_fail_action=3,
@on_fail_step_id=0,
@retry_attempts=10,
@retry_interval=1,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'RAISERROR(22801, 10, -1)',
@server=N'AECON-SQL',
@database_name=N'CDCTest',
@flags=4
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [Change Data Capture Collection Agent] Script Date: 2016-09-28 5:18:14 PM ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Change Data Capture Collection Agent',
@step_id=2,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=10,
@retry_interval=1,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'sys.sp_MScdc_capture_job',
@server=N'AECON-SQL',
@database_name=N'CDCTest',
@flags=4
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'CDC capture agent schedule.',
@enabled=1,
@freq_type=64,
@freq_interval=0,
@freq_subday_type=0,
@freq_subday_interval=0,
@freq_relative_interval=0,
@freq_recurrence_factor=0,
@active_start_date=20160920,
@active_end_date=99991231,
@active_start_time=0,
@active_end_time=235959,
@schedule_uid=N'd1fc7d85-c051-4b24-af84-5505308caaf0'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO

Chris,
When you enable CDC at the DB and table level, a number of further objects are created underneath the cdc schema. Of importance are the various functions, the _CT table, and the two jobs cdc.XXXX_capture & cdc.XXXX_cleanup (where XXXX is the full name of the database).
From your description thus far, and especially considering the latest update, the problem appears to lie with the jobs themselves.
First, and it might sound obvious: is SQL Server Agent running on this instance? I only ask because it is not mentioned in your initial description.
If it is already running, then you will need to get in a little deeper.
If you navigate to your SQL Agent/Jobs folder (under SSMS), locate the capture job, right click and request it to be scripted, you should find the following.
4 calls:
sp_add_job @job_name=N'cdc.XXXX_capture'
sp_add_jobstep @step_name=N'Starting Change Data Capture Collection Agent'
sp_add_jobstep @step_name=N'Change Data Capture Collection Agent'
sp_add_jobschedule @name=N'CDC capture agent schedule.'
The second of those sp_add_jobstep calls is the one that executes the code you indicated above, @command=N'sys.sp_MScdc_capture_job'.
You can attempt to start the job manually to see if that kicks it into life or, at the very least, writes some data into the _CT table.
In addition, check the last of those calls, the schedule, sp_add_jobschedule. It should also be enabled, with @freq_type=64 (so that the job starts when the Agent starts).
Please provide the results of what you find in the response to assist further troubleshooting.
Thanks,

The problem turned out to be that the SQL Server Agent service was not running, although SQL Server indicated that it was. Turning this service on in the Services console allowed CDC to capture data.
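As a side note for anyone hitting the same wall: on SQL Server 2008 R2 SP1 and later you can check the Agent service state from T-SQL via the sys.dm_server_services DMV (a sketch; on earlier builds, check the Services console instead):

```sql
-- Shows whether the SQL Server Agent service is actually running.
SELECT servicename, status_desc, startup_type_desc
FROM sys.dm_server_services
WHERE servicename LIKE N'SQL Server Agent%';
```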
Special thanks to LogicalMan who very patiently worked with me through all of this.

Related

DB2 Scheduled Trigger

I'm new to triggers and I want to ask the proper procedure to create a trigger (or any better methods) to duplicate the contents of T4 table to T5 table on a specified datetime.
For example, on the 1st day of every month at 23:00, I want to duplicate the contents of T4 table to T5 table.
Can anyone please advise what's the best method?
Thank you.
CREATE TRIGGER TRIG1
AFTER INSERT ON T4
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
INSERT INTO T5 VALUES (:NEW.B, :NEW.A);
END TRIG1;
It can be done with the Administrative Task Scheduler feature instead of cron. Here is a sample script.
#!/bin/sh
db2set DB2_ATS_ENABLE=YES
db2stop
db2start
db2 -v "drop db db1"
db2 -v "create db db1"
db2 -v "connect to db1"
db2 -v "CREATE TABLESPACE SYSTOOLSPACE IN IBMCATGROUP MANAGED BY AUTOMATIC STORAGE EXTENTSIZE 4"
db2 -v "create table s1.t4 (c1 int)"
db2 -v "create table s1.t5 (c1 int)"
db2 -v "insert into s1.t4 values (1)"
db2 -v "create procedure s1.copy_t4_t5() language SQL begin insert into s1.t5 select * from s1.t4; end"
db2 -v "CALL SYSPROC.ADMIN_TASK_ADD ('ATS1', CURRENT_TIMESTAMP, NULL, NULL, '0,10,20,30,40,50 * * * *', 'S1', 'COPY_T4_T5',NULL , NULL, NULL )"
date
It will create a task called 'ATS1' that calls the procedure s1.copy_t4_t5 every 10 minutes (01:00, 01:10, 01:20, and so on). You may need to run the following after executing the script:
db2 -v "connect to db1"
Then, after some time, run below to see if the t5 table has row as expected:
db2 -v "select * from s1.t5"
For your case, the 5th parameter would be replaced with '0 23 1 * *'.
It represents 'minute hour day_of_month month weekday' so
it will be called every 1st day of month at 23:00.
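Putting that together, the call for the monthly requirement would look roughly like this (same parameters as the sample script above, with only the schedule swapped in; 'ATS_MONTHLY' is just an illustrative task name):

```sql
-- Runs S1.COPY_T4_T5 at 23:00 on the 1st day of every month.
CALL SYSPROC.ADMIN_TASK_ADD(
  'ATS_MONTHLY', CURRENT_TIMESTAMP, NULL, NULL,
  '0 23 1 * *', 'S1', 'COPY_T4_T5', NULL, NULL, NULL);
```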
For more information on how to modify an existing task, delete a task, or review task status, see:
Administrative Task Scheduler routines and views
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.sql.rtn.doc/doc/c0061223.html
Also, here is a good article about it:
[DB2 LUW] Sample administrative task scheduler ADMIN_TASK_ADD and ADMIN_TASK_REMOVE usage
https://www.ibm.com/support/pages/node/1140388?lang=en
Hope this helps.

Catch Block for Insert Trigger Not Firing for Error Launching SSA Job

I'm setting up an AFTER INSERT trigger to launch a SQL Server Agent (SSA) job which executes an SSIS package to report on an ETL log file after its process completes. The syntax of the TRY...CATCH block appears correct, but the error handling doesn't catch the error raised when the job it launches is already running.
The trigger is on a user table in a SQL Server 2012 SP4 instance (with SQL 2012 (110) compatibility). I tried handling the one error (22022) for a SQL Agent job already running, but it seems that you can't error-trap the execution of a system stored procedure.
CREATE TRIGGER InterfaceSupport.trg_XMLTrigger_Insert
ON [InterfaceSupport].[XMLLogReaderTriggers]
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
BEGIN TRY
EXEC msdb.dbo.sp_start_job 'Launch job'
END TRY
BEGIN CATCH
IF ERROR_NUMBER()=22022 --Job already running
EXEC msdb.dbo.sp_send_dbmail NULL,'name@domain.org',NULL,NULL,'Interface XML Log Reader Already Running','The SSA Job reading Interface logs is already running. The job will attempt to catch the new request at the end of the current cycle.'
ELSE
BEGIN
DECLARE @Errormsg nvarchar(max)
SELECT @Errormsg=ERROR_MESSAGE()
EXEC msdb.dbo.sp_send_dbmail NULL,'name@domain.org',NULL,NULL,'Interface XML Log Reader Spawn Error',@Errormsg
END
END CATCH
END
GO
I'm getting the following error despite the error handling block. It's as if the CATCH block is non-existent.
Msg 22022, Level 16, State 1, Line 17
SQLServerAgent Error: Request to run job Launch Job (from User xxx) refused because the job is already running from a request by User xxx.
This error can't be caught by TRY/CATCH, unfortunately.
You should check whether the job is running and start it only if it's not, although the job might still start between the check and the EXEC statement.
DECLARE @JobID UNIQUEIDENTIFIER = (SELECT job_id FROM msdb.dbo.sysjobs AS J WHERE J.name = 'Launch job')
IF NOT EXISTS (
SELECT
'job is running'
FROM
msdb.dbo.sysjobactivity ja
INNER JOIN msdb.dbo.sysjobs j ON ja.job_id = j.job_id
WHERE
ja.session_id = (SELECT TOP 1 session_id FROM msdb.dbo.syssessions ORDER BY agent_start_date DESC) AND
ja.start_execution_date is not null AND
ja.stop_execution_date is null AND
ja.job_id = @JobID
)
BEGIN
EXEC msdb.dbo.sp_start_job @job_name = 'Launch job'
END
ELSE
BEGIN
EXEC msdb.dbo.sp_send_dbmail NULL,'name@domain.org',NULL,NULL,'Interface XML Log Reader Already Running','The SSA Job reading Interface logs is already running. The job will attempt to catch the new request at the end of the current cycle.'
END

using bcp to copy between servers gives SQLState 22008

I'm trying to copy data from one database to another (on different servers) whose tables have identical schemas. Every time I run the query below, I get the following error:
"SQLState = 22008 NativeError = 0 Error = [Microsoft][SQL Server Native Client 10.0]Invalid date format"
The query is this...
EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
EXEC sp_configure 'xp_cmdshell', 1
GO
RECONFIGURE
GO
DECLARE @MasterServerName varchar(40)
DECLARE @MasterServerUserName varchar(20)
DECLARE @MasterServerPassword varchar(20)
DECLARE @SlaveServerName varchar(40)
DECLARE @SlaveServerUserName varchar(20)
DECLARE @SlaveServerPassword varchar(20)
DECLARE @ExportFile varchar (40)
DECLARE @ExportFile1 varchar (40)
DECLARE @ExportFile2 varchar (40)
DECLARE @ExportFile3 varchar (40)
DECLARE @ExportFile4 varchar (40)
SET @MasterServerName='{SQL_Server_Name}'
SET @MasterServerUserName='{SQL_USER_LOGIN}'
SET @MasterServerPassword='{SQL_USER_PASSWORD}'
SET @SlaveServerName='{SLAVE_NAME}\{SLAVE_INSTANCE}'
SET @SlaveServerUserName='{SLAVE_USER_LOGIN}'
SET @SlaveServerPassword='{SLAVE_USER_PASSWORD}'
-------------------------------------
SET @ExportFile1='C:\ExportTracking1.txt'
SET @ExportFile2='C:\ExportTracking2.txt'
SET @ExportFile3='C:\ExportTracking3.txt'
SET @ExportFile4='C:\ExportTracking4.txt'
DECLARE @BCP varchar(8000)
----------------------------------------------
--Collecting tracking data from the slave server
-----------------------------------------------
SET @BCP =
'bcp "select * FROM <DATABASE_NAME>.dbo.<TABLE_NAME> where ExportID="9999999"" queryout '+@ExportFile1+' -c -U'+@SlaveServerUserName+' -P'+@SlaveServerPassword+' -S'+@SlaveServerName+' -C{850}'
PRINT @BCP
EXEC xp_cmdshell @BCP
-----------------------------------------------
--Adding tracking data to the master server
-----------------------------------------------
SET @BCP =
'bcp <DATABASE_NAME>.dbo.<TABLE_NAME> in '+@ExportFile1+' -e C:\error1.txt -c -U'+@MasterServerUserName+' -P'+@MasterServerPassword+' -S'+@MasterServerName+' -C{850}'
PRINT @BCP
EXEC xp_cmdshell @BCP
-----------------------------------------------
-----------------------------------------------
EXEC sp_configure 'xp_cmdshell', 0
GO
RECONFIGURE
GO
EXEC sp_configure 'show advanced options', 0
GO
RECONFIGURE
GO
Can anyone please help shed some light on why this error is occurring?
Any input would be greatly appreciated.
Thanks
Lee
Less probable but also possible: check the language and regional settings of the Windows servers running the SQL instances.

How to log an error into a table from SQL*Plus

I am very new to Oracle, so please bear with me if this is covered elsewhere.
I have an MS SQL box running jobs that call batch files, which run scripts in SQL*Plus to ETL into an Oracle 10g database.
I have an intermittent issue with a script that causes the ETL to fail, which at the minute, without error logging, is something of an unknown. The current solution highlights the load failure based on row counts before and after the script has finished.
I'd like to be able to insert any errors encountered while running the offending script into an error log table on the same database receiving the data loads.
There's nothing too technical about the script; at a high level it performs the following steps, all via SQL code with no procedural calls.
Updates a table with Date and current row counts
Pulls data from a remote source into a staging table
Merges the Staging table into an intermediate staging table
Performs some transformational actions
Merges the intermediate staging table into the final Fact table
Updates a table with new row counts
Please advise whether it is possible to pass error messages, codes, line numbers, etc. via SQL*Plus into a database table, and if so, the easiest method to achieve this.
A first few lines of the script are shown below to give a flavour
/*set echo off*/
set heading off
set feedback off
set sqlblanklines on
/* ID 1 BATCH START TIME */
INSERT INTO CITSDMI.CITSD_TIMETABLE_ORDERLINE TGT
(TGT.BATCH_START_TIME)
(SELECT SYSDATE FROM DUAL);
COMMIT;
insert into CITSDMI.CITSD_TIMETABLE_ALL_LOADS
(LOAD_NAME, LOAD_CRITICALITY,LOAD_TYPE,BATCH_START_TIME)
values
('ORDERLINE','HIGH','SMART',(SELECT SYSDATE FROM DUAL));
commit;
/* Clear the Staging Tables */
TRUNCATE TABLE STAGE_SMART_ORDERLINE;
Commit;
TRUNCATE TABLE TRANSF_FACT_ORDERLINE;
Commit;
and so it goes on with the rest of the steps.
Any assistance will be greatly appreciated.
Whilst not fully understanding your requirement, a couple of pointers.
The WHENEVER command will help you control what sqlplus should do when an error occurs, e.g.
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
WHENEVER OSERROR EXIT FAILURE ROLLBACK
INSERT ...
INSERT ...
This will cause sqlplus to exit with error status 1 if any of the following statements fail.
You can also have WHENEVER SQLERROR CONTINUE ...
Since WHENEVER ... EXIT FAILURE/SUCCESS controls the exit status, the calling script/program will know whether it worked or failed.
Logging to a file
Use SPOOL to write the output to a file.
Logging to a table
The best way is to wrap your statements in PL/SQL anonymous blocks and use exception handlers to log errors.
So, putting the above together, using a UNIX shell as the invoker:
sqlplus -S /nolog <<EOF
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
CONNECT ${USRPWD}
SPOOL ${SPLFILE}
BEGIN
INSERT INTO the_table ( c1, c2 ) VALUES ( '${V1}', '${V2}' );
EXCEPTION
WHEN OTHERS THEN
INSERT INTO the_error_tab ( name, errno, errm ) VALUES ( 'the_script', SQLCODE, SQLERRM );
COMMIT;
END;
/
SPOOL OFF
QUIT
EOF
if [ ${?} -eq 0 ]
then
echo "Success!"
else
echo "Oh dear!! See ${SPLFILE}"
fi

Inconsistent cursor results when looping over databases

I have several databases named very similar (my-db-1, my-db-2, my-db-3, my-db-4). I want to execute the same stored procedure on each of these databases. I decided to use cursors. However, I am getting some strange issues. First here is my simple code that I am executing through SQL Server Management Studio 2008.
DECLARE @db_cursor CURSOR
DECLARE @name varchar(255)
DECLARE @Sql nvarchar(4000)
SET @db_cursor = CURSOR FOR
SELECT name FROM sys.databases WHERE name LIKE 'my-db-%'
OPEN @db_cursor
FETCH NEXT FROM @db_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
SET @Sql = 'Use [' + @name + ']; PRINT DB_NAME();'
exec sp_sqlexec @Sql
FETCH NEXT FROM @db_cursor INTO @name
END
CLOSE @db_cursor
DEALLOCATE @db_cursor
Executing this multiple times in a row within 2 seconds, I get strange results:
Execution1:
my-db-1
my-db-2
my-db-3
my-db-4
Execution2:
my-db-1
my-db-2
Execution3:
my-db-1
my-db-2
my-db-3
my-db-4
Execution4:
my-db-1
It seems like its completely random. Sometimes I'll get all 4 databases to print after 10 executions. Sometimes after just 2 executions only 1 database will get printed.
This SQL is executing on Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Developer Edition (64-bit) on Windows NT 6.1 (Build 7600: ) through Microsoft SQL Server Management Studio 10.50.1600.1
Does anyone have any ideas?
Try declaring your cursor as FAST_FORWARD.
The default is an updatable cursor, and the update locks it requires probably conflict with another process accessing sys.databases.
Ref.: DECLARE CURSOR
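As a sketch, the original loop rewritten with a FAST_FORWARD (read-only, forward-only) cursor would look like this; note it also swaps the undocumented sp_sqlexec for the documented sp_executesql:

```sql
DECLARE @name sysname;
DECLARE @Sql nvarchar(4000);

-- FAST_FORWARD = forward-only, read-only: no update locks taken.
DECLARE db_cursor CURSOR FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE name LIKE 'my-db-%';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- QUOTENAME guards against unusual characters in database names.
    SET @Sql = N'USE ' + QUOTENAME(@name) + N'; PRINT DB_NAME();';
    EXEC sp_executesql @Sql;
    FETCH NEXT FROM db_cursor INTO @name;
END
CLOSE db_cursor;
DEALLOCATE db_cursor;
```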