I'm trying to copy data from one database to another (on different servers) whose tables have identical schemas. Every time I run the query below, I get an error saying the following...
"SQLState = 22008 NativeError = 0 Error = [Microsoft][SQL Server Native Client 10.0]Invalid date format"
The query is this...
EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
EXEC sp_configure 'xp_cmdshell', 1
GO
RECONFIGURE
GO
DECLARE @MasterServerName varchar(40)
DECLARE @MasterServerUserName varchar(20)
DECLARE @MasterServerPassword varchar(20)
DECLARE @SlaveServerName varchar(40)
DECLARE @SlaveServerUserName varchar(20)
DECLARE @SlaveServerPassword varchar(20)
DECLARE @ExportFile varchar (40)
DECLARE @ExportFile1 varchar (40)
DECLARE @ExportFile2 varchar (40)
DECLARE @ExportFile3 varchar (40)
DECLARE @ExportFile4 varchar (40)
SET @MasterServerName='{SQL_Server_Name}'
SET @MasterServerUserName='{SQL_USER_LOGIN}'
SET @MasterServerPassword='{SQL_USER_PASSWORD}'
SET @SlaveServerName='{SLAVE_NAME}\{SLAVE_INSTANCE}'
SET @SlaveServerUserName='{SLAVE_USER_LOGIN}'
SET @SlaveServerPassword='{SLAVE_USER_PASSWORD}'
-------------------------------------
SET @ExportFile1='C:\ExportTracking1.txt'
SET @ExportFile2='C:\ExportTracking2.txt'
SET @ExportFile3='C:\ExportTracking3.txt'
SET @ExportFile4='C:\ExportTracking4.txt'
DECLARE @BCP varchar(8000)
----------------------------------------------
--Collecting tracking data from the slave server
-----------------------------------------------
SET @BCP =
'bcp "select * FROM <DATABASE_NAME>.dbo.<TABLE_NAME> where ExportID="9999999"" queryout '+@ExportFile1+' -c -U'+@SlaveServerUserName+' -P'+@SlaveServerPassword+' -S'+@SlaveServerName+' -C{850}'
PRINT @BCP
EXEC xp_CMDshell @BCP
-----------------------------------------------
--Adding tracking data to the master server
-----------------------------------------------
SET @BCP =
'bcp <DATABASE_NAME>.dbo.<TABLE_NAME> in '+@ExportFile1+' -e C:\error1.txt -c -U'+@MasterServerUserName+' -P'+@MasterServerPassword+' -S'+@MasterServerName+' -C{850}'
PRINT @BCP
EXEC xp_CMDshell @BCP
-----------------------------------------------
EXEC sp_configure 'xp_cmdshell', 0
GO
RECONFIGURE
GO
EXEC sp_configure 'show advanced options', 0
GO
RECONFIGURE
GO
Can anyone please help shed some light on why this error is occurring?
Any input would be greatly appreciated.
Thanks
Lee
Less probable but also possible: check the language and regional settings of the Windows servers running the SQL instances.
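To see why regional settings matter here: bcp -c writes datetime columns as plain text, and the importing server has to parse that text back. A value written under one regional date format can be unparseable under another, which produces exactly this "Invalid date format" error. A minimal sketch (plain Python, hypothetical sample values) of the mismatch:

```python
from datetime import datetime

# Hypothetical sample values: the same datetime rendered two ways.
iso_value = "2012-05-09 15:17:42"   # ISO-style format, unambiguous
dmy_value = "09/05/2012 15:17:42"   # day-first format from a UK-style locale

def parses_as(value, fmt):
    """Return True if `value` matches the strptime format `fmt`."""
    try:
        datetime.strptime(value, fmt)
        return True
    except ValueError:
        return False

# An importer expecting the ISO layout accepts the first and rejects the second.
print(parses_as(iso_value, "%Y-%m-%d %H:%M:%S"))  # True
print(parses_as(dmy_value, "%Y-%m-%d %H:%M:%S"))  # False
```

If the two servers disagree like this, exporting with a format file (or native mode, -n) instead of -c sidesteps the text round-trip entirely.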
I apologize for the lack of a better title. I'm at a loss as to what the exact problem could be.
PostgreSQL 13.4
TimescaleDB 2.4.2
Ubuntu 18.04.6 LTS
Running in Docker (Dockerfile further down)
shared_memory 16GB
shm size 2GB
The query at the bottom causes postgres to shut down with the error:
2022-05-09 15:17:42.012 UTC [1] LOG: server process (PID 1316) was terminated by signal 11: Segmentation fault
2022-05-09 15:17:42.012 UTC [1] DETAIL: Failed process was running: CALL process_wifi_traffic_for_range('2022-01-10','2022-01-12')
2022-05-09 15:17:42.012 UTC [1] LOG: terminating any other active server processes
2022-05-09 15:17:42.013 UTC [654] WARNING: terminating connection because of crash of another server process
2022-05-09 15:17:42.013 UTC [654] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2022-05-09 15:17:42.013 UTC [654] HINT: In a moment you should be able to reconnect to the database and repeat your command.
I want to process traffic data day by day, since my data lake is very big.
First I load the day into the temporary table unprocessed.
Then I extract the difference between the traffic reported in the previous record and the current one, since the source only reports the total traffic accrued.
The tables wifi_traffic_last_traffic and t_last_traffic just keep track of the last known traffic per client from the last time the procedure was run.
ht_wifi_traffic_processed is a TimescaleDB hypertable, but the error also occurs when I use a normal table. Reducing the timeframe to 1 hour instead of one day (in case of memory issues) does not help either. Sometimes it manages to process 1 or 2 days, and the data that does finish is correct.
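For reference, the delta computation the procedure performs with lag() can be sketched in plain Python (hypothetical sample data; the grouping key and the "keep only positive deltas" filter mirror the PARTITION BY and the `filtered.traffic > 0` clause):

```python
from itertools import groupby
from operator import itemgetter

# (clientid, devicecpuid, timestamp, cumulative_traffic) - sample rows
rows = [
    ("c1", "d1", 1, 100),
    ("c1", "d1", 2, 250),
    ("c1", "d1", 3, 400),
]
# Equivalent of wifi_traffic_last_traffic: last cumulative total per client/device.
last_known = {("c1", "d1"): 40}

def deltas(rows, last_known):
    out = []
    key = itemgetter(0, 1)  # partition by (clientid, devicecpuid)
    for k, group in groupby(sorted(rows, key=key), key=key):
        prev = last_known.get(k, 0)  # seed the lag with the last known total
        for _, _, ts, total in sorted(group, key=itemgetter(2)):
            diff = total - prev
            if diff > 0:  # drop resets/negative diffs, like filtered.traffic > 0
                out.append((k[0], k[1], ts, diff))
            prev = total
    return out

print(deltas(rows, last_known))
# [('c1', 'd1', 1, 60), ('c1', 'd1', 2, 150), ('c1', 'd1', 3, 150)]
```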
The query is a bit long, but I don't want to omit anything:
DECLARE
f_date date = from_date::date;
t_date date = to_date::date;
BEGIN
SET SESSION temp_buffers = '1GB';
CREATE TEMPORARY TABLE t_last_traffic (
clientid text,
devicecpuid text,
traffic bigint,
PRIMARY KEY(clientid,devicecpuid)
);
INSERT INTO t_last_traffic (
SELECT * FROM wifi_traffic_last_traffic
);
CREATE TEMPORARY TABLE unprocessed (
"timestamp" timestamp,
clientid text,
devicecpuid text,
customer text,
traffic bigint
) ON COMMIT DELETE ROWS;
CREATE TEMPORARY TABLE processed (
"timestamp" timestamp,
clientid text,
devicecpuid text,
customer text,
traffic bigint,
PRIMARY KEY("timestamp", clientid, devicecpuid)
) ON COMMIT DELETE ROWS;
LOOP
RAISE NOTICE 'Processing date: %', f_date;
INSERT INTO unprocessed
SELECT wt."timestamp", wt.clientid, wt.devicecpuid, wt.customer, wt.traffic
FROM wifi_traffic AS wt
WHERE wt."timestamp"
BETWEEN
f_date::timestamp
AND
f_date::timestamp + INTERVAL '1 day'
ORDER BY
devicecpuid ASC, --Important to sort by cpuID first as to not mix traffic results.
clientid ASC,
wt."timestamp" ASC;
RAISE NOTICE 'Unprocessed import done';
INSERT INTO processed
SELECT * FROM (
SELECT
up."timestamp",
up.clientid,
up.devicecpuid,
up.customer,
wifi_traffic_lag(
up.traffic,
lag(
up.traffic,
1,
COALESCE(
(
SELECT lt.traffic
FROM t_last_traffic lt
WHERE
lt.clientid = up.clientid
AND
lt.devicecpuid = up.devicecpuid
FETCH FIRST ROW ONLY
),
CAST(0 AS bigint)
)
)
OVER (
PARTITION BY
up.clientid,
up.devicecpuid
ORDER BY
up.clientid,
up.devicecpuid,
up."timestamp"
)
) AS traffic
FROM unprocessed up
WHERE
up.traffic != 0
) filtered
WHERE
filtered.traffic > 0
ON CONFLICT ON CONSTRAINT processed_pkey DO NOTHING;
RAISE NOTICE 'Processed import done';
INSERT INTO t_last_traffic(devicecpuid, clientid, traffic)
SELECT up.devicecpuid, up.clientid, MAX(up.traffic)
FROM unprocessed up
GROUP BY up.devicecpuid, up.clientid
ON CONFLICT ON CONSTRAINT t_last_traffic_pkey DO UPDATE SET
traffic = EXCLUDED.traffic;
INSERT INTO ht_wifi_traffic_processed
SELECT * FROM processed;
TRUNCATE TABLE unprocessed;
TRUNCATE TABLE processed;
COMMIT;
RAISE NOTICE 'Finished processing for date: %', f_date;
f_date = f_date + 1;
EXIT WHEN f_date > t_date;
END LOOP;
INSERT INTO wifi_traffic_last_traffic
SELECT * FROM t_last_traffic
ON CONFLICT ON CONSTRAINT wifi_traffic_last_traffic_pkey DO UPDATE SET
traffic = EXCLUDED.traffic;
DROP TABLE t_last_traffic;
DROP TABLE unprocessed;
DROP TABLE processed;
COMMIT;
END
Docker Compose:
services:
postgres-storage:
image: <redacted>/postgres_gis_tdb:pg13_tdb2.4.2_gis3.1.4
restart: unless-stopped
shm_size: 2gb
ports:
- '5433:5432'
networks:
- bigdata
volumes:
- /mnt/data_storage/psql_data:/var/lib/postgresql/data
- /usr/docker-volumes/postgres-storage:/var/lib/postgresql/ssd
environment:
POSTGRES_USER: postgres
env_file:
- .env
Dockerfile:
FROM postgres:13
ENV POSTGIS_MAJOR 3
ENV POSTGIS_VERSION 3.1.4+dfsg-1.pgdg100+1
### INSTALL POSTGIS ###
RUN apt-get update \
&& apt-cache showpkg postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR \
&& apt-get install -y --no-install-recommends \
postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR=$POSTGIS_VERSION \
postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR-scripts \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir -p /docker-entrypoint-initdb.d
COPY ./initdb-postgis.sh /docker-entrypoint-initdb.d/10_postgis.sh
COPY ./update-postgis.sh /usr/local/bin
### INSTALL TIMESCALEDB ###
# Important: Run timescaledb-tune #
# once for COMPLETELY NEW DATABASES, #
# so no existing postgresql_data. #
### ###
RUN apt-get update \
&& apt-get install -y postgresql-common wget lsb-release
RUN echo "yes" | sh /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh
RUN sh -c "echo 'deb [signed-by=/usr/share/keyrings/timescale.keyring] https://packagecloud.io/timescale/timescaledb/debian/ $(ls$
RUN wget --quiet -O - https://packagecloud.io/timescale/timescaledb/gpgkey | gpg --dearmor -o /usr/share/keyrings/timescale.keyr$
RUN apt-get update \
&& apt-get install -y timescaledb-2-postgresql-13 timescaledb-tools
It seems like the error comes from doing
CREATE TEMPORARY TABLE unprocessed (
"timestamp" timestamp,
clientid text,
devicecpuid text,
customer text,
traffic bigint
) ON COMMIT DELETE ROWS;
As well as:
TRUNCATE TABLE unprocessed;
I did this initially because a test indicated that the ON COMMIT DELETE ROWS wasn't really clearing the table after the COMMIT in the middle of the procedure.
Leaving it out prevented the error from occurring, and further tests showed that even without it the data was as expected. It seems to be some sort of race condition.
I will post this in the postgres github as well.
I'm new to triggers, and I want to ask the proper way to create a trigger (or any better method) to duplicate the contents of table T4 into table T5 at a specified datetime.
For example, on the 1st day of every month at 23:00, I want to duplicate the contents of T4 table to T5 table.
Can anyone please advise what's the best method?
Thank you.
CREATE TRIGGER TRIG1
AFTER INSERT ON T4
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
INSERT INTO T5 VALUES (:NEW.B, :NEW.A);
END TRIG1;
It can be done with the Administrative Task Scheduler feature instead of cron. Here is a sample script.
#!/bin/sh
db2set DB2_ATS_ENABLE=YES
db2stop
db2start
db2 -v "drop db db1"
db2 -v "create db db1"
db2 -v "connect to db1"
db2 -v "CREATE TABLESPACE SYSTOOLSPACE IN IBMCATGROUP MANAGED BY AUTOMATIC STORAGE EXTENTSIZE 4"
db2 -v "create table s1.t4 (c1 int)"
db2 -v "create table s1.t5 (c1 int)"
db2 -v "insert into s1.t4 values (1)"
db2 -v "create procedure s1.copy_t4_t5() language SQL begin insert into s1.t5 select * from s1.t4; end"
db2 -v "CALL SYSPROC.ADMIN_TASK_ADD ('ATS1', CURRENT_TIMESTAMP, NULL, NULL, '0,10,20,30,40,50 * * * *', 'S1', 'COPY_T4_T5',NULL , NULL, NULL )"
date
It will create a task called 'ATS1' that calls the procedure s1.copy_t4_t5 every 10 minutes, such as at 01:00, 01:10, 01:20. You may need to run the following after executing the script:
db2 -v "connect to db1"
Then, after some time, run below to see if the t5 table has row as expected:
db2 -v "select * from s1.t5"
For your case, the 5th parameter would be replaced with '0 23 1 * *'.
It represents 'minute hour day_of_month month weekday', so
the procedure will be called on the 1st day of every month at 23:00.
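To illustrate when '0 23 1 * *' fires, here is a tiny, illustrative checker for the five-field schedule string (plain Python; the parsing here is a simplification for comma lists and wildcards, not the actual ATS implementation, which also supports ranges and steps):

```python
def matches(schedule, minute, hour, dom, month, weekday):
    """Return True if the given time fields match a 5-field
    'minute hour day_of_month month weekday' schedule string.
    Supports only '*' and comma-separated value lists."""
    fields = schedule.split()
    values = (minute, hour, dom, month, weekday)
    for field, value in zip(fields, values):
        if field == "*":
            continue  # wildcard matches anything
        if value not in {int(v) for v in field.split(",")}:
            return False
    return True

print(matches("0 23 1 * *", 0, 23, 1, 6, 3))  # True: 1st of the month, 23:00
print(matches("0 23 1 * *", 0, 23, 2, 6, 4))  # False: 2nd of the month
print(matches("0,10,20,30,40,50 * * * *", 10, 1, 15, 6, 3))  # True: every 10 minutes
```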
For more information on how to modify an existing task, delete a task, or review task status, see:
Administrative Task Scheduler routines and views
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.sql.rtn.doc/doc/c0061223.html
Also, here is a good article about it:
[DB2 LUW] Sample administrative task scheduler ADMIN_TASK_ADD and ADMIN_TASK_REMOVE usage
https://www.ibm.com/support/pages/node/1140388?lang=en
Hope this helps.
I have created the function below to create a Foreign Server. The function runs without error, but nothing gets created. This is not a rights issue as I can run the individual commands just fine and the server gets created.
CREATE OR REPLACE FUNCTION sch.helper_create_linked_server(servername text, host text, port text, dbname text, localuser text, remoteuser text, password text, timeout text)
RETURNS void
LANGUAGE plpgsql
AS $function$
declare srv_name text;
tmp text;
begin
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE EXTENSION IF NOT EXISTS dblink;
execute format($s$ select srvname from pg_foreign_server where srvname = %L $s$, servername) into srv_name;
if srv_name is null then
execute format($ex$
CREATE SERVER %1$I
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host %2$L, port %3$L, dbname %4$L, connect_timeout %5$L);
$ex$, servername, host, port, dbname, timeout, remoteuser);
raise info 'server created: %', servername;
execute format($ex$
create user mapping for %I
server %I
options (user %L, password %L)
$ex$, localuser, servername, remoteuser, password);
raise info '%', 'umapping created';
perform format($$ ALTER SERVER %1$I OWNER TO %2$I $$, servername, remoteuser);
raise info '%', 'assigned user to server';
end if;
end $function$
I have printed out the command that gets generated and it is correct. This is how I call the function:
perform sch.helper_create_linked_server(link_name, conn_props->>'endpoint', conn_props->>'port', conn_props->>'db', conn_props->>'user', conn_props->>'user', rs_pwd, conn_props->>'timeout');
Also, all the raise info messages are printed as well, so the code is running. And no, the server is not already created.
I cannot see why this will not create.
Any advice appreciated.
Thanks!
First post, so I'll get right to it. Thank you in advance for your answers and consideration.
I have full privileges on the database engine that the DB in question is running on, including sysadmin.
To the best of my knowledge, I have enabled this correctly according to documentation, doing the following:
Running the command EXEC sys.sp_cdc_enable_db via a c# application
that I am using as an interface to deal with setting up, recording,
and comparing DML database changes.
From the same application, running the command
EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'ORD_ATTACHMENTS',
@role_name = NULL
I have verified that the DB in question is ready for CDC using SELECT [name], database_id, is_cdc_enabled FROM sys.databases.
The table's readiness I have also verified using SELECT [name], is_tracked_by_cdc FROM sys.tables.
Running SELECT * FROM [msdb].[dbo].[cdc_jobs] WHERE [database_id] = DB_ID() in the database context yields the following information for the capture job:
maxtrans: 7200
maxscans: 10
continuous: 1
pollinginterval: 5
retention and threshold are 0.
After inserting records into the table in question via SSMS, the related CDC table, though present, does not have any data in it. No errors were encountered, and the record was successfully added to the source table.
Additional information:
Database server used to use Windows fibers (lightweight pooling). I
have switched this off, reconfigured, and rebooted the server.
Database used to have compatibility set to SQL Server 2005 (90), but
I have updated this to SQL Server 2008 (100). Again rebooted the
server.
I also set the Change Tracking property to true for the
database in question, but I have since learned that this is
irrelevant.
The source table has the following fields:
[AttachmentID] [bigint] IDENTITY(1,1) NOT NULL,
[ORDNUM] [nvarchar](10) NOT NULL,
[FileName] [nvarchar](260) NOT NULL,
[FileContent] [varbinary](max) NULL,
[CreatedOn] [datetime] NOT NULL CONSTRAINT [DF_ORD_ATTACHMENTS_CreatedOn] DEFAULT (getdate())
No fields are excluded from CDC for this table.
Thank you in advance for any assistance.
Best regards,
Chris.
Update 2016-09-20 15:15:
Ran the following:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Agent XPs', 1;
GO
RECONFIGURE
GO
Have now switched to a test DB to simplify matters. Re-enabled the CDC on my new test table (fields are bigint PK identity field and an NVARCHAR(50) nullable field). Still not working. Also, the capture job has no history entries under SQL Server Agent.
Update 2016-09-20 20:09
Ran sp_MScdc_capture_job in the DB context. This can be, depending on job settings, a continuously executing procedure. Data was found in the CDC table upon running this. Will try to figure out how to automatically engage this.
Update 2016-09-28 17:19
The capture job is scripted as follows:
USE [msdb]
GO
/****** Object: Job [cdc.CDCTest_capture] Script Date: 2016-09-28 5:18:13 PM ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object: JobCategory [REPL-LogReader] Script Date: 2016-09-28 5:18:13 PM ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'REPL-LogReader' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'REPL-LogReader'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END
DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'cdc.CDCTest_capture',
@enabled=1,
@notify_level_eventlog=2,
@notify_level_email=0,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'CDC Log Scan Job',
@category_name=N'REPL-LogReader',
@owner_login_name=N'sa', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [Starting Change Data Capture Collection Agent] Script Date: 2016-09-28 5:18:14 PM ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Starting Change Data Capture Collection Agent',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=3,
@on_success_step_id=0,
@on_fail_action=3,
@on_fail_step_id=0,
@retry_attempts=10,
@retry_interval=1,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'RAISERROR(22801, 10, -1)',
@server=N'AECON-SQL',
@database_name=N'CDCTest',
@flags=4
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [Change Data Capture Collection Agent] Script Date: 2016-09-28 5:18:14 PM ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Change Data Capture Collection Agent',
@step_id=2,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=10,
@retry_interval=1,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'sys.sp_MScdc_capture_job',
@server=N'AECON-SQL',
@database_name=N'CDCTest',
@flags=4
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'CDC capture agent schedule.',
@enabled=1,
@freq_type=64,
@freq_interval=0,
@freq_subday_type=0,
@freq_subday_interval=0,
@freq_relative_interval=0,
@freq_recurrence_factor=0,
@active_start_date=20160920,
@active_end_date=99991231,
@active_start_time=0,
@active_end_time=235959,
@schedule_uid=N'd1fc7d85-c051-4b24-af84-5505308caaf0'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO
Chris,
When you enable CDC at the DB and table level, a number of further objects are created underneath the cdc schema. Of importance are the various functions, the _CT table, and the two jobs cdc.XXXX_capture & cdc.XXXX_cleanup (where XXXX is the full name of the database).
From your description thus far, especially considering the latest update, the error appears to possibly be with the jobs themselves.
At the outset, and it might sound obvious, but do you have SQL Agent running on this instance? I only ask because it is not mentioned in your initial description.
If it is already running, then you will need to get in a little deeper.
If you navigate to your SQL Agent/Jobs folder (under SSMS), locate the capture job, right click and request it to be scripted, you should find the following.
4 calls:
sp_add_job @job_name=N'cdc.CDCTest_capture'
sp_add_jobstep @step_name=N'Starting Change Data Capture Collection Agent'
sp_add_jobstep @step_name=N'Change Data Capture Collection Agent'
sp_add_jobschedule @name=N'CDC capture agent schedule.'
The second of those sp_add_jobstep calls is the one that executes the same code you indicated above, @command=N'sys.sp_MScdc_capture_job'.
You can attempt to kick the job off manually to see if that kicks it into life, or, at the very least, provides some data into the _CT table.
In addition, check the last of those calls above, the schedule, sp_add_jobschedule. This should also be enabled, with @freq_type=64 (to ensure it starts when the Agent starts).
Please provide the results of what you find in the response to assist further troubleshooting.
Thanks,
The problem turned out to be that the SQL Server Agent was not running, although SQL Server indicated that it was. I turned the service on in the Services console and was then able to capture data in CDC.
Special thanks to LogicalMan who very patiently worked with me through all of this.
I have several databases named very similar (my-db-1, my-db-2, my-db-3, my-db-4). I want to execute the same stored procedure on each of these databases. I decided to use cursors. However, I am getting some strange issues. First here is my simple code that I am executing through SQL Server Management Studio 2008.
DECLARE @db_cursor CURSOR
DECLARE @name varchar(255)
DECLARE @Sql nvarchar(4000)
SET @db_cursor = CURSOR FOR
SELECT name FROM sys.databases WHERE name LIKE 'my-db-%'
OPEN @db_cursor
FETCH NEXT FROM @db_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
SET @Sql = 'Use [' + @name + ']; PRINT DB_NAME();'
exec sp_sqlexec @Sql
FETCH NEXT FROM @db_cursor INTO @name
END
CLOSE @db_cursor
DEALLOCATE @db_cursor
Executing this multiple times in a row within 2 seconds, I get strange results:
Execution1:
my-db-1
my-db-2
my-db-3
my-db-4
Execution2:
my-db-1
my-db-2
Execution3:
my-db-1
my-db-2
my-db-3
my-db-4
Execution4:
my-db-1
It seems like it's completely random. Sometimes I'll get all 4 databases to print after 10 executions. Sometimes after just 2 executions only 1 database will get printed.
This SQL is executing on Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Developer Edition (64-bit) on Windows NT 6.1 (Build 7600: ) through Microsoft SQL Server Management Studio 10.50.1600.1
Does anyone have any ideas?
Try declaring your cursor as FAST_FORWARD.
The default is an updatable cursor, and the update locks it requires probably conflict with another process accessing sys.databases.
Ref.: DECLARE CURSOR