I am working on a SQL Server 2008 R2 server.
On this server are two instances (let's call them A and B).
A replicates one of its databases to B using transactional replication.
This replication, however, became inactive and its snapshot was deleted after some maintenance occurred.
I have reinitialized the subscribers and successfully created a new snapshot, but when I start synchronizing, I get the following error:
The SQL Server cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users or ask the system administrator to check the SQL Server lock and memory configuration.
(Source: MSSQLServer, Error number: 1204)
I have killed the connections to database B that aren't the administrator (sa login) and that aren't actively running. This didn't work, and I am still getting the error.
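For reference, the lock configuration that the error message points at can be inspected as follows (a sketch, not a confirmed fix for this case; a run_value of 0 for 'locks' means SQL Server allocates lock memory dynamically, while a fixed non-zero value can be exhausted and produce error 1204):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Check whether 'locks' has been changed from its dynamic default:
EXEC sp_configure 'locks';
-- To restore dynamic lock allocation (takes effect after a restart):
-- EXEC sp_configure 'locks', 0;
-- RECONFIGURE;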
I'm not very knowledgeable when it comes to this stuff and I don't have a DBA to talk to so any help is greatly appreciated as I am running out of ideas fast.
Thanks in advance!
Edit: Here is some more info after talking with Razzle Dazzle.
DBCC OPENTRAN - This finds no open transactions on either the publisher or the subscriber.
sp_who - This shows very few (under 40) connections on both the publisher and the subscriber. The only connections in a status other than 'Background' or 'Sleeping' are my own SSMS connections to the server.
DBCC MEMORYSTATUS - output attached for both instances (Publisher Memory Status and Subscriber Memory Status).
I have a problem when querying Active Directory or a MySQL database as linked servers.
The problem occurs when running the query through SSMS on a server other than the database server where AD is mounted.
If I run these queries on the actual Db server through SSMS I get results from the linked server.
If I run these queries on a 'Management' machine on a separate VLAN they return error 7390
The requested operation could not be performed because OLE DB provider "ADsDSOObject" for linked server "ACTIVEDIR" does not support the required transaction interface.
This only affects the linked servers; I can query any table on the Db server from the management machine, so it's not ports or networking (that I can see).
I have tried changing the settings for RPC, RPC Out, and Promotion of Distributed Transactions in the properties sheet of the linked servers, in various combinations, but I still get no results, just the error.
For good measure I have also tried setting the TRANSACTION ISOLATION LEVEL to READ UNCOMMITTED in the SQL blocks executed.
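For reference, the same linked-server settings can also be applied from T-SQL (a sketch; ACTIVEDIR is the linked server named above, and whether turning off transaction promotion clears error 7390 in this setup is an assumption, not a verified fix):

-- Allow RPC in both directions on the linked server:
EXEC sp_serveroption @server = N'ACTIVEDIR', @optname = N'rpc', @optvalue = N'true';
EXEC sp_serveroption @server = N'ACTIVEDIR', @optname = N'rpc out', @optvalue = N'true';
-- Stop SQL Server from asking the provider to promote to a distributed transaction:
EXEC sp_serveroption @server = N'ACTIVEDIR', @optname = N'remote proc transaction promotion', @optvalue = N'false';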
It used to work before I migrated from SQL Server 2008 R2 to 2016.
I would appreciate any guidance and wisdom.
I know that using Firebird 2.5+ I can check via SQL whether there are users accessing my database, but unfortunately Firebird 2.0 doesn't have this feature. Yes, I know it's an old version, but it's legacy software and I'm not allowed to upgrade it in the short term... :(
I need to know if someone is connected to my 2.0 Firebird database, due to a process I'll run:
1. Block connections to the DB (but ONLY if no one is connected)
2. Run my process
3. Allow users to reconnect again
I can start my process only when there are no users connected.
My database is part of a client/server system (no Web).
Any hints?
-at[tach] : this parameter prevents any new connections to the database from being made with the exception of the SYSDBA and the database owner. The shutdown will fail if there are any sessions connected after the timeout period has expired. It makes no difference if those connected sessions belong to the SYSDBA, the database owner or any other user. Any connections remaining will terminate the shutdown with the following details:
https://firebirdsql.org/manual/gfix-dbstartstop.html
There is also the Services API to do this, so your database access library should expose the shutdown function. Specify a short shutdown timeout, and if it fails, there were some users connected. If it succeeds, you can now go on with maintenance, with a guarantee that client applications will not be able to connect.
Alternatively, you can upgrade Firebird 2.0 -> 2.1, which is much closer to 2.0 than 2.5 is, but already has Monitoring Tables implemented.
However, this approach of yours has one weak point: race conditions. Using Monitoring Tables, you envision your work as follows:
1. Keep querying the Monitoring Tables (which slows down server work significantly) until there are no other connections.
2. Start maintenance work, which would fail if other connections are active.
3. Complete maintenance work.
The problem is that even after you reach the "no other connections" state at step 1, nothing guarantees that no new connections will be made between steps 1 and 2, and especially between steps 2 and 3.
Even if you make your checks and ensure the condition in step 1, by the time you go on with maintenance some new user may have connected and be working again. Not every time, of course, but as time goes by it will eventually happen one day.
But there is yet one more good thing in FB 2.1 - database-level triggers.
c:\Program Files\Firebird\Firebird_2_1\doc\sql.extensions\README.db_triggers.txt
You can create a regular "all_current_connections" table, using on connect and on disconnect triggers to keep it up to date.
You would perhaps also have to add some logic to your applications so that they update the table with an internal application ID, to tell main workflow apps/connections apart from servicing utilities. However, it is also possible that the mere CURRENT_USER and CURRENT_CONNECTION pair, which the trigger knows and can store in the table, would be enough, if you can infer the kind of application from the user name alone.
The on disconnect trigger might then check whether all "main workflow" apps have disconnected and POST_EVENT to notify the servicing utilities. Those utilities would still have to shut down the database first, anyway.
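A minimal sketch of that idea for FB 2.1+ (table and trigger names here are made up for illustration):

CREATE TABLE all_current_connections (
    conn_id   INTEGER NOT NULL,
    user_name VARCHAR(31)
);

SET TERM ^ ;

CREATE TRIGGER trg_on_connect ON CONNECT
AS
BEGIN
    /* Record every new connection. */
    INSERT INTO all_current_connections (conn_id, user_name)
    VALUES (CURRENT_CONNECTION, CURRENT_USER);
END^

CREATE TRIGGER trg_on_disconnect ON DISCONNECT
AS
BEGIN
    /* Remove the closing connection; optionally POST_EVENT here
       when the table shows no "main workflow" connections left. */
    DELETE FROM all_current_connections
    WHERE conn_id = CURRENT_CONNECTION;
END^

SET TERM ; ^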
You can shut down the database using gfix. The gfix tool will try to shut down the database, and if connections still exist after a timeout, the shutdown will fail.
For example, use:
gfix -shut -attach 5 <your-database>
This will:
prevent new connections from being created,
wait 5 seconds for the existing connections to end,
abort the shutdown if there are still active connections after 5 seconds,
otherwise, shut the database down after 5 seconds.
After shutdown, only SYSDBA or the database owner can create a connection to the database. This is only a viable option if your application itself doesn't use the SYSDBA or database owner account.
You bring the database back online using:
gfix -online <your-database>
For more information, see also Gfix - Database Housekeeping: Database Startup and Shutdown
Well, not an elegant way, but it works...
I try to rename the database file.
If someone is accessing the database, the rename operation throws an exception saying that the file is in use by another process.
If the rename succeeds, new users will not be able to access the database anymore (the connection string used by my systems is not changed).
I run the exclusive process I have to.
Then I rename the database file back to its original name, allowing new users to connect again.
I post my solution in the hope that it helps someone facing a similar problem.
Our new version of the product will probably be a Web application; the database has not been chosen yet, but it certainly will not be Firebird.
Thanks to all that tried to give me an answer.
Brief Background:
We have a cloud based Warehouse Management System that uses Glassfish to dish out the java interface. The Warehouse Management System consists of a Dashboard and a mobile application - both of which talk constantly with the Glassfish server (using a web browser).
Issue:
Recently our PostgreSQL database server HDD failed. After restoring from a backup and moving the database to an Amazon Web Service Server, idle connections seem to be dropping out. This causes the entire Warehouse Management System to fail. Restarting the Glassfish server seems to fix the issue until the idle connection causes it to fail again.
It happens around 3-4 times per day after approx. 20 minutes of inactivity, i.e. our customers' lunch breaks, after hours, etc.
Question:
Is there a setting that I'm missing in the postgresql.conf file? What else could be causing this?
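One thing worth ruling out (an assumption on my part, not a confirmed diagnosis): network equipment between Glassfish and an AWS-hosted database often silently drops TCP connections that stay idle for a fixed period, which would match the ~20 minute pattern. Server-side keepalives can be enabled in postgresql.conf with the standard parameters:

# Send TCP keepalive probes so idle connections are not silently
# dropped by a firewall/NAT between Glassfish and the database.
tcp_keepalives_idle = 300        # seconds of inactivity before the first probe
tcp_keepalives_interval = 60     # seconds between unanswered probes
tcp_keepalives_count = 5         # unanswered probes before the connection is dead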
Attachments:
I've attached a screenshot containing the output of running 'select * from pg_stat_activity;' and also the postgresql.conf file.
Log:
postgresql-8.4-main.log shows this occasionally, although the timestamps don't seem to line up with when it cuts out.
2015-10-19 07:51:41 NZDT [9971-1] postgres#customerName LOG: unexpected EOF on client connection
glassfish server.log is riddled with these lines:
[#|2015-10-19T07:46:49.715+1300|SEVERE|glassfish3.1.1|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=25;_ThreadName=Thread-2;|WebModule[/pns-CustomerName]Received InterruptedException on request thread
[#|2015-10-20T09:34:42.351+1300|WARNING|glassfish3.1.1|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=17;_ThreadName=Thread-2;|GRIZZLY0023: Interrupting idle Thread: http-thread-pool-8080(2).|
[#|2015-10-20T07:33:55.414+1300|WARNING|glassfish3.1.1|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=14;_ThreadName=Thread-2;|Response Error during finishResponse java.lang.NullPointerException
Thanks in advance
We are having an issue with SQL Server 2008 R2 (64-bit) responding to stored procedure calls. About every two weeks or so, the database stops responding to stored procedures called from an ADO connection/command set (4.0 framework). We have been working on this for several months now, with little improvement.
System changes:
We upgraded an existing vendor product from SQL Server 2005 to SQL Server 2008 R2 via their upgrade method. The database instance moved from a 32-bit Windows 2003 Server to 64-bit Windows 2008 Server.
The pattern of failure:
The application is run throughout the day, executed by different users via Citrix without issue. Every few weeks, the application stops responding around the same time frame. Once the database stops responding to the hosted instance of the application, any execution of the procedure from the application hangs (installed on CITRIX server, installed on varied physical systems, or debugging in VStudio 2010). After an hour of checking logs, server status, SQL Monitoring tools, tracing the repeated execution attempts, the server decides to respond to the application without intervention.
Strange thing is, when the server is not responding to ADO.Net calls, we execute the stored procedure from SQL Server Management Studio and receive results in 1 to 2 seconds. We are using the same login to access SQL Server Management Studio, and executing the stored procedure with the same parameters.
Looking at the connection string passed to the ADO connection, I don’t see anything unusual:
connectionString="Data Source=myserver\myinstance;Initial Catalog=databaseName;Persist Security Info=True;User ID=xxxxx;Password=yyyyy;Connect Timeout=45"
Tried so far:
Added an extra 2 GB of RAM to the OS: no change
Added an extra tempdb file and expanded the tempdb log file from 1 GB to 5 GB: reduced the issue from weekly to every 2nd or 3rd week.
Installed SQL Server 2008 R2 SP3: no change.
The black cloud:
To me, the repeating time pattern of failure implies an issue at the database host (server or resource), but the DBAs do not see a load or resource issue. If it were purely a host issue, why does it respond to SQL Server Management Studio calls, and not ADO.NET calls?
The last occurrence lasted over two hours, and was resolved after rebooting the database server. Not a great fallback, but desperate times and all…..
Updating the ADO.NET connection to use named pipes has resolved the issue for our application. Prefixing the server name in Data Source with "np:" makes the connection use named pipes.
connectionString="Data Source=np:myserver\myinstance;Initial Catalog=databaseName;Persist Security Info=True;User ID=xxxxx;Password=yyyyy;Connect Timeout=45"
The issue returned on 5/14. This query timeout posting gave us hints on how to force SQL Server Management Studio to behave like the ADO.NET connection, and allowed us to recognize that this is a "parameter sniffing" issue. We have applied changes to disable the parameter sniffing within the stored procedure.
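For anyone hitting the same wall: a known reason SSMS can fly while ADO.NET hangs on the same call is that the two use different SET options (ARITHABORT in particular), so each gets its own cached plan, and the ADO.NET plan can be compiled around an unrepresentative sniffed parameter value. A common in-procedure mitigation is to copy the parameters into local variables; the sketch below uses made-up object names, not our actual procedure:

CREATE PROCEDURE dbo.GetOrders
    @CustomerId INT
AS
BEGIN
    -- Copying the parameter into a local variable keeps the optimizer
    -- from building the cached plan around one sniffed value.
    DECLARE @LocalCustomerId INT = @CustomerId;

    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @LocalCustomerId;
    -- Alternative: append OPTION (RECOMPILE) to the problem statement.
END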
I am learning how to use the Service Broker of SQL Server 2008 R2, following the tutorial Completing a Conversation in a Single Database. Following Lesson 1, I have successfully created the message types, contract, queues, and services. Following Lesson 2, I have apparently sent the message. However, when trying to receive the message, I get NULL for ReceivedRequestMsg instead of the sent content.
When looking at the sys.transmission_queue, the transmission_status for the message says:
An exception occurred while enqueueing a message in the target queue. Error: 15517, State: 1. Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission.
I have installed SQL Server using a Windows login like Mycomp\Petr. I am using that login also for the lessons.
Can you guess what the problem is? What should I check and/or set to make it work?
Edited 2012/07/16: To help reproduce the problem, here is what I did. Can you reproduce the error if you follow the steps below?
Firstly, I am using Windows 7 Enterprise SP1, and Microsoft SQL Server 2008 R2, Developer Edition, 64-bit (ver. 10.50.2500.0, Root Directory located at C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL_PRIKRYL05\MSSQL).
Following the tutorial advice, I have downloaded the AdventureWorks2008R2_Data.mdf sample database, and copied it into C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL_PRIKRYL05\MSSQL\DATA\AdventureWorks2008R2_Data.mdf
The SQL Server Management Studio had to be launched "As Administrator" to be able to attach the data later. Then I connected the SQL Server.
Right click on Databases, context menu Attach..., button Add..., pointed to AdventureWorks2008R2_Data.mdf + OK. Then selected the AdventureWorks2008R2_Log.ldf from the grid below (reported as Not found) and pressed the Remove... button. After pressing OK, the database was attached and the AdventureWorks2008R2_log.LDF was created automatically.
The following queries were used to check whether Service Broker is enabled/disabled, and to enable it (Service Broker was enabled successfully for the database):
USE master;
GO
SELECT name, is_broker_enabled FROM sys.databases;
GO
ALTER DATABASE AdventureWorks2008R2
SET ENABLE_BROKER
WITH ROLLBACK IMMEDIATE;
GO
SELECT name, is_broker_enabled FROM sys.databases;
GO
Then, following the tutorial, the queries below were executed to create the message types, the contract, the queues, and the services:
USE AdventureWorks2008R2;
GO
CREATE MESSAGE TYPE
[//AWDB/1DBSample/RequestMessage]
VALIDATION = WELL_FORMED_XML;
CREATE MESSAGE TYPE
[//AWDB/1DBSample/ReplyMessage]
VALIDATION = WELL_FORMED_XML;
GO
CREATE CONTRACT [//AWDB/1DBSample/SampleContract]
([//AWDB/1DBSample/RequestMessage]
SENT BY INITIATOR,
[//AWDB/1DBSample/ReplyMessage]
SENT BY TARGET
);
GO
CREATE QUEUE TargetQueue1DB;
CREATE SERVICE
[//AWDB/1DBSample/TargetService]
ON QUEUE TargetQueue1DB
([//AWDB/1DBSample/SampleContract]);
GO
CREATE QUEUE InitiatorQueue1DB;
CREATE SERVICE
[//AWDB/1DBSample/InitiatorService]
ON QUEUE InitiatorQueue1DB;
GO
So far, so good.
Then the following queries were used to look at the queues (all empty at this point):
USE AdventureWorks2008R2;
GO
SELECT * FROM InitiatorQueue1DB WITH (NOLOCK);
SELECT * FROM TargetQueue1DB WITH (NOLOCK);
SELECT * FROM sys.transmission_queue;
GO
The problem manifests when the message is sent:
DECLARE @InitDlgHandle UNIQUEIDENTIFIER;
DECLARE @RequestMsg NVARCHAR(100);

BEGIN TRANSACTION;

BEGIN DIALOG @InitDlgHandle
FROM SERVICE
[//AWDB/1DBSample/InitiatorService]
TO SERVICE
N'//AWDB/1DBSample/TargetService'
ON CONTRACT
[//AWDB/1DBSample/SampleContract]
WITH
ENCRYPTION = OFF;

SELECT @RequestMsg =
N'<RequestMsg>Message for Target service.</RequestMsg>';

SEND ON CONVERSATION @InitDlgHandle
MESSAGE TYPE
[//AWDB/1DBSample/RequestMessage]
(@RequestMsg);

SELECT @RequestMsg AS SentRequestMsg;

COMMIT TRANSACTION;
GO
When looking at the queues, the Initiator... and Target... queues are empty, and the sent message can be found in sys.transmission_queue with the above-mentioned error reported via transmission_status.
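For completeness, the receive step that returns NULL is the one from Lesson 2, reconstructed here from the tutorial (since the message is stuck in sys.transmission_queue and never reaches TargetQueue1DB, the WAITFOR times out and @RecvReqMsg stays NULL):

DECLARE @RecvReqDlgHandle UNIQUEIDENTIFIER;
DECLARE @RecvReqMsg NVARCHAR(100);
DECLARE @RecvReqMsgName sysname;

BEGIN TRANSACTION;

WAITFOR
( RECEIVE TOP(1)
    @RecvReqDlgHandle = conversation_handle,
    @RecvReqMsg = message_body,
    @RecvReqMsgName = message_type_name
  FROM TargetQueue1DB
), TIMEOUT 1000;

SELECT @RecvReqMsg AS ReceivedRequestMsg;

COMMIT TRANSACTION;
GO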
alter authorization on database::[<your_SSB_DB>] to [sa];
The EXECUTE AS infrastructure requires dbo to map to a valid login. Service Broker uses the EXECUTE AS infrastructure to deliver the messages. A typical scenario that runs into this problem is a corporate laptop when working from home. You log in to the laptop using cached credentials, and you log in to SQL using the same Windows cached credentials. You issue a CREATE DATABASE and dbo gets mapped to your corporate domain account. However, the EXECUTE AS infrastructure cannot use the Windows cached accounts; it requires direct connectivity to Active Directory. The maddening part is that things work fine the next day at the office (your laptop is again in the corp network and can access AD...). You go home in the evening, continue with Lesson 3... and all of a sudden it doesn't work anymore. It makes the whole thing seem flimsy and unreliable. It is just the fact that AD connectivity is needed...
Another scenario that leads to the same problem is caused by the fact that databases retain the SID of their creator (the Windows login that issued the CREATE DATABASE) when restored or attached. If you used a local account PC1\Fred when you created the DB and then copy/attach the database to PC2, the account is invalid on PC2 (it is scoped to PC1, of course). Again, not much is affected, but EXECUTE AS is, and this causes Service Broker to give the error you see.
And the last example is when the DB is created by a user that later leaves the company and the AD account gets deleted. Seems like revenge on his part, but he's innocent. The production DB just stops working, simply because it's his SID that dbo maps to. Fun...
By simply changing the dbo to the sa login, you fix this whole EXECUTE AS thing, and all the moving parts that depend on it (and SSB is probably the biggest dependency) start working.
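A quick way to check whether dbo is mapped to an orphaned SID (a sketch using the standard catalog views; run it in the Service Broker database):

-- If matching_login comes back NULL, dbo maps to a SID with no server
-- login, and EXECUTE AS (hence Service Broker delivery) fails with 15517.
SELECT dp.sid AS dbo_sid, sp.name AS matching_login
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp ON dp.sid = sp.sid
WHERE dp.name = 'dbo';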
You would need to grant RECEIVE on your target queue to your login, and it should work!
USE [YourDatabase]
GRANT RECEIVE ON [dbo].[YourTargetQueue]
TO [Mycomp\Petr];
GO
You also need to grant SEND for your user. Permission on the Target service should be sufficient, but let's enable it on both services for the future.
USE AdventureWorks2008R2 ;
GO
GRANT SEND ON SERVICE::[//AWDB/1DBSample/InitiatorService]
TO [Mycomp\Petr] ;
GO
GRANT SEND ON SERVICE::[//AWDB/1DBSample/TargetService]
TO [Mycomp\Petr] ;
GO