Transient problems executing stored procedures on SQL Server 2008 R2

We are having an issue with 64-bit SQL Server 2008 R2 responding to stored procedure calls. About every two weeks, the database stops responding to stored procedures called from an ADO.NET connection/command pair (.NET Framework 4.0). We have been working on this for several months now, with little improvement.
System changes:
We upgraded an existing vendor product from SQL Server 2005 to SQL Server 2008 R2 via their upgrade method. The database instance moved from a 32-bit Windows Server 2003 machine to a 64-bit Windows Server 2008 machine.
The pattern of failure:
The application runs throughout the day, executed by different users via Citrix, without issue. Every few weeks, the application stops responding around the same time of day. Once the database stops responding to the hosted instance of the application, any execution of the procedure from the application hangs, whether the application is installed on the Citrix server, installed on various physical systems, or being debugged in Visual Studio 2010. After an hour of checking logs, server status, and SQL monitoring tools, and tracing the repeated execution attempts, the server starts responding to the application again without intervention.
The strange thing is that while the server is not responding to ADO.NET calls, we can execute the stored procedure from SQL Server Management Studio and receive results in 1 to 2 seconds, using the same login and the same parameters.
Looking at the connection string passed to the ADO.NET connection, I don't see anything unusual:
connectionString="Data Source=myserver\myinstance;Initial Catalog=databaseName;Persist Security Info=True;User ID=xxxxx;Password=yyyyy;Connect Timeout=45"
Tried so far:
Added an extra 2 GB of RAM to the OS: no change.
Added an extra tempdb data file and expanded the tempdb log file from 1 GB to 5 GB: reduced the issue from weekly to every second or third week.
Installed SQL Server 2008 R2 SP3: no change.
The black cloud:
To me, the repeating time pattern of the failure implies an issue at the database host (server or resources), but the DBAs do not see a load or resource issue. If it were purely a host issue, why does it respond to SQL Server Management Studio calls and not ADO.NET calls?
The last occurrence lasted over two hours, and was resolved after rebooting the database server. Not a great fallback, but desperate times and all…..

Updating the ADO.NET connection to use named pipes has resolved the issue for our application. Prefixing the server name in the Data Source with "np:" makes the connection use named pipes.
connectionString="Data Source=np:myserver\myinstance;Initial Catalog=databaseName;Persist Security Info=True;User ID=xxxxx;Password=yyyyy;Connect Timeout=45"
The issue returned on 5/14. This query timeout posting gave us hints on how to force SQL Server Management Studio to behave like the ADO.NET connection and allowed us to recognize that this is a "parameter sniffing" issue. We have applied changes to disable parameter sniffing within the stored procedure.
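The fix itself lives in the stored procedure (T-SQL), but for anyone who wants to rule parameter sniffing in or out from the calling side, a comparable effect can be had by requesting a fresh plan per execution. A sketch, reusing the placeholder names from the earlier snippet:
```csharp
// Instead of CommandType.StoredProcedure with "dbo.usp_GetWorkItems",
// send an EXEC batch that asks SQL Server not to reuse a cached
// ("sniffed") plan for this execution:
cmd.CommandType = CommandType.Text;
cmd.CommandText = "EXEC dbo.usp_GetWorkItems @UserId = @UserId WITH RECOMPILE";
```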

Related

SQL Server query Linked Server returns an Interface error code 7390 when executed remotely

I have a problem when querying Active Directory or a MySQL database as linked servers.
The problem occurs when running the query through SSMS on a server other than the database server where AD is mounted.
If I run these queries on the actual DB server through SSMS, I get results from the linked server.
If I run these queries on a 'Management' machine on a separate VLAN, they return error 7390:
The requested operation could not be performed because OLE DB provider "ADsDSOObject" for linked server "ACTIVEDIR" does not support the required transaction interface.
This only affects the linked servers; I can query any table on the DB server from the management machine, so it's not ports or networking (as far as I can see).
I have tried changing the settings for RPC, RPC Out, and Promotion of Distributed Transactions in the properties sheet of the linked servers, in various combinations, but I still get no results, just the error.
For good measure, I have also tried setting the TRANSACTION ISOLATION LEVEL to READ UNCOMMITTED in the SQL blocks executed.
It used to work before I migrated from SQL Server 2008 R2 to 2016.
I would appreciate any guidance and wisdom.

Use data mining in SQL Server 2008 R2

I have SQL Server 2008 R2 on my computer and I want to use data mining with this version of SQL Server. My question is: how can I do this? I've read somewhere that I can use data mining in the SQL Server evaluation edition. Can I use data mining in SQL Server 2008 R2?
I also have another problem: when I try to use the SQL Server 2008 Data Mining Add-Ins, I can't connect to SQL Server and it displays this error message:
Unable to connect to server 'localhost'. Please make sure user '' has at least read permission to some database on the server.
First, you should get SQL Server Data Tools, which runs in Visual Studio.
You will need Analysis Services installed; if you don't have it, just run the SQL Server installer again and look for the option to install it.
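As a quick sanity check that Analysis Services is running and that your account can reach it, a small ADOMD.NET console program along these lines may help; the "localhost" data source assumes a default Analysis Services instance on your machine:
```csharp
using System;
using Microsoft.AnalysisServices.AdomdClient;   // ADOMD.NET client library

class CheckAnalysisServices
{
    static void Main()
    {
        // Assumes a default Analysis Services instance on this machine.
        using (var conn = new AdomdConnection("Data Source=localhost"))
        {
            conn.Open();
            Console.WriteLine("Connected.");

            // List any mining models the instance already contains.
            using (var cmd = new AdomdCommand(
                "SELECT MODEL_NAME FROM $SYSTEM.DMSCHEMA_MINING_MODELS", conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetValue(0));
            }
        }
    }
}
```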
After that you can take a look at this post I wrote a few months ago:
http://www.sqlservercentral.com/Forums/Topic480010-147-1.aspx
I wrote it specifically targeting the Neural Network models, but it contains details on several background steps you will need to do.
Finally - since you're using an evaluation version, you may want to just go for SQL Server 2012 (that's what I use, so I know it works).

Is this an MSDTC configuration issue?

It seems I am running into a Microsoft Distributed Transaction Coordinator (MSDTC) related issue.
SCENARIO
I am using TransactionScope, and within a single scope it hits two different databases on different servers (for instance, DB_A running on Windows Server 2003 and DB_B running on Windows Server 2008). One database is accessed using Entity Framework 4.0 and the other using plain ADO.NET APIs.
When I run the application from my development machine (running Windows XP), it commits and rolls back both connections correctly. But when I run the application deployed on another server (for instance, WAS_A running Windows Server 2003), it commits correctly, but in the case of an exception it doesn't roll back the database activity on both servers.
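To illustrate, the code inside the scope is roughly shaped like this; the table names are illustrative, and the Entity Framework call against DB_A is stood in for by a plain connection to keep the sketch self-contained:
```csharp
using System.Data.SqlClient;
using System.Transactions;

static class DualDatabaseUpdate
{
    public static void Run(string dbAConnectionString, string dbBConnectionString)
    {
        using (var scope = new TransactionScope())
        {
            // In the real code this first block is an Entity Framework 4.0
            // ObjectContext.SaveChanges() call against DB_A.
            using (var cnA = new SqlConnection(dbAConnectionString))
            using (var cmdA = new SqlCommand("UPDATE dbo.Orders SET Processed = 1", cnA))
            {
                cnA.Open();
                cmdA.ExecuteNonQuery();
            }

            // Plain ADO.NET against DB_B; opening a connection to a second
            // server is what escalates the transaction to MSDTC.
            using (var cnB = new SqlConnection(dbBConnectionString))
            using (var cmdB = new SqlCommand("UPDATE dbo.AuditLog SET Archived = 1", cnB))
            {
                cnB.Open();
                cmdB.ExecuteNonQuery();
            }

            // If an exception is thrown before Complete(), the work on both
            // servers is expected to roll back.
            scope.Complete();
        }
    }
}
```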
I thought it would be an MSDTC configuration issue on WAS_A, so I went to MSDTC -> Security Configuration and checked all the available options (as I did previously on other machines), but I am still facing the same issue.
Looking for your expert advice. :)
I believe that you need to look into Enabling Transaction Flow. Specifically, take a look at how one call may error while the other completes, as described in TransactionScope and WCF Services:
an error in a second WCF service call was NOT rolling back the changes made in a previous WCF service call...
In order to create an ambient transaction in your client and ensure that it is used by your WCF services...
The article then details the following steps:
Configure Your Binding with transactionFlow
Decorate Your Interface with [TransactionFlow(TransactionFlowOption)]
Decorate Your Method with [OperationBehavior(TransactionScopeRequired)]
Optionally update your Connection Strings with Transaction Binding*
*Note: This is optional in my opinion.
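A minimal sketch of steps 2 and 3 above, together with the client-side ambient transaction, might look like the following; the service contract and method names are placeholders:
```csharp
using System.ServiceModel;
using System.Transactions;

[ServiceContract]
public interface IOrderService
{
    // Step 2: allow the caller's transaction to flow into this operation
    // (the binding must also set transactionFlow="true" per step 1).
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void UpdateOrder(int orderId);
}

public class OrderService : IOrderService
{
    // Step 3: enlist this method in the caller's ambient transaction,
    // or create one if none flowed in.
    [OperationBehavior(TransactionScopeRequired = true)]
    public void UpdateOrder(int orderId)
    {
        // database work against DB_A / DB_B goes here
    }
}

static class Client
{
    // Both service calls share one ambient transaction; if an exception
    // prevents scope.Complete(), the work done by both calls rolls back.
    public static void UpdateBoth(IOrderService serviceA, IOrderService serviceB)
    {
        using (var scope = new TransactionScope())
        {
            serviceA.UpdateOrder(1);
            serviceB.UpdateOrder(2);
            scope.Complete();
        }
    }
}
```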

Why does a SQL Azure DACPAC upgrade (via a PowerShell script) consistently take 30 minutes to complete?

I created a PowerShell script to upgrade a SQL Azure instance with my latest DACPAC (taken from http://msdn.microsoft.com/en-us/library/ee634742.aspx).
When running my PowerShell script, I have found that it consistently takes approximately 30 minutes to execute. The script sits idle for almost half an hour waiting on $dacstore.IncrementalUpgrade($dacName, $dacType, $upgradeProperties) to return, and nothing is printed to the PowerShell console window. Only at the end of the half hour does the incremental upgrade start emitting console messages informing me that the upgrade is taking place; essentially, the script appears to hang for 30 minutes before finally coming back to life, and it does this consistently every time.
Does it usually take this long for the IncrementalUpgrade to complete, and is there supposed to be a 30-minute period of inactivity/waiting?
Note that I am running the PowerShell script from my local machine which is external to the Azure network.
Thanks for any insight you can give; I am hoping to reduce this incremental upgrade process to substantially less than 30 minutes so that my continuous integration build doesn't take so long.
According to Microsoft Support, this is a known issue and will be fixed in SQL Server 2012 (code-named Denali). Here are the details from Microsoft Support:
It's a known issue that using SSMS 2008 or PowerShell to update a DAC on SQL Azure is very slow. SQL Server 2008 uses the old extraction engine, which runs a query for every column and small object. This works well against an on-premise server and meets SQL Server 2008's original design target. However, when managing a SQL Azure database, the queries have to be transferred over the internet, and network latency makes the old extraction engine inefficient, especially when the network is poor. Our SQL product team is aware of this issue and has designed a new extraction engine to fix it. The new engine is integrated into SQL Server 2012 (code name Denali). Unfortunately, some of the new engine's behavior would introduce breaking changes in SQL Server 2008. We tried different approaches, but we cannot remove that regression barrier when applying the new engine to SQL Server 2008. Therefore, we have no plans to deliver the new extraction engine as a hotfix for SQL Server 2008 at this time, as that would impact current on-premise users and operations.
Further details about how I architected the PowerShell script with a continuous integration (CI) process can be found here.

ADO.NET Sync Framework - Determining which records synced successfully/failed from PDA to Server

I am using the ADO.NET Sync Framework; the client side is a PDA running Windows Mobile 5 with .NET CF 3.5 and SQL CE 3.5, and the server side is using SQL Server 2005.
On the server side, manual queries have been written to determine which records are selected for insert/update/delete for each client, as well as any conflicting records.
On the PDA, though, I can't seem to find a way to determine exactly which records were synced successfully and which failed. I can obtain the SyncStatistics, but this just gives totals, and I need actual row IDs so that I can delete the successfully synced records from the PDA.
Any ideas?
Do you have an event handler for ClientApplyChangeFailed? You can use it to log the failures.
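For what it's worth, a rough sketch of such a handler is below. It assumes the hub-and-spoke providers from Sync Services for ADO.NET (SqlCeClientSyncProvider on the device) and a placeholder "Id" key column; adjust both to your setup:
```csharp
using System;
using System.Data;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServerCe;

static class SyncFailureLogger
{
    // Call this once before syncAgent.Synchronize().
    public static void Attach(SqlCeClientSyncProvider clientProvider)
    {
        clientProvider.ApplyChangeFailed += OnApplyChangeFailed;
    }

    static void OnApplyChangeFailed(object sender, ApplyChangeFailedEventArgs e)
    {
        // e.Conflict.ClientChange holds the client rows that could not be applied.
        foreach (DataRow row in e.Conflict.ClientChange.Rows)
        {
            Console.WriteLine("Failed {0} on table {1}, Id = {2}",
                e.Conflict.ConflictType,
                e.Conflict.ClientChange.TableName,
                row["Id"]);               // "Id" is a placeholder for your key column
        }

        e.Action = ApplyAction.Continue;  // log the failure and keep syncing
    }
}
```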