Compatibility level during migration from 2008 R2 to 2014 - sql-server-2008-r2

I'm working on a project to create tools our project teams can use to quickly spin up development and test environments containing some or all of our 20+ databases, but with only the specific data they need for their project. The environments may be physical servers, VMs, or local instances on a developer's computer. The production DBs are all 2008 R2, but we will be moving to 2014, so we need to support both.
I'm developing a script to create a standard set of logins and database users with specific roles assigned. These will necessarily be version-specific to account for the role-related 2008 stored procedures that have since been deprecated. The specific databases each team restores will vary, so the script uses a cursor over sys.databases and does its work in a WHILE loop with a USE at the top and a FETCH NEXT at the end.
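Roughly this shape, simplified (illustrative only; the per-database work goes through dynamic SQL here, since USE cannot take a variable directly, and the real script's mechanics may differ):

DECLARE @db sysname, @sql nvarchar(max);
DECLARE db_cursor CURSOR FOR
    SELECT name FROM sys.databases WHERE database_id > 4; -- skip the system databases
OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- The real per-database work (logins, users, roles) goes where the PRINT is.
    SET @sql = N'USE ' + QUOTENAME(@db) + N'; PRINT DB_NAME();';
    EXEC (@sql);
    FETCH NEXT FROM db_cursor INTO @db;
END
CLOSE db_cursor;
DEALLOCATE db_cursor;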
Most of the DBs have is_auto_update_statistics_async set to 0, but a few, including everyone's default database, have it set to 1. As a result, the USE statement changes the SET context that existed when the cursor was allocated, causing the FETCH NEXT to fail. My first attempt to correct that was to issue SET AUTO_UPDATE_STATS_ASYNC ON right before the FETCH NEXT, but that resulted in an error, even on a 2014 local instance ('AUTO_UPDATE_STATS_ASYNC' is not a recognized SET option).
I have a solution for that, but while researching I noticed that the compatibility_level was the same (100) both in the original 2008 R2 databases and in the databases restored to a 2014 local instance.
Does the error, and the fact that the compatibility level remained the same, indicate the databases were not really migrated when they were restored to the 2014 instance?
If so, would incrementing the compatibility level after the restore correct that?
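For reference, this is the sort of change I mean (100 is the 2008/2008 R2 level and 120 the 2014 level; the database name is illustrative):

SELECT name, compatibility_level FROM sys.databases;
ALTER DATABASE [MyRestoredDb] SET COMPATIBILITY_LEVEL = 120; -- 120 = SQL Server 2014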

From what I can tell from your post, it is likely just a syntax error. First, the option is named AUTO_UPDATE_STATISTICS_ASYNC, not AUTO_UPDATE_STATS_ASYNC.
Second, AUTO_UPDATE_STATISTICS_ASYNC is not a session-level SET option (like ANSI_NULLS, etc.); it is a database setting, so it should be changed with ALTER DATABASE:
ALTER DATABASE [DatabaseName] SET AUTO_UPDATE_STATISTICS_ASYNC ON
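Since the database name varies inside your cursor loop, you would presumably apply it per database via dynamic SQL, along these lines (@db here stands for the loop's current database name):

DECLARE @db sysname = N'MyDb'; -- in your loop this comes from the FETCH
DECLARE @sql nvarchar(max) =
    N'ALTER DATABASE ' + QUOTENAME(@db) + N' SET AUTO_UPDATE_STATISTICS_ASYNC ON;';
EXEC sys.sp_executesql @sql;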

Related

How to take backup of Tableau Server Repository (PostgreSQL)

We are using version 2018.3 of Tableau Server. Server stats such as user logins are logged into the PostgreSQL DB, and that data is cleared regularly after 1 week.
Is there any API available in Tableau to connect to the DB and take a backup of the data somewhere like HDFS, or anywhere on a Linux server?
Kindly let me know if there is any way other than an API as well.
Thanks.
You can enable access to the underlying PostgreSQL repository database with the tsm command. Here is a link to the documentation for your (older) version of Tableau:
https://help.tableau.com/v2018.3/server/en-us/cli_data-access.htm#repository-access-enable
It would be good security practice to limit access to only the (whitelisted) machines that need it, create or use an existing read-only account to access the repository, and ideally to disable access when your admin programs are complete (i.e., enable access, do your query, disable access).
This way you can have any SQL client code you wish query the repository, create a mirror, create reports, run auditing procedures - whatever you like.
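For example, once access is enabled, any PostgreSQL client can connect to the repository database (named workgroup, default port 8060) as the readonly user and run queries like the one below; treat the exact view and column names as assumptions to verify against the docs for your version:

-- List the server's users from one of the underscore-prefixed repository views
SELECT name
FROM _users
ORDER BY name;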
Personally, before writing significant custom code, I'd first see if the info you want is already available another way: in one of the built-in admin views, via the REST API, using the public-domain LogShark or TabMon systems, with the Server Management Add-on (for more recent versions of Tableau), or possibly the new Data Catalog.
I know at least one server admin who somehow clones the whole Postgres repository database periodically so he can analyze stats offline. Not sure what approach he uses to clone. So you have several options.

BizTalk Databases Missing, Not sure what to do

I was testing my code changes, meaning undeploying/redeploying applications in BizTalk, and then all of the BizTalk databases disappeared (BAMArchive, BAMPrimaryImport, BizTalkDTADb, BizTalkMgmtDb, BizTalkMsgBoxDb, BizTalkRuleEngineDb, BTAHL7). This is my test environment; however, I did not have any backups of these databases (yes, I have learned my lesson).
I tried restoring the databases from another test environment and then updating the server names and whatnot within the tables. I tried stopping/deleting some applications in the console, but more errors come up.
I am assuming that the GUIDs/keys of the deployed applications in TESTSERVER1 and TESTSERVER2 are different, and therefore they won't delete properly.
I am currently getting this error: "Schema referenced by Map 'XXXXX' has been deleted. The local, cached version of the BizTalk Server group configuration is out of date. You must refresh the BizTalk Server group configuration before making further changes. (Microsoft.BizTalk.Administration.SnapIn)".
When I try to refresh the BizTalk group in the console, I get the above error as well as "The application does not exist".
I tried truncating the tables that contained this data, but there are too many references for it to be worth the trouble.
I have also tried restoring the SSO key and updating the services (BizTalk, SSO, and a few more). When I try to start the BizTalk service "BizTalk Service BizTalk Group: BizTalkServerApplication", it says the service started and then stopped.
So a few questions:
What should I do? I hope a reinstallation of BizTalk is a last resort.
How did the databases disappear in the first place? The undeployment scripts have nothing to do with the databases, only the applications.
Sorry if the solution is obvious; I am by no means a BizTalk developer, just a stressed junior BI developer on a Friday night.
If you have already lost the BizTalk environments (applications undeployed and DBs lost), the best choice is to reinstall your environment and set up a backup right after. But do try to understand the source of the problem from the Windows and SQL Server logs.

Sitecore MongoDB not creating all databases/collections

We are working on Sitecore deployment in Azure.
Sitecore Experience Platform 8.0 rev. 160115
MongoDB - 3.0.4
We installed MongoDB, and we can connect to localhost using Robomongo. We can only see the "Analytics" database and its collections.
Our connection string setup is:
Connectionstring.config
But the other three databases and their collections are not created:
Tracking.live
Tracking.history
Tracking.contact
In the Sitecore.Analytics.config file, the setting "Analytics.Enabled" is set to true.
Sitecore.Analytics.config
In the logs we found some references to xDB Cloud initialization failures, so we disabled it.
Are we missing any configurations? Any help or suggestions are appreciated.
Thank you
Keep in mind that MongoDB is schemaless. Of course, in a production environment you would probably have to create these databases manually - to ensure that access rights are assigned correctly. But in a development environment, any database can be created on the fly.
The only reason the analytics database was created for you is because Sitecore creates indexes for the Interactions collection. Otherwise, you wouldn't see this database until xDB wrote some data into it. Same goes for any MongoDB collection - those won't appear until there's either data being written or an index created.
The other three databases will be created once the aggregation/processing logic is executed, i.e., when your instance starts to actually collect and process visit data.
In conclusion, don't worry about these databases missing (for now). Just verify that xDB functionality is working properly.

New SQL Azure databases are not visible in the portal nor via the PowerShell cmdlets

Last week I created 8 databases on a V12 SQL Azure server via PowerShell and ARM templates, and it worked fine. We started to use these databases in SQL Management Studio and have set up users, tables, etc. There is some data in them, and we can select and update as expected. In short, they work!
But today I wanted to apply some resource locks to the databases using the Azure PowerShell cmdlet New-AzureRmResourceLock, and I'm finding that the command Get-AzureRmResource | Where-Object {$_.ResourceType -eq "Microsoft.Sql/servers/databases"} does not return the databases I'm looking for!
Also, when I now look in the portal https://portal.azure.com I see the SQL servers listed, and when I enter the blade for my SQL server I see the databases. But if I click on a DB I'm led to a not-found resource. And when using the SQL Databases blade I don't see any of the databases listed.
As an aside, if I log on to the classic portal https://manage.windowsazure.com I can see the SQL server and all the databases, and I can click on them and configure them.
I don't really want to have to recreate all these databases, as we have started to set them up with schemas, users, and data, but I do need to be able to use the cmdlets to change them, especially to add resource locks to them.
Has anyone seen this before? And what could I try to bring them back so I can use PowerShell to configure them again?
I was in touch with Microsoft support last week and they had a look. This is the resolution.
From: Microsoft support Email
I suspect that our case issue derives from stale subscription cache. In summary, subscription cache can become stale when changes made within a subscription occur over time. In an effort to mitigate our case issue, I have refreshed the subscription cache from the backend.
After they had a look, it was sorted out that day; both the portal and, more importantly, the command line are fixed.
Thanks All
Please provide your subscription ID, server name, and missing database names and I will have this investigated. Apologies for the inconvenience. You can send details to me at bill dot gibson at microsoft . com.

Transient problems executing stored procedures on SQL Server 2008 R2

We are having an issue with SQL Server 2008 R2 (64-bit) responding to stored procedure calls. About every 2 weeks or so, the database stops responding to stored procedures called from an ADO.NET connection/command set (.NET Framework 4.0). We have been working on this for several months now, with little improvement.
System changes:
We upgraded an existing vendor product from SQL Server 2005 to SQL Server 2008 R2 via their upgrade method. The database instance moved from a 32-bit Windows Server 2003 machine to a 64-bit Windows Server 2008 machine.
The pattern of failure:
The application is run throughout the day, executed by different users via Citrix, without issue. Every few weeks, the application stops responding around the same time frame. Once the database stops responding to the hosted instance of the application, any execution of the procedure from the application hangs (whether installed on the Citrix server, installed on various physical systems, or debugging in Visual Studio 2010). After an hour of checking logs, server status, and SQL monitoring tools, and tracing the repeated execution attempts, the server decides to respond to the application without intervention.
The strange thing is, when the server is not responding to ADO.NET calls, we can execute the stored procedure from SQL Server Management Studio and receive results in 1 to 2 seconds. We are using the same login to access SQL Server Management Studio, and executing the stored procedure with the same parameters.
Looking at the connection string passed to the ADO connection, I don’t see anything unusual:
connectionString="Data Source=myserver\myinstance;Initial Catalog=databaseName;Persist Security Info=True;User ID=xxxxx;Password=yyyyy;Connect Timeout=45"
Tried so far:
Added an extra 2 GB of RAM to the OS: no change.
Added an extra tempdb data file and expanded the tempdb log file from 1 GB to 5 GB (roughly as sketched below): reduced the issue from weekly to every second or third week.
Installed SQL Server 2008 R2 SP3: no change.
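For reference, the tempdb changes were of this form (the logical/physical file names and the path are illustrative, not our actual layout):

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdb2.ndf', SIZE = 1GB);
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 5GB);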
The black cloud:
To me, the repeating time pattern of failure implies an issue at the database host (server or resources), but the DBAs do not see a load or resource issue. If it were purely a host issue, why does the server respond to SQL Server Management Studio calls, and not ADO.NET calls?
The last occurrence lasted over two hours, and was resolved by rebooting the database server. Not a great fallback, but desperate times and all…
Updating the ADO.NET connection to use named pipes has resolved the issue for our application. Prefixing the server name in the Data Source with "np:" makes the connection use named pipes.
connectionString="Data Source=np:myserver\myinstance;Initial Catalog=databaseName;Persist Security Info=True;User ID=xxxxx;Password=yyyyy;Connect Timeout=45"
The issue returned on 5/14. This query-timeout posting gave us hints on how to force SQL Server Management Studio to behave like the ADO.NET connection, which allowed us to recognize that this is a "parameter sniffing" issue. We have applied changes to defeat the parameter sniffing within the stored procedure.
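For reference, here is a hedged sketch of both steps; the table, procedure, and parameter names are illustrative, not our actual code. SSMS connects with SET ARITHABORT ON by default while ADO.NET connects with it OFF, so the two can get separate cached plans, and turning it off in SSMS reproduces the ADO.NET behavior. A common way to defeat parameter sniffing is to copy the parameter into a local variable (or add OPTION (RECOMPILE)):

-- Illustrative table and procedure showing one way to defeat the sniffing:
CREATE TABLE dbo.Orders (OrderId int PRIMARY KEY, CustomerId int, Total money);
GO
CREATE PROCEDURE dbo.GetOrders @CustomerId int
AS
BEGIN
    -- Copying the parameter into a local variable stops the optimizer from
    -- building the cached plan around the first caller's sniffed value.
    DECLARE @cust int = @CustomerId;
    SELECT OrderId, Total
    FROM dbo.Orders
    WHERE CustomerId = @cust; -- or add OPTION (RECOMPILE) to this SELECT
END
GO
-- Make SSMS behave like the ADO.NET connection when testing:
SET ARITHABORT OFF;
EXEC dbo.GetOrders @CustomerId = 42;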