BizTalk Databases Missing, Not Sure What to Do (single-sign-on)

I was testing my code changes, meaning undeploying/redeploying applications in BizTalk, and then all of the BizTalk databases disappeared (BAMArchive, BAMPrimaryImport, BizTalkDTADb, BizTalkMgmtDb, BizTalkMsgBoxDb, BizTalkRuleEngineDb, BTAHL7). This is my test environment; however, I did not have any backups of these databases (yes, I have learned my lesson).
I tried restoring the databases from another test environment and then updating the server names and so on within the tables. I tried stopping/deleting some applications in the console, but more errors come up.
I am assuming that the GUIDs/keys of the deployed applications in TESTSERVER1 and TESTSERVER2 are different, and therefore the applications won't delete properly.
I am currently getting this error: "Schema referenced by Map 'XXXXX' has been deleted. The local, cached version of the BizTalk Server group configuration is out of date. You must refresh the BizTalk Server group configuration before making further changes. (Microsoft.BizTalk.Administration.SnapIn)".
When I try to refresh the BizTalk group in the console, I get the above error as well as "The application does not exist".
I tried truncating the tables that contained this data, but there are too many references to make it worth the trouble.
I have also tried restoring the SSO key and restarting the services (BizTalk, SSO, and a few more). When I try to start the BizTalk service (BizTalk Group: BizTalkServerApplication), it says the service has started and then stopped.
So a few questions:
What should I do? I hope a re-installation of BizTalk is a last resort.
How did the databases disappear in the first place? The undeployment scripts have nothing to do with the databases, only the applications.
Sorry if the solution is obvious; I am by no means a BizTalk developer, just a stressed junior BI developer on a Friday night.

If you have already lost the BizTalk environment (applications undeployed and databases gone), the best choice is to reinstall your environment and set up a backup job right after. But try to understand the source of the problem in the Windows and SQL Server logs.


Two master instances on same database

I want to use PostgreSQL on Windows Server 2012 R2 for one of our projects, where we need 24/7 uptime.
I would like to ask the community whether I can have two master instances on two different servers, A and B, that will 'work' on the same DB located on shared file storage on the LAN. One master instance on server A will always be online, and when it goes offline for some reason, a PowerShell script will (I suppose) recognize that the postgresql service has stopped and will start the service on server B. The same script will continuously check that only one service across servers A and B is running, to avoid conflicts.
I'd like to ask whether this is possible, or whether there is a better approach for my configuration.
(I can't use replication, because when server A shuts down, server B is in read-only mode, which I don't want.)
If you manage to start two instances of PostgreSQL on the same data directory, serious data corruption will happen.
Normally there is a postmaster.pid file that prevents that, but a PostgreSQL server process on a different machine that accesses the same file system will happily unlink that after spewing some log messages, thinking it was left behind from a crash.
So you are really walking on thin ice with a solution like that.
One other issue that you didn't think of is the script that is supposed to check whether the server is still running. What if that script fails because, for example, the network connection between the two servers is down, but the server is still up and running happily? Such a “split brain” scenario will cause data corruption with your setup.
Another word of caution: since you seem to be using Windows (Powershell?), you probably envision a CIFS file system when you are talking of shared storage. A Windows “network share” is not a reliable file system — last time I checked, it did not honor _commit.
Creating a reliable failover cluster is harder than you think, and I'd recommend that you check existing solutions before you try to roll your own.

New SQL Azure databases are not visible in the portal or via the PowerShell cmdlets

Last week I created 8 databases on a V12 SQL Azure server via PowerShell and ARM templates, and it worked fine. We started to use these databases in SQL Server Management Studio and have set up users, tables, etc. There is some data in them and we can select and update as expected. In short, they work!
But today I wanted to apply some resource locks to the databases using the Azure PowerShell cmdlet New-AzureRmResourceLock, but I'm finding that the command Get-AzureRmResource | Where-Object {$_.ResourceType -eq "Microsoft.Sql/servers/databases"} does not return the databases I'm looking for!
I also now look in the portal https://portal.azure.com and I see the SQL servers listed, and when I enter the blade for my SQL server I see the databases. But if I click on a DB I'm led to a "resource not found" page. Also, when using the SQL Databases blade, I don't see any of the databases listed.
As an aside, if I log on to the classic portal https://manage.windowsazure.com I can see the SQL server and all the databases, and click on them and configure them.
I don't really want to have to recreate all these databases, as we have started to set them up with schemas, users and data, but I do need to be able to use the cmdlets to change them, especially to add resource locks.
Has anyone seen this before? And what could I try to bring them back so I can use PowerShell to configure them again?
I was in touch with Microsoft support last week and they had a look. This is the resolution.
From: Microsoft support Email
I suspect that our case issue derives from stale subscription cache. In summary, subscription cache can become stale when changes made within a subscription occur over time. In an effort to mitigate our case issue, I have refreshed the subscription cache from the backend.
After they had a look it was sorted out that day; both the portal and, more importantly, the command line are fixed.
Thanks All
Please provide your subscription id, server name and missing database names and I will have this investigated. Apologies for the inconvenience. You can send details to me at bill dot gibson at microsoft . com.

Is This an MSDTC Configuration Issue?

It seems I am running into a Microsoft Distributed Transaction Coordinator (MSDTC) related issue.
SCENARIO
I am using TransactionScope, and within a single scope it hits two different databases on different servers (for instance, DB_A running on Windows Server 2003 and DB_B running on Windows Server 2008). One database is accessed using Entity Framework 4.0 and the other using plain ADO.NET APIs.
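Roughly what the code looks like (a simplified sketch; the OrdersEntities context, the connection string and the UPDATE statement are placeholders, not my real code):

```csharp
using System.Data.SqlClient;
using System.Transactions;

class DistributedUpdate
{
    // Simplified sketch: one ambient TransactionScope covering an EF 4.0 context
    // on DB_A and a plain ADO.NET connection to DB_B. Names are placeholders.
    public void Run(string dbBConnectionString)
    {
        using (var scope = new TransactionScope())
        {
            using (var context = new OrdersEntities())                // EF 4.0 -> DB_A
            {
                // ... modify entities here ...
                context.SaveChanges();
            }

            using (var conn = new SqlConnection(dbBConnectionString)) // ADO.NET -> DB_B
            {
                conn.Open();  // a second connection escalates the transaction to MSDTC
                new SqlCommand("UPDATE dbo.Audit SET Processed = 1", conn).ExecuteNonQuery();
            }

            scope.Complete(); // if an exception is thrown before this, both should roll back
        }
    }
}
```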
When I run the application from my development machine (running WinXP), it commits and rolls back both connections correctly. But when I run the application deployed on another server (for instance WAS_A, running Windows Server 2003), it commits correctly, but in case of an exception it doesn't roll back the database activity on both servers.
I thought it might be an MSDTC configuration issue on WAS_A, so I went to MSDTC -> Security Configuration and checked all the available options (as I did previously on the other machines). But I am still facing the same issue.
Looking for your expert advice. :)
I believe that you need to look into Enabling Transaction Flow. Specifically, take a look at how one may error and the other complete as described in TransactionScope and WCF Services:
an error in a second WCF service call was NOT rolling back the changes made in a previous WCF service call...
In order to create an ambient transaction in your client and ensure that it is used by your WCF services...
The article then details the following steps:
Configure Your Binding with transactionFlow
Decorate Your Interface with [TransactionFlow(TransactionFlowOption)]
Decorate Your Method with [OperationBehavior(TransactionScopeRequired)]
Optionally update your Connection Strings with Transaction Binding*
*Note: This is optional in my opinion.
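For reference, here is a rough sketch of what steps 2 and 3 look like in code (the contract, method and binding names are just examples):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    // Step 2 (interface): allow the caller's ambient transaction to flow into this operation.
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void SubmitOrder(int orderId);
}

public class OrderService : IOrderService
{
    // Step 3 (method): enlist the operation in the flowed transaction.
    [OperationBehavior(TransactionScopeRequired = true)]
    public void SubmitOrder(int orderId)
    {
        // database work here participates in the caller's transaction
    }
}

// Step 1 (config): the binding must also have transaction flow enabled, e.g.
// <wsHttpBinding>
//   <binding name="txBinding" transactionFlow="true" />
// </wsHttpBinding>
```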

Deployment process

We have a massive system with around 15 servers hosting .NET WCF services, MVC applications, etc.
When we do a deployment (out of office hours) we have to uninstall and install everything on the live servers.
This takes a lot of time, and if something goes wrong we have to roll back everything.
Can you please suggest something different, like:
Deploy into another environment (whenever you like) and switch the URL to point to the new servers.
[This comes with the overhead of maintaining two copies of production (active and passive).]
Any other ideas, please.
Do the services need to be uninstalled for all deployments?
You can have a script that does this against all the servers in parallel:
Stop any windows services
Stop IIS
Make backup of replaced files
XCopy assemblies, resources, website files.
Perhaps run InstallUtil if deploying a service (as needed).
Start IIS and services.
Such a script will not take too long to execute. With 15 servers it will be well worth the effort of writing it and making the deployment and rollback process completely automated.
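To make the steps concrete, here is a rough C# rendering for a single target server; a batch or PowerShell script would work just as well, and the server names, service name, paths and share layout below are all assumptions:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.ServiceProcess;

class Deployer
{
    static void Main()
    {
        const string server      = "WEB01";                     // assumed target server
        const string serviceName = "MyCompany.Worker";          // assumed Windows service
        var sitePath   = @"\\WEB01\d$\Sites\MyApp";              // assumed deployment target
        var backupPath = @"\\WEB01\d$\Backups\MyApp-" + DateTime.Now.ToString("yyyyMMdd-HHmm");
        var dropPath   = @"\\BUILD01\drops\MyApp\latest";        // assumed build output

        // 1-2. Stop the Windows service and IIS on the target machine.
        using (var svc = new ServiceController(serviceName, server))
        {
            svc.Stop();
            svc.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromMinutes(2));
        }
        Process.Start("iisreset", server + " /stop").WaitForExit();

        // 3. Back up the files that are about to be replaced.
        CopyDirectory(sitePath, backupPath);

        // 4. Copy the new assemblies, resources and website files.
        CopyDirectory(dropPath, sitePath);

        // 5. (Optionally run InstallUtil here if the service itself changed.)

        // 6. Start IIS and the service again.
        Process.Start("iisreset", server + " /start").WaitForExit();
        using (var svc = new ServiceController(serviceName, server))
        {
            svc.Start();
        }
    }

    // Simple recursive copy helper.
    static void CopyDirectory(string source, string target)
    {
        Directory.CreateDirectory(target);
        foreach (var file in Directory.GetFiles(source))
            File.Copy(file, Path.Combine(target, Path.GetFileName(file)), true);
        foreach (var dir in Directory.GetDirectories(source))
            CopyDirectory(dir, Path.Combine(target, Path.GetFileName(dir)));
    }
}
```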
It sounds like you need a load balancer to handle the traffic to your production servers. You would deploy all your new code to Server Farm B and test it using a test DNS entry. Once you are satisfied with the changes, you would repoint your load balancer addresses from Server Farm A to Server Farm B, and it will then become live. The only downside to this is with database changes.

Database migrations: manage with build script or automatic on app startup?

I'm in the process of developing a deployment system for a new web app and I'm wondering where the best point in the process to manage database migrations is (the question of how to do the migrations is another problem entirely).
It seems there are two ways to go:
1. Use a migration script that can either be run manually from the command line or as part of the automatic deployment/build process.
2. Run the migrations when the app starts up (I'm using ASP.NET, so this can be done easily enough without causing a long-running user request).
Does anyone have any suggestions/insight/experience with these approaches? Any other suggestions?
I can see why #1 might be more attractive - it gives me complete control over when the DB is updated. However, I quite like #2 as it allows me to quickly iterate between deployments and reduces the manual process. #2 could also be used on my development machine to allow even quicker iterations. Hmm, starting to think having both might be a good thing...
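For what it's worth, option #2 in my head looks roughly like the sketch below; the dbo.SchemaVersion table and a ~/Migrations folder of numbered scripts (001.sql, 002.sql, ...) are assumptions, not a particular framework:

```csharp
// Global.asax.cs -- rough sketch of option #2: check a version table and apply
// any pending numbered scripts at application startup.
using System;
using System.Configuration;
using System.Data.SqlClient;
using System.IO;
using System.Linq;
using System.Web;
using System.Web.Hosting;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        var cs = ConfigurationManager.ConnectionStrings["AppDb"].ConnectionString;
        using (var conn = new SqlConnection(cs))
        {
            conn.Open();
            var current = (int)new SqlCommand(
                "SELECT ISNULL(MAX(Version), 0) FROM dbo.SchemaVersion", conn).ExecuteScalar();

            // Scripts are named by version number, e.g. 001.sql, 002.sql, ...
            foreach (var file in Directory.GetFiles(HostingEnvironment.MapPath("~/Migrations"), "*.sql")
                                          .OrderBy(f => f))
            {
                var version = int.Parse(Path.GetFileNameWithoutExtension(file));
                if (version <= current) continue;   // already applied

                using (var tx = conn.BeginTransaction())
                {
                    new SqlCommand(File.ReadAllText(file), conn, tx).ExecuteNonQuery();

                    var record = new SqlCommand(
                        "INSERT INTO dbo.SchemaVersion (Version) VALUES (@v)", conn, tx);
                    record.Parameters.AddWithValue("@v", version);
                    record.ExecuteNonQuery();

                    tx.Commit();
                }
            }
        }
    }
}
```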
We have a sales-force system with ~100 clients, and we update the database at application startup (true, ours is a desktop application). I like this approach; it's safe and iterative when the starting point is indeterminate (is the client database new, or only updated to version x.y.z?).
But on the server side I prefer your #1 option: we create a SQL query file on our virtual machine (based on a copy of the original database) and run this query against the real server.
So IMHO:
Disconnected clients: startup, iterative scripts
Server: query created on VM based on the actual and real database
I'm interested in this problem too, and have found some (half-)frameworks such as RikMigrations. After some googling, a good starting place for DB versioning/migration frameworks is the .NET Database Migration Tool Roundup. Not necessarily for the documentation, but the team blogs can be interesting.
I like option #1 better as it seems much more flexible. In lieu of actually performing migrations on each app start, I think I would verify that the database schema (version number?) matches the code, and if not, throw a warning or error about a mismatched database schema.
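Something along these lines is what I have in mind (a sketch only; the dbo.SchemaVersion table and the expected version constant are assumptions):

```csharp
// Startup check: compare the schema version recorded in the database with the
// version the build expects, and fail fast on a mismatch rather than migrating.
using System;
using System.Data.SqlClient;

static class SchemaGuard
{
    const int ExpectedSchemaVersion = 42;   // bump this alongside each migration script

    public static void Verify(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            var actual = (int)new SqlCommand(
                "SELECT ISNULL(MAX(Version), 0) FROM dbo.SchemaVersion", conn).ExecuteScalar();

            if (actual != ExpectedSchemaVersion)
                throw new InvalidOperationException(string.Format(
                    "Database schema is at version {0} but the code expects version {1}.",
                    actual, ExpectedSchemaVersion));
        }
    }
}
```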
I'd prefer option #1 for a number of reasons. First, integration tests usually require your DB schema to be up-to-date, and launching a web-site to upgrade the schema will be a huge timewaster. Second, you cannot change database schema while your site is running (say, add a couple of indexes to speed things up).
As for the production side of things, upgrading your database in a transactional, MSI-style installation is much better than attempting to upgrade at each app startup, since with startup upgrades you can potentially end up with desynchronized database and application versions.
And if you're looking for the migration framework, take a look at Wizardby.
If the application ever has to run on a customer's machine, then migrating at startup can prevent a lot of support calls - assuming you can do a seamless migration without user intervention (I hope you aren't normally running your web app with permission to modify the database).
If the application always runs under your control, automatic migration is less of an issue - but it can still be a good feature, especially if you want to minimize downtime and manual deployment steps.