I'm using PowerShell with resource templates to provision SQL Servers (and databases) in Azure. After a few provisions it starts returning this error:
New-AzureRmResourceGroupDeployment : 2:40:18 PM - Resource Microsoft.Sql/servers 'oao-sql01-gd6helghx' failed with message 'Subscription 'mysubscription Guid here'
is not ready for the operation because another operation is currently in progress. Please wait a few minutes and then try the operation again.'
There is no operation currently in progress that I can ascertain. Any SQL Servers I have provisioned are operational, and I am able to provision other kinds of resources with no problems.
If I wait several hours (not a few minutes), it will let me provision a new server, but soon after it blocks me again. Is there some kind of throttling going on because I am doing a lot of provisioning/teardown?
Any ideas are appreciated.
Turns out that after Azure support bumped my SQL Server quota, this began to work a while later. Not sure why it took so long to kick in, as I only had one server at the time. The best explanation was that the counter tracking the number of DBs against our quota got stale and didn't clear for a while. The default quota is 6 SQL Server instances.
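For anyone who hits this before a quota bump takes effect, a blunt workaround is to retry the deployment with a backoff. Here is a minimal sketch using the same AzureRM cmdlet as above; the resource group name, template path, and retry/delay values are all placeholders:

    # Retry the deployment with a linear backoff when Azure reports a conflicting operation.
    # Resource group, template path, and retry parameters are illustrative placeholders.
    $maxAttempts = 5
    for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
        try {
            New-AzureRmResourceGroupDeployment -ResourceGroupName 'my-rg' `
                -TemplateFile '.\sqlserver.json' -ErrorAction Stop
            break  # deployment succeeded, stop retrying
        }
        catch {
            if ($attempt -eq $maxAttempts) { throw }
            Write-Warning "Attempt $attempt failed: $($_.Exception.Message)"
            Start-Sleep -Seconds (60 * $attempt)  # wait longer after each failure
        }
    }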
We are using on-site DevOps and have a similar problem to the one described in this link: Example from SO.
But ours is intermittent.
Our environment uses two build-and-deploy machines, with each deployment machine having two worker agents.
For one of our projects, when it is deployed, we constantly get the error:
The VisualStudioRemoteDeployerc4d3852f-411b-48ba-97d8-5e09c8d07ce4 service failed to start due to the following error:
%%2
But here is the rub: it doesn't happen every time. Sometimes the deployment completes without error.
Other projects that use the same deployment machine and the same target server work each and every time without fail.
The deployment log reports "The WSMan provider host process did not return a proper response." as an error.
Checking the allocated memory, as described in PowerShell Out of Memory, we found ours set at 2.1 billion.
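For anyone checking the same thing, the quota can be read straight from the WSMan: drive on the target server. The Set-Item line at the end is only an example of how you would raise it (the 4096 value is a placeholder), and it needs an elevated prompt:

    # Read the per-shell memory quota used by WSMan/PowerShell remoting (value is in MB).
    Get-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB

    # Example of raising it (elevated prompt required), then restarting WinRM to apply.
    Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 4096
    Restart-Service WinRM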
This is an interesting issue that I have uncovered. The source of this problem is an interaction with McAfee Endpoint Security.
When the remote PowerShell script was invoked over WSMan, McAfee flagged it as a viral payload and cancelled the deployment by stopping the service and deleting the payload. This has been reported to McAfee as an issue. In the meantime, our internal McAfee security settings have had to be modified to ignore the processes used by PowerShell in remote deployment.
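If you suspect the same interaction, one quick check during a deployment is whether the transient deployer service vanishes mid-run. A small watch sketch; the name prefix comes from the error message above, and the GUID suffix changes per deployment:

    # Poll for the transient remote-deployer service while a deployment is running.
    # Press Ctrl+C to stop. The wildcard covers the per-deployment GUID suffix.
    while ($true) {
        Get-Service -Name 'VisualStudioRemoteDeployer*' -ErrorAction SilentlyContinue |
            Format-Table Name, Status
        Start-Sleep -Seconds 2
    }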
I've tried, failed, deleted the database and tried again 7 times now, and I get this error each time. I'm on the Lite plan, taking the IBM Data Science Certification course, and I can't get past this part. Any assistance would be greatly appreciated.
I deleted the database (you can only have one on the Lite plan, I believe) and retried several times.
I just verified that I am able to create a fully working Lite instance on my end. Is it possible that it's a networking issue on your end? Was that the full error message? It seems to be cut off. In what region and datacenter are you trying to create the service instance?
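If the portal keeps failing, trying the IBM Cloud CLI may at least surface the full error text. A sketch, assuming a Db2 on Cloud Lite instance in us-south; the service name, plan name, and region here are assumptions on my part, so check the catalog output first:

    # Look up the exact plan names first (the service name here is an assumption).
    ibmcloud catalog service dashdb-for-transactions

    # Then create the instance; 'lite' and 'us-south' are example plan/region values.
    ibmcloud resource service-instance-create my-db2-instance dashdb-for-transactions lite us-south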
After I proceed to restore the database from an automated backup file generated on Mar 13, 2019, the SQL instance gets stuck in this state forever: "Restoring from backup. This may take a few minutes. While this operation is running, you may continue to view information about the instance."
The database size is very small, less than 1MB.
For future users who run into problems like this, here is how you can handle it:
If you have a Google Cloud support package, file a support ticket directly with support for the quickest response.
Otherwise please file a private GCP issue describing the problem, remembering to include the project id and instance name.
However, Cloud SQL instances are monitored for stuck states like this, so the issue will often resolve itself within a few hours. A quick way to check the operation's status yourself is sketched below.
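As a first diagnostic before filing anything, the Cloud SDK can show whether the restore operation is still actually running; my-instance below is a placeholder for your instance name:

    # List recent operations on the instance to see if the restore is still RUNNING.
    gcloud sql operations list --instance=my-instance --limit=10

    # Optionally block until a specific operation finishes (ID taken from the list above).
    gcloud sql operations wait OPERATION_ID --timeout=unlimited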
I was testing my code changes, meaning undeploying/redeploying applications in BizTalk, and then all of the BizTalk databases disappeared (BAMArchive, BAMPrimaryImport, BizTalkDTADb, BizTalkMgmtDb, BizTalkMsgBoxDb, BizTalkRuleEngineDb, BTAHL7). This is my test environment; however, I did not have any backups of these databases (yes, I have learned my lesson).
I tried restoring the databases from another test environment and then updating the server names and so on within the tables. I tried stopping/deleting some applications in the console, but more errors come up.
I am assuming that the GUIDs/keys of the deployed applications in TESTSERVER1 and TESTSERVER2 are different, and therefore they won't delete properly.
I am currently getting this error: "Schema referenced by Map 'XXXXX' has been deleted. The local, cached version of the BizTalk Server group configuration is out of date. You must refresh the BizTalk Server group configuration before making further changes. (Microsoft.BizTalk.Administration.SnapIn)".
When I try to refresh the BizTalk group in the console, I get the above error as well as "The application does not exist".
I tried truncating the tables that contain this data, but there are too many references to make it worth the trouble.
I have also tried restoring the SSO key and updated the services (BizTalk, SSO, and a few more). When I try to start the "BizTalk Service BizTalk Group: BizTalkServerApplication" service, it says the service has started and then stopped.
So a few questions:
What should I do? I hope a reinstallation of BizTalk is the last resort.
How did the databases disappear in the first place? The undeployment scripts have nothing to do with the databases, only the applications.
Sorry if the solution is obvious; I am by no means a BizTalk developer, just a stressed junior BI developer on a Friday night.
If you have already lost the BizTalk environment (undeployed applications + lost DBs), the best choice is to reinstall your environment and set up a backup immediately afterwards. But first, try to understand the source of the problem in the Windows and SQL Server logs.
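To dig through the Windows side of those logs, something like this can narrow down what happened around the time the databases vanished. The two-day window is a placeholder to adjust, and the provider name assumes a default MSSQLSERVER instance:

    # Pull Application-log entries written by the SQL Server provider around the incident.
    # The StartTime window is a placeholder; tighten it to when the databases disappeared.
    Get-WinEvent -FilterHashtable @{
        LogName      = 'Application'
        ProviderName = 'MSSQLSERVER'
        StartTime    = (Get-Date).AddDays(-2)
    } | Select-Object TimeCreated, Id, Message | Format-List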
I provide support for a large application across multiple servers. System has been running live for 6+ months.
8th December: total system failure. iisreset across each of the servers sorted it out. Everything back to normal.
Post-failure investigation showed various processes unable to get a response from a particular server which hosts an instance of Dynamics CRM (2011 R11). Specifically, it seems the SOAP service was not responding (Organization.svc): 503 - Server Unavailable (really it was just the web service). I suspect it died.
Having the exact time of the error, I checked the event logs on the server, but these did not have anything of use. The last error prior to the failure was a report-rendering error nine minutes before the system actually went down. Surely if the web service crashed, this would be reflected in the event log?
Fast forward to today, 8th January and the system fails again. The 8th of the month again! iisreset fixes it... again!
Again, completely useless event logs showing no errors prior to the failure.
I entertained the idea of Dynamics CRM trace logging, but this is out of the question due to the performance hit.
Apart from the event logs, where else should I look? Are there possible external factors or causes? I'm trying to find the root cause but have run out of ideas!
While this may not address the source of your problem, maybe it can help minimize the symptoms. May I suggest that you configure the IIS server to recycle the application pool at a scheduled interval within your production environment.
http://technet.microsoft.com/en-us/library/cc753179%28v=ws.10%29.aspx
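If you would rather script it than click through IIS Manager, here is a sketch using the WebAdministration module; the pool name and the 03:00 recycle time are placeholders:

    # Schedule a fixed nightly recycle for the application pool (name/time are examples).
    Import-Module WebAdministration

    $pool = 'IIS:\AppPools\CRMAppPool'   # placeholder pool name

    # Clear any existing schedule, then add a recycle at 03:00 every day.
    Clear-ItemProperty -Path $pool -Name Recycling.periodicRestart.schedule
    New-ItemProperty -Path $pool -Name Recycling.periodicRestart.schedule -Value @{value = '03:00:00'}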