We are using Azure DevOps to deploy to a staging slot and then swap with production.
When there is an issue with the swap, it will keep retrying for nearly 30 minutes.
Therefore I would like to put a timeout on the swap task, but if I do that, it stops the task in DevOps while leaving the swap still running in Azure.
I would like a way to force stop the process through a CLI, API, PowerShell or DevOps task.
Azure CLI doesn't seem to have anything.
The Kudu API can delete deployments, but doesn't appear to be able to stop them (https://github.com/projectkudu/kudu/wiki/REST-API#deployment).
I have read that you can stop a deployment, but on a Linux container App Service I can't see that option (see: Azure-Web-sites: How to cancel a deployment?).
Is there a way?
Please try the following command in an Azure PowerShell task to cancel a pending swap:
Invoke-AzResourceAction -ResourceGroupName [resource group name] -ResourceType Microsoft.Web/sites/slots -ResourceName [app name]/[slot name] -Action resetSlotConfig -ApiVersion 2015-07-01
Here is the documentation and my sample; I added -Force -Confirm:$false at the end of the command.
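For illustration (myRg, myapp, and staging below are placeholder names), the combined command might look like this:

    Invoke-AzResourceAction -ResourceGroupName myRg `
        -ResourceType Microsoft.Web/sites/slots `
        -ResourceName myapp/staging `
        -Action resetSlotConfig `
        -ApiVersion 2015-07-01 `
        -Force -Confirm:$false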
Update
If any errors occur in the target slot (for example, the production slot) after a slot swap, restore the slots to their pre-swap states by swapping the same two slots immediately.
So we don't need to stop it; we just need to wait for the swap operation to succeed.
When you submit a swap request, you get an HTTP 202 status code. In the portal, when you click the Swap button, you can see the browser repeatedly requesting the URL from the Location header to get the status of the swap.
As for when it ends, we can check the swap operation by polling that status URL.
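As a rough sketch of that polling loop (assuming a recent Az.Accounts module that provides Invoke-AzRestMethod; the subscription, resource group, app, and slot names, and the api-version, are placeholders to adapt):

    # Start the swap; ARM responds 202 Accepted with a Location header
    $path = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web" +
            "/sites/<app>/slots/staging/slotsswap?api-version=2021-02-01"
    $resp = Invoke-AzRestMethod -Method POST -Path $path -Payload '{ "targetSlot": "production" }'
    $location = $resp.Headers.Location

    # Poll the operation URL until it stops answering 202
    do {
        Start-Sleep -Seconds 15
        $status = Invoke-AzRestMethod -Method GET -Uri $location
    } while ($status.StatusCode -eq 202)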
If the swap operation takes too long, it is recommended to raise a support ticket and ask an engineer to check the reason.
Previous
You can use the AzureAppServiceManage task.
Azure App Service Manage task
Use this task to start, stop, restart, slot swap, Swap with Preview, install site extensions, or enable continuous monitoring for an Azure App Service.
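If you script the swap yourself instead of using the task, one related option (a sketch, assuming you use Swap with Preview; the resource names are placeholders) is the Az module's Switch-AzWebAppSlot, which can also back out of a pending swap-with-preview:

    # Phase 1: apply production's settings to staging (swap with preview)
    Switch-AzWebAppSlot -ResourceGroupName myRg -Name myapp `
        -SourceSlotName staging -DestinationSlotName production `
        -SwapWithPreviewAction ApplySlotConfig

    # If something looks wrong, cancel instead of completing the swap
    Switch-AzWebAppSlot -ResourceGroupName myRg -Name myapp `
        -SourceSlotName staging -DestinationSlotName production `
        -SwapWithPreviewAction ResetSlotSwap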
Tips
When you use the REST API to swap slots, you can check the Location header in the response.
Related
I am trying to publish an upgrade of a Service Fabric application from Visual Studio 2017 to our Azure Service Fabric cluster. In mid-September, I successfully published an upgrade of this same app with the same PowerShell script to SFC with no issues. I am now trying to upgrade it at the next version number and suddenly get this error.
I get the following error during Publish, related to PowerShell.
2>Started executing script 'Deploy-FabricApplication.ps1'.
2>powershell -NonInteractive -NoProfile -WindowStyle Hidden -ExecutionPolicy Bypass -Command ". 'C:\Users\pj\Source\Workspaces\VDevelopment\trunk\Services\Sources\src\For.Application.ServiceFabric.Sources\Scripts\Deploy-FabricApplication.ps1' -ApplicationPackagePath 'C:\Users\pj\Source\Workspaces\VDevelopment\trunk\Services\Sources\src\For.Application.ServiceFabric.Sources\pkg\Debug' -PublishProfileFile 'C:\Users\pj\Source\Workspaces\VDevelopment\trunk\Services\Sources\src\For.Application.ServiceFabric.Sources\PublishProfiles\Cloud.xml' -DeployOnly:$false -ApplicationParameter:@{} -UnregisterUnusedApplicationVersionsAfterUpgrade $false -OverrideUpgradeBehavior 'None' -OverwriteBehavior 'SameAppTypeAndVersion' -SkipPackageValidation:$false -ErrorAction Stop"
2>Copying application package to image store...
2>Upload to Image Store succeeded
2>Registering application type...
2>Register application type started. Use Get-ServiceFabricApplicationType to query for status.
2>Running Image Builder process ...
2>Application package is registered.
2>Start upgrading application...
2>aka.ms/upgrade-defaultservices
2>Start-ServiceFabricApplicationUpgrade : aka.ms/upgrade-defaultservices
2>At C:\Program Files\Microsoft SDKs\Service
2>Fabric\Tools\PSModule\ServiceFabricSDK\Publish-UpgradedServiceFabricApplication.ps1:317 char:13
2>+ Start-ServiceFabricApplicationUpgrade @UpgradeParameters
2>+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2> + CategoryInfo : InvalidOperation: (Microsoft.Servi...usterConnection:ClusterConnection) [Start-ServiceFa
2> bricApplicationUpgrade], FabricException
2> + FullyQualifiedErrorId : UpgradeApplicationErrorId,Microsoft.ServiceFabric.Powershell.StartApplicationUpgrade
2>
2>Finished executing script 'Deploy-FabricApplication.ps1'.
2>Time elapsed: 00:07:39.0407526
2>The PowerShell script failed to execute.
========== Build: 1 succeeded, 0 failed, 10 up-to-date, 0 skipped ==========
========== Publish: 0 succeeded, 1 failed, 0 skipped ==========
Any idea what's going on here? Again, when I last published this in September, with the same script, no issues at all, and I haven't made any changes to the solution other than upgrading the Manifest versions to push it out as a new upgraded version.
I noted this SO thread: Getting error as part of trying to upgrade Service Fabric Application using Start-ServiceFabricApplicationUpgrade, and saw that the user's error was similar, but the answer does not apply to my issue because all three steps in the answer provided are definitely included in my PowerShell deploy script.
I can add the deployment script if helpful, but will wait until that is requested as it's long, and I only want to post it here if someone feels it's needed to diagnose.
You are getting this error because you are changing some parameters in a DefaultService that are not allowed by default.
The link aka.ms/upgrade-defaultservices shown in the error logs explains this:
Some default service parameters defined in the application manifest can also be upgraded as part of an application upgrade. Only the service parameters that support being changed through Update-ServiceFabricService can be changed as part of an upgrade. The behavior of changing default services during application upgrade is as follows:
1. Default services in the new application manifest that do not already exist in the cluster are created.
2. Default services that exist in both the previous and new application manifests are updated. The parameters of the default service in the new application manifest overwrite the parameters of the existing service. The application upgrade will roll back automatically if updating a default service fails.
3. Default services that do not exist in the new application manifest are deleted if they exist in the cluster. Note that deleting a default service will result in deleting all that service's state and cannot be undone.
Also, there is this other SO question about the same thing: Default service descriptions can not be modified as part of upgrade, where the fix was to set EnableDefaultServicesUpgrade to true.
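On an Azure-hosted cluster, one way to flip that setting from PowerShell (a sketch using the AzureRM.ServiceFabric module; the resource group and cluster names are placeholders):

    # EnableDefaultServicesUpgrade lives in the ClusterManager section of the fabric settings
    Set-AzureRmServiceFabricSetting -ResourceGroupName myRg -Name mycluster `
        -Section ClusterManager -Parameter EnableDefaultServicesUpgrade -Value true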
Item 1 above is the common approach, where new services are added to the solution and later created during the upgrade without errors; items 2 and 3 are the restricted operations that require EnableDefaultServicesUpgrade.
Item 2 is what is described in the answer you've added: you changed MinReplicaSize and TargetReplicaSize to 1 during a manual update, so when SF validated the state of your service for the upgrade, it identified the difference and prevented the upgrade from continuing. If you had set the cluster setting EnableDefaultServicesUpgrade to true, it would have continued and overridden the default values.
Item 3 would occur when you removed the service and added it again, or had changed or misspelled the name; SF's default settings would prevent the deletion of that service.
Regarding the solution you've found (delete and recreate): it is not ideal.
In scenarios where you have stateful services running in production, it would be risky to apply, because you would have to back up the state, re-deploy the services, and restore the backup. In some cases, depending on what these changes are, you wouldn't be able to restore the backup, because it has to match the original service definitions (partition type, number, and so on). You would also lose the benefits of rolling upgrades, and your service might go down for a while if these backups are big.
The issue had to do with us trying to push out the application with mismatched node instances. We have a stateful service running under this application that is supposed to have MinReplicaSize and TargetReplicaSize set to 3. Yesterday, due to an issue, we deleted and re-created this service inside the SF Explorer. Upon doing so, it reset the replica size parameters back to 1. So we used a Powershell script to change them back to 3, but that script did not include all the necessary commands to get the service back to the exact state it was in before we deleted it. So today when we went to upgrade the app, the app in SFC wouldn't accept an upgrade from VS deployment, because of mismatches between what was in the parameters of the solution vs. what was in our SFC. To resolve, we re-deleted those services first, then deployed from VS, and no more error.
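For reference, a minimal sketch of the two operations discussed above (the connection endpoint and service name are placeholders, and the connection is shown unsecured for brevity):

    Connect-ServiceFabricCluster -ConnectionEndpoint 'mycluster.westus.cloudapp.azure.com:19000'

    # What the repair script attempted: align the live service with the manifest again
    Update-ServiceFabricService -Stateful -ServiceName 'fabric:/MyApp/MyStatefulService' `
        -MinReplicaSetSize 3 -TargetReplicaSetSize 3

    # What ultimately resolved it: delete the mismatched service so the next
    # deployment recreates it (this deletes the service's state!)
    Remove-ServiceFabricService -ServiceName 'fabric:/MyApp/MyStatefulService' -Force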
After creating a subscription in the Reporting configuration, the emails are not sent at the specified time, but if you perform any action on the site, the emails are sent. Is this a problem with the virtual machine on which the site or IIS is deployed, or is it how Kentico works?
Please advise what the problem might be and how email sending in the Reporting configuration works.
This is correct behavior. Kentico checks for e-mails that are to be sent at the end of requests, which means that if there are no requests, no e-mails are sent. If you need some tasks to run at a specified time, you need to use the Windows scheduler and configure the task (e-mail sending) to use it. See the official documentation for more details.
Another option to Enn's is to use a service like UpTime Robot to continually visit your page (every 5 minutes, for example). This not only generates a request but also helps keep your site awake and from going to sleep, if you can't manually set the worker process to never go to sleep.
The Windows scheduler is the most reliable, but UpTime Robot has worked well for us.
https://uptimerobot.com/
Under the "Scheduled Tasks" module there is a schedule task called "Report subscription sender". This task runs every minute and is responsible for checking your subscriptions and sending e-mails based on your configuration. This task runs within the Kentico instance. By default IIS will put the site/app pool to sleep after x (default 20) minutes of idle and the scheduled tasks are no longer able to run. When you hit the site it wakes the process back up and the scheduled task is able to run again. You can go into IIS and configure the "Idle Time-out (minutes)" for the application pool see this link https://patrickdesjardins.com/blog/iis-no-sleep-idle-and-autostart-with-values for a pretty good illustration. You can also adjust the app pool recycle intervals but that is probably not necessary for your issue.
The other option, as mentioned by Enn, is to install the Kentico Scheduler Windows service, which always runs, and configure the scheduled task to run in that service.
I have a PowerShell script that deploys an ARM template to Azure, but I have encountered an error that I can't quite seem to wrap my head around. When running in PowerShell, itself, I get the following error:
New-AzureRmResourceGroupDeployment : 7:46:01 AM - Error:
Code=CannotUpdatePlan; Message=Resource plan can not be changed.
The error description doesn't seem all that complicated, but I'm not sure why this is the case in the first place. I don't have any locks on the Resource Group, resources, or subscription so it should theoretically be able to work properly, right?
Upon testing in VSTS, I got the error mentioned above along with the following error message preceding it:
Selected subscription is in 'Disabled' state.
I'm not sure if the two are related, but I know the subscription is active, as I can deploy resources to it manually. Also, it clearly says "Active" when viewing the subscription in the portal.
Based on your description, I suggest checking the following:
1. Ensure your subscription is enabled. As a test, you could create a web app. If your subscription really is disabled, please refer to this link to re-activate your subscription. (A quick scripted check is sketched below.)
2. Check your subscription connection in VSTS and ensure the subscription is right. When you verify the connection, it should show Verified.
Note: I used the Hosted VS2017 agent and an Azure PowerShell script to deploy the template.
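As a quick sanity check for step 1, you can ask the AzureRM module (the one used elsewhere in this question) what state it sees for each subscription; a minimal sketch (property names can vary slightly between AzureRM versions):

    # Lists every subscription the signed-in account can reach, with its state
    Login-AzureRmAccount
    Get-AzureRmSubscription | Select-Object Name, State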
I am currently using TeamCity to deploy a web application to Azure Cloud Services. We typically deploy using PowerShell scripts to the Staging slot and thereafter do a manual swap (Staging to Production) in the Azure portal.
After the swap, we typically leave the Staging slot active with the old production deployment for a few days (in the event we need to revert/backout of the deployment) and thereafter delete it - this is a manual process.
I am looking to automate this process using TeamCity. My intended solution is to have a TeamCity build kick off x days after the deployment build has succeeded. (The details of the build steps are irrelevant, since I'd probably use PowerShell again to delete the staging slot.)
This plan has pointed me to look into TeamCity build chains, snapshot dependencies, etc.
What I have done so far:
1. correctly created the build chain by creating a snapshot dependency on the deployment build configuration, and
2. created a Finish Build Trigger.
At the moment, this approach kicks off the dependent build 'Delete Azure Staging Web' (B) immediately after the deployment build has succeeded. However, I would like this to be a delayed build, x days later.
Looking at the above build chain, I would like build B to run on 13-Aug-2016 at 7:31am (if x = 3).
I have looked into the Schedule Trigger option as well, but am slightly lost as to how I can use it to achieve this. As far as I understand, using a cron expression will result in the build running repeatedly, which is not what I want; I would like build B to execute only once.
Yes, this can be done by making use of the REST API.
I've made a small sample which should convey the fundamental steps. It is a PowerShell script that will clear the triggers on another build configuration and add a scheduled trigger with a start time X days on from the current time (both determined by parameter values in the script).
1) Add a PowerShell step at the end of the main build and run add-scheduled-trigger as inline source code.
2) Update the parameter values in the script:
$BuildTypeId - the id of the configuration you want to add the trigger to
$NumberOfDays - the number of days ahead that you want to schedule the trigger for
There is admin / admin embedded in the script as the username / password authentication for the REST API.
Once this is done you should see a scheduled trigger created / updated each time you build the first configuration.
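A trimmed-down sketch of what add-scheduled-trigger might look like (the triggers endpoint follows TeamCity's documented REST route, but the scheduling-trigger property names, server URL, and build configuration id are assumptions to verify against your TeamCity version):

    $BuildTypeId  = 'DeleteAzureStagingWeb'   # id of the configuration to trigger
    $NumberOfDays = 3
    $teamcity = 'http://teamcity.local:8111'
    $cred = [pscredential]::new('admin', (ConvertTo-SecureString 'admin' -AsPlainText -Force))
    $when = (Get-Date).AddDays($NumberOfDays)

    # Clear the existing triggers on the target configuration
    $triggers = Invoke-RestMethod -Credential $cred `
        -Uri "$teamcity/httpAuth/app/rest/buildTypes/id:$BuildTypeId/triggers"
    foreach ($t in $triggers.triggers.trigger) {
        Invoke-RestMethod -Method Delete -Credential $cred `
            -Uri "$teamcity/httpAuth/app/rest/buildTypes/id:$BuildTypeId/triggers/$($t.id)"
    }

    # Add a cron-style scheduled trigger pinned to one specific date and time
    $body = "<trigger type='schedulingTrigger'><properties>" +
        "<property name='schedulingPolicy' value='cron'/>" +
        "<property name='cronExpression_sec' value='0'/>" +
        "<property name='cronExpression_min' value='$($when.Minute)'/>" +
        "<property name='cronExpression_hour' value='$($when.Hour)'/>" +
        "<property name='cronExpression_dm' value='$($when.Day)'/>" +
        "<property name='cronExpression_month' value='$($when.Month)'/>" +
        "<property name='cronExpression_dw' value='?'/>" +
        "</properties></trigger>"
    Invoke-RestMethod -Method Post -Credential $cred -ContentType 'application/xml' `
        -Body $body -Uri "$teamcity/httpAuth/app/rest/buildTypes/id:$BuildTypeId/triggers"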
Hope this helps
I am looking for a way to run a job on a schedule and also alert the user to that running job. Specifically, I am using PowerShell to manage a computer lab scenario, and between sessions I want to refresh the environment, clean off the desktop, reset shortcuts pinned to the task bar for the next session, etc. But I want to warn anyone sitting at the machine that this is about to happen. However, my scripts that use balloon tips very successfully as regular scripts don't work as scheduled jobs. They run, and I have verified they run as the user in question by creating a scheduled job that writes a text file to the user's desktop, but the balloon tips don't actually appear. Is there some secret to getting this to work, or is this a form of "interaction" that a scheduled job just can't do?
I also tried an alternative approach, launching the browser with a web page warning of the impending cleanup. That also didn't work, suggesting there are limits to what can be done as a scheduled job.
I would much rather go the very "integrated with the OS" route of the balloon tips, but for the life of me it seems like that just isn't an option. So, any other suggestions for providing user info by way of a scheduled job?
Since this runs in Session 0, where GUI interaction doesn't exist, you must resort to some other mechanism.
You say this happens between sessions. You could show your balloon via another "notification script" that is executed from within your scheduled job. You have options here. For example:
1. Add an entry to the registry Run key that self-deletes on run. It shows the popup when the user logs in (session change?). The entry executes a PowerShell script whose parameters you can craft, e.g. powershell -File notify.ps1 "Operation bla bla..".
2. Add a scheduled task that doesn't run in Session 0 (a regular task). You need to do that only once. Every subsequent time, you run this task from within the main script to show the notification to the user, via schtasks run or the ScheduledTasks module, depending on your system (see the sketch after this list).
3. Add a scheduled task that periodically checks the event log for input from your main script. The task would start at logon and subscribe to event log notifications. I don't like this one, as the script must run non-stop.
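A minimal sketch of option 2 (the task name, script path, and message text are all assumptions):

    # --- one-time setup: register a regular (interactive, non-Session-0) task; no trigger needed ---
    $action = New-ScheduledTaskAction -Execute 'powershell.exe' `
        -Argument '-NoProfile -WindowStyle Hidden -File C:\Lab\notify.ps1'
    Register-ScheduledTask -TaskName 'LabNotify' -Action $action

    # --- from the scheduled job: fire the notification in the user's session ---
    Start-ScheduledTask -TaskName 'LabNotify'

    # --- C:\Lab\notify.ps1: the balloon tip itself ---
    Add-Type -AssemblyName System.Windows.Forms
    Add-Type -AssemblyName System.Drawing
    $icon = New-Object System.Windows.Forms.NotifyIcon
    $icon.Icon = [System.Drawing.SystemIcons]::Information
    $icon.Visible = $true
    $icon.ShowBalloonTip(10000, 'Lab refresh', 'Cleanup starts in 5 minutes.', 'Info')
    Start-Sleep -Seconds 10   # keep the process alive long enough for the tip to show
    $icon.Dispose()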