I'm reading through the CodeDeploy reference docs and can't find the equivalent of the aws deploy push command to upload a new version of my application to S3, ready for deployment.
Do I need to just zip the files myself and send them to S3 with the other PowerShell tools instead?
Since push is not a single API call but a multistep operation, the simplest way to automate it in a PowerShell script is to literally put the command in the script:
aws deploy push
You may need to make sure the aws executable is on your path.
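As a minimal sketch, a PowerShell wrapper might look like this (the application name, bucket, and source path below are placeholders, not values from your setup):
# Sketch: push the current directory as a new application revision.
# Assumes the AWS CLI is installed and configured (aws configure).
$app    = "MyApp"              # hypothetical application name
$bucket = "my-deploy-bucket"   # hypothetical S3 bucket
aws deploy push `
    --application-name $app `
    --s3-location "s3://$bucket/$app.zip" `
    --source .
if ($LASTEXITCODE -ne 0) { throw "aws deploy push failed" }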
Is there any way to find and re-run an earlier instance of a Power Automate workflow programmatically?
I can do this manually: download the .csv file containing the instances, search the Trigger output column for the one I want, get the id, copy-paste the run URL, and click resubmit.
I tried with Power Automate itself:
The built-in Flow Management connector only supports finding a specific flow by name, and does not expose the run history at all.
PowerShell:
With the PowerApps module installed, I can list the run instances with
Get-FlowRun -FlowName {flow name}
But I don't see the same properties as in the exported .csv file, and there's also no Run-Flow cmdlet that would let me resubmit a run.
So, I am a little stuck here; could someone please help me out?
We cannot yet programmatically resubmit a flow run from the history with PowerShell or any other API method.
But we can avoid some manual work: by using the workflow() function in a Compose step, we can automate building the flow run history URL. A run URL looks like this:
https://xxx.flow.microsoft.com/manage/environments/07aa1562-fea6-4583-8d76-9a8e67cbf298/flows/141e89fb-af2d-47ac-be25-f9176e64e9a0/runs/08586722084717816659969428791CU12?backUrl=%2Fflows%2F141e89fb-af2d-47ac-be25-f9176e64e9a0%2Fdetails&runStatus=Failed
There are 3 GUIDs that I need to find so that I can build up the flow history URL.
The first GUID is my environment name (07aa1562-fea6-4583-8d76-9a8e67cbf298), then the flow name (141e89fb-af2d-47ac-be25-f9176e64e9a0), and finally the run ID (08586722084717816659969428791CU12).
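As a sketch, a Compose step expression along these lines can assemble the URL from the workflow() function (the exact workflow() properties are an assumption to verify against your tenant; the xxx region prefix is from the example above):
concat('https://xxx.flow.microsoft.com/manage/environments/', workflow()?['tags']['environmentName'], '/flows/', workflow()?['name'], '/runs/', workflow()?['run']['name'])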
There is a cmdlet in the Microsoft 365 CLI to resubmit a flow run:
m365 flow run resubmit --environment flowEnvironmentID --flow flowGUID --name flowRunID --confirm
You can also resubmit a flow run using the Power Automate REST API:
https://api.flow.microsoft.com/providers/Microsoft.ProcessSimple/environments/{FlowEnvironment}/flows/{FlowGUID}/triggers/manual/histories/{FlowRunID}/resubmit?api-version=2016-11-01
For the Power Automate REST API, you will have to pass an authorization token.
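As a rough sketch in PowerShell (how you obtain the bearer token depends on your setup; $token is a placeholder acquired out of band, and the IDs are the ones from the example above):
# Sketch: resubmit a failed run via the Power Automate REST API.
$env  = "07aa1562-fea6-4583-8d76-9a8e67cbf298"   # environment
$flow = "141e89fb-af2d-47ac-be25-f9176e64e9a0"   # flow GUID
$run  = "08586722084717816659969428791CU12"      # run ID
$uri = "https://api.flow.microsoft.com/providers/Microsoft.ProcessSimple/environments/$env/flows/$flow/triggers/manual/histories/$run/resubmit?api-version=2016-11-01"
Invoke-RestMethod -Method Post -Uri $uri -Headers @{ Authorization = "Bearer $token" }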
For more information, go through the following post
https://ashiqf.com/2021/05/09/resubmit-your-failed-power-automate-flow-runs-automatically-using-m365-cli-and-rest-api/
Is it possible via PowerShell to upload a JSON document to replace the current indexing policy on a Cosmos DB database, and if so, how? We would like to deploy a completed file rather than edit it via the portal; we can then implement versioning, so no one is hand-editing files or cutting and pasting.
You can use the Azure CLI (which can run from PowerShell) to run the az cosmosdb collection update command, documented here.
You will need to use the --indexing-policy optional parameter to achieve this.
You can enter it as a string or as a file, e.g., --indexing-policy @policy-file.json
For the record, if you use the --url-connection and --key arguments you won't need to az login.
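A minimal sketch (account, database, collection names, and key are placeholders; note that newer CLI versions replace this command with az cosmosdb sql container update):
# Sketch: replace the indexing policy from a versioned JSON file.
az cosmosdb collection update `
    --collection-name "mycollection" `
    --db-name "mydb" `
    --url-connection "https://myaccount.documents.azure.com:443/" `
    --key "<account-key>" `
    --indexing-policy "@policy-file.json"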
I have a bunch of tools that I copy to the destination machine in Azure every time I create a new one. How I do it now (see the sketch after the list):
zip the folder with the tools
open a PowerShell session
use Copy-Item -ToSession
unzip
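Roughly like this (machine name, credential, and paths are placeholders):
# Sketch of the current manual flow: zip, copy over a remoting session, unzip.
Compress-Archive -Path C:\tools\* -DestinationPath C:\temp\tools.zip -Force
$session = New-PSSession -ComputerName "myazurevm" -Credential (Get-Credential)
Copy-Item -Path C:\temp\tools.zip -Destination C:\temp\tools.zip -ToSession $session
Invoke-Command -Session $session -ScriptBlock {
    Expand-Archive -Path C:\temp\tools.zip -DestinationPath C:\tools -Force
}
Remove-PSSession $session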
This somewhat works. However, it's not ideal; e.g., updating a single tool is not as easy as it should be.
I would like to add this to a PowerShell DSC configuration. I tried to find something like that, and every File resource I found so far uses network shares.
Q: Is there any official way to achieve the same result?
Q: If not, any sensible way to achieve this? DSC was my first choice, but it is not mandatory.
I see this as a basic requirement and would expect it to be one of the scenarios people commonly try to solve.
Note1: I use DSC in push mode.
Note2: We tried Ansible to cover the whole process (VM creation, LB, NSG, VPN, ..., VM setup - registry, FW, ...), but found out that not everything in Azure is possible with Ansible (IIRC gateways, VPNs, ...).
I have configured a CI build for a Service Fabric application, in Visual Studio Team Services, according to this documentation: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-set-up-continuous-integration
But instead of having my CI build do the publishing, I only perform the Build and Package tasks, and include all Service Fabric related output, such as the pkg folder, scripts, publish profiles, and application parameters, in the drop. This way I can pass it along to the new Release pipeline (agent-based releases) to do the actual deployment of my Service Fabric application.
In my release definition I have a single Azure Powershell task, that uses an ARM endpoint (with proper service principals configured).
When I deploy my app to an existing service fabric cluster, I use the default Deploy-FabricApplication cmdlet passing along the pkg folder and a publish profile that is configured with a connection to the existing cluster.
The release fails with the error message "Cluster connection instance is null", and I cannot understand why.
Doing some debugging I have found that:
The Deploy-FabricApplication cmdlet executes the Connect-ServiceFabricCluster cmdlet just fine, but as soon as the Publish-NewServiceFabricApplication cmdlet takes over execution, the cluster connection is lost.
I would expect this scenario to be possible using the Service Fabric cmdlets, but I cannot figure out how to keep the cluster connection open during deployment.
UPDATE: The link to the documentation no longer refers to the Service Fabric PowerShell scripts, so the precondition for this question is no longer documented. The article now refers to the VSTS build and release tasks, which may be preferred over the PowerShell cmdlets I tried to use.
When Connect-ServiceFabricCluster is called (from Deploy-FabricApplication.ps1), a local $clusterConnection variable is set in the calling scope. You can see that using Get-Variable.
Unfortunately there is logic in some of the SDK scripts that expect that variable to be set but because they run in a different scope, that local variable isn't available.
It works in Visual Studio because the Deploy-FabricApplication.ps1 script is called using dot source notation, which puts the $clusterConnection variable in the current scope.
I'm not sure if there is a way to use dot sourcing when running a script through the release pipeline, but as a workaround you can make the $clusterConnection variable global right after it's been set by the Connect-ServiceFabricCluster call. Edit your Deploy-FabricApplication.ps1 script and add the following line after the connection logic (~line 169):
$global:clusterConnection = $clusterConnection
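In context, the edited section would look roughly like this (the surrounding lines are paraphrased from a typical generated Deploy-FabricApplication.ps1, so check them against your copy):
# ...existing connection logic in Deploy-FabricApplication.ps1 (paraphrased)...
try {
    [void](Connect-ServiceFabricCluster @ClusterConnectionParameters)
}
catch [System.Fabric.FabricObjectClosedException] {
    Write-Warning "Service Fabric cluster connection could not be established."
    throw
}
# Added workaround: promote the connection variable so SDK scripts running
# in other scopes can still find it.
$global:clusterConnection = $clusterConnection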
By the way, you might want to consider setting up custom build/release tasks that deploy a Service Fabric application, rather than using the various Deploy-FabricApplication.ps1 scripts.
There now exists a built-in VSTS task for deploying a Service Fabric app so you no longer need to bother with executing the PowerShell script on your own. Task documentation page is at https://www.visualstudio.com/docs/build/steps/deploy/service-fabric-deploy. The original CI article has also been updated which provides details on how to set everything up: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-set-up-continuous-integration/.
Try using the "PowerShell" task instead of the "Azure PowerShell" task.
I hit the same bug today and opened a GitHub issue here
On a side note, the VS-generated script Deploy-FabricApplication.ps1 uses the module
"$((Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Service Fabric SDK" -Name "FabricSDKPSModulePath").FabricSDKPSModulePath)\ServiceFabricSDK.psm1"
That's where Publish-NewServiceFabricApplication comes from. You can check the deployment logic and rewrite it in a saner way using the lower-level Service Fabric SDK cmdlets (potentially getting the connection via Get-ServiceFabricClusterConnection instead of making it global).
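For example, a bare-bones sketch (the connection endpoint, package path, and parameter file are placeholders, and Publish-NewServiceFabricApplication's exact parameters may differ between SDK versions):
# Sketch: import the SDK module and publish without Deploy-FabricApplication.ps1.
$sdkModule = "$((Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Service Fabric SDK" -Name "FabricSDKPSModulePath").FabricSDKPSModulePath)\ServiceFabricSDK.psm1"
Import-Module $sdkModule
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster:19000"
Publish-NewServiceFabricApplication `
    -ApplicationPackagePath ".\pkg\Release" `
    -ApplicationParameterFilePath ".\ApplicationParameters\Cloud.xml" `
    -Action RegisterAndCreate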
I'm running my deployments with the Release Management tool (currently in preview) in VSO.
When you configure a new release (with the new Release Management tool on VSO), you can add to the flow a task named Azure PowerShell (Run a PowerShell script within an Azure environment).
What I'm trying to do is make some changes to the web.config using Get-WebApplication and then Set-WebConfigurationProperty.
The error I get from the log is:
Process should have elevated status to access IIS configuration data.
##[error]Cannot find a provider with the name 'WebAdministration'.
Is it even possible to run those kinds of commands there, or do I need to use another kind of command to update my web.config?
There is no Azure API to make arbitrary transforms to your web.config.
Instead, the way this is typically done is to use the deployment time transform engine (e.g. via Web.Debug.config or using Chained Config transforms).
If you're trying to set the web.config of an Azure WebApp then you need to use the Set-AzureWebSite cmdlet or the Set-AzureRMWebApp cmdlet.
Which one you need depends on which Azure cmdlets are installed on the machine running the script. The hosted servers for RM may still have the 0.9.x cmdlets (which use Set-AzureWebSite). The Set-AzureRMWebApp cmdlet is in the 1.x cmdlets. Either will work to set the config; you just need to use the appropriate cmdlet for what's installed.
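For example, with the 1.x cmdlets, setting app settings (which override web.config appSettings in an Azure Web App) might look like this sketch (resource group, app name, and keys are placeholders):
# Sketch: override appSettings on an Azure Web App with the ARM cmdlets.
# Note: this replaces the whole app settings collection, so include every
# setting you want to keep, not just the changed one.
Set-AzureRMWebApp -ResourceGroupName "my-rg" -Name "my-webapp" `
    -AppSettings @{ "MySetting" = "NewValue"; "OtherSetting" = "KeepMe" }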