PowerShell on remote target shows incomplete verbose logs

I wrote a PowerShell script to run SQL scripts on a target database server using RoundhousE.
Example code in PowerShell script:
$outsideTransactionDir = (Join-Path $migrateDir "OutsideTransaction")
Write-Host $(Get-Item $outsideTransactionDir).FullName
if ((Test-Path $outsideTransactionDir) -eq $true)
{
Write-Verbose "Running RoundhousE (Outside Transaction)" -Verbose
Write-Host "Running RoundhousE (Outside Transaction)"
ExecuteRoundhousE $(Get-Item $outsideTransactionDir).FullName $([String]::Format('-cs "{0}" -ni -ct 500 -rb "runBeforeUp" --debug --env PROD --vf "db_version.xml" {1}', $settings.roundhouse.connectionstring, $r))
}
Write-Verbose "Running RoundhousE" -Verbose
Write-Host "Running RoundhousE"
ExecuteRoundhousE $(Get-Item $migrateDir).FullName $([String]::Format('-cs "{0}" --trx -ni -ct 500 -rb "runBeforeUp" -sp "StoredProcedure" -fu "UserDefinedFunction" -vw "View" -ix "Index" --debug --env PROD --vf "db_version.xml" {1}', $settings.roundhouse.connectionstring, $r))
After executing the above code through the Run PowerShell on Target Server task in a VSTS release, a text file is created on the specified server containing the logs (changes) from the SQL script execution, and the same logs should also be displayed in the VSTS release logs under that task. The issue is that not all of the logs appear there: only a few of them are shown in the VSTS release logs.

Powershell exit does not stop script from running

I'm running a Powershell script from a Powershell window.
Inside this script, I'm trying to connect to a service, and wish to stop the script if the connection fails on an exception.
I'm catching the exception, but the script refuses to stop and continues on.
I tried checking error output using **-ErrorAction** and **-ErrorVariable** and many other things.
But the issue is that the script does not stop. The script is saved in a .ps1 file and I run it from the shell window: .\script.ps1
Here's the code:
Write-Host "Attaching to cluster and retrieving credential" -ForegroundColor Gray
if ([string]::IsNullOrEmpty($clusterrg) -or [string]::IsNullOrEmpty($clustername)) {
Write-Host "Failed to attach to cluster. Parameters missing." -ForegroundColor Red
Write-Host "Use -clusterreg CLUSTER_RESOURCE_GROUP -clustername CLUSTER_NAME in the command line`n" -ForegroundColor Red
Exit 1
} else {
try{
az aks get-credentials --resource-group $clusterrg --name $clustername
Write-Host "Done`n" -ForegroundColor Green
} catch {
Write-Error $Error[0]
exit 1
}
}
Would appreciate any help.
Thank you!
You are using the Azure CLI, not Azure PowerShell commands, so try/catch as well as the common parameters -ErrorAction and -ErrorVariable are not supported. You have to check the $LASTEXITCODE variable for errors instead.
az aks get-credentials --resource-group $clusterrg --name $clustername
if( 0 -ne $LASTEXITCODE ) {
Write-Error "Azure CLI failed with exit code $LASTEXITCODE"
exit 1
}
You can find possible Azure CLI exit codes documented here.
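If your script contains many Azure CLI calls, you can wrap this exit-code check in a small helper so that any failure stops the script. A minimal sketch (the Invoke-NativeCommand name is my own, not a built-in cmdlet):
function Invoke-NativeCommand([scriptblock]$command) {
    # Run the native command and turn a nonzero exit code into a terminating error
    & $command
    if ($LASTEXITCODE -ne 0) {
        throw "Command failed with exit code ${LASTEXITCODE}: $command"
    }
}
# Usage: the script now stops at the first failing CLI call
Invoke-NativeCommand { az aks get-credentials --resource-group $clusterrg --name $clustername }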

Azure DevOps Azure CLI task with PowerShell script and parallel ForEach-Object execution: no output on failure

In order to scale Function Apps quickly, we want to be able to deploy them via IaC and then deploy a code package onto them. Unfortunately this is not possible dynamically with YAML pipelines in Azure DevOps, so I had to resort to using the Azure CLI.
Below you see the PowerShell script I came up with to deploy the code into the pool of Function Apps that I deployed through Terraform beforehand. To speed things up I turned on parallel processing of the ForEach-Object loop, since there are no dependencies between the single instances. This works fine to a certain extent, but I am having trouble due to the quirkiness of the Azure CLI. Writing non-error information to StdErr seems to be by design. Combined with some other strange behavior, this leads to the following scenarios:
Running sequentially usually works flawlessly, and I see any error output if a problem occurs. I also don't need to set powerShellErrorActionPreference: 'continue'. This, of course, slows down the deployment significantly.
Running in parallel always fails without setting powerShellErrorActionPreference: 'continue', and the reason for the failure is not output to the console. This seems to happen even if no real error occurs, since with continue there is no error output to the console either. This wouldn't be an issue if the pipeline failed in the case of a real error (which should be handled by checking the state of the ChildJobs) - but it doesn't.
So here I am between a rock and a hard place. Does anyone see the flaw in my implementation? Any suggestions are highly appreciated.
- task: AzureCLI@2
displayName: 'Functions deployment'
env:
AZURE_CORE_ONLY_SHOW_ERRORS: 'True'
AZURE_DEVOPS_EXT_PAT: $(System.AccessToken)
ARM_CLIENT_ID: $(AzureApplicationId)
ARM_CLIENT_SECRET: $(AzureApplicationSecret)
ARM_SUBSCRIPTION_ID: $(AzureSubscriptionId)
ARM_TENANT_ID: $(AzureTenantId)
inputs:
azureSubscription: 'MySubscription'
scriptType: 'pscore'
scriptLocation: 'inlineScript'
inlineScript: |
Write-Output -InputObject "INFO: Get Function App names"
$appNames = terragrunt output -json all_functionapp_names | ConvertFrom-Json
Write-Output -InputObject "INFO: Loop over Function Apps"
$jobs = $appNames | ForEach-Object -Parallel {
$name = $_
try
{
Write-Output -InputObject "INFO: $name`: start slot"
az functionapp start --resource-group $(ResourceGroup) --name "$name" --slot Stage --verbose
Write-Output -InputObject "INFO: $name`: deploy into slot"
az functionapp deploy --resource-group $(ResourceGroup) --name "$name" --slot Stage --src-path "$(System.ArtifactsDirectory)/drop/MyCodePackage.zip" --type zip --verbose
Write-Output -InputObject "INFO: $name`: deploy app settings"
az functionapp config appsettings set --resource-group $(ResourceGroup) --name "$name" --slot Stage --settings "@$(Build.ArtifactStagingDirectory)/appsettings.json" --verbose
Write-Output -InputObject "INFO: $name`: swap slot with production"
az functionapp deployment slot swap --resource-group $(ResourceGroup) --name "$name" --slot Stage --action swap --verbose
}
catch
{
Write-Output -InputObject "ERROR: $name`: An error occured during deployment"
Write-Output -InputObject ($_.Exception | Format-List -Force)
}
finally
{
try
{
Write-Output -InputObject "INFO: $name`: stop slot"
az functionapp stop --resource-group $(ResourceGroup) --name "$name" --slot Stage --verbose
}
catch
{
Write-Output -InputObject "ERROR: $name`: could not stop slot"
}
}
} -AsJob
[int]$pollingInterval = 10
[int]$elapsedSeconds = 0
while ($jobs.State -eq "Running") {
$jobs.ChildJobs | ForEach-Object {
Write-Output -InputObject "---------------------------------"
Write-Output -InputObject "INFO: $($_.Name) output [$($elapsedSeconds)s]"
Write-Output -InputObject "---------------------------------"
$_ | Receive-Job
Write-Output -InputObject "---------------------------------"
Write-Output -InputObject ""
}
$elapsedSeconds += $pollingInterval
[Threading.Thread]::Sleep($pollingInterval * 1000)
}
$jobs.ChildJobs | Where-Object { $_.JobStateInfo.State -eq "Failed" } | ForEach-Object {
Write-Output -InputObject "ERROR: At least one of the deployments failed with the following reason:"
Write-Output -InputObject $_.JobStateInfo.Reason
}
if ($jobs.State -eq "Failed")
{
exit 1
}
else
{
exit 0
}
powerShellErrorActionPreference: 'continue'
workingDirectory: './infrastructure/environments/$(TerraFormEnvironmentName)'
Edit 1
To get all output from ChildJobs I had to alter the code like so:
[int]$pollingInterval = 10
[int]$elapsedSeconds = 0
$lastResultsRead = $false
while ($jobs.State -eq "Running" -or !$lastResultsRead)
{
$lastResultsRead = $jobs.State -ne "Running"
$jobs.ChildJobs | ForEach-Object {
Write-Output -InputObject "---------------------------------"
Write-Output -InputObject "INFO: $($_.Name) output [$($elapsedSeconds)s]"
Write-Output -InputObject "---------------------------------"
$_ | Receive-Job
Write-Output -InputObject "---------------------------------"
Write-Output -InputObject ""
}
$elapsedSeconds += $pollingInterval
if (!$lastResultsRead)
{
[Threading.Thread]::Sleep($pollingInterval * 1000)
}
}
Hope this helps everyone that wants to achieve something similar.
So it seems that the mystery is solved.
TLDR;
If you want proper error handling, remove the --verbose from all Azure CLI calls as the verbose output is always written to StdErr even when setting the environment variable AZURE_CORE_ONLY_SHOW_ERRORS.
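As a minimal illustration, here is one of the calls from the script above with --verbose removed and an explicit exit-code check added (a sketch only, using the same variable names as the original script):
az functionapp start --resource-group $(ResourceGroup) --name "$name" --slot Stage
if ($LASTEXITCODE -ne 0) {
    throw "az functionapp start failed for $name with exit code $LASTEXITCODE"
}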
Explanation
I stumbled over the solution by adding an unrelated functionality to this script and noticed that in certain situations the last output of the ChildJobs is not being collected. I initially took that for a quirk of the Azure DevOps task but discovered that this also happens when I debug the output locally in VSCode.
That led me to add another condition to the while loop to ensure I get the final output; I updated the script in my initial post accordingly. Finally equipped with the whole picture of what was going on in the ChildJobs, I set up a separate test pipeline where I ran different test cases to find the culprit. Soon enough I noticed that taking away --verbose prevents the task from failing. This happened whether AZURE_CORE_ONLY_SHOW_ERRORS was set or not. So I gave the --only-show-errors option a go, which should have the same effect as the environment variable, though only on a single Azure CLI call. With the full output now at my disposal, I could finally see the message that --verbose and --only-show-errors can't be used in conjunction. That settled it: --verbose had to go. All it adds is the information of how long the command ran anyway, so I think we can do without it.
On an additional side-note: at the same time I discovered that ForEach-Object -Parallel {} -AsJob is making heavy use of PowerShell runspaces. That means that it cannot be debugged from within VSCode in the typical way. I found a video that might help in situations like this: https://www.youtube.com/watch?v=O-dksknPQBw
I hope this answer helps others that stumble over the same strange behavior. Happy coding.

azure cli not stopping on error in PS script

Boiled down to the minimum I have a Powershell script that looks like this:
$ErrorActionPreference='Stop'
az group deployment create -g ....
# Error in az group
# More az cli commands
Even though there is an error in the az group deployment create, it continues to execute beyond the error. How do I stop the script from executing on error?
Normally, the first thing to try is to wrap everything in a try...catch block.
try {
$ErrorActionPreference='Stop'
az group deployment create -g ....
# Error in az group
# More az cli commands
}
catch {
Write-Host "ERROR: $Error"
}
Aaaaand it doesn't work.
This is when you scratch your head and realize that we are dealing with Azure CLI commands and not Azure PowerShell. They are not native PowerShell commands that would honor $ErrorActionPreference; instead (as bad as it sounds), we have to treat each Azure CLI command independently, as if we were running individual programs (in the back end, the Azure CLI is basically aliases which run Python commands; ironically, most Azure PowerShell commands are just PowerShell wrappers around Azure CLI commands ;-)).
Knowing that the Azure CLI will not throw a terminating error, we instead have to treat it like a program and look at the return code (stored in the variable $LASTEXITCODE) to see whether it was successful. Once we evaluate that, we can throw an error:
az group deployment create -g ....
if($LASTEXITCODE){
Write-Host "ERROR: in Az Group"
Throw "ERROR: in Az Group"
}
This then can be implemented into a try...catch block to stop the subsequent commands from running:
try {
az group deployment create -g ....
if($LASTEXITCODE){
Write-Host "ERROR: in Az Group"
Throw "ERROR: in Az Group"
}
# Error in az group
# More az cli commands
}
catch {
Write-Host "ERROR: $Error"
}
Unfortunately this means you have to evaluate $LASTEXITCODE every single time you execute an Azure CLI command.
You may use the automatic variable $?. This contains the result of the last execution, i.e. True if it succeeded or False if it failed: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_automatic_variables?view=powershell-5.1#section-1
Your code would look something like this:
az group deployment create -g ....
if(!$?){
Write-Error "Your error message"
# Handle your error
}
Unfortunately, same as with @HAL9256's answer, you will need to add this code every single time you execute the Azure CLI.
Update - PS Version 7 error trapping for Az CLI (or kubectl)
This behavior changed significantly in PowerShell v7:
https://github.com/PowerShell/PowerShell/issues/4002
https://github.com/PowerShell/PowerShell/pull/13361
I ended up using the following solution instead which is consistent with PowerShell v5/7. It has the advantage of accepting pipeline input for commands like kubectl where stdio can be used for applying configuration etc.
I used scriptblocks since this integrates nicely into existing scripts without breaking syntax:
# no error
# Invoke-Cli {cmd /c echo hi}
# throws terminating error
# (with psv7 you can now call Get-Error to see details)
# Invoke-Cli {az bad}
# outputs a json object
# Invoke-Cli {az account show} -AsJson
# applies input from pipeline to kubernetes cluster
# Read-Host "Enter some json" | Invoke-Cli {kubectl apply -f -}
function Invoke-Cli([scriptblock]$script, [switch]$AsJson)
{
$ErrorActionPreference = "Continue"
$jsonOutputArg = if ($AsJson)
{
"--output json"
}
$scriptBlock = [scriptblock]::Create("$script $jsonOutputArg 2>&1")
if ($MyInvocation.ExpectingInput)
{
Write-Verbose "Invoking with input: $script"
$output = $input | Invoke-Command $scriptBlock 2>&1
}
else
{
Write-Verbose "Invoking: $script"
$output = Invoke-Command $scriptBlock
}
if ($LASTEXITCODE)
{
Write-Error "$Output" -ErrorAction Stop
}
else
{
if ($AsJson)
{
return $output | ConvertFrom-Json
}
else
{
return $output
}
}
}
Handling command shell errors in PowerShell <= v5
Use $ErrorActionPreference = 'Stop' and append 2>&1 to the end of the statement.
# this displays regular output:
az account show
# this also works as normal:
az account show 2>&1
# this demonstrates that regular output is unaffected / still works:
az account show -o json 2>&1 | ConvertFrom-Json
# this displays an error as normal console output (but unfortunately ignores $ErrorActionPreference):
az gibberish
# this throws a terminating error like the OP is asking:
$ErrorActionPreference = 'Stop'
az gibberish 2>&1
Background
PowerShell native and non-native streams, while similar, do not function identically. PowerShell offers extended functionality with streams and concepts that are not present in the Windows command shell (such as Write-Warning or Write-Progress).
Due to the way PowerShell handles Windows command shell output, the error stream from a non-native PowerShell process is unable (by itself) to throw a terminating error in PowerShell. It will appear in the PowerShell runspace as regular output, even though in the context of Windows command shell, it is indeed writing to the error stream.
This can be demonstrated:
# error is displayed but appears as normal text
cmd /c "asdf"
# nothing is displayed since the stdio error stream is redirected to nul
cmd /c "asdf 2>nul"
# error is displayed on the PowerShell error stream as red text
(cmd /c asdf) 2>&1
# error is displayed as red text, and the script will terminate at this line
$ErrorActionPreference = 'Stop'
(cmd /c asdf) 2>&1
Explanation of 2>&1 workaround
Unless otherwise specified, PowerShell will redirect Windows command shell stdio errors to the console output stream by default. This happens outside the scope of PowerShell. The redirection applies before any errors reach the PowerShell error stream, making $ErrorActionPreference irrelevant.
The behavior changes when explicitly specified to redirect the Windows command shell error stream to any other location in PowerShell context. As a result, PowerShell is forced to remove the stdio error redirection, and the output becomes visible to the PowerShell error stream.
Once the output is on the PowerShell error stream, the $ErrorActionPreference setting will determine the outcome of how error messages are handled.
Further info on redirection and PowerShell streams
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_redirection?view=powershell-7.2
The sample from HAL9256 didn't work for me either, but I did find a workaround:
az group deployment create -g ....
if($error){
Write-Host "ERROR: in Az Group"
#throw error or take corrective action
$error.clear() # optional - clear the error
}
# More az cli commands
I used this to try to deploy a keyvault, but in my environment, soft delete is enabled, so the old key vault is kept for a number of days. If the deployment failed, I'd run a key vault purge and then try the deployment again.
Building from HAL9256's and sam's answers, add this one-liner after each az cli or Powershell az command in a Powershell script to ensure that the catch block is hit when an error occurs:
if($error){ throw $error }
Then for the catch block, clear the error, e.g.
catch {
$ErrorMessage = $_.Exception.Message
$error.Clear()
Write-Warning "-- Operation Failed: $ErrorMessage"
}
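Put together, the pattern described above looks roughly like this (the az command is the placeholder from the question, not a complete deployment):
try {
    az group deployment create -g ....
    if ($error) { throw $error }
    # More az cli commands, each followed by the same check
}
catch {
    $ErrorMessage = $_.Exception.Message
    $error.Clear()
    Write-Warning "-- Operation Failed: $ErrorMessage"
}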

Newman with AzureDevOps + Powershell

Good morning everyone
I am trying to integrate Postman tests with an Azure DevOps release pipeline.
I have two steps:
The first step is to install newman.
The second step is to run the collection scripts with the newman run command.
The second step looks like:
try
{
$testFiles = Get-ChildItem *.postman_collection.json -Recurse
$environmentFile = Get-ChildItem *staging.postman_environment.json -Recurse
Write-Host $testFiles.Count files to test
foreach ($f in $testFiles)
{
$environment = $environmentFile[0].FullName
Write-Host running file $f.FullName
Write-Host usting environment $environment
$collection = $f.FullName
$resultFile = "Results\" + $f.BaseName + ".xml"
Write-Host running $collection
Write-Host will create $resultFile
$(newman run $collection -e $environment -r junit --reporter-junit-export $resultFile)
}
}
catch
{
Write-Host "Exception occured"
Write-Host $_
}
The above step does not work as expected. In the release log I can see both messages:
Write-Host running $collection
Write-Host will create $resultFile
However the line
$(newman run $collection -e $environment -r junit --reporter-junit-export $resultFile)
is not being executed.
I did the same on my local machine and the command works. However, the try/catch block does not work, and all I can see as the result is:
2019-11-22T15:11:23.8332717Z ##[error]PowerShell exited with code '1'.
2019-11-22T15:11:23.8341270Z ##[debug]Processed: ##vso[task.logissue type=error]PowerShell exited with code '1'.
2019-11-22T15:11:23.8390876Z ##[debug]Processed: ##vso[task.complete result=Failed]Error detected
2019-11-22T15:11:23.8414283Z ##[debug]Leaving D:\a\_tasks\PowerShell_e213ff0f-5d5c-4791-802d-52ea3e7be1f1\2.151.2\powershell.ps1.
Does anyone know how to get the real error, or have experience with newman testing in Azure DevOps?
When you run the above script in VSTS, remove the $() around the newman run line:
newman run $collection -e $environment -r junit --reporter-junit-export $resultFile
Then the script runs successfully.
As you may know, the newman run command displays no result in the PowerShell command-line interface even when it succeeds, so there is no direct message in the log to tell you whether it worked. To confirm this in VSTS, you can check the agent cache if you are using a private agent:
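As a further safeguard, you could also check newman's exit code after each run so the task fails with a meaningful message instead of a bare "PowerShell exited with code '1'" (a hedged sketch using the variables from the question):
newman run $collection -e $environment -r junit --reporter-junit-export $resultFile
if ($LASTEXITCODE -ne 0) {
    Write-Host "newman run failed for $collection with exit code $LASTEXITCODE"
    exit 1
}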

Install Windows HotFix using Terraform on AWS

I have a very simple PowerShell script that uploads a generated test file to an AWS S3 bucket from a Windows 2008 R2 Datacenter server (clean AWS instance). If I run the script remotely on the server using Terraform (remote-exec provisioner), the script fails on the S3 upload with a StackOverflowException. When I run the script directly on the server, it runs fine and uploads the file.
I've experimented with different sizes for the file and 14.5MB seems to be about the maximum that works before the StackOverflowException occurs. Just about any size works fine when I RDP into the server and run the script directly. I've tested 200MB and it works fine.
Any idea why this is happening or what I can do to fix it? The actual file I need to upload is 50MB.
Here are the essential parts to recreate the problem. terraform.tf file:
resource "aws_instance" "windows" {
count = "1"
ami = "ami-e935fc94" #base win 2008 R2 datacenter
instance_type = "t2.micro"
connection {
type = "winrm"
user = "<username>"
password = "<password>"
timeout = "30m"
}
provisioner "file" {
source = "windows/upload.ps1"
destination = "C:\\scripts\\upload.ps1"
}
provisioner "remote-exec" {
inline = [
"powershell.exe -File C:\\scripts\\upload.ps1"
]
}
}
The PowerShell script is very simple. upload.ps1:
$f = new-object System.IO.FileStream C:\Temp\test.dat, Create, ReadWrite
$f.SetLength(40MB) # change this to 14.5MB and it works!
$f.Close()
Write-S3Object -BucketName "mybucket" -Folder "C:\Temp" -KeyPrefix "20180322" -SearchPattern "*.dat"
The error that I receive when launching the script from Terraform (remote-exec provisioner):
aws_instance.windows (remote-exec): Process is terminated due to StackOverflowException.
Running upload.ps1 from RDP on the server itself works fine, including larger files (tested up to 200MB).
Here is the version information:
Microsoft Windows Server 2008 R2 Datacenter
Powershell Version: 3.0
AWS Tools for Windows PowerShell, Version 3.3.245.0
Amazon Web Services SDK for .NET, Core Runtime Version 3.3.21.15
This problem results from a Windows bug. This is all fine and good for a standard Windows server -- you can patch and move on. But, things are more tricky with AWS automation using Terraform.
The ideal solution would 1) use the base AMI, 2) apply the hotfix to the instance itself, and 3) then run the WinRM remote-exec, all from Terraform. Another solution would be to create an AMI with the hotfix installed and have Terraform generate instances using that AMI. However, then you're stuck maintaining AMIs.
Normally, I grab the Microsoft-provided base AMI using a filter:
data "aws_ami" "windows2008" {
most_recent = true
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "name"
values = ["Windows_Server-2008-R2_SP1-English-64Bit-Base*",]
}
owners = ["801119661308", "amazon"]
}
Then I use that AMI to create the AWS instance:
resource "aws_instance" "windows" {
count = "1"
ami = "${data.aws_ami.windows2008.id}"
...
}
But the base AMI doesn't have the hotfix installed that lets you avoid this WinRM/Windows bug. This is where it gets tricky.
You can use a userdata script to perform a multi-phase setup. On the first boot of the instance (Phase 1), we block the instance so that the remote-exec doesn't come in before we're ready. Then we download and install the hotfix and reboot (thanks to Niklas Akerlund, Micky Balladelli and Techibee). On the second boot (in the method described here), we unblock the instance (enable WinRM) so that the remote-exec can connect.
Here's my userdata/PowerShell script:
$StateFile = "C:\Temp\userdata_state.txt"
If(-Not (Test-Path -Path $StateFile))
{
# PHASE 1
# Close the instance to WinRM connections until instance is ready (probably already closed, but just in case)
Start-Process -FilePath "winrm" -ArgumentList "set winrm/config/service/auth @{Basic=`"false`"}" -Wait
# Set the admin password for WinRM connections
$Admin = [adsi]("WinNT://./Administrator, user")
$Admin.psbase.invoke("SetPassword", "${tfi_rm_pass}")
# Create state file so after reboot it will know
New-Item -Path $StateFile -ItemType "file" -Force
# Make it so that userdata will run again after reboot
$EC2SettingsFile="C:\Program Files\Amazon\Ec2ConfigService\Settings\Config.xml"
$Xml = [xml](Get-Content $EC2SettingsFile)
$XmlElement = $Xml.get_DocumentElement()
$XmlElementToModify = $XmlElement.Plugins
Foreach ($Element in $XmlElementToModify.Plugin)
{
If ($Element.name -eq "Ec2HandleUserData")
{
$Element.State="Enabled"
}
}
$Xml.Save($EC2SettingsFile)
# Download and install hotfix
# Download self-extractor
$DownloadUrl = "https://hotfixv4.trafficmanager.net/Windows%207/Windows%20Server2008%20R2%20SP1/sp2/Fix467402/7600/free/463984_intl_x64_zip.exe"
$HotfixDir = "C:\hotfix"
$HotfixFile = "$HotfixDir\KB2842230.exe"
mkdir $HotfixDir
(New-Object System.Net.WebClient).DownloadFile($DownloadUrl, $HotfixFile)
# Extract self-extractor
Add-Type -AssemblyName System.IO.Compression.FileSystem
[System.IO.Compression.ZipFile]::ExtractToDirectory($HotfixFile, $HotfixDir)
# Install - NOTE: wusa returns immediately, before install completes, so you must check process to see when it finishes
Get-Item "$HotfixDir\*.msu" | Foreach { wusa "$($_.FullName)" /quiet /norestart ; While (@(Get-Process wusa -ErrorAction SilentlyContinue).Count -ne 0) { Start-Sleep 3 } }
# Reboot
Restart-Computer
}
Else
{
# PHASE 2
# Open WinRM for remote-exec
Start-Process -FilePath "winrm" -ArgumentList "quickconfig -q"
Start-Process -FilePath "winrm" -ArgumentList "set winrm/config/service @{AllowUnencrypted=`"true`"}" -Wait
Start-Process -FilePath "winrm" -ArgumentList "set winrm/config/service/auth @{Basic=`"true`"}" -Wait
Start-Process -FilePath "winrm" -ArgumentList "set winrm/config @{MaxTimeoutms=`"1900000`"}"
}