PowerShell 5.1 Workflow not retrying

I'm trying to learn Workflows to process some long-running steps. I have to use PowerShell 5.1 for various reasons. I really like what I've learnt so far; Workflows can definitely reduce the amount of PS code you have to write. One thing I'm really stuck on is the retry parameters:
InlineScript {
    Write-Output "test"
    0/0
} -PSComputerName $computer -PSConfigurationName "testuser" -PSActionRetryCount 3 -PSActionRetryIntervalSec 5
Can anybody give me some pointers as to why this is failing, without my having to write additional code or use jobs? I'm trying to force an error, but the Workflow just seems to hang and then doesn't retry a second time.
Thanks!
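(For anyone hitting the same wall: a manual retry loop is one possible fallback while debugging. The sketch below is plain PowerShell, not the built-in Workflow retry activity; Invoke-WithRetry is a made-up helper name and the parameters are illustrative.)

```powershell
# Hypothetical manual-retry helper: re-runs a scriptblock until it
# succeeds or the retry count is exhausted.
function Invoke-WithRetry {
    param(
        [scriptblock]$Action,
        [int]$RetryCount = 3,
        [int]$RetryIntervalSec = 5
    )
    for ($attempt = 1; $attempt -le $RetryCount; $attempt++) {
        try {
            return & $Action
        } catch {
            Write-Warning "Attempt $attempt failed: $_"
            if ($attempt -lt $RetryCount) { Start-Sleep -Seconds $RetryIntervalSec }
        }
    }
    throw "All $RetryCount attempts failed."
}
```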

Related

How to make an Azure DevOps Pipeline wait for an external process to complete?

I am using Azure DevOps Server deployed on premises. I would like to achieve the following, using Azure DevOps Pipelines:
1. Pull, build and package a C# solution.
2. Call out to a proprietary deployment server (deployed in the same network as ADOS) to pick up the package and deploy it to a target machine.
3. Have the deployment server signal Azure DevOps that it's done deploying.
4. The original (or a dependent?) pipeline runs some tests against the newly deployed target.
I've not been able to find a suitable task in the documentation to get this done. Am I missing something? Can I write a custom task of my own to make the pipeline wait for an external signal?
To make the Pipeline launch and then wait for my external process, I chose the path of least resistance and coded it as a PowerShell Task.
The external process is controlled through a REST API. Launching is done via a POST request and then a loop keeps polling the API with a GET request to see if the work is done. If a certain amount of time has passed without the process finishing successfully, the loop is aborted and the task is failed.
Here's the gist of my code:
$TimeoutAfter = New-TimeSpan -Minutes 5
$WaitBetweenPolling = New-TimeSpan -Seconds 10

# Launch external process
Invoke-RestMethod ...

$Timeout = (Get-Date).Add($TimeoutAfter)
do
{
    # Poll external process to see if it is done
    $Result = Invoke-RestMethod ...
    # Use TotalSeconds, not Seconds, so intervals over a minute work correctly
    Start-Sleep -Seconds $WaitBetweenPolling.TotalSeconds
}
while (($Result -eq "IN_PROGRESS") -and ((Get-Date) -lt $Timeout))

if ($Result -ne "SUCCESS")
{
    exit 1
}
PS - It's a good idea to sprinkle meaningful Write-Host messages in the above code, to make it easier to debug when running in the pipeline.
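To make failures louder in the pipeline UI, those Write-Host messages can use Azure DevOps logging commands, which the agent parses out of the task's standard output. A minimal sketch; the Write-AdoError wrapper name is made up:

```powershell
function Write-AdoError {
    param([string]$Message)
    # Azure DevOps agents parse this logging command and attach
    # an error-level issue to the task's run record
    Write-Host "##vso[task.logissue type=error]$Message"
}

# Usage inside the polling script, after the loop exits:
# if ($Result -ne "SUCCESS") { Write-AdoError "Deployment failed: $Result"; exit 1 }
```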

Can you use a PowerShell script to create a PowerShell script?

So this may be an odd request, and maybe I'm going about this all wrong, but I have a unique situation: I have servers that are sometimes cloned, and I need to run a script that I created on the cloned servers. Due to the nature of the clones, they cannot be connected to a network.
Currently I am manually putting the generic script on each server before cloning and then running the script on the cloned server.
What I would like to do is have a script that runs, gathers all the information (say, installed programs as an example), and generates a custom version of my current script on the servers before they are cloned.
I have both the PowerShell script that gets the server information and the generic one that makes the changes to the clone, but I have not found a way to merge the two, or any documentation, so I don't know if I am hitting a limitation here.
Edit for more explanation and examples. I'm doing this from my phone at the moment, so I don't have an example I can post.
Currently I have a script with a set number of applications to uninstall, registry keys to remove, services to stop, etc. In another application I have a list of all the software that we have for each server, and I can pull that data per server. What I need to do is pull the data for each server and place a script on each server that will uninstall just the programs for that server.
Currently the script has to run through every potential piece of software, try to uninstall it, and then check the other application to see if there are any additional programs that need to be uninstalled.
Hope this extra info helps.
Thanks.
Stop thinking of it as code.
Use script 1 to export blocks of text into a new file. For example, you might have a configuration that says all Dell servers must have this line of code run:
Set-DELL -attribute1 unmanaged
where on HP, the script would have been
Set-HP -attribute1 unmanaged
on web servers, you want:
set-web -active yes
where if it's not a web server, you want nothing. So your parent script code would look like:
$Dell = "Set-DELL -attribute1 unmanaged"
$HP = "Set-HP -attribute1 unmanaged"
$web = "set-web -active yes"

# Note the parentheses: the cmdlet must run first, then its result is compared
if ((Get-ServerMake) -eq "Dell")
{
    $Dell | Out-File Child.ps1 -Append
}
if ((Get-ServerMake) -eq "HP")
{
    $HP | Out-File Child.ps1 -Append
}
if ((Get-WebServer) -eq $true)
{
    $web | Out-File Child.ps1 -Append
}
The result is a customized script for the specific server, Child.ps1.
Now you can take this and run with it. You could, say, add functionality to the child script like "Is it an AD controller?", etc.
However, you might be better off having all of this in a single script, and just blocking off the sections that don't apply inside an if statement, for example.
I'm still not totally sure I understand what you're asking. If I've missed the mark, tell me how, and I'll tell you how to tweak this better. (And hopefully it's obvious that the Get-Whatever cmdlets are sample code. I don't expect that to be what you're using to determine a computer's make/model/etc.)
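Applied to the uninstall scenario from the question, the same export idea can be driven by a loop over per-server data. A sketch under assumptions: the app list would really come from your inventory application, and Uninstall-App stands in for whatever uninstall command the child script actually uses:

```powershell
# List of apps to remove on this server
# (in reality, pulled from the inventory application per server)
$appsToRemove = @('Contoso Agent', 'Fabrikam Toolbar')

$childLines = @('# Auto-generated uninstall script for this server')
foreach ($app in $appsToRemove) {
    # Quote the name so the generated line survives spaces in app names;
    # Uninstall-App is a hypothetical placeholder command
    $childLines += "Uninstall-App -Name '$app'"
}

# Write the customized child script next to the generic one
$childLines | Out-File -FilePath .\Child.ps1 -Encoding ascii
```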

Posh-SSH and Cisco Gear

I've been working on some scripting with PowerShell and the Posh-SSH module. I'm connecting mainly to Cisco gear, but I have some other network gear as well. My issue is that I can connect to the gear just fine, but my commands don't seem to run. I've tried Invoke-SSHCommand as well as creating a New-SSHShellStream. What is odd is that if I open PowerShell and step through each command manually, it appears to work just fine, but for some reason running it in a script doesn't produce the results I'm looking for.
I have found that plink.exe works just fine, but I'd really rather do all of this from PowerShell if possible. Is there something I'm missing with these network devices that might be different from a Linux server?
Code:
New-SSHSession -ComputerName $fwIp -Credential (Get-Credential) -Verbose
$session = Get-SSHSession -Index 0
$stream = $session.Session.CreateShellStream("dumb", 0, 0, 0, 0, 1000)
$stream.Write("show ver")
$stream.Read()
What I get back:
Type help or '?' for a list of available commands.
FW/Admin>
So, after working with you in the comments, it looks like there were two problems. The first was that the SSH shell supported by the SSH.NET library does not seem to support partial commands; swapping "show ver" for "show version" corrected that. In addition, the command was taking a bit longer to run than the script waited before calling Read() on the stream, which can be fixed by adding a Start-Sleep -Seconds 5 between the Write() and the Read(). If you are planning to use this with commands that may take longer, or where you are not sure how long they will take, you may want to look into some additional handling that checks whether the command is done, by polling the DataAvailable property or creating a listener for the DataReceived event. But if you're keeping things simple, a basic timer will work great for you.
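A sketch of that additional handling: polling the stream's DataAvailable property with a timeout instead of a fixed sleep. Read-ShellOutput is a made-up helper name, and the "one extra quiet poll" heuristic is an assumption, not part of the Posh-SSH API:

```powershell
# Read from a ShellStream until output has arrived and the stream goes
# quiet, or until the timeout expires.
function Read-ShellOutput {
    param($Stream, [int]$TimeoutSec = 10, [int]$PollMs = 250)

    $deadline = (Get-Date).AddSeconds($TimeoutSec)
    $buffer = New-Object System.Text.StringBuilder

    while ((Get-Date) -lt $deadline) {
        if ($Stream.DataAvailable) {
            # Drain whatever has arrived so far
            [void]$buffer.Append($Stream.Read())
        } elseif ($buffer.Length -gt 0) {
            # Output has started and the stream has gone quiet; allow one
            # more poll interval before concluding the command is done
            Start-Sleep -Milliseconds $PollMs
            if (-not $Stream.DataAvailable) { break }
        } else {
            Start-Sleep -Milliseconds $PollMs
        }
    }
    $buffer.ToString()
}
```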

PowerShell ScheduledTask to report error

So if we use the following to create a scheduled task in PowerShell
Register-ScheduledJob -Name TestJob3 -ScriptBlock {
    throw "This is an ugly error that should be visible"
} -Credential $cred
...and then run this task.
The task scheduler reports success.
So we can slightly improve on this by changing the scriptblock to something like the following
try {
    throw "This is an ugly error that should be visible"
} catch {
    exit -1
}
Now, if you look very carefully, you can see that the task scheduler reports a last run result of something other than 0 (0xFFFFFFFF, i.e. -1).
However, all we have in the history tab is a bunch of information-level logging saying that the task and action have completed.
What I would really like is some big, loud, exception-level logging in the history view, preferably with the nice descriptive error that was thrown.
Anyone know how this can be achieved?
Unfortunately, it looks like Task Scheduler doesn't respect exit codes; as far as it is concerned, running the program and seeing it exit is a success. So logging your exceptions will probably need to be done either manually or through Write-EventLog.
Out of curiosity, since this is a ScheduledJob instead of a ScheduledTask, does running Get-ScheduledJob show that the job completed without errors also?
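The manual route might look like the sketch below: catch the exception and write it to the Application event log before exiting. "MyScheduledJob" is an arbitrary source name, and New-EventLog needs to run elevated once per machine (this is Windows-only, Windows PowerShell code):

```powershell
try {
    throw "This is an ugly error that should be visible"
} catch {
    # Register the event source on first use (requires admin rights once)
    if (-not [System.Diagnostics.EventLog]::SourceExists("MyScheduledJob")) {
        New-EventLog -LogName Application -Source "MyScheduledJob"
    }
    # Record the real error text where it can actually be seen
    Write-EventLog -LogName Application -Source "MyScheduledJob" `
        -EntryType Error -EventId 1 -Message $_.Exception.Message
    exit 1
}
```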

How to resume PowerShell script after system reboot?

I am using the PowerShell script below to delete SharePoint alerts.
foreach ($site in Get-SPSite -Limit All)
{
    "Site Collection $site"
    foreach ($web in $site.AllWebs)
    {
        "  Web $web"
        $c = $web.Alerts.Count
        "  Deleting $c alerts"
        for ($i = $c - 1; $i -ge 0; $i--) { $web.Alerts.Delete($i) }
    }
}
There are around a million alerts in each of the Dev, Test and UAT environments. It takes many hours to delete all the alerts in one go, and as the servers automatically get restarted periodically, the script never runs to completion.
I am aware that to resume PowerShell scripts after a reboot we can use PowerShell Workflow with Checkpoint-Workflow, but I'm not sure where to place the checkpoints and PSPersist.
Need help to resume deleting of alerts in the above script after system reboot.
Update: After trying to implement it, I realized that SharePoint PowerShell cmdlets cannot be coupled with PowerShell Workflow. It doesn't allow
Add-PSSnapin "Microsoft.SharePoint.PowerShell"
to be added to workflows
Workflow SPAlerts
{
    # None of these tweaks worked:
    InlineScript { Add-PSSnapin "Microsoft.SharePoint.PowerShell" }
    Invoke-Command -ScriptBlock { Add-PSSnapin "Microsoft.SharePoint.PowerShell" }
    Invoke-Expression "......"
}
The MSDN documentation states:
You can place checkpoints anywhere in a workflow, including before and after each command or expression, or after each activity in a workflow. Windows PowerShell Workflow allows you to do so.
... Add a checkpoint after each significant part of the workflow completes; a part that you do not want to repeat if the workflow is interrupted.
... When the workflow definition is complete, use the testing process to refine your checkpoints. Add and remove (comment out) checkpoints to achieve both a reasonable running time and a reasonable recovery time.
Looking at your code, I would place the checkpoint right after "Site Collection $site", provided deleting the alerts for a given site takes a reasonable time on average. If there are just a few sites, each containing tons of alerts, then I would place it at the start of the inner foreach instead.
I would definitely not place it inside the worker loop that deletes the individual alerts.
I would also suggest you look at the foreach -Parallel capability of workflows to make the deletion parallel. That way even the last site should get its turn, even if the server is restarted often.
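Putting that advice together, the skeleton might look like the sketch below. It shows structure only: Checkpoint-Workflow after each site, with the SharePoint work pushed out to a child powershell.exe process because (per the question's update) the snap-in cannot load inside the workflow itself. The inner deletion command is deliberately elided:

```powershell
Workflow Clear-SPAlerts
{
    # Collect the site URLs in a normal (non-workflow) process,
    # since Add-PSSnapin does not work inside the workflow itself
    $siteUrls = InlineScript {
        & powershell.exe -Command {
            Add-PSSnapin Microsoft.SharePoint.PowerShell
            (Get-SPSite -Limit All).Url
        }
    }

    foreach ($url in $siteUrls)
    {
        InlineScript {
            # Hypothetical: delete all alerts for one site in a child process
            & powershell.exe -Command "..."
        }
        # Persist progress so a reboot resumes at the next site,
        # not from the beginning
        Checkpoint-Workflow
    }
}
```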