Parallelizing PowerShell script execution

I have 8 PowerShell scripts. A few of them have dependencies, which means they can't be executed in parallel; they have to run one after another.
Some of the scripts have no dependencies and can be executed in parallel.
The dependencies are as follows:
Powershell scripts 1, 2, and 3 depend on nothing else
Powershell script 4 depends on Powershell script 1
Powershell script 5 depends on Powershell scripts 1, 2, and 3
Powershell script 6 depends on Powershell scripts 3 and 4
Powershell script 7 depends on Powershell scripts 5 and 6
Powershell script 8 depends on Powershell script 5
I know that hard-coding the dependencies manually is possible, but 10 more scripts may be added later, and dependencies among them may be added as well.
Has anyone achieved parallelism by discovering the dependencies? If so, please share how to proceed.

You should look at PowerShell 3.0 workflows; they offer the features you need for this requirement. Something like this:
workflow Install-MyApp {
    param ([string[]]$computername)
    foreach -parallel ($computer in $computername) {
        "Installing MyApp on $computer"
        #Code for invoking installer here
        #This can take as long as 30mins and may reboot a couple of times
    }
}
workflow Install-MyApp2 {
    param ([string[]]$computername)
    foreach -parallel ($computer in $computername) {
        "Installing MyApp2 on $computer"
        #Code for invoking installer here
        #This can take as long as 30mins!
    }
}
workflow New-SPFarm {
    param ([string[]]$computername)
    Sequence {
        Parallel {
            Install-MyApp2 -computername "Server2","Server3"
            Install-MyApp -computername "Server1","Server4","Server5"
        }
        Sequence {
            #This activity can happen only after the set of activities in the above parallel block are complete
            "Configuring First Server in the Farm [Server1]"
            #The following foreach should take place only after the above activity is complete and that is why we have it in a sequence
            foreach -parallel ($computer in $computername) {
                "Configuring SharePoint on $computer"
            }
        }
    }
}
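Applied to the dependencies in the question, the same Sequence/Parallel layering might look roughly like the sketch below. The C:\Scripts paths and script file names are assumptions, and the layering is still hard-coded from the known dependency graph rather than discovered automatically:
workflow Invoke-ScriptChain {
    Sequence {
        Parallel {
            InlineScript { & 'C:\Scripts\script1.ps1' }
            InlineScript { & 'C:\Scripts\script2.ps1' }
            InlineScript { & 'C:\Scripts\script3.ps1' }
        }
        Parallel {
            InlineScript { & 'C:\Scripts\script4.ps1' }   # needs 1
            InlineScript { & 'C:\Scripts\script5.ps1' }   # needs 1, 2, 3
        }
        Parallel {
            InlineScript { & 'C:\Scripts\script6.ps1' }   # needs 3, 4
            InlineScript { & 'C:\Scripts\script8.ps1' }   # needs 5
        }
        InlineScript { & 'C:\Scripts\script7.ps1' }       # needs 5, 6
    }
}
Invoke-ScriptChain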

How familiar with parallel programming in general are you? Have you heard of and used the concept of mutual exclusion? The concept in general is to use some kind of messaging/locking mechanism to protect a shared resource among different parallel threads.
In your case, you're making the dividing lines the scripts themselves, which I think makes this much simpler than most of the techniques outlined in the Wikipedia article on mutual exclusion. Would this simple template work for what you're looking for?
Define a folder in the local file system. This location will be known to all scripts (a default parameter).
Before running any of the scripts, make sure any files in that directory are deleted.
For each script, as the very last step of its execution, it should write a file into the shared directory with its script name as the file name. So script1.ps1 would create a file named script1, for example.
Any script that has a dependency on another script defines those dependencies in terms of the file names of the scripts. If script3 is dependent on script1 and script2, this is defined as a dependency parameter in script3.
All scripts with dependencies run a function that checks whether the files for the scripts they depend on exist. If they do, the script proceeds with its execution; otherwise it pauses until they are complete.
All scripts get kicked off simultaneously by a master script / batch file. All of the scripts are run as PowerShell jobs so that the OS runs them in parallel. Most of the scripts will start up, see they have dependencies, and then wait patiently for those to be resolved before continuing with the actual execution of the script body. (A minimal sketch of this pattern follows.)
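Here is a minimal sketch of a template each script could start and end with. The sync folder path, dependency names, polling interval, and use of $PSCommandPath (PowerShell 3.0+) are assumptions for illustration, not part of the original answer:
param(
    [string]$SyncDir     = 'C:\ScriptSync',
    [string[]]$DependsOn = @()             # e.g. 'script1','script2' for script3.ps1
)
# Block until every dependency has dropped its completion marker file.
foreach ($dep in $DependsOn) {
    while (-not (Test-Path (Join-Path $SyncDir $dep))) {
        Start-Sleep -Seconds 5
    }
}
# ... actual work of this script goes here ...
# Very last step: drop this script's own marker so dependents can proceed.
$marker = [IO.Path]::GetFileNameWithoutExtension($PSCommandPath)
New-Item -ItemType File -Path (Join-Path $SyncDir $marker) -Force | Out-Null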
The good news is that this allows for flexible changing of dependencies. Every script writes a file, making no assumption about whether anyone else is waiting for it. Changing the dependencies of a particular script is a simple one-line change or a change of an input parameter.
This is definitely not a perfect solution, though. For instance, what happens if a script fails (or your script can exit through multiple code paths and you forget to write the file in one of them)? This could cause a deadlock where no dependent script ever gets kicked off. The other downside is the busy wait of sleeping or spinning until the right files get created; this could be corrected with an event-based approach in which the OS watches the directory for changes.
Hope this helps and isn't all garbage.

You'll just have to order your calls appropriately. There's nothing built-in that will handle the dependencies for you.
Run 1, 2, and 3 at the same time with Start-Job.
Wait for them to finish: Get-Job -State Running | Wait-Job
Run 4 and 5 at the same time with Start-Job.
Wait for them to finish: Get-Job -State Running | Wait-Job
Run 6 and wait for it.
Run 7 and 8 at the same time with Start-Job. (A sketch of this sequence follows.)
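A sketch of that batching; the C:\Scripts path and script file names are assumptions:
1..3 | ForEach-Object { Start-Job -FilePath "C:\Scripts\script$_.ps1" }
Get-Job -State Running | Wait-Job | Out-Null

4, 5 | ForEach-Object { Start-Job -FilePath "C:\Scripts\script$_.ps1" }
Get-Job -State Running | Wait-Job | Out-Null

Start-Job -FilePath 'C:\Scripts\script6.ps1' | Wait-Job | Out-Null

7, 8 | ForEach-Object { Start-Job -FilePath "C:\Scripts\script$_.ps1" }
Get-Job -State Running | Wait-Job | Out-Null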


Powershell takes minutes to load script / show prompt [duplicate]

I have slow PowerShell console startup times (always more than a 5-second wait) and am hoping for advice on troubleshooting steps to find out where the bottlenecks might be.
I have read that for running scripts, -NoProfile is important to prevent modules etc. from loading, but how, in general, should we approach finding out what is slowing things down? I don't have many modules installed, and I know that since PowerShell 3.0 modules are only referenced at startup rather than fully loaded (a module is fully loaded only when one of its functions is invoked), so I just can't understand why it takes 5+ seconds to start a bare console (my $profile is also empty).
Any advice on steps to debug the console startup process would be appreciated. Also, are there Microsoft or third-party tools for profiling the various steps of console startup to look for bottlenecks?
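For reference, the simplest first check I know of is to time a bare startup against a normal one (a sketch, assuming powershell.exe is on the PATH; substitute pwsh.exe for PowerShell 7+):
# Compare startup cost with and without the profile / policy machinery.
Measure-Command { powershell.exe -NoProfile -Command exit } | Select-Object TotalMilliseconds
Measure-Command { powershell.exe -Command exit } | Select-Object TotalMilliseconds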
When PowerShell becomes slow at startup, an update of the .NET Framework might be the cause.
To speed it up again, run ngen.exe on PowerShell's assemblies.
ngen.exe generates native images for an assembly and its dependencies and installs them in the native image cache.
Run this as Administrator:
$env:PATH = [Runtime.InteropServices.RuntimeEnvironment]::GetRuntimeDirectory()
[AppDomain]::CurrentDomain.GetAssemblies() | ForEach-Object {
    $path = $_.Location
    if ($path) {
        $name = Split-Path $path -Leaf
        Write-Host -ForegroundColor Yellow "`r`nRunning ngen.exe on '$name'"
        ngen.exe install $path /nologo
    }
}
Hope that helps
Step 1: Stop using PowerShell.
Now, seriously, something that needs ~13 seconds (YMMV) to launch on a quad-core i7 CPU off an SSD is an abomination of software architecture.
But yes, I hear you, "no viable alternative" etc...
... but if forced, bribed or blackmailed to still use it, check if your Windows has DNS cache service enabled.
For me, with DNS cache disabled and powershell executable firewalled, the built-in 5.1.19041.906 version starts quickly, but the new pwsh 7.1.4 would take around 13 seconds to get responsive to keyboard input under the same circumstances. It's so desperate to call home that it would just synchronously wait for some network timeout while blocking all user input, as if threads were a thing for the weak.
My resolution was to stick with the olden powershell 5.
My work computer stored the main profile on a remote server. Another minor problem was that it imported duplicate modules from 4 different profile.ps1 files.
Use the following commands to see where your profiles and modules are stored. Delete the unnecessary profile.ps1 and move all your modules into one directory.
echo $env:PSModulePath
$profile | select *
My loading time was reduced from 21000ms to 1300ms.
Found this solution when I googled having the same problem, but in 2022. Unfortunately this did not fix my issue.
Our systems have a group policy security requirement to "Turn on PowerShell Transcription". The policy requires that we specify "the Transcript output directory to point to a Central Log Server or another secure location". The server name changed and no one updated the policy. As soon as I updated the GPO with the new location, PowerShell opened instantly again.
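If you suspect the same policy, a quick way to see what the transcription policy currently points at (the path below is the standard policy registry location; adjust if your environment writes it elsewhere):
Get-ItemProperty 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\Transcription' |
    Select-Object EnableTranscripting, OutputDirectory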
Press Windows+R
Type %temp% and hit Enter
Press Ctrl+A to select everything, then Shift+Del to delete it
That should do it

PowerShell Self-Updating Script

We have a PowerShell script to continually monitor a folder for new JSON files and upload them to Azure. We have this script saved on a shared folder so that multiple people can run this script simultaneously for redundancy. Each person's computer has a scheduled task to run it at login so that the script is always running.
I wanted to update the script, but then I would have had to ask each person to stop their running script and restart it. This is especially troublesome since we eventually want to run this script in "hidden" mode so that no one accidentally closes out the window.
So I wondered if I could create a script that updates itself automatically. I came up with the code below, and when this script is running and a new version of it is saved, I expected the running PowerShell window to close when it hit the Exit command and then a new window to open and run the new version of the script. However, that didn't happen.
It continues along without a blip. It doesn't close the current window, and it even keeps the output from old versions of the script on the screen. It's as if PowerShell doesn't really Exit; it just figures out what's happening and keeps going with the new version of the script. I'm wondering why this is happening. I like it, I just don't understand it.
#Place at top of script
$lastWriteTimeOfThisScriptWhenItFirstStarted = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime

#Continuous loop to keep this script running
While($true) {
    Start-Sleep 3 #seconds

    #Run this script, change the text below, and save this script
    #and the PowerShell window stays open and starts running the new version without a hitch
    "Hi"

    $lastWriteTimeOfThisScriptNow = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime
    if($lastWriteTimeOfThisScriptWhenItFirstStarted -ne $lastWriteTimeOfThisScriptNow) {
        . $PSCommandPath
        Exit
    }
}
Interesting Side Note
I decided to see what would happen if my computer lost connection to the shared folder where the script was running from. It continues to run, but presents an error message every 3 seconds as expected. But, it will often revert back to an older version of the script when the network connection is restored.
So if I change "Hi" to "Hello" in the script and save it, "Hello" starts appearing as expected. If I unplug my network cable for a while, I soon get error messages as expected. But then when I plug the cable back in, the script will often start outputting "Hi" again even though the newly saved version has "Hello" in it. I guess this is a negative side-effect of the fact that the script never truly exits when it hits the Exit command.
. $PSCommandPath is a blocking (synchronous) call, which means that the Exit on the next line isn't executed until the dot-sourced script has itself exited.
Given that the dot-sourced script is your own script, which never exits (even though it seemingly does), the Exit statement is never reached (assuming the new version of the script keeps the same fundamental while-loop logic).
While this approach works in principle, there are caveats:
You're using ., the "dot-sourcing" operator, which means the script's new content is loaded into the current scope (and generally you always remain in the same process, as you always do when you invoke a *.ps1 file, whether with . or (the implied) regular call operator, &).
While variables / functions / aliases from the new script then replace the old ones in the current scope, old definitions that you've since removed from the new version of the script would linger and potentially cause unwanted side-effects.
As you observe yourself, your self-updating mechanism will break if the new script contains a syntax error that causes it to exit, because the Exit statement then is reached, and nothing is left running.
That said, you could use that as a mechanism to detect failure to invoke the new version:
Use try { . $PSCommandPath } catch { Write-Error $_ } instead of just . $PSCommandPath,
and instead of the Exit command, issue a warning (or do whatever is appropriate to alert someone of the failure) and then keep looping (continue), which means the old script stays in effect until a valid new one is found; see the sketch below.
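A sketch of that revised loop body, reusing the variable names from the question:
while ($true) {
    Start-Sleep 3
    $lastWriteTimeOfThisScriptNow = (Get-Item $PSCommandPath).LastWriteTime
    if ($lastWriteTimeOfThisScriptWhenItFirstStarted -ne $lastWriteTimeOfThisScriptNow) {
        try {
            . $PSCommandPath   # load and run the new version in the current scope
            Exit               # only reached if the new version itself returns/exits
        } catch {
            Write-Warning "Failed to load new version: $_"
            continue           # keep running the old version until a valid one appears
        }
    }
}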
Even with the above, the fundamental constraint of this approach is that you may exceed the maximum call-recursion depth. The nested . invocations pile up, and when the nesting limit is reached, you won't
be able to perform another, and you're stuck in a loop of futile retries.
That said, as of Windows PowerShell v5.1 this limit appears to be around 4900 nested calls, so if you never expect the script to be updated that frequently while a given user session is active (a reboot / logoff would start over), this may not be a concern.
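If nesting depth ever does become a concern, one hedged workaround (not part of the original answer; the threshold and -NoProfile switch are arbitrary choices) is to check the call-stack depth and hand off to a fresh process:
# Bail out to a fresh process once repeated dot-sourcing has nested deeply.
if ((Get-PSCallStack).Count -gt 100) {
    Start-Process powershell.exe -ArgumentList "-NoProfile -File `"$PSCommandPath`""
    exit   # hand off to the fresh process and stop the old chain
}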
Alternative approach:
A more robust approach would be to create a separate watchdog script whose sole purpose is to monitor for new versions, kill the old running script and start the new one, with an alert mechanism for when starting the new script fails.
Another option is to have the main script run in "stages", where it runs a command based on the name of the highest-revision script in a folder. I think mklement0's watchdog is a genius idea, though.
What I'm referring to is doing what you do, but using variables as your command, and updating those variables with the highest-numbered script name. This way you just drop 10.ps1 into the folder and it will ignore 9.ps1. And the function in that script would be named mainfunction10, etc.
Something like
$command = ((Get-ChildItem C:\path\to\scriptfolder\).BaseName)[-1]
& "C:\path\to\scriptfolder\$command.ps1"
The files would have to be named alphabetically from oldest to newest. Otherwise you'll have to sort-object by date.
$command = ((Get-ChildItem C:\path\to\scriptfolder\ | Sort-Object -Property LastWriteTime).BaseName)[-1]
& "C:\path\to\scriptfolder\$command.ps1"
Or dot-source it (.) instead of invoking it as a command, and then have the later code call the functions by name, like function$command, where the function name incorporates the name of the script.
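Roughly, with illustrative names only (mainfunction10 living inside 10.ps1, as described):
. "C:\path\to\scriptfolder\$command.ps1"   # dot-source the newest script to load its functions
& "mainfunction$command"                   # e.g. calls mainfunction10 when $command is '10'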
I still like the watchdog idea more.
The watchdog would look sort of like
While ($true) {
    $new = ((Get-ChildItem c:\path\to\scriptfolder\ | Sort-Object -Property LastWriteTime).FullName)[-1]
    If ($old -ne $new) {
        Kill $old
        Sleep 10
        & $new
    }
    $old = $new
    Sleep 600
}
Mind you, I'm not certain how the scripts are run, and you may need to seek out instances of PowerShell based on the command used to start them.
$kill = (Get-WmiObject Win32_Process | Where-Object { $_.CommandLine -like "*$command*" }).ProcessId
Kill $kill
Would replace kill $old
This command is an educated guess and untested.
Other tricks would be running the main script from the watchdog as a job, getting the job ID, and then checking for file changes. If a new file comes in, the watchdog could kill the job ID and repeat the whole process.
You could also just have the script end and have a Windows scheduled task rerun it every 10 minutes. That way whatever the current script is just runs every ten minutes, though this is more intense per startup.
Instead of exit you could use break to kill the loop, and the script will exit naturally.
You can use Test-Connection to check for the server, but if it's every 3 seconds, that's a lot of pings from a lot of computers.

How can I run a PowerShell script after reboot?

I have a PowerShell script that tails specific logs. If there is no update to the logs within a specific time span, an alert is sent to Nagios (as this indicates that the service is no longer running).
The script works great when run manually, but my issue is that I want it to load on reboot. I've tried creating a scheduled task that repeats itself every 5 minutes using the arguments '-noexit -file C:\script.ps1'. The problem is that my script doesn't actually work when run as a scheduled task.
The execution policy is set to Unrestricted, so the script runs, but the code doesn't execute and work like it does when run manually.
FWIW, the code is:
function Write-EventlogCustom($msg) {
    Write-EventLog System -source System -eventid 12345 -message $msg
}
Get-Content -Path C:\test.log -Wait | % { Write-EventlogCustom $_ }
So if I update test.log while the PowerShell script is running as a scheduled task, the event log doesn't get updated. However, when I run this script manually and update test.log, the entry does appear in Event Viewer.
I'm hoping that a second set of eyes might spot something I've missed.
As @Tim Ferrill mentioned, I needed to run the process with Task Scheduler's 'Run with highest privileges' setting. This resolved the issue.
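For reference, a sketch of registering such a task with highest privileges from PowerShell (this assumes the ScheduledTasks module, i.e. Windows 8 / Server 2012 or later; the task name and script path are placeholders):
$action    = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\script.ps1'
$trigger   = New-ScheduledTaskTrigger -AtStartup
$principal = New-ScheduledTaskPrincipal -UserId "$env:USERDOMAIN\$env:USERNAME" -LogonType S4U -RunLevel Highest
Register-ScheduledTask -TaskName 'TailLogs' -Action $action -Trigger $trigger -Principal $principal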

How can I update the current script before running it?

We have some PowerShell scripts living on network share A, and the latest versions of those scripts living on read-only network share B. The read-only share is the output of our build system, and it has the build number in the path. Partly out of habit, partly because the scripts must create files on disk, but mostly because the paths are predictable, we copy PowerShell scripts to network share A before executing them. The trouble is that we don't always copy the scripts to network share A, so occasionally those scripts are out of date.
I'm trying to create a script that will update the PowerShell scripts on network share A (by copying the latest versions from share B), and then execute those latest scripts. Right now, my idea is to have a tiny script that grabs the latest script from share B, copies it to share A, and then executes that script on share A.
Is it possible to have a script update itself? I.e., instead of having two scripts, can I have one script (that lives on share A) that copies a newer version itself from share B to share A, then restarts execution of itself? (I would put in some logic about file-creation date so that on first execution, it would update itself, and on second execution it would run the actual meat of the script.)
Yes, you can update the script you're running, then execute it again. (Just make sure to exit the first script after updating.) Here's some sample code I created:
Write-Host "Starting script"
if ($(Get-Item G:\selfUpdater2.ps1).CreationTimeUtc -gt $(Get-Item G:\selfUpdater.ps1).CreationTimeUtc) {
    Copy-Item G:\selfUpdater2.ps1 G:\selfUpdater.ps1
    $(Get-Item G:\selfUpdater.ps1).CreationTimeUtc = [DateTime]::UtcNow
    & G:\selfUpdater.ps1
    exit
}
Write-Host "Continuing original script; will not get here if we updated."
Note that, if you have parameters to pass around, you'll have to pass them to the target script. Since your updated script may well have more or fewer parameters than your current script (some bound, some unbound by the current script), you'll need to iterate through both $script:MyInvocation.BoundParameters and $script:MyInvocation.UnboundArguments to pick up all of them and pass them on.
(Personally, I've had more luck with random-parameter-passing using Invoke-Expression ".\scriptName.ps1 $stringOfArguments" than with &.\scriptName.ps1 $arguments, but your mileage may vary - or you may know more PowerShell than I do. If you use Invoke-Expression, then be sure to re-add quotes around any parameters that have spaces in them.)
There's one drawback: if a mandatory script parameter is removed in a future version of the script, you need to run the script at least once with the no-longer-mandatory parameter before it will update itself and allow you to drop the parameter.
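A rough, untested sketch of forwarding both bound parameters and unbound arguments to the updated copy, as described above (assumes PowerShell 3.0+ for $PSCommandPath; the variable names are illustrative):
$bound   = $script:MyInvocation.BoundParameters    # named parameters that were passed in
$unbound = $script:MyInvocation.UnboundArguments   # any extra, unbound arguments
& $PSCommandPath @bound @unbound
exit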
Here's a function I put together. Pass it the path of the file that might hold a newer release; it will update the script and then re-run it with any arguments handed to the original script.
function Update-Myself
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory = $true,
                   Position = 0)]
        [string]$SourcePath
    )
    #check that the source file exists
    if (Test-Path $SourcePath)
    {
        #The path of THIS script
        $CurrentScript = $MyInvocation.ScriptName
        if (!($SourcePath -eq $CurrentScript))
        {
            if ($(Get-Item $SourcePath).LastWriteTimeUtc -gt $(Get-Item $CurrentScript).LastWriteTimeUtc)
            {
                Write-Host "Updating..."
                Copy-Item $SourcePath $CurrentScript
                #If the script was updated, run it with the original parameters
                & $CurrentScript $script:args
                exit
            }
        }
    }
    Write-Host "No update required"
}
Update-Myself \\path\to\newest\release\of\file.ps1

Starting a process remotely in Powershell, getting %ERRORLEVEL% in Windows

A bit of background:
I'm trying to start some performance counters remotely at the start of a test, then stop them at the end of the test. I'm doing this from an automated test framework on a Win2003 machine; the test framework executes commands without launching a console, and some of the systems under test run Win2008. I've written scripts to choose the performance counters based on the roles assigned to the servers.
My problem(s):
logman can't start or stop counters on machines that run a later version of the OS.
psexec can be used to run logman remotely, but psexec likes to hang intermittently when run from the test framework. It runs fine manually from the command line. I'm guessing that this is because the calling process doesn't provide a console, or some similar awkwardness. There's not much I can do about this (GRRRR)
I wrote a PowerShell script that executes logman remotely using WMI's Win32_Process and called it from a batch script, and this works fine. However, the test framework decides pass and fail scenarios based on %ERRORLEVEL% and the content of stderr, and WMI's Win32_Process gives me access to neither. So if the counters fail to start, the test will plough on anyway and waste everyone's time.
I'm looking for a solution that will allow me to execute a program on a remote machine and check the program's return code and/or pipe its stderr back to the caller. For reasons of simplicity, it needs to be written using tools that are available on a vanilla Win2k3 box. I'd really prefer not to use a convoluted collection of scripts that dump things into log files and then read them back out again.
Has anyone had a similar problem, and solved it, or at least have a suggestion?
"For reasons of simplicity, it needs to be written using tools that are available on a vanilla Win2k3 box. I'd really prefer not to use a convoluted collection of scripts that dump things into log files and then read them back out again."
PowerShell isn't a native tool in Windows 2003. Do you still want to tag this question PowerShell and look for an answer? Anyway, I will give you a PowerShell answer.
$proc = Invoke-WmiMethod -ComputerName Test -Class Win32_Process -Name Create -ArgumentList "Notepad.exe"
Register-WmiEvent -ComputerName test -Query "Select * from Win32_ProcessStopTrace Where ProcessID=$($proc.ProcessId)" -Action { Write-Host "Process ExitCode: $($event.SourceEventArgs.NewEvent.ExitStatus)" }
This requires PowerShell 2.0 on the system where you are running these scripts. Your Windows 2003 remote system does not really need PowerShell.
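A hedged variation on the same idea (an untested sketch): register the event with a source identifier, wait for it synchronously, and surface the remote exit code as the local process exit code so a calling batch file sees it in %ERRORLEVEL%. The computer name, command line, and timeout are placeholders:
$proc = Invoke-WmiMethod -ComputerName Test -Class Win32_Process -Name Create -ArgumentList 'logman.exe start MyCounters'
Register-WmiEvent -ComputerName Test -SourceIdentifier ProcStop `
    -Query "Select * from Win32_ProcessStopTrace Where ProcessID=$($proc.ProcessId)"
$evt = Wait-Event -SourceIdentifier ProcStop -Timeout 300
Unregister-Event -SourceIdentifier ProcStop
if (-not $evt) { exit 1 }   # timed out waiting for the remote process to finish
exit $evt.SourceEventArgs.NewEvent.ExitStatus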
PS: If you need a crash course on WMI and PowerShell, do read my eGuide: WMI Query Language via PowerShell