I have a script that kicks off a remote job via Invoke-Command -AsJob on 20 servers to run a legacy command-line application. It might take a few hours to run, so I used a loop that keeps going until there are 0 instances of (Get-Job -State Running). Inside the loop I put Write-Progress to inform the user how many servers are still running this process.
All works well, but now I need to add a small feature to allow users running this script to stop it and kill all running remote jobs.
I know how to kill them, but I have no idea how to let the user send feedback back to my loop. Originally I was thinking about popping up a GUI button using an async runspace (as described here: http://www.vistax64.com/powershell/16998-howto-create-windows-form-without-stopping-script-processing.html), but it looks like I cannot execute functions on my script's thread from the UI thread.
Is there a way to allow my users to stop the script and kill remote processes?
Here is an example of breaking out of a loop.
Write-Host "Press 'q' to quit"
while(1){
if([console]::KeyAvailable){
$key = [console]::ReadKey($true)
if($key.Key -eq 'Q') {break}
}
}
Just place the if statement into your loop.
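For your scenario, that check could be folded into the existing job-monitoring loop, roughly like this (untested sketch; the progress text and the 2-second interval are placeholders for whatever your script already does, the [console]::KeyAvailable check only works in the console host rather than the ISE, and Stop-Job stops the remote pipeline, so depending on how the legacy application was launched you may also need to kill its process on each server):
Write-Host "Press 'q' to abort and stop the remote jobs"
while (($running = @(Get-Job -State Running)).Count -gt 0) {
    Write-Progress -Activity "Waiting for remote jobs" `
        -Status "$($running.Count) server(s) still running"

    if ([console]::KeyAvailable) {
        $key = [console]::ReadKey($true)
        if ($key.Key -eq 'Q') {
            Get-Job -State Running | Stop-Job   # stop the remote jobs
            break
        }
    }
    Start-Sleep -Seconds 2
}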
We have a PowerShell script to continually monitor a folder for new JSON files and upload them to Azure. We have this script saved on a shared folder so that multiple people can run this script simultaneously for redundancy. Each person's computer has a scheduled task to run it at login so that the script is always running.
I wanted to update the script, but then I would have had to ask each person to stop their running script and restart it. This is especially troublesome since we eventually want to run this script in "hidden" mode so that no one accidentally closes out the window.
So I wondered if I could create a script that updates itself automatically. I came up with the code below, and when this script is run and a new version of the script is saved, I expected the running PowerShell window to close when it hit the Exit command and then reopen a new window to run the new version of the script. However, that didn't happen.
It continues along without a blip. It doesn't close the current window and it even keeps the output from old versions of the script on the screen. It's as if PowerShell doesn't really Exit, it just figures out what's happening and keeps going on with the new version of the script. I'm wondering why this is happening? I like it, I just don't understand it.
#Place at top of script
$lastWriteTimeOfThisScriptWhenItFirstStarted = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime

#Continuous loop to keep this script running
While($true) {
    Start-Sleep 3 #seconds

    #Run this script, change the text below, and save this script
    #and the PowerShell window stays open and starts running the new version without a hitch
    "Hi"

    $lastWriteTimeOfThisScriptNow = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime
    if($lastWriteTimeOfThisScriptWhenItFirstStarted -ne $lastWriteTimeOfThisScriptNow) {
        . $PSCommandPath
        Exit
    }
}
Interesting Side Note
I decided to see what would happen if my computer lost connection to the shared folder where the script was running from. It continues to run, but presents an error message every 3 seconds as expected. But, it will often revert back to an older version of the script when the network connection is restored.
So if I change "Hi" to "Hello" in the script and save it, "Hello" starts appearing as expected. If I unplug my network cable for a while, I soon get error messages as expected. But then when I plug the cable back in, the script will often start outputting "Hi" again even though the newly saved version has "Hello" in it. I guess this is a negative side-effect of the fact that the script never truly exits when it hits the Exit command.
. $PSCommandPath is a blocking (synchronous) call, which means that the Exit on the next line isn't executed until the dot-sourced script has itself exited.
Given that $PSCommandPath here is your own script, which never exits (even though it seemingly does), the Exit statement is never reached (assuming that the new version of the script keeps the same fundamental while-loop logic).
While this approach works in principle, there are caveats:
You're using ., the "dot-sourcing" operator, which means the script's new content is loaded into the current scope (and generally you always remain in the same process, as you always do when you invoke a *.ps1 file, whether with . or (the implied) regular call operator, &).
While variables / functions / aliases from the new script then replace the old ones in the current scope, old definitions that you've since removed from the new version of the script would linger and potentially cause unwanted side-effects.
As you observe yourself, your self-updating mechanism will break if the new script contains a syntax error: the dot-sourcing then fails, the Exit statement is reached, and nothing is left running.
That said, you could use that as a mechanism to detect failure to invoke the new version:
Use try { . $PSCommandPath } catch { Write-Error $_ } instead of just . $PSCommandPath,
and instead of the Exit command, issue a warning (or do whatever is appropriate to alert someone of the failure) and then keep looping (continue), which means the old script stays in effect until a valid new one is found.
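A minimal sketch of that variant, built on the check from your own script (untested):
$lastWriteTimeOfThisScriptNow = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime
if ($lastWriteTimeOfThisScriptWhenItFirstStarted -ne $lastWriteTimeOfThisScriptNow) {
    try {
        . $PSCommandPath   # hand control to the new version; normally this never returns
    }
    catch {
        Write-Warning "Failed to load the new version: $_"
        continue           # keep running the old version; retry on the next pass of the loop
    }
    Exit
}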
Even with the above, the fundamental constraint of this approach is that you may exceed the maximum call-recursion depth: the nested . invocations pile up, and once the nesting limit is reached you won't be able to perform another one, leaving you stuck in a loop of futile retries.
That said, as of Windows PowerShell v5.1 this limit appears to be around 4900 nested calls, so if you never expect the script to be updated that frequently while a given user session is active (a reboot / logoff would start over), this may not be a concern.
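If you want to see how deep the nesting has actually become, you could log the call depth just before each re-invocation, for example (sketch):
Write-Verbose "Current call depth: $((Get-PSCallStack).Count)" -Verbose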
Alternative approach:
A more robust approach would be to create a separate watchdog script whose sole purpose is to monitor for new versions, kill the old running script and start the new one, with an alert mechanism for when starting the new script fails.
Another option is to have the main script work in "stages", where it runs a command based on the name of the highest-revision script in a folder. I think mklement0's watchdog is a genius idea, though.
What I'm referring to is doing what you do now, but using a variable as your command, with that variable updated to the highest-numbered script name. This way you just drop 10.ps1 into the folder and it will ignore 9.ps1, and the function in that script would be named mainfunction10, etc.
Something like
$command = ((Get-ChildItem C:\path\to\scriptfolder\).BaseName)[-1]
& "C:\path\to\scriptfolder\$command.ps1"
The files would have to be named alphabetically from oldest to newest. Otherwise you'll have to sort-object by date.
$command = ((Get-ChildItem C:\path\to\scriptfolder\ | Sort-Object -Property LastWriteTime).BaseName)[-1]
& "C:\path\to\scriptfolder\$command.ps1"
Or dot-source it instead of invoking it as a command, and then have the later code call the functions like function$command, where the function name is derived from the script's name.
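Something like this, where the mainfunction10-style names are purely illustrative:
$command = ((Get-ChildItem C:\path\to\scriptfolder\ -Filter *.ps1 |
    Sort-Object -Property LastWriteTime).BaseName)[-1]    # e.g. "mainscript10"
. "C:\path\to\scriptfolder\$command.ps1"                  # load its functions
& "mainfunction$($command -replace '\D')"                  # calls e.g. mainfunction10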
I still like the watchdog idea more.
The watchdog would look sort of like
While ($true) {
    $new = ((Get-ChildItem C:\path\to\scriptfolder\ | Sort-Object -Property LastWriteTime).FullName)[-1]
    If ($old -ne $new) {
        Kill $old
        Sleep 10
        & $new
    }
    $old = $new
    Sleep 600
}
Mind you, I'm not certain how the scripts are run, and you may need to seek out instances of PowerShell based on the command line used to start them.
$kill = (Get-CimInstance Win32_Process |
    Where-Object { $_.CommandLine -like "*$command*" }).ProcessId
Kill $kill
Would replace kill $old
This command is an educated guess and untested.
Other tricks would be running the main script from the watchdog as a job, getting the job ID, and then checking for file changes. If a new file comes in, the watchdog could kill the job ID and repeat the whole process.
You could also just have the script end and have a Windows scheduled task rerun it every 10 minutes. That way whatever script is current just runs every ten minutes. This is more intensive per startup, though.
Instead of Exit you could use break to kill the loop, and the script will exit naturally.
You can use Test-Connection to check for the server, but if it's every 3 seconds, that's a lot of pings from a lot of computers.
I'm using a PowerShell script to perform some automated testing on a web application.
Part of this script runs a small, separate script which basically monitors the web app for pop-ups and closes them if they appear. It is called during the main script like so:
Start-Process Powershell.exe -Argumentlist "-file C:\Users\Documents\Monitor.ps1"
At some point though I would like to close the monitor script, perform some commands, and then start the monitor script again.
Is there a way for me to kill the monitor script from the main, without closing the main script as well in the process?
You would want to save it to a variable:
$a = start-process notepad.exe -PassThru
$a.Id
10536
So you could later kill it.
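Applied to your monitor script, that might look something like this (sketch using the path from your question):
# Start the monitor and keep a handle to its process
$monitor = Start-Process PowerShell.exe `
    -ArgumentList '-File C:\Users\Documents\Monitor.ps1' -PassThru

# ... later: kill just the monitor, leaving the main script running
Stop-Process -Id $monitor.Id

# ... perform your other commands, then start it again
$monitor = Start-Process PowerShell.exe `
    -ArgumentList '-File C:\Users\Documents\Monitor.ps1' -PassThru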
I've got a PowerShell script that eventually passes a stack of arguments into a batch file via the Invoke-Expression command.
However, on one server, when the PowerShell script executes that batch file, the batch file opens in a new window, while on the other server the batch file executes within the PowerShell window.
What that means is that my sleep interval starts as soon as the batch file begins executing in the new window, which throws off my timings; on the other server, the sleep interval doesn't begin until after the batch file has finished executing.
So my question is... does anybody know why the behaviours are different between the two servers, and how to get the batch file to execute in the PowerShell window? I'm thinking it's a configuration thing, but I can't actually find anything that tells me how to make it do what I want it to do...
Thanks!
--edit--
I'm currently just piping the line straight through like this:
E:\Software\ibm\WebSphere\AppServer\bin\wsadmin -lang jython -username $($username) -password $($password) -f "F:\Custom\dumpAllThreads.py" $($servers)
Previously, it was
$invokeString = 'E:\Software\ibm\WebSphere\AppServer\bin\wsadmin -lang jython -username $($username) -password $($password) -f "F:\Custom\dumpAllThreads.py" $($servers)'
$output = invoke-expression $invokeString
Both had the same behaviour.
So my question is... does anybody know why the behaviours are different between the two servers
Most often I've seen this sort of thing related to how a script is called. If the same user is logged on multiple times on the same server (e.g., console and RDP), then the window might appear in a different session. Similarly, if the script runs as a scheduled task and the user that runs the task isn't the user logged on, the window will never be visible. If the same user is logged on, it might be visible.
how to get the batch file to execute in the powershell window?
You could try Start-Process with -NoNewWindow, as @Paul mentions.
However....
What that means is that my sleep interval starts as soon as the batch file begins executing in the new window, which throws off my timings; on the other server, the sleep interval doesn't begin until after the batch file has finished executing.
It sounds like your actual problem is that your code has a race condition. You should fix the actual problem. Use Start-Process with the -Wait parameter, or use the jobs system in PowerShell.
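For example, something along these lines should keep the call synchronous regardless of which window it runs in (untested sketch; the .bat extension for wsadmin is an assumption):
Start-Process -FilePath 'E:\Software\ibm\WebSphere\AppServer\bin\wsadmin.bat' `
    -ArgumentList "-lang jython -username $username -password $password -f `"F:\Custom\dumpAllThreads.py`" $servers" `
    -NoNewWindow -Wait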
I need to monitor the processes running on a server that are named "Prov.Messenger.exe" and alert if the number of occurrences is less than 5.
I know I can use the Get-Process command in PowerShell; running "get-process prov*" at a PowerShell prompt shows 5, which is correct.
How can I check whether the number of occurrences is less than 5 and alert if so? I need to do this from a remote server.
You can simply do:
if ($(Get-Process "prov*").count -lt 5) {
# Alert logic
}
If you need this to run continuously, wrap the above block in a while loop and add a sleep command
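A rough sketch of that loop (the interval and the alert logic are up to you):
while ($true) {
    if (@(Get-Process "prov*" -ErrorAction SilentlyContinue).Count -lt 5) {
        # Alert logic, e.g. Send-MailMessage or writing to the event log
        Write-Warning "Fewer than 5 Prov.Messenger.exe processes are running"
    }
    Start-Sleep -Seconds 60
}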
You can use this one-liner for monitoring processes remotely:
Get-Process -ComputerName MyPC
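Combining that with the count check above might look like this ("MyServer" is a placeholder; note that -ComputerName is available on Get-Process in Windows PowerShell but was removed in PowerShell 7+, where you would wrap the check in Invoke-Command instead):
if (@(Get-Process "prov*" -ComputerName MyServer -ErrorAction SilentlyContinue).Count -lt 5) {
    # Alert logic
}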
I have 8 PowerShell scripts. A few of them have dependencies, which means they can't be executed in parallel; they should be executed one after another.
Some of the scripts have no dependencies and can be executed in parallel.
The dependencies are as follows:
Powershell scripts 1, 2, and 3 depend on nothing else
Powershell script 4 depends on Powershell script 1
Powershell script 5 depends on Powershell scripts 1, 2, and 3
Powershell script 6 depends on Powershell scripts 3 and 4
Powershell script 7 depends on Powershell scripts 5 and 6
Powershell script 8 depends on Powershell script 5
I know that manually hard-coding the dependencies is possible, but 10 more scripts may be added, and dependencies among them may be added as well.
Has anyone achieved parallelism by resolving dependencies like this? If so, please share how to proceed.
You need to look at PowerShell 3.0 Workflows. It offers the features you need for your requirement. Something like this:
workflow Install-MyApp {
    param ([string[]]$computername)
    foreach -parallel ($computer in $computername) {
        "Installing MyApp on $computer"
        #Code for invoking installer here
        #This can take as long as 30 mins and may reboot a couple of times
    }
}

workflow Install-MyApp2 {
    param ([string[]]$computername)
    foreach -parallel ($computer in $computername) {
        "Installing MyApp2 on $computer"
        #Code for invoking installer here
        #This can take as long as 30 mins!
    }
}

workflow New-SPFarm {
    param ([string[]]$computername)
    Sequence {
        Parallel {
            Install-MyApp2 -computername "Server2","Server3"
            Install-MyApp -computername "Server1","Server4","Server5"
        }
        Sequence {
            #This activity can happen only after the set of activities in the above Parallel block are complete
            "Configuring First Server in the Farm [Server1]"
            #The following foreach should take place only after the above activity is complete, which is why it is in a Sequence
            foreach -parallel ($computer in $computername) {
                "Configuring SharePoint on $computer"
            }
        }
    }
}
How familiar are you with parallel programming in general? Have you heard of and used the concept of mutual exclusion? The general idea is to use some kind of messaging/locking mechanism to protect a shared resource among different parallel threads.
In your case, you're making the dividing lines be the scripts themselves, which I think makes this much simpler than most of the techniques outlined in that Wikipedia article. Would this simple template (sketched in code after the steps below) work for what you're looking for?
Define a folder in the local file system. This location will be known to all scripts (default parameter).
Before running any of the scripts, make sure any files in that directory are deleted.
For each script, as the very last step of their execution, they should write a file in the shared directory with their script name as the name of the file. So script1.ps1 would create script1 file, for example.
Any script that has a dependency on another script will define these dependencies in terms of the file names of the scripts. If script3 is dependent on script1 and script2, this will be defined as a dependency parameter in script3.
All scripts with dependencies will run a function that checks whether the files exist for the scripts they depend on. If they do, the script proceeds with its execution; otherwise it pauses until they are complete.
All scripts get kicked off simultaneously by a master script / batch file. All of the scripts are run as PowerShell jobs so that the OS executes them in parallel. Most of the scripts will start up, see they have dependencies, and then wait patiently for these to be resolved before continuing with the actual execution of the script body.
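Here is a rough sketch of the idea; every name here (the marker folder, the file names, the helper function) is made up, and each script would need the helper function available, e.g. by dot-sourcing a shared file:
# Shared marker folder known to all scripts
$doneDir = 'C:\temp\script-markers'

function Wait-ForDependencies {
    param([string[]]$Dependencies, [string]$MarkerDir = $doneDir)
    while ($true) {
        $missing = $Dependencies | Where-Object { -not (Test-Path (Join-Path $MarkerDir $_)) }
        if (-not $missing) { return }
        Start-Sleep -Seconds 5    # the busy wait discussed below
    }
}

# Each script ends by dropping its own marker, e.g.:
#   New-Item -ItemType File -Path (Join-Path $doneDir 'script1') -Force | Out-Null
# and a dependent script starts with, e.g.:
#   Wait-ForDependencies -Dependencies 'script1','script2'

# The master script clears old markers and kicks everything off as jobs:
Remove-Item "$doneDir\*" -ErrorAction SilentlyContinue
1..8 | ForEach-Object { Start-Job -FilePath "C:\scripts\script$_.ps1" }
Get-Job | Wait-Job | Receive-Job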
The good news is that this would allow for flexible changing of dependencies. Every script writes a file, making no assumption about whether someone else is waiting for them or not. Changing the dependency of a particular script would be a simple one-line change or change of input parameter.
This is definitely not a perfect solution, though. For instance, what would happen if a script fails (or your script can exit via multiple different code paths and you forget to write the file in one of them)? This could cause a deadlock situation where no dependent scripts ever get kicked off. The other bad thing is the busy wait of sleeping or spinning while waiting for the right files to get created; this could be corrected by implementing an event-based approach where you have the OS watch the directory for changes.
Hope this helps and isn't all garbage.
You'll just have to order your calls appropriately. There's nothing built-in that will handle the dependencies for you.
Run 1, 2, and 3 at the same time with Start-Job.
Wait for them to finish: Get-Job -State Running | Wait-Job
Run 4 and 5 at the same time with Start-Job.
Wait for them to finish: Get-Job -State Running | Wait-Job
Run 6 and wait for it.
Run 7 and 8 at the same time with Start-Job.
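For the dependency list above, a hard-coded staging with jobs might look like this (sketch; the script paths are placeholders):
Start-Job -FilePath .\script1.ps1
Start-Job -FilePath .\script2.ps1
Start-Job -FilePath .\script3.ps1
Get-Job -State Running | Wait-Job

Start-Job -FilePath .\script4.ps1
Start-Job -FilePath .\script5.ps1
Get-Job -State Running | Wait-Job

Start-Job -FilePath .\script6.ps1 | Wait-Job

Start-Job -FilePath .\script7.ps1
Start-Job -FilePath .\script8.ps1
Get-Job -State Running | Wait-Job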