PowerShell ISE and PowerShell.Exiting event

I'm using the PowerShell.Exiting engine event to save my command history and do some other tidying up when I close a PowerShell session, registering it in my profile thus:
# Set up automatic functionality on engine exit.
Register-EngineEvent PowerShell.Exiting -SupportEvent -Action `
{
#stuff
...
}
This works perfectly when I use PowerShell in a console window, but when I'm running PowerShell in ISE, it appears that the PowerShell.Exiting event somehow never fires, since nothing I put in there, be it the usual stuff or test code, ever runs.
Is this a known problem, and if so, is there a known workaround or alternative?

Well, this is weird as hell.
After cutting the profile down to nothing outside the Register-EngineEvent call, only to find that it still didn't work, I started cutting down the contents of the script block as well and restored the rest of the profile to its original state. Here are my findings:
If you have a Write-Host, or any other output to the PowerShell host, in the scriptblock for Register-EngineEvent PowerShell.Exiting, it doesn't run when you exit the ISE (even though it works fine when you exit the console).
In fact, none of the scriptblocks registered against the PowerShell.Exiting event appear to run, even ones that don't contain any statements that output to the host. (Which is why, when I tested other people's working examples above, they didn't work for me unless I started PowerShell without running the profile that added the existing event handler.)
Expunge all statements that cause host output from the scriptblocks you use with Register-EngineEvent PowerShell.Exiting, or redirect the output somewhere else, and they all start working.
(Take this with appropriate quantities of salt, since I have not yet had time to go chasing it with a debugger, but I have a sneaking suspicion ISE is closing down its tab before the engine is done with it...)
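For illustration, here is a minimal sketch of a handler that avoids host output entirely by saving the history and logging to files instead; the paths under $env:TEMP are just example choices:
Register-EngineEvent PowerShell.Exiting -SupportEvent -Action `
{
    # Save the command history and log the exit to files instead of writing to the host,
    # so the handler also has a chance to run when the ISE closes.
    Get-History | Export-Clixml (Join-Path $env:TEMP 'ps-history.clixml')
    "Session ended $(Get-Date)" | Out-File (Join-Path $env:TEMP 'ps-exit.log') -Append
}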

Related

Question about ISE vs Console with SystemEvents

When I run the following in PowerShell ISE, it works perfectly and gives me the reason "AccountLock" or "AccountUnlock" exactly as it's supposed to. However, when I run this exact command in an elevated PowerShell console, it does not return the SessionSwitch reason at all; it returns nothing after an unlock.
I checked Get-EventSubscriber as well as Get-Job and both look successfully created.
Register-ObjectEvent -InputObject ([Microsoft.Win32.SystemEvents]) -EventName "SessionSwitch" -Action { Write-Host $event.SourceEventArgs.Reason }
One thing I would like to do is have windows detect when the session is unlocked (after a user syncs their password with the domain) and open a program.
OS: Windows 10
Version: 5.1, Build 17134, Revision 590
After a lot of looking around, I couldn't find a good way to use [Microsoft.Win32.SystemEvents], so I reached out to Lee Holmes. He said it is because the PowerShell console host runs by default in the single-threaded apartment (STA) model. If you run it in MTA, it works fine.
The simplest workaround is to put code similar to the following at the beginning of your script to ensure you are in MTA mode and, if you are not, start a new PowerShell process that reloads the script:
if ([System.Threading.Thread]::CurrentThread.ApartmentState -ne [System.Threading.ApartmentState]::MTA)
{
    # Relaunch this script in a multithreaded-apartment (MTA) console and stop the current STA run.
    powershell.exe -MTA -File $MyInvocation.MyCommand.Path
    return
}
Have a look at the Microsoft documentation for the SystemEvents.SessionSwitch event: the explanation seems to be that this message is sent to the message pump, which is the part of the code that processes window messages. You could perhaps try to use a hidden form in your code to force the creation of a message pump.
Note:
This event is only raised if the message pump is running. In a Windows service, unless a hidden form is used or the message pump has been started manually, this event will not be raised. For a code example that shows how to handle system events by using a hidden form in a Windows service, see the SystemEvents class.
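As a sketch of the goal stated above (opening a program after an unlock), the action block could be extended along these lines; Notepad stands in for the real program:
Register-ObjectEvent -InputObject ([Microsoft.Win32.SystemEvents]) -EventName SessionSwitch -Action {
    # Only react to the unlock transition, then launch the desired program.
    if ($event.SourceEventArgs.Reason -eq [Microsoft.Win32.SessionSwitchReason]::SessionUnlock) {
        Start-Process notepad.exe
    }
}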

PowerShell Self-Updating Script

We have a PowerShell script to continually monitor a folder for new JSON files and upload them to Azure. We have this script saved on a shared folder so that multiple people can run this script simultaneously for redundancy. Each person's computer has a scheduled task to run it at login so that the script is always running.
I wanted to update the script, but then I would have had to ask each person to stop their running script and restart it. This is especially troublesome since we eventually want to run this script in "hidden" mode so that no one accidentally closes out the window.
So I wondered if I could create a script that updates itself automatically. I came up with the code below, and when this script was running and a new version of the script was saved, I expected the running PowerShell window to close when it hit the Exit command and then a new window to open and run the new version of the script. However, that didn't happen.
It continues along without a blip. It doesn't close the current window, and it even keeps the output from old versions of the script on the screen. It's as if PowerShell doesn't really Exit; it just figures out what's happening and keeps going with the new version of the script. I'm wondering why this is happening. I like it, I just don't understand it.
# Place at top of script
$lastWriteTimeOfThisScriptWhenItFirstStarted = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime

# Continuous loop to keep this script running
While ($true) {
    Start-Sleep 3 # seconds

    # Run this script, change the text below, and save this script,
    # and the PowerShell window stays open and starts running the new version without a hitch
    "Hi"

    $lastWriteTimeOfThisScriptNow = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime
    if ($lastWriteTimeOfThisScriptWhenItFirstStarted -ne $lastWriteTimeOfThisScriptNow) {
        . $PSCommandPath
        Exit
    }
}
Interesting Side Note
I decided to see what would happen if my computer lost connection to the shared folder where the script was running from. It continues to run, but presents an error message every 3 seconds as expected. But, it will often revert back to an older version of the script when the network connection is restored.
So if I change "Hi" to "Hello" in the script and save it, "Hello" starts appearing as expected. If I unplug my network cable for a while, I soon get error messages as expected. But then when I plug the cable back in, the script will often start outputting "Hi" again even though the newly saved version has "Hello" in it. I guess this is a negative side-effect of the fact that the script never truly exits when it hits the Exit command.
. $PSCommandPath is a blocking (synchronous) call, which means that the Exit on the next line isn't executed until the dot-sourced script has itself exited.
Given that $PSCommandPath here is your script, which never exits (even though it seemingly does), the Exit statement is never reached (assuming that the new version of the script keeps the same fundamental while-loop logic).
While this approach works in principle, there are caveats:
You're using ., the "dot-sourcing" operator, which means the script's new content is loaded into the current scope (and generally you always remain in the same process, as you always do when you invoke a *.ps1 file, whether with . or (the implied) regular call operator, &).
While variables / functions / aliases from the new script then replace the old ones in the current scope, old definitions that you've since removed from the new version of the script would linger and potentially cause unwanted side-effects.
As you observe yourself, your self-updating mechanism will break if the new script contains a syntax error, because the dot-sourcing then fails, the Exit statement is reached, and nothing is left running.
That said, you could use that as a mechanism to detect failure to invoke the new version:
Use try { . $PSCommandPath } catch { Write-Error $_ } instead of just . $PSCommandPath,
and instead of the Exit command, issue a warning (or do whatever is appropriate to alert someone of the failure) and then keep looping (continue), which means the old script stays in effect until a valid new one is found; a sketch follows.
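A minimal sketch of that variation, keeping the same LastWriteTime check as in the question:
if ($lastWriteTimeOfThisScriptWhenItFirstStarted -ne $lastWriteTimeOfThisScriptNow) {
    try {
        . $PSCommandPath    # run the new version; this blocks for as long as the new version keeps looping
    }
    catch {
        # Loading the new version failed (e.g. a syntax error); warn and keep the old version running.
        Write-Warning "Failed to load the updated script: $_"
    }
    continue
}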
Even with the above, the fundamental constraint of this approach is that you may exceed the maximum call-recursion depth. The nested . invocations pile up, and when the nesting limit is reached, you won't be able to perform another, and you're stuck in a loop of futile retries.
That said, as of Windows PowerShell v5.1 this limit appears to be around 4900 nested calls, so if you never expect the script to be updated that frequently while a given user session is active (a reboot / logoff would start over), this may not be a concern.
Alternative approach:
A more robust approach would be to create a separate watchdog script whose sole purpose is to monitor for new versions, kill the old running script and start the new one, with an alert mechanism for when starting the new script fails.
Another option is to have the main script have "stages", where it runs its commands based on the name of the highest-revision script in a folder. I think mklement0's watchdog is a genius idea, though.
What I'm referring to is doing what you do now, but using a variable as your command, and having that variable updated with the highest-numbered script name. That way you just drop 10.ps1 into the folder and it will ignore 9.ps1, and the function in that script would be named mainfunction10, etc...
Something like
$command = ((Get-ChildItem C:\path\to\scriptfolder\*.ps1).BaseName)[-1]
& "C:\path\to\scriptfolder\$command.ps1"
The files would have to be named alphabetically from oldest to newest; otherwise you'll have to Sort-Object by date:
$command = ((Get-ChildItem C:\path\to\scriptfolder\*.ps1 | Sort-Object -Property LastWriteTime).BaseName)[-1]
& "C:\path\to\scriptfolder\$command.ps1"
Or dot-source it (with .) instead of invoking it as a command, and then have the later code call a function like mainfunction$command, where the function is named after the script; a sketch follows.
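A rough sketch of that dot-sourcing variant, assuming each script defines a function named after its own base name (e.g. 10.ps1 defines mainfunction10):
$command = ((Get-ChildItem C:\path\to\scriptfolder\*.ps1 | Sort-Object -Property LastWriteTime).BaseName)[-1]
. "C:\path\to\scriptfolder\$command.ps1"    # load the newest script's functions into the current scope
& "mainfunction$command"                    # call its versioned entry point, e.g. mainfunction10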
I still like the watchdog idea more.
The watchdog would look sort of like this:
While ($true) {
    # Find the newest script in the folder by last write time
    $new = ((Get-ChildItem C:\path\to\scriptfolder\ | Sort-Object -Property LastWriteTime).FullName)[-1]
    If ($old -ne $new) {
        Kill $old    # placeholder; see the note below about finding the right process to stop
        Sleep 10
        & $new
    }
    $old = $new      # remember which version is currently running
    Sleep 600
}
Mind you, I'm not certain how the scripts are run, and you may need to seek out instances of PowerShell based on the command used to start them.
$kill = (Get-CimInstance Win32_Process |
    Where-Object { $_.CommandLine -like "*$command*" }).ProcessId   # processes whose command line references the script
Kill $kill
This would replace Kill $old above.
This command is an educated guess and untested.
Other tricks would be running the main script from the watchdog as a job, getting the job ID, and then checking for file changes. If a new file comes in, the watchdog could kill that job and repeat the whole process; a sketch follows.
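A rough sketch of that job-based variation (the folder path is a placeholder):
$folder  = 'C:\path\to\scriptfolder'
$current = ((Get-ChildItem $folder\*.ps1 | Sort-Object LastWriteTime).FullName)[-1]
$job     = Start-Job -FilePath $current         # run the main script as a background job
While ($true) {
    Start-Sleep -Seconds 600
    $newest = ((Get-ChildItem $folder\*.ps1 | Sort-Object LastWriteTime).FullName)[-1]
    If ($newest -ne $current) {
        Stop-Job $job; Remove-Job $job          # stop the old version
        $current = $newest
        $job = Start-Job -FilePath $current     # start the new version
    }
}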
You could also just have the script end and have a Windows scheduled task rerun it every 10 minutes; that way whatever the latest script is simply runs every ten minutes. This is more intense per startup, though; a sketch follows.
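For illustration, a scheduled task along these lines could do the rerunning (the task name and script path are placeholders):
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-File C:\path\to\scriptfolder\main.ps1'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Minutes 10)
Register-ScheduledTask -TaskName 'RerunUploader' -Action $action -Trigger $trigger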
Instead of Exit you could use break to end the loop, and the script will exit naturally.
You can use Test-Connection to check for the server, but if it's every 3 seconds, that's a lot of pings from a lot of computers.

Powershell Ambiguous Code Running on Windows Machine

All of a sudden, a colleague of mine found a script that runs during Windows startup on his machine, which turned out to be a PowerShell script.
The code is below; as far as I understand, it only minimizes all open windows.
powershell -command "& { $x = New-Object -ComObject Shell.Application; $x.minimizeall() }"
I need to know whether this code has any impact on his machine, so as to decide whether to isolate it from the network.
It doesn't seem to present a threat in and of itself. If someone is doing something unwanted, it would probably be that they're starting some other program that does something not nice, and this PowerShell command minimizes its window (possibly to the taskbar notification area where it can be hidden) so that you don't see what's going on. But the command itself appears harmless.
It only minimizes all the windows, nothing else; I don't think it is harmful. If someone has kept it in the startup, I believe the reason is that some other app opens a window at startup and this is trying to counteract that.

Script not running first time in console, does afterwards

Hoping you can help, as I have run out of ideas; at this point I think it's because I don't understand PowerShell properly.
I read this series: https://foxdeploy.com/2016/05/17/part-v-powershell-guis-responsive-apps-with-progress-bars/ and tried to do the same thing. Then I discovered that I could not call my script by opening a normal PowerShell console (not the ISE) and doing .\script.ps1
I also noticed that I could not do the same with the example script located here;
https://github.com/1RedOne/BlogPosts/blob/master/GUI%20Part%20V/PowerShell_GUI_Template.ps1
It does run the SECOND time, however, and every time after that. For example, if you do
.\script.ps1 (doesn't work)
.\script.ps1 (works)
.\script.ps1 (works)
etc...
But if you do
powershell .\script.ps1 (doesn't work)
powershell .\script.ps1 (doesn't work)
However, if you close the ISE, open it again and run the script, it works the first time.
I don't really know what is causing this.
Does anyone have any ideas?
If you would like to see this, just copy the script from the GitHub link, as that one has the same problem as mine.
The issue is due to lines 39-41 in the script on GitHub:
$syncHash.Window=[Windows.Markup.XamlReader]::Load( $reader )
[void][System.Reflection.Assembly]::LoadWithPartialName('presentationframework')
As you can see, presentationframework.dll is not loaded into the appdomain until after XamlReader.Load() has been called.
The entire rest of the script is dependent on the output from this call, so the entire thing fails.
Second time around, the presentationframework assembly has already been loaded, so the call succeeds and the Window is created correctly.
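The fix is simply to load the assembly before the XAML is parsed; a minimal sketch, using Add-Type in place of the deprecated LoadWithPartialName:
Add-Type -AssemblyName PresentationFramework                       # load WPF support first
$syncHash.Window = [Windows.Markup.XamlReader]::Load( $reader )    # now this call can find the WPF types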

Powershell config to force a batch file to run within the powershell window?

I've got a PowerShell script that eventually passes a stack of arguments into a batch file via the Invoke-Expression command.
However, on one server, when the PowerShell script executes that batch file, the batch file opens in a new window, while on the other server the batch file executes within the PowerShell window.
What that means is that I've got a sleep interval that starts as soon as the batch file begins executing in the new window, which throws off my timings, unlike the other server, where the sleep interval doesn't begin until after the batch file has finished executing.
So my question is... does anybody know why the behaviours are different between the two servers, and how to get the batch file to execute in the PowerShell window? I'm thinking it's a configuration thing, but I can't actually find anything that tells me how to make it do what I want.
Thanks!
--edit--
I'm currently just piping the line straight through like this:
E:\Software\ibm\WebSphere\AppServer\bin\wsadmin -lang jython -username $($username) -password $($password) -f "F:\Custom\dumpAllThreads.py" $($servers)
Previously, it was
$invokeString = 'E:\Software\ibm\WebSphere\AppServer\bin\wsadmin -lang jython -username $($username) -password $($password) -f "F:\Custom\dumpAllThreads.py" $($servers)'
$output = invoke-expression $invokeString
Both had the same behaviour.
So my question is... does anybody know why the behaviours are different between the two servers
Most often I've seen this sort of thing related to how a script is called. If the same user is logged on multiple times on the same server (i.e., console and RDP), then the window might appear in a different session. Similarly, if the script runs as a scheduled task and the user that runs the task isn't the user logged on, the window will never be visible. If the same user is logged on, it might be visible.
how to get the batch file to execute in the powershell window?
You could try Start-Process with -NoNewWindow, as @Paul mentions.
However....
What that means, is that I've got a sleep interval that is starting once the batch file begins executing in the new window, and thus screwing up my timings, unlike the other server, where the sleep interval doesn't begin until after the batch file has finished executing.
It sounds like your actual problem is that your code has a race condition. You should fix the actual problem. Use Start-Process with the -Wait parameter, or use the jobs system in PowerShell.
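For example, a hedged sketch of what that could look like for the wsadmin call above, assuming wsadmin is actually the wsadmin.bat batch file (adjust the path and extension to match your installation):
# Run the batch file in the current console window and block until it finishes
Start-Process -FilePath 'E:\Software\ibm\WebSphere\AppServer\bin\wsadmin.bat' `
    -ArgumentList "-lang jython -username $username -password $password -f `"F:\Custom\dumpAllThreads.py`" $servers" `
    -NoNewWindow -Wait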