PowerShell WMI query failure over time - powershell

I am performing a PowerShell (4.0) WMI query as below to obtain a specific process ID for Java processes. The $process variable is effectively 'java' and $command is part of a path that forms a known part of the command line value (the Java command-line options).
(Get-WmiObject win32_process -Filter "name like '%$process%' and commandLine like '%$command%' and not commandline like '%shutdown%'")
Given a unique command line, this should always return a single process/process ID value.
The PowerShell script is launched from a Scheduled Task and then enters an infinite loop, calling the WMI query every X minutes. This works for roughly an hour after I log off from the server and then simply stops returning the process. If I log back onto the server, open a new PowerShell shell and run the command manually, it works, while the task still visible in Task Manager continues to return nothing.
If I stay logged onto the server with a PowerShell ISE window open and run the same infinite loop there, it does not suffer the same problem and the loop continues without failure. The key difference is that I cannot log off from the server because I have an interactive shell open; from testing, the scheduled task also does not fail while the interactive session is active.
The scheduled task runs as a local admin account with highest privileges and the password stored.
Any ideas why the WMI query stops after a period of time, and how this can be corrected? I cannot rely on keeping my RDP session active.
Thanks in advance.

I have worked around this with a simple "has the WMI query failed" check: if (Get-WmiObject -Query "select * from Win32_Process").Count is 0, the script restarts the entire process (this count should never legitimately be 0).
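For reference, a minimal sketch of that workaround, assuming the loop interval and the restart mechanism (the $process/$command values come from the query above; everything else is illustrative only):

# Workaround sketch: if WMI stops returning any processes at all, relaunch this script.
while ($true) {
    # Sanity check - a healthy WMI service should always report at least one process.
    $total = (Get-WmiObject -Query "select * from Win32_Process").Count
    if ($total -eq 0) {
        Write-Warning "WMI returned no processes - restarting the monitoring script."
        Start-Process powershell.exe -ArgumentList "-NoProfile -File `"$PSCommandPath`""
        exit
    }

    # Normal work: find the target java process by command line.
    $javaProc = Get-WmiObject Win32_Process -Filter "name like '%$process%' and commandLine like '%$command%' and not commandline like '%shutdown%'"
    # ... act on $javaProc.ProcessId here ...

    Start-Sleep -Seconds (5 * 60)   # "every X minutes"
}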

Related

PowerShell Self-Updating Script

We have a PowerShell script to continually monitor a folder for new JSON files and upload them to Azure. We have this script saved on a shared folder so that multiple people can run this script simultaneously for redundancy. Each person's computer has a scheduled task to run it at login so that the script is always running.
I wanted to update the script, but then I would have had to ask each person to stop their running script and restart it. This is especially troublesome since we eventually want to run this script in "hidden" mode so that no one accidentally closes out the window.
So I wondered if I could create a script that updates itself automatically. I came up with the code below, and when this script is run and a new version of the script is saved, I expected the running PowerShell window to close when it hit the Exit command and then reopen a new window to run the new version of the script. However, that didn't happen.
It continues along without a blip. It doesn't close the current window and it even keeps the output from old versions of the script on the screen. It's as if PowerShell doesn't really Exit, it just figures out what's happening and keeps going with the new version of the script. Why is this happening? I like it, I just don't understand it.
#Place at top of script
$lastWriteTimeOfThisScriptWhenItFirstStarted = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime

#Continuous loop to keep this script running
While($true) {
    Start-Sleep 3 #seconds

    #Run this script, change the text below, and save this script
    #and the PowerShell window stays open and starts running the new version without a hitch
    "Hi"

    $lastWriteTimeOfThisScriptNow = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime
    if($lastWriteTimeOfThisScriptWhenItFirstStarted -ne $lastWriteTimeOfThisScriptNow) {
        . $PSCommandPath
        Exit
    }
}
Interesting Side Note
I decided to see what would happen if my computer lost connection to the shared folder where the script was running from. It continues to run, but presents an error message every 3 seconds as expected. But, it will often revert back to an older version of the script when the network connection is restored.
So if I change "Hi" to "Hello" in the script and save it, "Hello" starts appearing as expected. If I unplug my network cable for a while, I soon get error messages as expected. But then when I plug the cable back in, the script will often start outputting "Hi" again even though the newly saved version has "Hello" in it. I guess this is a negative side-effect of the fact that the script never truly exits when it hits the Exit command.
. $PSCommandPath is a blocking (synchronous) call, which means that the Exit on the next line isn't executed until the dot-sourced $PSCommandPath has itself exited.
Given that $PSCommandPath here is your own script, which never exits (even though it seemingly does), the Exit statement is never reached (assuming that the new version of the script keeps the same fundamental while loop logic).
While this approach works in principle, there are caveats:
You're using ., the "dot-sourcing" operator, which means the script's new content is loaded into the current scope (and generally you always remain in the same process, as you always do when you invoke a *.ps1 file, whether with . or (the implied) regular call operator, &).
While variables / functions / aliases from the new script then replace the old ones in the current scope, old definitions that you've since removed from the new version of the script would linger and potentially cause unwanted side-effects.
As you observe yourself, your self-updating mechanism will break if the new script contains a syntax error: the dot-source then fails immediately, the Exit statement is reached, and nothing is left running.
That said, you could use that as a mechanism to detect failure to invoke the new version:
Use try { . $PSCommandPath } catch { Write-Error $_ } instead of just . $PSCommandPath
and instead of the Exit command, issue a warning (or do whatever is appropriate to alert someone of the failure) and then keep looping (continue), which means the old script stays in effect until a valid new one is found.
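A minimal sketch of that variant, reusing the update check from the question (only the body of the if block changes):

if($lastWriteTimeOfThisScriptWhenItFirstStarted -ne $lastWriteTimeOfThisScriptNow) {
    try {
        . $PSCommandPath   # load and run the new version; normally this never returns
    }
    catch {
        # The new version failed to load (e.g. a syntax error): warn and keep the old one running.
        Write-Warning "Failed to load updated script: $_"
        continue
    }
    Exit   # only reached if the new version loaded and then itself exited
}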
Even with the above, the fundamental constraint of this approach is that you may exceed the maximum call-recursion depth. The nested . invocations pile up, and when the nesting limit is reached you won't be able to perform another, and you're stuck in a loop of futile retries.
That said, as of Windows PowerShell v5.1 this limit appears to be around 4900 nested calls, so if you never expect the script to be updated that frequently while a given user session is active (a reboot / logoff would start over), this may not be a concern.
Alternative approach:
A more robust approach would be to create a separate watchdog script whose sole purpose is to monitor for new versions, kill the old running script and start the new one, with an alert mechanism for when starting the new script fails.
Another option is to have the main script have "stages" where it runs commands based on the name of the highest-revision script in a folder. I think mklement0's watchdog is a genius idea though.
What I'm referring to is doing what you already do, but using a variable as your command, where that variable is updated with the highest-numbered script name. This way you just drop 10.ps1 into the folder and 9.ps1 is ignored. The function in that script would then be named mainfunction10, and so on.
Something like
$command = ((Get-ChildItem C:\path\to\scriptfolder\).BaseName)[-1]
& "C:\path\to\scriptfolder\$command.ps1"   # BaseName strips the extension, so add it back when invoking
The files would have to be named alphabetically from oldest to newest. Otherwise you'll have to sort-object by date.
$command = ((Get-ChildItem C:\path\to\scriptfolder\ | Sort-Object -Property LastWriteTime).BaseName)[-1]
& "C:\path\to\scriptfolder\$command.ps1"
Or dot-source the script instead of invoking it as a command, and then have the later code call a function named after the script (e.g. "mainfunction$command").
I still like the watchdog idea more.
The watchdog would look sort of like
While ($true) {
    $new = ((Get-ChildItem C:\path\to\scriptfolder\ | Sort-Object -Property LastWriteTime).FullName)[-1]
    If ($old -ne $new) {
        Kill $old     # see below - this really needs to target the process running the old script
        Sleep 10
        & $new
    }
    $old = $new       # remember the current version (assignment, not the -eq comparison)
    Sleep 600
}
Mind you, I'm not certain how the scripts are run, and you may need to look for instances of powershell based on the command line used to start them.
$kill = ((WMIC path win32_process get Caption,Processid,Commandline).where({$_.commandline -contains $command})).processid
Kill $kill
Would replace kill $old
This command is an educated guess and untested.
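For what it's worth, a hedged alternative that avoids parsing WMIC's text output is to query WMI directly (assuming $command still holds a distinctive part of the old script's path or name):

# Sketch: find the powershell process whose command line references the old script and stop it.
$target = Get-WmiObject Win32_Process -Filter "Name like 'powershell%'" |
    Where-Object { $_.CommandLine -like "*$command*" }
if ($target) {
    Stop-Process -Id $target.ProcessId -Force
}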
Other tricks would be running the main script from the watchdog as a job, getting the job ID, and then checking for file changes. If a new file comes in, the watchdog could kill that job and repeat the whole process.
You could also just have the script end, and have a Windows scheduled task rerun it every 10 minutes. That way the current script simply runs every ten minutes. This is more expensive per startup, though.
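As an illustration only (task name and script path are placeholders), such a task could be registered with schtasks:

schtasks /create /tn "RerunMonitorScript" /tr "powershell.exe -NoProfile -File C:\path\to\scriptfolder\latest.ps1" /sc minute /mo 10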
Instead of Exit you could use break to end the loop, and the script will then exit naturally.
You can use Test-Connection to check that the server is reachable. But if it runs every 3 seconds, that's a lot of pings from a lot of computers.
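A rough sketch of that check, assuming the loop pings a placeholder file server before touching the share:

# Skip this iteration if the file server is unreachable (server name is a placeholder).
if (-not (Test-Connection -ComputerName 'fileserver01' -Count 1 -Quiet)) {
    Start-Sleep -Seconds 30
    continue
}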

Get-WmiObject returns nothing in a PowerShell Script running as scheduled task

My entire script is about 800 lines long and contains a lot of other functionality, so I will focus on the actual problem in this question.
I made a PowerShell script that uses the Get-WmiObject cmdlet to get the installed Windows Updates on the computer it runs on.
The full command looks like this:
$windowsupdates = Get-WmiObject Win32_QuickFixEngineering -ErrorAction Stop |
Select-Object Caption, CSName, Description, FixComments, HotFixID, InstallDate, InstalledBy, InstalledOn, Name, ServicePackInEffect, Status -ErrorAction Stop |
Sort-Object -Property InstallDate -ErrorAction Stop
After this command I iterate through the found updates via ForEach and generate a CSV file from them.
This works entirely fine when running the script manually, but as soon as I run it as a scheduled task under the exact same user I used to run it manually, it no longer finds any updates installed on the system.
There are no exceptions thrown and everything looks as if the script were running normally.
When the script is finished it will be run by a scheduled task, because it has to run on a specific time interval, so I have to make sure it works as a scheduled task. The script will run under a standard user without administrator permissions (I will not be able to run it under an administrator account, so please don't suggest that; it DOES work under an administrator account though), so I'm fairly sure it is a security/access problem, because the script runs fine as a scheduled task under an administrator user.
The only other question I found about this issue is "WMI query in powershell script returns no object when run in a scheduled task", and I DID check the "Run with highest privileges" checkbox, but it still doesn't work. Has anybody ever experienced this issue and found a solution?
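One hedged suggestion while you troubleshoot: have the scheduled task log some context about the session it actually runs in, so you can compare it with a manual run (the log path and property choices below are only an example):

# Hypothetical diagnostics: record who and what the scheduled task is really running as.
"User: $env:USERNAME  64-bit: $([Environment]::Is64BitProcess)  PS: $($PSVersionTable.PSVersion)" |
    Out-File "$env:TEMP\task-diagnostics.log" -Append
"Updates found: $(($windowsupdates | Measure-Object).Count)" |
    Out-File "$env:TEMP\task-diagnostics.log" -Append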

Powershell config to force a batch file to run within the powershell window?

I've got a powershell script that eventually passes a stack of arguments into a batch file via invoke-expression command.
However, on one server, when the PowerShell script executes that batch file, the batch file opens in a new window, while on the other server the batch file executes within the PowerShell window.
What that means is that my sleep interval starts as soon as the batch file begins executing in the new window, which throws off my timings, unlike on the other server, where the sleep interval doesn't begin until the batch file has finished executing.
So my question is... does anybody know why the behaviours are different between the two servers, and how to get the batch file to execute in the PowerShell window? I'm thinking it's a configuration thing, but I can't actually find anything that tells me how to make it do what I want.
Thanks!
--edit--
I'm currently just piping the line straight through like this:
E:\Software\ibm\WebSphere\AppServer\bin\wsadmin -lang jython -username $($username) -password $($password) -f "F:\Custom\dumpAllThreads.py" $($servers)
Previously, it was
$invokeString = 'E:\Software\ibm\WebSphere\AppServer\bin\wsadmin -lang jython -username $($username) -password $($password) -f "F:\Custom\dumpAllThreads.py" $($servers)'
$output = invoke-expression $invokeString
Both had the same behaviour.
So my question is... does anybody know why the behaviours are different between the two servers
Most often I've seen this sort of thing related to how a script is called. If the same user is logged on multiple times on the same server (i.e., console and RDP) then the window might appear in a different session. Similarly, if the script runs as a scheduled task and the user that runs the task isn't the user logged on, the window will never be visible. If the same user is logged on, it might be visible.
how to get the batch file to execute in the powershell window?
You could try Start-Process with -NoNewWindow, as @Paul mentions.
However....
What that means is that my sleep interval starts as soon as the batch file begins executing in the new window, which throws off my timings, unlike on the other server, where the sleep interval doesn't begin until the batch file has finished executing.
It sounds like your actual problem is that your code has a race condition. You should fix the actual problem. Use Start-Process with the -Wait parameter, or use the jobs system in PowerShell.
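For example, a hedged sketch of the wsadmin call via Start-Process, so the script blocks until wsadmin finishes and stays in the same console (the .bat extension is an assumption; adjust to however wsadmin is actually launched):

$argList = "-lang jython -username $username -password $password -f `"F:\Custom\dumpAllThreads.py`" $servers"
$proc = Start-Process -FilePath 'E:\Software\ibm\WebSphere\AppServer\bin\wsadmin.bat' -ArgumentList $argList -NoNewWindow -Wait -PassThru
# The sleep that follows now starts only after wsadmin has finished; $proc.ExitCode holds its exit code.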

Powershell/Polymon Monitoring

So my boss wants us to use Polymon monitoring to watch a server's print spooler, because it has recently been turning itself off for no reason. We have a simple bat script on the desktop of the server to run "net start spooler" when Polymon sees the spooler shut off. However, the script I am using does not run the bat file...here is what Polymon says:
Monitor status is available through the following $Status object properties: $Status.StatusID, $Status.Status. The following StatusID values have corresponding Status values: 1=OK, 2=WARN, 3=FAIL.
Monitor Counters are available through the $Counters collection. This collection exposes a default Item property which retrieves a Counter by index value, e.g. $Counters(0), and also exposes a Counter property that retrieves a Counter by name, e.g. $Counters.Counter("MyCounterName").
Counter objects expose the following properties: CounterName, CounterValue.
My script is:
cmd /c C:\Documents and Settings\Username\Desktop\start_spooler1.bat
Polymon says the script checks out, but when I manually shut the spooler service down, all I get is notifications and the spooler does not turn back on. Thoughts? I'm a total newb at PowerShell, and Polymon requires the action script to be in either PowerShell or VB, so I'm open to whichever will make it work.
I think your problem is due simply to the spaces in the path to the batch script. Put quotes around the path and it could work:
cmd /c "C:\Documents and Settings\Username\Desktop\start_spooler1.bat"
But note that you don't need to use cmd for this - powershell will run .bat and .cmd files itself. Due to the spaces in the path, you will again need some quoting, and the & operator:
& "C:\Documents and Settings\Username\Desktop\start_spooler1.bat"

Starting a process remotely in Powershell, getting %ERRORLEVEL% in Windows

A bit of background:
I'm trying to start some performance counters remotely at the beginning of a test, then stop them at the end of the test. I'm doing this from an automated test framework on a Win2003 machine; the test framework executes commands without launching a console, and some of the system under test is running Win2008. I've written scripts to choose the performance counters based on roles assigned to the servers.
My problem(s):
logman can't start or stop counters on machines that run a later version of the OS.
psexec can be used to run logman remotely, but psexec likes to hang intermittently when run from the test framework. It runs fine manually from the command line. I'm guessing that this is because the calling process doesn't provide a console, or some similar awkwardness. There's not much I can do about this (GRRRR)
I wrote a PowerShell script that executes logman remotely using WMI's Win32_Process class and called it from a batch script; this works fine. However, the test framework decides pass and fail scenarios based on the %ERRORLEVEL% and the content of stderr, and WMI's Win32_Process gives me access to neither. So if the counters fail to start, the test will plough on anyway and waste everyone's time.
I'm looking for a solution that will allow me to execute a program on a remote machine and check the return code of the program and/or pipe stderr back to the caller. For reasons of simplicity, it needs to be written with tools that are available on a vanilla Win2k3 box. I'd really prefer not to use a convoluted collection of scripts that dump things into log files and then read them back out again.
Has anyone had a similar problem, and solved it, or at least have a suggestion?
For reasons of simplicity, it needs to be written with tools that are available on a vanilla Win2k3 box. I'd really prefer not to use a convoluted collection of scripts that dump things into log files and then read them back out again.
PowerShell isn't a native tool in Windows 2003. Do you still want to tag this question PowerShell and look for an answer? Anyway, I will give you a PowerShell answer.
$proc = Invoke-WmiMethod -ComputerName Test -Class Win32_Process -Name Create -ArgumentList "Notepad.exe"
Register-WmiEvent -ComputerName test -Query "Select * from Win32_ProcessStopTrace Where ProcessID=$($proc.ProcessId)" -Action { Write-Host "Process ExitCode: $($event.SourceEventArgs.NewEvent.ExitStatus)" }
This requires PowerShell 2.0 on the system where you are running these scripts. Your Windows 2003 remote system does not really need PowerShell.
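If you would rather block until the remote process ends and read the exit code in the same script run (instead of inside an -Action block), a hedged variant using Wait-Event might look like this (the 300-second timeout is an arbitrary choice):

$proc = Invoke-WmiMethod -ComputerName Test -Class Win32_Process -Name Create -ArgumentList "Notepad.exe"
Register-WmiEvent -ComputerName Test -Query "Select * from Win32_ProcessStopTrace Where ProcessID=$($proc.ProcessId)" -SourceIdentifier ProcExit

# Block until the remote process exits (or the timeout expires), then read its exit code.
$evt = Wait-Event -SourceIdentifier ProcExit -Timeout 300
"Process ExitCode: $($evt.SourceEventArgs.NewEvent.ExitStatus)"
Unregister-Event -SourceIdentifier ProcExit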
PS: If you need a crash course on WMI and PowerShell, do read my eGuide: WMI Query Language via PowerShell