Memory leak in Powershell 3.0 script to monitor log file

I've written a script in Powershell 3.0 to monitor a log file for specific errors. The script starts a background process, which monitors the file. When anything gets written to the file, the background process simply passes it to the foreground process, if it matches the proper format (a datestamped line). The foreground process then counts the number of errors.
Everything works correctly with no errors. The issue is that, as the source logfile grows in size, the memory consumed by Powershell increases dramatically. These logs are capped at ~24 MB before they are rotated, which amounts to ~250K lines. In my tests, by the time the log reaches ~80K lines or so, the monitor is consuming 250 MB of RAM (foreground and background processes combined); they consume ~70 MB combined when they first start. This kind of growth is unacceptable in our environment. What can I do to decrease it?
Here's the script:
# Constants.
$F_IN = "C:\Temp\test.log"
$RE = "^\d+-\d+-\d+ \d+:\d+:\d+,.+ERROR.+Foo$"
$MAX_RESTARTS = 3   # Max restarts for failed background job.
$SLEEP_DELAY = 60   # In seconds.

# Background job.
$SCRIPT_BLOCK = { param($f, $r)
    Get-Content -Path $f -Tail 0 -Wait -EA SilentlyContinue `
        | Where { $_ -match $r }
}

function Start-FileMonitor {
    Param([parameter(Mandatory=$true,Position=0)][alias("f")]
          [String]$file,
          [parameter(Mandatory=$true,Position=1)][alias("b")]
          [ScriptBlock]$SCRIPT_BLOCK,
          [parameter(Mandatory=$true,Position=2)][alias("re","r")]
          [String]$regex)

    $j = Start-Job -ScriptBlock $SCRIPT_BLOCK -Arg $file,$regex
    return $j
}

function main {
    # Tail log file in the background, return any errors.
    $job = Start-FileMonitor -b $SCRIPT_BLOCK -f $F_IN -r $RE
    $restarts = 0   # Current number of restarts.

    # Poll background $job every $SLEEP_DELAY seconds.
    While ($true) {
        $a = (Receive-Job $job | Measure-Object)
        If ($job.JobStateInfo.State -eq "Running") {
            $restarts = 0
            If ($a.Count -gt 0) {
                $t0 = $a.Count
                Write-Host "Error Count: ${t0}"
            }
        }
        Else {
            If ($restarts -lt $MAX_RESTARTS) {
                $job = Start-FileMonitor -b $SCRIPT_BLOCK -f $F_IN -r $RE
                $restarts++
                Write-Host "Background job not running. Attempted restart ${restarts}."
            }
            Else {
                Write-Host "`$MAX_RESTARTS (${MAX_RESTARTS}) exceeded. Exiting."
                Break
            }
        }

        # Sleep for $SLEEP_DELAY.
        Start-Sleep -Seconds $SLEEP_DELAY
    }

    Write-Host "Done."
}

# Execute script.
main
...and here's the sample data:
2015-11-19 00:00:00, WARN Foo
2015-11-19 00:00:00, ERROR Foo
In order to replicate this issue:
Paste the sample data lines into the file C:\Temp\test.log. Save.
Start the monitoring script.
Paste additional sample data lines into the log and save. Wait for the Error Count: line to confirm that everything is working correctly.
Continue to paste additional lines and watch the memory consumption for powershell.exe in Task Manager. Note how much it increases at 400 lines...800 lines...8,000 lines...80,000 lines...
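A rough, untested sketch that automates the paste-and-watch steps from a second PowerShell window, assuming the monitor script above is already running against C:\Temp\test.log:

# Appends sample error lines in bulk and reports the combined working set
# of all powershell.exe processes (monitor plus background job) as it goes.
$log = 'C:\Temp\test.log'
1..80000 | ForEach-Object {
    Add-Content -Path $log -Value '2015-11-19 00:00:00, ERROR Foo'
    if ($_ % 400 -eq 0) {
        $mb = (Get-Process powershell | Measure-Object -Property WorkingSet -Sum).Sum / 1MB
        Write-Host ("{0} lines appended; powershell.exe total working set: {1:N0} MB" -f $_, $mb)
    }
}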

Related

How to Ignore lines with StreamWriter WriteLine

I'm trying to figure out how to ignore or stop specific lines from being written to the file with StreamWriter. Here is the code I'm working with, from How to pass arguments to program when using variable as path:
$LogDir = "c:\users\user"   # Log file output directory
$PlinkDir = "C:"            # plink.exe directory
$SerialIP = "1.1.1.1"       # serial device IP address
$SerialPort = 10000         # port to log

function CaptureWeight {
    Start-Job -Name WeightLog -ScriptBlock {
        filter timestamp {
            $sw.WriteLine("$(Get-Date -Format MM/dd/yyyy_HH:mm:ss) $_")
        }
        try {
            $sw = [System.IO.StreamWriter]::new("$using:LogDir\WeightLog_$(Get-Date -f MM-dd-yyyy).txt")
            & "$using:PlinkDir\plink.exe" -telnet $using:SerialIP -P $using:SerialPort | TimeStamp
        }
        finally {
            $sw.ForEach('Flush')
            $sw.ForEach('Dispose')
        }
    }
}

$job = CaptureWeight        # For testing, save the job
Start-Sleep -Seconds 60     # wait 1 minute
$job | Stop-Job             # kill the job
Get-Content "$LogDir\WeightLog_$(Get-Date -f MM-dd-yyyy).txt"   # Did it work?
And the output is this:
05/09/2022_14:34:19 G+027800 lb
05/09/2022_14:34:20
05/09/2022_14:34:20 G+027820 lb
05/09/2022_14:34:21
05/09/2022_14:34:21 G+027820 lb
05/09/2022_14:34:22
05/09/2022_14:34:22 G+027820 lb
Without the TimeStamp filter, every other line is blank. I have a couple of lines to clean up the logs: one removes every other line, the other removes lines with zero weights:
Set-Content -Path "$LogDir\WeightLog_$(get-date -f MM-dd-yyyy).txt" -Value (get-content -Path "$LogDir\WeightLog_$(get-date -f MM-dd-yyyy).txt" | Where-Object { $i % 2 -eq 0; $i++ })
Set-Content -Path "$LogDir\WeightLog_$(get-date -f MM-dd-yyyy).txt" -Value (get-content -Path "$LogDir\WeightLog_$(get-date -f MM-dd-yyyy).txt" | Select-String -Pattern '00000' -NotMatch)
If the files get too large, these can take a while to run; it would be nice not to have those lines written in the first place.
Thanks!
Edit: This is what I ended up with:
#**************** Serial Scale Weight Logger ********************
$LogDir = "c:\ScaleWeightLogger\Logs"   # Log File Output Directory
$PlinkDir = "c:\ScaleWeightLogger"      # plink.exe Directory
$SerialIP = "1.1.1.1"                   # Serial Device IP Address
$SerialPort = "10000"                   # Serial Device Port to Log
$MakeWeight = "000\d\d\d"               # Minimum weight to log
[datetime]$JobEndTime = '23:58'         # "WeightLog" Job End Time
[datetime]$JobStartTime = '00:02'       # Use '8/24/2024 03:00' for a date in the future
# https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_regular_expressions

function StartWeightCapture {
    Start-Job -Name WeightLog -ScriptBlock {
        filter timestamp {
            # Output filter: do not write blank lines or weights matching $MakeWeight
            if ([string]::IsNullOrWhiteSpace($_) -or $_ -match $using:MakeWeight) {
                # skip it
                return
            }
            # TimeStamp format
            $sw.WriteLine("$(Get-Date -Format MM/dd/yyyy_HH:mm:ss) $_")
        }
        try {
            # File path; $true means append
            $sw = [System.IO.StreamWriter]::new("$using:LogDir\WeightLog_$(Get-Date -f MM-dd-yyyy).txt", $true)
            # Keep the memory buffer clear after writing
            $sw.AutoFlush = $true
            # Start plink, filter its output, append the timestamp
            & "$using:PlinkDir\plink.exe" -telnet $using:SerialIP -P $using:SerialPort | TimeStamp
        }
        finally {
            # Flush and dispose the writer when the job stops
            $sw.ForEach('Flush')
            $sw.ForEach('Dispose')
        }
    }
}

function WeightCaptureEndTime {
    [datetime]$CurrentTime = Get-Date
    [int]$WaitSeconds = ($JobEndTime - $CurrentTime).TotalSeconds
    Start-Sleep -Seconds $WaitSeconds
}

function StopWeightCapture {
    Stop-Job WeightLog
    $AddDaysWhenInPast = 1
    [datetime]$CurrentTime = Get-Date
    If ($JobStartTime -lt $CurrentTime) { $JobStartTime = $JobStartTime.AddDays($AddDaysWhenInPast) }
    [int]$WaitSeconds = ($JobStartTime - $CurrentTime).TotalSeconds
    Start-Sleep -Seconds $WaitSeconds
}

while ($true) {
    StartWeightCapture
    WeightCaptureEndTime
    StopWeightCapture
}
I'm launching it at boot with:
powershell -windowstyle hidden -ExecutionPolicy bypass "& "C:\ScaleWeightLogger\ScaleWeightLogger.ps1"" & exit
And I use this to end it manually, since it runs in the background. It grabs only the PID of the main powershell process, not the job:
@echo off
for /F "tokens=2" %%K in ('
    tasklist /FI "ImageName eq powershell.exe" /FI "Status eq Running" /FO LIST ^| findstr /B "PID:"
') do (
    echo "PID is %%K, Ending process..."
    taskkill /F /PID %%K
)
pause
exit
If I understand correctly, adding this condition should save you the trouble of having to re-read the logs and strip the unwanted lines afterwards.
See String.IsNullOrWhiteSpace(String) Method and -match matching operator for details.
filter timestamp {
    # if this output is purely whitespace or it matches `00000`
    if ([string]::IsNullOrWhiteSpace($_) -or $_ -match '00000') {
        # skip it
        return
    }
    $sw.WriteLine("$(Get-Date -Format MM/dd/yyyy_HH:mm:ss) $_")
}
Regarding the observation noted in the previous question:
...when trying view the file while it's running, it seems like it updates (for viewing) about every 2 minutes, you get one 2 minute chunk of data that is about 2 minutes behind, the 2 minutes of data is there...
For this, you can enable the AutoFlush property on your StreamWriter.
The Remarks section has an excellent explanation of when it's worth enabling this property, as well as the performance implications:
When AutoFlush is set to false, StreamWriter will do a limited amount of buffering, both internally and potentially in the encoder from the encoding you passed in. You can get better performance by setting AutoFlush to false, assuming that you always call Close (or at least Flush) when you're done writing with a StreamWriter.
For example, set AutoFlush to true when you are writing to a device where the user expects immediate feedback. Console.Out is one of these cases: The StreamWriter used internally for writing to Console flushes all its internal state except the encoder state after every call to StreamWriter.Write.
$sw = [System.IO.StreamWriter]::new("$using:LogDir\WeightLog_$(Get-Date -f MM-dd-yyyy).txt")
$sw.AutoFlush = $true

Powershell Script to Kill all PIDs that are non-responsive for 3 minutes

I need some help with a powershell script to kill all PIDs that are non-responsive for 3 minutes.
This is my script, but it is not doing the trick. The script runs, but I need it to run in a while loop, forever, since the computer stays on until the end of the working day.
I need a list of all the processes that are unresponsive for a period of 3 minutes. After 3 minutes, if those processes from the list still have the status Not Responding, they should be killed. I don't want to kill processes that are unresponsive for 5 seconds or so, only those that have been hanging for more than 3 minutes.
My purpose is to kill the PIDs that are running with the status Not Responding for more than 3 minutes.
As you know, processes are sometimes unresponsive for a couple of seconds, e.g. IE hangs for 7 seconds until the server responds with the DOM, etc.; hence I need to close only the PIDs that have been hanging with the status Not Responding for more than 3 minutes.
while (1) {
    # if ( $allProcesses = get-process -name $pN -errorAction SilentlyContinue ) {
    foreach ($oneProcess in $allProcesses) {
        if ( -not $oneProcess.Responding ) {
            write "Status = Not Responding: Kill& Restart.."
            $oneProcess.kill()
            ## restart ..
        } else {
            write "Status = either normal or not detectable (no Window-Handle)"
        }
    }
    start-sleep 5
}
A quick and dirty solution, not tested, based on the idea of storing the process info in a hashtable and re-checking after the sleep period. Like so,
while ($true) {
    # Get a list of non-responding processes
    $ps = get-process | ? { $_.responding -eq $false }
    $ht = @{}

    # Store process info in a hash table.
    foreach ($p in $ps) {
        $o = new-object psobject -Property @{ "name"=$p.name; "status"=$p.responding; "time"=get-date; "pid"=$p.id }
        $ht.Add($o.pid, $o)
    }

    # Sleep for a while
    start-sleep -minutes 3

    # Get a list of non-responding processes, again
    $ps = get-process | ? { $_.responding -eq $false }
    foreach ($p in $ps) {
        # Check if the process is already in the hash table
        if ($ht.ContainsKey($p.id)) {
            # Compare the time the process was first seen non-responding with
            # the current time; if that's 3 minutes or more, kill the process.
            if ( ((get-date) - $ht[$p.id].Time).TotalMinutes -ge 3 ) {
                # The actual killing
                $p.kill()
            }
        }
    }
}
It's certainly possible to store the full process objects in the hashtable, but in most cases all you need is the process id. Mind that process ids are recycled: if you are spawning a lot of processes, it might be reasonable to also compare the process start time, so that a newly created process that happens to reuse an id isn't killed instead.
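A rough sketch of that extra check, assuming (this is my addition, not part of the original snippet) the hashtable entry also records the process' StartTime when it is first seen:

# Sketch only: also record the process start time when the entry is created ...
$o = new-object psobject -Property @{
    "name"  = $p.name
    "time"  = get-date
    "pid"   = $p.id
    "start" = $p.StartTime   # hypothetical extra field, used to detect pid reuse
}
$ht.Add($o.pid, $o)

# ... and on the re-check, require the start time to match as well,
# so a brand-new process that happens to reuse the id is left alone.
if ($ht.ContainsKey($p.id) -and
    $ht[$p.id].start -eq $p.StartTime -and
    ((get-date) - $ht[$p.id].time).TotalMinutes -ge 3) {
    $p.kill()
}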

Wait for multiple simultaneous powershell commands in other sessions to finish before running next commands

I am trying to get a master powershell script to do the following:
Run commands in 3 other powershell sessions (they all go for about ~1h - I'd like them to run concurrently, so that the jobs they do can all get done at the same time)
Wait for all 3 other powershell sessions to finish
Continue on with remaining commands in the initial powershell window
Extremely simple example
My real use case is similar to the following, except the times always vary. ECHO "hi" should happen only once all the other (3) commands have finished (in this case we know the longest takes 10000 seconds, but in my actual use case this varies a lot). Also note that it's not clear which of the 3 commands will take the longest each time.
start powershell { TIMEOUT 2000 }
start powershell { TIMEOUT 3000 }
start powershell { TIMEOUT 10000 }
ECHO "hi"
I can see (here) that I can put an & in front of the command in order to tell powershell to wait until it's complete before progressing to subsequent commands. However, I do not know how to do so with 3 simultaneous commands.
You are indeed looking for Powershell background jobs, as Lee Daily advises.
However, jobs are heavy-handed, because each job runs in its own process, which introduces significant overhead, and can also result in loss of type fidelity (due to PowerShell's XML-based serialization infrastructure being involved - see this answer).
The ThreadJob module offers a lightweight alternative based on threads.
It comes with PowerShell [Core] v6+ and in Windows PowerShell can be installed on demand with, e.g., Install-Module ThreadJob -Scope CurrentUser.[1]
You simply call Start-ThreadJob instead of Start-Job, and use the standard *-Job cmdlets to manage such thread jobs - the same way you'd manage a regular background job.
Here's an example:
$startedAt = [datetime]::UtcNow

# Define the commands to run as (thread) jobs.
$commands = { $n = 2; Start-Sleep $n; "I ran for $n secs." },
            { $n = 3; Start-Sleep $n; "I ran for $n secs." },
            { $n = 10; Start-Sleep $n; "I ran for $n secs." }

# Start the (thread) jobs.
# You could use `Start-Job` here, but that would be more resource-intensive
# and make the script run considerably longer.
$jobs = $commands | Foreach-Object { Start-ThreadJob $_ }

# Wait until all jobs have completed, passing their output through as it
# is received, and automatically clean up afterwards.
$jobs | Receive-Job -Wait -AutoRemoveJob

"All jobs completed. Total runtime in secs.: $(([datetime]::UtcNow - $startedAt).TotalSeconds)"
The above yields something like the following; note that the individual commands' output is reported as it becomes available, but execution of the calling script doesn't continue until all commands have completed:
I ran for 2 secs.
I ran for 3 secs.
I ran for 10 secs.
All jobs completed. Total runtime in secs.: 10.2504931
Note: In this simple case, it's obvious which output came from which command, but more typically the output from the various jobs will run unpredictably interleaved, which makes it difficult to interpret the output - see the next section for a solution.
As you can see, the overhead introduced for the thread-based parallel execution in the background is minimal - overall execution took only a little longer than 10 seconds, the runtime of the longest-running of the 3 commands.
If you were to use the process-based Start-Job instead, the overall execution time might look something like this, showing the significant overhead introduced, especially the first time you run a background job in a session:
All jobs completed. Total runtime in secs.: 18.7502717
That is, at least on the first invocation in a session, the benefits of parallel execution in the background were negated - execution took longer than sequential execution would have taken in this case.
While subsequent process-based background jobs in the same session run faster, the overhead is still significantly higher than it is for thread-based jobs.
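For comparison, a minimal sketch of the process-based variant: the pipeline is identical, only Start-ThreadJob is swapped for Start-Job (timing lines omitted here).

# Same $commands as above, but each job runs in its own child process.
$jobs = $commands | Foreach-Object { Start-Job $_ }
$jobs | Receive-Job -Wait -AutoRemoveJob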
Synchronizing the job output streams
If you want to show the output from the background commands per command, you need to collect it separately.
Note: In a console window (terminal), this requires you to wait until all commands have completed before you can show the output (because there is no way to show multiple output streams simultaneously via in-place updating, at least with the regular output commands).
$startedAt = [datetime]::UtcNow

$commands = { $n = 1; Start-Sleep $n; "I ran for $n secs." },
            { $n = 2; Start-Sleep $n; "I ran for $n secs." },
            { $n = 3; Start-Sleep $n; "I ran for $n secs." }

$jobs = $commands | Foreach-Object { Start-ThreadJob $_ }

# Wait until all jobs have completed.
$null = Wait-Job $jobs

# Collect the output individually for each job and print it.
foreach ($job in $jobs) {
    "`n--- Output from {$($job.Command)}:"
    Receive-Job $job
}

"`nAll jobs completed. Total runtime in secs.: $('{0:N2}' -f ([datetime]::UtcNow - $startedAt).TotalSeconds)"
The above will print something like this:
--- Output from { $n = 1; Start-Sleep $n; "I ran for $n secs." }:
I ran for 1 secs.
--- Output from { $n = 2; Start-Sleep $n; "I ran for $n secs." }:
I ran for 2 secs.
--- Output from { $n = 3; Start-Sleep $n; "I ran for $n secs." }:
I ran for 3 secs.
All jobs completed. Total runtime in secs.: 3.09
Using Start-Process to run the commands in separate windows
On Windows, you can use Start-Process (whose alias is start) to run commands in a new window, which is also asynchronous by default, i.e., serially launched commands do run in parallel.
In a limited form, this allows you to monitor command-specific output in real time, but it comes with the following caveats:
You'll have to manually activate the new windows individually to see the output being generated.
The output is only visible while a command is running; on completion, its window closes automatically, so you can't inspect the output after the fact.
To work around that you'd have to use something like Tee-Object in your PowerShell cmdlet in order to also capture output in a file, which the caller could later inspect.
This is also the only way to make the output available programmatically, albeit only as text.
Passing PowerShell commands to powershell.exe via Start-Process requires you to pass your commands as strings (rather than script blocks) and has annoying parsing requirements, such as the need to escape " chars. as \" (sic) - see below.
Last and not least, using Start-Process also introduces significant processing overhead (though with very long-running commands that may not matter).
$startedAt = [datetime]::UtcNow

# Define the commands - of necessity - as *strings*.
# Note the unexpected need to escape the embedded " chars. as \"
$commands = '$n = 1; Start-Sleep $n; \"I ran for $n secs.\"',
            '$n = 2; Start-Sleep $n; \"I ran for $n secs.\"',
            '$n = 3; Start-Sleep $n; \"I ran for $n secs.\"'

# Use `Start-Process` to launch the commands asynchronously,
# in a new window each (Windows only).
# `-PassThru` passes an object representing the newly created process through.
$procs = $commands | ForEach-Object { Start-Process -PassThru powershell -Args '-c', $_ }

# Wait for all processes to exit.
$procs.WaitForExit()

"`nAll processes completed. Total runtime in secs.: $('{0:N2}' -f ([datetime]::UtcNow - $startedAt).TotalSeconds)"
[1] In Windows PowerShell v3 and v4, Install-Module isn't available by default, because these versions do not come with the PowerShellGet module. However, this module can be installed on demand, as detailed in Installing PowerShellGet
A simple answer to the question, using jobs.
start-job { sleep 2000 }
start-job { sleep 3000 }
start-job { sleep 10000 }
get-job | wait-job
echo hi
Here's another way; start is an alias for Start-Process. You could use timeout instead of sleep. Running timeout three times looks pretty cool actually, but it can mess up some of the output.
$a = start -NoNewWindow powershell {timeout 10; 'a done'} -PassThru
$b = start -NoNewWindow powershell {timeout 10; 'b done'} -PassThru
$c = start -NoNewWindow powershell {timeout 10; 'c done'} -PassThru
$a,$b,$c | wait-process
'hi'
b done
c done
a done
hi
Here's an attempt at workflow.
function sleepfor($time) { sleep $time; "sleepfor $time done"}
workflow work {
parallel {
sleepfor 3
sleepfor 2
sleepfor 1
}
'hi'
}
work
sleepfor 1 done
sleepfor 2 done
sleepfor 3 done
hi
Or just:
function sleepfor($time) { sleep $time; "sleepfor $time done"}
workflow work2 {
foreach -parallel ($i in 1..3) { sleepfor 10 }
'hi'
}
work2 # runs in about 13 seconds
sleepfor 10 done
sleepfor 10 done
sleepfor 10 done
hi
API attempt with 3 runspaces:
$a = [PowerShell]::Create().AddScript{sleep 5;'a done'}
$b = [PowerShell]::Create().AddScript{sleep 5;'b done'}
$c = [PowerShell]::Create().AddScript{sleep 5;'c done'}
$r1,$r2,$r3 = ($a,$b,$c).begininvoke()
$a.EndInvoke($r1); $b.EndInvoke($r2); $c.EndInvoke($r3)
($a,$b,$c).dispose()
a done
b done
c done
Remote invoke-command (elevated prompt):
invoke-command localhost,localhost,localhost { sleep 5; 'done' }
done
done
done

Displaying only changes when using get-content -wait

I created the following function, which I want to use for a very simple CTI solution at work. The CTI process writes all received calls to a text logfile.
This function starts a new PowerShell job, checks whether the .log has been saved during the last 2 seconds, and gets the last 4 lines of the log (each received call creates 4 new lines).
During the job update I'm using regex to find the line with the phone number and time and append it to a richtextbox in a form.
In theory this works exactly as I want. If I manually add new lines and save the file, it always shows the timecode and phone number.
In the field, however, this doesn't work, because the CTI process keeps the file open and doesn't save it until the process shuts down.
I know that I can use get-content -wait to display new lines. I already tested this in the console and it displays new lines as soon as the .log is updated by the CTI process. What I don't know is how to rewrite the function to work with that, displaying only new lines and not all the old content when the script first runs. I need to keep it in the job so the form stays responsive. Another concern: the computer running the form doesn't have much power, and I don't know whether get-content -wait could cause high memory usage after several hours. Maybe there are also some alternative solutions available for a case like this?
function start-CTIlogMonitoring
{
    Param ($CTIlogPath)
    Write-Debug "startCTI monitor"
    Add-JobTracker -Name "CTILogger" -ArgumentList $CTIlogPath `
    -JobScript {
        #--------------------------------------------------
        #TODO: Set a script block
        #Important: Do not access form controls from this script block.
        Param ($CTIlogPath) # Pass any arguments using the ArgumentList parameter
        while ($true)
        {
            $diff = ((Get-ChildItem $CTIlogPath).LastWriteTime - (get-date)).totalseconds
            Write-Debug "diff $diff"
            if ($diff -gt -2)
            {
                Write-Debug "cti log DIFF detected"
                Get-Content -Path "$CTIlogPath" -Tail 4
                Start-Sleep -Seconds 1
            }
        }
        #--------------------------------------------------
    }`
    -CompletedScript { Param ($Job) }`
    -UpdateScript {
        Param ($Job)
        $results = Receive-Job -Job $Job | Out-String # Out-String required to get new lines in RTB
        # Extract the interesting parts of $results and make them easier to read
        if ($results -match '(Ein, E, (\d+))')
        {
            Write-Debug "Incoming Call:"
            $time = ([regex]'[0-9]{2}:[0-9]{2}:[0-9]{2}').Match($results)
            $phoneNumber = ([regex]'Ein, E, (\d+)').Split($results)[1]
            Write-Debug "$time ----> $phoneNumber"
            if ($richtextboxCTIlogs.lines.count -eq 0)
            {
                $richtextboxCTIlogs.AppendText("$time ----> $phoneNumber")
            }
            else
            {
                $richtextboxCTIlogs.AppendText("`n$time ----> $phoneNumber")
            }
            $richtextboxCTIlogs.SelectionStart = $richtextboxCTIlogs.TextLength;
            $richtextboxCTIlogs.ScrollToCaret()
        }
        <#else
        {
            Write-Debug "found nothin"
        }#>
    }
}
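Not a full rewrite of the function, but a rough sketch of the Get-Content -Wait approach inside a plain job, assuming $CTIlogPath points at the CTI log: -Tail 0 combined with -Wait emits only lines appended after the command starts, so the old content is never replayed, and Receive-Job then only ever returns the new lines.

# Sketch only: tail new lines from the CTI log inside a background job.
$ctiJob = Start-Job -ArgumentList $CTIlogPath -ScriptBlock {
    Param ($CTIlogPath)
    # -Tail 0 skips the existing content; -Wait keeps emitting lines as they are appended.
    Get-Content -Path $CTIlogPath -Tail 0 -Wait
}

# Elsewhere (e.g. in the -UpdateScript), drain whatever has arrived since the last poll:
$results = Receive-Job -Job $ctiJob | Out-String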

Limited powershell start-jobs

I'm curious whether you can answer this or point me in the right direction.
I've written a script that tests/monitors URLs. I'm not posting the code (unless you want me to) because there is no error in the code; it works great. I can even run it as a script block as part of Start-Job. The issue seems to be that I can't run more than 3 jobs at a time, or they hang, and I'm not sure why. I can run it against a total of 15 URLs throttled to 3 and it's fine. If I run it on 15 URLs with 4 as my run limit, the jobs hang; I can kill them one at a time until only 3 remain, and those finish. So it seems I can only start a total of 3 PowerShell instances before they hang. Can anyone explain why this is? All my searches lead to pages that show how to throttle, and that's not really my issue.
Watching the processes, each consumes about 25 MB of memory and sits there idle... If I kill one, the other 3 start using CPU, grow to maybe 30 MB of memory, and terminate as completed. The system has 8 GB of memory and a quad-core i5-2400 CPU @ 3.10 GHz. As requested...
Param(
    $file
)

$testscript =
{
    Param(
        [string]$url,
        #[ValidateSet('InternetExplorer','Chrome','Firefox','Safari','Opera', IgnoreCase = $true)]
        [string]$browser="InternetExplorer",
        [string]$teststring="Solution Center",
        [int]$timeout=20,
        [int]$retry
    )
    $i=0
    do {
        $userAgent = [Microsoft.PowerShell.Commands.PSUserAgent]::$browser
        $data = Invoke-WebRequest $url -UserAgent $userAgent -TimeoutSec $timeout
        $data.Content
        $findit = $data.Content.Contains($teststring)
        $i++
        If ($findit) {
            break
        }
    }
    while ($i -lt $retry)

    if (!$findit) {
        Echo "opcmsg a=PSURLCheck o=NHTSA msg_t='$teststring was not found on $url or $url failed to load'"
    }
}

$urls = Import-Csv $file | % {
    Start-Job -ScriptBlock $testscript -ArgumentList $_.url, $_.browser, $_.teststring, $_.retry
}

While (@(Get-Job | Where { $_.State -eq "Running" }).Count -ne 0)
{
    Write-Host "Processing URLs..."
    Get-Job
    Start-Sleep -Seconds 5
}

$Data = ForEach ($Job in (Get-Job)) {
    Receive-Job $Job
    Remove-Job $Job
}

$data | select *
So I've used new System.Net.WebClient, and I've even tried doing this with [System.Collections.Queue]... but all three methods use jobs... so it appears I cannot run more than three started jobs at any one time.
Are you sure your code is fine? If you're calling separate powershell sessions multiple times, memory can be consumed very quickly. Check Process Monitor for high CPU or memory usage and ensure your script blocks are terminating. Or post the code.