I have this script to record the IPs on a network so I can compare them to historic packet captures as part of a larger problem-solving exercise.
function qp($comp)
{
    $p = ping $comp -n 1 -w 2 -4
    if ($? -eq $true) { $out = $p[1].Split("[")[1].Split("]")[0] }
    else { $out = $false }
    return $out
}
$comps = Get-Content C:\PacketCapture\comps.txt
do
{
    foreach ($comp in $comps)
    {
        echo "$(qp $comp);$comp" >> "C:\PacketCapture\IP_$(Get-Date -Format HHmm-ddMMyy).txt"
    }
    Start-Sleep 3600
}
until ($null -eq "WANG")
I set this off initially a fortnight ago. By the middle of last week, the terminal it was running on was grinding to a halt, with memory use for the PowerShell process at nearly 2 GB.
I stopped and restarted it, and it was back up to 1.2 GB of RAM this morning.
Whilst not particularly critical, I've modified this to run once and then stop/start itself. I'm interested to know which element is causing the memory leak, and how I would identify it in the future.
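For reference, a minimal sketch of that run-once approach, assuming the loop body is saved as C:\PacketCapture\qp.ps1 (a hypothetical path) and the ScheduledTasks cmdlets are available (Windows 8 / Server 2012 or later). Each run gets a fresh powershell.exe, so anything the long-lived process was holding onto is released when it exits:

$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -File C:\PacketCapture\qp.ps1'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Hours 1) -RepetitionDuration (New-TimeSpan -Days 365)
Register-ScheduledTask -TaskName 'IP-Capture' -Action $action -Trigger $trigger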
You could try invoking garbage collection manually. I think the do/until loop is a good place for it:
do
{
    foreach ($comp in $comps)
    {
        echo "$(qp $comp);$comp" >> "C:\PacketCapture\IP_$(Get-Date -Format HHmm-ddMMyy).txt"
    }
    Start-Sleep 3600
    [GC]::Collect()
}
until ($null -eq "WANG")
Try introducing garbage collection into your code with [System.GC]::Collect()
Source : https://dmitrysotnikov.wordpress.com/2012/02/24/freeing-up-memory-in-powershell-using-garbage-collector
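To identify this kind of growth in the future, one option is to log the PowerShell process's own memory figures on each pass of the loop; a minimal sketch (the log path is hypothetical):

# Record this process's working set and managed-heap size once per pass
$ws      = (Get-Process -Id $PID).WorkingSet64 / 1MB
$managed = [GC]::GetTotalMemory($false) / 1MB
Add-Content C:\PacketCapture\mem.log ("{0:u};{1:n1} MB working set;{2:n1} MB managed" -f (Get-Date), $ws, $managed)

Comparing the managed figure before and after [GC]::Collect() shows how much was genuinely reclaimable; a working set that keeps climbing while the managed heap stays flat points at unmanaged or output-buffer growth instead.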
I need some help with a PowerShell script to kill all PIDs that have been non-responsive for 3 minutes.
This is my script, but it is not doing the trick. It runs, but I need it to run forever in a while loop, since the computer is running until the end of the working day.
I need a list of all the processes that have been unresponsive for a period of 3 minutes. After 3 minutes, if the processes on that list still have the status "Not Responding", kill them. I don't want to kill processes that are unresponsive for 5 seconds or so, only those that have been hanging for more than 3 minutes.
My purpose is to kill the PIDs that have been running with the status "Not Responding" for more than 3 minutes.
As you know, processes are sometimes unresponsive for a couple of seconds, e.g. IE hangs for 7 seconds until the server responds with the DOM, hence I need to close only the PIDs that have been hanging with the status "Not Responding" for more than 3 minutes.
while (1) {
    # Get the processes to check; filter by name here if needed, e.g. Get-Process -Name $pN
    $allProcesses = Get-Process -ErrorAction SilentlyContinue
    foreach ($oneProcess in $allProcesses) {
        if ( -not $oneProcess.Responding ) {
            write "Status = Not Responding: Kill & Restart.."
            $oneProcess.Kill()
            ## restart ..
        } else {
            write "Status = either normal or not detectable (no Window-Handle)"
        }
    }
    start-sleep 5
}
A quick and dirty solution, not tested, based on the idea of storing the process info in a hashtable and re-checking after the sleep period. Like so,
while ($true) {
    # Get a list of non-responding processes
    $ps = get-process | ? { $_.responding -eq $false }
    $ht = @{}
    # Store process info in a hash table.
    foreach ($p in $ps) {
        $o = new-object psobject -Property @{ "name"=$p.name; "status"=$p.responding; "time"=get-date; "pid"=$p.id }
        $ht.Add($o.pid, $o)
    }
    # Sleep for a while
    start-sleep -minutes 3
    # Get a list of non-responding processes, again
    $ps = get-process | ? { $_.responding -eq $false }
    foreach ($p in $ps) {
        # Check if the process already is in the hash table
        if ($ht.ContainsKey($p.id)) {
            # Calculate the difference, in minutes, between the time the
            # process was first seen hanging and now. If it has been
            # non-responsive for 3 minutes or more, kill it.
            if ( ((get-date) - $ht[$p.id].Time).TotalMinutes -ge 3 ) {
                # Actually kill it
                $p.kill()
            }
        }
    }
}
It's certainly possible to store process objects in the hashtable, but in most cases all you need is the process id. Mind that process ids are recycled. If the machine is spawning a lot of processes, it may be reasonable to also check the process's StartTime so that a newly created process that happens to reuse an id isn't killed instead.
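A minimal sketch of that guard, assuming the first-seen time and the process's StartTime are both recorded (hypothetical variable names):

# Remember when each hung process was first seen, plus when it started
$seen = @{}
foreach ($p in Get-Process | Where-Object { -not $_.Responding }) {
    $seen[$p.Id] = @{ When = Get-Date; Started = $p.StartTime }
}
Start-Sleep -Minutes 3
foreach ($p in Get-Process | Where-Object { -not $_.Responding }) {
    $e = $seen[$p.Id]
    # A matching StartTime proves the id was not recycled by a new process
    if ($e -and $e.Started -eq $p.StartTime) { $p.Kill() }
}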
I've written a script in PowerShell 3.0 to monitor a log file for specific errors. The script starts a background process, which monitors the file. When anything gets written to the file, the background process simply passes it to the foreground process, if it matches the proper format (a datestamped line). The foreground process then counts the number of errors.
Everything works correctly with no errors. The issue is that, as the source logfile grows in size, the memory consumed by PowerShell increases dramatically. These logs are capped at ~24 MB before they are rotated, which amounts to ~250K lines. In my tests, by the time the log reaches ~80K lines or so, the monitor is consuming 250 MB of RAM (foreground and background processes combined); they consume ~70 MB combined when they first start. This type of growth is unacceptable in our environment. What can I do to decrease it?
Here's the script:
# Constants.
$F_IN = "C:\Temp\test.log"
$RE = "^\d+-\d+-\d+ \d+:\d+:\d+,.+ERROR.+Foo$"
$MAX_RESTARTS = 3 # Max restarts for failed background job.
$SLEEP_DELAY = 60 # In seconds.
# Background job.
$SCRIPT_BLOCK = { param($f, $r)
Get-Content -Path $f -Tail 0 -Wait -EA SilentlyContinue `
| Where { $_ -match $r }
}
function Start-FileMonitor {
Param([parameter(Mandatory=$true,Position=0)][alias("f")]
[String]$file,
[parameter(Mandatory=$true,Position=1)][alias("b")]
[ScriptBlock]$SCRIPT_BLOCK,
[parameter(Mandatory=$true,Position=2)][alias("re","r")]
[String]$regex)
$j = Start-Job -ScriptBlock $SCRIPT_BLOCK -Arg $file,$regex
return $j
}
function main {
# Tail log file in the background, return any errors.
$job = Start-FileMonitor -b $SCRIPT_BLOCK -f $F_IN -r $RE
$restarts = 0 # Current number of restarts.
# Poll background $job every $SLEEP_DELAY seconds.
While ($true) {
$a = (Receive-Job $job | Measure-Object)
If ($job.JobStateInfo.State -eq "Running") {
$restarts = 0
If ($a.Count -gt 0) {
$t0 = $a.Count
Write-Host "Error Count: ${t0}"
}
}
Else {
If ($restarts -lt $MAX_RESTARTS) {
$job = Start-FileMonitor -b $SCRIPT_BLOCK -f $F_IN -r $RE
$restarts++
Write-Host "Background job not running. Attempted restart ${restarts}."
}
Else {
Write-Host "`$MAX_RESTARTS (${MAX_RESTARTS}) exceeded. Exiting."
Break
}
}
# Sleep for $SLEEP_DELAY.
Start-Sleep -Seconds $SLEEP_DELAY
}
Write-Host "Done."
}
# Execute script.
main
...and here's the sample data:
2015-11-19 00:00:00, WARN Foo
2015-11-19 00:00:00, ERROR Foo
In order to replicate this issue:
Paste the sample data lines into the file C:\Temp\test.log. Save.
Start the monitoring script.
Paste additional sample data lines into the log and save. Wait for the Error Count: line to confirm that everything is working correctly.
Continue to paste additional lines and watch the memory consumption for powershell.exe in Task Manager. Note how much it increases at 400 lines...800 lines...8,000 lines...80,000 lines...
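If pasting by hand gets tedious, a small generator loop (same path and line format as the sample data above) reproduces the growth just as well:

# Append batches of matching ERROR lines to the monitored log
1..100 | ForEach-Object {
    1..800 | ForEach-Object { Add-Content -Path 'C:\Temp\test.log' -Value '2015-11-19 00:00:00, ERROR Foo' }
    Start-Sleep -Seconds 5 # give the tail job time to catch up
}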
I've been struggling with this for a week now and have exhausted all the methods and options I have found online. I am hoping someone here will be able to help me out with this.
I am using PowerShell to start 8 jobs, each job running FFmpeg to stream a 7-minute file to a remote RTMP server. Each job pulls from a different file on disk, and the command is in a do/while loop so that it constantly restreams.
This causes the shell I launched the jobs from to accumulate a massive amount of memory, consuming all it can. In 24 hours it consumed 30 of the 32 GB on my server.
Here is my launch code, any help would be appreciated.
start-job -Name v6 -scriptblock {
    do {
        $f = Invoke-Expression -Command "ffmpeg -re -i `"C:\Shares\Matthew\180p_3000k.mp4`" -vcodec copy -acodec copy -f flv -y rtmp://<ip>/<appName>/<streamName>"
        $f = $null
    } while ($true) # note: the original "while ($d = $true)" was an assignment, which is always true
}
I've tried receiving the jobs and piping them to Out-Null, I've tried setting $f to $null before starting the do/while loop, and some other things I found online, but to no avail. Thanks everyone for your time!
Better late than never, I guess. I've had the same problem with huge memory consumption when running ffmpeg in PowerShell jobs. The core of the issue is that a PowerShell job stores any and all output in memory, and ffmpeg is extremely happy to log to both the standard output and standard error streams.
My solution was to add the parameter "-loglevel quiet" to ffmpeg. Alternatively you could redirect both the standard and error streams to null (it's not enough to redirect just the standard stream). For more on how to redirect the standard streams, refer to this question: Redirection of standard and error output appending to the same log-file
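As a minimal sketch, here is the job from the question with ffmpeg silenced (same hypothetical paths and RTMP placeholders); "-loglevel quiet" stops ffmpeg's console chatter from piling up in the job's output buffers:

Start-Job -Name v6 -ScriptBlock {
    do {
        & ffmpeg -re -i "C:\Shares\Matthew\180p_3000k.mp4" -vcodec copy -acodec copy -f flv -loglevel quiet -y rtmp://<ip>/<appName>/<streamName>
    } while ($true)
}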
There are many ways to invoke a PowerShell script or an expression.
One of my favorites is using a RunspacePool. In a RunspacePool each task is started in a new runspace; when the task completes, the runspace is disposed of. This should keep memory consumption fairly constant. It is not the easiest method to get your head around, but here is a bare-bones example that might work for you.
$Throttle = 10 # Maximum runspaces that can run simultaneously in the pool
$maxjobs = 8 # Maximum jobs that will run simultaneously
[System.Collections.ArrayList]$jobs = @()
$script:currentjobs = 0
$jobindex = 0
$jobaction = "ffmpeg -re -i `"C:\Shares\Matthew\180p_3000k.mp4`" -vcodec copy -acodec copy -f flv -y rtmp://<ip>/<appName>/<streamName>"
$sessionstate = [system.management.automation.runspaces.initialsessionstate]::CreateDefault()
$runspace = [runspacefactory]::CreateRunspacePool(1, $Throttle, $sessionstate, $Host)
$runspace.Open()
function QueueJob{
    param($jobindex)
    $job = "" | Select-Object JobID,Script,Errors,PS,Runspace,Handle
    $job.JobID = $jobindex
    $job.Script = $jobaction
    $job.PS = [System.Management.Automation.PowerShell]::create()
    $job.PS.RunspacePool = $runspace
    [void]$job.PS.AddScript($job.Script)
    $job.Runspace = $job.PS.BeginInvoke()
    $job.Handle = $job.Runspace.AsyncWaitHandle
    Write-Host "----------------- Starting Job: $("{0:D4}" -f $job.JobID) --------------------" -ForegroundColor Blue
    $jobs.Add($job)
    $script:currentjobs++
}
Function ServiceJobs{
    foreach($job in $Jobs){
        If ($job.Runspace.isCompleted) {
            $result = $job.PS.EndInvoke($job.Runspace)
            $job.PS.Dispose()
            $job.Runspace = $null
            $job.PS = $null
            $script:currentjobs--
            Write-Host "----------------- Job Completed: $("{0:D4}" -f $job.JobID) --------------------" -ForegroundColor Yellow
            Write-Host $result
            $Jobs.Remove($job)
            return # removing from $Jobs invalidates the foreach enumerator, so rescan on the next pass
        }
    }
}
while ($true) {
    While ($script:currentjobs -lt $maxjobs){
        QueueJob $jobindex
        $jobindex++
    }
    ServiceJobs
    Start-Sleep 1
}
$runspace.Close()
[gc]::Collect()
It uses an infinite loop without proper termination and lacks any sort of error checking, but hopefully it is enough to demonstrate the technique.
I'm curious if you can answer this or point me in the right direction.
I've written a script that tests/monitors URLs. I'm not posting the code (unless you want me to) because there is no error in the code; it works great. I can even run it as a script block as part of Start-Job. The issue seems to be that I cannot run more than 3 jobs at a time, or they hang. I'm not sure why this is. I can run it for a total of 15 URLs throttled to 3 and it's great. If I try to run it on 15 URLs with 4 as my run limit, they hang, and I can kill them one at a time until only 3 remain, and those will finish. So it seems that I can only start a total of 3 PowerShell instances or they hang. Can anyone explain why this is? All my searches lead to pages that show how to throttle, and that's not really my issue.
Watching the processes, each consumes about 25 MB of memory and sits there idle... If I kill one, the other 3 will start using CPU, grow to maybe 30 MB of memory, and terminate as completed. The system has 8 GB of memory and a quad-core i5-2400 CPU @ 3.10 GHz. As requested...
Param(
$file
)
$testscript =
{
Param(
[string]$url,
#[ValidateSet('InternetExplorer','Chrome','Firefox','Safari','Opera', IgnoreCase = $true)]
[string]$browser="InternetExplorer",
[string]$teststring="Solution Center",
[int]$timeout=20,
[int]$retry
)
$i=0
do {
$userAgent = [Microsoft.PowerShell.Commands.PSUserAgent]::$browser
$data = Invoke-WebRequest $url -UserAgent $userAgent -TimeoutSec $timeout
$data.Content
$findit = $data.Content.Contains($teststring)
$i++
If ($findit){
break
}
}
while ($i -lt $retry)
if(!$findit) {
Echo "opcmsg a=PSURLCheck o=NHTSA msg_t='$teststring was not found on $url or $url failed to load'"
}
}
$urls = Import-Csv $file | % {
Start-Job -ScriptBlock $testscript -ArgumentList $_.url, $_.browser, $_.teststring, $_.retry
}
While (@(Get-Job | Where { $_.State -eq "Running" }).Count -ne 0)
{ Write-Host "Processing URLs..."
Get-Job
Start-Sleep -Seconds 5
}
$Data = ForEach ($Job in (Get-Job)) {
Receive-Job $Job
Remove-Job $Job
}
$data | select *
So I've used New-Object System.Net.WebClient and I've even tried doing this with [System.Collections.Queue]... but all three methods use jobs... so it appears I cannot run more than three Start-Job instances at any one time.
Are you sure your code is fine? If you're spawning separate PowerShell sessions multiple times, memory can be consumed very quickly. Check Process Monitor for high CPU or memory usage and ensure your script blocks are terminating. Or post the code.
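A quick way to check both suspicions is to watch the job table and the spawned processes side by side; a sketch:

# Are the script blocks finishing, or stuck Running with undrained output?
Get-Job | Format-Table Id, Name, State, HasMoreData
# How much memory is each powershell.exe actually holding?
Get-Process powershell | Format-Table Id, @{ n='WS(MB)'; e={ [math]::Round($_.WorkingSet64 / 1MB) } }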
Is there a simple way to time the execution of a command in PowerShell, like the 'time' command in Linux?
I came up with this:
$s=Get-Date; .\do_something.ps1 ; $e=Get-Date; ($e - $s).TotalSeconds
But I would like something simpler like
time .\do_something.ps1
Yup.
Measure-Command { .\do_something.ps1 }
Note that one minor downside of Measure-Command is that you see no stdout output.
[Update, thanks to @JasonMArcher] You can fix that by piping the command output to some cmdlet that writes to the host, e.g. Out-Default, so it becomes:
Measure-Command { .\do_something.ps1 | Out-Default }
Another way to see the output would be to use the .NET Stopwatch class like this:
$sw = [Diagnostics.Stopwatch]::StartNew()
.\do_something.ps1
$sw.Stop()
$sw.Elapsed
You can also get the last command from history and subtract its EndExecutionTime from its StartExecutionTime.
.\do_something.ps1
$command = Get-History -Count 1
$command.EndExecutionTime - $command.StartExecutionTime
Use Measure-Command
Example
Measure-Command { <your command here> | Out-Host }
The pipe to Out-Host allows you to see the output of the command, which is
otherwise consumed by Measure-Command.
Simples
function time($block) {
$sw = [Diagnostics.Stopwatch]::StartNew()
&$block
$sw.Stop()
$sw.Elapsed
}
which you can then use as
time { .\some_command }
You may want to tweak the output
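For example, piping the returned TimeSpan through a format string gives a compact reading (purely illustrative):

time { Start-Sleep -Milliseconds 200 } | ForEach-Object { '{0:n3} s' -f $_.TotalSeconds }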
Here's a function I wrote which works similarly to the Unix time command:
function time {
Param(
[Parameter(Mandatory=$true)]
[string]$command,
[switch]$quiet = $false
)
$start = Get-Date
try {
if ( -not $quiet ) {
iex $command | Write-Host
} else {
iex $command > $null
}
} finally {
$(Get-Date) - $start
}
}
Source: https://gist.github.com/bender-the-greatest/741f696d965ed9728dc6287bdd336874
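Usage, assuming the function above has been loaded; note that the command is passed as a string, not a script block:

time 'Get-ChildItem C:\Windows'      # prints the command's output, then the elapsed time
time 'Start-Sleep -Seconds 2' -quiet # suppresses the output, returns only the elapsed time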
Using Stopwatch and formatting elapsed time:
Function FormatElapsedTime($ts)
{
$elapsedTime = ""
if ( $ts.Minutes -gt 0 )
{
$elapsedTime = [string]::Format( "{0:00} min. {1:00}.{2:00} sec.", $ts.Minutes, $ts.Seconds, $ts.Milliseconds / 10 );
}
else
{
$elapsedTime = [string]::Format( "{0:00}.{1:00} sec.", $ts.Seconds, $ts.Milliseconds / 10 );
}
if ($ts.Hours -eq 0 -and $ts.Minutes -eq 0 -and $ts.Seconds -eq 0)
{
$elapsedTime = [string]::Format("{0:00} ms.", $ts.Milliseconds);
}
if ($ts.Hours -eq 0 -and $ts.Minutes -eq 0 -and $ts.Seconds -eq 0 -and $ts.Milliseconds -eq 0)
{
$elapsedTime = [string]::Format("{0} ms", $ts.TotalMilliseconds);
}
return $elapsedTime
}
Function StepTimeBlock($step, $block)
{
Write-Host "`r`n*****"
Write-Host $step
Write-Host "`r`n*****"
$sw = [Diagnostics.Stopwatch]::StartNew()
&$block
$sw.Stop()
$time = $sw.Elapsed
$formatTime = FormatElapsedTime $time
Write-Host "`r`n`t=====> $step took $formatTime"
}
Usage Samples
StepTimeBlock ("Publish {0} Reports" -f $Script:ArrayReportsList.Count) {
$Script:ArrayReportsList | % { Publish-Report $WebServiceSSRSRDL $_ $CarpetaReports $CarpetaDataSources $Script:datasourceReport };
}
StepTimeBlock ("My Process") { .\do_something.ps1 }
All the answers so far fall short of the questioner's (and my) desire to time a command by simply adding "time " to the start of the command line. Instead, they all require wrapping the command in braces ({}) to make a script block. Here is a short function that works more like time on Unix:
Function time() {
$command = $args -join ' '
Measure-Command { Invoke-Expression $command | Out-Default }
}
A more PowerShell-inspired way to access the value of the properties you care about:
$myCommand = '.\do_something.ps1' # quoted: Invoke-Expression needs the command as a string
Measure-Command { Invoke-Expression $myCommand } | Select -ExpandProperty Milliseconds
4
As Measure-Command returns a TimeSpan object.
Note: the TimeSpan object also has TotalMilliseconds as a double (such as 4.7322 TotalMilliseconds in my case above), which might be useful to you. Just like TotalSeconds, TotalDays, etc.
(measure-command {your command}).totalseconds
for instance
(measure-command {.\do_something.ps1}).totalseconds
Just a word on drawing (incorrect) conclusions from any of the performance measurement commands referred to in the answers. There are a number of pitfalls that should be taken into consideration aside from looking at the bare invocation time of a (custom) function or command.
Sjoemelsoftware
'Sjoemelsoftware' was voted Dutch word of the year 2015.
Sjoemelen means cheating, and the word sjoemelsoftware came into being due to the Volkswagen emissions scandal. The official definition is "software used to influence test results".
Personally, I think that "sjoemelsoftware" is not always deliberately created to cheat test results, but might originate from accommodating practical situations that happen to resemble test cases, as shown below.
As an example, using the listed performance measurement commands, Language Integrated Query (LINQ)(1) is often qualified as the fastest way to get something done, and it often is, but certainly not always! Anybody who measures a speed increase of a factor of 40 or more in comparison with native PowerShell commands is probably measuring incorrectly or drawing an incorrect conclusion.
The point is that some .NET classes (like LINQ) use lazy evaluation (also referred to as deferred execution(2)). Meaning that when you assign an expression to a variable, it almost immediately appears to be done, but in fact it hasn't processed anything yet!
Let's presume that you dot-source your . .\Dosomething.ps1 command, which contains either a PowerShell or a more sophisticated LINQ expression (for ease of explanation, I have embedded the expressions directly into the Measure-Command):
$Data = @(1..100000).ForEach{[PSCustomObject]@{Index=$_;Property=(Get-Random)}}
(Measure-Command {
$PowerShell = $Data.Where{$_.Index -eq 12345}
}).totalmilliseconds
864.5237
(Measure-Command {
$Linq = [Linq.Enumerable]::Where($Data, [Func[object,bool]] { param($Item); Return $Item.Index -eq 12345})
}).totalmilliseconds
24.5949
The result appears obvious: the latter LINQ command is about 40 times faster than the first PowerShell command. Unfortunately, it is not that simple...
Let's display the results:
PS C:\> $PowerShell
Index Property
----- --------
12345 104123841
PS C:\> $Linq
Index Property
----- --------
12345 104123841
As expected, the results are the same, but if you paid close attention, you will have noticed that it took a lot longer to display the $Linq results than the $PowerShell results.
Let's specifically measure that by just retrieving a property of the resulting object:
PS C:\> (Measure-Command {$PowerShell.Property}).totalmilliseconds
14.8798
PS C:\> (Measure-Command {$Linq.Property}).totalmilliseconds
1360.9435
It took about a factor of 90 longer to retrieve a property of the $Linq object than of the $PowerShell object, and that was just a single object!
Also notice another pitfall: if you run it again, certain steps might appear a lot faster than before; this is because some of the expressions have been cached.
Bottom line: if you want to compare the performance of two functions, you will need to implement them in your use case, start with a fresh PowerShell session, and base your conclusion on the actual performance of the complete solution.
(1) For more background and examples on PowerShell and LINQ, I recommend this site: High Performance PowerShell with LINQ
(2) I think there is a minor difference between the two concepts, as with lazy evaluation the result is calculated when needed, as opposed to deferred execution, where the result is calculated when the system is idle.
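To see the deferred execution at work, you can force the LINQ query to enumerate inside the measurement; a sketch reusing the $Data from the example above:

# Materializing with ToArray makes the timing include the actual filtering work
(Measure-Command {
    $LinqArray = [Linq.Enumerable]::ToArray(
        [Linq.Enumerable]::Where($Data, [Func[object,bool]] { param($Item) $Item.Index -eq 12345 }))
}).TotalMilliseconds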