Timing a command's execution in PowerShell

Is there a simple way to time the execution of a command in PowerShell, like the 'time' command in Linux?
I came up with this:
$s=Get-Date; .\do_something.ps1 ; $e=Get-Date; ($e - $s).TotalSeconds
But I would like something simpler like
time .\do_something.ps1

Yup.
Measure-Command { .\do_something.ps1 }
Note that one minor downside of Measure-Command is that you see no stdout output.
[Update, thanks to @JasonMArcher] You can fix that by piping the command output to some cmdlet that writes to the host, e.g. Out-Default, so it becomes:
Measure-Command { .\do_something.ps1 | Out-Default }
Another way to see the output would be to use the .NET Stopwatch class like this:
$sw = [Diagnostics.Stopwatch]::StartNew()
.\do_something.ps1
$sw.Stop()
$sw.Elapsed

You can also get the last command from history and subtract its StartExecutionTime from its EndExecutionTime.
.\do_something.ps1
$command = Get-History -Count 1
$command.EndExecutionTime - $command.StartExecutionTime

Use Measure-Command
Example
Measure-Command { <your command here> | Out-Host }
The pipe to Out-Host allows you to see the output of the command, which is
otherwise consumed by Measure-Command.

Simples
function time($block) {
    $sw = [Diagnostics.Stopwatch]::StartNew()
    &$block
    $sw.Stop()
    $sw.Elapsed
}
Then you can use it as
time { .\some_command }
You may want to tweak the output.
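For example, here is a minimal sketch of one possible tweak, returning a compact string instead of the full TimeSpan object (the format string is just an illustration; any .NET TimeSpan format string works):
function time($block) {
    $sw = [Diagnostics.Stopwatch]::StartNew()
    &$block
    $sw.Stop()
    # Format as minutes:seconds.milliseconds
    'Elapsed: {0:mm\:ss\.fff}' -f $sw.Elapsed
}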

Here's a function I wrote which works similarly to the Unix time command:
function time {
    Param(
        [Parameter(Mandatory=$true)]
        [string]$command,
        [switch]$quiet = $false
    )
    $start = Get-Date
    try {
        if ( -not $quiet ) {
            iex $command | Write-Host
        } else {
            iex $command > $null
        }
    } finally {
        $(Get-Date) - $start
    }
}
Source: https://gist.github.com/bender-the-greatest/741f696d965ed9728dc6287bdd336874

Using Stopwatch and formatting elapsed time:
Function FormatElapsedTime($ts)
{
    $elapsedTime = ""
    if ( $ts.Minutes -gt 0 )
    {
        $elapsedTime = [string]::Format( "{0:00} min. {1:00}.{2:00} sec.", $ts.Minutes, $ts.Seconds, $ts.Milliseconds / 10 );
    }
    else
    {
        $elapsedTime = [string]::Format( "{0:00}.{1:00} sec.", $ts.Seconds, $ts.Milliseconds / 10 );
    }
    if ($ts.Hours -eq 0 -and $ts.Minutes -eq 0 -and $ts.Seconds -eq 0)
    {
        $elapsedTime = [string]::Format("{0:00} ms.", $ts.Milliseconds);
    }
    if ($ts.Milliseconds -eq 0)
    {
        $elapsedTime = [string]::Format("{0} ms", $ts.TotalMilliseconds);
    }
    return $elapsedTime
}
Function StepTimeBlock($step, $block)
{
    Write-Host "`r`n*****"
    Write-Host $step
    Write-Host "`r`n*****"
    $sw = [Diagnostics.Stopwatch]::StartNew()
    &$block
    $sw.Stop()
    $time = $sw.Elapsed
    $formatTime = FormatElapsedTime $time
    Write-Host "`r`n`t=====> $step took $formatTime"
}
Usage Samples
StepTimeBlock ("Publish {0} Reports" -f $Script:ArrayReportsList.Count) {
$Script:ArrayReportsList | % { Publish-Report $WebServiceSSRSRDL $_ $CarpetaReports $CarpetaDataSources $Script:datasourceReport };
}
StepTimeBlock ("My Process") { .\do_something.ps1 }

All the answers so far fall short of the questioner's (and my) desire to time a command by simply adding "time " to the start of the command line. Instead, they all require wrapping the command in braces ({}) to make a block. Here is a short function that works more like time on Unix:
Function time() {
    $command = $args -join ' '
    Measure-Command { Invoke-Expression $command | Out-Default }
}
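With that defined, usage matches the syntax the question asked for:
time .\do_something.ps1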

A more PowerShell-inspired way to access the value of the properties you care about (note the command must be a string for Invoke-Expression to work):
$myCommand = '.\do_something.ps1'
Measure-Command { Invoke-Expression $myCommand } | Select -ExpandProperty Milliseconds
4
This works because Measure-Command returns a TimeSpan object.
Note: the TimeSpan object also has TotalMilliseconds as a double (4.7322 TotalMilliseconds in my case above), which might be useful to you. Likewise TotalSeconds, TotalDays, etc.
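For instance, a small sketch capturing the TimeSpan once and reading several of those properties (the values shown are illustrative):
$ts = Measure-Command { .\do_something.ps1 }
$ts.TotalMilliseconds   # e.g. 4.7322
$ts.TotalSeconds        # e.g. 0.0047322
$ts.ToString()          # e.g. 00:00:00.0047322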

(measure-command{your command}).totalseconds
for instance
(measure-command{.\do_something.ps1}).totalseconds

Just a word on drawing (incorrect) conclusions from any of the performance measurement commands referred to in the answers. There are a number of pitfalls that should be taken into consideration aside from looking at the bare invocation time of a (custom) function or command.
Sjoemelsoftware
'Sjoemelsoftware' was voted Dutch word of the year 2015.
Sjoemelen means cheating, and the word sjoemelsoftware came into being due to the Volkswagen emissions scandal. The official definition is "software used to influence test results".
Personally, I think that "Sjoemelsoftware" is not always deliberately created to cheat test results but might originate from accommodating practical situations that are similar to test cases, as shown below.
As an example, using the listed performance measurement commands, Language Integrated Query (LINQ)(1) is often qualified as the fastest way to get something done, and it often is, but certainly not always! Anybody who measures a speed increase of a factor of 40 or more in comparison with native PowerShell commands is probably measuring incorrectly or drawing an incorrect conclusion.
The point is that some .NET classes (like LINQ) use lazy evaluation (also referred to as deferred execution(2)). Meaning that when you assign an expression to a variable, it almost immediately appears to be done, but in fact it hasn't processed anything yet!
Let's presume that you dot-source your . .\Dosomething.ps1 command, which contains either a PowerShell or a more sophisticated LINQ expression (for ease of explanation, I have embedded the expressions directly into the Measure-Command):
$Data = @(1..100000).ForEach{[PSCustomObject]@{Index=$_;Property=(Get-Random)}}
(Measure-Command {
    $PowerShell = $Data.Where{$_.Index -eq 12345}
}).totalmilliseconds
864.5237
(Measure-Command {
    $Linq = [Linq.Enumerable]::Where($Data, [Func[object,bool]] { param($Item); Return $Item.Index -eq 12345})
}).totalmilliseconds
24.5949
The result appears obvious: the latter LINQ command is about 40 times faster than the first PowerShell command. Unfortunately, it is not that simple...
Let's display the results:
PS C:\> $PowerShell
Index Property
----- --------
12345 104123841
PS C:\> $Linq
Index Property
----- --------
12345 104123841
As expected, the results are the same, but if you paid close attention, you will have noticed that it took a lot longer to display the $Linq results than the $PowerShell results.
Let's specifically measure that by just retrieving a property of the resulting object:
PS C:\> (Measure-Command {$PowerShell.Property}).totalmilliseconds
14.8798
PS C:\> (Measure-Command {$Linq.Property}).totalmilliseconds
1360.9435
It took about a factor of 90 longer to retrieve a property of the $Linq object than of the $PowerShell object, and that was just a single object!
Also notice another pitfall: if you do it again, certain steps might appear a lot faster than before; this is because some of the expressions have been cached.
Bottom line: if you want to compare the performance of two functions, you will need to implement them in your use case, start with a fresh PowerShell session, and base your conclusion on the actual performance of the complete solution.
(1) For more background and examples on PowerShell and LINQ, I recommend this site: High Performance PowerShell with LINQ
(2) I think there is a minor difference between the two concepts, as with lazy evaluation the result is calculated when needed, as opposed to deferred execution, where the result is calculated when the system is idle.
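One hedged way to follow the fresh-session advice without closing your console is to run each measurement in a child PowerShell process, so nothing cached in the current session can skew the result (using the question's example script):
# Each invocation starts a clean session, measures, and prints elapsed milliseconds.
powershell -NoProfile -Command "(Measure-Command { .\do_something.ps1 }).TotalMilliseconds"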

Related

Huge memory use from small powershell script

I have this script to trap the IP's on a network to compare to historic packet captures as part of a larger problem solving exercise.
function qp($comp)
{
    $p = ping $comp -n 1 -w 2 -4
    IF($? -eq $true){$out = $p[1].split("[")[1].split("]")[0]}
    else{$out = $False}
    return $out
}
$comps = Get-Content C:\PacketCapture\comps.txt
DO
{
    foreach($comp in $comps)
    {
        ECHO "$(qp $comp);$comp" >>"C:\PacketCapture\IP_$(Get-date -format HHmm-ddMMyy).txt"
    }
    Start-sleep 3600
}
until($null -eq "WANG")
I set this off initially a fortnight ago; by the middle of last week the terminal it was running on was grinding to a halt, as the memory use for the PowerShell process was nearly at 2GB.
I stopped and restarted it, and again we were at 1.2GB RAM use this morning.
Whilst not particularly critical, I've modified this to run once then stop/start itself. I'm interested to know which element is causing the memory leak and how I would identify that in the future.
You could try invoking garbage collection manually. I think the do/until loop is a good place for it:
DO
{
    foreach($comp in $comps)
    {
        ECHO "$(qp $comp);$comp" >>"C:\PacketCapture\IP_$(Get-date -format HHmm-ddMMyy).txt"
    }
    Start-sleep 3600
    [GC]::Collect()
}
until($null -eq "WANG")
Try introducing garbage collection into your code with [System.GC]::Collect()
Source: https://dmitrysotnikov.wordpress.com/2012/02/24/freeing-up-memory-in-powershell-using-garbage-collector
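If you want to see the growth rather than just work around it, one rough sketch (not a real profiler) is to log the PowerShell process's own memory use on each pass of the loop and watch which iteration makes it climb:
# Inside the DO loop: record this process's working set in MB with a timestamp.
$mem = [math]::Round((Get-Process -Id $PID).WorkingSet64 / 1MB, 1)
Add-Content "C:\PacketCapture\mem.log" "$(Get-Date -Format s);$mem MB"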

Understanding performance impact of "write-output"

I'm writing a PowerShell script (PS version 4) to parse and process IIS log files, and I've come across an issue I don't quite understand: Write-Output seems to add significant processing time to the script. The core of it is this (there is more, but this demonstrates the issue):
$file = $args[0]
$linecount = 0
$start = [DateTime]::Now
$reader = [IO.File]::OpenText($file)
while ($reader.Peek() -ge 0) {
    $line = $reader.ReadLine()
    $linecount++
    if (0 -eq ($linecount % 10000)) {
        $batch = [DateTime]::Now
        [Console]::Error.WriteLine(" Processed $linecount lines ($file) in $(($batch - $start).TotalMilliseconds)ms")
        $start = $batch
    }
    $parts = $line.split(' ')
    $out = "$file,$($parts[0]) $($parts[1]),$($parts[2]),$($parts[3]),$($parts[4]),$($parts[5]),$($parts[6]),$($parts[7])"
    ## Send the output out - comment in/out the desired output method
    ## Direct output - roughly 10,000 lines / 880ms
    $out
    ## Via write-output - roughly 10,000 lines / 1500ms
    write-output $out
}
$reader.Close()
Invoked as .\script.ps1 {path_to_340,000_line_IIS_log} > bob.txt; progress/performance timings are given on stderr.
The script above has two output lines: the Write-Output one reports 10,000 lines every 1500ms, whereas the line without Write-Output takes almost half that, averaging about 880ms per 10,000 lines.
I thought that an object defaulted to Write-Output if nothing else consumed it (i.e., I thought that "bob" was equivalent to write-output "bob"), but the times I'm getting argue against this.
What am I missing here?
Just a guess, but:
Looking at the help on Write-Output:
Write-Output [-InputObject] <PSObject[]> [-NoEnumerate] [<CommonParameters>]
You're giving it a list of objects as an argument, so it's having to spend a little time assembling them into an array internally before it does the write, whereas simply outputting them just streams them to the pipeline immediately. You could pipe them to Write-Output instead, but that's going to add another pipeline segment, which might be even worse.
Edit
In addition, you'll find that it's adding 0.062ms per operation ((1500 - 880) / 10000). You have to scale that to very large data sets before it becomes noticeable.
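You can reproduce that per-call overhead in isolation with Measure-Command; a rough sketch (absolute numbers will differ per machine, only the ratio matters):
(Measure-Command { 1..100000 | % { "x" } | Out-Null }).TotalMilliseconds
(Measure-Command { 1..100000 | % { Write-Output "x" } | Out-Null }).TotalMilliseconds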

loop to minus 15 days from the current date

I'm trying to figure out the best way to call an exe that requires a date range parameter (e.g. 20130801-20130815) and then loop it so it subtracts 15 days and calls the exe with the new date range.
I thought of using a do until, but I'm not sure how (I'm new to PowerShell/programming), and I'm sure this is far from the right method :). I've just started to figure this out, so thanks in advance for any/all help.
do {
    $startDate = (Get-Date).adddays(-34)
    $requireddate = some date that is set ad-hoc
    $startdate.ToString("yyyyMMdd")
    #[datetime]::parseexact($startdate,"MMddyyyy",$null)
    # Call THE EXE at this point with the parameters $startdate and $enddate
    $enddate = $startdate.AddDays(-15)
    write-host $enddate.ToString("yyyyMMdd")
}
until ($enddate -eq $requireddate)
You can use a For loop to do what you're trying to achieve as well:
(I've split things out into variables a bit as well, as I find it helps when writing functions.)
$requiredAddDays = 30
$requiredDate = (get-date).AddDays($requiredAddDays)
$startDate = (get-date).AddDays(-34)
$endAddDays = -15
for($i = $startDate; $i -lt $requiredDate; $i = $i.AddDays(1))
{
    Write-Output "$($i.ToString("yyyyMMdd"))-$($i.AddDays($endAddDays).ToString("yyyyMMdd"))"
}
Using Write-Output means that whatever is returned is returned as an object (whereas Write-Host writes text straight to the host console). By returning an object, it can be fed into the pipeline (using the pipe |).
The $(code) syntax in my Write-Output means that whatever is inside the brackets gets evaluated before the string is returned (as an object).
You could go one further and make this a parameterised function if you'll be using it lots:
Function Get-DateRange
{
    Param(
        [datetime]$startDate,
        [int]$endAddDays,
        [datetime]$requiredDate
    )
    for($i = $startDate; $i -lt $requiredDate; $i = $i.AddDays(1))
    {
        Write-Output "$($i.ToString("yyyyMMdd"))-$($i.AddDays($endAddDays).ToString("yyyyMMdd"))"
    }
}
Then you could call it (once it's loaded into your session) by running something like this:
Get-DateRange -startDate (get-Date).AddDays(-10) -endAddDays 15 -requiredDate (get-Date).AddDays(15)
P.S. If you would like to write functions, it might be a good idea to try to keep to the typical PowerShell verbs if you can. Run Get-Verb | Sort-Object Verb to see the whole list. :)
There are lots of ways. If you want to use a PowerShell-specific method (not do..until or while () {}), then you could go with a pipeline:
0..15 | %{
    $changingDate = $startdate.AddDays(-$_)
    # do your work with the .exe & $changingDate
    $changingDate
}

What are some of the most useful yet little known features in the PowerShell language [closed]

A while back I was reading about multi-variable assignments in PowerShell. This lets you do things like this
64 > $a,$b,$c,$d = "A four word string".split()
65 > $a
A
66 > $b
four
Or you can swap variables in a single statement
$a,$b = $b,$a
What little known nuggets of PowerShell have you come across that you think may not be as well known as they should be?
The $$ variable. I often have to do repeated operations on the same file path, for instance checking out a file and then opening it up in VIM. The $$ feature makes this trivial:
PS> tf edit some\really\long\file\path.cpp
PS> gvim $$
It's short and simple but it saves a lot of time.
By far the most powerful feature of PowerShell is its ScriptBlock support. The fact that you can so concisely pass around what are effectively anonymous methods without any type constraints is about as powerful as C++ function pointers and as easy as C# or F# lambdas.
I mean, how cool is it that using ScriptBlocks you can implement a "using" statement (which PowerShell doesn't have inherently)? Or, pre-v2, you could even implement try-catch-finally.
function Using([Object]$Resource,[ScriptBlock]$Script) {
    try {
        &$Script
    }
    finally {
        if ($Resource -is [IDisposable]) { $Resource.Dispose() }
    }
}
Using ($File = [IO.File]::CreateText("$PWD\blah.txt")) {
    $File.WriteLine(...)
}
How cool is that!
A feature that I find is often overlooked is the ability to pass a file to a switch statement.
Switch will iterate through the lines and match against strings (or regular expressions with the -regex parameter), the contents of variables, or numbers; or the line can be passed into an expression to be evaluated as $true or $false:
switch -file 'C:\test.txt'
{
    'sometext' {Do-Something}
    $pwd {Do-SomethingElse}
    42 {Write-Host "That's the answer."}
    {Test-Path $_} {Do-AThirdThing}
    default {'Nothing else matched'}
}
$OFS - output field separator. A handy way to specify how array elements are separated when rendered to a string:
PS> $OFS = ', '
PS> "$(1..5)"
1, 2, 3, 4, 5
PS> $OFS = ';'
PS> "$(1..5)"
1;2;3;4;5
PS> $OFS = $null # set back to default
PS> "$(1..5)"
1 2 3 4 5
Always guaranteeing you get an array result. Consider this code:
PS> $files = dir *.iMayNotExist
PS> $files.length
$files in this case may be $null, a scalar value, or an array of values. $files.length isn't going to give you the number of files found for $null or for a single file. In the single-file case, you will get the file's size! Whenever I'm not sure how much data I'll get back, I always enclose the command in an array subexpression like so:
PS> $files = @(dir *.iMayNotExist)
PS> $files.length # always returns number of files in array
Then $files will always be an array. It may be empty or have only a single element in it but it will be an array. This makes reasoning with the result much simpler.
Array covariance support:
PS> $arr = '127.0.0.1','192.168.1.100','192.168.1.101'
PS> $ips = [system.net.ipaddress[]]$arr
PS> $ips | ft IPAddressToString, AddressFamily -auto
IPAddressToString AddressFamily
----------------- -------------
127.0.0.1 InterNetwork
192.168.1.100 InterNetwork
192.168.1.101 InterNetwork
Comparing arrays using Compare-Object:
PS> $preamble = [System.Text.Encoding]::UTF8.GetPreamble()
PS> $preamble | foreach {"0x{0:X2}" -f $_}
0xEF
0xBB
0xBF
PS> $fileHeader = Get-Content Utf8File.txt -Enc byte -Total 3
PS> $fileheader | foreach {"0x{0:X2}" -f $_}
0xEF
0xBB
0xBF
PS> @(Compare-Object $preamble $fileHeader -sync 0).Length -eq 0
True
For more stuff like this, check out my free eBook, Effective PowerShell.
Along the lines of multi-variable assignments.
$list = 1,2,3,4
While($list) {
    $head, $list = $list
    $head
}
1
2
3
4
I've been using this:
if (!$?) { # if previous command was not successful
    # Do some stuff
}
and I also use $_ (current pipeline object) quite a bit, but these might be more known than other stuff.
The fact that many operators work on arrays as well, returning the elements where a comparison is true, or operating on each element of the array independently:
1..1000 -lt 800 -gt 400 -like "?[5-9]0" -replace 0 -as "int[]" -as "char[]" -notmatch "\d"
This is faster than Where-Object.
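Fittingly for this thread, you can check that claim yourself with Measure-Command; a crude micro-benchmark sketch (run it a few times in a fresh session and compare):
(Measure-Command { 1..100000 -gt 50000 }).TotalMilliseconds
(Measure-Command { 1..100000 | Where-Object { $_ -gt 50000 } }).TotalMilliseconds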
Not a language feature, but super helpful:
F8 -- takes the text you have already typed and searches for a command that starts with that text.
Tab-search through your history with #
Example:
PS> Get-Process explorer
PS> "Ford Explorer"
PS> "Magellan" | Add-Content "great explorers.txt"
PS> type "great explorers.txt"
PS> #expl <-- Hit the <tab> key to cycle through history entries that have the term "expl"
Love this thread. I could list a ton of things after reading Windows PowerShell in Action; there's a disconnect between that book and the documentation. I actually tried to list them all somewhere else here, but got put on hold for "not being a question".
I'll start with foreach with three script blocks (begin/process/end):
Get-ChildItem | ForEach-Object {$sum=0} {$sum++} {$sum}
Speaking of swapping two variables, here's swapping two files:
${c:file1.txt},${c:file2.txt} = ${c:file2.txt},${c:file1.txt}
Search and replace a file:
${c:file.txt} = ${c:file.txt} -replace 'oldstring','newstring'
Using assembly and using namespace statements:
using assembly System.Windows.Forms
using namespace System.Windows.Forms
[messagebox]::show('hello world')
A shorter version of foreach, with properties and methods
ps | foreach name
'hi.there' | Foreach Split .
Use the $() subexpression operator outside of strings to combine two statements:
$( echo hi; echo there ) | measure
Get-content/Set-content with variables:
$a = ''
get-content variable:a | set-content -value there
Anonymous functions:
1..5 | & {process{$_ * 2}}
Give the anonymous function a name:
$function:timestwo = {process{$_ * 2}}
Anonymous function with parameters:
& {param($x,$y) $x+$y} 2 5
You can stream from foreach () with these, where normally you can't:
& { foreach ($i in 1..10) {$i; sleep 1} } | out-gridview
Run processes in background like unix '&', and then wait for them:
$a = start-process -NoNewWindow powershell {timeout 10; 'done a'} -PassThru
$b = start-process -NoNewWindow powershell {timeout 10; 'done b'} -PassThru
$c = start-process -NoNewWindow powershell {timeout 10; 'done c'} -PassThru
$a,$b,$c | wait-process
Or foreach -parallel in workflows:
workflow work {
    foreach -parallel ($i in 1..3) {
        sleep 5
        "$i done"
    }
}
work
Or a workflow parallel block where you can run different things:
function sleepfor($time) { sleep $time; "sleepfor $time done" }
workflow work {
    parallel {
        sleepfor 3
        sleepfor 2
        sleepfor 1
    }
    'hi'
}
work
Three parallel commands in three more runspaces with the api:
$a = [PowerShell]::Create().AddScript{sleep 5;'a done'}
$b = [PowerShell]::Create().AddScript{sleep 5;'b done'}
$c = [PowerShell]::Create().AddScript{sleep 5;'c done'}
$r1,$r2,$r3 = ($a,$b,$c).begininvoke()
$a.EndInvoke($r1); $b.EndInvoke($r2); $c.EndInvoke($r3) # wait
($a,$b,$c).Streams.Error # check for errors
($a,$b,$c).dispose() # cleanup
Parallel processes with invoke-command, but you have to be at an elevated prompt with remote powershell working:
invoke-command localhost,localhost,localhost { sleep 5; 'hi' }
An assignment is an expression:
if ($a = 1) { $a }
$a = $b = 2
Get last array element with -1:
(1,2,3)[-1]
Discard output with [void]:
[void] (echo discard me)
Switch on arrays and $_ on either side:
switch(1,2,3,4,5,6) {
    {$_ % 2} {"Odd $_"; continue}
    4 {'FOUR'}
    default {"Even $_"}
}
Get and set variables in a module:
'$script:count = 0
$script:increment = 1
function Get-Count { return $script:count += $increment }' > counter.psm1 # creating file
import-module .\counter.psm1
$m = get-module counter
& $m Get-Variable count
& $m Set-Variable count 33
See module function definition:
& $m Get-Item function:Get-Count | foreach definition
Run a command with a commandinfo object and the call operator:
$d = get-command get-date
& $d
Dynamic modules:
$m = New-Module {
    function foo {"In foo x is $x"}
    $x=2
    Export-ModuleMember -func foo -var x
}
flags enum:
[flags()] enum bits {one = 1; two = 2; three = 4; four = 8; five = 16}
[bits]31
Little known codes for the -replace operator:
$number Substitutes the last submatch matched by group number.
${name} Substitutes the last submatch matched by a named capture of the form (?<name>...).
$$ Substitutes a single "$" literal.
$& Substitutes a copy of the entire match itself.
$` Substitutes all the text from the argument string before the matching portion.
$' Substitutes all the text of the argument string after the matching portion.
$+ Substitutes the last submatch captured.
$_ Substitutes the entire argument string.
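A few of these in action (a small sketch; the input strings are arbitrary):
'John Smith' -replace '(\w+) (\w+)', '$2, $1'                            # group numbers: Smith, John
'John Smith' -replace '(?<first>\w+) (?<last>\w+)', '${last}, ${first}'  # named captures: Smith, John
'Price: 100' -replace '\d+', '$$$&'                                      # literal $ plus the whole match: Price: $100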
Demo of workflows surviving interruptions using checkpoints. Kill the window or reboot. Then start PS again. Use get-job and resume-job to resume the job.
workflow test1 {
    foreach ($b in 1..1000) {
        $b
        Checkpoint-Workflow
    }
}
test1 -AsJob -JobName bootjob
Emacs edit mode. Pressing Tab lists all the completion options at once. Very useful.
Set-PSReadLineOption -EditMode Emacs
For any command that begins with "get-", you can leave off the "get-":
date
help
The stop-parsing (--%) and end-of-parameters (--) operators:
write-output --% -inputobject
write-output -- -inputobject
Tab completion on wildcards:
cd \pro*iles # press tab
Compile and import a C# module with a cmdlet inside, even on OS X:
Add-Type -Path ExampleModule.cs -OutputAssembly ExampleModule.dll
Import-Module ./ExampleModule.dll
Iterate backwards over a sequence: just use the length of the sequence on one side of the range and 1 on the other:
foreach ($x in $seq.length..1) { Do-Something $seq[$x-1] }

Is it possible to terminate or stop a PowerShell pipeline from within a filter

I have written a simple PowerShell filter that pushes the current object down the pipeline if its date is between the specified begin and end date. The objects coming down the pipeline are always in ascending date order, so as soon as the date exceeds the specified end date I know my work is done, and I would like to tell the pipeline that the upstream commands can abandon their work so that the pipeline can finish. I am reading some very large log files and I will frequently want to examine just a portion of the log. I am pretty sure this is not possible, but I wanted to ask to be sure.
It is possible to break a pipeline with anything that would otherwise break an outside loop or halt script execution altogether (like throwing an exception). The solution, then, is to wrap the pipeline in a loop that you can break out of if you need to stop the pipeline. For example, the code below will return the first item from the pipeline and then break the pipeline by breaking the outside do-while loop:
do {
    Get-ChildItem|% { $_;break }
} while ($false)
This functionality can be wrapped into a function like this, where the last line accomplishes the same thing as above:
function Breakable-Pipeline([ScriptBlock]$ScriptBlock) {
    do {
        . $ScriptBlock
    } while ($false)
}
Breakable-Pipeline { Get-ChildItem|% { $_;break } }
It is not possible to stop an upstream command from a downstream command. The downstream command will continue to filter out objects that do not match your criteria, but the first command will process everything it was set to process.
The workaround is to do more filtering on the upstream cmdlet or function/filter. Working with log files makes it a bit more complicated, but perhaps using Select-String and a regular expression to filter out the undesired dates might work for you.
Unless you know how many lines you want to take and from where, the whole file will be read to check for the pattern.
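For example, a hedged sketch along those lines, assuming the log lines begin with an ISO-style date (the file name and pattern are illustrative and would need adjusting to your log format; the whole file is still read, but the filtering is cheap):
# Emit only lines dated 2013-08-01 through 2013-08-15, without custom filtering downstream.
Select-String -Path .\huge.log -Pattern '^2013-08-(0[1-9]|1[0-5])' |
    ForEach-Object Line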
You can throw an exception when ending the pipeline.
gc demo.txt -ReadCount 1 | %{$num=0}{$num++; if($num -eq 5){throw "terminated pipeline!"}else{write-host $_}}
or
Look at this post about how to terminate a pipeline: https://web.archive.org/web/20160829015320/http://powershell.com/cs/blogs/tobias/archive/2010/01/01/cancelling-a-pipeline.aspx
Not sure about your exact needs, but it may be worth your time to look at Log Parser to see if you can't use a query to filter the data before it even hits the pipe.
If you're willing to use non-public members, here is a way to stop the pipeline. It mimics what Select-Object does. invoke-method (alias im) is a function to invoke non-public methods. select-property (alias selp) is a function to select (similar to Select-Object) non-public properties; however, it automatically acts like -ExpandProperty if only one matching property is found. (I wrote select-property and invoke-method at work, so I can't share the source code of those.)
# Get the system.management.automation assembly
$script:smaa = [appdomain]::currentdomain.getassemblies() |
    ? location -like "*system.management.automation*"
# Get the StopUpstreamCommandsException class
$script:upcet = $smaa.gettypes() | ? name -like "*StopUpstreamCommandsException*"
function stop-pipeline {
    # Create a StopUpstreamCommandsException
    $upce = [activator]::CreateInstance($upcet, @($pscmdlet))
    $PipelineProcessor = $pscmdlet.CommandRuntime | select-property PipelineProcessor
    $commands = $PipelineProcessor | select-property commands
    $commandProcessor = $commands[0]
    $ci = $commandProcessor | select-property commandinfo
    $upce.RequestingCommandProcessor | im set_commandinfo @($ci)
    $cr = $commandProcessor | select-property commandruntime
    $upce.RequestingCommandProcessor | im set_commandruntime @($cr)
    $null = $PipelineProcessor |
        invoke-method recordfailure @($upce, $commandProcessor.command)
    if ($commands.count -gt 1) {
        $doCompletes = @()
        1..($commands.count-1) | % {
            write-debug "Stop-pipeline: added DoComplete for $($commands[$_])"
            $doCompletes += $commands[$_] | invoke-method DoComplete -returnClosure
        }
        foreach ($DoComplete in $doCompletes) {
            $null = & $DoComplete
        }
    }
    throw $upce
}
EDIT: per mklement0's comment:
Here is a link to the Nivot Ink blog on a script in the "poke" module, which similarly gives access to non-public members.
As far as additional comments go, I don't have meaningful ones at this point. This code just mimics what a decompilation of Select-Object reveals. The original MS comments (if any) are, of course, not in the decompilation. Frankly, I don't know the purpose of the various types the function uses; getting that level of understanding would likely require a considerable amount of effort.
My suggestion: get Oisin's poke module, tweak the code to use that module, and then try it out. If you like the way it works, then use it and don't worry how it works (that's what I did).
Note: I haven't studied "poke" in any depth, but my guess is that it doesn't have anything like -returnClosure. However, adding that should be as easy as this:
if (-not $returnClosure) {
    $methodInfo.Invoke($arguments)
} else {
    {$methodInfo.Invoke($arguments)}.GetNewClosure()
}
Here's an - imperfect - implementation of a Stop-Pipeline cmdlet (requires PS v3+), gratefully adapted from this answer:
#requires -version 3
Filter Stop-Pipeline {
    $sp = { Select-Object -First 1 }.GetSteppablePipeline($MyInvocation.CommandOrigin)
    $sp.Begin($true)
    $sp.Process(0)
}
# Example
1..5 | % { if ($_ -gt 2) { Stop-Pipeline }; $_ } # -> 1, 2
Caveat: I don't fully understand how it works, though fundamentally it takes advantage of Select -First's ability to stop the pipeline prematurely (PS v3+). However, in this case there is one crucial difference from how Select -First terminates the pipeline: downstream cmdlets (commands later in the pipeline) do not get a chance to run their end blocks.
Therefore, aggregating cmdlets (those that must receive all input before producing output, such as Sort-Object, Group-Object, and Measure-Object) will not produce output if placed later in the same pipeline; e.g.:
# !! NO output, because Sort-Object never finishes.
1..5 | % { if ($_ -gt 2) { Stop-Pipeline }; $_ } | Sort-Object
Background info that may lead to a better solution:
Thanks to PetSerAl, my answer here shows how to produce the same exception that Select-Object -First uses internally to stop upstream cmdlets.
However, there the exception is thrown from inside the cmdlet that is itself connected to the pipeline to stop, which is not the case here:
Stop-Pipeline, as used in the examples above, is not connected to the pipeline that should be stopped (only the enclosing ForEach-Object (%) block is), so the question is: How can the exception be thrown in the context of the target pipeline?
Try these filters; they'll force the pipeline to stop after the first object (or the first n elements) and store it (or them) in a variable. You need to pass the name of the variable; if you don't, the object(s) are pushed out but cannot be assigned to a variable.
filter FirstObject ([string]$vName = '') {
    if ($vName) {sv $vName $_ -s 1} else {$_}
    break
}
filter FirstElements ([int]$max = 2, [string]$vName = '') {
    if ($max -le 0) {break} else {$_arr += ,$_}
    if (!--$max) {
        if ($vName) {sv $vName $_arr -s 1} else {$_arr}
        break
    }
}
# can't assign to a variable directly
$myLog = get-eventLog security | ... | firstObject
# pass the varName
get-eventLog security | ... | firstObject myLog
$myLog
# can't assign to a variable directly
$myLogs = get-eventLog security | ... | firstElements 3
# pass the number of elements and the varName
get-eventLog security | ... | firstElements 3 myLogs
$myLogs
####################################
get-eventLog security | % {
    if ($_.timegenerated -lt (date 11.09.08) -and`
        $_.timegenerated -gt (date 11.01.08)) {$log1 = $_; break}
}
#
$log1
Another option would be to use the -file parameter on a switch statement. Using -file will read the file one line at a time, and you can use break to exit immediately without reading the rest of the file.
switch -file $someFile {
    # Parse current line for later matches.
    { $script:line = [DateTime]$_ } { }
    # If less than min date, keep looking.
    { $line -lt $minDate } { Write-Host "skipping: $line"; continue }
    # If greater than max date, stop checking.
    { $line -gt $maxDate } { Write-Host "stopping: $line"; break }
    # Otherwise, date is between min and max.
    default { Write-Host "match: $line" }
}