Pipe a single object and process it without ForEach-Object - powershell

Original Question
I am piping a single string and processing it with ForEach-Object as follows:
"Test" | % { $_.Substring(0,1) }
It seems wrong to process a single piped item using ForEach-Object, partly because it's misleading to future code maintainers. I don't know any other way, though, to capture the string while saying "it's just a single item." For instance, neither of these works:
"Test" | $_.Substring(0,1)
"Test" | { $_.Substring(0,1) }
How can I process a single object while indicating that I expect only one?
Edit: Add the actual use case
The above is a simplified version of what I'm actually trying to accomplish. I am getting the first paragraph of a Wikipedia article, which is part of a larger function that saves the result to a file.
curl "www.wikipedia.org/wiki/Hope,_British_Columbia" |
select -expand allelements |
? { $_.id -eq "mw-content-text" } |
select -expand innerHTML |
% {
$i = $_.IndexOf("<P>");
$j = $_.IndexOf("</P>");
$_.Substring($i, $j - $i) -replace '<[^>]*>'
}
The part that needs to process a single object follows the select -expand innerHTML expression. Piping is my preferred way because putting multiple parentheses around the curl part seems ugly.
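For reference, a sketch of the parenthesized alternative (assuming the same page structure):
$innerHtml = ((curl "www.wikipedia.org/wiki/Hope,_British_Columbia").AllElements |
    ? { $_.id -eq "mw-content-text" }).innerHTML
$i = $innerHtml.IndexOf("<P>")
$j = $innerHtml.IndexOf("</P>")
$innerHtml.Substring($i, $j - $i) -replace '<[^>]*>'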
Aliases
curl is Invoke-WebRequest
select is Select-Object
-expand is -ExpandProperty
? is Where-Object
% is ForEach-Object

If you are creating single-purpose code where you control both the input and the output, and there will always be only one object, then using the pipeline is overkill and not really appropriate. Just pass the string as a parameter to your function.
function f([String]$s) {
    $s.Substring(0,1)
}
PS> f "Test"
T
If you're building a general-purpose function to take input from the pipeline, your function needs to account for more than one object in the stream. Fortunately PowerShell has a natural way to do this using the Process{} block, which is executed once for each item in the input pipeline.
function f {
    param(
        [Parameter(ValueFromPipeline=$true)]
        [String]$item
    )
    process {
        $item.Substring(0,1)
    }
}
PS> '123','abc','#%#' | f
1
a
#
This is a common enough pattern that PowerShell has a shorthand for writing a function that takes input from the pipeline and contains only a process block.
filter f {
    $_.Substring(0,1)
}
PS> '123','abc','#%#' | f
1
a
#
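Applied to the question's actual use case, the filter approach might look like this (a sketch; Get-FirstParagraph is an illustrative name):
filter Get-FirstParagraph {
    $i = $_.IndexOf("<P>")
    $j = $_.IndexOf("</P>")
    $_.Substring($i, $j - $i) -replace '<[^>]*>'
}
curl "www.wikipedia.org/wiki/Hope,_British_Columbia" |
    select -expand allelements |
    ? { $_.id -eq "mw-content-text" } |
    select -expand innerHTML |
    Get-FirstParagraph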


Tab-complete a parameter value based on another parameter's already specified value

This question addresses the following scenario:
Can custom tab-completion for a given command dynamically determine completions based on the value previously passed to another parameter on the same command line, using either a parameter-level [ArgumentCompleter()] attribute or the Register-ArgumentCompleter cmdlet?
If so, what are the limitations of this approach?
Example scenario:
A hypothetical Get-Property command has an -Object parameter that accepts an object of any type, and a -Property parameter that accepts the name of a property whose value to extract from the object.
Now, in the course of typing a Get-Property call, if a value is already specified for -Object, tab-completing -Property should cycle through the names of the specified object's (public) properties.
$obj = [pscustomobject] @{ foo = 1; bar = 2; baz = 3 }
Get-Property -Object $obj -Property # <- pressing <tab> here should cycle
# through 'foo', 'bar', 'baz'
@mklement0, regarding the first limitation stated in your answer:
The custom-completion script block ({ ... }) invoked by PowerShell fundamentally only sees values specified via parameters, not via the pipeline.
I struggled with this, and after some stubbornness I got a working solution.
At least good enough for my tooling, and I hope it can make life easier for many others out there.
This solution has been verified to work with PowerShell versions 5.1 and 7.1.2.
Here I made use of $cmdAst (called $commandAst in the docs), which contains information about the pipeline. With it we can find the previous pipeline element and even differentiate between it being a bare variable or a command. Yes, a command: with the help of Get-Command and the command's OutputType member, we can get (suggested) property names for those as well!
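For instance, property names can be derived from a command's declared output types roughly like this (illustrative, using Group-Object, whose declared output type is GroupInfo):
(Get-Command Group-Object).OutputType.Type |
    ForEach-Object { $_.GetProperties() } |
    ForEach-Object Name   # -> Count, Name, Group, Values, ...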
Example usage
PS> $obj = [pscustomobject] @{ foo = 1; bar = 2; baz = 3 }
PS> $obj | Get-Property -Property # <tab>: bar, baz, foo
PS> "la", "na", "le" | Select-String "a" | Get-Property -Property # <tab>: Chars, Context, Filename, ...
PS> 2,5,2,2,6,3 | group | Get-Property -Property # <tab>: Count, Values, Group, ...
Function code
Note that apart from now using $cmdAst, I also added [Parameter(ValueFromPipeline=$true)] so we actually pick up the object from the pipeline, and a PROCESS {$Object.$Property} block so that one can test and see the code actually working.
function Get-Property {
    param(
        [Parameter(ValueFromPipeline=$true)]
        [object] $Object,

        [ArgumentCompleter({
            param($cmdName, $paramName, $wordToComplete, $cmdAst, $preBoundParameters)
            # Find out if we have pipeline input.
            $pipelineElements = $cmdAst.Parent.PipelineElements
            $thisPipelineElementAsString = $cmdAst.Extent.Text
            $thisPipelinePosition = [array]::IndexOf($pipelineElements.Extent.Text, $thisPipelineElementAsString)
            $hasPipelineInput = $thisPipelinePosition -ne 0

            $possibleArguments = @()
            if ($hasPipelineInput) {
                # If we are in a pipeline, find out if the previous pipeline element is a variable or a command.
                $previousPipelineElement = $pipelineElements[$thisPipelinePosition - 1]
                $pipelineInputVariable = $previousPipelineElement.Expression.VariablePath.UserPath
                if (-not [string]::IsNullOrEmpty($pipelineInputVariable)) {
                    # If the previous pipeline element is a variable, get the object.
                    # Note that it can be a non-existent variable; in such a case we simply get nothing.
                    $detectedInputObject = Get-Variable |
                        Where-Object {$_.Name -eq $pipelineInputVariable} |
                        ForEach-Object Value
                } else {
                    $pipelineInputCommand = $previousPipelineElement.CommandElements[0].Value
                    if (-not [string]::IsNullOrEmpty($pipelineInputCommand)) {
                        # If the previous pipeline element is a command, check if it exists as a command.
                        $possibleArguments += Get-Command -CommandType All |
                            Where-Object Name -Match "^$pipelineInputCommand$" |
                            # Collect properties for each documented output type.
                            ForEach-Object {$_.OutputType.Type} | ForEach-Object GetProperties |
                            # Group properties by Name to get unique ones, and sort them by
                            # the most frequent Name first. The sorting is a perk.
                            # A command can have multiple output types; if so, we might now
                            # have multiple properties with identical Names.
                            Group-Object Name -NoElement | Sort-Object Count -Descending |
                            ForEach-Object Name
                    }
                }
            } elseif ($preBoundParameters.ContainsKey("Object")) {
                # If not in a pipeline, but an object has been given, get the object.
                $detectedInputObject = $preBoundParameters["Object"]
            }

            if ($null -ne $detectedInputObject) {
                # The input object might be an array of objects; if so, select the first one.
                # We (at least I) are not interested in the array's properties, but in the object element's properties.
                if ($detectedInputObject -is [array]) {
                    $sampleInputObject = $detectedInputObject[0]
                } else {
                    $sampleInputObject = $detectedInputObject
                }
                # Collect property names.
                $possibleArguments += $sampleInputObject | Get-Member -MemberType Properties | ForEach-Object Name
            }

            # Referring to the about_Functions_Argument_Completion documentation:
            # The ArgumentCompleter script block must unroll the values using the pipeline,
            # such as ForEach-Object, Where-Object, or another suitable method.
            # Returning an array of values causes PowerShell to treat the entire array as one tab completion value.
            $possibleArguments | Where-Object {$_ -like "$wordToComplete*"}
        })]
        [string] $Property
    )

    PROCESS { $Object.$Property }
}
Update: See betoz's helpful answer for a more complete solution that also supports pipeline input.
The part of the answer below that clarifies the limitations of pre-execution detection of the input objects' data type still applies.
The following solution uses a parameter-specific [ArgumentCompleter()] attribute as part of the definition of the Get-Property function itself, but the solution analogously applies to separately defining custom-completion logic via the Register-ArgumentCompleter cmdlet.
Limitations:
[See betoz's answer for how to overcome this limitation] The custom-completion script block ({ ... }) invoked by PowerShell fundamentally only sees values specified via parameters, not via the pipeline.
That is, if you type Get-Property -Object $obj -Property <tab>, the script block can determine that the value of $obj is to be bound to the -Object parameter, but that wouldn't work with
$obj | Get-Property -Property <tab> (even if -Object is declared as pipeline-binding).
Fundamentally, only values that can be evaluated without side effects are actually accessible in the script block; in concrete terms, this means:
Literal values (e.g., -Object ([pscustomobject] @{ foo = 1; bar = 2; baz = 3 }))
Simple variable references (e.g., -Object $obj) or property-access or index-access expressions (e.g., -Object $obj.Foo or -Object $obj[0])
Notably, the following values are not accessible:
Method-call results (e.g., -Object $object.Foo())
Command output (via (...), $(...), or @(...), e.g.
-Object (Invoke-RestMethod http://example.org))
The reason for this limitation is that evaluating such values before actually submitting the command could have undesirable side effects and / or could take a long time to complete.
function Get-Property {
    param(
        [object] $Object,

        [ArgumentCompleter({
            # A fixed list of parameters is passed to an argument-completer script block.
            # Here, only two are of interest:
            #  * $wordToComplete:
            #    The part of the value that the user has typed so far, if any.
            #  * $preBoundParameters (called $fakeBoundParameters in the docs):
            #    A hashtable of those (future) parameter values specified so
            #    far that are side effect-free (see above).
            param($cmdName, $paramName, $wordToComplete, $cmdAst, $preBoundParameters)

            # Was a side effect-free value specified for -Object?
            if ($obj = $preBoundParameters['Object']) {
                # Get all property names of the object and filter them
                # by the partial value already typed, if any,
                # interpreted as a name prefix.
                @($obj.psobject.Properties.Name) -like "$wordToComplete*"
            }
        })]
        [string] $Property
    )
    # ...
}
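To see what the completer's core expression produces, you can run it on its own (a sketch using the question's object):
$obj = [pscustomobject] @{ foo = 1; bar = 2; baz = 3 }
$wordToComplete = 'b'
@($obj.psobject.Properties.Name) -like "$wordToComplete*"   # -> bar, baz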

PowerShell Pipeline iteration

I often write functions that need to accept pipelines like:
function Invoke-ExchangeHubChecks
{
    [cmdletbinding()]
    param
    (
        [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
        [object[]]$oServersToTest,
        [int]$QueueWarnThreshold = 50,
        [int]$QueueFailThreshold = 100
    )
    begin
    {
        $oHubTests = @()
        $c = 0
    }
    process
    {
        $c++
        $Server = $oServersToTest[0]
        $oHubTests += [pscustomobject]@{
            Identity = $Server.Identity
            Queue = "blah"
            Count = $c
        }
    }
    end
    {
        $oHubTests
    }
}
I often see code snippets with a foreach inside the process block. My understanding is that the process block replaces the need for the foreach loop (in most cases).
I need to add a "filter" to the input object "$oServersToTest". I have used "where-object" to create the filter:
$Server = $oServersToTest[0] | Where-Object {$_.IsHub}
However, as it iterates over the collection, it still adds objects that don't match the where clause to the output, as the $c counter shows:
Output:
Identity Queue Count
-------- ----- -----
Server01 blah 1
Server02 blah 2
Server03 blah 3
Server04 blah 4
Server05 blah 5
blah 6
blah 7
Server06 blah 8
Server07 blah 9
Server08 blah 10
Server09 blah 11
Server10 blah 12
blah 13
blah 14
Is this an example where I must use a foreach inside the process block:
Process
{
    $c++
    #$Server = $oServersToTest[0] | Where-Object {$_.IsHubServer}
    $TestResult = @()
    $oServersToTest | Where-Object {$_.IsHubServer} | ForEach-Object {
        $TestResult += [pscustomobject]@{
            Identity = $_.Identity
            Queue = "blah"
            Count = $c
        }
    }
    $oHubTests += $TestResult
}
End
{
    $oHubTests
}
Which works, but why does this work and not the other way? To me, it's doing twice the work.
(PowerShell 5.0)
TIA
I would use a foreach() loop inside the process block.
As Jeff Zeitlin points out, this also lets you support processing of collections with named parameters and pipelining alike.
The reason for using foreach() over ForEach-Object is three-fold:
Performance:
When the input collection is already in memory (which it is in both cases), the overhead from implicit parameter binding will make ForEach-Object slower than foreach()
Readability:
IMO, having a named variable (like $Server) makes the script more readable and you avoid confusion with nested ForEach-Object or Where-Object clauses inside the loop
Functionality:
With a plain ol' foreach(), you get fast flow control options like break and continue for free.
process {
    $oHubTests += foreach ($server in $oServersToTest | Where-Object {$_.IsHubServer}) {
        $c++
        [pscustomobject]@{
            Identity = $server.Identity
            Queue = "blah"
            Count = $c
        }
    }
}
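As a rough illustration of the performance point above (a sketch; timings vary by machine and PowerShell version):
$data = 1..100000
# foreach statement: no pipeline parameter-binding overhead
(Measure-Command { foreach ($i in $data) { $null = $i } }).TotalMilliseconds
# ForEach-Object: each item is bound to $_ via the pipeline
(Measure-Command { $data | ForEach-Object { $null = $_ } }).TotalMilliseconds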
If the only thing your cmdlet does is take the input and construct corresponding [pscustomobject]s, then I would cut out the $oHubTests part completely and just output the objects as they flow through:
function Invoke-ExchangeHubChecks
{
    [CmdletBinding()]
    param
    (
        [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
        [object[]]$oServersToTest,
        [int]$QueueWarnThreshold = 50,
        [int]$QueueFailThreshold = 100
    )
    begin {
        $c = 0
    }
    process {
        foreach ($server in $oServersToTest | Where-Object {$_.IsHubServer}) {
            $c++
            [pscustomobject]@{
                Identity = $server.Identity
                Queue = "blah"
                Count = $c
            }
        }
    }
}
You are addressing what is, for me, one of the most complicated design structures of PowerShell: creating a cmdlet that can process an array of objects provided either as a parameter or piped in item by item.
For example, do-something -input $items or $items | do-something.
The process block is meant mostly for when a cmdlet accepts pipeline values. It doesn't replace the foreach loop as you think. The pipe has already done an implicit loop over the collection; that is not 100% accurate, but I'll continue with this somewhat inaccurate statement for simplicity.
In such cases, the begin, process, and end blocks allow you to control what happens at each part of the pipe, or imaginary loop. Begin and end are executed only once and can be used for initialization and finalization. In the above example, begin executes first, then process for each item piped from the collection, and then end. But if the collection is passed as a normal parameter, then it's begin, then process once with a loop inside, and then end.
I understand that this is complicated, but try to visualize it with a do-something implementation that just writes to the host what is going on, as sketched below.
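A minimal sketch of such a do-something (the name and messages are illustrative):
function Do-Something {
    param(
        [Parameter(ValueFromPipeline=$true)]
        [object[]]$InputObject
    )
    begin   { Write-Host "begin: runs once" }
    process {
        # Piped input: one item per process call.
        # Parameter input: the whole array in a single process call, hence the loop.
        foreach ($item in $InputObject) { Write-Host "process: $item" }
    }
    end     { Write-Host "end: runs once" }
}
1,2,3 | Do-Something              # process fires three times
Do-Something -InputObject 1,2,3   # process fires once; the loop handles the array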
I had to understand this sequence when developing MarkdownPS. Check this example, and the usage examples at the root: Get-ChildItem | Select-Object Name | New-MDList. FYI, the last example is a demonstration of the inaccurate statement I used in this answer.

Writing $null to PowerShell Output Stream

There are PowerShell cmdlets in our project for finding data in a database. If no data is found, the cmdlets write out a $null to the output stream as follows:
Write-Output $null
Or, more accurately since the cmdlets are implemented in C#:
WriteOutput(null)
I have found that this causes some behavior that is very counter to the conventions employed elsewhere, including in the built-in cmdlets.
Are there any guidelines/rules, especially from Microsoft, that talk about this? I need help better explaining why this is a bad idea, or to be convinced that writing $null to the output stream is an okay practice. Here is some detail about the resulting behaviors that I see:
If the results are piped into another cmdlet, that cmdlet executes despite no results being found and the pipeline variable ($_) is $null. This means that I have to add checks for $null.
Find-DbRecord -Id 3 | ForEach-Object { if ($_ -ne $null) { <do something with $_> } }
Similarly, if I want to get the array of records found, ensuring that it is an array, I might do the following:
$recsFound = @(Find-DbRecord -Category XYZ)
foreach ($record in $recsFound)
{
    $record.Name = "Something New"
    $record.Update()
}
By the convention I have seen elsewhere, this should work without issue: if no records are found, the foreach loop wouldn't execute. But since the Find cmdlet is writing null to the output, the $recsFound variable is set to an array with one item that is $null. Now I would need to check each item in the array for $null, which clutters my code.
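A quick demonstration of the difference (sketch):
@(Write-Output $null).Count   # -> 1: emitting $null produces one pipeline item
@(& { }).Count                # -> 0: a command that emits nothing produces none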
$null is not void. If you don't want null values in your pipeline, either don't write null values to the pipeline in the first place, or remove them from the pipeline with a filter like this:
... | Where-Object { $_ -ne $null } | ...
Depending on what you want to allow through the filter you could simplify it to this:
... | Where-Object { $_ } | ...
or (using the ? alias for Where-Object) to this:
... | ? { $_ } | ...
which would remove all values that PowerShell interprets as $false ($null, 0, empty string, empty array, etc.).
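For example:
$null, 0, '', 'a', 1 | Where-Object { $_ }   # -> 'a', 1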

How to invoke a powershell scriptblock with $_ automatic variable

What works -
Let's say I have a scriptblock which I use with the Select-Object cmdlet.
$jobTypeSelector = `
{
    if ($_.Type -eq "Foo")
    {
        "Bar"
    }
    elseif ($_.Data -match "-Action ([a-zA-Z]+)")
    {
        $_.Type + " [" + $Matches[1] + "]"
    }
    else
    {
        $_.Type
    }
}
$projectedData = $AllJobs | Select-Object -Property State, @{Name="Type"; Expression=$jobTypeSelector}
This works fine, and I get the results as expected.
What I am trying to do -
However, at a later point in code, I want to reuse the scriptblock defined as $jobTypeSelector.
For example, I expected the code below to take $fooJob (note that it is a single object) passed as a parameter, use it for the $_ automatic variable in the scriptblock, and return the same result as it returns when executed in the context of the Select-Object cmdlet.
$fooType = $jobTypeSelector.Invoke($fooJob)
What doesn't work -
It does not work as I expected and I get back empty value.
What I have already tried -
I checked, the properties are all correctly set, and it's not due to the relevant property itself being blank or $null.
I looked on the internet, and it seemed pretty hard to find any relevant page on the subject, but I eventually found one which came close to explaining the issue in a slightly different context - calling script blocks in PowerShell. The blog doesn't directly answer my question, and any relevant explanation only leads to a solution which would be very ugly and hard to read and maintain, in my opinion.
Question -
So, what is the best way to invoke a scriptblock for a single object when the block uses the $_ automatic variable (instead of a param block)?
After fiddling around with various options, I ended up solving the problem in a somewhat hackish way. But I find it to be the best solution because it's small and easy to read, maintain, and understand.
Even though we are talking about a single object, use it in the pipeline (which is where PowerShell defines the $_ automatic variable) with the ForEach-Object cmdlet:
$fooType = $fooJob | ForEach-Object $jobTypeSelector
You can certainly use foreach or ForEach-Object as you mention.
You can also pipe to the ScriptBlock directly, if you change it from a function ScriptBlock to a filter ScriptBlock by setting IsFilter to $true:
$jobTypeSelector.IsFilter = $true
$fooType = $fooJob | $jobTypeSelector
But, what would be even better is if you used a named function instead of an anonymous ScriptBlock, for example:
function Get-JobType
{
    Param (
        [object] $Job
    )

    if ($Job.Type -eq "Foo")
    {
        "Bar"
    }
    elseif ($Job.Data -match "-Action ([a-zA-Z]+)")
    {
        $Job.Type + " [" + $Matches[1] + "]"
    }
    else
    {
        $Job.Type
    }
}
Then you can use it with Select-Object aka select like this:
$projectedData = $AllJobs |
    select -Property State, @{Name="Type"; Expression={Get-JobType $_}}
Or with a single job, like this:
$fooType = Get-JobType $fooJob

Is it possible to terminate or stop a PowerShell pipeline from within a filter

I have written a simple PowerShell filter that pushes the current object down the pipeline if its date is between the specified begin and end date. The objects coming down the pipeline are always in ascending date order, so as soon as the date exceeds the specified end date I know my work is done and I would like to tell the pipeline that the upstream commands can abandon their work so that the pipeline can finish. I am reading some very large log files and I will frequently want to examine just a portion of the log. I am pretty sure this is not possible but I wanted to ask to be sure.
It is possible to break a pipeline with anything that would otherwise break an outside loop or halt script execution altogether (like throwing an exception). The solution then is to wrap the pipeline in a loop that you can break out of if you need to stop the pipeline. For example, the code below will return the first item from the pipeline and then break the pipeline by breaking the outside do-while loop:
do {
    Get-ChildItem | % { $_; break }
} while ($false)
This functionality can be wrapped into a function like this, where the last line accomplishes the same thing as above:
function Breakable-Pipeline([ScriptBlock]$ScriptBlock) {
    do {
        . $ScriptBlock
    } while ($false)
}

Breakable-Pipeline { Get-ChildItem | % { $_; break } }
It is not possible to stop an upstream command from a downstream command: it will continue to filter out objects that do not match your criteria, but the first command will process everything it was set to process.
The workaround is to do more filtering on the upstream cmdlet or function/filter. Working with log files makes it a bit more complicated, but perhaps using Select-String and a regular expression to filter out the undesired dates might work for you.
Unless you know how many lines you want to take and from where, the whole file will be read to check for the pattern.
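A rough sketch of that idea (the log path and date pattern are hypothetical):
Select-String -Path .\application.log -Pattern '^2008-11-0[1-9]' |
    ForEach-Object Line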
You can throw an exception when ending the pipeline.
gc demo.txt -ReadCount 1 | %{$num=0}{$num++; if($num -eq 5){throw "terminated pipeline!"}else{write-host $_}}
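If the rest of the script should keep running after the deliberate throw, the pipeline can be wrapped in try/catch (sketch):
try {
    gc demo.txt -ReadCount 1 | %{$num=0}{$num++; if($num -eq 5){throw "terminated pipeline!"}else{write-host $_}}
} catch {
    # Swallow the deliberate terminator; execution continues here.
}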
or
Look at this post about how to terminate a pipeline: https://web.archive.org/web/20160829015320/http://powershell.com/cs/blogs/tobias/archive/2010/01/01/cancelling-a-pipeline.aspx
Not sure about your exact needs, but it may be worth your time to look at Log Parser to see if you can't use a query to filter the data before it even hits the pipe.
If you're willing to use non-public members, here is a way to stop the pipeline. It mimics what Select-Object does. invoke-method (alias im) is a function to invoke non-public methods. select-property (alias selp) is a function to select (similar to Select-Object) non-public properties; however, it automatically acts like -ExpandProperty if only one matching property is found. (I wrote select-property and invoke-method at work, so I can't share their source code.)
# Get the System.Management.Automation assembly
$script:smaa = [appdomain]::CurrentDomain.GetAssemblies() |
    ? Location -like "*System.Management.Automation*"
# Get the StopUpstreamCommandsException class
$script:upcet = $smaa.GetTypes() | ? Name -like "*StopUpstreamCommandsException*"
function stop-pipeline {
    # Create a StopUpstreamCommandsException
    $upce = [activator]::CreateInstance($upcet, @($pscmdlet))

    $PipelineProcessor = $pscmdlet.CommandRuntime | select-property PipelineProcessor
    $commands = $PipelineProcessor | select-property commands
    $commandProcessor = $commands[0]
    $ci = $commandProcessor | select-property commandinfo
    $upce.RequestingCommandProcessor | im set_commandinfo @($ci)
    $cr = $commandProcessor | select-property commandruntime
    $upce.RequestingCommandProcessor | im set_commandruntime @($cr)
    $null = $PipelineProcessor |
        invoke-method recordfailure @($upce, $commandProcessor.command)

    if ($commands.count -gt 1) {
        $doCompletes = @()
        1..($commands.count - 1) | % {
            write-debug "Stop-pipeline: added DoComplete for $($commands[$_])"
            $doCompletes += $commands[$_] | invoke-method DoComplete -returnClosure
        }
        foreach ($DoComplete in $doCompletes) {
            $null = & $DoComplete
        }
    }

    throw $upce
}
EDIT: per mklement0's comment:
Here is a link to the Nivot Ink blog post on the "poke" module, which similarly gives access to non-public members.
As for additional comments, I don't have meaningful ones at this point. This code just mimics what a decompilation of Select-Object reveals. The original MS comments (if any) are of course not in the decompilation. Frankly, I don't know the purpose of the various types the function uses; getting that level of understanding would likely require a considerable amount of effort.
My suggestion: get Oisin's poke module. Tweak the code to use that module. And then try it out. If you like the way it works, then use it and don't worry how it works (that's what I did).
Note: I haven't studied "poke" in any depth, but my guess is that it doesn't have anything like -returnClosure. However, adding that should be as easy as this:
if (-not $returnClosure) {
    $methodInfo.Invoke($arguments)
} else {
    { $methodInfo.Invoke($arguments) }.GetNewClosure()
}
Here's an (imperfect) implementation of a Stop-Pipeline cmdlet (requires PS v3+), gratefully adapted from this answer:
#requires -version 3
Filter Stop-Pipeline {
    $sp = { Select-Object -First 1 }.GetSteppablePipeline($MyInvocation.CommandOrigin)
    $sp.Begin($true)
    $sp.Process(0)
}

# Example
1..5 | % { if ($_ -gt 2) { Stop-Pipeline }; $_ } # -> 1, 2
Caveat: I don't fully understand how it works, though fundamentally it takes advantage of Select-Object -First's ability to stop the pipeline prematurely (PS v3+). However, in this case there is one crucial difference from how Select-Object -First terminates the pipeline: downstream cmdlets (commands later in the pipeline) do not get a chance to run their end blocks.
Therefore, aggregating cmdlets (those that must receive all input before producing output, such as Sort-Object, Group-Object, and Measure-Object) will not produce output if placed later in the same pipeline; e.g.:
# !! NO output, because Sort-Object never finishes.
1..5 | % { if ($_ -gt 2) { Stop-Pipeline }; $_ } | Sort-Object
Background info that may lead to a better solution:
Thanks to PetSerAl, my answer here shows how to produce the same exception that Select-Object -First uses internally to stop upstream cmdlets.
However, there the exception is thrown from inside the cmdlet that is itself connected to the pipeline to stop, which is not the case here:
Stop-Pipeline, as used in the examples above, is not connected to the pipeline that should be stopped (only the enclosing ForEach-Object (%) block is), so the question is: How can the exception be thrown in the context of the target pipeline?
Try these filters; they'll force the pipeline to stop after the first object (or the first n elements) and store it (or them) in a variable. You need to pass the name of the variable; if you don't, the object(s) are pushed out but cannot be assigned to a variable.
filter FirstObject ([string]$vName = '') {
    if ($vName) { sv $vName $_ -s 1 } else { $_ }
    break
}

filter FirstElements ([int]$max = 2, [string]$vName = '') {
    if ($max -le 0) { break } else { $_arr += , $_ }
    if (!--$max) {
        if ($vName) { sv $vName $_arr -s 1 } else { $_arr }
        break
    }
}
# can't assign to a variable directly
$myLog = get-eventLog security | ... | firstObject
# pass the varName
get-eventLog security | ... | firstObject myLog
$myLog
# can't assign to a variable directly
$myLogs = get-eventLog security | ... | firstElements 3
# pass the number of elements and the varName
get-eventLog security | ... | firstElements 3 myLogs
$myLogs
####################################
get-eventLog security | % {
    if ($_.timegenerated -lt (date 11.09.08) -and `
        $_.timegenerated -gt (date 11.01.08)) { $log1 = $_; break }
}
#
$log1
Another option would be to use the -file parameter on a switch statement. Using -file will read the file one line at a time, and you can use break to exit immediately without reading the rest of the file.
switch -file $someFile {
    # Parse current line for later matches.
    { $script:line = [DateTime]$_ } { }
    # If less than min date, keep looking.
    { $line -lt $minDate } { Write-Host "skipping: $line"; continue }
    # If greater than max date, stop checking.
    { $line -gt $maxDate } { Write-Host "stopping: $line"; break }
    # Otherwise, date is between min and max.
    default { Write-Host "match: $line" }
}