Generate CLIXML string from a PowerShell object without serializing to disk first

I have the following code which exports an object to an XML file, then reads it back in and prints it on the Information stream.
try {
    # Sample object
    $Person = @{
        Name = 'Bender'
        Age  = 'At least 1074'
    }
    $Person | Export-CliXml obj.xml
    $cliXml = Get-Content -Raw ./obj.xml
    Write-Host $cliXml
} finally {
    if( Test-Path ./obj.xml ) {
        Remove-Item -Force ./obj.xml -EV rError -EA SilentlyContinue
        if( $rError ) {
            Write-Warning "Failed to remove ./obj.xml: $($rError.Exception.Message)"
        }
        Remove-Variable -Force $rError -EA Continue
    }
}
There is a system-local parent session which watches its STDOUT for this output and reconstructs it to an object in its own session.
NOTE: I know a local PSRemoting session would work, but I need this to also work on systems which PSRemoting has either not yet been configured or will not be.
I'd like to cut out the middleman and avoid writing the object to disk. Unfortunately, Import-CliXml and Export-CliXml are the only cmdlets with CliXml in the name, and some .NET documentation sleuthing has turned up nothing so far.
Is there a way to simply serialize an object to a CliXml string without writing to disk first? I've considered using $Person | ConvertTo-Json -Compress -Depth 100 but this has two issues:
It only captures nested objects up to 100 levels deep. This is an edge case, but still a limit I'd like to avoid. I could always use another library or format, but:
I want these to be reconstructed into .NET objects of the same type they were before serialization. Recreating objects from CliXml is the only way I'm aware of to do this.

The CliXml serializer is exposed via the [PSSerializer] class:
$Person = @{
    Name = 'Bender'
    Age  = 'At least 1074'
}
# Produces the same XML output as $Person | Export-CliXml
[System.Management.Automation.PSSerializer]::Serialize($Person)
To deserialize CliXml, use the Deserialize method:
$cliXml = [System.Management.Automation.PSSerializer]::Serialize($Person)
$deserializedPerson = [System.Management.Automation.PSSerializer]::Deserialize($cliXml)

To complement Mathias' helpful answer:
Introducing in-memory equivalents of the file-based Export-CliXml and Import-CliXml cmdlets - in the form of new ConvertTo-CliXml and ConvertFrom-CliXml cmdlets - has been green-lighted in principle, but is still awaiting implementation by the community (as of PowerShell 7.2.1) - see GitHub issue #3898 and the associated pending, but seemingly stalled, community PR #12845.
Note that [System.Management.Automation.PSSerializer]::Serialize() defaults to a recursion depth of 1, whereas Export-CliXml defaults to 2; use the overload that allows specifying the recursion depth explicitly, if needed (e.g., [System.Management.Automation.PSSerializer]::Serialize($Person, 2)).
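Putting the two together, a minimal in-memory round trip might look like this (the explicit depth of 2 mirrors Export-CliXml's default):
$Person = @{
    Name = 'Bender'
    Age  = 'At least 1074'
}
# Serialize to a CLIXML string entirely in memory (depth 2 matches Export-CliXml's default).
$cliXml = [System.Management.Automation.PSSerializer]::Serialize($Person, 2)
# Reconstruct the object from the string.
$roundTripped = [System.Management.Automation.PSSerializer]::Deserialize($cliXml)
$roundTripped['Name']   # -> Bender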

Related

Powershell: storing variables to a file [duplicate]

I would like to write out a hash table to a file with an array as one of the hash table items. My array item is written out, but it contains files=System.Object[]
Note - Once this works, I will want to reverse the process and read the hash table back in again.
clear-host
$resumeFile="c:\users\paul\resume.log"
$files = Get-ChildItem *.txt
$files.GetType()
write-host
$types="txt"
$in="c:\users\paul"
Remove-Item $resumeFile -ErrorAction SilentlyContinue
$resumeParms=@{}
$resumeParms['types']=$types
$resumeParms['in']=($in)
$resumeParms['files']=($files)
$resumeParms.GetEnumerator() | ForEach-Object {"{0}={1}" -f $_.Name,$_.Value} | Set-Content $resumeFile
write-host "Contents of $resumefile"
get-content $resumeFile
Results
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True True Object[] System.Array
Contents of c:\users\paul\resume.log
files=System.Object[]
types=txt
in=c:\users\paul
The immediate fix is to create your own array representation, by enumerating the elements, separating them with , and enclosing string values in '...':
# Sample input hashtable. [ordered] preserves the entry order.
$resumeParms = [ordered] @{ foo = 42; bar = 'baz'; arr = (Get-ChildItem *.txt) }

$resumeParms.GetEnumerator() |
  ForEach-Object {
    "{0}={1}" -f $_.Name, (
      $_.Value.ForEach({
        (("'{0}'" -f ($_ -replace "'", "''")), $_)[$_.GetType().IsPrimitive]
      }) -join ','
    )
  }
Note that this represents all non-primitive .NET types as strings, by their .ToString() representation, which may or may not be good enough.
The above outputs something like:
foo=42
bar='baz'
arr='C:\Users\jdoe\file1.txt','C:\Users\jdoe\file2.txt','C:\Users\jdoe\file3.txt'
See the bottom section for a variation that creates a *.psd1 file that can later be read back into a hashtable instance with Import-PowerShellDataFile.
Alternatives for saving settings / configuration data in text files:
If you don't mind taking on a dependency on a third-party module:
Consider using the PSIni module, which uses the Windows initialization-file (*.ini) format; see this answer for a usage example.
Adding support for initialization files to PowerShell itself (not present as of 7.0) is being proposed in GitHub issue #9035.
Consider using YAML as the file format; e.g., via the FXPSYaml module.
Adding support for YAML files to PowerShell itself (not present as of 7.0) is being proposed in GitHub issue #3607.
The Configuration module provides commands to write to and read from *.psd1 files, based on persisted PowerShell hashtable literals, as you would declare them in source code.
Alternatively, you could modify the output format in the code at the top to produce such files yourself, which allows you to read them back in via
Import-PowerShellDataFile, as shown in the bottom section.
As of PowerShell 7.0 there's no built-in support for writing such a representation; that is, there is no complementary Export-PowerShellDataFile cmdlet.
However, adding this ability is being proposed in GitHub issue #11300.
If creating a (mostly) plain-text file is not a must:
The solution that provides the most flexibility with respect to the data types it supports is the XML-based CLIXML format that Export-Clixml creates, as Lee Dailey suggests, whose output can later be read with Import-Clixml.
However, this format too has limitations with respect to type fidelity, as explained in this answer.
Saving a JSON representation of the data, as Lee also suggests, via ConvertTo-Json / ConvertFrom-Json, is another option, which makes for human-friendlier output than XML, but is still not as friendly as a plain-text representation; notably, all \ chars. in file paths must be escaped as \\ in JSON.
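For example, a minimal sketch of the JSON route (the file name settings.json is just for illustration; note the caveats in the comments):
# Note: ConvertFrom-Json returns [pscustomobject] graphs, not hashtables
# (PowerShell 6+ offers -AsHashtable), and non-primitive values are serialized
# as property bags rather than rehydrated as their original types.
$resumeParms = [ordered] @{ foo = 42; bar = 'baz'; arr = (Get-ChildItem *.txt).FullName }
$resumeParms | ConvertTo-Json -Depth 5 | Set-Content settings.json
$restored = Get-Content -Raw settings.json | ConvertFrom-Json
$restored.foo   # -> 42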
Writing a *.psd1 file that can be read with Import-PowerShellDataFile
Within the stated constraints regarding data types - in essence, anything that isn't a number or a string becomes a string - it is fairly easy to modify the code at the top to write a PowerShell hashtable-literal representation to a *.psd1 file so that it can be read back in as a [hashtable] instance via Import-PowerShellDataFile:
As noted, if you don't mind installing a module, consider the Configuration module, which has this functionality built in.
# Sample input hashtable.
$resumeParms = [ordered] @{ foo = 42; bar = 'baz'; arr = (Get-ChildItem *.txt) }

# Create a hashtable-literal representation and save it to file settings.psd1
@"
@{
$(
  ($resumeParms.GetEnumerator() |
    ForEach-Object {
      " {0}={1}" -f $_.Name, (
        $_.Value.ForEach({
          (("'{0}'" -f ($_ -replace "'", "''")), $_)[$_.GetType().IsPrimitive]
        }) -join ','
      )
    }
  ) -join "`n"
)
}
"@ > settings.psd1
If you read settings.psd1 with Import-PowerShellDataFile settings.psd1 later, you'll get a [hashtable] instance whose entries you can access as usual and which produces the following display output:
Name Value
---- -----
bar baz
arr {C:\Users\jdoe\file1.txt, C:\Users\jdoe\file2.txt, C:\Users\jdoe\file3.txt}
foo 42
Note how the order of entries (keys) was not preserved, because hashtable entries are inherently unordered.
On writing the *.psd1 file you can preserve the key(-creation) order by declaring the input hashtable (System.Collections.Hashtable) as [ordered], as shown above (which creates a System.Collections.Specialized.OrderedDictionary instance), but the order is, unfortunately, lost on reading the *.psd1 file.
As of PowerShell 7.0, even if you place [ordered] before the opening @{ in the *.psd1 file, Import-PowerShellDataFile quietly ignores it and creates an unordered hashtable nonetheless.
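For completeness, reading the data back in later is then as simple as:
# The result is an *unordered* [hashtable], per the note above.
$settings = Import-PowerShellDataFile settings.psd1
$settings.foo     # -> 42
$settings['bar']  # -> baz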
This is a problem I deal with all the time and it drives me mad. I really think that there should be a function specifically for this action... so I wrote one.
function ConvertHashTo-CSV
{
    Param (
        [Parameter(Mandatory=$true)]
        $hashtable,
        [Parameter(Mandatory=$true)]
        $OutputFileLocation
    )
    $hastableAverage = $NULL # This will only work for hashtables where each entry is consistent. This checks for consistency.
    foreach ($hashtabl in $hashtable)
    {
        $hastableAverage = $hastableAverage + $hashtabl.count # Counts the number of headings.
    }
    $Paritycheck = $hastableAverage / $hashtable.count # Gets the average number of headings
    if ( ($parity = $Paritycheck -is [int]) -eq $False) # If the average is not an int, the hashtable is not consistent
    {
        write-host "Error. Hashtable is inconsistent" -ForegroundColor red
        Start-Sleep -Seconds 5
        return
    }
    $HashTableHeadings = $hashtable[0].GetEnumerator().name # Get the hashtable headings
    $HashTableCount = ($hashtable[0].GetEnumerator().name).count # Count the headings
    $HashTableString = $null # String to hold the CSV
    foreach ($HashTableHeading in $HashTableHeadings) # Creates the first row containing the column headings
    {
        $HashTableString += $HashTableHeading
        $HashTableString += ", "
    }
    $HashTableString = $HashTableString -replace ".{2}$" # Removes the trailing ", " added by the loop above
    $HashTableString += "`n"
    foreach ($hashtabl in $hashtable) # Adds the data
    {
        for ($i = 0; $i -lt $HashTableCount; $i++)
        {
            $HashTableString += $hashtabl[$i]
            if ($i -lt ($HashTableCount - 1))
            {
                $HashTableString += ", "
            }
        }
        $HashTableString += "`n"
    }
    $HashTableString | Out-File -FilePath $OutputFileLocation # Writes the CSV to a file
}
To use this, copy the function into your script, run it, and then call it like so:
ConvertHashTo-CSV -hashtable $Hasharray -OutputFileLocation c:\temp\data.CSV
The code is annotated, but briefly: it steps through the arrays and hashtables and adds them to a string, applying the formatting required to make that string a CSV file, then writes the string out to a file.
The main limitation is that the hashtables in the array all have to contain the same number of fields. To get around this, if a hashtable has a field that doesn't contain data, ensure it contains at least a space.
More on this can be found here : https://grumpy.tech/powershell-convert-hashtable-to-csv/

Change nested property of an object dynamically (eg: $_.Property.SubProperty) [duplicate]

Say I have JSON like:
{
"a" : {
"b" : 1,
"c" : 2
}
}
Now ConvertFrom-Json will happily create PSObjects out of that. If I want to access an item I can do $json.a.b and get 1 - nicely nested properties.
Now if I have the string "a.b" the question is how to use that string to access the same item in that structure? Seems like there should be some special syntax I'm missing like & for dynamic function calls because otherwise you have to interpret the string yourself using Get-Member repeatedly I expect.
No, there is no special syntax, but there is a simple workaround, using iex, the built-in alias[1] for the Invoke-Expression cmdlet:
$propertyPath = 'a.b'
# Note the ` (backtick) before $json, to prevent premature expansion.
iex "`$json.$propertyPath" # Same as: $json.a.b
# You can use the same approach for *setting* a property value:
$newValue = 'foo'
iex "`$json.$propertyPath = `$newValue" # Same as: $json.a.b = $newValue
Caveat: Do this only if you fully control or implicitly trust the value of $propertyPath.
Only in rare situations is Invoke-Expression truly needed, and it should generally be avoided, because it can be a security risk.
Note that if the target property contains an instance of a specific collection type that you want to preserve as-is (which is not common; e.g., if the property value is a strongly typed array such as [int[]], or an instance of a list type such as [System.Collections.Generic.List`1]), use the following:
# "," constructs an aux., transient array that is enumerated by
# Invoke-Expression and therefore returns the original property value as-is.
iex ", `$json.$propertyPath"
Without the , technique, Invoke-Expression enumerates the elements of a collection-valued property and you'll end up with a regular PowerShell array, which is of type [object[]] - typically, however, this distinction won't matter.
Note: If you were to send the result of the , technique directly through the pipeline, a collection-valued property value would be sent as a single object instead of getting enumerated, as usual. (By contrast, if you save the result in a variable first and then send it through the pipeline, the usual enumeration occurs.) While you can force enumeration simply by enclosing the Invoke-Expression call in (...), there is no reason to use the , technique to begin with in this case, given that enumeration invariably entails loss of the information about the type of the collection whose elements are being enumerated.
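To make the difference concrete, here is a small self-contained demonstration (the nums property is made up purely for illustration):
$json = [pscustomobject] @{ nums = [int[]] (1, 2, 3) }
$propertyPath = 'nums'
# Without ",": the strongly typed [int[]] is enumerated and re-collected as [object[]].
(iex "`$json.$propertyPath").GetType().Name    # -> Object[]
# With ",": the original [int[]] instance is preserved.
(iex ", `$json.$propertyPath").GetType().Name  # -> Int32[]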
Read on for packaged solutions.
Note:
The following packaged solutions originally used Invoke-Expression combined with sanitizing the specified property paths in order to prevent inadvertent/malicious injection of commands. However, the solutions now use a different approach, namely splitting the property path into individual property names and iteratively drilling down into the object, as shown in Gyula Kokas's helpful answer. This not only obviates the need for sanitizing, but turns out to be faster than use of Invoke-Expression (the latter is still worth considering for one-off use).
The no-frills, get-only, always-enumerate version of this technique would be the following function:
# Sample call: propByPath $json 'a.b'
function propByPath { param($obj, $propPath) foreach ($prop in $propPath.Split('.')) { $obj = $obj.$prop }; $obj }
What the more elaborate solutions below offer: parameter validation, the ability to also set a property value by path, and - in the case of the propByPath function - the option to prevent enumeration of property values that are collections (see next point).
The propByPath function offers a -NoEnumerate switch to optionally request preserving a property value's specific collection type.
By contrast, this feature is omitted from the .PropByPath() method, because there is no syntactically convenient way to request it (methods only support positional arguments). A possible solution is to create a second method, say .PropByPathNoEnumerate(), that applies the , technique discussed above.
Helper function propByPath:
function propByPath {
  param(
    [Parameter(Mandatory)] $Object,
    [Parameter(Mandatory)] [string] $PropertyPath,
    $Value, # optional value to SET
    [switch] $NoEnumerate # only applies to GET
  )
  Set-StrictMode -Version 1
  # Note: Iteratively drilling down into the object turns out to be *faster*
  # than using Invoke-Expression; it also obviates the need to sanitize
  # the property-path string.
  $props = $PropertyPath.Split('.') # Split the path into an array of property names.
  if ($PSBoundParameters.ContainsKey('Value')) { # SET
    $parentObject = $Object
    if ($props.Count -gt 1) {
      foreach ($prop in $props[0..($props.Count-2)]) { $parentObject = $parentObject.$prop }
    }
    $parentObject.($props[-1]) = $Value
  }
  else { # GET
    $value = $Object
    foreach ($prop in $props) { $value = $value.$prop }
    if ($NoEnumerate) {
      , $value
    } else {
      $value
    }
  }
}
Instead of the Invoke-Expression call you would then use:
# GET
propByPath $obj $propertyPath
# GET, with preservation of the property value's specific collection type.
propByPath $obj $propertyPath -NoEnumerate
# SET
propByPath $obj $propertyPath 'new value'
You could even use PowerShell's ETS (extended type system) to attach a .PropByPath() method to all [pscustomobject] instances (PSv3+ syntax; in PSv2 you'd have to create a *.types.ps1xml file and load it with Update-TypeData -PrependPath):
'System.Management.Automation.PSCustomObject',
'Deserialized.System.Management.Automation.PSCustomObject' |
  Update-TypeData -TypeName { $_ } `
    -MemberType ScriptMethod -MemberName PropByPath -Value {
      param(
        [Parameter(Mandatory)] [string] $PropertyPath,
        $Value
      )
      Set-StrictMode -Version 1
      $props = $PropertyPath.Split('.') # Split the path into an array of property names.
      if ($PSBoundParameters.ContainsKey('Value')) { # SET
        $parentObject = $this
        if ($props.Count -gt 1) {
          foreach ($prop in $props[0..($props.Count-2)]) { $parentObject = $parentObject.$prop }
        }
        $parentObject.($props[-1]) = $Value
      }
      else { # GET
        # Note: Iteratively drilling down into the object turns out to be *faster*
        # than using Invoke-Expression; it also obviates the need to sanitize
        # the property-path string.
        $value = $this
        foreach ($prop in $props) { $value = $value.$prop }
        $value
      }
    }
You could then call $obj.PropByPath('a.b') or $obj.PropByPath('a.b', 'new value')
Note: Type Deserialized.System.Management.Automation.PSCustomObject is targeted in addition to System.Management.Automation.PSCustomObject in order to also cover deserialized custom objects, which are returned in a number of scenarios, such as using Import-CliXml, receiving output from background jobs, and using remoting.
.PropByPath() will be available on any [pscustomobject] instance in the remainder of the session (even on instances created prior to the Update-TypeData call [2]); place the Update-TypeData call in your $PROFILE (profile file) to make the method available by default.
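As for the .PropByPathNoEnumerate() companion method mentioned above, here is a minimal, untested sketch that simply applies the "," technique to the GET logic (the method name is hypothetical; the structure mirrors the code above):
'System.Management.Automation.PSCustomObject',
'Deserialized.System.Management.Automation.PSCustomObject' |
  Update-TypeData -TypeName { $_ } `
    -MemberType ScriptMethod -MemberName PropByPathNoEnumerate -Value {
      param(
        [Parameter(Mandatory)] [string] $PropertyPath
      )
      Set-StrictMode -Version 1
      $value = $this
      foreach ($prop in $PropertyPath.Split('.')) { $value = $value.$prop }
      , $value  # "," preserves the property value's original collection type on output
    }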
[1] Note: While it is generally advisable to limit aliases to interactive use and use full cmdlet names in scripts, use of iex to me is acceptable, because it is a built-in alias and enables a concise solution.
[2] Verify with (all on one line) $co = New-Object PSCustomObject; Update-TypeData -TypeName System.Management.Automation.PSCustomObject -MemberType ScriptMethod -MemberName GetFoo -Value { 'foo' }; $co.GetFoo(), which outputs foo even though $co was created before Update-TypeData was called.
This workaround may be useful to somebody.
The result drills down deeper and deeper until it hits the right object.
$json=(Get-Content ./json.json | ConvertFrom-Json)
$result=$json
$search="a.c"
$search.split(".")|% {$result=$result.($_) }
$result
You can have 2 variables.
$json = '{
"a" : {
"b" : 1,
"c" : 2
}
}' | convertfrom-json
$a,$b = 'a','b'
$json.$a.$b
1

Problem looping through files (using Powershell) and moving them one at a time, based on error result

I am using Powershell to validate multiple XML files against multiple XSDs; this portion of the code is working as expected, however I also need to move any XML which fails to validate to an "Invalid" folder. I am attempting to loop through these files using ForEach, and then - using an If statement - to move any file which errored. My problem is that all files are being moved, whether or not they errored.
I've written this loop in as many different ways as I could conceive, but I'm not getting the result I expect. (I have also scoured the web for days to find the answer.) I need ForEach to apply the code to each file, one at a time. Is this a problem with my syntax? Perhaps I'm missing something very obvious, but I'm now at a loss.
I am using this function (found on Stack Overflow, and as shown here, slightly tweaked) to validate the XMLs.
function Test-XmlFile
{
    <#
    .Synopsis
        Validates an xml file against an xml schema file.
    .Example
        PS> dir *.xml | Test-XmlFile schema.xsd
    #>
    [CmdletBinding()]
    param (
        [Parameter(ValueFromPipeline=$true, Mandatory=$true)]
        [SupportsWildcards()]
        $SchemaFile,

        [Parameter(ValueFromPipeline=$true, Mandatory=$true)]
        [SupportsWildcards()]
        [alias('Fullname')]
        $XmlFile,

        [scriptblock] $ValidationEventHandler = { Write-Error $args[1].Exception }
    )

    begin {
        $schemaReader = New-Object System.Xml.XmlTextReader $SchemaFile
        $schema = [System.Xml.Schema.XmlSchema]::Read($schemaReader, $ValidationEventHandler)
    }

    process {
        $ret = $true
        try {
            $xml = New-Object System.Xml.XmlDocument
            $xml.Schemas.Add($schema) | Out-Null
            $xml.Load($XmlFile)
            $xml.Validate({
                throw ([PsCustomObject] @{
                    SchemaFile = $SchemaFile
                    XmlFile    = $XmlFile
                    Exception  = $args[1].Exception
                })
            })
        } catch {
            Write-Error $_
            $ret = $false
        }
        $ret
    }

    end {
        $schemaReader.Close()
    }
}
And here is how I am selecting XMLs for validation against their given schemas.
$allfiles = "..\Schema Validation\XMLs\*.xml"
$xml1 = Get-ChildItem $allfiles -Recurse | Select-String "<UniqueElement>" -List | Resolve-Path
$xml2 = Get-ChildItem $allfiles -Recurse | Select-String "<UniqueElement>" -List | Resolve-Path
$xsd1 = "..\Schema Validation\Schemas\Schema1.xsd"
$xsd2 = "..\Schema Validation\Schemas\Schema2.xsd"
And here is the ForEach loop that's not working for me. (In its current configuration, though I've written it a dozen different ways.)
ForEach ($xml in $xml1) {
    $xml | Test-XmlFile $xsd1
    If ($Error) {
        $Error[0].Exception, "`r" | Out-File "..\Schema Validation\Results\log.txt"
        Move-Item $xml -Destination "..\Schema Validation\Invalid"
    }
}
The ForEach loop above is also repeated for the $xml2 and $xsd2 variables.
(And as you can see from the Out-File, I'm also capturing the exception message in a text file for a log of sorts.)
I expected only those XMLs which error and hit an exception to be moved, due to the "If ($Error)" statement and the fact that I'm attempting to loop through the files one at a time; however, what happens is that any XML which contains the unique string that identifies it as part of the $xml1 or $xml2 group is moved to the Invalid folder, error or no error. So what painfully obvious thing am I missing?? (Incidentally, the exception text populates the error log as expected, so at least that part is working as I hoped.)
EDIT: On second thought, I shouldn't say that the exception text populates the log "as expected". It does populate the log, but it writes the message to the log file once for each file included in the variable (each file in $xml1, for example), whether or not the file actually errored. So if there are two files in $xml1, but only one is invalid, the single exception message for that one invalid file will be written to the log twice. So it's writing something for every file that is looped, regardless of validity or errors. Hope that makes sense.
$Error gives you a list of the most recent errors; it is not cleared when a subsequent operation succeeds - it still contains the previously encountered errors. So once the first error has occurred, every file will be moved to the 'Invalid' directory.
You already have error handling in your cmdlet, so you could modify that code to also handle moving the file to the other directory.
Your Test-XmlFile function outputs a Boolean indicating the success status of the validation, so I suggest using that directly to determine validation success vs. failure; additionally, to capture the specifics of the validation failure use the -ErrorVariable common parameter:
ForEach ($xml in $xml1) {
    if (-not (Test-XmlFile -XmlFile $xml -SchemaFile $xsd1 -ErrorVariable err)) {
        $err.Exception, "`n" | Out-File -Append "..\Schema Validation\Results\log.txt"
        Move-Item $xml -Destination "..\Schema Validation\Invalid"
    }
}
Note that I've added -Append to your Out-File call to ensure that multiple failures don't overwrite each other in the log file. Also, it's better to use "`n" or "`r`n" or, platform-appropriately, [Environment]::NewLine to create a newline (line break).
As for what you tried:
The automatic $Error variable is a collection (of type [System.Collections.ArrayList]) of all errors that have occurred in the current session, in reverse chronological order.
Thus, if you use if ($Error), you're effectively asking if at least one error has occurred in the session so far, not whether the most recently executed command reported errors (the last of which would be reflected in $Error[0]), because $Error in a Boolean context evaluates to $true once the collection has at least one entry.
There is also the automatic $? variable, which is a Boolean that indicates whether the most recently executed command reported an error (at least one error) - unfortunately, however, this behavior is plagued with inconsistencies (any expression resets $? to $true, and Write-Error does not set $? to $false, unlike compiled cmdlets reporting errors), so it's better to rely on the explicit Boolean return value from Test-XmlFile in your case.
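A quick way to see the difference in action:
# $Error is session-global: it still evaluates to $true after a later, successful command.
Get-Item C:\DoesNotExist -ErrorAction SilentlyContinue    # records an error in $Error
Get-Item C:\ | Out-Null                                   # succeeds
[bool] $Error                                             # -> $true (the earlier error is still in the collection)
# -ErrorVariable, by contrast, captures only the errors of the command it is attached to.
Get-Item C:\ -ErrorVariable myErr | Out-Null
$myErr.Count                                              # -> 0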

Checking for process and process status in PowerShell version 2 and getting two different outputs depending on single process or multiple processes?

Here is the code below I am using:
$ProcessesToCheckFor = (
'company_name_uat-Historian'
)
$FoundProcesses = Get-Process -Name $ProcessesToCheckFor -ErrorAction SilentlyContinue
foreach ($Item in $ProcessesToCheckFor)
{
    if ($FoundProcesses.Name -contains $Item)
    {
        '{0} runn1ng' -f $Item
    }
    else
    {
        '{0} fai1ed' -f $Item
    }
}
The code checks to see if company_name_uat-Historian process is running on a server, and if it is running, it will output runn1ng and fai1ed if not.
The problem is, it works when checking only one process just like the code above, but not when you try to check a list of processes.
I need to check a list of processes, so when I chain the rest as listed below:
$ProcessesToCheckFor = (
'company_name_uat-Historian',
'company_name_uat-KEReviewCollector',
'company_name_uat-lwm',
'company_name_uat-MQAck',
'company_name_uat-MQOutput',
'company_name_uat-SQAC',
'company_name_uat-Store',
'company_name_uat-Store_STS',
'company_name_uat-StoreLegacy',
'spotify'
)
$FoundProcesses = Get-Process -Name $ProcessesToCheckFor -ErrorAction SilentlyContinue
foreach ($Item in $ProcessesToCheckFor)
{
    if ($FoundProcesses.Name -contains $Item)
    {
        '{0} runn1ng' -f $Item
    }
    else
    {
        '{0} fai1ed' -f $Item
    }
}
They all output fai1ed.
If I check them one by one, each process returns runn1ng; but bunched up together like this, they all return fai1ed.
Side notes (if anyone is wondering):
runn1ng and fai1ed are 'code words' that get replaced by images using JavaScript. I am making an HTML dashboard that monitors my Windows servers for work, so think green checkmarks and red x marks and what not.
Using PowerShell version 2 is not my choice at all, some of the servers I am responsible for monitoring are Windows 2008 R2. I believe they are going to be updated sometime this year, but the project deliverable requires me to enact a monitoring solution immediately. There are also some servers which are 2012 that I love to write PowerShell scripts for.
spotify is listed as a process because I know it's a legit process but is not installed on our servers. When I was writing the script, I initially tested on my own personal machine and used spotify as a means to test runn1ng if I opened it or fai1ed if I closed it. If all my processes are runn1ng and spotify is fai1ed, then that's an indication that the PS code is working.
It appears to happen in PowerShell version 2.
Any ideas what is causing it and how I can rewrite it?
The Answer
if (($FoundProcesses | Select-Object -ExpandProperty Name) -contains $Item) {
Why
PowerShell 3 added something called member enumeration. If you have an array of objects in 2.0, you can't directly call properties of the array, because it looks for those properties on the array object itself. In 3.0+ if the member doesn't exist on the array, it will check the items for the member as well. Using Select-Object -ExpandProperty is a more explicit way of calling the members.
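A quick illustration of the difference (any process list will do):
# PSv3+ member enumeration: .Name on the array collects each element's Name.
(Get-Process).Name
# PSv2-safe equivalent: request the property explicitly.
Get-Process | Select-Object -ExpandProperty Name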
You could also just move the Get-Process call into the foreach loop.
foreach ($Item in $ProcessesToCheckFor)
{
    if (Get-Process -Name $Item -ErrorAction SilentlyContinue)
    {
        '{0} runn1ng' -f $Item
    }
    else
    {
        '{0} fai1ed' -f $Item
    }
}
Patrick Meinecke's answer is effective and helpful, but let me attempt a more detailed explanation and a more efficient solution:
First things first: Your code works just fine in PSv3+, which should provide an incentive to leave PSv2 behind.
PSv3 introduced unified handling of scalars and collections through member(-access) enumeration, which bridged the great scalar/array divide that plagued PSv2-.
Specifically, in PSv3 you needn't worry about whether Get-Process -Name $ProcessesToCheckFor happens to return a single item or multiple items: in either case you can call .Count on the output to count the number of items returned, and you can invoke a member (property) on the result, which, in the case of an array result (multiple items), means that the member is called on each element (e.g., .Name), and the result can again be used as a scalar or array as needed.
In PSv2-, however, you need to be aware of whether a cmdlet - situationally - happens to return a single item or multiple ones, which means:
Use the array-subexpression operator @(...) to ensure that even if the enclosed command happens to return a single item (scalar), the result is an array.
To collect the property values of the elements of a given array or of a scalar in a new array or scalar, use ... | Select-Object -ExpandProperty - again, apply @(...) to ensure that the result is an array.
Note that PSv2- had some unified scalar/array handling, but only with respect to pipeline processing: to send a scalar (single item) to the pipeline, you can use it as-is, without needing to wrap it in @(...):
'foo' | ... works just fine - no need for @('foo') | ... (even though that works too).
Applied to your code we get:
$ProcessesToCheckFor = (
'company_name_uat-Historian',
'company_name_uat-KEReviewCollector',
'company_name_uat-lwm',
'company_name_uat-MQAck',
'company_name_uat-MQOutput',
'company_name_uat-SQAC',
'company_name_uat-Store',
'company_name_uat-Store_STS',
'company_name_uat-StoreLegacy',
'spotify'
)
# Collect the names of all running processes and
# make sure the result is an array (`@(...)`).
$FoundProcesses = @(Get-Process -Name $ProcessesToCheckFor -ErrorAction SilentlyContinue |
  Select-Object -ExpandProperty Name)
foreach ($Item in $ProcessesToCheckFor) {
if ($FoundProcesses -contains $Item) {
'{0} runn1ng' -f $Item
} else {
'{0} fai1ed' -f $Item
}
}

Is it possible to terminate or stop a PowerShell pipeline from within a filter

I have written a simple PowerShell filter that pushes the current object down the pipeline if its date is between the specified begin and end date. The objects coming down the pipeline are always in ascending date order, so as soon as the date exceeds the specified end date I know my work is done, and I would like to tell the pipeline that the upstream commands can abandon their work so that the pipeline can finish its work. I am reading some very large log files and I will frequently want to examine just a portion of the log. I am pretty sure this is not possible but I wanted to ask to be sure.
It is possible to break a pipeline with anything that would otherwise break an outside loop or halt script execution altogether (like throwing an exception). The solution then is to wrap the pipeline in a loop that you can break if you need to stop the pipeline. For example, the below code will return the first item from the pipeline and then break the pipeline by breaking the outside do-while loop:
do {
Get-ChildItem|% { $_;break }
} while ($false)
This functionality can be wrapped into a function like this, where the last line accomplishes the same thing as above:
function Breakable-Pipeline([ScriptBlock]$ScriptBlock) {
    do {
        . $ScriptBlock
    } while ($false)
}

Breakable-Pipeline { Get-ChildItem | % { $_; break } }
It is not possible to stop an upstream command from a downstream command: it will continue to filter out objects that do not match your criteria, but the first command will process everything it was set to process.
The workaround will be to do more filtering on the upstream cmdlet or function/filter. Working with log files makes it a bit more complicated, but perhaps using Select-String and a regular expression to filter out the undesired dates might work for you.
Unless you know how many lines you want to take and from where, the whole file will be read to check for the pattern.
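For instance, a rough sketch of such upstream pre-filtering (the file name and the timestamp regex are placeholders; adapt them to the actual log format):
# Emit only log lines whose timestamp falls inside the window of interest,
# so the downstream date-range filter has far less to process.
Select-String -Path .\huge.log -Pattern '^2008-11-0[1-9] ' |
    ForEach-Object { $_.Line }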
You can throw an exception when ending the pipeline.
gc demo.txt -ReadCount 1 | %{$num=0}{$num++; if($num -eq 5){throw "terminated pipeline!"}else{write-host $_}}
or
Look at this post about how to terminate a pipeline: https://web.archive.org/web/20160829015320/http://powershell.com/cs/blogs/tobias/archive/2010/01/01/cancelling-a-pipeline.aspx
Not sure about your exact needs, but it may be worth your time to look at Log Parser to see if you can't use a query to filter the data before it even hits the pipe.
If you're willing to use non-public members here is a way to stop the pipeline. It mimics what select-object does. invoke-method (alias im) is a function to invoke non-public methods. select-property (alias selp) is a function to select (similar to select-object) non-public properties - however it automatically acts like -ExpandProperty if only one matching property is found. (I wrote select-property and invoke-method at work, so can't share the source code of those).
# Get the system.management.automation assembly
$script:smaa = [appdomain]::currentdomain.getassemblies() |
    ? location -like "*system.management.automation*"

# Get the StopUpstreamCommandsException class
$script:upcet = $smaa.gettypes() | ? name -like "*StopUpstreamCommandsException*"

function stop-pipeline {
    # Create a StopUpstreamCommandsException
    $upce = [activator]::CreateInstance($upcet, @($pscmdlet))

    $PipelineProcessor = $pscmdlet.CommandRuntime | select-property PipelineProcessor
    $commands = $PipelineProcessor | select-property commands
    $commandProcessor = $commands[0]
    $ci = $commandProcessor | select-property commandinfo
    $upce.RequestingCommandProcessor | im set_commandinfo @($ci)
    $cr = $commandProcessor | select-property commandruntime
    $upce.RequestingCommandProcessor | im set_commandruntime @($cr)
    $null = $PipelineProcessor |
        invoke-method recordfailure @($upce, $commandProcessor.command)

    if ($commands.count -gt 1) {
        $doCompletes = @()
        1..($commands.count-1) | % {
            write-debug "Stop-pipeline: added DoComplete for $($commands[$_])"
            $doCompletes += $commands[$_] | invoke-method DoComplete -returnClosure
        }
        foreach ($DoComplete in $doCompletes) {
            $null = & $DoComplete
        }
    }

    throw $upce
}
EDIT: per mklement0's comment:
Here is a link to the Nivot Ink blog post on the "poke" module, which similarly gives access to non-public members.
As far as additional comments, I don't have meaningful ones at this point. This code just mimics what a decompilation of select-object reveals. The original MS comments (if any) are of course not in the decompilation. Frankly I don't know the purpose of the various types the function uses. Getting that level of understanding would likely require a considerable amount of effort.
My suggestion: get Oisin's poke module. Tweak the code to use that module. And then try it out. If you like the way it works, then use it and don't worry how it works (that's what I did).
Note: I haven't studied "poke" in any depth, but my guess is that it doesn't have anything like -returnClosure. However, adding that should be as easy as this:
if (-not $returnClosure) {
$methodInfo.Invoke($arguments)
} else {
{$methodInfo.Invoke($arguments)}.GetNewClosure()
}
Here's an - imperfect - implementation of a Stop-Pipeline cmdlet (requires PS v3+), gratefully adapted from this answer:
#requires -version 3
Filter Stop-Pipeline {
    $sp = { Select-Object -First 1 }.GetSteppablePipeline($MyInvocation.CommandOrigin)
    $sp.Begin($true)
    $sp.Process(0)
}
# Example
1..5 | % { if ($_ -gt 2) { Stop-Pipeline }; $_ } # -> 1, 2
Caveat: I don't fully understand how it works, though fundamentally it takes advantage of Select -First's ability to stop the pipeline prematurely (PS v3+). However, in this case there is one crucial difference to how Select -First terminates the pipeline: downstream cmdlets (commands later in the pipeline) do not get a chance to run their end blocks.
Therefore, aggregating cmdlets (those that must receive all input before producing output, such as Sort-Object, Group-Object, and Measure-Object) will not produce output if placed later in the same pipeline; e.g.:
# !! NO output, because Sort-Object never finishes.
1..5 | % { if ($_ -gt 2) { Stop-Pipeline }; $_ } | Sort-Object
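If you do need an aggregating cmdlet downstream, one workaround is to capture the truncated output first and aggregate in a separate pipeline:
# Sort-Object's end block runs normally because it sits in its own pipeline.
$truncated = 1..5 | % { if ($_ -gt 2) { Stop-Pipeline }; $_ }
$truncated | Sort-Object   # -> 1, 2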
Background info that may lead to a better solution:
Thanks to PetSerAl, my answer here shows how to produce the same exception that Select-Object -First uses internally to stop upstream cmdlets.
However, there the exception is thrown from inside the cmdlet that is itself connected to the pipeline to stop, which is not the case here:
Stop-Pipeline, as used in the examples above, is not connected to the pipeline that should be stopped (only the enclosing ForEach-Object (%) block is), so the question is: How can the exception be thrown in the context of the target pipeline?
Try these filters; they force the pipeline to stop after the first object (or the first n elements) and store it (or them) in a variable. You need to pass the name of a variable; if you don't, the object(s) are pushed out but cannot be assigned to a variable.
filter FirstObject ([string]$vName = '') {
    if ($vName) { sv $vName $_ -s 1 } else { $_ }
    break
}

filter FirstElements ([int]$max = 2, [string]$vName = '') {
    if ($max -le 0) { break } else { $_arr += , $_ }
    if (!--$max) {
        if ($vName) { sv $vName $_arr -s 1 } else { $_arr }
        break
    }
}
# can't assign to a variable directly
$myLog = get-eventLog security | ... | firstObject
# pass the varName
get-eventLog security | ... | firstObject myLog
$myLog
# can't assign to a variable directly
$myLogs = get-eventLog security | ... | firstElements 3
# pass the number of elements and the varName
get-eventLog security | ... | firstElements 3 myLogs
$myLogs
####################################
get-eventLog security | % {
if ($_.timegenerated -lt (date 11.09.08) -and`
$_.timegenerated -gt (date 11.01.08)) {$log1 = $_; break}
}
#
$log1
Another option would be to use the -file parameter on a switch statement. Using -file will read the file one line at a time, and you can use break to exit immediately without reading the rest of the file.
switch -file $someFile {
    # Parse current line for later matches.
    { $script:line = [DateTime]$_ } { }
    # If less than min date, keep looking.
    { $line -lt $minDate } { Write-Host "skipping: $line"; continue }
    # If greater than max date, stop checking.
    { $line -gt $maxDate } { Write-Host "stopping: $line"; break }
    # Otherwise, date is between min and max.
    default { Write-Host "match: $line" }
}