PowerShell hash table in a file [duplicate]

If I want to write an object / HashTable to disk and load it up again later does PowerShell support that?

Sure, you can use PowerShell's native CliXml format:
@{
    a = 1
    b = [pscustomobject]@{
        prop = "value"
    }
} | Export-Clixml -Path hashtable.ps1xml
Deserialize with Import-CliXml:
PS C:\> $ht = Import-CliXml hashtable.ps1xml
PS C:\> $ht['b'].prop -eq 'value'
True

As the default PowerShell hash table (@{...}) is of type Object, Object, this doesn't concern just the Hashtable type; the question implies serializing any (value) type to disk.
In addition to the answer from @Mathias R. Jessen, you might use the PowerShell serializer (System.Management.Automation.PSSerializer) for this:
Serialize to disk
[System.Management.Automation.PSSerializer]::Serialize($HashTable) | Out-File .\HashTable.txt
Deserialize from disk
$PSSerial = Get-Content .\HashTable.txt
$HashTable = [System.Management.Automation.PSSerializer]::Deserialize($PSSerial)
You could also use the ConvertTo-Expression cmdlet. The downside is that it is a non-standard PowerShell cmdlet for serializing, but the advantage is that you can use the standard and easy dot-sourcing technique to restore it:
Serialize to disk
$HashTable | ConvertTo-Expression | Out-File .\HashTable.ps1
Deserialize from disk
$HashTable = . .\HashTable.ps1

The answer may depend on the data in your hashtable. For relatively simple data, Export-Clixml and Import-CliXml are the native and straightforward PowerShell solution; see another answer.
For more complex data that is not well serialized via CliXml but is .NET serializable, you may use one of the standard .NET serializers, for example BinaryFormatter. You may use (or learn from the code of) two ready-made scripts, Export-Binary.ps1 and Import-Binary.ps1. You may find demo examples, including a hashtable, in Export-Binary.test.ps1.
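For reference, a minimal sketch of the underlying BinaryFormatter technique (an assumption of how such scripts work, not the actual Export-Binary.ps1 code; BinaryFormatter is obsolete and disabled by default on .NET 5+, so this targets Windows PowerShell):
# Serialize a hashtable to disk with BinaryFormatter
$formatter = New-Object System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
$stream = [System.IO.File]::Create("$PWD\HashTable.bin")
try { $formatter.Serialize($stream, $HashTable) } finally { $stream.Close() }
# Deserialize it again
$stream = [System.IO.File]::OpenRead("$PWD\HashTable.bin")
try { $HashTable = $formatter.Deserialize($stream) } finally { $stream.Close() }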
And if you want to store many hashtables efficiently, then look at some sort of document-store solution. I recently found that LiteDB is very good for many PowerShell scenarios, so I created Ldbc, a PowerShell wrapper of LiteDB, batteries included. This way you can store and retrieve thousands of hashtables.
UPDATE: If you prefer to store relatively simple data in PSD1 (the native PowerShell data format), you may also use the script module PsdKit. (Thanks to @iRon for the reminder.)
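Reading PSD1 is supported out of the box; a minimal sketch with the PSD1 hand-written (Import-PowerShellDataFile requires PowerShell 5.0+; PsdKit automates the writing step):
# Write a restricted-language data file by hand
@'
@{
    a = 1
    b = 'value'
}
'@ | Set-Content .\HashTable.psd1
# Read it back as a hashtable
$HashTable = Import-PowerShellDataFile .\HashTable.psd1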

Related

Get serialnumber from asset list

I started in a junior infrastructure role in recent weeks, and have begun playing around with PowerShell to help save some time here and there.
I am trying to do the following:
1. Import a CSV file with a single column named asset
2. Perform a "ForEach" check on each line to find the device's serial number
3. Output the results to a CSV with two columns, "asset" and "serialnumber"
I have dabbled in a few areas, and am currently sitting at something like this:
$file1 = Import-Csv -path "c:\temp\assets.csv" | ForEach-Object {
$asset = $_.asset
}
wmic /node:$asset bios get serialnumber
Export-Csv -Path "c:\temp\assetandserial.csv" -NoTypeInformation
As you may or may not see, I tried to set the column labelled "asset" as the variable; however, I'm not sure if I placed it correctly.
I have tried a few other things, but honestly it's all new to me, so I haven't the foggiest idea where to go from here.
wmic is deprecated, and, for rich type support (OO processing), using PowerShell-native cmdlets is preferable in general.
wmic's immediate PowerShell counterpart is Get-WmiObject, which, however, is equally deprecated, in favor of Get-CimInstance.
Important: The command below uses Get-CimInstance, but note that the CIM cmdlets use a different remoting protocol than the obsolete WMI cmdlets. In short: To use the CIM cmdlets with remote computers, those computers must be set up in the same way that PowerShell remoting requires - see this answer for details.
Get-CimInstance Win32_BIOS -ComputerName (Import-Csv c:\temp\assets.csv).asset |
Select-Object @{ n='Asset'; e={ $_.PSComputerName } }, SerialNumber |
Sort-Object Asset |
Export-Csv c:\temp\assetandserial.csv -NoTypeInformation
Note the use of member-access enumeration to extract all .asset values directly from the collection of objects returned from Import-Csv.
All computer (asset) names are passed at once to Get-CimInstance, which queries them in parallel. Since the ordering of the responses from the targeted remote machines isn't guaranteed, Sort-Object is used to sort the results.
A calculated property is used with Select-Object to rename the automatically added .PSComputerName property to Asset.
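If some machines may be unreachable, a per-computer variant with error handling may be more robust (a sketch, assuming the same assets.csv layout; it trades away the parallel querying):
Import-Csv c:\temp\assets.csv | ForEach-Object {
    try {
        # Query one asset at a time so a single failure doesn't abort the run
        $bios = Get-CimInstance Win32_BIOS -ComputerName $_.asset -ErrorAction Stop
        [pscustomobject]@{ Asset = $_.asset; SerialNumber = $bios.SerialNumber }
    } catch {
        [pscustomobject]@{ Asset = $_.asset; SerialNumber = "ERROR: $($_.Exception.Message)" }
    }
} | Export-Csv c:\temp\assetandserial.csv -NoTypeInformation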

Is there an easy way to have Powershell always show the actual value?

These things drive me nuts:
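Presumably examples like the following (the originals did not survive), where an empty string and an array containing one both display as nothing:
PS C:\> ''
PS C:\> @('')
PS C:\>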
Is there an easy way to have Powershell just show me the empty string and the list with an empty string in it?
For a while now I have been maintaining ConvertTo-Expression, which converts a (complex) object into a PowerShell expression that can eventually be used to rebuild most objects. It might be useful in situations such as comparing test results, but also to reveal objects. For details see: readme.
Source and installation
The ConvertTo-Expression script can be installed from the PowerShell Gallery:
Install-Script -Name ConvertTo-Expression
As it concerns a standalone script, installation isn't really required. If you don't have administrator rights, you might just download the script (or copy it) to the required location. You might then simply invoke the script using PowerShell dot sourcing:
. .\ConvertTo-Expression.ps1
Example
The following command outputs the same expression as used to build the object:
$Object = [Ordered]@{
    'Null' = $Null
    'EmptyString' = ''
    'EmptyArray' = @()
    'SingleItem' = ,''
    'EmptyHashtable' = @{}
}
ConvertTo-Expression $Object
Note the comment from @Mathias: there's no functional difference between "one string" and "an array of one string"; the pipeline consumes one string either way. PowerShell is not Node, which is described here: PowerShell enumerate an array that contains only one inner array. Some objects might be really different than you expect.
See also: Save hash table in PowerShell object notation (PSON)
This is PowerShell, not Node, so it's not JavaScript or JSON. Also, PowerShell is not Bash or CMD or any other regular text-based shell. PowerShell works with objects, .NET objects in particular. And how objects are represented as text is ... quite a matter of taste. How to represent null? Of course: nothing. How to represent an empty string? Nothing, either. An empty array ... you get my point.
All pipeline output is by default sent to Out-Default. In general, the way objects are represented can be controlled by format files: about_Format.ps1xml and about_Types.ps1xml. From PowerShell 6 upwards, the default formats are compiled into the source code, but you can extend them. How you do so depends on your personal taste. Some options were already mentioned (ConvertTo-Json '', ConvertTo-Json @('')), but these would be very JSON-specific.
tl;dr: Don't worry too much about how objects are represented textually. As you can see, there are many possible ways to do so. Just make sure your scripts are always object-oriented.
You mean like Python's repr() function? A serialization? "Give me a canonical representation of the object that, when properly evaluated, will return an object of the type originally passed?" No, that's not possible unless you write it yourself or you serialize it to XML, JSON, or similar. That's what the *-CliXml and ConvertTo-*/ConvertFrom-* commands are for.
On Powershell 7:
PS C:\> ''.Split(',') | ConvertTo-Json -Compress -AsArray
[""]
PS C:\> '1,2,3,,5'.Split(',') | ConvertTo-Json -Compress -AsArray
["1","2","3","","5"]
The closest would be the ToString() common method. However, that's intended for formatting output and typecasting, not canonical representations or serializations. Not every object even has a string representation that can be converted back into that object. The language isn't designed with that in mind. It's based on C#, a compiled language that favors binary serializations. JavaScript requires essentially everything to have a string serialization in order to fulfill its original design. Python, too, has string representations as fundamental to its design, which is why the repr() function is foundational for serialization and introspection and so on.
C# and .NET go the other way. In a .NET application, if you want to specify a literal empty string, you're encouraged to use String.Empty ([String]::Empty in PowerShell) because it's easier to see that it's explicit. Why would you ever want a compiled application to tell the user what the canonical representation of an object in memory is? You see why that's not a useful thing for C#, even if it might be for PowerShell?
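For illustration, [String]::Empty and a literal empty string compare equal in PowerShell:
PS C:\> [string]::Empty -eq ''
True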

PowerShell: What is the point of ForEach-Object with InputObject?

The documentation for ForEach-object says "When you use the InputObject parameter with ForEach-Object, instead of piping command results to ForEach-Object, the InputObject value is treated as a single object." This behavior can easily be observed directly:
PS C:\WINDOWS\system32> ForEach-Object -InputObject @(1, 2, 3) {write-host $_}
1 2 3
This seems weird. What is the point of a "ForEach" if there is no "each" to do "for" on? Is there really no way to get ForEach-Object to act directly on the individual elements of an array without piping? If not, it seems that ForEach-Object with InputObject is completely useless. Is there something I don't understand about that?
In the case of ForEach-Object, or any cmdlet designed to operate on a collection, using -InputObject as a direct parameter doesn't make sense, because the cmdlet is designed to operate on a collection, which needs to be unrolled and processed one element at a time. However, I would also not call the parameter "useless", because it still needs to be defined so it can be set to allow input via the pipeline.
Why is it this way?
-InputObject is, by convention, a generic parameter name for what should be considered pipeline input. It's a parameter with [Parameter(ValueFromPipeline = $true)] set on it, and as such is better suited to take input from the pipeline rather than being passed as a direct argument. The main drawback of passing it as a direct argument is that the collection is not guaranteed to be unwrapped, and may exhibit some other behavior that may not be intended. From the about_pipelines page linked to above:
When you pipe multiple objects to a command, PowerShell sends the objects to the command one at a time. When you use a command parameter, the objects are sent as a single array object. This minor difference has significant consequences.
To explain the above quote in different words: passing a collection (e.g. an array or a list) through the pipeline will automatically unroll the collection and pass it to the next command one element at a time; the cmdlet does not unroll -InputObject itself. This is why you might see problems when passing a collection to the -InputObject parameter directly: because the cmdlet is probably not designed to unroll a collection itself, it expects each collection element to be handed to it in piecemeal fashion.
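A quick way to observe the difference, using the stock Measure-Object cmdlet (whose documentation describes this single-object behavior):
PS C:\> (1, 2, 3 | Measure-Object).Count
3
PS C:\> (Measure-Object -InputObject (1, 2, 3)).Count
1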
Consider the following example:
# Array of hashes with a common key
$myHash = @{name = 'Alex'}, @{name = 'Bob'}, @{name = 'Sarah'}
# This works as intended
$myHash | Where-Object { $_.name -match 'alex' }
The above code outputs the following as expected:
Name Value
---- -----
name Alex
But if you pass the collection to -InputObject directly, like this:
Where-Object -InputObject $myHash { $_.name -match 'alex' }
It returns the whole collection, because -InputObject was never unrolled as it is when passed in via the pipeline, but in this context $_.name -match 'alex' still returns true. In other words, when providing a collection as a direct parameter to -InputObject, it's treated as a single object rather than executing each time against each element in the collection. This can also give the appearance of working as expected when checking for a false condition against that data set:
Where-Object -InputObject $myHash { $_.name -match 'frodo' }
which ends up returning nothing, because even in this context frodo is not the value of any of the name keys in the collection of hashes.
In short, if something expects the input to be passed in as pipeline input, it's usually, if not always, a safer bet to do it that way, especially when passing in a collection. However, if you are working with a non-collection, then there is likely no issue if you opt to use the -InputObject parameter directly.
Bender the Greatest's helpful answer explains the current behavior well.
For the vast majority of cmdlets, direct use of the -InputObject parameter is indeed pointless and the parameter should be considered an implementation detail whose sole purpose is to facilitate pipeline input.
There are exceptions, however, such as the Get-Member cmdlet, where direct use of -InputObject allows you to inspect the type of a collection itself, whereas providing that collection via the pipeline would report information about its elements' types.
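For example, with an array: piping reports the elements' type, whereas a direct -InputObject argument reports the array's own type (TypeName is a property of the objects Get-Member emits):
PS C:\> (1, 2, 3 | Get-Member).TypeName | Select-Object -Unique
System.Int32
PS C:\> (Get-Member -InputObject (1, 2, 3)).TypeName | Select-Object -Unique
System.Object[]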
Given how things currently work, it is quite unfortunate that -InputObject features so prominently in most cmdlets' help topics, alongside "real" parameters, and that it does not frame the issue with enough clarity (as of this writing): the description should clearly convey the message "Don't use this parameter directly, use the pipeline instead".
This GitHub issue provides a categorized overview of which cmdlets process direct -InputObject arguments in which way.
Taking a step back:
While technically a breaking change, it would make sense for -InputObject parameters (or any pipeline-binding parameter) to by default accept and enumerate collections even when they're passed by direct argument rather than via the pipeline, in a manner that is transparent to the implementing command.
This would put direct-argument input on par with pipeline input, with the added benefit of the former resulting in faster processing of already-in-memory collections.

common ways to pass state from cmdlet to cmdlet

I am creating my own set of cmdlets. They all need the same state data (like location of DB and credentials for connecting to DB). I assume this must be a common need and wonder what the common idiom for doing this is.
the obvious one is something like
$db = my-make-dbcreds db=xxx creds=yyyy ...
my-verb1 $db | my-verb2 $db -foo 42...
my-verb8 $db bar wiz
.....
but I was wondering about other ways. Can I silently pipe the state from one to another? I know I can do this if state is the only thing I pipe, but these cmdlets return data.
Can I set up global variables that I use if the user doesn't specify state in the command?
Passing the state information through the pipe is a little lost on me. You could update your cmdlets to return objects that the next cmdlet will accept via ValueFromPipeline. When you mentioned
like location of DB and credentials for connecting to DB
the best thing I could think of that you want is....
SPLATTING!
Splatting is a method of passing a collection of parameter
values to a command as unit. Windows PowerShell associates
each value in the collection with a command parameter.
In its simplest form
$params = @{
    Filter = "*.txt"
    Path   = "C:\temp"
}
Get-ChildItem @params
Create a hashtable of parameters and values and splat them to the command. Then you can edit the table as each unique call to the cmdlet requires.
$params.Path = "C:\eventemperor"
Get-ChildItem @params
I changed the path but left the filter the same. You also don't have to have everything in $params; you can splat it and use other parameters in the same call.
It is just a matter of populating the variables as you see fit and changing them as the case requires.
Spewing on the pipeline
Pretty sure that is what it is actually called. If you use advanced function parameters, you can chain properties from one cmdlet to the next if you really want to. FWIW, I think splatting is better in your case, but have a look at the following.
function One {
    param(
        [Parameter(Mandatory = $true,
            ValueFromPipeline = $true,
            ValueFromPipelineByPropertyName = $true)]
        [String[]]
        $Pastry
    )
    Write-Host "You Must Like $Pastry"
    Write-Output (New-Object -TypeName PSCustomObject -Property @{Pastry = $Pastry})
    # If you have at least PowerShell 3.0
    # [pscustomobject]@{Pastry = $Pastry}
}
Simple function that writes the variable $pastry to the console but also outputs an object for the next pipe. So running the command
"Eclairs" | One | One | Out-Null
We get the following output
You Must Like Eclairs
You Must Like Eclairs
We need to pipe to Out-Null at the end else you would get this.
Pastry
------
{Eclairs}
Perhaps not the best example but you should get the idea. If you wanted to extract information between the pipe calls you could use Tee-Object.
"Eclair" | One | Tee-Object -Variable oneresults | One | Out-Null
$oneresults
Consider Parameter Default Values
Revisiting this concept after trying to find a better way to pass SQL connection information between many functions working against the same database. I am not sure if this is the best thing to do, but it certainly simplifies things for me.
The basic idea is to add a rule for your cmdlet, or a wildcard rule if your cmdlets share a naming convention. For instance, I have a series of functions that interact with our ticketing system. They all start with Get-Task... and are all configured with SQL connection information.
$invokeSQLParameters = @{
    ServerInstance = "serverName"
    Username       = $Credentials.UserName
    Password       = $Credentials.GetNetworkCredential().Password
}
$PSDefaultParameterValues.Add("New-Task*:Connection",$invokeSQLParameters)
$PSDefaultParameterValues.Add("Get-Task*:Connection",$invokeSQLParameters)
So now in my functions I have a parameter called Connection that will always be populated with $invokeSQLParameters, as long as the above is done before the call. I still use splatting as well:
Invoke-Sqlcmd -Query $getCommentsQuery @Connection
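A minimal sketch of what such a function might look like (the function name and query are hypothetical; Connection is filled in by the default rule defined above):
function Get-TaskComment {
    param(
        # Populated automatically via the "Get-Task*:Connection" default rule
        [hashtable]$Connection
    )
    # Hypothetical query; the connection hashtable is splatted onto Invoke-Sqlcmd
    Invoke-Sqlcmd -Query "SELECT * FROM TaskComments" @Connection
}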
You can read up more about this in about_Parameters_Default_Values.
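The defaults can also be suspended or removed at any time, which is standard $PSDefaultParameterValues behavior:
$PSDefaultParameterValues['Disabled'] = $true             # temporarily suspend all defaults
$PSDefaultParameterValues['Disabled'] = $false            # re-enable them
$PSDefaultParameterValues.Remove('Get-Task*:Connection')  # drop a single rule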