I searched for an answer to this before posting, so I apologise in advance if it's already here and I couldn't find it!
I'm using PowerShell to pull a heap of data from my Exchange server into a hashtable.
I have no problem formatting this hashtable to suit my own reporting needs, but now I want to put this data into Splunk (I know about the Splunk Exchange App; this is for different needs).
So that Splunk can ingest the data without any pre-processing, I need it to look like the below:
timestamp key=value,key=value,key=value,key=value
timestamp key=value,key=value,key=value,key=value
timestamp key=value,key=value,key=value,key=value
timestamp key=value,key=value,key=value,key=value
timestamp key=value,key=value,key=value,key=value
Try this:
$ht = @{one=1; two=2; three=3}
$KeysAndValues = $ht.GetEnumerator() | Foreach-Object { '{0}={1}' -f $_.Key,$_.Value }
'{0:MM/dd/yyyy} {1}' -f (Get-Date),($KeysAndValues -join ',')
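If Splunk needs the time of day as well as the date, a sortable format string is one option (a sketch; adjust the format to whatever your Splunk sourcetype expects):
'{0:yyyy-MM-dd HH:mm:ss} {1}' -f (Get-Date),($KeysAndValues -join ',')
Note that a plain hashtable enumerates in no guaranteed order, so the key=value pairs may appear in any order; use [ordered] @{...} if the order matters.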
Let's build the output string with a StringBuilder.
$sb=new-object Text.StringBuilder
# Append timestamp, here from system time
[void]$sb.Append("{0} " -f [datetime]::now.tostring("u"))
# Populate sample hashtable with some data
$ht = #{"foo"="oof"; "bar"="rab"; "baz"="zab"; "qux"="xug" }
# Enumerate the hashtable by sorted names just for fun
$ht.GetEnumerator() | sort name | % {
# Append keys and values to the stringbuilder
[void]$sb.Append($("{0}={1}," -f $_.Name, $_.Value))
}
# Get rid of the trailing comma
[void]$sb.Remove($sb.Length-1, 1)
# Print output
$sb.ToString()
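With the sample data above this prints something like the following (the "u" format yields a sortable universal timestamp; the exact value depends on your clock):
2020-01-15 08:30:00Z bar=rab,baz=zab,foo=oof,qux=xug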
I would like to write out a hash table to a file with an array as one of the hash table items. The array item is written out, but it appears as files=System.Object[]
Note - Once this works, I will want to reverse the process and read the hash table back in again.
clear-host
$resumeFile="c:\users\paul\resume.log"
$files = Get-ChildItem *.txt
$files.GetType()
write-host
$types="txt"
$in="c:\users\paul"
Remove-Item $resumeFile -ErrorAction SilentlyContinue
$resumeParms=@{}
$resumeParms['types']=$types
$resumeParms['in']=($in)
$resumeParms['files']=($files)
$resumeParms.GetEnumerator() | ForEach-Object {"{0}={1}" -f $_.Name,$_.Value} | Set-Content $resumeFile
write-host "Contents of $resumefile"
get-content $resumeFile
Results
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True True Object[] System.Array
Contents of c:\users\paul\resume.log
files=System.Object[]
types=txt
in=c:\users\paul
The immediate fix is to create your own array representation, by enumerating the elements, separating them with , and enclosing string values in '...':
# Sample input hashtable. [ordered] preserves the entry order.
$resumeParms = [ordered] @{ foo = 42; bar = 'baz'; arr = (Get-ChildItem *.txt) }
$resumeParms.GetEnumerator() |
ForEach-Object {
"{0}={1}" -f $_.Name, (
$_.Value.ForEach({
(("'{0}'" -f ($_ -replace "'", "''")), $_)[$_.GetType().IsPrimitive]
}) -join ','
)
}
Note that this represents all non-primitive .NET types as strings, by their .ToString() representation, which may or may not be good enough.
The above outputs something like:
foo=42
bar='baz'
arr='C:\Users\jdoe\file1.txt','C:\Users\jdoe\file2.txt','C:\Users\jdoe\file3.txt'
See the bottom section for a variation that creates a *.psd1 file that can later be read back into a hashtable instance with Import-PowerShellDataFile.
Alternatives for saving settings / configuration data in text files:
If you don't mind taking on a dependency on a third-party module:
Consider using the PSIni module, which uses the Windows initialization file (*.ini) file format; see this answer for a usage example.
Adding support for initialization files to PowerShell itself (not present as of 7.0) is being proposed in GitHub issue #9035.
Consider using YAML as the file format; e.g., via the FXPSYaml module.
Adding support for YAML files to PowerShell itself (not present as of 7.0) is being proposed in GitHub issue #3607.
The Configuration module provides commands to write to and read from *.psd1 files, based on persisted PowerShell hashtable literals, as you would declare them in source code.
Alternatively, you could modify the output format in the code at the top to produce such files yourself, which allows you to read them back in via
Import-PowerShellDataFile, as shown in the bottom section.
As of PowerShell 7.0 there's no built-in support for writing such a representation; that is, there is no complementary Export-PowerShellDataFile cmdlet.
However, adding this ability is being proposed in GitHub issue #11300.
If creating a (mostly) plain-text file is not a must:
The solution that provides the most flexibility with respect to the data types it supports is the XML-based CLIXML format that Export-Clixml creates, as Lee Dailey suggests, whose output can later be read with Import-Clixml.
However, this format too has limitations with respect to type fidelity, as explained in this answer.
Saving a JSON representation of the data, as Lee also suggests, via ConvertTo-Json / ConvertFrom-Json, is another option, which makes for human-friendlier output than XML, but is still not as friendly as a plain-text representation; notably, all \ chars. in file paths must be escaped as \\ in JSON.
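For instance, a minimal JSON round-trip sketch (the file name settings.json is just an example):
$ht = @{ foo = 42; bar = 'baz' }
$ht | ConvertTo-Json | Set-Content settings.json
$obj = Get-Content settings.json -Raw | ConvertFrom-Json
$obj.foo # -> 42; note that ConvertFrom-Json returns a [pscustomobject], not a [hashtable]
In PowerShell 6+ you can pass -AsHashtable to ConvertFrom-Json to get a [hashtable] back directly.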
Writing a *.psd1 file that can be read with Import-PowerShellDataFile
Within the stated constraints regarding data types - in essence, anything that isn't a number or a string becomes a string - it is fairly easy to modify the code at the top to write a PowerShell hashtable-literal representation to a *.psd1 file so that it can be read back in as a [hashtable] instance via Import-PowerShellDataFile:
As noted, if you don't mind installing a module, consider the Configuration module, which has this functionality built in.
# Sample input hashtable.
$resumeParms = [ordered] @{ foo = 42; bar = 'baz'; arr = (Get-ChildItem *.txt) }
# Create a hashtable-literal representation and save it to file settings.psd1
#"
#{
$(
($resumeParms.GetEnumerator() |
ForEach-Object {
" {0}={1}" -f $_.Name, (
$_.Value.ForEach({
(("'{0}'" -f ($_ -replace "'", "''")), $_)[$_.GetType().IsPrimitive]
}) -join ','
)
}
) -join "`n"
)
}
"# > settings.psd1
If you read settings.psd1 with Import-PowerShellDataFile settings.psd1 later, you'll get a [hashtable] instance whose entries you can access as usual and which produces the following display output:
Name Value
---- -----
bar baz
arr {C:\Users\jdoe\file1.txt, C:\Users\jdoe\file2.txt, C:\Users\jdoe\file3.txt}
foo 42
Note how the order of entries (keys) was not preserved, because hashtable entries are inherently unordered.
On writing the *.psd1 file you can preserve the key(-creation) order by declaring the input hashtable as [ordered], as shown above (which creates a System.Collections.Specialized.OrderedDictionary instance rather than a System.Collections.Hashtable), but the order is, unfortunately, lost on reading the *.psd1 file back in.
As of PowerShell 7.0, even if you place [ordered] before the opening @{ in the *.psd1 file, Import-PowerShellDataFile quietly ignores it and creates an unordered hashtable nonetheless.
This is a problem I deal with all the time and it drives me mad. I really think that there should be a function specifically for this action... so I wrote one.
function ConvertHashTo-CSV
{
Param (
[Parameter(Mandatory=$true)]
$hashtable,
[Parameter(Mandatory=$true)]
$OutputFileLocation
)
$hashtableAverage = $NULL # This will only work for hashtables where each entry is consistent. This checks for consistency.
foreach ($hashtabl in $hashtable)
{
$hashtableAverage = $hashtableAverage + $hashtabl.count # Counts the number of headings.
}
$ParityCheck = $hashtableAverage / $hashtable.count # Gets the average number of headings
if ( ($parity = $ParityCheck -is [int]) -eq $False) # If the average is not an int, the hashtable array is not consistent
{
write-host "Error. Hashtable is inconsistent" -ForegroundColor red
Start-Sleep -Seconds 5
return
}
$HashTableHeadings = $hashtable[0].GetEnumerator().name #Get the hashtable headings
$HashTableCount = ($hashtable[0].GetEnumerator().name).count #Count the headings
$HashTableString = $null # String to hold the CSV
foreach ($HashTableHeading in $HashTableHeadings) #Creates the first row containing the column headings
{
$HashTableString += $HashTableHeading
$HashTableString += ", "
}
$HashTableString = $HashTableString -replace ".{2}$" # Removes the trailing ", " added by the loop above
$HashTableString += "`n"
foreach ($hashtabl in $hashtable) #Adds the data
{
for($i=0;$i -lt $HashTableCount;$i++)
{
$HashTableString += $hashtabl[$HashTableHeadings[$i]] # Look up each value by its heading name
if ($i -lt ($HashTableCount - 1))
{
$HashTableString += ", "
}
}
$HashTableString += "`n"
}
$HashTableString | Out-File -FilePath $OutputFileLocation #writes the CSV to a file
}
To use this, copy the function into your script, run it, and then call it like so:
ConvertHashTo-CSV -hashtable $Hasharray -OutputFileLocation c:\temp\data.CSV
The code is annotated, but briefly: it steps through the array of hashtables, appends each entry to a string with the formatting required to make that string a valid CSV file, then writes the string to a file.
The main limitation is that the hashtables in the array all have to contain the same number of fields. To get around this, if a hashtable has a field that doesn't contain data, ensure it contains at least a space.
More on this can be found here : https://grumpy.tech/powershell-convert-hashtable-to-csv/
Just beginning with PowerShell. I have a text file that contains the string "CloseYear/2019" and I'm looking for a way to increment the "2019" to "2020". Any advice would be appreciated. Thank you.
If the question is how to update text within a file, you can do the following, which will replace specified text with more specified text. The file (t.txt) is read with Get-Content, the targeted text is updated with the String class Replace method, and the file is rewritten using Set-Content.
(Get-Content t.txt).Replace('CloseYear/2019','CloseYear/2020') | Set-Content t.txt
Additional Considerations:
General incrementing would require an object type that supports incrementing. You can isolate the numeric data using -split, increment it, and create a new, joined string. This solution assumes working with 32-bit integers but can be updated for other numeric types.
$str = 'CloseYear/2019'
-join ($str -split "(\d+)" | Foreach-Object {
if ($_ -as [int]) {
[int]$_ + 1
}
else {
$_
}
})
Putting it all together, the following would result in incrementing all complete numbers (123 as opposed to 1 and 2 and 3 individually) in a text file. Again, this can be tailored to target more specific numbers.
$contents = Get-Content t.txt -Raw # Raw to prevent an array output
-join ($contents -split "(\d+)" | Foreach-Object {
if ($_ -as [int]) {
[int]$_ + 1
}
else {
$_
}
}) | Set-Content t.txt
Explanation:
-split uses regex matching to split the string on the matched text, resulting in an array. By default, -split removes the matched text. Creating a capture group using () ensures the matched text is kept in the result rather than removed. \d+ is a regex construct matching a digit (\d) one or more (+) successive times.
Using the -as operator, we can test that each item in the split array can be cast to [int]. If successful, the if statement will evaluate to true, the text will be cast to [int], and the integer will be incremented by 1. If the -as operator is not successful, the pipeline object will remain as a string and just be output.
The -join operator just joins the resulting array (from the Foreach-Object) into a single string.
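A quick demo of what the capture group preserves:
'CloseYear/2019' -split "(\d+)" # -> 'CloseYear/', '2019', '' (note the empty trailing element)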
AdminOfThings' answer is very detailed and the correct answer.
I wanted to provide another answer for options.
Depending on what your end goal is, you might need to convert the date to a datetime object for future use.
Example:
$yearString = 'CloseYear/2019'
#convert to datetime
[datetime]$dateConvert = [datetime]::new((($yearString -split "/")[-1]),1,1)
#add year
$yearAdded = $dateConvert.AddYears(1)
#if you want to display "CloseYear" with the new date and write-host
$out = "CloseYear/{0}" -f $yearAdded.Year
Write-Host $out
This approach would allow you to use $dateConvert and $yearAdded as a datetime allowing you to accurately manipulate dates and cultures, for example.
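A quick check of the round trip, using the sample input above:
$dateConvert.Year # -> 2019
$yearAdded.Year # -> 2020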
I'm trying to write a simple usage logger into my script that would store information about the time when user opened the script, finished using the script and the user name.
The first part of the logger where I gather the first two data works fine and adds two necessary columns with values to the CSV file. Yet when I run the second part of the logger it does not add a new column to my existing CSV file.
#Code I will add at the very beginning of my script
$FileNameDate = Get-Date -Format "MMM_yyyy"
$FilePath = "C:\Users\Username\Desktop\Script\Logs\${FileNameDate}_MonthlyLog.csv"
$TimeStamp = (Get-Date).toString("dd/MMM/yyyy HH:mm:ss")
$UserName = [string]($env:UserName)
$LogArray = @()
$LogArrayDetails = @{
Username = $UserName
StartDate = $TimeStamp
}
$LogArray += New-Object PSObject -Property $LogArrayDetails | Export-Csv $FilePath -Notypeinformation -Append
#Code I will add at the very end of my script
$logArrayFinishDetails = @{FinishDate = $TimeStamp}
$LogCsv = Import-Csv $FilePath | Select Username, StartDate, @{$LogArrayFinishDetails} | Export-Csv $FilePath -NoTypeInformation -Append
CSV file should look like this when the script is closed:
Username StartDate FinishDate
anyplane 08/Apr/2018 23:47:55 08/Apr/2018 23:48:55
Yet it looks like this:
StartDate Username
08/Apr/2018 23:47:55 anyplane
The other weird thing is that it puts the StartDate first while I clearly stated in $LogArrayDetails that Username goes first.
Assuming that you only ever want to record the most recent run [see bottom if you want to record multiple runs] (PSv3+):
# Log start of execution.
[pscustomobject] @{ Username = $env:USERNAME; StartDate = $TimeStamp } |
Export-Csv -Notypeinformation $FilePath
# Perform script actions...
# Log end of execution.
(Import-Csv $FilePath) |
Select-Object *, @{ n='FinishDate'; e={ (Get-Date).toString("dd/MMM/yyyy HH:mm:ss") } } |
Export-Csv -Notypeinformation $FilePath
As noted in boxdog's helpful answer, using -Append with Export-Csv won't add additional columns.
However, since you're seemingly attempting to rewrite the entire file, there is no need to use
-Append at all.
So as to ensure that the old version of the file has been read in full before you attempt to replace it with Export-Csv, be sure to enclose your Import-Csv $FilePath call in (...), however.
This is not strictly necessary with a 1-line file such as in this case, but a good habit to form for such rewrites; do note that this approach is somewhat brittle in general, as something could go wrong while rewriting the file, resulting in potential data loss.
@{ n='FinishDate'; e={ (Get-Date).toString("dd/MMM/yyyy HH:mm:ss") } } is an example of a calculated property/column that is appended to the preexisting columns (*).
The other weird thing is that it puts the StartDate first while I clearly stated in $LogArrayDetails that Username goes first.
You've used a hashtable (@{ ... }) to declare the columns for the output CSV, but the order in which a hashtable's entries are enumerated is not guaranteed.
In PSv3+, you can use an ordered hashtable instead ([ordered] @{ ... }) to achieve predictable enumeration, which you also get if you convert the hashtable to a custom object by casting to [pscustomobject], as shown above.
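For example, a minimal demonstration of the difference (the keys shown are from the question):
(@{ Username = 'anyplane'; StartDate = '...' }).Keys # order not guaranteed
([ordered] @{ Username = 'anyplane'; StartDate = '...' }).Keys # Username, StartDate (declaration order preserved)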
If you do want to append to the existing file, you can use the following, but note that:
this approach does not scale well, because the entire log file is read into memory every time (and converted to objects), though limiting the entries to a month's worth should be fine.
as stated, the approach is brittle, as things can go wrong while rewriting the file; consider simply writing 2 rows per execution instead, which allows you to append to the file line by line.
there's no concurrency management, so the assumption is that only ever one instance of the script is run at a time.
$FilePath = './t.csv'
$TimeStamp = (Get-Date).toString("dd/MMM/yyyy HH:mm:ss")
$env:USERNAME = $env:USER
# Log start of execution. Note the empty 'FinishDate' property
# to ensure all rows ultimately have the same column structure.
[pscustomobject] @{ Username = $env:USERNAME; StartDate = $TimeStamp; FinishDate = '' } |
Export-Csv -Notypeinformation -Append $FilePath
# Perform script actions...
# Log end of execution:
# Read the entire existing file...
$logRows = Import-Csv $FilePath
# ... update the last row's .FinishDate property
$logRows[-1].FinishDate = (Get-Date).toString("dd/MMM/yyyy HH:mm:ss")
# ... and rewrite the entire file, keeping only the last 30 entries
$logRows[-30..-1] | Export-Csv -Notypeinformation $FilePath
Because your CSV already has a structure (i.e. defined headers), PowerShell honours this when appending and doesn't add additional columns. It is (sort of) explained in this excerpt from the Export-Csv help:
When you submit multiple objects to Export-CSV, Export-CSV organizes
the file based on the properties of the first object that you submit.
If the remaining objects do not have one of the specified properties,
the property value of that object is null, as represented by two
consecutive commas. If the remaining objects have additional
properties, those property values are not included in the file.
You could include the FinishDate property in the original file (even though it would be empty), but the best option might be to export your output to a different CSV at the end, perhaps deleting the original after import then recreating it with the additional data. In fact, just removing the -Append will likely give the result you want.
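A minimal sketch of that no-append rewrite, reusing the question's $FilePath and $TimeStamp variables:
$LogCsv = Import-Csv $FilePath |
Select Username, StartDate, @{ n = 'FinishDate'; e = { $TimeStamp } }
$LogCsv | Export-Csv $FilePath -NoTypeInformation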
I have a geojson file that needs to be submitted to an API. I am modifying a preexisting powershell script to execute this query but I am having trouble getting the geojson to parse correctly in the query string to pass to the API. Powershell is not my language at all but I've been able to get it to read the geojson. My print statement in my code looks like this:
$inputjson = Get-Content -Raw -Path C:/path/to/file.geojson | ConvertFrom-Json
Foreach ($feature in $inputjson.features){
$gjson = $feature.geometry
Write-Host $gjson
}
My output is then:
@{type=Polygon; coordinates=System.Object[]}
I have tried ToString() and even casting $gjson as a string to try to force this to read as it appears in the file. In Python I can do this easily enough, but this is a complex script I don't have the time to rewrite from scratch. How do I get this to translate to a string correctly? What exactly does that '@' decorator connote in a JSON subfield in PowerShell?
The point is that GeoJSON is not a flat object. This means that you have to (recursively) iterate through each embedded object to get each contained subitem:
$inputjson = Get-Content -Raw -Path C:/path/to/file.geojson | ConvertFrom-Json
Foreach ($feature in $inputjson.features){
$gjson = $feature.geometry
Write-Host "Type = " $gjson.Type
Foreach ($coordinate in $gjson.coordinates){
Write-Host "coordinate = " $coordinate
}
}
Maybe this will help you: $inputjson | Flatten, see: https://stackoverflow.com/a/46081131/1701026
@{key=value} is a hashtable.
It's not clear what you are trying to achieve; maybe you want to reconvert your geometry to JSON?
If so,
$feature.geometry | ConvertTo-Json
is what you need
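One caveat: ConvertTo-Json's default -Depth of 2 truncates the nested coordinate arrays of a Polygon, so pass a larger depth explicitly, e.g.:
$feature.geometry | ConvertTo-Json -Depth 5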
I am using a text file as the backend for an application that I am developing. I first started off leaving the text file in a human-readable format but decided there was no sense in that and figured it would be best to leave out formatting.
Where I am now in the backend dev process is creating a single-line hashtable with identical keys but different values for each entry. Seems logical and easy to work with.
Here is a mock-up of the entries in the text file:
@{'bName'='1xx'; 'bTotal'='1yy'; 'bSet'='1zz'}
@{'bName'='2xx'; 'bTotal'='2yy'; 'bSet'='2zz'}
@{'bName'='3xx'; 'bTotal'='3yy'; 'bSet'='3zz'}
As you can see, the keys for each entry are identical, however, the values are going to be different. (The numerical and repetitious nature of the values are purely coincidental and put in place for the sake of a mock-up. Actual values will not be numerically-oriented and won't be repetitious as seen in the example.)
I am able to access keys and values by typing:
$hash = Get-Content .\Desktop\Test.txt | Out-String | iex
which outputs:
Name Value
---- -----
bName 1xx
bTotal 1yy
bSet 1zz
bName 2xx
bTotal 2yy
bSet 2zz
bName 3xx
bTotal 3yy
bSet 3zz
What I ultimately want to do is gather each of the values for bName, bTotal, and bSet so that I can append each to a separate WinForms ComboBox. The WinForms part will be simple, I am just having a bit of an issue with getting the values from each hashtable in the text file.
I tried:
$hash.Values | ?{$hash.Keys -contains 'bName'}
but it just prints out every $hash.Value regardless of the $hash.Key match given in the pipe.
I understand that $hash is an array and I figured I may have to pipe out each iteration in a foreach ($hash | %{}) loop but I'm not quite sure the correct way to do this. For example, when I try:
$hash | $_.Keys
or
$hash | $_.Values
it isn't treating each iteration like a hashtable.
What am I doing wrong here? Am I going about it in a convoluted way while there is a much easier way to accomplish this? I am open to all sorts of ideas or suggestions.
As an afterthought: It is kind of funny how often an obvious solution presents itself when you step away and divert your attention towards something else.
I went to grab lunch and I can't, for the life of me, begin to comprehend why I didn't realize that I could just very easily do this:
$hash.bName
or:
$hash.bTotal
or:
$hash.bSet
That will do exactly what I was wanting to do. However, considering the answers provided, I may go a different route and use an .ini file or CSV format rather than creating an array of hashtables.
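For the record, $hash.bName works because of PSv3+ member-access enumeration, which collects that key's value from each hashtable in the array:
$hash.bName # -> 1xx, 2xx, 3xx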
One way of storing hashtables in a text file is the INI format.
[hashtable1]
bName=1xx
bTotal=1yy
bSet=1zz
[hashtable2]
bName=2xx
bTotal=2yy
bSet=2zz
[hashtable3]
bName=3xx
bTotal=3yy
bSet=3zz
INI files are basically a hashtable of hashtables in text form. They can be read like this:
$ht = @{}
Get-Content 'C:\path\to\hashtables.txt' | ForEach-Object {
$_.Trim()
} | Where-Object {
$_ -notmatch '^(;|$)'
} | ForEach-Object {
if ($_ -match '^\[.*\]$') {
$section = $_ -replace '\[|\]'
$ht[$section] = @{}
} else {
$key, $value = $_ -split '\s*=\s*', 2
$ht[$section][$key] = $value
}
}
and written like this:
$ht.Keys | ForEach-Object {
'[{0}]' -f $_
foreach ($key in $ht[$_].Keys) {
'{0}={1}' -f $key, $ht[$_][$key]
}
} | Set-Content 'C:\path\to\hashtables.txt'
Individual values in such a hashtable of hashtables can be accessed like this:
$ht['section']['key']
or like this:
$ht.section.key
Another option would be to store each hashtable in a separate file
hashtable1.txt:
bName=1xx
bTotal=1yy
bSet=1zz
hashtable2.txt:
bName=2xx
bTotal=2yy
bSet=2zz
hashtable3.txt:
bName=3xx
bTotal=3yy
bSet=3zz
That would allow you to import each file into a hashtable via ConvertFrom-StringData:
$ht1 = Get-Content 'C:\path\to\hashtable1.txt' | Out-String |
ConvertFrom-Stringdata
Writing the files would basically be the same as above (there is no ConvertTo-StringData cmdlet):
$ht1.Keys | ForEach-Object {
'{0}={1}' -f $_, $ht1[$_]
} | Set-Content 'C:\path\to\hashtable1.txt'
PowerShell has built-in CSV handling, which makes it a good choice in this case. So, assuming you had your data stored in a file in the standard CSV format with headers:
"bName","bTotal","bSet"
"1xx","1yy","1zz"
"2xx","2yy","2zz"
"3xx","3yy","3zz"
Then you import your data like this:
$data = Import-Csv $path
Now you have an array of PsCustomObject and each header in the csv file is a property of the object. So if, for example, you wanted to get the bTotal of the second object you would do the following:
$data[1].bTotal
2yy
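As for the WinForms part the question mentions, a minimal sketch of feeding those values into combo boxes ($comboName is a hypothetical control created elsewhere in your form code):
Add-Type -AssemblyName System.Windows.Forms
$comboName = New-Object System.Windows.Forms.ComboBox
$comboName.Items.AddRange([object[]]$data.bName) # member-access enumeration collects bName from every row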