How to make netstat output's headings show properly in out-gridview? - powershell

When I use:
netstat -f | out-gridview
in PowerShell 7.3, I get the window, but it has only one column, which is a string. I don't know why it's not properly creating a column for each of the headings like Proto, Local Address, etc.
How can I fix this?

While commenter Toni makes a good point to use Get-NetTCPConnection | Out-GridView instead, this answer addresses the question as asked.
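For reference, a minimal sketch of that object-based alternative (a Windows-only cmdlet; the properties selected here are just a sample):
# Emits objects, so Out-GridView gets a real column per property - no text parsing needed.
Get-NetTCPConnection |
    Select-Object LocalAddress, LocalPort, RemoteAddress, RemotePort, State |
    Out-GridView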
To be able to show output of netstat in grid view, we have to parse its textual output into objects.
Fortunately, all fields are separated by at least two space characters, so after replacing these with a comma we can simply use ConvertFrom-Csv (thanks to an idea from commenter Doug Maurer).
netstat -f |
    # Skip unwanted lines at the beginning
    Select-Object -Skip 3 |
    # Replace two or more whitespace characters with a comma, except at the start of a line
    ForEach-Object { $_ -replace '(?<!^)\s{2,}', ',' } |
    # Convert into objects and show them in grid view
    ConvertFrom-Csv | Out-GridView
For a detailed explanation of the RegEx pattern used with the -replace operator, see this RegEx101 demo page.
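For a quick illustration, here is the pattern applied to a single made-up netstat line; the (?<!^) lookbehind keeps the leading indentation from becoming a column separator:
'  TCP    192.168.1.10:50000     151.101.1.69:443       ESTABLISHED' -replace '(?<!^)\s{2,}', ','
# ->   TCP,192.168.1.10:50000,151.101.1.69:443,ESTABLISHED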
This is the code of my original answer, which is functionally equivalent. I'll keep it as an example of how choosing the right tool for the job can greatly simplify code.
$headers = @()
# Skip first 3 lines of output which we don't need
netstat -f | Select-Object -Skip 3 | ForEach-Object {
    # Split each line into columns
    $columns = $_.Trim() -split '\s{2,}'
    if( -not $headers ) {
        # First line is the header row
        $headers = $columns
    }
    else {
        # Create an ordered hashtable
        $objectProperties = [ordered] @{}
        $i = 0
        # Loop over the columns and use the header columns as property names
        foreach( $key in $headers ) {
            $objectProperties[ $key ] = $columns[ $i++ ]
        }
        # Convert the hashtable into an object that can be shown by Out-GridView
        [PSCustomObject] $objectProperties
    }
} | Out-GridView

Related

Powershell: Import-csv, rename all headers

In our company there are many users and many applications with restricted access, and a database with evidence of those accesses. I don't have access to that database, but what I do have is an automatically generated (once a day) CSV file with all accesses of all my users. I want them to have a chance to check their access situation, so I am writing a simple PowerShell script for this purpose.
CSV:
user;database1_dat;database2_dat;database3_dat
john;0;0;1
peter;1;0;1
I can do:
import-csv foo.csv | where {$_.user -eq $user}
But this will show me the original ugly headers (with the "_dat" suffix). Can I delete the last four characters from every header which ends with "_dat", when I can't predict how many headers there will be tomorrow?
I am aware of calculated property like:
Select-Object @{ expression={$_.database1_dat}; label='database1' }
but I have to know all column names for that, as far as I know.
Am I condemned to "over-engineer" it with a separate function and build the whole "calculated property expression" from scratch dynamically, or is there a simple way I am missing?
Thanks :-)
Assuming that file foo.csv fits into memory as a whole, the following solution performs well:
If you need a memory-throttled - but invariably much slower - solution, see Santiago Squarzon's helpful answer or the alternative approach in the bottom section.
$headerRow, $dataRows = (Get-Content -Raw foo.csv) -split '\r?\n', 2
# You can pipe the result to `where {$_.user -eq $user}`
ConvertFrom-Csv ($headerRow -replace '_dat(?=;|$)'), $dataRows -Delimiter ';'
Get-Content -Raw reads the entire file into memory, which is much faster than reading it line by line (the default).
-split '\r?\n', 2 splits the resulting multi-line string into two: the header line and all remaining lines.
Regex \r?\n matches a newline (both a CRLF (\r\n) and a LF-only newline (\n))
, 2 limits the number of tokens to return to 2, meaning that splitting stops once the 1st token (the header row) has been found, and the remainder of the input string (comprising all data rows) is returned as-is as the last token.
The two resulting tokens are assigned to separate variables, $headerRow and $dataRows, via a multi-assignment (destructuring assignment).
$headerRow -replace '_dat(?=;|$)'
-replace '_dat(?=;|$)' uses a regex to remove any _dat column-name suffixes (followed by a ; or the end of the string); if substring _dat only ever occurs as a name suffix (not also inside names), you can simplify to -replace '_dat'
ConvertFrom-Csv directly accepts arrays of strings, so the cleaned-up header row and the string with all data rows can be passed as-is.
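For example, applied to the header row from the question:
'user;database1_dat;database2_dat;database3_dat' -replace '_dat(?=;|$)'
# -> user;database1;database2;database3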
Alternative solution: algorithmic renaming of an object's properties:
Note: This solution is slow, but may be an option if you only extract a few objects from the CSV file.
As you note in the question, use of Select-Object with calculated properties is not an option in your case, because you neither know the column names nor their number in advance.
However, you can use a ForEach-Object command in which you use .psobject.Properties, an intrinsic member, for reflection on the input objects:
Import-Csv -Delimiter ';' foo.csv | where { $_.user -eq $user } | ForEach-Object {
    # Initialize an aux. ordered hashtable to store the renamed
    # property name-value pairs.
    $renamedProperties = [ordered] @{}
    # Process all properties of the input object and
    # add them with cleaned-up names to the hashtable.
    foreach ($prop in $_.psobject.Properties) {
        $renamedProperties[($prop.Name -replace '_dat$')] = $prop.Value
    }
    # Convert the aux. hashtable to a custom object and output it.
    [pscustomobject] $renamedProperties
}
You can do something like this:
$textInfo = (Get-Culture).TextInfo
$headers = (Get-Content .\test.csv | Select-Object -First 1).Split(';') |
    ForEach-Object {
        $textInfo.ToTitleCase($_) -replace '_dat'
    }
$user = 'peter'
Get-Content .\test.csv | Select-Object -Skip 1 |
    ConvertFrom-Csv -Delimiter ';' -Header $headers |
    Where-Object User -EQ $user
User  Database1 Database2 Database3
----  --------- --------- ---------
peter 1         0         1
Not super efficient but does the trick.

Powershell CSV removing rows and then remove from whole file if A column matches

I've created the following small script to remove rows containing either of 2 strings from a CSV.
Each row is a log of a given person and an answer they give.
The CSV has X columns.
The column named FIRST identifies the person.
What I need to do: when I delete a row matching an answer, I also need to delete that person from the whole CSV if they had one of the two strings.
What I've made so far removes the rows of people having those answers, but the person is still left in the overall CSV with their other answers. I want to remove the person entirely if those questions have been answered.
Can somebody help me out with making the addition or changes to make this happen?
INPUT File
FIRST,LAST,ADDR,ADDR2,GENDER,HOME,WORK
1,N/A,N/A,N/A,N/A,BAF,N/A
10005,JAS,AA,N/A,,ZAV,N/A
10007,JADE,BB,N/A,OMA,N/A,N/A
10007,JADE,N/A,RAV,N/A,N/A,N/A
10011,KIAH,N/A,N/A,BALI,BB,N/A
SCRIPT
$CSVfile = "C:\Temp\Test\Test.csv"
$CSVfile_filtered = "C:\Temp\Test\Test.csv"
$regex001 = "AA"
$regex002 = "BB"
$filterArray = @($regex001,$regex002)
Get-Content $CSVfile | Select-String -pattern $filterArray -notmatch | Set-Content $CSVfile_filtered
The result should then have 10005, 10011 and both lines of 10007 removed. But my version only removes one of the 10007 lines, since it only matches one of the two patterns.
Using more of PowerShell's built-in cmdlets can make this a little easier to manage.
# Assuming searching only properties ADDR and ADDR2
$filter = 'AA','BB'
# Grouping by First and Last values to easily remove duplicates
# -match uses regex so | is needed for an OR of multiple items
Import-Csv Test.csv | Group-Object First,Last |
    Where {!($_.Group.ADDR,$_.Group.ADDR2 -match ($filter -join '|'))} |
    Foreach-Object Group |
    Export-Csv output.csv -NoType
You would think strictly using text manipulation would be simpler, but it adds other scenarios to consider:
You will need to track users that have duplicate entries and potentially back track to remove them (if not grouping). This could require reading the file contents twice.
Your header row could match the string you want to filter so you will need to add it to the output if filtering removes it.
Keeping the scenarios above in mind, you can still use a grouping concept:
$filter = 'AA','BB'
$file = Get-Content Test.csv
# $file[0] is the header row
# -split string uses regex and splits at the second comma
# -split results' [0] element is First,Last values
$file[0],($file |
    Select-Object -Skip 1 |
    Group-Object {($_ -split '(?<=^[^,]*,[^,]*),')[0]} |
    where {!($_.Group -match ($filter -join '|'))} |
    Foreach-Object Group) | Set-Content output.csv
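To illustrate the grouping key, here is the split expression applied to one of the sample rows; the lookbehind allows exactly one preceding comma, so the string is split only at the second comma:
('10005,JAS,AA,N/A,,ZAV,N/A' -split '(?<=^[^,]*,[^,]*),')[0]
# -> 10005,JAS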
If I got it right you could do something like this:
$SearchPattern = 'AA', 'BB'
$INPUTCSV = @'
FIRST,LAST,ADDR,ADDR2,GENDER,HOME,WORK
1,N/A,N/A,N/A,N/A,BAF,N/A
10005,JAS,AA,N/A,,ZAV,N/A
10007,JADE,BB,N/A,OMA,N/A,N/A
10007,JADE,N/A,RAV,N/A,N/A,N/A
10011,KIAH,N/A,N/A,BALI,BB,N/A
'@ | ConvertFrom-Csv
$ActualSearchPattern =
    $INPUTCSV |
        Where-Object {
            $_.LAST -in $SearchPattern -or
            $_.ADDR -in $SearchPattern -or
            $_.ADDR2 -in $SearchPattern -or
            $_.GENDER -in $SearchPattern -or
            $_.HOME -in $SearchPattern -or
            $_.Work -in $SearchPattern
        } |
        Select-Object -ExpandProperty FIRST

$INPUTCSV |
    Where-Object -Property FIRST -NotIn -Value $ActualSearchPattern |
    Format-Table -AutoSize
There might be more sophisticated or more elegant ways, but I cannot think of one at the moment. ;-)
There is a nice PowerShell module you can use to manipulate the content of a CSV or XLSX file: ImportExcel
This gives you a lot of options to manipulate the sheets, columns, etc.
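A minimal sketch, assuming the module is installed and the same data lives in an .xlsx worksheet (file names here are hypothetical):
Install-Module ImportExcel -Scope CurrentUser   # one-time setup
Import-Excel .\Test.xlsx |
    Where-Object { 'AA','BB' -notcontains $_.ADDR -and 'AA','BB' -notcontains $_.ADDR2 } |
    Export-Excel .\Filtered.xlsx
Note that this drops matching rows only; to drop a person entirely, you would still combine it with the grouping logic from the answers above.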

How to convert filecontent using powershell

I have a log file with a weird format that I would like to convert to a table. Each line contains multiple key-value pairs (the same pairs on each row). I want to convert these rows so that each property becomes a column in a table containing the value from the row.
Note that the original log file contains 39 properties on each row, and the log file is about 80 MB.
Example rows:
date=2019-12-02 srcip=8.8.8.8 destip=8.8.4.4 srcintf="port2"
date=2019-12-01 srcip=8.8.8.8 destip=8.8.4.4 srcintf="xyz abc"
date=2019-12-03 srcip=8.8.8.8 destip=8.8.4.4 srcintf="port2"
date=2019-12-05 srcip=8.8.8.8 destip=8.8.4.4 srcintf="port2"
date=2019-12-07 srcip=8.8.8.8 destip=8.8.4.4 srcintf="port2"
I have tried:
Get-Content .\testfile.log | select -First 10 | ConvertFrom-String | select p1, p2, p3 | ft | Format-Wide
But this will not turn the property names into the column names. So in this example I want P1 to be date, p2 srcip, and p3 destip, and the first part of each value to be removed.
Anyone have any tips or creative ideas how to convert this to a table?
ConvertFrom-String provides separator-based parsing as well as heuristics-based parsing based on templates containing example values. The separator-based parsing applies automatic type conversions you cannot control, and the template language is poorly documented, with the exact behavior hard to predict - it's best to avoid this cmdlet altogether. Also note that it's not available in PowerShell [Core] v6+.
Instead, I suggest an approach based on the switch statement[1] and the -split operator to create a collection of custom objects ([pscustomobject]) representing the log lines:
# Use $objects = switch ... to capture the generated objects in a variable.
switch -File .\testfile.log { # Loop over all file lines
    default {
        $oht = [ordered] @{ } # Define an aux. ordered hashtable
        foreach ($keyValue in -split $_) { # Loop over key-value pairs
            $key, $value = $keyValue -split '=', 2 # Split pair into key and value
            $oht[$key] = $value -replace '^"|"$' # Add to hashtable with "..." removed
        }
        [pscustomobject] $oht # Convert to custom object and output.
    }
}
Note:
The above assumes that your values have no embedded spaces; if they do, more work is needed - see next section.
To capture the generated custom objects in a variable, simply use $objects = switch ...
With two or more log lines, $objects becomes an [object[]] array of [pscustomobject] instances. If you want to ensure that $objects also becomes an array even if there happens to be just one log line, use [array] $objects = switch ... ([array] is effectively the same as [object[]]).
To directly send the output objects through the pipeline to other cmdlets, enclose the switch statement in & { ... }, as shown in the sketch after the sample output below.
With your sample input, this yields:
date       srcip   destip  srcintf
----       -----   ------  -------
2019-12-02 8.8.8.8 8.8.4.4 port2
2019-12-01 8.8.8.8 8.8.4.4 port2
2019-12-03 8.8.8.8 8.8.4.4 port2
2019-12-05 8.8.8.8 8.8.4.4 port2
2019-12-07 8.8.8.8 8.8.4.4 port2
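Putting the last two notes together, wrapping the switch statement in & { ... } lets you stream the parsed objects straight to other cmdlets; parsed.csv below is a hypothetical output name:
& {
    switch -File .\testfile.log {
        default {
            $oht = [ordered] @{ }
            foreach ($keyValue in -split $_) {
                $key, $value = $keyValue -split '=', 2
                $oht[$key] = $value -replace '^"|"$'
            }
            [pscustomobject] $oht
        }
    }
} | Export-Csv .\parsed.csv -NoTypeInformation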
Variant with support for values with embedded spaces inside "..." (e.g., srcintf="port 2"):
switch -File .\testfile.log {
    default {
        $oht = [ordered] @{ }
        foreach ($keyValue in $_ -split '(\w+=(?:[^"][^ ]*|"[^"]*"))' -notmatch '^\s*$') {
            $key, $value = $keyValue -split '=', 2
            $oht[$key] = $value -replace '^"|"$'
        }
        [pscustomobject] $oht
    }
}
Note that there's no support for embedded escaped " instances (e.g., srcintf="port \"2\"" won't work).
Explanation:
$_ -split '(\w+=(?:[^"][^ ]*|"[^"]*"))' splits by a regex that matches key=valueWithoutSpaces and key="value that may have spaces" tokens and, by virtue of enclosing the expression in (...) (creating a capture group), includes these "separators" in the tokens that -split outputs (by default, separators aren't included).
-notmatch '^\s*$' then weeds out empty and all-spaces tokens from the result (the "data tokens", which aren't of interest in our case), leaving effectively just the key-value pairs.
$key, $value = $keyValue -split '=', 2 splits the given key-value token by = into at most 2 tokens, and uses a destructuring assignment to assign the key and the value to separate variables.
$oht[$key] = $value -replace '^"|"$' adds an entry to the aux. hashtable with the key and value at hand, where -replace '^"|"$' uses the -replace operator to remove " from the beginning and end of the value, if present.
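To see the split-with-capture-group step in isolation, here it is applied to a single made-up line:
'date=2019-12-01 srcintf="xyz abc"' -split '(\w+=(?:[^"][^ ]*|"[^"]*"))' -notmatch '^\s*$'
# -> date=2019-12-01
#    srcintf="xyz abc"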
[1] switch -File is a flexible and much faster alternative to processing a file line by line with a combination of Get-Content and ForEach-Object.
So what you could do is cut each line into key-value pairs and pass those to ConvertFrom-StringData instead. There are a couple of caveats with this approach. To keep it simple, it assumes your source data is space-delimited; this would break if your real data contained spaces (which can be mitigated). The other obvious caveat is that you can't guarantee property order.
Get-Content c:\temp\so.txt | ForEach-Object{
    [PSCustomObject](($_ -split " ") -join "`r`n" | ConvertFrom-StringData)
} | Select-Object date, srcip, destip, srcintf
Output:
date       srcip   destip  srcintf
----       -----   ------  -------
2019-12-02 8.8.8.8 8.8.4.4 "port2"
2019-12-01 8.8.8.8 8.8.4.4 "port2"
2019-12-03 8.8.8.8 8.8.4.4 "port2"
2019-12-05 8.8.8.8 8.8.4.4 "port2"
2019-12-07 8.8.8.8 8.8.4.4 "port2"
OK, for the purposes of discussion, I am going to assume the following:
The data is in a file PSDATA.TXT
There are no spaces in the data other than the spaces separating the name-value pairs.
It is acceptable for the resulting tabular data to treat all the values as strings.
Given that...
Get-Content -Path PSDATA.TXT |
    ForEach-Object {$_ -replace ' ','";' -replace '=','="' -replace '""','"'} |
    ForEach-Object {New-Object PSObject -Property (Invoke-Expression ("[Ordered]@{{{0}}}" -f $_))}
... will generate a table where each line in the file becomes a PSObject, with fields taking their names from the name in each name-value pair and the associated values being the values of those fields, as strings. If you're not using PowerShell v4 or later (I'm not sure about 3), you can omit [Ordered], with the side effect that the order of the fields in the PSObject won't necessarily match their order in the file.
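To make the string transformation concrete, here is a sample line after the three -replace operations, before Invoke-Expression sees it (note that this relies on the last field already being quoted, as in the sample data; otherwise the closing quote would be missing):
'date=2019-12-02 srcip=8.8.8.8 destip=8.8.4.4 srcintf="port2"' -replace ' ','";' -replace '=','="' -replace '""','"'
# -> date="2019-12-02";srcip="8.8.8.8";destip="8.8.4.4";srcintf="port2"
# Wrapped in [Ordered]@{...}, this is a valid hashtable literal.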
If you wanted to have an array of these PSObjects for further processing, you could wrap the whole line above in a variable assignment, e.g., $A=(«that whole thing above, on one line»), and if you wanted to send it to a CSV file, you could just add | Export-CSV -path NewCSVFile.CSV to the end.
I would prefer a DataTable, so you can easily sort, filter, merge, etc. the log file:
$logFilePath = 'C:\test\test.log'
$dt = New-Object system.Data.DataTable
[void]$dt.Columns.Add('P1',[string]::empty.GetType() )
[void]$dt.Columns.Add('P2',[string]::empty.GetType() )
[void]$dt.Columns.Add('P3',[string]::empty.GetType() )
foreach( $line in [System.IO.File]::ReadLines($logFilePath) )
{
    $tokenArray = $line -split '[= ]'
    $row = $dt.NewRow()
    $row.P1 = $tokenArray[1]
    $row.P2 = $tokenArray[3]
    $row.P3 = $tokenArray[5]
    [void]$dt.Rows.Add( $row )
}
$dt

Powershell Remove spaces in the header only of a csv

The first line of the CSV looks like this; there are spaces after Path as well:
author ,Revision ,Date ,SVNFolder ,Rev,Status,Path
I am trying to remove the spaces only; the rest of the content should remain the same:
author,Revision,Date,SVNFolder,Rev,Status,Path
I tried the following:
Import-CSV .\script.csv | ForEach-Object {$_.Trimend()}
Expanding on the comment with an example, since it looks like you may be new:
$text = get-content .\script.csv
$text[0] = $text[0] -replace " ", ""
$csv = $text | ConvertFrom-CSV
Note: The solutions below avoid loading the entire CSV file into memory.
First, get the header row and fix it by removing all whitespace from it:
$header = (Get-Content -TotalCount 1 .\script.csv) -replace '\s+'
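For example, applied to the header row from the question:
'author ,Revision ,Date ,SVNFolder ,Rev,Status,Path ' -replace '\s+'
# -> author,Revision,Date,SVNFolder,Rev,Status,Path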
If you want to rewrite the CSV file to fix its header problem:
# Write the corrected header and the remaining lines to the output file.
# Note: I'm outputting to a *new* file, to be safe.
# If the file fits into memory as a whole, you can enclose
# Get-Content ... | Select-Object ... in (...) and write back to the
# input file, but note that there's a small risk of data loss, if
# writing back gets interrupted.
& { $header; Get-Content .\script.csv | Select-Object -Skip 1 } |
    Set-Content -Encoding utf8 .\fixed.csv
Note: I've chosen -Encoding utf8 as the example output character encoding; adjust as needed, but note that Windows PowerShell's default for Set-Content is the system's ANSI code page, which can result in data loss.
If you just want to import the CSV using the fixed headers:
& { $header; Get-Content .\script.csv | Select-Object -Skip 1 } | ConvertFrom-Csv
As for what you tried:
Import-Csv uses the column names in the header as property names of the custom objects it constructs from the input rows.
These property names are locked in at the time the file is read, and cannot be changed later - unless you explicitly construct new custom objects from the old ones with the property names trimmed.
Import-Csv ... | ForEach-Object {$_.Trimend()}
Since Import-Csv outputs [pscustomobject] instances, reflected one by one in $_ in the ForEach-Object block, your code tries to call .TrimEnd() directly on them, which will fail (because only [string] instances have such a method).
Aside from that, as stated, your goal is to trim the property names of these objects, and that cannot be done without constructing new objects.
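If you did want to construct new objects, a minimal sketch using reflection via .psobject.Properties (the same technique shown earlier on this page) could look like this:
Import-Csv .\script.csv | ForEach-Object {
    $trimmed = [ordered] @{}
    # Copy each property over, trimming whitespace from the *name* only.
    foreach ($prop in $_.psobject.Properties) {
        $trimmed[$prop.Name.Trim()] = $prop.Value
    }
    [pscustomobject] $trimmed
}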
Read the whole file into an array:
$a = Get-Content test.txt
Replace the spaces in the first array element ([0]) with empty strings:
$a[0] = $a[0] -replace " ", ""
Write over the original file: (Don't forget backups!)
$a | Set-Content test.txt
$inFilePath = "C:\temp\headerwithspaces.csv"
$content = Get-Content $inFilePath
$csvColumnNames = ($content | Select-Object -First 1) -Replace '\s',''
$remainingFile = ($content | Select-Object -Skip 1)

Adding numbers into 2 totals and put into each its own variable

Hope you can help me with this little puzzle.
I have ONE txt file looking like this:
firstnumbers
348.92
237
230
329.31
secondnumbers
18.21
48.92
37
30
29.31
So, a txt file with one column that has 2 strings and some numbers on each line.
I want to take the total of each set of numbers and put it into its own variable, say $a and $b.
Yes, it is 1 column; just to make sure there's no misunderstanding.
It's pretty easy if I use 2 files, each with a column of numbers without the headers (strings):
$a = (Get-Content 'firstnumbers.txt' | Measure-Object -Sum).Sum
$b = (Get-Content 'secondnumbers.txt' | Measure-Object -Sum).Sum
But it would be a little more cool to have them in one txt file, like the aforementioned with a header over each row of numbers.
I've tried removing the headers with, e.g., $a.Replace("first", $null).Replace("sec", $null) and then doing $b.Split(" ")[1,2,3,4,5], ending with | measure -sum.
That gives me the correct sum of firstnumbers - but it only works if I keep that specific set of numbers each time. They'll change, and there are going to be more or fewer of them.
It should be pretty easy, I'm guessing. I just can't seem to wrap my head around it at the moment.
Any advice would be awesome!
cheers
Something like this should work:
$file = "C:\path\to\your.txt"
[IO.File]::ReadAllText($file) | % {
    $_ -replace "`n+([0-9])", ' $1' -split "`n"
} | ? { $_ -ne "" } | % {
    $a = $_ -split " ", 2
    $v = $a[1] -split " " | Measure-Object -Sum
    "{0}`t{1}" -f ($a[0], $v.Sum)
}
Output:
firstnumbers 1145,23
secondnumbers 163,44
Here's another approach. Rather than parsing the text as one big blob, you can test each line to see whether it contains a number or text; if it's text, it triggers the creation of a new entry in the hashtable where the sums are stored:
PS C:\Temp> Get-Content .\numbers.txt | foreach{
    $val = 0
    if([Decimal]::TryParse($_, [ref]$val)){
        $sums[$key] += $val
    } else {
        $sums += @{"$_" = 0} # add new entry to hashtable
        $key = $_
    }
} -end {$sums}
Name          Value
----          -----
secondnumbers 163.44
firstnumbers  1145.23
Edit: As noted in the comments, the $sums variable persists between runs, which causes problems if you run this command twice. You could call Remove-Variable sums after each run, or add it to the -End processing block like this:
PS C:\Temp> Get-Content .\numbers.txt | foreach{
    $val = 0
    if([Decimal]::TryParse($_, [ref]$val)){
        $sums[$key] += $val
    } else {
        $sums += @{"$_" = 0} # add new entry to hashtable
        $key = $_
    }
} -end {$sums; Remove-Variable sums}
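A variant of the same idea initializes the accumulator in a -Begin block, so nothing lingers between runs. This is just a sketch (it assumes the numbers.txt layout from the question and doesn't handle blank lines):
Get-Content .\numbers.txt | ForEach-Object -Begin {
    $sums = [ordered] @{}   # fresh accumulator on every run
} -Process {
    $val = 0
    if ([Decimal]::TryParse($_, [ref] $val)) {
        $sums[$key] += $val   # numeric line: add to the current header's total
    } else {
        $sums[$_] = 0         # text line: start a new group
        $key = $_
    }
} -End { $sums }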