$outFile = "C:\PS logs\Outlook_autofill\test.csv"
$tests = Import-Csv -Header username,firstname,surname,pcname,5,6,7,8,9,10,11,12,13,14,15 $outFile |
sort -Property @{Expression="username";Descending=$true}, @{Expression="pcname";Descending=$false}
$tests[0]
for ($i=1; $i -le $tests.length -1; $i++)
{
if ($tests[$i]."username" -eq $tests[$i-1]."username" -AND $tests[$i]."pcname" -eq $tests[$i-1]."pcname")
{
continue
}
else {$tests[$i]}
}
I managed to download the code from a site on the Internet and get it working, and it appears to do what I would like. However, I am unsure how to output the results back into a CSV.
Would I put an output line in the same loop as the continue?
Thank you kindly for any help.
You can use this construction right inside your loop to add each line to the file you need.
$NewLine = "{0},{1},{2}" -f $ValueForColumn1, $ValueForcolumn2, $ValueForcolumn3
Add-Content -Path $PathToFile -Value $NewLine
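Alternatively, since your loop already emits the de-duplicated rows as objects, you can capture everything the loop outputs and pipe it to Export-Csv in one go. A minimal sketch (the $deduped variable and the output path are illustrative, not from the original code):
$deduped = & {
    $tests[0]
    for ($i = 1; $i -lt $tests.Length; $i++) {
        if (-not ($tests[$i].username -eq $tests[$i - 1].username -and
                  $tests[$i].pcname -eq $tests[$i - 1].pcname)) {
            $tests[$i]
        }
    }
}
# Export the kept rows with the same headers they were imported with
$deduped | Export-Csv -Path 'C:\PS logs\Outlook_autofill\deduped.csv' -NoTypeInformation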
I am using PowerShell 7.
We have the following PowerShell script that parses some very large log files.
I no longer want to use Get-Content, as it is too slow.
The script below works, but it takes a very long time to process even a 10 MB file.
I have about 200 files of 10 MB each, with over 10,000 lines per file.
Sample Log:
#Fields:1
#Fields:2
#Fields:3
#Fields:4
#Fields: date-time,connector-id,session-id,sequence-number,local-endpoint,remote-endpoint,event,data,context
2023-01-31T13:53:50.404Z,EXCH1\Relay-EXCH1,08DAD23366676FF1,41,10.10.10.2:25,195.85.212.22:15650,<,DATA,
2023-01-31T13:53:50.404Z,EXCH1\Relay-EXCH1,08DAD23366676FF1,41,10.10.10.2:25,195.85.212.25:15650,<,DATA,
Script:
$Output = @()
$LogFilePath = "C:\LOGS\*.log"
$LogFiles = Get-Item $LogFilePath
$Count = @($LogFiles).Count
ForEach ($Log in $LogFiles)
{
$Int = $Int + 1
$Percent = $Int/$Count * 100
Write-Progress -Activity "Collecting Log details" -Status "Processing log file $Int of $Count - $Log" -PercentComplete $Percent
Write-Host "Processing Log File $Log" -ForegroundColor Magenta
Write-Host
$FileContent = Get-Content $Log | Select-Object -Skip 5
ForEach ($Line IN $FileContent)
{
$Socket = $Line | Foreach {$_.split(",")[5] }
$IP = $Socket.Split(":")[0]
$Output += $IP
}
}
$Output = $Output | Select-Object -Unique
$Output = $Output | Sort-Object
Write-Host "List of noted remove IPs:"
$Output
Write-Host
$Output | Out-File $PWD\Output.txt
As @iRon suggests, the += assignment operator carries a lot of overhead, as does reading the entire file into a variable before processing it. Perhaps process it strictly as a pipeline. I achieved the same results, using your sample data, with the code written this way below.
$LogFilePath = "C:\LOGS\*.log"
$LogFiles = Get-ChildItem $LogFilePath
$Count = @($LogFiles).Count
$Output = ForEach($Log in $Logfiles) {
# Code for Write-Progress here
Get-Content -Path $Log.FullName | Select-Object -Skip 5 | ForEach-Object {
$Socket = $_.split(",")[5]
$IP = $Socket.Split(":")[0]
$IP
}
}
$Output = $Output | Select-Object -Unique
$Output = $Output | Sort-Object
Write-Host "List of noted remove IPs:"
$Output
Apart from the notable points in the comments, I believe this question is more suitable for Code Review. Nonetheless, here's my take on this using the StreamReader class:
$LogFilePath = "C:\LOGS\*.log"
$LogFiles = Get-Item -Path $LogFilePath
$OutPut = [System.Collections.ArrayList]::new()
foreach ($log in $LogFiles)
{
$skip = 0
$stop = $false
$stream = [System.IO.StreamReader]::new($log.FullName)
while ($null -ne ($line = $stream.ReadLine()))
{
    if (-not $stop)
    {
        # Skip the 5 header lines at the top of each file
        if (++$skip -eq 5)
        {
            $stop = $true
        }
        continue
    }
    elseif ($OutPut.Contains(($IP = $line.Split(',')[5].Split(':')[0])))
    {
        # $IP is the remote endpoint (6th CSV field) without the ':port' suffix;
        # splitting the whole line on ',|:' would also split the timestamp's colons
        continue
    }
    $null = $OutPut.Add($IP)
}
$stream.Close()
$stream.Dispose()
}
# Display OutPut and save to file
Write-Host -Object "List of noted remote IPs:"
$OutPut | Sort-Object | Tee-Object -FilePath "$PWD\Output.txt"
This way you can output unique IPs, since uniqueness is handled by the if statement checking against what's already in $OutPut, essentially replacing Select-Object -Unique. You should see a speed increase since you're no longer adding to a fixed-size array with += or piping between cmdlets.
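As an aside, Contains on an ArrayList is a linear scan, so the duplicate check itself slows down as $OutPut grows. A generic HashSet[string] gives constant-time lookups, and its Add method returns $false for values that are already present, which removes the need for the explicit Contains check altogether. A minimal sketch of that substitution (illustrative only, reusing $LogFiles from above):
$OutPut = [System.Collections.Generic.HashSet[string]]::new()
foreach ($log in $LogFiles)
{
    # HashSet.Add simply returns $false for duplicates, so no Contains is needed
    foreach ($line in (Get-Content -Path $log.FullName | Select-Object -Skip 5))
    {
        $null = $OutPut.Add($line.Split(',')[5].Split(':')[0])
    }
}
$OutPut | Sort-Object | Tee-Object -FilePath "$PWD\Output.txt"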
You can combine File.ReadLines with Enumerable.Skip to read your files and skip their first 5 lines. This method is much faster than Get-Content. Then for sorting and getting unique strings at the same time you can use a SortedSet<T>.
You should avoid using Write-Progress as this will slow your script down in Windows PowerShell (this has been fixed in newer versions of PowerShell Core).
Do note that because you're looking to sort the result, all strings must be held in memory before outputting to the file. This would be much more efficient if sorting were not needed; in that case you would use a HashSet<T> instead for getting unique values.
Get-Item C:\LOGS\*.log | & {
begin { $set = [Collections.Generic.SortedSet[string]]::new() }
process {
foreach($line in [Linq.Enumerable]::Skip([IO.File]::ReadLines($_.FullName), 5)) {
$null = $set.Add($line.Split(',')[5].Split(':')[0])
}
}
end {
$set
}
} | Set-Content $PWD\Output.txt
The input file is a fixed-width .txt file. My client normally opens it in Excel and manually specifies the column breaks. I'm hoping to replace certain blank spaces with a comma so that I can parse it as CSV and save it as XLS or whatever.
$columBreaks = 20, 35, 50, 80, 100, 111, 131, 158, 161, 167, 183
[array]::Reverse($columBreaks) #too lazy to re-write array after finding out I need to iterate in reverse
$files = get-childitem ./ |where-object {$_.Name -like "FileFormat*.txt"}
foreach($file in $files)
{
$name = $file.Name.split(".")
$csvFile = $name[0]+".csv"
if (!(get-childitem ./ |where-object {$_.Name -like $csvFile})) #check whether file has been processed
{
$text = (gc $file)
foreach ($line in $text)
{
foreach ($pos in $columBreaks)
{
#$line.Substring($char-1,3).replace(" ", ",")
$line = $line.Insert($pos,",")
#out-file -append?
}
}
}
#set-content?
}
So what's the most efficient way to write this content out? I had hoped to use Set-Content, but I don't think that's possible since we're processing line by line, so I think I would either have to build up an array of lines for Set-Content, or use Out-File -Append for each iteration. Is there a more efficient way to do this?
Set-Content should work fine with some minor adjustments. Here is an example of how it should work (this is everything within your outer foreach loop):
$csvFile = $file.BaseName + ".csv"
if (!(get-childitem ./ |where-object {$_.Name -like $csvFile})) #check whether file has been processed
{
(gc $file | foreach {
$_.Insert($columBreaks[0],",").Insert($columBreaks[1],",").Insert($columBreaks[2],",").`
Insert($columBreaks[3],",").Insert($columBreaks[4],",").Insert($columBreaks[5],",").`
Insert($columBreaks[6],",").Insert($columBreaks[7],",").Insert($columBreaks[8],",").`
Insert($columBreaks[9],",").Insert($columBreaks[10],",")
}) | set-content $csvFile # note the parentheses around everything that gets piped to set-content
}
By the way, instead of splitting the filename on the '.', you can just get the name without the extension by using $file.BaseName:
$csvFile = $file.BaseName + ".csv"
I would think this comes up a lot. Here's an example that actually goes overboard and turns the fixed-width file into objects. Then it's simple to export that to a CSV. This should work for converting the output of legacy commands like netstat as well.
$cols = 0,19,38,59,81,97,120,123 # fake extra column at the end, assumes all rows are that wide, padded with spaces
$colsfile = 'columns.txt'
$csvfile = 'cust.csv'
$firstline = get-content $colsfile | select -first 1
$headers = for ($i = 0; $i -lt $cols.count - 1; $i++) {
$firstline.substring($cols[$i], $cols[$i+1]-$cols[$i]).trim()
}
# string Substring(int startIndex, int length)
Get-Content $colsfile | select -skip 1 | ForEach {
$hash = [ordered]@{}
for ($i = 0; $i -lt $headers.length; $i++) {
$value = $_.substring($cols[$i], $cols[$i+1]-$cols[$i]).trim()
$hash += @{$headers[$i] = $value}
}
[pscustomobject]$hash
} | export-csv $csvfile
Here is the working code. I fixed a few bugs.
CD 'C:\FOLDERPATH\'
$filter = "FILE_NAME_*.txt"
$columns = 11,22,32,42,54
# DO NOT NEED TO REVERSE [array]::Reverse($columns) #too lazy to re-write array after finding out I need to iterate in reverse
$files = get-childitem ./ |where-object {$_.Name -like $filter}
$newDelimiter = '|'
foreach($file in $files)
{
$file
$csvFile = 'C:\FOLDERPATH\NEW_' + $file.BaseName + '.txt'
if (!(Test-Path $csvFile)) # check whether file has been processed
{
Get-Content $file | ForEach {
$line = $_
$counter = 0
$columns | ForEach {
$line = $line.Insert($_+$counter, $newDelimiter)
$counter = $counter + 1
}
$line = $line.Trim($newDelimiter)
$line
} | set-content $csvFile
}
}
I would like to replace a string in a file, and then know whether something was actually replaced.
I have many files to parse, and I know only very few will have to be corrected.
So I would like to write the file out only if a change occurred. I would also like to be able to trace the changes in a log...
For example, I've been trying this:
(Get-Content $item.Fullname) | Foreach-Object {$_ -replace $old, $new} |
Out-File $item.Fullname
But with this I can't tell whether any changes were made or not...
Do you have any solution?
Do it in multiple steps:
$content = [System.IO.File]::ReadAllText($item.FullName)
$changedContent = $content -replace $old,$new
if ($content -ne $changedContent) {
# A change was made
# log here
$changedContent | Set-Content $item.FullName
} else {
# No change
}
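If you also want the trace in a log, the "# log here" part can be as simple as appending one line per changed file. A sketch, where $logFile is a hypothetical path of your choosing:
$content = [System.IO.File]::ReadAllText($item.FullName)
$changedContent = $content -replace $old, $new
if ($content -ne $changedContent) {
    # $logFile is illustrative, not part of the original answer
    $logFile = "$PWD\replacements.log"
    Add-Content -Path $logFile -Value "$(Get-Date -Format s): replaced '$old' in $($item.FullName)"
    $changedContent | Set-Content $item.FullName
}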
Use Select-String like grep to detect the string and log a message, then use Get-Content and Set-Content to replace the string. Note that Select-String's -Pattern is treated as a regular expression, while .Replace() below is a literal substitution, so escape any regex metacharacters in the search string:
$item = 'myfile.txt'
$searchstr = "searchstring"
$replacestr = "replacestring"
if (select-string -path $item -pattern $searchstr) {
write-output "found a match for: $searchstr in file: $item"
$oldtext = get-content $item
$newtext = $oldtext.replace($searchstr, $replacestr)
set-content -path $item -value $newtext
}
I'm new to PowerShell. I'm creating a script to delete files if they are more than "x" days old.
I'm almost done. I need your help presenting the output as a table; also, the script should not produce a log file if no files will be deleted.
Here's my code:
$max_days = "-30"
$curr_date = Get-Date
$del_date = $curr_date.AddDays($max_days)
$Path = "C:\Desktop\Code"
$DateTime = Get-Date -Format "D=yyyy-MM-dd_T=HH-mm-ss"
$itemsearch = Get-ChildItem C:\Test -Recurse | Where-Object { $_.LastWriteTime -lt $del_date}
Foreach ($item in $itemsearch)
{
Write "File:", $item.Name "Modified:", $item.LastWriteTime "Path:", $item.FullName "Date Deleted:" $del_date | Out-File "C:\Desktop\Code\Deleted\SFTP_DeleteFiles_WORKSPACE_$DateTime.txt" -append
$item | Remove-Item
}
Can anyone please help me? It's already working, by the way.
I just need to present the data in table form and not create a log file if there's nothing to delete.
Update:
Already solved the condition statement by doing:
if($itemsearch)
{
Foreach ($item in $itemsearch)
{
Write "File:", $item.Name "Modified:", $item.LastWriteTime "Path:", $item.FullName "Date Deleted:" $del_date | Out-File "C:\Desktop\Code\Deleted\SFTP_DeleteFiles_WORKSPACE_$DateTime.txt" -append
$item | Remove-Item
}
}
else
{
Write "No files will be deleted."
}
Thanks!
What I want to display in Excel or a text file is like this one:
http://i59.tinypic.com/30wv33d.jpg
Anyone?
It returns this:
IsReadOnly;"IsFixedSize";"IsSynchronized";"Keys";"Values";"SyncRoot";"Count"
False;"False";"False";"System.Collections.Hashtable+KeyCollection";"System.Collections.Hashtable+ValueCollection";"System.Object";"4"
False;"False";"False";"System.Collections.Hashtable+KeyCollection";"System.Collections.Hashtable+ValueCollection";"System.Object";"4"
False;"False";"False";"System.Collections.Hashtable+KeyCollection";"System.Collections.Hashtable+ValueCollection";"System.Object";"4"
False;"False";"False";"System.Collections.Hashtable+KeyCollection";"System.Collections.Hashtable+ValueCollection";"System.Object";"4"
That's what I get in Excel. Do you have any idea? I'll have to search for it, though.
To introduce tabular logging I would use a CSV file as output, replacing your foreach block with this code:
$results = @()
foreach ($item in $itemsearch)
{
$success = $true
try
{
$item | Remove-Item
}
catch
{
$success = $false
}
if( $success -eq $true )
{
Write-Host $item.FullName 'successfully deleted.'
$results += [PSCustomObject]@{'File'=$item.Name;'Modified'=$item.LastWriteTime;'Path'=$item.FullName;'Date Deleted'=$del_date;'State'='SUCCESS'}
}
else
{
Write-Host 'Error deleting' $item.FullName
$results += [PSCustomObject]@{'File'=$item.Name;'Modified'=$item.LastWriteTime;'Path'=$item.FullName;'Date Deleted'=$del_date;'State'='ERROR'}
}
}
$results | Export-Csv -Path "C:\Desktop\Code\Deleted\SFTP_DeleteFiles_WORKSPACE_$DateTime.csv" -Encoding UTF8 -Delimiter ';' -NoTypeInformation
First an empty array is created ($results).
The try/catch block is here to detect if the deletion succeeded or not, then the appropriate line is added to $results.
At the end the $results array is exported to CSV with ';' separator so you can open it right away with Excel.
I'm working with a big text file, I mean more than 100 MB big, and I need to loop through a specific range of lines, a kind of subset, so I'm trying this:
$info = Get-Content -Path $TextFile | Select-Object -Index $from,$to
foreach ($line in $info)
{
...
But it does not work. It is as if it only gets the first line of the subset.
I can't find documentation about the -Index parameter, so is this possible, or should I try a different approach considering the file size?
PS> help select -param index
-Index <Int32[]>
Selects objects from an array based on their index values. Enter the indexes in a comma-separated list.
Indexes in an array begin with 0, where 0 represents the first value and (n-1) represents the last value.
Required? false
Position? named
Default value None
Accept pipeline input? false
Accept wildcard characters? false
Based on the above, '8,13' will get you just two lines (the 9th and the 14th), not the range between them. Since the parameter takes an array of index values, you can use the range operator to build the full range:
Get-Content -Path $TextFile | Select-Object -Index (8..13) | Foreach-Object {...}
Are the rows of fixed length? If they are, you can seek to the desired position by simply calculating offset × row length and using something like .NET's FileStream.Seek(). If they are not, all you can do is read the file row by row.
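A sketch of the seek approach, assuming every row is exactly $rowLength bytes including the line terminator (all values below are illustrative, and $TextFile should be an absolute path, because .NET resolves relative paths against the process working directory):
$rowLength = 80        # bytes per row, including CR/LF
$from      = 100000    # first row to read (0-based)
$count     = 50        # how many rows to read

$stream = [System.IO.FileStream]::new($TextFile, [System.IO.FileMode]::Open)
$null = $stream.Seek([long]$from * $rowLength, [System.IO.SeekOrigin]::Begin)
$reader = [System.IO.StreamReader]::new($stream)
for ($i = 0; $i -lt $count; $i++) {
    $reader.ReadLine()    # emit each row; process it here instead as needed
}
$reader.Dispose()   # also disposes the underlying FileStream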
Otherwise, to extract lines $m through $n reading row by row, try something like:
# Open text file
$reader = [IO.File]::OpenText($myFile)
$i=0
# Read lines until there are no lines left. Count the lines too
while( ($l = $reader.ReadLine()) -ne $null) {
# If current line is within extract range, print it
if($i -ge $m -and $i -le $n) {
$("Row {0}: {1}" -f $i, $l)
}
$i++
if($i -gt $n) { break } # Stop processing the file when row $n is reached.
}
# Close the text file reader
$reader.Close()
$reader.Dispose()
The below is working for me.
It extracts all the content between two lines.
$name = "MDSinfo"
$MDSinfo = "$PSScriptRoot\$name.txt" # path to the text file
$MDSinfo = gc $MDSinfo
$from = ($MDSinfo | Select-String -pattern "sh feature" | Select-Object LineNumber).LineNumber
$to = ($MDSinfo | Select-String -pattern "sh flogi database " | Select-Object LineNumber).LineNumber
$i = 0
$array = @()
foreach ($line in $MDSinfo)
{
$i++
if (($i -gt $from) -and ($i -lt $to))
{
$array += $line
}
}
$array
The Get-Content cmdlet has -ReadCount and -TotalCount parameters. I would play around with those and try to set things up so that the lines you're interested in get assigned to a variable, then use that variable for your loops.
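For example, -TotalCount stops reading at line $to, so the rest of the file is never touched, and Select-Object -Skip drops the lines before $from. A sketch using the question's variables (-ReadCount can speed things up further by emitting lines in batches, at the cost of having to flatten those batches afterwards):
# Read only the first $to lines of the file, then drop the first $from of them
$subset = Get-Content -Path $TextFile -TotalCount $to | Select-Object -Skip $from
foreach ($line in $subset)
{
    # process $line here
}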
Try this code:
$FromHereStartingLine = Select-String $FilePath -Pattern "FromHere" | Select-Object LineNumber
$UptoHereStartingLine = Select-String $FilePath -Pattern "UptoHere" | Select-Object LineNumber
# Read the file once up front instead of re-reading it on every loop iteration
$content = Get-Content -Path $FilePath | ForEach-Object { $_ -replace "`r*`n*", "" }
for ($i = $FromHereStartingLine.LineNumber; $i -lt $UptoHereStartingLine.LineNumber; $i++)
{
    $HoldInVariable += $content[$i]
}
Write-Host "HoldInVariable : " $HoldInVariable