Find the latest string based on time and display it - PowerShell

I have a text file as shown below:
testdatabase-21-07-15-12-00
testdatabase-21-07-15-18-00
testdatabase-21-07-15-23-00
testdatabase-22-07-15-12-00
testdatabase-22-07-15-18-00
testdatabase-22-07-15-23-00
and many more like this (dynamically generated)
I am comparing the date portion (21-07-15 / 22-07-15) with a date from another text file, and if a match is found, I need to determine which entry is the latest one. For example, if the match is for date 21-07-15, I need to retrieve the latest of all the 21-07-15 entries (the 23-00 one). The same goes for 22-07-15 and so on, whenever a match is found.
What I have done so far is:
$temp = Get-Content "C:\RDS\temp.txt"
foreach($te in $temp)
{
$t = $te -split '-'
$da = $t[1]
$mo = $t[2]
$yea = $t[3]
if("$da-$mo-$yea" -match $temp1)
{
# need to write the concept here
}else
{
#nothing
}
}
How can I get this done? Any help would be really appreciated.

You can sort the lines read from the input file by a calculated property:
$fmt = 'dd-MM-yy-HH-mm'
$culture = [Globalization.CultureInfo]::InvariantCulture
Get-Content "C:\RDS\temp.txt" |
sort { [DateTime]::ParseExact(($_ -split '-', 2)[1], $fmt, $culture) } |
select -Last 1
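
If you need the latest entry per matching date rather than a single overall latest, a Group-Object variant of the same idea should work. This is only a sketch: it reuses $fmt and $culture from above, assumes every line follows the testdatabase-dd-MM-yy-HH-mm pattern, and uses $temp1 (the date value from your own code) as the match target:
Get-Content "C:\RDS\temp.txt" |
    group { ($_ -split '-')[1..3] -join '-' } |   # group key is the dd-MM-yy portion
    where { $_.Name -match $temp1 } |             # keep only the date(s) that match
    foreach {
        # within each date, sort by the full timestamp and keep the newest line
        $_.Group |
            sort { [DateTime]::ParseExact(($_ -split '-', 2)[1], $fmt, $culture) } |
            select -Last 1
    }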

Related

How can I subtract a character from a CSV using PowerShell

I'm trying to insert my CSV into my SQL Server database, but I'm wondering how I can subtract the last three characters from the CSV GID column and then assign the result to my $CSVHold1 variable.
My CSV file looks like this:
GID Source Type Message Time
KLEMOE http://google.com Od Hello 12/22/2022
EEINGJ http://facebook.com Od hey 12/22/2022
Basically I'm trying to get only the first three characters from GID and pass that value to my $CSVHold1 variable.
$CSVImport = Import-CSV $Global:ErrorReport
ForEach ($CSVLine1 in $CSVImport) {
$CSVHold1 = $CSVLine1.GID | ForEach-Object { $_.$GID = $_.$GID.subString(0, $_.$GID.Length - 3); $_ }
$CSVGID1 = $CSVLine1.GID
$CSVSource1 = $CSVLine1.Source
$CSVTYPE1 = $CSVLine1.TYPE
$CSVMessage1 = $CSVLine1.Message
}
I'm trying to do it like above, but for some reason I'm getting an error:
You cannot call a method on a null-valued expression.
Your original line 3 was/is not valid syntax as Santiago pointed out.
$CSVHold1 = $CSVLine1.GID | ForEach-Object { $_.$GID = $_.$GID.subString(0, $_.$GID.Length - 3); $_ }
You are calling $_.$GID but you want $_.GID
You also don't need to pipe the object into a loop to achieve what it seems you are asking.
#!/usr/bin/env powershell
$csvimport = Import-Csv -Path $env:HOMEDRIVE\Powershell\TestCSVs\test1.csv
##$CSVImport = Import-CSV $Global:ErrorReport
ForEach ($CSVLine1 in $CSVImport) {
    $CSVHold1 = $CSVLine1.GID.SubString(0, $CSVLine1.GID.Length - 3)
    $CSVGID1 = $CSVLine1.GID
    $CSVSource1 = $CSVLine1.Source
    $CSVTYPE1 = $CSVLine1.TYPE
    $CSVMessage1 = $CSVLine1.Message
    Write-Output -InputObject ('Changing {0} to {1}' -f $CSVLine1.gid, $CSVHold1)
}
Using your sample data, the above outputs:
C:> . 'C:\Powershell\Scripts\dchero.ps1'
Changing KLEMOE to KLE
Changing EEINGJ to EEI
Lastly, be aware that the SubString method will fail if the length of $CSVLine1.GID is less than 3.
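A minimal guard for that case might look like this (a sketch; whether to keep the value as-is or skip the row is up to you):
if ($CSVLine1.GID.Length -gt 3) {
    $CSVHold1 = $CSVLine1.GID.SubString(0, $CSVLine1.GID.Length - 3)
} else {
    # GID too short to trim three characters - keep it unchanged
    $CSVHold1 = $CSVLine1.GID
}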

Windows PowerShell: How to parse the log file?

I have an input file with below contents:
27/08/2020 02:47:37.365 (-0516) hostname12 ult_licesrv ULT 5 LiceSrv Main[108 00000 Session 'session1' (from 'vmpms1\app1@pmc21app20.pm.com') request for 1 additional licenses for module 'SA-XT' - 1 licenses have been allocated by concurrent usage category 'Unlimited' (session module usage now 1, session category usage now 1, total module concurrent usage now 1, total category usage now 1)
27/08/2020 02:47:37.600 (-0516) hostname13 ult_licesrv ULT 5 LiceSrv Main[108 00000 Session 'sssion2' (from 'vmpms2\app1@pmc21app20.pm.com') request for 1 additional licenses for module 'SA-XT-Read' - 1 licenses have been allocated by concurrent usage category 'Floating' (session module usage now 2, session category usage now 2, total module concurrent usage now 1, total category usage now 1)
27/08/2020 02:47:37.115 (-0516) hostname141 ult_licesrv CMN 5 Logging Housekee 00000 Deleting old log file 'C:\Program Files\PMCOM Global\License Server\diag_ult_licesrv_20200824_011130.log.gz' as it exceeds the purge threashold of 72 hours
27/08/2020 02:47:37.115 (-0516) hostname141 ult_licesrv CMN 5 Logging Housekee 00000 Deleting old log file 'C:\Program Files\PMCOM Global\License Server\diag_ult_licesrv_20200824_021310.log.gz' as it exceeds the purge threashold of 72 hours
27/08/2020 02:47:37.625 (-0516) hostname150 ult_licesrv ULT 5 LiceSrv Main[108 00000 Session 'session1' (from 'vmpms1\app1@pmc21app20.pm.com') request for 1 additional licenses for module 'SA-XT' - 1 licenses have been allocated by concurrent usage category 'Unlimited' (session module usage now 2, session category usage now 1, total module concurrent usage now 2, total category usage now 1)
I need to generate and output file like below:
Date,time,hostname,session_module_usage,session_category_usage,module_concurrent_usage,total_category_usage
27/08/2020,02:47:37.365 (-0516),hostname12,1,1,1,1
27/08/2020,02:47:37.600 (-0516),hostname13,2,2,1,1
27/08/2020,02:47:37.115 (-0516),hostname141,0,0,0,0
27/08/2020,02:47:37.115 (-0516),hostname141,0,0,0,0
27/08/2020,02:47:37.625 (-0516),hostname150,2,1,2,1
The output data order is: Date,time,hostname,session_module_usage,session_category_usage,module_concurrent_usage,total_category_usage.
Put 0,0,0,0 if there is no entry for session_module_usage, session_category_usage, module_concurrent_usage, total_category_usage.
I need to get content from the input file and write the output to another file.
Update
I have created a file input.txt in F drive and pasted the log details into it.
Then I form an array by splitting the file content on newlines, like below.
$myList = (Get-Content -Path F:\input.txt) -split '\n'
Now I have 5 items in my array $myList. Then I replace multiple blank spaces with a single blank space and form a new array by splitting each element on a blank space. Then I print array elements 0 to 3. Now I need to add the trailing values (session_module_usage, session_category_usage, module_concurrent_usage, total_category_usage).
PS C:\Users\user> $myList = (Get-Content -Path F:\input.txt) -split '\n'
PS C:\Users\user> $myList.Length
5
PS C:\Users\user> for ($i = 0; $i -le ($myList.length - 1); $i += 1) {
>> $newList = ($myList[$i] -replace '\s+', ' ') -split ' '
>> $newList[0]+','+$newList[1]+' '+$newList[2]+','+$newList[3]
>> }
27/08/2020,02:47:37.365 (-0516),hostname12
27/08/2020,02:47:37.600 (-0516),hostname13
27/08/2020,02:47:37.115 (-0516),hostname141
27/08/2020,02:47:37.115 (-0516),hostname141
27/08/2020,02:47:37.625 (-0516),hostname150
If you really need to filter on the granularity that you're looking for, then you may need to use regex to filter the lines.
This would assume that the rows have similarly labeled lines before the values you're looking for, so keep that in mind.
[System.Collections.ArrayList]$filteredRows = @()
$log = Get-Content -Path C:\logfile.log
foreach ($row in $log) {
    $rowIndex = $log.IndexOf($row)
    $date = ([regex]::Match($log[$rowIndex],'^\d+\/\d+\/\d+')).value
    $time = ([regex]::Match($log[$rowIndex],'\d+:\d+:\d+\.\d+\s\(\S+\)')).value
    $hostname = ([regex]::Match($log[$rowIndex],'(?<=\d\d\d\d\) )\w+')).value
    $sessionModuleUsage = ([regex]::Match($log[$rowIndex],'(?<=session module usage now )\d')).value
    if (!$sessionModuleUsage) {
        $sessionModuleUsage = 0
    }
    $sessionCategoryUsage = ([regex]::Match($log[$rowIndex],'(?<=session category usage now )\d')).value
    if (!$sessionCategoryUsage) {
        $sessionCategoryUsage = 0
    }
    $moduleConcurrentUsage = ([regex]::Match($log[$rowIndex],'(?<=total module concurrent usage now )\d')).value
    if (!$moduleConcurrentUsage) {
        $moduleConcurrentUsage = 0
    }
    $totalCategoryUsage = ([regex]::Match($log[$rowIndex],'(?<=total category usage now )\d')).value
    if (!$totalCategoryUsage) {
        $totalCategoryUsage = 0
    }
    $hash = [ordered]@{
        Date = $date
        time = $time
        hostname = $hostname
        session_module_usage = $sessionModuleUsage
        session_category_usage = $sessionCategoryUsage
        module_concurrent_usage = $moduleConcurrentUsage
        total_category_usage = $totalCategoryUsage
    }
    $rowData = New-Object -TypeName 'psobject' -Property $hash
    $filteredRows.Add($rowData) > $null
}
$csv = $filteredRows | ConvertTo-Csv -NoTypeInformation -Delimiter "," | foreach {$_ -replace '"',''}
$csv | Out-File C:\results.csv
What essentially needs to happen is that we Get-Content the log, which returns an array with one item per line.
Once we have the rows, we grab the values via regex.
Since you want zeroes for the items whose values don't exist, I have if statements that assign 0 if the regex returns nothing.
Finally, we add each set of extracted values to a PSObject and append that object to an array of objects in each iteration.
Then we export to a CSV.
You can probably pick apart the lines with a regex and substrings easily enough. Basically something like the following:
# Iterate over the lines of the input file
Get-Content F:\input.txt |
    ForEach-Object {
        # Extract the individual fields
        $Date = $_.Substring(0, 10)
        $Time = $_.Substring(12, $_.IndexOf(')') - 11)
        $Hostname = $_.Substring(34, $_.IndexOf(' ', 34) - 34)
        $session_module_usage = 0
        $session_category_usage = 0
        $module_concurrent_usage = 0
        $total_category_usage = 0
        if ($_ -match 'session module usage now (\d+), session category usage now (\d+), total module concurrent usage now (\d+), total category usage now (\d+)') {
            $session_module_usage = $Matches[1]
            $session_category_usage = $Matches[2]
            $module_concurrent_usage = $Matches[3]
            $total_category_usage = $Matches[4]
        }
        # Create custom object with those properties
        New-Object PSObject -Property @{
            Date = $Date
            time = $Time
            hostname = $Hostname
            session_module_usage = $session_module_usage
            session_category_usage = $session_category_usage
            module_concurrent_usage = $module_concurrent_usage
            total_category_usage = $total_category_usage
        }
    } |
    # Ensure column order in output
    Select-Object Date,time,hostname,session_module_usage,session_category_usage,module_concurrent_usage,total_category_usage |
    # Write as CSV - without quotes
    ConvertTo-Csv -NoTypeInformation |
    ForEach-Object { $_ -replace '"' } |
    Out-File F:\output.csv
Whether to pull the date, time, and host name from the line with substrings or a regex is probably a matter of taste. The same goes for how strictly the format must be matched, but to me that mostly depends on how rigid the format is. For more free-form input, where different lines match different regexes or multiple lines make up a single record, I also quite like switch -Regex to iterate over the lines.
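For illustration, a rough switch -Regex sketch over the same log lines might look like the following. It only handles the four usage counters (the date, time and host name would still be extracted as above), and the property names simply mirror the desired CSV header:
$usagePattern = 'session module usage now (\d+), session category usage now (\d+), total module concurrent usage now (\d+), total category usage now (\d+)'
$rows = switch -Regex -File F:\input.txt {
    $usagePattern {
        # Line carries the four counters - capture groups 1-4 hold them
        [PSCustomObject]@{
            session_module_usage    = $Matches[1]
            session_category_usage  = $Matches[2]
            module_concurrent_usage = $Matches[3]
            total_category_usage    = $Matches[4]
        }
    }
    default {
        # No counters on this line - emit zeroes, per the requirement
        [PSCustomObject]@{
            session_module_usage    = 0
            session_category_usage  = 0
            module_concurrent_usage = 0
            total_category_usage    = 0
        }
    }
}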

Is this the best way to replace text in all of an object's properties in powershell?

I have a large CSV file in which some fields have a new line embedded. Excel 2016 produces errors when importing a CSV with rows which have fields with a new line embedded.
Based on this post, I wrote code to replace any new line in any field with a space. Below is a code block that duplicates the functionality and issue. Option 1 works. Option 2, which is commented out, casts my object to a string. I was hoping Option 2 might run faster.
Question: Is there a better way to do this to optimize for performance processing very large files?
$array = @([PSCustomObject]@{"ID"="1"; "Name"="Joe`nSmith"},
           [PSCustomObject]@{"ID"="2"; "Name"="Jasmine Baker"})
$array = $array | ForEach-Object {
    # Option 1: produces an Object, but is code optimized?
    foreach ($n in $_.PSObject.Properties.Name) {
        $_.PSObject.Properties[$n].Value = `
            $_.PSObject.Properties[$n].Value -replace "`n"," "
    }
    # Option 2: produces a string, not an object
    #$_ = $_ -replace "`n"," "
    $_
}
Keep in mind that in my real-world use case, each row has > 15 fields and any combination of them may have one or more new lines embedded.
Use the fast TextFieldParser to read, process, and build the CSV from the file (PowerShell 3+):
[Reflection.Assembly]::LoadWithPartialName('Microsoft.VisualBasic') >$null
$parser = New-Object Microsoft.VisualBasic.FileIO.TextFieldParser 'r:\1.csv'
$parser.SetDelimiters(',')
$header = $parser.ReadFields()
$CSV = while (!$parser.EndOfData) {
    $i = 0
    $row = [ordered]@{}
    foreach ($field in $parser.ReadFields()) {
        $row[$header[$i++]] = $field.replace("`n", ' ')
    }
    [PSCustomObject]$row
}
Or modify each field in-place in an already existing CSV array:
foreach ($row in $CSV) {
    foreach ($field in $row.PSObject.Properties) {
        $field.value = $field.value.replace("`n", ' ')
    }
}
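Either way, if the cleaned rows should end up back on disk, you could simply export them afterwards (a sketch; the output path here is an assumption):
$CSV | Export-Csv 'r:\1_clean.csv' -NoTypeInformation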
Notes:
The foreach statement is much faster than piping to ForEach-Object (which is also aliased as foreach)
The $stringVariable.Replace() method is faster than the -replace operator
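If you want to check those claims against your own data, a quick, rough comparison with Measure-Command (a sketch, not a rigorous benchmark) could look like this:
# Build some sample strings with embedded newlines (assumed test data)
$sample = foreach ($i in 1..50000) { "value $i`nwith`nnewlines" }

# Pipeline + -replace operator
(Measure-Command {
    $sample | ForEach-Object { $_ -replace "`n", ' ' }
}).TotalMilliseconds

# foreach statement + .Replace() method
(Measure-Command {
    foreach ($s in $sample) { $s.Replace("`n", ' ') }
}).TotalMilliseconds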

Reading strings from text files using switch -regex returns null element

Question:
The intention of my script is to filter out the name and phone number from both text files and add them into a hash table with the name being the key and the phone number being the value.
The problem I am facing is
$name = $_.Current is returning $null, as a result of which my hash is not getting populated.
Can someone tell me what the issue is?
Contents of File1.txt:
Lori
234 east 2nd street
Raleigh nc 12345
9199617621
lori@hotmail.com
=================
Contents of File2.txt:
Robert
2531 10th Avenue
Seattle WA 93413
2068869421
robert@hotmail.com
Sample Code:
$hash = @{}
Switch -regex (Get-content -Path C:\Users\svats\Desktop\Fil*.txt)
{
'^[a-z]+$' { $name = $_.current}
'^\d{10}' {
$phone = $_.current
$hash.Add($name,$phone)
$name=$phone=$null
}
default
{
write-host "Nothing matched"
}
}
$hash
Remove the current property reference from $_:
$hash = @{}
Switch -regex (Get-Content -Path C:\Users\svats\Desktop\Fil*.txt)
{
    '^[a-z]+$' {
        $name = $_
    }
    '^\d{10}' {
        $phone = $_
        $hash.Add($name, $phone)
        $name = $phone = $null
    }
    default {
        Write-Host "Nothing matched"
    }
}
$hash
Mathias R. Jessen's helpful answer explains your problem and offers an effective solution:
it is the automatic variable $_ / $PSItem itself that contains the current input object (whatever its type is - what properties $_ / $PSItem has therefore depends on the input object's specific type).
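For example, when plain strings are the input, $_ is the string itself, so you match or manipulate it directly rather than through a property (a tiny illustration):
'Lori', '9199617621' | ForEach-Object {
    if ($_ -match '^\d{10}$') { "phone: $_" } else { "name: $_" }
}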
Aside from that, there's potential for making the code both less verbose and more efficient:
# Initialize the output hashtable.
$hash = @{}
# Create the regex that will be used on each input file's content.
# (?...) sets options: i ... case-insensitive; m ... ^ and $ match
# the beginning and end of every *line*.
$re = [regex] '(?im)^([a-z]+|\d{10})$'
# Loop over each input file's content (as a whole, thanks to -Raw).
Get-Content -Raw File*.txt | foreach {
    # Look for name and phone number.
    $matchColl = $re.Matches($_)
    if ($matchColl.Count -eq 2) { # Both found, add hashtable entry.
        $hash.Add($matchColl.Value[0], $matchColl.Value[1])
    } else {
        Write-Host "Nothing matched."
    }
}
# Output the resulting hashtable.
$hash
A note on the construction of the .NET [System.Text.RegularExpressions.Regex] object (or [regex] for short), [regex] '(?im)^([a-z]+|\d{10})$':
Embedding the matching options IgnoreCase and Multiline as inline options i and m directly in the regex string ((?im)) is convenient, in that it allows using simple cast syntax ([regex] ...) to construct the regular-expression .NET object.
However, this syntax may be obscure and, furthermore, not all matching options are available in inline form, so here's the more verbose, but easier-to-read equivalent:
$re = New-Object regex -ArgumentList '^([a-z]+|\d{10})$', 'IgnoreCase, Multiline'
Note that the two options must be specified comma-separated, as a single string, which PowerShell translates into the bit-OR-ed values of the corresponding enumeration values.
Another solution: use ConvertFrom-String.
$template=@'
{name*:Lori}
{street:234 east 2nd street}
{city:Raleigh nc 12345}
{phone:9199617621}
{mail:lori@hotmail.com}
{name*:Robert}
{street:2531 10th Avenue}
{city:Seattle WA 93413}
{phone:2068869421}
{mail:robert@hotmail.com}
{name*:Robert}
{street:2531 Avenue}
{city:Seattle WA 93413}
{phone:2068869421}
{mail:robert@hotmail.com}
'@
Get-Content -Path "c:\temp\file*.txt" | ConvertFrom-String -TemplateContent $template | select name, phone

How to properly string replace in Powershell without appending the replaced variable to a newline?

I'm pretty new to PowerShell/programming, so bear with me. I have this bug that appends the renamed part of the path on a new line, without the rest of the path.
The console output:
/content/pizza/en/ingredients/
helloworld/menu-eng.html
What I want:
/content/pizza/en/ingredients/helloworld/menu-eng.html
What the code below is supposed to do is rename a bunch of paths. Right now $testName is hard-coded, but after I get this to work properly it will be dynamic.
My code:
$testName = "helloworld"
$text = (Get-Content W:\test\Rename\rename.csv) | Out-String
$listOfUri = Import-Csv W:\test\Rename\rename.csv
foreach ($element in $listOfUri) {
if ($element -match "menu-eng.html") {
$elementString = $element.'ColumnTitle' | Out-String
$elementString = $elementString.Replace('menu-eng.html', '')
$varPath1 = $elementString
$elementString = $elementString.Insert('', 'http://www.pizza.com')
$elementName = ([System.Uri]$elementString).Segments[-1]
$elementString = $elementString.Replace($elementName, '')
$elementString = $elementString.Replace('http://www.pizza.com', '')
$varPath2 = $elementString.Insert($elementString.Length, $testName + '/')
$text = $text.Replace($varPath1.Trim(), $varPath2)
}
}
$text
Assuming your .csv file looks like this:
ColumnTitle,Idk
/content/pizza/en/ingredients/SPAM/menu-eng.html,Stuff
Then:
$testName = 'helloworld'
foreach ($row in Import-CSV d:\rename.csv) {
    $bit = $row.'ColumnTitle'.Split('/')[-2]
    $row.'ColumnTitle'.replace($bit, $testName)
}
I have no real idea what all the rest of your code is for. In particular, per my earlier comment, your line:
$text = (Get-Content W:\test\Rename\rename.csv) | Out-String
is making $text a single string containing all the lines in the file, including the headers. You can still use .Replace() on it in PowerShell, but it's going to do the replace across every line. I can't quite see how that gives you the output you get, but it will give you multiple lines for every line in the input file.
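If the goal is ultimately to write the renamed paths back out, one possible sketch (assuming the same ColumnTitle column; the output file name is an assumption) is to modify each row and re-export the lot:
$testName = 'helloworld'
$rows = Import-Csv W:\test\Rename\rename.csv
foreach ($row in $rows) {
    if ($row.ColumnTitle -match 'menu-eng\.html') {
        $bit = $row.ColumnTitle.Split('/')[-2]          # folder segment just before the file name
        $row.ColumnTitle = $row.ColumnTitle.Replace($bit, $testName)
    }
}
$rows | Export-Csv W:\test\Rename\renamed.csv -NoTypeInformation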