PowerShell: pull the last 5 minutes of data from a CSV - powershell

I have a log file that receives new data every five minutes, and I am attempting to extract specific data from only the last five minutes of it. Currently I have code that converts it from .log to .csv with the headers "Date, Time, Error1, Error2, Error3"; however, every attempt I have made so far does not pull the data correctly.
The Date and Time columns of the CSV are formatted as "dd/MM/yyyy" and "hh:mm:ss.ms".
PowerShell does not report any visible errors, but errorCOLLECTION.csv is never generated.
The current code I have:
Copy-Item -Path "C:\ProgramData\Blah\Blah\Blah Blah\error.log" -Destination "C:\Windows\Blah\Blah\Logs\Temp\Blah Blah\" -PassThru
Import-Csv "C:\Windows\Blah\Blah\Logs\Temp\Blah Blah\error.log" -Delimiter "," -Header Date, Time, Error1, Error2, Error3 |
    Export-Csv -NoTypeInformation "C:\Windows\Blah\Blah\Logs\Temp\Blah Blah\error.csv"

$referenceTime = '{0:dd/MM/yyyy,HH:mm:ss.ms}' -f (Get-Date '2019/02/25,19:09:00.590').AddMinutes(-5)
$regexSearch = '\bSdata:\s*\[(\d{2})]'
switch -Regex -File "C:\Windows\Blah\Blah\Logs\Temp\Blah Blah\error.csv" {
    $regexSearch {
        if (($_ -split ',')[0] -gt $referenceTime) {
            Set-Content "C:\Windows\Blah\Blah\Logs\Temp\Blah Blah\errorCOLLECTION.csv"
        }
    }
}
In response to Theo, here is an example of the log file:
29/11/2022,10:48:48.693,PINSP,DC,<PID>6324</PID><TID>2996</TID><F>INFO_GET_KEY_DETAIL</F><X>lpszKeyName [CommsKey]</X><SF>key_lib.cpp</SF><SL>1177</SL>
29/11/2022,10:48:48.693,MDMSP,DC,<PID>6200</PID><TID>5772</TID><G><X>W</X><HS>65535</HS><RI>0</RI></G><F>SXIO::ReceiveIOMessage</F><AR><AN>RetrieveMessage</AN><RV><I4>0</I4></RV><P><N>szMessage</N><S>messageCategory: 0x3 messageType: 0x554cc006 messageID: 0xd6d7928</S></P><P><N>response</N><S>hservice: 43 ucClass: 3 usTLen: 4 TData: [33 01 00 12] ucSLen: 1 SData: [00] ucMLen: 0 ucRSlen: 0 ucRClen: 0</S></P></AR><SF>SXIO.cpp</SF><SL>833</SL>
29/11/2022,10:48:48.693,PINSP,DC,<PID>6324</PID><TID>2996</TID><F>INFO_GET_KEY_DETAIL</F><X>Return Value [0]</X><X>lptKeyDetail [caKeyName [CommsKey] usKeyId [2] usKeyspaceId [4] wTypeOfAccess [0x2] bIsIV [0] bMasterKey [0] bLoadedFlag [1] bIsDouble [0]]</X><SF>key_lib.cpp</SF><SL>1214</SL>
29/11/2022,10:48:48.693,PINSP,DC,<PID>6324</PID><TID>2996</TID><F>INFO_GET_KEY_DETAIL</F><X>lpszKeyName [MACKey]</X><SF>key_lib.cpp</SF><SL>1177</SL>
29/11/2022,10:48:48.693,PINSP,DC,<PID>6324</PID><TID>2996</TID><F>INFO_GET_KEY_DETAIL</F><X>Return Value [0]</X><X>lptKeyDetail [caKeyName [MACKey] usKeyId [3] usKeyspaceId [3] wTypeOfAccess [0x4] bIsIV [0] bMasterKey [0] bLoadedFlag [0] bIsDouble [1]]</X><SF>key_lib.cpp</SF><SL>1214</SL>
29/11/2022,10:48:48.694,PINSP,DC,<PID>6324</PID><TID>2996</TID><F>INFO_GET_KEY_DETAIL</F><X>lpszKeyName [PEKey]</X><SF>key_lib.cpp</SF><SL>1177</SL>
29/11/2022,10:48:48.694,PINSP,DC,<PID>6324</PID><TID>2996</TID><F>INFO_GET_KEY_DETAIL</F><X>Return Value [0]</X><X>lptKeyDetail [caKeyName [PEKey] usKeyId [4] usKeyspaceId [4] wTypeOfAccess [0x2] bIsIV [0] bMasterKey [0] bLoadedFlag [0] bIsDouble [1]]</X><SF>key_lib.cpp</SF><SL>1214</SL>
29/11/2022,10:48:48.694,PINSP,FW,<PID>6324</PID><TID>2996</TID><G><X>W</X><HS>44</HS><RI>2267</RI></G><F>PostWFSResult</F><F>WFSResultData</F><P><N>hWnd</N><PT>263508</PT></P><P><N>lpWFSResult->RequestID</N><U4>2267</U4></P><P><N>lpWFSResult->hService</N><U4>44</U4></P><P><N>lpWFSResult->hResult</N><H>0</H></P><P><N>lpWFSResult->u.dwCommandCode</N><U4>401</U4></P><P><N>lpStatus</N><OB><M><N>fwDevice</N><U2>0</U2></M><M><N>fwEncStat</N><U2>0</U2></M><M><N>lpszExtra</N><PT>00000000</PT></M><M><N>guidlight</N><S>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</S></M><M><N>fwAutoBeepMode</N><U2>2</U2></M><M><N>dwCertificateState</N><U4>4</U4></M><M><N>wDevicePosition</N><U2>3</U2></M><M><N>usPowerSaveRecoveryTime</N><U2>0</U2></M><M><N>wAntiFraudModule</N><U2>0</U2></M></OB></P><SF>FWResultImpl.cpp</SF><SL>4638</SL><E>WFS_GETINFO_COMPLETE</E>
29/11/2022,10:48:48.697,PINSP,FW,<PID>6324</PID><TID>2996</TID><G><X>W</X><HS>44</HS><RI>2268</RI></G><F>PostWFSResult</F><F>WFSResultData</F><P><N>hWnd</N><PT>7014346</PT></P><P><N>lpWFSResult->RequestID</N><U4>2268</U4></P><P><N>lpWFSResult->hService</N><U4>44</U4></P><P><N>lpWFSResult->hResult</N><H>0</H></P><P><N>lpWFSResult->u.dwCommandCode</N><U4>408</U4></P><SF>FWResultImpl.cpp</SF><SL>4638</SL><E>WFS_GETINFO_COMPLETE</E>
29/11/2022,10:48:48.702,Mgr,Mgr,<PID>6324</PID><TID>6588</TID><G><HS>44</HS></G><F>WFSGetInfo</F><P><N>*lppResult</N><OB><M><N>RequestID</N><U4>2268</U4></M><M><N>hService</N><U2>44</U2></M><M><N>hResult</N><U4>0</U4></M><M><N>Code</N><U4>408</U4></M><M><N>lpBuffer</N><PT>280E284D</PT></M></OB></P><RV><H>0</H></RV><SF>MgrApi.cpp</SF><SL>1394</SL>
29/11/2022,10:48:48.702,Mgr,Mgr,<PID>6324</PID><TID>6588</TID><G><HS>0</HS></G><F>WFSFreeResult</F><P><N>lpResult</N><OB><M><N>RequestID</N><U4>2268</U4></M><M><N>hService</N><U2>44</U2></M><M><N>hResult</N><U4>0</U4></M><M><N>Code</N><U4>408</U4></M><M><N>lpBuffer</N><PT>280E284D</PT></M></OB></P><SF>MgrApi.cpp</SF><SL>1230</SL>
29/11/2022,10:48:48.702,Mgr,Mgr,<PID>6324</PID><TID>6588</TID><G><HS>0</HS></G><F>WFSFreeResult</F><RV><H>0</H></RV><SF>MgrApi.cpp</SF><SL>1240</SL>
29/11/2022,10:48:48.703,Mgr,Mgr,<PID>6324</PID><TID>6588</TID><G><HS>0</HS></G><F>WFSFreeResult</F><P><N>lpResult</N><OB><M><N>RequestID</N><U4>2266</U4></M><M><N>hService</N><U2>49</U2></M><M><N>hResult</N><U4>0</U4></M><M><N>Code</N><U4>301</U4></M><M><N>lpBuffer</N><PT>08120D85</PT></M></OB></P><SF>MgrApi.cpp</SF><SL>1230</SL>
29/11/2022,10:48:48.703,Mgr,Mgr,<PID>6324</PID><TID>6588</TID><G><HS>0</HS></G><F>WFSFreeResult</F><RV><H>0</H></RV><SF>MgrApi.cpp</SF><SL>1240</SL>
29/11/2022,10:48:48.703,Mgr,Mgr,<PID>6324</PID><TID>6588</TID><G><HS>0</HS></G><F>WFSFreeResult</F><P><N>lpResult</N><OB><M><N>RequestID</N><U4>2267</U4></M><M><N>hService</N><U2>44</U2></M><M><N>hResult</N><U4>0</U4></M><M><N>Code</N><U4>401</U4></M><M><N>lpBuffer</N><PT>281523A5</PT></M></OB></P><SF>MgrApi.cpp</SF><SL>1230</SL>
29/11/2022,10:48:48.703,Mgr,Mgr,<PID>6324</PID><TID>6588</TID><G><HS>0</HS></G><F>WFSFreeResult</F><RV><H>0</H></RV><SF>MgrApi.cpp</SF><SL>1240</SL>

With a slight modification to your code, because ($_ -split ',')[0] would target only the Date and not the Time, the following works properly for me, outputting the line starting with:
29/11/2022,10:48:48.693,MDMSP,DC,<PID>6200...
I'm also using DateTime.TryParseExact to convert these strings into DateTime instances; I'm honestly not sure whether comparing them as plain strings would work correctly, but by converting them to DateTime instances we can be 100% sure the comparison is correct.
Aside from that, as other users pointed out in the comments, Set-Content currently has the output path but no value argument.
& {
    $referenceTime = (Get-Date '2019/02/25,19:09:00.590').AddMinutes(-5)
    $regexSearch = '\bSdata:\s*\[(\d{2})]'
    $parsedDate = [ref] [datetime]::new(0)
    switch -Regex -File 'C:\bla\bla\error.csv' {
        $regexSearch {
            $success = [datetime]::TryParseExact(
                ('{0},{1}' -f $_.Split(',', 3)[0, 1]),
                'dd/MM/yyyy,HH:mm:ss.fff',
                [cultureinfo]::InvariantCulture,
                [System.Globalization.DateTimeStyles]::AssumeLocal,
                $parsedDate
            )
            if ($success -and $parsedDate.Value -gt $referenceTime) {
                $_
            }
        }
    }
} | Set-Content 'C:\bla\bla\errorCOLLECTION.csv'
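To see why the answer parses the timestamps instead of comparing them as strings, note that with dd/MM/yyyy ordering a lexical comparison gets chronology wrong whenever the leading day digits sort the wrong way. A small illustration (not part of the original code):

```powershell
# String comparison is character-by-character, so with dd/MM/yyyy an
# earlier date can compare as "greater" than a later one:
'01/12/2022' -gt '29/11/2022'   # False as strings, though Dec 1 is later

# Parsing with an explicit format yields a real [datetime], so the
# comparison is chronological:
$fmt = 'dd/MM/yyyy,HH:mm:ss.fff'
$a = [datetime]::ParseExact('29/11/2022,10:48:48.693', $fmt, [cultureinfo]::InvariantCulture)
$b = [datetime]::ParseExact('01/12/2022,09:00:00.000', $fmt, [cultureinfo]::InvariantCulture)
$b -gt $a                       # True
```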


Transpose specific rows into columns using PowerShell

So I have a CSV file with rows that I want to transpose (some of them) into columns using PowerShell.
The example is as follows:
ALPHA
CD
CL
CM
-5
0.1
-0.2
0.05
0
0.4
0.4
-0.08
5
0.5
0.8
-0.1
What I want is something like this:
Alpha CD CL CM
-5 0.1 -0.2 0.05
0 0.4 0.4 -0.08
5 0.5 0.8 -0.1
For reference, I got these values from a .dat data file output with over 400 rows full of information. I reformatted it into a CSV file using Out-File, and I skipped all the rows I don't need.
The information was split into rows but not columns, meaning ALPHA, CD, CL and CM were all in one cell with spaces in between, so I used the split command as shown below to break them into rows.
$text.Split() | Where { $_ }
Now I want to transpose SOME of them back into columns.
The problem is it's not fixed amounts, meaning it's not always four rows into four columns, sometimes I would get five rows that I want to turn into five columns, and THEN turn every four rows into four columns AFTER that.
Sorry if I'm rambling but it's something like this:
Row 1 > Column 1 Row 1
Row 2 > Column 2 Row 1
Row 3 > Column 3 Row 1
Row 4 > Column 4 Row 1
Row 5 > Column 5 Row 1
Row 6 > Column 1 Row 2
Row 7 > Column 2 Row 2
Row 8 > Column 3 Row 2
Row 9 > Column 4 Row 2
Row 10 > Column 5 Row 2
Row 11 > Column 1 Row 3
Row 12 > Column 2 Row 3
Row 13 > Column 3 Row 3
Row 14 > Column 4 Row 3
Row 15 > Column 1 Row 4
Please notice how it went from five columns to four columns now.
If it can be done more easily with methods other than PowerShell that PowerShell can still run (i.e. a batch file that calls PowerShell), that would be fine by me, as I need to automate a very long process and this is one of the later steps.
PS: The data is NOT cleanly comma-separated. The program used, DATCOM, outputs a data file that looks neat and structured as text, but exporting to CSV destroys it, so it has to be done using:
out-file name csv
PPS: There is no clear delimiter/cutoff point, and there are no repeating numbers or anything else that can be used as a hint. I have to do it by row number, which I know due to dealing with DATCOM before.
I explained more above, but I tried using split commands, which dropped everything into rows. If there is a way to do a literal text-to-columns split on spaces (exactly like in Excel) that would be perfect, and even better than breaking the values into rows and then transposing them to columns. However, it has to behave EXACTLY like Excel. The problem is there are 4-8 "spaces" between each value, so if I try
import-csv -delim " "
on the file, I get something like Alpha H1 H2 H3 CD H4 H5 H6 H7 H8 CL and everything else gets destroyed, whereas if I open Excel and use Text to Columns > Delimited > check "Space", the results are perfect.
Here are the files: https://easyupload.io/m/6q70ei
for006.dat is the data file generated by DATCOM.
Output1 is what I want done as described above (row to column).
Output2 is what I hope I can do later, i.e. delete a column and a row to make it cleaner, this is my ideal final output.
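As an aside on the 4-8 spaces problem described above, PowerShell's unary -split operator already behaves like Excel's "treat consecutive delimiters as one": it splits on runs of whitespace and discards empty tokens. A quick illustration:

```powershell
# Unary -split splits on runs of whitespace and drops empty tokens,
# unlike Import-Csv -Delimiter ' ', which treats every single space
# as a column break and so produces the empty H1, H2, ... columns.
$line = 'ALPHA      CD      CL      CM'
-split $line            # ALPHA, CD, CL, CM
(-split $line).Count    # 4
```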
Mmm... I am afraid your description is pretty confusing, so I ignored it and focused on your files...
The batch file below reads the for006.dat file and generates your "ideal final output" (the Output2.xlsx file) in .csv form.
@echo off
setlocal EnableDelayedExpansion

set "skip="
set "lines="
for /F "delims=:" %%a in ('findstr /N /L /C:" ALPHA" for006.dat') do (
   if not defined skip (
      set /A "skip=%%a-1"
   ) else if not defined lines (
      set /A "lines=%%a-skip-1"
   )
)

< for006.dat (
   for /L %%a in (1,1,%skip%) do set /P "="
   for /L %%a in (1,1,%lines%) do (
      set /P "line="
      set "line=!line:~2!"
      if defined line call :reformat
   )
) > Output2.csv
goto :EOF

:reformat
set "newLine=%line:  = %"
if "%newLine%" == "%line%" goto continue
set "line=%newLine%"
goto reformat

:continue
if "%line:~0,1%" == " " set "line=%line:~1%"
if "%line:~-1%" == " " set "line=%line:~0,-1%"
echo "%line: =","%"
This is Output2.csv:
"ALPHA","CD","CL","CM","CN","CA","XCP","CLA","CMA","CYB","CNB","CLB"
"-6.0","0.013","-0.175","0.2807","-0.176","-0.006","-1.599","3.100E+00","-3.580E+00","-5.643E-02","-3.080E-03","-8.679E-02"
"-3.0","0.011","-0.011","0.0926","-0.012","0.010","-7.977","3.172E+00","-3.626E+00","-8.989E-02"
"0.0","0.013","0.157","-0.0990","0.157","0.013","-0.631","3.286E+00","-3.740E+00","-9.305E-02"
"3.0","0.019","0.333","-0.2991","0.334","0.001","-0.897","3.426E+00","-3.901E+00","-9.635E-02"
"6.0","0.029","0.516","-0.5075","0.516","-0.025","-0.984","3.529E+00","-4.084E+00","-9.979E-02"
"7.5","0.036","0.609","-0.6158","0.608","-0.044","-1.013","3.472E+00","-4.002E+00","-1.015E-01"
"9.0","0.043","0.698","-0.7171","0.696","-0.067","-1.031","3.218E+00","-3.679E+00","-1.032E-01"
"10.0","0.047","0.752","-0.7791","0.748","-0.084","-1.041","2.895E+00","-3.489E+00","-1.042E-01"
"11.0","0.051","0.799","-0.8388","0.794","-0.102","-1.057","2.572E+00","-3.345E+00","-1.051E-01"
"12.0","0.055","0.841","-0.8958","0.835","-0.121","-1.073","2.320E+00","-3.178E+00","-1.059E-01"
"13.0","0.059","0.880","-0.9498","0.870","-0.140","-1.091","2.041E+00","-2.983E+00","-1.066E-01"
"14.0","0.063","0.913","-0.9999","0.901","-0.160","-1.110","1.738E+00","-2.772E+00","-1.072E-01"
"15.0","0.066","0.940","-1.0465","0.925","-0.180","-1.131","1.356E+00","-2.567E+00","-1.077E-01"
"16.0","0.067","0.960","NA","0.941","-0.201","NA","1.798E-02","NA","-1.081E-01"
"18.0","0.055","0.883","NA","0.857","-0.220","NA","-4.434E+00","NA","-1.066E-01"
You can also generate the .csv output file with no quotes by just removing the quotes from the last echo command
Try the following:
$columns = 4
$data = @"
ALPHA
CD
CL
CM
-5
0.1
-0.2
0.05
0
0.4
0.4
-0.08
5
0.5
0.8
-0.1
"@

$data | Format-Table
$headers = [System.Collections.ArrayList]::new()
$table = [System.Collections.ArrayList]::new()
$rows = [System.IO.StringReader]::new($data)
for ($i = 0; $i -lt $columns; $i++)
{
    # [void] suppresses the index that ArrayList.Add() returns
    [void]$headers.Add($rows.ReadLine())
}
$rowCount = 0
Write-Host $headers
while (($line = $rows.ReadLine()) -ne $null)
{
    if ($rowCount % $columns -eq 0)
    {
        $newRow = New-Object -TypeName psobject
        [void]$table.Add($newRow)
    }
    $newRow | Add-Member -NotePropertyName $headers[$rowCount % $columns] -NotePropertyValue $line
    $rowCount++
}
$table | Format-Table
You can use PowerShell's Begin/Process/End lifecycle to "buffer" input data until you have enough for a "row", then output that and start collecting for the next row:
# define width of each row as well as the column separator
$columnCount = 5
$delimiter = "`t"

# read in the file contents, "cell-by-cell"
$rawCSVData = Get-Content path\to\input\file.txt | ForEach-Object -Begin {
    # set up a buffer to hold 1 row at a time
    $index = 0
    $buffer = [psobject[]]::new($columnCount)
} -Process {
    # add input to buffer, and optionally output
    $buffer[$index++] = $_
    if ($index -eq $columnCount) {
        # output row, reset column index
        $buffer -join $delimiter
        $index = 0
    }
} -End {
    # Output any partial last row
    if ($index) {
        $buffer -join $delimiter
    }
}
This will produce a list of strings that can either be written to disk or parsed using regular CSV-parsing tools in PowerShell:
$rawCSVData | Set-Content path\to\output.csv
# or
$rawCSVData | ConvertFrom-Csv -Delimiter $delimiter
Once you know how many rows form the headers and the data, you can convert the file into
an array of objects by using ConvertFrom-Csv.
When done, it is easy to create a new csv from this as below:
# in this example the first 4 lines are the columns, the rest is data
# other files may need a different number of columns
$columns = 4
$data = @(Get-Content -Path 'X:\Somewhere\data.txt')
$count = 0
$result = while ($count -lt ($data.Count - ($columns - 1))) {
    $data[$count..($count + $columns - 1)] -join "`t" # join the lines with a TAB
    $count += $columns
}
$result = $result | ConvertFrom-Csv -Delimiter "`t"

# output on screen
$result | Format-Table -AutoSize

# write to new csv file
$result | Export-Csv -Path 'X:\Somewhere\data_new.csv' -NoTypeInformation
Output on screen:
ALPHA CD CL CM
----- -- -- --
-5 0.1 -0.2 0.05
0 0.4 0.4 -0.08
5 0.5 0.8 -0.1
Two (custom) functions that might help you to scrape your data from the for006.dat:
SelectString -From -To (see: #15136 Add -From and -To parameters to Select-String)
ConvertFrom-SourceTable
Get-Content .\for006.dat |
    SelectString -From '(?=0 ALPHA CD CL.*)' -To '^0.+' |
    ForEach-Object { $_.SubString(1) } |
    ConvertFrom-SourceTable |
    ConvertTo-Csv
Results:
"ALPHA","CD","CL","CM","CN","CA","XCP","CLA","CMA","CYB","CNB","CLB"
"-6","0.013","-0.175","0.2807","-0.176","-0.006","-1.599","3.100E+00","-3.580E+00","-5.643E-02","-3.080E-03","-8.679E-02"
"-3","0.011","-0.011","0.0926","-0.012","0.01","-7.977","3.172E+00","-3.626E+00","","","-8.989E-02"
"0","0.013","0.157","-0.0990","0.157","0.013","-0.631","3.286E+00","-3.740E+00","","","-9.305E-02"
"3","0.019","0.333","-0.2991","0.334","0.001","-0.897","3.426E+00","-3.901E+00","","","-9.635E-02"
"6","0.029","0.516","-0.5075","0.516","-0.025","-0.984","3.529E+00","-4.084E+00","","","-9.979E-02"
"7.5","0.036","0.609","-0.6158","0.608","-0.044","-1.013","3.472E+00","-4.002E+00","","","-1.015E-01"
"9","0.043","0.698","-0.7171","0.696","-0.067","-1.031","3.218E+00","-3.679E+00","","","-1.032E-01"
"10","0.047","0.752","-0.7791","0.748","-0.084","-1.041","2.895E+00","-3.489E+00","","","-1.042E-01"
"11","0.051","0.799","-0.8388","0.794","-0.102","-1.057","2.572E+00","-3.345E+00","","","-1.051E-01"
"12","0.055","0.841","-0.8958","0.835","-0.121","-1.073","2.320E+00","-3.178E+00","","","-1.059E-01"
"13","0.059","0.88","-0.9498","0.87","-0.140","-1.091","2.041E+00","-2.983E+00","","","-1.066E-01"
"14","0.063","0.913","-0.9999","0.901","-0.160","-1.110","1.738E+00","-2.772E+00","","","-1.072E-01"
"15","0.066","0.94","-1.0465","0.925","-0.180","-1.131","1.356E+00","-2.567E+00","","","-1.077E-01"
"16","0.067","0.96","NA","0.941","-0.201","NA","1.798E-02","NA","","","-1.081E-01"
"18","0.055","0.883","NA","0.857","-0.220","NA","-4.434E+00","NA","","","-1.066E-01"

Powershell video length calculation

I have a calculation issue I cannot solve; any help appreciated! I receive the video length of files in a more complex loop context using the following code:
$movs = "..\..\MOV"
$dura = New-Object -ComObject Shell.Application
$dura = Get-ChildItem -Path $movs -Recurse -Force | ForEach {
    $Folder = $Shell.Namespace($_.DirectoryName)
    $File = $Folder.ParseName($_.Name)
    $Duration = $Folder.GetDetailsOf($File, 27)
    [PSCustomObject]@{
        'vid-file' = $_.Name -replace ".mov", ""
        duration   = $Duration
    }
}
Later on I match some IDs to $dura so that the result looks like this:
ID vid-file duration
1 move 00:01:08
1 run 00:01:12
1 fly 00:01:30
1 swim 00:01:08
1 sleep 00:02:20
2 move 00:01:08
2 swim 00:01:08
2 sleep 00:02:20
3 move 00:01:08
3 run 00:01:12
3 fly 00:01:30
3 swim 00:01:08
3 sleep 00:02:20
3 think 00:03:20
Now I need to calculate the starting points for each concatenated video case, i.e. I have to sum up the duration of the video for each part until the current position for every ID context and create a new column with it (every new ID starts at 00:00:00). The result would look like this:
ID vid-file duration videopart-start-at
1 move 00:01:08 00:00:00
1 run 00:01:12 00:01:08
1 fly 00:01:30 00:02:20
1 swim 00:01:08 00:03:50
1 sleep 00:02:20 00:04:58
2 move 00:01:08 00:00:00
2 swim 00:01:08 00:01:08
2 sleep 00:02:20 00:02:16
3 move 00:01:08 00:00:00
3 run 00:01:12 00:01:08
3 fly 00:01:30 00:02:20
3 swim 00:01:08 00:03:50
3 sleep 00:02:20 00:04:58
3 think 00:03:20 00:07:18
I think there could be some calculated property in the PSCustomObject, but I can't figure it out:
[PSCustomObject]@{
    'vid-file'           = $_.Name -replace ".mov", ""
    duration             = $Duration
    'videopart-start-at' = $Duration | Measure-Object -Sum $Duration
}
Thanks, Daniel
I would think that there's an easier way of handling this, but I converted the time into seconds and then worked with the [TimeSpan] datatype.
$movs = 'c:\temp\sample' | Get-ChildItem -Recurse -Force -ErrorAction Stop
$dura = New-Object -ComObject Shell.Application
$result = foreach ($mov in $movs) {
    $Folder = $dura.Namespace($mov.DirectoryName)
    $File = $Folder.ParseName($mov.Name)
    $Duration = $Folder.GetDetailsOf($File, 27)
    [PSCustomObject]@{
        vidfile           = $mov.Name -replace ".mov", ""
        # Convert the string into an actual time data type
        duration          = $Duration
        durationinseconds = ([TimeSpan]::Parse($Duration)).TotalSeconds
    }
}

$i = 0
foreach ($object in $result) {
    # Skipping first and stopping on last (foreach will run out of objects to process)
    if ($i -eq 0 -or $i -gt ($result.count)) {
        # Adding one to counter
        $i++
        continue
    }
    $object.durationinseconds = $object.durationinseconds + $result.durationinseconds[$i - 1]
    $object.duration = [timespan]::FromSeconds($object.durationinseconds)
    ("{0:hh\:mm\:ss}" -f $object.duration)
    $i++
}
Thanks to Sebastian, I found the following solution (I added a "startat" column to the PSCustomObject, identical to durationinseconds):
$i = 0
foreach ($object in $result) {
    # skip lines gt result count
    if ($i -gt ($result.count)) {
        $i++
        continue
    }
    # set start to 0 for first line
    if ($i -eq 0) {
        $object.startat = 0
    }
    # calculate start time for all following lines
    if ($i -gt 0) {
        $object.startat = $result.durationinseconds[$i - 1] + $result.startat[$i - 1]
    }
    # transform seconds to time value in duration var
    $object.duration = [timespan]::FromSeconds($object.startat)
    # counter +1
    $i++
}
$result
To calculate date/time differences, try something like this...
$current = Get-Date
$end = (Get-Date).AddHours(1)
$diff = New-TimeSpan -Start $current -End $end
"The time difference is: $diff"
# Results
<#
The time difference is: 01:00:00.0019997
#>
... then format as you need to.
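Putting the running-total idea together, the start offsets can also be computed per ID with Group-Object and an accumulating [timespan], so every new ID restarts at 00:00:00 without manual index bookkeeping. This is only a sketch with hypothetical inline sample data; the startat property name matches the follow-up above but is otherwise an assumption:

```powershell
# Hypothetical sample rows shaped like the tables above
$rows = @(
    [PSCustomObject]@{ ID = 1; vidfile = 'move'; duration = '00:01:08' }
    [PSCustomObject]@{ ID = 1; vidfile = 'run' ; duration = '00:01:12' }
    [PSCustomObject]@{ ID = 2; vidfile = 'move'; duration = '00:01:08' }
)

$result = $rows | Group-Object ID | ForEach-Object {
    $startAt = [timespan]::Zero          # every new ID starts at 00:00:00
    foreach ($row in $_.Group) {
        # emit the row with its start offset, then advance the running total
        $row | Add-Member -NotePropertyName startat `
                          -NotePropertyValue ('{0:hh\:mm\:ss}' -f $startAt) -PassThru
        $startAt += [timespan]::Parse($row.duration)
    }
}
$result | Format-Table
```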

Parsing and modifying a powershell object

I'm parsing HTML from a webserver (specifically a Fanuc controller) and assigning the innerText to an object.
# Make sure the controller responds
if ($webBody.StatusCode -eq 200) {
    Write-Host "Response is Good!" -ForegroundColor DarkGreen
    $preBody = $webBody.ParsedHtml.body.getElementsByTagName('PRE') | Select -ExpandProperty innerText
    $preBody
}
The output looks a little like so:
[1-184 above]
[185] = 0 ''
[186] = 0 ''
[187] = 0 ''
[188] = 0 ''
[189] = 0 ''
[and so on]
I only want to read the data from 190, 191, 193 for example.
What's the best way to do this? I'm struggling to sanitize the unwanted data in the object.
Currently I have a vbscript app that outputs to a txt file, cleans the data then reads it back and manipulates it in to a sql insert. I'm trying to improve on it with powershell and keen to try and keep everything within the program if possible.
Any help greatly appreciated.
With the assumption that the data set is not too large to place everything into memory, you could parse it with regex into PowerShell objects, then use Where-Object to filter.
# Regex with a capture group for each important value
$RegEx = "\[(.*)\]\s=\s(\d+)\s+'(.*)'"
$IndexesToMatch = @(190, 191, 193)
$ParsedValues = $preBody.Trim() -split '\r?\n' | ForEach-Object {
    [PSCustomObject]@{
        index  = $_ -replace $RegEx, '$1'
        int    = $_ -replace $RegEx, '$2'
        string = $_ -replace $RegEx, '$3'
    }
}
$ParsedValues | Where-Object { $_.index -in $IndexesToMatch }
Input :
[190] = 1 'a'
[191] = 2 'b'
[192] = 3 'c'
[193] = 4 'd'
[194] = 5 'e'
Output :
index int string
----- --- ------
190 1 a
191 2 b
193 4 d
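As a variation on the approach above, the filtering can also happen while parsing by using switch -Regex with the automatic $Matches variable, so only the wanted indexes ever become objects. A sketch assuming the same [n] = int 'string' line shape:

```powershell
# Hypothetical sample lines in the controller's output format
$lines = @"
[190] = 1 'a'
[191] = 2 'b'
[192] = 3 'c'
"@ -split '\r?\n'

$wanted = 190, 191
$parsed = switch -Regex ($lines) {
    "\[(\d+)\]\s=\s(\d+)\s+'(.*)'" {
        # $Matches holds the capture groups of the line that just matched
        if ([int]$Matches[1] -in $wanted) {
            [PSCustomObject]@{ index = $Matches[1]; int = $Matches[2]; string = $Matches[3] }
        }
    }
}
$parsed
```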

parsing whitespace-separated fields

I'm trying to write a script which works as follows:
My input is a text file with 8 rows and 8 columns, filled with values 0 or 1, with a single space character each separating the columns.
I need to check the 4th number in each row, and output false, if it is 0, and true, if it is 1.
My code at the moment looks like this:
param($fname)
$rows = (Get-Content $fname)
for ($i = 0; $i -lt $rows.Length; $i++)
{
    if ($rows[$i][6] -eq 1)
    {
        Write-Host "true"
    }
    if ($rows[$i][6] -ne 1)
    {
        Write-Host "false"
    }
}
So I use [$i][6] because I understand that that's the 4th number, accounting for the spaces acting as separators.
I checked and thought it was perfect, but somehow it says false for every line, even though when I Write-Host $rows[0][6] it shows 1.
tl;dr
# Create sample input file, with values of interest in 4th field
# (0-based field index 3).
@'
0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0
'@ > file

foreach ($line in (Get-Content file)) {
    $fields = -split $line
    if ($fields[3] -eq '1') {
        "true"
    } else {
        "false"
    }
}
yields:
false
true
There are many subtleties to consider in your original code, but the above code:
offers a more awk-like approach by splitting each input line into whitespace-separated fields, whatever the length of the fields, courtesy of the unary -split operator.
subscripts (indices) can then be based on field indices rather than character positions.
All fields returned by -split ... are strings, hence the comparison with the string literal '1'; but, generally, PowerShell performs a lot of behind-the-scenes conversion magic for you: with the code above, unlike with your own code, using 1 would have worked too.
As for why your approach failed:
Indexing into (using a subscript with) a string value in PowerShell is a special case: it implicitly treats the string as a character array, and, with a single index such as 6, returns a [char] instance.
It is the LHS (left-hand side) of an expression involving a binary operator such as -eq that determines what type the RHS (right-hand side) will be coerced to, if necessary, before applying the operator:
([char] '1') -eq 1 # !! $false
Coercing the (implied) [int] type of RHS 1 to the LHS type [char] yields Unicode codepoint U+0001, i.e., a control character rather than the "ASCII" digit '1', which is why the comparison fails.
@PetSerAl's helpful but cryptic suggestion (in a comment on the question) to use '1'[0] rather than 1 as the RHS solves the problem in this particular case, because '1'[0] returns 1 as a [char] instance, but the solution doesn't generalize to multi-character field values.
'1' -eq 1 # $true; same as: ([string] 1) -eq 1 or ([string] 1) -eq '1'
Converting integer 1 to a string indeed is the same as '1'.
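The coercion rules described above are easy to verify interactively:

```powershell
([char]'1') -eq 1      # False: RHS int 1 becomes [char] U+0001, a control character
'1' -eq 1              # True : RHS int 1 becomes the string '1'
[int][char]'1'         # 49  : the actual Unicode code point of the digit '1'
([char]'1') -eq '1'[0] # True: both sides are the [char] '1'
```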
This script fills a true 2d array with the proper values from the matrix file, but the output doesn't fit.
$Array = New-Object "bool[,]" 8,8
[int]$i = 0; [int]$j = 0
Get-Content $fname | ForEach-Object {
    foreach ($b in ($_ -split ' ')) {
        "{0},{1}={2}" -f $i, $j, ($b -eq 0)
        $Array[$i, $j] = ($b -eq 0)
        $j++
    }
    $j = 0; $i++
}

Convert a returned string in PowerShell to delimited data and export

I am using PowerShell to remotely configure storage arrays from various vendors. The script allows me to connect to the arrays with no issue. From there I run commands using REST API calls or remote SSH via plink.exe. Here is my issue. When using plink, I need to query the array and then perform operations conditionally based on the output. The issue is that the output is returned in string format. This is causing a problem for me because I would like to sort and extract portions of the returned string and present users with options based on the output.
Example - List Volumes
if ($sel_vendor -eq 3){
$ibm_ex_vols = & $rem_ssh $rem_ssh_arg1 $rem_ssh_arg2 $array_user"#"$array_mgmt_ip "-pw" $readpass "lsvdisk"
foreach ($i in $ibm_ex_vols){
write-host $i
}
}
Here is the output of the code
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count RC_change compressed_copy_count parent_mdisk_grp_id parent_mdisk_grp_name
0 Test1 0 io_grp0 online 0 SVC_SYSTEM_POOL 10.00GB striped 600507680C80004E980000000000074B 0 1 empty 1 no 0 0 SVC_SYSTEM_POOL
1 Test2 0 io_grp0 online 0 SVC_SYSTEM_POOL 10.00GB striped 600507680C80004E980000000000074C 0 1 empty 1 no 0 0 SVC_SYSTEM_POOL
2 Test3 0 io_grp0 online 0 SVC_SYSTEM_POOL 10.00GB striped 600507680C80004E980000000000074D 0 1 empty 1 no 0 0 SVC_SYSTEM_POOL
3 Test4 0 io_grp0 online 0 SVC_SYSTEM_POOL 10.00GB striped 600507680C80004E980000000000074E 0 1 empty 1 no 0 0 SVC_SYSTEM_POOL
What I would like to be able to do is store this info and then select the headers and data from the id and name columns. I was able to output the data to a txt file using the Out-File command. Once I did that, I used Excel to convert it to a delimited file using the fixed-width option. While this worked, I need a dynamic solution.
Here is a simple parsing function, which can split your data and produce custom object with properties to work with:
function Parse-Data {
    begin {
        $Headers = $null
    }
    process {
        if (!$Headers) {
            $Headers =
                [Regex]::Matches($_, '\S+') |
                ForEach-Object {
                    $Header = $null
                } {
                    if ($Header) {
                        $Header.SubArgs += $_.Index - 1 - $Header.SubArgs[0]
                        $Header
                    }
                    $Header = [PSCustomObject]@{
                        Name    = $_.Value
                        SubArgs = ,$_.Index
                    }
                } {
                    $Header
                }
        } else {
            $String = $_
            $Headers |
                ForEach-Object {
                    $Object = [ordered]@{}
                } {
                    $Object.Add($_.Name, $String.Substring.Invoke($_.SubArgs).TrimEnd())
                } {
                    [PSCustomObject]$Object
                }
        }
    }
}
And this is how you can invoke it:
$ibm_ex_vols = @'
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count RC_change compressed_copy_count parent_mdisk_grp_id parent_mdisk_grp_name
0 Test1 0 io_grp0 online 0 SVC_SYSTEM_POOL 10.00GB striped 600507680C80004E980000000000074B 0 1 empty 1 no 0 0 SVC_SYSTEM_POOL
1 Test2 0 io_grp0 online 0 SVC_SYSTEM_POOL 10.00GB striped 600507680C80004E980000000000074C 0 1 empty 1 no 0 0 SVC_SYSTEM_POOL
2 Test3 0 io_grp0 online 0 SVC_SYSTEM_POOL 10.00GB striped 600507680C80004E980000000000074D 0 1 empty 1 no 0 0 SVC_SYSTEM_POOL
3 Test4 0 io_grp0 online 0 SVC_SYSTEM_POOL 10.00GB striped 600507680C80004E980000000000074E 0 1 empty 1 no 0 0 SVC_SYSTEM_POOL
'@ -split '\r?\n'
$ibm_ex_vols | Parse-Data
Here's an even simpler, math-based solution (assuming that $ibm_ex_vols contains the output as a collection of strings):
$sOutFile = "outfile.csv"

# Splitting the headers line into chars.
$cChars = $ibm_ex_vols[0] -split ''
$cInsertIndices = @()
$j = 0
for ($i = 1; $i -lt $cChars.Count; $i++) {
    # If previous character is a whitespace and the current character isn't
    if ( ($cChars[$i - 1] -eq ' ') -and ($cChars[$i] -ne ' ') ) {
        # we'll insert a delimiter here
        $cInsertIndices += $i + $j - 1
        $j++ # and each insert will increase the line length.
    }
}

foreach ($sLine in $ibm_ex_vols) {
    foreach ($i in $cInsertIndices) {
        # Adding delimiter.
        $sLine = $sLine.Insert($i, ',')
    }
    # Optionally we can also trim trailing whitespaces:
    # $sLine = $sLine -replace '\s+(?=,)'
    $sLine | Out-File -FilePath $sOutFile -Append
}
Of course here we don't do any actual parsing and hence don't get convenient PSObjects to work with.
Finally, if we could be sure that all data fields will be populated and won't contain any whitespace characters, we wouldn't need to rely on field width and could further simplify our code to something like this:
$sOutFile = "outfile.csv"
foreach ($sLine in $ibm_ex_vols) {
    $sLine = $sLine -replace '\s+', ','
    $sLine | Out-File -FilePath $sOutFile -Append
}