How to access the Error property after using a cmdlet, without a loop - PowerShell

I'm trying to get some information about a Remove-Item operation in PowerShell.
Since I don't want the loop to stop when one item fails in Remove-Item, I can't use try {} catch {} with -ErrorAction Stop.
Is there a way to get the error information I want without clearing the error variable before Remove-Item, and also without having to use a loop to iterate over the files?
$error.Clear()
$Files | Remove-Item -Force
0..($Error.Count - 1) | % {
    $x = $Error[$_].CategoryInfo
    $y = "{0}, {1}, {2}" -f $x.Category, $x.Reason, $x.TargetName
    $ResultLog += [PSCustomObject]@{Result="Error"; Path=$p.path; Message=$y}
}

I like @HAL9256's gusto, but I think relying on $Error.Count is a bad idea. By default $Error only holds 256 entries before it stops growing and starts dropping the oldest errors. Depending on the volume of files and errors, you could easily run out of room there.
https://devblogs.microsoft.com/scripting/powershell-error-handling-and-why-you-should-care/
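As an aside, that cap comes from the $MaximumErrorCount preference variable, which defaults to 256 and can be raised if you expect an error-heavy run (the value below is illustrative):
$MaximumErrorCount        # defaults to 256
$MaximumErrorCount = 1024 # raise the cap before an error-heavy run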
Rather than using the pipeline, I think a foreach would be better suited.
$ResultLog = @()
foreach ($file in $files) {
    try {
        Remove-Item $file -Force -ErrorAction Stop
    } catch {
        $x = $_.CategoryInfo
        $y = "{0}, {1}, {2}" -f $x.Category, $x.Reason, $x.TargetName
        # $file is the item that failed on this iteration
        $ResultLog += [PSCustomObject]@{Result="Error"; Path=$file; Message=$y}
    }
}

Use -ErrorAction Continue. It won't halt the running of the script, but will still add to the $Error variable.
To avoid having to clear the $Error variable before running: since $Error is an array, simply store the error count before running, then use a for loop to iterate over only the new messages.
$ErrorsBefore = $Error.Count
$Files | Remove-Item -Force -ErrorAction Continue
$ResultLog = @()
# $Error is most-recent-first, so indexes 0..(new count - old count - 1) are exactly the new errors
for ($i = 0; $i -lt ($Error.Count - $ErrorsBefore); $i++) {
    $x = $Error[$i].CategoryInfo
    $y = "{0}, {1}, {2}" -f $x.Category, $x.Reason, $x.TargetName
    $ResultLog += [PSCustomObject]@{Result="Error"; Path=$x.TargetName; Message=$y}
}

Related

Looping through File Groups such as FileGroup159, FileGroup160, etc. in Powershell

So I got the code to work how I like it for individual files. Based on some of the suggestions below, I was able to come up with this:
$Path = "C:\Users\User\Documents\PowerShell\"
$Num = 160
$ZipFile = "FileGroup0000000$Num.zip"
$File = "*$Num*.txt"
$n = dir -Path $Path$File | Measure
if($n.count -gt 0){
Remove-Item $Path$ZipFile
Compress-Archive -Path $Path$File -DestinationPath $Path
Rename-Item $Path'.zip' $Path'FileGroup0000000'$Num'.zip'
Remove-Item $Path$File
}
else {
Write-Output "No Files to Move for FileGroup$File"
}
The only thing I need to do now is have $Num increment after the program finishes each time, so the program will run and then move $Num on to 161, 162, etc., and I will not have to re-initiate the code manually. Thanks for the help so far.
Your filename formatting should go inside the loop, and you should use the format operator -f to get the leading zeros, like:
159..1250 | ForEach-Object {
    $UnzippedFile = 'FileGroup{0:0000000000}' -f $_
    $ZipFile = "$UnzippedFile.zip"
    Write-Host "Unzipping: $ZipFile"
    # Do your thing here
}
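Putting the two together, the per-group work can live inside that loop. Here is a sketch based on the asker's snippet (the path and the 160..200 range are illustrative; passing the full zip name to -DestinationPath also avoids the separate Rename-Item step):
$Path = "C:\Users\User\Documents\PowerShell\"

160..200 | ForEach-Object {
    $Num     = $_
    $Base    = 'FileGroup{0:0000000000}' -f $Num   # e.g. FileGroup0000000160
    $ZipFile = "$Base.zip"
    $File    = "*$Num*.txt"

    $n = Get-ChildItem -Path $Path$File | Measure-Object
    if ($n.Count -gt 0) {
        Remove-Item $Path$ZipFile -ErrorAction SilentlyContinue
        Compress-Archive -Path $Path$File -DestinationPath "$Path$ZipFile"
        Remove-Item $Path$File
    }
    else {
        Write-Output "No files to move for $Base"
    }
}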

Can't remove-item because it is in use, although it isn't

I have a script that creates a working folder, moves some stuff in, makes some changes, moves the output out, and then deletes the working folder. I cannot for the life of me get this folder to remove properly.
Originally it was due to a PDF being tied up because I was manipulating it with pdftk; however, even after properly using .Dispose() it still gives me an error saying it's in use. I can manually delete it by right-clicking, but if I try to use "Remove-Item $folder -Recurse" it gives me the error. I've even tried using Start-Sleep to delay the action, but it makes no difference.
Is there anything I'm not thinking of? I tried Remove-Variable after Dispose, but it threw an error.
Here are some relevant snippets of my code:
$submitdir = "D:\SUBFTP\SUBMIT"
$local = "$submitdir\${filename}_dir"
#...
$reader = New-Object iTextSharp.text.pdf.pdfreader -ArgumentList $pdf
$global:outvar = ""
for ($page = 1; $page -le $reader.NumberOfPages; $page++) {
$lines = [char[]]$reader.GetPageContent($page) -join "" -split "`n"
$global:outvar = $global:outvar += ("`n" + $lines)
if(($page % 50) -eq 0) {
#echo "$file - parsing page $page";
LogWrite "$file - parsing page $page" }
}
#... end of script below...
$reader.Dispose()
start-sleep 20;
rm $local -recurse
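No accepted fix appears in the thread, but a common workaround when a disposed .NET object still seems to hold a file handle (an assumption, not a confirmed solution for this case) is to drop the reference, force a garbage collection, and retry the delete:
$reader.Dispose()
$reader = $null

# Hypothetical fix: ask the runtime to release any lingering finalizable handles
[System.GC]::Collect()
[System.GC]::WaitForPendingFinalizers()

# Retry the delete a few times in case the handle is released late
for ($try = 1; $try -le 5; $try++) {
    try {
        Remove-Item $local -Recurse -Force -ErrorAction Stop
        break
    } catch {
        Start-Sleep -Seconds 4
    }
}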

Monitor a command and wait for it to complete before proceeding to next command?

I have written a PowerShell script that will:
grab all txt files from a directory
perform a line-by-line assessment of the first file (grabbing headers and appending, appending data to each line in file, saving to an output file)
for subsequent files, grab body (excluding header), append data, then add to output file
The problem is that Add-Content sometimes hangs, so certain files don't get written because the output file is still in use. I added a function (based on recommendations found in various places on Stack Exchange) that tests whether the output file is available for read-write. This feels like a brute-force approach.
Is there a way to monitor the actual Add-Content process launched by PowerShell to identify when it is complete? Or is there some other way to disaggregate the code as written to use the process control commands in PowerShell?
Sample:
function IsFileAccessible([String]$FullFileName) {
    [Boolean]$IsAccessible = $false
    try {
        [IO.File]::OpenWrite($FullFileName).Close()
        $IsAccessible = $true
    } catch {
        $IsAccessible = $false
    }
    return $IsAccessible
}

cd '[filepath]'
del old_output.type
$filearray = @()
$files = Get-ChildItem '[filepath]' -Filter "*.txt"
$outfile = 'new_output.type'

for ($i = 0; $i -lt $files.Count; $i++) {
    # Define variables
    $lastWriteTime = $files[$i].LastWriteTime
    # Define process steps for appending data
    filter Add-Time {"$_$lastWriteTime"}
    if ($i -eq 0) {
        $lines = Get-Content $files[$i]
        for ($j = 0; $j -lt $lines.Count; $j++) {
            if ($j -eq 0) {
                $appended_txt = 'New_Header'
                filter Add-Header {"$_$appended_txt"}
                $lines[$j] | Add-Header | Add-Content $outfile
            } else {
                $lines[$j] | Add-Time | Add-Content $outfile
            }
        }
    } else {
        do {
            $ErrorActionPreference = 'SilentlyContinue'
            $test = IsFileAccessible('[filepath-new_output.type]')
            echo 'file open'
        } until ($test)
        $ErrorActionPreference = 'Continue'
        echo 'okay'
        (Get-Content $files[$i].FullName | Select-Object -Skip 1) |
            Add-Time | Add-Content $outfile
    }
}
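Rather than polling until the file is free, one way to sidestep the contention entirely (a sketch, assuming nothing else needs the output file while the script runs; the header-append step is omitted for brevity) is to open a single StreamWriter once and push every line through it, so the output file is never repeatedly opened and closed:
# Use a full path: .NET resolves relative paths against its own working directory
$outfile = Join-Path (Get-Location) 'new_output.type'
$writer = New-Object System.IO.StreamWriter $outfile   # one handle for the whole run
try {
    $files = Get-ChildItem '[filepath]' -Filter "*.txt"
    for ($i = 0; $i -lt $files.Count; $i++) {
        $lastWriteTime = $files[$i].LastWriteTime
        $lines = Get-Content $files[$i].FullName
        # Keep the header only from the first file
        if ($i -gt 0) { $lines = $lines | Select-Object -Skip 1 }
        foreach ($line in $lines) {
            $writer.WriteLine("$line$lastWriteTime")
        }
    }
} finally {
    $writer.Close()   # releases the handle even if something throws
}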

What is the cleanest way to join in one array the result of two or more calls to Get-ChildItem?

I'm facing the problem of moving and copying some items on the file system with PowerShell.
I know from experiments that, even with PowerShell v3, the cmdlets Copy-Item, Move-Item and Remove-Item cannot correctly handle reparse points like junctions and symbolic links, and can lead to disasters if used with the switch -Recurse.
I want to prevent this eventuality. I have to handle two or more folders each run, so I was thinking of something like this.
$Strings = @{ ... }
$ori = Get-ChildItem $OriginPath -Recurse
$dri = Get-ChildItem $DestinationPath -Recurse
$items = ($ori + $dri) | where { $_.Attributes -match 'ReparsePoint' }
if ($items.Length -gt 0)
{
    Write-Verbose ($Strings.LogExistingReparsePoint -f $items.Length)
    $items | foreach { Write-Verbose " $($_.FullName)" }
    throw ($Strings.ErrorExistingReparsePoint -f $items.Length)
}
This doesn't work because $ori and $dri can also be single items rather than arrays: op_Addition will fail. Changing to
$items = @(@($ori) + @($dri)) | where { $_.Attributes -match 'ReparsePoint' }
poses another problem, because $ori and $dri can also be $null and I can end up with an array containing $null. When piping the joined result to Where-Object, again, I can end up with $null, a single item, or an array.
The only apparently working solution is the more complex code below:
$items = @()
if ($ori -ne $null) { $items += @($ori) }
if ($dri -ne $null) { $items += @($dri) }
$items = $items | where { $_.Attributes -match 'ReparsePoint' }
if ($items -ne $null)
{
    Write-Verbose ($Strings.LogExistingReparsePoint -f @($items).Length)
    $items | foreach { Write-Verbose " $($_.FullName)" }
    throw ($Strings.ErrorExistingReparsePoint -f @($items).Length)
}
Is there a better approach?
I'm certainly interested in whether there is a way to handle reparse points correctly with PowerShell cmdlets, but I'm much more interested in how to join and filter two or more "PowerShell collections".
I'll conclude by observing that, at present, this PowerShell feature, the "polymorphic array", doesn't seem like much of a benefit to me.
Thanks for reading.
Just add a filter to throw out nulls. You're on the right track.
$items = @(@($ori) + @($dri)) | ? { $_ -ne $null }
I've been on PowerShell 3 for a while now, but from what I can tell this should work in 2.0 as well:
$items = @($ori, $dri) | %{ $_ } | ? { $_.Attributes -match 'ReparsePoint' }
Basically %{ $_ } is a foreach loop that unrolls the inner arrays by iterating over them and passing each inner element ($_) down the pipeline. Nulls will automatically be excluded from the pipeline.
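A quick way to see both behaviors at once (paths are illustrative):
$a = Get-ChildItem C:\Windows -Filter 'no-such-file-*'   # no matches -> contributes nothing
$b = Get-Item C:\Windows                                 # a single item, not an array

$joined = @($a, $b) | % { $_ }
@($joined).Count   # 1 - the empty result was dropped, the single item was kept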

Get-Item fails with closed pipeline error

If I have an example function ...
function foo()
{
    # get a list of files matching a pattern and timestamp
    $fs = Get-Item -Path "C:\Temp\*.txt" |
        Where-Object {$_.lastwritetime -gt "11/01/2009"}
    if ( $fs -ne $null ) # $fs may be empty, check it first
    {
        foreach ($o in $fs)
        {
            # new bak file
            $fBack = "C:\Temp\test\" + $o.Name + ".bak"
            # Exception on Get-Item here! See the following message.
            # The exception is thrown only when Get-Item cannot find any files this time.
            # If there is any matched file there, it is OK.
            $fs1 = Get-Item -Path $fBack
            ....
        }
    }
}
The exception message is ... The WriteObject and WriteError methods cannot be called after the pipeline has been closed. Please contact Microsoft Support Services.
Basically, I cannot use Get-Item again within the function or loop to get a list of files in a different folder.
Any explanation and what is the correct way to fix it?
By the way I am using PS 1.0.
This is just a minor variation of what has already been suggested, but it uses some techniques that make the code a bit simpler ...
function foo()
{
    # Get a list of files matching a pattern and timestamp
    $fs = @(Get-Item C:\Temp\*.txt | Where {$_.lastwritetime -gt "11/01/2009"})
    foreach ($o in $fs) {
        # new bak file
        $fBack = "C:\Temp\test\$($o.Name).bak"
        if (!(Test-Path $fBack))
        {
            Copy-Item $o.FullName $fBack
        }
        $fs1 = Get-Item -Path $fBack
        ....
    }
}
For more info on the issue with foreach and scalar null values check out this blog post.
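The @(...) wrapping matters because, in PowerShell 1.0/2.0, foreach happily iterates once over a $null collection; wrapping the command in @() turns an empty result into a zero-length array instead (a quick illustration, assuming one of those older versions):
$empty = Get-Item C:\Temp\*.nomatch   # matches nothing

foreach ($o in $empty) {
    'this body runs once with $o = $null in PS 1.0/2.0'
}

foreach ($o in @($empty)) {
    'this body never runs - @() produced an empty array'
}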
I modified the above code slightly to create the backup file, but I am able to use the Get-Item within the loop successfully, with no exceptions being thrown. My code is:
function foo()
{
    # get a list of files matching a pattern and timestamp
    $files = Get-Item -Path "C:\Temp\*.*" | Where-Object {$_.lastwritetime -gt "11/01/2009"}
    foreach ($file in $files)
    {
        $fileBackup = [string]::Format("{0}{1}{2}", "C:\Temp\Test\", $file.Name, ".bak")
        Copy-Item $file.FullName -Destination $fileBackup
        # Test that backup file exists
        if (!(Test-Path $fileBackup))
        {
            Write-Host "$fileBackup does not exist!"
        }
        else
        {
            $fs1 = Get-Item -Path $fileBackup
            ...
        }
    }
}
I am also using PowerShell 1.0.