I have a PowerShell script with an if statement and multiple conditions. My code is working fine, but I am looking for a way to display which condition my object doesn't meet.
Get-ChildItem $Path -Directory -Force | ForEach-Object {
    $FolderName = $_.BaseName -match $Folderpattern
    $DateOK = $_.LastWriteTime -lt (Get-Date).AddDays(-3)
    $Folder = $_.BaseName
    if (($FolderName) -and ($DateOK)) {
        Write-Host "$Folder can be moved"
    }
    else {
        Write-Host "$Folder can't be moved"
    }
}
I would like to display "$folder can't be moved because it doesn't match the pattern" if it fails the $FolderName condition, and "$folder can't be moved because the last write time is less than 3 days ago" if it fails the $DateOK condition.
Thanks for the help.
If this is just for checking, and not for keeping a log where you need those specific messages, I might go for something simple that just captures the true and false values for each of the tests.
$path = 'C:\temp'
Get-ChildItem $path -Directory -Force |
    ForEach-Object {
        [PSCustomObject]@{
            Folder         = $_.BaseName
            LastWriteTime  = $_.LastWriteTime
            FolderNameTest = $_.BaseName -match 'test'
            DateOKTest     = $_.LastWriteTime -lt (Get-Date).AddDays(-30)
        }
    }
Sample Output
Folder         LastWriteTime       FolderNameTest DateOKTest
------         -------------       -------------- ----------
.git           06.09.2021 01:06:06          False       True
.vscode        25.09.2021 10:06:11          False       True
1              22.09.2021 22:30:26          False       True
batch_test     02.05.2022 22:29:25           True      False
cleanup        20.09.2021 10:02:51          False       True
DeviceDatabase 26.09.2021 12:07:26          False       True
host           22.09.2021 23:23:38          False       True
move_logs      26.04.2022 19:28:59          False      False
test_run       01.03.2022 22:14:14           True       True
You can then pipe this to Export-Csv if you like.
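For example, a minimal sketch (the CSV path is just a placeholder):
Get-ChildItem $path -Directory -Force |
    ForEach-Object {
        [PSCustomObject]@{
            Folder         = $_.BaseName
            LastWriteTime  = $_.LastWriteTime
            FolderNameTest = $_.BaseName -match 'test'
            DateOKTest     = $_.LastWriteTime -lt (Get-Date).AddDays(-30)
        }
    } |
    Export-Csv -Path 'C:\temp\folder-checks.csv' -NoTypeInformation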
There are various ways to go about this, but this one is clean and easy to understand, so it would be my preferred route:
Function Move-FolderConditional { # todo: give this cmdlet a better name for your context
[CmdletBinding()]
Param (
[Parameter(Mandatory, ValueFromPipeline)]
[System.IO.DirectoryInfo[]]$Path
,
[Parameter(Mandatory)]
[System.IO.DirectoryInfo]$Destination
,
# Folders matching this pattern get moved to the target
[Parameter(Mandatory)]
[string]$ArchivableFolderPattern
,
# Folders older than this date get moved to the target
[Parameter()]
[datetime]$MinKeepDateUtc = (Get-Date).AddDays(-3).ToUniversalTime()
)
Process {
foreach ($directory in $Path) {
if ($directory.BaseName -notmatch $ArchivableFolderPattern) {
Write-Warning "Could not move folder '$($directory.FullName)' as the name does not match the required pattern"
continue;
}
if ($directory.LastWriteTimeUtc -ge $MinKeepDateUtc) {
Write-Warning "Could not archive folder '$($directory.FullName)' as it was last updated at '$($directory.LastWriteTimeUtc.ToString('u'))'"
continue;
}
try {
#Move-Item -Path $directory -Destination $Destination -ErrorAction Stop # Uncomment this if you actually want to move your files
Write-Information "Successfully moved '$($directory.FullName)' to '$($Destination.FullName)'"
} catch [System.Management.Automation.ItemNotFoundException] { # For this exception we'd probably check in the Begin block instead - but this is just to give the idea that we could add a try/catch if required
Write-Warning "Could not archive folder '$($directory.FullName)' the target directory does not exist: '$($Destination.FullName)'"
}
}
}
}
# Example usage
Get-ChildItem $path -Directory -Force | Move-FolderConditional -ArchivableFolderPattern '^log' -InformationAction Continue -Destination 'z:\archive\'
But other options are available (I've just included snippets to give the gist of these):
Switch Statement
switch ( $directory )
{
{$_.BaseName -notmatch $ArchivableFolderPattern}
{
Write-Warning "Could not move folder '$($_.FullName)' as the name does not match the required pattern"
break
}
{$_.LastWriteTimeUtc -ge $MinKeepDateUtc}
{
Write-Warning "Could not archive folder '$($_.FullName)' as it was last updated at '$($_.LastWriteTimeUtc.ToString('u'))'"
break
}
default
{
Write-Information "Successfully moved '$($_.FullName)' to '$($Destination.FullName)'"
}
}
Flags
[bool]$archiveFolder = $true
if ($directory.BaseName -notmatch $ArchivableFolderPattern) {
Write-Warning "Could not move folder '$($directory.FullName)' as the name does not match the required pattern"
$archiveFolder = $false
}
if ($directory.LastWriteTimeUtc -ge $MinKeepDateUtc) {
# note: this check runs even if $archiveFolder is already false... you can use `else`, or amend the condition to `$archiveFolder -and ($directory.LastWriteTimeUtc -ge $MinKeepDateUtc)`; though if going that route it's better to use the switch statement.
Write-Warning "Could not archive folder '$($directory.FullName)' as it was last updated at '$($directory.LastWriteTimeUtc.ToString('u'))'"
$archiveFolder = $false
}
if ($archiveFolder) {
Write-Information "Successfully moved '$($directory.FullName)' to '$($Destination.FullName)'"
}
Other
Or you can do combinations of the above, e.g. use the switch statement to set your flags, in which case you can optionally remove the break statements so that all issues are displayed; a rough sketch of that combination follows.
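This is illustration only, reusing the names from the function above:
$archiveFolder = $true
switch ( $directory )
{
{$_.BaseName -notmatch $ArchivableFolderPattern}
{
Write-Warning "Could not move folder '$($_.FullName)' as the name does not match the required pattern"
$archiveFolder = $false
# no break, so the date condition below is still evaluated and reported
}
{$_.LastWriteTimeUtc -ge $MinKeepDateUtc}
{
Write-Warning "Could not archive folder '$($_.FullName)' as it was last updated at '$($_.LastWriteTimeUtc.ToString('u'))'"
$archiveFolder = $false
}
}
if ($archiveFolder) {
Write-Information "Successfully moved '$($directory.FullName)' to '$($Destination.FullName)'"
}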
Related
PowerShell novice here again with my proof of concept.
The code below successfully extracts attached files from .msg files located in folders and leaves the extracted filename without changing it. What I'm looking for now is to extract part of the parent folder name, with the standard format of...
nnnn+string (e.g. "8322 MyStudy") i.e. 4 digits followed by a space then string.
...to rename the extracted filename from...
ExtractedFilename.pdf to "0nnnn - ExtractedFilename.pdf". e.g. "08322 - ExtractedFilename.pdf"
My main problem is how to extract the numeric part of the parent folder name (from where my module will be run). I'm hoping that my poor PS formatting skills will allow me to do the rest.
Once again, any help appreciated.
##
## Source: https://chris.dziemborowicz.com/blog/2013/05/18/how-to-batch-extract-attachments-from-msg-files-using-powershell/
##
## Usage: Expand-MsgAttachment *
##
##
function Expand-MsgAttachment
{
[CmdletBinding()]
Param
(
[Parameter(ParameterSetName="Path", Position=0, Mandatory=$True)]
[String]$Path,
[Parameter(ParameterSetName="LiteralPath", Mandatory=$True)]
[String]$LiteralPath,
[Parameter(ParameterSetName="FileInfo", Mandatory=$True, ValueFromPipeline=$True)]
[System.IO.FileInfo]$Item
)
Begin
{
# Load application
Write-Verbose "Loading Microsoft Outlook..."
$outlook = New-Object -ComObject Outlook.Application
}
Process
{
switch ($PSCmdlet.ParameterSetName)
{
"Path" { $files = Get-ChildItem -Path $Path }
"LiteralPath" { $files = Get-ChildItem -LiteralPath $LiteralPath }
"FileInfo" { $files = $Item }
}
$files | % {
# Work out file names
$msgFn = $_.FullName
# extract path, e.g. 'c:\path\to\'
$msgPath = Split-Path -Path $msgFn
# Skip non-.msg files
if ($msgFn -notlike "*.msg") {
Write-Verbose "Skipping $_ (not an .msg file)..."
return
}
# Extract message body
Write-Verbose "Extracting attachments from $_..."
$msg = $outlook.CreateItemFromTemplate($msgFn)
$msg.Attachments | % {
# Work out attachment file name
#$attFn = $msgFn -replace '\.msg$', " - Attachment - $($_.FileName)"
$attFn = Join-Path -Path $msgPath -ChildPath ($_.FileName)
# Do not try to overwrite existing files
if (Test-Path -literalPath $attFn) {
Write-Verbose "Skipping $($_.FileName) (file already exists)..."
return
}
# Save attachment
Write-Verbose "Saving $($_.FileName)..."
$_.SaveAsFile($attFn)
# Output to pipeline
Get-ChildItem -LiteralPath $attFn
}
}
}
# This function to rename expanded attachment file to study renaming standards
Function RenameExpandedAttachments {
}
End
{
Write-Verbose "Done."
}
}
The currently running script is:
$script:MyInvocation.MyCommand.Path
Use Split-Path to get only the Path,
Split-Path $script:MyInvocation.MyCommand.Path
To get only the last element, use Split-Path again with the -Leaf parameter:
Split-Path -Leaf (Split-Path $script:MyInvocation.MyCommand.Path)
To extract the leading numbers, use a regular expression with a (capture group):
'^(\d+) (.*)$'
And wrap all this in an if:
If ((Split-Path -Leaf (Split-Path $script:MyInvocation.MyCommand.Path)) -match '^(\d+) (.*)$'){
$NewName = "{0:00000} - {1}" -f $Matches[1],$ExtractedFileName
} else {
"No numbers found in this path"
}
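To actually rename the saved attachment with that prefix, a minimal sketch (assuming $attFn holds the full path of the attachment that was just saved, as in the extraction loop in the question):
$scriptFolder = Split-Path -Leaf (Split-Path $script:MyInvocation.MyCommand.Path)
if ($scriptFolder -match '^(\d+) (.*)$') {
$NewName = "{0:00000} - {1}" -f [int]$Matches[1], (Split-Path -Leaf $attFn)
Rename-Item -LiteralPath $attFn -NewName $NewName
} else {
"No numbers found in this path"
}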
My aim is to compare two directories exactly - including the structure of the directories and sub-directories.
I need this because I want to monitor whether something in the folder E:\path2 was changed. For this purpose, a copy of the full folder is kept in C:\path1. If someone changes something, it has to be changed in both directories.
It is important for us, because if something is changed in the directory (accidentally or not) it could break other functions in our infrastructure.
This is the script I've already written:
# Compare files for "copy default folder"
# This Script compares the files and folders which are synced to every client.
# Source: https://mcpmag.com/articles/2016/04/14/contents-of-two-folders-with-powershell.aspx
# 1. Compare content and Name of every file recursively
$SourceDocsHash = Get-ChildItem -recurse -Path C:\path1 | foreach {Get-FileHash -Path $_.FullName}
$DestDocsHash = Get-ChildItem -recurse -Path E:\path2 | foreach {Get-FileHash -Path $_.FullName}
$ResultDocsHash = (Compare-Object -ReferenceObject $SourceDocsHash -DifferenceObject $DestDocsHash -Property hash -PassThru).Path
# 2. Compare name of every folder recursively
$SourceFolders = Get-ChildItem -recurse -Path C:\path1 #| where {!$_.PSIsContainer}
$DestFolders = Get-ChildItem -recurse -Path E:\path2 #| where {!$_.PSIsContainer}
$CompareFolders = Compare-Object -ReferenceObject $SourceFolders -DifferenceObject $DestFolders -PassThru -Property Name
$ResultFolders = $CompareFolders | Select-Object FullName
# 3. Check if UNC-Path is reachable
# Source: https://stackoverflow.com/questions/8095638/how-do-i-negate-a-condition-in-powershell
# Printout, if UNC-Path is not available.
if(-Not (Test-Path \\bb-srv-025.ftscu.be\DIP$\Settings\ftsCube\default-folder-on-client\00_ftsCube)){
$UNCpathReachable = "UNC-Path not reachable and maybe"
}
# 4. Count files for statistics
# Source: https://stackoverflow.com/questions/14714284/count-items-in-a-folder-with-powershell
$count = (Get-ChildItem -recurse -Path E:\path2 | Measure-Object ).Count;
# FINAL: Print out result for check_mk
if($ResultDocsHash -Or $ResultFolders -Or $UNCpathReachable){
echo "2 copy-default-folders-C-00_ftsCube files-and-folders-count=$count CRITIAL - $UNCpathReachable the following files or folders has been changed: $ResultDocs $ResultFolders (none if empty after ':')"
}
else{
echo "0 copy-default-folders-C-00_ftsCube files-and-folders-count=$count OK - no files has changed"
}
I know the output is not perfectly formatted, but it's OK. :-)
This script spots the following changes successfully:
create new folder or new file
rename folder or file -> it is shown as error, but the output is empty. I can live with that. But maybe someone sees the reason. :-)
delete folder or file
change file content
This script does NOT spot the following changes:
move folder or file to other sub-folder. The script still says "everything OK"
I've been trying a lot of things, but could not solve this.
Can anyone help me extend the script so that it spots a moved folder or file?
I think your best bet is to use the .NET FileSystemWatcher class. It's not trivial to implement an advanced function that uses it, but I think it will simplify things for you.
I used the article Tracking Changes to a Folder Using PowerShell when I was learning this class. The author's code is below. I cleaned it up as little as I could stand. (That publishing platform's code formatting hurts my eyes.)
I think you want to run it like this.
New-FileSystemWatcher -Path E:\path2 -Recurse
I could be wrong.
Function New-FileSystemWatcher {
[cmdletbinding()]
Param (
[parameter()]
[string]$Path,
[parameter()]
[ValidateSet('Changed', 'Created', 'Deleted', 'Renamed')]
[string[]]$EventName,
[parameter()]
[string]$Filter,
[parameter()]
[System.IO.NotifyFilters]$NotifyFilter,
[parameter()]
[switch]$Recurse,
[parameter()]
[scriptblock]$Action
)
$FileSystemWatcher = New-Object System.IO.FileSystemWatcher
If (-NOT $PSBoundParameters.ContainsKey('Path')){
$Path = $PWD
}
$FileSystemWatcher.Path = $Path
If ($PSBoundParameters.ContainsKey('Filter')) {
$FileSystemWatcher.Filter = $Filter
}
If ($PSBoundParameters.ContainsKey('NotifyFilter')) {
$FileSystemWatcher.NotifyFilter = $NotifyFilter
}
If ($PSBoundParameters.ContainsKey('Recurse')) {
$FileSystemWatcher.IncludeSubdirectories = $True
}
If (-NOT $PSBoundParameters.ContainsKey('EventName')){
$EventName = 'Changed','Created','Deleted','Renamed'
}
If (-NOT $PSBoundParameters.ContainsKey('Action')){
$Action = {
Switch ($Event.SourceEventArgs.ChangeType) {
'Renamed' {
$Object = "{0} was {1} to {2} at {3}" -f $Event.SourceArgs[-1].OldFullPath,
$Event.SourceEventArgs.ChangeType,
$Event.SourceArgs[-1].FullPath,
$Event.TimeGenerated
}
Default {
$Object = "{0} was {1} at {2}" -f $Event.SourceEventArgs.FullPath,
$Event.SourceEventArgs.ChangeType,
$Event.TimeGenerated
}
}
$WriteHostParams = @{
ForegroundColor = 'Green'
BackgroundColor = 'Black'
Object = $Object
}
Write-Host @WriteHostParams
}
}
$ObjectEventParams = @{
InputObject = $FileSystemWatcher
Action = $Action
}
ForEach ($Item in $EventName) {
$ObjectEventParams.EventName = $Item
$ObjectEventParams.SourceIdentifier = "File.$($Item)"
Write-Verbose "Starting watcher for Event: $($Item)"
$Null = Register-ObjectEvent @ObjectEventParams
}
}
I don't think any example I've found online tells you how to stop watching the filesystem. The simplest way is to just close your PowerShell window. But I always seem to have 15 tabs open in each of five PowerShell windows, and closing one of them is a nuisance.
Instead, you can use Get-Job to get the Id of registered events. Then use Unregister-Event -SubscriptionId n to, well, unregister the event, where 'n' represents the number(s) you find in the Id property of Get-Job.
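For example, a sketch assuming the watchers were registered with the 'File.*' source identifiers used in the function above:
# List the event subscriptions created by Register-ObjectEvent
Get-EventSubscriber
# Unregister just the watchers created above
Get-EventSubscriber | Where-Object { $_.SourceIdentifier -like 'File.*' } | Unregister-Event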
So basically you want to synchronize the two folders and log all the changes made to them.
I would suggest you use the
Sync-Folder Script
or
FreeFileSync.
1. Code Description, i.e. how it is intended to work
The user enters a path to a directory in PowerShell. The code checks whether any folder within the declared directory contains no data at all. If so, the path of each empty folder is shown to the user at the prompt and then removed from the system.
2. The Issue, i.e. what I am struggling with
The code I just wrote doesn't count the depth of a folder hierarchy as I would expect (the column in the output table is blank). Besides that, the program works okay - I've still got to fix the issue where my code removes empty parent directories first and child directories later, which of course will cause an error in PowerShell; for instance, take
C:\Users\JohnMiller\Desktop\Homework
where Homework consists of Homework\Math\School Project and Homework\Computer Science\PowerShell Code. Note that all directories are supposed to be empty with the exception of PowerShell Code, the folder containing this script. (Side note: A folder is considered empty when no file dwells inside. At least that's what my code is based on for now.)
3. The Code
# Delete all empty (sub)folders in [$path]
[Console]::WriteLine("`n>> Start script for deleting all empty (sub)folders.")
$path = Read-Host -prompt ">> Specify a path"
if (test-path $path)
{
$allFolders = Get-ChildItem $path -recurse | Where {$_.PSisContainer -eq $True}
$allEmptyFolders = $allFolders | Where-Object {$_.GetFiles().Count -eq 0}
$allEmptyFolders | Select-Object FullName,@{Name = "FolderDepth"; Expression = {$_.DirectoryName.Split('\').Count}} | Sort-Object -descending FolderDepth,FullName
[Console]::WriteLine("`n>> Do you want to remove all these directories? Validate with [True] or [False].")
$answer = Read-Host -prompt ">> Answer"
if ([System.Convert]::ToBoolean($answer) -eq $True)
{
$allEmptyFolders | Remove-Item -force -recurse
}
else
{
[Console]::WriteLine(">> Termination confirmed.`n")
exit
}
}
else
{
[Console]::WriteLine(">> ERROR: [$($path)] is an invalid directory. Program terminates.`n")
exit
}
The depth-count problem:
Your code references a .DirectoryName property in the calculated property passed to Select-Object, but the [System.IO.DirectoryInfo] instances output by Get-ChildItem have no such property. Use the .FullName property instead:
$allEmptyFolders |
Select-Object FullName,@{Name='FolderDepth'; Expression={$_.FullName.Split('\').Count}} |
Sort-Object -descending FolderDepth,FullName
Eliminating nested empty subfolders:
To recap your problem with a simple example:
If c:\foo is empty (no files) but has empty subdir. c:\foo\bar, your code outputs them both, and if you then delete c:\foo first, deleting c:\foo\bar next fails (because deleting c:\foo also removed c:\foo\bar).
If you eliminate all nested empty subdirs. up front, you not only declutter what you present to the user, but you can then safely iterate over the output and delete the directories one by one.
With your approach you'd need a 2nd step to eliminate the nested empty dirs., but here's a depth-first recursive function that omits nested empty folders. To make it behave the same way as your code with respect to hidden files, pass -Force.
function Get-RecursivelyEmptyDirectories {
[cmdletbinding()]
param(
[string] $LiteralPath = '.',
[switch] $Force,
[switch] $DoNotValidatePath
)
$ErrorActionPreference = 'Stop'
if (-not $DoNotValidatePath) {
$dir = Get-Item -LiteralPath $LiteralPath
if (-not $dir.PSIsContainer) { Throw "Not a directory path: $LiteralPath" }
$LiteralPath = $dir.FullName
}
$haveFiles = [bool] (Get-ChildItem -LiteralPath $LiteralPath -File -Force:$Force | Select-Object -First 1)
$emptyChildDirCount = 0
$emptySubdirs = $null
if ($childDirs = Get-ChildItem -LiteralPath $LiteralPath -Directory -Force:$Force) {
$emptySubDirs = New-Object System.Collections.ArrayList
foreach($childDir in $childDirs) {
if ($childDir.LinkType -eq 'SymbolicLink') {
Write-Verbose "Ignoring symlink: $LiteralPath"
} else {
Write-Verbose "About to recurse on $($childDir.FullName)..."
try { # If .AddRange() fails due to exceeding the array list's capacity, we must fail too.
$emptySubDirs.AddRange(@(Get-RecursivelyEmptyDirectories -DoNotValidatePath -LiteralPath $childDir.FullName -Force:$Force))
} catch {
Throw
}
# If the last entry added is the child dir. at hand, that child dir.
# is by definition itself empty.
if ($emptySubDirs[-1] -eq $childDir.FullName) { ++$emptyChildDirCount }
}
} # foreach ($childDir ...
} # if ($childDirs = ...)
if (-not $haveFiles -and $emptyChildDirCount -eq $childDirs.Count) {
# There are no child files and all child dirs., if any, are themselves
# empty, so we only output the input path at hand, as the highest
# directory in this subtree that is empty (save for empty descendants).
$LiteralPath
} else {
# This directory is not itself empty, so output the (highest-level)
# descendants that are empty.
$emptySubDirs
}
}
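A sample invocation, using the example path from your question as a placeholder:
# List the highest-level empty directories first, so you can review them
Get-RecursivelyEmptyDirectories -LiteralPath 'C:\Users\JohnMiller\Desktop\Homework' -Force -Verbose
# Then, if the output looks right, remove them
Get-RecursivelyEmptyDirectories -LiteralPath 'C:\Users\JohnMiller\Desktop\Homework' -Force | Remove-Item -Recurse -Force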
Tips regarding your code:
Get-ChildItem -Directory is available in PSv3+, which is not only shorter but also more efficient than Get-ChildItem | Where-Object { $_.PSIsContainer -eq $True }.
Use Write-Host instead of [Console]::WriteLine
[System.Convert]::ToBoolean($answer) only works with the culture-invariant string literals 'True' and 'False' ([bool]::TrueString and [bool]::FalseString, although case variations and leading and trailing whitespace are allowed).
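If you'd rather not rely on that conversion, a simple y/n prompt is an alternative (just a sketch, reusing $allEmptyFolders from your code):
$answer = Read-Host -prompt ">> Do you want to remove all these directories? [y/n]"
if ($answer -match '^(y|yes)$') {
$allEmptyFolders | Remove-Item -force -recurse
}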
The following script creates a folder named with the specified date on the servers in servers.txt, then copies the folder from which the script is run to that folder.
For example, it creates the folder "3-22-15 (night)" on SERVER1, SERVER2, etc., then copies "Directory Containing the Script" to "3-22-15 (night)".
$CurrentLocation = get-location
$DeploymentDate = "3-22-15 (night)"
Foreach($server in get-content "servers.txt"){
New-Item -ErrorAction SilentlyContinue -ItemType directory -Path \\$server\C$\Deployments\$DeploymentDate
copy-item -Path $CurrentLocation -Destination \\$server\C$\Deployments\$DeploymentDate\ -ErrorAction SilentlyContinue -recurse
}
How do I modify the script to include file verification for each file that is copied to \\$server\C$\Deployments\$DeploymentDate\?
I would like it to output an error message with the file that does not pass the verification check, but to continue copying.
I was going to try something like this:
function SafeCopy ($SourcePath,$DestinationPath,$SourceFileName) {
#MD5 Check Function
function Check-MD5 ($FilePath) {
$md5=New-Object -TypeName System.Security.Cryptography.MD5CryptoServiceProvider
$hash=[System.BitConverter]::ToString($md5.ComputeHash([System.IO.File]::ReadAllBytes($FilePath)))
Return $hash
} # EndOf function Check-MD5
$SourceFile = Join-Path $SourcePath $SourceFileName
$DestinationFile = Join-Path $DestinationPath $SourceFileName
$MD5Source=Check-MD5 $SourceFile
Copy-Item $SourceFile $DestinationPath
if (Test-Path $DestinationFile) {
$MD5Destination=Check-MD5 $DestinationFile
if ($MD5Destination -match $MD5Source) {
Write-Host "`nThe file `"$SourceFile`" has been copied to `"$DestinationPath`" successfully.`n" -ForegroundColor DarkGreen
} else {
Write-Host "`nThe file `"$SourceFile`" has been copied to `"$DestinationPath`" but the MD5 check failed!`n" -ForegroundColor DarkRed
}
} else {
Write-Host "`nThe file `"$SourceFile`" has not been copied to `"$DestinationPath`"!`n" -ForegroundColor DarkRed
}
} # EndOf function SafeCopy
But I'm not sure how to implement it.
I am not a talented scripter nor do I play one on the radio but I've put this together and it seems to work. As mentioned, the larger the filesize being compared, the slower it goes.
Param
(
[parameter(Mandatory=$true,Position=1,ValueFromPipeLine=$true)][string]$source,
[parameter(Mandatory=$true,Position=2,ValueFromPipeLine=$true)][string]$dest
)
$countsource = @(Get-ChildItem -Path $source -Directory).Count
$countdest = @(Get-ChildItem -Path $dest -Directory).Count
IF($countsource -eq $countdest){
"$source equals $dest"
}
Else{
"$source does not match $dest"
}
So copy the folder, then run Folder-Compare as a function once the copy is finished. TODO: Add a folder item count compare, and warn or break if the folders do not have the same number of items (subfolders and files).
Sample usage and output:
PS C:\Scripts> .\Folder-Compare.ps1 -source C:\Scripts\temp -dest C:\aaa
C:\Scripts\temp does not match C:\aaa
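If you need per-file verification rather than a folder-level compare, a rough sketch using Get-FileHash (PowerShell 4+; the root paths are placeholders) could look like this. It reports each file that fails the check but keeps copying:
$sourceRoot = 'C:\Scripts\temp'
$destRoot = 'C:\aaa'
Get-ChildItem -Path $sourceRoot -Recurse -File | ForEach-Object {
    # Mirror the relative path under the destination root (simple string replace)
    $target = $_.FullName.Replace($sourceRoot, $destRoot)
    $null = New-Item -ItemType Directory -Path (Split-Path $target) -Force
    Copy-Item -Path $_.FullName -Destination $target -ErrorAction SilentlyContinue
    if (-not (Test-Path $target)) {
        Write-Warning "$($_.FullName) was not copied"
    }
    elseif ((Get-FileHash $_.FullName).Hash -ne (Get-FileHash $target).Hash) {
        Write-Warning "$($_.FullName) failed the hash check"
    }
}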
If I have an example function ...
function foo()
{
# get a list of files matched pattern and timestamp
$fs = Get-Item -Path "C:\Temp\*.txt" |
Where-Object {$_.lastwritetime -gt "11/01/2009"}
if ( $fs -ne $null ) # $fs may be empty, check it first
{
foreach ($o in $fs)
{
# new bak file
$fBack = "C:\Temp\test\" + $o.Name + ".bak"
# Exception here Get-Item! See following msg
# Exception thrown only Get-Item cannot find any files this time.
# If there is any matched file there, it is OK
$fs1 = Get-Item -Path $fBack
....
}
}
}
The exception message is ... The WriteObject and WriteError methods cannot be called after the pipeline has been closed. Please contact Microsoft Support Services.
Basically, I cannot use Get-Item again within the function or loop to get a list of files in a different folder.
Can anyone explain this, and what is the correct way to fix it?
By the way I am using PS 1.0.
This is just a minor variation of what has already been suggested, but it uses some techniques that make the code a bit simpler ...
function foo()
{
# Get a list of files matched pattern and timestamp
$fs = @(Get-Item C:\Temp\*.txt | Where {$_.lastwritetime -gt "11/01/2009"})
foreach ($o in $fs) {
# new bak file
$fBack = "C:\Temp\test\$($o.Name).bak"
if (!(Test-Path $fBack))
{
Copy-Item $o.FullName $fBack
}
$fs1 = Get-Item -Path $fBack
....
}
}
For more info on the issue with foreach and scalar null values check out this blog post.
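The short version of that issue, relevant on PowerShell v1/v2, which this question targets:
# A foreach loop runs once even when a command returns nothing:
$fs = Get-Item C:\Temp\*.doesnotexist -ErrorAction SilentlyContinue
foreach ($o in $fs) { 'ran' }   # prints 'ran' once, with $o = $null
# Wrapping the command in @() yields an empty array instead, so the loop body is skipped:
$fs = @(Get-Item C:\Temp\*.doesnotexist -ErrorAction SilentlyContinue)
foreach ($o in $fs) { 'ran' }   # prints nothing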
I modified the above code slightly to create the backup file, and I am able to use Get-Item within the loop successfully, with no exceptions being thrown. My code is:
function foo()
{
# get a list of files matched pattern and timestamp
$files = Get-Item -Path "C:\Temp\*.*" | Where-Object {$_.lastwritetime -gt "11/01/2009"}
foreach ($file in $files)
{
$fileBackup = [string]::Format("{0}{1}{2}", "C:\Temp\Test\", $file.Name , ".bak")
Copy-Item $file.FullName -destination $fileBackup
# Test that backup file exists
if (!(Test-Path $fileBackup))
{
Write-Host "$fileBackup does not exist!"
}
else
{
$fs1 = Get-Item -Path $fileBackup
...
}
}
}
I am also using PowerShell 1.0.