I have a script, and currently I do the following, which gets the full path to the files in the subdir:
$filenameOut = "out.html"
#get current working dir
$cwd = Get-ScriptDirectory
#get files to display in lists
$temp = Join-Path $cwd "Initial Forms"
$temp = Join-Path $temp "General Forms"
$InitialAppointmentGenArr = Get-ChildItem -Path $temp
So this will return a list where the first file in the array looks like this:
"//server/group/Creds/Documents/Initial Forms/General Forms/Background Check.pdf"
However, to have my generated web page work on our extranet, I can't give the full path to the file. I just need it to return:
"Initial Forms/General Forms/Background Check.pdf"
This will be a link I can use on the extranet. How do I get get-childitem to return just the sub-path?
My script is run from
//server/group/Creds/Documents
I can't find any examples similar to this. I'd like to avoid hard-coding the script location as well, in case it gets moved.
The easy way is to simply trim the unneeded path including trailing slash:
$filenameOut = "out.html"
#get current working dir
$cwd = Get-ScriptDirectory
#get files to display in lists
$temp = Join-Path $cwd "Initial Forms"
$temp = Join-Path $temp "General Forms"
$FullPath = Get-ChildItem -Path $temp
$InitialAppointmentGenArr = $FullPath | %{ $_.FullName.Replace($cwd + "\","")}
I suggest the following approach:
$relativeDirPath = Join-Path 'Initial Forms' 'General Forms'
Get-ChildItem -LiteralPath $PSScriptRoot/$relativeDirPath | ForEach-Object {
Join-Path $relativeDirPath $_.Name
}
Note that I've used $PSScriptRoot in lieu of $cwd, as it sounds like the latter contains the directory in which your script is located, which the automatic variable $PSScriptRoot reports directly.
Here's a generalized variation that also works with recursive use of Get-ChildItem:
$relativeDirPath = Join-Path 'Initial Forms' 'General Forms'
Get-ChildItem -LiteralPath $PSScriptRoot/$relativeDirPath | ForEach-Object {
$_.FullName.Substring($PSScriptRoot.Length + 1)
}
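For instance, a sketch of the recursive form (assuming that simply adding -Recurse, plus -File to limit output to files, is what you're after):
Get-ChildItem -LiteralPath $PSScriptRoot/$relativeDirPath -Recurse -File | ForEach-Object {
    # Strip the script directory plus the following path separator, keeping the relative sub-path.
    $_.FullName.Substring($PSScriptRoot.Length + 1)
}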
As an aside: In the cross-platform PowerShell (Core) 7+ edition, the underlying .NET Core framework's System.IO.Path type now has a .GetRelativePath() method, which is a convenient way to obtain a relative path from an absolute one, via a reference path:
# PowerShell (Core) 7+ only.
PS> [IO.Path]::GetRelativePath('/foo/bar', '/foo/bar/bam/baz.txt')
bam/baz.txt
Note:
Since .NET's working directory typically differs from PowerShell's, be sure to provide full input paths.
Also, be sure that the paths are file-system-native paths, not based on PowerShell-only drives.
Convert-Path can be used to determine full, file-system-native paths.
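A minimal sketch of that, reusing the layout from the question and assuming the file exists (Convert-Path only resolves existing items):
# PowerShell (Core) 7+ only; the file name below is the hypothetical one from the question.
$base = Convert-Path -LiteralPath $PSScriptRoot
$file = Convert-Path -LiteralPath (Join-Path $PSScriptRoot 'Initial Forms\General Forms\Background Check.pdf')
[IO.Path]::GetRelativePath($base, $file)   # -> Initial Forms\General Forms\Background Check.pdf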
I want to move some files from one place to another while preserving the directory structure. The result of my script is broken encoding, and it doesn't work. I use robocopy because I have files with names longer than 256 characters. I need to move files from different locations, and we are talking about several hundred files.
$source = Get-Content "C:\Users\bill\Downloads\111.TXT" -Encoding UTF8
$destination = "C:\Users\bill\OneDrive\Documents"
foreach($file in $source)
{
robocopy $file $destination /MOVE /E /copyall /log:C:\Users\bill\OneDrive\Documents\log.txt
}
Jane,
Try building your robocopy command like this:
*** UPDATED and TESTED ***
Clear-Host
$source = Get-Content "G:\Test\111.txt" -Encoding UTF8
$destination = "G:\BEKDocs\Test"
Foreach ($SrcPath in $source) {
$file = Split-Path -Path $SrcPath -Leaf
$path = Split-Path -Path $SrcPath -parent
$PLen = $Path.Length -3
$PAdd = $Path.Substring(3,$PLen)
$SavePath = Join-Path -Path $Destination -ChildPath $PAdd
$robocopyOptions = @('/Move', '/copyall')
$LogFile = @('/log+:G:\BEKDocs\Transfer\log.txt')
$CmdLine = @($path, $SavePath, $file) +
$robocopyOptions + $LogFile
& 'robocopy.exe' $CmdLine
} #End Foreach
Note: Use single quotes as indicated!
Update notes:
Part of the problem was that you had the filenames attached to the paths in your file, where RoboCopy wants SourcePath DestPath FileSpec as its first three arguments.
You're copying single files, so there is no need for the recurse (/E) parameter.
Since you want to preserve the directory structure, you need to append the source path, less the drive letter, to your destination directory.
You're also calling RoboCopy in a loop, so you need the /Log+ parameter so that each file is appended to the log rather than overwriting it.
In my test I copied a file from the base directory, another from one level deep and a third from 2 levels deep. The code preserved the directory structure starting with the specified Destination as the Base or Root directory.
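To illustrate, for a hypothetical source line of G:\Docs\Sub\file.txt, the loop above effectively ends up running:
robocopy G:\Docs\Sub G:\BEKDocs\Test\Docs\Sub file.txt /Move /copyall /log+:G:\BEKDocs\Transfer\log.txt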
Took a bit of time to figure all this out but I wasn't going to let it go. Hope this works for you!
I am having a strange problem in PowerShell (Version 2021.8.0) while creating folders and naming them. I start with a number of individual e-book files in a folder that I set using Set-Location. I use the file name, minus the extension, to create a new folder with the same name as the e-book file. The code works fine the majority of the time with the various file extensions I have stored in an array at the beginning of the code.
What's happening is that the code creates the proper folder name the majority of the time and moves the source file into the folder after it's created.
The problem is that if the source file name ends in an "e" on files with the extension ".epub", then the "e" is missing from the end of the created folder name. I thought I also saw it drop "r" and "p", but I have been unable to replicate that error recently.
Below is my code. It is set up to run against file extensions for e-books and audiobooks. Please ignore the error messages that are being generated when files of a specific type don't exist in the working folder. I am just using the array for testing and it will be filled automatically later by reading the folder contents.
This Code Creates a Folder for Each File and moves the file into that Folder:
Clear-Host
$SourceFileFolder = 'N:\- Books\- - BMS\- Books Needing Folders'
Set-Location $SourceFileFolder
$MyArray = ( "*.azw3", "*.cbz", "*.doc", "*.docx", "*.djvu", "*.epub", "*.mobi", "*.mp3", "*.pdf", "*.txt" )
Foreach ($FileExtension in $MyArray) {
Get-ChildItem -Include $FileExtension -Name -Recurse | Sort-Object | ForEach-Object { $SourceFileName = $_
$NewDirectoryName = $SourceFileName.TrimEnd($FileExtension)
New-Item -Name $NewDirectoryName -ItemType "directory"
$OriginalFileName = Join-Path -Path $SourceFileFolder -ChildPath $SourceFileName
$DestinationFilename = Join-Path -Path $NewDirectoryName -ChildPath $SourceFileName
$DestinationFilename = Join-Path -Path $SourceFileFolder -ChildPath $DestinationFilename
Move-Item $OriginalFileName -Destination $DestinationFilename
}
}
Thanks for any help you can give. Driving me nuts and I am pretty sure it's something that I am doing wrong, like always.
String.TrimEnd()
Removes all the trailing occurrences of a set of characters specified in an array from the current string.
The TrimEnd method removes all characters that match any character in the array you provide. It does not check whether ".epub" appears at the end of the string; rather, it trims any of the supplied characters from the end of the string. In your case, the characters '.', 'e', 'p', 'u' and 'b' are removed from the end until a character outside that set is reached. As a result you can eventually remove more than you intended, and in your case you do.
I'd suggest using EndsWith to match your extensions and performing a substring selection instead, as below. If you deal only with single extensions (e.g. not with .tar.gz or other double extensions), you can also use the .NET [System.IO.Path]::GetFileNameWithoutExtension($MyFileName) method.
$MyFileName = "Teste.epub"
$FileExt = '.epub'
# Wrong approach
$output = $MyFileName.TrimEnd($FileExt)
write-host $output -ForegroundColor Yellow
#Output returns Test
# Proper method
if ($MyFileName.EndsWith($FileExt)) {
$output = $MyFileName.Substring(0,$MyFileName.Length - $FileExt.Length)
Write-Host $output -ForegroundColor Cyan
}
# Returns Teste
#Alternative method. Won't work if you want to trim out double extensions (eg. tar.gz)
if ($MyFileName.EndsWith($FileExt)) {
$Output = [System.IO.Path]::GetFileNameWithoutExtension($MyFileName)
Write-Host $output -ForegroundColor Cyan
}
You're making this too hard on yourself. Use the .BaseName to get the filename without extension.
Your code simplified:
$SourceFileFolder = 'N:\- Books\- - BMS\- Books Needing Folders'
$MyArray = "*.azw3", "*.cbz", "*.doc", "*.docx", "*.djvu", "*.epub", "*.mobi", "*.mp3", "*.pdf", "*.txt"
(Get-ChildItem -Path $SourceFileFolder -Include $MyArray -File -Recurse) | Sort-Object Name | ForEach-Object {
# BaseName is the filename without extension
$NewDirectory = Join-Path -Path $SourceFileFolder -ChildPath $_.BaseName
$null = New-Item -Path $NewDirectory -ItemType Directory -Force
$_ | Move-Item -Destination $NewDirectory
}
I'm writing a PS script that looks for specific files in different directories.
My code looks like this
# $Path is provided by the user, it's a path like
# $Path = "C:\Some Directory\Project [ABC]\Files\"
# There's a check to ensure path ends with a backslash
$PDFFiles = Get-Item $($Path + "*.pdf")
for ($counter=0; $counter -lt $PDFFiles.Length; $counter++) {
# Do stuff
}
The issue is that $Path may contain characters considered wildcards (e.g. [ or ] in my example), but I can't use -LiteralPath because I need the *.pdf wildcard to be interpreted.
How do I properly handle strings to tell PowerShell that this part is literal and that part is a wildcard?
Use Get-ChildItem instead of Get-Item.
Pass the path of the folder to -LiteralPath and then use the -Filter parameter for the wildcard file name filter:
$PDFFiles = Get-ChildItem -LiteralPath $Path -Filter "*.pdf"
-LiteralPath will not attempt to expand wildcard sequences in the path
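A quick sketch of why that matters, using the bracketed path from the question:
$Path = 'C:\Some Directory\Project [ABC]\Files\'
Get-ChildItem -Path $Path -Filter '*.pdf'        # [ABC] is parsed as a wildcard character set, so this can match nothing
Get-ChildItem -LiteralPath $Path -Filter '*.pdf' # the path is taken verbatim; only -Filter is treated as a wildcard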
An alternative approach is to escape the $Path value:
$escapedPath = [WildcardPattern]::Escape($Path)
Get-Item -Path (Join-Path $escapedPath *.pdf)
Right up front, apologies for my lack of knowledge with PowerShell. I'm very new to the language. I need to copy some files located in a certain path to another similar path. For example:
C:\TEMP\Users\<username1>\Documents\<varyingfoldername>\*
C:\TEMP\Users\<username2>\Documents\<varyingfoldername>\*
C:\TEMP\Users\<username3>\Documents\<varyingfoldername>\*
C:\TEMP\Users\<username4>\Documents\<varyingfoldername>\*
etc....
to
C:\Files\Users\<username1>\Documents\<varyingfoldername>\*
C:\Files\Users\<username2>\Documents\<varyingfoldername>\*
C:\Files\Users\<username3>\Documents\<varyingfoldername>\*
C:\Files\Users\<username4>\Documents\<varyingfoldername>\*
etc....
So basically all files and directories from path one need to be copied to the second path for each one of the different paths. The only known constant is the first part of the path like C:\TEMP\Users...... and the first part of the destination like C:\Files\Users.....
I can get all the different paths and files by using:
gci C:\TEMP\[a-z]*\Documents\[a-z]*\
but I am not sure how to then pass what's found in the wildcards so I can use them when I do the copy. Any help would be appreciated here.
This should work:
Get-ChildItem "C:\TEMP\*\Documents\*" | ForEach-Object {
$old = $_.FullName
$new = $_.FullName.Replace("C:\TEMP\Users\","C:\Files\Users\")
Move-Item $old $new
}
For additional complexity in matching folder levels, something like this should work:
Get-ChildItem "C:\TEMP\*\Documents\*" -File | ForEach-Object {
$old = $_.FullName
$pathArray = $old.Split("\") # Splits the path into an array
$new = [system.String]::Join("\", $pathArray[0..1]) # Creates a starting point, in this case C:\Temp
$new += "\" + $pathArray[4] # Appends another folder level, you can change the index to match the folder you're after
$new += "\" + $pathArray[6] # You can repeat this line to keep matching different folders
Copy-Item -Recurse -Force $old $new
}
I'm trying to process a list of files that may or may not be up to date and may or may not yet exist. In doing so, I need to resolve the full path of an item, even though the item may be specified with relative paths. However, Resolve-Path prints an error when used with a non-existent file.
For example, What's the simplest, cleanest way to resolve ".\newdir\newfile.txt" to "C:\Current\Working\Directory\newdir\newfile.txt" in Powershell?
Note that System.IO.Path's static methods use the process's working directory, which isn't the PowerShell current location.
You want:
c:\path\exists\> $ExecutionContext.SessionState.Path.GetUnresolvedProviderPathFromPSPath(".\nonexist\foo.txt")
returns:
c:\path\exists\nonexist\foo.txt
This has the advantage of working with PSPaths, not native filesystem paths. A PSPath may not map 1-1 to a filesystem path, for example if you mount a psdrive with a multi-letter drive name.
What's a pspath?
ps c:\> new-psdrive temp filesystem c:\temp
...
ps c:\> cd temp:
ps temp:\>
temp:\ is a drive-qualified pspath that maps to a win32 (native) path of c:\temp.
-Oisin
When Resolve-Path fails due to the file not existing, the fully resolved path is accessible from the thrown error object.
You can use a function like the following to fix Resolve-Path and make it work like you expect.
function Force-Resolve-Path {
<#
.SYNOPSIS
Calls Resolve-Path but works for files that don't exist.
.REMARKS
From http://devhawk.net/blog/2010/1/22/fixing-powershells-busted-resolve-path-cmdlet
#>
param (
[string] $FileName
)
$FileName = Resolve-Path $FileName -ErrorAction SilentlyContinue `
-ErrorVariable _frperror
if (-not($FileName)) {
$FileName = $_frperror[0].TargetObject
}
return $FileName
}
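A hypothetical usage with the path from the question, assuming C:\Current\Working\Directory is the current location:
# Returns C:\Current\Working\Directory\newdir\newfile.txt even though the file doesn't exist yet.
Force-Resolve-Path '.\newdir\newfile.txt'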
I think you're on the right path. Just use [Environment]::CurrentDirectory to set .NET's notion of the process's current dir e.g.:
[Environment]::CurrentDirectory = $pwd
[IO.Path]::GetFullPath(".\xyz")
Join-Path (Resolve-Path .) newdir\newfile.txt
This has the advantage of not having to set the CLR Environment's current directory:
[IO.Path]::Combine($pwd,"non\existing\path")
NOTE
This is not functionally equivalent to x0n's answer. System.IO.Path.Combine only combines string path segments. Its main utility is keeping the developer from having to worry about slashes. GetUnresolvedProviderPathFromPSPath will traverse the input path relative to the present working directory, according to the .'s and ..'s.
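A short illustration of the difference (assuming the current location is C:\Current\Working\Directory):
[IO.Path]::Combine($pwd, '..\other\file.txt')
# -> C:\Current\Working\Directory\..\other\file.txt  (the '..' is kept as text)
$ExecutionContext.SessionState.Path.GetUnresolvedProviderPathFromPSPath('..\other\file.txt')
# -> C:\Current\Working\other\file.txt               (the '..' is traversed)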
I've found that the following works well enough.
$workingDirectory = Convert-Path (Resolve-Path -path ".")
$newFile = "newDir\newFile.txt"
Do-Something-With "$workingDirectory\$newFile"
Convert-Path can be used to get the path as a string, although this is not always needed. See the documentation on Convert-Path for more details.
function Get-FullName()
{
[CmdletBinding()]
Param(
[Parameter(ValueFromPipeline = $True)] [object[]] $Path
)
Begin{
$Path = @($Path);
}
Process{
foreach($p in $Path)
{
if($p -eq $null -or $p -match '^\s*$'){$p = [IO.Path]::GetFullPath(".");}
elseif($p -is [System.IO.FileInfo]){$p = $p.FullName;}
else{$p = [IO.Path]::GetFullPath($p);}
$p;
}
}
}
I ended up with this code in my case. I needed to create a file later in the script, so this code presumes you have write access to the target folder.
$File = ".\newdir\newfile.txt"
If (Test-Path $File) {
$Resolved = (Resolve-Path $File).Path
} else {
New-Item $File -ItemType File | Out-Null
$Resolved = (Resolve-Path $File).Path
Remove-Item $File
}
I also enclosed New-Item in a try..catch block, but that goes beyond the scope of this question.
I had a similar issue where I needed to find the folder 3 levels up from a folder that does not exist yet to determine the name for a new folder I wanted to create... It's complicated. Anyway, this is what I ended up doing:
($path -split "\\" | select -SkipLast 3) -join "\"
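For example, with a hypothetical path:
$path = 'C:\a\b\c\d\e'
($path -split "\\" | select -SkipLast 3) -join "\"   # -> C:\a\b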
You can just set the -errorAction to "SilentlyContinue" and use Resolve-Path
5 > (Resolve-Path .\AllFilerData.xml -ea 0).Path
C:\Users\Andy.Schneider\Documents\WindowsPowerShell\Scripts\AllFilerData.xml
6 > (Resolve-Path .\DoesNotExist -ea 0).Path
7 >
There is an accepted answer here, but it is quite lengthy and there is a simpler alternative available.
In any recent version of Powershell, you can use Test-Path -IsValid -Path 'C:\Probably Fake\Path.txt'
This simply verifies that there are no illegal characters in the path and that the path could be used to store a file. If the target doesn't exist, Test-Path won't care in this instance -- it's only being asked to test if the provided path is potentially valid.
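For example, with the hypothetical path above:
Test-Path -Path 'C:\Probably Fake\Path.txt'           # False: the target doesn't exist
Test-Path -IsValid -Path 'C:\Probably Fake\Path.txt'  # True: the path is syntactically usable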
Neither of the two most popular answers works correctly for paths on drives that don't exist.
function NormalizePath($filename)
{
$filename += '\'
$filename = $filename -replace '\\(\.?\\)+','\'
while ($filename -match '\\([^\\.]|\.[^\\.]|\.\.[^\\])[^\\]*\\\.\.\\') {
$filename = $filename -replace '\\([^\\.]|\.[^\\.]|\.\.[^\\])[^\\]*\\\.\.\\','\'
}
return $filename.TrimEnd('\')
}
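A hypothetical usage, prefixing the relative path with the current location yourself:
NormalizePath "$pwd\.\newdir\..\newdir\newfile.txt"
# -> C:\Current\Working\Directory\newdir\newfile.txt (assuming that current location)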
Check if the file exists before resolving:
if(Test-Path .\newdir\newfile.txt) { (Resolve-Path .\newdir\newfile.txt).Path }