Creating a sub-folder structure - PowerShell

Looking for some advice on how to use PowerShell (or some other means) to create some folders.
I have a list of around 120 products, each of which gets its own folder (I'm using a CSV to generate the folders).
I want each product folder to have the same subfolder structure. As shown below:
[Products]
├── [Product 1]
│   ├── [1. Datasheet]
│   │   ├── [1. Current]
│   │   ├── [2. Archived]
│   │   ├── [3. Legacy]
│   │   ├── [4. Draft]
│   │   └── [5. Resources]
│   ├── [2. Images]
│   ├── [3. Brochures]
│   ├── [4. Manuals]
│   └── [5. Software]
│
├── [Product 2]
│   ├── [1. Datasheet]
│   │   ├── [1. Current]
│   │   ├── [2. Archived]
│   │   ├── [3. Legacy]
│   │   ├── [4. Draft]
│   │   └── [5. Resources]
│   ├── [2. Images]
│   ├── [3. Brochures]
│   ├── [4. Manuals]
│   └── [5. Software]
:
:
Essentially the first layer of subfolders in each would be:
[1. Datasheet], [2. Images], [3. Brochures], [4. Manuals], [5. Software]
Inside each of these would be the following:
[1. Current], [2. Archived], [3. Legacy], [4. Draft], [5. Resources]
I don't mind doing this in stages; it's just that I don't know where to begin.

This could work:
$workingdir = 'c:\temp'
$products = Get-Content c:\temp\listofproducts.txt
$rootfolders = @(
    'Datasheet'
    'Images'
    'Brochures'
    'Manuals'
    'Software'
)
$subfolders = @(
    'Current'
    'Archived'
    'Legacy'
    'Draft'
    'Resources'
)
foreach ($product in $products)
{
    $rootcount = 0
    foreach ($root in $rootfolders)
    {
        $rootcount++
        $subcount = 0
        foreach ($sub in $subfolders)
        {
            $subcount++
            mkdir (Join-Path $workingdir ("$product\$rootcount. $root\$subcount. $sub"))
        }
    }
}
Or you could just create the first product folder with its full structure, then copy, paste, and rename it for each remaining product.
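That approach can be scripted too. A minimal sketch, assuming a hand-built template folder at C:\temp\Template that already contains the full subfolder structure (both paths here are placeholders):
# Replicate a prepared template folder once per product
$template = 'C:\temp\Template'
$products = Get-Content 'C:\temp\listofproducts.txt'
foreach ($product in $products)
{
    # -Recurse copies the whole subfolder tree under the new product name
    Copy-Item -Path $template -Destination (Join-Path 'C:\temp' $product) -Recurse
}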

Thanks for the input.
I managed to find a strategy that worked for me, so I'll share it in case it's of use to anyone else. I used a combination of the following three bits of code to achieve what I needed:
# Create Folders from CSV
$folder = "Z:\Products\"
$name = Import-Csv Z:\Products\names.csv
foreach ($line in $name)
{
    New-Item -Path $folder -Name $line.Name -Type Directory
}
The above code allowed me to make a big list of folders from a CSV list made in Excel.
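For reference, the CSV only needs a single Name column header with one folder name per row; hypothetical contents of names.csv:
Name
Product 1
Product 2
Product 3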
# Add a Subfolder
foreach ($folder in (gci 'Z:\Products' -Directory)) {
    New-Item -ItemType Directory -Path ($folder.FullName + "\subfolder")
}
The above code let me populate the list of folders with subfolders; I added them one at a time.
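If you'd rather not run that once per subfolder, a small variation (just a sketch, using the numbered names from the question) loops over an array instead:
# Create all five numbered subfolders in one pass
$subfolders = @('1. Datasheet', '2. Images', '3. Brochures', '4. Manuals', '5. Software')
foreach ($folder in (Get-ChildItem 'Z:\Products' -Directory)) {
    foreach ($sub in $subfolders) {
        New-Item -ItemType Directory -Path (Join-Path $folder.FullName $sub)
    }
}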
# Copy a file to folder
$folders = Get-ChildItem Z:\Products
foreach ($folder in $folders.Name) {
    Copy-Item -Path "Z:\Datasheet\changelog.txt" -Destination "Z:\Products\$folder" -Recurse
}
The above code allowed me to copy items out to each of the folder locations.
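To land the file in a specific subfolder rather than the product folder itself, the destination path just needs extending; a sketch using the '1. Datasheet\1. Current' level from the question:
# Copy a file into the same nested subfolder of every product
foreach ($folder in (Get-ChildItem 'Z:\Products' -Directory)) {
    Copy-Item -Path 'Z:\Datasheet\changelog.txt' -Destination (Join-Path $folder.FullName '1. Datasheet\1. Current')
}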

Related

wildcards in directory path for transferring the files from source to destination using powershell

I have to fetch files from an SFTP account directory "/sourcedir/Test/TRANS*/transferfiles.csv" using PuTTY and transfer them over to my local destination dir. I'm having trouble using the wildcard "TRANS*" in the directory path. How do I use multiple wildcards in the directory path? I'm getting the error: "/sourcedir/Test/TRANS*/*transferfiles*.csv": multiple-level wildcards unsupported.
TIA
[string]$TransferResults = & 'C:\Program Files\PuTTY\pscp.exe' -l 'username' -pw 'password' "username@$(IPaddress):/sourcedir/Test/TRANS*/*transferfiles*.csv" $Destination
I tried the solution that @Cpt.Whale suggested.
Output:
Listing directory /sourcedir/Test/
drwxr--r-- 1 - - 0 Apr 28 14:43 TRANS_whiteplain
drwxr--r-- 1 - - 0 Apr 28 14:43 TRANS_test_1
Code snippet to parse in a foreach loop:
foreach ($file in $parsedfolders) {
    [string]$fpath = $file.fullName
    [string]$TransferResults = & 'C:\Program Files\PuTTY\pscp.exe' -l 'username' -pw 'password' "username@$(IPaddress):$fpath/*transferfiles*.csv" $Destination
}
I get the error: unable to identify /transferfiles.csv: no such file or directory
I'll assume your output from pscp -ls looks like this, based on your comment:
Listing directory /sourcedir/Test/
drwxr--r-- 1 - - 0 Apr 28 14:43 TRANS_whiteplain
drwxr--r-- 1 - - 0 Apr 28 14:43 TRANS_test_1
drwxr--r-- 1 - - 0 Apr 28 14:43 TRANS test spaces
From that output, we can use regex to get the folder names:
# First, list the folder names in the upper-level folder
$folders = & 'C:\Program Files\PuTTY\pscp.exe' -l 'username' -pw 'password' -ls "username@10.0.0.1:/sourcedir/Test/"

# only lines starting with d; capture everything after the time "0:00"
$pattern = '^d.+\d:\d{2} (.*)'

# parse the output into folder names using the regex
$parsedFolders = foreach ($folder in $folders) {
    # trim whitespace
    [regex]::Match($folder, $pattern).Groups[1].Value.Trim() |
        # discard empty results
        Where { -not [string]::IsNullOrWhiteSpace($_) }
}
Now the parsed folders should be usable:
$parsedFolders
TRANS_whiteplain
TRANS_test_1
TRANS test spaces
So try and do your copy for each folder:
# Finally, copy files from each parsed folder to destination
$results = foreach ($parsedFolder in $parsedFolders) {
    & 'C:\Program Files\PuTTY\pscp.exe' -l 'username' -pw 'password' "username@10.0.0.1:/sourcedir/Test/$parsedFolder/*transferfiles*.csv" $Destination
}
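One caveat, as an assumption about pscp's remote-side parsing: the sample listing includes a folder with spaces (TRANS test spaces), and an unquoted remote path may get split by the remote shell. If that bites, a sketch of the same loop with the remote path quoted for the remote side:
# Same loop, with embedded quotes around the remote path
$results = foreach ($parsedFolder in $parsedFolders) {
    & 'C:\Program Files\PuTTY\pscp.exe' -l 'username' -pw 'password' "username@10.0.0.1:`"/sourcedir/Test/$parsedFolder/*transferfiles*.csv`"" $Destination
}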

Create multiple subfolders with different parent folders CSV - Powershell

I'm trying to create multiple subfolders for different parent directories. I have a CSV file with almost 700 folders.
I have 4 columns in my CSV:
Column A serial code (codigo)
Column B course (materia)
Column C faculty the course belongs to (facultad)
Column D career that have that course (carrera)
I have the following code for that
$materias = Import-Csv C:\Materias.csv
foreach ($EstMaterias in $materias)
{
$path = "C:\"
$codigo = $EstMaterias.codigo
$materia = $EstMaterias.materia
$facultad = $EstMaterias.facultad
$carrera = $EstMaterias.carrera
New-Item -Path("$path\$facultad\$carrera\$mateira") -Type directory
}
I'm not sure how to filter the subfolders so that the careers are created inside their correct faculty and the courses inside their correct career. With the code I run right now, all courses are created inside the faculties and inside all of the careers.
You don't use $codigo = $EstMaterias.codigo, and you have a typo here: "$path\$facultad\$carrera\$mateira". You typed $mateira instead of $materia.
$materias = Import-Csv C:\Materias.csv
$path = "C:"
foreach ($EstMaterias in $materias) {
    $codigo = $EstMaterias.codigo # What is this? You don't use it here.
    $materia = $EstMaterias.materia
    $facultad = $EstMaterias.facultad
    $carrera = $EstMaterias.carrera
    New-Item -Path "$path\$facultad\$carrera\$materia" -Type directory
}
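One assumption worth flagging about the data: if any rows repeat the same faculty/career/course combination, New-Item will throw because the folder already exists. Swapping the New-Item line for this makes the run idempotent:
# -Force returns the existing folder instead of erroring; $null = silences the output
$null = New-Item -Path "$path\$facultad\$carrera\$materia" -Type Directory -Force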

PS script not emailing properly formatted output

I have the following command running from a powershell script which gives me the info I need and is all nicely formatted in a table. The command is:
gcloud --project $gcpProject compute snapshots list --format="table[box,title='GCP Project:$gcpProject snapshots for $yesterday'](name,creationTimestamp,diskSizeGb,storageBytes)" --filter="creationTimestamp.date('%Y-%m-%d')=$yesterday"
I have a Start-Transcript -path $Log1 near the beginning of that script.
This is the output from the gcloud command that I get in PS:
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ GCP snapshots │
├─────────────────────────────────────────────────────┬───────────────────────────────┬──────────────┬───────────────┤
│ NAME │ CREATION_TIMESTAMP │ DISK_SIZE_GB │ STORAGE_BYTES │
├─────────────────────────────────────────────────────┼───────────────────────────────┼──────────────┼───────────────┤
│ snapshot1-us-central1-a-20191024022411-1ub96cw9 │ 2019-10-23T19:24:11.743-07:00 │ 500 │ 1104631168 │
│ snapshot2-us-east1-b-20191024020148-iusphq0h │ 2019-10-23T19:01:49.100-07:00 │ 900 │ 1129102848 │
└─────────────────────────────────────────────────────┴───────────────────────────────┴──────────────┴───────────────┘
This is just how I want the email recipient to see it when they open their email. But I can't figure out what I need to do in order to send this as the $body of the email and properly formatted. In Notepad++ it looks perfect too but not if I copy & paste it into a new email.
When I get the email the table is all gibberish (lines are made with a bunch of ????) and table is not formatted properly. I tried ConvertTo-Html and -BodyAsHtml but none of that worked.
Here's my code for sending the email:
If (Test-Path $Log1) {
    # Trim some things from the Log1 file that I don't want included in the email
    $body = @(Get-Content -Path $Log1).
        Where({ $_.Trim() -notin $pattern2 -and $_ -NotMatch "End time: 20.*|Start time: 20.*" })
    Send-MailMessage -SmtpServer $SmtpServer -Port $SmtpPort -Credential $Cred -UseSsl -Verbose -To $to -From $from -Subject $subject -Body ($body | Out-String)
}
The problem is most likely that when you paste the text from Notepad/Notepad++ into a new email, your client is not using a fixed width font. You can try changing your email font to something fixed-width, then pasting (keeping text-data only) or changing from an HTML formatted email to a plaintext one.
You will also want to make sure that when the email is sent from PowerShell it's either sent as plaintext (Outlook and other clients usually render plaintext emails in a fixed-width font by default), or that you are selecting a well-known fixed-width font for your text in the HTML body. But at that point, you might as well just send an HTML-formatted table and plug in the data that way.
If this is used for any sort of automation though I'd highly recommend just putting the data in a CSV for ease of parsing.
I was able to resolve it by formatting the output from gcloud as csv then converting it to html with:
$body = Import-Csv c:\report.csv | ConvertTo-Html -Head $a -Body "<H2>My Header</H2>"
where $a sets the HTML head information:
$a = "<style>"
$a = $a + "BODY{background-color:peachpuff;}"
$a = $a + "TABLE{border-width: 1px;border-style: solid;border-color: black;border-collapse: collapse;}"
$a = $a + "TH{border-width: 1px;padding: 0px;border-style: solid;border-color: black;background-color:red}"
$a = $a + "TD{border-width: 1px;padding: 0px;border-style: solid;border-color: black;background-color:green}"
$a = $a + "</style>"
Then for send-MailMessage I simply added -BodyAsHtml
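Putting the pieces together, a minimal sketch of the final send, assuming the same $a style block and SMTP variables as earlier in the thread:
# Build the HTML body from the CSV report, then send it as HTML
$body = Import-Csv c:\report.csv | ConvertTo-Html -Head $a -Body "<H2>My Header</H2>" | Out-String
Send-MailMessage -SmtpServer $SmtpServer -Port $SmtpPort -Credential $Cred -UseSsl -To $to -From $from -Subject $subject -Body $body -BodyAsHtml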

robocopy /xd ignores a list of directories

I am putting together a Powershell script that syncs a source code tree into another directory. It creates a list of directories to ignore, per the output of robocopy /?:
# ----------------------------------------------------------------------------
# Source file directories
# ----------------------------------------------------------------------------
$reactjs = "C:\Users\me\Projects\prj\reactjs"
# ----------------------------------------------------------------------------
# Target directories
# ----------------------------------------------------------------------------
$synergy = "C:\Users\me\Projects\sync\reactjs"
$synReactjs = $synergy + "\prj-JS"
# ----------------------------------------------------------------------------
# Directories to be ignored
# ----------------------------------------------------------------------------
$reactIgnores = @(
    $reactjs + "\.git",
    $reactjs + "\node_modules",
    $reactjs + "\build"
)
# ----------------------------------------------------------------------------
# Copy/sync files
# ----------------------------------------------------------------------------
Robocopy /l $reactjs $synReactjs /mir /xd $reactIgnores
However, robocopy immediately descends into the .git directory, though the output header looks like it parsed my switches correctly:
-------------------------------------------------------------------------------
ROBOCOPY :: Robust File Copy for Windows
-------------------------------------------------------------------------------
Started : Wednesday, October 31, 2018 12:00:32 PM
Source : C:\Users\me\Projects\prj\reactjs\
Dest : C:\Users\me\Projects\sync\reactjs\prj-JS\
Files : *.*
Exc Dirs : C:\Users\me\Projects\prj\reactjs\.git C:\Users\me\Projects\prj\reactjs\node_modules C:\Users\me\Projects\prj\reactjs\build
Options : *.* /L /S /E /DCOPY:DA /COPY:DAT /Z /R:1000000 /W:30
------------------------------------------------------------------------------
New Dir 8 C:\Users\me\Projects\prj\reactjs\
New File 634 .eslintrc.json
New File 206 .gitignore
New File 55 .npmrc
New File 521255 package-lock.json
New File 1015 package.json
New File 1001 package.json.react16
New File 10391 README
New File 1639 README.md
New Dir 9 C:\Users\me\Projects\prj\reactjs\.git\
New File 135 COMMIT_EDITMSG
New File 319 config
New File 73 description
New File 125 FETCH_HEAD
<...>
What am I missing with the /xd switch? It works fine with a single directory.
Edit: I included the PowerShell tag because /xd does exclude the listed directories if I include them on the command line explicitly. Passing them via the array does not work. This results in an Exc Dirs line of (compare to above):
Exc Dirs : C:\Users\me\Projects\prj\reactjs\.git
C:\Users\me\Projects\prj\reactjs\node_modules
C:\Users\me\Projects\prj\reactjs\build
In
$reactIgnores = @(
    $reactjs + "\.git",
    $reactjs + "\node_modules",
    $reactjs + "\build"
)
try putting the full path.
This array worked for me, embedding double quotes inside single quotes. You don't need the full path, and I used a single /XD $eXcludeDirectories argument:
$eXcludeDirectories = @(
    '"Folder01"',
    '"Folder02"',
    '"Folder03"'
)
Hopefully this will help someone else.
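For completeness, a sketch of how that array plugs into the original command, using the asker's folder names rather than the Folder01 placeholders:
$eXcludeDirectories = @(
    '".git"',
    '"node_modules"',
    '"build"'
)
# PowerShell expands the array into one argument per element after /XD
Robocopy $reactjs $synReactjs /mir /xd $eXcludeDirectories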

How do I create a function that accepts multiple argument types from pipeline and command line?

I'm trying to write a function that takes multiple arguments, which can come either from the command line, or from the pipeline. The arguments can be strings or directory objects. The idea is that any of the following invocations should work:
Test-VEnv '.\MyPath', '.\AnotherPath'
Test-VEnv (dir)
'MyPath', 'AnotherPath' | Test-VEnv
dir | Test-VEnv
The following code almost works:
function Test-VEnv {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory=$true, Position=0,
                   ValueFromPipeline=$true,
                   ValueFromPipelineByPropertyName=$true)]
        [Alias('FullName')]
        [String[]]$Path
    )
    process {
        foreach ($P in $Path) {
            ...
        }
    }
}
It handles strings both from the pipeline and the command argument, and handles directory objects from the pipeline (via ValueFromPipelineByPropertyName and the FullName alias). But it doesn't handle directory objects on the command line, so
dir | Where-Object { Test-VEnv $_ }
fails, as it converts the directory objects to strings, which uses the Name property rather than FullName, and the subsequent code fails.
Can anyone tell me how to achieve what I want?
I am aware that even if I can get this to work, it may not be a particularly good design. But as far as I can tell, it's how the built-in Test-Path works, so I want to try following standard behaviour before I invent my own...
Since your parameter type is [string[]], PowerShell coerces the FileSystemInfo object into a string when you are not using the pipeline (Test-VEnv $_). If you call the ToString() method of either a System.IO.FileInfo or System.IO.DirectoryInfo object you'll see this. When you use the pipeline, it binds the FullName alias, giving you the full path.
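To see the coercion for yourself, a quick demo (PerfLogs here is just whatever dir C:\ happens to return first on this machine):
# DirectoryInfo.ToString() returns the path string the object was created with;
# for items produced by Get-ChildItem that is only the folder name
PS C:\> (dir C:\)[0].ToString()
PerfLogs
PS C:\> (dir C:\)[0].FullName
C:\PerfLogs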
You can see what PowerShell is doing to bind the input object using Trace-Command. Here is an example of how to use it:
trace-command -name parameterbinding -expression {(dir C:\)[0] | ? {Test-VEnv $_}} -pshost
Here is the important part of the output:
BIND arg [PerfLogs] to parameter [Path]
Executing DATA GENERATION metadata: [System.Management.Automation.ArgumentTypeConverterAttribute]
result returned from DATA GENERATION: System.String[]
COERCE arg to [System.String[]]
Parameter and arg types the same, no coercion is needed.
BIND arg [System.String[]] to param [Path] SUCCESSFUL
Test-Path does the same thing. Take a look at these three examples:
PS C:\Users\Andy> Test-Path (dir C:\)[0]
False
PS C:\Users\Andy> (dir C:\)[0] | Test-Path
True
PS C:\> Test-Path (dir C:\)[0]
True
Since my PWD is not C:\ I get FALSE because the DirectoryInfo object is converted to string (ToString()) which only gives the folder name. This is because the pipeline wasn't used.
Since the pipeline is used it works because it is binding to PsPath with this parameter:
[Parameter(ParameterSetName='LiteralPath', Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
[Alias('PSPath')]
[string[]]
${LiteralPath},
Because the current directory contains that folder, the folder's bare name resolves relative to it and Test-Path succeeds.
You might try the alias PsPath for your binding. This is what Test-Path uses:
param (
    [Parameter(Mandatory=$true, Position=0,
               ValueFromPipeline=$true,
               ValueFromPipelineByPropertyName=$true)]
    [Alias('PsPath')]
    [String[]] $Path
)
process {
    foreach ($P in $Path) {
        Get-Item $P
    }
}
Some tests:
Set-Location C:\
Write-Host 1
Test-VEnv '.\Windows', '.\Program Files'
Write-Host 2
Test-VEnv (dir)
Write-Host 3
'Windows', 'Program Files' | Test-VEnv
Write-Host 4
dir | Test-VEnv
Output:
1
Directory: C:\
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 3/14/2012 3:41 AM Windows
d-r-- 3/24/2012 7:46 PM Program Files
2
d---- 2/18/2012 4:32 AM PerfLogs
d-r-- 3/24/2012 7:46 PM Program Files
d-r-- 3/25/2012 4:49 PM Program Files (x86)
d---- 3/9/2012 9:57 PM Python27
d-r-- 3/4/2012 8:11 PM Users
d---- 3/14/2012 3:41 AM Windows
-a--- 3/4/2012 8:45 PM 1024 .rnd
3
d---- 3/14/2012 3:41 AM Windows
d-r-- 3/24/2012 7:46 PM Program Files
4
d---- 2/18/2012 4:32 AM PerfLogs
d-r-- 3/24/2012 7:46 PM Program Files
d-r-- 3/25/2012 4:49 PM Program Files (x86)
d---- 3/9/2012 9:57 PM Python27
d-r-- 3/4/2012 8:11 PM Users
d---- 3/14/2012 3:41 AM Windows
-a--- 3/4/2012 8:45 PM 1024 .rnd
@Andy gives some great information specifically addressing points in your question. My answer here is more of a supplement considering the broader implications. It probably only deserves to be a comment, but its length prevents me from posting it as just a comment...
I recently examined the question of pipeline vs. direct input in Powershell with a specific goal towards making these input streams symmetric with respect to all classes of inputs and with respect to what defaults are applied. There are, by my reckoning, six equivalence classes of input to consider:
no input
null
empty
scalar
list of normal values
list of mixed values (i.e. some null or empty)
What one would typically expect when each of these inputs is sent to a function would be this corresponding list:
default value
null
empty
scalar
list of normal values
list of mixed values (i.e. some null or empty)
That is, with no input supplied the default value is used; otherwise the given value is used. This sounds almost trivial, practically a tautology, but there are some subtleties. Consider, for example, what does it mean to supply no input via the pipeline? Is it null or an empty collection? I contend the latter for, among other reasons, it allows the symmetry between streams I mentioned above. Furthermore, how you write both your function signature and your function body makes sometimes surprising impacts on some or all of these input classes with one or the other input stream. Thus, I further contend that there is a lot more to this "trivial" consideration than meets the eye at first glance. So much so that I wrote extensively about it in the article
Down the Rabbit Hole- A Study in PowerShell Pipelines, Functions, and Parameters,
published on Simple-Talk.com. Included with the article is a wallchart showing a table of the six equivalence classes of input and what you get for each with different function templates.
Does it work if you change the type of $path from String[] to [System.IO.DirectoryInfo[]]?
function Install-PathTransformation
{
    [CmdletBinding()]
    param()
    if (-not $script:my_pathtransformation_types) {
        $script:my_pathtransformation_types = Add-Type -TypeDefinition @"
using System;
using System.IO;
using System.Management.Automation;

public class ValidPathTransformationAttribute : ArgumentTransformationAttribute {
    public bool Resolve {
        get;
        set;
    }
    public override Object Transform(EngineIntrinsics engineIntrinsics, Object inputObject) {
        PSObject psobj = inputObject as PSObject;
        if (psobj != null)
            inputObject = psobj.BaseObject;
        if (inputObject == null)
            return inputObject;
        FileSystemInfo test1 = inputObject as FileSystemInfo;
        if (test1 != null)
            return test1.FullName; // no need for further checks, path should be qualified
        PathInfo test2 = inputObject as PathInfo;
        if (test2 != null)
            return test2.Path; // no need for further checks, path should be qualified
        string test3 = inputObject as string;
        if (test3 == null)
            test3 = (string)LanguagePrimitives.ConvertTo(inputObject, typeof(string));
        if (Resolve)
            test3 = engineIntrinsics.SessionState.Path.GetUnresolvedProviderPathFromPSPath(test3);
        else if (!engineIntrinsics.SessionState.Path.IsValid(test3))
            throw new ArgumentTransformationMetadataException("Invalid path value: " + test3);
        return test3;
    }
}
"@
    }
    return $script:my_pathtransformation_types
}
Install-PathTransformation
Install-PathTransformation
function A(
    [parameter(Mandatory=$false, ValueFromPipeline=$true)]
    [ValidPathTransformation(Resolve=$true)]
    [string] # optional, the transformation always returns a string
    $z) {
    Process {
        Write-Host $("{0}: {1}" -f $z.GetType().FullName, $z)
    }
}

& {
    'mumu', 10, 10.5, ""
    dir $env:Temp | select -First 5
} | A
How it works:
1. Create a transformation attribute to process the parameter value.
2. During transformation, if the value is a FileSystemInfo or PathInfo we take the path within; if not, we convert the value to a string and make sure the "path" is valid (resolving it if requested).
3. When applied, the result of the transformation is always a string.
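As a quick sanity check (output format assumed), calling A directly with a DirectoryInfo shows the transformation running before the [string] cast:
# The attribute converts the DirectoryInfo to its FullName, so $z arrives as a full path
A (Get-Item $env:Temp)
# System.String: C:\Users\me\AppData\Local\Temp  (path will vary per machine)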