Hello Stack Overflow community. For a security class, I'm trying to replicate a "bad" USB being plugged in and then sending a generated text file to a recipient via Outlook.
I will say I am a PowerShell novice by all means, so this may be something simple that I just don't understand.
I cannot get the relative path for $var1 to work.
When I use an absolute path to the generated file it sends the attachment fine.
I'm trying to use Get-ChildItem e:\ -filter "*.txt" to populate this variable dynamically, but when it gets to .Attachments.Add($var1) it breaks with "Error while invoking Add. Could not find member."
Meanwhile, calling .Count on it does return 1, signifying that the file is there.
I could not find a good indication of what this error means; googling it turns up Microsoft documentation but nothing related to PowerShell programming. I've tried wrapping the Get-ChildItem statement in quotes and various other syntactical tweaks. I haven't found a good alternative to the way we've decided to send the email so far, and securing SMTP for this isn't practical as it wouldn't fit the scenario.
# Create the text file on the flash drive
New-Item -Path E:\ -Name "DynamicNameOfPC" -ItemType "file" -Value "Test"
# Sanity check: how many .txt files are on the drive?
(Get-ChildItem e:\ -filter "*.txt").Count
$var1 = Get-ChildItem e:\ -filter "*.txt"
# Attach to Outlook via COM and give it a moment to start
$OL = New-Object -ComObject outlook.application
Start-Sleep 5
#Create Item
$mItem = $OL.CreateItem("olMailItem")
$mItem.To = "Test@test.com"
$mItem.Subject = "Testing Script"
$mItem.Body = "Testing"
$mItem.Attachments.Add($var1)
$mItem.Send()
Through analysis with a random user on Discord, we realized that Get-ChildItem was returning the file name but not the full path.
Prepending the E:\ path to $var1 got it working perfectly.
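For anyone replicating this, a minimal sketch of that fix is below; the original poster didn't share the exact line, so treat it as an illustration (it uses FullName rather than prepending E:\, which achieves the same thing). Outlook's Attachments.Add wants a full path string rather than the bare FileInfo object:
# Expand to the full path before handing it to Outlook
$var1 = (Get-ChildItem e:\ -Filter "*.txt" | Select-Object -First 1).FullName
$mItem.Attachments.Add($var1)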
I'm writing a PowerShell script to make several directories and copy a bunch of files together to "compile" some technical documentation. I'd like to generate a manifest of the files and directories as part of the readme file, and I'd like PowerShell to do this, since I'm already working in PowerShell to do the "compiling".
I've done some searching already, and it seems that I need to use the cmdlet "Get-ChildItem", but it's giving me too much data, and I'm not clear on how to format and prune out what I don't want to get my desired results.
I would like an output similar to this:
Directory
    file
    file
    file
Directory
    file
    file
    file
    Subdirectory
        file
        file
        file
or maybe something like this:
+---FinGen
| \---doc
+---testVBFilter
| \---html
\---winzip
In other words, some kind of basic visual ASCII representation of the tree structure with the directory and file names and nothing else. I have seen programs that do this, but I am not sure if PowerShell can do this.
Can PowerShell do this? If so, would Get-ChildItem be the right cmdlet?
In your particular case, what you want is Tree /f. You have a comment asking how to strip out the leading part about the volume, serial number, and drive letter. That is possible by filtering the output before you send it to a file.
$Path = "C:\temp"
Tree $Path /F | Select-Object -Skip 2 | Set-Content C:\temp\output.txt
Tree's output in the above example is a System.Array that we can manipulate; Select-Object -Skip 2 removes the first two lines containing that data. Also, if Keith Hill were around he would recommend the PowerShell Community Extensions (PSCX), which contain the cmdlet Show-Tree. Download them if you are curious; there is lots of powerful stuff there.
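If you'd rather stay in pure PowerShell and produce the plain indented listing the question describes, a minimal recursive sketch with Get-ChildItem could look like the following; the function name and four-space indent are my own choices, not part of any module:
function Show-IndentedTree {
    param(
        [string]$Path = ".",
        [int]$Depth = 0
    )
    $indent = "    " * $Depth
    foreach ($item in Get-ChildItem -LiteralPath $Path) {
        # Print each item indented to its depth, then recurse into directories
        "$indent$($item.Name)"
        if ($item.PSIsContainer) {
            Show-IndentedTree -Path $item.FullName -Depth ($Depth + 1)
        }
    }
}

Show-IndentedTree -Path C:\temp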
The following script will show the tree in a window; it can be added to any form present in the script.
function tree {
    [void][System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
    [void][System.Reflection.Assembly]::LoadWithPartialName("System.Drawing")

    # create Window
    $Form = New-Object System.Windows.Forms.Form
    $Form.Text = "Files"
    $Form.Size = New-Object System.Drawing.Size(390, 390)

    # create Treeview-Object
    $TreeView = New-Object System.Windows.Forms.TreeView
    $TreeView.Location = New-Object System.Drawing.Point(48, 12)
    $TreeView.Size = New-Object System.Drawing.Size(290, 322)
    $Form.Controls.Add($TreeView)

    ###### Add Nodes to Treeview
    $rootnode = New-Object System.Windows.Forms.TreeNode
    $rootnode.Text = "Root"
    $rootnode.Name = "Root"
    [void]$TreeView.Nodes.Add($rootnode)

    # Read the items under the target directory into an array
    $array = @(Get-ChildItem -Path D:\personalWorkspace\node)
    Write-Host $array
    foreach ($obj in $array) {
        Write-Host $obj
        $subnode = New-Object System.Windows.Forms.TreeNode
        $subnode.Text = $obj
        [void]$rootnode.Nodes.Add($subnode)
    }

    # Show Form // this always needs to be at the bottom of the script!
    $Form.Add_Shown({ $Form.Activate() })
    [void]$Form.ShowDialog()
}
tree
In Windows, navigate to the directory of interest.
Shift + right-click -> Open PowerShell window here.
tree /f > tree.log
(Piping Get-ChildItem into tree is unnecessary; tree.exe scans the current directory itself.)
The best and clearest way for me is:
PS P:\> Start-Transcript -path C:\structure.txt -Append
PS P:\> tree c:\test /F
PS P:\> Stop-Transcript
You can use the command tree <yourDir> >> myfile.txt; this will write a tree-like structure of the directory to "myfile.txt". (Piping Get-ChildItem -Path <yourDir> into tree has no effect, since tree builds the listing itself.)
Create a file, let's say at C:\randomname\file.txt.
Now run the following script via PowerShell:
$shell = new-object -com shell.application
$folder = $shell.NameSpace("C:\randomname")
$folder.Items() | where {$_.Name -eq "file.txt"}
Observe that no output is produced, which is rather unexpected.
Any idea how to resolve this situation in a reasonable manner other than modifying Windows settings?
EDIT:
To prevent confusion: this is just a stripped-down version of my actual problem. The reason why I am using shell.application and not Get-ChildItem is that my randomname folder is actually zipped, i.e. I have randomname.zip, and my actual code looks like this:
$shell = new-object -com shell.application
$zip = $shell.NameSpace("C:\randomname.zip")
$folder = $zip.Items() | where {$_.Name -eq "randomname"}
$folder.GetFolder.Items() | where {$_.Name -eq "file.txt"}
The FolderItem.Name return value depends on a particular Windows setting. Try the following:
Open Control Panel,
Folder Options, View tab,
Uncheck Hide extensions for known file types.
Re-run the script and you will see the expected output:
Application : System.__ComObject
Parent : System.__ComObject
Name : file.txt
Path : C:\randomname\file.txt
...
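If you'd rather check that setting from PowerShell than through the Control Panel UI, it lives in the registry; a quick read-only sketch (a value of 1 means extensions are hidden, 0 means they are shown):
# Inspect Explorer's "hide extensions" setting for the current user
Get-ItemProperty -Path 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced' -Name HideFileExt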
I was trying to write a portable script, but after finding out how Name works this seems rather hard, as I have no control over our customers' Windows settings and there is nothing like FullName for FolderItem, so I can't figure out a reliable way around it.
EDIT:
Based on the suggestion from Nick Sedgewick that .Path, unlike .Name, always returns the filename with its extension, I was able to create a working workaround which does not depend on Windows settings and looks like this:
$shell = new-object -com shell.application
$folder = $shell.NameSpace("C:\")
$folder.Items() | where {(split-path $_.Path -leaf) -eq "file.txt"}
A namespace item has a Path property, which returns the full path and filename for files, and it always includes the filename extension, whether or not the user has 'hide filename extensions' set.
So, use Path instead of Name, and either write a function you can pass $_.Path to, which then extracts the filename part, or use PowerShell's equivalent of the LIKE operator (-like).
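A hedged one-liner along those lines, reusing the folder and file name from the question:
$shell  = New-Object -ComObject shell.application
$folder = $shell.NameSpace("C:\randomname")
# Filter on the Path property, which always carries the extension
$folder.Items() | Where-Object { $_.Path -like "*\file.txt" }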
This should be a simple task, but I have seen several attempts on how to get the path to the directory where the executed cmdlet is located with mixed success. For instance, when I execute C:\temp\myscripts\mycmdlet.ps1 which has a settings file at C:\temp\myscripts\settings.xml I would like to be able to store C:\temp\myscripts in a variable within mycmdlet.ps1.
This is one solution which works (although a bit cumbersome):
$invocation = (Get-Variable MyInvocation).Value
$directorypath = Split-Path $invocation.MyCommand.Path
$settingspath = $directorypath + '\settings.xml'
Another suggestion was this solution, which only works in our test environment:
$settingspath = '.\settings.xml'
I like the latter approach a lot and prefer it to having to parse the filepath as a parameter each time, but I can't get it to work on my development environment. What should I do? Does it have something to do with how PowerShell is configured?
Yes, that should work. But if you need to see the absolute path, this is all you need:
(Get-Item .).FullName
The reliable way to do this is, just as you showed, $MyInvocation.MyCommand.Path.
Relative paths are resolved against $pwd in PowerShell, against the current directory for an application, and against the current working directory for a .NET API.
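A small illustration of the difference, using the settings.xml example from the question (run this at the top level of a script):
# The script's own folder, independent of where it was invoked from
$scriptDir    = Split-Path $MyInvocation.MyCommand.Path
$settingsPath = Join-Path $scriptDir 'settings.xml'
# A relative path, by contrast, resolves against the current location ($pwd)
$relativePath = Join-Path $pwd 'settings.xml'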
PowerShell v3+:
Use the automatic variable $PSScriptRoot.
The easiest method seems to be to use the following predefined variable:
$PSScriptRoot
about_Automatic_Variables and about_Scripts both state:
In PowerShell 2.0, this variable is valid only in script modules (.psm1). Beginning in PowerShell 3.0, it is valid in all scripts.
I use it like this:
$MyFileName = "data.txt"
$filebase = Join-Path $PSScriptRoot $MyFileName
You can also use:
(Resolve-Path .\).Path
The part in parentheses returns a PathInfo object.
(Available since PowerShell 2.0.)
Try :
(Get-Location).path
or:
($pwd).path
Path is often null. This function is safer.
function Get-ScriptDirectory
{
    $Invocation = (Get-Variable MyInvocation -Scope 1).Value
    if ($Invocation.PSScriptRoot)
    {
        $Invocation.PSScriptRoot
    }
    elseif ($Invocation.MyCommand.Path)
    {
        Split-Path $Invocation.MyCommand.Path
    }
    else
    {
        $Invocation.InvocationName.Substring(0, $Invocation.InvocationName.LastIndexOf("\"))
    }
}
Get-Location will return the current location:
$Currentlocation = Get-Location
I like the one-line solution :)
$scriptDir = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent
Try this:
$WorkingDir = Convert-Path .
In PowerShell 3 and above you can simply use
$PSScriptRoot
If you just need the name of the current directory, you could do something like this:
((Get-Location) | Get-Item).Name
Assuming you are working from C:\Temp\Location\MyWorkingDirectory>
Output
MyWorkingDirectory
Most answers don't work when debugging in the following IDEs:
PS-ISE (PowerShell ISE)
VS Code (Visual Studio Code)
Because in those, $PSScriptRoot is empty, and Resolve-Path .\ (and similar) will return incorrect paths.
Freakydinde's answer is the only one that resolves those situations, so I up-voted it, but I don't think the Set-Location in that answer is really what is desired. So I fixed that and made the code a little clearer:
$directorypath = if ($PSScriptRoot) { $PSScriptRoot } `
elseif ($psise) { split-path $psise.CurrentFile.FullPath } `
elseif ($psEditor) { split-path $psEditor.GetEditorContext().CurrentFile.Path }
For what it's worth, as a single-line solution, the below works for me.
$currFolderName = (Get-Location).Path.Substring((Get-Location).Path.LastIndexOf("\")+1)
The +1 at the end is to skip past the backslash.
Thanks to the posts above using the Get-Location cmdlet.
This function sets the prompt location to the script path, handling the different ways of getting the script path in VS Code, PowerShell ISE, and the console:
function Set-CurrentLocation
{
    $currentPath = $PSScriptRoot # AzureDevOps, PowerShell
    if (!$currentPath) { $currentPath = Split-Path $psEditor.GetEditorContext().CurrentFile.Path -ErrorAction SilentlyContinue } # VSCode
    if (!$currentPath) { $currentPath = Split-Path $psISE.CurrentFile.FullPath -ErrorAction SilentlyContinue } # PsISE
    if ($currentPath) { Set-Location $currentPath }
}
You would think that using '.\' as the path means it is the invocation path. But not all the time: for example, if you use it inside a job ScriptBlock, it might point to %profile%\Documents.
This is what I came up with. It's an array including multiple methods of finding a path; it uses the current location, filters out null or empty results, and returns the first non-null value.
@((
    ($MyInvocation.MyCommand.Module.ModuleBase),
    ($PSScriptRoot),
    (Split-Path -Parent -Path $MyInvocation.MyCommand.Definition -ErrorAction SilentlyContinue),
    (Get-Location | Select-Object -ExpandProperty Path)
) | Where-Object { $_ })[0]
To only get the current folder name, you can also use:
(Split-Path -Path (Get-Location) -Leaf)
To expand on @Cradle's answer: you could also write a multi-purpose function that will get you the same result per the OP's question:
Function Get-AbsolutePath {
    [CmdletBinding()]
    Param(
        [Parameter(
            Mandatory = $false,
            ValueFromPipeline = $true
        )]
        [String]$relativePath = ".\"
    )

    if (Test-Path -Path $relativePath) {
        return (Get-Item -Path $relativePath).FullName -replace "\\$", ""
    } else {
        Write-Error -Message "'$relativePath' is not a valid path" -ErrorId 1 -ErrorAction Stop
    }
}
I had similar problems, and they caused me a lot of trouble, since I write programs in PowerShell (full end-user GUI applications) and have a lot of files and resources I need to load from disk.
From my experience, using . to represent the current directory is unreliable. It should represent the current working directory, but it often does not.
It appears that PowerShell saves the location from which PowerShell was invoked inside the . path.
To be more precise, when PowerShell is first started, it starts, by default, inside your home user directory. That is usually the directory of your user account, something like C:\USERS\YOUR USER NAME.
After that, PowerShell changes directory either to the directory from which you invoked it, or to the directory where the script you are executing is located, before presenting you with the PowerShell prompt or running the script. But that happens after the PowerShell application itself originally starts inside your home user directory.
And . represents that initial directory inside which PowerShell started. So . only represents the current directory if you invoked PowerShell from the wanted directory. If you later change directory in PowerShell code, the change does not appear to be reflected in . in every case.
In some cases . represents the current working directory, and in others the directory from which PowerShell (itself, not the script) was invoked, which can lead to inconsistent results.
For this reason I use an invoker script: a PowerShell script with a single command inside:
powershell
That ensures that PowerShell is invoked from the wanted directory and thus makes . represent the current directory. But it only works if you do not change directory later in the PowerShell code.
In the case of a script, I use an invoker script similar to the one above, except that it contains a -File option:
powershell -File "DRIVE:\PATH\SCRIPT NAME.PS1"
That ensures that PowerShell is started inside the current working directory.
Simply clicking on a script invokes PowerShell from your home user directory, no matter where the script is located.
That results in the current working directory being the directory where the script is located, but the PowerShell invocation directory being C:\USERS\YOUR USER NAME, with . returning one of these two directories depending on the situation, which is ridiculous.
But to avoid all this fuss and the invoker script, you can simply use either $PWD or $PSScriptRoot instead of . to represent the current directory, depending on whether you want the current working directory or the directory where the script is located.
And if you, for some reason, want to retrieve the other of the two directories which . returns, you can use $HOME.
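A quick way to see which directory each of these refers to is to drop something like this into a script (assumes PowerShell 3 or later, so that $PSScriptRoot is populated):
# Each automatic variable can point somewhere different depending on how the script was started
"Working directory : $PWD"
"Script directory  : $PSScriptRoot"
"Home directory    : $HOME"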
I personally just keep an invoker script inside the root directory of the apps I develop with PowerShell, which invokes my main app script, and I simply remember never to change the current working directory inside my app's source code, so I never have to worry about this and can use . to represent the current directory and support relative file addressing in my applications without any problems.
This should work in newer versions of PowerShell (newer than version 2).
I have a large amount of PowerShell code that I've written over the course of a long project; these scripts perform a wide variety of functions, and most of them depend in some way on others within the scope of the project. Right now, the work is made up of a couple of files containing many functions each. Originally, in order to work with these scripts, all the script files were sort of haphazardly dot-sourced into the environment.
However, I've learned recently that PowerShell 2.0 introduces modules, and I would like to deploy these scripts together that way. Since a module's contents are all loaded together, I would like to split apart my files so that each script has its own file, in order to aid source control. However, I'm a little unclear about the connections between the scripts now.
I've done some testing, and it seems it's OK to move the Export-ModuleMember command for each function into the individual .ps1 files; this feels a bit like functions declaring their own visibility, like public and private scoping in C#. However, after doing that my .psm1 file contains nothing but this:
Get-ChildItem -recurse $psScriptRoot | where { $_.Extension -eq ".ps1" } | foreach { . $_.FullName }
Does that seem right? All the scripts are being dot sourced, and all the scripts refer to each other under that assumption. Should they instead refer to each other using their locations relative to $psScriptRoot?
Is there a way different than both these ways? Can anyone offer advice? I don't know much about these yet.
I've seen a similar technique, where each .ps1 file contains one function and the functions are dot-sourced in the .psm1 file, used in the WPK and PSRemoteRegistry modules.
This line is from the PSRemoteRegistry module:
Get-ChildItem -Path $PSScriptRoot\*.ps1 | Foreach-Object{ . $_.FullName }
I would say I like this technique over having one giant script file of functions.
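As a hedged sketch of what the whole .psm1 could look like if you keep the exports in the module file rather than in each .ps1 (the function names below are hypothetical placeholders):
# Dot-source every script in the module folder, then export only the public functions
Get-ChildItem -Path $PSScriptRoot\*.ps1 | ForEach-Object { . $_.FullName }
Export-ModuleMember -Function Get-Widget, Set-Widget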
You could also look at creating a manifest (I don't actually know whether you need a .psm1 with a .psd1).
Here was my use of a manifest:
New-ModuleManifest `
    -Path Fiddler.psd1 `
    -Author "Niklas Goude" `
    -CompanyName "http://www.powershell.nu/" `
    -ModuleVersion 1.0 `
    -Description "Module from http://www.powershell.nu/2011/03/14/fiddler/ - psd1 created by Matt @ amonskeysden.tumblr.com" `
    -FormatsToProcess @() `
    -RequiredAssemblies @("Fiddler.dll") `
    -NestedModules @() `
    -Copyright "" `
    -ModuleToProcess "Fiddler.psm1" `
    -TypesToProcess @() `
    -FileList @("Fiddler.psm1","Fiddler.dll")
I think the answer to your question would be to include your file list in the FileList parameter there.
I wrote some of my findings (including links to MS resources) here:
http://amonkeysden.tumblr.com/post/5127684898/powershell-and-fiddler