PowerShell - Specifying a variable path for usernames

I have tried searching for this, but I do not think I am using the correct words.
I am creating a script that needs to work in many environments. One of the specifications is that I need to be able to delete a certain directory in each user's AppData. The problem is that I do not know how to set a dynamic path.
I.e. C:\Users\User1\AppData\Local\X compared to C:\Users\User2\AppData\Local\X
How would I get and specify a series of user accounts on the local machine, ideally without polling AD?

Check out the environment variables for paths to local resources using Get-ChildItem like so:
Get-ChildItem -Path env:
This will show you all environment variables and their values without the need to query Active Directory; the one you want for AppData\Local is called LOCALAPPDATA.
To use an environment variable in a script, the syntax is $ENV:<Name>, so for LOCALAPPDATA you would use $ENV:LOCALAPPDATA.
Play around with the environment variables and start coding your script. If you have additional questions, you can then post your script and we can contribute a more specific answer to help you out :)
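Note that $ENV:LOCALAPPDATA only resolves for the user running the script. To cover every profile on the machine without touching AD, a minimal sketch (assuming the default C:\Users profile root and the folder X from the question) could look like this:
# Enumerate profile folders under the default profile root, skipping the Public profile
$profileRoot = 'C:\Users'
Get-ChildItem -Path $profileRoot -Directory |
    Where-Object { $_.Name -ne 'Public' } |
    ForEach-Object {
        # Build the per-user path to the target folder (X is a placeholder)
        $target = Join-Path $_.FullName 'AppData\Local\X'
        if (Test-Path $target) {
            # -WhatIf previews the deletion; remove it once verified
            Remove-Item -Path $target -Recurse -Force -WhatIf
        }
    }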

To get user names from AD, the AD module must be installed on the system from which you run the queries.
# Export user names to a CSV file and use it as the source.
Note that this command will export all users from your AD.
Get-ADUser -Filter * | Select-Object Name | Export-Csv C:\users.csv
$users = Import-Csv C:\users.csv
foreach ($user in $users.Name) {
    $path = "C:\Users\$user\AppData\Local\X"
    if ($(try { Test-Path $path.Trim() } catch { $false })) {
        Remove-Item $path -Force -Recurse
    }
    else {
        Write-Host "Path not found"
    }
}

Related

Is there a way to check if a program is installed just by filename in PowerShell?

I am trying to create a PowerShell script to auto-install all .msi and .exe files in a directory silently. However, while doing this I want to check whether any of the programs are already installed.
I know I can get all installed programs on the system like below:
$32bit_softwares = Get-ItemProperty HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* |
    Select-Object DisplayName, DisplayVersion, Publisher, InstallDate
$64bit_softwares = Get-ItemProperty HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\* |
    Select-Object DisplayName, DisplayVersion, Publisher, InstallDate
$all_softwares = $32bit_softwares + $64bit_softwares
and I can get the filenames of the files in the directory like below:
$directoryRead = Read-Host -Prompt "enter directory"
$fileNames = Get-ChildItem $directoryRead -Recurse -Include *.exe, *.msi | ForEach-Object { $_.Name }
How can I compare these two in a loop? Something like:
$all_softwares.DisplayName -like "$softwareName*"
I am not sure a -like filter like the one above will do the job, as the filenames will look like examplename.exe.
Thanks in advance.
So the problem that I see, and that I think you are asking about, is that the installer filename will be different from the software name you pull out of the Registry. With that difference it will be hard to match them up exactly.
Is the set of MSIs and/or EXEs (the installers) a known, static set? Would it be possible to set up a hash table (dictionary) mapping the Registry name to the installer name?
This would make it easier to match exactly while looping through the installers and checking against the array from the Registry.
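For example, a rough sketch of that idea (the file names and display names below are made up; $fileNames and $all_softwares come from the question):
# Hypothetical mapping: installer filename -> DisplayName as stored in the Registry
$installerMap = @{
    'setup_example.exe' = 'Example Software'
    'othertool_x64.msi' = 'Other Tool'
}

foreach ($file in $fileNames) {
    $displayName = $installerMap[$file]
    # -contains does an exact match against the array of Registry display names
    if ($displayName -and ($all_softwares.DisplayName -contains $displayName)) {
        Write-Host "$file is already installed, skipping."
    }
    else {
        Write-Host "$file not found in the Registry, safe to install."
    }
}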

Is it possible to do a search and replace in file content on multiple network locations?

I need to search for a string and then replace it with another in multiple files. Sounds easy, but the hard part is that it's multiple files in multiple network locations. I've tried connecting to all of the locations at once with VS Code and then using the built-in search and replace function. This almost works, except that on big searches it seems to hang.
I'm now looking for another, more stable way to do this. Anybody got any ideas? I thought PowerShell could be a good candidate, but unfortunately I'm not that used to working with PowerShell.
I found this guide and it's a bit like what I want, except I need to do it on multiple files at multiple locations at once:
https://mcpmag.com/articles/2018/08/08/replace-text-with-powershell.aspx
I would settle for running one script per location, since there are only < 20 locations to scan. But it needs to include subfolders.
Any tips are appreciated, thanks! :)
Edit 1:
The folder structure differs from location to location, so it's hard to say exactly what it looks like, but no location has a folder structure deeper than 15 levels. The text I'm replacing consists of certificate thumbprints stored in .config files. The files are between 100 and 1000 characters long, and the thumbprints I'm replacing look something like this: d2e8c58e5b34021671f2121483572f03f54ab9ae
This assumes that the different network locations are in trusted domains, or at least part of the WinRM TrustedHosts list. PowerShell remoting will also need to be enabled on all computers involved; run Enable-PSRemoting -Force in an elevated PowerShell session to enable it.
$command = {
    Get-ChildItem -Path C:\Test\ -Include *.config -Recurse | ForEach-Object {
        $configContent = Get-Content -Path $_.FullName -Raw
        $configContent.Replace("Old Value", "New Value") |
            Out-File -FilePath $_.FullName -Force
    }
}
Invoke-Command -ComputerName "TestServer1", "TestServer2", "etc..." -ScriptBlock $command
If you are not part of the domain but have a domain/server login, you will need to use the -Credential parameter of Invoke-Command. This will basically find all files with the .config extension in any subfolder of the path, get the current content of each .config file, replace your value, and finally overwrite the existing config file. WATCH OUT THOUGH: this will touch EVERY .config file in that path. If you have more than one it will grab those too, but if a file doesn't contain the string it will just be rewritten unchanged.
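For example (the server name is a placeholder):
$cred = Get-Credential
Invoke-Command -ComputerName "TestServer1" -Credential $cred -ScriptBlock $command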
Without seeing an example of the folder structures and files, this is quite hard to answer thoroughly. However, I would probably build a series of ForEach segments. For example:
ForEach ($Server in $Servers)
{
    ForEach ($File in $Files)
    {
        Select-String -Path $File -Pattern "$ExampleString"
    }
}
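To go from finding matches to actually performing the replacement, a sketch along the same lines (the location list and new thumbprint are placeholders; Set-Content overwrites each matching .config file in place):
# Hypothetical list of network locations to scan
$locations = '\\server1\share', '\\server2\share'
$oldThumbprint = 'd2e8c58e5b34021671f2121483572f03f54ab9ae'
$newThumbprint = '<new thumbprint here>'

foreach ($location in $locations) {
    # -Recurse covers subfolders; only .config files are touched
    Get-ChildItem -Path $location -Filter *.config -Recurse | ForEach-Object {
        $content = Get-Content -Path $_.FullName -Raw
        if ($content -match $oldThumbprint) {
            $content.Replace($oldThumbprint, $newThumbprint) |
                Set-Content -Path $_.FullName
        }
    }
}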

PowerShell: NTFS permissions and parent folders - PathTooLong issues

I apologize in advance for the long post. I have spent a significant amount of time trying to find an answer or piece together a solution to this problem.
It's a simple request: add a user/group with 2 sets of permissions to your entire DFS environment, applying one set to folders and sub-folders and the other to files only.
Seems easy enough; however, in the environment I'm trying to manage we have thousands of folder paths greater than 260 characters deep, and any use of dir -Recurse or Get-ChildItem will hit the "PathTooLong" error. Every example solution for this problem has used a variation of the above or relies on Get-ChildItem. This fails for most real-world situations, as I believe many of us IT admins are faced with long paths due to the nature of DFS use.
The current attempt:
Currently I'm using the custom module "NTFSSecurity", which is fantastic for applying NTFS permissions. It works great!
It can be found here: https://ntfssecurity.codeplex.com/
A tutorial from here: https://blogs.technet.microsoft.com/fieldcoding/2014/12/05/ntfssecurity-tutorial-1-getting-adding-and-removing-permissions/
The problem with the above tutorial, and every other example I've been able to find, is that it references commands such as:
dir -Recurse | Get-NTFSAccess -Account $account
This will fail in the real world of super long file paths.
The "PathTooLong" error workaround:
My workaround currently consists of using Robocopy to export the file paths to a text file. I found this recommended by someone dealing with a similar problem. Robocopy will not error on "PathTooLong" issues and is perfect for this exercise. I then run commands against the text file containing all of the paths I need to modify.
The command for the Robocopy is this:
robocopy '<insert source path here>' NULL /NJH /E /COPYALL /XF *.* | Out-File -FilePath '<path to fileout.txt>'
This will create a text file while copying only folder structure and permissions. Excellent!
You will then have to clean the additional characters out of the text file, which I do with:
$filepath = '<path>\DFS_Folder_Structure.txt'
$structure = Get-Content $filepath
$structure -replace ' New Dir 0 ' | Out-File -FilePath \\<path_you_want_file>\DFS_Folder_Structure2.txt
I also reversed the contents of the text file so it lists the furthest child object (folder) first and works up. I thought this might make it easier to identify a parent folder or apply some other recursive logic which I haven't been able to figure out.
To reverse the text from bottom to top, use this command:
$x = Get-Content -Path 'C:\temp_dfs\DFS_Folder_Structure2.txt'
Set-Content -Path 'C:\temp_dfs\Reversed_data.txt' -Value ($x[($x.Length - 1)..0])
This script currently only applies to paths with inheritance off, or to child objects with inheritance off. This is taken from the NTFSSecurity module command Get-NTFSInheritance, which will return results for AccessInheritance and AuditInheritance. Access is whether the folder is inheriting from a parent above; Audit is whether the folder is passing it down to child objects.
There are 4 possibilities:
AccessInheritance True AuditInheritance True
AccessInheritance True AuditInheritance False
AccessInheritance False AuditInheritance True
AccessInheritance False AuditInheritance False
(Special note: I have seen all 4 combinations show up in the DFS structure I'm dealing with.)
Script to Set Permissions based on file path contained in text file:
#Get file content to evaluate
$path = Get-Content 'C:\temp_dfs\Reversed_data.txt'
$ADaccount = '<insert fully qualified domain\user or group etc.>'
foreach ($line in $path)
{
    #Get-NTFSAccess -Path $line -Account $ADaccount | Remove-NTFSAccess
    #This command will find the access of an account and then remove it.
    #It has been omitted but is included in case it is needed later.
    $result = Get-NTFSInheritance -Path $line
    if ($result.AccessInheritanceEnabled -match "False" -and $result.AuditInheritanceEnabled -match "False")
    {
        Add-NTFSAccess -Path $line -Account $ADaccount -AccessRights Traverse, ExecuteFile, ListDirectory, ReadData, ReadAttributes, ReadExtendedAttributes, ReadPermissions -AppliesTo ThisFolderAndSubfolders
        Add-NTFSAccess -Path $line -Account $ADaccount -AccessRights ReadAttributes, ReadExtendedAttributes, ReadPermissions -AppliesTo FilesOnly
    }
    if ($result.AccessInheritanceEnabled -match "False" -and $result.AuditInheritanceEnabled -match "True")
    {
        Add-NTFSAccess -Path $line -Account $ADaccount -AccessRights Traverse, ExecuteFile, ListDirectory, ReadData, ReadAttributes, ReadExtendedAttributes, ReadPermissions -AppliesTo ThisFolderAndSubfolders
        Add-NTFSAccess -Path $line -Account $ADaccount -AccessRights ReadAttributes, ReadExtendedAttributes, ReadPermissions -AppliesTo FilesOnly
    }
    if ($result.AccessInheritanceEnabled -match "True" -and $result.AuditInheritanceEnabled -match "False")
    {
        continue
    }
    if ($result.AccessInheritanceEnabled -match "True" -and $result.AuditInheritanceEnabled -match "True")
    {
        continue
    }
}
This script will apply permissions for the specified user/group account, setting one set of permissions on folders and sub-folders and then adding another set of permissions to files only.
Now this current fix works great, except it only touches folders with inheritance turned off. This means you'd need to run this script and then set permissions on the "main parent folder". This is completely doable, may be the best method to avoid double entries of permissions, and is the current state of my solution.
If you add the same actions to the bottom sections, where AccessInheritanceEnabled = True and AuditInheritanceEnabled = True, you will get double entries, because you're applying permissions both to the parent, which pushes its permissions down to the child objects, and explicitly to the child objects themselves. This is because the text file contains both parent and child, and I haven't figured out a way to address that. This isn't "horrible", but my OCD doesn't like double permissions being added if it can be avoided.
The real question:
What I'd really like to do is somehow identify parent folders, compare them to parents further up the tree, see whether they were inheriting permissions, and only apply the permission set to the highest parent in a specific chain. My mind wants to explode thinking about how you would compare parents and find the "highest parent".
Again, the problem is that any attempt to -recurse the folder structure will fail due to "PathTooLong" issues, so the logic needs to be applied to the text file of paths. I've seen Split-Path mentioned a bit, but I don't really understand how it's applied or how you could compare one path to another until you've identified a parent path.
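For what it's worth, the kind of Split-Path comparison I imagine (untested) would keep only the paths whose parent folder is not itself in the list, which should leave the top-most folder of each chain:
# $path holds the folder list read from the text file
$allPaths = $path | ForEach-Object { $_.TrimEnd('\') }
$topMost = $allPaths | Where-Object {
    # Split-Path -Parent returns the containing folder of the given path
    $allPaths -notcontains (Split-Path -Path $_ -Parent)
}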
Thank you for taking the time to read this long post and question. If you're still awake, I'm open to any suggestions. lol.
The NTFSSecurity module is indeed fantastic.
I used it to make a script that can export the NTFS security of a UNC path and its subfolders to a readable Excel file.
It can be found at:
https://github.com/tgoetheyn/Export-NtfsSecurity
I use it frequently and haven't had any problems with long filenames.
Hope you like it.
PS: If you add permissions with NTFSSecurity, don't forget to include the "Synchronize" permission. If it is not included, strange things can happen.
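For example, something along these lines (the path, account, and rights are placeholders):
# Include Synchronize alongside the other rights to avoid odd access behaviour
Add-NTFSAccess -Path '\\server\share\folder' -Account 'DOMAIN\SomeGroup' -AccessRights ReadAndExecute, Synchronize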

On which system is Get-Acl resolved?

I've inherited a PowerShell script that a remote customer uses to recursively search for directories and export (to CSV) multiple ACL values, including Path, Owner, FileSystemRights, IdentityReference, and AccessControlType. The script works great, but I am curious as to how the flow actually takes place. Below is a partial script showing the code relevant to my question.
//Partial script begin:
Get-ChildItem $rootdir -Recurse | Where-Object { $_.PSIsContainer -eq $true } | ForEach-Object {
    $a = $_.FullName
    $b = (Get-Acl $_.FullName).Owner
    $c = (Get-Acl $_.FullName).Access
    foreach ($c1 in $c) {
        $d = $c1.FileSystemRights
        $e = $c1.AccessControlType
//Partial script end.
To my question: if running this script on a remote system, using admin privileges and the variable $rootdir = \\someshare, on which system does the Get-Acl get resolved: on the system hosting the folder structure, or on the remote system running the PS script and mapped to the share folder?
Thanks.
// My original question may have been a bit nebulous, so hopefully I can clarify a bit. By using Get-Acl on a remote system mapped to a server share folder, will invoking Get-Acl cause any resource hit on the server during the ACL resolution process (disk I/O, memory, CPU)? I am not a programmer, so please bear with me as I try to formulate my question properly.
Assuming that you have all authentication correctly set up (you would run into a double-hop auth problem, if I understand your plan correctly), the call to Get-Acl would be executed on the system the script is run on.
From the TechNet article on the Get-Acl cmdlet:
The Get-Acl cmdlet enables you to retrieve the security descriptor
(access control list) for a file, a folder, or even a registry key
It retrieves NTFS permissions for any folder specified, including remote folders.
In your case, it would run from the machine the script is running on, and authenticate to the remote machine using the supplied credentials to retrieve the ACL.
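If you instead wanted the resolution to happen on the file server itself, you could wrap the call in Invoke-Command (a sketch, assuming remoting is enabled; the names are placeholders):
# Runs Get-Acl on the file server, against the share's local path
Invoke-Command -ComputerName 'FILESERVER01' -ScriptBlock {
    Get-Acl -Path 'D:\Shares\someshare' | Select-Object Owner, Access
}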

PowerShell script to move multiple unknown files into correct locations

I am attempting to create a script for use when we perform manual data transfers at work; this can be tedious when users have a ton of random data in random locations. I want to move those items from the old location on the old drive to our network location and then pull them back down. What I have below is a beta version of what I am looking to do; my issue is that I am unable to figure out how to find the currently logged in user and exclude certain accounts.
$DOCDIR = [Environment]::GetFolderPath("MyDocuments")
$TARGETDIR = 'C:\TextFiles'
if(!(Test-Path -Path $TARGETDIR )){
New-Item -ItemType directory -Path $TARGETDIR
}
$Include = @("*.*")
$Path = @("C:\Users\%USERNAME%\Documents", "C:\Users\%USERNAME%\Pictures")
Get-ChildItem -Path $Path -Include $Include -Recurse | Move-Item -Destination C:\TextFiles
Mind you, more will be added to this, but I am unsure how to get the current user and have the script exclude our administrator account on the units.
Thank you for any help.
You can use the environment variables named USERDOMAIN and USERNAME to determine the currently logged on user.
if ($env:UserName -eq 'Trevor.Sullivan') {
# Do something
}
To take it one step further, you could build an array of the user accounts that you want to exclude, and then check to see if the currently logged on user account is contained in that array. Here is an example:
# Build the list of excluded users
$ExcludedUserList = @(
    'User1'
    , 'User2'
    , 'User3'
    , 'User4'
);
# Check if user is contained in exclusion list
if ('User5' -notin $ExcludedUserList) {
# Do something here
}
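Tying that back to the original script, the exclusion check could wrap the move like this (a sketch; the account names are examples):
$ExcludedUserList = @('Administrator', 'LocalAdmin')

# Only move data if the logged on user is not in the exclusion list
if ($env:UserName -notin $ExcludedUserList) {
    $Path = @("C:\Users\$env:UserName\Documents", "C:\Users\$env:UserName\Pictures")
    Get-ChildItem -Path $Path -Recurse | Move-Item -Destination 'C:\TextFiles'
}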