Alternative to test-path, using credential, to hosts on a different domain, with wildcard in folder path - powershell

I'm trying to copy a file to a specific folder on one of n hosts (hostA, hostB, etc.), but I don't know the full path of the folder.
If I don't use a credential (which I'm required to), I can run e.g.
Test-Path -Path \\hostA\d$\*\targetFolder ...and it matches D:\blah\targetFolder
I could use the credential with New-PSDrive, but then I can't map to a wildcarded path. I could also use Invoke-Command, but then I'd have to work out a way to get the file from the source host...
This is for a TFS/AzureDevops pipe.

Using New-PSDrive, map a (non-persistent, PS-only) drive to the admin share \\hostA\d$ itself, and then use that drive for wildcard-based path testing:
# Define a PS-only RemoteD: drive that maps to \\hostA\d$,
# using the specified credentials.
New-PSDrive -Name RemoteD -PSProvider FileSystem -Root \\hostA\d$ -Credential (Get-Credential)
# Use paths based on RemoteD: for wildcard-based testing.
Test-Path RemoteD:\*\targetFolder
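If you also need the concrete path(s) the wildcard matched, rather than just $true/$false, Resolve-Path works against the same drive (the drive already carries the credentials, so no further authentication is needed):

```powershell
# Resolve the wildcard to the actual matching path(s), e.g. RemoteD:\blah\targetFolder.
$resolved = Resolve-Path RemoteD:\*\targetFolder
# Convert-Path translates the PS drive path back to a UNC path, if needed.
$resolved | Convert-Path
```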

Related

How to refer to HKEY_CLASSES_ROOT in PowerShell?

New-Item -Path "HKCR:\Directory\Background\shell\customname" -Force
I've been doing the same thing for HKCU and HKLM, but when I try HKCR I get errors in PowerShell. How am I supposed to do it for HKEY_CLASSES_ROOT?
I searched for a solution but couldn't find any.
Okay, I figured it out on my own:
I checked Get-PSDrive
and saw that the only registry drives available by default in Windows PowerShell are
HKCU Registry HKEY_CURRENT_USER
HKLM Registry HKEY_LOCAL_MACHINE
So, following this, I added a new drive for HKEY_CLASSES_ROOT called HKCR:
New-PSDrive -Name "HKCR" -PSProvider Registry -Root "HKEY_CLASSES_ROOT"
Defining a custom drive whose root is HKEY_CLASSES_ROOT, as shown in your own answer, is definitely an option, especially for repeated use.
Ad hoc, you can alternatively use the Registry:: provider prefix directly with native registry paths:
New-Item -Path 'Registry::HKEY_CLASSES_ROOT\Directory\Background\shell\customname' -Force
Note:
The Registry part of the prefix is the provider name, as shown in Get-PSProvider's output.
Hypothetically, multiple providers with the same name could be registered, in which case you can prefix the name with the implementing module's name for disambiguation; in the case of the registry provider, this module-qualified prefix is Microsoft.PowerShell.Core\Registry::[1] However, it's fair to assume that no third-party provider will choose a name that conflicts with the providers that ship with PowerShell, so Registry:: (or registry::; case doesn't matter) should do.
Note that the module-qualified provider name does show up in the prefix of the .PSPath property that provider items, such as those reported by Get-Item and Get-ChildItem, are decorated with, e.g.:
PS> (Get-Item HKCU:\Console).PSPath
Microsoft.PowerShell.Core\Registry::HKEY_CURRENT_USER\Console
[1] Note that the Core part of the name does not refer to the PowerShell (Core) edition; it simply denotes a module that is at the core of either edition.
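As a quick sanity check, the same provider-prefixed path works with other cmdlets too, e.g. to verify that the key from the question was actually created:

```powershell
# Returns $true if the key exists under HKEY_CLASSES_ROOT.
Test-Path 'Registry::HKEY_CLASSES_ROOT\Directory\Background\shell\customname'
```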

net use x: https://server/path vs new-psDrive x https://server/path

I am able to map a network path with the console command
net use x: https://some.server.xy/path/to/directory
However, when I tried to map the network drive in PowerShell (before I assigned it with net use x: ...) with
new-psDrive v fileSystem https://some.server.xy/path/to/directory
I got the error message
new-psDrive : The specified drive root "https://some.server.xy/path/to/directory" either does not exist, or it is not a folder.
Apparently, my assumption that those two commands would have the same effect was wrong.
The question is: what is PowerShell's equivalent for using net use ...?
Copied from Stack Exchange: Map Network Drive to a WebDAV Server via PowerShell:
Here is a working example of me mounting the Sysinternals WebDAV site to my S: drive:
[String]$WebDAVShare = '\\live.sysinternals.com\Tools'
New-PSDrive -Name S -PSProvider FileSystem -Root $WebDAVShare
Notice you need to use the UNC format, not the http:// prefix.
Also you need to make sure that the WebClient service is running on your computer.
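For example, you can check the WebClient service's status and, in an elevated session, start it if necessary:

```powershell
# WebDAV connections on Windows are handled by the WebClient service.
if ((Get-Service -Name WebClient).Status -ne 'Running') {
    Start-Service -Name WebClient  # requires elevation
}
```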
If you wanted to confirm that a server supports WebDAV, you could do:
(Invoke-WebRequest http://live.sysinternals.com -Method Options).Headers.DAV
And if that returns something like 1,2,3 then the server supports various versions of WebDAV. (Although the server administrator may have disallowed the Options verb.)

Copy a file to a remote computer

I have made two instances on sky-high: cl1 and srv1. I am trying to copy a folder from cl1 to srv1. I can use the command
Enter-PSSession -Credential $cred IP_ADD_SRV1
from cl1 to get into srv1. I have been looking at the help page for Copy-Item and found an example called "Copy a file to a remote computer". Is this right? The command is
$Session = New-PSSession -ComputerName "Server01" -Credential "Contoso\User01"
Copy-Item "D:\Folder001\test.log" -Destination "C:\Folder001_Copy\" -ToSession $Session
My questions are:
Is the ComputerName just the name I called them on my Microsoft Remote Desktop?
And what do I put as the credential?
My problem is that the paths of the two folders I want to copy are almost the same. Someone told me I need to use the UNC path. Do I need to use this both for the Copy-Item source and the destination? I am new to this, but does this look right for the UNC path: \\cl1\C$\Users\Admin\Test ?
You can copy a file or folder from a pc to a remote machine in several ways.
A 'Normal' copy (not using a Session object)
If the PC you are logged into is called cl1 and the file is on that computer (the source), you need to specify the Destination in UNC format:
Copy-Item -Path 'C:\SourceFolder\TheFileToCopy.txt' -Destination '\\srv1\c$\DestinationFolder'
If however the file is on the remote machine and you need to copy that TO the machine you're logged into, then the Source should be in UNC format:
Copy-Item -Path '\\srv1\c$\TheFileToCopy.txt' -Destination 'C:\DestinationFolder'
Using the Session object
If the PC you are logged into is called cl1, the file is on that computer (the source), and you have established a session to the remote machine using $session = New-PSSession -ComputerName srv1, then you need to specify both the Path and Destination parameters as LOCAL paths:
Copy-Item -Path 'C:\SourceFolder\TheFileToCopy.txt' -Destination 'C:\DestinationFolder' -ToSession $session
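Conversely, to copy a file from the remote machine to the one you're logged into over the same session, Copy-Item also supports -FromSession, again with both paths specified as LOCAL paths on their respective machines:

```powershell
# Source path is local to srv1 (the session's computer); destination is local to cl1.
Copy-Item -Path 'C:\SourceFolder\TheFileToCopy.txt' -Destination 'C:\DestinationFolder' -FromSession $session
```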
A Credential object contains user name and (encrypted) password to use to authenticate to the remote machine. Use the Get-Credential cmdlet for that
It seems you want to copy a directory from a source on computer Cl1 to a path on the remote server srv1.
From your comments, I see that the source is C:\Users\Admin\Test (that is the LOCAL path of the computer you are logged in to, i.e. Cl1) and that the destination would be C:\Users\Admin\Backup on the REMOTE machine.
That is why you need to use the UNC format for the destination path: C:\Users\Admin\Backup becomes \\srv1\C$\Users\Admin\Backup.
Using the servers name needs DNS to be set up properly, so you can also use the IP address of that server instead of its name. Suppose that the server has IP 10.212.141.129, the UNC path for the destination would then become \\10.212.141.129\C$\Users\Admin\Backup.
However... You are targeting the so-called administrative share (C$), and for that you need to have permissions. Also, you are targeting a user folder for user Admin (which is the Admin user on the remote machine, and that is not the same one as the Administrator on your computer).
Therefore, it is quite possible you do not have access permissions on the target folder.
You can give yourself permissions (if you know the correct credentials of course) by adding parameter -Credential $cred to the Copy-Item cmdlet. Such a credentials object is easily obtained by using
$cred = Get-Credential -Message "Please enter Domain Admin credentials"
For Copy-Item to be able to copy something to somewhere, you must make sure the destination path exists.
Try to navigate in File Explorer to that remote path using the same UNC naming convention.
If for instance the path \\srv1\C$\Users\Admin exists, but there is no folder Backup, (and you have permissions to do so), create that folder, either from within Explorer, or in PowerShell:
if (-not (Test-Path -LiteralPath '\\srv1\C$\Users\Admin\Backup' -PathType Container)) {
    $null = New-Item -Path '\\srv1\C$\Users\Admin\Backup' -ItemType Directory
}
Next, you should be able to copy all files and subfolders from the source directory to that destination using
Copy-Item -Path 'C:\Users\Admin\Test' -Destination '\\srv1\C$\Users\Admin\Backup' -Recurse # -Credential $cred can go here
# Path = local source on cl1; Destination = remote destination on srv1
Of course, you can also use the Session method described earlier; in that case you should use local path names (C:\whatever) and don't need UNC paths, because the $session object takes care of that for you.
It could be that on the destination server there is a share set up for you that resides somewhere else, for instance a folder X:\Students\Course1\Output that has been shared as StudentMaterial$.
If this might be the case (ask your teacher) you can set the destination as \\srv1\StudentMaterial$ and you do not need to go all the way via the Administrative Share.
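If you want to see which shares the server exposes, you can ask it directly. Note that hidden shares whose names end in $ (like the administrative shares and StudentMaterial$) will not appear in this listing:

```powershell
# List the visible (non-hidden) shares on the remote server.
net view \\srv1
```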
Hope this explains some more

Execute an Uninstall-Setup from server on remote computers via PowerShell

This is my first question here and I am also quite new on PowerShell, so I hope I am doing everything alright.
My problem is the following: I want to uninstall a program on several computers, check that its registry key is deleted, and then install a new version of the program.
The setup is located on a server within the same domain as the computers.
I want my script to loop through the computers and execute the setup from the server for every computer. As I am quite new to PowerShell, I have no idea how to do this. I was thinking of maybe using Copy-Item, but I don't really want to move the setup; I'd rather execute it from the server on each computer. Any idea how to do this?
Best regards
You can try the following approach.
Note that the need to provide credentials explicitly is a workaround for the infamous double-hop problem.
# The list of computers on which to run the setup program.
$remoteComputers = 'computer1', 'computer2' # ...
# The full UNC path of the setup program.
$setupExePath = '\\server\somepath\setup.exe'
# Obtain credentials that can be used on the
# remote computers to access the share on which
# the setup program is located.
$creds = Get-Credential
# Run the setup program on all remote computers.
Invoke-Command -ComputerName $remoteComputers {
# WORKAROUND FOR THE DOUBLE-HOP PROBLEM:
# Map the target network share as a dummy PS drive using the passed-through
# credentials.
# You may - but needn't - use this drive; the mere fact of having established
# a drive with valid credentials makes the network location accessible in the
# session, even with direct use of UNC paths.
$null = New-PSDrive -Name dummy -PSProvider FileSystem -Credential $using:creds -Root (Split-Path -Parent $using:setupExePath)
# Invoke the setup program from the UNC share.
& $using:setupExePath
# ... do other things
}
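To cover the second part of your question (checking that the uninstall removed its registry key), you can run Test-Path against the key on each computer. The key path below is only a placeholder; substitute the one your program actually uses:

```powershell
# Returns one $true/$false result per computer; $false means the key is gone.
Invoke-Command -ComputerName $remoteComputers {
    Test-Path 'HKLM:\SOFTWARE\MyVendor\MyApp'  # hypothetical key path
}
```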

New-PSDrive drive mapping name

In my PowerShell script I am using the New-PSDrive cmdlet to map a remote server file path to my local computer as part of a Windows deployment process.
I plan to reuse this PowerShell script in the future, so I don't want any conflict between drives because of naming. For example, if two deployment operations need to reach the script at the same time, then one of the two will be deployed incorrectly.
The question is: can I use a timestamp or any other unique information as a drive mapping name? That way, I can be sure of avoiding name conflicts.
Edit:
I have tried to create a custom-named New-PSDrive mapping without the -Persist parameter, but that way PowerShell tries to reach the folder via a relative path (under the current working directory).
Here is the code where i try to copy some files (backup):
$day = Get-Date -Format "yyyyMMdd"
$appsource = "\\$computername\D$\Applications"
New-PSDrive -Name J -PSProvider FileSystem -Root $appsource -Credential $cred -Persist
Write-Host "Backup operation started."
robocopy "J:\App" "J:\backup\$day"
Edit 2:
You cannot use a dynamic name as a persisted drive mapping name. If you need to reach a computer in another domain, the best (but costly) way is to use Invoke-Command to run the script on the remote computer. File-sharing permissions need to be allowed in both directions (remote-to-local and local-to-remote). If you use Invoke-Command, you are conflict-free, because the command uses a dynamic session on the remote computer.
Per the documentation from Get-Help New-PSDrive -Full, the name of the new drive is supplied as a string, so if you can build up the string from your preferred information (timestamp, etc.) before passing it to New-PSDrive, you can use it as a drive name. Note that you should avoid characters that are problematic in path names, such as spaces and the reserved characters (e.g., \, :, /, and the wildcard characters).
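For example, a timestamp-based name is valid for a non-persistent PS drive (persistent, Windows-mapped drives created with -Persist must use single drive letters). This sketch reuses the $appsource and $cred variables from your edit:

```powershell
# Build a unique, path-safe drive name from the current timestamp.
$driveName = 'Deploy{0}' -f (Get-Date -Format 'yyyyMMddHHmmssfff')
New-PSDrive -Name $driveName -PSProvider FileSystem -Root $appsource -Credential $cred
# Use paths of the form "${driveName}:\..." while the drive exists, then clean up:
Remove-PSDrive -Name $driveName
```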
Since your edit shows that you're using ROBOCOPY, which runs "outside" PowerShell's code/memory space, you may not be able to use New-PSDrive to establish the mapping - I've had inconsistent results with this. Much more reliable is to establish the mapping with NET USE - in your case, NET USE J: $appsource will likely do the trick.
Since Windows-mapped drives have hard requirements on names (which is what is created when using the -Persist parameter), it may be better to use Invoke-Command and pass in a script block than to map the drive at all.
$SB = {
$day = Get-Date -Format "yyyyMMdd"
Robocopy "D:\Test\App" "D:\Test\backup\$day"
}
Invoke-Command -ComputerName $CompName -Credential $cred -ScriptBlock $SB
This way it removes the need to worry about mapped drive collision