I am trying to create a shortcut in PowerShell from one server to another

$PrivateDrive = "Sharedrivepath1"
$ScanDrive = "ScanDrivePath2"
New-Item -Itemtype SymbolicLink -Path $PrivateDrive -Name ScanDrive -Value $ScanDrive
I am trying to create a shortcut from the ScanDrive to the PrivateDrive. I have the full file paths and have access to both locations.
These both exist.
But I get the error "New-Item : Symbolic Links are not supported for the specified path"
EDIT: This is how I declare my Private and Scan Drives
$SamaccountName = $name.GivenName + '.' + $name.Surname
$PrivateDrive = '\\SERVER1\private\home folders\' + $SamaccountName
$ScanDrive = "\\SERVER2\Shares_2\" + $SamaccountName

The error message is PowerShell's, in response to the underlying CreateSymbolicLink() WinAPI function reporting error code 1 (ERROR_INVALID_FUNCTION).
There are two possible causes that I'm aware of:
A configuration problem: R2R (remote-to-remote) symlink evaluation is disabled, which it is by default.
To query the current configuration, run the following:
fsutil behavior query SymLinkEvaluation
To modify the configuration, you must call from an elevated (run as admin) session. The following enables R2R symlink evaluation:
# Requires an ELEVATED session.
fsutil behavior set SymLinkEvaluation R2R:1
(Less likely) A fundamental limitation:
The remote link path (the target path too?) is not exposed via one of the following technologies, which are the ones listed as supported in the linked WinAPI help topic:
Server Message Block (SMB) 3.0 protocol
SMB 3.0 Transparent Failover (TFO)
Resilient File System (ReFS)
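If the first cause applies, the fix plus a retry might look like this. This is only a sketch reusing the variable names and command from the question; the fsutil calls must run in an elevated session, and both shares must still meet the requirements above.
# Check the current policy (look for the "Remote to remote symbolic links" line).
fsutil behavior query SymLinkEvaluation
# Enable R2R evaluation - requires an ELEVATED session.
fsutil behavior set SymLinkEvaluation R2R:1
# Retry the original command from the question.
New-Item -ItemType SymbolicLink -Path $PrivateDrive -Name ScanDrive -Value $ScanDrive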

How to move file on remote server to another location on the same remote server using PowerShell

Currently, I run the following command to fetch the files to my local system.
Get-SCPFile `
    -ComputerName $server `
    -Credential $credential `
    -RemoteFile ($origin + $target + ".csv") `
    -LocalFile ($destination + $target + ".csv")
It works as I'd like (although it sucks that I can't copy multiple files by regex and/or wildcard). However, after the operation has been carried out, I'd like to move the remote files to another directory on the remote server so instead of residing in $origin at $server, I want them to be placed in $origin + "/done" at the same server. Today, I have to use PuTTY for that but it would be so much more convenient to do that from PS.
Googling gave me a lot of material, but I couldn't make it work. At the moment, I'm not sure whether I'm specifying the path incorrectly somehow or whether it's simply not possible to use the plain commands against an external, secured Unix server.
For copying files, I can't use Copy-Item, hence the Get-SCPFile function. I imagine that remotely moving, renaming and listing items isn't possible either, for the same reason (whatever that reason is).
This example as well as this one produce a 'cannot find path' error, despite the same value being used successfully for copying the file with the script at the top. I'm pretty sure it's a misleading error message (though I'm not entirely sure).
$file = "\\" + $server + "" + $origin + "" + $target + ".csv"
# \\L234231.vds.afm.se/var/trans/ut/drish/sxx/meta001.csv
Remove-Item $file -force
Many answers (like this) are very simple, which supports my theory that the combination of Unix and security raises an extra challenge. Perhaps I'm wording the question insufficiently well.
There are also more advanced examples, still not working, just hanging the window with no error messages. I feel my competence prevents me from estimating the degree of screwuppiness in this approach.
In PowerShell, you can create a PowerShell session (PSSession) from your system remotely on another system (and into another session on your own system, but that's a detail...) and execute your commands there.
You can create a PSSession with New-PSSession, but a lot of cmdlets have a -ComputerName parameter (or something similar) so that they can be executed remotely without creating a PSSession first.
A PSSession can be used with Enter-PSSession to get an interactive Session or with Invoke-Command to execute a ScriptBlock. That way you could test your Remove-Item command directly on the target server. Depending on the setup you might need to use Linux syntax within the remote session.
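As a rough illustration (not part of the original answer): if the remote Unix server has PowerShell and SSH-based remoting available, the move could be done in a session like the following, reusing $server, $credential, $origin and $target from the question.
$session = New-PSSession -HostName $server -UserName $credential.UserName -SSHTransport
Invoke-Command -Session $session -ScriptBlock {
    param($origin, $target)
    # Runs on the remote server, so Move-Item operates on the remote (Unix) filesystem
    Move-Item -Path ($origin + $target + ".csv") -Destination ($origin + "/done/" + $target + ".csv")
} -ArgumentList $origin, $target
Remove-PSSession $session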
Here is some more info: about_PSSessions, and using PowerShell over SSH to connect to Linux.

Powershell Error Message - Object Not Found [duplicate]

I wrote a PowerShell script to strip the R/H/S attributes off all files in a specified set of root paths. The relevant code is:
$Mask = [System.IO.FileAttributes]::ReadOnly.value__ -bor [System.IO.FileAttributes]::Hidden.value__ -bor [System.IO.FileAttributes]::System.value__
Get-ChildItem -Path $Paths -Force -Recurse -ErrorAction SilentlyContinue | ForEach-Object {
    $Value = $_.Attributes.value__
    if ($Value -band $Mask) {
        $Value = $Value -band -bnot $Mask
        if ($PSCmdlet.ShouldProcess($_.FullName, "Set $([System.IO.FileAttributes] $Value)")) {
            $_.Attributes = $Value
        }
    }
}
This works fine, but when processing one very large folder structure, I got a few errors like this:
Exception setting "Attributes": "Could not find a part of the path 'XXXXXXXXXX'."
At YYYYYYYYYY\Grant-FullAccess.ps1:77 char:17
+ $_.Attributes = $Value
+ ~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], SetValueInvocationException
+ FullyQualifiedErrorId : ExceptionWhenSetting
I find this strange because the FileInfo object being manipulated is guaranteed to exist, since it comes from a file search.
I can't give the file names because they are confidential, but I can say:
they are 113-116 characters long
the unique set of characters involved are %()+-.0123456789ABCDEFGIKLNOPRSTUVWX, none of which are illegal in a file name
the % character is there due to URL-encoded spaces (%20)
Do you have any suggestions as to what may be causing this? I assume that if the full path was too long, or I didn't have write permissions to the file, then a more appropriate error would be thrown.
As stated in your own answer, the problem turned out to be an overly long path (longer than the legacy limit of 259 characters).
In addition to enabling long-path support via Group Policy, you can enable it on a per-computer basis via the registry as follows, which requires running with elevation (as admin):
# NOTE: Must be run elevated (as admin).
# Change will take effect in FUTURE sessions.
Set-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem LongPathsEnabled 1
Pass 0 to turn support off.
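To verify the current setting afterwards, a simple read-back of the same registry entry works:
Get-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem LongPathsEnabled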
However, even with long-path support turned OFF (as is invariably the case on pre-Windows 10 versions), it is possible to handle long paths:
In Windows PowerShell (PowerShell up to version 5.1), you must use the long-path opt-in prefix, \\?\, as discussed below.
In PowerShell [Core] v6+, no extra work is needed, because it always supports long paths - you neither need to turn on support system-wide nor do you need the long-path prefix discussed below.
Caveat: While you may use \\?\ in PowerShell [Core] as well in principle, support for it is inconsistent as of v7.0.0-rc.2; see GitHub issue #10805.
Important: Prefix \\?\ only works under the following conditions:
The prefixed path must be a full (absolute), normalized path (must not contain . or .. components).
E.g., \\?\C:\path\to\foo.txt works, but \\?\.\foo.txt does not.
Furthermore, if the path is a UNC path, the path requires a different form:
\\?\UNC\<server>\<share>\...;
E.g., \\server1\share2 must be represented as \\?\UNC\server1\share2
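As an illustration only (the function name below is made up, not part of any module), a small helper that applies these two rules might look like this:
function Add-LongPathPrefix {
    param([string] $Path)
    if ($Path.StartsWith('\\?\')) { return $Path }                             # already prefixed
    if ($Path.StartsWith('\\'))   { return '\\?\UNC\' + $Path.Substring(2) }   # UNC path
    return '\\?\' + $Path                                                      # drive-qualified local path
}
Add-LongPathPrefix 'C:\path\to\foo.txt'        # -> \\?\C:\path\to\foo.txt
Add-LongPathPrefix '\\server1\share2\foo.txt'  # -> \\?\UNC\server1\share2\foo.txt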
It did turn out to be a long path issue after all, despite the wording of the error messages. A simple Get-ChildItem search for the files produced the same errors. I finally tracked down the files mentioned in the error messages and measured their total path lengths. They were exceeding 260 characters.
I experimented with adding a \\?\ prefix to the paths, but PowerShell doesn't seem to like that syntax.
Fortunately, the script is being used on Windows Server 2016, so I tried enabling long-path support in Group Policy. That made the whole problem go away.

Error "Could not find a part of the path" while setting attributes on an existing file

I wrote a powershell script to strip R/H/S attributes off all files in a specified set of root paths. The relevant code is:
$Mask = [System.IO.FileAttributes]::ReadOnly.Value__ -bor [System.IO.FileAttributes]::Hidden.Value__ -bor [System.IO.FileAttributes]::System.Value__
Get-ChildItem -Path $Paths -Force -Recurse -ErrorAction SilentlyContinue | ForEach-Object {
$Value = $_.Attributes.value__
if($Value -band $Mask) {
$Value = $Value -band -bnot $Mask
if($PSCmdlet.ShouldProcess($_.FullName, "Set $([System.IO.FileAttributes] $Value)")) {
$_.Attributes = $Value
}
}
}
This works fine, but when processing one very large folder structure, I got a few errors like this:
Exception setting "Attributes": "Could not find a part of the path 'XXXXXXXXXX'."
At YYYYYYYYYY\Grant-FullAccess.ps1:77 char:17
+ $_.Attributes = $Value
+ ~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], SetValueInvocationException
+ FullyQualifiedErrorId : ExceptionWhenSetting
I find this strange because the FileInfo object being manipulated is guaranteed to exist, since it comes from a file search.
I can't give the file names because they are confidential, but I can say:
they are 113-116 characters long
the unique set of characters involved are %()+-.0123456789ABCDEFGIKLNOPRSTUVWX, none of which are illegal in a file name
the % character is there due to URL-encoded spaces (%20)
Do you have any suggestions as to what may be causing this? I assume that if the full path was too long, or I didn't have write permissions to the file, then a more appropriate error would be thrown.
As stated in your own answer, the problem turned out to be an overly long path (longer than the legacy limit of 259 chars.)
In addition to enabling long-path support via Group Policy, you can enable it on a per-computer basis via the registry as follows, which requires running with elevation (as admin):
# NOTE: Must be run elevated (as admin).
# Change will take effect in FUTURE sessions.
Set-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem LongPathsEnabled 1
Pass 0 to turn support off.
However, even with long-path supported turned OFF (as is invariably the case on pre-Windows 10 versions) it is possible to handle long paths:
In Windows PowerShell (PowerShell up to version 5.1), you must use the long-path opt-in prefix, \\?\, as discussed below.
In PowerShell [Core] v6+, no extra work is needed, because it always supports long paths - you neither need to turn on support system-wide nor do you need the long-path prefix discussed below.
Caveat: While you may use \\?\ in PowerShell [Core] as well in principle, support for it is inconsistent as of v7.0.0-rc.2; see GitHub issue #10805.
Important: Prefix \\?\ only works under the following conditions:
The prefixed path must be a full (absolute), normalized path (must not contain . or .. components).
E.g., \\?\C:\path\to\foo.txt works, but \\?\.\foo.txt does not.
Furthermore, if the path is a UNC path, the path requires a different form:
\\?\UNC\<server>\<share>\...;
E.g., \\server1\share2 must be represented as \\?\UNC\server1\share2
It did turn out to be a long path issue after all, despite the wording of the error messages. A simple Get-ChildItem search for the files produced the same errors. I finally tracked down the files mentioned in the error messages and measured their total path lengths. They were exceeding 260 characters.
I experimented with adding a \\?\ prefix to the paths, but powershell doesn't seem to like that syntax.
Fortunately, the script is being used on Windows 2016, so I tried enabling long path support in group policy. That made the whole problem go away.

Determine where script is being executed from

I have a script that will send items to the recycle bin (if selected) or delete items permanently. If the script is run locally, the recycle piece works properly.
However, if it's run from a different computer - in this case, my local machine runs the script against a shared folder on a server - the delete is permanent and doesn't get sent to the recycle bin. The script (in a prior run) decides WHAT to delete by first setting the Archive bit to TRUE and then (after checking how many backups it should retain) un-setting the Archive bit on the items to be deleted on the next execution of that same script.
My thought was to alter the main script to mark the files for deletion, but only physically delete the file(s) when the script is being run locally, or to put the Recycle script (by itself) as a task on the server, running at a set interval, that would delete the item and send it to the Recycle Bin.
My questions:
1. In PowerShell (using 2.0), how do you determine the source computer vs. the target computer? In this case, the script is being run from MyPC, and its target is Server1.
2. The script will run whether the target is a mapped drive (drive Y:) or is addressed by server name (\\Server1). How can you make the distinction from the first question in both of these cases?
You can get the local computer name with $env:COMPUTERNAME. Use it to compare the value against the target server name.
For each file, you'd have to check first whether the drive is a mapped drive; if it is, get the server name from the WMI instance and compare it to $env:COMPUTERNAME.
You can get a file's Drive qualifier with the Split-Path cmdlet:
PS> $drive = Split-Path Q:\test.txt -Qualifier
PS> $drive
Q:
And then get the server name with WMI:
PS> (gwmi win32_logicaldisk -filter "drivetype=4 and deviceid='$drive'").ProviderName.Split('\')[2]
Server1
The OP wrote:
#Shay - Thanks for your help. I've learned a great deal from many posts by you on various Powershell sites.
I was able to use almost everything you suggested, and only had to add an extra line of code to make it work. I checked the property ([System.Uri]$markedFile).IsUnc to determine if the filename I've read is a UNC name.
It returns False if the drive is mapped, and True if it is UNC. From that, I'm able to get the servername & make a comparison to the environment. Code follows.
$markedFile = "\\Server1\foldername1\Error.log"
#$markedFile = "Y:\foldername1\Error.log"
$TargetComputer = $null
$thisComputer = Get-Content env:computername
if (Test-Path $markedFile) { # if file exists
if (([System.Uri]$markedFile).IsUnc) { # if it's a UNC name & not a mapped drive name
$TargetComputer = ([System.Uri]$markedFile).Host
}
else { #file is not a UNC name, it must be a mapped drive
$drive = Split-Path $markedFile -Qualifier
$TargetComputer = (gwmi win32_logicaldisk -Filter "drivetype=4 and deviceid = '$drive'").Providername.split('\')[2]
}
}
The above code works either way. Thank you again for your help!
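For completeness, the comparison the OP describes might look something like this (a sketch building on the variables above, not code from the original post):
if ($TargetComputer -and ($TargetComputer -ne $thisComputer)) {
    Write-Host "Running on $thisComputer against remote target $TargetComputer"
}
else {
    Write-Host "Running locally on $thisComputer"
}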

Issues With New-ADGroup, Set-ACL and Network Folders

I'm playing with some PowerShell code to dynamically generate AD security groups and then apply them to folders on a network share, but I'm having issues with resolving the newly created group.
Consider this:
Import-Module ActiveDirectory
for ($i = 0; $i -lt 10; $i++) {
    $group = New-ADGroup -Path "OU=Groups,OU=Department,DC=Domain,DC=Network" -Name "z-test-group-$i" -GroupScope DomainLocal -GroupCategory Security -PassThru
    $acl = Get-Acl C:\Temp
    $permission = $group.SID, "FullControl", "Allow"
    $accessRule = New-Object System.Security.AccessControl.FileSystemAccessRule $permission
    $acl.SetAccessRule($accessRule)
    $acl | Set-Acl C:\Temp
}
Which works fine.
However, if I change the folder to a network folder, such as G:\Temp, or \\domain.network\DFS\GroupShare\Temp, I get a 'Method failed with unexpected error code 1337'.
I tried using SetACL.exe and received a similar error:
C:\Temp\SetACL.exe -on "\\domain.network\dfs\GroupShare\Temp" -ot file -actn ace -ace "n:$GroupSID;p:full;s:y"
SetACL finished with error(s):
SetACL error message: The call to SetNamedSecurityInfo () failed
Operating system error message: The security ID structure is invalid.
INFORMATION: Processing ACL of: <\\?\UNC\domain.network\dfs\GroupShare\Temp>
If I wait say 10 to 20 seconds, and run the Set-ACL (or SetACL.exe) portion of the code again, it completes successfully.
At first I thought this was related directly to the domain controllers (four of them, a mix of 2003 and 2008 R2), but the fact that it worked fine on local folders was intriguing (and annoying).
I did a Wireshark trace during the execution of the code on a local folder and then a network folder. The main difference is when trying to apply the ACLs to the network folder I see LDAP lookups and (amongst other things) the following SMB response:
NT Trans Response, FID: 0x0040, NT SET SECURITY DESC, Error: STATUS_INVALID_SID
Which I assume is what causes my Set-ACL command to fail.
The underlying network filesystem is EMC Celerra 6.0.xx. I am very unfamiliar with this technology, however from what I understand it holds some kind of SID cache which would explain the above error (it doesn't yet know of the new group even though AD does).
So I guess there are two questions:
1. Is there any way around this (PowerShell, C#, etc.) that doesn't involve sleeping/waiting? I.e., can the ACL be set even though the SID is invalid?
2. If EMC Celerra is the issue (I assume it is), is there any way I can force it to update its 'SID cache' or whatever it may be?
I have read various articles about this issue, but none seem to have an effective resolution (or work for me).
Thanks for your help.
Rhys.
If the issue is just that the delay involved in waiting for the cache to update blocks other work the script needs to be doing, you could ship that part off to a background job and let your main script go on to other things.
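A rough sketch of that idea follows; the retry count, interval and share path are illustrative only, and $group comes from the loop in the question:
$aclJob = Start-Job -ScriptBlock {
    param($sidString, $path)
    $sid  = New-Object System.Security.Principal.SecurityIdentifier $sidString
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule ($sid, 'FullControl', 'Allow')
    for ($try = 0; $try -lt 5; $try++) {
        try {
            $acl = Get-Acl $path
            $acl.SetAccessRule($rule)
            $acl | Set-Acl $path -ErrorAction Stop
            break    # succeeded; stop retrying
        }
        catch {
            Start-Sleep -Seconds 15    # SID not recognised by the NAS yet; wait and retry
        }
    }
} -ArgumentList $group.SID.Value, '\\domain.network\DFS\GroupShare\Temp'
# The main script carries on with other work; collect the job later if needed:
# Receive-Job $aclJob -Wait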
Figured it out!
Modified the acl.mappingErrorAction on our EMC Celerra NAS.
Was set to 0, updated it to 1.
server_param server_2 -facility cifs -modify acl.mappingErrorAction -value 1
Now we have no issues in setting the newly created security group into the ACLs for the folder on a network share (no delays).
Info: acl.mappingErrorAction
Defines the rules for unknown mapping between security, user, and group identifiers (SID/UID/GID) on ACL settings.
Two kinds of errors might occur:
The SID set in the ACL is unknown to the domain controllers being used.
The username is not yet mapped to a UID/GID.
The bit list consists of four binary bits (bits 0 through 3, right to left). Each bit is 1 when set; otherwise 0.
Bit 0 (0001 or +1): Store unknown SID.
Bit 1 (0010 or +2): Store SID with no UNIX mapping.
Bit 2 (0100 or +4): Enable debug traces.
Bit 3 (1000 or +8): Do lookup only in cache (secmap or global SID cache or per connection SID cache).
Values: 0 – 15
Default: 0
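For example, the value 1 used above sets only bit 0 (store unknown SID), while a value of 3 (binary 0011) would set bits 0 and 1, both storing unknown SIDs and storing SIDs that have no UNIX mapping.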
Seems obvious enough now that I understand more about the underlying CIFS/ACL settings on the NAS than I ever wanted to know.
Rhys.