How can I modify files/folders on a network share as another user? - powershell

Summary
As part of our build process on DevOps, I'm trying to copy the build artifact to a specific folder on an internal network share. The problem is that although the build agent is on our internal network, the DevOps service does not have permission to access any of the network shares.
I think I can get around this by supplying valid login credentials in the PowerShell script that does the work, but I'm having trouble getting the credentials accepted. Viewing the folder structure without credentials seems to be possible, but actually making modifications (creating/deleting files and/or folders) gives me access denied or authentication denied messages.
What I've Tried
At first, I was doing a very simple create folder command:
# NETWORK_SHARE would be the internal IP of the target public folder on the network, e.g. 10.20.30.40
New-Item -Path "\\NETWORK_SHARE\path\to\folder" -ItemType Directory
But that was giving me an Access Denied error:
New-Item : Access to the path 'folder' is denied.
Then I did some research and thought that perhaps if I supply the credentials of a valid user, it would allow me to do this. Here's the simple test command. It's supposed to create a new folder in a specified location:
$username = "MyUsername"
$password = ConvertTo-SecureString "MyPassword" -AsPlainText -Force
$credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
# NETWORK_SHARE would be the internal IP of the target public folder on the network, e.g. 10.20.30.40
Invoke-Command -ComputerName "NETWORK_SHARE" -ScriptBlock { New-Item -Path $args[0] -ItemType Directory } -Credential $credentials -Authentication Credssp -ArgumentList "\\NETWORK_SHARE\path\to\folder"
The error that I'm getting is the following:
Connecting to remote server NETWORK_SHARE failed with the following error message : The WinRM client cannot process the request. CredSSP
authentication is currently disabled in the client configuration. Change the client configuration and try the request again. CredSSP authentication must also be enabled in the server configuration. Also, Group Policy must be edited to allow credential delegation to the target computer. Use gpedit.msc and look at the following policy: Computer Configuration -> Administrative Templates -> System -> Credentials Delegation -> Allow Delegating Fresh Credentials. Verify that it is enabled and configured with an SPN appropriate for the target computer. For example, for a target computer name "myserver.domain.com", the SPN can be one of the following: WSMAN/myserver.domain.com or WSMAN/*.domain.com
It doesn't even look like it's trying to authenticate; it just immediately spits back the error above. I'm not sure how to go about debugging this. I can make changes to the build agent if necessary, but I do not have the ability to change any configuration on the target network share, as that is maintained by the IT team and they are very strict about opening up our drives to the internet. Is there a way to authenticate successfully and create a new folder on the network share without changing any configuration on the target?

I ended up taking a non-PowerShell approach to this. Since I had access to the build agent, I configured the DevOps service to run with the credentials of a valid user instead of the default "Network Service" account. This granted the DevOps service all the permissions it needed, and I was able to write a trivial PowerShell script that creates/copies/deletes folders on the network share.
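For reference, a minimal sketch of that kind of script, assuming the service account now has share permissions (the share path and artifact variable are placeholders, not the actual build layout):
$share = "\\NETWORK_SHARE\path\to\folder"           # hypothetical target folder on the share
$artifact = $env:BUILD_ARTIFACTSTAGINGDIRECTORY     # assumed DevOps artifact staging directory

# Clear the previous build output, then recreate the folder and copy the new artifact over.
Remove-Item -Path $share -Recurse -Force -ErrorAction SilentlyContinue
New-Item -Path $share -ItemType Directory -Force | Out-Null
Copy-Item -Path "$artifact\*" -Destination $share -Recurse -Force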

Related

Double-Hop Errors when running Skype for Business Cmdlets

I am attempting to automate the Skype for Business Server installation process in PowerShell. I have a script that remotes into specified machines and begins preparing them as Front-End servers. The problem lies with certain SfB cmdlets (SfB commands are all of the form "Verb-Cs...", e.g. Get-CsUser or Get-CsPool): when they are run in remote sessions, they throw the double-hop error:
Exception: Active Directory error "-2147016672" occurred while searching for domain controllers in domain...
This is after running Enable-CsComputer, which enables the computer's role based on its definition in the topology (the topology was published successfully). The user object is in all required groups (RTCUniversalServerAdmins, Schema Admins, CsAdministrators, plus Local Admin rights on all SfB servers). Oddly enough, the command Import-CsConfiguration -LocalStore does not throw errors, and it's in the same remote session. There may be other local or domain groups that I need to be in, but I cannot pinpoint exactly which, and I have not seen them documented in the Skype build guides. Skype commands that have parameters to specify targets or that just pull data, such as Get-CsPool or Get-CsAdForest, do not throw errors because they run in the local scope. Enable-CsComputer has no parameter for the computer name; it has to be executed from the machine itself.
Enabling CredSSP delegation on each server is not an option, and I don't understand why there is a "second hop" in this command! If the second hop were a resource on a file server or database, that would make sense and be easy to solve, but in this case I can't track it. Can anyone tell me what I may be missing?
Here's a code sample to try and illustrate. From the jumpbox I get the pool data to create an array, and a session is opened to each machine:
$ServerArray = Get-CsPool -Identity $poolName
$i = 0
$SessionArray = @{}
foreach ($server in $ServerArray.Computers) { $SessionArray[$i] = New-PSSession -ComputerName $server; $i++ }
foreach ($session in $SessionArray.Values) {
    Invoke-Command -Session $session -ScriptBlock {
        # remote commands:
        Import-CsConfiguration -FileName <config file path> -LocalStore   # no errors
        Enable-CsReplica    # no errors
        Enable-CsComputer   # double-hop error here
    }
}
If I log into that machine and run the same command, it executes fine but the intention of the project is to automate it on an arbitrary number of machines.
It looks like it's just trying to authenticate to a domain controller, which is reasonable. You'll have to approach this like any other double-hop issue.
Microsoft has an article dedicated to the double hop issue, and has a few solutions other than CredSSP that you can look at: Making the second hop in PowerShell Remoting
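For illustration, one CredSSP-free option from that article is resource-based Kerberos constrained delegation; a rough sketch, assuming the ActiveDirectory module and hypothetical names (SfBFE01 is the Front-End the session runs on, DC01 is the domain controller it needs to reach):
# Allow the Front-End's computer account to delegate to the DC.
$frontEnd = Get-ADComputer -Identity SfBFE01
Set-ADComputer -Identity DC01 -PrincipalsAllowedToDelegateToAccount $frontEnd
# The KDC caches delegation denials for a while; purge the system ticket cache on the Front-End.
Invoke-Command -ComputerName SfBFE01.domain.com -ScriptBlock { klist purge -li 0x3e7 }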

TeamCity Remote Powershell Access Denied Suddenly

Disclaimer: I am not a DevOps guy so please forgive any ignorance. I'm learning this stuff to expand my understanding.
I've enabled remote Powershell on a Windows Server 2019 instance in order to stop/start scheduled tasks during deployment of files from my build server (also Windows Server 2019).
I followed the steps below in an administrator PowerShell as the Administrator user on the remote server (a sketch of steps 2 and 4 follows the list):
1. Enable PSRemoting.
2. Remove the existing listener.
3. Create a self-signed certificate and export it to a crt file.
$Cert = New-SelfSignedCertificate -CertstoreLocation Cert:\LocalMachine\My -DnsName "<subdomain.domain.com>"
4. Create the listener.
5. Create firewall rules to allow secure PSRemoting and disable unsecure connections.
6. Copy the certificate to the build server.
7. Import the certificate on the build server.
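A sketch of what steps 2 and 4 look like, assuming the $Cert variable from step 3 (the WSMan provider creates the HTTPS listener bound to the certificate's thumbprint):
Remove-Item -Path WSMan:\localhost\Listener\* -Recurse -Force   # step 2: remove the existing listener(s)
New-Item -Path WSMan:\localhost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $Cert.Thumbprint -Force   # step 4: create the HTTPS listener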
From the build server, I've tested the configuration using the following commands in PowerShell:
$username = 'Administrator'
$pass = ConvertTo-SecureString -String '<password here>' -AsPlainText -Force
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $pass
Invoke-Command -ComputerName <subdomain.domain.com> -UseSSL -ScriptBlock {whoami} -Credential $cred
Which responds nicely with win-<some stuff>\administrator. However, when I execute a remote PowerShell command from within a TeamCity build step, I get a big ugly "Connecting to remote server <subdomain.domain.com> failed with the following error message : Access is denied."
The weird part is, this worked two days ago and I have several builds that were able to complete all remote operations. From this morning, it's just stopped working - poof!
If I fudge the credentials, I do get an incorrect username/password error so it is definitely reaching the server.
Another interesting find is that if I run
[bool](Test-WSMan)
on the remote server, I get True returned, but if I run the same command with -ComputerName <subdomain.domain.com> on the build server, I get:
WinRM cannot complete the operation. Verify that the specified computer name is valid, that the computer is accessible over the network, and that a firewall exception for the WinRM service is enabled and allows access from this computer. By default, the WinRM firewall exception for public profiles limits access to remote computers within the same local subnet.
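(For what it's worth, Test-WSMan probes the default HTTP listener on port 5985 unless told otherwise, so from the build server it presumably needs the SSL flags, something like the line below, with <subdomain.domain.com> as above.)
Test-WSMan -ComputerName <subdomain.domain.com> -UseSSL -Port 5986 -Authentication Default -Credential $cred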
Both the remote host and build server are logged on as the default Administrator.
Any ideas?
After more research and calling in a few favours, I was advised to tweak the TeamCity Build Agent and TeamCity Server services: they need to log on as a user, not Local System. I can't explain how my previous settings ever worked. The Access is denied error I experienced had nothing to do with the remote PowerShell configuration mentioned above.
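For anyone else hitting this, the change amounts to something like the following from an elevated prompt (the service names and account are placeholders; check the real names in services.msc):
sc.exe config "TeamCity" obj= ".\builduser" password= "P@ssw0rd"
sc.exe config "TCBuildAgent" obj= ".\builduser" password= "P@ssw0rd"
Restart-Service "TeamCity", "TCBuildAgent"   # restart so the new logon takes effect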

Drive Mapping with Azure Scale Sets using Desired State Configuration

I am running into an interesting issue. Maybe you fine folks can help me understand what's happening here. If there's a better method, I'm all ears.
I am running a DSC configuration on Azure and would like to map a drive. I've read this really isn't what DSC is for, but I am not aware of any other way of doing this with Azure Scale Sets outside of DSC. Here's the portion of the script I am running into issues with:
Script MappedDrive
{
    SetScript =
    {
        $pass = "passwordhere" | ConvertTo-SecureString -AsPlainText -Force
        $user = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "username", $pass
        New-PSDrive -Name W -PSProvider FileSystem -Root \\azurestorage.file.core.windows.net\storage -Credential $user -Persist
    }
    TestScript =
    {
        Test-Path -Path "W:"
    }
    GetScript =
    {
        $hashresults = @{}
        $hashresults['Exists'] = Test-Path W:
        $hashresults   # return the hashtable
    }
}
I've also attempted this code in the SetScript section:
(New-Object -ComObject WScript.Network).MapNetworkDrive('W:','\\azurestorage.file.core.windows.net\storage',$true,'username','passwordhere')
I've also tried a simple net use command to map the drive instead of the fancier New-Object or New-PSDrive cmdlets. Same behavior.
If I run these commands (New-Object / net use / New-PSDrive) manually, the machine will map the drive, but only if I run it with a separate drive letter. Somehow the drive is attempting to be mapped but isn't mapping.
Troubleshooting I've done:
There is no domain in my environment. I am simply attempting to create a scale set and run DSC to configure the machine using the storage account credentials granted upon creation of the storage account.
I am using the username and password that is given to me by the Storage Account user id and access key (randomly generated key, with usually the name of the storage account as the user).
Azure throws no errors on running the DSC module (No errors in Event Log, Information Only - Resource execution sequence properly lists all of my sequences in the DSC file.)
When I log into the machine and check to see if the drive is mapped, I run into a disconnected network drive on the drive letter I want (W:).
If I open PowerShell, I receive an error: "Attempting to perform the InitializeDefaultDrives operation on the 'FileSystem' provider failed."
If I run Get-PSDrive, the W: drive does not appear.
If I run the SetScript code manually inside a PowerShell console, the mapped drive works fine under a different drive letter.
If I try to disconnect the W: drive, I receive "This network connection does not exist."
I thought maybe DSC needed some time before mapping and added a sleep timer, but that didn't work. Same behavior.
I had a similar problem before. While it didn't involve DSC, mounting an Azure File share would be fine until the server was restarted; then it would appear as a disconnected drive. This happened if I used New-Object / net use / New-PSDrive with the persist option.
I found the answer to that issue in the updated docs:
Persist your storage account credentials for the virtual machine
Before mounting to the file share, first persist your storage account credentials on the virtual machine. This step allows Windows to automatically reconnect to the file share when the virtual machine reboots. To persist your account credentials, run the cmdkey command from the PowerShell window on the virtual machine. Replace <storage-account-name> with the name of your storage account, and <storage-account-key> with your storage account key.
cmdkey /add:<storage-account-name>.file.core.windows.net /user:<storage-account-name> /pass:<storage-account-key>
Windows will now reconnect to your file share when the virtual machine reboots. You can verify that the share has been reconnected by running the net use command from a PowerShell window.
Note that credentials are persisted only in the context in which cmdkey runs. If you are developing an application that runs as a service, you will need to persist your credentials in that context as well.
Mount the file share using the persisted credentials
Once you have a remote connection to the virtual machine, you can run the net use command to mount the file share, using the following syntax. Replace <storage-account-name> with the name of your storage account, and <share-name> with the name of your File storage share.
net use <drive-letter>: \\<storage-account-name>.file.core.windows.net\<share-name>
For example:
net use z: \\samples.file.core.windows.net\logs
Since you persisted your storage account credentials in the previous step, you do not need to provide them with the net use command. If you have not already persisted your credentials, then include them as a parameter passed to the net use command, as shown in the following example:
net use <drive-letter>: \\<storage-account-name>.file.core.windows.net\<share-name> /user:<storage-account-name> <storage-account-key>
Edit:
I don't have an Azure VM free to test it on, but this works fine on a Server 2016 Hyper-V VM:
Script MapAzureShare
{
    GetScript =
    {
    }
    TestScript =
    {
        Test-Path W:
    }
    SetScript =
    {
        Invoke-Expression -Command "cmdkey /add:somestorage.file.core.windows.net /user:somestorage /pass:somekey"
        Invoke-Expression -Command "net use W: \\somestorage.file.core.windows.net\someshare"
    }
    PsDscRunAsCredential = $credential
}
In my brief testing the drive would only appear after the server was rebooted.
What I imagine is happening here:
DSC runs under the NT AUTHORITY\SYSTEM account, and unless the Credential attribute has been set, the computer account is used when pulling files from a network share. Looking at how Azure Files operates, permissions shouldn't be an issue, but running this whole process under NT AUTHORITY\SYSTEM could be. I suggest you try to run DSC as one of your VM's users and see if that works.
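If you want to try that, a minimal sketch of compiling and applying the configuration with a credential (the configuration name and output path are hypothetical; allowing plain-text passwords is for lab use only):
$credential = Get-Credential   # the VM user the Script resource should run as
$cd = @{ AllNodes = @( @{ NodeName = 'localhost'; PSDscAllowPlainTextPassword = $true; PSDscAllowDomainUser = $true } ) }
MapAzureShareConfig -ConfigurationData $cd -OutputPath C:\dsc   # hypothetical configuration wrapping the Script resource above
Start-DscConfiguration -Path C:\dsc -Wait -Verbose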
P.S. You could also try to perform the same operation against a VM with a network share where you are confident that the share/NTFS permissions are correct. You might need to enable anonymous access to the share for that to work.

Powershell remoting - cannot execute an exe as another user

I have a command-line program (C#) that encrypts config files based on a machine key.
A PowerShell script copies the build to a Target Server, modifies configs accordingly, and installs Windows services.
All the Windows services run as a local account (standard user, non-admin); let's call this account "locuser".
The Target Server is a Win 2012 R2 server. All of the above is achieved by PS remoting from the Build Server to the Target Server.
Now, I need to run the encryption command-line program as "locuser", so that the program can use the account-specific key to do the encryption.
I know that this can easily be achieved by calling the Start-Process cmdlet with the -Credential parameter. Well, here's the catch: the above works fine if I remote in (RDP) to the Target Server and then run Start-Process ... -Credential $cred from a PowerShell console.
However, I need this to work while I remote in (using my scripts) to the Target Server during deployment. When I remote in to the Target Server, I use credentials that have admin privileges.
I've tried the following:
I've granted "locuser" both "Full Control" and "Invoke (Execute)" permissions using the Set-PSSessionConfiguration -Name Microsoft.PowerShell -ShowSecurityDescriptorUI command. I've run this command for both Microsoft.PowerShell and Microsoft.PowerShell32 - still Access Denied.
I've edited "Local Security Policy" -> "Local Policies" -> "User Rights Assignment" -> "Impersonate a client after authentication" and added both the admin account (that I log in with) and the "locuser" account - still Access Denied.
I've also granted locuser admin rights - still Access Denied.
I'm pretty sure there is some configuration on the PS remoting side of things that I'm missing, but I can't figure out what, because all PowerShell throws me is an Access Denied error, with little to no useful information to troubleshoot further.
I've also checked the event logs for any traces, but to no avail.
You've fallen prey to the dreaded double hop: you're authenticating from computer A to computer B, then trying to authenticate again from computer B to computer C (which also happens to be B in this case).
If at all possible, you would be better off ending the session and starting a new one with the locuser credentials, then just calling Start-Process. Another, messier approach is to use schtasks, as sketched below.
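A rough sketch of the schtasks route (the task name, binary path, and password are placeholders): register a one-shot task that runs as locuser, fire it immediately, then clean up.
schtasks /Create /TN "EncryptConfigs" /TR "C:\deploy\encrypt.exe C:\svc\app.config" /SC ONCE /ST 00:00 /RU locuser /RP "P@ssw0rd" /F
schtasks /Run /TN "EncryptConfigs"
schtasks /Delete /TN "EncryptConfigs" /F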
I can tell you how to do it in the same session, but it's a bit messy and very complicated, and should only be a last resort.
On the originating server (Build Server):
Run the command Enable-WSManCredSSP -Role Client -DelegateComputer [name], where [name] is an IP or DNS address / range including any target servers (e.g. "192.168.1.*").
Open gpedit.msc, navigate to Computer Configuration\Administrative Templates\System\Credentials Delegation, and check that the rules Allow delegating fresh credentials and Allow delegating fresh credentials with NTLM... are enabled and include [name].
On the Target Server:
Run the command Enable-WSManCredSSP -Role Server.
Running the command:
Invoke-Command -ComputerName <targetserver> -Credential $cred -ScriptBlock {
    ## do stuff
    Invoke-Command -ComputerName . -Credential $locusercred -Authentication Credssp -ScriptBlock {
        Start-Process -FilePath $sc   # etc.
    }
}
Some things to be aware of:
Firstly, I used this setup to create a local session and then remote from there (so A-A-B instead of A-B-B), so the Group Policy stuff might be in the wrong place, but I'm pretty sure it's right.
Secondly, I found that credentials are a pain to get working in sessions (in this case $locusercred). I did get it going natively, but weirdly it suddenly couldn't decrypt the SecureString. I ended up saving a SecureString encrypted with a defined key to the registry so it can always be decrypted from any account; you may need to come up with your own solution there.
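For illustration, the registry trick looks roughly like this (the key bytes, registry path, and value name are invented for the example; anyone holding the key can decrypt, so guard it accordingly):
# One-time setup: encrypt with an explicit AES key instead of DPAPI.
$key = [byte[]](1..32)   # demo key only; never hardcode a real one
$enc = Read-Host "locuser password" -AsSecureString | ConvertFrom-SecureString -Key $key
New-Item -Path HKLM:\SOFTWARE\MyDeploy -Force | Out-Null
New-ItemProperty -Path HKLM:\SOFTWARE\MyDeploy -Name locuser -Value $enc -Force | Out-Null
# Later, from any account that knows the key:
$sec = (Get-ItemProperty -Path HKLM:\SOFTWARE\MyDeploy).locuser | ConvertTo-SecureString -Key $key
$locusercred = New-Object System.Management.Automation.PSCredential ("locuser", $sec)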
All this stuff is explained in the free eBook "The Secrets of PowerShell Remoting"; if you go for the double-hop approach, I recommend giving it a read.

Azure Automation Powershell DSC: Copy files off UNC Share

I'm trying to create a software distribution point to deploy internal applications to Azure virtual machines with Azure Automation DSC.
These MSI-based applications then get copied to the local VM by the DSC File resource and installed by the Package DSC resource.
I've tried to do this with an Azure Storage Account. The Storage File feature provides a UNC share which is accessible by username and password, and therefore seems like an easy way to create a software distribution point.
These are the crucial parts of my DSC configuration (*.file is just used for sample reasons):
$storageCredential = Get-AutomationPSCredential -Name "PackageStorage"

LocalConfigurationManager
{
    #DebugMode = 'All'
    RebootNodeIfNeeded = $true
}

File CopyPackagesFolder
{
    DestinationPath = "C:\packages"
    Credential      = $storageCredential
    Ensure          = "Present"
    SourcePath      = "\\*.file.core.windows.net\packages\"
    Type            = "Directory"
    Recurse         = $true
}
This only works the very first time it is executed by the LCM. After the first successful execution, it fails with the following message:
A specified logon session does not exist. It may already have been terminated. An error occurs when accessing the network share with the specified credential. Please make sure the credential is correct and the network share is accessible. Note that Credential should not be specified with the local path. The related file/directory is: \\*.file.core.windows.net\packages.
What am I missing?
Can you verify that the account has not changed before the second DSC run by doing a "net use" directly against the share with those credentials?
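Something along these lines, run on the node in the same context the LCM uses (the account name and key are placeholders):
net use \\<storage-account-name>.file.core.windows.net\packages /user:<storage-account-name> <storage-account-key>
net use   # list connections to confirm the share is reachable
net use \\<storage-account-name>.file.core.windows.net\packages /delete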