DFS installation through GCE instance startup script not working - powershell

I'm trying to install DFS on a Windows 2012R2 instance in GCP. The instance has a startup script, and in the startup script, it does this:
$code = '
Write-Host "Setting up DFS Replication for Assets"
Start-Sleep 5
Add-DfsrMember -GroupName "CMS" -ComputerName $env:ComputerName
Start-Sleep 5
Set-DfsrMembership -GroupName "CMS" -FolderName "Assets" -ComputerName $env:ComputerName -ContentPath "C:\web\Proof_web\Website\Assets" -ReadOnly 1 -Force
Start-Sleep 5
Add-DfsrConnection -GroupName "CMS" -SourceComputerName gcp-staging-app-1 -DestinationComputerName $env:ComputerName
dfsrdiag StaticRPC /port:49200 /Member:$env:ComputerName
Start-Sleep 5
Restart-Service "DFSR"
Start-Sleep 5
Dfsrdiag PollAD /Member:gcp-staging\$env:computername
'
echo $code
Write-Host "Running powershell to install and configure DFS"
Start-Process -FilePath powershell.exe -ArgumentList $code -verb RunAs -WorkingDirectory C:\installers
I can see in the serial output that all these things look to be happening. When I RDP onto the instance and run Get-DfsReplicationGroup, I see what I expect, BUT when I open the DFS Management MMC, there's nothing there. The "Namespaces" and "Replication" headers are there, but there's nothing underneath them.
I can then take the same code, run it manually in PowerShell ISE, and it all works as expected, after a service restart on the member and the source instance.
Somebody, please tell me what sort of idiot I am. Be gentle.
Updates: Gave up on the startup script approach; pretty sure it's permissions. I'm finding articles where MS advisors say the user has to be a domain admin, which seems pretty whack. I'm now trying to run the script from a scheduled task instead, and it's the same issue: permissions. If I add the service account to the delegated permissions in DFS, I now get this error:
"Could not add the computer to the replication group. Computer: WEB-QZL Replication group: "CMS" Retrieving the COM class factory for remote component with CLSID {CEFE3B33-B60F-44FC-BFE4-D354A1CE39EE} from machine WEB-QZL.domain.local failed due to the following error: 80070005 WEB-QZL.domain.local." Why is this process so overly complicated?
And just to clarify: if I add the svc account to Domain Admins in AD, it works. I don't want a svc account as a domain admin. Just tell me the specific permission, MS! This is killing me.

Spent a bit of time messing about with this now. I went with a run-once scheduled task in the end that calls the PS script, since I couldn't get it to work on startup without passing credentials in the script (which we didn't want to do), and I'm not aware of any way to change the account that startup scripts run under in GCP.
So, for a domain user / service account to be able to do this via the script called from a scheduled task, I had to give the service account permissions via GPO. The policy / right is called "Synchronize directory service data". Once the service account had that privilege, the scheduled task ran and the new member was added, directories targeted, etc.
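If you want to verify the right actually landed on the service account, a quick check (assuming the GPO has applied; "Synchronize directory service data" corresponds to the SeSyncAgentPrivilege privilege) is something like:

# In a session running as the service account:
gpupdate /force
whoami /priv | findstr /i SeSyncAgentPrivilege
# The privilege should be listed; it may show as Disabled until used,
# but its presence on the token is what matters.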
Thanks all for your help. Hope this helps someone else in the future.
All the best.

Related

Start PowerShell As A Group Managed Service Account

How do I start PowerShell with a gMSA account? I right-click on the PowerShell icon, run as different user, then input domain\msa$ with no password. It errors out about the credentials being incorrect.
I've installed the service account on the machine, and running Test-ADServiceAccount returns true. I've granted it the 'log on as a service' and 'log on as a batch job' rights (I don't really think this was needed, but I tried it anyway and it didn't work).
Any ideas?
There are different ways to set up tasks running a PS script with a gMSA; this is what I personally do because I find it easy.
First, develop your .ps1 to download the file from your FS with your user, or with a service account that has permission to download the file.
Once the script is tested and running correctly, set up and test a Scheduled Task with your user or the service account used in step 1. It's a good idea to configure all the triggers / extra settings beforehand, because once you update the scheduled task to use the gMSA you can't modify it without repeating the whole process.
Once the Scheduled Task is fully functional and all triggers set, you proceed to update the task using the gMSA instead of your user or service account. I personally use this:
$taskName = "My Scheduled Task Name"
$gMSAName = (Get-ADServiceAccount gMSA_Name).sAMAccountName # Or hardcode your gMSA Name with a $ at the end
$runLevel = "Highest" # Limited, etc
$principal = New-ScheduledTaskPrincipal -UserID $gmsaName -LogonType Password -RunLevel $runLevel
Set-ScheduledTask -TaskName $taskName -Principal $principal
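If you would rather create the task under the gMSA in one go instead of updating an existing one, a minimal sketch (the task name, script path, and trigger here are placeholders, not from the steps above):

$action    = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -File C:\Scripts\MyScript.ps1"
$trigger   = New-ScheduledTaskTrigger -Daily -At 3am
$principal = New-ScheduledTaskPrincipal -UserId "DOMAIN\gMSA_Name$" -LogonType Password -RunLevel Highest
Register-ScheduledTask -TaskName "My Scheduled Task Name" -Action $action -Trigger $trigger -Principal $principal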
After running this, if everything went OK, when you re-open Task Scheduler and find your task, you should see the name of your gMSA as the account it runs under.
Remember: if you need to edit the task later, Task Scheduler will force you to switch it to a different user, and the whole process of updating the task via PS will have to be repeated.
Things to keep in mind:
The gMSA will need the same permissions as you or your service account over the File Share to read / modify / etc.
The server where the task will run has to be a member of the security group associated with your gMSA:
(Get-ADServiceAccount gMSA_Name -Properties PrincipalsAllowedToRetrieveManagedPassword).PrincipalsAllowedToRetrieveManagedPassword
This is the associated AD Group and your task server MUST be a member of this group in order to use the gMSA.
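If the task server isn't a member yet, something along these lines should sort it (the group, computer, and gMSA names are illustrative):

# Add the task server to the group allowed to retrieve the managed password:
Add-ADGroupMember -Identity "gMSA_Allowed_Servers" -Members (Get-ADComputer "TASKSERVER01")
# Then, on TASKSERVER01 itself, once a reboot (or Kerberos ticket refresh) picks up the membership:
Install-ADServiceAccount -Identity gMSA_Name
Test-ADServiceAccount -Identity gMSA_Name    # should now return True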
psexec DOES work, at least interactively. On the machine where the gMSA is 'installed', use this:
psexec -u DOMAIN\gMSA_acct$ powershell.exe
When prompted for the password, just hit Enter. That will launch PowerShell as the gMSA. You can verify with a WHOAMI from that session.
If there is a commandline switch to make psexec skip the prompt for password (or possibly using <NUL STDIN redirect) you can make this work non-interactively as well.
However this doesn't change the recommendation to run the task as the gMSA. That is 100% correct, you should NOT be running tasks as LocalSystem, especially if you need to access remote resources. Perhaps the file copy task can be split out from the rest.

Double-Hop Errors when running Skype for Business Cmdlets

I am attempting to automate the Skype for Business Server installation process in PowerShell. I have a script that remotes into specified machines and begins preparing them as Front-End servers. The problem arises when certain SfB cmdlets (SfB commands are all of the form "Verb-Cs...", e.g. Get-CsUser or Get-CsPool) are run in remote sessions: they throw the double-hop error:
Exception: Active Directory error "-2147016672" occurred while searching for domain controllers in domain...
This is after running Enable-CsComputer, which enables the computer's role based on its definition in the topology (the topology was published successfully). The user object is in all required groups (RTCUniversalServerAdmins, Schema Admins, CsAdministrators) and has local admin rights on all SfB servers. Oddly enough, the command 'Import-CsConfiguration -LocalStore' does not throw errors, and it's in the same remote session. There may be other local or domain groups that I need to be in, but I cannot pinpoint exactly which, and I have not seen them documented in the Skype build guides. Skype commands that have parameters to specify targets or that just pull data, such as Get-CsPool or Get-CsAdForest, do not error, because they run in the local scope. Enable-CsComputer has no parameter for the computer name; it has to be executed from the machine itself.
Enabling CredSSP delegation on each server is not an option, and I don't understand why there is a "second hop" in this command! If the second hop were a resource on a file server or database, that would make sense and be easy to solve, but in this case I can't track it. Can anyone tell me what I may be missing?
Here's a code sample to try and illustrate. From the jumbox I get the pool data to create an array, and a session is opened to each machine:
$ServerArray = Get-CsPool -Identity $poolName
$i = 0
$SessionArray = @{}  # hashtable of sessions, keyed by index
foreach ($server in $ServerArray.Computers) { $SessionArray[$i++] = New-PSSession -ComputerName $server }
foreach ($session in $SessionArray.Values) {
    Invoke-Command -Session $session -ScriptBlock {
        # remote commands:
        Import-CsConfiguration -FileName <config file path> -LocalStore; # no errors
        Enable-CsReplica; # no errors
        Enable-CsComputer; # double-hop error here
    }
}
If I log into that machine and run the same command, it executes fine but the intention of the project is to automate it on an arbitrary number of machines.
It looks like it's just trying to authenticate to a domain controller, which is reasonable. You'll have to approach this like any other double-hop issue.
Microsoft has an article dedicated to the double hop issue, and has a few solutions other than CredSSP that you can look at: Making the second hop in PowerShell Remoting
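One of the alternatives that article describes is resource-based Kerberos constrained delegation, which avoids CredSSP entirely. Whether it fits here depends on what Enable-CsComputer is actually authenticating to, but the AD side is small; a sketch with placeholder server names:

# Allow the middle machine (where the remote scriptblock runs) to delegate
# credentials to the resource machine; run from a host with the AD module.
$frontEnd = Get-ADComputer -Identity "SfbFrontEnd01"    # hop machine (placeholder)
Set-ADComputer -Identity "ResourceServer01" -PrincipalsAllowedToDelegateToAccount $frontEnd
# Then clear cached Kerberos tickets on the hop machine so the change takes effect:
# klist purge -li 0x3e7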

Powershell remoting - cannot execute an exe as another user

I have a command-line program (C#) that encrypts config files based on the machine key.
A PowerShell script copies the build to a Target Server, modifies the configs accordingly, and installs Windows services.
All the Windows services run as a local account (standard user, non-admin) - let's call this account "locuser".
The Target Server is a Win 2012 R2 server. All of the above is achieved by PS remoting from the Build Server to the Target Server.
Now, I need to run the encrypt command-line program as "locuser", so that the program can use the account-specific key to do the encryption.
I know that this can be easily achieved by calling the Start-Process cmdlet with the -Credential parameter. Well, here's the catch: the above works fine if I remote in (RDP) to the Target Server and then run Start-Process .... -Credential $cred from a PowerShell console.
However, I need this to work while I remote in (using my scripts) to the Target Server during deployment. When I remote in to the Target Server, I use credentials that have admin privileges.
I've tried the following
I've granted "locuser" both "Full Control" and "Invoke (Execute)" permissions by using the Set-PSSessionConfiguration -Name Microsoft.PowerShell -ShowSecurityDescriptorUI command. I've run this command for both Microsoft.Powershell and Microsoft.Powershell32 - Still get Access Denied
I've edited the "Local Security Policy"->"Local Policies"->"User Rights Assignment"->Impersonate a client after authentication - and added both the Admin account (that I login with) and the "locuser" account - Still get Access Denied
I've also granted locuser admin rights - Still get Access Denied
I'm pretty sure, there is some configuration on the PS Remoting Side of things that I'm missing out but can't figure out what - because all Powershell throws me is a Access Denied error (see screenshot) with little to no useful information to troubleshoot further.
Also, checked Event logs for any traces but to no avail.
You've fallen prey to the dreaded double hop. Basically, you're authenticating from computer A to computer B, then trying to authenticate again from computer B to computer C (which also happens to be B in this case).
If at all possible, you would be better off ending the session and starting a new one with the locuser credentials, then just calling Start-Process. Another, messier approach is to use schtasks.
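A minimal sketch of that first suggestion, assuming locuser has been granted access to the session configuration (as you attempted with Set-PSSessionConfiguration) and with placeholder machine and file names:

$locuserCred = Get-Credential "TARGETSERVER\locuser"
Invoke-Command -ComputerName TARGETSERVER -Credential $locuserCred -ScriptBlock {
    # Running directly as locuser now, so no Start-Process -Credential hop is needed,
    # and DPAPI / machine-key operations see locuser's profile.
    & "C:\deploy\encrypt.exe" "C:\inetpub\app\web.config"   # hypothetical paths
}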
I can tell you how to do it in the same session but it's a bit messy and very complicated, and should only be a last resort:
On the originating server (Build Server):
Run the command Enable-WSManCredSSP -Role Client -DelegateComputer [name], where [name] is an IP or DNS address / range including any target servers (e.g. "192.168.1.*")
Open GPEdit.msc, navigate to Computer Configuration\Administrative Templates\System\Credentials Delegation and check that the rules Allow delegating fresh credentials and Allow delegating fresh credentials with NTLM... are enabled and include [name]
On the Target Server:
Run the command Enable-WSManCredSSP -Role Server
Running the command:
Invoke-Command -ComputerName [targetserver] [-Credential $cred] -ScriptBlock {
    ## do stuff
    Invoke-Command -ComputerName . -Credential $locusercred -Authentication Credssp -ScriptBlock {
        Start-Process -FilePath $sc #etc
    }
}
Some things to be aware of:
Firstly, I used this setup to create a local session and then remote from there (so A-A-B instead of A-B-B), so the Group Policy stuff might be in the wrong place, but I'm pretty sure it's right.
Secondly, I found that credentials are a pain to get working in sessions (in this case $locusercred). I did get it going natively, but weirdly it suddenly couldn't decrypt the SecureString. I ended up saving a SecureString encrypted with a defined key to the registry so it can always be decrypted from any account; you may need to come up with your own solution there.
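For reference, the "defined key" approach looks roughly like this (a sketch; the answer above stored the string in the registry, a file is used here, and keeping the key next to the secret reduces this to obfuscation rather than protection):

# Export once: encrypt with an explicit AES key instead of DPAPI,
# so any account that knows the key can decrypt it later.
$key = [byte[]](1..32)                                   # example key only; generate your own
$secure = Read-Host "locuser password" -AsSecureString
ConvertFrom-SecureString -SecureString $secure -Key $key | Set-Content C:\secure\locuser.txt

# Import from any account (e.g. inside the remote session):
$secure = Get-Content C:\secure\locuser.txt | ConvertTo-SecureString -Key $key
$locusercred = New-Object System.Management.Automation.PSCredential("TARGETSERVER\locuser", $secure)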
All this stuff is explained in the free eBook "The Secrets of PowerShell Remoting", if you go for the double-hop approach I recommend giving it a read.

How to remotely register static ETW manifests as part of a website deployment?

I'm doing a pilot effort to use the new EventSource (Microsoft.Diagnostics.Tracing.EventSource from NuGet) and its new support for ETW channels in order to write to the Windows event log. The code is in place, and it writes properly to my workstation's event log. I'm thrilled.
Now comes the difficult part. The application that's taking advantage of this capability is a web service, and we deploy it with WebDeploy as part of a build-deploy-test system. Usage of ETW channels requires static registration of provider manifests via wevtutil.exe, and while the EventSource documentation states that this is best done as part of an installer, that seems a bit outside WebDeploy's capabilities.
Our aim is that we would be able to automatically uninstall the manifest resident on the target server immediately before executing the webdeploy package, and then to import the new manifest after the webdeploy sync has completed. We're not set on this, but it seems like the most sensible way.
For that reason, it seems like maybe this is something that powershell remoting might be able to solve, but it's not an area I know much about.
Has anyone done something like this? Is there a better or simpler way?
There are only a few requirements here: A) the remote machine must have PowerShell remoting enabled, which also means it must have PowerShell 2.0 or higher; B) the script running on the local machine must be able to run as administrator, and the credentials used must have admin privileges on the remote machine. If you can meet those requirements, this should be cake.
On the remote machine you need to execute two commands to enable remoting:
Set-ExecutionPolicy RemoteSigned
Enable-PSRemoting -Force
Then on the local machine from an elevated prompt you should be able to execute something like this from a script:
# these two paths assume these files have been copied to the remote computer and to a directory
# in which the service account has privileges to read i.e. not under a userprofile dir.
$etwDllPath = 'c:\somepath\myassembly.mysourcename.etwManifest.dll'
$etwManPath = 'c:\somepath\myassembly.mysourcename.etwManifest.man'
$s = New-PSSession -ComputerName <remoteComputerName>
# uninstall any previously registered manifest, then install the new one, pointing it at the resource/message DLL
Invoke-Command -Session $s {param($man) wevtutil.exe um $man} -ArgumentList $etwManPath
Invoke-Command -Session $s {param($man,$dll) wevtutil.exe im $man /rf:$dll /mf:$dll} -ArgumentList $etwManPath, $etwDllPath
Remove-PSSession $s
If you can avoid a remote path with spaces, try to. It will make this easier. :-)

Orchestrator won't run PowerShell Cloud Exchange task

I'm having a problem getting a PowerShell script which queries objects in a cloud-based Exchange resource to work in an Orchestrator runbook.
The PowerShell script (which works correctly from my desktop computer's command line and when stepping through it in ISE) sets up a remote management session to the cloud and looks like this:
try
{
    $user = "username@domain.com"
    $pword = ConvertTo-SecureString -String "password" -AsPlainText -Force
    $creds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $user, $pword
    $o365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com -Credential $creds -Authentication Basic -AllowRedirection
    Import-PSSession $o365 -AllowClobber -Prefix o365
    Get-o365Mailbox 'Doe, John'
}
catch
{
    throw $_.Exception
}
As I mentioned, it runs fine when I step through it in the editor on my desktop, but when executed inside the Orchestrator runbook it fails on the import-pssession command (because $o365 never gets set).
I've taken the PowerShell script and run it manually on the actual runbook server, and it works there as well as it does on my own desktop -- it's only when run inside an Orchestrator runbook that it won't function. I only have a few weeks' experience with Orchestrator and didn't know I'd run into a problem like this so quickly. I am trying to run the script in a "Run .Net Script" activity with the language set to "PowerShell", which I believe is the recommended method.
I've tried saving the script as a file on the runbook server and then using the "Run Program" activity to run PowerShell with this file (recommended by someone during my searching), and that doesn't work either.
Is the Orchestrator service account that's running the script a member of the Exchange RBAC role groups? If not, it won't be allowed to connect to those Exchange management sessions.
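A quick way to check from an Exchange management session (the role-group and account names here are only examples):

# List members of a role group that can run recipient cmdlets like Get-Mailbox:
Get-RoleGroupMember "View-Only Organization Management"
# Add the Orchestrator service account if it's missing:
Add-RoleGroupMember -Identity "View-Only Organization Management" -Member "svc-orchestrator"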
The problem turned out to be related to the client's firewall and proxy settings for the service account they set up to be used by Orchestrator. They (the clients) would not grant the service account Internet access as a matter of policy.
A couple of different solutions came up: One was installing the PowerShell integration pack from CodePlex and using that -- the CodePlex PowerShell activity allowed me to explicitly set the security context of the activity, which let me get around their firewall issue by running the activity under an account which did have Internet access.
The second solution was installing the Exchange Admin integration pack and configuring a connection to the cloud host. Using the "Run Exchange PowerShell Command" activity rather than the more generic "Run .NET script" activity also allowed the code to work as expected.
Orchestrator is still x86, and the commands in your script will only run in x64.
Test this in your x86 ISE and you'll see the same failure.
My workaround is to call the script using the "Run Program" activity from the System activities list:
Program execution
Computer: I always start with an Initialize Data activity and then subscribe to the computer here
Program path: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Parameters: the full path to the .ps1 of your script
Working folder: c:\temp
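One caveat on that program path: if the process launching PowerShell is itself 32-bit (as Orchestrator's activities are), WoW64 file-system redirection will silently substitute the x86 PowerShell from SysWOW64. From a 32-bit host, the sysnative alias reaches the real 64-bit binary, so this may be the path you actually need:
Program path: C:\Windows\sysnative\WindowsPowerShell\v1.0\powershell.exe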