Cannot find service when attempting to remotely stop via Powershell / MSBuild - powershell

I have a task to deploy two Windows services created using Topshelf to a test server as part of our continuous integration build process.
My MSBuild target file is as follows:
<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Target Condition="'$(ConfigurationName)'=='Release'" Name="StopService">
<Exec Command="powershell.exe -NonInteractive -executionpolicy Unrestricted -command &quot;&amp; { &amp;&apos;.\ServiceStop.ps1&apos; } &quot;" ContinueOnError="true" />
</Target>
</Project>
This executes a PowerShell script (well, a couple of lines) called ServiceStop.ps1, situated in the same folder within the project:
$service = get-service -ComputerName MyServerName -Name 'MyServiceName'
stop-service -InputObject $service -Verbose
The problem
When I queue a new build from within TFS, the script does successfully execute; however, the get-service command fails to find the service in question - despite the fact that it is definitely there and running. The specific error from the build log is as follows:
Get-Service : Cannot find any service with service name 'MyServiceName' (TaskId:198)
When the script is run locally from my machine, the service on the remote machine is found and stopped successfully, making me think it is some sort of permissions issue.
What I've tried
I have very limited experience with Powershell. I read that credentials could be stored within a Powershell object like so:
$pw = Read-Host -AsSecureString "Enter password"
$pw | ConvertFrom-SecureString | Out-File -FilePath .\storedPassword.txt
$password = get-content .\storedPassword.txt | convertto-securestring
$credentials = new-object -typename System.Management.Automation.PSCredential -argumentlist "myAdminAccountName",$password
However, it would appear that Get-Service does not have any way to accept credentials (there is no -Credential parameter to pass them to).
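From what I could find, actually using those stored credentials would require something like the following untested sketch, which runs the stop inside a remote session that does accept a -Credential parameter (the server, service and account names are the same placeholders as above):
# Untested sketch: since Get-Service has no -Credential parameter, create the credential
# object and run the stop inside a remote session established with it.
$credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "myAdminAccountName", $password
Invoke-Command -ComputerName MyServerName -Credential $credentials -ScriptBlock {
    Stop-Service -Name 'MyServiceName' -Verbose
}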
I also experimented with using PSExec to remotely start and stop the service, but ran into similar issues.
Some questions I reviewed
Using PowerShell credentials without being prompted for a password
Powershell stop-service error: cannot find any service with service name
Powershell Get-WmiObject Access is denied
Saving credentials for reuse by powershell and error ConvertTo-SecureString : Key not valid for use in specified state
I have spent more time on this issue than I can really afford, so would appreciate any help / guidance / comments that may help.
Thank you!
UPDATE
I was able to confirm that the Powershell script was receiving information from the MSBuild task, as the log showed Powershell output.
However, I ran out of time to find a solution and instead worked around the issue by dropping the updated service binaries onto the target server and writing a PowerShell script that installed them from there.
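For anyone doing the same, a minimal sketch of such a local install script, assuming the standard Topshelf install/start verbs (paths and executable names are placeholders):
# Rough sketch with placeholder paths: Topshelf executables accept 'install' and 'start' verbs.
& 'C:\Deploy\MyServiceOne\MyServiceOne.exe' install
& 'C:\Deploy\MyServiceOne\MyServiceOne.exe' start
& 'C:\Deploy\MyServiceTwo\MyServiceTwo.exe' install
& 'C:\Deploy\MyServiceTwo\MyServiceTwo.exe' start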
Thanks a lot to those that commented on the issue.

It appears that nothing is being passed from the MSBuild target file to the PowerShell script. You are defining the name of the service in single quotes, so the script uses that hard-coded value rather than anything coming from the MSBuild target file.
I would suggest you find a way to pass this value in as a variable, otherwise the script won't be able to pick it up correctly.
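For example, a sketch of what I mean (the parameter and property names here are just illustrative): have ServiceStop.ps1 accept the names as parameters and let the Exec task pass them in.
# ServiceStop.ps1 - sketch: take the names as parameters instead of hard-coding them.
# The <Exec> task would then call something like:
#   powershell.exe -NonInteractive -ExecutionPolicy Unrestricted -File .\ServiceStop.ps1 -ServiceName $(ServiceName)
param(
    [string]$ComputerName = 'MyServerName',
    [string]$ServiceName  = 'MyServiceName'
)
$service = Get-Service -ComputerName $ComputerName -Name $ServiceName
Stop-Service -InputObject $service -Verbose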

I'm making a complete stab in the dark, but your service name needs to include the Topshelf instance name when you call Get-Service.
For example, your service might be called "MyWindowsService", but what you need to look for programmatically is "MyWindowsService$default".
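For example (names purely illustrative):
# Single quotes matter here so PowerShell does not try to expand $default as a variable.
Get-Service -ComputerName MyServerName -Name 'MyWindowsService$default'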

Related

DFS installation through GCE instance startup script not working

I'm trying to install DFS on a Windows 2012R2 instance in GCP. The instance has a startup script, and in the startup script, it does this:
$code = '
Write-Host "Setting up DFS Replication for Assets"
Start-Sleep 5
Add-DfsrMember -GroupName "CMS" -ComputerName $env:ComputerName
Start-Sleep 5
Set-DfsrMembership -GroupName "CMS" -FolderName "Assets" -ComputerName $env:ComputerName -ContentPath "C:\web\Proof_web\Website\Assets" -ReadOnly 1 -Force
Start-Sleep 5
Add-DfsrConnection -GroupName "CMS" -SourceComputerName gcp-staging-app-1 -DestinationComputerName $env:ComputerName
dfsrdiag StaticRPC /port:49200 /Member:$env:ComputerName
Start-Sleep 5
Restart-Service "DFSR"
Start-Sleep 5
Dfsrdiag PollAD /Member:gcp-staging\$env:computername
'
echo $code
Write-Host "Running powershell to install and configure DFS"
Start-Process -FilePath powershell.exe -ArgumentList $code -verb RunAs -WorkingDirectory C:\installers
I can see in the serial output that all these things look to be happening. When I RDP onto the instance and run a "Get-DFSReplicationGroup", I see what I expect, BUT when I open DFS Management mmc, there's nothing there. The "Namespaces" and "Replication" headers are there, but there's nothing underneath them.
I can then take the same code, run it manually in PowerShell ISE, and it all works as expected after a service restart on the member and the source instance.
Somebody, please tell me what sort of idiot I am. Be gentle.
Updates: Gave up on the startup script approach; pretty sure it's permissions. I'm finding articles where MS advisors say the user has to be a domain admin, which seems pretty whack. I'm now trying to run the script from a scheduled task, and it's the same issue: permissions. If I add the service account to delegated permissions in DFS, I now get this error:
"Could not add the computer to the replication group. Computer: WEB-QZL Replication group: "CMS" Retrieving the COM class factory for remote component with CLSID {CEFE3B33-B60F-44FC-BFE4-D354A1CE39EE} from machine WEB-QZL.domain.local failed due to the following error: 80070005 WEB-QZL.domain.local." Why is this process so overly complicated?
And just to clarify, if I add the svc account to domain admins in AD, it works. I don't want to have a svc account as a domain admin. Just tell me the specific permission, MS! This is killing me.
I spent a bit of time messing about with this and went with a run-once scheduled task in the end that calls the PS script, as I couldn't get it to work on startup without passing credentials in the script (which we didn't want to do), and I'm not aware of any way to change the account that startup scripts run under in GCP.
So, for a domain user / service account to be able to do this via the script called from a scheduled task, I had to give the service account permissions via GPO. The policy / right is called "Synchronize directory service data". Once the service account had that privilege, the scheduled task ran and the new member was added, directories targeted, etc.
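For anyone wanting a starting point, here is a rough sketch of that kind of run-once task registration (the account, script path and password are placeholders, not the actual values used):
# Sketch with placeholder names: register a run-once task that runs the DFS script
# under the domain service account that was granted the
# "Synchronize directory service data" right via GPO.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-ExecutionPolicy Bypass -File C:\installers\Configure-Dfs.ps1'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date).AddMinutes(5)
Register-ScheduledTask -TaskName 'Configure-DFS-Once' -Action $action -Trigger $trigger -User 'gcp-staging\svc_dfs' -Password 'PlaceholderPassword' -RunLevel Highest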
Thanks all for your help. Hope this helps someone else in the future.
All the best.

Configure SharePoint 2010 UPS with PowerShell

NOTE: I've only included the parts of my script I believe to be relevant. I can provide other parts as needed.
I'm at my wit's end on this one. Everything seems to work fine, except the UPS Synchronization service is stuck on "Starting", and I've left it over 12 hours.
It works fine when it's set up through the GUI, but I'm trying to automate every step possible. While automating I'm trying to include every option available from the GUI so that it's present if it ever needs to be changed.
Here's what I have so far:
$domain = "DOMAIN"
$fqdn = "fully.qualified.domain.name"
$admin_pass = "password"
New-SPManagedPath "personal" -WebApplication "http://portal.$($fqdn):9000/"
$upsPool = New-SPServiceApplicationPool -Name "SharePoint - UPS" -Account "$domain\spsvc"
$upsApp = New-SPProfileServiceApplication -Name "UPS" -ApplicationPool $upsPool -MySiteLocation "http://portal.$($fqdn):9000/" -MySiteManagedPath "personal" -ProfileDBName "UPS_ProfileDB" -ProfileSyncDBName "UPS_SyncDB" -SocialDBName "UPS_SocialDB" -SiteNamingConflictResolution "None"
New-SPProfileServiceApplicationProxy -ServiceApplication $upsApp -Name "UPS Proxy" -DefaultProxyGroup
$upsServ = Get-SPServiceInstance | Where-Object {$_.TypeName -eq "User Profile Service"}
Start-SPServiceInstance $upsServ.Id
$upsSync = Get-SPServiceInstance | Where-Object {$_.TypeName -eq "User Profile Synchronization Service"}
$upsApp.SetSynchronizationMachine("Portal", $upsSync.Id, "$domain\spfarm", $admin_pass)
$upsApp.Update()
Start-SPServiceInstance $upsSync.Id
I've tried running each line one at a time by just copying it directly into the shell window after defining the variables, and none of them give an error, but there has to be something the CA GUI does that I'm missing.
For anyone else that happens to come across this problem, have a look-see at this: http://www.harbar.net/archive/2010/10/30/avoiding-the-default-schema-issue-when-creating-the-user-profile.aspx
TL;DR: When you create the UPS through Central Admin, it creates a dbo user and schema on the SQL server using the farm account. When you do it through PowerShell, it creates a schema and user named after the farm account but still tries to manage SQL using the dbo schema, which of course fails terribly.
The workaround is to put my code into its own script file, and then use Start-Process to run the script as the farm account (it's a lot cleaner than the Job method described in the linked article):
$credential = New-Object System.Management.Automation.PSCredential("$domain\spfarm", $SecureString)
Start-Process -FilePath powershell.exe -ArgumentList "-File C:\upsSync.ps1" -Credential $credential
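Note that $SecureString here is assumed to already hold the farm account password, e.g. built from the $admin_pass variable defined earlier:
# Assumption: build the secure password from the plain-text $admin_pass defined above.
$SecureString = ConvertTo-SecureString $admin_pass -AsPlainText -Force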

Powershell - Copying File to Remote Host and Executing Install exe using WMI

EDITED: Here is my code now. The install file does copy to the remote host; however, the WMI portion does not install the .exe file, and no errors are returned. Perhaps this is a syntax error with WMI? Is there a way to just run the installer silently with PsExec? Thanks again for all the help, and sorry for the confusion:
#declare params
param (
[string]$finalCountdownPath = "",
[string]$slashes = "\\",
[string]$pathOnRemoteHost = "c:\temp\",
[string]$targetJavaComputer = "",
[string]$compname = "",
[string]$tempPathTarget = "\C$\temp\"
)
# user enters target host/computer
$targetJavaComputer = Read-Host "Enter the name of the computer on which you wish to install Java:"
[string]$compname = $slashes + $targetJavaComputer
[string]$finalCountdownPath = $compname + $tempPathTarget
#[string]$tempPathTarget2 =
#[string]$finalCountdownPath2 = $compname + $
# say copy install media to remote host
echo "Copying install file and running installer silently please wait..."
# create temp dir if does not exist, if exist copy install media
# if does not exist create dir, copy dummy file, copy install media
# either case will execute install of .exe via WMII
#[string]$finalCountdownPath = $compname + $tempPathTarget;
if ((Test-Path -Path $finalCountdownPath) )
{
copy c:\hdatools\java\jre-7u60-windows-i586.exe $finalCountdownPath
([WMICLASS]"\\$targetJavaComputer\ROOT\CIMV2:win32_process").Create("cmd.exe /c c:\temp\java\jre-7u60-windows-i586.exe /s /v`" /qn")
}
else {
New-Item -Path $finalCountdownPath -type directory -Force
copy c:\hdatools\dummy.txt $finalCountdownPath
copy "c:\hdatools\java\jre-7u60-windows-i586.exe" $finalCountdownPath
([WMICLASS]"\\$targetJavaComputer\ROOT\CIMV2:win32_process").Create("cmd.exe /c c:\temp\java\jre-7u60-windows-i586.exe /s /v`" /qn")
}
I was trying to get $Job = Invoke-Command -Session $Session -Scriptblock $Script to let me copy files on a different server, because I needed to offload the work from the server the script was running on. I was using the PowerShell Copy-Item to do it, but the running PowerShell script waits until the file is done copying before it returns.
I want it to take as little resources as possible on the server the PowerShell is running on, spawning off the copy process on another server. I tried various other schemes out there, but they didn't work, or not the way I needed them to. (They seemed kind of kludgey or too complex to me.) Maybe some of them could have worked? But I found a solution that I like, which works best for me and is pretty easy (except for some of the back-end configuration that may be needed if it is not already set up).
Background:
I am running a SQL Server job which invokes PowerShell to run a script that backs up databases, copies backup files, and deletes older backup files, with parameters passed into it. Our server is configured to allow PowerShell to run under a pre-configured Active Directory user account with SQL Server admin and dbo privileges, so it can also see various places on our network.
But we don't want it to take resources away from the main server. The PowerShell script backs up the database log file and then uses another server to asynchronously copy the file itself, so the SQL Server job/PowerShell doesn't have to wait for it. We wanted the copy to happen right after the backup.
Here is my new way, using WMI with Windows integrated security:
$ComputerName = "kithhelpdesk"
([Wmiclass]'Win32_Process').GetMethodParameters('Create')
Invoke-WmiMethod -ComputerName RemoteServerToRunOn -Path win32_process -Name create -ArgumentList 'powershell.exe -Command "Copy-Item -Path \\YourShareSource\SQLBackup\YourDatabase_2018-08-07_11-45.log.bak -Destination \\YourShareDestination\YourDatabase_2018-08-07_11-45.log.bak"'
Here is the same approach using passed-in credentials and building the argument-list variable:
$Username = "YouDomain\YourDomainUser"
$Password = "P#ssw0rd27"
$ComputerName = "RemoteServerToRunOn"
$FromFile = "\\YourShareSource\SQLBackup\YourDatabase_2018-08-07_11-45.log.bak"
$ToFile = "\\YourShareDestination\SQLBackup\YourDatabase_2018-08-07_11-45.log.bak"
$ArgumentList = 'powershell.exe -Command "Copy-Item -Path ' + $FromFile + ' -Destination ' + $ToFile + '"'
$SecurePassWord = ConvertTo-SecureString -AsPlainText $Password -Force
$Cred = New-Object -TypeName "System.Management.Automation.PSCredential" -ArgumentList $Username, $SecurePassWord
([Wmiclass]'Win32_Process').GetMethodParameters('Create')
Invoke-WmiMethod -ComputerName $ComputerName -Path win32_process -Name create -ArgumentList $ArgumentList -Credential $Cred
We think this second approach above is the preferred one to use.
You can also run a specific powershell that will do what you want it to do (even passing in parameters to it):
Invoke-WmiMethod -ComputerName RemoteServerToRunOn -Path win32_process -Name create -ArgumentList 'powershell.exe -file "C:\PS\Test1.ps1"'
This example could be changed to pass in parameters to the Test1.ps1 PowerShell script to make it more flexible and reusable. And you may also want to pass in a Credential like we used in a previous example above.
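For example, an illustrative sketch (the parameter names would have to match param() declarations inside Test1.ps1):
# Illustrative sketch: pass source and destination paths as parameters to the remote script.
$ArgumentList = 'powershell.exe -file "C:\PS\Test1.ps1" -SourceFile "\\YourShareSource\SQLBackup\YourDatabase_2018-08-07_11-45.log.bak" -DestinationFile "\\YourShareDestination\SQLBackup\YourDatabase_2018-08-07_11-45.log.bak"'
Invoke-WmiMethod -ComputerName $ComputerName -Path win32_process -Name create -ArgumentList $ArgumentList -Credential $Cred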
Help configuring WMI:
I got the main gist of this working from: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/invoke-wmimethod?view=powershell-5.1
But it may have also needed WMI configuration using:
https://helpcenter.gsx.com/hc/en-us/articles/202447926-How-to-Configure-Windows-Remote-PowerShell-Access-for-Non-Privileged-User-Accounts?flash_digest=bec1f6a29327161f08e1f2db77e64856b433cb5a
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/enable-psremoting?view=powershell-5.1
Powershell New-PSSession Access Denied - Administrator Account
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/invoke-wmimethod?view=powershell-5.1 (which I used to learn how to call Invoke-WmiMethod).
https://learn.microsoft.com/en-us/powershell/scripting/core-powershell/console/powershell.exe-command-line-help?view=powershell-6 (which I used for the powershell.exe command-line syntax).
I didn't use this one, but could have: How to execute a command in a remote computer?
I don't know for sure whether all of the steps in the web articles above are needed; I suspect not. I originally thought I was going to use the Invoke-Command PowerShell statement to copy the files on a remote server, and I have mostly left the configuration changes from those articles intact.
You will need a dedicated user set up in Active Directory, and you will need to configure the accounts that SQL Server and SQL Server Agent run under so that the calling PowerShell has the privileges it needs to access the network and to run PowerShell on the remote server. You may also need to configure SQL Server to allow SQL Server jobs or stored procedures to call PowerShell scripts, like I did. But that is outside the scope of this post; you can Google elsewhere for how to do it.

Orchestrator won't run PowerShell Cloud Exchange task

I'm having a problem getting a PowerShell script which queries objects in a cloud-based Exchange resource to work in an Orchestrator runbook.
The PowerShell script (which works correctly from my desktop computer's command line and when stepping through it in ISE) sets up a remote management session to the cloud and looks like this:
try
{
$user = "username@domain.com"
$pword = ConvertTo-SecureString -String "password" -AsPlainText -Force
$creds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $user, $pword
$o365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com -Credential $creds -Authentication Basic -AllowRedirection
import-pssession $o365 -allowclobber -prefix o365
get-o365Mailbox 'Doe, John'
}
catch
{
throw $_.exception
}
As I mentioned, it runs fine when I step through it in the editor on my desktop, but when executed inside the Orchestrator runbook it fails on the import-pssession command (because the $o365 variable is never set).
I've taken the PowerShell script and run it manually on the actual runbook server, and it works there as well as it does on my own desktop -- it's only when run inside an Orchestrator runbook that it won't function. I only have a few weeks' experience with Orchestrator and didn't know I'd run into a problem like this so quickly -- I am trying to run the script in a "Run .Net Script" activity with the language set to "PowerShell", which I believe is the recommended method.
I've also tried saving the script as a file on the runbook server and then using the "Run Program" activity to run PowerShell with that file (recommended by someone during my searching), and that doesn't work either.
Is the Orchestrator service account that's running the script a member of the Exchange RBAC role groups? If not, it won't be allowed to connect to those Exchange management sessions.
The problem turned out to be related to the client's firewall and proxy settings for the service account they set up to be used by Orchestrator. They (the clients) would not grant the service account Internet access as a matter of policy.
A couple of different solutions came up: One was installing the PowerShell integration pack from CodePlex and using that -- the CodePlex PowerShell activity allowed me to explicitly set the security context of the activity, which let me get around their firewall issue by running the activity under an account which did have Internet access.
The second solution was installing the Exchange Admin integration pack and configuring a connection to the cloud host. Using the "Run Exchange PowerShell Command" activity rather than the more generic "Run .NET script" activity also allowed the code to work as expected.
Orchestrator is still x86 and the commands in your script will only run in x64.
Test this in your x86 ISE and you'll see the same failure.
My workaround is to call the script using the "Run Program" activity within the System activities list (see also the alternative sketched after the settings below):
Program execution
Computer = I always start with initialize activity and then subscribe to the computer here
Program path: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Parameters: full path to the .ps1 of your script
Working folder: c:\temp
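As an alternative I haven't verified in this runbook: from the 32-bit "Run .Net Script" activity you can call the 64-bit PowerShell host via the sysnative alias, for example (script path is a placeholder):
# From a 32-bit process, 'sysnative' resolves to the real 64-bit System32,
# so this runs the Exchange Online script in a 64-bit PowerShell process.
& "$env:windir\sysnative\WindowsPowerShell\v1.0\powershell.exe" -NonInteractive -File "C:\PS\Get-O365Mailbox.ps1"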

WiX: installer fails in ConfigureUsers action when installing from remote computer

I have a WiX MSI installer for an ASP.NET website that runs on my_server. The package is installed via a very simple Powershell script install.ps1 that just calls msiexec with some parameters.
The problem
When I run install.ps1 directly on my_server, everything is fine. But when I want to run install.ps1 on my_server from a remote machine (e.g. build_server), the installation fails with error code 1603 and the MSI install log reveals the following error:
Action start 14:22:30: ConfigureUsers.
ConfigureUsers: Error 0x80070005: failed to add/remove User actions
CustomAction ConfigureUsers returned actual error code 1603
Any suggestions?
Extra information
I run install.ps1 remotely with the following command:
Invoke-Command -ComputerName my_server -ScriptBlock { path\to\install.ps1 } -Authentication Negotiate
I use the same user credentials on both my_server and build_server.
In the WiX definition, the website is set up with a specific user account for the app pool, like this:
<Component Id="AppPoolCmp"
Guid="a-fine-looking-guid"
KeyPath="yes">
<util:User Id="AppPoolUser"
CreateUser="no"
RemoveOnUninstall="no"
Name="[APP_POOL_IDENTITY_NAME]"
Password="[APP_POOL_IDENTITY_PWD]"
Domain="[APP_POOL_IDENTITY_DOMAIN]">
</util:User>
<iis:WebAppPool Id="AppPool"
Name="[APP_POOL_NAME]"
ManagedPipelineMode="Classic"
ManagedRuntimeVersion="v4.0"
Identity="other"
User="AppPoolUser">
<iis:RecycleTime Value="5:00" />
</iis:WebAppPool>
</Component>
This is likely to be the double-hop issue: your credentials are not valid beyond the scope of the first server.
Can you run the command with -Authentication CredSSP rather than Negotiate?
You will also need to specify credentials manually using the -Credential parameter, as well as set up the client and server for CredSSP:
Enable-WSManCredSSP -Role Client -DelegateComputer HOSTNAME -Force
Enable-WSManCredSSP -Role Server -Force
The steps are explained in more detail here.
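Putting the pieces together, the remote call would then look roughly like this (the account is a placeholder):
# Placeholder account; Get-Credential prompts for the password.
$cred = Get-Credential 'DOMAIN\deploy_user'
Invoke-Command -ComputerName my_server -ScriptBlock { path\to\install.ps1 } -Authentication CredSSP -Credential $cred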