PowerShell DSC client can't register with pull server

For the past few days, I have been trying to create a development/test environment where I can automate deployments with DSC.
I have been using WMF 5.1.
The pull server has been set up using the example Sample_xDscWebServiceRegistrationWithSecurityBestPractices from xPSDesiredStateConfiguration 5.1.0.0:
configuration Sample_xDscWebServiceRegistrationWithSecurityBestPractices
{
    param
    (
        [string[]]$NodeName = 'CORE-O-DSCPull.CORE.local',

        [ValidateNotNullOrEmpty()]
        [string] $certificateThumbPrint,

        [Parameter(HelpMessage='This should be a string with enough entropy (randomness) to protect the registration of clients to the pull server. We will use new GUID by default.')]
        [ValidateNotNullOrEmpty()]
        [string] $RegistrationKey # A guid that clients use to initiate conversation with pull server
    )

    Import-DSCResource -ModuleName xPSDesiredStateConfiguration -ModuleVersion '5.1.0.0'

    Node $NodeName
    {
        WindowsFeature DSCServiceFeature
        {
            Ensure = "Present"
            Name = "DSC-Service"
        }

        xDscWebService PSDSCPullServer
        {
            Ensure = "Present"
            EndpointName = "PSDSCPullServer"
            Port = 8080
            PhysicalPath = "$env:SystemDrive\inetpub\wwwroot\PSDSCPullServer"
            CertificateThumbPrint = $certificateThumbPrint
            ModulePath = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Modules"
            ConfigurationPath = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Configuration"
            State = "Started"
            DependsOn = "[WindowsFeature]DSCServiceFeature"
            RegistrationKeyPath = "$env:PROGRAMFILES\WindowsPowerShell\DscService"
            AcceptSelfSignedCertificates = $true
            UseSecurityBestPractices = $true
        }

        File RegistrationKeyFile
        {
            Ensure = 'Present'
            Type = 'File'
            DestinationPath = "$env:ProgramFiles\WindowsPowerShell\DscService\RegistrationKeys.txt"
            Contents = $RegistrationKey
        }
    }
}
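For reference, compiling and applying this configuration looks roughly like the following sketch (the certificate lookup and the output path are placeholders, not values from my environment):
# Placeholder values; substitute the pull server's real HTTPS certificate thumbprint.
$thumbprint = (Get-ChildItem Cert:\LocalMachine\My)[0].Thumbprint
$regKey = [guid]::NewGuid().Guid
Sample_xDscWebServiceRegistrationWithSecurityBestPractices -certificateThumbPrint $thumbprint -RegistrationKey $regKey -OutputPath C:\DscMofs
Start-DscConfiguration -Path C:\DscMofs -Wait -Verbose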
I apply the MOF file to my pull server without issues. I create a meta MOF using the same example:
[DSCLocalConfigurationManager()]
configuration Sample_MetaConfigurationToRegisterWithSecurePullServer
{
    param
    (
        [ValidateNotNullOrEmpty()]
        [string] $NodeName = 'CORE-O-DSCPull.CORE.local',

        [ValidateNotNullOrEmpty()]
        [string] $RegistrationKey, # same as the one used to set up the pull server in the previous configuration

        [ValidateNotNullOrEmpty()]
        [string] $ServerName = 'CORE-O-DSCPull.CORE.local' # node name of the pull server, same as $NodeName used in the previous configuration
    )

    Node $NodeName
    {
        Settings
        {
            RefreshMode = 'Pull'
        }

        ConfigurationRepositoryWeb CORE-O_PullSrv
        {
            ServerURL = "https://$ServerName`:8080/PSDSCPullServer.svc" # notice it is https
            RegistrationKey = $RegistrationKey
            ConfigurationNames = @('Basic')
        }
    }
}
I apply the LCM settings to my pull server without a problem.
I can create a simple basic.mof and use DSC to apply it. All this works fine.
Next, I create another meta.mof file for another node so it can register with my pull server. I use the same configuration as above, except for the node name, which I change to the name of the other node. I use the command:
Set-DscLocalConfigurationManager -ComputerName <nodename> -path <pathtonewmetamof>
This command works correctly. That machine can then use DSC to apply the same basic.mof without problems.
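For reference, pulling and applying the current configuration on the node can then be triggered like this (a sketch using the WMF 5 cmdlets):
# Trigger an immediate pull-and-apply cycle on the node.
Update-DscConfiguration -ComputerName <nodename> -Wait -Verbose
# Inspect the result of the last run.
Get-DscConfigurationStatus -CimSession <nodename>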
Here comes the problem:
I restart my pull server and node, create a new basic.mof, and try to apply it to both machines. This works fine on the pull server itself, but the node can no longer apply the basic.mof, because it will no longer register with my pull server. I have replicated this many times, installing both machines from scratch and configuring them: every time I restart the machines, registration stops working. See the error below:
Registration of the Dsc Agent with the server https://CORE-O-DSCPull.CORE.local:8080/PSDSCPullServer.svc failed. The underlying error is: Failed to register Dsc
Agent with AgentId 1FE837AA-C774-11E6-80B5-9830B2A0FAC0 with the server
https://core-o-dscpull.core.local:8080/PSDSCPullServer.svc/Nodes(AgentId='1FE837AA-C774-11E6-80B5-9830B2A0FAC0').
+ CategoryInfo : InvalidResult: (root/Microsoft/...gurationManager:String) [], CimException
+ FullyQualifiedErrorId : RegisterDscAgentCommandFailed,Microsoft.PowerShell.DesiredStateConfiguration.Commands.RegisterDscAgentCommand
+ PSComputerName : CORE-O-DC.CORE.local
So, my problem is that registration seems to work fine until I reboot the pull server. Does anyone have any idea what can cause this issue?

For those wondering if I managed to fix this: yes, I did.
It appears to be a bug in WMF 5.0. I was only using WMF 5.1 on the pull server, not on the node, so I had to update the node as well, and now it is working.

As explained in this blog entry, the low-level problem is that WMF 5.0 uses TLS 1.0 to communicate with the server, while WMF 5.1 no longer supports TLS 1.0.
In the aforementioned entry you will find two solutions: one that involves upgrading WMF on each and every node, and another that allows less secure connections by modifying the registry on the server.
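For completeness, the server-side registry change generally takes a form like this sketch (the exact keys are an assumption based on common SCHANNEL settings; check the blog entry before applying, and prefer upgrading the nodes where possible):
# Re-enable TLS 1.0 for incoming (server-side) SCHANNEL connections so WMF 5.0
# clients can still register. This weakens security; upgrading WMF is preferred.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server'
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name 'Enabled' -Value 1 -PropertyType DWord -Force | Out-Null
New-ItemProperty -Path $key -Name 'DisabledByDefault' -Value 0 -PropertyType DWord -Force | Out-Null
Restart-Computer # SCHANNEL changes take effect after a reboot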

Related

Partial DSC configuration for SMB

Unfortunately I can't find any examples on the internet for my scenario.
I have a DSC server with an SMB share. I want to deploy partial configurations as in https://learn.microsoft.com/de-de/powershell/dsc/pull-server/partialconfigs
But there are only examples for HTTP DSC pull servers, not SMB. Is this also possible with an SMB DSC server? If so, could I have an example?
I have found an example:
[DSCLocalConfigurationManager()]
configuration PartialConfig
{
    Node localhost
    {
        Settings
        {
            RefreshMode = 'Pull'
            ConfigurationID = 'a5f86baf-f17f-4778-8944-9cc99ec9f992'
            RebootNodeIfNeeded = $true
        }

        ConfigurationRepositoryShare SMBPull
        {
            SourcePath = '\\Server\Configurations'
            Name = 'SMBPull'
        }

        PartialConfiguration OSConfig
        {
            Description = 'Configuration for the Base OS'
            ConfigurationSource = '[ConfigurationRepositoryShare]SMBPull'
            RefreshMode = 'Pull'
        }

        PartialConfiguration SQLConfig
        {
            Description = 'Configuration for the SQL Server'
            DependsOn = '[PartialConfiguration]OSConfig'
            RefreshMode = 'Push'
        }
    }
}
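What also matters with an SMB share is how the pulled MOFs are named. If I read the partial-configuration docs correctly, a partial MOF pulled by ConfigurationID goes on the share as <PartialConfigurationName>.<ConfigurationID>.mof together with a checksum file; a sketch using the paths from the example above (verify the naming convention against the linked docs):
# On the authoring machine: publish the pulled partial (OSConfig) to the share.
$share = '\\Server\Configurations'
$id = 'a5f86baf-f17f-4778-8944-9cc99ec9f992'
Copy-Item -Path .\OSConfig.mof -Destination (Join-Path $share "OSConfig.$id.mof")
New-DscChecksum -Path (Join-Path $share "OSConfig.$id.mof") -Force
Note that SQLConfig uses RefreshMode = 'Push' in the example, so only OSConfig needs to be on the share; the SQL partial would be delivered with Publish-DscConfiguration instead.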

Auto creation of Service without DefaultServices on developer machines

At the recent Service Fabric Community Q&A 24th Edition there was a lot of discussion around using the DefaultService construct in the ApplicationManifest.xml and its drawbacks. Microsoft suggested omitting it from the ApplicationManifest entirely and instead modifying Deploy-FabricApplication.ps1 to construct a default implementation of an application, so developers still have a decent F5 experience.
So I have modified the Deploy-FabricApplication.ps1 to the following (this excerpt is the bottom of the script):
if ($IsUpgrade)
{
    $Action = "RegisterAndUpgrade"
    if ($DeployOnly)
    {
        $Action = "Register"
    }
    $UpgradeParameters = $publishProfile.UpgradeDeployment.Parameters
    if ($OverrideUpgradeBehavior -eq 'ForceUpgrade')
    {
        # Warning: Do not alter these upgrade parameters. It will create an inconsistency with Visual Studio's behavior.
        $UpgradeParameters = @{ UnmonitoredAuto = $true; Force = $true }
    }
    $PublishParameters['Action'] = $Action
    $PublishParameters['UpgradeParameters'] = $UpgradeParameters
    $PublishParameters['UnregisterUnusedVersions'] = $UnregisterUnusedApplicationVersionsAfterUpgrade
    Publish-UpgradedServiceFabricApplication @PublishParameters
}
else
{
    $Action = "RegisterAndCreate"
    if ($DeployOnly)
    {
        $Action = "Register"
    }
    $PublishParameters['Action'] = $Action
    $PublishParameters['OverwriteBehavior'] = $OverwriteBehavior
    $PublishParameters['SkipPackageValidation'] = $SkipPackageValidation
    Publish-NewServiceFabricApplication @PublishParameters
    #Get-ServiceFabricApplication
    New-ServiceFabricService -Stateless -ApplicationName "fabric:/Acme.Hierarchy" -ServiceTypeName "Acme.Hierarchy.HierarchyServiceType" -ServiceName "fabric:/Acme.Hierarchy/Acme.Hierarchy.HierarchyService" -InstanceCount 1 -PartitionSchemeSingleton
}
The above fails with the error
FabricElementNotFoundException
However, if you uncomment the line #Get-ServiceFabricApplication you will see that it does in fact return an application of
ApplicationName : fabric:/Acme.Hierarchy
ApplicationTypeName : Acme.HierarchyType
ApplicationTypeVersion : 1.0.0
ApplicationParameters : { "_WFDebugParams_" = "[{"CodePackageName":"Code","CodePackageLinkFolder":null,"ConfigPackageName":null,"ConfigPackageLinkFolder":null,"DataPackageName":null,"DataPackageLinkFolder":null,"LockFile":null,"WorkingFolder":null,"ServiceManifestName":"Quantium.RetailToolkit.Fabric.Hierarchy.HierarchyServicePkg","EntryPointType":"Main","DebugExePath":"C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Professional\\Common7\\Packages\\Debugger\\VsDebugLaunchNotify.exe","DebugArguments":"{6286e1ef-907b-4371-961c-d833ab9509dd} -p [ProcessId] -tid [ThreadId]","DebugParametersFile":null}]";
"Acme.Hierarchy.HierarchyService_InstanceCount" = "1" }
Create application succeeded.
and running the command that fails after the publish script has finished works perfectly.
Does anyone have a solution as to how I can get a good developer experience by not using DefaultServices and instead using Powershell scripts?
Thanks in advance
I have updated the answer to add more details on why default services should not be used (in production only).
On Service Fabric, you have two options to create your services:
the declarative way, done via the Default Services feature, where you describe the services that should run as part of your application in the ApplicationManifest;
and the dynamic (imperative) way, using PowerShell commands to create these services once the application is deployed.
The declarative way brings you the convenience of defining the expected structure of your application, so that Service Fabric does the job of creating and starting instances of your services according to the declaration in the ApplicationManifest. This convenience is very useful for development purposes: imagine if, every time you had to debug an application, it had to be built > packaged > deployed to Service Fabric > and then you had to manually start the many services that define your application. That would be too inconvenient, which is why default services come in handy.
Another scenario is when your application definition is immutable, meaning the same number of services and instances stays the same, without variation, throughout the time it is deployed in production.
But we know it is highly unlikely that these definitions will stay the same throughout the years, or even the hours in a day, because the idea of microservices is that they should be scalable and flexible, so that we can tweak the configuration of individual services independently of each other.
With default services, it would be too complex for the orchestration logic to identify what changes have been made to your services compared to the defaults specified in the original deployment and, in cases of conflict, to decide which configuration should have priority. For example:
The deployed default services define a service with 5 instances; after deployment you execute a PowerShell script to update it to 10 instances; then a new application upgrade comes in with the default services specifying 5 instances, or a new value of 8. What should happen? Which one is correct?
You add an extra named service (a service of the same type with another name) to an existing deployment that is not defined in the default services. What happens when the new deployment comes in and says this service should not exist? Delete it? And the data? How should this service be removed from production? What if it was removed by mistake during development?
A new version deletes an existing service and the deployment fails. How should the old service be recreated? And what if there was data to be migrated as part of the deployment?
A service has been renamed. How do I track that it was renamed, rather than treating it as removing the old one and adding a new one?
These are some of the many issues that can happen. This is why you should move away from default services and create them dynamically (imperatively). With dynamic services, Service Fabric receives an upgrade command and what happens is:
"This is my new application type package with new service type
definitions, whatever version you get deployed there, replace for this
version and keep the same configuration".
If a new configuration is required, you will provide as parameters for the deployment to override the old values or change it on a separate command. This will make things much simpler, as SF won't have to worry about different configurations and will just apply package changes to deployed services.
You can also find some nice information about these issues at these links:
How not to use service fabric default services
Service Fabric Q&A 10
Service Fabric Q&A 11
Regarding your main question:
Does anyone have a solution as to how I can get a good developer
experience by not using DefaultServices and instead using Powershell
scripts?
If you want a good developer experience, you should use default services; they are intended for exactly this: giving the developer a good experience without worrying about the services required to run at startup.
The trick is that during your CI process you should remove the default services from the application manifest before you package your application, so that you don't face the drawbacks later.
By removing the default services during CI (like a VSTS build), you keep the benefits of default services in the dev environment, you don't have to maintain PowerShell script versions (if a new version comes along), and the removal of the default services is a very simple PowerShell script added as a build step. Other than that, everything stays the same.
PS: I don't have a real script at hand right now, but it will be very simple, something like this:
$appManifest = "C:\Temp\ApplicationManifest.xml" # passed in as a parameter
[xml]$xml = Get-Content $appManifest
$xml.ApplicationManifest.DefaultServices.RemoveAll()
$xml.Save($appManifest)
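In a VSTS build, for example, this can run as a single PowerShell build step pointed at the ApplicationManifest.xml in the packaged output, just before the package is versioned and uploaded.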
The solution here is to use a script named Start-Service.ps1 in the Scripts folder of the Application project.
Below is an example script for the Data Aggregation sample that Microsoft has provided.
$cloud = $false
$singleNode = $true
$constrainedNodeTypes = $false

$lowkey = "-9223372036854775808"
$highkey = "9223372036854775807"
$countyLowKey = 0
$countyHighKey = 57000

$appName = "fabric:/DataAggregation"
$appType = "DataAggregationType"
$appInitialVersion = "1.0.0"

if ($singleNode)
{
    $webServiceInstanceCount = -1
    $deviceCreationInstanceCount = -1
    $countyServicePartitionCount = 1
    $deviceActorServicePartitionCount = 1
    $doctorServicePartitionCount = 1
}
else
{
    $webServiceInstanceCount = @{$true=-1;$false=1}[$cloud -eq $true]
    $deviceCreationInstanceCount = @{$true=-1;$false=1}[$cloud -eq $true]
    $countyServicePartitionCount = @{$true=10;$false=5}[$cloud -eq $true]
    $deviceActorServicePartitionCount = @{$true=15;$false=5}[$cloud -eq $true]
    $doctorServicePartitionCount = @{$true=100;$false=5}[$cloud -eq $true]

    if ($constrainedNodeTypes)
    {
        $webServiceConstraint = "NodeType == "
        $countyServiceConstraint = "NodeType == "
        $nationalServiceConstraint = "NodeType == "
        $deviceServiceConstraint = "NodeType == "
        $doctorServiceConstraint = "NodeType == "
        $deviceCreationServiceConstraint = "NodeType == "
    }
    else
    {
        $webServiceConstraint = ""
        $countyServiceConstraint = ""
        $nationalServiceConstraint = ""
        $deviceServiceConstraint = ""
        $doctorServiceConstraint = ""
        $deviceCreationServiceConstraint = ""
    }
}

$webServiceType = "DataAggregation.WebServiceType"
$webServiceName = "DataAggregation.WebService"

$nationalServiceType = "DataAggregation.NationalServiceType"
$nationalServiceName = "DataAggregation.NationalService"
$nationalServiceReplicaCount = @{$true=1;$false=3}[$singleNode -eq $true]

$countyServiceType = "DataAggregation.CountyServiceType"
$countyServiceName = "DataAggregation.CountyService"
$countyServiceReplicaCount = @{$true=1;$false=3}[$singleNode -eq $true]

$deviceCreationServiceType = "DataAggregation.DeviceCreationServiceType"
$deviceCreationServiceName = "DataAggregation.DeviceCreationService"

$doctorServiceType = "DataAggregation.DoctorServiceType"
$doctorServiceName = "DataAggregation.DoctorService"
$doctorServiceReplicaCount = @{$true=1;$false=3}[$singleNode -eq $true]

$deviceActorServiceType = "DeviceActorServiceType"
$deviceActorServiceName = "DataAggregation.DeviceActorService"
$deviceActorReplicaCount = @{$true=1;$false=3}[$singleNode -eq $true]
New-ServiceFabricService -ServiceTypeName $webServiceType -Stateless -ApplicationName $appName -ServiceName "$appName/$webServiceName" -PartitionSchemeSingleton -InstanceCount $webServiceInstanceCount -PlacementConstraint $webServiceConstraint -ServicePackageActivationMode ExclusiveProcess
#create national
New-ServiceFabricService -ServiceTypeName $nationalServiceType -Stateful -HasPersistedState -ApplicationName $appName -ServiceName "$appName/$nationalServiceName" -PartitionSchemeSingleton -MinReplicaSetSize $nationalServiceReplicaCount -TargetReplicaSetSize $nationalServiceReplicaCount -PlacementConstraint $nationalServiceConstraint -ServicePackageActivationMode ExclusiveProcess
#create county
New-ServiceFabricService -ServiceTypeName $countyServiceType -Stateful -HasPersistedState -ApplicationName $appName -ServiceName "$appName/$countyServiceName" -PartitionSchemeUniformInt64 -LowKey $countyLowKey -HighKey $countyHighKey -PartitionCount $countyServicePartitionCount -MinReplicaSetSize $countyServiceReplicaCount -TargetReplicaSetSize $countyServiceReplicaCount -PlacementConstraint $countyServiceConstraint -ServicePackageActivationMode ExclusiveProcess
#create doctor
New-ServiceFabricService -ServiceTypeName $doctorServiceType -Stateful -HasPersistedState -ApplicationName $appName -ServiceName "$appName/$doctorServiceName" -PartitionSchemeUniformInt64 -LowKey $lowkey -HighKey $highkey -PartitionCount $doctorServicePartitionCount -MinReplicaSetSize $doctorServiceReplicaCount -TargetReplicaSetSize $doctorServiceReplicaCount -PlacementConstraint $doctorServiceConstraint -ServicePackageActivationMode ExclusiveProcess
#create device
New-ServiceFabricService -ServiceTypeName $deviceActorServiceType -Stateful -HasPersistedState -ApplicationName $appName -ServiceName "$appName/$deviceActorServiceName" -PartitionSchemeUniformInt64 -LowKey $lowkey -HighKey $highkey -PartitionCount $deviceActorServicePartitionCount -MinReplicaSetSize $deviceActorReplicaCount -TargetReplicaSetSize $deviceActorReplicaCount -PlacementConstraint $deviceServiceConstraint -ServicePackageActivationMode ExclusiveProcess -Verbose
#create device creation
New-ServiceFabricService -ServiceTypeName $deviceCreationServiceType -Stateless -ApplicationName $appName -ServiceName "$appName/$deviceCreationServiceName" -PartitionSchemeSingleton -InstanceCount $deviceCreationInstanceCount -PlacementConstraint $deviceCreationServiceConstraint -ServicePackageActivationMode ExclusiveProcess
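To tie this back into the deployment flow, the script can be invoked from Deploy-FabricApplication.ps1 right after Publish-NewServiceFabricApplication succeeds. A sketch, assuming Start-Service.ps1 sits next to the deploy script in the Scripts folder (the path is an assumption about your layout):
# After Publish-NewServiceFabricApplication in Deploy-FabricApplication.ps1:
$startServices = Join-Path $PSScriptRoot 'Start-Service.ps1' # hypothetical location
if (Test-Path $startServices)
{
    & $startServices
}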

Accept certificate permanently during FtpWebRequest via PowerShell

Recently I encountered some problems connecting to an FTP server: a popup keeps asking for acceptance of the certificate.
I don't know how to overcome this via PowerShell when invoking $ftpRequest.GetResponse(). I found some solutions that override the certificate validation callback, like [System.Net.ServicePointManager]::ServerCertificateValidationCallback.
Those solutions are given in C#, and I don't know how to port them to PowerShell yet.
My code is as below
function Create-FtpDirectory {
    param(
        [Parameter(Mandatory=$true)]
        [string]
        $sourceuri,

        [Parameter(Mandatory=$true)]
        [string]
        $username,

        [Parameter(Mandatory=$true)]
        [string]
        $password
    )

    if ($sourceuri -match '\\$|\\\w+$') { throw 'sourceuri should end with a file name' }

    $ftprequest = [System.Net.FtpWebRequest]::Create($sourceuri)
    Write-Information -MessageData "Create folder to store backup (Get-FolderName -Path $global:backupFolder)"
    $ftprequest.Method = [System.Net.WebRequestMethods+Ftp]::MakeDirectory
    $ftprequest.UseBinary = $true
    $ftprequest.Credentials = New-Object System.Net.NetworkCredential($username, $password)
    $ftprequest.EnableSsl = $true

    $response = $ftprequest.GetResponse()
    # Subexpression needed so the property is expanded inside the string.
    Write-Host "Folder created successfully, status $($response.StatusDescription)"
    $response.Close()
}
[UPDATED] While searching for Invoke-RestMethod, I found this solution from a Microsoft example.
Caution: this actually accepts ANY certificate:
# Next, allow the use of self-signed SSL certificates.
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $True }
More information (thanks to @Nimral): https://learn.microsoft.com/en-us/dotnet/api/system.net.servicepointmanager.servercertificatevalidationcallback?view=netcore-3.1
It's a bit hacky, but you can use raw C# in PowerShell via Add-Type. Here's an example class I've used to be able to toggle certificate validation in the current PowerShell session.
if (-not ([System.Management.Automation.PSTypeName]'CertValidation').Type)
{
    Add-Type @"
using System.Net;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

public class CertValidation
{
    static bool IgnoreValidation(object o, X509Certificate c, X509Chain ch, SslPolicyErrors e)
    {
        return true;
    }

    public static void Ignore()
    {
        ServicePointManager.ServerCertificateValidationCallback = IgnoreValidation;
    }

    public static void Restore()
    {
        ServicePointManager.ServerCertificateValidationCallback = null;
    }
}
"@
}
Then you can use it prior to calling your function like this.
[CertValidation]::Ignore()
And later, restore default cert validation like this.
[CertValidation]::Restore()
Keep in mind though that it's much safer to just fix your service's certificate so that validation actually succeeds. Ignoring certificate validation should be your last resort if you have no control over the environment.
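Combined with the Create-FtpDirectory function from the question, a safe usage pattern is to scope the override with try/finally so validation is always restored (the arguments here are placeholders):
[CertValidation]::Ignore()
try
{
    Create-FtpDirectory -sourceuri 'ftp://ftp.example.com/backup/' -username 'user' -password 'pass'
}
finally
{
    # Restore default certificate validation even if the request throws.
    [CertValidation]::Restore()
}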

Ability to set CertificateID for LCM with February powershell 5

I'm trying to update my DSC deployment to use partial configurations to break up the configuration. For that I need to use a pull process instead of push.
When I try to apply the configuration for the LCM which looks something like:
[DscLocalConfigurationManager()]
Configuration CreateGESService
{
    param(
        [Parameter(Mandatory=$true)]
        [ValidateNotNullorEmpty()]
        [PsCredential] $InstallCredential,

        [Parameter(Mandatory=$true)]
        [ValidateNotNullorEmpty()]
        [PsCredential] $RunCredential
    )

    Node $AllNodes.NodeName
    {
        $hostVersion = (Get-Host).Version

        # the possible values for DebugMode changed in the February build
        if (($hostVersion.Major -ge 5) -and ($hostVersion.Minor -ge 0) -and ($hostVersion.Build -ge 9842)) {
            $debugMode = 'All'
        }
        else {
            $debugMode = $true
        }

        # set up the Local Configuration Manager
        Settings
        {
            #CertificateID = $node.Thumbprint
            # slower performance - and only available in WMF5
            # now we need to kill the dsc
            DebugMode = $debugMode
            ConfigurationMode = 'ApplyAndAutoCorrect'
            ConfigurationModeFrequencyMins = '15'
            AllowModuleOverwrite = $true
            RefreshMode = 'Push'
            ConfigurationID = $node.ConfigurationID
        }

        PartialConfiguration GetEventStoreConfiguration
        {
            Description = "Contains the stuff for GetEventStore Being Installed"
            ConfigurationSource = "[ConfigurationRepositoryShare]ConfigSource"
            RefreshMode = "Pull"
        }

        PartialConfiguration ExternalIntegrationConfiguration
        {
            Description = "Contains the stuff for External Integration"
            ConfigurationSource = "[ConfigurationRepositoryShare]ConfigSource"
            DependsOn = '[PartialConfiguration]GetEventStoreConfiguration'
            RefreshMode = "Pull"
        }

        PartialConfiguration ServeGroupSpike
        {
            Description = "Contains the stuff for External Integration"
            ConfigurationSource = "[ConfigurationRepositoryShare]ConfigSource"
            DependsOn = '[PartialConfiguration]ExternalIntegrationConfiguration'
            RefreshMode = "Pull"
        }

        ConfigurationRepositoryShare ConfigSource
        {
            SourcePath = "\\someServer\Shared\dscService\Configuration"
            Credential = $InstallCredential
        }

        ResourceRepositoryShare ResourceSource
        {
            SourcePath = "\\someServer\Shared\dscService\Resources"
            Credential = $InstallCredential
        }
    }
}
If I try to include the CertificateID I get an error like:
The property CertificateID of metaconfiguration is not compatible with the current version 2.0.0 of the configuration
document. This property only works with version greater than or equal to 1.0.0 . In case the version is greater, then
the property MinimumCompatibleVersion should be set to atleast 1.0.0 . Set these properties in the
OMI_ConfigurationDocument instance in the document and try again.
+ CategoryInfo : InvalidArgument: (root/Microsoft/...gurationManager:String) [], CimException
+ FullyQualifiedErrorId : MI RESULT 4
+ PSComputerName : SGSpike-Main
Naturally, when the configuration is applied it can't decrypt the credentials passed, and I get an error in the event viewer like:
Job {B37D5239-EDBA-11E4-80C2-00155D9ACA1F} :
WarningMessage An error occured while applying the partial configuration [PartialConfiguration]ExternalIntegrationConfiguration. The error message is :
The Local Configuration Manager is not configured with a certificate. Resource '[File]GpgProgram' in configuration 'ExternalIntegrationConfiguration' cannot be processed..
Any ideas how to do this? I had this working with the CertificateID when I was using a single configuration in a push model.
Even in the April 2015 drop the problem still seems to exist. Further diagnosis shows that you can either:
Not use partial configurations, or
Not use a certificate to encrypt credentials.
I opened an issue on Connect (with some more details) at https://connect.microsoft.com/PowerShell/Feedback/Details/1292678
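One workaround that has been suggested (a sketch, untested; the version strings are assumptions based on the error text) is to post-process the generated meta.mof so its OMI_ConfigurationDocument declares a MinimumCompatibleVersion of 2.0.0 before the LCM settings are applied:
# Hypothetical post-processing of the generated meta.mof.
$metaMof = '.\CreateGESService\SGSpike-Main.meta.mof'
(Get-Content $metaMof -Raw) -replace 'MinimumCompatibleVersion = "1.0.0"', 'MinimumCompatibleVersion = "2.0.0"' |
    Set-Content $metaMof
Set-DscLocalConfigurationManager -Path .\CreateGESService -Verbose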

how to establish and enter a remote session using runspace in powershell

I am trying to establish a session with a remote host B from my machine A, within C# code. I am using the runspace API for that. The code snippet is provided below:
Runspace runspace = RunspaceFactory.CreateRunspace();
runspace.Open();

// constructing the vmname parameter here
vmname = useralias + DateTime.Now.ToString();

Pipeline pipeline = runspace.CreatePipeline();
string scripttext = "$secpasswd = ConvertTo-SecureString '222_bbbb' -AsPlainText -Force";
string scripttext1 = "$mycreds = New-Object -TypeName System.Management.Automation.PSCredential('TS-TEST-09\\Administrator',$secpasswd)";
string scripttext2 = "$s = New-PSSession -ComputerName TS-TEST-09 -Credential $mycreds";
// not accepting a session string here, only a computer name is acceptable
string scripttext3 = "Enter-PSSession -Session $s";

//Command cmd = new Command(@"C:\mypath\helper.ps1", true);
//cmd.Parameters.Add("local_useralias", useralias);
//cmd.Parameters.Add("local_vmname", vmname);
//cmd.Parameters.Add("local_datastore", datastoredropdown.Text.ToString());
//cmd.Parameters.Add("local_architecture", architecturedropdown.Text.ToString());

pipeline.Commands.AddScript(scripttext);
pipeline.Commands.AddScript(scripttext1);
pipeline.Commands.AddScript(scripttext2);
pipeline.Commands.AddScript(scripttext3);
//pipeline.Commands.Add(cmd);

Collection<PSObject> results = pipeline.Invoke();
runspace.Close();
This code is expected to enter a session with machine TS-TEST-09 and invoke the script helper.ps1 existing on that machine (that part is currently commented out in the code, as I am not able to enter the session with the remote host).
Now the problem is that I can't enter the session $s using the -Session parameter (highlighted at scripttext3); however, I can enter one using the -ComputerName parameter.
The error that I get when using the -Session parameter in scripttext3 is:
at System.Management.Automation.Internal.Host.InternalHost.GetIHostSupportsInteractiveSession()
at System.Management.Automation.Internal.Host.InternalHost.PushRunspace(Runspace runspace)
at Microsoft.PowerShell.Commands.EnterPSSessionCommand.ProcessRecord()
at System.Management.Automation.Cmdlet.DoProcessRecord()
at System.Management.Automation.CommandProcessor.ProcessRecord()
--- end of inner exception stack trace ---
Does it mean I have to write a custom PSHost and add support for the Enter-PSSession cmdlet with this parameter?
Is there any alternative to make this command work?
Any help will be much appreciated.
Thanks,
Manish
The easiest way to open a remote session goes something like this:
string shell = "http://schemas.microsoft.com/powershell/Microsoft.PowerShell";
var target = new Uri("http://myserver/wsman");

var secured = new SecureString();
foreach (char letter in "mypassword")
{
    secured.AppendChar(letter);
}
secured.MakeReadOnly();

var credential = new PSCredential("username", secured);
var connectionInfo = new WSManConnectionInfo(target, shell, credential);
Runspace remote = RunspaceFactory.CreateRunspace(connectionInfo);
remote.Open();

using (var ps = PowerShell.Create())
{
    ps.Runspace = remote;
    // AddScript, not AddCommand: this is a script fragment, not a bare command name.
    ps.Commands.AddScript("'This is running on {0}.' -f (hostname)");
    Collection<PSObject> output = ps.Invoke();
}
You could also create remote pipelines from the remote runspace instance, but the new PowerShell object is a much more manageable way to do this (since PowerShell v2).
In PowerShell v3, you can just new up a WSManConnectionInfo and set the ComputerName property, as the other properties adopt the same defaults as above. Unfortunately these properties are read-only in v2, and you have to pass in the minimum as above. Other variants of the constructor will let you use Kerberos/Negotiate/CredSSP etc. for authentication.
-Oisin
I wanted to leave a comment on the solution, as it helped me a lot as well, but one thing I was missing was the port for WSMan, which is 5985.
So this
var target = new Uri("http://myserver/wsman");
should be
var target = new Uri("http://myserver:5985/wsman");
in my case.
Have you tried Invoke-Command? Enter-PSSession is there for interactive use (see help Enter-PSSession -Online).
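Enter-PSSession only works in an interactive host (hence the GetIHostSupportsInteractiveSession call in the stack trace), so in the script text from the question the usual fix is to run the remote script through the session with Invoke-Command instead, along these lines:
$s = New-PSSession -ComputerName TS-TEST-09 -Credential $mycreds
# helper.ps1 lives on the remote machine, so invoke it inside the session.
Invoke-Command -Session $s -ScriptBlock { & 'C:\mypath\helper.ps1' }
Remove-PSSession $s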