Using Partial Configuration without hardcoding configuration names in the LCM settings - PowerShell

I would like to combine a few small DSC configurations into one MOF file. I know PowerShell v5 offers Partial Configurations, but to use this feature I have to reconfigure the LCM on the target node every time the number of configurations changes (which is not an option, because I want to configure the LCM manually only once, on the first DSC configuration).
Unfortunately, DSC does not allow reconfiguring the LCM via a DSC resource, which means I cannot change this setting in pull mode on the local machine.
I'm still wondering why the LCM does not support "*" in the PartialConfiguration property; it would be very useful, especially since every configuration uses a GUID anyway (*.GUID.MOF).
Have you ever found a way to work around this problem?
Thanks in advance.

DSC doesn't require all partial configuration fragments to be available at the time the configuration is applied, so you can declare many partial configurations in the LCM ahead of time that may only become available later (see the sketch after this list). This gives you some flexibility and avoids modifying the LCM settings every time you need to add another partial configuration. I would also suggest opening a UserVoice request at https://windowsserver.uservoice.com/forums/301869-powershell/category/148047-desired-state-configuration-dsc for:
Allowing '*' in partial configuration.
Allowing updating meta-config from pull server.
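For illustration, here is a minimal sketch of a meta-configuration that declares several partial configurations up front, even if some fragments do not exist on the pull server yet. The pull server URL, configuration names and the GUID are all hypothetical:

    [DSCLocalConfigurationManager()]
    configuration LcmWithPartials
    {
        Node localhost
        {
            Settings
            {
                RefreshMode          = 'Pull'
                ConfigurationID      = '11111111-2222-3333-4444-555555555555'  # example GUID
                RefreshFrequencyMins = 30
            }

            ConfigurationRepositoryWeb PullServer
            {
                ServerURL = 'https://pullserver:8080/PSDSCPullServer.svc'  # hypothetical pull server
            }

            # Declared ahead of time; the fragments can be published to the pull server later
            PartialConfiguration OSConfig
            {
                ConfigurationSource = @('[ConfigurationRepositoryWeb]PullServer')
            }

            PartialConfiguration AppConfig
            {
                ConfigurationSource = @('[ConfigurationRepositoryWeb]PullServer')
                DependsOn           = '[PartialConfiguration]OSConfig'
            }
        }
    }

    LcmWithPartials -OutputPath .\LcmMeta
    Set-DscLocalConfigurationManager -Path .\LcmMeta -Verbose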

Related

Setting up VM in Azure from scratch. How to copy files to VM drive preferably by DSC?

I have a bunch of tools that I copy to the destination machine in Azure every time I create a new one. Here is how I do it now (roughly as sketched after this list):
zip the folder with the tools
open a PowerShell session to the VM
use Copy-Item -ToSession to copy the archive
unzip it on the VM
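For context, the current approach looks roughly like this (paths and the VM name are placeholders):

    # Zip the tools folder locally
    Compress-Archive -Path 'C:\Tools\*' -DestinationPath 'C:\Temp\tools.zip' -Force

    # Open a remoting session to the Azure VM
    $session = New-PSSession -ComputerName 'my-azure-vm' -Credential (Get-Credential)

    # Copy the archive into the session and unzip it on the VM
    Copy-Item -Path 'C:\Temp\tools.zip' -Destination 'C:\Temp\tools.zip' -ToSession $session
    Invoke-Command -Session $session -ScriptBlock {
        Expand-Archive -Path 'C:\Temp\tools.zip' -DestinationPath 'C:\Tools' -Force
    }

    Remove-PSSession $session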
This somewhat works. However, it's not ideal - e.g. updating one tool is not as easy as it should be.
I would like to add this to a PowerShell DSC configuration. I tried to find something like that, but every File resource I found so far uses network shares.
Q: Is there any official way to achieve the same result?
Q: If not, is there any sensible way to achieve this? DSC was my first choice, but it is not mandatory.
I consider this a basic requirement and would expect it to be one of the scenarios people commonly try to solve.
Note 1: I use DSC in push mode.
Note 2: We were trying Ansible to cover the whole process (VM creation, LB, NSG, VPN, ..., VM setup - registry, FW, ...), but found out that not everything in Azure is possible with Ansible (IIRC gateways, VPNs, ...).

How to make a self-updating pipeline in Concourse

I would like to make a pipeline whose first step checks its own configuration and updates itself if needed.
What tool / API should I use for this? Is there a Docker image that has it installed for the correct Concourse version? What is the advised way to authenticate against Concourse from such a task?
Regarding the previous answer suggesting the Fly binary, see the Fly resource.
However, having a similar request, I am going to try with the Pipeline resource. It seems more specific and has var injection solved directly through parameters.
I still have to try it out, but it seems to me that it would be more efficient to have a single pipeline which updates all pipelines, rather than having to insert this job into every pipeline.
Also, a specific pipeline should not be concerned with itself, just the source code it builds (or whatever it does). If you want to start a pipeline when its config file changes, this could be done by modifying a triggering resource, e.g. pushing an empty "pipeline changed" commit.
Naively, it'd be a task which gets the repo the pipeline is committed to and runs fly set-pipeline to update the configuration (a sketch follows this list). However, there are a few gotchas here:
fly binary: you'll want the fly executable to be available to the container which runs this task, and it should be the same version of fly as the Concourse being targeted. That probably means you should download it directly via curl from the Concourse host.
Authenticating with the Concourse server: you'll need to provide credentials for fly to use - probably via parameters.
Parameter updates: if new parameters become needed, you'll need a single source for all the parameters that need to be set, and use --load-vars-from rather than just --var. My group uses LastPass notes with a bunch of variables saved in them, downloaded via the lpass tool, but that gets hard if you use 2FA or similar.
Moving the server: you will need the external address of the Concourse to be injected as a parameter as well, if you want to be resilient to it changing.
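As a rough illustration of those points, the manual equivalent of such a task could look like this in PowerShell (the Concourse URL, target name, pipeline name and paths are all placeholders, and credentials are assumed to be injected as parameters/environment variables):

    $concourseUrl = $env:CONCOURSE_URL   # injected as a parameter so a server move doesn't break the task

    # Download fly from the targeted Concourse itself so the versions always match
    Invoke-WebRequest -Uri "$concourseUrl/api/v1/cli?arch=amd64&platform=windows" -OutFile 'fly.exe'

    # Authenticate with credentials supplied via parameters
    .\fly.exe -t ci login -c $concourseUrl -u $env:FLY_USERNAME -p $env:FLY_PASSWORD

    # Re-set the pipeline from the repo it is committed to, loading all vars from a single file
    .\fly.exe -t ci set-pipeline -n -p my-pipeline `
        -c .\ci\pipeline.yml `
        --load-vars-from .\ci\params.yml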

Managing multiple servers in an environment with PowerShell DSC

I want to manage the servers in our staging pipeline with PowerShell DSC (push model). The servers map to the environments as follows:
Development: 1 server
Test: 2 servers
UAT: 2 servers
Production: 2 servers
The server(s) within one environment have the same configuration, but the configuration differs between environments. I wanted to go with the push model because then I do not have to set up a pull server.
PowerShell DSC offers the option to manage the configuration via configuration data in a separate file. But this comes with the caveat that you need to specify a node name that matches the respective server name (see the sketch below). That means I need to duplicate the configuration data for each server in one environment, and when changing the configuration I need to remember that there is a second place where I need to update the value.
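To illustrate the caveat, the configuration data ends up looking something like this (server names and settings are made up), with the same values repeated for every node in an environment:

    $configData = @{
        AllNodes = @(
            @{ NodeName = 'UATWEB01'; Environment = 'UAT'; WebsitePort = 8080 }
            @{ NodeName = 'UATWEB02'; Environment = 'UAT'; WebsitePort = 8080 }  # duplicated per server
        )
    }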
Additionally, I do not really care about the server names. If the servers are exchanged for new ones tomorrow, the configuration relevant to the environment should simply be applied.
What is the best practice approach to manage multiple servers within one environment with the same configuration?
Check the links below; I think they cover this scenario.
Using A Single DSC Configuration for Multiple Servers
DSC ConfigurationNames with multiple nodes
The MOF file that gets produced does not contain the node name inside it. So as long as you build a generic configuration, you can rename it after the fact at deploy time.
You can create one config for each environment with some generic name, then enumerate the list of servers and make a copy of the config for each one with that server's name.
You can take it a step further. Have a share where you create a folder for each server that matches the server's name, then copy the MOF for that server into that folder with the name localhost.mof. You can then run Start-DscConfiguration -Path \\server\share\$env:computername from that machine as part of your deployment script (a sketch follows below).
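A minimal sketch of that approach, assuming a configuration compiled for Node 'localhost' and a hypothetical \\deploy\dscshare share:

    $servers = @('UATWEB01', 'UATWEB02')          # servers in one environment

    # Compile one generic configuration; with Node 'localhost' this produces .\Output\localhost.mof
    UatEnvironmentConfig -OutputPath .\Output

    foreach ($server in $servers) {
        $target = "\\deploy\dscshare\$server"
        New-Item -Path $target -ItemType Directory -Force | Out-Null
        Copy-Item -Path .\Output\localhost.mof -Destination "$target\localhost.mof" -Force
    }

    # Then, on each server as part of the deployment script:
    # Start-DscConfiguration -Path "\\deploy\dscshare\$env:COMPUTERNAME" -Wait -Verbose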

How do I deploy a Service Fabric application from a VSTS release pipeline?

I have configured a CI build for a Service Fabric application, in Visual Studio Team Services, according to this documentation: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-set-up-continuous-integration
But instead of having my CI build do the publishing, I only perform the Build and Package tasks, and include all Service Fabric related output, such as pkg folder, scripts, publish profiles and application parameters, in the drop. This way I can pass it along to the new Release pipeline (agent-based releases) to do the actual deployment of my service fabric application.
In my release definition I have a single Azure Powershell task, that uses an ARM endpoint (with proper service principals configured).
When I deploy my app to an existing service fabric cluster, I use the default Deploy-FabricApplication cmdlet passing along the pkg folder and a publish profile that is configured with a connection to the existing cluster.
The release fails with the error message "Cluster connection instance is null", and I cannot understand why.
Doing some debugging I have found that:
The Deploy-FabricApplication cmdlet executes the Connect-ServiceFabricCluster cmdlet just fine, but as soon as the Publish-NewServiceFabricApplication cmdlet takes over execution, the cluster connection is lost.
I would expect this scenario to be possible using the Service Fabric cmdlets, but I cannot figure out how to keep the cluster connection open during deployment.
UPDATE: The link to the documentation no longer refers to the Service Fabric PowerShell scripts, so the precondition for this question is no longer documented. The article now refers to the VSTS build and release tasks, which may be preferred over the PowerShell cmdlets I tried to use.
When the Connect-ServiceFabricCluster function is called (from Deploy-FabricApplication.ps1), a local $clusterConnection variable is set after the call. You can see that using Get-Variable.
Unfortunately there is logic in some of the SDK scripts that expect that variable to be set but because they run in a different scope, that local variable isn't available.
It works in Visual Studio because the Deploy-FabricApplication.ps1 script is called using dot source notation, which puts the $clusterConnection variable in the current scope.
I'm not sure if there is a way to use dot sourcing when running a script through the release pipeline, but you could, as a workaround, make the $clusterConnection variable global right after it's been set via the Connect-ServiceFabricCluster call. Edit your Deploy-FabricApplication.ps1 script and add the following line after the connection logic (~line 169):
$global:clusterConnection = $clusterConnection
By the way, you might want to consider setting up custom build/release tasks that deploy a Service Fabric application, rather than using the various Deploy-FabricApplication.ps1 scripts.
There now exists a built-in VSTS task for deploying a Service Fabric app so you no longer need to bother with executing the PowerShell script on your own. Task documentation page is at https://www.visualstudio.com/docs/build/steps/deploy/service-fabric-deploy. The original CI article has also been updated which provides details on how to set everything up: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-set-up-continuous-integration/.
Try to use "PowerShell" task instead of "Azure PowerShell" task.
I hit the same bug today and opened a GitHub issue here
On a side note, the VS-generated script Deploy-FabricApplication.ps1 uses the module
"$((Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Service Fabric SDK" -Name "FabricSDKPSModulePath").FabricSDKPSModulePath)\ServiceFabricSDK.psm1"
That's where Publish-NewServiceFabricApplication comes from. You can check the deployment logic and rewrite it in a more sane way using lower-level Service Fabric SDK cmdlets (potentially getting the connection via Get-ServiceFabricClusterConnection instead of making it global), roughly as sketched below.
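A rough sketch of what that could look like with the lower-level cmdlets; the cluster endpoint, package path and application names are placeholders:

    # Connect in the current scope and re-acquire the connection object explicitly
    Connect-ServiceFabricCluster -ConnectionEndpoint 'mycluster.westeurope.cloudapp.azure.com:19000'
    $connection = Get-ServiceFabricClusterConnection

    # Upload, register and create the application using the SDK cmdlets directly
    Copy-ServiceFabricApplicationPackage -ApplicationPackagePath '.\pkg\Release' `
        -ImageStoreConnectionString 'fabric:ImageStore' `
        -ApplicationPackagePathInImageStore 'MyApp'

    Register-ServiceFabricApplicationType -ApplicationPathInImageStore 'MyApp'

    New-ServiceFabricApplication -ApplicationName 'fabric:/MyApp' `
        -ApplicationTypeName 'MyAppType' `
        -ApplicationTypeVersion '1.0.0'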

How to push an enterprise-wide PowerShell profile customization?

As I've explained in my other question, I'm busy setting up a PowerShell module repository in my enterprise.
My plan is to have a master repository (r/w access to a limited group of people) and slave repositories (read only access to everyone). I need multiple repositories because clients are located in different security zones and I can't have a central location reachable by all clients.
For this reason, I need to configure the PowerShell profile of the clients so that they can point to the correct repository to find the modules. I would like to define a $PowerShellRepositoryPath environment variable for this purpose.
Also, the profile needs to be customized in order for it to execute a script located in the repository (thus where $PowerShellRepositoryPath points to) when PowerShell starts (my goal here is to automatically add the latest module versions to the PSModulePath of the clients on startup).
We have a mixed environment with domain members and stand-alone servers in different network zones.
How would you proceed? Is it possible to push that variable and the profile via a GPO for domain members? Would customizing the $Profile variable via GPO be an option?
What about the standalone servers?
Edit:
I think that for creating the environment variable, I'll just use a GPO to create it and use it in PowerShell via $env:variableName. For non-domain situations, I'll probably have to use a script though.
I am not sure about pushing $profile via GPO. But, I'd simply put a logon script that copies the profile script from a network location based on the user's group/security membership.
Well, if you're going to change the path to the modules, I'd have a file in the repository (say current.txt) that contains the name of the current module (or the current file path, whichever you are changing). Then have the $profile script read the content of that file and set the variable based on it (a sketch follows below). This way you don't have to screw around with updating the profile scripts; just update current.txt in the central repository with the path (or partial path, or filename, whatever changes), and when it replicates to the client repositories, all PowerShell profiles pick up the latest modules when the profile script is executed.
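A minimal sketch of that profile logic, assuming $env:PowerShellRepositoryPath is already pushed to the clients and current.txt contains the relative path of the current module folder (both are assumptions):

    $repoRoot    = $env:PowerShellRepositoryPath
    $currentFile = Join-Path $repoRoot 'current.txt'

    if (Test-Path $currentFile) {
        # current.txt holds the relative path of the latest module folder in the repository
        $currentModules = Join-Path $repoRoot ((Get-Content $currentFile -Raw).Trim())
        if ($env:PSModulePath -notlike "*$currentModules*") {
            $env:PSModulePath = "$currentModules;$env:PSModulePath"
        }
    }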
Out of curiosity, why not just overwrite the module files in the client repositories with the latest version? If you did it that way, all clients would always have the latest versions, and you wouldn't have to update the $profile scripts.
Alternatively, you could always write another script to replace the $profile script on all machines. I think the first route I suggested would be the cleanest way of doing what you are after.
As far as the GPO option goes, I don't believe you can do this: there is no GPO setting that controls what is in the profile script. You could maybe do it with a custom ADM file, but the profile script path is not controlled by the registry, so no go there.