I want to manage the servers in our staging pipeline with PowerShell DSC (push model). The servers map to the environments as follows:
Development: 1 server
Test: 2 servers
UAT: 2 servers
Production: 2 servers
The servers within one environment have the same configuration, but the configuration differs between environments. I wanted to go with the push model because it does not require setting up a pull server.
PowerShell DSC offers the option to manage the configuration via configuration data in a separate file, but this comes with the caveat that you need to specify a node name that matches the respective server name. That means I need to copy the configuration data for each server in an environment, and when changing the configuration I need to remember that there is a second place where the value has to be updated.
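For illustration, the configuration data then ends up looking something like this (server names and values are made up), with the same settings repeated for every node in the environment:

    @{
        AllNodes = @(
            @{ NodeName = 'TESTSRV01'; Environment = 'Test'; WebPort = 8080 }
            @{ NodeName = 'TESTSRV02'; Environment = 'Test'; WebPort = 8080 }   # identical settings, duplicated per server
        )
    }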
Additionally, I do not really care about the server names. If the servers are exchanged for new ones tomorrow, the configuration relevant to the environment should simply be applied.
What is the best practice approach to manage multiple servers within one environment with the same configuration?
Check the links below; I think they cover your scenario:
Using A Single DSC Configuration for Multiple Servers
DSC ConfigurationNames with multiple nodes
The MOF file that gets produced does not contain the node name inside it, so as long as you build a generic configuration, you can rename the file at deploy time.
You can create one config for each environment with a generic name, then enumerate the list of servers and make a copy of the config for each one, named after that server.
You can take it a step further. Have a share where you create a folder for each server that matches the server's name, then copy the MOF for that server into that folder with the name localhost.mof. You can then run Start-DscConfiguration -Path \\server\share\$env:computername from that machine as part of your deployment script.
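A minimal sketch of that approach, assuming a generic Test.mof compiled per environment and a share at \\server\share (server names and paths are placeholders):

    # Copy the generic environment MOF into a folder named after each server.
    $testServers = 'TESTSRV01', 'TESTSRV02'          # assumed server names
    foreach ($server in $testServers) {
        $target = "\\server\share\$server"
        New-Item -ItemType Directory -Path $target -Force | Out-Null
        Copy-Item '.\Test\Test.mof' -Destination (Join-Path $target 'localhost.mof')
    }

    # Then, on each server, as part of the deployment script:
    Start-DscConfiguration -Path "\\server\share\$env:COMPUTERNAME" -Wait -Verbose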
Related
I'm trying to deploy an Infinispan cluster (2 machines) using domain mode, but I can't find any working example of the domain.xml and host.xml config files.
This cluster would be used by Keycloak as a cache server.
Has anyone here already worked on this?
You need to download Infinispan 9.4.14 (or any 9.4 release) and start the bin/domain.sh (or .bat) script.
That's it; you have a running domain with two servers.
If you want to add a second machine, copy the server distribution to it and start the domain script with "--host-config=host-slave.xml". You also need to set "jboss.domain.master.address" via "-D" so the process knows where the domain master is.
Another option is to rename host-slave.xml to host.xml and edit the domain-controller discovery-options.
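Roughly, the commands look like this (the master address is an assumption; adjust it to your environment):

    # On the first (master) machine - starts the domain controller plus the example servers:
    bin/domain.sh

    # On the second machine - reuse the copied distribution and point it at the master:
    bin/domain.sh --host-config=host-slave.xml -Djboss.domain.master.address=192.168.1.10

On Windows, use the corresponding domain.bat with the same arguments.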
More information can be found here -> http://infinispan.org/docs/stable/server_guide/server_guide.html#domain_mode
I'm trying to create a configuration, using PowerShell DSC, that would help me create a SharePoint farm using virtual machines. Assuming that I have a Windows 10 machine with Hyper-V installed, I would like my configuration script to create the required VMs (for example DC, SPA1, SPW1, SPW2 and SPDB1), configure their network connections and connect them to a domain controller (DC1), then proceed with the SharePoint/SQL Server prerequisites and installation before going on to configure the farm once it is available.
I've created configurations that complete various stages, but I am unable to figure out how to connect them to work in an orchestrated manner. For example, I can create the VMs or perform the install and configuration of SharePoint, but I can't get these configurations to work in tandem.
Having read the DSC documentation, I thought that it might be possible using composite resources, but I am unable to get the configuration to continue onto the new virtual machine after creation.
From the composite resource documentation:
    configuration RenameVM
    {
        Import-DscResource -Module TestCompositeResource

        Node localhost
        {
            xVirtualMachine VM
            {
                VMName = "Test"
                SwitchName = "Internal"
                SwitchType = "Internal"
                VhdParentPath = "C:\Demo\VHD\RTM.vhd"
                VHDPath = "C:\Demo\VHD"
                VMStartupMemory = 1024MB
                VMState = "Running"
            }
        }

        Node "192.168.10.1"
        {
            xComputer Name
            {
                Name = "SQL01"
                DomainName = "fourthcoffee.com"
            }
        }
    }
Ideally the node names would be declared dynamically in the configuration data rather than as explicitly defined IP addresses. I'm also having trouble with my Hyper-V configuration creating multiple switches, but that's a separate issue. So I guess my question is:
Is it possible to create a configuration that deals with the creation and advanced configuration of Virtual Machines?
The problem you are running up against is a conceptual one of what DSC does.
Reading the document that you linked, it says
Configurations are declarative PowerShell scripts which define and configure instances of resources. Upon running the configuration, DSC (and the resources being called by the configuration) will simply “make it so”, ensuring that the system exists in the state laid out by the configuration.
DSC is designed to configure an instance of a resource. At its basic level a DSC configuration is run on a single machine, configuring that machine into a specified state.
DSC scripts should be constrained to work within the boundaries of the machine that they are running on. It seems that this is part of the problem you are experiencing.
Say you have two sets of scripts: a deploy-VM script that runs against a Hyper-V server, and a SharePoint build that then configures the VM once it has launched. It seems that what you are trying to do is launch the SharePoint script from within the Hyper-V deploy script. At that stage, though, the SharePoint server is outside the boundary of control of the Hyper-V server (apart from its atomic VM capabilities: start, stop, delete, etc.).
Instead, what I would suggest is to see them as two entirely separate entities. There is no need for a scripted connection between creating a VM and installing SharePoint.
At a high level your pipeline would look something like this
Run the deploy configuration to create a new VM. At the point where that VM is running, that configuration is complete; it has no other actions.
The VM builds and starts; part of its initial configuration is to run a bootstrap script that tells it its function.
The VM contacts the DSC server, tells it its function, and requests any configurations that are available for it.
The VM downloads its configurations, and configures itself as a Sharepoint Server (or SQL Server, etc)
If there are external dependencies, e.g. you cannot install SharePoint before SQL has completed, then simply have a DependsOn on a resource that waits for a shared file, i.e. for \\server\share\sqlcompleted.txt to exist, or whatever other mechanism fits your environment.
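A minimal sketch of that wait, using the built-in Script resource (the share path and resource name are placeholders):

    Script WaitForSqlCompleted
    {
        # Sentinel file written by the SQL build when it finishes (assumed path).
        GetScript  = { @{ Result = (Test-Path '\\server\share\sqlcompleted.txt') } }
        TestScript = { Test-Path '\\server\share\sqlcompleted.txt' }
        SetScript  = {
            while (-not (Test-Path '\\server\share\sqlcompleted.txt')) {
                Start-Sleep -Seconds 60
            }
        }
    }

    # Later resources in the SharePoint configuration can then declare:
    #   DependsOn = '[Script]WaitForSqlCompleted'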
Building servers this way removes dependencies: if you decide you want to switch to ESX, all you need to change is your deploy script. The same applies if you move everything to a cloud deployment.
Is there any way using the PowerShell Azure cmdlets to get the machine name on which an Azure worker or web role is running? Specifically, I'm looking for the name that starts with "RD". I'm not 100% sure if I'm searching for this using the right terminology, because my results are clouded with information about Azure Virtual Machines. I've also been exploring the objects returned from such calls as Get-AzureDeployment and Get-AzureVM, but haven't found the "RD" name anyplace yet.
I've also found the discussion here, but wondering if it's out of date: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/73eb430a-abc7-4c15-98e7-a65308d15ed9/how-to-get-the-computer-name-of-a-webworker-role-instance?forum=windowsazuremanagement
Motivation: My New Relic monitoring often complains "server not reporting" for instances that have been decommissioned. New Relic's server monitoring knows only the "RD..." names, and I'm looking for a quick way to get a list of these from Azure so that I can compare and see if New Relic is only complaining about old instances or if there's a real problem with one of the current instances.
You can actually get more meaningful host names than the RD... ones by setting the vmName key in the cloud service's ServiceConfiguration file.
Then your host names will be of the form vmnameXX, where XX is the instance number of the role (e.g. "MyApp01", "MyApp02", ...).
For details on this, see the links below:
https://azure.microsoft.com/documentation/articles/virtual-networks-viewing-and-modifying-hostnames/
http://blogs.msdn.com/b/cie/archive/2014/03/30/custom-hostname-for-windows-azure-paas-virtual-machines.aspx
I found a couple of tutorials on how to run multiple instances of JBoss on the same machine.
All of them mention uncommenting the Service Binder and having separate service-binding.xml files for each server.
The question is: why is it done like that? Is there any reason other than adding an additional layer of indirection?
It looks like the same could be done by modifying the ports in jboss-service.xml for each server. The only restriction would be that there wouldn't be an easy way to switch which instance of JBoss uses which set of ports.
You are right about modifying the ports in jboss-service.xml; that is the straightforward and genuine way to change the ports.
Unfortunately, ports are not only defined in that file, but also in other places like jboss-web's configuration etc.
Catching all of those places can be error-prone.
So the idea was to have a central file (service-binding.xml) that lives in the root of a server installation. You basically copy the 'default' config to server1, server2, etc., and then pass in the server name on the command line when starting, so that the correct port offset for all of the services is taken from service-binding.xml and applied to the resulting runtime configuration.
JBoss AS 7 takes this concept one step further with ServiceBindingGroups: the base ports are defined at the domain level, and per server you pick a binding group plus just a port offset by name, so there is even less work needed than in AS 4.
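To illustrate the port-offset idea in its simplest form (this is standalone mode rather than the domain-level binding groups, and the directory names and offset value are arbitrary), two instances can run on one machine like this:

    # First instance on the default ports:
    bin/standalone.sh -Djboss.server.base.dir=/opt/jboss/server1

    # Second instance with every port shifted by 100 (8080 -> 8180, 9990 -> 10090, ...):
    bin/standalone.sh -Djboss.server.base.dir=/opt/jboss/server2 -Djboss.socket.binding.port-offset=100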
As I've explained in my other question, I'm busy setting up a PowerShell module repository in my enterprise.
My plan is to have a master repository (r/w access to a limited group of people) and slave repositories (read only access to everyone). I need multiple repositories because clients are located in different security zones and I can't have a central location reachable by all clients.
For this reason, I need to configure the PowerShell profile of the clients so that they can point to the correct repository to find the modules. I would like to define a $PowerShellRepositoryPath environment variable for this purpose.
Also, the profile needs to be customized so that it executes a script located in the repository (i.e. where $PowerShellRepositoryPath points to) when PowerShell starts; my goal here is to automatically add the latest module versions to the PSModulePath of the clients on startup.
We have a mixed environment with domain members and stand-alone servers in different network zones.
How would you proceed? Is it possible to push that variable and the profile via a GPO for domain members? Would customizing the $Profile variable via GPO be an option?
What about the standalone servers?
Edit:
I think that for creating the environment variable, I'll just use a GPO to create it and use it in PowerShell via $env:variableName. For non-domain situations, I'll probably have to use a script though.
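For the non-domain servers, a one-line sketch of that script could look like this (the UNC path is a placeholder for the slave repository of that zone):

    # Set the variable at machine level so every PowerShell session sees it as $env:PowerShellRepositoryPath.
    [Environment]::SetEnvironmentVariable('PowerShellRepositoryPath', '\\repo01\PowerShellModules', 'Machine')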
I am not sure about pushing $profile via GPO, but I'd simply use a logon script that copies the profile script from a network location based on the user's group/security membership.
Well, if you're going to change the path to the modules, I'd have a file in the repository (say current.txt) that contains the name of the current module (or the current file path, whichever you are changing). Then have the $profile script read the content of that file and set the variable based on it. This way you don't have to screw around with updating the profile scripts: just update current.txt in the central repository with the path (or the partial path, the part that changes, or the filename, whatever), and when it replicates to the client repositories, all PowerShell profiles pick up the latest modules the next time the profile script is executed.
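A rough sketch of what that profile snippet could look like (the file name, its contents, and the environment variable are assumptions):

    # Read the current module path from the replicated repository
    # and prepend it to PSModulePath for this session.
    $currentFile = Join-Path $env:PowerShellRepositoryPath 'current.txt'
    if (Test-Path $currentFile) {
        $modulePath = (Get-Content $currentFile -Raw).Trim()
        if ($env:PSModulePath -notlike "*$modulePath*") {
            $env:PSModulePath = "$modulePath;$env:PSModulePath"
        }
    }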
Out of curiosity, why not just overwrite the module files in the client repositories with the latest version? If you did it that way, all clients would always have the latest versions, and you wouldn't have to update the $profile scripts.
Alternately you could always write another script to replace the $profile script on all machines. I think the first route I suggested would be the cleanest way of doing what you are after.
As far as the GPO thing goes, I don't believe you can do this. There is no GPO defined to control what is in the profile script. I would say you could maybe do it with a custom ADM file, but the profile script path is not controlled by the registry, so no go there.