How to apply customer-specific configuration during VSTS release? - azure-devops

We would like to try building a release pipeline for our product in VSTS - however, our product requires a separate instance of the application per customer (there is some legacy in the picture here :)). What we THINK we want is a process like this:
For each customer:
- Update the DB schema
- Configure a container, with customer-specific configuration etc.
- Publish the container into Azure Container Registry
- Deploy the container in Azure Container Service (OR on-prem if the customer runs on-prem)
The configuration can be multiple things: Extensions of the API in the application (new DLLs basically), connection strings, ...
I figure we can do this fairly easily using a custom PowerShell script, but I would rather not write anything custom (at least for the "looping" issue) if I don't have to. We could also create separate environments in VSTS for each customer, but that seems quite unmaintainable with well over 100 customers.
Some additional details:
- There's a separate DB per customer
- There are two separate web applications per customer
So what's the best practice here? Any advice? Thanks! :-)

You could think of doing it in two ways.
1 - Create one environment for each customer. You could have the exact same tasks for each environment, or have the flexibility to change steps in a particular environment.
This approach would also give you the ability to use a pipeline flow, because your build will be released only after it passes your internal QA and other processes.
To do it easily, you could also create task groups to reuse them in each environment.
2 - The other way is to create separate releases for each customer or group of customers. This also gives you the same flexibility and you can reuse your builds, but you have to add some extra steps to make sure you are using the right build, since you can choose any build when you create a release manually.
Updated
A third option could be to create one environment for all customers and have one deployment agent installed per customer, with all of them in the same deployment group. Then have one variable file per customer, named after its agent, and a PowerShell script that uses the agent name variable to find which file to load. This PowerShell script would then apply all your customer-specific configuration.
In that case, I suspect that you would end up doing almost all of your deployment in PowerShell, which could be more time-consuming for you to maintain. You also have to keep in mind that in this particular scenario you would update all your customers at the same time, because all agents would be in the same deployment group.
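As a rough sketch of that idea (the Config folder layout and file naming here are assumptions; on deployment group agents the agent name is exposed via the AGENT_NAME environment variable):

# Minimal sketch: load the per-customer variable file matching this agent's name.
# Assumes one file per customer, named after its agent, e.g. Config\AGENT01.ps1 (hypothetical layout).
$agentName  = $env:AGENT_NAME
$configFile = Join-Path $PSScriptRoot "Config\$agentName.ps1"
if (-not (Test-Path $configFile)) {
    throw "No configuration file found for agent '$agentName'."
}
# Dot-source the file to bring in customer-specific variables
# (connection strings, extension DLL lists, ...), then apply them.
. $configFile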

Related

Multiple Deployment Groups on Single Target Server - Any potential issues?

We currently have 4 Azure DevOps team projects that require two Deployment Groups to be created for their SIT and UAT release pipelines. All 4 team projects will share the two Deployment Groups, the idea being to create the deployment group from one team project and then sharing or extending it to the other 3 (which I believe is common practice).
My main concern though is that due to some budget constraints, the decision has been taken to create both SIT and UAT Deployment Groups on a single target server. Much as I strongly believe this is probably not best practice, are there any technical reasons why this cannot or shouldn't be implemented?
In simple terms, a deployment group is this:
A deployment group is a logical set of deployment target machines that
have agents installed on each one. Deployment groups represent the
physical environments; for example, "Dev", "Test", "UAT", and
"Production". In effect, a deployment group is just another grouping
of agents, much like an agent pool.
Registering the same machine to multiple deployment groups is supported. However, you would need to edit the agent name in the PowerShell registration script provided in the UI, or log into the machine and execute the script in a different folder than the default one specified in the script.
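For example, a sketch of registering a second agent on the same server under a different name and folder (the group name, organization URL, project, and PAT below are placeholders; the download/extract steps from the generated script are elided):

# Run from an elevated PowerShell prompt on the target server.
# Use a separate folder so the second registration doesn't clash with the first.
mkdir C:\azagent-uat; cd C:\azagent-uat
# ...download and extract the agent here, as in the script generated by the UI, then:
.\config.cmd --deploymentgroup --deploymentgroupname "UAT" `
    --agent "$($env:COMPUTERNAME)-UAT" --runasservice `
    --url "https://dev.azure.com/yourorg/" --projectname "YourProject" `
    --auth PAT --token "<your-pat>"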
Normally, you set up a deployment group with multiple agents and run a deployment against only the targeted agents, according to requirements.
What you can do is assign tags to deployment agents and use tags to assign releases to specific agents.
In summary, it is possible to register the same machine/server to multiple deployment groups if you insist on it. However, due to server performance, environment isolation, disaster tolerance, and other factors, it is not recommended.

Map service roles and replicas to servers with Azure DevOps Release

My project is a Windows Service application which can be installed in several roles (the differences are in the service name, exe path, and some settings in app.config). Each role can be scaled horizontally by instance count. And all these {roles x replica counts} should be deployed over a set of servers in specific proportions for effective performance and utilization.
As an example:
ServerA
  ServiceAlfa.1
  ServiceAlfa.2
  ServiceBravo
  ServiceDelta
ServerB
  ServiceBravo
  ServiceCharlie
  ServiceDelta.1
  ServiceDelta.2
  ServiceDelta.3
How can I achieve this with Azure DevOps (Dev17.M153.5) instruments?
I know the brand new YAML pipelines introduce the concepts of Environments and VMs. They're just not available in the latest stable version yet, but they're like a replacement for the Deployment Groups previously used for deployment to multiple machines, which I can use. I have already installed the deployment agents and registered them. But I still cannot figure out how best to configure my complex mapping of instances to servers in a release pipeline.
I can create a one-job stage per role and link them with corresponding variable groups, like:
StageAlfa
  ServerA:2
StageBravo
  ServerA:1
  ServerB:1
StageCharlie
  ServerB:1
StageDelta
  ServerA:1
  ServerB:3
So I would have to check and compare the server name in my script.
Or I can do the opposite: create a stage per machine and link it with a variable group describing the count of each role's replicas on that server. Then in every stage I could select the specific machine from the deployment group by tag.
The second approach looks simpler, but they both feel so awkward!
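A rough sketch of what that per-machine script could look like (the variable names AlfaCount/BravoCount and the install layout are made up; the linked variable group would supply the counts):

# Per-machine stage: the linked variable group says how many replicas of each
# role this server should run, e.g. AlfaCount=2, BravoCount=1 (hypothetical names).
$roles = @{ 'ServiceAlfa' = [int]$env:ALFACOUNT; 'ServiceBravo' = [int]$env:BRAVOCOUNT }
foreach ($role in $roles.Keys) {
    for ($i = 1; $i -le $roles[$role]; $i++) {
        # ServiceAlfa.1, ServiceAlfa.2, ... or just ServiceAlfa when the count is 1
        $name = if ($roles[$role] -gt 1) { "$role.$i" } else { $role }
        $path = "C:\Services\$name\MyService.exe"   # made-up install path
        # copy binaries and patch app.config for this role/instance here, then:
        New-Service -Name $name -BinaryPathName $path -StartupType Automatic
    }
}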
P.S. These are Windows Services on machines, not containers in Kubernetes, due to specific Windows software dependencies.
Your approaches are correct. You may consider migrating to Azure DevOps Services or upgrading to Azure DevOps Server 2020, which supports Environments and VMs:
https://learn.microsoft.com/en-us/azure/devops/server/release-notes/azuredevops2020?view=azure-devops#continuous-deployment-in-yaml

Octopus - deploying multiple copies of same service

I've got an Octopus deployment for an NServiceBus consumer. Until recently, there's only been one queue to consume. Now we're trying to get smart about putting different types of messages in different queues. Right now we've broken that up into 3 queues, but that number might increase in the future.
The plan now is to install the NSB consumer service 3 times, in 3 separate folders, under 3 different names. The only difference in the 3 deployments will be an app.config setting:
<add key="NsbConsumeQueue" value="RedQueue" />
So we'll have a Red service, a Green service and a Blue service, and each one will be configured to consume the appropriate queue.
What's the best way to deploy these 3 services in Octopus? My ideal would be to declare some kind of list of services somewhere e.g.
ServiceName   QueueName
-----------   ---------
RedService    RedQueue
GreenService  GreenQueue
BlueService   BlueQueue
and loop through those services, deploying each one in its own folder, and substituting the value of NsbConsumeQueue in app.config to the appropriate value. I don't think this can be done using variables, which leaves PowerShell.
Any idea how to write a PS script that would do this?
At my previous employer, we used the following script to deploy from Octopus:
http://www.layerstack.net/blog/posts/deploying-nservicebus-with-octopus-deploy
Add the two PowerShell scripts to the project that contains the NServiceBus host. Be sure to override the host identifier or ServicePulse will go mad, because with Octopus every deployment gets its own folder.
But as mentioned in the comments, be sure that you're splitting endpoints for the right reason. We also had/have at least 4 services, but that's because we have a logical separation. For example, we have a finance service where all finance messages go, and a sales service where all sales messages go. This follows the DDD bounded context principle and exists for good reasons. I hope your services aren't actually called red, green and blue! :)
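If you do end up scripting it, here is a rough sketch of the loop the question describes (the install folder, config file name, and the exact NServiceBus.Host.exe install switches are assumptions to adapt):

# Hypothetical sketch: install one copy of the service per queue, each in its own folder.
$services = @{ 'RedService' = 'RedQueue'; 'GreenService' = 'GreenQueue'; 'BlueService' = 'BlueQueue' }
$packageDir = $OctopusParameters['Octopus.Action.Package.InstallationDirectoryPath']
foreach ($svc in $services.Keys) {
    $dir = "C:\Services\$svc"                      # made-up target folder
    New-Item -ItemType Directory -Force $dir | Out-Null
    Copy-Item -Recurse -Force "$packageDir\*" $dir
    # Point this copy at its queue by rewriting the app.config setting.
    $cfg = Join-Path $dir 'MyConsumer.exe.config'  # hypothetical exe name
    [xml]$xml = Get-Content $cfg -Raw
    $node = $xml.configuration.appSettings.add | Where-Object { $_.key -eq 'NsbConsumeQueue' }
    $node.value = $services[$svc]
    $xml.Save($cfg)
    # Register the Windows service under its own name.
    & "$dir\NServiceBus.Host.exe" /install /serviceName:$svc
}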
PowerShell should not be needed for this. Variables in Octopus can be scoped to a step in the deployment process, so you could have 3 steps, one for each service, and 3 variables for the queue names, each scoped to one of the steps.
You could also add variables for the service names, and use those variables in the process step settings. That would let you see both the service names and queue names from the variables page.

What's the difference between Capistrano 'stages' and 'roles'

Here's some quotes I've found on the web:
Stages:
From Beanstalk blog
"allows you to setup one recipe to deploy your code to more than one
location."
From Github
"we have a production server and a staging server. So naturally, we
would like two deployment stages, production and staging. We also
assume you're creating an application from scratch."
Roles:
From SO (accepted answer)
Roles allow you to write capistrano tasks that only apply to certain
servers. This really only applies to multi-server deployments. The
default roles of "app", "web", and "db" are also used internally, so
their presence is not optional (AFAIK)
In my naivety, these sound like the same thing. Could someone please explain the difference in a way your grandmother could understand?
P.S I'm deploying PHP if that helps.
Stages are used to deploy different branches to different groups of servers (where a group may be one or more servers).
Roles are used to deploy the same branch to different servers in the same group, and allow you to run certain capistrano commands on certain servers in that group. For example, if you run a DB update task during deploy, you could specify to run it for the :db role only, where :db represents a single server, instead of wasting resources running the same command on two servers for the same result.
This is only really useful when you have multiple servers in a server group (for example, staging1 and staging2, prod1 and prod2). If you have single servers for staging and production, you don't need to worry about roles.
Note that I've also simplified the definition of stages here. You can actually deploy multiple stages to a single server if you need to, by making :deploy_to dependent on the stage.

Good practices of WebSphere MQ production deployment

I'm about to prepare a deployment specification for the WebSphere MQ production environment. As always, I hate reinventing the wheel, hence the question:
Is there an article or specification of best practices when it comes to deploying and maintaining a WebSphere MQ production environment?
Here are more specific doubts of mine:
Configuration versioning (MQSC, dmpmqcfg, etc).
Deploying new objects (MQSC or manual instructions?)
Deployment automation (maybe based on the diff of dmpmqcfg?).
Deploying and versioning configuration alterations.
Currently I am simply creating MQ objects manually and versioning the output of dmpmqcfg. However, in a while there are going to be too many deployments to handle it like this.
That's an extremely broad question so I'll try to respond before a moderator deletes it. :-)
The answer depends on many things such as whether MQ clusters are in use, the approaches to high availability and disaster recovery, the security requirements, whether the QMgrs are configured as dedicated or shared infrastructure, etc. However, there are a few patterns that I follow in almost all cases, including non-Production. This is because things like monitoring and security tend to get dropped at deployment time if not tested in Dev and don't work as expected in Prod.
I use a script to create my QMgrs in Production to ensure that basics like generating the X.509 certificate (or CSR) are always done according to standards, that any exits or exit parm files are present, that certain SupportPac executables (like q) are present in /opt/mqm/bin, circular queues, etc. It also checks for negative factors such as GSKit not being installed.
I have a baseline script that is run against all QMgrs. This script sets up the DLQ, any queues for monitoring agents, enables events as required, sets up system services, trigger monitors, listeners, etc. The exception is B2B gateway QMgrs which are handled in a class all their own and have very specific configurations not used on the internal network.
I have several classes of QMgr with specific configuration requirements. These include cluster repositories (where primary and secondary are distinct sub-types), service-provider QMgrs, and service consumer QMgrs. These all have secondary scripts run against them.
I have scripts per-cluster to join or suspend a QMgr in cases where clustering is used (which for me is almost 100% since v7.1).
These set up a QMgr's infrastructure. Then I maintain scripts for each application. So for example, if there's a Payroll app, I'll have queues and possibly topics with names containing a PAY node, such as PAY.EMPLOYEE.UPDT.REQ.V032.PRD. Corresponding to that will be a single script for all PAY.** queues. There used to be a separate one for setmqaut commands too, but these are now in the same script as the objects. I only ever have one version of the script and keep a history of changes in the script itself. This way, when I need to recreate a QMgr, I just run all the scripts for it. Similarly, if I need to deploy the PAY objects on another QMgr, I just copy the script to that server.
When defining objects for clusters, I always do a DEFINE NOREPLACE that contains all the run-time attributes, such as whether the queue is enabled in the cluster. The queue is always defined as disabled in the cluster and for triggering, but because I use NOREPLACE, re-running the script doesn't change whatever state it is in, say, a month later. Those things that are configuration and not run-time, such as the description, are handled in an ALTER immediately after the DEFINE, and these are updated each time the script is run. There's an article on this here.
Finally, the scripts I use are of the self-executing, self-documenting variety. For example, many people put all the MQSC commands into a script then do something like:
runmqsc < payroll.mqsc > payroll.out
TONS of problems here. The main one is that it relies on the operator to know a lot and execute the script correctly every time. For example, suppose (s)he forgets to capture the output? Or overwrites a previous output? Or doesn't get STDERR because (s)he needed to add 2>&1 at the end and doesn't know redirection that well?
So my scripts are all written in ksh, handle all the capturing of output (complete with time and date stamping and STDERR), can freely mix MQSC with OS commands, etc. All you do is go to the scripts directory for that QMgr and run . ./*ksh to build/rebuild the QMgr.
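For readers on Windows, here is the same self-capturing idea sketched in PowerShell rather than ksh (the queue manager and file names are placeholders):

# Self-capturing wrapper: time-stamped log, STDERR included, no reliance on
# the operator getting the redirection right.
$qmgr  = 'QMGR1'                                  # placeholder queue manager name
$stamp = Get-Date -Format 'yyyyMMdd-HHmmss'
$log   = "payroll.$stamp.out"
"Run started $(Get-Date)" | Tee-Object -FilePath $log
Get-Content 'payroll.mqsc' | runmqsc $qmgr 2>&1 | Tee-Object -FilePath $log -Append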
I do of course also take regular configuration dumps, but these are more for running queries and reports like "how many QMgrs have this channel defined and where are they?" kind of thing.
Also, when taking backups there is almost NEVER a good reason to back up a QMgr at a point in time. However, if it is required, be sure to stop the QMgr first. Also, think long and hard about capturing certificates in a backup. Many people are good about locking the certificate directory so only mqm can read it, but often the backups are unprotected. As long as you aren't trying to restore on top of Production, many shops let you restore the Production /var/mqm/* files to your own sandbox. If the QMgr's KDB files are included, you've just lost them. An alternative is to put the certificates in /etc or some other directory that is protected but not backed up with the QMgr's directories.