Should an ARM template be run on every deployment - azure-devops

When using a template to deploy infrastructure, is it expected to run your ARM template on every deployment, or are you supposed to run the ARM template once to set up the infrastructure and then create another pipeline that deploys to the infrastructure that was set up by ARM?
Option 1: Run ARM -> once; deploy build artifacts -> repeat
Option 2: Run ARM, then deploy build artifacts -> repeat

It depends on how you want to set up your test environments. In my system I deploy each branch to a new test environment, instead of using a single instance of a resource as the "test" instance and deploying to that. So I do run ARM template deployments as part of the deployment pipeline. I place the deployment scripts and ARM templates for a microservice in the same repository as the code. This gives me the cohesion I am looking for, as infra, backend, and frontend all live together in one repository for a microservice.
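If you go the environment-per-branch route, the pipeline can derive the environment's resources from the branch name. A minimal Azure Pipelines YAML sketch of the idea, assuming a service connection named azure-connection and a template at deploy/azuredeploy.json (all names here are hypothetical):

    steps:
    - task: AzureCLI@2
      displayName: Create a per-branch environment and run the ARM template
      inputs:
        azureSubscription: azure-connection   # hypothetical service connection
        scriptType: bash
        scriptLocation: inlineScript
        inlineScript: |
          # One resource group per branch, so every branch gets its own test environment
          rg="rg-myapp-$(Build.SourceBranchName)"
          az group create --name "$rg" --location westeurope
          az deployment group create \
            --resource-group "$rg" \
            --template-file deploy/azuredeploy.json \
            --parameters environmentName="$(Build.SourceBranchName)"

Tearing the environment down when the branch is deleted is then just az group delete --name "$rg" --yes.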

I want to throw in an opinion for the other side. I highly recommend rerunning your ARM infrastructure deployments every release, or at least setting up a scheduled deployment. Yes, it may take a little more time, perhaps a few extra minutes depending on your resources. However, in larger organizations, and in particular in lower environments where developers or others may have at least contributor access, there is a risk of drift. By rerunning the ARM templates on each deployment you guarantee that the state matches your template, without having to add or set up any policy logic.
Plus, I'd say it's the ultimate confidence in your infrastructure as code: you are 100% confident your template is rerunnable.
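As a sketch of what rerunning the template on every release can look like in an Azure Pipelines YAML definition (the service connection, resource group, variables, and file paths below are hypothetical). ARM incremental deployments are idempotent: rerunning an unchanged template is effectively a no-op, and it resets drifted properties on the resources the template defines:

    steps:
    - task: AzureResourceManagerTemplateDeployment@3
      displayName: Ensure infrastructure matches the template (runs every release)
      inputs:
        deploymentScope: Resource Group
        azureResourceManagerConnection: azure-connection   # hypothetical
        subscriptionId: $(subscriptionId)
        resourceGroupName: rg-myapp-prod
        location: westeurope
        templateLocation: Linked artifact
        csmFile: deploy/azuredeploy.json
        csmParametersFile: deploy/azuredeploy.parameters.json
        deploymentMode: Incremental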

Well, there is no single answer to this one, but in my book it doesn't make sense to run the ARM template if there are no changes to it. You should have a separate repo for IaC code, or a separate build for the ARM template.

From my point of view, whether to re-run the ARM template depends on whether your project's infrastructure and configuration have been updated.
If the structure and configuration of the project you build are not updated, you do not need to run the ARM template multiple times. You can directly deploy the build artifacts to the same resources.
On the other hand, if your project requires new resources or parameters, you can update or create resources by editing the template file (generally a JSON file). This allows the deployed environment to meet the needs of your project.
In short, there is no absolute answer to this topic; it depends on your needs.
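If you want to know whether rerunning the template would actually change anything before deciding, the Azure CLI's what-if preview can tell you. A minimal sketch, with a hypothetical service connection, resource group, and file paths:

    steps:
    - task: AzureCLI@2
      displayName: Preview what rerunning the template would change
      inputs:
        azureSubscription: azure-connection   # hypothetical service connection
        scriptType: bash
        scriptLocation: inlineScript
        inlineScript: |
          # Lists creates/modifies/deletes without applying anything
          az deployment group what-if \
            --resource-group rg-myapp-prod \
            --template-file deploy/azuredeploy.json \
            --parameters @deploy/azuredeploy.parameters.json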

Related

How to manage logical grouping of a microservice-based application to ensure version compatibility for the CI/CD pipeline?

For a microservice-architecture based application, I'm trying to understand a standard process for how to logically group and manage correct version compatibility among independently deployable microservices. Let me elaborate with a practical scenario:
Say I am building a software application composed of 10 microservices. All the microservices have their own independent repositories (branching workflow, etc.) and their separate CI/CD pipelines.
The CI/CD pipeline gets triggered whenever any change is pushed to the 'master' branch of the respective microservice.
Considering Helm chart and Kubernetes based deployment, all the microservices will get deployed with version 1.0 for the very first deployment and our system would work. For subsequent releases, we might have only a couple of services that get deployed. So after a couple of production releases, each microservice will be at a different version, and together they constitute the application at that point in time.
My question is :
How do I logically group independently deployable microservices in order to deploy or roll back to an earlier release, i.e., how do I determine what the versions of the different microservices were for earlier releases?
Is there any existing tool or standard practice to track the version of each microservice for a given release, so as to seamlessly roll back to an expected release?
If there is no automated solution, what would be the right approach to address such a requirement?
Appreciate your thoughts and suggestion on this.
With Kubernetes in mind:
1. Helm is a nice tool to deploy and track releases.
2. Native k8s Deployments work nicely; you need to use them properly, especially the --record flag in kubectl commands (e.g., check this link).
With AWS ECS clusters:
1. They have task definitions and tasks. I think that works for you.
I don't have pointers for docker-compose, swarm, and other tools, but you can always use the power of git and some scripting.
The idea is to make a file that lists all versions of services/containers/code, and commit that file in git with the code. Make a tag out of it for simplicity. Your script should compare this state file with the current state and apply only the specific changes (a sketch of such a file follows below). Also look at git submodules: a submodule setup is nothing but a group of many git projects, and it tracks the status of each project with the help of each project's commit id. This helped us in the situation you mention.
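A minimal sketch of such a state file, committed and tagged alongside the code (the service names and versions are made up):

    # versions.yaml - one entry per independently deployable microservice
    release: 2020.04.1          # tag the commit with this name
    services:
      orders: 1.4.2
      payments: 2.0.0
      inventory: 1.1.7

Rolling back to an earlier release then means checking out that tag and letting your script converge the cluster to the listed versions, e.g. one helm upgrade --version per entry.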
This is a fairly new problem; we just launched a new tool, Reliza Hub, to solve it. Also, here is my post on the subject: Microservices – Combinatorial Explosion of Versions. Currently we are at the MVP stage and a lot of work is going on - see this video tutorial if our direction makes sense for you: https://www.youtube.com/watch?v=yDlf5fMBGuI
If you decide to implement and have any questions or need help with integration, just tag me on SO and I'd be very much willing to make it work for you.
To sum up a few things that we are doing - we denote developer-facing projects (those that map to source code) as Projects and customer-facing projects (bundles that the customer sees) as Products.
And we say that Products are essentially compositions of Projects, and we provide tooling for how you can compile different versions of Projects into what's called a Product bundle. You can then integrate this into any CI or CD tool out there, or start manually if you haven't configured CI/CD yet.
Other than that, yes - I highly recommend Helm and Kubernetes; this is what we use on newer projects. (And I can also add ArgoCD and Spinnaker to the existing tooling.) But that alone is not enough for tracking permutations of different versions of microservices and establishing which configurations are good and which are not across different environments.

Maintainability of TFS xaml build vs TFS vNext build vs Octopus Deploy

My question is about the maintainability of vNext/Octopus processes vs XAML-based processes - or rather about the apparent impossibility of maintaining them sanely, which leads me to believe we are doing something terribly wrong.
Given:
Microsoft pushes to phase out its TFS XAML builds in favour of the vNext builds
Octopus Deploy is a popular deployment automation framework
We have many XAML based builds, but starting to port to vNext
The deployments are automated with Octopus Deploy
Concretely, we have three kinds of builds going on in QA:
Old XAML based compilation builds producing artifacts to be deployed
Ultimately just builds the code, zips it and places it in a well-known location
New vNext compilation builds producing artifacts to be deployed
Same as above
Deployment builds
One XAML-based build definition per deployment environment. This is the source of truth for the particular deployment, containing all the configuration URLs, connection strings, certificate thumbprints, etc. The build definition has over 100 build parameters. Each time a new environment is set up, we clone an existing XAML build definition and change the parameters.
This build unpacks the build artifact, generates all the web/app config files based on the configuration parameters, and kicks off Octopus Deploy with a lot of parameters using octo.exe, waiting for it to finish.
Octopus Deploy process
Creates 3 packages from the build artifact previously unpacked by the XAML build to match three areas of deployment - web farm, background job engine cluster and the database
Delivers the relevant packages to the relevant tentacles.
The tentacles unpack and setup their respective packages
So, if we have 50 deployment environments, then we have 50 XAML deployment builds, each capturing the context of the respective environment. But the XAML deployment build delegates the deployment job to Octopus and here we are forced to have 50 Octopus projects - one per deployment.
Why is it so? We examined the option of having just one Octopus Project, but what would be the Release versions of such project? In order for us to be able to navigate amongst the gazillion releases, the release version must include:
The build version of the deployed code, e.g. 55.0.18709.3
The name of the deployment environment, e.g. atwfm
Using the example above this gives us 55.0.18709.3-atwfm, but sometimes we want to deploy the same build artifact in the same deployment environment twice. But the only Octopus project would already have the release 55.0.18709.3-atwfm, so how to deploy 55.0.18709.3 in atwfm again, without deleting the already existing release?
We could not find a workaround and so, we have Octopus project per deployment.
THIS IS ABSOLUTELY CRAZY because Octopus projects are a pain to update. Suppose we need to add a step - go do it in 50 projects. There is great advice on the Internet to use automation to edit multiple projects. Not ideal at all.
vNext, BTW, has the same problem. If I am to port the existing XAML builds to vNext I will end up with 50 vNext deployment builds. If I decide to add a step, I need to do it in all the 50 builds!!!
Note that XAML builds do not have this problem (they have many others, though), because their process is separate from the parameters. I can modify the workflow once and all the XAML builds are then updated with the new process change.
My question is - how do people work with vNext and Octopus, because our process drives me crazy. There must be a better way.
EDIT 1
I would like to clarify. We sometimes want to deploy the same build artifacts twice. We are not recompiling them and reusing the same version. No. We already have the build artifacts handy with the build version baked inside the artifact. We may want to deploy it the second time into the same environment because, for example, some databases in that environment have been misconfigured and now this is fixed and we need to redeploy. This does not mean we can rerun the already existing Octopus release, because the fix may involve tweaking the deployment parameters of the respective XAML deployment build definition. Hence we may be forced to restart the XAML deployment build, which never compiles code.
EDIT 2
First of all, why do we drive the deployment from TFS XAML builds rather than from Octopus? Historic reasons. We did not have Octopus at first; the deployment was done by our ad hoc code. When we introduced Octopus, we decided to keep the XAML deployment builds for two reasons:
To save the cost of migrating all the XAML deployment builds with all the gazillion deployment parameters to Octopus. Maybe it was a wrong decision, maybe we could have automated the migration.
Because TFS has a better facility for displaying test results. The deployment may end with deployment smoke tests, and their results have to be published somewhere. We do not see how Octopus can help us publish the results; TFS can.
Why would one redeploy? For example, one of the deployment parameters is a certificate thumbprint. When the certificate is renewed, this parameter must be changed (we do have automation for updating XAML build parameters). But often we discover that it was already deployed with the wrong thumbprint. So we fix the deployment and redeploy. Or we discover some strange behavior in the deployed application and wish to redeploy with some extra tracing/debugging features.
There is a lot to unpack here, but I'll give it a go.
TL;DR It's the way you version the releases that's causing you all the pain. Change that, and everything else will fall into place.
Let's start at the end and work backwards.
Octopus Deploy has a concept of Environments. This means that you can deploy the same project to multiple environments and use Octopus's scoping mechanism to manage environment-specific configuration.
So, using your example:
Creates 3 packages from the build artifact previously unpacked by the XAML build to match three areas of deployment - web farm, background job engine cluster and the database
I set up an Environment in Octopus for each of your 50 Environments. (I'll use 3 environments in the example to keep it simple, but the principles apply no matter how many environments you have)
In my Dev Environment I have a single server so I create an environment called "Dev" and add the tentacle for that specific server. Then I tag the tentacle with the deployment type "Web", "Job", "Database"
I then set up a test environment which has 3 servers so I create the Environment and add the 3 servers. I then tag each tentacle with the deployment type "Web", "Job", "Database"
Finally I set up the Production environment. This has 5 web servers, 1 job server and 1 database server. I add all 7 tentacles to the environment, and tag them appropriately.
Now I only need 1 project to deploy to all 3 environments. In my project I have 3 steps.
Step 1 Deploy Web Site
Step 2 Deploy Jobs
Step 3 Deploy database
I can tag each of these steps to say what kind of tentacle I want to deploy to. Now when I run the deployment the link between the tags on the step, and the tags on the tentacle mean Octopus knows where to deploy the code.
Variables: Your variables can be scoped to an environment. So for example, if your dev environment database connection string is dev.database.net/Instance and your test environment database connection string is test.Database.net/Instance, then you can scope these in the variables section of the project. If your DNS is consistent with your environment names, you could even use some of the built-in variables to make adding environments easier, i.e. #{Octopus.Environment.Name}.Database.net/Instance
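For example, with Octopus's "Substitute Variables in Files" feature enabled, a config template shipped in the package can reference those scoped variables directly. A sketch with a hypothetical file name and keys:

    # config/appsettings.yaml - Octopus fills in the #{...} placeholders at deploy time
    environment: "#{Octopus.Environment.Name}"
    database:
      connectionString: "#{Octopus.Environment.Name}.Database.net/Instance"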
Releases and version numbers: So here is where I think your problem lies. Adding the environment name to the release and trying to create multiple releases with the same version is basically causing you all of the pain.
Using the example above this gives us 55.0.18709.3-atwfm, but sometimes we want to deploy the same build artifact in the same deployment environment twice. But the only Octopus project would already have the release 55.0.18709.3-atwfm, so how to deploy 55.0.18709.3 in atwfm again, without deleting the already existing release?
There are a couple of things here. In Octopus you can easily deploy again from the UI; however, it sounds like you're rebuilding the artifact and trying to create a new release with the same version number. Don't do this! Each new build should have a distinct and unique build number / release number.
The principle I follow is "build once deploy many"
When you create a release it requires a version number, and this release then flows through the environments. So I build my code and it gets a version number, 55.0.18709.3, and then I deploy it to Dev. When the deployment has been verified, I then want to "Promote" the release to Test; I can do this from within Octopus, or I can get TFS to do it.
So I promote 55.0.18709.3 to test and then on to prod. If I need to know which release is in which environment, Octopus tells me this via the dashboard or API.
Finally I can "Orchestrate" the flow of releases through my environments using Build v.next.
So my end-to-end process looks something like this:
Build vNext Build
Compile
Run Unit Tests
Package output
Publish package
build vNext Release
Call Octopus to create the release, passing in the version number (see the sketch after this list)
Optionally deploy the release to the first environment on your way to live
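A sketch of that release step as an Azure Pipelines script task calling octo.exe (the server URL, API key variable, and project name are hypothetical):

    steps:
    - script: >
        octo create-release
        --project "MyApp"
        --releaseNumber "$(Build.BuildNumber)"
        --deployto "Dev"
        --server https://octopus.example.com
        --apiKey $(OctopusApiKey)
      displayName: Create Octopus release and deploy it to Dev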
I now have everything I need in Octopus to deploy to ANY environment with a single project and my environment specific configuration.
I can either "Deploy" the release to a specific environment or "Promote" the release from one environment to another. This can be done easily from within the Octopus UI
Or I can create a "Promote" using the Octopus plugin in TFS and use that to orchestrate the promotion of code through the environments.
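The promotion can equally be scripted; a sketch using the same hypothetical names as above:

    steps:
    - script: >
        octo promote-release
        --project "MyApp"
        --from "Dev"
        --to "Test"
        --server https://octopus.example.com
        --apiKey $(OctopusApiKey)
      displayName: Promote the latest Dev release to Test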
Octopus Terminology.
Create release - This pulls together the artifacts and release number in Octopus to create an immutable thing which will be deployed to one or more environments.
Deploy release - The act of pushing your code to a specific environment.
Promote release - Once the code has been deployed into a single environment, it can then be promoted into other environments.
If you have a specific sequence of environments then you can use the "Lifecycles" feature of Octopus to enforce that workflow. but that's a topic for another day!
EDIT 1 Response
I don't think the edit changes my answer: you can redeploy the same release as many times as you like; what you cannot do is create a new release with the same version number. You might want to decouple these steps. Could you add some more detail about what changes in the XAML build? You can change variables in a release: you can update them in Octopus and then redeploy.
EDIT 2 Response
That makes things clearer. I think you need to take the hit and migrate the parameters to Octopus. Its variable management is much better than XAML builds', and although build vNext is comparable to Octopus, it makes more sense to have the config in Octopus. As XAML builds are on their way out, it makes sense to move this stuff now. Whilst it might be a lot of work, at the end you'll have a much smoother workflow and you can really take advantage of the power of Octopus.
On the test results point: I agree this is better suited to build vNext, so at that point you will be using build vNext as your orchestrator and Octopus Deploy as your release management tool.
The process would look something like this:
Build vNext
Compile code.
Run Unit tests
Run Octopack
Publish packages
Call Octopus and Create release
Call Octopus to Deploy to "Dev"
Run Smoke tests
Run Integration Tests
Call "Selenium" to run Run UI tests
Call Octopus to Promote release to "Test"
Run Smoke tests
Run Integration Tests
Call "Selenium" to run Run UI tests
Call Octopus to Promote release to "Production" (Perhaps with a manual innervation)
Run Smoke tests
Run Integration Tests
Call "Selenium" to run Run UI tests

Unreleasing (undeploying) applications in VSTS?

I have a project with N git repos, each representing a static website (N varies). For every git repo there exists a build definition that creates an nginx Docker image on Azure Container Registry. These N build definitions are linked to N release definitions that deploy each image to k8s (also on Azure). Overall, CI/CD works fine, and after the releases have succeeded for the first time, I see a list of environments, each representing a website that is now online.
What I cannot do with VSTS CI/CD, though, is declare how these environments are torn down. In GitLab CI (which I used before), there exists a concept of stopping an environment, and although this is just a stage in .gitlab-ci.yaml, running it literally removes an environment from the list of deployed ones.
Stopping an environment can be useful when deleting autodeployable feature branches (aka Review Apps). In my case, I'd like to do this when an already shared static website needs to be removed.
VSTS does not seem to have a concept of unreleasing something that has already been released, and I'm wondering what the best workaround could be. I have tried these two options so far:
Create N new release definition pipelines, which call kubectl delete ... for the corresponding static websites. That does not make things clear at all, because an environment called k8s prod (website-42) in one pipeline is not the same one as in another (otherwise, I could see whether web → cloud or web × cloud was called last).
Define a new environment called production (delete) in the same release definition and trigger it manually.
In this case 'deploy' sits a bit closer to 'undeploy', but it's hard to figure out which ran last (in the example above, you can kind of guess that re-releasing my k8s resources happened after I deleted them, but you need to look at the time on the cards, which is a pretty poor indication).
What else could work for deleting / undeploying released applications?
VSTS does not have a "stop environment" feature (automatically deleting what was deployed to the environment) in release management. But you can achieve the same thing with a VSTS YAML build.
So besides the two workarounds you shared, you can also stop an environment with a VSTS YAML build (similar to the mechanism in GitLab).
For a YAML CI build, you just need to commit a file whose name ends with .vsts-ci.yml. In that file, you can specify the tasks that delete the deployed app.
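A minimal sketch of such a .vsts-ci.yml, assuming the website was deployed to Kubernetes; the namespace and resource names below are made up:

    # .vsts-ci.yml - a manually queued teardown build for one static website
    steps:
    - script: |
        kubectl delete deployment website-42 --namespace review-apps
        kubectl delete service website-42 --namespace review-apps
        kubectl delete ingress website-42 --namespace review-apps
      displayName: Tear down the website-42 environment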

CI and Deployment with TFS and Powershell

I am working on a CI process with automated deployment. TFS Build is building the solution, and it then uses an InvokeProcess task to kick off a PowerShell script. The PowerShell script deploys the database changes as a dacpac using sqlpackage, Reporting Services reports using the web service, fonts to the SSRS server, and the website itself to one or more web servers. The whole process uses a deployment configuration file to define drop paths, server IPs, installation folders, etc. There will be one of these per environment.
I would like to be able to build the solution and deploy to an internal server to run automated tests as part of the automated build. Once tests are completed, and the build has been manually checked, I'd then like to be able to kick off another Build definition which only has the deployment portion of the standard build template, which will simply take a build number or build drop location, and deploy the same build to a different environment (i.e. staging, prod etc.)
The issue I have is that I'm currently managing most of my web/app configuration using config file transformation - i.e. I have build definitions for Debug, Test, Prod etc. and then Web.Debug.config, Web.Test.config etc. I only want to carry out one build, and then deploy that same build to different environments; however, at the moment the build will only generate configuration files for one environment - i.e. whatever the build configuration is.
Would the best approach be to generate all config files (or actually pre-create complete config files for each environment), and then just choose the appropriate one for the specific deployment? Or should I store the env-specific config in my deployment configuration file and update the appropriate keys using PowerShell when deploying?
What would be the normal/recommended approach here?
I'd suggest creating new Configurations for each target environment (by default you have Debug/Release; create some more). Then use the built-in web.config transforms; for non-web projects, use SlowCheetah.
This will spit out pre-configured build outputs for each configuration you specify you want built (in your build definition).
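If you instead go with the second option from the question (one configuration-neutral build, with environment-specific keys applied at deploy time), here is a minimal PowerShell sketch for the deployment script, assuming a per-environment JSON settings file and standard appSettings keys (all names are hypothetical):

    # Apply environment-specific appSettings to a configuration-neutral build output
    $settings = Get-Content "deploy.$EnvironmentName.json" -Raw | ConvertFrom-Json
    $configPath = Join-Path $DropPath "Web.config"
    [xml]$config = Get-Content $configPath -Raw

    foreach ($prop in $settings.appSettings.PSObject.Properties) {
        $node = $config.configuration.appSettings.add | Where-Object { $_.key -eq $prop.Name }
        if ($node) { $node.value = [string]$prop.Value }   # overwrite only keys listed in the settings file
    }

    $config.Save($configPath)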

What are the Team City best practices for multistage deployment?

We have 3 environments:
Development: Team City deploys here for Subversion commits on trunk.
Staging: User acceptance is done here, on builds that are release candidates.
Production: When UAT passed, the passing code set is deployed here.
We're using Team City and only have Continuous Integration set up with our development environment. I don't want to save artifacts for every development deployment that Team City does. I want an assigned person to be able to fire a build configuration that will deploy a certain successful development deployment to our staging server.
Then, I want each staging deployment to save artifacts. When a staging deployment passes UAT, I want to deploy that package to Production.
I'm not sure how to set this up in Team City. I'm using version 6.5.4, and I'm aware there's a "Promote..." action/trigger, but I think it depends on saved artifacts. I don't want to save development deployments each time as artifacts, but I do want the person running the staging deployment to be able to specify which successful development deployment to deploy to staging.
I'm aware there may be multiple ways to do this, is there a best practice? What is your setup and why do you recommend it?
Update:
I have one answer so far, and it's an idea we had considered internally. I'd really like to know if anyone has a somewhat automated way of deploying to a staging/production environment via Team City itself, where only people with a certain role/permission can run a deploy script to production, rather than having to manually deal with any kind of artifact package. Anyone?
Update 2
I still have 1 day to award bounty, and I thought the answer below didn't answer my question, but after rereading it I see that my question wasn't what I thought it was.
Are there any ways to use Team City for some kind of automated deployment to Staging/Production environments?
I think you're actually asking two different questions here; one is about controlling access rights to TeamCity builds and another is about the logistics of artifact management.
Regarding permissions, I assume what you mean by "only people with certain role/permission can run a deploy script to production" and your response to Julien is that you probably don't want devs deploying directly to production, but you do want them to be able to see other builds in the project. This is possibly also similar to Julien's scenario where IT then takes the process "offline" from TeamCity (either that, or it's just IT doing what IT does and insisting they must use a separate, entirely inefficient process because "that's just the way we do it" - don't get me started on that!).
The problem is simply that all permissions in TeamCity are applied against the project and never the build, so if you've got one project with all your builds, there's no way to apply permissions granularly to dev versus production builds. I've previously dealt with this in two ways:
Handle it socially. Everyone knows what their responsibilities are and you don't run what you're not meant to run. If you do, it's audited and traceable back to YOU. This works fine when there's maturity, a clear idea of responsibilities, and no compliance requirement that prohibits it.
Create separate projects. I don't like having to do this, but it does fix the problem. You can still use artifacts from another project, and it means you simply end up with one project containing builds that deploy to environments you're happy for all the devs to access, and another project for sensitive environments. The downside is that if the production build fails, the very people you probably want support from won't be able to access it!
Regarding artifact management, there's no problem with retaining these in the development build, just define a clean-up policy that only keeps artifacts from the last X builds if you're worried about capacity. A lot of people want certainty they're deploying the same compiled output to every environment which means once you build it, you want to keep it around for later use.
Once you have these artifacts from your dev deployment, you can re-deploy them to your other environments through separate builds. You'll have an issue with config transforms (assuming you're using them), but have a read of this 2-part series for some ideas on how to address that (I've yet to absorb it in detail, but I believe he's on the right track).
Does that answer your question? Is there anything still missing?
We also used TeamCity as our build server so let me explain our setup.
We have 4 environments
Development: used by devs to verify commits in a server environment
QA: for testing purposes
Staging: for deployment checks and some UAT
Production
We only use TeamCity to deploy to Development (Nightly builds) and to QA (on-demand).
The Dev build uses the trunk branch, and the QA build uses a different branch used for the RC.
Deployment to the Staging and Production are managed by the IT team, and are therefore not automated.
What we do instead is that we use TeamCity to produce artifacts from the QA build. The artifacts are the deployment kits sent for Staging/Production deployments.
That said, I am not sure if TeamCity would provide you complete control over which build can be promoted to which environment. We basically control this on the SVN side with branches, and have different builds for those branches. You could (should) be able to manage this the same way. You can therefore ensure what is getting deployed.
I understand that your needs may be slightly different from ours, but I hope this helps you find the best setup.
I think you might want to check out something like Octopus Deploy or BuildMaster. They provide a nice structure for the deployment practices you're trying to automate. Both tools integrate with TeamCity nicely.
Basically, you'd continue to use TeamCity for CI, and you could also continue to deploy to your development environment with TeamCity too, but you'd use one of the deployment tools to promote an (existing) build to staging and production.
Edit 2014-02-05 – Update
The makers of BuildMaster have a new deployment feature – ProGet Deploy – for their NuGet server tool, ProGet. It's very similar to Octopus Deploy, though I haven't played with it yet myself, so Octopus may have a better visualization of which versions have been deployed to which environments; I still use BuildMaster because of that important feature.
Also, I'm currently using TeamCity, BuildMaster, and ProGet together, and I never want to go back to not having automated builds. Currently, all of my apps are built and deployed via BuildMaster, and all of my library projects are built in TeamCity and deployed to ProGet. Being able to manage my internal dependencies via the NuGet infrastructure is nice.