When using templates to deploy infrastructure, is it expected to run your ARM template on every deployment, or are you supposed to run the ARM template once to set up the infrastructure and create another pipeline that deploys to the infrastructure that was set up by ARM?
Option 1: Run ARM once; deploy build artifacts -> repeat.
Option 2: Run ARM, then deploy build artifacts -> repeat.
Depends how you want to set up your test environments. In my system I deploy each branch to a new test environment, instead of using a single resource instance as the "test" instance and deploying to that. So I do run ARM template deployments as part of the deployment pipeline. I place the deployment scripts and ARM templates for a microservice in the same repository as the code. This gives me the coherence I am looking for, as infra, backend, and frontend all live together in one repository for a microservice.
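For illustration, a per-branch ARM deployment can be a couple of Az PowerShell calls in the pipeline. A minimal sketch, assuming the Az module and a template checked in under ./infra (resource names, paths, and region are placeholders):

```powershell
# Derive an environment name from the branch being built.
# BUILD_SOURCEBRANCHNAME is the Azure DevOps predefined variable.
$branch = $env:BUILD_SOURCEBRANCHNAME
$rgName = "rg-myservice-test-$branch"   # hypothetical naming convention

# Create (or reuse) the branch's resource group, then deploy the template.
New-AzResourceGroup -Name $rgName -Location "westeurope" -Force
New-AzResourceGroupDeployment -ResourceGroupName $rgName `
    -TemplateFile "./infra/azuredeploy.json" `
    -TemplateParameterFile "./infra/azuredeploy.parameters.json"
```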
Wanted to throw in an opinion for the other side. I highly recommend rerunning your ARM infrastructure deployments every release, or at least setting up a scheduled deployment. Yes, it may take a little more time, perhaps a few extra minutes depending on your resources. However, in larger organizations, and in particular in lower environments where developers or others may have at least Contributor access, there is a risk of drift. By rerunning the ARM templates on each deployment you guarantee the state matches your template, without having to add or set up any policy logic.
Plus, I'd say it's the ultimate confidence in your infrastructure as code: you are 100% confident your template is rerunnable.
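As a hedged sketch of what that looks like with the Az PowerShell module: rerunning the same deployment is idempotent, and complete mode will even remove resources that have drifted in (the resource group name and paths below are placeholders):

```powershell
# Rerun the template on every release so the resource group converges
# back to the template. -Mode Complete also deletes resources that are
# not defined in the template; use with care in shared environments.
New-AzResourceGroupDeployment -ResourceGroupName "rg-myservice-prod" `
    -TemplateFile "./infra/azuredeploy.json" `
    -TemplateParameterFile "./infra/azuredeploy.parameters.json" `
    -Mode Complete
```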
Well, there is no single answer to this one, but in my book it doesn't make sense to run the ARM template if there are no changes to it. You should have a separate repo for IaC code, or a separate build for the ARM template.
From my point of view, whether to re-run the ARM template depends on whether your project's infrastructure and configuration have changed.
If the structure and configuration of the project you build are not changing, you do not need to run the ARM template multiple times. You can deploy the build artifacts directly to the same resources.
On the other hand, if your project requires new resources or parameters, you can update or create resources by editing the template configuration file (generally a JSON file). This allows the deployed environment to meet the needs of your project.
In short, there is no absolute answer to this topic; it depends on your needs.
I'm new to the CI/CD world and now I would like to implement these workflows in my development process.
I would like to understand how to properly set up build and release pipelines to manage Dev, Test, and Prod environments when Dev, Test, and Prod have slight differences.
I'm making an ASP.NET Core app; the code is hosted in Azure DevOps, which I will also use for build and release. For the client-side code (js and css) I use TypeScript and SASS, and to compile to js and css I use npm scripts.
In the Dev environment I want to deploy the non-minified js and css plus the sourcemap files; in the Test environment I want the minified js and css and the sourcemap files; in the Prod environment I want only the minified version of my css and js.
This case is taken only as a practical example, but I would like to understand the general rule, which I can apply regardless of the kind of app or the hosting, build, and release platforms.
As an additional note, I understand that this case is pretty trivial and could be managed easily without too much ceremony, but I would like to understand the guidelines and best practices; then I will choose what is appropriate to my particular case and adapt accordingly.
Now I can choose between different options.
1. Manage the differences at the build stage:
1.1. One build pipeline that produces the "standard" client code, the sourcemaps, and the minified versions, and deploys the same artifacts to Dev, Test, and Prod.
1.2. A different build pipeline for each environment.
1.3. One build pipeline that uses conditional tasks (see the sketch after this list).
2. Manage the differences at the release stage:
2.1. Build the code using option 1.1 and then exclude the files I don't need in the release pipeline.
2.2. Build only the server-side code in the build pipeline and compile the client-side code during the release pipeline.
2.3. Compile the standard version of the js and css files in the build pipeline; in the release pipelines, produce the sourcemaps or minify the js and css.
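To make option 1.3 concrete, here is a minimal sketch of a single environment-parameterized build script; the npm script names are assumptions about how package.json might be organized:

```powershell
# Hypothetical single build script driven by an environment parameter.
# The npm script names ("build:dev" etc.) are placeholders.
param([ValidateSet("Dev", "Test", "Prod")][string]$Environment)

switch ($Environment) {
    "Dev"  { npm run "build:dev"  }   # non-minified js/css plus sourcemaps
    "Test" { npm run "build:test" }   # minified js/css plus sourcemaps
    "Prod" { npm run "build:prod" }   # minified js/css only
}
```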
I don't like option 1.1 because I don't like having useless files spread all over the place, and it adds extra steps to the build pipeline that aren't necessary.
Options 1.2 and 1.3 add some complexity to the build pipelines.
With the 2.x options we have "incomplete" builds, because the output of the build process lacks some files that are required by the deployment environment.
To me, not knowing the guidelines and best practices for CI and CD workflows, it seems that the most appropriate option is 1.3 or 2.3.
If I'm not wrong, the question now becomes:
Is it acceptable to have build pipelines that produce artifacts which are not entirely shippable because they don't meet the requirements of the deployment environment (like the need to have the sourcemaps in the Dev environment)?
Ciao Leoni,
I've been a release manager for a number of years, and I understand your pain. In the system I worked on the sequence was something like this:
1: from the development domain to a staging server
2: from the staging server to a penetration & vulnerability testing environment
3: from the testing domain to SaaS production domain and DML repository.
4: from production domain to an escrow and installed cut.
My recommendation is that all tidying up, such as removal of developers' back-up routines (named following a strict convention) and minification, is done on the staging server. We allowed minor bug fixes to be applied to the staging server code, and then 'fix pack' releases were cut. Once the code is in the penetration & vulnerability testing environment, our practice was that the code itself must not change: only the security settings between domains and for the escrow/installed release.
Once a documented process is agreed to, it's easy for people to use it as a check sheet. Your processes may need to be different from what I've outlined above, and they should be expected to be refined over time. I know many people who do not like documented procedures, but I've documented some benefits here:
http://www.esm.solutions/wp/change-management/
A presto, Robert
My question is about the maintainability of vNext/Octopus processes vs XAML-based processes. Or rather, about the apparent impossibility of maintaining them sanely, which leads me to believe we are doing something terribly wrong.
Given:
Microsoft pushes to phase out its TFS XAML builds in favour of the vNext builds
Octopus Deploy is a popular deployment automation framework
We have many XAML based builds, but starting to port to vNext
The deployments are automated with Octopus Deploy
Concretely, we have three kinds of builds going on in QA:
Old XAML based compilation builds producing artifacts to be deployed
Ultimately just builds the code, zips it, and places it in a well-known location
New vNext compilation builds producing artifacts to be deployed
Same as above
Deployment builds
XAML-based build definition per deployment environment. This is the source of truth for the particular deployment, containing all the configuration URLs, connection strings, certificate thumbprints, etc. The build definition has over 100 build parameters. Each time a new environment is set up, we clone an existing XAML build definition and change the parameters.
This build unpacks the build artifact, generates all the web/app config files based on the configuration parameters, and kicks off Octopus Deploy with a lot of parameters using octo.exe, waiting for it to finish (see the sketch below)
Octopus Deploy process
Creates 3 packages from the build artifact previously unpacked by the XAML build to match three areas of deployment - web farm, background job engine cluster and the database
Delivers the relevant packages to the relevant tentacles.
The tentacles unpack and setup their respective packages
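To give a sense of the hand-off mentioned above, the XAML deployment build essentially shells out to octo.exe with something like the sketch below; the project name, server URL, and variable are placeholders, and the real call passes far more parameters:

```powershell
# Hypothetical hand-off from the XAML deployment build to Octopus.
# --waitfordeployment makes the build block until Octopus finishes.
& octo.exe create-release `
    --project "MyService" `
    --version "55.0.18709.3-atwfm" `
    --deployto "atwfm" `
    --variable "CertThumbprint:<thumbprint>" `
    --server "https://octopus.example.com" `
    --apiKey $env:OCTOPUS_API_KEY `
    --waitfordeployment
```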
So, if we have 50 deployment environments, then we have 50 XAML deployment builds, each capturing the context of the respective environment. But the XAML deployment build delegates the deployment job to Octopus and here we are forced to have 50 Octopus projects - one per deployment.
Why is it so? We examined the option of having just one Octopus project, but what would the release versions of such a project be? In order for us to be able to navigate amongst the gazillion releases, the release version must include:
The build version of the deployed code, e.g. 55.0.18709.3
The name of the deployment environment, e.g. atwfm
Using the example above this gives us 55.0.18709.3-atwfm, but sometimes we want to deploy the same build artifact in the same deployment environment twice. But the only Octopus project would already have the release 55.0.18709.3-atwfm, so how to deploy 55.0.18709.3 in atwfm again, without deleting the already existing release?
We could not find a workaround, and so we have an Octopus project per deployment.
THIS IS ABSOLUTELY CRAZY, because Octopus projects are a pain to update. Suppose we need to add a step - go do it in 50 projects. There is great advice on the Internet to use automation to edit multiple projects. Not ideal at all.
vNext, BTW, has the same problem. If I am to port the existing XAML builds to vNext I will end up with 50 vNext deployment builds. If I decide to add a step, I need to do it in all the 50 builds!!!
Note that XAML builds do not have this problem (they have many others, though), because their process is separate from the parameters. I can modify the workflow once and all the XAML builds are updated with the new process change.
My question is - how do people work with vNext and Octopus, because our process drives me crazy. There must be a better way.
EDIT 1
I would like to clarify. We sometimes want to deploy the same build artifacts twice. We are not recompiling them and reusing the same version. No. We already have the build artifacts handy, with the build version baked inside the artifact. We may want to deploy it a second time into the same environment because, for example, some databases in that environment were misconfigured; now that this is fixed, we need to redeploy. This does not mean we can rerun the already existing Octopus release, because the fix may involve tweaking the deployment parameters of the respective XAML deployment build definition. Hence we may be forced to restart the XAML deployment build, which never compiles code.
EDIT 2
First of all, why do we drive the deployment from TFS XAML builds rather than from Octopus? Historic reasons. We did not have Octopus at first; the deployment was done by our ad hoc code. When we introduced Octopus we decided to keep the XAML deployment builds for two reasons:
To save the cost of migrating all the XAML deployment builds with all the gazillion deployment parameters to Octopus. Maybe it was a wrong decision, maybe we could have automated the migration.
Because TFS has a better facility for displaying test results. The deployment may end with deployment smoke tests, and their results have to be published somewhere. We do not see how Octopus can help us publish the results; TFS can.
Why would one redeploy? For example, one of the deployment parameters is a certificate thumbprint. When the certificate is renewed, this parameter must be changed (we do have automation for updating XAML build parameters). But often we discover that it was already deployed with the wrong thumbprint. So we fix the deployment and redeploy. Or we discover some strange behavior in the deployed application and wish to redeploy with some extra tracing/debugging features.
There is a lot to unpack here, but I'll give it a go.
TL;DR It's the way you version the releases that's causing you all the pain. Change that and everything else will fall into place.
Let's start at the end and work backwards.
Octopus Deploy has a concept of environments. This means that you can deploy the same project to multiple environments and use Octopus's scoping mechanism to manage environment-specific configuration.
So, using your example: "Creates 3 packages from the build artifact previously unpacked by the XAML build to match three areas of deployment - web farm, background job engine cluster and the database"
I set up an environment in Octopus for each of your 50 environments. (I'll use 3 environments in the example to keep it simple, but the principles apply no matter how many environments you have.)
In my Dev environment I have a single server, so I create an environment called "Dev" and add the tentacle for that specific server. Then I tag the tentacle with the deployment types "Web", "Job", and "Database".
I then set up a Test environment which has 3 servers, so I create the environment and add the 3 servers. I then tag each tentacle with the appropriate deployment type: "Web", "Job", or "Database".
Finally, I set up the Production environment. This has 5 web servers, 1 job server, and 1 database server. I add all 7 tentacles to the environment and tag them appropriately.
Now I only need 1 project to deploy to all 3 environments. In my project I have 3 steps.
Step 1: Deploy web site
Step 2: Deploy jobs
Step 3: Deploy database
I can tag each of these steps to say what kind of tentacle I want to deploy to. Now when I run the deployment the link between the tags on the step, and the tags on the tentacle mean Octopus knows where to deploy the code.
Variables: your variables can be scoped to an environment. So, for example, if your dev environment database connection string is dev.database.net/Instance and your test environment database connection string is test.database.net/Instance, then you can scope these in the Variables section of the project. If your DNS is consistent with your environment names, you could even use some of the built-in variables to make adding environments easier, e.g. #{Octopus.Environment.Name}.database.net/Instance
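As an aside, a PowerShell script step running on a tentacle sees those scoped values already resolved for the current environment. A minimal sketch (the variable name is an assumption):

```powershell
# Inside an Octopus PowerShell script step, scoped variables arrive
# already resolved for the environment being deployed to.
# "Database.ConnectionString" is a hypothetical variable name.
$connectionString = $OctopusParameters["Database.ConnectionString"]
Write-Host "Deploying against $connectionString"
```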
Releases and version numbers: So here is where I think your problem lies. Adding the environment name to the release and trying to create multiple releases with the same version is basically causing you all of the pain.
"Using the example above this gives us 55.0.18709.3-atwfm, but sometimes we want to deploy the same build artifact in the same deployment environment twice. But the only Octopus project would already have the release 55.0.18709.3-atwfm, so how to deploy 55.0.18709.3 in atwfm again, without deleting the already existing release?"
There are a couple of things here. In Octopus you can easily deploy again from the UI; however, it sounds like you're rebuilding the artifact and trying to create a new release with the same version number. Don't do this! Each new build should have a distinct and unique build/release number.
The principle I follow is "build once, deploy many".
When you create a release it requires a version number, and this release then flows through the environments. So I build my code and it gets version number 55.0.18709.3, then I deploy it to Dev. When the deployment has been verified, I "promote" the release to Test; I can do this from within Octopus, or I can get TFS to do it.
So I promote 55.0.18709.3 to Test and then on to Prod. If I need to know which release is in which environment, Octopus tells me this via the dashboard or API.
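From a script, that promotion is a single octo.exe call; a sketch with placeholder project and server names:

```powershell
# Deploy an existing release (no rebuild) to the next environment.
& octo.exe deploy-release `
    --project "MyService" `
    --version "55.0.18709.3" `
    --deployto "Test" `
    --server "https://octopus.example.com" `
    --apiKey $env:OCTOPUS_API_KEY
```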
Finally, I can "orchestrate" the flow of releases through my environments using Build vNext.
So my end to end process looks something like.
Build vNext Build
Compile
Run Unit Tests
Package output
Publish package
Build vNext Release
Call Octopus to create the release passing in the version number
Optionally deploy the release to the first environment on your way to live
I now have everything I need in Octopus to deploy to ANY environment with a single project and my environment specific configuration.
I can either "deploy" the release to a specific environment or "promote" the release from one environment to another; this can be done easily from within the Octopus UI.
Or I can create a "Promote" step using the Octopus plugin in TFS and use that to orchestrate the promotion of code through the environments.
Octopus Terminology.
Create release - this pulls together the artifacts and release number in Octopus to create an immutable thing which will be deployed to one or more environments.
Deploy release - the act of pushing your code to a specific environment.
Promote release - once the code has been deployed into a single environment, it can then be promoted into other environments.
If you have a specific sequence of environments then you can use the "Lifecycles" feature of Octopus to enforce that workflow. but that's a topic for another day!
EDIT 1 Response
I don't think the edit changes my answer: you can redeploy the same release as many times as you like; what you cannot do is create a new release with the same version number. You might want to decouple these steps. Could you add some more detail about what changes in the XAML build? You can change variables in a release: you can update them in Octopus and then redeploy.
EDIT 2 Response
That makes things clearer. I think you need to take the hit and migrate the parameters to Octopus. Its variable management is much better than that of XAML builds, and although build vNext is comparable to Octopus, it makes more sense to have the config in Octopus. As XAML builds are on their way out, it makes sense to move this stuff now. Whilst it might be a lot of work, at the end you'll have a much smoother workflow and you can really take advantage of the power of Octopus.
On the test results point, I agree this is better suited to build vNext; so at this point you would be using build vNext as your orchestrator and Octopus Deploy as your release management tool.
The process would look something like
Build vNext
Compile code.
Run Unit tests
Run OctoPack
Publish packages
Call Octopus and Create release
Call Octopus to Deploy to "Dev"
Run Smoke tests
Run Integration Tests
Call "Selenium" to run Run UI tests
Call Octopus to Promote release to "Test"
Run Smoke tests
Run Integration Tests
Call "Selenium" to run Run UI tests
Call Octopus to Promote release to "Production" (Perhaps with a manual innervation)
Run Smoke tests
Run Integration Tests
Call "Selenium" to run Run UI tests
I am working on a CI process with automated deployment. TFS Build builds the solution, and it then uses an InvokeProcess task to kick off a PowerShell script. The PowerShell script deploys the database changes as a dacpac using SqlPackage, Reporting Services reports using the web service, fonts to the SSRS server, and the website itself to one or more web servers. The whole process uses a deployment configuration file to define drop paths, server IPs, installation folders, etc. There will be one of these per environment.
I would like to be able to build the solution and deploy to an internal server to run automated tests as part of the automated build. Once tests are completed, and the build has been manually checked, I'd then like to be able to kick off another Build definition which only has the deployment portion of the standard build template, which will simply take a build number or build drop location, and deploy the same build to a different environment (i.e. staging, prod etc.)
The issue I have is that I'm currently managing most of my web/app configuration using config file transformation - i.e. I have build definitions for Debug, Test, Prod, etc., and then Web.Debug.config, Web.Test.config, etc. I only want to carry out one build and then deploy that same build to different environments; however, at the moment the build will only generate configuration files for one environment - i.e. whatever the build configuration is.
Would the best approach be to generate all config files (or actually pre-create complete config files for each environment) and then just choose the appropriate one for the specific deployment? Or should I store the env-specific config in my deployment configuration file and update the appropriate keys using PowerShell when deploying (see the sketch below)?
What would be the normal/recommended approach here?
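For the PowerShell option, a minimal sketch of what a deploy-time key update could look like (the file path and key name are assumptions):

```powershell
# Rewrite a single appSettings key in the deployed Web.config.
# Path and key are hypothetical; the values would come from the
# per-environment deployment configuration file.
$configPath = "C:\deploy\site\Web.config"
[xml]$config = Get-Content -Path $configPath -Raw
$node = $config.SelectSingleNode("//appSettings/add[@key='Environment']")
$node.SetAttribute("value", "Staging")
$config.Save($configPath)
```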
I'd suggest creating new configurations for each target environment (e.g. by default you have Debug/Release; create some more). Then use the built-in web.config transforms; for non-web projects use Slow Cheetah.
This will spit out pre-configured build outputs for each configuration you specify you want built (in your build definition).
I have an automated deployment process for a Java app. Currently I build the app on a build machine, check the build into SCM, and have the production machine pull the build artifact (a zip) and, through Ant, move the class and config files to where they're supposed to be.
I've seen other strategies where the production machine pulls the source from scm and builds it itself.
The thing I don't like about the former approach is that if I'm building for production instead of staging or dev, I have to manually specify the environment in the build. If the target server were in charge of this, there would be less thought and friction involved in the build. However, I also like using the exact same build that was tested on staging.
So, I guess my question is: is it preferred to copy the already-built, already-tested app to production, or to have production build the app again once it's been tested?
If you already have an automated build system that is creating a testing build, how hard is it to extend it so that it builds both a testing build and a production build at the same time? This way you get the security of knowing they were built from the exact same checked-out source, and you have less manual labor. I really cringe at the idea of checking built artifacts into SCM!
I always prefer keeping as little as humanly possible on a production server - less to update, less to go wrong.