How to do automated integration tests using xUnit (.NET Core 2.1) and Azure DevOps? - azure-devops

I'm using Team Foundation Version Control as source control for my .NET Core 2.1 project.
Azure DevOps is configured for continuous integration to check out the code and build it.
We have 3 environments (Staging, PreProd, Prod). Staging is not identical to Prod, so it is untrustworthy, and we have to execute our integration tests in each environment with environment-specific data.
My build is generated by an agent in Azure DevOps on an on-premises server which can only reach the Prod environment.
I'd like to automate my xUnit integration tests in an Azure DevOps pipeline; however, I don't know where and how to do it. Am I supposed to execute the integration test step after building? Or after releasing?
It looks like I need to deploy my binaries to my environments first, then execute the integration tests, and, if they go wrong, roll back the release.
Weird?!?
How can I unblock this situation?

If you want to run integration tests, you need to deploy your binaries to the environment first. You can do it as a separate step, stage, or pipeline after deploying the code.
How exactly you do it is up to you (to achieve the last option you need to use pipeline triggers).
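If your code is in a Git repo (multi-stage YAML pipelines don't support TFVC; with TFVC you would model the same stages as a classic build plus release pipeline), the "stage" option looks roughly like this. A minimal sketch only - the stage names, the "Prod" environment, the service connection and the IntegrationTests project path are assumptions, not your actual setup:

    stages:
    - stage: Build
      jobs:
      - job: BuildAndPublish
        steps:
        - task: DotNetCoreCLI@2            # build the solution
          inputs:
            command: 'build'
            projects: '**/*.csproj'
        - task: DotNetCoreCLI@2            # publish the web app into the staging folder
          inputs:
            command: 'publish'
            publishWebProjects: true
            arguments: '--output $(Build.ArtifactStagingDirectory)'
        - publish: $(Build.ArtifactStagingDirectory)
          artifact: drop

    - stage: DeployProd
      dependsOn: Build
      jobs:
      - deployment: Deploy
        environment: 'Prod'                # assumed environment name
        strategy:
          runOnce:
            deploy:
              steps:
              - task: AzureWebApp@1        # or whatever deployment task matches your target
                inputs:
                  azureSubscription: 'MyServiceConnection'   # hypothetical service connection
                  appName: 'my-app'                          # hypothetical app name
                  package: '$(Pipeline.Workspace)/drop/**/*.zip'

    - stage: IntegrationTests
      dependsOn: DeployProd
      jobs:
      - job: RunIntegrationTests
        steps:
        - task: DotNetCoreCLI@2            # run the xUnit integration test project against the deployed app
          inputs:
            command: 'test'
            projects: '**/IntegrationTests/*.csproj'
          env:
            TARGET_BASE_URL: 'https://my-app.example.com'    # hypothetical environment-specific setting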
If you follow the shift-left approach, which means detecting issues as early as possible, you shouldn't worry about breaking an environment. If it happens on Staging I would rather encourage you to fix the issue instead of rolling back the code, especially if it involves a data model change.
On production you can run only smoke tests, which are a kind of integration test that doesn't change state. They are like GET in REST: smoke tests should be idempotent, so you can run them without worrying about changing state.

Since you use TFVC for version control, you could define a build pipeline to build and test your code and then publish artifacts. You would also define a release pipeline to consume and deploy those artifacts to your deployment targets.
As you have to execute the integration tests in each environment with environment-specific data, you can run your xUnit integration tests in the release pipeline via the VSTest task.
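For reference, the equivalent test task that a classic release pipeline would carry after the deployment step looks roughly like this in YAML task form. A hedged sketch - the assembly pattern assumes your integration test DLLs (plus the xunit.runner.visualstudio adapter) are part of the published drop:

    - task: VSTest@2                        # runs the xUnit integration tests found in the drop
      inputs:
        testSelector: 'testAssemblies'
        testAssemblyVer2: |
          **\*IntegrationTests*.dll
          !**\*TestAdapter.dll
          !**\obj\**
        searchFolder: '$(System.DefaultWorkingDirectory)'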

Related

Deployment configured as YAML as part of a Pipeline

We have been using a YAML file to do our CI in Azure DevOps for a few months with the idea that we would get our release configured using YAML in the future.
Well, that time is now, and I'm confused about how to introduce a CD process. With MyProject-CI.yml being a build pipeline and our releases being classic pipelines, I assumed that when the time came to express the CD process as YAML we would create a MyProject-CD.yml, triggered by the dropping of an artifact from the MyProject-CI.yml build.
However, I think that was just a misunderstanding on my part, and what we are supposed to do is convert the original MyProject-CI.yml into a multi-stage pipeline that has the following stages:
Build and Run Unit Tests
Deploy to Development and run WebTests
Deploy to Production and run WebTests
Is the switch to a multi-stage CI/CD pipeline in one file correct, rather than keeping build and release in separate files?
The short answer is yes, you've got the idea. A single multi-stage pipeline YAML file is the way to do both build and deploy, and that is the intended design. Here is an exercise paralleling your case that might help.
As your pipelines get more complex, you will likely end up with multiple files anyway, since you can template parts of your pipeline for reuse in multiple places or to enforce conventions from a central location.
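A hedged skeleton of what the converted MyProject-CI.yml could look like with those three stages; the environment names, URLs and the deploy-and-webtest.yml template are placeholders, not a prescription:

    stages:
    - stage: Build
      jobs:
      - job: BuildAndUnitTest
        steps:
        - script: dotnet build && dotnet test        # placeholder build + unit test steps

    - stage: DeployDev
      dependsOn: Build
      jobs:
      - deployment: DeployDev
        environment: 'Development'                   # assumed environment name
        strategy:
          runOnce:
            deploy:
              steps:
              - template: templates/deploy-and-webtest.yml   # hypothetical reusable step template
                parameters:
                  targetUrl: 'https://dev.myproject.example.com'

    - stage: DeployProd
      dependsOn: DeployDev
      jobs:
      - deployment: DeployProd
        environment: 'Production'
        strategy:
          runOnce:
            deploy:
              steps:
              - template: templates/deploy-and-webtest.yml
                parameters:
                  targetUrl: 'https://www.myproject.example.com'

The step template is where the multiple-files point comes in: the deploy-and-run-WebTests logic lives in one file and is reused by both deployment stages.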

Triggering a build on completion of another build when a Pull Request comes in

We have a web application in an Azure DevOps repo and there's a branch policy on the master branch that kicks off a build when a pull request is created. This validates that it compiles and performs code quality checks and the like.
We also have some integration tests (using Mocha and Selenium) that live in another repo. I would like to run the integration tests when a PR against master is created.
As far as I know I cannot have the same build pull from two different repos (without using extensions, and it seems cleaner to me to have two separate builds anyway). So I thought I would have another build just to run the integration tests. The build that pulls from the webapp repo would have a final step where it deploys to an integration-tests environment, and then the second build would get the latest version of the integration tests and run them against that environment. I created a Build Completion trigger on the integration tests build so that it is triggered by the completion of the webapp build.
The problem is that when I queue the webapp build manually, it will launch the integration tests build when done. But when the webapp build is queued by an incoming PR, the integration tests build does not get triggered.
Is this a bug in Azure DevOps or am I going about this wrong?
On my side, too, builds queued by a PR don't trigger other builds via the Build Completion trigger; I don't know whether it's a bug or by design.
Anyway, there is a workaround: make the final step of the first build trigger the second build. How? With the Trigger Build task.
You just need to change the branch, because the source branch will be a merge branch from the PR that doesn't exist in the tests repository.
You can also do it without installing extensions, using a PowerShell task and the REST API.
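A minimal sketch of the PowerShell/REST variant as the final step of the webapp build. The definition id (42), organization and project names are placeholders, and the job needs access to the OAuth token ($(System.AccessToken); in a classic build, enable "Allow scripts to access the OAuth token"):

    steps:
    - powershell: |
        # Queue the integration-tests build against master rather than the PR merge branch
        $body = @{
          definition   = @{ id = 42 }
          sourceBranch = 'refs/heads/master'
        } | ConvertTo-Json
        Invoke-RestMethod `
          -Uri "https://dev.azure.com/MyOrg/MyProject/_apis/build/builds?api-version=6.0" `
          -Method Post `
          -Headers @{ Authorization = "Bearer $(System.AccessToken)" } `
          -ContentType "application/json" `
          -Body $body
      displayName: 'Trigger integration tests build'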

Maintainability of TFS xaml build vs TFS vNext build vs Octopus Deploy

My question is about the maintainability of vNext/Octopus processes vs XAML-based processes. Or rather, about the impossibility of maintaining them sanely, which leads me to believe we are doing something terribly wrong.
Given:
Microsoft pushes to phase out its TFS XAML builds in favour of the vNext builds
Octopus Deploy is a popular deployment automation framework
We have many XAML based builds, but starting to port to vNext
The deployments are automated with Octopus Deploy
Concretely, we have three kinds of builds going on in QA:
Old XAML based compilation builds producing artifacts to be deployed
Ultimately just builds the code, zips it and places it in a well-known location
New vNext compilation builds producing artifacts to be deployed
Same as above
Deployment builds
XAML-based build definition per deployment environment. This is the source of truth for the particular deployment, containing all the configuration URLs, connection strings, certificate thumbprints, etc. The build definition has over 100 build parameters. Each time a new environment is set up we clone an existing XAML build definition and change the parameters.
This build unpacks the build artifact, generates all the web/app config files based on the configuration parameters and kicks off Octopus Deploy with a lot of parameters using octo.exe and waiting for the end
Octopus Deploy process
Creates 3 packages from the build artifact previously unpacked by the XAML build to match three areas of deployment - web farm, background job engine cluster and the database
Delivers the relevant packages to the relevant tentacles.
The tentacles unpack and setup their respective packages
So, if we have 50 deployment environments, then we have 50 XAML deployment builds, each capturing the context of the respective environment. But the XAML deployment build delegates the deployment job to Octopus and here we are forced to have 50 Octopus projects - one per deployment.
Why is it so? We examined the option of having just one Octopus project, but what would be the release versions of such a project? In order for us to be able to navigate amongst the gazillion releases, the release version must include:
The build version of the deployed code, e.g. 55.0.18709.3
The name of the deployment environment, e.g. atwfm
Using the example above this gives us 55.0.18709.3-atwfm, but sometimes we want to deploy the same build artifact in the same deployment environment twice. The only Octopus project would already have the release 55.0.18709.3-atwfm, so how do we deploy 55.0.18709.3 to atwfm again without deleting the already existing release?
We could not find a workaround, and so we have an Octopus project per deployment.
THIS IS ABSOLUTELY CRAZY because Octopus projects are a pain to update. Suppose we need to add a step - go do it in 50 projects. There is plenty of advice on the Internet to use automation to edit multiple projects. Not ideal at all.
vNext, BTW, has the same problem. If I am to port the existing XAML builds to vNext I will end up with 50 vNext deployment builds. If I decide to add a step, I need to do it in all 50 builds!!!
Note that XAML builds do not have this problem (they have many others, though), because their process is separate from the parameters. I can modify the workflow once and all the XAML builds are updated with the new process change.
My question is - how do people work with vNext and Octopus, because our process drives me crazy. There must be a better way.
EDIT 1
I would like to clarify. We sometimes want to deploy the same build artifacts twice. We are not recompiling them and reusing the same version. No. We already have the build artifacts handy, with the build version baked inside the artifact. We may want to deploy it a second time into the same environment because, for example, some databases in that environment were misconfigured, this is now fixed, and we need to redeploy. This does not mean we can rerun the already existing Octopus release, because the fix may involve tweaking the deployment parameters of the respective XAML deployment build definition. Hence we may be forced to restart the XAML deployment build, which never compiles code.
EDIT 2
First of all, why do we drive the deployment from TFS XAML builds rather than from Octopus? Historic reasons. We did not have Octopus at first. The deployment was done by our ad hoc code. When we introduced Octopus we decided to keep the XAML deployment builds for two reasons:
To save the cost of migrating all the XAML deployment builds with all the gazillion deployment parameters to Octopus. Maybe it was a wrong decision, maybe we could have automated the migration.
Because TFS has a better facility for displaying test results. The deployment may end with deployment smoke tests, and their results have to be published somewhere. We do not see how Octopus can help us publish the results; TFS can.
Why would one redeploy? For example, one of the deployment parameters is the certificate thumbprint. When the certificate is renewed, this parameter must be changed (we do have automation for updating XAML build parameters). But often we discover that the application was already deployed with the wrong thumbprint. So we fix the deployment and redeploy. Or we discover some strange behaviour of the deployed application and wish to redeploy with some extra tracing/debugging features.
There is a lot to unpack here, but I'll give it a go.
TL;DR It's the way you version the releases that's causing you all the pain. Change that and everything else will fall into place.
Let's start at the end and work backwards.
Octopus Deploy has a concept of Environments. This means that you can deploy the same project to multiple environments and use Octopus's scoping mechanism to manage environment-specific configuration.
So, using your example:
Creates 3 packages from the build artifact previously unpacked by the XAML build to match three areas of deployment - web farm, background job engine cluster and the database
I set up an Environment in Octopus for each of your 50 Environments. (I'll use 3 environments in the example to keep it simple, but the principles apply no matter how many environments you have)
In my Dev Environment I have a single server so I create an environment called "Dev" and add the tentacle for that specific server. Then I tag the tentacle with the deployment type "Web", "Job", "Database"
I then set up a test environment which has 3 servers so I create the Environment and add the 3 servers. I then tag each tentacle with the deployment type "Web", "Job", "Database"
Finally I set up the Production environment. This has 5 web servers, 1 job server and 1 database server. I add all 7 tentacles to the environment, and tag them appropriately.
Now I only need 1 project to deploy to all 3 environments. In my project I have 3 steps.
Step 1 Deploy Web Site
Step 2 Deploy Jobs
Step 3 Deploy database
I can tag each of these steps to say what kind of tentacle I want to deploy to. Now when I run the deployment the link between the tags on the step, and the tags on the tentacle mean Octopus knows where to deploy the code.
Variables: Your variables can be scoped to an environment. So, for example, if your dev environment database connection string is dev.database.net/Instance and your test environment database connection string is test.Database.net/Instance, then you can scope these in the variables section of the project. If your DNS is consistent with your environment names you could even use some of the built-in variables to make adding environments easier, e.g. #{Octopus.Environment.Name}.Database.net/Instance.
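As a tiny illustration (the variable name, values and environment names are placeholders), the same variable either carries one value per environment scope or a single templated value when DNS follows the environment names:

    DatabaseConnectionString = dev.Database.net/Instance      (scoped to: Dev)
    DatabaseConnectionString = test.Database.net/Instance     (scoped to: Test)
    DatabaseConnectionString = #{Octopus.Environment.Name}.Database.net/Instance   (unscoped fallback)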
Releases and version numbers: So here is where I think your problem lies. Adding the environment name to the release and trying to create multiple releases with the same version is basically causing you all of the pain.
Using the example above this gives us 55.0.18709.3-atwfm, but sometimes we want to deploy the same build artifact in the same deployment environment twice. But the only Octopus project would already have the release 55.0.18709.3-atwfm, so how to deploy 55.0.18709.3 in atwfm again, without deleting the already existing release?
There are a couple of things here. In Octopus you can easily deploy again from the UI, however it sounds like you're rebuilding the artifact and trying to create a new release with the same version number. Don't do this! Each new build should have a distinct and unique build number / release number.
The principle I follow is "build once deploy many"
When you create a release it requires a version number, and this release then flows through the environments. So I build my code and it gets a version number, 55.0.18709.3, and then I deploy it to Dev. When the deployment has been verified I then want to "Promote" the release to Test. I can do this from within Octopus, or I can get TFS to do it.
So I promote 55.0.18709.3 to test and then on to prod. If I need to know which release is in which environment, Octopus tells me this via the dashboard or API.
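A hedged sketch of that flow with octo.exe (server URL, API key, project and environment names are placeholders); note that the release number stays the same and only the target environment changes:

    # create the release once, from the packaged build output
    octo create-release --project "MyWebApp" --version 55.0.18709.3 --server https://octopus.example.com --apiKey API-XXXXXXXX

    # deploy / promote that same release through the environments
    octo deploy-release --project "MyWebApp" --version 55.0.18709.3 --deployTo "Dev"  --server https://octopus.example.com --apiKey API-XXXXXXXX
    octo deploy-release --project "MyWebApp" --version 55.0.18709.3 --deployTo "Test" --server https://octopus.example.com --apiKey API-XXXXXXXX
    octo deploy-release --project "MyWebApp" --version 55.0.18709.3 --deployTo "Prod" --server https://octopus.example.com --apiKey API-XXXXXXXX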
Finally I can "Orchestrate" the flow of releases through my environments using Build v.next.
So my end to end process looks something like.
Build vNext Build
Compile
Run Unit Tests
Package output
Publish package
build vNext Release
Call Octopus to create the release passing in the version number
Optionally deploy the release to the first environment on your way to live
I now have everything I need in Octopus to deploy to ANY environment with a single project and my environment specific configuration.
I can either "Deploy" the release to a specific environment or "Promote" the release from one environment to another. This can be done easily from within the Octopus UI
Or I can perform a "Promote" using the Octopus plugin in TFS and use that to orchestrate the promotion of code through the environments.
Octopus Terminology.
Create release - this pulls together the artifacts and release number in Octopus to create an immutable thing which will be deployed to one or more environments.
Deploy release - the act of pushing your code to a specific environment.
Promote release - once the code has been deployed into a single environment, it can then be promoted into other environments.
If you have a specific sequence of environments then you can use the "Lifecycles" feature of Octopus to enforce that workflow. But that's a topic for another day!
EDIT1 Response
I don't think the edit changes my answer; you can re-deploy the same release as many times as you like. What you cannot do is create a new release with the same version number. You might want to decouple these steps. Could you add some more detail about what changes in the XAML build? You can change variables in a release: update them in Octopus and then redeploy.
EDIT 2 Response
That makes things clearer. I think you need to take the hit and migrate the parameters to Octopus. Its variable management is much better than that of XAML builds, and although build vNext is comparable to Octopus, it makes more sense to have the config in Octopus. As XAML builds are on their way out, it makes sense to move this stuff now. Whilst it might be a lot of work, at the end you'll have a much smoother workflow and you can really take advantage of the power of Octopus.
The Test results point. I agree this is better suited to build vNext, so at this point you will be using build vNext as your Orchestrator and Octopus Deploy as your release management tool.
The process would look something like
Build vNext
Compile code.
Run Unit tests
Run Octopack
Publish packages
Call Octopus and Create release
Call Octopus to Deploy to "Dev"
Run Smoke tests
Run Integration Tests
Call "Selenium" to run Run UI tests
Call Octopus to Promote release to "Test"
Run Smoke tests
Run Integration Tests
Call "Selenium" to run Run UI tests
Call Octopus to Promote release to "Production" (Perhaps with a manual innervation)
Run Smoke tests
Run Integration Tests
Call "Selenium" to run Run UI tests

How to test Concourse pipelines

My team has multiple Concourse pipelines and as we refactor tasks, we've realized the need to test our actual pipelines.
We already test our tasks by using environment variables enabling task scripts to be run locally, but the pipeline yaml is another matter.
What is the best way to accomplish testing of the pipeline itself?
You can use the Concourse Pipeline Resource to monitor the git repository where you keep your pipeline config. Whenever the pipeline resource detects a change, it will automatically run a fly set-pipeline to update the config in your running Concourse installation. From there, it's easy to script tests against the updated pipeline that is now running in your Concourse installation.
fly validate-pipeline is pretty useful; running it against pipelines before merging has caught a few bugs in "obviously correct" changes for me.
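For reference, a minimal sketch of both approaches from the command line (the target name "ci", pipeline and file names are placeholders):

    # catch config errors before merging, without touching the running Concourse
    fly validate-pipeline -c pipeline.yml

    # or push the config to a throwaway copy of the pipeline and exercise it end to end
    fly -t ci set-pipeline -p my-pipeline-test -c pipeline.yml -l test-params.yml
    fly -t ci unpause-pipeline -p my-pipeline-test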
If you want to test the whole pipeline before merging you need to make sure that the data it's using is static and working (no sense in failing the pipeline if it's the repo that's broken), and that there are no side effects (like notifications) shared between the 'real pipeline' and the 'test pipeline'. I suspect that as long as you're careful with the restrictions, you could make it work, but it would have to be designed in the context of your existing pipelines and infrastructure.

Force rules for build and deployment

Our web project is source-controlled with SVN. It contains an MSBuild file to produce local, test and production builds. We also use CruiseControl.NET to deploy production and test versions to servers manually (not after every commit).
The question is: how do we check that, when a production deployment is done using CC.NET, the web project was built using the production build (not the test one or another)? How do we force specific steps to be executed when building and deploying to production (like compressing JS and CSS, compiling with debug="false", etc.)? Right now it is possible for every developer to make changes in the MSBuild file (so he/she can forget to compress JS in the production build, etc.).
I used CruiseControl with NAnt extensively, but not MSBuild. Do you have different MSBuild files for each build type (i.e. local/test/prod)? Could you have a single one that can be parameterized such that your CCNet integrators explicitly pass the appropriate options for the target environment? That's how I have my continuous-integration versus release-candidate builds configured in CCNet: a single master NAnt build script and a different CC integrator for each target environment with the necessary build parameters (be they targets or property values). I imagine you could do something quite similar with MSBuild variables/targets.
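For instance, a hedged sketch of what the two CCNet-driven invocations might look like if the single build file exposes an Environment property and production-only switches (the file, property and target names are all placeholders):

    msbuild Build.proj /t:Package /p:Environment=Test       /p:Configuration=Release
    msbuild Build.proj /t:Package /p:Environment=Production /p:Configuration=Release /p:CompressStaticAssets=true

Each CC.NET project then hard-codes the property values for its target environment, so the production-only steps are chosen by the CC.NET configuration rather than by whatever a developer last remembered to run.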