SonarQube + Azure DevOps + Pipeline as Code - Is it possible?

The company I work for recently purchased SonarQube Enterprise to improve code quality across all repositories. I found out that there is a feature that lets SonarQube comment automatically on PRs targeting a specific branch, and I successfully managed to try it out.
Thing is:
That configuration is not scalable: I would need to configure every repo manually to follow that rule
That configuration needs a build pipeline defined "old school" on Azure DevOps to work, and we are moving to Pipeline as Code, starting of course with CI (which is where this takes place)
Has anyone managed to get PR commenting working in that scenario? Or, at least, solved problem #1?
Cheers

You can use the REST APIs to do whatever configuration you need across your repositories. Refer to the REST API documentation.
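As a rough, untested sketch of that approach (assumptions: Python with the requests library, a PAT with Code and Policy scopes, placeholder organization/project/pipeline values, and the commonly documented GUID for the build-validation policy type - verify it via GET _apis/policy/types on your instance), you could enumerate every repository in a project and attach a build-validation branch policy, which is what triggers PR builds and therefore SonarQube PR decoration:

```python
# Sketch: apply a build-validation branch policy to every repo in a
# project via the Azure DevOps REST API. All names/IDs below are
# placeholders; in practice each repo would likely reference its own
# CI pipeline rather than one shared BUILD_DEFINITION_ID.
import requests

ORG = "your-org"            # placeholder organization
PROJECT = "your-project"    # placeholder project
PAT = "your-pat"            # personal access token
BUILD_DEFINITION_ID = 42    # placeholder ID of the pipeline running SonarQube
# Commonly documented GUID for the "Build" policy type - confirm on your instance.
BUILD_POLICY_TYPE = "0609b952-1397-4640-95ec-e00a01b2c241"

base = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis"
auth = ("", PAT)  # a PAT is sent as the password in basic auth

# Enumerate every Git repository in the project.
repos = requests.get(f"{base}/git/repositories?api-version=6.0",
                     auth=auth).json()["value"]

for repo in repos:
    policy = {
        "isEnabled": True,
        "isBlocking": True,
        "type": {"id": BUILD_POLICY_TYPE},
        "settings": {
            "buildDefinitionId": BUILD_DEFINITION_ID,
            "displayName": "SonarQube PR analysis",
            "queueOnSourceUpdateOnly": True,
            "validDuration": 720,  # policy expiry in minutes
            "scope": [{
                "repositoryId": repo["id"],
                "refName": "refs/heads/master",  # branch targeted by PRs
                "matchKind": "Exact",
            }],
        },
    }
    resp = requests.post(f"{base}/policy/configurations?api-version=6.0",
                         json=policy, auth=auth)
    print(repo["name"], resp.status_code)
```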
It shouldn't matter, although I haven't tested it. The SonarQube tasks aren't aware of whether the build source is YAML or a visual designer/classic/JSON build. The underlying tasks and job-running architecture are the same. As long as the build is hooked up to a branch policy, it should still work.

Related

Azure DevOps Release Pipeline using Packaged Build and Publish Profile

I am trying to create a release pipeline in Azure DevOps. We already have a functioning build pipeline that works well, it is able to package the build with VSBuild and publish it as an artifact. Then in the release pipeline I am using an IIS Deployment job (which includes IIS Manage and IIS Deploy tasks) and it gets that artifact to deploy.
The problem is that we already have a publish profile (.pubxml) that should take care of pretty much everything the IIS Deployment is doing (at least as far as I understand it). So to me it seems I have two options that don't require me to refactor the project configuration itself.
I can try to mimic the settings on the IIS Deployment job to match our .pubxml as closely as possible and manually apply any changes that aren't doable through the task settings. Obviously this is not ideal, as it would require us to update both whenever we make changes, and it introduces a large chance of the pipeline breaking down over time.
I can scrap the idea of using IIS Deployment and just use a VSBuild task that uses arguments /p:DeployOnBuild=true /p:PublishProfile=Staging. This doesn't seem like best practices because it means my release pipeline isn't passing a build package to deploy, it is just creating a new one at each stage.
So is there a better option that would allow me to utilize the package I created with VSBuild and the .pubxml configuration together in a deploy? If that isn't possible then are either of my options the "correct" way to handle my situation or am I just missing another method of deployment I could use?
Thank you for any help or insight you can provide. Please let me know if there is any more information I can give that would be useful.
You can try using publish settings file (*.publishsettings) for your IIS deployment.
A publish settings file (.publishsettings) is different from a publishing profile (.pubxml) created in Visual Studio. A publish settings file is created by IIS or Azure App Service, or it can be created manually, and then it can be imported into Visual Studio.
To view more details, you can see:
Publish an application to IIS by importing publish settings in Visual Studio
Deploy your app to a folder, IIS, Azure, or another destination
So unfortunately there doesn't seem to be a way to achieve everything I wanted here. The publish profiles are required when we build the project, so without changing how we configure those, I need to build the project whenever I want to deploy. Ultimately I went with option #2. I essentially copied most of the build tasks used in the testing pipeline into the release pipeline, with a few modified commands to actually deploy the build once finished. It all seems to work just fine but still doesn't feel like best practice. If I am missing something, please let me know and I will make updates as appropriate.

confusion on Azure DevOps pipelines

I've recently been working on switching from on-premises TFS to Azure DevOps, and on learning more about the different pipelines, and I think I may have made my Build pipeline do too much.
Currently I have my Build pipeline do the following:
Get Source code from Repo
Run database scripts/deploy dacpacs
Copy files over to virtual machines that have web application set up already
Run unit/integration tests
Publish the test results
I repeat these steps, nearly identically, multiple times: once for the develop branch, and once each for the current and previous release branches.
But if I want to take advantage of the Releases and Deployments areas, what would that really get me?
It looks like it would make it easier to say: yes, this code did make it out to this dev/beta environment.
I'm working with ColdFusion code that includes some .NET webservices within the repo, would I have to make an artifact that zips up the repo and then deploys it, or is there a better way to take advantage of the release pipeline?
It's not necessary to make an artifact that zips up the repo and then deploys it. There are several types of tools you might use in your application lifecycle process to produce or store artifacts. For example, you might use version control systems such as Git or TFVC to store your artifacts. You can configure Azure Pipelines to deploy artifacts from multiple sources. Check the following link for more details:
https://learn.microsoft.com/en-us/azure/devops/pipelines/release/artifacts?view=azure-devops#sources

Why does GitHub check not reflect Azure Pipelines build status?

I am trying to add Azure Pipelines configuration to an existing project, bundler/bundler. Here is the PR that adds the configuration:
https://github.com/bundler/bundler/pull/6899
As one of the maintainers set up the bundler/bundler project on Azure Pipelines, this PR already triggers a build:
https://dev.azure.com/bundler/bundler/_build/results?buildId=11
Note that the build has a green checkmark and is marked as finished.
(Also note that there are loads of failing tests in the build, as this wasn't tested on Windows before. To make the build succeed anyway, so that not all PRs and commits get the red "x" on GitHub while I am working on fixing the tests, I added || exit 0 at the end of the test command, which works fine on Azure Pipelines.)
A feature of Azure Pipelines' GitHub integration is that the build results are shown in Github via a feature called "Check":
https://github.com/bundler/bundler/pull/6899/checks
(A shorter version of that is also included at the end of the PR page: https://github.com/bundler/bundler/pull/6899#partial-pull-merging)
Unfortunately, this check doesn't reflect the build status on Azure Pipelines and is still shown as "in progress".
Any idea why the GitHub check doesn't reflect the build status on Azure Pipelines?
What confuses me further is that the integration with Azure Pipelines actually worked just fine (the check correctly reflects the build status) in the pull request that was automatically created by Azure Pipelines when the bundler/bundler project was set up: https://github.com/bundler/bundler/pull/6955
But it also can't really be the Azure Pipelines configuration I created in my PR, because the same configuration works just fine in my fork: https://github.com/janpio/bundler/pull/6#partial-timeline (see the green checkmark for the bundler task). (On the other hand: here Azure Pipelines doesn't use the "Check" feature of GitHub at all.)
Great question. The most likely reason is that there was some glitch in the communication between Azure Pipelines and GitHub. It's very rare, but sometimes a webhook between GitHub and Azure Pipelines doesn't fire. There's no way to tell why it happened; it could have been a fault on either side.
Unfortunately, there's no way to re-send a webhook that didn't get delivered. Your only recourse is to rebuild that pull request. If you select the "Rebuild" option (in the "..." menu), a new build will be queued and, when it finishes, the status update will be sent back to GitHub. The check in the pull request will then be updated.
A less likely (but definitely possible) reason is that there's a bug in either Azure Pipelines or GitHub. And in this particular case, there was a bug with the code that uploads test results from Azure Pipelines to the test case manager API.
(Thanks for reporting the issue, we're sorry that we had a bit of a problem here, but we're glad that we were able to resolve this.)

What are the Team City best practices for multistage deployment?

We have 3 environments:
Development: TeamCity deploys here on Subversion commits to trunk.
Staging: User acceptance is done here, on builds that are release candidates.
Production: When UAT passed, the passing code set is deployed here.
We're using TeamCity and only have continuous integration set up with our development environment. I don't want to save artifacts for every development deployment that TeamCity does. I want an assigned person to be able to fire a build configuration that will deploy a certain successful development build to our staging server.
Then, I want each staging deployment to save artifacts. When a staging deployment passes UAT, I want to deploy that package to Production.
I'm not sure how to set this up in TeamCity. I'm using version 6.5.4, and I'm aware there's a "Promote..." action/trigger, but I think it depends on saved artifacts. I don't want to save every development deployment as an artifact, but I do want the person running the staging deployment to be able to specify which successful development build to deploy to staging.
I'm aware there may be multiple ways to do this, is there a best practice? What is your setup and why do you recommend it?
Update:
I have one answer so far, and it's an idea we had considered internally. I'd really like to know if anyone has a somewhat automated way of deploying to a staging/production environment via TeamCity itself, where only people with a certain role/permission can run a deploy script to production, rather than having to manually deal with any kind of artifact package. Anyone?
Update 2
I still have 1 day to award bounty, and I thought the answer below didn't answer my question, but after rereading it I see that my question wasn't what I thought it was.
Are there any ways to use Team City for some kind of automated deployment to Staging/Production environments?
I think you're actually asking two different questions here; one is about controlling access rights to TeamCity builds and another is about the logistics of artifact management.
Regarding permissions, I assume what you mean by "only people with certain role/permission can run a deploy script to production" and your response to Julien is that you probably don't want devs deploying directly to production, but you do want them to be able to see other builds in the project. This is possibly also similar to Julien's scenario where IT then takes the process "offline" from TeamCity (either that, or it's just IT doing what IT does and insisting on a separate, entirely inefficient process because "that's just the way we do it" - don't get me started on that!)
The problem is simply that all permissions in TeamCity are applied at the project level and never at the build level, so if you've got one project with all your builds, there's no way to apply permissions granularly to dev versus production builds. I've previously dealt with this in two ways:
Handle it socially. Everyone knows what their responsibilities are, and you don't run what you're not meant to run. If you do, it's audited and traceable back to YOU. This works fine when there's maturity, a clear idea of responsibilities, and no compliance requirement that prohibits it.
Create separate projects. I don't like having to do this, but it does fix the problem. You can still use artifacts from another project, and it means you simply end up with one project containing builds that deploy to environments you're happy for all the devs to access, and another project for the sensitive environments. The downside is that if the production build fails, the very people you probably want support from won't be able to access it!
Regarding artifact management, there's no problem with retaining these in the development build; just define a clean-up policy that only keeps artifacts from the last X builds if you're worried about capacity. A lot of people want certainty that they're deploying the same compiled output to every environment, which means that once you build it, you want to keep it around for later use.
Once you have these artifacts from your dev deployment, you can re-deploy them to your other environments through separate builds. You'll have an issue with config transforms (assuming you're using them), but have a read of this two-part series for some ideas on how to address that (I'm yet to absorb it in detail, but I believe he's on the right track).
Does that answer your question? Is there anything still missing?
We also use TeamCity as our build server, so let me explain our setup.
We have 4 environments
Development used by Dev to verify commits in a server environment
QA for testing purposes
Staging for deployment checks and some UAT
Production
We only use TeamCity to deploy to Development (Nightly builds) and to QA (on-demand).
The Dev build uses the trunk branch, and the QA build uses a different branch for the RC.
Deployment to the Staging and Production are managed by the IT team, and are therefore not automated.
What we do instead is that we use TeamCity to produce artifacts from the QA build. The artifacts are the deployment kits sent for Staging/Production deployments.
That said, I am not sure whether TeamCity would give you complete control over which build can be promoted to which environment. We basically control this on the SVN side with branches, and have different builds for those branches. You should be able to manage it the same way, and therefore ensure exactly what is getting deployed.
I understand that your needs may be slightly different from ours, but I hope this helps you find the best setup.
I think you might want to check out something like Octopus Deploy or BuildMaster. They provide a nice structure for the deployment practices you're trying to automate. Both tools integrate with TeamCity nicely.
Basically, you'd continue to use TeamCity for CI, and you could also continue to deploy to your development environment with TeamCity too, but you'd use one of the deployment tools to promote an (existing) build to staging and production.
Edit 2014-02-05 – Update
The makers of BuildMaster have a new deployment feature – ProGet Deploy – for their NuGet server tool, ProGet. It's very similar to Octopus Deploy, though I haven't played with it myself yet, so Octopus may have a better visualization of which versions have been deployed to which environments; I still use BuildMaster because of that important feature.
Also, I'm currently using TeamCity, BuildMaster, and ProGet together, and I never want to go back to not having automated builds. Currently, all of my apps are built and deployed via BuildMaster. All of my library projects are built in TeamCity and deployed to ProGet. Being able to manage my internal dependencies via the NuGet infrastructure is nice.

How to deploy artifacts of TeamCity to Amazon EC2 Server

We decided to use Amazon AWS cloud services to host our main application and other tools.
Basically, we have an architecture like this:
TESTSERVER: The EC2 instance our main application is deployed to. Testers have access to the application.
SVNSERVER: The EC2 instance hosting our Subversion repository.
CISERVER: The EC2 instance where JetBrains TeamCity is installed and configured.
Right now, I need CISERVER to check out the code from SVNSERVER, build it, run the unit tests if the build succeeds, and, after all the tests pass, deploy the artifacts of the successful build to TESTSERVER.
I have finished configuring CISERVER to pull the code, build, test, and produce artifacts. But I couldn't work out how to deploy the artifacts to TESTSERVER.
Do you have any suggestion or procedure to accomplish this?
Thanks for the help.
P.S.: I have read this question and am not satisfied.
Update: There is a Deployer plugin for TeamCity which allows you to publish artifacts in a number of ways.
Old answer:
Here is a workaround for the issue that TeamCity doesn't have built-in artifacts publishing via FTP:
http://youtrack.jetbrains.net/issue/TW-1558#comment=27-1967
You can
create a configuration which produces build artifacts
create a configuration which publishes artifacts via FTP
set an artifact dependency in TeamCity from configuration 2 to configuration 1
Use either manual or automatic triggering to run configuration 2 with the artifacts produced by configuration 1. This way, your artifacts will be downloaded from build 1 into configuration 2 and published to your FTP host.
Another way is to create an additional build step in TeamCity for configuration 1, which publishes your files via FTP.
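As a rough illustration of either variant, a minimal publishing script (assuming Python is available on the agent, and using hypothetical host, credential, and directory names) could look like this:

```python
# Minimal FTP publish step - e.g. a command-line build step in
# configuration 2, run after the artifact dependency has downloaded
# the files into LOCAL_DIR. Host, credentials, and paths are
# placeholders; adjust them to your environment.
import ftplib
import os

FTP_HOST = "ftp.example.com"        # hypothetical FTP host
FTP_USER = os.environ["FTP_USER"]   # credentials injected via build parameters
FTP_PASS = os.environ["FTP_PASS"]
LOCAL_DIR = "artifacts"             # where the artifact dependency unpacked

with ftplib.FTP(FTP_HOST, FTP_USER, FTP_PASS) as ftp:
    for name in os.listdir(LOCAL_DIR):
        path = os.path.join(LOCAL_DIR, name)
        if os.path.isfile(path):
            with open(path, "rb") as f:
                ftp.storbinary(f"STOR {name}", f)  # upload one file
```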
Hope this helps,
KIR
What we do for deployment is that the QA people log on to the system and run a script that deploys by pulling from the TeamCity server whenever they want. They can see in TeamCity (and get an e-mail) when a new build has happened, but regardless they just deploy when they want. In terms of how to construct such a script, the TeamCity part involves retrieving the artifact. That is why my answer references getting the artifacts by URL - that is something any reasonable script can do using wget (which has a Windows port as well) or similar tools, as sketched below.
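For illustration, a minimal sketch of that retrieval step in Python (assumptions: TeamCity's repository/download URL pattern, a hypothetical server address and build configuration ID, and guest authentication being enabled - otherwise switch to httpAuth with credentials):

```python
# Sketch of pulling a build artifact from TeamCity by URL. The
# repository/download URL pattern is standard TeamCity; the server,
# build configuration ID, and artifact name below are placeholders.
import shutil
import urllib.request

SERVER = "http://teamcity.example.com"  # hypothetical TeamCity server
BUILD_TYPE = "MyProject_Build"          # hypothetical build configuration ID
ARTIFACT = "app.zip"                    # artifact path within the build

# .lastSuccessful pins the download to the most recent successful build;
# guestAuth works only if the guest account is enabled, otherwise use
# httpAuth and supply credentials.
url = f"{SERVER}/guestAuth/repository/download/{BUILD_TYPE}/.lastSuccessful/{ARTIFACT}"

with urllib.request.urlopen(url) as resp, open(ARTIFACT, "wb") as out:
    shutil.copyfileobj(resp, out)  # stream the artifact to disk
```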
If you want an automated deployment, you can schedule a cron job (or Windows scheduler) to run the script at regular intervals. If nothing changed, it doesn't matter much. I question the wisdom of this given that it may mess up someone testing by restarting the system involved.
The solution of having TeamCity push the changes as they happen is not something TeamCity does out of the box (as far as I know), but you could roll your own, for example by having something triggered via one of TeamCity's notification methods, such as e-mail. I just question the utility of that. Do you want your system changing at random intervals just because someone happened to check something in? I would think it preferable to actually request the new version.