It makes sense to have a pre-deployment approval for an environment, but what is a post-deployment approval and why might I use it? There is no definition in the Team Services docs here.
Validating testing is the main scenario I can think of. Imagine a deployment chain that goes
Dev -> Test -> UAT -> Prod
So I set up a trigger to deploy to dev on every check-in / commit and run some basic smoke tests.
Then I set up a scheduled deployment to Test at 3AM and run a more comprehensive set of automated tests; however, certain bits of the application still rely on manual testing. The testers can approve or reject the release post-deployment based on whether they find any bugs. If the testers (or test lead) don't approve the post-deployment step, the release cannot proceed to UAT.
You then might have Business Users who approve deployment to UAT and, once testing is complete, validate that the release can go live. (Another post-deployment check.)
Finally you might have a check that approves deployment to production.
If you have 100% automated testing in all environments then you don't need this kind of manual intervention; however, if you still need manual testing, this can prove a fairly lightweight way of ensuring a sensible approvals process is in place.
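If you also want to see those approvals outside the web UI, the Release Management REST API exposes them. Here is a minimal PowerShell sketch, assuming placeholder organisation, project and personal access token values; the exact host and api-version depend on your TFS / Azure DevOps version, so treat it as an illustration rather than copy-paste.

```powershell
# Minimal sketch: list pending release approvals via the Release Management REST API.
# "myorg", "MyProject" and the PAT are placeholders; the host and api-version depend
# on your TFS / Azure DevOps version.
$pat     = "<personal-access-token>"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

$url = "https://vsrm.dev.azure.com/myorg/MyProject/_apis/release/approvals?statuses=pending&api-version=7.0"
$approvals = Invoke-RestMethod -Uri $url -Headers $headers

# Show who is being asked to approve what, and whether it is a pre- or post-deployment approval.
$approvals.value |
    Select-Object @{n='Release';e={$_.release.name}},
                  @{n='Environment';e={$_.releaseEnvironment.name}},
                  approvalType, status,
                  @{n='Approver';e={$_.approver.displayName}}
```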
Different groups sometimes "own" different environments, so they control when things deploy to their environments and when things are ready to leave their environments. The scenario I commonly use to explain this is as follows:
You have a dev server that's owned by developers. Changes get pushed there and signed off on by a dev team lead. The dev team lead does a post-deployment approval for the "Dev" environment.
Then it goes off to the QA environment. The QA team might be in the middle of some manual testing of the previous build. They want to conclude that before the next build comes their way, so the QA team lead gets a pre-deployment approval. The devs might sign off on 10 releases in the interim, but the QA team lead can choose just to take the most recent one, since testing older builds isn't doing anyone any good.
Then, the QA team signs off and it goes to a UAT environment. UAT is owned by the product owner, who frequently demos the upcoming changes to upper management and/or end-users. The product owner will want to control when a UAT deployment happens, because they don't want their demo to get ruined by an inopportune deployment. Without a post-deployment approval owned by the QA team, the product owner might take an untested, unvalidated build and deploy it and end up demoing something horribly broken, making everyone look bad.
And so on.
Related
Quick question.
Is there a way to constrain/restrict what order users can deploy builds to environments?
For example, if I have these four environments configured with manual push-button deploy (not automated), I can start all four together if I want. I don't have to wait for one to finish before kicking off the next:
DEV
TEST
STAGE
PROD
Microsoft seems to be missing this feature in TFS 2015. It would make sense to offer a deployment condition that states that previous environments must have successful deployments before you can run push-button deploy for the next.
Yes, I know, you are going to say "but you can automate that so the deploys run in the order you want." Management here does NOT want that. They want push button deployment for each environment WITH a constraint that previous environments must be completed first.
This means a manual start for each environment.
Other than having the release manager "eyeball" the situation before pushing the button for the next environment I can't see a way to configure this rule.
Any ideas?
There is currently no restriction on manual deployments. This is by design, to give you the ability to override the release process.
Note that you can always deploy a release directly to any of the environments in your release definition by selecting the Deploy action when you create a new release.
In this case, the environment triggers you configure, such as a trigger on successful deployment to another environment, do not apply. The deployment occurs irrespective of these settings. This gives you the ability to override the release process. Performing such direct deployments requires the Manage deployments permission, which should only be given to selected and approved users.
Source Link: Environment triggers
I suggest you use automated triggers. You can use parallel forked and joined deployments in combination with the ability to define pre- and post-deployment approvals; this enables the configuration of complex and fully managed deployment pipelines to suit almost any release scenario.
If you insist on manual push-button deploys, you may have to ask the release manager to "eyeball" the situation to restrict the environment deployment order, as you mentioned.
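If the release manager wants a script to back up that "eyeball" check, one option is to query the release through the Release Management REST API and verify the previous environment's status before starting the next deployment. Below is a minimal sketch, assuming placeholder organisation, project and release id, and using the environment names from the question; field names come from the Releases - Get Release response, so adjust for your server version.

```powershell
# Minimal sketch: refuse to deploy to STAGE until TEST has succeeded for release 123.
# Organisation, project, release id and environment names are placeholders.
$pat     = "<personal-access-token>"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

$release = Invoke-RestMethod -Headers $headers `
    -Uri "https://vsrm.dev.azure.com/myorg/MyProject/_apis/release/releases/123?api-version=7.0"

$test = $release.environments | Where-Object { $_.name -eq 'TEST' }
if ($test.status -ne 'succeeded') {
    throw "TEST has not completed successfully yet - do not deploy to STAGE."
}
Write-Output "TEST succeeded; the STAGE deployment can be started."
```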
How would one go about creating a secure means of deploying a package by way of Octopus Deploy?
The current idea is to implement duplicate teams: one for developers, who deploy to the development environment, and one for team leads, who are the only users able to deploy to staging/production, with otherwise identical roles.
The point is to prevent developers from being able to deploy or promote to staging/production, as a security measure.
Having a duplicate team seems rather clunky, though, and would cause confusion whenever new Octopus projects are created, in terms of keeping the duplicate teams in sync.
What would you advise/recommend in this approach?
Ninja Edit: I have included the teamcity and powershell tags because that is the idea - when a build is kicked off, TeamCity will eventually hand over to Octopus Deploy, which will carry out the deployment process to that environment.
We're in a similar situation where developers are responsible for the DEVELOPMENT environment, testers for TEST and the operations team for PREPROD and PROD.
This is enforced by providing all users with access to Octopus Deploy, creating environment-specific teams with roles scoped to particular environments, and assigning users to teams.
http://docs.octopusdeploy.com/display/OD/Managing+users+and+teams
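If you prefer to script that setup rather than click through the UI, the teams can also be created through the Octopus REST API. Below is a minimal sketch: the server URL, API key and the user/role/environment ids are placeholders, and the team resource shape shown matches the older (3.x-era) servers that documentation describes - newer Octopus versions move role scoping into separate scoped user roles, so check your server's API documentation first.

```powershell
# Minimal sketch: create an environment-scoped team via the Octopus REST API.
# Server URL, API key and all ids are placeholders; the resource shape is for
# older (3.x-era) Octopus servers.
$octopus = "https://octopus.example.com"
$headers = @{ "X-Octopus-ApiKey" = "API-XXXXXXXXXXXXXXXX" }

$team = @{
    Name           = "Production Deployers"
    MemberUserIds  = @("Users-42")                           # the team leads
    UserRoleIds    = @("userroles-projectdeployer")          # role granting deploy rights
    EnvironmentIds = @("Environments-3", "Environments-4")   # Staging, Production
}

Invoke-RestMethod -Method Post -Uri "$octopus/api/teams" -Headers $headers `
    -Body ($team | ConvertTo-Json) -ContentType "application/json"
```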
We're putting together our CI pipelines, and a need we'll quickly have is the ability to visualize all pipelines - at the very least, to determine which ones are waiting for input.
Our general flow is very roughly:
Deploy to dev
Prompt QA team for QA deployment approval
Deploy to QA
Prompt QA team for Staging environment deployment approval
Deploy to Staging
QA Signoff
Business Signoff
Deploy to prod
With some smoke test steps along the way. Would love for the QA team to have a dashboard of which pipelines are awaiting their approval.
Even a view that shows the last invocation of every pipeline would be sufficient (you could at least quickly see which stage certain jobs are paused at - we will use the same pipeline design for most of our microservices)
Anything out there that would be useful for us?
This is definitely for you. I have been using this for around 6 months now.
Hygieia Dashboard
You can use AnyStatus which is a desktop application for Windows that brings together metrics and events from various sources such as Jenkins, Azure DevOps, TeamCity and more. AnyStatus supports Jenkins Jobs and Views. It can also monitor other resources such as web servers, databases, operating system metrics and others.
Disclaimer: I am the author of AnyStatus.
We have 3 environments:
Development: Team City deploys here for Subversion commits on trunk.
Staging: User acceptance is done here, on builds that are release candidates.
Production: When UAT passed, the passing code set is deployed here.
We're using Team City and only have Continuous Integration setup with our development environment. I don't want to save artifacts for every development deployment that Team City does. I want an assigned person to be able to fire a build configuration that will deploy a certain successful development deployment to our staging server.
Then, I want each staging deployment to save artifacts. When a staging deployment passes UAT, I want to deploy that package to Production.
I'm not sure how to set this up in Team City. I'm using version 6.5.4, and I'm aware there's a "Promote..." action/trigger, but I think it depends on saved artifacts. I don't want to save development deployments each time as artifacts, but I do want the person running the staging deployment to be able to specify which successful development deployment to deploy to staging.
I'm aware there may be multiple ways to do this, is there a best practice? What is your setup and why do you recommend it?
Update:
I have one answer so far, and it's an idea we had considered internally. I'd really like to know if anyone has a somewhat automated way for deploying to a staging/production environment via Team City itself, where only people with a certain role/permission can run a deploy script to production rather than having to manually deal with any kind of artifact package. Anyone?
Update 2
I still have 1 day to award bounty, and I thought the answer below didn't answer my question, but after rereading it I see that my question wasn't what I thought it was.
Are there any ways to use Team City for some kind of automated deployment to Staging/Production environments?
I think you're actually asking two different questions here; one is about controlling access rights to TeamCity builds and another is about the logistics of artifact management.
Regarding permissions, I assume what you mean by "only people with certain role/permission can run a deploy script to production" and your response to Julien is that you probably don't want devs deploying direct to production but you do want them to be able to see other builds in the project. This is possibly also similar to Julien's scenario when IT then take the process "offline" from TeamCity (either that or it's just IT doing what IT do and insisting they must use a separate, entirely inefficient process because "that's just the way we do it" - don't get me started on that!)
The problem is simply that all permissions in TeamCity are applied against the project and never the build, so if you've got one project with all your builds, there's no ability to apply permissions granularly to dev versus production builds. I've previously dealt with this in two ways:
Handle it socially. Everyone knows what their responsibilities are and you don't run what you're not meant to run. If you do, it's audited and traceable back to YOU. Works fine when there's maturity, a clear idea of responsibilities and no compliance requirement that prohibits it.
Create separate projects. I don't like having to do this but it does fix the problem. You can still use artifacts from another project, and it means you simply end up with one project containing builds that deploy to environments you're happy for all the devs to access, and another project for the sensitive environments. The downside is that if the production build fails, the very people you probably want support from won't be able to access it!
Regarding artifact management, there's no problem with retaining these in the development build; just define a clean-up policy that only keeps artifacts from the last X builds if you're worried about capacity. A lot of people want certainty that they're deploying the same compiled output to every environment, which means once you build it, you want to keep it around for later use.
Once you have these artefacts from your dev deployment, you can re-deploy them to your other environments through separate builds. You'll have an issue with config transforms (assuming you're using them), but have a read of this 2 part series for some ideas on how to address that (I'm yet to absorb it in detail but I believe he's on the right track).
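As an illustration of the "separate builds" idea, the deployment build can be queued through TeamCity's REST API with a parameter saying which dev build's output to redeploy. This is only a sketch: the build configuration id, parameter name and credentials are placeholders, and the buildQueue endpoint shown belongs to newer TeamCity versions than the 6.5.4 mentioned in the question.

```powershell
# Minimal sketch: queue a "deploy to staging" build and tell it which dev build to redeploy.
# Build configuration id, parameter name and credentials are placeholders; the REST
# buildQueue endpoint exists in newer TeamCity versions than 6.5.4.
$teamcity = "https://teamcity.example.com"
$cred     = Get-Credential   # a user permitted to run the staging deploy

$body = @"
<build>
  <buildType id="MyApp_DeployStaging"/>
  <properties>
    <property name="deploy.source.build.number" value="1.0.123"/>
  </properties>
</build>
"@

Invoke-RestMethod -Method Post -Uri "$teamcity/httpAuth/app/rest/buildQueue" `
    -Credential $cred -Body $body -ContentType "application/xml"
```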
Does that answer your question? Is there anything still missing?
We also used TeamCity as our build server so let me explain our setup.
We have 4 environments
Development used by Dev to verify commits in a server environment
QA for testing purposes
Staging for deployment checks and some UAT
Production
We only use TeamCity to deploy to Development (Nightly builds) and to QA (on-demand).
The Dev build uses the trunk branch and QA build uses a different branch used for the RC.
Deployment to the Staging and Production are managed by the IT team, and are therefore not automated.
What we do instead is that we use TeamCity to produce artifacts from the QA build. The artifacts are the deployment kits sent for Staging/Production deployments.
That said, I am not sure if TeamCity would provide you complete control over which build can be promoted to which environment. We basically control this on the SVN side with branches, and have different builds for those branches. You should be able to manage it the same way. You can therefore ensure what is getting deployed.
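For example, cutting the RC branch that the QA build watches is a one-line server-side copy; the repository URL and branch name below are placeholders for illustration.

```powershell
# Minimal sketch: cut a release-candidate branch from trunk for the QA build to use.
# Repository URL and branch name are placeholders.
svn copy "https://svn.example.com/myapp/trunk" `
         "https://svn.example.com/myapp/branches/rc-1.4" `
         -m "Create RC 1.4 branch for the QA build"
```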
I understand that your needs may be slightly different than ours, but I hope this helps you find the best setup.
I think you might want to check out something like Octopus Deploy or BuildMaster. They provide a nice structure for the deployment practices you're trying to automate. Both tools integrate with TeamCity nicely.
Basically, you'd continue to use TeamCity for CI, and you could also continue to deploy to your development environment with TeamCity too, but you'd use one of the deployment tools to promote an (existing) build to staging and production.
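For instance, with Octopus Deploy the promotion of an already-built release is a single CLI call. A minimal sketch, with placeholder server, API key, project and environment names - check "octo.exe help promote-release" for the exact options your CLI version supports:

```powershell
# Minimal sketch: promote an existing Octopus release instead of rebuilding it.
# Server URL, API key, project and environment names are placeholders.
octo.exe promote-release `
    --server "https://octopus.example.com" `
    --apiKey "API-XXXXXXXXXXXXXXXX" `
    --project "MyApp" `
    --from "Staging" `
    --to "Production"
```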
Edit 2014-02-05 – Update
The makers of BuildMaster have a new deployment feature – ProGet Deploy – for their NuGet server tool, ProGet. It's very similar to Octopus Deploy, though I haven't played with it yet myself, so Octopus may have a better visualization of what versions have been deployed to which environments; I still use BuildMaster because of that important feature.
Also, I'm currently using TeamCity, BuildMaster, and ProGet together, and I never want to go back to not having automated builds. Currently, all of my apps are built and deployed via BuildMaster. All of my library projects are built in TeamCity and deployed to ProGet. Being able to manage my internal dependencies via the NuGet infrastructure is nice.
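As an example of that last point, pushing a library package to an internal ProGet feed from a build step is just a standard NuGet push; the feed URL, API key and package name here are placeholders.

```powershell
# Minimal sketch: publish an internal library package to a private ProGet NuGet feed.
# Feed URL, API key and package name are placeholders.
nuget push "MyCompany.Utilities.1.2.0.nupkg" `
    -Source "https://proget.example.com/nuget/internal-libs/" `
    -ApiKey "<proget-api-key>"
```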
Describe the process you use to develop web applications at a not-so-high level, focusing on VC, bug tracking, QA, unit testing, deployment and anything else similar (minus the planning/client communication side of things).
I'm new in this area, so my rough example (read: I haven't used this process) is no doubt a bit off, so to speak - point out its flaws so I can learn.
Eg.
Create project repository on local SVN server.
Create batch/shell scripts for DNS mappings.
Check out project, begin work on local working copy.
Develop features as branches.
Track bugs with Mantis (link commits to bugs through its SVN integration (no idea if that exists)).
Document as you go.
Do QA on branch.
Merge to trunk when stable.
Unit Testing?
Commit to repository when feature is implemented and stable.
Copy releases to tags in repository. Eg. /project/tags/rel-123/
Use Phing to upload to staging server. (Could someone please clarify exactly what a staging server is used for beyond 'testing'?)
Use Phing to prep live site for update, set up DB/deploy, etc.
Create/checkout HEAD version ("main branch")
Develop code and sync with the main branch -at least- daily
After development is done, write and run unit tests
Go through code review and submit code/changes to the main branch
Let continuous builder run all unit tests and system/integration tests on main branch
When ready, cherry pick revisions and integrate them to the QA branch
Run system and integration tests, fix reported bugs, or rollback as necessary; this repeats steps 4-7
After QA signoff, integrate QA change to release branch
Run unit tests, system/integration tests on release branch
Deploy to production and run sanity tests.
A staging server is a copy of your production environment that is as up-to-date as possible. On my current project, we're able to keep each release independent from each other, so our "staging server" is our production server, just accessed from a different url.
Notes and discrepancies:
All of the steps have some variation depending on the size of your project. The larger your project, the better the benefit from cherry picking and environment separation. In smaller projects, these can just be time sinks and are often ignored or bypassed.
To clarify, there is a Development stack, QA stack, and Staging stack. Depending on your project size, QA might be staging, Production might be staging, or some combination thereof. The separation of the Dev and QA stacks is the most important one.
In the steps above, I'm assuming both code and relevant data is versioned or tracked. Having a release and build process that takes control data into account makes a release very, very easy.
In a small-medium sized project, there may or may not be a release branch; it depends on the frequency of code change. Also, depending on the frequency of code change and size of your project, you may integrate the full QA branch to the Release branch, or cherry pick specific revisions to integrate to the release branch.
FWIW, I've found "migration scripts" to be of little value. They're always a one-off script with little reuse and make rollbacks a pain in the ass. It's much easier, and I would argue better, to have the application backwards-compatible. After a few releases (when a rollback would be laughable), data cleanup should be done, if necessary.
Very roughly:
Create repository in SVN
Check out a local working copy to the developer environment
Update/commit changes frequently
Deploy to stage from SVN trunk using a custom deploy script (a sketch of such a script appears below)
QA tests on stage, reports bugs in Mantis
Developers fix bugs, mark as resolved
Re-deploy to stage
QA tests bugs, closes if fixed
QA is finished, do regression testing
Deploy to production using custom deploy script
Do a little dance
We also create branches for future versions or features. These eventually get merged into the trunk.
We keep our db structures synchronized with a custom db comparison tool that is executed during the deploys.
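For what it's worth, the custom deploy script mentioned in the list above doesn't need to be anything fancy. A minimal sketch of the idea - the repository URL, revision and server paths are all placeholders:

```powershell
# Minimal sketch of a custom deploy script: export a clean copy from SVN trunk and
# mirror it to the target web root. URL, revision and paths are placeholders.
param(
    [string]$Revision = "HEAD",
    [string]$WebRoot  = "\\stage-web01\wwwroot\myapp"
)

$workDir = Join-Path $env:TEMP "deploy-r$Revision"

# Export (no .svn folders) the requested revision of trunk
svn export -r $Revision "https://svn.example.com/myapp/trunk" $workDir --force

# Mirror the exported files onto the target web root
robocopy $workDir $WebRoot /MIR /NFL /NDL

# A db comparison/sync step, like the one described above, would run here
```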
Old post, but interesting question!
At my company now:
Create a new Github repo
Configure Jenkins
Clone locally
Start a branch
Develop and add tests (server, client and e2e)
Commit for every step, and fetch + rebase to keep the branch in sync
When ready, push the branch to the server: a pre-commit check runs lint and tests and blocks the push if they fail (see the sketch at the end of this list)
Create a pull request for the branch
Here, Jenkins automatically runs tests on the branch and flags it as "green" or "broken tests" directly in the pull request
Have at least 2 colleagues review the pull request and fix their findings (back to step 5)
When everything is green and 2 colleagues have agreed, the last one merges the pull request
Delete the branch on server
When ready, push a new version
Latest version gets immediately deployed to a testing platform
QA validates the corrections and features introduced (back to step 5 if there's a problem)
(TODO) Deploy to a pre-prod with parameters identical to production
Deploy to production
Go apologize to users for the bugs introduced ;) and report them in the issue manager
Get feature requests and report them in the issue manager
Restart the cycle at step 2
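Here is what the lint-and-test gate from the list above (the check that runs when the branch is pushed) might look like as a script, e.g. called from a git hook or by the server-side check before the push is accepted. The npm script names are placeholders for whatever your project actually defines.

```powershell
# Minimal sketch of the pre-push lint/test gate. The npm script names are placeholders.
npm run lint
if ($LASTEXITCODE -ne 0) { Write-Error "Lint failed - push blocked."; exit 1 }

npm test
if ($LASTEXITCODE -ne 0) { Write-Error "Tests failed - push blocked."; exit 1 }

Write-Output "Lint and tests passed - push allowed."
exit 0
```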