Keep admin and frontend deployments in sync using Azure Pipelines

We have been tripped up twice recently as our development output has increased.
We have backend services, an Admin SPA site, and a number of frontend applications, including native apps, all in different repos.
We also have fully automated CI/CD pipelines for everything, and they work fantastically.
What has happened recently is that the public applications have gotten ahead of the Admin SPA, which is making the team look bad.
Has anyone seen a solution that requires minimal input from developers? The more I can rely on automation, the better.
The goal is to keep feature deployments in concert.
Thanks

So the plan is to go with semantic versioning, plus a route on admin that returns a JSON response with the version number.
The build and deploy for admin takes in the version and exposes it.
The deploy for dependent apps has a script that queries admin before starting.
There is still a bit of manual work for the developers but it is manageable.
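For a concrete picture, here is a rough sketch of what that pre-deploy check could look like. The URL, route, and JSON shape below are placeholders (assuming a hypothetical /version route that returns something like {"version": "1.4.2"}), not our exact setup:

    # Sketch of the pre-deploy gate the dependent apps run (placeholder URL/shape).
    import json
    import sys
    import urllib.request

    ADMIN_VERSION_URL = "https://admin.example.com/version"  # placeholder route
    REQUIRED_ADMIN_VERSION = (1, 4, 0)  # minimum admin version this release needs

    def parse_semver(version: str) -> tuple:
        """Turn '1.4.2' into (1, 4, 2) so versions compare naturally as tuples."""
        return tuple(int(part) for part in version.split("."))

    with urllib.request.urlopen(ADMIN_VERSION_URL, timeout=10) as resp:
        deployed = parse_semver(json.load(resp)["version"])

    if deployed < REQUIRED_ADMIN_VERSION:
        print(f"Admin is at {deployed}; need {REQUIRED_ADMIN_VERSION}. Aborting.")
        sys.exit(1)  # a non-zero exit fails the pipeline step
    print("Admin version OK; continuing deploy.")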
Thanks #Bruno

Kubernetes Preview environments

I would like to ask what people use to provision an ephemeral preview environment in AWS EKS for a service under test. I am also curious how you provision any dependent services (such as a database).
E.g., I am working on a back-end service and would like to deploy an isolated, ephemeral version of this service packaged from my feature branch, including the database. Furthermore, I would also like a copy of a front-end service in my isolated environment to test my back-end.
Any thoughts would be appreciated
Thanks
Sachin
You can roll your own solution by wiring your CI/CD system (Jenkins, CircleCI, Buildkite, GitHub Actions, etc.) into webhooks on your source repository to trigger the build and deploy of a preview environment. This would have to include building the modified code, then deploying that code to some staging environment, then of course seeding that environment with some type of data.
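As a rough illustration of that wiring (the Flask app, event handling, and deploy script below are hypothetical stand-ins, not a specific product's API):

    # Minimal webhook receiver: a pull-request event triggers a preview deploy.
    # deploy-preview.sh is a placeholder for whatever provisions your environment.
    import subprocess
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/webhook", methods=["POST"])
    def on_webhook():
        event = request.headers.get("X-GitHub-Event", "")
        payload = request.get_json(silent=True) or {}
        if event == "pull_request" and payload.get("action") in ("opened", "synchronize"):
            branch = payload["pull_request"]["head"]["ref"]
            # Kick off whatever provisions the env (Helm chart, Terraform, a CI job...)
            subprocess.Popen(["./deploy-preview.sh", branch])
            return "preview deploy triggered", 202
        return "ignored", 200

    if __name__ == "__main__":
        app.run(port=8080)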
There is a bit of nuance to getting this right. You should check out https://ephemeralenvironments.io/, which is a good template of what needs to go into these environments.
A lot of other folks use services that provide this as a SaaS platform; Shipyard.build, Release, and Velocity.tech are a few of your options.
Disclaimer: I'm on the Operations team at Shipyard
Hope this helps!

Where do GitHub Apps run, and what are the resource limits?

I am interested in building a GitHub App. Reading through GitHub's "Setting up your development environment to create a GitHub App" documentation, it explains that a GitHub App is based on an HTTP server which will handle webhooks.
Yet for every GitHub App I installed, the installation did not require anything involving hosting and/or the creation of an HTTP server in order to deploy the app to my GitHub account.
For such GitHub Apps, which are installed directly through the GitHub Marketplace (you can take Probot's stale app and Renovate as examples of such apps):
Where do these applications run? (E.g., does GitHub deploy the app on a dedicated (virtual) server?)
What are the resource limitations for such apps (amount of memory, CPU, etc.)?
How can the GitHub App's logs be accessed by the GitHub account owner who installed the app?
Links for reference along with an answer would be great.
A GitHub App is just another app that you create. GitHub Apps are treated as first-class citizens when it comes to integrating with GitHub. One can use Node.js, Ruby, etc. to build the app. Once the app is ready, it can be hosted on a server just like any other server-hosted app. You register your app on GitHub by providing the relevant details.
So, coming to your questions.
The apps can run on any hosting service of your choice. It can be a Windows server, Heroku, etc.
I believe it is only limited by the resources of your server or the hosting service provider that you chose. However, you might be interested in reading more about the rate limits. More on rate limits here.
GitHub App logs are something that only the developer will be able to see. To the end user, i.e. the repo owner who installed the GitHub App on their repos, all that will be available are the checks, statuses, and any other details that the developer of the app decided to display.
A very handy guide on deployment and other details: the Probot documentation. This documentation is great if you are planning to use the Probot framework for developing your GitHub Apps, but most of the instructions still hold true if you decide to pick a different tech stack.
The most important thing to realise about a (so-called?) GitHub App is that the App itself does not run anywhere - or at least that is what I would argue. Basically, GitHub Apps are two linked mechanisms, both a bit of infrastructure. The first of these mechanisms is access control, essentially replacing the use of user PATs - you can give relatively fine-grained access to the repos the App is installed in, rather than just giving access to all repos the user can access. The second mechanism is that of webhooks - generating events as requested.
What GitHub Apps do not directly provide is the bit in between - handling the webhooks and generating API calls using the App for access. Basically, you are on your own and need to do it yourself. The plus, as #asif-kamran-malick mentioned, is that you have the freedom to implement it how you see fit.
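To make "do it yourself" concrete, here is a small sketch of the first thing any self-hosted handler should do: verify GitHub's webhook signature before trusting the payload (stdlib only; the surrounding HTTP framework is up to you):

    # Verify GitHub's X-Hub-Signature-256 header before acting on a webhook.
    import hashlib
    import hmac
    import os

    WEBHOOK_SECRET = os.environ["GITHUB_WEBHOOK_SECRET"]  # secret set when registering the App

    def is_valid_signature(body: bytes, signature_header: str) -> bool:
        """GitHub sends 'sha256=<hex digest>' computed over the raw request body."""
        expected = "sha256=" + hmac.new(
            WEBHOOK_SECRET.encode(), body, hashlib.sha256
        ).hexdigest()
        return hmac.compare_digest(expected, signature_header or "")

    # In your HTTP handler (framework of your choice):
    #   if not is_valid_signature(raw_body, headers.get("X-Hub-Signature-256")):
    #       reject with 401
    #   otherwise, dispatch on the X-GitHub-Event header and call the GitHub API
    #   using an installation token minted from the App's private key.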
One alternative possibility is that the App itself, rather than being set up to handle ongoing webhooks, runs on installation and looks to add Actions to the repo. I have never done it, but some of the GitHub examples seem to work this way. Of course, Actions run within GitHub environments and are potentially subject to resource limits. Apart from this, though, Actions are a completely separate "beast" and should not be confused with Apps.

VSTS + Octopus Deploy? Why do I see a lot of CI/CD setups with both?

I'm a developer who's transitioning into DevOps. By observation, I've noticed that a lot of dev shops have started using Octopus Deploy along with Azure DevOps Services (AzDo, formerly VSTS), or they are starting new projects to set up DevOps CI/CD pipelines AND they spec to use both tools.
I've been through some quick training for both tools and though they aren't perfectly the same, AzDo seems to offer all of the same features as Octopus Deploy.
So, my question is: if a company is already using AzDo for much of their version control, or anything CI/CD pipeline-related, why would you use Octopus? What benefit does Octopus add to AzDo for your builds and deploys?
Note, I am very, very new to DevOps. I'm just asking because at the "10,000-foot view" there doesn't seem to be any reason for Octopus if you're already using AzDo. I mention Octopus Deploy by name because I see it come up frequently. However, I assume there could be other tools that serve the same purpose of automated building and deploying that might also integrate with AzDo. However, AzDo offers build and deploy built into one. Why split out the work?
Let me preface why I like to both build and deploy with VSTS:
Same permissioning end to end
Line of sight across the end-to-end build and deployment
Reasons I favor Octopus Deploy over VSTS Release:
Ability to upload packages/artifacts
External ones that are maybe one-off packages to get deployed for a specific release
Target Definition
When you create targets (the servers you are deploying to), you are able to add a target to one or multiple environments and assign tags/roles to it. What does this mean? More flexible server definition: rather than strictly assigning agents to a pool or servers to a Deployment Group, you can allow a target to span multiple environments (i.e., a testing server that spans your Dev and Test environments and only gets triggered on steps that are defined for that role). I realize you can accomplish similar things in VSTS, but in my opinion it's far more cumbersome.
Variable Definition
Variables can be grouped at a global level and grouped by a specific pipeline/process (that part is similar to VSTS). Variables can also be grouped or scoped by environments or roles (above), so you are able to have different variable values per role per environment; both super granular and flexible. One place this comes in handy is if you have a backend server with a connection string and maybe two content-delivery nodes (role: content delivery) that get slightly different values than the backend server. At the moment, I do not know (other than creating new environments) how one would accomplish the same in VSTS.
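To illustrate the scoping idea in plain terms (a toy model, not how Octopus actually stores variables; names and values are made up):

    # Toy model of scoped variables: the most specific (environment, role)
    # match wins, falling back to an environment-wide value, then a global one.
    VARIABLES = {
        "ConnectionString": {
            ("Prod", "backend"):          "Server=prod-db;...",
            ("Prod", "content-delivery"): "Server=prod-replica;...",
            ("Dev",  None):               "Server=dev-db;...",
            (None,   None):               "Server=localhost;...",  # global default
        }
    }

    def resolve(name, environment, role):
        scopes = VARIABLES[name]
        for key in ((environment, role), (environment, None), (None, None)):
            if key in scopes:
                return scopes[key]
        raise KeyError(f"no value for {name}")

    print(resolve("ConnectionString", "Prod", "content-delivery"))  # prod-replica value
    print(resolve("ConnectionString", "Dev", "backend"))            # falls back to Dev-wide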
Process Definition
All of the above comes together in the process definition of Octopus Deploy. The super flexible and granular variable and target definitions allow you to focus on the actual deployment process rather than getting hung up on the nuances of the UI and its limitations. One example would be a process where step one takes node one out of a load balancer from a central server; step two deploys code to delivery node one; step three puts it back in the LB; step four takes node two out of the LB, again from a central server; step five deploys code to node two; and the last step puts it back into the load balancer. I realize it's a very simple hypothetical, but within Octopus Deploy it's one steady process filtered to execute on specific roles, whereas within VSTS you would have to break that down into different agent phases and probably pipelines.
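In code-as-pseudocode form, that whole process is just one loop over targets (the node names and LB calls below are hypothetical placeholders):

    # Illustrative rolling deploy: drain a node, deploy, re-enable, one at a time.
    NODES = ["delivery-01", "delivery-02"]  # placeholder content-delivery nodes

    def remove_from_lb(node):
        print(f"draining {node} from the load balancer")  # stand-in for your LB API

    def deploy_code(node):
        print(f"deploying release to {node}")             # stand-in for the deploy step

    def add_to_lb(node):
        print(f"re-enabling {node} in the load balancer")

    for node in NODES:
        remove_from_lb(node)
        deploy_code(node)  # if this raises, the rollout halts with the node drained
        add_to_lb(node)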
Above are really the biggest points I see for using Octopus Deploy over VSTS Release. Now, why would someone use VSTS to build and OD to release/deploy? There are a lot of different factors that go into it; some are corporate drivers, like having an enterprise Git client with permissions handled through MSDN. Sometimes it's a project-management driver of having work items tied tightly to commits and builds, but with the added flexibility that OD brings to the table for free/minimal cost.
Hoping this helps shine a little light on why some people are crossing streams and using both VSTS and OD.
A lot of good points have been made already, but it really comes down to what you need. I would venture a lot of us started using Octopus before Release Management was really a thing.
We use VSTS for all our source control and builds and then all our deployments are handled through Octopus.
When we started evaluating tools, VSTS had nothing for deployments. Even now, they are still playing catch up to Octopus in feature set.
If you are doing true multi-tenanted and multi-environment deployments, I don't think VSTS really compares. We are using Octopus with around 30 tenants, some on Azure, some on-premises. We deploy a mix of web and desktop apps. We are even using Octopus to deploy some legacy VB6 and WinForms applications.
Multi-Tenancy (critical for us)
VSTS added Deployment Groups a while ago, which sound pretty similar to Octopus Environments before multi-tenancy was implemented. Before Octopus had true multi-tenancy (it's been around a while now), people would work around it by creating different environments per tenant, like "CustomerA - Dev", "CustomerA - Prod", etc. Now you just have your Dev/Test/Prod environments, and each tenant can have variables scoped to those individual environments.
Support
Documentation is excellent and it's really easy to get up and running.
The few times I've needed to contact someone at Octopus, they've answered very quickly and knowledgeably.
Usability
Having the Octopus dashboard give us an overview of all our projects is amazing. I don't know of any way to do this in VSTS without going into each individual project.
Octopus works great on a mobile device for checking deployment status and even starting new deployments.
Community
Octopus works with their customers to understand what they want and they often release draft RFCs and have several times completely changed course based on customer feedback.
If we know what sort of applications you are deploying, and to what kinds of environments, we would be able to better tailor our responses.
The features you see today in VSTS weren't there a few years ago, so there might be a historical reason.
But I want to state here some non-opinionated reasons that may lead an organization to opt for separate tools instead of one:
Separate responsibility and access levels
Multiple CI tools across dev teams (orgs that also use Jenkins, TeamCity, or others) and a need to standardize and control deployments
An org needs a feature available only in Octopus (maybe Multi-tenancy)
Octopus does a great job of focusing on deployments. Features reach Octopus before VSTS, and support is local and responsive. That, and you never run out of build/release minutes!
Seriously though, I just like to support smaller companies where possible and if all features were equal, I'd still pick them.
The big reason in the past was that TFS on-prem and early VSTS did NOT support non-Microsoft (non-.NET) code very well, if at all. You could utilize the source control and work-item features of TFS and then use Octopus/Jenkins, etc. as the build/release parts to cover code that TFS didn't really know what to do with.
Also, the release pipelines used to be very simplistic and not that useful, whereas the other products were all plugin-based and could do (almost) anything you needed them to. Most of that has changed, so VSTS is much better at working with non-Microsoft code bases than it used to be. Over time, integrations get created inside a company's walls, and undoing those decisions can be more painful than just having "too many" tools. Also, I feel like there are just more people out there familiar with those tools, since they have been mature longer and cover a larger part of the development world than VSTS has in the past.
To fully implement CD you need both. VSTS runs tests and is a build server; OD isn't. VSTS is light on sophisticated application installations. And if you are provisioning environments, IaC-style, you need Terraform in addition. Don't try to shoehorn everything into a single tool; DevOps requires a whole ecosystem. The reasons are not historical.

Deploy Mercurial with SFTP/SSH

Currently I am using Sourcetree to manage my repos. I've read about using different branches for the three stages of website development. I've seen sites like deployhq.com and beanstalkapp.com. I am curious: is there a "free" way to deploy/rollback and publish via SFTP/SSH to my web servers, both staging and live production, for each of my repos? I need a way for junior-level developers to easily be able to do it too; it would be great if there were a review process. If there is no free method, what's the best paid method out there? I also use bitbucket.org to store my repos.
Thanks so much
You could use a lightweight deployment tool such as Kwatee (self-promotion). Kwatee is free and can deploy any files via a web interface or Python scripts (also Ant/Maven if you use Java). All you'd have to do is automate the checkout of your branch and then trigger the deploy command in Kwatee, which will use SSH/SCP or Telnet/FTP behind the scenes to update your deployment (only changes will be deployed).
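If you'd rather script something similar yourself for free, a bare-bones SFTP push using the Python paramiko library could look like this (host, key, and paths are placeholders; assumes a POSIX-style remote):

    # Bare-bones SFTP deploy sketch (pip install paramiko). Placeholder values.
    import os
    import posixpath
    import paramiko

    HOST, USER, KEY = "staging.example.com", "deploy", "~/.ssh/id_rsa"
    LOCAL_ROOT, REMOTE_ROOT = "build", "/var/www/site"

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER, key_filename=os.path.expanduser(KEY))
    sftp = client.open_sftp()

    for dirpath, _dirs, files in os.walk(LOCAL_ROOT):
        rel = os.path.relpath(dirpath, LOCAL_ROOT)
        remote_dir = REMOTE_ROOT if rel == "." else posixpath.join(REMOTE_ROOT, *rel.split(os.sep))
        try:
            sftp.mkdir(remote_dir)  # fine if it already exists
        except IOError:
            pass
        for name in files:
            sftp.put(os.path.join(dirpath, name), posixpath.join(remote_dir, name))

    sftp.close()
    client.close()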

What are the TeamCity best practices for multistage deployment?

We have 3 environments:
Development: TeamCity deploys here for Subversion commits on trunk.
Staging: User acceptance is done here, on builds that are release candidates.
Production: When UAT has passed, the passing code set is deployed here.
We're using TeamCity and only have continuous integration set up with our development environment. I don't want to save artifacts for every development deployment that TeamCity does. I want an assigned person to be able to fire a build configuration that will deploy a given successful development build to our staging server.
Then, I want each staging deployment to save artifacts. When a staging deployment passes UAT, I want to deploy that package to Production.
I'm not sure how to set this up in TeamCity. I'm using version 6.5.4, and I'm aware there's a "Promote..." action/trigger, but I think it depends on saved artifacts. I don't want to save development deployments as artifacts each time, but I do want the person running the staging deployment to be able to specify which successful development build to deploy to staging.
I'm aware there may be multiple ways to do this, is there a best practice? What is your setup and why do you recommend it?
Update:
I have one answer so far, and it's an idea we had considered internally. I'd really like to know if anyone has a somewhat automated way of deploying to a staging/production environment via TeamCity itself, where only people with a certain role/permission can run a deploy script to production, rather than having to manually deal with any kind of artifact package. Anyone?
Update 2
I still have 1 day to award the bounty, and I thought the answer below didn't answer my question, but after rereading it I see that my question wasn't what I thought it was.
Are there any ways to use TeamCity for some kind of automated deployment to staging/production environments?
I think you're actually asking two different questions here; one is about controlling access rights to TeamCity builds and another is about the logistics of artifact management.
Regarding permissions, I assume what you mean by "only people with certain role/permission can run a deploy script to production", and your response to Julien, is that you probably don't want devs deploying directly to production, but you do want them to be able to see other builds in the project. This is possibly also similar to Julien's scenario, where IT then takes the process "offline" from TeamCity (either that, or it's just IT doing what IT does and insisting they must use a separate, entirely inefficient process because "that's just the way we do it" - don't get me started on that!).
The problem is simply that all permissions in TeamCity are applied against the project and never the build, so if you've got one project with all your builds, there's no ability to apply permission granularity to dev versus production builds. I've previously dealt with this in two ways:
Handle it socially. Everyone knows what their responsibilities are, and you don't run what you're not meant to run. If you do, it's audited and traceable back to YOU. This works fine when there's maturity, a clear idea of responsibilities, and no compliance requirement that prohibits it.
Create separate projects. I don't like having to do this, but it does fix the problem. You can still use artifacts from another project, and it means you simply end up with one project containing builds that deploy to environments you're happy for all the devs to access, and another project for sensitive environments. The downside is that if the production build fails, the very people you probably want support from won't be able to access it!
Regarding artifact management, there's no problem with retaining artifacts in the development build; just define a clean-up policy that only keeps artifacts from the last X builds if you're worried about capacity. A lot of people want certainty that they're deploying the same compiled output to every environment, which means once you build it, you want to keep it around for later use.
Once you have these artifacts from your dev deployment, you can redeploy them to your other environments through separate builds. You'll have an issue with config transforms (assuming you're using them), but have a read of this two-part series for some ideas on how to address that (I'm yet to absorb it in detail, but I believe he's on the right track).
Does that answer your question? Is there anything still missing?
We also use TeamCity as our build server, so let me explain our setup.
We have 4 environments:
Development: used by devs to verify commits in a server environment.
QA: for testing purposes.
Staging: for deployment checks and some UAT.
Production
We only use TeamCity to deploy to Development (Nightly builds) and to QA (on-demand).
The Dev build uses the trunk branch, and the QA build uses a separate branch for the RC.
Deployments to Staging and Production are managed by the IT team and are therefore not automated.
What we do instead is that we use TeamCity to produce artifacts from the QA build. The artifacts are the deployment kits sent for Staging/Production deployments.
That said, I am not sure whether TeamCity gives you complete control over which build can be promoted to which environment. We basically control this on the SVN side with branches and have different builds for those branches. You should be able to manage it the same way, and therefore ensure what is getting deployed.
I understand that your needs may be slightly different than ours, but I hope this helps you find the best setup.
I think you might want to check out something like Octopus Deploy or BuildMaster. They provide a nice structure for the deployment practices you're trying to automate. Both tools integrate with TeamCity nicely.
Basically, you'd continue to use TeamCity for CI, and you could also continue to deploy to your development environment with TeamCity too, but you'd use one of the deployment tools to promote an (existing) build to staging and production.
Edit 2014-02-05 – Update
The makers of BuildMaster have a new deployment feature – ProGet Deploy – for their NuGet server tool, ProGet. It's very similar to Octopus Deploy, though I haven't played with it myself yet, so Octopus may have a better visualization of which versions have been deployed to which environments; I still use BuildMaster because of that important feature.
Also, I'm currently using all three of TeamCity, BuildMaster, and ProGet, and I never want to go back to not having automated builds. Currently, all of my apps are built and deployed via BuildMaster. All of my library projects are built in TeamCity and deployed to ProGet. Being able to manage my internal dependencies via the NuGet infrastructure is nice.