Permissions to deploy to multiple environments with Capistrano - deployment

What is the proper way to set up Capistrano to deploy a Rails app to multiple environments with different permissions required for each environment? In other words, imagine a typical scenario where a developer makes changes to code and pushes the changes to a testing environment. After testing, a release manager pushes the changes to production. And so on, with possible additional levels in between. Capistrano (even with the multistage extension in capistrano-ext) seems to be built for a single user having permissions to deploy to any environment. What is the recommended setup for cases where people at the bottom level shouldn't be able to deploy all the way to production?

In setting up Capistrano for deployment, there is a difference between the user account used to perform the deployment and the people who are permitted to deploy.
In Capistrano you set up the user:
set :user, 'deploy'
This user account must exist on each machine the Capistrano deploy script connects to, for each role (app, web, db). It is recommended to set it up with SSH key authentication.
When someone runs cap deploy, Capistrano connects to the machines over SSH, and this only works if your public key is installed in that account's authorized_keys.
This method allows different people to have different access to the machines. For production, install only the SSH keys of the people with admin access to those machines. Then even if someone else runs cap deploy, it will fail because they cannot authenticate as the remote user.
We allow anyone to have their SSH key on the staging environment, but only a couple of people have access to the production server.
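With the capistrano-ext multistage extension, this policy needs no extra tooling: every stage can use the same remote account, and the per-environment key installation described above is the only gate. A minimal sketch, assuming the usual multistage file layout (application and host names here are placeholders, not taken from the question):

```ruby
# config/deploy.rb
set :stages, %w[staging production]
set :default_stage, 'staging'
require 'capistrano/ext/multistage'

set :application, 'myapp'
set :user, 'deploy'  # same remote account everywhere; access differs per box

# config/deploy/staging.rb
# Every developer's public key is installed for deploy@staging.
server 'staging.example.com', :app, :web, :db, primary: true

# config/deploy/production.rb
# Only release managers' keys are installed here, so `cap production deploy`
# fails at the SSH handshake for everyone else.
server 'prod.example.com', :app, :web, :db, primary: true
```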

Keep admin and frontend deployments in sync using Azure pipelines

We have been tripped up twice recently as our development output has increased.
We have backend services, an Admin SPA site, and a number of frontend applications, including native apps, all in different repos.
We also have fully automated CI/CD pipelines for everything, and they work fantastically.
What has happened recently is that the public applications have gotten ahead of the Admin SPA, which is making the team look bad.
Has anyone seen a solution that requires minimal input from developers? The more I can rely on automation, the better.
The goal is to keep feature deployments in concert.
Thanks
So the plan is to adopt semantic versioning, with a route on admin that returns a JSON response containing the version number.
The build and deploy for admin takes in the version and exposes it.
The deploy for the reliant apps has a script that queries admin before starting.
There is still a bit of manual work for the developers, but it is manageable.
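The version-gate step can be a small script in the reliant apps' pipelines. A sketch in Ruby, where the endpoint URL and the required minimum version are hypothetical placeholders:

```ruby
require 'net/http'
require 'json'

# Compares two dotted semantic versions numerically (no pre-release tags).
# Returns true when `current` is at least `required`.
def version_satisfied?(current, required)
  to_parts = ->(v) { v.split('.').map(&:to_i) }
  (to_parts.call(current) <=> to_parts.call(required)) >= 0
end

# Hypothetical version route on the Admin SPA's backend.
def admin_version(url = 'https://admin.example.com/api/version')
  JSON.parse(Net::HTTP.get(URI(url)))['version']
end

# In the reliant app's deploy script, abort before anything is shipped:
# abort 'Admin SPA is behind; deploy it first' unless
#   version_satisfied?(admin_version, '2.3.0')
```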
Thanks @Bruno

Test server for release pipeline in Azure DevOps

Forgive me for asking a stupid question. I am from an IT infrastructure background and have been asked to create CI/CD pipelines based on my recent learnings in DevOps.
We have a couple of applications whose source code is currently in TFS 2013; those apps are written in ASP.NET C#. The requirement is to migrate the source code from TFS to Azure Repos (Azure DevOps Services) and then create a CI/CD pipeline.
For demo purposes, the customer is asking us to do the deployment (i.e. the release pipeline) to a test server that is a plain Windows 2012 OS without SQL or IIS, for both of these applications. Is that possible, and how could we confirm the release pipeline is functioning properly?
In my opinion it won't work, as no application infrastructure/configuration exists for those applications on that plain test server. I guess we actually need a ready dev/stage environment that is a replica of production to test the release pipeline for those applications. Am I correct?
I just need expert advice for confirmation so I can communicate the same to the customer.
Azure DevOps Pipelines use an agent to perform the deployments. You can run the agent entirely in the cloud when deploying to Azure resources. You can also install an agent locally. Follow this link and scroll down to read about self-hosted agents. This is how you can deploy to your test instance from the pipeline.
Now, what you deploy there may require additional software be installed. You say it's an application in C#. Cool. Now, what's it do? Is it a windows program? Then just having the server there, with an agent installed, is all you need. Is it a web program? Then, yeah, it's going to need an IIS (or whatever) instance available somewhere to deploy to. Is it a database program? Then, yeah, it's going to need a database instance to deploy to. There's nothing magical about having a VM or a machine somewhere. All the same rules have to apply. There has to be an OS, drives, memory, and yes, supporting services depending on the needs of the application.
Using a local machine instead of a hosted agent works fine. Just follow the instructions in the link above.

Secure Deployment pattern using Octopus deploy

How would one go about creating a secure means of deploying a package by way of Octopus Deploy?
One option is implementing duplicate teams with identical roles: the former for developers, who deploy to the development environment; the latter for specific users (team leads), who alone can deploy to staging/production.
The idea is to prevent developers from deploying or promoting to staging/production, as a security measure.
Having duplicate teams seems rather clunky, though, and would cause confusion, especially with keeping the teams in sync whenever new Octopus projects are created.
What would you advise/recommend in this approach?
Ninja Edit: I have included the teamcity and powershell tags because that is the idea: when a build is kicked off, TeamCity will eventually hand over to Octopus Deploy, which carries out the deployment process to that environment.
We're in a similar situation where developers are responsible for the DEVELOPMENT environment, testers for TEST and the operations team for PREPROD and PROD.
This is enforced by giving all users access to Octopus Deploy, creating environment-specific teams with roles scoped to particular environments, and assigning users to those teams.
http://docs.octopusdeploy.com/display/OD/Managing+users+and+teams
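Creating the environment-scoped teams can also be scripted against the Octopus REST API, which reduces the syncing burden when new projects appear. A sketch, assuming the older (3.x-era) team resource where user roles and environment scope live directly on the team; the server URL, API key, and all ids are placeholders:

```ruby
require 'net/http'
require 'json'

# Builds the team resource body. In the API version assumed here,
# 'EnvironmentIds' restricts the team's roles to those environments.
def team_payload(name, user_ids, role_ids, environment_ids)
  {
    'Name'           => name,
    'MemberUserIds'  => user_ids,
    'UserRoleIds'    => role_ids,
    'EnvironmentIds' => environment_ids
  }
end

def create_team(payload, server: 'https://octopus.example.com',
                api_key: ENV['OCTOPUS_API_KEY'])
  uri = URI("#{server}/api/teams")
  req = Net::HTTP::Post.new(uri, 'X-Octopus-ApiKey' => api_key,
                                 'Content-Type'     => 'application/json')
  req.body = JSON.generate(payload)
  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.request(req)
  end
end

# One team per environment tier; membership is then the only thing to maintain:
# create_team(team_payload('Developers', dev_ids, [deployer_role], [dev_env]))
# create_team(team_payload('Ops', ops_ids, [deployer_role], [preprod_env, prod_env]))
```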

Disconnect TFS Clone from production systems

How can I ensure that a TFS 2010/2012 clone doesn't interact with systems from my production environment? I want to run the clone in parallel for an upgrade test and some further tests. The clone should not interact with production systems. Is there a way to do that without knowing exactly which systems are involved?
You will need to run the ChangeServerId command to ensure that the GUIDs for the configuration database and collection databases are changed. Here is a link to the MSDN article on the command: http://msdn.microsoft.com/en-us/library/vstudio/ee349259.aspx
NOTE: You must ensure that the app tier is not configured for the databases before running this command. If an app tier is configured, you will need to run the RemapDBs command located here: http://msdn.microsoft.com/en-us/library/vstudio/ee349262.aspx and restart the TfsJobAgent service on the app tier.

Why might a team opt for local capistrano scripts over an online deployment utility like beanstalk's?

My team uses local capistrano scripts for deployment of a few web apps. We use beanstalk's hosted repository for our git repos.
It's always bothered me that our deployment workflow doesn't give us a centralized log of all deployments to each environment. Beanstalk's deployment feature seems to accomplish this goal and much more.
Some weird legacy requirements meant that beanstalk's simple FTP deployments were insufficient. However, their deployment feature can now execute SSH commands and their deployment case study made me seriously reconsider our workflow.
What are the downsides of the beanstalk approach relative to the local capistrano one? Is there any reason I shouldn't make the switch?
The best thing about SSH deployments in Beanstalk is that you don't have to decide between Capistrano and Beanstalk. One of the best use cases is to use them together: Beanstalk as a manager and Capistrano as a performer.
You can set up SSH deployments in Beanstalk to log in to one of your servers and issue a Capistrano deployment from there. It will then deploy the code to itself and the other servers.
This way you get the flexibility and bulletproof nature of Capistrano (transactional deploys) with the ease of use, permissions, notifications, and timeline of Beanstalk deployments.
This also gives you the ability to run Capistrano deployments from a mobile device or from a computer that doesn't have Capistrano installed.
P.S. — I work at Wildbit.