Using permissions/other features to limit deployment in Azure DevOps

I need to configure permissions and make use of native features to limit deployment within Azure DevOps, so that those with limited access can only release to dev/test environments and those with privileged access can deploy to all environments, including staging/prod, for example.
I'd like to achieve this without splitting release pipelines up - is it best just to use pre-deployment approvals or is there a better way to remove the ability for those with limited access to deploy into prod, at all?
Can this be done by limiting access to service connections, for example? So a limited user would have 'User' access to the dev/test service connections but not staging/prod, as a safety net?
Just looking for some tips/best practice advice.
Thanks.

You could use deployment groups to handle this.
A deployment group is a logical set of deployment target machines that have agents installed on each one. Deployment groups represent the physical environments; for example, "Dev", "Test", "UAT", and "Production". In effect, a deployment group is just another grouping of agents, much like an agent pool.
When authoring an Azure Pipelines or TFS Release pipeline, you can specify the deployment targets for a job using a deployment group. This makes it easy to define parallel execution of deployment tasks.
Deployment groups:
Specify the security context and runtime targets for the agents. As you create a deployment group, you add users and give them appropriate permissions to administer, manage, view, and use the group.
Let you view live logs for each server as a deployment takes place, and download logs for all servers to track your deployments down to individual machines.
Enable you to use machine tags to limit deployment to specific sets of target servers.
In addition, I suggest you take a look at this blog: Configuring your release pipelines for safe deployments, which covers several points:
Gradual rollout to multiple environments
Uniform deployment workflow for all environments
Manual approval for rollouts
Segregation of roles
Health check during roll out
Branch filters for deployments
Secure the pipelines
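To make the "manual approval" and "segregation of roles" points concrete without splitting the pipeline: if you use (or mirror this in) multi-stage YAML, a common pattern is to give each stage its own environment and service connection, put approval checks on the staging/prod environment, and grant limited users the User role only on the dev/test service connections. A minimal sketch, with placeholder stage, environment, service connection, and app names:

```yaml
# Hypothetical multi-stage pipeline; all names are placeholders.
stages:
- stage: DeployDev
  jobs:
  - deployment: Dev
    environment: dev                       # no approval checks configured
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebApp@1
            inputs:
              azureSubscription: 'svc-conn-dev'    # dev/test service connection
              appName: 'my-app-dev'
              package: '$(Pipeline.Workspace)/drop/*.zip'   # placeholder artifact path

- stage: DeployProd
  dependsOn: DeployDev
  jobs:
  - deployment: Prod
    environment: prod                      # approvals/checks configured on this environment
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebApp@1
            inputs:
              azureSubscription: 'svc-conn-prod'   # prod service connection, restricted access
              appName: 'my-app-prod'
              package: '$(Pipeline.Workspace)/drop/*.zip'
```

Note that approvals and checks are configured on the environment (and, if you want a second gate, on the service connection) in the portal rather than in the YAML itself.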

Related

Does setting up SSL for Azure DevOps on premises impact already configured deployment group agents using HTTP

We have Azure DevOps set up using HTTP and are planning to move to HTTPS. Since the deployment group agent targets were set up against the HTTP site, does this impact deployments that use deployment group agents?
The deployment group agents were configured on the servers using PowerShell pointing at the HTTP site; does that mean we need to reconfigure the agents?
Yes, you will need to reconfigure all agents.
Basically the Project Collection Uri for your instance will change.

Deployment Group and Environment in Azure devops

I want to understand the difference between an environment and a deployment group in Azure DevOps.
And when should I use one and not the other?
Thanks
Deployment Groups were the "old way" to do rolling deployments across multiple VMs. The new way is to use Environments with the rolling strategy in YAML.
My recommendation would be to let Deployment Groups lie and invest in Environments and YAML.
See:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/deployment-group-phases?view=azure-devops&tabs=yaml
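For reference, a minimal sketch of the Environment-based approach with the rolling strategy, assuming an Environment that has VM resources registered; the environment name, tag, and step are placeholders:

```yaml
jobs:
- deployment: RollingDeploy
  displayName: Rolling deploy to VMs
  environment:
    name: vm-env                 # Environment with registered VM resources
    resourceType: VirtualMachine
    tags: web                    # optional: limit to machines with this tag
  strategy:
    rolling:
      maxParallel: 2             # update at most two machines at a time
      deploy:
        steps:
        - script: echo "deploying on $(Agent.MachineName)"
```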

Kubernetes: How to manage multiple separate deployments of the same app

We're migrating our app's deployments from VMs to Kubernetes, and as my knowledge of Kubernetes is very limited, I'm lost as to how I could set up deployments for multiple clients.
Right now we have a separate VM for each client, but how do we separate the clients in Kubernetes in a way that is cost- and resource-efficient and easy to manage?
I managed to create dev and staging environments using namespaces and this is working great.
To update dev and staging deployment I just use kubectl apply -f <file> --namespace staging.
Now I need to deploy app to production for several clients (50+). They should be completely separate from each other (using separate environment variables and secrets) while code should be the same. And I don't know what is the best way to achieve that.
Could you please hint me what is the right way for that in Kubernetes?
You can use Kustomize. It provides a purely declarative approach to configuration customization and can manage an arbitrary number of distinctly customized Kubernetes configurations.
https://github.com/kubernetes-sigs/kustomize/tree/master/examples
Use one namespace (or a set of namespaces) per customer.
Kustomize has a very good pattern system for handling a generic configuration with several per-client adaptations.
Use NetworkPolicy to isolate network traffic between clients.
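As an illustration of that layout: a hedged sketch of a per-client Kustomize overlay, assuming a base/ directory with the shared manifests and one overlays/<client>/ directory per customer (all names below are placeholders):

```yaml
# overlays/client-a/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: client-a              # each client gets its own namespace
resources:
- ../../base                     # the shared application manifests
configMapGenerator:
- name: app-config
  literals:
  - CLIENT_NAME=client-a         # per-client environment variables
secretGenerator:
- name: app-secrets
  envs:
  - secrets.env                  # per-client secrets file, kept out of the shared base
```

You would then deploy each client with something like kubectl apply -k overlays/client-a, driven per client from your CI system.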

Using CloudFormation to build an environment on a new account

I'm trying to write some CloudFormation templates to set up a new account with all the resources needed to run our site. In this case we'll be setting up a UAT/test environment.
I have setup:
VPC
Security groups
ElastiCache
ALB
RDS
Auto scaling group
What I'm struggling with is, when I bring up my auto scaling group with our silver AMI, it fails health checks and the auto scaling group gets rolled back.
I have our code in a Git repo, which is to be deployed via CodeDeploy, but it seems I can't add a CodeDeploy deployment without an auto scaling group and I can't set up the auto scaling group without CodeDeploy.
Should I modify our silver AMI to fake the health checks so the auto scaling group can be created? Or should I create the auto scaling group without health checks until a later step?
How can I programmatically set up CodeDeploy with CloudFormation so it pulls the latest code from our Git repo?
Create the deployment app, group, etc. when you create the rest of the infrastructure, via CloudFormation.
One of the parameters to your template would be the app package already available in an S3 CodeDeploy bucket, or the GitHub commit ID of a working release of your app.
In addition to the other methods available to you in CodeDeploy, you can use AWS CloudFormation templates to perform the following tasks: Create applications, Create deployment groups and specify a target revision, Create deployment configurations, Create Amazon EC2 instances.
See https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-cloudformation-templates.html
With this approach you can launch a working version of your app as you create your infrastructure. Use normal health checks so you can be sure your app is properly configured.
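A rough sketch of what the CodeDeploy pieces can look like in the template; logical IDs, the service role, the auto scaling group reference, and the S3 bucket/key are placeholders, with the rest of the infrastructure assumed to be defined elsewhere in the same template:

```yaml
Parameters:
  AppBundleKey:
    Type: String                          # e.g. a revision zip already uploaded to S3
Resources:
  CodeDeployApp:
    Type: AWS::CodeDeploy::Application
  CodeDeployGroup:
    Type: AWS::CodeDeploy::DeploymentGroup
    Properties:
      ApplicationName: !Ref CodeDeployApp
      ServiceRoleArn: !GetAtt CodeDeployServiceRole.Arn   # IAM role defined elsewhere
      AutoScalingGroups:
        - !Ref AppAutoScalingGroup                        # the ASG from this template
      Deployment:                                         # deploy this revision at stack creation
        Revision:
          RevisionType: S3
          S3Location:
            Bucket: my-deploy-bucket
            Key: !Ref AppBundleKey
            BundleType: Zip
```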

Multiple environments (Staging, QA, production, etc) with Kubernetes

What is considered a good practice with K8S for managing multiple environments (QA, Staging, Production, Dev, etc)?
As an example, say that a team is working on a product which requires deploying a few APIs, along with a front-end application. Usually, this will require at least 2 environments:
Staging: For iterations/testing and validation before releasing to the client
Production: This is the environment the client has access to. It should contain stable, well-tested features.
So, assuming the team is using Kubernetes, what would be a good practice for hosting these environments? So far we've considered two options:
Use a K8s cluster for each environment
Use only one K8s cluster and keep them in different namespaces.
(1) Seems the safest option since it minimizes the risk of potential human mistakes and machine failures putting the production environment in danger. However, this comes at the cost of more master machines and more infrastructure management.
(2) Looks like it simplifies infrastructure and deployment management because there is one single cluster but it raises a few questions like:
How does one make sure that a human mistake won't impact the production environment?
How does one make sure that a high load in the staging environment won't cause a loss of performance in the production environment?
There might be some other concerns, so I'm reaching out to the K8s community on Stack Overflow to get a better understanding of how people deal with these sorts of challenges.
Multiple Clusters Considerations
Take a look at this blog post from Vadim Eisenberg (IBM / Istio): Checklist: pros and cons of using multiple Kubernetes clusters, and how to distribute workloads between them.
I'd like to highlight some of the pros/cons:
Reasons to have multiple clusters
Separation of production/development/test: especially for testing a new version of Kubernetes, of a service mesh, of other cluster software
Compliance: according to some regulations some applications must run in separate clusters/separate VPNs
Better isolation for security
Cloud/on-prem: to split the load between on-premise services
Reasons to have a single cluster
Reduce setup, maintenance and administration overhead
Improve utilization
Cost reduction
Considering a not too expensive environment, with average maintenance, and yet still ensuring security isolation for production applications, I would recommend:
1 cluster for DEV and STAGING (separated by namespaces, maybe even isolated, using Network Policies, like in Calico)
1 cluster for PROD
Environment Parity
It's a good practice to keep development, staging, and production as similar as possible:
Differences between backing services mean that tiny incompatibilities crop up, causing code that worked and passed tests in development or staging to fail in production. These types of errors create friction that disincentivizes continuous deployment.
Combine a powerful CI/CD tool with Helm. You can use the flexibility of Helm values to set default configurations, overriding only the configs that differ from one environment to another.
GitLab CI/CD with Auto DevOps has a powerful integration with Kubernetes, which lets you manage multiple Kubernetes clusters with Helm support out of the box.
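As a small illustration of the Helm values approach (file names and keys are made up for the example):

```yaml
# values.yaml - shared defaults for every environment
replicaCount: 1
image:
  repository: registry.example.com/my-app
resources:
  requests:
    cpu: 100m
    memory: 128Mi
---
# values-prod.yaml - only the keys that differ in production
replicaCount: 3
resources:
  requests:
    cpu: 500m
    memory: 512Mi
```

A production install would then layer the files, for example helm upgrade --install my-app ./chart -f values.yaml -f values-prod.yaml, with the later file overriding only the keys that differ.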
Managing multiple clusters (kubectl interactions)
When you are working with multiple Kubernetes clusters, it's easy to mess up contexts and run kubectl in the wrong cluster. Beyond that, Kubernetes has restrictions on version mismatch between the client (kubectl) and server (Kubernetes master), so running commands in the right context does not mean running the right client version.
To overcome this:
Use asdf to manage multiple kubectl versions
Set the KUBECONFIG env var to change between multiple kubeconfig files
Use kube-ps1 to keep track of your current context/namespace
Use kubectx and kubens to change fast between clusters/namespaces
Use aliases to combine them all together
I have an article that exemplifies how to accomplish this: Using different kubectl versions with multiple Kubernetes clusters
I also recommend the following reads:
Mastering the KUBECONFIG file by Ahmet Alp Balkan (Google Engineer)
How Zalando Manages 140+ Kubernetes Clusters by Henning Jacobs (Zalando Tech)
Definitely use a separate cluster for development and creating docker images so that your staging/production clusters can be locked down security wise. Whether you use separate clusters for staging + production is up to you to decide based on risk/cost - certainly keeping them separate will help avoid staging affecting production.
I'd also highly recommend using GitOps to promote versions of your apps between your environments.
To minimise human error I also recommend you look into automating as much as you can for your CI/CD and promotion.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes, using GitOps for promotion between environments and preview environments on pull requests. It was done live on GKE, though Jenkins X supports most Kubernetes clusters.
It depends on what you want to test in each of the scenarios. In general I would try to avoid running test scenarios on the production cluster to avoid unnecessary side effects (performance impact, etc.).
If your intention is testing with a staging system that exactly mimics the production system, I would recommend firing up an exact replica of the complete cluster, shutting it down after you're done testing, and moving the deployments to production.
If your purpose is a staging system that lets you test application deployments, I would run a smaller staging cluster permanently and update its deployments (also with a scaled-down version of the deployments) as required for continuous testing.
To control the different clusters, I prefer having a separate CI/CD machine that is not part of the cluster but is used for firing up and shutting down clusters, as well as performing deployment work, initiating tests, etc. This allows clusters to be set up and shut down as part of automated testing scenarios.
It's clear that by keeping the production cluster apart from the staging one, the risk of potential errors impacting the production services is reduced. However, this comes at the cost of more infrastructure/configuration management, since it requires:
at least 3 masters for the production cluster and at least one master for the staging one
2 Kubectl config files to be added to the CI/CD system
Let's also not forget that there could be more environments than just staging and production. For example, I've worked at companies with at least 3 environments:
QA: This is where we did daily deploys and our internal QA before releasing to the client.
Client QA: This is where we deployed before production so that the client could validate the environment before the release.
Production: This is where production services are deployed.
I think ephemeral/on-demand clusters make sense, but only for certain use cases (load/performance testing or very "big" integration/end-to-end testing). For more persistent/sticky environments, I see an overhead that might be reduced by running them within a single cluster.
I guess I wanted to reach out to the K8s community to see what patterns are used for scenarios like the ones I've described.
Unless compliance or other requirements dictate otherwise, I favor a single cluster for all environments. With this approach, attention points are:
Make sure you also group nodes per environment using labels. You can then use nodeSelector on resources to ensure that they run on specific nodes. This reduces the chances of (excess) resource consumption spilling over between environments.
Treat your namespaces as subnets and forbid all egress/ingress traffic by default (see the sketch after this list). See https://kubernetes.io/docs/concepts/services-networking/network-policies/.
Have a strategy for managing service accounts. ClusterRoleBindings imply something different if a cluster hosts more than one environment.
Apply scrutiny when using tools like Helm. Some charts blatantly install service accounts with cluster-wide permissions, but permissions for service accounts should be limited to the environment they are in.
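A minimal default-deny sketch for the namespaces-as-subnets point above (the namespace name is a placeholder, and you would add explicit allow policies on top of this):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: staging
spec:
  podSelector: {}            # selects every pod in the namespace
  policyTypes:
  - Ingress
  - Egress                   # no rules listed, so all ingress and egress is denied
```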
I think there is a middle ground. I am working with EKS and node groups. The master is managed, scaled, and maintained by AWS. You could then create three kinds of node groups (just an example):
1 - General Purpose -> labels: environment=general-purpose
2 - Staging -> labels: environment=staging (taints if necessary)
3 - Prod -> labels: environment=production (taints if necessary)
You can use tolerations and node selectors on the pods so they are placed where they are supposed to be (see the sketch at the end of this answer).
This allows you to use more robust or powerful nodes for production's node groups and, for example, Spot instances for staging, UAT, QA, etc. It has a couple of big upsides:
Environments are physically separated (and virtually too, in namespaces)
You can reduce costs by sharing not only the masters but also some nodes for pods shared by the two environments, and by using Spot or cheaper instances in staging/UAT/...
No extra cluster-management overhead
You have to pay attention to roles and policies to keep it secure. You can implement network policies using, for example, EKS + Calico.
Update:
I found a doc that may be useful when using EKS. It has some details on how to safely run a multi-tenant cluster, and some of these details may help isolate production pods and namespaces from the ones in staging.
https://aws.github.io/aws-eks-best-practices/security/docs/multitenancy/
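To make the scheduling part concrete, a hedged sketch of a production pod pinned to the production node group via nodeSelector plus a matching toleration; the label/taint key and values mirror the example node groups above, and the namespace and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-prod
  namespace: production
spec:
  nodeSelector:
    environment: production          # only schedule onto the prod node group
  tolerations:
  - key: "environment"
    operator: "Equal"
    value: "production"
    effect: "NoSchedule"             # tolerate the taint placed on prod nodes
  containers:
  - name: api
    image: registry.example.com/api:1.0.0
```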
Using multiple clusters is the norm, at the very least to enforce a strong separation between production and "non-production".
In that spirit, do note that GitLab 13.2 (July 2020) now includes:
Multiple Kubernetes cluster deployment in Core
Using GitLab to deploy multiple Kubernetes clusters with GitLab previously required a Premium license.
Our community spoke, and we listened: deploying to multiple clusters is useful even for individual contributors.
Based on your feedback, starting in GitLab 13.2, you can deploy to multiple group and project clusters in Core.
See documentation and issue.
A few thoughts here:
Do not trust namespaces to protect the cluster from catastrophe. Having separate production and non-prod (dev,stage,test,etc) clusters is the minimum necessary requirement. Noisy neighbors have been known to bring down entire clusters.
Separate repositories for code and k8s deployments (Helm, Kustomize, etc.) will make best practices like trunk-based development and feature-flagging easier as the teams scale.
Using Environments as a Service (EaaS) will allow each PR to be tested in its own short-lived (ephemeral) environment. Each environment is a high-fidelity copy of production (including custom infrastructure like databases, buckets, DNS, etc.), so devs can remotely code against a trustworthy environment (NOT minikube). This can help reduce configuration drift, improve release cycles, and improve the overall dev experience. (Disclaimer: I work for an EaaS company.)
I think running a single cluster makes sense because it reduces overhead and monitoring effort. But you have to make sure to put network policies and access control in place.
Network policy: to prohibit dev/QA environment workloads from interacting with prod/staging stores.
Access control: who has access to which environment's resources, using ClusterRoles, Roles, etc.
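For the access-control point, a minimal RBAC sketch that scopes a team to a single environment namespace instead of using cluster-wide bindings (the namespace, role name, and group subject are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-deployer
  namespace: dev
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-deployer
  namespace: dev
subjects:
- kind: Group
  name: dev-team                     # maps to a group from your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-deployer
  apiGroup: rbac.authorization.k8s.io
```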