Does Vault support two CA intermediates to sign two different environment certs? - hashicorp-vault

There are multiple non-production environments, such as DEV, TEST, QA, SIT, PKG, and if we want to manage a separate CA for each environment to sign certs, should we set up a separate vault cluster for each environment?
Is there any way that we can manage all these CA in the same vault cluster?

You can create as many PKI secrets engines as you like within a single vault cluster.
Rather than separate clusters (which would be a lot of overhead), create and mount PKI engines at separate mount points, e.g. pki/dev, pki/test, etc. Each engine will hold the CA for the corresponding environment.
This applies to all secrets engines: Vault is happy to have more than one mounted, and all operations are on a specific path. You can of course apply separate policies to those engines.
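As a rough sketch of what that looks like with the Vault CLI (mount paths and common names here are illustrative, not prescriptive):

```shell
# Mount one PKI secrets engine per environment
vault secrets enable -path=pki/dev pki
vault secrets enable -path=pki/test pki

# Generate a separate CA in each mount (could also be an intermediate
# signed by an offline root)
vault write pki/dev/root/generate/internal \
    common_name="dev.example.com" ttl=8760h
vault write pki/test/root/generate/internal \
    common_name="test.example.com" ttl=8760h

# A policy can then restrict a team to its own mount, e.g.:
# path "pki/dev/*" { capabilities = ["create", "read", "update", "list"] }
```

Each mount is completely independent: its own CA, its own roles, its own policies.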

Related

Application Load Balancers in an EKS cluster

I'm trying to figure out ways to automate k8s deployments in an EKS cluster. I'm trying to set up namespaces for each specific environment: one for dev, one for staging, and one for production. My production namespace is in a separate region and also in a separate cluster (dev & staging are in one cluster). I'm a little new to this concept, but does it make sense to have each respective application load balancer in its respective namespace? Is that practice common or best practice? Any ideas on automating deployments would be appreciated.
Hi Dave Michaels,
I assume there are two questions in your post above:
If we use a dedicated namespace per environment in the same cluster (the dev & staging setup), can we use a dedicated load balancer for each of these namespaces? Is this good practice?
Answer: Yes. Since you are using the namespace concept for each environment in the same cluster, it is OK to create a dedicated load balancer (promise me you will use Ingress :)) in each of these namespaces, as you need an easy way to access those environments. To be frank, I am not a fan of using namespaces for environments, because as your cluster grows and lots of microservices get added to it, you might want to use namespaces for other reasons, e.g. a namespace per team or domain to get granular access rights. But I have seen teams use them for different environments successfully as well.
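A minimal sketch of a per-namespace Ingress, assuming an ingress controller (for example the AWS Load Balancer Controller) is already installed; all names and hostnames below are illustrative:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: alb   # controller-specific annotation
spec:
  rules:
  - host: dev.myapp.example.com        # one hostname per environment
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
EOF
```

The same manifest with a different namespace and host gives you a dedicated entry point per environment.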
Suggest automated Kubernetes deployments possibilities?
This is a large topic by itself.
As your microservices grow, you will have multiple Kubernetes manifests to handle. The first thing I suggest is to use either a configuration manager like Kustomize or a package manager like Helm to separate variables from the actual manifests; this makes it easy to automate deployment across environments (same cluster or different clusters). As for the deployment automation itself, if there is no existing CD in place, I suggest exploring tools with native Kubernetes support that follow GitOps, like FluxCD or ArgoCD.
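To make the Kustomize idea concrete, here is a sketch of a base-plus-overlays layout; every path and name is illustrative:

```shell
# Base holds the shared manifests; overlays hold per-environment patches
mkdir -p base overlays/dev overlays/prod

cat > base/kustomization.yaml <<'EOF'
resources:
- deployment.yaml
- service.yaml
EOF

cat > overlays/dev/kustomization.yaml <<'EOF'
resources:
- ../../base
patches:
- patch: |-
    - op: replace
      path: /spec/replicas
      value: 1
  target:
    kind: Deployment
    name: myapp
EOF

# Render or apply the environment-specific variant
kubectl kustomize overlays/dev    # render only, for inspection
kubectl apply -k overlays/dev     # apply to the current cluster/context
```

The CD tool (FluxCD, ArgoCD) then just points each environment at its overlay directory.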

What is the recommended cert strategy for multi-master K8s clusters?

Is it atypical for multi-master K8s cluster deployments to use unique certs per service, per controller node? Most guides I've seen generate unique certs per service (API, Controller, Scheduler) and then use those certs for the eponymous service on each Controller node.
Does Kubernetes disallow or discourage unique certs per service, per node? With DNS/IP SANs it should be possible to still have each service respond to a singular cluster address, so I'm curious if this decision is one for the sake of simpler instructions, or if it's actually some requirement I'm missing.
Thank you.
Does Kubernetes disallow or discourage unique certs per service, per node? With DNS/IP SANs it should be possible to still have each service respond to a singular cluster address, so I'm curious if this decision is one for the sake of simpler instructions, or if it's actually some requirement I'm missing
When we have running Kubernetes clusters, we can have thousands of private and public keys, and the different components usually cannot tell on their own whether they are valid. That is what the Certificate Authority is for: a third-party entity that tells interested components "this certificate is trusted".
documentation:
Every Kubernetes cluster has a cluster root Certificate Authority (CA). The CA is generally used by cluster components to validate the API server's certificate, by the API server to validate kubelet client certificates, etc.
This actually shows that you can have different certificates in each cluster, but it is not a requirement; you can imagine many different combinations of CAs. You can have one global CA responsible for signing all the keys, one CA per cluster, one for internal communication and one for external, etc.
Any request that presents a client certificate signed by the cluster CA will be considered authenticated. During that authentication process, the username is taken from the Common Name field (CN) and the group from the Organization field (O) of that certificate. So the answer is yes: you can use different certs per service, node, or any component in the cluster, as long as each is signed by the cluster's Certificate Authority.
When creating certificates for the masters in an HA (multi-master) cluster, you have to make sure that the load balancer's IP and DNS name are part of that certificate (as SANs). Otherwise, whenever a client tries to talk to the API server through the LB, it will complain because the name on the certificate differs from the one it is trying to reach.
Going further, each of the core cluster components has its own client certificate in addition to the main certificate, because each of them has a different access level to the cluster and a different common name.
It is noteworthy that the kubelet's certificate is named a little differently: each kubelet has a distinct identity (the hostname where the kubelet runs is part of the certificate). This is related to features like the Node Authorizer and the NodeRestriction admission plugin, which matter from the perspective of least privilege: they limit the otherwise unrestricted access the kubelet would have to the apiserver.
Using these features, you can limit a kubelet to only being able to modify its own Node resource instead of the whole cluster, and likewise to only read its own node's secrets instead of all secrets in the cluster, etc.
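To illustrate the CN/O identity mapping described above, here is a sketch of minting a component's client key pair with openssl (the scheduler's well-known identity is used as an example; the file names are illustrative, and the actual signing step would use your cluster CA):

```shell
# Create a client key and CSR; Kubernetes reads the username from CN
# and the group from O once the cert is signed by the cluster CA
openssl req -new -newkey rsa:2048 -nodes \
  -keyout client.key -out client.csr \
  -subj "/CN=system:kube-scheduler/O=system:kube-scheduler"

# The cluster CA would then sign it, e.g.:
# openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
#   -CAcreateserial -out client.crt -days 365

# Inspect the identity embedded in the request
openssl req -in client.csr -noout -subject
```

Any number of such certs can coexist, per service and per node, as long as the same CA signs them.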
EDIT - following comment discussion:
Assuming you are asking for an opinion on why more people do not use multiple kinds of certificates: I think it is because doing so does not really raise security in a significant way. The certs are not as important as the CA, which is the trusted guarantor that lets entities talk to each other securely. You can create multiple CAs, but the reason for that would be more of an HA approach than security. And if you have a trusted CA, you don't need more certificate kinds, as you do not actually achieve any goal by increasing their number.

management of kubernetes secrets

we are starting with Kubernetes and wondering how other projects manage Kubernetes secrets:
Since Kubernetes Secret values are just base64 encoded, it's not recommended to commit them into source control
If not committing to source control, they should be kept in some central place somewhere else, otherwise there's no single source of truth. If they are stored somewhere else (e.g. HashiCorp Vault), how does that integrate with CI? Does the CI get values from Vault and create Secret resources on demand in Kubernetes?
Another approach is to have a dedicated team that handles infrastructure and is the only one that knows and manages secrets. But this team can potentially become a bottleneck if the number of projects is large
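For context on the first point: base64 is an encoding, not encryption, so anyone who can read the Secret manifest can recover the value. A quick sketch:

```shell
# Encoding a value the way a Secret manifest stores it
encoded=$(printf 'supersecret' | base64)
echo "$encoded"                        # prints: c3VwZXJzZWNyZXQ=

# "Decrypting" it requires no key at all
printf '%s' "$encoded" | base64 -d     # prints: supersecret
```

This is why committing raw Secret manifests to source control is equivalent to committing the plaintext.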
how other projects manage Kubernetes secrets
Since they are not (at least not yet) proper secrets (just base64 encoded), we keep them in a separate restricted-access git repository.
Most of our projects have code repository (with non-secret related manifests such as deployments and services as part of CI/CD pipeline) and separate manifest repository (holding namespaces, shared database inits, secrets and more or less anything that is either one-time init separate from CI/CD, requires additional permission to implement or that should be restricted in any other way such as secrets).
With that being said, although a regular developer doesn't have access to the restricted repository, special care must be given to CI/CD pipelines: even if you secure the secrets, they are known (and can be displayed/misused) during the CI/CD stage, so that can be a weak security spot. We mitigate that by having one of our DevOps engineers supervise and approve (via protected branches) any change to the CI/CD pipeline, in much the same manner that a senior lead supervises code changes to be deployed to the production environment.
Note that this is highly dependent on project volume and staffing, as well as actual project needs in terms of security, development pressure, and infrastructure integration.
I came across this project on GitHub called SealedSecrets: https://github.com/bitnami-labs/sealed-secrets. I haven't used it myself, though it seems to be a good alternative.
But take note of this github issue (https://github.com/bitnami-labs/sealed-secrets/issues/92). It may cause you to lose labels and annotations.
In a nutshell, SealedSecrets provides a custom resource definition that holds your secret in encrypted form. When you deploy the resource, the in-cluster controller decrypts it and turns it into a Kubernetes Secret. This way you can commit your SealedSecret resource to your source code repo.
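A sketch of that flow with the kubeseal CLI (secret name and key are illustrative; kubeseal fetches the controller's public certificate from the cluster unless you pass it explicitly):

```shell
# Build a plain Secret manifest locally, without applying it
kubectl create secret generic db-creds \
  --from-literal=password=supersecret \
  --dry-run=client -o yaml > secret.yaml

# Encrypt it with the controller's public key; only the in-cluster
# controller holds the private key needed to decrypt it
kubeseal --format yaml < secret.yaml > sealed-secret.yaml

git add sealed-secret.yaml        # safe to commit; secret.yaml is not
kubectl apply -f sealed-secret.yaml
```

On apply, the controller decrypts the SealedSecret and creates the corresponding Secret in the same namespace.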
I use k8s Secrets as the store where secrets are kept. That is, when I define a secret I define it in k8s, not somewhere else that I then have to figure out how to inject into k8s. I have a handy client to create, look up, and modify my secrets. I don't need to worry about my secrets leaving the firewall, and they are easily injected into my services.
If you want an extra layer of protection, you can encrypt the Secrets in k8s yourself with a KMS or something like that.
We recently released a project called Kamus. The idea is to allow developers to encrypt secrets for a specific application (identified with a Kubernetes service account), while only this application can decrypt it. Kamus was designed to support GitOps flow, as the encrypted secret can be committed to source control. Take a look at this blog post for more details.

Multiple environments (Staging, QA, production, etc) with Kubernetes

What is considered a good practice with K8S for managing multiple environments (QA, Staging, Production, Dev, etc)?
As an example, say that a team is working on a product which requires deploying a few APIs, along with a front-end application. Usually, this will require at least 2 environments:
Staging: For iterations/testing and validation before releasing to the client
Production: This is the environment the client has access to. Should contain stable and well-tested features.
So, assuming the team is using Kubernetes, what would be a good practice to host these environments? This far we've considered two options:
Use a K8s cluster for each environment
Use only one K8s cluster and keep them in different namespaces.
(1) Seems the safest option since it minimizes the risk of potential human mistakes and machine failures that could put the production environment in danger. However, this comes at the cost of more master machines and more infrastructure management.
(2) Looks like it simplifies infrastructure and deployment management because there is one single cluster, but it raises a few questions:
How does one make sure that a human mistake won't impact the production environment?
How does one make sure that a high load in the staging environment won't cause a loss of performance in the production environment?
There might be some other concerns, so I'm reaching out to the K8s community on StackOverflow to get a better understanding of how people deal with these sorts of challenges.
Multiple Clusters Considerations
Take a look at this blog post from Vadim Eisenberg (IBM / Istio): Checklist: pros and cons of using multiple Kubernetes clusters, and how to distribute workloads between them.
I'd like to highlight some of the pros/cons:
Reasons to have multiple clusters
Separation of production/development/test: especially for testing a new version of Kubernetes, of a service mesh, of other cluster software
Compliance: according to some regulations some applications must run in separate clusters/separate VPNs
Better isolation for security
Cloud/on-prem: to split the load between cloud and on-premise services
Reasons to have a single cluster
Reduce setup, maintenance and administration overhead
Improve utilization
Cost reduction
Considering a not too expensive environment, with average maintenance, and yet still ensuring security isolation for production applications, I would recommend:
1 cluster for DEV and STAGING (separated by namespaces, maybe even isolated, using Network Policies, like in Calico)
1 cluster for PROD
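A minimal sketch of the namespace isolation mentioned above: a default-deny NetworkPolicy that only allows same-namespace traffic (requires a CNI that enforces NetworkPolicy, such as Calico; the namespace name is illustrative):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: staging
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # empty selector = all pods in this same namespace
EOF
```

Applied per namespace, this keeps DEV and STAGING workloads from talking to each other even though they share a cluster.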
Environment Parity
It's a good practice to keep development, staging, and production as similar as possible:
Differences between backing services mean that tiny incompatibilities crop up, causing code that worked and passed tests in development or staging to fail in production. These types of errors create friction that disincentivizes continuous deployment.
Combine a powerful CI/CD tool with Helm. You can use the flexibility of Helm values to set default configurations, overriding only the configs that differ from one environment to another.
GitLab CI/CD with AutoDevops has a powerful integration with Kubernetes, which allows you to manage multiple Kubernetes clusters already with helm support.
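The Helm values pattern looks roughly like this (chart path, release, and file names are illustrative); later `-f` files override earlier ones, so the per-environment file only needs the deltas:

```shell
# Staging release: defaults plus staging-only overrides
helm upgrade --install myapp ./chart \
  --namespace staging \
  -f chart/values.yaml \
  -f chart/values-staging.yaml

# Production release: same chart, different overrides
helm upgrade --install myapp ./chart \
  --namespace production \
  -f chart/values.yaml \
  -f chart/values-production.yaml
```

The CI/CD pipeline then just selects the right kubeconfig/context and values file per stage.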
Managing multiple clusters (kubectl interactions)
When you are working with multiple Kubernetes clusters, it's easy to mess up contexts and run kubectl in the wrong cluster. Beyond that, Kubernetes has restrictions for versioning mismatch between the client (kubectl) and server (kubernetes master), so running commands in the right context does not mean running the right client version.
To overcome this:
Use asdf to manage multiple kubectl versions
Set the KUBECONFIG env var to change between multiple kubeconfig files
Use kube-ps1 to keep track of your current context/namespace
Use kubectx and kubens to change fast between clusters/namespaces
Use aliases to combine them all together
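A quick sketch of how those pieces fit together on the command line (paths and context/namespace names are illustrative):

```shell
# Merge several kubeconfig files for this shell session
export KUBECONFIG=~/.kube/config-dev:~/.kube/config-prod

kubectl config get-contexts   # see every cluster/context available

kubectx prod-cluster          # switch cluster fast
kubens production             # switch namespace fast

# Aliases to tie it together
alias k=kubectl
alias kgp='kubectl get pods'
```

With kube-ps1 in the prompt, the current context/namespace is always visible before you hit Enter.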
I have an article that exemplifies how to accomplish this: Using different kubectl versions with multiple Kubernetes clusters
I also recommend the following reads:
Mastering the KUBECONFIG file by Ahmet Alp Balkan (Google Engineer)
How Zalando Manages 140+ Kubernetes Clusters by Henning Jacobs (Zalando Tech)
Definitely use a separate cluster for development and creating docker images so that your staging/production clusters can be locked down security wise. Whether you use separate clusters for staging + production is up to you to decide based on risk/cost - certainly keeping them separate will help avoid staging affecting production.
I'd also highly recommend using GitOps to promote versions of your apps between your environments.
To minimise human error I also recommend you look into automating as much as you can for your CI/CD and promotion.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes, using GitOps for promotion between environments and Preview Environments on pull requests. It was done live on GKE, though Jenkins X supports most Kubernetes clusters.
It depends on what you want to test in each of the scenarios. In general I would try to avoid running test scenarios on the production cluster to avoid unnecessary side effects (performance impact, etc.).
If your intention is testing with a staging system that exactly mimics the production system, I would recommend firing up an exact replica of the complete cluster, shutting it down after you're done testing, and moving the deployments to production.
If your purpose is a staging system for testing application deployments, I would run a smaller staging cluster permanently and update its deployments (scaled-down versions of the production deployments) as required for continuous testing.
To control the different clusters, I prefer having a separate CI/CD machine that is not part of the cluster but is used for firing up and shutting down clusters, as well as performing deployment work, initiating tests, etc. This allows you to set up and shut down clusters as part of automated testing scenarios.
It's clear that by keeping the production cluster apart from the staging one, the risk of potential errors impacting the production services is reduced. However, this comes at the cost of more infrastructure/configuration management, since it requires:
at least 3 masters for the production cluster and at least one master for the staging one
2 kubectl config files to be added to the CI/CD system
Let’s also not forget that there could be more than one environment. For example I've worked at companies where there are at least 3 environments:
QA: This is where we did daily deploys and where we did our internal QA before releasing to the client
Client QA: This is where we deployed before deploying to production, so that the client could validate the environment before a production release
Production: This is where production services are deployed.
I think ephemeral/on-demand clusters make sense, but only for certain use cases (load/performance testing or very "big" integration/end-to-end testing). For more persistent/sticky environments, I see an overhead that might be reduced by running them within a single cluster.
I guess I wanted to reach out to the k8s community to see what patterns are used for such scenarios like the ones I've described.
Unless compliance or other requirements dictate otherwise, I favor a single cluster for all environments. With this approach, attention points are:
Make sure you also group nodes per environment using labels. You can then use a nodeSelector on resources to ensure they run on specific nodes. This reduces the chances of (excess) resource consumption spilling over between environments.
Treat your namespaces as subnets and forbid all egress/ingress traffic by default. See https://kubernetes.io/docs/concepts/services-networking/network-policies/.
Have a strategy for managing service accounts. ClusterRoleBindings imply something different when a cluster hosts more than one environment.
Apply scrutiny when using tools like Helm. Some charts blatantly install service accounts with cluster-wide permissions, but permissions to service accounts should be limited to the environment they live in.
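The first point above can be sketched as follows (node and label names are illustrative):

```shell
# Label nodes per environment...
kubectl label nodes node-1 node-2 environment=staging
kubectl label nodes node-3 node-4 environment=production

# ...then pin workloads to the matching nodes with a nodeSelector
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels: {app: myapp}
  template:
    metadata:
      labels: {app: myapp}
    spec:
      nodeSelector:
        environment: production   # only schedule onto production nodes
      containers:
      - name: myapp
        image: myapp:1.0
EOF
```

Staging workloads get `environment: staging`, so neither environment's pods can land on the other's nodes.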
I think there is a middle ground. I am working with EKS and node groups. The master is managed, scaled, and maintained by AWS. You could then create 3 kinds of node groups (just an example):
1 - General Purpose -> labels: environment=general-purpose
2 - Staging -> labels: environment=staging (taints if necessary)
3 - Prod -> labels: environment=production (taints if necessary)
You can use tolerations and node selectors on the pods so they are placed where they are supposed to be.
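A sketch of the taint/toleration side of that scheme (taint key/value and labels are illustrative):

```shell
# Taint the production node group so pods without a matching toleration
# cannot be scheduled there
kubectl taint nodes -l environment=production env=production:NoSchedule

# In the production pod spec, combine a toleration with a nodeSelector:
#   nodeSelector:
#     environment: production
#   tolerations:
#   - key: "env"
#     operator: "Equal"
#     value: "production"
#     effect: "NoSchedule"
```

The nodeSelector pulls production pods onto production nodes, while the taint pushes everything else away from them.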
This allows you to use more robust or powerful nodes for the production node group and, for example, SPOT instances for staging, uat, qa, etc., and it has a couple of big upsides:
Environments are physically separated (and virtually too, in namespaces)
You can reduce costs by sharing not only the masters but also some nodes (for pods shared by the two environments) and by using spot or cheaper instances in staging/uat/...
No cluster-management overhead
You have to pay attention to roles and policies to keep it secure. You can implement network policies using, for example, EKS + Calico.
Update:
I found a doc that may be useful when using EKS. It has some details on how to safely run a multi-tenant cluster, and some of these details may be useful for isolating production pods and namespaces from the ones in staging.
https://aws.github.io/aws-eks-best-practices/security/docs/multitenancy/
Using multiple clusters is the norm, at the very least to enforce a strong separation between production and "non-production".
In that spirit, do note that GitLab 13.2 (July 2020) now includes:
Multiple Kubernetes cluster deployment in Core
Using GitLab to deploy to multiple Kubernetes clusters previously required a Premium license.
Our community spoke, and we listened: deploying to multiple clusters is useful even for individual contributors.
Based on your feedback, starting in GitLab 13.2, you can deploy to multiple group and project clusters in Core.
See documentation and issue.
A few thoughts here:
Do not trust namespaces to protect the cluster from catastrophe. Having separate production and non-prod (dev,stage,test,etc) clusters is the minimum necessary requirement. Noisy neighbors have been known to bring down entire clusters.
Separate repositories for code and k8s deployments (Helm, Kustomize, etc.) will make best practices like trunk-based development and feature-flagging easier as the teams scale.
Using Environments as a Service (EaaS) will allow each PR to be tested in its own short-lived (ephemeral) environment. Each environment is a high-fidelity copy of production (including custom infrastructure like databases, buckets, DNS, etc.), so devs can remotely code against a trustworthy environment (NOT minikube). This can help reduce configuration drift, improve release cycles, and improve the overall dev experience. (disclaimer: I work for an EaaS company).
I think running a single cluster makes sense because it reduces overhead and monitoring effort. But you have to make sure to put network policies and access control in place.
Network policy - to prohibit dev/qa environment workloads from interacting with prod/staging stores.
Access control - who has access to different environment resources, using ClusterRoles, Roles, etc.
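The access-control point can be sketched with namespaced Roles instead of ClusterRoles, so permissions stop at the environment boundary (role, group, and namespace names are illustrative):

```shell
# A Role scoped to the dev namespace only
kubectl create role dev-edit \
  --verb=get,list,watch,create,update,delete \
  --resource=deployments,services,pods \
  --namespace=dev

# Bind it to the dev team; they get nothing outside the dev namespace
kubectl create rolebinding dev-team-edit \
  --role=dev-edit \
  --group=dev-team \
  --namespace=dev
```

A ClusterRoleBinding, by contrast, would grant those verbs across every environment in the cluster.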

How can I have dev/test/cert/prod environments in K8s?

I am looking for best option in handling DEV, TEST, CERT and PROD environments in Kubernetes.
You can use namespaces in Kubernetes. Create one namespace per environment.
http://kubernetes.io/docs/user-guide/namespaces/
Once things get more involved, you might move to one cluster per environment, or something like DEV and TEST in one cluster and CERT and PROD in their own clusters.
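Starting point as a sketch (namespace names taken from the question):

```shell
# One namespace per environment
kubectl create namespace dev
kubectl create namespace test
kubectl create namespace cert
kubectl create namespace prod

# Point the current context at one of them so plain kubectl commands
# operate on that environment by default
kubectl config set-context --current --namespace=dev
```

Combine this with ResourceQuotas, NetworkPolicies, and RBAC per namespace to keep the environments from interfering with each other.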