Kubernetes infrastructure as code best practice

Can anyone point me to a common strategy for setting up a Kubernetes cluster according to the principles of infrastructure as code and automated deployment, for multiple developer teams with their own Git repos and a yet-to-be-chosen CI/CD platform?
Let's say I am going to use Terraform to deploy a Kubernetes cluster on a hypothetical cloud service named QKS, together with a commonly used service for which a public Helm chart is available, for example Apache Airflow. There are also two custom services (from two independent developer teams) to deploy, named "apples" and "bananas".
I am struggling with the separation of responsibilities between the different code bases. Which steps in this process are best still done manually? A lot is being written about this technology, but I cannot find any articles on this particular issue.

This is my own proposal.
Have three git repositories:
my-infrastructure: contains the Terraform files, the Airflow Helm deployment, and the two namespaces including the access roles to those namespaces. CI/CD tracks changes and deploys them on QKS.
apples: code base and the corresponding Helm chart. Its CI/CD can deploy to the apples namespace only.
bananas: code base and the corresponding Helm chart. Its CI/CD can deploy to the bananas namespace only.
Notes:
subdivision of the cluster into namespaces is obvious;
all secrets and authorization tokens for the namespaces can be created via Terraform using the Terraform Kubernetes provider (a sketch follows below the link):
https://www.terraform.io/docs/providers/kubernetes/r/secret.html
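For illustration, a minimal sketch of what that could look like with the Terraform Kubernetes provider; the namespace, secret name and variable are hypothetical:

resource "kubernetes_namespace" "apples" {
  metadata {
    name = "apples"
  }
}

# Deploy token that the apples CI/CD pipeline uses to reach its namespace only.
resource "kubernetes_secret" "apples_deploy_token" {
  metadata {
    name      = "deploy-token"
    namespace = kubernetes_namespace.apples.metadata[0].name
  }
  data = {
    token = var.apples_deploy_token # hypothetical variable supplied by the CI/CD platform
  }
}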

There is an interesting Kubernetes project for this called Cluster API (cluster-api) that lets you create, configure and manage Kubernetes clusters in a declarative fashion, similar to how we manage other resources in Kubernetes itself. It defines new resource kinds such as Cluster and Machine.
For example, you could define a cluster like this:
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: capi-quickstart
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster
    name: capi-quickstart
Of course, you need a starting/bootstrap Kubernetes cluster on which to deploy this resource. The project is still in the prototype stage, so use caution.
Check out the cluster-api repository on Github: https://github.com/kubernetes-sigs/cluster-api

Related

How do I use Crossplane to install Helm charts (with provider-helm) into another cluster?

I'm evaluating Crossplane as our go-to tool for deploying our clients' different solutions and have struggled with one issue:
We want to install Crossplane into one cluster on GCP (which we create manually) and use that Crossplane to provision new clusters on which we can install Helm charts and deploy as usual.
The main problem so far is that we haven't figured out how to tell Crossplane to install the Helm charts into clusters other than its own.
This is what we have tried so far:
The ProviderConfig from the example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: helm-provider
spec:
  credentials:
    source: InjectedIdentity
...which works, but installs everything into the same cluster as Crossplane.
And the other example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      name: cluster-credentials
      namespace: crossplane-system
      key: kubeconfig
...which required a lot of Makefile scripting just to generate a kubeconfig for the new cluster, and even with that kubeconfig we still get a lot of errors. It does begin to create something in the new cluster, but it doesn't work all the way; we get errors like "PodUnschedulable Cannot schedule pods: gvisor".
I have only tried Crossplane for a couple of days, so I'm aware that I might be approaching this from a completely wrong angle, but I do like the promise of Crossplane and its approach compared to Terraform and the like.
So the question is: am I thinking about this completely wrong, or am I missing something obvious?
The second test with the kubeconfig feels quite complicated right now (many steps that have to be done in the correct order).
Thanks
As you've noticed, a ProviderConfig with InjectedIdentity is for the case where provider-helm installs the Helm release into the same cluster.
To deploy to other clusters, provider-helm needs a kubeconfig file for the remote cluster, which has to be provided as a Kubernetes secret and referenced from the ProviderConfig. So, as long as you've provided a proper kubeconfig for an external cluster that is accessible from your Crossplane cluster (a.k.a. the control plane), provider-helm should be able to deploy the release to the remote cluster.
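For illustration, a hedged sketch of how the pieces fit together; the secret contents, release name and chart are hypothetical, and the field names follow the provider-helm examples:

apiVersion: v1
kind: Secret
metadata:
  name: cluster-credentials
  namespace: crossplane-system
type: Opaque
stringData:
  kubeconfig: |
    # contents of the remote cluster's kubeconfig go here
---
# Hypothetical Release that targets the remote cluster via the
# ProviderConfig named "default" from your second example.
apiVersion: helm.crossplane.io/v1beta1
kind: Release
metadata:
  name: my-app
spec:
  providerConfigRef:
    name: default
  forProvider:
    chart:
      name: nginx
      repository: https://charts.bitnami.com/bitnami
    namespace: default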
So, it looks like you're on the right track regarding configuring provider-helm, and since you observed something being deployed to the external cluster, you must have provided a valid kubeconfig, and provider-helm could access and authenticate to the cluster.
The last error you're getting sounds like some incompatibility between your cluster and the release, e.g. the external cluster only allows pods running under gvisor and the application you want to install with provider-helm does not have the corresponding labels.
As a troubleshooting step, you might try installing that Helm chart with exactly the same configuration into the external cluster via the Helm CLI, using the same kubeconfig you built.
Regarding the inconvenience of building the kubeconfig: provider-helm needs some way to access that external Kubernetes cluster, and a kubeconfig is the most common way to provide it. However, if you see an alternative that would make things easier for some common use cases, it could be implemented, and it would be great if you could create a feature request in the repo for it.
Finally, I am wondering how you're creating those external clusters. If it makes sense to create them with Crossplane as well, e.g. GKE with provider-gcp, then you can compose a Helm ProviderConfig together with a GKE Cluster resource, so that creating a new cluster also creates the appropriate secret and ProviderConfig. You can check this as an example: https://github.com/crossplane-contrib/provider-helm/blob/master/examples/in-composition/composition.yaml#L147

How to include AWS EKS with CI/CD?

I am studying CI/CD on AWS (CodePipeline/CodeBuild/CodeDeploy) and found it to be a very good tool for managing a pipeline in the cloud with everything managed (I don't even need to install Jenkins on EC2).
I am now reading about container building and deployment. For the build phase, CodeBuild supports building container images. For the deploy phase, while I could find a CodeDeploy solution for an ECS cluster, it seems there is no direct CodeDeploy solution for EKS (kindly correct me if I am wrong).
May I know if there is a solution to integrate an EKS cluster (i.e. the deploy phase can fetch the Docker image from ECR or Docker Hub and deploy it to EKS)? I have come across some ideas using Lambda functions to trigger the cluster to perform a rolling update of the container image, but I could not find a step-by-step guide for this.
=========================
(Update 17 Sep 2020)
I somehow managed to create a Lambda function that triggers EKS to perform a rolling update of the k8s deployment. Thanks to Prashanna for the source base.
I just want to share the key setup steps in the process.
(1) Update the Lambda execution role to include permission to describe EKS clusters.
Create a policy with EKS describe-cluster access and attach it to the role.
Policy snippet:
...
......
"Action": "eks:Describe*"
...
......
Or you can create an "EKSFullAccess" policy and attach it to the Lambda execution role.
(2) Update the k8s aws-auth ConfigMap, adding the Lambda execution role ARN to the mapRoles section. The corresponding k8s role/group should be one that has permission to update the container images used by the k8s deployment (say system:masters).
You can edit the map with a command like the one below:
kubectl edit -n kube-system configmap/aws-auth
You don't have to add/update another ConfigMap even if your deployment is in another namespace; it will take effect there as well.
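For illustration, a sketch of roughly what that mapRoles entry could look like; the account ID, role name and username are placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/lambda-eks-deploy-role  # hypothetical Lambda execution role
      username: lambda-deployer
      groups:
        - system:masters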
Sample Lambda function call request and response (screenshot omitted).
GitLab provides built-in integration of EKS and deployment with the help of Helm charts. If you plan to use other tools, using an AWS Lambda to update the image is your best bet!
I've added my GitHub project:
Set up a Lambda with the code below and give this Lambda RBAC access in your EKS cluster. Try invoking the Lambda by passing the required information like namespace, deployment, image, etc.
Lambda for Kubernetes image update
The Lambda requires the eks:DescribeCluster permission.
The Lambda role must be granted at least an image-update RBAC role in the EKS cluster's RBAC setup (see the sketch below).
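As an illustration of such an RBAC setup, a minimal sketch of a namespaced Role and RoleBinding that would let the mapped Lambda identity patch deployment images; the names, namespace and user are hypothetical:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-image-updater
  namespace: default
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: lambda-deployment-image-updater
  namespace: default
subjects:
  - kind: User
    name: lambda-deployer  # the username mapped in aws-auth
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-image-updater
  apiGroup: rbac.authorization.k8s.io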
Since there's no built-in CI/CD for EKS at the moment, this is going to be a showcase of success/failure stories of 3rd-party CI/CDs in EKS :) My take: https://github.com/fluxcd/flux
Pros:
Quick to set up initially (until you get into multiple teams/environments)
Tracks and deploys image releases out of the box
Possibility to split what to auto-deploy in dev/prod using regex. E.g. all versions to dev, only minor to prod. Or separate tag prefixes for dev/prod.
All state is in git - a good practice to start with
Cons:
Gets complex as the pipeline expands further, e.g. blue-green, canary, auto-rollbacks, etc.
The dashboard is proprietary (a Weaveworks product)
Not for on-demand parametrized job runs like traditional CIs.
Setup:
Set up an automated image build (it looks like you've already figured that out)
Install Flux and helm-operator into the cluster and point them at your "gitops repo"
For each app, create a HelmRelease object that describes the image tag pattern to track (see the sketch after this list)
Done. A newly published image tag that matches the pattern is auto-deployed to the cluster, and the new version is committed back to the gitops repo.
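For illustration, a hedged sketch of such a HelmRelease from the Flux v1 / helm-operator era; the app, git repo and image are hypothetical, and the exact annotation keys vary between Flux versions, so check the docs for the version you install:

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: apples
  namespace: apples
  annotations:
    fluxcd.io/automated: "true"
    filter.fluxcd.io/chart-image: semver:~1.0  # only auto-deploy 1.x tags
spec:
  releaseName: apples
  chart:
    git: https://github.com/example/gitops-repo  # hypothetical gitops repo
    path: charts/apples
    ref: master
  values:
    image:
      repository: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/apples  # hypothetical ECR image
      tag: 1.0.0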

Common config in Kubernetes ConfigMap

Kubernetes already provides a way to manage configuration with ConfigMap.
However, I have a question/problem here.
If I have multiple applications with different needs deployed in Kubernetes, all these deployments might share and access some common config variables. Is it possible to define such common config variables in one place and have the ConfigMaps share them?
There are two ways to do that.
Kustomize - customization of Kubernetes YAML configurations (developed as a Kubernetes SIG and since integrated into the kubectl command line). Currently, though, it isn't as mature as a Helm chart.
https://github.com/kubernetes-sigs/kustomize
Helm chart - the Kubernetes package manager. Its values.yaml can define the values for the same configuration files (in your case, ConfigMaps) using variables.
https://helm.sh/
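As an illustration of the Kustomize route, a minimal sketch in which a base defines the shared ConfigMap once and each application overlay pulls it in; the file names, keys and namespaces are hypothetical:

# base/common-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: common-config
data:
  LOG_LEVEL: info
  REGION: eu-west-1

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - common-configmap.yaml

# overlays/apples/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: apples
resources:
  - ../../base
  - deployment.yaml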

Using Terraform to deploy Kubernetes apps

I know that using Terraform to deploy your infrastructure and Kubernetes cluster is the way to go. However, does it make any sense to also use Terraform to deploy applications on the Kubernetes cluster? Is this also the way to go?
Thank you
Though it's not devoid of its own complexities, a better pipeline is the Jenkins + Helm + Spinnaker combo.
Jenkins - CI
Helm - templating and chart build
Spinnaker - deploy
Pros:
Spinnaker is an excellent tool for deployment to Kubernetes.
It can be made aware of multiple environments, so cloud pipelines are easier to build.
It natively integrates with most cloud providers like AWS, Azure, PCF, etc.
Cons:
On the flip side, it's a somewhat heavy tool, as it is composed of a bunch of microservices, and its configuration can get under your skin.
As David Maze mentioned, you can combine Terraform with Helm.
You can find more information about the Terraform Helm provider here
and here.
As per the Terraform documentation:
"install_tiller" - (Optional) Install Tiller if it is not already installed. Defaults to true.
You can also use Ansible with the Helm package manager; see here:
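For illustration, a hedged sketch of what such an Ansible task could look like using the community.kubernetes.helm module (now kubernetes.core.helm); the chart, repository URL and namespace are illustrative assumptions:

# Hypothetical playbook task: install the Airflow chart via Ansible's Helm module.
- name: Deploy Airflow with Helm
  community.kubernetes.helm:
    name: airflow
    chart_ref: airflow
    chart_repo_url: https://airflow.apache.org  # illustrative chart repository
    release_namespace: airflow
    create_namespace: true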
Please also take a look at the other automated tools described briefly here and here, like the Jenkins setup mentioned by Shirine.
There are different solutions. Depending on your needs, you should consider factors like paid vs. free solutions, whether they target individual developers or teams, the preferred platform, and other factors like security, transparency, collaboration and availability.
Hope this helps.
I maintain the Kustomization provider as an alternative integration of Kubernetes manifests into Terraform.
It has three main advantages over alternative options:
Every K8s resource is tracked individually in the Terraform state. This gives you a preview of changes in the plan phase, and also enables destroy-and-recreate plans in case of changes to immutable fields.
The provider allows you to use native Kubernetes YAML unchanged. No need to translate everything into HCL like with the Kubernetes provider.
Being based on Kustomize, it allows you to use Kustomize's overlay approach. But by defining the overlay in Terraform, you can use Terraform variables, module outputs and so on, to patch the Kubernetes resources.
You can of course use the provider's data sources and resources directly, but the most convenient way is probably via this module:
module "example_manifests" {
  source  = "kbst.xyz/catalog/custom-manifests/kustomization"
  version = "0.1.0"

  configuration_base_key = "default"
  configuration = {
    default = {
      resources = [
        # list of paths to K8s YAML files
        "${path.root}/path/to/a/kubernetes/resource.yaml"
      ]
    }
  }
}

Kubernetes: Is it possible to have the exact same deployment descriptor for all environments including local?

I am new to Kubernetes and Minikube. Both look like amazing tools, but I wonder if there is any way to have a single .yml file to deploy my services/deployments in all the environments, including the local dev env...
The first limitation I see is related to service discovery: I would like to have my services behind a load balancer in the cloud, but in the development environment I can't, since Minikube doesn't support it, so I have to fall back to NodePort.
Can you provide me with some info about that matter?
There are other common differences between environments: names; credentials for any database or other permissioned resources; allocation of RAM/CPU; replica counts. There are also limitations that Minikube has as a runtime compared to production k8s.
So, though one can use the same single YAML file in different environments, typically that's not what one wants.
What one usually wants is to have the general architectural shape of the solution be the same across environments, have differences extracted into minimalist configuration, then rendered using templates into environment-specific files to be used at deployment time.
The tool most commonly used to support this kind of approach is helm:
https://helm.sh/
Helm is basically a glorified templating wrapper around kubectl (though it has an in-cluster component). With Helm, you can use the same base set of resource files, extract the environment differences into config files, and then use Helm to deploy as appropriate to each environment.
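For illustration, a minimal sketch of that pattern; the chart layout and value names are hypothetical. The Service type is templated, and each environment supplies its own values file.

# templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: 80
  selector:
    app: {{ .Release.Name }}

# values-minikube.yaml
service:
  type: NodePort
replicaCount: 1   # consumed by a deployment template (not shown)

# values-cloud.yaml
service:
  type: LoadBalancer
replicaCount: 3

Locally you would then run something like helm install my-app ./chart -f values-minikube.yaml, and in the cloud the same command with -f values-cloud.yaml.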
If I understood your question properly, you would like to spin up your infrastructure using one command and one file.
It is possible; however, it depends on your services. If some pods require another one to be running before they can start, this can get tricky. Technically, though, you can put all your manifest files into one bundle. You can then create all the deployments, services, etc. with kubectl apply -f bundle.yml
To create this bundle, you need to separate every manifest (deployment, service, configmap, etc.) with triple dashes (---).
Example:
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-1
---
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-2