I know that using Terraform to deploy your infrastructure and Kubernetes cluster is the way to go. However, does it make any sense to also use Terraform to deploy applications on the Kubernetes cluster? Is this also the way to go?
Thank you
Though it's not devoid of its complexities, a better pipeline is the Jenkins + Helm + Spinnaker combo.
Jenkins - CI
Helm - templating and chart build
Spinnaker - deploy
Pros:
Spinnaker is an excellent tool for deployment to Kubernetes.
It can be made aware of multiple environments, so cloud pipelines are easier to build.
Natively integrates with most cloud providers like AWS, Azure, PCF, etc.
Cons:
On the flip side, it's a fairly heavy tool, as it is composed of a bunch of microservices, and configuration can get under your skin.
As David Maze mentioned, you can combine Terraform with Helm.
You can find more information about the Terraform Helm provider here
and here.
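For illustration, a minimal sketch of deploying a chart from Terraform with the Helm provider might look like this (the chart name, repository and value below are placeholders, and install_tiller only applies to the older, Helm v2-era provider versions, as the documentation note below says):

provider "helm" {
  # Only relevant on Helm v2-era provider versions
  install_tiller = true
}

# Deploy an application chart from Terraform
resource "helm_release" "example" {
  name       = "my-app"
  repository = "https://charts.example.com"
  chart      = "my-app"

  set {
    name  = "image.tag"
    value = "1.0.0"
  }
}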
As per the Terraform documentation:
"install_tiller" - (Optional) Install Tiller if it is not already installed. Defaults to true.
You can also use Ansible with the Helm package manager; see here:
Please also take a look at other automated tools described briefly here and here, like Jenkins, mentioned by Shirine.
There are different solutions. Depending on your needs, you should consider factors like paid vs. free solutions, individual developers vs. teams, your preferred platform, and other factors like security, transparency, collaboration, and availability.
Hope this helps.
I maintain the Kustomization provider as an alternative integration of Kubernetes manifests into Terraform.
It has three main advantages over alternative options:
Every K8s resource is tracked individually in the Terraform state. This gives you a preview of changes in the plan phase. And also enables destroy-and-recreate plans in case of changes to immutable fields.
The provider allows you to use native Kubernetes YAML unchanged. No need to translate everything into HCL like with the Kubernetes provider.
Being based on Kustomize, it allows you to use Kustomize's overlay approach. But by defining the overlay in Terraform, you can use Terraform variables, module outputs and so on, to patch the Kubernetes resources.
You can of course use the provider's data sources and resources directly, but the most convenient way is probably via this module:
module "example_manifests" {
source = "kbst.xyz/catalog/custom-manifests/kustomization"
version = "0.1.0"
configuration_base_key = "default"
configuration = {
default = {
resources = [
# list of paths to K8s YAML files
"${path.root}/path/to/a/kubernetes/resource.yaml"
]
}
}
}
I am trying to utilize the Rancher Terraform provider to create a new RKE cluster and then use the Kubernetes and Helm Terraform providers to create/deploy resources to the created cluster. I'm using this https://registry.terraform.io/providers/rancher/rancher2/latest/docs/resources/cluster_v2#kube_config attribute to create a local file with the new cluster's kube config.
The Helm and Kubernetes providers need the kube config in the provider configuration: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs. Is there any way I can get the provider configuration to wait for the local file to be created?
Generally speaking, Terraform always needs to evaluate provider configurations during the planning step because providers are allowed to rely on those settings in order to create the plan, and so it typically isn't possible to have a provider configuration refer to something created only during the apply step.
As a way to support bootstrapping in a situation like this though, this is one situation where it can be reasonable to use the -target=... option to terraform apply, to plan and apply only sufficient actions to create the Rancher cluster first, and then follow up with a normal plan and apply to complete everything else:
terraform apply -target=rancher2_cluster_v2.example
terraform apply
This two-step process is needed only for situations where the kube_config attribute isn't known yet. As long as this resource type has convergent behavior, you should be able to use just terraform apply as normal unless you in future make a change that requires replacing the cluster.
(This is a general answer about provider configurations referring to resource attributes. I'm not familiar with Rancher in particular, so there might be some specifics about that particular resource type which I'm not mentioning here.)
I found a sort of workaround solution. I output the rancher2_cluster.cluster.kube_config object into a variable, then referenced that variable in my Kubernetes module. Instead of using the kube_config attribute in the provider configuration, I used the token and host attributes and used yamldecode to parse the credentials directly from the kube_config variable.
provider "kubernetes" {
  token = yamldecode(var.kube_config)["users"][0]["user"]["token"]
  host  = yamldecode(var.kube_config)["clusters"][0]["cluster"]["server"]
}
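If you also need the Helm provider pointed at the same cluster, the same yamldecode approach should work in its nested kubernetes block. This is a sketch assuming the same var.kube_config; depending on your kubeconfig you may also need the cluster CA:

provider "helm" {
  kubernetes {
    host  = yamldecode(var.kube_config)["clusters"][0]["cluster"]["server"]
    token = yamldecode(var.kube_config)["users"][0]["user"]["token"]
    # If your kubeconfig carries certificate-authority-data, you may also need:
    # cluster_ca_certificate = base64decode(yamldecode(var.kube_config)["clusters"][0]["cluster"]["certificate-authority-data"])
  }
}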
I would suggest splitting your functionality into 2 layers:
Run the first layer to generate the kube_config file.
Run the second layer that will consume this file.
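In practice that can be two separate root modules applied one after the other, roughly like this (the directory names are placeholders):

# Layer 1: creates the Rancher cluster and writes the kube_config file to disk
cd cluster && terraform apply

# Layer 2: its providers read the generated kube_config file, so it always plans against an existing cluster
cd ../workloads && terraform apply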
I'm quite new to Helm. I'm learning to automate microservices deployment using Helm and Azure Kubernetes Service. I need to deploy multiple microservices (payment, order) into different environments (dev, QA).
As per my analysis, I hope we can achieve this with the following steps:
Create separate clusters for different environments
Create multiple variable files based on environments.
We can pass only the cluster name and variable file for our deployment, so it will deploy according to our inputs.
I'm trying to implement the same, but I'm not sure how to configure the above scenarios with Helm in real time.
Can we achieve this completely using Helm alone, or should we use Ansible for orchestration along with Helm?
Could anyone please advise me on this and suggest any other best practices, if there are any?
Reference:
https://codefresh.io/helm-tutorial/helm-deployment-environments/
Thanks in advance :)
Helm cannot control which cluster it's deploying to; that is decided by the kubeconfig file on the machine used to invoke the helm command.
If your kubeconfig file is configured to access multiple clusters, you can just set the right context before each helm install command, and it will target the cluster of your choice, for example as shown below.
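A rough sketch of such a pipeline step (the context names, release name, chart path and values files are placeholders; you could equally pass helm's --kube-context flag instead of switching contexts):

# Deploy the payment service to the dev cluster with dev values
kubectl config use-context aks-dev
helm upgrade --install payment ./charts/payment -f values/dev.yaml

# Deploy the same chart to the QA cluster with QA values
kubectl config use-context aks-qa
helm upgrade --install payment ./charts/payment -f values/qa.yaml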
I am studying CI/CD on AWS (CodePipeline/CodeBuild/CodeDeploy) and found it to be a very good tool for managing a pipeline on the cloud with everything managed (you don't even need to install Jenkins on EC2).
I am now reading about container building and deployment. For the build phase, CodeBuild supports building container images. For the deploy phase, while I could find a CodeDeploy solution for an ECS cluster, it seems there is no direct CodeDeploy solution for EKS (kindly correct me if I am wrong).
May I know if there is a solution to integrate an EKS cluster (i.e. the deploy phase can fetch the Docker image from ECR or Docker Hub and deploy to EKS)? I have come across some ideas using Lambda functions to trigger the cluster to perform a rolling update of the container image, but I could not find a step-by-step guide on this.
=========================
(Update 17 Sep 2020)
Somehow I managed to create a Lambda function that triggers EKS to perform a rolling update of the k8s deployment. Thanks Prashanna for the source base.
Just want to share the key setups in the process.
(1) Update the lambda execution role to include permission to describe EKS clusters
Create a policy with describe EKS cluster access, and attach to the role:
Policy snippet:
...
......
"Action": "eks:Describe*"
...
......
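Expanded into a complete (if minimal) policy document, that snippet would look roughly like the following; scoping Resource down to your cluster's ARN instead of "*" is up to you:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:Describe*",
      "Resource": "*"
    }
  ]
}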
Or you can create an "EKSFullAccess" policy and attach it to the Lambda execution role.
(2) Update the k8s aws-auth ConfigMap and add the Lambda execution role ARN to the mapRoles section. The corresponding Kubernetes group should be one that has permission to update the container images used by the k8s deployment (say, system:masters).
You can edit the map with a command like the one below:
kubectl edit -n kube-system configmap/aws-auth
You don't have to add/update another ConfigMap even if your deployment is in another namespace. It will take effect as well.
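For reference, the mapRoles entry would look roughly like this (the account ID, role name and username are placeholders; system:masters is the blunt option mentioned above, and a narrower RBAC role is safer):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/my-lambda-execution-role
      username: lambda-deployer
      groups:
        - system:masters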
Sample lambda function call request and response:
GitLab provides built-in integration with EKS and deployment with the help of Helm charts. If you plan to use other tools, using an AWS Lambda to update the image is the best bet!
I've added my GitHub project.
Set up a Lambda with the code below and give it RBAC access in your EKS cluster. Try invoking the Lambda by passing the required information like namespace, deployment, image, etc.
Lambda for Kubernetes image update
The Lambda requires the eks:DescribeCluster permission.
The Lambda role must be granted at least an RBAC role in the EKS cluster that can update the image (see the cluster's RBAC role setup).
Since there's no built-in CI/CD for EKS at the moment, this is going to be a showcase of success/failure stories of third-party CI/CDs in EKS :) My take: https://github.com/fluxcd/flux
Pros:
Quick to set up initially (until you get into multiple teams/environments)
Tracks and deploys image releases out of the box
Possibility to split what to auto-deploy in dev/prod using regex. E.g. all versions to dev, only minor to prod. Or separate tag prefixes for dev/prod.
All state is in git - a good practice to start with
Cons:
Getting complex for further pipeline expansion, e.g. blue-green, canary, auto-rollbacks, etc.
The dashboard is proprietary (a Weaveworks product)
Not for on-demand parametrized job runs like traditional CIs.
Setup:
Set up an automated image build (looks like you've already figured that out)
Set up Flux and helm-operator in the cluster, and point them at your "gitops repo"
For each app, create a HelmRelease object that describes a regex of image tags to track (see the sketch below)
Done. A newly published image tag that falls into regex will be auto-deployed to the cluster and the new version is committed to a gitops repo.
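As a rough illustration, a HelmRelease for the Flux v1 / helm-operator setup might look like this (names, repository URLs and the semver filter are placeholders; check the Flux docs for the exact annotation keys for your version):

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: my-app
  namespace: default
  annotations:
    fluxcd.io/automated: "true"
    # only auto-deploy image tags matching this filter
    filter.fluxcd.io/chart-image: semver:~1.0
spec:
  releaseName: my-app
  chart:
    repository: https://charts.example.com
    name: my-app
    version: 1.0.0
  values:
    image:
      repository: registry.example.com/my-app
      tag: 1.0.0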
I'm working on a continuous deployment routine for a Kubernetes application: every time I push a git tag, a GitHub Action is activated which calls kubectl apply -f kubernetes to apply a bunch of YAML Kubernetes definitions.
Let's say I add YAML for a new service and deploy it: kubectl will add it.
But then later on, I simply delete the YAML for that service and redeploy: kubectl will NOT delete it.
Is there any way that kubectl can recognize that the service YAML is missing and respond by deleting the service automatically during continuous deployment? In my local test, the service remains floating around.
Does the developer have to know to connect kubectl to the production cluster and delete the service manually, in addition to deleting the YAML definition?
Is there a mechanism for Kubernetes to "know what's missing"?
You need to use a CI/CD tool for Kubernetes to achieve what you need. As mentioned by Sithroo, Helm is a very good option.
Helm lets you fetch, deploy and manage the lifecycle of applications, both 3rd-party products and your own.
No more maintaining random groups of YAML files (or very long ones) describing pods, replica sets, services, RBAC settings, etc. With Helm, there is a structure and a convention for a software package that defines a layer of YAML templates and another layer that changes the templates, called values. Values are injected into templates, thus allowing a separation of configuration, and defines where changes are allowed. This whole package is called a Helm Chart.
Essentially you create structured application packages that contain everything they need to run on a Kubernetes cluster, including dependencies the application requires. Source
Before you start, I recommend these articles explaining its quirks and features:
The missing CI/CD Kubernetes component: Helm package manager
Continuous Integration & Delivery (CI/CD) for Kubernetes Using CircleCI & Helm
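With a chart in place, the kubectl apply step in the GitHub Action from the question would be replaced by something along these lines (release name, chart path and namespace are placeholders). Because Helm tracks every resource it installed as part of the release, a template you delete from the chart is removed from the cluster on the next upgrade:

# Upgrade the release, or install it on the first run, from the chart in the repo
helm upgrade --install my-app ./chart --namespace production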
There's no such way. You can deploy resources from a YAML file from anywhere, as long as you can reach the cluster and configure your kubeconfig, so Kubernetes will not know how to respond to a file deletion. If you still want to do this, you can write a program (e.g. in Go) which checks the availability of files in one place and deletes the corresponding resource whenever a file gets deleted.
One Kubernetes-native way is to use an operator: whenever there is any change in your files, you update the CRD used to deploy the resources via the operator.
Before deleting the YAML file, you can run kubectl delete -f file.yaml; this way, all the resources created by this file will be deleted.
However, what you are looking for is achieving the desired state using k8s. You can do this by using tools like Helmfile.
Helmfile allows you to specify the resources you want all in one file, and it will achieve the desired state every time you run helmfile apply, for example as sketched below.
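A minimal helmfile.yaml sketch (release names, chart paths and namespaces are placeholders, not taken from the question):

releases:
  - name: my-service
    namespace: default
    chart: ./charts/my-service
    values:
      - values.yaml
  - name: other-service
    namespace: default
    chart: ./charts/other-service

Running helmfile apply installs or upgrades everything listed here; to remove a release, the usual pattern is to mark it installed: false and apply again rather than just deleting the entry.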
I am using the Kubernetes Provider to describe services/pods in Terraform.
It can get confusing using the HashiCorp Configuration Language to define kubernetes_pod or kubernetes_service resources, because the Kubernetes documentation describes everything in YAML, which means you need to translate it into HCL.
Is it possible to define pods as YAML and use them with kubernetes_pod and kubernetes_service resources as templates?
While Terraform normally uses HCL, HCL is a superset of JSON (much like YAML itself), so Terraform can also read JSON.
One possible option would be to take the YAML examples you already have and convert them into JSON and then use Terraform on those.
Unfortunately, that's unlikely to work, because the keys will differ from what Terraform expects, so you'd need to write something to do some basic translation of the input YAML into Terraform resource JSON. At this point, it'd probably be worth just adding HCL output to the conversion so the resulting Terraform config is more readable, if you ever intend to keep the Terraform config around instead of just converting and applying it in one shot.
The benefit of doing things this way is that you have a reusable Kubernetes config that could be run using kubectl or other tools, while also getting the power of Terraform's lifecycle management: being able to plan changes and integrate with non-Kubernetes parts of your infrastructure (such as setting up the instances to run the Kubernetes cluster on).
I've not used it much, but I believe Kops allows you to keep pod/service config in typical Kubernetes YAML files while using Terraform to manage the configuration, and it even lets you output the Terraform configuration so you can run it outside of Kops itself.
The hashicorp/kubernetes provider does not support raw YAML/JSON, and they have no intention of implementing it.
The possible solutions are:
K2tf, a tool for converting Kubernetes RAW YAML manifests into Terraform HCL for the Kubernetes provider.
Use an alternative community Kubernetes provider, such as gavinbunny/kubectl, which does support raw YAML and can track each resource and the attributes in Terraform state, unlike the kubernetes-alpha provider.
Another solution is to use the hashicorp/kubernetes-alpha provider: you can pass in either a Terraform object or a raw YAML manifest converted into a TF object for use in the provider resource. The downside is that the attributes are not tracked as individual objects, and thus a change will cause the entire resource to be tainted.
Using the kubectl provider.
The core of this provider is the kubectl_manifest resource, which allows free-form YAML to be processed and applied against Kubernetes. This YAML object is then tracked and handles creation, updates and deletion seamlessly, including drift detection. This provider is ideal if you want to track the manifest in Terraform:
resource "kubectl_manifest" "test" {
yaml_body = file("path/to/manifest.yaml")
}
Using the kubernetes-alpha provider
The kubernetes_manifest resource represents one Kubernetes resource as described in the manifest attribute. The manifest value is the HCL transcription of a regular Kubernetes YAML manifest. To transcribe an existing manifest from YAML to HCL, use the Terraform built-in function yamldecode(), or use the tfk8s tool to convert YAML into manifest attributes for the kubernetes-alpha provider's manifest resource.
Example using yamldecode:
resource "kubernetes_manifest" "service" {
provider = kubernetes-alpha
manifest = yamldecode(file("path/to/manifest.yaml"))
}
Why doesn't the kubernetes provider support RAW YAML?
Supporting YAML/JSON in hashicorp/kubernetes was considered before (the very first proposal of the K8s provider was exactly that) and during the initial implementation of this provider, and we decided not to do it.
The reason is that you can't accurately track resources created from raw YAML as Terraform objects.
From Terraform's developer perspective it is very tricky to get around the way the K8s API works, where you send an array [a, b, c] to the Create API and then you get back [a, b, c, d]. This happens for example with pods that get some secret volumes attached automatically, but it happens with most other resources I had the chance to play with. The whitelisting/blacklisting is the tricky part.
You may also be interested in the following project, which allows you to convert YAML files to Terraform's HCL.
https://github.com/sl1pm4t/k2tf
Description:
A tool for converting Kubernetes API objects (in YAML format) into HashiCorp's Terraform configuration language.
The converted .tf files are suitable for use with the Terraform Kubernetes provider.
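Typical usage is along these lines (a sketch; my-deployment is a placeholder, and the exact flags are documented in the project README):

# Convert a manifest, here pulled live from the cluster, into Terraform HCL
kubectl get deployment my-deployment -o yaml | k2tf > my-deployment.tf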