Can you define Kubernetes Services / Pods using YAML in Terraform?

I am using the Kubernetes Provider to describe services/pods in Terraform.
It can get confusing to define kubernetes_pod or kubernetes_service resources in the HashiCorp Configuration Language, because the Kubernetes documentation describes everything in YAML, which means you need to translate it into HCL.
Is it possible to define pods as YAML and use them with kubernetes_pod and kubernetes_service resources as templates?

While Terraform normally uses HCL, HCL is a superset of JSON (much like YAML itself), so Terraform can also read JSON.
One possible option would be to take the YAML examples you already have, convert them into JSON, and then use Terraform on those.
Unfortunately, that's unlikely to work as-is, because the keys Terraform expects differ from the Kubernetes field names, so you'd need to write something to do a basic translation of the input YAML into Terraform resource JSON. At that point, it'd probably be worth adding HCL output to the conversion so the resulting Terraform config is more readable, if you intend to keep the Terraform config around rather than converting and applying it in one shot.
The benefit of doing things this way is that you'd have a reusable Kubernetes config that could be run with kubectl or other tools, while still getting the power of Terraform's lifecycle management: being able to plan changes and to integrate with the non-Kubernetes parts of your infrastructure (such as setting up the instances the Kubernetes cluster runs on).
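For illustration, a hand-converted Pod might end up looking something like this in Terraform's JSON syntax. Note that the keys have to follow the kubernetes provider's schema (for example a singular container block) rather than the Kubernetes YAML field names, which is exactly the translation step described above; the resource name and image here are placeholders:
{
  "resource": {
    "kubernetes_pod": {
      "example": {
        "metadata": [{ "name": "nginx-example" }],
        "spec": [{
          "container": [{
            "name": "nginx",
            "image": "nginx:1.21"
          }]
        }]
      }
    }
  }
}
Saved as something like pod.tf.json, Terraform reads this alongside any .tf files in the same directory.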
I've not used it much, but I believe Kops lets you keep pod/service config in typical Kubernetes YAML files while using Terraform to manage the configuration; it can even output the Terraform configuration so you can run it outside of Kops itself.

The hashicorp/kubernetes provider does not support raw YAML/JSON, and they have no intention of implementing it.
The possible solutions are:
K2tf, a tool for converting raw Kubernetes YAML manifests into Terraform HCL for the Kubernetes provider.
Use an alternative community Kubernetes provider, such as gavinbunny/kubectl, which does support raw YAML and can track each resource and its attributes in Terraform state, unlike the kubernetes-alpha provider.
Another solution is to use the hashicorp/kubernetes-alpha provider: you can pass in either a Terraform object or a raw YAML manifest converted into a TF object for use in the provider's resource. The downside is that the attributes are not tracked as individual objects, so a change will cause the entire resource to be tainted.
Using the kubectl provider.
The core of this provider is the kubectl_manifest resource, allowing free-form YAML to be processed and applied against Kubernetes. The YAML object is then tracked, and creation, updates and deletion are handled seamlessly, including drift detection. This provider is ideal if you want to track the manifest in Terraform:
resource "kubectl_manifest" "test" {
yaml_body = file("path/to/manifest.yaml")
}
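If you go with that community provider, it also needs to be declared in your configuration's required_providers block (the version constraint here is only an example):
terraform {
  required_providers {
    kubectl = {
      # community provider mentioned above; pin to whatever version you have tested
      source  = "gavinbunny/kubectl"
      version = ">= 1.7.0"
    }
  }
}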
Using the kubernetes-alpha provider
The kubernetes_manifest resource represents one Kubernetes resource as described in the manifest attribute. The manifest value is the HCL transcription of a regular Kubernetes YAML manifest. To transcribe an existing manifest from YAML to HCL, use the Terraform built-in function yamldecode(), or use the tfk8s tool to convert YAML into manifest attributes for the kubernetes-alpha provider's manifest resource.
Example using yamldecode:
resource "kubernetes_manifest" "service" {
provider = kubernetes-alpha
manifest = yamldecode(file("path/to/manifest.yaml"))
}
Why doesn't the kubernetes provider support raw YAML?
Supporting YAML/JSON in hashicorp/kubernetes was considered before (the very first proposal of the K8s provider was exactly that) and again during the initial implementation of this provider, and we decided not to do it.
The reason is that you can't accurately track resources created from raw YAML as Terraform objects.
From Terraform's developer perspective it is very tricky to get around the way the K8s API works, where you send an array [a, b, c] to the Create API and then Get back [a, b, c, d]. This happens for example with pods that get some secret volumes attached automatically, but it happens with most other resources I had the chance to play with. The whitelisting/blacklisting is the tricky part.

You may also be interested in the following project, which allows you to convert YAML files to Terraform's HCL.
https://github.com/sl1pm4t/k2tf
Description:
A tool for converting Kubernetes API Objects (in YAML format) into HashiCorp's Terraform configuration language.
The converted .tf files are suitable for use with the Terraform Kubernetes Provider.

Related

Get all resources created by a helm release using k8s rest API

Is there a way I can get a list of all helm releases, and then all resources that have been created by this release, using the Kubernetes REST API?
I mean something similar to helm and kubectl commands
helm list
kubectl get all --all-namespaces -l=release-name
but using REST - https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/
Helm is built on top of the Kubernetes API layer, in that it creates objects like Deployments and Services and stores its state in Secrets; but a Helm release is not itself a Kubernetes object and you can't directly use the Kubernetes API to access it.
If your application is in Go, then Helm 3 includes a Go SDK, which essentially exposes most of the helm binary as Go-native library calls. This isn't a network-visible API, though, and if your application is in any other language you won't be able to integrate with it.
If instead it's more important to be able to manipulate your application using Kubernetes's REST API than to be using Helm proper, one alternative is to write a program (a Kubernetes controller) that interacts with the Kubernetes API, and have it be driven by a custom resource, a Kubernetes object that would include your application-specific configuration. This pair is commonly called an operator. Much more so than a Helm chart, though, this involves actually writing code and not just dropping in bits of Kubernetes YAML. (And, much more so than a Helm chart, you can unit-test more complex logic using your host language's native test tools.)
But in short, no, unless you can use the Helm Go SDK then there's not a good way to programmatically interact with a Helm chart beyond shelling out to the helm command.

How to make Terraform provider dependent on a resource being created

I am trying to utilize Rancher Terraform provider to create a new RKE cluster and then use the Kubernetes and Helm Terraform providers to create/deploy resources to the created cluster. I'm using this https://registry.terraform.io/providers/rancher/rancher2/latest/docs/resources/cluster_v2#kube_config attribute to create a local file with the new cluster's kube config.
The Helm and Kubernetes providers need the kube config in the provider configuration: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs. Is there any way I can get the provider configuration to wait for the local file being created?
Generally speaking, Terraform always needs to evaluate provider configurations during the planning step because providers are allowed to rely on those settings in order to create the plan, and so it typically isn't possible to have a provider configuration refer to something created only during the apply step.
As a way to support bootstrapping in a situation like this though, this is one situation where it can be reasonable to use the -target=... option to terraform apply, to plan and apply only sufficient actions to create the Rancher cluster first, and then follow up with a normal plan and apply to complete everything else:
terraform apply -target=rancher2_cluster_v2.example
terraform apply
This two-step process is needed only for situations where the kube_config attribute isn't known yet. As long as this resource type has convergent behavior, you should be able to use just terraform apply as normal unless you in future make a change that requires replacing the cluster.
(This is a general answer about provider configurations referring to resource attributes. I'm not familiar with Rancher in particular, so there might be some specifics about that particular resource type that I'm not mentioning here.)
I found a sort of workaround solution. I output the rancher2_cluster.cluster.kube_config object into a variable, then referenced that variable in my Kubernetes module. Instead of using the kube_config attribute in the provider configuration, I used the token and host attributes and used yamldecode to parse the credentials directly from the kube_config variable.
provider "kubernetes" {
token = "${yamldecode(var.kube_config)["users"][0]["user"]["token"]}"
host = "${yamldecode(var.kube_config)["clusters"][0]["cluster"]["server"]}"
}
I suggest splitting your functionality into two layers (a sketch follows below):
Run the first layer to generate the kube_config file.
Run the second layer that consumes this file.
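A rough sketch of that two-layer split, assuming the rancher2_cluster_v2 resource from the question (resource names and file paths are illustrative):
# Layer 1: create the cluster and persist its kube config to disk
resource "local_file" "kube_config" {
  content  = rancher2_cluster_v2.example.kube_config
  filename = "${path.root}/kube_config.yaml"
}

# Layer 2 (a separate Terraform configuration, applied after layer 1):
# point the Kubernetes provider at the generated file
provider "kubernetes" {
  config_path = "../layer1/kube_config.yaml"
}
Because the second layer is only planned and applied after the first has produced the file, its provider configuration never depends on a value that is unknown at plan time.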

What are YAML config files referred to as in the Kubernetes ecosystem?

Declarative definitions for resources in a Kubernetes cluster, such as Deployments, Pods, Services etc. What are they referred to as in the Kubernetes ecosystem?
Possibilities i can think of:
Specifications (specs)
Objects
Object configurations
Templates
Is there a consensus standard?
Background
I'm writing a small CI tool that deploys single or multiple k8s YAML files. I can't think of what to name these in the docs and actual code.
The YAML form is generally called a manifest. In a Helm chart they are templates for manifests (or more often just "templates"). When you send them to the API, the manifest is parsed and becomes an API object. Most types/kinds (you can use either term) have a sub-struct called spec (e.g. DeploymentSpec) that contains the declarative specification of whatever that type is for. However, that is not required, and some core types (ConfigMap, Secret) do not follow that pattern.

Deleting kubernetes yaml: how to prevent old objects from floating around?

I'm working on a continuous deployment routine for a Kubernetes application: every time I push a git tag, a GitHub Action is activated which calls kubectl apply -f kubernetes to apply a bunch of YAML Kubernetes definitions.
Let's say I add YAML for a new service and deploy it -- kubectl will add it.
But then later on, I simply delete the YAML for that service and redeploy -- kubectl will NOT delete it.
Is there any way that kubectl can recognize that the service YAML is missing and respond by deleting the service automatically during continuous deployment? In my local test, the service remains floating around.
Does the developer have to know to connect kubectl to the production cluster and delete the service manually, in addition to deleting the YAML definition?
Is there a mechanism for Kubernetes to "know what's missing"?
You need to use a CI/CD tool for Kubernetes to achieve what you need. As mentioned by Sithroo, Helm is a very good option.
Helm lets you fetch, deploy and manage the lifecycle of applications, both 3rd party products and your own.
No more maintaining random groups of YAML files (or very long ones) describing pods, replica sets, services, RBAC settings, etc. With helm, there is a structure and a convention for a software package that defines a layer of YAML templates and another layer that changes the templates called values. Values are injected into templates, thus allowing a separation of configuration, and defines where changes are allowed. This whole package is called a Helm Chart.
Essentially you create structured application packages that contain everything they need to run on a Kubernetes cluster; including dependencies the application requires. Source
Before you start, I recommend these articles explaining its quirks and features.
The missing CI/CD Kubernetes component: Helm package manager
Continuous Integration & Delivery (CI/CD) for Kubernetes Using CircleCI & Helm
There's no such built-in mechanism. You can deploy resources from a YAML file from anywhere, as long as you can reach the cluster and have a kubeconfig, so Kubernetes has no way of knowing how to respond to a file deletion. If you still want to do this, you can write a program (e.g. in Go) that checks the availability of files in one place and deletes the corresponding resource whenever a file gets deleted.
One way to do this within Kubernetes is with an operator: whenever there is a change in your files, you update the CRD used to deploy resources via the operator.
Before deleting the YAML file, you can run kubectl delete -f file.yaml; this way all the resources created by this file will be deleted.
However, what you are looking for is achieving the desired state using k8s. You can do this by using tools like Helmfile.
Helmfile allows you to specify the resources you want all in one file, and it will achieve the desired state every time you run helmfile apply.

Using Terraform to deploy Kubernetes apps

I know that using Terraform to deploy your infrastructure and Kubernetes cluster is the way to go. However, does it make any sense to use Terraform to also deploy applications on the Kubernetes cluster? Is this also the way to go?
Thank you
Though it's not devoid of its complexities, a better pipeline is a Jenkins + Helm + Spinnaker combo.
Jenkins - CI
Helm - templating and chart build
Spinnaker - deploy
Pros:
Spinnaker is an excellent tool for deployment to Kubernetes.
It can be made aware of multiple environments, so cloud pipelines are easier to build.
It natively integrates with most of the cloud providers, like AWS, Azure, PCF, etc.
Cons:
On the flip side, it's a little heavy as a tool, since it is comprised of a bunch of microservices, and configuration can get under your skin.
As David Maze mentioned, you can combine Terraform with Helm.
You can find more information about the Terraform provider here
and here
As per the Terraform documentation:
"install_tiller" - (Optional) Install Tiller if it is not already installed. Defaults to true.
You can also use Ansible with the Helm package manager; see here:
Please also take a look at the other automated tools described briefly here and here, like Jenkins, mentioned by Shirine.
There are different solutions. Depending on your needs you should consider factors like: paid/free solutions, for developers/teams, preferred platform, other factors like security, increasing transparency, collaboration and availability.
Hope this helps.
I maintain the Kustomization provider as an alternative integration of Kubernetes manifests into Terraform.
It has three main advantages over alternative options:
Every K8s resource is tracked individually in the Terraform state. This gives you a preview of changes in the plan phase, and also enables destroy-and-recreate plans in case of changes to immutable fields.
The provider allows you to use native Kubernetes YAML unchanged. No need to translate everything into HCL like with the Kubernetes provider.
Being based on Kustomize, it allows you to use Kustomize's overlay approach. But by defining the overlay in Terraform, you can use Terraform variables, module outputs and so on, to patch the Kubernetes resources.
You can of course use the provider's data sources and resources directly, but the most convenient way is probably via this module:
module "example_manifests" {
source = "kbst.xyz/catalog/custom-manifests/kustomization"
version = "0.1.0"
configuration_base_key = "default"
configuration = {
default = {
resources = [
# list of paths to K8s YAML files
"${path.root}/path/to/a/kubernetes/resource.yaml"
]
}
}
}