Does Kubernetes have a way of reusing manifests without copying and pasting them? Something akin to Terraform templates.
Is there a way of passing values between manifests?
I am looking to deploy the same service to multiple environments and wanted a way to call the necessary manifest and pass in the environment-specific values.
I'd also like to do something like:
Generic-service.yaml
Name={variablename}
Foo-service.yaml
Use=Generic-service.yaml
variablename=foo-service-api
Any guidance is appreciated.
Kustomize, now built into kubectl as kubectl apply -k, is a way to parameterize your Kubernetes manifest files.
With Kustomize, you have a base manifest file (e.g. of a Deployment) and then multiple overlay directories for environment-specific parameters, e.g. for test, qa and prod environments.
I would recommend having a look at Introduction to kustomize.
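For illustration, here is a minimal sketch of that layout (the file names, the generic-service name and the image are placeholders, not from the question):

# base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: generic-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: generic-service
  template:
    metadata:
      labels:
        app: generic-service
    spec:
      containers:
        - name: api
          image: example/foo-service-api:1.0

# base/kustomization.yaml
resources:
  - deployment.yaml

# overlays/prod/kustomization.yaml
resources:
  - ../../base
namePrefix: prod-          # every resource name gets this prefix
replicas:
  - name: generic-service  # bump the replica count only in prod
    count: 3

Applying the prod variant is then just kubectl apply -k overlays/prod.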
Before Kustomize it was common to use Helm for this.
I have a Kubernetes operator (ex: kubectl get oracle_ctrl). Now I want to provide custom arguments to the kubectl command.
ex: kubectl apply oracle_ctrl --auto-discover=true --name=vcn1
I could write another controller to do the same job, but I don't want to write one more controller; I want to make use of the existing one.
Is it possible to use operator-sdk to provide custom args to kubectl?
No, this isn't possible.
kubernetes/kubectl#914 has a little bit further discussion of this, but its essential description is "we should start the proposal and design process to eventually write something better than kubectl create to support it". Your CRD can define additional columns to be shown in kubectl get but this is really the only kubectl-related extension point. You could potentially create a kubectl plugin or another CLI tool that does what you need.
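As a sketch of that extension point (the group, kind and field paths here are hypothetical, chosen to match the question):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: oraclectrls.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: OracleCtrl
    plural: oraclectrls
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
      # extra columns shown by `kubectl get oraclectrls`
      additionalPrinterColumns:
        - name: AutoDiscover
          type: boolean
          jsonPath: .spec.autoDiscover
        - name: Age
          type: date
          jsonPath: .metadata.creationTimestamp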
Rather than using the kubectl imperative tools, it's often a better practice to directly write YAML artifacts and commit them to source control. You can parameterize these using tools like Helm or Kustomize. If kubectl apply -f or helm install is your primary way of loading things into the cluster, then you don't need custom CLI options to make this work.
Kubernetes already provides a way to manage configuration with ConfigMap.
However, I have a question/problem here.
If I have multiple applications with different needs deployed in Kubernetes, all these deployments might share and access some common config variables. Is it possible for ConfigMaps to share a common config variable?
There are two ways to do that.
Kustomize - customization of Kubernetes YAML configurations (developed under kubernetes-sigs, and now integrated into the kubectl command line). But it currently isn't as mature as a Helm chart.
https://github.com/kubernetes-sigs/kustomize
Helm chart - the Kubernetes package manager. Its values.yaml can define the values for the same configuration files (in your case, ConfigMaps) using variables; see the sketch below.
https://helm.sh/
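As a rough sketch of the Helm approach (the chart layout and value names are invented for illustration), one value in values.yaml can feed ConfigMaps for several applications:

# values.yaml
common:
  logLevel: info

# templates/configmap-app1.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app1-config
data:
  LOG_LEVEL: {{ .Values.common.logLevel | quote }}

# templates/configmap-app2.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app2-config
data:
  LOG_LEVEL: {{ .Values.common.logLevel | quote }}

Changing common.logLevel once changes it for every ConfigMap that references it.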
I want to keep the versions of all pods (apps) in env inside the namespace, so I can use them in the YAML file to create a deployment, or even in CI/CD, which makes DevOps easier.
Right now the developer must set the version in the YAML file.
If you want to use environment variables in a manifest or YAML file, you can simply use Kubernetes Secrets and ConfigMaps,
where you can store the environment values and use them during the deployment.
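A minimal sketch of that idea (the names are placeholders): store the version in a ConfigMap and surface it to the container as an environment variable. Note that this exposes the value to the running container; it does not template the image tag in the manifest itself.

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-versions
data:
  APP_VERSION: "1.4.2"

# container fragment inside the Deployment spec
env:
  - name: APP_VERSION
    valueFrom:
      configMapKeyRef:
        name: app-versions
        key: APP_VERSION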
That's about the design principle, and that's the ideal approach to apply in your pipeline.
You don't have to save the exact version of all your Pods inside the manifest file; just use latest or an environment-like tag (e.g. staging or production).
And in your pipeline, you can patch the deployment with the corresponding tag based on your build.
One example of this approach:
kubectl patch deployment $YOUR_DEPLOYMENT_NAME -p "{\"metadata\":{\"labels\":{\"image\":\"$YOUR_BUILD_STAGE-$PIPELINE_ID\"}},\"spec\":{\"revisionHistoryLimit\":2,\"template\":{\"spec\":{\"containers\":[{\"name\":\"$YOUR_CONTAINER_NAME\",\"image\":\"$DOCKER_IMAGE_NAME:$YOUR_BUILD_STAGE-$PIPELINE_ID\"}]}}}}"
I am using the Kubernetes Provider to describe services/pods in Terraform.
It can get confusing to use the HashiCorp Configuration Language to define kubernetes_pod or kubernetes_service resources, because the Kubernetes documentation describes everything in YAML, which means you need to translate it into HCL.
Is it possible to define pods as YAML and use them with kubernetes_pod and kubernetes_service resources as templates?
While Terraform normally uses HCL, this is a superset of JSON (much like YAML itself), so it can also read JSON.
One possible option would be to take the YAML examples you already have, convert them into JSON, and then run Terraform on those.
Unfortunately, that's unlikely to work directly, because the keys Terraform expects differ from the Kubernetes ones, so you'd need to write something to do a basic translation of the input YAML into Terraform resource JSON. At that point it'd probably be worth adding HCL output to the conversion so the resulting Terraform config is more readable, if you ever intend to keep the Terraform config around instead of just converting and applying it once.
The benefit of doing things this way is that you'd have a reusable Kubernetes config that could be run using kubectl or other tools, while also getting the power of Terraform's lifecycle management: being able to plan changes, plus integration with non-Kubernetes parts of your infrastructure (such as setting up the instances the Kubernetes cluster runs on).
I've not used it much, but I believe Kops will let you keep pod/service config in typical Kubernetes YAML files while using Terraform to manage the configuration, and it can even output the Terraform configuration so you can run it outside of Kops itself.
The hashicorp/kubernetes provider does not support raw YAML/JSON, and they have no intention of implementing it.
The possible solutions are:
K2tf, a tool for converting Kubernetes RAW YAML manifests into Terraform HCL for the Kubernetes provider.
Use an alternative community Kubernetes provider, such as gavinbunny/kubectl, which does support raw YAML and can track each resource and the attributes in Terraform state, unlike the kubernetes-alpha provider.
Another solution is to use the hashicorp/kubernetes-alpha provider: you can pass in either a Terraform object or a raw YAML manifest converted into a TF object for use in the provider resource. The downside is that attributes are not tracked as individual objects, so a change will cause the entire resource to be tainted.
Using the kubectl provider
The core of this provider is the kubectl_manifest resource, which allows free-form YAML to be processed and applied against Kubernetes. The YAML object is then tracked, and creation, updates and deletion are handled seamlessly, including drift detection. This provider is ideal if you want to track the manifest in Terraform:
resource "kubectl_manifest" "test" {
yaml_body = file("path/to/manifest.yaml")
}
Using the kubernetes-alpha provider
The kubernetes_manifest resource represents one Kubernetes resource as described in the manifest attribute. The manifest value is the HCL transcription of a regular Kubernetes YAML manifest. To transcribe an existing manifest from YAML to HCL, use the Terraform built-in function yamldecode(), or use the tfk8s tool to convert YAML into manifest attributes for the kubernetes-alpha provider's manifest resource.
Example using yamldecode:
resource "kubernetes_manifest" "service" {
provider = kubernetes-alpha
manifest = yamldecode(file("path/to/manifest.yaml"))
}
Why doesn't the kubernetes provider support RAW YAML?
Supporting YAML/JSON in hashicorp/kubernetes was considered before (the very first proposal of the K8s provider was exactly that) and again during the initial implementation of this provider, and we decided not to do it.
The reason is that you can't accurately track resources created from RAW YAML as Terraform objects.
From Terraform's developer perspective it is very tricky to get around the way the K8s API works, where you send an array [a, b, c] to the Create API and then Get back [a, b, c, d]. This happens for example with pods that get some secret volumes attached automatically, but it happens with most other resources I had the chance to play with. The whitelisting/blacklisting is the tricky part.
You may also be interested in the following project, which allows you to convert YAML files to Terraform's HCL.
https://github.com/sl1pm4t/k2tf
Description:
A tool for converting Kubernetes API Objects (in YAML format) into HashiCorp's Terraform configuration language.
The converted .tf files are suitable for use with the Terraform Kubernetes Provider
I am writing a YAML file for Kubernetes and I am wondering how to pass variables to the args field.
I need to do something like this :
args: ['--arg1=http://12.12.12.12:8080','--arg2=11.11.11.11']
But I don't want to hard-code those values for --arg1 and --arg2; instead it should be something like:
args: ['--arg1='$HOST1,'--arg2='$HOST2]
How should I do this?
You have two options that are quite different and really depend on your use-case, but both are worth knowing:
1) Helm allows you to create templates of Kubernetes definitions that can use variables.
Variables are supplied when you install the Helm chart, before the resulting manifests are deployed to Kubernetes.
You can change the variables later on, but what that does is regenerate the YAML and re-deploy a "static" version of the result (template + variables = YAML that's sent to Kubernetes). For example:
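Here is a minimal sketch of option 1 (the value names are invented for illustration); the values can be overridden per environment with --set or a per-environment values file at install time:

# values.yaml
host1: "http://12.12.12.12:8080"
host2: "11.11.11.11"

# templates/deployment.yaml (container fragment)
args:
  - "--arg1={{ .Values.host1 }}"
  - "--arg2={{ .Values.host2 }}"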
2) ConfigMaps allow you to separate configuration from the pod manifest and share that configuration across several pods/deployments.
You can then reference the ConfigMap from your pod/deployment manifests, for example:
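A minimal sketch of option 2 (the ConfigMap and key names are placeholders). Kubernetes expands $(VAR) references in args against environment variables defined on the container, so you can feed ConfigMap values into args without any templating:

apiVersion: v1
kind: ConfigMap
metadata:
  name: hosts-config
data:
  HOST1: "http://12.12.12.12:8080"
  HOST2: "11.11.11.11"

# container fragment in the pod/deployment manifest
env:
  - name: HOST1
    valueFrom:
      configMapKeyRef:
        name: hosts-config
        key: HOST1
  - name: HOST2
    valueFrom:
      configMapKeyRef:
        name: hosts-config
        key: HOST2
args:
  - "--arg1=$(HOST1)"
  - "--arg2=$(HOST2)"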
Hope this helps!