Declarative definitions for resources in a Kubernetes cluster, such as Deployments, Pods, Services, etc. What are they referred to as in the Kubernetes ecosystem?
Possibilities I can think of:
Specifications (specs)
Objects
Object configurations
Templates
Is there a consensus standard?
Background
I'm writing a small CI tool that deploys single or multiple k8s YAML files. I can't think of what to name these in the docs and actual code.
The YAML form is generally called a manifest. In a Helm chart they are templates for manifests (or more often just "templates"). When you send them to the API you parse the manifest and it becomes an API object. Most types/kinds (you can use either term) have a sub-struct called spec (e.g. DeploymentSpec) that contains the declarative specification of whatever that type is for. However, that is not required, and some core types (ConfigMap, Secret) do not follow that pattern.
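For instance, a minimal Deployment manifest (all names here are illustrative) shows the kind, the metadata, and the nested spec sub-struct:

apiVersion: apps/v1
kind: Deployment          # the type/kind
metadata:
  name: example-app       # illustrative name
spec:                     # the DeploymentSpec sub-struct
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:                 # the nested PodSpec
      containers:
        - name: example-app
          image: nginx:1.25   # illustrative image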
Related
If I have a microservice app within a namespace, I can easily get all of my namespaced resources within that namespace using the k8s API. I cannot, however, see which non-namespaced resources are being used by the microservice app. If I want to see my non-namespaced resources, I can only see them all at once, with no indication of which ones are dependencies of the microservice app.
How can I find the dependencies related to my application? I'd like to be able to get a reference to things like PersistentVolumes, StorageClasses, ClusterRoles, etc. that are being used by the app's namespaced resources.
Your code, running in a pod container inside a namespace, runs using a ServiceAccount set via pod.spec.serviceAccountName.
If not set, it runs using the default ServiceAccount.
You need to create a ClusterRole granting the specific verbs you need on the cluster-wide resources, then bind this ClusterRole to the ServiceAccount. Note that a namespaced RoleBinding only grants access within its own namespace, so for cluster-scoped resources you bind it with a ClusterRoleBinding.
Then your pod, using a Kubernetes client with the "in-cluster config" auth method, will be able to query the apiserver to get/list/watch/delete/patch... the said cluster-wide resources.
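A minimal sketch of that RBAC wiring, assuming the pod runs as a ServiceAccount named app-sa in namespace my-app and only needs read access to PersistentVolumes and StorageClasses (all names are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-resource-reader
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: app-sa-cluster-resource-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-resource-reader
subjects:
  - kind: ServiceAccount
    name: app-sa          # illustrative ServiceAccount
    namespace: my-app     # illustrative namespace

The pod would then set spec.serviceAccountName: app-sa, as described above.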
This is definitely a non-trivial task because of the many ways such a dependency can come into play: whenever an object "uses" another one, we could identify a dependency. The issue is that this "use" relation can take many forms: e.g., a Pod can reference a Volume in its definition (which would be a sort of direct dependency), but it can also use a PersistentVolumeClaim, which would then instantiate a PV through a StorageClass -- and these relations are only known to Kubernetes at run time, when the YAML definitions are applied.
In other words:
To chase dependencies, you would have to inspect the YAML description of the resources in use, knowing the semantics of each: there is no single depends: field; instead one would need to follow, e.g., the spec.storageClassName of a PVC, the spec.volumes: of a Pod, etc. (see the sketch after these points).
In some cases, even that would not be enough: e.g., to match Services to Pods one would also have to match selectors and ports on each side.
All of this would need to be done by extracting YAML from a running K8s cluster, since some relations between resources are not known until they are instantiated.
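For example, following that chain from a Pod through its PersistentVolumeClaim down to a StorageClass means reading fields like these (all resource names here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-pvc    # Pod -> PVC dependency
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  storageClassName: standard      # PVC -> StorageClass dependency
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi

The PersistentVolume itself only appears once the claim is bound, which is why this kind of relation is only visible on a running cluster.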
You could check the article How do you visualise dependencies in your Kubernetes YAML files? by Daniele Polencic, which shows a few tools that can be used to visualize dependencies:
There isn't any static tool that analyses YAML files. But you can visualise your dependencies in the cluster with Weave Scope, KubeView or tracing the traffic with Istio.
I'm working on a continuous deployment routine for a Kubernetes application: every time I push a git tag, a GitHub Action is activated which calls kubectl apply -f kubernetes to apply a bunch of YAML Kubernetes definitions.
Let's say I add YAML for a new service and deploy it -- kubectl will add it.
But then later on, I simply delete the YAML for that service and redeploy -- kubectl will NOT delete it.
Is there any way that kubectl can recognize that the service YAML is missing, and respond by deleting the service automatically during continuous deployment? In my local test, the service remains floating around.
Does the developer have to know to connect kubectl to the production cluster and delete the service manually, in addition to deleting the YAML definition?
Is there a mechanism for Kubernetes to "know what's missing"?
You need to use a CI/CD tool for Kubernetes to achieve what you need. As mentioned by Sithroo, Helm is a very good option.
Helm lets you fetch, deploy and manage the lifecycle of applications, both 3rd party products and your own.
No more maintaining random groups of YAML files (or very long ones) describing pods, replica sets, services, RBAC settings, etc. With Helm, there is a structure and a convention for a software package that defines a layer of YAML templates and another layer that changes the templates, called values. Values are injected into templates, thus allowing a separation of configuration, and defines where changes are allowed. This whole package is called a Helm Chart.
Essentially you create structured application packages that contain everything they need to run on a Kubernetes cluster, including dependencies the application requires. Source
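As a rough illustration of that template/values split (the chart layout, names and values below are hypothetical), a chart keeps the YAML as templates and injects values at install time:

# templates/deployment.yaml (excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

# values.yaml
replicaCount: 2
image:
  repository: nginx
  tag: "1.25"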
Before you start, I recommend these articles explaining its quirks and features.
The missing CI/CD Kubernetes component: Helm package manager
Continuous Integration & Delivery (CI/CD) for Kubernetes Using CircleCI & Helm
There's no such way. You can deploy resources from a YAML file from anywhere, as long as you can reach the cluster and have a kubeconfig, so Kubernetes will not know how to respond to a file deletion. If you still want to do this, you can write a program (e.g. in Go) which checks the availability of files in one place and deletes the corresponding resource whenever a file gets deleted.
One Kubernetes-native way is to use an operator: whenever there is any change in your files, you update the CRD used to deploy resources via the operator.
Before deleting the YAML file, you can run kubectl delete -f file.yaml; this way all the resources created by this file will be deleted.
However, what you are looking for is achieving the desired state using k8s. You can do this with tools like Helmfile.
Helmfile allows you to specify the resources you want all in one file, and it will reach the desired state every time you run helmfile apply.
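A minimal helmfile.yaml sketch, assuming your manifests are wrapped in Helm charts (all names and paths here are illustrative):

repositories:
  - name: bitnami
    url: https://charts.bitnami.com/bitnami

releases:
  - name: my-service
    namespace: my-app
    chart: ./charts/my-service   # local chart wrapping your manifests
  - name: redis
    namespace: my-app
    chart: bitnami/redis
    installed: false             # marking a release as not installed makes
                                 # the next `helmfile apply` uninstall it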
I have a requirement where my client applications have almost the same properties and even the same URL, as they are running behind a load balancer; the only thing that differs between them is a particular set of environment properties.
Is it possible to register them uniquely based on that property?
I would say there are a few approaches.
One would be loading environment variables from a Kubernetes Secret.
The second would be using Helm (https://helm.sh/).
Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.
Explanation:
If you go with the Secret option, you would probably create two separate Secrets with the env variables you need and load them based on the app name; if the apps are set up in different namespaces, copy the Secret into each one, as those resources do not work across namespaces.
If you go with Helm, you would have to write your chart and put the env variables into values.yaml, or mix the two approaches and load the Secret from inside Kubernetes.
This will work on Kubernetes, I do not know (based on your tags) if it's the same on OpenShift.
Please provide some samples of what you have already done and I'll provide more details.
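In the meantime, here is a minimal sketch of the Secret-based option, assuming each client application reads its distinguishing properties from environment variables (all names are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: client-a-env
type: Opaque
stringData:
  CLIENT_ID: "client-a"          # the property that differs per application
  FEATURE_FLAG: "enabled"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client-a
  template:
    metadata:
      labels:
        app: client-a
    spec:
      containers:
        - name: client
          image: my-registry/client-app:1.0   # illustrative image
          envFrom:
            - secretRef:
                name: client-a-env            # injects the Secret keys as env vars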
I am new to Kubernetes and Minikube. Both look like amazing tools, but I wonder if there is any way to have a single .yml file to deploy my services/deployments in all the environments, including the local dev env...
The first limitation I see is related to service discovery, since I would like to have my services behind a load balancer in the cloud; in the development environment I can't, since Minikube doesn't support it, so I have to fall back to NodePort.
Can you provide me with some info about that matter?
There are other common differences between environments: names; credentials for any database or other permissioned resources; allocation of RAM/CPU; replica counts. There are also limitations that Minikube has as a runtime, compared to production k8s.
So, though one can use the same single YAML file in different environments, typically that's not what one wants.
What one usually wants is to have the general architectural shape of the solution be the same across environments, have the differences extracted into minimalist configuration, and then render environment-specific files from templates at deployment time.
The tool most commonly used to support this kind of approach is helm:
https://helm.sh/
Helm is basically a glorified templating wrapper around kubectl (though it has an in-cluster component). With Helm, you can use the same base set of resource files, extract the environment differences into config files, and then use Helm to deploy as appropriate to each environment.
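For instance, the LoadBalancer-vs-NodePort difference from the question could be extracted into per-environment values files (the file and value names here are only a sketch):

# templates/service.yaml (excerpt)
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-svc
spec:
  type: {{ .Values.service.type }}
  selector:
    app: {{ .Release.Name }}
  ports:
    - port: 80
      targetPort: 8080

# values-minikube.yaml
service:
  type: NodePort

# values-cloud.yaml
service:
  type: LoadBalancer

You would then deploy with something like helm upgrade --install myapp ./chart -f values-minikube.yaml locally, and with -f values-cloud.yaml in the cloud.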
If I understood your question properly, you would like to spin up your infrastructure using one command and one file.
It is possible; however, it depends on your services. If some pods require another one to be running before they can start, this can get tricky. Technically, though, you can put all your manifest files in one bundle and then create all the deployments, services, etc. with kubectl apply -f bundle.yml.
To create this bundle, you need to separate every manifest (deployment, service, configmap, etc.) with triple dashes (---).
Example:
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-1
---
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-2
I am using the Kubernetes Provider to describe services/pods in Terraform.
It can get confusing using the HashiCorp Configuration Language to define kubernetes_pod or kubernetes_service resources, because the Kubernetes documentation describes everything in YAML, which means you need to translate it into HCL.
Is it possible to define pods as YAML and use them with kubernetes_pod and kubernetes_service resources as templates?
While Terraform normally uses HCL, HCL is a superset of JSON (much like YAML itself), so Terraform can also read JSON.
One possible option would be to take the YAML examples you already have and convert them into JSON and then use Terraform on those.
Unfortunately, that's unlikely to work directly, because the key names Terraform expects differ from the Kubernetes YAML ones, so you'd need to write something to do some basic translation of the input YAML into Terraform resource JSON. At that point, it'd probably be worth adding HCL output to the conversion so the generated Terraform config is more readable, if you ever intend to keep the Terraform config around instead of just converting and applying it once.
The benefit of doing things this way is that you would have a reusable Kubernetes config that could be run using kubectl or other tools, while gaining the power of Terraform's lifecycle management: being able to plan changes and integrate with the non-Kubernetes parts of your infrastructure (such as setting up the instances that run the Kubernetes cluster).
I've not used it much but I believe Kops will allow you to keep pod/service config in typical Kubernetes YAML files but can then use Terraform to manage the configuration and even allows you to output the Terraform configuration so you can run it outside of Kops itself.
The hashicorp/kubernetes provider does not support raw YAML/JSON, and they have no intention of implementing it.
The possible solutions are:
K2tf, a tool for converting raw Kubernetes YAML manifests into Terraform HCL for the Kubernetes provider.
Use an alternative community Kubernetes provider, such as gavinbunny/kubectl, which does support raw YAML and can track each resource and its attributes in Terraform state, unlike the kubernetes-alpha provider.
Another solution is to use the hashicorp/kubernetes-alpha provider: you can pass in either a Terraform object or convert a raw YAML manifest into a TF object for use in the provider resource. The downside is that the attributes are not tracked as individual objects, and thus a change will cause the entire resource to be tainted.
Using the kubectl provider.
The core of this provider is the kubectl_manifest resource, allowing free-form YAML to be processed and applied against Kubernetes. This YAML object is then tracked and handles creation, updates and deletion seamlessly, including drift detection. This provider is ideal if you want to track the manifest in Terraform:
resource "kubectl_manifest" "test" {
yaml_body = file("path/to/manifest.yaml")
}
Using the kubernetes-alpha provider
The kubernetes_manifest resource represents one Kubernetes resource as described in the manifest attribute. The manifest value is the HCL transcription of a regular Kubernetes YAML manifest. To transcribe an existing manifest from YAML to HCL, use the Terraform built-in function yamldecode(), or use the tfk8s tool to convert YAML into manifest attributes for the kubernetes-alpha provider's manifest resource.
Example using yamldecode:
resource "kubernetes_manifest" "service" {
provider = kubernetes-alpha
manifest = yamldecode(file("path/to/manifest.yaml"))
}
Why doesn't the kubernetes provider support RAW YAML?
Supporting YAML/JSON in hashicorp/kubernetes was considered before (the very first proposal of the K8s provider was exactly that) and again during the initial implementation of this provider, and we decided not to do it.
The reason is that you can't accurately track resources created from RAW YAML as Terraform objects.
From Terraform's developer perspective it is very tricky to get around the way the K8s API works, where you send an array [a, b, c] to the Create API and then you Get back [a, b, c, d]. This happens for example with pods that get some secret volumes attached automatically, but happens with most other resources I had the chance to play with. The whitelisting/blacklisting is the tricky part.
You may also be interested in the following project, which allows you to convert YAML files to Terraform's HCL.
https://github.com/sl1pm4t/k2tf
Description:
A tool for converting Kubernetes API Objects (in YAML format) into HashiCorp's Terraform configuration language.
The converted .tf files are suitable for use with the Terraform Kubernetes Provider.