Is it possible to always mount a config map to all pods in Kubernetes?

I am trying to follow something similar to the Istio injection model where Istio is able to run a mutating admission webhook in order to auto inject their sidecar.
We would like to do something similar, but with some config maps. We have a need to mount config maps to all new pods in a given namespace, always mounted at the same path. Is it possible to create a mutating admission webhook that will allow me to mount this config map at the known path while admitting new pods?
Docs on mutating admission webhooks: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/

This should be entirely possible; in fact, it is exactly the kind of use case a custom mutating admission webhook is meant for. Unfortunately, the official documentation on actually implementing one is somewhat sparse.
This is the most useful reference material I found when I was working on this mutating admission webhook.
The general process is as follows:
1) Generate and deploy certificates as demonstrated here.
2) Deploy a MutatingWebhookConfiguration to your cluster. This configuration tells the API server where to send AdmissionReview objects. It is also where you specify which operations on which API resources in which namespaces you want to target (a minimal sketch is shown after this list).
3) Write the actual webhook server, which should accept AdmissionReview objects at the specified endpoint (/mutate is the convention) and return AdmissionResponse objects containing the mutation, as is shown here (note: in the linked example I added an annotation to incoming pods that fit certain criteria, while your webhook would instead patch in the ConfigMap volume and mount; an example patch is also shown after this list).
4) Deploy the webhook server and expose it using normal methods (a Deployment and a Service, or whatever fits your use case). Make sure it is reachable at the location you specified in the clientConfig of the MutatingWebhookConfiguration.
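For illustration, here is a minimal sketch of what the MutatingWebhookConfiguration in step 2 could look like. It is only a sketch: the names (configmap-injector, injector-system, the opt-in namespace label) are assumptions, and it targets the admissionregistration.k8s.io/v1 API, so adjust it to your cluster version and naming scheme.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: configmap-injector                # hypothetical name
webhooks:
  - name: configmap-injector.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: configmap-injector          # the Service in front of your webhook server
        namespace: injector-system
        path: /mutate                     # the endpoint from step 3
      caBundle: <base64-encoded CA certificate>
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    namespaceSelector:
      matchLabels:
        configmap-injection: enabled      # only mutate namespaces that opt in
The webhook server then responds with an AdmissionResponse whose patch field is a base64-encoded JSONPatch (patchType: JSONPatch). Decoded, a patch that mounts a ConfigMap at a fixed path could look roughly like the following (shown as YAML for readability; the ConfigMap name and mount path are made up, and the /spec/volumes/- form assumes the pod already has a volumes array, otherwise the patch has to create it):
- op: add
  path: /spec/volumes/-
  value:
    name: injected-config
    configMap:
      name: shared-config                 # the ConfigMap you want in every pod
- op: add
  path: /spec/containers/0/volumeMounts/-
  value:
    name: injected-config
    mountPath: /etc/shared-config         # the known path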
Hope this was enough information! Let me know if I left anything too vague / was unclear.

Related

How to make Terraform provider dependent on a resource being created

I am trying to use the Rancher Terraform provider to create a new RKE cluster and then use the Kubernetes and Helm Terraform providers to create/deploy resources to the created cluster. I'm using the https://registry.terraform.io/providers/rancher/rancher2/latest/docs/resources/cluster_v2#kube_config attribute to create a local file with the new cluster's kube config.
The Helm and Kubernetes providers need the kube config in the provider configuration: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs. Is there any way I can get the provider configuration to wait for the local file being created?
Generally speaking, Terraform always needs to evaluate provider configurations during the planning step because providers are allowed to rely on those settings in order to create the plan, and so it typically isn't possible to have a provider configuration refer to something created only during the apply step.
As a way to support bootstrapping in a situation like this, though, it can be reasonable to use the -target=... option to terraform apply: plan and apply only enough actions to create the Rancher cluster first, and then follow up with a normal plan and apply to complete everything else:
terraform apply -target=rancher2_cluster_v2.example
terraform apply
This two-step process is needed only while the kube_config attribute isn't known yet. As long as this resource type has convergent behavior, you should be able to use just terraform apply as normal from then on, unless you later make a change that requires replacing the cluster.
(This is a general answer about provider configurations referring to resource attributes. I'm not familiar with Rancher in particular, so there might be some specifics about that resource type which I'm not mentioning here.)
I found a workaround of sorts: I output the rancher2_cluster.cluster.kube_config object into a variable, then referenced that variable in my Kubernetes module. Instead of using the kube_config attribute in the provider configuration, I used the token and host attributes and used yamldecode to parse the credentials directly from the kube_config variable.
provider "kubernetes" {
token = "${yamldecode(var.kube_config)["users"][0]["user"]["token"]}"
host = "${yamldecode(var.kube_config)["clusters"][0]["cluster"]["server"]}"
}
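For reference, here is a minimal sketch of the kube_config YAML that those index expressions walk; the values are placeholders and a real Rancher-generated kubeconfig carries more fields:
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster
    cluster:
      server: https://rancher.example.com/k8s/clusters/c-m-xxxxx    # ["clusters"][0]["cluster"]["server"]
users:
  - name: my-cluster
    user:
      token: "kubeconfig-user-xxxxx:abcdef0123456789"               # ["users"][0]["user"]["token"]
contexts:
  - name: my-cluster
    context:
      cluster: my-cluster
      user: my-cluster
current-context: my-cluster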
I would suggest splitting your functionality into two layers:
Run the first layer to generate the kube_config file.
Run the second layer that consumes this file.

API endpoints for kubernetes mutating webhook server

As described here, this is a reference implementation of a webhook server as used in the Kubernetes e2e tests. In the main function, a number of endpoints are defined to handle different mutation requests. However, there is no clear documentation as to which endpoint gets invoked when.
So, how do we know which endpoint is invoked when?
I see you are trying to understand the ordering of execution of mutating webhooks.
I found this piece of code in the Kubernetes repo. Based on it, you can see that webhooks are sorted by name to give a deterministic order.
A single ordering of mutating admission plugins (including webhooks) does not work for all cases, so take a look at the mutating plugin ordering section in the Admission webhook proposal for an explanation of how it is handled.
Also notice there are no "pod-only endpoints" or "endpoints that get called for pods". Say your webhook server has only one endpoint, /. If you want to mutate pods with it, you have to say so under rules in the webhook configuration: setting rules[].resources: ["pods"] and rules[].operations: ["CREATE"] will run your mutating webhook whenever a pod is about to be created.
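As a rough sketch (the names here are made up), the relevant part of such a MutatingWebhookConfiguration pairs your single / endpoint with a pods/CREATE rule:
webhooks:
  - name: pod-mutator.example.com        # hypothetical webhook name
    clientConfig:
      service:
        name: my-webhook-server          # hypothetical Service for your webhook
        namespace: default
        path: /                          # your server's only endpoint
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]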
Let me know if it helped.

What prerequisites do I need for Kubernetes to mount an EBS volume?

The documentation doesn’t go into detail. I imagine I would at least need an IAM role.
This is what we have done and it worked well.
I was on Kubernetes 1.7.2, trying to provision storage (dynamic/static) for Kubernetes pods on AWS. Some of the things mentioned below may not be needed if you are not looking for dynamic storage classes.
Made sure that the DefaultStorageClass admission controller is enabled on the API server. (DefaultStorageClass is among the comma-delimited, ordered list of values for the --enable-admission-plugins flag of the API server component.)
I passed the options --cloud-provider=aws and --cloud-config=/etc/aws/aws.conf when starting the apiserver, controller-manager, and kubelet.
(the file /etc/aws/aws.conf is present on the instance with the following contents)
$ cat /etc/aws/aws.conf
[Global]
Zone = us-west-2a
Created an IAM policy, added it to a role (as in the link below), created an instance profile for it, and attached the profile to the instances. (NOTE: I initially missed attaching the instance profile and it did not work.)
https://medium.com/@samnco/using-aws-elbs-with-the-canonical-distribution-of-kubernetes-9b4d198e2101
For dynamic provisioning:
Created a storage class and made it the default.
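For reference, a default StorageClass for the in-tree AWS EBS provisioner might look roughly like the following (this is only a sketch; on clusters as old as 1.7 the beta annotation storageclass.beta.kubernetes.io/is-default-class may be needed instead of the one shown):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # marks this class as the default
provisioner: kubernetes.io/aws-ebs                        # in-tree AWS EBS provisioner
parameters:
  type: gp2
  fsType: ext4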
Let me know if this did not work.
Regards
Sudhakar
This is the IAM policy used by kubespray, and it is very likely indicative of a rational default:
https://github.com/kubernetes-incubator/kubespray/blob/v2.5.0/contrib/aws_iam/kubernetes-minion-policy.json
with the tl;dr of that link being to create an Allow for the following actions:
s3:*
ec2:Describe*
ec2:AttachVolume
ec2:DetachVolume
route53:*
(although I would bet that s3:* is too wide, I don't have the information handy to provide a more constrained version; the same observation applies to route53:*)
All of the Resource keys for those are * except the s3: one, which restricts the resource to buckets beginning with kubernetes-* -- unknown if that's just an example, or there is something special about kubernetes-prefixed buckets. Obviously you might have a better list of items to populate the Resource keys with in order to genuinely restrict attachable volumes (just be careful with dynamically provisioned volumes, as would be created by PersistentVolume resources).
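A rough reconstruction of such a policy, based only on the actions listed above rather than copied from the linked kubespray file, would be:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::kubernetes-*"]
    },
    {
      "Effect": "Allow",
      "Action": ["ec2:Describe*", "ec2:AttachVolume", "ec2:DetachVolume"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["route53:*"],
      "Resource": ["*"]
    }
  ]
}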

Can you define Kubernetes Services / Pods using YAML in Terraform?

I am using the Kubernetes Provider to describe services/pods in Terraform.
It can get confusing to define kubernetes_pod or kubernetes_service resources in the HashiCorp Configuration Language (HCL) because the Kubernetes documentation describes everything in YAML, which means you need to translate it into HCL.
Is it possible to define pods as YAML and use them with kubernetes_pod and kubernetes_service resources as templates?
While Terraform normally uses HCL, HCL is a superset of JSON (much like YAML itself), so Terraform can also read JSON.
One possible option would be to take the YAML examples you already have and convert them into JSON and then use Terraform on those.
Unfortunately, that's unlikely to work directly, because the keys Terraform expects differ from the Kubernetes ones, so you'd need to write something to do a basic translation of the input YAML into Terraform resource JSON. At that point it'd probably be worth adding HCL output to the conversion, so the resulting Terraform config is more readable if you ever intend to keep it around instead of converting and applying it once.
The benefit of doing things this way is that you keep a reusable Kubernetes config that could be run with kubectl or other tools, while gaining the power of Terraform's lifecycle management: being able to plan changes and to integrate with the non-Kubernetes parts of your infrastructure (such as setting up the instances the Kubernetes cluster runs on).
I've not used it much, but I believe Kops will let you keep pod/service config in typical Kubernetes YAML files while using Terraform to manage the configuration, and it can even output the Terraform configuration so you can run it outside of Kops itself.
The hashicorp/kubernetes provider does not support raw YAML/JSON, and they have no intention of implementing it.
The possible solutions are:
K2tf, a tool for converting Kubernetes raw YAML manifests into Terraform HCL for the Kubernetes provider.
Use an alternative community Kubernetes provider, such as gavinbunny/kubectl, which does support raw YAML and can track each resource and its attributes in Terraform state, unlike the kubernetes-alpha provider.
Another solution is to use the hashicorp/kubernetes-alpha provider: you can pass in either a Terraform object or a raw YAML manifest converted into a TF object for use in the provider resource. The downside is that the attributes are not tracked as individual objects, so any change will cause the entire resource to be tainted.
Using the kubectl provider.
The core of this provider is the kubectl_manifest resource, which allows free-form YAML to be processed and applied against Kubernetes. The YAML object is then tracked, and creation, updates, and deletion are handled seamlessly, including drift detection. This provider is ideal if you want to track the manifest in Terraform:
resource "kubectl_manifest" "test" {
yaml_body = file("path/to/manifest.yaml")
}
Using the kubernetes-alpha provider
The kubernetes_manifest resource represents one Kubernetes resource as described in the manifest attribute. The manifest value is the HCL transcription of a regular Kubernetes YAML manifest. To transcribe an existing manifest from YAML to HCL, use the Terraform built-in function yamldecode(), or use the tfk8s tool to convert YAML into manifest attributes for the kubernetes-alpha provider's manifest resource.
Example using yamldecode:
resource "kubernetes_manifest" "service" {
provider = kubernetes-alpha
manifest = yamldecode(file("path/to/manifest.yaml"))
}
Why doesn't the kubernetes provider support RAW YAML?
Supporting YAML/JSON in hashicorp/kubernetes was considered before (the very first proposal of the K8s provider was exactly that) and again during the initial implementation of this provider, and we decided not to do it.
The reason is that you can't accurately track resources created from RAW YAML as Terraform objects.
From Terraform's developer perspective it is very tricky to get around the way the K8s API works, where you send an array [a, b, c] to the Create API and then Get back [a, b, c, d]. This happens for example with pods that get some secret volumes attached automatically, but it happens with most other resources I had the chance to play with. The whitelisting/blacklisting is the tricky part.
You may also be interested in the following project, which allows you to convert YAML files to Terraform's HCL.
https://github.com/sl1pm4t/k2tf
Description:
A tool for converting Kubernetes API Objects (in YAML format) into HashiCorp's Terraform configuration language.
The converted .tf files are suitable for use with the Terraform Kubernetes Provider

Modelling Kubernetes Custom Types Resources

I am building an opinionated PaaS-like service on top of the Kubernetes ecosystem.
I want to model an SSHService and an SSHUser; I'll either extend the Kubernetes API server by registering new types/schemas (looks pretty simple) or use custom resources via ThirdPartyResource http://kubernetes.io/v1.1/docs/design/extending-api.html
I previously built my own API server on non-Kubernetes infrastructure. The way I modelled it was roughly as below, so an admin would perform the following RESTful actions:
1) Create SSH Service
2) Create SSh User
3) Add User to SSH Service
The third action would run on the SSH Service resource, which would check the universe to ensure an SSH User with the referenced name existed before adding it to its allowed-user array attribute.
In Kubernetes I don't think cross-resource transactions are supported, and looking at how other things are modeled this seems intentional (for example, I can create a pod with a secret volume referring to a secret name that does not exist, and this is accepted).
So in the Kubernetes world I intend to:
1) Create an SSH Service with .Spec.AllowedGroups [str]
2) Create an SSH User with .Spec.BelongToGroups [str], where groups is just an array of group names as strings
A Kubernetes client will watch for changes to SSH Services and SSH Users; when the sets change, it writes back to the API a secret volume (later a configmap volume) for passwd/shadow to be used in the SSH container (example object shapes are sketched below).
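Purely to illustrate the shape of those objects (the API group, kinds, and field names below are made up, and nothing here depends on whether they are ThirdPartyResources or a registered API group), they might look like:
apiVersion: paas.example.com/v1alpha1
kind: SshService
metadata:
  name: team-a-ssh
spec:
  allowedGroups:            # .Spec.AllowedGroups [str]
    - admins
    - developers
---
apiVersion: paas.example.com/v1alpha1
kind: SshUser
metadata:
  name: alice
spec:
  belongToGroups:           # .Spec.BelongToGroups [str]
    - developers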
Is this a sane approach to model custom resources?
My first reaction is that if you already have your own API server, and it works, there is no need to rewrite the API in Kubernetes style. I'd just try to reuse the thing that works.
If you do want to rewrite, here are my thoughts:
If you need lots of SSHServices, and you need lots of people to use your API for creating SSHServices, then it makes sense to represent the parameters of the ssh service as a ThirdParty resource.
But if you have just one or a few SSHServices, and you update them infrequently, then I would not create a ThirdParty resource for it. I would just write an RC that runs the SSH service Pod and mounts a secret (later configMap) volume containing a configuration file, in the format of your choice. The config file would include the AllowedGroups. Once you have v1.2 with ConfigMap, which should be out in about a month, you will be able to update the config by POSTing a new ConfigMap to the apiserver, without needing the SSH service to restart (it should watch the config file for changes). Basically, you can think of a ConfigMap as a simpler version of a ThirdParty resource.
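A minimal sketch of such a ConfigMap, with made-up names and an arbitrary file format, might be:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssh-service-config            # hypothetical name
data:
  ssh-service.conf: |
    # groups allowed to log in to the SSH service
    allowedGroups:
      - admins
      - developers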
As far as SSHUsers, you could use a ThirdParty resource and have the SSH controller watch the SSHUsers endpoint for changes. (Come to think of it, I'm not sure how you watch a third party resource.)
Or maybe you want to just put the BelongToGroups information into the same ConfigMap. This gives you the "transactionality" you wanted. It just means that updates to the config are serialized and require an operator or cron job to push the config. Maybe that is not so bad?