What prerequisites do I need for Kubernetes to mount an EBS volume?

The documentation doesn’t go into detail. I imagine I would at least need an IAM role.

This is what we did and it worked well.
I was on Kubernetes 1.7.2, trying to provision storage (dynamic and static) for Kubernetes pods on AWS. Some of the steps below may not be needed if you are not looking for dynamic storage classes.
Made sure that the DefaultStorageClass admission controller is enabled on the API server. (DefaultStorageClass is among the comma-delimited, ordered list of values for the --enable-admission-plugins flag of the API server component.)
I passed the options --cloud-provider=aws and --cloud-config=/etc/aws/aws.conf when starting the apiserver, controller-manager and kubelet.
(The file /etc/aws/aws.conf is present on the instance with the contents below.)
$ cat /etc/aws/aws.conf
[Global]
Zone = us-west-2a
Created an IAM policy, added it to a role (as in the link below), created an instance profile for that role and attached it to the instances. (NOTE: I initially missed attaching the instance profile and it did not work.)
https://medium.com/@samnco/using-aws-elbs-with-the-canonical-distribution-of-kubernetes-9b4d198e2101
For dynamic provisioning:
Created a storage class and made it the default.
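For reference, a minimal default StorageClass for the in-tree EBS provisioner could look like the sketch below. The name gp2 and the fsType are just examples; on 1.7-era clusters the beta default-class annotation shown here was the one in use, newer clusters use storageclass.kubernetes.io/is-default-class instead.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    # marks this class as the cluster default (newer clusters: storageclass.kubernetes.io/is-default-class)
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
With this in place (and the DefaultStorageClass admission controller enabled), a PersistentVolumeClaim that does not name a storage class is provisioned as an EBS gp2 volume automatically.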
Let me know if this did not work.
Regards
Sudhakar

This is the one used by kubespray, and is very likely indicative of a rational default:
https://github.com/kubernetes-incubator/kubespray/blob/v2.5.0/contrib/aws_iam/kubernetes-minion-policy.json
with the tl;dr of that link being to create an Allow for the following actions:
s3:*
ec2:Describe*
ec2:AttachVolume
ec2:DetachVolume
route53:*
(although I would bet that s3:* is too wide, I don't have the information handy to provide a more constrained version; similar observation on the route53:*)
All of the Resource keys for those are * except the s3: one, which restricts the resource to buckets beginning with kubernetes-* -- unknown whether that's just an example or whether there is something special about kubernetes-prefixed buckets. Obviously you might have a better list of values for the Resource keys to genuinely restrict which volumes can be attached (just be careful with dynamically provisioned volumes, as would be created by PersistentVolume resources).

Kubernetes configMap or persistent volume?

What is the best approach to passing multiple configuration files into a POD?
Assume that we have a legacy application that we have to dockerize and run in a Kubernetes environment. This application requires more than 100 configuration files to be passed. What is the best solution to do that? Create hostPath volume and mount it to some directory containing config files on the host machine? Or maybe config maps allow passing everything as a single compressed file, and then extracting it in the pod volume?
Maybe Helm somehow allows iterating over a directory and automatically creating one big configMap that will act as a directory?
Any suggestions are welcome.
Create hostPath volume and mount it to some directory containing config files on the host machine
This should be avoided.
Accessing hostPaths may not always be allowed. Kubernetes may use PodSecurityPolicies (soon to be replaced by OPA/Gatekeeper/whatever admission controller you want ...), and OpenShift has similar SecurityContextConstraint objects, which let you define policies for which user can do what. As a general rule, accessing hostPaths would be forbidden.
Besides, hostPath devices are local to one of your nodes. You won't be able to schedule your Pod somewhere else if there's an outage. Either you've set a nodeSelector restricting its deployment to a single node, and your application will be down for as long as that node is; or there's no placement rule, and your application may restart without its configuration.
Now you could say: "if I mount my volume from an NFS share of some sort, ...". Which is true. But then you would probably be better off using a PersistentVolumeClaim.
Create automatically one big configMap that will act as a directory
This could be an option. Although, as noted by @larsks in comments to your post, beware that ConfigMaps are limited in size, and frequently editing/updating large objects could grow your etcd database size.
If you really have ~100 files, ConfigMaps may not be the best choice here.
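To the Helm part of the question: outside of Helm, kubectl create configmap app-config --from-file=./config/ already packs a whole directory into one ConfigMap, and a chart can do the same with .Files.Glob and .AsConfig. A rough sketch (the config/ directory inside the chart and the release-based name are assumptions, and the ConfigMap size limit noted above still applies):
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-app-config
data:
{{ (.Files.Glob "config/*").AsConfig | indent 2 }}
Every file under config/ in the chart becomes one key in the ConfigMap, which you can then mount as a directory in the Pod.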
What next?
There's no single good answer without knowing exactly what we're talking about.
If you want to allow editing those configurations without restarting containers, it would make sense to use some PersistentVolumeClaim.
If that's not needed, ConfigMaps could be helpful, if you can somewhat limit their volume and stick to non-critical data, while Secrets could be used to store passwords or any sensitive configuration snippets.
An emptyDir could also be used, assuming you can figure out a way to automate provisioning of those configurations during container startup (e.g. a git clone in some initContainer, and/or a shell script contextualizing your configuration based on environment variables).
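A minimal sketch of that last approach, assuming the configuration lives in a git repository (the repository URL, image names and mount paths are all placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  initContainers:
  - name: fetch-config
    image: alpine/git
    # clone the configuration repository into the shared emptyDir (URL is a placeholder)
    args: ["clone", "--depth=1", "https://example.com/legacy-app-config.git", "/config"]
    volumeMounts:
    - name: config
      mountPath: /config
  containers:
  - name: app
    image: registry.example.com/legacy-app:latest   # placeholder image
    volumeMounts:
    - name: config
      mountPath: /etc/legacy-app                    # where the application expects its files
  volumes:
  - name: config
    emptyDir: {}
The configuration only lives for the lifetime of the Pod, which is fine here since it is re-fetched on every restart.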
If there are files that are not expected to change over time, or whose lifecycle is closely related to that of the application version shipping in your container image: I would consider adding them to my Dockerfile. Maybe even add some startup script -- something you could easily call from an initContainer, generating whichever configuration you couldn't ship in the image.
Depending on what you're dealing with, you could combine PVC, emptyDirs, ConfigMaps, Secrets, git stored configurations, scripts, ...

How can a file inside a pod be copied to the outside?

I have an audit pod which has logic to generate a report file. Currently, this file is present in the pod itself. I have only one pod, with only one replica.
I know I can run kubectl cp to copy those files from my pod. However, this command has to be executed on the Kubernetes node itself, while the task is to copy the file from within the pod itself, due to many restrictions.
I cannot use a Persistent Volume due to restrictions. I checked the Kubernetes API, but couldn't find anything by which I can do a copy.
Is there another way to copy that file out of the pod?
This is a community wiki answer posted to sum up the whole scenario and for better visibility. Feel free to edit and expand on it.
Taking under consideration all the mentioned restrictions:
not supposed to use the Kubernetes volumes
no cloud storage
pod names not accessible to your user
no sidecar containers
the only workarounds for your use case are the ones you currently use:
a dynamic PV with the helm.sh/resource-policy: keep annotation (see the sketch below)
using PVCs and explicitly telling the user not to delete the namespace
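For reference, the first workaround boils down to annotating the claim (or the Helm-templated PVC) so that Helm leaves it, and therefore the dynamically provisioned volume, alone on uninstall. A minimal sketch, with placeholder names and sizes:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: audit-report-pvc           # placeholder name
  annotations:
    helm.sh/resource-policy: keep  # Helm skips deleting this resource when the release is removed
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi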
If anyone has a better idea, feel free to contribute.

Is it possible to always mount a config map to all pods in Kubernetes

I am trying to follow something similar to the Istio injection model, where Istio is able to run a mutating admission webhook in order to auto-inject its sidecar.
We would like to do something similar, but with some config maps. We have a need to mount config maps to all new pods in a given namespace, always mounted at the same path. Is it possible to create a mutating admission webhook that will allow me to mount this config map at the known path while admitting new pods?
docs to mutating webhooks: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
This should be entirely possible and is, in fact, an aligned use-case for a custom mutating admission webhook. Unfortunately, the official documentation on actually implementing them is somewhat sparse.
This is the most useful reference material I found when I was working on this mutating admission webhook.
The general process is as follows:
Generate and deploy certificates as demonstrated here
Deploy a MutatingWebhookConfiguration to your cluster (a sketch follows this list). This configuration tells the API server where to send the AdmissionReview objects. It is also where you specify which operations on which API resources in which namespaces you want to target.
Write the actual webhook server, which should accept AdmissionReview objects at the specified endpoint (/mutate is the convention) and return AdmissionResponse objects with the mutated object, as is shown here (note: in the linked example, I added an annotation to incoming pods that fit a certain criteria, while your application would add a field for the ConfigMap)
Deploy the webhook server and expose it using normal methods (Deployment and Service, or whatever fits your use case). Make sure it's accessible at the location you specified in the MutatingWebhookConfiguration.
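To make step 2 concrete, a MutatingWebhookConfiguration could look roughly like the sketch below. The webhook name, namespace label, service location, /mutate path and CA bundle are all placeholders for your own setup, and older clusters used admissionregistration.k8s.io/v1beta1 instead of v1:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: configmap-injector
webhooks:
- name: configmap-injector.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  namespaceSelector:
    matchLabels:
      configmap-injection: enabled   # only mutate Pods in namespaces carrying this label
  clientConfig:
    service:
      namespace: webhook-system      # where the webhook Deployment/Service lives
      name: configmap-injector
      path: /mutate
    caBundle: "<base64-encoded CA bundle that signed the webhook's serving certificate>"
Your webhook server then returns a JSONPatch in the AdmissionResponse that adds the ConfigMap volume and volumeMount to the incoming Pod spec.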
Hope this was enough information! Let me know if I left anything too vague / was unclear.

Kubernetes: dynamic persistent volume provisioning using iSCSI and NFS

I am successfully using Kubernetes 1.4 persistent volume support with iSCSI/NFS PVs and PVCs in my containers. However, the storage has to be provisioned up front, with the capacity specified both at PV creation and when claiming the storage.
My requirement is to just provide storage to the cluster (without specifying its capacity) and let users/developers claim storage based on their requirements, i.e. to use dynamic provisioning with a StorageClass: just declare the storage with its details and let developers claim it based on their needs.
However, I am confused about dynamic volume provisioning for iSCSI and NFS using a StorageClass, and I don't see the exact steps to follow. As per the documentation, I need to use an external volume plugin for both these types, and one has already been made available as part of the incubator project - https://github.com/kubernetes-incubator/external-storage/. But I don't understand how to load/run that external provisioner (I need to run it as a container itself, I guess?) and then write a storage class with the details of the iSCSI/NFS storage.
Can somebody who has already done/used it guide me or provide pointers on this?
Thanks in advance,
picku
The project you pointed to is specific to iSCSI targets running targetd. You basically download the YAML files from https://github.com/kubernetes-incubator/external-storage/tree/master/iscsi/targetd/kubernetes, modify them with your storage provider's parameters and deploy the pods using kubectl create. In your claims you then need to specify a storage class, and the storage class in turn specifies the iSCSI provisioner. There are more steps, but that's the gist of it.
See this link for more detailed instructions https://github.com/kubernetes-incubator/external-storage/tree/master/iscsi/targetd
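As a rough sketch of how the pieces fit together once the provisioner pod is running (the provisioner string must match whatever the external provisioner registers itself as, the targetd-specific parameters come from the class.yaml example in that repository, and on a 1.4/1.5 cluster you would use the volume.beta.kubernetes.io/storage-class annotation on the claim instead of spec.storageClassName):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: iscsi-targetd
provisioner: iscsi-targetd   # placeholder: must match the external provisioner's registered name
# parameters: targetd-specific settings (portal, IQN, volume group, initiators, ...)
#             copied from the class.yaml example in the repository
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  storageClassName: iscsi-targetd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
The developer only creates the claim; the external provisioner watches for claims referencing its storage class and creates the matching PV on demand.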
The OpenEBS community has folks running it this way, AFAIK. There is a blog post, for example, explaining one approach supporting WordPress: https://blog.openebs.io/setting-up-persistent-volumes-in-rwx-mode-using-openebs-142632244cb2

Modelling Kubernetes Custom Types Resources

I am building an opinionated PaaS-like service on top of the Kubernetes ecosystem.
I want to model an SSHService and an SSHUser. I'll either extend the Kubernetes API server by registering new types/schemas (looks pretty simple) or use custom resources via ThirdPartyResource http://kubernetes.io/v1.1/docs/design/extending-api.html
I previously built my own API server on non-Kubernetes infrastructure. The way I modelled it was roughly as below, so an admin would, via RESTful actions:
1) Create SSH Service
2) Create SSH User
3) Add User to SSH Service
The third action would run on the SSH Service resource, which would check that an SSH User with the referenced name existed before adding it to the service's allowed-users array attribute.
In Kubernetes I don't think cross-resource transactions are supported, and looking at how other things are modelled this seems intentional (for example, I can create a pod with a secret volume referring to a secret name that does not exist, and this is accepted).
So in the Kubernetes world I intend to:
1) Create SSH Service with .Spec.AllowedGroups [str]
2) Create SSH User with .Spec.BelongToGroups [str], where groups is just an array of group names as strings
A Kubernetes client will watch for changes to the SSH services and SSH users; when the sets change, it will write back to the API a secret volume (later a configmap volume) containing the passwd/shadow files to be used in the SSH container.
Is this a sane approach to model custom resources?
First reaction is that if you already have your own API server, and it works, there is no need to rewrite the API in kubernetes style. I'd just try to reuse the thing that works.
If you do want to rewrite, here are my thoughts:
If you need lots of SSHServices, and you need lots of people to use your API for creating SSHServices, then it makes sense to represent the parameters of the ssh service as a ThirdParty resource.
But if you have just 1 or a few SSHServices, and you update them infrequently, then I would not create a ThirdParty resource for them. I would just write an RC that runs the SSH service Pod and mounts a secret (later configMap) volume containing a configuration file, in the format of your choice. The config file would include the AllowedGroups. Once you have v1.2 with ConfigMap, which will be in like a month, you will be able to update the config by POSTing a new configMap to the apiserver, without needing the SSH service to restart. (It should watch the config file for changes.) Basically, you can think of a ConfigMap as a simpler version of a ThirdParty resource.
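A minimal sketch of that pattern, with every name, image and config format made up for illustration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssh-service-config
data:
  ssh-service.conf: |
    # consumed by the SSH service container; the format is entirely up to you
    allowedGroups: admins,developers
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: ssh-service
spec:
  replicas: 1
  selector:
    app: ssh-service
  template:
    metadata:
      labels:
        app: ssh-service
    spec:
      containers:
      - name: sshd
        image: registry.example.com/ssh-service:latest   # placeholder image
        volumeMounts:
        - name: config
          mountPath: /etc/ssh-service   # the service watches this file for changes
      volumes:
      - name: config
        configMap:
          name: ssh-service-config
Updating the ConfigMap (or, before v1.2, the secret) then changes the mounted file in place without redeploying the RC.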
As far as SSHUsers, you could use a ThirdParty resource and have the SSH controller watch the SSHUsers endpoint for changes. (Come to think of it, I'm not sure how you watch a third party resource.)
Or maybe you want to just put the BelongToGroups information into the same ConfigMap. This gives you the "transactionality" you wanted. It just means that updates to the config are serialized and require an operator or cron job to push the config. Maybe that is not so bad?