Dynamic provisioning of Cinder volume and Persistent volume using Terraform through Kubernetes

I have been doing research, trying to find out whether there is a way to create Cinder and Persistent Volumes dynamically using Terraform through Kubernetes. I have been working from these docs:
https://www.terraform.io/docs/providers/kubernetes/r/persistent_volume.html
https://docs.okd.io/latest/install_config/persistent_storage/persistent_storage_cinder.html
but it looks like the Cinder volume must be created manually first, and only then can a Persistent Volume be associated with the already created "volume_id".
However, I believe there is a way to create the PV dynamically, judging by
https://www.terraform.io/docs/providers/kubernetes/d/storage_class.html
but I am not sure what it should look like, or whether it is possible using Terraform at all.
Thank you!

I found the way. Here is how to do it: https://kubernetes.io/blog/2017/03/dynamic-provisioning-and-storage-classes-kubernetes/ , https://www.terraform.io/docs/providers/kubernetes/r/storage_class.html and https://kubernetes.io/docs/concepts/storage/storage-classes/#openstack-cinder
So when you deploy with Terraform, you must set "storage_class_name = name_of_your_class" in the "spec" section of your "kubernetes_persistent_volume_claim" resource.
The storage class must be created in Kubernetes before that.
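For illustration, a minimal Terraform sketch of that setup (the class name, the "gold" volume type and the "nova" availability zone are just placeholder values to adjust for your OpenStack cloud):

resource "kubernetes_storage_class" "cinder" {
  metadata {
    name = "cinder-dynamic"              # hypothetical class name
  }
  storage_provisioner = "kubernetes.io/cinder"   # in-tree Cinder provisioner
  parameters = {
    type         = "gold"                # Cinder volume type
    availability = "nova"                # availability zone
  }
}

resource "kubernetes_persistent_volume_claim" "example" {
  metadata {
    name = "example-claim"
  }
  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = kubernetes_storage_class.cinder.metadata[0].name
    resources {
      requests = {
        storage = "5Gi"
      }
    }
  }
}

With this, creating the claim triggers dynamic creation of the Cinder volume and its PV, so no pre-created "volume_id" is needed.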

Related

ECS instance metadata files for EKS

I know that in the Amazon ECS container agent, setting the variable ECS_ENABLE_CONTAINER_METADATA=true creates metadata files for the containers.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-metadata.html
Is there any similar feature for EKS? I would like to retrieve instance metadata from a file inside the container instead of using the IMDSv2 API.
You simply can't; you still need to use the IMDSv2 API in your service if you want to get instance metadata.
If you're looking for Pod metadata instead, see https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
or you can use pod labels too...
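If what you actually need as files is Pod metadata (rather than EC2 instance metadata), a minimal sketch of the Downward API volume approach mentioned above could look like this (pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: metadata-demo            # hypothetical name
  labels:
    app: demo
spec:
  containers:
    - name: app
      image: busybox             # placeholder image
      command: ["sh", "-c", "cat /etc/podinfo/labels && sleep 3600"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo   # metadata exposed as files here
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
          - path: name
            fieldRef:
              fieldPath: metadata.name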
Try adding this as part of the user data:
"echo 'ECS_ENABLE_CONTAINER_METADATA=1' > /etc/ecs/ecs.config"
Found here: https://github.com/aws/amazon-ecs-agent/issues/1514

How to backup PVC regularly

What can be done to backup kubernetes PVC regularly for GCP and AWS?
GCP has VolumeSnapshot but I'm not sure how to schedule it, like every hour or every day.
I also tried Gemini by Fairwinds, but I get the following error on GCP. I installed the charts as mentioned in the README.md and I can't find anyone else encountering the same error.
error: unable to recognize "backup-test.yml": no matches for kind "SnapshotGroup" in version "gemini.fairwinds.com/v1beta1"
You can implement Velero, which gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes.
Unfortunately, Velero only allows you to back up and restore PVs, not PVCs.
Velero's restic integration backs up data from volumes by accessing the filesystem of the node on which the pod is running. For this reason, the restic integration can only back up volumes that are mounted by a pod, not directly from the PVC.
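To cover the "every hour or every day" part of the question: Velero supports scheduled backups via a Schedule resource. A rough sketch (the name, cron expression, and TTL below are just example values to adapt):

apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-volume-backup       # hypothetical name
  namespace: velero
spec:
  schedule: "0 1 * * *"           # cron: every day at 01:00
  template:
    includedNamespaces:
      - "*"                       # back up all namespaces
    snapshotVolumes: true         # snapshot the persistent volumes
    ttl: 720h0m0s                 # keep backups for 30 days

The same thing can also be created with the velero CLI (velero schedule create ... --schedule="...").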
Might wanna look into stash.run
Agree with #hdhruna - Velero is really the most popular tool for doing that task.
However, you can also try miracle2k/k8s-snapshots
Automatic Volume Snapshots on Kubernetes
How is it useful? Simply add an annotation to your PersistentVolume or PersistentVolumeClaim resources, and let this tool create and expire snapshots according to your specifications.
Supported Environments: Google Compute Engine disks, AWS EBS disks.
I evaluated multiple solutions including k8s CSI VolumeSnapshots, https://stash.run/, https://github.com/miracle2k/k8s-snapshots and GCP disk snapshots.
The best one, in my opinion, is the k8s-native implementation of snapshots via the CSI driver, provided you have a cluster version >= 1.17. This allows snapshotting volumes while they are in use, and doesn't require a read-many or write-many volume like Stash does.
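For reference, a minimal sketch of a native CSI snapshot object (the snapshot class name and the PVC name are assumptions, and the CSI snapshot controller/CRDs must already be installed in the cluster):

apiVersion: snapshot.storage.k8s.io/v1   # use v1beta1 on clusters older than 1.20
kind: VolumeSnapshot
metadata:
  name: data-pvc-snapshot                # hypothetical name
spec:
  volumeSnapshotClassName: csi-gce-pd-snapclass   # assumed snapshot class
  source:
    persistentVolumeClaimName: data-pvc           # the PVC to snapshot

Scheduling (hourly/daily) then just means creating such objects periodically, which is what tools like Gemini automate.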
I also chose Gemini by Fairwinds to automate backup creation, deletion and restoration, and it works like a charm.
I believe your problem is caused by that Gemini CRD missing from your cluster. Verify that the CRD is installed correctly and also that the installed version is indeed the version you are trying to use.
My installation went flawlessly using their install guide with Helm.
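For completeness, a SnapshotGroup manifest along these lines is roughly what Gemini expects; the apiVersion and kind match the error above, but the spec field names here are from my reading of the project README, so double-check them against the version you install:

apiVersion: gemini.fairwinds.com/v1beta1
kind: SnapshotGroup
metadata:
  name: backup-test               # hypothetical name
spec:
  persistentVolumeClaim:
    claimName: data-pvc           # existing PVC to snapshot
  schedule:
    - every: day                  # how often to snapshot
      keep: 14                    # how many snapshots to retain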

How to add files/application to the persistent volume for my pods to read from

I have a complete setup running for PHP, including the creation and mounting of a persistent volume. What I do not understand, and so far cannot find a tutorial for, is how to add my application code to the volume so my servers/pods can read from it.
I am using AWS EKS to house my environment and I do see the volume being created.
My end goal would be to get code in an Azure repo onto the volume.
I used this tutorial to set up the environment - https://www.digitalocean.com/community/tutorials/how-to-deploy-a-php-application-with-kubernetes-on-ubuntu-16-04
Great guide but it never told you how to get your code on the volume.

How to execute shell commands from within a Kubernetes ConfigMap?

I am using Helm charts to create and deploy applications into my K8s cluster.
One of my pods requires a config file with an SDK key to start and function properly. This SDK key is considered a secret and is stored in AWS Secrets Manager. I don't include the secret data in my Docker image. I want to be able to mount this config file at runtime. A ConfigMap seems to be a good option in this case, except that I have not been able to figure out how to obtain the SDK key from Secrets Manager during the chart installation. Part of my ConfigMap looks like this:
data:
  app.conf: |
    [sdkkey] # I want to be able to retrieve the SDK key from AWS Secrets Manager
I was looking at ways to write shell commands to use AWS CLI to get secrets, but have not seen a way to execute shell commands from within a ConfigMap.
Any ideas or alternative solutions?
Cheers
K
tl;dr: You can't execute a ConfigMap; it is just a static manifest. Use an init container instead.
A ConfigMap is a static manifest that can be read from the Kubernetes API or injected into a container at runtime as a file or as environment variables. There is no way to execute a ConfigMap.
Additionally, ConfigMaps should not be used for secret data, Kubernetes has a specific resource, called Secrets, to use for secret data. It can be used in similar ways to a ConfigMap, including being mounted as a volume or exposed as environment variables within the container.
Given your description it sounds like your best option would be to use an init container to retrieve the credentials and write them to a shared emptyDir Volume mounted into the container with the application that will use the credentials.
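A minimal sketch of that init-container pattern (the image, the secret id, and the paths are assumptions, and the pod needs IAM permissions for Secrets Manager, e.g. via IRSA):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sdk-key            # hypothetical name
spec:
  volumes:
    - name: config
      emptyDir: {}                  # shared between init container and app
  initContainers:
    - name: fetch-sdk-key
      image: amazon/aws-cli:latest  # assumed image providing the AWS CLI
      command:
        - sh
        - -c
        - >
          aws secretsmanager get-secret-value
          --secret-id my-sdk-key --query SecretString --output text
          > /config/app.conf
      volumeMounts:
        - name: config
          mountPath: /config
  containers:
    - name: app
      image: my-app:latest          # your application image
      volumeMounts:
        - name: config
          mountPath: /etc/app       # the app reads /etc/app/app.conf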

kubernetes : dynamic persistent volume provisioning using iSCSI and NFS

I am using Kubernetes 1.4 persistent volume support with iSCSI/NFS PVs and PVCs successfully in my containers. However, this requires provisioning the storage first, specifying the capacity both at PV creation and when claiming the storage.
My requirement is to just provide storage to the cluster (without specifying its capacity) and let users/developers claim storage based on their requirements. So I need to use dynamic provisioning with a StorageClass: just declare the storage with its details and let developers claim it based on their needs.
However, I got confused about dynamic volume provisioning for iSCSI and NFS using a storage class, and I can't find the exact steps to follow. As per the documentation I need to use an external volume plugin for both of these types, and it has already been made available as part of the incubator project: https://github.com/kubernetes-incubator/external-storage/. But I don't understand how to load/run that external provisioner (I need to run it as a container itself, I guess?) and then write the storage class with the details of the iSCSI/NFS storage.
Can somebody who has already done/used it can guide/provide pointers on this?
Thanks in advance,
picku
The project you pointed to is specific to iSCSI targets running targetd. You basically download the YAML files from https://github.com/kubernetes-incubator/external-storage/tree/master/iscsi/targetd/kubernetes, modify them with your storage provider's parameters, and deploy the pods using kubectl create. In your pods you need to specify a storage class. The storage class then specifies the iSCSI provisioner. There are more steps, but that's the gist of it.
See this link for more detailed instructions https://github.com/kubernetes-incubator/external-storage/tree/master/iscsi/targetd
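As a rough sketch of the last two pieces once the provisioner pod is running (the provisioner name and parameters below follow the targetd example in that repo and must match what your deployed provisioner actually registers; the portal, IQN and volume group are placeholder values):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: iscsi-targetd-vg-targetd        # hypothetical class name
provisioner: iscsi-targetd              # must match the external provisioner
parameters:
  targetPortal: 192.168.0.10:3260       # example target portal
  iqn: iqn.2003-01.org.example.com:targetd
  volumeGroup: vg-targetd
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-iscsi-claim
spec:
  storageClassName: iscsi-targetd-vg-targetd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi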
The OpenEBS community has folks running it this way, AFAIK. There is a blog post here, for example, explaining one approach supporting WordPress: https://blog.openebs.io/setting-up-persistent-volumes-in-rwx-mode-using-openebs-142632244cb2