How to attach an EFS volume directly to an EKS Pod - kubernetes

I have a requirement to attach an EFS file system that has been created in AWS directly to a pod in an EKS cluster, without mounting it on the actual EKS node.
My understanding was that if the EFS file system can be treated as an NFS server, then a PV/PVC can be created from it and mounted directly onto an EKS pod.
I have done the above with EBS, but on a vanilla Kubernetes cluster rather than EKS. I would like to know how to go about it for EFS and EKS. Is it even possible? Most of the documentation I have read says that the volume is mounted on the node and then exposed to the k8s pods, but I would like to bypass the mount on the node and mount it directly to the EKS k8s pods.
Is there any documentation I can refer to?

That is not possible: pods run on nodes, so the file system has to be mounted on the nodes that host the pods.
Even when you did it with EBS, under the bonnet it was still attached to the node first.
However, you can restrict access to AWS resources with IAM using kube2iam, or you can use the EKS-native solution for assigning IAM roles to Kubernetes service accounts. The benefit of using kube2iam is that it will also work with kops, should you ever migrate to it from EKS.
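For reference, here is a minimal sketch of how EFS is usually exposed to pods in EKS through a PV/PVC, assuming the AWS EFS CSI driver is installed in the cluster and using fs-12345678 as a placeholder file system ID; the node still performs the NFS mount on the pod's behalf:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi                  # required by the API, effectively ignored by EFS (elastic)
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678     # placeholder: your EFS file system ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi

The claim can then be referenced from the pod spec like any other PVC.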

Related

Persistent Volume and Kubernetes upgrade

What happens to the persistent volume after a cluster upgrade?
The Kubernetes cluster is for a stateful application. It has one PV and a corresponding PVC for storing input data. I would like to understand whether there is a way to preserve the input data during a K3s upgrade.
Kubernetes PVs are not created on the node's disk storage: when you kill your StatefulSet pod, it may be rescheduled on a different node with the same PV.
Most cloud providers use their block storage services as the default backend for Kubernetes PVs (e.g., AWS EBS), and they provide other CSI (Container Storage Interface) drivers to use other storage services (e.g., an NFS service).
So when you upgrade your cluster, you can re-use your data as long as it is stored outside the cluster; you just need to check which CSI driver you are using and read its documentation to understand where the volume is actually created.
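To illustrate the point about data living outside the cluster, here is a minimal sketch of a statically provisioned PV/PVC whose reclaim policy is Retain, so the underlying storage (an external NFS export in this example; server, path, and names are placeholders) survives PVC deletion and cluster upgrades:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: input-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # data is kept even if the claim or cluster goes away
  nfs:
    server: nfs.example.com               # placeholder: storage that lives outside the cluster
    path: /exports/input-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: input-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""                    # bind to the pre-created PV, skip dynamic provisioning
  volumeName: input-data-pv
  resources:
    requests:
      storage: 10Gi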

Deploying Openstack Magnum on bare metal

When deploying a Kubernetes cluster with OpenStack Magnum (on bare metal nodes), is it somehow possible to leverage the local disks on those nodes as persistent storage for containers?
Thanks a lot in advance.
OpenStack Magnum uses Cinder to provision storage for the Kubernetes cluster. As you can read here:
In some use cases, data read/written by a container needs to persist so that it can be accessed later. To persist the data, a Cinder volume with a filesystem on it can be mounted on a host and be made available to the container, then be unmounted when the container exits.
...
Kubernetes allows a previously created Cinder block to be mounted to a pod and this is done by specifying the block ID in the pod YAML file. When the pod is scheduled on a node, Kubernetes will interface with Cinder to request the volume to be mounted on this node, then Kubernetes will launch the Docker container with the proper options to make the filesystem on the Cinder volume accessible to the container in the pod. When the pod exits, Kubernetes will again send a request to Cinder to unmount the volume’s filesystem, making it available to be mounted on other nodes.
Its usage is described in this section of the documentation.
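For illustration, the in-tree cinder volume source described above looks roughly like this in a pod spec (the volume ID is a placeholder, and newer clusters would use the Cinder CSI driver rather than the deprecated in-tree plugin):

apiVersion: v1
kind: Pod
metadata:
  name: cinder-example
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumes:
    - name: data
      cinder:                                            # in-tree Cinder volume plugin
        volumeID: 573e024d-5235-49ce-8332-be1576d323f8   # placeholder: your Cinder volume ID
        fsType: ext4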
If setting up Cinder seems like too much overhead, you can use the local volume type, which allows you to use a local storage device such as a disk, partition, or directory already mounted on a worker node's filesystem, as sketched below.
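A minimal sketch of such a local PV, assuming /mnt/disks/ssd1 is already mounted on a worker node named worker-1 (both are placeholders); note that local PVs need a nodeAffinity so pods using them are scheduled onto the node that actually holds the disk:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1             # disk, partition, or directory already mounted on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1            # placeholder node name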

How to use Redhat CloudForms Cinder volumes for Kubernetes PersistentVolumes

I have deployed a Kubernetes cluster with 3 worker nodes on the Red Hat CloudForms platform. The master node and all the worker nodes are deployed in virtual instances created in CloudForms. The Kubernetes cluster has been deployed successfully, and I was also able to deploy an application on top of it.
Now I want to use a PersistentVolume for my application, so I created a block storage volume in CloudForms. It lists "Openstack cinder manager" as the storage manager. I have not attached the volume to any of the worker nodes or the master node, since it should be accessible from all 3 worker nodes.
My question is how to bind my block storage volume to a Kubernetes PersistentVolume. What should I use for the volume type field in the PersistentVolume configuration YAML?
What further configuration is needed for the binding?
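As a hedged sketch, a Cinder-managed block device is typically referenced from a PersistentVolume via the cinder volume type (or the Cinder CSI driver on newer clusters); the volume ID and size below are placeholders taken from whatever CloudForms/OpenStack reports for the volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: cinder-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce                         # a Cinder block device attaches to one node at a time
  persistentVolumeReclaimPolicy: Retain
  cinder:
    volumeID: 11111111-2222-3333-4444-555555555555   # placeholder: the Cinder volume ID
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cinder-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: cinder-pv
  resources:
    requests:
      storage: 10Gi

Note that a block volume like this can only be attached to a single node at a time, so it will not be simultaneously accessible from all 3 workers.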

Kubernetes: How to re-use pvc and pv in another cluster

Currently I have Artifactory deployed in a cluster, but now that cluster is down and I can't find the reason, so I have started another cluster.
The data from the old cluster is on a cloud disk, from which I created a PV and a PVC. Now I want to mount that disk in the new cluster and use the data. Is that possible, and how do I implement it?
Thanks.
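A hedged sketch of the usual approach, since the cloud provider isn't specified (the driver name, volume handle, size, and names below are all placeholders): make sure the old PV's reclaim policy is Retain so the disk isn't deleted, then statically create a PV in the new cluster that points at the existing disk, plus a PVC that binds to it by name:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: artifactory-data
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: example.csi.vendor.com          # placeholder: your cloud's CSI driver name
    volumeHandle: existing-disk-id          # placeholder: ID of the existing cloud disk
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: artifactory-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""                      # skip dynamic provisioning
  volumeName: artifactory-data
  resources:
    requests:
      storage: 200Gi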

In GCP Kubernetes (GKE) how do I assign a stateless pod created by a deployment to a provisioned vm

I have several operational deployments on minikube locally and am trying to deploy them on GCP with kubernetes.
When I describe a pod created by a deployment (which created a ReplicaSet that spawned the pod):
kubectl get po redis-sentinel-2953931510-0ngjx -o yaml
the output indicates that it landed on one of the Kubernetes VMs.
I'm having trouble with deployments that work separately but fail due to a lack of resources (e.g. CPU), even though I provisioned a VM above the requirements. I suspect the cluster is placing the pods on its own nodes and running out of resources.
How should I proceed?
Do I introduce a VM to be orchestrated by Kubernetes?
Do I enlarge the Kubernetes nodes?
Or something else altogether?
It was a resource problem, and the node pool size was inhibiting the deployments. I was mistaken in trying to provision Google Compute Engine instances and disks myself.
I ended up provisioning Kubernetes node pools with more CPU and disk space, which solved it. I also added elasticity by enabling autoscaling.
Here is the node pool documentation.
Here is a Terraform Kubernetes deployment example.
Here is the machine type documentation.
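Related to the "lack of resources" symptom: pods are scheduled against a node's allocatable CPU and memory based on their resource requests, so it is worth checking that the requests declared in the deployment actually fit the node pool's machine type. A hedged sketch with placeholder names and numbers:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-sentinel            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-sentinel
  template:
    metadata:
      labels:
        app: redis-sentinel
    spec:
      containers:
        - name: sentinel
          image: redis:7
          resources:
            requests:
              cpu: "250m"         # must fit within the node pool's allocatable CPU
              memory: "256Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"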