Deploying OpenStack Magnum on bare metal - Kubernetes

When deploying a Kubernetes cluster with OpenStack Magnum on bare metal nodes, is it somehow possible to leverage the local disks on those nodes as persistent storage for containers?
Thanks a lot in advance.

OpenStack Magnum uses Cinder to provision storage for the Kubernetes cluster. As you can read here:
In some use cases, data read/written by a container needs to persist
so that it can be accessed later. To persist the data, a Cinder volume
with a filesystem on it can be mounted on a host and be made available
to the container, then be unmounted when the container exits.
...
Kubernetes allows a previously created Cinder block to be mounted to a
pod and this is done by specifying the block ID in the pod YAML file.
When the pod is scheduled on a node, Kubernetes will interface with
Cinder to request the volume to be mounted on this node, then
Kubernetes will launch the Docker container with the proper options to
make the filesystem on the Cinder volume accessible to the container
in the pod. When the pod exits, Kubernetes will again send a request
to Cinder to unmount the volume’s filesystem, making it available to
be mounted on other nodes.
Its usage is described in this section of the documentation.
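As a minimal sketch of what the quoted passage describes, a pod can reference a pre-created Cinder volume by its ID using the legacy in-tree cinder volume type (newer clusters use the Cinder CSI driver instead); the pod name is hypothetical and the volume ID is a placeholder:

apiVersion: v1
kind: Pod
metadata:
  name: cinder-demo                     # hypothetical name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data              # where the Cinder volume appears in the container
  volumes:
    - name: data
      cinder:                           # legacy in-tree Cinder volume plugin
        volumeID: <cinder-volume-id>    # ID of the pre-created Cinder volume
        fsType: ext4
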
If setting up Cinder seems like too much overhead, you can use the local volume type, which lets you use a local storage device such as a disk, partition, or directory that is already mounted on a worker node's filesystem.
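A minimal sketch of such a local PersistentVolume, assuming a disk already mounted at /mnt/disks/ssd1 on a bare metal worker named worker-node-1 (both hypothetical):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv                        # hypothetical name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1               # disk/partition already mounted on the node
  nodeAffinity:                         # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1         # the bare metal node that owns the disk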

Related

Persistent Volume and Kubernetes upgrade

What happens to the persistent volume after a cluster upgrade?
The Kubernetes cluster runs a stateful application. It has one PV and a corresponding PVC for storing input data. I would like to understand whether there is a way to preserve the input data during a K3s upgrade.
Kubernetes PVs are not created on the node's disk storage: when you kill your StatefulSet pod, it may be redeployed on a different node with the same PV.
Most cloud providers use their block storage services as the default backend for Kubernetes PVs (e.g., AWS EBS), and they provide other CSI (Container Storage Interface) drivers to use other storage services (e.g., an NFS service).
So when you upgrade your cluster, you can re-use your data as long as it is stored outside the cluster; you just need to check which CSI driver you are using and read its documentation to understand where the backing volume is created.
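One way to make that explicit is a StorageClass with reclaimPolicy: Retain, so the backing volume outlives the PV/PVC objects during a rebuild. A minimal sketch; the name is hypothetical and the provisioner shown is the AWS EBS CSI driver, so substitute whatever your cluster actually reports:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retained-storage                # hypothetical name
provisioner: ebs.csi.aws.com            # example CSI driver; use the one your cluster reports
reclaimPolicy: Retain                   # keep the backing volume when the PVC/PV are deleted
volumeBindingMode: WaitForFirstConsumer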

Kubernetes: minikube persistent volume local filesystem storage location

I've read through all the docs and a few SO posts and can't find an answer to this question:
Where does minikube persist its persistent volumes in my local Mac file system?
Thanks
First of all, keep in mind that Kubernetes is running in the Minikube cluster. Minikube itself runs in a virtual machine, so all data is stored in that VM, not on your macOS host.
When you want to specify the exact place where this data should be saved in Kubernetes, you can choose between:
hostPath
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
local
A local volume represents a mounted local storage device such as a disk, partition or directory.
Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported yet.
Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume's node constraints by looking at the node affinity on the PersistentVolume.
However, Minikube supports only hostPath.
In this case you should check the Minikube documentation about Persistent Volumes:
minikube supports PersistentVolumes of type hostPath out of the box. These PersistentVolumes are mapped to a directory inside the running minikube instance (usually a VM, unless you use --driver=none, --driver=docker, or --driver=podman). For more information on how this works, read the Dynamic Provisioning section below.
minikube is configured to persist files stored under the following
directories, which are made in the Minikube VM (or on your localhost
if running on bare metal). You may lose data from other directories on
reboots.
/data
/var/lib/minikube
/var/lib/docker
/tmp/hostpath_pv
/tmp/hostpath-provisioner
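A minimal sketch of a hostPath PersistentVolume that uses one of those persisted directories (the PV name and sub-path are hypothetical):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: minikube-data-pv                # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/my-app                  # under /data, so it survives minikube reboots
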
If you would like to mount a directory from the host, you need to use minikube mount.
$ minikube mount <source directory>:<target directory>
For more details, please check Minikube Mounting filesystems documentation.
If you are using the hostPath volume type, the files are saved on your node.
To access your node's filesystem you can use the command minikube ssh; under your mounted path you'll find your files.

How to attach an EKS volume directly to an EKS Pod

I have a requirement to mount an EFS file system that has been created in AWS directly to a pod in an EKS cluster, without mounting it on the actual EKS node.
My understanding was that if the EFS can be treated as an NFS server, then a PV/PVC can be created out of this and then directly mounted onto an EKS Pod.
I have done the above using EBS, but with vanilla Kubernetes rather than EKS. I would like to know how to go about it for EFS and EKS. Is it even possible? Most of the documentation I have read says that the mount path is mounted on the node and then into the Kubernetes pods, but I would like to bypass the mount on the node and mount it directly into the EKS pods.
Are there any documentations that I can refer?
That is not possible: pods run on nodes, so the file system has to be mounted on the nodes that host the pods.
Even when you did it with EBS, under the bonnet it was still attached to the node first.
However, you can restrict access to AWS resources with IAM using kube2iam, or you can use the EKS-native solution that assigns IAM roles to Kubernetes service accounts. The benefit of kube2iam is that it will still work with kOps should you migrate to it from EKS.
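For completeness, here is a rough sketch of the NFS-style approach the question mentions; note that the kubelet on the node still performs the NFS mount, which is exactly the point above. The PV/PVC names are hypothetical and the EFS DNS name is a placeholder:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv                          # hypothetical name
spec:
  capacity:
    storage: 5Gi                        # required by the API, not enforced by EFS/NFS
  accessModes:
    - ReadWriteMany
  nfs:
    server: <efs-file-system-dns-name>  # e.g. the EFS mount target DNS name (placeholder)
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc                         # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""                  # bind to the statically created PV above
  resources:
    requests:
      storage: 5Gi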

How does Kubernetes know on which node to schedule a Pod when PVs are backed by logical volumes?

If I implement a CSI driver that creates logical volumes via the lvcreate command and hands those volumes to Kubernetes to make PVs from, how will Kubernetes know the volume/node association so that it can schedule a Pod that uses such a PV on the node where the newly created logical volume resides? Does it just happen automagically?
The Kubernetes scheduler can be influenced using volume topology.
Here is the design proposal, which walks through the whole topic:
Allow topology to be specified for both pre-provisioned and dynamic provisioned PersistentVolumes so that the Kubernetes scheduler can correctly place a Pod using such a volume to an appropriate node.
Volume Topology-aware Scheduling
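A rough sketch of what that looks like in practice, assuming a hypothetical LVM CSI driver named example.lvm.csi: the PV carries node affinity so the scheduler knows where the logical volume lives, and the StorageClass delays binding until a pod is scheduled:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: lvm-pv-node1                    # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: lvm-local
  csi:
    driver: example.lvm.csi             # hypothetical CSI driver name
    volumeHandle: vg0/lv-data           # hypothetical volume group / logical volume
  nodeAffinity:                         # tells the scheduler where the LV actually is
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1                # the node where lvcreate ran
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: lvm-local
provisioner: example.lvm.csi            # hypothetical
volumeBindingMode: WaitForFirstConsumer # bind/provision only after a pod is scheduled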

Who can create a persistent volume in Kubernetes?

The official Kubernetes documentation says the following about PVs and PVCs:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).
Who is the administrator here, when it is mentioned from the persistent volume perspective?
An administrator in this context is the admin of the cluster: whoever is deploying the PV/PVC (an operations engineer, systems engineer, or sysadmin).
For example, an engineer can configure AWS Elastic File System to make space available to the Kubernetes cluster, then use a PV/PVC to expose it to a specific pod container in the cluster. This means that if the pod is destroyed for whatever reason, the data behind the PVC persists and is available to other resources.
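As an illustration of that split in roles (all names, the NFS server, and the export path are hypothetical): the administrator creates the PV, and the application user creates the PVC that claims it.

# Created by the cluster administrator
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.internal        # hypothetical NFS/EFS endpoint
    path: /exports/shared
---
# Created by the application user / developer
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""                  # bind to the statically created PV above
  resources:
    requests:
      storage: 10Gi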