I've got a grasp on mounting a single iSCSI "endurance" block storage volume on a SoftLayer VSI under Linux (CentOS 6). However, I'm wondering whether there are instructions available for mounting multiple different volumes of the same type at the same time?
Thanks!
It should be possible to mount multiple iSCSI devices to a VM at the same time; when you run the discovery step of the mount procedure, all of the iSCSI devices authorized to the host should be displayed, and you can log in to each of them.
You can follow these instructions (a rough command sketch follows the links):
https://knowledgelayer.softlayer.com/procedure/accessing-block-storage-linux
https://softlayerslayers.wordpress.com/storage/endurance-storage-overview/
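A minimal sketch of the multi-volume flow on CentOS 6, assuming two endurance volumes have already been authorized to the VSI; the portal IPs, IQNs and CHAP credentials below are placeholders for the values shown in the SoftLayer portal:
# Placeholder portal IPs for the two volumes
PORTAL1=10.2.0.1
PORTAL2=10.3.0.1
# Discover the targets behind each portal (discovery may also require CHAP, see the linked docs)
iscsiadm -m discovery -t sendtargets -p "$PORTAL1"
iscsiadm -m discovery -t sendtargets -p "$PORTAL2"
# Configure CHAP for each IQN reported by discovery (placeholder IQNs and credentials)
for IQN in iqn.2005-05.com.example:volume1 iqn.2005-05.com.example:volume2; do
  iscsiadm -m node -T "$IQN" -o update -n node.session.auth.authmethod -v CHAP
  iscsiadm -m node -T "$IQN" -o update -n node.session.auth.username -v CHAP_USER
  iscsiadm -m node -T "$IQN" -o update -n node.session.auth.password -v CHAP_PASS
done
# Log in to every configured target; each volume appears as its own device
iscsiadm -m node --login
fdisk -l    # e.g. /dev/sdb and /dev/sdc (or /dev/mapper/* when multipath is used)
# Format (first use only) and mount each volume on its own mount point
mkfs.ext4 /dev/sdb && mkdir -p /mnt/endurance1 && mount /dev/sdb /mnt/endurance1
mkfs.ext4 /dev/sdc && mkdir -p /mnt/endurance2 && mount /dev/sdc /mnt/endurance2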
Regards
Related
I am new to Kubernetes and am working on a computer vision project in which some of the services are deployed in Kubernetes and some run on a cluster of physical servers (Nvidia Jetson boards) that have GPUs. Can the non-Kubernetes services access a Persistent Volume of the K8s environment? Please let me know:
How can I expose a Persistent Volume from K8s and mount it as a shared drive on a different physical server?
Instead of using a Persistent Volume, can I have a volume on the host machine where K8s is deployed and use it for both k8s and non-k8s services?
Please note that we are connecting cameras via USB to each of those Jetson boards, so we cannot bring the Jetson boards in as nodes under K8s.
Not possible. A Persistent Volume is a Kubernetes API object that points at some underlying storage; it is not something you can export from the cluster and mount on an outside server.
This is a better approach. For example, you can use a NAS to back both the k8s cluster and the Nvidia board cluster, so both clusters can share files through the NAS-mounted volume. For pods in the k8s cluster, accessing the mount point is as simple as using hostPath, or a more sophisticated storage driver, depending on your storage architecture. A rough sketch is below.
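A minimal sketch of that idea, assuming an NFS export nas.example.local:/shared reachable from both sides; the server name, export path and image are made-up placeholders:
# On each Jetson board: mount the shared export directly
mkdir -p /mnt/shared
mount -t nfs nas.example.local:/shared /mnt/shared
# On the k8s side: give pods the same export via an nfs volume
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: vision-service
spec:
  containers:
  - name: app
    image: example/vision-app:latest   # placeholder image
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    nfs:
      server: nas.example.local
      path: /shared
EOF
# Alternatively, if the export is already mounted on every worker node at
# /mnt/shared, a hostPath volume works too (simpler, but ties pods to nodes
# that actually have the mount):
#   volumes:
#   - name: shared
#     hostPath:
#       path: /mnt/shared
#       type: Directory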
I'm teaching myself Kubernetes with a 5 Rpi cluster, and I'm a bit confused by the way Kubernetes treats Persistent Volumes with respect to Pod Scheduling.
I have 4 worker nodes using ext4 formatted 64GB micro SD cards. It's not going to give GCP or AWS a run for their money, but it's a side project.
Let's say I create a Persistent Volume Claim requesting 10GB of storage on worker1, and I deploy a service which relies on this PVC. Is that service then forced to be scheduled on worker1?
Should I be looking into distributed file systems like Ceph or HDFS so that Pods aren't restricted to being scheduled on a particular node?
Sorry if this seems like a stupid question; I'm self-taught and still trying to figure this stuff out! (Feel free to improve my tl;dr doc for kubernetes with a pull req)
Just some examples; as already mentioned, it depends on your storage system. As I see it, you are using the local storage option (a sketch of that case follows after this list).
Local Storage:
Yes, the pod needs to run on the same machine where the PV is located (your case).
iSCSI/Trident SAN:
No, the node where the pod gets scheduled will mount the iSCSI block device.
(As mentioned already, volume binding mode is an important keyword; it's possible you need to set this to 'WaitForFirstConsumer'.)
NFS/Trident NAS:
No, it's NFS, mountable from anywhere as long as you can reach it and authenticate against it.
VMware VMDKs:
No, same as iSCSI: the node which gets the pod scheduled mounts the VMDK from the datastore.
Ceph/rook.io:
No, you get three storage options (file, block and object storage); every type is distributed, so you can schedule a pod on any node.
Ceph is also an ideal system for running distributed, software-defined storage on commodity hardware. What I can recommend is https://rook.io/, basically open-source Ceph on 'container steroids'.
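To illustrate the local-storage case, a hedged sketch of a local PersistentVolume pinned to worker1 plus a matching StorageClass; the names, size and /mnt/sdcard path are assumptions for illustration:
cat <<EOF | kubectl apply -f -
# StorageClass with no dynamic provisioner; binding waits until a pod is scheduled
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# Local PV that only exists on worker1, so any pod whose PVC binds to it lands on worker1
apiVersion: v1
kind: PersistentVolume
metadata:
  name: worker1-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  local:
    path: /mnt/sdcard              # pre-created mount point on worker1 (placeholder)
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["worker1"]
EOF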
Let's say I create a Persistent Volume Claim requesting 10GB of storage on worker1, and I deploy a service which relies on this PVC. Is that service then forced to be scheduled on worker1?
This is a good question. How this works depends on your storage system. The StorageClass defined for your Persistent Volume Claim contains information about the Volume Binding Mode. It is common to use dynamically provisioned volumes, so that the volume is first allocated when a user/consumer/Pod is scheduled. Typically such a volume does not live on the local node but somewhere remote in the same data center. Kubernetes also has support for Local Persistent Volumes that are physical volumes located on the same node, but they are typically more expensive and are used when you need high disk performance and capacity.
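As a hedged illustration of dynamic provisioning with deferred binding, a StorageClass backed by a remote provisioner and a PVC against it might look like this; the provisioner name is a placeholder for whatever your storage backend actually uses:
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: remote-block
provisioner: example.com/csi-driver      # placeholder, depends on your backend
volumeBindingMode: WaitForFirstConsumer  # volume is created when a pod is scheduled
---
# PVC against that class; a pod using it can be scheduled on any node
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: remote-block
  resources:
    requests:
      storage: 10Gi
EOF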
I'm using kubernetes v1.16.10 with a Ceph 13.2.2 Mimic cluster for dynamic volume provisioning through ceph-csi.
But then I have found Ceph RBD (kubernetes.io/rbd):
https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd
while, according to the Ceph CSI (rbd.csi.ceph.com) documentation:
https://docs.ceph.com/docs/master/rbd/rbd-kubernetes/#block-devices-and-kubernetes
You may use Ceph Block Device images with Kubernetes v1.13 and later through ceph-csi, which dynamically provisions RBD images to back Kubernetes volumes and maps these RBD images as block devices (optionally mounting a file system contained within the image) on worker nodes running pods that reference an RBD-backed volume.
So... which one should I use?
Advantages / disadvantages?
Thanks in advance.
I don't know the exact differences, but I was told by a Ceph CSI developer that Ceph RBD (kubernetes.io/rbd), i.e. the in-tree driver, will be deprecated in a few Kubernetes releases. I don't have any references to official documentation, as this was a Slack conversation.
So the CSI driver is the way forward and makes it more future proof.
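For comparison, a rough sketch of a StorageClass using the CSI driver; the cluster ID, pool and secret names are placeholders, and the exact parameter set should be checked against the ceph-csi examples for your version:
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd
provisioner: rbd.csi.ceph.com                 # the CSI driver, not kubernetes.io/rbd
parameters:
  clusterID: "ceph-cluster-fsid-here"         # placeholder: your Ceph cluster fsid
  pool: kube                                  # placeholder: RBD pool name
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
EOF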
Say I have a targetd-like iSCSI server which (just like targetd) can provision iSCSI LUNs via an API. To make this iSCSI server work with K8s dynamic PV provisioning, I've found two possible solutions after some Googling.
The first solution is CSI. Basically, I need to implement a CSI plugin that translates volume creation requests into LUN creation API calls, and also translates staging/mount requests into iscsiadm commands.
However, since I already knew that K8s supports statically pre-provisioned iSCSI LUNs out of the box, I wondered if I could implement just the dynamic provisioning part and leave all the heavy lifting (mount and iscsiadm commands) to the built-in iSCSI functionality. Later on, I found the iSCSI-targetd provisioner for K8s. It seems much simpler than a CSI plugin, and it only took 150 LOC to implement a provisioner for my iSCSI server.
I have a vague impression that the K8s community is now moving towards CSI for external storage integrations. Does this mean that the latter, external-provisioner approach could be deprecated and that I should move to a CSI plugin?
In fact, CSI is the standardized way of doing storage provisioning. You can get iSCSI (emulated) block storage with several options nowadays; based on my experience, I would recommend:
rook.io: Really great, with good docs and coverage of different aspects of storage (block, file, object) and different backends...
gluster-block: a plug-in for Gluster storage, used in combination with heketi. See the k8s provisioning docs.
By the way, Gluster is the CSI solution adopted by Red Hat on OpenShift 3, and it is pretty decent; it feels like OpenShift 4 will go with something Ceph-based (most likely Rook).
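For reference, the second approach in the question boils down to having the external provisioner call the LUN-creation API and then emit a plain in-tree iSCSI PersistentVolume, roughly like the hedged sketch below (portal, IQN and LUN number are placeholders); kubelet then handles the iscsiadm login and mounting:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv-0001
spec:
  capacity:
    storage: 20Gi
  accessModes: ["ReadWriteOnce"]
  iscsi:
    targetPortal: "192.168.10.10:3260"        # placeholder portal
    iqn: iqn.2003-01.org.example:target1      # placeholder IQN
    lun: 1
    fsType: ext4
    readOnly: false
    # chapAuthSession: true                   # plus a secretRef if CHAP is required
EOF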
I have been looking at the available Kubernetes storage add-ons and have been unable to put together something that would work with our setup. The current situation is several nodes, each with an FC HBA connected to a single LUN. I realize that some sort of cluster FS will need to be implemented, but once that is in place I don't see how I would then connect it to Kubernetes.
We've discussed taking what we have and making an iSCSI or NFS host, but in addition to requiring another dedicated machine, we would lose all the advantages of having the storage directly available on each node. Is there any way to make use of our current infrastructure?
Details:
4x Kubernetes nodes (1 master) deployed via kubeadm on Ubuntu 16.04, using flannel as the network addon; each system has the SAN available as a block device (/dev/sdb).