Mount volume from sidecar with Longhorn on k3s - Kubernetes

I am using k3s on-prem with Longhorn as central storage, and I am running multiple deployments of Joomla. I have to mount an NFS container as a sidecar to upload data to the containers' persistent volumes. The problem is that Longhorn storage is block storage and can only be mounted by one pod. Is it possible to use a sidecar container with an FTP service to access the Joomla pod's filesystem through the mounted volume? Or is there another way to do this?
Greetings
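One pattern that may fit: a ReadWriteOnce volume can be mounted by every container of the same pod, so an upload sidecar placed next to Joomla inside one pod sidesteps the one-pod attach limit entirely. Below is a minimal sketch of that idea; the claim name joomla-data, the atmoz/sftp image, and the credentials are illustrative assumptions, not part of the original question.

```yaml
# Sketch: SFTP sidecar sharing the Longhorn-backed PVC with Joomla.
# All names, images, and credentials here are assumptions for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: joomla
spec:
  replicas: 1                        # RWO still limits each PVC to one pod
  selector:
    matchLabels:
      app: joomla
  template:
    metadata:
      labels:
        app: joomla
    spec:
      containers:
        - name: joomla
          image: joomla
          volumeMounts:
            - name: joomla-data
              mountPath: /var/www/html
        - name: sftp-sidecar         # shares the pod's volume mounts
          image: atmoz/sftp
          args: ["uploader:change-me:1001"]  # example user:pass:uid, not for production
          volumeMounts:
            - name: joomla-data
              mountPath: /home/uploader/html
      volumes:
        - name: joomla-data
          persistentVolumeClaim:
            claimName: joomla-data   # the Longhorn RWO claim
```

Because both containers live in one pod, the Longhorn block volume is only ever attached to a single node at a time, which is exactly what block storage allows.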

Related

Deploying OpenStack Magnum on bare metal

When speaking about an OpenStack Magnum deployment of a Kubernetes cluster (on bare-metal nodes), is it somehow possible to leverage local disks on those nodes to act as persistent storage for containers?
Thanks a lot in advance.
OpenStack Magnum uses Cinder to provision storage for the Kubernetes cluster. As you can read here:
In some use cases, data read/written by a container needs to persist so that it can be accessed later. To persist the data, a Cinder volume with a filesystem on it can be mounted on a host and be made available to the container, then be unmounted when the container exits.
...
Kubernetes allows a previously created Cinder block to be mounted to a pod and this is done by specifying the block ID in the pod YAML file. When the pod is scheduled on a node, Kubernetes will interface with Cinder to request the volume to be mounted on this node, then Kubernetes will launch the Docker container with the proper options to make the filesystem on the Cinder volume accessible to the container in the pod. When the pod exits, Kubernetes will again send a request to Cinder to unmount the volume's filesystem, making it available to be mounted on other nodes.
Its usage is described in this section of the documentation.
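For reference, referencing a pre-created Cinder volume by its block ID looked roughly like the sketch below with the in-tree cinder volume plugin (since deprecated in favor of the Cinder CSI driver); the volume ID is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cinder-example
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      cinder:                              # legacy in-tree Cinder plugin
        volumeID: "<cinder-volume-uuid>"   # placeholder for the pre-created block ID
        fsType: ext4
```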
If setting up Cinder seems like too much overhead, you can use the local volume type, which allows you to use a local storage device such as a disk, partition, or directory already mounted on a worker node's filesystem, as sketched below.
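A minimal sketch of such a local PersistentVolume follows; the path, capacity, and node name are assumptions. Local PVs are typically paired with a StorageClass whose volumeBindingMode is WaitForFirstConsumer, so scheduling waits until a pod claims the volume.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1                  # disk/partition already mounted on the node
  nodeAffinity:                            # required: pins the PV to the node with the disk
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1                 # assumed node name
```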

How to attach an EKS volume directly to an EKS Pod

I have a requirement where I would like to mount an EFS file system that has been created in AWS directly to a pod in an EKS cluster, without mounting it on the actual EKS node.
My understanding was that if the EFS can be treated as an NFS server, then a PV/PVC can be created out of it and mounted directly onto an EKS pod.
I have done the above using EBS, but with vanilla Kubernetes rather than EKS. I would like to know how to go about it for EFS and EKS. Is it even possible? Most of the documentation I have read says that the mount path is mounted on the node and then into the k8s pods, but I would like to bypass mounting on the node and mount it directly to the EKS pods.
Is there any documentation I can refer to?
That is not possible, because pods run on nodes, so the file system has to be mounted on the nodes that host the pods.
Even when you did it with EBS, under the bonnet it was still attached to the node first.
However, you can restrict access to AWS resources with IAM using kube2iam, or you can use the EKS-native solution to assign IAM roles to Kubernetes service accounts. The benefit of using kube2iam is that it will also work with kOps, should you migrate to it from EKS.
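For completeness, treating EFS as an NFS server with a pre-created PV/PVC looks roughly like the sketch below; the kubelet still performs the mount on the node, as the answer explains. The file-system ID and region in the server address are invented for illustration.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi                     # NFS ignores this, but the field is required
  accessModes:
    - ReadWriteMany
  nfs:
    server: fs-12345678.efs.us-east-1.amazonaws.com   # assumed EFS DNS name
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""               # bind to the pre-created PV, no dynamic provisioning
  resources:
    requests:
      storage: 5Gi
```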

On GCE Kubernetes, how can I create volumes where multiple consumers can write?

The K8s volume documentation mentions that only a single consumer can write to a GCE PD. What can be used on GCE for volumes where multiple consumers can write simultaneously, for example when hosting a private Docker registry?
I see a sample for creating an NFS volume on GCE. Is there a straightforward solution that I am missing?
I followed this solution to:
1. create a GCE PD,
2. host an NFS server with the GCE volume mounted at "/exports",
3. use this NFS server as a volume.
This was easy to do. One change I made was to set storageClassName: "" on the GCE PD PV and PVC, as I did not have a default storage class.
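The last step, consuming the in-cluster NFS server as a ReadWriteMany volume, looks roughly like the sketch below; the server address and capacity are assumptions. Note that the kubelet may need the NFS service's ClusterIP rather than its DNS name, since node-level mounts often cannot resolve cluster DNS.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany                  # multiple pods may write simultaneously
  storageClassName: ""               # the change mentioned above: no default StorageClass
  nfs:
    server: 10.0.0.10                # assumed ClusterIP of the nfs-server Service
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
```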

Kubernetes mount volume with noatime

Is there a way to mount a volume with the noatime flag in Kubernetes? I am using Google Container Engine. The volumes are for the data directories of a Riak database, which performs better with noatime.
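One route worth checking is the mountOptions field on a PersistentVolume (exposed via an annotation on older clusters); whether a given option is honored depends on the volume plugin. A minimal sketch, assuming a pre-created GCE PD named riak-disk:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: riak-data
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  mountOptions:
    - noatime                        # handed to mount(8) when the kubelet mounts the disk
  gcePersistentDisk:
    pdName: riak-disk                # assumed name of a pre-created GCE PD
    fsType: ext4
```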

Can I mount a GCS bucket inside Kubernetes Pod?

I know that Kubernetes does not support mounting GCS buckets inside a pod. But if I use gcsfuse to mount a GCS bucket on the node and then expose it to a pod as a hostPath, will that work?
It should work. For hostPath volumes, kube doesn't enforce any policy. But if your FUSE daemon restarts, the mount will become inaccessible. AFAIK, kube does not support mount propagation for volumes.
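Assuming gcsfuse has already mounted the bucket at a known path on the node (outside Kubernetes) before the pod starts, the hostPath approach would look roughly like this; the node label and mount path are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gcs-reader
spec:
  nodeSelector:
    gcsfuse: "true"                  # assumed label on nodes where the bucket is mounted
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "ls /data && sleep 3600"]
      volumeMounts:
        - name: bucket
          mountPath: /data
  volumes:
    - name: bucket
      hostPath:
        path: /mnt/gcs               # node path where gcsfuse mounted the bucket
```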