Access a network file share from Kubernetes

Recently we started using Kubernetes as the path forward for new projects. We have begun implementing some of them and are now struggling with one issue: how do we access a network file share?
Our Kubernetes cluster is a Linux-based cluster installed on a Windows machine. Services hosted in that cluster need to be able to access a file share that is reachable from that machine (i.e. \\myFileShare\myfolder).
We can't find a solution to this one. We tried using the "https://www.nuget.org/packages/SharpCifs.Std/" library to access the files over SMB, but it turned out the library doesn't support SMB 2.0.
We also thought about mounting this share as a Persistent Volume, but if I understand correctly a persistent volume is supposed to have its lifecycle managed by Kubernetes, so I don't think it's designed for this kind of thing.
We tried to find a solution on the internet but didn't find anything, yet I'm pretty sure we are not the first people who need to access a network file share from a Kubernetes cluster. Has anyone struggled with this problem before and could you share a solution?

Have a look at cifs-volumedriver or this Kubernetes CIFS Volume Driver.
It should apply to your case, and it works with SMB 2.1.
The following is an example of a PersistentVolume that uses the volume driver:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mycifspv
spec:
  capacity:
    storage: 1Gi
  flexVolume:
    driver: juliohm/cifs
    options:
      opts: sec=ntlm,uid=1000
      server: my-cifs-host
      share: /MySharedDirectory
    secretRef:
      name: my-secret
  accessModes:
    - ReadWriteMany
Credentials are passed using a Secret, which can be declared as follows:
apiVersion: v1
data:
  password: ###
  username: ###
kind: Secret
metadata:
  name: my-secret
type: juliohm/cifs
NOTE: Pay attention to the secret's type field, which MUST match the volume driver name. Otherwise the secret values will not be passed to the mount script.
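For completeness, a claim and a pod that consume this volume could look roughly like the sketch below; the claim name, pod name, image and mount path are assumptions for illustration, not taken from the driver's documentation:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mycifspvc              # assumed name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""         # skip dynamic provisioning
  volumeName: mycifspv         # bind to the PersistentVolume defined above
---
apiVersion: v1
kind: Pod
metadata:
  name: cifs-consumer          # assumed name
spec:
  containers:
    - name: app
      image: busybox           # assumed image, just to demonstrate the mount
      command: ["sleep", "3600"]
      volumeMounts:
        - name: cifs-share
          mountPath: /mnt/share    # assumed mount path
  volumes:
    - name: cifs-share
      persistentVolumeClaim:
        claimName: mycifspvc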
Also, please take a look at this question on Stack Overflow. Its author had the same problem and shows how to solve it.

Related

Unable to mount volume from ArangoDB container on my local system in Kubernetes

I am trying to follow this documentation to mount a volume from the ArangoDB container to my local system: https://www.arangodb.com/docs/stable/deployment-kubernetes-storage-resource.html The code is given below:
apiVersion: "storage.arangodb.com/v1alpha"
kind: "ArangoLocalStorage"
metadata:
name: "example-arangodb-storage"
spec:
storageClass:
name: my-local-ssd
localPath:
- C:/kubernetes_volumes/arangodb_data
C:/kubernetes_volumes/arangodb_data is the folder where I want to mount the volume on my local system. When I tried to create the resource of type ArangoLocalStorage, it was created successfully, but I am unable to see my-local-ssd in the storage class list. Also, the folder C:/kubernetes_volumes/arangodb_data remains empty, which indicates that no volume has been mounted. I have also attached the image.
Can anyone guide me on how to solve this problem? Thanks.

How to connect VMware storage to Kubernetes built using Rancher 2.8

The cluster nodes are on-prem VMware servers; we used Rancher just to build the k8s cluster.
The build was successful, but when trying to host apps that use PVCs we have problems: dynamic volume provisioning isn't happening and the PVCs are stuck in the 'Pending' state.
The VMware storage class is being used, and our vSphere admins confirmed that the VMs have visibility to the datastores, so ideally it should work.
While configuring the cluster we used the cloud provider credentials according to the Rancher docs.
cloud_provider:
  name: vsphere
  vsphereCloudProvider:
    disk:
      scsicontrollertype: pvscsi
    global:
      datacenters: nxs
      insecure-flag: true
      port: '443'
      soap-roundtrip-count: 0
      user: k8s_volume_svc#vsphere.local
Storage class yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nxs01-k8s-0004
parameters:
  datastore: ds1_K8S_0004
  diskformat: zeroedthick
reclaimPolicy: Delete
PVC yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: arango
  namespace: arango
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: nxs01-k8s-0004
Now I want to understand why my PVCs are stuck in the Pending state. Are there any other steps I missed?
I saw in the Rancher documentation that a storage policy has to be given as an input:
https://rancher.com/docs/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere/#creating-a-storageclass
A VMware document refers to it as an optional parameter, and also has a statement at the top saying it doesn't apply to tools that use CSI (Container Storage Interface):
https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/storageclass.html
I found that Rancher is using a CSI driver called rshared.
So is this storage policy mandatory? Is it what is stopping me from provisioning a VMDK file?
I gave the documentation for creating the storage policy to the vSphere admins; they said this is for vSAN and the datastores are on VMAX. I couldn't understand the difference or find a corresponding doc for VMAX.
It would be a great help if this gets fixed :)
The whole thing came down to the path defined for the storage end: in the cloud config YAML the path was wrong. The vSphere admins gave us the path where the VMs reside, when they should have given the path where the storage resides.
Once this was corrected, the PVC came to the Bound state.
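For anyone hitting the same issue: in the RKE/Rancher cluster YAML the storage location usually lives in the workspace section of vsphereCloudProvider. A rough sketch of the relevant part is below; the server, folder and datastore values are placeholders, not the actual environment from this question:
cloud_provider:
  name: vsphere
  vsphereCloudProvider:
    workspace:
      server: vcenter.example.local      # placeholder vCenter address
      datacenter: nxs                    # datacenter name from the global section above
      folder: kubernetes-volumes         # placeholder folder for the provisioner's dummy VMs
      # This must point at the datastore where the volumes should live,
      # not at the path where the cluster VMs reside.
      default-datastore: ds1_K8S_0004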

How to change a Helm value for persistence to use a path on a certain node

I'm just learning k3s and Helm using a Raspberry Pi cluster. I have added a thumb drive to one of the workers and given it a path like /mnt/thumb, and I want to store my data from Node-RED there (actually I have data in this directory that I want it to use). But I can't seem to change the Helm chart to point to the path on that specific node to make that happen. I'm using this values.yaml. I've tried following different instructions, but none of them have worked. Can someone please show me an example of how to do this? Thanks in advance.
I am using the following approach for mounting a specific folder into a pod using persistent volume claims with kurl.sh. It should also work with k3s on a Raspberry Pi.
Before installing your Helm chart, create a new PersistentVolume from the following YAML:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-volume
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-volume
  hostPath:
    path: /mnt/thumb
What you're doing there is essentially creating a Kubernetes volume backed by a folder on your filesystem and assigning a storage class (and a size) to it.
Next, you will need to tell your Helm chart to create a new PersistentVolumeClaim using the storageClass you've just created. To do that, fill in the persistence section of the values.yaml as follows:
persistence:
  enabled: true
  storageClass: "local-volume"
  accessMode: ReadWriteOnce
  size: 20Gi
Make sure that the size of the volume matches in both the PersistentVolume and the values.yaml.
You will need to take a closer look at the folder structure inside the persistent volume, and also make sure that the folder's permissions are set so that your pod can actually write data into said folder, but that should be enough to get you started, without relying on fancy third-party storage solutions.
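Since the question is specifically about a path on one particular node, it may help to know that a plain hostPath volume does not pin the pod to that node. One option, sketched below, is a local volume with a nodeAffinity rule instead of hostPath; the node name is an assumption and must match the worker that actually has the thumb drive:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-volume
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-volume
  local:
    path: /mnt/thumb
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-with-thumbdrive   # assumption: replace with the real node name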

Methods of Verifying Kubernetes Configuration

I've been working on a small side project to try and learn Kubernetes. I have a relatively simple cluster with two services, an ingress, and working on adding a Redis database now. I'm hosting this cluster in Google Kubernetes Engine (GKE), but using Minikube to run the cluster locally and try everything out before I commit any changes and push them to the prod environment in GKE.
During this project, I have noticed that GKE seems to have some slight differences in how it wants the configuration vs what works in Minikube. I've seen this previously with ingresses and now with persistent volumes.
For example, to run Redis with a persistent volume in GKE, I can use:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chatter-db-deployment
  labels:
    app: chatter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chatter-db-service
  template:
    metadata:
      labels:
        app: chatter-db-service
    spec:
      containers:
        - name: master
          image: redis
          args: [
            "--save", "3600", "1", "300", "100", "60", "10000",
            "--appendonly", "yes",
          ]
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: chatter-db-storage
              mountPath: /data/
      volumes:
        - name: chatter-db-storage
          gcePersistentDisk:
            pdName: chatter-db-disk
            fsType: ext4
The gcePersistentDisk section at the end refers to a disk I created using gcloud compute disks create. However, this simply won't work in Minikube as I can't create disks that way.
Instead, I need to use:
volumes:
  - name: chatter-db-storage
    persistentVolumeClaim:
      claimName: chatter-db-claim
I also need to include separate configuration for a PersistentVolume and a PersistentVolumeClaim.
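For reference, the Minikube-side pair might look roughly like this; the hostPath, storage class name and size are illustrative assumptions:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: chatter-db-volume        # assumed name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual       # assumed class name
  hostPath:
    path: /data/chatter-db       # assumed path inside the Minikube VM
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: chatter-db-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi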
I can easily get something working in either Minikube OR GKE, but I'm not sure what is the best means of getting a config which works for both. Ideally, I want to have a single k8s.yaml file which deploys this app, and kubectl apply -f k8s.yaml should work for both environments, allowing me to test locally with Minikube and then push to GKE when I'm satisfied.
I understand that there are differences between the two environments and that will probably leak into the config to some extent, but there must be an effective means of verifying a config before pushing it. What are the best practices for testing a config? My questions mainly come down to:
Is it feasible to have a single Kubernetes config which can work for both GKE and Minikube?
If not, is it feasible to have a mostly shared Kubernetes config, which overrides the GKE and Minikube specific pieces?
How do existing projects solve this particular problem?
Is the best method to simply make a separate dev cluster in GKE and test on that, rather than bothering with Minikube at all?
Yes, you have found some parts of Kubernetes configuration that were not perfect from the beginning. But there are newer solutions.
Storage abstraction
The idea in newer Kubernetes releases is that your application configuration is a Deployment with Volumes that refer to a PersistentVolumeClaim for a StorageClass.
StorageClass and PersistentVolume, on the other hand, belong more to the infrastructure configuration.
See Configure a Pod to Use a PersistentVolume for Storage on how to configure a Persistent Volume for Minikube. For GKE you configure a Persistent Volume with GCEPersistentDisk, or if you want to deploy your app to AWS you may use a Persistent Volume for AWSElasticBlockStore.
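As a rough illustration of that split (reusing the claim name from the question), the application side can be reduced to a single claim that relies on whatever default StorageClass the environment provides, so the same manifest works in both Minikube and GKE:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: chatter-db-claim
spec:
  accessModes:
    - ReadWriteOnce
  # No storageClassName: the claim is satisfied by the environment's default
  # StorageClass (the hostpath-backed "standard" class in Minikube, a GCE-PD-backed class in GKE).
  resources:
    requests:
      storage: 1Gi
The Deployment then keeps the persistentVolumeClaim volume shown earlier in the question, unchanged across environments; only the StorageClass/PersistentVolume side differs per cluster.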
Ingress and Service abstraction
A Service of type LoadBalancer or NodePort in combination with an Ingress does not work the same way across cloud providers and Ingress controllers. In addition, service mesh implementations like Istio have introduced VirtualService. The plan, as I understand it, is to improve this situation with Ingress v2.

Share persistent volume claim with more than one pod

I can't share a PVC with multiple pods on GCP (with the GCP CLI).
When I apply the config with ReadWriteOnce it works at once:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <name>
  namespace: <namespace>
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
But with ReadWriteMany the status hangs in Pending.
Any ideas?
It is normal that when you apply the config with ReadWriteOnce it works at once; that's the rule.
ReadWriteOnce is the most common use case for persistent disks and works as the default access mode for most applications.
GCE persistent disks do not support ReadWriteMany!
Instead of ReadWriteMany, you can just use ReadOnlyMany.
You can find more information here: persistentdisk. But as you know, the result will not be the same as what you want.
If you want to share volumes, you could try some workarounds:
You can create services.
Your service should look after the data that is related to its area of concern and should allow access to this data to other services via an interface. Multi-service access to data is an anti-pattern akin to global variables in OOP.
If you were looking to write logs, you should have a log service which each service can call with the relevant data it needs to log. Writing directly to a shared disk means that you'd need to update every container if you change your log directory structure or add extra features.
You can also use high-performance, fully managed file storage for applications that require a file-system interface and a shared file system.
You can find more information here: access-fileshare.
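For the managed file storage route, the usual pattern is to expose a Filestore share to the cluster as an NFS-backed PersistentVolume, which does support ReadWriteMany. A minimal sketch, assuming a Filestore instance at 10.0.0.2 with a share called /my_share (both placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileshare-pv
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2        # placeholder: Filestore instance IP
    path: /my_share         # placeholder: Filestore share name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fileshare-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""      # bind to the PV above instead of dynamic provisioning
  volumeName: fileshare-pv
  resources:
    requests:
      storage: 1Ti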
According to the Kubernetes documentation, GCE does not support ReadWriteMany storage: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
There are some options here:
How to share storage between Kubernetes pods?
https://cloud.google.com/filestore/docs/accessing-fileshares