Expose a Kubernetes Volume to non-Kubernetes apps

I am new to Kubernetes and am working on a Computer Vision project in which some of the services are deployed in Kubernetes and some are running on a cluster of physical servers (Nvidia Jetson boards) that have GPUs. Can the non-Kubernetes services access a Persistent Volume in the K8s environment? Please let me know:
How can I expose a Persistent Volume from K8s and mount it as a shared drive on a different physical server?
Instead of using a Persistent Volume, can I have a volume on the host machine where K8s is deployed and use it for both k8s and non-k8s services?
Please note that we connect cameras through USB to each of those Jetson boards, so we cannot bring the Jetson boards in as nodes under K8s.

Not possible; you cannot expose a Persistent Volume from K8s and mount it on a server outside the cluster.
The second option is the better approach. For example, you can use a NAS to back both the k8s cluster and the Nvidia board cluster, so both clusters can share files through the NAS-mounted volume. For pods in the k8s cluster, accessing the mount point is as simple as using hostPath, or a more sophisticated storage driver, depending on your storage architecture.
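A minimal sketch of that hostPath approach, assuming the NAS export is already mounted on every node at a hypothetical path /mnt/nas-share:

```yaml
# Minimal sketch: a pod consuming a NAS export that is assumed to be
# mounted on every node at /mnt/nas-share (hypothetical path).
apiVersion: v1
kind: Pod
metadata:
  name: vision-worker
spec:
  containers:
    - name: worker
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: shared-data
          mountPath: /data          # files here are also visible to the Jetson boards via the same NAS
  volumes:
    - name: shared-data
      hostPath:
        path: /mnt/nas-share        # node-local mount point of the NAS export
        type: Directory
```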

Related

kubernetes persistent volume for bare metal accessible on all nodes and pods

I was trying local or hostPath volumes on bare metal servers on a LAN.
I tried local, but each node had its own copy of the data.
How can I use volumes across all the nodes and pods?
Persistent Volumes have access semantics. For example, on GCE, if you are using a Persistent Disk, it can either be mounted as writable by a single pod or by multiple pods as read-only. If you want multi-writer semantics, you need to set up NFS or some other storage that lets you write from multiple pods. NFS can support multiple read/write clients.
In case you are interested in running NFS, take a look: nfs-setup.
The NFS persistent volume and NFS claim give an indirection that allows multiple pods to refer to the NFS server using a symbolic name rather than the hardcoded server address.
Take a look: pv-multiple-pods.
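A minimal sketch of that pattern, assuming a hypothetical NFS server at 10.0.0.5 exporting /exports/shared:

```yaml
# Minimal sketch: NFS-backed PV/PVC so multiple pods can read and write
# the same data. The server address and export path are hypothetical.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany             # NFS allows multiple read/write clients
  nfs:
    server: 10.0.0.5            # hypothetical NFS server address
    path: /exports/shared       # hypothetical export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ""          # bind to the statically created PV above
```

Pods then reference the claim name (nfs-pvc) in their volume spec, so the server address is defined in only one place.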
If you want to share data across your cluster, then you need to use network storage.
You can't expect Kubernetes to just share your data across all the nodes of your cluster, so local storage and hostPath won't work in that case.
As @MaggieO said, you can set up and use an NFS server.
If you just want to try it out, you can also use your favorite cloud provider's storage solution (AWS S3, GCP Bucket, Azure Disk, etc.). You can see the full list here.

Opensource Storage Options for Kubernetes Cluster running on bare metal

I have a 3-node cluster running on bare metal. It was set up using kubeadm.
Each node in the cluster has 100GB of disk space, adding up to 300GB in total.
I would like to utilize the 300GB of disk space available on them to run stateful pods like MySQL, PostgreSQL, MongoDB, Cassandra, etc. What are the different open-source options available to create persistent volumes?
I haven't yet used Kubernetes v1.14, which offers local persistent volumes out of the box; that would be one option.
A second option is to run an NFS server on each node and use the NFS share from the respective machine to create PVs.
Apart from these, what other options can be looked at? Please suggest.
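For the first option, a minimal sketch of what a local persistent volume might look like, assuming a hypothetical directory /mnt/disks/vol1 on a node named node-1:

```yaml
# Minimal sketch: a local PV pinned to one node; the path and node name
# are hypothetical. A StorageClass with WaitForFirstConsumer binding is
# needed so scheduling takes the PV's node affinity into account.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node1
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1            # hypothetical directory on the node's disk
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1             # hypothetical node name
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```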

Is adding disks beforehand a must before deploying OpenEBS?

I have a 3-node k8s cluster and a remote storage box with additional disks connected to it. I want to utilize these disks. Is this use case supported by OpenEBS? Also, do I have to attach the disks to the nodes before deploying OpenEBS? Is this a prerequisite?
Sure, it's supported, and you need the disks attached when you set up OpenEBS as your block storage.
After you set it up, you can essentially create volumes (PVCs, PVs) for Kubernetes and mount them on your pods for consumption.
You can set up OpenEBS on the Kubernetes cluster where you run your workloads using either Helm or kubectl.
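As a rough illustration of that consumption step (the StorageClass name openebs-standard below is hypothetical; use whichever class your OpenEBS install provides):

```yaml
# Sketch: a PVC against a hypothetical OpenEBS StorageClass, mounted
# into a pod. Replace the class name with one from your installation.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openebs-data
spec:
  storageClassName: openebs-standard   # hypothetical class name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: openebs-data
```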
Yes, OpenEBS supports storage with additional disks connected. As of 0.7 it has a feature, NDM (Node Disk Manager), which monitors the disks attached to the nodes. Once the disks are attached, you can create a pool on top of them and use it. For more details, see the document link.

Can a Kubernetes cluster be formed from a mix of AWS nodes, Azure nodes, and VMware nodes?

Is HA across multiple cloud providers possible, i.e. ONE Kubernetes cluster from a mix of Azure nodes, AWS nodes, and VMware nodes? (Assume all have the same OS image.)
If so, how does dynamic provisioning work?
Can Kubernetes CSI (Container Storage Interface) help me with this?
That will not work very well. The cloud provider needs to be set on the apiserver and controller-manager, and you can't run multiple copies of those with different configurations.
Now, if you don't need a cloud provider, as in you are just using these as generic VMs, you will not have access to cloud storage via the Kubernetes API. Otherwise it's workable, but it is still not a great setup. This would essentially be a cross-region cluster, which is not a supported use case. You are meant to use one cluster per region and arrange for load balancing somehow (yes, this is the tricky bit).
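To make the constraint concrete, here is a rough sketch of where that setting lives in a kubeadm-style cluster; the controller-manager runs as a static pod and the legacy --cloud-provider flag takes a single value (aws is just an example):

```yaml
# Sketch: excerpt of a kubeadm-style static pod manifest for
# kube-controller-manager. The in-tree --cloud-provider flag accepts a
# single value, so one cluster cannot mix cloud integrations this way.
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
    - name: kube-controller-manager
      image: k8s.gcr.io/kube-controller-manager:v1.14.0
      command:
        - kube-controller-manager
        - --cloud-provider=aws        # one provider per cluster; the apiserver takes the same flag
        # ... other flags omitted ...
```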

Kubernetes deployment using shared-disk FC HBA options

I have been looking at the available Kubernetes storage add-ons and have been unable to put together something that would work with our setup. The current situation is several nodes, each with an FC HBA controller connected to a single LUN. I realize that some sort of cluster FS will need to be implemented, but once that is in place I don't see how I would then connect it to Kubernetes.
We've discussed taking what we have and making an iSCSI or NFS host, but in addition to requiring another dedicated machine, we would lose all the advantages of having the storage directly available on each node. Is there any way to make use of our current infrastructure?
Details:
4x Kubernetes nodes (1 master) deployed via kubeadm on Ubuntu 16.04 using flannel as the network add-on; each system has the SAN available as a block device (/dev/sdb)
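One possible sketch of the "connect this to Kubernetes" step, assuming a cluster filesystem (e.g. OCFS2 or GFS2) has already been layered on the shared LUN and mounted on every node at a hypothetical path /mnt/san:

```yaml
# Sketch: once a cluster FS on the shared LUN is mounted at the same
# path on every node (hypothetical /mnt/san), pods can consume it via a
# hostPath-backed PV; the FC LUN itself is managed outside Kubernetes.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: san-shared-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany              # not enforced for hostPath; concurrency is handled by the cluster FS
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/san               # node-local mount point of the cluster FS
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: san-shared-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Gi
  storageClassName: ""           # bind to the static PV above
```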