Currently, my Kubernetes cluster is provisioned via GKE.
I use GCE Persistent Disks to persist my data.
In GCE, persistent storage is provided via GCE Persistent Disks. Kubernetes supports attaching them to Pods, PersistentVolumes, or StorageClasses via the gcePersistentDisk volume/provisioner type.
What if I would like to move my cluster from Google to, let's say, Azure or AWS?
Then I would have to change the volume type to azureFile or awsElasticBlockStore respectively, in every occurrence in my manifest files.
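For example (the disk name is made up), this is the kind of PersistentVolume I would have to edit:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:        # on AWS this block becomes awsElasticBlockStore,
    pdName: my-data-disk    # on Azure azureDisk/azureFile, each with its own fields
    fsType: ext4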
I hoped CSI drivers would solve that problem; unfortunately, they also use a different provisioner for each cloud provider, for example pd.csi.storage.gke.io for GCP or disk.csi.azure.com for Azure.
Is there any convenient way to make Kubernetes volumes cloud agnostic, so that I wouldn't have to touch my manifest files before migrating the cluster?
You cannot get cloud-agnostic storage by using CSI drivers or native PersistentVolumeClaims in Kubernetes. That's because these APIs are the upstream way of provisioning storage, and each cloud provider integrates with them to translate requests into its cloud-specific API (Persistent Disks for Google, EBS for AWS, ...).
The exception is self-managed storage that you access via an NFS driver or a specific driver for one of the tools mentioned above. Even then, the self-managed storage solution is itself going to sit on cloud-provider-specific volumes, so you are just shifting the issue to a different place.
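As a rough sketch of where the provider-specific part ends up with CSI (all names assumed): the provisioner lives in the StorageClass, so each cloud needs its own StorageClass, even though the PVCs that reference it by name can stay unchanged:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-disk
provisioner: pd.csi.storage.gke.io   # GKE; on AKS this would be disk.csi.azure.com
parameters:
  type: pd-ssd                       # parameters are provider-specific as well
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: fast-disk        # the PVC only knows the class name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi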
I'm running a k3s single-node cluster and use the k3s local-path-provisioner for storage. Since I want to be able to add nodes in the future, I looked at MinIO to use on top of local-path as storage. But I'm not sure it's the right choice, because my workloads primarily use MariaDB for data, and I've read that an S3-compatible bucket isn't the best fit for database applications.
I hope you can help me figure this out.
If you don't want to use object storage, then here are your options for running a local storage provisioner:
GlusterFS StorageClass
There isn't a lot of documentation on how to set it up, but if you know your way around GlusterFS it'll be a good option.
local-path-provisioner
It provides a way for Kubernetes users to utilize the local storage on each node.
OpenEBS
It has a local volume storage engine, but I think it is not designed to work on a shared volume mount, and it ends up tying a pod to a specific node since the data "doesn't exist" on the other nodes.
longhorn [recommended]
It creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. A minimal PVC example using it is sketched after this list.
rook
Rook is a storage operator for Kubernetes that supports multiple storage backends. Don't use the NFS one though, because we hit a wall when using it with our DBs.
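As a minimal sketch (assuming Longhorn is installed and its default StorageClass is named longhorn), a workload would just claim storage like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-data
spec:
  storageClassName: longhorn   # swap in local-path, etc. for the other options
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi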
This is my proposed Kubernetes cluster. I want to be able to run a PostgreSQL database, with my nodes accessing a storage machine for storing the data. Is using NFS a good option here? What is the best way to run a database instance in this setup?
I recommend using a Helm chart for deploying any kind of database; it is very handy and easy to deploy. See the link:
https://github.com/bitnami/charts/tree/master/bitnami/postgresql
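Roughly like this (the release name is made up; check the chart's values.yaml for the exact persistence settings that point it at your storage):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-postgres bitnami/postgresql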
Anyway, if you want to deploy PostgreSQL, you first need to create a PersistentVolume (PV) and a PersistentVolumeClaim (PVC). Because you chose NFS as your cluster storage solution, you have to create the PV and PVC manually.
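A minimal sketch (the server IP and export path are placeholders; the storageClassName here is just a label used to match the PV and PVC):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  nfs:
    server: 10.0.0.5          # your storage machine
    path: /exports/postgres
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi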
But Kubernetes has a StorageClass solution too; it is better to use a Kubernetes volume plugin with an internal provisioner, like GlusterFS or CephFS.
https://kubernetes.io/docs/concepts/storage/storage-classes/
With Kubernetes one can define storage classes with provisioners. How does one find which provisioners are installed and available in the cluster?
Inspecting the storage classes will reveal which provisioners are already in use, but not whether there are more available.
A provisioner does not necessarily need to run in the cluster; e.g. the provisioner for an external storage appliance just connects to the cluster API server and watches for new PersistentVolumeClaims created with a storage class bound to its provisioner name. This is why, as of Kubernetes 1.7, there is no intended universal way to see whether a storage class's provisioner is actually available or not.
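For the "already in use" part, a quick check could look like this (column output varies slightly by kubectl version):

# Provisioners already referenced by a StorageClass:
kubectl get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner
# On newer clusters, registered CSI drivers can at least be listed:
kubectl get csidrivers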
I want to understand the role of OpenStack when Kubernetes is deployed on top of it. Will the user be able to access the underlying OpenStack layer in this case? (I mean: can the user create instances, networks, and access any other OpenStack resource?) Or will the user only be provided with Kubernetes offerings? Any link or answer would help.
I can't seem to find this aspect covered in any guide.
OpenStack's role in the k8s world is to provide k8s with instances and storage to do its job, just like GCE and Azure.
Kubernetes tries to abstract underlying cloud infrastructure so applications can be ported from one cloud provider to another transparently.
K8s achieves this by defining abstractions like PersistentVolumes and PersistentVolumeClaims, which allow a pod to declare a requirement for storage without needing to state that it requires a Cinder volume directly.
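For example, a claim like this (names made up) says nothing about Cinder; the cluster's storage integration decides how to satisfy it:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi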
There should be no need to access OpenStack directly from your Kubernetes-based app, unless your app actually needs to manage an OpenStack cluster, in which case you can provide your OpenStack credentials to your app and access the OpenStack API.
I'm following the Spring Cloud Data Flow "Getting Started" guide here (section 13): http://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/current-SNAPSHOT/reference/htmlsingle/#_deploying_streams_on_kubernetes
I'm new to cloud computing, and I'm stuck at the point where I should create a disk for a MySQL DB via gcloud:
gcloud compute disks create mysql-disk --size 200 --type pd-standard
Well, that throws:
The required property [project] is not currently set.
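I assume this can be fixed by pointing gcloud at a project, e.g.:

gcloud config set project <my-project-id>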
There is one thing I don't quite understand yet (not my main question): gcloud requires me to register a project in my Google account. I wonder how my Google account (and the cloud project in it), the to-be-created disk, and the server are related to one another.
My actual question, though, is: how can I create the disk for the master server locally without using gcloud? I don't want a cloud server connected to my Google account.
Kubernetes does not manage any remote storage on its own. You can manage local storage by mounting an emptyDir volume.
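A minimal sketch of that (image, password, and mount path are just examples; emptyDir data only lives as long as the Pod):

apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql:5.7
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: changeme
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumes:
    - name: data
      emptyDir: {}        # node-local scratch space, deleted when the Pod goes away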
gcloud creates cloud block storage in your Google Cloud account, and on Google Container Engine (GKE) Kubernetes is configured to access these resources by ID and can mount this type of volume into your Pod.
If you're not running Kubernetes on GKE, then you can't really mount a Google Cloud volume into your pod: the resources need to be managed by the same provider.