GKE: How many persistent disks can be attached to a single node?

Is it possible to attach ~30 persistent disks to a single k8s node (e.g. n1-standard-4)?
According to the documentation, a node with 2-4 cores can support up to 64 attached disks in Beta: Link.
Is it supported by GKE? Is there any limit in GKE Kubernetes?

GKE has the same limitations as vanilla Kubernetes on GCP. The Kubernetes volume limits for the largest public cloud providers are documented here.
You can also change those limits by setting the KUBE_MAX_PD_VOLS environment variable on the kube-scheduler (and restarting it). Unfortunately, you can't do that on GKE yet, because GKE doesn't give you access to the master configuration.
Also documented here are dynamic volume limits, introduced in Kubernetes 1.11 and currently in Beta.
I believe you answered your first question yourself: the n1-standard-4 VM has 4 vCPUs, and per the link you provided you can attach up to 64 disks. So yes, you should be able to attach 30 persistent disks; each PVC/PV in the GCE storage class maps to a GCP VM disk.
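If it helps, here is a minimal sketch of requesting that many disks as dynamically provisioned claims, using the Python kubernetes client. It assumes kubeconfig access, the default namespace, and GKE's default "standard" storage class; the claim names and sizes are illustrative.

    # Sketch only: create several PVCs that GKE's default provisioner backs
    # with GCE Persistent Disks. Adjust names, sizes and class to your setup.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    for i in range(30):  # one PVC per disk you want attached
        pvc = client.V1PersistentVolumeClaim(
            metadata=client.V1ObjectMeta(name=f"data-disk-{i}"),
            spec=client.V1PersistentVolumeClaimSpec(
                access_modes=["ReadWriteOnce"],
                storage_class_name="standard",  # assumption: GKE's default class
                resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
            ),
        )
        core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

Keep in mind that a disk is only attached to a node once a pod mounting its claim is scheduled there, so whether all 30 disks end up on one node depends on pod placement.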

Related

Apache Kafka - Volume Mapping for Message Log files in Kubernetes (K8s)

When we deploy Apache Kafka on Linux/Windows, we have the log.dirs and broker.id properties. On bare metal, the files are saved on the individual host instances. However, when deployed via K8s on a public cloud, there must be some form of volume mounting to make sure that the transaction log files are saved somewhere?
Has anyone done this on K8s? I am not referring to Confluent (because it's a paid subscription).
As far as I understand, you are just asking how to deal with storage in Kubernetes.
Here is a great clip that talks about Kubernetes Storage that I would recommend to you.
In Kubernetes you use Volumes:
On-disk files in a Container are ephemeral, which presents some problems for non-trivial applications when running in Containers. First, when a Container crashes, kubelet will restart it, but the files will be lost - the Container starts with a clean state. Second, when running Containers together in a Pod it is often necessary to share files between those Containers. The Kubernetes Volume abstraction solves both of these problems.
There are many types of Volumes; some are cloud specific, like awsElasticBlockStore, gcePersistentDisk, azureDisk and azureFile.
There are also other types like glusterfs, iscsi, nfs and many more that are listed here.
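As a hedged illustration of one of those cloud-specific types, the sketch below (Python kubernetes client) mounts a pre-existing GCE Persistent Disk directly into a pod. The disk name is hypothetical, and the disk must already exist in the node's zone.

    from kubernetes import client, config

    config.load_kube_config()

    # A pod that mounts a GCE Persistent Disk directly, without a PVC.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gce-pd-example"),
        spec=client.V1PodSpec(
            containers=[client.V1Container(
                name="app",
                image="busybox",
                command=["sleep", "3600"],
                volume_mounts=[client.V1VolumeMount(name="data", mount_path="/data")],
            )],
            volumes=[client.V1Volume(
                name="data",
                gce_persistent_disk=client.V1GCEPersistentDiskVolumeSource(
                    pd_name="my-data-disk",  # hypothetical: a pre-created disk in the same zone
                    fs_type="ext4",
                ),
            )],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)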
You can also use Persistent Volumes, which provide an API for users and administrators that abstracts details of how storage is provided from how it is consumed:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).
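To tie that back to Kafka, here is a hedged sketch (same Python client) that claims storage through a PVC and mounts it where a broker might point log.dirs. The claim name, image and mount path are illustrative assumptions; in practice you would typically use a StatefulSet with volumeClaimTemplates so that each broker gets its own claim.

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    # Claim storage; the cluster's dynamic provisioner binds it to a PV.
    claim = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="kafka-logs-0"),  # illustrative name
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
        ),
    )
    core.create_namespaced_persistent_volume_claim(namespace="default", body=claim)

    # Mount the claim where the broker writes its transaction logs (log.dirs).
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="kafka-0"),
        spec=client.V1PodSpec(
            containers=[client.V1Container(
                name="kafka",
                image="wurstmeister/kafka",  # illustrative; any Kafka image would do
                volume_mounts=[client.V1VolumeMount(
                    name="logs", mount_path="/var/lib/kafka/data"  # illustrative log.dirs path
                )],
            )],
            volumes=[client.V1Volume(
                name="logs",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="kafka-logs-0"
                ),
            )],
        ),
    )
    core.create_namespaced_pod(namespace="default", body=pod)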
Here is a link to Portworx Kafka Kubernetes in production: How to Run HA Kafka on Amazon EKS, GKE and AKS which might be handy for you as well.
And if you are interested in performance, then Kubernetes Storage Performance Comparison is a great 10-minute read.
I hope those materials will help you understand Kubernetes storage.

GCE volume mounts as compared to Kubernetes volume mounts

Kubernetes has pretty extensive volume and volume mounting support (many different volume types, subpaths, mounting single files).
Can the same be achieved with GCE VMs?
Update:
I have some Kubernetes workflow that uses NFS and GCE PD volumes.
Suppose I want to run the same workflow without Kubernetes (by just starting GCE VMs).
What volume-related features will I lose/keep?
Some examples of features:
Having the same volume shared between multiple producer Pods/VMs.
Mounting single files into container/VM (as opposed to mounting directories only).
The PVs and GCE PD volumes used by GKE use Google Persistent Disks and thus are bound by the same limitations. This also means that there isn't much you can do on k8s that you can't do on GCE. The major difference is the resources won't be as fluid.
You can attach a disk to a GCE VM and mount just a subpath of it at the OS level if you want, or mount the entire disk normally. You can also attach a single disk in read-only mode to multiple VMs in the same zone (the same restriction you have with readOnlyMany in GKE). If you need scalability, you can use a Managed Instance Group that uses a snapshot of your disk so that replication won't skew the data.
You can also mount NFS in GCE as in GKE.
Migrating from GKE to GCE generally does not involve too many restrictions. The major difference is that you are moving from a managed orchestration system to unmanaged VMs, so you may need to do some more legwork to make sure that there is scalability (if need be) and resiliency.
Aside from the benefits that k8s offers all around, I can't think of any major benefits you lose concerning the volumes specifically.
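For reference, the "mounting single files" feature from the question is what Kubernetes expresses with subPath. Below is a minimal, hedged sketch of such a pod spec (Python kubernetes client, illustrative names); on a plain GCE VM you would reproduce the same effect with an OS-level bind mount or by copying the file into place.

    from kubernetes import client

    # Mount only one file (config.properties) from a ConfigMap volume, instead
    # of shadowing the whole /etc/app directory. Names are illustrative.
    container = client.V1Container(
        name="app",
        image="busybox",
        command=["sleep", "3600"],
        volume_mounts=[client.V1VolumeMount(
            name="cfg",
            mount_path="/etc/app/config.properties",
            sub_path="config.properties",
        )],
    )
    pod_spec = client.V1PodSpec(
        containers=[container],
        volumes=[client.V1Volume(
            name="cfg",
            config_map=client.V1ConfigMapVolumeSource(name="app-config"),
        )],
    )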

Is adding disks in advance a must before deploying OpenEBS?

I have a 3-node k8s cluster and a remote storage box with additional disks connected to it. I want to utilize these disks. So is this use case supported by OpenEBS? Also, do I have to attach the disks to the nodes before deploying OpenEBS? Is this a prerequisite?
Sure. It's supported, and you need the disks attached when you set up OpenEBS as your block storage.
After you set it up, you can essentially create volumes (PVCs, PVs) for Kubernetes and mount them on your pods for consumption.
You can set up OpenEBS on the Kubernetes cluster where you run your workloads using either helm or kubectl.
Yes, OpenEBS supports storage with additional disks connected. With 0.7 it has a feature called NDM (Node Disk Manager), which monitors the disks attached to the nodes. Once the disks are attached, you can create a pool on top of them and use it. For more details, see the document link.
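Once OpenEBS is installed and a pool exists, consuming it looks like any other dynamically provisioned claim. Here is a hedged sketch with the Python kubernetes client; the storage class name is hypothetical, so substitute whatever class your OpenEBS installation exposes (kubectl get sc).

    from kubernetes import client, config

    config.load_kube_config()

    # Claim a volume from an OpenEBS storage class. The class name below is an
    # assumption; use the class backed by the pool you created on your disks.
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="openebs-demo-claim"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="openebs-cstor-pool",  # hypothetical class name
            resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)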

Can a Kubernetes cluster be formed from a mix of AWS nodes, Azure nodes and VMware nodes?

Is HA across multiple cloud providers possible, i.e. ONE Kubernetes cluster formed from a mix of Azure nodes, AWS nodes and VMware nodes? (Assume all have the same OS image.)
If so, how does dynamic provisioning work?
Can Kubernetes CSI (Container Storage Interface) help me with this?
That will not work very well. The cloud provider needs to be set on the apiserver and controller-manager, and you can't run multiple copies of those in different configurations.
Now, if you don't need a cloud provider, as in you are just using these as generic VMs, you will not have access to cloud storage via the Kubernetes API. Otherwise it's workable, but it is still not a great setup. This would essentially be a cross-region cluster, which is not a supported use case. You are meant to use one cluster per region and arrange for load balancing somehow (yes, this is the tricky bit).

What is the minimum Google Kubernetes Engine cluster size / Configuration for Istio?

I tried to launch Istio on Google Kubernetes Engine using the Google Cloud Deployment Manager as described in the Istio Quick Start Guide.
My goal is to have a cluster as small as possible for a few very lightweight microservices.
Unfortunately, Istio pods in the cluster failed to boot up correctly when using a 1-node GKE cluster of g1-small or n1-standard-1 machines.
For example, istio-pilot fails and the status is "0 of 1 updated replicas available - Unschedulable".
I did not find any hints that the resources of my cluster are exceeded, so I am wondering:
What is the minimum GKE cluster size to successfully run Istio (and a few lightweight microservices)?
What I found is issue Istio#216, but it did not contain the answer. Also, of course, the cluster size depends on the microservices, but I am basically interested in the minimum cluster to start with.
As per this page:
If you use GKE, please ensure your cluster has at least 4 standard GKE nodes. If you use Minikube, please ensure you have at least 4GB RAM.
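If you want to confirm whether resources really are the blocker, a sketch like the following (Python kubernetes client, assuming Istio is installed in the istio-system namespace) lists the scheduling failures; the event messages usually say whether CPU or memory is insufficient.

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    # List scheduling failures in the istio-system namespace; the messages
    # typically report "Insufficient cpu" or "Insufficient memory".
    events = core.list_namespaced_event(
        "istio-system", field_selector="reason=FailedScheduling"
    )
    for ev in events.items:
        print(ev.involved_object.name, "->", ev.message)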