How to include a persistent volume claim when using Memgraph on Kubernetes? - kubernetes

I run Memgraph on Kubernetes using the sample service+deployment found in the memgraph/bolt-proxy repo. Unfortunately, that config doesn’t include a persistent volume claim. I'd like to keep Memgraph’s log and snapshots persistent in Kubernetes. How can I do that?

Configure a Pod to Use a PersistentVolume for Storage
This Kubernetes documentation page shows how to configure a Pod to use a PersistentVolumeClaim for storage.

There are some Helm charts for Memgraph that do include a PersistentVolumeClaim.
Memgraph itself is deployed as a StatefulSet since it is not stateless. The StatefulSet is then provided with three volumes: two of them are volume claims (lib and log), and the third one is the config.
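Roughly, such a StatefulSet could look like the sketch below. This is a minimal illustration, not the chart's actual manifest; the claim names, sizes and the /var/lib/memgraph, /var/log/memgraph and /etc/memgraph paths are assumptions to be checked against the chart you use.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: memgraph
spec:
  serviceName: memgraph
  replicas: 1
  selector:
    matchLabels:
      app: memgraph
  template:
    metadata:
      labels:
        app: memgraph
    spec:
      containers:
        - name: memgraph
          image: memgraph/memgraph
          ports:
            - containerPort: 7687
          volumeMounts:
            - name: memgraph-lib        # data directory: snapshots, WAL
              mountPath: /var/lib/memgraph
            - name: memgraph-log        # log files
              mountPath: /var/log/memgraph
            - name: memgraph-config     # configuration from a ConfigMap
              mountPath: /etc/memgraph
      volumes:
        - name: memgraph-config
          configMap:
            name: memgraph-config
  volumeClaimTemplates:                 # one PVC per replica for lib and log
    - metadata:
        name: memgraph-lib
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
    - metadata:
        name: memgraph-log
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 256Mi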

Related

Does the storage class dynamically provision persistent volume per pod?

Kubernetes newbie here, so my question might not make sense. Please bear with me.
So my question is: given that I have set up a Storage Class in my cluster and I have a PVC which uses that Storage Class, if I use that PVC in my Deployment, and that Deployment has 5 replicas, will the Storage Class create 5 PVs, one per Pod? Or only 1 PV shared by all Pods under that Deployment?
Edit: Also I have 3 Nodes in this cluster
Thanks in advance.
The PersistentVolumeClaim resource is specified separately from a Deployment. It doesn't matter how many replicas the Deployment has; Kubernetes will only have the number of PVC resources that you define.
If you are looking for multiple stateful containers that create their own PVCs, use a StatefulSet instead, which includes a volumeClaimTemplates definition.
If you want all Deployment replicas to share a PVC, the claim's access mode will need to be either ReadOnlyMany or ReadWriteMany, and the storage class provisioner must support it.
To answer my question directly.
The Storage Class in this case will only provision one PV, and it is shared across all Pods of the Deployment which uses that PVC.
The accessModes of the PVC do not dictate whether to create one PV for each Pod. You can set the accessModes to ReadWriteOnce, ReadOnlyMany or ReadWriteMany and it will always create 1 PV.
If you want each Pod to have its own PV, you cannot do that with a Deployment.
You will need to use a StatefulSet with volumeClaimTemplates.
It is important that the StatefulSet uses volumeClaimTemplates; otherwise it will still act the same as the Deployment, that is, the Storage Class will just provision one PV that is shared across all Pods of that StatefulSet.
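For illustration, a minimal sketch of the shared case under a Deployment (all names here, such as shared-data, my-storage-class and the nginx image, are placeholders); the per-Pod case is a StatefulSet with volumeClaimTemplates, as in the Memgraph sketch above.

# One PVC, referenced by name from the Deployment: all 5 replicas share the same PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes: ["ReadWriteMany"]     # needed once replicas can land on different nodes
  storageClassName: my-storage-class
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: shared-data   # the same claim for every replica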
References:
Kubernetes Deployments vs StatefulSets
Is there a way to create a persistent volume per pod in a kubernetes deployment (or statefulset)?

Kubernetes Volume, PersistentVolume, PersistentVolumeClaim

I've been working with Kubernetes for quite a while, but I still often get confused about Volume, PersistentVolume and PersistentVolumeClaim. It would be nice if someone could briefly summarize the difference between them.
Volume - For a Pod to reference storage that is external, it needs a volume spec. This volume can come from a ConfigMap, from Secrets, from a PersistentVolumeClaim, from hostPath, etc.
PersistentVolume - It is the representation of a piece of storage that has been made available. The plugins for cloud providers enable creating this resource.
PersistentVolumeClaim - The claim requests specific resources, and if a PersistentVolume matching the claim's requirements is available, the claim gets bound to that PersistentVolume.
At this point the PVC/PV aren't used yet. Then in the Pod spec, the Pod references the claim as a volume, and the storage is attached to the Pod.
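A minimal sketch of that chain, along the lines of the Kubernetes hostPath example (all names are made up, and a hostPath PV is only suitable for single-node test clusters):

# PersistentVolume: the actual storage made available (here a hostPath)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /mnt/demo
---
# PersistentVolumeClaim: requests storage and binds to a matching PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: manual
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
# Pod: its volume spec references the claim, which attaches the storage to the Pod
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc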
These are all in a Kubernetes application context. To keep applications portable between different Kubernetes platforms, it is good to abstract away the infrastructure from the application. Here I will explain the Kubernetes objects that belong to the application config and those that belong to the platform config. If your application runs on both e.g. GCP and AWS, you will need two sets of platform configs, one for GCP and one for AWS.
Application config
Volume
A pod may mount volumes. The source for volumes can be different things, e.g. a ConfigMap, Secret or a PersistentVolumeClaim
PersistentVolumeClaim
A PersistentVolumeClaim represents a claim of a specific PersistentVolume instance. For portability this claim can be for a specific StorageClass, e.g. SSD.
Platform config
StorageClass
A StorageClass represents a PersistentVolume type with specific properties; it can be e.g. SSD. But the StorageClass is different on each platform, e.g. one definition on AWS or Azure, another on GCP or on Minikube.
PersistentVolume
This is a specific volume on the platform, and it may be different per platform, e.g. awsElasticBlockStore or gcePersistentDisk. This is the instance that holds the actual data.
Minikube example
See Configure a Pod to Use a PersistentVolume for Storage for a full example on how to use PersistentVolume, StorageClass and Volume for a Pod using Minikube and a hostPath.
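As a rough sketch of that split, the StorageClass below is platform config (a GCP flavour is shown; the provisioner and parameters would differ on AWS, Azure or Minikube), while the PVC is application config and only names the class. The names ssd and app-data are illustrative.

# Platform config: a StorageClass named "ssd" (GCE provisioner shown; differs per platform)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
# Application config: the claim only refers to the class by name, staying portable
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: ssd
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi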

How to make consul to use manually created PersistentVolumeClaim in Helm

When installing consul using Helm, it expects the cluster to dynamically provision the PersistentVolume requested by the consul-helm chart. That is the default behavior.
I have the PV and PVC created manually and need this PV to be used by the consul-helm chart. Is it possible to install consul using Helm so that it uses a manually created PV in Kubernetes?
As #coderanger said
For this to be directly supported the chart author would have to provide helm variables you could set. Check the docs.
As shown in the GitHub docs, there are no variables to change that.
If you have to change it, you would have to work with consul-statefulset.yaml; this chart dynamically provides volumes for each StatefulSet pod created, using:
volumeMounts
volumeClaimTemplates
Use helm fetch to download consul files to your local directory
helm fetch stable/consul --untar
Then I found a GitHub answer with a good explanation and example about using one PV & PVC in all replicas of a StatefulSet, so I think it could actually work in the consul chart.
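As a sketch of that idea: a StatefulSet's volumeClaimTemplates produce PVCs named <template>-<statefulset>-<ordinal>, so a manually created PV can be reserved for the claim the chart will generate by using claimRef. The claim name data-consul-server-0, the namespace and the NFS backend below are assumptions, not the chart's actual values; check the rendered consul-statefulset.yaml for the real names.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: consul-server-0-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  claimRef:                      # reserve this PV for the PVC the StatefulSet will create
    namespace: default
    name: data-consul-server-0   # assumed: <claimTemplate>-<statefulSetName>-<ordinal>
  nfs:                           # assumed backend; use whatever your storage actually is
    server: nfs.example.com
    path: /exports/consul-0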

Does stellar core deployment on k8s need persistent storage?

I want to deploy stellar core on k8s with CATCHUP COMPLETE. I'm using this docker image satoshipay/stellar-core
The Docker image docs mention that /data is used to store some information about the DB. And I've seen that the Helm template is using a persistent volume and mounting it at /data.
I was wondering what will happen if I use a Deployment instead of the StatefulSet and I restart the pod, update its Docker version or delete it? Does it initialize the DB again?
Also does the stellar core need any extra storage for the catchup?
Statefulset vs Deployment
A StatefulSet "provides guarantees about the ordering and uniqueness of these Pods".
If your application needs to be brought up in a specific order, use statefulset.
Storage
Definitely leverage a persistent volume for the database. From the K8s docs:
On-disk files in a Container are ephemeral
Since it appears you're deploying some kind of blockchain application, this could cause significant delays for startup
In a Deployment you specify a PersistentVolumeClaim that is shared by all pod replicas. In other words, a shared volume.
The backing storage obviously must have the ReadWriteMany or ReadOnlyMany accessMode if you have more than one replica pod.
In a StatefulSet you specify volumeClaimTemplates so that each replica pod gets a unique PersistentVolumeClaim associated with it.
In other words, no shared volume.
A StatefulSet is useful for running things in a cluster, e.g. a Hadoop cluster or a MySQL cluster, where each node has its own storage.
So in your case, to have more isolation (no shared volumes), it is better to have a StatefulSet-based solution.
If you use a Deployment-based solution and restart the pod, update its Docker version or delete it, your DB will be initialized again.
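For this case, a minimal StatefulSet sketch mounting the claim at /data (the path the image docs mention) could look like the following; the replica count, storage size and everything except the image name are placeholders.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: stellar-core
spec:
  serviceName: stellar-core
  replicas: 1
  selector:
    matchLabels:
      app: stellar-core
  template:
    metadata:
      labels:
        app: stellar-core
    spec:
      containers:
        - name: stellar-core
          image: satoshipay/stellar-core
          volumeMounts:
            - name: data
              mountPath: /data      # survives restarts, image updates and pod deletion
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi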
Regarding catchup:
In general, running CATCHUP_COMPLETE=true is not recommended in docker containers as they have limited resources by default (if you really want to do it, make sure to give them access to more resources: CPU, memory and disk space).

Difference between configmap and downwardAPI

I am new to Kubernetes. Can somebody please explain why there are multiple volume types, like:
configMap
emptyDir
projected
secret
downwardAPI
persistentVolumeClaim
For a few I am able to figure out the reason, like why we need a Secret instead of a ConfigMap.
For the rest I am not able to understand the need for them.
Your question is too generic to answer; here are a few comments off the top of my head.
If the deployed pod or containers want configuration data, then you need to use a ConfigMap resource; if there are secrets or passwords, it's obvious to use a Secret resource.
Now if the deployed pods want to use e.g. POD_NAME, which is generated at schedule or run time, then they need to use downwardAPI resources.
emptyDir shares its lifecycle with the deployed pod: if the pod dies, then all of the data stored using the emptyDir resource is gone. Now if you want to persist the data, then you need to use PersistentVolume, PersistentVolumeClaim and StorageClass resources.
for further information k8s volumes
A ConfigMap is used to make application-specific configuration data available to the container at run time.
The downwardAPI is used to make Kubernetes metadata (like pod namespace, pod name, pod IP, pod labels etc.) available to the container at run time.
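A small sketch pulling two of these together (the ConfigMap name app-config and the mount paths are made up): the same pod mounts application configuration from a ConfigMap and its own metadata via a downwardAPI volume.

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.properties: |
    log.level=INFO
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    app: demo
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: config            # application configuration, from the ConfigMap
          mountPath: /etc/app
        - name: podinfo           # Kubernetes metadata about this pod, from the downwardAPI
          mountPath: /etc/podinfo
  volumes:
    - name: config
      configMap:
        name: app-config
    - name: podinfo
      downwardAPI:
        items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
          - path: namespace
            fieldRef:
              fieldPath: metadata.namespace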