We are running a cluster of x nodes.
Every node in the cluster pulls some files from remote storage. Unfortunately, the remote server is getting overloaded, so we are exploring a solution in which only a subset of the nodes pulls the files and serves them to the remaining nodes (read-only; the other nodes do not need to write). Some subset of the nodes often undergoes maintenance and can be taken offline.
I was experimenting with running NFS as a pod in a ReplicaSet, with a Service (fixed IP) for each of the NFS pods. If a node running an NFS pod goes down, Kubernetes takes care of bringing up an NFS pod on another node, still reachable at the same sticky IP.
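Roughly, the setup I was experimenting with looks like this (the image, export path, and service IP below are placeholders, not our exact manifests):

```yaml
# Sketch: one NFS server pod managed by a Deployment/ReplicaSet, exposed via a
# Service with a fixed cluster IP so clients don't have to chase the pod around.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: example/nfs-server        # placeholder NFS server image
          securityContext:
            privileged: true               # NFS servers typically need this
          ports:
            - containerPort: 2049
              name: nfs
          volumeMounts:
            - name: export
              mountPath: /exports
      volumes:
        - name: export
          hostPath:
            path: /srv/nfs-export          # directory with the pulled files
---
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  clusterIP: 10.96.0.100                   # the "sticky" IP (must lie inside the service CIDR)
  selector:
    app: nfs-server
  ports:
    - name: nfs
      port: 2049
```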
But this new NFS server would still need to be remounted on the other nodes.
Any better solution for this storage problem?
Note that we would ideally not like to use remote storage since this adds extra latency.
Try Expanding Persistent Volume Claims. It is overhead for you to maintain, though, so I recommend you go with something locally managed instead; after that, the choice is yours.
There are also two other commonly recommended options: hostPath and a GlusterFS volume. Please refer to this SO answer for more information.
What @scenox suggested is also a good option.
According to the documentation:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned ... It is a resource in the cluster just like a node is a cluster resource...
So I was reading about all the currently available plugins for PVs, and I understand that for 3rd-party / out-of-cluster storage this doesn't matter much (e.g. storing data on EBS, Azure or GCE disks), because there are few or no implications when adding or removing nodes from a cluster. However, there are others such as (ignoring hostPath, as that works only for single-node clusters):
csi
local
which (at least from what I've read in the docs) don't require 3rd-party vendors/software.
But also:
... local volumes are subject to the availability of the underlying node and are not suitable for all applications. If a node becomes unhealthy, then the local volume becomes inaccessible by the pod. The pod using this volume is unable to run. Applications using local volumes must be able to tolerate this reduced availability, as well as potential data loss, depending on the durability characteristics of the underlying disk.
The local PersistentVolume requires manual cleanup and deletion by the user if the external static provisioner is not used to manage the volume lifecycle.
Use-case
Let's say I have a single-node cluster with a single local PV and I want to add a new node to the cluster, so I end up with a 2-node cluster (small numbers for simplicity).
Will the data from the already existing local PV be replicated 1:1 onto the new node, as in having one PV with two nodes of redundancy, or is it strictly bound to the existing node only?
If the already existing PV can't be adjusted from 1 to 2 nodes, can a new PV (created from scratch) be created so it's 1:1 replicated between 2+ nodes on the cluster?
Alternatively, if not, what would be the correct approach without using a 3rd-party out-of-cluster solution? Will using csi change the overall approach in any way, or is it the same with regard to redundancy, just with a different "engine" under the hood?
Can a new PV be created so it's 1:1 replicated between 2+ nodes on the cluster?
None of the standard volume types are replicated at all. If you can use a volume type that supports ReadWriteMany access (most readily NFS) then multiple pods can use it simultaneously, but you would have to run the matching NFS server.
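As a minimal sketch (the server address and export path are placeholders), a statically provisioned NFS PersistentVolume plus a ReadWriteMany claim looks roughly like this:

```yaml
# Sketch: an NFS-backed PV and a claim that multiple pods can mount read-write.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs-server.example.internal    # placeholder: the NFS server you run yourself
    path: /exports
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""                     # bind to the statically provisioned PV above
  resources:
    requests:
      storage: 10Gi
```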
Of the volume types you reference:
hostPath is a directory on the node the pod happens to be running on. It's not a directory on any specific node, so if the pod gets recreated on a different node, it will refer to the same directory but on the new node, presumably with different content. Aside from basic test scenarios I'm not sure when a hostPath PersistentVolume would be useful.
local is a directory on a specific node, or at least following a node-affinity constraint. Kubernetes knows that not all storage can be mounted on every node, so this automatically constrains the pod to run on the node that has the directory (assuming the node still exists).
csi is an extremely generic extension mechanism, so that you can run storage drivers that aren't on the list you link to. There are some features that might be better supported by the CSI version of a storage backend than the in-tree version. (I'm familiar with AWS: the EBS CSI driver supports snapshots and resizing; the EFS CSI driver can dynamically provision NFS directories.)
In the specific case of a local test cluster (say, using kind) using a local volume will constrain pods to run on the node that has the data, which is more robust than using a hostPath volume. It won't replicate the data, though, so if the node with the data is deleted, the data goes away with it.
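To make the node-affinity point concrete, here is a minimal sketch of a local PersistentVolume (the path and node name are placeholders):

```yaml
# Sketch: a local PV is pinned to one node via required nodeAffinity, so pods
# that claim it are scheduled onto that node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/data                  # placeholder directory on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1                   # placeholder node name
```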
I have a simple 3-node cluster created using AKS. Everything has been going fine for 3 months. However, I'm starting to have some disk space usage issues that seem related to the OS disks attached to each node.
I have no error in kubectl describe node and all disk-related checks are fine. However, when I try to run kubectl logs on some pods, I sometimes get "no space left on device".
How can one manage the storage used on those disks? I can't seem to find a way to SSH into those nodes, as they seem to be manageable only via the Azure CLI / web interface. Is there also a way to clean up what takes up this space? (I assume unused Docker images take up space, but I was under the impression that those would get cleaned up automatically...)
Generally, the AKS nodes just run the pods and other resources for you; the data is stored in a separate place, just like on a remote storage server. In Azure, that means managed disks and Azure Files shares. You can also store the growing data on the nodes themselves, but then you need to configure large storage for each node, and I don't think that's a good way to go.
There are ways to SSH into the AKS nodes. One is to manually set a NAT rule on the load balancer for the node you want to SSH into. Another is to create a pod as a jump box, following the steps here.
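If you go the jump-box route, a rough sketch (the node name and image are placeholders) is a privileged pod with the node's root filesystem mounted, which also lets you see what is filling the disk:

```yaml
# Sketch: a throwaway pod pinned to the affected node, with / mounted at /host
# so you can inspect usage, e.g. du -sh /host/var/lib/docker.
apiVersion: v1
kind: Pod
metadata:
  name: node-shell
spec:
  nodeName: aks-nodepool1-12345678-vmss000000   # placeholder node name
  hostPID: true
  containers:
    - name: shell
      image: ubuntu                              # any image with a shell
      command: ["sleep", "infinity"]
      securityContext:
        privileged: true
      volumeMounts:
        - name: host-root
          mountPath: /host
  volumes:
    - name: host-root
      hostPath:
        path: /
```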
The last point is that AKS deletes unused images regularly and automatically; it's not recommended to delete unused images manually.
Things you can do to fix this:
Create AKS with bigger OS disk (I usually use 128gb)
Upgrade AKS to a newer version (this would replace all the existing vms with new ones, so they won't have stale docker images on them)
Manually clean up space on nodes
Manually extend the OS disk on nodes (will only work until you scale/upgrade the cluster)
I'd probably go with option 1, else this problem would haunt you forever :(
Problem
I have a project to do; it consists of the following: I need to have a containerized database. When the load on the database goes up and the database gets overloaded, I need to bring up another container (pod) with the database. The problem is that the new pod needs to have some data preloaded (for reading purposes for the users), and when the load goes down, the new pod will be terminated and the data stored in it needs to be saved to a central database (to avoid losing it).
I'm using a Kubernetes cluster (Google Kubernetes Engine) for the pods and MongoDB. It kind of looks like this
DB Diagram
I know that the problem described above is probably not the recommended approach, but that's what they are asking us to do.
Now, the problem is that MongoDB does not allow that (merging content from several databases into one database). A script that controls the pods (which need to be handled dynamically), pulls the data from them, and pushes it to the central database is complicated, and things like keeping track of the data that was already pulled need to be taken care of.
My Idea for a Solution
So my idea was that all the containers point to the same volume. That means the files in the directory '/data/db' (where Mongo stores its files) are the same for every pod, because the same volume is mounted in all of them. It kind of looks like this
Same Volume Mounted for each Pod
Kubernetes allows you to use several volume types. The ones that allow ReadWriteMany include NFS and CephFS, among others. I tried the example from this NFS link but it did not work: the first pod started successfully, but the others got stuck in "Starting Container". I assume the volume could not be mounted for the other pods because WriteMany was not allowed.
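Roughly, the manifest idea looks like this (names are placeholders; this only sketches the shared mount, not whether mongod tolerates a shared data directory):

```yaml
# Sketch: every replica mounts the same ReadWriteMany claim at /data/db.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo
          volumeMounts:
            - name: shared-data
              mountPath: /data/db
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: mongo-shared-pvc    # placeholder; must allow ReadWriteMany (NFS/CephFS)
```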
I tried creating a CephFS cluster manually, but I am having some trouble using it from the Kubernetes cluster. My question is: will CephFS do the job? Can CephFS handle several writers on the same file? If it can, will the Mongo pods come up successfully? NFS was supposed to work, but it didn't.
I am in the process of learning Kubernetes with a view to setting up a simple cluster with Citus DB and I'm having a little trouble with getting things going, so would be grateful for any help.
I have a Docker image containing my base Debian image configured with Citus for the project. At this point I want to set it up with one master that mounts a GCP persistent disk holding a Postgres DB, which I'll then distribute among the other containers, each mounting its own separate disk with empty tables (configured with the Citus extension) to hold its share of the distributed data. I'd like to automate this further at some point, but for now I'm aiming for just a master container and eight nodes. My plan is to create a deployment that opens ports 5432 and 80 on each node, and I thought I could create two pods, one to hold the master and one to hold the eight nodes. Ideally I'd want to mount all the disks and then run a post-mount script on the master that finds all the node containers (by IP or hostname??), adds them as Citus nodes, then runs create_distributed_table to distribute the data.
My confusion at present is about how to label the individual nodes so they keep their internal address or hostname, and so that if one goes down it is replaced and resumes with the data on its persistent disk. I've read about ConfigMaps and setting hostname aliases, but I'm still unclear about how to proceed. Is this possible, or is this the wrong way to approach this kind of setup?
You are looking for a StatefulSet. That lets you have a known number of pod replicas, with attached storage (PersistentVolumes) and consistent DNS names. In the pod spec I would launch only a single copy of the server and use the StatefulSet's replica count to control the number of "nodes" (also a Kubernetes term); if the replica is #0, it's the master.
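A minimal sketch of what that could look like (image, ports, and sizes are placeholders): a headless Service plus a StatefulSet with per-pod volume claims.

```yaml
# Sketch: pods get stable names (citus-0, citus-1, ...) and their own disks;
# by convention citus-0 can play the master role.
apiVersion: v1
kind: Service
metadata:
  name: citus
spec:
  clusterIP: None                          # headless: gives each pod a stable DNS name
  selector:
    app: citus
  ports:
    - name: postgres
      port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: citus
spec:
  serviceName: citus
  replicas: 9                              # 1 master + 8 workers in this sketch
  selector:
    matchLabels:
      app: citus
  template:
    metadata:
      labels:
        app: citus
    spec:
      containers:
        - name: citus
          image: my-citus-image            # placeholder image
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 50Gi
```

Each pod is then reachable at a stable DNS name like citus-0.citus, citus-1.citus, and so on, which is what your post-mount script can use to register the workers with the master.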
A lot of legacy applications are deployed as containers. Most of them only need a few changes to work in a container but many of them are not built to scale, for example because they maintain session data or write to a volume (concurrency issues).
I was wondering whether those applications are intended to run on Kubernetes, and if so, what a good way to do so is. Pods are not durable, so the desired way to start an application is by using a replication controller and setting replicas to 1. The RC ensures that the right number of pods is running. The documentation also specifies that it kills pods if there are too many; I was wondering whether that is ever the case (if a pod is not started manually).
I guess a database like Postgres (with an external data volume) is a good example. I have seen tutorials deploying those using a replication controller.
Creating a Replication Controller with 1 replica is indeed a good approach; it's more reliable than starting a single pod, since you benefit from the auto-healing mechanism: if the node your app is running on dies, your pod will be terminated and restarted somewhere else.
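For example, a minimal sketch of such a controller (the image and claim name are placeholders):

```yaml
# Sketch: a ReplicationController keeping exactly one Postgres pod alive and
# reattaching the external volume wherever the pod is rescheduled.
apiVersion: v1
kind: ReplicationController
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: postgres-data       # placeholder claim backed by external storage
```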
Data persistence in the context of a cluster management system like Kubernetes means that your data should be available outside the cluster itself (separate storage). I personally use EC2 EBS since our app runs in AWS, but Kubernetes supports a lot of other volume types. If your pod runs on node A, the volumes it uses will be mounted locally and inside your pod containers. Now if your pod is destroyed and restarted on node B this volume will be unmounted from node A and mounted on node B before the containers of your pod are recreated. Pretty neat.
Take a look at persistent volumes, this should be particularly interesting for you.
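As a rough sketch of what that looks like with an EBS-backed volume (the volume ID is a placeholder; newer clusters would typically use the EBS CSI driver rather than the in-tree type shown here):

```yaml
# Sketch: a PV backed by a pre-created EBS volume, and a claim that binds to it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-data-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0        # placeholder EBS volume ID
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 50Gi
```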