In a cloud environment, we have a cluster of GlusterFS nodes (participating in a Gluster volume) and clients (that mount the Gluster volumes). These nodes are created with HashiCorp Terraform.
Once the cluster is up and running, if we want to change the Gluster machine configuration, for example increasing the compute size from 4 CPUs to 8 CPUs, Terraform can recreate the nodes with the new configuration. The existing Gluster nodes are destroyed and new instances are created, but with the same IPs. On the newly created instance, the volume creation command fails, saying the brick is already part of a volume.
sudo gluster volume create VolName replica 2 transport tcp ip1:/mnt/ppshare/brick0 ip2:/mnt/ppshare/brick0
volume create: VolName: failed: /mnt/ppshare/brick0 is already part
of a volume
But no volumes are present on this instance.
I understand that if I have to expand or shrink a volume, I can add or remove bricks from the existing volume. Here, I'm changing the compute of the node, so it has to be recreated. I don't understand why it says the brick is already part of a volume, as it is a new machine altogether.
It would be very helpful if someone could explain why it says the brick is already part of a volume, and where the volume/brick information is stored, so that I can recreate the volume successfully.
I also tried the steps below from this link to clear the GlusterFS volume-related attributes from the mount, but with no luck.
https://linuxsysadm.wordpress.com/2013/05/16/glusterfs-remove-extended-attributes-to-completely-remove-bricks/.
apt-get install attr
cd /glusterfs
for i in $(attr -lq .); do setfattr -x trusted.$i .; done
attr -lq /glusterfs   # for testing, the output should be empty
Simply put "force" at the end of the "gluster volume create ..." command.
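For example, using the command from the question (ip1/ip2 are the placeholders from the original post); note that force also suppresses other safety checks, so use it deliberately:
sudo gluster volume create VolName replica 2 transport tcp \
  ip1:/mnt/ppshare/brick0 ip2:/mnt/ppshare/brick0 force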
Please check whether the directories /mnt/ppshare/brick0 exist.
You should have /mnt/ppshare without the brick0 folder; the create command creates those folders. The error indicates that the brick0 folders are already present.
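A minimal cleanup sketch, assuming the bricks live at /mnt/ppshare/brick0 as in the question and that the old brick contents can be discarded (run on each node before recreating the volume):
# remove the stale brick directory left over from the previous cluster
sudo rm -rf /mnt/ppshare/brick0
# or, to keep the directory and only strip the Gluster metadata:
# sudo setfattr -x trusted.glusterfs.volume-id /mnt/ppshare/brick0
# sudo setfattr -x trusted.gfid /mnt/ppshare/brick0
# sudo rm -rf /mnt/ppshare/brick0/.glusterfs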
Kubernetes version:
v1.22.2
Cloud provider: vSphere 6.7
Architecture:
3 masters
15 workers
What happened:
One of the pods went down for some unknown reason, and when we tried to bring it back up, it could not attach its existing PVC.
This only happened to one specific pod; all the others had no problems.
What did you expect to happen:
Pods should automatically re-attach their existing PVCs.
Validation:
First step: the connection to vSphere was validated, and we confirmed that the PVC exists.
Second step: the Pod was restarted (StatefulSet, 1/1 replicas) to see if it would come back up and attach the PVC, but without success.
Third step: the services were restarted (kube-controller-manager, kube-apiserver, etc.).
Last step: all workers and masters were rebooted, but without success; each time the pod was launched it hit the same error: "Multi-Attach error for volume "pvc......" Volume is already exclusively attached to one node and can't be attached to another".
When I delete a pod and try to recreate it, I get this warning:
Multi-Attach error for volume "pvc-xxxxx" The volume is already exclusively attached to a node
and cannot be attached to another
Anything else we need to know:
I have a cluster (3 masters and 15 nodes).
Temporary resolution:
Erase the existing PVC and launch the pod again so that the PVC is recreated.
Since the PVC holds data, deleting it is not a good solution.
Multi-Attach error for volume "pvc-xxx" Volume is already
exclusively attached to one node and can't be attached to another
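The temporary workaround above, expressed as kubectl commands (pod and PVC names are placeholders; remember that the data in the PVC is lost):
# delete the stuck pod and its PVC; the StatefulSet recreates the pod
# and dynamic provisioning recreates the PVC from the volumeClaimTemplate
kubectl delete pod <pod-name>
kubectl delete pvc <pvc-name>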
A longer-term solution relies on two facts:
You're using the ReadWriteOnce access mode, where the volume can be mounted as read-write by a single node only.
Pods might be scheduled by the Kubernetes scheduler onto a different node for multiple reasons.
Consider switching to ReadWriteMany where the volume can be mounted as read-write by many nodes.
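A minimal sketch of such a claim, assuming an RWX-capable backend is available (vSphere block volumes are RWO only, so this typically means something file-based such as NFS or CephFS); the names and StorageClass below are hypothetical:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data                # hypothetical name
spec:
  accessModes:
    - ReadWriteMany                # mountable read-write by many nodes
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-client     # assumed RWX-capable StorageClass
EOF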
I have a home Kubernetes cluster with multiple SSDs attached to one of the nodes.
I currently have one persistent volume per mounted disk. Is there an easy way to create a persistent volume that can access data from multiple disks? I thought about symlinks, but that doesn't seem to work.
You would have to combine them at a lower level. The simplest approach would be Linux LVM but there's a wide range of storage strategies. Kubernetes orchestrates mounting volumes but it's not a storage management solution itself, just the last-mile bits.
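A minimal LVM sketch, assuming the disks are /dev/sdb and /dev/sdc (placeholders) and that any data on them can be discarded:
# combine two disks into a single logical volume (destroys existing data)
sudo pvcreate /dev/sdb /dev/sdc                 # /dev/sdb and /dev/sdc are placeholders
sudo vgcreate k8s-data /dev/sdb /dev/sdc        # volume group spanning both disks
sudo lvcreate -l 100%FREE -n pv0 k8s-data       # one logical volume using all the space
sudo mkfs.ext4 /dev/k8s-data/pv0
sudo mkdir -p /mnt/disks/pv0
sudo mount /dev/k8s-data/pv0 /mnt/disks/pv0     # a single mount point to back one PV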
As already mentioned by coderanger, Kubernetes does not manage your storage at a lower level. While cloud solutions may offer provisioners that do some of that work for you, on bare metal there are none.
The closest thing that helps you manage local storage is the local volume static provisioner.
The local volume static provisioner manages the PersistentVolume
lifecycle for pre-allocated disks by detecting and creating PVs for
each local disk on the host, and cleaning up the disks when released.
It does not support dynamic provisioning.
Have a look at this article for more examples.
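A minimal sketch of the layout the provisioner expects, assuming a discovery directory of /mnt/disks and two placeholder disks /dev/sdb and /dev/sdc; it will then create one local PersistentVolume per mount point:
# format and mount each disk under the discovery directory
sudo mkfs.ext4 /dev/sdb && sudo mkfs.ext4 /dev/sdc      # placeholders for your SSDs
sudo mkdir -p /mnt/disks/ssd0 /mnt/disks/ssd1
sudo mount /dev/sdb /mnt/disks/ssd0
sudo mount /dev/sdc /mnt/disks/ssd1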
I have a trick which works for me.
You can mount these disks under a directory like /disk/, then create a loop filesystem, mount it, and create symbolic links from the disk mounts into the loop filesystem.
For example, create the file-backed filesystem:
touch ~/disk-bunch1 && truncate -s 32M ~/disk-bunch1 && mke2fs -t ext4 -F ~/disk-bunch1
Mount it and create symbolic links from the disks into the loop filesystem:
mkdir -p /local-pv/bunch1 && mount ~/disk-bunch1 /local-pv/bunch1
ln -s /disk/disk1 /local-pv/bunch1/disk1
ln -s /disk/disk2 /local-pv/bunch1/disk2
Finally, use sig-storage-local-static-provisioner, set "hostDir" to "/local-pv" in values.yaml, and deploy the provisioner. A pod can then use multiple disks through that single volume.
But this method has a drawback: when you run "kubectl get pv", the CAPACITY shown is just the size of the loop filesystem rather than the sum of the disk capacities.
By the way, this method is not really recommended; you'd be better off with something like RAID 0 or LVM.
Environment: external NFS share for persistent storage, accessible to all, R/W; CentOS 7 VMs (NFS share and K8s cluster); NFS utils installed on all workers.
Mounting on a VM, e.g. a K8s worker node, works correctly and the share is R/W.
Deployed in the K8s cluster: PV, PVC, Deployment (volumes referencing the PVC, volumeMounts).
The structure of the YAML files corresponds to the various instructions and postings, including the postings here on the site.
The pod starts and the share is mounted. Unfortunately, it is read-only. None of the suggestions from the postings I have found about this have worked so far.
Any idea what else I could look out for, what else I could try?
Thanks. Thomas
After digging deep, I found the cause of the problem. Apparently, the syntax for the NFS export is very sensitive; a single extra space can be problematic.
On the NFS server, two export entries ended up in the kernel tables: the first R/O and the second R/W. I don't know whether this is a CentOS bug triggered by the syntax in /etc/exports.
On another CentOS machine I was able to mount the share without any problems (R/W). In the container (Debian-based image), however, it was only R/O. I have not investigated whether this is due to Kubernetes or to Debian behaving differently.
After correcting the /etc/exports file and restarting the NFS server, there was only one correct entry in the kernel table. After that, mounting R/W worked on a CentOS machine as well as in the Debian-based container inside K8s.
Here are the files / table:
previous /etc/exports:
/nfsshare 172.16.6.* (rw,sync,no_root_squash,no_all_squash,no_acl)
==> kernel:
/nfsshare 172.16.6.*(ro, ...
/nfsshare *(rw, ...
corrected /etc/exports (w/o the blank):
/nfsshare *(rw,sync,no_root_squash,no_all_squash,no_acl)
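A quick way to re-apply and verify the exports after editing /etc/exports (standard nfs-utils commands, not part of the original post):
sudo exportfs -ra    # re-export everything from /etc/exports
sudo exportfs -v     # verify: the share should appear exactly once, with rw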
In principle, the idea of using an init container is a good one; thank you for reminding me of it.
I have tried it.
Unfortunately, it doesn't change the basic problem: the file system is mounted read-only by Kubernetes. The init container returns the following error message (from the log):
chmod: /var/opt/dilax/enumeris: Read-only file system
I am migrating my previous deployment made with docker-compose to Kubernetes.
In my previous deployment, some containers have data created at build time under certain paths, and these paths are mounted as persistent volumes.
Therefore, as the Docker volume documentation states, the persistent volume (not a bind mount) is pre-populated with the content of the container directory.
I'd like to achieve this behaviour with Kubernetes and its persistent volumes. How can I do that? Do I need to add some kind of logic, using scripts, to copy my container's files to the mounted path the first time the container starts, when the data is not yet present?
Possibly related question: Kubernetes mount volume on existing directory with files inside the container
I think your options are:
ConfigMap (is "some data" just configuration files?)
Init containers (as mentioned; see the sketch after this list)
CSI Volume Cloning (cloning combined with an init container or your first app container)
There used to be a gitRepo volume type; it was deprecated in favour of init containers, from which you can clone your config and data.
A hostPath volume mount is an option too.
An NFS volume is probably a very reasonable option, and similar in approach to your Docker volumes.
Storage types: NFS, iscsi, awsElasticBlockStore, gcePersistentDisk and others can be pre-populated. There are constraints. NFS is probably the most flexible for sharing bits & bytes.
FYI:
subPath might be of interest too, depending on your use case, and
PodPreset might help in streamlining the operation across your fleet of pods.
HTH
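A minimal init-container sketch for the copy-on-first-start approach, assuming a hypothetical image my-app whose build-time data lives under /app/data and an existing PVC named app-data-pvc (all names and paths are placeholders):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: prepopulate-demo               # hypothetical name
spec:
  initContainers:
    - name: seed-data
      image: my-app:latest             # the image that contains the build-time data
      # copy the baked-in files into the (initially empty) volume, only on first start
      command: ["sh", "-c", "[ -e /seed/.seeded ] || (cp -a /app/data/. /seed/ && touch /seed/.seeded)"]
      volumeMounts:
        - name: app-data
          mountPath: /seed
  containers:
    - name: app
      image: my-app:latest
      volumeMounts:
        - name: app-data
          mountPath: /app/data         # the app now sees the pre-populated volume
  volumes:
    - name: app-data
      persistentVolumeClaim:
        claimName: app-data-pvc        # assumed existing PVC
EOF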
I would like to have Kubernetes use the local SSDs in my Google Kubernetes Engine cluster without using alpha features. Is there a way to do this?
Thanks in advance for any suggestions or your help.
https://cloud.google.com/kubernetes-engine/docs/concepts/local-ssd explains how to use local SSDs on your nodes in Google Kubernetes Engine. Based on the gcloud commands, the feature appears to be beta (not alpha) so I don't think you need to rely on any alpha features to take advantage of it.
You can use local SSDs with your Kubernetes nodes as explained in the documentation below.
To create a cluster with local SSD disks:
Visit the Kubernetes Engine menu in GCP Console.
Click Create cluster.
Configure your cluster as desired. Then, from the Local SSD disks (per node) field, enter the desired number of SSDs as an absolute number.
Click Create.
To create a node pool with local SSD disks in an existing cluster:
Visit the Kubernetes Engine menu in GCP Console.
Select the desired cluster.
Click Edit.
From the Node pools menu, click Add node pool.
Configure the node pool as desired. Then, from the Local SSD disks (per node) field, enter the desired number of SSDs as an absolute number.
Click Save.
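The equivalent with the gcloud CLI, if you prefer it over the Console (cluster and pool names are placeholders):
# new cluster with one local SSD per node
gcloud container clusters create my-cluster --local-ssd-count=1
# or a new node pool with local SSDs in an existing cluster
gcloud container node-pools create ssd-pool --cluster=my-cluster --local-ssd-count=1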
Be aware of the disadvantages/limitations of local SSD storage in Kubernetes as explained in this documentation link:
Because local SSDs are physically attached to the node's host virtual machine instance, any data stored in them only exists on that node. As the data stored on the disks is local, you should ensure that your application is resilient to having this data being unavailable.
A Pod that writes to a local SSD might lose access to the data stored on the disk if the Pod is rescheduled away from that node. Additionally, upgrading a node causes the data to be erased.
You cannot add local SSDs to an existing node pool.
The above points are very important if you want high availability in your Kubernetes deployment.
Kubernetes local SSD storage is ephemeral and presents some problems for non-trivial applications when running in containers.
In Kubernetes, when a container crashes, kubelet will restart it, but the files in it will be lost because the container starts with a clean state.
Also, when running containers together in a Pod it is often necessary that those containers share files.
You can use the Kubernetes Volume abstraction to solve the above problems, as explained in the following documentation.
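A minimal sketch of consuming a node's local SSD from a Pod via a hostPath volume, assuming the SSD is mounted at /mnt/disks/ssd0 on the node (the usual GKE mount point) and accepting that the data is node-local and ephemeral:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: local-ssd-demo                 # hypothetical name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "df -h /cache && sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /cache
  volumes:
    - name: scratch
      hostPath:
        path: /mnt/disks/ssd0          # assumed local SSD mount point on the node
EOF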
If you're looking to run the whole of Docker on SSDs in your Kubernetes cluster, this is how I did it on my node pool (Ubuntu nodes):
Go to Compute Engine > VM Instances
Edit your node to add a new SSD (explained in the first step "Create and attach a persistent disk in the Google Cloud Platform Console" here: https://cloud.google.com/compute/docs/disks/add-persistent-disk)
On your server:
# stop docker
sudo service docker stop
# format and mount disk
sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
sudo rm -fr /var/lib/docker
sudo mkdir -p /var/lib/docker
sudo mount -o discard,defaults /dev/sdb /var/lib/docker
sudo chmod 711 /var/lib/docker
# backup and edit fstab
sudo cp /etc/fstab /etc/fstab.backup
echo UUID=`sudo blkid -s UUID -o value /dev/sdb` /var/lib/docker ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
# start docker
sudo service docker start
As mentioned by others, you might want to look into the local SSD option provided by GKE first. The reason that option didn't cut it for me was that my nodes needed a single SSD of 4 TB, and as I understand it, local SSDs come in a fixed size.