I'm new to Ceph and am setting up a small cluster. I've set up five nodes and can see the available drives, but I'm unsure exactly how to add an OSD and specify the locations for the WAL+DB.
Maybe my Google-fu is weak but the only guides I can find refer to ceph-deploy which, as far as I can see, is deprecated. Guides which mention cephadm only mention adding a drive but not specifying the WAL+DB locations.
I want to add HDDs as OSDs and put the WAL and DB onto separate LVs on an SSD. How?!
It seems that for the more advanced cases, like using a dedicated WAL and/or DB device, you have to use the concept of drive groups (OSD service specifications).
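A sketch of what that could look like with cephadm; the service_id and the rotational filters below are only examples (HDDs as data devices, SSDs carved up for the DB):

cat > osd_spec.yml <<'EOF'
service_type: osd
service_id: hdd_osds_with_ssd_db    # hypothetical name
placement:
  host_pattern: '*'
data_devices:
  rotational: 1                     # HDDs become the data devices
db_devices:
  rotational: 0                     # SSDs hold the DB (the WAL stays with the DB by default)
EOF
ceph orch apply -i osd_spec.yml

cephadm then creates the DB LVs on the SSDs for you; ceph orch ls shows the resulting OSD service.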
If your Ceph version is Octopus (in which ceph-deploy is deprecated), I suppose you could try this:
sudo ceph-volume lvm create --bluestore --data /dev/data-device --block.db /dev/db-device
I built Ceph from source, but this method should be supported either way. You can run
ceph-volume lvm create --help
to see more parameters.
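If you would rather create the LVs on the SSD yourself, as the question asks, a minimal sketch looks like this; the device names /dev/sdb (SSD) and /dev/sdc (HDD), the VG/LV names, and the sizes are all just examples:

pvcreate /dev/sdb                   # the SSD
vgcreate ceph-db /dev/sdb
lvcreate -L 30G -n db-0 ceph-db     # DB LV for the first OSD
lvcreate -L 2G -n wal-0 ceph-db     # optional separate WAL LV
sudo ceph-volume lvm create --bluestore --data /dev/sdc \
    --block.db ceph-db/db-0 --block.wal ceph-db/wal-0

Repeat the lvcreate/ceph-volume pair for each HDD; if you omit --block.wal, the WAL simply lives inside the DB device.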
I'm pretty new to Ceph, so I've included all my steps I used to set up my cluster since I'm not sure what is or is not useful information to fix my problem.
I have 4 CentOS 8 VMs in VirtualBox set up to teach myself how to bring up Ceph. One is a client and three are Ceph monitors. Each Ceph node has six 8 GB drives. Once I learned how the networking worked, it was pretty easy.
I set each VM to have a NAT (for downloading packages) and an internal network that I called "ceph-public". This network would be accessed by each VM on the 10.19.10.0/24 subnet. I then copied the ssh keys from each VM to every other VM.
I followed this documentation to install cephadm, bootstrap my first monitor, and added the other two nodes as hosts. Then I added all available devices as OSDs, created my pools, then created my images, then copied my /etc/ceph folder from the bootstrapped node to my client node. On the client, I ran rbd map mypool/myimage to mount the image as a block device, then used mkfs to create a filesystem on it, and I was able to write data and see the IO from the bootstrapped node. All was well.
Then, as a test, I shut down and restarted the bootstrapped node. When it came back up, I ran ceph status, but it just hung with no output. Every single ceph and rbd command now hangs, and I have no idea how to recover or properly reset or fix my cluster.
Has anyone ever had the ceph command hang on their cluster, and what did you do to solve it?
Let me share a similar experience. Some time ago I also tried to run some tests on Ceph (Mimic, I think) and my VirtualBox VMs acted very strangely, nothing comparable to actual bare-metal servers, so please bear this in mind... such tests are not entirely representative.
Regarding your problem, check the following:
have at least 3 monitors, and keep the count odd so a majority quorum can form. It's possible that the hang is caused by a failed monitor election.
make sure the networking part is OK (separate VLANs for Ceph servers and clients)
DNS is resolving OK (or you have added the server names to /etc/hosts)
...just my 2 cents...
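To go with those points, here is a quick sanity-check sketch you could run on the bootstrapped node (the host names are hypothetical):

getent hosts ceph-mon1 ceph-mon2 ceph-mon3      # does name resolution still work?
sudo cephadm ls | grep -E 'mon|mgr'             # did the mon/mgr containers come back after the reboot?
ss -tlnp | grep -E '3300|6789'                  # are the monitor ports listening?
sudo cephadm shell -- ceph status --connect-timeout 15   # fail fast instead of hanging forever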
Cloud platforms like Linode.com often provide hot-pluggable storage volumes that you can easily attach and detach from a Linux virtual machine without restarting it.
I am looking for a way to install Postgres so that its data and configuration ends up on a volume that I have mounted to the virtual machine. The end result should allow me to shut down the machine, detach the volume, spin up another machine with an identical version of Postgres already installed, attach the volume and have Postgres work just like it did on the old machine with all the data, file system permissions and server-wide configuration intact.
Is such a thing possible? Is there a reliable way to move installations (i.e databases and configuration, not the actual binaries) of Postgres across machines?
CLARIFICATION: the virtual machine has two disks:
the "built-in" one which is created when the VM is created and mounted to /. That's where Postgres gets installed to and you can't move this disk.
the hot-pluggable disk which you can easily attach and detach from a running VM. This is where I want Postgres data and configuration to be so I can just detach the disk (after shutting down the VM to prevent data loss/corruption) and attach it to another VM when I want my data to move so it behaves like it did on the old VM (i.e. no failures to start Postgres, no errors about permissions or missing files, etc).
This works just fine. It is not really any different to starting and stopping PostgreSQL and not removing the disk. There are a couple of things to consider though.
You have to make sure PostgreSQL is stopped and all writes are synced before unmounting the volume. Obvious enough, and you shouldn't be able to unmount before the sync completes anyway, but it's worth repeating.
You will want the same version of PostgreSQL, probably on the same version of operating system with the same locales too. Different distributions might compile it with different options.
Although you can put configuration and data in the same directory hierarchy, most distros tend to put config in /etc. If you compile from source yourself this won't be a problem. Alternatively, you can usually override the default locations or, and this is probably simpler, bind-mount the data and config directories into the places your distro expects.
Note that if your storage allows you to connect the same volume to multiple hosts in some sort of "read only" mode that won't work.
Edit: steps from comment moved into body for easier reading.
1) Start up PG, create a table, put one row in it.
2) Stop PG.
3) Mount your volume at /mnt/db.
4) rsync /var/lib/postgresql/NN/main to /mnt/db/pg_data and /etc/postgresql/NN/main to /mnt/db/pg_etc.
5) Rename /var/lib/postgresql/NN/main to add .OLD to the name, and do the same with the /etc directory.
6) Bind-mount the dirs from /mnt to replace them.
7) Restart PG.
8) Test.
9) Repeat.
10) Return to step 8 until you are happy.
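As a rough shell sketch of those steps, assuming Debian-style paths; PG_VER stands in for your PostgreSQL major version and the mount source is hypothetical:

PG_VER=11                                        # hypothetical; use your actual major version
sudo systemctl stop postgresql

sudo mount /dev/disk/by-id/YOUR-VOLUME /mnt/db   # hypothetical device name

# copy data and config onto the pluggable volume, preserving ownership and permissions
sudo rsync -a /var/lib/postgresql/$PG_VER/main/ /mnt/db/pg_data/
sudo rsync -a /etc/postgresql/$PG_VER/main/     /mnt/db/pg_etc/

# keep the originals around as .OLD, then bind-mount the copies into place
sudo mv /var/lib/postgresql/$PG_VER/main /var/lib/postgresql/$PG_VER/main.OLD
sudo mv /etc/postgresql/$PG_VER/main     /etc/postgresql/$PG_VER/main.OLD
sudo mkdir /var/lib/postgresql/$PG_VER/main /etc/postgresql/$PG_VER/main
sudo mount --bind /mnt/db/pg_data /var/lib/postgresql/$PG_VER/main
sudo mount --bind /mnt/db/pg_etc  /etc/postgresql/$PG_VER/main

sudo systemctl start postgresql                  # then test, as in step 8

Add matching bind entries to /etc/fstab if you want this to survive a reboot.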
I have a big (300 GB) Postgres DB running on a GKE cluster (StatefulSet, SSD volume). I need to move this DB to another GKE cluster.
What is the easiest way to accomplish it?
I tried to do it with piping pg_dump/pg_restore, but it takes forever and for some reason, not all constraints/triggers were recreated.
Is there any proper way to gracefully shut down the Postgres server running in Kubernetes and copy the /pgdata folder directly (from one volume to another)?
Other ideas?
tnx
I have a few ideas (listed from the most promising to the least) about how you could approach this:
Remember to use the proper format when using pg_dump. The default plain (SQL) format cannot be read by pg_restore; either specify a non-plain format with pg_dump (custom, directory, or tar), or feed a plain dump to psql -f instead of pg_restore. Remember that it might take a while; see the sketch below.
You can use a tool to assist you with that, for example pghoard.
You can make a tarred backup of your DB and copy it as an object via Google Cloud Storage.
You can try to create PVCs manually, attach pods to those PVCs, and then copy your dataset onto those pods.
Finally, you may try to create an init container and use it later for your new cluster.
I suggest starting with point 1, as I think it is the most likely solution. If that is not enough, try the later points on the list.
Please let me know if that helped.
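For point 1, a rough sketch of what that could look like with the directory format and parallel jobs; the database name, hosts, and user below are hypothetical:

pg_dump  -h old-db-host -U postgres -Fd -j 4 -f /backup/mydb.dump mydb   # directory format, 4 parallel workers
createdb -h new-db-host -U postgres mydb
pg_restore -h new-db-host -U postgres -d mydb -j 4 /backup/mydb.dump

The custom format (-Fc) also works with pg_restore, but only the directory format supports parallel dumps with -j.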
I successfully set up a Ceph Object Storage Cluster based on this tutorial: https://www.twoptr.com/2018/05/installing-ceph-luminous.html.
Now I am stuck because I would like to add an MDS node in order to setup a Ceph Filesystem from that cluster. I have already set up the MDS node and tried to set up the FS, following several different guides and tutorials (e.g. the Ceph docs), but nothing has really worked so far.
I would be very grateful if someone could point me into the right direction of how to do this the right way.
My setup includes five VMs with Ubuntu 16.04 Server installed:
ceph-1 (mon, mgr, osd.0)
ceph-2 (osd.1)
ceph-3 (osd.2)
ceph-4 (radosgw, client)
ceph-5 (mds)
I also tried to create a pool, which seemed to work because it shows up in the Ceph Dashboard I installed on ceph-1. But I am not sure how to continue.
Thank you for your help!
Hi, your install is not standard. Please read the link below; it is very helpful for installing Ceph:
http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
Then use this one to create the filesystem:
http://docs.ceph.com/docs/mimic/cephfs/createfs/
For erasure coding, see this link:
http://karan-mj.blogspot.com/2014/04/erasure-coding-in-ceph.html
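For reference, once ceph-deploy has pushed the MDS daemon to ceph-5, the filesystem creation from the second link boils down to roughly this (pool names and PG counts are just examples):

ceph-deploy mds create ceph-5            # run from your admin/deploy node
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data
ceph mds stat                            # should eventually report the MDS as up:active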
I am having a problem with a VMware VM with CentOS 7 installed on it.
The lsblk command gives something like the output below.
df -h gives this:
I am trying to extend the root LVM volume into the remaining space, but I have not been able to do it no matter what I tried.
I tried fdisk /dev/sda to create a new partition and extend the LVM onto it, but fdisk gets stuck after asking for the partition number.
Some other useful commands give these results, in case they are helpful:
Any help would be appreciated. Thanks in advance.
From your screenshot, your sda2 partition is 199G and sda1 takes 1G, so your 200G sda disk is fully allocated. You cannot make a new partition on sda, which is why fdisk gets stuck there.
To resolve your issue, here are two options for reference, but before doing anything please make sure you back up all your important data or VM files.
Option 1: this is just my own idea, unverified:
From your vgs and pvs output, you can see that sda2 only contributes <29G to the whole VG (centos), so a very simple way of extending your root LV is:
1) pvresize /dev/sda2
After this, run pvs to check whether the PV size increased; if not, stop here.
2) vgextend centos /dev/sda2
After this, check vgs to see whether the size increased, then go on to the next step. (If /dev/sda2 is already part of the centos VG, this step will complain that the PV is already in the volume group and can simply be skipped; pvresize alone makes the extra space available to the VG.)
3) lvextend -l +100%FREE /dev/mapper/centos-root
After this, check lvs; if the root filesystem still shows the old size in df, go on:
4) try:
xfs_growfs /dev/mapper/centos-root
or
resize2fs /dev/mapper/centos-root
Option 2: the best practice is to use pvmove; this is strongly recommended for production environments. You can learn more from:
https://askubuntu.com/questions/161279/how-do-i-move-my-lvm-250-gb-root-partition-to-a-new-120gb-hard-disk
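Putting Option 1 together, and assuming the root filesystem is XFS (the CentOS 7 default) and that /dev/sda2 is already the PV backing the centos VG, the sequence would look roughly like this:

pvresize /dev/sda2                              # grow the PV to fill the enlarged partition
pvs                                             # PSize/PFree should have increased
lvextend -l +100%FREE /dev/mapper/centos-root   # hand all free extents to the root LV
xfs_growfs /                                    # grow XFS online (use resize2fs for ext4)
df -h /                                         # confirm the new size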
I executed the pvresize /dev/sda2 command, and afterwards I got pvs output like below.
After this, I tried to execute vgextend centos /dev/sda2, but I got this error:
However, the vgs and vgdisplay centos commands are now giving different output than before, as shown below.