How to create a Ceph filesystem after setting up a Ceph object storage cluster?

I successfully set up a Ceph Object Storage Cluster based on this tutorial: https://www.twoptr.com/2018/05/installing-ceph-luminous.html.
Now I am stuck, because I would like to add an MDS node in order to set up a Ceph filesystem on that cluster. I have already set up the MDS node and tried to create the FS, following several different guides and tutorials (e.g. the Ceph docs), but nothing has really worked so far.
I would be very grateful if someone could point me in the right direction on how to do this properly.
My setup includes five VMs with Ubuntu 16.04 server installed:
ceph-1 (mon, mgr, osd.0)
ceph-2 (osd.1)
ceph-3 (osd.2)
ceph-4 (radosgw, client)
ceph-5 (mds)
I also tried to create a pool, which seemed to work because it shows up in the Ceph dashboard that I installed on ceph-1. But I am not sure how to continue.
Thank you for your help!

Hi, your install is not standard. Please read the link below; it is very helpful for installing Ceph:
http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
Then, to create the filesystem:
http://docs.ceph.com/docs/mimic/cephfs/createfs/
For erasure coding, see the link below:
http://karan-mj.blogspot.com/2014/04/erasure-coding-in-ceph.html
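In practice, once the MDS daemon is running, creating the filesystem comes down to a handful of commands. A minimal sketch, assuming a ceph-deploy based Luminous install as in your tutorial (pool names and PG counts are placeholders for a small cluster):

ceph-deploy mds create ceph-5                   # deploy the MDS daemon to ceph-5 (run from the admin node)
ceph osd pool create cephfs_data 64             # data pool
ceph osd pool create cephfs_metadata 32         # metadata pool
ceph fs new cephfs cephfs_metadata cephfs_data  # metadata pool first, then data pool
ceph mds stat                                   # should eventually show the MDS as up:active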


How can I fix ceph commands hanging after a reboot?

I'm pretty new to Ceph, so I've included all the steps I used to set up my cluster, since I'm not sure what is or is not useful information for fixing my problem.
I have 4 CentOS 8 VMs in VirtualBox set up to teach myself how to bring up Ceph: one is a client and three are Ceph monitor nodes. Each Ceph node has six 8 GB drives. Once I learned how the networking worked, it was pretty easy.
I set each VM to have a NAT (for downloading packages) and an internal network that I called "ceph-public". This network would be accessed by each VM on the 10.19.10.0/24 subnet. I then copied the ssh keys from each VM to every other VM.
I followed this documentation to install cephadm, bootstrap my first monitor, and add the other two nodes as hosts. Then I added all available devices as OSDs, created my pools, created my images, and copied my /etc/ceph folder from the bootstrapped node to my client node. On the client, I ran rbd map mypool/myimage to map the image as a block device, then used mkfs to create a filesystem on it, and I was able to write data and see the IO from the bootstrapped node. All was well.
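Concretely, the client-side sequence looked roughly like this (rbd map prints the actual device name; ext4 here is just an example):

sudo rbd map mypool/myimage      # maps the image, e.g. to /dev/rbd0
sudo mkfs.ext4 /dev/rbd0         # put a filesystem on the block device
sudo mount /dev/rbd0 /mnt        # mount it and start writing data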
Then, as a test, I shut down and restarted the bootstrapped node. When it came back up, I ran ceph status, but it just hung with no output. Every ceph and rbd command now hangs, and I have no idea how to recover or properly reset my cluster.
Has anyone ever had the ceph command hang on their cluster, and what did you do to solve it?
Let me share a similar experience. Some time ago I also tried to run some tests on Ceph (Mimic, I think), and my VirtualBox VMs acted very strangely, nothing comparable to actual bare-metal servers, so please bear this in mind: such tests are not quite representative.
As regarding your problem, try to see the following:
Have at least 3 monitors (always an odd number). It's possible the hang is caused by a monitor election.
Make sure the networking part is OK (separate VLANs for Ceph servers and clients).
Check that DNS resolves correctly (you have added the server names to /etc/hosts).
...just my 2 cents...
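If it helps, a few things worth checking on the rebooted node (a hedged sketch; with cephadm the daemons run as containers, and mon.<hostname> is only the default daemon name):

sudo cephadm ls | grep -i mon                  # did the monitor daemon come back after the reboot?
sudo systemctl list-units 'ceph*' --all        # any failed ceph units?
sudo ceph status --connect-timeout 15          # give the CLI a timeout instead of letting it hang forever
sudo cephadm logs --name mon.$(hostname -s)    # the mon log usually says why quorum was not reached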

Adding OSDs to Ceph with WAL+DB

I'm new to Ceph and am setting up a small cluster. I've set up five nodes and can see the available drives, but I'm unsure exactly how to add an OSD and specify the locations for the WAL+DB.
Maybe my Google-fu is weak, but the only guides I can find refer to ceph-deploy which, as far as I can see, is deprecated. Guides that mention cephadm only cover adding a drive, not specifying the WAL+DB locations.
I want to add HDDs as OSDs and put the WAL and DB onto separate LVs on an SSD. How?!
It seems that for the more advanced cases, like using a dedicated WAL and/or DB, you have to use the concept of drivegroups.
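With cephadm, a drivegroup is written as an OSD service spec and applied with ceph orch. A minimal sketch, assuming the HDDs are your only rotational devices and the SSD the only non-rotational one (file name and service_id are made up):

cat > osd_spec.yml <<'EOF'
service_type: osd
service_id: hdd-with-ssd-db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1      # data goes on the HDDs
  db_devices:
    rotational: 0      # DB (and, by default, the WAL alongside it) goes on the SSD
EOF
ceph orch apply -i osd_spec.yml --dry-run      # preview first, then re-run without --dry-run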
If your Ceph version is Octopus (in which ceph-deploy is deprecated), I suppose you could try this:
sudo ceph-volume lvm create --bluestore --data /dev/data-device --block.db /dev/db-device
I built Ceph from source, but I think this method should be supported. You can run
ceph-volume lvm create --help
to see more parameters.
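If you want the DB and WAL on specific LVs of the SSD, you can pre-create them with LVM and pass them by vg/lv name. A sketch with placeholder device names and sizes:

sudo pvcreate /dev/sdf                 # the SSD
sudo vgcreate ceph-db /dev/sdf
sudo lvcreate -L 30G -n db-0 ceph-db   # DB logical volume
sudo lvcreate -L 2G -n wal-0 ceph-db   # WAL logical volume
sudo ceph-volume lvm create --bluestore --data /dev/sdb \
     --block.db ceph-db/db-0 --block.wal ceph-db/wal-0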

Zookeeper cluster with two nodes

I have two nodes and am setting up ZooKeeper (3.5.3) as a cluster. The recommendation is 3 nodes, but I don't have the option of getting one more node.
Please suggest whether there is any option via reconfig, weight changes, etc.
If it is for demo/prototype/experimental purposes, I would suggest using Vagrant.
I have done this on my Linux host machine. I have created a repository and the code is here. If you have a Linux machine, all you have to do is run vagrant up. Please try it and let me know your comments/questions.
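For reference, a two-node ensemble does start, but it is worse than a single standalone node for availability: a majority of 2 is 2, so both servers must be up and the ensemble tolerates zero failures. A minimal zoo.cfg sketch with placeholder hostnames:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk-node-1:2888:3888
server.2=zk-node-2:2888:3888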

How to install Multi Machine Cluster in Standalone Service Fabric?

I am going through the guide here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-for-windows-server
Section "Step 1B: Create a multi-machine cluster".
I have installed the cluster on one box and am trying to use the same JSON (as per the instructions) to install it on another box, so that I can have the cluster running on 2 VMs.
I am now getting this error when I run TestConfig.ps1:
Previous Fabric installation detected on machine XXX. Please clean the machine.
Previous Fabric installation detected on machine XXX. Please clean the machine.
Data Root node Dev Box1 exists on machine XXX in \XXX\C$\ProgramData\SF\Dev Box1. This is an artifact from a previous installation - please delete the directory corresponding to this node.
First, take a look at this link. These are the requirements that need to be met on each cluster node if you want to create the cluster.
The error is pretty obvious: you most likely already have SF installed on the machine, so either the SF runtime or some uncleaned cluster data is there.
Your first attempt should be to run the CleanFabric PowerShell script from the SF standalone package on each node. It cleans all SF data (cluster, runtime, registry, etc.). Try this and then run the TestConfiguration script once again. If this does not help, you will have to go to each node and manually delete any SF data that the TestConfiguration script complains about.
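For example, from an elevated PowerShell prompt (a sketch; the paths depend on where you unpacked the standalone package, and ClusterConfig.json stands for whichever sample JSON you edited):

.\CleanFabric.ps1                                                      # run on each node
.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.json   # then re-validate from the deployment machine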

I need a client library in Scala for Redis cluster

I have created a Redis instance in Amazon ElastiCache with cluster mode enabled.
I have a Scala client (scala-redis), but it only works in normal (cluster-disabled) mode, not in cluster mode: I'm getting the error MOVED 12351 127.0.0.1:7000. I have searched many clients but can't find one that supports cluster mode.
So please help me to overcome this issue.
https://github.com/etaty/rediscala is fairly active and it seems to support cluster mode. For context, the MOVED reply means the key's hash slot lives on another node in the cluster; a cluster-aware client follows these redirections automatically, while a non-cluster client just surfaces the error.