How to set an IO limit on an RBD image (Ceph QoS settings)

From the Ceph docs:
librbd supports limiting per-image IO, controlled by the following settings.
Running the commands from the docs prints unknown options qos ....
I haven't found anything on the web so far. Can anyone help me please?

Are you using Ceph Nautilus or Octopus? QoS on RBD images was released in Nautilus and later.

I've finally found the correct command here.
I'm posting it here hoping it helps someone in the future.
At the image level:
rbd config image set <pool>/<image> rbd_qos_iops_limit <value>
At the pool level:
rbd config pool set <pool> rbd_qos_iops_limit <value>
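For illustration, a short sketch of capping one image at 500 IOPS and roughly 10 MiB/s, then checking what was applied (the pool and image names here are placeholders, and these settings assume Nautilus or newer):
rbd config image set rbd/myimage rbd_qos_iops_limit 500
rbd config image set rbd/myimage rbd_qos_bps_limit 10485760
rbd config image list rbd/myimage | grep qos
To clear an override again, rbd config image remove rbd/myimage rbd_qos_iops_limit should work; a value of 0 means unlimited.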

Related

Where does Kubernetes download image to?

I've read through this page and I'm interested in where Kubernetes downloads an image to and how long it stores it for.
For example, let's say we have a large 3GB image. When I start up a pod, will the image be downloaded to the disk of the node the pod is being deployed to, and remain until that node is destroyed? If so, does that mean I could allocate only 400MB of memory to a pod that is using a 3GB image?
As correctly mentioned in the comments, the container runtime (CRI) does this rather than Kubernetes itself. Assuming you are running Docker:
If you want to access the image data directly, it’s usually stored in the following locations:
Linux: /var/lib/docker/
Windows: C:\ProgramData\DockerDesktop
macOS: ~/Library/Containers/com.docker.docker/Data/vms/0/
If you are using containerd as runtime then the images are stored at /var/lib/containerd
It is configured in /etc/containerd/config.toml, as shown below.
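(A minimal excerpt; the exact contents of config.toml vary by containerd version and distribution.)
# /etc/containerd/config.toml (excerpt)
# root is where containerd keeps persistent data, including image content
root = "/var/lib/containerd"
# state holds runtime state and is cleared on reboot
state = "/run/containerd"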
The containerd runtime is responsible for downloading the image and running containers from it.

Ceph usage control

I'm using Ceph Nautilus with 3 OSD+mgr nodes and 2 monitor+rgw nodes. What I need is to track per-user usage. I'm using Ceph as object storage and I need a report or info about each object gateway user's details, such as how many documents were written, how much space is used, etc. I found some articles about enabling usage on the RADOS Gateway (http://manpages.ubuntu.com/manpages/bionic/man8/radosgw.8.html) and I did.
But when I run sudo radosgw-admin usage show --bucket=test --start-date=2020-07-17
I get:
{ "entries": [], "summary": [] }
Is there any way to get this information? Am I missing something?
You should enable the usage log for RGW in ceph.conf:
rgw enable usage log = true
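The gateway has to be restarted for the setting to take effect, and the usage log only records operations performed after it was enabled. A rough sketch (the systemd unit name, user, and bucket are placeholders that depend on your deployment):
# restart the gateway so the new setting takes effect
sudo systemctl restart ceph-radosgw@rgw.gateway-node
# query per-user usage (only covers operations after the log was enabled)
radosgw-admin usage show --uid=testuser --start-date=2020-07-17
# current object counts and sizes per bucket are available even without the usage log
radosgw-admin bucket stats --bucket=test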

Adding OSDs to Ceph with WAL+DB

I'm new to Ceph and am setting up a small cluster. I've set up five nodes and can see the available drives but I'm unsure on exactly how I can add an OSD and specify the locations for WAL+DB.
Maybe my Google-fu is weak but the only guides I can find refer to ceph-deploy which, as far as I can see, is deprecated. Guides which mention cephadm only mention adding a drive but not specifying the WAL+DB locations.
I want to add HDDs as OSDs and put the WAL and DB onto separate LVs on an SSD. How?!
It seems that for the more advanced cases, like using a dedicated WAL and/or DB, you have to use the concept of drivegroups. A rough sketch of such a spec is shown below.
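For illustration, assuming a cephadm-managed cluster where the HDDs should get their DB on the SSDs (the service_id and host_pattern are placeholders, and the exact spec layout can differ between Ceph releases):
# osd_spec.yml
service_type: osd
service_id: osd_hdd_with_ssd_db
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
Applied with ceph orch apply osd -i osd_spec.yml, cephadm should then carve the DB LVs out of the SSDs for you.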
If your Ceph version is Octopus (in which ceph-deploy is deprecated), I suppose you could try this:
sudo ceph-volume lvm create --bluestore --data /dev/data-device --block.db /dev/db-device
I built Ceph from source, but I think this method should be supported, and you can run
ceph-volume lvm create --help
to see more parameters.
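If you would rather control the LV layout on the SSD yourself, one possible approach (the device, VG, and LV names are placeholders) is to create the LV first and point --block.db at it:
# create a volume group and a logical volume on the SSD
sudo vgcreate ceph-db /dev/nvme0n1
sudo lvcreate -L 60G -n db-sdc ceph-db
# create the OSD on the HDD with its DB on that LV
sudo ceph-volume lvm create --bluestore --data /dev/sdc --block.db ceph-db/db-sdc
When --block.wal is omitted, the WAL is kept inside the DB device, which is usually what you want when both would live on the same SSD anyway.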

How to create Ceph Filesystem after Ceph Object Storage Cluster Setup?

I successfully set up a Ceph Object Storage Cluster based on this tutorial: https://www.twoptr.com/2018/05/installing-ceph-luminous.html.
Now I am stuck because I would like to add an MDS node in order to setup a Ceph Filesystem from that cluster. I have already set up the MDS node and tried to set up the FS, following several different guides and tutorials (e.g. the Ceph docs), but nothing has really worked so far.
I would be very grateful if someone could point me into the right direction of how to do this the right way.
My setup includes 5 VMs with Ubuntu 16.04 server installed:
ceph-1 (mon, mgr, osd.0)
ceph-2 (osd.1)
ceph-3 (osd.2)
ceph-4 (radosgw, client)
ceph-5 (mds)
I also tried to create a pool, which seemed to work because it shows up in the Ceph Dashboard that I installed on ceph-1. But I am not sure how to continue.
Thank you for your help!
Hi, your install is not standard.
Please read the link below; it is very helpful for installing Ceph:
http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
then, to create the filesystem:
http://docs.ceph.com/docs/mimic/cephfs/createfs/
For erasure coding, see the link below:
http://karan-mj.blogspot.com/2014/04/erasure-coding-in-ceph.html
A minimal command sequence is sketched after the links.
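For reference, a minimal sketch of creating the filesystem on the setup described above, assuming the cluster was built with ceph-deploy as in the tutorial (the filesystem name, pool names, and PG counts are illustrative):
# deploy an MDS daemon on the ceph-5 node
ceph-deploy mds create ceph-5
# create the data and metadata pools
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 32
# create the filesystem on top of those pools
ceph fs new cephfs cephfs_metadata cephfs_data
# check that the MDS becomes active
ceph mds stat
The last command should report the MDS on ceph-5 going active once the filesystem exists.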

Linux kernel tune in Google Container Engine

I deployed a Redis container to Google Container Engine and got the following warning.
10:M 01 Mar 05:01:46.140 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
I know that to correct the warning I need to execute
echo never > /sys/kernel/mm/transparent_hugepage/enabled
I tried that in the container but it does not help.
How do I solve this warning in Google Container Engine?
As I understand, my pods are running on the node, and the node is a VM private for me only? So I ssh to the node and modify the kernel directly?
Yes, you own the nodes and can ssh into them and modify them as you need.
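A minimal sketch of doing that by hand, assuming gcloud is configured for your project (the node name and zone are placeholders taken from kubectl get nodes; note that GKE may recreate nodes, so the change needs to be reapplied unless you automate it):
# find the node the redis pod landed on
kubectl get pods -o wide
# ssh into that node
gcloud compute ssh gke-mycluster-default-pool-abc123 --zone us-central1-a
# on the node, disable THP for the running kernel
sudo sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'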