Ceph usage control

I'm using Ceph Nautilus with 3 OSD+mgr nodes and 2 monitor+rgw nodes. What I need is to track any user's usage. I'm using Ceph as object storage and I need a report or information about any object gateway user's details, such as how many documents are written, how much space they use, etc. I found some articles about enabling the usage log on the RADOS gateway (http://manpages.ubuntu.com/manpages/bionic/man8/radosgw.8.html) and I did so,
but when I run
sudo radosgw-admin usage show --bucket=test --start-date=2020-07-17
I get
{ "entries": [], "summary": [] }
Is there any way to get this information? Am I missing something?

You should enable the usage log for RGW in ceph.conf:
rgw enable usage log = true
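For example, a minimal sketch of the full flow (the rgw client section name and the user ID are placeholders for whatever your setup uses):

# in ceph.conf on the rgw nodes, under the rgw client section
[client.rgw.your-rgw-node]
rgw enable usage log = true

# restart the gateway so the setting takes effect, then query the usage log
sudo systemctl restart ceph-radosgw@rgw.your-rgw-node
sudo radosgw-admin usage show --uid=testuser --start-date=2020-07-17

# per-user totals (object count and bytes) are also available via user stats
sudo radosgw-admin user stats --uid=testuser --sync-stats

Note that the usage log only records operations performed after it was enabled.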

Related

Vespa.ai storage 0 down

I recently started using Vespa. I deployed a cluster on Kubernetes and indexed some data, but today one of the storage nodes shows as down in "vespa-get-cluster-state":
[vespa#vespa-0 /]$ vespa-get-cluster-state
Cluster feature:
feature/storage/0: down
feature/storage/1: up
feature/storage/2: up
feature/distributor/0: up
feature/distributor/1: up
feature/distributor/2: up
I don't know what this storage is... this cluster has 2 content nodes, 2 container nodes and 1 master.
How do I see the logs and diagnose why this node is down?
Just a tip: this question would work better on the Vespa Slack, or as a GitHub issue.
According to the message you shared, you have 3 content nodes (each has a "storage" service, responsible for storing the data, and a "distributor" service, responsible for managing a subset of the data space). The reason the node is down is not included in this message, but you can find it by running vespa-logfmt -l warning,error on your admin node.
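Since you are running on Kubernetes, a rough sketch of how you might get at those logs (the pod name vespa-0 is just an example; use whichever pod hosts your admin/config server):

# run vespa-logfmt inside the pod that holds the log server / admin node
kubectl exec -it vespa-0 -- vespa-logfmt -l warning,error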

Adding OSDs to Ceph with WAL+DB

I'm new to Ceph and am setting up a small cluster. I've set up five nodes and can see the available drives, but I'm unsure exactly how to add an OSD and specify the locations for the WAL+DB.
Maybe my Google-fu is weak but the only guides I can find refer to ceph-deploy which, as far as I can see, is deprecated. Guides which mention cephadm only mention adding a drive but not specifying the WAL+DB locations.
I want to add HDDs as OSDs and put the WAL and DB onto separate LVs on an SSD. How?!
It seems that for the more advanced cases, like using a dedicated WAL and/or DB, you have to use the concept of drivegroups (OSD service specifications) with cephadm.
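For example, a sketch of such a spec that puts data on rotational drives (HDDs) and the DB/WAL on non-rotational drives (SSDs); the file name, service_id and placement are illustrative, so adjust them to your hosts:

# osd_spec.yml -- apply with: ceph orch apply osd -i osd_spec.yml
service_type: osd
service_id: hdd_data_ssd_db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0

If no separate wal_devices are given, the WAL is colocated with the DB, which is usually what you want when both live on the same SSD.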
If your Ceph version is Octopus (where ceph-deploy is deprecated), I suppose you could try this:
sudo ceph-volume lvm create --bluestore --data /dev/data-device --block.db /dev/db-device
I built Ceph from source, but I think this method should be supported, and you can run
ceph-volume lvm create --help
to see more parameters.
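If you want the DB on a specific LV of the SSD, a rough sketch (the device names and the LV size are just examples):

# assumption: /dev/sdb is an HDD for data, /dev/sdc is the SSD that will hold the DB
sudo vgcreate ceph-db-vg /dev/sdc
sudo lvcreate -L 60G -n db-lv-0 ceph-db-vg
sudo ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db-vg/db-lv-0

You would create one DB LV on the SSD per HDD-backed OSD.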

AWS elastic search cluster becoming unresponsive

We have several AWS Elasticsearch domains which sometimes become unresponsive for no apparent reason. The ES endpoint as well as Kibana return Bad Gateway errors after a few minutes of trying to load the resources.
The node status message is the following (not that it's any help):
/_cluster/health: {"code":"ProxyRequestServiceException","message":"Unable to execute HTTP request: Read timed out (Service: null; Status Code: 0; Error Code: null; Request ID: null)"}
Error logs are activated for the cluster but do not show anything relevant for the time at which the cluster became inactive.
I would like to at least be able to restart the cluster but the status remains "processing" seemingly forever.
Unfortunately, if you are using the AWS Elasticsearch Service (as in not building it on your own EC2 instances), many... well... MOST... of the admin APIs and capabilities are restricted, so you cannot dig into it as much as you could if you built it from the ground up.
I have found that AWS Support does a pretty good job of getting to the bottom of things when needed, so I would suggest you open a support ticket.
I wish this weren't the case; using their service is nice and easy (you don't have to build and maintain the infra yourself), but you lose a LOT of capabilities from an admin or troubleshooting perspective. :(
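One thing you can check yourself is whether the domain is still stuck applying a configuration change, for example via the AWS CLI (the domain name is a placeholder):

# returns true while the domain has a configuration change in progress
aws es describe-elasticsearch-domain --domain-name my-domain --query 'DomainStatus.Processing'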

How to create Ceph Filesystem after Ceph Object Storage Cluster Setup?

I successfully set up a Ceph Object Storage Cluster based on this tutorial: https://www.twoptr.com/2018/05/installing-ceph-luminous.html.
Now I am stuck because I would like to add an MDS node in order to setup a Ceph Filesystem from that cluster. I have already set up the MDS node and tried to set up the FS, following several different guides and tutorials (e.g. the Ceph docs), but nothing has really worked so far.
I would be very grateful if someone could point me in the right direction on how to do this the right way.
My setup includes 5 VMs with Ubuntu 16.04 Server installed:
ceph-1 (mon, mgr, osd.0)
ceph-2 (osd.1)
ceph-3 (osd.2)
ceph-4 (radosgw, client)
ceph-5 (mds)
I also tried to create a pool which seemed to work, because it's showing in the Ceph Dashboard, which I installed on ceph-1. But I am not sure how to continue....
Thank you for your help!
Hi, your install is not standard. Please read the links below, which are very helpful for installing Ceph:
http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
then
http://docs.ceph.com/docs/mimic/cephfs/createfs/
For erasure coding, see the link below:
http://karan-mj.blogspot.com/2014/04/erasure-coding-in-ceph.html
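In short, a rough sketch for a ceph-deploy based setup like yours (pool names and PG counts are illustrative):

# from the deploy/admin node: install and start an MDS daemon on ceph-5
ceph-deploy mds create ceph-5

# create the data and metadata pools, then the filesystem itself
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 16
ceph fs new cephfs cephfs_metadata cephfs_data

# the MDS should now report as active
ceph mds stat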

Openshift says Quota limit reached

In OpenShift I have 4 projects and 25 GB of space allocated to the projects.
The DB I use is MongoDB (version 3.2).
In OpenShift I am getting the message that the quota limit has been reached, and if I check, all 25 GB have been used according to OpenShift.
But in MongoDB, if I check using db.stats() for all the projects, I have only used 5.7 GB.
I want to know where the remaining space is used, or how to find the exact space that I am using.
I think you should double-check your resource issues:
Check which resource limit was reached; is it storage?
You should check the event logs, which provide more details.
Check what quota limits were configured for your cluster or project.
Have you experienced any trouble after the message appeared, such as the DB hanging or the pod not responding?
These are just troubleshooting steps, but I hope they help you; a few commands that may help are sketched below.
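A rough sketch of those checks (the project name is a placeholder); note that a storage quota counts the requested size of your PersistentVolumeClaims, not the space MongoDB actually reports as used, which is usually where a gap like 25 GB vs. 5.7 GB comes from:

# see which quota is exhausted and what it measures
oc get resourcequota -n my-project
oc describe quota -n my-project

# recent events often name the quota that blocked a request
oc get events -n my-project

# PVC requests are what count against a storage quota
oc get pvc -n my-project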