How to set access to selected images at CEPH rbd pool? - ceph

I want to give a client access to selected images in an rbd pool.
How can I do it?
For now I use
ceph auth get-or-create client.cdata1 mon 'profile rbd' osd 'profile rbd pool=data1'
This allows client "cdata1" to see and mount all images in pool "data1". I want user cdata1 to have access to only selected images inside pool "data1". How do I do that?
Thanks

First you have to create an rbd image, for example:
rbd create rbd-image --size 1000000 --pool data1 --image-feature layering
rbd info data1/rbd-image
Then you can use the block_name_prefix:
ceph auth get-or-create client.cdata1 mon 'profile rbd' osd 'allow rwx pool data1 object_prefix rbd_data.xxxxxxxxxx; allow rwx pool data1 object_prefix rbd_header.xxxxxxxxxxx ;allow rx pool data1 object_prefix rbd_id.rbd-image' -o ceph.client.rbd-image.keyring
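Added example, not part of the original answer: the xxxxxxxxxx placeholders above are the image id that `rbd info` reports in block_name_prefix, and this sketch builds the caps string from that output. The function names and the sample `rbd info` text (including the id) are constructed for illustration.

```bash
#!/usr/bin/env bash
# Added example: derive per-image OSD caps from `rbd info` output.

# Constructed sample of `rbd info data1/rbd-image`; the id is made up.
sample_rbd_info() {
cat <<'EOF'
rbd image 'rbd-image':
    size 977 GiB in 250000 objects
    order 22 (4 MiB objects)
    id: 10226b8b4567
    block_name_prefix: rbd_data.10226b8b4567
    format: 2
    features: layering
EOF
}

# Read `rbd info` text on stdin, print the id that follows "rbd_data.".
image_id_from_info() {
    awk '$1 == "block_name_prefix:" { sub(/^rbd_data\./, "", $2); print $2 }'
}

# Print the OSD caps for one image: data and header objects are named by
# id, the rbd_id object by image name. object_prefix is a prefix match, so
# rbd_id.rbd-image also matches the id object of an image named rbd-image-old.
osd_caps_for_image() {
    pool=$1; image=$2; id=$3
    # Reject an empty or non-alphanumeric id so that a failed parse can
    # never produce a bare "rbd_data." prefix, which would match every image.
    case $id in
        ''|*[!0-9A-Za-z]*) echo "bad image id: '$id'" >&2; return 1 ;;
    esac
    printf 'allow rwx pool %s object_prefix rbd_data.%s; ' "$pool" "$id"
    printf 'allow rwx pool %s object_prefix rbd_header.%s; ' "$pool" "$id"
    printf 'allow rx pool %s object_prefix rbd_id.%s\n' "$pool" "$image"
}

# Print (do not run) the ceph auth command restricted to that one image.
print_auth_command() {
    client=$1; pool=$2; image=$3
    id=$(image_id_from_info)
    caps=$(osd_caps_for_image "$pool" "$image" "$id") || return 1
    printf "ceph auth get-or-create client.%s mon 'profile rbd' osd '%s'\n" \
        "$client" "$caps"
}

# Real use: rbd info data1/rbd-image | print_auth_command cdata1 data1 rbd-image
sample_rbd_info | print_auth_command cdata1 data1 rbd-image
```

Review the printed command before running it, then check the stored caps with `ceph auth get client.cdata1`.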

Related

How to connect python s3fs client to a running Minio docker container?

For test purposes, I'm trying to connect a module that introduces an abstraction layer over s3fs with custom business logic.
It seems like I have trouble connecting the s3fs client to the Minio container.
Here's how I created the container and attached the s3fs client (below describes how I validated the container is running properly)
import s3fs
import docker

client = docker.from_env()
container = client.containers.run(
    'minio/minio',
    "server /data --console-address ':9090'",
    environment={
        "MINIO_ACCESS_KEY": "minio",
        "MINIO_SECRET_KEY": "minio123",
    },
    ports={
        "9000/tcp": 9000,
        "9090/tcp": 9090,
    },
    volumes={'/tmp/minio': {'bind': '/data', 'mode': 'rw'}},
    detach=True)
container.reload()  # why reload: https://github.com/docker/docker-py/issues/2681
fs = s3fs.S3FileSystem(
    anon=False,
    key='minio',
    secret='minio123',
    use_ssl=False,
    client_kwargs={
        'endpoint_url': "http://localhost:9000"  # tried 127.0.0.1:9000 with no success
    }
)
===========
>>> fs.ls('/')
[]
>>> fs.ls('/data')
Bucket doesnt exists exception
check that the container is running:
➜ ~ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
127e22c19a65 minio/minio "/usr/bin/docker-ent…" 56 seconds ago Up 55 seconds 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp hardcore_ride
check that the relevant volume is attached:
➜ ~ docker exec -it 127e22c19a65 bash
[root@127e22c19a65 /]# ls -l /data/
total 4
-rw-rw-r-- 1 1000 1000 4 Jan 11 16:02 foo.txt
[root@127e22c19a65 /]# exit
Since I proved the volume binding is working properly by shelling into the container, I expected to see the same results when attaching the container's filesystem via the s3fs client.
What is the bucket name that was created as part of this setup?
From the docs I'm seeing you have to give <bucket_name>/<object_path> syntax to access the resources.
fs.ls('my-bucket')
['my-file.txt']
Also, if you look at the docs below, there are a couple of other ways to access it using fs.open. Can you give that a try?
https://buildmedia.readthedocs.org/media/pdf/s3fs/latest/s3fs.pdf

How to fix Unsupported Config Type "" error in Hyperledger Fabric on Kubernetes?

I am trying to follow this tutorial on deploying Hyperledger Fabric on Kubernetes. But instead of IBM Cloud, I'm doing it with Google Cloud. I encountered this same issue (see my logs below) and tried:
changing docker image to docker:18.09-dind in docker.yaml.
setting FABRIC_CFG_PATH=$PWD/configFiles instead of FABRIC_CFG_PATH=$PWD in create_channel.yaml according to another StackOverflow answer.
However, these workarounds did not work for me and I still encounter the error.
How do I fix this to be able to successfully deploy the network?
> ./setup_blockchainNetwork.sh
peersDeployment.yaml file was configured to use Docker in a container.
Creating Docker deployment
persistentvolume/docker-pv created
persistentvolumeclaim/docker-pvc created
service/docker created
deployment.apps/docker-dind created
Creating volume
The Persistant Volume does not seem to exist or is not bound
Creating Persistant Volume
Running: kubectl create -f /home/me/blockchain-network-on-kubernetes/configFiles/createVolume.yaml
persistentvolume/shared-pv created
persistentvolumeclaim/shared-pvc created
Success creating Persistant Volume
Creating Copy artifacts job.
Running: kubectl create -f /home/me/blockchain-network-on-kubernetes/configFiles/copyArtifactsJob.yaml
job.batch/copyartifacts created
Wating for container of copy artifact pod to run. Current status of copyartifacts-dcg4m is Pending
copyartifacts-dcg4m is now Running
Starting to copy artifacts in persistent volume.
Waiting for 10 more seconds for copying artifacts to avoid any network delay
Waiting for copyartifacts job to complete
Copy artifacts job completed
Generating the required artifacts for Blockchain network
Running: kubectl create -f /home/me/blockchain-network-on-kubernetes/configFiles/generateArtifactsJob.yaml
job.batch/utils created
Waiting for generateArtifacts job to complete
Waiting for generateArtifacts job to complete
Creating Services for blockchain network
Running: kubectl create -f /home/me/blockchain-network-on-kubernetes/configFiles/blockchain-services.yaml
service/blockchain-ca created
service/blockchain-orderer created
service/blockchain-org1peer1 created
service/blockchain-org2peer1 created
service/blockchain-org3peer1 created
service/blockchain-org4peer1 created
Creating new Deployment to create four peers in network
Running: kubectl create -f /home/me/blockchain-network-on-kubernetes/configFiles/peersDeployment.yaml
deployment.apps/blockchain-orderer created
deployment.apps/blockchain-ca created
deployment.apps/blockchain-org1peer1 created
deployment.apps/blockchain-org2peer1 created
deployment.apps/blockchain-org3peer1 created
deployment.apps/blockchain-org4peer1 created
Checking if all deployments are ready
Waiting for 15 seconds for peers and orderer to settle
Creating channel transaction artifact and a channel
Running: kubectl create -f /home/me/blockchain-network-on-kubernetes/configFiles/create_channel.yaml
job.batch/createchannel created
Waiting for createchannel job to be completed
Waiting for createchannel job to be completed
Create Channel Failed
> kubectl get pods
NAME READY STATUS RESTARTS AGE
blockchain-ca-58b4bbbcc7-dqmnw 1/1 Running 0 30s
blockchain-orderer-ddc9466d-2sqt8 1/1 Running 0 30s
blockchain-org1peer1-ffbf698bb-fd6nf 1/1 Running 0 29s
blockchain-org2peer1-98f7fb5f9-mb5m7 1/1 Running 0 29s
blockchain-org3peer1-75d6b8bf5c-bxd24 1/1 Running 0 29s
blockchain-org4peer1-675669ffff-b4dxj 1/1 Running 0 29s
copyartifacts-dcg4m 0/1 Completed 0 60s
createchannel-9wt54 1/2 Error 0 12s
docker-dind-54767c54c5-crk7b 0/1 CrashLoopBackOff 3 73s
utils-wbpcz 0/2 Completed 0 37s
> kubectl logs createchannel-9wt54 -c createchanneltx
/shared
systemd-private-3cbb0a492497473087eda0bb66fbd738-systemd-networkd.service-QHqKfL
systemd-private-3cbb0a492497473087eda0bb66fbd738-systemd-resolved.service-NuNfWF
systemd-private-3cbb0a492497473087eda0bb66fbd738-systemd-timesyncd.service-SzE37R
2021-02-03 08:49:16.970 UTC [common.tools.configtxgen] main -> INFO 001 Loading configuration
2021-02-03 08:49:16.970 UTC [common.tools.configtxgen.localconfig] Load -> PANI 002 Error reading configuration: Unsupported Config Type ""
2021-02-03 08:49:16.970 UTC [common.tools.configtxgen] func1 -> PANI 003 Error reading configuration: Unsupported Config Type ""
panic: Error reading configuration: Unsupported Config Type "" [recovered]
panic: Error reading configuration: Unsupported Config Type ""
...
FABRIC_CFG_PATH setting is wrong.
Currently, your error is a phrase that occurs when there is a problem with the syntax in the configtx.yaml file or when the file path is wrong and cannot be found.
For configtxgen, refer to the configtx.yaml file under FABRIC_CFG_PATH.
In the tutorial you provided, configtx.yaml is not found under configFiles directory and it exists under artifacts directory.
I'll suggest two of the easiest solutions out of many.
move artifacts/configtx.yaml to configFiles/configtx.yaml
mv ./artifacts/configtx.yaml configFiles/configtx.yaml
Or, set FABRIC_CFG_PATH to configFiles
export FABRIC_CFG_PATH=${PWD}/artifacts

1 pg undersized health warn in rook ceph on single node cluster(minikube)

I'm deploying rook-ceph into a minikube cluster. Everything seems to be working. I added 3 unformatted disks to the VM and they are connected. The problem that I'm having is when I run ceph status, I get a health warn message that tells me "1 pg undersized". How exactly do I fix this?
The documentation(https://docs.ceph.com/docs/mimic/rados/troubleshooting/troubleshooting-pg/) stated "If you are trying to create a cluster on a single node, you must change the default of the osd crush chooseleaf type setting from 1 (meaning host or node) to 0 (meaning osd) in your Ceph configuration file before you create your monitors and OSDs." I don't know where to make this configuration but if there's any other way to fix this that I should know of, please let me know. Thanks!
I came across this problem installing ceph using rook (v1.5.7) with a single data bearing host having multiple OSDs.
The install shipped with a default CRUSH rule replicated_rule which had host as the default failure domain:
$ ceph osd crush rule dump replicated_rule
{
    "rule_id": 0,
    "rule_name": "replicated_rule",
    "ruleset": 0,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -1,
            "item_name": "default"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}
I had to find out the pool name associated with pg 1 that was "undersized", luckily in a default rook-ceph install, there's only one:
$ ceph osd pool ls
device_health_metrics
$ ceph pg ls-by-pool device_health_metrics
PG OBJECTS DEGRADED ... STATE
1.0 0 0 ... active+undersized+remapped
And to confirm the pg is using the default rule:
$ ceph osd pool get device_health_metrics crush_rule
crush_rule: replicated_rule
Instead of modifying the default CRUSH rule, I opted to create a new replicated rule, but this time specifying the osd (aka device) type (docs: CRUSH map Types and Buckets), also assuming the default CRUSH root of default:
# osd crush rule create-replicated <name> <root> <type> [<class>]
$ ceph osd crush rule create-replicated replicated_rule_osd default osd
$ ceph osd crush rule dump replicated_rule_osd
{
    "rule_id": 1,
    "rule_name": "replicated_rule_osd",
    "ruleset": 1,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -1,
            "item_name": "default"
        },
        {
            "op": "choose_firstn",
            "num": 0,
            "type": "osd"
        },
        {
            "op": "emit"
        }
    ]
}
And then assigning the new rule to the existing pool:
$ ceph osd pool set device_health_metrics crush_rule replicated_rule_osd
set pool 1 crush_rule to replicated_rule_osd
$ ceph osd pool get device_health_metrics crush_rule
crush_rule: replicated_rule_osd
Finally confirming pg state:
$ ceph pg ls-by-pool device_health_metrics
PG OBJECTS DEGRADED ... STATE
1.0 0 0 ... active+clean
As you mentioned in your question, you should change your crush failure-domain-type to OSD, which means it will replicate your data between OSDs, not hosts. By default it is host, and when you have only one host it doesn't have any other hosts to replicate your data to, so your pg will always be undersized.
You should set osd crush chooseleaf type = 0 in your ceph.conf before you create your monitors and OSDs.
This will replicate your data between OSDs rather than hosts.
New account, so I can't add this as a comment. I wanted to expound on @zamnuts' answer, as I hit the same in my cluster with rook:v1.7.2. If you want to change the default device_health_metrics in the Rook/Ceph Helm chart or in the YAML, the following documents are relevant:
https://github.com/rook/rook/blob/master/deploy/examples/pool-device-health-metrics.yaml
https://github.com/rook/rook/blob/master/Documentation/helm-ceph-cluster.md

Failed to add new osd into monitor node in ceph

I am trying to add an OSD node with the following command
ceph-deploy osd prepare ceph-02:/dev/sdb
Found error that config file /etc/ceph/ceph.conf exists
with different content. used --overwrite-conf to overwrite.
How do I use overwrite-conf?
Error log:
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.4.1708 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-02
[ceph-02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists
with different content; use --overwrite-conf to overwrite
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
This is a normal behavior for a ceph-deploy command. Just run ceph-deploy --overwrite-conf osd prepare ceph-02:/dev/sdb. This will replace your existing /etc/ceph/ceph.conf.
This will resolve your issue.

Removing pool 'mon_allow_pool_delete config option to true before you can destroy a pool1_U (500)

I'm running proxmox and I try to remove a pool which I created wrong.
However it keeps giving this error:
mon_command failed - pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool1_U (500)
OK
But:
root@kvm-01:~# ceph -n mon.0 --show-config | grep mon_allow_pool_delete
mon_allow_pool_delete = true
root@kvm-01:~# ceph -n mon.1 --show-config | grep mon_allow_pool_delete
mon_allow_pool_delete = true
root@kvm-01:~# ceph -n mon.2 --show-config | grep mon_allow_pool_delete
mon_allow_pool_delete = true
root@kvm-01:~# cat /etc/ceph/ceph.conf
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.0.0.0/24
filestore xattr use omap = true
fsid = 41fa3ff6-e751-4ebf-8a76-3f4a445823d2
keyring = /etc/pve/priv/$cluster.$name.keyring
osd journal size = 5120
osd pool default min size = 1
public network = 10.0.0.0/24
[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring
[mon.0]
host = kvm-01
mon addr = 10.0.0.1:6789
mon allow pool delete = true
[mon.2]
host = kvm-03
mon addr = 10.0.0.3:6789
mon allow pool delete = true
[mon.1]
host = kvm-02
mon addr = 10.0.0.2:6789
mon allow pool delete = true
So that's my full config. Any idea why I am unable to delete my pools?
Another approach:
ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
ceph osd pool rm test-pool test-pool --yes-i-really-really-mean-it
You can set the config via the CLI or via the dashboard of Ceph under Cluster -> Configuration (advanced settings).
The CLI command is the following:
ceph config set mon mon_allow_pool_delete true
You need to do:
systemctl restart ceph-mon.target
Otherwise you can restart the server an infinite number of times and nothing happens
After editing the config you need to reboot the node. After the reboot everything went smoothly!
After adding the following lines to /etc/ceph/ceph.conf or /etc/ceph/ceph.d/ceph.conf and restarting the ceph.target service, the issue still exists.
[mon.1]
host = kvm-02
mon addr = 10.11.110.112:6789
mon allow pool delete = true