How to Mount Multiple CephFS on a Client Node? - ceph

I created three CephFS file systems and tried to mount them on a client node, but couldn't find a way to mount a specific one. I tried:
mount -t ceph mon-node:/ /mnt/apachefs/ -o mds_namespace=webfs,secret=ceph-authtool -p /etc/ceph/ceph.client.admin.keyring
But it fails. Is there any other way to mount multiple file systems on a client node using the kernel driver (mount.ceph) or ceph-fuse?

You can select which CephFS to mount with the following options:
-o mds_namespace ... kernel driver (mount -t ceph)
--client_mds_namespace ... FUSE client (ceph-fuse)
I am pretty sure -o mds_namespace did not work for you because of an old kernel version. If you are using CentOS 7, please test with ceph-fuse 12.2.4 or later and --client_mds_namespace; it worked fine in my environment.
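A minimal sketch of both variants, assuming a file system named webfs and that the secret file below exists on the client:
# kernel driver (requires a kernel recent enough to support mds_namespace)
mount -t ceph mon-node:/ /mnt/apachefs -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=webfs
# FUSE client (ceph-fuse 12.2.4 or later)
ceph-fuse --client_mds_namespace=webfs /mnt/apachefs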

If you are using a Debian-based system, you can install the ceph-fs-common package with apt: apt-get install -y ceph-fs-common.

On newer Ceph releases you can create several file systems and pick one per mount with the fs= option:
ceph fs volume create nextcloud [<placement>]
ceph fs volume create okd-admin [<placement>]
# /etc/fstab
### first file system
10.10.20.6:6789:/folder1 /USERDATA ceph name=admin,secretfile=/etc/ceph/secret.key,fs=nextcloud,noatime,_netdev 0 2
### second file system
10.10.20.5:6789:/folder2 /mnt/cephfs ceph name=okd-admin,secretfile=/etc/ceph/secret-openshift.key,fs=okd-admin,noatime,_netdev 0 2
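The same fs= option works for a one-off mount from the shell; a sketch using the first entry above, followed by applying everything in fstab:
mount -t ceph 10.10.20.6:6789:/folder1 /USERDATA -o name=admin,secretfile=/etc/ceph/secret.key,fs=nextcloud
mount -a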

Related

kubernetes - k3d: how to use a local directory as a Persistent Volume

I am using k3d to run local Kubernetes and have created a cluster with it.
Now I want to mount a local directory as a persistent volume.
How can I do this while using k3d?
I know that in minikube:
$ minikube start --mount-string="$HOME/go/src/github.com/nginx:/data" --mount
Then, if you mount /data into your Pod using hostPath, you get your local directory's data inside the Pod.
Is there a similar technique for k3d?
According to the answers to this GitHub issue, the feature you're looking for is not available yet.
Here is an idea from that link:
The simplest I guess would be to have a pretty generic mount containing all the code, e.g. in my case, I could do k3d cluster create -v "$HOME/git:/git#agent:*" to get all the repositories on my host present in all agent nodes to be used for hot-reloading.
According to this documentation, one can use the following command with the appropriate flag:
k3d cluster create NAME -v [SOURCE:]DEST[#NODEFILTER[;NODEFILTER...]]
This command mounts volumes into the nodes (format: [SOURCE:]DEST[#NODEFILTER[;NODEFILTER...]]).
Example:
k3d cluster create --agents 2 -v /my/path#agent:0,1 -v /tmp/test:/tmp/other#server:0
Here is also an interesting article on how volumes and storage work in a K3s cluster (with examples).
I think this feature is not yet available:
https://github.com/k3d-io/k3d/issues/566
So far we can only mount a volume when we create a new cluster, as shown below:
k3d cluster create mykube --volume $HOME/go/src/github.com/nginx:/data
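Once the directory is mounted into the node, a Pod can consume it with hostPath, just as in the minikube example. A minimal sketch (pod name, image and paths are only placeholders):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    hostPath:
      path: /data   # the DEST path mounted into the node above
EOF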

503 Service Temporarily Unavailable nginx/1.13.9 in Jenkins X

I have created a Kubernetes cluster using minikube and installed Jenkins X on it.
I am not able to access the Jenkins X dashboard.
Error: 503 Service Temporarily Unavailable nginx/1.13.9.
Note: I have also tried restarting the minikube cluster.
As mentioned by James Rawlings in the comments, it's likely to be a resource issue. The recommendations in the manual are:
A known good configuration on a 2015 model Macbook Pro is to use 8 GB of RAM, 8 cores, a 150 GB disk size and hyperkit.
The disk size is particularly large as a number of images will need to be downloaded.
So we highly recommend using one of the public clouds above to try out Jenkins X. They all have free tiers so it should not cost you any significant cash and it’ll give you a chance to try out the cloud.
A default minikube installation uses only 2048 MB of RAM, 2 CPU cores and a 20 GB disk. You can adjust the minikube VM size using command line options:
$ minikube start --cpus=8 --memory=8192 --disk-size=150g --vm-driver=hyperkit
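Note that these flags only take effect when the VM is created; if a minikube VM already exists, it has to be recreated for the new size to apply (a sketch, and keep in mind that minikube delete discards the cluster's state):
$ minikube delete
$ minikube start --cpus=8 --memory=8192 --disk-size=150g --vm-driver=hyperkit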
or use the jx tool for that.
For macOS:
$ brew install docker-machine-driver-hyperkit
# docker-machine-driver-hyperkit need root owner and uid
$ sudo chown root:wheel /usr/local/opt/docker-machine-driver-hyperkit/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s /usr/local/opt/docker-machine-driver-hyperkit/bin/docker-machine-driver-hyperkit
$ brew tap jenkins-x/jx
$ brew install jx
$ jx create cluster minikube
For Linux:
$ curl -L https://github.com/jenkins-x/jx/releases/download/v1.3.784/jx-linux-amd64.tar.gz | tar xzv
$ sudo mv jx /usr/local/bin
$ jx create cluster minikube

Unable to bootstrap (cloud type: localhost) - Error when installing a Kubernetes cluster locally with LXD/conjure-up

Using Ubuntu 18.04.
I am trying to install a Kubernetes cluster on my local machine (localhost) using this guide (LXD + conjure-up kubernetes):
https://kubernetes.io/docs/getting-started-guides/ubuntu/local/#before-you-begin
When I run:
conjure-up kubernetes
I select localhost for "Choose a cloud" and use the defaults for the rest of the install wizard. It then starts to install, and after 30-40 minutes it fails with an error.
Here is the log:
https://pastebin.com/raw/re1UvrUU
Where one error says:
2018-07-25 20:09:38,125 [ERROR] conjure-up/canonical-kubernetes - events.py:161 - Unhandled exception in <Task finished coro=<BaseBootstrapController.run() done, defined at /snap/conjure-up/1015/lib/python3.6/site-packages/conjureup/controllers/juju/bootstrap/common.py:15> exception=BootstrapError('Unable to bootstrap (cloud type: localhost)',)>
but that does not really help much.
Any suggestions as to why the install wizard/conjure-up fails?
Also based on this post:
https://github.com/conjure-up/conjure-up/issues/1308
I have tried disabling the firewall first:
sudo ufw disable
and then re-ran the conjure-up install wizard, but I get the same error.
Some more details on how I installed and configured LXD/conjure-up below:
$ snap install lxd
lxd 3.2 from 'canonical' installed
$ /snap/bin/lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, ceph, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=26GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
Configured group membership:
sudo usermod -a -G lxd $USER
newgrp lxd
Next, I installed:
sudo snap install conjure-up --classic
And then ran the installation:
conjure-up kubernetes
I wasn't able to reproduce your exact problem, but I got conjure-up + lxd installed and, in the end, Kubernetes on my freshly installed VirtualBox Ubuntu 18.04 (Desktop) VM. Hopefully this answer can help you somehow!
I looked through the kubernetes.io documentation page, and it lacked tiny bits of information; it does mention lxd, but not the lxd init part, which I assume you picked up in the conjure-up user manual.
So with that said, I followed the conjure-up user manual with some minor changes along the way. I'm assuming it's OK for you to use the edge version of conjure-up; I started off with the stable one but changed to edge when testing different combinations.
Also, please ensure that you have the recommended resources available as stated by the user manual; conjure-up and the Canonical Distribution of Kubernetes launch a number of containers for you. You might not need 3 x etcd, 3 x worker nodes and 2 x master, and if you don't, just tune the number of containers down in the conjure-up wizard.
These are the steps I performed (as my local user):
Make sure your Ubuntu box is updated: sudo apt update && sudo apt upgrade
Install conjure-up by running: sudo snap install conjure-up --classic --edge
Install lxd by running: sudo snap install lxd
With lxd comes the client part, lxc; if you run e.g. lxc list you should get an empty table (no containers started yet). I got a permission error at this point, so I ran sudo chown -R lxd:lxd /var/snap/lxd/ to change the owner and group of the lxd directory containing the socket you'll be communicating with using lxc.
Add your user to the lxd group: sudo usermod -a -G lxd $USER && newgrp lxd. Log off and on again to make this permanent and not only active in your current shell.
Now create an lxd bridge manually with the following command: lxc network create lxdbr1 ipv4.address=auto ipv4.nat=true ipv6.address=none ipv6.nat=false
Now run the init part of lxd with lxd init. Remember to answer no when asked to create a new local network bridge, and in the next prompt provide your newly created network bridge instead (lxdbr1). The rest of the answers can be left as defaults.
Now continue with conjure-up kubernetes and choose localhost as your type. For me the localhost choice was greyed out at first; it worked once I had created the network bridge manually rather than via the lxd init step.
Skip the additional components you can install, like Rancher, Prometheus etc.
Choose your new network bridge and the default storage pool, and proceed to the next step.
In the next step, customize your Kubernetes cluster if needed, then hit Deploy. And now you wait!
You can always troubleshoot and list all containers created with the lxc tool; see the sketch below. If you've ever used Docker, the lxc tool feels a lot like the docker client.
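For example (a sketch; the juju-* container name is only a placeholder, check lxc list for the real names on your machine):
lxc list                        # list all containers with their state and IPs
lxc info juju-xxxx-0            # show details for a single container
lxc exec juju-xxxx-0 -- bash    # open a shell inside a container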
And finally some thoughts and observations: there are a lot of moving parts to conjure-up, as you might have seen. It's actually described as: conjure-up is a thin layer spanning a few different underlying technologies - Juju, MAAS and LXD.
For reference, I ended up having the following versions installed:
lxd version 3.3
conjure-up version 2.6.1

minikube: installing packages using toolbox, but the container has no internet connection

I'm wondering how I can install a package inside the minikube VM, as I need some tools.
I have tried the /bin/toolbox container, but it does not have an internet connection.
[root@docker-fedora-24 ~]# dnf update --verbose
cachedir: /var/cache/dnf
DNF version: 1.1.9
Cannot download 'https://mirrors.fedoraproject.org/metalink?repo=updates-released-f24&arch=x86_64': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for https://mirrors.fedoraproject.org/metalink?repo=updates-released-f24&arch=x86_64 [Could not resolve host: mirrors.fedoraproject.org].
Error: Failed to synchronize cache for repo 'updates'
I have tried the same toolbox script on my computer and it works properly.
What configuration parameters am I missing in minikube or systemd-nspawn?
Or how can I build a customized minikube VM?
Thanks a lot
You can run minikube without a VM on your local Docker (if you use Linux):
minikube start --vm-driver=none
An alternative: run toolbox with docker run --net=host ... to make the container's networking more transparent. Troubleshoot your internet connection with nslookup, traceroute/tracepath, curl -v and ifconfig.
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:Ch04:_Simple_Network_Troubleshooting#.WfY1xGi0OUk
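For instance, a throwaway container with host networking makes it easy to test DNS resolution from inside (a sketch; the image tag is only illustrative):
docker run --rm -it --net=host fedora:24 /bin/bash
# inside the container:
nslookup mirrors.fedoraproject.org
curl -v https://mirrors.fedoraproject.org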
Minikube is not meant to be tweaked. The advised method is to prepare a Helm chart for your application. As part of the Helm chart you can add whatever tool you need in your Dockerfile, including make. Then you can install or upgrade your package in Kubernetes/minikube using Helm.
I had a similar problem when I wanted to use tcpdump in the minikube VM.
I ended up using minikube mount SRC-dir:DST-dir to mount a host folder inside the VM and copying the tcpdump binary, along with the libraries it depends on (libcrypto and libpcap), to the mount point.
Then I executed tcpdump from the minikube VM and it worked.
Note: my host arch and the minikube VM arch (x86_64) were the same.
Note also that export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:DST-dir has to be set inside the VM.
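Put together, the flow looks roughly like this (a sketch; the folder name and the exact library file names and paths vary by host distribution):
# on the host: share a folder with the VM and copy the binary plus its libraries into it
minikube mount $HOME/vm-tools:/vm-tools &
cp /usr/sbin/tcpdump /usr/lib/x86_64-linux-gnu/libpcap.so.* $HOME/vm-tools/
# inside the VM (minikube ssh):
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/vm-tools
/vm-tools/tcpdump -i eth0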

Is there a way to choose the path of named volumes with the default volume drivers in Docker?

Is there a way to choose the path of named volumes with the default volume drivers in Docker?
I know I can bind mount volumes in each service. I know I can create named volumes outside of services and then share them by mounting them in each service. But I can't find a way to get the data (to be shared) on a path that I select instead of Docker's /var/lib/docker/volumes/.
Has anyone tried to share a volume as a mount point between two different containers of the same docker-compose file, where the volume is at a location of your choosing?
You cannot do that with the built-in docker volume driver. You need to install a third-party plugin like the one below:
https://github.com/CWSpear/local-persist
curl -fsSL https://raw.githubusercontent.com/CWSpear/local-persist/master/scripts/install.sh | sudo bash
docker volume create -d local-persist -o mountpoint=/data/images --name=images
docker run -d -v images:/path/to/images/on/one/ one
docker run -d -v images:/path/to/images/on/two/ two
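For the docker-compose case in the question, the same named volume can be declared once in the compose file (a sketch, assuming the local-persist plugin is installed as above; service names, images and paths are placeholders):
version: "2"
services:
  one:
    image: one
    volumes:
      - images:/path/to/images/on/one
  two:
    image: two
    volumes:
      - images:/path/to/images/on/two
volumes:
  images:
    driver: local-persist
    driver_opts:
      mountpoint: /data/images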