Can't find the mkcephfs tool - ceph

I am starting with Ceph and following the Quick Start install tutorial from the official website. One step requires the mkcephfs tool, but I can't find it where it should be (/etc/ceph), and it isn't in /sbin either. I am trying to install and test Ceph on a single host. I have tried the installation in several ways but can't find the problem.
Note:
ceph version 0.87
System: "Ubuntu 14.04.1 LTS"
These are the tools installed:
dpkg -l | grep ceph
ii ceph 0.87-1trusty amd64 distributed storage and file system
ii ceph-common 0.87-1trusty amd64 common utilities to mount and interact with a ceph storage cluster
ii ceph-deploy 1.5.21trusty all Ceph-deploy is an easy to use configuration tool
ii ceph-fs-common 0.87-1trusty amd64 common utilities to mount and interact with a ceph file system
ii ceph-fuse 0.87-1trusty amd64 FUSE-based client for the Ceph distributed file system
ii ceph-mds 0.87-1trusty amd64 metadata server for the ceph distributed file system
ii libcephfs1 0.87-1trusty amd64 Ceph distributed file system client library
ii python-ceph 0.87-1trusty amd64 Python libraries for the Ceph distributed filesystem

Since you are using version 0.87 (Giant), you can find the matching docs here: http://ceph.com/docs/giant/start/.

The mkcephfs tool has been replaced by ceph-deploy in recent releases, and the Storage Cluster Quick Start shows how to use it.
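As a rough sketch only (node1 and the OSD directories are placeholders; the Giant Quick Start has the authoritative sequence), a single-host test with ceph-deploy looks something like this:
ceph-deploy new node1                      # node1 = your (single) host
# for a one-host test cluster, the Quick Start suggests adding to ceph.conf:
#   osd pool default size = 2
#   osd crush chooseleaf type = 0
ceph-deploy install node1
ceph-deploy mon create-initial
ceph-deploy osd prepare node1:/var/local/osd0 node1:/var/local/osd1
ceph-deploy osd activate node1:/var/local/osd0 node1:/var/local/osd1
ceph-deploy admin node1
ceph health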

I found the same (apparently outdated) docs at http://eu.ceph.com/docs/wip-6919/start/quick-start/ in a seemingly authoritative place.

Related

Can I install minikube on ubuntu without virtualBox?

I want to start practicing with k8s for the CKAD exam. I run Ubuntu 18.04.
Everywhere I look I see that I need to install VirtualBox for minikube. I believe VirtualBox is only needed if I don't start my cluster with a driver, but if I use the Docker driver when starting my cluster, shouldn't that be enough? Is microk8s a better option?
It seems that the preferred way is to use the --driver=docker driver instead of --driver=none for minikube; although it is technically not bare metal, it is significantly easier to configure and does not require root access. The 'none' driver is recommended for advanced users only. (Info below from https://minikube.sigs.k8s.io/docs/drivers/docker/.)
docker
Overview
The Docker driver allows you to install Kubernetes into an existing Docker install. On Linux, this does not require virtualization to be enabled.
Requirements
Install Docker 18.09 or higher
amd64 or arm64 system.
Usage
Start a cluster using the docker driver:
minikube start --driver=docker
To make docker the default driver:
minikube config set driver docker
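As a quick sanity check (assuming Docker is already running for your user), something like this should bring up and verify a single-node cluster without VirtualBox:
minikube start --driver=docker
minikube status
kubectl get nodes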
Yes you can. Check here.
Minikube also supports a --driver=none option that runs the Kubernetes components on the host and not in a VM. Using this driver requires Docker and a Linux environment but not a hypervisor.
Just run
$ minikube start
Caution: If you use the none driver, some Kubernetes components run as privileged containers that have side effects outside of the Minikube environment. Those side effects mean that the none driver is not recommended for personal workstations.
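If you still want to try the none driver, keep in mind that it generally has to run as root and expects a container runtime (e.g. Docker) already installed on the host; roughly:
sudo minikube start --driver=none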

Unable to bootstrap (cloud type: localhost) - Error when installing a Kubernetes cluster locally with LXD/Conjure-up

Using Ubuntu 18.04.
I am trying to install a kubernetes cluster on my local machine (localhost) using this guide (LXD + conjure-up kubernetes):
https://kubernetes.io/docs/getting-started-guides/ubuntu/local/#before-you-begin
When I run:
conjure-up kubernetes
In the wizard I select the Kubernetes installation, choose localhost for "Choose a cloud", and use the defaults for the rest. It then starts to install, and after 30-40 minutes it completes with an error.
Here is the log:
https://pastebin.com/raw/re1UvrUU
Where one error says:
2018-07-25 20:09:38,125 [ERROR] conjure-up/canonical-kubernetes - events.py:161 - Unhandled exception in <Task finished coro=<BaseBootstrapController.run() done, defined at /snap/conjure-up/1015/lib/python3.6/site-packages/conjureup/controllers/juju/bootstrap/common.py:15> exception=BootstrapError('Unable to bootstrap (cloud type: localhost)',)>
but that does not really help much.
Any suggestions as to why the install wizard/conjure-up fails?
Also based on this post:
https://github.com/conjure-up/conjure-up/issues/1308
I have tried first disabling the firewall:
sudo ufw disable
and then re-running the conjure-up install wizard, but I get the same error.
Some more details on how I installed and configured LXD/conjure-up below:
$ snap install lxd
lxd 3.2 from 'canonical' installed
$ /snap/bin/lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, ceph, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=26GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
Configured group membership:
sudo usermod -a -G lxd $USER
newgrp lxd
Next, I installed:
sudo snap install conjure-up --classic
And then ran the installation:
conjure-up kubernetes
I wasn't able to reproduce your exact problem, but I got conjure-up + lxd installed and, in the end, Kubernetes running on my freshly installed VirtualBox Ubuntu 18.04 (Desktop) VM. Hopefully this answer helps you somehow!
I looked through the kubernetes.io documentation page and it lacks a few bits of information; it does mention lxd but not the lxd init part, which I assume you picked up from the conjure-up user manual.
So, with that said, I followed the conjure-up user manual with some minor changes along the way. I'm assuming it's OK for you to use the edge version of conjure-up; I started off with the stable one but changed to edge while testing different combinations.
Also, please ensure that you have the recommended resources available as stated by the user manual; conjure-up and the Canonical Distribution of Kubernetes launch a number of containers for you. You might not need 3 x etcd, 3 x worker nodes and 2 x master, and if you don't, just tune the number of containers down in the conjure-up wizard.
These are the steps I performed (as my local user):
Make sure your Ubuntu box is updated: sudo apt update && sudo apt upgrade
Install conjure-up by running: sudo snap install conjure-up --classic --edge
Install lxd by running: sudo snap install lxd
With lxd comes the client part, which is lxc; if you run e.g. lxc list you should get an empty table (no containers started yet). I got a permission error at this point, so I ran the following to change the owner and group of the lxd directory containing the socket you'll be communicating with using lxc: sudo chown -R lxd:lxd /var/snap/lxd/
Add your user to the lxd group: sudo usermod -a -G lxd $USER && newgrp lxd, then log off and on to make this permanent and not only active in your current shell.
Now create an lxd bridge manually with the following command: lxc network create lxdbr1 ipv4.address=auto ipv4.nat=true ipv6.address=none ipv6.nat=false
Now run the init part of lxd with lxd init. Remember to answer no when asked to create a new local network bridge, and in the next prompt provide your newly created network bridge instead (lxdbr1). The rest of the answers can be left at their defaults.
Now continue by running conjure-up kubernetes and choose localhost as your type. For me the localhost choice was greyed out at first; it worked once I had created the network bridge manually rather than via the lxd init step.
Skip the additional components you can install, like Rancher, Prometheus etc.
Choose your new network bridge and the default storage pool, and proceed to the next step.
In the next step, customize your Kubernetes cluster if needed and then hit Deploy. And now you wait!
You can always troubleshoot and list all the containers created with the lxc tool. If you've ever used Docker, the lxc tool feels a lot like the docker client.
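For example (the container name below is just illustrative; yours will differ):
lxc list                         # list all containers conjure-up/juju created
lxc info juju-a1b2c3-0           # details about one container
lxc exec juju-a1b2c3-0 -- bash   # open a shell inside it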
And finally some thoughts and observations: there are a lot of moving parts to conjure-up, as you might have seen. It's actually described as: conjure-up is a thin layer spanning a few different underlying technologies - Juju, MAAS and LXD.
For reference, I ended up having the following versions installed:
lxd version 3.3
conjure-up version 2.6.1
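If you want to compare with your own box, the installed snap versions can be checked with, for example:
snap list lxd conjure-up
lxc --version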

Can't Install Kubernetes on Ubuntu 16.04

I have tried to install Kubernetes on 3 separate Ubuntu 16.04 machines, with poor results. On all three machines, the recommended installation, using snap and conjure-up, did not work:
gknight@pz1:~$ sudo snap install conjure-up --classic
[sudo] password for gknight:
gknight@pz1:~$ sudo reboot
gknight@pz1:~$ conjure-up kubernetes
dropping privs did not work
This is the snap version:
gknight@pz1:~$ snap --version
snap 2.33.1ubuntu2
snapd 2.33.1ubuntu2
series 16
ubuntu 16.04
kernel 4.4.0-130-generic
On two, local, machines, the repository method worked:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
add the following to sources.list.d, as kubernetes.list:
deb http://apt.kubernetes.io/ kubernetes-xenial main
apt-get update
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
But, on a remote 512 MB KVM VPS (PnZ Hosting), although Docker installs and runs just fine, when I install kubelet etc. and do nothing else, the load average soon climbs to 12 or so, and I can barely get through to the machine to reboot it. There are no obvious error messages (and swap is turned off).
So, does the "conjure-up" method work on any Ubuntu 16.04 today?
What is Kubernetes doing that's taking over the KVM machine?
Finally, is there any other way to install Kubernetes?
remote 512 MB KVM VPS
That's almost certainly the problem, as I don't know of much software nowadays that will run in that little memory. It matches your experience that the machine starts swapping like mad, driving the I/O pressure through the roof.
Agree with @Matthew & @Michael - 512 MB is not enough to run Kubernetes.
Increase your memory to at least 1 GB and retry.
Apiserver and etcd together are fine on a machine with 1 core and 1GB
RAM for clusters with 10s of nodes.
You can read more documentation here.
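Before retrying, you can confirm what the node actually has to work with, e.g.:
free -h          # total and available RAM
swapon --show    # prints nothing if swap is off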
The conjure-up method works fine for me using these instructions.
Ubuntu version:
Distributor ID: Ubuntu
Description: Ubuntu 16.04.4 LTS
Release: 16.04
Ways to install Kubernetes:
Local Kubernetes development with LXD
Running Kubernetes Locally via Minikube
Using kubeadm (see the sketch after this list)
Use prepared cloud solutions, for example Google Kubernetes Engine, Amazon EKS, or many others.
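As a very rough sketch of the kubeadm route (assuming kubelet, kubeadm and kubectl are already installed as shown in the question, and a machine with at least 2 GB of RAM; the Flannel manifest URL is the one from older docs and may have moved):
# on the master
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# install a pod network add-on (Flannel here, matching the CIDR above)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# then, on each worker, run the "kubeadm join ..." command printed by kubeadm init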

How to Mount Multiple CephFS on Client-Node?

I created three CephFS file systems and tried to mount them on a client node, but couldn't find a way to mount one specific CephFS. I tried:
mount -t ceph mon-node:/ /mnt/apachefs/ -o mds_namespace=webfs,secret=ceph-authtool -p /etc/ceph/ceph.client.admin.keyring
But it fails. Is there any other way to mount multiple file systems on a client node using the kernel driver (mount.ceph) or ceph-fuse?
It is possible to specify which CephFS to mount with the following options:
-o mds_namespace ... kernel driver (mount -t ceph)
--client_mds_namespace ... ceph-fuse
I am pretty sure that -o mds_namespace did not work because of an old kernel version. If you are using CentOS 7, please test with ceph-fuse 12.2.4 or a later version (using --client_mds_namespace); it worked fine in my environment.
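For example, for a file system named webfs (monitor address, mount point and key file below are placeholders), the two variants look roughly like this:
# kernel driver
mount -t ceph mon-node:6789:/ /mnt/webfs -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=webfs
# ceph-fuse
ceph-fuse /mnt/webfs --client_mds_namespace=webfs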
If you are using a Debian-based system, you can install the ceph-fs-common package with apt: apt-get install -y ceph-fs-common.
ceph fs volume create nextcloud [<placement>]
ceph fs volume create okd-admin [<placement>]
#/etc/fstab
### one
10.10.20.6:6789:/folder1 /USERDATA ceph name=admin,secretfile=/etc/ceph/secret.key,fs=nextcloud,noatime,_netdev 0 2
### two
10.10.20.5:6789:/folder2 /mnt/cephfs ceph name=okd-admin,secretfile=/etc/ceph/secret-openshift.key,fs=openshift,noatime,_netdev 0 2
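With those fstab entries in place (and the secret files present on the client), you can then mount and check them with, e.g.:
sudo mount -a
df -h /USERDATA /mnt/cephfs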

Postgres failed to install, 'unexpected character ";"'

So this morning I couldn't install Postgres 9.1 from the Ubuntu repo. I tried installing 9.2 from the Postgres repo, but it failed with the same error. The error trace is really uninformative (I don't even know what the source of this error is), and Google didn't tell me anything either.
It failed during installation with that same error, so I tried to create the cluster manually. But...
root@Ubuntu-1304-raring-64-minimal /home/tmp # pg_createcluster 9.2 main --start
Creating new cluster (configuration: /etc/postgresql/9.2/main, data: /var/lib/postgresql/9.2/main)...
FATAL: syntax error at line 5067: unexpected character ";"
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/9.2/main"
Error: initdb failed
What is wrong?
According to this output from dpkg -l 'postgres*':
ii postgresql-9.2 9.2.4-1.pgdg12.4+1 amd64 object-relational SQL database, version 9.2 server
un postgresql-client (no description available)
ii postgresql-client-9.1 9.1.9-1ubuntu1 amd64 front-end programs for PostgreSQL 9.1
ii postgresql-client-9.2 9.2.4-1.pgdg12.4+1 amd64 front-end programs for PostgreSQL 9.2
ii postgresql-client-common 140 all manager for multiple PostgreSQL client versions
ii postgresql-common 140 all
postgresql-9.2 is already installed (see the ii flags in the leftmost column), as well as the client tools for 9.1 and 9.2 from a mix of pgdg and ubuntu repositories.
Anyway, the error encountered by pg_createcluster is quite unusual. From the output, especially the line number, it would seem that the underlying initdb fails when playing the postgres.bki file.
For 9.2, this file is /usr/share/postgresql/9.2/postgres.bki. It contains low-level commands in an SQL-like dialect to populate the cluster with pre-initialized data (template databases, pre-defined types and views, etc.).
It's hard to imagine that this file would be corrupted, especially since you had a similar problem when installing 9.1, which comes with a different postgres.bki file right in its package.
Still, you may want to check, just in case, what is at and around line 5067. In my build directory for 9.2.4, I have this:
insert OID = 1 ( template1 10 ENCODING "LC_COLLATE" "LC_CTYPE" t t -1 0 0 1663 _null_)
And there isn't a ; character anywhere in the entire file.
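To see what your copy contains at that spot (path as above; adjust for your version), something like this will do:
sed -n '5060,5075p' /usr/share/postgresql/9.2/postgres.bki   # lines around 5067
grep -n ';' /usr/share/postgresql/9.2/postgres.bki           # any match would be suspicious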
Other than that, you may want to remove the entire postgresql installation to restart from a clean base:
# purge client packages
dpkg --purge postgresql-client-9.1 postgresql-client-9.2 postgresql-client-common
# purge server packages
dpkg --purge postgresql-9.2 postgresql-common
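After the purge (and after removing anything left under /var/lib/postgresql and /etc/postgresql if you really want a clean slate), reinstalling is just:
sudo apt-get update
sudo apt-get install postgresql-9.2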
You're... trying to install postgres?
Either one of these commands should get you on your way:
sudo apt-get install postgresql-9.1
sudo apt-get install postgresql-9.2
Here's the download page
If those commands returned an error, the error response would be helpful information to include in your question.
I'm afraid I've never tried manually creating the clusters before, so I'm probably not of much help.