Unable to bootstrap (cloud type: localhost) - Error when installing Kubernetes cluster locally with LXD/conjure-up - kubernetes

Using Ubuntu 18.04.
I am trying to install a kubernetes cluster on my local machine (localhost) using this guide (LXD + conjure-up kubernetes):
https://kubernetes.io/docs/getting-started-guides/ubuntu/local/#before-you-begin
When I run:
conjure-up kubernetes
In the install wizard I select localhost for "Choose a cloud" and use the defaults for the rest. It then starts to install and, after 30-40 minutes, it fails with a bootstrap error.
Here is the log:
https://pastebin.com/raw/re1UvrUU
Where one error says:
2018-07-25 20:09:38,125 [ERROR] conjure-up/canonical-kubernetes - events.py:161 - Unhandled exception in <Task finished coro=<BaseBootstrapController.run() done, defined at /snap/conjure-up/1015/lib/python3.6/site-packages/conjureup/controllers/juju/bootstrap/common.py:15> exception=BootstrapError('Unable to bootstrap (cloud type: localhost)',)>
but that does not really help much.
Any suggestions as to why the install wizard/conjure-up fails?
Also based on this post:
https://github.com/conjure-up/conjure-up/issues/1308
I have tried first disabling the firewall:
sudo ufw disable
and then re-running the installation/conjure-up install wizard, but I get the same error.
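For reference, the traceback points at Juju's bootstrap of the localhost/LXD cloud, so I assume running that step directly (if the juju client is installed) would reproduce the failure with more detail:
juju bootstrap localhost --debug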
Some more details on how I installed and configured LXD/conjure-up below:
$ snap install lxd
lxd 3.2 from 'canonical' installed
$ /snap/bin/lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, ceph, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=26GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
Configured group membership:
sudo usermod -a -G lxd $USER
newgrp lxd
Next, I installed:
sudo snap install conjure-up --classic
And then ran the installation:
conjure-up kubernetes

I wasn't able to reproduce your exact problem, but I got conjure-up + LXD installed and, in the end, Kubernetes running on my freshly installed VirtualBox Ubuntu 18.04 (Desktop) VM. Hopefully this answer helps you somehow!
I looked through the kubernetes.io documentation page and it lacks some small bits of information: it mentions lxd but not the lxd init part, which I assume you picked up from the conjure-up user manual.
With that said, I followed the conjure-up user manual with some minor changes along the way. I'm assuming it's OK for you to use the edge version of conjure-up; I started off with the stable one but switched to edge while testing different combinations.
Also, please ensure that you have the recommended resources stated by the user manual available; conjure-up and the Canonical Distribution of Kubernetes launch a number of containers for you. You might not need 3 x etcd, 3 x worker nodes and 2 x master, and if you don't, just tune the number of containers down in the conjure-up wizard.
These are the steps I performed (as my local user):
Make sure your Ubuntu box is updated: sudo apt update && sudo apt upgrade
Install conjure-up by running: sudo snap install conjure-up --classic --edge
Install lxd by running: sudo snap install lxd
With lxd comes the client part, lxc; if you run e.g. lxc list you should get an empty table (no containers started yet). I got a permission error at this point, so I ran sudo chown -R lxd:lxd /var/snap/lxd/ to change the owner and group of the lxd directory containing the socket you'll be communicating with via lxc.
Add your user to the lxd group: sudo usermod -a -G lxd $USER && newgrp lxd, then log off and on again to make this permanent and not only active in your current shell.
Now create an lxd bridge manually with the following command: lxc network create lxdbr1 ipv4.address=auto ipv4.nat=true ipv6.address=none ipv6.nat=false
Now run the init part of lxd with lxd init. Remember to answer no when asked to create a new local network bridge, and in the following prompt provide your newly created bridge (lxdbr1) instead; the rest of the answers can be left at their defaults (see the example prompt excerpt after these steps).
Now continue by running conjure-up kubernetes and choose localhost as your cloud. For me the localhost choice was greyed out at first; it only worked once I had created the network bridge manually instead of via the lxd init step.
Skip the additional components you can install like Rancher, Prometheus etc.
Choose your new network bridge and the default storage pool, proceed to the next step.
In the next step customize your Kubernetes cluster if needed and then hit Deploy. And now you wait!
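For reference, the relevant part of the lxd init dialog then looks roughly like this (prompt wording may differ slightly between LXD versions):
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: lxdbr1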
You can always troubleshoot and list the containers that were created with the lxc tool. If you've ever used Docker, the lxc tool feels a lot like the docker client.
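For example (the container name below is just a placeholder, take it from lxc list):
lxc list                              # show all containers, their state and IPs
lxc info <container-name>             # details about a single container
lxc exec <container-name> -- bash     # open a shell inside a container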
And finally, some thoughts and observations: there are a lot of moving parts to conjure-up, as you might have seen. It's actually described as: "conjure-up is a thin layer spanning a few different underlying technologies - Juju, MAAS and LXD."
For reference, I ended up with the following versions installed:
lxd version 3.3
conjure-up version 2.6.1

Related

Starting a PostgreSQL 9.6 server on Amazon Linux returns "unrecognized service"

I am attempting to start a PostgreSQL server on Amazon Linux using the command
sudo service postgresql start
I installed the server using this method. I have added it here for simplicity
sudo rpm -i https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-6-x86_64/pgdg-ami201503-96-9.6-2.noarch.rpm
and then
sudo yum install postgresql96-server.x86_64
after which I did this to install the command-line tools for Postgres
sudo yum install postgresql96.x86_64 postgresql96-libs.x86_64
Any suggestions on how I can start the server? I usually start the server using the command
sudo service postgresql start
However, it's not working in this case, as it says "unrecognized service".
I then tried this
postgres -D /usr/local/pgsql/data
postgres: could not access directory "/usr/local/pgsql/data": No such file or directory. Run initdb or pg_basebackup to initialize a PostgreSQL data directory.
Having the same issue, or a similar one. Maybe I installed pgsql from source, I don't remember. We could make our own service start files. How? Let's find out! >>RTFM<<, starting with what we already know:
man service
which leads us to chkconfig(8), so
man chkconfig
and it gives us an option
chkconfig --add ${svcname}
to add a brand new service under a name we choose!
But before we do, we might actually want to check what's already there. With
service --status-all
we get a list of all known services and their run status. And I found "postmaster" in my list, and as you might know, the PostgreSQL master server to connect to used to be called "postmaster". Yet, when I try
service postmaster status
it also tells me it doesn't know any such service. OK, forget it -- for now -- let's just move on with making our own! But I still want to peek at what's in run level 3 (the normal server run level). So I go
ls -1 /etc/rc.d/rc3.d |fgrep post
and there I find: "K36postgresql95"! So, accordingly our service name should be "postgresql95". Trying that:
service postgresql95 status
it now says "postmaster is stopped". Confusingly, the name the service reports for itself, both in service --status-all and when we inquire about it individually, is different from the name used to actually address it in the service command. Good to know. It's easy enough to search /etc/rc.d for the name of interest.
service postgresql95 start
now starts the service. And check with
psql -U ${pguser} ${pgdb}
and I find that working. So now all I need to do is enable the service to auto-start at system boot:
chkconfig --level 3 postgresql95 on
and that works, doesn't it?
PS: It doesn't matter that I happen to run version 9.5.
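PPS: To double-check which run levels the service is now enabled for, chkconfig can list it:
chkconfig --list postgresql95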
I recently installed PostgreSQL 9.2.24 on Amazon Linux 2 and I had to initialize the database manually before being able to create ROLE and DATABASE as I normally would on Ubuntu.
// initialize database after installing with yum
$ sudo postgresql-setup initdb
// start
$ sudo systemctl start postgresql.service
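To also have it start at boot, and to create a role and a database (the names below are just placeholders), something like this should work:
// enable at boot
$ sudo systemctl enable postgresql.service
// create a role and a database owned by it, as the postgres superuser
$ sudo -u postgres createuser myuser
$ sudo -u postgres createdb -O myuser mydb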

Can't Install Kubernetes on Ubuntu 16.04

I have tried to install Kubernetes on 3 separate Ubuntu 16.04 machines, with poor results. On all three machines, the recommended installation, using snap and conjure-up did not work:
gknight@pz1:~$ sudo snap install conjure-up --classic
[sudo] password for gknight:
gknight@pz1:~$ sudo reboot
gknight@pz1:~$ conjure-up kubernetes
dropping privs did not work
This is the snap version:
gknight@pz1:~$ snap --version
snap 2.33.1ubuntu2
snapd 2.33.1ubuntu2
series 16
ubuntu 16.04
kernel 4.4.0-130-generic
On two local machines, the repository method worked:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
add the following to sources.list.d, as kubernetes.list:
deb http://apt.kubernetes.io/ kubernetes-xenial main
apt-get update
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
But on a remote 512mb KVM VPS (PnZ Hosting), although Docker installs and runs just fine, when I install kubelet etc. and do nothing else, the load average soon climbs to 12 or so and I can barely get through to the machine to reboot it. There are no obvious error messages (and swap is turned off).
So, does the "conjure-up" method work on any Ubuntu 16.04 today?
What is Kubernetes doing that's taking over the KVM machine?
Finally, is there any other way to install Kubernetes?
remote 512mb KVM VPS
That's almost certainly the problem, as I don't know of much software nowadays that will run in that little memory. It matches your experience that the machine starts swapping like mad, driving the I/O pressure through the roof.
Agree with @Matthew & @Michael - 512mb is not enough to run Kubernetes.
Increase your memory to at least 1GB and retry.
Apiserver and etcd together are fine on a machine with 1 core and 1GB
RAM for clusters with 10s of nodes.
You can read more documentation here.
The conjure-up method works fine for me using these instructions.
Ubuntu version:
Distributor ID: Ubuntu
Description: Ubuntu 16.04.4 LTS
Release: 16.04
Ways to install Kubernetes:
Local Kubernetes development with LXD
Running Kubernetes Locally via Minikube (a minimal example is sketched after this list)
Using kubeadm
Use managed cloud solutions, for example Google Kubernetes Engine, Amazon EKS, or many others.
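As a quick illustration of the Minikube option, the basic flow is just this (assuming minikube, kubectl and a supported VM driver such as VirtualBox are already installed):
minikube start
kubectl cluster-info
kubectl get nodes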

minikube: installing a package using toolbox, but the container does not have an internet connection

I'm wondering how I can install a package inside the minikube VM. I need some tools.
I have tried the /bin/toolbox container, but it does not have an internet connection.
[root@docker-fedora-24 ~]# dnf update --verbose
cachedir: /var/cache/dnf
DNF version: 1.1.9
Cannot download 'https://mirrors.fedoraproject.org/metalink?repo=updates-released-f24&arch=x86_64': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for https://mirrors.fedoraproject.org/metalink?repo=updates-released-f24&arch=x86_64 [Could not resolve host: mirrors.fedoraproject.org].
Error: Failed to synchronize cache for repo 'updates'
I have tried the same toolbox script on my computer and it works properly.
What configuration parameters am I missing in minikube or systemd-nspawn?
Or how can I build a customized minikube VM?
Thanks a lot
You can run minikube without a VM, on your local Docker (if you use Linux):
minikube start --vm-driver=none
As an alternative, run toolbox with docker run --net=host ... to make the container's network more transparent. Troubleshoot your internet connection with nslookup, traceroute/tracepath, curl -v, ifconfig.
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:Ch04:_Simple_Network_Troubleshooting#.WfY1xGi0OUk
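For example, run from inside the container, these (all tools mentioned above) should show whether DNS or routing is the problem:
nslookup mirrors.fedoraproject.org
tracepath mirrors.fedoraproject.org
curl -v https://mirrors.fedoraproject.org
ifconfig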
Minikube is not meant to be tweaked. The advised method is to prepare a Helm chart for your application. As part of the Helm chart you can add whatever tools you need to your Dockerfile, including make. Then you can install or upgrade your package in Kubernetes/minikube using Helm.
I had a similar problem when I wanted to use tcpdump in the minikube VM.
I ended up using minikube mount SRC-dir:DST-dir to mount the host folder inside the VM and copying the tcpdump binary along with dependent libs (libcrypto and libpcap) to the mount point.
Then I executed tcpdump from the minikube VM and it worked.
Note: My host arch and the minikube VM arch (x86_64) was the same.
Note also: you have to export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:DST-dir inside the VM.
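A sketch of that flow, with /home/user/tools and /tools as placeholder SRC-dir/DST-dir values (minikube mount stays in the foreground, so run it in a separate terminal):
minikube mount /home/user/tools:/tools
minikube ssh
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/tools
/tools/tcpdump -i eth0    # interface name may differ in your VM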

Local Kubernetes on CentOS

I am trying to install Kubernetes locally on my CentOS. I am following this blog http://containertutorials.com/get_started_kubernetes/index.html, with appropriate changes to match CentOS and latest Kubernetes version.
The ./kube-up.sh script runs and exits with no errors, but I don't see the server started on port 8080. Is there a way to find out what the error was, and is there any other procedure to follow on CentOS 6.3?
The easiest way to install a Kubernetes cluster is using kubeadm. The initial post detailing the setup steps is here, and the detailed documentation for kubeadm can be found here. With this you will get the latest released Kubernetes.
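For the kubeadm route, after installing kubeadm and kubelet as per the linked documentation, the core of it is roughly this (the standard kubeadm flow; you still need to apply a pod network add-on afterwards):
sudo kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config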
If you really want to use the script to bring up the cluster, I did the following:
Install the required packages
yum install -y git docker etcd
Start docker process
systemctl enable --now docker
Install golang
Use the latest Go version, because the default CentOS golang package is old and we need at least go1.7 to compile Kubernetes:
curl -O https://storage.googleapis.com/golang/go1.8.1.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.8.1.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
Setup GOPATH
export GOPATH=~/go
export GOBIN=$GOPATH/bin
export PATH=$PATH:$GOBIN
Download k8s source and other golang dependencies
Note: this might take some time depending on your internet speed
go get -d k8s.io/kubernetes
go get -u github.com/cloudflare/cfssl/cmd/...
Start cluster
cd $GOPATH/src/k8s.io/kubernetes
./hack/local-up-cluster.sh
In a new terminal:
alias kubectl=$GOPATH/src/k8s.io/kubernetes/cluster/kubectl.sh
kubectl get nodes
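Once the node shows up as Ready, a quick smoke test could be (nginx is just an arbitrary image):
kubectl run nginx --image=nginx
kubectl get pods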

How to connect to your Cloud SQL instance with a kubernetes service?

I'm creating a container that connects to a Cloud SQL database. When I run the image with Kubernetes, it does not have an external IP that I can use to allow the new image to connect to the database. And since this is part of the init configuration, I can't wait until I know the public IP to add it to the database whitelist.
I know there are ways to connect to a database through services in the same cluster, but I can't figure out how to connect to the Cloud SQL database provided by Google.
There are two ways to solve that:
The first option is to use the Cloud SQL proxy, following the instructions available at: https://cloud.google.com/sql/docs/sql-proxy
In your Docker image you need to ensure that FUSE is available in your installation; it wasn't in my case (using ubuntu:trusty-20160119 as the base image). If you need to enable that, use the following steps in your Dockerfile:
# install fusermount
RUN apt-get update && apt-get install -y build-essential wget
RUN wget https://github.com/libfuse/libfuse/releases/download/fuse_2_9_5/fuse-2.9.5.tar.gz
RUN tar -xzvf fuse-2.9.5.tar.gz
RUN cd fuse-2.9.5 && ./configure && make -j8 && make install
Then at the startup of your container you must run a script that opens the socket, as described in https://cloud.google.com/sql/docs/sql-proxy#example_proxy_invocations_and_connection_strings.
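As a rough sketch of such a startup script (the instance connection name and the application command are placeholders; check the proxy documentation linked above for the exact invocation):
#!/bin/sh
# start the Cloud SQL proxy in the background, listening on a local TCP port
./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:5432 &
# then start the application, connecting to the database via 127.0.0.1:5432
exec /app/start.sh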
The second way is simply to add the IPs of the nodes that make up the Kubernetes cluster to the Cloud SQL whitelist.
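To find those node IPs, something like this works (the EXTERNAL-IP column is the one to whitelist):
kubectl get nodes -o wide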
I prefer the first option, because it works on any machine where I deploy the image and I don't need to worry about adding or removing IPs if I add more nodes to the Kubernetes cluster.