Use jessie (Debian) for Kubernetes cluster - kubernetes

I want to set up gcsFUSE on my cluster. According to the gcsFUSE page, this is easier to do on Debian jessie.
The config-default.sh that kube-up.sh uses contains the following:
NODE_OS_DISTRIBUTION=${KUBE_NODE_OS_DISTRIBUTION:-${KUBE_OS_DISTRIBUTION:-debian}}
which sets up wheezy. What should I change this to in order to get jessie? I've tried replacing debian with the values debian-8 and jessie, without any luck:
$ cluster/kube-up.sh
Cannot operate on cluster using node os distro: jessie

From reading cluster/gce/util.sh, you can use KUBE_GCE_MASTER_IMAGE / KUBE_GCE_MASTER_PROJECT and KUBE_GCE_NODE_IMAGE / KUBE_GCE_NODE_PROJECT for that purpose.
E.g. with:
KUBE_GCE_MASTER_IMAGE=debian-8-jessie-v20170124
KUBE_GCE_MASTER_PROJECT=debian-8
KUBE_GCE_NODE_IMAGE=debian-8-jessie-v20170124
KUBE_GCE_NODE_PROJECT=debian-8
You can find the relevant images with:
gcloud compute images list --filter=debian
These environment variables are then used to create the instances with
gcloud compute instance-templates create ...
The related documentation has some further details.
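Putting it together, a minimal invocation might look like this (the image name comes from the gcloud listing above and may have been superseded by a newer build):
# Pin both the master and the nodes to a Debian 8 (jessie) image, then bring the cluster up.
export KUBE_GCE_MASTER_IMAGE=debian-8-jessie-v20170124
export KUBE_GCE_MASTER_PROJECT=debian-8
export KUBE_GCE_NODE_IMAGE=debian-8-jessie-v20170124
export KUBE_GCE_NODE_PROJECT=debian-8
cluster/kube-up.sh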

Related

kubernetes - k3d how to use local directory as a Persistent volume

I am using k3d to run local Kubernetes.
I have created a cluster using k3d.
Now I want to mount a local directory as a persistent volume.
How can I do this while using k3d?
I know that in minikube:
$ minikube start --mount-string="$HOME/go/src/github.com/nginx:/data" --mount
Then, if you mount /data into your Pod using hostPath, you will get your local directory's data into the Pod.
Is there a similar technique available while using k3d?
According to the answers to this GitHub issue, the feature you're looking for is not available yet.
Here is an idea from this link:
The simplest I guess would be to have a pretty generic mount containing all the code, e.g. in my case, I could do k3d cluster create -v "$HOME/git:/git#agent:*" to get all the repositories on my host present in all agent nodes to be used for hot-reloading.
According to this documentation, one can use the following command with the appropriate flag:
k3d cluster create NAME -v [SOURCE:]DEST[#NODEFILTER[;NODEFILTER...]]
This command mounts volumes into the nodes (format: [SOURCE:]DEST[#NODEFILTER[;NODEFILTER...]]).
Example:
k3d cluster create --agents 2 -v /my/path#agent:0,1 -v /tmp/test:/tmp/other#server:0
Here is also an interesting article how volumes and storage work in a K3s cluster (with examples).
I think this feature is not yet available; see https://github.com/k3d-io/k3d/issues/566.
So far we can only mount volumes when we create a new cluster:
k3d cluster create mykube --volume $HOME/go/src/github.com/nginx:/data
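Once the directory is mapped into the node at cluster creation, a Pod can consume it through a hostPath volume, just like in the minikube example above; a minimal sketch (pod name and paths are illustrative):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostpath
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: local-data
  volumes:
  - name: local-data
    hostPath:
      path: /data   # the in-node path that k3d mounted above
EOF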

Can I install minikube on Ubuntu without VirtualBox?

I want to start practicing with k8s for the CKAD exam. I run Ubuntu 18.04.
I noticed everywhere that I need to download VirtualBox for minikube. I believe VirtualBox is only needed if I don't start my cluster with a driver, but if I use the Docker driver when starting my cluster, shouldn't that be enough? Is microk8s a better option?
It seems that the preferred way is to use the --driver=docker driver instead of --driver=none for minikube; while it is technically not bare metal, it is significantly easier to configure and does not require root access. The 'none' driver is recommended for advanced users only. (The info below is from https://minikube.sigs.k8s.io/docs/drivers/docker/)
docker
Overview
The Docker driver allows you to install Kubernetes into an existing Docker install. On Linux, this does not require virtualization to be enabled.
Requirements
Install Docker 18.09 or higher
amd64 or arm64 system.
Usage
Start a cluster using the docker driver:
minikube start --driver=docker
To make docker the default driver:
minikube config set driver docker
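After starting with the docker driver, a quick sanity check could look like this (assuming kubectl is installed):
minikube status     # host, kubelet and apiserver should report Running
kubectl get nodes   # a single Ready node should be listed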
Yes, you can. Check here.
Minikube also supports a --driver=none option that runs the Kubernetes components on the host and not in a VM. Using this driver requires Docker and a Linux environment but not a hypervisor.
Just run:
$ minikube start
Caution: If you use the none driver, some Kubernetes components run as privileged containers that have side effects outside of the Minikube environment. Those side effects mean that the none driver is not recommended for personal workstations.
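If you still want to try the none driver despite that caution, note that it runs the components directly on the host and therefore needs root; a minimal sketch, assuming Docker is already installed on the host:
sudo minikube start --driver=none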

Unable to bootstrap (cloud type: localhost) - Error when installing Kubernetes cluster locally with LXD/Conjure-up

Using Ubuntu 18.04.
I am trying to install a kubernetes cluster on my local machine (localhost) using this guide (LXD + conjure-up kubernetes):
https://kubernetes.io/docs/getting-started-guides/ubuntu/local/#before-you-begin
When I run:
conjure-up kubernetes
I select the Kubernetes installation, choose localhost for "Choose a cloud", and use the defaults for the rest of the install wizard. It then starts to install, and after 30-40 minutes it fails with an error.
Here is the log:
https://pastebin.com/raw/re1UvrUU
Where one error says:
2018-07-25 20:09:38,125 [ERROR] conjure-up/canonical-kubernetes - events.py:161 - Unhandled exception in <Task finished coro=<BaseBootstrapController.run() done, defined at /snap/conjure-up/1015/lib/python3.6/site-packages/conjureup/controllers/juju/bootstrap/common.py:15> exception=BootstrapError('Unable to bootstrap (cloud type: localhost)',)>
but that does not really help much.
Any suggestions as to why the install wizard/conjure-up fails?
Also based on this post:
https://github.com/conjure-up/conjure-up/issues/1308
I have tried to first disable the firewall:
sudo ufw disable
and then re-run the conjure-up install wizard, but I get the same error.
Some more details on how I installed and configured LXD/conjure-up below:
$ snap install lxd
lxd 3.2 from 'canonical' installed
$ /snap/bin/lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, ceph, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=26GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
Configured group membership:
sudo usermod -a -G lxd $USER
newgrp lxd
Next installed:
sudo snap install conjure-up --classic
And then ran installation:
conjure-up kubernetes
I wasn't able to reproduce your exact problem, but I got conjure-up + lxd installed, and in the end Kubernetes, on my newly installed VirtualBox Ubuntu 18.04 (Desktop) VM. Hopefully this answer helps you somehow!
I looked through the kubernetes.io documentation page, and it lacks some small bits of information; it does mention lxd, but not the part with lxd init, which I assume you picked up in the conjure-up user manual.
So with that said, I followed the conjure-up user manual with some minor changes along the way. I'm assuming that it's OK for you to use the edge version of conjure-up; I started off with the stable one but changed to edge when testing different combinations.
Also, please ensure that you have the recommended resources available as stated by the user manual; conjure-up and the Canonical Distribution of Kubernetes launch a number of containers for you. You might not need 3 x etcd, 3 x worker nodes and 2 x master, and if you don't, just tune the number of containers down in the conjure-up wizard.
These are the steps I performed (as my local user):
Make sure your Ubuntu box is updated: sudo apt update && sudo apt upgrade
Install conjure-up by running: sudo snap install conjure-up --classic --edge
Install lxd by running: sudo snap install lxd
With lxd comes the client part, lxc; if you run e.g. lxc list you should get an empty table (no containers started yet). I got a permission error at this point, so I ran the following: sudo chown -R lxd:lxd /var/snap/lxd/ to change the owner and group of the lxd directory containing the socket you'll be communicating with using lxc.
Add your user to the lxd group: sudo usermod -a -G lxd $USER && newgrp lxd, then log off and on to make this permanent and not only active in your current shell.
Now create a lxd bridge manually with the following command: lxc network create lxdbr1 ipv4.address=auto ipv4.nat=true ipv6.address=none ipv6.nat=false
Now let's run the init part of lxd with lxd init. Remember to answer no when asked to create a new local network bridge; at the next prompt, provide your newly created network bridge instead (lxdbr1). The rest of the answers can be left as defaults.
Now continue by running conjure-up kubernetes and choose localhost as your cloud type. For me the localhost choice was greyed out at first; it worked once I had created the network bridge manually rather than via the lxd init step.
Skip the additional components you can install like Rancher, Prometheus etc.
Choose your new network bridge and the default storage pool, proceed to the next step.
In the next step customize your Kubernetes cluster if needed and then hit Deploy. And now you wait!
You can always troubleshoot and list all the containers created using the lxc tool. If you've ever used Docker, the lxc tool feels a lot like the docker client.
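For example, a couple of quick checks with lxc (the bridge name matches the steps above):
lxc list                  # containers launched for the Kubernetes deployment
lxc network show lxdbr1   # inspect the manually created bridge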
And finally, some thoughts and observations: there are a lot of moving parts to conjure-up, as you might have seen. It's actually described as: conjure-up is a thin layer spanning a few different underlying technologies - Juju, MAAS and LXD.
For reference, I ended up having the following versions installed:
lxd version 3.3
conjure-up version 2.6.1

How to use DockerOperator from Airflow in Kubernetes

From this example, DockerOperator has the docker_url parameter, which is the "URL of the host running the docker daemon".
But when I run on Kubernetes Engine on Google Cloud Platform, how can I find this docker_url on Kubernetes?
You can run the following command to find out the docker url:
$ docker-machine url [docker_machine_name]
Docker machine is not installed on the container images by default. You will have to install docker-machine manually by following these steps.
You will also have to use the Ubuntu image if you would like this functionality. I tried to install docker-machine using a COS image, and it does not work since that image does not have the necessary dependencies.
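Once docker-machine is installed, querying the URL could look like this (the machine name is illustrative):
docker-machine ls              # machines known to this client
docker-machine url mymachine   # prints the daemon URL, e.g. tcp://<ip>:2376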

How to Mount Multiple CephFS on Client-Node?

I created three CephFS file systems and tried to mount them on a client node, but couldn't find any way to mount a specific one. I tried:
mount -t ceph mon-node:/ /mnt/apachefs/ -o mds_namespace=webfs,secret=ceph-authtool -p /etc/ceph/ceph.client.admin.keyring
But it fails. Is there any other way to mount multiple file systems on a client node using the kernel driver (mount.ceph) or ceph-fuse?
It is possible to specify a particular CephFS with the following options:
-o mds_namespace ... kernel driver (mount -t ceph)
--client_mds_namespace ... FUSE client (ceph-fuse)
I am pretty sure that -o mds_namespace did not work due to an old kernel version. If you are using CentOS 7, please test with ceph-fuse 12.2.4 or a later version (with --client_mds_namespace). It worked fine in my environment.
If you are using a Debian-based system, you can install the ceph-fs-common package with apt, like: apt-get install -y ceph-fs-common.
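For example, to select the webfs file system from the question with each client (a sketch; the monitor address and credentials are illustrative):
# kernel driver: choose the file system with mds_namespace
mount -t ceph mon-node:/ /mnt/apachefs -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=webfs
# FUSE client: the equivalent option
ceph-fuse /mnt/apachefs --client_mds_namespace=webfs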
Newer Ceph releases let you create multiple file system volumes, e.g.:
ceph fs volume create nextcloud [<placement>]
ceph fs volume create okd-admin [<placement>]
Each volume can then be selected in /etc/fstab with the fs= option:
#/etc/fstab
### one
10.10.20.6:6789:/folder1 /USERDATA ceph name=admin,secretfile=/etc/ceph/secret.key,fs=nextcloud,noatime,_netdev 0 2
### two
10.10.20.5:6789:/folder2 /mnt/cephfs ceph name=okd-admin,secretfile=/etc/ceph/secret-openshift.key,fs=okd-admin,noatime,_netdev 0 2
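With the entries in place, both file systems can be mounted and verified in one go (assuming the referenced secret files exist):
mount -a                      # mount everything from /etc/fstab that isn't mounted yet
df -h /USERDATA /mnt/cephfs   # both CephFS mounts should now show up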