I've created a simple kubernetes cluster with minikube using minikube start --driver=kvm2 and then ssh'd into the VM with minikube ssh.
I'm using a volume mount between the minikube VM and some pods so that I can share a large dataset. However, I need to install Python on the VM to download this dataset. Normally I would use apt-get to install Python, but the VM does not have apt-get. I can't install it with dpkg either, because dpkg also doesn't exist.
The output of uname -r is 4.19.114 and the output of cat /etc/os-release is:
NAME=Buildroot
VERSION=2019.02.11
ID=buildroot
VERSION_ID=2019.02.11
PRETTY_NAME="Buildroot 2019.02.11"
Does anybody know how I install a package manager inside the VM?
There is a way to have a custom configuration for the minikube image, but that requires rebuilding the image. This document shows how to build the image and modify Buildroot components; you can add kernel modules or third-party packages. You might also want to check out this case, where tcpdump was needed inside the minikube image.
An alternative is to mount the files into the VM using minikube mount:
minikube mount <source directory>:<target directory>
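For example, to expose a dataset directory from the host inside the VM (the paths below are just placeholders, adjust them to your setup):

$ minikube mount /home/me/datasets:/data    # runs in the foreground; the mount stays active while it runs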
or use one of the local driver mounts (KVM does not support this at the moment):
| Driver | OS | Host folder | VM folder |
| VirtualBox | Linux | /home | /hosthome |
| VirtualBox | macOS | /Users | /Users |
| VirtualBox | Windows | C://Users | /c/Users |
| VMware Fusion | macOS | /Users | /Users |
A third option is to use an initContainer:
Specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image.
You can use an init container to pre-populate a volume with the data you require for your pod/deployment. Here is a good document that shows how to do that; a minimal sketch follows.
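As a rough sketch, assuming a hypothetical busybox-based init container that downloads the dataset into a shared emptyDir volume before the app container starts (the names and URL below are placeholders):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dataset-demo
spec:
  volumes:
    - name: data
      emptyDir: {}
  initContainers:
    - name: fetch-dataset
      image: busybox
      # placeholder URL: replace with the real dataset location
      command: ["sh", "-c", "wget -O /data/dataset.tar.gz http://example.com/dataset.tar.gz"]
      volumeMounts:
        - name: data
          mountPath: /data
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "ls -lh /data && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
EOF

The same volume could instead be a hostPath pointing at the directory you mount into the minikube VM, if the data should live on the VM disk.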
Related
I have created a Kubernetes cluster using minikube and installed Jenkins X on it.
I am not able to access the Jenkins x dashboard.
Error 503 Service Temporarily Unavailable nginx/1.13.9.
Note: I have also tried restarting the minikube cluster.
As mentioned by James Rawlings in the comments, it's likely to be a resource issue. The recommendations in the manual are:
A known good configuration on a 2015 model Macbook Pro is to use 8 GB of RAM, 8 cores, a 150 GB disk size and hyperkit.
The disk size is particularly large as a number of images will need to be downloaded.
So we highly recommend using one of the public clouds above to try out Jenkins X. They all have free tiers so it should not cost you any significant cash and it’ll give you a chance to try out the cloud.
The default minikube installation uses only 2048 MB of RAM, 2 CPU cores and 20 GB of disk space. You can adjust the minikube VM size using command line options:
$ minikube start --cpus=8 --memory=8192 --disk-size=150g --vm-driver=hyperkit
or use jx tool for that:
For macOS:
$ brew install docker-machine-driver-hyperkit
# docker-machine-driver-hyperkit need root owner and uid
$ sudo chown root:wheel /usr/local/opt/docker-machine-driver-hyperkit/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s /usr/local/opt/docker-machine-driver-hyperkit/bin/docker-machine-driver-hyperkit
$ brew tap jenkins-x/jx
$ brew install jx
$ jx create cluster minikube
For Linux:
$ curl -L https://github.com/jenkins-x/jx/releases/download/v1.3.784/jx-linux-amd64.tar.gz | tar xzv
$ sudo mv jx /usr/local/bin
$ jx create cluster minikube
Using Ubuntu 18.04.
I am trying to install a kubernetes cluster on my local machine (localhost) using this guide (LXD + conjure-up kubernetes):
https://kubernetes.io/docs/getting-started-guides/ubuntu/local/#before-you-begin
When I run:
conjure-up kubernetes
I select the following installation:
and select localhost for "Choose a cloud" and use the defaults for the rest of the install wizard. It then starts to install and after 30-40 minutes it completes with this error:
Here is the log:
https://pastebin.com/raw/re1UvrUU
Where one error says:
2018-07-25 20:09:38,125 [ERROR] conjure-up/canonical-kubernetes - events.py:161 - Unhandled exception in <Task finished coro=<BaseBootstrapController.run() done, defined at /snap/conjure-up/1015/lib/python3.6/site-packages/conjureup/controllers/juju/bootstrap/common.py:15> exception=BootstrapError('Unable to bootstrap (cloud type: localhost)',)>
but that does not really help much.
Any suggestions as to why the install wizard/conjure-up fails?
Also based on this post:
https://github.com/conjure-up/conjure-up/issues/1308
I have tried to first disable the firewall:
sudo ufw disable
and then re-run the conjure-up install wizard, but I get the same error.
Some more details on how I installed and configured LXD/conjure-up below:
$ snap install lxd
lxd 3.2 from 'canonical' installed
$ /snap/bin/lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, ceph, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=26GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
Configured group membership:
sudo usermod -a -G lxd $USER
newgrp lxd
Next, I installed:
sudo snap install conjure-up --classic
And then ran the installation:
conjure-up kubernetes
I wasn't able to reproduce your exact problem, but I got conjure-up + LXD installed and, in the end, Kubernetes running on my newly installed VirtualBox Ubuntu 18.04 (Desktop) VM. Hopefully this answer helps you somehow!
I looked through the kubernetes.io documentation page and it lacks some bits of information: it does mention LXD, but not the lxd init part, which I assume you picked up from the conjure-up user manual.
So with that said, I followed the conjure-up user manual with some minor changes along the way. I'm assuming that it's OK for you to use the edge version of conjure-up; I started off with the stable one but changed to edge while testing different combinations.
Also please ensure that you have the recommended resources available as stated by the user manual; conjure-up and the Canonical Distribution of Kubernetes launch a number of containers for you. You might not need 3 x etcd, 3 x worker nodes and 2 x master, and if you don't, just tune the number of containers down in the conjure-up wizard.
These are the steps I performed (as my local user):
Make sure your Ubuntu box is updated: sudo apt update && sudo apt upgrade
Install conjure-up by running: sudo snap install conjure-up --classic --edge
Install lxd by running: sudo snap install lxd
With lxd comes the client part, lxc; if you run e.g. lxc list you should get an empty table (no containers started yet). I got a permission error at this point, so I ran the following: sudo chown -R lxd:lxd /var/snap/lxd/ to change the owner and group of the lxd directory containing the socket you'll be communicating with using lxc.
Add your user to the lxd group: sudo usermod -a -G lxd $USER && newgrp lxd, then log off and on to make this permanent and not only active in your current shell.
Now create an lxd bridge manually with the following command: lxc network create lxdbr1 ipv4.address=auto ipv4.nat=true ipv6.address=none ipv6.nat=false
Now let's run the init part of lxd with lxd init. Remember to answer no when asked to create a new local network bridge; at the next prompt provide your newly created network bridge instead (lxdbr1). The rest of the answers to the questions can be left as defaults.
Now continue with running conjure-up kubernetes and choose localhost as your type. For me the localhost choice was greyed out from the beginning; it worked when I created the network bridge manually and not via the lxd init step.
Skip the additional components you can install like Rancher, Prometheus etc.
Choose your new network bridge and the default storage pool, proceed to the next step.
In the next step customize your Kubernetes cluster if needed and then hit Deploy. And now you wait!
You can always troubleshoot and list all containers created with the lxc tool. If you've ever used Docker, the lxc tool feels a lot like the docker client.
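For example, a few lxc commands that are handy while the deployment is running (the container name below is only an example; take a real one from lxc list):

$ lxc list                        # list all containers and their state
$ lxc info juju-a1b2c3-0          # details for a single container
$ lxc exec juju-a1b2c3-0 -- bash  # open a shell inside a container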
And finally some thoughts and observations: there are a lot of moving parts to conjure-up, as you might have seen. It's actually described as: conjure-up is a thin layer spanning a few different underlying technologies - Juju, MAAS and LXD.
For reference, I ended up having the following versions installed:
lxd version 3.3
conjure-up version 2.6.1
When trying to run minikube with hyperkit, I was getting errors about xhyve not being installed. I installed that and reran minikube start --vm-driver hyperkit with no issues.
I was under the impression that hyperkit was a replacement for xhyve, not a supplement to it.
When I run ps I see both com.docker.hyperkit and docker-machine-driver-xhyve running.
How can I confirm that minikube is correctly using hyperkit?
Docker for Mac has changed its virtualization layer a few times in recent years, and this can confuse users after environment updates.
If the process list shows both com.docker.hyperkit and xhyve processes, it is probably due to a docker-machine environment that was previously set up using docker-machine-driver-xhyve.
You may consider cleaning up the installation by stopping Docker (from the command line or from the tray icon) and then removing the machines created by the docker-machine tool, as sketched below.
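For example (the machine name below is just a placeholder; use whatever docker-machine ls shows):

$ docker-machine ls               # list existing docker-machine VMs
$ docker-machine rm xhyve-machine # remove the stale xhyve-based machine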
I also suggest removing the current minikube installation using
minikube stop && minikube delete
and starting a fresh one with:
minikube start --v=10 --vm-driver=hyperkit
That will add verbose output while building the minikube environment.
The following will give you the current driver for the current machine. Replace the second "minikube" with the name of your profile if you're using the --profile flag.
$ cat ~/.minikube/machines/minikube/config.json | grep DriverName
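If minikube was started with hyperkit, the output should contain a line roughly like this (illustrative):

"DriverName": "hyperkit",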
Strange, considering Hyperkit is supposed to replace xhyve eventually.
Make sure Hyperkit is built/installed and referenced by your PATH.
And that you are using the latest docker-ce for Mac.
Use this command to get a list of each hypervisor instance that's running with hyperkit:
$ ps -ef | grep hyperkit
If minikube is running in hyperkit then the name 'minikube' should show up in the output:
0 29305 1 0 Tue06PM ?? 515:01.32 /usr/local/bin/hyperkit -A -u -F /Users/me/.minikube/machines/minikube/hyperkit.pid -c 2 -m 2000M -s 0:0,...
The instance labeled as 'com.docker.hyperkit' is the process that's being used by Docker and is NOT the minikube instance.
I created three CephFS filesystems and tried to mount them on a client node, but didn't find any way to mount a specific CephFS. I tried:
mount -t ceph mon-node:/ /mnt/apachefs/ -o mds_namespace=webfs,secret=ceph-authtool -p /etc/ceph/ceph.client.admin.keyring
But it fails. Is there any other way to mount multiple file systems on a client node using the kernel driver (mount.ceph) or ceph-fuse?
It is possible to specify a particular CephFS with the following options:
-o mds_namespace ... kernel driver (mount -t ceph)
--client_mds_namespace ... ceph-fuse
I am pretty sure that -o mds_namespace did not work due to an old kernel version. If you are using CentOS 7, please test it with ceph-fuse 12.2.4 or a later version (with --client_mds_namespace). It worked fine in my environment.
If you are using a Debian-based system, you can install the ceph-fs-common package with apt, like: apt-get install -y ceph-fs-common.
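For illustration, mounting a specific filesystem with each tool could look roughly like this (the monitor address, filesystem name and secret file are placeholders):

# kernel driver: select the filesystem with mds_namespace
$ mount -t ceph mon-node:6789:/ /mnt/webfs -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=webfs

# ceph-fuse: select the filesystem with --client_mds_namespace
$ ceph-fuse -m mon-node:6789 --client_mds_namespace=webfs /mnt/webfs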
ceph fs volume create nextcloud [<placement>]
ceph fs volume create okd-admin [<placement>]
#/etc/fstab
### one
10.10.20.6:6789:/folder1 /USERDATA ceph name=admin,secretfile=/etc/ceph/secret.key,fs=nextcloud,noatime,_netdev 0 2
### two
10.10.20.5:6789:/folder2 /mnt/cephfs ceph name=okd-admin,secretfile=/etc/ceph/secret-openshift.key,fs=openshift,noatime,_netdev 0 2
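The new entries can then be applied and verified without a reboot, for example:

$ sudo mount -a                  # mount everything listed in /etc/fstab
$ df -h /USERDATA /mnt/cephfs    # confirm both filesystems are mounted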
I'm wondering how I can install a package inside the minikube VM. I need some tools.
I have tried the /bin/toolbox container, but it does not have an internet connection.
[root@docker-fedora-24 ~]# dnf update --verbose
cachedir: /var/cache/dnf
DNF version: 1.1.9
Cannot download 'https://mirrors.fedoraproject.org/metalink?repo=updates-released-f24&arch=x86_64': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for https://mirrors.fedoraproject.org/metalink?repo=updates-released-f24&arch=x86_64 [Could not resolve host: mirrors.fedoraproject.org].
Error: Failed to synchronize cache for repo 'updates'
I have tried the same toolbox script on my computer and it works properly.
What configuration parameters am I missing in minikube or systemd-nspawn?
Or how can I build a customized minikube VM?
Thanks a lot
You can run minikube without a VM on your local Docker (if you use Linux):
minikube start --vm-driver=none
As an alternative, run toolbox with docker run --net=host ... to make the container's network more transparent. Troubleshoot your internet connection with nslookup, traceroute/tracepath, curl -v, ifconfig (a short sketch follows the link below).
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:Ch04:_Simple_Network_Troubleshooting#.WfY1xGi0OUk
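A quick sketch of that kind of troubleshooting from a container sharing the host network (the image name is just an example):

$ docker run --rm -it --net=host fedora:24 bash
# then, inside the container:
$ nslookup mirrors.fedoraproject.org
$ curl -v https://mirrors.fedoraproject.org
$ ip addr    # or ifconfig, if available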
Minikube is not meant to be tweaked. The advised method is to prepare a Helm chart for your application. As part of the Helm chart you can add whatever tool you need in your Dockerfile, including make. Then you can install or upgrade your package in Kubernetes/minikube using Helm.
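For instance, once a chart directory is prepared (the chart and release names below are hypothetical), installing and later upgrading looks like:

$ helm install my-release ./my-chart    # first deployment (Helm 3 syntax)
$ helm upgrade my-release ./my-chart    # roll out changes to the same release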
I had a similar problem when I wanted to use tcpdump in the minikube VM.
I ended up using minikube mount SRC-dir:DST-dir to mount the host folder inside the VM and copying the tcpdump binary along with dependent libs (libcrypto and libpcap) to the mount point.
Then I executed tcpdump from the minikube VM and it worked.
Note: My host arch and the minikube VM arch (x86_64) were the same.
Note also: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:DST-dir has to be set inside the VM. A rough sketch of the whole procedure follows.
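Roughly, the procedure looks like this (paths and library locations are examples and will differ per system):

# on the host: collect tcpdump and its libraries into the shared folder
$ mkdir -p ~/minikube-share
$ cp /usr/sbin/tcpdump ~/minikube-share/
$ cp /usr/lib/x86_64-linux-gnu/libpcap.so.* /usr/lib/x86_64-linux-gnu/libcrypto.so.* ~/minikube-share/
$ minikube mount ~/minikube-share:/tools    # keep this running in a separate terminal

# inside the VM (minikube ssh), with /tools as the example DST-dir:
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/tools
$ /tools/tcpdump -i eth0    # packet capture typically requires root, e.g. via sudo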