Following the instructions on this page - http://kubernetes.io/v1.1/docs/getting-started-guides/vagrant.html#setup, I'm getting the following error when trying to get Kubernetes up on a Mac running El Capitan, using Vagrant and VirtualBox. Where am I going wrong?
OS X El Capitan 10.11.2 (15C50)
Vagrant 1.8.1
VirtualBox 5.0.12 r104815
I'm trying to get a cluster up by executing these steps:
export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
The output below is a capture of these two commands being executed. I'm following these instructions: http://kubernetes.io/v1.1/docs/getting-started-guides/vagrant.html#prerequisites
Unpacking kubernetes release v1.1.4
Creating a kubernetes on vagrant...
... Starting cluster using provider: vagrant
... calling verify-prereqs
... calling kube-up
Bringing machine 'master' up with 'virtualbox' provider...
Bringing machine 'minion-1' up with 'virtualbox' provider...
==> master: VirtualBox VM is already running.
==> minion-1: Importing base box 'kube-fedora21'...
==> minion-1: Matching MAC address for NAT networking...
==> minion-1: Setting the name of the VM: kubernetes_minion-1_1454028157203_24352
==> minion-1: Fixed port collision for 22 => 2222. Now on port 2200.
==> minion-1: Clearing any previously set network interfaces...
==> minion-1: Preparing network interfaces based on configuration...
minion-1: Adapter 1: nat
minion-1: Adapter 2: hostonly
==> minion-1: Forwarding ports...
minion-1: 22 (guest) => 2200 (host) (adapter 1)
==> minion-1: Running 'pre-boot' VM customizations...
==> minion-1: Booting VM...
==> minion-1: Waiting for machine to boot. This may take a few minutes...
minion-1: SSH address: 127.0.0.1:2200
minion-1: SSH username: vagrant
minion-1: SSH auth method: private key
minion-1:
minion-1: Vagrant insecure key detected. Vagrant will automatically replace
minion-1: this with a newly generated keypair for better security.
minion-1:
minion-1: Inserting generated public key within guest...
minion-1: Removing insecure key from the guest if it's present...
minion-1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> minion-1: Machine booted and ready!
==> minion-1: Checking for guest additions in VM...
==> minion-1: Configuring and enabling network interfaces...
==> minion-1: Mounting shared folders...
minion-1: /vagrant => /Users/lee/kubernetes
==> minion-1: Running provisioner: shell...
minion-1: Running: /var/folders/cb/lpcc0zbs441777bwsl1zrcbh0000gn/T/vagrant-shell20160128-14233-gm7iq9.sh
==> minion-1: Adding kubernetes-master to hosts file
==> minion-1: Provisioning network on minion
==> minion-1: Resolving Dependencies
==> minion-1: --> Running transaction check
==> minion-1: ---> Package flannel.x86_64 0:0.5.0-3.fc21 will be installed
==> minion-1: --> Finished Dependency Resolution
==> minion-1:
==> minion-1: Dependencies Resolved
==> minion-1:
==> minion-1: ================================================================================
==> minion-1: Package Arch Version Repository Size
==> minion-1: ================================================================================
==> minion-1: Installing:
==> minion-1: flannel x86_64 0.5.0-3.fc21 updates 1.6 M
==> minion-1:
==> minion-1: Transaction Summary
==> minion-1: ================================================================================
==> minion-1: Install 1 Package
==> minion-1: Total download size: 1.6 M
==> minion-1: Installed size: 7.0 M
==> minion-1: Downloading packages:
==> minion-1: warning:
==> minion-1: /var/cache/yum/x86_64/21/updates/packages/flannel-0.5.0-3.fc21.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 95a43f54: NOKEY
==> minion-1: Public key for flannel-0.5.0-3.fc21.x86_64.rpm is not installed
==> minion-1: Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-21-x86_64
==> minion-1: Importing GPG key 0x95A43F54:
==> minion-1: Userid : "Fedora (21) <fedora@fedoraproject.org>"
==> minion-1: Fingerprint: 6596 b8fb abda 5227 a9c5 b59e 89ad 4e87 95a4 3f54
==> minion-1: Package : fedora-repos-21-2.noarch (@anaconda)
==> minion-1: From : /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-21-x86_64
==> minion-1: Running transaction check
==> minion-1: Running transaction test
==> minion-1: Transaction test succeeded
==> minion-1: Running transaction (shutdown inhibited)
==> minion-1: Installing : flannel-0.5.0-3.fc21.x86_64 1/1
==> minion-1:
==> minion-1: Verifying : flannel-0.5.0-3.fc21.x86_64 1/1
==> minion-1:
==> minion-1:
==> minion-1: Installed:
==> minion-1: flannel.x86_64 0:0.5.0-3.fc21
==> minion-1: Complete!
==> minion-1: Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
==> minion-1: Network configuration verified
==> minion-1: Disable swap memory to ensure proper QoS
==> minion-1: * INFO: sh -- Version 2015.11.09
==> minion-1:
==> minion-1: * INFO: System Information:
==> minion-1: * INFO: CPU: GenuineIntel
==> minion-1: * INFO: CPU Arch: x86_64
==> minion-1: * INFO: OS Name: Linux
==> minion-1: * INFO: OS Version: 3.17.4-301.fc21.x86_64
==> minion-1: * INFO: Distribution: Fedora 21
==> minion-1: * INFO: Installing minion
==> minion-1: * INFO: Found function install_fedora_deps
==> minion-1: * INFO: Found function install_fedora_stable
==> minion-1: * INFO: Found function install_fedora_stable_post
==> minion-1: * INFO: Found function install_fedora_restart_daemons
==> minion-1: * INFO: Found function daemons_running
==> minion-1: * INFO: Found function install_fedora_check_services
==> minion-1: * INFO: Running install_fedora_deps()
==> minion-1: which: no dnf in (/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
==> minion-1: * INFO: Adding SaltStack's COPR repository
==> minion-1:
==> minion-1:
==> minion-1: File contains no section headers.
==> minion-1: file: file:///etc/yum.repos.d/saltstack-salt-fedora-21.repo, line: 1
==> minion-1: '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">\n'
==> minion-1: * ERROR: Failed to run install_fedora_deps()!!!
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
Your problem with the Vagrant setup is currently an open issue: https://github.com/kubernetes/kubernetes/issues/20088#issuecomment-174528066. As a temporary fix, you can apply the workaround described here to get it to work: https://stackoverflow.com/a/35015586/5834774
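The underlying symptom is visible in the log above: the provisioner fetched /etc/yum.repos.d/saltstack-salt-fedora-21.repo, but what came back was an HTML page ('<!DOCTYPE HTML ...') rather than an INI-style .repo file, which is why yum reports "File contains no section headers." You can confirm this from the host before re-running kube-up; a minimal sketch, assuming the repo file comes from SaltStack's COPR as the log says (the exact URL below is a guess based on the file name, not taken from the bootstrap script):
# Inspect what the COPR endpoint actually returns; a redirect to an HTML
# page here means the repository URL baked into the bootstrap has moved.
curl -sIL "https://copr.fedoraproject.org/coprs/saltstack/salt/repo/fedora-21/saltstack-salt-fedora-21.repo" | head -n 20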
Related
While running the vagrant up command on a Windows machine, I am getting the error below:
==> kubemaster: Booting VM...
There was an error while executing VBoxManage, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["startvm", "29046172-fba4-4516-9e28-64ece907dcb7", "--type", "headless"]
Stderr: VBoxManage.exe: error: Failed to open/create the internal network 'HostInterfaceNetworking-VirtualBox Host-Only Ethernet Adapter #3' (VERR_INTNET_FLT_IF_NOT_FOUND).
VBoxManage.exe: error: Failed to attach the network LUN (VERR_INTNET_FLT_IF_NOT_FOUND)
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component ConsoleWrap, interface IConsole
I resolved the problem by installing the latest VirtualBox version.
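If upgrading alone does not clear VERR_INTNET_FLT_IF_NOT_FOUND, recreating the missing host-only interface is a commonly suggested follow-up; a hedged sketch using stock VBoxManage subcommands (run from an elevated prompt on Windows, where the binary is VBoxManage.exe):
# The error says 'VirtualBox Host-Only Ethernet Adapter #3' cannot be found;
# list what VirtualBox currently knows about, then create a fresh interface.
VBoxManage list hostonlyifs
VBoxManage hostonlyif create
After that, point the VM's host-only adapter at the newly created interface (or let vagrant up recreate the network) and boot again.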
I installed Minikube on my Debian 10, but when I try to start it, I get these errors:
$ minikube start
* minikube v1.25.2 on Debian 10.1
* Unable to pick a default driver. Here is what was considered, in preference order:
- docker: Not healthy: "docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/version: dial unix /var/run/docker.sock: connect: permission denied
- docker: Suggestion: Add your user to the 'docker' group: 'sudo usermod -aG docker $USER && newgrp docker' <https://docs.docker.com/engine/install/linux-postinstall/>
- kvm2: Not healthy: /usr/bin/virsh domcapabilities --virttype kvm failed:
error: failed to get emulator capabilities
error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
exit status 1
- kvm2: Suggestion: Follow your Linux distribution instructions for configuring KVM <https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/>
* Alternatively you could install one of these drivers:
- podman: Not installed: exec: "podman": executable file not found in $PATH
- vmware: Not installed: exec: "docker-machine-driver-vmware": executable file not found in $PATH
- virtualbox: Not installed: unable to find VBoxManage in $PATH
I added my user to the docker group using:
sudo usermod -aG docker $USER
and I installed KVM without any apparent problems, as far as I understand:
kvm --version
QEMU emulator version 3.1.0 (Debian 1:3.1+dfsg-8~deb10u1)
Copyright (c) 2003-2018 Fabrice Bellard and the QEMU Project developers
$ lsmod | grep kvm
kvm 729088 0
irqbypass 16384 1 kvm
$ sudo virsh list --all
Id Name State
-----------------------------
1 debian10-MK running
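One thing I have not ruled out is whether the group change is actually active in my current shell, since group membership only takes effect in a new login session (this matches the newgrp hint in minikube's own output). A minimal check, assuming the docker driver is the one I want:
# If this shell does not yet see the docker group, re-enter with it applied
id -nG | grep -qw docker || newgrp docker
# Then ask minikube for the docker driver explicitly
minikube start --driver=docker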
What could be the problem, and how do I solve it?
Thanks,
Tamar
When mounting GlusterFS on servers where Kubernetes was installed via Kubespray, an error occurs:
Mount failed. Please check the log file for more details.
[2020-12-20 11:40:42.845231] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.8 (args: /usr/sbin/glusterfs --volfile-server=kube-pv01 --volfile-id=/replicated /mnt/replica/)
pending frames:
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash:
2020-12-20 11:40:42
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.8.8
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x7e)[0x7f084d99337e]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x334)[0x7f084d99cac4]
/lib/x86_64-linux-gnu/libc.so.6(+0x33060)[0x7f084bfe2060]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_ports_reserved+0x13a)[0x7f084d99d12a]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_process_reserved_ports+0x8e)[0x7f084d99d35e]
/usr/lib/x86_64-linux-gnu/glusterfs/3.8.8/rpc-transport/socket.so(+0xc09b)[0x7f08481ef09b]
/usr/lib/x86_64-linux-gnu/glusterfs/3.8.8/rpc-transport/socket.so(client_bind+0x9d)[0x7f08481ef48d]
/usr/lib/x86_64-linux-gnu/glusterfs/3.8.8/rpc-transport/socket.so(+0x98d3)[0x7f08481ec8d3]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_reconnect+0xc9)[0x7f084d75e0f9]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_start+0x39)[0x7f084d75e1c9]
/usr/sbin/glusterfs(glusterfs_mgmt_init+0x159)[0x5604fe77df79]
/usr/sbin/glusterfs(glusterfs_volumes_init+0x44)[0x5604fe778e94]
/usr/sbin/glusterfs(main+0x811)[0x5604fe7754b1]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f084bfcf2e1]
/usr/sbin/glusterfs(_start+0x2a)[0x5604fe7755ea]
---------
[11:41:47] [root@kube01.unix.local ~]# lsb_release -a
Distributor ID: Debian
Description: Debian GNU/Linux 9.12 (stretch)
Release: 9.12
Codename: stretch
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
On servers without Kubespray, the volume mounts successfully.
How do I fix this error?
Solved: upgrading to Debian 10 fixed it.
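For reference, the hosts in the question are on Debian 9.12 (stretch), so the fix here means an in-place upgrade to Debian 10 (buster). A rough sketch of that upgrade, assuming stock stretch APT sources; back up and read the official release notes before doing this on production nodes:
# Point APT at the buster repositories and upgrade in place
sed -i 's/stretch/buster/g' /etc/apt/sources.list
apt update
apt full-upgrade
# Reboot into the upgraded system before retrying the glusterfs mount
reboot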
I tried to spin up a CentOS 7 VM. Below are my settings.
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.define "zabbix1" do |zabbix1|
zabbix1.vm.box = "centos/7"
zabbix1.vm.hostname = "zabbix1"
zabbix1.ssh.insert_key = false
zabbix1.vm.network :private_network, ip: "10.11.12.55"
zabbix1.ssh.private_key_path = "~/.ssh/id_rsa"
zabbix1.ssh.forward_agent = true
end
end
Result
vagrant reload
==> zabbix1: Attempting graceful shutdown of VM...
zabbix1: Guest communication could not be established! This is usually because
zabbix1: SSH is not running, the authentication information was changed,
zabbix1: or some other networking issue. Vagrant will force halt, if
zabbix1: capable.
==> zabbix1: Forcing shutdown of VM...
==> zabbix1: Checking if box 'centos/7' is up to date...
==> zabbix1: Clearing any previously set forwarded ports...
==> zabbix1: Fixed port collision for 22 => 2222. Now on port 2204.
==> zabbix1: Clearing any previously set network interfaces...
==> zabbix1: Preparing network interfaces based on configuration...
zabbix1: Adapter 1: nat
zabbix1: Adapter 2: hostonly
==> zabbix1: Forwarding ports...
zabbix1: 22 (guest) => 2204 (host) (adapter 1)
==> zabbix1: Booting VM...
==> zabbix1: Waiting for machine to boot. This may take a few minutes...
zabbix1: SSH address: 127.0.0.1:2204
zabbix1: SSH username: vagrant
zabbix1: SSH auth method: private key
zabbix1: Warning: Remote connection disconnect. Retrying...
zabbix1: Warning: Remote connection disconnect. Retrying...
zabbix1: Warning: Remote connection disconnect. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
vagrant ssh-config
Host zabbix1
HostName 127.0.0.1
User vagrant
Port 2204
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /Users/bheng/.ssh/id_rsa
IdentitiesOnly yes
LogLevel FATAL
ForwardAgent yes
What did I do wrong? What did I miss?
I had the same issue with the same box, and the way I fixed it was to log into the VM from the VirtualBox console (vagrant/vagrant as username/password) and fix the permissions on .ssh/authorized_keys:
chmod 0600 .ssh/authorized_keys
Do that after you run vagrant up, while the authentication errors are still retrying and the VM is up. vagrant up will then complete successfully, and you will be able to get into the VM with vagrant ssh.
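Another angle on the same failure: the Vagrantfile sets insert_key = false together with a custom private_key_path, so the box must already hold the matching public key in authorized_keys, and the stock centos/7 box only ships with Vagrant's well-known insecure key. A hedged sketch for pushing your own key in over that insecure key (port 2204 comes from the vagrant ssh-config output above; adjust to yours):
# Append the host's public key to the guest's authorized_keys using
# Vagrant's default insecure key, so the custom IdentityFile can log in.
ssh -p 2204 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    -i ~/.vagrant.d/insecure_private_key vagrant@127.0.0.1 \
    'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys' \
    < ~/.ssh/id_rsa.pub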
Private networks can be configured either manually or with the VirtualBox built-in DHCP server. The DHCP approach works for me:
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.define "zabbix1" do |zabbix1|
zabbix1.vm.box = "centos/7"
zabbix1.vm.hostname = "zabbix1"
zabbix1.ssh.insert_key = false
zabbix1.vm.network :private_network, type: "dhcp"
end
end
Next you have to recreate the machine with vagrant destroy and vagrant up so the new network settings take effect.
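Concretely (the -f flag just skips the confirmation prompt):
# Destroy the VM created with the old network config, then rebuild it
vagrant destroy -f
vagrant up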
I am doing this as part of the Ambari setup, following the steps in the Ambari and Vagrant quick start guide.
I am using this CentOS 6.4 image:
https://github.com/u39kun/ambari-vagrant/blob/master/centos6.4/Vagrantfile
I did this on Google Cloud from a RHEL 7.2 host with VirtualBox 5 and, as suggested, installed CentOS 6.4 guests.
I successfully installed and configured the prerequisites (with some tweaking required to make VirtualBox 5 work on RHEL 7.2).
When I try to bring up 6 hosts, I see timeouts and the machines never come up.
The host machine I am running on is fast: 32 cores, 64 GB RAM, 500 GB SSD.
Does anyone know what might be the issue?
Is there some firewall I need to turn off, etc.?
[<myuser>@ambari-host-rhel7 centos6.4]$ ./up.sh 6
Bringing machine 'c6401' up with 'virtualbox' provider...
==> c6401: Box 'centos6.4' could not be found. Attempting to find and install...
c6401: Box Provider: virtualbox
c6401: Box Version: >= 0
==> c6401: Box file was not detected as metadata. Adding it directly...
==> c6401: Adding box 'centos6.4' (v0) for provider: virtualbox
c6401: Downloading: http://developer.nrel.gov/downloads/vagrant-boxes/CentOS-6.4-x86_64-v20130427.box
==> c6401: Box download is resuming from prior download progress
==> c6401: Successfully added box 'centos6.4' (v0) for 'virtualbox'!
==> c6401: Importing base box 'centos6.4'...
==> c6401: Matching MAC address for NAT networking...
==> c6401: Setting the name of the VM: centos64_c6401_1456171923223_2329
==> c6401: Clearing any previously set network interfaces...
==> c6401: Preparing network interfaces based on configuration...
c6401: Adapter 1: nat
c6401: Adapter 2: hostonly
==> c6401: Forwarding ports...
c6401: 22 (guest) => 2222 (host) (adapter 1)
==> c6401: Running 'pre-boot' VM customizations...
==> c6401: Booting VM...
==> c6401: Waiting for machine to boot. This may take a few minutes...
c6401: SSH address: 127.0.0.1:2222
c6401: SSH username: vagrant
c6401: SSH auth method: private key
Timed out while waiting for the machine to boot. This means that Vagrant
was unable to communicate with the guest machine within the configured
("config.vm.boot_timeout" value) time period. If you look above, you
should be able to see the error(s) that Vagrant had when attempting to
connect to the machine. These errors are usually good hints as to what
may be wrong. If you're using a custom box, make sure that networking is
properly working and you're able to connect to the machine. It is a
common problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly, as
well. If the box appears to be booting properly, you may want to
increase the timeout ("config.vm.boot_timeout") value.
As a final step I get this summary error:
There was an error while executing `VBoxManage`, a CLI used by Vagrant for controlling VirtualBox.
The command and stderr is shown below.
Command: ["import", "/home/<me>/.vagrant.d/boxes/centos6.4/0/virtualbox/box.ovf", "--vsys", "0", "--vmname", "CentOS-6.4-x86_64_1456173504674_45962", "--vsys", "0", "--unit", "9", "--disk", "/home/<me>/VirtualBox VMs/CentOS-6.4-x86_64_1456173504674_45962/box-disk1.vmdk"]
Stderr: 0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Interpreting /home/<me>/.vagrant.d/boxes/centos6.4/0/virtualbox/box.ovf... OK.
0%...
Progress state: VBOX_E_FILE_ERROR
VBoxManage: error: Appliance import failed
VBoxManage: error: Could not create the imported medium '/home/<me>/VirtualBox VMs/CentOS-6.4-x86_64_1456173504674_45962/box-disk1.vmdk'.
VBoxManage: error: VMDK: cannot write allocated data block in '/home/<me>/VirtualBox VMs/CentOS-6.4-x86_64_1456173504674_45962/box-disk1.vmdk' (VERR_DISK_FULL)
VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component ApplianceWrap, interface IAppliance
VBoxManage: error: Context: "RTEXITCODE handleImportAppliance(HandlerArg*)" at line 877 of file VBoxManageAppliance.cpp
Any ideas what might be going on?
Do you still have free space on your drive?
Generally, VERR_DISK_FULL indicates that the hard drive is full and VirtualBox cannot provision enough space for the disk image (.vmdk) files.
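A quick way to confirm from the host; a minimal sketch, assuming the default VirtualBox machine folder (it can be changed in the VirtualBox preferences, so adjust the path if needed):
# Check free space where VirtualBox stores VM disks and where Vagrant caches boxes
df -h "$HOME/VirtualBox VMs" "$HOME/.vagrant.d"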