Having a problem when using the k3sup join command - kubernetes

I have generated an SSH key on the client and copied it to the master and worker nodes. The path is ~/.ssh/id_rsa. I get the following error, and using sudo -S doesn't fix it either.
k3sup join --ip $WORKER_IP --user $WORKER_USER --server-ip $MASTER_IP --server-user $MASTER_USER --k3s-extra-args "--node-external-ip $WORKER_IP --node-ip $WORKER_IP" --k3s-channel stable --print-command
Running: k3sup join
ssh: sudo cat /var/lib/rancher/k3s/server/node-token
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
sudo: a password is required
Error: unable to get join-token from server: Process exited with status 1
However, I expect to get the following output:
$ k3sup join --ip $WORKER_IP --user $WORKER_USER --server-ip $MASTER_IP --server-user $MASTER_USER --k3s-extra-args "--node-external-ip $WORKER_IP --node-ip $WORKER_IP" --k3s-channel stable --print-command
Running: k3sup join
ssh: sudo cat /var/lib/rancher/k3s/server/node-token
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx::server:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
ssh: curl -sfL https://get.k3s.io | K3S_URL='https://10.1.1.1:6443' K3S_TOKEN='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx::server:xxxxxxxxxxxxxxxxxxxxxxxxx' INSTALL_K3S_CHANNEL='stable' sh -s - --node-external-ip 10.1.1.2 --node-ip 10.1.1.2
[INFO] Finding release for channel stable
[INFO] Using v1.20.0+k3s2 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.20.0+k3s2/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.20.0+k3s2/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO] systemd: Starting k3s-agent
Logs: Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
Output: [INFO] Finding release for channel stable
[INFO] Using v1.20.0+k3s2 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.20.0+k3s2/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.20.0+k3s2/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
[INFO] systemd: Starting k3s-agent
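For reference, k3sup runs sudo cat /var/lib/rancher/k3s/server/node-token over a non-interactive SSH session, so the server user generally needs passwordless sudo for this step. A minimal sketch of such a sudoers entry, assuming a user named ubuntu (substitute the actual $MASTER_USER), would be:
# On the master node; "ubuntu" is a placeholder for your $MASTER_USER
$ echo 'ubuntu ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/k3sup
$ sudo chmod 0440 /etc/sudoers.d/k3sup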

Related

k3s multimaster with embedded etcd is failing to form/join the cluster

I have two fresh Ubuntu VMs:
VM-1 (65.0.54.158)
VM-2 (65.2.136.2)
I am trying to set up an HA k3s cluster with embedded etcd. I am referring to the official documentation.
Here is what I have executed on VM-1:
curl -sfL https://get.k3s.io | K3S_TOKEN=AtJMEyWR8pE3HR4RWgT6IsqglOkBm0sLC4n0aDBkng9VE1uqyNevR6oCMNCqQNaF sh -s - server --cluster-init
Here is the response from VM-1
curl -sfL https://get.k3s.io | K3S_TOKEN=AtJMEyWR8pE3HR4RWgT6IsqglOkBm0sLC4n0aDBkng9VE1uqyNevR6oCMNCqQNaF sh -s - server --cluster-init
[INFO] Finding release for channel stable
[INFO] Using v1.24.4+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
Additionally, I have checked:
sudo kubectl get nodes
and this worked perfectly:
NAME STATUS ROLES AGE VERSION
ip-172-31-41-34 Ready control-plane,etcd,master 18m v1.24.4+k3s1
Now I am going to ssh into VM-2 and make it join the server running on VM-1
curl -sfL https://get.k3s.io | K3S_TOKEN=AtJMEyWR8pE3HR4RWgT6IsqglOkBm0sLC4n0aDBkng9VE1uqyNevR6oCMNCqQNaF sh -s - server --server https://65.0.54.158:6443
Response:
[INFO] Finding release for channel stable
[INFO] Using v1.24.4+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
Job for k3s.service failed because the control process exited with error code.
See "systemctl status k3s.service" and "journalctl -xe" for details
Here are the contents of /var/log/syslog:
Sep 6 19:10:00 ip-172-31-46-114 systemd[1]: Starting Lightweight Kubernetes...
Sep 6 19:10:00 ip-172-31-46-114 sh[9516]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Sep 6 19:10:00 ip-172-31-46-114 sh[9517]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Sep 6 19:10:00 ip-172-31-46-114 k3s[9520]: time="2022-09-06T19:10:00Z" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
Sep 6 19:10:00 ip-172-31-46-114 k3s[9520]: time="2022-09-06T19:10:00Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2"
Sep 6 19:10:02 ip-172-31-46-114 k3s[9520]: time="2022-09-06T19:10:02Z" level=info msg="Starting k3s v1.24.4+k3s1 (c3f830e9)"
Sep 6 19:10:22 ip-172-31-46-114 k3s[9520]: time="2022-09-06T19:10:22Z" level=fatal msg="starting kubernetes: preparing server: failed to get CA certs: Get \"https://65.0.54.158:6443/cacerts\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Sep 6 19:10:22 ip-172-31-46-114 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 19:10:22 ip-172-31-46-114 systemd[1]: k3s.service: Failed with result 'exit-code'.
Sep 6 19:10:22 ip-172-31-46-114 systemd[1]: Failed to start Lightweight Kubernetes.
I have been stuck on this for two days. I would really appreciate some help. Thank you.
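The fatal line above points at connectivity rather than k3s itself: VM-2 cannot reach https://65.0.54.158:6443 before the client times out. A quick diagnostic sketch, run from VM-2 and assuming the same IPs as above, would be:
# From VM-2: check whether the server's port 6443 is reachable at all
$ curl -vk --max-time 10 https://65.0.54.158:6443/cacerts
# If this times out, TCP 6443 is likely blocked between the VMs (cloud security
# group or host firewall); for example, with ufw on VM-1:
$ sudo ufw allow from 65.2.136.2 to any port 6443 proto tcp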

Failing to start Minikube on Debian

I installed Minikube on my Debian 10, but when I try to start it, I get these errors:
$ minikube start
* minikube v1.25.2 on Debian 10.1
* Unable to pick a default driver. Here is what was considered, in preference order:
- docker: Not healthy: "docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/version: dial unix /var/run/docker.sock: connect: permission denied
- docker: Suggestion: Add your user to the 'docker' group: 'sudo usermod -aG docker $USER && newgrp docker' <https://docs.docker.com/engine/install/linux-postinstall/>
- kvm2: Not healthy: /usr/bin/virsh domcapabilities --virttype kvm failed:
error: failed to get emulator capabilities
error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
exit status 1
- kvm2: Suggestion: Follow your Linux distribution instructions for configuring KVM <https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/>
* Alternatively you could install one of these drivers:
- podman: Not installed: exec: "podman": executable file not found in $PATH
- vmware: Not installed: exec: "docker-machine-driver-vmware": executable file not found in $PATH
- virtualbox: Not installed: unable to find VBoxManage in $PATH
I added my user to the docker group using:
sudo usermod -aG docker $USER
and I installed KVM without any apparent problems, as far as I understand:
kvm --version
QEMU emulator version 3.1.0 (Debian 1:3.1+dfsg-8~deb10u1)
Copyright (c) 2003-2018 Fabrice Bellard and the QEMU Project developers
$ lsmod | grep kvm
kvm 729088 0
irqbypass 16384 1 kvm
$ sudo virsh list --all
Id Name State
-----------------------------
1 debian10-MK running
What could be the problem and solution then?
Thanks,
Tamar
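One hedged note on the docker driver message above: group changes made with usermod only apply to new login sessions, so the docker group may not be active in the current shell yet. A minimal check, reusing the suggestion minikube itself prints, would be:
$ newgrp docker                  # pick up the new group membership in the current shell
$ docker version                 # should now work without sudo
$ minikube start --driver=docker # retry with the docker driver explicitly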

Hyperledger cli container does not bind volumes with host

I am trying to implement the BYFN Hyperledger example from my Windows 10 Linux Subsystem (Ubuntu Xenial). However, the ./byfn.sh -m up command fails with the following output:
$GOPATH/fabric-samples/first-network$ ./byfn.sh -m up
Starting with channel 'mychannel' and CLI timeout of '10' seconds and CLI delay of '3' seconds
Continue? [Y/n] y
proceeding ...
2018-04-24 22:12:44.343 UTC [main] main -> INFO 001 Exiting.....
LOCAL_VERSION=1.1.0
DOCKER_IMAGE_VERSION=1.1.0
Creating peer0.org1.example.com ... done
Creating orderer.example.com ... done
Creating peer1.org1.example.com ... done
Creating peer0.org2.example.com ... done
Creating peer1.org2.example.com ... done
Creating cli ... done
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"scripts/script.sh\": stat scripts/script.sh: no such file or directory": unknown
ERROR !!!! Test failed
I see that only one container is built:
$GOPATH/fabric-samples/first-network$ dps
CONTAINER ID NAMES NETWORKS STATUS SIZE
3e66d31c6b9a cli net_byfn Up 27 minutes 17B (virtual 1.46GB)
From the output it seems that the cli container cannot see the script.sh script. Thinking this may be a docker-compose volume-bind issue, I tried to check the binds in the cli container:
$GOPATH/fabric-samples/first-network$ docker exec -ti cli bash
root@3e66d31c6b9a:/opt/gopath/src/github.com/hyperledger/fabric/peer# ls scripts/
root@3e66d31c6b9a:/opt/gopath/src/github.com/hyperledger/fabric/peer# exit
exit
$GOPATH/fabric-samples/first-network$ ls scripts/
capabilities.json script.sh step1org3.sh step2org3.sh step3org3.sh testorg3.sh upgrade_to_v11.sh utils.sh
Looking at the docker-compose-cli.yaml file, I see the following binds for the cli container:
volumes:
- /var/run/:/host/var/run/
- ./../chaincode/:/opt/gopath/src/github.com/chaincode
- ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
- ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
- ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
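For reference, one way to double-check whether those binds actually reached the running container (the container name cli comes from the dps output above) would be to list its mounts:
$ docker inspect cli --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}'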
My Docker settings:
$GOPATH/fabric-samples/first-network$ docker version
Client:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.2
Git commit: 0520e24
Built: Wed Mar 21 23:05:52 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.03.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:14:32 2018
OS/Arch: linux/amd64
Experimental: false
$GOPATH/fabric-samples/first-network$ docker-compose version
docker-compose version 1.21.0, build 5920eb0
docker-py version: 3.2.1
CPython version: 3.6.5
OpenSSL version: OpenSSL 1.0.1t 3 May 2016
My Go version:
$GOPATH/fabric-samples/first-network$ go version
go version go1.10.1 linux/amd64
Wondering if I'm missing something. I should mention that I used the following command to start from scratch, based on a fresh set of images (no prior images), as outlined in this script:
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s 1.1.0
Thanks
It's most probably the version of golang: Fabric needs Go version 1.9.x, and the error message exec failed: container_linux.go:348: indicates the same thing.
In my case, it was the first Note on this page https://hyperledger-fabric.readthedocs.io/en/latest/install.html
I'm running on Windows 10, but Docker won't work since I need Windows 10 Pro or better to run it. So I'm using Docker Toolbox, and must follow the Windows 7 trick of cloning the sources anywhere under C:\Users

"bosh create-env concourse-lite.yml" fails while Creating VM for instance 'concourse/0' from stemcell

I am trying to spin up Concourse using "bosh create-env concourse-lite.yml".
I am using a VM with Ubuntu to set up Concourse. Below are the system details and the command output.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
4.4.0-101-generic #124-Ubuntu SMP Fri Nov 10 18:29:59 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Command output: bosh create-env concourse-lite.yml
Deployment manifest: '/root/concourse-lite.yml'
Deployment state: '/root/concourse-lite-state.json'
Started validating
Downloading release 'concourse'... Skipped [Found in local cache] (00:00:00)
Validating release 'concourse'... Finished (00:00:03)
Downloading release 'garden-runc'... Skipped [Found in local cache] (00:00:00)
Validating release 'garden-runc'... Finished (00:00:03)
Downloading release 'bosh-virtualbox-cpi'... Skipped [Found in local cache] (00:00:00)
Validating release 'bosh-virtualbox-cpi'... Finished (00:00:03)
Validating cpi release... Finished (00:00:00)
Validating deployment manifest... Finished (00:00:00)
Downloading stemcell... Skipped [Found in local cache] (00:00:00)
Validating stemcell... Finished (00:00:03)
Finished validating (00:00:13)
Started installing CPI
Compiling package 'golang-1.8-linux/c97f9a00c26b34a3f59ca15b0f5a079d7f7e27c334cc8100248143c5dc0d4c0a'... Finished (00:00:00)
Compiling package 'golang-1.8-darwin/ee2bb46a25872469cd8fe0f4b0804bab5c39cc5512bbcc4335c8691a038d3e73'... Finished (00:00:00)
Compiling package 'virtualbox_cpi/cb3116b9b6c2111a873bb4ea14a1f3544ccdd2af'... Finished (00:00:00)
Installing packages... Finished (00:00:12)
Rendering job templates... Finished (00:00:00)
Installing job 'virtualbox_cpi'... Finished (00:00:00)
Finished installing CPI (00:00:12)
Starting registry... Finished (00:00:00)
Uploading stemcell 'bosh-vsphere-esxi-ubuntu-trusty-go_agent/3468.1'... Skipped [Stemcell already uploaded] (00:00:00)
Started deploying
Creating VM for instance 'concourse/0' from stemcell 'sc-6196df8a-6e26-411f-70b5-0cc0466a5adf'... Failed (00:01:19)
Failed deploying (00:01:19)
Stopping registry... Finished (00:00:00)
Cleaning up rendered CPI jobs... Finished (00:00:00)
Deploying:
Creating instance 'concourse/0':
Creating VM:
Creating vm with stemcell cid 'sc-6196df8a-6e26-411f-70b5-0cc0466a5adf':
CPI 'create_vm' method responded with error: CmdError{"type":"Bosh::Clouds::CloudError","message":"Creating VM with agent ID '{{559adf3b-c06e-4af0-5564-6e86f5d95d81}}': Starting VM: Retried '30' times: Running command: 'VBoxManage startvm vm-e9498e46-c2ef-4772-69f0-627f95044b4a --type headless', stdout: 'Waiting for VM \"vm-e9498e46-c2ef-4772-69f0-627f95044b4a\" to power on...\n', stderr: 'VBoxManage: error: The virtual machine 'vm-e9498e46-c2ef-4772-69f0-627f95044b4a' has terminated unexpectedly during startup with exit code 1 (0x1)\nVBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component MachineWrap, interface IMachine\n': exit status 1","ok_to_retry":false}
Exit code 1
Seems like an issue with VirtualBox on your machine. Some folks running on Ubuntu have also reported seeing this same error here.
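If it helps, the VirtualBox side of the failure can be inspected directly; a diagnostic sketch, taking the VM name from the error output above and assuming the default machine folder, would be:
$ VBoxManage list vms   # confirm the VM was created
$ VBoxManage showvminfo vm-e9498e46-c2ef-4772-69f0-627f95044b4a
# Startup details usually end up in the VM's log under the default machine folder:
$ less ~/"VirtualBox VMs"/vm-e9498e46-c2ef-4772-69f0-627f95044b4a/Logs/VBox.log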

minikube: could not find capabilities for domaintype=kvm

What is installed for minikube:
$ ls -al /usr/local/bin/
-rwxr-xr-x 1 root root 26406912 Jun 14 12:05 docker-machine
-rwxrwxr-x 1 me libvirtd 11889064 Jun 14 12:07 docker-machine-driver-kvm
-rwxrwxr-x 1 me me 70232912 Jun 14 11:58 kubectl
-rwxrwxr-x 1 me me 82512696 Jun 14 11:57 minikube
Trying to start the cluster with minikube:
$ minikube start --vm-driver=kvm
Starting local Kubernetes v1.6.4 cluster...
Starting VM...
E0614 12:07:39.515994 14655 start.go:127] Error starting host: Error creating host: Error creating machine: Error in driver during machine creation: virError(Code=8, Domain=44, Message='invalid argument: could not find capabilities for domaintype=kvm ').
Retrying.
E0614 12:07:39.517076 14655 start.go:133] Error starting host: Error creating host: Error creating machine: Error in driver during machine creation: virError(Code=8, Domain=44, Message='invalid argument: could not find capabilities for domaintype=kvm ')
I am new to Kubernetes. Any idea how to fix it? Thanks.
UPDATE
sudo /usr/sbin/kvm-ok
INFO: /dev/kvm does not exist
HINT: sudo modprobe kvm_intel
INFO: Your CPU supports KVM extensions
INFO: KVM (vmx) is disabled by your BIOS
HINT: Enter your BIOS setup and enable Virtualization Technology (VT),
and then hard poweroff/poweron your system
KVM acceleration can NOT be used
$ dmesg | grep kvm
[ 2.114855] kvm: disabled by bios
[ 2.327746] kvm: disabled by bios
[ 120.423249] kvm: disabled by bios
[ 222.250977] kvm: disabled by bios
My update is close to the solution. The solution is to enable virtualization in the BIOS:
1. Power on your PC and open the BIOS.
2. Go to the security section and enable virtualization.
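After enabling VT in the BIOS, a quick re-check with the same tools used above would be:
$ sudo modprobe kvm_intel   # load the module once virtualization is enabled
$ sudo kvm-ok               # should now report that KVM acceleration can be used
$ dmesg | grep kvm          # "disabled by bios" should no longer appear after a fresh boot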
You need to install the KVM packages; refer to:
https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm-driver
# Install libvirt and qemu-kvm on your system, e.g.
# Debian/Ubuntu
$ sudo apt install libvirt-bin qemu-kvm
# Fedora/CentOS/RHEL
$ sudo yum install libvirt-daemon-kvm kvm
# Add yourself to the libvirtd group (use libvirt group for rpm based distros) so you don't need to sudo
# Debian/Ubuntu (NOTE: For Ubuntu 17.04 change the group to `libvirt`)
$ sudo usermod -a -G libvirtd $(whoami)
# Fedora/CentOS/RHEL
$ sudo usermod -a -G libvirt $(whoami)
# Update your current session for the group change to take effect
# Debian/Ubuntu (NOTE: For Ubuntu 17.04 change the group to `libvirt`)
$ newgrp libvirtd
# Fedora/CentOS/RHEL
$ newgrp libvirt
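Once the packages are installed and the group change has taken effect, a hedged way to sanity-check the host before retrying is virt-host-validate, which ships with libvirt on most distros:
$ virt-host-validate qemu        # verifies /dev/kvm, cgroup support, etc.
$ minikube start --vm-driver=kvm # retry with the same driver as before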