Rancher agent installation: systemctl not found - kubernetes

I have a Rancher installation in the cloud (integrated with Harvester) and a couple of VMs on a local node (running k3OS), created with Harvester.
Now I would like to connect the K3s cluster running on a VM to Rancher, but when I run the agent script that Rancher gave me inside the VM, it fails with an error:
systemctl: command not found
Am I doing something wrong?

I found the problem.
When you run a VM with k3OS, a K3s cluster is already started inside the VM, as mentioned above. So I was wrong to choose "Create a cluster"; I should have chosen "Import an existing cluster" instead. The "Create a cluster" path expects to manage services with systemctl, and k3OS does not use systemd, which explains the "systemctl: command not found" error. With the import option, the script you run inside the VM works perfectly.
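For reference, the import flow boils down to a single command. A hedged sketch of what Rancher's "Import an existing cluster" screen generates (the server URL and token here are placeholders, not real values; copy the exact command from the Rancher UI):

# Run on the VM, against the existing K3s cluster:
curl --insecure -sfL https://<rancher-server>/v3/import/<token>.yaml | kubectl apply -f -

This only applies manifests through kubectl, so it never touches systemctl, which is why it works on k3OS.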

Related

Exposing ingress to host windows machine when running minikube in vagrant virtualbox (ubuntu VM) with docker driver

I am running a Vagrant box using VirtualBox (headless Ubuntu 18.04) on a Windows 10 host machine.
Inside the VM, I have minikube set up using Docker as the vm-driver:
minikube start --memory=6144 --cpus=2 --disk-size=40g --vm-driver=docker --bootstrapper kubeadm --kubernetes-version=1.17.4
My application is exposed via an ingress to the Ubuntu machine running inside VirtualBox, and I am able to access it there via wget/cURL.
Running minikube ip gives me the IP of the Docker container in which minikube runs.
Some additional configuration info -
Vagrant file -
I would like to access the application from my Windows machine's browser. Any idea how to achieve that? Vagrant port forwarding doesn't seem to help.
If you really want to use a setup like this (Vagrant etc.), you can just use --vm-driver=none and let Kubernetes run directly in your Ubuntu box; this way you can leverage Vagrant's port forwarding. You could probably also make your current setup work, but I've never tried it, so I can't say; I do know that none works. You can follow this guide.
There are other options, like running minikube on Windows directly, which is perfectly fine as well.
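As a hedged sketch of the none-driver route (the port numbers are assumptions; use whatever your ingress actually listens on):

# Inside the Ubuntu guest -- the none driver runs Kubernetes directly on the host OS, so it needs root:
sudo minikube start --vm-driver=none --bootstrapper=kubeadm --kubernetes-version=1.17.4
# In the Vagrantfile on the Windows host, forward the ingress port, then vagrant reload:
#   config.vm.network "forwarded_port", guest: 80, host: 8080
# Then browse to http://localhost:8080 from Windows.

With the none driver there is no intermediate Docker container holding the cluster, so the ingress binds on the Ubuntu guest itself and Vagrant's forwarded port can reach it.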

VSCode devcontainer connect to kubernetes cluster on vm

Ultimate Goal
From a dotnet/core/sdk devcontainer (using VSCode Remote Containers), debug a .NET Core app running in a Kubernetes cluster hosted on another VM on my host machine.
Current Setup
Docker Desktop for Windows running via Hyper-V
default DockerNAT network adapter
Ubuntu VM (Multipass) running on the same Hyper-V host
microk8s cluster running on this Ubuntu instance
default "Default Switch" network adapter
Errors
When I try to ping the Ubuntu VM from a Docker container by hostname, the IP is resolved properly, but I get the error "Destination Host Unreachable"
When I try to curl the cluster api, I get the error "No route to host"
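For context, the failing checks looked roughly like this (the hostname is a placeholder; 16443 is microk8s's default API server port):

# From inside a Docker Desktop container:
ping <ubuntu-vm-hostname>                        # name resolves, then "Destination Host Unreachable"
curl -k https://<ubuntu-vm-hostname>:16443/api   # "No route to host"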
I put this problem aside for a week, and over that time the host has been rebooted multiple times, but no further modifications were made to the networking, Hyper-V setup, etc.
Starting the Ubuntu VM today, its IP had changed from what used to be 172.?.?.? to 192.168.92.x. I do not know what caused this change.
Now, Docker Desktop containers can ping the Ubuntu VM and curl the microk8s /api endpoint. Until such time as I can reproduce the issue, I will mark this question as "solved"; I will reopen it and try Nick's recommended solution if the issue returns.

Minikube Start Error (Kubernetes) When Using hyperv Driver on Windows server 2016

I am trying to install Kubernetes on Windows Server 2016.
I tried to install minikube, and got some errors.
This is the tutorial that I followed:
https://www.assistanz.com/installing-minikube-on-windows-2016-server/
This is the command + error that I got:
PS C:\Windows\system32> minikube start –vm-driver=hyperv –hyperv-virtual-switch=Minikube
Starting local Kubernetes v1.10.0 cluster...
Starting VM... Downloading Minikube ISO
170.78 MB / 170.78 MB [============================================] 100.00% 0s
E1106 19:29:10.616564 11852 start.go:168] Error starting host: Error creating host: Error executing step: Running precreate checks.
: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path.
Retrying.
E1106 19:29:10.689675 11852 start.go:174] Error starting host: Error creating host: Error executing step: Running precreate checks.
: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:
Does anyone know how to solve it?
I googled it, but no luck.
Thanks!
I was never able to get the config parameters to work with minikube start.
I was able to get past this error using the minikube config commands in PowerShell (they should also work at a command prompt):
minikube config set vm-driver hyperv
minikube config set hyperv-virtual-switch ExternalSwitch
minikube config view
minikube delete
minikube start
For more information on the command, run: minikube config -h
Looking at the documentation you have provided, I noticed that the screenshot differs slightly from the command they quote.
I have also found this command in another piece of documentation from Kubernetes here, showing the same command as the one in the screenshot.
I suggest you try the following command;
minikube start --vm-driver=hyperv --hyperv-virtual-switch=Minikube
It is true that the OP has pasted the incorrect command, because there is – instead of --. But I tried to pass these arguments to minikube, and all you get is an instant error, so the issue must be somewhere else. I remember having a similar issue, and it was resolved by deleting the .kube and .minikube folders and trying to run it again.
After taking a closer look, this tutorial is intended for installing minikube inside a Windows Server 2016 virtual machine, so your hardware has to be capable of nested virtualization:
Prerequisites: The Hyper-V host and guest must both be Windows Server 2016/Windows 10 Anniversary Update or later. VM configuration version 8.0 or greater. An Intel processor with VT-x and EPT technology -- nesting is currently Intel-only. There are some differences with virtual networking for second-level virtual machines. See "Nested Virtual Machine Networking".
So the main question is: is that true in your scenario? Are you trying to perform these steps on a Windows Server Hyper-V virtual machine with the nested virtualization feature enabled?
If you confirm that, I have the means to check it in that scenario.
Otherwise, I recommend the "traditional way" of running minikube on Windows, following, for example, this tutorial.
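If nested virtualization is the culprit, a hedged sketch of how to check and enable it from the Hyper-V host in PowerShell (the VM name is a placeholder, and the VM must be powered off first):

# On the physical Hyper-V host:
Get-VMProcessor -VMName "WinServer2016" | Select-Object ExposeVirtualizationExtensions
Set-VMProcessor -VMName "WinServer2016" -ExposeVirtualizationExtensions $true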

Kubernetes ssh into nodes not working in local

How do I SSH to a node inside the cluster locally? I am using the Docker Edge version, which has Kubernetes built in. If I run
kubectl ssh node
I am getting
Error: unknown command "ssh" for "kubectl"
Did you mean this?
set
Run 'kubectl --help' for usage.
error: unknown command "ssh" for "kubectl"
Did you mean this?
set
There is no "ssh" command in kubectl yet, but there are plenty of options to access a Kubernetes node's shell.
If you are using a cloud provider, you can connect to nodes directly from the instance management interface.
For example, in GCP: select Menu -> Compute Engine -> VM instances, then press the SSH button next to the desired node instance.
If you are using a local VM (VMware, VirtualBox), you can configure sshd before rolling out the Kubernetes cluster, or use the VM console, which is available from the management GUI.
Vagrant provides its own command to access VMs: vagrant ssh.
With minikube, there is the minikube ssh command to connect to the minikube VM. There are also other options.
I found no simple way to access the docker-for-desktop VM, but you can easily switch to minikube for experimenting with node settings.
How to ssh to the node inside the cluster in local
Kubernetes is aware of nodes only at the level of secure communication with the kubelets running on them (getting the hostname and IP from each node), and as such does not provide cluster-level SSH to nodes out of the box. Depending on your actual provider/setup there are different ways of connecting to nodes, but they all boil down to locating your SSH key, opening the appropriate ports on the firewall/security groups, and issuing ssh -i key user@node_instance_ip to access the node. If you are running locally with virtual machines, you can set up your own SSH key pairs and do the same.
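For a local VM, a minimal sketch of that key pair setup (the user and IP are placeholders for your own VM):

ssh-keygen -t rsa -f ~/.ssh/k8s_node                    # create a key pair, if you don't have one
ssh-copy-id -i ~/.ssh/k8s_node.pub user@192.168.56.10   # push the public key to the node
ssh -i ~/.ssh/k8s_node user@192.168.56.10               # then shell in directly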
You can effectively shell into a pod using exec (I know it's not exactly what the question asks, but it might be helpful).
An example usage would be kubectl exec -it name-of-your-pod -- /bin/bash, assuming you have bash installed in the container.
Hope that helps.
You first have to extend kubectl with plugins, e.g. by adding https://github.com/luksa/kubectl-plugins.
Basically, to "install" ssh:
wget https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh
Then make sure the kubectl-ssh file is somewhere on your PATH.
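A sketch of the full install, assuming kubectl's PATH-based plugin mechanism (kubectl 1.12+); the final invocation is an assumption, so check the plugin's README for the exact arguments:

wget https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh
chmod +x kubectl-ssh
sudo mv kubectl-ssh /usr/local/bin/   # any directory on your PATH works
kubectl ssh <node-name>               # kubectl discovers the kubectl-ssh binary on PATH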

How to start up a Kubernetes cluster using Rocket?

I'm using a Chromebook Pixel 2, and it's easier to get Rocket working than Docker. I recently installed Rocket 1.1 into /usr/local/bin, and have a clone of the Kubernetes GitHub repo.
When I try to use ./hack/local-up-cluster.sh to start a cluster, it eventually fails with this message:
Failed to successfully run 'docker ps', please verify that docker is installed and $DOCKER_HOST is set correctly.
According to the docs, k8s supports Rocket. Can someone please guide me on how to start a local cluster without a working Docker installation?
Thanks in advance.
You need to set three environment variables before running ./hack/local-up-cluster.sh:
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=$PATH_TO_RKT_BINARY
$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
This is described in the docs for getting started with a local rkt cluster.
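Putting it together, a hedged sketch with typical paths (both paths are assumptions; point them at wherever your rkt binary and stage1 image actually live):

export CONTAINER_RUNTIME=rkt
export RKT_PATH=/usr/local/bin/rkt                       # assumed rkt binary location
export RKT_STAGE1_IMAGE=/usr/local/bin/stage1-coreos.aci # assumed stage1 image location
./hack/local-up-cluster.sh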
Try running export CONTAINER_RUNTIME="rocket" and then re-running the script.