Falco installation on DigitalOcean Kubernetes

While installing Falco on DigitalOcean managed Kubernetes, I am getting the following error:
Runtime error: can't open BPF probe '/root/.falco/falco-bpf.o': Errno 2. Exiting. I know this is caused by the eBPF probe not being installed on the cluster nodes.
But when I try installing without the eBPF probe, I get the following error:
Unable to load the driver. Runtime error: error opening device /host/dev/falco0. Make sure you have root credentials and that the falco module is loaded.. Exiting.
We have to install Falco and the drivers manually by SSHing into the nodes, and even then we can only install without the eBPF probe, using the kernel module. Has anyone faced the same issue? Is there any solution to this that does not require SSHing into the nodes?
Falco version: latest
System info: Worker node Debian 10 Linux kernel 5.10.0
Cloud provider or hardware configuration: 3 vCPU, 6GB Memory, 150 GB Disk
OS: Debian 10
Kernel: Linux 5.10.0
Installation method: from source
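One possible way around SSHing into the nodes would be to deploy Falco with the falcosecurity Helm chart and let its init container load the eBPF probe. This is only a sketch based on that assumption - the value name (ebpf.enabled here) differs between chart versions, so check helm show values falcosecurity/falco first:
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
# enable the eBPF probe instead of the kernel module (value name may differ in newer charts)
helm install falco falcosecurity/falco --namespace falco --create-namespace --set ebpf.enabled=true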

Related

Monitor daemon running but not in quorum

I'm currently testing OS and version upgrades for a ceph cluster. Starting info:
The cluster is currently on CentOS 7 with Ceph Nautilus. I'm trying to change the OS to Ubuntu 20.04 and the Ceph version to Octopus. I started by upgrading mon1 first. I will write down the steps taken in order.
First off, I stopped the monitor service - systemctl stop ceph-mon@mon1
Then I removed the monitor from cluster - ceph mon remove mon1
Then installed Ubuntu 20.04 on mon1. Updated the system and configured ufw.
Installed ceph octopus packages.
Copied ceph.client.admin.keyring and ceph.conf to mon1 /etc/ceph/
Copied ceph.mon.keyring to a temporary folder on mon1 and changed ownership to ceph:ceph
Got the monmap - ceph mon getmap -o ${MONMAP} - The thing is, I did this after removing the monitor.
Created /var/lib/ceph/mon/ceph-mon1 folder and changed ownership to ceph:ceph
Created the filesystem for monitor - sudo -u ceph ceph-mon --mkfs -i mon1 --monmap /folder/monmap --keyring /folder/ceph.mon.keyring
After noticing I had fetched the monmap after the monitor's removal, I added the monitor manually - ceph mon add mon1 <ip> --fsid <fsid>
After starting it manually and checking the cluster state with ceph -s, I can see mon1 is listed but not in quorum. The monitor daemon runs fine on the mon1 node. I noticed in the logs that mon1 is stuck in the "probe" state, and the other monitors' logs show output such as mon1 (rank 2) addr [v2:<ip>:3300/0,v1:<ip>:6789/0] is down (out of quorum). As I said, the monitor daemon is running on mon1 without any visible errors, just stuck in the probe state.
I wondered if it was caused by the OS and version change, so I first tried configuring the manager, MDS and RadosGW daemons by creating the respective folders in /var/lib/ceph/... and copying keyrings. All these services work fine: I was able to reach my buckets, open the Octopus dashboard, and the metadata server is listed as active in ceph -s. So evidently my problem is only with the monitor configuration.
After doing some checking, I found this in the Red Hat Ceph documentation:
If the Ceph Monitor is in the probing state longer than expected, it cannot find the other Ceph Monitors. This problem can be caused by networking issues, or the Ceph Monitor can have an outdated Ceph Monitor map (monmap) and be trying to reach the other Ceph Monitors on incorrect IP addresses. Alternatively, if the monmap is up-to-date, the Ceph Monitor's clock might not be synchronized.
There is no network error on the monitor; I can reach all the other machines in the cluster. The clocks are synchronized. If this problem is caused by the monmap situation, how can I fix this?
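If the stale monmap is indeed the cause, a hedged sketch of rebuilding mon1's store from a freshly fetched map would be (fetch the map from a node that is still in quorum; the keyring path reuses the placeholder from above):
ceph mon getmap -o /tmp/monmap
systemctl stop ceph-mon@mon1
rm -rf /var/lib/ceph/mon/ceph-mon1
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-mon1
sudo -u ceph ceph-mon --mkfs -i mon1 --monmap /tmp/monmap --keyring /folder/ceph.mon.keyring
systemctl start ceph-mon@mon1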
Ok, so as a result: going directly from CentOS 7 / Nautilus to Ubuntu 20.04 / Octopus is not possible for the monitor services only; apparently the issue is about hostname resolution differing between operating systems. The rest of the services are fine. There is a longer way to do this without issues, and it is the correct solution: first change the OS from CentOS 7 to Ubuntu 18.04, install the ceph-nautilus packages and add the machines back to the cluster (no issues at all). Then update & upgrade the system and run "do-release-upgrade". Works like a charm. I think what eblock mentioned was this.

Minikube: Unable to start minikube - Exiting due to DRV_NO_IP:

I am trying to create a minikube cluster, but it always fails.
Any suggestions are very welcome:
C:\WINDOWS\system32>minikube start --driver=vmware
minikube v1.16.0 on Microsoft Windows 10 Home 10.0.19042 Build 19042
Using the vmware driver based on user configuration
Starting control plane node minikube in cluster minikube
Creating vmware VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
Deleting "minikube" in vmware ...
! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds, aborting
Creating vmware VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
Failed to start vmware VM. Running "minikube delete" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds, aborting
X Exiting due to DRV_NO_IP: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds, aborting
Suggestion: Check your firewall rules for interference, and run 'virt-host-validate' to check for KVM configuration issues. If you are running minikube within a VM, consider using --driver=none
Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
Related issues:
https://github.com/kubernetes/minikube/issues/4249
https://github.com/kubernetes/minikube/issues/3566
I had a similar error when setting up Minikube on Mac OS.
When I ran the command minikube start, I got the error below:
😄 minikube v1.22.0 on Darwin 11.4
✨ Using the vmware driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🏃 Updating the running vmware "minikube" VM ...
🤦 StartHost failed, but will try again: provision: IP not found for MAC 00:0c:29:41:e9:b9 in DHCP leases
🏃 Updating the running vmware "minikube" VM ...
😿 Failed to start vmware VM. Running "minikube delete" may fix it: provision: IP not found for MAC 00:0c:29:41:e9:b9 in DHCP leases
❌ Exiting due to GUEST_PROVISION: Failed to start host: provision: IP not found for MAC 00:0c:29:41:e9:b9 in DHCP leases
The issue was caused by an interruption when I was creating the VMWare VM for Minikube.
I tried fixing it by deleting the existing minikube VM and creating another one using:
minikube delete
minikube start
But then I ran into another issue this time:
Exiting due to DRV_NO_IP: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds, aborting.
Here's how I fixed it:
After a lot of research, I found it's simplest to set up minikube with the Docker driver, which saves you all this hassle.
For Mac OS:
If you already have docker or docker-compose installed, uninstall them first using:
brew uninstall docker
brew uninstall docker-compose
Next, install Docker Desktop, which will install Docker, docker-compose, and other dependencies, using:
brew install --cask docker
Next, start the Docker engine by opening the Docker application, after which you can confirm the Docker version using:
docker --version
Finally, set up minikube using the Docker driver:
minikube start --driver=docker
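Assuming kubectl is installed alongside minikube, you can then confirm the cluster came up with:
minikube status
kubectl get nodes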

K3S on Raspberry Pi 4 - kubectl get pods runs into timeout

Problem
When I connect a k3s agent to the server and run "kubectl get nodes" on the server, I get the following error:
root@k3s-master:/home/marc# kubectl get nodes
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
But if the server is standalone, I can easily run "kubectl get nodes".
CPU utilisation on the server stays about 30-40%. And RAM usage is at 583M of 3.74G.
Specs
2x Raspberry Pi 4b with 4GB RAM
Fresh install of raspbian lite (buster)
Enabled legacy iptables and cgroups
K3S Installation
On the server, I've done:
curl -sfL https://get.k3s.io | sh -
And on the agent:
curl -sfL https://get.k3s.io | K3S_URL=https://k3s-master:6443 K3S_TOKEN=<token> sh -
Thanks in advance - this is driving me crazy!
Thanks to Stack Overflow's related questions feature, I stumbled upon this question: PI4 k3s install server currently unable to handle the request
There seems to be an issue regarding cgroup memory failures with buster kernel 5.4.x
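For reference, the commonly suggested workaround is to enable the memory cgroup at boot by appending the following to the single line in /boot/cmdline.txt and rebooting (this is what "adding the cgroup" below refers to):
cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory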
Edit
Adding the cgroup didn't help, so I switched to Ubuntu 20.04 and it's working now.

Minikube Start Error (Kubernetes) When Using hyperv Driver on Windows server 2016

I am trying to install Kubernetes on Windows Server 2016.
I tried to install minikube and got some errors.
This is the tutorial that I followed:
https://www.assistanz.com/installing-minikube-on-windows-2016-server/
This is the command + error that I got:
PS C:\Windows\system32> minikube start –vm-driver=hyperv –hyperv-virtual-switch=Minikube
Starting local Kubernetes v1.10.0 cluster...
Starting VM... Downloading Minikube ISO
170.78 MB / 170.78 MB [============================================] 100.00% 0s
E1106 19:29:10.616564 11852 start.go:168] Error starting host: Error creating host: Error executing step: Running precreate checks.
: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path.
Retrying.
E1106 19:29:10.689675 11852 start.go:174] Error starting host: Error creating host: Error executing step: Running precreate checks.
: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:
Does anyone know how to solve it?
I googled it, but no luck.
Thanks!
I was never able to get the config parameters to work with minikube start.
I was able to get past this error using the minikube config commands in PowerShell (should also work at a command prompt):
minikube config set vm-driver hyperv
minikube config set hyperv-virtual-switch ExternalSwitch
minikube config view
minikube delete
minikube start
For more information on the command, run: minikube config -h
Looking at the documentation you have provided, I noticed that the screenshot shows a slight difference from the command they quote.
I also found this command in another piece of documentation from Kubernetes here, showing the same command as the one in the screenshot.
I suggest you try the following command:
minikube start --vm-driver=hyperv --hyperv-virtual-switch=Minikube
It is true that the OP has pasted an incorrect command, because there is - instead of --. I tried to pass these arguments to minikube and all you get is an instant error, so the issue must be somewhere else. I remember having a similar issue, and it got resolved after deleting the .kube and .minikube folders and trying to run it again.
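As a rough PowerShell sketch of that cleanup (the switch name Minikube is taken from the tutorial and is an assumption about your setup):
minikube delete
Remove-Item -Recurse -Force "$HOME\.minikube" -ErrorAction SilentlyContinue
Remove-Item -Recurse -Force "$HOME\.kube" -ErrorAction SilentlyContinue
minikube start --vm-driver=hyperv --hyperv-virtual-switch=Minikube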
After taking a closer look, this tutorial is intended for installing minikube inside a Windows Server 2016 virtual machine, so you need hardware capable of nested virtualization:
Prerequisites: The Hyper-V host and guest must both be Windows Server 2016/Windows 10 Anniversary Update or later. VM configuration version 8.0 or greater. An Intel processor with VT-x and EPT technology -- nesting is currently Intel-only. There are some differences with virtual networking for second-level virtual machines. See "Nested Virtual Machine Networking".
So the main question is: is that true in your scenario? Are you trying to perform your steps on a Windows Server Hyper-V virtual machine with the nested virtualization feature?
If you confirm that, I have the technical means to check it in that scenario.
Otherwise, I recommend using the "traditional" way of running minikube on Windows, following for example this tutorial.

Running Kubernetes Locally via minikube

I am working on an OpenAM deployment on Google Cloud Platform (GCP) and the OS is RHEL 7.
I am facing an issue while running minikube start.
[root@test ~]# minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
150.53 MB / 150.53 MB [============================================] 100.00% 0s
E0509 06:20:12.950109 16264 start.go:159] Error starting host: Error creating host: Error executing step: Running precreate checks.
: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory.
Retrying.
E0509 06:20:12.951500 16264 start.go:165] Error starting host: Error creating host: Error executing step: Running precreate checks.
: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory
I already installed VirtualBox on RHEL.
How do I enable VT-x on GCP?
Thanks
Ashish
You can use --vm-driver=none to run minikube in the cloud. This flag runs the Kubernetes components directly on the host, using Docker as the container runtime, so you need Docker installed first.
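A minimal sketch of that approach on the RHEL 7 VM, assuming Docker is already installed and running (the none driver needs to run as root):
sudo systemctl start docker
sudo minikube start --vm-driver=none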
Alternatively, you can create a custom image with VMX enabled. Just follow the official documentation instructions.
Example from the documentation on how to create a custom image with enabled VMX:
gcloud compute images create nested-vm-image --source-disk disk1 --source-disk-zone us-central1-a --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"
Then, just create a new VM with the custom image.
gcloud compute instances create example-nested-vm --zone us-central1-b --image nested-vm-image
Finally, you can install VirtualBox or KVM and start minikube.
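For example, once KVM and the corresponding driver are installed on the nested-virtualization VM, a hedged last step would be:
minikube start --vm-driver=kvm2
or, with VirtualBox:
minikube start --vm-driver=virtualbox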