When using a kubeadm join token to join a worker node to a Kubernetes master, I am receiving the following errors.
[preflight] running pre-flight checks
[preflight] WARNING: Couldn't create the interface used for talking to the container runtime: docker is required for container runtime: exec: "docker": executable file not found in $PATH
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh nf_conntrack_ipv4 ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
[preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
*********************
Error 2: when I run modprobe br_netfilter
modprobe: FATAL: Module br_netfilter not found.
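If the goal is to clear these preflight errors, the usual route is to make br_netfilter available, load it, and enable the two sysctls. A minimal sketch for an Ubuntu host (the linux-modules-extra package name and the exact module list are assumptions that vary by distribution and kernel):
# "Module br_netfilter not found" usually means the extra-modules package for the running kernel is missing
apt-get install -y linux-modules-extra-$(uname -r)
# loading br_netfilter creates /proc/sys/net/bridge/bridge-nf-call-iptables
modprobe br_netfilter
# optional: the IPVS modules from the warning; kube-proxy falls back to iptables without them
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4
# enable the sysctls the two fatal checks look for
sysctl -w net.bridge.bridge-nf-call-iptables=1
sysctl -w net.ipv4.ip_forward=1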
It seems like docker is not installed or not in your PATH:
Couldn't create the interface used for talking to the container runtime: docker is required for container runtime: exec: "docker": executable file not found in $PATH
This can be fixed by installing docker and ensuring the docker executable is in your PATH.
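A minimal sketch of that fix on Ubuntu (the docker.io package name is distribution-specific; on other systems follow Docker's install docs):
apt-get update
apt-get install -y docker.io
systemctl enable --now docker
# confirm the binary is now on PATH
which docker && docker version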
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2023-02-19T02:25:52Z" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
I enabled ports 6443 and 10250 in the security group.
I also tried these commands:
rm /etc/containerd/config.toml
systemctl restart containerd
kubeadm init
But the containerd config is not present under the /etc directory.
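If /etc/containerd/config.toml is missing or was removed, one hedged option is to regenerate the stock configuration, which keeps the CRI plugin enabled, and then restart the service. A sketch, assuming containerd itself is installed:
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
# optional sanity check, if crictl is installed
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info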
Steps:
kubeadm init --apiserver-advertise-address= --pod-network-cidr=192.168.0.0/16
After executing the above command, it throws the issues below:
1-->[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
2-->[ERROR Mem]: the system RAM (966 MB) is less than the minimum 1700 MB
Then I re-executed the command below:
kubeadm init --apiserver-advertise-address=172.31.92.76 --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem
and it shows this error:
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
[preflight] Running pre-flight checks
[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
[WARNING Mem]: the system RAM (966 MB) is less than the minimum 1700 MB
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
root@ip-172-31-92-76:/home/ubuntu#
How do I overcome this?
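The "already exists" errors usually mean an earlier kubeadm init got far enough to write the static-pod manifests. A hedged sketch of cleaning up and retrying (note that kubeadm reset wipes any existing control-plane state on this node):
kubeadm reset -f
kubeadm init --apiserver-advertise-address=172.31.92.76 --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=NumCPU,Mem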
I am creating an egress-operator. I have a pod egress-operator-controller-manager created from the Makefile by the command make deploy IMG=my_azure_repo/egress-operator:v0.1. The pod is failing, and the pod description shows this error:
State: Waiting
Reason: RunContainerError
Last State: Terminated
Reason: StartError
Message: failed to create containerd task: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/manager": stat /manager: no such file or directory: unknown
Failed 44s (x3 over 59s) kubelet Error: failed to create containerd task:
OCI runtime create failed: container_linux.go:380: starting container process caused:
exec: "/manager": stat /manager: no such file or directory: unknown
I suspect that in manager.yaml, /manager is what gets executed under command:. Can someone let me know what is going wrong in this manager.yaml, and whether /manager is valid under command: in deployment.yaml?
Debugging UPDATE
Instead of running the Dockerfile, I now just build and run image_azure_repo/egress-operator:v0.1 locally (on the same Ubuntu 18 VM). When I try to log in with docker exec -it <container_id> /bin/bash, I get this error:
OCI runtime exec failed: exec failed: container_linux.go:380:
starting container process caused: exec: "/bin/bash": stat /bin/bash:
no such file or directory: unknown
This is similar to the error I see in the pod description. Instead of /bin/bash, I also tried docker exec with /bin/sh and plain sh; it gives the same error.
This is likely a libc mismatch.
Go binaries link against glibc by default (when cgo is enabled), while Alpine ships musl.
You could try one of these to fix it:
Use a base image that ships glibc
Install glibc in your Alpine image
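If you control the build, a third hedged option is to compile a fully static binary so the base image's libc no longer matters; a sketch, assuming a standard Go module with main.go at the repository root:
# build without cgo so the binary needs neither glibc nor musl at runtime
CGO_ENABLED=0 GOOS=linux go build -a -o manager main.go
Scaffolded kubebuilder Dockerfiles typically already set CGO_ENABLED=0 in the builder stage, so it is worth checking whether that line was changed.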
In the Dockerfile, the entrypoint you have mentioned is ENTRYPOINT ["/manager"], but shouldn't it be ENTRYPOINT ["./manager"]?
regarding "starting container process caused: exec: "/bin/bash": stat /bin/bash: " , it seems base image does not have /bin/bash it might have just /bin/sh
I want to install Kubernetes on CentOS 7. I have installed kubeadm and am using it to install Kubernetes:
kubeadm init
error:
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[ERROR SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "", err: exit status 1
[ERROR SystemVerification]: unsupported graph driver: vfs
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Then I ran the command mkdir /proc/sys/net/bridge and got the error mkdir: cannot create directory ‘/proc/sys/net/bridge’: No such file or directory.
How to fix it?
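mkdir cannot create entries under /proc; the /proc/sys/net/bridge files only appear once the br_netfilter kernel module is loaded. A minimal sketch of loading it now and persisting the settings across reboots (the k8s.conf file names are just placeholders):
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/k8s.conf
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system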
I am trying to install Kubernetes with kubeadm on my laptop, which runs Ubuntu 16.04. I have disabled swap, since kubelet does not work with swap on. The command I used is:
swapoff -a
I also commented out the reference to swap in /etc/fstab.
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=1d343a19-bd75-47a6-899d-7c8bc93e28ff / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
#UUID=d0200036-b211-4e6e-a194-ac2e51dfb27d none swap sw 0 0
I confirmed swap is turned off by running the following:
free -m
total used free shared buff/cache available
Mem: 15936 2108 9433 954 4394 12465
Swap: 0 0 0
When I start kubeadm, I get the following error:
kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I also tried restarting my laptop, but I get the same error. What could the reason be?
Below was the root cause:
detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
You need to update the Docker cgroup driver.
Follow the fix below:
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart Docker
systemctl daemon-reload
systemctl restart docker
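After the restart, you can confirm Docker picked up the new driver before re-running kubeadm init:
docker info | grep -i "cgroup driver"
# expected: Cgroup Driver: systemd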
You could try kubeadm reset, then kubeadm init --ignore-preflight-errors Swap.
First, try with sudo:
sudo swapoff -a
then check whether anything is still swapped:
cat /proc/swaps
and
free -h