Steps followed:
Updated bindIp in the /etc/mongod.conf file.
Added the following rules to iptables:
root#:/var/log/mongodb# iptables -L -n -v
=============================================================
Chain INPUT (policy DROP 108K packets, 5349K bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- * * 194.195.119.119 0.0.0.0/0 tcp dpt:27017 state NEW,ESTABLISHED
Chain OUTPUT (policy ACCEPT 1199 packets, 75946 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- * * 0.0.0.0/0 194.195.119.119 tcp spt:27017 state ESTABLISHED
Allowed the ports in ufw:
root#:/var/log/mongodb# ufw status
===============================================
To Action From
-- ------ ----
22/tcp ALLOW Anywhere
27017 ALLOW Anywhere
22/tcp (v6) ALLOW Anywhere (v6)
27017 (v6) ALLOW Anywhere (v6)
bindIp works fine with 0.0.0.0, but the mongod service fails to start when the actual IP addresses are added, e.g.:
bindIp: 194.195.119.119
bindIp: 194.195.119.119,103.208.71.9
I get the error below on start:
root#:~# systemctl status mongod
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2022-07-11 09:02:37 UTC; 1s ago
Docs: https://docs.mongodb.org/manual
Process: 161544 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=48)
Main PID: 161544 (code=exited, status=48)
CPU: 365ms
Jul 11 09:02:37 .ip.linodeusercontent.com systemd[1]: Started MongoDB Database Server.
Jul 11 09:02:37 .ip.linodeusercontent.com systemd[1]: mongod.service: Main process exited, code=exited, status=48/n/a
Jul 11 09:02:37 .ip.linodeusercontent.com systemd[1]: mongod.service: Failed with result 'exit-code'.
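A side note on diagnosing this: exit status 48 from mongod typically means it failed to set up its network listeners, and bindIp must contain addresses assigned to the server's own interfaces (it is not an allowlist of remote clients). A minimal check, assuming the default mongod.log in the /var/log/mongodb directory shown above:
# every address listed in bindIp must appear here (or bindIp must be 0.0.0.0)
ip -4 addr show | grep inet
# the exact bind failure is usually spelled out at the end of the mongod log
tail -n 20 /var/log/mongodb/mongod.log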
Related
I am setting up PostgreSQL load balancing using HAProxy and I get the error messages below:
Jun 30 07:57:43 vm0 systemd[1]: Starting HAProxy Load Balancer...
Jun 30 07:57:43 vm0 haproxy[15084]: [ALERT] 180/075743 (15084) : Starting proxy ReadWrite: cannot bind socket [0.0.0.0:8081]
Jun 30 07:57:43 vm0 haproxy[15084]: [ALERT] 180/075743 (15084) : Starting proxy ReadOnly: cannot bind socket [0.0.0.0:8082]
Jun 30 07:57:43 vm0 systemd[1]: haproxy.service: Main process exited, code=exited, status=1/FAILURE
Jun 30 07:57:43 vm0 systemd[1]: haproxy.service: Failed with result 'exit-code'.
Jun 30 07:57:43 vm0 systemd[1]: Failed to start HAProxy Load Balancer.
Below is my haproxy.cfg file. I kept checking all the possibilities but I couldn't find the reason for the error. I actually checked whether the ports are already in use, but no other process is using ports 8081 or 8082.
-- haproxy.cfg
listen ReadWrite
bind *:8081
option httpchk
http-check expect status 200
default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
server pg1 pg1:5432 maxconn 100 check port 23267
listen ReadOnly
bind *:8082
option httpchk
http-check expect status 206
default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
server pg2 pg1:5432 maxconn 100 check port 23267
server pg3 pg2:5432 maxconn 100 check port 23267
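A sketch of two checks that could narrow this down, assuming the frontend ports from the config above (8081/8082) and assuming the host might be running SELinux:
# confirm nothing else is already bound to the frontend ports
ss -lntp '( sport = :8081 or sport = :8082 )'
# on SELinux-enforcing hosts, HAProxy can be denied non-standard bind ports;
# this boolean is the commonly suggested relaxation
getenforce
sudo setsebool -P haproxy_connect_any 1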
I have configured 1 master and 2 workers.
After successfully installing Kubernetes, worker1 joins the cluster fine, but I cannot join worker2 to the cluster because the kubelet service is not running. It seems like the kubelet isn't running or healthy.
sudo kubectl get nodes:
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,master 23m v1.22.2
node1 NotReady 4m13s v1.22.2
I want to know why the kubelet service is not running.
Here are the kubelet logs:
Dec 04 20:21:26 node2 kubelet[25435]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
Dec 04 20:21:26 node2 kubelet[25435]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.659131 25435 server.go:440] "Kubelet version" kubeletVersion="v1.22.2"
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.659587 25435 server.go:868] "Client rotation is on, will bootstrap in background"
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.678863 25435 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.684321 25435 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.728096 25435 server.go:687] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.728320 25435 container_manager_linux.go:280] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.728388 25435 container_manager_linux.go:285] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName:
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.729329 25435 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="c
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.729345 25435 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabled=true
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.729367 25435 state_mem.go:36] "Initialized new in-memory state store"
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.729408 25435 kubelet.go:314] "Using dockershim is deprecated, please consider using a full-fledged CRI implementation"
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.729430 25435 client.go:78] "Connecting to docker on the dockerEndpoint" endpoint="unix:///var/run/docker.sock"
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.729441 25435 client.go:97] "Start docker client with request timeout" timeout="2m0s"
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.744324 25435 docker_service.go:566] "Hairpin mode is set but kubenet is not enabled, falling back to HairpinVeth" hairpinMode=promiscu
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.744354 25435 docker_service.go:242] "Hairpin mode is set" hairpinMode=hairpin-veth
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.744554 25435 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.750011 25435 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.750260 25435 docker_service.go:257] "Docker cri networking managed by the network plugin" networkPluginName="cni"
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.753050 25435 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Dec 04 20:21:26 node2 kubelet[25435]: I1204 20:21:26.764080 25435 docker_service.go:264] "Docker Info" dockerInfo=&{ID:4UUR:AFJU:SXYE:5IRP:6G6B:SFDY:H3AA:D5ZB:JSDO:GXVQ:UYNG:POJY Containe
Dec 04 20:21:26 node2 kubelet[25435]: E1204 20:21:26.765777 25435 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" i
Dec 04 20:21:26 node2 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 04 20:21:26 node2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
kubeadm join logs:
I1204 20:27:56.222794 29796 join.go:405] [preflight] found NodeName empty; using OS hostname as NodeName
I1204 20:27:56.223032 29796 initconfiguration.go:116] detected and using CRI socket: /var/run/dockershim.sock
[preflight] Running pre-flight checks
I1204 20:27:56.223834 29796 preflight.go:92] [preflight] Running general checks
I1204 20:27:56.225983 29796 checks.go:245] validating the existence and emptiness of directory /etc/kubernetes/manifests
I1204 20:27:56.226133 29796 checks.go:282] validating the existence of file /etc/kubernetes/kubelet.conf
I1204 20:27:56.226271 29796 checks.go:282] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I1204 20:27:56.226408 29796 checks.go:106] validating the container runtime
I1204 20:27:56.282374 29796 checks.go:132] validating if the "docker" service is enabled and active
I1204 20:27:56.300100 29796 checks.go:331] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1204 20:27:56.300279 29796 checks.go:331] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1204 20:27:56.300580 29796 checks.go:649] validating whether swap is enabled or not
I1204 20:27:56.300738 29796 checks.go:372] validating the presence of executable conntrack
I1204 20:27:56.301009 29796 checks.go:372] validating the presence of executable ip
I1204 20:27:56.301613 29796 checks.go:372] validating the presence of executable iptables
I1204 20:27:56.301801 29796 checks.go:372] validating the presence of executable mount
I1204 20:27:56.302057 29796 checks.go:372] validating the presence of executable nsenter
I1204 20:27:56.302384 29796 checks.go:372] validating the presence of executable ebtables
I1204 20:27:56.302473 29796 checks.go:372] validating the presence of executable ethtool
I1204 20:27:56.302569 29796 checks.go:372] validating the presence of executable socat
I1204 20:27:56.302610 29796 checks.go:372] validating the presence of executable tc
I1204 20:27:56.303072 29796 checks.go:372] validating the presence of executable touch
I1204 20:27:56.303472 29796 checks.go:520] running all checks
I1204 20:27:56.372402 29796 checks.go:403] checking whether the given node name is valid and reachable using net.LookupHost
I1204 20:27:56.373211 29796 checks.go:618] validating kubelet version
I1204 20:27:56.467792 29796 checks.go:132] validating if the "kubelet" service is enabled and active
I1204 20:27:56.485715 29796 checks.go:205] validating availability of port 10250
I1204 20:27:56.486624 29796 checks.go:282] validating the existence of file /etc/kubernetes/pki/ca.crt
I1204 20:27:56.487016 29796 checks.go:432] validating if the connectivity type is via proxy or direct
I1204 20:27:56.487841 29796 join.go:475] [preflight] Discovering cluster-info
I1204 20:27:56.488260 29796 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "192.168.1.53:6443"
I1204 20:27:56.520182 29796 token.go:118] [discovery] Requesting info from "192.168.1.53:6443" again to validate TLS against the pinned public key
I1204 20:27:56.530589 29796 token.go:135] [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.53:6443"
I1204 20:27:56.530702 29796 discovery.go:52] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I1204 20:27:56.530924 29796 join.go:489] [preflight] Fetching init configuration
I1204 20:27:56.531171 29796 join.go:534] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I1204 20:27:56.549808 29796 interface.go:431] Looking for default routes with IPv4 addresses
I1204 20:27:56.549913 29796 interface.go:436] Default route transits interface "enp0s3"
I1204 20:27:56.550259 29796 interface.go:208] Interface enp0s3 is up
I1204 20:27:56.550564 29796 interface.go:256] Interface "enp0s3" has 2 addresses :[192.168.1.50/24 fe80::a00:27ff:fe7e:db8b/64].
I1204 20:27:56.550644 29796 interface.go:223] Checking addr 192.168.1.50/24.
I1204 20:27:56.550887 29796 interface.go:230] IP found 192.168.1.50
I1204 20:27:56.550955 29796 interface.go:262] Found valid IPv4 address 192.168.1.50 for interface "enp0s3".
I1204 20:27:56.551237 29796 interface.go:442] Found active IP 192.168.1.50
I1204 20:27:56.563573 29796 preflight.go:103] [preflight] Running configuration dependant checks
I1204 20:27:56.563872 29796 controlplaneprepare.go:219] [download-certs] Skipping certs download
I1204 20:27:56.565399 29796 kubelet.go:112] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I1204 20:27:56.569613 29796 kubelet.go:120] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I1204 20:27:56.572216 29796 kubelet.go:141] [kubelet-start] Checking for an existing Node in the cluster with name "node2" and status "Ready"
I1204 20:27:56.576685 29796 kubelet.go:155] [kubelet-start] Stopping the kubelet
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I1204 20:28:01.956734 29796 kubelet.go:190] [kubelet-start] preserving the crisocket information for the node
I1204 20:28:01.956911 29796 patchnode.go:31] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2" as an annotation
I1204 20:28:01.957066 29796 cert_rotation.go:137] Starting client certificate rotation controller
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
First, check whether swap is disabled on your node, as you MUST disable swap in order for the kubelet to work properly.
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
Also check whether the Kubernetes and Docker cgroup drivers are set to the same value.
From the Kubernetes documentation:
Both the container runtime and the kubelet have a property called "cgroup driver", which is important for the management of cgroups on Linux machines.
Warning:
Matching the container runtime and kubelet cgroup drivers is required or otherwise the kubelet process will fail.
The Container runtimes page explains that the systemd driver is recommended for kubeadm based setups instead of the cgroupfs driver, because kubeadm manages the kubelet as a systemd service.
For Docker:
docker info | grep -i cgroup
You can add this to /etc/docker/daemon.json to set the Docker cgroup driver to systemd:
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
Restart your Docker service and the kubelet after making any changes:
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet
You can try to execute kubeadm join after performing the above steps.
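A small sketch of how both sides could be verified after the restarts (the kubelet config path is the usual kubeadm default, so treat it as an assumption):
# Docker side: should print "systemd" after the daemon.json change
docker info --format '{{.CgroupDriver}}'
# kubelet side: kubeadm writes the driver into its config file
grep cgroupDriver /var/lib/kubelet/config.yaml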
I have set up a small cluster with kubeadm; it was working fine and port 6443 was up. But after rebooting my system, the cluster does not come up anymore.
What should I do?
Here is some information:
systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Sun 2020-04-05 14:16:44 UTC; 6s ago
Docs: https://kubernetes.io/docs/home/
Main PID: 31079 (kubelet)
Tasks: 20 (limit: 4915)
CGroup: /system.slice/kubelet.service
└─31079 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet
k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://infra01.mydomainname.com:6443/api/v1/nodes?fieldSelector=metadata.name%3Dtest-infra01&limit=500&resourceVersion=0: dial tcp 116.66.187.210:6443: connect: connection refused
kubectl get nodes
The connection to the server infra01.mydomainname.com:6443 was refused - did you specify the right host or port?
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:12:12Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
journalctl -xeu kubelet
6 18167 reflector.go:153] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://infra01.mydomainname.com
1 18167 reflector.go:153] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://huawei-infra01.s
4 18167 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated. messaging see aws.Config.CredentialsChainVerboseErrors
6 18167 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.7, apiVersion: 1.40.0
6 18167 server.go:1113] Started kubelet
1 18167 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageF
8 18167 server.go:144] Starting to listen on 0.0.0.0:10250
4 18167 server.go:778] Starting healthz server failed: listen tcp 127.0.0.1:10248: bind: address already in use
5 18167 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
4 18167 volume_manager.go:265] Starting Kubelet Volume Manager
1 18167 desired_state_of_world_populator.go:138] Desired state populator starts to run
3 18167 server.go:384] Adding debug handlers to kubelet server.
4 18167 server.go:158] listen tcp 0.0.0.0:10250: bind: address already in use
Docker
docker run hello-world
Hello from Docker!
ubuntu
lsb_release -a
Ubuntu 18.04.2 LTS
swap && kubeconfig
swap is turned off and kubeconfig was correctly exported
Note
Things can be fixed by resetting the cluster, but this should be the final option.
The kubelet is not starting because the port is already in use, and hence it is not able to create the pod for the API server.
Use the following command to find out which process is holding port 10250:
[root@master admin]# ss -lntp | grep 10250
LISTEN 0 128 :::10250 :::* users:(("kubelet",pid=23373,fd=20))
It will give you the PID and name of that process. If an unwanted process is holding the port, you can kill it so that the port becomes available to the kubelet.
After killing the process, run the above command again; it should return no output.
Just to be on the safe side, run kubeadm reset and then kubeadm init, and it should go through.
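Roughly, using the PID from the ss output above (23373 in that example; yours will differ):
# stop the stale process holding the port, then confirm it is free
kill 23373
ss -lntp | grep 10250   # should now print nothing
# last resort, as noted above
kubeadm reset
kubeadm init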
Edit:
Using snap stop kubelet did the trick of stopping kubelet on the node.
My VPS is running CentOS 7.2. I opened a port with firewall-cmd --zone=public --add-port=8006/tcp --permanent and have already run firewall-cmd --reload, but when I check the port with nmap (nmap -p 8006 ip-addressxxx) it still shows as closed. Here is some information that may help:
[root#localhost ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2017-04-07 02:06:50 EDT; 3 days ago
Docs: man:firewalld(1)
Main PID: 663 (firewalld)
CGroup: /system.slice/firewalld.service
└─663 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
Apr 07 02:06:50 localhost.localdomain systemd[1]: Starting firewalld - dynamic firewall daemon...
Apr 07 02:06:50 localhost.localdomain systemd[1]: Started firewalld - dynamic firewall daemon.
Apr 10 02:03:42 localhost.localdomain firewalld[663]: ERROR: ALREADY_ENABLED: 80:tcp
Apr 10 02:03:49 localhost.localdomain firewalld[663]: ERROR: ALREADY_ENABLED: 8006:tcp
.
.
.
[root#localhost ~]# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: ens3
sources:
services: dhcpv6-client ssh
ports: 8009/tcp 80/tcp 8080/tcp 8006/tcp
protocols:
masquerade: no
forward-ports:
sourceports:
icmp-blocks:
rich rules:
.
.
.
[root#localhost ~]# firewall-cmd --list-ports
8009/tcp 80/tcp 8080/tcp 8006/tcp
.
.
.
[root#localhost ~]# netstat -plunt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address
State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 992/sshd
tcp6 0 0 :::8009 :::* LISTEN 1027/java
tcp6 0 0 :::3306 :::* LISTEN 1383/mysqld
tcp6 0 0 :::80 :::* LISTEN 1027/java
tcp6 0 0 :::22 :::* LISTEN 992/sshd
tcp6 0 0 127.0.0.1:8006 :::* LISTEN 1027/java
Revisited my answer
The process you have listening on port 8006 is only listening on the loopback interface, 127.0.0.1; it should be listening on 0.0.0.0. See the sshd process in your process list: it listens on 0.0.0.0:22 and works fine.
Use something like netcat to test. This will open port 8006 on the 0.0.0.0 interface, which is open to the world because of your firewall rules.
On your VPS, try:
nc -l 8006
and then scan with nmap again and you will see the port is open, provided your firewall rules are in place.
You want to see this in the process list
tcp6 0 0 0.0.0.0:8006 :::* LISTEN 1027/java
and not
tcp6 0 0 127.0.0.1:8006 :::* LISTEN 1027/java
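A quick two-terminal version of that test, reusing the placeholder address from the question:
# terminal 1, on the VPS: listen on all interfaces on port 8006
nc -l 8006
# terminal 2, from outside: the port should now report as open
nmap -p 8006 ip-addressxxx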
I'm trying to start a kubelet in a Fedora 24 LXC container, but I'm getting an error which appears to be related to libvirt/iptables.
Docker (installed using dnf/yum):
[root#node2 ~]# docker version
Client:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built:
OS/Arch: linux/amd64
Server:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built:
OS/Arch: linux/amd64
Kubernetes (downloaded v1.3.3 and extracted tar):
[root@node2 bin]# ./kubectl version
Client Version: version.Info{
Major:"1", Minor:"3", GitVersion:"v1.3.3",
GitCommit:"c6411395e09da356c608896d3d9725acab821418",
GitTreeState:"clean", BuildDate:"2016-07-22T20:29:38Z",
GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Startup, params, and error:
[root#node2 bin]# ./kubelet --address=0.0.0.0 --api-servers=http://master1:8080 --container-runtime=docker --hostname-override=node1 --port=10250
I0802 17:43:04.264454 2348 docker.go:327] Start docker client with request timeout=2m0s
W0802 17:43:04.271850 2348 server.go:487] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead.
W0802 17:43:04.271906 2348 server.go:448] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults.
I0802 17:43:04.272241 2348 manager.go:138] cAdvisor running in container: "/"
W0802 17:43:04.275956 2348 manager.go:146] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I0802 17:43:04.280283 2348 fs.go:139] Filesystem partitions: map[/dev/mapper/fedora_kg--fedora-root:{mountpoint:/ major:253 minor:0 fsType:ext4 blockSize:0}]
I0802 17:43:04.284868 2348 manager.go:192] Machine: {NumCores:4 CpuFrequency:3192789
MemoryCapacity:4125679616 MachineID:1e80444278b7442385a762b9545cec7b
SystemUUID:5EC24D56-9CA6-B237-EE21-E0899C3C16AB BootID:44212209-ff1d-4340-8433-11a93274d927
Filesystems:[{Device:/dev/mapper/fedora_kg--fedora-root
Capacity:52710469632 Type:vfs Inodes:3276800}]
DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:85899345920 Scheduler:cfq}
253:0:{Name:dm-0 Major:253 Minor:0 Size:53687091200 Scheduler:none}
253:1:{Name:dm-1 Major:253 Minor:1 Size:4160749568 Scheduler:none}
253:2:{Name:dm-2 Major:253 Minor:2 Size:27518828544 Scheduler:none}
253:3:{Name:dm-3 Major:253 Minor:3 Size:107374182400 Scheduler:none}]
NetworkDevices:[
{Name:eth0 MacAddress:00:16:3e:b9:ce:f3 Speed:10000 Mtu:1500}
{Name:flannel.1 MacAddress:fa:ed:34:75:d6:1d Speed:0 Mtu:1450}]
Topology:[
{Id:0 Memory:4125679616
Cores:[{Id:0 Threads:[0]
Caches:[]} {Id:1 Threads:[1] Caches:[]}]
Caches:[{Size:8388608 Type:Unified Level:3}]}
{Id:1 Memory:0 Cores:[{Id:0 Threads:[2]
Caches:[]} {Id:1 Threads:[3] Caches:[]}]
Caches:[{Size:8388608 Type:Unified Level:3}]}]
CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I0802 17:43:04.285649 2348 manager.go:198]
Version: {KernelVersion:4.6.4-301.fc24.x86_64 ContainerOsVersion:Fedora 24 (Twenty Four)
DockerVersion:1.12.0 CadvisorVersion: CadvisorRevision:}
I0802 17:43:04.286366 2348 server.go:768] Watching apiserver
W0802 17:43:04.286477 2348 kubelet.go:561] Hairpin mode set to "promiscuous-bridge" but configureCBR0 is false, falling back to "hairpin-veth"
I0802 17:43:04.286575 2348 kubelet.go:384] Hairpin mode set to "hairpin-veth"
W0802 17:43:04.303188 2348 plugins.go:170] can't set sysctl net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory
I0802 17:43:04.307700 2348 docker_manager.go:235] Setting dockerRoot to /var/lib/docker
I0802 17:43:04.310175 2348 server.go:730] Started kubelet v1.3.3
E0802 17:43:04.311636 2348 kubelet.go:933] Image garbage collection failed: unable to find data for container /
E0802 17:43:04.312800 2348 kubelet.go:994] Failed to start ContainerManager [open /proc/sys/kernel/panic: read-only file system, open /proc/sys/kernel/panic_on_oops: read-only file system, open /proc/sys/vm/overcommit_memory: read-only file system]
I0802 17:43:04.312962 2348 status_manager.go:123] Starting to sync pod status with apiserver
I0802 17:43:04.313080 2348 kubelet.go:2468] Starting kubelet main sync loop.
I0802 17:43:04.313187 2348 kubelet.go:2477] skipping pod synchronization - [Failed to start ContainerManager [open /proc/sys/kernel/panic: read-only file system, open /proc/sys/kernel/panic_on_oops: read-only file system, open /proc/sys/vm/overcommit_memory: read-only file system] network state unknown container runtime is down]
I0802 17:43:04.313525 2348 server.go:117] Starting to listen on 0.0.0.0:10250
I0802 17:43:04.315021 2348 volume_manager.go:216] Starting Kubelet Volume Manager
I0802 17:43:04.325998 2348 factory.go:228] Registering Docker factory
E0802 17:43:04.326049 2348 manager.go:240] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I0802 17:43:04.326073 2348 factory.go:54] Registering systemd factory
I0802 17:43:04.326545 2348 factory.go:86] Registering Raw factory
I0802 17:43:04.326993 2348 manager.go:1072] Started watching for new ooms in manager
I0802 17:43:04.331164 2348 oomparser.go:185] oomparser using systemd
I0802 17:43:04.331904 2348 manager.go:281] Starting recovery of all containers
I0802 17:43:04.368958 2348 manager.go:286] Recovery completed
I0802 17:43:04.419959 2348 kubelet.go:1185] Node node1 was previously registered
I0802 17:43:09.313871 2348 kubelet.go:2477] skipping pod synchronization - [Failed to start ContainerManager [open /proc/sys/kernel/panic: read-only file system, open /proc/sys/kernel/panic_on_oops: read-only file system, open /proc/sys/vm/overcommit_memory: read-only file system]]
Flannel (installed using dnf/yum):
[root@node2 bin]# systemctl status flanneld
● flanneld.service - Flanneld overlay address etcd agent
Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2016-08-01 22:14:06 UTC; 21h ago
Process: 1203 ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker (code=exited, status=0/SUCCESS)
Main PID: 1195 (flanneld)
Tasks: 11 (limit: 512)
Memory: 2.7M
CPU: 4.012s
CGroup: /system.slice/flanneld.service
└─1195 /usr/bin/flanneld -etcd-endpoints=http://master1:2379 -etcd-prefix=/flannel/network
LXC settings for the container:
[root#kg-fedora node2]# cat config
# Template used to create this container: /usr/share/lxc/templates/lxc-fedora
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)
# Uncomment the following line to support nesting containers:
#lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)
lxc.network.type = veth
lxc.network.link = virbr0
lxc.network.hwaddr = 00:16:3e:b9:ce:f3
lxc.network.flags = up
lxc.network.ipv4 = 192.168.122.23/24
lxc.network.ipv4.gateway = 192.168.80.2
# Include common configuration
lxc.include = /usr/share/lxc/config/fedora.common.conf
lxc.arch = x86_64
# When using LXC with apparmor, uncomment the next line to run unconfined:
#lxc.aa_profile = unconfined
# example simple networking setup, uncomment to enable
#lxc.network.type = veth
#lxc.network.flags = up
#lxc.network.link = lxcbr0
#lxc.network.name = eth0
# Additional example for veth network type
# static MAC address,
#lxc.network.hwaddr = 00:16:3e:77:52:20
# persistent veth device name on host side
# Note: This may potentially collide with other containers of same name!
#lxc.network.veth.pair = v-fedora-template-e0
lxc.cgroup.devices.allow = a
lxc.cap.drop =
lxc.rootfs = /var/lib/lxc/node2/rootfs
lxc.rootfs.backend = dir
lxc.utsname = node2
libvirt-1.3.3.2-1.fc24.x86_64:
[root#kg-fedora node2]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2016-07-29 16:33:09 EDT; 3 days ago
Docs: man:libvirtd(8)
http://libvirt.org
Main PID: 1191 (libvirtd)
Tasks: 18 (limit: 512)
Memory: 7.3M
CPU: 9.108s
CGroup: /system.slice/libvirtd.service
├─1191 /usr/sbin/libvirtd
├─1597 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
└─1599 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
Flannel/Docker config:
[root#node2 ~]# systemctl stop docker
[root#node2 ~]# ip link delete docker0
[root#node2 ~]# systemctl start docker
[root#node2 ~]# ip -4 a|grep inet
inet 127.0.0.1/8 scope host lo
inet 10.100.72.0/16 scope global flannel.1
inet 172.17.0.1/16 scope global docker0
inet 192.168.122.23/24 brd 192.168.122.255 scope global dynamic eth0
Notice that the docker0 interface is not using the same IP range as the flannel.1 interface.
Any pointers would be much appreciated!
For anyone who may be looking for the solution to this issue:
Since you are using LXC, you need to make sure that the filesystem in question is mounted read-write. To do this, specify the following option in the LXC config file:
raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw"
or just
lxc.mount.auto: proc:rw sys:rw
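For plain LXC (the setup shown in the question), a rough way to apply this, assuming the container config sits alongside the rootfs path shown earlier:
# append the mount option to the container config and restart the container
echo 'lxc.mount.auto = proc:rw sys:rw' >> /var/lib/lxc/node2/config
lxc-stop -n node2
lxc-start -n node2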
Here are the references:
https://medium.com/@kvaps/run-kubernetes-in-lxc-container-f04aa94b6c9c
https://github.com/corneliusweig/kubernetes-lxd