Job for kube-apiserver.service failed because the control process exited with error code - kubernetes

To begin with, I want to point out that I am fairly new to Linux systems and completely new to Kubernetes, so my question may be trivial.
As stated in the title, I have a problem setting up a Kubernetes cluster. I am working on Atomic Host version 7.1707 (2017-07-31 16:12:06).
I am following this guide:
http://www.projectatomic.io/docs/gettingstarted/
In addition to that, I followed this:
http://www.projectatomic.io/docs/kubernetes/
To be precise, I ran this command:
rpm-ostree install kubernetes-master --reboot
Everything was going fine until this point:
systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler
The problem is with:
systemctl start etcd kube-apiserver
as it gives me back this response:
Job for kube-apiserver.service failed because the control process
exited with error code. See "systemctl status kube-apiserver.service"
and "journalctl -xe" for details.
systemctl status kube-apiserver.service
gives me back:
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Fri 2017-08-25 14:29:56 CEST; 2s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 17876 ExecStart=/usr/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS (code=exited, status=255)
Main PID: 17876 (code=exited, status=255)
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service: main process exited, code=exited, status=255/n/a
Aug 25 14:29:56 master systemd[1]: Failed to start Kubernetes API Server.
Aug 25 14:29:56 master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service failed.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service holdoff time over, scheduling restart.
Aug 25 14:29:56 master systemd[1]: start request repeated too quickly for kube-apiserver.service
Aug 25 14:29:56 master systemd[1]: Failed to start Kubernetes API Server.
Aug 25 14:29:56 master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service failed.
I have no clue where to start, and I will be more than thankful for any advice.
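As the error suggests, the unit-specific journal is the place to look; a command like the following prints the apiserver's own log lines, which normally contain the real reason behind the status=255 exit (the -n 50 line count is just an illustrative choice):
journalctl -u kube-apiserver.service --no-pager -n 50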

It turned out to be a typo in /etc/kubernetes/config. I misunderstood the "# Comma separated list of nodes in the etcd cluster" comment (see the sketch below). I don't know how to close the thread or anything.
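For anyone who hits the same misunderstanding: that comment normally sits above the KUBE_ETCD_SERVERS variable (the same variable passed to kube-apiserver in the unit file above), and its value is the --etcd-servers flag with the etcd client URL(s). The host and port below are only an illustrative guess for a single-node setup:
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"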

Related

k3s.service won't start - Air Gapped install of k3s in Rocky 9 VM

I'm trying to install k3s in a disconnected environment on a VM running Rocky 9.
The k3s.service fails to start. It mentions permission denied.
As part of troubleshooting I did the following:
Disabled SELinux
Disabled Swap memory
Install media:
Tar file
https://github.com/k3s-io/k3s/releases/download/v1.24.3%2Bk3s1/k3s-airgap-images-amd64.tar ->
/var/lib/rancher/k3s/agent/images/k3s-airgap-images-amd64.tar
K3S Binary
https://github.com/k3s-io/k3s/releases/download/v1.24.3%2Bk3s1/k3s ->
/usr/local/bin/k3s
Install Script
https://get.k3s.io/ ->
/usr/local/install/k3s/install.sh
Install CMD using install script:
sudo INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh
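In condensed shell form, the placement steps above are roughly the following (same paths as listed; the two chmod calls are assumptions about what the manually copied files still need):
sudo mkdir -p /var/lib/rancher/k3s/agent/images /usr/local/install/k3s
sudo cp k3s-airgap-images-amd64.tar /var/lib/rancher/k3s/agent/images/
sudo cp k3s /usr/local/bin/k3s && sudo chmod 755 /usr/local/bin/k3s
sudo cp install.sh /usr/local/install/k3s/install.sh && sudo chmod +x /usr/local/install/k3s/install.sh
cd /usr/local/install/k3s && sudo INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh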
I noticed the following in: /etc/systemd/system/
-rw-r--r-- 1 root root 836 Aug 4 15:14 k3s.service
-rw------- 1 root root 0 Aug 4 15:14 k3s.service.env
The install script is meant to set permissions to 755 on the service. That doesn't happen. Doing chmod 755 and rebooting the VM makes no difference to k3s.service starting.
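A few quick checks that tend to narrow down a fatal "permission denied" at this stage; the paths match the ones listed above, and the expectation that the binary should be mode 0755 is an assumption rather than something from the install docs:
getenforce                                  # confirm SELinux really is Permissive/Disabled after the change
ls -l /usr/local/bin/k3s                    # the manually copied binary should be executable, e.g. -rwxr-xr-x
ls -ld /var/lib/rancher/k3s /var/lib/rancher/k3s/agent/images   # k3s reads the airgap tar from here as root
sudo /usr/local/bin/k3s server              # running in the foreground usually shows which path is being denied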
Errors:
Job for k3s.service failed because the control process exited with error code.
See "systemctl status k3s.service" and "journalctl -xeu k3s.service" for details.
[admin@demolab01 k3s]$ systemctl status k3s.service
k3s.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Thu 2022-08-04 15:30:22 UTC; 3s ago
Docs: https://k3s.io
Process: 4247 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
Process: 4249 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 4250 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Process: 4251 ExecStart=/usr/local/bin/k3s server (code=exited, status=1/FAILURE)
Main PID: 4251 (code=exited, status=1/FAILURE)
CPU: 18ms
[admin@demolab01 k3s]$ journalctl -xeu k3s.service
Subject: A start job for unit k3s.service has failed
Defined-By: systemd
Support: https://access.redhat.com/support
A start job for unit k3s.service has finished with a failure.
The job identifier is 38821 and the job result is failed.
Aug 04 15:31:24 demolab01.****<fqdn> systemd[1]: k3s.service: Scheduled restart job, restart counter is at 267.
Subject: Automatic restarting of a unit has been scheduled
Defined-By: systemd
Support: https://access.redhat.com/support
Automatic restarting of the unit k3s.service has been scheduled, as the result for
the configured Restart= setting for the unit.
Aug 04 15:31:24 demolab01.****<fqdn> systemd[1]: Stopped Lightweight Kubernetes.
Subject: A stop job for unit k3s.service has finished
Defined-By: systemd
Support: https://access.redhat.com/support
A stop job for unit k3s.service has finished.
The job identifier is 38959 and the job result is done.
Aug 04 15:31:24 demolab01.****<fqdn> systemd[1]: Starting Lightweight Kubernetes...
Subject: A start job for unit k3s.service has begun execution
Defined-By: systemd
Support: https://access.redhat.com/support
A start job for unit k3s.service has begun execution.
The job identifier is 38959.
Aug 04 15:31:24 demolab01.****<fqdn> sh[4359]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Aug 04 15:31:24 demolab01.****<fqdn> sh[4360]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Aug 04 15:31:25 demolab01.****<fqdn> k3s[4363]: time="2022-08-04T15:31:25Z" level=fatal msg="permission denied"
Aug 04 15:31:25 demolab01.****<fqdn> systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Subject: Unit process exited
Defined-By: systemd
Support: https://access.redhat.com/support
An ExecStart= process belonging to unit k3s.service has exited.
The process' exit code is 'exited' and its exit status is 1.
Aug 04 15:31:25 demolab01.****<fqdn> systemd[1]: k3s.service: Failed with result 'exit-code'.
Subject: Unit failed
Defined-By: systemd
Support: https://access.redhat.com/support
The unit k3s.service has entered the 'failed' state with result 'exit-code'.
Aug 04 15:31:25 demolab01.****<fqdn> systemd[1]: Failed to start Lightweight Kubernetes.
Subject: A start job for unit k3s.service has failed
Defined-By: systemd
Support: https://access.redhat.com/support
A start job for unit k3s.service has finished with a failure.
Any ideas welcome. I am no Linux expert :-(

MongoDB doesn't start, exit code 203

I installed MongoDB on the Raspberry Pi Desktop on a VM, then I started it with the following command:
sudo service mongod start
Checking the enabled unit files shows the service is enabled:
systemctl list-unit-files --state enabled
mongodb.service enabled
Then when I check the status using
sudo service mongod status
I get the following error:
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2020-06-08 12:33:17 CEST; 3s ago
Docs: https://docs.mongodb.org/manual
Process: 22168 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=203/EXEC)
Main PID: 22168 (code=exited, status=203/EXEC)
Jun 08 12:33:17 raspberry systemd[1]: Started MongoDB Database Server.
Jun 08 12:33:17 raspberry systemd[22168]: mongod.service: Failed to execute command: Exec format error
Jun 08 12:33:17 raspberry systemd[22168]: mongod.service: Failed at step EXEC spawning /usr/bin/mongod: Exec format error
Jun 08 12:33:17 raspberry systemd[1]: mongod.service: Main process exited, code=exited, status=203/EXEC
Jun 08 12:33:17 raspberry systemd[1]: mongod.service: Failed with result 'exit-code'.
The other issue is that there's no .sock file in my /tmp folder.
PS: I removed MongoDB and reinstalled it trying to fix the issue, but I always get the same problem.
Can anyone help me please?
Thank you in advance.
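status=203/EXEC together with "Exec format error" means the kernel refused to execute /usr/bin/mongod itself, which usually points to a binary built for a different CPU architecture (or a corrupt download). A quick check under that assumption:
file /usr/bin/mongod   # shows which architecture the binary was built for (e.g. x86-64 vs ARM)
uname -m               # shows the architecture of the machine/VM actually running it
If the two don't match, a package built for the correct architecture is needed.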

Fatal error when starting orion context broker

My Orion Context Broker does not start, and when I enter the command
/etc/init.d/contextBroker start I get this message:
[root@context-broker ~]# /etc/init.d/contextBroker start
Starting contextBroker (via systemctl): Job for contextBroker.service failed because the control process exited with error code. See "systemctl status contextBroker.service" and "journalctl -xe" for details.
[FAILED]
The systemctl status contextBroker.service command gives this message:
[root@context-broker ~]# systemctl status contextBroker.service
● contextBroker.service - LSB: run contextBroker
Loaded: loaded (/etc/rc.d/init.d/contextBroker; bad; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2019-05-24 11:38:50 UTC; 1min 11s ago
Docs: man:systemd-sysv-generator(8)
Process: 9782 ExecStart=/etc/rc.d/init.d/contextBroker start (code=exited, status=1/FAILURE)
May 24 11:38:47 context-broker.novalocal systemd[1]: Starting LSB: run contextBroker...
May 24 11:38:48 context-broker.novalocal contextBroker[9782]: contextBroker is stopped
May 24 11:38:48 context-broker.novalocal contextBroker[9782]: Starting...
May 24 11:38:48 context-broker.novalocal su[9788]: (to orion) root on none
May 24 11:38:50 context-broker.novalocal contextBroker[9782]: Starting contextBroker... cat: /var/run/contextBroker/contextBroker.pid...irectory
May 24 11:38:50 context-broker.novalocal systemd[1]: contextBroker.service: control process exited, code=exited status=1
May 24 11:38:50 context-broker.novalocal contextBroker[9782]: pidfile not found[FAILED]
May 24 11:38:50 context-broker.novalocal systemd[1]: Failed to start LSB: run contextBroker.
May 24 11:38:50 context-broker.novalocal systemd[1]: Unit contextBroker.service entered failed state.
May 24 11:38:50 context-broker.novalocal systemd[1]: contextBroker.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
Also, the /tmp/contextBroker.log file looks like this:
time=2019-05-24T11:41:12.971Z | lvl=FATAL | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=rest.cpp[1753]:restStart | msg=Fatal Error (error starting REST interface)
I checked whether MongoDB is running, and it is running correctly.
UPDATE
With some searching I realised I had to kill the PID of the process, and after I did that the service successfully starts according to the messages, but I find it doesn't actually work. When I ask for the status I get the following:
[root@context-broker centos]# /etc/init.d/contextBroker status
● contextBroker.service - LSB: run contextBroker
Loaded: loaded (/etc/rc.d/init.d/contextBroker; bad; vendor preset: disabled)
Active: active (exited) since Sun 2019-05-26 18:34:49 UTC; 4min 56s ago
Docs: man:systemd-sysv-generator(8)
Process: 16295 ExecStop=/etc/rc.d/init.d/contextBroker stop (code=exited, status=0/SUCCESS)
Process: 16319 ExecStart=/etc/rc.d/init.d/contextBroker start (code=exited, status=0/SUCCESS)
May 26 18:34:47 context-broker.novalocal systemd[1]: Starting LSB: run contextBroker...
May 26 18:34:47 context-broker.novalocal contextBroker[16319]: contextBroker is stopped
May 26 18:34:47 context-broker.novalocal contextBroker[16319]: Starting...
May 26 18:34:47 context-broker.novalocal su[16325]: (to orion) root on none
May 26 18:34:49 context-broker.novalocal systemd[1]: Started LSB: run contextBroker.
May 26 18:34:49 context-broker.novalocal contextBroker[16319]: Starting contextBroker... [ OK ]
The log file has the same message as before.
With some searching again, I believe the cause is that the service doesn't have a daemon (??). If that is the case, how do I add one?
Normally, when you get the "error starting REST interface" message, it is because there is already a broker running, which means the port is already taken. Make sure there is no broker already running.
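A quick way to confirm whether another broker instance is still holding the port; 1026 is Orion's default listen port, so adjust it if your setup uses a different one:
ps aux | grep contextBroker         # any leftover broker process?
ss -ltnp | grep 1026                # who is listening on the broker port?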

Error creating a systemd service with a live socket

I'm trying to create a systemd service on CentOS 7.5 to access Livestatus remotely, through systemd-socket-proxyd:
File proxy-to-livestatus.service:
[Unit]
Requires=naemon.service
After=naemon.service
[Service]
ExecStart=/usr/lib/systemd/systemd-socket-proxyd /run/naemon/live
File proxy-to-livestatus.socket:
[Unit]
StopWhenUnneeded=true
[Socket]
ListenStream=6557
Status:
systemctl status proxy-to-livestatus.service
● proxy-to-livestatus.service
Loaded: loaded (/etc/systemd/system/proxy-to-livestatus.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since mié 2018-07-18 09:11:58 CEST; 15s ago
Process: 3203 ExecStart=/usr/lib/systemd/systemd-socket-proxyd /run/naemon/live (code=exited, status=1/FAILURE)
Main PID: 3203 (code=exited, status=1/FAILURE)
jul 18 09:11:58 chuwi systemd[1]: Started proxy-to-livestatus.service.
jul 18 09:11:58 chuwi systemd[1]: Starting proxy-to-livestatus.service...
jul 18 09:11:58 chuwi systemd-socket-proxyd[3203]: Didn't get any sockets passed in.
jul 18 09:11:58 chuwi systemd[1]: proxy-to-livestatus.service: main process exited, code=exited, status=1/FAILURE
jul 18 09:11:58 chuwi systemd[1]: Unit proxy-to-livestatus.service entered failed state.
jul 18 09:11:58 chuwi systemd[1]: proxy-to-livestatus.service failed.
Hi, to resolve this issue, we have to enable the socket with the --now option:
systemctl enable --now proxy-to-livestatus.socket
and then start the proxy-to-livestatus.service:
systemctl start proxy-to-livestatus.service
Regards
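The underlying reason for "Didn't get any sockets passed in" is that systemd-socket-proxyd expects its listening socket to be handed over by the matching .socket unit, so starting the .service on its own gives it nothing to listen on. A minimal sketch of how the pair could look so that enable --now works and the socket comes back after a reboot; the [Install] section is an addition not shown in the question:
# proxy-to-livestatus.socket
[Unit]
StopWhenUnneeded=true
[Socket]
ListenStream=6557
[Install]
WantedBy=sockets.target
# proxy-to-livestatus.service
[Unit]
Requires=naemon.service
After=naemon.service
[Service]
ExecStart=/usr/lib/systemd/systemd-socket-proxyd /run/naemon/live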

Filebeat Service will not start on RHEL 7

I have a problem with my Filebeat installation.
When I try to start it with "service filebeat start", it says "Starting Filebeat". After "service filebeat status" I get 4 PIDs (up to here everything looks "normal"):
[root@(Server) run]# service filebeat status
Filebeat is running with pid: 30650 30657 30658 30659
But after checking the PID, we see that it is not running:
[root@(Server) run]# ps -ef | grep 30650
root 30665 31360 0 16:27 pts/0 00:00:00 grep --color=auto 30650
Trying to start it with systemctl doesn't help:
[root@(Server) run]# systemctl start filebeat
Job for filebeat.service failed because a configured resource limit was exceeded. See "systemctl status filebeat.service" and "journalctl -xe" for details.
Status says:
[root@Server run]# systemctl status filebeat
● filebeat.service - LSB: start and stop filebeat
Loaded: loaded (/etc/rc.d/init.d/filebeat; bad; vendor preset: disabled)
Active: failed (Result: resources) since Tue 2017-09-26 16:30:33 CEST; 1min 41s ago
Docs: man:systemd-sysv-generator(8)
Process: 32118 ExecStart=/etc/rc.d/init.d/filebeat start (code=exited, status=0/SUCCESS)
Sep 26 16:30:33 Server... systemd[1]: Starting LSB: start and stop filebeat...
Sep 26 16:30:33 Server... filebeat[32118]: Starting Filebeat
Sep 26 16:30:33 Server... su[32119]: (to user) root on none
Sep 26 16:30:33 Server... systemd[1]: PID file /var/run/filebeat.pid not readable (yet?) after start.
Sep 26 16:30:33 Server... systemd[1]: Failed to start LSB: start and stop filebeat.
Sep 26 16:30:33 Server... systemd[1]: Unit filebeat.service entered failed state.
Sep 26 16:30:33 Server... systemd[1]: filebeat.service failed.
Does somebody have any idea?
Regards
The problem was ownership ("chown") permissions. I did not install Filebeat as root, and the "data" directory had root user and group ownership. After changing that, it runs and starts automatically after boot.
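As a sketch of that fix: the data directory path and the service user below are assumptions (the default package layout uses /var/lib/filebeat and runs the service as root), so substitute whatever your installation actually uses:
ls -ld /var/lib/filebeat                          # check who currently owns the data directory
chown -R filebeat:filebeat /var/lib/filebeat      # hand it to the user the service runs as
systemctl restart filebeat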
Regards