k3s.service won't start - Air-gapped install of k3s in a Rocky 9 VM

I'm trying to install k3s in a disconnected (air-gapped) environment on a VM running Rocky 9.
The k3s.service fails to start, with a "permission denied" error in its log.
As part of troubleshooting I did the following (verification sketch below):
Disabled SELinux
Disabled swap
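For reference, both states can be confirmed with standard Rocky/RHEL commands:

# SELinux: should report Permissive or Disabled, both now and on reboot
getenforce
grep '^SELINUX=' /etc/selinux/config

# Swap: swapon prints nothing when no swap device is active
swapon --show
free -h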
Install media:
Tar file
https://github.com/k3s-io/k3s/releases/download/v1.24.3%2Bk3s1/k3s-airgap-images-amd64.tar ->
/var/lib/rancher/k3s/agent/images/k3s-airgap-images-amd64.tar
K3S Binary
https://github.com/k3s-io/k3s/releases/download/v1.24.3%2Bk3s1/k3s ->
/usr/local/bin/k3s
Install Script
https://get.k3s.io/ ->
/usr/local/install/k3s/install.sh
Install command, using the install script:
sudo INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh
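A quick sanity check of that layout, using the paths above (the chmod is a standard step when binaries are copied by hand, since the execute bit is usually lost in transfer):

# All three artifacts in place?
ls -l /var/lib/rancher/k3s/agent/images/k3s-airgap-images-amd64.tar
ls -l /usr/local/bin/k3s
ls -l /usr/local/install/k3s/install.sh

# Both the k3s binary and the install script must be executable
sudo chmod +x /usr/local/bin/k3s /usr/local/install/k3s/install.sh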
I noticed the following in: /etc/systemd/system/
-rw-r--r-- 1 root root 836 Aug 4 15:14 k3s.service
-rw------- 1 root root 0 Aug 4 15:14 k3s.service.env
The install script is supposed to set the service file's permissions to 755, but that doesn't happen. Doing chmod 755 on it and rebooting the VM makes no difference: k3s.service still fails to start.
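Given the level=fatal msg="permission denied" line in the journal below, two checks are worth making: the execute bit on the k3s binary itself (a unit file does not need 755; it is the binary that must be executable), and a foreground run, which prints the full error that journalctl condenses to a single line. A minimal sketch:

# Confirm the binary is executable - a plain copy often is not
ls -l /usr/local/bin/k3s

# Run the server in the foreground to see the complete error
sudo /usr/local/bin/k3s server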
Errors:
Job for k3s.service failed because the control process exited with error code.
See "systemctl status k3s.service" and "journalctl -xeu k3s.service" for details.
[admin@demolab01 k3s]$ systemctl status k3s.service
k3s.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Thu 2022-08-04 15:30:22 UTC; 3s ago
Docs: https://k3s.io
Process: 4247 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
Process: 4249 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 4250 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Process: 4251 ExecStart=/usr/local/bin/k3s server (code=exited, status=1/FAILURE)
Main PID: 4251 (code=exited, status=1/FAILURE)
CPU: 18ms
[admin@demolab01 k3s]$ journalctl -xeu k3s.service
Subject: A start job for unit k3s.service has failed
Defined-By: systemd
Support: https://access.redhat.com/support
A start job for unit k3s.service has finished with a failure.
The job identifier is 38821 and the job result is failed.
Aug 04 15:31:24 demolab01.****<fqdn> systemd[1]: k3s.service: Scheduled restart job, restart counter is at 267.
Subject: Automatic restarting of a unit has been scheduled
Defined-By: systemd
Support: https://access.redhat.com/support
Automatic restarting of the unit k3s.service has been scheduled, as the result for
the configured Restart= setting for the unit.
Aug 04 15:31:24 demolab01.****<fqdn> systemd[1]: Stopped Lightweight Kubernetes.
Subject: A stop job for unit k3s.service has finished
Defined-By: systemd
Support: https://access.redhat.com/support
A stop job for unit k3s.service has finished.
The job identifier is 38959 and the job result is done.
Aug 04 15:31:24 demolab01.****<fqdn> systemd[1]: Starting Lightweight Kubernetes...
Subject: A start job for unit k3s.service has begun execution
Defined-By: systemd
Support: https://access.redhat.com/support
A start job for unit k3s.service has begun execution.
The job identifier is 38959.
Aug 04 15:31:24 demolab01.****<fqdn> sh[4359]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Aug 04 15:31:24 demolab01.****<fqdn> sh[4360]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Aug 04 15:31:25 demolab01.****<fqdn> k3s[4363]: time="2022-08-04T15:31:25Z" level=fatal msg="permission denied"
Aug 04 15:31:25 demolab01.****<fqdn> systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Subject: Unit process exited
Defined-By: systemd
Support: https://access.redhat.com/support
An ExecStart= process belonging to unit k3s.service has exited.
The process' exit code is 'exited' and its exit status is 1.
Aug 04 15:31:25 demolab01.****<fqdn> systemd[1]: k3s.service: Failed with result 'exit-code'.
Subject: Unit failed
Defined-By: systemd
Support: https://access.redhat.com/support
The unit k3s.service has entered the 'failed' state with result 'exit-code'.
Aug 04 15:31:25 demolab01.****<fqdn> systemd[1]: Failed to start Lightweight Kubernetes.
Subject: A start job for unit k3s.service has failed
Defined-By: systemd
Support: https://access.redhat.com/support
A start job for unit k3s.service has finished with a failure.
Any ideas welcome. I am no Linux expert :-(

Related

Job for mongod.service failed because a fatal signal was delivered to the control process

I installed MongoDB following this tutorial https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-red-hat/ but this is the result of sudo systemctl status mongod:
[root@ns1 ~]# sudo systemctl status mongod
● mongod.service - MongoDB Database Server
Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
Active: failed (Result: signal) since Wed 2022-07-06 00:31:55 +03; 14min ago
Docs: https://docs.mongodb.org/manual
Process: 561243 ExecStart=/usr/bin/mongod $OPTIONS (code=killed, signal=ILL)
Process: 561241 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 561239 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 561238 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)
Jul 06 00:31:55 ns1.localhost.com systemd[1]: Starting MongoDB Database Server...
Jul 06 00:31:55 ns1.localhost.com systemd[1]: mongod.service: control process exited, code=killed status=4
Jul 06 00:31:55 ns1.localhost.com systemd[1]: Failed to start MongoDB Database Server.
Jul 06 00:31:55 ns1.localhost.com systemd[1]: Unit mongod.service entered failed state.
Jul 06 00:31:55 ns1.localhost.com systemd[1]: mongod.service failed.
journalctl -xe
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID c15063 has been created for the user root.
--
-- The leading process of the session is 561230.
Jul 06 00:31:55 ns1.localhost.com sudo[561230]: pam_unix(sudo:session): session opened for user root by root
Jul 06 00:31:55 ns1.localhost.com polkitd[458]: Registered Authentication Agent for unix-process:561232:1926
Jul 06 00:31:55 ns1.localhost.com systemd[1]: Starting MongoDB Database Server...
-- Subject: Unit mongod.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mongod.service has begun starting up.
Jul 06 00:31:55 ns1.localhost.com kernel: traps: mongod[561243] trap invalid opcode ip:55b5947535da sp:7ffd0
Jul 06 00:31:55 ns1.localhost.com systemd[1]: mongod.service: control process exited, code=killed status=4
Jul 06 00:31:55 ns1.localhost.com systemd[1]: Failed to start MongoDB Database Server.
-- Subject: Unit mongod.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mongod.service has failed.
--
-- The result is failed.
Jul 06 00:31:55 ns1.localhost.com systemd[1]: Unit mongod.service entered failed state.
Jul 06 00:31:55 ns1.localhost.com systemd[1]: mongod.service failed.
Jul 06 00:31:55 ns1.localhost.com polkitd[458]: Unregistered Authentication Agent for unix-process:561232:19
Jul 06 00:31:55 ns1.localhost.com sudo[561230]: pam_unix(sudo:session): session closed for user root
Jul 06 00:31:55 ns1.localhost.com systemd-logind[459]: Removed session c15063.
-- Subject: Session c15063 has been terminated
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
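Worth noting about the log above: signal=ILL together with the kernel's "trap invalid opcode" line means the mongod binary executed a CPU instruction this processor does not support; recent MongoDB releases (5.0 and later) are built for CPUs with AVX. Whether the machine (or the hypervisor's virtual CPU model) exposes AVX can be checked with:

# List AVX-related CPU flags; no output means AVX is not available here
grep -o 'avx[0-9a-z_]*' /proc/cpuinfo | sort -u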

Docker Issue - (code exited, status=1/Failure)

The problem is, I've installed Docker on my PC.
I tried to create a Postgres container as the docs say.
The big problem that began my headache: I started the container and it exited with code 1.
I searched for a lot of "solutions" on many sites and none of them resolved the problem.
OS:
Deepin 15.11
Terminal report of the problem:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
invoke-rc.d: initscript docker, action "start" failed.
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/docker.service.d
└─hosts.conf, override.conf
Active: activating (auto-restart) (Result: exit-code) since Sat 2020-04-25 13:12:26 -03; 18ms ago
Docs: https://docs.docker.com
Process: 10333 ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375 (code=exited, status=1/FAILURE)
Main PID: 10333 (code=exited, status=1/FAILURE)
CPU: 80ms
journalctl -xe:
root@usuario-PC:/home/usuario# journalctl -xe
-- The unit docker.service has failed.
--
-- The result is failed.
May 08 11:00:16 usuario-PC systemd[1]: docker.socket: Unit entered failed state.
May 08 11:00:16 usuario-PC systemd[1]: docker.service: Unit entered failed state.
May 08 11:00:16 usuario-PC systemd[1]: docker.service: Failed with result 'exit-code'.
May 08 11:00:42 usuario-PC systemd[1]: Starting Laptop Mode Tools - Battery Polling Service...
-- Subject: Unit lmt-poll.service is being started
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit lmt-poll.service is being started.
May 08 11:00:42 usuario-PC systemd[1]: Reloading Laptop Mode Tools.
-- Subject: Unit laptop-mode.service has begun reloading its configuration
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit laptop-mode.service has begun reloading its configuration.
May 08 11:00:42 usuario-PC systemd[1]: Started Laptop Mode Tools - Battery Polling Service.
-- Subject: Unit lmt-poll.service has finished start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit lmt-poll.service has finished start-up.
--
-- The start-up result is done.
May 08 11:00:42 usuario-PC laptop_mode[11704]: Laptop mode
May 08 11:00:42 usuario-PC laptop_mode[11704]: enabled, not active [unchanged]
May 08 11:00:42 usuario-PC systemd[1]: Reloaded Laptop Mode Tools.
-- Subject: Unit laptop-mode.service has finished reloading its configuration
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit laptop-mode.service has finished reloading its configuration.
--
-- The result is done.
The first thing is to make sure your Docker service is running:
$ sudo systemctl status docker
The second thing: restart your Docker service. I think that will work:
$ sudo systemctl restart docker
And the final thing, if none of the above works: restart your computer (haha)
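If a restart does not help, the daemon's own log usually names the real problem. A minimal sketch for getting at it (standard systemd/dockerd commands):

# Show the last daemon messages - the actual startup error is usually here
sudo journalctl -u docker.service --no-pager -n 50

# Or run the daemon in the foreground to see the error directly
# (omit -H fd:// outside systemd - socket activation is unavailable there)
sudo dockerd --debug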

SonarQube is unable to start on CentOS: systemd returns (code=exited, status=203/EXEC)

Following https://www.vultr.com/docs/how-to-install-sonarqube-on-centos-7 to install SonarQube on my CentOS.
When I try to start SonarQube via sudo systemctl start sonar it gives the error below:
Job for sonar.service failed because the control process exited with error code. See "systemctl status sonar.service" and "journalctl -xe" for details.
Output of systemctl status sonar.service:
sonar.service - SonarQube service
Loaded: loaded (/etc/systemd/system/sonar.service; disabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Wed 2019-04-24 16:56:39 UTC; 19s ago
Process: 23573 ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start (code=exited, status=203/EXEC)
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: sonar.service: control process exited, code=exited status=203
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: Failed to start SonarQube service.
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: Unit sonar.service entered failed state.
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: sonar.service failed.
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: sonar.service holdoff time over, scheduling restart.
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: start request repeated too quickly for sonar.service
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: Failed to start SonarQube service.
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: Unit sonar.service entered failed state.
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: sonar.service failed.
Output of journalctl -xe:
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: Starting SonarQube service...
-- Subject: Unit sonar.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit sonar.service has begun starting up.
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[23573]: Failed at step EXEC spawning /opt/sonarqube/bin/linux-x86-64/sona
-- Subject: Process /opt/sonarqube/bin/linux-x86-64/sonar.sh could not be executed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- The process /opt/sonarqube/bin/linux-x86-64/sonar.sh could not be executed and failed.
--
-- The error number returned by this process is 2.
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: sonar.service: control process exited, code=exited status=203
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: Failed to start SonarQube service.
-- Subject: Unit sonar.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit sonar.service has failed.
--
-- The result is failed.
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: Unit sonar.service entered failed state.
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: sonar.service failed.
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: sonar.service holdoff time over, scheduling restart.
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: start request repeated too quickly for sonar.service
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: Failed to start SonarQube service.
-- Subject: Unit sonar.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit sonar.service has failed.
--
-- The result is failed.
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: Unit sonar.service entered failed state.
Apr 24 16:56:39 ip-172-31-13-96.ap-south-1.compute.internal systemd[1]: sonar.service failed.
status=203/EXEC means that systemd cannot execute ExecStart script. The most "popular" reasons are:
incorrect path to script
script has invalid permissions:
service user does not have permission to read script
script is not marked as executable
Assuming /opt/sonarqube/bin/linux-x86-64/sonar.sh exists, verify that the script is marked as executable.
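A minimal check-and-fix sketch (the sonar user below is an assumption, based on the kind of setup the linked guide creates; substitute whatever User= your unit actually names):

# Does the script exist, and who may execute it?
ls -l /opt/sonarqube/bin/linux-x86-64/sonar.sh

# Add the execute bit if it is missing
sudo chmod +x /opt/sonarqube/bin/linux-x86-64/sonar.sh

# Confirm the (assumed) service user can actually run it
sudo -u sonar /opt/sonarqube/bin/linux-x86-64/sonar.sh status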

Error creating a systemd service with a live socket

I'm trying to create a systemd service on CentOS 7.5 to access Livestatus remotely, through the following unit files:
File proxy-to-livestatus.service:
[Unit]
Requires=naemon.service
After=naemon.service
[Service]
ExecStart=/usr/lib/systemd/systemd-socket-proxyd /run/naemon/live
File proxy-to-livestatus.socket:
[Unit]
StopWhenUnneeded=true
[Socket]
ListenStream=6557
Status:
systemctl status proxy-to-livestatus.service
● proxy-to-livestatus.service
Loaded: loaded (/etc/systemd/system/proxy-to-livestatus.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2018-07-18 09:11:58 CEST; 15s ago
Process: 3203 ExecStart=/usr/lib/systemd/systemd-socket-proxyd /run/naemon/live (code=exited, status=1/FAILURE)
Main PID: 3203 (code=exited, status=1/FAILURE)
Jul 18 09:11:58 chuwi systemd[1]: Started proxy-to-livestatus.service.
Jul 18 09:11:58 chuwi systemd[1]: Starting proxy-to-livestatus.service...
Jul 18 09:11:58 chuwi systemd-socket-proxyd[3203]: Didn't get any sockets passed in.
Jul 18 09:11:58 chuwi systemd[1]: proxy-to-livestatus.service: main process exited, code=exited, status=1/FAILURE
Jul 18 09:11:58 chuwi systemd[1]: Unit proxy-to-livestatus.service entered failed state.
Jul 18 09:11:58 chuwi systemd[1]: proxy-to-livestatus.service failed.
Hi, to resolve this issue we have to enable the socket with the --now option:
systemctl enable --now proxy-to-livestatus.socket
and then start the service:
systemctl start proxy-to-livestatus.service
Regards
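For context: systemd-socket-proxyd only works with sockets handed to it by systemd, which is why starting the service on its own fails with "Didn't get any sockets passed in". Once the socket unit is active, a connection to port 6557 starts the service on demand; that can be verified with:

# The socket unit should be active and listening on 6557
systemctl status proxy-to-livestatus.socket
ss -ltn 'sport = :6557'

# After a client connects to 6557, the service should show as running
systemctl status proxy-to-livestatus.service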

Job for kube-apiserver.service failed because the control process exited with error code

At the beginning I want to point out that I am fairly new to Linux systems, and totally, totally new to Kubernetes, so my question may be trivial.
As stated in the title, I have a problem setting up a Kubernetes cluster. I am working on Atomic Host version 7.1707 (2017-07-31 16:12:06).
I am following this guide:
http://www.projectatomic.io/docs/gettingstarted/
In addition to that I followed this:
http://www.projectatomic.io/docs/kubernetes/
To be precise, I ran this command:
rpm-ostree install kubernetes-master --reboot
Everything was going fine until this point:
systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler
The problem is with:
systemctl start etcd kube-apiserver
as it gives me back this response:
Job for kube-apiserver.service failed because the control process
exited with error code. See "systemctl status kube-apiserver.service"
and "journalctl -xe" for details.
systemctl status kube-apiserver.service
gives me back:
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Fri 2017-08-25 14:29:56 CEST; 2s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 17876 ExecStart=/usr/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS (code=exited, status=255)
Main PID: 17876 (code=exited, status=255)
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service: main process exited, code=exited, status=255/n/a
Aug 25 14:29:56 master systemd[1]: Failed to start Kubernetes API Server.
Aug 25 14:29:56 master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service failed.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service holdoff time over, scheduling restart.
Aug 25 14:29:56 master systemd[1]: start request repeated too quickly for kube-apiserver.service
Aug 25 14:29:56 master systemd[1]: Failed to start Kubernetes API Server.
Aug 25 14:29:56 master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service failed.
I have no clue where to start and I will be more than thankful for any advice.
It turned out to be a typo in /etc/kubernetes/config: I misunderstood the "# Comma separated list of nodes in the etcd cluster" comment.
I don't know how to close the thread or anything.
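For illustration only, since the thread does not show the corrected line: the comment refers to the KUBE_ETCD_SERVERS variable (visible in the unit's ExecStart above), which has to carry a flag plus one or more comma-separated URLs. The endpoint below is the common single-node default, not taken from the thread:

# /etc/kubernetes/config - hypothetical corrected line; replace the URL
# with your actual etcd endpoint(s), comma separated
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"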