My Orion Context Broker does not start. When I enter the command
/etc/init.d/contextBroker start, I get this message:
[root@context-broker ~]# /etc/init.d/contextBroker start
Starting contextBroker (via systemctl): Job for contextBroker.service failed because the control process exited with error code. See "systemctl status contextBroker.service" and "journalctl -xe" for details.
[FAILED]
The systemctl status contextBroker.service command gives this message:
[root@context-broker ~]# systemctl status contextBroker.service
● contextBroker.service - LSB: run contextBroker
Loaded: loaded (/etc/rc.d/init.d/contextBroker; bad; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2019-05-24 11:38:50 UTC; 1min 11s ago
Docs: man:systemd-sysv-generator(8)
Process: 9782 ExecStart=/etc/rc.d/init.d/contextBroker start (code=exited, status=1/FAILURE)
May 24 11:38:47 context-broker.novalocal systemd[1]: Starting LSB: run contextBroker...
May 24 11:38:48 context-broker.novalocal contextBroker[9782]: contextBroker is stopped
May 24 11:38:48 context-broker.novalocal contextBroker[9782]: Starting...
May 24 11:38:48 context-broker.novalocal su[9788]: (to orion) root on none
May 24 11:38:50 context-broker.novalocal contextBroker[9782]: Starting contextBroker... cat: /var/run/contextBroker/contextBroker.pid...irectory
May 24 11:38:50 context-broker.novalocal systemd[1]: contextBroker.service: control process exited, code=exited status=1
May 24 11:38:50 context-broker.novalocal contextBroker[9782]: pidfile not found[FAILED]
May 24 11:38:50 context-broker.novalocal systemd[1]: Failed to start LSB: run contextBroker.
May 24 11:38:50 context-broker.novalocal systemd[1]: Unit contextBroker.service entered failed state.
May 24 11:38:50 context-broker.novalocal systemd[1]: contextBroker.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
Also, the /tmp/contextBroker.log file looks like this:
time=2019-05-24T11:41:12.971Z | lvl=FATAL | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=rest.cpp[1753]:restStart | msg=Fatal Error (error starting REST interface)
I checked whether MongoDB is running, and it is running correctly.
UPDATE
With some searching I realised I had to kill the PID of the process, and after I did that the service starts successfully according to the messages, but I find it doesn't actually work. When I ask for the status I get the following:
[root@context-broker centos]# /etc/init.d/contextBroker status
● contextBroker.service - LSB: run contextBroker
Loaded: loaded (/etc/rc.d/init.d/contextBroker; bad; vendor preset: disabled)
Active: active (exited) since Sun 2019-05-26 18:34:49 UTC; 4min 56s ago
Docs: man:systemd-sysv-generator(8)
Process: 16295 ExecStop=/etc/rc.d/init.d/contextBroker stop (code=exited, status=0/SUCCESS)
Process: 16319 ExecStart=/etc/rc.d/init.d/contextBroker start (code=exited, status=0/SUCCESS)
May 26 18:34:47 context-broker.novalocal systemd[1]: Starting LSB: run contextBroker...
May 26 18:34:47 context-broker.novalocal contextBroker[16319]: contextBroker is stopped
May 26 18:34:47 context-broker.novalocal contextBroker[16319]: Starting...
May 26 18:34:47 context-broker.novalocal su[16325]: (to orion) root on none
May 26 18:34:49 context-broker.novalocal systemd[1]: Started LSB: run contextBroker.
May 26 18:34:49 context-broker.novalocal contextBroker[16319]: Starting contextBroker... [ OK ]
The log file has the same message as previously.
With some more searching, I believe the cause is that the service doesn't have a daemon(?). If that is the case, how do I add one?
Normally, when you get the "error starting REST interface" message, it is because there is already a broker running, which means the port is already taken. Make sure there is no broker already running.
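For example, a quick way to check (assuming the broker listens on its default port 1026; it may differ if it was started with -port):

ps aux | grep contextBroker
sudo ss -lntp | grep 1026      # or: sudo netstat -lntp | grep 1026
# if a stale instance is holding the port, stop it before starting the service again
sudo kill <PID-from-the-output-above>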
Related
PostgreSQL has been installed for a long time, but it suddenly stopped working when I added an IP address to the pg_hba file.
I'm trying to run it using sudo service postgresql restart, but this doesn't work.
Also
root@ubuntu-dev-server:/# systemctl start postgresql@9.6-main
Job for postgresql@9.6-main.service failed because the service did not take the steps required by its unit configuration.
See "systemctl status postgresql@9.6-main.service" and "journalctl -xe" for details.
And then,
root@ubuntu-dev-server:/# systemctl status postgresql@9.6-main.service
● postgresql@9.6-main.service - PostgreSQL Cluster 9.6-main
Loaded: loaded (/lib/systemd/system/postgresql@.service; indirect; vendor preset: enabled)
Active: failed (Result: protocol) since Fri 2020-07-24 16:56:06 UTC; 1min 54s ago
Process: 31877 ExecStop=/usr/bin/pg_ctlcluster --skip-systemctl-redirect -m fast 9.6-main stop (code=exited, status=0/SUCCESS)
Process: 951 ExecStart=/usr/bin/pg_ctlcluster --skip-systemctl-redirect 9.6-main start (code=exited, status=1/FAILURE)
Main PID: 24944 (code=exited, status=0/SUCCESS)
Jul 24 16:56:06 ubuntu-dev-server systemd[1]: Starting PostgreSQL Cluster 9.6-main...
Jul 24 16:56:06 ubuntu-dev-server postgresql@9.6-main[951]: The PostgreSQL server failed to start. Please check the log output:
Jul 24 16:56:06 ubuntu-dev-server postgresql@9.6-main[951]: 2020-07-24 16:56:06.236 UTC [956] FATAL: could not map anonymous shared memory: Cannot allocate memory
Jul 24 16:56:06 ubuntu-dev-server postgresql@9.6-main[951]: 2020-07-24 16:56:06.236 UTC [956] HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 148471808 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
Jul 24 16:56:06 ubuntu-dev-server postgresql@9.6-main[951]: 2020-07-24 16:56:06.236 UTC [956] LOG: database system is shut down
Jul 24 16:56:06 ubuntu-dev-server systemd[1]: postgresql@9.6-main.service: Can't open PID file /run/postgresql/9.6-main.pid (yet?) after start: No such file or directory
Jul 24 16:56:06 ubuntu-dev-server systemd[1]: postgresql@9.6-main.service: Failed with result 'protocol'.
Jul 24 16:56:06 ubuntu-dev-server systemd[1]: Failed to start PostgreSQL Cluster 9.6-main.
I have tried several things, but it still does not start. Any recommendation is welcome.
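Following the HINT in the log above, a sketch of one direction to try (the file path is the Debian/Ubuntu default and the values are examples only; they depend on the memory available on the machine):

# in /etc/postgresql/9.6/main/postgresql.conf
shared_buffers = 64MB      # lower than the current setting
max_connections = 50       # optionally reduce as well

sudo systemctl restart postgresql@9.6-main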
I have made this service file to start a Python script when my Raspberry Pi (4) boots up:
/etc/systemd/system/plants.service
[Unit]
Description=plant-sender
After=network.target
[Service]
Type=simple
User=root
Group=root
WorkingDirectory=/home/theo/Repos/plants-monitor/remote
ExecStart=/usr/bin/python main.py
Restart=on-failure
[Install]
WantedBy=multi-user.target
However, once the Pi is on, I run sudo systemctl status plants and get:
* plants.service - plant-sender
Loaded: loaded (/etc/systemd/system/plants.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2020-03-30 20:22:43 EDT; 1min 45s ago
Process: 323 ExecStart=/usr/bin/python main.py (code=exited, status=1/FAILURE)
Main PID: 323 (code=exited, status=1/FAILURE)
Mar 30 20:22:43 arpi systemd[1]: plants.service: Scheduled restart job, restart counter is at 5.
Mar 30 20:22:43 arpi systemd[1]: Stopped plant-sender.
Mar 30 20:22:43 arpi systemd[1]: plants.service: Start request repeated too quickly.
Mar 30 20:22:43 arpi systemd[1]: plants.service: Failed with result 'exit-code'.
Mar 30 20:22:43 arpi systemd[1]: Failed to start plant-sender.
But, after running sudo systemctl restart plants, the service starts up and everything is fine.
If it doesn't start on boot but does on systemctl restart, I'd be looking at whether /home/theo/Repos/plants-monitor/remote is mounted at that point.
There may be something automounting or home-mounting your home directory when you log in.
If so, you could change the working directory to something that always exists, even if only as a test.
Additionally, using journalctl -n 9999 -u plants will get you more log messages, so you can see why it's failing, rather than just seeing the "tried too many times, giving up" messages.
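As a sketch of the mount-dependency idea (assuming the repository lives on a filesystem that may not be mounted yet at boot; RequiresMountsFor= is a standard [Unit] directive):

[Unit]
Description=plant-sender
After=network.target
RequiresMountsFor=/home/theo/Repos/plants-monitor/remote

# then apply the change and test
sudo systemctl daemon-reload
sudo systemctl restart plants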
I have used this tutorial to install fail2ban for my Ubuntu 16.04 server.
After going through it, I tried to start it with /etc/init.d/fail2ban start.
Here was the response:
[....] Starting fail2ban (via systemctl): fail2ban.serviceJob for fail2ban.service failed because the control process exited with error code. See "systemctl status fail2ban.service" and "journalctl -xe" for details.
failed!
When I then run: systemctl status fail2ban.service
I get this:
● fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Tue 2018-05-15 14:01:38 UTC; 1min 40s ago
Docs: man:fail2ban(1)
Process: 4468 ExecStart=/usr/bin/fail2ban-client -x start (code=exited, status=255)
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Control process exited, code=exited status=255
May 15 14:01:38 tastycoders-prod1 systemd[1]: Failed to start Fail2Ban Service.
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Unit entered failed state.
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Failed with result 'exit-code'.
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Service hold-off time over, scheduling restart.
May 15 14:01:38 tastycoders-prod1 systemd[1]: Stopped Fail2Ban Service.
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Start request repeated too quickly.
May 15 14:01:38 tastycoders-prod1 systemd[1]: Failed to start Fail2Ban Service.
Some tutorials at DigitalOcean contain errors. Check your /etc/fail2ban/jail.local. Try to keep it as simple as you can, i.e. keep only those options you want to change in it.
Otherwise, if you have copied jail.conf to jail.local (as the DigitalOcean guide suggests), delete or comment out the pam section in jail.local if you do not use it.
Go to line 146 of /etc/fail2ban/jail.local
# [pam-generic]
# enabled = false
# pam-generic filter can be customized to monitor specific subset of 'tty's
# filter = pam-generic
# port actually must be irrelevant but lets leave it all for some possible uses
# port = all
# banaction = iptables-allports
# port = anyport
# logpath = /var/log/auth.log
# maxretry = 6
More details are here: https://github.com/fail2ban/fail2ban/issues/1396
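To illustrate "keep it as simple as you can", a minimal jail.local could contain only the settings you actually want to override (the sshd jail below is just an example):

[DEFAULT]
bantime = 3600

[sshd]
enabled = true

You can dump the resulting configuration with fail2ban-client -d before starting the service again.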
I have a problem with my Filebeat installation.
When I try to start it with "service filebeat start", it says "Starting Filebeat". After "service filebeat status" I get 4 PIDs (up to this point everything looks "normal"):
[root@(Server) run]# service filebeat status
Filebeat is running with pid: 30650 30657 30658 30659
But after checking the PID, we see that it is not running:
[root@(Server) run]# ps -ef | grep 30650
root 30665 31360 0 16:27 pts/0 00:00:00 grep --color=auto 30650
Trying to start it with systemctl doesn't help:
[root@(Server) run]# systemctl start filebeat
Job for filebeat.service failed because a configured resource limit was exceeded. See "systemctl status filebeat.service" and "journalctl -xe" for details.
Status says:
[root@Server run]# systemctl status filebeat
● filebeat.service - LSB: start and stop filebeat
Loaded: loaded (/etc/rc.d/init.d/filebeat; bad; vendor preset: disabled)
Active: failed (Result: resources) since Tue 2017-09-26 16:30:33 CEST; 1min 41s ago
Docs: man:systemd-sysv-generator(8)
Process: 32118 ExecStart=/etc/rc.d/init.d/filebeat start (code=exited, status=0/SUCCESS)
Sep 26 16:30:33 Server... systemd[1]: Starting LSB: start and stop filebeat...
Sep 26 16:30:33 Server... filebeat[32118]: Starting Filebeat
Sep 26 16:30:33 Server... su[32119]: (to user) root on none
Sep 26 16:30:33 Server... systemd[1]: PID file /var/run/filebeat.pid not readable (yet?) after start.
Sep 26 16:30:33 Server... systemd[1]: Failed to start LSB: start and stop filebeat.
Sep 26 16:30:33 Server... systemd[1]: Unit filebeat.service entered failed state.
Sep 26 16:30:33 Server... systemd[1]: filebeat.service failed.
Does somebody have any idea?
Regards
Problem was "chown permissions". I installed filebeat not as root and the "data" directory had root user & group ownership. After changing that, it runs and starts automatically after boot.
Regards
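For reference, a sketch of that fix (the user name and the data directory path are assumptions; adjust them to whichever user actually runs Filebeat and wherever its data directory lives):

# give the data directory back to the user that runs Filebeat
sudo chown -R <filebeat-user>:<filebeat-group> /var/lib/filebeat/data
sudo systemctl start filebeat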
At the beginning I want to point out that I am fairly new to Linux systems, and totally, totally new to Kubernetes, so my question may be trivial.
As stated in the title, I have a problem setting up the Kubernetes cluster. I am working on Atomic Host version 7.1707 (2017-07-31 16:12:06).
I am following this guide:
http://www.projectatomic.io/docs/gettingstarted/
In addition to that, I followed this:
http://www.projectatomic.io/docs/kubernetes/
To be precise, I ran this command:
rpm-ostree install kubernetes-master --reboot
Everything was going fine until this point:
systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler
The problem is with:
systemctl start etcd kube-apiserver
It gives me back this response:
Job for kube-apiserver.service failed because the control process
exited with error code. See "systemctl status kube-apiserver.service"
and "journalctl -xe" for details.
systemctl status kube-apiserver.service
gives me back:
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Fri 2017-08-25 14:29:56 CEST; 2s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 17876 ExecStart=/usr/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS (code=exited, status=255)
Main PID: 17876 (code=exited, status=255)
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service: main process exited, code=exited, status=255/n/a
Aug 25 14:29:56 master systemd[1]: Failed to start Kubernetes API Server.
Aug 25 14:29:56 master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service failed.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service holdoff time over, scheduling restart.
Aug 25 14:29:56 master systemd[1]: start request repeated too quickly for kube-apiserver.service
Aug 25 14:29:56 master systemd[1]: Failed to start Kubernetes API Server.
Aug 25 14:29:56 master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service failed.
I have no clue where to start, and I will be more than thankful for any advice.
It turned out to be a typo in /etc/kubernetes/config. I misunderstood the "# Comma separated list of nodes in the etcd cluster".
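For illustration only (the variable is typically KUBE_ETCD_SERVERS, and the hostnames below are placeholders; the exact file in your setup may differ), the etcd list should be a single comma-separated value:

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
# or, with several members:
# KUBE_ETCD_SERVERS="--etcd-servers=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379"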
I don't know how to close the thread or anything.