Mosquitto can't edit config file because of error? - raspberry-pi

I did a completely fresh install on a Raspberry Pi and then installed Mosquitto. Many online instructions say you should put listener 1883 and allow_anonymous true at the end of the config file /etc/mosquitto/mosquitto.conf to allow remote access, etc.
But as soon as I put these two lines in the config file, Mosquitto won't start anymore. Somehow it also seems like no one else has this problem. I reinstalled everything and still nothing works. When I type sudo systemctl status mosquitto I get the following:
Loaded: loaded (/lib/systemd/system/mosquitto.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2022-12-08 12:31:46 CET; 1min 50s ago
Docs: man:mosquitto.conf(5)
man:mosquitto(8)
Process: 630 ExecStartPre=/bin/mkdir -m 740 -p /var/log/mosquitto (code=exited, status=0/SUCCESS)
Process: 631 ExecStartPre=/bin/chown mosquitto /var/log/mosquitto (code=exited, status=0/SUCCESS)
Process: 632 ExecStartPre=/bin/mkdir -m 740 -p /run/mosquitto (code=exited, status=0/SUCCESS)
Process: 633 ExecStartPre=/bin/chown mosquitto /run/mosquitto (code=exited, status=0/SUCCESS)
Process: 634 ExecStart=/usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf (code=exited, status=1/FAILURE)
Main PID: 634 (code=exited, status=1/FAILURE)
CPU: 91ms
Dec 08 12:31:46 MyPI systemd[1]: mosquitto.service: Scheduled restart job, restart counter is at 5.
Dec 08 12:31:46 MyPI systemd[1]: Stopped Mosquitto MQTT Broker.
Dec 08 12:31:46 MyPI systemd[1]: mosquitto.service: Start request repeated too quickly.
Dec 08 12:31:46 MyPI systemd[1]: mosquitto.service: Failed with result 'exit-code'.
Dec 08 12:31:46 MyPI systemd[1]: Failed to start Mosquitto MQTT Broker.
Configuration file:
# Place your local configuration in /etc/mosquitto/conf.d/
#
# A full description of the configuration file is at
# /usr/share/doc/mosquitto/examples/mosquitto.conf.example
pid_file /run/mosquitto/mosquitto.pid
persistence true
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto/mosquitto.log
include_dir /etc/mosquitto/conf.d
listener 1883
allow_anonymous true
EDIT: I have now changed the listener to port 1884, and now it works. Could it be that the Wi-Fi router has some setting that blocks everything on port 1883, or something like that?
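A quick way to separate a router problem from a local one: the router can only block traffic coming from other machines, not the broker binding a port on the Pi itself, so if Mosquitto fails to start at all the cause is local. A minimal check, assuming the stock package layout shown above, is to see what already holds port 1883 and to run the broker in the foreground so it prints the real error instead of systemd's generic failure:
# Is anything already listening on 1883 (e.g. an old broker instance)?
sudo ss -tlnp | grep 1883
# Run the broker in the foreground with the same config; parse or bind
# errors are printed to the terminal instead of being hidden by systemd
sudo mosquitto -c /etc/mosquitto/mosquitto.conf -v
If another process is already bound to 1883, that would also explain why switching to 1884 "fixed" it.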

Related

Installation of MongoDB fails on CentOS 7

I am new to MongoDB and am trying to install v5 on a CentOS 7 server. I am using the official documentation to install MongoDB from here: https://www.mongodb.com/docs/v5.0/tutorial/install-mongodb-on-red-hat/
To start MongoDB, I run the following command (as per the documentation):
sudo systemctl start mongod
Unfortunately, this does not succeed and I get the following error:
Job for mongod.service failed because a fatal signal was delivered to the control process. See "systemctl status mongod.service" and "journalctl -xe" for details.
The command
systemctl status mongod.service
reveals the following:
mongod.service - MongoDB Database Server
Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
Active: failed (Result: signal) since Sat 2022-10-22 06:51:02 UTC; 8min ago
Docs: https://docs.mongodb.org/manual
Process: 22590 ExecStart=/usr/bin/mongod $OPTIONS (code=killed, signal=ILL)
Process: 22587 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 22585 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 22583 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)
Oct 22 06:51:02 server1.bjmanch.in systemd[1]: Starting MongoDB Database Server...
Oct 22 06:51:02 server1.bjmanch.in systemd[1]: mongod.service: control process exited, code=killed status=4
Oct 22 06:51:02 server1.bjmanch.in systemd[1]: Failed to start MongoDB Database Server.
Oct 22 06:51:02 server1.bjmanch.in systemd[1]: Unit mongod.service entered failed state.
Oct 22 06:51:02 server1.bjmanch.in systemd[1]: mongod.service failed.
And that's where I have been completely stuck for the last 3 days.
Any help would be greatly appreciated.
Thanks
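One clue in the status output above: mongod exited with signal=ILL (illegal instruction). With MongoDB 5.0 and later this is commonly caused by a CPU, or a virtualized CPU, that does not expose AVX, which the official 5.x binaries require. A hedged first check on the server:
# Prints the AVX-related CPU flags; empty output means no AVX is available,
# and the stock MongoDB 5.0+ packages will crash with an illegal instruction
grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u
If AVX is missing, the usual options are AVX-capable hardware/VM settings or staying on the 4.4 series; treat this as a starting point for diagnosis rather than a definitive answer.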

Can't get MongoDB to Start

I've tried this on CentOS 7 and 8 and I'm getting the same result. I have also tried different versions of MongoDB. I'm trying to run a Pritunl VPN on a Hyper-V VM and have followed this tutorial for installing MongoDB and this tutorial for setting everything up, with the exception that I'm using a VM rather than a VPS.
When I run "systemctl start mongod" I get the error "Job for mongod.service failed because a fatal signal was delivered causing the control process to dump core.
See "systemctl status mongod.service" and "journalctl -xe" for details."
Running journalctl -xe yields something along the lines of the code included below. I tried disabling core dump and the "Process" number changed from 3397 to 4143. Also included the output of systemctl status mongod.service.
This is my first time working with something like this, so there's a high possibility I'm missing something simple. I keep seeing mention of directory files in some of the solution posts, but according to the MongoDB install instructions it should create its own directories. Any help is appreciated because I am beyond lost.
journalctl -xe:
-- Unit mongod.service has begun starting up.
Dec 22 14:54:12 localhost.localdomain kernel: traps: mongod[33894] trap invalid opcode ip:558aaaedaeda sp:7ffd3f5a6560 error>
Dec 22 14:54:12 localhost.localdomain systemd[1]: Started Process Core Dump (PID 33895/UID 0).
-- Subject: Unit systemd-coredump#8-33895-0.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit systemd-coredump#8-33895-0.service has finished starting up.
--
-- The start-up result is done.
Dec 22 14:54:12 localhost.localdomain systemd-coredump[33896]: Resource limits disable core dumping for process 33894 (mongo>
Dec 22 14:54:12 localhost.localdomain systemd-coredump[33896]: Process 33894 (mongod) of user 974 dumped core.
-- Subject: Process 33894 (mongod) dumped core
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: man:core(5)
--
-- Process 33894 (mongod) crashed and dumped core.
--
-- This usually indicates a programming error in the crashing program and
-- should be reported to its vendor as a bug.
Dec 22 14:54:12 localhost.localdomain systemd[1]: mongod.service: Control process exited, code=dumped status=4
Dec 22 14:54:12 localhost.localdomain systemd[1]: mongod.service: Failed with result 'core-dump'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit mongod.service has entered the 'failed' state with result 'core-dump'.
Dec 22 14:54:12 localhost.localdomain systemd[1]: Failed to start MongoDB Database Server.
-- Subject: Unit mongod.service has failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit mongod.service has failed.
--
-- The result is failed.
Dec 22 14:54:12 localhost.localdomain systemd[1]: systemd-coredump#8-33895-0.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit systemd-coredump#8-33895-0.service has successfully entered the 'dead' state.
status of mongod.service:
● mongod.service - MongoDB Database Server
Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
Active: failed (Result: core-dump) since Wed 2021-12-22 14:54:12 EST; 5min ago
Docs: https://docs.mongodb.org/manual
Process: 33894 ExecStart=/usr/bin/mongod $OPTIONS (code=dumped, signal=ILL)
Process: 33892 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 33890 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 33888 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)
Dec 22 14:54:12 localhost.localdomain systemd[1]: Starting MongoDB Database Server...
Dec 22 14:54:12 localhost.localdomain systemd-coredump[33896]: Process 33894 (mongod) of user 974 dumped core.
Dec 22 14:54:12 localhost.localdomain systemd[1]: mongod.service: Control process exited, code=dumped status=4
Dec 22 14:54:12 localhost.localdomain systemd[1]: mongod.service: Failed with result 'core-dump'.
Dec 22 14:54:12 localhost.localdomain systemd[1]: Failed to start MongoDB Database Server.
mongod.conf:
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

#security:
#operationProfiling:
#replication:
#sharding:

## Enterprise-Only Options
#auditLog:
"/etc/mongod.conf" 44L, 830C
mongod.service:
[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network-online.target
Wants=network-online.target
[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongod.conf"
EnvironmentFile=-/etc/sysconfig/mongod
ExecStart=/usr/bin/mongod $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb
PermissionsStartOnly=true
PIDFile=/var/run/mongodb/mongod.pid
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# https://docs.mongodb.com/manual/reference/ulimit/#recommended-ulimit-settings
[Install]
WantedBy=multi-user.target
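The "trap invalid opcode" line in the journal points the same way as the previous question: mongod executed an instruction the virtual CPU does not provide, which with MongoDB 5.0+ usually means missing AVX. On Hyper-V this can happen when the VM's processor compatibility (for migration) option is enabled, since that masks newer instruction sets from the guest. Two hedged checks from inside the VM, reusing only the paths already shown in mongod.conf:
# 0 here means the virtual CPU exposes no AVX flags to the guest
grep -c avx /proc/cpuinfo
# Start mongod by hand as the service user with the same config, so any
# startup error prints to the terminal instead of hiding behind systemd
sudo -u mongod /usr/bin/mongod -f /etc/mongod.conf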

Error: Unable to write pid file / mqtt broker - access from remote

I have been reading the Eclipse MQTT documentation and relevant posts about the MQTT broker failing to start, and have implemented the suggestions and ideas that seem relevant to my problem. However, as a newbie I am now stuck and need more support to get the broker started and accessible remotely.
I'm using Raspberry Pi OS Bullseye & Mosquitto version 2.0.11
mosquitto.conf is created in /etc/mosquitto:
pid_file /var/run/mosquitto/mosquitto.pid
per_listener_settings true
persistence true
persistence_file mosquitto.db
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto.log
include_dir /etc/mosquitto/conf.d
listener 1883 192.168.1.99
protocol mqtt
log_type all
acl_file /etc/mosquitto/acls
allow-anonymous false
connection_messages true
max_keepalive 10
log_timestamp true
log_dest topic
log_dest syslog
log_dest stdout
log_type all
password_file /etc/mosquitto/pwfile
and local.conf in /etc/mosquitto/conf.d to separate local access from remote access
allow_anonymous true
listener 1883 localhost
Updated /lib/systemd/system/mosquitto.service to:
ExecStartPre=/bin/mkdir -m 740 -p /var/run/mosquitto
ExecStartPre=/bin/chown mosquitto /var/run/mosquitto
(Have tried chown mosquitto:.., chown mosquitto:mosquitto.., chown -hR mosq... and chown -R mosq...)
Permissions for /var/run/mosquitto and mosquitto.pid:
drwxr----- 2 mosquitto root 60 Dec 16 10:14 .
drwxr-xr-x 33 root root 1000 Dec 16 14:46 ..
-rw-r--r-- 1 mosquitto mosquitto 4 Dec 16 10:14 mosquitto.pid
Broker is started with:
mosquitto -c /etc/mosquitto/mosquitto.conf -v
Error message returned:
1639655912: Loading config file /etc/mosquitto/conf.d/local.conf
2021-12-16|12:58:32: Error: Unable to write pid file
When I delete mosquitto.pid (with sudo), or rename its directory (with sudo), and restart the Mosquitto daemon, a new mosquitto.pid is not created and I get the same error message as above.
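One detail worth checking at this point: starting the broker by hand runs it as the logged-in user, not as the mosquitto user the systemd unit uses, and the listing above shows /var/run/mosquitto with mode 740 owned by mosquitto. A small, hedged test of whether that mismatch is what blocks the pid file (the write-test filenames are arbitrary):
# Can the mosquitto system user write into the pid directory?
sudo -u mosquitto touch /var/run/mosquitto/write-test && echo "mosquitto user: OK"
# Can the user who runs 'mosquitto -c ... -v' by hand write there?
touch /var/run/mosquitto/write-test-2 || echo "current user: cannot write"
# Clean up the test files
sudo rm -f /var/run/mosquitto/write-test /var/run/mosquitto/write-test-2
If only the mosquitto user can write there, repeating the foreground test as that user (sudo -u mosquitto mosquitto -c /etc/mosquitto/mosquitto.conf -v) at least separates a permissions problem from a configuration problem.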
Command "systemctl status mosquitto.service" returns:
Warning: The unit file, source configuration file or drop-ins of mosquitto.service
changed on disk. Run 'systemctl daemon-reload' to reload >
mosquitto.service - Mosquitto MQTT Broker
Loaded: loaded (/lib/systemd/system/mosquitto.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2021-12-16 10:14:03 CET; 2h 56min ago
Process: 5035 ExecStartPre=/bin/mkdir -m 740 -p /var/log/mosquitto (code=exited, status=0/SUCCESS)
Process: 5036 ExecStartPre=/bin/chown mosquitto /var/log/mosquitto (code=exited, status=0/SUCCESS)
Process: 5037 ExecStartPre=/bin/mkdir -m 740 -p /var/run/mosquitto (code=exited, status=0/SUCCESS)
Process: 5038 ExecStartPre=/bin/chown mosquitto /var/run/mosquitto (code=exited, status=0/SUCCESS)
Process: 5039 ExecStart=/usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf (code=exited, status=3)
Main PID: 5039 (code=exited, status=3)
Dec 16 10:14:03 Pi4 systemd[1]: mosquitto.service: Scheduled restart job, restart counter is at 5.
Dec 16 10:14:03 Pi4 systemd[1]: Stopped Mosquitto MQTT Broker.
Dec 16 10:14:03 Pi4 systemd[1]: mosquitto.service: Start request repeated too quickly.
Dec 16 10:14:03 Pi4 systemd[1]: mosquitto.service: Failed with result 'exit-code'.
Dec 16 10:14:03 Pi4 systemd[1]: Failed to start Mosquitto MQTT Broker.
I appreciate any guidance or help
Generate a new client ID. I'm testing mine using MQTTLENS, which is a Chrome app. Once I did this, the problem stopped and everything is working OK.

Service starting gunicorn failing with "Start request repeated too quickly"

I'm trying to start a service to run Gunicorn as the backend server for Flask; this is not working. Running nginx as the frontend server for React is working.
Server:
Virtualization: vmware
Operating System: Red Hat Enterprise Linux 8.4 (Ootpa)
CPE OS Name: cpe:/o:redhat:enterprise_linux:8.4:GA
Kernel: Linux 4.18.0-305.3.1.el8_4.x86_64
Architecture: x86-64
Service file in /etc/systemd/system/myservice.service:
[Unit]
Description="Description"
After=network.target
[Service]
User=root
Group=root
WorkingDirectory=/home/project/app/api
ExecStart=/home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app
Restart=always
[Install]
WantedBy=multi-user.target
/app/api:
-rwxr-xr-x. 1 root root 2018 Jun 9 20:06 api.py
drwxrwxr-x+ 5 root root 100 Jun 7 10:11 venv
Error message:
● myservice.service - "Description"
Loaded: loaded (/etc/systemd/system/myservice.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2021-06-10 19:01:01 CEST; 5s ago
Process: 18307 ExecStart=/home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app (code=exited, status=203/EXEC)
Main PID: 18307 (code=exited, status=203/EXEC)
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Service RestartSec=100ms expired, scheduling restart.
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Scheduled restart job, restart counter is at 5.
Jun 10 19:01:01 xxxx systemd[1]: Stopped "Description".
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Start request repeated too quickly.
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Failed with result 'exit-code'.
Jun 10 19:01:01 xxxx systemd[1]: Failed to start "Description".
Tried, not working:
Adding Environment="PATH=/home/project/app/api/venv/bin" under [Service]
$ systemctl reset-failed myservice.service
$ systemctl daemon-reload
Reboot, of course.
Tried, working:
Running (as root) /home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app while in the /app/api directory
Does anyone know how to fix this problem?
Typically enough, I figured it out shortly after posting this issue.
SELinux is messing with permissions for files and directories, so for anyone experiencing the same issue, make sure to test with the following alterations (as root):
$ setsebool -P httpd_can_network_connect on
$ chcon -Rt httpd_sys_content_t /path/to/your/Flask/dir
In my case: $ chcon -Rt httpd_sys_content_t /home/project/app/api
While this is NOT a permanent fix, it's worth a try. Check out the SELinux docs for more permanent solutions.
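If the setsebool/chcon route does not immediately help, it can be worth confirming that SELinux really is the blocker by looking at the AVC denials themselves; status=203/EXEC is systemd's generic "could not execute the binary" code, which is consistent with an exec denied by SELinux. A short sketch, assuming the audit tools shipped with RHEL 8 (audit plus policycoreutils-python-utils) are installed:
# Show recent SELinux denials; the blocked gunicorn exec should appear here
sudo ausearch -m avc -ts recent
# Translate the raw denials into human-readable explanations
sudo ausearch -m avc -ts recent | audit2why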

systemd service not starting on boot, starts when i restart it

I have made this service file to start a python script when my raspberry pi (4) boots up:
/etc/systemd/system/plants.service
[Unit]
Description=plant-sender
After=network.target
[Service]
Type=simple
User=root
Group=root
WorkingDirectory=/home/theo/Repos/plants-monitor/remote
ExecStart=/usr/bin/python main.py
Restart=on-failure
[Install]
WantedBy=multi-user.target
However, once the Pi is on, I run sudo systemctl status plants, and get:
* plants.service - plant-sender
Loaded: loaded (/etc/systemd/system/plants.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2020-03-30 20:22:43 EDT; 1min 45s ago
Process: 323 ExecStart=/usr/bin/python main.py (code=exited, status=1/FAILURE)
Main PID: 323 (code=exited, status=1/FAILURE)
Mar 30 20:22:43 arpi systemd[1]: plants.service: Scheduled restart job, restart counter is at 5.
Mar 30 20:22:43 arpi systemd[1]: Stopped plant-sender.
Mar 30 20:22:43 arpi systemd[1]: plants.service: Start request repeated too quickly.
Mar 30 20:22:43 arpi systemd[1]: plants.service: Failed with result 'exit-code'.
Mar 30 20:22:43 arpi systemd[1]: Failed to start plant-sender.
But, after running sudo systemctl restart plants, the service starts up and everything is fine.
If it doesn't start on boot but does on systemctl restart, I'd be looking at whether /home/theo/Repos/plants-monitor/remote is mounted at that point.
There may be something automounting your home directory, or otherwise mounting it only when you log in.
If so, you could change the working directory to something that exists always, even if only a test.
Additionally, using journalctl -n 9999 -u plants will get you more log messages, so you can see why it's failing, rather than just seeing the "tried too many times, giving up" messages.
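If the working directory really is unavailable that early in boot, one option is to declare the dependency in the unit instead of moving the directory. A hedged sketch of the [Unit] additions, reusing the path from the question:
[Unit]
Description=plant-sender
# Wait for actual network connectivity, not just "network stack started"
Wants=network-online.target
After=network-online.target
# Order the service after whatever mount provides the working directory
# (only relevant if that path lives on a separately mounted filesystem)
RequiresMountsFor=/home/theo/Repos/plants-monitor/remote
journalctl -b -u plants (current boot only) also narrows the log output compared with -n 9999 when the unit has been failing across several boots.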