I am using systemd to start an executable at boot time on a single-board computer (embedded) platform. I want this app to run in headless mode - no one logging in - and the user will control the input through a web browser.
I am using a TS-7800-V2 which runs Debian.
Here is my service file (webserver.service) in /etc/systemd/system:
[Unit]
Description=Run LWSWS webserver on startup
Wants=network-online.target
After=network.target network-online.target
[Service]
Type=simple
ExecStart=/libwebsockets/build/bin/lwsws
[Install]
WantedBy=multi-user.target
I have enabled and started the service, and checked the status:
root@ts7800-v2:~# systemctl status webserver.service
● webserver.service - Run LWSWS webserver on startup
Loaded: loaded (/etc/systemd/system/webserver.service; enabled; vendor preset
Active: active (running) since Mon 2017-12-11 11:23:39 PST; 11min ago
Main PID: 2408 (lwsws)
CGroup: /system.slice/webserver.service
└─2408 /libwebsockets/build/bin/lwsws
Dec 11 11:23:39 ts7800-v2 lwsws[2408]: lwsws[2416]: mounting callback://proto
Dec 11 11:23:39 ts7800-v2 lwsws[2408]: lwsws[2416]: Using non-SSL mode
Dec 11 11:23:39 ts7800-v2 lwsws[2408]: lwsws[2416]: created client ssl context f
Dec 11 11:23:39 ts7800-v2 lwsws[2408]: lwsws[2416]: Unable to find interface eth
Dec 11 11:23:39 ts7800-v2 lwsws[2408]: lwsws[2416]: init server failed
Dec 11 11:23:39 ts7800-v2 lwsws[2408]: lwsws[2416]: Failed to create vhost local
Dec 11 11:23:39 ts7800-v2 lwsws[2408]: lwsws[2416]: /etc/lwsws/conf.d/test-serve
Dec 11 11:23:39 ts7800-v2 lwsws[2408]: lwsws[2416]: Context creation failed
Dec 11 11:23:39 ts7800-v2 lwsws[2408]: GPS THREAD created successfully
Dec 11 11:23:39 ts7800-v2 lwsws[2408]: MOTOR CONTROL THREAD created successfully
So the service is active and running, but the eth0 connection is not being made, and as a result the web server (the libwebsockets web server, LWSWS) cannot start up properly.
My question is: how do I get Ethernet to start up at boot so that the web server can start and run?
In the dependencies for this service, networking shows up in the tree before the rest of the service startup:
root@ts7800-v2:/etc/systemd/system# systemctl list-dependencies webserver.service
● ├─system.slice
● ├─network-online.target
● │ └─networking.service
● └─sysinit.target
●   ├─dev-hugepages.mount
●   ├─dev-mqueue.mount
●   ├─kmod-static-nodes.service
●   ├─proc-sys-fs-binfmt_misc.automount
●   ├─sys-fs-fuse-connections.mount
●   ├─sys-kernel-config.mount
●   ├─sys-kernel-debug.mount
●   ├─systemd-ask-password-console.path
●   ├─systemd-binfmt.service
●   ├─systemd-hwdb-update.service
●   ├─systemd-journal-flush.service
●   ├─systemd-journald.service
●   ├─systemd-machine-id-commit.service
●   ├─systemd-modules-load.service
●   ├─systemd-random-seed.service
●   ├─systemd-sysctl.service
●   ├─systemd-timesyncd.service
●   ├─systemd-tmpfiles-setup-dev.service
Advice on how to get the network started before the web server starts would be appreciated. Thanks!
I think you are missing something to manage the network interface, such as NetworkManager. Since you are already using systemd, maybe systemd-networkd is a solution for you. Please read the systemd-networkd page on the Arch Linux Wiki for reference.
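As a rough sketch, assuming the interface is named eth0 and DHCP is acceptable, a minimal /etc/systemd/network/20-wired.network (the file name is arbitrary) could look like this:
[Match]
Name=eth0
[Network]
DHCP=yes
You would then enable the manager together with its wait-online helper, which is what makes network-online.target actually wait for the link to come up:
sudo systemctl enable --now systemd-networkd
sudo systemctl enable --now systemd-networkd-wait-online.service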
I had to get the Ethernet connection to auto-start when the app is in headless mode. Here are the contents (cat) of the /etc/network/interfaces file:
root@ts7800-v2:/etc/network# cat interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
allow-hotplug eth0
iface eth0 inet dhcp
I added the line "auto eth0" and now Ethernet is brought up before my webserver.service is started.
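If you want to apply the change without rebooting, and assuming the classic ifupdown tooling (which is what /etc/network/interfaces belongs to), the interface can be brought up by hand, or the networking service restarted:
sudo ifup eth0
sudo systemctl restart networking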
Related
I am trying to connect to TDengine via its RESTful API, but port 6041 is not being listened on.
Following is more detailed info.
systemctl status taosd
● taosd.service - TDengine server service
Loaded: loaded (/etc/systemd/system/taosd.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2021-10-20 15:08:33 CST; 48min ago
Process: 16246 ExecStartPre=/usr/local/taos/bin/startPre.sh (code=exited, status=0/SUCCESS)
Main PID: 16257 (taosd)
Tasks: 57
Memory: 30.1M
CGroup: /system.slice/taosd.service
└─16257 /usr/bin/taosd
Oct 20 15:08:33 ecs-29b3 systemd[1]: Starting TDengine server service...
Oct 20 15:08:33 ecs-29b3 systemd[1]: Started TDengine server service.
Oct 20 15:08:33 ecs-29b3 TDengine:[16257]: Starting TDengine service...
Oct 20 15:08:33 ecs-29b3 TDengine:[16257]: Started TDengine service successfully.
netstat -antp|grep 6030
tcp 0 0 0.0.0.0:6030 0.0.0.0:* LISTEN 16257/taosd
netstat -antp|grep 6041
Any suggestions?
You can check whether taosadapter is running with ps -e | grep taosadapter. If taosadapter is not running, you should start it.
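As a sketch, assuming your TDengine release ships taosAdapter with a systemd unit named taosadapter (recent packages do; on older installs you may have to launch the taosadapter binary directly), the check-and-start sequence would look roughly like this:
ps -e | grep taosadapter
sudo systemctl enable --now taosadapter
netstat -antp | grep 6041
taosAdapter is the component that serves the RESTful interface on port 6041, which is why the native port 6030 is listening while 6041 is not.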
I am trying to start a service that runs gunicorn as the backend server for Flask, and it is not working. nginx, running as the frontend server for React, works fine.
Server:
Virtualization: vmware
Operating System: Red Hat Enterprise Linux 8.4 (Ootpa)
CPE OS Name: cpe:/o:redhat:enterprise_linux:8.4:GA
Kernel: Linux 4.18.0-305.3.1.el8_4.x86_64
Architecture: x86-64
Service file in /etc/systemd/system/myservice.service:
[Unit]
Description="Description"
After=network.target
[Service]
User=root
Group=root
WorkingDirectory=/home/project/app/api
ExecStart=/home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app
Restart=always
[Install]
WantedBy=multi-user.target
/app/api:
-rwxr-xr-x. 1 root root 2018 Jun 9 20:06 api.py
drwxrwxr-x+ 5 root root 100 Jun 7 10:11 venv
Error message:
● myservice.service - "Description"
Loaded: loaded (/etc/systemd/system/myservice.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2021-06-10 19:01:01 CEST; 5s ago
Process: 18307 ExecStart=/home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app (code=exited, status=203/EXEC)
Main PID: 18307 (code=exited, status=203/EXEC)
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Service RestartSec=100ms expired, scheduling restart.
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Scheduled restart job, restart counter is at 5.
Jun 10 19:01:01 xxxx systemd[1]: Stopped "Description".
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Start request repeated too quickly.
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Failed with result 'exit-code'.
Jun 10 19:01:01 xxxx systemd[1]: Failed to start "Description".
Tried, not working:
Adding Environment="PATH=/home/project/app/api/venv/bin" under [Service]
$ systemctl reset-failed myservice.service
$ systemctl daemon-reload
Reboot, of course.
Tried, working:
Running (as root) /home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app while in the /app/api directory
Does anyone know how to fix this problem?
Typically enough, I figured it out shortly after posting this issue.
SELinux is messing with permissions for files and directories, so for anyone experiencing the same issue, make sure to test with the following alterations (as root):
$ setsebool -P httpd_can_network_connect on
$ chcon -Rt httpd_sys_content_t /path/to/your/Flask/dir
In my case: $ chcon -Rt httpd_sys_content_t /home/project/app/api
While this is NOT a permanent fix, it's worth a try. Check out the SELinux docs for more permanent solutions.
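As a sketch of a more permanent route: setsebool -P is already persistent, but chcon does not survive a filesystem relabel. The usual persistent equivalent is to record the file context with semanage and reapply it, and any remaining denials can be turned into a local policy module with audit2allow (the module name gunicorn_local is just an example, and the grep pattern assumes the denials are logged under the gunicorn process name):
semanage fcontext -a -t httpd_sys_content_t "/home/project/app/api(/.*)?"
restorecon -Rv /home/project/app/api
grep gunicorn /var/log/audit/audit.log | audit2allow -M gunicorn_local
semodule -i gunicorn_local.pp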
I want to register a Python script as a daemon service, executed at system startup and running continuously in the background. The script opens network sockets and a local log file, and starts a number of threads. The script is well-formed and runs without any compilation or runtime issues.
I used the service file below for registration:
[Unit]
Description=ModBus2KNX Gateway Daemon
After=multi-user.target
[Service]
Type=simple
ExecStart=/usr/bin/python3 /usr/bin/ModBusDaemon.py
[Install]
WantedBy=multi-user.target
Starting the service results in the error below:
● ModBusDaemon.service - ModBus2KNX Gateway Daemon
Loaded: loaded (/lib/systemd/system/ModBusDaemon.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2021-01-04 21:46:29 CET; 6min ago
Process: 1390 ExecStart=/usr/bin/python3 /usr/bin/ModBusDaemon.py (code=exited, status=1/FAILURE)
Main PID: 1390 (code=exited, status=1/FAILURE)
Jan 04 21:46:29 raspberrypi systemd[1]: Started ModBus2KNX Gateway Daemon.
Jan 04 21:46:29 raspberrypi systemd[1]: ModBusDaemon.service: Main process exited, code=exited, status=1/FAILURE
Jan 04 21:46:29 raspberrypi systemd[1]: ModBusDaemon.service: Failed with result 'exit-code'.
Appreciate your support!
Related posts brought me to the resolution for my issue. "Ubuntu systemd custom service failing with python script" refers to the same issue. The proposed solution of adding WorkingDirectory to the [Service] section resolved the issue for me, though I could not find systemd documentation that adequately outlines this implicit dependency.
As MBizm said, you must also add WorkingDirectory (see the sketch after the commands below).
After that, you must also run these commands:
sudo systemctl daemon-reload
sudo systemctl enable your_service.service
sudo systemctl start your_service.service
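A minimal sketch of the corrected unit, assuming the script resolves its log file and other paths relative to a directory such as /opt/modbusdaemon (the path is hypothetical; point WorkingDirectory at whatever directory your script actually expects):
[Unit]
Description=ModBus2KNX Gateway Daemon
After=multi-user.target
[Service]
Type=simple
# Hypothetical working directory; adjust to where the script's relative paths resolve
WorkingDirectory=/opt/modbusdaemon
ExecStart=/usr/bin/python3 /usr/bin/ModBusDaemon.py
[Install]
WantedBy=multi-user.target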
I have two Linux servers running on the same local network.
One of them is running a MongoDB server.
I am trying to connect to the first server's MongoDB from the second server.
I have added port 27017 to the first server's firewall rules.
I have modified the /etc/mongo.conf file as follows:
bind_ip=127.0.0.1,10.0.0.202
That did not work. I have also tried the following variant:
bind_ip=[127.0.0.1,10.0.0.202]
That did not work as well.
After modifying the file I try to restart the mongod service, but the service won't restart. It will only restart with the original line: bind_ip=127.0.0.1.
Here is the error once I restart the service and check the status:
mongod.service - SYSV: Mongo is a scalable, document-oriented database.
Loaded: loaded (/etc/rc.d/init.d/mongod)
Active: failed (Result: exit-code) since Sun 2016-11-13 11:32:15 IST; 4min 58s ago
Docs: man:systemd-sysv-generator(8)
Process: 37572 ExecStop=/etc/rc.d/init.d/mongod stop (code=exited, status=0/SUCCESS)
Process: 37546 ExecStart=/etc/rc.d/init.d/mongod start (code=exited, status=0/SUCCESS)
Main PID: 37559 (code=exited, status=48)
Nov 13 11:32:15 localhost.localdomain systemd[1]: Starting SYSV: Mongo is a scalable, document-oriented database....
Nov 13 11:32:15 localhost.localdomain runuser[37555]: pam_unix(runuser:session): session opened for user mongod...d=0)
Nov 13 11:32:15 localhost.localdomain mongod[37546]: Starting mongod: [ OK ]
Nov 13 11:32:15 localhost.localdomain systemd[1]: Started SYSV: Mongo is a scalable, document-oriented database..
Nov 13 11:32:15 localhost.localdomain systemd[1]: mongod.service: main process exited, code=exited, status=48/n/a
Nov 13 11:32:15 localhost.localdomain mongod[37572]: Stopping mongod: [FAILED]
Nov 13 11:32:15 localhost.localdomain systemd[1]: Unit mongod.service entered failed state.
Nov 13 11:32:15 localhost.localdomain systemd[1]: mongod.service failed.
What am I doing wrong? How do I fix it?
Any help would be appreciated. Thank you.
Problem solved.
Apparently, when adding an IP to the bind_ip=127.0.0.1 line, the additional IP should be the one belonging to that same machine.
If the IP of the machine running the mongo server is 10.0.0.201, then the line should be changed to bind_ip=127.0.0.1,10.0.0.201. This way other machines on the same network will be able to connect to its mongo server.
Set bindIp to 0.0.0.0 to let the MongoDB server bind to all interfaces:
# network interfaces
net:
  port: 27017
  # bindIp: 127.0.0.1
  bindIp: 0.0.0.0
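After editing the config, restart the service so the new bind address takes effect; keep in mind that 0.0.0.0 exposes MongoDB on every interface, so make sure your firewall only allows trusted hosts:
sudo systemctl restart mongod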
For someone who is using Amazon AWS, for instance, bind the local (private) IP and not the public IP. Then allow port 27017 in your Security Group for incoming connections.
/etc/mongod.conf
# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1,172.33.1.10
Do not forget to secure it by creating a user (see the note on enabling authorization after the shell session):
> show dbs
admin 135 kB
config 111 kB
local 73.7 kB
> db.createUser({
user: "LetMeIn",
pwd: "MyStrongPsswd",
roles: [{role: "userAdminAnyDatabase" , db: "admin"}]
});
{ ok: 1 }
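Creating the user only takes effect once access control is turned on; a sketch of the corresponding /etc/mongod.conf addition (restart mongod afterwards):
security:
  authorization: enabled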
I am trying to change the default port of OpenLDAP (I am not so experienced with OpenLDAP, so I might be doing something incorrectly).
Currently I am installing it through the yum package manager on CentOS 7.1.1503 as follows:
yum install openldap-servers
After installing 'openldap-servers' I can start the OpenLDAP server by invoking service slapd start.
However, when I try to change the port by editing /etc/sysconfig/slapd, for instance by changing SLAPD_URLS to the following:
# OpenLDAP server configuration
# see 'man slapd' for additional information
# Where the server will run (-h option)
# - ldapi:/// is required for on-the-fly configuration using client tools
# (use SASL with EXTERNAL mechanism for authentication)
# - default: ldapi:/// ldap:///
# - example: ldapi:/// ldap://127.0.0.1/ ldap://10.0.0.1:1389/ ldaps:///
SLAPD_URLS="ldapi:/// ldap://127.0.0.1:3421/"
# Any custom options
#SLAPD_OPTIONS=""
# Keytab location for GSSAPI Kerberos authentication
#KRB5_KTNAME="FILE:/etc/openldap/ldap.keytab"
(see SLAPD_URLS="ldapi:/// ldap://127.0.0.1:3421/"), it fails to start:
service slapd start
Redirecting to /bin/systemctl start slapd.service
Job for slapd.service failed. See 'systemctl status slapd.service' and 'journalctl -xn' for details.
service slapd status
Redirecting to /bin/systemctl status slapd.service
slapd.service - OpenLDAP Server Daemon
Loaded: loaded (/usr/lib/systemd/system/slapd.service; disabled)
Active: failed (Result: exit-code) since Fri 2015-07-31 07:49:06 EDT; 10s ago
Docs: man:slapd
man:slapd-config
man:slapd-hdb
man:slapd-mdb
file:///usr/share/doc/openldap-servers/guide.html
Process: 41704 ExecStart=/usr/sbin/slapd -u ldap -h ${SLAPD_URLS} $SLAPD_OPTIONS (code=exited, status=1/FAILURE)
Process: 41675 ExecStartPre=/usr/libexec/openldap/check-config.sh (code=exited, status=0/SUCCESS)
Main PID: 34363 (code=exited, status=0/SUCCESS)
Jul 31 07:49:06 osboxes runuser[41691]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes runuser[41693]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes runuser[41695]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes runuser[41697]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes runuser[41699]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes runuser[41701]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes slapd[41704]: @(#) $OpenLDAP: slapd 2.4.39 (Mar 6 2015 04:35:49) $
mockbuild@worker1.bsys.centos.org:/builddir/build/BUILD/openldap-2.4.39/openldap-2.4.39/servers/slapd
Jul 31 07:49:06 osboxes systemd[1]: slapd.service: control process exited, code=exited status=1
Jul 31 07:49:06 osboxes systemd[1]: Failed to start OpenLDAP Server Daemon.
Jul 31 07:49:06 osboxes systemd[1]: Unit slapd.service entered failed state.
P.S. I also disabled firewalld.
The solution was provided when I ran journalctl -xn, which basically says:
SELinux is preventing /usr/sbin/slapd from name_bind access on the tcp_socket port 9312.
***** Plugin bind_ports (92.2 confidence) suggests ************************
If you want to allow /usr/sbin/slapd to bind to network port 9312
Then you need to modify the port type.
Do
# semanage port -a -t ldap_port_t -p tcp 9312
***** Plugin catchall_boolean (7.83 confidence) suggests ******************
If you want to allow nis to enabled
Then you must tell SELinux about this by enabling the 'nis_enabled' boolean.
You can read 'None' man page for more details.
Do
setsebool -P nis_enabled 1
***** Plugin catchall (1.41 confidence) suggests **************************
If you believe that slapd should be allowed name_bind access on the port 9312 tcp_socket by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# grep slapd /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp
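For the port actually used in the question (3421), the analogous commands would presumably be:
semanage port -a -t ldap_port_t -p tcp 3421
systemctl restart slapd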