systemd (new style) daemon in C/C++

On the Internet there are multiple code examples explaining how to write an old-style daemon in pure C or even in C++.
However, there is no equivalent for a "new style" (systemd) daemon.
Is there a basic "simple daemon" code I can re-use to write a "new style" systemd daemon in C/C++?

I have many examples in my Snap! C++ environment. Look for the *.service files under the snapwebsites/debian directory to find the projects that act as daemons.
More or less, you write a tool that opens a port to listen on and handles the messages it receives; the rest is taken care of by systemd (a minimal C sketch follows below).
Here is an example used by the snapfirewall daemon:
# Documentation available at:
# https://www.freedesktop.org/software/systemd/man/systemd.service.html
[Unit]
Description=Snap! Websites snapfirewall daemon
After=snapbase.service snapcommunicator.service snapdbproxy.service
Before=fail2ban.service
[Service]
Type=simple
WorkingDirectory=~
ProtectHome=true
# snapfirewall needs to run iplock, which is setuid root, so we can't
# set this parameter to true
NoNewPrivileges=false
ExecStart=/usr/sbin/snapfirewall
ExecStop=/usr/bin/snapstop --service "$MAINPID"
Restart=on-failure
RestartSec=1min
User=snapwebsites
Group=snapwebsites
LimitNPROC=1000
# For developers and administrators to get console output
#StandardOutput=tty
#StandardError=tty
#TTYPath=/dev/console
# Enter a size to get a core dump in case of a crash
#LimitCORE=10G
[Install]
WantedBy=multi-user.target
# vim: syntax=dosini
As mentioned at the top, the service unit documentation is found on freedesktop.org.
In this one, I have a NoNewPrivileges=false parameter. This means setuid binaries will work as expected. Without it, my tool could not add or remove firewall rules, since snapfirewall runs as the snapwebsites user, which cannot run iptables at all.
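On the C/C++ side there is not much special code to write, because a "new style" daemon is just an ordinary foreground program. Here is a minimal sketch (not taken from the Snap! sources; the message loop is a placeholder for your own socket handling):
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t g_running = 1;

static void on_term(int sig)
{
    (void)sig;
    g_running = 0;
}

int main(void)
{
    /* No fork(), no setsid(), no PID file: with Type=simple, systemd
     * runs the process in the foreground and handles everything an
     * old-style daemon had to do by hand. */
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_term;
    sigaction(SIGTERM, &sa, NULL);  /* sent by systemctl stop */
    sigaction(SIGINT, &sa, NULL);   /* Ctrl+C when testing by hand */

    while (g_running) {
        /* Placeholder: open your port and handle messages here. */
        printf("daemon alive\n");
        fflush(stdout);             /* stdout/stderr end up in the journal */
        sleep(60);                  /* returns early when SIGTERM arrives */
    }

    return EXIT_SUCCESS;            /* clean exit; Restart=on-failure won't retrigger */
}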
That's pretty much all there is to it.
Note: The advantage of using a debian directory and building your own package is that it will automatically generate all the necessary scripts to start/enable/stop your service as expected. The only trick on this one is that the debian/rules must include the --with systemd command line option. But outside of that, it's going to be a breeze. (Update: on newer systems, at least Ubuntu 22.04+, the --with systemd is the default and as such is not required anymore)
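For reference, on those older systems the relevant part of debian/rules is just the usual debhelper boilerplate plus that option:
#!/usr/bin/make -f
%:
	dh $@ --with systemd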

Related

DBus how to start service

I am curious how to start my own service for DBus.
On the official site I found a lot of information about working with DBus services from the client's point of view, but not enough about how to start and develop a service:
1) Where should the interface file ServiceName.xml be located?
2) Where should the service file ServiceName.service be located?
3) How do I launch the service manually, not at system start?
Can anybody help me or provide some useful links?
Make a service that is started by the service manager of the OS (init, systemd, etc.). In that program, instantiate the server-side object using the dbus library.
Normally you'll configure the service to start on boot, but with systemd it's also possible to configure it to start when something connects to a specific socket or when something tries to use a specific device object. These are called 'socket activation' and 'dbus activation' (see the current systemd docs).
If you want to start the service manually, run systemctl disable <service-name> to disable starting on boot. To start the service manually: systemctl start <service-name>.
The *.xml files aren't obligatory. Maybe look into other packages to see where they put these files.
The *.service files should be in one of the usual systemd locations (see the systemd docs), such as /usr/lib/systemd/system.
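For D-Bus activation specifically, the bus daemon can launch your service on first use of its well-known name via a small activation file. A minimal sketch for the session bus, where the name and binary path are hypothetical:
# /usr/share/dbus-1/services/com.example.ServiceName.service
[D-BUS Service]
Name=com.example.ServiceName
Exec=/usr/bin/servicename-daemon
Session-bus files go in /usr/share/dbus-1/services/, system-bus files in /usr/share/dbus-1/system-services/ (the latter additionally need a User= line). With this in place, the first call to the name starts the service, so it doesn't need to start at boot at all.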

Raspberry Pi script boot order

There are three common ways to run a script at boot on the Raspberry Pi: modifying /etc/rc.local, using the cron daemon, and putting a script that automatically runs on boot in /etc/init.d.
I want to know in which order the methods listed above are executed.
The point of the question is that I'm trying to run wvdial with an Alcatel X600D at boot, which is as simple as adding these lines to /etc/network/interfaces:
auto ppp0
iface ppp0 inet wvdial
But the problem is that the modem needs to receive the PIN before wvdial is called. For that, I need to pass the PIN to the modem before the system raises the ppp0 connection.
Regards.
They execute in this order:
1) Scripts in /etc/init.d
2) Whatever is in /etc/rc.local
3) Your cron daemon command
Proof:
Scripts in /etc/init.d are run according to their priority and dependencies (look within the files in /etc/init.d and in the runlevel directories /etc/rc*.d).
Running
cat /etc/rc.local
yields:
# This script is executed at the end of each multiuser runlevel.
Cron scripts are executed whenever the timing pattern specified in them is reached, which is independent of the boot order, so a script in cron probably would not make much sense.
Also have a look at https://wiki.debian.org/Modem/3G, it might be possible to do what you're trying to achieve without coding your own script.
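If you stay with /etc/network/interfaces, another option is ifupdown's pre-up hook, which runs a command just before the interface is brought up, which is exactly the window you need for the PIN. A sketch, with a hypothetical script path:
auto ppp0
iface ppp0 inet wvdial
    pre-up /usr/local/sbin/send-pin-to-modem.sh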

Divolte-collector with MAPR, Storm, Kafka and Cassandra

I am not sure if I can get help for this on here, but I thought it was worth a try.
I have 3 node cluster on AWS, I am running MAPR M3 , I installed Storm, Kafka and Divolte-collector and Cassandra. I would like try some of the clickstream examples and I am running into an issue with the tcp-consumer example. Also being quite new to java and distributed processing I have some clarification questions. Again I am not quite sure where to post this because I feel like this is divolte-collector specific and I also have some gaps in my understanding of the javadoc concept and the building and running of jar files; but I figured someone could point me to some resources or help with some clarifications. I can't get the json string to appear in the console running netcat socket listening for clicks:
Divolte tcp-kafka-consumer example
Everything works until the netcat part (step 7), and my knowledge gap is with step 6.
Step 1: install and configure Divolte Collector
The install works and the hello-world click collection is promising :-)
Step 2: download, unpack and run Kafka
# In one terminal session
cd kafka_2.10-0.8.1.1/bin
./zookeeper-server-start.sh ../config/zookeeper.properties
# Leave Zookeeper running and in another terminal session, do:
cd kafka_2.10-0.8.1.1/bin
./kafka-server-start.sh ../config/server.properties
No errors, plus I tested the Kafka examples, so this seems to be working as well.
Step 3: start Divolte Collector
Go into the bin directory of your installation and run:
cd divolte-collector-0.2/bin
./divolte-collector
Step 3 goes without a hitch; I can load the default divolte-collector test page.
Step 4: host your Javadoc files
Set up an HTTP server that serves the Javadoc files that you generated or downloaded for the examples. If you have Python installed, you can use this:
cd <your-javadoc-directory>
python -m SimpleHTTPServer
OK, so I can reach the Javadoc pages.
Step 5: listen on TCP port 1234
nc -kl 1234
Note: when using netcat (nc) as TCP server, make sure that you configure the Kafka consumer to use only 1 thread, because nc won't handle multiple incoming connections.
I tested netcat by opening the port and sending messages, so I figured I don't have any port issues on AWS.
Step 6: run the example
cd divolte-examples/tcp-kafka-consumer
mvn clean package
java -jar target/tcp-kafka-consumer-*-jar-with-dependencies.jar
Note: for this to work, you need to have the avro-schema project installed into your local Maven repository.
I installed the avro-schema with mvn clean install in the avro project that comes with the examples, as per the instructions here.
Step 7: click around and check that you see events being flushed to the console where you run netcat
When you click around the Javadoc pages, your console should show events in JSON format similar to this:
I don't see the clicks in my netcat window :(
Investigating the issue, I viewed the console and network tabs using Chrome developer tools; it seems Divolte is running, but I am not sure how to dig further. This is the console view. Any ideas or pointers?
Thanks anyways
Initializing Divolte.
divolte.js:140 Divolte base URL detected http://ec2-x-x-x-x.us-west-x.compute.amazonaws.com:8290/
divolte.js:280 Divolte party/session/pageview identifiers ["0:i6i3g0jy:nxGMDVdU9~f1wF3RGqwmCKKICn4d1Sb9", "0:i6qx4rmi:IXc1i6Qcr17pespL5lIlQZql956XOqzk", "0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh"]
divolte.js:307 Module initialized. Object {partyId: "0:i6i3g0jy:nxGMDVdU9~f1wF3RGqwmCKKICn4d1Sb9", sessionId: "0:i6qx4rmi:IXc1i6Qcr17pespL5lIlQZql956XOqzk", pageViewId: "0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh", isNewPartyId: false, isFirstInSession: false…}
divolte.js:21 Signalling event: pageView 0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh0
allclasses-frame.html:9 GET http://ec2-x-x-x-x.us-west-x.compute.amazonaws.com:8000/resources/fonts/dejavu.css
overview-summary.html:200 GET http://localhost:8290/divolte.js net::ERR_CONNECTION_REFUSED
(Intro: I work on Divolte Collector)
It seems that you are running the example on an AWS instance somewhere. If you are using the pre-packaged JavaDoc files that come with the examples, they have hard-coded the divolte location as http://localhost:8290/divolte.js. So if you are running somewhere other than localhost, you should probably create your own JavaDoc for the example, using the correct hostname for the Divolte Collector server.
You can do so using this command. Be sure to run it from the directory where your source tree is rooted. And of course change localhost to the hostname where you are running the collector.
javadoc -d YOUR_OUTPUT_DIRECTORY \
-bottom '<script src="//localhost:8290/divolte.js" defer async></script>' \
-subpackages .
As an alternative, you could also just try to run the examples locally first (possibly in a virtual machine, if you are on a Windows machine).
It doesn't seem there is anything MapR-specific about the issue that you are seeing so far. The Kafka-based examples and pipeline should work in any environment that has the required components installed. This doesn't touch MapR-FS or anything else MapR-specific. Writing to the distributed filesystem is another story.
We don't compile Divolte Collector against MapR Hadoop currently, but incidentally I have given it a run on the MapR sandbox VM. When installing from the RPM distribution, create a /etc/divolte/divolte-env.sh with the following env var setting:
HADOOP_CONF_DIR=/usr/share/divolte/lib/guava-18.0.jar:/usr/share/divolte/lib/avro-1.7.7.jar:$(hadoop classpath)
Obviously this is a bit of a hack to get around classpath peculiarities and we hope to provide a distribution compiled against MapR that works out of the box in the future.
Also, you need Java 8 to run Divolte. If you install this from the Oracle RPM, add the proper JAVA_HOME to divolte-env.sh as well, e.g.:
JAVA_HOME=/usr/java/jdk1.8.0_31
With these settings I'm able to run the server, collect Avro files on MapR-FS, create an external Hive table on those files and run a query.

pactl called from systemd service always reports "pa_context_connect() failed connection refused"

I've set up a systemd service file to perform some pactl operations at system startup for a test process. While the commands work fine when performed from a terminal, I always get "pa_context_connect() failed connection refused" when running the same script from the systemd service by starting the service. I'm also using the 'User=' directive in the service file to ensure that the auto-login user matches the user used to run the service commands.
I've read that this is somehow related to the pulseaudio session not being valid in the environmentless context of the systemd service but I haven't been able to figure that out further.
Although it might be a bit late for whatever project you might have been working on, here's what I found out.
The regular systemd instance, PID 1, indeed cannot access the environment variables of the current user when launching a service. Since pactl relies on those variables to find which instance of pulseaudio it needs to connect to, it is unable to do so when launched through a service. I'm sure there's a fairly dirty workaround for this, but I found something better.
Most systems have a second instance of systemd running in userspace (accessible through systemctl --user while not connected as root). This instance indeed can access all the userspace environment variables and I found that pactl doesn't return any errors when being called either directly or through a script.
All you need to do is put your services in either /usr/lib/systemd/user/, /etc/systemd/user/, or ~/.config/systemd/user/, remove the User= directive from your service file and run systemctl --user daemon-reload as a regular user to make sure they've been detected.
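As a sketch, assuming a oneshot unit is enough for your pactl calls (the unit name and sink name below are hypothetical):
# ~/.config/systemd/user/pactl-setup.service
[Unit]
Description=PulseAudio setup at login

[Service]
Type=oneshot
# hypothetical sink name; list yours with: pactl list short sinks
ExecStart=/usr/bin/pactl set-default-sink alsa_output.analog-stereo

[Install]
WantedBy=default.target
Enable it with systemctl --user enable --now pactl-setup.service.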

Is there a way to automatically reload Supervisor processes?

I have a dev server which I often push code changes to over Git. After each push, I need to manually log into the server and restart the supervisor processes.
Is there a way to have Supervisor monitor a filesystem directory for changes and reload the process(es) on changes?
You should be able to use an Event Listener which monitors the filesystem (with perhaps watchdog) and emits a restart using the XML-RPC API. Check out the memmon listener from the superlance package for inspiration. It wouldn't need to be that complicated. And since the watchdog would call your restart routine you don't need to read the events using childutils.listener.wait.
Alternatively, git hooks might do the trick if the permissions are correct for the supervisord API to be accessed (socket permissions, HTTP passwords). A simpler but less-secure approach.
A simpler and even less-secure approach would be to allow yourself to issue a supervisorctl restart. The running user has to match your push user (or git, or www, depending on how you have it set up). Lots of ways to have it go wrong security-wise, but for development it might do fine.
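For instance, a bare-bones server-side hook could be as small as this (myapp is a hypothetical program name):
#!/bin/sh
# .git/hooks/post-receive on the dev server
# restart the supervised [program:myapp] process after each push
supervisorctl restart myapp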
Related:
Supervisord: is there any way to touch-reload a child?
I also didn't find any solution, so I tried to make my own.
Here it is.
You can install the package by this command:
pip install git+https://github.com/stavinsky/supervisord-touch-reload.git
(I will add it to PyPI after adding some tests.)
An example of setting up Supervisor is located in the examples folder on GitHub. Documentation will come very soon, I believe.
Basically, all you need to do to start using this module is add an event listener with a command like:
python -m touch_reload --socket unix:///tmp/supervisor.sock --file <path/to file file> --program <program name>
where --file is the file to monitor (with an absolute path, or a path relative to the working directory), --socket is the socket from the supervisorctl section, and --program is the program name from the [program:<name>] section definition.
Also available are --username and --password, which you can use if you have a custom Supervisor configuration.
While not a solution which uses supervisor, I typically solve this problem within the supervised app. For instance, add the --reload flag to gunicorn and it will reload whenever your app changes.
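In supervisord.conf that looks something like this (the module path is a hypothetical example):
[program:myapp]
command=gunicorn myapp.wsgi:application --reload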
I had the same problem and created Superfsmon which can do what you want: https://github.com/timakro/superfsmon
pip install superfsmon
Here's a simple example from the README:
To restart your celery workers on changes in the /app/devops directory, your supervisord.conf could look like this:
[program:celery]
command=celery -A devops.celery worker --loglevel=INFO --concurrency=10
[program:superfsmon]
command=superfsmon /app/devops celery
Here is a one-liner solution with inotify-tools:
apt-get install -y inotify-tools
while true; do inotifywait -r src/ && service supervisor restart; done