I have two services, A and B, installed by two different packages.
Service B depends on service A.
Both are disabled and stopped by default.
In order to get service B running on each boot, I enable it, then I start it:
systemctl enable B
systemctl start B
Since B depends on A, I expect A to be started, and it does get started! Yet A is not enabled. Is that expected behavior? It looks a bit odd to me.
Yes, it is the expected behavior.
The systemctl enable and systemctl disable operations configure auto-starting of a unit.
More precisely, these operations simply perform what is described in the [Install] section of a unit file (or the inverse of those actions). Most of the time, this amounts to adding an artificial dependency on the unit from multi-user.target or a similar system-wide target, and nothing more.
Hence, starting the unit manually or via other dependencies is completely unaffected by this. If you really want to prevent the unit from being started, either manually or via a dependency, run systemctl mask UNIT.
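For illustration, a typical [Install] section is nothing more than this (a minimal sketch; B.service stands in for your real unit):

[Install]
WantedBy=multi-user.target

Running systemctl enable B then just creates a symlink such as /etc/systemd/system/multi-user.target.wants/B.service, so that multi-user.target pulls B in at boot. A Requires= or Wants= dependency that B declares on A still starts A regardless of whether A is enabled, which is exactly what you are seeing.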
I have a problem with Docker Compose and build order. Below is my Dockerfile for starting my .NET application.
As you can see, as part of my build process I run some tests using "RUN dotnet test backend_test/backend_test.csproj".
These tests require a mongodb database to be present.
I try to solve this dependency with docker-compose and its "depends_on" feature, see below.
However, this doesn't seem to work: when I run "docker-compose up" I get the following:
The tests eventually timeout since there is no mongodb present.
Does depends_on actually affect build order at all, or does it only affect start order (i.e. builds everything, then proceeds to start it in the correct order)?
Is there another way of doing this? (I want tests to run as part of building my final app.)
Thanks in advance; let me know if you need extra information.
As you guessed, depends_on is for runtime order only, not build time - it just affects docker-compose up and docker-compose stop.
I highly recommend you make all the builds independent of each other. Perhaps you need to consider separate builder and runtime images here, and/or use a Docker-based CI (GitLab, Travis, CircleCI, etc.) to have these dependencies available for testing.
Note also that depends_on often disappoints people, as it just waits for the container's startup to finish, not the application's startup. So your DB / service / whatever may still be starting up when the container that depends on it begins using it, causing timeouts etc. This is why HEALTHCHECK now exists (with a similar healthcheck feature in Docker Compose).
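For the runtime-ordering part, a rough sketch of what the compose file could look like (the service names, image and the exact probe command are assumptions, not taken from your setup):

version: "2.1"
services:
  mongodb:
    image: mongo
    healthcheck:
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
  backend:
    build: .
    depends_on:
      mongodb:
        condition: service_healthy

With the healthcheck in place, condition: service_healthy makes docker-compose up wait until MongoDB actually answers before starting the backend container - but, as above, it still has no effect on the image build itself.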
I have a Yocto based OS on which I have everything installed to start the network.
Nevertheless, at each boot I need to run systemctl start networking to start it. Initially the service was even masked. I found out how to unmask it, but I can't find a way to start it automatically.
I don't know much about systemd, but networking.service is located in the generator.late folder. From what I understand, it is generated at boot time rather than installed as a regular unit file.
How can I enable it?
It depends on whether you want to enable the service only on one particular device. If so, it is simple:
systemctl enable networking
Append the --now flag if you also want to start the service immediately.
If you want to enable the service on all your devices (i.e. it will be automatically enabled in all images coming from your build), the best way is to extend the recipe, but please see below for other ways to handle the network. The process is described at NXP support, for example.
Some notes about networking.service itself: I assume that your networking.service comes from the init-ifupdown recipe. If so, is there any reason to handle network configuration using an old SysV init script on a system with systemd? The service is generated from the SysV init script by systemd-sysv-generator. I would therefore suggest trying other networking services such as systemd's native "systemd-networkd", "NetworkManager" or "connman". The best choice depends on the type of your embedded system. These services are much better integrated with systemd.
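If you go the systemd-networkd route, a minimal configuration is just one file in the image (a sketch only; the interface name eth0 and the use of DHCP are assumptions about your board):

# /etc/systemd/network/20-wired.network
[Match]
Name=eth0

[Network]
DHCP=yes

You would then enable systemd-networkd itself (systemctl enable systemd-networkd on the device, or the corresponding recipe change to do it in all images).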
Some more information on activating or enabling the services: https://unix.stackexchange.com/questions/302261/systemd-unit-activate-vs-enable
I do not seem to find a simple solution to the following problem:
I have a device listed in fstab that should get mounted at boot. But if I manually unmount/remove the device after boot and then present the device again later, systemd sees the device and automatically mounts it.
So how do I prevent the latter (as in pre-systemd behaviour)? I cannot use noauto in /etc/fstab, since that would disable mounting at boot, which I still want.
There are some ways to work around systemd for this problem, but I would like to see it solved using systemd itself.
After some digging, it seems that the systemd fstab generator creates device units and mount units. The generator seems to add implicit values to the generated device unit; one of them is a "Wants" on the mount unit, causing a dependency between the device and the mount. How can I influence or override the systemd generator so that it does not create this "Wants" dependency between the device and the mount?
systemctl show dev-mapper-test.device | grep -i wants
Wants=mnt-test.mount
But now the tricky part: even if you could override that "Wants", then starting at boot would also be disabled...
Thanks
You can write a systemd unit with Type=oneshot.
Type=oneshot: this is useful for scripts that do a single job and then exit.
Example:
[Unit]
Description=one_mount
After=network.target
[Service]
Type=oneshot
# Without RemainAfterExit the unit would go inactive as soon as ExecStart finishes,
# and the ExecStop umount would run immediately; keep the unit "active" instead.
RemainAfterExit=yes
ExecStart=/usr/bin/mount /dev/partition /path/to/point
ExecStop=/usr/bin/umount /path/to/point
[Install]
WantedBy=multi-user.target
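As a usage sketch (the file name here is just an example matching the Description above): save the unit as /etc/systemd/system/one_mount.service, remove the corresponding line from /etc/fstab so the generator no longer creates the automatic mount and device units, then run:

systemctl daemon-reload
systemctl enable --now one_mount.service

The device is then mounted once at boot (and whenever you start the unit yourself), but systemd no longer re-mounts it just because it reappears later.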
This is the first time I've used systemd and a bit unsure about something.
I've got a service that I've set up (for GeoServer running under Tomcat):
[Unit]
Description=Geoserver
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/local/geoserver/bin/startup-optis.sh
ExecStop=/usr/local/geoserver/bin/shutdown-optis.sh
User=geoserver
[Install]
WantedBy=multi-user.target
The startup script does an exec to run Java/Tomcat. Starting the service from the command line appears to work:
sudo systemctl start geoserver
However, the command does not return until I press Ctrl-C, which doesn't seem right to me. The Java process remains running afterwards, though, and functions normally. I'm reluctant to reboot the box to test this in case it causes problems during init; it's a remote machine, and it would be a pain to get someone to address it.
You need to set the correct Type= in the [Service] section:
[Service]
...
Type=simple
...
Type
Configures the process start-up type for this service unit. One of simple, forking, oneshot, dbus, notify or idle.
If set to simple (the default if neither Type= nor BusName=, but
ExecStart= are specified), it is expected that the process configured
with ExecStart= is the main process of the service. In this mode, if
the process offers functionality to other processes on the system, its
communication channels should be installed before the daemon is
started up (e.g. sockets set up by systemd, via socket activation), as
systemd will immediately proceed starting follow-up units.
If set to forking, it is expected that the process configured with
ExecStart= will call fork() as part of its start-up. The parent
process is expected to exit when start-up is complete and all
communication channels are set up. The child continues to run as the
main daemon process. This is the behavior of traditional UNIX daemons.
If this setting is used, it is recommended to also use the PIDFile=
option, so that systemd can identify the main process of the daemon.
systemd will proceed with starting follow-up units as soon as the
parent process exits.
Behavior of oneshot is similar to simple; however, it is expected that
the process has to exit before systemd starts follow-up units.
RemainAfterExit= is particularly useful for this type of service. This
is the implied default if neither Type= nor ExecStart= are specified.
Behavior of dbus is similar to simple; however, it is expected that
the daemon acquires a name on the D-Bus bus, as configured by
BusName=. systemd will proceed with starting follow-up units after the
D-Bus bus name has been acquired. Service units with this option
configured implicitly gain dependencies on the dbus.socket unit. This
type is the default if BusName= is specified.
Behavior of notify is similar to simple; however, it is expected that
the daemon sends a notification message via sd_notify(3) or an
equivalent call when it has finished starting up. systemd will proceed
with starting follow-up units after this notification message has been
sent. If this option is used, NotifyAccess= (see below) should be set
to open access to the notification socket provided by systemd. If
NotifyAccess= is not set, it will be implicitly set to main. Note that
currently Type=notify will not work if used in combination with
PrivateNetwork=yes.
Behavior of idle is very similar to simple; however, actual execution
of the service binary is delayed until all jobs are dispatched. This
may be used to avoid interleaving of output of shell services with the
status output on the console.
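After changing the Type= and saving the unit file, a quick way to verify the fix without rebooting (assuming the unit is the geoserver.service shown above):

sudo systemctl daemon-reload
sudo systemctl restart geoserver
systemctl status geoserver

With Type=simple, the start command returns immediately and the status output should show the Tomcat Java process as the unit's main PID.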
I understand the purpose of chef-client --daemonize, because it's a service that Chef Server can connect to and interact with.
But chef-solo is a command that simply brings the current system in line with its specifications and then is done.
So what is the point of chef-solo --daemonize, and what specifically does it do? For example, does it autodetect when the system falls out of line with spec? Does it do so via polling or tapping into filesystem events? How does it behave if you update the cookbooks and node files it depends on when it's already running?
You might also ask why chef-solo supports the --splay and --interval arguments.
Don't forget that chef-server is not the only source of data.
Configuration values can rely on a bunch of other sources (APIs, OHAI, DNS...).
The most classic one is OHAI - think of a cookbook that configures memcached. You would probably want to reserve some amount of RAM for the operating system and give the rest to memcached.
Available RAM can change when running inside a VM, even without rebooting it.
That might be a good reason to run chef-solo as a daemon with frequent chef-runs, like you're used to when using chef-client with a chef-server.
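To make the memcached example concrete, here is a rough sketch of the attribute logic such a cookbook might contain (illustrative only, not from any particular cookbook; the ['memcached']['memory'] attribute name is an assumption):

# Ohai reports total memory as a string such as "16384036kB"
total_mb = node['memory']['total'].to_i / 1024
os_reserve_mb = 512
node.default['memcached']['memory'] = total_mb - os_reserve_mb

Because node['memory']['total'] is re-collected by OHAI on every run, a daemonized chef-solo with a short interval picks up a changed VM memory allocation and reconverges memcached's configuration accordingly.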
As for your other questions:
Q: Does it autodetect when the system falls out of line with spec?
Does it do so via polling or tapping into filesystem events?
A: Chef doesn't respond to changes. Instead, it runs frequently and makes sure the current state is in sync with the desired state - which can be based on chef-server inventory, API calls, OHAI attributes, etc. The desired state is constructed from scratch every time you run Chef, at the compile stage when all the resources are generated. Read about it here
Q: How does it behave if you update the cookbooks and node files it depends on when it's already running?
A: Usually when running chef-solo, one uses the --json flag to specify a JSON file with node attributes and a run-list. When running chef-solo in --daemonize mode, the node attributes are read only for the first run. For the rest of the runs, it's as if you were running it without the --json flag. I couldn't figure out a way to make it work as if you were running it with --json all over again; however, you can use the --override-runlist option to at least make the run-list stick.
Note that the attributes you're specifying in your JSON won't make it past the first run. This is possibly a bug.
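For reference, a daemonized chef-solo invocation tying these flags together might look like this (the paths are placeholders; the -d/--daemonize, -i/--interval, -s/--splay and -j/--json-attributes options themselves are real chef-solo flags):

chef-solo --daemonize --interval 1800 --splay 30 \
  --config /etc/chef/solo.rb \
  --json-attributes /etc/chef/node.json

This converges roughly every 30 minutes (plus a random splay of up to 30 seconds), with the caveat described above that the JSON attributes are only honoured on the first run.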