At what point in the boot process are systemd generators run?

The systemd.generator man page says that generators are run very early at bootup and that they are all run at the same time. At what point in the bootup are they actually run?
As I understand it, CoreOS's Ignition is implemented as a generator which runs after the initramfs is mounted but before pivoting to the root filesystem. Is this a CoreOS-specific thing, or is it common to any OS using systemd as init?

At what point in the bootup are they actually run?
They run every time the systemd manager (PID 1) is started: https://github.com/systemd/systemd/blob/v235/src/core/manager.c#L1333
In practice, this means either as one of the very first steps when PID 1 is executed, or after a daemon-reload. The latter also covers the transition between the initramfs and the real root filesystem.
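For context, a generator is just an executable that systemd finds in directories like /usr/lib/systemd/system-generators/ and runs with three output directory arguments; it is expected to write unit files or symlinks there rather than do any slow work. A minimal sketch of one, where the service name and kernel parameter are hypothetical:

#!/bin/sh
# Hypothetical /etc/systemd/system-generators/foo-generator.
# systemd passes three directories (normal, early, and late priority);
# we only use the first one here.
normal_dir="$1"
# Enable the (assumed) foo.service only when foo=1 is on the kernel command line.
if grep -qw 'foo=1' /proc/cmdline; then
    mkdir -p "$normal_dir/multi-user.target.wants"
    ln -s /usr/lib/systemd/system/foo.service \
        "$normal_dir/multi-user.target.wants/foo.service"
fi
exit 0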
ignition is implemented as a generator
Ignition is not implemented as a generator, but as a first-boot initramfs service. If you read any documentation page stating that Ignition is a systemd generator, please report a bug against it, as that is incorrect.
is this a CoreOS specific thing or is this common to any OS using systemd init?
Ignition is a CoreOS-specific component. It is open source and can be ported to any systemd-based distribution, but I am not aware of any other distribution using it. See https://github.com/coreos/ignition
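To make the distinction concrete, a first-boot initramfs service is wired up with ordinary unit directives rather than generator logic. A rough, illustrative sketch of the general shape such a unit can take (this is not Ignition's actual unit file; every name here is hypothetical):

# hypothetical-first-boot.service (illustrative only)
[Unit]
Description=Example first-boot provisioning service
ConditionFirstBoot=yes
DefaultDependencies=no
Before=initrd-root-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/example-provisioner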

Related

Why do we need OpenDDS run_test.pl?

I am running the OpenDDS MPC-based example stockQuoter. I deleted run_test.pl, and the project still builds and runs properly. Why do we need this Perl script?
You don't really need it, and you're free to start the programs directly. All examples and tests in OpenDDS have a file called run_test.pl for testing purposes. Among other functions, these scripts define which programs get called with which arguments for a given test scenario, and they are responsible for killing the programs if they get stuck.
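For example, for the stockQuoter case you can typically launch the two sides of the example directly from the shell; the binary names and configuration file below are assumptions, so check what your build actually produces:

# Start publisher and subscriber by hand (names and config file assumed).
./publisher -DCPSConfigFile dds_tcp_conf.ini &
./subscriber -DCPSConfigFile dds_tcp_conf.ini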

Running Beats with docker or without?

Since Beats are data shippers, meant to ship files, metrics, and so on from a machine, is it a good approach to run Beats with Docker, so that Beats itself is containerized?
I currently want to ship log files from an application, and if I install Filebeat with Docker, I somehow have to make the logs available to the container. Is it a good approach to do this with Docker, or should I instead install, configure, and run Filebeat on the machine without a container?
I don't think there is a strict rule, but I would use the same approach for the Beats as for your application: either containerize both or neither. That will also help you keep the expectations and setups aligned, for example logging to a file versus stdout, and how to collect the output.
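If you do containerize Filebeat, the usual pattern is to bind-mount the application's log directory (and your config) into the container; the image tag and the paths below are assumptions:

# Give the containerized Filebeat read access to the host's log directory.
docker run -d --name filebeat \
  -v /var/log/myapp:/var/log/myapp:ro \
  -v "$(pwd)/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
  docker.elastic.co/beats/filebeat:8.13.4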

How is an EFI application set as the bootloader through code?

By following this tutorial, I am able to create a simple EFI application that prints hello world when executed from a UEFI shell. However, I was wondering how one creates a bootable EFI image. I tried using the bcfg command in the shell and added my EFI binary to the boot sequence. However, is there any way to do it without having to go into the shell?
However, if you are actually building your own firmware, you can also look at creating a bootable EFI image and set your default boot option to this binary. This is most useful if you are including the binary as a part of your ROM, but it might be a little involved to set up the filesystem so that it is seen as a normal boot option.
In this question, Nicholas Embry gave a good answer, but I was unable to find any resources to explore the topic he mentioned further. Any help would be appreciated. Thanks!
bcfg, just like efibootmgr in Linux, ultimately uses the GetVariable() and SetVariable() runtime services (also available during boot time) to modify the system boot configuration.
The UEFI Shell, implementing the bcfg command, is itself a UEFI application. Since its source is publicly available, you can have a look at the implementation of the bcfg command - particularly the BcfgAdd() function.
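As a concrete illustration, adding a boot entry from the UEFI Shell, and the rough Linux equivalent with efibootmgr, looks like this; the filesystem, disk, partition, and file path are assumptions for your setup:

# In the UEFI Shell: add the binary as boot option 0 (path assumed).
bcfg boot add 0 fs0:\EFI\myapp\hello.efi "My Hello App"

# Rough Linux equivalent (disk and partition assumed).
efibootmgr --create --disk /dev/sda --part 1 \
  --loader '\EFI\myapp\hello.efi' --label "My Hello App"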
Adding on to unixsmurf's answer: I found that it is in the specification that UEFI will automatically look for a file named, and located at, EFI/BOOT/bootx64.efi. When making a UEFI application that is intended to be loaded automatically by the machine when booting, simply place the application at that path. Combined with what unixsmurf mentioned, this lets me make the computer load any UEFI application automatically at boot time.
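In practice that just means copying the binary to the removable-media fallback path on the EFI System Partition; the mount point below is an assumption, and BOOTX64.EFI is the name for x86-64 (other architectures use other suffixes):

# Place the application where the firmware probes automatically.
# /boot/efi as the ESP mount point is an assumption; adjust for your system.
mkdir -p /boot/efi/EFI/BOOT
cp hello.efi /boot/efi/EFI/BOOT/BOOTX64.EFI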

How do I setup an init.d rc script for a Daemon-kit project?

I am using the Ruby Daemon-kit to set up a service that performs various background operations for my Rails application.
It works fine when I call it on the command line:
./bin/bgservice
How do I go about creating an init.d starter script for it, so that it autostarts on reboot?
There are a few approaches:
You could write /etc/init.d/ scripts that could be placed into the /etc/rc?.d/ directories (or wherever they live on your target distributions); a minimal skeleton is sketched after this list. Some details on this mechanism can be found in the Debian policy guidelines and the openSUSE initscript tutorial. There are an annoying number of distribution-specific idiosyncrasies in initscripts, so don't feel bad about writing a simple one and asking distributions to contribute 'better' ones tailored for their environment. (For example, any Debian-derived distribution will provide the immensely useful start-stop-daemon(8) helper, but it is sorely missing from other distributions.)
You could write upstart job specifications for the distributions that support upstart (which I think is Ubuntu, Google ChromeOS, Fedora, and possibly more). upstart documentation is still pretty weak, but there are some details and plenty of examples in /etc/init/ on Ubuntu, and probably in the same location on other distributions that use upstart. Getting the dependencies correct may be some work across all distributions, but upstart job specifications look far simpler to write and maintain than initscripts.
You could add lines to /etc/inittab on distributions that still support the standard SysV-init inittab(5) file. This is only useful if your program doesn't do the usual daemon fork(2)/setsid(2)/fork(2) incantation, as init uses the pid it gets from fork(2) to determine whether your program needs to be restarted.
Modern Vixie cron(8) supports an @reboot specifier in crontab(5) files. This can be used both in the system crontab and in user crontabs, which might be nice if you just want to run the program as your usual login account.
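For the first approach, here is a minimal, Debian-flavoured skeleton built around start-stop-daemon(8); the install path is an assumption based on the question:

#!/bin/sh
# /etc/init.d/bgservice (sketch; path to the app assumed)
DAEMON=/path/to/app/bin/bgservice
PIDFILE=/var/run/bgservice.pid

case "$1" in
  start)
    echo "Starting bgservice"
    start-stop-daemon --start --background --make-pidfile \
      --pidfile "$PIDFILE" --exec "$DAEMON"
    ;;
  stop)
    echo "Stopping bgservice"
    start-stop-daemon --stop --pidfile "$PIDFILE" --retry 10
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}" >&2
    exit 1
    ;;
esac
exit 0

For the cron approach, the crontab line can be as simple as: @reboot /path/to/app/bin/bgservice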
As the author of daemon-kit, I've avoided making any init-style scripts because coping with the various distributions and their migrations from the old SysV-init style to newer upstart/insserv would be a nightmare.
What I recommend is to use the god config generator, ensure god is started on boot (by runit or some other means), and have god start the daemon initially and keep it running.
At best I'll expand daemon-kit to be able to generate runit scripts for boot...
HTH.
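For completeness, if you follow the runit suggestion above, the run script it expects is tiny; a sketch, assuming the service directory /etc/service/bgservice and that bgservice stays in the foreground when invoked this way:

#!/bin/sh
# /etc/service/bgservice/run (sketch; paths assumed)
exec 2>&1
cd /path/to/app || exit 1
# runit supervises foreground processes; this assumes ./bin/bgservice
# does not daemonize itself when started like this.
exec ./bin/bgservice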

Deploying Perl to a shared-nothing cluster

Does anyone have suggestions for deployment methods for Perl modules to a shared-nothing cluster?
Our current method is very manual.
Take down half the cluster
Copy Perl modules (CPAN-style modules) to the downed cluster members
ssh to each member and run perl Makefile.PL; make; make install for each module to be installed
Confirm deployment
Bring the newly deployed cluster members into service, take the old cluster members out of service, and repeat steps 2-4
This is obviously far from optimal. Does anyone have, or know of, good tool chains for deploying Perl modules to a shared-nothing cluster?
Take one node offline, install Perl, and then use it to reimage the other nodes.
At least, that's how I imagine you'd want to install software in a shared-nothing cluster. Perl is just the application you happen to be installing.
Assuming all the machines are identical, you should be able to keep one canonical installation and use rsync or something similar to keep the others updated.
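A sketch of that rsync approach, assuming the canonical tree lives in /opt/perl and the other nodes follow a simple hostname pattern:

# Push the canonical Perl tree to the rest of the cluster
# (/opt/perl and the node names are assumptions).
for host in node2 node3 node4; do
  rsync -az --delete /opt/perl/ "$host":/opt/perl/
done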
I have, in the past, developed a Perl program which used the Expect module (from CPAN) to automate basically the process you described, automatically sshing to each host, copying any necessary files, and performing the installations. Unfortunately, this was developed on-site for a client, so I do not have access to the code to share. If you're familiar with Expect, it shouldn't be too difficult to set up, though.
We currently have a clustered Perl application that does data processing. We also have numerous CPAN modules and modules that we've developed that the software depends on. When you say 'shared nothing', I'm assuming you're referring to things like NFS mounts.
If the machines have identical configurations, then you may be able to build your entire application into a single directory structure (e.g. /opt/my-app), tar it up, and that could become the only thing you need to push to the boxes.
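A sketch of that tar-and-push idea, with the directory and host names as assumptions:

# Bundle the relocatable application tree once...
tar -C /opt -czf my-app.tar.gz my-app
# ...then push and unpack it on each box (hostnames assumed).
for host in node2 node3; do
  scp my-app.tar.gz "$host":/tmp/ &&
  ssh "$host" 'tar -C /opt -xzf /tmp/my-app.tar.gz'
done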
As far as deploying it to the boxes, you might be able to use Capistrano. We developed a couple of our own cluster utilities that piggybacked off of ssh; I've released one form of that utility: parallel-jobs. Its README shows an example of executing multiple parallel ssh commands. It's a small step to extend that program to know about your cluster and then execute the same command across it (as opposed to a series of different commands).
If you are using Debian or Ubuntu, you could package your Perl modules. I have open sourced some code to help with this: Perl module builder. It's still very rough, but it works and can be made to work on your own code as well as CPAN modules, which then makes deployment much easier.
There is also a project to get Red Hat RPMs for all of CPAN; Dave Cross gave a talk, Perl in RPM-Land, which may be of use.
If you are on some other system which doesn't have packaging, then the rsync option (install on one machine and then rsync to the others) should work as well. Note that you can mount a Windows share and rsync to it from Unix if needed.
Using a central configuration manager like Puppet makes creating and maintaining machines in a cluster a lot easier, from installing code to managing users and email configuration. There is also a Perl project in the pipeline to do something similar, but it has not been made public yet.
Capistrano is a tool that allows you to run commands on a group of servers; it is perfectly suited to making your task considerably easier.
Further down the line of automation, but also of complexity, is Puppet, which allows you to define a group of servers, give them roles, and then push out sets of code to every machine subscribing to a certain role.
I am not sure exactly what a shared-nothing cluster is, but if it uses some base *nix system like Fedora, Mandriva, or Ubuntu, many of the Perl modules are precompiled for specific architectures, and you can easily run those.
If these systems are of the same architecture, you can do as someone else said and just copy the compiled modules from system to system; just make sure you have all of the dependencies on the recipient system as well.