How do I set up an init.d rc script for a Daemon-kit project?

I am using the Ruby Daemon-kit to set up a service that does various background operations for my Rails application.
It works fine when I call it on the command line:
./bin/bgservice
How do I go about creating an init.d starter script for the daemon, so that it will autostart on reboot?

There's a few approaches:
You could write /etc/init.d/ scripts that get placed into the /etc/rc?.d/ directories (or wherever they live on your target distributions). Some details on this mechanism can be found in the Debian policy guidelines and the openSUSE initscript tutorial. There's an annoying number of distribution-specific idiosyncrasies in initscripts, so don't feel bad about writing a simple one and asking distributions to contribute 'better' ones tailored for their environment. (For example, any Debian-derived distribution will provide the immensely useful start-stop-daemon(8) helper, but it is sorely missing from other distributions.) A minimal skeleton built on start-stop-daemon is sketched after this list of approaches.
You could write upstart job specifications for the distributions that support upstart (which I think is Ubuntu, Google ChromeOS, Fedora, .. more?). upstart documentation is still pretty weak, but there are some details and plenty of examples in /etc/init/ on Ubuntu, probably the same location in other distributions that use upstart. Getting the dependencies correct may be some work across all distributions, but upstart job specifications look far simpler to write and maintain than initscripts.
You could add lines to /etc/inittab on distributions that still support the standard SysV-init inittab(5) file. This would only be useful if your program doesn't do the usual daemon fork(2)/setsid(2)/fork(2) incantation, as init uses the pid it gets from fork(2) to determine if your program needs to be restarted.
Modern Vixie cron(8) supports an @reboot specifier in crontab(5) files. This can be used both in the system crontab and in user crontabs, which might be nice if you just want to run the program as your usual login account.
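For illustration, here is a minimal Debian-style init.d skeleton built around start-stop-daemon(8). The daemon path and pidfile are placeholders, and a real script would also want LSB headers and a status action:

#!/bin/sh
# Hypothetical paths - adjust for your application.
DAEMON=/opt/myapp/bin/bgservice
PIDFILE=/var/run/bgservice.pid

case "$1" in
  start)
    start-stop-daemon --start --background --make-pidfile \
      --pidfile "$PIDFILE" --exec "$DAEMON"
    ;;
  stop)
    start-stop-daemon --stop --pidfile "$PIDFILE" --retry 10
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}" >&2
    exit 1
    ;;
esac

The cron route, by contrast, is a one-liner in a crontab(5) (path again a placeholder):
@reboot /opt/myapp/bin/bgservice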

As the author of daemon-kit, I've avoided shipping any init-style scripts because coping with the various distributions and their migrations from the old SysV-init style to the newer upstart/insserv would be a nightmare.
The way I recommend doing this is to use the god config generator, ensure god is started on boot (by runit or some other means), and let god start the daemon initially and keep it running.
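As a rough sketch of the runit side, assuming god is installed and your config lives at a hypothetical /etc/god/bgservice.god, the runit run script could look like this (god's -D flag keeps it in the foreground, which runit expects):

#!/bin/sh
# /etc/sv/god/run - hypothetical runit service directory
exec god -c /etc/god/bgservice.god -D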
At best I'll expand daemon-kit to be able to generate runit scripts for boot...
HTH.

Related

Bundling up a perl script with its dependencies?

I have a perl script that I've put together to do some monitoring and graphing.
It works nicely on my dev system, where I have carte blanche to install my own modules from CPAN.
What I'm looking at doing is bundling it up to deploy onto another system. But here's the catch - this other system is 'standalone' and has no network connection. (And I have change control paperwork to fill in, indicating what I'm installing).
As a result, I'd really like a nice easy way to figure out:
- What modules my scripts are making use of (including dependencies).
- How to easily grab them (probably cpan get).
- Is there an easy way to tell what external binaries I'm using? (I'm definitely using ssh and rrdtool - the former is installed, the latter probably not.)
I have a few thoughts on how to do this, but it strikes me as something that should be smoother.
I may also need to deploy a new perl, so I'm pondering whether I'm better off 'installing' the modules with the system perl (probably 5.8.8 on RHEL5), or just 'packaging' the whole thing in a directory of its own with a standalone perl instance.
Use pp to package your script and all dependent modules and libraries into a standalone executable.
pp -x yourscript.pl -o outputfilename
See the documentation for examples of how to link to external shared objects (etc) if required. With pp you don't need perl on the target system where outputfilename will run.
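For instance (module and library names here are only illustrative), pp can bundle an extra shared library with -l, and the scandeps.pl utility that ships with Module::ScanDeps - the same dependency scanner pp uses - will list the modules a script pulls in, which also helps with the change-control paperwork:

pp -x -l rrd yourscript.pl -o outputfilename
scandeps.pl yourscript.pl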
Revisiting this, as the need hasn't really gone away. I have moved towards using docker - this is an 'image' and 'container' system for app deployment, which amongst other things, allows you to 'package' an application.
You create a Dockerfile - which is analogous to a Makefile - that runs through the steps to install perl + dependencies (either via a package manager, or from CPAN).
Once it has run, you have a self-contained, runnable 'image' that you can clone and create an instance of (a "container" in docker parlance).
It's also quite useful - even if you don't then deploy via container - to figure out what the dependencies of this application/packages were. The version in the container has everything locally installed that it needed, because it was a clean build.
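A minimal sketch of such a Dockerfile, assuming the official perl base image and a couple of example CPAN modules (the image tag, module names, and paths are all placeholders, not part of the original setup):

# Hypothetical example - adjust image tag, modules, and paths.
FROM perl:5.36
RUN cpanm --notest List::MoreUtils Net::OpenSSH
COPY monitor.pl /opt/monitor/monitor.pl
CMD ["perl", "/opt/monitor/monitor.pl"]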
When you have a system where you can't control the Perl installation (and the install is a really, really old version of Perl like 5.8.8 which is missing many nice improvements like state variables, autodie, say, and switch), you should look into Perlbrew.
Perlbrew allows you to install a user-level version of Perl (in fact, it allows you to install multiple versions of Perl) and to switch between a Perlbrew install and the officially installed version. It makes doing everything in Perl much, much easier.
You will have freer access to install new Perl modules, and you can do that task yourself rather than wait for your IT department to do it for you.
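The day-to-day commands look roughly like this (the Perl version number is just an example):

curl -L https://install.perlbrew.pl | bash   # install perlbrew itself into ~/perl5
perlbrew install perl-5.36.0                 # build a private perl
perlbrew switch perl-5.36.0                  # make it the default for your shell
perlbrew install-cpanm                       # get a cpanm that targets the brewed perl
cpanm List::MoreUtils                        # install modules without touching the system perl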
I ended up using it on one of our systems where the primitive version of Perl just wasn't doing what my version of Perl did. I originally asked our IT to upgrade, but they really messed up the upgrade. After going back and forth, I simply asked if I could install Perlbrew.
Which is an important point: always ask permission. A lot of the time, the IT department is more than happy to oblige. They're not Perl people, and CPAN is a world they don't want to deal with. Being relieved of having to be at your beck and call to install this or that Perl module is a great relief for them.

Automating Solaris custom software deployment and configuration for multiple nodes

Essentially, the question I'd like to ask is related to the automation of software package deployments on Solaris 10.
Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured in the host environment. Pretty much like any server side software package out there, I need to ensure that a list of prerequisites are met before extracting and running the software. For example:
Checking that certain users exist and are associated with one or more user groups; if not, creating them and their group associations.
Checking that target application folders exist; if not, creating them with pre-configured path values defined when the package was assembled.
Checking that such folders have the appropriate access control level and ownership for a certain user; if not, setting them.
Checking that a set of environment variables are defined in /etc/profile, point to predefined path locations, are added to the general $PATH environment variable, and are finally exported into the user's environment. Other files involved include /etc/services and /etc/system.
Obviously, doing this for many boxes (the goal in question) by hand will certainly be slow and error prone.
I believe a better alternative is to somehow automate this process. So far I have thought about the following options, and discarded them for one reason or another.
Traditional shell scripts. I've only ever troubleshot these before, and I don't really have much experience with them. These would be my last resort.
Python scripts using the pexpect library for analyzing system command output. This was my initial choice since the target Solaris environments have it installed. However, I want to make sure that I'm not reinventing the wheel again :P.
Ant or Gradle scripts. They may be an option since the boxes also have Java 1.5 enabled, and the fileset abstractions can be very useful. However, they may fall short when dealing with user and folder permissions checking/setting.
It seems obvious to me that I'm not the first person in this situation, but I don't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this.
I thank you for your time and help.
Most of those steps sound like things handled by use of a packaging system to install your package. On Solaris 10, that would be the SVR4 packaging system included with the OS.
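As a rough sketch (the package name, paths, and users below are placeholders), an SVR4 package is described by a pkginfo file and a prototype file, and its pre/post-install scripts are the natural place for the user, group, and permission checks you listed:

# prototype file - pkgproto(1) can generate the file entries for you
i pkginfo                                    # PKG, NAME, VERSION, ARCH, CATEGORY, ...
i preinstall                                 # e.g. groupadd/useradd if missing
d none /opt/myapp 0755 appuser appgroup
f none /opt/myapp/bin/mydaemon 0755 appuser appgroup

Then build and install it on each target box:
pkgmk -o -d /var/spool/pkg                   # build from the prototype in the current dir
pkgadd -d /var/spool/pkg MYCOapp             # install the package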

What are codepad.org's Perl runner limitations?

Sometimes I see people use http://codepad.org as a way to quickly run/test their Perl snippets (it supports doing that with a wide variety of languages, from C to Scheme to Perl).
It's pretty obvious that there must be some limitations as to what code/features can be tested with codepad - does anyone know what those limitations are for the Perl runner?
I'll get the ball rolling on my own observation: not every CPAN module is available :(
Mostly based on their "about" page:
codepad only supports Perl 5.8.0
Presumably, like any Perl install, not every module (CPAN or otherwise) is present.
As a specific example, List::MoreUtils is missing.
As a sub-limitation, they seem to run on Linux, so any Windows-specific modules would certainly be out.
It's in a chroot jail with system call restrictions. Among other things, this seems to prevent file creation (my snippets creating files in the current directory or /tmp both errored out, as did File::Temp calls).
codepad code is executed on a virtual machine. Behind firewalls. And buried in a bunker. So certain functionality is probably disabled - especially networking/internet one. The exact "about" quote is:
The supervisor processes run on virtual machines, which are firewalled such that they are incapable of making outgoing connections.
The machines that run the virtual machines are also heavily firewalled, and restored from their source images periodically.
It's easier to just run Perl code locally. It's easy to install multiple versions of Perl and to track separate module repositories. It's also not hard to run just about any operating system you want in a virtual machine. Why you'd need anyone else's service to do what you can do better yourself is beyond me.

Best practices for deploying tools & scripts to production?

I've got a number of batch processes that run behind the scenes for a Linux/PHP website. They are starting to grow in number and complexity, so I want to bring a small amount of process to bear on them.
My source tree has a bunch of cpp files and scripts, organized with development but not deployment in mind. After compiling all the executables, I need to put various scripts and binaries on a cluster of machines. Different machines need different executables, scripts, and config files for their batch processes. I also have a few tools that I've written that belong on every machine. At the moment, this deployment process is manual and error prone.
I'm guessing I'm just going to end up with a script that runs at the root of the source tree and builds a smaller tree of everything necessary for any of the machines. Then, I'll just rsync that to the appropriate machines. But I'm curious how other people are managing this type of problem. Any ideas?
There are several categories of tool here. Some people use a combination of tools from these categories. I sometimes use, for example, both Puppet and Capistrano. See Puppet or Capistrano - Use the Right Tool for the Job for a discussion.
Scripting Tools aimed at Deploying an Application:
The general pattern with tools in this category is that you create a script and/or config file, often with sets of commands similar to a Makefile, and the tool will ssh over to your production box, do a checkout of your source, and run whatever other steps are necessary.
Tools in this area usually have facilities for rollback to a previous version. So they'll check out your source into a timestamped directory under releases/, and create a symbolic link from "current" to that directory if all goes well. If there's a problem, you can revert to the previous version by running a command that re-points "current" at the previous releases/ directory. (A rough sketch of this layout follows the short list of tools below.)
Capistrano comes from the Rails community but is general-purpose. Users of Capistrano may be interested in deprec, a set of deployment recipes for Capistrano.
Vlad the Deployer is an alternative to Capistrano, again from the Rails community.
Write your own shell script or Makefile.
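To make the rollback mechanism concrete, the layout these tools maintain looks roughly like this (paths and timestamps are purely illustrative):

/var/www/myapp/releases/20100101120000/     # each deploy lands in its own timestamped dir
/var/www/myapp/releases/20100102093000/
/var/www/myapp/current -> releases/20100102093000
# Rolling back is essentially re-pointing the symlink:
ln -sfn releases/20100101120000 /var/www/myapp/current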
Options for getting the files to the production box:
Direct checkout from source. Not always possible if your production boxes lack development tools, specifically source code management tools.
Check out the source locally, then tar/zip it up. Use scp or rsync to copy the tarball over. This is sometimes preferred for something like an Amazon EC2 deployment, where a compressed tarball can save time/bandwidth.
Check out the source locally, then rsync it over to the production box.
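For the rsync variants, something like this (the paths, host, and excludes are illustrative) keeps the transfer incremental and prunes files you've deleted:

rsync -az --delete --exclude='.git' ./build/ deploy@prod1:/opt/myapp/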
Packaging Tools
Use your OS's packaging system to generate packages containing the files for your app. Create a master package that has as dependencies the other packages you need. The RubyWorks system is an example of this, used to deploy a Rails stack and sample application. Then it's a matter of using apt, yum/rpm, Windows msi, or whatever to deploy a given version. Rollback involves uninstalling and reinstalling an old version.
General Tools Aimed at Installing Apps/Configs and Maintaining a Set of Systems
These tools do not specifically target the problem of deploying a web app, but rather the more general problem of deploying/maintaining Apps/Configs for a set of servers, or an entire company's workstations. They are aimed more at the system administrator than the web developer, though either can find them useful.
Cfengine is a tool in this category.
Puppet aims to improve on Cfengine. It's got a learning curve but many find it worth the time to figure out how to do the configs. Once you've got it going, each box checks the central server periodically and makes sure everything is up to date. If someone edits a file or changes a permission, this is detected and corrected. So, unlike the deployment tools above, Puppet not only puts files in the right place for you, it ensures they stay that way.
Chef is a little younger than Puppet with a similar approach.
Smartfrog is another tool in this category.
Ansible works with plain YAML files and does not require agents running on the servers it manages.
For a comparison of these and many more tools in this category, see the Wikipedia article, Comparison of open source configuration management software.
Take a look at the cfengine tutorial to see if cfengine looks like the right tool for your situation. It may be a little too complicated for a small website, but if it is going to involve more computers and more configuration in the future, at some point you will end up using cfengine or something like that.
Create your own packages in the format your distribution uses, e.g. Debian packages (.deb). These can either be copied to each machine and installed manually, or you can set up your own repository, and add it to your list of sources.
Your packages should be set up so that the scripts they contain consult a configuration file, which is different on each host, depending on what scripts need to be run on each.
To tie it all together, you can create a meta package that just depends on each of the other packages you create. That way, when you set up a new server, you install that one meta package, and the other packages are brought in as dependencies.
Although this process sounds a bit complicated, if you have many scripts and many hosts to deploy them to, it can really pay off in the long run.
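A rough sketch of the meta-package idea (the package names, maintainer, and repository tooling below are all placeholders): the control file carries nothing but dependencies, and dpkg-deb turns the directory into an installable .deb.

# mycompany-batch-meta/DEBIAN/control
Package: mycompany-batch-meta
Version: 1.0
Architecture: all
Maintainer: Ops Team <ops@example.com>
Depends: mycompany-batch-scripts, mycompany-batch-config
Description: Meta package that pulls in all of the batch-processing pieces

dpkg-deb --build mycompany-batch-meta        # produces mycompany-batch-meta.deb
# Host the .deb in your own apt repository (reprepro is one option), then on each server:
apt-get update && apt-get install mycompany-batch-meta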
I have to roll out PHP scripts and Apache configurations to several customers on a frequent basis. Since they all run Debian Linux, I've set up a Debian package repository on my server, and all the customers have to do is type apt-get upgrade to get the latest version.
The first thing to do is get all these scripts into a source control repository (svn or git are good) so that you can track changes to these scripts over time.
If you are interested in Ruby, check out Capistrano; it is well suited to deploying things to multiple machines in a cluster and is fairly easy to set up. It can read files directly from your version control system.
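For reference, the (Capistrano 2-era) workflow from your workstation is just a handful of commands once you've written a Capfile/deploy.rb describing your cluster:

cap deploy:setup      # create the releases/ and shared/ directories on each server
cap deploy            # check the code out from version control and update the "current" symlink
cap deploy:rollback   # point "current" back at the previous release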
Puppet is another tool that can be used in this situation. It is similar to cfengine - you create a model of the desired deployment and Puppet figures out how to get the environment to that state.

Deploying Perl to a shared-nothing cluster

Anyone have suggestions for deployment methods for Perl modules to a shared-nothing cluster?
Our current method is very manual.
Take down half the cluster
Copy Perl modules (CPAN-style modules) to the downed cluster members
ssh to each member and run perl Makefile.PL && make && make install for each module to be installed
Confirm deployment
Bring the newly deployed cluster members into service, take the old cluster members out of service, and repeat steps 2-4
This is obviously far from optimal. Does anyone have or know of good tool chains for deploying Perl modules to a shared-nothing cluster?
Take one node offline, install Perl, and then use it to reimage the other nodes.
At least, that's how I imagine you'd want to install software in a shared-nothing cluster. Perl is just the application you happen to be installing.
Assuming all the machines are identical, you should be able to keep one canonical installation, and use rsync or something to keep the others updated.
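A sketch of that approach, assuming a hypothetical canonical tree under /opt/perl on the build node, passwordless ssh to the others, and placeholder hostnames:

for host in node2 node3 node4; do
  rsync -az --delete /opt/perl/ "$host":/opt/perl/
done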
I have, in the past, developed a Perl program which used the Expect module (from CPAN) to automate basically the process you described, automatically sshing to each host, copying any necessary files, and performing the installations. Unfortunately, this was developed on-site for a client, so I do not have access to the code to share. If you're familiar with Expect, it shouldn't be too difficult to set up, though.
We currently have a clustered Perl application that does data processing. We also have numerous CPAN modules and modules that we've developed that the software depends on. When you say 'shared nothing', I'm assuming you mean there are no shared resources such as NFS mounts.
If the machines have identical configurations, then you may be able to build your entire application into a single directory structure (e.g. /opt/my-app), tar it up, and that could become the only thing you need to push to the boxes.
As far as deploying it to the boxes, you might be able to use Capistrano. We developed a couple of our own cluster utilities that piggybacked off of ssh - I've released one form of that utility: parallel-jobs. Its README shows an example of executing multiple parallel ssh commands. It's a small step to extend that program to be able to know about your cluster and then be able to execute the same command across the cluster (as opposed to a series of different commands).
If you are using a Debian or Ubuntu OS, you could package your Perl modules - I have open-sourced some code to help with this: Perl module builder. It's still very rough but does work, and it can be made to work on your own code as well as CPAN modules; this makes deployment much easier.
There is also a project to get Red Hat RPMs for all of CPAN; Dave Cross gave a talk, Perl in RPM-Land, which may be of use.
If you are on some other system which doesn't have packaging, then the rsync option (install on one machine and then rsync to the others) should work as well; note that you can mount a Windows share and rsync to it from Unix if needed.
Using a central manager like Puppet makes creating and maintaining machines in a cluster a lot easier, from installing code to managing users and email configuration. There is also a Perl project in the pipeline to do something similar, but it has not been made public yet.
Capistrano is a tool that allows you to run commands on a group of servers; it is perfectly suited to making your task considerably easier.
Further down the line of automation, but also of complexity, is Puppet, which allows you to define a group of servers, give them roles, and then push out sets of code to every machine subscribing to a certain role.
I am not sure exactly what a shared-nothing cluster is, but if it uses some base *nix system like Fedora, Mandriva, or Ubuntu, many of the Perl modules are available precompiled for specific architectures, and you can easily install those.
If these systems are of the same arch, you can do as someone else said and just copy the compiled modules from system to system; just make sure you have all of the dependencies on the recipient system as well.