A meta-build solution managing multiple projects - deployment

I'm looking for advice about software that hasn't caught my eye during a few days of searching.
Is there any software solution for deploying multiple projects at once that operates at the level of a source-building package manager (like Ports, Portage, or Nix), but can be run locally?
As for the details, we have a few loosely connected software projects with the following traits:
Projects are written mostly in C/C++ and Python, but more languages are represented as a minor admixture (Haskell, Rust, Perl)
Projects are grouped together within git superprojects, with certain build/deployment presets carefully tuned for particular environments (and adapted for particular purposes by a set of boolean-like options)
We have already developed well-elaborated CMake build scripts for the C++ projects that support options, build configurations, exported targets, and so on. It would be expensive to switch away from them now.
We are forced to deal with various Linux distributions (from Gentoo and Ubuntu to Debian and CentOS).
We need a unifying build & deployment tool for various environments. CMake does not integrate well with non-compiled languages (e.g. it does not natively support local Python installations via virtualenv).
Instead of changing the things we have already developed, I would like to use them the way an OS package manager does. As I see it, this should be something pretty similar to a so-called meta-build tool. In fact, Gentoo's Portage is pretty close:
Easy customization with simple boolean options (USE flags, profiles)
Delegation of the build procedures to reliable, customizable tools designed and well-tested for exactly this purpose (CMake, autotools, Bazel, etc.)
The ability to change the target compiler installation and to specify the build process in a clear, declarative way with a standardized set of instructions.
Portage cannot be run locally, though (and has other, unrelated flaws).
I would have to become very confident before switching the whole build system to something like Meson or Bazel or whatever else I might find.
Update
To be more specific, I can refer to what we have done up to now. One of the superprojects we maintain deals with a particular scientific experiment:
All the sub-projects are listed as git submodules
A simple Bash script maintains the entire build-install-deploy lifecycle
The presets with the particular settings for a concrete environment are represented as Bash sources and are included by this maintaining shell script (roughly as in the sketch below).
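A condensed, hypothetical sketch of that script (the preset and variable names here are illustrative, not from any existing tool):

#!/usr/bin/env bash
set -e
# load an environment preset, e.g. presets/centos-experiment.sh,
# which defines CMAKE_OPTS, INSTALL_PREFIX, ...
source "presets/${1:?usage: deploy.sh <preset>}.sh"
git submodule update --init --recursive
cmake -S . -B build $CMAKE_OPTS -DCMAKE_INSTALL_PREFIX="$INSTALL_PREFIX"
cmake --build build
cmake --install build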
As a few more days have passed with no result, I'm coming around to the intention of writing my own solution, based on the experience we have already gained from this shell scripting activity.

Bob was developed some time ago for a similar task. If you still need a build system, it might be worth a look. There are some small tutorials available, as well as a complex example ("basement") that builds a Linux from scratch.
Basically it is just an environment for the controlled execution of Bash scripts, with dependency and variant handling, CI integration, IDE support, etc.

Take a look at Sparrow - a Linux scripting platform. It supports project hierarchies and tasks (configurable scripts), where configuration is made declaratively in YAML or JSON formats. This hopefully might address many, if not all, of the needs you've mentioned here.
Disclosure - I am the tool author.

Related

Is there a way to specify where raco should install a package?

Many languages' package management systems install third-party packages in a subfolder of a given project's root repository. (E.g. node_modules/, deps/, etc.) This is nice as it allows multiple versions of a single package to coexist nicely, as each is isolated to the project that depends on it.
By default, raco seems to install packages system-wide. Is there a way to tell raco that it should install packages in a particular folder?
The comment is right to point you to package scopes: using a directory package scope is the most fine-grained level of control available at the level of the package manager. However, as the docs say,
Conflict checking disallows installation of the same or conflicting package in different scopes …
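Concretely, installing into a directory scope looks like this (the directory and package name here are hypothetical):

raco pkg install --scope-dir ./pkgs some-package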
It is a design decision in the Racket package system not to allow multiple versions of the same package to coexist in the same installation. The idea is that a package shouldn't make breaking changes while keeping the same name. One inspiration that's been mentioned is OS-level package systems like Debian's, where gcc-4.7 and gcc-4.8 are different packages that install different binaries. This is very different from the practice of some other language package systems (e.g. npm), but it works out well in practice because the Racket community takes backwards compatibility quite seriously.
This is actually Racket's second package system. The old system, PLaneT, tried to be much more opinionated, including built-in notions of versioning. The new/current system consciously removed some of those elements to create a more minimal and focused package system, which means that a great deal of what formerly had to be "built in" to Racket can now be just another package. You may be interested in a talk about the design of the package system. There was also a fair amount of discussion at the time on the mailing list, which is archived here.
If you do find you need different package versions to an extent that can't be supported by existing mechanisms, you could use a stand-alone Racket installation (rather than a system-wide one) to get a completely isolated environment. I'm sure the Racket community would also be interested in hearing about any issues you have in practice.

Should we deploy scrrun.dll (Windows Scripting Runtime)?

Our VB6 application relies heavily on scrrun.dll (the Windows Scripting Runtime). Until a year ago we used to deploy this DLL with our installer. Since the Scripting Runtime is supposed to be part of Windows, we removed the DLL from the installation package. However, now and then customers surface who have a non-functional scrrun.dll on their system, and we have to help them reinstall or re-register it.
So, should we put scrrun.dll back in the installation package? Should we perform some check at installation time? Or should we just live with the fact that we have to provide hands-on support to some of our customers to set their systems right?
Don't try to deploy these libraries as part of a normal setup.
Microsoft Scripting Runtime must be installed through the use of a self-extracting .exe file. For versions of Scripting Runtime mentioned at the beginning of this article, the only way to distribute it is to use the complete self-extracting .exe file located at the following locations...
It is possible that some users employ an older anti-malware suite; many of these tried to disable scripting. It is more likely, though, that some users have managed to break their Windows installation, either themselves or by using applications that were improperly packaged to include these libraries - and that blindly remove them from the system on uninstall (cough, cough - Inno).
The libraries involved have been tailored, version-specific code for some time now. This is why the ancient .CAB file was "recalled" long ago. There is no single copy of them intended to run on any random version of Windows, and there are no redist packs for any modern version of Windows. The correct fix is a system restore or a repair install.
While this can't be blamed directly on InnoSetup (it is the result of poorly authored scripts), it is frustrating enough and common enough that I won't cry when its signature is added to anti-malware suites. There are just too many poorly written examples loose in the wild, copy/pasted by too many people.
I spend plenty of time undoing the damage caused by uninstalls of these applications and have grown quite weary of it. Where possible I now use isolated assemblies in self-defense, which helps a lot. Windows File Protection is also getting better at preventing abusive actions against system files.
But in general you are much better off avoiding any dependency on scripting tools in an application. There isn't very much that they can do as well as straight code anyway, though it may take some time to write alternative logic.

Deployments: MSI packages vs Scripts

I have been tasked with looking into our deployments, and seeing where they can be streamlined. Right now we have 4 different configurations (Debug/Dev, Test, Staging, Release) and 4 *.config files. We have a task that will overwrite app/web.config with the appropriate *.config at pre-build time, based on the active configuration. An MSI is created, and we do a full deployment of the component on release night.
This is not entirely ideal because if we change something in a config file, or fix a spelling mistake in a specific view, we have to re-deploy the entire thing. Not to mention that the MSI will occasionally require a reboot. One other option that has been brought up is that instead of creating MSIs we could create custom deployment/rollback scripts and gain the ability to do incremental releases.
Has anyone here tried deployments both ways? What are some of the pros/cons you have found? Is there a third way we haven't thought of?
edit: Just to clarify a few things... We don't deploy to customers. All software is deployed to our servers (a few sites and a lot of Windows services). We never change things in production. We actually use the built-in system within VS to create the MSI, so that part isn't the terrible part. To me it just doesn't make sense to redeploy an entire website if you had to change one view. We also have to deploy to multiple servers. Right now that is done by running the MSI on each one.
MSI pros:
Application/service/site gets installed and registered like most other Windows apps, and shows up in Add/Remove programs
Some built-in support for re-installing, upgrading
Has some built-in support for installing Windows services/IIS sites/lower-level Windows features
MSI cons:
Seems really cryptic once you get "under the hood"
Seems more difficult to customize than using a custom script
Script pros:
Easier to customize, although certain steps might require lots of/cryptic scripting (working with IIS, lower-level computer administration)
Don't have to deal with low-level weirdness of MSI
Script cons:
.bat scripting is not the most readable or writable language. (PowerShell is better, but then you have to worry about whether PowerShell is installed on the target machine.)
Low-level operations require a lot of administrative scripting for commit/rollback behavior
No built-in support for installing or rolling back (MSI has some support built in)
One thing I've come across that helps with MSIs is WiX (http://wix.sourceforge.net/), but even WiX seems pretty cryptic in a lot of ways. We use a combination of MSBuild and WiX to do automated builds and deployment/installs, and it works okay for us.
Overall, I'd probably lean more towards doing MSI/WiX (or other installer toolkit) deployments over scripts. MSIs are the standard way of doing installs on Windows, and once you get it working, you usually don't have to change too much. MSBuild or some other build framework (NAnt, etc.), can be useful for setting up the deployment (renaming files, doing string replacements, etc.), before putting together the final MSI package.
Having run a dev company building web apps for five years, we struggled with this and tried a bunch of solutions. Here are a couple of tips:
Always replace the entire web directory with your code (except for content generated by the web site, like a CMS). It's pretty fast to do this, and incremental deployments can introduce phantom bugs if files are left around.
Have your build process (NAnt, MSBuild, whatever) modify the .config files for each environment, and build for the environment you are pushing to. Alternately, you can use registry settings so that the .config files are the same, but that means a dedicated machine for each environment. May or may not be an issue.
Don't make changes in production. If you need to make changes (e.g. spelling errors on the site), make those top priority to get changed in dev, so that you don't overwrite them with the next push.
If you aren't using MSIs, then make sure you have a rollback process. Keeping a copy of the site from just before you changed it really helps when something unexplained goes sideways during a roll-out.
I don't know that these tips point to MSI or script. I think it's a matter of which you are most comfortable with. MSIs can be hard to customize, but easy to run and manage. Microsoft has lots of tools for managing roll-outs of MSIs across an organization or farm. Scripts may require custom tools and custom tooling, or lots of manual work on the production end.
We ran scripts with NAnt and a custom deployment harness. These days (VS2008), building deployment packages is much easier.
Your best option is to get a decent MSI builder to do the job with - I'm talking about InstallShield etc. (there are a couple, so do look around). While these invariably cost money, they can save you a huge amount of time/money/pain further down the track. Having said that, the pain is not totally eliminated, just reduced :)
Anything tricky you need to do can be done as a custom task within the MSI - and you can even do this with the setup builder that comes with Visual Studio (if you are using VS).
I have a suggestion for your config files: include all four in the MSI, and then have a public property which can be set from the command line. You can then use that public property to install the appropriate config file (and have the default value of the property set so that the release config gets installed). That way, your customers just use the MSI and get the correct config file, but your test team can get their config file by changing the value of the public property; the command line they would use to do the install is this:
msiexec /i "MyInstaller.msi" CONFIG=test
You can do install scripts quite easily, but as already mentioned you also need to script the uninstall. Using install scripts precludes you from getting Windows certification for your product, should you look at getting that done. But that doesn't mean you shouldn't use install scripts; they may be the perfect fit for your needs. Alternatively, you may look at a combined script/MSI approach by having your scripts run as custom actions from within the MSI.
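If you do go the scripted route around an MSI, the package itself can also be driven unattended with standard msiexec switches (/qn for no UI, /l*v for a verbose log), which makes rolling it out to several servers scriptable:

msiexec /i "MyInstaller.msi" CONFIG=test /qn /l*v install.log
msiexec /x "MyInstaller.msi" /qn

The first line is a silent install selecting the test config from the suggestion above; the second is a silent uninstall of the same package.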

What do you expect from a package manager for Emacs? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
Although several thousand Emacs Lisp libraries exist, GNU Emacs did not have an (internal) package manager until version 24.1.
I guess that most users would agree that it is currently rather inconvenient to find, install, and especially keep up to date Emacs Lisp libraries.
Pages that make life a bit easier
For versions of Emacs older than 24.1:
Emacs Lisp List - Problem: I see dead people (links).
Emacswiki - Problem: May contain traces of nuts (malicious code).
Emacsmirror - The package repository I am working on. Problem: No package manager supports it natively yet.
Some package managers
It's not that nobody has tried yet. (Some of these did not exist when this question was asked.)
auto-install
borg.el - Assimilate Emacs packages using Git submodules.
el-get.el - Supports many sources.
elinstall.el
epackage aka DELPS - Debian packaging concepts applied to Emacs Lisp packages.
epkg.el - This is now just a tool for browsing the Emacsmirror.
install.el
install-elisp.el
jem-pkg.el
package.el - ELPA. Seems like it will be included in Emacs 24.
UPDATE -- package.el is included in GNU Emacs, starting with version 24.1
pases.el
pelm - Command line installer; using php.
plugin.el
straight.el - Recent and experimental, has not reached 1.0 release yet.
use-package.el
XEmacs package manager
package has been included in the Emacs trunk. epkg is not ready yet and also currently not available. At least install-elisp, plugin and use-package do not seem to be actively maintained anymore.
I have created a git repository containing all these package managers as submodules.
Some utilities that might be useful
Package managers could use these utilities and/or they could be used to maintain a mirror of packages.
date-calc.el - Date calculation and parsing routines.
ell.el - Browse the Emacs Lisp List.
elm.el, elx.el, xpkg.el - Used to maintain the Emacsmirror.
genauto.el - Helps generate autoloads for your elisp packages.
inversion.el - Require specific package versions.
loadhist.el, lib-requires.el, elisp-depend.el - Commands to list Emacs Lisp library dependencies.
project-root.el - Define a project root and take actions based upon it.
strptime.el - Partial implementation of POSIX date and time parsing.
wikirel.el - Visit relevant pages on the Emacs Wiki.
Discussions about the subject at hand
emacs-devel 20080801
comp.emacs 20021121
RationalElispPackaging
The question (finally)
So - I would like to know from you what you consider important/unimportant/supplementary etc. in a package manager for Emacs.
Some ideas
Many packages (the Emacsmirror provides the largest available collection of packages, but there is no explicit support for it in any package manager yet).
Only packages that have been tested.
Support for more than one package archive (so people can choose between many/tested packages).
Dependencies calculated based on required features only.
Dependencies take particular versions into account.
Only use versions that have been released upstream.
Use versions from version control systems if available.
Packages are categorized.
Packages can be uninstalled and updated, not only installed.
Support creating forks of upstream versions of packages.
Support publishing these forks.
Support choosing a fork.
After installation packages are activated.
Generate autoload files.
Integration with Emacswiki (see wikirel.el).
Users can tag, comment etc. packages and share that information.
Only FSF-assigned/GPL/FOSS software or don't care about license.
The package manager should be integrated with/distributed with Emacs.
Support for easily contacting author.
Lots of metadata.
Suggest alternatives before installing a particular package.
I am hoping for these kinds of answers
Pointers to more implementations, discussions etc.
Lengthy descriptions of a set of features that make up your ideal package manager.
Descriptions of one particular desired/undesired feature. Feel free to elaborate on my ideas from above.
Surprise me.
I'm still learning Emacs, so I haven't had a chance to look into package managers, but a great feature would be to inform the user that the package is available if they try to use it but it's not on their system. For example, I wanted to edit a PHP file on a server once, and I tried
M-x php-mode
and Emacs was all like
M-x php-mode [no match]
when it should have been like
php-mode available from ftp.gnu.org. install? (y/n)
and then it would have installed and loaded up php-mode for me. That would have made my day right there.
Automatic publishing from version control
I'd love to see a standard, central, and single Emacs package manager. Right now, I'd put my money on ELPA, but there is still a long way to go.
The biggest thing that would help an Emacs package manager would be to make it super trivial to publish packages. In my opinion, I'd like to see this happen in combination with a version control system like git on a central hosted platform like GitHub -- something that would make it easy for authors to publish their packages and would make it easy for others to contribute back.
Similar to how GitHub (used to) make it easy to publish RubyGems, I'd like to see something similar in an Emacs package manager. For example, tag your repository with "vX.Y.Z" and have your elisp goodness automatically available to all.
The added benefit of using a popular backend like GitHub is that you'd immediately get a lot of exposure which should help drive its success.
What I expect most is that everything useful is on it, and works well. This requires you (or a team of maintainers) to aggressively pursue packaging everything for it, and doing whatever that involves — emailing every author of a useful package, and so on.
For instance, the reason Debian (and its derivatives: Ubuntu etc.) is so good is that you can happily use your system without ever having to install something outside the repositories, and that everything on it is thoroughly tested. The actual features of the package manager are important, but secondary to the managed packages themselves.
Easy configuration synchronization: I, like many people, use Emacs on many different computers and servers, some of them my own and some not. It would be amazing if the package manager had some sort of file which I could transfer from one computer to another; then, on the latter computer, the package manager would bring my Emacs into the state I like it in -- all the packages installed and configurations set. Combined with the ability to easily install either site-wide (if one has root permissions) or as a single user, I could synchronize all my Emacsen everywhere.
I'm nearly positive that the best solution involves submitting more packages to ELPA and adding multi-source support to package.el. The Emacs maintainers have said that they would consider including package.el in version 24 as long as it pointed to an FSF repository by default.
Of course, submission also needs to be an automated process too; the current method of mailing the ELPA maintainer only works on a small scale.
No matter how this is done, the most important thing in my opinion is that it should be trivial to submit packages to the repository. At the same time, we do not want those packages to be instantly available, to guard against malicious code (and due to licensing issues), unless there is a "trust" system in place based on crypto signatures.
Also useful:
"metapackages", to install several packages at once.
In the same way, we should be able to install a set of elisp files, for maintainability
"Broken" packages should not be allowed to disrupt Emacs startup. This is easy and I have it implemented in my own .emacs
Ability to install files other than scripts. This is often overlooked, but very useful. You'd be able to, for instance, ship images, for icons, toolbars, etc.
Versioning: package X requires package Y > 1.0
Testing: perform basic sanity checks, testing for conflicts (keybindings, function redefinitions, functions that are expected to be present but aren't, etc).
BUG TRACKING: I can't stress the importance of this enough. Having a centralized place to report package bugs (and being able to track them) is extremely important to assure the quality of the packages.
Some sort of compressed archive seems to be best to do some of the above.
So far, a much improved ELPA seems the way to go.
I once spent some time writing a small package manager for Emacs.
http://gmarceau.qc.ca/plugin.el
I wrote:
Plugin is my attempt at creating a package manager for Emacs. Plugin will automatically download Emacs extensions, unpack them in a directory, add that directory to the load-path, generate auto-load annotations, and modify your dot-emacs file. The auto-load annotations are a little-known feature of Emacs. Once they are generated, Emacs extensions load quickly and incrementally, which is really nice if you have as many extensions installed as I do.
You will need two library files to get it to run, loop-constructs.el and record.el
I think the hackers for the iPhone got quite close to what I want, as does Ubuntu's "apt".
I like to be able to:
add
remove (package only)
remove user settings
view documentation
upgrade (after reading the change log)
add new archive (aka add repository)
see dependencies
see version
search for name, keyword
browse by (date added, date modified, name)
save all installed packages & settings
load set of packages & settings
I would like a main set of things that all work nicely and are the recommended way of doing whatever. Then a global set where everything working gets in. Then the ability for anyone to host their own archive.
It would be nice if this was all tied into git/svn/whatever, so that you could install old versions, make your own patches by forking, etc.
Besides what's mentioned above, I expect something like Debian and other repositories: sets of stable, experimental, and untested packages. Also the ability to add my own repositories - I use a lot of packages directly from VCS, so it could be useful to create my own packages.
I think that the package manager should take a lot of inspiration from Rubygems. I also think that it should have a site like Gemcutter.
A central repository could also be nice (like Emacsmirror). This however may not be necessary if a site like Gemcutter exists that collects all packages.
I think these things are important for this to work.
Central location of some kind that collects all packages
Easy to add packages
Easy to maintain packages
Easy to contribute to other packages
Easy to install, uninstall and update packages
Possibility to add package dependencies
Common structure for all packages
So a package manager like Rubygems, with a site like Gemcutter and a central repository like Emacsmirror (preferably on GitHub because of its social coding), would do Emacs a lot of good.
All in all I think that much inspiration should be taken from Rails and how Rails handles Gems.
I don't know how fresh this question is...
but the model I would like to see is CPAN. I also don't know Rubygems but it sounds similar to CPAN.
CPAN is a Perl archive + library management system. When I need to write a Perl program that requires, say, FTP or SOAP or JSON or XML or ZIP support, I can run the CPAN package manager, select the requisite package for download, view and verify the dependencies, then install everything. CPAN is mirrored "everywhere".
CPAN works wonderfully for my purposes, and something similar for Emacs would be nice to have. It also supports building C/C++ code on demand.
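For instance, from the command line (the module name is just an example):

perl -MCPAN -e 'install JSON'

and CPAN resolves the dependencies and builds/installs everything required.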
That's what I would like to see in emacs.
Some further comment on requirements.
explicit download of packages. No auto-install. No invisible downloads. I want to ask for new libraries or new functionality myself.
I should be able to list the name/version/timestamp of installed packages.
If my friend gives me his list, I should be able to diff his emacs state against mine.
check-for-updates function. What updates are available? What do they fix?
dependency checking, verification, and download. If I install csharp-mode and it requires v5.0.28 of cc-mode, then it should confirm with me that I must also download cc-mode.
there should be some sort of community ranking of these packages, like ranking torrents on isohunt. I want to see if a package has 3 upvotes or 3000.
"transactional" behavior. If an install goes boom, it must unwind to a last-known-good state.
failsafes. If I've put custom mods in linum.el, it should refuse to install a new version over my changes unless I explicitly allow it. It should warn me before even starting. Do this with checksums/MD5s over the existing install.
have the option of running some packages from compressed archives, like zip files, so I never have any doubt that I have not updated any of the embedded elisp.
ability to use mirrored hosts for package distribution.
all of this function should be accessible through M-x library-management or something.
Finally, it would be nice to have a way to segregate or organize libraries of functions: hierarchical namespaces. Emacs' flat namespace is very dated. This is sort of independent from, but complementary to, the core function of package management. I'm not a Lisp guru so I don't know how hard this would be; maybe there is already a way to do it.
Package managers don't offer anything I value w.r.t. single-file elisp packages with simple dependencies: adding to and deleting from site-lisp has never caused problems. It's packages that depend on external programs (e.g., ispell), or multi-file packages (e.g., auctex, org-mode), that can be tricky. I can't think of any single-file elisp package with nontrivial dependencies, offhand.
For these, short of a package manager, I'd like emacs' elisp-packages to acquire test suites which can be run en masse, and which provide useful information in the event of dependency failures.

Best practices for deploying tools & scripts to production?

I've got a number of batch processes that run behind the scenes for a Linux/PHP website. They are starting to grow in number and complexity, so I want to bring a small amount of process to bear on them.
My source tree has a bunch of cpp files and scripts, organized with development but not deployment in mind. After compiling all the executables, I need to put various scripts and binaries on a cluster of machines. Different machines need different executables, scripts, and config files for their batch processes. I also have a few tools that I've written that belong on every machine. At the moment, this deployment process is manual and error-prone.
I'm guessing I'm just going to end up with a script that runs at the root of the source tree and builds a smaller tree of everything necessary for any of the machines. Then, I'll just rsync that to the appropriate machines. But I'm curious how other people are managing this type of problem. Any ideas?
There are several categories of tools here, and some people use a combination of tools from more than one category. I sometimes use, for example, both Puppet and Capistrano. See Puppet or Capistrano - Use the Right Tool for the Job for a discussion.
Scripting Tools aimed at Deploying an Application:
The general pattern with tools in this category is that you create a script and/or config file, often with sets of commands similar to a Makefile, and the tool will ssh over to your production box, do a checkout of your source, and run whatever other steps are necessary.
Tools in this area usually have facilities for rolling back to a previous version. They'll check out your source to a new directory under releases/, and create a symbolic link named "current" pointing at it if all goes well. If there's a problem, you can revert to the previous version by running a command that re-points "current" at the previous releases/ directory (roughly as in the sketch below).
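In its simplest form the pattern is just this (repository URL and timestamps are hypothetical):

# deploy: fetch the new version alongside the old ones, then flip the link
git clone --depth 1 https://example.com/app.git releases/20240101120000
ln -sfn releases/20240101120000 current

# rollback: re-point the link at a previous release
ln -sfn releases/20231231090000 current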
Capistrano comes from the Rails community but is general-purpose. Users of Capistrano may be interested in deprec, a set of deployment recipes for Capistrano.
Vlad the Deployer is an alternative to Capistrano, again from the Rails community.
Write your own shell script or Makefile.
Options for getting the files to the production box:
Direct checkout from source control. Not always possible if your production boxes lack development tools, specifically source code management tools.
Check out the source locally, then tar/zip it up and use scp or rsync to copy the tarball over. This is sometimes preferred for something like an Amazon EC2 deployment, where a compressed tarball can save time/bandwidth.
Check out the source locally, then rsync it over to the production box (see the sketch below).
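For example, a bare-bones version of the last two options (host and paths are placeholders):

# tarball + scp
tar czf app.tar.gz -C build .
scp app.tar.gz user@prod:/srv/app/releases/

# or rsync the tree directly
rsync -az --delete build/ user@prod:/srv/app/current/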
Packaging Tools
Use your OS's packaging system to generate packages containing the files for your app. Create a master package that has as dependencies the other packages you need. The RubyWorks system is an example of this, used to deploy a Rails stack and sample application. Then it's a matter of using apt, yum/rpm, Windows msi, or whatever to deploy a given version. Rollback involves uninstalling and reinstalling an old version.
General Tools Aimed at Installing Apps/Configs and Maintaining a Set of Systems
These tools do not specifically target the problem of deploying a web app, but rather the more general problem of deploying/maintaining Apps/Configs for a set of servers, or an entire company's workstations. They are aimed more at the system administrator than the web developer, though either can find them useful.
Cfengine is a tool in this category.
Puppet aims to improve on Cfengine. It's got a learning curve but many find it worth the time to figure out how to do the configs. Once you've got it going, each box checks the central server periodically and makes sure everything is up to date. If someone edits a file or changes a permission, this is detected and corrected. So, unlike the deployment tools above, Puppet not only puts files in the right place for you, it ensures they stay that way.
Chef is a little younger than Puppet with a similar approach.
Smartfrog is another tool in this category.
Ansible works with plain YAML files and does not require agents running on the servers it manages.
For a comparison of these and many more tools in this category, see the Wikipedia article, Comparison of open source configuration management software.
Take a look at the cfengine tutorial to see if cfengine looks like the right tool for your situation. It may be a little too complicated for a small website, but if it is going to involve more computers and more configuration in the future, at some point you will end up using cfengine or something like that.
Create your own packages in the format your distribution uses, e.g. Debian packages (.deb). These can either be copied to each machine and installed manually, or you can set up your own repository, and add it to your list of sources.
Your packages should be set up so that the scripts they contain consult a configuration file, which is different on each host, depending on what scripts need to be run on each.
To tie it all together, you can create a meta package that just depends on each of the other packages you create. That way, when you set up a new server, you install that one meta package, and the other packages are brought in as dependencies.
Although this process sounds a bit complicated, if you have many scripts and many hosts to deploy them to, it can really pay off in the long run.
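In rough outline, the flow looks like this (the package name and layout are made up for illustration; the staging tree mypkg/ contains a DEBIAN/control file listing the name, version, and dependencies):

dpkg-deb --build mypkg mypkg_1.0_all.deb
scp mypkg_1.0_all.deb server:
ssh server sudo dpkg -i mypkg_1.0_all.deb

With your own repository added to each host's list of sources, the last two steps reduce to a plain apt-get install mypkg.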
I have to roll out PHP scripts and Apache configurations to several customers on a frequent basis. Since they all run Debian Linux, I've set up a Debian package repository on my server, and all the customers have to do is type apt-get upgrade and they get the latest versions.
The first thing to do is get all these scripts into a source control repository (svn or git are good) so that you can track changes to these scripts over time.
If you are interested in Ruby, check out Capistrano; it is well suited to deploying things to multiple machines in a cluster, and is fairly easy to set up. It can read files directly from your version control system.
Puppet is another tool that can be used in this situation. It is similar to cfengine: you create a model of the desired deployment, and Puppet figures out how to get the environment into that state.