TYPO3 Composer installation - differences to non-Composer

I am currently working on migrating a non-Composer TYPO3 project to Composer.
From what I have seen and read, the autoloading changes and the file structure changes. You can no longer import extensions via the Extension Manager. Also, some extensions can only be installed via Composer (unless you resort to ugly tweaking).
Also, from a security point of view, at least one attack vector is shut down because you can no longer install extensions via the backend.
The documentation and others state that using TYPO3 with Composer is recommended, but what exactly are the advantages, and do they apply in all cases?
For example, in my case, I maintain one site, and updating both TYPO3 and extensions is easy. In practice there will probably be no huge improvement. I think working with a package manager is cool, but I also have to "sell" why I am doing this now while other tasks are not getting done.

The advantages of running TYPO3 v8/v9/v10+ in composer-based mode:
Speed improvements for PHP, as the autoloader only includes the requested extensions (and the system extensions in use)
Only the system extensions you are using are available in your production site
If you are using Git, you do not need to commit your typo3/ folder or your vendor folder with Composer-based extensions, which makes your Git repository much smaller
You can add further PHP libraries via Composer that you need in your site, and can fine-tune your dependencies based on your PHP environment
The differences:
You cannot use the Extension Manager to download extensions from TER anymore; you have to use the CLI for fetching third-party or system extensions
You still need to activate these extensions in the Extension Manager or via TYPO3 Console to update your PackageStates.php file (see the sketch below).
All your local extensions need to either a) be added via autoload sections in your root composer.json or b) include their own composer.json with an autoload section in there.
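A minimal sketch of that workflow, assuming the well-known "news" extension as a stand-in (commands for TYPO3 v9+; details depend on your setup):

composer require georgringer/news          # fetch a third-party extension from Packagist instead of TER
vendor/bin/typo3 extension:activate news   # regenerate PackageStates.php from the CLI

For a local extension without its own composer.json, a hypothetical autoload entry in the root composer.json could look like this:

"autoload": {
    "psr-4": {
        "MyVendor\\MySitepackage\\": "packages/my_sitepackage/Classes/"
    }
}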
The statement about the folder structure is wrong; in fact, Composer allows you to configure your vendor folder OUTSIDE of your document root, which IMHO makes your website more secure, but you do not have to.
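With the TYPO3 Composer installers this is usually done by keeping composer.json (and therefore vendor/) one level above the document root and pointing the web directory into a subfolder; a sketch, where "public" is just a common naming choice:

"extra": {
    "typo3/cms": {
        "web-dir": "public"
    }
}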
In general, if you are not used to using a CLI, Git, or a deployment strategy, Composer is actually not a practical solution for TYPO3 users.

I totally get your point that you have to sell it.
Getting your project running with Composer isn't that hard, but it does come with some pitfalls. Once you get familiar with some basic commands, though, you will notice that it lowers your working time.
Not in all, but in many cases, managing your project with command line tools like https://github.com/TYPO3-Console/TYPO3-Console gives you more powerful and faster tools to work with. And be honest, typing in a console always looks cool.
So yes, even for a 'small site', learning a new technology always pays off in the long run. You don't sell time, you sell expertise. The learning price you do have to pay yourself, but you get to keep the joy you feel when you get it to work. Win-win.

Related

Configure dependencies in RPM

I have built an RPM package for CentOS 6.6 that is installed on one of our customer's machines.
This package contains our own software, customized for the specific use case, but also uses the open-source package HAProxy.
HAProxy (RPM version 1.5.4-2.el6_7.1) comes with a default configuration in /etc/haproxy/haproxy.conf, and it cannot be customized without changing this file.
But I want the configuration to be part of my generated package. RPM throws an error if the /etc/haproxy/haproxy.conf file is in my package, because it is also part of the haproxy package.
I have worked around this problem by providing a custom upstart script which starts HAProxy with a different config file, but this does not seem to be the right way to do it.
Is there a preferred way to handle such customizations?
In cases like this, I've created an RPM which installs configuration files into a different subdirectory, and in its %post and %preun scriptlets modifies the uncooperative package's config-files:
when installing, I renamed the original config-files, and made symbolic links from those pathnames to the overwriting config-files, and
when uninstalling, the package removed the symbolic links and restored the original package's files.
Doing it that way of course meant that my config-RPM was dependent on the original RPM. A little awkward to describe, but it works.
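A sketch of those scriptlets (paths are illustrative; the config RPM ships its file as /etc/haproxy/haproxy.conf.custom, and as written this covers plain install/remove only - see the note about upgrades below):

%post
mv /etc/haproxy/haproxy.conf /etc/haproxy/haproxy.conf.orig
ln -s /etc/haproxy/haproxy.conf.custom /etc/haproxy/haproxy.conf

%preun
rm -f /etc/haproxy/haproxy.conf
mv /etc/haproxy/haproxy.conf.orig /etc/haproxy/haproxy.conf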
In a follow-up, the issue of updating was mentioned. Updating an RPM requires special handling to avoid uninstalling things. The rpm program passes a parameter $1, which you can test in the %pre and %preun scriptlets to notice that this is an upgrade and that there is no need to save the original config files (or restore them). The rest of the scriptlet would be the same, copying the new versions of your config files over the others.
Further reading:
Defining installation scripts (shows the use of $1)
RPM upgrade uninstalls the RPM
Your approach is correct. On EL6 with SysV init there is no choice other than creating a custom haproxy package, a custom haproxy service, or a script which the customer runs after installation. I see creating another service as the best option.
Note that on EL7 with systemd you have a much better option, as you can use systemd's drop-in feature. For more information see:
https://coreos.com/os/docs/latest/using-systemd-drop-in-units.html
https://wiki.archlinux.org/index.php/systemd#Drop-in_snippets
https://wiki.archlinux.org/index.php/Systemd/User#Service_example
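A minimal drop-in sketch (the directory is derived from the unit name; the override file name and the haproxy flags are illustrative):

# /etc/systemd/system/haproxy.service.d/custom-config.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/haproxy -f /etc/haproxy/custom.cfg -p /run/haproxy.pid

Run systemctl daemon-reload afterwards so systemd picks up the override.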
The usual way this is done is to have a drop-in configuration directory, e.g. /etc/httpd/conf.d/, where your package would drop its configuration, and you would tell the other daemon, e.g. httpd, to do a graceful restart in your %post/%postun.
I don't know anything about HAProxy, but a quick search implies that they do not support this configuration directory concept that has been around for many years. A few people have hacked it in, but unless it is out-of-the-box, you will run into your original problem again.

Deployments: MSI packages vs Scripts

I have been tasked with looking into our deployments, and seeing where they can be streamlined. Right now we have 4 different configurations (Debug/Dev, Test, Staging, Release) and 4 *.config files. We have a task that will overwrite app/web.config with the appropriate *.config pre-build time based on the active configuration. An MSI is created, and we do a full deployment of the component on release night.
This is not entirely ideal, because if we change something in a config file, or fix a spelling mistake in a specific view, we have to re-deploy the entire thing. Not to mention that the MSI will occasionally require a reboot. One other option that has been brought up is that instead of creating MSIs we could create custom deployment/rollback scripts and have the ability to do incremental releases.
Has anyone here tried deployments both ways? What are some of the pros/cons you have found? Is there a third way we haven't thought of?
edit: Just to clarify a few things... We don't deploy to customers. All software is deployed to our servers (a few sites and a lot of Windows services). We never change things in production. We actually use the built-in system within VS to create the MSI, so that part isn't the terrible part. To me it just doesn't make sense to redeploy an entire website if you had to change one view. We also have to deploy to multiple servers. Right now that is done by running the MSI on each one.
MSI pros:
Application/service/site gets installed and registered like most other Windows apps, and shows up in Add/Remove programs
Some built-in support for re-installing, upgrading
Has some built-in support for installing Windows services/IIS sites/lower-level Windows features
MSI cons:
Seems really cryptic once you get "under the hood"
Seems more difficult to customize than using a custom script
Script pros:
Easier to customize, although certain steps might require a lot of cryptic scripting (working with IIS, lower-level computer administration)
Don't have to deal with low-level weirdness of MSI
Script cons:
.bat scripting is not the most readable or writable language. (PowerShell is better, but then you have to worry about whether PowerShell is installed on the target machine.)
Low-level operations require a lot of administrative scripting for commit/rollback behavior
No built-in support for installing or rolling back (MSI has some support built in)
One thing I've come across that helps with MSIs is WiX (http://wix.sourceforge.net/), but even WiX seems pretty cryptic in a lot of ways. We use a combination of MSBuild and WiX to do automated builds and deployment/installs, and it works okay for us.
Overall, I'd probably lean more towards doing MSI/WiX (or other installer toolkit) deployments over scripts. MSIs are the standard way of doing installs on Windows, and once you get it working, you usually don't have to change too much. MSBuild or some other build framework (NAnt, etc.), can be useful for setting up the deployment (renaming files, doing string replacements, etc.), before putting together the final MSI package.
Having run a dev company that built web apps for five years, we struggled with this and tried a bunch of solutions. Here are a couple of tips:
Always replace the entire web directory with your code (except for content generated by the web site, like a CMS). It's pretty fast to do this, and incremental deployments can introduce phantom bugs if files are left around.
Have your build process (NAnt, MSBuild, whatever) modify the .config files for each environment, and build for the environment you are pushing to. Alternatively you can use registry settings so that the .config files are the same, but that means a dedicated machine for each environment. May or may not be an issue.
Don't make changes in production. If you need to make changes (spelling errors on site) make those top priority to get changed in dev so that you don't overwrite them with the next push.
If you aren't using MSIs, then make sure you have a rollback process. Keeping a copy of the site just before you changed it really helps when something unexplained goes sideways during a roll-out.
I don't know that these tips point to MSI or script; I think it's a matter of which you are most comfortable with. MSIs can be hard to customize, but easy to run and manage. Microsoft has lots of tools for managing roll-outs of MSIs across an organization or farm. Scripts may require custom tooling or lots of manual work on the production end.
We ran scripts with NAnt and a custom deployment harness. These days (VS2008) building deployment packages is much easier.
Your best option is to get a decent MSI builder to do the job; I'm talking about InstallShield etc. (there are a couple, so do look around). While these invariably cost money, they can save you a huge amount of time/money/pain further down the track. Having said that, the pain is not totally eliminated, just reduced :)
Anything tricky you need to do can be done as a custom action within the MSI - and you can even do this with the setup builder that comes with Visual Studio (if you are using VS).
I have a suggestion for your config files: include all four in the MSI, and then have a public property which can be set from the command line. You can then use that public property to install the appropriate config file (and have the default value of the property set so that the release config gets installed). That way, your customers just use the MSI and get the correct config file, but your test team can get their config file by changing the value of the public property; the command line they would use to do the install is this:
msiexec /i "MyInstaller.msi" CONFIG=test
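Under that scheme the other environments are driven the same way; with the default described above, leaving the property off installs the release config:

msiexec /i "MyInstaller.msi" CONFIG=debug
msiexec /i "MyInstaller.msi" CONFIG=staging
msiexec /i "MyInstaller.msi"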
You can do install scripts quite easily, but as already mentioned you also need to script the uninstall. Using install scripts precludes you from getting Windows certification for your product, should you look at getting that done. But that doesn't mean you shouldn't use install scripts; they may be the perfect fit for your needs. Alternatively, you may look at using a combined script/MSI approach by having your scripts run as custom actions from within the MSI.

How do I use rpm to update/replace existing files?

I have several applications that I wish to deploy using rpm. Some of the files in my application deployments override files from other deployed packages. Simply including the new files in the deployment package will cause rpm conflicts.
I am looking for the proper way to use rpm to update/replace already installed files.
I have already come up with a few solutions but nothing seems quite right.
Maintain custom versions of the rpms containing the original files.
This seems like a large amount of work for a relatively small reward even though it feels less like a hack than some of the other possible solutions.
Include the files in the rpm with another name and copy them over in the post section.
This would work but will mean littering the system with multiple copies of the files. Also it means additional maintenance in the rpm build spec for each file.
Use wget in the post section to replace the original files from some known server.
This is similar to the copy technique but the files wouldn't even live in the rpm. This might act like a nice central configuration authority though.
Deploy the files as new files, then use symlinks to override the originals.
This is also similar to the copy technique but with less clutter. The problem here is that some files don't behave well as symlinks.
To the best of my knowledge, RPM is not designed to permit updating / replacing existing files, so anything that you do is going to be a hack.
Of the options you list, I'd choose #1 as the least bad hack if the target systems are systems that I admin (as you say, it's more work but is the cleanest solution) and a combination of #2 and #4 (symlinks where possible, copies where not) if I'm creating the RPMs for others' systems (to avoid having to distribute a bunch of RPMs, but I'd make it very clear in the docs what I'm doing).
You haven't described which files need to be updated or replaced and how they need to be updated. Depending on the answers to those questions, you may have a couple of other options:
Many programs are designed to use a single default configuration file and also to grab configuration files from a .d subdirectory. For example, Apache uses /etc/httpd/conf/httpd.conf and /etc/httpd/conf.d/*.conf, so your RPMs could drop files under /etc/httpd/conf.d instead of modifying /etc/httpd/conf/httpd.conf. And if the files that you need to modify are config files that don't follow this pattern but could be made to, you can suggest to the package maintainers that they add this capability; this wouldn't help you immediately but would make future releases easier.
For command-line utilities like sendmail and lpr that can be provided by multiple packages, the alternatives system (see man alternatives) permits more than 1 RPM that provides these utilities to be installed side by side. Again, if the files that you need to modify are command-line utilities that don't follow this pattern but could be made to, you can suggest to the package maintainers that they add this capability.
Config file changes on systems that you administer are better managed through a tool like Cfengine or Puppet rather than through custom RPMs. I think that Red Hat favors Puppet.
If I were creating the RPMs for systems I don't administer, I'd consider using a third-party tool like Bitrock and dumping all of my stuff under /opt just so I wouldn't have to stomp on files installed by other admins' RPMs.
Edit (2019): Nowadays, Software Collections offers a useful alternative. You can create packages that install somewhere under /opt, and the Software Collections tools offer a standardized way for users to opt in to using those instead of whatever's normally installed under /usr. Red Hat uses this to distribute newer versions of tools for their otherwise stable and long-lived (i.e., older) Red Hat Enterprise Linux distributions.
You can also execute rpm -U --replacefiles --replacepkgs ..., which will give you what you want.
See here for more info on RPM %files directives:
http://www.rpm.org/max-rpm/s1-rpm-inside-files-list-directives.html
You can use the argument passed to the %post and %pre sections in the RPM scriptlets to determine whether you are installing, upgrading, or removing packages.
If $1 is 0 - then we're removing old stuff. Targeting 0 packages installed.
If $1 is 1 - then we're installing new stuff. Targeting a total of 1 package to be installed.
If $1 is 2 or more - then we're upgrading this package and $1 represents the number of packages already installed.
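A skeleton that branches on those values (the echo lines stand in for real actions):

%post
if [ "$1" -eq 1 ] ; then
    echo "first install"
elif [ "$1" -ge 2 ] ; then
    echo "upgrade"    # the old version's %preun/%postun will then run with $1 = 1
fi

%preun
if [ "$1" -eq 0 ] ; then
    echo "full removal - clean up"
fi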
These sections help with managing files among the versions.
Keep track of what you're doing between versions and consider what one might do if they were to skip a version or two.
Have consideration for these things and you should be good to go!

What do you expect from a package manager for Emacs? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. (Closed 5 years ago.)
Although several thousand Emacs Lisp libraries exist, GNU Emacs did not, until version 24.1, have an (internal) package manager.
I guess that most users would agree that it is currently rather inconvenient to find, install, and especially keep Emacs Lisp libraries up to date.
Pages that make life a bit easier
For versions of Emacs older than 24.1:
Emacs Lisp List - Problem: I see dead people (links).
Emacswiki - Problem: May contain traces of nuts (malicious code).
Emacsmirror - The package repository I am working on. Problem: No package manager supports it natively yet.
Some package managers
It's not that nobody has tried yet. (Some of these did not exist when this question was asked.)
auto-install
borg.el - Assimilate Emacs packages using Git submodules.
el-get.el - Supports many sources.
elinstall.el
epackage aka DELPS - Debian packaging concepts applied to Emacs Lisp packages.
epkg.el - This is now just a tool for browsing the Emacsmirror.
install.el
install-elisp.el
jem-pkg.el
package.el - ELPA. Seems like it will be included in Emacs 24.
UPDATE -- package.el is included in GNU Emacs, starting with version 24.1
pases.el
pelm - Command line installer; using php.
plugin.el
straight.el - Recent and experimental, has not reached 1.0 release yet.
use-package.el
XEmacs package manager
package.el has been included in the Emacs trunk. epkg is not ready yet and is also currently not available. At least install-elisp, plugin, and use-package do not seem to be actively maintained anymore.
I have created a git repository containing all these package managers as submodules.
Some utilities that might be useful
Package managers could use these utilities and/or they could be used to maintain a mirror of packages.
date-calc.el - Date calculation and parsing routines.
ell.el - Browse the Emacs Lisp List.
elm.el, elx.el, xpkg.el - Used to maintain the Emacsmirror.
genauto.el - Helps generate autoloads for your elisp packages.
inversion.el - Require specific package versions.
loadhist.el, lib-requires.el, elisp-depend.el - Commands to list Emacs Lisp library dependencies.
project-root.el - Define a project root and take actions based upon it.
strptime.el - Partial implementation of POSIX date and time parsing.
wikirel.el - Visit relevant pages on the Emacs Wiki.
Discussions about the subject at hand
emacs-devel 20080801
comp.emacs 20021121
RationalElispPackaging
The question (finally)
So - I would like to know from you what you consider important/unimportant/supplementary etc. in a package manager for Emacs.
Some ideas
Many packages (the Emacsmirror provides the largest available collection of packages, but there is no explicit support in any package manager yet).
Only packages that have been tested.
Support for more than one package archive (so people can choose between many/tested packages).
Dependencies calculated based on required features only.
Dependencies take particular versions into account.
Only use versions that have been released upstream.
Use versions from version control systems if available.
Packages are categorized.
Packages can be uninstalled and updated, not only installed.
Support creating fork of upstream version of packages.
Support publishing these forks.
Support choosing a fork.
After installation packages are activated.
Generate autoload files.
Integration with Emacswiki (see wikirel.el).
Users can tag, comment etc. packages and share that information.
Only FSF-assigned/GPL/FOSS software or don't care about license.
The package manager should be integrated with and distributed with Emacs.
Support for easily contacting author.
Lots of metadata.
Suggest alternatives before installing a particular package.
I am hoping for these kinds of answers
Pointers to more implementations, discussions etc.
Lengthy descriptions of a set of features that make up your ideal package manager.
Descriptions of one particular desired/undesired feature. Feel free to elaborate on my ideas from above.
Surprise me.
I'm still learning Emacs, so I haven't had a chance to look into package managers, but a great feature would be to inform the user that the package is available if they try to use it but it's not on their system. For example, I wanted to edit a PHP file on a server once, and I tried
M-x php-mode
and Emacs was all like
M-x php-mode [no match]
when it should have been like
php-mode available from ftp.gnu.org. install? (y/n)
and then it would have installed and loaded up php-mode for me. That would have made my day right there.
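With package.el as it eventually shipped, the closest equivalent is an explicit
M-x package-install RET php-mode RET
which fetches the package from a configured archive - though it still won't offer to install anything on a [no match].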
Automatic publishing from version control
I'd love to see a standard, central, and single Emacs package manager. Right now, I'd put my money on ELPA, but there is still a long way to go.
The biggest thing that would help an Emacs package manager would be to make it super trivial to publish packages. In my opinion, I'd like to see this happen in combination with a version control system like git on a central hosted platform like GitHub -- something that would make it easy for authors to publish their packages and would make it easy for others to contribute back.
Similar to how GitHub (used to) make it easy to publish RubyGems, I'd like to see something similar in an Emacs package manager. For example, tag your repository with "vX.Y.Z" and have your elisp goodness automatically available to all.
The added benefit of using a popular backend like GitHub is that you'd immediately get a lot of exposure which should help drive its success.
What I expect most is that everything useful is on it, and works well. This requires you (or a team of maintainers) to aggressively pursue packaging everything for it, and doing whatever that involves — emailing every author of a useful package, and so on.
For instance, the reason Debian (and its derivatives: Ubuntu etc.) is so good is that you can happily use your system without ever having to install something outside the repositories, and that everything on it is thoroughly tested. The actual features of the package manager are important, but secondary to the managed packages themselves.
Easy configuration synchronization: I, like many people, use Emacs on many different computers and servers, some of them my own and some not. It would be amazing if the package manager had some sort of file which I could transfer from one computer to another; then, on the latter computer, the package manager would bring my Emacs into the state I like it in -- all the packages installed and configurations set. Combined with the ability to be able to easily install either site-wide (if one has root permissions) or as a single user, I could synchronize all of Emacsen everywhere.
I'm nearly positive that the best solution involves submitting more packages to ELPA and adding multi-source support to package.el. The Emacs maintainers have said that they would consider including package.el in version 24 as long as it pointed to an FSF repository by default.
Of course, submission also needs to be an automated process too; the current method of mailing the ELPA maintainer only works on a small scale.
No matter how this is done, the most important thing in my opinion is that it should be trivial to submit packages to the repository. At the same time, we do not want those packages to be instantly available, to guard against malicious code (and due to licensing issues). Unless, that is, there is a "trust" system in place, based on crypto signatures.
Also useful:
"metapackages", to install several packages at once.
In the same way, we should be able to install a set of elisp files, for maintainability
"Broken" packages should not be allowed to disrupt Emacs startup. This is easy and I have it implemented in my own .emacs
Ability to install files other than scripts. This is often overlooked, but very useful. You'd be able to, for instance, ship images, for icons, toolbars, etc.
Versioning: package X requires package Y > 1.0
Testing: perform basic sanity checks, testing for conflicts (keybindings, function redefinitions, functions that are expected to be present but aren't, etc).
BUG TRACKING: I can't stress the importance of this enough. Having a centralized place to report package bugs (and being able to track them) is extremely important to assure the quality of the packages.
Some sort of compressed archive seems to be best to do some of the above.
So far, a much improved ELPA seems the way to go.
I once spent some time writing a small package manager for Emacs.
http://gmarceau.qc.ca/plugin.el
I wrote:
Plugin is my attempt at creating a package manager for Emacs. Plugin will automatically download Emacs extensions, unpack them in a directory, add that directory to the load-path, generate auto-load annotations, and modify your dot-emacs file. The auto-load annotations are a little-known feature of Emacs. Once they are generated, Emacs extensions load quickly and incrementally, which is really nice if you have as many extensions installed as I do.
You will need two library files to get it to run, loop-constructs.el and record.el
I think the hackers for the iPhone got quite close to what I want, as does Ubuntu's "apt".
I like to be able to:
add
remove (package only)
remove user settings
view documentation
upgrade (after reading the change log)
add new archive (aka add repository)
see dependencies
see version
search for name, keyword
browse by (date added, date modified, name)
save all installed packages & settings
load set of packages & settings
I would like a main set of things that all work nicely and are the recommended way of doing whatever. Then a global set where everything working gets in. Then the ability for anyone to host their own archive.
It would be nice if this was all tied into git/svn/whatever so that you could install old versions. Make your own patches by forking off etc etc etc....
Besides what was mentioned above, I expect something like Debian and other repositories: a set of stable, experimental, and untested packages. I also want the ability to add my own repositories - I use a lot of packages directly from VCS, so it could be useful to create my own packages.
I think that the package manager should take a lot of inspiration from Rubygems. I also think that it should have a site like Gemcutter.
A central repository could also be nice (like Emacsmirror). This however may not be necessary if a site like Gemcutter exists that collects all packages.
I think these things are important for this to work.
Central location of some kind that collects all packages
Easy to add packages
Easy to maintain packages
Easy to contribute to other packages
Easy to install, uninstall and update packages
Possibility to add package dependencies
Common structure for all packages
So a package manager like Rubygems with a site like Gemcutter and a central repository like Emacsmirror (preferably on GitHub because of its social coding) would do Emacs really good.
All in all I think that much inspiration should be taken from Rails and how Rails handles Gems.
I don't know how fresh this question is...
but the model I would like to see is CPAN. I also don't know Rubygems but it sounds similar to CPAN.
CPAN is a Perl archive + library management system. When I need to write a Perl program that requires FTP or SOAP or JSON or XML or ZIP, or whatever else, I can run the CPAN package manager, select the requisite package for download, view and verify the dependencies, then install everything. CPAN is mirrored "everywhere".
CPAN works wonderfully for my purposes, and something similar for emacs would be nice to have. It also supports building C/C++ code on demand.
That's what I would like to see in emacs.
Some further comment on requirements.
explicit download of packages. No auto-install. No invisible downloads. I want to ask for new libraries or new functionality.
I should be able to list the name/version/timestamp of installed packages.
If my friend gives me his list, I should be able to diff his emacs state against mine.
check-for-updates function. What updates are available? What do they fix?
dependency checking, verification, and download. If I install csharp-mode and it requires v5.0.28 of cc-mode, then it should confirm with me that I must also download cc-mode.
there should be some sort of community ranking of these packages, like ranking torrents on isohunt. I want to see if a package has 3 upvotes or 3000.
"transactional" behavior. If an install goes boom, it must unwind to a last-known-good state.
failsafes. If I've put custom mods in linum.el, it should refuse to install a new version over my changes, unless I explicitly allow it. It should warn me before even starting. Do this with checksums/md5's over the existing install.
have the option of running some packages from compressed archives, like zip files, so I never have any doubt that I have not updated any of the embedded elisp.
ability to use mirrored hosts for package distribution.
all this functionality should be accessible through M-x library-management or something.
Finally, it would be nice to have a way to segregate or organize libraries of functions. Hierarchical namespaces. Emacs' flat namespace is very dated. This is sort of independent but complementary to the core function of package management. I'm not a lisp guru so I don't know how hard this would be; maybe there is already a way to do it.
Package managers don't offer anything I value w.r.t. single-file elisp packages with simple dependencies: adding and deleting from site-lisp has never caused problems. It's packages that depend on external programs (e.g., ispell), multi-file packages (e.g., auctex, org-mode) that can be tricky. Can't think of any single-file elisp package with nontrivial dependencies, offhand.
For these, short of a package manager, I'd like emacs' elisp-packages to acquire test suites which can be run en masse, and which provide useful information in the event of dependency failures.

Best practices for deploying tools & scripts to production?

I've got a number of batch processes that run behind the scenes for a Linux/PHP website. They are starting to grow in number and complexity, so I want to bring a small amount of process to bear on them.
My source tree has a bunch of cpp files and scripts, organized with development but not deployment in mind. After compiling all the executables, I need to put various scripts and binaries on a cluster of machines. Different machines need different executables, scripts, and config files for their batch processes. I also have a few tools that I've written that belong on every machine. At the moment, this deployment process is manual and error-prone.
I'm guessing I'm just going to end up with a script that runs at the root of the source tree and builds a smaller tree of everything necessary for any of the machines. Then, I'll just rsync that to the appropriate machines. But I'm curious how other people are managing this type of problem. Any ideas?
There are several categories of tools here. Some people use a combination of tools from these categories. I sometimes use, for example, both Puppet and Capistrano. See Puppet or Capistrano - Use the Right Tool for the Job for a discussion.
Scripting Tools aimed at Deploying an Application:
The general pattern with tools in this category is that you create a script and/or config file, often with sets of commands similar to a Makefile, and the tool will ssh over to your production box, do a checkout of your source, and run whatever other steps are necessary.
Tools in this area usually have facilities for rollback to a previous version. So they'll check out your source to a directory under releases/, and create a symbolic link from "current" to that latest release if all goes well. If there's a problem, you can revert to the previous version by running a command that re-points "current" at the previous releases/ directory.
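On disk that rollback is nothing more than re-pointing one symlink; a sketch with made-up paths:

ln -sfn /srv/app/releases/20190101120000 /srv/app/current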
Capistrano comes from the Rails community but is general-purpose. Users of Capistrano may be interested in deprec, a set of deployment recipes for Capistrano.
Vlad the Deployer is an alternative to Capistrano, again from the Rails community.
Write your own shell script or Makefile.
Options for getting the files to the production box:
Direct checkout from source. Not always possible if your production boxes lack development tools, specifically source code management tools.
Checkout source locally, then tar/zip it up. Use scp or rsync to copy the tarball over. This is sometimes preferred for something like an Amazon EC2 deployment, where a compressed tarball can save time/bandwidth.
Checkout source locally, then rsync it over to the production box.
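For the stage-and-rsync plan from the question, a minimal sketch (host name, paths, and file layout are placeholders):

#!/bin/sh
set -e
# build everything, stage only what production needs, then sync
make all
rm -rf stage && mkdir -p stage/bin stage/scripts
cp build/batch_* stage/bin/
cp scripts/*.sh stage/scripts/
rsync -az --delete stage/ deploy@batch-host:/opt/batch/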
Packaging Tools
Use your OS's packaging system to generate packages containing the files for your app. Create a master package that has as dependencies the other packages you need. The RubyWorks system is an example of this, used to deploy a Rails stack and sample application. Then it's a matter of using apt, yum/rpm, Windows msi, or whatever to deploy a given version. Rollback involves uninstalling and reinstalling an old version.
General Tools Aimed at Installing Apps/Configs and Maintaining a Set of Systems
These tools do not specifically target the problem of deploying a web app, but rather the more general problem of deploying/maintaining Apps/Configs for a set of servers, or an entire company's workstations. They are aimed more at the system administrator than the web developer, though either can find them useful.
Cfengine is a tool in this category.
Puppet aims to improve on Cfengine. It's got a learning curve but many find it worth the time to figure out how to do the configs. Once you've got it going, each box checks the central server periodically and makes sure everything is up to date. If someone edits a file or changes a permission, this is detected and corrected. So, unlike the deployment tools above, Puppet not only puts files in the right place for you, it ensures they stay that way.
Chef is a little younger than Puppet with a similar approach.
Smartfrog is another tool in this category.
Ansible works with plain YAML files and does not require agents running on the servers it manages.
For a comparison of these and many more tools in this category, see the Wikipedia article, Comparison of open source configuration management software.
Take a look at the cfengine tutorial to see if cfengine looks like the right tool for your situation. It may be a little too complicated for a small website, but if it is going to involve more computers and more configuration in the future, at some point you will end up using cfengine or something like that.
Create your own packages in the format your distribution uses, e.g. Debian packages (.deb). These can either be copied to each machine and installed manually, or you can set up your own repository, and add it to your list of sources.
Your packages should be set up so that the scripts they contain consult a configuration file, which is different on each host, depending on what scripts need to be run on each.
To tie it all together, you can create a meta package that just depends on each of the other packages you create. That way, when you set up a new server, you install that one meta package, and the other packages are brought in as dependencies.
Although this process sounds a bit complicated, if you have many scripts and many hosts to deploy them to, it can really pay off in the long run.
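A sketch of what the meta package's control stanza might look like (package names are invented; a real control file also needs the usual Version/Architecture/Maintainer fields):

Package: mycompany-batch-all
Depends: mycompany-batch-core, mycompany-batch-reports, mycompany-batch-cron
Description: meta package that pulls in all of our batch tooling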
I have to roll out PHP scripts and Apache configurations to several customers on a frequent basis. Since they all run Debian Linux, I've set up a Debian package repository on my server, and all the customer has to do is type apt-get upgrade and they get the latest version.
The first thing to do is get all these scripts into a source control repository (svn or git are good) so that you can track changes to these scripts over time.
If you are interested in Ruby, check out Capistrano; it is well suited to deploying things to multiple machines in a cluster, and is fairly easy to set up. It can read files directly from your version control system.
Puppet is another tool that can be used in this situation. It is similar to cfengine - you create a model of the desired deployment, and Puppet figures out how to get the environment to that state.