.war files packaged as RPMs for JBoss/Tomcat - best practices?

I'm planning to package WAR files as RPMs. Our current deployment process just doesn't work for us, and my idea for fixing it is to create a new custom channel in RHN Satellite and publish my WAR files through that channel. Currently (as we are trying to win some time) I'm managing some config files through Satellite, so configs are not a big problem. We don't keep them in the WAR for many reasons, but that's a different story.
Anyway, has anyone packaged WARs as RPMs? Do you do hot deployment, or do you force JBoss/Tomcat to restart? Does that happen after the RPM installation or as part of it? What does your SPEC file look like? Could I please see one as an example? Do you check in your SPEC for JBoss/Java/the Oracle client, or do you just install the WAR? Any stories to tell? Any major problems?
Should I consider something else?
I can build RPMs, no problem, but I'd like to hear the best way of doing this with WAR files and JBoss (some Tomcats are still running here, but they will be phased out soon-ish, so I'm not too worried about them).
I'd appreciate any input.
Thanks in advance
Kind Regards
Chris

I have packaged a WAR file and its companion config file as an RPM. My SPEC file checks whether JBoss is running; if it is, the installation exits with an error message requesting that JBoss be stopped before installation.
In general, it is not a good idea to kill processes or force restarts through RPMs. The person in charge of installation should have a separate procedure for that.
Other considerations:
1. Does your organization have other installations on the same server?
If so, you might want to standardize on a path under which all installations on the server will go. (For example, /usr/local/bin/myorg/)
It might also help to have a custom *nix user and group used to set file ownership for all installations on the server.
2. Do you want a relocatable RPM?
Is there ever a chance that you might want to change the default path for your installation? If so, a relocatable RPM will help.
There are conditions under which a relocatable RPM may not work, so see the considerations at www.rpm.org, and the install example after this list.
3. Do you want to back up an existing deployment when your RPM runs?
If so, you would need to write that code in your SPEC file.
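On point 2: a relocatable package declares a Prefix (as the SPEC below does), so the default path can be overridden at install time. A hypothetical example, using the package name from the SPEC below:
rpm -ivh --prefix /opt/myorg my-java-project-2.1.2-5.noarch.rpm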
Here is my complete SPEC file:
Summary: Summary for my Java project
Name: my-java-project
Version: 2.1.2
Release: 5
Requires: jboss >= 5.1
BuildArch: noarch
Group: Applications/Internet
Prefix: /usr/local/bin
License: (C) Copyright my organization
Vendor: my organization
%description
Description for my Java project
%prep
# Check if the WAR file has been created
%install
# Copy war file to buildroot's Jboss deployment directory
# Copy config file to buildroot's Jboss config directory
%files
# Set file permissions and ownership
%pre
# Check if JBoss deployment path exists on the web server.
# If not, exit with an error.
# Check if JBoss config file path exists on the web server.
# If not, exit with an error.
# Check if custom user 'myuser' exists. If not, exit with an error.
# Check if custom group 'mygroup' exists. If not, exit with an error.
# Check if JBOSS is running. If yes, exit with an error.
# Take backup of existing deployment, if needed.
%post
# Perform post-installation steps, if needed.
echo "Installation complete."

We plan to somehow standardize Java webapp installation on Fedora, but there is nothing yet.
The Debian folks created a rather nice proposal, http://dep.debian.net/deps/dep7, which you should probably read to get a rough idea of what you would have to deal with.
As for JBoss/Tomcat restarts, I would advise against them. Leave that to the system (web server) administrator; they should know what they are updating and why. Forcing a restart of the web server on every update is asking for trouble, IMO (especially if one server hosts multiple webapps).
I would put the WARs in /usr/share/webapps-java and then have JBoss/Tomcat use that directory. That said, I have no spec files to share, sorry.

Related

PowerShell Module Deployment Duplication

I am using Azure DevOps to deploy PowerShell modules to a server. This release task deploys the modules to the directory C:\Windows\System32\WindowsPowerShell\v1.0\Modules\. I am able to use the modules once they are deployed to this folder successfully.
If I modify one of the modules and re-release it, the file in C:\Windows\System32\WindowsPowerShell\v1.0\Modules\ gets updated; however, the old version of the module is still used when running from a batch file using pwsh.
I discovered that the module file also exists in the following paths:
C:\Program Files\PowerShell\Modules\
C:\Program Files\PowerShell\6\Modules\
When deploying the new version using Azure DevOps, the old versions in the above two directories are not updated. Manually updating the module in those locations fixes the problem.
Why is the module file being copied into those two additional paths?
Should those copies be overwritten when a new version of the module is deployed?
What is the correct way of deploying a module in this scenario?
PowerShell uses several paths to load modules. Use $env:PSModulePath -split ";" to see which paths are in use.
The paths differ in user scope and purpose (e.g., custom modules vs. official Windows modules).
By default, PowerShell looks for the latest version of each module across all of these paths. The old version is probably being run because you are not updating the module version in the module manifest when you re-deploy; if PowerShell sees two copies as the "same" version, it resolves the tie by PSModulePath order.
Take a look at this awesome post for more details: Everything you wanted to know about PowerShell's Module Path
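For example, to list every copy of a module that PowerShell can find, along with its version and location (the module name here is hypothetical):
Get-Module -ListAvailable MyModule | Select-Object Version, Path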
Now to your questions.
Why is the module file being copied into those two additional paths?
That would be down to server configuration or to the script you are using to deploy.
Should those copies be overwritten when a new version of the module is deployed?
Not necessarily, if the versions are maintained correctly. The post shared above explains how to check the versions of each module.

Configure dependencies in RPM

I have built an RPM package for CentOS 6.6 that is installed on a machine of one of our customers.
This package contains our own software, customized for the specific use case, but also uses the open-source package HAProxy.
HAProxy (RPM version 1.5.4-2.el6_7.1) comes with a default configuration in /etc/haproxy/haproxy.cfg, and it cannot be customized without changing this file.
But I want the configuration to be part of my generated package. RPM throws an error if the /etc/haproxy/haproxy.cfg file is in my package, because it is also part of the haproxy package.
I have worked around this problem by providing a custom upstart script which starts HAProxy with a different config file, but this does not seem to be the right way to do it.
Is there a preferred way to handle such customizations?
In cases like this, I've created an RPM which installs configuration files into a different subdirectory, and in its %post and %preun scriptlets modifies the uncooperative package's config-files:
when installing, it renames the original config files and creates symbolic links from those pathnames to the overriding config files, and
when uninstalling, it removes the symbolic links and restores the original package's files.
Doing it that way of course meant that my config-RPM was dependent on the original RPM. A little awkward to describe, but it works.
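A rough sketch of those scriptlets, with hypothetical file locations (the custom config is assumed to live under /etc/haproxy-custom/):

%post
if [ ! -L /etc/haproxy/haproxy.cfg ]; then
    mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.orig
    ln -s /etc/haproxy-custom/haproxy.cfg /etc/haproxy/haproxy.cfg
fi

%preun
# $1 is 0 on full removal (but 1 on upgrade); only restore the original then.
if [ "$1" -eq 0 ] && [ -L /etc/haproxy/haproxy.cfg ]; then
    rm -f /etc/haproxy/haproxy.cfg
    mv /etc/haproxy/haproxy.cfg.orig /etc/haproxy/haproxy.cfg
fi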
In a follow-up, the issue of updating was mentioned. Updating an RPM requires special handling to avoid uninstalling things. The rpm program passes a parameter, $1, which you can test in the %pre and %preun scriptlets to detect that this is an upgrade, in which case there is no need to save the original config files (or restore them). The rest of the scriptlet stays the same: copy the new versions of your config files over the others.
Further reading:
Defining installation scripts (shows the use of $1)
RPM upgrade uninstalls the RPM
Your approach is correct. On EL6 with SysV init there is no choice other than creating a custom haproxy package, creating a custom haproxy service, or creating a script that the customer runs after installation. I see creating another service as the best option.
Note that on EL7 with systemd you have a much better option, as you can use systemd's drop-in feature. For more information see:
https://coreos.com/os/docs/latest/using-systemd-drop-in-units.html
https://wiki.archlinux.org/index.php/systemd#Drop-in_snippets
https://wiki.archlinux.org/index.php/Systemd/User#Service_example
The usual way this is done is to have a drop-in configuration directory, e.g. /etc/httpd/conf.d/, where your package would drop its configuration, and you would tell the other daemon, e.g. httpd, to do a graceful restart in your %post/%postun.
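A minimal sketch of such a scriptlet, assuming the EL httpd init script (which supports a 'graceful' action):

%post
/sbin/service httpd graceful >/dev/null 2>&1 || :

%postun
/sbin/service httpd graceful >/dev/null 2>&1 || :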
I don't know much about HAProxy, but a quick search implies that it does not support the configuration-directory concept that has been around for many years. A few people have hacked it in, but unless it works out of the box, you will run into your original problem again.

Best Way to Deploy Zend Web Application

I've read a lot about deploying applications here, but haven't found a suitable answer to our needs yet.
We have a large web application built with the zend framework that we want to deploy to a remote server. We want to be able to easily and safely deploy a new version of our application to our production server.
What needs to be done is the following:
put up a maintenance page on the production application?
export version from SVN
run a shell script to minify the CSS files in a certain directory (shell script is done)
set file permissions on files and directories
copy/sync? files to a production server -> only changed files?
remove maintenance page from the production application?
We use SVN as a code versioning tool and we are running CentOS as our server OS in production.
I've read about:
rsync
fredistrano / capistrano
phing
custom shell scripts
What is your advice for easy one-click deployment?
I export (or check out) a copy of the site under a different name (typically the Subversion revision number and date) and symlink the document root into place:
1000.20100515/
    application/
    public/
    library/
1020.20100621/
current (symlink to 1000.20100515/)
dev (symlink to 1020.20100621/)
# copy whatever 'dev' points to as the new 'current' symlink.
rm current && cp -d dev current
The document root is set in apache to ../current/public
With this, I can check out a new version of the site at leisure and put the new version live en masse in a fraction of a second. Rolling back to a previous version of the site is as easy as changing the symlink, should a major problem be found.
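For example, using the release names from the layout above, a rollback is just:
rm current && ln -s 1000.20100515/ current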
Added: The Ruby-based tool Capistrano can be an excellent way to fully automate this across any number of machines (be it one or a dozen), and indeed it's my preferred method of deployment now. Capifony is a plugin for Capistrano that also supports Composer-based projects.
Try Capistrano. It is written in Ruby, and you need Ruby installed on your computer, but it is not necessary to have it on the target server.
It works with Git or SVN, and it creates versioned releases on the target server. You can roll back or deploy your new version with a single command.
I have found this tutorial: http://tfountain.co.uk/blog/2009/5/11/zend-framework-capistrano-deployment
There is a modified version of Capistrano, with another tutorial, here: http://www.codewithstyle.eu/2011/05/03/deploying-zend-framework-applications-using-capistrano/

How to deploy classic asp website?

I would like to know how to deploy a classic ASP website on IIS 6/7, and what steps are involved.
Can we create an installer for the existing project?
You should consider using Web Deploy (http://www.iis.net/download/WebDeploy); it can deploy your ASP applications, set up the IIS application and other settings (like the application pool), and even include COM objects, registry keys, and more.
Even better, you can parameterize content like connection strings, titles, and settings, so that at install time you can pass those parameters either through the command line or the user interface.
It can deploy between IIS 6 and IIS 7 and even help you compare existing deployed versions with packaged versions (zip files), or other servers.
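As a hypothetical example (the paths and server name are made up), an msdeploy invocation that syncs a local site to a remote IIS server might look like:
msdeploy -verb:sync -source:contentPath="C:\inetpub\myapp" -dest:contentPath="Default Web Site/myapp",computerName=ProdServer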
Make sure a virtual directory has been set up in IIS.
Copy all files into the virtual directory
If applicable, register required DLLs with regsvr32.exe
Run.
Hope this helps.
EDIT: I see you want to make an installer for the application. Have a look here for a guide on how to do it. To my knowledge there isn't anything 'plug and play' for installing your project; you will have to make it yourself.
Copy the files to the virtual folder. If you have any dependent DLLs or EXEs, make sure to install them too.
As you said, you may have to create an installer that does this work for you. There are a lot of installers out there, like Inno Setup and Windows Installer.
If it's just ASP and you have no DLLs or COM components, then you just have to copy all the files to a virtual directory under approot or wwwroot; xcopy copies all directories, subdirectories, and files. As for an installer, you wouldn't really need one, but it would be useful if you made one that sets up the virtual directory, copies the files, and configures any host headers if needed.

Best practices for deploying tools & scripts to production?

I've got a number of batch processes that run behind the scenes for a Linux/PHP website. They are starting to grow in number and complexity, so I want to bring a small amount of process to bear on them.
My source tree has a bunch of cpp files and scripts, organized with development but not deployment in mind. After compiling all the executables, I need to put various scripts and binaries on a cluster of machines. Different machines need different executables, scripts, and config files for their batch processes. I also have a few tools that I've written that belong on every machine. At the moment, this deployment process is manual and error prone.
I'm guessing I'm just going to end up with a script that runs at the root of the source tree and builds a smaller tree of everything necessary for any of the machines. Then, I'll just rsync that to the appropriate machines. But I'm curious how other people are managing this type of problem. Any ideas?
There are several categories of tools here. Some people use a combination of tools from these categories; I sometimes use, for example, both Puppet and Capistrano. See Puppet or Capistrano - Use the Right Tool for the Job for a discussion.
Scripting Tools aimed at Deploying an Application:
The general pattern with tools in this category is that you create a script and/or config file, often with sets of commands similar to a Makefile, and the tool will ssh over to your production box, do a checkout of your source, and run whatever other steps are necessary.
Tools in this area usually have facilities for rolling back to a previous version. They will check out your source to a timestamped directory under releases/ and, if all goes well, create a symbolic link named "current" pointing at it. If there's a problem, you can revert by running a command that removes "current" and links it to the previous releases/ directory.
Capistrano comes from the Rails community but is general-purpose. Users of Capistrano may be interested in deprec, a set of deployment recipes for Capistrano.
Vlad the Deployer is an alternative to Capistrano, again from the Rails community.
Write your own shell script or Makefile.
Options for getting the files to the production box:
Direct checkout from source. Not always possible if your production boxes lack development tools, specifically source code management tools.
Checkout source locally, then tar/zip it up. Use scp or rsync to copy the tarball over. This is sometimes preferred for something like an Amazon EC2 deployment, where a compressed tarball can save time/bandwidth.
Checkout source locally, then rsync it over to the production box.
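A minimal sketch of that last variant (the host and paths are hypothetical):
rsync -az --delete ./build/ deploy@prod.example.com:/var/www/myapp/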
Packaging Tools
Use your OS's packaging system to generate packages containing the files for your app. Create a master package that has as dependencies the other packages you need. The RubyWorks system is an example of this, used to deploy a Rails stack and sample application. Then it's a matter of using apt, yum/rpm, Windows msi, or whatever to deploy a given version. Rollback involves uninstalling and reinstalling an old version.
General Tools Aimed at Installing Apps/Configs and Maintaining a Set of Systems
These tools do not specifically target the problem of deploying a web app, but rather the more general problem of deploying/maintaining Apps/Configs for a set of servers, or an entire company's workstations. They are aimed more at the system administrator than the web developer, though either can find them useful.
Cfengine is a tool in this category.
Puppet aims to improve on Cfengine. It's got a learning curve but many find it worth the time to figure out how to do the configs. Once you've got it going, each box checks the central server periodically and makes sure everything is up to date. If someone edits a file or changes a permission, this is detected and corrected. So, unlike the deployment tools above, Puppet not only puts files in the right place for you, it ensures they stay that way.
Chef is a little younger than Puppet with a similar approach.
Smartfrog is another tool in this category.
Ansible works with plain YAML files and does not require agents running on the servers it manages.
For a comparison of these and many more tools in this category, see the Wikipedia article, Comparison of open source configuration management software.
Take a look at the cfengine tutorial to see if cfengine looks like the right tool for your situation. It may be a little too complicated for a small website, but if it is going to involve more computers and more configuration in the future, at some point you will end up using cfengine or something like that.
Create your own packages in the format your distribution uses, e.g. Debian packages (.deb). These can either be copied to each machine and installed manually, or you can set up your own repository, and add it to your list of sources.
Your packages should be set up so that the scripts they contain consult a configuration file, which is different on each host, depending on what scripts need to be run on each.
To tie it all together, you can create a meta package that just depends on each of the other packages you create. That way, when you set up a new server, you install that one meta package, and the other packages are brought in as dependencies.
Although this process sounds a bit complicated, if you have many scripts and many hosts to deploy them to, it can really pay off in the long run.
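A minimal sketch of what such a meta package's control information might contain (the package names are hypothetical):

Package: myorg-batch-all
Version: 1.0
Architecture: all
Maintainer: Ops Team <ops@example.com>
Depends: myorg-batch-scripts, myorg-batch-tools, myorg-batch-config
Description: Meta package that pulls in all of the batch-processing packages.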
I have to roll out PHP scripts and Apache configurations to several customers on a frequent basis. Since they all run Debian Linux, I've set up a Debian package repository on my server, and all the customer has to do is run apt-get upgrade to get the latest version.
The first thing to do is get all these scripts into a source control repository (svn or git are good) so that you can track changes to these scripts over time.
If you are interested in Ruby, check out Capistrano; it is well suited to deploying things to multiple machines in a cluster, and it is fairly easy to set up. It can read files directly from your version control system.
Puppet is another tool that can be used in this situation. It is similar to cfengine: you create a model of the desired deployment, and Puppet figures out how to get the environment into that state.