Like many apps, we have a number of config and properties files for our Java applications. We have gone with the approach of keeping these files out of our codebase (i.e. they are not included in the war files for deployment) and in a separate directory instead. However, I would still like to track changes to these files in source control and deploy them using our CI.
I'm looking for strategies on how others have done this. Did you write a script to push the files to the app server(s)? Does the script live on the CI server?
Our SCM is Mercurial which we have set up on its own server to use as a central repo. Our CI is Hudson (not Jenkins) set up on its own server and of course our app servers are separate from these as well. All servers are *nix OS.
Consider using configuration management tools like Puppet or Chef for managing all your application configuration files. Both tools use "manifests" or "recipes", which can be placed under revision control and matched to each server deploying the application.
Another option to consider is developing an install package for your OS; see the following article for more details:
http://www.sonatype.com/people/2011/11/bringing-java-and-linux-together-on-the-way-to-continuous-live-deployment/
The advantage of doing it this way is that the install can be configured to generate the correct configuration, tailored to the environment it is deployed onto. A more important benefit is that it's simpler to manage and install.
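As a rough illustration of that last point, a package post-install script could pick the configuration matching the environment it lands on. Everything below (the APP_ENV variable, the template directory and the target path) is a hypothetical layout, not something your packaging tool provides by itself:

#!/bin/sh
# Hypothetical post-install step for a .deb/.rpm: copy the template that
# matches this environment to the location the application reads from.
APP_ENV="${APP_ENV:-production}"
install -d /etc/myapp
cp "/usr/share/myapp/config-templates/${APP_ENV}.properties" /etc/myapp/app.properties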
For a project I am working on, we have CI set up using Jenkins.
We now want to setup Continuous Delivery (CD) using Puppet.
Here are our dev environment specs:
Windows 2008 Server
JBoss SOA-P (JBoss AS 5.1 app server) - 2 instances
Jenkins for CI
Installed the Puppet Learning VM (we are evaluating, so we don't have a license to install Puppet Enterprise).
My question is: how can I automate deployment of my application(s) onto the already installed JBoss servers (on Windows machines) using Puppet?
For my organization, I am using the following tools to achieve Continuous Delivery and Continuous Integration:
Foreman: a provisioning tool and ENC (External Node Classifier)
Puppet Master:
This will be running on our main server
Puppet Agents:
On the rest of the servers
Jenkins:
On the main server
Nexus repository for maintaining staging and release repositories, on another server
A Nexus repository module installed on the Puppet master. It has the logic to connect to the Nexus repository and fetch the latest release from the "release" repository.
Flow:
I have a Jenkins job defined whose purpose is to make sure the build doesn't break, and at release time I perform a Maven release, which in turn uploads the latest version to the Nexus release repository.
I load the Nexus repository module in Foreman and map the "Nexus" class to all my servers. This is the crucial part that allows me to perform a cloud deployment with a single button click in Foreman. To do this, you need your own Puppet files that perform the DB migration, undeploy and deployment. I always write a deploy Maven module for each of my projects containing only these Puppet files, which lets me perform the deployment on all servers in one shot.
Since you are already familiar with CI and CD, I hope my statements are self-explanatory.
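To make the Nexus part a little more concrete, the fetch-and-deploy logic such a module wraps can boil down to something like the commands below. The coordinates, repository URL and deploy directory are placeholders, and in practice the version would be resolved or passed in by the release job rather than hard-coded:

# fetch the released artifact from the Nexus "releases" repository
mvn org.apache.maven.plugins:maven-dependency-plugin:2.8:get \
    -DremoteRepositories=https://nexus.example.com/content/repositories/releases \
    -Dartifact=com.example:myapp:1.2.3:war
# copy it into the application server's deploy directory
mvn org.apache.maven.plugins:maven-dependency-plugin:2.8:copy \
    -Dartifact=com.example:myapp:1.2.3:war \
    -DoutputDirectory=/opt/jboss/server/default/deploy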
Without knowing how you want your application deployed, it's hard to answer exactly how to do it. But I'll see if I can point you in the right direction:
There are existing modules on the Puppet Forge to deploy JBoss with Puppet. I'd recommend looking through them and seeing if you can find one that suits your requirements.
You could then integrate your source control to automatically deploy changes to your JBoss instance as they get checked in.
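For example, finding and installing a Forge module from the command line looks roughly like this; the module name below is a placeholder rather than a recommendation, and the commands are the Puppet 3.x era ones:

puppet module search jboss
puppet module install someauthor-jboss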
There's an example of using Puppet with Tomcat and Maven for continuous delivery here. It's a few years old, but the concepts still apply: http://www.slideshare.net/carlossg/2013-02-continuous-delivery-apache-con
Here's also an example from CloudBees (who employ a lot of the engineers behind Jenkins) on continuous delivery with Puppet and Jenkins: https://www.cloudbees.com/event/continuous-delivery-jenkins-and-puppet-debug-bad-bits-production
Plus a general PuppetLabs one here: https://puppetlabs.com/blog/whats-continuous-delivery-get-speed-these-great-puppetconf-decks
Also: I don't want to sound too salesy, and in full disclosure, I work at PuppetLabs. But you can use Puppet Enterprise with up to 10 nodes for free, so I'd recommend that over using the Learning VM, which isn't really designed for hosting applications.
I'm new to Capistrano and struggling a little to get started. A brief description of what I need to do:
git pull the latest code from our git repo, on a central build server. This build server's environment matches the deployment environment exactly. I need the code to be built here. I don't want to deploy a binary that was built on a Mac laptop, for example.
compile the binary on this machine.
deploy it from this machine to all the target machines.
There is a shared user we can all SSH into on the build machine to do the builds.
The build machine is behind a gateway machine, not directly accessible.
All of the deployment target machines also have this shared user and are also behind the gateway.
The deployed binary is a single executable, and there is an init script on the target machines. After deploying the binary and changing the symlink to it, restart the service via the init script.
Everyone has appropriate SSH keys and agent forwarding for all necessary tasks.
So in principle it seems rather simple, but Capistrano seems opinionated and a bit magical, and as a result I'm not sure how to accomplish all of this. For example, it seems like it wants to check out my code and copy it to the remote machines without building it first.
I think I need to ignore all of Capistrano's default smarts and just make it run some shell commands on the appropriate servers. In pseudo-code:
ssh buildmachine via gateway "cd repo && git pull && make"
ssh targetmachine(s) via gateway "scp buildmachine:repo/binary .; <mv && symlink>; service foo restart"
Am I even using the right tool for the job? It seems a lot like a round peg in a square hole.
Can someone explain to me what the contents of the Capistrano configuration files should be, and what cap commands I'd run to accomplish this?
BTW, I've searched around and looked at questions like "deploying with capistrano with remote git repo but without git running on production server" and "From manual pull on server to Capistrano".
The question is rather old, but you never know when someone stumbles onto it in need of information...
First and foremost, consider that Capistrano might just not be the right tool for the job you want to do.
That said, it is not impossible to accomplish what you expect. While I would avoid it in projects that deploy large numbers of files and modify them (CSS/JS minification, JS builds, etc.), in your case you can consider running a "deployment repository" and configuring it in Capistrano as the source. Your process would look like this:
run the local build with whatever tools you need
upload resulting binary to a deployment repository
run Capistrano, which will connect to the application servers, fetch the fresh binary from the repository, perform any server-side tasks required and symlink the release to "current"
As a side effect you end up with a full history of deployed binaries.
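In shell terms the three steps might look roughly like this. The hostnames, paths and deployment-repository layout are all assumptions, and the Capistrano side assumes a classic deploy.rb whose :repository setting points at the deployment repo:

# 1. build on the build machine (through the gateway if needed)
ssh builduser@buildhost 'cd ~/app && git pull && make'
# 2. put the resulting binary into the dedicated deployment repository
scp builduser@buildhost:~/app/build/myapp deploy-repo/bin/myapp
(cd deploy-repo && git add bin/myapp && git commit -m "release $(date +%F)" && git push)
# 3. let Capistrano fetch that repo on each app server, run its hooks and
#    symlink the new release to "current"
cap deploy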
I'm learning FluentMigrator. The thing that I like about FM is that it supports the idea of Forward and Back for migrations (aka Up/Down). I'm finding that it's not ideal about this; there are some holes. Still, it's good.
This leads me to wonder if there are any deployment tools (nant, msbuild or other) that support this idea of rolling forward and back. The scenario that I'm using it in is the deployment of a web app with a related database.
Ideally I'd like to set up my deployment so that, should any part of it fail, it will revert to the previous known working configuration. With FM, this is pretty easy to do (but there are rough spots), so that covers the db. How about the files that make up the web app? Do any deploy tools have support for this?
Deploying to a Windows Server. Assume that I can't make any changes to the server.
I don't know of any Microsoft-centric, automated provisioning/deployment tools like Capistrano. Here are some tools I've heard of, but never used:
MSDeploy, for deploying web applications.
Microsoft Deployment Services, for managing operating system configuration
Microsoft's System Center Configuration Manager
BladeLogic
HP's Operations Center
Up until about three months ago, we did our deployment/provisioning using custom MSBuild scripts. After a server is provisioned, deploys happen automatically using Robocopy to copy files to a share on the application server, updating changed application binaries and markup files. We've never had a need to rollback any of our deployments, but since our scripts are custom, we could write the logic if we needed to.
MSBuild is a terrible deployment/provisioning language. For the past three months, we've been writing all new scripts in, and porting existing ones to, PowerShell. It is wonderful. With version 2, there is support for running commands on remote servers, much like SSH. We haven't used that functionality yet, but I'm looking forward to pushing setup scripts to remote servers to provision and deploy at the same time.
We have been using Git to do our deploys for the last 6 months.
Here is the whole process:
CI server builds the project
CI server checks it into a local git repository
CI server pushes the changes to the centralised git repository
User creates an empty repository on the live server
User adds the central git repository to the remotes
User pulls the latest version over https (no need to open any ports)
It is a lot to set up in the beginning, but once set up it works great. Deploys take seconds as only changed files get copied.
Another great thing about this method is that git keeps a history of changes, so rolling back is pretty simple. You can also roll back a few revisions, and it's done straight on the live server. If something goes wrong, reverting is super fast.
Also, you can save some time if you use a hosted git service (GitHub) for your central repository.
This is a very brief description but I can give you more info if you want.
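For reference, the moving parts on the live server are just plain git commands; the repository URL and branch below are placeholders:

# one-time setup on the live server
git init
git remote add origin https://git.example.com/project.git
# each deploy (only changed files are transferred)
git pull origin master
# rolling back to a known-good state
git checkout <known-good-commit>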
Of course! My favorite is Capistrano. This was originally built for Ruby but I've found that it works just as well for other languages.
https://github.com/capistrano/capistrano
Our development team uses Eclipse + Aptana to do their web development work. Currently, most of them map their Eclipse projects directly to the web server. I'd rather they create a local project and use that to sync to the web server project directory they are working on.
The issue is that there aren't any good solutions for this, which is just appalling given the popularity of the two.
The FileSync plugin for Eclipse is only one-way, meaning that if another developer makes a change to the file on the server, the other dev isn't even notified and could overwrite the change.
The File Transfer option in Aptana 2.0 doesn't support any sort of Sync, just manually uploading/downloading files.
The Sync option in Aptana 1.5.1 doesn't allow you to merge files when they are different; you can only update one or the other. It does, however, allow you to view a diff (but only if you right-click and select it), and in that diff you can't make any changes.
I did find a way to allow files to be uploaded to their Sync repositories in Aptana using Eclipse Monkey. However, it doesn't work if a user saves multiple files at once ('Save All'). Additionally, there is no notification if a user opens a local file that has an updated copy on the server. I tried to add one using Eclipse Monkey, but I couldn't find any sort of listener in the Eclipse API to do it, and Eclipse Monkey documentation is few and far between.
My only solution at this point is to let them continue to map directly to the server, or ask them to do a manual download before they do any work (but again, what if someone uploads a change right after they do that?).
Anyone have any ideas?
April 2010
Add EGit to your Eclipse+Aptana setup, and:
let developers push their developments to a local bare repo (see also this post)
let your local project be updated by a git pull from that same local bare repo, creating/updating a local working directory with sources merged/updated (or by using a post-update hook as described in my previous SO link)
let your local Aptana+Eclipse(+EGit) reference that local working directory, also used by your web server.
In short, when you are speaking of file synchronization + merges, this is a job for a (D)VCS (a Version Control System, centralized or distributed).
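A minimal sketch of that setup, assuming the bare repo lives at /srv/git/project.git and the shared working directory at /var/www/project (both placeholders):

# one-time: create the bare repo developers push to
git init --bare /srv/git/project.git

# contents of /srv/git/project.git/hooks/post-update (make it executable):
#!/bin/sh
# refresh the working directory the web server and Aptana/Eclipse reference
GIT_WORK_TREE=/var/www/project git checkout -f master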
Oct 2011: as xmedeko mentions in the comments, Aptana3 has its own Git plugin.
And it isn't very compatible with EGit: See bug 1988.
Adding to VonC's answer (which is correct IMHO), what probably lies beneath this scenario is that the process you adopted is not correct in itself, apart from the tools used.
If I understood correctly, you should not allow nor perform a direct upload from a development version of the project to the web server. Merging is not a job for remote synchronization tools, and it should happen well before the deployment phase (uploading to the web server is practically a deploy).
You should have a dedicated repository taken from some point in development history (according to your release timeline), a point where the merge has already happened. Then deploy it (by means of file synchronization if you want, but that is not mandatory) on a local/staging web server.
Perform there any tests you run against the actively running web site (i.e. integration and/or functional tests). If there are bugs to fix, there are different ways to apply the fixes to the development & staging code repositories. Only after that do you deploy the staging repository onto the production web server (again, synchronization tools are one way to do that).
We are a small team of 4 developers working on a web application. We use trac+svn on a shared server for version control and ticketing and we are happy and satisfied with this. The same shared server also hosts our web application. The application itself is a Perl CGI application that uses CGI::Application and a moderate number of standard (CPAN) and custom Perl modules that are installed in the usual (/usr/lib/perl...) and a few unusual locations (/home/user/lib/perl..). While the broad details might be irrelevant, the most important point is that the location/layout of libraries on our development machines is different from that on the production (shared) server. We have to live with this as a given. The library layout is identical on all development machines though.
Here is a typical, but clearly sub-optimal work-cycle that my colleagues and I follow:
Code and test on development machines
Check out / commit / update our code against the SVN repository
Periodically "svn export" onto the appropriate DocumentRoot of the server
Hand-edit the exported tree so that the library includes match the library layout on the server
Test application on live server, raise tickets for each other
Go to 1
Clearly there must be a better way, and we would appreciate hearing from others who might be handling this better than we are. For example, is there a way to svn export and fix the library locations in an automated way? Or is there some completely different way to handle this situation than what we have been doing so far?
Thank you for your attention
You should have scripts that do this for you that can be run from a local box. Mine always look something like:
$> checkout from source or copy from working
$> run sed/perl -pi/copy to convert configs to the production values
(ie cp production.config myconfig)
$> upload to web server (rsync/ssh/ftp/etc)
$> ssh $SERVER migrate_db, set permissions, run unit tests, etc
The last one requires SSH access, which I always look for, but everything else can be done locally. You'd usually have a set of dev configs and a set of production configs (or a script to convert from dev to production).
One step uploads are always a really good idea.
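As a concrete (hypothetical) instance of the above, with the repository URL, hostname and paths as placeholders:

#!/bin/sh
set -e
rm -rf build
svn export https://svn.example.com/app/trunk build
cp production.config build/config.pl          # swap in the production settings
rsync -az --delete build/ deploy@www.example.com:/var/www/app/
ssh deploy@www.example.com 'cd /var/www/app && bin/migrate_db && bin/run_tests'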
Keep a config file (e.g. config.pl) that stores all the system-dependent paths and variables. Then set the svn:ignore property on its parent directory so that this file is never committed. This lets you easily keep a local configuration script per system, separate from the committed tree.
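Setting that up takes only a couple of commands, run in the directory that contains config.pl:

svn propset svn:ignore 'config.pl' .
svn commit -m "Ignore the per-machine config.pl" .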
If you don't have the possibility of mirroring your development server in production, why can't you mirror your production server in development? That might take some reconfiguring, but what's the risk? Everything's checked into svn.
But maybe that really, really isn't an option for you. My preference for deploying web applications is to do an svn checkout and then run a symlinking script. The idea is that you write a system of rules that logically maps the contents of one folder to the contents of another. Of course, if you drop folder symlinks in your document root, you have to tell Apache to follow them.
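A sketch of what that symlinking step can look like; the paths are hypothetical, and the Apache directive is the standard one for allowing symlinks to be followed:

# point the document root's uploads folder at shared data outside the checkout
ln -sfn /var/data/app/uploads /var/www/app/current/uploads
# and in the Apache vhost (or .htaccess) for that directory:
#   Options +FollowSymLinks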
Frankly, the absolute safest scenario would be to set up a virtual machine that you can configure exactly like your production machine. This way you can actually test the contents of your deployment script and submit tickets. Then, when the problem is discovered, you modify the script to make it more likely that development deployment will follow the new and improved procedure.
And, as a side note: I much prefer to use svn checkouts in place rather than svn exports. It shouldn't be hard (especially if you use a deployment script) to make sure that apache or whatever your web server is doesn't have permission on the .svn folders. Ideally, anything you can do to make an svn rollback a one-line command is absolutely key.
If this is a Linux box you could write a cron job to take care of this for you. You could use sed/awk to replace the needed strings in the code and svn export works fine from a cron job. You would need to maintain the script but it seems faster than doing it by hand every time.
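A rough sketch of such a cron-driven script; the schedule, repository URL and path substitution are all placeholders to adapt:

#!/bin/sh
# deploy_app.sh -- run from cron, e.g.: 30 * * * * /usr/local/bin/deploy_app.sh
set -e
rm -rf /tmp/app-export
svn export https://svn.example.com/app/trunk /tmp/app-export
# rewrite the library include paths for this server
find /tmp/app-export -name '*.pl' -o -name '*.pm' \
  | xargs sed -i 's#/path/on/dev/lib#/path/on/server/lib#g'
rsync -a --delete /tmp/app-export/ /var/www/app/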
For the hand-editing part, I would have a separate branch in Subversion for the local modifications you need. The developers commit into trunk and when you need to deploy, use 'svn merge' or svnmerge.py to merge changes from trunk to the branch.
After creating the branch the first time, make your local modifications in there.
On the servers, have the directories in DocumentRoot and /usr/lib/perl and /home/user/lib/perl be Subversion working copies checked out from the branch.
Do not use svn export, just have a checkout so you can 'cd /usr/lib/perl; svn up'.
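In commands, that workflow might look like this (repository URLs are placeholders; /usr/lib/perl is one of the checkout locations mentioned above):

# one-time: create the deployment branch from trunk
svn copy https://svn.example.com/app/trunk \
         https://svn.example.com/app/branches/deploy \
         -m "Create deploy branch for server-specific library paths"
# per release, in a working copy of the deploy branch:
svn merge https://svn.example.com/app/trunk
svn commit -m "Merge trunk into deploy branch"
# on the servers, which hold working copies of the branch:
cd /usr/lib/perl && svn up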
The one thing to be careful about is not exposing your .svn directories in the DocumentRoot; use this:
# Prevent any access to .svn directories.
<DirectoryMatch "^/.*/\.svn/">
Options None
AllowOverride None
Order allow,deny
Deny from all
</DirectoryMatch>
Having working copies deployed in DocumentRoot is also nice, if you need to rollback a change, just 'svn up -r PREV'.
Codesion offers enterprise-grade Subversion and Trac on demand. In addition, we now have the ability to one-click publish/deploy your code via FTP, SCP, rsync and many other methods. This will be an easy and quick way for you to accomplish what you are trying to do.
See our Codesion Publisher features:
https://help.codesion.com/View.jsp?procId=01fabe5e83381dda4edda959b97b2c5b