How should I create an automated deployment script?

I need to create some way to get a local WAR file deployed on a Linux server. What I have been doing until now is the following process:
Upload WAR using WinSCP.
SSH into server using PuTTY.
Move/Rename/Delete certain files/folders to prepare for WAR explosion.
Explode WAR.
Send email notifying users of restart.
Stop Tomcat server.
Use tail to make sure the server stopped correctly.
Change symlink to point to exploded WAR.
Start Tomcat.
Use tail to make sure the server started correctly.
Send email notifying users of completed restart.
This stuff is all relatively straightforward. And I'm sure there are a million and one different ways to do it. I'd like to hear about some options. My first thought was a Bash script. I have very little experience with scripting in general but thought this would be a good way to learn. I would also be interested in doing this with Ruby/Python or something similarly current, as I have little to no experience with these languages. I think as a young developer, I should definitely get some sort of scripting language under my belt. I may also be interested in some sort of software solution that could do this stuff for me, although I think scripting would be a better way to go for the sake of ease and customizability (I might have just made that word up).
Some actual questions for those that made it this far. What language would you recommend to automate the process I've listed above? Would this be a good opportunity for me to learn Bash/Ruby/Python/something else, or should I simply take the 10 minutes to do this by hand 2-3 times a week? (I would think the answer to that is obviously no.) Can I automate these things from my computer, or will I need to set up the scripts to run within the Linux server? Is the email something I can automate, or am I better off doing that part myself?
More questions will almost certainly come up as I do this so thanks to all in advance.
UPDATE
I should mention, I am using Maven to build the WAR. So if I can do all of this with Maven please let me know.

This might be too heavy duty for your needs, but have you looked at build automation tools such as CruiseControl or Hudson? You might also want to look at Integrity, which is more lightweight and written in Ruby (instead of Java like the other two I mentioned). These tools can do everything you said you needed in your question plus way, way more.
Edit
Since you want this to be more of a learning exercise in scripting languages than a practical solution, here's an idea for you. Instead of manually uploading your WAR each time to your server, set up a Mercurial repository on your server and create a hook (see here, here, and especially here) that executes a Ruby (or ant, or maven) script each time a changeset is pushed from a remote computer (i.e. your local workstation). You would write the script so it does all the action items in your list above. That way, you will get to learn three new things: a distributed version control paradigm, how to customize said tool, and how to write Ruby scripts to interact with your operating system (since your actions are very filesystem heavy).
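To make the hook idea concrete, here is a minimal sketch (shell rather than Ruby, and every path below is an assumption) of a server-side changegroup hook that fires after each push and hands off to a deploy script:

    #!/bin/sh
    # /opt/deploy/on-push.sh -- hypothetical hook script, wired up in the
    # server repository's .hg/hgrc like so:
    #   [hooks]
    #   changegroup = /opt/deploy/on-push.sh
    set -e
    cd /srv/hg/myapp          # the server-side repository (assumed path)
    hg update --clean         # bring the working copy up to the pushed tip
    ./deploy.sh               # then run whatever performs the steps listed above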

The most common in my experience is Ant; it's worth learning, it's all pretty simple, and very useful.
You should definitely automate it, and you should aim to have it happen in one step.
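To illustrate the one-step goal against the original list, here is a minimal Bash sketch. Every host name, path, and email address is an assumption, it presumes a working mail command on the server, and the sleep/tail pairs are a crude stand-in for properly watching the logs:

    #!/bin/bash
    # deploy.sh -- hypothetical one-step deployment of a Maven-built WAR.
    set -euo pipefail

    HOST=user@myserver                     # assumed SSH target
    RELEASES=/opt/tomcat/releases
    STAMP=$(date +%Y%m%d%H%M%S)

    mvn -q clean package                   # build the WAR locally
    scp target/myapp.war "$HOST:/tmp/"     # replaces the WinSCP upload

    ssh "$HOST" /bin/bash <<EOF            # replaces the PuTTY session
    set -e
    mkdir -p $RELEASES/$STAMP && cd $RELEASES/$STAMP
    unzip -q /tmp/myapp.war                              # explode the WAR
    echo "Restarting now" | mail -s "myapp restart" users@example.com
    /opt/tomcat/bin/shutdown.sh
    sleep 15 && tail -n 5 /opt/tomcat/logs/catalina.out  # check it stopped
    ln -sfn $RELEASES/$STAMP /opt/tomcat/webapps/myapp   # flip the symlink
    /opt/tomcat/bin/startup.sh
    sleep 15 && tail -n 5 /opt/tomcat/logs/catalina.out  # check it started
    echo "Restart complete" | mail -s "myapp is back" users@example.com
    EOF

Run from the workstation, a script like this also answers the "can I automate these things from my computer" question: everything server-side happens over a single ssh session.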

What are you using to build the WAR file itself? There's some advantage to using the same tool for build and deployment. On several projects I've used Ant to build a Java project and deploy it to the servers.

Related

Avoiding manual server file management

I have an interesting dev requirement that I've been trying to find a decent solution to for some time. I thought I might look here for suggestions.
I have 4 server locations on a network drive: 2 for development, 2 for production. Each pair is intended to be identical, though because we're manually managing files, gaps often surface. Developing in this context goes like this:
Make changes on one dev server.
Copy the updated files to the other dev server.
Preview in browser.
Repeat as necessary until ready to go live.
Copy files from one dev server to one prod server, then the other.
Review and test and hope to God you remember all the dependencies.
Realize that something is missing in a stylesheet somewhere because a colleague missed a server 6 months ago.
Commence swearing. Lather. Rinse. Repeat.
The closest thing to a solution I've been able to get to thus far involved using the local/remote server setup with autosave in Dreamweaver, with mappings to the two dev servers. But all this stuff being over a network makes things slow. And Dreamweaver is far from my favorite editor.
I'd love to be able to find some way of making this better. In my freelance stuff I've been getting into things like NPM and Grunt, and I'm convinced I might be able to do something along the lines of the task automation seen there, but I'm not experienced enough with these yet to know if that's a direction I should be considering.
Any suggestions would be greatly appreciated.
You can use any source control system you like to achieve what you want. Git seems to be the most popular choice these days, but even CVS would do the trick.
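For example, one common pattern (sketched here with assumed paths and branch name) keeps a bare Git repository on the server and checks every push out to both dev servers, so the pair can't drift:

    #!/bin/sh
    # /srv/git/site.git/hooks/post-receive (must be executable); the bare
    # repository was created once with: git init --bare /srv/git/site.git
    # All paths here are assumptions.
    git --work-tree=/var/www/dev1 --git-dir=/srv/git/site.git checkout -f master
    rsync -a --delete /var/www/dev1/ /var/www/dev2/   # keep the dev pair identical

Pushing to a second "production" remote with a similar hook would replace the manual copy to the prod pair.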

What are the basics of deploying/building/making a web app?

I am pretty comfortable with producing web apps now. I am using a Node.js stack on the back-end and usually have a fair amount of JavaScript on the front end. Where I really lack understanding is the deployment process.
What is a typical deployment process?
From what I have gathered in my reading a deployment/build process can include several tasks:
Running through unit-test suites
Concatenating script and CSS files
Version numbering your app
Tracing module dependencies (node_modules)
Pushing it to a remote repo (GitHub)
Instructing 'staging' servers to pull down the latest repo
Instructing 'production' server to pull down the latest repo
This has all left me a little overwhelmed. I don't know whether I should be going into this level of detail for my own projects; it seems like a lot of work! I am using the Sublime Text 2 IDE and it seems to have a build script feature - is this suitable? How does one coordinate all these separate tasks? I'm imagining ideally they would all run at the flick of a switch.
Sorry for so many questions, but I need to know how people learnt similar principles. Some of my requirements may be specific to Node.js, but I'm sure the processes are similar no matter what choice of stack you are developing in.
First off, let's split the job in two: front-end and back-end stuff. For both, you really want some kind of build system, but their goals and scope are vastly different.
For the front-end, you want your source to be as small as possible; concatenate/minify JavaScript, CSS and images. A colleague of mine has written a "compiler", Assetgraph, to do this for you. It has a somewhat steep learning curve, but it does wonders for your code (our dev builds are usually ~20 megs, production ~500 KB).
As to the back-end, you want contained, easily managed bundles of some sort. We re-package our stuff into Debian packages. As long as the makefile is wired up correctly, you get a lot of the boring build- and deploy-time stuff for free. Here's my (pre-NPM 1.0) post on Debianizing node programs. I've seen other ways to do this in NPM and on GitHub, but I haven't looked into them, so I can't speak to their quality.
For testing/pushing around/deploying, we use a rather convoluted combination of Debian package archives, git hooks, Jenkins servers and whatnot. While I highly recommend using the platform's native package manager for rolling out stuff, it can be a bit too much. All in all, we usually deploy to staging either automatically (on each git push) or semi-automatically for unstable codebases. Production deployments are always done explicitly.
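If the Debian-package route sounds appealing, the mechanics are small enough to sketch; the package name, layout, and maintainer below are all made up:

    #!/bin/sh
    # Build a minimal .deb for a node app (all names/paths hypothetical).
    set -e
    PKG=myapp_1.0.0
    mkdir -p "$PKG/DEBIAN" "$PKG/opt/myapp"
    cp -r server.js lib node_modules "$PKG/opt/myapp/"
    cat > "$PKG/DEBIAN/control" <<EOF
    Package: myapp
    Version: 1.0.0
    Architecture: all
    Maintainer: you@example.com
    Description: demo node application
    EOF
    dpkg-deb --build "$PKG"        # produces myapp_1.0.0.deb
    # Roll out on a target box with: dpkg -i myapp_1.0.0.deb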
For the assets I use asereje https://github.com/masylum/asereje
I recently documented my nodejs deployment process in a blog post:
http://pau.calepin.co/how-to-deploy-a-nodejs-application-with-monit-nginx-and-bouncy.html
A build script sounds like a good idea indeed.
What should that build script do?
make sure all the tests pass, else exit immediately
concatenate your JavaScript and CSS files into a single JS/CSS file and minify them
increment the version number (unless you decide to set that up yourself manually)
push to the remote repo (which instructs the staging and production servers through a git hook), as sketched below
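Putting those four steps into a single script might look like this minimal Bash sketch. The file layout and the uglifyjs/npm commands are assumptions (both tools must be installed, and npm version needs a clean working tree):

    #!/bin/bash
    # build.sh -- sketch of the build steps above; names are hypothetical.
    set -e                                   # step 1: exit immediately on failure

    npm test                                 # run the test suite

    mkdir -p dist                            # step 2: concatenate and minify
    cat src/js/*.js > dist/app.js
    uglifyjs dist/app.js -c -m -o dist/app.min.js
    cat src/css/*.css > dist/app.css         # pair with a CSS minifier of choice

    npm version patch                        # step 3: bump the version (and tag)

    git push origin master --tags            # step 4: the git hook deploys from here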
This is at least my opinion.
Other resources:
http://howtonode.org/deploying-node-with-spark
http://howtonode.org/deploying-node-upstart-monit
http://dailyjs.com/2010/03/15/hosting-nodejs-apps/
How to deploy node app dependencies

Load CMS core files from one server for multiple servers

I'm almost done with our custom CMS. Now we want to install this for different websites (and more in the future), but every time I change the core files I will need to update each server/website separately.
What I really want is to load the core files from our server, so if I install a CMS I only define the needed config files (on that server) and the rest is loaded from our server. This way I can push changes to the core very simply, and only once.
How do I do this, or is this a completely wrong way? If so, what is the right way? Things I need to look out for? Is it secure (without paying thousands for an HTTPS connection)?
I have completely no idea how to start or where to begin, and couldn't find anything helpful (maybe wrong search), so everything is helpful!
Thanks in advance!
Note: My application is built using the Zend Framework.
You can't load the required files remotely at runtime (or really don't want to ;). This problem comes down to proper release & configuration management, where you update all of your servers. But this can mostly be done automatically.
Depending on how much time you want to spend on this mechanism, there are some things you have to be aware of. The general idea is that you have one central server which holds the releases, and all other servers check it for updates, then download and install them. There are lots of possibilities (svn, archives, ...), and the check/update can be done manually at the frontend or by cron jobs in the background. Usually you'll update all changed files except the config files and the database, as they can't simply be replaced but have to be modified in a certain way (this is where update scripts come into play).
This could look like this:
A cron job runs on the server and checks for updates via svn
If there is a new revision, it does an svn update
This is a very easy to implement mechanism, which has some drawbacks: you can't change the config files or the database. Well, in fact it'd be possible, but quite difficult to achieve.
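The svn variant really is just a cron job around svn update; a sketch with assumed paths:

    #!/bin/sh
    # /opt/cms/check-update.sh -- run from cron, e.g. every 15 minutes:
    #   */15 * * * *  /opt/cms/check-update.sh
    cd /var/www/site || exit 1
    svn update --non-interactive   # no-op unless a new revision was committed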
Maybe this could be easier with an archive-based solution:
A cron job checks the update server for a new version. This could be done by reading the contents of a file on the update server and comparing it to a local copy
If there is a new version, download the related archive
Unpack the archive and copy the files
With that approach you might be able to include update scripts in the updates to modify configs/databases.
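A sketch of that archive-based check, with the update server URL, version file, and exclusion list all being assumptions:

    #!/bin/sh
    # Compare a remote version marker with the local one; update if they differ.
    set -e
    SITE=/var/www/site
    BASE=https://updates.example.com/cms    # hypothetical update server

    remote=$(curl -fsS "$BASE/VERSION")
    current=$(cat "$SITE/VERSION")

    if [ "$remote" != "$current" ]; then
        curl -fsS -o "/tmp/cms-$remote.tar.gz" "$BASE/cms-$remote.tar.gz"
        # Unpack over the site, leaving the config directory untouched:
        tar -xzf "/tmp/cms-$remote.tar.gz" -C "$SITE" --exclude='config/*'
        if [ -x "$SITE/update.sh" ]; then
            "$SITE/update.sh"               # bundled script for config/db changes
        fi
        echo "$remote" > "$SITE/VERSION"
    fi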
Automatic update distribution is a very, very complex topic, and those are only two very simple approaches. There are probably many different solutions out there, and 'selecting' the right one is not an easy task (it gets even more complex if you have different versions of a product with dependencies :), and there is no "this is the way it has to be done".

How to create a proper website? [closed]

Web development isn't what it used to be. It used to consist of hacking together a few PHP scripts (I have nothing against PHP; actually it's currently my main programming language), uploading them via FTP to some webhost, and that was that. Today, things are more complicated. As I can see by looking at a number of professional and modern websites (SO being the main one; I consider SO a great example of good practice in web development, even if it's made with ASP.NET and hosted on Windows), developing a website is much more than that:
The website code is actually in a repository (that little svn revision in the footer makes my nerdy feelings tingle);
Static files (CSS, JavaScript, images) are stored on a separate domain;
Ok, these were my observations. Now for my questions:
What do you do with JavaScript and CSS files? Do you just not keep them under version control? That would seem stupid. Do you create a separate repository for them?
How do you set up the repository? Do you just create one in the root of the web server? Or do you create some sort of post-commit trigger that copies the latest files to their appropriate destinations?
What happens if you have multiple machines running the website and want to push some changes to all of them?
Every such project has to have configuration files. These differ from the local repository to the remote one. For example, on my development machine I have no MySQL root password, while on the production server I certainly have a password. This password would be stored in a config file, amongst other such things, which would be completely different on my machine and on the server. Maybe they are different between production machines, too (like I said earlier, maybe the website runs on multiple machines for load balancing). How do I handle that?
I'm looking to start a new web project using:
Python + SQLAlchemy + Werkzeug + Jinja2
Apache httpd + mod_wsgi
MySQL
Mercurial
What I'd like is some best practice advice on using the aforementioned tools and answers to my questions above.
You're right, things can get complicated when trying to deploy a scalable website. Here are what I've found to be a few good guidelines (disclaimer: I'm a Rails engineer):
Most of the decisions regarding file structure for your code repository are largely based upon the conventions of the language, framework and platform you choose to implement. Many of the questions you brought up (JS, CSS, assets, production vs development) are handled by Rails. However, that may differ from PHP to Python to whichever other language you want to use. I've found you should do some research about the language you're choosing to use, and try to find a way to fit the conventions of that community. This will help you when you're trying to find help on an obstacle later. Your code will be organized like their code, and you'll be able to get answers more easily.
I would version control everything that isn't very substantial in size. The only problem I've found with VC is when your repo gets large. Apart from that I've never regretted keeping a version of previous code.
For deployment to multiple servers, there are many scripts that can help you accomplish what you need to do. For Ruby/Rails, the most widely used tool is Capistrano. There are comparable resources for other languages as well. Basically you just need to configure what your server setup is like, and then write or look to open source for a set of scripts that can deploy/rollback/manipulate your codebase to the servers you've outlined in your config file.
Development vs Production is an important distinction to make. While you can operate without that distinction, it becomes cumbersome quickly when you're having to patch up code all over your repository. If I were you, I'd write some code that is run at the beginning of every request that determines what environment you're running in. Then you have that knowledge available to you as you process that request. This information can be used when you specify which configuration you want to use when you connect to your db, all the way to showing debug information in the browser only in development. It comes in handy.
Being RESTful often dictates much of your design with regards to how your site's pages are discovered. Trying to keep your code within the restful framework helps you remember where your code is located, keeps your routing predictable, keeps your code from becoming too coupled, and follows a convention that is becoming more and more accepted. There are obviously other conventions that can accomplish these same goals, but I've had a great experience using REST and it's improved my code substantially.
All that being said, I've found that while you can have good intentions to make a pristine codebase that can scale infinitely and is nice and clean, it rarely turns out this way. If I were you, I'd do a small amount of research on what you feel the most comfortable with and what will help make your life easier, and go with that.
Hopefully that helps!
While I have little experience working with the tools you've mentioned, except for MySQL, I can give you a few fairly standard answers for the questions you posted.
1) Depends on the details, but most often you keep them in the same repository but in a separate folder.
2) Just because something is committed to the repository doesn't mean that it's ready to go live - it's quite often an intermediary build that could be riddled with bugs. A publish is done manually, with an export from the repository. Setting up the webserver in the same folder as an svn checkout is a huge no-no, as the .svn folder contains quite a bit of sensitive information, such as how to push changes to the svn server.
3) You use some sort of NAS or SAN solution, or simply a network share on one of the servers, and read all your data from there. That way, when you push information to one place, it's accessible by all servers. If your network is slow, you set up scripts that push the files out to all the servers automatically from a single location. If you use a multi-server environment in ASP.NET, don't forget to update the machine key in the config files, or your shared encrypted caches, like the viewstate, won't work across servers. Having a session store in a database is also a good idea.
4) I've got a post-build step that only triggers on publish, which replaces my database connection strings with production ones and also changes my Production app config value from false to true in the published web.config/app.config files. I can't see any case where you'd want different config files for different servers serving the same content. A sketch combining points 2-4 follows.
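As a rough illustration of points 2-4 together, a publish script along these lines (repository URL, host names, and the sed pattern are all assumptions) exports a clean copy, swaps in production settings, and pushes the identical result to every server:

    #!/bin/sh
    # publish.sh -- hypothetical publish step; adjust URL/hosts/patterns.
    set -e

    # 2) Export, not checkout, so no .svn folders reach the web servers.
    rm -rf /tmp/site-release
    svn export https://svn.example.com/site/trunk /tmp/site-release

    # 4) Flip the configuration to production values.
    sed -i 's/Production="false"/Production="true"/' /tmp/site-release/web.config

    # 3) Push the same content to every server in the farm.
    for host in web1 web2; do
        rsync -a --delete /tmp/site-release/ "$host:/var/www/site/"
    done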
If something is unclear, just comment and I'll try to clarify.
Good luck! // Eric Johansson
I think you are mixing 2 different aspects, source control and deployment. Just because you have all your files in a single repository doesn't mean they have to be deployed that way. It's also arguable whether you should be deploying directly using source control or instead using a build/deploy script which could handle any number of configurations.
Also, hosting static files on a separate domain only really becomes worthwhile on high-traffic websites. Are you sure you aren't prematurely optimising?

What should I propose for a reusable code library organization?

My organization has begun slowly repurposing itself to a less product-oriented and more contract-oriented business model over the last year or two. During the past year, I was shifted into the new contracting business to help put out fires and fill orders. While the year as a whole was profitable (and therefore, by at least one measure, successful), we had a couple of projects that really dinged our numbers for the year back around June.
I was talking with my manager before the Christmas holiday, and he mentioned that, while he doesn't like the term "post-mortem" (I have no idea what's wrong with the term, any business folks or managers out there know?), he did want to hold a meeting sometime mid-January where the entire contract group would review the year and try to figure out what went right, what went wrong, and what initiatives we can perform to try to improve profitability.
For various reasons (I'll go into more detail if it's requested), I believe that one thing our team, and indeed the organization as a whole, would benefit from is some form of organized code-sharing. The same things get done again and again by different people and they end up getting done (and broken) in different ways. I'd like to at least establish a repository where people can grab code that performs a certain task and include (or, realistically, copy/paste) that code in their own projects.
What should I propose as a workable common source repository for a team of at least 10-12 full-time devs, plus anywhere from 5-50 (very) part time developers who are temporarily loaned to the contract group for specialized work?
The answer required some cultural information for any chance at a reasonable answer, so I'll provide it here, along with some of my thoughts on the topic:
Developers will not be forced to use this repository. The barrier to entry must be as low as possible to encourage participation, or it will be ignored. Sadly, this means that anything which requires an additional software client to be installed and run will likely fail. ClickOnce deployment's about as close as we can get, and that's awfully iffy.
We are a risk-averse, Microsoft shop. I may be able to sell open-source solutions, but they'll be looked upon with suspicion. All devs have VSS; the corporate director has declared that VSTS is not viable going forward. If it isn't too difficult a setup and the license is liberal, I could still try to ninja a VSTS server into the lab.
Some of my fellow devs care about writing quality, reliable software, some don't. I'd like to protect any shared code written by those who care from those who don't. Common configuration management practices (like checking out code while it's being worked on) are completely ignored by at least a fifth of my colleagues on the contract team.
We're better at writing processes than following them. I will pretty much have to have some form of written process to be able to sell this to my manager. I believe it will have to be lightweight, flexible, and enforced by the tools to be remotely relevant because my manager is the only person who will ever read it.
Don't assume best practices. I would very much like to include things like mandatory code reviews to enforce use of static analysis tools (FxCop, StyleCop) on common code. This raises the bar, however, because no such practices are currently performed in a consistent manner.
I will be happy to provide any additional requested information. :)
EDIT: (Responding to questions)
Perhaps contracting isn't the correct term. We absolutely own our own code assets. A significant part of the business model on paper (though not, yet, in practice) is that we own the code/projects we write and we can re-sell them to other customers. Our projects typically take the form of adding some special functionality to one of the company's many existing software products.
From the sounds of it, you have an opportunity during the "post-mortem" to present some solutions. I would create a presentation outlining your ideas and present them at this meeting. Before that, I would recommend that you set up some solutions and demonstrate them during your presentation. Some things to do:
Evangelize component-based programming (a good read is Programming .NET Components by Juval Löwy). Advocate the DRY (Don't Repeat Yourself) principle of coding.
Set up a central common location in your repository for all your re-usable code libraries. This should hold the reference implementation of your re-usable code library.
Make it easy for people to use your code libraries by providing project templates for common scenarios with the code libraries already baked in. This way your colleagues will have a consistent template to work from. You can leverage the VS.NET project template capabilities to do this; check out the following links: VSX Project System (VS.NET 2008) and the Code Project article on creating Project Templates.
Use a build automation tool like MSBuild (which is bundled in VS2005 and up) to copy over just the components needed for a particular project. Make this part of your build setup in the IDE (VS.NET 2005 and up have nifty ways to set up pre-compile and post-compile tasks using MSBuild).
I know there is resistance to open-source solutions, but I would still recommend setting up and using a continuous integration system like CruiseControl.NET so that you can leverage it to compile and test your projects on a regular basis from a central repository where the re-usable code library is maintained. This way any changes to the code library can be quickly checked to make sure they don't break anything. It also helps surface version issues with the various projects.
If you can set this up on a machine and show it during your post-mortem as part of the steps that can be taken to improve, you should get better buy-in, since you are showing something already working that can be scaled up easily.
Hope this helps and best of luck with your evangelism :-)
I came across a set of frameworks recently called the Chuck Norris Frameworks - they are available on NuGet at http://nuget.org/packages/chucknorris . You should definitely check them out, as they have some nice templates for your ASP.NET projects. Also definitely check out NuGet.
Organize by topic; require unit tests (feature-level) for check-in/acceptance into the library; add a wiki to explain the what/why and to support searching.
One question: You say this is a consulting group. What code assets do you have? I would think most of your teams' coding efforts would be owned by your clients as part of your work-for-hire contract. If you are going to do this you need to make absolutely certain that your contracts grant you rights to your employees' work.
Maven has solved code reuse in the Java community - you should go check it out.
I have a .NET developer who's devised something similar for our internal use for .NET assemblies. Because there's no comparable .NET Internet community, this tool will just access an internal repository in our corporate network. Otherwise it will work much the way Maven does.
Maven could really be used to manage .NET assemblies directly (we use it with our Flex .swf and .swc code modules); it's just that .NET folks would have to get over using a Java tool and would probably have to write a Maven plugin to drive MSBuild.
First of all, for code organization check out the Microsoft Framework Design Guidelines at http://msdn.microsoft.com/en-us/library/ms229042.aspx, and then create a central location in source control for the new framework that you're going to create. Set up some default namespaces and assemblies for cleaner separation, and make sure everyone gets a daily build.
Just an additional point, since we have "shared code" in my shop as well.
We found out this is very much a packaging issue:
Whatever code you are producing or tool you are using, what you should have is a common build tool able to package your sources into a "delivery component", with everything used to actually execute the code, but also the documentation (compressed) and the source (compressed).
The main benefit of having such a "delivery package unit" is to have as few files to deploy as possible, in order to ease the download of those units.
The build process can very well be managed by Maven or any other tool (Ant/NAnt) you want.
When an audit team wants to examine all our projects, we just deploy on their workstation the same packages we deploy on a production machine, except they will uncompress the source files and do their work.
Since our source files also include whatever files are needed to compile them (like, for instance, Eclipse files), they can even re-compile those projects in their development environment.
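A minimal sketch of building one such delivery package unit (the project layout, names, and the choice of plain tar are assumptions; Maven or Ant would do the same job with more ceremony):

    #!/bin/sh
    # package.sh -- bundle executables, compressed docs, and compressed sources
    # into a single downloadable unit. All names here are hypothetical.
    set -e
    NAME=myproject
    VERSION=1.2.0
    OUT=dist/$NAME-$VERSION

    mkdir -p "$OUT"
    cp -r bin/. "$OUT/"                          # everything needed to execute
    tar -czf "$OUT/doc.tar.gz" doc/              # documentation, compressed
    tar -czf "$OUT/src.tar.gz" src/ .project .classpath   # sources + IDE files

    tar -czf "dist/$NAME-$VERSION.tar.gz" -C dist "$NAME-$VERSION"   # one file to ship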
That way:
Developers will not be forced to use this repository. The barrier to entry must be as low as possible to encourage participation, or it will be ignored: it is just a script to execute to get the "delivery module" with everything in it they need (a Maven repository can be used for that too)
We are a risk-averse, Microsoft shop: you can use any repository you want
Some of my fellow devs care about writing quality, reliable software, some don't: this has nothing to do with the quality of the code written in these packaged modules
We're better at writing processes than following them: the only process involved in this is the packaging process, and it can be fairly automated
Don't assume best practices: you are not forced to apply any kind of static code analysis before packaging executable and source files.