Is writing deployment friendly code considered a good virtue on the part of a programmer?
If yes, then what are the general considerations to be kept in mind when coding so that deployment of the same code later does not become a nightmare?
The biggest improvement to deployment is to minimize manual intervention and manual steps. If you have to type in configuration values or manually navigate through configuration screens, there will be errors in your deployment.
If your code needs to "call home", make sure that the user understands why, and can turn the functionality off if necessary. This might only be a big deal if you are writing off-the-shelf software to be deployed on corporate networks.
It's also nice for your program not to depend on too many environmental things in order to run properly. To combat this, I like to define a directory structure with my own bin, etc and other folders so that everything can be self-contained.
The whole deployment process should be automated to minimize human error. The software should not be affected by its environment. Any new deployment should be easy to roll back if a problem occurs. While coding, you should not hard-code configuration values that may differ between environments. Configuration should be done in such a way that it can be easily automated for each environment.
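For instance (just a sketch, not tied to any particular framework; the keys and environment names below are made up), a Node.js app can keep environment-specific values in a config module driven by NODE_ENV and environment variables instead of hard-coding them:

```javascript
// config.js -- a minimal sketch; values differ per environment, nothing is
// hard-coded in the application code that uses them.
const env = process.env.NODE_ENV || 'development';

// Hypothetical per-environment settings, kept in one place outside the app logic.
const settings = {
  development: { dbHost: 'localhost',           logLevel: 'debug' },
  staging:     { dbHost: 'staging-db.internal', logLevel: 'info'  },
  production:  { dbHost: 'prod-db.internal',    logLevel: 'warn'  },
};

// Environment variables override the file defaults, so an automated deploy
// can inject values without editing any source files.
module.exports = {
  ...settings[env],
  dbHost: process.env.DB_HOST || settings[env].dbHost,
};
```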
Client or server?
In general, deployment friendly means that you complete and validate deployment as you complete a small story / unit of work. It comes from continual QA more than style. If you wait till the last minute to build and validate deployment, the cleanest code will never be friendly.
Everything else in deployment, desktop or server, follows from early validation. You can add all the wacky dependencies you want, if you solve the delivery of those dependencies early. Some very convenient desktop deployment mechanisms result in sandboxed / partially trusted applications. Better to discover early that you can't do something (e.g. write your log to c:\log.txt) than to find out late that your customers can't install.
I'm not entirely sure what you mean by "deployment friendly code." What are you deploying? What do you mean by "deploying"?
If you mean that your code should be transferable between computers, I guess the best things to do would be to minimize unnecessary (with a given definition of "unnecessary") dependencies to external libraries, and document well the libraries that you do depend on.
Related
I've only got a small amount of experience with A/B testing, but from what I've seen the standard approach is to introduce some conditional logic in an application's code. This can be tricky to implement properly (depending on the complexity of the test) and requires extra work both for setup and cleanup.
It got me wondering: are there any frameworks or approaches to A/B testing that simplify matters using, e.g., Git branches? I'm envisioning something at the load balancer level, which directs half of traffic to a server where "master" or "default" has been deployed, and the other half to a server with "experiment" deployed. This way the code itself could always be completely agnostic of any A/B tests going on; and presumably the act of choosing either A or B for full deployment would be a simple flip of a switch.
I'm sure this would not be trivial to set up properly. But still I wonder if it's possible, and if in fact it's already been done.
It's relatively easy to build and definitely doable. You need to implement a deployment system that deploys every branch whose name starts with, for example, "ab_*" into a different folder on your servers. Then, at some point in your code, you decide which folder should be included in the actual user session based on your test. It's not really a 'framework'; it's a simple architectural pattern you have to add to your own system. I was doing the same thing in production before.
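A minimal sketch of that folder-selection idea in Node.js (the branch names, folder layout and bucketing rule are all assumptions, just to illustrate the pattern):

```javascript
// ab-loader.js -- sketch only: pick which deployed branch folder serves a session.
// Assumes the deploy system put branches into ./deploys/master and ./deploys/ab_new_checkout.
const path = require('path');

function pickVariant(sessionId) {
  // Hypothetical bucketing: hash the session id and split traffic 50/50.
  const bucket = sessionId.split('').reduce((acc, c) => acc + c.charCodeAt(0), 0) % 2;
  return bucket === 0 ? 'master' : 'ab_new_checkout';
}

function loadApp(sessionId) {
  const variant = pickVariant(sessionId);
  // Each deploy folder exposes the same entry point, so the application code
  // itself stays agnostic of which variant is running.
  return require(path.join(__dirname, 'deploys', variant, 'app.js'));
}

module.exports = { pickVariant, loadApp };
```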
I am pretty comfortable with producing web apps now. I am using a NodeJS stack on the back-end and usually have a fair amount of JavaScript on the front end. Where I really lack understanding is the deployment process.
What is a typical deployment process?
From what I have gathered in my reading a deployment/build process can include several tasks:
Running through unit-test suites
Concatenating script and CSS files
Version numbering your app
Tracing module dependencies (node_modules)
Pushing it to a remote repo (GitHub)
Instructing 'staging' servers to pull down the latest repo
Instructing 'production' server to pull down the latest repo
This has all left me a little overwhelmed. I don't know whether I should be going into this level of detail for my own projects; it seems like a lot of work! I am using the Sublime Text 2 IDE and it seems to have a Build Script feature; is this suitable? How does one coordinate all these separate tasks? I'm imagining that ideally they would all run at the flick of a switch.
Sorry for so many questions, but I need to know how people learned similar principles. Some of my requirements may be specific to NodeJS, but I'm sure the processes are similar no matter what stack you are developing on.
First off, let's split the job in two: front-end and back-end stuff. For both, you really want some kind of build system, but their goals and scope are vastly different.
For the front-end, you want your source to be as small as possible; concatenate/minify JavaScript, CSS and images. A colleague of mine has written a "compiler", Assetgraph, to do this for you. It has a somewhat steep learning curve, but it does wonders for your code (our dev builds are usually ~20 megs, production ~500 k).
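Assetgraph does far more than this, but to illustrate the basic concatenate-and-minify step (this is not Assetgraph itself; the tool choice and file names are only for illustration):

```javascript
// build-assets.js -- illustrative only; Assetgraph handles this (and much more) for you.
const fs = require('fs');
const UglifyJS = require('uglify-js'); // npm install uglify-js

// Concatenate the source files in dependency order (the order here is assumed).
const sources = ['src/vendor.js', 'src/app.js', 'src/widgets.js']
  .map((file) => fs.readFileSync(file, 'utf8'))
  .join(';\n');

// Minify the concatenated bundle.
const result = UglifyJS.minify(sources);
if (result.error) throw result.error;

fs.writeFileSync('dist/app.min.js', result.code);
console.log('dist/app.min.js written,', result.code.length, 'bytes');
```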
As for the back-end, you want contained, easily managed bundles of some sort. We re-package our stuff into Debian packages. As long as the makefile is wired up correctly, you get a lot of the boring build- and deploy-time stuff for free. Here's my (pre-npm 1.0) write-up on Debianizing node programs. I've seen other ways to do this in npm and on GitHub, but I haven't looked into them, so I can't speak to their quality.
For testing/pushing around/deploying, we use a rather convoluted combination of Debian package archives, git hooks, Jenkins servers and what not. While I highly recommend using the platform's native package manager for rolling out stuff, it can be a bit too much. All in all, we usually deploy to staging either automatically (on each git push), or semi-automatically for unstable codebases. Production deployments are always done explicitly.
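As a rough illustration of the "deploy to staging on each git push" idea (not our actual hooks; the paths, branch and restart command below are assumptions), a post-receive hook on the staging server might look like this:

```javascript
#!/usr/bin/env node
// hooks/post-receive -- sketch of auto-deploying staging on push.
const { execSync } = require('child_process');

let input = '';
process.stdin.on('data', (chunk) => { input += chunk; });
process.stdin.on('end', () => {
  // git feeds the hook lines of the form "<oldrev> <newrev> <refname>".
  const pushedMaster = input
    .split('\n')
    .some((line) => line.trim().endsWith('refs/heads/master'));
  if (!pushedMaster) return;

  // Check the pushed code out into the staging working tree and restart the app
  // (hypothetical paths and service name).
  execSync('git --work-tree=/srv/staging --git-dir=/srv/repo.git checkout -f master');
  execSync('systemctl restart staging-app', { stdio: 'inherit' });
});
```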
For the assets I use asereje https://github.com/masylum/asereje
I recently documented my nodejs deployment process in a blog post:
http://pau.calepin.co/how-to-deploy-a-nodejs-application-with-monit-nginx-and-bouncy.html
A build script sounds like a good idea indeed.
What should that build script do?
Make sure all the tests pass, else exit immediately
Concatenate your JavaScript and CSS files into single js/css files and minify them as well
Increment the version number (unless you decide to do that manually yourself)
Push to the remote repo (which instructs the staging and production servers through a git hook)
This is at least my opinion.
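If you want everything to run at the flick of a switch, one lightweight way to chain those steps is a small driver script (purely a sketch; the individual commands are assumptions and will differ per project):

```javascript
// build.js -- sketch of chaining the steps above into one command.
const { execSync } = require('child_process');

const run = (cmd) => {
  console.log('> ' + cmd);
  execSync(cmd, { stdio: 'inherit' }); // throws (and exits non-zero) if the step fails
};

run('npm test');                              // 1. run the test suite; abort on failure
run('node build-assets.js');                  // 2. concatenate/minify JS and CSS (hypothetical script)
run('npm version patch');                     // 3. bump the version in package.json and tag the commit
run('git push origin master --follow-tags');  // 4. push; server-side hooks handle staging/production
```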
Other resources:
http://howtonode.org/deploying-node-with-spark
http://howtonode.org/deploying-node-upstart-monit
http://dailyjs.com/2010/03/15/hosting-nodejs-apps/
How to deploy node app dependencies
I'm beginning development on a solution that will plug into an existing application. It will be made available for public use.
I have the option of using a newer technology that promotes better architecture, flexibility, speed, etc... or sticking with existing technology that is tried and tested which the application already uses.
The downside of going with the newer technology is that a major change to an essential config file needs to be made to support it. If the change goes wrong the app would be out of service. Uninstall is also an issue as future custom code by other developers may require the newer tech and there's no way this can be determined.
How important is this issue in considering an approach?
Will significant config changes put users off deploying code, or cause problems for them later?
Edit:
Intentionally not going into specifics about technologies here to avoid the question being sidetracked.
An install/uninstall program can be provided, but there is some complexity involved that may cause it to foul up on edge cases, resulting in a dead app. (A backup of the original config would be a way to mitigate that.) Also see the issue above, where I essentially can't provide an uninstall.
Yes, in my experience, any large amount of work will make users think twice about deploying or upgrading.
It's your standard cost/benefit analysis done by businesses with just about every decision. Will the expected benefits more than outweigh the potential costs?
When we release updates to our software, there's almost always a major component that's there just to assist the users to migrate.
An example (modified enough to protect the guilty): we have a product which generates reports on system performance and other things. But the reports aren't that pretty and the software for viewing them is tied to a specific platform.
We've leveraged BIRT to give us intranet-based reporting that looks much nicer and only needs the client to have a web browser (not some fat client).
Very few customers made the switch until we provided a toolset that would take their standard reports and turn them into BIRT reports. Once we supplied that, customers started taking it seriously - the benefit hadn't changed, but the cost had gone right down.
You've given us no detail, so we can't answer with any specificity. But if your question is, will a significant portion of your potential userbase be deterred from using your product if they have to do significant setup work, then the answer is yes. I've seen this time and time again, with my own products and those that I've installed myself, even when the only "config change" required is an uninstall and reinstall. People don't like to do work.
You may want to devote more effort than you've considered so far to making the upgrade painless. Even if you're upgrading someone else's framework, you may find the effort worthwhile and reflected in an increased number of installs.
I have noticed that "power users" - developers, sysadmins, etc. - are willing to put up with more setup work.
I'm not sure what you mean by "major config change", but if you're talking about settings / configuration files, then I've been doing something like this:
An application always contains a default configuration which is useful for most users, and which can't be replaced. Instead, users can override one or more of the default settings in their own, separate configuration file. When a new (major) version is released, most users don't need to reconfigure anything: their own custom settings are still taken from their own configuration file, and any newly required parameters are taken from the new release's default settings.
It's obvious that most users don't want to waste their time adjusting settings that were already right, and quite rightfully so.
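A minimal sketch of that layering in a Node.js app (the file name and keys are made up), where the shipped defaults can't be replaced and the user's file only carries their overrides:

```javascript
// load-config.js -- sketch of "defaults that can't be replaced, overridden per user".
const fs = require('fs');

// Shipped with every release; newly required parameters get added here.
const defaults = {
  port: 8080,
  logLevel: 'info',
  cacheTtlSeconds: 300,
};

// The user's own file contains only the keys they chose to override.
function loadConfig(userConfigPath) {
  let overrides = {};
  if (fs.existsSync(userConfigPath)) {
    overrides = JSON.parse(fs.readFileSync(userConfigPath, 'utf8'));
  }
  // Defaults first, user overrides on top; an upgrade never clobbers the user's file.
  return { ...defaults, ...overrides };
}

module.exports = loadConfig;
```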
My organization has begun slowly repurposing itself to a less product-oriented and more contract-oriented business model over the last year or two. During the past year, I was shifted into the new contracting business to help put out fires and fill orders. While the year as a whole was profitable (and therefore, by at least one measure, successful), we had a couple of projects that really dinged our numbers for the year back around June.
I was talking with my manager before the Christmas holiday, and he mentioned that, while he doesn't like the term "post-mortem" (I have no idea what's wrong with the term, any business folks or managers out there know?), he did want to hold a meeting sometime mid-January where the entire contract group would review the year and try to figure out what went right, what went wrong, and what initiatives we can perform to try to improve profitability.
For various reasons (I'll go into more detail if it's requested), I believe that one thing our team, and indeed the organization as a whole, would benefit from is some form of organized code-sharing. The same things get done again and again by different people and they end up getting done (and broken) in different ways. I'd like to at least establish a repository where people can grab code that performs a certain task and include (or, realistically, copy/paste) that code in their own projects.
What should I propose as a workable common source repository for a team of at least 10-12 full-time devs, plus anywhere from 5-50 (very) part time developers who are temporarily loaned to the contract group for specialized work?
The answer required some cultural information for any chance at a reasonable answer, so I'll provide it here, along with some of my thoughts on the topic:
Developers will not be forced to use this repository. The barrier to entry must be as low as possible to encourage participation, or it will be ignored. Sadly, this means that anything which requires an additional software client to be installed and run will likely fail. ClickOnce deployment's about as close as we can get, and that's awfully iffy.
We are a risk-averse, Microsoft shop. I may be able to sell open-source solutions, but they'll be looked upon with suspicion. All devs have VSS, the corporate director has declared that VSTS is not viable going forward. If it isn't too difficult a setup and the license is liberal, I could still try to ninja a VSTS server into the lab.
Some of my fellow devs care about writing quality, reliable software, some don't. I'd like to protect any shared code written by those who care from those who don't. Common configuration management practices (like checking out code while it's being worked on) are completely ignored by at least a fifth of my colleagues on the contract team.
We're better at writing processes than following them. I will pretty much have to have some form of written process to be able to sell this to my manager. I believe it will have to be lightweight, flexible, and enforced by the tools to be remotely relevant because my manager is the only person who will ever read it.
Don't assume best practices. I would very much like to include things like mandatory code reviews to enforce use of static analysis tools (FxCop, StyleCop) on common code. This raises the bar, however, because no such practices are currently performed in a consistent manner.
I will be happy to provide any additional requested information. :)
EDIT: (Responding to questions)
Perhaps contracting isn't the correct term. We absolutely own our own code assets. A significant part of the business model on paper (though not, yet, in practice) is that we own the code/projects we write and we can re-sell them to other customers. Our projects typically take the form of adding some special functionality to one of the company's many existing software products.
From the sounds of it, you have an opportunity during the "post-mortem" to present some solutions. I would create a presentation outlining your ideas and present them at this meeting. Before that, I would recommend that you set up some of the solutions and demonstrate them during your presentation. Some things to do:
Evangelize component-based programming (a good read is Programming .NET Components by Juval Lowy). Advocate the DRY (Don't Repeat Yourself) principle of coding.
Set up a central, common location in your repository for all your re-usable code libraries. This should hold the reference implementation of your re-usable code library.
Make it easy for people to use your code libraries by providing project templates for common scenarios with the code libraries already baked in. This way your colleagues will have a consistent template to work from. You can leverage the VS.NET project template capabilities for this; check out the following links: VSX Project System (VS.NET 2008) and the Code Project article on creating Project Templates.
Use a build automation tool like MSBuild (which is bundled with VS2005 and up) to copy over just the components needed for a particular project. Make this part of your build setup in the IDE (VS.NET 2005 and up have nifty ways to set up pre-compile and post-compile tasks using MSBuild).
I know there is resistance to open source solutions, but I would still recommend setting up and using a continuous integration system like CruiseControl.NET so that you can use it to compile and test your projects on a regular basis from the central repository where the re-usable code library is maintained. This way any changes to the code library can be quickly checked to make sure they do not break anything. It also helps bring out version issues with the various projects.
If you can set this up on a machine and show it during your post-mortem as part of the steps that can be taken to improve, you should get better buy-in, since you are showing something already working that can be scaled up easily.
Hope this helps and best of luck with your evangelism :-)
I came across a set of frameworks recently called the Chuck Norris Frameworks; they are available on NuGet at http://nuget.org/packages/chucknorris . You should definitely check them out, as they have some nice templates for your ASP.NET projects. Also definitely check out NuGet.
Organize by topic; require unit tests (feature-level) for check-in/acceptance into the library; add a wiki to explain what/why and for searching.
One question: You say this is a consulting group. What code assets do you have? I would think most of your teams' coding efforts would be owned by your clients as part of your work-for-hire contract. If you are going to do this you need to make absolutely certain that your contracts grant you rights to your employees' work.
Maven has solved code reuse in the Java community - you should go check it out.
I have a .NET developer who has devised something similar for our internal use for .NET assemblies. Because there's no comparable .NET Internet community, this tool will just access an internal repository in our corporate network. Otherwise it will work much the way Maven does.
Maven could actually be used to manage .NET assemblies directly (we use it with our Flex .swf and .swc code modules); it's just that .NET folks would have to get over using a Java tool and would probably have to write a Maven plugin to drive msbuild.
First of all, for code organization, check out the Microsoft Framework Design Guidelines at http://msdn.microsoft.com/en-us/library/ms229042.aspx and then create a central location in source control for the new framework that you're going to create. Set up some default namespaces and assemblies for cleaner separation, and make sure everyone gets a daily build.
Just an additional point, since we have "shared code" in my shop as well.
We found out this is very much a packaging issue:
Whatever code you are producing or tool you are using, what you should have is a common build tool able to package your sources into a "delivery component", with everything needed to actually execute the code, but also the documentation (compressed) and the source (compressed).
The main benefit of having such a "delivery package unit" is to have as few files to deploy as possible, in order to ease the download of those units.
The build process can very well be managed by Maven or any other (ant/nant) tool you want.
When an audit team wants to examine all our projects, we just deploy to their workstation the same packages we deploy to a production machine, except that they will uncompress the source files and do their work.
Since our source files also include whatever files are needed to compile them (for instance, Eclipse project files), they can even re-compile those projects in their own development environment.
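As a rough sketch of such a packaging step (our build is actually driven by Maven/Ant; the script, paths and naming below are only illustrative), the idea is one command that bundles the runtime files plus the compressed docs and compressed sources into a single delivery unit:

```javascript
// package-delivery.js -- illustrative packaging step, not the answer's actual build.
const { execSync } = require('child_process');

const version = process.env.BUILD_VERSION || '0.0.0-dev';
const name = `myapp-${version}`;

// Compress documentation and sources separately, then bundle everything
// (runtime files + docs archive + source archive) into one delivery unit.
execSync(`tar czf docs-${version}.tar.gz docs/`);
execSync(`tar czf src-${version}.tar.gz src/`);
execSync(`tar czf ${name}.tar.gz build/ docs-${version}.tar.gz src-${version}.tar.gz`);

console.log(`Delivery package written: ${name}.tar.gz`);
```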
That way:
Developers will not be forced to use this repository. The barrier to entry must be as low as possible to encourage participation, or it will be ignored: it is just a script to execute to get the "delivery module" with everything in it they need (a maven repository can be used for that too)
We are a risk-averse, Microsoft shop: you can use any repository you want
Some of my fellow devs care about writing quality, reliable software, some don't: this has nothing to do with the quality of the code written in these packaged modules
We're better at writing processes than following them: the only process involved in this is the packaging process, and it can be fairly automated
Don't assume best practices: you are not forced to apply any kind of static code analysis before packaging executable and source files.
In the past, my development team has mostly done waterfall development against an existing application, and deployments were only really done towards the end of a release, which would normally result in TEST, UAT, and PROD releases, usually only three to five releases in a two-month cycle.
A release was an MSI installer, deployed via Group Policy.
We have now moved to a more agile methodology and require releases at least once per day for testing, some times more often.
The application is a VB6 app and the MSI was taking care of COM registrations for us; users do not have elevated privileges on their machines.
Does anyone have any better solutions for rapid deployment?
We have considered batch/scripted installs of the MSI, or doing COM registrations per file, both using CPAU for elevated privileges, and ClickOnce. Neither of these have been tested yet.
Edit: Thanks for suggestions.
To clarify, my pain point is that the MSI build/deployment process takes a long time: it can take up to two hours to get a new build onto the testers' desktops. The testers do not have admin rights on their machines (and will not get them), so I am looking for a better solution.
I have played around with ClickOnce, using a .NET wrapper which starts up the application and has all the OCX/DLL VB6 assemblies as isolated dependencies, but it is having issues finding all the assemblies when it starts up (or messages to that effect).
CruiseControl and Nant are probably your best bet for builds with flexible output. But Rapid Deployment?
My concern is that you are looking at the daily builds in the wrong way. The dailies do NOT need to be widely deployed. In fact, QA and Development are the only ones who should care about the builds on a day-to-day basis. Even then, the devs shouldn't be out of sync ;).
The customer team should only receive builds at the end of an iteration. That is where you show them what you have done, they provide feedback, and you move forward from there. Giving them daily builds could cause a vicious thrashing that would kill your velocity.
All that being said, a nice deployment package might be good for QA. But again, it depends on how in step they are with your development iterations. My experience, right or wrong, is that QA is one iteration back testing the deliverables from the last iteration. From that point of view, they should be testing with the last "stable" release as well.
Is this something you can do in a virtual machine? You could securely give your testers admin rights on the virtualized system and most virtualization software has some form of versioning so you can roll back to a "good" state if something goes wrong. I've found it to be very useful for testing.
I'd recommend ClickOnce with the option to update on execution. That way only people using the software receive and install the updates.
You could try registry-free COM. See this other question. ActiveX EXEs still have to be registered though.
EDIT: to clarify, using registry-free COM means the OCX/DLL components you mention don't need to be registered. Nor do any OCX/DLL components they use. You can just copy the whole application directory onto a tester's machine and it will work straightaway.
If I understand your question correctly, you need admin rights to install your product. I see three options:
1) Don't install to the tester's desktops. Get some scratch testing machines (as dmo suggested, VMWare might help) that you can safely give them admin rights to. This may mean giving them a test domain and their own group policy to edit.
2) Build a variant that doesn't require MSI installation, and can be executed directly. Obviously your testers would not be testing the deployment and installation process with this variant, but they could perform other tests of the product's functionality. I don't know if this is possible with your product; it would certainly be work.
3) Take your agile medicine: "[prefer] responding to change over following a plan". That is, if denying admin rights to your testers is interfering with their ability to do their jobs efficiently, then challenge the organization to give them admin rights. (from experience, this will mean shifting to #1, but it might be the best way to make the case). If they are expected to test the product, how can they not even be allowed to install it in the same way a customer would?
If the MSI deployment is taking velocity out of agile testing, then you should test MSI deployment less regularly.
Use XCOPY deployment wherever possible, using .local for COM components. This can be a problem with third party components. As third party components are pretty stable, you should be able to build a custom MSI for these, install them once and be done with it.
You should try an automated build/deploy process or script that you can manually run. Try Teamcity or CruiseControl. Good luck!
I'm not sure just precisely what your pain point is.
You specifically mention registration of VB6 COM objects. Does the installer sometimes fail because of that?
Is it that the installer works but people don't know to install the new build so they are more often than not reporting bugs on an old build?
If the former, then I suspect the problem to be that VB6 was very likely to play fruit basket turnover with the GUIDs when rebuilding the solution. Try recreating your public interfaces in MIDL and have your VB6 classes implement those interfaces.
If the latter, then try Microsoft's SMS product. No, it has nothing to do with cell phones. If all the users aren't on the same domain, then you will have to build an "auto update" feature into your product. Here is a third-party offering that I've heard of but never used.
I'm using SetupBuilder (http://setupbuilder.com/products_setupbuilder_pro.htm) for all my builds. Very extensible. Excellent support.
Not sure exactly if it fits your needs, but this kind of post on the forums, "Installing as a limited account user (non-admin) on XP" (http://www.lindersoft.com/forums/showthread.php?t=11891&highlight=admin+rights), makes me think it might be.