CentOS 5 end of life - server

My server is running CentOS 5. The docs tell me that the "end of life" is March 31, 2017. Does this mean that the software will stop functioning on that date, or does it mean that there will no longer be any upgrades available for version 5? If the latter is true, what kind of difficulties could I expect to face if I decide not to migrate to another server and OS?

In the software industry generally, "end of life" largely (but not always) means "end of the support period", where "support" can mean different things; usually it means all of the following at once:
The developers will not release any new patches or updates for the product. This covers feature updates but, more importantly, security updates too. If you must run unsupported software, ensure it is suitably firewalled from the public Internet (and from untrusted users), if not completely air-gapped.
The developers will not go out of their way to provide personal technical support (e.g. phone support); however, they will generally keep self-service resources (web pages, knowledge-base articles, etc.) available. Microsoft still has pages about Windows 2000 around somewhere.
The developers/publishers are not obligated to keep providing access to this version of the software. This matters less with open-source software (you can clone the repository, rewind to an older version and build from source), but with commercial/proprietary software you will probably lose access to the formerly released binaries unless you retain your own copies.
Your concern about software suddenly stopping working after the end-of-life date is unfounded, at least in CentOS' case (it's open-source software), and even proprietary software generally doesn't have timebombs in it (excepting time-limited trial versions, of course). The only thing to watch out for is software with an activation system: after the support period ends there is no guarantee the activation system will keep working, and the same goes for physical dongles (they won't stop working immediately, but they might eventually fail). In that case you'll want to contact the developers and negotiate a special build of the software with the activation features removed, or reverse-engineer the DRM to remove it (which may or may not be legal in your jurisdiction).
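To make the "no more security updates" point concrete, here is a minimal Python sketch (not part of the original answer) that warns when the local CentOS release is past its end-of-life date. The EOL table is illustrative; verify the dates against the official CentOS announcements before relying on them.

    #!/usr/bin/env python
    """Sketch: warn if the local CentOS release is past its end-of-life date.
    The EOL table below is illustrative; verify the dates against the official
    CentOS announcements before relying on them."""
    import re
    from datetime import date

    # Illustrative EOL dates, keyed by major release number.
    EOL_DATES = {
        "5": date(2017, 3, 31),
        "6": date(2020, 11, 30),
        "7": date(2024, 6, 30),
    }

    def centos_major_release(release_file="/etc/redhat-release"):
        """Return the major release number found in the release file, or None."""
        with open(release_file) as f:
            match = re.search(r"release\s+(\d+)", f.read())
        return match.group(1) if match else None

    if __name__ == "__main__":
        major = centos_major_release()
        eol = EOL_DATES.get(major)
        if eol is None:
            print("Unknown release; cannot determine EOL status.")
        elif date.today() > eol:
            print("CentOS %s reached end of life on %s: no further security updates." % (major, eol))
        else:
            print("CentOS %s is supported until %s." % (major, eol))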

Related

Multiple OS vs Single OS phone and server development

A few friends and I run a little app-creation business in our spare time. Our current development environment is 3 MacBook laptops running just Snow Leopard, 4 Asus laptops dual-booting Windows 7 and Ubuntu, and a rubbish test server box that is similar to our VPS.
Our setup currently works okay-ish, with a few minor issues, like not knowing which version of the software we are working on (caused by continually switching operating systems) and lost productivity from being too lazy to switch the laptop we are working on, since that means unplugging it and plugging in the new one, including the second monitor, keyboard and mouse.
Our system is far from professional and we are looking to upgrade. This is because we wish to increase our staff and we have some cash saved up, so why not. The phone platforms we are targeting are iOS, Android and Windows Phone 7. Our servers are written in PHP and JSON. So my question is basically: how do you manage with all these multiple operating systems?
iOS requires Mac OS X
Android can use any of them
JSON requires Linux/Mac OS X
Windows Phone 7 requires Windows
Do you use some form of virtualization?
Or do you try one of those frameworks that compile to each phone's native binary, such as Unity?
There are many many different ways to solve this and you may have to find what works best for you. Here are some suggestions though.
Using the MacBooks, set up Boot Camp so you can dual-boot into OS X or Windows. This means you can use the MacBook for all development without having to bother swapping monitors, etc. Doing this will leave your other Windows laptops spare, which you can use for the next suggestion...
Set up a central repository for your source code. Use one of the servers you have, or re-purpose one of the other machines and install a decent source-control system: CVS, Git, etc. There are plenty of resources about these. This will allow you to keep your code in one place, so it won't matter which machine you are working on - you can always get the most recent code. Plus it will help you track your code changes. Oh, and don't forget having it all in one place will make backups much easier (you do do backups, don't you...?)
Don't fall into the trap of upgrading hardware just because you have some money floating around. You may just need to use the hardware you have more wisely. You mention what you have is "far from professional". You don't need the latest, greatest hardware and software to do development. I've done iOS development on a 4-year-old MacBook Pro, used an 8-year-old PC as a server for web and database, and still use Windows XP every day.
Depending on how many of you there are, you may not have enough MacBooks. If this is the case, then perhaps some of you are specialists in the server-side stuff (i.e. they don't do iOS development and so don't need the Macs).
Virtualisation: using VMware or similar tools is an excellent way of getting more from what you have. For example, you could have a couple of test servers that aren't very heavily utilised. Using virtualisation, you could put both of these servers onto one machine. This then frees up the other box for something else. It also makes it very easy to back up (you are doing backups, aren't you...?) an entire server and recover it to its exact state in the case of a hardware failure. You can also very easily create a server tailored for each client/project and switch between them quickly, without having to maintain lots of other stuff (think of having a web server configured for one project, then working on another project that needs a different configuration: you change it, then you need to change it back, and so on).
EDIT: Update in response to comments.
If using Boot Camp isn't an option, then consider running a Windows and/or Linux virtual machine inside OS X. Depending on the spec of your MacBooks, and as long as you don't need very low-level hardware access on Windows, this would probably work just as well without the need to switch in and out using Boot Camp. The same goes for a Linux virtual machine. I'm a big fan of using virtual machines in development environments, as they allow you to copy around and swap servers in and out without having to rely on physical hardware connections. And you can very easily return to a known state with the server configuration and data.
With regard to source control "in the cloud": I'm not a fan of this approach. It's my source code and I want to control it. I don't want to be reliant on some other company, and I don't want to hope I've read some terms and conditions correctly and am not handing over my code to some other company to do what they want with it. Aside from that, what happens if your internet access goes down and you absolutely must get some coding done for a customer? If you are relying on another service, then you are risking problems. Yes, it has advantages for multi-site work, they do the backups for you, etc. But it really isn't a problem unless you have lots of developers spread all across the world. And even then it isn't necessarily a problem. You could always do a backup of your code to some package file, encrypt it and then throw that up in the cloud for backup storage (as well as burning it to disc, writing it to another external hard drive and storing them off-site). But I certainly wouldn't want to rely on external source control unless I was doing open-source stuff.
There's sooooo much more to these subjects and there are many other subjects you will probably encounter along the way of building up your business.
One of the most important things about software development is to keep it organised, and to get that organisation done at the start. If each of you is just keeping a copy of the code on local drives, changing code and hoping that you haven't changed the same file as someone else, that will just lead to pain. The source control aspect is key from the start.
Oh, and did I mention backups?
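Since the backup point keeps coming up, here is a minimal sketch in Python of one way to do it: bundle the central Git repository into a dated archive that you can copy off-site. The repository and backup paths are hypothetical placeholders, not anything from the original answer.

    #!/usr/bin/env python
    """Sketch: bundle a central Git repository into a dated archive for off-site backup.
    REPO_PATH and BACKUP_DIR are hypothetical; adjust them for your own setup."""
    import os
    import subprocess
    from datetime import date

    REPO_PATH = "/srv/git/ourproject.git"   # hypothetical central repository
    BACKUP_DIR = "/srv/backups"             # hypothetical backup destination

    def backup_repo(repo_path=REPO_PATH, backup_dir=BACKUP_DIR):
        """Create a self-contained bundle of every branch and tag in the repository."""
        if not os.path.isdir(backup_dir):
            os.makedirs(backup_dir)
        bundle = os.path.join(backup_dir, "ourproject-%s.bundle" % date.today().isoformat())
        # 'git bundle create <file> --all' writes a single file that 'git clone' can restore from.
        subprocess.check_call(["git", "bundle", "create", bundle, "--all"], cwd=repo_path)
        return bundle

    if __name__ == "__main__":
        print("Wrote %s" % backup_repo())

Run something like this from cron each night and copy the resulting .bundle file to an external drive or another machine; restoring is just a git clone from the bundle.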
I would also consider the IDE you're using as part of the equation. For instance, a good cross-platform IDE (like Qt 4+) and a centralised code repository on a server will go a long way towards mitigating your working problems. Eclipse, NetBeans and Qt 4+ are cross-platform and will work with all 3 systems. Virtualisation, as you mentioned, is an option, but I would first decide on the IDE platforms to use before worrying about your dev infrastructure setup.
Bro, I'm not a pro, but you have two options:
Either multi-boot your system by installing multiple OSes... (obviously, you need a separate MacBook)
Or use virtual machines like VMware etc.
Personally, I haven't heard much about libraries like Unity etc.
Go for dedicated systems & not just libraries.

NuGet official package source: Should I be worried about the safety of the packages?

According to this page:
No central approval process for adding packages. When you upload a package to the NuGet Package Gallery (which doesn’t exist yet), you won’t have to wait around for days or weeks waiting for someone to review it and approve it. Instead, we’ll rely on the community to moderate and police itself when it comes to the feed. This is in the spirit of how CodePlex.com and RubyGems.org work.
This makes me feel uneasy. Before I download a Firefox add-on, I know it should not contain malicious code, because AFAIK all add-ons on addons.mozilla.org are reviewed by Mozilla. Before I download an open-source project from codeplex.com or code.google.com, I know it should be safe because anyone can check its source code. And I can also use WOT (Web of Trust) to check what other people think about the project.
But when I download a package from the NuGet official package source (take this one, for example), I do not know who made the package or what it contains. It seems to me that anyone can pack anything into a package, give it any name they want (like "Microsoft Prism", as long as the name is not taken), and then upload it to the official package source.
Should I be worried about the safety of the packages on the NuGet official package source?
Your uneasiness should apply to software you obtain from any source:
Binaries downloaded from Sourceforge.net, Codeplex.com, etc could feasibly contain malicious code (either planted by the original submitter or, more likely, inserted by a hacker into the website) that may pass unnoticed until someone (you?) gets bitten and raises the alarm.
Even if you compile your own binaries from source downloaded from one of the former websites, it could still perform malicious acts unless you go over all the source code and understand what it does.
Even software downloaded from 'app stores' (e.g. Apple iTunes, Android Market) could feasibly contain malicious code; some of these review processes are partially automated but are still not infallible, and the human review that also occurs is definitely not infallible!
There have been examples in the past of boxed software containing malware!
Perhaps there is a continuum of trust that you can have in software (delivered as binaries or source code), and something like the NuGet Package Gallery (and CodePlex.com, RubyGems, etc.) probably lies towards the less-trustworthy end of that continuum.
There are potential solutions to this sort of problem, such as those proposed by the Trusted Computing Platform Alliance, however they come with huge restrictions on the freedoms that we currently enjoy in developing software and sharing the software we develop as we see fit, without the need for licenses or cryptographic keys obtained from central authorities at great expense.
I believe the community will come up with conventions and mechanisms for ensuring that NuGet becomes a trustworthy source of software libraries for .NET developers, while retaining the agility that comes from not requiring a formal review process. However, the ultimate responsibility rests with you as a user to ensure that your IT security isn't compromised, and the precautions you take should be a function of how critical IT security is in the context of the software you are writing (e.g. home projects: probably low; banking, medical or process-control projects: probably high!).
NuGet doesn't manage trust. Even if it did, you would still have to be concerned about trusting what NuGet trusts.
You should absolutely be concerned about the safety of the code in a NuGet package. You should be concerned about the safety of any code you are not familiar with.
The approach I take to using packages, both personally and professionally, through NuGet and NPM is below:
Lock in the semantic version number completely. Explicitly specify the major, minor, and patch numbers (a small audit sketch follows this list). Don't assume that new updates will be safe or that their semantic version will be accurate.
Use only well known current versions for production.
Experiment with anything in a test environment with limited access, e.g. under an account that isn't a local administrator, with no local access to highly privileged credentials and no access rights to privileged resources granted to the test machine's IP.
Check the vendor. For example if the package is released by Amazon and it's an AWS SDK, then that package is probably safe to use if you trust Amazon.
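As a small illustration of the first point (locking versions completely), here is a sketch of a Python script that scans a classic NuGet packages.config and flags any entry whose version isn't a fully specified major.minor.patch number. The file name and format are assumptions; adapt it to however you track package references.

    #!/usr/bin/env python
    """Sketch: scan a classic NuGet packages.config and flag entries whose version
    is not a fully specified major.minor.patch number. The layout assumed here is
    the standard <packages><package id="..." version="..."/></packages> form."""
    import re
    import sys
    import xml.etree.ElementTree as ET

    PINNED = re.compile(r"^\d+\.\d+\.\d+(\.\d+)?$")  # e.g. 12.0.3 or 4.0.30319.1

    def audit(config_path):
        """Return a list of (package id, version) pairs that are not fully pinned."""
        tree = ET.parse(config_path)
        problems = []
        for pkg in tree.getroot().iter("package"):
            version = pkg.get("version", "")
            if not PINNED.match(version):
                problems.append((pkg.get("id"), version))
        return problems

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "packages.config"
        for pkg_id, version in audit(path):
            print("Not fully pinned: %s (version=%r)" % (pkg_id, version))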
For example, the only packages I would trust right now to just go and add in a production environment are Newtonsoft.Json and NHibernate. My biggest concern with new open-source packages that anyone can publish is whether they actually work as described, before I buy in and waste my time on something that doesn't meet my needs.
I feel as though if you did enough research on the package to see whether it's suitable for a production environment, you have probably learned enough about the software and its community to determine whether you can trust it not to do anything malicious. Researching the software and its community really means more to me than a NuGet stamp of approval decided by one central authority that we all pray is perfect.

When is it time to port an old application to new platform?

I'm working for a company that has an established application written in VB6. The application is stable and continues to provide the company with good income. However, it is beginning to show its age and noises are being made about porting to a more modern platform such as .NET.
Since this is hardly ever a cut-and-dried decision, I would appreciate input on when it is a good time to port a long-standing application to a modern platform.
Some of the pros and cons that I have already worked through:
In favor of porting
Finding skills for an old programming language becomes harder and more expensive
Support from the platform vendor ends at some point
Leveraging modern programming practises on the old platform becomes harder or impossible
Rewriting provides the opportunity to improve existing practises
Moving to a modern platform is motivating for the development team
Moving to a modern platform provides marketing opportunities
Against porting
"If its not broken don't fix it"
The cost of rewriting versus the return
Risks associated with the transition from the old to the new application
Upskilling existing software engineers
Some related StackOverflow questions:
What makes code legacy?
When do you say that the code is Legacy code?
One of the things to consider is that porting an application can get more and more expensive over time. I have seen applications written in 'ancient' languages that were very well developed. But, as happens many times, all the domain knowledge was in the code and in the heads of the developers, not in up-to-date documents.
So in situations like this, porting means not only rewriting in the new sparkly language but also reverse-engineering the specs and picking the (hopefully still available) brains of the developers. This becomes harder and harder over time.
Another thing is that 'porting' is hardly ever as easy as the migration wizard wants us to believe. Many wizards produce a half-baked solution that is still built according to the constructs and features common to the 'legacy' environment and will hardly use the new features and possibilities. This might not seem that bad, but if you leave it at that level you are in fact making it very hard for developers who know the 'new' language to understand the code, and making the port to the next platform or language even harder. That is what I call LEGACY in capitals: dragging useless stuff around for decades.
The optimal moment to start porting, from a developer's point of view, was yesterday.
The optimal moment to start porting, from a manager's point of view, is tomorrow.
The optimal moment to start porting, from a competitor's point of view, is never.
There are a lot of other considerations to evaluate: opportunity cost (what else could we be doing), capacities for extensibility and growth (what else does the application need to do/be), sustainability with other moving parts (DB upgrades, OS upgrades), etc. The list goes on and on.
Specific to VB6, I would evaluate what limitations are standing in the way of product progress versus moving up to the current .NET Framework. Ask yourself -- is this really an IF scenario, or a WHEN scenario?
From a general standpoint, the worst time to port an application is when you HAVE to port it. Your situation sounds like an ideal time to begin code migration: before it becomes a necessity. Given your legacy product's profitability for your company, any situation where you're forced to migrate brings pressures around deadlines, scope, etc.
All things considered, your situation sounds like an ideal time to port up to the .Net Framework, well before it becomes necessary.
Echoing jro and especially Erno,
Upgrade before there is a crisis.
Upgrade before the developers move on to other places where they have a chance at working on a modern framework.
Upgrade while the developers that built the original program are still around.
No competent developer will accept a pure porting job; it is not a career-enhancing move. But the existing developers will be happy to learn the latest framework as part of a porting effort.
VB6 was released in 1998, and on March 31, 2008 Microsoft EOL'ed all VB6 support. Your company is so far into the danger zone with this code that it isn't funny.
To add some perspective, when VB6 was released in 1998:
Netscape was still an independent company and had just released Netscape 4
Clinton was still president
The internet was still a new concept
Intel had just released its hot new Pentium II running at 450 MHz
The Matrix was still filming
Google hadn't been founded yet (that came later in the year)
At some point, the company will be forced to upgrade the app because the operating system will no longer support the APIs.
You should leave this company. It is career death to stay.
Update, because Cody thinks I am an individual developer:
@Cody -- Rethink your assumptions. I run my own company. Without fail, every time we have slipped behind the last stable release of a platform, catching up has been incredibly painful and expensive. The latest pain point is that we are on Dojo 0.4.3 and Tapestry 4. T4 and Dojo 0.4.3 have a mutual interdependency that we are separating (slowly). Moving to Tapestry 5 and/or jQuery, or even just to a more recent version of Dojo, is very slow and very painful. The porting has taken over a year because it has to be a long, stretched-out process to keep other development moving along.
The choices are:
1. stay stuck on the old library forever (with the problems around finding/attracting talent),
2. try to run dual-mode (old/new) code (code doesn't always cooperate), or
3. freeze development on large chunks of the product during the port.
So far we have been doing a combination of #2 and #3.
Being on an old version of either Dojo or Tapestry means that we have lost the ability of the community to support us and help us with problems. The advantage of a framework is that other people are doing work that solves your problems. Nobody is solving any VB6 problems any more. Microsoft will not even take money to solve VB6 problems.
The OP's company is completely on its own. Note that Google was founded the same year VB6 was released. I suspect that VB6 knowledge has been disappearing from the web, and that each year a Google search for any programming problem the OP's company runs into will return fewer and fewer results.
This is a business viability risk.
The happy talk about MS supporting VB6 forever and ever should not be relied upon. All it takes is some SVP at Microsoft saying: "We can ship the next Windows version in time for Christmas if the teams do not have to fix these issues that affect only VB6. We will issue a service pack later." At some point this can and will happen.
A competitor can come along and introduce a competing product faster using the latest tools (because of the large pool of libraries available with the latest frameworks). The OP's company has lost the ability to be nimble, because the latest tools and libraries no longer support VB6 (a 13-year-old framework!).
This is another business viability risk.
The fact that this needs to be explained to anyone is a huge, huge warning flag to any developer with any experience who is interviewing at the OP's company.
This reduces the quality and quantity of the talent pool enormously.
Not being able to attract quality talent is another business risk.
The original OP should bail.
It's not just Microsoft and whether Windows will support the app. What about things like printers or displays? Epson is under no obligation to release printer drivers that support a VB6 application.
What happens when the print function stops working for customers on their latest cool 4G-enabled printer?
What happens when customers try to use the app on the now-standard 2000x4000 display and the fonts look all goofy?
What happens when Adobe starts having Adobe Reader advise that the PDF file version should be upgraded?
Seeing a warning dialog pop up, not being able to print, not being able to use the latest display well, etc., will result in customers quietly moving to competitors. They will not even bother to tell the OP's company that they are doing this.
The OP should move on before the layoffs hit.

What are common practices for deployment of large scale systems?

Given a large scale software project with several components written in different languages, configuration files, configuration scripts, environment settings and database migration scripts - what are the common practices for deployment to production?
What are the difficulties to consider? Can the process be simplified with tools like Ant or Maven? How can rollback and database management be handled? Is it advisable to use version control for the production environment?
As I see it, you're mostly asking about best practices and tools for release engineering AKA releng -- it's important to know the "term of art" for a subject, because it makes it much easier to search for more information.
A configuration management system (CMS -- aka revision control system or version control system) is indispensable for today's software development; if you use one or more IDEs, it's also nice to have good integration between them and the CMS, though that's more of an issue for purposes of development than for purposes of deployment / releng.
From a releng viewpoint, the key thing about a CMS is that it must have good support for "branching" (under whatever name), because releases must be made from a "release branch" where all the code under development and all of its dependencies (code and data) are in a stable "snapshot" from which the exact, identical configuration can be reproduced at will.
The need for good branching support may be more obvious if you have to maintain multiple branches (customized for different uses, platforms, whatever), but even if your releases are always, strictly in a single linear sequence, releng best practices still dictate making a release branch. "Good branching support" includes ease of merging (and "conflict resolution" when different changes are made to a file), "cherry-picking" (taking one patch or changeset from one branch, or the head/trunk, and applying it to another branch), and the like.
In practice, you start off the release process by making a release branch; then, you run exhaustive testing on that branch (typically MUCH more than what you run everyday in your continuous build -- including extensive regression testing, integration testing, load testing, performance verification, etc, and possibly even more costly quality assurance processes, depending). If and when the exhaustive testing and QA reveal defects in the release-candidate (including regressions, performance degradation, etc), they must be fixed; in a large team, development on head/trunk may be continuing while the QA is being done, whence the need for ease of cherry-picking / merging / etc (whether your practice is to perform the fix on head or on the release branch, it still needs to be merged to the other side;-).
Last but not least, you're NOT getting full releng value from your CMS unless you're somehow tracking with it "everything" that your releases depend on -- simplest would be to have copies or hard links to all the binaries for the tools you need to build your release, etc, but that may often be impractical; so at the very least track the exact release, version, bugfix &c numbers of those tools that are used (operating system, compilers, system libraries, tools that preprocess image, sound or video files into final form, etc, etc). The key is being able, at need, to exactly reproduce the environment required to rebuild the exact version that's proposed for release (otherwise you'll go mad tracking down subtle bugs that may depend on third party tools' changes as their versions change;-).
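As one small, hedged example of "tracking everything the release depends on", here is a Python sketch that records the exact versions of a few build tools into a manifest you could commit on the release branch. The tool list and the manifest file name are assumptions; substitute whatever your build actually uses.

    #!/usr/bin/env python
    """Sketch: record the exact versions of the build toolchain into a manifest
    that can be committed on the release branch. The tool list and the manifest
    file name are assumptions; substitute whatever your build actually uses."""
    import json
    import platform
    import subprocess

    TOOLS = [                      # hypothetical toolchain
        ["python", "--version"],
        ["gcc", "--version"],
        ["git", "--version"],
    ]

    def tool_version(cmd):
        """Return the first line the tool prints for its version flag, or an error marker."""
        try:
            out = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
            return out.decode("utf-8", "replace").splitlines()[0]
        except (OSError, subprocess.CalledProcessError) as exc:
            return "unavailable: %s" % exc

    if __name__ == "__main__":
        manifest = {
            "os": platform.platform(),
            "tools": dict((cmd[0], tool_version(cmd)) for cmd in TOOLS),
        }
        with open("release-environment.json", "w") as f:
            json.dump(manifest, f, indent=2, sort_keys=True)
        print(json.dumps(manifest, indent=2, sort_keys=True))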
After a CMS, the second most important tool for releng is a good issue tracking system -- ideally one that's well integrated with the CMS. That's also important for the development process (and other aspects of product management), but in terms of the release process the importance of the issue tracker is the ability to easily document exactly what bugs have been fixed, what features have been added, removed, or changed, and what modifications in performance (or other user-observable characteristics) are expected in the new forthcoming release. For the purpose, a key "best practice" in development is that every changeset that gets committed to the CMS must be connected to one (or more) issue in the issue tracking system: after all, there's gotta be some purpose for that change (fix a bug, change a feature, optimize something, or some internal refactor that's supposed to be invisible to the software's user); similarly, every tracked issue that's marked as "closed" must be connected to one (or more) changesets (unless the closing is of the "won't fix / working as intended" kind; issues related to bugs &c in third-party components, which have been fixed by the third-party supplier, are easy to treat similarly if you do manage to keep track of all third-party components in the CMS too, see above; if you don't, at least there should be text files under CMS documenting third-party components and their evolution, again see above, and they need to be changed when some tracked issue on a 3rd party component gets closed).
Automating the various releng processes (including building, automated testing, and deployment tasks) is the third top priority -- automated processes are much more productive and repeatable than asking some poor individual to manually go through a list of steps (for sufficiently complex tasks, of course, the workflow of the automation may need to "get a human being in the loop"). As you surmise, tools such as Ant (and SCons, etc, etc) can help here, but inevitably (unless you're lucky enough to get away with very simple and straightforward processes) you'll find yourself enriching them with ad-hoc scripts &c (some powerful and flexible scripting language such as perl, python, ruby, &c, will help). A "workflow engine" can also be precious when your release workflow is sufficiently complex (e.g. involving specific human beings or groups thereof "signing off" on QA compliance, legal compliance, UI guidelines compliance, and so forth).
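To illustrate the automation point, here is a minimal Python sketch of a sequential release driver that runs build, test, package and deploy steps and stops at the first failure. The commands are placeholders (Ant targets and a deploy script are assumed); a real pipeline would usually live in your CI server, but the shape is the same.

    #!/usr/bin/env python
    """Sketch: run the release steps in order and stop at the first failure.
    The commands are placeholders (Ant targets and a deploy script are assumed)."""
    import subprocess
    import sys

    STEPS = [
        ("build", ["ant", "build"]),
        ("unit tests", ["ant", "test"]),
        ("package", ["ant", "dist"]),
        ("deploy to staging", ["./deploy.sh", "staging"]),   # hypothetical script
    ]

    def run_release():
        for name, cmd in STEPS:
            print("==> %s: %s" % (name, " ".join(cmd)))
            result = subprocess.call(cmd)
            if result != 0:
                print("Step '%s' failed with exit code %d; aborting release." % (name, result))
                return result
        print("All steps completed.")
        return 0

    if __name__ == "__main__":
        sys.exit(run_release())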
Some other specific issues you're asking about vary enormously depending on the specifics of your environment. If you can afford programmed downtime, your life is relatively easy, even with a large database in play, as you can operate sequentially and deterministically: you shut the existing system down gracefully, ensure the current database is saved and backed up (easing rollback, in the hopefully VERY rare case it's needed), run the one-off scripts for schema migration or other "irreversible" environment changes, fire the system back up again in a mode that's still inaccessible to general users, run another extensive suite of automated tests -- and finally, if everything's gone smoothly (including the saving and backup of the DB in its new state, if relevant), the system's opened up to general use again.
If you need to update a "live" system, without downtime, this may range anywhere from a slight inconvenience to a systematic nightmare. In the best case, transactions are reasonably short, and synchronization between the state set by transactions may be delayed a bit without damage... and you have a reasonable abundance of resources (CPUs, storage, &c). In this case, you run two systems in parallel -- the old one and the new one -- and just make sure all new transactions target the new system, while letting old ones complete on the old system. A separate task periodically syncs "new data in the old system" to the new system, as transactions on the old system terminate. Eventually you can determine that no transactions are running on the old system and all changes that happened there are synced up to the new one -- and at that time you can finally shut down the old system. (You need to be prepared to "reverse sync" too of course, in case a rollback of the change is needed).
This is the "simple, sweet" extreme for live system updating; at the other extreme, you can find yourself in such an overconstrained situation that you can prove the task is impossible (you just cannot logically meet all the stated requirements with the given resources). Long sessions opened on the old system which just cannot be terminated -- scarce resources that make it impossible to run two systems in parallel -- core requirements for hard-real-time sync of every transaction -- etc, etc, can all make your life miserable (and as I noticed, at the extreme, can make the stated task absolutely impossible). The two best things you can do about this: (1) ensure you have abundant resources (this will also save your skin when some server unexpected goes belly-up... you'll have another one to fire up to meet the emergency!-); (2) consider this predicament from the start, when initially defining the architecture of the overall system (e.g.: prefer short-lived transactions to long-lived sessions that just can't be "snapshot, closed down, and restarted seamlessly from the snapshot", is one good arcitectural pointer;-).
disclaimer: where I work we use something I wrote that I'm going to mention
I'll tell you how I do it.
For configuration settings, and general deployment of code and content, I use a combination of NAnt, a CI server, and dashy (an automatic deployment tool). You can replace dashy with any other tool that automates the uploads to your server (perhaps Capistrano).
For DB scripts, we use RedGate SQL Compare to get the scripts, and then for most changes, I actually make them manually, where appropriate. This is because some of the changes are slightly complex, and I feel more comfortable doing it by hand. You can actually use this tool to do it for you, (or at least generate the scripts).
Depending on your language, there are some tools that can script the DB updates for you (I think someone on this forum wrote one; hopefully he'll reply), but I have no experience with those. It's something I'd like to add though.
Difficulties
I forgot to answer one of your questions.
The biggest problem in updating any significantly complicated/distributed site is DB synchronisation. You need to consider if you will have any downtime, and if you do, what will happen to the DBs. Will you shutdown everything, so that no transactions can be processed? Or will you shift everything to one server, update DB B, and then sync DB A & B, then update DB B? Or something else?
Whatever you choose, you need to decide on it and either say "Okay, for each update there will be X downtime", or whatever. Just have it documented.
The worst thing you can do is have someone's transaction fail mid-processing because you were updating that server, or to somehow leave only part of your system operational.
I don't think you really have a choice about whether to use versioning.
You cannot go without versioning (as mentioned in a comment).
I am talking from experience since I am currently working on a website where we have several items that have to work together:
The software itself, its functionalities at a given point in time
The external systems (6 of them) we are linked with (messages are versioned)
A database, which holds the configuration (and translations)
The static resources (images, CSS, JavaScript) hosted on an Apache server (several, in fact)
A Java applet, which has to be synchronized with the JavaScript and the software
This works because we use versioning, although I must admit that our database is pretty simple, and because the deployment is automated.
Versioning means that we have actually at any point in time several concurrent versions of the messages, the database, the static resources and the java applet.
It is especially important in the event of a fallback. If you discover a flaw when loading your new software and suddenly cannot afford to go live with it, you'll have a crisis if you have not versioned; if you have, you can just load the old software again.
Now, as I said, the deployment is scripted:
- the static resources are deployed first (plus the Java applet)
- the database goes next; several configurations cohabit since they are versioned
- the software is queued for loading in a 'cool' time window (when our traffic is at its lowest point, i.e. at night)
And of course we take care of backward / forward compatibility issues with the external servers, which should not be underestimated.
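As an illustration of how versioned static resources can cohabit, here is a Python sketch that deploys each release's assets under its own directory and switches a 'current' symlink to activate it, so falling back is just re-pointing the link. The document root, version labels and symlink approach are assumptions, not a description of the answerer's actual setup.

    #!/usr/bin/env python
    """Sketch: deploy each release's static assets under a versioned directory and
    activate it by re-pointing a 'current' symlink, so older versions stay available
    for fallback. DOCROOT and the version labels are assumptions."""
    import os
    import shutil

    DOCROOT = "/var/www/static"   # hypothetical Apache document root for static assets

    def deploy_static(version, source_dir, docroot=DOCROOT):
        """Copy the assets for one version next to the older ones and activate it."""
        target = os.path.join(docroot, "v%s" % version)
        shutil.copytree(source_dir, target)      # older v* directories stay in place
        switch_to(version, docroot)
        return target

    def switch_to(version, docroot=DOCROOT):
        """Point the 'current' symlink at a given version (also used for fallback)."""
        current = os.path.join(docroot, "current")
        tmp = current + ".tmp"
        if os.path.lexists(tmp):
            os.remove(tmp)
        os.symlink("v%s" % version, tmp)
        os.rename(tmp, current)                  # atomic swap on POSIX filesystems

    if __name__ == "__main__":
        deploy_static("42", "./build/static")    # deploy and activate version 42
        # switch_to("41")                        # fallback: re-point to the previous version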

Are automatic upgrades a realistic feature to expect from enterprise Web applications?

Most of the work I do is with what could be considered enterprise Web applications. These projects have large budgets, longer timelines (from 3 to 12 months), and heavy customizations. Because we as developers have been touting the idea of the Web as the next desktop OS, customers are coming to expect the software running on this "new OS" to behave the same as software on the desktop. That includes easy-to-manage automatic upgrades. In other words, "An update is available. Do you want to upgrade?" Is this even a realistic expectation? Can anyone speak from experience about trying to implement this feature?
At my company we have enterprise installations ranging into the thousands of seats. If we implemented an auto-upgrade, our customers would mutiny!
Large installations have peculiar issues that don't apply to small ones. For example, with 2000 users (not all of whom are, let us say, the most sophisticated of tool users), tool training is a big deal: training time, internal demos, internal process documents, etc. They cannot unleash a new feature or UI change without a chance to understand how it fits in their process, and therefore what their internal best practices are and how to communicate that to their users.
Also when applications fail, it's the internal IT team who are responsible. Therefore, they want time to install a new version in a test area, beat it up, and deploy on a Saturday only when they're good and ready.
I can see the value in making minor patches easier to install, particularly when the patch is just a bug fix and not anything that would require retraining, and provided the admins still get final say over when it's installed. But even then, I don't believe anyone has ever asked for this! Whether it's because they don't want it or because they are trained not to expect it, it doesn't seem worth it.
Well, it really depends on your business model, but for a lot of applications the SaaS model can end up biting you. It's great for a lot of things, but for some larger applications the users are not investing a significant amount up front and could possibly move to something else before you've made any money.
See http://news.zdnet.com/2424-9595_22-218408.html and http://www.25hoursaday.com/weblog/2008/07/21/SoftwareAsAServiceWhenYourBusinessModelBecomesAParadox.aspx for more information.
One of the primary reasons to implement an application as a web application is that you get automatic upgrades for free. Why would users be getting prompted for upgrades on a web app?
For Windows applications, the "update is available, do you want to upgrade?" functionality is provided by Microsoft using ClickOnce, which I have used in an enterprise environment successfully -- there are a few gotchas but for the most part it is a good way to manage automatic deployment and upgrade of Windows apps.
For mobile apps, you can also implement auto-upgrades, although it is a little trickier.
In any case, to answer your question in a broad sense, I don't know if it is expected that all enterprise apps should make upgrading easy, but it certainly is worth the money from an IT support standpoint to architect them to allow for easy upgrading.
If you're providing a hosted solution, I wouldn't bother. Let the upgrade happen silently (perhaps with a notice that you did it). If you're selling an application that's hosted on their servers, let the upgrade decision be made by a single owner, not every user of the app.