How do you convert your office to build automation? [closed] - version-control

The title should say it all: if I can pull this off, I can tick off two more points on the Joel Test.
I've implemented build automation using a makefile and a python script already and I understand the basics and the options.
But how can I, the new guy who reads the blogs, convince my cohort of its inherent efficacy?

Ask for forgiveness, instead of permission.
Get it working in private (which it looks like you have) and then demonstrate its advantages.
One thing that always gets people is using CruiseControl's tray utility - people love it when they can see, through their system tray, that the build succeeded. (This assumes you're in a Windows environment, that CruiseControl will work with your existing systems, etc.)
NOTE: If asking for forgiveness instead of permission will result in instant termination, you might not want to do the above. You might also want to look for work somewhere else. Your mileage may vary.

Implement build lights ... we did something similar with lava lamps and it was a huge hit. For added bonus marks, give every developer a red light over their desk and have the culprit's light come on when the build breaks.
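The software half of a build light is tiny, by the way. Here's a minimal sketch in Python, assuming your CI server exposes a CCTray-style XML status feed; the URL and the lamp hookup are placeholders to adapt to your own setup:

    import time
    import urllib.request

    STATUS_URL = "http://buildserver/cctray.xml"   # hypothetical status feed

    def build_is_green():
        with urllib.request.urlopen(STATUS_URL) as resp:
            # CCTray-style feeds mark each project with lastBuildStatus="..."
            return b'lastBuildStatus="Failure"' not in resp.read()

    while True:
        print("GREEN" if build_is_green() else "RED")  # swap for real lamp control
        time.sleep(60)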

Grab an old spare computer & put it in the corner of your office. Set it up to build your project. Write a small script that does:
Get latest version of all files.
If there was a file change, build.
Notify you if there's a failure (see the sketch below).
When you catch a broken build, compassionately get it fixed.
Consider adding a step to run unit tests, too.
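Here's a minimal sketch of such a script, assuming a git checkout, a make-based build, and a local mail relay; the paths, branch name, and addresses are placeholders:

    #!/usr/bin/env python
    # Poll the repository, rebuild on change, mail on failure.
    import smtplib
    import subprocess
    import time
    from email.message import EmailMessage

    REPO = "/srv/autobuild/checkout"      # hypothetical checkout location
    TEAM = "dev-team@example.com"

    def run(*cmd):
        return subprocess.run(cmd, cwd=REPO, capture_output=True, text=True)

    def notify(subject, body):
        msg = EmailMessage()
        msg["Subject"] = subject
        msg["From"] = TEAM
        msg["To"] = TEAM
        msg.set_content(body)
        with smtplib.SMTP("localhost") as smtp:   # assumes a local mail relay
            smtp.send_message(msg)

    last_built = None
    while True:
        run("git", "fetch")                                  # 1. get latest version
        head = run("git", "rev-parse", "origin/master").stdout.strip()
        if head != last_built:                               # 2. build only on change
            run("git", "checkout", head)
            result = run("make", "test")                     # build + unit tests
            if result.returncode != 0:                       # 3. notify on failure
                notify("Build broken at " + head[:8], result.stdout + result.stderr)
            last_built = head
        time.sleep(300)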
If you can avoid scolding people for their mistakes, pretty soon people will be impressed with how reliable the build has been since you arrived. Build from there.
The trick is to spend very little of your time to generate a lot of value for your team, without pissing anyone off.

Set up an autobuilder. Once you have it building and running the tests automatically, it won't matter whether you convince other people to save their own time :)
If you're using git for version control, here's an autobuilder that automatically finds the exact checkin that started causing the tests to fail: http://github.com/apenwarr/gitbuilder/
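Even without gitbuilder, plain git can do the culprit-finding part with `git bisect run`; here's a minimal sketch (the good-revision name and the test command are placeholders for your repo):

    # Let git binary-search for the first commit that breaks the tests.
    import subprocess

    def git(*args):
        subprocess.run(("git",) + args, check=True)

    git("bisect", "start", "HEAD", "last-known-good")  # bad revision, then good
    # `git bisect run` checks out successive commits and runs the command;
    # a non-zero exit status marks that commit as bad.
    git("bisect", "run", "make", "test")
    git("bisect", "reset")                             # return to where you were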

I would take a spare box, install a continuous integration server (Hudson or CruiseControl in the Java world) and set up a job that builds your application each time someone checks in some code.
You can either try to convince your coworkers or just wait until someone breaks the build. In the latter case, just send the following email:
to: all developers
Guys,
I've just noticed that I can't build our software using the
latest version, because of the following error:
...
If you want to be notified by our continuous
build system (attached is the mail I received when
it failed to build our application), just let me know.
Usually it doesn't take that long until everyone is on the list.

I would set up the automated build as a nightly process such that every night it grabs the most recent code revision, builds it, and generates a report. Now you will know first thing every morning whether or not the build is broken, and if it is, you can notify the team. If broken builds are much of a problem on your project, people will probably start coming to you first to find out if it is safe to sync to the latest code, since you will be the person who tends to know on any given day whether or not the build is broken (by the way, an automated suite of unit tests helps a great deal with this as well). With any luck, people will start to realize that your nightly build is a useful thing to have, and you'll be able to just set up your daily build report as an email that goes out.

James Shore has two great links:
For hardware
http://jamesshore.com/Blog/Continuous-Integration-on-a-Dollar-a-Day.html
For "Humanware"
http://jamesshore.com/Change-Diary/
(The history of how he did it. The read is long, but changing an organization is harder.)

When the build is needed by the team on a regular basis, it's pretty easy. You appoint a team member (rotated periodically) to do the build. If the build process is complicated enough, the team will on its own come up with a way of at least partially automating the build. In the worst case, you'll have to automate the build yourself, but no-one will be against the automation.

Demonstration is the best, and really the only way to change anyone's mind who is resistant to doing things differently.
Here we showed how useful automated builds are by letting QA grab a green-light build straight from the build server, install it, and test it without any direction from the developers. The developers are able to continue working, and QA knows the build at least passes its unit tests. It helped integrate testing and development, reducing the time bugs stayed in the system.

Related

SDLC: Managing changes in a 'Closed System' (M1 - ERP)

I am working with a client who has an ERP system in place, called M1, that they are looking to make custom changes to.
I have spent a little bit of time investigating the ERP system in terms of making customizations. Here is a list of what I have found with regards to custom changes:
Custom changes cannot be exported/imported. There is an option for this in the M1 Design Studio; however, it always appears to be disabled... I tried everything, and I couldn't find a mention of it in the help documentation.
You can export a customizations change log (CSV, XML, Excel, HTML) that provides type, name, location and description. In essence, it is a read-only document that provides a list of changes you made. You cannot modify the contents of this log.
Custom form changes go into effect for all data sources (Test, Stage, LIVE). In other words, there does not appear to be a way to limit the scope of a form change.
Custom field changes must be made in each data source (Test, Stage, LIVE). What's odd here is that if you add a field in Test, adjust a grid to display it, and subsequently switch to LIVE, it detects that the field doesn't exist and reverts the grid changes.
I'm unable to find documentation indicating that this application supports version control.
sigh
....
So...
How do I manage changes from an SDLC: ALM methodology and tools standpoint?
I could start by bringing in a change request system to manage pending and completed customizations. But then what? How should changes be managed and released? Put backups of the application under source control and deploy when needed?
There might not be a good answer to this question since I'm unable to take advantage of version control and create a separation of environments, but I figured I'd ask in case anybody has had similar experience or worked with M1.
I take it from the lack of answers in two months that your question is unanswerable here. SDLC is something you could write a textbook on, or read a textbook on, and still not know enough about your environment; about all I can tell is that "SDLC" was probably a bullet point in the hiring qualifications at your shop.
I have no experience with M1, but I am assuming that you're going to have to ask your peers at work for their ideas, because it sounds like you're asking a vertically closed (your shop, your tools, your practices) question that has no exact technical answer.
As for best practices: I suggest you investigate best practices outside your M1 ERP silo and apply them as they make sense to you.
The company I work for also uses M1 ERP. We have similar issues regarding version control of the customisations. From what I can tell, all customisations are stored in the M1DD database. You could back up a copy of this database before any major development work, as a basic revision control system.
I am familiar with the issue of all changes becoming immediately active in all datasets. This is particularly annoying when you are making changes to a commonly used module, as you don't know how live data will be affected during the development process. One technique I have found useful is to surround untested code with an if statement so it is only executed when I am logged in:
    If App.UserID = "MYUSERNAME" Then
        ' Runs only while the developer is logged in, so live
        ' users never execute the untested code.
        'new code here
    End If
I would be interested in hearing how you solved this problem.

Arguments against zip files as source control [closed]

What arguments can be used against using zip files of source code as a form of version control?
In general each developer is working on their own program and has a responsibility for it. But there are times of course when other developers are involved in work on that program.
Each developer has their own naming convention for zip files, ranging from appending the date, to a number after the program name, to appending _old / _oldold / _newversion, etc. When there is collaboration on development of some code, it has to be checked who has the 'latest' version of the code and where it resides; usually the correct version is identified.
There is no easy existing method to diff source trees and during development unwanted changes occasionally slip into code.
The zip files corresponding to software releases that have been released to manufacturing are archived. This at least adds some traceability.
Also, before RTM the code is peer reviewed against the previously released version, so some quality assurance does exist.
Are there any formal white papers explaining the advantages of source control, making clear that the above isn't a fully valid form of source control? The argument made here is that since the end products (manufacturing releases) are under control and are reviewed, there is no problem with the process. Developers do not have too much of a problem working with zip files in this way, but may not be aware of the advantages.
Creating and managing zip files is error-prone.
Real source control gives you tools to understand your code:
History browsing
Diffs between revisions
Annotation of source files to track the origin of a change
Real source control isn't difficult; there's lots of help out there.
The best argument is surely that using a version control system like Subversion or Mercurial is much, much easier and more secure than messing about with zip files. I doubt there has been much paper writing on the subject, as the use of zip files for this purpose is fairly obviously wrong.
There are a number of SO questions on the general advantages of version control. For example, "How can I convince my department to implement a version control system?" and https://stackoverflow.com/questions/250984/do-i-really-need-version-control
I assume you currently work at a company that practices this method of zip control, and you're looking for ammunition to help you change this practice. There are a lot of questions on StackOverflow about source control, and the community here are in near-total consensus on the benefits of proper source control and the horrors of working without it (for very good reason).
I'll add something here to benefit your battle: YOUR COMPANY IS #$#%&$## CRAZY!!! ZIP FILES??? ARE YOU ##$##% KIDDING ME???
I am assuming that this question was asked because the original poster is working in an office where the standard practice is to share zip files.
Zip files are obviously bad, for the reasons given by Ned Batchelder. The biggest reason I would suggest is that it's clunky, and difficult to merge changes, or get diffs between past revisions easily.
I would recommend you read A Visual Guide to Version Control for some good arguments about why version control systems are very useful, and a superior way of managing code.
I suspect there'll be as many white papers comparing zip files to proper source control as there'll be white papers comparing cutting one's genitals off with a rusty butter knife with buying a puppy.
Zip files work as a very basic form of version control. It's a way to separate "states" of the source. However, it's not a good form of version control because you have to do a lot of work to perform basic source control management tasks. For example:
Bob's team is working on a major feature that requires changing dozens of files. He works in his own private zip-controlled area for a while. He's created 30 new files, added features to 12 existing files, and made changes to existing behavior in 3 existing files over 4 months. How do you merge Bob's work with the main trunk that has also evolved over the last 4 months? Do you hand-diff thousands of lines of code and decide how to merge them? How do you ensure that anything that uses the 15 existing files isn't broken? How do you ensure that Bob's features or main trunk features aren't accidentally omitted?
Alice is investigating a bug in her code and realizes that one of Sam's classes has changed its behavior. Sam says he didn't make the change. How does Alice find when and why the change was made? How does Alice know who depends on the change?
A major customer has reported a bug in an older version of the program. This customer needs a fix and is important enough to warrant a patch. How do you add the code to the old zip file in a way that it also exists in the new files? Also, how do you record that there is a relationship between the two changes?
These are just three scenarios that a version control system handles well. Situation 1 is handled by development branches. Almost every version control system has a notion of branches that can be developed in parallel and merged as needed. Situation 2 is easily addressed by any source control system with a "blame" feature and less easily addressed by just searching commit logs. Situation 3 is a variant of situation 1, but when you merge branches most version control systems make a note. For example, you'd make a branch off of the old version, fix the bug, then merge that branch into the new code. Now when someone asks "Where did this change come from?" they see it was merged from the patch branch and the change was made to fix a bug.
By the way, I've been in each of these 3 situations and used both SVN and Perforce; both made finding a solution very easy.
These people already know all the arguments for SCM; there is nothing anyone can say to them that will sell them on it. These things must happen:
You install SCM on your local machine and use it. If you must, have it autogenerate these .zip files at every build, so no one outside your cube knows the difference.
Some kind of disaster occurs, like loss of work, show-stopper bug is re-introduced or some other worst-case scenario that is the real reason we all use SCM (the other features we learn to appreciate later).
You are unaffected by the disaster, and/or use your personal copy of the code in SCM to fix the problem/recover the lost work/whatever.
You are a hero and everyone wants to know how you did it.
Only by experiencing firsthand the pain of loss caused by poor SCM practices will your organization realize the benefits of SCM. You're smart enough to learn from the mistakes of others, but not everyone is. The rest of the time, you'll just be 2-3x more productive than the rest of the team and maybe, just maybe, they'll wonder how.
By the way, this is how you get agile, continuous integration, unit testing, etc into the organization: lead by example.
The ZIP solution requires a proactive step at the end of the development cycle, when things tend to get dropped because no one outside the dev group notices when it doesn't happen. Sort of like that final code cleanup you always plan on doing when things slow down.
An SCM integrated into the dev environment pretty much enforces/encourages keeping a version history with a small amount of effort all the way through the process. This makes it more likely that a version history will actually be created.
On Using ZIP as an SCM
I'm not going to take as hard of a line as some of the others on the ZIP file solution. It is at least better than nothing. It is a perfectly valid way of keeping version histories; it is just a lot more labor-intensive, error-prone, and lacking in useful features.
Know who you are selling to
Someone in the Dev Group: Focus your arguments on features like ease of troubleshooting by using change histories, safety to experiment with big code changes (because of rollback), and avoiding accidents where work is overwritten by other developers.
Non-Tech Managers/Bean-counters: There are free/low-cost tools that will reduce the labor cost of version control and give greater accountability/transparency into what each developer is doing and the source of coding mistakes.
I wrote a Version Control tool long ago for a company who did the authoring for DVD titles. Before that they had nothing, just a directory full of clips, icons, scripts etc. which anyone could hack away at, and no way to backtrack if it went wrong etc. HOWEVER these people were 'artists', not programmers, so they could not (would not???!) be trained to use a decent Version Control system. So as a bare-minimum, get-out-of-the-mud level tool I wrote a utility which zipped up the current state of the directory, gave the Zip a meaningful name (date + comment supplied by user) and stuck it in a Backups directory, and also allowed you to restore one of these backups.
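For the curious, the whole of such a tool fits on a page. A minimal sketch in Python (the directory names and command interface here are made up, not the original tool):

    # Snapshot a working directory into a dated, commented zip, or restore one.
    # Usage:  backup.py snap <comment words>   |   backup.py restore <zipfile>
    import os
    import sys
    import time
    import zipfile

    WORK_DIR = "project"        # directory being "versioned" (illustrative)
    BACKUP_DIR = "Backups"

    def snapshot(comment):
        os.makedirs(BACKUP_DIR, exist_ok=True)
        name = time.strftime("%Y-%m-%d_%H%M_") + comment.replace(" ", "-") + ".zip"
        path = os.path.join(BACKUP_DIR, name)
        with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as zf:
            for root, _dirs, files in os.walk(WORK_DIR):
                for f in files:
                    full = os.path.join(root, f)
                    zf.write(full, os.path.relpath(full, WORK_DIR))
        return path

    def restore(path):
        with zipfile.ZipFile(path) as zf:
            zf.extractall(WORK_DIR)

    if __name__ == "__main__":
        if sys.argv[1] == "snap":
            print("wrote", snapshot(" ".join(sys.argv[2:]) or "checkpoint"))
        else:
            restore(sys.argv[2])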
So zips CAN provide minimum-level version control, and I speak as someone who endorsed that approach when it was right for the skill-level (in terms of programming, I don't want to imply that they couldn't manipulate pixels!) of the people using it.
But as a programmer, you should be thinking about using a tool which really helps you. As such you want to be able to compare differences for individual files, compare differences between complete milestone sets, and (if you are working on anything other than trivial programmes) handle branching and merging. If you want these features you need something BETTER than zip files.
I used to use ComponentSoftware RCS, and if it wasn't for its poor performance over a WAN we might still be using it: it is cheap (even free for single-developer use, in which form I used to use it at home) and simple to use. However nowadays I would suggest looking at SubVersion. It is very flexible, reasonably simple to understand, has a good set of Windows tools to make it even easier (e.g. Tortoise, Ankh), and ... best of all ... you can get it running for free.
It's not good, as only creating a zip before a release means losing a lot of the power you get with version control.
Usually you should check in to the repository after you have added/removed/changed a functional aspect, so that you can go back later when an error occurs that you think might be caused by this change, or when you say "damn, this worked before the file format changed sometime in March". Naming revisions after changes also makes them easier to remember, because you won't recall what was done on 27 March 2009.
"In general each developer is working on their own program and has a responsibility for it. But there are times of course when other developers are involved in work on that program."
In a normal development shop, this is not at all true. Different people work on the same source code all the time. XP makes it almost mandatory. Even if you separate the code into modules, there will still be interaction points with code that concerns at least two programmers.
Of course, it's almost impossible to collaborate without major problems if you don't use source control. But the scenario you describe is much more a way to adjust to this limitation than a sane project structure.
Having only a single person working on a module means that nothing will happen when that person is on vacation and you have a major problem when he leaves the company, gets sick for a long time, or dies.
How do you do a merge? How do you do an annotate? How do you bisect? Where are changelogs stored? Just go to wikipedia and look up "Version control" and go down the list: zip files can kind of sort of do about 2 things out of the whole page.
This is like asking "What arguments can be used against shorthand as a form of double-entry bookkeeping?". It's a completely different thing.
For arguments, there's Walter Tichy's original paper on RCS.
For missing features, among many others there's the ability to merge changes from different versions. This is especially well supported by tools like git and darcs, and to a lesser extent mercurial.
P.S. To Mercurial fans: the problem is that Mercurial delegates the merge process to external tools, and it's very difficult for the mercurial novice to know which tool to use, or to understand how they work—the mercurial model of merging seems far more powerful than others but correspondingly difficult to get a grip on.
I haven't seen an answer include Eric Sink's Source Control HOWTO, but it's a valuable reference. I haven't seen any formal white papers on version control, but I'm not sure the argument about "validity" is your strongest one. The problems you describe in your question indicate some pretty serious drawbacks with the current approach. If "the powers that be" in your environment aren't convinced by that, change the argument entirely.
If you make it a question of quality control, and point to continuous integration as a practice that encourages it, then the zip file approach to version control isn't a "not fully valid form of version control", but an obstacle to implementing continuous integration as a practice.
Your question doesn't indicate whether or not the end product "under control" is tested in any automated fashion (in addition to being reviewed). If the process you describe would prevent that from taking place as well, certainly add that to your argument too.
I think your best argument is showing a GOOD form of source control and showing how powerful it is. Don't trash what is currently being done (as someone is surely emotionally attached to that). You don't want to trash the "ZIP Source Control Method." Show the power of something like SVN. Make it very easy to explain. Show common use cases. (A solid demo would help.)
Let the source control system sell itself.

What is continuous integration? [closed]

What is continuous integration and what are its benefits?
This is by far the best explanation I have read so far: Martin Fowler's article on Continuous Integration (https://martinfowler.com/articles/continuousIntegration.html).
At its simplest, it is a mechanism that rebuilds your project whenever a check-in is made into some revision control system (CVS etc.). This can be extended, though, to include running tests, all the way through to generating a CD image, mounting it within VMs, installing the product and running full tests on it.
It has the simple advantage of highlighting when code changes break the system as early as possible. Not only does it detect breaks in the code, it highlights who caused the break. This psychological effect is very effective in encouraging good testing prior to check-in!
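The simplest possible version of that mechanism is a hook that kicks off a build on every check-in. A sketch, assuming git and a make-based build (real CI servers poll the repository or use server-side hooks instead; "make test" is a placeholder):

    #!/usr/bin/env python
    # .git/hooks/post-commit -- rebuild and run tests after every local check-in.
    # A toy version of the CI core loop.
    import subprocess
    import sys

    result = subprocess.run(["make", "test"], capture_output=True, text=True)
    if result.returncode != 0:
        sys.stderr.write("BUILD/TEST FAILURE:\n" + result.stdout + result.stderr)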
It is the practice of ensuring that all aspects of your software development process are lined up to permit the daily creation of a working version of your product. It is best known as part of Extreme Programming.
This involves things as far afield as build automation, automated testing, daily check-ins, using a source code repository, etc. But the ultimate goal is to help the entire project run according to core Agile Principles so that you deliver early and often. This, in turn, helps you leverage feedback from your users, etc.
+1 for the link to Fowler's page.
Personally, I just found it "nice" to know whenever something didn't compile, because we had the poor practice of having a single build (yes, we developed on the production build; we were awesome). We hadn't gotten to the integrated-testing phase before I left.
After a while, it did, however, lessen the amount of massive coding changes (compared to the "check in and pray my changes don't conflict" that was rampant). Eventually, most developers started making small changes frequently just to get confirmation from the CC.Net tray icon.
Overall, I found it very comforting to know that we could send out a build immediately if we had to. Had we had just a few smoke tests integrated, I think the stress-level would have been substantially lower.
Just to refresh: at this point there is a huge difference between Continuous Integration (CI) and Continuous Delivery (CD). While most of the posts above described CD, I'll try to show how CI extends the CD definition. Having all the tools needed to build a package and deploy a new version of the app automatically is a crucial part of CD. Add to that test automation (based on three levels of verification: general health checks, detailed statistics, and historical entries) and proper governance, and you're creating a really good piece of CI. Only with such an extended definition is building extraordinary cloud tools possible. Think about Mule ESB or esbeetle.com: for both of them CI is something natural, although only the second supports both ESB and ETL components.
I hope that it was helpful.

Please confirm: Is Windows Workflow Foundation a good horse to be backing right now?

We are in the process of selecting a workflow solution for a company that uses Microsoft products end to end. Given the news on WF4, in that it seems to be essentially a rewrite of previous versions, is it a wise move to back the current version or should we be looking elsewhere?
I.e., is the current version so bad that we would not be wise to try to use it?
Having just launched a project using .NET 3.5 and Workflow, I'd say that the current release of WF is good enough to use and run with. It has helped us get a product out quickly (we have the usual feature creep and requirements changing weekly). However, I have a list of complaints with it:
The workflow designer will drive you insane because it is so slow (in certain circumstances) and re-arranges your state machines as it sees fit.
There is no built-in upgrade strategy for keeping your old workflows running once you do a bug-fix release. If you are going to use WF, think carefully about how to do upgrades early.
Integrating with WCF (the Send and Receive activities) hides the WorkflowRuntime from you; this makes it very difficult to understand what is going on under the hood.
It's not easy to unit test them. There are ideas out there, but none seemed particularly easy when we started this: WorkFlow Unit Testing
I like the ideas and potential of workflow-based development; however, I am not in a hurry to repeat this experience and would probably do without it for long-running processes. One place I would use it again would be in a short, complicated process (like a rules engine for working out prices).
Maybe it is a little late for you, but now that WF 4.0 is released in beta, other people pondering the same question can consider backing the 4.0 horse instead of the 3.5 horse.
This goes some way to fixing the following problems:
•The workflow designer will drive you insane because it is so slow (in certain circumstances) and re-arranges your state machines as it sees fit.
[Designer Perf Improved]
•It's not easy to unit test them. There are ideas out there, but none seemed particularly easy when we started this: WorkFlow Unit Testing
[I think it's a little easier now, some of the introduction to workflow samples include plenty of unit testing]
My understanding is that Microsoft will provide backwards compatibility and/or a migration strategy to the new WF, so I would guess that you are safe to use it. However, I have heard from other developers in my organization that the current version of WF is extremely painful to use. If you have the budget (and depending on the complexity of your workflows), you may want to consider K2: http://www.k2.com/en/index.aspx
I, as a workflow developer, think that the current version is painful to use. This is not surprising, as this is v1.0 software from Microsoft :)
I think you should first consider your expectations of workflow software. Do you have a well-defined list of expectations from WF? Actually, I am wondering about the content of such a list. Maybe we can help in more detail on each topic.
I don't know why people have such negative impressions about WF. Sure, it has its drawbacks, but I thought it was pretty useful. The one major issue I have with it is the lack of support for upgrading existing workflows (bullet #2 in gbanfill's list).
Another point in favor of using the current version is that "Dublin" (Microsoft's new app server) will be built on WCF & WF .NET 4.0 but will gladly host 3.5 WFs. So you will be able to migrate to that without a rewrite.
Just a quick note to mention that Visual Studio 2010 CTP contains a new updated WF designer as part of the Oslo objective.

Why is Harvest being purchased at all? [closed]

Does your work environment use Harvest SCM? I've used this now at two different locations and find it appalling. In one situation I wrote a conversion script so I could use CVS locally and then daily import changes to the Harvest system while I was sleeping. The corp was fanatic about using Harvest, despite 80% of the programmers crying for something different. It was needlessly complicated, slow and heavy. It is now a job requirement for me that Harvest is not in use where I work.
Has anyone else used Harvest before? What's your experience? As bad as mine? Did you employ other, different workarounds? Why is this product still purchased today?
I had the benefit of using Harvest at a bank, and you'll never find a more wretched hive of scum and villainy: backwards, triple-forking, undocumented check-in gauntlets that require 15 steps to make one simple change. Never mind that they weren't even using branching. This is an evil tool; don't let it get you in its clutches.
Chances are, your company has some sort of contract with CA - are you using a lot of other CA software in-house?
Edit: Guess so!
OK, I'm going to answer this in a couple of episodes, because it's late here and Harvest is a big topic.
Firstly, CA Harvest (which is what version 7 of the product is called; version 5 is CCC, whose expansion I can't recall, and version 12 is called CA SCM) is a lot more than just an SCM tool - in the same way ClearCase is a lot more than an SCM tool. SVN, CVS, git, hg are all base-standard SCM and little more.
What you get with Harvest is SCM + policy. It gives you a place to store and version your code and wrap it all in a policy of how that code matures through your organization from dev to prod. Do you have a policy in your organization that a Lead Developer needs to sign off on the code before it's released to QA? Harvest allows you to define the signoff as a policy, and enforces it - you can't migrate the code from the "Dev" state to the "QA" state until one of the people in the project designated as a Lead Dev does exactly that. Do you have a policy that any SQL code needs signoff by a DBA before it progresses? Harvest allows you to define that policy, and enforces it - so you might need both Lead Dev and DBA signoff before code migrates.
Harvest is by no means a tool for most software organizations - it is typically used in the finance industry, or in businesses where a very strong regulatory framework governs what they can do. Banks need to comply with Sarbanes-Oxley, which has very strong auditing requirements. Harvest provides the ability to define all kinds of controls and processes around how changes to the bank's assets move through their lifecycle. I know large public transport organizations, responsible for the safety and punctuality of millions of people every day, that need the tightly defined control mechanisms a tool like Harvest provides. I have also seen Harvest used in environments where 1000's of developers use it every day - yes, I'm not exaggerating, literally 1000's of devs in one organization, writing code for a worldwide retailer, pushing IT solutions out the door every day to stores around the world.
Harvest is not perfect, though version 12 is much better. It has too many "that's just stupid" moments; it does per-file versioning a la CVS, and CVS-like branching and directory versioning (or lack thereof), with all the fun we've come to know and fear. Once you know it and accept it, though, it isn't inherently slower than any other SCM I've used. It just has a bigger job to do than just versioning your code.
Another big win, and it's even bigger with version 12, is its integration with other CA tools (and its ability to integrate with non-CA tools, though not many at the moment) - defect tracking with Quality Center, trouble ticketing with Unicenter Service Desk, software deployment to the desktop with SDM. You can define bridges between these apps that result in much tighter integration of these concerns, with the usual positive effects on accuracy and timeliness.
If you're dealing with getting software out to a worldwide enterprise, with thousands of desktops and servers, mainframe/midrange/middleware systems, iron-clad change control processes, complexity, regulations, contracts, auditors, just a whole bunch of complexity, Harvest is just one tool in a whole suite of tools you're going to need. If you just want a simple SCM for a team of 10 devs supporting a few hundred customers, it's not a great way to go.
I'll try to add something about how Harvest actually works next time - repositories, projects, views, packages, forms, processes, etc. That might help explain why some organizations use it, and why it's not for everyone.
I used Harvest during a short gig in the banking industry a few years ago. I agree that it was practically unusable, but the people in charge of QA seemed to love it.
I worked for a company that had two choices; ClearCase or Harvest. Subversion hadn't ever been considered, and the reason was that ClearCase (IBM) and Harvest (CA) both had longstanding mainframe contracts already.
We've used Harvest for about ten years (2000-2010) and even though we are now looking at replacing it I believe it has served us very well.
Harvest (let's stick with that name even though it's no longer its official name) was the first major tool we implemented to support us in R&D, and at the time none of us knew much about the many aspects of the application lifecycle (versioning of code, branching, automated testing, regression testing, quality assurance, deployment to numerous runtime environments and production, rollback, emergency fixes, maintenance updates, etc.); today we know a lot more, and our development processes serve us very well (not that there is not room for many improvements).
We do not have a very hierarchical organisation (we don't have a lot of inspectors who need to approve changes), but it's very helpful to have support for "checkpoints" - points in the development process where something needs to happen (e.g. functional testing or integration testing).
The drawback (for us) with Harvest in regards to usability has been "what a programmer needs to do to change x lines of code". Today there are a lot of easier and more efficient ways than Harvest to get write access to source code files, make your updates, and then return the files / move them to another phase of the development process (testing, deployment, etc.). Another drawback is the price tag; it's expensive.
Gains we've had with Harvest:
It supports workflow, and therefore we've been able to have a single system to manage code versioning, workflow, and process automation. Where possible, it's easier to maintain and improve a single system than many.
In addition to providing command-line access to internal processes (making it possible to script special solutions when required by your processes), Harvest is also easily configured through a graphical interface.
It has the concept of a "Package", which makes it easy to attach plenty of metadata to code changes and to handle changes independently of other changes (versioning on the file level rather than change sets containing the complete code mass). This is helpful for handling independent emergency and maintenance changes.
If a developer is only a programmer and only thinks about the coding aspect of software development, then I imagine he/she might get very frustrated with Harvest.
If a developer is a developer and understands that software development is a lot more than coding, and that coding is only the very beginning of the lifecycle of software, then I believe he would see a lot of benefits with Harvest.
I have been using HARVEST for the last 4 years and I love it. The kind of support it gives you to control code movement is really fantastic. We use HARVEST to deploy applications onto WebSphere. It also does an amazing job of deploying the plugins into the web server along with the application. When you want to have a process in place for moving code in a big enterprise environment, I don't think any other tool can even come close to HARVEST.