As an alternative to this question, what is the best way to manage custom versions of software for a particular client?
Most of the differences between client releases are changes to the user interface that customize the software to look like it is owned by the client. More often than not, this is a simple logo change. Occasionally the color scheme will change as well. But there are occasions where features will be enabled or disabled based on the client. What is the best way to keep all of these releases up to date and make them easily available to the users of a particular client?
At this point we have five different clients and each has their own build of the software and their own installer (complete with their logo in the installer). This is becoming a pain to manage, and it will only get worse as more and more clients start using our software.
So assuming the linked question isn't the way to go, what is the best way to manage these releases?
"This is becoming a pain to manage,
and it will only get worse as more and
more clients start using our
software."
The only way to really solve this part is to make sure your software has an architecture that supports the customizations as a layer on top of your core product. Ideally, they would simply be runtime configuration options (colours, logos and enable/disable are perfect for this). If you get (or want) a very large number of clients, push the configurability right into the core product and enable the clients to do the customizing themselves.
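To make that concrete, here is a minimal sketch of such a runtime customization layer, written in Java purely for illustration (the question is platform-agnostic); the properties file name, keys, and defaults are assumptions, not part of the original setup.

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Minimal sketch: per-client branding and feature flags loaded at runtime.
// The /branding/client-<id>.properties resource and the key names are hypothetical.
public class ClientBranding {
    private final Properties props = new Properties();

    public ClientBranding(String clientId) throws IOException {
        String resource = "/branding/client-" + clientId + ".properties";
        try (InputStream in = ClientBranding.class.getResourceAsStream(resource)) {
            if (in == null) {
                throw new IOException("No branding file for client: " + clientId);
            }
            props.load(in);
        }
    }

    public String companyName() { return props.getProperty("company.name", "Default Co"); }

    public String logoPath() { return props.getProperty("logo.path", "res/logo-default.png"); }

    public boolean isEnabled(String feature) {
        return Boolean.parseBoolean(props.getProperty("feature." + feature, "false"));
    }
}

With something like this, onboarding a new client is mostly a matter of adding a properties file and a logo resource rather than maintaining a separate build.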
When updating, you build (and test) the core product once, and build the custom versions by simply linking to (referencing) the already-built core product as a library. You could have each client in a separate build, or you could have a single build process that generates updates for all the currently-maintained client builds (the latter is probably better for a larger number of clients). If you have a "vanilla" version, you could build that as part of the core or along with the client versions, it would depend on your particular case.
Depending on your technology, it might be possible to have the customization layer be built independently from the core product. In this case, client rebuilds will rarely be necessary (perhaps only for certain significant changes) -- you can simply link to the updated core product at runtime. For deployment to the client, you would only need to deploy the updated core, if you have a deployment method that supports this.
It's hard to go into more detail without knowing your platform, but you've graciously kept this question agnostic.
Separate the common and the custom parts in your source tree. This can eliminate the great majority of merges, depending on your testing and release policies. There is always a way to abstract out and customize a build resource, even if your build process has to invoke a script to rewrite some file (a sketch of that idea follows).
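For example, a hedged sketch of such a rewrite step in Java (the template path, output path, and @TOKEN@ placeholders are invented for the example):

import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of a build-time step: stamp per-client values into a template that
// lives in the common tree. Paths and @TOKENS@ are hypothetical.
public class StampBranding {
    public static void main(String[] args) throws Exception {
        Path template = Path.of("common/branding.template.xml");
        Path output = Path.of("build/branding.xml");
        String text = Files.readString(template)
                .replace("@COMPANY_NAME@", args[0])
                .replace("@LOGO_FILE@", args[1]);
        Files.createDirectories(output.getParent());
        Files.writeString(output, text);
    }
}

The build for each client would invoke this with that client's name and logo path, keeping the customization out of the shared sources.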
Judicious use of branching in a good Source Code Management system. These aren't called "Configuration Management" systems for nothing. They are the tools optimized for precisely this task, so you're probably not going to get anything better without building on top of one.
Subversion is good at setting up multiple branches, but last I checked it could only do the 3-way merge on a single file at a time. Perforce is awesome in this department because it tracks your branching history on a file-by-file basis and uses this to automate merging of whole sets of changes between whole branches. It really is the cat's pajamas here.
Git and darcs may have similar power, but I haven't used them. They seem to be based on the idea that each working checkout will have a copy of all changes since the beginning of time. That sounds impractical to me, since I need to keep some large and changing SDKs under SCM control.
Arguably, the only difference needs to be a branch of whatever release you have, with the customer-unique changes in it. So for example:
/project
    /trunk
    /branches
        /release-v1
    /customer-branches
        /release-v1-google
        /release-v1-microsoft
        ...
Where your customer releases are branches of your regular release branches. Since these branches won't ever get rolled back up into trunk or another release branch, keeping them under /customer-branches avoids polluting the regular development /branches tree.
I guess it depends on the level of customization you need for each client, but could you make your base app "customize" itself based on a config file? That way the trunk would be the full app, and you would have special config files for each customer that control the look and feel, as well as the features that are enabled; your release process would then pick the correct config files to deploy in your installers.
Seems that if you are always delivering custom(ish) versions of your app for each client, it would be worth the time to extend the app in this way.
I set up a header file called branding.h which contains a bunch of #ifdefs like the following example to change whatever bits need changing. In Visual Studio, it is easy to set up multiple builds with the appropriate client symbol defined.
#if defined BRAND_CLIENT1
# define COMPANY_NAME "Client 1"
# define PRODUCT_NAME "The Client 1 Widget App"
# define LOGO_FILE "res/logoClient1.ico"
#elif defined BRAND_CLIENT2
# define COMPANY_NAME "Client 2"
# define PRODUCT_NAME "The Client 2 Super Widget App"
# define ENABLE_EXTRA_MENU
# define LOGO_FILE "res/logoClient2.ico"
#endif
This is all assuming C++ of course.
I've been thinking about semantic versioning recently and thought of a hypothetical situation that I can't quite seem to reconcile.
My understanding of the Semantic Versioning Scheme is as follows:
A particular unique state of computer software can be identified with a 3-part version number of the form:
M.m.p
Where
M = Major #
m = Minor #
p = patch #
A major change is any change to the software such that the new version is no longer compatible with the previous version. A minor change is any feature added or removed from the software that does not compromise compatibility with the previous version. Any change that fixes a bug or error is just a patch change. Whenever a major change occurs, M is incremented and m and p are reset to 0. Similarly, when a minor change occurs, m is incremented and p is reset to 0. Patch changes merely increment p.
(I know there are additional components of semantic versioning like -alpha, -beta and so on but for now I am keeping it simple and constrained to the above)
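To make the bump rules concrete, here is a tiny illustrative sketch (in Java; not part of any SemVer tooling, just the rules above written as code):

// Illustrative only: the M.m.p bump rules described above.
record Version(int major, int minor, int patch) {
    Version bumpMajor() { return new Version(major + 1, 0, 0); } // incompatible change
    Version bumpMinor() { return new Version(major, minor + 1, 0); } // compatible feature change
    Version bumpPatch() { return new Version(major, minor, patch + 1); } // bug fix
    @Override public String toString() { return major + "." + minor + "." + patch; }
}
// e.g. new Version(1, 4, 2).bumpMinor() gives 1.5.0, and bumpMajor() gives 2.0.0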
The hypothetical situation that is confusing me is this:
Suppose a software project exists in a certain state X.Y.Z in a repository of a version control system (for the sake of the hypothetical, let's assume this is Git/GitHub). Suppose there are two developers, A and B, working on the software project. Each clones a copy of the project in state X.Y.Z in order to work on some change; both are working on a different change (the changes may be of the same type -- M/m/p -- but the content is different).
Now suppose that developer A finishes their change (whether it be major, minor or patch) and pushes it to the repository, thereby changing the state of the software project to (X.Y.Z)+1 (again, we don't know the nature of the change here). And suppose they do this before developer B is finished with their change.
How should this influence the workflow of developer B?
If A's change was just a patch, should developer B simply ignore the update and continue working on their change as if nothing happened (especially if their change was major)?
If A's change was minor or major, should developer B scrap whatever progress they've made on their change and start over, on the assumption that A's change will have an influence on theirs (B's)?
With the nature of the problem outlined above, how should updating semantic versions be handled between multiple developers?
Usually, this is handled by communication. Major versions are rare, so they are overwhelmingly planned and communicated ahead of time.
For the difference between minor and patch versions, there's often some sort of branching scheme. For example, Git LFS uses a branch like release-3.1 for the 3.1.x branch and main for minor versions. All code goes into main, and then if a patch release needs to be done, the changes are cherry-picked into the release-* branches. The core team discusses the plans for the next release and whether they'd like to do a periodic minor release next or whether they think a patch release is necessary.
Of course, there are many other ways to handle this, and this is just one. But communication among developers about expectations and plans for the project is going to be important anyway, so agreeing up front on how the project wants to deal with this kind of issue is the ideal approach.
This conundrum only arises as a product of a common misunderstanding about how versioning actually works. Ask yourself "what am I versioning here?".
It was a long-standing practice to maintain the version number in a revision-controlled file, where it could be modified by developers. In hindsight, it's a terrible practice, but it actually made sense to us at the time. But let's stop and think about what it is we are versioning, and look at how all the various artifacts are tracked in modern development systems.
One best practice today, which wasn't very common a couple of decades ago, is to use a revision control system to automagically track changes to files. Developers no longer have to remember to bump a revision number at the top of each source file when they make changes. Instead, the RCS tracks file versions when they are committed (not on every change!). Most of the various RCSs still offer a means to automatically update the source file header, but that practice is rarely used today because it's redundant/wasteful and does not scale well.
Another best practice today is to use some form of semantic versioning on publication artifacts, and there's a lot of variation in the tooling that supports that, but most of it is embedded in a handful of packaging tools and package archive services/feeds. Note that not every change in the source code requires a version bump here. The RCS tracks individual changes committed by developers and the release/publication management system determines the appropriate version to apply to the packaging artifacts.
The primary best practice today is the use of build pipelines. Modern DevOps best practice is to build the product on managed servers for all forms of publication. You typically have separate internal and external publication requirements, but they are all built in controlled, non-developer environments.
Add something like semantic commits (Brk, NB, NIB, and many other notations) to commit messages, combine that with automated test results and publication/release policies, and the appropriate version number to apply to any artifact can be derived automatically.
Suggested policies are:
No single developer can publish an artifact to a public facing feed.
Developer builds are versioned based on highest release version available for the repo hash they forked from, plus a -a.Dev. prerelease tag.
No -a.Dev. prerelease artifact shall be shared beyond the developer's own workstation.
Continuous integration builds shall be published internally as YYYY.MMDD.Increment-a.Integration (see the sketch below)
Etc.
Note that the possible variations are endless and determined by your organization's internal needs and commitments to customers.
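As a rough sketch of how a build server might derive the internal CI version mentioned above (everything except the YYYY.MMDD.Increment-a.Integration format itself is an assumption):

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Sketch: derive the internal CI version string YYYY.MMDD.Increment-a.Integration.
// The increment would normally come from the build server, not from code.
public class CiVersion {
    public static String integrationVersion(LocalDate buildDate, int increment) {
        String yyyy = buildDate.format(DateTimeFormatter.ofPattern("yyyy"));
        String mmdd = buildDate.format(DateTimeFormatter.ofPattern("MMdd"));
        return yyyy + "." + mmdd + "." + increment + "-a.Integration";
    }

    public static void main(String[] args) {
        // prints "2024.0315.42-a.Integration" for the 42nd CI build on 15 March 2024
        System.out.println(integrationVersion(LocalDate.of(2024, 3, 15), 42));
    }
}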
Circling back on your conundrum, it should be obvious by now that it is a fallacy that the version number should be changed for every change in the source code. Modern tooling and best practices negate the need for it, and it is no longer desired.
If you look through my history of answers here, you should find several rants on why the RCS is not the correct place to track version numbers, but the primary reason is that we just don't use the RCS to store the artifacts we publish. It's okay to add publication tags to specific RCS states for developer convenience, but for CI/CD environments, the best place to keep the product version number is in your publication feed system or an independent database.
Background:
3-5 programmers working with TFS. We support legacy apps as well as build new apps. I am implementing elements of continuous delivery and I want to have a good structure to start with. At present there is very little interdependency of apps but in the future there will be some shared components.
I plan to implement a "single trunk" branching strategy (in other words, NO BRANCHING) except for the very rare case where a long feature branch is required -- which I will work to ensure never happens.
Question:
Given this, which source code structure is better and why? Are there any material impacts to choosing one over the other (workspaces, etc)?
SINGLE MAIN BRANCH
$/MAIN/src/ApplicationA/ApplicationA.sln
$/MAIN/src/ApplicationA/Project1.csproj
$/MAIN/src/ApplicationA/Project2.csproj
$/MAIN/src/ApplicationB/...
$/MAIN/src/SharedModule/...
vs. MAIN BRANCH PER APPLICATION
$/ApplicationA/MAIN/src/ApplicationA.sln
$/ApplicationA/MAIN/src/Project1.csproj
$/ApplicationA/MAIN/src/Project2.csproj
$/ApplicationB/MAIN/src/...
$/SharedModule/MAIN/src/...
In my experience, I've found that it's better to use separate branches for things which have separate development cycles.
If you are always going to release them together as a single package and they are developed as a single product, a single main branch is more appropriate.
If each application may be released at different times, a main branch per application seems more appropriate.
I like developing all the code in a single main branch, then using a configuration setting to essentially disable the module in production and enable it for testing until it is ready for release. The added benefit is that the modules (DLLs) are already packaged for the release, since you release early and often.
We are creating hospital information system software. The project will differ from hospital to hospital and contain different use cases, but lots of parts will be the same, so we will use the branching mechanism of the source control system. If we find a bug in one hospital's version, how can we know whether the other branches have the same bug or not?
IMAGE http://img85.imageshack.us/img85/5074/version.png
The numbers in the attached picture show each hospital's software.
Do you have a solution to this problem?
Which source control system (SVN, Git, Hg) would be suitable for this problem?
Thank you!
OK, this is not really a VCS question; it is first and foremost an architectural problem - how do you structure and build your application(s) in such a way that you can deliver the specific use cases to each hospital as required, whilst being able, as you suggest, to fix bugs in common code?
I think one can state with some certainty that if you follow the model suggested in your image, you aren't going to be able to do so consistently and effectively. What you will end up with is a number of different, discrete applications that have to be maintained separately, even though they at some point came from a common set of code.
It's hard to offer more than generalisations, but I would think about this along the following lines:
Firstly, you need a core application (or set of applications, or set of application libraries); these will form the basis of any delivered system and are therefore a single maintained set of code (although this core may itself include external libraries).
You then have a number of options for your customised applications (the per-hospital instances); you can define the available functionality by a number of means:
At one extreme, by configuration - having one application containing all the code and effectively switching things on and off on a per instance basis.
At the other extreme by having an application per hospital that substantially comprises the core code with customisation.
However, the likelihood is that whilst the sum of the use cases for each hospital is different, individual use cases will be common across a number of instances, so you need to aim for a modular system, i.e. something that starts with the common core and that can be extended and configured by composition as much as by any other means.
This means that you probably want to make extensive use of Inversion of Control and Dependency Injection to give you flexibility within a common framework. You want to look at extensibility frameworks (I do .NET, so I'd be looking at the Managed Extensibility Framework - MEF) that allow you to go some way towards "assembling" an application at runtime rather than at compile time.
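To sketch the composition idea in code (Java's ServiceLoader here, used only as an illustration since the answer itself points at .NET's MEF; the interface and method names are invented):

import java.util.ServiceLoader;
import java.util.Set;

// Sketch: discover the modules present at runtime, then activate only those the
// hospital-specific configuration enables. All names here are hypothetical.
interface HospitalModule {
    String name();
    void register(); // wire the module into the core application
}

class ModuleComposer {
    static void composeApplication(Set<String> enabledModules) {
        for (HospitalModule module : ServiceLoader.load(HospitalModule.class)) {
            if (enabledModules.contains(module.name())) {
                module.register();
            }
        }
    }
}

The per-hospital deliverable then becomes little more than its configuration plus whichever module jars it ships with.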
You also need to pay attention to how you're going to deploy - and especially how you're going to update - your applications, and at this point you're right: you're going to need to get both your version control and your build environment right.
Once you know how you're going to build your application, then you can look at your version control system - #VonC is spot on when he says that the key feature is to be able to include code from shared projects into multiple deliverable projects.
If it were me, now, I'd probably have a core (which will probably itself be multiple projects/solutions) and then one project/solution per hospital, but I would aim to have as little code as possible in the per-hospital projects - ideally just enough framework to define the instance-specific configuration and UI customisation.
As to which to use... if you're a Microsoft shop then take a good long hard look at TFS, the productivity gains from having a well integrated environment can be considerable.
Otherwise (and in any case), DVCSs (Mercurial, Git, Bazaar, etc.) seem to me to be gaining an edge on the more traditional systems as they mature. I think SVN is an excellent tool (I use it and it works), and I think that you need a central repository for this kind of development - not least because you need somewhere that triggers your Continuous Integration server. However, you can achieve the same thing with a DVCS; the ability to make frequent local, incremental commits without "breaking the build", plus the flexibility a DVCS gives you, means that if you have a choice now then that is almost certainly the way to go (but you do need to establish good practices to ensure that code is pushed to your core repositories early).
I think there is still a lot to address purely from the VCS question - but you can't get to that in useful detail 'til you know how you're going to structure your delivered solution.
All of those VCS (Version Control System) you mention are compatible with the notion of "shared component", which allows you to define a common shared and deployed code base, plus some specialization in each branch:
CVCS (Centralized):
- Subversion externals
DVCS (Distributed):
- Git submodules (see the true nature of submodules)
- Hg SubRepos
Considering the distributed aspect of the release management process, a DVCS would be more appropriate.
If the bug is located in the common code base, you can quickly see in the other branches:
- what exact version of the common component they refer to;
- whether they refer to the same or an older version of that common component than the one in which the bug was found (in which case chances are they also have the bug).
Given a large scale software project with several components written in different languages, configuration files, configuration scripts, environment settings and database migration scripts - what are the common practices for deployment to production?
What are the difficulties to consider? Can the process be simplified with tools like Ant or Maven? How can rollback and database management be handled? Is it advisable to use version control for the production environment?
As I see it, you're mostly asking about best practices and tools for release engineering AKA releng -- it's important to know the "term of art" for a subject, because it makes it much easier to search for more information.
A configuration management system (CMS -- aka revision control system or version control system) is indispensable for today's software development; if you use one or more IDEs, it's also nice to have good integration between them and the CMS, though that's more of an issue for purposes of development than for purposes of deployment / releng.
From a releng viewpoint, the key thing about a CMS is that it must have good support for "branching" (under whatever name), because releases must be made from a "release branch" where all the code under development and all of its dependencies (code and data) are in a stable "snapshot" from which the exact, identical configuration can be reproduced at will.
The need for good branching support may be more obvious if you have to maintain multiple branches (customized for different uses, platforms, whatever), but even if your releases are always, strictly in a single linear sequence, releng best practices still dictate making a release branch. "Good branching support" includes ease of merging (and "conflict resolution" when different changes are made to a file), "cherry-picking" (taking one patch or changeset from one branch, or the head/trunk, and applying it to another branch), and the like.
In practice, you start off the release process by making a release branch; then, you run exhaustive testing on that branch (typically MUCH more than what you run everyday in your continuous build -- including extensive regression testing, integration testing, load testing, performance verification, etc, and possibly even more costly quality assurance processes, depending). If and when the exhaustive testing and QA reveal defects in the release-candidate (including regressions, performance degradation, etc), they must be fixed; in a large team, development on head/trunk may be continuing while the QA is being done, whence the need for ease of cherry-picking / merging / etc (whether your practice is to perform the fix on head or on the release branch, it still needs to be merged to the other side;-).
Last but not least, you're NOT getting full releng value from your CMS unless you're somehow tracking with it "everything" that your releases depend on -- simplest would be to have copies or hard links to all the binaries for the tools you need to build your release, etc, but that may often be impractical; so at the very least track the exact release, version, bugfix &c numbers of those tools that are used (operating system, compilers, system libraries, tools that preprocess image, sound or video files into final form, etc, etc). The key is being able, at need, to exactly reproduce the environment required to rebuild the exact version that's proposed for release (otherwise you'll go mad tracking down subtle bugs that may depend on third party tools' changes as their versions change;-).
After a CMS, the second most important tool for releng is a good issue tracking system -- ideally one that's well integrated with the CMS. That's also important for the development process (and other aspects of product management), but in terms of the release process the importance of the issue tracker is the ability to easily document exactly what bugs have been fixed, what features have been added, removed, or changed, and what modifications in performance (or other user-observable characteristics) are expected in the new forthcoming release. For the purpose, a key "best practice" in development is that every changeset that gets committed to the CMS must be connected to one (or more) issue in the issue tracking system: after all, there's gotta be some purpose for that change (fix a bug, change a feature, optimize something, or some internal refactor that's supposed to be invisible to the software's user); similarly, every tracked issue that's marked as "closed" must be connected to one (or more) changesets (unless the closing is of the "won't fix / working as intended" kind; issues related to bugs &c in third-party components, which have been fixed by the third-party supplier, are easy to treat similarly if you do manage to keep track of all third-party components in the CMS too, see above; if you don't, at least there should be text files under CMS documenting third-party components and their evolution, again see above, and they need to be changed when some tracked issue on a 3rd party component gets closed).
Automating the various releng processes (including building, automated testing, and deployment tasks) is the third top priority -- automated processes are much more productive and repeatable than asking some poor individual to manually go through a list of steps (for sufficiently complex tasks, of course, the workflow of the automation may need to "get a human being in the loop"). As you surmise, tools such as Ant (and SCons, etc, etc) can help here, but inevitably (unless you're lucky enough to get away with very simple and straightforward processes) you'll find yourself enriching them with ad-hoc scripts &c (some powerful and flexible scripting language such as perl, python, ruby, &c, will help). A "workflow engine" can also be precious when your release workflow is sufficiently complex (e.g. involving specific human beings or groups thereof "signing off" on QA compliance, legal compliance, UI guidelines compliance, and so forth).
Some other specific issues you're asking about vary enormously depending on specifics of your environment. If you can afford programmed downtime, your life is relatively easy, even with a large database in play, as you can operate sequentially and deterministically: you shut the existing system down gracefully, ensure the current database is saved and backed up (easing rollback, in the hopefully VERY rare case it's needed), run the one-off scripts for schema migration or other "irreversible" environment changes, fire the system back up again in a mode that's still inaccessible to general users, run another extensive suite of automated tests -- and finally, if everything's gone smoothly (including the saving and backup of the DB in its new state, if relevant), the system's opened up to general use again.
If you need to update a "live" system, without downtime, this may range anywhere from a slight inconvenience to a systematic nightmare. In the best case, transactions are reasonably short, and synchronization between the state set by transactions may be delayed a bit without damage... and you have a reasonable abundance of resources (CPUs, storage, &c). In this case, you run two systems in parallel -- the old one and the new one -- and just make sure all new transactions target the new system, while letting old ones complete on the old system. A separate task periodically syncs "new data in the old system" to the new system, as transactions on the old system terminate. Eventually you can determine that no transactions are running on the old system and all changes that happened there are synced up to the new one -- and at that time you can finally shut down the old system. (You need to be prepared to "reverse sync" too of course, in case a rollback of the change is needed).
This is the "simple, sweet" extreme for live system updating; at the other extreme, you can find yourself in such an overconstrained situation that you can prove the task is impossible (you just cannot logically meet all the stated requirements with the given resources). Long sessions opened on the old system which just cannot be terminated -- scarce resources that make it impossible to run two systems in parallel -- core requirements for hard-real-time sync of every transaction -- etc, etc, can all make your life miserable (and as I noticed, at the extreme, can make the stated task absolutely impossible). The two best things you can do about this: (1) ensure you have abundant resources (this will also save your skin when some server unexpected goes belly-up... you'll have another one to fire up to meet the emergency!-); (2) consider this predicament from the start, when initially defining the architecture of the overall system (e.g.: prefer short-lived transactions to long-lived sessions that just can't be "snapshot, closed down, and restarted seamlessly from the snapshot", is one good arcitectural pointer;-).
disclaimer: where I work we use something I wrote that I'm going to mention
I'll tell you how I do it.
For configuration settings, and general deployment of code and content, I use a combination of NAnt, a CI server, and dashy (an automatic deployment tool). You can replace dashy with any other 'thing', that automates the uploads to your server (perhaps capistrano).
For DB scripts, we use RedGate SQL Compare to get the scripts, and then for most changes, I actually make them manually, where appropriate. This is because some of the changes are slightly complex, and I feel more comfortable doing it by hand. You can actually use this tool to do it for you (or at least to generate the scripts).
Depending on your language, there are some tools that can script the DB updates for you (I think someone on this forum wrote one; hopefully he'll reply), but I have no experience with those. It's something I'd like to add though.
Difficulties
I forgot to answer one of your questions.
The biggest problem in updating any significantly complicated/distributed site is DB synchronisation. You need to consider whether you will have any downtime, and if you do, what will happen to the DBs. Will you shut down everything so that no transactions can be processed? Or will you shift everything to one server, update DB B, then sync DB A & B, and then update DB A? Or something else?
Whatever you choose, you need to choose it, and either say "Okay, for each update there will be X downtime", or whatever. Just have it documented.
The worst thing you can do is have someone's transaction fail mid-processing because you were updating that server, or to somehow leave only part of your system operational.
I don't think you have any option about using versioning or not.
You cannot go without versioning (as mentioned in a comment).
I am talking from experience since I am currently working on a website where we have several items that have to work together:
The software itself, its functionalities at a given point in time
The external systems (6 of them) we are linked with (messages are versioned)
A database, which holds the configuration (and translations)
The static resources (images, css, javascripts) hosted on an Apache server (several in fact)
A java applet, which has to be synchronized with the javascript and software
This works because we use versioning, although I must admit that our database is pretty simple, and because the deployment is automated.
Versioning means that at any point in time we actually have several concurrent versions of the messages, the database, the static resources and the java applet.
It is especially important in the event of a 'fallback'. If you discover a flaw when loading your new software and suddenly you cannot afford to go live with it, you'll have a crisis if you have not versioned; if you have, you can simply load the old software again.
Now as I said, the deployment is scripted:
- the static resources are deployed first (+ the java applet)
- the database goes next, several configurations cohabit since they are versioned
- the software is queued for deployment in a 'cool' time window (when our traffic is at its lowest point, so at night)
And of course we take care of backward / forward compatibility issues with the external servers, which should not be underestimated.
I'm working on a project that will (soon) be branched into multiple different versions (Trial, Professional, Enterprise, etc).
I've been using Subversion since it was first released (and CVS before that), so I'm comfortable with the abstract notion of branches and tags. But in all my development experience, I've only ever really worked on trunk code. In a few rare cases, some other developer (who owned the repository) asked me to commit changes to a certain branch and I just did whatever he asked me to do. I consider "merging" a bizarre black art, and I've only ever attempted it under careful supervision.
But in this case, I'm responsible for the repository, and this kind of thing is totally new to me.
The vast majority of the code will be shared between all products, so I assume that code will always reside in trunk. I also assume I'll have a branch for each version, with tags for release builds of each product.
But beyond that, I don't know much, and I'm sure there are a thousand and one different ways to screw it up. If possible, I'd like to avoid screwing it up.
For example, let's say I want to develop a new feature, for the pro and enterprise versions, but I want to exclude that feature from the demo version. How would I accomplish that?
In my day-to-day development, I also assume I need to switch my development snapshot from branch to branch (or back to trunk) as I work. What's the best way to do that, in a way that minimizes confusion?
What other strategies, guidelines, and tips do you guys suggest?
UPDATE:
Well, all right then.
Looks like branching is not the right strategy at all. So I've changed the title of the question to remove the "branching" focus, and I'm broadening the question.
I suppose some of my other options are:
1) I could always distribute the full version of the software, with all features, and use the license to selectively enable and disable features based on authorization in the license. If I took this route, I can imagine a rat's nest of if/else blocks calling into a singleton "license manager" object of some sort. What's the best way of avoiding code-spaghettiism in a case like this?
2) I could use dependency injection. But generally, I hate it (since it moves logic from the source code into configuration files, which makes the project more difficult to grok). And even then, I'm still distributing the full app and selecting features at runtime. If possible, I'd rather not distribute the enterprise version binaries to demo users.
3) If my platform supported conditional compilation, I could use #IFDEF blocks and build flags to selectively include features. That'd work well for big, chunky features like whole GUI panels. But what about smaller, cross-cutting concerns... like logging or statistical tracking, for example?
4) I'm using ANT to build. Is there something like build-time dependency injection for ANT?
A most interesting question. I like the idea of distributing everything and then using a license key to enable and disable certain features. You have a valid concern about it being a lot of work to go through the code and continually check whether the user is licensed for a certain feature. It sounds a lot like you're working in Java, so what I would suggest is that you look into using an aspect weaver to insert the code for license checking at build time. There is still going to be one object into which all calls for license checking go, but that isn't as bad a practice if you're using an aspect; I would say it is actually good practice.
For the most part you only need to read whether something is licensed, and you'll have a smallish number of components, so the table could be kept in memory at all times; and because it is just reads, you shouldn't have too much trouble with threading.
As an alternative, you could distribute a number of jars, one for each component which is licensed, and only allow loading the classes which are licensed. You would have to tie into the class loader to achieve this.
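For the simpler "keep the license table in memory and only read it" idea, a minimal sketch in plain Java (without the aspect weaving; the feature names and license-loading step are invented) might look like:

import java.util.EnumSet;
import java.util.Set;

// Sketch: one read-only answer to "is this feature licensed?", populated once at
// startup from the license key. After that there are only reads, so no locking.
enum Feature { EXTRA_MENU, REPORTING, ENTERPRISE_ADMIN }

final class LicenseManager {
    private final EnumSet<Feature> licensed;

    LicenseManager(Set<Feature> licensedFeatures) {
        this.licensed = EnumSet.noneOf(Feature.class);
        this.licensed.addAll(licensedFeatures);
    }

    boolean isLicensed(Feature feature) {
        return licensed.contains(feature);
    }
}

Whether the check is injected by an aspect or called directly, keeping it behind a single method like this avoids scattering license logic through the codebase.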
Do you want to do this via Subversion? I would use Subversion to maintain different releases (a branch per release, e.g. v1.0, v2.0, etc.) but I would look at building different editions (trial/pro etc.) from the same codebase.
That way you're simply enabling or disabling various features via a build and you're not having to worry about synchronising different branches. If you use Subversion to manage different releases and different versions, I can see an explosion of branches/tags in the near future.
For switching, you can simply maintain a checked-out codebase and use svn switch to check out differing versions. It's a lot less time-consuming than performing a new checkout for each switch.
You are right not to jump on the branching and merging cart so fast. It's a PITA.
The only reason I would want to branch a subversion repository is if I want to share my code with another developer. For example, if you work on a feature together and it is not done yet, you should use a branch to communicate. Otherwise, I would stay on trunk as much as possible.
I second Brian's recommendation to differentiate the releases at build time and not in the code base.