Feed HAPI FHIR Package Cache manually? (for completeness and/or off-line use)

Because of data protection regulations we need to run the HAPI validator (validator_cli.jar) off-line, and we also need to complement the FHIR Package Cache by adding conformance resources that are not available online at all (they tend to get distributed via mounted courier, carrier pigeon and similar technologies).
Transplanting a well-filled package cache (e.g. %userprofile%\.fhir) from a connected computer to an offline computer takes care of all things that HAPI can download. From that point on HAPI finds these conformance resources without requiring any switches or other TLC.
Referencing directories with conformance resources that came in a push-cart can be done via the implementation guide switch (-ig /foo/bar). However, adding several dozen directories in this way is tedious and error-prone; it also makes it somewhat impractical to use the HAPI validator from the command line or in a context like Yannick Lagger's VSCode FHIR plugin.
Workarounds like creating a wrapper batch file with the umpteen -ig switches have limited reach; they do not work on HAPI as a whole, and they do not help with things like the VSCode plugin.
Lastly, for various reasons it is necessary to put the whole FHIR cache (minus the official HL7 packages) into the build process, with version control, test suites and so on. The reason is that the specifications for German health care are still very much in flux, only partly available online, incomplete, and owned by about half a dozen different organisations. Using a carefully constructed FHIR cache with controlled contents is the only option in this situation, especially if you consider that our automated billing system spits out invoices for up to 7 digits a pop.
Are there any tools that can assist with turning an -ig style tree with (predominantly) XML conformance resources into a package that can be shoved into the FHIR Package Cache?
HL7.org has some documentation about the NPM Package Format as far as it pertains to FHIR packages. This indicates, among other things, that all resources must be converted to JSON. Is there a reliable command line tool that can be used to automate at least this part of the process, even if it doesn't spit out a complete NPM package?
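To make the target concrete: once the resources are in JSON, assembling the npm-style package itself looks scriptable; it is the XML-to-JSON step that seems to need a FHIR-aware tool, because FHIR's XML and JSON serialisations differ structurally (a generic XML-to-JSON converter won't produce valid FHIR JSON). A rough sketch of the packaging half, in Python, with hypothetical package name, version and paths:

# Sketch only: bundle a directory of already-converted FHIR JSON conformance
# resources into an npm-style tarball. Package name, version and paths are
# hypothetical; package.json may need further fields per the HL7 package spec.
import json
import tarfile
from pathlib import Path

SRC = Path("converted-json-resources")   # already-converted *.json resources
PKG_NAME = "example.local.conformance"   # hypothetical package id
PKG_VERSION = "0.1.0"

build = Path("build/package")
build.mkdir(parents=True, exist_ok=True)

# package.json is the manifest every FHIR package carries.
manifest = {
    "name": PKG_NAME,
    "version": PKG_VERSION,
    "description": "Locally maintained conformance resources",
}
(build / "package.json").write_text(json.dumps(manifest, indent=2))

# Copy the JSON resources alongside the manifest.
for src in SRC.glob("*.json"):
    (build / src.name).write_bytes(src.read_bytes())

# The package format is a gzipped tarball with a top-level "package/" folder.
with tarfile.open(f"{PKG_NAME}-{PKG_VERSION}.tgz", "w:gz") as tar:
    tar.add(build, arcname="package")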

Related

HTTP based "mirror"

I am looking to implement a PowerShell-based system to manage a local library of assets, specifically a library of Revit Family files. There is a "vetted library" that acts as the source library, to which items can be added, removed or revised. This library then needs to be mirrored on the local machine.
I do this now with the vetted library on the network, and I do a Robocopy /mir at every user logon. This works great for a traditional office environment with laptops that sometimes leave the office, to ensure they have the current library. However, with Work From Home now a major issue, I want to implement similar functionality but with a web-hosted library, either on my own server or in an Amazon S3 bucket. My thinking is to make this a two-stage process.
1: When the vetted library is updated, an XML manifest is regenerated that describes the entire folder structure and per-file data for the library, including file size and file hash.
2: On the local machine, I download the vetted library map and compare it with the previous map. Missing and extraneous files are easy, though moved files are a bit more complex. Files with different sizes are easy too. If files are the same size, then the already computed hashes are compared. In this way I can build a list of files to be deleted locally, as well as new files to be downloaded (a sketch of the comparison logic is below).
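To make the compare step concrete, here is a rough sketch of the logic (Python for illustration only; the real implementation would presumably be PowerShell, and the vetted manifest would be parsed from the XML file rather than built by a local walk):

# Sketch only: a manifest maps a relative path to (size, hash); diffing two
# manifests yields the download and delete lists. Paths are hypothetical.
import hashlib
from pathlib import Path

def build_manifest(root: Path) -> dict[str, tuple[int, str]]:
    """Walk the library and record size + SHA-256 for every file."""
    manifest = {}
    for f in root.rglob("*"):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            manifest[str(f.relative_to(root))] = (f.stat().st_size, digest)
    return manifest

def diff_manifests(vetted: dict, local: dict):
    """Return (to_download, to_delete) as lists of relative paths."""
    to_download = [p for p, meta in vetted.items() if local.get(p) != meta]
    to_delete = [p for p in local if p not in vetted]
    return to_download, to_delete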
These libraries can easily reach 5 GB and 10,000 files per library, and every year a new library is required. Firms often have as many as five yearly versions of the software installed. So: lots of files and a lot of data.
This seems like the most performant way to handle regular updates, but I wonder if there is a cmdlet already available that handles this kind of thing better?
I know I COULD do this with Dropbox or the like, but there are a number of arguments against it, from the size of the libraries to security and access control (which I will need to address with my solution eventually as well). These libraries can cost tens of thousands of dollars to purchase, and folks aren't going to want to manage them via Dropbox or OneDrive.
And... the fact that Microsoft has OneDrive has me thinking there isn't a built-in PowerShell way to do this, since they want to push OneDrive. In which case, is my file-map-compare-based approach viable, or is there a better approach I should consider?
I know there is no code here, so maybe I am running afoul of some Stack Overflow rule, but hopefully program specification and planning is seen as an appropriate avenue for questions as well as simple code solutions.

Robust software update solutions for an OpenEmbedded/Yocto based system

We are using a Variscite VAR-SOM-AM33 platform for our project, and the software platform is based on OpenEmbedded/Yocto.
To ensure the hardware is running the current software, the devices are connected to the internet. So far, we have been following the OE recipes, generating ipk packages and applying software updates via opkg.
However, the process is not satisfactory, as some of the recipes are poorly written (they fail to uninstall/install during the upgrade process). What robust techniques/solutions are available for OE/Yocto-based systems?
Thanks in advance.
I'd like to add SWUpdate to the list of packages that you should consider. It was recommended in a 2016 paper by the Konsulko Group for Automotive-Grade Linux. That paper mentions a few other options, and provides an analysis of the various tools, so it's probably worth a read. Quoting from the paper:
It is our recommendation that the reference AGL software update strategy make use of SWUpdate in a dual copy configuration and integrate OSTree support. This allows recovery from a corrupt partition for the exception case, but also optimizes the common case where small, incremental updates can be quickly applied or rolled back as needed to [meet] OEM policy.
I don't completely agree with the paper. For example, they wrote off Mender.io because it lacks community support, but IMO the Automotive-Grade Linux group is influential enough to create popularity from scratch. Still, it's a good paper, and the fact that they settled on SWUpdate was interesting to me. I was already leaning toward it because the author, sbabic, is involved in U-Boot software development, and we use U-Boot to burn new images into our device.
At the moment I'm unsatisfied with all of the current options, but mostly because I want extra functionality. I'll probably settle on a custom system which incorporates one or more of the aforementioned packages. Unfortunately that's not the kind of definitive answer that SO prefers, but I hope that it was helpful.
I'm working on a metadata layer to integrate the Software Updater (swupd) from Clear Linux with the Yocto Project / OpenEmbedded Core.
swupd performs whole-OS updates rather than package-based updates, using binary deltas to update only the files that change, and to do so efficiently.
I recently wrote some documentation (within the docs/Guide.md file in the meta-swupd repo) about adopting the "Clear Linux Way" to utilise meta-swupd from an OE/YP based distro. A wikified version of that guide, including a link to the layer git repository, is available on the Yocto Project wiki:
https://wiki.yoctoproject.org/wiki/Meta-swupd
I also have a sample layer on Github which demonstrates use of the layer (this is also the distro layer I test much of meta-swupd with):
https://github.com/incandescant/meta-myhouse
About mender.io: I have recently talked to them regarding their open-source updater.
They already have the client side developed and are working on the server side. They use HTTP and JSON. This is their git repository; it only supports BeagleBone and QEMU at the moment.
The way mender.io works is: there is one persistent data partition and U-Boot, plus two rootfs partitions (active and backup) to update. When there is an update on the server, users are notified to pull it down. You give a mender -rootfs image update command, and if the upgrade is successful, you give another mender -commit command. If there is no mender -commit, the system rolls back to the previous rootfs on the next reboot. Mender currently only supports updating the kernel and rootfs.
The main role of mender.io is to ensure that the mass-distributed image upgrade process is recoverable from errors. On the server side, mender.io has developed a management server for the mass-distributed devices using UUIDs.
Not to advertise, but please try out mender.io and give feedback so that the software can become more mature.
Mender Introduction pdf
Well, you can either use package-based upgrades, like you do now. In that case, you'll need to test and verify everything locally before you push any updates to the field. Obviously, you'll likely need to improve a number of recipes. (And I assume that you upstream those improvements, right?)
The alternative is to use image-based upgrades, either with full images (see for instance the discussion at Stack Overflow: Embedded Linux mechanism for deploying firmware updates) or with swupd.
Note: I got distracted while writing this answer, so look at the answer from joshuagi; he explains a lot more of swupd.
I think there are two problems here. We (OpenEmbedded) do need to be careful that we do not break package-based updates.
Also, there are image updates like swupd (mentioned above) and SWUpdate, described at https://sbabic.github.io/swupdate/swupdate.html
meta-updater provides support for OSTree-based updates to OE systems.
OSTree is interesting because it provides a half-way house between full image updates (which are large and tricky to handle correctly) and package based updates (which are tricky to make robust). It has a 'git-like' object representation of a root filesystem, and uses chroot and hard links to atomically switch between file system images.
(Disclosure: I'm a contributor to meta-updater)
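To give a feel for the "git-like" object store plus hard-link checkout idea, here is a toy sketch (purely illustrative Python; this is not how OSTree is actually implemented):

# Toy model: every file is stored once under its content hash, and a
# "deployment" is just a tree of hard links into that store, so switching
# deployments never copies file data. Paths are hypothetical.
import hashlib
import os
from pathlib import Path

STORE = Path("repo/objects")

def commit_file(src: Path) -> Path:
    """Store a file under its content hash and return the object path."""
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    obj = STORE / digest[:2] / digest[2:]
    if not obj.exists():
        obj.parent.mkdir(parents=True, exist_ok=True)
        obj.write_bytes(src.read_bytes())
    return obj

def checkout(tree: dict[str, Path], target: Path) -> None:
    """Materialise a deployment as hard links; file data is never duplicated."""
    for rel_path, obj in tree.items():
        dest = target / rel_path
        dest.parent.mkdir(parents=True, exist_ok=True)
        os.link(obj, dest)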
The posts here were written years ago. Nowadays RAUC also seems to be a promising alternative to mender.io.

ReSpec vs Bikeshed: How to document and publish a standard REST API interface to be implemented by a number of vendors?

We want to document a standard REST API interface which will be implemented by a number of vendors. Currently we are using Google Docs to store the specification.
Requirements (which must be common to most):
Spec history: We want to be able to reference previous versions of the specification.
Version control: We want to store the spec in version control, so that we can tag versions against our codebase and store it alongside the related endpoint validator.
Issues: We want to allow the community to submit issues.
Community / Affiliation: We want to share the specification with a broader community, to receive validation on our approach.
Format / Tooling: We want to use a format that is easy to edit, and also publishable into an easy-to-understand form.
Potential ratification: If it's useful, it would be good to create a standard where there's a pathway for it becoming more widely adopted.
From a little research there are a few relevant standards bodies:
IETF (Internet Engineering Task Force): Mostly use the text-based RFC format, but seem to have some nice tracking tools. Generally for lower-level standards (e.g. TCP), though they've created higher level ones too.
W3C (World Wide Web Consortium): If we are publishing through the W3C eventually, it looks like we'll need to conform to pubrules.
WHATWG (Web Hypertext Application Technology Working Group): A group that appears to focus mainly on HTML5, so less relevant for a REST API spec.
OASIS (Organization for the Advancement of Structured Information Standards): Seems to be more about business abstractions on top of IETF / W3C standards.
I have looked at a few examples over the web, and note a difference of approaches:
YAML: spec history, versioned in GitHub, issues on GitHub, no apparent affiliation, uses DocBook.
JSON-LD: spec history, versioned in GitHub, issues on GitHub, W3C affiliation, uses ReSpec (also on GitHub).
JSON API: spec history, versioned in GitHub, issues on GitHub, no apparent affiliation, appears to use Jekyll and some custom templates.
JMAP: versioned in GitHub, issues on GitHub, no apparent affiliation, appears to use markdown and some custom templates.
HTML 5 (W3C): versioned in GitHub, issues on GitHub, W3C affiliation, uses Bikeshed.
HTML 5 (WHATWG): versioned in GitHub, issues on GitHub, WHATWG affiliation, uses a "proprietary language that is then post-processed into HTML" (source).
JSON Schema: versioned using IETF tooling, issues on GitHub, IETF affiliation, uses IETF RFC format.
CSS 3: spec history, versioned in Mercurial, issues inline in spec, W3C affiliation, uses Bikeshed.
For a REST API, which approach should we follow? What are the advantages and disadvantages of each?
Caveat: I was the original author of ReSpec (though maintenance has now passed on to others).
I think that at the end of the day, a lot of it boils down to your personal preferences. Both tools support your first list of requirements. Both tools have a similar feature set with a lot of overlap (but also some distinct features), and in both cases the documentation may not fully reflect that.
Some things that might help you choose:
ReSpec requires zero installation. In my experience that makes it easier for contributors who are relatively new to spec-writing to get started, since they can just fork the repo and edit the HTML; refreshing the browser will show the edits directly. ReSpec source uses conventions beyond HTML, but it is always conforming HTML. Bikeshed, on the other hand, either needs a working Python 2 installation and has to be installed locally, or you can send your document to the web version with a curl command (but I don't think that's very convenient). To more seasoned users, that point makes no difference.
ReSpec does support batch building: there's a respec2html tool that comes with it. You should normally be able to operate it in CI (otherwise spec-gen works too).
Publican is dead, as far as I know.
If you are producing specs that are not intended for W3C, you might need to patch whichever option you pick. At that point your preference in language might be a factor.
ReSpec will not be very good at very large specifications (but for most cases it's fine).
Overall I think that's it. If you're undecided, the best thing might be for you to grab the sources of two similar specs and compare to see what you like best, and also to play at making a few small edits to both and see what's most convenient for your expected workflow. At the end of the day, don't agonise over this: both formats are HTML-based (and support embedded Markdown if that's your thing). Converting between them should you need to will likely require less time than a properly thorough investigation!
Caveat: I'm the author of Bikeshed.
As Robin said, the choice of processor is largely one of personal taste. Most of the differences in processor are minor; to my knowledge there are two major differences to consider:
Bikeshed compiles a source document into HTML; ReSpec is included into an HTML file and on-the-fly rewrites it into better HTML. In my opinion, this makes ReSpec slightly easier for casual use (nothing to install, just refresh the source document to see changes), but Bikeshed is better for the ecosystem (no "flash of un-ReSpec'd content" or "jumping spec" when you navigate to an anchor). That said, Bikeshed is easy to install locally, and a lot of people use the server version instead quite happily.
One of Bikeshed's primary features is its cross-spec linking database; it has a growing (largely W3C-centric) database of specs that it regularly spiders for definitions, and makes it very simple to link to those definitions. This has resulted in greatly improved cross-linking in W3C specs, which makes things much easier to read and follow. However, if you're not planning to link into W3C specs, or have them link into you, that's not a big deal. Linking "locally" (within your own spec) is about as easy in either processor.
So on the Bikeshed vs. ReSpec topic, a few thoughts:
When choosing software to rely on, technical superiority or the feature set of one project over another should rarely be your deciding factor; unless of course there are specific features that you absolutely need to get your job done and that aren't available in all contenders.
Software tends to come and go, and that's as true of commercial software as it is of open source. The steeper the learning curve and the higher the migration costs, the more you want to consider a tool's future when picking one.
Bikeshed's killer feature is its cross-spec linking database integration. But it's only a killer feature if you need it. I doubt you would given your current use case.
That said, because it is a killer feature for some of the more involved and Web-centric spec editing, it's acting as a magnet, pulling in key members of the community. As these members adopt Bikeshed and use it for new specs or convert existing specs to it, the tool's appeal increases, creating a snowball effect. Conversely, it makes it harder for ReSpec to maintain its traction. Having a reactive maintainer whose job it is to write specs and whose tool for doing so is Bikeshed also helps.
All in all, Bikeshed has a brighter future in front of it than ReSpec does at this point. So even though you don't need Bikeshed's extra features, and even though its learning curve is a bit steeper and its installation more involved, you might still want to pick it simply because it has more traction, which is code for the following:
it will be around longer,
bugs should get fixed faster,
it should improve faster,
it should be more stable,
it might add a bit of veneer to your work because you're using the cool kids' tool.
However it seems that you're planning to specify a REST API. I'm not sure either tool is the right one for the job. Have you considered a combination of JSON Schema, JSON Hyper-Schema, and a documentation tool like prmd? This has the added benefits of being (highly) machine readable which can be used to generate test suites for implementations, clients for different programming languages, etc.
Full disclosure: I started off using ReSpec, added Markdown support to it, helped maintain it and recently switched to Bikeshed to benefit from its cross-spec linking database integration.
Given it's a REST-based API, the W3C is most relevant. WHATWG is too focused on HTML, IETF would result in a less readable spec, and OASIS is potentially too obscure.
All bodies agree on RFC 2119, so it's worth ensuring it is used in the spec.
If the W3C is chosen, Pubrules must be followed (there is a new W3C pubrules validator, accessible via npm and here). Two main formats/tools are currently popular, both supported by the W3C's tooling, as described here:
ReSpec and Bikeshed: Since W3C "pubrules" markup can prove repetitive and at times hard to get right, many tools have been developed to assist people in producing it — these are the two main ones. ReSpec documents are essentially valid HTML with some extra configuration that a JS library turns into the real thing; Bikeshed is a Python preprocessor that can apply to HTML but is more often used in Markdown mode.
N.B. Anolis, an older preprocessor that preceded ReSpec and Bikeshed, has been declared dead by its author.
W3C is currently undergoing a process of modernisation. A new W3C project named Echidna (based in GitHub) supports both ReSpec and Bikeshed automated publishing, though the latter has only recently been implemented, and it currently only works inside the W3C.
Using either of the above tools will allow the standard to be indexed in specref.org (the database of bibliographical references that W3C specifications rely upon).
Notes on each of these options:
ReSpec
ReSpec is in use at the W3C and actively maintained.
ReSpec apparently does support Markdown, but the feature is undocumented.
Spec Generator seems in common use in W3C for ReSpec CI, and can be accessed outside the W3C (it used to be internal).
In terms of CI alternatives:
Echidna is a new official recommendation for ReSpec CI, however it currently only works inside the W3C.
Publican is a GitHub hook listener that generates specs written to be parsed by ReSpec or Bikeshed. Hard to tell the status of the project though, as it appears to be discontinued (Robin originally created ReSpec, and did lots of work for webplatform.org, but the project may have since changed direction). Likely better to use actual Bikeshed for Bikeshed CI. It also runs in Docker (see gist).
There are various articles discussing the use of ReSpec with GitHub, and the publishing process.
Some say ReSpec is more accessible.
Examples of ReSpec specs: here, here, here, here
Examples of ReSpec CI: here (Echidna here, here)
Bikeshed
Bikeshed is in use at both the W3C and WHATWG, and is actively maintained.
Bikeshed fully supports Markdown (and soon CommonMark).
Bikeshed specs are compiled, so this works well for CI in terms of flagging syntax errors (its main advantage).
Need to set up Travis CI on the GitHub repo to publish changes, as the W3C have done.
Can use a watcher when editing locally to reduce dev cycle.
Can use a remotely hosted processor, but it won't work with all features (e.g. the separate biblio file).
There are a few examples of people migrating from ReSpec to Bikeshed, not the other way around.
HTML diff (not natively supported in Bikeshed, but found in ReSpec) can still be done by manually adapting ReSpec Section 5, as it's just a simple curl.
Given the use of Bikeshed for the current HTML5 spec, it would appear to be gaining popularity.
Examples of Bikeshed specs: here
Examples of Bikeshed CI: here, here, here, here
Both ReSpec and Bikeshed have a feature to link GitHub issues to inline issues in the spec, which means they pair well with GitHub. All examples found just use commit logs for versioning.
In terms of community:
W3C community groups appear to be a good way to attract a broader audience, and they recommend using GitHub for "modern standards development" (and ReSpec, though that recommendation may be outdated).
The Web Incubator CG is an even more informal version of W3C Community Groups, which provides an existing community, GitHub, and forum to discuss topics directly related to the "web platform" (which means only useful for a "web platform feature that would be implemented in a browser or similar user agent"). They use both ReSpec and Bikeshed.

What is the difference between configuration management and version control?

Can anyone explain in simple terms what the difference is between configuration management and version control? From the descriptions I've been able to find on various websites, it seems like configuration management is just a fancy term for putting your config files in a source control repository. But others lead me to believe there is a more involved explanation.
Version control is necessary but not sufficient for configuration management. Version control happens in some central or distributed repository, but says nothing about where any particular version is deployed or used.
Configuration management worries about how to take what is in version control and deploy that consistently to the appropriate places, primarily QA and production, but in a large enough development operation developers as well.
For example, you may keep all of your SQL queries in version control, including your table modification scripts, but that doesn't control when those scripts are deployed to the appropriate database server and kept in sync with the deployment of any other code that relies on that database structure.
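As a purely illustrative toy (made-up environments and revisions): version control knows which revisions exist, while configuration management records which revision is deployed where and whether related pieces are in sync.

# Toy model only; names and revisions are invented.
repository_revisions = ["r100", "r101", "r102"]      # what version control holds

deployments = {                                       # what configuration management tracks
    "qa":         {"app": "r102", "schema_scripts": "r102"},
    "production": {"app": "r101", "schema_scripts": "r100"},
}

def out_of_sync(env: str) -> bool:
    """Flag environments where code and database scripts have drifted apart."""
    d = deployments[env]
    return d["app"] != d["schema_scripts"]

print({env: out_of_sync(env) for env in deployments})  # {'qa': False, 'production': True}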
Configuration management includes, but is not limited to, version control.
Configuration management is everything that you need to manage in terms of a project. This includes software, hardware, tests, documentation, release management, and more. It identifies every end-user component and tracks every proposed and approved change to it from Day 1 of the project to the day the project ends.
Version control is specifically applied to computer files. This includes documents, spreadsheets, emails, source code, and more.
Version control is saving files and keeping different versions of them, so you can see the changes over time.
Configuration management generally refers to an overall process that keeps track of what version of the code is on what server and how the servers are set up (along with the install scripts to do so, in many shops). It is the process of what happens after the code goes into source control and how it gets deployed to the servers/desktops etc.
Configuration management is an ambiguous term.
In software, it tends to be a superset of version control, with emphasis on the entire process needed to produce a result in a repeatable and predictable manner.
In computing maintenance, it relates to the maintenance of the configuration settings and hardware/firmware/software versions of entire networks and sets of attached computing machines (including servers, clients, routers...).
In hardware manufacturing, it represents a superset of the two above, including the hardware pieces and software modules needed to obtain a product, with the description of the process to manufacture them, and sometimes even the entire schematics and configurations of the production lines themselves.
In addition to everything said above I'd like to recommend Bob Aiello's book named "Configuration Management Best Practices" - http://www.amazon.com/dp/0321685865 .
It covers all aspects of Software Configuration Management including version control.
Version control is the control of deliverables, whereas configuration management is managing the entire process leading to the production of those deliverables. Configuration management involves change management, project management, etc., which generally are not handled by simple version control.
Roughly speaking, version control means you can check out the source for any particular version. Configuration management means you can build and deploy and probably test any particular version.
This can be helpful.
Versions and configurations
Versions:
Ability to maintain several versions of an object.
Commonly found in many software engineering and concurrent engineering environments.
Merging and reconciliation of various versions is left to the application program.
Some systems maintain a version graph.
Configuration:
A configuration is a collection of compatible versions of the modules of a software system (one version per module).
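As a purely illustrative sketch (hypothetical module names and versions), a configuration can be thought of as picking one version per module and checking that the picks are mutually compatible:

# Toy model only: a configuration selects one version per module.
configuration = {
    "parser": "2.1.0",
    "ui": "1.4.3",
    "storage": "3.0.1",
}

# Each (module, version) may declare which versions of other modules it accepts.
compatibility = {
    ("ui", "1.4.3"): {"parser": {"2.1.0", "2.1.1"}},
}

def is_consistent(config, compat):
    """A configuration is consistent if every declared constraint is met."""
    for (module, version), needs in compat.items():
        if config.get(module) == version:
            for dep, allowed in needs.items():
                if config.get(dep) not in allowed:
                    return False
    return True

print(is_consistent(configuration, compatibility))  # True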
Version control is one of the features of a SCM system.
From the subversion user guide:
http://svnbook.red-bean.com/en/1.7/svn-book.html
"Some version control systems are also software configuration management (SCM) systems. These systems are specifically tailored to manage trees of source code and have many features that are specific to software development—such as natively understanding programming languages, or supplying tools for building software. Subversion, however, is not one of these systems. It is a general system that can be used to manage any collection of files. For you, those files might be source code—for others, anything from grocery shopping lists to digital video mixdowns and beyond."

What should I propose for a reusable code library organization?

My organization has begun slowly repurposing itself toward a less product-oriented and more contract-oriented business model over the last year or two. During the past year, I was shifted into the new contracting business to help put out fires and fill orders. While the year as a whole was profitable (and therefore, by at least one measure, successful), we had a couple of projects that really dinged our numbers for the year back around June.
I was talking with my manager before the Christmas holiday, and he mentioned that, while he doesn't like the term "post-mortem" (I have no idea what's wrong with the term, any business folks or managers out there know?), he did want to hold a meeting sometime mid-January where the entire contract group would review the year and try to figure out what went right, what went wrong, and what initiatives we can perform to try to improve profitability.
For various reasons (I'll go into more detail if it's requested), I believe that one thing our team, and indeed the organization as a whole, would benefit from is some form of organized code-sharing. The same things get done again and again by different people and they end up getting done (and broken) in different ways. I'd like to at least establish a repository where people can grab code that performs a certain task and include (or, realistically, copy/paste) that code in their own projects.
What should I propose as a workable common source repository for a team of at least 10-12 full-time devs, plus anywhere from 5-50 (very) part time developers who are temporarily loaned to the contract group for specialized work?
The answer required some cultural information for any chance at a reasonable answer, so I'll provide it here, along with some of my thoughts on the topic:
Developers will not be forced to use this repository. The barrier to entry must be as low as possible to encourage participation, or it will be ignored. Sadly, this means that anything which requires an additional software client to be installed and run will likely fail. ClickOnce deployment's about as close as we can get, and that's awfully iffy.
We are a risk-averse, Microsoft shop. I may be able to sell open-source solutions, but they'll be looked upon with suspicion. All devs have VSS; the corporate director has declared that VSTS is not viable going forward. If it isn't too difficult a setup and the license is liberal, I could still try to ninja a VSTS server into the lab.
Some of my fellow devs care about writing quality, reliable software, some don't. I'd like to protect any shared code written by those who care from those who don't. Common configuration management practices (like checking out code while it's being worked on) are completely ignored by at least a fifth of my colleagues on the contract team.
We're better at writing processes than following them. I will pretty much have to have some form of written process to be able to sell this to my manager. I believe it will have to be lightweight, flexible, and enforced by the tools to be remotely relevant because my manager is the only person who will ever read it.
Don't assume best practices. I would very much like to include things like mandatory code reviews to enforce use of static analysis tools (FxCop, StyleCop) on common code. This raises the bar, however, because no such practices are currently performed in a consistent manner.
I will be happy to provide any additional requested information. :)
EDIT: (Responding to questions)
Perhaps contracting isn't the correct term. We absolutely own our own code assets. A significant part of the business model on paper (though not, yet, in practice) is that we own the code/projects we write and we can re-sell them to other customers. Our projects typically take the form of adding some special functionality to one of the company's many existing software products.
From the sounds of it, you have an opportunity during the "post-mortem" to present some solutions. I would create a presentation outlining your ideas and present them at this meeting. Before that, I would recommend that you set up some solutions and demonstrate them during your presentation. Some things to do:
Evangelize component-based programming (a good read is Programming .NET Components by Juval Lowy). Advocate the DRY (Don't Repeat Yourself) principle of coding.
Set up a central common location in your repository for all your re-usable code libraries. This should hold the reference implementation of your re-usable code library.
Make it easy for people to use your code libraries by providing project templates for common scenarios with the code libraries already baked in. This way your colleagues will have a consistent template to work from. You can leverage the VS.NET project template capabilities to do this - check out the following links: VSX Project System (VS.Net 2008), Code Project article on creating Project Templates
Use a build automation tool like MSBuild (which is bundled in VS2005 and up) to copy over just the components needed for a particular project. Make this part of your build setup in the IDE (VS.NET 2005 and up have nifty ways to set up pre-compile and post-compile tasks using MSBuild).
I know there is resistance to open-source solutions, but I would still recommend setting up and using a continuous integration system like CruiseControl.NET so that you can leverage it to compile and test your projects on a regular basis from a central repository where the re-usable code library is maintained. This way any changes to the code library can be quickly checked to make sure they do not break anything. It also helps bring out version issues with the various projects.
If you can set this up on a machine and show it during your post-mortem as part of the steps that can be taken to improve, you should get better buy-in, since you are showing something already working that can be scaled up easily.
Hope this helps and best of luck with your evangelism :-)
I came across this set of frameworks recently called the Chuck Norris Frameworks - they are available on NuGet at http://nuget.org/packages/chucknorris . You should definitely check them out, as they have some nice templates for your ASP.NET projects. Also definitely check out NuGet.
Organize by topic; require unit tests (feature-level) for check-in/acceptance into the library; add a wiki to explain the what/why and to make it searchable.
One question: you say this is a consulting group. What code assets do you have? I would think most of your team's coding efforts would be owned by your clients as part of your work-for-hire contract. If you are going to do this, you need to make absolutely certain that your contracts grant you rights to your employees' work.
Maven has solved code reuse in the Java community - you should go check it out.
I have a .NET developer who's devised something similar for our internal use with .NET assemblies. Because there's no comparable .NET Internet community, this tool will just access an internal repository on our corporate network; otherwise it will work much the way Maven does.
Maven could really be used to manage .NET assemblies directly (we use it with our Flex .swf and .swc code modules); it's just that .NET folk would have to get over using a Java tool and would probably have to write a Maven plugin to drive msbuild.
First of all, for code organization check out the Microsoft Framework Design Guidelines at http://msdn.microsoft.com/en-us/library/ms229042.aspx and then create a central location in source control for the new framework that you're going to create. Set up some default namespaces and assemblies for cleaner separation, and make sure everyone gets a daily build.
Just an additional point, since we have "shared code" in my shop as well.
We found out this is very much a packaging issue:
Whatever code you are producing or tool you are using, what you should have is a common build tool able to package your sources into a "delivery component", with everything used to actually execute the code, but also the documentation (compressed) and the source (compressed).
The main interest in having such a "delivery package unit" is to have as few files to deploy as possible, in order to ease the download of those units.
The build process can very well be managed by Maven or any other (Ant/NAnt) tool you want.
When an audit team wants to examine all our projects, we just deploy on their workstation the same packages we deploy on a production machine, except that they will uncompress the source files and do their work.
Since our source files also include whatever files are needed to compile them (like, for instance, Eclipse files), they can even re-compile those projects in their development environment.
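As a minimal sketch (Python, hypothetical paths; in practice this would be a Maven/Ant/NAnt step as mentioned above, and the documentation and sources would themselves be compressed inside the package), such a packaging step could look like:

# Sketch only: one archive containing the binaries needed to run the code,
# the documentation and the sources. Directory names are hypothetical.
import zipfile
from pathlib import Path

def build_delivery_package(name: str, binaries: Path, docs: Path, sources: Path) -> Path:
    out = Path(f"{name}-delivery.zip")
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as pkg:
        for root, label in ((binaries, "bin"), (docs, "doc"), (sources, "src")):
            for f in root.rglob("*"):
                if f.is_file():
                    pkg.write(f, arcname=f"{label}/{f.relative_to(root)}")
    return out

# Example (hypothetical directories):
# build_delivery_package("mylib", Path("build/bin"), Path("build/docs"), Path("src"))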
That way:
Developers will not be forced to use this repository. The barrier to entry must be as low as possible to encourage participation, or it will be ignored: it is just a script to execute to get the "delivery module" with everything they need in it (a Maven repository can be used for that too).
We are a risk-averse, Microsoft shop: you can use any repository you want.
Some of my fellow devs care about writing quality, reliable software, some don't: this has nothing to do with the quality of the code written in these packaged modules.
We're better at writing processes than following them: the only process involved in this is the packaging process, and it can be fairly automated
Don't assume best practices: you are not forced to apply any kind of static code analysis before packaging executable and source files.