ReSpec vs Bikeshed: How to document and publish a standard REST API interface to be implemented by a number of vendors?

We want to document a standard REST API interface which will be implemented by a number of vendors. Currently we are using Google Docs to store the specification.
Requirements (most of which are probably common to any spec effort):
Spec history: We want to be able to reference previous versions of the specification.
Version control: We want to store the spec in version control, so that we can tag versions against our codebase and store it alongside the related code.
Issues: We want to allow the community to submit issues.
Community / Affiliation: We want to share the specification with a broader community, to receive validation on our approach.
endpoint validator.
Format / Tooling: We want to use a format that is easy to edit, and also publishable into an easy-to-understand form.
Potential ratification: If it's useful, it would be good to create a standard where there's a pathway for it becoming more widely adopted.
From a little research there are a few relevant standards bodies:
IETF (Internet Engineering Task Force): Mostly use the text-based RFC format, but seem to have some nice tracking tools. Generally for lower-level standards (e.g. TCP), though they've created higher level ones too.
W3C (World Wide Web Consortium): If we are publishing through the W3C eventually, it looks like we'll need to conform to pubrules.
WHATWG (Web Hypertext Application Technology Working Group): A group that appears to focus mainly on HTML5, so less relevant for a REST API spec.
OASIS (Organization for the Advancement of Structured Information Standards): Seems to be more about business abstractions on top of IETF / W3C standards.
I have looked at a few examples around the web, and note a difference in approaches:
YAML: spec history, versioned in GitHub, issues on GitHub, no apparent affiliation, uses DocBook.
JSON-LD: spec history, versioned in GitHub, issues on GitHub, W3C affiliation, uses ReSpec (also on GitHub).
JSON API: spec history, versioned in GitHub, issues on GitHub, no apparent affiliation, appears to use Jekyll and some custom templates.
JMAP: versioned in GitHub, issues on GitHub, no apparent affiliation, appears to use markdown and some custom templates.
HTML 5 (W3C): versioned in GitHub, issues on GitHub, W3C affiliation, uses Bikeshed.
HTML 5 (WHATWG): versioned in GitHub, issues on GitHub, WHATWG affiliation, uses a "proprietary language that is then post-processed into HTML" (source).
JSON Schema: versioned using IETF tooling, issues on GitHub, IETF affiliation, uses IETF RFC format.
CSS 3: spec history, versioned in Mercurial, issues inline in spec, W3C affiliation, uses Bikeshed.
For a REST API, which approach should we follow? What are the advantages and disadvantages of each?

Caveat: I was the original author of ReSpec (though maintenance has now passed on to others).
I think that at the end of the day, a lot of it boils down to your personal preferences. Both tools support your first list of requirements. Both have a similar feature set with a lot of overlap (but also some distinct features), and in both cases the documentation may not make that entirely clear.
Some things that might help you choose:
ReSpec requires zero installation. In my experience that makes it easier for contributors who are relatively new to spec-writing to get started, since they can just fork the repo and edit the HTML; refreshing the browser shows the edits directly. ReSpec source uses conventions beyond HTML, but it is always conforming HTML. Bikeshed requires either a working Python 2 installation (plus installing the tool itself), or the use of a curl command against the web version (which I don't find very convenient). For more seasoned users, this point makes no difference.
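For what it's worth, the "curl to the web version" route can be scripted. Here is a minimal sketch in Python, assuming the CSSWG-hosted Bikeshed service; the endpoint URL is written from memory, so check it against the current Bikeshed documentation before relying on it:

```python
# Post a Bikeshed source file to the hosted processor and save the built HTML.
# The endpoint URL below is an assumption from memory, not verified here.
import requests

BIKESHED_API = "https://api.csswg.org/bikeshed/"  # assumed endpoint

with open("index.bs", "rb") as src:
    resp = requests.post(BIKESHED_API, files={"file": src})

resp.raise_for_status()
with open("index.html", "wb") as out:
    out.write(resp.content)
```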
ReSpec does support batch building: a respec2html tool comes with it. You should normally be able to run it in CI (otherwise spec-gen works too).
Publican is dead, as far as I know.
If you are producing specs that are not intended for W3C, you might need to patch whichever option you pick. At that point your preference in language might be a factor.
ReSpec will not be very good at very large specifications (but for most cases it's fine).
Overall I think that's it. If you're undecided, the best thing might be to grab the sources of two similar specs and compare them to see which you like best, and also to try making a few small edits to both to see which is most convenient for your expected workflow. At the end of the day, don't agonise over this: both formats are HTML-based (and support embedded Markdown if that's your thing). Converting between them, should you need to, will likely take less time than a properly thorough investigation!

Caveat: I'm the author of Bikeshed.
As Robin said, the choice of processor is largely one of personal taste. Most of the differences between the processors are minor; to my knowledge there are two major differences to consider:
Bikeshed compiles a source document into HTML; ReSpec is included into an HTML file and on-the-fly rewrites it into better HTML. In my opinion, this makes ReSpec slightly easier for casual use (nothing to install, just refresh the source document to see changes), but Bikeshed is better for the ecosystem (no "flash of un-ReSpec'd content" or "jumping spec" when you navigate to an anchor). That said, Bikeshed is easy to install locally, and a lot of people use the server version instead quite happily.
One of Bikeshed's primary features is its cross-spec linking database; it has a growing (largely W3C-centric) database of specs that it regularly spiders for definitions, and makes it very simple to link to those definitions. This has resulted in greatly improved cross-linking in W3C specs, which makes things much easier to read and follow. However, if you're not planning to link into W3C specs, or have them link into you, that's not a big deal. Linking "locally" (within your own spec) is about as easy in either processor.

So on the Bikeshed vs. ReSpec topic, a few thoughts:
When choosing software to rely on, the technical superiority or feature set of one project over another should rarely be your deciding factor; unless, of course, there are specific features that you absolutely need to get your job done and that aren't available in all contenders.
Software tends to come and go. And that's as true of commercial software as it is of open source. The steeper the learning curve and the higher the migration costs, the more you want to consider a tool's future when picking one.
Bikeshed's killer feature is its cross-spec linking database integration. But it's only a killer feature if you need it. I doubt you would given your current use case.
That said, because it is a killer feature for some of the more involved and Web-centric spec editing, it acts as a magnet, pulling in key members of the community. As these members adopt Bikeshed and use it for new specs or convert existing specs to it, the tool's appeal increases, creating a snowball effect. Conversely, it makes it harder for ReSpec to maintain its traction. Having a reactive maintainer whose job it is to write specs, and whose tool for doing so is Bikeshed, also helps.
All in all, Bikeshed has a brighter future in front of it than ReSpec does at this point. So even though you don't need Bikeshed's extra features, its learning curve is a bit steeper, and its installation is more involved, you might still want to pick it simply because it has more traction, which is code for the following:
it will be around longer,
bugs should get fixed faster,
it should improve faster,
it should be more stable,
it might add a bit of veneer to your work because you're using the cool kids' tool.
However, it seems that you're planning to specify a REST API, and I'm not sure either tool is the right one for the job. Have you considered a combination of JSON Schema, JSON Hyper-Schema, and a documentation tool like prmd? This has the added benefit of being (highly) machine-readable, which can be used to generate test suites for implementations, clients for different programming languages, etc.
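As an illustration of that machine-readability point, here is a minimal sketch using the Python jsonschema package to check a vendor's response against a schema fragment; the resource shape and field names are invented for the example, not taken from the question:

```python
# Validate a hypothetical vendor response against a JSON Schema fragment.
# Schema and field names are illustrative assumptions, not part of the spec
# being discussed. Requires the "jsonschema" package.
from jsonschema import validate, ValidationError

customer_schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "required": ["id", "forename", "surname"],
    "properties": {
        "id": {"type": "string"},
        "forename": {"type": "string"},
        "surname": {"type": "string"},
    },
}

def check_response(payload: dict) -> bool:
    """Return True if a vendor's response matches the documented schema."""
    try:
        validate(instance=payload, schema=customer_schema)
        return True
    except ValidationError as err:
        print(f"Spec violation: {err.message}")
        return False
```

The same schema documents the resource, drives conformance checks like this, and can feed client or test-suite generators.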
Full disclosure: I started off using ReSpec, added Markdown support to it, helped maintain it and recently switched to Bikeshed to benefit from its cross-spec linking database integration.

Given it's a REST-based API, the W3C is the most relevant. WHATWG is too focussed on HTML, IETF would result in a less readable spec, and OASIS is potentially too obscure.
All of these bodies agree on RFC 2119, so it's worth ensuring its keywords are used in the spec.
If the W3C is chosen, Pubrules must be followed (there is a new W3C pubrules validator, accessible via npm and here). Two main formats/tools are currently popular, both supported by the W3C's tooling, as described here:
ReSpec and Bikeshed: Since W3C "pubrules" markup can prove repetitive and at times hard to get right, many tools have been developed to assist people in producing it — these are the two main ones. ReSpec documents are essentially valid HTML with some extra configuration that a JS library turns into the real thing; Bikeshed is a Python preprocessor that can apply to HTML but is more often used in Markdown mode.
N.B. Anolis, an older preprocessor that preceded ReSpec and Bikeshed, has been declared dead by its author.
The W3C is currently undergoing a process of modernisation. A new W3C project named Echidna (hosted on GitHub) supports automated publishing for both ReSpec and Bikeshed, though the latter has only recently been implemented, and it currently only works inside the W3C.
Using either of the above tools will allow the standard to be indexed in specref.org (the database of bibliographical references that W3C specifications rely upon).
Notes on each of these options:
ReSpec
ReSpec is in use at the W3C and actively maintained.
ReSpec apparently does support Markdown, but the feature is undocumented.
Spec Generator seems to be in common use at the W3C for ReSpec CI, and can be accessed outside the W3C (it used to be internal).
In terms of CI alternatives:
Echidna is a new official recommendation for ReSpec CI, however it currently only works inside the W3C.
Publican is a GitHub hook listener that generates specs written to be parsed by ReSpec or Bikeshed. It's hard to tell the status of the project, though, as it appears to be discontinued (Robin originally created ReSpec, and did lots of work for webplatform.org, but the project may have since changed direction). It's likely better to use actual Bikeshed for Bikeshed CI. It also runs in Docker (see gist).
There are various articles discussing the use of ReSpec with GitHub, and the publishing process.
Some say ReSpec is more accessible.
Examples of ReSpec specs: here, here, here, here
Examples of ReSpec CI: here (Echidna here, here)
Bikeshed
Bikeshed is in use at both the W3C and WHATWG, and is actively maintained.
Bikeshed fully supports Markdown (and soon CommonMark).
Bikeshed specs are compiled, so this works well for CI in terms of flagging syntax errors (its main advantage).
You need to set up Travis CI on the GitHub repo to publish changes, as the W3C has done.
Can use a watcher when editing locally to reduce dev cycle.
Can use a remotely hosted processor, but it won't work with all features (e.g. the separate biblio file).
There are a few examples of people migrating from ReSpec to Bikeshed, not the other way around.
HTML diff (not natively supported in Bikeshed, but found in ReSpec) can still be done by manually adapting ReSpec Section 5, as it's just a simple curl (see the sketch after this list).
Given the use of Bikeshed for the current HTML5 spec, it would appear to be gaining popularity.
Examples of Bikeshed specs: here
Examples of Bikeshed CI: here, here, here, here
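Regarding the HTML diff point above, here is a minimal sketch of scripting that "simple curl" in Python. The W3C diff service endpoint and its doc1/doc2 parameter names are written from memory and should be verified before use:

```python
# Fetch an HTML diff between a previously published draft and the current one.
# The service URL and parameter names below are assumptions to be checked.
import requests

DIFF_SERVICE = "https://services.w3.org/htmldiff"  # assumed endpoint

params = {
    "doc1": "https://example.org/spec/previous/index.html",  # old published draft (placeholder)
    "doc2": "https://example.org/spec/current/index.html",   # newly built draft (placeholder)
}
resp = requests.get(DIFF_SERVICE, params=params)
resp.raise_for_status()

with open("diff.html", "w", encoding="utf-8") as out:
    out.write(resp.text)
```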
Both ReSpec and Bikeshed have a feature to link GitHub issues to inline issues in the spec, which means they pair well with GitHub. All examples found just use commit logs for versioning.
In terms of community:
W3C community groups appear to be a good way to attract a broader audience, and they recommend using GitHub for "modern standards development" (and ReSpec, though that recommendation may be outdated).
The Web Incubator CG is an even more informal version of W3C Community Groups, which provides an existing community, a GitHub organisation, and a forum to discuss topics directly related to the "web platform" (meaning it is only useful for a "web platform feature that would be implemented in a browser or similar user agent"). They use both ReSpec and Bikeshed.

Related

Feed HAPI FHIR Package Cache manually? (for completeness and/or off-line use)

Because of data protection regulations we need to run the HAPI validator (validator_cli.jar) off-line, and we also need to complement the FHIR Package Cache by adding conformance resources that are not available online at all (they tend to get distributed via mounted courier, carrier pigeon and similar technologies).
Transplanting a well-filled package cache (e.g. %userprofile%\.fhir) from a connected computer to an offline computer takes care of all things that HAPI can download. From that point on HAPI finds these conformance resources without requiring any switches or other TLC.
Referencing directories with conformance resources that came in a push-cart can be done via the implementation guide switch (-ig /foo/bar). However, adding several dozen directories in this way is tedious and error-prone; it also makes it somewhat impractical to use the HAPI validator from the command line or in a context like Yannick Lagger's VSCode FHIR plugin.
Workarounds like creating a wrapper batch file with the umpteen -ig switches have limited reach; they do not work on HAPI as a whole, and they do not help with things like the VSCode plugin.
Lastly, for various reasons it is necessary to put the whole FHIR cache (minus the official HL7 packages) into the build process, with version control, test suites and so on. The reason is that the specifications for German health care are still very much in flux, only partly available online, incomplete, and owned by about half a dozen different organisations. Using a carefully constructed FHIR cache with controlled contents is the only option in this situation, especially if you consider that our automated billing system spits out invoices for up to 7 digits a pop.
Are there any tools that can assist with turning an -ig style tree with (predominantly) XML conformance resources into a package that can be shoved into the FHIR Package Cache?
HL7.org has some documentation about the NPM Package Format as far as it pertains to FHIR packages. This indicates, among other things, that all resources must be converted to JSON. Is there a reliable command line tool that can be used to automate at least this part of the process, even if it doesn't spit out a complete NPM package?
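For illustration, the packaging half of the process (laying out already-converted JSON resources under package/ and adding a manifest) can be sketched in a few lines of Python. The manifest field names follow the FHIR package documentation as far as I recall them and should be double-checked against hl7.org; the XML-to-JSON conversion itself is left out, since that is exactly the tool being asked about:

```python
# Build an NPM-style FHIR package (.tgz) from a directory of JSON conformance
# resources. Manifest fields are assumptions to verify against the FHIR
# package documentation; package name/version/paths are placeholders.
import json
import tarfile
from pathlib import Path

def build_fhir_package(json_dir: Path, name: str, version: str, out_file: Path) -> None:
    manifest = {
        "name": name,               # e.g. "de.example.billing" (placeholder)
        "version": version,         # e.g. "0.1.0"
        "fhirVersions": ["4.0.1"],  # assumed field name and value
        "type": "Conformance",      # assumed
    }
    # Write the manifest next to the resources so it gets packaged with them.
    (json_dir / "package.json").write_text(json.dumps(manifest, indent=2))

    with tarfile.open(out_file, "w:gz") as tgz:
        for resource in sorted(json_dir.glob("*.json")):
            # Everything lives under "package/" inside the tarball,
            # including the package.json manifest written above.
            tgz.add(resource, arcname=f"package/{resource.name}")

# Usage (hypothetical paths):
# build_fhir_package(Path("out/json"), "de.example.billing", "0.1.0",
#                    Path("de.example.billing-0.1.0.tgz"))
```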

What is the most useful way to represent a coding standard?

We currently keep our coding standard in an MS Word document under SVN.
As our standards grow / change, it's becoming an increasingly clunky beast to maintain.
Most entries currently consist of:
A succinct explanation of the guideline.
Reasoning behind the guideline.
Any extra notes.
Examples of what you should do.
Examples of what you should not do.
At the moment we use track changes within the document to keep track of pending suggestions / corrections, which are periodically reviewed and then accepted / rejected.
Is there a de-facto good way of tackling maintaining a document like this?
A repository at GitHub would serve well. See example: https://github.com/airbnb/javascript - you can have discussions, track changes, accept/reject pull requests, etc.
Also it would help if you use auto-formatting tools plugged into your build process like https://golang.org/cmd/gofmt/ or https://github.com/thoughtbot/hound
I suggest you use a plain text file (or HTML / some other markup file if you need some fancy formatting) under some version control system. We used Word's features for versioning, and I like what Git offers much, much more.
GITHUB: As an organization, if you maintain a private GitHub repository (not open source, but leveraging GitHub's strengths to maintain a repository and allow distributed work by individuals within the organization), you could keep your coding standards document there as Markdown, with reviews, pull requests, etc., as mentioned by Alex above.
REVIEWBOARD: If your organization does not have a private GitHub repository, but performs code reviews through ReviewBoard, you could use that instead. ReviewBoard allows peers to review code and keeps track of the different reviews: whether they have been addressed, whether the version is allowed to be shipped, and so on. It also has a feature for reviewing PDF documents, so this option gives you a repository for the coding standards document as well as a tracked way of reviewing it as a PDF.
Hope this helps. I'm sure there are many other ways in which companies handle this.

How do you manage the underlying codebase for a versioned API?

I've been reading up on versioning strategies for ReST APIs, and something none of them appear to address is how you manage the underlying codebase.
Let's say we're making a bunch of breaking changes to an API - for example, changing our Customer resource so that it returns separate forename and surname fields instead of a single name field. (For this example, I'll use the URL versioning solution since it's easy to understand the concepts involved, but the question is equally applicable to content negotiation or custom HTTP headers)
We now have an endpoint at http://api.mycompany.com/v1/customers/{id}, and another incompatible endpoint at http://api.mycompany.com/v2/customers/{id}. We are still releasing bugfixes and security updates to the v1 API, but new feature development is now all focusing on v2. How do we write, test and deploy changes to our API server? I can see at least two solutions:
Use a source control branch/tag for the v1 codebase. v1 and v2 are developed and deployed independently, with revision control merges used as necessary to apply the same bugfix to both versions - similar to how you'd manage codebases for native apps when developing a major new version whilst still supporting the previous version.
Make the codebase itself aware of the API versions, so you end up with a single codebase that includes both the v1 customer representation and the v2 customer representation. Treat versioning as part of your solution architecture instead of a deployment issue - probably using some combination of namespaces and routing to make sure requests are handled by the correct version.
The obvious advantage of the branch model is that it's trivial to delete old API versions - just stop deploying the appropriate branch/tag - but if you're running several versions, you could end up with a really convoluted branch structure and deployment pipeline. The "unified codebase" model avoids this problem, but (I think?) would make it much harder to remove deprecated resources and endpoints from the codebase when they're no longer required. I know this is probably subjective since there's unlikely to be a simple correct answer, but I'm curious to understand how organisations who maintain complex APIs across multiple versions are solving this problem.
I've used both of the strategies you mention. Of the two, I favor the second approach in use cases that support it, because it is simpler. That is, if the versioning needs are simple, go with a simpler software design (a minimal sketch follows the list below):
A low number of changes, low complexity changes, or low frequency change schedule
Changes that are largely orthogonal to the rest of the codebase: the public API can exist peacefully with the rest of the stack without requiring "excessive" (for whatever definition of that term you choose to adopt) branching in code
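For concreteness, here is a minimal sketch of that second, version-aware option, using Flask purely as an illustration (the framework choice and the backend lookup are assumptions layered on the question's forename/surname example):

```python
# Single codebase serving two API versions via routing. Each version gets its
# own blueprint and representation; retiring v1 later means deleting its
# blueprint and tests. Framework and data are illustrative only.
from flask import Flask, Blueprint, jsonify

def load_customer(customer_id):
    # Shared backend lookup; returns the internal model (hard-coded here).
    return {"id": customer_id, "forename": "Ada", "surname": "Lovelace"}

v1 = Blueprint("v1", __name__)
v2 = Blueprint("v2", __name__)

@v1.route("/customers/<customer_id>")
def customer_v1(customer_id):
    c = load_customer(customer_id)
    # v1 representation: a single "name" field.
    return jsonify({"id": c["id"], "name": f"{c['forename']} {c['surname']}"})

@v2.route("/customers/<customer_id>")
def customer_v2(customer_id):
    c = load_customer(customer_id)
    # v2 representation: separate forename/surname fields.
    return jsonify({"id": c["id"], "forename": c["forename"], "surname": c["surname"]})

app = Flask(__name__)
app.register_blueprint(v1, url_prefix="/v1")
app.register_blueprint(v2, url_prefix="/v2")
```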
I did not find it overly difficult to remove deprecated versions using this model:
Good test coverage meant that ripping out a retired API and the associated backing code ensured no (well, minimal) regressions
Good naming strategy (API-versioned package names, or somewhat uglier, API versions in method names) made it easy to locate the relevant code
Cross-cutting concerns are harder; modifications to core backend systems to support multiple APIs have to be very carefully weighed. At some point, the cost of versioning the backend (see the comment on "excessive" above) outweighs the benefit of a single codebase.
The first approach is certainly simpler from the standpoint of reducing conflict between co-existing versions, but the overhead of maintaining separate systems tended to outweigh the benefit of reducing version conflict. That said, it was dead simple to stand up a new public API stack and start iterating on a separate API branch. Of course, generational loss set in almost immediately, and the branches turned into a mess of merges, merge conflict resolutions, and other such fun.
A third approach is at the architectural layer: adopt a variant of the Facade pattern, and abstract your APIs into public-facing, versioned layers that talk to the appropriate Facade instance, which in turn talks to the backend via its own set of APIs. Your Facade (I used an Adapter in my previous project) becomes its own package, self-contained and testable, and allows you to migrate frontend APIs independently of the backend, and of each other.
This will work if your API versions tend to expose the same kinds of resources, but with different structural representations, as in your fullname/forename/surname example. It gets slightly harder if they start relying on different backend computations, as in, "My backend service has returned incorrectly calculated compound interest that has been exposed in public API v1. Our customers have already patched this incorrect behavior. Therefore, I cannot update that computation in the backend and have it apply until v2. Therefore we now need to fork our interest calculation code." Luckily, those tend to be infrequent: practically speaking, consumers of RESTful APIs favor accurate resource representations over bug-for-bug backwards compatibility, even amongst non-breaking changes on a theoretically idempotent GETted resource.
I'll be interested to hear your eventual decision.
For me the second approach is better. I have used it for SOAP web services and plan to use it for REST as well.
As you write, the codebase should be version-aware, but a compatibility layer can be used as a separate layer. In your example, the codebase can produce a resource representation (JSON or XML) with first and last name, but the compatibility layer will change it to have only name instead.
The codebase should implement only the latest version, let's say v3. The compatibility layer should convert requests and responses between the newest version, v3, and the supported versions, e.g. v1 and v2.
The compatibility layer can have a separate adapter for each supported version, and these can be connected as a chain.
For example:
Client v1 request: v1 adapt to v2 ---> v2 adapt to v3 ----> codebase
Client v2 request: v1 adapt to v2 (skip) ---> v2 adapt to v3 ----> codebase
For the responses, the adapters simply work in the opposite direction. If you are using Java EE, you can use the servlet filter chain as the adapter chain, for example.
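Here is a minimal, framework-free sketch of that adapter chain in Python (class and field names are illustrative, building on the question's name vs. forename/surname example):

```python
# Each adapter upgrades requests one step towards the latest version (v3) and
# downgrades responses one step back. The codebase only knows the v3 shape.
class V2ToV3Adapter:
    def adapt_request(self, req):
        return req   # nothing changes between v2 and v3 in this example

    def adapt_response(self, resp):
        return resp

class V1ToV2Adapter:
    def adapt_request(self, req):
        # v1 clients send a single "name"; split it for the newer code.
        forename, _, surname = req.pop("name").partition(" ")
        req["forename"], req["surname"] = forename, surname
        return req

    def adapt_response(self, resp):
        # Downgrade the newer shape back to what a v1 client expects.
        resp["name"] = f"{resp.pop('forename')} {resp.pop('surname')}"
        return resp

def handle(request, client_version, codebase):
    # Chain per requested version: v1 -> [V1->V2, V2->V3], v2 -> [V2->V3], v3 -> [].
    chain = {1: [V1ToV2Adapter(), V2ToV3Adapter()],
             2: [V2ToV3Adapter()],
             3: []}[client_version]
    for adapter in chain:
        request = adapter.adapt_request(request)
    response = codebase(request)            # always the latest (v3) implementation
    for adapter in reversed(chain):
        response = adapter.adapt_response(response)
    return response
```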
Removing one version is easy: delete the corresponding adapter and the test code.
Branching seems much better to me, and I used this approach in my case.
Yes, as you already mentioned, backporting bug fixes will require some effort, but supporting multiple versions under one source base (with routing and all the other machinery) will require at least the same effort, if not more, while making the system more complicated and monstrous, with different branches of logic inside (at some point you will definitely end up with a huge case() dispatching to version modules with duplicated code, or, even worse, with if (version == 2) then ... scattered around).
Also, don't forget that for regression purposes you still have to keep the tests branched.
Regarding versioning policy: I would keep at most two versions behind the current one, deprecating support for the old ones - that would give users some motivation to move.
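A tiny sketch of enforcing that "current minus two" policy at the routing layer (version numbers and messages are illustrative assumptions):

```python
# Reject retired versions and flag deprecated ones before dispatching a request.
CURRENT_VERSION = 5
OLDEST_SUPPORTED = CURRENT_VERSION - 2

def check_version(requested: int):
    """Return (allowed, warning) for a requested major API version."""
    if requested < OLDEST_SUPPORTED:
        return False, f"v{requested} has been retired; oldest supported is v{OLDEST_SUPPORTED}"
    if requested < CURRENT_VERSION:
        return True, f"v{requested} is deprecated; please migrate to v{CURRENT_VERSION}"
    return True, None

# e.g. check_version(2) -> (False, "v2 has been retired; ...")
#      check_version(4) -> (True,  "v4 is deprecated; ...")
```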
Usually, the introduction of a major API version that leaves you having to maintain multiple versions is an event which does not (or should not) occur very frequently. However, it cannot be avoided completely. I think it is overall a safe assumption that a major version, once introduced, would stay the latest version for a relatively long period of time. Based on this, I would prefer to achieve simplicity in the code at the expense of duplication, as it gives me better confidence of not breaking the previous version when I introduce changes in the latest one.

Wiki-like CMS for public viewing and private collaboration

I am interested in a content management system that supports a wiki-like approach to the presentation of information (partly because public familiarity with the interface is desirable), but a strictly private collaborative process.
I realise this notion is antithetical to what passes for the philosophy of wikis, but the information presented must be 100% reliable at all times. On the private side, at least two classes of user should be allowed: ordinary collaborators, who cannot change the publicly-viewable content without the approval of an editor (they can only discuss or propose modifications in private), and editors, who review and approve on a content-by-content basis.
Can someone experienced in this area advise whether a wiki can be configured in this way, or whether there are alternative (free) packages that can achieve this?
Again, the reason I am thinking along the wiki line is that it is very important that (often young) viewers be immediately comfortable with the interface, and that the collaborative back-end is robust. The wide range of embedding and citing capabilities is also important.
MediaWiki, which is used for Wikipedia, does support private wikis, provided you configure it properly. Please check the following links to get an idea of how:
http://www.mediawiki.org/wiki/Manual:Preventing_access
http://mythopoeic.org/mediawiki-private/
N.B.: I haven't tried it, but it should work.

What should I propose for a reusable code library organization?

My organization has begun slowly repurposing itself from a product-oriented business model to a more contract-oriented one over the last year or two. During the past year, I was shifted into the new contracting business to help put out fires and fill orders. While the year as a whole was profitable (and therefore, by at least one measure, successful), we had a couple of projects that really dinged our numbers for the year back around June.
I was talking with my manager before the Christmas holiday, and he mentioned that, while he doesn't like the term "post-mortem" (I have no idea what's wrong with the term, any business folks or managers out there know?), he did want to hold a meeting sometime mid-January where the entire contract group would review the year and try to figure out what went right, what went wrong, and what initiatives we can perform to try to improve profitability.
For various reasons (I'll go into more detail if it's requested), I believe that one thing our team, and indeed the organization as a whole, would benefit from is some form of organized code-sharing. The same things get done again and again by different people and they end up getting done (and broken) in different ways. I'd like to at least establish a repository where people can grab code that performs a certain task and include (or, realistically, copy/paste) that code in their own projects.
What should I propose as a workable common source repository for a team of at least 10-12 full-time devs, plus anywhere from 5-50 (very) part time developers who are temporarily loaned to the contract group for specialized work?
The answer required some cultural information for any chance at a reasonable answer, so I'll provide it here, along with some of my thoughts on the topic:
Developers will not be forced to use this repository. The barrier to entry must be as low as possible to encourage participation, or it will be ignored. Sadly, this means that anything which requires an additional software client to be installed and run will likely fail. ClickOnce deployment's about as close as we can get, and that's awfully iffy.
We are a risk-averse, Microsoft shop. I may be able to sell open-source solutions, but they'll be looked upon with suspicion. All devs have VSS; the corporate director has declared that VSTS is not viable going forward. If it isn't too difficult to set up and the license is liberal, I could still try to ninja a VSTS server into the lab.
Some of my fellow devs care about writing quality, reliable software, some don't. I'd like to protect any shared code written by those who care from those who don't. Common configuration management practices (like checking out code while it's being worked on) are completely ignored by at least a fifth of my colleagues on the contract team.
We're better at writing processes than following them. I will pretty much have to have some form of written process to be able to sell this to my manager. I believe it will have to be lightweight, flexible, and enforced by the tools to be remotely relevant because my manager is the only person who will ever read it.
Don't assume best practices. I would very much like to include things like mandatory code reviews to enforce use of static analysis tools (FxCop, StyleCop) on common code. This raises the bar, however, because no such practices are currently performed in a consistent manner.
I will be happy to provide any additional requested information. :)
EDIT: (Responding to questions)
Perhaps contracting isn't the correct term. We absolutely own our own code assets. A significant part of the business model on paper (though not, yet, in practice) is that we own the code/projects we write and we can re-sell them to other customers. Our projects typically take the form of adding some special functionality to one of the company's many existing software products.
From the sounds of it, you have an opportunity during the "post-mortem" to present some solutions. I would create a presentation outlining your ideas and present them at this meeting. Before that, I would recommend that you set up some solutions and demonstrate them during your presentation. Some things to do:
Evangelize component-based programming (a good read is Programming .NET Components by Juval Lowy). Advocate the DRY (Don't Repeat Yourself) principle of coding.
Set up a central, common location in your repository for all your reusable code libraries. This should hold the reference implementation of your reusable code library.
Make it easy for people to use your code libraries by providing project templates for common scenarios with the code libraries already baked in. This way your colleagues will have a consistent template to work from. You can leverage the VS.NET project template capabilities to do this - check out the following links: VSX Project System (VS.NET 2008), Code Project article on creating Project Templates.
Use a build automation tool like MSBuild (which is bundled with VS2005 and up) to copy over just the components needed for a particular project. Make this part of your build setup in the IDE (VS.NET 2005 and up have nifty ways to set up pre-compile and post-compile tasks using MSBuild).
I know there is resistance to open source solutions, but I would still recommend setting up and using a continuous integration system like CruiseControl.NET, so that you can leverage it to compile and test your projects on a regular basis from a central repository where the reusable code library is maintained. This way any changes to the code library can be quickly checked to make sure they do not break anything. It also helps surface version issues with the various projects.
If you can set this up on a machine and show it during your post-mortem as part of the steps that can be taken to improve, you should get better buy-in, since you are showing something already working that can be scaled up easily.
Hope this helps and best of luck with your evangelism :-)
I came across a set of frameworks recently called the Chuck Norris Frameworks - they are available on NuGet at http://nuget.org/packages/chucknorris . You should definitely check them out, as they have some nice templates for your ASP.NET projects. Also definitely check out NuGet itself.
Organize by topic; require unit tests (feature-level) for check-in/acceptance into the library; add a wiki to explain the what and why, and to make things searchable.
One question: You say this is a consulting group. What code assets do you have? I would think most of your teams' coding efforts would be owned by your clients as part of your work-for-hire contract. If you are going to do this you need to make absolutely certain that your contracts grant you rights to your employees' work.
Maven has solved code reuse in the Java community - you should go check it out.
I have a .NET developer who has devised something similar for our internal use with .NET assemblies. Because there's no comparable .NET Internet community, this tool will just access an internal repository in our corporate network. Otherwise it will work much the way Maven does.
Maven could really be used to manage .NET assemblies directly (we use it with our Flex .swf and .swc code modules); it's just that .NET folk would have to get over using a Java tool, and would probably have to write a Maven plugin to drive MSBuild.
First of all, for code organization check out the Microsoft Framework Design Guidelines at http://msdn.microsoft.com/en-us/library/ms229042.aspx and then create a central location in source control for the new framework that you're going to create. Set up some default namespaces and assemblies for cleaner separation, and make sure everyone gets a daily build.
Just an additional point, since we have "shared code" in my shop as well.
We found out this is very much a packaging issue:
Whatever code you are producing or tool you are using, what you should have is a common build tool able to package your sources into a "delivery component", with everything needed to actually execute the code, but also the documentation (compressed) and the source (compressed).
The main interest in having such a "delivery package unit" is to have as few files to deploy as possible, in order to ease the download of those units.
The build process can very well be managed by Maven or any other (ant/nant) tool you want.
When an audit team wants to examine all our projects, we just deploy to their workstation the same packages we deploy to a production machine, except that they will uncompress the source files and do their work.
Since our source files also include whatever files are needed to compile them (for instance, Eclipse project files), they can even re-compile those projects in their development environment.
That way:
Developers will not be forced to use this repository. The barrier to entry must be as low as possible to encourage participation, or it will be ignored: it is just a script to run in order to get the "delivery module" with everything they need in it (a Maven repository can be used for that too)
We are a risk-averse, Microsoft shop: you can use any repository you want
Some of my fellow devs care about writing quality, reliable software, some don't: this has nothing to do with the quality of the code written inside these packaged modules
We're better at writing processes than following them: the only process involved in this is the packaging process, and it can be fairly automated
Don't assume best practices: you are not forced to apply any kind of static code analysis before packaging executable and source files.