How to keep code and specs in sync? Are there good tools? [closed]

In my team we've got a great source control system and we have great specs. The problem I'd like to solve is how to keep the specs up to date with the code; over time the specs age and drift out of date.
The folks writing the specs tend to dislike source control, and the programmers tend to dislike SharePoint.
I'd love to hear what solutions others use. Is there a happy middle somewhere?

Nope. There's no happy middle. They have different audiences and different purposes.
Here's what I've learned as an architect and spec writer: Specifications have little long-term value. Get over it.
The specs, while nice for getting programming started, lose their value over time no matter what you do. The audience for a specification is a programmer who doesn't yet have much insight into the system. Those programmers morph into deeply knowledgeable programmers who no longer need the specs.
Parts of the specification -- overviews in particular -- may have some long-term value.
If the rest of the spec had value, the programmers would keep them up to date.
What works well is to embed comments in the code and use a tool to extract those comments and produce the current, live documentation. Java does this with Javadoc. Python does this with epydoc or Sphinx. C and C++ use Doxygen. There are a lot of choices: http://en.wikipedia.org/wiki/Comparison_of_documentation_generators
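The mechanics deserve a tiny illustration. Here is a minimal Python sketch (the module and function are invented for the example): the docstring lives next to the code, and stock tools render it, e.g. 'python -m pydoc geometry' from the standard library, or Sphinx's autodoc pulling the same text into HTML.

```python
# geometry.py - documentation embedded in the code itself.
# Render it with: python -m pydoc geometry   (or Sphinx autodoc)
import math

def ring_area(outer_radius, inner_radius):
    """Return the area of an annulus (a ring).

    :param outer_radius: radius of the outer circle
    :param inner_radius: radius of the inner, cut-out circle
    :raises ValueError: if inner_radius exceeds outer_radius
    """
    if inner_radius > outer_radius:
        raise ValueError("inner radius exceeds outer radius")
    return math.pi * (outer_radius ** 2 - inner_radius ** 2)
```

When the code changes, the documentation is at least physically adjacent to the change, which is the whole point of the approach.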
The overviews should be taken out of the original specs and placed into the code.
A final document should be extracted. This document can replace the specifications by using the spec overviews and the code details.
When major overhauls are required, there will be new specifications. There may be a need to revise existing specifications. The jumping-off point is the auto-generated documentation. The spec authors can start with those documents and add/change/delete to their heart's content.

I think a non-SharePoint wiki is good for keeping documentation up to date. Most non-technical people can understand how to use a wiki, and most programmers will be more than happy to use a good one. The wiki and document-management features in SharePoint are clunky and frustrating to use, in my opinion.
MediaWiki is a good choice.
I really like wikis because they are by far the lowest pain to adopt and keep up. They give you automatic version control, and are usually very intuitive for everyone to use. A lot of companies will want to use Word, Excel, or other types of docs for this, but getting everything online and accessible from a common interface is key.

As much as possible, the spec should be executable, via RSpec, doctest, or similar frameworks. The spec of the code should be documented with unit tests and with code that has well-named methods and variables.
Then the spec documentation (preferably in a wiki) should give you the higher level overview of things - and that won't change much or get out of sync quickly.
Such an approach will keep the spec and the code in sync, and the tests will fail the moment they drift apart.
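To make the "executable spec" idea concrete, here is a doctest sketch; the shipping rule and numbers are invented for illustration, not taken from any real spec. The docstring states the behaviour, and 'python -m doctest spec_demo.py' fails loudly when code and spec disagree.

```python
# spec_demo.py - a spec that verifies itself.
# Check it with: python -m doctest spec_demo.py -v
import math

def shipping_cost(weight_kg):
    """Spec: orders up to 1 kg ship for a flat 5.00;
    each started kg above the first adds 2.00.

    >>> shipping_cost(0.5)
    5.0
    >>> shipping_cost(1.0)
    5.0
    >>> shipping_cost(2.5)
    9.0
    """
    extra_kg = max(0, math.ceil(weight_kg - 1.0))
    return 5.0 + 2.0 * extra_kg
```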
That being said, on many projects the above is kind of pie-in-the-sky. In that case, S. Lott is right, get over it. They don't stay in sync. Look to the spec as the roadmap the developers were given, not a document of what they did.
If having a current spec is very important, then there should be specific time on the project allocated to write (or rewrite) the spec after the code is written. Then it will be accurate, until the code changes again.
An alternative to all of this is to keep the spec and the code under source control and have check-ins reviewed to ensure that the spec changed along with the code. It will slow down the development process, but if it is really that important ...
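One lightweight way to automate part of that review is a pre-commit hook. The sketch below encodes an invented policy (the src/ and docs/ paths are assumptions, not anything from the question): reject commits that touch code without touching the spec.

```python
#!/usr/bin/env python3
# .git/hooks/pre-commit - nag when code changes without a spec change.
import subprocess
import sys

staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only"],
    capture_output=True, text=True, check=True,
).stdout.split()

code_touched = any(f.startswith("src/") for f in staged)
spec_touched = any(f.startswith("docs/") for f in staged)

if code_touched and not spec_touched:
    # A non-zero exit aborts the commit; git commit --no-verify overrides.
    sys.exit("Commit touches src/ but not docs/: update the spec first.")
```

A hook like this can't judge whether the spec edit is meaningful, but it turns "remember to update the spec" into a mechanical default rather than a matter of discipline.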

One technique used to keep documentation in sync with code is literate programming. It keeps the code and the documentation in the same file and uses a preprocessor to generate the compilable code from the documentation. This is the technique Donald Knuth invented and uses, and he famously pays people money if they find bugs in his code.
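In Knuth's WEB/CWEB tooling, a "tangle" step extracts compilable code from the literate source and a "weave" step produces the typeset documentation. As a toy sketch of the tangle step only (not the behaviour of any real tool), one might pull fenced code blocks out of a literate Markdown file:

```python
# tangle.py - toy "tangle": keep code blocks, drop the prose.
# Usage: python tangle.py literate_doc.md > program.py
import sys

def tangle(lines):
    in_code, kept = False, []
    for line in lines:
        if line.strip().startswith("```"):
            in_code = not in_code      # fence markers toggle code mode
            continue
        if in_code:
            kept.append(line)
    return "".join(kept)

if __name__ == "__main__":
    with open(sys.argv[1]) as src:
        sys.stdout.write(tangle(src))
```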

I don't know of any particularly good solution for precisely what you're describing; generally, the only solutions that I've seen that really keep this sort of stuff in sync are tools that generate documentation from the source code (doxygen, Javadoc).

Related

What are the mechanisms and idioms of code re-use in OpenSCAD? [closed]

I designed a few simple 3D parts with OpenSCAD and I would like to move on to more complex parts now. As in most other programming languages, that would naturally include starting to re-use code that others have written before, such as functions for round/bevel edges, infill corners, Bézier curves, and common parts like screws and bolts.
How does that work in OpenSCAD? Specifically: What are the language features, idioms and officially recommended good practices of how code reuse is achieved in OpenSCAD?
(You are welcome to include pointers to good examples. But the question is about the mechanisms and good practices for code reuse in OpenSCAD, not about specific code that can be reused.)
Reusable code in OpenSCAD is organized in "libraries", similar to the package or library system in many other languages. The language mechanisms behind this are the two import statements: use <file.scad> makes the file's modules and functions available without executing its top-level geometry, while include <file.scad> behaves as if the file's contents were pasted in, top-level geometry included. A library is simply a .scad file placed on OpenSCAD's library path.
As with all code reuse, there is the problem of library scope overlap, where two libraries solve the same issue. This cannot be truly solved. But as a best practice, I would choose the single most appropriate library for each design project, and then stick with whatever that library has to offer. For example, don't depend on both BOSL and NopSCADlib because you like BOSL for everything except its threading functions, which you like better in NopSCADlib. For a small project, I like to work with only Round Anything, which is small and compact.
To help you get started with choosing a library appropriate for your project, I include a list of examples below that I think show good practices of reusable OpenSCAD code. I had a long look at OpenSCAD libraries recently and this is the result. Most of them come from the official OpenSCAD libraries page, which I found recommends only a few, but very good, libraries.
My favourite libraries, roughly in my personal order of desirability:
BOSL (source, docs) and BOSL2. (source, docs) "The Belfry OpenScad Library - A library of tools, shapes, and helpers to make OpenScad easier to use." Includes lots of modules and functions to make OpenSCAD code more readable. Overall, it's like MCAD in scope, but much better in execution. BOSL2 is a much extended second edition of BOSL, but as of 2020-11 the author says it is not yet ready for production use.
BOSL includes a very good Bézier library and a threading library.
Round Anything. (source, API, visual overview) "Round-Anything is primarily a set of OpenSCAD utilities that help with rounding parts, but it also embodies a robust approach to developing OpenSCAD parts."
NopSCADlib. (source, docs) A very large library. Use for any kind of machine design, as it contains nuts, bolts, washers, electronic components, belts etc.. "It contains lots of vitamins (the RepRap term for non-printed parts), some general purpose printed parts and some utilities. There are also Python scripts to generate Bills of Materials (BOMs), STL files for all the printed parts, DXF files for CNC routed parts in a project and a manual containing assembly instructions and exploded views by scraping markdown embedded in OpenSCAD comments, see scripts."
Also contains a 3D sweep function and a thread generation module.
BOLTS. (source, docs) "BOLTS is an Open Library of Technical Specifications." Contains all kinds of models for metal hardware standard parts (example).
dotSCAD. (source) Seems to be one of the best general-purpose libraries for OpenSCAD: huge, good quality, and well maintained. Mostly focused on math-art parts. For an overview of the designs made by the author of dotSCAD, using that library, see here. For background articles about the designs made with dotSCAD, see here.
MCAD. (source, docs) This is so far the only library shipped with every installation of OpenSCAD, so would qualify as its standard library. No need to tell users of your designs to install anything when you only include MCAD.
Note that currently (as of 2020-11) a large rework is being done to MCAD, with the effect that the dev branch has nearly twice as many commits as the master branch. You'll find many goodies there, but of course users of your design would then have to install the dev branch first.
The problem with MCAD, especially the current master branch, is that I don't find it useful. So far it's rather a non-integrated hotchpotch of contributions from many authors. But since it's the standard library, we should give it a chance. When I have something generally useful, I'll try to contribute it there.
Revolve2. (source, announcement) In terms of speed, this is hands-down the best thread generation library I could find. I did not yet test the threading features in BOSL and NopSCADlib, though.
3D sweep demos. (source) But note that this is rather demo code than a library; first try the sweep module in NopSCADlib.
scad-utils. (source)
Relativity. (source) A library to arrange objects relative to each other. Also includes a CSS-like styling language for objects. Seemingly no longer in active development, but still great to learn really advanced OpenSCAD techniques.
After some research, it seems that the BOSL2 library is the most complete:
BOSL2 Library documentation
BOSL2 Library
[edit 2022: update BOSL to BOSL2]

Best way to organize bioinformatics projects? [closed]

I come from a computer science background, but I am now doing genomics.
My projects include a lot of bioinformatics, typically: aligning sequences, comparing overlap between sequences and various genome annotation features, across different classes of biological samples, time-course data, microarray and high-throughput sequencing ("next-generation" sequencing, though it's actually the current generation) data, that kind of stuff.
The workflow with these kinds of analyses is quite different from what I experienced during my computer science studies: no UML and thoughtfully designed objects shining with sublime elegance, no version management, no proper documentation (often no documentation at all), no software engineering at all.
Instead, what everyone does in this field is hack out one Perl script or AWK one-liner after another, usually for one-time use.
I think the reason is that the input data and formats change so fast, the questions need to be answered so soon (deadlines!), that there seems to be no time for project organization.
One example to illustrate this: suppose you want to write a raytracer. You would probably put a lot of effort into the software engineering first, then program it, and finally deliver it in some highly optimized form, because you would use the raytracer countless times with different input data and would make changes to the source code over years to come. Good software engineering is paramount when coding a serious raytracer from scratch. But imagine you already know that you will use your raytracer to render one single picture, ever, and that picture is of a reflecting sphere over a checkered floor. In that case you would just hack it together somehow. Bioinformatics is only ever like the latter case.
You end up with whole directory trees holding the same information in different formats, until you reach the one particular format necessary for the next step, and dozens of files with names like "tmp_SNP_cancer_34521_unique_IDs_not_Chimp.csv" where one day later you don't have the slightest idea why you created this file and what exactly it is.
For a while I was using MySQL which helped, but now the speed in which new data is generated and changes formats is such that it is not possible to do proper database design.
I am aware of one single publication that deals with these issues (Noble, W. S. (2009). A quick guide to organizing computational biology projects. PLoS Comput Biol 5(7): e1000424). The author sums up the goal quite nicely:
The core guiding principle is simple: Someone unfamiliar with your project should be able to look at your computer files and understand in detail what you did and why.
Well, that's what I want, too! But I am following the same practices as that author already, and I feel it is absolutely insufficient.
Documenting each and every command you issue in Bash, commenting it with why exactly you did it, etc., is just tedious and error-prone. The steps during the workflow are just too fine-grained. Even if you do it, it can be still an extremely tedious task to figure out what each file was for, and at which point a particular workflow was interrupted, and for what reason, and where you continued.
(I am not using the word "workflow" in the sense of Taverna; by workflow I just mean the steps, commands and programs you choose to execute to reach a particular goal).
How do you organize your bioinformatics projects?
I'm a software specialist embedded in a team of research scientists, though in the earth sciences, not the life sciences. A lot of what you write is familiar to me.
One thing to bear in mind is that much of what you have learned in your studies is about engineering software for continued use. As you have observed, a lot of what research scientists do is about one-off use, and the engineered approach is not suitable. If you want to implement some aspects of good software engineering, you are going to have to pick your battles carefully.
Before you start fighting any battles, you are going to have to critically examine your own ideas to ensure that what you learned in school about general-purpose software engineering is valid for your current situation. Don't assume that it is.
In my case the first battle I picked was the implementation of source code control. It wasn't hard to find examples of all the things that go wrong when you don't have version control in place:
some users had dozens of directories each with different versions of the 'same' code, and only the haziest idea of what most of them did that was unique, or why they were there;
some users had lost useful modifications by overwriting them and not being able to remember what they had done;
it was easy to find situations where people were working on what should have been the same program but were in fact developing incompatibly in different directions;
etc etc etc
Once I had gathered the information -- and make sure you keep good notes about who said what and what it cost them -- it became relatively easy to paint a picture of a better world with source code control.
Next, well, next you have to choose your own next battle. But one of the seeds of doubt you have to sow in your scientist-colleagues' minds is 'reproducibility'. Scientific experiments are not valid if they are not reproducible; if the experiments involve software (and they always do), then careful software engineering is essential for reproducibility. A lot of this is about data provenance, but that's a topic for another day.
Part of the issue here is the distinction between documentation for software vs documentation for publication.
For software development (and research-plan design), the important documentation is structural and intentional: modeling the data, the reasons why you are doing something, and so on. I strongly recommend using the skills you've learned in CS for documenting your research plan. Having a plan for what you want to do gives you a lot of freedom to multi-task while long analyses are running.
On the other hand, a lot of bioinformatics work is analysis. Here, you need to treat documentation like a lab notebook, not necessarily like a project plan. You want to document what you did, perhaps with a brief comment on why (e.g. when you are troubleshooting data), and what the outputs and results are.
What I do is fairly simple.
First, I start in a directory and create a git repo. Then, whenever I change a file, I commit it to the repo. As much as possible, I try to name data outputs in a way that lets me drop them into my .gitignore files.
Then, as much as possible, I work in a single terminal session per project, and when I hit a pause point (like when I've sent a set of jobs up to the grid), I run 'history | cut -c 8-' and paste the output into a lab-notes file. I then edit the file to add comments on what I did and, importantly, change the git add/commit lines to git checkout lines (I have a script that does this based on the commit messages). As long as I start in the right directory, and my external data doesn't go away, this means that I can recreate the entire process later.
For any even slightly complex processing task, I write a script to do it, so that my notebook looks as clean as possible. To a first approximation, a helper script can be viewed as a subroutine in a larger project, and should be documented internally to at least that level.
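As a minimal sketch of that lab-notebook discipline (the file names, log format, and example command below are my inventions, not the answerer's actual script): wrap each analysis step so the command, a timestamp, and the current git commit are appended to a notes file before it runs.

```python
# notebook.py - run one analysis step and log it for reproducibility.
# Usage (the command is illustrative): python notebook.py sort -u input.csv
import datetime
import subprocess
import sys

def git_head():
    """Short hash of the current commit, or a placeholder outside a repo."""
    try:
        res = subprocess.run(["git", "rev-parse", "--short", "HEAD"],
                             capture_output=True, text=True, check=True)
        return res.stdout.strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "no-git"

cmd = sys.argv[1:]
stamp = datetime.datetime.now().isoformat(timespec="seconds")
with open("lab_notes.txt", "a") as log:
    log.write(f"{stamp}  [{git_head()}]  {' '.join(cmd)}\n")

subprocess.run(cmd, check=True)  # the actual analysis step
```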
Your question is about project management. Bad project management is not unique to bioinformatics; I find it hard to believe that an entire industry is committed to bad software design.
About the pressure... again, there are others in this world who face very challenging deadlines, and they still use good software designs.
In many cases, following a good software design does not slow a project down, and may even speed up its development and maintenance (at least in the long run).
Now to your real question... You can offer your manager to redesign small parts of the code that have no influence on the rest of the code, as a proof of concept (POC). But it's really hard to stop a moving truck, so don't get upset if the response is "we've worked this way for years; we know what we are doing, and we don't need a child to teach us how to do our work". Learn to work like the rest, and once you have gained their trust, you can do your own thing once in a while (I hope you will have the time and the devotion to do the right thing).
Good luck.

How do you collaboratively write specs? [closed]

I am working with a small team (2 others) of developers that are geographically dispersed, and I'm looking for good ways for us to collaborate on specs... We're thinking we might use Google Docs to write the spec in so we can all have access to modify it in a central location.
What have you done? What good ideas do you have?
If you have an intranet or VPN, I would actually consider installing and using a small Wiki for these specs.
Compared to Google Docs you get:
Much better versioning and change tracking (IMHO)
Much easier to start new documents for subsections
Actual markup rather than WYSIWYG (a matter of taste; I prefer LaTeX to Word).
Possible to attach a variety of other file types
Very easy to backup
Very easy to create an offline version
You don't have to worry about storing sensitive materials elsewhere.
The disadvantage is that it is not WYSIWYG, which may or may not be an issue to you.
Of course, you can pick a Wiki implementation that supports a better editor, and possibly even a synchronous collaboration one.
Google Wave - exactly what it's meant for - collaboration
IMHO, a word processor is the wrong tool for a programmer. A spec should be written in a plain text editor, and utilize lightweight markup such as reStructuredText, AsciiDoc etc.
The benefits of such an approach are:
There are excellent tools to manage plain text, that are already in the hands of programmers (VCS, automated build systems, diff, patch, programming editors, grep, etc.)
A markup language allows for expressing intent rather than formatting.
With that in mind, a wiki seems to be the obvious choice.
Personally my tool chain of choice is:
reStructuredText as the markup language.
Trac as a Wiki
Firefox + the It's All Text! extension
Emacs + rst-mode
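As a small illustration of why plain-text markup plays well with programmer tooling: a reStructuredText spec can be rendered by the same docutils library that underlies Sphinx, so HTML output becomes just one more automated build step. A sketch, with an invented spec as input (requires the docutils package):

```python
# render_spec.py - turn a reStructuredText spec into HTML.
# Requires: pip install docutils
from docutils.core import publish_string

SPEC = """\
Payment Service Spec
====================

Requirements
------------
* Retry failed charges up to 3 times.
* Log every attempt with a timestamp.
"""

html = publish_string(source=SPEC, writer_name="html")
print(html.decode("utf-8"))
```

The same source file diffs cleanly, merges in version control, and greps like any other code.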
The choice of technology is one issue, and Google Docs is a good choice IMHO. But the real challenge is how to manage the process, e.g. how to divide the tasks.
My suggestion is to first make sure that the platform and all related technologies are decided upon as best as feasible. Then compose a thorough table of contents. A well-designed TOC will allow you to divide tasks properly and not "step" on each other's work. From then on, you each flesh out your assigned sections and review each other's work.
In effect, each TOC subsection becomes an atomic unit of work that can be assigned and maintained by an individual who is also accountable for said section(s).
Good luck!
I think it depends on:
How heavily into writing the specs you all are
If you're likely writing at the same time
Whether you intend to publish the specs.
Google Docs is nice and easy to get started with. It's also great that you can now export folders all at once. Still, for something that's going to be published to the web, a wiki or a general CMS is a better presentation vehicle. A wiki will also integrate with your existing site.
If you've got small specs, primarily written by one person then use whatever tool is available where you're hosting the project code or website. If you're not likely to be editing at the same time then a wiki is good.
I've done the wiki thing, the passed document thing and the Google Docs thing.
The wiki thing has a low starting effort and lasts a pretty long time. At a certain size it does get to be a pain.
The passed-document thing (write, email, edit, email, etc.) only works while one person is starting everything up. As soon as there are even minor edits, it sucks.
The Google Docs thing is fine until you have several docs and several editors or want to publish it online.
hth
This isn't programming related, but I've personally used Google Docs to write shared documents and found it easy to use.
I would suggest enabling Google Gears, however, in the event that the Google servers go down momentarily or an internet connection isn't available.
For writing specs collaboratively, you could try Gingko.
It's a card-tree editor, which means it's a mix between index cards and an outliner, with real-time collaboration and full Markdown support (as well as basic LaTeX).
We are still missing several features (version history, comments, etc), but for some the benefits of having everything in a tree structure outweigh these drawbacks.
Writing specs with it is great, because you can create a card for each user story, and drill into it as much as you like (and organize them into categories if you'd like).
http://gingkoapp.com

Software design period... what do other developers do? [closed]

I'm a new software architect/lead, coming up with the software design for a team of developers. I'm producing the requirements spec, interface header files, Visio software design docs, build plan, etc.
My question is: what does the rest of the team do during this period? I'm certainly engaging them in the design, but we don't need the whole team actively working on what I'm doing all the time.
Are there any good books for a new software architect?
Generally the various stages overlap, so there will be some coding during design, etc. There are a lot of things to do besides that: reviewing unfamiliar technology that is going to be used, setting up the source control system, reviewing business requirements, and reviewing your documents to make sure they make sense and are clear. There is a lot of other work to be done besides programming.
What a software team does while the lead does the design is very different from company to company. At my company, we try to work on the design while the developers are finalizing other projects or fixing bugs.
Another approach that I've taken when starting a whole new project is to get the developers to work on the design as well; people with a good understanding of the requirements can help you design smaller parts of the system and write the specs for them. Others can work on mockups and frameworks. This worked rather well for the small software team I led in a previous job (4 developers in total).
I also found it useful to have other team members research parts I'm unsure of (or even validate that things I think should work will indeed work), such as the items below (a sketch of the last one follows the list):
Investigating whether an external API provides the features we need
Writing a small proof of concept or technology demonstrator
Creating an API mockup (header file, interface, or REST endpoint) to investigate whether the API looks useful.
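For that last item, here is a minimal sketch of what such a mockup might look like in Python; the service and method names are hypothetical, made up purely to show the shape of the exercise:

```python
# gateway_mockup.py - hypothetical interface mockup, written to judge
# whether the proposed API is pleasant to call before committing to it.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ChargeResult:
    success: bool
    transaction_id: str

class PaymentGateway(ABC):
    """Proposed interface; a real implementation would wrap the vendor API."""

    @abstractmethod
    def charge(self, account_id: str, amount_cents: int) -> ChargeResult:
        ...

class FakeGateway(PaymentGateway):
    """In-memory stand-in so calling code can be prototyped immediately."""

    def charge(self, account_id: str, amount_cents: int) -> ChargeResult:
        return ChargeResult(success=True, transaction_id=f"fake-{account_id}")

if __name__ == "__main__":
    gateway: PaymentGateway = FakeGateway()
    print(gateway.charge("acct-42", 1999))
```

Writing even this much tends to surface awkward parameter choices long before any real implementation exists.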
As others have said, you typically want a ramp-up period during the first part of the project and through the first iteration. You're planning on building this iteratively, aren't you? Start with a core team (no more than 3-4 people, since you're going to need to communicate heavily with each other) to help you explore the requirements, get a basic data model in place, identify and set up any frameworks, and identify and set up build and test tools. Some coding activities typically take place in the design phase: UI mockups, and run-ahead prototypes of technically sensitive areas (whatever risks you have should be mitigated by explorative coding, be they new technologies, undocumented interfaces to integrated systems, or unstable requirements).
But coders in the design phase should help with the design, in order to get their buy-in and to help train up the rest of the team during the first iterations. Your role during this phase is to ensure that the major nonfunctional requirements are known, prioritized, met by the design, and testable. You should also collaborate with the project lead or whoever else is responsible for staffing and financing in order to sketch out the iterations and the staffing levels needed. Ensure the solution can be built iteratively, and aim at implementing only a basic structure during the first iteration, both to build confidence and to eliminate risks. (Sometimes you can push major risks to the second iteration, and focus the first on confidence and team building.)
And of course, be sure you are not designing every detail. You should be able to use every design artifact in the next iteration (and elaborate them later as needed). Since design decisions are expensive to change, try to postpone them. However, some influence the entire solution (for instance, the data model, or your approach to security) and absolutely must be at least outlined up front. This isn't waterfall. This is just not closing your eyes and hoping a viable architecture will emerge by magic.
But design proceeds throughout the iterations. It's just that you do less of it as you go along, and with lesser impact on the solution (unless you're unlucky... and then things get expensive).
Stop doing the useless things you do and just start coding with them! ;)
If there is no overlap with another ongoing project, getting them involved as you're doing is great, maybe push it a little further by having them prototype and present the plus and minus of alternative technologies (APIs, frameworks, libraries, etc...) that your project could use.
As a new software architect, I can recommend some books that helped me understand the role of the architect (but of course not to master it):
Fundamentals of Software Architecture: An Engineering Approach:
This book gives a good, modern overview of software architecture and its many aspects; a good place to start if you are a beginner or want to broaden your knowledge.
Software Architecture in Practice:
Explains what software architecture is, why it's important, and how to design, instantiate, analyze, evolve, and manage it in disciplined and effective ways.
Software Architect's Handbook:
This book takes you through all the important concepts, right from design principles to different considerations at various stages of your career in software architecture. It begins by covering the fundamentals, benefits, and purpose of software architecture.
Clean Architecture: A Craftsman's Guide to Software Structure and Design:
Learn what software architects need to achieve and how to achieve it, master essential software design principles and see how designs and architectures go wrong.
Software Architecture: The Hard Parts:
An advanced architecture book; you'll learn how to think critically about the trade-offs involved in distributed architectures.
Usually there's another project they can work on, but...
I have my team review the project specs/requirements and put together a basic/preliminary structure to get them already thinking through the application and working out specific questions.
When we convene at the table to discuss the plan they already have an idea of what the project is and requires and in some cases, they present questions I may have missed or overlooked.
Although it's too late now, a good way to approach it is to move the architect over before his current project has ended. Start freeing him up at around 25%, then work up to 75-100% on the new project a month or two before it starts (maybe more, depending on how much analysis and customer interaction there is).
On a trivial project (let's say 2 man-years) it might not be necessary, but anything bigger than that can end up in chaos if somebody doesn't at least get the analysis right before everybody jumps aboard.
If your team does not have any other projects to work on, ask experienced programmers on your team to come up with a prototype so that you can create a requirements doc according to the needs of the client.
Also, programmers new to the technologies being used could use this time to familiarize themselves with the stack on which your team is going to develop the project.
architect != designer
Chances are that all of your developers can help with the design; let them. Architects don't have to be "lone wolves" and do everything themselves. You lay out the guidelines and the principles and the scaffolding, rough in the wiring, and let your developers flesh out the details - whether it is drawing Visio diagrams or building prototypes to mitigate unknowns/risks.
Migrate towards Agile/XP and away from waterfall methods, and you'll find the team a lot more help.
When making the general design, it's very handy to have programmers create proof-of-concepts. Do that especially with parts of the system that could end up being show stoppers if they don't work in the way you plan to do them, so you can think of alternatives, and adjust the design.
That's going to help you to make the right design-decisions before moving entirely into a certain direction.
Just doing a design, and then moving on and start coding is a sure way to mess up a project. You won't realize that your design is not feasible (or just plain sucks) until you're half-way coding, and by then it's too late to make radical changes.
You'll waste time mitigating non-existent problems during the design, and you'll run into unforeseen problems during implementation.

Suggest some good MVC frameworks in Perl [closed]

Can you suggest some good MVC frameworks for Perl? The one I am aware of is Catalyst.
The need is to be able to expose services on the Perl infrastructure which can be called by Java/.NET applications seamlessly.
I'll tell you right now that Catalyst has by far the best reputation amongst Perl developers in terms of a rapid application development MVC framework.
In terms of "pure" MVC I'm not sure there are even that many "mature" or at least production-ready alternatives.
If Catalyst doesn't seem right to you, then you could build upon the lightweight framework CGI::Application to suit your needs or take a look at some of the lesser known MVC frameworks like PageKit and Maypole.
Since this old thread popped up, I will mention two exciting new additions to the Perl MVC world:
Dancer (CPAN), which is heavily influenced by Ruby's Sinatra and known for being very lightweight
Mojolicious (CPAN), which is written by the original developer of Catalyst to apply what he learned there; it has no non-core dependencies and very modern built-ins (HTML5/CSS3/WebSockets, JSON/XML parsers, its own user agent and templating engine)
(N.B. I have used Mojolicious more than Dancer; if Dancer has some of the features I credited only to Mojolicious, I apologize in advance.)
Another alternative besides the ones already mentioned is Continuity; however, it is (as the name is meant to imply) continuation-based rather than MVC in the typical sense. Still, it’s worth mentioning because it is one of the better Perl web frameworks.
That said, I like Catalyst much better than any of the alternatives. And it’s still getting better all the time! The downside of that is that current preferred coding approaches continue to evolve at a fairly hurried clip – but for the last couple of versions, there has been strong emphasis on API compatibility, so the burden is now mostly mental rather than administrative. The upcoming port of the internals to Moose in particular is poised to provide some excellent benefits.
But the biggest argument in favour of Catalyst, IMO, is the Chained dispatch type. I have seen nothing like it in all of web-framework-dom, and it is a most excellent tool to keep your code as DRY as possible. This couples well with another great thing that Catalyst provides, namely uri_for – a method which takes a controller and a bunch of arguments and then constructs a URI that would dispatch to that place, which it returns. Together, these facilities mean that you can structure your URI space any way you deem right, yet at the same time can structure your controllers to avoid duplication of logic, and keep templates independent of the URI structure.
It’s just brilliant.
Seconding comments made by others: Catalyst (which more or less forked from Maypole) is far and away the most complete and robust of them. There is a book by Jonathan Rockway that will certainly help you come to grips with it.
In addition to the 'Chained' dispatch type, the :Regex (and :LocalRegex) dispatch methods provide enormous flexibility. The latest app we've built here supports a lot of disparate-looking URLs through just a handful of subs using :LocalRegex.
I also particularly like the fact that you are not limited to a particular templating language or database. The mailing list (and the book) both have a preference for Template::Toolkit (as do I), and DBIx::Class (we continue to use Class::DBI), but you can use pretty much anything you like. Catalyst is marvelously agnostic that way.
Don't be put off by the fact Catalyst seems to require half of CPAN as dependencies. Once you get it up and running, it is a well-oiled machine. It has reached a level of maturity now that once you come to grips with it, you find it 'fades into the background'. You spend your time solving business needs, not fighting with the tools you use.
It does what it says on the tin. Catalyst++
I've been playing with Squatting for the last few days and I have to say it looks very promising and has been fun to use.
It's a micro web framework (or web microframework ;-) and is heavily influenced by Camping, which is written in Ruby.
NB. Squatting (and Camping) don't have model components baked into the framework. Here are the author's comments on models: "Models? The whole world is your model. ;-) I've always been ambivalent about defining policy here. Use whatever works for you."
There is also CGI::Application, which is more like the guts of a framework. It helps a person write basic CGIs and glue bits onto them to make things as custom as they like. So you can have it use hardly any modules, or just about every one under the sun.
Catalyst is the way to go. There is also Jifty, but (last time I looked), it had terrible documentation.
If you are already aware of Catalyst, then I recommend focusing on it. It is mature, well-documented, and has a very large user-base, community, and collection of plug-ins.
For your problem I would take a look at Jifty::Plugin::REST, which allows access to models and actions using various formats.
Let me just say that Jifty doesn't have terrible documentation. However, most of the included documentation is API documentation; there is also a very low-noise mailing list with useful tips and links to applications.
Wiki at http://jifty.org/ is another resource which has useful bits.
If your goal is to make a video store (my favorite benchmark for 4GLs and CRUD frameworks) in an afternoon, it's really worth a look!
Another option is Gantry; when used in conjunction with the BigTop module, it can reduce the time it takes to build simple CRUD sites.
There is also ClearPress, which I can recommend as a useful database-backed application framework. It needs fewer dependencies than Catalyst. We have written a few large applications with it, and I run a badminton ladder website using it.
I have built some applications with Kelp; it's easy to learn and very helpful.