I'm starting to work with Moose/Perl and I'm searching for a UML tool to create diagrams representing the Moose OO system. I have already worked with Astah (formerly Jude), but it is designed for the Java OO system. Can someone recommend another UML tool that works with Moose/Perl?
My two cents:
I have written an extension (a .xom file) for Sybase PowerDesigner. This tool has a powerful metaclass editor that you can script with VBScript and a proprietary language, GTL. It also has large collections of customizable metaclasses and templates.
My PowerDesigner extension is quite hacky and contains some stale code that I didn't clean up. Therefore I haven't published anything; it works for me, and only for me. Some lessons learned, off the top of my head:
I wanted to do UML modeling and code generation; do you want to do that, too?
Moose is quite attribute-heavy, so a UML approach is worthwhile in this respect (see the sketch after this list).
I didn't use roles much, but I tried to map them to interfaces anyway.
I am not satisfied with how relationships are modeled. There are lots of edge cases and "impedance mismatches" between UML concepts and Moose/Perl concepts. (BTW, what's the Moose equivalent of an "association class"?)
Native traits are a nice feature in Moose, but I haven't succeeded in creating a GUI for editing them.
I also hurt my brain designing a comprehensible GUI for type coercions (I often need to check and coerce date values).
Static attributes are an important feature in UML but less important in Moose. The problem is that there is no "static" keyword in Perl/Moose; instead you have to declare "use MooseX::ClassAttribute" (or whatever it is called), and do it only once per class, but in the right place (order matters).
The generated code is impossible to pretty-print, so I usually send it through perltidy right away to bring it to a "canonical" form, which makes diffing and versioning/committing to SVN easier.
When a class is generated, the compactness of a Moose class is gone: you'll have SVN properties, header comments, lots of "use" and "use lib" statements, lots of POD, comment lines after each sub declaration documenting parameter passing, and the obligatory footer ("no Moose; ...").
Unfortunately, reverse engineering Perl code (to update a UML model from the code) is impossible. Thus, at some point I must stop working in the UML tool and start editing the Perl code directly, abandoning the model. Merging those changes back into the model must be done manually at some later time; it is very time-consuming and requires care.
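To illustrate the attribute-heavy style and the class-attribute caveat from the list above, here is a minimal sketch of the kind of Moose class such a model maps to (the package and attribute names are hypothetical):

    package My::Model::Invoice;
    use Moose;
    use MooseX::ClassAttribute;   # needed once per class for "static" attributes

    # Instance attributes: each maps naturally to a UML attribute.
    has 'number' => (is => 'ro', isa => 'Str', required => 1);
    has 'total'  => (is => 'rw', isa => 'Num', default  => 0);

    # The closest thing to a UML static attribute.
    class_has 'instance_count' => (is => 'rw', isa => 'Int', default => 0);

    no Moose;
    __PACKAGE__->meta->make_immutable;

    1;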
Advantages:
Generating properly POD-documented code is the main productivity gain you'd get from all this UML modeling, IMHO. Good for "enterprisey" programming environments.
You can auto-generate *.t files with test cases (or test-case stubs); a sketch follows after this list. It requires some thinking to design smart tests and to avoid the problem Dave Rolsky has written about in his blog post: "add(ing) absolutely nothing that isn't already tested by Moose itself".
You can define custom checks in the model, such as "check whether builder methods exist for all declared attributes; if they don't, create a stub (or ask me what to do)".
Easy mapping of nightmarish database tables to Moose classes. (I have to work with lots of multi-column tables that cannot be touched.) Build your own graphical ORM mapper!
There might be even more advantages.
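A generated test stub might look like the following minimal sketch; per the caveat above, it checks only what the model adds, not Moose itself (the class name is hypothetical, continuing the example above):

    use strict;
    use warnings;
    use Test::More;

    use_ok('My::Model::Invoice');

    # Check only model-level promises, not Moose machinery.
    can_ok('My::Model::Invoice', qw(number total));

    my $invoice = My::Model::Invoice->new(number => 'INV-001');
    is($invoice->total, 0, 'total defaults to 0');

    done_testing();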
I haven't seen a UML tool yet for Moose. It wouldn't be that difficult to build one, just a little labor-intensive. Mostly it would require crawling the meta-class tree for a given class and outputting the proper UML markup for each step. If you are interested in building something like this, you can stop by #moose on irc.perl.org. I'm sure someone can point you in the right direction.
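A first cut at that crawl could use the Moose meta-object protocol directly; here is a minimal sketch that prints PlantUML-style class members (the example class is hypothetical, and the output format is an arbitrary choice):

    use strict;
    use warnings;
    use My::Model::Invoice;    # any loaded Moose class will do

    my $meta = My::Model::Invoice->meta;

    print 'class ', $meta->name, " {\n";

    # Attributes, including inherited ones.
    for my $attr ($meta->get_all_attributes) {
        my $type = $attr->has_type_constraint ? $attr->type_constraint->name : 'Any';
        printf "  +%s : %s\n", $attr->name, $type;
    }

    # Locally defined methods.
    print "  +$_()\n" for $meta->get_method_list;

    print "}\n";

    # Superclasses become generalization arrows.
    print "$_ <|-- ", $meta->name, "\n" for $meta->superclasses;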
Just stumbled across your question while looking for "UML Moose perl".
One of the other links thrown up was to a utility called umlclass.pl that looks quite interesting.
I'll post a follow-up after seeing how well it works.
I designed a few simple 3D parts with OpenSCAD and I would like to move on to more complex parts now. As in most other programming languages, that naturally includes starting to reuse code that others have written before, such as functions for round/bevel edges, infill corners, Bézier curves, and common parts like screws and bolts.
How does that work in OpenSCAD? Specifically: What are the language features, idioms and officially recommended good practices of how code reuse is achieved in OpenSCAD?
(You are welcome to include pointers to good examples. But the question is about the mechanisms and good practices for code reuse in OpenSCAD, not about specific code that can be reused.)
Reusable code in OpenSCAD is organized in "libraries", similar to the package or library system in many other languages.
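The language features that make libraries work are include and use; here is a minimal sketch of both (the file paths are examples and depend on where you install the libraries):

    // include <...> behaves as if the file were pasted here: modules,
    // functions, AND any top-level geometry are all brought in.
    include <NopSCADlib/lib.scad>

    // use <...> imports only the file's modules and functions;
    // top-level geometry in the library file is not rendered.
    use <Round-Anything/polyround.scad>

    // A module or function from a used library is then called as if local:
    polygon(polyRound([[0, 0, 2], [20, 0, 2], [20, 10, 2], [0, 10, 2]], 20));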
As with all code reuse, there is the problem of library scope overlap, where two libraries solve the same issue. This cannot be truly solved. But as a best practice, I would choose the single most appropriate library for each design project, and then stick with whatever that library has to offer. For example, don't depend on both BOSL and NopSCADlib because you like BOSL for everything except its threading functions, which you like better in NopSCADlib. For a small project, I like to work with only Round Anything, which is small and compact.
To help you get started with choosing a library appropriate for your project, I include a list of examples below that I think show good practices of reusable OpenSCAD code. I had a long look at OpenSCAD libraries recently, and this is the result. Most of them come from the official OpenSCAD libraries page, which I found to recommend only a few, but very good, libraries.
My favourite libraries, roughly in my personal order of desirability:
BOSL (source, docs) and BOSL2 (source, docs). "The Belfry OpenScad Library - A library of tools, shapes, and helpers to make OpenScad easier to use." Includes lots of modules and functions to make OpenSCAD code more readable. Overall, it's like MCAD in scope, but much better in execution. BOSL2 is a much-extended second edition of BOSL, but as of 2020-11 the author says it is not yet ready for production use.
BOSL includes a very good Bézier library as well as a threading library.
Round Anything. (source, API, visual overview) "Round-Anything is primarily a set of OpenSCAD utilities that help with rounding parts, but it also embodies a robust approach to developing OpenSCAD parts."
NopSCADlib. (source, docs) A very large library. Use it for any kind of machine design, as it contains nuts, bolts, washers, electronic components, belts etc. "It contains lots of vitamins (the RepRap term for non-printed parts), some general purpose printed parts and some utilities. There are also Python scripts to generate Bills of Materials (BOMs), STL files for all the printed parts, DXF files for CNC routed parts in a project and a manual containing assembly instructions and exploded views by scraping markdown embedded in OpenSCAD comments, see scripts."
Also contains a 3D sweep function and a thread generation module.
BOLTS. (source, docs) "BOLTS is an Open Library of Technical Specifications." Contains all kinds of models for metal hardware standard parts (example).
dotSCAD. (source) Seems to be one of the best general-purpose libraries for OpenSCAD, being huge, good quality, and well maintained. Mostly focused on math-art parts. For an overview of the designs made by the author of dotSCAD using that library, see here. For background articles about the designs made with dotSCAD, see here.
MCAD. (source, docs) This is so far the only library shipped with every installation of OpenSCAD, so would qualify as its standard library. No need to tell users of your designs to install anything when you only include MCAD.
Note that currently (as of 2020-11), a large rework is being done to MCAD, with the effect that the dev branch has nearly twice as many commits as the master branch. You'll find many goodies here, but of course users of your design would then have to install the dev branch first.
The problem with MCAD, especially the current master branch, is that I don't find it useful. It's so far rather a non-integrated hotchpotch of contributions from many authors. But since it's the standard library, we should give it a chance. When I have something generally useful, I'll try to contribute it there.
Revolve2. (source, announcement) In terms of speed, this is hands-down the best thread generation library I could find. I did not yet test the threading features in BOSL and NopSCADlib, though.
3D sweep demos. (source) Note that this is demo code rather than a library; first try the sweep module in NopSCADlib.
scad-utils. (source)
Relativity. (source) A library to arrange objects relative to each other. Also includes a CSS-like styling language for objects. Seemingly no longer in active development, but still great to learn really advanced OpenSCAD techniques.
After some research, it seems that the BOSL2 library is the most complete:
BOSL2 Library documentation
BOSL2 Library
[edit 2022: update BOSL to BOSL2]
Almost all conventional languages today represent a programmer's intention as text source, which is then (let's say, for the sake of simplicity) translated to some bytecode/machine code and interpreted/executed by a VM/CPU.
There was another technique, which, for some reason, isn't that popular these days: "freeze" the run-time of your VM and dump/serialize the environment (symbol bindings, state, code (whatever that is)) into an image, which you can then transfer, load, and execute.
Consequently, you do not "write" your code in the usual way; instead, you modify the environment with new symbols while at "run-time".
I see great advantages to this technique:
Power-boosted REPL: you can introspect your code as you write it, partially evaluate it, test it directly, and see the effects of your changes. Then roll back if you've messed up and do it again, or finally commit it to the environment. No need for a long compile-run-debug cycle;
Some of the usual problems with dynamic languages (that they cannot be compiled, as the compiler cannot reason about environments statically) are obviated: the interpreter knows where everything is located and can substitute symbol references with static offsets and do other optimizations;
It's easier on the programmer's brain: you "offload" contextual information about the code from your head, i.e. you don't need to keep track of what your code has already done to some variable/data structure or of which variable holds what: you see it directly in front of your eyes! In the usual way (writing source), programmers add new abstractions or comments to the code to clarify intent, but this can (and will) get messy.
The question is: what are the disadvantages of this approach? Is there any serious, critical disadvantage that I am not seeing? I know there are some problems with it, e.g.:
try building a module system with it that will not result in dependency hell or serious linkage problems
security issues
try to version-control such images and enable concurrent development
But these are, IMHO, solvable with a good design.
EDIT1: Concerning the status "closed, primarily opinion-based": I've described two existing approaches, and it is clear and obvious that one is preferred over the other. Whether the reasons for that are purely opinion-based or there is research to back them up is unknown to me. But even if they are opinion-based, if someone would list the reasons behind such an opinion, that would actually answer my question.
As a daily user of Smalltalk, I have to say I haven't found any fundamental disadvantages, and I have to agree that there are lots of advantages.
It makes metaprogramming and reasoning about your program easy, and it much better supports refactoring and code rewriting.
It requires (and develops) a different way of looking at your code, though. Smalltalk has little to offer to developers who are not interested in abstraction.
There are probably different kinds of code generation. In RoR, for example, Rails can create skeletons for models, controllers, etc., but the developer has to complete those skeletons.
Sometimes, though, there are projects where many core artifacts are generated in their entirety according to a set of definitions or models.
I am mainly interested to know the advantages and disadvantages of this latter type of code generation.
The main advantage is that it does the work for you, it's repeatable, and the code will most likely work (that depends, of course, on whether the person who wrote the generator knew what they were doing). It can eliminate a lot of the time spent on menial coding tasks. For example, is it really worth your time to write objects which are nothing more than containers for data from the database, or is it better to have some program automatically create these for you?
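As a rough sketch of that idea in Perl, a trivial generator can turn a schema description into such container classes (the table layout and output naming are hypothetical; in practice the column list would come from the database's catalog):

    use strict;
    use warnings;

    # Hypothetical schema description.
    my %tables = (
        invoice => [qw(id number total customer_id)],
    );

    for my $table (keys %tables) {
        my $class = 'My::DB::' . ucfirst $table;
        my $code  = "package $class;\nuse Moose;\n\n";
        $code .= "has '$_' => (is => 'rw');\n" for @{ $tables{$table} };
        $code .= "\nno Moose;\n1;\n";

        # Write the generated container class to disk.
        open my $fh, '>', ucfirst($table) . '.pm' or die $!;
        print {$fh} $code;
        close $fh;
    }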
The big disadvantage is that it forces you into writing code that is compatible with the generated code. Most of the time this isn't a problem, but it can be a real hassle when someone comes up to you and says "Hey, can we do X?" and that conflicts with the generated code. If the generator is good, it will allow you to change functionality, but that almost always increases the complexity of the generated code. This complexity has a price. It's more difficult to understand, and it can be less efficient than code you write yourself. This of course varies by situation.
The main problem with this style of programming is that it contaminates your view of the project. It no longer allows you to practice DRY. It is useful to have a clean separation between what is automatically generated and what is written by a human. Most systems, especially file-based ones, do not support such a separation well. In systems that have good introspection capabilities (e.g. Smalltalk images), building a dynamic object structure by walking the definition/model is preferable.
In illusion-based programming (as practiced in large companies and government agencies) it is very useful, because it allows you to generate very impressive stacks of documentation and to show impressive implementation performance as measured in lines of code per man-month. There, your most important skill is of course timing your disappearance act.
I think the most important thing to keep in mind is WHY you want to generate source code. Is it, for instance, because you are more fluent with UML than with any programming language and hence want to generate object-oriented classes from that graphical model?
Is it because you expressed a schema definition in some language (SQL DDL, for example: jOOQ; XSD, for example: JAXB code generation) and want to generate a model from that?
The advantage of code generation is always the fact that you express something only once (as in DRY, as Stephan stated). This is a very good practice that made it deep into extreme programming (among other processes). When you keep things DRY, you do not run the risk that the model differs from its glue code. On the other hand, you might blow up your glue code because it will exactly match its underlying model. Typically, you have one class/type/object per RDBMS table or per XML element.
If, however, you use code generation because you're more at ease with a modelling language (as in MDA, or model-driven architecture), you might run the risk that your generated code is not good enough (lack of detail) or too complicated (lack of simplicity) because - for instance - UML is not suited for solving problems in detail.
In any case: code generation can be very helpful if the generated code can be used AS-IS and does not need any customisation. As soon as you start customising generated code, it may become a maintenance nightmare.
In my team we've got a great source control system and we have great specs. The problem I'd like to solve is how to keep the specs up to date with the code. Over time the specs tend to age and become out of date.
The folks writing the specs tend to dislike source control, and the programmers tend to dislike SharePoint.
I'd love to hear what solutions others use. Is there a happy middle ground somewhere?
Nope. There's no happy middle. They have different audiences and different purposes.
Here's what I've learned as an architect and spec writer: Specifications have little long-term value. Get over it.
The specs, while nice to get programming started, lose their value over time no matter what you do. The audience for the specification is a programmer who doesn't have much insight. Those programmers morph into deeply knowledgeable programmers who no longer need the specs.
Parts of the specification -- overviews in particular -- may have some long-term value.
If the rest of the spec had value, the programmers would keep them up to date.
What works well is to use comments embedded in the code and a tool to extract those comments and produce the current live documentation. Java does this with javadoc. Python does this with epydoc or Sphinx. C (and C++) use Doxygen. There are a lot of choices: http://en.wikipedia.org/wiki/Comparison_of_documentation_generators
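Perl's native take on the same idea is POD embedded right next to the code it documents; a minimal sketch (the package and method are made up for illustration):

    package My::Billing;
    use strict;
    use warnings;

    =head1 NAME

    My::Billing - computes invoice totals

    =head2 add_line_item

    Adds one line item (price, quantity) to the running total.

    =cut

    sub add_line_item {
        my ($self, $price, $quantity) = @_;
        $self->{total} += $price * $quantity;
        return $self->{total};
    }

    1;

Running perldoc My::Billing (or pod2html for HTML output) then extracts the live documentation straight from the source.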
The overviews should be taken out of the original specs and placed into the code.
A final document should be extracted. This document can replace the specifications by using the spec overviews and the code details.
When major overhauls are required, there will be new specifications. There may be a need for revisions to existing specifications. The jumping-off point is the auto-generated specification documents. The spec authors can start with those and add/change/delete to their heart's content.
I think a non-SharePoint wiki is good for keeping documentation up to date. Most non-technical people can understand how to use a wiki, and most programmers will be more than happy to use a good wiki. The wiki and documentation control systems in SharePoint are clunky and frustrating to use, in my opinion.
Mediawiki is a good choice.
I really like wikis because they are by far the lowest pain to adopt and keep up. They give you automatic version control, and are usually very intuitive for everyone to use. A lot of companies will want to use Word, Excel, or other types of docs for this, but getting everything online and accessible from a common interface is key.
As much as possible, the spec should be executable, via RSpec, doctest, or similar frameworks. The spec of the code should be documented with unit tests and code that has well-named methods and variables.
Then the spec documentation (preferably in a wiki) should give you the higher level overview of things - and that won't change much or get out of sync quickly.
Such an approach will certainly keep the spec and the code in sync and the tests will fail when they get out of sync.
That being said, on many projects the above is kind of pie-in-the-sky. In that case, S. Lott is right, get over it. They don't stay in sync. Look to the spec as the roadmap the developers were given, not a document of what they did.
If having a current spec is very important, then there should be specific time on the project allocated to write (or re-write) the spec after the code is written. Then it will be accurate (Until the code changes).
An alternative to all of this is to keep the spec and the code under source control and have check-ins reviewed to ensure that the spec changed along with the code. It will slow down the development process, but if it is really that important ...
One technique used to keep the documentation in sync with the code is literate programming. This keeps the code and the documentation in the same file and uses a preprocessor to generate the compilable code from the documentation. As far as I know this is one of the techniques Donald Knuth uses - and he's happy to pay people money if they find bugs in his code.
I don't know of any particularly good solution for precisely what you're describing; generally, the only solutions that I've seen that really keep this sort of stuff in sync are tools that generate documentation from the source code (doxygen, Javadoc).
Can you suggest some good MVC frameworks for Perl? One I am aware of is Catalyst.
The need is to be able to expose services on the Perl infrastructure which can be called by Java/.NET applications seamlessly.
I'll tell you right now that Catalyst has by far the best reputation amongst Perl developers in terms of a rapid application development MVC framework.
In terms of "pure" MVC I'm not sure there are even that many "mature" or at least production-ready alternatives.
If Catalyst doesn't seem right to you, then you could build upon the lightweight framework CGI::Application to suit your needs or take a look at some of the lesser known MVC frameworks like PageKit and Maypole.
Since this old thread popped up, I will mention two exciting new additions to the Perl MVC world:
Dancer (CPAN), which is heavily influenced by Ruby's Sinatra and known for being very lightweight
Mojolicious (CPAN), which is written by the original developer of Catalyst to apply what he learned there; it has no non-core dependencies and very modern built-ins (HTML5/CSS3/WebSockets, JSON/XML parsers, its own UserAgent and templating engine)
(N.B. I have used Mojolicious more than Dancer, so if Dancer also has some of the features I listed for Mojolicious, I apologize in advance)
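Since the question is about exposing services that Java/.NET applications can call, here is a minimal Mojolicious::Lite sketch of a JSON endpoint (the route and payload are made up for illustration):

    use Mojolicious::Lite;

    # A JSON service endpoint that any HTTP client can consume.
    get '/api/status' => sub {
        my $c = shift;
        $c->render(json => { service => 'inventory', status => 'ok' });
    };

    app->start;

Save it as app.pl, run perl app.pl daemon, and any client can fetch http://localhost:3000/api/status.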
Another alternative besides the ones already mentioned is Continuity; however, it is (as the name is meant to imply) continuation-based rather than MVC in the typical sense. Still, it’s worth mentioning because it is one of the better Perl web frameworks.
That said, I like Catalyst much better than any of the alternatives. And it’s still getting better all the time! The downside of that is that current preferred coding approaches continue to evolve at a fairly hurried clip – but for the last couple of versions, there has been strong emphasis on API compatibility, so the burden is now mostly mental rather than administrative. The upcoming port of the internals to Moose in particular is poised to provide some excellent benefits.
But the biggest argument in favour of Catalyst, IMO, is the Chained dispatch type (see the sketch below). I have seen nothing like it in all of web-framework-dom, and it is a most excellent tool for keeping your code as DRY as possible. This couples well with another great thing that Catalyst provides, namely uri_for – a method which takes a controller and a bunch of arguments and then constructs a URI that would dispatch to that place, which it returns. Together, these facilities mean that you can structure your URI space any way you deem right, yet at the same time structure your controllers to avoid duplication of logic, and keep templates independent of the URI structure.
It’s just brilliant.
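To make Chained dispatch concrete, here is a minimal hypothetical controller sketch: the base action captures and loads an artist once, and further actions hang off that chain without repeating the lookup:

    package MyApp::Controller::Artist;
    use Moose;
    use namespace::autoclean;
    BEGIN { extends 'Catalyst::Controller' }

    # Matches /artist/*; loads the artist once for the whole chain.
    sub base : Chained('/') PathPart('artist') CaptureArgs(1) {
        my ($self, $c, $artist_id) = @_;
        $c->stash(artist => $c->model('DB::Artist')->find($artist_id));
    }

    # Matches /artist/*/albums, reusing the already-loaded artist.
    sub albums : Chained('base') PathPart('albums') Args(0) {
        my ($self, $c) = @_;
        $c->stash(albums => [ $c->stash->{artist}->albums ]);
    }

    __PACKAGE__->meta->make_immutable;
    1;

And uri_for can rebuild a URI into that space without hard-coding it, e.g. $c->uri_for($c->controller('Artist')->action_for('albums'), [ $artist_id ]).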
Seconding the comments made by others: Catalyst (which more or less forked from Maypole) is far and away the most complete and robust of them. There is a book by Jonathan Rockway that will certainly help you come to grips with it.
In addition to the 'Chained' dispatch type, the :Regex (and :LocalRegex) dispatch methods provide enormous flexibility. The latest app we've built here supports a lot of disparate-looking URLs through just a handful of subs using :LocalRegex.
I also particularly like the fact that you are not limited to a particular templating language or database. The mailing list (and the book) both have a preference for Template::Toolkit (as do I), and DBIx::Class (we continue to use Class::DBI), but you can use pretty much anything you like. Catalyst is marvelously agnostic that way.
Don't be put off by the fact that Catalyst seems to require half of CPAN as dependencies. Once you get it up and running, it is a well-oiled machine. It has reached a level of maturity now such that once you come to grips with it, you find it 'fades into the background'. You spend your time solving business needs, not fighting with the tools you use.
It does what it says on the tin. Catalyst++
I've been playing with Squatting for the last few days and I have to say it looks very promising and has been fun to use.
It's a micro webframework (or web microframework ;-) and is heavily influenced by Camping, which is written in Ruby.
NB: Squatting (& Camping) don't have model components baked into the framework. Here are the author's comments on models: "Models? The whole world is your model. ;-) I've always been ambivalent about defining policy here. Use whatever works for you"
There is also CGI::Application, which is more like the guts of a framework. It helps a person write basic CGIs and glue bits onto them to make them as custom as they like. So you can have it use hardly any modules, or just about every one under the sun.
Catalyst is the way to go. There is also Jifty, but (last time I looked), it had terrible documentation.
If you are already aware of Catalyst, then I recommend focusing on it. It is mature, well-documented, and has a very large user-base, community, and collection of plug-ins.
For your problem I would take a look at Jifty::Plugin::REST, which allows access to models and actions using various formats.
Let me just say that Jifty doesn't have terrible documentation. However, most of the included documentation is API documentation; there is also a very low-noise mailing list with useful tips and links to applications.
The wiki at http://jifty.org/ is another resource with useful bits.
If your goal is to build a video store (my favorite benchmark for 4GLs and CRUD frameworks) in an afternoon, it's really worth a look!
Another option is Gantry; when used in conjunction with the BigTop module, it can reduce the time it takes to build simple CRUD sites.
There is also ClearPress, which I can recommend as a useful database-backed application framework. It needs fewer dependencies than Catalyst. We have written a few large applications with it, and I run a badminton ladder website using it.
I have built some applications with Kelp; it's easy to learn and very helpful.