Are MooseX::Declare and MooseX::Method::Signatures production ready? - perl

The current version (0.98) of Moose::Manual::MooseX contains these lines:
We have high hopes for the future of
MooseX::Method::Signatures and
MooseX::Declare. However, these
modules, while used regularly in
production by some of the more insane
members of the community, are still
marked alpha just in case backwards
incompatible changes need to be made.
I noticed that for MooseX::Method::Signatures the change log for September 2009 mentions the removal of the "scary ALPHA disclaimer".
So, are these still "alpha"?
Would I still be considered one of the "more insane" to use them?

I'd say they are production ready - I'm using them in production - but there are several things to consider:
Performance
MooseX::Declare and its dependencies do almost all of their magic at compile time. Depending on the size of your program, you might find anywhere from half a second to several seconds of additional initialization overhead. If this is a problem, don't use MooseX::Declare.
At runtime, the main overhead is type and argument checking, which you should (ideally) be doing anyway. That said, Moose type constraints have some overheads, namely coercion and the more complex (MooseX::Types::Structured-style) constraints. Don't use these if performance is an issue.
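As a rough illustration of where that runtime cost lives, here is a minimal sketch with made-up class and type names: the subtype check runs on every assignment, and the coercion runs on top of it whenever a plain number is passed in.
package My::Order;    # hypothetical class, purely for illustration
use Moose;
use Moose::Util::TypeConstraints;

# Constraint: a string formatted like "10.00".
subtype 'My::Money', as 'Str', where { /^\d+\.\d{2}$/ };
# Coercion: turn a plain number into that format.
coerce 'My::Money', from 'Num', via { sprintf '%.2f', $_ };

has total => (
    is     => 'ro',
    isa    => 'My::Money',
    coerce => 1,
);

__PACKAGE__->meta->make_immutable;

# My::Order->new( total => 10 )->total;   # "10.00" -- the check and coercion are paid here
Structured constraints such as Dict[...] from MooseX::Types::Structured additionally walk the whole data structure on every check, which is where the deeper runtime cost shows up.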
Stability
The external syntax of MooseX::Declare and MooseX::Method::Signatures is now stable. But it is important to know that the internals are subject to extreme change. (Fortunately, the changes are for the better.)
To give you an idea, the signature itself is grabbed using a big block of C code stolen from the Perl tokenizer (toke.c). This can break in some situations since it isn't actually parsing anything. The bit inside the brackets is parsed using PPI, which is designed for pure Perl, but the resulting PPI tree is then hacked up to get something useful. Devel::Declare itself is a hack - after it sees specific keywords (e.g. 'role', 'class', 'method') the Devel::Declare-using module must rewrite the source code by hand, with no interaction with the real Perl parser.
Corner cases may cause Perl to segfault, or rewrite the source code badly, so you get syntax errors but have no idea what's causing them without -MO=Deparse. If you mess up the MooseX::Declare syntax by accident, there is no guarantee that the module will detect this and give you a sensible error. The ALPHA message may have gone, but this is still doing dark and scary things internally, and you should be prepared for that.
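For reference, the deparse pass mentioned above is invoked like this (lib/My/Class.pm is a placeholder path); it prints the code Perl actually compiled, which is usually enough to spot where a rewrite went wrong:
perl -MO=Deparse lib/My/Class.pm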
UPDATE
MooseX::Declare has not been updated much, and you may wish to look at alternatives such as Moops. Personally, I have decided to stick with pure Moose until Perl itself begins to support class/method/has syntax natively, which is possibly on the cards.

I think it's a matter of differing perspectives as much as anything -- rafl is one of the aforementioned "more insane members of the community" while Rolsky is more conservative. It's up to you to decide who you agree with, and really I think that the most important variable is your own code.
MooseX::Declare is good code. It won't randomly blow up your machine, it's not awful for performance, and it offers a lot of nifty stuff while reducing the amount of boilerplate that you have to write. But it might change in the future, making your code refuse to compile until it's updated; it might confuse your editor and other development tools when they see syntax they can't parse; it might piss off your collaborators by making them learn a new module to work with your code; or it might piss off your boss by making it so any future maintainer has to learn a new module to work with your code. Which of those things apply to you, and to what degree? You know better than I do, I hope.

There are people who feel that the maturity and stability of MooseX::Declare, of Devel::Declare on which it's based, or even of Moose itself are not yet ready for "prime time". I also know of two large companies, with millions of visitors a month, who have MooseX::Declare in their production environment. I personally am happy with the stack I am provided with Moose and do not yet see a need to bring in MooseX::Declare. I know people whose opinion I deeply respect who refuse to write new code without the declarative sugar from MooseX::Declare.
All of this is to say, the decision on whether something is or is not production ready is highly dependent upon your production environment, your development needs, and taste for risk. Without being in your shoes we can't possibly give an informed decision as to how well any given tool matches that profile.

MooseX::Method::Signatures (MXMS), and MooseX::Declare which uses it, is not production ready. This is not because the code isn't stable, but because it is appallingly slow. Simply using the method keyword, no types or arguments, is a 500-1000x runtime performance hit over a regular method call. My Macbook Pro can do about 6,000 simple method calls per second using MXMS vs 5,000,000 with plain Perl.
Method::Signatures, in contrast, has almost no performance hit above what it would normally cost to do the requested checks. The syntax is almost exactly the same as MXMS and it supports Moose (and Mouse) types. Both rely on the same underlying syntax modifying technique. (Full disclosure, I am the author of Method::Signatures.)
If you like MooseX::Declare but want the performance of Method::Signatures, try Method::Signatures::Modifiers.
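If you want to measure the gap on your own hardware, a rough benchmark along these lines will show it. My::WithMXMS and My::PlainPerl are hypothetical classes providing the same trivial echo method, one declared with the MXMS method keyword and one as a plain sub:
use strict;
use warnings;
use Benchmark qw(cmpthese);

use My::WithMXMS;     # method echo (Str $x) { return $x }
use My::PlainPerl;    # sub echo { my ($self, $x) = @_; return $x }

my $mxms  = My::WithMXMS->new;
my $plain = My::PlainPerl->new;

# Run each variant for two CPU seconds and compare call rates.
cmpthese( -2, {
    mxms  => sub { $mxms->echo('hi') },
    plain => sub { $plain->echo('hi') },
});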

It depends on what you mean by "production ready". I wouldn't depend on them until their velocity slows down quite a bit. I like my production stuff to not need frequent care from external code changes, API adjustments, and so on. That's not something particular to Moose, but any young project.
You have to judge how much that matters to you. In some situations, pushing stuff into production is a lengthy process, so you must be circumspect with such things. At the other extreme, some places let you edit files directly on the production server. That is, you have to define your tolerance before anyone can tell you which side a given MooseX module is on.

MooseX::Declare and MooseX::Method::Signatures are working well, but they can carry a really nasty performance penalty depending on what your code does. This can be avoided by simply not using the method keyword, or by using Method::Signatures::Modifiers.
The performance penalty I am seeing is around 2-5x compared to Method::Signatures::Modifiers (the 5x being mostly for one specific class I was using). It seems to be mostly compile-time or first-time initialization cost, because it drops under 2x when the computation runs longer.
Method::Signatures::Modifiers has better errors, but you have to turn this optimization off when you use the debugger (it goes haywire because it does not see these methods; you can see for yourself in the -MO=Deparse output).
It may be worth it just to get rid of Perl's argument-shifting hell.
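For reference, switching an existing MooseX::Declare class over to Method::Signatures::Modifiers is intended to be a matter of loading it after MooseX::Declare; a minimal sketch with a made-up class and method:
use MooseX::Declare;
use Method::Signatures::Modifiers;   # replaces the MooseX::Method::Signatures machinery

class My::Crawler {
    # The method keyword below is now handled by Method::Signatures,
    # avoiding the expensive per-call argument handling.
    method fetch (Str $url) {
        return "would fetch $url";
    }
}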

Related

What are "not so well defined problems" that LISP is supposed to solve?

Most people agree that LISP helps to solve problems that are not well defined, or that are not fully understood at the beginning of the project.
"Not fully understood"" might indicate that we don't know what problem we are trying to solve, so the developer refines the problem domain continuously. But isn't this process language independent?
All this refinement does not take away the need for, say, developing algorithms/solutions for the final problem that does need to be solved. And that is the actual work.
So, I'm not sure what advantage LISP provides if the developer has no idea where he's going i.e. solving a problem that is not finalised yet.
Lisp (not "LISP") has a number of advantages when you're facing problems that are not well-defined. First of all, you have a REPL where you can quickly experiment with -- that helps in sketching out quick functions and trying to play with them, leading to a very rapid development cycle. Second, having a dynamically typed language is working well in this context too: with a statically typed language you need to "design more" before you begin, and changing the design leads to changing more code -- in contrast, with Lisps you just write the code and the data it operates on can change as needed. In addition to these, there's the usual benefits of a functional language -- one with first class lambda functions, etc (eg, garbage collection).
In general, these advantage have been finding their way into other languages. For example, Javascript has everything that I listed so far. But there is one more advantage for Lisps that is still not present in other languages -- macros. This is an important tool to use when your problem calls for a domain specific language. Basically, in Lisp you can extend the language with constructs that are specific to your problem -- even if these constructs lead to a completely different language.
Finally, you need to plan ahead for what happens when the code becomes more than a quick experiment. In this case you want your language to cope with "growing scripts into applications" -- for example, having a module system means that you can get a more "serious"
application. For example, in Racket you can get your solution separated into such modules, where each can be written in its own language -- it even has a statically typed language which makes it possible to start with a dynamically typed development cycle and once the code becomes more stable and/or big enough that maintenance becomes difficult, you can switch some modules into the static language and get the usual benefits from that. Racket is actually unique among Lisps and Schemes in this kind of support, but even with others the situation is still far more advanced than in non-Lisp languages.
Historically, in AI (artificial intelligence), Lisp was seen as the AI assembly language. It was used to build higher-level languages which help you work with the problem domain in a more direct way. Many of these domains need a lot of 'knowledge' to find usable answers.
A typical example is an expert system for, say, oil exploration. The expert system takes (geological) observations as input and gives information about the chances of finding oil, what kind of oil, at what depths, and so on. To do that it needs 'expert knowledge' of how to interpret the data. When you start a project to develop such an expert system, it is typically not clear what kind of inferences are needed, what kind of 'knowledge' the experts can provide, and how this 'knowledge' can be written down for a computer.
In this case one typically develops new languages on top of Lisp and you are not working with a fixed predefined language.
As an example see this old paper about Dipmeter Advisor, a Lisp-based expert system developed by Schlumberger in the 1980s.
So, Lisp itself does not solve any problems. But it was originally used to solve problems that are complex to program by providing new language layers which make it easier to express the domain 'knowledge', rules, constraints, etc. needed to find solutions which are not straightforward to compute.
The "big" win with a language that allows for incremental development is that you (typically) have a read-eval-print loop (or "listener" or "console") that you interact with, and you tend not to lose state when you compile and load new code.
The ability to keep state around from test run to test run means that lengthy computations that are untouched by your changes can simply be kept around instead of being re-computed.
This allows you to experiment and iterate faster. Being able to iterate faster means that exploration is less of a hassle. Very useful for exploratory programming, something that is typical with dealing with less well-defined problems.

Moose OOP or Standard Perl?

I'm going to be writing some crawlers for a website. The idea is that the site will use some back-end Perl scripts to fetch data from other sites. My design (in a very abstract way..) will be to write a package, let's say:
package MyApp::Crawler::SiteName
where SiteName will be a module/package for crawling a specific site. I will obviously have other packages that are shared across different modules, but that's not relevant here.
Anyway, to make a long story short, my question is: why (or why not...) should I prefer Moose over standard OO Perl?
While I disagree with Flimzy's introduction ("I've not used Moose, but I have used this thing that uses Moose"), I agree with his premise.
Use what you feel you can produce the best results with. If the (or a) goal is to learn how to effectively use Moose then use Moose. If the goal is to produce good code and learning Moose would be a distraction to that, then don't use Moose.
Your question however is open-ended (as others have pointed out). There is no answer that will be universally accepted as true, otherwise Moose's adoption rate would be much higher and I wouldn't be answering this. I can really only explain why I choose to use Moose every time I start a new project.
As Sid quotes from the Moose documentation, Moose's core goal is to be a cleaner, standardized way of doing what Object Oriented Perl programmers have been doing since Perl 5.0 was released. It provides shortcuts to make doing the right thing simpler than doing the wrong thing, something that is, in my opinion, lacking in standard Perl. It provides new tools to make abstracting your problem into smaller, more easily solved problems simpler, and it provides a robust introspection and meta-programming API that tries to normalize the bestiary that is hacking Perl internals from Perl space (i.e. what I used to refer to as Symbol Table Witchery).
I've found that my natural sense of how much code is "too much" has been reduced by 66% since I started using Moose[^1]. I've found that I more easily follow good design principles such as encapsulation and information hiding, since Moose provides tools to make it easier than not. Because Moose automatically generates much of the boiler-plate that I normally would have to write out (such as accessor methods, delegate methods, and other such things) I find that it's easier to quickly get up to speed on what I was doing six months ago. I also find myself writing far less tricky code just to save myself a few keystrokes than I would have a few years ago.
It is possible to write clean, robust, elegant Object Oriented Perl that doesn't use Moose[^2]. In my experience it requires more effort and self control. I've found that in those occasions where the project demands I can't use Moose, my regular Object Oriented code has benefitted from the habits I have picked up from Moose. I think about it the same way I would think about writing it with Moose and then type three times as much code as I write down what I expect Moose would generate for me[^3].
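To make that concrete, here is a small sketch (the class and attribute are invented for illustration) of one Moose attribute next to roughly what I would otherwise have to write, and test, by hand:
package My::Page;    # made-up class, purely illustrative
use Moose;

# One line of intent: Moose generates the constructor handling,
# the reader method, and the type check for me.
has url => ( is => 'ro', isa => 'Str', required => 1 );

__PACKAGE__->meta->make_immutable;

# Roughly the plain-Perl equivalent:
package My::Page::Manual;

sub new {
    my ($class, %args) = @_;
    die "url is required"            unless defined $args{url};
    die "url must be a plain string" if ref $args{url};
    return bless { url => $args{url} }, $class;
}

sub url { $_[0]{url} }    # read-only accessor

1;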
So I use Moose because I find it makes me a better programmer, and because of it I write better programs. If you don't find this to be true for you too, then Moose isn't the right answer.
[^1]: I used to start to think about my design when I reached ~300 lines of code in a module. Now I start getting uneasy at ~100 lines.
[^2]: Miyagawa's code in Twiggy is an excellent example that I was reading just today.
[^3]: This isn't universally true. There are several stories going around about people who write less maintainable, horrific code by going overboard with the tools that Moose provides. Bad programmers can write bad code anywhere.
You'll find the answer to why you should use Moose in its documentation.
The main goal of Moose is to make Perl 5 Object Oriented programming easier, more consistent, and less tedious. With Moose you can think more about what you want to do and less about the mechanics of OOP.
From my experience (and others will probably tell you the same), Moose reduces your code size dramatically. It has a lot of features, and even the standard ones -- validation, requiring values when an object is created, laziness, default values, etc. -- are so easy and readable that you will never want to go without Moose again.
Use Moose. This is from something I wrote last night (using Mouse in this case). It should be pretty obvious what it does, what it validates, and what it sets up. Now imagine writing the equivalent raw OO. It's not super hard, but it does start to get much harder to read -- and not just the code proper but the intent, which can be the most important part when reading code you haven't seen before or in a while.
has "io" =>
is => "ro",
isa => "FileHandle",
required => 1,
handles => [qw( sysread )],
trigger => sub { binmode +shift->{io}, ":bytes" },
;
I wrote a big test class last year that also used the handles functionality to redispatch a ton of methods to the underlying Selenium/WWWMech object. Making this sort of boilerplate disappear can really help readability and maintenance.
I've never used Moose, but I've used Catalyst, and have extensive experience with OO Perl and non-OO Perl. And my experience tells me that the best method to use is the method you're most comfortable using.
For me, that method has become "anything except Catalyst" :) (That's not to say that people who love and swear by Catalyst are wrong--it's just my taste).
If you already have the core of a crawler app that you can build on, use whatever it's written in. If you're starting from scratch, use whatever you have the most experience in--unless this is your chance to branch out and try something new, then by all means, accomplish your task while learning something new at the same time.
I think this is just another instance of "which language is best?" which can never be answered, except by the individual.
When I learned about objects in Perl, the first thing I thought was: why is it so complicated, when Perl usually tries to keep things simple?
With Moose I see that uncomplicated OOP is possible in Perl. It sort of brings Perl OOP back to a manageable level.
(Yes, I admit, I don't like Perl's object design.)
Although this was asked 10 years ago, much has changed in the Perl world. I think most people now see that Moose didn't deliver all that people thought it might. There are several derivative projects, lots of MooseX shims, and so on. That much churn is the code smell of something that's close to useful but not quite there. Even Stevan Little, the original author, was working on its replacement, Moxie, but it never really went anywhere. I don't think anyone was ever completely satisfied with Moose.
Currently, the future is probably not Moose, irrespective of your estimation of that framework. I have a chapter for Moose in Intermediate Perl, but I'd remove it in the next edition. That's not a judgment of quality, just the on-the-ground truth of what's happening in that space.
Ovid is working on Corinna, a similar but also different framework that tries to solve some of the same problems. Ovid outlines various things he wants to improve, so you can use that as a basis for your own decision. No matter what you think about either Moose or Corinna, I think the community is now transitioning out of its Moose phase.
You may want to listen to How Moose made me a bad OO programmer, a short talk from Tadeusz Sośnierz at PerlCon 2019.
Even if you're skeptical, I'd say try Moose in a small project before you commit to reconfiguring a large codebase. Then, try it in a medium project. I tend to think that frameworks like Moose look appealing in the sorts of examples that fit on a page, but don't show their weaknesses until the problems get much more complex. Don't go all in until you know a little more.
You should at least know the basics of Moose, though, because you are likely to run into other code that uses it.
The Perl 5 Porters may even include this one in core perl, but a full delivery may happen in the five to ten year range, and even then you'd have to insist on a supported Perl. Most people I encounter are about three to five years behind on Perl versions. If you control your perl version, that's not a problem. Not everyone does though. Holding out for Corinna may not be a good short-term plan.

Moose vs. MooseX::Declare

POSTLUDE
MooseX::Declare would no longer be recommended by anyone, as it relies on Devel::Declare, which served its purpose but is now obsolete. At this point, anyone who wants MX::D should look at Moops.
ORIGINAL
Assuming I already have a decent knowledge of old-style Perl OO, and assuming I am going to write some new code in some flavor of Moose (yes, I understand there is a performance hit), I was wondering if deeper down either rabbit hole, am I going to wish that I had chosen the other path? Could you SO-monks enlighten me with the relative merits of Moose vs. MooseX::Declare (or some other?). Also how interchangeable they are, one for one class and the other for another, should I choose to switch.
(p.s. I would be ok cw-ing this question, however I think a well formed answer might be able to avoid subjectivity)
MooseX::Declare is basically a sugar-layer of syntax over Moose. They are, for everything past the parser, identical in what they produce. MooseX::Declare just produces a lot more of it, with a lot less writing.
Speaking as someone who enjoys the syntax of MooseX::Declare but still prefers to write all of my code in plain Moose, the tradeoffs are mostly on the development & maintainability side.
The basic list of items of note when comparing them:
- MooseX::Declare has much more concise syntax. Things that take several hundred lines in plain old Perl objects (POPO?) may take 50 lines in Moose and 30 lines in MooseX::Declare (see the sketch at the end of this answer). The code from MooseX::Declare is, to me, more readable and elegant as well.
- MooseX::Declare means you have MooseX::Types and MooseX::Method::Signatures for free. This leads to the very elegant method foo(Bar $bar, Baz $baz) { ... } syntax that caused people to come back to Perl after several years in Ruby.
- A downside to MooseX::Declare is that some of the error messages are much more cryptic than Moose's. The error from a TypeConstraint validation failure may happen several layers deep in MooseX::Types::Structured, and getting from there to the place in your code where you broke it can be difficult for people new to the system. Moose has this problem too, but to a lesser degree.
- The places where the dragons hide in MooseX::Declare can be subtly different from where they hide in Moose. MooseX::Declare puts in an effort to walk around known Moose issues (the timing of with(), for example) but introduces some new places to be aware of. MooseX::Types, for example, has a wholly different set of problems from Moose's native Stringy types[^1].
- MooseX::Declare has yet another performance hit. This is known to the MooseX::Declare developers and people are working on it (for several values of "working", I believe).
- MooseX::Declare adds more dependencies to Moose. I add this one because people already complain about Moose's dependency list, which is around 20 modules. MooseX::Declare adds around another 5 direct dependencies on top of that. The total list, however, according to http://deps.cpantesters.org/ is Moose 27, MooseX::Declare 91.
If you're willing to go with MooseX::Declare, the best part is you can swap between them at the per-class level. You need not pick one over the other in a project. If this class is better in plain Moose because of performance needs, or because it's being maintained by junior programmers, or installed on a more tightly controlled system, you can do that. If that class can benefit from the extra clarity of the MooseX::Declare syntax, you can do that too.
Hope this helps answer the question.
[^1]: Some say fewer, some say more. Honestly the Moose core developers are still arguing this one, and there is no right answer.
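To make the difference in ceremony concrete, here is a small hedged sketch of the same made-up class written both ways (the two versions are alternatives, not meant to live in the same file):
# (a) Plain Moose
package Point;
use Moose;
use namespace::autoclean;

has x => ( is => 'ro', isa => 'Num', required => 1 );
has y => ( is => 'ro', isa => 'Num', required => 1 );

sub distance_to {
    my ($self, $other) = @_;
    return sqrt( ($self->x - $other->x)**2 + ($self->y - $other->y)**2 );
}

__PACKAGE__->meta->make_immutable;
1;

# (b) The same class with MooseX::Declare: the class block, the method
# keyword, and the signature replace the boilerplate above.
use MooseX::Declare;

class Point {
    has x => ( is => 'ro', isa => 'Num', required => 1 );
    has y => ( is => 'ro', isa => 'Num', required => 1 );

    method distance_to (Point $other) {
        return sqrt( ($self->x - $other->x)**2 + ($self->y - $other->y)**2 );
    }
}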
One minor aspect that may interest you (and I would be interested in an answer to this myself): the main problem I had with MooseX::Declare, which was important in my specific case, was that I was unable to pack my application as an executable, either with PAR::Packer or with ActiveState PerlApp.
I then used https://github.com/komarov/undeclare/blob/master/undeclare.pl to go back to Moose code.
As written above, other problems with MooseX::Declare:
- terrible error messages (really, useless, unless you use Method::Signatures::Modifiers)
- a performance hit (as you have noted), which in my opinion is not small (we profiled some big real-life apps)
- problems with TryCatch (if you use that, see: https://rt.cpan.org/Public/Bug/Display.html?id=82618)
- some incompatibilities in mixed (MooseX / non-Moose) environments, e.g. a failed $VERSION check
If you do not need the 'syntactic sugar' of MooseX, do not use it. Depending on the task you are working on, I'd go from 'bottom to top', e.g.:
1. Mouse + Method::Signatures
2. Moose
3. then perhaps MooseX
depending on what you want.
Upgrading is not too complicated in this order. However, if you come to the point where you really need MooseX, I'd rather suggest looking at some other, more OO-focused language that offers most of these features out of the box (e.g., horribile dictu, Ruby or Python); the features that are missing you can perhaps live without.
If you really want Moose, consider a bottom-to-top approach, starting with less sugar. I prefer using Mouse + Method::Signatures first. My scenario is that I am sitting on the back end, where we need very few objects and a shallow hierarchy, but sometimes fast accessors -- then we can still fall back to XSAccessor. Mouse + Method::Signatures seems to be a rather good compromise between syntactic help and speed. If my design really needs more, I simply upgrade to Moose.
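A minimal sketch of that Mouse + Method::Signatures starting point (the class, attribute, and method are made up):
package My::Backend::Job;
use Mouse;                  # Moose-compatible, much lighter at startup
use Method::Signatures;     # fast 'method'/'func' keywords

has retries => ( is => 'rw', isa => 'Int', default => 0 );

# $self is provided automatically by the method keyword.
method run ($payload) {
    $self->retries( $self->retries + 1 );
    return $payload;
}

1;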
I can confirm the speed penalty with MooseX::Declare not only in simple accessor benchmarks (https://metacpan.org/pod/App::Benchmark::Accessors), but also in a real-life application. This, combined with the cryptic error messages, rules MooseX::Declare out.

Is MooseX::Method::Signatures still suggested, or is there a better alternative?

My team recently decided to move away from MooseX::Declare. Is using MooseX::Method::Signatures on its own the best alternative?
Courtesy of Jon Rockway, who is too lazy to change his proxy:
My take is that for ease of debugging, it’s best not to use either. They don’t introduce many problems of their own (the slow startup time is Class->meta->make_immutable, which you would do anyway), but they do introduce problems when interacting with other tools. Devel::Cover, Devel::NYTProf, perlcritic, perltidy, etc., require various degrees of tweaking in order to be usable. You have to weigh the syntax sugar against the inability to use certain tools as easily.
So I think there are various options:
MooseX::Declare – less typing; ease in being accurate; ease of extensibility
MooseX::Method::Signatures – a little more typing, a little less accuracy; you have to worry about “use namespace::autoclean” or “no Moose”, you have to worry about make_immutable, you have to return a true value, etc.
MooseX::Params::Validate – now we’re back to normal Perl; same validations as MX::Method::Signatures, and same drawbacks. But now all your tools work. Only problem is that the syntax is ugly – my eyes, the goggles do nothing!
Params::Util – simple way to get "correct" validations, acceptable syntax, but less flexible; does not integrate with MooseX::Types the way MX::Params::Validate does
Doing it manually – simple, usually correct, easy to understand. But it's easy to be tempted into being lazy; allowing a CODE ref, but not objects with a CODE overload; "ref $foo" instead of "ref $foo && blessed $foo && $foo->isa('ClassName')"; etc.
So really, they’re all bad in their own special fun ways. Lately I have been doing a combination of manual validation and Params::Util, but I’m not willing to say that’s the best way to do things. I’m going to weight my “best practice” towards MX::Types + MX::Params::Validate, but for some reason, I’m not motivated to use it myself.
--Jon
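For reference, here is a minimal sketch of the MooseX::Params::Validate style Jon mentions (the class and method are invented). The validation is spelled out inside an ordinary sub, which is why the usual tools keep working:
package My::Service;
use Moose;
use MooseX::Params::Validate;

sub greet {
    my ( $self, %args ) = validated_hash(
        \@_,
        name  => { isa => 'Str' },
        count => { isa => 'Int', default => 1 },
    );
    return join ' ', ( $args{name} ) x $args{count};
}

1;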

Why use a post compiler?

I am battling to understand why a post compiler, like PostSharp, should ever be needed.
My understanding is that it just inserts code where attributed in the original code, so why doesn't the developer just do that code writing themselves?
I expect that someone will say it's easier to write since you can use attributes on methods and then not clutter them up with boilerplate code, but that can be done using DI or reflection and a touch of forethought without a post compiler. I know that since I have said reflection, the performance elephant will now enter the room - but I do not care about relative performance here, when the absolute performance for most scenarios is trivial (sub-millisecond to millisecond).
Let's try to take an architectural point of view on the issue. Say you are an architect (everyone wants to be an architect ;)
You need to deliver the architecture to your team:
a selected set of libraries, architectural patterns, and design patterns. As a part of your design, you say: "we will implement caching using the following design pattern:"
string key = string.Format("[{0}].MyMethod({1},{2})", this, param1, param2);
T value;
if ( !cache.TryGetValue( key, out value ) )
{
    using ( cache.Lock(key) )
    {
        if ( !cache.TryGetValue( key, out value ) )
        {
            // Do the real job here and store the value into variable 'value'.
            cache.Add( key, value );
        }
    }
}
This is a correct way to do caching. Developers are going to implement this pattern thousands of times, so you write a nice Word document telling them how you want the pattern to be implemented. Yeah, a Word document. Do you have a better solution? I'm afraid you don't. Classic code generators won't help. Functional programming (delegates)? It works fairly well for some aspects, but not here: you need to pass method parameters to the pattern. So what's left? Describe the pattern in natural language and trust that developers will implement it.
What will happen?
First, some junior developer will look at the code and say "Hm. Two cache lookups. Kinda useless. One is enough." (that's not a joke -- ask the DNN team about this issue). And your pattern ceases to be thread-safe.
As an architect, how do you ensure that the pattern is properly applied? Unit testing? Fair enough, but you will hardly detect threading issues this way. Code review? That's maybe the solution.
Now, what if you decide to change the pattern? For instance, you detect a bug in the cache component and decide to use your own. Are you going to edit thousands of methods? It's not just refactoring: what if the new component has different semantics?
What if you decide that a method is not going to be cached any more? How difficult will it be to remove caching code?
The AOP solution (whatever the framework is) has the following advantages over plain code:
It reduces the number of lines of code.
It reduces the coupling between components, therefore you don't have to change many things when you decide to change the logging component (just update the aspect), and therefore it improves the capacity of your source code to cope with new requirements over time.
Because there is less code, the probability of bugs is lower for a given set of features, therefore AOP improves the quality of your code.
So if you put it all together:
Aspects reduce both development costs and maintenance costs of software.
I have a 90 min talk on this topic and you can watch it at http://vimeo.com/2116491.
Again, the architectural advantages of AOP are independent of the framework you choose. The differences between frameworks (also discussed in this video) influence principally the extent to which you can apply AOP to your code, which was not the point of this question.
Suppose you already have a class which is well-designed, well-tested etc. You want to easily add some timing on some of the methods. Yes, you could use dependency injection, create a decorator class which proxies to the original but with timing for each method - but even that class is going to be a mess of repetition...
... or you can add reflection to the mix and use a dynamic proxy of some description, which lets you write the timing code once, but requires you to get that reflection code just right - which isn't as easy as it might be, especially if generics are involved.
... or you can add an attribute to each method that you want timed, write the timing code once, and apply it as a post-compile step.
I know which seems more elegant to me - and more obvious when reading the code. It can be applied even in situations where DI isn't appropriate (and it really isn't appropriate for every single class in a system) and with no other changes elsewhere.
AOP (PostSharp) is for attaching code to all sorts of points in your application, from one location, so you don't have to place it there.
You cannot achieve what PostSharp can do with Reflection.
I personally don't see a big use for it, in a production system, as most things can be done in other, better, ways (logging, etc).
You may like to review the other threads on this matter:
Anyone with Postsharp experience in production?
Other than logging, and transaction management what are some practical applications of AOP?
Aspect Oriented Programming: What do you use PostSharp for?
etc (search)
Aspects take away all the copy-and-paste code and make adding new features faster.
I hate nothing more than, for example, having to write the same piece of code over and over again. Gael has a very nice example regarding INotifyPropertyChanged on his website (www.postsharp.net).
This is exactly what AOP is for. Forget about the technical details, just implement what you are being asked for.
In the long run, I think we all should say goodbye to the way we are writing software now. It's tedious and plainly stupid to write boilerplate code and iterate manually.
The future belongs to a declarative, functional style held together by an object-oriented framework - with the cross-cutting concerns handled by aspects.
I guess the only people who will not get it soon are the guys who are still paid for lines of code.