Should my MVC controllers be object-oriented? - perl

I'm making a Perl website, and I'll be using Template Toolkit (for the view) and a whole bunch of objects for DB interaction and business logic (the model), but I'm wondering: should the controllers be OO?
I feel like they should, just for consistency, but it also feels like it might be a bit redundant when I'm not interacting with the controllers in an OO way. The controllers are called more in a fire-and-forget kind of way.
Thanks for any thoughts.

Yes, make the controllers object-oriented. You should be interacting with them as objects. You might want to extend or modify them later with subclasses. A lot of people get themselves into trouble by assuming that they'll only ever need one controller, and so they paint themselves into a corner by not planning for future flexibility.
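For example, a bare-bones OO controller might look something like this (all names here are invented for illustration; it's a sketch, not a prescription):

    package MyApp::Controller::Article;   # hypothetical controller class
    use strict;
    use warnings;

    sub new {
        my ($class, %args) = @_;
        # Hold whatever the controller needs: model objects, the template engine, etc.
        my $self = {
            model    => $args{model},      # e.g. a MyApp::Model::Article instance
            template => $args{template},   # e.g. a Template Toolkit object
        };
        return bless $self, $class;
    }

    # A "fire-and-forget" action is still just a method call.
    sub show {
        my ($self, $id) = @_;
        my $article = $self->{model}->find($id);
        my $output;
        $self->{template}->process('article.tt', { article => $article }, \$output)
            or die $self->{template}->error;
        return $output;
    }

    1;

Even if your dispatcher only ever calls these methods fire-and-forget, having them on an object means a later subclass can override show or add new actions without touching the dispatch code.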

In my opinion, if it feels redundant, you shouldn't use it.
OOP can have more cons than pros if you use it on a project that doesn't need it.
If it's just about consistency, just drop it. There are plenty of people who (for example) use the STL in C++ but write the rest of their code in a procedural way. If you find OOP overwhelming, go for the mixed approach you are thinking of using (OOP where needed, procedural for the rest), as long as your code doesn't become difficult to read because of it.

You need to look at Catalyst, which will save you a good deal of worry about what OO to use for controllers and how to implement it. It's not perfect but, if you like, it's a well beaten path through the design wilderness.
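To give a feel for what that path looks like, a Catalyst controller is an ordinary class whose methods become actions; the application, model and template names below are made up:

    package MyApp::Controller::Books;
    use strict;
    use warnings;
    use parent 'Catalyst::Controller';

    # Catalyst dispatches to methods based on their attributes.
    sub list :Path('/books') :Args(0) {
        my ($self, $c) = @_;
        $c->stash(
            books    => [ $c->model('DB::Book')->all ],   # assumes a DBIx::Class model
            template => 'books/list.tt',                  # rendered by a TT view
        );
    }

    1;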

Related

Object hierarchies in Objective-C

I've been introduced to an Objective-C codebase which has ~50,000 LoC and I'd estimate that 25% or so is duplicate code. Unfortunately, OO principles have been mostly ignored up to this point in the codebase in favor of copy and pasting logic. Yay!
I'm coming from a Java background, and a lot of this duplication is fixable with good old-fashioned object-oriented programming. Extracting shared logic into a base class feels like the correct solution in a lot of cases.
However, before I embark on creating a bunch of base classes and sharing common logic between derived classes, I thought I should stop and see if there are any other options available to me. After watching Ken Kocienda's 'Writing Easy-To-Change Code' WWDC session from 2011, he's advising me to keep object hierarchies as shallow as possible. He doesn't offer up any hard statistics as to why he has this opinion, so I'm wondering whether I'm missing out on something.
I'm not an Objective-C expert by any stretch of the imagination, so I'm wondering if there are any best practices when deciding on an object hierarchy. Basically, I'd like to get opinions on when you decide to stop creating base classes and start using composition instead of inheritance as a way of sharing code between classes.
Also, from a runtime performance standpoint, is there anything to sway me away from creating object hierarchies?
I wrote up some thoughts a while back on coming to iOS from other backgrounds, including Java. Some things have changed due to ARC. In particular, memory management is no longer so front-and-center. That said, all the things you used to do to make memory management easy (use accessors, use accessors, use accessors) are still equally valid in ARC.
@Radu is completely correct that you should often keep your class hierarchies fairly simple and shallow (as you read). Composition is often a much better approach in Cocoa than extensive subclassing (this is likely true in Java, too, but it's common practice in ObjC). ObjC also has no concept of an abstract method or class, which makes certain kinds of subclassing a little awkward. Rather than extracting shared logic into base classes (particularly abstract base classes), it is often better to extract it into a separate strategy object.
Look at UITableView and its use of delegates and datasources. Look at things like NSAttributedString which HAS-A NSString rather than IS-A. That's common and often keeps things cleaner. As with all large object hierarchies, keep LSP in mind at all times. I see a lot of ObjC design go sideways when someone forgets that a square is not a rectangle. Again, this is true of all languages, but it's worth remembering as you design.
Immutable (value) objects are a real win whenever you can use them.
The other piece you will quickly discover is that there are very few "safety decorations" like "final" or "protected" (there is an @protected, but it isn't actually that useful in practice and is seldom used). People from a Java or C++ background tend to fret about compiler enforcement of various access rules. ObjC doesn't have compiler enforcement of most protections (you can always send any message you want to any object at runtime). You just use consistent naming conventions and don't go poking around at private methods. Programmer discipline takes the place of compiler enforcement. In practice, it works just fine that way in the vast majority of cases.
That said, ObjC has a lot of warnings, and you absolutely must eliminate all warnings. Most ObjC warnings are actually errors.
I've strayed a little from the specific question of object hierarchies, but hopefully it's useful.
One major problem with deep hierarchies in Objective-C is that Xcode doesn't help you at all with understanding or managing them. Another is simply that just about anything in Objective-C ends up about twice as complex as the equivalent in Java, so you need to work harder to keep things simple.
But I find composition in Objective-C to be awkward (though I can't say exactly why), so there is no "perfect" answer.
I have observed that small subroutines are much rarer in Objective-C than in Java, and one is much more likely to see code duplicated between mostly-identical view controllers and the like. I think a big part of this is simply the development tools and the relative awkwardness of creating new classes.
PS: I had to rework an app that contained roughly 55K lines, close as we could count. As you found, there was likely about 25% duplication, but there was also another 25% or so of totally dead code. (Thankfully, that app has been pretty much abandoned since.)

Need some advice on starting a New Life with MVC 2 and which Tools to use for RAD in MVC2?

I have finally decided to hop on the MVC 2 train.
I have been doing a lot of reading lately, and the following is the architecture which I think will be good enough for most business web applications.
Layered Architecture:
Model (layer which communicates with the database) - EF4
Repository (layer which communicates with the Model and includes all the queries)
Business Layer (validations, helper functions, calls to the repository)
Controllers (control the flow of the application and are responsible for providing data to the view from the Business Layer)
Views (UI)
Now I have decided to create a separate project for each layer (just to respect the separation of concerns principle; I know it's not necessary, but I think it makes the project look more professional :-))
I am using the AutoMetaData T4 template for validation. I also came across FluentValidation but can't find much on it. Which one should I go with?
Which View Engine to go for?
The Razor view engine was love at first sight. But it's still in beta, and I think it won't be easy to find examples of it. Am I right?
Spark... I can't find much on it either, and I don't want to get stuck somewhere in the middle crying for help when there is no one to listen... :-(
T4 templates auto-generate views, and I can customize them to generate the views the way I want. Will this be possible with Razor and Spark, or will I have to create the views manually?
Is there any way to auto-generate the repositories?
I would really appreciate it if I can see a project based on the architecture above.
Kindly let me know if it's a good architecture to follow.
I also have some confusion about the business layer: is it really necessary?
This is a very broad question. I decided to use Fluent NHibernate's autoconfig feature for a greenfield application, and was quite impressed. A lot of my colleagues use CakePHP, and it needed very little configuration to get it to generate a database schema compatible with the default conventions Cake uses, which is great for us.
I highly suggest the book ASP.NET MVC2 in Action. This book does a good job at covering the ecosystem of libraries that are used in making a maintainable ASP.NET MVC application.
As for the choice of view engines, that can depend on your background. I personally prefer my view to look as much like the HTML as possible, so I would choose Spark. On the other hand if you are used to working with ASP.NET classic, the WebForms view engine may get you up and running fastest.
Kindly let me know if it's a good architecture to follow?
It's a fine start - the only thing I would suggest you add is a layer of abstraction between your Business Logic and Data Access (i.e: Dependency Inversion / Injection) - see this: An Introduction to Dependency Inversion.
I know it's not necessary, but I think it makes the project look more professional :-)
Ha! Usually you'll find that a lot of "stuff" isn't necessary - right up until the moment when it is, at which point it's usually too late.
Re view engines: I'm still a newbie to ASP.NET MVC myself, and so am not familiar with the view engines you're talking about; if I were you, I'd dream up some test scenarios and then try tackling them with each product so you can directly compare them. Often you need to take things for a test drive to get comfortable with them; this might take time, but it's usually worth it.
Edit:
If I suggest this layer to my PM and give him the above two reasons, then I don't think he will accept it
Firstly, PMs are not tech leads (usually); you have responsibility for the design of the solution - not the PM. This isn't uncommon; in my experience, most of the time the PM isn't even aware they are encroaching on turf that isn't theirs. It's not that I'm a "political land grabber", but I just tend to think of "separation of concerns" and, well, I'm sure you understand.
As the designer / architect, it's up to you to interpret requirements and (taking business priorities into account) come up with a solution that provides the best 'platform' going forward.
(Regarding DI) My question is: is it really worth it?
If you put a gun to my head I would say yes, however the real world is a little more complex.
If you answer yes to any of these questions, then it's more likely that using DI would be a good idea:
The system is non-trivial
The expected life of the system is more than (not sure what the right figure is here, there probably isn't one, so I'm going to put a stake in the ground at) 2 years.
The system and/or its requirements are fluid.
Splitting up the work (BL / DAL) into different teams would be advantageous to the project (perhaps you're part of a distributed team).
The system is intended for a market with a diverse technical landscape (e.g: not everyone will want to use MS SQL).
You want to perform quality testing (this would make it easier).
The system is large / complex, so splitting up functionality and putting it into other systems is a possibility.
You want to offer more than one way to store data (say a file based repository for free, and a database driven repository for a fee).
Business drivers / environment are volatile - what if they came to you and said "this is excellent but now we want to offer a cloud-based version, can you put it on Azure?"
I'd also like to point out that whilst there's definitely a learning curve involved, it's not that huge, and once you're up to speed you'll still be at least as fast as you are now; or at worst you'll take a little longer but you'll be providing much more value (with relatively less effort).
In terms of how much effort is involved...
One-Off Tasks (beyond getting the team up to speed):
Writing a provider loader or picking a DI framework. Once you've done this it will be reusable in all your projects.
'New' Common Tasks (assuming you're following the approach taken in the article):
Defining the interface (on paper) - this is something you'll be doing right now anyway, except that you might not realise it. Basically it's OO design, but as it's going to be the formal interface between two or more packages, you need to give it some thought (and yes, you can still refactor it - but ideally the interface should be "stable" and not change a lot; if it does change, it's better to 'add' than to 'remove or change' existing members).
Writing the interface code. This is very fast (minutes, not hours), as you're not writing any implementation; and when you go to implement, you can use tools provided by your IDE to generate code stubs based on the interface.
Things you do now that you'd do differently:
Instantiating a variable (in your BL classes) to hold the provider, probably via a factory. Writing this shouldn't take long (again, minutes not hours), and it's fairly simple code to copy, paste and refactor where required.
Writing the DAL code: should be the same as before.
Sometimes it is much easier to learn patterns from code: Sharp Architecture is a concrete implementation of good practices in MVC, using DDD.

Should I use interface builder or not?

I'd like to know more about the pros and cons of using interface builder when developing iPhone/iPad apps.
I've written a fairly complex and customized app that's on the app store right now, but all of the interfaces are hand coded as they are fairly complex. I've customised the navigation and tab bars with backgrounds, table view cells are manually drawn for speed, and some views are complex and scalable with many subviews.
I'm pondering whether or not to start using interface builder but I'm not sure to what extent I'll use it and whether it's worth it at all. Is it quicker? Can things still be easily customised?
Any advice would be most welcome!
There is absolutely no reason not to use it. One thing that scares people off is their experience with other GUI tools, things that generated code for them or some other mess. Then the problem becomes that it is hard to round-trip the interface: you cannot easily modify things once they are generated because of the complexity of pushing those changes back into the emitted code.
Interface Builder does not generate code, it uses NSArchiver to read and write an actual object graph for the GUI. This has many benefits, starting with the fact that you can easily round-trip the interface and make incremental changes. It really is all good, use it. :-)
Personally I've found Interface Builder quite tricky to ramp up on, and sometimes it doesn't expose all the properties I want to edit (although this may have changed in the newer versions), so generally I've tended to create my UIs in code.
If you do use Interface Builder, make sure to consider localization. Apple's iPhone Developer docs recommend that the NIB be a localized resource that gets translated. That way the translator can see if the new text fits in the view. Unfortunately this means the translator needs to be capable of opening NIB files and editing them (or a developer needs to get involved in the translation process).
Personally, I prefer providing localized text resources and setting the text to the UI in code. I then provide comments in the Localizable.strings file saying how long the text can be, and providing any context the translator might need.
There are no cons.
Why not use it? It makes everything easier :)

What are the pros and cons of using the two different programming styles of CGI.pm with Perl?

I am in a Web Scripting class at school and am working on my first assignment. I tend to overdo things and delve deeper into my subject than what is required in my classes. Right now I am researching CGI.pm to do my HTTP requests and it says there are two programming styles for CGI.pm:
An object-oriented style
A function-oriented style
Unless I overlooked the clear answer, or am not knowledgeable enough to discern it for myself from the documentation provided at http://perldoc.perl.org/CGI.html, I just don't know what the pros and cons are of using these two different styles.
With that being said, what are the pros and cons of using the two different styles? Which one is more commonly used? As for the object-oriented style, it says I can only use one CGI object at a time. Why is that?
Thanks for all your help. You have all made studying Computer Science very enjoyable, satisfying, and rewarding for me. =D
Behind the scenes, CGI.pm is doing the same thing despite the styles. The functional interface actually uses a secret object that you don't see.
For many small-scale CGI projects, you're probably never going to need more than one CGI object at a time, so the functional interface is fine. This might be the more common style, but only because most people make small scripts for very specific tasks. If you have a lot of other stuff going on, you might not like CGI.pm importing a long list (and it is long) of function names into your script. Some of the function names might clash with those other modules want to import.
I, however, always use the object-oriented interface. I don't have to worry about name collisions, and it's apparent where any method came from since you see its object. It's also easy to pass the object as arguments to other parts of large applications, etc.
Some people might complain about the extra typing, but that's never been the slow part of programming for me. I've been doing Perl for a long time and I don't mind the syntax. However, I only use CGI to get the input and maybe send the output. I don't mess with any of the HTML stuff.
When it talks about one CGI.pm object at a time, it's referring to access to the input. Once you've read STDIN, for instance, another CGI.pm object won't be able to read it. You can have as many objects as you like, though. They just won't share data, and the first one gets all of the POST data.
You can actually use a mixture though. You can import some things, like :html, but still use the OO interface to deal with the input.
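To make the two styles concrete, here is the same trivial handler written both ways (the parameter name is made up, and in practice you would normally pick one style per script):

    # Function-oriented style: CGI.pm exports functions that act on a hidden default object.
    use strict;
    use warnings;
    use CGI qw(:standard);

    print header('text/html');
    print p( 'Hello, ' . ( param('name') || 'world' ) );

    # Object-oriented style: create the object explicitly; it owns the parsed request data.
    use strict;
    use warnings;
    use CGI;

    my $q = CGI->new;
    print $q->header('text/html');
    print $q->p( 'Hello, ' . ( $q->param('name') || 'world' ) );

The OO version costs a few extra characters per call, but nothing is imported into your namespace, and the $q object can be passed around a larger application.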
I strongly recommend using the object interface.
Will it be absolutely required for your classwork? No, in fact it is arguably overkill for even small production projects.
However, if you are serious about learning to use CGI.pm for larger scale projects you will need to learn the object method. If you reach the point of needing two objects you will have to use the object interface. Programming, like most everything else, gets better with practice. Practicing now on relatively easier problems will help you be ready for more complex ones.
In fact, I'd recommend it as a general rule in programming (although there are exceptions): when faced with two methods of using a particular tool, make a habit of using the one more likely to be used in production code and/or the one that is the correct answer for more of the problem space.

How can I convince people to use proper object oriented Perl?

I have been using an object-oriented MVC architecture for a web project, and all the models are OO Perl. But I have noticed a couple of people on the team are reverting to procedural techniques and are essentially using "objects" as dumping grounds for related functions. Their functions basically read/write directly to/from the database.
What's the best way to convince them this is the wrong way? Are there some good tutorials I can get them to read?
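To illustrate what I mean (package and column names invented), they write something like the first package below, where the "object" is never instantiated, instead of a real model class like the second:

    # Dumping-ground style: just functions that hit the database directly
    package MyApp::Customer;
    sub get_customer_name {
        my ($dbh, $id) = @_;
        return $dbh->selectrow_array(
            'SELECT name FROM customers WHERE id = ?', undef, $id );
    }

    # What the rest of the models look like: an object with state and behaviour
    package MyApp::Model::Customer;
    sub new {
        my ($class, %args) = @_;
        return bless { dbh => $args{dbh}, id => $args{id} }, $class;
    }
    sub name {
        my ($self) = @_;
        return $self->{dbh}->selectrow_array(
            'SELECT name FROM customers WHERE id = ?', undef, $self->{id} );
    }

    1;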
I can think of three possible reasons why some team members may not produce nice OO code:
They don't care
They care and try to do it right, but lack the skills to do it right
They are doing it right, or at least right enough! Some matters of design are matters of opinion. Go back and look at some of your own old code, code that you thought was quite good, there's a good chance that you would revise some of it now.
My approach is to make the assumption that everybody wants to do the job right, so I assume (or pretend to assume) that folks care. One tries to build an ethos of wanting to do it right.
So that leaves building our skills, theirs and yours (and mine). Code reviews seem like one obvious way to do this. Talk about alternatives. Also maybe pair-programming?
OOP is just a tool for accomplishing great things like more reliable, usable and maintainable software. If you cannot convince your team to write tests, give up on strict OOP.
Are they all out of step with you? Perhaps it's just you who is out of step!
Lest I seem facetious, I mean what are the answers to the following questions:
What has the team agreed should be the approach?
Whose responsibility is it to enforce purity? If you don't go for enforced purity, whose responsibility is it to encourage purity? If it's not you, have a look in the mirror and make sure you are not the annoying noob on the team telling the old sweats how to do the jobs they've been doing since you were in Perl baby-clothes.
What mechanisms do you have in place for sorting out such team issues? Regular code reviews, pair programming, bare-knuckle fights?
Asking them to read some tutorials is likely to be a waste of breath. What are your arguments for doing things the OO way? As Boris has observed, OO is not an end in itself, it is only a means to an end, and not the only means.
There are probably more things you want to consider in your situation, but these should get you started.