Pattern for sharing functionality between controllers - iPhone

I'm writing an iPhone application and I find that there are three controllers in the application that have very similar functionality. They are similar enough that it doesn't make sense to separate them into three separate classes, so I have a "mode" property that clients of the class use to specify how the controller should behave in certain situations. But again, maybe 95% of the functionality is identical. There are three separate modes with only minor differences in behavior.
This feels messy to me. Is there a better pattern for this?

You could try inheritance ... the three controllers can all inherit from a common base that implements the shared functionality.
Aside from that, you could look at the Strategy Pattern.
Which one you use depends on what your code is doing and what the bits that change look like :-)
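As a rough sketch of the Strategy route (Java here for illustration, though the question is about iPhone code; all the names below are invented): the controller keeps the ~95% of shared behavior and delegates the varying bits to a strategy object instead of branching on a mode flag.

```java
// Hypothetical sketch of the Strategy pattern replacing a "mode" property.
// ModeBehavior, EditMode, ViewMode and SharedController are invented names.
interface ModeBehavior {
    String titleForScreen();
}

class EditMode implements ModeBehavior {
    public String titleForScreen() { return "Edit"; }
}

class ViewMode implements ModeBehavior {
    public String titleForScreen() { return "View"; }
}

class SharedController {
    private final ModeBehavior mode;

    SharedController(ModeBehavior mode) { this.mode = mode; }

    // The shared logic lives here; only the part that varies is delegated.
    String render() {
        return "[" + mode.titleForScreen() + "] common layout";
    }
}
```

Each former "mode" becomes one small strategy class, so adding a fourth mode means adding a class rather than another branch in the controller.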

A similar approach is to avoid inheritance (i.e. keep a single controller for all three screens) and use the State pattern to define the behavior specific to each screen.

Related

Is there a way to import common feature file to another feature file in cucumber

Is there a way to import one Cucumber feature file into another, so that I can move my repeated logic/actions/business validations for different flows into a common feature file?
Note: I am using the Background option for a few things, like launching the application, in every feature file. Come to think of it, even that background is duplicated. :)
Many Thanks.
There is no way to include one feature file in another.
If you could, then Gherkin could be considered to be a programming language. Gherkin isn't a programming language and thus lacks features like functions or modules.
What can you do about your repeated backgrounds then? My approach would probably be to see if I could move the common initialization you do in the background down the stack. I would see if I could implement some helpers that would perform the same steps and then either minimize the background to something like
Given the world is prepared
in a background. Or just make sure that the preparation was done first in the scenarios that needed it. Maybe even hide it so the call is done in the first step. This would essentially move the background away from the feature file and hide it from your business stakeholders.
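A minimized background along those lines might look like this (a sketch; "the world is prepared" is the step suggested above, and the setup it hides is assumed to live in helpers behind the step definition):

```gherkin
Feature: Checkout
  Background:
    # One abstract step; the step definition behind it calls helpers that
    # launch the application, log in, and do any other shared setup.
    Given the world is prepared

  Scenario: Customer sees an empty cart
    When the customer opens the cart
    Then the cart is empty
```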
One thing to consider, though, is whether the background is important for your business stakeholders. Do they care about the backgrounds, or is it just noise for them? If they are important, then don't hide the backgrounds. If they aren't important, then hide them as much as possible.
You can deal with this by abstraction and background.
Given any number of prerequisites, if you group them together and give them a name, you can call them in one step. In the Cucumber space it's far more effective to do this than to import.
BDD is all about working at different levels of abstraction. In particular when you create some specific behaviour you work at a detailed level. Once the behaviour is created you then use that behaviour in a more abstract way.
Importing just gives you access to the detail. This is a bad thing because you end up working with different levels of abstraction in the same place which is both dangerous and confusing.

How to choose modified instances in AspectJ?

I am studying aspect-oriented programming and I want to use AspectJ to create several aspects that change a class. The problem is that all instances of the class are being changed by all aspects, and I want to choose which aspects apply to each instance.
Another problem: how do I choose the order of the advice from different aspects for the same method?
Thanks
I'll only answer partially, for the time being.
Concerning your second question:
how to choose the order of the advice from different aspects for the same method?
please look at declare precedence.
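As a sketch (code-style AspectJ; the aspect names are invented), declaring precedence looks like this:

```aspectj
// If Security and Logging both advise the same join point, this makes
// Security the higher-precedence aspect, so its before() advice runs first.
public aspect Ordering {
    declare precedence: Security, Logging;
}
```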
This may not be the answer you are looking for, but it is by design that all instances of a class are modified by an aspect. You should be thinking that AspectJ affects the semantics of a program as a whole, rather than pieces of it.
AspectJ is implemented through bytecode manipulation, so it would not be possible to make the changes for individual instances even if that were part of the AspectJ spec.
But perhaps there is another way of implementing what you need; wanting to weave into only some individual instances suggests that something is off in your design.
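One common workaround (a sketch with invented names, not a true per-instance weave): let the instances you care about opt in through a marker interface, and bind on it in the pointcut so the advice only fires for marked objects.

```aspectj
public interface Audited {}

public aspect AuditAspect {
    // Matches executions of Service.doWork(..), but only when the
    // executing object also implements the Audited marker interface.
    before(Audited a): execution(* Service.doWork(..)) && this(a) {
        System.out.println("audited call on " + a);
    }
}
```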

Catalyst Controller Questions

I just used catalyst for my first large project and I was left wondering if I used catalyst in the way it was meant to be used.
I have Root.pm and inside of that file I put multiple url handlers.
Is it a best practice to have one controller per URL, or should a grouping be considered?
One of the beauties of Catalyst is its flexibility. You can do this however it best suits your application.
If you only have a handful of URLs to support, then there's nothing intrinsically wrong with putting them all in Root.pm, particularly if there's no depth, i.e. localhost:3000/foo and localhost:3000/bar.
However, as soon as you start to have longer URLs such as localhost:3000/foo/bar/baz/quux where baz and quux are arguments to bar, you'll see the benefit of separating out a Foo.pm that contains an action (sub) called 'bar'. (And that's before we get into the joys of Chained Actions...)
Although there are ways that you can achieve the equivalent of a Rails style routing table, it's not generally considered to be a good idea. Not having a routes table is an intrinsic feature/benefit of Catalyst over other frameworks.
There's a good discussion of this on pages 13-14 of The Definitive Guide to Catalyst. If you don't have this book, you should.

Should my MVC controllers be object-oriented?

I'm making a Perl website, and I'll be using Template Toolkit (for the view) and a whole bunch of objects for DB interaction and business logic (the model), but I'm wondering: should the controllers be OO?
I feel like they should, just for consistency, but it also feels it might be a bit redundant when I'm not interacting with the controllers in an OO way. The controllers are called more in a fire-and-forget kind of way.
Thanks for any thoughts.
Yes, make the controllers object-oriented. You should be interacting with them as objects. You might want to extend or modify them later with subclasses. A lot of people get themselves into trouble by assuming that they'll only ever need one controller, so they paint themselves into a corner by not planning for future flexibility.
In my opinion, if it feels redundant, you shouldn't use it.
OOP can have more cons than pros if you use it on a project that doesn't need it.
If it's just about consistency, drop it. There are plenty of people who (for example) use the STL in C++ but write the rest of their code in a procedural way. If you find OOP overwhelming, go for the mixed approach you are thinking of using (OOP where needed, procedural for the rest), as long as your code doesn't become difficult to read because of it.
You need to look at Catalyst, which will save you a good deal of worry about what OO to use for controllers and how to implement it. It's not perfect but, if you like, it's a well beaten path through the design wilderness.

Should I still code to the interface even if I am ONLY EVER going to have ONE implementation?

I think the title speaks for itself guys - why should I write an interface and then implement a concrete class if there is only ever going to be 1 concrete implementation of that interface?
I think you shouldn't ;)
There's no need to shadow all your classes with corresponding interfaces.
Even if you're going to make more implementations later, you can always extract the interface when it becomes necessary.
This is a question of granularity. You cannot clutter your code with unnecessary interfaces but they are useful at boundaries between layers.
Someday you may try to test a class that depends on this interface. Then it's nice that you can mock it.
I'm constantly creating and removing interfaces. Some were not worth the effort and some are really needed. My intuition is mostly right but some refactorings are necessary.
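As an example of the testing point (Java, with invented names): if a dependency sits behind an interface, the test can supply a hand-rolled fake, no mocking library needed.

```java
// Sketch: the interface exists mainly so a test can substitute a fake.
// Clock, FixedClock and Greeter are invented names.
interface Clock {
    int hourOfDay();
}

class Greeter {
    private final Clock clock;
    Greeter(Clock clock) { this.clock = clock; }

    String greet() {
        return clock.hourOfDay() < 12 ? "Good morning" : "Good afternoon";
    }
}

// Hand-rolled mock used only in tests; production code never sees it.
class FixedClock implements Clock {
    private final int hour;
    FixedClock(int hour) { this.hour = hour; }
    public int hourOfDay() { return hour; }
}
```

The production code would pass in a real clock; the test pins the time to make the behavior deterministic.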
The question is, if there is only going to ever be one concrete implementation, should there be an interface?
YAGNI - You Ain't Gonna Need It (from Wikipedia):
According to those who advocate the YAGNI approach, the temptation to write code that is not necessary at the moment, but might be in the future, has the following disadvantages:
* The time spent is taken from adding, testing or improving necessary functionality.
* The new features must be debugged, documented, and supported.
* Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now may prevent implementing a necessary feature later.
* Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work right, even if it eventually is needed.
* It leads to code bloat; the software becomes larger and more complicated.
* Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it.
* Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards creeping featurism.
Two somewhat conflicting answers to your question:
* You do not need to extract an interface from every single concrete class you construct, and
* most Java programmers don't build as many interfaces as they should.
Most systems (even "throwaway code") evolve and change far past what their original design intended for them. Interfaces help them to grow flexibly by reducing coupling. In general, here are the warning signs that you ought to be coding to an interface:
* Do you even suspect that another concrete class might need the same interface (for example, if you suspect your data access objects might need an XML representation down the road -- something that I've experienced)?
* Do you suspect that your code might need to live on the other side of a Web Services layer?
* Does your code form a service layer for some outside client?
If you can honestly answer "no" to all these questions, then an interface might be overkill. Might. But again, unforeseen consequences are the name of the game in programming.
You need to decide what the programming interface is, by specifying the public functions. If you don't do a good job of that, the class would be difficult to use.
Therefore, if you decide later you need to create a formal interface, you should have the design ready to go.
So, you do need to design an interface, but you don't need to write it as an interface and then implement it.
I use a test driven approach to creating my code. This will often lead me to create interfaces where I want to supply a mock or dummy implementation as part of my test fixture.
I would not normally create any code unless it has some relevance to my tests, and since you cannot easily test an interface, only an implementation, that leads me to create interfaces only when I need them for supplying dependencies to a test case.
I will also sometimes create interfaces when refactoring, to remove duplication or improve code readability.
You can always refactor your code to introduce an interface if you find out you need one later.
The only exception to this would be if I were designing an API for release to a third party - where the cost of making API changes is high. In this case I might try to predict the type of changes I might need to do in the future and work out ways of creating my API to minimise future incompatible changes.
One thing which no one has mentioned yet is that sometimes an interface is necessary to avoid dependency issues: you can have the interface in a common project with few dependencies, and the implementation in a separate project with lots of dependencies.
"Only Ever going to have One implementation" == famous last words
It doesn't cost much to make an interface and then derive a concrete class from it. The process of doing it can make you rethink your design and often leads to a better end product. And once you've done it, if you ever find yourself eating those words - as frequently happens - you won't have to worry about it. You're already set. Whereas otherwise you have a pile of refactoring to do and it's gonna be a pain.
Edited to clarify: I'm working on the assumption that this class is going to be spread relatively far and wide. If it's a tiny utility class used by one or two other classes in a single package, then yeah, don't worry about it. If it's a class that's going to be used in multiple packages by multiple other classes, then my previous answer applies.
The question should be: "how can you ever be sure, that there is only going to ever be one concrete implementation?"
How can you be totally sure?
By the time you've thought this through, you would already have created the interface and be on your way, without assumptions that might turn out to be wrong.
With today's coding tools (like Resharper), it really doesn't take much time at all to create and maintain interfaces alongside your classes, whereas discovering that now you need an extra implementation and to replace all concrete references can take a long time and is no fun at all - believe me.
A lot of this is taken from a Rainsberger talk on InfoQ: http://www.infoq.com/presentations/integration-tests-scam
There are 3 reasons to have a class:
* It holds some Value
* It helps Persist some entity
* It performs some Service
The majority of services should have interfaces. An interface creates a boundary, hides the implementation, and you already have a second client: all of the tests that interact with that service.
Basically if you would ever want to Mock it out in a unit test it should have an interface.