Should I still code to the interface even if I am ONLY EVER going to have ONE implementation?

I think the title speaks for itself guys - why should I write an interface and then implement a concrete class if there is only ever going to be 1 concrete implementation of that interface?

I think you shouldn't ;)
There's no need to shadow all your classes with corresponding interfaces.
Even if you're going to make more implementations later, you can always extract the interface when it becomes necessary.

This is a question of granularity. You shouldn't clutter your code with unnecessary interfaces, but they are useful at the boundaries between layers.
Someday you may try to test a class that depends on this interface. Then it's nice that you can mock it.
I'm constantly creating and removing interfaces. Some were not worth the effort and some are really needed. My intuition is mostly right but some refactorings are necessary.

The question is, if there is only going to ever be one concrete implementation, should there be an interface?

YAGNI - You Ain't Gonna Need It from Wikipedia
According to those who advocate the YAGNI approach, the temptation to write code that is not necessary at the moment, but might be in the future, has the following disadvantages:
* The time spent is taken from adding, testing or improving necessary functionality.
* The new features must be debugged, documented, and supported.
* Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now may prevent implementing a necessary feature later.
* Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work right, even if it eventually is needed.
* It leads to code bloat; the software becomes larger and more complicated.
* Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it.
* Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards creeping featurism.

Two somewhat conflicting answers to your question:
You do not need to extract an interface from every single concrete class you construct, and
Most Java programmers don't build as many interfaces as they should.
Most systems (even "throwaway code") evolve and change far past what their original design intended for them. Interfaces help them to grow flexibly by reducing coupling. In general, here are the warning signs that you ought to be coding to an interface:
Do you even suspect that another concrete class might need the same interface (like, if you suspect your data access objects might need XML representation down the road -- something that I've experienced)?
Do you suspect that your code might need to live on the other side of a Web Services layer?
Does your code form a service layer for some outside client?
If you can honestly answer "no" to all these questions, then an interface might be overkill. Might. But again, unforeseen consequences are the name of the game in programming.
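To make that concrete, here is a minimal Java sketch of coding to an interface at such a boundary; the names (ReportStore, SqlReportStore, ReportService) are invented for the example:
// A boundary interface with, for now, exactly one implementation.
public interface ReportStore {
    void save(String reportId, String content);
    String load(String reportId);
}

// Today's only concrete implementation; an XML- or web-service-backed one could follow later.
public class SqlReportStore implements ReportStore {
    @Override
    public void save(String reportId, String content) {
        // persist to the database (omitted)
    }

    @Override
    public String load(String reportId) {
        // read from the database (omitted)
        return "";
    }
}

// Callers depend only on the interface, so swapping or mocking the store is trivial.
public class ReportService {
    private final ReportStore store;

    public ReportService(ReportStore store) {
        this.store = store;
    }

    public void archive(String reportId, String content) {
        store.save(reportId, content);
    }
}
If a second implementation ever shows up, only the wiring changes; ReportService never finds out.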

You need to decide what the programming interface is, by specifying the public functions. If you don't do a good job of that, the class will be difficult to use.
Therefore, if you decide later you need to create a formal interface, you should have the design ready to go.
So, you do need to design an interface, but you don't need to write it as an interface and then implement it.

I use a test driven approach to creating my code. This will often lead me to create interfaces where I want to supply a mock or dummy implementation as part of my test fixture.
I would not normally create any code unless it has some relevance to my tests, and since you cannot easily test an interface, only an implementation, that leads me to create interfaces if I need them when supplying dependencies for a test case.
I will also sometimes create interfaces when refactoring, to remove duplication or improve code readability.
You can always refactor your code to introduce an interface if you find out you need one later.
The only exception to this would be if I were designing an API for release to a third party, where the cost of making API changes is high. In this case I might try to predict the kinds of changes I might need to make in the future and work out ways of designing my API to minimise future incompatible changes.
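As a rough sketch (Java with JUnit 4) of how that test-driven flow produces an interface; Mailer, WelcomeService and RecordingMailer are made-up names, and everything is nested inside the test class only to keep the sketch self-contained:
import org.junit.Assert;
import org.junit.Test;

public class WelcomeServiceTest {

    // The interface exists because the test needs to substitute a fake collaborator.
    interface Mailer {
        void send(String to, String body);
    }

    // Code under test: it depends on the interface, not on a real mail server.
    static class WelcomeService {
        private final Mailer mailer;
        WelcomeService(Mailer mailer) { this.mailer = mailer; }
        void welcome(String email) { mailer.send(email, "Welcome!"); }
    }

    // Hand-rolled test double; no real mail is ever sent.
    static class RecordingMailer implements Mailer {
        String lastRecipient;
        public void send(String to, String body) { lastRecipient = to; }
    }

    @Test
    public void sendsWelcomeMail() {
        RecordingMailer mailer = new RecordingMailer();
        new WelcomeService(mailer).welcome("a@example.com");
        Assert.assertEquals("a@example.com", mailer.lastRecipient);
    }
}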

One thing which no one has mentioned yet is that sometimes it is necessary in order to avoid dependency issues. You can have the interface in a common project with few dependencies and the implementation in a separate project with lots of dependencies.

"Only Ever going to have One implementation" == famous last words
It doesn't cost much to make an interface and then derive a concrete class from it. The process of doing it can make you rethink your design and often leads to a better end product. And once you've done it, if you ever find yourself eating those words - as frequently happens - you won't have to worry about it. You're already set. Whereas otherwise you have a pile of refactoring to do and it's gonna be a pain.
Edited to clarify: I'm working on the assumption that this class is going to be spread relatively far and wide. If it's a tiny utility class used by one or two other classes in a single package then yeah, don't worry about it. If it's a class that's going to be used in multiple packages by multiple other classes then my previous answer applies.

The question should be: "how can you ever be sure, that there is only going to ever be one concrete implementation?"
How can you be totally sure?
By the time you've thought this through, you could already have created the interface and be on your way, without assumptions that might turn out to be wrong.
With today's coding tools (like Resharper), it really doesn't take much time at all to create and maintain interfaces alongside your classes, whereas discovering that now you need an extra implementation and to replace all concrete references can take a long time and is no fun at all - believe me.

A lot of this is taken from a Rainsberger talk on InfoQ: http://www.infoq.com/presentations/integration-tests-scam
There are 3 reasons to have a class:
It holds some Value
It helps Persist some entity
It performs some Service
The majority of services should have interfaces. An interface creates a boundary, hides the implementation, and you already have a second client: all of the tests that interact with that service.
Basically if you would ever want to Mock it out in a unit test it should have an interface.


How to choose modified instances in AspectJ?

I am studying aspect oriented programming and I want to use AspectJ to create several aspects to change a class. The problem is that all instances of the class are being changed by all aspects, and I want to choose the aspects to use for each instance.
Another problem: how do I choose the order of the advice from different aspects for the same method?
Thanks
I'll only answer partially, for the time being.
Concerning your second question:
how to choose the order of the advice from different aspects for the same method?
Please have a look at declare precedence.
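For example, with annotation-style AspectJ the ordering can be declared once on a (possibly empty) coordinating aspect; the aspect names below are placeholders:
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.DeclarePrecedence;

// At join points advised by both aspects, the first-listed aspect has higher precedence:
// its before advice runs first and its after advice runs last.
@Aspect
@DeclarePrecedence("com.example.LoggingAspect, com.example.SecurityAspect")
public class AspectOrdering {
}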
This may not be the answer you are looking for, but it is by design that all instances of a class are modified by an aspect. You should think of AspectJ as affecting the semantics of a program as a whole, rather than pieces of it.
AspectJ is implemented through byte code manipulation, so it would not be possible to make the changes for individual instances even if that were part of the AspectJ spec.
But perhaps there is another way of implementing what you need, since requiring weaving into only individual instances suggests that there is something wrong with your implementation.

Alternative to Singleton in Objective-C for better application design

It seems a lot of Objective-C code is using singletons nowadays.
While a lot of people complain about singletons, e.g. Google (Where Have All the Singletons Gone?), their fellow engineers still use them anyway: http://code.google.com/mobile/analytics/docs/iphone/
I know we already have some answers on Stack Overflow, but they are not totally specific to Objective-C as a dynamic language: Objective-C has categories, while many other languages do not.
So what is your opinion? Do you still use singletons? If so, how do you make your app more testable?
Updated: I think we need to use code as an example for a more concrete discussion; so many discussions on SO are theory-based without a single line of code.
Let's use the Google Analytics iOS SDK as an example:
// Initialization
[[GANTracker sharedTracker] startTrackerWithAccountID:@"UA-0000000-1"
                                        dispatchPeriod:kGANDispatchPeriodSec
                                              delegate:nil];
// Track page view
[[GANTracker sharedTracker] trackPageview:@"/app_entry_point"
                                 withError:&error];
The beauty of the above code is that once you have initialized the tracker with "startTrackerWithAccountID:", you can call "trackPageview:" throughout your app without passing configuration around.
If you think Singleton is bad, can you improve the above code?
Many thanks for your input, have a happy Friday.
This post is likely to be downvote-bait, but I don't really understand why singletons get no love. They're perfectly valid, you just have to understand what they're useful for.
In iOS development, you have one and only one instance of the application you are currently running. You're only one application, right? You're not two or zero applications, are you? So the framework provides you with a UIApplication singleton through which to get at application-level OS and framework features. It models something appropriately to have that be a singleton.
If you've got data fields of which there can and should be only one, and you need to get to them from all over the place in your app, there's totally nothing wrong with modeling that as a singleton too. Creating a singleton as a globals bucket is probably a misuse of the pattern, and I think that's probably what most people object to about them. But if you're modeling something that has "singleness" to it, a singleton might well be the way to go.
Some developers seem to have a fundamental disgust for singletons, but when actually asked why, they mumble something about globals and namespaces and aesthetics. Which I guess I can understand, if you've really resolved once and for all that Singletons are an anti-pattern and to be abhorred in all cases. But you're not thinking anymore, at that point. And the framework design disagrees with you.
I think most developers go through the Singleton phase, where you have everything you need at your fingertips, in a bunch of wonderful Singletons.
Then you discover that unit testing with Singletons can be difficult. You don't actually want to connect to the database, but your Singleton does. Add a layer of redirection and mock it.
Then you discover that unit testing isn't the only time you need different behaviour. You make your Singleton configurable to have different behaviour based on a parameter. You start to wonder if you need to split it into two Singletons. Then your code needs to know which Singleton to use, so you need a Singleton that knows which Singleton to use.
Then some other code starts messing with the values in your Singleton, while you're using it. How dare they! If you wanted just anybody to get at those values from anywhere, you'd make them global...
Once you get to this point, you start wondering if Singletons were the right solution. You start to see the dangers of global data, particularly within an OO design, where you just assume your data won't get poked at by other people.
So you go back and start passing the data along, rather than looking it up (this used to be called good OO design, but now it has a fancy name like "Dependency Injection").
Eventually you learn that Singletons are fine in moderation. You learn to recognize when your Singleton needs to stop being single.
So you get shared objects like UIApplication and NSUserDefaults. Those are good uses of Singletons.
I got burned enough in the Java Singleton craze a decade ago. I don't even consider writing my own Singletons. The only time I've needed anything similar in recent memory is wanting to cache the result of [NSCalendar currentCalendar] (which takes a long time). I created a category on NSCalendar and cached it as a static variable. I felt a bit dirty, but the alternative was painfully slow code.
To summarize and for those who tl;dr:
Singletons are a tool. They're not likely to be the right tool, but you have to discover that for yourself.
Why do you need an answer that is "totally Objective-C specific"? Singletons aren't totally Obj-C specific either, and you're able to use those. Functions aren't Obj-C specific, integers aren't Obj-C specific, and yet you're able to use all of those in your Obj-C code.
The obvious replacements for a singleton work in any language.
A singleton is a badly-designed global.
So the simplest replacement is to just make it a regular global, without the silly "one instance only" restriction.
A more thorough solution is, instead of having a globally accessible object at all, pass it as a parameter to the functions that need it.
And finally, you can go for a hybrid solution using a Dependency Injection framework.
The problem with singletons is that they can lead to tight coupling. Let's say you're building an airline booking system: your booking controller might use an id<FlightsClient>.
A common way to obtain it within the controller would be as follows:
_flightsClient = [FlightsClient sharedInstance];
Drawbacks:
It becomes difficult to test a class in isolation.
If you want to change the flights client for another implementation, it's necessary to search through the application and swap it out one by one.
If there's a case where the application should use a different implementation (e.g. OnlineFlightClient, OfflineFlightClient), things get tricky.
A good workaround is to apply the dependency injection design pattern.
Think of dependency injection as telling an architectural story. When the key actors in your application are pulled up into an assembly, the application's configuration is correctly modularized (removing duplication). Having created this script, it's now easy to reconfigure or swap one actor for another. In this way we need not understand all of a problem at once; it's easy to evolve our app's design as the requirements evolve.
Here's a dependency injection library: https://github.com/typhoon-framework/Typhoon

What does "monolithic" mean?

I've seen it in the context of classes. I suspect it means that the class could use being broken down into logical subunits, but I can't find a good definition. Could you give some examples?
Thanks for the help.
Edit: I love the smart replies, but I'm obviously referring to "monolithic" within a software context. I know about monoliths, megaliths, dolmens, and all the stone-related contexts. Gee, I have enough of them in my country...
Interesting question. I don't think there are any formal definitions of what a monolithic class is, but you've got the idea. A class that contains multiple components that are logically unconnected, or pointlessly coupled, is a monolithic class.
If you've read The Pragmatic Programmer, which I strongly recommend, you can define a monolithic class as an anti-pattern that goes against almost everything from that book.
As for examples, you'll find more in the realm of chip and OS design, where there are formal definitions of monolithic chips/kernels, which are similar to a monolithic class. Here are some examples, although each of them can be argued against being on this list:
JOGL - Java bindings for OpenGL. This could be arguable, and with good reason.
Most academic projects - For obvious reasons.
If you started programming alone, rather than joining a team, then chances are you can open one of your first projects, and there will be a class that is monolithic.
If you look up the etymology of the word you'll see it comes from the Greek monos (single) and lithos (stone). In the context of software as you mention it, it describes a single-tiered application in which the code for the user interface and the data access are combined into a single program from a single platform.
"Monolithic" is a term that has been used to flame succesful software. This link exposes the assumptions inherent in the term, and their limited usefulness.
The basic assumption is that a system works better if it is built from software components that each have an individual, well-defined task. Intuitively, this seems right. If each component works, the entire system must work, right?
In reality, it's not that easy. A larger, compositional (non-monolithic) system can miss a critical function, even when there is no single component to blame. This happens when the architectural design fails to allocate a function to any specific component. This can happen especially if it's a function which doesn't map cleanly to a single component.
Now Linux (to continue with the linked example) in reality is not monolithic. It has a modular userspace on top of a monolithic kernel, a userspace that comes with many separate utilities. Except when it doesn't.
My definition of a monolithic design in software development is a design which requires additional functionality to be added to a single, indivisible block of code.
PROS:
Everything is in one place, and therefore easy to find
Can be simpler, given there are fewer relationships to consider (can also be more complex; see cons)
CONS:
Over time, as functionality is added, the complexity of the system may increase exponentially, to the point where new features are extremely hard or impossible to implement
Can make it difficult for multiple developers to work with, e.g. Entity Framework EDMX files have the entire database in a single file, which can be extremely difficult for multiple developers to work on
Reduced re-usability: by definition it does not have smaller components which can then be reused and re-purposed to solve other problems, unless a complete copy of the code is made and then modified
A monolithic architecture is a model of software structure created as one piece, where all the Rails tools (ActionMailer, ActiveJob, ActionCable, etc.) are gathered together with the code those tools operate on. The tools are not connected with each other, but they are also not autonomous.
If one feature needs changes, it will influence the work of the whole process and other features because they are parts of one process.
Let’s recall what Ruby on Rails is, what it can offer, its pros and cons. Its most important benefit is that it is easy to work with.
If you write rails new you immediately get a new application, and then you can create any REST API you want and use Rails helpers and generators, which makes development even easier.
If you need to send emails in your Rails app, then use Rails ActionMailer. When you need to do some hard processing, ActiveJob will help you. With Rails 5 you will also be able to use websockets out of the box. Thus, it will be easy to create chats or make your application more interactive.
As long as you use the correct DSL syntax, you can use all of that and even more right away. Moreover, you don't have to know everything about the internal implementation of these tools; treat them as a DSL and you get the expected result.
It means something is the opposite of modular. A modular application can have parts, referred to as modules, replaced without requiring replacement of the entire application, whereas a monolithic application, after having a part fixed or upgraded, must be replaced in its entirety.
From Wikipedia: "Modularity is desirable, in general, as it supports reuse of parts of the application logic and also facilitates maintenance by allowing repair or replacement of parts of the application without requiring wholesale replacement."
So in the context of a monolithic class, all of its features are self-contained, and if you want to add or alter a feature you need to alter/add code in the class and recompile it. Conversely, a modular class exposes access to functionality that is implemented externally. For example, a "Calculator" class may use a separate "Add" class for actually adding numbers, call a "Multiply" function from a separate library, or even call an "Amortize" function from a web service. As long as each of these functional parts can be altered externally to the class, it is modular.
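A rough Java sketch of that Calculator idea, just to show the modular shape (the Operation interface and class names are invented for the illustration):
// Each operation sits behind its own small interface, so it can be replaced or
// reused without touching the Calculator itself.
public interface Operation {
    double apply(double a, double b);
}

public class Add implements Operation {
    public double apply(double a, double b) { return a + b; }
}

public class Multiply implements Operation {
    public double apply(double a, double b) { return a * b; }
}

// Modular: the Calculator composes externally supplied operations rather than
// implementing every feature inside one indivisible class.
public class Calculator {
    private final Operation add;
    private final Operation multiply;

    public Calculator(Operation add, Operation multiply) {
        this.add = add;
        this.multiply = multiply;
    }

    public double add(double a, double b) { return add.apply(a, b); }
    public double multiply(double a, double b) { return multiply.apply(a, b); }
}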

How do you define an interface, knowing that it should be immutable once published?

Anyone who develops knows that code is a living thing, so how do you go about defining a "complete" interface when considerable functionality may not have been recognised before the interface is published?
Test it a lot. I've never encountered a panacea for this particular problem - there are different strategies depending on the particular needs of the consumers and the goals of the project - for example, are you Microsoft shipping the ASP.NET MVC framework, or are you building an internal LoB application? But distilled to its simplest, you can never go wrong by testing.
By testing, I mean using the interface to implement functionality. You are testing the contract to see if it can fulfill the needs. Come up with as many different possible uses for the interface as you can think of, and implement them as far as you can go. Whiteboard the rest, and it should become clear what's missing. I'd say for a given "missing member", if you don't hit it within 3-5 iterations, you probably won't need it.
Version Numbers.
Define a "Complete Interface". Call it version 1.0.
Fix the problems. Call it version 2.0.
They're separate. They overlap in functionality, but they're separate.
Yes, you increase the effort to support both. That is, until you deprecate 1.0, and -- eventually -- stop support.
Just make your best reasonable guess about the future, and if you end up needing more, create a second version of your interface.
You cannot do it in a one-shot release. You need feedback.
What you can do is first make a clean interface that provides all the functionality your library should provide; then expose it to your user base for real-world usage; then use the feedback as a guide to update your interface, without adding features other than helper functions/classes, until the interface starts to stabilize.
You cannot rely only on experience/good-practice. It really helps but it's never enough. You simply need feedback.
Make sure the interface, or the interface technology (such as RPC, COM, CORBA, etc), has a well-defined mechanism for upgrades and enhanced interfaces.
For example, Microsoft frequently has MyInterface, followed by MyInterfaceEx, followed by MyInterfaceEx2, etc, etc.
Other systems have a means to query and negotiate for different versions of the interface (see DirectX, for one).
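A minimal Java sketch of that "MyInterfaceEx" convention, assuming the newer version only ever adds members:
// Version 1: published, and from now on frozen.
public interface FileStore {
    byte[] read(String path);
    void write(String path, byte[] data);
}

// Version 2: extends version 1 instead of changing it, so existing callers and
// implementations of FileStore are untouched.
public interface FileStoreEx extends FileStore {
    void delete(String path);
}
Callers written against FileStore keep working; callers that need the new member can probe for FileStoreEx (for example with instanceof), which is the same negotiate-for-a-version idea mentioned for DirectX.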

What are the pros and cons of using an IoC container?

Using an IoC container can decrease the speed of your application because most of them use reflection under the hood. They can also make your code more difficult to understand(?). On the bright side, they help you create more loosely coupled applications and make unit testing easier. Are there other pros and cons of using/not using an IoC container?
If you're using an IOC container in a simple fashion, reflection is only used on startup - the application is wired up to start with, and then it runs as normal without any intervention from the container. Of course, if you're using the IOC to resolve dependencies after you've started running, that may be slightly different - although I'd still expect it to resolve lazily and cache unless you've got it configured to create new instances each time.
As for making the code harder to understand - quite the reverse! With the dependencies explicitly stated, it's much easier to understand each component, and the configuration file(s) make it clear how the whole application hangs together.
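As a small Java sketch of what that explicit wiring looks like, with a hand-written composition root standing in for the container (all class names are invented):
public interface PaymentGateway {
    void charge(String account, long cents);
}

public class ConsolePaymentGateway implements PaymentGateway {
    public void charge(String account, long cents) {
        System.out.println("charging " + account + ": " + cents);
    }
}

public class OrderService {
    private final PaymentGateway gateway;

    // The dependency is stated explicitly in the constructor; this is what a container
    // (or a human reading the code) uses to see how the application hangs together.
    public OrderService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    public void checkout(String account, long cents) {
        gateway.charge(account, cents);
    }
}

public class Main {
    public static void main(String[] args) {
        // Composition root: a container would do this wiring (via reflection) once at
        // startup; after that the objects simply call each other as normal.
        PaymentGateway gateway = new ConsolePaymentGateway();
        OrderService orders = new OrderService(gateway);
        orders.checkout("acct-1", 4200);
    }
}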
Well I suppose a con I've experienced is that some developers don't seem to be able to grasp IoC. We've had a few people who were against it for no reason other than that they didn't understand them. (Not saying that's a bad reason to be against something, not at all.)
It does add a bit of abstraction that always seems to manage to confuse someone or other, but I'd say the pros far outweigh the cons in most cases.
I think it is fair to say if you have expert understanding of how to use IOC and tend to write good code anyway, then IOC will make your code easier to understand on all but the smallest systems.
However, if you are working somewhere where most classes/methods are very large and the concept of refactoring has not yet taken hold, then trying to use an IoC container is likely just to make the software harder to understand. The IoC container also has to be learned by everyone who programs on the project, so that may be a consideration.
I see IOC as icing on the cake; I like the icing but only on a nice cake. If the cake is not nice to start with, sort out the cake first.
As to the performance overhead of using IoC, I don't see this as a problem in most cases. The overhead need not be large, and given the speed of today's CPUs most of your run time is likely to be data access anyway. If an IoC container proved too slow for a given bit of code, I would look at adding some caching of the returned objects, or removing the IoC container just from that bit of code.
I believe the assumption about reduced execution speed is much the same kind of argument as "C is faster than C#/Java". While this statement may be true for specific operations and/or structurally simple tasks it is not the case the moment complexity rises.
The way DI-frameworks let you focus on object creation and dependencies creates more efficient systems when code size increases. For large applications I'm almost certain DI-framework based code will outperform any alternative solution. There's simply so little redundancy in the runtime that it's hard to make it more efficient! Most of the additional overhead is also just at first load.
Advanced DI containers also let you do "scope" magic that you can only dream of without the container. Using scope proxies, Spring can do the following:
A Singleton
|
B Singleton
|
C Prototype (per-invocation)
|
D Singleton
|
E Session scope (web app)
|
F Singleton
Effectively you can have ten layers of singleton objects and all of a sudden something session scoped shows up.
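Roughly how that looks with Spring's Java configuration; Dashboard and UserPreferences are invented names, while @Scope and ScopedProxyMode are Spring's own:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;
import org.springframework.web.context.WebApplicationContext;

class UserPreferences {
    String theme = "dark";
}

class Dashboard {
    private final UserPreferences preferences;
    Dashboard(UserPreferences preferences) { this.preferences = preferences; }
    String currentTheme() { return preferences.theme; }
}

@Configuration
public class ScopesConfig {

    // Singleton by default: one Dashboard instance for the whole application.
    @Bean
    public Dashboard dashboard(UserPreferences userPreferences) {
        return new Dashboard(userPreferences);
    }

    // Session-scoped bean injected into singletons through a proxy; each call on the
    // proxy is routed to the instance belonging to the current HTTP session.
    @Bean
    @Scope(value = WebApplicationContext.SCOPE_SESSION, proxyMode = ScopedProxyMode.TARGET_CLASS)
    public UserPreferences userPreferences() {
        return new UserPreferences();
    }
}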
Stuff like security can be injected in a totally different manner than you would otherwise. There's often a classical paradox: often the GUI layer needs to have intricate knowledge of the security permissions. Quite often the services layer also needs this, but at a different level of detail (usually less detailed than the GUI). The classical approach would be to send it around as parameters, put it on a thread-local, or ask a service. With Spring you can just inject it straight where you need it and no one else needs to know.
This actually changes application development as a whole. I had a real hard time adjusting to this, but after this pain I see it is truly a lot closer to how things should be (as opposed to how we've learned to do it).
So I think DI frameworks have the potential of changing the way you make programs, with much further reaching implications than just DI. It's not just a glorified way of calling new.
I agree with all the answers so far.
What I'd like to add is that it creates a bit of overhead, so it isn't really suited for small applications.
Mid-size and larger applications benefit the most from using IoC.
You might also check out this question for more information on pros and cons: Castle Windsor Are There Any Downsides?
In most circumstances you would not even notice the performance penalty, since for "singleton" objects all the initialization is performed only once. I would also argue against the idea that IoC makes it harder to understand the code: on the contrary, IoC-style development forces you to create small, coherent classes, which are in turn easier to grok.
If you are writing a business application, using an inversion-of-control and dependency-injection container (in conjunction with other agile practices and tools) will help you out in terms of productivity and reliability.
Moreover, your application will probably spend a vast majority of its CPU time waiting for resources or waiting for human interaction and doing nothing useful. Your application should have plenty of horsepower to spare for a few microseconds of reflection.
IoC seeks to reduce coupling between objects by handing instance management within the application over to the framework. The framework, rather than the developer, is in charge of creating and managing dependencies in your project.
When we write a block of code, the framework calls and runs it; this handing of control back to the framework is what is known as Inversion of Control.
It enables the execution of a method to be separated from its implementation.
It enables you to switch between multiple implementations with ease.
It increases the modularity of the software.
Because dependencies are reduced, the code is simpler to write and to test.