How do you define an interface, knowing that it should be immutable once published?

Anyone who develops knows that code is a living thing, so how do you go about defining a "complete" interface when considerable functionality may not have been recognised before the interface is published?

Test it a lot. I've never encountered a panacea for this particular problem - there are different strategies depending on the particular needs of the consumers and the goals of the project - for example, are you Microsoft shipping the ASP.NET MVC framework, or are you building an internal LoB application? But distilled to its simplest, you can never go wrong by testing.
By testing, I mean using the interface to implement functionality. You are testing the contract to see if it can fulfill the needs. Come up with as many different possible uses for the interface as you can think of, and implement them as far as you can go. Whiteboard the rest, and it should become clear what's missing. I'd say for a given "missing member", if you don't hit it within 3-5 iterations, you probably won't need it.
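For illustration, a bare-bones sketch of what "using the interface to implement functionality" can look like; every name here is hypothetical:

    using System;

    // Draft contract under review; the members are illustrative only.
    public class Order { public int Id; public DateTime CreatedOn; }

    public interface IOrderRepository
    {
        Order GetById(int id);
        void Save(Order order);
    }

    // "Test" the contract by writing a realistic consumer against it.
    public class OrderArchiver
    {
        private readonly IOrderRepository _repository;

        public OrderArchiver(IOrderRepository repository)
        {
            _repository = repository;
        }

        public void ArchiveOlderThan(DateTime cutoff)
        {
            // Writing this method immediately exposes a missing member: the draft
            // has no way to query orders by date, so something like
            // GetOlderThan(DateTime) is needed before the interface is published.
        }
    }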

Version Numbers.
Define a "Complete Interface". Call it version 1.0.
Fix the problems. Call it version 2.0.
They're separate. They overlap in functionality, but they're separate.
Yes, you increase the effort to support both. That is, until you deprecate 1.0, and -- eventually -- stop support.
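A small sketch of how the two versions can coexist in code; the interface names and members are hypothetical:

    using System;

    // Version 1.0 - published, frozen, and eventually deprecated.
    [Obsolete("Use IReportService2; support ends when 1.0 is retired.")]
    public interface IReportService
    {
        string RenderReport(int reportId);
    }

    // Version 2.0 - the "fixed" contract: overlapping, but separate.
    public interface IReportService2
    {
        string RenderReport(int reportId, ReportFormat format);
    }

    public enum ReportFormat { Html, Pdf }

    // One concrete class can support both contracts during the transition period.
    public class ReportService : IReportService, IReportService2
    {
        public string RenderReport(int reportId) =>
            RenderReport(reportId, ReportFormat.Html);

        public string RenderReport(int reportId, ReportFormat format) =>
            $"report {reportId} rendered as {format}";
    }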

Just make your best reasonable guess about the future, and if you need more, create a second version of your interface.

You cannot do it in a one-shot release. You need feedback.
What you can do is first make a clean interface that provides all the functionality your library should provide; then expose it to your user base for real-world usage; then use the feedback as a guide to update your interface, without adding features other than helper functions/classes, until the interface starts to stabilise.
You cannot rely only on experience/good-practice. It really helps but it's never enough. You simply need feedback.

Make sure the interface, or the interface technology (such as RPC, COM, CORBA, etc), has a well-defined mechanism for upgrades and enhanced interfaces.
For example, Microsoft frequently has MyInterface, followed by MyInterfaceEx, followed by MyInterfaceEx2, etc, etc.
Other systems have a means to query and negotiate for different versions of the interface (see DirectX, for one).
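On the consuming side, the negotiation often boils down to asking whether the object supports the newer contract. A rough C# sketch; the interface names mimic the MyInterface/MyInterfaceEx convention mentioned above and are not a real COM or DirectX API:

    public interface IMyInterface
    {
        void DoWork();
    }

    // Enhanced contract added in a later release; the original stays untouched.
    public interface IMyInterfaceEx : IMyInterface
    {
        void DoWorkWithOptions(int options);
    }

    public static class Client
    {
        public static void Run(IMyInterface service)
        {
            // Negotiate: use the richer contract only if this implementation offers it.
            if (service is IMyInterfaceEx extended)
            {
                extended.DoWorkWithOptions(42);
            }
            else
            {
                service.DoWork();
            }
        }
    }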

Related

Using Feature Toggling and IoC in lieu of Branching Code -- Good or Bad Idea?

Our clients get to choose when to upgrade. So, my team literally has to maintain and support dozens of versions of our software product. As you can imagine that results in a lot of branching and merging as hot fixes and service packs have to be propagated across all these flavors. I'm not happy with the situation. The obvious solution is simply not to maintain so many different versions of our product, but that obvious solution is not available to me.
So, I'm exploring creative options to lower the team's maintenance work. I'm considering using a mix of Feature Toggling and IoC as a way to implement n-number of versions of our software product. The idea is that I could use a single code base for my product and manage behaviors and features via configuration management. This would be in lieu of having to propagate code across multiple branches.
Is this a reasonable approach or am I just trading off one problem for another?
That sounds reasonable, in that this would be the way I'd address such a problem in a greenfield environment.
Let's not call it a Feature Toggle, though. As the name implies, a Feature Toggle is an on/off switch, which may not be what you need.
Sometimes, an upgrade also involves changed behaviour in existing features. That implies that you're probably going to need something more sophisticated than an on/off switch.
The Strategy pattern is a more flexible way of modelling variation in behaviour. Each Strategy can represent a particular version of a particular behaviour, and if you don't want the behaviour at all, you can provide a Null Object implementation. In other words, Feature Toggle can be implemented with a Strategy.
You can inject the Strategies into your application kernel using Dependency Injection, and you could make the choice of Strategies configurable via a configuration system. Most DI Containers I've heard about (on .NET and Java) support file-based configuration.
This essentially describes an add-in architecture.
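A minimal sketch of that Strategy-plus-Null-Object idea; the names are invented and no particular DI container is assumed:

    // One Strategy per behaviour variant; versions of a feature are just
    // different implementations of the same interface.
    public interface IDiscountStrategy
    {
        decimal Apply(decimal price);
    }

    public class LegacyDiscountStrategy : IDiscountStrategy   // behaviour as of v1
    {
        public decimal Apply(decimal price) => price * 0.95m;
    }

    public class TieredDiscountStrategy : IDiscountStrategy   // behaviour as of v2
    {
        public decimal Apply(decimal price) => price > 100m ? price * 0.90m : price;
    }

    // Null Object: "feature switched off" is just another Strategy.
    public class NoDiscountStrategy : IDiscountStrategy
    {
        public decimal Apply(decimal price) => price;
    }

    // The application kernel depends only on the interface; configuration
    // (e.g. a file read by your DI container) decides which Strategy each
    // customer deployment gets.
    public class CheckoutService
    {
        private readonly IDiscountStrategy _discount;

        public CheckoutService(IDiscountStrategy discount)
        {
            _discount = discount;
        }

        public decimal Total(decimal price) => _discount.Apply(price);
    }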
Now, even for a greenfield application, this is no easy feat to pull off. If you have a headless system, it's not that hard, but once you have UI involved, you start to realise that you're going to need to componentise the UI architecture as well, so that you can plug in UI elements via Strategies.
On a decade-old code base, this would be what I'd call an 'interesting challenge', to say the least.

General architecture for backend?

We are trying to be forward-looking in our architecture choices on some of the new systems we are designing. Essentially, we want to architect a back-end system that works no matter what interface we decide to use (WinForms, Silverlight, MVC, WebForms, WPF, iOS (iPad/iPhone), etc.), which I believe just screams REST. Our organization generally will only use Microsoft APIs, but since I have no idea when WCF Web API will be released and we want to get started soon, it looks like we have no other choice.
We want to take baby steps here to increase the chances of buy-in, so we don't want to have to set up another server with IIS.
In the foreseeable future we will only be using WinForms and WebForms. What I was thinking is that we could use Nancy on the local machine but communicate with it in a RESTful way. That way, in the future, it should be as simple as setting up a server and redirecting all the clients to that server rather than to the local machine.
I've never used either NancyFX or OpenRasta, but from what I've heard, they sound like a good fit.
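For what it's worth, a rough sketch of what self-hosting Nancy on the local machine might look like, assuming the Nancy 1.x self-host package; the module, route and port are purely illustrative:

    // Requires the Nancy and Nancy.Hosting.Self packages.
    using System;
    using Nancy;
    using Nancy.Hosting.Self;

    public class ProductsModule : NancyModule
    {
        public ProductsModule()
        {
            // A RESTful resource served locally; later the same clients can be
            // pointed at a central server instead of localhost.
            Get["/products/{id}"] = parameters =>
                Response.AsJson(new { Id = (string)parameters.id, Name = "example" });
        }
    }

    public static class Program
    {
        public static void Main()
        {
            using (var host = new NancyHost(new Uri("http://localhost:8080")))
            {
                host.Start();
                Console.WriteLine("Listening on http://localhost:8080");
                Console.ReadLine();
            }
        }
    }

Moving to a central server later would then mostly be a matter of changing the base URI the clients point at.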
So the questions are:
Is the way I'm thinking of approaching this a good approach?
Does it sound like NancyFX or OpenRasta would be a better fit?
Any reason why we should wait for WCF Web API? If so, does anyone have an approximate release date?
OpenRasta was built for resource-oriented scenarios. You can achieve the same thing with any other framework (with more or less pain). OpenRasta gives you a fully-composited, IoC-friendly environment that completely decouples handlers from whatever renders them (which makes it different from MVC-style frameworks like Nancy and ASP.NET MVC).
I'd add that we have a very strong community, a stable codebase, and we've been at this for quite a few years; we're building 2.1 and 3.0, and our feature set is still above and beyond what you can get from most other systems. Compare this to most of the frameworks you've highlighted, where none have reached 1.0.
Professional support is also available, if that's a deciding factor for your company.
But to answer your question fully: depending on your scenario and what you want to achieve, you can make anything fit, given enough work. I'd suggest reformulating your question in terms of architecture rather than in terms of frameworks.

What does "monolithic" mean?

I've seen it in the context of classes. I suspect it means that the class could use being broken down into logical subunits, but I can't find a good definition. Could you give some examples?
Thanks for the help.
Edit: I love the smart replies, but I'm obviously referring to "monolithic" within a software context. I know about monoliths, megaliths, dolmens, and all the stone-related contexts. Gee, I have enough of them in my country...
Interesting question. I don't think there are any formal definitions of what a monolithic class is, but you've got the idea. A class that contains multiple components that are logically unconnected, or pointlessly coupled, is a monolithic class.
If you've read The Pragmatic Programmer, which I strongly recommend, you can define a monolithic class as an anti-pattern that goes against almost everything from that book.
As for examples, you'll find more in the realm of chip and OS design, where there are formal definitions of monolithic chips/kernels, which are similar to a monolithic class. Here are some examples, although each of them can be argued against being on this list:
JOGL - Java bindings for OpenGL. This could be arguable, and with good reason.
Most academic projects - For obvious reasons.
If you started programming alone, rather than joining a team, then chances are you can open one of your first projects, and there will be a class that is monolithic.
If you look up the etymology of the word you'll see it comes from the Greek monos (single) and lithos (stone). In the context of software as you mention it, it describes a single-tiered application in which the code for the user interface and the data access are combined into a single program from a single platform.
"Monolithic" is a term that has been used to flame succesful software. This link exposes the assumptions inherent in the term, and their limited usefulness.
The basic assumption is that a system works better if it is built from software components that each have an individual, well-defined task. Intuitively, this seems right. If each component works, the entire system must work, right?
In reality, it's not that easy. A larger, compositional (non-monolithic) system can miss a critical function, even when there is no single component to blame. This happens when the architectural design fails to allocate a function to any specific component. This can happen especially if it's a function which doesn't map cleanly to a single component.
Now Linux (to continue with the linked example) in reality is not monolithic. It has a modular userspace on top of a monolithic kernel, a userspace that comes with many separate utilities. Except when it doesn't.
My definition of a monolithic design in software development is a design which requires additional functionality to be added to a single, indivisible block of code.
PROS:
Everything is in one place, and therefore easy to find
Can be simpler, given there are fewer relations to consider (can also be more complex; see cons)
CONS:
Over time, as functionality is added, the complexity of the system may increase exponentially, to the point where new features are extremely hard or impossible to implement
Can make it difficult for multiple developers to work with, e.g. Entity Framework EDMX files have the entire database in a single file, which can be extremely difficult for multiple developers to work on
Reduced reusability: by definition it does not have smaller components which can be reused and repurposed to solve other problems, unless a complete copy of the code is made and then modified
A monolithic architecture is a model of software structure which is created as one piece, where all the Rails tools (ActionMailer, ActiveJob, ActionCable, etc.) are gathered together with the code to which these tools apply. The tools are not connected with each other, but they are also not autonomous.
If one feature needs changes, it will influence the whole process and the other features, because they are all parts of one process.
Let’s recall what Ruby on Rails is, what it can offer, its pros and cons. Its most important benefit is that it is easy to work with.
If you run rails new you immediately get a new application, then you can create any REST API you want and use Rails helpers and generators, which makes development even easier.
If you need to send emails in your Rails app, use Rails ActionMailer. When you need to do some heavy processing, ActiveJob will help you. With Rails 5 you will also be able to use websockets out of the box, so it will be easy to create chats or make your application more interactive.
As long as you use the correct DSL syntax, you can use all of that and more right away. Moreover, you don't have to know everything about the internal implementation of these tools; treat them as a DSL and you get the expected result.
It means something is the opposite of modular. A modular application can have parts, referred to as modules, replaced without requiring replacement of the entire application, whereas a monolithic application, after having a part fixed or upgraded, must be replaced in its entirety.
From Wikipedia: "Modularity is desirable, in general, as it supports reuse of parts of the application logic and also facilitates maintenance by allowing repair or replacement of parts of the application without requiring wholesale replacement."
So in the context of a monolithic class, all its features are self-contained, and if you want to add or alter a feature of the class you would need to alter/add code in the class and recompile it. Conversely, a modular class exposes access to functionality which is implemented externally. For example, a "Calculator" class may use a separate "Add" class for actually adding numbers, call a "Multiply" function from a separate library, or even call an "Amortize" function from a web service. As long as each of these functional parts can be altered externally from the class, it is modular.
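A stripped-down sketch of that Calculator example; all names are invented for illustration:

    // Each functional part lives behind its own interface and can be swapped
    // or altered without touching the Calculator itself.
    public interface IAdder
    {
        double Add(double a, double b);
    }

    public interface IAmortizer
    {
        double Amortize(double principal, double rate, int periods);
    }

    public class Calculator
    {
        private readonly IAdder _adder;
        private readonly IAmortizer _amortizer;   // could be backed by a web service

        public Calculator(IAdder adder, IAmortizer amortizer)
        {
            _adder = adder;
            _amortizer = amortizer;
        }

        public double Add(double a, double b) => _adder.Add(a, b);

        public double Amortize(double principal, double rate, int periods) =>
            _amortizer.Amortize(principal, rate, periods);
    }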

MEF vs. any IoC

Looking at Microsoft's Managed Extensibility Framework (MEF) and various IoC containers (such as Unity), I am failing to see when to use one type of solution over the other. More specifically, it seems like MEF handles most IoC type patterns and that an IoC container like Unity would not be as necessary.
Ideally, I would like to see a good use case where an IoC container would be used instead of, or in addition to, MEF.
When boiled down, the main difference is that IoC containers are generally most useful with static dependencies (known at compile-time), and MEF is generally most useful with dynamic dependencies (known only at run-time).
As such, they are both composition engines, but the emphasis is very different for each pattern. Design decisions thus vary wildly, as MEF is optimized around discovery of unknown parts, rather than registrations of known parts.
Think about it this way: if you are developing your entire application, an IoC container is probably best. If you are writing for extensibility, such that 3rd-party developers will be extending your system, MEF is probably best.
Also, the article in #Pavel Nikolov's answer provides some great direction (it is written by Glenn Block, MEF's program manager).
I've been using MEF for a while and the key factor for when we use it instead of IOC products is that we regularly have 3-5 implementations of a given interface sitting in our plugins directory at a given time. Which one of those implementations should be used is actually something that can only be decided at runtime.
MEF is good at letting you do just that. Typically, IOC is geared toward making sure you could swap out, for a canonical example, an IUserRepository based on ORM Product 1 for one based on ORM Product 2 at some point in the future. However, most IOC solutions assume that there will only be one IUserRepository in effect at a given time.
If, however, you need to choose one based on the input data for a given page request, IOC containers are typically at a loss.
As an example, we do our permission checking and our validation via MEF plugins for a big web app I've been working on for a while. Using MEF, we can look at the record's CreatedOn date and go digging for the validation plugin that was actually in effect when the record was created, then run the record BOTH through that plugin AND the validator that's currently in effect, and compare the record's validity over time.
This kind of power also lets us define fallthrough overrides for plugins. The apps I'm working on are actually the same codebase deployed for 30+ implementations. So, we typically go looking for plugins by asking for:
An interface implementation that is specific to the current site and the specific record type in question.
An interface implementation that is specific to the current site, but works with any kind of record.
An interface that works for any site and any record.
That lets us bundle a set of default plugins that will kick in, but only if a specific implementation doesn't override them with customer-specific rules.
IOC is a great technology, but really seems to be more about making it easy to code to interfaces instead of concrete implementations. However, swapping those implementations out is more of a project shift kind of event in IOC. In MEF, you take the flexibility of interfaces and concrete implementations and make it a runtime decision between many available options.
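As a rough sketch of that plugins-directory style of composition with MEF (the IValidator interface and the "plugins" path are invented for illustration):

    using System.Collections.Generic;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;

    public interface IValidator
    {
        bool Validate(object record);
    }

    // Lives in any assembly dropped into the plugins directory.
    [Export(typeof(IValidator))]
    public class DefaultValidator : IValidator
    {
        public bool Validate(object record) => record != null;
    }

    public class ValidationRunner
    {
        // MEF discovers every exported IValidator at run time; the host then
        // decides which of the discovered implementations actually applies.
        [ImportMany]
        public IEnumerable<IValidator> Validators { get; set; }

        public ValidationRunner(string pluginPath)
        {
            var catalog = new DirectoryCatalog(pluginPath);
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);
        }
    }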
I apologize for being off-topic. I simply wanted to say that there are two flaws that render MEF an unnecessary complication:
It is attribute-based, which doesn't do any good in helping you figure out why things work the way they do. There's no way to get to the details buried in the internals of the framework to see what exactly is going on there. There is no way to get a tracing log or hook into the resolving mechanisms and handle unresolved situations manually.
It doesn't have any troubleshooting mechanism to figure out why some parts get rejected. Despite pointing at a failing part, it doesn't tell you why that part has failed.
So I am very disappointed with it. I spent too much time fighting windmills trying to bootstrap a few classes instead of working on the real problems. I'm convinced there is nothing better than the old-school dependency injection technique, where you have full control over what is created and when, and can trace anything in the VS debugger. I wish somebody who advocates MEF would present a bunch of good reasons why I would choose it over plain DI.
I agree that MEF can be a fully capable IoC framework. In fact I'm writing an application right now based on using MEF for both extensibility and IoC. I took the generic parts of it, made them into a "framework", and open-sourced it as SoapBox Core in case people want to see how it works.
In particular, take a look at how the Host works if you want to see MEF in action.

Should I still code to the interface even if I am ONLY EVER going to have ONE implementation?

I think the title speaks for itself guys - why should I write an interface and then implement a concrete class if there is only ever going to be 1 concrete implementation of that interface?
I think you shouldn't ;)
There's no need to shadow all your classes with corresponding interfaces.
Even if you're going to make more implementations later, you can always extract the interface when it becomes necessary.
This is a question of granularity. You shouldn't clutter your code with unnecessary interfaces, but they are useful at the boundaries between layers.
Someday you may try to test a class that depends on this interface. Then it's nice that you can mock it.
I'm constantly creating and removing interfaces. Some were not worth the effort and some are really needed. My intuition is mostly right but some refactorings are necessary.
The question is, if there is only going to ever be one concrete implementation, should there be an interface?
YAGNI - You Ain't Gonna Need It from Wikipedia
According to those who advocate the YAGNI approach, the temptation to write code that is not necessary at the moment, but might be in the future, has the following disadvantages:
* The time spent is taken from adding, testing or improving necessary functionality.
* The new features must be debugged, documented, and supported.
* Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now may prevent implementing a necessary feature later.
* Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work right, even if it eventually is needed.
* It leads to code bloat; the software becomes larger and more complicated.
* Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it.
* Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards creeping featurism.
Two somewhat conflicting answers to your question:
You do not need to extract an interface from every single concrete class you construct, and
Most Java programmers don't build as many interfaces as they should.
Most systems (even "throwaway code") evolve and change far past what their original design intended for them. Interfaces help them to grow flexibly by reducing coupling. In general, here are the warning signs that you ought to be coding to an interface:
Do you even suspect that another concrete class might need the same interface (like, if you suspect your data access objects might need XML representation down the road -- something that I've experienced)?
Do you suspect that your code might need to live on the other side of a Web Services layer?
Does your code form a service layer for some outside client?
If you can honestly answer "no" to all these questions, then an interface might be overkill. Might. But again, unforeseen consequences are the name of the game in programming.
You need to decide what the programming interface is, by specifying the public functions. If you don't do a good job of that, the class would be difficult to use.
Therefore, if you decide later you need to create a formal interface, you should have the design ready to go.
So, you do need to design an interface, but you don't need to write it as an interface and then implement it.
I use a test driven approach to creating my code. This will often lead me to create interfaces where I want to supply a mock or dummy implementation as part of my test fixture.
I would not normally create any code unless it has some relevance to my tests, and since you cannot easily test an interface, only an implementation, that leads me to create interfaces if I need them when supplying dependencies for a test case.
I will also sometimes create interfaces when refactoring, to remove duplication or improve code readability.
You can always refactor your code to introduce an interface if you find out you need one later.
The only exception to this would be if I were designing an API for release to a third party - where the cost of making API changes is high. In this case I might try to predict the type of changes I might need to do in the future and work out ways of creating my API to minimise future incompatible changes.
One thing which no one has mentioned yet is that sometimes an interface is necessary in order to avoid dependency issues: you can have the interface in a common project with few dependencies and the implementation in a separate project with lots of dependencies.
"Only Ever going to have One implementation" == famous last words
It doesn't cost much to make an interface and then derive a concrete class from it. The process of doing it can make you rethink your design and often leads to a better end product. And once you've done it, if you ever find yourself eating those words - as frequently happens - you won't have to worry about it. You're already set. Whereas otherwise you have a pile of refactoring to do and it's gonna be a pain.
Edited to clarify: I'm working on the assumption that this class is going to be spread relatively far and wide. If it's a tiny utility class used by one or two other classes in a single package, then yeah, don't worry about it. If it's a class that's going to be used in multiple packages by multiple other classes, then my previous answer applies.
The question should be: "how can you ever be sure, that there is only going to ever be one concrete implementation?"
How can you be totally sure?
By the time you have thought this through, you would already have created the interface and be on your way, without assumptions that might turn out to be wrong.
With today's coding tools (like Resharper), it really doesn't take much time at all to create and maintain interfaces alongside your classes, whereas discovering that now you need an extra implementation and to replace all concrete references can take a long time and is no fun at all - believe me.
A lot of this is taken from a Rainsberger talk on InfoQ: http://www.infoq.com/presentations/integration-tests-scam
There are 3 reasons to have a class:
It holds some Value
It helps Persist some entity
It performs some Service
The majority of services should have interfaces. It creates a boundary, hides the implementation, and you already have a second client: all of the tests that interact with that service.
Basically if you would ever want to Mock it out in a unit test it should have an interface.
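A small sketch of that last point, with invented names: the interface exists so a test can substitute a hand-rolled fake for the real service.

    using System.Collections.Generic;

    public interface IMailService                 // the Service being mocked out
    {
        void Send(string to, string body);
    }

    public class WelcomeNotifier
    {
        private readonly IMailService _mail;

        public WelcomeNotifier(IMailService mail)
        {
            _mail = mail;
        }

        public void Welcome(string user) => _mail.Send(user, "Welcome!");
    }

    // Hand-rolled fake: the "second client" of the interface lives in the tests.
    public class FakeMailService : IMailService
    {
        public readonly List<(string To, string Body)> Sent = new List<(string To, string Body)>();

        public void Send(string to, string body) => Sent.Add((to, body));
    }

A test can then construct WelcomeNotifier with the fake and assert against Sent, without ever touching a real mail server.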