What does "monolithic" mean? - class

I've seen it in the context of classes. I suspect it means that the class could use being broken down into logical subunits, but I can't find a good definition. Could you give some examples?
Thanks for the help.
Edit: I love the smart replies, but I'm obviously referring to "monolithic" within a software context. I know about monoliths, megaliths, dolmens, and all the stone-related contexts. Gee, I have enough of them in my country...

Interesting question. I don't think there are any formal definitions of what a monolithic class is, but you've got the idea. A class that contains multiple components that are logically unconnected, or pointlessly coupled, is a monolithic class.
If you've read The Pragmatic Programmer, which I strongly recommend, you can define a monolithic class as an anti-pattern that goes against almost everything from that book.
As for examples, you'll find more in the realm of chip and OS design, where there are formal definitions of monolithic chips/kernels, which are similar to a monolithic class. Here are some examples, although each of them can be argued against being on this list:
JOGL - Java bindings for OpenGL. This could be arguable, and with good reason.
Most academic projects - For obvious reasons.
If you started programming alone, rather than joining a team, then chances are you can open one of your first projects, and there will be a class that is monolithic.
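To make the idea concrete, here is a hedged C# sketch (all names invented) of the kind of class you might find in such a project: several logically unconnected responsibilities fused into one unit.

// A monolithic class: parsing, persistence, and notification are
// logically unconnected, but coupled by living in the same class.
public class ReportManager
{
    public string ParseCsv(string path)
    {
        // ... parsing logic ...
        return "parsed";
    }

    public void SaveToDatabase(string report)
    {
        // ... data access logic ...
    }

    public void EmailToCustomer(string report, string address)
    {
        // ... SMTP logic ...
    }
}

Breaking it down into logical subunits would mean a separate parser, repository, and mailer, each of which could change or be reused independently.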

If you look up the etymology of the word you'll see it comes from the Greek monos (single) and lithos (stone). In the context of software as you mention it, it describes a single-tiered application in which the code for the user interface and the data access are combined into a single program from a single platform.

"Monolithic" is a term that has been used to flame succesful software. This link exposes the assumptions inherent in the term, and their limited usefulness.
The basic assumption is that a system works better if it is built from software components that each have an individual, well-defined task. Intuitively, this seems right. If each component works, the entire system must work, right?
In reality, it's not that easy. A larger, compositional (non-monolithic) system can miss a critical function, even when there is no single component to blame. This happens when the architectural design fails to allocate a function to any specific component. This can happen especially if it's a function which doesn't map cleanly to a single component.
Now Linux (to continue with the linked example) in reality is not monolithic. It has a modular userspace on top of a monolithic kernel, a userspace that comes with many separate utilities. Except when it doesn't.

My definition of a monolithic design in software development is a design that requires additional functionality to be added to a single, indivisible block of code.
PROS:
Everything is in one place, and therefore easy to find
Can be simpler, given there are fewer relationships to consider (it can also be more complex; see cons)
CONS:
Over time, as functionality is added, the complexity of the system may increase exponentially, to the point where new features are extremely hard or impossible to implement
Can make it difficult for multiple developers to work with, e.g. Entity Framework EDMX files hold the entire database model in a single file, which can be extremely difficult for multiple developers to work on
Reduced reusability: by definition it does not have smaller components that can be reused and repurposed to solve other problems, unless a complete copy of the code is made and then modified

A monolithic architecture is a model of software structure that is created as one piece, where all the Rails tools (ActionMailer, ActiveJob, ActionCable, etc.) are gathered together with the code that uses them. The tools are not connected with each other, but they are also not autonomous.
If one feature needs changes, it will influence the whole process and the other features, because they are all parts of one process.
Let's recall what Ruby on Rails is and what it can offer, with its pros and cons. Its most important benefit is that it is easy to work with.
If you write rails new you immediately get a new application; then you can create any REST API you want and use Rails helpers and generators, which make development even easier.
If you need to send emails in your Rails app, use ActionMailer. When you need to do some heavy processing, ActiveJob will help you. With Rails 5 you can also use websockets out of the box, which makes it easy to create chats or make your application more interactive.
As long as you use the correct DSL syntax, you can use all of that and more immediately. Moreover, you don't have to know everything about the internal implementation of these tools; you write against their DSL and receive the expected result.

It means something is the opposite of modular. A modular application can have parts, referred to as modules, replaced without requiring replacement of the entire application. Whereas a monolithic application, after having a part fixed or upgraded, must be replaced in its entirety.
From Wikipedia: "Modularity is desirable, in general, as it supports reuse of parts of the application logic and also facilitates maintenance by allowing repair or replacement of parts of the application without requiring wholesale replacement."
So in the context of a monolithic class, all its features are self-contained, and if you want to add or alter a feature of the class you would need to alter/add code in the class and recompile it. Conversely, a modular class exposes access to functionality which is implemented externally. For example, a "Calculator" class may use a separate "Add" class for actually adding numbers; call a "Multiply" function from a separate library; or even call an "Amortize" function from a web service. As long as each of these functional parts can be altered externally from the class, it is modular.
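A minimal C# sketch of that "Calculator" idea (all names invented for illustration): the adding strategy lives behind an interface, so the implementation can be swapped without touching the class itself.

public interface IAdder
{
    int Add(int a, int b);
}

// One possible implementation; it could equally live in a separate
// library or be a proxy to a web service.
public class Adder : IAdder
{
    public int Add(int a, int b) { return a + b; }
}

public class Calculator
{
    private readonly IAdder adder;

    // The functional part is supplied from outside, so it can be
    // altered externally without recompiling Calculator.
    public Calculator(IAdder adder) { this.adder = adder; }

    public int Add(int a, int b) { return adder.Add(a, b); }
}

Usage would be something like new Calculator(new Adder()).Add(2, 3).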

What is a software framework? [closed]

Can someone please explain to me what a software framework is? Why do we need a framework? What does a framework do to make programming easier?
I'm very late to answer it. But, I would like to share one example, which I only thought of today. If I told you to cut a piece of paper with dimensions 5m by 5m, then surely you would do that. But suppose I ask you to cut 1000 pieces of paper of the same dimensions. In this case, you won't do the measuring 1000 times; obviously, you would make a frame of 5m by 5m, and then with the help of it you would be able to cut 1000 pieces of paper in less time. So, what you did was make a framework which would do a specific type of task. Instead of performing the same type of task again and again for the same type of applications, you create a framework having all those facilities together in one nice packet, hence providing the abstraction for your application and more importantly many applications.
Technically, you don't need a framework. If you're making a really really simple site (think of the web back in 1992), you can just do it all with hard-coded HTML and some CSS.
And if you want to make a modern webapp, you don't actually need to use a framework for that, either.
You can instead choose to write all of the logic you need yourself, every time.
You can write your own data-persistence/storage layer, or - if you're too busy - just write custom SQL for every single database access.
You can write your own authentication and session handling layers.
And your own template rendering logic.
And your own exception-handling logic.
And your own security functions.
And your own unit test framework to make sure it all works fine.
And your own... [goes on for quite a long time]
Then again, if you do use a framework, you'll be able to benefit from the good, usually peer-reviewed and very well tested work of dozens if not hundreds of other developers, who may well be better than you. You'll get to build what you want rapidly, without having to spend time building or worrying too much about the infrastructure items listed above.
You can get more done in less time, and know that the framework code you're using or extending is very likely to be done better than you doing it all yourself.
And the cost of this? Investing some time learning the framework. But - as virtually every web dev out there will attest - it's definitely worth the time spent learning to get massive (really, massive) benefits from using whatever framework you choose.
The summary at Wikipedia (Software Framework) (first google hit btw) explains it quite well:
A software framework, in computer programming, is an abstraction in which common code providing generic functionality can be selectively overridden or specialized by user code providing specific functionality. Frameworks are a special case of software libraries in that they are reusable abstractions of code wrapped in a well-defined Application programming interface (API), yet they contain some key distinguishing features that separate them from normal libraries.
Software frameworks have these distinguishing features that separate them from libraries or normal user applications:
inversion of control - In a framework, unlike in libraries or normal user applications, the overall program's flow of control is not dictated by the caller, but by the framework.[1]
default behavior - A framework has a default behavior. This default behavior must actually be some useful behavior and not a series of no-ops.
extensibility - A framework can be extended by the user usually by selective overriding or specialized by user code providing specific functionality.
non-modifiable framework code - The framework code, in general, is not allowed to be modified. Users can extend the framework, but not modify its code.
You may "need" it because it may provide you with a great shortcut when developing applications, since it contains lots of already written and tested functionality. The reason is quite similar to the reason we use software libraries.
A lot of good answers already, but let me see if I can give you another viewpoint.
Simplifying things by quite a bit, you can view a framework as an application that is complete except for the actual functionality. You plug in the functionality and PRESTO! you have an application.
Consider, say, a GUI framework. The framework contains everything you need to make an application. Indeed you can often trivially make a minimal application with very few lines of source that does absolutely nothing -- but it does give you window management, sub-window management, menus, button bars, etc. That's the framework side of things. By adding your application functionality and "plugging it in" to the right places in the framework you turn this empty app that does nothing more than window management, etc. into a real, full-blown application.
There are similar types of frameworks for web apps, for server-side apps, etc. In each case the framework provides the bulk of the tedious, repetitive code (hopefully) while you provide the actual problem domain functionality. (This is the ideal. In reality, of course, the success of the framework is highly variable.)
I stress again that this is the simplified view of what a framework is. I'm not using scary terms like "Inversion of Control" and the like although most frameworks have such scary concepts built-in. Since you're a beginner, I thought I'd spare you the jargon and go with an easy simile.
I'm not sure there's a clear-cut definition of "framework". Sometimes a large set of libraries is called a framework, but I think the typical use of the word is closer to the definition aioobe brought.
This very nice article sums up the difference between just a set of libraries and a framework:
A framework can be defined as a set of libraries that say “Don’t call us, we’ll call you.”
How does a framework help you? Because instead of writing something from scratch, you basically just extend a given, working application. You get a lot of productivity this way - sometimes the resulting application can be far more elaborate than you could have done on your own in the same time frame - but you usually trade in a lot of flexibility.
A simple explanation is: a framework is a scaffold that you can build applications around.
A framework generally provides some base functionality which you can use and extend to make more complex applications from; there are frameworks for all sorts of things. Microsoft's MVC framework is a good example of this. It provides everything you need to get off the ground building a website using the MVC pattern; it handles web requests, routes, and the like. All you have to do is implement "Controllers" and provide "Views", which are two constructs defined by the MVC framework. The MVC framework then handles calling your controllers and rendering your views.
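As a rough illustration, a minimal ASP.NET MVC controller looks something like this; the framework maps the incoming URL to the action and calls it for you:

using System.Web.Mvc;

public class HomeController : Controller
{
    // The framework routes a request such as /Home/Index to this action,
    // invokes it, and renders the result it returns.
    public ActionResult Index()
    {
        return View();   // by convention, renders Views/Home/Index.cshtml
    }
}

You never call Index() yourself; the framework does, which is the inversion of control mentioned elsewhere on this page.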
Perhaps not the best wording, but I hope it helps.
At the lowest level, a framework is an environment where you are given a set of tools to work with.
These tools come in the form of libraries, configuration files, etc.
This so-called "environment" provides you with the basic setup (error reporting, log files, language settings, etc.), which can be modified, extended, and built upon.
People don't actually need frameworks; for some it's just a matter of wanting to save time, for others a matter of personal preference.
People will justify it by saying that with a framework you don't have to code from scratch. But those are just people confusing libraries with frameworks.
I'm not being biased here, I am actually using a framework right now.
In general, a framework is a real or conceptual structure intended to serve as a support or guide for building something that expands the structure into something useful...
A framework provides functionalities/solution to the particular problem area.
Definition from wiki:
A software framework, in computer programming, is an abstraction in which common code providing generic functionality can be selectively overridden or specialized by user code providing specific functionality. Frameworks are a special case of software libraries in that they are reusable abstractions of code wrapped in a well-defined Application programming interface (API), yet they contain some key distinguishing features that separate them from normal libraries.
A framework helps us use what has already been created. A metaphor: think of raw earth materials as the programming language, and "a camera" as an existing program. If you decide to create a notebook computer, you don't need to recreate the camera every time; you just take the camera from the "earth framework" (for example, from a technology store) and integrate it into your notebook.
A framework has some functions that you may need. Maybe you need some sort of array that has built-in sorting mechanisms, or you need a window where you want to place some controls; all of that you can find in a framework. It's a kind of WORK that spans a FRAME around your own work.
EDIT:
OK, I see what you guys were trying to tell me ;) perhaps you hadn't noticed the information between the lines of "WORK that spans a FRAME around ..."
Before this falls any deeper, let me try to give it a footing:
a good explanation of the question "Difference between a Library and a Framework" can be found here
http://ifacethoughts.net/2007/06/04/difference-between-a-library-and-a-framework/
Beyond definitions, which are sometimes understandable only if you already understand, an example helped me.
I think I got a glimmer of understanding when looking at sorting a list in .Net; an example of a framework providing a functionality that's tailored by user code providing specific functionality. Take List.Sort(IComparer). The sort algorithm, which resides in the .Net framework in the Sort method, needs to do a series of compares; does object A come before or after object B? But Sort itself has no clue how to do the compare; only the type being sorted knows that. You couldn't write a comparison sort algorithm that can be reused by many users and anticipate all the various types you'd be called upon to sort. You've got to leave that bit of work up to the user itself. So here, Sort, aka the framework, calls back to a method in the user code, the type being sorted, so it can do the compare. (Or a delegate can be used; same point.)
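For example (a minimal sketch with an invented Person type), the framework supplies the algorithm and the user code supplies the comparison:

using System;
using System.Collections.Generic;

public class Person
{
    public int Age;
}

// User code: only this comparer knows how two Persons should be ordered.
public class ByAge : IComparer<Person>
{
    public int Compare(Person a, Person b)
    {
        return a.Age.CompareTo(b.Age);
    }
}

public static class Demo
{
    public static void Main()
    {
        var people = new List<Person> { new Person { Age = 40 }, new Person { Age = 25 } };
        people.Sort(new ByAge());   // Sort owns the algorithm and calls back into Compare
    }
}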
Did I get this right?

Common Libraries at a Company

I've noticed in pretty much every company I've worked that they have a common library that is generally shared across a number of projects. More often than not this has been a single companyx-commons project that ends up as a dumping ground for common programs including:
Command Line Parsers
File Utilities
Framework Helpers
etc...
Some of these are well thought out and some duplicate functionality found in Apache commons-lang, commons-io, etc.
What are the things you have in your common library and more importantly how do you structure the common libraries to make them easy to improve and incorporate across other projects?
In my experience, the single biggest factor in the success of a common library is user buy-in, users in this case being other developers; the culture of your workplace/team(s) will also be a big factor.
Separate libraries (projects/assemblies if you're in .Net) for different application tiers is essential (e.g: there's obviously no point putting UI and data access code together).
Keep things as simple as possible; what you don't put in a common library is often at least as important as what you do. Users of the library won't want to have to think, so usage needs to be super easy.
The golden rule we stuck to was keeping individual functions focused on a single task - do one thing and do it well (or very, very well); don't try to provide something that takes every possibility into account. The more reusable you think you're making it, the less likely it is to be used. Code Complete (the book) has some excellent content on common libraries. A sketch of this rule follows below.
A good approach to setting up or improving a library is to do regular code reviews and retrospectives; find good candidates that you've already come up with and consider refactoring them into a library for future projects. A good candidate will be something that more than one developer has had to do on more than one project (for example).
Set up some sort of simple and clear governance of the libraries - someone who can 'own' a specific library and ensure its overall quality (such as a senior dev or team lead).
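As a sketch of the "do one thing" rule (hypothetical helpers): the first function below is a good library candidate; the second tries to anticipate every possibility and ends up harder to use than writing the code inline.

using System.IO;

public static class FileUtils
{
    // Focused: one task, no options, obvious behavior.
    public static string ReadAllTextOrDefault(string path, string fallback)
    {
        return File.Exists(path) ? File.ReadAllText(path) : fallback;
    }

    // Overreaching: callers must think about six parameters to do one thing,
    // and every flag adds a branch to understand and test.
    public static string ReadFlexible(string path, bool trim, bool upper,
        string fallback, bool throwOnMissing, int maxLength)
    {
        // ... one branch per flag ...
        return fallback;
    }
}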
I have so far written most of the common libraries we use at our office.
We have certain button classes that are just slightly more useful to us than the standard buttons
A database management class that does some internal caching and can connect to ODBC, OLEDB, SQL, and Access databases without even the flip of a parameter
Some grid and list controls that are multithreaded, so we can add large amounts of data to them without the program slowing down and without having to write all the multithreading code every time there is a performance issue with a list box/combo box.
These classes make it easier for all of us to work on each other's code and know how exactly they work since we all use the exact same interfaces throughout our products.
As far as organization goes, all of the DLL's are stored along with their source code on a shared development drive in the office that we all have access to. (We're a pretty small shop)
We split our libraries by function.
Common.Ui.dll has base classes for UI elements.
Common.Data.dll is sort of a wrapper around the Enterprise Library data access classes.
Common.Business is a dumping ground for other common classes that don't fit into one of those.
We create other specialized dlls as needs arise.

MEF vs. any IoC

Looking at Microsoft's Managed Extensibility Framework (MEF) and various IoC containers (such as Unity), I am failing to see when to use one type of solution over the other. More specifically, it seems like MEF handles most IoC type patterns and that an IoC container like Unity would not be as necessary.
Ideally, I would like to see a good use case where an IoC container would be used instead of, or in addition to, MEF.
When boiled down, the main difference is that IoC containers are generally most useful with static dependencies (known at compile-time), and MEF is generally most useful with dynamic dependencies (known only at run-time).
As such, they are both composition engines, but the emphasis is very different for each pattern. Design decisions thus vary wildly, as MEF is optimized around discovery of unknown parts, rather than registrations of known parts.
Think about it this way: if you are developing your entire application, an IoC container is probably best. If you are writing for extensibility, such that 3rd-party developers will be extending your system, MEF is probably best.
Also, the article in @Pavel Nikolov's answer provides some great direction (it is written by Glenn Block, MEF's program manager).
I've been using MEF for a while and the key factor for when we use it instead of IOC products is that we regularly have 3-5 implementations of a given interface sitting in our plugins directory at a given time. Which one of those implementations should be used is actually something that can only be decided at runtime.
MEF is good at letting you do just that. Typically, IOC is geared toward making sure you could swap out, for a canonical example, an IUserRepository based on ORM Product 1 for one based on ORM Product 2 at some point in the future. However, most IOC solutions assume that there will only be one IUserRepository in effect at a given time.
If, however, you need to choose one based on the input data for a given page request, IOC containers are typically at a loss.
As an example, we do our permission checking and our validation via MEF plugins for a big web app I've been working on for a while. Using MEF, we can look at the record's CreatedOn date and go digging for the validation plugin that was actually in effect when the record was created, run the record through BOTH that plugin AND the validator that's currently in effect, and compare the record's validity over time.
This kind of power also lets us define fallthrough overrides for plugins. The apps I'm working on are actually the same codebase deployed for 30+ implementations. So we typically go looking for plugins by asking for:
An interface implementation that is specific to the current site and the specific record type in question.
An interface implementation that is specific to the current site, but works with any kind of record.
An interface implementation that works for any site and any record.
That lets us bundle a set of default plugins that will kick in, but only if that specific implementation doesn't override it with customer specific rules.
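A rough sketch of that discovery-plus-metadata pattern with MEF (the interface name, metadata key, and plugins directory are invented; the real fallthrough over site and record type would be richer):

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Linq;

public interface IValidator
{
    bool Validate(object record);
}

[Export(typeof(IValidator))]
[ExportMetadata("Site", "CustomerX")]   // hypothetical metadata key
public class CustomerXValidator : IValidator
{
    public bool Validate(object record) { return true; }
}

public class ValidatorSelector
{
    // MEF fills this with every IValidator it discovers at runtime.
    [ImportMany]
    public IEnumerable<Lazy<IValidator, IDictionary<string, object>>> Validators { get; set; }

    public void Compose()
    {
        // Scan a plugins directory for exports; no compile-time registration.
        var container = new CompositionContainer(new DirectoryCatalog("plugins"));
        container.ComposeParts(this);
    }

    public IValidator ForSite(string site)
    {
        // Prefer a site-specific implementation, fall back to any default.
        var match = Validators.FirstOrDefault(v =>
            v.Metadata.ContainsKey("Site") && Equals(v.Metadata["Site"], site));
        return (match ?? Validators.First()).Value;
    }
}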
IOC is a great technology, but really seems to be more about making it easy to code to interfaces instead of concrete implementations. However, swapping those implementations out is more of a project shift kind of event in IOC. In MEF, you take the flexibility of interfaces and concrete implementations and make it a runtime decision between many available options.
I apologize for being off-topic. I simply wanted to say that there are 2 flaws that render MEF an unnecessary complication:
it is attribute-based, which doesn't do any good in helping you figure out why things work as they do. There's no way to get to the details buried in the internals of the framework to see what exactly is going on there. There is no way to get a tracing log or hook into the resolving mechanism and handle unresolved situations manually
it doesn't have any troubleshooting mechanism to figure out the reasons why some parts get rejected. Despite pointing at a failing part, it doesn't tell you why that part has failed.
So I am very disappointed with it. I spent too much time fighting windmills trying to bootstrap a few classes instead of working on the real problems. I'm convinced there is nothing better than the old-school dependency injection technique, where you have full control over what is created and when, and can trace anything in the VS debugger. I wish somebody who advocates MEF would present a bunch of good reasons why I should choose it over plain DI.
I agree that MEF can be a fully capable IoC framework. In fact I'm writing an application right now based on using MEF for both extensibility and IoC. I took the generic parts of it and made it into a "framework" and open sourced it as its own framework called SoapBox Core in case people want to see how it works.
In particular, take a look at how the Host works if you want to see MEF in action.

Should I still code to the interface even if I am ONLY EVER going to have ONE implementation?

I think the title speaks for itself guys - why should I write an interface and then implement a concrete class if there is only ever going to be 1 concrete implementation of that interface?
I think you shouldn't ;)
There's no need to shadow all your classes with corresponding interfaces.
Even if you're going to make more implementations later, you can always extract the interface when it becomes necessary.
This is a question of granularity. You shouldn't clutter your code with unnecessary interfaces, but they are useful at the boundaries between layers.
Someday you may try to test a class that depends on this interface. Then it's nice that you can mock it.
I'm constantly creating and removing interfaces. Some were not worth the effort and some are really needed. My intuition is mostly right but some refactorings are necessary.
The question is, if there is only going to ever be one concrete implementation, should there be an interface?
YAGNI - You Ain't Gonna Need It (from Wikipedia):
According to those who advocate the YAGNI approach, the temptation to write code that is not necessary at the moment, but might be in the future, has the following disadvantages:
* The time spent is taken from adding, testing or improving necessary functionality.
* The new features must be debugged, documented, and supported.
* Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now may prevent implementing a necessary feature later.
* Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work right, even if it eventually is needed.
* It leads to code bloat; the software becomes larger and more complicated.
* Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it.
* Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards creeping featurism.
Two somewhat conflicting answers to your question:
You do not need to extract an interface from every single concrete class you construct, and
Most Java programmers don't build as many interfaces as they should.
Most systems (even "throwaway code") evolve and change far past what their original design intended for them. Interfaces help them to grow flexibly by reducing coupling. In general, here are the warning signs that you ought to be coding to an interface:
Do you even suspect that another concrete class might need the same interface (like, if you suspect your data access objects might need XML representation down the road -- something that I've experienced)?
Do you suspect that your code might need to live on the other side of a Web Services layer?
Does your code form a service layer for some outside client?
If you can honestly answer "no" to all these questions, then an interface might be overkill. Might. But again, unforeseen consequences are the name of the game in programming.
You need to decide what the programming interface is, by specifying the public functions. If you don't do a good job of that, the class would be difficult to use.
Therefore, if you decide later you need to create a formal interface, you should have the design ready to go.
So, you do need to design an interface, but you don't need to write it as an interface and then implement it.
I use a test driven approach to creating my code. This will often lead me to create interfaces where I want to supply a mock or dummy implementation as part of my test fixture.
I would not normally create any code unless it has some relevance to my tests, and since you cannot easily test an interface, only an implementation, that leads me to create interfaces if I need them when supplying dependencies for a test case.
I will also sometimes create interfaces when refactoring, to remove duplication or improve code readability.
You can always refactor your code to introduce an interface if you find out you need one later.
The only exception to this would be if I were designing an API for release to a third party - where the cost of making API changes is high. In this case I might try to predict the type of changes I might need to do in the future and work out ways of creating my API to minimise future incompatible changes.
One thing which no one has mentioned yet is that it is sometimes necessary in order to avoid dependency issues. You can have the interface in a common project with few dependencies and the implementation in a separate project with lots of dependencies.
"Only Ever going to have One implementation" == famous last words
It doesn't cost much to make an interface and then derive a concrete class from it. The process of doing it can make you rethink your design and often leads to a better end product. And once you've done it, if you ever find yourself eating those words - as frequently happens - you won't have to worry about it. You're already set. Whereas otherwise you have a pile of refactoring to do and it's gonna be a pain.
Edited to clarify: I'm working on the assumption that this class is going to be spread relatively far and wide. If it's a tiny utility class used by one or two other classes in a single package then yeah, don't worry about it. If it's a class that's going to be used in multiple packages by multiple other classes then my previous answer applies.
The question should be: "how can you ever be sure, that there is only going to ever be one concrete implementation?"
How can you be totally sure?
By the time you thought this through, you would already have created the interface and be on your way without assumptions that might turn out to be wrong.
With today's coding tools (like Resharper), it really doesn't take much time at all to create and maintain interfaces alongside your classes, whereas discovering that now you need an extra implementation and to replace all concrete references can take a long time and is no fun at all - believe me.
A lot of this is taken from a Rainsberger talk on InfoQ: http://www.infoq.com/presentations/integration-tests-scam
There are 3 reasons to have a class:
It holds some Value
It helps Persist some entity
It performs some Service
The majority of services should have interfaces. Doing so creates a boundary, hides the implementation, and means you already have a second client: all of the tests that interact with that service.
Basically if you would ever want to Mock it out in a unit test it should have an interface.
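A small sketch of that point (invented names; a hand-rolled fake in place of a mocking library): the interface exists so the test can stand in for the real dependency.

public interface IMailService
{
    void Send(string to, string body);
}

public class OrderProcessor
{
    private readonly IMailService mail;

    public OrderProcessor(IMailService mail) { this.mail = mail; }

    public void Process(string customer)
    {
        // ... business logic ...
        mail.Send(customer, "Your order has shipped.");
    }
}

// The "second client" of the interface: a test double.
public class FakeMailService : IMailService
{
    public int SentCount;
    public void Send(string to, string body) { SentCount++; }
}

A unit test would construct new OrderProcessor(new FakeMailService()), call Process, and assert that SentCount is 1, all without a real mail server.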

What are the pros and cons of using an IOC container?

Using an IOC container will decrease the speed of your application, because most of them use reflection under the hood. They can also make your code more difficult to understand(?). On the bright side, they help you create more loosely coupled applications and make unit testing easier. Are there other pros and cons for using/not using an IOC container?
If you're using an IOC container in a simple fashion, reflection is only used on startup - the application is wired up to start with, and then it runs as normal without any intervention from the container. Of course, if you're using the IOC to resolve dependencies after you've started running, that may be slightly different - although I'd still expect it to resolve lazily and cache unless you've got it configured to create new instances each time.
As for making the code harder to understand - quite the reverse! With the dependencies explicitly stated, it's much easier to understand each component, and the configuration file(s) make it clear how the whole application hangs together.
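To illustrate the "wired up at startup" point, here is a minimal sketch using the Unity container (type names invented; the namespace is from the older Microsoft.Practices.Unity package): the reflection-based work happens once in Main, after which the app runs on plain objects.

using Microsoft.Practices.Unity;

public interface ILogger
{
    void Log(string message);
}

public class ConsoleLogger : ILogger
{
    public void Log(string message) { System.Console.WriteLine(message); }
}

public class OrderService
{
    private readonly ILogger log;

    // The dependency is explicitly stated in the constructor.
    public OrderService(ILogger log) { this.log = log; }

    public void PlaceOrder() { log.Log("order placed"); }
}

public static class Program
{
    public static void Main()
    {
        // All reflection-based wiring happens here, once, at startup.
        var container = new UnityContainer();
        container.RegisterType<ILogger, ConsoleLogger>();

        var service = container.Resolve<OrderService>();  // constructor injection
        service.PlaceOrder();                             // from here on, plain calls
    }
}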
Well I suppose a con I've experienced is that some developers don't seem to be able to grasp IoC. We've had a few people who were against it for no reason other than that they didn't understand them. (Not saying that's a bad reason to be against something, not at all.)
It does add a bit of abstraction that always seems to manage to confuse someone or other, but I'd say the pros far outweigh the cons in most cases.
I think it is fair to say if you have expert understanding of how to use IOC and tend to write good code anyway, then IOC will make your code easier to understand on all but the smallest systems.
However, if you are working somewhere where most classes/methods are very large and the concept of refactoring has not yet taken hold, then trying to use an IOC is likely just to make the software harder to understand. The IOC also has to be learned by everyone who programs on the project, so that may be a consideration.
I see IOC as icing on the cake; I like the icing but only on a nice cake. If the cake is not nice to start with, sort out the cake first.
As to the performance overhead of using IOC, I don't see this as a problem in most cases. The overhead need not be large, and given the speed of today's CPUs, most of your run time is likely to be data access anyway. If an IOC proved too slow for a given bit of code, I would look at adding some caching of returned objects, or removing the IOC just from that bit of code.
I believe the assumption about reduced execution speed is much the same kind of argument as "C is faster than C#/Java". While this statement may be true for specific operations and/or structurally simple tasks it is not the case the moment complexity rises.
The way DI-frameworks let you focus on object creation and dependencies creates more efficient systems when code size increases. For large applications I'm almost certain DI-framework based code will outperform any alternative solution. There's simply so little redundancy in the runtime that it's hard to make it more efficient! Most of the additional overhead is also just at first load.
Advanced DI containers also let you do "scope" magic that you can only dream of without the container. Using scoped proxies, Spring can do the following:
A Singleton
|
B Singleton
|
C Prototype (per-invocation)
|
D Singleton
|
E Session scope (web app)
|
F Singleton
Effectively you can have ten layers of singleton objects and all of a sudden something session scoped shows up.
Stuff like security can be injected in a totally different manner than you would otherwise. There's often a classical paradox: the GUI layer needs to have intricate knowledge of the security permissions. Quite often the services layer also needs this, but at a different level of detail (usually less detailed than the GUI). The classical approach would be to send it around as parameters, put it in a thread-local, or ask a service. With Spring you can just inject it straight where you need it, and no one else needs to know.
This actually changes application development as a whole. I had a real hard time adjusting to this, but after this pain I see it is truly a lot closer to how things should be (as opposed to how we've learned to do it).
So I think DI frameworks have the potential of changing the way you make programs, with much further reaching implications than just DI. It's not just a glorified way of calling new.
I agree with all the answers so far.
What I'd like to add, is that it creates a bit of overhead, so it isn't really suited for small applications.
Mid-size and larger applications benefit the most from using IoC.
You might also check out this question for more information on pros and cons: Castle Windsor Are There Any Downsides?
In most circumstances you would not even notice the performance penalty, since for "singleton" objects all the initialization is performed only once. I would also dispute that IoC makes it difficult to understand the code: on the contrary, IoC-style development forces you to create small, coherent classes, which are in turn easier to grok.
If you are writing a business application, using an inversion-of-control and dependency-injection container (in conjunction with other agile practices and tools) will help you out in terms of productivity and reliability.
Moreover, your application will probably spend a vast majority of its CPU time waiting for resources or waiting for human interaction and doing nothing useful. Your application should have plenty of horsepower to spare for a few microseconds of reflection.
IOC seeks to reduce dependencies by handing object instance management to the container: the framework, rather than the developer, is in charge of creating and managing dependencies in your project.
When we write a block of code, the framework calls and runs it; this handing of control back to the framework is what is known as Inversion of Control.
It enables the execution of a method to be separated from its implementation.
It enables you to switch between multiple implementations with ease.
It increases the modularity of the software.
Because dependencies are reduced, code is simpler to write and test.