What is best practice for using E4 dependency injection for our own objects?

I am working on an E4 RCP application and, while our basic DI configuration is working, I have some reservations about our current implementation.
The IInjector interface and the @ProcessAdditions annotation are tagged as being discouraged for external access. Currently, we are using a series of statements similar to
injector.addBinding(IInterface.class).implementedBy(Concrete.class);
from within a method marked @ProcessAdditions. What method(s) can be used that don't violate access rules? I know I can bind classes/strings to instances via IEclipseContext, but using ContextInjectionFactory by hand seems to force the configurer to know the order of construction (unlike other DI frameworks).
I know Guice has the concept of child injectors, but in E4, ContextInjectionFactory is internally set to use only the default injector for manufacturing. What is the best method to manufacture a group of objects, using DI, and subsequently disposing of this group? I would like to create a fresh batch of processing objects for each processing operation.

ContextInjectionFactory is the only thing I have seen described for doing injection in e4 (in Lars Vogel's 'Eclipse 4 RCP' book for example). This is what I use in my e4 applications.
Some things, such as @ProcessAdditions, are marked as discouraged because that part of the e4 API has not been finalized yet and might change; they can still be used. @ProcessAdditions is only used in the application life cycle class.
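As a rough sketch of how this can look without touching IInjector directly (the StatusService/Processor names are made up for illustration): bind the interface to an implementation on the IEclipseContext, let ContextInjectionFactory build the consumers, and use a child context as a disposable scope for each batch of processing objects.

import javax.inject.Inject;
import org.eclipse.e4.core.contexts.ContextInjectionFactory;
import org.eclipse.e4.core.contexts.IEclipseContext;

interface StatusService { void report(String msg); }

class ConsoleStatusService implements StatusService {
    public void report(String msg) { System.out.println(msg); }
}

class Processor {
    @Inject StatusService status;     // satisfied from the context
    void run() { status.report("processing..."); }
}

public class ProcessingBatch {
    public static void runBatch(IEclipseContext parent) {
        // Bind the interface to an implementation instead of using IInjector.addBinding()
        parent.set(StatusService.class,
                ContextInjectionFactory.make(ConsoleStatusService.class, parent));

        // A child context acts as a disposable scope for one processing operation
        IEclipseContext batch = parent.createChild("processing");
        try {
            Processor p = ContextInjectionFactory.make(Processor.class, batch);
            p.run();
        } finally {
            batch.dispose();   // releases what was created in this scope
        }
    }
}

Disposing the child context is what lets you throw away a whole group of injected objects per operation.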

Related

Accessing data from the JCR from a JSP

In some implementations, I've seen JSPs using Java bean classes as an intermediate store/data access layer to get data from a JCR.
Why is this, since the JSP can access the JCR directly via the JCR API?
Separation of concerns? A memory cache for the data?
Just wondering why such a pattern exists when the JCR API was written in the first place.
Using scriptlets might not be so problematic in smaller installations, but it is in large multi-site projects.
Separating UI code from model/business logic eases maintainability and allows code to be reused across projects. It also makes layout changes much easier. Usually this separation is done by using a component bean to access the JCR repository and provide the data, and by using the JSP just for the view.
Just imagine that your customer requires a large UI change, probably across multiple sites. It's harder to change JSPs mixed up with scriptlets and UI code, especially if you have a lot of them.
From an OO perspective, using JSPs and scriptlets prevents you from using inheritance and composition; scriptlets cannot be made abstract.
In my experience, Java beans are easier to debug than scriptlets, especially in case of an exception, and Java beans are easier to unit test.
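To make the pattern concrete, here is a minimal sketch of such an intermediate bean (the path and property names are invented for illustration); the JSP only calls the getters, while all JCR API access stays in the bean:

import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class PageTeaserBean {

    private String title;
    private String description;

    // All repository access is concentrated here, not in the JSP
    public void load(Session session, String path) throws RepositoryException {
        Node node = session.getNode(path);
        this.title = node.getProperty("jcr:title").getString();
        this.description = node.getProperty("jcr:description").getString();
    }

    public String getTitle() { return title; }
    public String getDescription() { return description; }
}

The bean can be unit tested against a mock Session, swapped for a caching variant, or reused by several views without touching any JSP.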

Implementing Chain of Responsibility with Services

I'm thinking about a platform-neutral (i.e. not .NET MEF) technique of implementing the chain-of-responsibility pattern using web services as the handlers. I want to be able to add more CoR handlers by deploying new services and not compiling new CoR code, just changing configuration info. It seems the challenge will be managing the metadata about available handlers and ensuring the handlers conform to the interface.
My question: any ideas on how I can safely ensure:
1. The web services are implementing the interface
2. The web services are implementing the base class behavior, like calling the successor
Because, in compiled code, I can have type-safety and therefore know that any handlers have derived from the abstract base class that ensures the interface and behavior I want. That seems to be missing in the world of services.
This seems like a valid question, but a rather simple one.
You are still afforded the protection of the typing system, even if you are loading code later, at runtime, that the original code never saw before.
I would think the preferred approach here would be to have something like a properties file with a list of implementers (your chain). Then, in the code, you need a way to instantiate each handler at runtime to construct the chain. When you construct an instance, you have to check its type. In Java, for instance, that would take the form of instanceof (an abomination ordinarily, but you get a pass for loading scenarios), or isAssignableFrom. In Objective-C, it's conformsToProtocol.
If it doesn't, it can't be used and you can spit an error out to the console.
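A rough sketch of that approach in Java (the Handler interface, the property key and the class names are all illustrative): read the chain from a properties file, instantiate each handler reflectively, and reject anything that does not conform to the expected type.

import java.io.FileInputStream;
import java.util.Properties;

interface Handler {
    void handle(String request);
    void setSuccessor(Handler next);
}

public class ChainLoader {

    public static Handler buildChain(String configPath) throws Exception {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(configPath)) {
            props.load(in);
        }

        // e.g. chain=com.example.AuthHandler,com.example.AuditHandler
        String[] classNames = props.getProperty("chain").split(",");

        Handler first = null;
        Handler previous = null;
        for (String name : classNames) {
            Class<?> clazz = Class.forName(name.trim());

            // Type check at load time: skip anything that is not a Handler
            if (!Handler.class.isAssignableFrom(clazz)) {
                System.err.println("Skipping " + name + ": does not implement Handler");
                continue;
            }

            Handler handler = (Handler) clazz.getDeclaredConstructor().newInstance();
            if (previous != null) {
                previous.setSuccessor(handler);   // wire the successor link
            } else {
                first = handler;
            }
            previous = handler;
        }
        return first;
    }
}

Note that this only verifies the interface (question 1); whether each handler actually calls its successor (question 2) is behavior, which you can only enforce by convention, a shared base class, or tests.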

Arguments against Inversion of Control containers

Seems like everyone is moving towards IoC containers. I've tried to "grok" it for a while, and as much as I don't want to be the one driver to go the wrong way on the highway, it still doesn't pass the test of common sense to me. Let me explain, and please correct/enlighten me if my arguments are flawed:
My understanding: IoC containers are supposed to make your life easier when combining different components. This is done through either a) constructor injection, b) setter injection and c) interface injection. These are then "wired up" programmatically or in a file that's read by the container. Components then get summoned by name and then cast manually whenever needed.
What I don't get:
EDIT: (Better phrasing)
Why use an opaque container that's not idiomatic to the language, when you can "wire up" the application in (imho) a much clearer way if the components are properly designed (using IoC patterns and loose coupling)? How does this "managed code" gain non-trivial functionality? (I've heard some mentions of life-cycle management, but I don't necessarily understand how this is any better/faster than doing it yourself.)
ORIGINAL:
Why go to all the lengths of storing the components in a container, "wiring them up" in ways that aren't idiomatic to the language, using things equivalent to "goto labels" when you call up components by name, and then losing many of the safety benefits of a statically-typed language by manual casting, when you'd get the equivalent functionality by not doing it, and instead using all the cool features of abstraction given by modern OO languages, e.g. programming to an interface? I mean, the parts that actually need to use the component at hand have to know they are using it in any case, and here you'd be doing the "wiring" using the most natural, idiomatic way - programming!
There are certainly people who think that DI Containers add no benefit, and the question is valid. If you look at it purely from an object composition angle, the benefit of a container may seem negligible. Any third party can connect loosely coupled components.
However, once you move beyond toy scenarios, you should realize that the third party that connects collaborators must take on more than the simple responsibility of composition. There may also be decommissioning concerns to prevent resource leaks. As the composer is the only party that knows whether a given instance was shared or private, it must also take on the role of lifetime management.
When you start combining various instance scopes, using a combination of shared and private services, and perhaps even scoping some services to a particular context (such as a web request), things become complex. It's certainly possible to write all that code with poor man's DI, but it doesn't add any business value - it's pure infrastructure.
Such infrastructure code constitutes a Generic Subdomain, so it's very natural to create a reusable library to address such concerns. That's exactly what a DI Container is.
BTW, most containers I know don't use names to wire themselves - they use Auto-wiring, which combines the static information from Constructor Injection with the container's configuration of mappings from interfaces to concrete classes. In short, containers natively understand those patterns.
A DI Container is not required for DI - it's just damned helpful.
A more detailed treatment can be found in the article When to use a DI Container.
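As a small illustration of auto-wiring (using Guice here purely as an example; the class names are made up): the container reads the constructor's parameter types and combines them with the configured interface-to-implementation mappings, so no string names or manual casts are involved.

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;

interface MessageWriter { void write(String msg); }

class ConsoleMessageWriter implements MessageWriter {
    public void write(String msg) { System.out.println(msg); }
}

class Greeter {
    private final MessageWriter writer;

    @Inject
    Greeter(MessageWriter writer) { this.writer = writer; }   // Constructor Injection

    void greet() { writer.write("Hello"); }
}

public class AutoWiringDemo {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new AbstractModule() {
            @Override
            protected void configure() {
                bind(MessageWriter.class).to(ConsoleMessageWriter.class);
            }
        });
        injector.getInstance(Greeter.class).greet();   // Greeter is auto-wired
    }
}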
I'm sure there's a lot to be said on the subject, and hopefully I'll edit this answer to add more later (and hopefully more people will add more answers and insights), but just a couple quick points to your post...
Using an IoC container is a subset of inversion of control, not the whole thing. You can use inversion of control as a design construct without relying on an IoC container framework. At its simplest, inversion of control can be stated in this context as "supply, don't instantiate." As long as your objects aren't internally depending on implementations of other objects, and are instead requiring that instantiated implementations be supplied to them, then you're using inversion of control. Even if you're not using an IoC container framework.
To your point on programming to an interface... I'm not sure what your experience with IoC containers has been (my personal favorite is StructureMap), but you definitely program to an interface with IoC. The whole idea, at least in how I've used it, is that you separate your interfaces (your types) from your implementations (your injected classes). The code which relies on the interfaces is programmed only to those, and the implementations of those interfaces are injected when needed.
For example, you can have an IFooRepository which returns from a data store instances of type Foo. All of your code which needs those instances gets them from a supplied object of type IFooRepository. Elsewhere, you create an implementation of FooRepository and configure your IoC to supply that anywhere an IFooRepository is needed. This implementation can get them from a database, from an XML file, from an external service, etc. Doesn't matter where. That control has been inverted. Your code which uses objects of type Foo doesn't care where they come from.
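In Java-flavored code, the example described above looks roughly like this (all names come from the prose above, not from any particular library):

import java.util.Collections;
import java.util.List;

class Foo { /* fields omitted */ }

interface IFooRepository {
    List<Foo> findAll();
}

// One possible implementation; it could just as well read an XML file or call a service
class DatabaseFooRepository implements IFooRepository {
    public List<Foo> findAll() {
        // ...load from the database...
        return Collections.emptyList();
    }
}

class FooReport {
    private final IFooRepository repository;

    // The report neither knows nor cares where the Foo instances come from
    FooReport(IFooRepository repository) { this.repository = repository; }

    int count() { return repository.findAll().size(); }
}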
The obvious benefit is that you can swap out that implementation any time you want. You can replace it with a test version, change versions based on environment, etc. But keep in mind that you also don't need to have such a 1-to-1 ratio of interfaces to implementations at any given time.
For example, I once used a code generating tool at a previous job which spit out tons and tons of DAL code into a single class. Breaking it apart would have been a pain, but what wasn't much of a pain was to configure it to spit it all out in specific method/property names. So I wrote a bunch of interfaces for my repositories and generated this one class which implemented all of them. For that generated class, it was ugly. But the rest of my application didn't care because it saw each interface as its own type. The IoC container just supplied that same class for each one.
We were able to get up and running quickly with this and nobody was waiting on the DAL development. While we continued to work in the domain code which used the interfaces, a junior dev was tasked with creating better implementations. Those implementations were later swapped in, all was well.
As I mentioned earlier, this can all be accomplished without an IoC container framework. It's the pattern itself that's important, really.
First of all, what is IoC? It means that the responsibility of creating dependent objects is taken away from the main object and delegated to a third-party framework. I always use Spring as my IoC framework, and it brings tons of benefits to the table.
Promotes coding to interfaces and decoupling - The key benefit is that IoC promotes and greatly simplifies decoupling. You can always inject an interface into your main object and then use the interface methods to perform tasks. The main object does not need to know which dependent object is assigned to the interface. When you want to use a different class as the dependency, all you need to do is swap the old class for a new one in the config file, without a single line of code change. Now you can argue that this can be done in code using various interface design patterns, but an IoC framework makes it a walk in the park. So even as a newbie you can take advantage of various interface design patterns like bridge, factory, etc.
Clean code - As most object creation and object life-cycle operations are delegated to the IoC container, you are saved from writing boilerplate, repetitive code. So you have cleaner, smaller and more understandable code.
Unit testing - IoC makes unit testing easy. Since you are left with decoupled code, you can easily test it in isolation. You can also easily inject dependencies in your test cases and see how the different components interact.
Property configurators - Almost all applications have some properties file where they store application-specific static properties. Without a container, developers need to write wrappers which read and parse the properties file and store the properties in a format the application can access. All IoC frameworks provide a way of injecting static properties/values into a specific class, so this again becomes a walk in the park (see the sketch below).
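For example, a minimal Spring sketch of property injection (the keys app.name and app.timeout are invented and would live in a properties file on the classpath):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class AppSettings {

    @Value("${app.name}")
    private String applicationName;

    @Value("${app.timeout:30}")       // falls back to 30 if the key is missing
    private int timeoutSeconds;

    public String getApplicationName() { return applicationName; }
    public int getTimeoutSeconds() { return timeoutSeconds; }
}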
These are some of the points I can think of right away; I am sure there are more.

Frameworks for Layering reusable Architectures

My question is very simple; my intention is to build up a repository with your responses so it can serve the community when selecting frameworks for developing enterprise general-purpose applications.
This could apply very well for general purpose languages such as C++, C# or Java.
What Framework do you recommend for generating Layered Architectures?
Based on your experience, why do you prefer the usage of some framework versus your own architecture?
How long do you believe your selected Framework will stay as a preferred option in the software development industry?
This is indeed an overly general question, especially since there are so many interpretations of the very word framework, and within the world of frameworks many different kinds for different tasks. Nevertheless, I'll give it a shot for Java.
Java
Java EE
The default overall enterprise framework of Java is called Java EE. Java EE strongly emphasizes a layered architecture. It's quite a large framework, and learning every aspect of it can take some time. It supports several types of applications. Extremely small and simple ones may only use JSP files with some scriptlets, while larger ones may use much more.
Java EE doesn't really force you to use all parts of it; you pick and choose what you like.
Top down, it consists of the following parts:
Web layer
For the web layer Java EE primarily defines a component- and MVC-based web framework called JSF - JavaServer Faces. JSF utilizes an XML-based view description language (templating language) called Facelets. Pages are created by defining templates and letting template clients provide content for them, by including other Facelets, and finally by placing components and general markup on them.
JSF provides a well-defined life cycle for doing all the things that every web app should do: converting request values, validating them, calling out to business logic (the model) and finally delegating to a (Facelets) view for rendering.
For a more elaborate description look up some of the articles by BalusC here, e.g. What are the main disadvantages of Java Server Faces 2.0?
Business layer
The business layer in the Java EE framework is represented by a light-weight business component framework called EJB - Enterprise JavaBeans. EJBs are supposed to contain the pure business logic of an application. Among other things, EJBs take care of transactions, concurrency and, when needed, remoting.
An ordinary Java class becomes an EJB by applying the @Stateless annotation. By default, every method of that bean is then automatically transactional: if the method is called and no transaction is active, one is started; otherwise the existing one is joined. If needed this behavior can be tuned or even disabled. In the majority of cases transactions will be transparent to the programmer, but if needed there is an explicit API in Java EE to manage them manually: the JTA API - Java Transaction API.
Methods on an EJB can easily be made to execute asynchronously by using the @Asynchronous annotation.
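A minimal sketch of what this looks like (OrderService and its methods are illustrative names):

import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class OrderService {

    // Runs in a container-managed transaction by default
    public void placeOrder(String orderId) {
        // business logic here
    }

    // Returns immediately to the caller; the container runs this on another thread
    @Asynchronous
    public void sendConfirmation(String orderId) {
        // slow I/O work here
    }
}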
Java EE explicitly supports layering via the concept of a separate module specifically for EJBs. This isolates those beans and prevents them from accessing their higher layer. See this Packaging EJB in JavaEE 6 WAR vs EAR for a more elaborate explanation.
Persistence layer
For persistence the Java EE framework comes with a standard ORM framework called JPA - Java Persistence API. This is based on annotating plain Java classes with the @Entity annotation and a property or field on them with @Id. Optionally (if needed) further information can be specified via annotations on how objects and object relations map to a relational database.
JPA heavily emphasizes slim entities. This means the entities themselves are as much as possible POJOs that can easily be sent to other layers and even remote clients. An entity in Java EE typically does not take care of its own persistence (i.e. it does not hold any references to DB connections and such). Instead, a separate class called the EntityManager is provided to work with entities.
The most convenient way of working with this EntityManager is from within an EJB bean, which makes obtaining an instance and the handling of transactions a breeze. However, using JPA in any other layer, even outside the framework (e.g. in Java SE) is supported as well.
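A small sketch of an entity plus an EJB that uses the EntityManager (Customer and CustomerService are made-up names):

// File: Customer.java
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Customer {
    @Id
    private Long id;
    private String name;
    // getters and setters omitted
}

// File: CustomerService.java
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class CustomerService {
    @PersistenceContext
    private EntityManager em;

    public Customer find(Long id) {
        return em.find(Customer.class, id);   // runs inside the EJB's transaction
    }
}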
These are the most important services related to the traditional layers in a typical enterprise app, but the Java EE framework supports a great many additional services. Some of which are:
Messaging
Messaging is directly supported in the Java EE framework via the JMS API - Java Message Service. This allows business code to send messages to so-called queues and topics. Various parts of the application, or even remote applications, can listen to such a queue or topic.
The EJB component framework even has a type of bean that is specifically tailored for messaging: the message-driven bean, which has an onMessage method that is automatically invoked when a new message comes in for the queue or topic the bean is listening to.
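A sketch of such a message-driven bean (the queue name and the exact activation properties are illustrative and vary per server):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup",
                              propertyValue = "jms/orderQueue")
})
public class OrderListener implements MessageListener {

    // Invoked by the container for every message arriving on the queue
    public void onMessage(Message message) {
        try {
            String body = ((TextMessage) message).getText();
            // process the message
        } catch (Exception e) {
            // log and/or let the container redeliver
        }
    }
}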
Next to JMS, Java EE also provides an event bus, which is a simple, light-weight alternative to full-blown messaging. This is provided via the CDI API, a comprehensive API that, among other things, provides scopes for the web layer and takes care of dependency injection. Being a rather new API, it currently partially overlaps with EJB and the so-called managed beans from JSF.
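A small sketch of the CDI event bus (OrderPlaced and the bean names are made up):

import javax.enterprise.event.Event;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

class OrderPlaced {
    final String orderId;
    OrderPlaced(String orderId) { this.orderId = orderId; }
}

class CheckoutService {
    @Inject
    private Event<OrderPlaced> orderPlaced;

    void placeOrder(String id) {
        // ...business logic...
        orderPlaced.fire(new OrderPlaced(id));   // in-process event, no JMS involved
    }
}

class MailNotifier {
    // Called by the container for every fired OrderPlaced event
    void onOrderPlaced(@Observes OrderPlaced event) {
        // send a confirmation mail for event.orderId
    }
}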
Remoting
Java EE provides a lot of options for remoting out of the box. EJBs can be exposed to external code willing and able to communicate via a binary protocol by merely letting them implement a remote interface.
If binary communication is not an option, Java EE also provides various web service implementations. This is done via, among others, JAX-WS (SOAP web services) and JAX-RS (REST).
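As a sketch, exposing an EJB over the binary protocol is mostly a matter of giving it a @Remote business interface (the names are illustrative):

// File: QuoteService.java
import javax.ejb.Remote;

@Remote
public interface QuoteService {
    double quoteFor(String symbol);
}

// File: QuoteServiceBean.java
import javax.ejb.Stateless;

@Stateless
public class QuoteServiceBean implements QuoteService {
    public double quoteFor(String symbol) {
        return 42.0;   // placeholder business logic
    }
}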
Scheduling
For scheduling periodic or timed jobs, Java EE offers a simple timer API. This API supports CRON-like calendar expressions, as well as timers for delayed execution of code or follow-up checks.
This part of Java EE is usable but fairly basic.
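For example, a calendar-based timer can be declared on a singleton bean like this (the job itself is illustrative):

import javax.ejb.Schedule;
import javax.ejb.Singleton;

@Singleton
public class NightlyJobs {

    // Runs every day at 02:00, not persisted across server restarts
    @Schedule(hour = "2", minute = "0", persistent = false)
    public void cleanUp() {
        // periodic work here
    }
}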
There are quite some more things in Java EE, but I think this about covers the most important things.
Spring
An alternative enterprise framework for Java is Spring. This is a single-vendor, though fully open source, framework.
Just as the Java EE framework, the Spring framework contains a web framework (called Spring MVC), a business component framework (simply called Spring, or Core Spring Framework) and a web services stack (called Spring Web Services).
Although many parts of the Java EE framework can be used standalone, Spring puts more emphasis on building up your own stack than Java EE does.
The choice of Java EE vs Spring is often a religiously influenced one. Technically both frameworks offer a similar programming model and a comparable amount of features. Java EE may be seen as slightly more light-weight (emphasizing convention over configuration) and as having the benefit of type-safe injection, while Spring may offer more of the smaller convenience methods that developers often need.
Additionally, Spring offers a more thorough and directly usable security API (called Spring Security), whereas Java EE leaves many security details open to (third-party) vendors.
To specifically answer the second question:
Developing your own framework gives you the burden of having to maintain it and educating new developers in using it.
The larger your framework becomes, the more time you have to devote specifically to it and the less time you thus have to solve your actual business problem. This is okay if your business problem is the framework, but otherwise it can become a bit of a problem, even for very large companies that can dedicate a group of people to such a framework.
If you're a smaller company (say ~15 developer max) this can really become a huge burden.
Additionally, if your own framework is the kind of framework that can take advantage of third party developments (e.g. third parties can develop components for JSF), then your own framework obviously won't be able to take advantage of that.
Unless of course you open source your own framework, but this will only significantly increase the burden of supporting it. Just dumping your source code on sourceforge does not really count. You will have to actively support it. All of a sudden your framework becomes their framework with maybe 'weird' feature requests and awkward error reports for environments that you have no personal interest in.
This also assumes that your framework will actually be used by external users. Unless it's really very, very good and you put lots of energy into it, this will probably not happen if it's simply the umpteenth Java web or ORM framework.
Obviously, some people have to take up the job of creating new frameworks, otherwise the industry would just stagnate, but if your prime concern is your business problem I would really think twice about starting your own framework.
Very vague question, I'm not really sure it's ever a good idea to "write your own" at this point for a work project (unless writing your own, IS the project). If it's a learning exercise, fine, but otherwise go use one of the libraries written by people who have been doing it far longer. If you really want to get involved, read their code, try and contribute patches etc.
For .NET there is Sharp Architecture, which is a pretty popular framework for layered applications.
Here's some of the stuff I use (I don't use Sharp Architecture)
First, the infrastructure stuff
For Dependency Injection, I use StructureMap. I use it because it's way more robust and performant than anything I would or could write, and it's very well supported within the .Net community. It also sticks to being DI, and doesn't venture out into other things that I might want to use other libs for (AOP etc). The fluent configuration is fantastic (but many .Net DI Tools have that now)
For AOP, I use Linfu Dynamic Proxy. I know a lot of people that like the code weaver variety for performance reasons, but that's always seemed a bit like premature optimization to me.
For a DataMapper, I use AutoMapper. This is one I'm on-again, off-again with. If you can do your mappings based just on convention, then great, I'll use it. Once I have to start tweaking the configuration to do special things... to me that starts to get into the gray area where the code might be clearer with just some left => right mapping wrapped in a function.
Web/UI
Asp.Net MVC. Although to be quite honest, I'm having a falling out lately and may soon be moving to FubuMvc. Asp.Net MVC seems like it has split personalities in terms of API design (dynamic over here, static over there, using blocks to render forms, but System.Actions to render other things etc). Combine that with the fact that it's not really OSS (you can't submit a patch), and to me there's a compelling reason why the community should come up with something better that's OSS.
Persistence
NHibernate, specifically Fluent NHibernate. Sure, I'd love to write my own OR/M, but at the same time I'm certain that the hordes of developers who have worked on NHibernate are way smarter than me.
Services/Distribution etc
WCF for Synchronous calls
NServiceBus for Messaging and most async calls.
Most of this stuff is OSS, so how long will it be around, well, I would imagine a good long while.
This question doesn't work very well. Selecting frameworks is difficult, and very context specific. For each selection process you might end up with a simple shortlist and a simple list of questions to answer, but those lists do not transfer well to other selections.
The number of parameters and the parameter sensitivity influencing a decision is very large, and at enterprise level a lot of them are not technical.
Currently, there are no frameworks available that are ready to support these near-term enterprise needs:
the switch for most of the workforce from PC to tablet and phone;
the switch from web clients and RDBMSs to P2P/disconnected storage and distribution.

IoC and events

I am having a really hard time reconciling IoC, interfaces, and events. Let's see if I can explain this without writing a book.
I'm just getting started with IoC and I'm playing with Spring. We have a simple data layer that was built long before EF or the others. One of the classes is a DBProcedure which has some methods and events.
I created an IDBProcedure interface that the 'real' DBProcedure class implements. In TDD fashion I'd like to be able to swap out the 'real' DBProcedure class for another that implements the same interface for testing. To me, this means that the IDBProcedure interface should be defined in a different namespace/project than my data layer, right?
But a DBProcedure can raise some events, and those events deliver custom EventArgs-derived classes. Does that mean the EventArgs classes need to be defined outside the data layer too? It seems so, to make the interface work, but that also seems bad because it spreads data-layer concerns around.
On the other hand maybe I have the wrong idea - is it ok to include the data layer namespace when I'm testing to get interface and event definitions even though I'm not using any of the 'real' classes?
Yes, you need to move the interfaces and all the types they depend on somewhere else, because you do not want the interface module to depend on the implementations.
The typical choice for this is one of two alternatives:
Impl ----> Api <---- client
(Implementation depends on api, client depends on api, everything in api module)
Impl ----> Api <---- Client
   \        |        /
    \       v       /
     +--> Model <--+
Here everyone depends on a common "model" module, and this contains the enums and such. The advantage of this version is that you can have multiple API modules share the same common enums and other artifacts. (Because you usually don't want APIs to depend on other API modules.)
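A Java-flavored sketch of that layout (the question itself is .NET, but the module structure is the same idea; all names are invented): the interface and the event types it exposes live in the shared api/model module, and only the implementation module knows about the data layer.

// api module -- File: DBProcedure.java
public interface DBProcedure {
    void execute();
    void addListener(ProcedureListener listener);
}

// api module -- File: ProcedureCompletedEvent.java
// (the EventArgs equivalent sits next to the interface, not in the data layer)
public class ProcedureCompletedEvent {
    private final int rowsAffected;
    public ProcedureCompletedEvent(int rowsAffected) { this.rowsAffected = rowsAffected; }
    public int getRowsAffected() { return rowsAffected; }
}

// api module -- File: ProcedureListener.java
public interface ProcedureListener {
    void completed(ProcedureCompletedEvent event);
}

// impl module (depends only on the api module) -- File: SqlDBProcedure.java
public class SqlDBProcedure implements DBProcedure {
    private final java.util.List<ProcedureListener> listeners = new java.util.ArrayList<>();

    public void addListener(ProcedureListener listener) { listeners.add(listener); }

    public void execute() {
        // ...run the stored procedure...
        for (ProcedureListener l : listeners) {
            l.completed(new ProcedureCompletedEvent(42));
        }
    }
}

The test project then only references the api module for its fakes, without dragging in the real data layer.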