I would like to know the best way to extend existing functionality (probably by using AOP or annotations). The scenario I am looking at is this:
We have a service, say DisableEmployee, which uses an Employee entity; it performs some validation and then disables the employee's access.
For certain customers I would like to extend this functionality so that DisableEmployee not only disables access but also imposes some penalty points.
One approach would be to extend the base class and add the additional functionality there.
Is it possible (or advisable) to use AOP and annotations here, where I annotate my DisableEmployee and then weave the additional code into the class at compile time (using an aspect)? I have read about APT and Velocity, which should help me achieve this.
The reason I am looking at APT is that we might also have to extend the entity classes to add some new attributes.
The idea behind this approach is to see whether we can classify service extension as a form of cross-cutting concern (like traditional logging, auditing, etc.).
Thanks in advance
Your question is too general to get a reliable answer, but at the moment I cannot see anything here that could not be implemented in AspectJ.
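As a sketch of what the compile-time approach could look like in AspectJ (the @ImposesPenalty annotation, the disable method signature, and the PenaltyService are all hypothetical, standing in for your own domain types):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker for service classes whose disable logic should be extended.
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.TYPE)
@interface ImposesPenalty {}

// AspectJ aspect, woven at compile time by ajc: after any annotated service
// disables an employee, penalty points are imposed as well.
public aspect PenaltyAspect {

    pointcut disabling(Employee employee) :
        execution(* (@ImposesPenalty *).disable(Employee)) && args(employee);

    after(Employee employee) returning : disabling(employee) {
        PenaltyService.imposePenaltyPoints(employee); // hypothetical penalty API
    }
}
```

Whether this is advisable is another matter: compile-time weaving keeps the base service untouched, but it also hides the extra behavior from anyone reading only the plain Java source.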
If given the option to use Sling Models or a WCM Use class, which one should be preferred, when, and why?
Is either of them better performance-wise?
Thanks in Advance
Sling Models save you a lot of time for accessing simple objects such as the current page or resource, injecting properties or services, and adapting from a Resource or a SlingHttpServletRequest to your model. Granted, with the plain API your code will execute a little faster, because you initialize only the objects you really need, but you have to do all of those things "manually". I think that this Sightly introduction gives a good overview of all the possible implementations you can go with. You can also have a look at the official Sightly documentation. Below you can find a quick overview of what you can expect, which will hopefully make your decision easier (quoted from the official Sightly documentation); a small Sling Models sketch follows at the end of this answer.
Java Use Provider
Advantages
Use-objects provided through bundles:
faster to initialise and execute than Sling Models for similar code
easy to extend from other similar Use-objects
simple setup for unit testing
Use-objects backed by Resources:
faster to initialise and execute than Sling Models for similar code
easy to override from inheriting components through search path overlay or by using the sling:resourceSuperType property, allowing for greater flexibility
business logic for components sits next to the Sightly scripts where the objects are used
Disadvantages
Use-objects provided through bundles:
lacks flexibility in terms of component overlaying
Use-objects backed by Resources:
cannot extend other Java objects
the Java project might need a different setup to allow running unit tests, since the objects will be deployed like content
Sling Models Use Provider
Advantages
convenient injection annotations for data retrieval
easy to extend from other Sling Models
simple setup for unit testing
Disadvantages
lacks flexibility in terms of component overlaying, relying on service.ranking configurations
If you ask me, I would always take a framework such as Sling Models or Slice, which makes development easier and faster. In the end, the performance impact of using a framework is not really a problem; it would hardly be the only third-party framework in the project. But if your project is performance-oriented, you could run some tests with all the options you have and decide whether such a framework suits your needs (or just mix both).
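For illustration, a minimal Sling Model might look like this (a hedged sketch; the class and property names are invented):

```java
import javax.inject.Inject;

import org.apache.sling.api.resource.Resource;
import org.apache.sling.models.annotations.Model;
import org.apache.sling.models.annotations.Optional;

// Adapts the current resource to this model; Sling Models handles the injection.
@Model(adaptables = Resource.class)
public class TeaserModel {

    // Injected from same-named properties of the resource (hypothetical names).
    @Inject
    @Optional
    private String title;

    @Inject
    @Optional
    private String teaserText;

    public String getTitle() {
        return title;
    }

    public String getTeaserText() {
        return teaserText;
    }
}
```

In a Sightly script you would then pull it in with data-sly-use, e.g. data-sly-use.teaser="com.example.TeaserModel", and access ${teaser.title}.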
Previously I asked this question, and on an answer I got this comment:
This works, however injecting the container to a part, as far as I know, is not a "normal" use-case of MEF.
In my web app I have a few repositories that, of course, retrieve entities from the DB. To make them as loosely coupled as possible I'm making them return interfaces (e.g. IUser, IBill, IBlaBlaBla...), and the repository project only references the library project (which contains the interfaces). I use MEF's composition capabilities to tie it all up...
Since the repository must have a concrete object to fill with the info it got from the DB, and only the container is aware of which concrete class maps to a specific interface, I think the repository MUST have a reference to the container so it can call Resolve(), get a new instance, and do its job; but that is apparently a mistake.
Can anyone tell me why, and what approach would be better?
PS: I don't know if it's relevant but I'm using DDD...
I think the flaw in your design that led to this problem is the use of interfaces to hide entities behind. Since entities are the core concepts of your domain, there is little point in hiding them behind an abstraction. Let's put it differently: do you ever have a different implementation of IUser?
In other words, ditch the IUser, IBill, etc. interfaces and let your repositories and business commands depend directly on your aggregate entities.
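A quick sketch of what that looks like (Java-flavored; only the repository, which genuinely may have several implementations, keeps an interface):

```java
// The aggregate entity is a plain concrete class; there is no IUser interface.
public class User {
    private final long id;
    private String name;

    public User(long id, String name) {
        this.id = id;
        this.name = name;
    }

    public long getId() { return id; }
    public String getName() { return name; }
}

// The repository abstraction depends directly on the entity type. The
// repository itself stays behind an interface because it is the part with
// plausible alternative implementations (SQL, in-memory for tests, etc.).
interface UserRepository {
    User getById(long id);
    void save(User user);
}
```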
I use Struts2 + Spring + Hibernate for web site development, and I am wondering about one thing: I have never used annotations in my web applications, but what is the best way to configure a web application, annotations (I never understood how they work) or config files, and why? Will more complex applications run faster with one or the other, or is it a matter of principle?
This isn't definitive; it is just what I do with similar tools.
Consider the Struts2 XML configuration versus conventions (struts2-convention-plugin) and annotations. The benefit of the latter is that there is a lot less work. When the conventions don't do what we want, we have a choice: use struts.xml, which will override the conventions, or use annotations, which will also override them. If you go with annotations on your action class, you can clearly see what is going on from one location. With struts.xml you often need to look at both the configuration file and the action to understand the whole picture.
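For example, with the convention plugin on the classpath, a hypothetical action can carry its whole mapping (a sketch, not project code):

```java
import org.apache.struts2.convention.annotation.Action;
import org.apache.struts2.convention.annotation.Result;

import com.opensymphony.xwork2.ActionSupport;

// Everything about this action - URL, result, logic - is visible in one place.
public class RegisterAction extends ActionSupport {

    @Action(value = "/register",
            results = { @Result(name = SUCCESS, location = "/WEB-INF/jsp/register-done.jsp") })
    public String execute() {
        // ... registration logic would go here ...
        return SUCCESS;
    }
}
```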
Although I advocate annotations, the XML configuration is still good for some things. It is a good place to set global parameters, it is still needed for defining custom interceptors/interceptor stacks, and if you need actions defined from wildcards it makes sense to have them there too. All these examples reinforce the point that it is the more general configuration that belongs in struts.xml, because it is bigger than any single action.
For Hibernate it is similar: your entity classes and their meta information are all in one place, which makes them easier to understand. There was one case where XML was more useful, in a testing situation: I needed to use the same entity classes but make extensive changes to the metadata, so I could simply load a different set of XML files.
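A typical annotated entity, as a sketch (table and column names invented), keeps the mapping right next to the fields it describes:

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "users")
public class User {

    @Id
    @GeneratedValue
    private Long id;

    // The mapping metadata sits directly on the field it configures.
    @Column(name = "user_name", nullable = false, length = 50)
    private String name;

    // getters and setters omitted for brevity
}
```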
With Spring I use annotations for injection but wire the beans up in application.xml.
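A sketch of that mix (the bean classes are made-up names):

```java
import org.springframework.beans.factory.annotation.Autowired;

interface OrderRepository {
    void markProcessed(long orderId);
}

public class OrderService {

    // The dependency arrives via annotation-driven injection...
    @Autowired
    private OrderRepository orderRepository;

    public void process(long orderId) {
        orderRepository.markProcessed(orderId);
    }
}
```

```xml
<!-- ...while the beans themselves are still declared in application.xml. -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/context
                           http://www.springframework.org/schema/context/spring-context.xsd">

    <!-- enables processing of @Autowired -->
    <context:annotation-config/>

    <bean id="orderRepository" class="com.example.JdbcOrderRepository"/>
    <bean id="orderService" class="com.example.OrderService"/>
</beans>
```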
Other stackoverflow posts that may be of interest:
Xml configuration versus Annotation based configuration
Is there a good reason to configure hibernate with XML rather than via annotations?
Seems like everyone is moving towards IoC containers. I've tried to "grok" it for a while, and as much as I don't want to be the one driver to go the wrong way on the highway, it still doesn't pass the test of common sense to me. Let me explain, and please correct/enlighten me if my arguments are flawed:
My understanding: IoC containers are supposed to make your life easier when combining different components. This is done through either a) constructor injection, b) setter injection and c) interface injection. These are then "wired up" programmatically or in a file that's read by the container. Components then get summoned by name and then cast manually whenever needed.
What I don't get:
EDIT: (Better phrasing)
Why use an opaque container that's not idiomatic to the language, when you can "wire up" the application in (imho) a much clearer way if the components were properly designed (using IoC patterns, loose-coupling)? How does this "managed code" gain non-trivial functionality? (I've heard some mentions to life-cycle management, but I don't necessarily understand how this is any better/faster than do-it-yourself.)
ORIGINAL:
Why go to all the lengths of storing the components in a container, "wiring them up" in ways that aren't idiomatic to the language, using things equivalent to "goto labels" when you call up components by name, and then losing many of the safety benefits of a statically-typed language by manual casting, when you'd get the equivalent functionality by not doing it, and instead using all the cool features of abstraction given by modern OO languages, e.g. programming to an interface? I mean, the parts that actually need to use the component at hand have to know they are using it in any case, and here you'd be doing the "wiring" using the most natural, idiomatic way - programming!
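To make the alternative concrete, this is the kind of plain-code wiring I mean (all names invented for illustration):

```java
interface PaymentGateway { void charge(double amount); }
interface OrderRepository { void save(String orderId); }

class ConsolePaymentGateway implements PaymentGateway {
    public void charge(double amount) { System.out.println("charging " + amount); }
}

class InMemoryOrderRepository implements OrderRepository {
    public void save(String orderId) { System.out.println("saving " + orderId); }
}

// The service programs to interfaces; implementations are supplied from outside.
class OrderService {
    private final PaymentGateway gateway;
    private final OrderRepository repository;

    OrderService(PaymentGateway gateway, OrderRepository repository) {
        this.gateway = gateway;
        this.repository = repository;
    }

    void placeOrder(String orderId, double amount) {
        gateway.charge(amount);
        repository.save(orderId);
    }
}

public class Main {
    // The composition root: plain, statically-typed code - no container,
    // no lookups by name, no casts.
    public static void main(String[] args) {
        OrderService service =
            new OrderService(new ConsolePaymentGateway(), new InMemoryOrderRepository());
        service.placeOrder("order-42", 9.99);
    }
}
```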
There are certainly people who think that DI Containers add no benefit, and the question is valid. If you look at it purely from an object composition angle, the benefit of a container may seem negligible. Any third party can connect loosely coupled components.
However, once you move beyond toy scenarios, you should realize that the third party that connects collaborators must take on more than the simple responsibility of composition. There may also be decommissioning concerns to prevent resource leaks. As the composer is the only party that knows whether a given instance was shared or private, it must also take on the role of lifetime management.
When you start combining various instance scopes, using a combination of shared and private services, and perhaps even scoping some services to a particular context (such as a web request), things become complex. It's certainly possible to write all that code with poor man's DI, but it doesn't add any business value - it's pure infrastructure.
Such infrastructure code constitutes a Generic Subdomain, so it's very natural to create a reusable library to address such concerns. That's exactly what a DI Container is.
BTW, most containers I know don't use names to wire components together - they use Auto-wiring, which combines the static information from Constructor Injection with the container's configuration of mappings from interfaces to concrete classes. In short, containers natively understand those patterns.
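For example, with Guice (used here purely as an illustration; the names are made up), the configuration maps an interface to a concrete class, and the container resolves constructor arguments from static type information:

```java
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;

class AutoWiringDemo {

    interface Gateway { void charge(double amount); }

    static class FakeGateway implements Gateway {
        public void charge(double amount) { System.out.println("charged " + amount); }
    }

    static class Checkout {
        private final Gateway gateway;

        @Inject
        Checkout(Gateway gateway) { this.gateway = gateway; }

        void buy() { gateway.charge(10.0); }
    }

    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new AbstractModule() {
            @Override
            protected void configure() {
                // The only configuration: interface -> implementation.
                bind(Gateway.class).to(FakeGateway.class);
            }
        });

        // No lookup by name, no cast: Checkout's constructor dependency is
        // resolved from its static type information.
        Checkout checkout = injector.getInstance(Checkout.class);
        checkout.buy();
    }
}
```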
A DI Container is not required for DI - it's just damned helpful.
A more detailed treatment can be found in the article When to use a DI Container.
I'm sure there's a lot to be said on the subject, and hopefully I'll edit this answer to add more later (and hopefully more people will add more answers and insights), but just a couple quick points to your post...
Using an IoC container is a subset of inversion of control, not the whole thing. You can use inversion of control as a design construct without relying on an IoC container framework. At its simplest, inversion of control can be stated in this context as "supply, don't instantiate." As long as your objects aren't internally depending on implementations of other objects, and are instead requiring that instantiated implementations be supplied to them, then you're using inversion of control. Even if you're not using an IoC container framework.
To your point on programming to an interface... I'm not sure what your experience with IoC containers has been (my personal favorite is StructureMap), but you definitely program to an interface with IoC. The whole idea, at least in how I've used it, is that you separate your interfaces (your types) from your implementations (your injected classes). The code which relies on the interfaces is programmed only to those, and the implementations of those interfaces are injected when needed.
For example, you can have an IFooRepository which returns from a data store instances of type Foo. All of your code which needs those instances gets them from a supplied object of type IFooRepository. Elsewhere, you create an implementation of FooRepository and configure your IoC to supply that anywhere an IFooRepository is needed. This implementation can get them from a database, from an XML file, from an external service, etc. Doesn't matter where. That control has been inverted. Your code which uses objects of type Foo doesn't care where they come from.
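In code the shape is roughly this (a Java-flavored sketch keeping the answer's names; the container registration itself is omitted):

```java
import java.util.List;

// The domain type and the abstraction the rest of the code sees.
class Foo {
    final String name;
    Foo(String name) { this.name = name; }
}

interface IFooRepository {
    List<Foo> getAll();
}

// One possible implementation; callers never know or care which one they get.
class XmlFooRepository implements IFooRepository {
    public List<Foo> getAll() {
        // ...would parse an XML file; stubbed here...
        return List.of(new Foo("from-xml"));
    }
}

// Consumer code is written purely against the interface; a concrete
// repository is injected from outside by the container.
class FooReport {
    private final IFooRepository repository;

    FooReport(IFooRepository repository) { this.repository = repository; }

    int count() { return repository.getAll().size(); }
}
```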
The obvious benefit is that you can swap out that implementation any time you want. You can replace it with a test version, change versions based on environment, etc. But keep in mind that you also don't need to have such a 1-to-1 ratio of interfaces to implementations at any given time.
For example, I once used a code generating tool at a previous job which spit out tons and tons of DAL code into a single class. Breaking it apart would have been a pain, but what wasn't much of a pain was to configure it to spit it all out in specific method/property names. So I wrote a bunch of interfaces for my repositories and generated this one class which implemented all of them. For that generated class, it was ugly. But the rest of my application didn't care because it saw each interface as its own type. The IoC container just supplied that same class for each one.
We were able to get up and running quickly with this and nobody was waiting on the DAL development. While we continued to work in the domain code which used the interfaces, a junior dev was tasked with creating better implementations. Those implementations were later swapped in, all was well.
As I mentioned earlier, this can all be accomplished without an IoC container framework. It's the pattern itself that's important, really.
First of all, what is IoC? It means that the responsibility for creating dependent objects is taken away from the main object and delegated to a third-party framework. I always use Spring as my IoC framework, and it brings tons of benefits to the table.
Promotes coding to interfaces and decoupling - The key benefit is that IoC promotes decoupling and makes it very easy. You can inject an interface into your main object and then use the interface's methods to perform tasks. The main object does not need to know which dependent object is assigned to the interface. When you want to use a different class as the dependency, all you need to do is swap the old class for a new one in the config file, without a single line of code change (see the sketch at the end of this answer). You can argue that this can be done in code using various interface design patterns, but an IoC framework makes it a walk in the park. So even as a newbie you end up leveraging various interface design patterns like bridge, factory, etc.
Cleaner code - As most object creation and object life-cycle operations are delegated to the IoC container, you are saved from writing repetitive boilerplate code. You end up with cleaner, smaller and more understandable code.
Unit testing - IoC makes unit testing easy. Since you are left with decoupled code, you can easily test it in isolation. You can also easily inject dependencies in your test cases and see how the different components interact.
Property configurators - Almost all applications have a properties file where they store application-specific static properties. To access those properties, developers would otherwise need to write wrappers that read and parse the properties file and store the properties in a format the application can access. All the IoC frameworks provide a way of injecting static properties/values into a specific class, so this again becomes a walk in the park.
These are some of the points I can think of right away; I am sure there are more.
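To make the config-swap and property-injection points concrete, a minimal Spring-flavored sketch (all class and property names invented):

```java
import org.springframework.beans.factory.annotation.Value;

interface Notifier {
    void send(String user, String message);
}

// Which implementation backs Notifier is chosen purely in the config file,
// e.g. <bean id="notifier" class="com.example.EmailNotifier"/>. Swapping in
// SmsNotifier is a one-line config change; no Java code is touched.
class EmailNotifier implements Notifier {

    // Property-configurator benefit: a value from a properties file is
    // injected directly, with no hand-written properties parsing.
    @Value("${mail.from}")
    private String from;

    public void send(String user, String message) {
        System.out.println("mail to " + user + " from " + from + ": " + message);
    }
}

class SmsNotifier implements Notifier {
    public void send(String user, String message) {
        System.out.println("sms to " + user + ": " + message);
    }
}
```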
I am having a really hard time reconciling IoC, interfaces, and events. Let's see if I can explain this without writing a book.
I'm just getting started with IoC and I'm playing with Spring. We have a simple data layer that was built long before EF or the others. One of the classes is a DBProcedure which has some methods and events.
I created an IDBProcedure interface that the 'real' DBProcedure class implements. In TDD fashion I'd like to be able to swap out the 'real' DBProcedure class for another that implements the same interface for testing. To me, this means that the IDBProcedure interface should be defined in a different namespace/project than my data layer, right?
But a DBProcedure can raise some events and those events deliver custom EventArgs-derived classes. Does that mean that the EventArgs-classes need to be defined outside the data layer too? Seems like it to make the interface work, but that seems bad because it spreads data-layerness around?
On the other hand maybe I have the wrong idea - is it ok to include the data layer namespace when I'm testing to get interface and event definitions even though I'm not using any of the 'real' classes?
Yes, you need to move the interfaces and all the types they depend on somewhere else, because you do not want the interface module to depend on the implementations.
The typical choice for this is one of two alternatives:
Impl ----> Api <---- client
(Implementation depends on api, client depends on api, everything in api module)
Impl ----> Api <---- Client
  \         |        /
   \        v       /
    ----> Model <----
Here everyone depends on a common "model" module, and this contains the enums and such. The advantage of this version is that you can have multiple API modules share the same common enums and other artifacts (because you usually don't want API modules to depend on other API modules).
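In Java-flavored terms (the question is .NET, but the module split is identical; the member names here are invented), the api module would hold both the interface and the event-payload type, each in its own file:

```java
// --- api module ---

// The event-payload type lives next to the interface, so neither clients
// nor alternative implementations need to reference the data layer.
public class ProcedureCompletedEvent {
    private final int rowsAffected;

    public ProcedureCompletedEvent(int rowsAffected) { this.rowsAffected = rowsAffected; }

    public int getRowsAffected() { return rowsAffected; }
}

public interface DBProcedureListener {
    void completed(ProcedureCompletedEvent event);
}

public interface IDBProcedure {
    void execute();
    void addListener(DBProcedureListener listener);
}

// --- impl module (the data layer) would contain the 'real' DBProcedure
// implementing IDBProcedure; tests substitute their own implementation
// without ever referencing the impl module. ---
```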