How do I inject objects into a Servlet using Dagger?
Since the servlet container instantiates the servlets themselves, they are not created with Dagger. Therefore, the only mechanism I can see to inject into them is static injection, which the Dagger homepage warns against. Is there another (best-practice) way to do it?
Specifically, I am using Jetty and GWT (my servlets extend RemoteServiceServlet), but I don't think those details matter.
There is not (yet) any stock infrastructure code to support a Java EE servlet stack for Dagger.
That said, there are ways you could home-brew it until we get to it. If you were using it only for singletons, then you could mirror what some people are doing on Android: initialize your graph at app startup using a context listener, then use the Servlet's init() method to self-inject.
It gets much trickier when you try to add scoping to requests and such - not impossible, but it requires more scaffolding.
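To give an idea of the scaffolding involved, here is a rough sketch of request scoping, assuming Dagger 1: a servlet filter extends the application-wide graph with a per-request module via ObjectGraph.plus() and stashes the child graph in a request attribute. RequestModule and the attribute names are hypothetical.

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

import dagger.ObjectGraph;

public class RequestScopeFilter implements Filter {

    private ServletContext servletContext;

    @Override
    public void init(FilterConfig filterConfig) {
        servletContext = filterConfig.getServletContext();
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
        ObjectGraph appGraph = (ObjectGraph) servletContext
                .getAttribute("dagger.objectGraph");
        // plus() builds a child graph that adds the request-scoped bindings.
        ObjectGraph requestGraph = appGraph.plus(new RequestModule(request));
        request.setAttribute("dagger.requestGraph", requestGraph);
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
        // nothing to clean up
    }
}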
While there is no stock infrastructure for this, I did the following:
I put the ObjectGraph into the ServletContext of the web server. Then, for each Servlet, I can do the following:
@Inject
SomeDependency dependency;

@Inject
SomeOtherDependency otherDependency;

@Override
public void init(ServletConfig servletConfig) throws ServletException {
    super.init(servletConfig);
    // Fetch the graph from the ServletContext and self-inject.
    ((ObjectGraph) servletConfig.getServletContext()
            .getAttribute(DaggerConstants.DAGGER_OBJECT_GRAPH)).inject(this);
}
where I have previously defined the DaggerConstants myself.
There are likely a variety of ways to get the ObjectGraph into the ServletContext, depending on what your application is. We use an embedded Jetty server, so we control everything during startup. I'm not sure how you would do it in a general container, but presuming you instantiate your main ObjectGraph through some init servlet, you would do it there:
servletContext.setAttribute(DaggerConstants.DAGGER_OBJECT_GRAPH, objectGraph);
Note that our application uses a single ObjectGraph for the entire application, which might not be your situation.
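If you are not embedding the server, a ServletContextListener registered in web.xml is one way to publish the graph at startup (matching the context-listener suggestion above). A minimal sketch, assuming Dagger 1 and a hypothetical AppModule:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import dagger.ObjectGraph;

public class DaggerContextListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent event) {
        // Build the application-wide graph once at startup and expose it
        // to every servlet via the ServletContext.
        ObjectGraph objectGraph = ObjectGraph.create(new AppModule());
        event.getServletContext()
                .setAttribute(DaggerConstants.DAGGER_OBJECT_GRAPH, objectGraph);
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        event.getServletContext()
                .removeAttribute(DaggerConstants.DAGGER_OBJECT_GRAPH);
    }
}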
I have been working with AEM for a while but somehow never thought about this. Every AEM project that I have worked on has one similarity in its code structure: there is an interface for every service written.
My question is: why do we need an interface for every service?
Can @Reference or @Inject not use the services without an interface?
Using interfaces is good practice to decouple the user of a service from the implementation. In many cases you even want to have an API bundle, so the user of the service does not need a Maven dependency on the implementing bundle.
On the other hand, you are not required to use interfaces. Especially when wiring components inside a single bundle, interfaces are often an unnecessary layer. In this case, simply export the service directly with the class.
See here for an example:
@Component(service = DistributionMetricsService.class)
public class DistributionMetricsService {
    ...
}
and here for the client code:
@Reference
private DistributionMetricsService distributionMetricsService;
So the main difference is that you have to specify the service property if you want to export a component with its implementation class.
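For comparison, here is what the conventional interface-based variant could look like; with an interface, Declarative Services registers the component under the interface type by default, so no service property is needed (names here are illustrative):

import org.osgi.service.component.annotations.Component;

// In MetricsService.java: the API the client code depends on.
public interface MetricsService {
    void count(String name);
}

// In MetricsServiceImpl.java: registered as MetricsService automatically.
@Component
public class MetricsServiceImpl implements MetricsService {
    @Override
    public void count(String name) {
        // ...
    }
}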
GWT's servlet implementation has onBefore/onAfterDeserialization hooks, which would give me a way to start and stop transactions without doing anything fancy. However, those methods don't let me properly check for error conditions after the service method has been invoked: I only have access to the serialized return value, not directly to any exception that might have been thrown, so deciding whether to roll back or not is not possible that way without rewriting parts of the GWT servlet.
I was thinking about using AspectJ's compile-time weaving. However, this does not work with NetBeans' compile-on-save feature, because the module needs to be recompiled using the AspectJ compiler.
How about LTW (load-time-weaving)? Is there any way (or example) to add LTW to the webapp container without using the Spring framework?
I was also thinking about using AOP based on Java dynamic proxies, i.e. putting a proxy in front of the servlet. Again, the question arises how to tell the Jetty webapp container to load the proxy instead of the original servlet.
Or is there any ready-to-use solution out there already?
I think you could override a combination of public String processCall(RPCRequest rpcRequest) from RemoteServiceServlet and RPC.invokeAndEncodeResponse to do what you want.
Not ideal, as you need to copy/paste a few lines of code, but they really are only a few.
I hit the same problems myself when I needed some customizations; the relevant methods didn't have the access modifiers I needed, so I ended up copy/pasting some portions.
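To sketch what that copy/paste amounts to (the transaction hooks below are hypothetical placeholders for whatever transaction API you use): you re-implement the few relevant lines of RPC.invokeAndEncodeResponse inside processCall, so the actual exception becomes visible for the commit/rollback decision.

import java.lang.reflect.InvocationTargetException;

import com.google.gwt.user.client.rpc.SerializationException;
import com.google.gwt.user.server.rpc.RPC;
import com.google.gwt.user.server.rpc.RPCRequest;
import com.google.gwt.user.server.rpc.RemoteServiceServlet;

public abstract class TransactionalRemoteServiceServlet extends RemoteServiceServlet {

    // Hypothetical hooks: wire these to your transaction API (JTA, Hibernate, ...).
    protected abstract void beginTransaction();
    protected abstract void commitTransaction();
    protected abstract void rollbackTransaction();

    @Override
    public String processCall(RPCRequest rpcRequest) throws SerializationException {
        beginTransaction();
        try {
            // Invoke the service method ourselves so we see the real exception
            // instead of only its serialized form.
            Object result = rpcRequest.getMethod()
                    .invoke(this, rpcRequest.getParameters());
            commitTransaction();
            return RPC.encodeResponseForSuccess(rpcRequest.getMethod(), result,
                    rpcRequest.getSerializationPolicy(), rpcRequest.getFlags());
        } catch (InvocationTargetException e) {
            rollbackTransaction(); // the service method threw an exception
            return RPC.encodeResponseForFailure(rpcRequest.getMethod(),
                    e.getCause(), rpcRequest.getSerializationPolicy(),
                    rpcRequest.getFlags());
        } catch (IllegalAccessException e) {
            rollbackTransaction();
            throw new SecurityException("Blocked attempt to invoke method", e);
        }
    }
}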
I can't comment on the rest of your question, but I don't expect you'll find any ready-to-use solutions, as GWT-RPC doesn't seem to have any new fans out there, just people maintaining legacy systems. I therefore expect that you'll either find nothing or find solutions that are no longer maintained.
My research so far says that javax.servlet.http.HttpServletRequest is the interface used when writing regular Java Servlets, while org.apache.http.HttpRequest is typically used to implement RESTful services. I see an example of the latter in one of the internally available frameworks in my organization, where org.apache.http.HttpRequest is the interface for programming RESTful services.
I still feel that org.apache.http.HttpRequest has been made available by Apache to facilitate RESTful implementations, since this interface does not carry any status code and works by passing entities as responses.
What exactly is the difference between the two interfaces and when one should be used over the other?
HttpServletRequest is a server-side class that is part of the Java EE Servlet APIs. You use it when you are implementing ... a servlet.
In the Java context, HttpRequest could (in theory) be anything ... because it is not a Java SE or EE class. But usually it is a class in the Apache Http Components library. This is typically used for client-side code, though it is also possible to use it server-side too.
(There are HttpRequest classes in non-Java contexts also ...)
What exactly is the difference between the two interfaces and when one should be used over the other?
They are unrelated interfaces. (Or "exactly" unrelated ... if you prefer :-) )
Use HttpServletRequest when you are implementing servlets.
Don't use HttpRequest when you are implementing servlets.
"RESTful" is orthogonal; i.e. you can implement RESTful servers using servlet, and non-RESTful servers without using servlets.
I am still not clear about the basic difference between the two. Why would somebody need a HttpRequest in the first place if HttpServletRequest is already there?
Because that somebody's application may not be using the standard Java EE servlet framework. And if they are not, then it is not "already there".
From this point of view, the basic difference between HttpRequest and HttpServletRequest is that they are part of different frameworks, and you use one or the other depending on which framework you are using.
Why do we have two classes? Because of history. Java EE servlets came first, and were standardized many years ago and are widely used. The Apache HTTP Components library was implemented later to address use-cases that servlets did not address; e.g. where servlets are too heavy-weight.
Oracle can't change Java EE to replace HttpServletRequest with the Apache HttpRequest class because it would break too much customer code.
Apache couldn't have adopted HttpServletRequest in HTTP Components because it has "baggage" that is not appropriate to non-servlet use-cases.
Either way, it is what it is.
Which framework do you choose? How do you choose? Those questions are both off-topic for StackOverflow. (Recommendations, subjective, too broad, etc)
I think the basic difference is that HttpServletRequest is part of the communication between the container and the servlet: the container creates the object and passes it on to the servlet. HttpRequest, by contrast, is part of the communication between the container and the client, because the container converts the HttpServletResponse into an HTTP response and then sends it back to the client.
OK, I am using GWTP; it has Client, Server & Shared packages.
I have a Util class in client.
package my.client;

public class Util {
    public static String method1() { /* ... */ }
    // more methods here
}
In server I have:

package my.server;

public class GetDataActionHandler {
    // Should I do it like this?
    String s = my.client.Util.method1();
}
Is it safe to do that, or should I put Util into the shared package, like this:
package my.shared;

public class Util {
    public static String method1() { /* ... */ }
    // more methods here
}
What is the difference if we put Util in the shared package? Is it safer, or are there other good reasons?
client is as safe as shared; these are just names and conventions.
By placing your class in client though, you lose the indication that you're using it also on the server side, where client-specific code won't run.
By placing it in shared, you're signaling to yourself that you should make sure the code you put in the class can effectively be used on both the client and the server.
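In a standard GWT setup those names are wired up in the module descriptor (.gwt.xml): only packages listed as source paths get compiled to JavaScript. A typical descriptor fragment (paths illustrative):

<module>
    <!-- Only these packages are translatable to JavaScript;
         everything else stays server-side plain Java. -->
    <source path='client'/>
    <source path='shared'/>
</module>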
Read here about GWT MVP architecture
Read more here about GWT Architectural Perspectives
Accessing client-side code from the server side will make your code tightly coupled. The shared package is used for this purpose, but it is still not for any UI-specific code. The shared package is used to define DTOs (Data Transfer Objects) and utility classes.
There is no point in accessing GWT UI-specific utility classes on the server side.
Try to decouple your code in such a way that if, in the future, you want to use your server-side classes for a Swing application or any other web application besides GWT, you can easily incorporate them. Think about it.
Let me ask two coupled questions that might boil down to one about good application design ;-)
What is the best practice for using event based communication in an e4 RCP application?
How can I write simple unit tests (using JUnit) for classes that send/receive events using dependency injection and IEventBroker ?
Let’s be more concrete: say I am developing an Eclipse e4 RCP application consisting of several plugins that need to communicate. For communication I want to use the event service provided by org.eclipse.e4.core.services.events.IEventBroker so my plugins stay loosely coupled. I use dependency injection to inject the event broker to a class that dispatches events:
@Inject static IEventBroker broker;

private void sendEvent() {
    broker.post(MyEventConstants.SOME_EVENT, payload);
}
On the receiver side, I have a method like:
@Inject
@Optional
private void receiveEvent(@UIEventTopic(MyEventConstants.SOME_EVENT) Object payload) {
    // handle the event
}
Now the questions:
In order for IEventBroker to be successfully injected, my class needs access to the current IEclipseContext. Most of my classes using the event service are not referenced by the e4 application model, so I have to manually inject the context on instantiation using e.g. ContextInjectionFactory.inject(myEventSendingObject, context);
This approach works but I find myself passing around a lot of context to wherever I use the event service. Is this really the correct approach to event based communication across an E4 application?
How can I easily write JUnit tests for a class that uses the event service (either as a sender or receiver)? Obviously, none of the above annotations work in isolation, since there is no context available. I understand everyone's convinced that dependency injection simplifies testability. But does this also apply to injecting services like the IEventBroker?
This article describes creating your own IEclipseContext to include the DI process in tests. I'm not sure if this could resolve my 2nd issue, but I also hesitate to run all my tests as JUnit plug-in tests, as it appears impractical to fire up the PDE for each unit test. Maybe I just misunderstand the approach.
This article speaks about “simply mocking IEventBroker”. Yes, that would be great! Unfortunately, I couldn’t find any information on how this can be achieved.
All this makes me wonder whether I am still on the right track or if this is already a case of bad design? And if so, how would you go about redesigning? Move all event related actions to dedicated event sender/receiver classes or a dedicated plugin?
Actually, running a JUnit plug-in test is not that expensive. You can configure the launch configuration to run in headless mode so the only thing loaded is a lightweight PDE without workbench. The same happens when you run a headless build with for example Tycho. Surefire launches your test-bundle as headless plug-in test by default.
The advantage over isolated unit tests is that you can access your plug-in's resources and, most importantly, use dependency injection. If you want to mock an injected object you have to run a plug-in test so you can use InjectorFactory.
This is how you would go about mocking the event service: IEventBroker is an interface, so the only thing you need to do is write a mock implementation for it:
public class IEventBrokerMock implements IEventBroker {

    @Override
    public boolean post(String topic, Object data) { return true; }

    // ... override the remaining IEventBroker methods the same way
}
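If you also want to assert on what was posted, a recording variant (hypothetical, not part of any framework) can capture the topics:

import java.util.ArrayList;
import java.util.List;

import org.eclipse.e4.core.services.events.IEventBroker;
import org.osgi.service.event.EventHandler;

public class RecordingEventBroker implements IEventBroker {

    public final List<String> postedTopics = new ArrayList<>();

    @Override
    public boolean post(String topic, Object data) {
        postedTopics.add(topic); // record for later assertions
        return true;
    }

    @Override
    public boolean send(String topic, Object data) {
        postedTopics.add(topic);
        return true;
    }

    @Override
    public boolean subscribe(String topic, EventHandler eventHandler) {
        return true;
    }

    @Override
    public boolean subscribe(String topic, String filter,
            EventHandler eventHandler, boolean headless) {
        return true;
    }

    @Override
    public boolean unsubscribe(EventHandler eventHandler) {
        return true;
    }
}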
In your test method you would have something like
InjectorFactory.getDefault().addBinding(IEventBroker.class).implementedBy(IEventBrokerMock.class);
ClassUnderTest myObject = InjectorFactory.getDefault().make(ClassUnderTest.class, null);
If you want to work with a context the test method would instead contain
IEclipseContext context = EclipseContextFactory.create();
context.set(IEventBroker.class, new IEventBrokerMock());
ClassUnderTest myObject = ContextInjectionFactory.make(ClassUnderTest.class, context);
If you run this as JUnit plug-in test your object will have the mocked event service injected.
For testing, instead of DI, I use "eventBroker = new org.eclipse.e4.ui.services.internal.events.EventBroker();" to get an event broker object to use; it works OK.