Agent classloading on JBoss

We are writing an instrumentation agent to monitor a web application. This instrumentation package does not see all of the application's classes. We have determined that classes that do not extend other classes (standalone classes) get loaded properly, but classes that extend other classes in our web application are loaded in a different classloader.
For example:
If we have a class called User, it gets loaded in the classloader we expect, and our instrumentation application processes the class correctly.
However, if we have a class called Employee that extends Person, neither Person nor Employee is loaded in that classloader. We have enabled -verbose:class, which also shows these classes are not loaded there. We need to determine which classloader these classes are loaded into and how we can instrument them.
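One way to make the defining loader visible is a small diagnostic agent. The sketch below is a minimal, hypothetical example (the class and method names are ours, not from the question) using only java.lang.instrument: it registers a transformer that prints each class name together with the classloader that defines it. On JBoss, each deployment/module typically has its own ModuleClassLoader, whose toString() names the module.

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Hypothetical diagnostic agent: logs every class that gets defined,
// together with the classloader that defines it.
public class LoaderLoggingAgent {

    public static void premain(String args, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // On JBoss each deployment/module has its own ModuleClassLoader,
                // so a superclass may be defined by a different loader than the
                // subclass that triggered its loading.
                System.out.println(className + " defined by " + describe(loader));
                return null; // observe only; no bytecode change
            }
        });
    }

    // null stands for the bootstrap loader (java.lang.* and friends)
    static String describe(ClassLoader loader) {
        return loader == null ? "bootstrap" : loader.toString();
    }
}
```

Run it with -javaagent:loader-logging-agent.jar; comparing its output for User versus Employee/Person should reveal which module classloader defines the inherited classes.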

Related

Meaning of Play Application class

I'm developing a project using Play and I'm confused about the Application class. I see it in many code snippets, for example:
class Application(silhouette: Silhouette[DefaultEnv]) extends Controller
(source)
But I can't tell whether it's an arbitrary name for a generic controller (used instead of FooController, MyController...) or whether it has a special meaning and is handled by the framework in a special way.
To add to the confusion, I realized that there is also the Application interface (source), whose concrete implementation is DefaultApplication (source), and the documentation says:
Application creation is handled by the framework engine.
so... what is the meaning of having an Application controller?
It is just an arbitrary, generic name used in examples. Imagine you are writing documentation: as an example controller name, you might call it Application, ApplicationController, MyApp, MyAppController, MainController, etc.
But the DefaultApplication you found in the Play documentation does have meaning.
It is the default implementation of the play.Application interface (in Scala, the play.api.Application trait) that manages the Play application's environment, state, etc.
https://www.playframework.com/documentation/2.7.x/JavaApplication
https://www.playframework.com/documentation/2.7.x/ScalaApplication
As long as you place controllers under the controllers namespace, you can name them whatever you want, including "Application".
By the way, the default controller in the official Play template used to be named Application. I suspect that is why you see so many controller snippets named Application.
https://github.com/typesafehub/activator-hello-play-scala/blob/master/app/controllers/Application.scala

Why do we need an interface to define every service in AEM?

I have been working with AEM for a while but somehow never thought about this. Every AEM project that I have worked on has one thing in common in its code structure: there is an interface for every service written.
My question is: why do we need an interface for every service?
Can @Reference or @Inject not use the services without an interface?
Using interfaces is good practice to decouple the consumer of a service from the implementation. In many cases you even want a separate API bundle so the consumer of the service does not need a Maven dependency on the implementing bundle.
On the other hand, you are not required to use interfaces. Especially when wiring components inside a single bundle, an interface is often an unnecessary layer. In that case, simply export the service directly with its class.
See here for an example:
@Component(service = DistributionMetricsService.class)
public class DistributionMetricsService {
    ...
}
and here for the client code:
@Reference
private DistributionMetricsService distributionMetricsService;
So the main difference is that you have to specify the service property if you want to export a component with its implementation class.
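The decoupling argument can also be seen without any OSGi machinery. The following is a minimal plain-Java sketch (the service and class names are invented for illustration): the client holds only the interface type, so the implementing class can change freely. In an AEM bundle, the implementation would carry @Component(service = GreetingService.class) and the client field would carry @Reference.

```java
// Hypothetical example of interface-based service wiring.
// In an OSGi/AEM bundle, GreetingServiceImpl would be annotated with
// @Component(service = GreetingService.class) and the client field with @Reference.

interface GreetingService {
    String greet(String name);
}

class GreetingServiceImpl implements GreetingService {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}

class GreetingClient {
    // The client depends only on the interface, never on the implementation class.
    private final GreetingService greetingService;

    GreetingClient(GreetingService greetingService) {
        this.greetingService = greetingService;
    }

    String welcome(String name) {
        return greetingService.greet(name) + "!";
    }
}
```

Swapping GreetingServiceImpl for another implementation (or moving it to a different bundle) requires no change to GreetingClient, which is exactly what the interface buys you.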

Cannot inherit undefined extended classes in SuiteCRM

We have an external dependency that connects to objects within SuiteCRM (custom objects that extend from SugarBean and Basic), and when we load the application in our browser, it cannot load the Basic class, presumably because it is not loaded into memory.
Is there a way to load all the required SuiteCRM classes into an external project, if they share the same root directory (e.g., /project is part of the SuiteCRM install)?
If you are accessing this via an external application, I would recommend using the REST/SOAP API to interface with SuiteCRM.
If you are trying to reuse the classes directly, then you can do the same thing as the entry points like index.php, soap.php, etc.
They all include entryPoint.php:
require_once 'include/entryPoint.php';

Using JPA (application managed) packed in a jar

We are creating a portlet with a persistence layer (JPA/EclipseLink) delivered by a jar (a second Maven module).
parent
|---portlet-module (war)
|---persistence-module (jar)
The idea was to completely encapsulate the persistence layer and deliver only a simple service class with all CRUD methods. The portlet (or whatever comes in future) should not have to know anything about JPA. The portlet must only supply the database link (URL, user, ...).
But this does not seem to work, because the portlet needs its own persistence.xml including all entity classes (fully qualified) - is this correct?
Or is there a way to deliver the persistence layer together with its persistence unit (complete encapsulation)?
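For reference, a persistence unit is discovered from any jar on the classpath that carries a META-INF/persistence.xml, so one option to try is shipping the descriptor inside the persistence-module jar itself. A minimal sketch, with invented unit and entity names:

```xml
<!-- persistence-module/src/main/resources/META-INF/persistence.xml -->
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="crud-unit" transaction-type="RESOURCE_LOCAL">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <class>com.example.persistence.SomeEntity</class>
    <!-- The database URL/user can still be passed in from the portlet at
         runtime via Persistence.createEntityManagerFactory("crud-unit", props) -->
  </persistence-unit>
</persistence>
```

With this layout the portlet war only depends on the jar and never sees JPA types; the service class inside the jar creates the EntityManagerFactory from the properties the portlet hands over.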

Correct way of using/testing event service in Eclipse E4 RCP

Let me ask two coupled questions that might boil down to one about good application design ;-)
What is the best practice for using event based communication in an e4 RCP application?
How can I write simple unit tests (using JUnit) for classes that send/receive events using dependency injection and IEventBroker ?
Let’s be more concrete: say I am developing an Eclipse e4 RCP application consisting of several plugins that need to communicate. For communication I want to use the event service provided by org.eclipse.e4.core.services.events.IEventBroker so my plugins stay loosely coupled. I use dependency injection to inject the event broker to a class that dispatches events:
@Inject static IEventBroker broker;
private void sendEvent() {
    broker.post(MyEventConstants.SOME_EVENT, payload);
}
On the receiver side, I have a method like:
@Inject
@Optional
private void receiveEvent(@UIEventTopic(MyEventConstants.SOME_EVENT) Object payload)
Now the questions:
In order for IEventBroker to be successfully injected, my class needs access to the current IEclipseContext. Most of my classes using the event service are not referenced by the e4 application model, so I have to manually inject the context on instantiation using e.g. ContextInjectionFactory.inject(myEventSendingObject, context);
This approach works, but I find myself passing a lot of context around to wherever I use the event service. Is this really the correct approach to event-based communication across an E4 application?
How can I easily write JUnit tests for a class that uses the event service (either as a sender or a receiver)? Obviously, none of the above annotations work in isolation, since there is no context available. I understand everyone is convinced that dependency injection simplifies testability. But does this also apply to injected services like the IEventBroker?
This article describes creating your own IEclipseContext to include DI in tests. I am not sure whether this would resolve my second issue, and I also hesitate to run all my tests as JUnit plug-in tests, as it appears impractical to fire up the PDE for each unit test. Maybe I just misunderstand the approach.
This article speaks about "simply mocking IEventBroker". Yes, that would be great! Unfortunately, I couldn't find any information on how this can be achieved.
All this makes me wonder whether I am still on the right track, or if this is already a case of bad design. And if so, how would you go about redesigning it? Move all event-related actions into dedicated event sender/receiver classes, or into a dedicated plugin?
Actually, running a JUnit plug-in test is not that expensive. You can configure the launch configuration to run in headless mode, so the only thing loaded is a lightweight PDE without the workbench. The same happens when you run a headless build with, for example, Tycho: Surefire launches your test bundle as a headless plug-in test by default.
The advantage over isolated unit tests is that you can access your plug-in's resources and, most importantly, use dependency injection. If you want to mock an injected object, you have to run a plug-in test so that you can use InjectorFactory.
This is how you would go about mocking the event service: IEventBroker is an interface, so the only thing you need to do is write a mock implementation for it:
public class IEventBrokerMock implements IEventBroker {
    // implement the IEventBroker methods (send, post, subscribe, unsubscribe)
    // with no-op bodies, or record the calls for later assertions
}
In your test method you would have something like
InjectorFactory.getDefault().addBinding(IEventBroker.class).implementedBy(IEventBrokerMock.class);
ClassUnderTest myObject = InjectorFactory.getDefault().make(ClassUnderTest.class, null);
If you want to work with a context, the test method would instead contain:
IEclipseContext context = EclipseContextFactory.create();
context.set(IEventBroker.class, new IEventBrokerMock());
ClassUnderTest myObject = ContextInjectionFactory.make(ClassUnderTest.class, context);
If you run this as JUnit plug-in test your object will have the mocked event service injected.
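If pulling in the Eclipse context feels heavy for a plain unit test, the recording-fake idea behind such a mock can be sketched without any Eclipse types. The class below is a hypothetical stand-in (a drop-in replacement for E4 would have to implement org.eclipse.e4.core.services.events.IEventBroker); it mirrors only the post() signature the code under test calls, and the test asserts on what was posted.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical recording fake: instead of dispatching events,
// it stores them so a test can inspect topic and payload afterwards.
public class RecordingBroker {

    public static final class Event {
        public final String topic;
        public final Object data;
        Event(String topic, Object data) {
            this.topic = topic;
            this.data = data;
        }
    }

    private final List<Event> posted = new ArrayList<>();

    public boolean post(String topic, Object data) {
        posted.add(new Event(topic, data)); // record instead of dispatching
        return true;
    }

    public List<Event> posted() {
        return posted;
    }
}
```

A test would hand the fake to the class under test (directly, or registered in an IEclipseContext as shown above) and then assert on posted().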
For testing, instead of DI, I use eventBroker = new org.eclipse.e4.ui.services.internal.events.EventBroker(); to get an event broker object; it works OK.