AEM OSGi Sling Model @PostConstruct never called - annotations

I am having an issue with the javax.annotation.PostConstruct annotation in my Sling model.
My HTML file that uses my model:
<div data-sly-use="com.company.platform.component.general.textblockvalidator.TextBlockValidatorModel" data-sly-unwrap />
Model:
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.models.annotations.Model;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import javax.annotation.PostConstruct;
import javax.inject.Inject;
@Model(adaptables = org.apache.sling.api.resource.Resource.class)
public class TextBlockValidatorModel {

    @PostConstruct
    private void init() {
        System.out.println();
    }

    public String getValidate() {
        return "This works";
    }
}
I can call the getter from my Sightly file, but I never seem to enter my @PostConstruct init() method.
IntelliJ does give me a warning on the annotation, but I am not sure what I am doing wrong.
Sling-Model-Packages:
<Sling-Model-Packages>
...
com.asadventure.platform.component
...
</Sling-Model-Packages>
Any ideas? Thanks in advance!

First, check that your Sling Model has been registered correctly by looking for your class on this page:
http://localhost:4502/system/console/status-adapters
If it isn't listed there, you most likely have not specified the <Sling-Model-Packages> property of the maven-bundle-plugin.
I would also try changing the access modifier for the init method to protected or public.
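If that header is missing, a typical maven-bundle-plugin configuration declaring it might look roughly like the snippet below (the package name is taken from the question's model; the surrounding plugin configuration is illustrative, not from the original post):
<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <!-- packages listed here are scanned for Sling Models when the bundle is installed -->
            <Sling-Model-Packages>
                com.company.platform.component
            </Sling-Model-Packages>
        </instructions>
    </configuration>
</plugin>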
UPDATE:
I've created a sample project for AEM 6.1 demonstrating the use of the @PostConstruct annotation.
The Sling Model class:
@Model(adaptables = Resource.class)
public class SampleModel {

    private boolean postContructCalled = false;

    @PostConstruct
    public void init() {
        this.postContructCalled = true;
    }

    public boolean isPostContructCalled() {
        return this.postContructCalled;
    }
}
And a simple HTL component:
<sly data-sly-use.model="com.github.mickleroy.models.SampleModel">
    <p>@PostConstruct was called: ${model.postContructCalled}</p>
</sly>
Please take note of the use of the data-sly-use directive - you need to provide a model name.
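Applied to the model from the question, the usage would look something like this (the identifier name is arbitrary):
<div data-sly-use.validator="com.company.platform.component.general.textblockvalidator.TextBlockValidatorModel">
    <p>${validator.validate}</p>
</div>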
Also, as I mentioned in the comments, you should not be adding javax.annotation-api as a dependency as it is part of the JDK.
Full source available here: https://github.com/mickleroy/sling-models-sample

For anyone still looking for an answer that the above did not resolve: the issue for me was that I had not included the javax.annotation-api dependency:
<dependency>
    <groupId>javax.annotation</groupId>
    <artifactId>javax.annotation-api</artifactId>
    <version>1.3.2</version>
    <scope>provided</scope>
</dependency>
Once I added this to the parent pom and included it in the core pom, @PostConstruct worked just fine.
Update:
The reason I had to do this was my inclusion of jersey-client, which requires its own version of javax.annotation-api. Since the first version of this answer, I have found that I needed to separate jersey-client and its dependencies into their own bundle project. This allows both Jersey and @PostConstruct to work at the same time.
Just adding the dependency as shown above caused clashes between Jersey's version of javax.annotation-api and AEM's (Felix's) version of it.

My guess is that your class is being initialized by the Java Use provider instead of being adapted from the current resource or request.
In Sightly, when you use data-sly-use, it tries several things to obtain an object (I can't recall the exact order):
get an OSGi service with that name
use the AEM Java Use API
adapt the current request/resource into your model class (your desired case)
simply treat the class as a plain Java POJO and instantiate it (post-construct is not called, injection won't be done)
I've seen several cases where injection or the post-construct method of a Sling Model fails and Sightly falls back to the Java Use provider. When that happens, you get exactly what you describe: an object of the right class, but no injection has happened and no post-construct method was called.
My recommendation is to check the logs carefully; you should see an error if this is the case. You can also install the Scripting HTL Sling Models Use Provider, which will propagate any error raised while creating the Sling Model, making the problem obvious.
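If you want to check the adaptation outside of HTL, a quick sanity test (illustrative code, not part of the original answer; the class name here is made up) is to adapt a resource to the model class yourself; a null result means the adapter factory rejected the class, which is exactly the case where HTL silently falls back to the plain POJO provider:
import org.apache.sling.api.resource.Resource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.company.platform.component.general.textblockvalidator.TextBlockValidatorModel;

public final class AdaptationCheck {

    private static final Logger LOG = LoggerFactory.getLogger(AdaptationCheck.class);

    // Returns true if the Sling Models adapter factory accepts the resource.
    public static boolean canAdapt(final Resource resource) {
        final TextBlockValidatorModel model = resource.adaptTo(TextBlockValidatorModel.class);
        if (model == null) {
            LOG.error("Could not adapt {} to TextBlockValidatorModel", resource.getPath());
            return false;
        }
        return true;
    }
}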

Related

VaadinServiceInitListener not picked up in a Quarkus app

I have a Quarkus application using current versions of Vaadin Flow and Quarkus (23.2.4 and 2.13.1.Final). I want to have a VaadinServiceInitListener to check access annotations on the views (@RolesAllowed(...)) using AccessAnnotationChecker. I believe annotating the implementation with @VaadinServiceEnabled
should fix this, but I need to register it in META-INF/services/com.vaadin.flow.server.VaadinServiceInitListener to have it activated. That is how to do it when not using a dependency injection framework. Then everything works as expected and I can use AccessAnnotationChecker to see if the user has access to that view, on BeforeEnterEvent.
I also notice the message "Can't find any @VaadinServiceScoped bean implementing 'I18NProvider'. Cannot use CDI beans for I18N, falling back to the default behavior." on startup. Strangely, implementing I18NProvider in a class and annotating it with @VaadinServiceEnabled and @VaadinServiceScoped makes that message go away, i.e. it is recognized by CDI.
Why isn't my VaadinServiceInitListener implementation recognized? Currently it is annotated with
@VaadinServiceEnabled
@VaadinServiceScoped
@Unremovable
My pom.xml includes
vaadin-quarkus-extension,
quarkus-oidc,
quarkus-keycloak-authorization,
vaadin-jandex
Instead of using a listener, you can use a CDI event.
Quarkus's dependency injection solution is based on CDI, so you can use the same events. Here's an example
import javax.enterprise.event.Observes;

import com.vaadin.flow.server.ServiceInitEvent;
import com.vaadin.flow.server.communication.IndexHtmlResponse;

public class BootstrapCustomizer {

    private void onServiceInit(@Observes ServiceInitEvent serviceInitEvent) {
        serviceInitEvent.addIndexHtmlRequestListener(this::modifyBootstrapPage);
    }

    private void modifyBootstrapPage(IndexHtmlResponse response) {
        response.getDocument().body().append("<p>By CDI add-on</p>");
    }
}
More information here https://vaadin.com/docs/latest/integrations/cdi/events
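To connect this back to the access-check goal in the question, the same observed ServiceInitEvent could register a BeforeEnter check. The sketch below is illustrative only (not from the original answer); it assumes AccessAnnotationChecker's hasAccess(Class) overload and reroutes to the standard NotFoundException error view, and the class name is made up:
import javax.enterprise.event.Observes;

import com.vaadin.flow.router.NotFoundException;
import com.vaadin.flow.server.ServiceInitEvent;
import com.vaadin.flow.server.auth.AccessAnnotationChecker;

public class AccessCheckCustomizer {

    private final AccessAnnotationChecker accessChecker = new AccessAnnotationChecker();

    // Observe the CDI ServiceInitEvent and hook a navigation check into every new UI.
    private void onServiceInit(@Observes ServiceInitEvent serviceInitEvent) {
        serviceInitEvent.getSource().addUIInitListener(uiEvent ->
                uiEvent.getUI().addBeforeEnterListener(beforeEnter -> {
                    // Evaluates @RolesAllowed / @PermitAll etc. on the target view.
                    if (!accessChecker.hasAccess(beforeEnter.getNavigationTarget())) {
                        beforeEnter.rerouteToError(NotFoundException.class);
                    }
                }));
    }
}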

NodeEntity not recognized when SessionFactory is created inside library

I am not sure whether my problem is a missing feature or whether I am using the neo4j-ogm framework incorrectly. Since you are only supposed to post bugs or feature requests in the project's GitHub repository, I would like to ask my question here first.
Please be aware, that I shortened my real code to just give you an idea what I try to achieve.
My example application consists of two modules:
Module a contains a class to create neo4j-ogm sessions. There I read the configuration from a property file and create the session factory, passing the packages to scan as parameters:
public Neo4jController(final String ... packagesWithNodes) {
    ConfigurationSource props = new ClasspathConfigurationSource("neo4jogm.properties");
    Configuration configuration = new Configuration.Builder(props).build();
    SessionFactory sessionFactory = new SessionFactory(configuration, packagesWithNodes);
    ...
}
Module b includes module a as a Maven dependency and then tries to persist a NodeEntity via the Session object. The Session is created correctly, but the NodeEntity in the passed package is not recognized.
public MyObject create(final MyObject newObject) {
    Neo4jController neo4jController = new Neo4jController("my.example.package");
    neo4jController.getNeo4jSession().save(newObject);
    ...
}
This always results in an IllegalArgumentException:
Class class my.example.package.MyObject is not a valid entity class. Please check the entity mapping.
This is what my NodeEntity in module b looks like
package my.example.package;

import de.bitandgo.workflow.common.model.Neo4jNode;
import org.neo4j.ogm.annotation.NodeEntity;
import org.neo4j.ogm.annotation.Property;

@NodeEntity(label = "MyObject")
public class MyObject extends Neo4jNode {

    @Property
    public String name;
}
The base class contains, among others:
public abstract class Neo4jNode {

    @Id
    @GeneratedValue
    private Long id;
}
I know that neo4j-ogm internally uses ClassGraph for scanning the classpath. The corresponding call in the framework, with its configuration, looks like this:
private static List<String> useClassgraph(String[] packagesOrClasses) {
    // .enableExternalClasses() is not needed, as the super classes are loaded anyway when the class is loaded.
    try (ScanResult scanResult = new ClassGraph()
            .ignoreClassVisibility()
            .acceptPackages(packagesOrClasses)
            .acceptClasses(packagesOrClasses)
            .scan()) {
        return scanResult.getAllClasses().getNames();
    }
}
Via debugging I verified that the desired package my.example.package was passed as an argument to the ClassGraph object.
Also, the Neo4jController in module a is able to load the class. I have tested this with the following code:
try {
    Class<?> aClass = Class.forName("my.example.package.MyObject");
    System.out.println(aClass); // prints "class my.example.package.MyObject"
} catch (ClassNotFoundException e) {}
My question is whether neo4j-ogm/classgraph may not be able to find the class at all due to the way neo4j-ogm uses classgraph, or if I am doing something wrong in the usage.
Update 1:
The problem seems to be related to my application running inside an application server (TomEE). When I deploy my application, ClassGraph is not able to find the configured package.
But executing the exact same scan from a plain main method finds the expected class.
public static void main(String[] args) {
    try (ScanResult scanResult = new ClassGraph()
            .ignoreClassVisibility()
            .acceptPackages("my.example.package")
            .acceptClasses()
            .scan()) {
        System.out.println(scanResult.getAllClasses().getNames());
    }
}
So I assume the problem is related to the compiled classes not being visible on the "normal" classpath when the application is deployed into an application server?
I appreciate any input.
Thanks a lot
The problem was indeed related to the classgraph library, which neo4j-ogm uses for scanning the classpath for model classes.
classgraph supports a wide variety of classloaders, but TomEE's wasn't included yet. This has been fixed as of version 4.8.107, so if you run into a similar problem, check whether classgraph supports the classloader of your application server.
You can easily override neo4j-ogm's classgraph dependency by specifying the version you need in your pom.xml.
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-ogm-core</artifactId>
    <version>${neo4j-version}</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-ogm-bolt-driver</artifactId>
    <version>${neo4j-version}</version>
    <scope>runtime</scope>
</dependency>
<!-- overriding neo4j-ogm's internal classgraph dependency because the version it uses does not support Apache TomEE -->
<dependency>
    <groupId>io.github.classgraph</groupId>
    <artifactId>classgraph</artifactId>
    <version>4.8.107</version>
</dependency>
The problem was not related to my multi-module Maven structure.

Using Guice/Peaberry for OSGi declarative services

I want to solve the following problem and need advice on what the best solution is.
I have a bundle A in which a service interface X is defined. A bundle B provides a service implementation of X and contributes the implementation to the tool. A and B use Google Guice and Peaberry to configure the setup of the objects.
There are two possibilities I can use to contribute the service implementation:
Using an Eclipse extension:
In this solution I can use Peaberry's GuiceExtensionFactory mechanism to create the service implementation with Guice and can therefore inject whatever the implementation needs. The disadvantage is that in the bundle defining the extension point, I need boilerplate code to resolve the extensions, because there is, to my knowledge, no way to get the extensions injected into the class that uses them.
This looks like this:
<extension point="A.service.X">
    <xservice
        ...
        class="org.ops4j.peaberry.eclipse.GuiceExtensionFactory:B.XImpl"
        .../>
</extension>
<extension
    point="org.ops4j.peaberry.eclipse.modules">
    <module
        class="B.XModule">
    </module>
</extension>
but I need boilerplate code like this:
private List<X> getRegisteredX() {
    final List<X> ximpls = new ArrayList<>();
    for (final IConfigurationElement e : Platform.getExtensionRegistry().getConfigurationElementsFor(X_EXTENSION_POINT_ID)) {
        try {
            final Object object = e.createExecutableExtension("class"); //$NON-NLS-1$
            if (object instanceof X) {
                ximpls.add((X) object);
            }
        } catch (final CoreException ex) {
            // Log
        }
    }
    return ximpls;
}
Using an OSGi service:
My main problem here is ensuring that the service is registered. I want the bundle to be loaded lazily, so at least one access to one of the bundle's classes is required. Registering the service programmatically using Peaberry is an issue, because nobody ever asks for a class of the bundle. The solution would be to provide the service as a declarative service, but I do not know a way to create the service implementation such that I can use Guice to inject the required objects.
So I have some questions:
Is there something I am not aware of that implements the code needed to read the extensions of an extension point generically and allows the extensions to be injected into the class that uses them?
Is there a way to ensure that the service is provided even if it is registered using the standard Peaberry mechanism, i.e. that the bundle is activated when the service is requested?
Is there something like the GuiceExtensionFactory for declarative services, so that the creation of the service implementation can be done by the bundle's injector?
Something that looks like:
<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0" name="Ximpl">
    <implementation class="some.generic.guiceaware.ServiceFactory:B.Ximpl"/>
    <service>
        <provide interface="A.X"/>
    </service>
</scr:component>
To summarize, I want a service implementation created by Guice, and I want the service implementations simply injected into the classes that use the service, without extensive boilerplate code. Does anybody have a solution for that?
Sorry to ask, but I have searched the web for quite a while and so far have not found a solution.
Thanks and best regards,
Lars
I found a solution, but since it took a lot of experimenting and thinking to get there, I thought I would share it here. Of the options I mentioned in my post, my solution uses the first one, that is, Eclipse extension points and extensions. To use Guice in the context of extension points, there are two aspects to consider:
Providing an extension that is created by a Guice injector
This is explained very well here: https://code.google.com/p/peaberry/wiki/GuiceExtensionFactory. One remark from my side: the extension object is created by an injector inside the GuiceExtensionFactory, so it has its own context, which needs to be configured by the module given as an additional extension to the factory. This can become an issue if you have other needs that require creating the injector in the bundle yourself.
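For illustration, the module contributed as an additional extension to the factory (like B.XModule in the question's XML) is an ordinary Guice module; a minimal sketch, with placeholder dependency names, could look like this:
import com.google.inject.AbstractModule;

// Registered via the org.ops4j.peaberry.eclipse.modules extension point; the
// GuiceExtensionFactory uses it to configure the injector that creates the extension object.
public class XModule extends AbstractModule {

    @Override
    protected void configure() {
        // Bind whatever the extension implementation needs injected
        // (SomeDependency / SomeDependencyImpl are placeholders).
        bind(SomeDependency.class).to(SomeDependencyImpl.class);
    }
}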
Defining an extension point so that the extensions are simply injected into the classes that use them
The first thing to do is to define the extension point schema file as usual. It should reference an interface that the extensions have to implement.
The ID of the extension point has to be connected to the interface that is provided by the extensions and injected by Guice/Peaberry. For this, Peaberry provides an annotation to put on the interface:
import org.ops4j.peaberry.eclipse.ExtensionBean;

@ExtensionBean("injected.extension.point.id")
public interface InjectedInterface {
    ...
}
On some web pages you will also find the information that if the ID is equal to the fully qualified name of the interface, it can be found directly without the annotation, but I did not try this out.
To enable the injection, you have to configure the Guice injector creation in two ways. First, Peaberry's EclipseRegistry object has to be set as the service registry. Second, the extension implementations have to be bound to a provided service.
The injector creation then looks like this:
import static org.ops4j.peaberry.Peaberry.*;
import static org.ops4j.peaberry.util.TypeLiterals.iterable;

import org.osgi.framework.BundleContext;
import org.ops4j.peaberry.eclipse.EclipseRegistry;
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

void initializer() {
    Injector injector = Guice.createInjector(
            osgiModule(context, EclipseRegistry.eclipseRegistry()),
            new AbstractModule() {
                @Override
                protected void configure() {
                    bind(iterable(InjectedInterface.class)).toProvider(service(InjectedInterface.class).multiple());
                }
            });
}
The extension implementations can then simply be injected like this:
private Iterable<InjectedInterface> registeredExtensions;

@Inject
void setSolvers(final Iterable<InjectedInterface> extensions) {
    registeredExtensions = extensions;
}
In this way it is possible to have injected extensions whose implementation classes themselves use Guice to get their dependencies injected.
I have not found a solution using OSGi services so far, but perhaps someone has an idea.
Best regards,
Lars

Using an interceptor with Tapestry Resteasy

I have a resource class and I'd like to be able to check an authentication token before the resource method is called, thus avoiding having to pass the token directly into the Resource method.
I have added the following to web.xml:
<context-param>
    <param-name>resteasy.providers</param-name>
    <param-value>com.michael.services.interceptors.AuthorisationInterceptorImpl</param-value>
</context-param>
My interceptor is implemented as follows:
@Provider
public class AuthorisationInterceptorImpl implements javax.ws.rs.container.ContainerRequestFilter {

    @Inject
    private ApiAuthenticationService apiAuthenticationService;

    @Override
    public void filter(ContainerRequestContext requestContext) {
        // Code to verify token
    }
}
The filter method is being called before the methods in my resource class; however, the apiAuthenticationService is not being injected and is null when I attempt to call its methods.
I'm using Tapestry 5.3.7, Tapestry-Resteasy 0.3.2 and Resteasy 2.3.4.Final.
Can this be done?
I don't think this will work, based on a quick glance at the tapestry-resteasy code.
The @Inject annotation is part of tapestry-ioc; if a class is not instantiated by Tapestry, the @Inject annotation is not honored.
Filters defined in web.xml are instantiated by the servlet container (Jetty, Tomcat, etc.), which has no special knowledge of Tapestry or its annotations.
I think you will be better off contributing a filter into Tapestry's HttpServletRequestHandler or RequestHandler pipelines (see their JavaDoc). I'm not sure how you can gain access to the ContainerRequestContext, however.
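As a rough illustration (not from the original answer), a contribution to the HttpServletRequestHandler pipeline in a Tapestry module class might look like the sketch below; the module name, the isValid() method and the header name are made up, only the pipeline API itself is real, and the import of the question's ApiAuthenticationService is omitted:
import java.io.IOException;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.tapestry5.ioc.OrderedConfiguration;
import org.apache.tapestry5.services.HttpServletRequestFilter;
import org.apache.tapestry5.services.HttpServletRequestHandler;

public class AppModule {

    // Adds a filter that runs for every HTTP request, before the RESTEasy resources are reached.
    public static void contributeHttpServletRequestHandler(
            OrderedConfiguration<HttpServletRequestFilter> configuration,
            final ApiAuthenticationService apiAuthenticationService) {

        configuration.add("ApiTokenCheck", new HttpServletRequestFilter() {
            @Override
            public boolean service(HttpServletRequest request, HttpServletResponse response,
                                   HttpServletRequestHandler handler) throws IOException {
                // Hypothetical token check; abort or continue down the pipeline.
                if (!apiAuthenticationService.isValid(request.getHeader("X-Auth-Token"))) {
                    response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
                    return true; // request handled, stop here
                }
                return handler.service(request, response);
            }
        });
    }
}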
With tapestry-resteasy you don't need to define the provider in the web.xml file.
If you want to use Tapestry's autobuild mechanism, just move your provider to the .rest package together with your resources.
If you don't want to use autodiscovery/autobuild, just contribute it to javax.ws.rs.core.Application:
@Contribute(javax.ws.rs.core.Application.class)
public static void configureRestProviders(Configuration<Object> singletons, AuthorisationInterceptor authorisationInterceptor) {
    singletons.add(authorisationInterceptor);
}
Even though you can use REST providers for security, it is probably a good idea to take Howard's advice and implement your own filter in the Tapestry pipeline.
BTW, you can also give tapestry-security a try :)

Servlet 3.0 annotations in conjunction with Guice

I am attempting to update a legacy Guice application, and I was wondering if there is a preferred way of doing things when taking Servlet 3.0 annotations into consideration. For example, my application has a filter, FooFilter, which is defined in the Guice module's configureServlets() method as follows:
Map<String, String> fooParams = new HashMap<String, String>();
fooParams.put("someParam", "parameter information");
filter("/foo.jsp","/foo/*").through(com.example.filter.FooFilter.class, fooParams);
Is the above binding still necessary, or will it interfere with the following use of the @WebFilter Servlet 3.0 annotation:
@Singleton
@WebFilter(
    filterName = "FooFilter",
    urlPatterns = {"/foo.jsp", "/foo/*"},
    initParams = {
        @WebInitParam(name = "foo", value = "Hello "),
        @WebInitParam(name = "bar", value = " World!")
    })
public class FooFilter implements Filter {
    etc....
Which method is now preferred? Will they mess with each other?
I just made a quick draft of what Servlet 3.0 support could look like. There could be a more elegant way, namely just calling filter(<class annotated with @WebFilter>) in the configureServlets() method, but that would require an update to the guice-servlet module itself, which is quite hard to distribute.
What I did is a project on GitHub: https://github.com/xbaran/guice-servlet3
All you need to do is download and build it. It is created on top of Guice 3.0 and works like this:
new Servlet3Module() {
    @Override
    protected void configureServlets3() {
        scanFilters(FooFilter.class.getPackage());
    }
};
The Servlet3Module extends ServletModule and contains a scanFilters method that takes a package argument. This method scans the provided package on your classpath and tries to register every class annotated with @WebFilter via the filter() method, roughly as sketched below.
The scanning idea is based on the configuration system of Sitebricks (a Guice web framework created by Dhanji R. Prasanna).
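For illustration only (this is not the code from the linked project), translating a single @WebFilter-annotated class into a Guice binding inside a ServletModule might look roughly like this; a real implementation would replace the hard-coded FooFilter with the result of a classpath scan, and the module name is made up:
import java.util.HashMap;
import java.util.Map;

import javax.servlet.Filter;
import javax.servlet.annotation.WebFilter;
import javax.servlet.annotation.WebInitParam;

import com.example.filter.FooFilter;
import com.google.inject.servlet.ServletModule;

public class AnnotationAwareModule extends ServletModule {

    @Override
    protected void configureServlets() {
        registerWebFilter(FooFilter.class);
    }

    // Reads the @WebFilter annotation and translates it into a Guice filter binding.
    private void registerWebFilter(Class<? extends Filter> filterClass) {
        WebFilter webFilter = filterClass.getAnnotation(WebFilter.class);
        if (webFilter == null || webFilter.urlPatterns().length == 0) {
            return;
        }
        Map<String, String> initParams = new HashMap<String, String>();
        for (WebInitParam param : webFilter.initParams()) {
            initParams.put(param.name(), param.value());
        }
        String[] patterns = webFilter.urlPatterns();
        String[] more = new String[patterns.length - 1];
        System.arraycopy(patterns, 1, more, 0, more.length);
        filter(patterns[0], more).through(filterClass, initParams);
    }
}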
Honestly, I just made a draft and never tried whether it works, but hopefully it will. If you have any problem or question, just let me know.
PS: Support for servlets, listeners and so on could be added too, if you wish.