Assume I have an EJB defining two views:
Local business,
Remote business.
Both interfaces share the same method signatures, so it's like:
public interface MyBusinessCommon {
    void myMethod(Object o);
}

@Local
public interface MyBusinessLocal extends MyBusinessCommon { }

@Remote
public interface MyBusinessRemote extends MyBusinessCommon { }

@Stateless
public class MyBusinessBean implements MyBusinessLocal, MyBusinessRemote {
    public void myMethod(Object o) {
        // ...
    }
}
Is there a way to figure out which EJB view was invoked from within the EJB itself (or its interceptor)?
Let's say I would like to perform different authorization procedures depending on the view used: remote calls should be more constrained, while local calls shouldn't.
I can invoke SessionContext#getInvokedBusinessInterface(), but this only gives me the Class object, not its EJB semantics. Plain reflection to check for the presence of annotations on the interfaces or the bean is not enough either (what about views defined in ejb-jar.xml?).
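To make the limitation concrete, here is a minimal sketch of the spec-level approach, using the interfaces from the example above:

import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;

@Stateless
public class MyBusinessBean implements MyBusinessLocal, MyBusinessRemote {

    @Resource
    private SessionContext sessionContext;

    public void myMethod(Object o) {
        // Returns the interface the client used, e.g. MyBusinessLocal.class,
        // but only as a plain Class object with no local/remote semantics.
        Class<?> invoked = sessionContext.getInvokedBusinessInterface();

        // Checking annotations only covers annotation-based views; a view
        // declared in ejb-jar.xml would not be detected this way.
        boolean looksLocal = invoked.isAnnotationPresent(javax.ejb.Local.class);
    }
}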
I doubt it is possible using straight EJB specification but perhaps there's something I missed.
If not, is it possible to get this information from the inners of an application server? (let's consider only JBoss AS 7.x, Glassfish 3.x and TomEE 1.5.1).
It's just like Arjan said: it is impossible to do by following the EJB spec alone.
However, in Glassfish it's quite simple to do.
All EJB interceptor methods accept an InvocationContext parameter. The InvocationContext implementation in Glassfish is in fact the com.sun.ejb.EjbInvocation class. It has an isLocal field that tells you whether it is intercepting a local business call (and an isRemote field for remote business calls).
You can use it e.g. as follows:
import com.sun.ejb.EjbInvocation;

import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;

@Interceptor
public class CallSourceAwareInterceptor {

    @AroundInvoke
    public Object aroundInvoke(InvocationContext ictx) throws Exception {
        boolean isLocalCall = isLocalEJBCall(ictx);
        return ictx.proceed();
    }

    boolean isLocalEJBCall(final InvocationContext ictx) {
        if (ictx instanceof EjbInvocation) {
            return ((EjbInvocation) ictx).isLocal;
        } else {
            throw new IllegalArgumentException("Unknown InvocationContext implementation.");
        }
    }
}
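To actually apply the interceptor, it still has to be bound to the bean, for example with the standard @Interceptors annotation (a minimal sketch using the bean from the question; binding via ejb-jar.xml works as well):

import javax.ejb.Stateless;
import javax.interceptor.Interceptors;

@Stateless
@Interceptors(CallSourceAwareInterceptor.class)
public class MyBusinessBean implements MyBusinessLocal, MyBusinessRemote {
    public void myMethod(Object o) {
        // ...
    }
}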
To access this internal Glassfish class (EjbInvocation) you need to add the following Maven dependency:
<dependency>
    <groupId>org.glassfish.main.ejb</groupId>
    <artifactId>ejb-container</artifactId>
    <version>4.0.1-b02</version>
    <scope>provided</scope>
</dependency>
And you might need to add the following repository to get access to this artifact:
<repositories>
    <repository>
        <id>maven-promoted</id>
        <url>https://maven.java.net/content/groups/promoted/</url>
    </repository>
</repositories>
I did some quick research (based on Richard's suggestion regarding the Invocation object) into how to achieve the same in JBoss, but couldn't find an answer...
I have a Quarkus application using current versions of Vaadin Flow and Quarkus (23.2.4 and 2.13.1.Final). I want a VaadinServiceInitListener that checks access annotations on the views (@RolesAllowed(...)) using AccessAnnotationChecker. I believe annotating the implementation with @VaadinServiceEnabled should be enough, but I have to register it in META-INF/services/com.vaadin.flow.server.VaadinServiceInitListener to have it activated; that is how to do it when not using a dependency injection framework. Then everything works as expected and I can use AccessAnnotationChecker on BeforeEnterEvent to see whether the user has access to that view.
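For reference, that ServiceLoader registration is a plain text file on the classpath, src/main/resources/META-INF/services/com.vaadin.flow.server.VaadinServiceInitListener, containing the fully-qualified name of the implementation (the class name below is a placeholder):

com.example.security.MyServiceInitListener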
I also notice the message "Can't find any @VaadinServiceScoped bean implementing 'I18NProvider'. Cannot use CDI beans for I18N, falling back to the default behavior." on startup. Strangely, implementing I18NProvider in a class and annotating it with @VaadinServiceEnabled and @VaadinServiceScoped makes that message go away, i.e. it is recognized by CDI.
Why isn't my VaadinServiceInitListener implementation recognized? Currently it is annotated with
@VaadinServiceEnabled
@VaadinServiceScoped
@Unremovable
My pom.xml includes the following artifacts (a sketch of the corresponding dependency block follows this list):
vaadin-quarkus-extension,
quarkus-oidc,
quarkus-keycloak-authorization,
vaadin-jandex
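Sketched as a dependency block, that corresponds to something like the following (the groupIds are my assumption, and versions are assumed to be managed by the Vaadin and Quarkus BOMs):

<dependency>
    <groupId>com.vaadin</groupId>
    <artifactId>vaadin-quarkus-extension</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-oidc</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-keycloak-authorization</artifactId>
</dependency>
<dependency>
    <groupId>com.vaadin</groupId>
    <artifactId>vaadin-jandex</artifactId>
</dependency>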
Instead of using a listener, you can use a CDI event.
Quarkus's dependency injection solution is based on CDI, so you can use the same events. Here's an example:
public class BootstrapCustomizer {

    private void onServiceInit(@Observes ServiceInitEvent serviceInitEvent) {
        serviceInitEvent.addIndexHtmlRequestListener(this::modifyBootstrapPage);
    }

    private void modifyBootstrapPage(IndexHtmlResponse response) {
        response.getDocument().body().append("<p>By CDI add-on</p>");
    }
}
More information is available here: https://vaadin.com/docs/latest/integrations/cdi/events
I am not sure if my problem is a non-existent feature or I am using the neo4j-ogm framework incorrectly. Since you are only supposed to post bugs or feature requests in the project's GitHub repository, I would like to ask my question here first.
Please be aware that I have shortened my real code to give you an idea of what I am trying to achieve.
My example application consists of two modules:
Module a contains a class to create neo4j-ogm sessions. There I read the configuration from a property file and create the session factory, passing the packages to scan as parameters:
public Neo4jController(final String... packagesWithNodes) {
    ConfigurationSource props = new ClasspathConfigurationSource("neo4jogm.properties");
    Configuration configuration = new Configuration.Builder(props).build();
    SessionFactory sessionFactory = new SessionFactory(configuration, packagesWithNodes);
    ...
}
Module b includes module a as a Maven dependency and then tries to persist a NodeEntity via the Session object. The Session is created correctly, but the NodeEntity in the passed package is not recognized.
public MyObject create(final MyObject newObject) {
    Neo4jController neo4jController = new Neo4jController("my.example.package");
    neo4jController.getNeo4jSession().save(newObject);
    ...
}
This always results in an IllegalArgumentException:
Class class my.example.package.MyObject is not a valid entity class. Please check the entity mapping.
This is what my NodeEntity in module b looks like:
package my.example.package;
import de.bitandgo.workflow.common.model.Neo4jNode;
import org.neo4j.ogm.annotation.NodeEntity;
import org.neo4j.ogm.annotation.Property;
@NodeEntity(label = "MyObject")
public class MyObject extends Neo4jNode {

    @Property
    public String name;
}
The base class contains, among others:
public abstract class Neo4jNode {

    @Id
    @GeneratedValue
    private Long id;
}
I know that neo4j-ogm uses ClassGraph internally for scanning the classpath. The corresponding call in the framework, with its preset configuration, looks like this:
private static List<String> useClassgraph(String[] packagesOrClasses) {
    // .enableExternalClasses() is not needed, as the super classes are loaded anyway when the class is loaded.
    try (ScanResult scanResult = new ClassGraph()
            .ignoreClassVisibility()
            .acceptPackages(packagesOrClasses)
            .acceptClasses(packagesOrClasses)
            .scan()) {
        return scanResult.getAllClasses().getNames();
    }
}
Via debugging I verified that the desired package my.example.package was passed as an argument to the ClassGraph object.
Also, the Neo4jController in module a is able to load the class. I have tested this with the following code:
try {
    Class<?> aClass = Class.forName("my.example.package.MyObject");
    System.out.println(aClass); // prints "class my.example.package.MyObject"
} catch (ClassNotFoundException e) {}
My question is whether ClassGraph may not be able to find the class at all due to the way neo4j-ogm uses it, or whether I am doing something wrong in the usage.
Update 1:
The problem seems to be related to my application running inside an application server (TomEE). When I deploy my application, ClassGraph is not able to find the configured package.
But executing the exact same scan from a plain main method finds the expected class:
public static void main(String[] args) {
    try (ScanResult scanResult = new ClassGraph()
            .ignoreClassVisibility()
            .acceptPackages("my.example.package")
            .acceptClasses()
            .scan()) {
        System.out.println(scanResult.getAllClasses().getNames());
    }
}
So I assume the problem is related to the compiled classes not being visible in the "normal" classpath when deploying the application into an application server?
I appreciate any input.
Thanks a lot
The problem was indeed related to the ClassGraph library, which neo4j-ogm uses for scanning the classpath for model files.
ClassGraph supports a wide variety of classloaders, but TomEE's wasn't included yet. This has been fixed since version 4.8.107, so if you run into a similar problem, check whether ClassGraph supports the classloader of the application server you are using.
You can easily override neo4j-ogm's ClassGraph dependency by specifying the version you need in your pom.xml:
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-ogm-core</artifactId>
    <version>${neo4j-version}</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-ogm-bolt-driver</artifactId>
    <version>${neo4j-version}</version>
    <scope>runtime</scope>
</dependency>
<!-- overriding neo4j-ogm's internal ClassGraph dependency because the version it uses does not support Apache TomEE -->
<dependency>
    <groupId>io.github.classgraph</groupId>
    <artifactId>classgraph</artifactId>
    <version>4.8.107</version>
</dependency>
The problem was not related to my multi-module Maven structure.
I have the following Maven dependency and configuration set up:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
@Configuration
@EnableMongoAuditing
public class MongoConfig {

    @Bean
    MongoTransactionManager transactionManager(MongoDbFactory mongoDbFactory) {
        return new MongoTransactionManager(mongoDbFactory);
    }
}
Updated: I've followed the suggested solution of creating a bean with @Transactional and having it injected into my test class. Below is the service bean I created:
@Service
@Transactional
@RequiredArgsConstructor
public class MongoTransactionService {

    private final UserRepo userRepo;

    public void boundToFail() throws RuntimeException {
        userRepo.save(User.builder().id("1").build());
        throw new RuntimeException();
    }
}
and the test class where I inject a MongoTransactionService bean:
@DataMongoTest(excludeAutoConfiguration = EmbeddedMongoAutoConfiguration.class,
        includeFilters = @ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE, classes = MongoTransactionService.class))
@ExtendWith(SpringExtension.class)
class MongoTransactionServiceTest {

    @Autowired
    UserRepo userRepo;

    @Autowired
    MongoTransactionService mongoTransactionService;

    @Test
    void testTransactional() {
        try {
            mongoTransactionService.boundToFail();
        } catch (Exception e) {
            // do something
        }
        val user = userRepo.findById("1").orElse(null);
        assertThat(user).isNull();
    }
}
I am expecting that a call to boundToFail(), which throws a RuntimeException, would roll back the saved user, but the user still gets persisted in the database.
It turns out that @DataMongoTest doesn't activate the auto-configuration for MongoDB transactions. I've filed a ticket with Spring Boot to fix that. In the meantime, you can get this to work by adding
@ImportAutoConfiguration(TransactionAutoConfiguration.class)
to your test class.
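Applied to the test class from the question, that looks roughly like this (a sketch; only the extra import and annotation are new, the test body stays unchanged):

import org.springframework.boot.autoconfigure.ImportAutoConfiguration;
import org.springframework.boot.autoconfigure.transaction.TransactionAutoConfiguration;

@DataMongoTest(excludeAutoConfiguration = EmbeddedMongoAutoConfiguration.class,
        includeFilters = @ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE, classes = MongoTransactionService.class))
@ImportAutoConfiguration(TransactionAutoConfiguration.class)
@ExtendWith(SpringExtension.class)
class MongoTransactionServiceTest {
    // ... same test body as above
}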
Note that using MongoDB transactions requires a replica set database setup. If that's not given, the creation of a transaction will fail, your test case will capture that exception, and the test will still succeed. The data will not be inserted, but that's not due to the RuntimeException being thrown; it's because the transaction was not started in the first place.
The question previously presented a slightly different code arrangement that suffered from other problems. For reference, here's the previous answer:
@Transactional needs to live on public methods of a separate Spring bean, as the transactional logic is implemented by wrapping the target object with a proxy that contains an interceptor interacting with the transaction infrastructure.
Your example suffers from two problems:
The test itself is not a Spring bean, i.e. there's no transactional behavior added to boundToFail(…). @Transactional can be used on JUnit test methods, but that controls the transactional behavior of the test, most prominently rolling back the transaction to make sure changes to the data store made in the test do not affect other tests. See this section of the reference documentation.
Even if transactional logic were applied to boundToFail(…), a local call to the method would never trigger it, as the call doesn't pass through the proxy that applies it. See more on that in the reference documentation.
The solution to your problem is to create a separate Spring bean that carries the @Transactional annotation, get that injected into your test case, and call the method from the test.
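To illustrate point 2, here is a minimal, hypothetical sketch of the self-invocation pitfall (class and method names are made up):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    public void process() {
        // Self-invocation: this call goes directly through 'this', not through
        // the Spring proxy, so the @Transactional advice below is never applied.
        saveInTransaction();
    }

    @Transactional
    public void saveInTransaction() {
        // ... data store interactions expected to run in a transaction
    }
}

Calling saveInTransaction() from another bean (through the injected proxy) applies the transaction; calling it via process() does not.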
I am having an issue with the javax.annotation.PostConstruct annotation in my Sling model.
My HTML file that uses my model:
<div data-sly-use="com.company.platform.component.general.textblockvalidator.TextBlockValidatorModel" data-sly-unwrap />
Model:
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.models.annotations.Model;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import javax.annotation.PostConstruct;
import javax.inject.Inject;
@Model(adaptables = org.apache.sling.api.resource.Resource.class)
public class TextBlockValidatorModel {

    @PostConstruct
    private void init() {
        System.out.println();
    }

    public String getValidate() {
        return "This works";
    }
}
I can call the getter from my Sightly file, but I never seem to enter my @PostConstruct init() method.
IntelliJ does give me a warning on the annotation but I am not sure what I am doing wrong:
My Sling-Model-Packages entry:
<Sling-Model-Packages>
...
com.asadventure.platform.component
...
</Sling-Model-Packages>
Any ideas? Thanks in advance!
First, check that your Sling Model has been registered correctly by looking for your class on this page:
http://localhost:4502/system/console/status-adapters
If it isn't listed there, you most likely have not specified the <Sling-Model-Packages> property of the maven-bundle-plugin.
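As a sketch, that property goes into the maven-bundle-plugin instructions, here assuming the package name from the question:

<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <!-- packages to scan for Sling Models; must match your model's package -->
            <Sling-Model-Packages>
                com.asadventure.platform.component
            </Sling-Model-Packages>
        </instructions>
    </configuration>
</plugin>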
I would also try changing the access modifier for the init method to protected or public.
UPDATE:
I've created a sample project for AEM 6.1 demonstrating the use of the @PostConstruct annotation.
The Sling Model class:
@Model(adaptables = Resource.class)
public class SampleModel {

    private boolean postConstructCalled = false;

    @PostConstruct
    public void init() {
        this.postConstructCalled = true;
    }

    public boolean isPostConstructCalled() {
        return this.postConstructCalled;
    }
}
And a simple HTL component:
<sly data-sly-use.model="com.github.mickleroy.models.SampleModel">
    <p>@PostConstruct was called: ${model.postConstructCalled}</p>
</sly>
Please take note of the use of the data-sly-use directive - you need to provide a model name.
Also, as I mentioned in the comments, you should not be adding javax.annotation-api as a dependency as it is part of the JDK.
Full source available here: https://github.com/mickleroy/sling-models-sample
For anyone still looking for an answer to this that the above did not resolve, the issue for me was that I did not include the javax.annotation-api dependency:
<dependency>
    <groupId>javax.annotation</groupId>
    <artifactId>javax.annotation-api</artifactId>
    <version>1.3.2</version>
    <scope>provided</scope>
</dependency>
Once I added this to the parent pom and included it in the core pom, @PostConstruct worked just fine.
Update:
The reason I had to do this was my inclusion of jersey-client, which requires its own version of javax.annotation-api. Since the first version of this answer, I have found I needed to separate jersey-client and its dependencies into a separate bundle project. This allows both Jersey and @PostConstruct to work at the same time.
Just adding the dependency as shown above caused clashes between Jersey's version of javax.annotation-api and AEM's (Felix's) version of javax.annotation-api.
My guess is that your class is being initialized by the Java Use provider instead of adapting the current resource or request.
In Sightly, when you use data-sly-use, it tries several things to obtain an object (I can't recall the order):
get an OSGi service with that name
use the AEM Java Use API
adapt the current request/resource into your model class (your desired case)
simply treat the class as a Java POJO and instantiate it (@PostConstruct is not called, injection won't be done)
I've seen several cases where injection or the @PostConstruct methods of Sling Models fail and Sightly falls back to the Java Use provider. When that happens, you see exactly what you describe: you get an object of the right class, but no injection happened and no @PostConstruct was called.
My recommendation is to carefully check the logs; you should see an error if this is the case. Also, you can install the Scripting HTL Sling Models Use Provider, which will propagate any error during Sling Model creation, making the problem obvious.
I have a resource class and I'd like to be able to check an authentication token before the resource method is called, thus avoiding having to pass the token directly into the Resource method.
I have added the following to web.xml:
<context-param>
    <param-name>resteasy.providers</param-name>
    <param-value>com.michael.services.interceptors.AuthorisationInterceptorImpl</param-value>
</context-param>
My interceptor is implemented as follows:
@Provider
public class AuthorisationInterceptorImpl implements javax.ws.rs.container.ContainerRequestFilter {

    @Inject
    private ApiAuthenticationService apiAuthenticationService;

    @Override
    public void filter(ContainerRequestContext requestContext) {
        // Code to verify token
    }
}
The filter method is being called before the methods in my resource class; however, the apiAuthenticationService is not being injected and is null when I attempt to call its methods.
I'm using Tapestry 5.3.7, Tapestry-Resteasy 0.3.2 and Resteasy 2.3.4.Final.
Can this be done?
I don't think this will work, based on a quick glance at the tapestry-resteasy code.
The @Inject annotation is part of tapestry-ioc; if a class is not instantiated by Tapestry, the @Inject annotation is not honored.
Filters defined in web.xml are instantiated by the servlet container (Jetty, Tomcat, etc.), which has no special knowledge of Tapestry or Tapestry annotations.
I think you will be better off contributing a filter into Tapestry's HttpServletRequestHandler or RequestHandler pipelines (see their JavaDoc). I'm not sure how you can gain access to the ContainerRequestContext, however.
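For illustration, a rough sketch of such a contribution in an application's Tapestry module class; the module and filter names are hypothetical, and the actual token check is left abstract:

import java.io.IOException;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.tapestry5.ioc.OrderedConfiguration;
import org.apache.tapestry5.services.HttpServletRequestFilter;
import org.apache.tapestry5.services.HttpServletRequestHandler;

public class AppModule {

    public static void contributeHttpServletRequestHandler(
            OrderedConfiguration<HttpServletRequestFilter> configuration) {

        configuration.add("TokenCheck", new HttpServletRequestFilter() {
            public boolean service(HttpServletRequest request,
                                   HttpServletResponse response,
                                   HttpServletRequestHandler handler) throws IOException {
                // Hypothetical token check: reject the request up front
                // when no token header is present.
                if (request.getHeader("X-Auth-Token") == null) {
                    response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
                    return true; // request was handled (rejected)
                }
                // Otherwise continue down the pipeline.
                return handler.service(request, response);
            }
        }, "before:*");
    }
}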
With tapestry-resteasy you don't need to define the provider in the web.xml file.
If you want to use Tapestry's autobuild mechanism, just move your provider to the .rest package together with your resources.
If you don't want to use autodiscovery/autobuild, just contribute it to javax.ws.rs.core.Application:
@Contribute(javax.ws.rs.core.Application.class)
public static void configureRestProviders(Configuration<Object> singletons,
                                          AuthorisationInterceptor authorisationInterceptor) {
    singletons.add(authorisationInterceptor);
}
Even though you can use REST providers for security, it is probably a good idea to take Howard's advice and implement your own filter in the Tapestry pipeline.
BTW, you can also give tapestry-security a try :)