EclipseLink doesn't seem to detect or fire JSR 303 annotation constraints declared in a base class that is the mapped superclass of an entity during a persist() operation.
For example:
public class Base
{
    @NotNull
    private Integer id;

    private String recordName;

    // other stuff (getters etc.)
}
and then
public class MyObject
extends Base
{
//stuff...
}
and then:
<mapped-superclass class="Base">
<attributes>
<basic name="recordName">
<column name = "NAME" />
</basic>
</attributes>
</mapped-superclass>
and finally:
<entity class="MyObject">
<table name="TheTable"/>
<attributes>
<id name="id">
<column name="recordId" />
</id>
</attributes>
</entity>
Some other relevant parameters:
Using JPA 2.1 -- specifically EclipseLink 2.6.2 and 2.6.3.
I am integration testing, so Java SE (and Spock).
JDK 1.8.77.
I do have Hibernate Validator on my classpath (org.hibernate:hibernate-validator:5.2.4.Final).
If I write a test fixture and use Validator.validate() directly (no JPA or persist), Hibernate Validator works as expected.
I do NOT use JPA annotations and only use ORM XML to declare entity mappings.
I do use JSR 303 annotations to mark attributes and properties with constraints.
persistence.xml is marked with validation mode AUTO, and many variations of properties like javax.persistence.validation.group.pre-persist with fully qualified names of marker interfaces have been tried.
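For reference, a sketch of the persistence.xml configuration just described (the unit name and group interface are placeholders, not from the original setup):
<persistence-unit name="testPU" transaction-type="RESOURCE_LOCAL">
    <validation-mode>AUTO</validation-mode>
    <properties>
        <!-- placeholder marker interface for the pre-persist validation group -->
        <property name="javax.persistence.validation.group.pre-persist"
                  value="com.example.validation.PrePersistGroup"/>
    </properties>
</persistence-unit>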
As mentioned, calling em.persist(myObjectInst) will not fire any JSR 303 constraints added to class 'Base'.
*Is there some tuning parameter or switch I can tinker with that will make this work?*
Note: I did a deep-dive debug on this and can see that org.eclipse.persistence.internal.jpa.metadata.beanvalidation.BeanValidationHelper.detectConstraints() does NOT look at any parent classes for JSR 303 annotations; it only looks at the specific entity class. I'd hazard a guess that if I moved my JSR 303 constraints to the concrete (entity) class, it would just work. But then I would lose the extension and mapped superclass structure. So what fun is that?
UPDATE
Looks like an issue in EclipseLink 2.6.x. See here (https://www.eclipse.org/forums/index.php?t=msg&th=1077658&goto=1732842&#msg_1732842) for more details.
From what I can see, EclipseLink 2.6.x up to 2.6.4 seems to have a massive bug in terms of upholding its contract of triggering JSR 303 bean validations.
Right now, EclipseLink 2.6.4 only triggers these validations if your child entity is itself directly flagged with constraints.
I have integration tests that work perfectly under JEE 6 library versions (e.g. EclipseLink 2.4.x).
When I upgrade the libraries to JEE 7 versions, which in the particular case of EclipseLink means versions 2.6.1 up to 2.6.4, they all manifest the same bug.
The broken unit tests I have analyzed so far validate that ConstraintViolationExceptions, such as for not-null violations, get triggered.
So take an entity A that extends abstract entity B, where abstract entity B is a @MappedSuperclass.
You will have problems if your @NotNull or any other such constraint is found on your abstract entity B.
In this case, things will not go well: no constraint violation gets triggered by EclipseLink.
Instead, it is the DB that stops you when you issue the commit() or flush() in the test.
EclipseLink will roll back on the DB exception.
However, as soon as you go to entity A and pump into it a dummy field:
@NotNull
private String dummy;
this is sufficient to make the Validator (e.g. Hibernate Validator) get called.
In this case, my test still fails, because now I get two @NotNull constraint violations instead of one.
In the following snippet I illustrate the relevant chunk of the stack trace on EclipseLink 2.6.1.
Caused by: javax.validation.ConstraintViolationException:
Bean Validation constraint(s) violated while executing Automatic Bean Validation on callback event:'prePersist'.
Please refer to embedded ConstraintViolations for details.
at org.eclipse.persistence.internal.jpa.metadata.listeners.BeanValidationListener.validateOnCallbackEvent(BeanValidationListener.java:108)
at org.eclipse.persistence.internal.jpa.metadata.listeners.BeanValidationListener.prePersist(BeanValidationListener.java:77)
at org.eclipse.persistence.descriptors.DescriptorEventManager.notifyListener(DescriptorEventManager.java:748)
at org.eclipse.persistence.descriptors.DescriptorEventManager.notifyEJB30Listeners(DescriptorEventManager.java:691)
at org.eclipse.persistence.descriptors.DescriptorEventManager.executeEvent(DescriptorEventManager.java:229)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.registerNewObjectClone(UnitOfWorkImpl.java:4314)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.registerNotRegisteredNewObjectForPersist(UnitOfWorkImpl.java:4291)
at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.registerNotRegisteredNewObjectForPersist(RepeatableWriteUnitOfWork.java:521)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.registerNewObjectForPersist(UnitOfWorkImpl.java:4233)
at org.eclipse.persistence.internal.jpa.EntityManagerImpl.persist(EntityManagerImpl.java:507)
at TEST_THAT_IS_PROBLEMATIC
... 25 more
In the stack trace above, you have the unit test doing an em.persist() on entity A, and in this case entity A has the dummy @NotNull field, so the validation gets called.
The bug in EclipseLink seems to be in how the BeanValidationListener asks the BeanValidationHelper whether a class is constrained or not.
The code from EclipseLink is as follows:
private void validateOnCallbackEvent(DescriptorEvent event, String callbackEventName, Class[] validationGroup) {
    Object source = event.getSource();
    boolean noOptimization = "true".equalsIgnoreCase((String) event.getSession().getProperty(PersistenceUnitProperties.BEAN_VALIDATION_NO_OPTIMISATION));
    boolean shouldValidate = noOptimization || beanValidationHelper.isConstrained(source.getClass());
    if (shouldValidate) {
        Set<ConstraintViolation<Object>> constraintViolations = getValidator(event).validate(source, validationGroup);
        if (constraintViolations.size() > 0) {
            // There were errors while call to validate above.
            // Throw a ConstrainViolationException as required by the spec.
            // The transaction would be rolled back automatically
            // TODO need to I18N this.
            throw new ConstraintViolationException(
                    "Bean Validation constraint(s) violated while executing Automatic Bean Validation on callback event:'" +
                    callbackEventName + "'. Please refer to embedded ConstraintViolations for details.",
                    (Set<ConstraintViolation<?>>) (Object) constraintViolations); /* Do not remove the explicit
                    cast. This issue is related to capture#a not being instance of capture#b. */
        }
    }
}
And the problem is that the query:
beanValidationHelper.isConstrained(source.getClass());
returns false, which is completely wrong.
Finally, if you check the implementation of the BeanValidationHelper, the initial part of the code looks as follows:
private Boolean detectConstraints(Class<?> clazz) {
    for (Field f : ReflectionUtils.getDeclaredFields(clazz)) {
        for (Annotation a : f.getDeclaredAnnotations()) {
            final Class<? extends Annotation> type = a.annotationType();
            if (KNOWN_CONSTRAINTS.contains(type.getName())) {
                return true;
            }
            // Check for custom annotations on the field (+ check inheritance on class annotations).
            // Custom bean validation annotation is defined by having @Constraint annotation on its class.
            for (Annotation typesClassAnnotation : type.getAnnotations()) {
                final Class<? extends Annotation> classAnnotationType = typesClassAnnotation.annotationType();
                if (Constraint.class == classAnnotationType) {
                    KNOWN_CONSTRAINTS.add(type.getName());
                    return true;
                }
            }
        }
    }
The implementation above is clearly wrong: the method as a whole is not recursive, and the reflection APIs it uses are themselves not recursive.
They look only at the current class.
If you see the following Stack Overflow thread:
What is the difference between getFields and getDeclaredFields in Java reflection
it is clearly explained by the top-ranked answer that:
Field f : ReflectionUtils.getDeclaredFields(clazz)
only returns the fields of the current class, but not those of the parents.
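For contrast, a hierarchy-aware check would have to walk the superclass chain as well. Below is a minimal sketch of such a check (an illustration only, not EclipseLink code; it ignores the helper's caching of known constraint names):
import java.lang.annotation.Annotation;
import java.lang.reflect.Field;
import javax.validation.Constraint;

final class ConstraintHierarchyDetector {

    // Unlike the EclipseLink 2.6.x helper, this walks up the superclass chain.
    static boolean detectConstraints(Class<?> clazz) {
        for (Class<?> c = clazz; c != null && c != Object.class; c = c.getSuperclass()) {
            for (Field f : c.getDeclaredFields()) {
                for (Annotation a : f.getDeclaredAnnotations()) {
                    // Built-in constraints (@NotNull, @Size, ...) and custom ones
                    // are all meta-annotated with @Constraint.
                    if (a.annotationType().isAnnotationPresent(Constraint.class)) {
                        return true;
                    }
                }
            }
        }
        return false;
    }
}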
What I see myself forced to do in the meantime is to put in place this workaround, to force the broken algorithm in the BeanValidationHelper to detect the class as one needing to be validated:
@Transient
@NotNull
private final char waitForEclipseLinkToFixTheVersion264 = 'a';
By doing the above, your code is clearly flagged with a chunk that you can remove in the future.
And since the field is transient... hey, it does not change your DB.
Please note as well that the EclipseLink forum now has additional information.
The bug goes deeper than just the improper tracking of when bean validation is needed in BeanValidationListener.
The bug has a second depth.
The BeanValidationListener provided with EclipseLink also does not register any implementation for the PreWriteEvent or for the DescriptorEventManager.PreInsertEvent.
So when DeferredChangeDetectionPolicy.calculateChanges() runs, your entity A can have JSR 303 constrained fields and still not get JSR 303 validation.
This is most likely happening to you because your entity was:
T0: persisted, and validations went through OK.
T1: modified within the same transaction, so that calculateChanges() invokes the event listeners.
The BeanValidationListener does not care about a PreInsertEvent.
It just assumes the validation was done at prePersist and does not invoke the validation at all.
I am not yet sure of the workaround for this.
I will be looking at how to register an event listener during the pre-insert phase that does the same as the BeanValidationListener.
Or I will be locally patching the BeanValidationListener to subscribe to the PreInsert event.
I hate modifying code of libraries maintained by others, so I will first go for the approach of our own event listener as a temporary workaround for this bug.
Adding a repository that allows you to verify both bugs:
https://github.com/99sono/EclipseLink_2_6_4_JSR_303Bug
For bug number 2, the following event listener can serve as a temporary workaround until EclipseLink fixes its bean validation orchestration logic.
package jpa.eclipselink.test.bug2workaround;
import java.util.Map;
import javax.validation.Validation;
import javax.validation.ValidatorFactory;
import org.eclipse.persistence.config.PersistenceUnitProperties;
import org.eclipse.persistence.descriptors.DescriptorEvent;
import org.eclipse.persistence.descriptors.DescriptorEventAdapter;
import org.eclipse.persistence.descriptors.changetracking.DeferredChangeDetectionPolicy;
import org.eclipse.persistence.internal.jpa.deployment.BeanValidationInitializationHelper;
import org.eclipse.persistence.internal.jpa.metadata.listeners.BeanValidationListener;
/**
 * Temporary work-around for the JSR 303 bean validation flow in EclipseLink.
 *
 * <P>
 * Problem: <br>
 * The
 * {@link DeferredChangeDetectionPolicy#calculateChanges(Object, Object, boolean, org.eclipse.persistence.internal.sessions.UnitOfWorkChangeSet, org.eclipse.persistence.internal.sessions.UnitOfWorkImpl, org.eclipse.persistence.descriptors.ClassDescriptor, boolean)}
 * during a flush will do one of the following: <br>
 * {@code descriptor.getEventManager().executeEvent(new DescriptorEvent(DescriptorEventManager.PreInsertEvent, writeQuery)); }
 * or <br>
 *
 * {@code descriptor.getEventManager().executeEvent(new DescriptorEvent(DescriptorEventManager.PreUpdateEvent, writeQuery)); }
 *
 * <P>
 * When it does
 * {@code descriptor.getEventManager().executeEvent(new DescriptorEvent(DescriptorEventManager.PreInsertEvent, writeQuery)); }
 * the {@link BeanValidationListener} will not do anything. We want it to do bean validation.
 */
public class ForceBeanManagerValidationOnPreInsert extends DescriptorEventAdapter {

    private static final Class[] DUMMY_GROUP_PARAMETER = null;

    /**
     * This is the validator that EclipseLink uses to do JSR 303 validations during pre-update, pre-delete and
     * pre-persist, but not pre-insert.
     *
     * Do not access this field directly. Use the {@link #getBeanValidationListener(DescriptorEvent)} API to get it, as
     * that API will initialize the listener if necessary.
     */
    BeanValidationListener beanValidationListener = null;

    final Object beanValidationListenerLock = new Object();
    public ForceBeanManagerValidationOnPreInsert() {
        super();
    }

    /**
     * As a work-around, we want to do the bean validation that the container is currently not doing.
     */
    @Override
    public void preInsert(DescriptorEvent event) {
        // (a) get ourselves an instance of the EclipseLink listener ("Step 4 - Notify internal listeners.")
        // that knows how to run JSR 303 validations on beans associated to descriptor events
        BeanValidationListener eclipseLinkBeanValidationListenerTool = getBeanValidationListener(event);

        // (b) let the validation listener run its pre-update logic on a pre-insert; it serves our purpose
        eclipseLinkBeanValidationListenerTool.preUpdate(event);
    }
    /**
     * Returns the BeanValidationListener that knows how to do JSR 303 validation. Creates a new instance if needed,
     * otherwise returns the already created listener.
     *
     * <P>
     * We can only initialize our {@link BeanValidationListener} during runtime, to get access to the JPA persistence
     * unit properties (e.g. to the validation factory).
     *
     * @param event
     *            This event describes an ongoing insert, update or delete event on an entity, for which we may want
     *            to force EclipseLink to kill the transaction if a JSR 303 bean validation fails.
     * @return the BeanValidationListener that knows how to do JSR 303 validation.
     */
    protected BeanValidationListener getBeanValidationListener(DescriptorEvent event) {
        synchronized (beanValidationListenerLock) {
            // (a) initialize our BeanValidationListener if needed
            boolean initializationNeeded = beanValidationListener == null;
            if (initializationNeeded) {
                beanValidationListener = createBeanValidationListener(event);
            }
            // (b) return the validation listener that is normally used by EclipseLink
            // for pre-persist, pre-update and pre-delete, so that we can force it to run on pre-insert
            return beanValidationListener;
        }
    }
    /**
     * Creates a new instance of the {@link BeanValidationListener} that comes with EclipseLink.
     *
     * @param event
     *            the ongoing DB event (e.g. pre-insert) where we want to trigger JSR 303 bean validation.
     *
     * @return a new instance of the {@link BeanValidationListener}.
     */
    protected BeanValidationListener createBeanValidationListener(DescriptorEvent event) {
        Map persistenceUnitProperties = event.getSession().getProperties();
        ValidatorFactory validatorFactory = getValidatorFactory(persistenceUnitProperties);
        return new BeanValidationListener(validatorFactory, DUMMY_GROUP_PARAMETER, DUMMY_GROUP_PARAMETER,
                DUMMY_GROUP_PARAMETER);
    }
    /**
     * Snippet of code taken out of {@link BeanValidationInitializationHelper}.
     *
     * @param puProperties
     *            the persistence unit properties that may specify the JSR 303 validation factory.
     * @return the validation factory that can check if a bean is violating business rules. Almost everyone uses the
     *         Hibernate JSR 303 validator.
     */
    protected ValidatorFactory getValidatorFactory(Map puProperties) {
        ValidatorFactory validatorFactory = (ValidatorFactory) puProperties
                .get(PersistenceUnitProperties.VALIDATOR_FACTORY);
        if (validatorFactory == null) {
            validatorFactory = Validation.buildDefaultValidatorFactory();
        }
        return validatorFactory;
    }
}
Simply add this event listener to your entity class, preferably to a base abstract class, to ensure JSR 303 validation will happen on pre-insert.
This should work around the hole that allows dirty entities violating business rules to be committed to the DB.
Here is an example of an entity with the work-around in place.
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "DESCRIMINATOR", length = 32)
@DiscriminatorValue("Bug2WorkAround")
@Entity
@EntityListeners({ ForceBeanManagerValidationOnPreInsert.class })
public class Bug2Entity2WithWorkAround extends GenericEntity {
    // ...
}
Kind regards.
Related
I'm using EclipseLink JPA. I have an entity with a Timestamp field annotated with @Version for optimistic locking.
By default, this sets the entity manager to use database time, so if I have to do a batch update it doesn't work properly, as it queries the database for the time each time it wants to do an insert.
How can I change the TimestampLockingPolicy to use LOCAL_TIME?
The class org.eclipse.persistence.descriptors.TimestampLockingPolicy has a public method useLocalTime(), but I don't know how to use it or from where I should call it.
Found the answer.
First let's create a DescriptorCustomizer:
public class LocalDateTimeCustomizer implements DescriptorCustomizer {

    @Override
    public void customize(ClassDescriptor descriptor) throws Exception {
        OptimisticLockingPolicy policy = descriptor.getOptimisticLockingPolicy();
        if (policy instanceof TimestampLockingPolicy) {
            TimestampLockingPolicy p = (TimestampLockingPolicy) policy;
            p.useLocalTime();
        }
    }
}
Then annotate the entity that has the @Version field with:
@Customizer(LocalDateTimeCustomizer.class)
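For completeness, a minimal sketch of how the pieces fit together (the entity and field names here are made up):
import java.sql.Timestamp;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;
import org.eclipse.persistence.annotations.Customizer;

@Entity
@Customizer(LocalDateTimeCustomizer.class)
public class PurchaseOrder {

    @Id
    private Long id;

    // A Timestamp @Version field makes EclipseLink install a TimestampLockingPolicy;
    // the customizer above switches that policy to local JVM time.
    @Version
    private Timestamp lastUpdate;
}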
I am having an issue where I get a ClassNotFoundException when I try to run JUnit tests. The generated query classes are named QSomeTableEntity_Q, but it keeps looking for QSomeTableEntity in the SomeTableRepository for the entity, even though my Predicate class imports the QSomeTableEntity_Q class.
I have this in my Maven POM:
<querydsl.suffix>_Q</querydsl.suffix>
It seems like the Spring Data JPA framework will look for the Q-entity in the package where the domain class is located. Here is the code:
/**
 * Returns the name of the query class for the given domain class.
 *
 * @param domainClass
 * @return
 */
private String getQueryClassName(Class<?> domainClass) {
    String simpleClassName = ClassUtils.getShortName(domainClass);
    return String.format("%s.Q%s%s", domainClass.getPackage().getName(), getClassBase(simpleClassName),
            domainClass.getSimpleName());
}
So just moving the Q-entity to the package of the domain class will solve the problem.
Let's say I specify an outputText component like this:
<h:outputText value="#{ManagedBean.someProperty}"/>
If I print a log message when the getter for someProperty is called and load the page, it is trivial to notice that the getter is being called more than once per request (twice or three times is what happened in my case):
DEBUG 2010-01-18 23:31:40,104 (ManagedBean.java:13) - Getting some property
DEBUG 2010-01-18 23:31:40,104 (ManagedBean.java:13) - Getting some property
If the value of someProperty is expensive to calculate, this can potentially be a problem.
I googled a bit and figured this is a known issue. One workaround was to include a check and see if it had already been calculated:
private String someProperty;

public String getSomeProperty() {
    if (this.someProperty == null) {
        this.someProperty = this.calculatePropertyValue();
    }
    return this.someProperty;
}
The main problem with this is that you get loads of boilerplate code, not to mention private variables that you might not need.
What are the alternatives to this approach? Is there a way to achieve this without so much unnecessary code? Is there a way to stop JSF from behaving in this way?
Thanks for your input!
This is caused by the nature of deferred expressions #{} (note that "legacy" standard expressions ${} behave exactly the same when Facelets is used instead of JSP). The deferred expression is not immediately evaluated, but created as a ValueExpression object, and the getter method behind the expression is executed every time the code calls ValueExpression#getValue().
This will normally be invoked one or two times per JSF request-response cycle, depending on whether the component is an input or output component (learn it here). However, this count can get (much) higher when used in iterating JSF components (such as <h:dataTable> and <ui:repeat>), or here and there in a boolean expression like the rendered attribute. JSF (specifically, EL) won't cache the evaluated result of the EL expression at all, as it may return different values on each call (for example, when it's dependent on the currently iterated datatable row).
Evaluating an EL expression and invoking a getter method is a very cheap operation, so you should generally not worry about this at all. However, the story changes when you're performing expensive DB/business logic in the getter method for some reason. This would be re-executed every time!
Getter methods in JSF backing beans should be designed in such a way that they solely return the already-prepared property and nothing more, exactly as per the JavaBeans specification. They should not do any expensive DB/business logic at all. For that, the bean's @PostConstruct and/or (action)listener methods should be used. They are executed only once at some point of the request-based JSF lifecycle, and that's exactly what you want.
Here is a summary of all the different right ways to preset/load a property.
public class Bean {

    private SomeObject someProperty;

    @PostConstruct
    public void init() {
        // In @PostConstruct (will be invoked immediately after construction and dependency/property injection).
        someProperty = loadSomeProperty();
    }

    public void onload() {
        // Or in GET action method (e.g. <f:viewAction action>).
        someProperty = loadSomeProperty();
    }

    public void preRender(ComponentSystemEvent event) {
        // Or in some SystemEvent method (e.g. <f:event type="preRenderView">).
        someProperty = loadSomeProperty();
    }

    public void change(ValueChangeEvent event) {
        // Or in some FacesEvent method (e.g. <h:inputXxx valueChangeListener>).
        someProperty = loadSomeProperty();
    }

    public void ajaxListener(AjaxBehaviorEvent event) {
        // Or in some BehaviorEvent method (e.g. <f:ajax listener>).
        someProperty = loadSomeProperty();
    }

    public void actionListener(ActionEvent event) {
        // Or in some ActionEvent method (e.g. <h:commandXxx actionListener>).
        someProperty = loadSomeProperty();
    }

    public String submit() {
        // Or in POST action method (e.g. <h:commandXxx action>).
        someProperty = loadSomeProperty();
        return "outcome";
    }

    public SomeObject getSomeProperty() {
        // Just keep getter untouched. It isn't intended to do business logic!
        return someProperty;
    }
}
Note that you should not use the bean's constructor or initialization block for the job, because it may be invoked multiple times if you're using a bean management framework which uses proxies, such as CDI.
If there are really no other ways for you, due to some restrictive design requirements, then you should introduce lazy loading inside the getter method: if the property is null, then load and assign it to the property; else, return it.
public SomeObject getSomeProperty() {
    // If there are really no other ways, introduce lazy loading.
    if (someProperty == null) {
        someProperty = loadSomeProperty();
    }
    return someProperty;
}
This way the expensive DB/business logic won't unnecessarily be executed on every single getter call.
See also:
Why is the getter called so many times by the rendered attribute?
Invoke JSF managed bean action on page load
How and when should I load the model from database for h:dataTable
How to populate options of h:selectOneMenu from database?
Display dynamic image from database with p:graphicImage and StreamedContent
Defining and reusing an EL variable in JSF page
Measure the render time of a JSF view after a server request
With JSF 2.0, you can attach a listener to a system event:
<h:outputText value="#{ManagedBean.someProperty}">
<f:event type="preRenderView" listener="#{ManagedBean.loadSomeProperty}" />
</h:outputText>
Alternatively, you can enclose the JSF page in an f:view tag:
<f:view>
    <f:event type="preRenderView" listener="#{ManagedBean.loadSomeProperty}" />
    .. jsf page here...
</f:view>
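For illustration, a matching backing bean could look like this (a sketch; the bean name and the expensive call are assumptions based on the snippets above):
import javax.faces.bean.ManagedBean;
import javax.faces.bean.RequestScoped;
import javax.faces.event.ComponentSystemEvent;

@ManagedBean(name = "ManagedBean") // matches #{ManagedBean.someProperty}
@RequestScoped
public class SomePropertyBean {

    private String someProperty;

    // Bound via <f:event type="preRenderView" listener="#{ManagedBean.loadSomeProperty}"/>;
    // runs once before rendering, so the getter stays trivial.
    public void loadSomeProperty(ComponentSystemEvent event) {
        someProperty = expensiveCalculation();
    }

    public String getSomeProperty() {
        return someProperty;
    }

    private String expensiveCalculation() {
        return "result"; // stand-in for the real DB/business logic
    }
}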
I have written an article about how to cache JSF bean getters with Spring AOP.
I created a simple MethodInterceptor which intercepts all methods annotated with a special annotation:
public class CacheAdvice implements MethodInterceptor {

    private static Logger logger = LoggerFactory.getLogger(CacheAdvice.class);

    @Autowired
    private CacheService cacheService;

    @Override
    public Object invoke(MethodInvocation methodInvocation) throws Throwable {
        String key = methodInvocation.getThis() + methodInvocation.getMethod().getName();
        String thread = Thread.currentThread().getName();

        Object cachedValue = cacheService.getData(thread, key);
        if (cachedValue == null) {
            cachedValue = methodInvocation.proceed();
            cacheService.cacheData(thread, key, cachedValue);
            logger.debug("Cache miss " + thread + " " + key);
        } else {
            logger.debug("Cached hit " + thread + " " + key);
        }
        return cachedValue;
    }

    public CacheService getCacheService() {
        return cacheService;
    }

    public void setCacheService(CacheService cacheService) {
        this.cacheService = cacheService;
    }
}
This interceptor is wired up in a Spring configuration file:
<bean id="advisor" class="org.springframework.aop.support.DefaultPointcutAdvisor">
<property name="pointcut">
<bean class="org.springframework.aop.support.annotation.AnnotationMatchingPointcut">
<constructor-arg index="0" name="classAnnotationType" type="java.lang.Class">
<null/>
</constructor-arg>
<constructor-arg index="1" value="com._4dconcept.docAdvance.jsfCache.annotation.Cacheable" name="methodAnnotationType" type="java.lang.Class"/>
</bean>
</property>
<property name="advice">
<bean class="com._4dconcept.docAdvance.jsfCache.CacheAdvice"/>
</property>
</bean>
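The custom Cacheable annotation referenced by the pointcut is not shown in the article; a plausible definition would be a simple runtime-retained method marker:
package com._4dconcept.docAdvance.jsfCache.annotation;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marks getters whose results should be cached by CacheAdvice.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Cacheable {
}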
Hope it will help!
Originally posted in the PrimeFaces forum at http://forum.primefaces.org/viewtopic.php?f=3&t=29546
Recently, I have been obsessed with evaluating the performance of my app, tuning JPA queries, replacing dynamic SQL queries with named queries, and just this morning, I recognized that a getter method was more of a HOT SPOT in Java VisualVM than the rest (or the majority) of my code.
Getter method:
PageNavigationController.getGmapsAutoComplete()
Referenced by ui:include in index.xhtml
Below, you will see that PageNavigationController.getGmapsAutoComplete() is a HOT SPOT (performance issue) in Java VisualVM. If you look further down the screen capture, you will see that getLazyModel(), the PrimeFaces lazy datatable getter method, is a hot spot too, but only when the end user is doing a lot of 'lazy datatable' type operations/tasks in the app. :)
See (original) code below.
public Boolean getGmapsAutoComplete() {
    switch (page) {
        case "/orders/pf_Add.xhtml":
        case "/orders/pf_Edit.xhtml":
        case "/orders/pf_EditDriverVehicles.xhtml":
            gmapsAutoComplete = true;
            break;
        default:
            gmapsAutoComplete = false;
            break;
    }
    return gmapsAutoComplete;
}
Referenced by the following in index.xhtml:
<h:head>
<ui:include src="#{pageNavigationController.gmapsAutoComplete ? '/head_gmapsAutoComplete.xhtml' : (pageNavigationController.gmaps ? '/head_gmaps.xhtml' : '/head_default.xhtml')}"/>
</h:head>
Solution: since this is a 'getter' method, move the code out and assign the value to gmapsAutoComplete prior to the method being called; see code below.
/*
 * 2013-04-06 moved switch {...} to updateGmapsAutoComplete()
 * because performance = 115ms (hot spot) while
 * navigating through web app
 */
public Boolean getGmapsAutoComplete() {
    return gmapsAutoComplete;
}

/*
 * ALWAYS call this method after "page = ..."
 */
private void updateGmapsAutoComplete() {
    switch (page) {
        case "/orders/pf_Add.xhtml":
        case "/orders/pf_Edit.xhtml":
        case "/orders/pf_EditDriverVehicles.xhtml":
            gmapsAutoComplete = true;
            break;
        default:
            gmapsAutoComplete = false;
            break;
    }
}
Test results: PageNavigationController.getGmapsAutoComplete() is no longer a HOT SPOT in Java VisualVM (it doesn't even show up anymore).
Sharing this topic, since many of the expert users have advised junior JSF developers NOT to add code in 'getter' methods. :)
If you are using CDI, you can use producer methods.
The getter will still be called many times, but the result of the first call is cached in the scope of the bean, which is efficient for getters that compute or initialize heavy objects!
See here for more info.
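A minimal sketch of that producer approach (the names and the value type are illustrative; a normal-scoped bean must be proxyable, hence the non-final wrapper type with a no-arg constructor):
import javax.enterprise.context.RequestScoped;
import javax.enterprise.inject.Produces;
import javax.inject.Named;

// Non-final value holder with a no-arg constructor so CDI can proxy it.
class SomeObject {
    private String value;

    SomeObject() {
    }

    SomeObject(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }
}

public class SomePropertyProducer {

    // Produced at most once per request; #{someProperty.value} in the view
    // reuses the scoped instance instead of re-invoking a getter every time.
    @Produces
    @RequestScoped
    @Named("someProperty")
    public SomeObject loadSomeProperty() {
        return new SomeObject("expensive result"); // stand-in computation
    }
}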
You could probably use AOP to create some sort of aspect that caches the results of your getters for a configurable amount of time. This would prevent you from needing to copy and paste boilerplate code in dozens of accessors.
If the value of someProperty is expensive to calculate, this can potentially be a problem.
This is what we call a premature optimization. In the rare case that a profiler tells you that the calculation of a property is so extraordinarily expensive that calling it three times rather than once has a significant performance impact, you add caching as you describe. But unless you do something really stupid like factoring primes or accessing a database in a getter, your code most likely has a dozen worse inefficiencies in places you've never thought about.
I would also advise using a framework such as PrimeFaces instead of stock JSF; they address such issues before the JSF team does, e.g. in PrimeFaces you can set partial submit. Otherwise BalusC has explained it well.
It is still a big problem in JSF. For example, if you have a method isPermittedToBlaBla for security checks, and in your view you have rendered="#{bean.isPermittedToBlaBla}", then the method will be called multiple times.
The security check could be complicated, e.g. an LDAP query, etc. So you must avoid that with something like
Boolean isAllowed = null; ... if (isAllowed == null) { ... } return isAllowed;
and you must ensure this runs once per request, within a session bean.
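A minimal sketch of that per-request caching (names are made up):
import javax.faces.bean.ManagedBean;
import javax.faces.bean.RequestScoped;

@ManagedBean
@RequestScoped
public class SecurityBean {

    private Boolean permitted; // null until first evaluated in this request

    // rendered="#{securityBean.permittedToBlaBla}" may be evaluated several
    // times per request, but the expensive check now runs at most once.
    public boolean isPermittedToBlaBla() {
        if (permitted == null) {
            permitted = doLdapCheck();
        }
        return permitted;
    }

    private boolean doLdapCheck() {
        return true; // stand-in for the real LDAP query
    }
}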
I think JSF must implement some extensions here to avoid multiple calls (e.g. an annotation @Phase(RENDER_RESPONSE) that calls this method only once after the RENDER_RESPONSE phase...).
Throughout my GWT app there are many different async calls to the server, using many different services. In order to do better error handling I want to wrap all my callbacks so that I can handle exceptions like InvocationExceptions in one place. A super class implementing AsyncCallback isn't really an option because that would mean that I would have to modify every async call.
RpcServiceProxy#doCreateRequestCallback() looks like the method to override. Simple enough. I just can't see how to make GWT use my new class.
Another way to state the question would be
How do I make GWT use my own subclass of RpcServiceProxy?
In order to wrap every AsyncCallback<T> that is passed to any RemoteService, you need to override RemoteServiceProxy#doCreateRequestCallback(), because every AsyncCallback<T> is handed in there before an RPC call happens.
Here are the steps to do so:
As @ChrisLercher alluded, you need to define your own proxy generator to step in every time a RemoteService proxy gets generated. Start by extending ServiceInterfaceProxyGenerator and overriding #createProxyCreator().
/**
 * This Generator extends the default GWT {@link ServiceInterfaceProxyGenerator} and replaces it in the
 * co.company.MyModule GWT module for all types that are assignable to
 * {@link com.google.gwt.user.client.rpc.RemoteService}. Instead of the default GWT {@link ProxyCreator} it provides the
 * {@link MyProxyCreator}.
 */
public class MyServiceInterfaceProxyGenerator extends ServiceInterfaceProxyGenerator {

    @Override
    protected ProxyCreator createProxyCreator(JClassType remoteService) {
        return new MyProxyCreator(remoteService);
    }
}
In your MyModule.gwt.xml make use of deferred binding to instruct GWT to compile using your Proxy Generator whenever it generates something of the type RemoteService:
<generate-with
class="com.company.ourapp.rebind.rpc.MyServiceInterfaceProxyGenerator">
<when-type-assignable class="com.google.gwt.user.client.rpc.RemoteService"/>
</generate-with>
Extend ProxyCreator and override #getProxySupertype(). Use it in MyServiceInterfaceProxyGenerator#createProxyCreator() so that you can define the base class for all the generated RemoteServiceProxies.
/**
 * This proxy creator extends the default GWT {@link ProxyCreator} and replaces {@link RemoteServiceProxy} as base class
 * of proxies with {@link MyRemoteServiceProxy}.
 */
public class MyProxyCreator extends ProxyCreator {

    public MyProxyCreator(JClassType serviceIntf) {
        super(serviceIntf);
    }

    @Override
    protected Class<? extends RemoteServiceProxy> getProxySupertype() {
        return MyRemoteServiceProxy.class;
    }
}
Make sure both your MyProxyCreator and your MyServiceInterfaceProxyGenerator are located in a package that will not get cross-compiled by GWT into JavaScript. Otherwise you will see an error like this:
[ERROR] Line XX: No source code is available for type com.google.gwt.user.rebind.rpc.ProxyCreator; did you forget to inherit a required module?
You are now ready to extend RemoteServiceProxy and override #doCreateRequestCallback()! Here you can do anything you like and apply it to every callback that goes to your server. Make sure that you add this class, and any other class you use here, in my case AsyncCallbackProxy, to your client package to be cross-compiled.
/**
 * The remote service proxy extends the default GWT {@link RemoteServiceProxy} and proxies the {@link AsyncCallback} with
 * the {@link AsyncCallbackProxy}.
 */
public class MyRemoteServiceProxy extends RemoteServiceProxy {

    public MyRemoteServiceProxy(String moduleBaseURL, String remoteServiceRelativePath, String serializationPolicyName,
                                Serializer serializer) {
        super(moduleBaseURL, remoteServiceRelativePath, serializationPolicyName, serializer);
    }

    @Override
    protected <T> RequestCallback doCreateRequestCallback(RequestCallbackAdapter.ResponseReader responseReader,
                                                          String methodName, RpcStatsContext statsContext,
                                                          AsyncCallback<T> callback) {
        return super.doCreateRequestCallback(responseReader, methodName, statsContext, new AsyncCallbackProxy<T>(callback));
    }
}
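The AsyncCallbackProxy used above is not shown in the answer; a minimal sketch would centralize failure handling and delegate everything else:
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.rpc.InvocationException;

public class AsyncCallbackProxy<T> implements AsyncCallback<T> {

    private final AsyncCallback<T> delegate;

    public AsyncCallbackProxy(AsyncCallback<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public void onFailure(Throwable caught) {
        if (caught instanceof InvocationException) {
            // one central place for connectivity/server errors
        }
        delegate.onFailure(caught);
    }

    @Override
    public void onSuccess(T result) {
        delegate.onSuccess(result);
    }
}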
References:
DevGuideCodingBasicsDeferred.html
An example applied to performance tracking
The type you're looking for is probably RemoteServiceProxy (not RpcServiceProxy), and I assume that you should start with overriding the default binding in /com/google/gwt/user/RemoteService.gwt.xml (just copy the lines to your own .gwt.xml file and adjust):
<generate-with
class="com.google.gwt.user.rebind.rpc.ServiceInterfaceProxyGenerator">
<when-type-assignable class="com.google.gwt.user.client.rpc.RemoteService"/>
</generate-with>
There you'll find protected Class<? extends RemoteServiceProxy> getProxySupertype(), which you can override to return your own RemoteServiceProxy class.
Haven't tried it yet, so this may need a few additional steps...
Normally the way in GWT to handle exceptions happening in async processes is via UncaughtExceptionHandlers.
I would use my own handler to manage those exceptions:
GWT.setUncaughtExceptionHandler(new UncaughtExceptionHandler() {
    public void onUncaughtException(Throwable e) {
        if (e instanceof WhateverException) {
            // handle the exception here
        }
    }
});
Using this, you don't need to subclass anything.
If by "one place" you mean "I want to handle all errors in one method", then I would suggest either catching and throwing stuff until they're in one place OR creating an EventBus that you basically just send every error to. Then you can just have a single handler attached to this bus that can handle everything.
I use Acceleo in order to generate code from a model I have made. I managed to protect my methods using "@generated NOT", in case I need to regenerate my code with Acceleo. The problem is that adding @generated NOT protects all the method content, that is to say the body, the signature and the Javadoc.
The thing is that I only need to keep the method body, or at least the method body and its signature, but I need the doc to be updated. How can I do this?
Just for information, here is an example of a potential generated class:
/*
 * @generated
 */
public class ActeurRefEntrepriseServicesImpl implements ActeurRefEntrepriseServices {

    @Autowired
    HelloWorldService helloWorldService;

    /**
     * Service which says hello
     *
     * @param name
     *            user name
     * @return print Hello username
     *
     * @generated NOT
     */
    @Override
    public void sayHello(final String name) {
        helloWorldService.print(name);
    }
}
Baptiste,
The @generated tags use the standard EMF protection rules: "@generated" means that the body of the block for which it is set will be generated; anything else means no regeneration. If you set something as "@generated" in any of your metamodels' generated code, you will see that there, too, the Javadoc is preserved whatever edits you make.
In short, you cannot tell EMF to regenerate anything other than the code itself.
If you need the body protected but not the Javadoc, you have to shift from the "@generated" protection to Acceleo's [protected] blocks, i.e. change your template from:
[template generatedMethod(methodName : String)]
/**
 * Some doc.
 * @param param1
 *     param documentation.
 * @generated
 */
[generateSignature(methodName)/] {
[generateBody()/]
}
[/template]
to something using a protected block:
[template generatedMethod(methodName : String)]
/**
 * Some doc.
 * @param param1
 *     param documentation.
 */
[protected (methodName)]
[generateSignature(methodName)/] {
[generateBody()/]
}
[/protected]
[/template]
With this paradigm, anything that is outside of the protected area will be regenerated, everything else will remain untouched by a regeneration.
See also the full documentation available from the Acceleo website.
If you absolutely need to use the "@generated" protection method for your model, you will need to tamper with the JMerger API from EMF and alter the launcher Acceleo generated for you in order to use your own merging strategy (see the getGenerationStrategy method from that launcher). Note that this is by no means an easy task.