Putting a POJO returned from MyBatis into a Hashtable degrades performance because MyBatis tries to load the POJO in its hashCode method - mybatis

I am using the latest MyBatis release, i.e. MyBatis 3.2.2.
I have converted our old application, which used EJB CMP, to MyBatis, and the newly converted MyBatis code performs much worse than the old EJB CMP code.
My MyBatis configuration settings are as follows:
<settings>
<setting name="cacheEnabled" value="false"/>
<setting name="aggressiveLazyLoading" value="false" />
<setting name="lazyLoadingEnabled" value="true" />
<setting name="jdbcTypeForNull" value="VARCHAR"/>
<setting name="defaultExecutorType" value="REUSE"/>
<setting name="defaultStatementTimeout" value="25000"/>
</settings>
While analyzing the performance issue with the YJP profiler, I realized that Hashtable.put(<mybatis returned Pojo>, <value>) calls were taking most of the time and seemed to be the only bottleneck.
In the Hashtable.put() call we use the POJO returned from MyBatis as the key. That call in turn invokes the POJO's hashCode method, and in YJP I could see that hashCode calls 'org.apache.ibatis.executor.loader.CglibProxyFactory$EnhancedResultObjectProxyImpl.intercept(Object, Method, Object[], MethodProxy)', which appears to go through the JDBC driver and load the POJO's properties; those loads are what take all the time.
Can anyone please help and explain why MyBatis tries to load the POJO in its hashCode method when it is put into a Hashtable? Also, how can we improve the performance, if at all?
I also tried overriding the 'hashCode' and 'equals' methods in my POJO to use/compare only the primary key properties, but that seems to have no effect; it still calls iBATIS's 'executor.loader' and does the same thing.

OK, I have finally found the solution.
The 'lazyLoadTriggerMethods' setting configures the methods that trigger lazy loading.
By default it triggers lazy loading for the 'equals', 'clone', 'hashCode' and 'toString' methods.
I configured this property in SqlMapConfig.xml and removed 'equals', 'hashCode' and 'toString' from it:
<settings>
<!-- both entries below are required to achieve on-demand lazy loading -->
<setting name="aggressiveLazyLoading" value="false" />
<setting name="lazyLoadingEnabled" value="true" />
<setting name="jdbcTypeForNull" value="VARCHAR" />
<setting name="defaultExecutorType" value="REUSE"/>
<setting name="defaultStatementTimeout" value="25000"/>
<setting name="lazyLoadTriggerMethods" value="clone"/>
</settings>
Note: You should only do this when you are sure that your hashCode and equals implementations do not use lazy-loaded properties, or that they read lazy-loaded properties only through their accessor methods.
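For example, here is a minimal sketch of a hashCode/equals pair that stays safe with this setting because it reads only the eagerly loaded primary key (the class and field names are illustrative, not from the original mapping):

public class OrderPojo {
    private Long id; // primary key, loaded eagerly by the mapped statement
    // ... other, possibly lazy-loaded, properties ...

    @Override
    public int hashCode() {
        // Uses only the key; with hashCode removed from lazyLoadTriggerMethods,
        // calling this on a MyBatis proxy no longer forces a database load.
        return id == null ? 0 : id.hashCode();
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof OrderPojo)) return false;
        OrderPojo that = (OrderPojo) other;
        return id != null && id.equals(that.id);
    }
}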
This configuration change dramatically improved performance.
Thanks,
Parag

Related

Instantiate Entity Framework context per request via Microsoft Unity in WebApi 2.0

I have an N-layer solution that works correctly in my dev environment. Apparently it also works in the production environment, but sometimes the execution fails and I do not understand why. I just know that nothing changes in the database, no useful error is visible and no log is written.
My supposition is that it is a concurrency problem. I think that something fails when I try to do more than one select once the Entity Framework context has been initialized.
Here is how my solution is structured.
In the facade I inject the Entity Framework context. Here is the configuration in the web.config of the service interface:
<containers>
<container>
<types>
<register type="it.MC.IContext.IDataContext, IContext"
mapTo="it.MC.EntityFrameworkContext.PublicAreaContext, EntityFrameworkContext">
<lifetime type="singleton" />
</register>
<register type="it.MC.IFacade.IPublicAreaFacade, IFacade"
mapTo="it.MC.Facade.PublicAreaFacade, Facade">
<interceptor type="TransparentProxyInterceptor" />
<lifetime type="singleton" />
<constructor>
<param name="context" type="it.MC.IContext.IDataContext, IContext"/>
</constructor>
</register>
</types>
</container>
</containers>
As you can see, my context and the facade are singletons. I think both are really wrong: both the facade and the Entity Framework context should be instantiated per request. I think this would also solve the concurrency problem.
Can anyone help me to correct my code please?
Thank you
I know that your question is:
Can anyone help me to correct my code please?
I read it like this:
Can anyone help me change this code so that IContext and IFacade will be re-initialized per request?
With that said... yes, I also doubt that you want to keep your IContext as a singleton.
Why you shouldn't use singleton DataContexts in Entity Framework
Here's how you can change the lifetime manager to PerRequestLifetimeManager, if that's what you want. Note that you will probably need the Unity.Mvc NuGet package.
<containers>
<container>
<types>
<register type="it.MC.IContext.IDataContext, IContext"
mapTo="it.MC.EntityFrameworkContext.PublicAreaContext, EntityFrameworkContext">
<lifetime type="Microsoft.Practices.Unity.PerRequestLifetimeManager, Microsoft.Practices.Unity.Mvc" />
</register>
<register type="it.MC.IFacade.IPublicAreaFacade, IFacade"
mapTo="it.MC.Facade.PublicAreaFacade, Facade">
<interceptor type="TransparentProxyInterceptor" />
<lifetime type="Microsoft.Practices.Unity.PerRequestLifetimeManager, Microsoft.Practices.Unity.Mvc" />
<constructor>
<param name="context" type="it.MC.IContext.IDataContext, IContext"/>
</constructor>
</register>
</types>
</container>
</containers>
Before moving to production I suggest you read this post about the PerRequestLifetimeManager.
Its purpose would be to only instantiate one instance per request,
which could (for example) prevent redundant operations and lookups
during the course of a single request.
The danger is if someone assumes that the object created is a good
place to store state during the request. The idea of dependency
injection is that a class receives a dependency (commonly an
interface) and doesn't "know" anything about it at all except that it
implements that interface.
Also, think about the facade you have and how it will work if it's re-initialized on every request. Does it perform any heavy operations at initialization? You might want to think about the lifetime manager for that one as well.
UPDATE
Since you're using Web API, you should be able to use HierarchicalLifetimeManager instead.
http://www.asp.net/web-api/overview/advanced/dependency-injection
The dependency resolver attached to the HttpConfiguration object has
global scope. When Web API creates a controller, it calls BeginScope.
This method returns an IDependencyScope that represents a child scope.
Web API then calls GetService on the child scope to create the
controller. When the request is complete, Web API calls Dispose on the
child scope. Use the Dispose method to dispose of the controller’s
dependencies.
http://www.devtrends.co.uk/blog/introducing-the-unity.webapi-nuget-package
If you are registering any components that implement IDisposable such
as Entity Framework's DbContext, you will want to make sure that these
components get disposed of at the end of the request. This is achieved
by registering these components with a HierarchicalLifetimeManager.
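For illustration, here is a hedged sketch of the earlier context registration switched over to HierarchicalLifetimeManager (the type name assumes Unity 3's core Microsoft.Practices.Unity assembly; verify it against your Unity version):
<register type="it.MC.IContext.IDataContext, IContext"
          mapTo="it.MC.EntityFrameworkContext.PublicAreaContext, EntityFrameworkContext">
  <lifetime type="Microsoft.Practices.Unity.HierarchicalLifetimeManager, Microsoft.Practices.Unity" />
</register>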

Spring Data JPA: Repositories for multiple database / EntityManager configurations

I have two EntityManager bean configurations, each pointing to a separate database with a different schema (one is Oracle, the other an in-memory H2).
What can I do to resolve the ambiguity of which EntityManager should be used for each repository? Right now I'm getting this error:
No unique bean of type [javax.persistence.EntityManagerFactory] is defined:
expected single bean but found 2
I guess I could provide a quick fix simply by using something like
<jpa:repositories base-package="com.foo.repos.ora"
entity-manager-factory-ref="entityManagerFactoryA"/>
<jpa:repositories base-package="com.foo.repos.m2"
entity-manager-factory-ref="entityManagerFactoryB"/>
But hopefully there is a better solution.
EDIT:
Let me give you an idea of the current scenario:
Spring config: there are two EntityManagers
<jpa:repositories base-package="com.foo.repos.ora" entity-manager-factory-ref="entityManagerFactory"/>
<jpa:repositories base-package="com.foo.repos.m2" entity-manager-factory-ref="entityManagerFactory2"/>
<context:component-scan base-package="com.foo" /> ....
Everything from here on is in package 'com.foo.repos.ora'.
Following the pattern for creating a custom repository, I have two interfaces, 'ARepository' and 'ARepositoryCustom', and an implementation 'ARepositoryImpl', like so:
@Repository
public interface ARepository extends ARepositoryCustom, JpaRepository<myEntity, BigDecimal>, QueryDslPredicateExecutor {
}
public interface ARepositoryCustom {
    FooBar lookupFooBar();
}
public class ARepositoryImpl extends QueryDslRepositorySupport implements ARepositoryCustom {
    ARepositoryImpl(Class<?> domainClass) {
        super(domainClass);
    }
    ARepositoryImpl() {
        this(myEntity.class);
    }
    @Override
    public FooBar lookupFooBar() {
        JPQLQuery query = ....
        ....
        return found;
    }
}
resulting in the following error message:
Caused by: org.springframework.beans.factory.BeanCreationException:
Error creating bean with name 'aRepositoryImpl': Injection of
persistence dependencies failed; nested exception is
org.springframework.beans.factory.NoSuchBeanDefinitionException: No
unique bean of type [javax.persistence.EntityManagerFactory] is
defined: expected single bean but found 2
Which is of course correct: there are two EM beans. But since I restricted EM #1, aka 'entityManagerFactory', to the package 'com.foo.repos.ora' only, I'm still not sure how to reference the exact EM bean.
There is no magic under the hood.
<jpa:repositories base-package="com.foo.repos.ora" entity-manager-factory-ref="entityManagerFactory"/>
doesn't help you at all with your custom interface implementations. The best way I found is to treat your custom implementations as regular beans, so I defined a 'sharedEntityManager' bean in my Spring configuration like so:
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
...
</bean>
<bean id="sharedEntityManager" class="org.springframework.orm.jpa.support.SharedEntityManagerBean">
<property name = "entityManagerFactory" ref="entityManagerFactory"/>
</bean>
After that, I simply injected the EntityManager into my implementation beans
<bean id="aRepositoryImpl" class="comm.foo.repos.ora.ARepositoryImpl">
<property name="entityManager" ref="sharedEntityManager"/>
</bean>
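For this property injection to work, the implementation class must expose a matching setter (versions of QueryDslRepositorySupport from that era exposed such a setter; if you wire the class yourself, a minimal sketch under that assumption might look like this):

import javax.persistence.EntityManager;

public class ARepositoryImpl implements ARepositoryCustom {

    private EntityManager entityManager;

    // Invoked by Spring for the <property name="entityManager" ref="sharedEntityManager"/> entry above.
    public void setEntityManager(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    // ... lookupFooBar() would build its queries on this.entityManager ...
}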
The 'entity-manager-factory-ref' attribute discriminates between different EntityManager factories, but only for straight Spring Data repositories (i.e. only for interfaces). It doesn't concern itself with any of your implementations.
To sum it up
1) If you simply rely on standard Spring Data repositories with no custom implementation, use the 'entity-manager-factory-ref' attribute to differentiate between databases.
2a) Additionally, if you use any custom implementation, inject the appropriate EntityManager directly into the implementing class. The wiring is done under the control of your Spring XML configuration. For some reason I wasn't able to use the @Autowired annotation with a @Qualifier to reference the correct EntityManager. EDIT: I just learned about the @Resource annotation:
@Resource(name = "sharedEntityManagerA")
EntityManager entityManager;
<bean id="sharedEntityManagerA" name="sharedEntityManagerA" class="org.springframework.orm.jpa.support.SharedEntityManagerBean">
<property name = "entityManagerFactory" ref="entityManagerFactory"/>
</bean>
With this at hand, selecting which EntityManager should be used becomes straightforward. There is no need to plumb everything together in your context XML.
2b) As an alternative to Spring's XML configuration for hooking up your stuff, you may also go with
@PersistenceContext(unitName = "nameOfPersistenceUnit")
EntityManager entityManager;
to inject the correct EntityManager, where 'nameOfPersistenceUnit' refers to the persistence unit defined in your standard JPA persistence.xml.
However, 2b) doesn't go well with 'QueryDslRepositorySupport', since that expects an EntityManager instance to be set explicitly. But I found that 'QueryDslRepositorySupport' doesn't offer much support anyway, so I removed it.

Does Autofac allow specifying what dependency type should be injected into constructor

Let's say I have an EmailService which implements IEmailService, and EmailService has a constructor dependency on ILoggingService. Now, given that I have several implementations of ILoggingService, can I achieve something like this:
<component service="IEmailService, MyInterfaces" type="EmailService, MyLib">
<parameters>
<parameter name="loggingService" value="LoggingService, MyLib" />
</parameters>
</component>
I have looked at giving names to registered types, but so far I couldn't find an example of how to use them from the XML configuration.
In short, I want to use XML configuration to specify which concrete logger implementation gets injected.
XML configuration in Autofac is targeted more toward the 80% use case rather than being a full implementation of Autofac's flexibility in XML form. Autofac instead recommends using its module mechanism for configuration. Modules, coupled with the XML configuration, can be a very powerful way to achieve what you're looking to accomplish and still have that flexibility to switch between dependencies as needed.
First, create an Autofac module that does the registration you want:
public class EmailModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // Register a named logging service so we can locate
        // this specific one later.
        builder.RegisterType<LoggingService>()
               .Named<ILoggingService>("emailLogger");

        // Create a ResolvedParameter we can use to force resolution
        // of the constructor parameter to the named logger type.
        var loggingServiceParameter = new ResolvedParameter(
            (pi, ctx) => pi.Name == "loggingService",
            (pi, ctx) => ctx.ResolveNamed<ILoggingService>("emailLogger"));

        // Add the ResolvedParameter to the type registration so it
        // knows to use it when resolving.
        builder.RegisterType<EmailService>()
               .As<IEmailService>()
               .WithParameter(loggingServiceParameter);
    }
}
Notice the registration is a little more complex because you're requiring a very specific resolution.
Now in XML configuration, register that module:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<configSections>
<section
name="autofac"
type="Autofac.Configuration.SectionHandler, Autofac.Configuration"/>
</configSections>
<autofac>
<modules>
<module type="EmailModule, MyAssembly" />
</modules>
</autofac>
</configuration>
When you want to switch configurations, register a different module rather than fiddling with specific component registrations.
Code disclaimer: I'm writing the syntax from memory and I'm not a compiler, so you may have to do a little tweaking... but the premise holds. Isolate the complexity in a module, then register your module.

NoSQL with ColdFusion, Bean+Service+DAO & OOP or good old Array/Struct & Procedural?

How do you architect a CF backend model with NoSQL that is simple, flexible, efficient and clean?
Since a NoSQL document has no fixed schema like a SQL row, it doesn't really fit well with objects, which are rather static. Therefore the typical Bean+DAO+Service OOP architecture doesn't seem to fit well.
I'm thinking of using plain old structs, but then I cannot add behavior to them, and it's going to make the whole project very procedural, which may not be a bad thing?
However, if I just use plain old structs, the DB implementation leaks everywhere, including into the view layer...
Or... shall I translate the arrays into CF's Query object for the view layer?
Comment? Idea? Suggestion?
Thanks!
I've written a couple of applications in CF that use NoSQL datastores - one uses the Google App Engine datastore, and another MongoDB.
In both cases, I made CFCs to act as my objects. But I used a homegrown object "framework" that uses onMissingMethod for accessors, and cfproperty with lots of custom metadata to define the properties of the objects.
For instance, this is all I NEED to define for a model, unless it has custom business logic:
<cfcomponent output="false" persistentLayer="GAE" persistentClass="asana" extends="com.bespokelogic.framework.BaseModel">
<cfproperty name="id" type="string" persistentDatatype="string" settable="true" gettable="true" required="true">
<cfproperty name="deckSet" type="string" persistentDatatype="string" settable="true" gettable="true" default="basic">
<cfproperty name="englishName" type="string" persistentDatatype="string" settable="true" gettable="true">
<cfproperty name="traditionalName" type="string" persistentDatatype="string" settable="true" gettable="true">
<cfproperty name="pronunciation" type="string" persistentDatatype="string" settable="true" gettable="true">
<cfproperty name="anatomicalFocus" type="array" persistentDatatype="array" settable="true" gettable="true" default="#arrayNew(1)#">
<cfproperty name="therapeuticFocus" type="array" persistentDatatype="array" settable="true" gettable="true" default="#arrayNew(1)#">
<cfproperty name="benefits" type="string" persistentDatatype="string" settable="true" gettable="true">
<cfproperty name="variations" type="string" persistentDatatype="string" settable="true" gettable="true">
<cfproperty name="contraindications" type="array" persistentDatatype="array" settable="true" gettable="true" default="#arrayNew(1)#">
<cfproperty name="skill" type="string" persistentDatatype="string" settable="true" gettable="true">
<cfproperty name="instructions" type="string" persistentDatatype="string" settable="true" gettable="true">
</cfcomponent>
The CFCs all extend a base model which has validate, serialize, deserialize, and virtual getter/setter methods.
Then, I have a persistence layer that knows how to get and put objects from/into the datastore.
I would then write a service for each of the models which utilize the persistence layer.
The upshot is that the models know how to serialize their property data, and the persistence layer knows how to put that data into the datastore.
So, in a sense, it's not an object-relational mapper but more of an object-document mapper.
The framework is a lot more full-featured in reality, as my design goal was to persist some models in SQL and some in NoSQL, all in the same application - and to be able to swap out the underlying datastore with no recoding of the app. It was a partial success.
In your case, if you're using a single datastore, you can skip all that complicated stuff.
You just need a base object which knows how to serialize and deserialize models, plus your getter/setter stuff. Decide how you want to store property data in the CFC; I used a struct called "variables.instance._properties{}".
Then write a service for your model(s) that has "put" and "fetch" methods. The "put" method, for instance, takes a model, calls the "serialize" method on it to turn it into JSON, then stuffs it into Mongo. The "fetch" method gets the Mongo record, creates a new instance of the CFC, and passes the Mongo record to the deserialize method.
That was pretty rambling...
TL;DR: Objects in CF (such as they are) are not really all that static. Use CFCs. Use onMissingMethod to allow dynamic properties. Store properties in a way that allows you to serialize and deserialize them into a format (usually JSON) that is easily digestible by your datastore. Write a simple persistence layer that gets and puts documents to/from the datastore. Write simple services which implement your persistence layer and take and return your dynamic models.
CF's pretty well suited for NoSQL in my opinion.
I've settled on a Proxy object (with an embedded 'instance' struct). The DAO layer just uses getMemento() & setMemento().
I also used an Iterator object for iterating through an array of results.

Optional job parameters in Spring Batch

Is it possible to make a job parameter optional, in the sense that it evaluates to null if it is not specified, instead of throwing an exception?
What I'm after is something like a
<bean id="fileNamePattern" class="java.lang.String" scope="step">
<constructor-arg value="#{jobParameters[fileNamePattern]}" />
</bean>
that I could pass as a property to another bean that handles the case where fileNamePattern is not specified.
Optional jobParameters do come across as null. The issue you have here is trying to create a java.lang.String with null. You could implement your own bean that knows how to handle null appropriately, of course. However, there is another option.
The most common option is to place the jobParameter directly into a property of the bean that uses fileNamePattern. Of course, this requires that bean to be scoped as "step". Here is a very simple example of what I mean:
<bean id="helloWorld"
class="com.foo.example.HelloWorldTasklet" scope="step">
<property name="someOptionalParameter" value="#{jobParameters[someOptionalParameter]}" />
</bean>
Another option would be to use a factory bean (of your own implementation) instead of java.lang.String.
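For example, here is a minimal sketch of such a factory bean, assuming Spring's standard org.springframework.beans.factory.FactoryBean contract (the class name and the empty-string default are illustrative, not from the original answer):

import org.springframework.beans.factory.FactoryBean;

public class OptionalStringFactoryBean implements FactoryBean<String> {

    private String value; // stays null when the job parameter is absent

    public void setValue(String value) {
        this.value = value;
    }

    @Override
    public String getObject() {
        // Map a missing job parameter to an empty string (or any default you prefer).
        return value != null ? value : "";
    }

    @Override
    public Class<String> getObjectType() {
        return String.class;
    }

    @Override
    public boolean isSingleton() {
        return false;
    }
}

It could then be wired in place of the java.lang.String bean from the question, for example:
<bean id="fileNamePattern" class="com.example.OptionalStringFactoryBean" scope="step">
    <property name="value" value="#{jobParameters['fileNamePattern']}" />
</bean>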