Generic way to initialize a JPA 2 lazy association - jpa

So, the question at hand is about initializing the lazy collections of an "unknown" entity, as long as these are known at least by name. This is part of a wider effort of mine to build a generic DataTable -> RecordDetails mini-framework in JSF + Primefaces.
The associations are usually lazy, and the only moment I need them loaded is when someone accesses one record of the many in the datatable in order to view/edit it. The issue here is that the controllers are generic, and so I also use just one service class backing the whole lazy loading for the datatable and the loading/saving of the record from the details section.
What I have come up with so far is the following piece of code:
public <T> T loadWithDetails(T record, String... associationsToInitialize) {
    final PersistenceUnitUtil pu = em.getEntityManagerFactory().getPersistenceUnitUtil();
    record = (T) em.find(record.getClass(), pu.getIdentifier(record));
    for (String association : associationsToInitialize) {
        try {
            if (!pu.isLoaded(record, association)) {
                loadAssociation(record, association);
            }
        } catch (..... non significant) {
            e.printStackTrace(); // Nothing else to do
        }
    }
    return record;
}
private <T> void loadAssociation(T record, String associationName) throws IntrospectionException, InvocationTargetException, IllegalAccessException, NoSuchFieldException {
    BeanInfo info = Introspector.getBeanInfo(record.getClass(), Object.class);
    PropertyDescriptor[] props = info.getPropertyDescriptors();
    for (PropertyDescriptor pd : props) {
        if (pd.getName().equals(associationName)) {
            Method getter = pd.getReadMethod();
            ((Collection) getter.invoke(record)).size(); // touch the collection to force initialization
            return; // association found and initialized
        }
    }
    throw new NoSuchFieldException(associationName);
}
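For illustration, a hypothetical call site from the generic details controller; the names Order, orderLines, payments and genericService are made up for the example and are not part of the code above:
// "selected" is the (typically detached) row the user picked in the datatable
Order editable = genericService.loadWithDetails(selected, "orderLines", "payments");
// "editable" is re-fetched by its id and has the two named collections initialized,
// ready for the generic view/edit details section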
And the question is: did anyone start a similar endeavor, or does anyone know of a more pleasant way to initialize collections in a JPA-standard way (not Hibernate/EclipseLink specific) without involving reflection?
Another alternative I could think of is forcing all entities to implement some interface with
Object getId();
void loadAssociations();
but I don't like the idea of forcing my POJOs to implement some interface just for this.

With the reflection solution you would suffer the N+1 select effect detailed here: Solve Hibernate Lazy-Init issue with hibernate.enable_lazy_load_no_trans
You could use the Open Session in View pattern instead; you would still be affected by the N+1 selects, but you would not need reflection. If you use this pattern, the persistence context remains open until the end of the request, and all the LAZY relationships will be loaded without a problem.
For this pattern you will need to write a WebFilter that opens and closes the transaction.
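A minimal sketch of such a filter, assuming resource-local transactions and a Java EE container that injects into filters; RequestEntityManager is a hypothetical ThreadLocal holder class, not a real API:
import java.io.IOException;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;

@WebFilter("/*")
public class OpenEntityManagerInViewFilter implements Filter {

    @PersistenceUnit
    private EntityManagerFactory emf; // injected by the container

    public void init(FilterConfig filterConfig) {
    }

    public void destroy() {
    }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        EntityManager em = emf.createEntityManager(); // one EntityManager per request
        try {
            em.getTransaction().begin();
            RequestEntityManager.set(em); // hypothetical holder the services read the EM from
            chain.doFilter(req, res);     // lazy collections touched while rendering load fine here
            em.getTransaction().commit();
        } finally {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback(); // anything that prevented the commit gets rolled back
            }
            RequestEntityManager.clear();
            em.close();
        }
    }
}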

Related

How to find all managed attached objects in EntityManager (JPA)

Is there a way to get all objects which are currently attached in the entity manager?
I want to write some monitoring code which will report the number of attached objects and their classes.
I mean finding all objects that were loaded into the entity manager by previous queries and find operations.
I'm using EclipseLink, so a specific solution is good too.
EclipseLink's JPA interface pretty much wraps its native code such that an EntityManager uses a UnitOfWork session underneath (and the EMF wraps a ServerSession). You need to get at the UnitOfWork if you want to see what entities it is managing.
If using JPA 2.0, you can use the EntityManager unwrap method:
UnitOfWork uow = em.unwrap(UnitOfWork.class);
otherwise, use a cast:
UnitOfWork uow = ((EntityManagerImpl)em).getUnitOfWork();
From there, the UnitOfWork has a list of all registered (aka managed) entities. You can use the UOW to directly log what it has using the printRegisteredObjects() method, or obtain it yourself using getCloneMapping().keySet().
You can also see deleted objects by using hasDeletedObjects() and then getDeletedObjects().keySet() if there are any, and the same for new objects using hasNewObjectsInParentOriginalToClone() and getNewObjectsCloneToOriginal().keySet().
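For illustration, a short sketch that dumps what is currently attached using the methods just mentioned; getCloneMapping() lives on the UnitOfWorkImpl implementation class, so the sketch unwraps to it the same way the next answer does (EclipseLink internal API, version dependent):
JpaEntityManager jem = em.unwrap(JpaEntityManager.class);
UnitOfWorkImpl uow = jem.unwrap(UnitOfWorkImpl.class);

uow.printRegisteredObjects(); // built-in dump of everything the unit of work manages

for (Object entity : uow.getCloneMapping().keySet()) { // registered (managed) entities
    System.out.println(entity.getClass().getSimpleName() + " -> " + entity);
}
if (uow.hasDeletedObjects()) {
    System.out.println("Pending deletions: " + uow.getDeletedObjects().keySet());
}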
You can use JPA in a lot of ways I am still unaware of, and there is a lot going on under the hood in EclipseLink that I still do not fully understand, but it looks like it is possible to see into the persistence context. USE THIS CODE AT YOUR OWN RISK. It is only meant to give you a hint that it is possible to inspect the context. (Whether the code is right or wrong, I'm posting it because it would have helped me when I was trying to decide whether to use EclipseLink; there doesn't seem to be much documentation about how to do this properly.)
public void saveChanges() {
    Date now = new Date();
    JpaEntityManager jem = em.unwrap(JpaEntityManager.class);
    UnitOfWorkImpl uow = jem.unwrap(UnitOfWorkImpl.class);
    // inserts
    for (Object entity : uow.getNewObjectsCloneToOriginal().keySet()) {
        if (entity instanceof IAuditedEntity) {
            IAuditedEntity auditedEntity = (IAuditedEntity) entity;
            auditedEntity.setAuditedUserId(this.userId);
            auditedEntity.setAuditedAt(now);
            auditedEntity.setCreatedAt(now);
        }
    }
    // updates
    UnitOfWorkChangeSet uowChangeSet = (UnitOfWorkChangeSet) uow.getUnitOfWorkChangeSet();
    if (uowChangeSet != null) {
        List<IAuditedEntity> toUpdate = new ArrayList<>();
        for (Entry<Object, ObjectChangeSet> entry : uowChangeSet.getCloneToObjectChangeSet().entrySet()) {
            if (entry.getValue().hasChanges()) {
                if (entry.getKey() instanceof IAuditedEntity) {
                    toUpdate.add((IAuditedEntity) entry.getKey());
                }
            }
        }
        for (IAuditedEntity auditedEntity : toUpdate) {
            auditedEntity.setAuditedUserId(this.userId);
            auditedEntity.setAuditedAt(now);
        }
    }
    // deletions
    Project jpaProject = uow.getProject();
    boolean anyAuditedDeletions = false;
    for (Object entity : uow.getDeletedObjects().keySet()) {
        if (entity instanceof IAuditedEntity) {
            anyAuditedDeletions = true;
            DeletedEntity deletion = new DeletedEntity();
            deletion.setTableName(jpaProject.getClassDescriptor(entity.getClass()).getTableName());
            deletion.setEntityId(((IAuditedEntity) entity).getId());
            deletion.setAuditedUserId(this.userId);
            em.persist(deletion);
        }
    }
}
You can achieve this by inspecting the entity types on the Metamodel, which can be obtained from any EntityManager.
Example usage:
EntityManager em = // get your EM however...
for (EntityType<?> entityType : em.getMetamodel().getEntities())
{
    Class<?> managedClass = entityType.getBindableJavaType();
    System.out.println("Managing type: " + managedClass.getCanonicalName());
}
This example will print out all of the class types being managed by the EntityManager. To get all of the actual objects being managed, simply query all objects of that type on the EntityManager.
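For illustration, a hedged sketch of that last step; be aware that it executes a SELECT per entity type, so it loads rows from the database rather than merely listing what is already attached (only standard JPA Metamodel and Criteria API calls are used; the method names are illustrative):
static void dumpAllManagedTypes(EntityManager em) {
    for (EntityType<?> entityType : em.getMetamodel().getEntities()) {
        List<?> all = loadAll(em, entityType.getBindableJavaType());
        System.out.println(entityType.getName() + ": " + all.size() + " instance(s) loaded");
    }
}

// helper kept generic so the CriteriaQuery type parameters line up
static <T> List<T> loadAll(EntityManager em, Class<T> type) {
    CriteriaQuery<T> cq = em.getCriteriaBuilder().createQuery(type);
    cq.select(cq.from(type));
    return em.createQuery(cq).getResultList();
}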
Update:
As of JPA 2.0 you can cache results in the second-level cache, which is exposed through javax.persistence.Cache. However, with plain JPA there is no way to actually retrieve the objects stored in the cache; the best you can do is check whether a certain object is in the cache via Cache.contains(Class cls, Object pk):
em.getEntityManagerFactory().getCache().contains(MyData.class, somePK);
However, EclipseLink extends Cache with JpaCache. You can use this to actually get the object from the cache via JpaCache.getObject(Class cls, Object id). This doesn't return a collection or anything, but it's the next best thing.
Unfortunately, if you want to actually access objects in the cache, you will need to manage this yourself.
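For illustration, the JpaCache lookup described above, reusing the answer's MyData and somePK placeholders; the cast assumes the standard Cache returned by EclipseLink implements org.eclipse.persistence.jpa.JpaCache:
JpaCache cache = (JpaCache) em.getEntityManagerFactory().getCache();
MyData cached = (MyData) cache.getObject(MyData.class, somePK); // null when the entity is not in the shared cache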
I don't see such an option in the EntityManager interface. There is only a contains(Object entity) method, but you need to pass the concrete objects, and they are then checked for existence in the persistence context. Also, looking at the PersistenceContext interface, I don't see such an option.

Wicket: how to combine CompoundPropertyModel and LoadableDetachableModel

I want to achieve two goals:
I want my model to be loaded from the DB every time at the start of its life-cycle (so that for every request there will be just one query to the DB)
I want my model to be attached dynamically to the page, so that Wicket does all the property binding for me
In order to achieve these two goals I came to the conclusion that I need to use both CompoundPropertyModel and LoadableDetachableModel.
Does anyone know if this is a good approach?
Should I do new CompoundPropertyModel(myLoadableDetachableModel)?
Yes, you are right, it is possible to use
new CompoundPropertyModel<T>(new LoadableDetachableModel<T>() { ... })
or use the static factory method (it does the same):
CompoundPropertyModel.of(new LoadableDetachableModel<T>() { ... })
which has the features of both the compound model and the lazy detachable model. Detaching also works correctly: when the CompoundPropertyModel is detached, it propagates the detach to the inner model that is used as the model object in this case.
I use it in many cases and it works fine.
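For illustration, a minimal sketch of the combination; Person, personService and personId are hypothetical names, and load() runs at most once per request cycle before the result is dropped again on detach():
IModel<Person> personModel = CompoundPropertyModel.of(new LoadableDetachableModel<Person>() {
    @Override
    protected Person load() {
        return personService.findById(personId); // hit the DB once per request cycle
    }
});

// inside a WebPage or Panel constructor
Form<Person> form = new Form<>("form", personModel);
form.add(new TextField<String>("name"));  // bound via the compound model's property expression
form.add(new TextField<String>("email"));
add(form);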
EXPLANATION:
Look at how the CompoundPropertyModel class is declared (I'm speaking about Wicket 1.6 here):
public class CompoundPropertyModel<T> extends ChainingModel<T>
This means CompoundPropertyModel adds the property-expression behavior on top of ChainingModel.
ChainingModel has the following 'target' field and a constructor that sets it:
private Object target;

public ChainingModel(final Object modelObject)
{
    ...
    target = modelObject;
}
This stores in 'target' a reference to the object or model.
When you call getObject(), it checks the target and proxies the call if the target is an IModel:
public T getObject()
{
    if (target instanceof IModel)
    {
        return ((IModel<T>) target).getObject();
    }
    return (T) target;
}
Similar functionality is implemented for setObject(T): it sets the target directly, or proxies the call if the target is an IModel:
public void setObject(T object)
{
    if (target instanceof IModel)
    {
        ((IModel<T>) target).setObject(object);
    }
    else
    {
        target = object;
    }
}
Detaching works the same way, except that it checks whether the target (model object) is detachable, in other words whether the target implements IDetachable, which any IModel does:
public void detach()
{
    // Detach nested object if it's a detachable
    if (target instanceof IDetachable)
    {
        ((IDetachable) target).detach();
    }
}

Nested DbContext due to method calls - Entity Framework

In the following case where two DbContexts are nested due to method calls:
public void Method_A() {
    using (var db = new SomeDbContext()) {
        //...do some work here
        Method_B();
        //...do some more work here
    }
}

public void Method_B() {
    using (var db = new SomeDbContext()) {
        //...do some work
    }
}
Question:
Will this nesting cause any issues? (and will the correct DbContext be disposed at the correct time?)
Is this nesting considered bad practice? Should Method_A be refactored into:
public void Method_A() {
    using (var db = new SomeDbContext()) {
        //...do some work here
    }
    Method_B();
    using (var db = new SomeDbContext()) {
        //...do some more work here
    }
}
Thanks.
Your DbContext derived class is actually managing at least three things for you here:
the metadata that describes your database and your entity model,
the underlying database connection, and
a client side "cache" of entities loaded using the context, for change tracking, relationship fixup, etc. (Note that although I term this a "cache" for want of a better word, this is generally short lived and is just to support EFs functionality. It's not a substitute for proper caching in your application if applicable.)
Entity Framework generally caches the metadata (item 1) so that it is shared by all context instances (or, at least, all instances that use the same connection string). So here that gives you no cause for concern.
As mentioned in other comments, your code results in using two database connections. This may or may not be a problem for you.
You also end up with two client caches (item 3). If you happen to load an entity from the outer context, then again from the inner context, you will have two copies of it in memory. This would definitely be confusing, and could lead to subtle bugs. This means that, if you don't want to use shared context objects, then your option 2 would probably be better than option 1.
If you are using transactions, there are further considerations. Having multiple database connections is likely to result in transactions being promoted to distributed transactions, which is probably not what you want. Since you didn't make any mention of db transactions, I won't go into this further here.
So, where does this leave you?
If you are using this pattern simply to avoid passing DbContext objects around in your code, then you would probably be better off refactoring MethodB to receive the context as a parameter. The question of how long-lived context objects should be comes up repeatedly. As a rule of thumb, create a new context for a single database operation or for a series of related database operations. (See, for example this blog post and this question.)
(As an alternative, you could add a constructor to your DbContext derived class that receives an existing connection. Then you could share the same connection between multiple contexts.)
One useful pattern is to write your own class that creates a context object and stores it as a private field or property. Then you make your class implement IDisposable and its Dispose() method disposes the context object. Your calling code news up an instance of your class, and doesn't have to worry about contexts or connections at all.
When might you need to have multiple contexts active at the same time?
This can be useful when you need to write code that is multi-threaded. A database connection is not thread-safe, so you must only ever access a connection (and therefore an EF context) from one thread at a time. If that is too restrictive, you need multiple connections (and contexts), one per thread. You might find this interesting.
You can alter your code by passing the context to Method_B. If you do so, creating the second SomeDbContext will not be necessary.
There is a question and answer about this on Stack Overflow:
Proper use of "Using" statement for datacontext
This is a bit of a late answer, but people may still be looking, so here is another way.
Create a class that takes care of disposing for you. In some scenarios the same function is called from different places in the solution; this way you avoid creating multiple instances of DbContext and you can nest calls as much as you like.
Here is a simple example.
public class SomeContext : SomeDbContext
{
    protected int UsingCount = 0;

    public static SomeContext GetContext(SomeContext context)
    {
        if (context != null)
        {
            context.UsingCount++;
        }
        else
        {
            context = new SomeContext();
        }
        return context;
    }

    private SomeContext()
    {
    }

    protected bool MyDisposing = true;

    protected override void Dispose(bool disposing)
    {
        if (UsingCount == 0)
        {
            base.Dispose(MyDisposing);
            MyDisposing = false;
        }
        else
        {
            UsingCount--;
        }
    }

    public override int SaveChanges()
    {
        if (UsingCount == 0)
        {
            return base.SaveChanges();
        }
        else
        {
            return 0;
        }
    }
}
Example of usage
public class ExampleNesting
{
    public void MethodA()
    {
        using (var context = SomeContext.GetContext(null))
        {
            // manipulate, save it, just do not call Dispose on context in using
            MethodB(context);
        }
        MethodB();
    }

    public void MethodB(SomeContext someContext = null)
    {
        using (var context = SomeContext.GetContext(someContext))
        {
            // manipulate, save it, just do not call Dispose on context in using
            // Even more nested functions if you'd like
        }
    }
}
Simple and easy to use.
If you think the number of connections to the database, and the cost of the times that new connections must be opened, is not an important problem, and you have no requirement to run your application at its best performance, everything is OK.
Your code works well, because creating a DbContext has a low performance impact: the metadata is cached after the first load, and a connection to your database is only opened when the code needs to execute a query. With a little performance consideration and code design, I suggest you make a context factory so that you have just one instance of each DbContext per instance of your application.
You can take a look at this link for more performance considerations
http://msdn.microsoft.com/en-us/data/hh949853

How to make JPA EntityListeners validate the existence of an interface

I am working in Java EE 5 using JPA. I have a working solution, but I'm looking to clean up the structure.
I am using EntityListeners on some of the JPA objects I am persisting. The listeners are fairly generic but depend on the beans implementing an interface; this works great, as long as you remember to add the interface.
I have not been able to determine a way to tie the EntityListener and the interface together so that I would get an exception that leads in the right direction, or even better a compile-time error.
@Entity
@EntityListeners({CreateByListener.class})
public class Note implements CreatorInterface {
    private String message;....
    private String creator;
    ....
}
public interface CreatorInterface {
    public void setCreator(String creator);
}
public class CreateByListener {
    @PrePersist
    public void dataPersist(CreatorInterface data) {
        SUser user = LoginModule.getUser();
        data.setCreator(user.getName());
    }
}
This functions exactly the way I want it to, except when a new class is created and it uses the CreateByListener but does not implement the CreatorInterface.
When this happens, a ClassCastException is thrown somewhere deep within the JPA engine, and only if I happen to remember this symptom can I figure out what went wrong.
I have not been able to figure out a way to require the interface, or to test for the presence of the interface, before the listener is fired.
Any ideas would be appreciated.
@PrePersist
public void dataPersist(Object data) {
    if (!(data instanceof CreatorInterface)) {
        throw new IllegalArgumentException("The class "
                + data.getClass()
                + " should implement CreatorInterface");
    }
    CreatorInterface creatorInterface = (CreatorInterface) data;
    SUser user = LoginModule.getUser();
    creatorInterface.setCreator(user.getName());
}
This does basically the same thing as what you're doing, but at least you'll have a more readable error message indicating what's wrong, instead of the ClassCastException.

Limiting EF result set permanently by overriding ObjectQuery ESQL

Does anyone have any idea how to limit the result set of Entity Framework permanently? I'm speaking about something like this: Conditional Mapping. This is exactly what I want to achieve, with one exception: I want to do it programmatically, because the condition value will be passed to EF only on context creation. Besides, I don't want this column to disappear from the mapping.
I know how to achieve this with EF 2.0 and reflection. I was using the CreateQuery() method to generate my own ObjectQuery. CreateQuery() allows me to inject my own ESQL query with an additional condition, e.g. WHERE TABLE.ClientID == value.
The problem with EF 4.0 is that there is no more ObjectQuery, only ObjectSet, and CreateQuery() is not used. I have no idea how to inject my own ESQL query.
The reason why I want to limit result sets is that I want to separate clients' data from each other. This separation should be done automatically inside the context so that programmers will not have to add the condition .Where(x => x.ClientID == 5) to each individual query.
Maybe my approach is completely wrong, but I don't know of any alternative.
You don't need reflection for this. You can simply use a class inherited from ObjectContext, or create a custom implementation of UnitOfWork and Repositories which wraps this functionality in a better way (the upper layer has access only to the UnitOfWork and Repositories, which do not expose the EF context).
A simple example of such an object context:
public class CustomContext : ObjectContext
{
    private ObjectSet<MyObject> _myObjectsSet;
    private int _clientId;

    public CustomContext(string connectionString, int clientId)
        : base(connectionString)
    {
        _myObjectsSet = CreateObjectSet<MyObject>();
        _clientId = clientId;
    }

    public IQueryable<MyObject> MyObjectQuery
    {
        get
        {
            return _myObjectsSet.Where(o => o.ClientId == _clientId);
        }
    }
}