I'm using JasperReports with JavaBeans (I need to print reports in an application that uses Hibernate). I can build bean collections and use them in JasperReports, but I sometimes wonder if there is a way to access bean properties that are not themselves collections. What I mean is that I use JRBeanCollectionDataSource as the source for the various subreports. Suppose I have a list of People and each of them has a Car property. Is there a way to access the Car's properties directly, without treating it as a collection?
You could try pulling the property out of the bean and putting it into a different data source, such as a JRMapCollectionDataSource.
This would mean not having to deal with the whole bean collection each time.
Here is some sample code for constructing such a data source:
Collection<Map<String, ?>> myColl = new ArrayList<Map<String, ?>>();

Map<String, Object> map1 = new HashMap<String, Object>();
map1.put("Field1", "Value1");
map1.put("Field2", "Value2");
map1.put("Field3", someObject);
myColl.add(map1);

// Each map key becomes a field name available in the report
JRMapCollectionDataSource source = new JRMapCollectionDataSource(myColl);
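To complete the picture, here is a hedged sketch of handing that source to a report fill; the report path and the empty parameter map are assumptions:

JasperPrint print = JasperFillManager.fillReport(
        "myReport.jasper",                // compiled report (path assumed)
        new HashMap<String, Object>(),    // report parameters
        source);                          // the map-based data source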
I'm facing an issue while loading a document with a Map<String, Object> field in my model:
protected Map<String, Object> attributes;
While loading data from Couchbase I get a "Basic type must not be null!" error. I am able to store such attributes, but loading them fails. Is there any way to load them from Couchbase?
It's not possible to construct objects of type Object from the data in a document. The second type parameter of the Map must be an entity or a simple type.
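Assuming Spring Data Couchbase is in play here, a minimal sketch of a field type that satisfies that constraint; the entity name is made up:

import java.util.Map;
import org.springframework.data.couchbase.core.mapping.Document;

@Document
public class MyEntity {
    // String is a simple type, so it can be reconstructed from the
    // stored JSON; Object cannot
    protected Map<String, String> attributes;
}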
Is there a way in hibernate-search 6 to project multiple fields and map them directly onto a POJO, or should I handle it myself? I'm not sure I understand the composite method described in the documentation. For example, I can do something like this:
SearchResult<List<?>> result = searchSession.search(indices)
        .select(f -> f.composite(
                f.field("field1"), f.field("field2"),
                f.field("field3"), f.field("field4")))
        .where(SearchPredicateFactory::matchAll)
        .fetch(20);
Then I can manually map the returned list of fields onto a POJO. But is there a fancier way to do that, without manually looping through the list of fields and setting them on the POJO instance?
At the moment projecting to a POJO is only possible for fairly simple POJOs, with up to three fields, using the syntax shown in the documentation. For more fields than that, you have to go through a List.
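For the record, that documented syntax looks roughly like this for two fields; MyPojo, the field names, and the String types are assumptions:

SearchResult<MyPojo> result = searchSession.search(indices)
        .select(f -> f.composite(
                (field1, field2) -> new MyPojo(field1, field2), // up to 3 inputs
                f.field("field1", String.class),
                f.field("field2", String.class)))
        .where(f -> f.matchAll())
        .fetch(20);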
If you're using the Elasticsearch backend, you can theoretically retrieve the document as a JsonObject and then use Gson to map it to a POJO.
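A hedged sketch of that route; indices, MyPojo, and the fetch size are assumptions, while the extension and source projection come from the Elasticsearch backend API:

Gson gson = new Gson();
List<MyPojo> pojos = searchSession.search(indices)
        .extension(ElasticsearchExtension.get())
        .select(f -> f.source())          // raw document as a JsonObject
        .where(f -> f.matchAll())
        .fetchHits(20)
        .stream()
        .map(json -> gson.fromJson(json, MyPojo.class))
        .collect(Collectors.toList());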
There are plans to offer more fancy solutions, but we're not there yet.
The code below gives an error saying that the School class must implement the DBObject interface. The problem is that this interface has tons of methods. I have nearly 100 classes and I don't want to write millions of methods. Is there any easy way to save an object?
DBCollection table = db.getCollection("school");
School document = new School();
table.insert(document); // fails: School does not implement DBObject
Instead of implementing DBObject or extending one of the existing implementations like BasicDBObject, you could give every object that can be saved in the database a method public DBObject toDBObject() which creates and returns a DBObject representation of the object. BasicDBObject is a Map<String, Object> which handles the object data as key/value pairs, so it is a good candidate for this.
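A minimal sketch of that method on the School class from the question; the name and address fields are invented:

import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;

public class School {
    private String name;
    private String address;

    // Build a key/value representation that the MongoDB driver accepts
    public DBObject toDBObject() {
        BasicDBObject doc = new BasicDBObject();
        doc.put("name", name);
        doc.put("address", address);
        return doc;
    }
}

The insert then becomes table.insert(document.toDBObject());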
For a more generic solution, you could use reflection to create a method which can convert any Java object into a DBObject. To have more control over this, you could make up some annotations, add them to your classes and have your conversion method check them.
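A hedged sketch of such a reflection-based converter; it handles only flat fields, so nested objects and the annotations mentioned above would need extra handling:

import java.lang.reflect.Field;
import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;

public final class DBObjectMapper {

    // Copy every declared field of the object into a DBObject by name
    public static DBObject toDBObject(Object obj) throws IllegalAccessException {
        BasicDBObject doc = new BasicDBObject();
        for (Field field : obj.getClass().getDeclaredFields()) {
            field.setAccessible(true);
            doc.put(field.getName(), field.get(obj));
        }
        return doc;
    }
}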
Now you have created your own object-mapping framework for MongoDB. But why reinvent the wheel when others have already done it? Before you do this, check whether an existing mapping framework like Morphia fulfills your use case - it likely does, and it will save you hours of programming and weeks of debugging.
[opinion]
I usually despise object-relational mappers in the context of relational databases because of the impedance-mismatch problem, but for heterogeneous databases like MongoDB they make a lot more sense, because you can store objects which have the same base class but also different class-specific fields in the same collection without any ugly workarounds.
[/opinion]
I've yet to use Morphia, but I'm considering it for a current project.
Suppose I have a POJO with a number of @Reference annotations and I ask Morphia to fetch the object graph from the database. If I then make another DAO or Datastore call and ask Morphia to fetch some object that was already instantiated in the first graph, will Morphia return a reference to the already-instantiated object, or will it create a new instance?
If Morphia returns a new instance of the object each time, does anyone have a recommendation of how to best approach creating a Morphia-backed repository that won't duplicate already-instantiated objects?
As I see it, Morphia will re-read every reference.
This is one of the reasons why I created Morphium. I integrated a caching layer there, so if you read a reference, it won't be read again (at least if you search by ID...).
We use Morphia in production, and there are two ways to avoid reloading the references - something we came across too.
One is to use the lazy loading option when you define the @Reference element in your main class. This of course means that this behavior is 'global' to that object.
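For reference, a hedged sketch of that lazy option; the Order and Item names are made up, and the annotation package may differ between Morphia versions:

import dev.morphia.annotations.Entity;
import dev.morphia.annotations.Reference;

@Entity
public class Order {
    // Morphia will not fetch the referenced Item until this field is accessed
    @Reference(lazy = true)
    private Item item;
}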
The better way is to not define a @Reference through Morphia and instead manage the references yourself. Let me know if you need a code sample.
I've stopped using @Reference too, and instead declare something like:
ObjectId itemId;
rather than having a field item. This has two benefits: (1) it lets me define a getter through a helper getObject(...) method which I have written with object caching, and (2) it stores a simple ObjectId in the Mongo document rather than a full DBRef, which includes the collection name and is thus about twice the data size.
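A hedged sketch of that pattern; Order, Item, and the ObjectCache helper are invented names standing in for the getObject(...) helper mentioned above:

import org.bson.types.ObjectId;

public class Order {
    // Store just the ObjectId instead of a @Reference/DBRef to the Item entity
    private ObjectId itemId;

    // Resolve lazily through a caching helper instead of Morphia's @Reference
    public Item getItem() {
        return ObjectCache.getObject(Item.class, itemId);
    }
}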
I have an entity bean that represents an expected result across multiple databases/datasources; different queries may be executed, but the same kind of result always comes back. So the bean is re-used across different datasources, which should be dynamically selectable.
Is it possible with JPA to select at runtime the data source used to execute a query, and still return the same type of entity bean?
Also, does my EJB/application need to define the datasources that will be used, or can I always specify via JNDI which datasource to use? Modifying the descriptors and re-deploying the application every time a new datasource is created is not an option.
Sorry if the question does not make 100% sense; it's rather difficult to get the idea across.
Is it possible with JPA to select at runtime the data source used to execute a query, and still return the same type of entity bean?
You can't change the datasource of a persistence unit at runtime. However, you can configure several persistence units and use one EntityManagerFactory or another, as sketched below. Maybe JPA is not the right tool for your use case.
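A hedged sketch of the several-persistence-units approach; the unit names, chosenUnit, and MyEntity are assumptions:

// One factory per persistence unit declared in persistence.xml,
// each unit pointing at a different datasource
Map<String, EntityManagerFactory> factories = new HashMap<String, EntityManagerFactory>();
factories.put("dbA", Persistence.createEntityManagerFactory("dbA"));
factories.put("dbB", Persistence.createEntityManagerFactory("dbB"));

// Pick the factory for the datasource chosen at runtime
EntityManager em = factories.get(chosenUnit).createEntityManager();
List<MyEntity> results = em.createQuery("select e from MyEntity e", MyEntity.class)
        .getResultList();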
Modifying the descriptors and re-deploying the application every time a new datasource is created is not an option.
And how will the application be aware of the "available datasources"?
You can change the JPA datasource at runtime, but the approach is tricky (introspection, JPA implementation specific, ...).
I've implemented my own javax.persistence.spi.PersistenceProvider which overrides org.hibernate.ejb.HibernatePersistence and sets the datasource in both the Map and the PersistenceUnitInfo of the PersistenceProvider just before creating the EntityManagerFactory. This way, my EntityManagerFactory has a datasource that was configured at runtime. I keep my EntityManagerFactory until the application is undeployed.
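A hedged sketch of the provider part of that idea; RuntimeDataSourceProvider and the JNDI naming are invented, while hibernate.connection.datasource is Hibernate's property for a JNDI datasource:

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import org.hibernate.ejb.HibernatePersistence;

public class RuntimeDataSourceProvider extends HibernatePersistence {

    @Override
    @SuppressWarnings({ "rawtypes", "unchecked" })
    public EntityManagerFactory createEntityManagerFactory(String unitName, Map properties) {
        Map props = properties != null ? new HashMap(properties) : new HashMap();
        // Point Hibernate at a datasource chosen at runtime (JNDI name assumed)
        props.put("hibernate.connection.datasource", chooseDataSource(unitName));
        return super.createEntityManagerFactory(unitName, props);
    }

    private String chooseDataSource(String unitName) {
        // Hypothetical runtime selection logic
        return "java:/datasources/" + unitName;
    }
}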
You could use the same approach and create N different EntityManagerFactory instances, each with its specific datasource. However, keep in mind that each EntityManagerFactory uses a lot of memory.