Is there a way to iterate over query results through the JPA API, similar to Hibernate's Criteria.scroll() method? It is a big performance improvement with big result sets, since rows can be processed as they are read.
No, there is no such construct in JPA, not even in JPA 2.1.
JPA 2.2 provides TypedQuery.getResultStream(), but the default implementation does not have the expected effect (it simply calls getResultList). Provider-specific implementations do not always lead to performance improvements either, as can be seen in this article.
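To make the difference concrete, here is a sketch. The JPA 2.2 call site is shown in comments (the `Order` entity and `em` variable are made-up placeholders, and whether it really streams depends on the provider); the runnable part is a plain-Java illustration of the underlying idea: wrap a row "cursor" in a lazily populated Stream so each row is handled as it is read, instead of materializing the whole result list first.

```java
import java.util.Iterator;
import java.util.List;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class StreamingResults {

    // With a JPA 2.2 provider (sketch; "Order" and "em" are hypothetical):
    //
    //   try (Stream<Order> orders = em.createQuery("select o from Order o", Order.class)
    //                                 .getResultStream()) {
    //       orders.forEach(o -> process(o));   // rows handled as they arrive
    //   }
    //
    // The JPA default implementation just delegates to getResultList(), so any
    // real streaming behavior comes from the provider, not the spec.

    // Plain-Java illustration: expose a row cursor as a lazy Stream.
    static Stream<String> lazyRows(Iterator<String> cursor) {
        return StreamSupport.stream(
                Spliterators.spliteratorUnknownSize(cursor, Spliterator.ORDERED),
                false);
    }

    public static void main(String[] args) {
        Iterator<String> cursor = List.of("row1", "row2", "row3").iterator();
        AtomicInteger processed = new AtomicInteger();
        // Each row is pulled from the cursor only when the stream needs it.
        lazyRows(cursor).forEach(row -> processed.incrementAndGet());
        System.out.println(processed.get()); // prints 3
    }
}
```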
I have two fields in an entity:
id
parentId
I want a self-join to fetch hierarchical data: the children of a parent id.
Something like Oracle's hierarchical queries:
At the moment, Hibernate Search does not expose runtime join capabilities.
If your goal is to order results "parents first", I think you may be able to create a getter that builds a string like "rootId.grandParentId.parentId.thisId", and index the result of that getter. Then you can sort on that string. That would clearly be a hack, but it may work.
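That "path string" getter can be sketched in plain Java. The `Node` class and its fields are made up for illustration; in a real entity this would be the self-referencing association, and the getter's result is what you would index with Hibernate Search:

```java
public class PathIndex {

    // Minimal stand-in for an entity with a self-reference (names illustrative).
    static class Node {
        final String id;
        final Node parent;

        Node(String id, Node parent) {
            this.id = id;
            this.parent = parent;
        }

        // The getter to index: ancestor ids root-first, joined with dots.
        // A lexicographic sort on this string yields "parents first" order.
        String sortPath() {
            return parent == null ? id : parent.sortPath() + "." + id;
        }
    }

    public static void main(String[] args) {
        Node root = new Node("root", null);
        Node parent = new Node("p1", root);
        Node child = new Node("c1", parent);
        System.out.println(child.sortPath()); // prints root.p1.c1
    }
}
```

One caveat with this hack: a plain string sort only orders siblings correctly if the ids are fixed-width (or zero-padded), since "10" sorts before "9" lexicographically.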
Alternatively, you may be able to leverage the native join capabilities of Lucene or Elasticsearch within Hibernate Search, but that will require extensive knowledge of Lucene or Elasticsearch.
With Hibernate Search 5, you may be able to implement it for Lucene, but probably not for Elasticsearch. Unfortunately, documentation of the Lucene features is sparse.
With Hibernate Search 6, you may be able to implement it in both cases.
You will need:
native fields (Lucene/ES)
native predicates
obviously a good deal of knowledge of advanced Lucene/Elasticsearch features. For Lucene, documentation is sparse. For Elasticsearch, here is a good place to start: https://www.elastic.co/guide/en/elasticsearch/reference/current/parent-join.html
I am using Couchbase with Spring Data and wish to implement bulkGet of Couchbase. Please let me know the following:
Is it possible via Spring Data?
If yes, can you share an example?
Is findAll (using _all view) comparable to bulkGet in terms of performance?
Can I fetch the _id along with the Couchbase document?
Environment: Couchbase 4.0, Spring Data 2.0.0.RELEASE, Java 8.
Thanks in advance!
I assume you are asking about a bulk get in the context of repositories.
First, there is currently no complete support for a "bulkGet" in Spring Data Couchbase. Most of the implementation is based on the SDK's synchronous API, whereas bulk get is something usually done through the asynchronous API, using RxJava.
Note that there is no actual "bulkGet" operation at the protocol level in Couchbase; it's just the SDK issuing multiple single Gets and batching them together.
To answer your second question, the above is important. The bulk get pattern discussed in the Couchbase Java SDK documentation (here) gives a slight performance boost because, unlike in synchronous mode, we don't wait for the retrieval of one item before requesting the next.
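The SDK-level pattern looks roughly like the commented RxJava sketch below (the `bucket` reference is assumed, and the snippet targets the 2.x SDK). The runnable part mimics the same fan-out/gather idea with plain CompletableFutures: all gets are issued first, and results are only collected afterwards, instead of waiting for each get before issuing the next.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;
import java.util.stream.Collectors;

public class BulkGetPattern {

    // Couchbase Java SDK 2.x / RxJava sketch (not compiled here; "bucket" assumed):
    //
    //   List<JsonDocument> docs = Observable.from(ids)
    //       .flatMap(id -> bucket.async().get(id))   // issue all gets concurrently
    //       .toList()
    //       .toBlocking()
    //       .single();

    // Same idea with plain CompletableFutures: fan out, then gather.
    static List<String> bulkGet(List<String> ids,
                                Function<String, CompletableFuture<String>> fetch) {
        // Issue every get before waiting on any of them.
        List<CompletableFuture<String>> futures =
                ids.stream().map(fetch).collect(Collectors.toList());
        // Gather the results once all requests are in flight.
        return futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Fake async "get"; in reality this would be a key/value lookup.
        Function<String, CompletableFuture<String>> fetch =
                id -> CompletableFuture.supplyAsync(() -> "doc-" + id);
        System.out.println(bulkGet(List.of("a", "b"), fetch)); // prints [doc-a, doc-b]
    }
}
```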
The findAll() and findAll(Iterable) methods in Spring Data Couchbase both operate on top of a view, which allows retrieving only the documents that match the entity type of your repository, but introduces a level of indirection that can lower performance compared to a plain sequence of key/value gets.
So the closest you could get to a bulk operation like that in Spring Data Couchbase would be to know all the IDs you're interested in and then perform a findOne per ID.
In the near term, the code behind the findAll(Iterable) signature could perhaps be improved by applying a bulk get pattern to all provided IDs, but that would mean giving up the type check provided by the view, so I'm not sure...
What's the best approach for doing multi-table aggregates, or non-aggregate multi-table results, in my Spring Data repositories? I don't care about mapping back to entities; I just need a list of objects returned that I can massage into a JSON response.
If you don't care about entities, repositories are not the tool for the job. Repositories are defined to simulate collections of aggregates (which are usually special kinds of entities).
So to answer the question from your headline (which surprisingly seems to be the opposite of what you're asking in the description): just do it. Define your entity types including the relations that form the aggregate, create a repository for them and query it, define query methods, etc.
If you don't care about types (which is perfectly fine, too), have a look at jOOQ which is centered around SQL to efficiently query relational databases, but wrapped into a nice API.
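For the untyped case, a jOOQ query is sketched in the comments below (the generated `AUTHOR`/`BOOK` tables are assumptions for illustration; `fetchMaps()` returns untyped rows). The runnable part shows the "massage into a JSON-friendly list of objects" step that works for raw rows from any query API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AggregateRows {

    // jOOQ sketch (not compiled here; AUTHOR/BOOK are assumed generated tables):
    //
    //   List<Map<String, Object>> rows = DSL.using(connection, SQLDialect.POSTGRES)
    //       .select(AUTHOR.NAME, DSL.count())
    //       .from(AUTHOR)
    //       .join(BOOK).on(BOOK.AUTHOR_ID.eq(AUTHOR.ID))
    //       .groupBy(AUTHOR.NAME)
    //       .fetchMaps();   // untyped rows, ready for JSON serialization
    //
    // Generic version of the same step: zip column names with row values
    // into maps that any JSON library can serialize directly.
    static List<Map<String, Object>> toMaps(List<String> columns, List<Object[]> rows) {
        List<Map<String, Object>> out = new ArrayList<>();
        for (Object[] row : rows) {
            Map<String, Object> m = new LinkedHashMap<>();
            for (int i = 0; i < columns.size(); i++) {
                m.put(columns.get(i), row[i]);
            }
            out.add(m);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> rows = toMaps(
                List.of("name", "bookCount"),
                Collections.singletonList(new Object[] {"Fowler", 12}));
        System.out.println(rows); // prints [{name=Fowler, bookCount=12}]
    }
}
```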
I am trying to understand EclipseLink NoSQL better, but I am having trouble understanding its limitations, what it currently supports, and I simply can't find anything about the team's future plans.
So, in short, I have quite a big list of questions that I would like answered, if you people don't mind:
Does EclipseLink support:
Object Oriented queries
CRUD of entities
Polymorphic entities
Embeddable objects (components)
Basic types
Unidirectional and Bidirectional relationships (if yes, which ones?)
Collections (Set, List, Map, etc)
Full JPA support (I assume it does, but just in case I am wrong)
Denormalization
Complex joins and aggregations
Apart from these questions, are there any other limitations or jewels of the crown that I should know of?
Also, what is the team currently working on? What are the future plans?
I would be super happy if someone here could provide me with links or documentation for the questions I provided above, since I couldn't find anything :S
Thanks in advance, Pedro.
This depends on the NoSQL platform; for MongoDB, a subset of JPQL and the Criteria API is supported (joins to external relationships are not supported).
Yes.
Yes, inheritance is supported.
Yes, Embeddables are supported, as are ElementCollections (these are stored inline in the JSON document).
Yes
All unidirectional relationships are supported; bi-directional (mappedBy) is not. In NoSQL you just need to use two unidirectional relationships instead.
Yes.
Most of JPA. Some features, such as joins and atomic transactions, are not supported if the NoSQL platform does not support them (JPA transactions work, but rollback will not undo any flushed changes if the database does not support transactions).
Yes.
Joins are not supported. Queries on embedded relationships are.
In an effort to complete the content of this discussion, I am now posting what I have found.
Currently (use discussion date as reference) EclipseLink Supports:
Complex hierarchical data (including XML)
Indexed hierarchical data
Mapped hierarchical data (such as JSON)
CRUD operations
Embedded objects and collections
Inheritance
Relationships
Subset of JPQL and the Criteria API, dependent on the NoSQL database's query support
Still needing answer:
EclipseLink limitations
Future plans
Sources:
http://www.eclipse.org/eclipselink/documentation/2.5/concepts/nosql001.htm#BJEIHEJG
http://wiki.eclipse.org/EclipseLink/FAQ/NoSQL
http://www.eclipse.org/eclipselink/documentation/2.5/concepts/app_tl_ext003.htm#CJAECHBD
http://www.eclipse.org/eclipselink/documentation/
What are the options for MongoDB schema migrations/upgrades?
We (my colleagues and I) have a somewhat large (~100 million record) MongoDB collection. This collection is mapped (ORM'd) to a Scala lift-mongodb object that has been through a number of different iterations. We've got all sorts of code in there which handles missing fields, renames, removals, migrations, etc.
As much as the whole "schema-less" thing can be nice and flexible, in this case it's causing a lot of code clutter as our object continues to evolve. Continuing down this "flexible object" path is simply not sustainable.
How have you guys implemented schema migrations/upgrades in MongoDB with Scala? Does a framework for this exist? I know that Foursquare uses Scala with MongoDB and Rogue (their own query DSL)... does anyone know how they handle their migrations?
Thank you.
Perhaps this can help somewhat; this is how Guardian.co.uk handles it:
http://qconlondon.com/dl/qcon-london-2011/slides/MatthewWall_WhyIChoseMongoDBForGuardianCoUk.pdf
Schema upgrades
This can be mitigated by:
Adding a “version” key to each document
Updating the version each time the application modifies a document
Using MapReduce capability to forcibly migrate documents from older versions if required
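The "version key" approach from those slides can be sketched in plain Java on a Map-based document (field names and the v1 → v2 rename are purely illustrative). In practice you would run this against the collection, either lazily when a document is read or as a batch/MapReduce job:

```java
import java.util.HashMap;
import java.util.Map;

public class SchemaMigration {

    static final int CURRENT_VERSION = 2;

    // Upgrade a document in place until it reaches the current schema version.
    static Map<String, Object> migrate(Map<String, Object> doc) {
        // Documents written before versioning was introduced count as version 1.
        int version = (Integer) doc.getOrDefault("version", 1);
        if (version < 2) {
            // v1 -> v2: "name" was renamed to "fullName" (illustrative change).
            doc.put("fullName", doc.remove("name"));
        }
        // Further "if (version < N)" blocks would chain here as the schema evolves.
        doc.put("version", CURRENT_VERSION);
        return doc;
    }

    public static void main(String[] args) {
        Map<String, Object> v1 = new HashMap<>();
        v1.put("name", "Ada Lovelace"); // no "version" key: treated as version 1
        migrate(v1);
        System.out.println(v1.get("fullName") + " / v" + v1.get("version"));
        // prints Ada Lovelace / v2
    }
}
```

The nice property of chaining `if (version < N)` blocks is that a document at any old version passes through every step it is missing, so the migration code stays in one place instead of being scattered through the mapping layer.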
I program migrations of MongoDB data with my own Scala framework, "Subset". It lets you define document fields easily, fine-tune data serialization (e.g. write a "date" in a specific format, and so on), and build queries and update modifiers in terms of the defined fields. This gist gives a good introduction.