What is the difference between JPAQuery and JPAQueryFactory?
And when should I use which?
According to the querydsl reference documentation:
Both JPAQuery and HibernateQuery implement the JPQLQuery interface.
For the examples of this chapter the queries are created via a JPAQueryFactory instance. JPAQueryFactory should be the preferred
option to obtain JPAQuery instances.
But I could not understand it clearly.
Can anyone explain it briefly?
What matters is that Hibernate's query language (HQL) is a superset of JPA's query language (JPQL). Hibernate also has a special method for result set transformation and can iterate over scrollable result sets without keeping a reference to all records in memory. In order to take advantage of this extra functionality, the HQLTemplates and the HibernateHandler have to be used. The first is responsible for serializing the additional types of expressions, the second for the integration with Hibernate's Query implementation. The HibernateHandler is actually obtained from the HQLTemplates as well, so all that remains is specifying HQLTemplates.
And in fact: a JPAQuery instantiated with HQLTemplates.INSTANCE for the Templates variable behaves the same as a HibernateQuery. FWIW, if you provide an EntityManager instance when constructing your JPAQuery, the appropriate Templates implementation is deduced for your ORM vendor automatically.
All JPAQueryFactory really is is a factory that binds the EntityManager and Templates variables for newly instantiated JPAQuery instances. This eliminates the need to pass these individually for each instantiation of a JPAQuery.
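For illustration, here is a minimal sketch (assuming QueryDSL 4, a hypothetical Customer entity and its generated query type QCustomer) of the two ways to build the same query:

QCustomer customer = QCustomer.customer;

// Without the factory: the EntityManager (and, if needed, the Templates)
// has to be passed to every new query.
List<Customer> byHand = new JPAQuery<Void>(entityManager)
        .select(customer)
        .from(customer)
        .where(customer.lastName.eq("Smith"))
        .fetch();

// With the factory: the EntityManager is bound once and reused.
JPAQueryFactory queryFactory = new JPAQueryFactory(entityManager);
List<Customer> viaFactory = queryFactory
        .selectFrom(customer)
        .where(customer.lastName.eq("Smith"))
        .fetch();

Both produce the same JPQL; the factory version just saves you from wiring the EntityManager each time.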
There is no need to use the JPAQueryFactory, but it could make your code easier to read. Furthermore, a lot of code examples on the QueryDSL website utilize the query factory, so it might make it easier to use these examples as snippets in your own code.
Related
I have this situation:
Spring Data JPA: Work with Pageable but with a specific set of fields of the entity
It is about working with Spring Data and with a specific set of fields of an @Entity.
The two suggestions are totally valid for me:
DTO projections
Projection interfaces
Moreover, in spring-data-examples both appear together (I know, for sample purposes):
CustomerRepository.java
Thus:
When is it mandatory to use one over the other, and why?
Is there a performance cost of one over the other?
Note that the Class-based Projections (DTOs) section says the following:
Another way of defining projections is by using value type DTOs (Data
Transfer Objects) that hold properties for the fields that are
supposed to be retrieved. These DTO types can be used in exactly the
same way projection interfaces are used, except that no proxying
happens and no nested projections can be applied.
So it seems the differences are that no proxying happens and no nested projections can be applied.
DTO Approach
Pro
Simple and straightforward.
Con
It results in more code, as you have to create a DTO class with a constructor and getters/setters (unless you use Project Lombok to avoid the boilerplate code for DTOs); a minimal sketch follows below.
No nested projections can be applied.
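A rough sketch of the class-based (DTO) variant, with Person and its properties as made-up example names (imports omitted):

// Class-based (DTO) projection: instantiated via the constructor, so no proxying;
// the constructor parameter names must match properties of the entity.
public class PersonSummaryDto {

    private final String firstname;
    private final String lastname;

    public PersonSummaryDto(String firstname, String lastname) {
        this.firstname = firstname;
        this.lastname = lastname;
    }

    public String getFirstname() { return firstname; }
    public String getLastname() { return lastname; }
}

interface PersonRepository extends Repository<Person, Long> {
    Collection<PersonSummaryDto> findByLastname(String lastname);
}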
Projections
Pro
Less code as it uses only interfaces.
Nested projections can be applied.
Dynamic projections allow you to write one generic repository method that returns different subsets of the entity's attributes based on the client's needs (see the interface sketch further below).
Con
Spring generates a proxy at runtime.
The query might still fetch the entire entity from the database into the Spring layer, even though only the trimmed version (via the projection) is returned from the Spring layer to the client. I wasn't sure about this specific disadvantage; I hope someone edits this answer if necessary.
If you need nested or dynamic projections, you probably want the Projection approach rather than the DTO approach.
Refer to the official Spring doc for details.
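For comparison, a minimal sketch of the interface-based variant, including a nested and a dynamic projection (again, Person and its properties are just example names):

// Interface-based projection: Spring Data creates a proxy backed by the query result.
interface PersonSummary {

    String getFirstname();
    String getLastname();

    // Nested projection: the person's address is itself projected.
    AddressSummary getAddress();

    interface AddressSummary {
        String getCity();
    }
}

interface PersonRepository extends Repository<Person, Long> {

    Collection<PersonSummary> findByLastname(String lastname);

    // Dynamic projection: the caller decides per invocation which projection
    // (or the entity itself) should be returned.
    <T> Collection<T> findByFirstname(String firstname, Class<T> type);
}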
I think DTOs were the first possible solution for working with a small set of data from the entities. Today many operations can also be done with projections, but you need to be careful with performance. If you read Janssen's post Entities or DTOs – When should you use which projection?, you will note that DTOs perform better than projections for read operations.
If you don't have a performance problem, projections will be more elegant.
When is the implementation for repositories generated by Spring Data? At compile time or at runtime? Can I see the repository implementation generated by Spring Data?
tl;dr
No, for a very simple reason: there's no code generation going on. The implementation is based on proxies and a method interceptor delegating the call executions to the right places.
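You can observe this at runtime; a small sketch, assuming an injected repository bean named personRepository:

// The injected bean is not a generated class but a JDK dynamic proxy
// assembled by Spring Data when the application context starts.
System.out.println(personRepository.getClass());
// prints something like: class com.sun.proxy.$Proxy87 (or jdk.proxy2.$Proxy87)

System.out.println(Arrays.toString(personRepository.getClass().getInterfaces()));
// contains your repository interface plus Spring infrastructure interfaces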
Details
Effectively, a method execution can be backed by 3 types of code:
The store-specific implementation of CrudRepository. Have a look for types named Simple(Jpa|Mongo|Neo4j|…)Repository (see the JPA-specific one here). They have "real" implementations for all of the methods in CrudRepository and PagingAndSortingRepository.
Query methods are effectively executed by QueryExecutorMethodInterceptor.doInvoke(…) (see here). It's basically a 3-step-process to find the delegation target and invoke it. The actual execution is done in classes named (Jpa|Mongo|Neo4j…)QueryExecution (see this one for example).
Custom implementation code is called directly, also from QueryExecutorMethodInterceptor.
The only thing left is the query derivation, which consists of two major parts: method name parsing and query creation. For the former, have a look at PartTree. It takes a method name and a base type and will return you a parsed AST-like structure or throw an exception if it fails to resolve properties or the like.
The latter is implemented in classes named PartTree(Jpa|Mongo|Neo4j|…)Query and delegates to additional components for actually creating the store specific query. E.g. for JPA the interesting bits are probably in JpaQueryCreator.PredicateBuilder.build() (see here).
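To tie this together, here is a hypothetical repository (entity and property names made up) with comments indicating which of the three mechanisms backs each method at runtime:

interface PersonRepository extends CrudRepository<Person, Long>, PersonRepositoryCustom {

    // 1. save(…), findById(…), delete(…) etc. are inherited from CrudRepository and
    //    backed by the store-specific base class (e.g. SimpleJpaRepository).

    // 2. Query method: QueryExecutorMethodInterceptor routes the call to a
    //    PartTreeJpaQuery; PartTree parses "LastnameAndAgeGreaterThan" into
    //    property expressions and a query is derived from them.
    List<Person> findByLastnameAndAgeGreaterThan(String lastname, int age);
}

// 3. Custom implementation code, picked up via the "Impl" naming convention
//    and invoked directly by the proxy.
interface PersonRepositoryCustom {
    List<Person> findUsingSomeLegacySql();
}

class PersonRepositoryImpl implements PersonRepositoryCustom {
    @Override
    public List<Person> findUsingSomeLegacySql() {
        // hand-written query code goes here
        return List.of();
    }
}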
I have written a wrapper around ADO.NET's DbProviderFactory that I use extensively throughout my applications. I also have written a lot of code that maps IDataReader rows to POCOs. However, as I have tons of classes the whole thing is getting to be a pain in the ass to maintain.
I have been looking at replacing the whole shebang with a micro-ORM like PetaPoco. I have a few queries though:
I have lots of POCOs that contain other POCOs as properties. How well does PetaPoco support this?
Should I use an ORM like Massive or Simple.Data that returns a dynamic object, and map that to a POCO?
Are there any approaches I can take to the whole mapping of rows to POCOs? I can't really use convention-based tools as my database isn't particularly consistent in how it is designed.
How about using a text templating/code generator to build out a lightweight persistence layer? I have a battle-hardened open source project called TextMetal that generates the necessary persistence layer based on tried and true architectural decisions. The only thing lacking is object-to-object relations, but it does support query expressions and works well with poorly designed data schemas.
You can see a real-world project that uses the above tool, called Can Do It For.
Feel free to ask me about any design decisions once you take a look-see.
Simple.Data automagically casts its dynamic type to static types. It will map nested properties as long as they have been eager-loaded using the .With method. So for example
Customer customer = db.Customer.WithOrders().Get(42);
would populate the Orders property of the customer object.
Could you use QueryFirst, or modify it? It takes your SQL and wraps it in vanilla ADO code, generated at design time. You get fresh POCOs from your result schema every time you save your file. Additionally, you can choose to test all queries and regenerate all wrappers via the option in the Tools menu. It depends on SQL Server and SqlClient, so unless you do some modification, you'll lose DbProviderFactory.
I think it's possible to write select queries with either Zend_Db_Select or Zend_Db_Table_Abstract, but I don't understand when to use which of the two.
Is one more optimized for something than the other? Are joins "easier" with one or the other?
Thanks!
There are a few different options in producing queries, for historical reasons.
In early versions of Zend Framework, Zend_Db_Table had a fetchAll method with where, order, offset and limit parameters which you could use to fetch rows from a table. Developers soon found limitations with this approach. How would you add a GROUP BY clause?
Zend_Db_Select was invented to solve this problem, and you'll notice that since ZF 1.5, the fetchAll and related methods accept a Zend_Db_Select instance as the first parameter. Using the other parameters of fetchAll is now deprecated and you should pass in either an SQL string or a Zend_Db_Select object.
A Zend_Db_Select is simply a programmatic interface for building an SQL query. It's great for changing parts of the SQL based on user input or different factors, as instead of manipulating strings, you can just change the method calls and arguments.
Zend_Db_Table will return a Zend_Db_Table_Select (a Zend_Db_Select subclass) instance with its table name predefined if you call its select method - this is about the only difference between Zend_Db_Select and Zend_Db_Table_Select.
Your consideration really is whether to use Zend_Db_Select or to write SQL manually. Zend_Db_Select isn't infinitely flexible but it is easy to read, manipulate and work with.
I am doing more and more exercises with lambdas, but I cannot figure out why some examples use .AsQueryable(), which uses IQueryable, while others omit .AsQueryable() and use IEnumerable.
I have read MSDN, but I do not see how "executing an expression tree" is an advantage over not doing so.
Can anyone explain it to me?
IQueryable implements IEnumerable, so right off the bat, with IQueryable you can do everything that you can do with IEnumerable. IQueryables deal with converting a lambda expression into a query on the underlying data source - this could be a SQL database, or an object set.
Basically, you should usually not have to care one way or the other if it is an IQueryable, or an IEnumerable.
As a user, you generally shouldn't have to care; it's really about the implementor of the data source. If the data provider just implements IEnumerable, you are basically doing LINQ to Objects execution, i.e. in-memory operations on collections. IQueryable gives the data source the capability to translate the expression tree into an alternate representation for execution once the expression is executed (usually by enumeration), which is what Linq2Sql, Linq2Xml and Linq2Entities do.
The only time I can see the end user caring is if they wish to inspect the expression tree that would be executed, since IQueryable exposes Expression. Another use might be inspecting the Provider. But both of those scenarios should really be reserved for implementors of other query providers who want to transform the expression. The point of LINQ is that you shouldn't have to care about how the expression execution is implemented.
I agree with Arne Claassen; there are cases when you need to think about the underlying implementation provided by the data source. For example, check this blog post, which shows how the SQL generated by IEnumerable and IQueryable differs in certain scenarios.