ADO.NET - Performance difference between ExecuteReader and ExecuteScalar

I know the purpose of ExecuteReader and ExecuteScalar, but ExecuteReader can serve the purpose of ExecuteScalar. So why use ExecuteScalar at all? Is there any performance difference between them?
Which is faster?
Thanks.

The difference depends on the IDbCommand implementation; performance is often identical because ExecuteScalar internally executes the same code as ExecuteReader. SqlCommand is a good example: both methods call the internal RunExecuteReader method, so there is no difference in performance.
Many popular IDbCommand implementations work in the same manner as SqlClient (MySqlConnector, Npgsql, Microsoft.Data.Sqlite), but it is possible for an ADO.NET connector to offer better performance for ExecuteScalar.
In short, if you call a concrete class (say, SqlCommand) you can use either ExecuteReader or ExecuteScalar. If you program against the IDbCommand interface (say, in a reusable library) and know nothing about the implementation, using ExecuteScalar may give some performance benefit with a connector that has a specially optimized implementation.
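The "same code path" point above is easy to see in a sketch. The following Java sketch uses hypothetical DataReader/executeScalar stand-ins (the real ADO.NET types are C#'s SqlCommand and SqlDataReader, with richer APIs); it shows why a scalar call is typically just "first column of the first row" layered on a reader, so its cost matches the reader's.

```java
import java.util.List;

public class ScalarSketch {
    // Hypothetical minimal stand-in for a forward-only data reader.
    interface DataReader {
        boolean read();            // advance to the next row; false when exhausted
        Object getValue(int col);  // read a column of the current row
    }

    // ExecuteScalar is typically ExecuteReader plus "first column of the
    // first row, or null" - which is why their cost is usually identical.
    static Object executeScalar(DataReader reader) {
        return reader.read() ? reader.getValue(0) : null;
    }

    // A toy reader backed by an in-memory result set, for demonstration only.
    static DataReader readerOver(List<List<Object>> rows) {
        return new DataReader() {
            private int current = -1;
            public boolean read() { return ++current < rows.size(); }
            public Object getValue(int col) { return rows.get(current).get(col); }
        };
    }

    public static void main(String[] args) {
        DataReader r = readerOver(List.of(
                List.<Object>of(42, "ignored"),
                List.<Object>of(7, "also ignored")));
        System.out.println(executeScalar(r)); // prints 42
    }
}
```

The point of the sketch: a connector could implement the scalar call more cheaply (e.g. a lighter wire protocol request), which is the only case where preferring ExecuteScalar through the interface pays off.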

Related

Difference between JPAQuery and JPAQueryFactory

What is the difference between JPAQuery and JPAQueryFactory?
And when should each be used?
According to the querydsl reference documentation:
Both JPAQuery and HibernateQuery implement the JPQLQuery interface.
For the examples of this chapter the queries are created via a JPAQueryFactory instance. JPAQueryFactory should be the preferred
option to obtain JPAQuery instances.
But I could not understand this clearly.
Can anyone explain it briefly?
What matters is that Hibernate's query language (HQL) is a superset of JPA's query language (JPQL). Hibernate also has a special method for result set transformation and can iterate over scrollable result sets without keeping a reference to all records in memory. In order to take advantage of this extra functionality, the HQLTemplates and the HibernateHandler have to be used. The first is responsible for serializing the additional types of expressions, the second for the integration with Hibernate's Query implementation. The HibernateHandler is actually obtained from the HQLTemplates as well, so all that remains is specifying HQLTemplates.
And in fact, a JPAQuery instantiated with HQLTemplates.INSTANCE for the Templates variable behaves the same as a HibernateQuery. FWIW, if you provide an EntityManager instance when constructing your JPAQuery, the appropriate Templates implementation for your ORM vendor is deduced automatically.
All JPAQueryFactory really is is a factory that binds the EntityManager and Templates variables for newly instantiated JPAQuery instances. This eliminates the need to pass them individually for each instantiation of a JPAQuery.
There is no need to use JPAQueryFactory, but it can make your code easier to read. Furthermore, many code examples on the QueryDSL website use the query factory, so it may be easier to reuse those examples as snippets in your own code.
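The "factory binds the shared variables" idea above is a general pattern and can be sketched without any querydsl dependency. All types below (EntityManager, Query, QueryFactory) are simplified stand-ins, not the real querydsl or JPA classes:

```java
public class FactorySketch {
    // Stand-in for the dependency every query needs.
    static class EntityManager {
        final String name;
        EntityManager(String name) { this.name = name; }
    }

    // Stand-in for a query object that must be constructed with that dependency.
    static class Query {
        final EntityManager em;
        Query(EntityManager em) { this.em = em; }
        String describe() { return "query bound to " + em.name; }
    }

    // Without a factory, every call site repeats "new Query(em)".
    // The factory captures em once and hands out pre-configured queries.
    static class QueryFactory {
        private final EntityManager em;
        QueryFactory(EntityManager em) { this.em = em; }
        Query query() { return new Query(em); }
    }

    public static void main(String[] args) {
        QueryFactory factory = new QueryFactory(new EntityManager("primary"));
        System.out.println(factory.query().describe()); // prints: query bound to primary
        System.out.println(factory.query().describe()); // no need to pass em again
    }
}
```

This is exactly the convenience JPAQueryFactory provides: inject the factory once, and each `factory.query()` (or select/selectFrom in the real API) comes pre-wired with the EntityManager and Templates.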

Why use interfaces

I see the benefit of interfaces: being able to add new implementations via a contract.
What I don't see is how to handle the following problem:
Imagine you have a DB interface with a method "startTransaction".
Everything is fine; you implement it for MySQL and PostgreSQL. But tomorrow you move to MongoDB - then you have no transaction support.
What do you do?
1) An empty method - bad, because you think you have transactions but you haven't.
2) Create your own - then you need parameters that differ from the regular "startTransaction" method.
And on top of that, sometimes simple interfaces just don't work.
Example: you need additional parameters for different implementations.
If you're exposing the concept of transactions on your interface, then you must functionally support transactions no matter what, since users of the interface will logically depend on it. I.e., if a caller can start a transaction, then they expect to also be able to roll back a transaction of several queries. Since Mongo doesn't natively have any concept of rolling back transactions, there are two possibilities:
You implement the possibility of rolling back queries in code, emulating the functionality of transactions for a database which doesn't natively support it. (Whether that's even reliably possible in Mongo is a debatable topic.)
Your interface is working at the wrong level of abstraction. If your interface is promising functionality an implementation can't deliver, then either the interface or the implementation is unrealistic.
In practice, Mongo and SQL databases are such different beasts that you would either never make this kind of change without changing large parts of your business logic around it; or you specify your interface using an extremely minimal common-denominator interface only, most certainly not exposing technology-specific concepts on an abstract interface.
You are mostly correct: interfaces can be very useful, but also problematic in (fast-)changing code. A best practice concerning interfaces is to keep them as small as possible.
When something can handle a transaction, create an interface only for handling a transaction. Split them up into the smallest logically possible parts; that way, when new classes emerge, you can assign them the specific interfaces that determine their methods.
For the multiple-parameter problem, this can indeed be problematic. See whether that specific value could be moved to a constructor, or whether it indicates that the action you are performing is actually slightly different from the action that does not need the parameter.
I hope this helps, good luck.
You are right that interfaces are used to add new implementations via a contract, but those implementations have to possess some similarity.
Let's take an example:
You cannot implement a dog using a human interface merely because both are living organisms.
You are trying to do the same thing here: implement a non-SQL DB using a SQL DB interface.

Explanation of RPC (remote procedure call) and RMI (remote method invocation)

Can someone explain this in a (better/simpler) way?
The remote procedure call (RPC) approach extends the common programming abstraction of the procedure call to distributed environments, allowing a calling process to call a procedure in a remote node as if it is local.
Remote method invocation (RMI) is similar to RPC but for distributed objects, with added benefits in terms of using object-oriented programming concepts in distributed systems, extending the concept of an object reference to the global distributed environments, and allowing the use of object references as parameters in remote invocations.
I just don't understand the way it is explained...
Taking out the remote aspect, which is common to both, the difference is the difference between calling a function in a procedural language and calling a method in an OOP language.
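That distinction can be made concrete with the remoteness stripped away. In the sketch below (illustrative names, no actual networking), the RPC style addresses a free-standing procedure by name and arguments, while the RMI style invokes a method on a specific object reference - a reference that, in a real system, could designate an object living on another node:

```java
public class RpcVsRmi {
    // RPC style: a stateless procedure, addressed by "which procedure, what args".
    static int add(int a, int b) { return a + b; }

    // RMI style: the call targets an object with its own state; in a real
    // system the reference could have been obtained from a remote registry.
    static class Counter {
        private int value;
        int increment() { return ++value; }
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3)); // prints 5

        Counter remoteRef = new Counter(); // imagine this came from a lookup
        remoteRef.increment();
        System.out.println(remoteRef.increment()); // prints 2 - state lives in the object
    }
}
```

The second call illustrates the textbook's "object reference" point: two clients holding the same reference would observe the same counter state, something a plain RPC procedure has no notion of.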

Moving from Class::DBI to DBIx::Class

I'm currently researching DBIx::Class in order to migrate my current application from Class::DBI. Honestly, I'm a bit disappointed with DBIx::Class when it comes to configuring the result classes. With Class::DBI I could set up metadata on models just by calling the appropriate function, without a code generator and so on. My question is: can I do the same thing with DBIx::Class? Also, it seems that client-side triggers are not supported in DBIx::Class - or am I looking at the wrong docs?
Triggers can be implemented by redefining the appropriate method (new/create/update/delete, etc.) in the Result class and calling the parent (via $self->next::method()) within it, either before or after your code. Admittedly it's a bit clumsy compared to the before/after triggers in Class::DBI.
As for metadata - are you talking about temporary columns on an object? i.e. data that won't be stored in the database row. These can be added easily using one of the Class::Accessor::* modules on CPAN
One of the hardest changes to make when switching from CDBI to DBIC is to think in terms of ResultSets. Often what would have been implemented via a class method in CDBI becomes a method on a ResultSet, and code may need to be refactored considerably; it's not always a straightforward conversion from one to the other.

IQueryable<T> vs IEnumerable<T> with Lambda, which to choose?

I'm doing more and more exercises with lambdas, but I cannot figure out why some examples use .AsQueryable() (and therefore IQueryable), while others omit it and use IEnumerable.
I have read the MSDN docs, but I do not see how "executing an expression tree" is an advantage over not doing so.
Can anyone explain it to me?
IQueryable implements IEnumerable, so right off the bat, with IQueryable you can do everything you can do with IEnumerable. IQueryables deal with converting a lambda expression into a query on the underlying data source - this could be a SQL database or an object set.
Basically, you should usually not have to care one way or the other if it is an IQueryable, or an IEnumerable.
As a user, you generally shouldn't have to care; it's really about the implementor of the data source. If the data provider just implements IEnumerable, you are basically doing LINQ to Objects execution, i.e. in-memory operations on collections. IQueryable gives the data source the capability to translate the expression tree into an alternate representation for execution once the expression is executed (usually by enumeration), which is what Linq2Sql, Linq2Xml and Linq2Entities do.
The only time I can see the end user caring is if they wish to inspect the expression tree that would be executed, since IQueryable exposes Expression. Another use might be inspecting the Provider. But both of those scenarios should really be reserved for implementors of other query providers who want to transform the expression. The point of LINQ is that you shouldn't have to care about the implementation of expression execution being used.
I agree with Arne Claassen; there are cases when you need to think about the underlying implementation provided by the data source. For example, check this blog post, which shows how the SQL generated by IEnumerable and IQueryable differs in certain scenarios.
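The core split - "apply the filter to objects already in memory" versus "keep the filter as inspectable data that a provider could translate" - is language-independent and can be sketched in Java (all names below are illustrative; the real IQueryable machinery uses C# expression trees):

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class QueryableSketch {
    // IEnumerable-style: everything is loaded, then filtered in memory.
    static List<Integer> inMemoryFilter(List<Integer> rows, Predicate<Integer> p) {
        return rows.stream().filter(p).collect(Collectors.toList());
    }

    // IQueryable-style: the filter is kept as an inspectable description
    // (here just a string standing in for an expression tree), so a provider
    // could turn it into SQL and let the database do the work.
    record DescribedFilter(String description, Predicate<Integer> fallback) {}

    static String toSql(DescribedFilter f) {
        return "SELECT x FROM t WHERE " + f.description();
    }

    public static void main(String[] args) {
        List<Integer> rows = List.of(1, 2, 3, 4);
        System.out.println(inMemoryFilter(rows, x -> x > 2)); // prints [3, 4]

        DescribedFilter f = new DescribedFilter("x > 2", x -> x > 2);
        System.out.println(toSql(f)); // prints SELECT x FROM t WHERE x > 2
    }
}
```

This is why the blog post referenced above can show different SQL for the two interfaces: once a query is downgraded to the in-memory style, later filters can no longer be pushed into the generated SQL.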