JDBC and ADO.NET: API comparison

What are the analogies between the objects found in JDBC and the ones found in ADO.NET?
I know the object models in JDBC and ADO.NET are not exactly the same, but I think some analogies can be found among them (and key differences worth stating).
That would be useful for those who know one API and want to learn the other, serving perhaps as a starting point, or avoiding misunderstandings caused by assumptions one makes about the API one wants to learn.
E.g.: which is the ADO.NET object that provides the same functionality/behavior as the JDBC ResultSet? The same for PreparedStatement, and so on...

Here is a simple sequence for ADO.NET:
// 1. create a connection
SqlConnection conn = new SqlConnection(xyz);
// 2. open the connection
conn.Open();
// 3. create a command
SqlCommand cmd = new SqlCommand("select * from xyz", conn);
// 4. execute and receive the result in a reader
SqlDataReader rdr = cmd.ExecuteReader();
// 5. get the results
while (rdr.Read())
{
    // do something
}
Here is a simple sequence for JDBC:
// 1. create a connection
Connection con = DriverManager.getConnection(xyz);
// 2. create a statement
Statement stmt = con.createStatement();
// 3. execute and receive results in a result set
ResultSet rs = stmt.executeQuery("SELECT * from xyz");
// 4. get the results
while (rs.next())
{
    // do something
}
And here is the analogy (ADO.NET => JDBC):
SqlConnection => Connection
SqlCommand => Statement
SqlDataReader => ResultSet
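Extending the analogy to PreparedStatement: ADO.NET has no separate prepared-statement class; a parameterized SqlCommand plays that role (with an optional explicit Prepare() call), and CommandType.StoredProcedure covers what CallableStatement does in JDBC. A minimal sketch, reusing conn from above (the table and parameter names are illustrative):
// JDBC: PreparedStatement ps = con.prepareStatement("select * from xyz where id = ?");
// The ADO.NET counterpart is a parameterized SqlCommand:
SqlCommand cmd = new SqlCommand("select * from xyz where id = @id", conn);
cmd.Parameters.AddWithValue("@id", 42); // placeholder value for illustration
cmd.Prepare(); // optional; JDBC prepares via prepareStatement itself
using (SqlDataReader rdr = cmd.ExecuteReader())
{
    while (rdr.Read())
    {
        // do something
    }
}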

Not very thorough with JDBC, but from what I know ADO.NET follows a disconnected architecture, where a connection is established only for the time a query has to be executed or read. Once the reader has been read, the connection can be closed. Data caching is achieved using DataSets and DataAdapters. In ADO.NET only one open reader is allowed per connection by default. While a disconnected architecture is certainly possible in JDBC, it is built on the concept of having a live connection, where you can have multiple readers per connection.
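That one-reader-per-connection limit is the SqlClient default: a second ExecuteReader on the same open connection throws an InvalidOperationException. For SQL Server it can be lifted with MARS; a small sketch, assuming an existing connectionString:
// By default, opening a second DataReader on the same connection throws.
// MARS (Multiple Active Result Sets) lifts that limit for SQL Server.
var csb = new SqlConnectionStringBuilder(connectionString)
{
    MultipleActiveResultSets = true
};
using (var conn = new SqlConnection(csb.ConnectionString))
{
    conn.Open();
    // two commands may now have open readers on this connection at once
}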
Another difference in the API is that there is built-in functionality in JDBC to get the last inserted id, while ADO.NET lacks an equivalent.
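In JDBC that is Statement.getGeneratedKeys(); in ADO.NET the usual workaround (for SQL Server) is to ask for the key yourself in the same batch. A sketch with illustrative table/column names:
// Append SELECT SCOPE_IDENTITY() to the INSERT and read it via ExecuteScalar.
var sql = "insert into xyz (name) values (@name); select scope_identity();";
using (SqlCommand cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@name", "example");
    long newId = Convert.ToInt64(cmd.ExecuteScalar()); // returned as decimal
}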
Also read a nice comparison on data caching in ADO.NET and JDBC.

Related

Vert.x Reactive DB2 Client: How to pass DB2 specific JDBC-Properties?

My intention is to use the specific client reroute feature (see here) of the DB2 JDBC driver. Therefore I have to pass some specific JDBC properties to the Reactive DB2 Client connection configuration.
When I try to pass the properties via:
DB2ConnectOptions connectOptions =
    new DB2ConnectOptions()
        .setPreparedStatementCacheMaxSize(dbConf.getPreparedStatementCacheMaxSize())
        .setCachePreparedStatements(dbConf.isCachePreparedStatements())
        .setPort(6000)
        .setHost("host1")
        .setDatabase(dbConf.getDatenbankName())
        .setUser(dbConf.getBenutzer())
        .setPassword(dbConf.getPasswort()) // won't take effect
        .addProperty("enableClientAffinitiesList", "1") // won't take effect
        .addProperty("clientRerouteAlternateServerName", "host2") // won't take effect
        .addProperty("clientRerouteAlternatePortNumber", "5000"); // won't take effect
// Create the pooled sqlClient
var dbPool =
    DB2Pool.client(
        vertx,
        connectOptions,
        new PoolOptions()
            .setMaxSize(dbConf.getPoolSize())
            .setConnectionTimeout(dbConf.getConnectionTimeout())
            .setPoolCleanerPeriod(dbConf.getPoolCleanerPeriod())
            .setIdleTimeout(dbConf.getIdleTimeout()));
The properties don't get passed through to the JDBC driver.
Does anybody know whether "io.vertx.db2client.DB2ConnectOptions#addProperty" is the right place to pass JDBC-specific properties to the DB2 driver,
or is this functionality not available for the Reactive DB2 Client?
I hope somebody can help, kind regards :-)

Entity Framework Arithabort ON, but still query is slow

I have a simple query
var count = await _context.ExchangeRate.AsNoTracking().CountAsync(u => u.Currency == "GBP");
The table has only 3 columns and 10 rows of data.
When I try to execute the query from a .NET 5 project, it takes around 2.3 seconds the first time and 500ms (+- 100) for subsequent requests. When I run the same query in SSMS it returns almost instantly (45ms as seen in SQL Profiler).
I have implemented ARITHABORT ON in EF as described here.
I can see in SQL Profiler that ARITHABORT ON is being set, but the query still takes the same time for the first and subsequent requests.
How do I achieve the same speed as the SSMS query? I need the query to run really fast, as my project has a requirement to return the response within 1 second (it needs to make at least 5 simple DB calls; if 1 call takes 500ms, that blows the 1-second budget).
Edit
I tried with plain ADO.NET as well. The execution time as seen in SQL Profiler is 40ms, whereas by the time it reaches the code it is almost 400ms. That is a huge difference.
using (var conn = new SqlConnection(connectionString))
{
    var sql = "select count(ExchangeRate) as cnt from ExchangeRate where Currency = 'GBP'";
    SqlCommand cmd = new SqlCommand();
    cmd.CommandText = "SET ARITHABORT ON; " + sql;
    cmd.CommandType = CommandType.Text;
    cmd.Connection = conn;
    conn.Open();
    var t1 = DateTime.Now;
    var rd = cmd.ExecuteReader();
    var t2 = DateTime.Now;
    TimeSpan diff = t2 - t1;
    Console.WriteLine((int)diff.TotalMilliseconds);
    while (rd.Read())
    {
        Console.WriteLine(rd["cnt"].ToString());
    }
    conn.Close();
}
Your "first run" scenario is generally the one-off static initialization of the DbContext. This is where the DbContext works out its mappings for the first time and will occur when the first query is executed. The typical approach to avoid this occurring for a user is to have a simple "warm up" query that runs when the service starts up.. For instance after your service initializes, simply put something like the following:
// Warm up the DbContext
using (var context = new AppDbContext())
{
    var hasUser = context.Users.Any();
}
This also serves as a quick start-up check that the database is reachable and responding. The query itself will do a very quick operation, but the DbContext will resolve its mappings at this time so any newly generated DbContext instances will respond without incurring that cost during a request.
As for raw performance, if it isn't a query that is expected to take a while and tie up a request, don't make it async. Asynchronous requests are not faster, they are actually a bit slower. Using async requests against the DbContext is about ensuring your web server / application thread is responsive while potentially expensive database operations are processing. If you want a response as quickly as possible, use a synchronous call.
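For instance, the synchronous equivalent of the original query (dropping the async/await pair) is simply:
// Synchronous Count: no async machinery for a query this cheap.
var count = _context.ExchangeRate.Count(u => u.Currency == "GBP");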
Next, ensure that any fields you are filtering against, in this case Currency, are indexed. Having a field called Currency in your entity as a String rather than a CurrencyId FK (int) pointing to a Currency record is already an extra indexing expense as indexes on integers are smaller/faster than those on strings.
You also don't need to bother with AsNoTracking when using a Count query. AsNoTracking applies solely when you are returning entities (ToList/ToArray/Single/First, etc.) to avoid having the DbContext hold onto a reference to the returned entity. When you use Count/Any, or a projection that returns properties from entities using Select, there is no entity returned to track.
Also consider network latency between where your application code is running and the database server. Are they on the same machine, or is there a network connection in play? How does this compare to when you are performing the SSMS query? Using a profiler you can see what SQL EF is actually sending to the database. Everything else in terms of time is the cost of getting the request to the DB, getting the resulting data back to the requester, and parsing that response (plus, when returning entities, allocating and populating them and checking them against existing tracked references).
Lastly, to ensure you are getting peak performance, keep your DbContext lifetimes short. If a DbContext is kept open and has had a number of tracking queries run against it (selecting entities without AsNoTracking), those tracked entity references accumulate and can have a negative performance impact on future queries, even ones using AsNoTracking, as EF checks through its tracked references for entities that might be applicable/related to your new queries. Many times I see developers assume DbContexts are "expensive", so they opt to instantiate them as little as possible to avoid those costs, only to end up making operations more expensive over time.
With all that considered, EF will never be as fast as raw SQL. It is an ORM designed to provide convenience to .Net applications when it comes to working with data. That convenience in working with entity classes rather than sanitizing and writing your own raw SQL every time comes with a cost.

ORMLite --- After .commit , .setAutoCommit --- Connection NOT closed

I use ORMLite in a solution made up of a server and clients.
On the server side I use PostgreSQL, on the client side I use SQLite. In code, I use the same ORMLite methods, without taking care of which DB is being managed (PostgreSQL or SQLite). I also use pooled connections.
I don't keep connections open; when I need an SQL query, ORMLite takes care of opening and closing the connection.
Sometimes I use the following code to perform a long background operation on the server side, i.e. on the PostgreSQL DB.
final ConnectionSource OGGETTO_ConnectionSource = ...... ;
final DatabaseConnection OGGETTO_DatabaseConnection =
OGGETTO_ConnectionSource.getReadOnlyConnection( "tablename" ) ;
OGGETTO_DAO.setAutoCommit(OGGETTO_DatabaseConnection, false);
// do long operation with Sql Queries ;
OGGETTO_DAO.commit(OGGETTO_DatabaseConnection);
OGGETTO_DAO.setAutoCommit(OGGETTO_DatabaseConnection, true);
I noticed that the number of open connections kept increasing, and after some time it became so big that the server stopped (SqlException "too many clients connected to DB").
I discovered that it is due to the code snippet above: it seems that after this snippet the connection is not closed and remains open.
Of course I cannot add an "OGGETTO_ConnectionSource.close()" at the end, because that would close the pooled connection source.
If I add "OGGETTO_DatabaseConnection.close();" at the end, it doesn't work; open connections continue to increase.
How to solve it?
I discovered that it is due to the code snippet above: it seems that after this snippet the connection is not closed and remains open.
Let's RTFM. Here are the javadocs for the ConnectionSource.getReadOnlyConnection(...) method. I will quote:
Return a database connection suitable for read-only operations. After you are done,
you should call releaseConnection(DatabaseConnection).
You need to do something like the following code:
DatabaseConnection connection = connectionSource.getReadOnlyConnection("tablename");
try {
    dao.setAutoCommit(connection, false);
    try {
        // do long operation with Sql Queries
        ...
        dao.commit(connection);
    } finally {
        dao.setAutoCommit(connection, true);
    }
} finally {
    connectionSource.releaseConnection(connection);
}
BTW, this is approximately what the TransactionManager.callInTransaction(...) method is doing although it has even more try/finally blocks to ensure that the connection's state is reset. You should probably switch to it. Here are the docs for ORMLite database transactions.

Single connection per transaction vs single connection for all transactions?

Which database connection approach is better? A single connection per transaction, or a single connection for all transactions?
Actually, it depends on your use case. Connection initialization can be a costly process, so in specific cases you may want to avoid creating a new connection for every single query. However, the code below is a common best-practice pattern for DB query execution:
private int Execute()
{
    using (var connection = new SqlConnection(connectionString))
    {
        var sql = "Some query statement";
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            var result = command.ExecuteNonQuery(); // or ExecuteReader
            return result;
        }
    }
}
After execution leaves the scope, the connection and related resources are released thanks to the using statements.
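Note that ADO.NET pools physical connections per connection string by default, which is why opening a connection per operation, as above, is normally cheap. A hedged sketch of tuning that pool (SqlClient-specific keywords; the values are illustrative, not recommendations):
// Pooling is on by default; these keywords adjust the pool bounds.
var csb = new SqlConnectionStringBuilder(connectionString)
{
    Pooling = true,   // the default
    MinPoolSize = 1,  // pre-warm one physical connection
    MaxPoolSize = 100 // the default cap
};
using (var connection = new SqlConnection(csb.ConnectionString))
{
    connection.Open(); // borrows from the pool; Dispose returns it
}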

ADO.NET Best Practices for Connection and DataAdaptor Object Scope

This is my first post on StackOverflow, so please be gentle...
I have some questions regarding object scope for ADO.NET.
When I connect to a database, I typically use code like this:
OleDbConnection conn = new OleDbConnection("my_connection_string");
conn.Open();
OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * from Employees", conn);
OleDbCommandBuilder cb = new OleDbCommandBuilder(adapter);
DataTable dt = new DataTable();
adapter.Fill(dt);
conn.Close();
conn.Dispose();
Let's say that I bind the resulting DataTable to a grid control and allow my users to edit the grid contents. Now, when my users press a Save button, I need to call this code:
adapter.Update(dt);
Here are my questions:
1) Do I need to retain the adapter object that I created when I originally loaded the datatable, or can I create another adapter object in the Save button click event to perform the update?
2) If I do need to retain the original adapter object, do I also need to keep the connection object available and open?
I understand the disconnected model of ADO.NET - I'm just confused on object scope when it's time to update the database. If someone could give me a few pointers on best practices for this scenario, I would greatly appreciate it!
Thanks in advance...
1) You don't need the same DataAdapter, but if you create a new one it must use the same query as its base.
2) The DataAdapter will open its connection if the connection is closed. In that case it will close the connection again after it is done. If the connection is already open it will leave the connection open even after it is done.
Normally you would work as in your example: create a Connection and a DataAdapter, fill a DataTable, and dispose of the Connection and the DataAdapter afterwards.
Two comments on your code:
You don't need the CommandBuilder here since you only do a select. The command builder is only needed if you want to generate Insert, Update or Delete statements automatically. In that case you also need to set the InsertCommand, UpdateCommand or DeleteCommand on the DataAdapter manually from the CommandBuilder.
Second: instead of calling Dispose manually you should use the using statement. It ensures that your objects are disposed of even if an exception is thrown.
Try to change your code to this:
DataTable dt = new DataTable();
using (OleDbConnection conn = new OleDbConnection("my_connection_string"))
using (OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * from Employees", conn))
{
adapter.Fill(dt);
}
Note that I define the DataTable outside the using clauses. This is needed to ensure that the table is in scope when you leave the usings. Also note that you don't need the Dispose call on the DataAdapter or the Close call on the Connection. Both are done implicitly when you leave the usings.
Oh. And welcome to SO :-)
To answer your questions:
Ideally, you should retain the same DataAdapter because it has already performed its initialization. A DataAdapter provides properties such as SelectCommand, UpdateCommand, InsertCommand and DeleteCommand, which allow you to set different Command objects to perform these different functions on the data source. So, you see, the DataAdapter is designed to be reused for multiple commands (for the same database connection). Your use of the CommandBuilder (though not recommended) creates the other Commands by analysing the SelectCommand, thus allowing you to perform Updates, Deletes and Inserts using the same DataAdapter.
It is best to allow the DataAdapter to implicitly handle database connections. @Rune Grimstad has already elaborated on this implicit behaviour and it's useful to understand it. Ideally, connections should be closed as soon as possible.
There are two additional details worth adding to Rune Grimstad's excellent answer.
First, the CommandBuilder (if it is needed) implements IDisposable, and therefore should be wrapped in its own 'using' statement. Surprisingly (at least to me), Disposing the DataAdapter does not appear to Dispose the associated CommandBuilder. The problem I observed when I failed to do this was that even though I called Dispose on the DataAdapter, and the Connection state was 'Closed', I could not remove a temporary database once I had used a CommandBuilder to Update that database.
Second, the statement "... In that case you also need to set the InsertCommand, UpdateCommand or DeleteCommand on the DataAdapter manually ..." is not always correct. For many trivial cases, the CommandBuilder will automatically create the correct INSERT, UPDATE, and DELETE statements based on the SELECT statement provided to the DataAdapter, and meta data from the database. 'Trivial', in this case, means that only a single table is accessed and that table has a Primary Key that is returned as part of the SELECT statement.
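Putting both details together, a sketch of what the Save button handler could look like for the trivial single-table case (same connection string and SELECT as above; dt is the edited DataTable):
using (OleDbConnection conn = new OleDbConnection("my_connection_string"))
using (OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * from Employees", conn))
using (OleDbCommandBuilder cb = new OleDbCommandBuilder(adapter))
{
    // The CommandBuilder derives INSERT/UPDATE/DELETE from the SELECT
    // (single table, primary key present); Update opens and closes the
    // connection implicitly, and all three objects are disposed here.
    adapter.Update(dt);
}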