ADO.NET Best Practices for Connection and DataAdapter Object Scope

This is my first post on StackOverflow, so please be gentle...
I have some questions regarding object scope for ADO.NET.
When I connect to a database, I typically use code like this:
OleDbConnection conn = new OleDbConnection("my_connection_string");
conn.Open();
OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * from Employees", conn);
OleDbCommandBuilder cb = new OleDbCommandBuilder(adapter);
DataTable dt = new DataTable();
adapter.Fill(dt);
conn.Close();
conn.Dispose();
Let's say that I bind the resulting DataTable to a grid control and allow my users to edit the grid contents. Now, when my users press a Save button, I need to call this code:
adapter.Update(dt);
Here are my questions:
1) Do I need to retain the adapter object that I created when I originally loaded the datatable, or can I create another adapter object in the Save button click event to perform the update?
2) If I do need to retain the original adapter object, do I also need to keep the connection object available and open?
I understand the disconnected model of ADO.NET - I'm just confused on object scope when it's time to update the database. If someone could give me a few pointers on best practices for this scenario, I would greatly appreciate it!
Thanks in advance...

1) You don't need the same DataAdapter, but if you create a new one it must use the same query as its base.
2) The DataAdapter will open its connection if the connection is closed. In that case it will close the connection again after it is done. If the connection is already open it will leave the connection open even after it is done.
Normally you would work as in your example: create a Connection and a DataAdapter, fill a DataTable, and dispose of the Connection and the DataAdapter afterwards.
Two comments to your code:
You don't need the CommandBuilder here since you only do a select. The command builder is only needed if you want to generate Insert, Update or Delete statements automatically. In that case you also need to set the InsertCommand, UpdateCommand or DeleteCommand on the DataAdapter manually from the CommandBuilder.
Second: instead of calling Dispose manually, you should use a using statement. It ensures that your objects will be disposed of even if an exception is thrown.
Try to change your code to this:
DataTable dt = new DataTable();
using (OleDbConnection conn = new OleDbConnection("my_connection_string"))
using (OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * from Employees", conn))
{
    adapter.Fill(dt);
}
Note that I define the DataTable outside the using statements. This is needed to ensure that the table is still in scope when you leave them. Also note that you don't need the Dispose call on the DataAdapter or the Close call on the Connection. Both happen implicitly when you leave the using blocks.
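To tie this back to your Save button: you can build a fresh adapter from the same SELECT and let a CommandBuilder generate the update logic. A minimal sketch, assuming dt is the table your grid is bound to (the handler name is illustrative):
private void SaveButton_Click(object sender, EventArgs e)
{
    using (OleDbConnection conn = new OleDbConnection("my_connection_string"))
    using (OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * from Employees", conn))
    using (OleDbCommandBuilder cb = new OleDbCommandBuilder(adapter))
    {
        // The adapter opens and closes the connection itself, and the
        // registered CommandBuilder supplies the generated UPDATE,
        // INSERT and DELETE commands.
        adapter.Update(dt);
    }
}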
Oh. And welcome to SO :-)

To answer your questions:
Ideally, you should retain the same DataAdapter because it has already performed its initialization. A DataAdapter provides properties such as SelectCommand, UpdateCommand, InsertCommand and DeleteCommand, which allow you to set different Command objects to perform these different functions on the data source. So, you see, the DataAdapter is designed to be reused for multiple commands (for the same database connection). Your use of the CommandBuilder (though not recommended) creates the other Commands by analysing the SelectCommand, thus allowing you to perform Updates, Deletes and Inserts using the same DataAdapter.
It is best to allow the DataAdapter to implicitly handle database connections. Rune Grimstad has already elaborated on this implicit behaviour and it's useful to understand this. Ideally, connections should be closed as soon as possible.
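For illustration, here is a sketch of wiring those command properties explicitly from a CommandBuilder so that one adapter handles all four operations (GetUpdateCommand and its siblings are OleDbCommandBuilder methods; the variable names follow the question):
using (OleDbCommandBuilder cb = new OleDbCommandBuilder(adapter))
{
    // Generate the commands once and hand them to the adapter.
    adapter.UpdateCommand = cb.GetUpdateCommand();
    adapter.InsertCommand = cb.GetInsertCommand();
    adapter.DeleteCommand = cb.GetDeleteCommand();
    adapter.Update(dt); // pushes edits, inserts and deletes back in one call
}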

There are two additional details worth adding to Rune Grimstad's excellent answer.
First, the CommandBuilder (if it is needed) implements IDisposable, and therefore should be wrapped in its own 'using' statement. Surprisingly (at least to me), Disposing the DataAdapter does not appear to Dispose the associated CommandBuilder. The problem I observed when I failed to do this was that even though I called Dispose on the DataAdapter, and the Connection state was 'Closed', I could not remove a temporary database once I had used a CommandBuilder to Update that database.
Second, the statement "... In that case you also need to set the InsertCommand, UpdateCommand or DeleteCommand on the DataAdapter manually ..." is not always correct. For many trivial cases, the CommandBuilder will automatically create the correct INSERT, UPDATE, and DELETE statements based on the SELECT statement provided to the DataAdapter, and meta data from the database. 'Trivial', in this case, means that only a single table is accessed and that table has a Primary Key that is returned as part of the SELECT statement.

Related

Get int value from database

How can I get an int value from the database?
The table has four columns: Id, Author, Like, Dislike.
I want to get the Dislike amount and add 1.
I tried:
var db = new memyContext();
var amountLike = db.Memy.Where(s => s.IdMema == id).select(like);
memy.like=amountLike+1;
I know that this is a bad way.
Please help.
I'm not entirely sure what your question is here, but there's a few things that might help.
First, if you're retrieving via something that reasonably only has one match, or in a scenario where you want just one thing, then you should use SingleOrDefault or FirstOrDefault, respectively - not Where. Where is reserved for scenarios where you expect multiple things to match, i.e. the result will be a list of objects, not a single object. Since you're querying by an id, it's fairly obvious that you expect just one match. Therefore:
var memy = db.Memy.SingleOrDefault(s => s.IdMema == id);
Second, if you just need to read the value of Like, then you can use Select, but there are two problems with that here. First, Select can only be used on enumerables; as already discussed, you need a single object, not a list of objects. In truth, you can sidestep this in a somewhat convoluted way:
var amountLike = db.Memy.Where(s => s.IdMema == id).Select(x => x.Like).SingleOrDefault();
However, this is still flawed, because you not only need to read this value, but also write back to it, which then needs the context of the object it belongs to. As such, your code should actually look like:
var memy = db.Memy.SingleOrDefault(s => s.IdMema == id);
memy.Like++;
In other words, you pull out the instance you want to modify, and then modify the value in place on that instance. I also took the liberty of using the increment operator here, since it makes far more sense that way.
That then only solves part of your problem, as you need to persist this value back to the database as well, of course. That also brings up the side issue of how you're getting your context. Since this is an EF context, it implements IDisposable and should therefore be disposed when you're done with it. That can be achieved simply by calling db.Dispose(), but it's far better to use using instead:
using (var db = new memyContext())
{
    // do stuff with db
}
And while we're here, based on the tags of your question, you're using ASP.NET Core, which means that even this is sub-optimal. ASP.NET Core uses DI (dependency injection) heavily, and encourages you to do likewise. An EF context is generally registered as a scoped service, and should therefore be injected where it's needed. I don't have the context of where this code exists, but for illustration purposes, we'll assume it's in a controller:
public class MemyController : Controller
{
    private readonly memyContext _db;

    public MemyController(memyContext db)
    {
        _db = db;
    }

    ...
}
With that, ASP.NET Core will automatically pass in an instance of your context to the constructor, and you do not need to worry about creating the context or disposing of it. It's all handled for you.
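For completeness, a sketch of that registration in Startup.ConfigureServices, assuming the SQL Server provider (the connection string name is illustrative):
// EF Core's AddDbContext registers the context with a scoped lifetime by default.
services.AddDbContext<memyContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("MemyDb")));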
Finally, you need to do the actual persistence, but that's where things start to get trickier, as you now most likely need to deal with the concept of concurrency. This code could be running simultaneously on multiple different threads, each one querying the database at its current state, incrementing this value, and then attempting to save it back. If you do nothing, one thread will inevitably overwrite the changes of another. For example, let's say we receive three simultaneous "likes" on this object. Each request queries the object from the database, and let's say that the current like count is 0. Each then increments that value, making it 1, and each saves the result back to the database. The end result is that the value will be 1, but that's not correct: three likes were just added.
As such, you'll need to implement a semaphore to essentially gate this logic, allowing only one like operation through at a time for this particular object. That's a bit beyond the scope here, but there's plenty of stuff online about how to achieve that.
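As a starting point, here is a minimal sketch of per-object gating with SemaphoreSlim. The keyed-lock dictionary and the method name are illustrative, not from the original code, and it assumes the injected _db context from the controller above:
// Requires System.Collections.Concurrent and System.Threading.
private static readonly ConcurrentDictionary<int, SemaphoreSlim> _gates =
    new ConcurrentDictionary<int, SemaphoreSlim>();

public async Task AddDislikeAsync(int id)
{
    // One gate per object id, so unrelated objects don't block each other.
    var gate = _gates.GetOrAdd(id, _ => new SemaphoreSlim(1, 1));
    await gate.WaitAsync();
    try
    {
        var memy = _db.Memy.SingleOrDefault(s => s.IdMema == id);
        if (memy != null)
        {
            memy.Dislike++; // the gated read-modify-write
            await _db.SaveChangesAsync();
        }
    }
    finally
    {
        gate.Release();
    }
}
Note that an in-process gate like this only helps within a single server; if the application is scaled out, you'd need optimistic concurrency or an atomic UPDATE in the database instead.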

How To: Transaction Rollback in squeryl

Can anybody please tell me how to handle a transaction rollback in squeryl explicitly?
And also how can we add or remove columns in squeryl dynamically?
Thanx...
Just to elaborate a bit on the response from didierd. There is one Session/Connection bound to each transaction. You can access the current Session, and thereby the Connection, with code like:
Session.currentSession.connection
Or, if you're not sure if you're within a transaction
Session.currentSessionOption map {_.connection}
If you do roll back the transaction this way it will be your responsibility to start a new one or make sure there is no further use of the connection, so use with care.
You have access to JDBC's java.sql.Connection (connection in Session), so if you really cannot use transaction / inTransaction, you can call rollback there.
With access to the connection, you can also execute arbitrary SQL requests and so change the database schema, but be mindful that your squeryl-using code has a static schema known at compile time.

How to inspect every query going to DB from Zend Framework

I have a complex reporting application that allows clients to login and view reports for their client data. There are several sections of the application where there are database calls, using various controllers. I need to make sure that client A doesn't get client B's information via header manipulation.
The system authenticates them, and assigns them a clientID and a roleID. If your roleID > 1, that means you work for the company hosting the data, and you can see all client info. I want to create a catch-all that basically works like this:
if($roleID > 1) {
    ...send query to database
} else {
    if(...does this query select a record with clientID other than my $auth->clientID) {
        do not execute query
    } else {
        execute query
    }
}
The problem is, I want this to run for every query that goes to the server... how can I place this code as a "roadblock" between the application and the DB? I already use Zend_Profiler to look at queries, so I know it is somehow possible, but cannot discern this from the Profiler code...
I can always write an authentication function and pass selected queries that way, but this catch-all would be easier to implement across all of the calls and would be future proof. Any help is appreciated.
This is an application design fault. You should use a service architecture: a single entry point for queries (a service), with all the checks inside it.
If this is something you want to run on every query, I'd suggest extending Zend_Db_Select and overriding either the query() or assemble() functions to add in your logic. You'll also want to add a way for it to be aware of your $auth object.
Another option is to extend your database adapter so you can intercept the queries directly. IMO, you should try and do this at the application level though.
Depending on your database server, you can put a trace on the DB side.
Here's an example for Oracle:
http://orafaq.com/wiki/SQL_Trace

EF 4.1 How to merge a graph returned by web service into an existing context

I have a scenario where I need to merge a large graph (over 2000 entities) returned by a web service into an existing DbContext. So, my first try:
' get new object from web service
Dim newCust = DAL.GetCustomer(ActiveCustomerView.CustomerID)
' attach the object graph into existing context
context.Customers.Attach(newCust)
This creates duplicate key conflicts when I try to attach objects containing entities that already exist in the context.
To avoid this problem I’m clearing the local data and then calling AcceptAllChanges before the attach:
context.Orders.Local.Clear()
context.OrderItems.Local.Clear()
context.Messages.Local.Clear()
context.Contacts.Local.Clear()
context.Parcels.Local.Clear()
context.CustomerTransactions.Local.Clear()
context.ReturnedParcels.Local.Clear()
context.ReturnedOrderItems.Local.Clear()
context.InventoryItems.Local.Clear()
context.Shippments.Local.Clear()
context.Prints.Local.Clear()
DirectCast(context, IObjectContextAdapter).ObjectContext.AcceptAllChanges()
This works but the performance is not acceptable as it takes over 40 sec to clear the data.
Is there any other way to effectively accomplish that?
Can I somehow dispose of the DbSets instead of clearing them?
I can’t destroy and create a new context each time I need to attach the graph, because it contains other objects that would get lost.

Get duplicate key exception when deleting and then re-adding child rows with Entity Framework

I'm using the Entity Framework to model a simple parent-child relationship between a document and its pages. The following code is supposed to (in this order):
make a few property updates to the document
delete any of the document's existing pages
insert a new list of pages passed into the method.
The new pages do have the same keys as the deleted pages because there is an index that consists of the document number and then the page number (1..n).
This code works. However, when I remove the first call to SaveChanges, it fails with:
System.Data.SqlClient.SqlException: Cannot insert duplicate key row in object 
'dbo.DocPages' with unique index 'IX_DocPages'.
Here is the working code with two calls to SaveChanges:
Document doc = _docRepository.GetDocumentByRepositoryDocKey(repository.Repository_ID, repositoryDocKey);
if (doc == null) {
    doc = new Document();
    _docRepository.Add(doc);
}
_fieldSetter.SetDocumentFields(doc, fieldValues);
List<DocPage> pagesToDelete = (from p in doc.DocPages
                               select p).ToList();
foreach (DocPage page in pagesToDelete) {
    _docRepository.DeletePage(page);
}
_docRepository.GetUnitOfWork().SaveChanges(); // IF WE TAKE THIS OUT IT FAILS
int pageNo = 0;
foreach (ConcordanceDatabase.PageFile pageFile in pageList) {
    ++pageNo;
    DocPage newPage = new DocPage();
    newPage.PageNumber = pageNo;
    newPage.ImageRelativePath = pageFile.Filespec;
    doc.DocPages.Add(newPage);
}
_docRepository.GetUnitOfWork().SaveChanges(); // WHY CAN'T THIS BE THE ONLY CALL TO SaveChanges
If I leave the code as written, EF creates two transactions -- one for each call to SaveChanges. The first updates the document and deletes any existing pages. The second transaction inserts the new pages. I examined the SQL trace and that is what I see.
However, if I remove the first call to SaveChanges (because I'd like the whole thing to run in a single transaction), EF mysteriously does not do the deletes at all but rather generates only the inserts, which results in the duplicate key error. I wouldn't have thought that waiting to call SaveChanges should matter here.
Incidentally, the call to _docRepository.DeletePage(page) does a objectContext.DeleteObject(page). Can anyone explain this behavior? Thanks.
I think a more likely explanation is that EF does do the deletes, but probably it does them after the insert, so you end up passing through an invalid state.
Unfortunately you don't have low-level control over the order in which DbCommands are executed in the database.
So you need two SaveChanges() calls.
One option is to create a wrapping TransactionScope.
Then you can call SaveChanges() twice and it all happens inside the same transaction.
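A sketch of that pattern, reusing the repository names from the question (TransactionScope lives in System.Transactions):
using (var scope = new TransactionScope())
{
    // First batch: the document updates and the page deletes.
    _docRepository.GetUnitOfWork().SaveChanges();

    // ... add the new pages to doc.DocPages here ...

    // Second batch: the page inserts.
    _docRepository.GetUnitOfWork().SaveChanges();

    scope.Complete(); // both batches commit together, or neither does
}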
See this post for more information on the related techniques
Hope this helps
Alex
Thank you Alex. This is very interesting. I did indeed decide to wrap the whole thing up in a transaction scope, and it worked fine with the two SaveChanges() calls -- which, as you point out, appear to be needed due to the primary key conflict between the deletes and the subsequent inserts. A new issue now arises based on the article to which you linked. It properly advises calling SaveChanges(false) -- instructing EF to hold onto its changes because the outer transaction scope will actually control whether those changes ever make it to the database. Once the controlling code calls scope.Complete(), the pattern is to then call EF's context.AcceptAllChanges(). But I think this will be problematic for me because I'm forced to call SaveChanges TWICE for the problem originally described. If both of those calls specify false for the accept-changes parameter, then I suspect the second call will end up repeating the SQL from the first. I fear I may be in a Catch-22.