ADO.NET - Bad Practice?

I was reading an article on MSDN several months ago and have recently started using the following snippet to execute ADO.NET code, but I get the feeling it could be bad. Am I overreacting, or is it perfectly acceptable?
private void Execute(Action<SqlConnection> action)
{
    SqlConnection conn = null;
    try
    {
        conn = new SqlConnection(ConnectionString);
        conn.Open();
        action.Invoke(conn);
    }
    finally
    {
        if (conn != null && conn.State == ConnectionState.Open)
        {
            try
            {
                conn.Close();
            }
            catch
            {
                // swallow errors on close
            }
        }
    }
}
public SomeThing GetSomethingById()
{
    SomeThing aSomething = null;
    Execute(conn =>
    {
        using (SqlCommand cmd = conn.CreateCommand())
        {
            cmd.CommandText = ....
            ...
            SqlDataReader reader = cmd.ExecuteReader();
            ...
            aSomething = new SomeThing(Convert.ToString(reader["aDbField"]));
        }
    });
    return aSomething;
}

What is the point of doing that when you can do this?
public SomeThing GetSomethingById(int id)
{
    using (var con = new SqlConnection(ConnectionString))
    {
        con.Open();
        using (var cmd = con.CreateCommand())
        {
            // prepare command
            using (var rdr = cmd.ExecuteReader())
            {
                // read fields
                return new SomeThing(data);
            }
        }
    }
}
You can promote code reuse by doing something like this.
public static void ExecuteToReader(string connectionString, string commandText, IEnumerable<KeyValuePair<string, object>> parameters, Action<IDataReader> action)
{
    using (var con = new SqlConnection(connectionString))
    {
        con.Open();
        using (var cmd = con.CreateCommand())
        {
            cmd.CommandText = commandText;
            foreach (var pair in parameters)
            {
                var parameter = cmd.CreateParameter();
                parameter.ParameterName = pair.Key;
                parameter.Value = pair.Value;
                cmd.Parameters.Add(parameter);
            }
            using (var rdr = cmd.ExecuteReader())
            {
                action(rdr);
            }
        }
    }
}
You could use it like this:
// At the top of the file, create an alias (alias targets must be fully qualified)
using DbParams = System.Collections.Generic.Dictionary<string, object>;

ExecuteToReader(
    connectionString,
    commandText,
    new DbParams { { "key1", 1 }, { "key2", 2 } },
    reader =>
    {
        // ...
        // No need to dispose
    });

IMHO it is indeed a bad practice, since you're creating and opening a new database connection for every statement that you execute.
Why is it bad:
performance-wise (although connection pooling helps decrease the hit): you should open your connection, execute the statements that have to be executed, and close the connection only when you don't know when the next statement will be executed.
but certainly context-wise: how will you handle transactions? Where are your transaction boundaries? Your application layer knows when a transaction has to be started and committed, but with this way of working you're unable to span multiple statements in the same SQL transaction.
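To make the contrast concrete, here is a minimal sketch (not from the original snippet; the ExecuteInTransaction name and the SqlTransaction plumbing are illustrative) of what the wrapper would need to look like for the caller to control transaction boundaries:
private void ExecuteInTransaction(Action<SqlConnection, SqlTransaction> action)
{
    using (var conn = new SqlConnection(ConnectionString))
    {
        conn.Open();
        // one transaction spanning every statement the callback issues
        using (var tx = conn.BeginTransaction())
        {
            try
            {
                action(conn, tx); // each command must set cmd.Transaction = tx
                tx.Commit();
            }
            catch
            {
                tx.Rollback();
                throw;
            }
        }
    }
}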

This is a very reasonable approach to use.
By wrapping your connection logic in a method which takes an Action<SqlConnection>, you help prevent duplicated code and the errors that come with it. Since we can now use lambdas, this becomes an easy, safe way to handle the situation.
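If you would rather return the result than capture it in a closure, the same idea extends to a Func-based overload; this Execute<T> is a sketch building on the question's code, not part of the original (the SQL text and @id parameter are hypothetical):
private T Execute<T>(Func<SqlConnection, T> func)
{
    using (var conn = new SqlConnection(ConnectionString))
    {
        conn.Open();
        return func(conn); // the result flows out instead of being captured
    }
}

// usage
public SomeThing GetSomethingById(int id)
{
    return Execute(conn =>
    {
        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandText = "select aDbField from SomeThings where Id = @id";
            cmd.Parameters.AddWithValue("@id", id);
            using (var reader = cmd.ExecuteReader())
                return reader.Read() ? new SomeThing(Convert.ToString(reader["aDbField"])) : null;
        }
    });
}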

That's acceptable. I created a SqlUtilities class a couple of years ago that had a similar method. You can take it one step further if you like.
EDIT: I couldn't find the code, but I typed up a small example (probably with a few syntax errors ;))
SqlUtilities
public delegate T CreateMethod<T>(SqlDataReader reader);

public static T CreateEntity<T>(string query, CreateMethod<T> createMethod, params SqlParameter[] parameters)
{
    // Open the Sql connection, create a command for the query/sp with its parameters
    using (var conn = new SqlConnection(ConnectionString))
    using (var cmd = new SqlCommand(query, conn) { CommandType = CommandType.StoredProcedure })
    {
        cmd.Parameters.AddRange(parameters);
        conn.Open();
        // the using-closures stand in for the finally statements
        using (SqlDataReader reader = cmd.ExecuteReader())
            return reader.Read() ? createMethod(reader) : default(T);
    }
}
Calling code
private SomeThing Create(SqlDataReader reader)
{
    SomeThing something = new SomeThing();
    something.ID = Convert.ToInt32(reader["ID"]);
    ...
    return something;
}

public SomeThing GetSomeThingByID(int id)
{
    return SqlUtilities.CreateEntity<SomeThing>("something_getbyid", Create, ....);
}
Of course you could use a lambda expression instead of the Create method, and you could easily make a CreateCollection method that reuses the existing Create method (sketched below).
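A CreateCollection along those lines might look like this; it is a sketch under the same assumptions as the CreateEntity above, not recovered code:
public static List<T> CreateCollection<T>(string query, CreateMethod<T> createMethod, params SqlParameter[] parameters)
{
    using (var conn = new SqlConnection(ConnectionString))
    using (var cmd = new SqlCommand(query, conn) { CommandType = CommandType.StoredProcedure })
    {
        cmd.Parameters.AddRange(parameters);
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            var items = new List<T>();
            while (reader.Read())
                items.Add(createMethod(reader)); // reuse the per-row factory
            return items;
        }
    }
}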
However, if this is a new project, check out LINQ to Entities. It is far easier and more flexible than ADO.NET.
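For comparison, the same lookup in LINQ to Entities collapses to a query against the generated context (MyEntities and the SomeThings set are hypothetical names):
using (var context = new MyEntities()) // hypothetical generated context
{
    SomeThing something = context.SomeThings.FirstOrDefault(s => s.ID == id);
}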

Well, in my opinion, check what a pattern actually buys you before going with it. Something that works doesn't mean it is good programming practice; find a concrete example of the benefit of using it first. And if you are considering this for big projects, it would be worth using a framework like NHibernate, because a lot of projects and even frameworks have been built on it, like http://www.cuyahoga-project.org/.

Related

Cannot attach database file when using Entity Framework Core Migration commands

I am using Entity Framework Core commands to migrate the database. The command I am using is the one the docs suggest: dnx . ef migration apply. The problem is that when specifying AttachDbFileName in the connection string, the following error appears: Unable to Attach database file as database xxxxxxx. This is the connection string I am using:
Data Source=(LocalDB)\mssqllocaldb;Integrated Security=True;Initial Catalog=EfGetStarted2;AttachDbFileName=D:\EfGetStarted2.mdf
Please help: how do I attach the db file at another location?
Thanks
EF Core seems to have trouble with AttachDbFileName, or doesn't handle it at all.
EnsureDeleted changes the database name to master but keeps any AttachDbFileName value, which leads to an error, since we cannot attach the master database to another file.
EnsureCreated opens a connection using the provided AttachDbFileName value, which leads to an error, since the file of the database we want to create does not yet exist.
EF6 has some logic to handle these use cases (see SqlProviderServices.DbCreateDatabase), so there everything worked quite fine.
As a workaround I wrote some hacky code to handle these scenarios:
public static void EnsureDatabase(this DbContext context, bool reset = false)
{
    if (context == null)
        throw new ArgumentNullException(nameof(context));

    if (reset)
    {
        try
        {
            context.Database.EnsureDeleted();
        }
        catch (SqlException ex) when (ex.Number == 1801)
        {
            // HACK: EF doesn't interpret error 1801 as "database already exists"
            ExecuteStatement(context, BuildDropStatement);
        }
        catch (SqlException ex) when (ex.Number == 1832)
        {
            // nothing to do here (see below)
        }
    }

    try
    {
        context.Database.EnsureCreated();
    }
    catch (SqlException ex) when (ex.Number == 1832)
    {
        // HACK: EF doesn't interpret error 1832 as "database does not exist"
        ExecuteStatement(context, BuildCreateStatement);
        // this takes some time (?)
        WaitDatabaseCreated(context);
        // re-run EnsureCreated for tables and stuff
        context.Database.EnsureCreated();
    }
}

private static void WaitDatabaseCreated(DbContext context)
{
    var timeout = DateTime.UtcNow + TimeSpan.FromMinutes(1);
    while (true)
    {
        try
        {
            context.Database.OpenConnection();
            context.Database.CloseConnection();
        }
        catch (SqlException)
        {
            if (DateTime.UtcNow > timeout)
                throw;
            continue;
        }
        break;
    }
}

private static void ExecuteStatement(DbContext context, Func<SqlConnectionStringBuilder, string> statement)
{
    var builder = new SqlConnectionStringBuilder(context.Database.GetDbConnection().ConnectionString);
    using (var connection = new SqlConnection($"Data Source={builder.DataSource}"))
    {
        connection.Open();
        using (var command = connection.CreateCommand())
        {
            command.CommandText = statement(builder);
            command.ExecuteNonQuery();
        }
    }
}

private static string BuildDropStatement(SqlConnectionStringBuilder builder)
{
    var database = builder.InitialCatalog;
    return $"drop database [{database}]";
}

private static string BuildCreateStatement(SqlConnectionStringBuilder builder)
{
    var database = builder.InitialCatalog;
    var datafile = builder.AttachDBFilename;
    var dataname = Path.GetFileNameWithoutExtension(datafile);
    var logfile = Path.ChangeExtension(datafile, ".ldf");
    var logname = dataname + "_log";
    return $"create database [{database}] on primary (name = '{dataname}', filename = '{datafile}') log on (name = '{logname}', filename = '{logfile}')";
}
It's far from nice, but I'm using it for integration testing anyway. For "real world" scenarios using EF migrations should be the way to go, but maybe the root cause of this issue is the same...
Update
The next version will include support for AttachDBFilename.
There may be a different *.mdf file already attached to a database named EfGetStarted2... Try dropping/detaching that database then try again.
You might also be running into problems if the user that LocalDB runs as doesn't have the correct permissions to the path.

TransactionScope with two datacontexts doesn't roll back

I am trying to solve a situation with rolling back our data contexts.
We are using one TransactionScope and, inside it, two data contexts for two different databases.
At the end we want to save changes on both databases, so we call SaveChanges on each, but the problem is that when an error occurs on the second database, the changes on the first database are still saved.
What am I doing wrong here that makes the first database not roll back?
Thank you,
Jakub
public void DoWork()
{
    using (var scope = new TransactionScope())
    {
        using (var rawData = new IntranetRawDataDevEntities())
        {
            rawData.Configuration.AutoDetectChangesEnabled = true;
            using (var dataWareHouse = new IntranetDataWareHouseDevEntities())
            {
                dataWareHouse.Configuration.AutoDetectChangesEnabled = true;

                // ... some operations with the data - no SaveChanges() is being called.

                // Save changes for all items.
                if (!errors)
                {
                    // First database save.
                    rawData.SaveChanges();

                    // Fake data to fail the second database save.
                    dataWareHouse.Tasks.Add(new PLKPIDashboards.DataWareHouse.Task()
                    {
                        Description = string.Empty,
                        Id = 0,
                        OperationsQueue = new OperationsQueue(),
                        Queue_key = 79,
                        TaskTypeSLAs = new Collection<TaskTypeSLA>(),
                        Tasktype = null
                    });

                    // Second database save.
                    dataWareHouse.SaveChanges();
                    scope.Complete();
                }
                else
                {
                    scope.Dispose();
                }
            }
        }
    }
}
From this article: http://blogs.msdn.com/b/alexj/archive/2009/01/11/savechanges-false.aspx
try the following:
rawData.SaveChanges(false);
dataWareHouse.SaveChanges(false);
//if everything is ok
scope.Complete();
rawData.AcceptAllChanges();
dataWareHouse.AcceptAllChanges();
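Putting that together with the original method, the shape would be roughly as follows. This is a sketch of the pattern from the linked article; on DbContext-based models the deferred-accept overload lives on the underlying ObjectContext, so the IObjectContextAdapter casts are my assumption about how to reach it here:
using (var scope = new TransactionScope())
using (var rawData = new IntranetRawDataDevEntities())
using (var dataWareHouse = new IntranetDataWareHouseDevEntities())
{
    // ... make changes on both contexts ...

    // save without accepting changes, so both contexts can still roll back
    ((IObjectContextAdapter)rawData).ObjectContext.SaveChanges(SaveOptions.None);
    ((IObjectContextAdapter)dataWareHouse).ObjectContext.SaveChanges(SaveOptions.None);

    // only if both saves succeeded, commit the ambient transaction...
    scope.Complete();

    // ...and then tell both contexts their in-memory state is authoritative
    ((IObjectContextAdapter)rawData).ObjectContext.AcceptAllChanges();
    ((IObjectContextAdapter)dataWareHouse).ObjectContext.AcceptAllChanges();
}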

Bulk inserts with EntityFramework 4.0 causes abort of transaction

We are receiving a file from a client (Silverlight) via WCF, and on the server side I parse this file. Each line in the file is transformed into an object and stored in the database. If the file is very large (10000 entries and more), I get the following error (MSSQLEXPRESS):
The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.
I have tried a lot (setting the TransactionOptions timeout and so on), but nothing works. The above exception is raised sometimes after 3000, sometimes after 6000 objects processed, but I can't succeed in processing all objects.
I append my source; hopefully somebody has an idea and can help me:
public xxxResponse SendLogFile(xxxRequest request)
{
    const int INTERMEDIATE_SAVE = 100;
    using (var context = new EntityFramework.Models.Cubes_ServicesEntities())
    {
        // start a new transaction scope with a timeout of 0 (unlimited time, for development purposes)
        using (var transactionScope = new TransactionScope(TransactionScopeOption.RequiresNew,
            new TransactionOptions
            {
                IsolationLevel = System.Transactions.IsolationLevel.Serializable,
                Timeout = TimeSpan.FromSeconds(0)
            }))
        {
            try
            {
                // open the connection manually to prevent undesired closing of the DB
                // (MSDTC)
                context.Connection.Open();
                int timeout = context.Connection.ConnectionTimeout;
                int Counter = 0;

                // read the file submitted by the client
                using (var reader = new StreamReader(new MemoryStream(request.LogFile)))
                {
                    try
                    {
                        while (!reader.EndOfStream)
                        {
                            Counter++;
                            Counter2++;
                            string line = reader.ReadLine();
                            if (String.IsNullOrEmpty(line)) continue;

                            // Create a new object
                            DomainModel.LogEntry le = CreateLogEntryObject(line);

                            // and attach it to the context, set its state to added.
                            context.AttachTo("LogEntry", le);
                            context.ObjectStateManager.ChangeObjectState(le, EntityState.Added);

                            // while not 100 objects were attached, go on
                            if (Counter != INTERMEDIATE_SAVE) continue;

                            // after 100 objects, make a call to SaveChanges.
                            context.SaveChanges(SaveOptions.None);
                            Counter = 0;
                        }
                    }
                    catch (Exception exception)
                    {
                        // cleanup
                        reader.Close();
                        transactionScope.Dispose();
                        throw exception;
                    }
                }

                // do a final SaveChanges
                context.SaveChanges();
                transactionScope.Complete();
                context.Connection.Close();
            }
            catch (Exception e)
            {
                // cleanup
                transactionScope.Dispose();
                context.Connection.Close();
                throw e;
            }
        }

        var response = CreateSuccessResponse<ServiceSendLogEntryFileResponse>("SendLogEntryFile successful!");
        return response;
    }
}
There is no bulk insert in Entity Framework. You call SaveChanges after every 100 records, but it will execute 100 separate inserts, with a database round trip for each insert.
Setting the timeout of the transaction is also subject to the maximum transaction timeout, which is configured at machine level (I think the default value is 10 minutes). How long does it take before your operation fails?
The best thing you can do is rewrite your insert logic with plain ADO.NET or with a bulk insert, along the lines of the sketch below.
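A minimal SqlBulkCopy sketch, assuming the parsed log entries can be shaped into a DataTable whose columns match the target table (the column names, table name, and the lines variable are illustrative, not from the question):
var table = new DataTable();
table.Columns.Add("Line", typeof(string));
table.Columns.Add("LoggedAt", typeof(DateTime));
foreach (string line in lines) // 'lines' = the parsed file content
    table.Rows.Add(line, DateTime.UtcNow);

using (var bulk = new SqlBulkCopy(connectionString))
{
    bulk.DestinationTableName = "dbo.LogEntry"; // hypothetical table name
    bulk.BatchSize = 5000; // one server round trip per batch, not per row
    bulk.WriteToServer(table);
}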
By the way, throw exception and throw e? That is the incorrect way to rethrow exceptions; a bare throw; preserves the original stack trace.
Important edit:
SaveChanges(SaveOptions.None) means "do not accept changes after saving", so all records are still in the Added state. Because of that, the first call to SaveChanges will insert the first 100 records. The second call will insert the first 100 again plus the next 100, the third call will insert the first 200 plus the next 100, etc.
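If the intermediate saves are kept, the saved records have to be accepted after each successful batch. A sketch of the corrected batching step, using the same ObjectContext API as the question:
// after every INTERMEDIATE_SAVE objects: save, then accept, so the next
// SaveChanges doesn't re-insert the ones already written
if (Counter == INTERMEDIATE_SAVE)
{
    context.SaveChanges(SaveOptions.None);
    context.AcceptAllChanges(); // clears the Added state of the saved entities
    Counter = 0;
}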
I had exactly the same issue. I wrote EF code to bulk insert 1000 records at a time.
It was working from the beginning, apart from a little problem with MSDTC, which I configured to allow remote clients and admin; after that it was OK. I did a lot of work with this, but one day it JUST STOPPED WORKING.
I am getting
The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.
VERY WEIRD! Sometimes the error changes. My suspicion is that it is somehow MSDTC; strange behavior.
I am changing now to not use TransactionScope!
I hate it when something works and then just stops. I also tried to run this in a VM, another enormous waste of time...
My code:
private void AddTicks(FileHelperTick[] fhTicks)
{
    List<ForexEF.Entities.Tick> Ticks = new List<ForexEF.Entities.Tick>();
    var str = LeTicks(ref fhTicks, ref Ticks);
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions()
    {
        IsolationLevel = System.Transactions.IsolationLevel.Serializable,
        Timeout = TimeSpan.FromSeconds(180)
    }))
    {
        ForexEF.EUR_TICKSContext contexto = null;
        try
        {
            contexto = new ForexEF.EUR_TICKSContext();
            contexto.Configuration.AutoDetectChangesEnabled = false;
            int count = 0;
            foreach (var tick in Ticks)
            {
                count++;
                contexto = AddToContext(contexto, tick, count, 1000, true);
            }
            contexto.SaveChanges();
        }
        finally
        {
            if (contexto != null)
                contexto.Dispose();
        }
        scope.Complete();
    }
}

private ForexEF.EUR_TICKSContext AddToContext(ForexEF.EUR_TICKSContext contexto, ForexEF.Entities.Tick tick, int count, int commitCount, bool recreateContext)
{
    contexto.Set<ForexEF.Entities.Tick>().Add(tick);
    if (count % commitCount == 0)
    {
        contexto.SaveChanges();
        if (recreateContext)
        {
            contexto.Dispose();
            contexto = new ForexEF.EUR_TICKSContext();
            contexto.Configuration.AutoDetectChangesEnabled = false;
        }
    }
    return contexto;
}
It times out due to the TransactionScope default maximum timeout; check machine.config for that.
Check out this link:
http://social.msdn.microsoft.com/Forums/en-US/windowstransactionsprogramming/thread/584b8e81-f375-4c76-8cf0-a5310455a394/
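The machine-wide cap silently clamps any larger Timeout you request in code, which would explain aborts after roughly ten minutes regardless of your TransactionOptions. A quick way to check the effective ceiling from C# (System.Transactions):
// prints the machine-wide maximum (configured in machine.config under
// system.transactions/machineSettings maxTimeout; the default is 00:10:00) -
// any TransactionScope timeout above this value is reduced to it
Console.WriteLine(TransactionManager.MaximumTimeout);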

Trouble Calling Stored Procedure from BackgroundWorker

I'm in ASP.NET MVC and am (mostly) using Entity Framework. I want to call a stored procedure without waiting for it to finish. My current approach is to use a BackgroundWorker. The trouble is, it works fine without the background worker, but fails to execute with it.
In the DoWork event handler when I call
command.ExecuteNonQuery();
it just "disappears" (never gets to next line in debug mode).
Anyone have tips on calling a sproc asynchronously? BTW, it'll be SQL Azure in production if that matters; for now SQL Server 2008.
public void ExecAsyncUpdateMemberScoreRecalc(MemberScoreRecalcInstruction instruction)
{
    var bw = new BackgroundWorker();
    bw.DoWork += new DoWorkEventHandler(AsyncUpdateMemberScoreRecalc_DoWork);
    bw.WorkerReportsProgress = false;
    bw.WorkerSupportsCancellation = false;
    bw.RunWorkerAsync(instruction);
}

private void AsyncUpdateMemberScoreRecalc_DoWork(object sender, DoWorkEventArgs e)
{
    var instruction = (MemberScoreRecalcInstruction)e.Argument;
    string connectionString = string.Empty;
    using (var sprocEntities = new DSAsyncSprocEntities()) // getting the connection string
    {
        connectionString = sprocEntities.Connection.ConnectionString;
    }
    using (var connection = new EntityConnection(connectionString))
    {
        connection.Open();
        EntityCommand command = connection.CreateCommand();
        command.CommandText = DSConstants.Sproc_MemberScoreRecalc;
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.AddWithValue(DSConstants.Sproc_MemberScoreRecalc_Param_SageUserId, instruction.SageUserId);
        command.Parameters.AddWithValue(DSConstants.Sproc_MemberScoreRecalc_Param_EventType, instruction.EventType);
        command.Parameters.AddWithValue(DSConstants.Sproc_MemberScoreRecalc_Param_EventCode, instruction.EventCode);
        command.Parameters.AddWithValue(DSConstants.Sproc_MemberScoreRecalc_Param_EventParamId, instruction.EventParamId);
        int result = 0;
        // NEVER RETURNS FROM RUNNING NEXT LINE (and never executes)...
        // yet it works if I do the same thing directly on the main thread.
        result = command.ExecuteNonQuery();
    }
}
Add a try/catch around the call and see if any exceptions are being caught that are silently aborting the thread.
try
{
    result = command.ExecuteNonQuery();
}
catch (Exception ex)
{
    // Log this error, and either handle it here or
    throw;
}
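If the log turns up nothing, one alternative worth trying is to skip BackgroundWorker (which swallows DoWork exceptions until RunWorkerCompleted is inspected) and use ADO.NET's own asynchronous API on a plain SqlCommand. A sketch, assuming you extract the store-level connection string and add "Asynchronous Processing=true" to it (required for Begin/End before .NET 4.5); the sproc name is hypothetical:
var connection = new SqlConnection(storeConnectionString); // not the EF entity connection string
connection.Open();
var command = new SqlCommand("MemberScoreRecalc", connection)
{
    CommandType = CommandType.StoredProcedure
};
command.BeginExecuteNonQuery(ar =>
{
    try { command.EndExecuteNonQuery(ar); }
    catch (Exception ex) { Console.WriteLine(ex); } // or proper logging
    finally { connection.Dispose(); }
}, null);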

Monotouch data sync - why does my code sometimes cause sqlite errors?

I have the following calls (actually a few more than this - it's the overall method that's in question here):
ThreadPool.QueueUserWorkItem(Database.Instance.RefreshEventData);
ThreadPool.QueueUserWorkItem(Database.Instance.RefreshLocationData);
ThreadPool.QueueUserWorkItem(Database.Instance.RefreshActData);
The first point is: is it OK to call methods that invoke WCF services like this? I tried daisy-chaining them and it was a mess.
An example of one of the refresh methods being called above is (they all follow the same pattern, just call different services and populate different tables):
public void RefreshEventData(object state)
{
    Console.WriteLine("in RefreshEventData");
    var eservices = new AppServicesClient(new BasicHttpBinding(), new EndpointAddress(this.ServciceUrl));

    // default the delta to an old date so that if this is first run we get everything
    var eventsLastUpdated = DateTime.Now.AddDays(-100);
    try
    {
        eventsLastUpdated = (from s in GuideStar.Data.Database.Main.Table<GuideStar.Data.Event>()
                             orderby s.DateUpdated descending
                             select s).ToList().FirstOrDefault().DateUpdated;
    }
    catch (Exception ex1)
    {
        Console.WriteLine(ex1.Message);
    }

    try
    {
        eservices.GetAuthorisedEventsWithExtendedDataAsync(this.User.Id, this.User.Password, eventsLastUpdated);
    }
    catch (Exception ex)
    {
        Console.WriteLine("error updating events: " + ex.Message);
    }

    eservices.GetAuthorisedEventsWithExtendedDataCompleted += delegate(object sender, GetAuthorisedEventsWithExtendedDataCompletedEventArgs e)
    {
        try
        {
            List<Event> newEvents = e.Result.ToList();
            GuideStar.Data.Database.Main.EventsAdded = e.Result.Count();
            lock (GuideStar.Data.Database.Main)
            {
                GuideStar.Data.Database.Main.Execute("BEGIN");
                foreach (var s in newEvents)
                {
                    GuideStar.Data.Database.Main.InsertOrUpdateEvent(new GuideStar.Data.Event
                    {
                        Name = s.Name,
                        DateAdded = s.DateAdded,
                        DateUpdated = s.DateUpdated,
                        Deleted = s.Deleted,
                        StartDate = s.StartDate,
                        Id = s.Id,
                        Lat = s.Lat,
                        Long = s.Long
                    });
                }
                GuideStar.Data.Database.Main.Execute("COMMIT");
                LocationsCount = 0;
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine("error InsertOrUpdateEvent " + ex.Message);
        }
        finally
        {
            OnDatabaseUpdateStepCompleted(EventArgs.Empty);
        }
    };
}
OnDatabaseUpdateStepCompleted just increments an updateComplete counter when it's called, and once it knows that all of the services have come back OK it removes the waiting spinner and the app carries on.
This works OK the first time around - but then sometimes it doesn't, failing with one of these: http://monobin.com/__m6c83107d
I think the first question is: is all this OK? I'm not used to using threading and locks, so I am wandering into new ground here. Is using QueueUserWorkItem like this OK? Should I even be using lock before doing the bulk insert/update? An example of which:
public void InsertOrUpdateEvent(Event festival)
{
    try
    {
        if (!festival.Deleted)
        {
            Main.Insert(festival, "OR REPLACE");
        }
        else
        {
            Main.Delete<Event>(festival);
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine("InsertOrUpdateEvent failed: " + ex.Message);
    }
}
Then the next question is: what am I doing wrong that is causing these SQLite issues?
SQLite is not thread safe.
If you want to access SQLite from more than one thread, you must take a lock before you access any SQLite-related structures.
Like this:
lock (db)
{
    // Do your query or insert here
}
Sorry, no specific answers, but some thoughts:
Is SQLite even threadsafe? I'm not sure - it may be that it's not (or the wrapper isn't). Can you lock on a more global object, so no two threads are inserting at the same time?
It's possible that the MT GC is getting a little overenthusiastic and releasing your string before it's been used. Maybe keep a local reference to it around during the insert? I've had this happen with view controllers, where I had them in an array (tab controllers, specifically), but if I didn't keep a member variable around with the reference, they got GC'ed.
Could you get the data in a threaded manner, then queue everything up and insert it on a single thread? At least as a test, anyway - see the sketch below.
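A minimal sketch of that last idea, reusing the question's Database.Main wrapper: fetch on the ThreadPool as before, but funnel every write through one shared lock so only a single thread ever touches SQLite at a time (DbLock and ApplyEvents are illustrative names, not from the original code):
static readonly object DbLock = new object(); // single app-wide gate for all SQLite access

void ApplyEvents(List<GuideStar.Data.Event> newEvents)
{
    lock (DbLock) // no two threads can run BEGIN..COMMIT concurrently
    {
        GuideStar.Data.Database.Main.Execute("BEGIN");
        foreach (var ev in newEvents)
            GuideStar.Data.Database.Main.InsertOrUpdateEvent(ev);
        GuideStar.Data.Database.Main.Execute("COMMIT");
    }
}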