TransactionScope Vs stored procedure - entity-framework

I have a web service hosted on multiple servers, and as traffic increases, race conditions arise. We're using Entity Framework and hosting on Azure. I've been looking into either writing the queries using TransactionScope or moving the logic into a stored procedure and doing the transaction there.
I was wondering: what's the difference between using TransactionScope and a stored procedure? What are the best practices for this problem?

I would strongly discourage you from implementing transactions in stored procedures. This can greatly limit the flexibility you have in creating units of work (which a transaction is). Since you are using EF, I would encourage you to manage transactions in your business tier code. In this way, you have greater flexibility in defining and managing units of work.
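A sketch of what managing the unit of work in your business tier can look like with EF's own transaction API (Database.BeginTransaction is available in both EF6 and EF Core); the context and entity names here are placeholders:

```csharp
// MyDbContext, Order, and AuditEntry are hypothetical; substitute your own model.
using (var context = new MyDbContext())
using (var transaction = context.Database.BeginTransaction())
{
    try
    {
        context.Orders.Add(new Order { Status = "Pending" });
        context.SaveChanges();

        context.AuditEntries.Add(new AuditEntry { Message = "Order created" });
        context.SaveChanges();

        transaction.Commit();   // both SaveChanges calls succeed or fail as one unit of work
    }
    catch
    {
        transaction.Rollback();
        throw;
    }
}
```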

TransactionScope allows you to put a transaction around your EF statements, so the entire set of LINQ operations rolls back together, whereas a transaction inside a sproc only rolls back whatever is processed within that sproc.
Since you are using EF, which lets you interact with the database through LINQ, you may as well go with TransactionScope, IMO.
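For completeness, a minimal sketch of the TransactionScope approach around EF (MyDbContext, Order, and AuditEntry are made-up names):

```csharp
using System.Transactions;

// Everything inside the scope runs in one ambient transaction; the work is
// committed when scope.Complete() is reached, and rolled back on an exception.
using (var scope = new TransactionScope())
using (var context = new MyDbContext())
{
    context.Orders.Add(new Order { Status = "Pending" });
    context.SaveChanges();

    context.AuditEntries.Add(new AuditEntry { Message = "Order created" });
    context.SaveChanges();

    scope.Complete();
}
```

Same unit of work as the sketch above, but with an ambient transaction instead of an explicit one, which also lets the scope span more than one context or connection.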

Related

Is optimistic locking equivalent to Select For Update?

It is my first time using EF Core and DDD concepts. Our database is Microsoft SQL Server. We use optimistic concurrency based on the RowVersion for user requests, which handles concurrent reads and writes by users.
With the DDD paradigm, user changes are not written directly to the database, nor is the logic handled in the database with a stored procedure. It is a three-step process:
1. get the aggregate from the repository, which pulls it from the database
2. update the aggregate through domain commands that implement the business logic
3. save the aggregate back to the repository, which writes it to the database
The separation of read and write in the application logic can again lead to race conditions between parallel commands.
Since the time between read and write in the backend is normally fairly short, those race conditions can be handled with optimistic as well as pessimistic locking.
To my understanding, optimistic concurrency using RowVersion is sufficient for the lost-update problem, but not for write skew, as shown in Martin Kleppmann's book "Designing Data-Intensive Applications". That would require locking the read records.
To prevent write skew a common solution is to lock the records in step 1 with FOR UPDATE or in SQL Server with the hints UPDLOCK and HOLDLOCK.
EF Core supports neither FOR UPDATE nor SQL Server's WITH table hints.
If I'm not able to lock records with EF Core does it mean there is no way to prevent write skew except using Raw SQL or Stored Procedures?
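For context, the raw-SQL workaround I am referring to would look roughly like this (the Orders DbSet and column names are placeholders); the explicit transaction is what keeps the UPDLOCK/HOLDLOCK locks held until commit:

```csharp
// Sketch only: dbContext is an EF Core DbContext, Orders is a placeholder DbSet.
await using var tx = await dbContext.Database.BeginTransactionAsync();

var order = await dbContext.Orders
    .FromSqlRaw("SELECT * FROM dbo.Orders WITH (UPDLOCK, HOLDLOCK) WHERE Id = {0}", orderId)
    .SingleAsync();

// ... run the domain commands against the aggregate ...

await dbContext.SaveChangesAsync();
await tx.CommitAsync();   // the locks taken by the hints are released here
```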
If I use RowVersion, I first check the RowVersion after getting the aggregate from the database. If it doesn't match I can fail fast. If it matches it is checked through EF Core in step 3 when updating the database. Is this pattern sufficient to eliminate all race conditions except write skew?
Since the write-skew race condition occurs when the read and the write touch different records, it seems a transaction that makes a decision based on a read could always be added later during development. In a complex system I would not feel safe if it is not just simple CRUD access. Is there another solution when using EF Core to prevent write skew without locking records for update?
If you tell EF Core about the RowVersion attribute, it will use it in any update statement. BUT you have to be careful to preserve the RowVersion value from your data retrieval. The usual work pattern would retrieve the data, the user potentially edits the data, and then the user saves the data. When the user saves the data, you would normally have EF retrieve the entity, update the entity with the user's changes, and save the updates. EF uses the RowVersion in a Where clause to ensure nothing has changed since you read the data. This is the tricky part: you want to make sure the RowVersion is still the same as your initial data retrieval, not the second retrieval used to update the entity before saving.
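A sketch of that pattern in EF Core, assuming RowVersion is mapped as a concurrency token (e.g. via IsRowVersion()); the key move is copying the RowVersion from the initial read into OriginalValue so the UPDATE's WHERE clause checks against what the user originally saw:

```csharp
// Placeholder aggregate and context; configure the token once in OnModelCreating:
//   modelBuilder.Entity<Order>().Property(o => o.RowVersion).IsRowVersion();
public async Task SaveOrderAsync(MyDbContext db, int orderId,
    byte[] rowVersionFromInitialRead, string newStatus)
{
    var order = await db.Orders.SingleAsync(o => o.Id == orderId); // second retrieval

    // Compare against the value from the *first* read, not this re-fetch.
    db.Entry(order).Property(o => o.RowVersion).OriginalValue = rowVersionFromInitialRead;

    order.Status = newStatus;

    try
    {
        await db.SaveChangesAsync(); // UPDATE ... WHERE RowVersion = @originalValue
    }
    catch (DbUpdateConcurrencyException)
    {
        // The row changed since the initial read; surface the conflict to the user.
        throw;
    }
}
```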

Is there Entity Framework support for saving to multiple databases on the same SQL Server in a single transaction?

I have two databases on the same SQL Server instance. I would like to write a record to each database in a single transaction.
In Linq-to-SQL, I would connect to either database with one context and use three part naming to identify the tables.
Is there a similar capability in Entity Framework?
I'm trying to avoid DTC (it has been forbidden), so the usual TransactionScope approach is not available to me.
There is not a way I know of... you could potentially use the UnitOfWork pattern
http://www.codeproject.com/Articles/581487/Unit-of-Work-Design-Pattern
That might allow you to at least go back to the other DB and undo what was committed?
Personally, I think you're going to struggle.
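To illustrate what I mean by going back and undoing the write, a rough sketch with two hypothetical contexts, ContextA and ContextB, one per database:

```csharp
// Not a real atomic transaction: if the compensating write itself fails,
// the two databases are left inconsistent, which is why I think you'll struggle.
using (var dbA = new ContextA())
using (var dbB = new ContextB())
{
    var recordA = new RecordA { Name = "example" };
    dbA.RecordsA.Add(recordA);
    dbA.SaveChanges();                      // committed to database A

    try
    {
        dbB.RecordsB.Add(new RecordB { Name = "example" });
        dbB.SaveChanges();                  // committed to database B
    }
    catch
    {
        dbA.RecordsA.Remove(recordA);       // compensate in database A
        dbA.SaveChanges();
        throw;
    }
}
```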

When to use t-SQL over the Entity Framework

Could someone tell me if there are any times when it is more advantageous to use T-SQL over the Entity Framework? I'm aware of the N+1 issue, but are there any other gotchas I should be aware of? For instance, do LINQ-to-Entities queries get cached as well as stored procedures do? Are there instances where the SQL generated by EF is less than optimal?
Thanks!
Whenever you need to do the work "inside" the DB server rather than go back and forth between your code and the server.
Also, when you use stored procedures, you can alter the code without recompiling/redeploying, which can be easier in production environments.
IMHO it is sometimes easier to write complex SQL statements in T-SQL than in LINQ....
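As an example of doing the work "inside" the DB server: a set-based update issued as T-SQL through EF versus the entity-by-entity LINQ equivalent (ExecuteSqlRaw is EF Core; EF6 has Database.ExecuteSqlCommand; the table and column names are invented):

```csharp
// One statement, executed entirely on the server, no rows materialized in the app:
context.Database.ExecuteSqlRaw(
    "UPDATE dbo.Orders SET Status = 'Archived' WHERE CreatedOn < {0}", cutoffDate);

// The pure-EF alternative pulls every matching row across the wire first:
foreach (var order in context.Orders.Where(o => o.CreatedOn < cutoffDate))
{
    order.Status = "Archived";
}
context.SaveChanges();
```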

Alternative to TransactionScope for Entity Framework when using Sql Azure across multiple connections

I am using EF to access Sql azure. In one situation I need to make changes to two databases, for which normally I would use TransactionScope and it would escalate to MSDTC. Now MSDTC is not supported in Sql Azure, so I can't use TransactionScope.
Is there another way to do this? (other than doing it without the distributed transaction and having to manually rollback state somehow).
One way to write your code without using the TransactionScope class is to use SqlTransaction. The SqlTransaction class doesn't use the transaction manager; it wraps the commands within a local transaction that is committed when you call the Commit() method.
I would suggest looking at the Handling Transactions in SQL Azure article.
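A minimal sketch of that local-transaction approach in plain ADO.NET (the connection string, SQL, and parameter values are placeholders):

```csharp
using System.Data.SqlClient;   // Microsoft.Data.SqlClient on newer stacks

using var connection = new SqlConnection(connectionString);
connection.Open();
using var transaction = connection.BeginTransaction();
try
{
    using (var cmd = new SqlCommand(
        "UPDATE dbo.Orders SET Status = @status WHERE Id = @id", connection, transaction))
    {
        cmd.Parameters.AddWithValue("@status", "Shipped");
        cmd.Parameters.AddWithValue("@id", orderId);
        cmd.ExecuteNonQuery();
    }

    transaction.Commit();   // local commit only; no transaction manager / MSDTC involved
}
catch
{
    transaction.Rollback();
    throw;
}
```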
TransactionScope is now supported for Azure SQL DB. See my answer to the following posting: TransactionScope() in Sql Azure. This also applies when you are using EF.

How to decide when to use ADO.NET and when to do the work in the database?

I'm learning some ADO.NET. I noticed that quite a bit of database functionality can also be found in ADO.NET.
I'm kind of confused. Do I use ADO.NET to manage all the interactions, or should I make calls to the database?
I don't know what should be done in ADO.NET and what should be done at the database level.
Thanks for helping.
If you mean what should be handled in SQL statements issued from ADO.NET versus what should live in stored procedures at the database level: as much as possible in stored procedures, at least that's what I live by. In addition to reducing the chance of SQL injection, stored procedures allow you to modify the SQL without having to recompile and redeploy your code, and they enable execution-plan reuse by the query optimizer.
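For example, calling a stored procedure from ADO.NET with typed parameters (the procedure name, parameters, and connection string here are made up):

```csharp
using System.Data;
using System.Data.SqlClient;   // Microsoft.Data.SqlClient on newer stacks

using var connection = new SqlConnection(connectionString);
using var command = new SqlCommand("dbo.usp_UpdateOrderStatus", connection)
{
    CommandType = CommandType.StoredProcedure
};
command.Parameters.Add("@OrderId", SqlDbType.Int).Value = orderId;
command.Parameters.Add("@Status", SqlDbType.NVarChar, 50).Value = "Shipped";

connection.Open();
command.ExecuteNonQuery();   // values are passed as parameters, not concatenated SQL
```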