How/when does SQL Server update the record timestamp value if I have a transaction with multiple CRUD operations? - entity-framework

I am working on an ASP.NET MVC web application and I am using Entity Framework to map my tables to model classes.
I have this model class representing VMs:
public partial class TMSVirtualMachine
{
public int TMSVirtualMachineID { get; set; }
public int ServerID { get; set; }
public int RoleID { get; set; }
public Nullable<int> BackUpStatusID { get; set; }
public Nullable<int> StatusID { get; set; }
public Nullable<int> MonitoreID { get; set; }
public Nullable<decimal> TotalStorage { get; set; }
public string Comment { get; set; }
public byte[] timestamp { get; set; }
public Nullable<long> IT360SiteID { get; set; }
public virtual TMSServer TMSServer { get; set; }
// code goes here…
}
And I have the following repository method, which moves all of the current server's VMs to another server by changing each VM's ServerID, as follows:
public int changeVMsServer(AssignVMsToServer s, string username)
{
int count = 0;
var currentvms = tms.TMSVirtualMachines.Where(a => a.ServerID == s.serverIDForm);
foreach (var v in currentvms)
{
v.ServerID = s.serverIDTo;
tms.Entry(v).State = EntityState.Modified;
count++;
}
SaveChanges();
return count;
}
Currently, if two users call the above method at the same time, one of them will get a DbUpdateConcurrencyException, since the timestamp for a VM when trying to save it will be different from when the VM was retrieved.
My question is basically how SQL Server 2008 R2 manages the timestamp column. Take the following scenario:
The first user retrieves 5 VMs, then generates 5 SQL update commands and saves.
The second user retrieves the same 5 VMs, then generates 5 SQL updates, and when trying to save, EF will detect that the timestamp has changed for at least one VM and raise a DbUpdateConcurrencyException.
Now when the first user performs the 5 SQL update operations, his work will not be saved until all 5 update operations succeed, since the 5 update operations are wrapped in a single transaction.
Q1) So when will SQL Server 2008 R2 change the timestamp column for the 5 servers: when the transaction is completed, or when each single update operation is saved? And if the transaction fails, will SQL Server return the old timestamp value?
Sorry for the long question, but I tried searching for a clear answer and could not reach a final conclusion.

MSDN documentation on timestamp is quite clear:
Is a data type that exposes automatically generated, unique binary numbers within a database. timestamp is generally used as a mechanism for version-stamping table rows. The storage size is 8 bytes. The timestamp data type is just an incrementing number and does not preserve a date or a time. To record a date or time, use a datetime data type.
...
You can use the timestamp column of a row to easily determine whether any value in the row has changed since the last time it was read. If any change is made to the row, the timestamp value is updated.
So, it's clear that the timestamp is modified when the row changes.
EF uses this column for concurrency checking: if an app reads a row, modifies it, and tries to save the changes to the DB, and the timestamp has changed since the row was read, then a concurrency exception is thrown.
As to the transaction, what you're missing is the concept of transaction isolation. The timestamp column changes when a change is made to the row. But what happens when a different connection tries to read this row depends on the isolation level: the row can be locked until the transaction finishes (so the other connection has to wait until that moment), or the other connection can read the new uncommitted value, or it can read the old value.
By default in SQL Server the isolation level is:
READ COMMITTED
Specifies that statements cannot read data that has been modified but not committed by other transactions. This prevents dirty reads. Data can be changed by other transactions between individual statements within the current transaction, resulting in nonrepeatable reads or phantom data. This option is the SQL Server default.
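Given that READ COMMITTED behavior, the usual way to cope on the EF side is to catch the concurrency exception, reload the stale entities (which also refreshes their timestamp values), and retry. A minimal sketch wrapping the question's repository method (the wrapper and its retry cap are my own assumptions, not part of the original code):

```csharp
// Hypothetical wrapper around the question's changeVMsServer method (EF 6).
// On DbUpdateConcurrencyException, reload each conflicting entity from the
// database (fresh values + fresh timestamp) and try again a few times.
public int ChangeVMsServerWithRetry(AssignVMsToServer s, string username)
{
    const int maxRetries = 3;   // assumed policy, tune as needed
    for (int attempt = 0; ; attempt++)
    {
        try
        {
            return changeVMsServer(s, username);
        }
        catch (DbUpdateConcurrencyException ex) when (attempt < maxRetries)
        {
            // Refresh the conflicting rows; their timestamp columns now
            // hold the values written by the competing transaction.
            foreach (var entry in ex.Entries)
                entry.Reload();
        }
    }
}
```

Note that after a reload the re-run query may legitimately find fewer VMs to move, because the competing transaction already moved them.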

Related

Navigation Property Not Saving with BulkInsert()

Using EFCore 3.1 with the library EFCore.BulkExtensions 3.6.1 (latest version for EFCore 3.1).
Database server is SQL Server 2019.
Here is code to reproduce the error.
A simple Customer class with a navigation property from another class:
public class Customer
{
public int ID { get; set; }
public String Name { get; set; }
public Cont Continent { get; set; }
}
public class Cont
{
public int ID { get; set; }
public String Name { get; set; }
}
When I try to insert entities into Customers with populated navigation properties
using the "BulkInsert" method from the EFCore.BulkExtensions library, the values of the navigation properties do not get saved to the database:
Customer cust1 = new Customer
{
Continent = contList.Find(x => x.Name == "Europe"),
Name = "Wesson Co, Ltd."
};
Customer cust2 = new Customer
{
Continent = contList.Find(x => x.Name == "Asia"),
Name = "Tiranha, Inc."
};
// (checking the "Continent" props here shows them to be properly populated)
List<Customer> CustomerList = new List<Customer> { cust1, cust2 };
dbContext.BulkInsert(CustomerList);
The result is that the "ContinentID" column in the database is NULL.
The alternate way, using the usual EF Core SaveChanges(), works; change the last two lines to:
dbContext.Customers.AddRange(cust1, cust2);
dbContext.SaveChanges();
This works totally fine. But I have to insert a million records, and SaveChanges() has horrible performance in that scenario.
Is there anything I am doing wrong?
Using another (lower) version of BulkExtensions does not help. Higher versions won't work, as they all target EF Core 5 with .NET Standard 2.1, which my project does not currently support.
Could not find any hint or mention of navigation-property-related info in the EFCore.BulkExtensions documentation.
Looking at what SQL is being sent only shows me a query like this:
INSERT INTO dbo.Customers (ContinentID, Name) VALUES #p1, #p2
so it is up to BulkExtensions.BulkInsert() to place the values correctly, which it seemingly does not.
The point is that similar code has been working for 6 months, and now, with a simple scenario like the above, it won't, for any version of the BulkExtensions library. So it is likely there is something wrong with my code or my approach, but I cannot find it.
UPDATE
Downgrading the package EFCore.BulkExtensions to 3.1.6 gives me a different error. Still does not work but here is the error:
System.InvalidOperationException : The given value 'Customer' of type String from the data source cannot be converted to type int for Column 2 [ContinentID] Row 1.
----> System.FormatException : Failed to convert parameter value from a String to a Int32.
----> System.FormatException : Input string was not in a correct format.
As it stands right now, this is a bug in the EFCore.BulkExtensions library: versions 3.2.1 through 3.3.5 handle it (mostly) correctly; versions 3.3.6 - 3.6.1 do not.
Use version 3.3.5 for the most stable result, as of this writing.
(No data on version 5.x for EFCore 5)
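If pinning the library version is not an option, one possible workaround (assuming you can change the model, which the question does not confirm) is to expose the foreign key as an explicit property and set it yourself before the bulk insert, so BulkInsert never has to resolve the navigation property:

```csharp
// Sketch: explicit FK property added to the question's Customer class.
// "ContinentID" matches the column name from the question; adding the
// property to the model is my assumption.
public class Customer
{
    public int ID { get; set; }
    public String Name { get; set; }
    public int? ContinentID { get; set; }   // explicit FK column
    public Cont Continent { get; set; }
}

// usage: set the FK directly instead of relying on navigation fix-up
var europe = contList.Find(x => x.Name == "Europe");
var cust1 = new Customer { ContinentID = europe?.ID, Name = "Wesson Co, Ltd." };
dbContext.BulkInsert(new List<Customer> { cust1 });
```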

Simple contract for use with FromSql()

With its recent improvements, I'm looking to move from Dapper back to EF (Core).
The majority of our code currently uses the standard patterns of mapping entities to tables, however we'd also like to be able to make simple ad-hoc queries that map to a simple POCO.
For example, say I have a SQL statement which returns a result set of strings. I created a class as follows...
public class SimpleStringDTO
{
public string Result { get; set; }
}
...and called it as such.
public DbSet<SimpleStringDTO> SingleStringResults { get; set; }
public IQueryable<SimpleStringDTO> Names()
{
var sql = $"select name [result] from names";
var result = this.SingleStringResults.FromSql(sql);
return result;
}
My thoughts are that I could use the same DBSet and POCO for other simple queries to other tables.
When I execute it, EF throws an error "The entity type 'SimpleStringDTO' requires a primary key to be defined.".
Do I really need to define another field as a PK? There'll be cases where there isn't a PK defined. I just want something simple and flexible. Ideally, I'd rather not define a DBSet or POCO at all, just return the results straight to an IEnumerable<string>.
Can someone please point me towards best practices here?
While I wait for EF Core 2.1 I've ended up adding a fake key to my model
[Key]
public Guid Id { get; set; }
and then returning a fake Guid from SQL.
var sql = $"select newid(), name [result] from names";
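For reference, once EF Core 2.1+ is available, the fake key becomes unnecessary: keyless query types (configured with HasNoKey from EF Core 3.x onward) allow plain DTOs to be queried with raw SQL. A sketch assuming EF Core 3.x (the context name is illustrative):

```csharp
// Keyless-entity sketch (EF Core 3.x+); AppDbContext is an assumed name.
public class SimpleStringDTO
{
    public string Result { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<SimpleStringDTO> SingleStringResults { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // No primary key required for query-only result types.
        modelBuilder.Entity<SimpleStringDTO>().HasNoKey();
    }
}

// usage: no newid() trick needed any more
// var names = context.SingleStringResults
//     .FromSqlRaw("select name [result] from names")
//     .Select(r => r.Result)
//     .ToList();
```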

Using always encrypted on a entity framework [code first] database

I have an MVC application that uses entity framework / code first. I'm trying to set up always encrypted in order to encrypt a column (social security number / SSN). I'm running everything in Azure, including using Azure vault to store keys.
I have two models, SystemUser and Person. SystemUser is essentially an account / login which can administer 1 or more People.
The definitions look a bit like:
public class Person
{
[StringLength(30)]
[Column(TypeName = "varchar")]
public string SSN { get; set; } // Social Security Number
...
[Required, MaxLength(128)]
public string SystemUserID { get; set; }
[ForeignKey("SystemUserID")]
public virtual SystemUser SystemUser { get; set; }
...
}
public class SystemUser
{
...
[ForeignKey("SystemUserID")]
public virtual HashSet<Person> People { get; set; }
...
}
I have a very basic page set up that just looks up a user and prints out their SSN. This works. I then adapted the page to update the SSN, and this also works. This implies to me that the Always Encrypted configuration and Azure Vault are set up correctly. I've got "Column Encryption Setting=Enabled" in the connection string, and I encrypted the SSN column using SSMS (I'm using deterministic encryption).
In my SystemUser class I have the following method as an implementation for Identity:
public async Task<ClaimsIdentity> GenerateUserIdentityAsync(UserManager<SystemUser> manager)
{
...
if (this.People.Any())
{
...
}
...
}
This is used for user logins. Running the code results in a:
System.Data.Entity.Core.EntityCommandExecutionException: An error
occurred while executing the command definition. See the inner
exception for details. ---> System.Data.SqlClient.SqlException:
Operand type clash: varchar is incompatible with varchar(30) encrypted
with (encryption_type = 'DETERMINISTIC', encryption_algorithm_name =
'AEAD_AES_256_CBC_HMAC_SHA_256', column_encryption_key_name =
'CEK_Auto11', column_encryption_key_database_name = 'xxx')
collation_name = 'Latin1_General_BIN2'
It seems to fail on the line above "if (this.People.Any())". Putting a break point just before that line reveals the following about this.People:
'((System.Data.Entity.DynamicProxies.SystemUser_9F939A0933F4A8A3724213CF7A287258E76B1C6775B69BD1823C0D0DB6A88360)this).People'
threw an exception of type
'System.Data.Entity.Core.EntityCommandExecutionException' System.Collections.Generic.HashSet
{System.Data.Entity.Core.EntityCommandExecutionException}
Any ideas here? Am I doing something that Always Encrypted does not support?
Always Encrypted is not yet fully supported in Entity Framework; Microsoft is still working on it.
The blog post Using Always Encrypted with Entity Framework 6 explains how to use Always Encrypted with Entity Framework 6 for Database First, Code First from an existing database, and Code First Migrations, with workarounds for different scenarios and problems.
According to https://blogs.msdn.microsoft.com/sqlsecurity/2015/08/27/using-always-encrypted-with-entity-framework-6/
Pass the constant argument as a closure – this will force parameterization, producing the correct query:
var ssn = "123-45-6789";
context.Patients.Where(p => p.SSN == ssn);
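The distinction matters because of how the query is built. A short contrast sketch (variable names are illustrative):

```csharp
// Fails with Always Encrypted: the constant is inlined as a plaintext
// literal, so SQL Server compares plaintext against ciphertext.
var bad = context.Patients.Where(p => p.SSN == "123-45-6789");

// Works: the captured local becomes a SQL parameter, which the client
// driver transparently encrypts before sending.
var ssn = "123-45-6789";
var good = context.Patients.Where(p => p.SSN == ssn);
```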

EF6: Table Splitting Not Working

I am trying to create an EF6 database where two tables, Addresses and Visits, share the same values as primary keys. Visits, conceptually, is an extension of Addresses. I'm splitting the tables because most of the records in Addresses don't require the fields contained in Visits.
I'm using the code first approach. Here's the relevant code for the Addresses:
public class Address
{
[Key]
[DatabaseGenerated(DatabaseGeneratedOption.Identity)]
public int ID { get; set; }
[ForeignKey( "ID" )]
public virtual Visit Visit { get; set; }
and for Visits:
public class Visit
{
[Key]
[DatabaseGenerated( DatabaseGeneratedOption.Identity )]
public int ID { get; set; }
[ForeignKey("ID")]
public virtual Address Address { get; set; }
Based on my research, I also needed to include the following in my datacontext's OnModelCreating method:
modelBuilder.Entity<Visit>()
.HasOptional( v => v.Address )
.WithRequired();
Unfortunately, this doesn't work. I can update the database all right, after eliminating scaffolding calls to drop the primary index from Addresses (probably because the add-migration code thinks the primary key is "merely" a foreign-key field). But when I run the application I get the following error:
Invalid column name 'Address_ID'.
Invalid column name 'Address_ID'.
From my limited experience with EF6, this looks like someplace deep inside the framework is expecting there to be fields named 'Address_ID', probably in the Visits table (based on the 'table name'_'field name' naming structure I've seen for other implicitly added fields).
Is what I'm trying to do possible? If so, what am I missing in the configuration?
Additional Info
In trying out bubi's proposed solution (which unfortunately still generates the same error), I found that I could eliminate the OnModelCreating code and still get functional migration code generated.
Resolution
I finally did what I should've done earlier, which is examine the actual T-SQL code generated by the query which was blowing up. It turns out the problem was not in the Visit/Address linkage, but in a completely separate relationship involving another table. Apparently, somewhere along the way I did something to cause EF to think that other table (Voters) had an Address_ID foreign key field. In reality, the Address/Voter relationship should've been, and originally was, tied to a Voter.AddressID field.
Rather than try to unwind a large number of migrations I opted to blow away the database, blow away the migrations and start from scratch. After recreating the database -- but using bubi's suggestion -- I reloaded the data from backup and, voila, I was back in business.
For the sake of completeness, here's the code I ended up having to put into the OnModelCreating method call to get the Address/Visit relationship to work correctly:
modelBuilder.Entity<Visit>()
.HasRequired( v => v.Address )
.WithRequiredDependent( a => a.Visit );
modelBuilder.Entity<Address>()
.HasRequired( a => a.Visit )
.WithRequiredPrincipal( v => v.Address );
I am a little confused about why I have to use HasRequired in order to be able to use WithRequiredPrincipal/WithRequiredDependent, since not every entry in the Address table has an entry in the Visit table. That would seem to be "optional", not "required". But it appears to work, and maybe the "required" part is just internal to EF's model of the database, not the database itself.
There are 2 problems in the model:
- Only one of the keys can be auto-numbered; the other must get the same Id (this happens independently of EF).
- A mapping problem.
This model should work.
public class Address
{
[Key]
[DatabaseGenerated(DatabaseGeneratedOption.Identity)]
public int Id { get; set; }
public string Description { get; set; }
public virtual Visit Visit { get; set; }
}
public class Visit
{
public Visit()
{
Address = new Address();
}
[Key]
[ForeignKey("Address")]
public int Id { get; set; }
public string Description { get; set; }
public virtual Address Address { get; set; }
}
Example of use
var visit = new Visit
{
Description = "Visit",
Address = {Description = "AddressDescription"}
};
db.Visits.Add(visit);
db.SaveChanges();
In addition to what bubi mentioned, your modelBuilder statement contradicts the model in that it doesn't mention Address.Visit as the inverse property. So EF thinks that property represents a separate relationship and tries to create the Address_ID column for that relationship.
You need to have
modelBuilder.Entity<Visit>()
// from your description sounds like every Visit needs an Address
.HasRequired(v => v.Address )
// need to mention the inverse property here if you have one
.WithOptional(a => a.Visit);
...or just remove the statement completely since you're already using attributes, and EF should be able to figure it out by convention.

Entity Framework: alternatives to using MultipleActiveResultSets

I'm using ASP.NET WebAPI and ran into a problem with a nested model that should be communicated via a WebAPI Controller:
The entities "bond, stock, etc." each have a list of "price" entities. Server-side, I use the following class to meet this requirement:
public class Bond : BaseAsset
{
public int ID { get; set; }
public string Name { get; set; }
public virtual List<Price> Prices { get; set; }
}
This leads to the table "Price" having a column each for bond, stock, etc., and, in case a price is attached to a bond, an entry in its bond foreign-key column.
The error I initially got was
There is already an open DataReader associated with this Command
I fixed that by altering the Connection String to allow MultipleActiveResultSets.
However, I feel there must be better options or at least alternatives when handling nested models. Is it, e.g., a sign for bad model design when one runs into such a problem? Would eager loading change anything?
One alternative to MARS is to disable lazy loading.
In your DbContext:
Configuration.LazyLoadingEnabled = false;
Plus, when you are loading your data, you can eagerly load your child tables:
context.Bonds.Include(b => b.Prices)
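Pulling that together, a minimal sketch (EF 6; the entity names come from the question, while the controller shape and context name are my assumptions):

```csharp
// Lazy loading disabled, related Prices loaded eagerly in one query,
// so no second DataReader is ever opened on the connection.
public class PortfolioContext : DbContext
{
    public DbSet<Bond> Bonds { get; set; }

    public PortfolioContext()
    {
        Configuration.LazyLoadingEnabled = false;   // no implicit lazy loads
    }
}

public class BondsController : ApiController
{
    public IHttpActionResult Get()
    {
        using (var context = new PortfolioContext())
        {
            var bonds = context.Bonds
                .Include(b => b.Prices)   // eager load: single JOINed query
                .ToList();
            return Ok(bonds);
        }
    }
}
```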