DbContext Remove generates DbUpdateConcurrencyException when DATETIME includes milliseconds - entity-framework-core

I have a "legacy" table that I'm managing in a webapp. It uses a compound key, rather than an auto-increment integer, which I register during OnModelCreating:
modelBuilder.Entity<EmployeeLog>().HasKey(table => new { table.EmployeeID, table.DateCreated });
If it matters, the EmployeeID is a VARCHAR and DateCreated is a DATETIME.
When I attempt to delete a record:
...
// the UI stores the date/time without milliseconds, use a theta join to bracket the time
var model = await _context.EmployeeLogs.FirstOrDefaultAsync(l => l.EmployeeID == driverID && (l.DateCreated >= date && l.DateCreated < date.AddSeconds(1)));
_context.Remove(model);
var result = await _context.SaveChangesAsync();
I get an error when SaveChangesAsync is called:
DbUpdateConcurrencyException: Database operation expected to affect 1 row(s) but actually affected 0 row(s). Data may have been modified or deleted since entities were loaded.
The model is correctly identified and contains the expected data.
It appears that the issue is related to the granularity of the date. If milliseconds are included, for example 2021-04-27 08:06:33.193, then the error is generated. If the date/time does not include milliseconds, for example 2021-04-27 08:06:33.000, the deletion works correctly.
I can truncate milliseconds when creating the record, but I'd like to know if there is a way to handle this if the record already contains milliseconds.
** edit **
I don't have any control over the vendor's database decisions, so I need a solution that addresses that reality.
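A possible mitigation (a sketch, not from the original post, and it assumes SQL Server as the provider): EF Core sends .NET DateTime parameters as datetime2(7) by default, so the value in the generated DELETE ... WHERE DateCreated = @p can carry more precision than the legacy DATETIME column stores once the server converts the column for comparison, and the row is not matched. Mapping the key column to the legacy type keeps the comparison at the same precision:

modelBuilder.Entity<EmployeeLog>(entity =>
{
    entity.HasKey(table => new { table.EmployeeID, table.DateCreated });
    // Assumption: the column really is SQL Server DATETIME (~3.33 ms precision).
    // Without this mapping, EF Core sends the parameter as datetime2(7) and a value such as
    // 2021-04-27 08:06:33.193 may fail to match the stored value, so the DELETE affects 0 rows.
    entity.Property(table => table.DateCreated).HasColumnType("datetime");
});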

Related

StartsWith not working with integer data type

I am getting System.NullReferenceException: Object reference not set to an instance of an object. when running the code below, where lotId is of integer type:
inventories = inventories.Where(u => u.lotId.ToString().StartsWith(param.Lot));
It used to work in netcoreapp2.0 but does not work in netcoreapp3.1.
The reason it likely worked before is that you were running EF Core 2.x, which enabled client-side evaluation by default, whereas EF Core 3.1+ has it disabled by default. You can force that filter to run on the client (a sketch of which appears below), or better, consider an approach that doesn't require client-side evaluation at all. For instance, if your lot IDs are 7-digit numbers where the first digit denotes a Lot, then calculate a range to compare:
var lotStart = param.Lot * 1000000;
var lotEnd = lotStart + 999999;
inventories = inventories.Where(u => u.lotId >= lotStart && u.lotId <= lotEnd);
This assumes that the first digit was used to group lots. Client-side evaluation should be avoided where possible because it results in returning far more data to be processed in memory. A client-side-evaluated version like your original would return all inventory records (after whatever filtering can still be done on the server) and then apply the lot ID check in memory once all of those rows are loaded.
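For completeness, here is what explicitly opting into client-side evaluation looks like in EF Core 3.1+ (a sketch only, reusing the names from the question; AsEnumerable() switches to LINQ to Objects, so every row that survives the earlier server-side filters is loaded into memory before the StartsWith check runs):

// Work before AsEnumerable() runs in SQL; the StartsWith filter runs in memory.
var filtered = inventories
    .AsEnumerable()
    .Where(u => u.lotId.ToString().StartsWith(param.Lot))
    .ToList();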

How to handle different resolution time.Time objects after retrieving from a Postgres db

I'm working on a basic CRUD service. I'm trying to test that when I create/store an object and then retrieve that object from the DB, the object I get is the same. For a bit of implementation detail, I'm trying to persist a struct into a Postgres DB, then retrieve that struct and compare the two structs to ensure they are equal.
I'm hitting an issue whereby the original struct's time.Time field has a higher resolution than the one retrieved from the DB, presumably because Postgres has a smaller resolution for timestamps? (I'm storing the time objects as Postgres's timestamp with time zone)
The original time.Time object: 2020-12-20 20:20:11.1699442 +0000 GMT m=+0.002995101
The time retrieved from the DB: 2020-12-20 20:20:11.169944 +0000 GMT
Is there any way around this?
My options seem to be:
truncate the original time's resolution. Issues: can't seem to find any way to do that, plus, I don't want storage implementation details leaking into my domain layer
instead compare the object IDs to ensure they're the same. Issues: this seems flimsy and doesn't assure me that everything I store from that struct is returned as it was
compare each field manually and do some conversion of the time objects so they are the same resolution. Issues: this is messy and only kicks this issue down the road
This situation can come up in a number of circumstances, any time there are multiple platforms in play that use different precision for times.
The best way to handle such tests is to check that the delta between the two times is sufficiently small, e.g.:
var expected time.Time = /* your expected value */
var actual time.Time = /* the actual value */
if delta := expected.Sub(actual); delta < -time.Millisecond || delta > time.Millisecond {
t.Fail("actual time is more than 1ms different than expected time")
}

Entity Framework OrderBy().Where() expression

I've been using Entity Framework for a little while now with no issues, until I stumbled upon a curly one... well, it is for me at least. I have searched the internet and cannot find anything related to this, but I assume it is merely because I am asking the wrong question. So here goes...
query = query.OrderByDescending(u => u.DateCreated);
This is simple and works fine. However, the table being queried is for workflow and there are 4 date columns: CreatedDate, EstimatedDate, RevisedDate and ActualDate. At the beginning of the workflow for this element, the CreatedDate will be populated and all the other date columns will be NULL. As the element progresses through the workflow, the subsequent dates will be filled in.
So what I am trying to achieve is this: I don't want any grouping, I just want the date used for OrderBy() to be the latest date in the workflow.
I can achieve this by adding another column to my table called FilterDate, which is used solely for sorting and gets updated with the appropriate date based upon the workflow; however, that is adding another column to my table just because I can't come up with a smarter method of achieving this.
It's not pretty, but this should be what you're looking for, assuming that ActualDate (if populated) is always >= RevisedDate >= EstimatedDate >= CreatedDate:
query = query.OrderByDescending(u => u.ActualDate.HasValue
    ? u.ActualDate.Value
    : u.RevisedDate.HasValue
        ? u.RevisedDate.Value
        : u.EstimatedDate.HasValue
            ? u.EstimatedDate.Value
            : u.CreatedDate);
This will order by whichever date is available, preferring the Actual over the Revised over the Estimated and defaulting to the Created.
This doesn't handle the case where a RevisedDate could be after an ActualDate, for instance. If the row had a RevisedDate of 2020-12-02 and an ActualDate of 2020-11-25, this query would use the ActualDate for comparison, not the later RevisedDate.
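As an aside, the same fallback chain can be written more compactly with the null-coalescing operator, assuming ActualDate, RevisedDate and EstimatedDate are nullable DateTime? properties; EF translates ?? into a SQL COALESCE, so the ordering still happens in the database:

// Order by the first non-null date, falling back to CreatedDate
query = query.OrderByDescending(u => u.ActualDate ?? u.RevisedDate ?? u.EstimatedDate ?? u.CreatedDate);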
If I understand your question properly, then you don't need to add an extra column to your table.
You just need to add a property decorated with [NotMapped] to your object; because of that attribute, the property won't be saved in the database.
[NotMapped]
public DateTime FilteringDate
{
    get
    {
        if (ActualDate.HasValue) return (DateTime)ActualDate;
        else if (RevisedDate.HasValue) return (DateTime)RevisedDate;
        else if (EstimatedDate.HasValue) return (DateTime)EstimatedDate;
        else return CreatedDate;
    }
}
usage
query = query.OrderByDescending(u => u.FilteringDate);
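One caveat to add here: neither EF6 nor EF Core can translate a [NotMapped] property into SQL, so if query is still an IQueryable running against the database, ordering by FilteringDate has to happen in memory after the rows are materialized. A minimal sketch of that, assuming the filtered result set is small enough to load:

// Switch to LINQ to Objects, then order client-side on the unmapped property
var ordered = query.AsEnumerable()
    .OrderByDescending(u => u.FilteringDate)
    .ToList();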

WCF Data services remove milliseconds from DateTime when expand

I get some strange behavior when using WCF Data Services 5.6.
In my case, I have a table with one column set to Concurrency=Fixed; this column holds a datetime value from the database, updated each time the row is edited.
If I just retrieve the entity, this column has the correct value with milliseconds. But if I do a projection (mapping), the milliseconds are removed.
Here is the issue at a glance:
====================================================================
void Main()
{
var b = from p in TABLE1 where p.ID == 100 select p;
b.Dump();
}
The request in this case is: http://xxxx/Data.svc/TABLE1(100M)
And the data returned from the service is:
<d:COL1 m:type="Edm.DateTime">2015-02-16T12:13:52.972</d:COL1>
====================================================================
As you can see, the time here is returned with milliseconds: .972
In the other case:
void Main()
{
var tmp = from p in TABLE1
          where p.ID == 100
          select new TABLE1()
          {
              ID = p.ID,
              COL1 = p.COL1
          };
var a1 = tmp.ToList();
a1.Dump();
}
The request in this case is: http://xxxx/Data.svc/TABLE1(100M)?$select=ID,COL1
And the data returned from the service is:
<d:COL1 m:type="Edm.DateTime">2015-02-16T12:13:52</d:COL1>
====================================================================
The time is returned without milliseconds.
Does anybody have the same problem? Maybe it's a bug in WCF Data Services or in the model?
OK, it seems like I found an answer to this, or at least a way to avoid the problem.
First I traced the SQL generated by the framework, and I can see that in the first case I'm getting the SQL
SELECT ID, COL1 FROM TABLE1
and in the second case I get
SELECT ID, CAST( COL1 AS DATETIME) FROM TABLE1
which causes the problem.
Then I tried updating EF to version 6 and WCF Data Services to version 5.6.3, updating Oracle ODP to the latest version, and trying the Oracle managed driver... without any success.
Then I played a little with the table definition, and I saw that my COL1, with type TIMESTAMP in the database and DateTime in the model, was defined as NOT NULL.
If I remove this constraint from the database definition, I get the correct value with milliseconds.
So, maybe this is a bug in the Oracle driver, but it is probably a bug in WCF Data Services. At least I found a way to use concurrency in my case with this workaround.

C# Comparing lists of data from two separate databases using LINQ to Entities

I have 2 SQL Server databases, hosted on two different servers. I need to extract data from the first database, which is going to be a list of integers. Then I need to compare this list against data in multiple tables in the second database. Depending on some conditions, I need to update or insert some records in the second database.
My solution:
(WCF Service/Entity Framework using LINQ to Entities)
Get the list of integers from the 1st DB; this takes less than a second and returns 20,942 records.
I use the list of integers to compare against a table in the second DB using the following query:
List<int> pastDueAccts; //Assuming this is the list from Step#1
var matchedAccts = from acct in context.AmAccounts
                   where pastDueAccts.Contains(acct.ARNumber)
                   select acct;
The above query is taking so long that it gives a timeout error, even though the AmAccount table only has ~400 records.
After I get these matchedAccts, I need to update or insert records in a separate table in the second db.
Can someone help me do step #2 more efficiently? I think the Contains function makes it slow. I tried brute force too, using a foreach loop that extracts one record at a time and does the comparison. It still takes too long and gives a timeout error. The database server shows only 30% of its memory has been used.
Profile the SQL query being sent to the database by using SQL Profiler. Capture the SQL statement sent to the database and run it in SSMS. You should be able to see the overhead imposed by Entity Framework at this point. Can you paste the SQL statement emitted in step #2 into your question?
The query itself is going to have all 20,942 integers in it.
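If SQL Profiler isn't handy, EF6 can also emit the generated SQL directly (assuming this is EF6, which the context.Configuration usage later in this answer suggests); a minimal sketch:

// EF6: write every SQL command the context sends, including parameters, to the console
context.Database.Log = sql => Console.WriteLine(sql);

That makes it easy to confirm whether all 20,942 integers end up inlined in the generated statement.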
If your AmAccount table will always have a low number of records like that, you could just return the entire list of ARNumbers, compare them to the list, then be specific about which records to return:
List<int> pastDueAccts; //Assuming this is the list from Step#1
//Materialize the full list of ARNumbers (the table only has ~400 rows)
List<int> amAcctNumbers = (from acct in context.AmAccounts
                           select acct.ARNumber).ToList();
//Get a list of integers that are in both lists
var pastDueAmAcctNumbers = pastDueAccts.Intersect(amAcctNumbers).ToList();
var pastDueAmAccts = from acct in context.AmAccounts
                     where pastDueAmAcctNumbers.Contains(acct.ARNumber)
                     select acct;
You'll still have to worry about how many ids you are supplying to that query, and you might end up needing to retrieve them in batches.
UPDATE
Hopefully somebody has a better answer than this, but with so many records and doing this purely in EF, you could try batching it like I stated earlier:
//Suggest disabling auto detect changes
//Otherwise you will probably have some serious memory issues
//With 2MM+ records
context.Configuration.AutoDetectChangesEnabled = false;
List<int> pastDueAccts; //Assuming this is the list from Step#1
const int batchSize = 100;
for (int i = 0; i < pastDueAccts.Count; i += batchSize)
{
    //Guard against reading past the end of the list on the final batch
    var batch = pastDueAccts.GetRange(i, Math.Min(batchSize, pastDueAccts.Count - i));
    //Each iteration issues one query with at most batchSize values in the IN clause
    var pastDueAmAccts = (from acct in context.AmAccounts
                          where batch.Contains(acct.ARNumber)
                          select acct).ToList();
    //...update or insert records based on this batch...
}