I am using EFCore.BulkExtensions to insert and update records in a table, but I have a problem with updating records conditionally.
For example, I have 15 records (10 to insert, 5 to update). I need to insert all 10, but update only 2 of the 5, because the other 3 have an older value in the UpdatedAt property (the database already contains more recent data).
If I use EFCore.BulkExtensions like this:
await _dbContext.BulkInsertOrUpdateAsync(entitiesList, _bulkConfig);
all 10 records will be inserted and all 5 will be updated, so newer data in the database gets overwritten with older values.
To solve my problem I want something like this:
await _dbContext.BulkInsertOrUpdateAsync(entitiesList, _bulkConfig,
(oldRecord, newRecord) => newRecord.UpdatedAt > oldRecord.UpdatedAt);
Can you suggest an efficient way to solve this problem with EFCore.BulkExtensions?
This is not a direct answer for EFCore.BulkExtensions, but an alternative showing how to do that with linq2db.EntityFrameworkCore. Note that I'm one of the creators.
await context.SomeTable
    .ToLinqToDBTable()
    .Merge()
    .Using(entitiesList)
    .On((t, s) => t.Id == s.Id)
    .InsertWhenNotMatched()
    .UpdateWhenMatchedAnd((t, s) => s.UpdatedAt > t.UpdatedAt)
    .MergeAsync();
Select the appropriate package version: 2.x for EF Core 2.x, 3.x for EF Core 3.1.x, etc.
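If you need to stay on EFCore.BulkExtensions, a rough (and non-atomic) workaround is to drop the stale rows yourself before the bulk call. Below is a minimal sketch only: the SomeTable set and the Id/UpdatedAt properties follow the snippets above, and it assumes one extra round trip is acceptable.
// using Microsoft.EntityFrameworkCore; // for ToDictionaryAsync
// Pre-filter: drop updates whose UpdatedAt is not newer than what the database holds.
// Not atomic: a concurrent writer can still slip in between the read and the bulk call.
var ids = entitiesList.Select(e => e.Id).ToList();

// Current timestamps of the rows that already exist.
var existing = await _dbContext.SomeTable
    .Where(e => ids.Contains(e.Id))
    .ToDictionaryAsync(e => e.Id, e => e.UpdatedAt);

// Keep the new rows plus only the updates that are strictly newer.
var toWrite = entitiesList
    .Where(e => !existing.TryGetValue(e.Id, out var stored) || e.UpdatedAt > stored)
    .ToList();

await _dbContext.BulkInsertOrUpdateAsync(toWrite, _bulkConfig);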
How can I transform this SQL query into an EF LINQ command?
"update dbo.table set col1 = col1 + 1 where Id = 27"
I want to execute this query as one command, to avoid concurrency problems
in case another client modifies the record at the same time.
I'm looking for a way to do that with EF, but in a single command.
I tried this, but I'm looking for a better solution:
context.table1.FromSqlInterpolated($"update dbo.table set col1= col1+ 1 where Id=27").FirstOrDefaultAsync();
I would propose using linq2db.EntityFrameworkCore (note that I'm one of the creators).
Then you can do it with ease:
await context.table1.Where(x => x.Id == 27)
    .Set(x => x.Col1, prev => prev.Col1 + 1)
    .UpdateAsync();
There are ways to update a column without querying it first, but the problem you have is that the update is based on the existing value.
Entity Framework can't help you there; you can only do what you want with a direct SQL statement.
Even the original SQL statement should be executed within a transaction if you want to be sure no other change can occur between reading and updating the value. It is one SQL statement, but the database still has to read the value, increment it, and store it.
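As an illustration, here is a minimal sketch of sending that direct statement as one command from EF Core (assuming EF Core 3.x+, since FromSqlInterpolated in the question is an EF Core API):
// Single statement, single round trip: the read-increment-write of col1
// happens entirely inside this one UPDATE on the database side.
var id = 27;
var rows = await context.Database.ExecuteSqlInterpolatedAsync(
    $"update dbo.table set col1 = col1 + 1 where Id = {id}");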
I have a large repository method which generates a regular query on the backend. Some of the parameters I pass to that repository method are max-results, first-result, order-by and order-by-dir, in order to control the number of records to display, the pagination, and the ordering of the records. The problem is that with some configurations, e.g. (4th page, max-results: 10, first-result: 40), which should give me the 40th to 50th records out of 1000+ records in the database, fewer than 10 records of the 1000+ are returned.
QB Code
....
return $total ? // bool parameter: true = return only the record count, false = return the records
    $qb
        ->select($qb->expr()->count('ec.id'))
        ->getQuery()->getSingleScalarResult() :
    $qb // the aliases below are related entities, all joined via leftJoin on the QB
        ->addSelect('c')
        ->addSelect('e')
        ->addSelect('pr')
        ->addSelect('cl')
        ->addSelect('ap')
        ->addSelect('com')
        ->addSelect('cor')
        ->addSelect('nav')
        ->addSelect('pais')
        ->addSelect('tarifas')
        ->addSelect('transitario')
        ->orderBy(isset($options['sortBy']) ? $options['sortBy'] : 'e.bl',
                  isset($options['sortDir']) ? $options['sortDir'] : 'asc')
        ->getQuery()
        ->setMaxResults(isset($options['limit']) ? $options['limit'] : 10)
        ->setFirstResult(isset($options['offset']) ? $options['offset'] : 0)
        ->getArrayResult();
Scenario 1: QueryBuilder with orderBy, and the database directly
QB: In this case the result is only one entity with the expected data, but just one entity, not 10, even though more than 1000 records exist.
DB: In this case I get 10 records, but all with the same entity (the same output as the QB, repeated 10 times).
Scenario 2: QueryBuilder without orderBy, and the database directly
QB: In this case the result is as expected: 10 records filtered from the 1000+ records.
DB: In this case the result is as expected: 10 records.
The only problem in this scenario is that I can't order my results using the QB.
Environment description
Symfony: 3.4.11
PostgreSQL: 9.2
PHP 7.2
OS: Ubuntu Server 16.04 x64
Why are Doctrine/Postgres giving me that kind of result?
There are no exceptions and no misconfigurations; it only cuts the results when I use orderBy.
Posting this as an answer, as discussed in the comments.
I guess it's because you are selecting the related entities via left join, so you get multiple result rows per main entity (due to the one-to-many relationships). Without the order by, those duplicate rows were still there, but scattered through the unsorted result set, so you didn't notice them; with the order by, the duplicates line up next to each other and become visible.
As a workaround for your case, select only your main entity (say A) in the query builder, without the addSelect(...) calls for the related ones, and rely on lazy loading when you want to display the data from the related entities.
I have a repository into which I insert a series of events.
Now I want to make sure that, when I insert event DTOs, no new events have appeared in the meantime.
So what I do now is:
Query the highest ChangedDate or the highest version
Assert nothing is newer (throw if something is newer)
Make the insert
But between the query and the insert there is a time gap. I need to group these together so that it is somehow transactional.
Is there a way to do this, or do I have to live with this gap and solve the problem with the infrastructure?
I figured it out.
I can use FindOneAndUpdate as an upsert. It is possible to omit the update values and only set the SetOnInsert values,
like this:
var builder = new UpdateDefinitionBuilder<Dto>();
var def = builder
.SetOnInsert(x => x.Version, EventVersion)
...
Now, when running the following update, it only performs an insert if my version condition is met:
collection.FindOneAndUpdate(VersionCondition, updateValues, new FindOneAndUpdateOptions<Event>() {IsUpsert = true});
After this I can check whether the returned document is the same one I wanted to insert.
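For reference, here is a fuller sketch of that pattern with the MongoDB C# driver. The Event type and its StreamId/Version/Payload fields, and the variable names, are placeholders standing in for the VersionCondition and update values above:
// using MongoDB.Driver;
// Insert only when no event with an equal or newer version exists for this stream;
// non-equality parts of the filter are not copied into the inserted document.
var versionCondition = Builders<Event>.Filter.Eq(x => x.StreamId, streamId)
                     & Builders<Event>.Filter.Gte(x => x.Version, newVersion);

// All values go into SetOnInsert, so a matching (newer) document is left untouched.
var setOnInsertOnly = Builders<Event>.Update
    .SetOnInsert(x => x.Version, newVersion)
    .SetOnInsert(x => x.Payload, payload);

var options = new FindOneAndUpdateOptions<Event>
{
    IsUpsert = true,
    ReturnDocument = ReturnDocument.After   // return whatever ends up stored
};

var stored = await collection.FindOneAndUpdateAsync(versionCondition, setOnInsertOnly, options);
// If "stored" is not the event we meant to insert, a newer event won the race.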
I get some strange behavior when using WCF Data Services 5.6.
In my case, I have a table with one column set to Concurrency=Fixed; this column holds a date-time value from the database that is updated each time the row is edited.
If I just retrieve the entity, this column has the correct value, including milliseconds. But if I do a projection (mapping), the milliseconds are removed.
Here is the issue at a glance:
====================================================================
void Main()
{
    var b = from p in TABLE1 where p.ID == 100 select p;
    b.Dump();
}
The request in this case is: http://xxxx/Data.svc/TABLE1(100M)
And the data returned from the service is:
<d:COL1 m:type="Edm.DateTime">2015-02-16T12:13:52.972</d:COL1>
====================================================================
As you can see, here the time is returned with milliseconds (.972).
In the other case:
void Main()
{
    var tmp = from p in TABLE1
              where p.ID == 100
              select new TABLE1()
              {
                  ID = p.ID,
                  COL1 = p.COL1
              };
    var a1 = tmp.ToList();
    a1.Dump();
}
The request in this case is: http://xxxx/Data.svc/TABLE1(100M)?$select=ID,COL1
<d:COL1 m:type="Edm.DateTime">2015-02-16T12:13:52</d:COL1>
====================================================================
The time is returned without milliseconds.
Does anybody have the same problem? Maybe it's a bug in WCF Data Services or in the model?
OK, it seems I found an answer to this, or at least a way to avoid the problem.
First I traced the SQL generated by the framework, and I can see that in the first case I get the SQL
SELECT ID, COL1 FROM TABLE1
and in the second case I get
SELECT ID, CAST(COL1 AS DATETIME) FROM TABLE1
which causes the problem.
Then I tried updating EF to version 6, WCF Data Services to version 5.6.3,
and Oracle ODP to the latest one, and tried the Oracle managed driver... without any success.
Then I played a little with the table definition and saw that my COL1 column, of type TIMESTAMP in the database and DateTime in the model, was defined as NOT NULL.
If I remove the NOT NULL constraint from the database definition, I get the right value with milliseconds.
So maybe this is a bug in the Oracle driver, but it is probably a bug in WCF Data Services. At least I found a way to use concurrency in my case with this workaround.
I am using MVC 3 and Entity Framework 4.1. I need to return a view that has a list of rows of DISTINCT values from my Documents database table. In SQL Server, the query that works is as follows:
SELECT DISTINCT(DocNum), Title, DocDate, DocFileName FROM Documents
How do I do the same thing in MVC 3?
var result = (from d in cntx.Documents
select d).Distinct();
Try:
var query = context.Documents.Select(x => new
{
    x.DocNum,
    x.Title,
    x.DocDate,
    x.DocFileName
}).Distinct().ToList();
Distinct must go over all returned columns; otherwise you could end up with a single DocNum but, for example, multiple dates, and the query engine wouldn't know which date to select, because only one record per DocNum can be returned.
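If what is actually wanted is a single row per DocNum (picking one arbitrary Title/DocDate/DocFileName for each), a hedged alternative sketch is to group after materializing the projection:
// Project the four columns, run the query, then keep one row per DocNum.
// The grouping is done in memory on purpose, after ToList(), to keep the
// LINQ-to-Entities part of the query simple.
var oneRowPerDocNum = context.Documents
    .Select(x => new { x.DocNum, x.Title, x.DocDate, x.DocFileName })
    .ToList()                       // executes the SQL query
    .GroupBy(x => x.DocNum)
    .Select(g => g.First())         // arbitrary representative row per DocNum
    .ToList();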