Why does EF Core 7 always translate my datetime where clause to '0001-01-01T00:00:00.0000000' - entity-framework-core

I'm pretty sure I've done this in EF Core 6 and it worked before, but now I've upgraded to EF Core 7 and my datetime values are being translated to '0001-01-01T00:00:00.0000000'
For example:
Console.Write("DateFrom:");
Console.WriteLine(dateFrom);
query = query.Where(e => e.TIME_START >= dateFrom);
var count = await query.CountAsync(cancellationToken: cancellationToken);
produces this in the console:
DateFrom:1/30/2023 12:00:00 AM
[10:15:11 INF] Executed DbCommand (19ms) [Parameters=[#__dateFrom_0='0001-01-01T00:00:00.0000000'], CommandType='Text', CommandTimeout='30']
SELECT COUNT(*)
FROM [reports].[RPT_RUN] AS [r]
WHERE [r].[SUCCESS] = CAST(1 AS bit) AND [r].[TIME_START] >= #__dateFrom_0
I was under the impression that a comparison like this in the where clause should just work, shouldn't it?

Don't know why, but it works fine if I move the DbContext code to a brand-new project. Seems like something in my existing project is causing the issue.
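Since it works in a brand-new project, isolating just this query is a reasonable next step. Below is a minimal isolation sketch (the ReportsContext and RptRun names are assumptions, not from the original code); copying the filter value into a fresh local right before the Where call rules out the captured variable being overwritten elsewhere in the larger project:

// Minimal isolation sketch -- ReportsContext/RptRun are hypothetical names.
var dateFrom = new DateTime(2023, 1, 30);
using var db = new ReportsContext();

IQueryable<RptRun> query = db.RptRuns.Where(e => e.SUCCESS);

var localDateFrom = dateFrom;                          // snapshot into a fresh local
query = query.Where(e => e.TIME_START >= localDateFrom);

// If the logged parameter still shows 0001-01-01 here, the cause is in this
// project's provider or configuration rather than in the captured variable.
var count = await query.CountAsync();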


Is it possible to have Hibernate generate update-from-values statements for PostgreSQL?

Given a postgresql table
Table "public.test"
Column | Type | Modifiers
----------+-----------------------------+-----------
id | integer | not null
info | text |
And the following values:
# select * from test;
id | info
----+--------------
3 | value3
4 | value4
5 | value5
As you may know, with PostgreSQL you can use this kind of statement to update multiple rows with different values:
update test set info=tmp.info from (values (3,'newvalue3'),(4,'newvalue4'),(5,'newvalue5')) as tmp (id,info) where test.id=tmp.id;
And it results in the table being updated in a single query to:
# select * from test;
id | info
----+--------------
3 | newvalue3
4 | newvalue4
5 | newvalue5
I have been looking everywhere for a way to make Hibernate generate this kind of statement for update queries. I know how to make it work for insert queries (with the reWriteBatchedInserts JDBC option and Hibernate batch config options).
But is it possible for update queries, or do I have to write the native query myself?
No matter what I do, Hibernate always sends separate update queries to the database (I'm looking at the PostgreSQL server statement logs to confirm this).
2020-06-18 08:19:48.895 UTC [1642] LOG: execute S_6: BEGIN
2020-06-18 08:19:48.895 UTC [1642] LOG: execute S_8: update test set info = $1 where id = $2
2020-06-18 08:19:48.895 UTC [1642] DETAIL: parameters: $1 = 'newvalue3', $2 = '3'
2020-06-18 08:19:48.896 UTC [1642] LOG: execute S_8: update test set info = $1 where id = $2
2020-06-18 08:19:48.896 UTC [1642] DETAIL: parameters: $1 = 'newvalue4', $2 = '4'
2020-06-18 08:19:48.896 UTC [1642] LOG: execute S_8: update test set info = $1 where id = $2
2020-06-18 08:19:48.896 UTC [1642] DETAIL: parameters: $1 = 'newvalue4', $2 = '5'
2020-06-18 08:19:48.896 UTC [1642] LOG: execute S_1: COMMIT
I always find it many times faster to issue a single massive update query than many separate updates targeting single rows. With many separate update queries, even though they are sent in a batch by the JDBC driver, they still need to be processed sequentially by the server, so it is not as efficient as a single update query targeting multiple rows. So if anyone has a solution that wouldn't involve writing native queries for my entities, I would be very glad!
Update
To further refine my question, I want to add a clarification. I'm looking for a solution that wouldn't abandon Hibernate's dirty-checking feature for entity updates. I'm trying to avoid writing batch update queries by hand for the general case of updating a few basic fields with different values on a list of entities. I'm currently looking into Hibernate's SPI to see if it's doable. org.hibernate.engine.jdbc.batch.spi.Batch seems to be the proper place, but I'm not quite sure yet because I've never done anything with the Hibernate SPI. Any insights would be welcome!
You can use Blaze-Persistence for this, which is a query builder on top of JPA that supports many advanced DBMS features on top of the JPA model.
It does not yet support the FROM clause in DML, but that is about to land in the next release: https://github.com/Blazebit/blaze-persistence/issues/693
Meanwhile you could use CTEs for this. First you need to define a CTE entity (a concept of Blaze-Persistence):
@CTE
@Entity
public class InfoCte {
    @Id Integer id;
    String info;
}
I'm assuming your entity model looks roughly like this
@Entity
public class Test {
    @Id Integer id;
    String info;
}
Then you can use Blaze-Persistence like this:
criteriaBuilderFactory.update(entityManager, Test.class, "test")
    .with(InfoCte.class, false)
        .fromValues(Test.class, "newInfos", newInfosCollection)
        .bind("id").select("newInfos.id")
        .bind("info").select("newInfos.info")
    .end()
    .set("info")
        .from(InfoCte.class, "cte")
        .select("cte.info")
        .where("cte.id").eqExpression("test.id")
    .end()
    .whereExists()
        .from(InfoCte.class, "cte")
        .where("cte.id").eqExpression("test.id")
    .end()
    .executeUpdate();
This will create an SQL query similar to the following
WITH InfoCte(id, info) AS (
    SELECT t.id, t.info
    FROM (VALUES (1, 'newValue', ...)) t(id, info)
)
UPDATE test
SET info = (SELECT cte.info FROM InfoCte cte WHERE cte.id = test.id)
WHERE EXISTS (SELECT 1 FROM InfoCte cte WHERE cte.id = test.id)

Doctrine query result is empty but the generated query works in Postgres

I have a native query in Doctrine which generates the right SQL, and that SQL runs fine in PostgreSQL. However, Doctrine returns null/empty results.
public function countUnReadedNotifications(User $user)
{
    $rsm = new ResultSetMapping();
    $sql =
        "SELECT count(ele->'status') FROM notification CROSS JOIN LATERAL jsonb_array_elements(items) status(ele) WHERE notification.id = ? and (ele->'status')::jsonb @> '1'";
    $query = $this->getEntityManager()->createNativeQuery($sql, $rsm)->setParameter('1', $user->getNotification()->getId());

    return $query->getSingleScalarResult();
}
This Doctrine native query returns null.
If I check the profiler I see the generated query is:
SELECT count(ele->'status') FROM notification CROSS JOIN LATERAL jsonb_array_elements(items) status(ele) WHERE notification.id = 21 and (ele->'status')::jsonb @> '1';
If I copy-paste this into pgAdmin it runs and retrieves the desired "2".
So pgAdmin gives me the right result, but Doctrine does not.
Can somebody see where and what went wrong?
Doctrine version: 2.6.3, Postgres is 10.

Entity Framework executes both INSERT and UPDATE stored procedures when you do INSERT

I have this stored procedure, which is mapped to an Entity Framework 4.1 object. The call is made within the following TransactionScope block:
using (TransactionScope transaction = new TransactionScope())
{
    try
    {
        DbEntity.Car.AddObject(CarInfo);
        DbEntity.SaveChanges();
        /* Other object savings */
        transaction.Complete();
        DbEntity.AcceptAllChanges();
    }
    catch (Exception exp)
    {
        throw exp;
    }
    finally
    {
        DbEntity.Dispose();
    }
}
The stored procedure mapping is set up correctly. If I execute the stored procedure alone on MS SQL Server, it executes correctly.
Here is the stored procedure
ALTER PROCEDURE [dbo].[Carinsert] @Qty INT
    ,@StyleID INT
    ,@TFee MONEY
    ,@HWayTax MONEY
    ,@OFees MONEY
    ,@OFeesDescription NTEXT
    ,@MUp DECIMAL(18, 4)
    ,@BAss MONEY
    ,@PriceMSRP MONEY
    ,@PriceSpecial MONEY
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @PTotal MONEY
    DECLARE @TaxFeesNet MONEY
    DECLARE @CarID INT

    SET @TaxFeesNet = ISNULL(@TFee, 0) + ISNULL(@HWayTax, 0)
        + ISNULL(@OFees, 0)

    IF (@PriceSpecial IS NULL)
    BEGIN
        SET @PTotal = @PriceMSRP + @TaxFeesNet
    END
    ELSE
    BEGIN
        SET @PTotal = @PriceSpecial + @TaxFeesNet
    END

    INSERT INTO Car
        (Qty
        ,StyleID
        ,MUp
        ,BAss
        ,PriceMSRP
        ,PriceSpecial
        ,TFee
        ,HWayTax
        ,OFees
        ,OFeesDescription
        ,PriceTotal)
    VALUES (@Qty
        ,@StyleID
        ,@MUp
        ,@BAss
        ,@PriceMSRP
        ,@PriceSpecial
        ,@TFee
        ,@HWayTax
        ,@OFees
        ,@OFeesDescription
        ,@PTotal)

    SELECT SCOPE_IDENTITY() AS CarID
END
If I execute this directly on MS SQL with the parameters below, it calculates the PriceTotal column in the table as 3444.00, which is correct.
@Qty = 5,
@StyleID = 331410,
@TFee = NULL,
@HWayTax = NULL,
@OFees = NULL,
@OFeesDescription = NULL,
@MUp = 4,
@BAss = 10000,
@PriceMSRP = 20120,
@PriceSpecial = 3444
When I run the MVC web application and debug, I see these same values being passed, yet PriceTotal comes out as 20120.00.
I couldn't figure out why it does not do the IF/ELSE calculation and use the special price.
Does anybody else see something weird? This has been daunting for a few days now. Any help appreciated. Thanks
Update
I updated the title to better guide others.
A few minutes after posting the question, I figured it all out.
There were actually two bugs:
1. Entity Framework used the UPDATE stored procedure even when I tried to INSERT new records. For the record, the UPDATE stored procedure requires the CarID [primary key]. Maybe EF first does the INSERT and then does the UPDATE right away, even for creating new records?
2. The UPDATE sproc had a bug: it checked for NULL using <> when it should have used IS NOT NULL.
In any case, this is now the bigger issue: how do I force EF to use the INSERT sproc and not the UPDATE when all I want to do is create a new record?
I tried
DbEntityContext.ObjectStateManager.ChangeObjectState(carInfo, EntityState.Added);
and still EF kept on calling the UPDATE sproc.
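One diagnostic worth trying (a sketch against the EF 4.x ObjectContext API, not a confirmed fix): the function mapping EF picks follows the tracked EntityState, so inspecting the state entry right before SaveChanges shows whether the entity is really tracked as Added. Names follow the code above:

var carInfo = new Car { Qty = 5, StyleID = 331410 /* remaining values omitted */ };

DbEntity.Car.AddObject(carInfo);

// An entity tracked as Added maps to the INSERT function; Modified maps to UPDATE.
var entry = DbEntity.ObjectStateManager.GetObjectStateEntry(carInfo);
Console.WriteLine(entry.State);   // expected: EntityState.Added

DbEntity.SaveChanges();

If the state is anything other than Added here, the UPDATE call made at SaveChanges is at least consistent with how the entity is being tracked.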

EF4 - update [Table] set @p = 0 where

While going through SQL profiler, I noticed the following query generated by EF4.
exec sp_executesql N'declare @p int
update [dbo].[User]
set @p = 0
where (([UserID] = @0) and ([RowVersion] = @1))
select [RowVersion]
from [dbo].[User]
where @@ROWCOUNT > 0 and [UserID] = @0',N'@0 int,@1 binary(8)',@0=1,@1=0x000000000042DDCD
I am not sure why EF4 generates this while I am actually not updating any columns of the User table in that UnitOfWork. Running this query updates the RowVersion column (timestamp datatype) which leads to OptimisticConcurrencyException in the next UnitOfWork.
A quick googling led me to this link, which confirms that others have also run into this scenario without finding a solution yet.
Would greatly appreciate any pointers.
Edit: here is sample code to replicate the issue.
User and Session tables have a foreign key relationship. Also, in EF4 I have set the "Concurrency Mode" property of RowVersion columns of both entities to Fixed.
Below is a sample method to replicate the scenario.
private static void UpdateSession()
{
    using (var context = new TestEntities())
    {
        context.ContextOptions.ProxyCreationEnabled = false;
        var session = context.Users.Include("Sessions").First().Sessions.First();
        session.LastActivityTime = DateTime.Now;
        context.ApplyCurrentValues("Sessions", session);
        context.SaveChanges();
    }
}
I see from SQL Profiler the following queries being generated by EF4.
exec sp_executesql N'update [dbo].[Session]
set [LastActivityTime] = @0
where (([SessionID] = @1) and ([RowVersion] = @2))
select [RowVersion]
from [dbo].[Session]
where @@ROWCOUNT > 0 and [SessionID] = @1',N'@0 datetime2(7),@1 int,@2 binary(8)',@0='2011-06-20 09:43:30.6919628',@1=1,@2=0x00000000000007D7
And the next query is weird.
exec sp_executesql N'declare @p int
update [dbo].[User]
set @p = 0
where (([UserID] = @0) and ([RowVersion] = @1))
select [RowVersion]
from [dbo].[User]
where @@ROWCOUNT > 0 and [UserID] = @0',N'@0 int,@1 binary(8)',@0=1,@1=0x00000000000007D3
Not sure if this is still a problem for you, but here is the hotfix from Microsoft: http://support.microsoft.com/kb/2390624
Found this link referenced on another forum and was able to obtain a download for the hotfix mentioned by Kris Ivanov.
http://support.microsoft.com/hotfix/KBHotfix.aspx?kbnum=2390624
Not sure about EF4, but with EF 4.1 we turned off the rowversion/timestamp concurrency check by setting the property's concurrency token to false.
We did this because:
It was a calculated field in our db
It should never be changed by the application (in our case)
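For reference, here is a minimal EF 4.1 Code First sketch of what turning the concurrency token off can look like (the User properties and TestContext name are assumptions; with an EDMX model the equivalent is setting the column's Concurrency Mode back to None in the designer):

public class User
{
    public int UserID { get; set; }
    public byte[] RowVersion { get; set; }
}

public class TestContext : DbContext
{
    public DbSet<User> Users { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Keep RowVersion mapped, but stop EF from using it as an optimistic
        // concurrency token, so saves no longer compare or re-read it.
        modelBuilder.Entity<User>()
            .Property(u => u.RowVersion)
            .IsConcurrencyToken(false);
    }
}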

Entity Framework object missing related data

I have an Entity Framework model that has two tables, client and postcode. Postcode can have many clients, client can have 1 postcode. They are joined on the postcode.
The two tables are mapped to views.
I have some clients that do not have a Postcode in the model, however in the DB they do!
I ran some tests and found postcodes that return clients when I access Postcode.Clients, but not all of the clients. In the DB one postcode had 14 related clients, but EF was only returning the first 6. Basically, certain postcodes are not returning all the data.
Lazy loading is turned on and I have tried turning it off without any luck.
Any ideas?
I am using VS 2010, C#, .NET 4.0, EF4 and SQL Server 2008
Thanks
UPDATE:
I have been running through this in LINQPad. I tried the following code:
Client c = Clients.Where(a => a.ClientId == 9063202).SingleOrDefault();
c.PostcodeView.Dump();
This returns null.
I then take the generated SQL and run it as a separate SQL query, and it works correctly (after I add the @ to the start of the variable name).
SELECT TOP (2)
[Extent1].[ClientId] AS [ClientId],
[Extent1].[Surname] AS [Surname],
[Extent1].[Forename] AS [Forename],
[Extent1].[FlatNo] AS [FlatNo],
[Extent1].[StNo] AS [StNo],
[Extent1].[Street] AS [Street],
[Extent1].[Town] AS [Town],
[Extent1].[Postcode] AS [Postcode]
FROM (SELECT
[ClientView].[ClientId] AS [ClientId],
[ClientView].[Surname] AS [Surname],
[ClientView].[Forename] AS [Forename],
[ClientView].[FlatNo] AS [FlatNo],
[ClientView].[StNo] AS [StNo],
[ClientView].[Street] AS [Street],
[ClientView].[Town] AS [Town],
[ClientView].[Postcode] AS [Postcode]
FROM [dbo].[ClientView] AS [ClientView]) AS [Extent1]
WHERE 9063202 = [Extent1].[ClientId]
GO
-- Region Parameters
DECLARE @EntityKeyValue1 VarChar(8) = 'G15 6NB'
-- EndRegion
SELECT
[Extent1].[Postcode] AS [Postcode],
[Extent1].[ltAstId] AS [ltAstId],
[Extent1].[ltLhoId] AS [ltLhoId],
[Extent1].[ltChcpId] AS [ltChcpId],
[Extent1].[ltCppId] AS [ltCppId],
[Extent1].[ltWardId] AS [ltWardId],
[Extent1].[ltAst] AS [ltAst],
[Extent1].[ltCpp] AS [ltCpp],
[Extent1].[ltWard] AS [ltWard],
[Extent1].[WardNo] AS [WardNo],
[Extent1].[Councillor] AS [Councillor],
[Extent1].[ltAdminCentre] AS [ltAdminCentre],
[Extent1].[ltChcp] AS [ltChcp],
[Extent1].[Forename] AS [Forename],
[Extent1].[Surname] AS [Surname],
[Extent1].[AreaNo] AS [AreaNo],
[Extent1].[LtAomId] AS [LtAomId],
[Extent1].[OOHltCoordinatorId] AS [OOHltCoordinatorId],
[Extent1].[OvernightltCoordinatorId] AS [OvernightltCoordinatorId],
[Extent1].[DayltCoordinatorId] AS [DayltCoordinatorId]
FROM (SELECT
[PostcodeView].[Postcode] AS [Postcode],
[PostcodeView].[ltAstId] AS [ltAstId],
[PostcodeView].[ltLhoId] AS [ltLhoId],
[PostcodeView].[ltChcpId] AS [ltChcpId],
[PostcodeView].[ltCppId] AS [ltCppId],
[PostcodeView].[ltWardId] AS [ltWardId],
[PostcodeView].[ltAst] AS [ltAst],
[PostcodeView].[ltCpp] AS [ltCpp],
[PostcodeView].[ltWard] AS [ltWard],
[PostcodeView].[WardNo] AS [WardNo],
[PostcodeView].[Councillor] AS [Councillor],
[PostcodeView].[ltAdminCentre] AS [ltAdminCentre],
[PostcodeView].[ltChcp] AS [ltChcp],
[PostcodeView].[Forename] AS [Forename],
[PostcodeView].[Surname] AS [Surname],
[PostcodeView].[AreaNo] AS [AreaNo],
[PostcodeView].[LtAomId] AS [LtAomId],
[PostcodeView].[DayltCoordinatorId] AS [DayltCoordinatorId],
[PostcodeView].[OOHltCoordinatorId] AS [OOHltCoordinatorId],
[PostcodeView].[OvernightltCoordinatorId] AS [OvernightltCoordinatorId]
FROM [dbo].[PostcodeView] AS [PostcodeView]) AS [Extent1]
WHERE [Extent1].[Postcode] = @EntityKeyValue1
Ended up removing the relationship and manually getting the child data.
Nasty, but I cannot find a reason why this is happening. Cheers for the comments.
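For anyone hitting the same thing, a rough sketch of that "manually getting child data" workaround (the Clients set and property names are assumptions based on the views shown above):

// Instead of relying on the Postcode -> Clients navigation property, query
// the client view directly by postcode value.
var clients = context.Clients
    .Where(c => c.Postcode == "G15 6NB")
    .ToList();

// Comparing clients.Count with the count reached through postcode.Clients is
// a quick way to confirm the navigation property is dropping rows.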