Entity Framework object missing related data

I have an Entity Framework model with two tables, Client and Postcode. A postcode can have many clients, and a client has one postcode; they are joined on the postcode value.
The two tables are mapped to views.
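To be concrete about the shape of the model, it looks roughly like this (a simplified, hand-written sketch; the real classes are generated by EF, and the names are only approximate):

// Simplified sketch of the generated entities (approximate names, not the actual generated code).
using System.Collections.Generic;

public class Client
{
    public int ClientId { get; set; }
    public string Postcode { get; set; }                      // join column, matches PostcodeView.Postcode
    public virtual PostcodeView PostcodeView { get; set; }    // navigation: each client has one postcode
}

public class PostcodeView
{
    public string Postcode { get; set; }                      // key of the postcode view
    public virtual ICollection<Client> Clients { get; set; }  // navigation: a postcode has many clients
}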
Some of my clients have no Postcode in the model, even though they do in the database.
I ran some tests and found postcodes that return clients through Postcode.Clients, but not all of them. In the database one postcode had 14 related clients, yet EF returned only the first 6. Basically, certain postcodes are not returning all of their data.
Lazy loading is turned on and I have tried turning it off without any luck.
Any ideas?
I am using VS 2010, C#, .NET 4.0, EF4 and SQL Server 2008
Thanks
UPDATE:
I have been running through this in LINQPad. I tried the following code:
Client c = Clients.Where(a => a.ClientId == 9063202).SingleOrDefault();
c.PostcodeView.Dump();
This returns null.
I then took the generated SQL and ran it as a separate SQL query, and it works correctly (after adding the @ to the start of the variable name):
SELECT TOP (2)
[Extent1].[ClientId] AS [ClientId],
[Extent1].[Surname] AS [Surname],
[Extent1].[Forename] AS [Forename],
[Extent1].[FlatNo] AS [FlatNo],
[Extent1].[StNo] AS [StNo],
[Extent1].[Street] AS [Street],
[Extent1].[Town] AS [Town],
[Extent1].[Postcode] AS [Postcode]
FROM (SELECT
[ClientView].[ClientId] AS [ClientId],
[ClientView].[Surname] AS [Surname],
[ClientView].[Forename] AS [Forename],
[ClientView].[FlatNo] AS [FlatNo],
[ClientView].[StNo] AS [StNo],
[ClientView].[Street] AS [Street],
[ClientView].[Town] AS [Town],
[ClientView].[Postcode] AS [Postcode]
FROM [dbo].[ClientView] AS [ClientView]) AS [Extent1]
WHERE 9063202 = [Extent1].[ClientId]
GO
-- Region Parameters
DECLARE @EntityKeyValue1 VarChar(8) = 'G15 6NB'
-- EndRegion
SELECT
[Extent1].[Postcode] AS [Postcode],
[Extent1].[ltAstId] AS [ltAstId],
[Extent1].[ltLhoId] AS [ltLhoId],
[Extent1].[ltChcpId] AS [ltChcpId],
[Extent1].[ltCppId] AS [ltCppId],
[Extent1].[ltWardId] AS [ltWardId],
[Extent1].[ltAst] AS [ltAst],
[Extent1].[ltCpp] AS [ltCpp],
[Extent1].[ltWard] AS [ltWard],
[Extent1].[WardNo] AS [WardNo],
[Extent1].[Councillor] AS [Councillor],
[Extent1].[ltAdminCentre] AS [ltAdminCentre],
[Extent1].[ltChcp] AS [ltChcp],
[Extent1].[Forename] AS [Forename],
[Extent1].[Surname] AS [Surname],
[Extent1].[AreaNo] AS [AreaNo],
[Extent1].[LtAomId] AS [LtAomId],
[Extent1].[OOHltCoordinatorId] AS [OOHltCoordinatorId],
[Extent1].[OvernightltCoordinatorId] AS [OvernightltCoordinatorId],
[Extent1].[DayltCoordinatorId] AS [DayltCoordinatorId]
FROM (SELECT
[PostcodeView].[Postcode] AS [Postcode],
[PostcodeView].[ltAstId] AS [ltAstId],
[PostcodeView].[ltLhoId] AS [ltLhoId],
[PostcodeView].[ltChcpId] AS [ltChcpId],
[PostcodeView].[ltCppId] AS [ltCppId],
[PostcodeView].[ltWardId] AS [ltWardId],
[PostcodeView].[ltAst] AS [ltAst],
[PostcodeView].[ltCpp] AS [ltCpp],
[PostcodeView].[ltWard] AS [ltWard],
[PostcodeView].[WardNo] AS [WardNo],
[PostcodeView].[Councillor] AS [Councillor],
[PostcodeView].[ltAdminCentre] AS [ltAdminCentre],
[PostcodeView].[ltChcp] AS [ltChcp],
[PostcodeView].[Forename] AS [Forename],
[PostcodeView].[Surname] AS [Surname],
[PostcodeView].[AreaNo] AS [AreaNo],
[PostcodeView].[LtAomId] AS [LtAomId],
[PostcodeView].[DayltCoordinatorId] AS [DayltCoordinatorId],
[PostcodeView].[OOHltCoordinatorId] AS [OOHltCoordinatorId],
[PostcodeView].[OvernightltCoordinatorId] AS [OvernightltCoordinatorId]
FROM [dbo].[PostcodeView] AS [PostcodeView]) AS [Extent1]
WHERE [Extent1].[Postcode] = @EntityKeyValue1

I ended up removing the relationship and loading the child data manually. Nasty, but I cannot find a reason why this is happening. Cheers for the comments.
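For anyone curious, the manual load is nothing clever; it is roughly this shape (MyEntities stands in for the actual generated context name):

// Rough sketch of the workaround: bypass the Clients navigation property and query the
// client set directly on the join column, so every matching row comes back.
using System.Collections.Generic;
using System.Linq;

public static class PostcodeLoader
{
    public static List<Client> GetClientsForPostcode(MyEntities context, string postcode)
    {
        return context.Clients
                      .Where(c => c.Postcode == postcode)
                      .ToList();
    }
}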

Related

Why does EF Core 7 always translate my datetime where clause to '0001-01-01T00:00:00.0000000'

I'm pretty sure I've done this in EF Core 6 and it worked before, but now that I've upgraded to EF Core 7 my datetime values are being translated to '0001-01-01T00:00:00.0000000'.
For example:
Console.Write("DateFrom:");
Console.WriteLine(dateFrom);
query = query.Where(e => e.TIME_START >= dateFrom);
var count = await query.CountAsync(cancellationToken: cancellationToken);
produces this in the console:
DateFrom:1/30/2023 12:00:00 AM
[10:15:11 INF] Executed DbCommand (19ms) [Parameters=[@__dateFrom_0='0001-01-01T00:00:00.0000000'], CommandType='Text', CommandTimeout='30']
SELECT COUNT(*)
FROM [reports].[RPT_RUN] AS [r]
WHERE [r].[SUCCESS] = CAST(1 AS bit) AND [r].[TIME_START] >= @__dateFrom_0
I am under the impression that this comparison in the where clause should just work, shouldn't it?
Don't know why, but it works fine if I move the DbContext code to a brand-new project. It seems like something in my existing project is causing the issue.
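For what it's worth, '0001-01-01T00:00:00' is default(DateTime), so it looks as if the value captured into the query parameter was the default rather than the value printed to the console. A hypothetical sanity check (not a confirmed fix) is to copy the value into a local that is never reassigned, right before composing the query:

// Hypothetical sanity check, not a confirmed fix: rule out the closure capturing a
// field/property that gets reset before the query executes by using a local copy.
var dateFromLocal = dateFrom;                              // local copy, never reassigned afterwards
query = query.Where(e => e.TIME_START >= dateFromLocal);   // the parameter should now carry this value
var count = await query.CountAsync(cancellationToken: cancellationToken);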

Poltergeist using FromSqlRaw with EF Core 5

I have to retrieve data from two different databases within the same instance, so I used this SQL statement in my code:
public int ImportarUTEs()
{
try
{
int registrosAñadidos = 0;
var registrosSAP = _contextSAP.Licitadores
.FromSqlRaw(#"select distinct ot.IDLICITADOR as IdLicitador,
l.cardcode as CodigoSAP,
ic.cardname as Nombre
from ofertantes ot INNER JOIN licitadores l on ot.idlicitador=l.idlicitador
inner join ofertas o on o.codigoanalizada=ot.codigoanalizada
inner join Fulcrum.dbo.OCRD ic on l.cardcode=ic.cardcode collate SQL_Latin1_General_CP1_CI_AS
where year(o.fechapres)>=2015 AND
ot.idlicitador in(
select IDLICITADOR from LICITADORES
GROUP by IDLICITADOR
HAVING COUNT(*)>1
)
order by IdLicitador, CodigoSAP")
.ToList();
But to my surprise, this is the result I obtained.
Where I should get the 2 records for IdLicitador 2368, I see 3 records: row [8] repeats the value of row [6], and instead of holding the values for IdLicitador 2881 with CodigoSAP 430FULCRUM it is assigned IdLicitador 2368. Strangest of all, the same thing happens when it collects the values for IdLicitador 3150: row [10], which should be IdLicitador 3150 with CodigoSAP 430FULCRUM, comes back as IdLicitador 2368 with CodigoSAP 430FULCRUM.
That is, for some reason I can't understand, the values obtained in the EF Core 5 project are not the same as the ones returned by the SQL Server instance, and I can't think of what to do about it.
Any idea, please?
Thanks
The problem was the primary key definition on the LicitadorSAP entity; defining a composite key fixed it:
modelBuilder.Entity<LicitadorSAP>()
.HasKey(c => new { c.IdLicitador, c.CodigoSAP });
Now it works fine.
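For context on why the key mattered: on a tracked query, EF Core's identity resolution reuses the entity instance it has already materialized for a given key value, so with a wrong or incomplete key, distinct rows come back as repeats of the same object. A quick way to see the rows exactly as SQL Server returns them (a sketch, not part of the original code; 'sql' stands in for the raw SQL above) is to run the same query without tracking:

// Sketch only: AsNoTracking() skips change tracking and identity resolution, so each row
// in the result set materializes as its own LicitadorSAP instance.
var registrosSAP = _contextSAP.Licitadores
    .FromSqlRaw(sql)
    .AsNoTracking()
    .ToList();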

TYPO3 DBAL Querybuilder: Nested SELECT statements?

Is it possible to build nested SELECT statements like the one below using the DBAL QueryBuilder?
SELECT i.id, i.stable_id, i.version, i.title
FROM initiatives AS i
INNER JOIN (
SELECT stable_id, MAX(version) AS max_version FROM initiatives GROUP BY stable_id
) AS tbl1
ON i.stable_id = tbl1.stable_id AND i.version = tbl1.max_version
ORDER BY i.stable_id ASC
The goal is to query an external, non-TYPO3 table which contains different versions of each data set. Only the data set with the highest version number should be rendered. The database looks like this:
id, stable_id, version, [rest of the data row]
stable_id is the external ID of the data set, id is the internal auto-increment ID, and version is also incremented automatically.
Code example:
$queryBuilder = GeneralUtility::makeInstance(ConnectionPool::class)->getQueryBuilderForTable($this->table);
$result = $queryBuilder
->select(...$this->select)
->from($this->table)
->join(
'initiatives',
$queryBuilder
->select('stable_id, MAX(version) AS max_version' )
->from('initiatives')
->groupBy('stable_id'),
'tbl1',
$queryBuilder->and(
$queryBuilder->expr()->eq(
'initiatives.stable_id',
$queryBuilder->quoteIdentifier('tbl1.stable_id')
),
$queryBuilder->expr()->eq(
'initiatives.version',
$queryBuilder->quoteIdentifier('tbl1.max_version')
)
)
)
->orderBy('stable_id', 'DESC')
I cannot figure out the correct syntax for the ON ... AND statement. Any idea?
Extbase queries have JOIN capabilities but are otherwise very limited. You could use custom SQL via ->statement(), though.
A better API for building complex queries is the (Doctrine DBAL) QueryBuilder, with support for JOINs, database functions like MAX() and raw expressions (->addSelectLiteral()). Make sure to read as far as the ExpressionBuilder section, where it gets interesting.
So Extbase queries are useful for retrieving Extbase (model) objects. They can make implicit use of their knowledge of your data structure to save you some code, but they only support rather simple queries.
The (Doctrine DBAL) QueryBuilder fulfills all other needs. If needed, you can convert the raw data to Extbase models, too (for example, $propertyMapper->convert($data, Job::class)).
I realize the two are not clearly distinguished, since both have at some point been known simply as the "QueryBuilder", but they are totally different. That's why I like to add "Doctrine" when referring to the non-Extbase one.
An example with a JOIN ON multiple criteria.
$q = TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance(TYPO3\CMS\Core\Database\ConnectionPool::class)
->getQueryBuilderForTable('fe_users');
$res = $q->select('*')
->from('tt_content', 'c')
->join(
'c',
'be_users',
'bu',
$q->expr()->andX(
$q->expr()->eq(
'c.cruser_id', $q->quoteIdentifier('bu.uid')
),
$q->expr()->comparison(
'2', '=', '2'
)
)
)
->setMaxResults(5)
->execute()
->fetchAllAssociative();
Short answer: it is not possible, because the table to be joined is generated on the fly; the QueryBuilder back-ticks (quotes) the related expression, which causes an SQL error.
But the SQL query can be changed to the following, which does basically the same thing:
SELECT i1.id, stable_id, version, title
FROM initiatives i1
WHERE version = (
SELECT MAX(i2.version)
FROM initiatives i2
WHERE i1.stable_id = i2.stable_id
)
ORDER BY stable_id ASC
And this can be rebuilt with the DBAL QueryBuilder:
$queryBuilder = GeneralUtility::makeInstance(ConnectionPool::class)->getQueryBuilderForTable($this->table);
$result = $queryBuilder
->select(...$this->select)
->from($this->table)
->where(
$queryBuilder->expr()->eq(
'initiatives.version',
'(SELECT MAX(i2.version) FROM initiatives i2 WHERE initiatives.stable_id = i2.stable_id)'
)
)
->orderBy('stable_id', 'ASC')
->setMaxResults( 50 )
->execute();

How to use CASE clause (DB2) to display values from a different table?

I'm working at a bank, so I had to adjust the column names and other details in the query before posting it externally; if there are any odd-looking mistakes, that is why.
I'm trying to use a CASE expression to display data from a different table. I know this is a workaround, but due to certain circumstances I'm obliged to use it, and it has become interesting to figure out whether there's an actual solution.
The error I'm receiving for the following query is:
"ERROR [21000] [IBM][CLI Driver][DB2] SQL0811N The result of a scalar
fullselect, SELECT INTO statement, or VALUES INTO statement is more
than one row."
select bank_num, branch_num, account_num, client_id,
CASE
WHEN exists(
select *
from bank.services BS
where ACCS.client_id= BS.sifrur_lakoach
)
THEN (select username from bank.services BS where BS.client_id = ACCS.client_id)
ELSE 'NONE'
END username_new
from bank.accounts accs
where bank_num = 431 and branch_num = 170
EDIT:
AFAIK we're using DB2 v9.7:
DSN11015 - DB21085I Instance "DB2" uses "64" bits and DB2 code release "SQL09075" with
level identifier "08060107".
Informational tokens are "DB2 v9.7.500.702", "s111017", "IP23287", and Fix Pack "5".
Use the LISTAGG function to aggregate all matching results into one value:
select bank_num, branch_num, account_num, client_id,
CASE
WHEN exists(
select *
from bank.services BS
where ACCS.client_id= BS.sifrur_lakoach
)
THEN (select LISTAGG(username, ', ') from bank.services BS
where BS.client_id = ACCS.client_id)
ELSE 'NONE'
END username_new
from bank.accounts accs
where bank_num = 431 and branch_num = 170

H2 Optimize select statement / shutdown defrag

Test Case:
drop table master;
create table master(id int primary key, fk1 int, fk2 int, fk3 int, dataS varchar(255), data1 int, data2 int, data3 int, data4 int,data5 int,data6 int,data7 int,data8 int,data9 int,b1 boolean,b2 boolean,b3 boolean,b4 boolean,b5 boolean,b6 boolean,b7 boolean,b8 boolean,b9 boolean,b10 boolean,b11 boolean,b12 boolean,b13 boolean,b14 boolean,b15 boolean,b16 boolean,b17 boolean,b18 boolean,b19 boolean,b20 boolean,b21 boolean,b22 boolean,b23 boolean,b24 boolean,b25 boolean,b26 boolean,b27 boolean,b28 boolean,b29 boolean,b30 boolean,b31 boolean,b32 boolean,b33 boolean,b34 boolean,b35 boolean,b36 boolean,b37 boolean,b38 boolean,b39 boolean,b40 boolean,b41 boolean,b42 boolean,b43 boolean,b44 boolean,b45 boolean,b46 boolean,b47 boolean,b48 boolean,b49 boolean,b50 boolean);
create index idx_comp on master(fk1,fk2,fk3);
@loop 5000000 insert into master values(?, mod(?,100), mod(?,5), ?,'Hello World Hello World Hello World',?, ?, ?,?, ?, ?, ?, ?, ?,true,true,true,true,true,true,false,false,false,true,true,true,true,true,true,true,false,false,false,true,true,true,true,true,true,true,false,false,false,true,true,true,true,true,true,true,false,false,false,true,true,true,true,true,true,true,false,false,false,true);
1. The following SELECT statement takes up to 30 seconds. Is there a way to optimize the response time?
SELECT count(*), SUM(CONVERT(b1,INT)) ,SUM(CONVERT(b2,INT)),SUM(CONVERT(b3,INT)),SUM(CONVERT(b4,INT)),SUM(CONVERT(b5,INT)),SUM(CONVERT(b6,INT)),SUM(CONVERT(b7,INT)),SUM(CONVERT(b8,INT)),SUM(CONVERT(b9,INT)),SUM(CONVERT(b10,INT)),SUM(CONVERT(b11,INT)),SUM(CONVERT(b12,INT)),SUM(CONVERT(b13,INT)),SUM(CONVERT(b14,INT)),SUM(CONVERT(b15,INT)),SUM(CONVERT(b16,INT))
FROM master
WHERE fk1=53 AND fk2=3
2. I tried SHUTDOWN DEFRAG, but this statement took about 40 minutes for my test case. After SHUTDOWN DEFRAG the SELECT takes up to 15 seconds. If I execute the statement again it takes under 1 second; even if I stop and start the server, the statement still takes about 1 second.
Does H2 have a persistent cache?
Infrastructure: web browser <-> H2 Console server <-> H2 database (h2 1.3.158)
According to the profiler output, the main problem (93%) is reading from the disk. I ran this in the H2 Console:
@prof_start;
SELECT ... FROM master WHERE fk1=53 AND fk2=3;
@prof_stop;
and got:
Profiler: top 3 stack trace(s) of 48039 ms [build-158]:
4084/4376 (93%):
at java.io.RandomAccessFile.readBytes(Native Method)
at java.io.RandomAccessFile.read(RandomAccessFile.java:338)
at java.io.RandomAccessFile.readFully(RandomAccessFile.java:397)
at org.h2.store.FileStore.readFully(FileStore.java:285)
at org.h2.store.PageStore.readPage(PageStore.java:1253)
at org.h2.store.PageStore.getPage(PageStore.java:707)
at org.h2.index.PageDataIndex.getPage(PageDataIndex.java:225)
at org.h2.index.PageDataNode.getRowWithKey(PageDataNode.java:269)
at org.h2.index.PageDataNode.getRowWithKey(PageDataNode.java:270)
According to EXPLAIN ANALYZE SELECT, it's reading over 55,000 pages from disk (2 KB per page; 110 MB) for this query. I'm not sure how other databases perform for such a query, but I guess that, if possible, the query should be changed so that it reads less data.
Is it possible to have a temporary table/view that already has the datatype conversions done? If it's feasible to have it refresh itself from the main table occasionally (once a night or so), then a lot of the processing power that goes into the conversion has already been spent up front.
If that's not feasible, you may want to do multiple sub-selects, one for each "b" column, where you only pull rows where b# = 1, and then do a COUNT instead of a SUM, which should be faster as well. For instance:
SELECT (count1 + count2) AS total_count, count1, count2
FROM (SELECT
    (SELECT COUNT(*) FROM master WHERE fk1=53 AND fk2=3 AND b1=1) AS count1,
    (SELECT COUNT(*) FROM master WHERE fk1=53 AND fk2=3 AND b2=1) AS count2) t
I'm not sure if that exact syntax works in your program, but hopefully as a generic SQL idea it gets you on the right track.