Cannot run a count query on a table with a large number of records - entity-framework

My code:
var numberOfStudents = _context.Student.Count();
The database contains about 4,000,000 records, and the call fails with a "Timeout expired" error.
I set the command timeout to 5 minutes, but the error still occurs.

Please run this query and tell me how long it takes to execute:
Select Count(*) from Students
Also, please post your table structure, including all the Indices.
I assume adding a suitable index on the table will solve the problem, but I need more information. Most probably, adding any NonClustered Index will make this fast. Please see SQL count(*) performance.
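The kind of index suggested above might look like this (the index and column names are hypothetical; adapt them to your actual Students table):

```sql
-- A narrow nonclustered index: SQL Server can satisfy COUNT(*) by
-- scanning this small index instead of the much wider clustered
-- index or heap, touching far fewer pages.
CREATE NONCLUSTERED INDEX IX_Students_StudentId
    ON Students (StudentId);
```

Any nonclustered index helps here because COUNT(*) only needs to visit each row once, and the optimizer picks the narrowest index available.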

Related

Postgres some tables are not working after copying data directory

We had an issue with our hard drive, so we copied the data directory onto a new drive, and now we are facing some issues: I can't read data from three tables. For example, if I query Select * from table_name limit 10 it returns 10 rows, but when I try to get a count with Select count(*) from table_name; it does not return anything and gets stuck. I tried to recover it according to this fix, but when I try to vacuum the table I get the error ERROR: uncommitted xmin 60148772 from before xid cutoff 60165118 needs to be frozen. The data is very important to us; if anyone can help, please let me know what extra details are required. I have also posted some questions here and here, but the situation is as described above at the moment.

Postgres query shows under the 'Most time consuming' section of Heroku

I am using the query below for a search:
I guess the current time is not bad, but I am still looking for further optimizations. I also saw in the analyze report that the nested loop and nested loop joins show in red. It would be great to get an idea of how to reduce that. I was thinking of adding an index for the search key. Any further suggestions to improve this would be great. I have added the EXPLAIN ANALYZE results from 3 executions, which ran in production.
You could try adding ingredients.name or ingredients.code to an existing index, or creating a new index, so that more rows are filtered during the ingredients index scan.
You should also avoid applying functions to a column, such as LOWER(ingredients.name), to make sure the right index is used.
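If the search must match case-insensitively, an expression index is one way to keep a LOWER(...) predicate index-friendly (a sketch; the index name is made up):

```sql
-- Postgres can use this index for predicates written exactly as
-- LOWER(ingredients.name) = ... or LOWER(ingredients.name) LIKE 'abc%'
CREATE INDEX idx_ingredients_lower_name
    ON ingredients (LOWER(name));
```

The expression in the WHERE clause has to match the indexed expression for the planner to consider the index.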

When I run a specific query I get ORA-00604: error occurred at recursive SQL level 1, ORA-12899: value too large for column "PLAN_TABLE"."OBJECT_NAME"

I am using Oracle 12.1c. When I run a specific query (I can't show it for security reasons, and because it's unrelated), I get the exception:
ORA-00604: error occurred at recursive SQL
level 1 ORA-12899: value too large for column "SOME_SCHEMA"."PLAN_TABLE"."OBJECT_NAME"
(actual: 38, maximum: 30)
I can't make it work. I will try reverting my last changes, because it was working before.
BTW, I was doing EXPLAIN and index optimizations.
Any idea why?
P.S. I will keep trying.
How I solved this:
While reverting and reviewing my last changes, I saw I had been running ALTER statements to add indexes, and after each one I ran the query again to make sure it still worked.
When I reached a specific ALTER, I noticed the name of the index was too long.
So even though the index was created successfully, it was the explain plan for the select that was failing, not the select itself.
The solution:
I renamed the index to be shorter (30 characters maximum) and it worked.
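The rename itself is a single statement (the names here are placeholders for the poster's actual index):

```sql
-- EXPLAIN PLAN writes the object name into PLAN_TABLE.OBJECT_NAME;
-- if that column is VARCHAR2(30) (the classic PLAN_TABLE definition),
-- a longer index name overflows it with ORA-12899.
ALTER INDEX ix_orders_customer_status_created_dt
    RENAME TO ix_orders_cust_status;
```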
Change table/column/index names size in oracle 11g or 12c
Why are Oracle table/column/index names limited to 30 characters?
Using EXPLAIN PLAN (Oracle documentation)

Entity Framework 4 Any()

Basically I have a query where I need to check to see if the data exists on another table.
bool isInUse;
isInUse = entity.H83SAF_HEALTH_PLAN.H83SAF_CONSENT.Any();
When I used clutch to trap the query to see what was going on, I could see the query running basically like this:
SELECT
"Extent1"."ACTIVE" AS "ACTIVE",
"Extent1"."IS_COMPLETE" AS "IS_COMPLETE",
"Extent1"."DATE_CREATED" AS "DATE_CREATED",
"Extent1"."DATE_MODIFIED" AS "DATE_MODIFIED",
"Extent1"."CREATED_BY" AS "CREATED_BY",
"Extent1"."MODIFIED_BY" AS "MODIFIED_BY"
...more columns..natter, natter..
FROM "H83FTF"."H83SAF_CONSENT" "Extent1"
WHERE ("Extent1"."HEALTH_PLAN_ID" = 1)
The query itself is fine. I have a problem with the .Any() statement. What I thought should happen is that the query should quit abruptly when the .Any() condition is met.
Yet when I run the query, it looks like it is bringing back over 18,000 records (which I don't use). I only want to see whether the data exists in the other table; as it is, the query hangs the website while 18,000 rows are fetched for the .Any() statement.
The first row already meets the condition, and my understanding is that .Any() should stop the moment the condition is met.
I tried FirstOrDefault(), yet it still fetches 18,000 rows into memory...
I finally came to the conclusion that .Any() on EF 5.0 does not translate to EXISTS in Oracle 11g - the solution was to do a direct call and bring back yes or no.
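The direct call the poster describes could be phrased as an EXISTS probe (a sketch, reusing the table and column names from the generated SQL above):

```sql
-- Returns 1 if any consent row exists for the plan, else 0.
-- EXISTS short-circuits: Oracle stops after the first matching row,
-- so no 18,000-row result set is materialized.
SELECT CASE WHEN EXISTS (
         SELECT 1
         FROM "H83FTF"."H83SAF_CONSENT"
         WHERE "HEALTH_PLAN_ID" = 1
       ) THEN 1 ELSE 0 END AS is_in_use
FROM DUAL;
```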

Can I have more than 250 columns in the result of a PostgreSQL query?

Note that the PostgreSQL website mentions a limit on the number of columns of between 250 and 1600, depending on column types.
Scenario:
Say I have data in 17 tables, each having around 100 columns. All are joinable through primary keys. Would it be okay to select all these columns in a single select statement? The query would be pretty complex but can be programmatically generated. The reason for doing this is to get denormalised data to populate a web page. Please do not ask why though :)
Quite obviously if I do create table table1 as (<the complex select statement>), I will be hitting the limit mentioned in the website. But do simple queries also face the same restriction?
I could probably find this out by doing the exercise myself. In the next few days I probably will. However, if someone has an idea about this and the problems I might face by doing a single query, please share the knowledge.
I can't find definitive documentation to back this up, but I have received the following error using JDBC on PostgreSQL 9.1 before:
org.postgresql.util.PSQLException: ERROR: target lists can have at most 1664 entries
As I say though, I can't find the documentation for that, so it may vary by release.
I've found the confirmation. The maximum is 1664.
This is one of the metrics available in the INFORMATION_SCHEMA.SQL_SIZING table:
SELECT * FROM INFORMATION_SCHEMA.SQL_SIZING
WHERE SIZING_NAME = 'MAXIMUM COLUMNS IN SELECT';