Can't count() a PostgreSql table [duplicate] - postgresql

I need to know the number of rows in a table to calculate a percentage. If the total count is greater than some predefined constant, I will use the constant value. Otherwise, I will use the actual number of rows.
I can use SELECT count(*) FROM table. But if my constant value is 500,000 and I have 5,000,000,000 rows in my table, counting all rows will waste a lot of time.
Is it possible to stop counting as soon as my constant value is surpassed?
I need the exact number of rows only as long as it's below the given limit. Otherwise, if the count is above the limit, I use the limit value instead and want the answer as fast as possible.
Something like this:
SELECT text,count(*), percentual_calculus()
FROM token
GROUP BY text
ORDER BY count DESC;

Counting rows in big tables is known to be slow in PostgreSQL. The MVCC model requires a full count of live rows for a precise number. There are ways to speed this up dramatically if the count does not have to be exact, which seems to be your case.
(Remember that even an "exact" count is potentially dead on arrival under concurrent write load.)
Exact count
Slow for big tables.
With concurrent write operations, it may be outdated the moment you get it.
SELECT count(*) AS exact_count FROM myschema.mytable;
Estimate
Extremely fast:
SELECT reltuples AS estimate FROM pg_class WHERE relname = 'mytable';
Typically, the estimate is very close. How close depends on whether ANALYZE or VACUUM are run often enough, where "often enough" is defined by the level of write activity on your table.
Safer estimate
The above ignores the possibility of multiple tables with the same name in one database - in different schemas. To account for that:
SELECT c.reltuples::bigint AS estimate
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'mytable'
AND n.nspname = 'myschema';
The cast to bigint formats the real number nicely, especially for big counts.
Better estimate
SELECT reltuples::bigint AS estimate
FROM pg_class
WHERE oid = 'myschema.mytable'::regclass;
Faster, simpler, safer, more elegant. See the manual on Object Identifier Types.
Replace 'myschema.mytable'::regclass with to_regclass('myschema.mytable') in Postgres 9.4+ to get nothing instead of an exception for invalid table names. See:
How to check if a table exists in a given schema
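Applied to the query above, that becomes:
SELECT reltuples::bigint AS estimate
FROM   pg_class
WHERE  oid = to_regclass('myschema.mytable');  -- returns no row instead of an error if the table doesn't exist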
Better estimate yet (for very little added cost)
This does not work for partitioned tables because relpages is always -1 for the parent table (while reltuples contains an actual estimate covering all partitions) - tested in Postgres 14.
You have to add up estimates for all partitions instead.
We can do what the Postgres planner does. Quoting the Row Estimation Examples in the manual:
These numbers are current as of the last VACUUM or ANALYZE on the
table. The planner then fetches the actual current number of pages in
the table (this is a cheap operation, not requiring a table scan). If
that is different from relpages then reltuples is scaled
accordingly to arrive at a current number-of-rows estimate.
Postgres uses estimate_rel_size defined in src/backend/utils/adt/plancat.c, which also covers the corner case of no data in pg_class because the relation was never vacuumed. We can do something similar in SQL:
Minimal form
SELECT (reltuples / relpages * (pg_relation_size(oid) / 8192))::bigint
FROM pg_class
WHERE oid = 'mytable'::regclass; -- your table here
Safe and explicit
SELECT (CASE WHEN c.reltuples < 0 THEN NULL       -- never vacuumed
             WHEN c.relpages = 0 THEN float8 '0'  -- empty table
             ELSE c.reltuples / c.relpages END
        * (pg_catalog.pg_relation_size(c.oid)
           / pg_catalog.current_setting('block_size')::int)
       )::bigint
FROM   pg_catalog.pg_class c
WHERE  c.oid = 'myschema.mytable'::regclass;  -- schema-qualified table here
Doesn't break with empty tables and tables that have never seen VACUUM or ANALYZE. The manual on pg_class:
If the table has never yet been vacuumed or analyzed, reltuples contains -1 indicating that the row count is unknown.
If this query returns NULL, run ANALYZE or VACUUM for the table and repeat. (Alternatively, you could estimate row width based on column types like Postgres does, but that's tedious and error-prone.)
If this query returns 0, the table seems to be empty. But I would ANALYZE to make sure. (And maybe check your autovacuum settings.)
Typically, block_size is 8192. current_setting('block_size')::int covers rare exceptions.
Schema-qualifying the table makes the query independent of the current search_path.
Either way, the query consistently takes < 0.1 ms for me.
More Web resources:
The Postgres Wiki FAQ
The Postgres wiki pages for count estimates and count(*) performance
TABLESAMPLE SYSTEM (n) in Postgres 9.5+
SELECT 100 * count(*) AS estimate FROM mytable TABLESAMPLE SYSTEM (1);
As @a_horse commented, the TABLESAMPLE clause added to the SELECT command can be useful if the statistics in pg_class are not current enough for some reason. For example:
No autovacuum running.
Immediately after a large INSERT / UPDATE / DELETE.
TEMPORARY tables (which are not covered by autovacuum).
This only looks at a random n % selection of blocks (1 % in the example) and counts the rows in it. A bigger sample increases the cost and reduces the error; it's your pick. Accuracy depends on several more factors:
Distribution of row sizes. If a sampled block happens to hold wider-than-usual rows, the count comes out lower than usual, and so on.
Dead tuples or a FILLFACTOR below 100 occupy space in each block. If that space is unevenly distributed across the table, the estimate may be off.
General rounding errors.
Typically, the estimate from pg_class will be faster and more accurate.
Answer to actual question
First, I need to know the number of rows in that table, if the total count is greater than some predefined constant,
And whether it ...
... is possible to stop the counting the moment the count passes my constant value (and not wait for the counting to finish just to learn that the row count is greater).
Yes. You can use a subquery with LIMIT:
SELECT count(*) FROM (SELECT 1 FROM token LIMIT 500000) t;
Postgres actually stops counting beyond the given limit: you get an exact and current count for up to n rows (500,000 in the example), and n otherwise. Not nearly as fast as the estimate in pg_class, though.

I did this once in a postgres app by running:
EXPLAIN SELECT * FROM foo;
Then examining the output with a regex, or similar logic. For a simple SELECT *, the first line of output should look something like this:
Seq Scan on uids (cost=0.00..1.21 rows=8 width=75)
You can use the rows=(\d+) value as a rough estimate of the number of rows that would be returned, then only do the actual SELECT COUNT(*) if the estimate is, say, less than 1.5x your threshold (or whatever number you deem makes sense for your application).
Depending on the complexity of your query, this number may become less and less accurate. In fact, in my application, as we added joins and complex conditions, it became so inaccurate it was completely worthless, even to know within a factor of 100 how many rows we'd have returned, so we had to abandon that strategy.
But if your query is simple enough that Pg can predict within some reasonable margin of error how many rows it will return, it may work for you.
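If you go this route in Postgres, the EXPLAIN parsing can be wrapped in a small plpgsql function; here is a sketch along the lines of the count_estimate snippet on the Postgres wiki:
CREATE OR REPLACE FUNCTION count_estimate(query text) RETURNS integer AS $$
DECLARE
    rec  record;
    rows integer;
BEGIN
    -- Run EXPLAIN on the given query and pull out the first rows=N it reports.
    FOR rec IN EXECUTE 'EXPLAIN ' || query LOOP
        rows := substring(rec."QUERY PLAN" FROM ' rows=([[:digit:]]+)');
        EXIT WHEN rows IS NOT NULL;
    END LOOP;
    RETURN rows;
END;
$$ LANGUAGE plpgsql VOLATILE STRICT;

-- Usage:
SELECT count_estimate('SELECT * FROM foo');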

Reference taken from this blog.
You can use the queries below to find the row count.
Using pg_class:
SELECT reltuples::bigint AS EstimatedCount
FROM pg_class
WHERE oid = 'public.TableName'::regclass;
Using pg_stat_user_tables:
SELECT schemaname,
       relname,
       n_live_tup AS EstimatedCount
FROM   pg_stat_user_tables
ORDER  BY n_live_tup DESC;

How wide is the text column?
With a GROUP BY there's not much you can do to avoid a data scan (at least an index scan).
I'd recommend:
If possible, changing the schema to remove duplication of text data. This way the count will happen on a narrow foreign key field in the 'many' table.
Alternatively, creating a generated column with a HASH of the text, then GROUP BY the hash column (a sketch follows below).
Again, this is to decrease the workload (scan through a narrow column index).
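A minimal sketch of the hash idea, assuming Postgres 12+ for generated columns (the column and index names here are invented):
-- md5() is immutable, so it can be used in a generated column;
-- casting the hex digest to uuid stores it in 16 bytes.
ALTER TABLE token
    ADD COLUMN text_hash uuid GENERATED ALWAYS AS (md5(text)::uuid) STORED;

CREATE INDEX token_text_hash_idx ON token (text_hash);

-- Group on the narrow hash instead of the wide text value:
SELECT text_hash, count(*)
FROM   token
GROUP  BY text_hash
ORDER  BY count(*) DESC;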
Edit:
Your original question did not quite match your edit. I'm not sure if you're aware that the COUNT, when used with a GROUP BY, will return the count of items per group and not the count of items in the entire table.

You can also just SELECT MAX(id) FROM <table_name>; change id to whatever the PK of the table is

In Oracle, you could use rownum to limit the number of rows returned. I am guessing a similar construct exists in other SQL dialects as well. So, for the example you gave, you could limit the number of rows returned to 500001 and then apply a count(*):
SELECT (case when cnt > 500000 then 500000 else cnt end) myCnt
FROM (SELECT count(*) cnt FROM table WHERE rownum<=500001)
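In Postgres, the same capping idea can be written with LIMIT instead of rownum (a sketch reusing the token table and the 500,000 cap from the question):
SELECT CASE WHEN cnt > 500000 THEN 500000 ELSE cnt END AS myCnt
FROM  (SELECT count(*) AS cnt
       FROM  (SELECT 1 FROM token LIMIT 500001) t) s;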

For SQL Server (2005 or above) a quick and reliable method is:
SELECT SUM (row_count)
FROM sys.dm_db_partition_stats
WHERE object_id=OBJECT_ID('MyTableName')
AND (index_id=0 or index_id=1);
Details about sys.dm_db_partition_stats are explained on MSDN.
The query adds up rows from all partitions of a (possibly) partitioned table.
index_id = 0 is an unordered table (heap) and index_id = 1 is an ordered table (clustered index).
Even faster (but unreliable) methods are detailed here.

Related

select 10000 records take too long time in PostgreSQL

My table contains 1 billion records. It is also partitioned by month. Id and datetime form the primary key of the table. When I select
select col1,col2,..col8
from mytable t
inner join cte on t.Id=cte.id and dtime>'2020-01-01' and dtime<'2020-10-01'
it uses an index scan, but takes more than 5 minutes to return.
Please suggest what I can do.
Note: I have set work_mem to 1GB. The cte results come back within 3 seconds.
Well, that's the nature of a join; it is usually known as a time-consuming operation.
First of all, I recommend using IN rather than JOIN. Of course they have different meanings, but in some cases you can technically use them interchangeably. Check this question out.
Secondly, according to relational algebra, when you use a join every row of mytable is combined with every row from the second table, the DBMS has to build a huge intermediate result, and only then discard the unsuitable rows. Undoubtedly all of these steps take a lot of time. Before using the join operation, it's better to filter your tables (for example, mytable by date) to make them smaller, and then join.
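A sketch of the filter-first idea (the CTE body is a placeholder, since the question doesn't show it):
WITH cte AS (
    SELECT id FROM some_source          -- placeholder for whatever produces the ids in ~3 seconds
),
filtered AS (
    SELECT id, col1, col2, /* ... */ col8
    FROM   mytable
    WHERE  dtime > '2020-01-01' AND dtime < '2020-10-01'
)
SELECT f.col1, f.col2, /* ... */ f.col8
FROM   filtered f
JOIN   cte ON f.id = cte.id;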

Strategies for reducing CPU and Reads when creating/updating index is not an option

This is a generalized question as I am looking for a variety of solutions from the community.
I have a stored procedure where a particular JOIN against a temp table has CPU that is too high. This is a relative 'too high', as it is over 200 CPU and our threshold for stored procedures is 800 CPU. The other JOINs are lower, but when added to my 200+ CPU temp-table join the total goes over.
This particular table does have a single nonclustered index outside of the default clustered index. Even when using the correct conditions the optimizer uses the clustered index. I can force it to use the nonclustered index but this quadruples the reads (putting me over our read threshold) and only lowers the CPU by a little more than half, so my CPU is better but not great still.
I have already discussed that there is a missing index (according to SQL Sentry) with my DBA but because the columns I need are not heavily used he won't create a new index or include my columns in the current index - this brings me to the point where I disclose that there is a key lookup involved.
I may not have any other solutions and I may have to live with either the high CPU or the high reads, but I wanted to know if there were other options I wasn't aware of, hence the question here, to this great community.
Sample of what the code looks like:
SELECT
qbcl.UniqueID [PropertyHmy],
CONVERT(DATETIME, qbcl.SomeDate) [SomeDate],
x.MonthStart [MonthStart],
x.MonthEnd [MonthEnd],
ISNULL(qbcl.SomeInt, 0) [SomeIntCount],
qbcl.TypeID [TypeID],
qbcl.FileLogID [FileLogID],
qbcl.CancelFileLogID [CancelFileLogID],
qbcl.ApproveFileLogID [ApproveFileLogID],
qbcl.ReadyAt [ReadyAt],
qbcl.ExportedAt [ExportedAt]
FROM
#SomeTempTable x
JOIN SomeTable qbcl WITH (INDEX(IX_SomeTable_2A_10A)) ON qbcl.UniqueID = x.PropertyHmy
AND qbcl.TypeID = 1
UNION ALL
SELECT
qbcl.UniqueID [PropertyHmy],
CONVERT(DATETIME, qbcl.SomeDate) [SomeDate],
x.MonthStart [MonthStart],
x.MonthEnd [MonthEnd],
ISNULL(qbcl.SomeInt, 0) [SomeIntCount],
qbcl.TypeID [TypeID],
NULL [FileLogID],
NULL [CancelFileLogID],
NULL [ApproveFileLogID],
NULL [ReadyAt],
NULL [ExportedAt]
FROM
#SomeTempTable x
JOIN SomeTable qbcl WITH (INDEX(IX_SomeTable_2A_10A)) ON qbcl.UniqueID = x.PropertyHmy
AND qbcl.TypeID = 2
From my SomeTable example, almost all the columns are not included in the index and cause a key lookup.
Link to my execution plan (and a screenshot of the same plan).

How to find if the row is part of index?

I want to find whether a row is part of an index in PostgreSQL.
For example:
When I open the table object I see my indexes and the cardinality of each index.
The total number of rows in the table is 145,454, but the cardinality of all the indexes is 145,300. Some 150-odd rows are not indexed in any of the indexes that I have created.
I ran the below query to find the cardinality,
SELECT relname,
relkind,
reltuples AS cardinality,
relpages
FROM pg_class
WHERE relname LIKE '%table_name%';
Could someone please explain why some rows are left out of the indexes, and how to find the 150-odd rows that are not indexed in my original table.
The total number of rows in the table is 145,454 but the cardinality of all the indexes is 145,300
That would mean you have about 154 duplicated index entries, i.e. some of those index entries point to more than one row.
From the postgres documentation on planner statistics:
For efficiency reasons, reltuples and relpages are not updated on-the-fly, and so they usually contain somewhat out-of-date values. They are updated by VACUUM, ANALYZE, and a few DDL commands such as CREATE INDEX. A VACUUM or ANALYZE operation that does not scan the entire table (which is commonly the case) will incrementally update the reltuples count on the basis of the part of the table it did scan, resulting in an approximate value. In any case, the planner will scale the values it finds in pg_class to match the current physical table size, thus obtaining a closer approximation.
In other words, as long as that number is approximately correct, there's nothing wrong and nothing to worry about. If it were wildly off (say "203" instead of its current value), then it would be time to issue a VACUUM or ANALYZE job on the table.
Also worth checking the value of default_statistics_target. If that's set too low, your statistics will end up less and less accurate.
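If the numbers look stale, a quick way to refresh and re-check them (using table_name as in the question):
-- Refresh planner statistics for the table:
ANALYZE table_name;

-- Re-check the estimates afterwards:
SELECT relname, relkind, reltuples::bigint AS cardinality, relpages
FROM   pg_class
WHERE  relname LIKE '%table_name%';

-- And check the statistics target mentioned above:
SHOW default_statistics_target;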

Select query with offset limit is too much slow

I have read in various places that a query gets slower as the offset increases, but in my case I think it is far too slow. I am using Postgres 9.3.
Here is the query (id is primary key):
select * from test_table offset 3900000 limit 100;
It returns data in around 10 seconds, and I think that's far too slow. I have around 4 million records in the table. The overall size of the database is 23GB.
Machine configuration:
RAM: 12 GB
CPU: 2.30 GHz
Core: 10
A few values I have changed in the postgresql.conf file are below. The others are defaults.
shared_buffers = 2048MB
temp_buffers = 512MB
work_mem = 1024MB
maintenance_work_mem = 256MB
dynamic_shared_memory_type = posix
default_statistics_target = 10000
autovacuum = on
enable_seqscan = off  # it's not having any effect, as I can see from EXPLAIN ANALYZE still doing a seq scan
Apart from these I have also tried by changing the values of random_page_cost = 2.0 and cpu_index_tuple_cost = 0.0005 and result is same.
Explain (analyze, buffers) result over the query is as below:
"Limit (cost=10000443876.02..10000443887.40 rows=100 width=1034) (actual time=12793.975..12794.292 rows=100 loops=1)"
" Buffers: shared hit=26820 read=378984"
" -> Seq Scan on test_table (cost=10000000000.00..10000467477.70 rows=4107370 width=1034) (actual time=0.008..9036.776 rows=3900100 loops=1)"
" Buffers: shared hit=26820 read=378984"
"Planning time: 0.136 ms"
"Execution time: 12794.461 ms"
How do people deal with this problem in Postgres? Any alternative solution would be helpful for me as well.
UPDATE: Adding ORDER BY id (tried with another indexed column as well), here is the explain:
"Limit (cost=506165.06..506178.04 rows=100 width=1034) (actual time=15691.132..15691.494 rows=100 loops=1)"
" Buffers: shared hit=110813 read=415344"
" -> Index Scan using test_table_pkey on test_table (cost=0.43..533078.74 rows=4107370 width=1034) (actual time=38.264..11535.005 rows=3900100 loops=1)"
" Buffers: shared hit=110813 read=415344"
"Planning time: 0.219 ms"
"Execution time: 15691.660 ms"
It's slow because it needs to locate the top offset rows and scan the next 100. No amounts of optimization will change that when you're dealing with huge offsets.
This is because your query literally instructs the DB engine to visit lots of rows by using offset 3900000 -- that's 3.9M rows. There aren't many options to speed this up.
Super-fast RAM, SSDs, etc. will help. But you'll only gain by a constant factor in doing so, meaning it's merely kicking the can down the road until you reach a large enough offset.
Ensuring the table fits in memory, with plenty more to spare, will likewise help by a larger constant factor -- except the first time. But this may not be possible with a large enough table or index.
Ensuring you're doing index-only scans will work to an extent. (See velis' answer; it has a lot of merit.) The problem here is that, for all practical purposes, you can think of an index as a table storing a disk location and the indexed fields. (It's more optimized than that, but it's a reasonable first approximation.) With enough rows, you'll still be running into problems with a large enough offset.
Trying to store and maintain the precise position of the rows is bound to be an expensive approach too. (This is suggested by e.g. benjist.) While technically feasible, it suffers from limitations similar to those that stem from using MPTT with a tree structure: you'll gain significantly on reads but will end up with excessive write times when a node is inserted, updated or removed in such a way that large chunks of the data need to be updated alongside.
As is hopefully more clear, there isn't any real magic bullet when you're dealing with offsets this large. It's often better to look at alternative approaches.
If you're paginating based on the ID (or a date field, or any other indexable set of fields), a potential trick (used by blogspot, for instance) would be to make your query start at an arbitrary point in the index.
Put another way, instead of:
example.com?page_number=[huge]
Do something like:
example.com?page_following=[huge]
That way, you keep a trace of where you are in your index, and the query becomes very fast because it can head straight to the correct starting point without plowing through a gazillion rows:
select * from foo where ID > [huge] order by ID limit 100
Naturally, you lose the ability to jump to e.g. page 3000. But give this some honest thought: when was the last time you jumped to a huge page number on a site instead of going straight for its monthly archives or using its search box?
If you're paginating but want to keep the page offset by any means, yet another approach is to forbid the use of larger page numbers. It's not silly: it's what Google does with search results. When running a search query, Google gives you an estimated number of results (you can get a reasonable number using EXPLAIN), and then allows you to browse only the top few thousand results -- nothing more. Among other things, they do so for performance reasons -- precisely the one you're running into.
I have upvoted Denis's answer, but will add a suggestion myself, perhaps it can be of some performance benefit for your specific use-case:
Assuming your actual table is not test_table but some huge compound query, possibly with multiple joins, you could first determine the required starting id:
select id from test_table order by id offset 3900000 limit 1
This should be much faster than the original query, as it only requires scanning the index rather than the entire table. Getting this id then opens up a fast index-search option for the full fetch:
select * from test_table where id >= (what I got from previous query) order by id limit 100
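The two steps can also be combined into a single statement (a sketch; substitute the real compound query as needed):
SELECT *
FROM   test_table
WHERE  id >= (SELECT id
              FROM   test_table
              ORDER  BY id
              OFFSET 3900000
              LIMIT  1)
ORDER  BY id
LIMIT  100;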
You didn't say if your data is mainly read-only or updated often. If you can manage to create your table at one time, and only update it every now and then (say every few minutes) your problem will be easy to solve:
Add a new column "offset_id"
For your complete data set ordered by ID, create an offset_id simply by incrementing numbers: 1,2,3,4...
Instead of "offset ... limit 100" use "where offset_id >= 3900000 limit 100"
You can optimise this in two steps.
First, get the maximum id out of the first 3,900,000 records:
select max(id) from (select id from test_table order by id limit 3900000) t;
Then use this maximum id to get the next 100 records:
select * from test_table where id > {max id from previous step} order by id limit 100;
It will be faster as both queries will do an index scan by id.
This way you get the rows in semi-random order. You are not ordering the results in the query, so you get the data as it is stored in the files. The problem is that when you update rows, their order can change.
To fix that you should add ORDER BY to the query. This way the query will return the rows in the same order every time. What's more, it will then be able to use an index to speed the query up.
So two things: add an index, and add ORDER BY to the query, both on the same column. If you want to use the id column, you don't need to add an index (the primary key is already indexed); just change the query to something like:
select * from test_table order by id offset 3900000 limit 100;
First, you have to use LIMIT and OFFSET with an ORDER BY clause, or you will get inconsistent results.
To speed up the query you can have a computed index, but only under these conditions:
Newly inserted data is strictly in id order
No delete nor update on column id
Here's how you can do it:
Create a row position function:
create or replace function id_pos (bigint) returns bigint
as 'select count(id) from test_table where id <= $1;'
language sql immutable;
Create a computed index on id_pos function
create index table_by_pos on test_table using btree(id_pos(id));
Here's how you call it (the equivalent of offset 3900000 limit 100):
select * from test_table where id_pos(id) >= 3900000 and id_pos(id) < 3900100;
This way, the query will not wade through the 3,900,000 offset rows, but only compute the 100 rows you want, making it much faster.
Please note the two conditions above under which this approach works; otherwise the positions will change.
I don't know all of the details of your data, but 4 million rows can be a little hefty. If there's a reasonable way to shard the table and essentially break it up into smaller tables it could be beneficial.
To explain this, let me use an example. Let's say that I have a database where I have a table called survey_answer, and it's getting very large and very slow. Now let's say that these survey answers all come from a distinct group of clients (and I also have a client table keeping track of these clients). Then something I could do is I could make it so that I have a table called survey_answer that doesn't have any data in it, but is a parent table, and it has a bunch of child tables that actually contain the data and follow the naming format survey_answer_<clientid>, meaning that I'd have child tables survey_answer_1, survey_answer_2, etc., one for each client. Then when I needed to select data for that client, I'd use that table. If I needed to select data across all clients, I can select from the parent survey_answer table, but it will be slow. But for getting data for an individual client, which is what I mostly do, then it would be fast.
This is one example of how to break up data, and there are many others. Another example would be if my survey_answer table didn't break up easily by client, but instead I know that I'm typically only accessing data over a year period of time at once, then I could potentially make child tables based off of year, such as survey_answer_2014, survey_answer_2013, etc. Then if I know that I won't access more than a year at a time, I only really need to access maybe two of my child tables to get all the data I need.
In your case, all I've been given is perhaps the id. We can break it up by that as well (though perhaps not as ideal). Let's say that we break it up so that there's only about 1000000 rows per table. So our child tables would be test_table_0000001_1000000, test_table_1000001_2000000, test_table_2000001_3000000, test_table_3000001_4000000, etc. So instead of passing in an offset of 3900000, you'd do a little math first and determine that the table that you want is table test_table_3000001_4000000 with an offset of 900000 instead. So something like:
SELECT * FROM test_table_3000001_4000000 ORDER BY id OFFSET 900000 LIMIT 100;
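For what it's worth, here is a sketch of the same id-range split using declarative partitioning (Postgres 10+; the answer above predates it and describes inheritance-based child tables instead, and the column list is invented):
CREATE TABLE test_table_part (
    id   bigint NOT NULL,
    data text                              -- stand-in for the real columns
) PARTITION BY RANGE (id);

CREATE TABLE test_table_0000001_1000000 PARTITION OF test_table_part
    FOR VALUES FROM (1) TO (1000001);
CREATE TABLE test_table_1000001_2000000 PARTITION OF test_table_part
    FOR VALUES FROM (1000001) TO (2000001);
-- ...and so on for the remaining id ranges.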
Now if sharding the table is out of the question, you might be able to use partial indexes to do something similar, but again, I'd recommend sharding first. Learn more about partial indexes here.
I hope that helps. (Also, I agree with Szymon Guz that you want an ORDER BY).
Edit: Note that if you need to delete rows or selectively exclude rows before getting your result of 100, then sharding by id will become very hard to deal with (as pointed out by Denis; and sharding by id is not great to begin with). But if you're 'just' paginating the data, and you only insert or edit (not a common thing, but it does happen; logs come to mind), then sharding by id can be done reasonably (though I'd still choose something else to shard on).
How about paginating based on IDs instead of offset/limit?
The following query gives the IDs which split all the records into chunks of per_page size. It doesn't depend on whether records were deleted or not:
SELECT id AS from_id FROM (
SELECT id, (ROW_NUMBER() OVER(ORDER BY id DESC)) AS num FROM test_table
) AS rn
WHERE num % (per_page + 1) = 0;
With these from_ids you can add links to the page. Iterate over :from_ids with an index and add the following link to the page:
:from_id_index
When the user visits a page, retrieve the records whose ID is greater than the requested :from_id:
SELECT * FROM test_table WHERE ID >= :from_id ORDER BY id DESC LIMIT :per_page
For the first page, a link with from_id=0 will work.
To avoid slow pagination with big tables, always use an auto-increment primary key, then use a query like the one below:
SELECT * FROM test_table WHERE id > (SELECT min(id) FROM test_table WHERE id > ((1 * 10) - 10)) ORDER BY id DESC LIMIT 10
1: the page number
10: the number of records per page
Tested, and it works well with 50 million records.
There are two simple approaches to solve such a problem:
Splitting the query into two subqueries, where the first one does all the heavy work on an index-only scan, as described here.
Creating a calculated index that holds the offset, as described here; this can be enhanced using window functions.

Questionable performance using IF EXISTS with inner existence checks

This is in a stored procedure. If this statement returns true, then I do a little work. @AsOfDate is a passed-in variable of the date datatype. The question I have is: why do I get better performance by removing the inner-most EXISTS, but ONLY when the entire statement is in an IF EXISTS?
The two tables:
dbo.TXXX_InventoryDetail -- 1.3 billion records..stats up to date
dbo.TXXX_InventoryFull -- 9.8 million records..stats up to date
Statement:
if exists (select 1
from dbo.TXXX_InventoryDetail o
where exists (select 1
from dbo.TXXX_InventoryFull i
where i.C001_AsOfDate= o.C001_AsOfDate
and i.C001_ProductID=o.C001_ProductID
and i.C001_StoreNumber=o.C001_StoreNumber
and i.C001_AsOfDate=@AsOfDate
and (i.C001_LastModelDate!=o.C001_LastModelDate
or i.C001_InventoryQty!=o.C001_InventoryQty
or i.C001_OnOrderQty!=o.C001_OnOrderQty
or i.C001_TBOQty!=o.C001_TBOQty
or i.C001_ModelQty!=o.C001_ModelQty
or i.C001_TBOAdjustQty!=o.C001_TBOAdjustQty
or i.C001_ReturnQtyPending!=o.C001_ReturnQtyPending
or i.C001_ReturnQtyInProcess!=o.C001_ReturnQtyInProcess
or i.C001_ReturnQtyDueOut!=o.C001_ReturnQtyDueOut))
and o.C001_AsOfDate=@AsOfDate)
io output:
Table 'TXXX_InventoryFull'. Scan count 9240262, logical reads 29548864
Table 'T001_InventoryDetail'. Scan count 1, logical reads 17259
If I remove the second where exists and do a join:
if exists (select 1
from dbo.TXXX_InventoryDetail o,
dbo.TXXX_InventoryFull i
where i.C001_AsOfDate= o.C001_AsOfDate
and i.C001_ProductID=o.C001_ProductID
and i.C001_StoreNumber=o.C001_StoreNumber
and i.C001_AsOfDate=@AsOfDate
and (i.C001_LastModelDate!=o.C001_LastModelDate
or i.C001_InventoryQty!=o.C001_InventoryQty
or i.C001_OnOrderQty!=o.C001_OnOrderQty
or i.C001_TBOQty!=o.C001_TBOQty
or i.C001_ModelQty!=o.C001_ModelQty
or i.C001_TBOAdjustQty!=o.C001_TBOAdjustQty
or i.C001_ReturnQtyPending!=o.C001_ReturnQtyPending
or i.C001_ReturnQtyInProcess!=o.C001_ReturnQtyInProcess
or i.C001_ReturnQtyDueOut!=o.C001_ReturnQtyDueOut)
and o.C001_AsOfDate=@AsOfDate)
io output:
Table 'TXXX_InventoryDetail'. Scan count 0, logical reads 333952
Table 'TXXX_InventoryFull'. Scan count 1, logical reads 630
Now, the reason I think it is the IF EXISTS is that if I remove it and do a SELECT COUNT(*) like this:
select COUNT(*)
from dbo.T001_InventoryDetail o
where exists (select 1
from dbo.TXXX_InventoryFull i
where i.C001_AsOfDate= o.C001_AsOfDate
and i.C001_ProductID=o.C001_ProductID
and i.C001_StoreNumber=o.C001_StoreNumber
and i.C001_AsOfDate=@AsOfDate
and (i.C001_LastModelDate!=o.C001_LastModelDate
or i.C001_InventoryQty!=o.C001_InventoryQty
or i.C001_OnOrderQty!=o.C001_OnOrderQty
or i.C001_TBOQty!=o.C001_TBOQty
or i.C001_ModelQty!=o.C001_ModelQty
or i.C001_TBOAdjustQty!=o.C001_TBOAdjustQty
or i.C001_ReturnQtyPending!=o.C001_ReturnQtyPending
or i.C001_ReturnQtyInProcess!=o.C001_ReturnQtyInProcess
or i.C001_ReturnQtyDueOut!=o.C001_ReturnQtyDueOut))
and o.C001_AsOfDate=@AsOfDate
Table 'TXXX_InventoryFull'. Scan count 41, logical reads 692
Table 'T001_InventoryDetail'. Scan count 65, logical reads 17477
Table 'Worktable'. Scan count 0, logical reads 0
It is generally said that one should avoid doing correlated subqueries in the predicate, as these tend to force nested loop joins. When querying large datasets, especially where one's trying to discover a difference between the sets, it's important to allow the query optimizer to choose dynamically between hash, merge and nested loop algorithms, which may not be possible if the query is structured using a correlated subquery. Better to create these as derived tables in the FROM clause.
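For illustration, a sketch of the derived-table form being suggested, using the tables from the question (column list abridged; @AsOfDate assumed to be the variable):
IF EXISTS (SELECT 1
           FROM dbo.TXXX_InventoryDetail o
           JOIN (SELECT C001_AsOfDate, C001_ProductID, C001_StoreNumber,
                        C001_LastModelDate, C001_OnOrderQty  -- ...and the other compared columns
                 FROM dbo.TXXX_InventoryFull
                 WHERE C001_AsOfDate = @AsOfDate) i
             ON i.C001_AsOfDate    = o.C001_AsOfDate
            AND i.C001_ProductID   = o.C001_ProductID
            AND i.C001_StoreNumber = o.C001_StoreNumber
           WHERE o.C001_AsOfDate = @AsOfDate
             AND (i.C001_LastModelDate != o.C001_LastModelDate
                  OR i.C001_OnOrderQty != o.C001_OnOrderQty
                  /* ...remaining quantity comparisons... */))
    PRINT 'differences found';  -- placeholder for the real work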
I have found similar issues using the EXISTS statement on a SQL 08 R2 server, where the exact same statement runs fine on SQL 08 and SQL 05.
I found that changing something like
WHILE EXISTS(SELECT * FROM X)
Would be super slow, but:
WHILE ISNULL((SELECT TOP 1 ID FROM X), 0) <> 0
Runs perfectly fast again.
To me, it seems like an R2 issue...
I would guess that the plan you get is quite different when you use the join. Perhaps the imbalance in the number of rows (very large outer table, smaller inner table) is giving the optimizer fits, but it can probably eliminate rows much easier with the join (you'll probably see additional loop operators with the worse query). Tough to really guess without seeing the plans or being able to reproduce, but you should always aim at eliminating the most rows as early in the plan as possible. Pulling back millions of rows through several operators / subqueries only to eliminate most of them later in the plan is almost certainly going to yield worse performance.