Select query with OFFSET/LIMIT is too slow - PostgreSQL

I have read in various places that a query gets slower as the offset increases, but in my case I think it is far too slow. I am using Postgres 9.3.
Here is the query (id is primary key):
select * from test_table offset 3900000 limit 100;
It returns the data in around 10 seconds, which I think is far too slow. I have around 4 million records in the table. The overall size of the database is 23 GB.
Machine configuration:
RAM: 12 GB
CPU: 2.30 GHz
Core: 10
A few values I have changed in postgresql.conf are listed below; the others are at their defaults.
shared_buffers = 2048MB
temp_buffers = 512MB
work_mem = 1024MB
maintenance_work_mem = 256MB
dynamic_shared_memory_type = posix
default_statistics_target = 10000
autovacuum = on
enable_seqscan = off ## this has no effect; EXPLAIN ANALYZE still shows a seq scan
Apart from these, I have also tried changing random_page_cost to 2.0 and cpu_index_tuple_cost to 0.0005, with the same result.
EXPLAIN (ANALYZE, BUFFERS) output for the query is below:
"Limit (cost=10000443876.02..10000443887.40 rows=100 width=1034) (actual time=12793.975..12794.292 rows=100 loops=1)"
" Buffers: shared hit=26820 read=378984"
" -> Seq Scan on test_table (cost=10000000000.00..10000467477.70 rows=4107370 width=1034) (actual time=0.008..9036.776 rows=3900100 loops=1)"
" Buffers: shared hit=26820 read=378984"
"Planning time: 0.136 ms"
"Execution time: 12794.461 ms"
How do people deal with this problem in Postgres? Any alternative solution would be helpful as well.
UPDATE: Adding ORDER BY id (I tried with other indexed columns as well), here is the explain:
"Limit (cost=506165.06..506178.04 rows=100 width=1034) (actual time=15691.132..15691.494 rows=100 loops=1)"
" Buffers: shared hit=110813 read=415344"
" -> Index Scan using test_table_pkey on test_table (cost=0.43..533078.74 rows=4107370 width=1034) (actual time=38.264..11535.005 rows=3900100 loops=1)"
" Buffers: shared hit=110813 read=415344"
"Planning time: 0.219 ms"
"Execution time: 15691.660 ms"

It's slow because it needs to locate the top offset rows and scan the next 100. No amount of optimization will change that when you're dealing with huge offsets.
This is because your query literally instructs the DB engine to visit lots of rows by using offset 3900000 -- that's 3.9M rows. There aren't many options to speed this up somewhat.
Super-fast RAM, SSDs, etc. will help. But you'll only gain by a constant factor in doing so, meaning it's merely kicking the can down the road until you reach a large enough offset.
Ensuring the table fits in memory, with plenty more to spare, will likewise help by a larger constant factor -- except the first time. But this may not be possible with a large enough table or index.
Ensuring you're doing index-only scans will work to an extent. (See velis' answer; it has a lot of merit.) The problem here is that, for all practical purposes, you can think of an index as a table storing a disk location and the indexed fields. (It's more optimized than that, but it's a reasonable first approximation.) With enough rows, you'll still run into problems with a large enough offset.
Trying to store and maintain the precise position of the rows is bound to be an expensive approach too. (This is suggested by e.g. benjist.) While technically feasible, it suffers from limitations similar to those that stem from using MPTT with a tree structure: you'll gain significantly on reads but will end up with excessive write times when a node is inserted, updated or removed in such a way that large chunks of the data need to be updated alongside.
As is hopefully clear by now, there isn't any real magic bullet when you're dealing with offsets this large. It's often better to look at alternative approaches.
If you're paginating based on the ID (or a date field, or any other indexable set of fields), a potential trick (used by blogspot, for instance) would be to make your query start at an arbitrary point in the index.
Put another way, instead of:
example.com?page_number=[huge]
Do something like:
example.com?page_following=[huge]
That way, you keep a trace of where you are in your index, and the query becomes very fast because it can head straight to the correct starting point without plowing through a gazillion rows:
select * from foo where ID > [huge] order by ID limit 100
Naturally, you lose the ability to jump to e.g. page 3000. But give this some honest thought: when was the last time you jumped to a huge page number on a site instead of going straight for its monthly archives or using its search box?
If you're paginating but want to keep the page offset by any means, yet another approach is to forbid the use of larger page numbers. It's not silly: it's what Google does with search results. When running a search query, Google gives you an estimated number of results (you can get a reasonable number using explain), and then allows you to browse the top few thousand results -- nothing more. Among other things, they do so for performance reasons -- precisely the one you're running into.

I have upvoted Denis's answer, but I'll add a suggestion myself; perhaps it can be of some performance benefit for your specific use case:
Assume your actual source is not test_table but some huge compound query, possibly with multiple joins. You could first determine the required starting id:
select id from test_table order by id offset 3900000 limit 1
This should be much faster than the original query, as it only requires scanning the index rather than the entire table. Getting this id then opens up a fast index-search option for the full fetch:
select * from test_table where id >= (what I got from previous query) order by id limit 100
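If a separate round-trip for the first query is undesirable, the two steps can also be folded into a single statement; a sketch, reusing the table and column names from the question:
select *
from test_table
where id >= (select id from test_table order by id offset 3900000 limit 1)
order by id
limit 100;
The subquery should only need to walk the primary-key index to find the starting id, and the outer query then does a cheap index range scan from there.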

You didn't say whether your data is mainly read-only or updated often. If you can manage to create your table in one go, and only update it every now and then (say every few minutes), your problem will be easy to solve:
Add a new column "offset_id"
For your complete data set ordered by ID, create an offset_id simply by incrementing numbers: 1,2,3,4...
Instead of "offset ... limit 100" use "where offset_id >= 3900000 limit 100"

You can optimize this in two steps.
First, get the maximum id out of the first 3,900,000 records:
select max(id) from (select id from test_table order by id limit 3900000) t;
Then use this maximum id to get the next 100 records:
select * from test_table where id > {max id from previous step} order by id limit 100;
It will be faster, as both queries do an index scan by id.

The way you wrote it, you get the rows in semi-random order. You are not ordering the results in the query, so you get the data as it happens to be stored in the files. The problem is that when you update the rows, their order can change.
To fix that you should add ORDER BY to the query. This way the query will always return the rows in the same order. What's more, it will then be able to use an index to speed the query up.
So two things: add an index and add ORDER BY to the query, both on the same column. If you want to use the id column, then don't add an index (the primary key already provides one); just change the query to something like:
select * from test_table order by id offset 3900000 limit 100;

First, you have to combine LIMIT and OFFSET with an ORDER BY clause or you will get inconsistent results.
To speed up the query, you can use a computed index, but only under these conditions:
Newly inserted data is strictly in id order
No deletes or updates on column id
Here's how you can do it:
Create a row position function:
create or replace function id_pos (id bigint) returns bigint
as 'select count(id) from test_table where id <= $1;'
language sql immutable;
Create a computed index on the id_pos function:
create index table_by_pos on test_table using btree(id_pos(id));
Here's how you call it (equivalent to offset 3900000 limit 100):
select * from test_table where id_pos(id) >= 3900000 and id_pos(id) < 3900100;
This way, the query does not walk through the 3,900,000 offset rows; it only computes the 100 rows you want, making it much faster.
Please note the two conditions above under which this approach can work; if they are violated, the positions will change and the index becomes stale.

I don't know all of the details of your data, but 4 million rows can be a little hefty. If there's a reasonable way to shard the table and essentially break it up into smaller tables it could be beneficial.
To explain this, let me use an example. Let's say that I have a database with a table called survey_answer, and it's getting very large and very slow. Now let's say that these survey answers all come from a distinct group of clients (and I also have a client table keeping track of these clients). Then I could make survey_answer a parent table that doesn't hold any data itself, with a bunch of child tables that actually contain the data and follow the naming format survey_answer_<clientid>, meaning that I'd have child tables survey_answer_1, survey_answer_2, etc., one for each client. Then when I needed to select data for a single client, I'd use that client's table. If I needed to select data across all clients, I could select from the parent survey_answer table, but it would be slow. For getting data for an individual client, though, which is what I mostly do, it would be fast.
This is one example of how to break up data, and there are many others. Another example would be if my survey_answer table didn't break up easily by client, but instead I know that I'm typically only accessing data over a year period of time at once, then I could potentially make child tables based off of year, such as survey_answer_2014, survey_answer_2013, etc. Then if I know that I won't access more than a year at a time, I only really need to access maybe two of my child tables to get all the data I need.
In your case, all I've been given is perhaps the id. We can break it up by that as well (though it's perhaps not as ideal). Let's say that we break it up so that there are only about 1,000,000 rows per table. So our child tables would be test_table_0000001_1000000, test_table_1000001_2000000, test_table_2000001_3000000, test_table_3000001_4000000, etc. So instead of passing in an offset of 3900000, you'd do a little math first and determine that the table you want is test_table_3000001_4000000 with an offset of 900000 instead. So something like:
SELECT * FROM test_table_3000001_4000000 ORDER BY id OFFSET 900000 LIMIT 100;
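In PostgreSQL 9.3, sharding like this is typically implemented with table inheritance plus CHECK constraints (declarative partitioning only arrived in version 10). A rough sketch for one of the hypothetical child tables above:
CREATE TABLE test_table_3000001_4000000 (
    CHECK (id BETWEEN 3000001 AND 4000000)
) INHERITS (test_table);

CREATE INDEX ON test_table_3000001_4000000 (id);

-- With constraint_exclusion = partition (the default), a query against the
-- parent that filters on id skips child tables whose CHECK cannot match.
SELECT * FROM test_table
WHERE id BETWEEN 3000001 AND 4000000
ORDER BY id OFFSET 900000 LIMIT 100;
Moving the existing rows into the child tables (and routing new inserts to them, usually via a trigger) is left out here for brevity.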
Now if sharding the table is out of the question, you might be able to use partial indexes to do something similar, but again, I'd recommend sharding first. Learn more about partial indexes here.
I hope that helps. (Also, I agree with Szymon Guz that you want an ORDER BY).
Edit: Note that if you need to delete rows or selectively exclude rows before getting your result of 100, then sharding by id will become very hard to deal with (as pointed out by Denis; and sharding by id is not great to begin with). But if you're 'just' paginating the data, and you only insert or edit (not a common thing, but it does happen; logs come to mind), then sharding by id can be done reasonably (though I'd still choose something else to shard on).

How about paginating based on IDs instead of OFFSET/LIMIT?
The following query will give the IDs that split all the records into chunks of size per_page. It doesn't depend on whether records were deleted or not.
SELECT id AS from_id FROM (
SELECT id, (ROW_NUMBER() OVER(ORDER BY id DESC)) AS num FROM test_table
) AS rn
WHERE num % (per_page + 1) = 0;
With these from_ids you can add page links: iterate over :from_ids together with their index, and render a link for each one that passes the :from_id as a query parameter and uses the index as the visible page number.
When the user visits a page, retrieve the records with an ID greater than or equal to the requested :from_id:
SELECT * FROM test_table WHERE ID >= :from_id ORDER BY id DESC LIMIT :per_page
For the first page, a link with from_id=0 will work.

To avoid slow pagination with big tables, always use an auto-increment primary key, then use a query like the one below:
SELECT * FROM test_table WHERE id >= (SELECT min(id) FROM test_table WHERE id > ((1 * 10) - 10)) ORDER BY id LIMIT 10
1 is the page number
10 is the number of records per page
Tested, and it works well with 50 million records.

There are two simple approaches to solve such a problem:
Splitting the query into two parts, where the first one does all the heavy work via an index-only scan, as described here
Creating a calculated index that holds the offset, as described here; this can be enhanced using window functions

Related

How to index a table in Postgres to speed up ORDER BY

How do you create an index in PostgreSQL 11 to speed up a specific query containing an ORDER BY?
I have a query that needs to get the first 100 records from a table containing 2M records, along with a few common filters like:
SELECT id, first_name, last_name
FROM users
WHERE active = true AND region IN (1,2,3)
ORDER BY last_active_timestamp DESC;
Without the ORDER BY clause, it returns in ~1 sec, almost instantly. However, with the clause, it takes an excruciating ~5 minutes.
So I tried creating a partial index like:
CREATE INDEX CONCURRENTLY my_user_index ON users (active, region, last_active_timestamp DESC NULLS LAST)
WHERE region IN (1, 2, 3) AND active = True;
but that had virtually no effect. The above query still takes several minutes. Is that just a limitation of ORDER BY in Postgres, or is there a different type of index I could use to speed it up?
Trying an index was the right idea, but you used the wrong one. Try this:
CREATE INDEX CONCURRENTLY my_user_index
ON users (last_active_timestamp DESC)
WHERE region IN (1, 2, 3)
AND active = true;
Your index was only sorted by last_active_timestamp after first being sorted by active and region, so the index alone could not deliver your sorted output.
For some more speedup, you could also include the columns of your SELECT clause within the index using INCLUDE (id, first_name, last_name). Then your query can (if the planner chooses to, and I think it will) run on the index only, without touching the table data at all.
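A sketch of that covering variant (the index name is made up; INCLUDE requires PostgreSQL 11 or later, which matches your version):
CREATE INDEX CONCURRENTLY my_user_covering_index
ON users (last_active_timestamp DESC)
INCLUDE (id, first_name, last_name)
WHERE region IN (1, 2, 3)
AND active = true;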
In order to use an index with the ORDER BY in your query, you need to index all the relevant columns (last_active_timestamp, along with a partial-index condition to include only active = true and the regions in question). This will essentially pull the data out in order for you.
Also, if you share your EXPLAIN ANALYZE output, you may see a Sort Method: external merge Disk: ####kB, indicating that the sort spilled out to disk and not in memory, due to an insufficiently-sized work_mem. The solution would then be to increase work_mem to a value of at least ####kB, and try again.
Note that you can set work_mem on a per-session basis, as a global change in work_mem could potentially have negative side-effects, such as running out of memory, because postgresql.conf-configured work_mem is allocated for each session (basically, it has a multiplicative effect).
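For example, a per-session or per-transaction override could look like the following; the 64MB figure is just the starting point mentioned above, not a tuned value:
SET work_mem = '64MB';   -- lasts for the rest of the session
-- or, scoped to a single transaction:
BEGIN;
SET LOCAL work_mem = '64MB';
SELECT id, first_name, last_name
FROM users
WHERE active = true AND region IN (1,2,3)
ORDER BY last_active_timestamp DESC;
COMMIT;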
If the query is still slow after tuning up work_mem (i.e., it's all sorting in memory, and it's still slow), then your returned data set is simply too large to sort quickly.

Optimal way of using joins in Redshift

I have 2 tables in AWS redshift. The details are as below
a) impressions (to count the number of impressions of a particular ad)
Number of rows (170 million)
distribution key(ad_campaign)
sort key (created_on)
b) clicks (to count the number of clicks of a particular ad).
Number of rows (80 million)
distribution key(ad_campaign)
sort key (created_on)
I have a single DC1 Large cluster with 2 slices.
I am trying to run the below query
select impressions.offer_id, count(imp_cnt)
from bidsflyer.tblImpressionLog_Opt impressions
full join bidsflyer.tblTrackingLinkLog_Opt clicks
  on impressions.offer_id = clicks.offer_id
 and date_trunc('week', impressions.created_on) = date_trunc('week', clicks.created_on)
where impressions.created_on >= '2017-07-27 00:00:00'
group by 1
This query takes more than 8 minutes to run. I think this is quite long considering the volume of data, which I feel is not huge.
The query plan looks like something below
XN HashAggregate (cost=2778257688268.43..2778257688268.60 rows=67 width=12)
-> XN Hash Left Join DS_DIST_NONE (cost=179619.84..2778170875920.65 rows=17362469555 width=12)
Hash Cond: (("outer".offer_id = "inner".offer_id) AND (date_trunc('week'::text, "outer".created_on) = date_trunc('week'::text, "inner".created_on)))
-> XN Seq Scan on tblimpressionlog_opt impressions (cost=0.00..724967.36 rows=57997389 width=20)
Filter: (created_on >= '2017-07-27 00:00:00'::timestamp without time zone)
-> XN Hash (cost=119746.56..119746.56 rows=11974656 width=12)
-> XN Seq Scan on tbltrackinglinklog_opt clicks (cost=0.00..119746.56 rows=11974656 width=12)
Can anyone provide me guidance on the correct usage of distribution keys and sort keys?
How should I design my query?
Table setup:
1) According to the plan, the most expensive operation is grouping by offer_id. This makes sense because you didn't sort or distribute your data by offer_id. Your tables are quite large, so you can recreate the table with an interleaved sort key by (offer_id,created_on) (interleaved keys are supposed to give equal and order-independent weight to the included columns and are known to have positive effect on larger tables).
2) If you join by week you can materialize your week column (create a physical column and populate it with the date_trunc output). That might save you some computation effort to get these values dynamically during the join. However, this operation is cheap, and if your table is already sorted by the timestamp column Redshift might already scan only the appropriate blocks. Also, if each offer runs for a short period of time (meaning the offer column has high cardinality and high correlation with the time column) you can have a compound sort key on (offer_id, week_created) that will allow a merge join, which is faster, and the aggregate will run fast as well.
3) If you don't use ad_campaign in other queries you can distribute both tables by offer_id. Having the join column in the distribution key is good practice, although it's unlikely that your query will benefit much from this since you have a single node, and distribution style mostly affects multi-node setups.
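A possible DDL sketch for points 1-3 (the _test suffix and the abbreviated column list are placeholders; this is a configuration to benchmark, not a prescription):
create table bidsflyer.tblImpressionLog_Opt_test (
    offer_id   bigint,
    created_on timestamp,
    imp_cnt    integer
    -- remaining columns as in the original table
)
distkey (offer_id)
interleaved sortkey (offer_id, created_on);  -- or: compound sortkey (offer_id, week_created)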
All recommendations are just assumptions made without knowing the exact nature of your data; they require running benchmarks (create a table with the recommended configuration, copy the data, vacuum, analyze, run the same query at least 3 times and compare the timings with the original setup). I would appreciate it if you did this and posted the results here.
Regarding the query itself, you can replace FULL JOIN with JOIN because you don't need it. FULL JOIN should be used when you want to get not only the intersection of both tables but also impressions that don't have related clicks and vice versa. That doesn't seem to be the case here because you are filtering by impressions.created_on and grouping by impressions.offer_id. So all you need is just the intersection. Replacing FULL JOIN with a simple JOIN also might improve query performance. If you want to see the offers that have zero clicks you can use LEFT JOIN.
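The rewritten query might then look like this (the same intersection-only logic described above, just with an inner join):
select impressions.offer_id, count(imp_cnt)
from bidsflyer.tblImpressionLog_Opt impressions
join bidsflyer.tblTrackingLinkLog_Opt clicks
  on impressions.offer_id = clicks.offer_id
 and date_trunc('week', impressions.created_on) = date_trunc('week', clicks.created_on)
where impressions.created_on >= '2017-07-27 00:00:00'
group by 1;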
A merge join is faster than a hash join, so you should try to achieve a merge join. Your sort key looks okay, but is your data actually sorted? Redshift does not automatically keep a table's rows sorted by the sort key, so there is no way for Redshift to perform a merge join on your table. After running a full vacuum on the table, Redshift will start performing merge joins.
select * from svv_table_info where "table" = 'impressions';
select * from svv_table_info where "table" = 'clicks';
Use the queries above to check the amount of unsorted data in your tables.
Run a full vacuum on both your tables. Depending on the amount of unsorted data this might take a while and use a lot of your cluster resources.
VACUUM impressions to 100 percent
VACUUM clicks to 100 percent
If I’ve made a bad assumption please comment and I’ll refocus my answer.

Selecting primary key: Why does Postgres prefer a sequential scan over an index scan?

I have the following table
create table log
(
id bigint default nextval('log_id_seq'::regclass) not null
constraint log_pkey
primary key,
level integer,
category varchar(255),
log_time timestamp,
prefix text,
message text
);
It contains about 3 million rows.
I'm comparing the following queries:
EXPLAIN SELECT id
FROM log
WHERE log_time < now() - INTERVAL '3 month'
LIMIT 100000
which yields the following plan:
Limit (cost=0.00..19498.87 rows=100000 width=8)
-> Seq Scan on log (cost=0.00..422740.48 rows=2168025 width=8)
Filter: (log_time < (now() - '3 mons'::interval))
And the same query with ORDER BY id instruction added:
EXPLAIN SELECT id
FROM log
WHERE log_time < now() - INTERVAL '3 month'
ORDER BY id ASC
LIMIT 100000
which yields
Limit (cost=0.43..25694.15 rows=100000 width=8)
-> Index Scan using log_pkey on log (cost=0.43..557048.28 rows=2168031 width=8)
Filter: (log_time < (now() - '3 mons'::interval))
I have the following questions:
The absence of an ORDER BY instruction allows Postgres not to care about the order of rows; they may just as well be delivered sorted. Why does it not use the index without ORDER BY?
How can Postgres use the index in the first place in such a query? The WHERE clause of the query contains a non-indexed column, and to fetch that column a sequential scan of the table is required, but the query with ORDER BY doesn't indicate that.
The Postgres manual page says:
For a query that requires scanning a large fraction of the table, an explicit sort is likely to be faster than using an index because it requires less disk I/O due to following a sequential access pattern
Can you please clarify this statement for me? An index is always ordered, and reading an ordered structure is always faster; it is always sequential access (at least in terms of page scanning) compared to reading non-ordered data and then ordering it manually.
Can you please clarify this statement for me? An index is always ordered, and reading an ordered structure is always faster; it is always sequential access (at least in terms of page scanning) compared to reading non-ordered data and then ordering it manually.
The index is read sequentially, yes, but postgres needs to follow up with a read of the rows from the table. That is, in most cases, if an index identifies 100 rows, then postgres will need to perform up to 100 random reads against the table.
Internally, the postgres planner weighs sequential and random reads differently, with random reads generally much more expensive. The settings seq_page_cost and random_page_cost determine those. There are other settings you can view and tinker with if you want, though I recommend being very conservative with modifications.
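For reference, you can inspect and experiment with those settings like this; the 1.1 value is only an illustration (sometimes used for SSD-backed storage), not a recommendation for your case:
SHOW seq_page_cost;        -- default 1.0
SHOW random_page_cost;     -- default 4.0
SET random_page_cost = 1.1;   -- affects only the current session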
Let's go back to your earlier questions:
The absence of an ORDER BY instruction allows Postgres not to care about the order of rows; they may just as well be delivered sorted. Why does it not use the index without ORDER BY?
The reason is the sort. As you note later, the index doesn't include the constraining column, so it doesn't make any sense to use the index. Instead, the planner is basically saying "read the whole table, figure out which rows conform to the constraint, and then return the first 100000 of them, in whatever order we find them".
The sort changes things. In that case, the planner is saying "we need to sort by this field, and we have an index which is already sorted, so read rows from the table in index order, checking against the constraint, until we have 100000 of them, and return that set".
You'll note that the cost estimates (e.g. '0.43..25694.15') are much higher for the second query -- the planner thinks that doing so many random reads from the index scan is going to cost significantly more than just reading the whole table at once with no sorting.
Hope that helps, and let me know if you have further questions.

Slow Postgres 9.3 Queries, again

This is a follow-up to the question at Slow Postgres 9.3 queries.
The new indexes definitely help. But what we're seeing is that queries are sometimes much slower in practice than when we run EXPLAIN ANALYZE. An example is the following, run on the production database:
explain analyze SELECT * FROM messages WHERE groupid=957 ORDER BY id DESC LIMIT 20 OFFSET 31980;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=127361.90..127441.55 rows=20 width=747) (actual time=152.036..152.143 rows=20 loops=1)
-> Index Scan Backward using idx_groupid_id on messages (cost=0.43..158780.12 rows=39869 width=747) (actual time=0.080..150.484 rows=32000 loops=1)
Index Cond: (groupid = 957)
Total runtime: 152.186 ms
(4 rows)
With slow query logging turned on, we see instances of this query taking over 2 seconds. We also have log_lock_waits=true, and no slow locks are reported around the same time. What could explain the vast difference in execution times?
LIMIT x OFFSET y generally performs not much faster than LIMIT x + y. A large OFFSET is always comparatively expensive. The suggested index in the linked question helps, but while you cannot get index-only scans out of it, Postgres still has to check visibility in the heap (the main relation) for at least x + y rows to determine the correct result.
SELECT *
FROM messages
WHERE groupid = 957
ORDER BY id DESC
LIMIT 20
OFFSET 31980;
CLUSTER on your index (groupid, id) would help to increase the locality of data in the heap and reduce the number of data pages to be read per query. Definitely a win. But if all groupid values are equally likely to be queried, that's not going to remove the bottleneck of too little RAM for cache. If you have concurrent access, consider pg_repack instead of CLUSTER:
Optimize Postgres timestamp query range
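A sketch of that CLUSTER step, assuming the index from the plan above (idx_groupid_id) is the one on (groupid, id); note that CLUSTER takes an exclusive lock and rewrites the table, which is why pg_repack is suggested for concurrent access:
CLUSTER messages USING idx_groupid_id;
ANALYZE messages;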
Do you actually need all columns returned? (SELECT *) A covering index enabling index-only scans might help if you only need a few small columns returned. (autovacuum must be strong enough to cope with writes to the table, though. Read-only table would be ideal.)
Also, according to your linked question, your table is 32 GB on disk. (Typically a bit more in RAM). The index on (groupid,id) adds another 308 MB at least (without any bloat):
SELECT pg_size_pretty(7337880.0 * 44); -- row count * tuple size
Making sense of Postgres row sizes
You have 8 GB RAM, of which you expect around 4.5 GB to be used for cache (effective_cache_size = 4608MB). That's enough to cache the index for repeated use, but not nearly enough to also cache the whole table.
If your query happens to find data pages in cache, it's fast. Else, not so much. Big difference, even with SSD storage (much more with HDD).
Not directly related to this query, but 8 MB of work_mem (work_mem = 7864kB) seems way too small for your setup. Depending on various other factors I would set this to at least 64MB (unless you have many concurrent queries with sort / hash operations). As @Craig commented, EXPLAIN (BUFFERS, ANALYZE) might tell us more.
The best query plan also depends on value frequencies. If only a few rows pass the filter, the result might be empty for certain groupid values and the query is comparatively fast. If a large portion of the table has to be fetched, a plain sequential scan wins. You need valid table statistics (autovacuum again). And possibly a larger statistics target for groupid:
Keep PostgreSQL from sometimes choosing a bad query plan
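Raising the per-column statistics target could look like this; 1000 is just an example value:
ALTER TABLE messages ALTER COLUMN groupid SET STATISTICS 1000;
ANALYZE messages;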
Since OFFSET is slow, an alternative is to simulate OFFSET using another column and some index preparation. We require a UNIQUE column (like a PRIMARY KEY) on the table. If there is none, one can be added with:
CREATE SEQUENCE messages_pkey_seq ;
ALTER TABLE messages
ADD COLUMN message_id integer DEFAULT nextval('messages_pkey_seq');
Next we create the position column for the OFFSET simulation:
ALTER TABLE messages ADD COLUMN position INTEGER;
UPDATE messages SET position = q.position FROM (SELECT message_id,
row_number() OVER (PARTITION BY groupid ORDER BY id DESC) AS position
FROM messages ) AS q WHERE q.message_id = messages.message_id ;
CREATE INDEX ON messages ( groupid, position ) ;
Now we are ready for the new version of the query in the OP:
SELECT * FROM messages WHERE groupid = 957 AND
position BETWEEN 31980 AND (31980+20-1) ;

Count(*) filtering on indexed datetime field runs too long

I am trying to count all the records created yesterday. There is a created_at column and it is indexed.
If I run
explain
select count(*) from events where created_at::date = current_date - 1;
It says
Aggregate (cost=14365728.05..14365728.06 rows=1 width=0)
-> Index Only Scan using index_events_created_at on events (cost=0.57..14362310.20 rows=1367140 width=0)
Filter: ((created_at)::date = (('now'::cstring)::date - 1))
So it even kind of knows how many rows there are. But the
select count(*) from events where created_at::date = current_date - 1;
query itself keeps running forever. Why is that?
Try this:
SELECT count(*)
FROM events
WHERE created_at >= current_date - 1
AND created_at < current_date;
So, to start: Why is the explain plan able to provide an estimated row count so much quicker than the query can run?
The optimizer is estimating the row count based on stored statistics and/or extrapolations from stored statistics. As you can see, this isn't necessarily very accurate. (Based on the comment discussion, the estimate was off by almost 20%.) So the query has to actually count, based on either data in the table or data in the index. So that's more work. But it's not obvious why it's 10 minutes' worth of "more work".
One reasonable guess would be lock contention. Depending on your transaction isolation settings, it could be that your query keeps having to wait on inserts or updates to the table to finish. (The optimizer wouldn't have this problem in calculating its estimate, because it will just assume that the effects of concurrent queries are not a big deal for its purposes.) Even though none of the added data would affect your count, table-level locks could still conflict.
One way to test this theory would be to copy the table, so that you have a table with the same data (and same indexes, etc) that nobody's querying, and see if your count runs faster against it.
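A quick way to set up that test might be the following; the copied table's name is made up, and the copy is only a snapshot, so drop it when you're done:
CREATE TABLE events_copy AS SELECT * FROM events;
CREATE INDEX ON events_copy (created_at);

SELECT count(*)
FROM events_copy
WHERE created_at >= current_date - 1
  AND created_at < current_date;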
(As an aside: In general when the stats seem significantly off you could suspect that the optimizer had picked a poor execution plan; but it's hard to see how an index scan could be the wrong solution here.)