Are there performance gains to using WHERE ST_IsValid? - postgresql

I'm using PostgreSQL 9.2 with PostGIS 2.0.1 on Windows.
Consider a table some_table with a GEOMETRY column named geom.
Query 1:
UPDATE some_table
SET geom = ST_MakeValid(geom)
Query 2:
UPDATE some_table
SET geom = ST_MakeValid(geom)
WHERE NOT ST_IsValid(geom)
Does calling ST_IsValid as a filter (as in Query 2) offer any performance gains (over Query 1)?

Expanding on Craig's comment, the answer is "maybe." There are a lot of possible answers here and it depends on a lot of things.
For example, suppose 20% of your table is invalid and you only need to fix that 20%. Now suppose that ST_IsValid takes 60% of the CPU time that ST_MakeValid does. You would run ST_IsValid on all of your table (0.6 * 1) plus ST_MakeValid on the invalid 20% (1 * 0.2), for a total of about 0.8. That would save you about 20% of your time without an index; with a functional index it might save you a lot more (the numbers are hypothetical, of course).
Suppose, on the other hand, that half your table was invalid. You'd run the cheaper function on all rows (0.6 * 1) and the more expensive function on the invalid half (1 * 0.5), for a total of about 1.1 - a net slowdown of about 10%. It also follows that if virtually all your rows are invalid, checking first can only slow you down.
So the answer is that you really need to check with EXPLAIN ANALYSE on your specific set.
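For example (a sketch only, using the table and queries from the question): EXPLAIN ANALYZE actually executes the UPDATE, so wrap each run in a transaction and roll it back to leave the data untouched while you compare timings.
BEGIN;
EXPLAIN ANALYZE
UPDATE some_table
SET    geom = ST_MakeValid(geom);              -- Query 1: unconditional
ROLLBACK;

BEGIN;
EXPLAIN ANALYZE
UPDATE some_table
SET    geom = ST_MakeValid(geom)
WHERE  NOT ST_IsValid(geom);                   -- Query 2: filtered
ROLLBACK;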


Can't count() a PostgreSQL table [duplicate]

I need to know the number of rows in a table to calculate a percentage. If the total count is greater than some predefined constant, I will use the constant value. Otherwise, I will use the actual number of rows.
I can use SELECT count(*) FROM table. But if my constant value is 500,000 and I have 5,000,000,000 rows in my table, counting all rows will waste a lot of time.
Is it possible to stop counting as soon as my constant value is surpassed?
I need the exact number of rows only as long as it's below the given limit. Otherwise, if the count is above the limit, I use the limit value instead and want the answer as fast as possible.
Something like this:
SELECT text,count(*), percentual_calculus()
FROM token
GROUP BY text
ORDER BY count DESC;
Counting rows in big tables is known to be slow in PostgreSQL. The MVCC model requires a full count of live rows for a precise number. There are workarounds to speed this up dramatically if the count does not have to be exact, as seems to be your case.
(Remember that even an "exact" count is potentially dead on arrival under concurrent write load.)
Exact count
Slow for big tables.
With concurrent write operations, it may be outdated the moment you get it.
SELECT count(*) AS exact_count FROM myschema.mytable;
Estimate
Extremely fast:
SELECT reltuples AS estimate FROM pg_class WHERE relname = 'mytable';
Typically, the estimate is very close. How close depends on whether ANALYZE or VACUUM is run often enough - where "often enough" is defined by the level of write activity on your table.
Safer estimate
The above ignores the possibility of multiple tables with the same name in one database - in different schemas. To account for that:
SELECT c.reltuples::bigint AS estimate
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'mytable'
AND n.nspname = 'myschema';
The cast to bigint formats the real number nicely, especially for big counts.
Better estimate
SELECT reltuples::bigint AS estimate
FROM pg_class
WHERE oid = 'myschema.mytable'::regclass;
Faster, simpler, safer, more elegant. See the manual on Object Identifier Types.
Replace 'myschema.mytable'::regclass with to_regclass('myschema.mytable') in Postgres 9.4+ to get nothing instead of an exception for invalid table names. See:
How to check if a table exists in a given schema
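For example, a sketch of the to_regclass variant (it simply returns no row when the table does not exist, instead of raising an error):
SELECT reltuples::bigint AS estimate
FROM   pg_class
WHERE  oid = to_regclass('myschema.mytable');  -- NULL for a missing table, so no row is returned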
Better estimate yet (for very little added cost)
This does not work for partitioned tables because relpages is always -1 for the parent table (while reltuples contains an actual estimate covering all partitions) - tested in Postgres 14.
You have to add up estimates for all partitions instead.
We can do what the Postgres planner does. Quoting the Row Estimation Examples in the manual:
These numbers are current as of the last VACUUM or ANALYZE on the
table. The planner then fetches the actual current number of pages in
the table (this is a cheap operation, not requiring a table scan). If
that is different from relpages then reltuples is scaled
accordingly to arrive at a current number-of-rows estimate.
Postgres uses estimate_rel_size defined in src/backend/utils/adt/plancat.c, which also covers the corner case of no data in pg_class because the relation was never vacuumed. We can do something similar in SQL:
Minimal form
SELECT (reltuples / relpages * (pg_relation_size(oid) / 8192))::bigint
FROM pg_class
WHERE oid = 'mytable'::regclass; -- your table here
Safe and explicit
SELECT (CASE WHEN c.reltuples < 0 THEN NULL       -- never vacuumed
             WHEN c.relpages = 0 THEN float8 '0'  -- empty table
             ELSE c.reltuples / c.relpages END
        * (pg_catalog.pg_relation_size(c.oid)
           / pg_catalog.current_setting('block_size')::int)
       )::bigint
FROM   pg_catalog.pg_class c
WHERE  c.oid = 'myschema.mytable'::regclass;  -- schema-qualified table here
Doesn't break with empty tables and tables that have never seen VACUUM or ANALYZE. The manual on pg_class:
If the table has never yet been vacuumed or analyzed, reltuples contains -1 indicating that the row count is unknown.
If this query returns NULL, run ANALYZE or VACUUM for the table and repeat. (Alternatively, you could estimate row width based on column types like Postgres does, but that's tedious and error-prone.)
If this query returns 0, the table seems to be empty. But I would ANALYZE to make sure. (And maybe check your autovacuum settings.)
Typically, block_size is 8192. current_setting('block_size')::int covers rare exceptions.
Table and schema qualifications make it immune to any search_path and scope.
Either way, the query consistently takes < 0.1 ms for me.
More Web resources:
The Postgres Wiki FAQ
The Postgres wiki pages for count estimates and count(*) performance
TABLESAMPLE SYSTEM (n) in Postgres 9.5+
SELECT 100 * count(*) AS estimate FROM mytable TABLESAMPLE SYSTEM (1);
As a_horse commented, the added TABLESAMPLE clause for the SELECT command can be useful if the statistics in pg_class are not current enough for some reason. For example:
No autovacuum running.
Immediately after a large INSERT / UPDATE / DELETE.
TEMPORARY tables (which are not covered by autovacuum).
This only looks at a random n % selection of blocks (1 % in the example) and counts the rows in them. A bigger sample increases the cost and reduces the error - your pick (see the example after this list). Accuracy depends on more factors:
Distribution of row size. If a given block happens to hold wider-than-usual rows, the count is lower than usual, etc.
Dead tuples or a FILLFACTOR occupy space per block. If unevenly distributed across the table, the estimate may be off.
General rounding errors.
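For example, a 10 % sample (just an illustration; adjust the sample percentage and the multiplier together):
SELECT 10 * count(*) AS estimate FROM mytable TABLESAMPLE SYSTEM (10);  -- sampling 10 % of blocks, so multiply by 100 / 10 = 10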
Typically, the estimate from pg_class will be faster and more accurate.
Answer to actual question
First, I need to know the number of rows in that table, if the total
count is greater than some predefined constant,
And whether it ...
... is possible that, the moment the count passes my constant value, it will
stop counting (and not wait to finish the counting to inform me that the
row count is greater).
Yes. You can use a subquery with LIMIT:
SELECT count(*) FROM (SELECT 1 FROM token LIMIT 500000) t;
Postgres actually stops counting beyond the given limit, you get an exact and current count for up to n rows (500000 in the example), and n otherwise. Not nearly as fast as the estimate in pg_class, though.
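To tie this back to the percentage calculation in the question, the capped count can feed the denominator directly. This is a sketch only - percentual_calculus() from the question is replaced here by an explicit expression, and the denominator is the capped count (at most 500000), as described above:
WITH capped AS (
    SELECT count(*) AS total                       -- min(actual row count, 500000)
    FROM  (SELECT 1 FROM token LIMIT 500000) t
)
SELECT text,
       count(*),
       round(100.0 * count(*) / capped.total, 2) AS percentage
FROM   token
CROSS  JOIN capped
GROUP  BY text, capped.total
ORDER  BY count(*) DESC;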
I did this once in a postgres app by running:
EXPLAIN SELECT * FROM foo;
Then examining the output with a regex, or similar logic. For a simple SELECT *, the first line of output should look something like this:
Seq Scan on uids (cost=0.00..1.21 rows=8 width=75)
You can use the rows=(\d+) value as a rough estimate of the number of rows that would be returned, then only do the actual SELECT COUNT(*) if the estimate is, say, less than 1.5x your threshold (or whatever number you deem makes sense for your application).
Depending on the complexity of your query, this number may become less and less accurate. In fact, in my application, as we added joins and complex conditions, it became so inaccurate it was completely worthless - we couldn't even tell within a factor of 100 how many rows would be returned - so we had to abandon that strategy.
But if your query is simple enough that Pg can predict within some reasonable margin of error how many rows it will return, it may work for you.
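Here is a sketch of that idea wrapped in a reusable function, similar to the count_estimate helper on the Postgres wiki (the function name and the exact regex are illustrative):
CREATE OR REPLACE FUNCTION count_estimate(query text)
RETURNS integer
LANGUAGE plpgsql AS
$$
DECLARE
    rec   record;
    rows  integer;
BEGIN
    -- run EXPLAIN and scrape "rows=N" from the first plan line that has it
    FOR rec IN EXECUTE 'EXPLAIN ' || query LOOP
        rows := substring(rec."QUERY PLAN" FROM ' rows=([[:digit:]]+)');
        EXIT WHEN rows IS NOT NULL;
    END LOOP;
    RETURN rows;
END;
$$;

SELECT count_estimate('SELECT * FROM foo');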
Reference taken from this blog.
You can use the queries below to find the row count.
Using pg_class:
SELECT reltuples::bigint AS EstimatedCount
FROM pg_class
WHERE oid = 'public.TableName'::regclass;
Using pg_stat_user_tables:
SELECT
schemaname
,relname
,n_live_tup AS EstimatedCount
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;
How wide is the text column?
With a GROUP BY there's not much you can do to avoid a data scan (at least an index scan).
I'd recommend:
If possible, changing the schema to remove duplication of text data. This way the count will happen on a narrow foreign key field in the 'many' table.
Alternatively, creating a generated column with a hash of the text, then GROUP BY the hash column (a sketch follows after this list).
Again, this is to decrease the workload (a scan over a narrow column index).
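A sketch of the hash idea, assuming PostgreSQL 12+ for generated columns and using md5() as the hash (table and column names are illustrative):
ALTER TABLE token
    ADD COLUMN text_md5 text
    GENERATED ALWAYS AS (md5(text)) STORED;       -- 32-character hash instead of arbitrarily wide text

CREATE INDEX ON token (text_md5);

SELECT text_md5, count(*)
FROM   token
GROUP  BY text_md5
ORDER  BY count(*) DESC;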
Edit:
Your original question did not quite match your edit. I'm not sure if you're aware that the COUNT, when used with a GROUP BY, will return the count of items per group and not the count of items in the entire table.
You can also just run SELECT MAX(id) FROM <table_name>; change id to whatever the PK of the table is.
In Oracle, you could use rownum to limit the number of rows returned. I am guessing similar construct exists in other SQLs as well. So, for the example you gave, you could limit the number of rows returned to 500001 and apply a count(*) then:
SELECT (case when cnt > 500000 then 500000 else cnt end) myCnt
FROM (SELECT count(*) cnt FROM table WHERE rownum<=500001)
For SQL Server (2005 or above) a quick and reliable method is:
SELECT SUM (row_count)
FROM sys.dm_db_partition_stats
WHERE object_id=OBJECT_ID('MyTableName')
AND (index_id=0 or index_id=1);
Details about sys.dm_db_partition_stats are explained on MSDN.
The query adds up rows from all partitions of a (possibly) partitioned table.
index_id = 0 is an unordered table (heap) and index_id = 1 is an ordered table (clustered index).
Even faster (but unreliable) methods are detailed here.

PostgreSQL Query Performance Fluctuates

We have a system that loads data and then conducts data QC in PostgreSQL. The QC function's performance fluctuates drastically in one of our environments with no apparent pattern. I was able to track the problem down to the performance of the following simple query in the QC function:
WITH foo AS (SELECT full_address, jsonb_agg (gad_rec_id) gad_rec_ids
FROM azgiv.v_full_addresses
WHERE gad_gly_id = 495
GROUP BY full_address
HAVING count(1) > 1)
SELECT gad_nguid, gad_rec_id, foo.full_address
FROM azgiv.v_full_addresses JOIN foo
ON foo.full_address = v_full_addresses.full_address
AND v_full_addresses.gad_gly_id = 495;
When I ran into the slow-performance situation (Fig 2), I had to ANALYZE the table behind the view before the query plan changed back to the fast one (Fig 1). v_full_addresses is a simple view over a partitioned table, with a bunch of columns concatenated.
Here are two images of the query plans for the above query. I am a newbie when it comes to understanding query optimization, so any help is greatly appreciated.
[Query plan images: Fig 1 - fast plan, Fig 2 - slow plan]
If performance improves after you ANALYZE a table, that means that the database's knowledge about the distribution of the data is outdated.
The best remedy is to tell PostgreSQL to collect these statistics more often:
ALTER TABLE some_table SET (autovacuum_analyze_scale_factor = 0.02);
0.02 is five times lower than the default 0.1, so statistics will be gathered five times more often.
If the bad query plans are generated right after a bulk load, you must choose a different strategy. In this case the problem is that it takes up to a minute for auto-analyze to kick in and calculate new statistics.
In that case you should run an explicit ANALYZE at the end of the bulk load.
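A sketch of what that could look like at the end of a load script (the table name and the load step are placeholders; the point is only the ordering - load first, then ANALYZE):
INSERT INTO azgiv.address_table SELECT * FROM staging_addresses;  -- placeholder bulk load

ANALYZE azgiv.address_table;  -- collect fresh statistics right away instead of waiting for auto-analyze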

Select * from table_name is running slow

The table contains around 700,000 rows. Is there any way to make the query run faster?
This table is stored on a server.
I have tried running the query selecting only specific columns.
If select * from table_name is unusually slow, check for these things:
Network speed. How large is the data and how fast is your network? For large queries you may want to think about your data in bytes instead of rows. Run select bytes/1024/1024/1024 gb from dba_segments where segment_name = 'TABLE_NAME'; and compare that with your network speed.
Row fetch size. If the application or IDE is fetching one-row-at-a-time, each row has a large overhead with network lag. You may need to increase that setting.
Empty segment. In a few weird cases the table's segment size can increase and never shrink. For example, if the table used to have billions of rows that were deleted but never truncated, the space would not be released. Then a select * from table_name may need to read a lot of empty extents to get to the real data. If the GB size from the above query seems too large, run alter table table_name move; to rebuild the table and possibly save space.
Recursive query. A query this simple almost couldn't have a bad execution plan. It's possible, but rare, for a recursive query to have a bad execution plan. While the query is running, look at select * from gv$sql where users_executing > 0;. There might be a data dictionary query that's really slow and needs to be tuned.

First call of query on big table is surprisingly slow

I have a query that feels like it is taking more time than it should. This only applies to the first query for a given set of parameters, so when cached there is no issue.
I am not sure what to expect, however. Given the setup and settings, I was hoping someone could shed some light on a few questions and give some insight into what can be done to speed up the query. The table in question is fairly large and Postgres estimates around 155,963,000 rows in it (14 GB).
Query
select ts, sum(amp) as total_amp, sum(230 * factor) as wh
from data_cbm_aggregation_15_min
where virtual_id in (1818) and ts between '2015-02-01 00:00:00' and '2015-03-31 23:59:59'
and deleted is null
group by ts
order by ts
When I started looking into this, the query took around 15 seconds; after some changes I have gotten it down to around 10 seconds, which still seems long for a simple query like this. Here are the results from explain analyze: http://explain.depesz.com/s/97V1. Note: the reason the GroupAggregate returns the same number of rows is that this example uses only one virtual_id, but there can be more.
Table and index
The table being queried; it has values inserted into it every 15 minutes:
CREATE TABLE data_cbm_aggregation_15_min (
virtual_id integer NOT NULL,
ts timestamp without time zone NOT NULL,
amp real,
recs smallint,
min_amp real,
max_amp real,
deleted boolean,
factor real DEFAULT 0.25,
min_amp_ts timestamp without time zone,
max_amp_ts timestamp without time zone
)
ALTER TABLE data_cbm_aggregation_15_min ALTER COLUMN virtual_id SET STATISTICS 1000;
ALTER TABLE data_cbm_aggregation_15_min ALTER COLUMN ts SET STATISTICS 1000;
The index that is used in the query
CREATE UNIQUE INDEX idx_data_cbm_aggregation_15_min_virtual_id_ts
ON data_cbm_aggregation_15_min USING btree (virtual_id, ts DESC);
ALTER TABLE data_cbm_aggregation_15_min
CLUSTER ON idx_data_cbm_aggregation_15_min_virtual_id_ts;
Postgres settings
Other settings are default.
default_statistics_target = 100
maintenance_work_mem = 2GB
effective_cache_size = 11GB
work_mem = 256MB
shared_buffers = 3840MB
random_page_cost = 1
What I have tried
I have been following the Things to try before you post in https://wiki.postgresql.org/wiki/Slow_Query_Questions and the results in a bit more detail were as follows:
Fiddling with the Postgres settings, mostly lowering random_page_cost, since the index scan - while it does not seem too special - is miles ahead of the bitmap heap scan the planner chose when random_page_cost was higher.
Adding increased statistics to the virtual_id and ts columns which the index and WHERE conditions are based on. The query planner's estimated row count was much closer to the actual row count after changing this.
Clustering on the idx_data_cbm_aggregation_15_min_virtual_id_ts index did not seem to change much, not that I noticed.
Running VACUUM manually did not change much, I am already running autovacuum so this was no surprise.
Running REINDEX on the index shrunk it considerably (by almost 50%!) but it did not improve the speed by much.
A couple of small improvements
SELECT ts, sum(amp) AS total_amp, sum(factor) * 230 AS wh
FROM data_cbm_aggregation_15_min
WHERE virtual_id = 1818
AND ts >= '2015-02-01 00:00'
AND ts < '2015-04-01 00:00'
AND deleted IS NULL
GROUP BY ts
ORDER BY ts;
sum(230 * factor) - it's cheaper to multiply the sum once instead of multiplying each element: sum(factor) * 230. The result is the same, even with NULL values.
ts between '2015-02-01 00:00:00' and '2015-03-31 23:59:59' is potentially incorrect. To include all of March 2015, use the presented alternative. BETWEEN is translated to ts >= lower AND ts <= upper anyway. It is always slightly faster to spell it out.
virtual_id in (1818) is just a needlessly convoluted way to say virtual_id = 1818.
Better index, potentially bigger improvement
CREATE INDEX data_cbm_aggregation_15_min_special_idx
ON data_cbm_aggregation_15_min (virtual_id, ts, amp, factor)
WHERE deleted IS NULL;
I see nothing in your question that would suggest DESC in your original index. While Index Scan Backward is almost as fast as a plain Index Scan, it's still better to drop the modifier.
Most importantly, there are index-only scans since Postgres 9.2. The two index columns I appended (amp, factor) only make sense if you get index-only scans out of it.
Since you obviously are not interested in deleted rows, make it a partial index. Only pays if you have more than a few deleted rows in the table.
If you have other large parts of the table that can be excluded, add more conditions - and remember to repeat the condition in the query (even if it seems redundant) so Postgres understands that the index is applicable.
Table definition
Reordering table columns like this would save 8 bytes per row:
CREATE TABLE data_cbm_aggregation_15_min (
virtual_id integer NOT NULL,
recs smallint,
deleted boolean,
ts timestamp NOT NULL,
amp real,
min_amp real,
max_amp real,
factor real DEFAULT 0.25,
min_amp_ts timestamp,
max_amp_ts timestamp
);
Related:
Configuring PostgreSQL for read performance
Most important information for last
The first query call can be substantially more expensive for very big tables, since the whole table cannot be cached. Subsequent calls profit from the populated cache. Postgres caches blocks, not necessarily whole tables.
One more thing that can be important for the first call. Due to the MVCC model of Postgres it has to maintain visibility information. When reading pages of a table the first time since the last write operation, Postgres opportunistically updates visibility information, which can impose some extra cost for the first access (and help a lot for subsequent calls). More in the manual here. Related answer on dba.SE:
Why does a SELECT statement dirty cache buffers in Postgres?
About what you've tried so far
SET STATISTICS 1000 for ts and virtual_id was an excellent idea, but the effect was largely nullified by setting random_page_cost = 1, which basically forces an index scan for this query either way.
random_page_cost = 1 is telling Postgres that random access is just as cheap as sequential access. This makes sense for a DB that (almost) completely resides in cache. For a DB with huge tables like yours, this setting seems too extreme (even if it gets Postgres to favor the desired index scan). Set it to random_page_cost = 1.1 or probably higher.
A bitmap index scan is typically a good plan for the first call of the query you presented - for data distributed randomly across the table. Since you clustered the table just like you need it for this query, an index scan is more efficient. The question is: will your table stay clustered?
Your settings for work_mem and other resources depend on how much RAM you have, the speed of your disks, on access pattern, how many concurrent connections you typically have, what other programs on the server compete for resources, etc. work_mem = 256MB seems too high. You don't need nearly as much for the presented query. Setting it that high may actually harm performance, because it reduces RAM available to cache.
REINDEX is redundant immediately after CLUSTER, since CLUSTER recreates all indexes anyway. You must have run REINDEX before CLUSTER, or you have heavy write access on the table to get that much bloat again already.
Various
Upgrade to Postgres 9.4 (or the upcoming 9.5, currently in alpha). Version 9.2 is 3 years old now; the latest versions have received many improvements.
The query plan suggests that nothing is actually aggregated. rows=4,117 are read from the index and rows=4,117 remain after GroupAggregate. Looks like rows are unique on ts already? Then you can remove the aggregation completely and make it a simple SELECT ...
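A sketch of that simplification, assuming (virtual_id, ts) really is unique as the unique index suggests (then each group has exactly one row and the sums collapse):
SELECT ts, amp AS total_amp, factor * 230 AS wh
FROM   data_cbm_aggregation_15_min
WHERE  virtual_id = 1818
AND    ts >= '2015-02-01 00:00'
AND    ts <  '2015-04-01 00:00'
AND    deleted IS NULL
ORDER  BY ts;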
If that's just a misleading EXPLAIN output and you typically output much fewer rows than are read, a MATERIALIZED VIEW with index on ts would be another option. Especially in combination with Postgres 9.4, which introduces REFRESH MATERIALIZED VIEW CONCURRENTLY.
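A sketch of the materialized view option (the view name is invented; REFRESH ... CONCURRENTLY requires Postgres 9.4 and a unique index on the materialized view):
CREATE MATERIALIZED VIEW data_cbm_15_min_sums AS
SELECT virtual_id, ts,
       sum(amp)          AS total_amp,
       sum(factor) * 230 AS wh
FROM   data_cbm_aggregation_15_min
WHERE  deleted IS NULL
GROUP  BY virtual_id, ts;

CREATE UNIQUE INDEX ON data_cbm_15_min_sums (virtual_id, ts);

REFRESH MATERIALIZED VIEW CONCURRENTLY data_cbm_15_min_sums;  -- Postgres 9.4+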

PostgreSQL - PostGIS query optimization

I have a query which creates an input to pgRouting pgr_drivingDistance function:
CREATE TEMP TABLE tmp_edge AS
SELECT
e."Id" as id,
e."Source" as source,
e."Target" as target,
e."Length" / (1000*LEAST("Speed", "SpeedMin")/60) as cost
FROM "Edge" e,
"SpeedLimit" sl
WHERE sl."VehicleKindId" = 1
AND e.the_geom &&
ST_MakeEnvelope(
x1-(1000*GREATEST("Speed", "SpeedMax")/60)*13,
y1-(1000*GREATEST("Speed", "SpeedMax")/60)*13,
x1+(1000*GREATEST("Speed", "SpeedMax")/60)*13,
y1+(1000*GREATEST("Speed", "SpeedMax")/60)*13, 3857)
AND sl."RoadCategoryId" = e."CategoryId";
In the WHERE clause I calculate the same thing several times to get bounding box coordinates.
I tried to put the calculation into the FROM part and use an alias for the calculated column, but then the whole execution time doubled.
The Edge table is quite large (1 million rows) and SpeedLimit has only several dozen records.
Is there any way to enhance this query?
The recommended way is to join tables using explicit JOIN syntax, and then restrict the resulting set with WHERE. What is ST_MakeEnvelope doing here? You can use an index on an expression in PostgreSQL ;)
Expression indexes in PostgreSQL - since you are using expressions, you might benefit from them.
You can also use EXPLAIN ANALYZE to spot the bottlenecks in your query.
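To make the join explicit and compute the envelope radius only once, something like the following could be tried. This is a sketch only, assuming PostgreSQL 9.3+ for LATERAL and the column names from the question (x1 and y1 are the query parameters, as in the original); whether it actually helps must be verified with EXPLAIN ANALYZE, since the asker reports that moving the calculation into FROM made things slower:
CREATE TEMP TABLE tmp_edge AS
SELECT e."Id"     AS id,
       e."Source" AS source,
       e."Target" AS target,
       e."Length" / (1000 * LEAST(sl."Speed", sl."SpeedMin") / 60) AS cost
FROM   "SpeedLimit" sl
CROSS  JOIN LATERAL (
           SELECT (1000 * GREATEST(sl."Speed", sl."SpeedMax") / 60) * 13 AS r  -- radius computed once per SpeedLimit row
       ) radius
JOIN   "Edge" e
  ON   e."CategoryId" = sl."RoadCategoryId"
 AND   e.the_geom && ST_MakeEnvelope(x1 - radius.r, y1 - radius.r,
                                     x1 + radius.r, y1 + radius.r, 3857)
WHERE  sl."VehicleKindId" = 1;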