Are idx_scan statistics reset automatically (default)? - postgresql

I was looking at the statistics views pg_stat_user_indexes and pg_stat_user_tables and discovered many indexes that are not being used.
But before I consider removing any of these indexes, I need to understand over what period this data (idx_scan) has been collected. Has it been accumulating since the database was created?
In pg_stat_database there is a column stats_reset whose date is usually today or up to 15 days ago; does whatever resets that also affect the views I mentioned above?
No pg_stat_reset() command was executed.
Does pg_stat_reset() clear pg_stat_user_indexes and pg_stat_user_tables?
My goal is to understand the period of data collected so that I can make a decision.

Statistics are cumulative and are kept from the time of cluster creation on.
So if you see the pg_stat_database.stats_reset change regularly, there must be somebody or something doing that explicitly with the pg_stat_reset() function.
Doing so is somewhat problematic, because this resets all statistics, including those in pg_stat_user_tables which govern when autovacuum and autoanalyze take place. So after a reset these will be a little out of whack until autoanalyze has collected new statistics.
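If the statistics do get reset, one way to watch autovacuum and autoanalyze catch up again is a small monitoring query like this (a sketch using standard pg_stat_user_tables columns):
-- check when autovacuum/autoanalyze last touched each table and how much
-- activity has accumulated since the reset
SELECT relname, last_autovacuum, last_autoanalyze,
       n_dead_tup, n_mod_since_analyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;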
The better way is to take regular snapshots and calculate the difference.
You are right that you should collect data over a longer time before you decide whether an index can be dropped. For example, some activity may take place only once a month, yet require certain indexes.
Before dropping indexes, consider that indexes also serve other purposes besides being scanned:
They can be UNIQUE or back a constraint, in which case they serve a purpose even when they are never scanned.
Indexes on expressions make PostgreSQL collect statistics on the distribution of the indexed expression, which can have a notable effect on query planning and the quality of your execution plans.
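To illustrate the last point, here is a minimal, hypothetical sketch (the table and column names are invented):
-- an expression index makes ANALYZE gather statistics on the expression itself
CREATE INDEX some_table_lower_title_idx ON some_table (lower(title));
ANALYZE some_table;
-- the planner now has a distribution for lower(title) and can better estimate
-- predicates like WHERE lower(title) = 'foo', even if the index itself is
-- never scanned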
You could use the query in this blog to find all the indexes that serve no purpose at all.
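A rough query along those lines (not necessarily the one from the blog) might look like this:
-- indexes that have never been scanned, are not UNIQUE, do not back a
-- constraint and are neither expression nor partial indexes
SELECT s.schemaname, s.relname AS tablename, s.indexrelname AS indexname,
       pg_relation_size(s.indexrelid) AS index_size
FROM pg_stat_user_indexes AS s
JOIN pg_index AS i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique
  AND i.indexprs IS NULL
  AND i.indpred IS NULL
  AND NOT EXISTS (SELECT 1 FROM pg_constraint c WHERE c.conindid = s.indexrelid)
ORDER BY pg_relation_size(s.indexrelid) DESC;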

Only a superuser is allowed to reset statistics. The query planner depends on statistics.
Use snapshots:
CREATE TABLE stat_idx_snap_m10_d29_16_12 AS SELECT * FROM pg_stat_user_indexes;
CREATE TABLE stat_idx_snap_m10_d29_16_20 AS SELECT * FROM pg_stat_user_indexes;
Analyze the difference at any time later:
SELECT
    s2.relid, s2.indexrelid, s2.schemaname, s2.relname, s2.indexrelname,
    s2.idx_scan - s1.idx_scan AS idx_scan,
    s2.idx_tup_read - s1.idx_tup_read AS idx_tup_read,
    s2.idx_tup_fetch - s1.idx_tup_fetch AS idx_tup_fetch
FROM stat_idx_snap_m10_d29_16_20 s2
FULL OUTER JOIN stat_idx_snap_m10_d29_16_12 s1
    ON s2.relid = s1.relid AND s2.indexrelid = s1.indexrelid
ORDER BY s2.idx_scan - s1.idx_scan ASC;

Related

kdb: getting one row from HDB

For a normal table, we can select one row using select[1] from t. How can I do this for an HDB table?
I tried select[1] from t where date=2021.02.25 but it gives this error:
Not yet implemented: it probably makes sense, but it’s not defined nor implemented, and needs more thinking about as the language evolves
The select[n] syntax works only if the table is already loaded into memory.
The easiest way to get the first row of an HDB table is:
1#select from t where date=2021.02.25
select[n] will work if applied to already-loaded data, e.g.
select[1] from select from t where date=2021.02.25
I've done this before for ad-hoc queries by using the virtual index i, which should avoid the cost of pulling all data into memory just to select a couple of rows. If your query needs to map constraints in first before pulling a subset, this is a reasonable solution.
It will, however, pull N rows for each date partition selected due to the way that q queries work under the covers. So YMMV, and this might not be the best solution if it were behind an API, for example.
/ 5 rows (i[5] is the 6th row)
select from t where date=2021.02.25, sum=`abcd, price=1234.5, i<i[5]
If your table is date partitioned, you can simply run
select col1,col2 from t where date=2021.02.25,i=0
That will get the first record from 2021.02.25's partition, and avoid loading every record into memory.
Per your first request (which is different from the above), select[1] from t, you can achieve that with
.Q.ind[t;enlist 0]

where column in (single value) performance

I am writing dynamic SQL code and it would be easier to use a generic where column in (<comma-separated values>) clause, even when the clause might have only one term (it will never have zero).
So, does this query:
select * from table where column in (value1)
have any different performance than
select * from table where column=value1
?
All my tests result in the same execution plans, but if there is any knowledge/documentation that sets it in stone, it would be helpful.
This might not hold true for every RDBMS, or for every query with its specific circumstances.
The engine will translate WHERE id IN(1,2,3) to WHERE id=1 OR id=2 OR id=3.
So your two ways to articulate the predicate will (probably) lead to exactly the same interpretation.
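You can check this on your own platform with EXPLAIN (the exact output varies by engine). Assuming a hypothetical table my_table with an index on id, both forms typically produce the same plan:
EXPLAIN SELECT * FROM my_table WHERE id IN (1);
EXPLAIN SELECT * FROM my_table WHERE id = 1;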
As always: we should not really worry about the way the engine "thinks"; that part was done pretty well by the developers :-) We tell it, through a statement, what we want to get, not how to get it.
Some more details here, especially the first part.
I think this will depend on the platform you are using (the optimizer of the given SQL engine).
I did a little test using MySQL Server:
When I query select * from table where id = 1; I get 1 row in total, and the query took 0.0043 seconds.
When I query select * from table where id IN (1); I get 1 row in total, and the query took 0.0039 seconds.
I know this depends on the server and the machine and so on, but the results are very close.
But keep in mind that IN is sometimes described as non-sargable (not Search ARGument ABLE) and may not use an index to resolve the query, whereas = is sargable and supports the index.
If you want to know which one to use, you should test them in your environment, because they both work well.

Design question for a table with too many joins OR polymorphic relations in Postgres 11.7

I've been given a table that I'm not sure how to design. I'm hoping for some design suggestions, or pointers in the right direction. The table is called edge and is meant to store some event traces, and IDs that link out to a host of possible lookup tables. Leaving out everything but IDs, here's what the table contains, all UUIDs:
ID
InvID
OrgID
FacilityID
FromAssemblyID
FromAssociatedTo
FromAssociatedToID
FromClinicID
FromFacilityDepartmentID
FromFacilityID
FromFacilityLocationID
FromScanAtFacilityID
FromScanID
FromSCaseID
FromSterilizerLoadID
FromWasherLoadID
FromWebUserID
ToAssemblyID
ToAssociatedTo
ToAssociatedToID
ToClinicID
ToFacilityDepartmentID
ToFacilityID
ToFacilityLocationID
ToNodeDTS
ToScanAtFacilityID
ToScanID
ToSCaseID
ToSterilizerLoadID
ToUserName
ToWasherLoadID
ToWebUserID
That's an overwhelming number of IDs to possibly join on. I remember reading that the Postgres planner kind of gives up when you've got a dozen-plus joins, the idea being that there are so many permutations to explore that the planning time could quickly overwhelm the query time. If you boil it down, the "from" and "to" links are only ever going to have one key value across all of those fields. So, implemented as a polymorphic/promiscuous relation, it would look something like this:
ID
InvID
OrgID
FacilityID
FromID
FromType
ToID
ToType
ToWebUserID
This table is going to be ginormous, so speed is/will be a consideration.
I encouraged the author not to use a polymorphic design, although the appeal is obvious. (I like Karwin's SQL Antipatterns book.) But now, confronted with nearly three dozen IDs, I'm a bit stumped.
Is there a common solution to this kind of problem? Namely, where you've got a central table like this with connections to a wide variety of possible tables? I don't have a Data Warehousing background, but this looks somewhat like that. (The author of this table has read Kimball's books, but not done any Data Warehouse implementations either.)
Important: We're using JOIN to do lookups on related values that might change, we're not using it to change the size of the result set. Just pretend it would always be LEFT JOIN.
With that in mind, what I've thought of is to skip joining on the From and To IDs, and instead use custom function calls to look up required values from the related tables, like this (pseudo-code):
GetUserName(uuid) : citext
...and so on for other values of interest in this and other tables...
The function would return '' when the UUID is the all-zeros UUID.
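A minimal sketch of that idea might look like the following (web_user and its columns are invented names, purely for illustration, and the citext extension is assumed, as in the pseudo-code above):
-- hypothetical lookup function; returns '' when no matching row is found,
-- which also covers the all-zeros UUID
CREATE OR REPLACE FUNCTION GetUserName(p_web_user_id uuid)
RETURNS citext
LANGUAGE sql STABLE
AS $$
    SELECT COALESCE(
        (SELECT u.user_name FROM web_user AS u WHERE u.id = p_web_user_id),
        '')::citext;
$$;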
I appreciate that this isn't the crispest question in the history of SO, and what I'm hoping for is pointers in a fruitful direction.
This smacks of “premature optimization” (which is a source of evil) based on something that you “remember reading”, so maybe some enlightenment about join optimization will help.
One rule of thumb that I follow in questions like this is to model things so that your queries become simple and natural. Experience shows that that often leads to good performance.
I assume that the table you show is the fact table of a star schema, and the foreign keys point to the many dimension tables, so that your query will look like
SELECT ...
FROM fact
JOIN dim1 ON fact.dim1_id = dim1.id
JOIN dim2 ON fact.dim2_id = dim2.id
JOIN dim3 ON fact.dim3_id = dim3.id
...
WHERE dim1.col1 = ...
AND dim2.col2 BETWEEN ... AND ...
AND dim3.col3 < ...
...
Now PostgreSQL will by default only consider all join permutations of the first eight tables (join_collapse_limit), and the rest of the tables are just joined in the order in which they appear in the query.
Moreover, if the number of tables reaches the threshold of 12 (geqo_threshold), the genetic query optimizer takes over, a component that simulates evolution by mutation and survival of the fittest with randomly chosen execution plans (really!) and consequently doesn't always come up with the same execution plan for the same query.
So my advice would be to write the queries in a way that the first seven dimension tables are the ones with the biggest chance of reducing the number of result rows most significantly (based on the WHERE conditions). You can also increase join_collapse_limit: if your queries take a long time to run anyway, you can easily afford to let the planner spend more time thinking about the best plan.
If you raise join_collapse_limit past geqo_threshold, you would then also set geqo = off to disable the genetic query optimizer.
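A minimal sketch of those session settings (the values are illustrative, not recommendations):
SET join_collapse_limit = 16;   -- let the planner reorder more joins exhaustively
SET from_collapse_limit = 16;   -- same limit for flattening subqueries
SET geqo = off;                 -- keep the genetic query optimizer out of the picture
-- then inspect the result with EXPLAIN (ANALYZE, BUFFERS) on the star query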
If you design your queries according to these principles, you should be able to get good execution plans without messing up the data model.

Data Lake Analytics - Large vertex query

I have a simple query which makes a GROUP BY using two fields:
#facturas =
SELECT a.CodFactura,
Convert.ToInt32(a.Fecha.ToString("yyyyMMdd")) AS DateKey,
SUM(a.Consumo) AS Consumo
FROM #table_facturas AS a
GROUP BY a.CodFactura, a.DateKey;
#table_facturas has 4100 rows, but the query takes several minutes to finish. Looking at the graph explorer, I see it uses 2500 vertices because I have 2500 unique CodFactura+DateKey rows. I don't know if this is normal ADLA behaviour. Is there any way to reduce the number of vertices and execute this query faster?
First: I am not sure your query will actually compile. You would need the Convert expression in your GROUP BY, or compute it in a previous SELECT statement.
Secondly: In order to answer your question, we would need to know how the full query is defined. Where does #table_facturas come from? How was it produced?
Without this information, I can only give some wild speculative guesses:
If #table_facturas is coming from an actual U-SQL table, your table is over-partitioned/fragmented. This could be because:
you inserted a lot of data originally with a distribution on the grouping columns, and you either have a predicate that reduces the number of rows per partition and/or you do not have up-to-date statistics (run CREATE STATISTICS on the columns).
you did a lot of INSERT statements, each inserting a small number of rows into the table, thus creating a large number of individual files. This will "scale out" the processing as well. Use ALTER TABLE REBUILD to recompact.
If it is coming from a file set, you may have too many small files in the input. See if you can merge them into fewer, larger files.
If the above does not help, you can also try to hint a small row count in the query that creates #table_facturas by adding OPTION(ROWCOUNT=4000).

Partial index not being used in psql 8.2

I would like to run a query on a large table along the lines of:
SELECT DISTINCT user FROM tasks
WHERE ctime >= '2012-01-01' AND ctime < '2013-01-01' AND parent IS NULL;
There is already an index on tasks(ctime), but most (75%) of rows have a non-NULL parent, so that's not very effective.
I attempted to create a partial index for those rows:
CREATE INDEX CONCURRENTLY task_ctu_np ON tasks (ctime, user)
WHERE parent IS NULL;
but the query planner continues to choose the tasks(ctime) index instead of my partial index.
I'm using PostgreSQL 8.2 on the server, and my psql client is 8.1.
First, I second Richard's suggestion that upgrading should be at the top of your priority list. The areas of partial indexes, etc. have, as I understand it, improved significantly since 8.2.
The second thing is that you really need the actual query plans with timing information (EXPLAIN ANALYZE), because without these we can't talk about selectivity, etc.
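For example (note that if the column is literally named user, it has to be double-quoted, since user is a reserved word):
EXPLAIN ANALYZE
SELECT DISTINCT "user"
FROM tasks
WHERE ctime >= '2012-01-01' AND ctime < '2013-01-01'
  AND parent IS NULL;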
So my order of business if I were you would be to upgrade first and then tune after that.
Now, I understand that 8.3 is a big upgrade (it is the only one that caused us issues in LedgerSMB). You may need some time to address that, but the alternative is to fall further behind and end up asking questions about a version that fewer and fewer people understand well as time goes on.