We have an insert-only table for which we often get bad results because the query plan uses nested loops instead of hash joins. To solve this we have to run ANALYZE manually (vacuum sometimes doesn't run on insert-only tables, long story, not the point here). When I try to run ANALYZE on the replica machine, I get ERROR: cannot execute ANALYZE during recovery. This made me think that maybe we don't need to execute ANALYZE on the replica at all.
My question is: are statistics propagated to the replica when ANALYZE is executed on the master node?
The question linked below is similar to this one, but it asks about VACUUM. We are only using ANALYZE.
https://serverfault.com/questions/212219/postgresql-9-does-vacuuming-a-table-on-the-primary-replicate-on-the-mirror
Statistics are stored in a system table (pg_statistic), and that table is replicated from the primary server to the replica like any other table. So with physical replication you don't need to run ANALYZE on the replica, and in fact you cannot.
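To convince yourself, you can run ANALYZE on the primary and then look at the statistics from the replica; a minimal sketch, with my_table as a placeholder name:
-- on the primary
ANALYZE my_table;
-- on the replica, once the change has been replayed: the per-column
-- statistics gathered on the primary are visible through the pg_stats view
SELECT attname, n_distinct, null_frac
FROM pg_stats
WHERE tablename = 'my_table';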
Related
We're planning to use PostgreSQL in our production system with a foreign data wrapper (FDW).
Replication from the master server is not an option due to technical issues.
However, we have one big issue, which is performance.
We figured out that we need the ANALYZE command to run periodically for PostgreSQL to collect statistics.
But how often is "periodically"? Is there any way to check whether our statistics are outdated or missing altogether?
What would a query on "pg_statistic" look like in this scenario?
And most importantly, is this approach good enough for production? We plan to ANALYZE all important tables once a day; should that be enough?
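For illustration, something like the following is the kind of check we have in mind (a sketch; my_foreign_table and the schema name are placeholders, and pg_stats is queried because it is the readable view over pg_statistic):
-- how many columns of the table have statistics at all?
-- zero rows means the table has never been analyzed
SELECT count(*) AS columns_with_stats
FROM pg_stats
WHERE schemaname = 'public'              -- placeholder schema
  AND tablename = 'my_foreign_table';    -- placeholder table name
-- for ordinary (non-foreign) tables, the age of the statistics is also tracked
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY last_analyze NULLS FIRST;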
Conclusion
You cannot. Microsoft explicitly states "you cannot manually remove an execution plan from the cache" in the article 'Understanding the Procedure Cache on SQL Azure'.
Original Question
On SQL Server a single execution plan can be deleted from the cache using DBCC FREEPROCCACHE(plan_handle varbinary(64)). There is different documentation about DBCC FREEPROCCACHE on SQL Azure. It seems that on Azure it removes all cached execution plans from all compute and/or control nodes (whatever those nodes might be, I don't know). I do not understand why the Azure version would differ from the Server version in this respect.
However, I am looking for a way to delete a single execution plan from the cache on Azure. Is there any way to do that, perhaps using a query instead of a DBCC command?
There is no way to remove a single execution plan from the cache.
If your execution plan involves only one table or a few tables, and you are OK with the cached plans for those tables being removed as well, then you can alter the table: add a dummy column and then remove it again. This will force the cache to be flushed.
Changing the schema of a table causes a cache flush (not a single plan, but all plans) for the tables involved.
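A minimal sketch of that trick, with dbo.SomeTable and the column name as placeholders (any schema change works; a dummy column is just the least intrusive one):
-- adding and immediately dropping a dummy column invalidates the cached plans
-- that reference dbo.SomeTable
ALTER TABLE dbo.SomeTable ADD plan_flush_dummy int NULL;
ALTER TABLE dbo.SomeTable DROP COLUMN plan_flush_dummy;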
"I do not understand why the Azure version would differ from the Server version in this respect."
This has to do with database-as-a-service: you are offered a database (which may live on a server shared with many other databases), and some DBCC commands affect the whole instance, so such DBCC commands are essentially banned. There is a newer offering called Managed Instance (which behaves like an on-premises server but with the high availability of Azure SQL Database); you may want to look at that as well.
I'm creating a reporting engine that runs a couple of long queries against a standby server and processes the results with pandas. Everything works fine, but sometimes I have issues with the execution of those queries using a psycopg2 cursor: the query is cancelled with the following message:
ERROR: cancelling statement due to conflict with recovery
Detail: User query might have needed to see row versions that must be removed
I was investigating this issue
PostgreSQL ERROR: canceling statement due to conflict with recovery
https://www.postgresql.org/docs/9.0/static/hot-standby.html#HOT-STANDBY-CONFLICT
but all solutions suggest fixing the issue by modifying the server's configuration. I can't make those modifications (we won the last football game against the IT guys :) ), so I want to know how I can deal with this situation from a developer's perspective. Can I resolve this issue using Python code? My temporary solution is simple: catch the exception and retry all the failed queries. Maybe it could be done better (I hope so).
Thanks in advance
There is nothing you can do to avoid that error without changing the PostgreSQL configuration (from PostgreSQL 9.1 on, you could e.g. set hot_standby_feedback to on).
You are dealing with the error in the correct fashion – simply retry the failed transaction.
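For reference, if the configuration can ever be changed after all, the setting mentioned above does not require a restart; a sketch for PostgreSQL 9.4 or later (on older versions edit postgresql.conf and reload instead):
-- run on the standby server
ALTER SYSTEM SET hot_standby_feedback = on;
SELECT pg_reload_conf();  -- a configuration reload is enough, no restart needed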
The table data on the hot standby server is modified by replay of changes from the primary while your long-running query is running. A solution (PostgreSQL 9.1+) to make sure the table data is not modified is to suspend replay on the standby and resume it after the query.
select pg_xlog_replay_pause(); -- suspend
select * from foo; -- your query
select pg_xlog_replay_resume(); -- resume
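On PostgreSQL 10 and later the xlog functions were renamed, so the equivalent sequence is:
select pg_wal_replay_pause();   -- suspend WAL replay (PostgreSQL 10+)
select * from foo;              -- your query
select pg_wal_replay_resume();  -- resume WAL replay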
I recently encountered a similar error and was also in the position of not being a dba/devops person with access to the underlying database settings.
My solution was to reduce the execution time of the query wherever possible. Obviously this requires deep knowledge of your tables and data, but I was able to solve my problem with a combination of a more efficient WHERE filter, a GROUP BY aggregation, and more extensive use of indexes.
By reducing the server-side execution time and the amount of data, you reduce the chance of a rollback error occurring.
However, a rollback can still occur during your shortened window, so a comprehensive solution would also make use of some retry logic for when a rollback error occurs.
Update: A colleague implemented said retry logic as well as batching the query to make the data volumes smaller. These three solutions have made the problem go away entirely.
I got the same error. What you CAN do (if the query is simple enough) is divide the data into smaller chunks as a workaround.
I did this within a Python loop that calls the query multiple times with the LIMIT and OFFSET parameters, like:
for i_chunk in range(n_chunks):  # n_chunks: however many chunks your data needs
    query_chunk = f"""
        SELECT *
        FROM {database}.{datatable}
        LIMIT {chunk_size} OFFSET {i_chunk * chunk_size}
    """
    cursor.execute(query_chunk)   # psycopg2 cursor from your connection
    rows = cursor.fetchall()      # process each chunk before fetching the next
where database and datatable are the names of your sources.
The chunk_size is up to you, and setting it to a value that is not too high is crucial for the query to finish.
I have been given a Postgres 9.2 DB of around 20 GB in size.
I looked through the database and saw that VACUUM and/or ANALYZE have never been run on any of its tables.
Autovacuum is on, and the transaction wraparound limit is still very far away (only 1% of the way there).
I know nothing about the data activity (numbers of deletes, inserts, updates), but I can see that it uses a lot of indexes and sequences.
My question is:
does the lack of vacuum and/or analyze affect data integrity (for example, a SELECT not showing all the rows that match, whether it reads the table or an index)? The speed of queries and writes doesn't matter.
is it possible that after the vacuum and/or analyze the same query gives a different answer than it would have given before the vacuum/analyze command?
I'm fairly new to PG, thank you for your help!!
Regards,
Figaro88
Running vacuum and/or analyze will not change the result set produced by any select operation (unless there was a bug in PostgreSQL). They may affect the order of results if you do not supply an ORDER BY clause.
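In other words, if the order matters, state it explicitly; a trivial sketch with placeholder names:
-- row order without ORDER BY is never guaranteed and may change after
-- maintenance operations; an explicit ORDER BY makes it deterministic
SELECT * FROM my_table ORDER BY id;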
If I am to run CLUSTER, REINDEX, and ANALYZE on a table, what is the best order to do so?
There doesn't seem to be any point analyzing before clustering, since clustering will invalidate the "correlation" statistics if it rearranges the heap. So I'd say cluster first.
CLUSTER physically reorganizes your data, so it must be the first operation; then REINDEX (data placement will probably change after clustering your table); and finally ANALYZE, which gives information (statistics) to the query planner.
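A sketch of that sequence with placeholder table and index names (note that CLUSTER already rebuilds the table's indexes, so the explicit REINDEX is usually redundant):
-- rewrite the table in index order; this also rebuilds its indexes
CLUSTER my_table USING my_index;
-- optional, normally not needed right after CLUSTER:
REINDEX TABLE my_table;
-- refresh planner statistics, including the physical-order correlation
ANALYZE my_table;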