Postgres foreign data tables - keys and indexes - ANALYZE - postgresql

We're planning to use PostgreSQL with a foreign data wrapper (FDW) in our production system.
Due to technical issues, replication from the master server is not an option.
However, we have one big issue, which is performance.
We figured out that we need to run the ANALYZE command periodically so that Postgres collects statistics.
But how often is "periodically"? Is there any way to check whether our statistics are outdated or missing entirely?
What would a query on "pg_statistic" look like in this scenario?
And most importantly, is this approach a good one for production? We plan to ANALYZE all important tables once a day; should that be enough?
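For illustration, a minimal sketch of such a check could list foreign tables that have no rows in pg_statistic at all, i.e. tables that ANALYZE has apparently never touched; only the system catalogs used here are fixed, everything else (what counts as "outdated") is an assumption you'd need to refine:

-- Sketch: foreign tables with no statistics collected yet
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'f'                                  -- 'f' = foreign table
  AND NOT EXISTS (SELECT 1 FROM pg_statistic s
                  WHERE s.starelid = c.oid);           -- no stats rows at all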

Related

Sync Elasticsearch and PostgreSQL in a Spring Boot application

I have PostgreSQL as my primary database and I would like to take advantage of Elasticsearch as a search engine for my Spring Boot application.
Problem: The queries are quite complex and with millions of rows in each table, most of the search queries are timing out.
Partial solution: I used materialized views in PostgreSQL and have a job running that refreshes them every X minutes. But on systems with huge amounts of data and with other database transactions (especially writes) in progress, the views tend to take a long time to refresh (about 10 minutes to refresh 5 views). I realized that the current views are at their capacity and I cannot add more.
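For context, the refresh job essentially boils down to statements like the following (the view names are placeholders; CONCURRENTLY additionally requires a unique index on each view):

-- Sketch: periodic refresh of the search views (names are placeholders)
REFRESH MATERIALIZED VIEW CONCURRENTLY search_orders_mv;
REFRESH MATERIALIZED VIEW CONCURRENTLY search_customers_mv;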
That's when I started exploring other options just for the search and landed on Elasticsearch, and it works great with the amount of data I have. As a POC, I used Logstash's JDBC input plugin, but it doesn't support the DELETE operation (bummer).
From there, a soft delete is an option, but one I cannot take because:
A) Almost all the tables in the PostgreSQL DB are updated every few minutes, and some of them have constraints on the "name" key which in this case would stick around until a clean-up job runs.
B) Many tables in my PostgreSQL DB are referenced with CASCADE DELETE, and it is not feasible for me to update 220 tables' schemas and JPA queries to check for a soft-delete boolean.
The question mentioned in the link above also suggests PgSync, which syncs PostgreSQL with Elasticsearch periodically. However, I cannot go with that either, since it has an LGPL license, which is forbidden in our organization.
I'm starting to wonder if anyone else has encountered this strange limitation between Elasticsearch and an RDBMS.
I'm open to options other than Elasticsearch to solve my need; I just don't know what the right stack to use is. Any help here is much appreciated!

Joining Tables between Multiple Foreign Servers with Foreign Data Wrapper Causes Performance Issue

One of my legacy PHP applications is using a PostgreSQL database with Foreign Data Wrapper. This database has a local table and two foreign servers set up (one pointing to database A, another pointing to database B).
The application uses an ORM to construct SQL queries. One of the complex queries joins 6 tables across the two foreign servers as well as the local table, and the query just hangs forever because those 6 tables have millions of records each.
There are many more queries like this in the legacy app. I have configured the foreign servers with use_remote_estimate 'true' and increased fetch_size, but still see no drastic improvement.
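For reference, those options are typically set on the local server roughly like this (the server name and fetch_size value here are placeholders):

-- Sketch: per-server postgres_fdw options on the local side (server name is a placeholder)
ALTER SERVER server_a OPTIONS (ADD use_remote_estimate 'true', ADD fetch_size '10000');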
I'm wondering if there are some configurations that can be done on the foreign servers to optimise the query speed, before I start rewriting the whole application to not use PHP and ORM.
Selectivity estimation problems with FDWs are very common and can lead to plans with atrocious performance. Since you are looking for a magic bullet: have you tried running ANALYZE on the foreign tables on the local server, so that it can use local statistics to come up with plans? You might want to set up a clone to test this on, because ANALYZE can also make things worse, and there is no easy way to undo it once done.
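Concretely, that just means running ANALYZE on the local server against the foreign table names, for example (the table name is a placeholder):

-- Sketch: build local statistics for a foreign table (name is a placeholder)
ANALYZE foreign_schema.big_remote_table;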
Another step might be setting cursor_tuple_fraction to 1 (or at least much higher than the defaults) on the servers on the foreign sides. This could help if the overall query plan is sound on the local side, but the execution on the foreign sides is bad.
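A sketch of that change, run on the foreign-side servers, assuming the FDW connects as a dedicated role (the role name is a placeholder; it takes effect on new connections):

-- Sketch, on the foreign side: favour total runtime over fast first rows
-- for the role the FDW connects as (role name is a placeholder)
ALTER ROLE fdw_reader SET cursor_tuple_fraction = 1.0;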
Barring those, you need to look at EXPLAIN (VERBOSE) and EXPLAIN (ANALYZE) of an archetypical bad query to figure out what is going on.
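For example, with a placeholder query over the local and foreign tables; VERBOSE shows the SQL actually sent to each remote server, and ANALYZE reports where the time is really spent:

-- Sketch (all table names are placeholders)
EXPLAIN (VERBOSE)
SELECT * FROM local_t l
JOIN remote_a_t a ON a.id = l.a_id
JOIN remote_b_t b ON b.id = l.b_id;     -- look at the "Remote SQL" lines

EXPLAIN (ANALYZE, VERBOSE)
SELECT * FROM local_t l
JOIN remote_a_t a ON a.id = l.a_id
JOIN remote_b_t b ON b.id = l.b_id;     -- actually runs the query, with timings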
"Before I start rewriting the whole application to not use PHP and ORM."
Why would that help? Do you already know how to rewrite the queries to make them faster, and you just can't get the ORM to cooperate?

(How) Is it possible to convert tables into foreign tables in Postgres?

We have a large table in our Postgres production database which we want to start "sharding" using foreign tables and inheritance.
The desired architecture is to have one (empty) table that defines the schema and several foreign tables inheriting from that empty "parent" table (possible since Postgres 9.5).
I found this well-written article, https://www.depesz.com/2015/04/02/waiting-for-9-5-allow-foreign-tables-to-participate-in-inheritance/, which explains how to do it from scratch.
My question is how to keep the required data migration to a minimum.
We have this 100+ GB table now, which should become our first "shard", and in the future we will regularly add new "shards". At some point, the older shards will be moved to another tablespace (on cheaper hardware, since they become less important).
My question now:
Is there a way to "ALTER" an existing table to be a foreign table instead?
There is no way to use ALTER TABLE to do this.
You really have to basically do it manually. This is no different (really) than doing table partitioning. You create your partitions, you load the data. You direct reads and writes to the partitions.
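A minimal sketch of that manual setup, assuming postgres_fdw and a foreign server that already exists; all names and columns below are placeholders:

-- Sketch (Postgres 9.5+): the empty parent defines the schema, each shard is a foreign table
CREATE TABLE measurements (
    id         bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    payload    jsonb
);

CREATE FOREIGN TABLE measurements_2015 ()
    INHERITS (measurements)
    SERVER shard_2015_server
    OPTIONS (schema_name 'public', table_name 'measurements_2015');

-- move the existing data into the shard, then direct reads and writes through the parent
INSERT INTO measurements_2015 SELECT * FROM measurements_old;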
Now in your case, in terms of doing the sharding, there are a number of tools I would look at to make this less painful. If you make sure your tables are split the way you like them first, you can use a logical replication solution like Bucardo to replicate the writes while you are moving everything over.
There are some other approaches (parallelized readers and writers) that may save you some time at the expense of db load, but those are niche tools.
There is no native solution for shard management in standard PostgreSQL (and I don't know enough about Postgres-XL in this regard to say how well it can manage changing shard criteria). However, pretty much anything is possible with a little work and knowledge.

Replicate selected postgresql tables between two servers?

What would be the best way to replicate individual DB tables from a master PostgreSQL server to a slave machine? It could be done with cron+rsync, with whatever PostgreSQL might have built in, or with some sort of OSS tool, but so far the Postgres docs don't seem to cover how to do table-level replication. I'm not able to do a full DB replication because some tables have license->IP stuff connected to them, and I can't replicate those on the slave machine. I don't need instant replication; hourly would be acceptable as well.
If I just need to rsync, can someone help identify which files within the /var/lib/pgsql directory would need to be synced, or how I would know which tables they belong to?
Starting with Postgres 10, logical replication is built into Postgres! This is often a better solution than external tools. The Postgres docs are great and easy to follow, and it's very easy. See the quick setup docs, which in essence boil down to running this:
-- On publisher DB
CREATE PUBLICATION mypub FOR TABLE users, departments;
-- On subscriber DB
CREATE SUBSCRIPTION mysub CONNECTION 'dbname=foo host=bar user=repuser' PUBLICATION mypub;
You might want to try Bucardo, which is open-source software that synchronizes rows between tables even if they are in a remote location. It's very simple software, and it is capable of creating one-way synchronization relationships as well.
Check out http://bucardo.org/wiki/Bucardo
You cannot get anything useful by copying individual table files from the data directory. If you want to replicate selected tables, there are a number of good options:
http://wiki.postgresql.org/wiki/Replication,_Clustering,_and_Connection_Pooling

Postgresql for OLAP

Does anyone have experience of using PostgreSQL for an OLAP setup, using cubes against the database, etc.? Having come across a number of idiosyncrasies when using MySQL for OLAP, are there reasons in favour of using PostgreSQL instead (assuming that I want to go the open-source route)?
There are a number of data warehousing software vendors whose products are based on PostgreSQL (and who contribute OLAP-related changes back to core fairly regularly). Check out https://greenplum.org/. You'll find that PG works a lot better than MySQL for nearly any workload, OLAP especially. Greenplum and other similar solutions should work a bit better than plain PG, depending on your data sets and use cases.
PGSQL is much better suited to data warehousing than MySQL. We had initially thought to go with MySQL, but it performs poorly on aggregations once data grows to a few million rows. PGSQL performs almost 20 times faster in comparison with MySQL for 20 million records in a single fact table on the same hardware setup. If for some reason you choose to go with MySQL, then you should use the MyISAM storage engine for fact tables rather than InnoDB; you will see slightly better performance.