Problem with linked materialized view when overwriting existing PostGIS table - postgresql

Main question: I have several views depending on a PostgreSQL/PostGIS table and a final materialized view created by querying the other views. I need a fast and updatable final result (i.e. MV) to use in a QGIS project.
My aim is to update the starting table by overwriting it with lots of new values and hopefully have the views and materialized view update as well. I use QGIS DB Manager to overwrite the existing table, but I get an error because of the MV depending on it. If I drop the MV, overwrite the table and then recreate the MV, everything is fine, but I'd like to avoid manual operations as much as possible.
Is there a better way to reach my goal?
Another question: if I set a trigger to refresh an MV when I update/insert/delete values in a table, would it work even when the entire table is overwritten with a new one?

Refreshing a materialized view runs the complete defining query, so for a complicated query that is a long-running and heavy operation.
It is possible to launch REFRESH MATERIALIZED VIEW from a trigger (it had better be a FOR EACH STATEMENT trigger then), but that would make every data modification so slow that I don't think that is practically feasible.
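A minimal sketch of such a statement-level trigger, assuming a materialized view `mat_view` built over a table `base_table` (both names are placeholders):

```sql
-- Trigger function: refresh the materialized view after any change.
CREATE OR REPLACE FUNCTION refresh_mat_view()
RETURNS trigger AS $$
BEGIN
    REFRESH MATERIALIZED VIEW mat_view;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

-- Fire once per statement, not per row, to avoid repeated refreshes.
CREATE TRIGGER trg_refresh_mat_view
AFTER INSERT OR UPDATE OR DELETE ON base_table
FOR EACH STATEMENT
EXECUTE FUNCTION refresh_mat_view();
```

On PostgreSQL 10 and older, write `EXECUTE PROCEDURE` instead of `EXECUTE FUNCTION`. As noted above, every data modification on `base_table` now pays the full cost of the refresh.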
One thing that might work is to implement something like a materialized view that refreshes immediately “by hand”:
create a regular table for the “materialized view” and fill it with data by running the query
on each of the underlying tables, define a row level trigger that modifies the materialized view in accordance with the changes that triggered it
This should work for views whose definition is simple enough; for complicated queries it will not be possible.
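As a sketch of the hand-maintained variant, assume a per-category count over a hypothetical `orders` table; a row-level trigger keeps the summary table in step with each change:

```sql
-- The "materialized view" as a plain, indexable table.
CREATE TABLE order_counts (
    category text PRIMARY KEY,
    n        bigint NOT NULL
);
INSERT INTO order_counts
SELECT category, count(*) FROM orders GROUP BY category;

CREATE OR REPLACE FUNCTION maintain_order_counts()
RETURNS trigger AS $$
BEGIN
    -- An UPDATE that moves a row between categories hits both branches.
    IF TG_OP IN ('INSERT', 'UPDATE') THEN
        INSERT INTO order_counts (category, n) VALUES (NEW.category, 1)
        ON CONFLICT (category) DO UPDATE SET n = order_counts.n + 1;
    END IF;
    IF TG_OP IN ('DELETE', 'UPDATE') THEN
        UPDATE order_counts SET n = n - 1 WHERE category = OLD.category;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_order_counts
AFTER INSERT OR UPDATE OR DELETE ON orders
FOR EACH ROW EXECUTE FUNCTION maintain_order_counts();
```

This only works because a count can be adjusted incrementally; a join or window function in the defining query quickly makes such trigger logic impractical. Use `EXECUTE PROCEDURE` on PostgreSQL 10 and older.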

Related

Custom caching of table in postgres

I would like to do something like
results = select * from quick_table
if no results:
insert into quick_table select slow_query
results = select * from quick_table
return results
This is a pretty standard caching pattern. Is there any way I can do this in postgres that's more clever than just literally writing a function to do what I listed above?
The PostgreSQL feature that comes closest to what you want to do is a materialized view.
This creates a copy on disk of the results of your view, which you can then query as if it were a table. You can also add indexes to it in the usual way.
A caveat is that when you generate a materialized view, its data does not update automatically when the source tables’ data change. To reflect changes, you must issue a REFRESH MATERIALIZED VIEW command.
Typical approaches to refreshing are:
Run the refresh as a background task (e.g., in a cron job)
Add triggers to the source tables such that changing data in them causes the view to refresh.
Each approach has advantages and disadvantages, so the route you take will depend on your circumstances. It may also be useful to make sure you can add a unique index to your MV, as that will allow you to run concurrent refreshes; without one, a refresh places an exclusive lock on the view, so it won't be readable until the refresh has finished.
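Putting that together for the caching pattern in the question (table and column names are hypothetical):

```sql
-- Cache the slow query's results as a materialized view.
CREATE MATERIALIZED VIEW quick_table AS
SELECT customer_id, sum(amount) AS total
FROM payments
GROUP BY customer_id;

-- A unique index is required for REFRESH ... CONCURRENTLY,
-- and speeds up lookups against the cached results.
CREATE UNIQUE INDEX ON quick_table (customer_id);

-- Rebuild without blocking readers (slower than a plain refresh).
REFRESH MATERIALIZED VIEW CONCURRENTLY quick_table;
```

Unlike the hand-written function in the question, readers never see an empty cache: they see the old contents until the concurrent refresh completes.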

Using views in postgresql to enable transparent replacement of backing tables

We have a view that aggregates from a backing table. The idea is to reduce cpu load by using a pre-aggregated table, and to periodically refresh it with the following:
CREATE TABLE new_backingtable AS SELECT ... ;  -- fill it
BEGIN;
DROP TABLE backingtable;
ALTER TABLE new_backingtable RENAME TO backingtable;
COMMIT;
while in production. The latency caused by the refresh interval is acceptable. Incremental updates are possible but not desirable.
Does anyone have a comment on this scheme?
Check out materialized views. This may suit your use case: a materialized view stores query results at creation time and can be refreshed at a later time.
A materialized view is defined as a table which is actually physically stored on disk, but is really just a view of other database tables. In PostgreSQL, like many database systems, when data is retrieved from a traditional view it is really executing the underlying query or queries that build that view.
https://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html

Concurrency problems from eager materialization?

Using PostgreSQL 9.6.5, I am moving away from the built-in MATERIALIZED VIEW model because it lacks incremental refresh, and for my purposes (replication through SymmetricDS) I actually need the data in table storage, not a view. My solution:
Create an "unmaterialized view" VIEW_XXX_UNMAT (really just a select)
Create table VIEW_XXX as select from VIEW_XXX_UNMAT (snapshot)
Add primary key to VIEW_XXX
Create a refresh function which deletes from and reinserts into VIEW_XXX based on the primary key
Create INSERT/DELETE/UPDATE triggers for each table involved in VIEW_XXX_UNMAT, which call the refresh function with the appropriate PK
My inspiration comes from this PGCon 2008 talk and once we get over the hurdle of creating all these triggers it all works nicely. Obviously we are limiting the UPDATE triggers to the columns that are involved, and only refreshing the view if NEW data is distinct from OLD data.
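A sketch of the per-key refresh function from step 4, assuming the unmaterialized view exposes the same primary key `id` (all names hypothetical; on 9.6, triggers would call this with `NEW.id` or `OLD.id` via `EXECUTE PROCEDURE`):

```sql
-- Re-materialize a single row identified by its primary key:
-- delete the stale copy, then reinsert the current state.
CREATE OR REPLACE FUNCTION refresh_view_xxx(p_id integer)
RETURNS void AS $$
BEGIN
    DELETE FROM view_xxx WHERE id = p_id;
    INSERT INTO view_xxx
    SELECT * FROM view_xxx_unmat WHERE id = p_id;
END;
$$ LANGUAGE plpgsql;
```

The delete-then-insert pair keeps the function idempotent, so calling it twice for the same key is harmless.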
At this point I'd like to know:
If there are any better solutions out there (extension?) keeping in mind that I need tables in the end
Performance issues (I understand the refresh cost is write-bound, but because the materialized table is indexed, it's pretty fast?)
Concurrency issues (what if two people are updating the same field at the same time, which refreshes the materialized view through triggers, will one of them fail or will MVCC "take care of it"?)

Is it possible to have indexes on non-materialized views?

In PostgreSQL, can I have an index on a non-materialized view?
I'm using a view in my application and it basically works well, but I'd like to speed up access to its data. I could switch to a materialized view, but I don't want to have to refresh it.
No
From http://postgresql.nabble.com/Indexes-not-allowed-on-read-only-views-Why-td4812152.html
in postgres, views are essentially macros, thus there is no data to index
and
A normal (non-materialized) view doesn't have any data of its own, it
pulls it from one or more other tables on the fly during query
execution. The execution of a view is kind of similar to a
set-returning function or a subquery, almost as if you'd substituted
the view definition into the original query.
That means that the view will use any indexes on the original
table(s), but there isn't really even an opportunity to check for
indexes on the view itself because the view's definition is
effectively substituted into the query. If the view definition is
complex enough that it does a lot of work where indexes on the
original table(s) don't help, that work has to be done every time.
and
What you CAN do is use triggers to maintain your own materialized
views as regular tables, and have indexes on the tables you maintain
using triggers. This is widely discussed on the mailing list and isn't
hard to do, though it's tricky to make updates perform well with some
kinds of materialized view query.
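To illustrate the substitution point with hypothetical names: since the view's definition is inlined into the query, an index on the base table's column is what serves queries against the view:

```sql
CREATE TABLE users (id int PRIMARY KEY, name text, active boolean);

CREATE VIEW active_users AS
    SELECT id, name FROM users WHERE active;

-- No index can be created on active_users itself, but this
-- base-table index can be used when the view is queried:
CREATE INDEX users_active_idx ON users (active);

SELECT * FROM active_users;  -- planner sees: SELECT ... FROM users WHERE active
```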

Inserts to indexed views

Greetings Overflowers,
Is there an SQL DBMS that allows me to create an indexed view in which I can insert new rows without modifying the original tables of the view? I will need to query this view after performing the in-view-only inserts. If the answer is no, what other methods can do the job? I simply want to merge a set of rows coming from another server with the set of rows in the created view, in a specific order, so I can run fast queries against the merged set (i.e. the indexed view) without having to persist the received set on disk. I am not sure an in-memory database would perform well as the merged sets grow ridiculously large.
What do you think guys?
Kind regards
Well, there's no supported way to do that, since the view has to be based on some table(s).
Besides that, indexed views are not meant to be used like that. You don't have to push data into the indexed view thinking that you will make data retrieval faster.
I suggest you keep your view just the way it is. And then have a staging table, with the proper indexes created on it, in which you insert the data coming from the external system.
The staging table should be truncated anytime you want to get rid of the data (so right before you're inserting new data). That should be done in a SNAPSHOT ISOLATION transaction, so your existing queries don't read dirty data, or deadlock.
Then you have two options:
Use a UNION ALL clause to merge the results from the view and the staging table when you want to retrieve your data.
If the staging table shouldn't be merged but inner joined, then perhaps you can integrate it into the indexed view.
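The first option might look like this at query time, with hypothetical view, table, and column names:

```sql
-- Combine the indexed view's rows with the freshly staged rows,
-- imposing the required order on the merged set.
SELECT key, value FROM my_indexed_view
UNION ALL
SELECT key, value FROM staging_table
ORDER BY key;
```

Since the staging table carries its own indexes, both branches of the UNION ALL can be served efficiently.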