Inserts to indexed views - merge

Greetings Overflowers,
Is there an SQL DBMS that allows me to create an indexed view into which I can insert new rows without modifying the original tables of the view? I will need to query this view after performing the in-view-only inserts. If the answer is no, what other methods can do the job? I simply want to merge a set of rows that comes from another server with the set of rows in the created view, in a specific order, so that I can perform fast queries against the merged set (i.e. the indexed view) without having to persist the received set to disk. I am also not sure whether an in-memory database would perform well as the merged sets grow ridiculously large.
What do you think guys?
Kind regards

Well, there's no supported way to do that, since the view has to be based on some table(s).
Besides that, indexed views are not meant to be used like that. You shouldn't push data into the indexed view in the hope of making data retrieval faster.
I suggest you keep your view just the way it is. And then have a staging table, with the proper indexes created on it, in which you insert the data coming from the external system.
The staging table should be truncated anytime you want to get rid of the data (so right before you're inserting new data). That should be done in a SNAPSHOT ISOLATION transaction, so your existing queries don't read dirty data, or deadlock.
Then you have two options:
Use a UNION ALL clause to merge the results from the view and the staging table when you want to retrieve your data (sketched below).
If the staging table shouldn't be merged, but inner joined, then perhaps you can integrate it into the indexed view.
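A minimal sketch of the first option, assuming SQL Server; the object and column names (dbo.BigIndexedView, dbo.Staging, Id, Payload) are hypothetical:
-- reload the staging table with the batch received from the other server
-- (requires ALLOW_SNAPSHOT_ISOLATION to be enabled on the database)
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
DELETE FROM dbo.Staging;                 -- or TRUNCATE, as suggested above
INSERT INTO dbo.Staging (Id, Payload)
VALUES (1, N'row received from the other server');  -- load however the rows actually arrive
COMMIT;

-- at query time, merge the two sets without persisting them together
SELECT Id, Payload
FROM dbo.BigIndexedView WITH (NOEXPAND)  -- read the view's own index directly
UNION ALL
SELECT Id, Payload
FROM dbo.Staging
ORDER BY Id;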

Related

Problem with linked materialized view when overwriting existing PostGIS table

Main question: I have several views depending on a PostgreSQL/PostGIS table and a final materialized view created by querying the other views. I need a fast and updatable final result (i.e. MV) to use in a QGIS project.
My aim is to update the starting table by overwriting it with (lots of) new values and, hopefully, have the views and materialized view updated as well. I use QGIS DB Manager to overwrite the existing table, but I get an error because the MV depends on it. If I delete the MV, overwrite the table and then recreate the MV, everything is OK, but I'd like to avoid manual operations as much as possible.
Is there a better way to reach my goal?
Another question: if I set a trigger to refresh an MV when I update/insert/delete values in a table, would it work even when overwriting the entire table with a new one?
Refreshing a materialized view runs the complete defining query, so that is a long running and heavy operation for a complicated query.
It is possible to launch REFRESH MATERIALIZED VIEW from a trigger (it had better be a FOR EACH STATEMENT trigger then), but that would make every data modification so slow that I don't think that is practically feasible.
One thing that might work is to implement something like a materialized view that refreshes immediately “by hand”:
create a regular table for the “materialized view” and fill it with data by running the query
on each of the underlying tables, define a row level trigger that modifies the materialized view in accordance with the changes that triggered it
This should work for views whose definition is simple enough; for complicated queries it will not be possible.
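For instance, a minimal PostgreSQL sketch of that two-step approach, assuming a hypothetical base table and a single-row aggregate "view" (PostgreSQL 11+ trigger syntax):
CREATE TABLE base (id int PRIMARY KEY, val numeric NOT NULL);

-- the hand-made "materialized view": a plain table filled from the defining query
CREATE TABLE base_totals AS
SELECT count(*) AS n, coalesce(sum(val), 0) AS total FROM base;

-- row-level trigger that applies each change to the summary immediately
CREATE FUNCTION base_totals_sync() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    UPDATE base_totals SET n = n + 1, total = total + NEW.val;
  ELSIF TG_OP = 'DELETE' THEN
    UPDATE base_totals SET n = n - 1, total = total - OLD.val;
  ELSE  -- UPDATE
    UPDATE base_totals SET total = total - OLD.val + NEW.val;
  END IF;
  RETURN NULL;
END;
$$;

CREATE TRIGGER base_totals_sync
AFTER INSERT OR UPDATE OR DELETE ON base
FOR EACH ROW EXECUTE FUNCTION base_totals_sync();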

Using views in postgresql to enable transparent replacement of backing tables

We have a view that aggregates from a backing table. The idea is to reduce CPU load by using a pre-aggregated table, and to periodically refresh it with the following:
CREATE TABLE new_backingtable AS ... ;  -- (fill it)
BEGIN;
DROP TABLE backingtable;
ALTER TABLE new_backingtable RENAME TO backingtable;
COMMIT;
while in production. The latency caused by the refresh interval is acceptable. Incremental updates are possible but not desirable.
Does anyone have a comment on this scheme?
Check out materialized views; they may suit your use case. They can be used to store query results at creation time and then be refreshed later.
A materialized view is a table that is physically stored on disk but is really just a view of other database tables. In PostgreSQL, as in many database systems, retrieving data from a traditional view actually executes the underlying query or queries that define that view.
https://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html
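A rough sketch with hypothetical column names (day, amount); REFRESH ... CONCURRENTLY needs PostgreSQL 9.4+ and a unique index, but it avoids blocking readers during the refresh:
CREATE MATERIALIZED VIEW daily_totals AS
SELECT day, sum(amount) AS total
FROM backingtable
GROUP BY day;

CREATE UNIQUE INDEX ON daily_totals (day);

-- run this periodically instead of the drop-and-rename dance
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_totals;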

An alternative design to insert/update in Talend

I have a requirement in Talend wherein I have to update/insert rows from the source table to the destination table. The source and destination tables are identical. The source gets refreshed by a business process, and I need to update/insert these results in the destination table.
I had designed for the 'insert or update' in tmap and tmysqloutput. However, the job turns out to be super slow.
As an alternative to the above solution, I am trying to design the insert and update separately. In order to do this, I want to hash the source rows, as the number of rows would usually be small.
So, my question: I will hash the input rows, but when I join them with the destination rows in tmap, should I hash the destination rows as well? Or should I use the destination rows as they are and then join them?
Any suggestions on the job design here?
Thanks
Rathi
If you are using the same database, you should not use ETL loading techniques but ELT loading, so that all processing happens in the database. Talend offers a few ELT components which are a bit different to use but very helpful for this case. I've had jobs speed up by multiple orders of magnitude using only those components.
It is still a good idea to use an indexed hashed field both in the source and the target, which is done the same way as when loading Satellites in the Data Vault 2.0 model.
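As a rough illustration of the kind of statement the ELT components push down to the database (MySQL syntax; the table and column names and the row_hash field are hypothetical, and a primary or unique key on id is assumed), a single set-based upsert like this replaces the row-by-row lookup in tmap:
INSERT INTO destination (id, col_a, col_b, row_hash)
SELECT id, col_a, col_b, row_hash
FROM source
ON DUPLICATE KEY UPDATE
  col_a    = VALUES(col_a),
  col_b    = VALUES(col_b),
  row_hash = VALUES(row_hash);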
Alternatively, if you have direct access to the source table database, you could consider adding triggers for C(R)UD scenarios. Doing this, every action on the source database could be reflected in your database immediately. Remember though that you might need to think about a buffer table ("staging") where you could store your changes so that you are able to ingest fast, process later. In this table only the changed rows and the change type (create, update, delete) would be present for you to process. This decouples loading and processing which can be helpful if there will be a problem with loading or processing later on.
Yes, I believe you should use the hash component for the destination table as well, because then your lookup processing will be very fast, since it happens in memory.
If not, the lookup load may take more time.

How much space do VIEWS occupy in PostgreSQL?

If I store my query results as views, does that take more of my memory than a table with the query results?
Another question about views: can I run a new query based on the results of a query that is stored as a view?
Views don't store query results, they store queries.
Some RDBMSs provide a way to store query results (for some queries): these are called materialized views in Oracle and indexed views in SQL Server.
PostgreSQL does not support those (though, as @CalvinCheng mentioned, you can emulate them using triggers or rules).
Yes, you can use views in your queries. However, a view is just a convenient way to refer to a complex query by name, not a way to store its results.
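A tiny PostgreSQL sketch of both points, using hypothetical names:
CREATE TABLE orders (id int PRIMARY KEY, customer text, amount numeric);

-- the view stores only this query, not its result rows
CREATE VIEW big_orders AS
SELECT id, customer, amount FROM orders WHERE amount > 100;

-- a view can be referenced from other queries (or other views) like a table;
-- its defining query is re-executed each time
SELECT customer, sum(amount) AS total
FROM big_orders
GROUP BY customer;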
For Question 1
To answer your first question, you cannot store your query results as views but you can achieve a similar functionality using PostgreSQL's trigger feature.
PostgreSQL supports creation of views natively but not the creation of materialized views (views that store your results) - but this can be handled using triggers. See http://wiki.postgresql.org/wiki/Materialized_Views
Views do not take up RAM ("memory").
For Question 2
And to answer the second question: to update a view in PostgreSQL, you will need to use CREATE RULE - http://www.postgresql.org/docs/devel/static/sql-createrule.html
CREATE RULE defines a new rule applying to a specified table or view. CREATE OR REPLACE RULE will either create a new rule, or replace an existing rule of the same name for the same table.
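As a hedged illustration of the rule-based approach the quote describes (hypothetical names; note that since 9.3 simple views like this one are auto-updatable even without a rule):
CREATE TABLE accounts (id int PRIMARY KEY, balance numeric NOT NULL);
CREATE VIEW account_balances AS SELECT id, balance FROM accounts;

-- rewrite updates against the view into updates on the base table
CREATE RULE account_balances_upd AS
ON UPDATE TO account_balances DO INSTEAD
UPDATE accounts SET balance = NEW.balance WHERE id = OLD.id;

UPDATE account_balances SET balance = 0 WHERE id = 1;  -- now hits accounts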
I would like to point out that as of Postgres 9.3, Materialized Views are supported.
A PostgreSQL view is a saved query. Once created, selecting from a view is exactly the same as running the original query; it re-executes the query each time. So views do not take up memory.
You cannot store your query results as views, since views are just queries, but you can achieve similar functionality using materialized views. Materialized views are only updated on demand, and the whole materialized view must be refreshed; there is no way to update only a single stale row.
So in that case you have to eagerly update the view whenever a change occurs that would invalidate a row. That can be done with triggers.

Most efficient way of bulk loading unnormalized dataset into PostgreSQL?

I have loaded a huge CSV dataset (Eclipse's Filtered Usage Data) using PostgreSQL's COPY, and it's taking a huge amount of space because it's not normalized: three of the TEXT columns would be much more efficiently refactored into separate tables, referenced from the main table with foreign key columns.
My question is: is it faster to refactor the database after loading all the data, or to create the intended tables with all the constraints, and then load the data? The former involves repeatedly scanning a huge table (close to 10^9 rows), while the latter would involve doing multiple queries per CSV row (e.g. has this action type been seen before? If not, add it to the actions table, get its ID, create a row in the main table with the correct action ID, etc.).
Right now each refactoring step is taking roughly a day or so, and the initial loading also takes about the same time.
In my experience you want to get all the data you care about into a staging table in the database and go from there; after that, do as much set-based logic as you can, most likely via stored procedures. When you load into the staging table, don't have any indexes on it. Create the indexes after the data is loaded into the table.
Check out this link for some tips: http://www.postgresql.org/docs/9.0/interactive/populate.html
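For example, a rough set-based sketch of the "refactor after loading" approach, with hypothetical file path, table and column names:
-- 1. bulk-load the raw CSV into an unindexed staging table
CREATE TABLE staging_events (event_time timestamptz, action text);
COPY staging_events FROM '/path/to/usage_data.csv' WITH (FORMAT csv);

-- 2. build a lookup table from the distinct values of one TEXT column
CREATE TABLE actions (id serial PRIMARY KEY, name text UNIQUE NOT NULL);
INSERT INTO actions (name) SELECT DISTINCT action FROM staging_events;

-- 3. build the normalized main table in a single set-based pass
CREATE TABLE events AS
SELECT s.event_time, a.id AS action_id
FROM staging_events s
JOIN actions a ON a.name = s.action;

-- 4. add indexes and constraints only after the data is in place
CREATE INDEX ON events (action_id);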