When does PostgreSQL create a temporary table under the hood? - postgresql

What are the cases, apart from join operations, in which PostgreSQL creates temporary tables without being explicitly told to?

Never. A temporary table is a specific thing, and that specific thing is never created implicitly as far as I know. There are things which "can be thought of as" temporary tables, but they are not the same thing as temporary tables. They are analogies, not identities.
Many things can be backed by temporary files, but that also is not the same thing as a temporary table.

Related

Two names, or permanent alias, for the same Postgres table, and column -- during migration

How can I create a permanent alias for a table (and also a column) such that queries against either name work?
I'd like to do this to enable renaming tables in our software. Our migrations need to run against live clusters, thus there's a period where the software version will be older than the DB version. It has to work with the old names of the tables and columns.
I see that it's possible to create a view with rules for insert, update, and delete, which I think is fairly close, but I'm wondering if there is a simpler approach. This approach also doesn't work if I wish to simply rename a column in a table (that is, without having to rename the table at the same time).
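One simpler approach worth noting: since PostgreSQL 9.3, a simple view over a single table is automatically updatable, so a bare view can serve as the alias without hand-written rules or triggers. A sketch with hypothetical names (a table `clients` being renamed to `customers`):

```sql
-- Rename the table, then keep the old name working as a view.
ALTER TABLE clients RENAME TO customers;

-- A simple single-table view is auto-updatable (PostgreSQL 9.3+),
-- so INSERT/UPDATE/DELETE against the old name still work.
CREATE VIEW clients AS SELECT * FROM customers;

-- Renaming a column works the same way: expose the old column name
-- as an alias. Plain column renames keep the view auto-updatable;
-- expressions would not.
-- ALTER TABLE customers RENAME COLUMN fullname TO full_name;
-- CREATE VIEW clients AS
--     SELECT id, full_name AS fullname FROM customers;
```

Old software versions then read and write through the view until the migration window closes, at which point the view can be dropped.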

How to synchronise a foreign table with a local table?

I'm using the Oracle foreign data wrapper and would like to have local copies of some of my foreign tables. Is there another option than having materialized views and refreshing them manually?
Not really, unless you want to add functionality in Oracle:
If you add a trigger on the Oracle table that records all data modifications in another table, you could define a foreign table on that table. Then you can regularly run a function in PostgreSQL that takes the changes since you checked last time and applies them to a PostgreSQL table.
If you understand how “materialized view logs” work in Oracle (I don't, and I think the documentation doesn't say), you could define a foreign table on that and use it like above. That might be cheaper.
Both of these ideas would still require you to regularly run something in PostgreSQL, but it might be cheaper than copying the whole table. Perhaps (if you have the money) you could use Oracle Heterogeneous Services to modify a PostgreSQL table whenever something changes in an Oracle table.
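The trigger-log idea could look roughly like this on the PostgreSQL side. All names here are hypothetical: it assumes an Oracle trigger fills a `CHANGELOG` table with `(id, op, changed_at)` rows, that the oracle_fdw server is called `oracle_srv`, and that full rows can be re-read from a foreign table `foreign_source` into a local copy `local_copy`:

```sql
-- Foreign table on the hypothetical Oracle change-log table.
CREATE FOREIGN TABLE ora_changelog (
    id         integer,
    op         char(1),      -- 'I' insert, 'U' update, 'D' delete
    changed_at timestamp
) SERVER oracle_srv OPTIONS (table 'CHANGELOG');

-- Run this periodically with the timestamp of the previous sync.
CREATE OR REPLACE FUNCTION apply_changes(since timestamp)
RETURNS void LANGUAGE plpgsql AS $$
BEGIN
    -- Remove rows that were updated or deleted since the last sync ...
    DELETE FROM local_copy l
    USING ora_changelog c
    WHERE c.changed_at > since
      AND c.id = l.id;

    -- ... and re-fetch the current state of inserted/updated rows.
    INSERT INTO local_copy
    SELECT f.*
    FROM foreign_source f
    JOIN ora_changelog c ON c.id = f.id
    WHERE c.changed_at > since
      AND c.op <> 'D';
END;
$$;
```

This is only a sketch: a real version would also need to record the sync watermark somewhere and clean up the Oracle-side log.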

Dropping of temp tables in PostgreSQL?

Curious as to whether or not one should be dropping temp tables that are used strictly within the function they are declared in? I'm new to PostgreSQL and haven't found much information on the topic. I'm aware that in SQL Server this is taken care of automatically, but MS SQL and PostgreSQL certainly have their differences. What do you think is best practice in terms of dropping temp tables declared in functions, if it's necessary at all?
They are handled somewhat differently by MS SQL and Postgres. MS SQL treats local temp tables created in a stored procedure specially: they are dropped when the procedure completes. Postgres does not currently support GLOBAL temp tables (specifying GLOBAL in the CREATE statement is ignored):
Optionally, GLOBAL or LOCAL can be written before TEMPORARY or TEMP.
This presently makes no difference in PostgreSQL and is deprecated;
I would say "best practice" is not very applicable here. Leaving temp tables around for the duration of the session is OK (they will be dropped at its end). But often you would prefer ON COMMIT DROP, so the table is dropped when the transaction ends, not the session. While a long-lived session is comparatively harmless for Postgres, a long-lived transaction can be problematic for MVCC, locking and so on. Again, you might want to look into ways to avoid that.
To summarise: it is common practice to let temp tables persist until the end of the session, and more "normal" to let them persist until the end of the transaction. Postgres does not treat temp tables created in a function specially. Postgres does not have GLOBAL temp tables. Depending on the code you write and the environment you have, you might want to drop the temp table explicitly or leave it to be dropped automatically. Mind the particularities of session/transaction pooling here as well.
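For illustration, the transaction-scoped variant looks like this (a sketch with hypothetical table names):

```sql
BEGIN;

-- Dropped automatically when the transaction commits or rolls back;
-- no explicit DROP needed anywhere in the function or session.
CREATE TEMP TABLE tmp_sales ON COMMIT DROP AS
SELECT person, sum(amount) AS total
FROM sales
GROUP BY person;

SELECT * FROM tmp_sales;

COMMIT;  -- tmp_sales is gone at this point
```

Note that this only makes sense when the table is created inside an explicit transaction; with autocommit, the table would vanish immediately after the creating statement.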

report migration from SQL server to Oracle

I have a report in SQL server and I am migrating this to Oracle.
The approach I used in SQL Server is to load sum(sales) and person for a given month into temporary tables (hash tables) and join this table with other transaction tables to show the details. When it comes to Oracle, I am not sure I can use the same method, because hash tables (temporary tables in SQL Server) are specific to a session, so they shouldn't cause any problem with the output. Please advise if there is anything in Oracle that is analogous to that.
I came to know there are global temp tables in Oracle; do they work in the manner I mentioned above? Also:
If a user has no create/drop table privileges, can they still use global temp tables?
Please help me.
You'll have to show some code, or at least some pseudo-code of how your process runs, for anyone to help you. Having said that...
One thing that is different in Oracle compared to temporary tables in other databases is that you do not create them each time you need them. You create them once, and the data in the table persists either until you commit/rollback (transaction-based) or until you end your session (session-based global temporary tables). Also, the data in a temporary table is visible only to the session that inserted it.
If you are generating the output files once and you don't need that data later, then Global temporary tables would probably fit in cleanly, with some minor changes.
Since you do not create the temporary tables each time you use them, you don't need the create/drop privilege. All you need is the insert/select privilege. Select alone will not help, because you cannot read another session's data anyway.
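A minimal sketch of the Oracle side, with hypothetical table and column names matching the report described above:

```sql
-- Created once, e.g. during deployment, by a privileged user:
CREATE GLOBAL TEMPORARY TABLE monthly_sales (
    person VARCHAR2(100),
    sales  NUMBER
) ON COMMIT DELETE ROWS;  -- transaction-scoped; use
                          -- ON COMMIT PRESERVE ROWS for session scope

-- Each report run only inserts and joins; no CREATE/DROP needed:
INSERT INTO monthly_sales
SELECT person, SUM(sales)
FROM transactions
WHERE TRUNC(sale_date, 'MM') = DATE '2013-06-01'
GROUP BY person;

SELECT m.person, m.sales, t.detail_col
FROM monthly_sales m
JOIN transactions t ON t.person = m.person;
```

Each session sees only its own rows in `monthly_sales`, which matches the isolation you had with SQL Server `#temp` tables.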

PostgreSQL: update a schema when views from another schema depend on it

Here is my setup. I have two schemas: my_app and static_data. The latter is imported from a static dump. For the needs of my application logic, I made views that use the tables of static_data, and I stored them in the my_app schema.
It all works great. But I need to update the static_data schema with a new dump, and have my views use the new data. The problem is, whatever I do, my views will always reference the old schema!
I tried importing the new dump in a new schema, static_data_new, then trying to delete static_data and rename static_data_new to static_data. It doesn't work because my views depend on tables in static_data, therefore PostgreSQL won't let me delete it.
Then I tried setting search_path to static_data_new. But when I do that, the views still reference the old tables!
Is it possible to have views that reference tables using the search_path? Thanks.
Views are bound to the underlying objects. Renaming the object does not affect this link.
I see basically 3 different ways to deal with your problem:
DROP your views and re-CREATE them after you have your new table(s) in place. Simple and fast, as soon as you have your complete create script together. Don't forget to reset privileges, too. The recreate script may be tedious to compile, though.
Use table functions (functions RETURNING SETOF rows or RETURNING TABLE) instead of views. Thereby you get "late binding": the object names will be looked up in the system catalogs at execution time, not at creation time. It will be your responsibility that those objects can, in fact, be found.
The search_path can be pre-set per function or the search_path of the executing role will be effective for objects that are not explicitly schema-qualified. Detailed instructions and links in this related answer on SO.
Functions are basically like prepared statements and behave subtly different from views. Details in this related answer on dba.SE.
Take the TRUNCATE and INSERT route for the new data instead of DROP and CREATE. Then all references stay intact. Find a more detailed answer about that here.
If foreign keys reference your table, you have to use DELETE FROM tbl instead, or drop and re-create the foreign key constraints. It will be your responsibility that referential integrity can be restored, or the re-creation of the foreign key will fail.
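A minimal sketch of the table-function approach (option 2), assuming a hypothetical table `some_table(id integer, val text)` in static_data:

```sql
-- Instead of a view bound to static_data.some_table at creation time,
-- a set-returning function resolves the name at execution time:
CREATE OR REPLACE FUNCTION my_app.get_static()
RETURNS TABLE (id integer, val text)
LANGUAGE plpgsql AS $$
BEGIN
    -- The unqualified table name is looked up when the function runs,
    -- using the search_path set for the function below. After you
    -- rename static_data_new to static_data, the next call simply
    -- finds the new table.
    RETURN QUERY SELECT t.id, t.val FROM some_table t;
END;
$$ SET search_path = static_data, my_app;

-- Use it like a view:
-- SELECT * FROM my_app.get_static();
```

Unlike a view, the function does not create a dependency on the table, so dropping and re-importing the static_data schema works without touching my_app.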