Report migration from SQL Server to Oracle - oracle10g

I have a report in SQL Server and I am migrating it to Oracle.
The approach I used in SQL Server is to load sum(sales) and person for a given month into temporary tables (hash tables) and then join those tables with other transaction tables to show the details. When it comes to Oracle, I am not sure I can use the same method, because hash tables (temporary tables in SQL Server) are specific to a session, so concurrent runs do not interfere with each other's output. Please advise if there is anything in Oracle that is analogous to that.
I came to know there are global temporary tables in Oracle. Do they work in the manner I mentioned above?
Also, if a user has no create/drop table privileges, can they still use global temporary tables?
Please help me.

You'll have to show some code, or at least some pseudo-code of how your process runs, for anyone to help you. Having said that...
One thing that is different in Oracle compared to temporary tables in other databases is that you do not create them each time you need them. You create them once, and the data in the table persists either until you commit/rollback (transaction-based) or until you end your session (session-based global temporary tables). Also, the data in a temporary table is visible only to the session that inserted it.
If you are generating the output files once and you don't need that data later, then global temporary tables would probably fit in cleanly, with some minor changes.
Since you do not create the temporary tables each time you use them, you don't need the create/drop privilege. All you need is the insert/select privilege. Select alone will not help, because you cannot read another session's data anyway.
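As an illustration, here is a minimal sketch of how such a table might be defined and used; the table, column, and bind-variable names are made up for the example:

CREATE GLOBAL TEMPORARY TABLE monthly_sales_gtt (
    person      VARCHAR2(100),
    total_sales NUMBER
) ON COMMIT PRESERVE ROWS;  -- session-based; use ON COMMIT DELETE ROWS for transaction-based

-- Created once (typically by a DBA); each report session then only
-- needs INSERT/SELECT and sees only its own rows:
INSERT INTO monthly_sales_gtt (person, total_sales)
SELECT person, SUM(sales)
FROM   sales_transactions
WHERE  sale_month = :given_month
GROUP  BY person;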

Related

How to replicate rows into different tables of different database in postgresql?

I use PostgreSQL. I have many databases on a server. There is one database which I use the most, say 'main'. This 'main' has many tables inside it, and the other databases also have many tables inside them.
What I want to do is: whenever a new row is inserted into the 'main.users' table, I wish to insert the same data into the 'users' table of the other databases. How shall I do this in PostgreSQL? Similarly, I wish to do the same for all actions like UPDATE, DELETE, etc.
I have gone through the "logical replication" concept as suggested by you. In my case I know the source db name up front, and I will come to know the target db name as part of the query, so it is going to be dynamic.
How can I achieve this? Is there any database concept available in PostgreSQL for it? I also welcome all other possible ways. Please share some ideas on this.
If this is all on the same Postgres instance (aka "cluster"), then I would recommend using a foreign table to access the tables from the "main" database in the other databases.
Those foreign tables look like "local" tables inside each database, but access the original data in the source database directly, so there is no need to synchronize anything.
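A minimal sketch of that setup, assuming the postgres_fdw extension and a users(id, name) table in 'main'; the server name, connection options, and credentials are all illustrative:

CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER main_srv
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'main');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER main_srv
    OPTIONS (user 'app_user', password 'secret');

-- Looks like a local table, but reads and writes main's users table directly:
CREATE FOREIGN TABLE users (
    id   integer,
    name text
) SERVER main_srv OPTIONS (schema_name 'public', table_name 'users');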
Upgrade to a recent PostgreSQL release and use logical replication.
Add a trigger on the table in the master database that uses dblink to connect to and write to the other databases.
Be sure to consider what should be done if the row already exists remotely, or if the remote server is unreachable.
Also note that updates propagated using dblink are not rolled back if the invoking transaction is rolled back.
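A hedged sketch of such a trigger, assuming the dblink extension and a target database named 'other_db'; the connection string, function, table, and column names are illustrative:

CREATE EXTENSION IF NOT EXISTS dblink;

CREATE OR REPLACE FUNCTION replicate_user_insert() RETURNS trigger AS $$
BEGIN
    -- Push the new row to the other database. This is not transactional
    -- with the local insert (see the caveat above).
    PERFORM dblink_exec(
        'dbname=other_db',
        format('INSERT INTO users (id, name) VALUES (%s, %L)',
               NEW.id, NEW.name)
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_replicate
    AFTER INSERT ON users
    FOR EACH ROW EXECUTE FUNCTION replicate_user_insert();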

Dropping of temp tables in PostgreSQL?

Curious as to whether or not one should drop temp tables that are used strictly within the function they are declared in? I'm new to PostgreSQL and haven't found much information on the topic. I'm aware that in MS SQL this is taken care of automatically, but MS SQL and PostgreSQL certainly have their differences. What do you think is best practice in terms of dropping temp tables declared in functions, if it is necessary at all?
They are somewhat different for MS SQL and Postgres. MS SQL treats local temp tables created in a stored procedure specially: they are dropped on completion of the procedure. Postgres does not currently support GLOBAL temp tables (specifying GLOBAL in the CREATE statement is ignored):
Optionally, GLOBAL or LOCAL can be written before TEMPORARY or TEMP.
This presently makes no difference in PostgreSQL and is deprecated;
The notion of "best practice" is not very applicable here, I would say. Leaving temp tables around for the duration of the session is OK (they will be dropped at the end). But often you would prefer ON COMMIT DROP, so the table is dropped when the transaction ends rather than the session. While an endless session is comparatively OK for Postgres, an endless transaction can be problematic for MVCC, locking, and so on.
To summarise: it is common practice to let temp tables persist until the end of the session, and more "normal" to let them persist until the end of the transaction. Postgres does not treat temp tables created in a function specially. Postgres does not have GLOBAL temp tables. Depending on the code you write and the environment you have, you may want to drop the table explicitly or leave it to be dropped automatically. Mind session/transaction pooling particularities here as well.
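For illustration, a minimal sketch of a transaction-scoped temp table; the table and column names are made up:

BEGIN;

CREATE TEMPORARY TABLE tmp_report (
    id    integer,
    total numeric
) ON COMMIT DROP;

-- ... populate and query tmp_report here ...

COMMIT;  -- tmp_report is dropped automatically at this point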

db2look from SQL

Is it possible to get the table structure like db2look from SQL?
Or is the only way from the command line? By wrapping db2look in an external stored procedure written in C I could call it, but that is not what I am looking for.
Clarification added later:
I want to know, from SQL, which tables have the non-logged option.
It is possible to reconstruct the table structure from regular SQL and the public DB2 catalog - however, it is complex and requires some deeper skills.
The metadata is available in the DB2 catalog views in the SYSCAT schema. For a regular table you would start by looking at the values in SYSCAT.TABLES and SYSCAT.COLUMNS. From there you would need to branch off to other views depending on which table and column options you are after, whether time-travel tables, special partitioning rules, or many other options are involved.
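As a hedged starting point, a catalog query of this kind pulls the column definitions for one table; the schema and table names are placeholders:

SELECT colname, typename, length, scale, nulls
FROM   syscat.columns
WHERE  tabschema = 'MYSCHEMA'
  AND  tabname   = 'MYTABLE'
ORDER  BY colno;

Reconstructing complete DDL then means layering in SYSCAT.TABLES and the key, index, and constraint views on top of this, which is where the complexity comes in.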
Serge Rielau published an article on developerWorks called "Backup and restore SQL schemas for DB2 Universal Database" that provides a set of stored procedures that will do exactly what you're looking for.
The article is quite old (2006), so you may need to put some time into updating the procedures to handle features that were added to DB2 since publication, but they may work for you as-is and are a nice jumping-off point.

libpq code to create, list and delete databases (C++/VC++, PostgreSQL)

I am new to the PostgreSQL database. What my Visual C++ application needs to do is create multiple tables and add/retrieve data from them.
Each session of my application should create a new and distinct database. I can use the current date and time for a unique database name.
There should also be an option to delete all the databases.
I have worked out how to connect to a database, create tables, and add data to tables. I am not sure how to create a new database for each run, or how to retrieve the number and names of the databases if the user wants to clear them all.
Please help.
See the libpq examples in the documentation. The example program shows you how to list databases, and in general how to execute commands against the database. The example code there is trivial to adapt to creating and dropping databases.
Creating a database is a simple CREATE DATABASE SQL statement, same as any other libpq operation. You must connect to a temporary database (usually template1) to issue the CREATE DATABASE, then disconnect and make a new connection to the database you just created.
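For example, the SQL side of that sequence looks like this; the timestamped database name is just the kind of name the question describes:

-- While connected to template1:
CREATE DATABASE session_20240101_1200;

-- Listing the non-template databases (for the "clear all" option):
SELECT datname FROM pg_database WHERE NOT datistemplate;

-- Dropping one (cannot be run inside a transaction block):
DROP DATABASE session_20240101_1200;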
Rather than creating new databases, consider creating new schemas instead. Much less hassle, since all you need to do is change the search_path or prefix your table references; you don't have to disconnect and reconnect to change schemas. See the documentation on schemas.
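A minimal sketch of that schema-per-session alternative; the schema name is again illustrative:

-- One schema per application run:
CREATE SCHEMA session_20240101_1200;
SET search_path TO session_20240101_1200, public;

-- ... create tables and work as usual ...

-- "Clear all" becomes dropping schemas instead of whole databases:
DROP SCHEMA session_20240101_1200 CASCADE;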
I question the wisdom of your design, though. It is rarely a good idea for applications to be creating and dropping databases (or tables, except temporary tables) as a normal part of their operation. Maybe if you elaborated on why you want to do this, we can come up with solutions that may be easier and/or perform better than your current approach.

Replicating data between Postgres DBs

I have a Postgres DB that is used by a chat application. The chat system often truncates its tables when they grow too big, but I need this data copied to another Postgres database. I will not be truncating the tables in that DB.
How can I configure a few tables on the chat system's database to replicate data to another Postgres database? Is there a quick way to accomplish this?
Slony can replicate only select tables, but I'm not sure how it handles truncates, and it can be a pain to configure.
You might also use something like pgpool to send copies of the insert statements to a second database.
You might modify the source of your chat application to do two writes (one to each db) when a new record is created.
You could just write a script in Perl/PHP/Python to read from one database and write to the other, then fire it via cron so that you're sure it runs before truncation.
If you only copy a batch of rows every other day, you may be better off with a plain INSERT into a different schema in the same database, or into a different database in the same database cluster (you need something like dblink for that).
The safest / fastest solution in the same database would be a data-modifying CTE. Something along these lines:
WITH del AS (
    DELETE FROM tbl
    WHERE  <some condition>
    RETURNING *
)
INSERT INTO backup.tbl
SELECT * FROM del;
For true replication consider these official sources:
https://wiki.postgresql.org/wiki/Replication,_Clustering,_and_Connection_Pooling
https://www.postgresql.org/docs/current/runtime-config-replication.html